What’s New#
You can check out the development roadmap on GitHub
0.9.3 (Sep ‘25)#
New#
All model classes gain the following new private attributes for convenience:
- `._init_kwargs` to track initialization arguments passed to `lm()`, `lmer()`, etc. in R
- `._REML` to track whether REML or ML estimation was used (`lmer` models only)
- `._report` the string printed by the `.report()` method (which returns `None`)
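A minimal sketch of these attributes in use; the import path, `.fit()` call, and toy data are assumptions, while the attribute names and meanings are those listed above.

```python
import polars as pl
from pymer4.models import lmer  # import path is an assumption

# Toy data standing in for a real dataset
df = pl.DataFrame({
    "y": [1.2, 2.4, 0.9, 1.8, 2.1, 3.0, 1.5, 2.2],
    "x": [1.0, 2.0, 1.0, 2.0, 3.0, 4.0, 2.0, 3.0],
    "group": ["a", "a", "b", "b", "c", "c", "d", "d"],
})

model = lmer("y ~ x + (1|group)", data=df)
model.fit()

print(model._init_kwargs)  # keyword arguments passed at initialization
print(model._REML)         # whether REML (lme4's default) or ML was used
model.report()             # prints a summary and returns None
print(model._report)       # the same string, stored for later access
```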
Fixes#
#153 all models now properly support R keyword arguments at initialization time, which is most useful for controlling REML vs ML estimation in `lme4` models, i.e. `lmer('y~x + (1|x)', data=data, REML=False)`
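Continuing the sketch from the 0.9.3 "New" section above, the fix amounts to forwarding the R keyword argument at construction; the formula and data remain placeholders.

```python
# ML estimation instead of lme4's default REML, requested at initialization
ml_model = lmer("y ~ x + (1|group)", data=df, REML=False)
ml_model.fit()
print(ml_model._REML)  # False
```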
0.9.2 (May ‘25)#
New#
`skmer()` class that acts as a `scikit-learn` Estimator. See the API for usage examples
Models have an additional `.result_fit_stats` from `performance::model_performance()`, including variance partitioning of multi-level models (see the sketch below)
Added tutorial on installing `pymer4` with `pixi` as an alternative to `conda`
Added tutorial on similarities and differences when coming from R
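A short sketch of the new fit statistics, reusing the toy dataframe and `lmer` import from the 0.9.3 sketch above; the attribute name comes from the note above, everything else is a placeholder.

```python
model = lmer("y ~ x + (1|group)", data=df)
model.fit()
# Output of performance::model_performance(), including variance partitioning
# for multi-level models
print(model.result_fit_stats)
```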
0.9.1 (May ‘25)#
Fixes#
make `compare()` more robust by automatically refitting all models if any are not fit or if any have been fit via bootstrapping
Development#
Move `expand_grid()` to `tidystats.plutils`
Move all dependencies from `pip` to `conda-forge` anticipating `conda-forge` availability
0.9.0 (May ‘25)#
This release is a major overhaul of pymer4 that involved a near-complete rewrite of internals to facilitate future maintenance and integrate more advanced functionality from additional R libraries. This is a backwards-incompatible release with numerous dependency and API breaking changes. You can check out the migration guide for more details. The 0.8.x version line is still available on GitHub for community contributions/maintenance if desired.
As of this version, pymer4 is now only installable via conda following the instructions here using the ejolly channel. We expect to move to the conda-forge channel soon.
Summary of changes#
- Adoption of `polars` as dataframe “backend”
- New consistent API that supports `lm`, `glm`, `lmer` and `glmer` models
- Full support for factors and marginal-estimates/comparisons for all model types
- Overhauled docs and tutorials with several included datasets for demo and teaching purposes
- Much more extensive testing
- Replaced bespoke code with “battle-tested” implementations of helper and utility functions in different R libraries (e.g. `broom`, `insight`, `parameters`)
- Switched `setup.py` and `requirements.txt`/`requirements-dev.txt` to `pyproject.toml`
- Switched over to Pixi.sh for development tooling and GitHub Actions
- Switched documentation from `sphinx` to `jupyterbook`
- Switched project linting from `black` to `ruff`
- Exclusively create `noarch` builds for conda. Currently implemented as a `pixi task` using `conda build`, with plans to switch to `pixi build` when ready
Encompassed Fixes#
Planned features#
The following features are planned for upcoming versions in the 0.9.x line
support for `lmerControl` options
simulation and power modules
0.8.2#
Fixes#
Fix for a `LogisticRegression` API name change
0.8.1#
Compatibility Updates#
This version includes a `noarch` build that should be installable on arm-based macOS platforms (e.g. M1, M2, etc.)
This version drops support for Python 3.7 and adds support for 3.9-3.11
Breaking changes#
This version also uses `joblib` for model saving and loading and drops support for HDF5 files previously handled with the `deepdish` library, as it is no longer actively maintained. This means that 0.8.1 will not be able to load models saved with earlier versions of `pymer4`!
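A minimal save/load sketch under the new `joblib`-backed format; the `save_model`/`load_model` helpers come from the `pymer4.io` module introduced in 0.7.0, but the exact signatures and the `.joblib` file extension are assumptions.

```python
# 'model' is any fitted pymer4 model; files written here cannot be read by
# pre-0.8.1 versions, and old deepdish/HDF5 files cannot be read by 0.8.1+
from pymer4.io import save_model, load_model

save_model(model, "my_model.joblib")
restored = load_model("my_model.joblib")
```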
Fixes#
0.8.0#
NOTE#
there was no 0.7.9 release as there were enough major changes to warrant a new minor release version
this version unpins the maximum versions of `rpy2` and `pandas`
if there are install issues with the `conda` release accompanying this version, you should be able to successfully install into a conda environment using pip with the following: `conda install 'r-lmerTest' 'r-emmeans' rpy2 -c conda-forge` followed by `pip install pymer4`
Bug fixes#
New features#
`Lm` models now support `family='binomial'` and use the `LogisticRegression` class from scikit-learn with no regularization for estimation. Estimates and errors have been verified against the `glm` implementation in R
new `lrt` function for estimating likelihood-ratio tests between `Lmer` models thanks to @dramanica. This replicates the functionality of `anova()` in R for `lmer` models (see the sketch below)
new `.confint()` method for `Lmer` models thanks to @dramanica. This allows computing confidence intervals on 1 or more parameters of an already fit model, including random effects, which are not computed by default when calling `.fit()`
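A hedged sketch of the two `Lmer` additions; the toy data and formulas are placeholders, the `lrt` import path and list-of-models call signature are assumptions, and `.confint()` is called with its defaults.

```python
import pandas as pd
from pymer4.models import Lmer
from pymer4.stats import lrt  # import path is an assumption

df = pd.DataFrame({
    "y": [1.2, 2.4, 0.9, 1.8, 2.1, 3.0, 1.5, 2.2],
    "x": [1.0, 2.0, 1.0, 2.0, 3.0, 4.0, 2.0, 3.0],
    "group": ["a", "a", "b", "b", "c", "c", "d", "d"],
})

full = Lmer("y ~ x + (1|group)", data=df)
reduced = Lmer("y ~ 1 + (1|group)", data=df)
full.fit()
reduced.fit()

print(lrt([reduced, full]))  # analogous to anova() on lmer models in R
print(full.confint())        # CIs for parameters of the already-fit model
```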
0.7.8#
Maintenance release that pins `rpy2 >= 3.4.5, < 3.5.1` due to an R-to-Python dataframe conversion issue on recent `rpy2` versions that causes a recursion error. Pending code changes to support `rpy2 >= 3.5.1` are tracked on this development branch. Upcoming releases will drop support for `rpy2 < 3.5.X`
Clearer error message when making circular predictions using `Lmer` models
0.7.7#
This version is identical to 0.7.6 but supports `R >= 4.1`
Installation is also more flexible and includes instructions for using `conda-forge` and optimized libraries (MKL) for Intel CPUs
0.7.6#
**Bug fixes:**

- fixes an issue in which a `Lmer` model fit using categorical predictors would be unable to use `.predict` or would return fitted values instead of predictions on new data. Thanks to Mario Leaonardo Salinas for discovering this issue
0.7.5#
This version is identical to 0.7.4 and simply exists because of a naming conflict that resulted in a failed release to Anaconda Cloud. See release notes for 0.7.4 below
0.7.4#
**Compatibility updates:**
- This version drops official support for Python 3.6 and adds support for Python 3.9. While 3.6 should still work for the most part, development support and testing against this version of Python will no longer continue moving forward.
**New features:**
- `utils.result_to_table` function nicely formats the `model.coefs` output for a fitted model. The docstring also contains instructions on using this in conjunction with the [gspread-pandas](https://github.com/aiguofer/gspread-pandas) library for "exporting" model results to a Google Sheet
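A hedged sketch of formatting results; whether `result_to_table` takes the fitted model itself or `model.coefs` is an assumption here, so check the docstring for the supported signature. The data are placeholders.

```python
import pandas as pd
from pymer4.models import Lm
from pymer4.utils import result_to_table

df = pd.DataFrame({"y": [1.0, 2.0, 1.5, 2.5, 3.0], "x": [1, 2, 3, 4, 5]})
model = Lm("y ~ x", data=df)
model.fit()

# Nicely formatted version of model.coefs, e.g. for export to a spreadsheet
print(result_to_table(model))
```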
0.7.3#
**Bug fixes:**
- fix issue in which random effect and fixed effect index names were lost. Thanks to [@jcheong0428](https://github.com/jcheong0428) and [@Shotgunosine](https://github.com/Shotgunosine) for the quick PRs!
0.7.2#
**Bug fixes:**
- fix bug in which `boot_func` would fail with `y=None` and `paired=False`
**Compatibility updates:**
- add support for `rpy2>=3.4.3` which handles model matrices differently
- pin maximum `pandas<1.2`. This is necessary until our other dependency `deepdish` adds support. See [this issue](https://github.com/uchicago-cs/deepdish/issues/45)
0.7.1#
**Pymer4 will be on conda as of this release!**
- install with `conda install -c ejolly -c defaults -c conda-forge pymer4`
- This should make installation much easier
- Big thanks to [Tom Urbach](https://turbach.github.io/toms_kutaslab_website/) for assisting with this!
**Bug fixes:**
- design matrix now handles rfx-only models properly
- compatibility with the latest version of pandas and rpy2 (as of 08/20)
- `Lmer.residuals` now saved as a numpy array rather than an `R FloatVector`
**New features:**
- `stats.tost_equivalence` now takes a `seed` argument for reproducibility
**Result Altering Change:**
- Custom contrasts in `Lmer` models are now expected to be specified in *human readable* format. This should be more intuitive for most users and is often what users expect from R itself, even though that's not what it actually does! R expects custom contrasts passed to the `contrasts()` function to be the *inverse* of the desired contrasts. See [this vignette](https://rstudio-pubs-static.s3.amazonaws.com/65059_586f394d8eb84f84b1baaf56ffb6b47f.html) for more info.
- In `Pymer4`, specifying the following contrasts: `model.fit(factors={"Col1": {'A': 1, 'B': -.5, 'C': -.5}})` will estimate the difference between A and the mean of B and C, as one would expect. Behind the scenes, `Pymer4` performs the inversion operation automatically for R (see the sketch below).
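A short sketch of the human-readable convention described above; the formula, dataframe, and column names are placeholders.

```python
import pandas as pd
from pymer4.models import Lmer

# Placeholder data with a 3-level factor Col1 and a grouping variable
df = pd.DataFrame({
    "DV": [1.0, 2.0, 1.5, 2.5, 3.0, 2.2, 1.1, 2.8, 1.9],
    "Col1": ["A", "B", "C"] * 3,
    "Group": ["g1", "g1", "g1", "g2", "g2", "g2", "g3", "g3", "g3"],
})

model = Lmer("DV ~ Col1 + (1|Group)", data=df)

# Specify the comparison you actually want: A vs. the mean of B and C.
# Pymer4 inverts this matrix for R's contrasts() behind the scenes.
model.fit(factors={"Col1": {"A": 1, "B": -0.5, "C": -0.5}})
```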
Lots of other devops changes to make testing, bug-fixing, development, future releases and overall maintenance much easier. Much of this work has been off-loaded to automated testing and deployment via Travis CI.
0.7.0#
dropped support for versions of `rpy2 < 3.0`
Result Altering Change: `Lm` standard errors are now computed using the square root of the adjusted mean squared error (`np.sqrt(res.T.dot(res) / (X.shape[0] - X.shape[1]))`) rather than the standard deviation of the residuals with DOF adjustment (`np.std(res, axis=0, ddof=X.shape[1])`). While these produce the same results if an intercept is included in the model, they differ slightly when an intercept is not included. Formerly, in the no-intercept case, results from pymer4 would differ slightly from R or statsmodels. This change ensures the results are always identical in all cases (see the sketch below)
Result Altering Change: `Lm` rsquared and adjusted rsquared now take into account whether an intercept is included in the model estimation and adjust accordingly. This is consistent with the behavior of R and statsmodels
Result Altering Change: hc1 is the new default robust estimator for `Lm` models, changed from hc0
API change: all model residuals are now saved in the `model.residuals` attribute and were formerly saved in the `model.resid` attribute. This is to maintain consistency with `model.data` column names
New feature: addition of `pymer4.stats` module for various parametric and non-parametric statistics functions (e.g. permutation testing and bootstrapping)
New feature: addition of `pymer4.io` module for saving and loading models to disk
New feature: addition of `Lm2` models that can perform multi-level modeling by first estimating a separate regression for each group and then performing inference on those estimates. Can perform inference on first-level semi-partial and partial correlation coefficients instead of betas too
New feature: all model classes now have the ability to rank-transform data prior to estimation; see the `rank` argument of their respective `.fit()` methods
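A small numerical illustration of the two standard-error formulas quoted above, using made-up data: with an intercept in the design matrix the two agree, without one they can differ.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])  # design with intercept
y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
res = y - X @ beta

# Current approach: square root of the adjusted mean squared error
sigma_mse = np.sqrt(res.T.dot(res) / (X.shape[0] - X.shape[1]))
# Former approach: residual standard deviation with a DOF adjustment
sigma_std = np.std(res, axis=0, ddof=X.shape[1])

print(sigma_mse, sigma_std)  # identical here because the intercept makes mean(res) == 0
```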
**New features for Lm models:**
- `Lm` models can transform coefficients to partial or semi-partial correlation coefficients
- `Lm` models can also perform weighted-least-squares (WLS) regression given the `weights` argument to `.fit()`, with optional dof correction via Satterthwaite approximation. This is useful for categorical (e.g. group) comparisons where one does not want to assume equal variance between groups (e.g. Welch's t-test). This remains an experimental feature (see the sketch below)
- `Lm` models can compute hc1 and hc2 robust standard errors
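A hedged sketch of the WLS feature; the data are placeholders, and passing a grouping column name as `weights` (for a Welch-style unequal-variance comparison) is an assumption beyond what the note above states.

```python
import pandas as pd
from pymer4.models import Lm

df = pd.DataFrame({
    "y": [1.2, 2.4, 0.9, 1.8, 2.1, 3.0, 1.5, 2.2],
    "group": ["a", "a", "a", "a", "b", "b", "b", "b"],
})

model = Lm("y ~ group", data=df)
model.fit(weights="group")  # WLS without assuming equal variance between groups
```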
New documentation look: the look and feel of the docs site has been completely changed which should make getting information much more accessible. Additionally, overview pages have now been turned into downloadable tutorial jupyter notebooks
All methods/functions capable of parallelization now have their default `n_jobs` set to 1 (i.e. no default parallelization)
Various bug fixes to all models
Automated testing on Travis now pins specific R and R package versions
Switched from lsmeans to emmeans for post-hoc tests because lsmeans is deprecated
Updated interactions with rpy2 api for compatibility with version 3 and higher
Refactored package layout for easier maintainability
0.6.0#
Dropped support for Python 2
upgraded `rpy2` dependency version
Added conda installation instructions
Accepted JOSS version
0.5.0#
`Lmer` models now support all generalized linear model family types supported by lme4 (e.g. poisson, gamma, etc.); see the sketch below
`Lmer` models now support ANOVA tables with support for auto-orthogonalizing factors using the `.anova()` method
Test statistic inference for `Lmer` models can now be performed via non-parametric permutation tests that shuffle observations within clusters
`Lmer.fit(factors={})` arguments now support custom arbitrary contrasts
New forest plots for visualizing model estimates and confidence intervals via the `Lmer.plot_summary()` method
More comprehensive documentation with examples of new features
Submission to JOSS
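A hedged sketch touching the 0.5.0 additions; the data, formula, and family value are placeholders, and passing `family` to the `Lmer` constructor is an assumption.

```python
import pandas as pd
from pymer4.models import Lmer

df = pd.DataFrame({
    "count": [3, 5, 2, 7, 4, 6, 1, 8],
    "condition": ["a", "b", "a", "b", "a", "b", "a", "b"],
    "subject": ["s1", "s1", "s2", "s2", "s3", "s3", "s4", "s4"],
})

model = Lmer("count ~ condition + (1|subject)", data=df, family="poisson")
model.fit()
print(model.anova())  # ANOVA table with auto-orthogonalized factors
model.plot_summary()  # forest plot of estimates and confidence intervals
```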
0.4.0#
Added `.post_hoc()` method to `Lmer` models
Added `.simulate()` method to `Lmer` models
Several bug fixes for Python 3 compatibility
0.3.2#
addition of `simulate` module
0.2.2#
Official PyPI release
0.2.1#
Support for standard linear regression models
Models include support for robust standard errors, bootstrapped CIs, and permuted inference
0.2.0#
Support for categorical predictors, model predictions, and model plots
0.1.0#
Linear and Logit multi-level models