Compare commits

...

40 Commits

Author SHA1 Message Date
Tyler Reddy 516e0229b6
Merge pull request #24260 from mdhaber/gh24256
DOC: stats.anderson/anderson_ksamp: improve release notes
2025-12-31 09:38:52 -07:00
Matt Haberland a0ff50a6ea REL: stats.anderson/anderson_ksamp: improve release notes
[docs only]
2025-12-31 09:12:25 -06:00
Tyler Reddy 64c7163920
REL: set 1.17.0rc3 unreleased (#24255) 2025-12-30 23:33:35 +00:00
Tyler Reddy 2ff870db06 REL: 1.17.0rc2 rel commit [wheel build]
* SciPy `1.17.0rc2` release commit.

[wheel build]
2025-12-30 09:19:05 -07:00
Tyler Reddy 9af41549f1
Merge pull request #24207 from tylerjereddy/treddy_backports_1.17.0rc2
MAINT: backports for SciPy 1.17.0rc2
2025-12-30 09:09:36 -07:00
Tyler Reddy 5f723185d6 CI: PR 24207 revisions
* typo fix in linux_blas.yml
2025-12-29 12:47:14 -07:00
Tyler Reddy 6664f7f978 DOC: PR 24207 revisions
* Update the SciPy `1.17.0` release notes
following additional backport activity.
2025-12-29 12:28:52 -07:00
Tyler Reddy 627796a6f4 CI: PR 24207 revisions
* Pin `scipy-openblas` versions in a problematic CI
job to avoid issues with a recent release.
2025-12-29 12:20:26 -07:00
Lucas Colley 5dec102490 MAINT: bump array-api-compat to 1.13 tag 2025-12-29 12:13:22 -07:00
Lucas Colley 87c798350e MAINT: bump array-api-compat for JAX>=0.8.2 compat 2025-12-29 12:11:40 -07:00
Tyler Reddy 1b80f73a1e DOC: PR 24207 revisions
* Update the SciPy `1.17.0` release notes following additional
backport activity. Also mention the two `stats` deprecations
requested (https://github.com/scipy/scipy/pull/24207#pullrequestreview-3601342641).

* Also address release notes review comments from:
https://github.com/scipy/scipy/pull/24207#discussion_r2637390315
https://github.com/scipy/scipy/pull/24207#discussion_r2637390386
2025-12-23 11:51:06 -07:00
Lucas Colley 95d09b267a BLD/DEV: use flang 21 on Windows (#24111)
[skip circle]
2025-12-23 11:22:44 -07:00
Tyler Reddy 212fce6f82 BENCH, MAINT: NumPy 2.4.0 bench compat (#24218)
* Fixes gh-24212.

* The changes here allow the problematic benchmarking incantation described
in the matching ticket to have a passing/zero exit code with NumPy
`2.4.0`:
`asv run --quick -b "time_fancy_getitem" -e`

* I looked at the circleci failure log in the matching issue and this
seems to be the only issue with NumPy `2.4.0` in the benchmark suite,
but we'll see what the CI says..
2025-12-23 11:22:44 -07:00
Lucas Colley ce87eb18eb BLD: linalg: `link_language: 'fortran'` for `_fblas` (#24223) 2025-12-23 11:22:44 -07:00
Albert Steppi 38de396839 TST: fft: More allow_dask_compute=True (#24213) 2025-12-23 11:22:44 -07:00
Tyler Reddy 08f9877a4e MAINT, TYP: PR 24207 revisions [wheel build]
* manually revert a few typing-related adjustments
based on reviewer feedback at:
https://github.com/scipy/scipy/pull/24207/changes#r2637006039

[wheel build]
2025-12-23 11:22:44 -07:00
Tyler Reddy 120d615f9c DOC: update the 1.17.0 release notes
* Update the SciPy `1.17.0` release notes
following additional backport activity, including
updates to the author list and issue/PR lists.

* Address the comment at:
https://github.com/scipy/scipy/pull/24138#discussion_r2615964668

* Fix a few typos in the release notes that were noticed
after RC1.

* Address the missing release notes entries based on the summary
at:
https://github.com/scipy/scipy/issues/24152
2025-12-23 11:22:36 -07:00
ilayn c4c0a98258 REV:integrate: Revert the stepsize change in LSODA 2025-12-23 11:21:39 -07:00
ilayn 96db5495ab BUG:linalg.interpolative:Prepare nonarray args before sending to idd_reconid 2025-12-23 11:21:39 -07:00
ilayn f0147c01a6 BUG:linalg.interpolative:Care for single row arrays 2025-12-23 11:21:39 -07:00
Florian Bourgey cc3302197e MAINT: bump `xsf` to fix `special.itj0y0` regression (#24155) 2025-12-23 11:21:39 -07:00
Matt Haberland 377f9e3c5a STY: _lib._util: use type instead of TypeAlias
[lint only]
2025-12-23 11:21:39 -07:00
Matt Haberland 71346c21be MAINT: linalg: raise for zero-size batch 2025-12-23 11:21:39 -07:00
Joren Hammudoglu 3b799b2025 BUG: ``sparse``: fix ``coo_matrix.__setitem__`` signature (#24145) 2025-12-23 11:21:39 -07:00
Joren Hammudoglu 737314eee5 TYP: generic ``signal.LinearTimeInvariant`` type (#24143) 2025-12-23 11:21:39 -07:00
Joren Hammudoglu 42cc8115ec API: ``signal.savgol_coeffs``: keyword-only ``xp`` and ``device`` (#24140) 2025-12-23 11:21:39 -07:00
Joren Hammudoglu 1b62723e50 API: ``linalg.inv``: keyword-only ``assume_a`` and ``lower`` (#24139) 2025-12-23 11:21:39 -07:00
Jake Bowhay 3194b2f8e9 BUG: integrate: fix no module named 'scipy.integrate._lsoda' (#24133) 2025-12-23 11:21:39 -07:00
Jan Möseritz-Schmidt 8f0a365d6c DOC: update mailmap
[skip ci]
2025-12-23 11:21:39 -07:00
Matt Haberland 8b087266c6 ENH: stats.anderson_ksamp: add 'variant' parameter (#24031) 2025-12-23 11:21:39 -07:00
Matt Haberland 3a5aeb35f1 ENH: stats.anderson: add `method` parameter (#24030)
* ENH: stats.anderson: add method parameter

* TST: stats.anderson: test method parameter

* MAINT: stats.anderson: adjustments per self-review

* DOC: stats.anderson: adjust documentation
2025-12-23 11:21:39 -07:00
Tyler Reddy 9ad034a294
Merge pull request #24214 from ev-br/upd_1.17notes_rbf
DOC: update 1.17.0 notes for RBFInterpolator
2025-12-21 13:05:03 -07:00
Evgeni Burovski 275750ec0f DOC: update 1.17.0 notes for RBFInterpolator
[ci skip]
2025-12-21 10:49:58 +01:00
Tyler Reddy 60bb19b029
Merge pull request #24138 from dschult/relnotes_sparse_tweak
DOC: release notes: improve wording of sparse highlight
2025-12-12 18:26:23 -07:00
Dan Schult 5a627f161c apply suggestion re: from numpy [docs only] 2025-12-11 14:59:26 -05:00
Dan Schult 467fe4d216 Better sparse highlight COO wording. Transition into ARPACK [docs only] 2025-12-10 23:49:43 -05:00
Tyler Reddy 357baaa1f8
REL: set 1.17.0rc2 unreleased (#24129)
* Set the version to SciPy `1.17.0rc2` "unreleased."

[skip ci] [ci skip]
2025-12-10 08:53:28 +01:00
Tyler Reddy f77b58ae6f REL: 1.17.0rc1 release commit [wheel build]
* SciPy `1.17.0rc1` release commit.

[wheel build]
2025-12-08 19:56:48 -07:00
Tyler Reddy 8d98c4da75
Merge pull request #24118 from tylerjereddy/treddy_deps_1.17.0
MAINT: version pins/prep for 1.17.0rc1
2025-12-08 19:47:09 -07:00
Tyler Reddy 39c4e05da3 MAINT: version pins/prep for 1.17.0rc1
* Adjust the build and runtime dependency version upper
bounds (on maintenance branch) as we prepare for the
release of SciPy `1.17.0rc1`.

* Many of the decisions are based on the previous PR
for the last release cycle that aimed to do the same:
gh-23013.

[skip circle]
2025-12-08 10:28:27 -07:00
29 changed files with 1030 additions and 399 deletions


@@ -127,7 +127,9 @@ jobs:
pip install cython numpy pybind11 pythran pytest hypothesis pytest-xdist pooch
pip install -r requirements/dev.txt
pip install git+https://github.com/numpy/meson.git@main-numpymeson
pip install scipy-openblas32 scipy-openblas64
# pin scipy-openblas on release branch--see:
# https://github.com/scipy/scipy/pull/24207#issuecomment-3687862055
pip install "scipy-openblas32==0.3.30.0.8" "scipy-openblas64==0.3.30.0.8"
- name: Write out scipy-openblas64.pc
run: |


@@ -276,6 +276,7 @@ Jakub Dyczek <34447984+JDkuba@users.noreply.github.com> JDkuba <34447984+JDkuba@
James T. Webber <jamestwebber@gmail.com> jamestwebber <jamestwebber@gmail.com>
James T. Webber <jamestwebber@gmail.com> James Webber <jamestwebber@users.noreply.github.com>
James T. Webber <jamestwebber@gmail.com> James Webber <j@meswebber.com>
Jan Möseritz-Schmidt <jaro.schmidt@gmail.com> JaRoSchm <jaro.schmidt@gmail.com>
Jan Schlüter <jan.schlueter@ofai.at> Jan Schlueter <jan.schlueter@ofai.at>
Jan Schlüter <jan.schlueter@ofai.at> Jan Schlüter <github@jan-schlueter.de>
Jan Soedingrekso <jan.soedingrekso@tu-dortmund.de> sudojan <jan.soedingrekso@tu-dortmund.de>
@@ -285,7 +286,6 @@ Janani Padmanabhan <jenny.stone125@gmail.com> janani <janani@janani-Vostro-3446.
Janani Padmanabhan <jenny.stone125@gmail.com> Janani <jenny.stone125@gmail.com>
Janez Demšar <janez.demsar@fri.uni-lj.si> janez <janez.demsar@fri.uni-lj.si>
Janez Demšar <janez.demsar@fri.uni-lj.si> janezd <janez.demsar@fri.uni-lj.si>
Jaro Schmidt <jaro.schmidt@gmail.com> JaRoSchm <jaro.schmidt@gmail.com>
Jarrod Millman <jarrod.millman@gmail.com> Jarrod Millman <millman@berkeley.edu>
Jean-François B. <jfbu@free.fr> jfbu <jfbu@free.fr>
Jean-François B. <jfbu@free.fr> Jean-François B <jfbu@free.fr>


@@ -323,9 +323,9 @@ class Getset(Benchmark):
v = np.random.rand(n)
if N == 1:
i = int(i)
j = int(j)
v = float(v)
i = int(i[0])
j = int(j[0])
v = float(v[0])
base = A.asformat(format)
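The benchmark fix above swaps `int(i)` for `int(i[0])`. A minimal sketch of why, assuming a recent NumPy release where implicit conversion of a size-1 ndarray to a Python scalar (deprecated since NumPy 1.25) is rejected:

```python
import numpy as np

i = np.random.rand(1)   # shape (1,) array, as in the benchmark setup
# int(i) relies on implicit size-1 ndarray -> scalar conversion, which
# NumPy has deprecated and recent releases reject; extracting the
# element first is always valid:
safe = int(i[0])
assert isinstance(safe, int)
```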


@@ -27,10 +27,11 @@ Highlights of this release
array input and additional support for the array API standard. An overall
summary of the latter is now available in a
:ref:`set of tables <dev-arrayapi_coverage>`.
- In `scipy.sparse`, ``coo_array`` now has full support for indexing across
dimensions without needing to convert between sparse formats. ARPACK
and PROPACK rewrites from Fortran77 to C now empower the use of external
pseudorandom number generators.
- In `scipy.sparse`, ``coo_array`` now supports indexing. This includes integers,
slices, arrays, ``np.newaxis``, ``Ellipsis``, in 1D, 2D and the relatively
new nD. In `scipy.sparse.linalg`, ARPACK and PROPACK rewrites from Fortran77
to C now empower the use of external pseudorandom number generators, e.g.
from numpy.
- In `scipy.spatial`, ``transform.Rotation`` and ``transform.RigidTransform``
have been extended to support N-D arrays. ``geometric_slerp`` now has support
for extrapolation.
@@ -52,6 +53,9 @@
``zvode`` have been ported from Fortran77 to C.
- `scipy.integrate.quad` now has a fast path for returning 0 when the integration
interval is empty.
- The ``BDF``, ``DOP853``, ``RK23``, ``RK45``, ``OdeSolver``, ``DenseOutput``,
``ode``, and ``complex_ode`` classes now support subscription, making them
generic types, for compatibility with ``scipy-stubs``.
``scipy.cluster`` improvements
==============================
@@ -60,20 +64,30 @@ New features
``scipy.interpolate`` improvements
==================================
- A new ``bc_type`` argument has been added to `scipy.interpolate.make_splrep`
and `scipy.interpolate.make_splprep` to control the boundary conditions for
spline fitting. Allowed values are ``"not-a-knot"`` (default) and
``"periodic"``.
- A new ``bc_type`` argument has been added to `scipy.interpolate.make_splrep`,
`scipy.interpolate.make_splprep`, and `scipy.interpolate.generate_knots` to
control the boundary conditions for spline fitting. Allowed values are
``"not-a-knot"`` (default) and ``"periodic"``.
- A new ``derivative`` method has been added to the
`scipy.interpolate.NdBSpline` class, to construct a new spline representing a
partial derivative of the given spline. This method is similar to the
``BSpline.derivative`` method of 1-D spline objects.
``BSpline.derivative`` method of 1-D spline objects. In addition, the
``NdBSpline`` mutable instance attribute ``.c`` was changed into a read-only
``@property``.
- Performance of ``"cubic"`` and ``"quintic"`` modes of
`scipy.interpolate.RegularGridInterpolator` has been improved.
- Numerical stability of `scipy.interpolate.AAA` has been improved.
`scipy.interpolate.RegularGridInterpolator` has been improved. Furthermore,
the (mutable) instance attributes ``.grid`` and ``.values`` were changed into
(read-only) properties.
- Numerical stability of `scipy.interpolate.AAA` has been improved and it has
gained a new ``axis`` parameter.
- `scipy.interpolate.FloaterHormannInterpolator` added support for
multidimensional, batched inputs and gained a new ``axis`` parameter to
select the interpolation axis.
- ``RBFInterpolator`` has gained an array API standard compatible backend, with an
improved support for GPU arrays.
- The ``AAA``, ``*Interpolator``, ``*Poly``, and ``*Spline`` classes now
support subscription, making them generic types, for compatibility with
``scipy-stubs``.
``scipy.linalg`` improvements
@@ -105,6 +119,9 @@ New features
- Callback functions used by ``optimize.minimize(method="slsqp")`` can
opt into the new callback interface by accepting a single keyword argument
``intermediate_result``.
- The ``BroydenFirst``, ``*Jacobian``, and ``Bounds`` classes now support
subscription, making them generic types, for compatibility with
``scipy-stubs``.
``scipy.signal`` improvements
@@ -122,6 +139,9 @@ New features
axes along which the two-dimensional analytic signal should be calculated.
Furthermore, the documentation of `~scipy.signal.hilbert` and
`~scipy.signal.hilbert2` was significantly improved.
- The ``ShortTimeFFT`` and ``LinearTimeInvariant`` classes now support
subscription, making them generic types, for compatibility with
``scipy-stubs``.
``scipy.sparse`` improvements
@@ -143,10 +163,15 @@ New features
or another ``dok_array`` matrix. It performs additional validation that keys
are valid index tuples.
- `~scipy.sparse.dia_array.tocsr` is approximately three times faster and
some unneccesary copy operations have been removed from sparse format
some unnecessary copy operations have been removed from sparse format
interconversions more broadly.
- Added `scipy.sparse.linalg.funm_multiply_krylov`, a restarted Krylov method
for evaluating ``y = f(tA) b``.
- In ``sparse.linalg``, the ``LinearOperator``, ``LaplacianNd``, and ``SuperLU``
classes now support subscription, making them generic types, for
compatibility with ``scipy-stubs``.
- In ``sparse.linalg`` the ``eigs`` and ``eigsh`` functions now accept a new
``rng`` parameter.
``scipy.spatial`` improvements
==============================
@@ -183,6 +208,8 @@ New features
point at the south pole.
- ``Rotation.as_euler`` and ``Rotation.as_davenport`` methods have gained a
``suppress_warnings`` parameter to enable suppression of gimbal lock warnings.
- ``Rotation.__init__`` has gained a new optional ``scalar_first`` parameter and
there is a new ``Rotation.__setitem__`` method.
``scipy.special`` improvements
==============================
@@ -221,6 +248,19 @@ New features
`scipy.stats.ks_1samp`, `scipy.stats.levene`, and `scipy.stats.mood`.
Typically, this improves performance with multidimensional (batch) input.
- The critical value tables of `scipy.stats.anderson` have been updated.
- A new ``method`` parameter of `scipy.stats.anderson` allows the user
to compute p-values by interpolating between tabulated values or using Monte
Carlo simulation. The ``method`` parameter must be passed explicitly
to add a ``pvalue`` attribute to the result object and avoid a warning
about the upcoming removal of ``critical_value``, ``significance_level``,
and ``fit_result`` attributes.
- A new ``variant`` parameter of `scipy.stats.anderson_ksamp` allows the user
to select between three different variants of the statistic, superseding the
``midrank`` parameter which allowed toggling between two. The new ``'continuous'``
variant is equivalent to ``'discrete'`` when there are no ties in the sample, but
the calculation is faster. The ``variant`` parameter must be passed explicitly to
avoid a warning about the deprecation of the ``midrank`` attribute and the upcoming
removal of ``critical_values`` from the result object.
- The speed and accuracy of most `scipy.stats.zipfian` methods has been
improved.
- The accuracies of the `scipy.stats.Binomial` methods ``logcdf`` and
@@ -228,13 +268,25 @@ New features
- The default guess of ``scipy.stats.trapezoid.fit`` has been improved.
- The accuracy and range of the ``cdf``, ``sf``, ``isf``, and ``ppf`` methods
of `scipy.stats.binom` and `scipy.stats.nbinom` has been improved.
- The ``Covariance``, ``Uniform``, ``Normal``, ``Binomial``, ``Mixture``,
``rv_frozen``, and ``multi_rv_frozen`` classes now support subscription,
making them generic types, for compatibility with ``scipy-stubs``.
- The ``multivariate_t`` and ``multivariate_normal`` distributions have gained
a new ``marginal`` method.
- ``yeojohnson_llf`` gained new parameters ``axis``, ``nan_policy``,
and ``keepdims``, and now returns a numpy scalar where it would previously
return a 0D array.
- The new ``spearmanrho`` function is an array API compatible substitute for
``spearmanr``.
- The ``median_abs_deviation`` function has gained a ``keepdims`` parameter.
- The ``trim_mean`` function has gained new ``nan_policy`` and ``keepdims``
parameters.
**************************
Array API Standard Support
**************************
- An overall summary table for our array API standard support/coverage is
:ref:`now availalble <dev-arrayapi_coverage>`.
:ref:`now available <dev-arrayapi_coverage>`.
- The overhead associated with array namespace determination has been reduced,
providing improved performance in dispatching to different backends.
- `scipy.cluster.hierarchy.is_isomorphic` has gained support.
@@ -290,6 +342,15 @@ Deprecated features and future changes
- The ``precenter`` argument of `scipy.signal.lombscargle` is deprecated and
will be removed in v1.19.0. Furthermore, some arguments will become keyword
only.
- For `scipy.stats.anderson`, the tuple-unpacking behavior of the return object
and attributes ``critical_values``, ``significance_level``, and
``fit_result`` are deprecated. Use the new ``method`` parameter to avoid the
deprecation warning. Beginning in SciPy 1.19.0, these features will
no longer be available, and the object returned will have attributes
``statistic`` and ``pvalue``.
- For `scipy.stats.anderson_ksamp`, the ``midrank`` parameter is deprecated
and the new ``variant`` parameter should be preferred. This also means that
the presence of the ``critical_values`` return array is deprecated.
********************
Expired deprecations
@@ -369,17 +430,17 @@ Authors
* Marco Berzborn (1) +
* Ole Bialas (1) +
* Om Biradar (1) +
* Florian Bourgey (1)
* Jake Bowhay (102)
* Florian Bourgey (2)
* Jake Bowhay (103)
* Matteo Brivio (1) +
* Dietrich Brunn (34)
* Johannes Buchner (2) +
* Evgeni Burovski (288)
* Evgeni Burovski (290)
* Nicholas Carlini (1) +
* Luca Cerina (1) +
* Christine P. Chai (35)
* Saransh Chopra (1)
* Lucas Colley (117)
* Lucas Colley (121)
* Björn Ingvar Dahlgren (2) +
* Sumit Das (1) +
* Hans Dembinski (1)
@@ -398,14 +459,15 @@ Authors
* Ralf Gommers (121)
* Nicolas Guidotti (1) +
* Geoffrey Gunter (1) +
* Matt Haberland (177)
* Joren Hammudoglu (56)
* Matt Haberland (181)
* Joren Hammudoglu (60)
* Jacob Hass (2) +
* Nick Hodgskin (1) +
* Stephen Huan (1) +
* Guido Imperiale (41)
* Gert-Ludwig Ingold (1)
* Jaime Rodríguez-Guerra (2) +
* Jan Möseritz-Schmidt (2) +
* JBlitzar (1) +
* Adam Jones (2)
* Dustin Kenefake (1) +
@@ -442,27 +504,26 @@ Authors
* Gilles Peiffer (3) +
* Matti Picus (1)
* Jonas Pleyer (2) +
* Ilhan Polat (116)
* Ilhan Polat (119)
* Akshay Priyadarshi (2) +
* Mohammed Abdul Rahman (1) +
* Daniele Raimondi (2) +
* Ritesh Rana (1) +
* Adrian Raso (1) +
* Dan Raviv (1) +
* Tyler Reddy (116)
* Tyler Reddy (122)
* Lucas Roberts (4)
* Bernard Roesler (1) +
* Mikhail Ryazanov (27)
* Jaro Schmidt (1) +
* Daniel Schmitz (25)
* Martin Schuck (25)
* Dan Schult (29)
* Dan Schult (33)
* Mugunthan Selvanayagam (1) +
* Scott Shambaugh (14)
* Rodrigo Silva (1) +
* Samaresh Kumar Singh (8) +
* Kartik Sirohi (1) +
* Albert Steppi (178)
* Albert Steppi (179)
* Matthias Straka (1) +
* Theo Teske (1) +
* Noam Teyssier (1) +
@@ -528,6 +589,7 @@ Issues closed for 1.17.0
* `#22310 <https://github.com/scipy/scipy/issues/22310>`__: BUG: scipy.sparse.linalg.eigsh returns an error when LinearOperator.dtype=None
* `#22365 <https://github.com/scipy/scipy/issues/22365>`__: BUG: ndimage: C++ memory management issues in ``_rank_filter_1d.cpp``
* `#22412 <https://github.com/scipy/scipy/issues/22412>`__: RFC/API: move ``clarkson_woodruff_transform`` from ``linalg``...
* `#22436 <https://github.com/scipy/scipy/issues/22436>`__: BUG: special.itj0y0: regression for ``t > 19``
* `#22500 <https://github.com/scipy/scipy/issues/22500>`__: ENH: spatial.transform: Add array API standard support
* `#22510 <https://github.com/scipy/scipy/issues/22510>`__: performance: _construct_docstrings slow in scipy.stats
* `#22682 <https://github.com/scipy/scipy/issues/22682>`__: BUG: special.betainc: inaccurate/incorrect for small a and b...
@@ -549,11 +611,13 @@ Issues closed for 1.17.0
* `#23160 <https://github.com/scipy/scipy/issues/23160>`__: DOC: signal.medfilt: remove outdated note
* `#23166 <https://github.com/scipy/scipy/issues/23166>`__: BENCH/DEV: ``spin bench`` option ``--quick`` is useless (always...
* `#23171 <https://github.com/scipy/scipy/issues/23171>`__: BUG: ``interpolate.RegularGridInterpolator`` fails on a 2D grid...
* `#23195 <https://github.com/scipy/scipy/issues/23195>`__: BUG: New backend for interpolative decomposition breaks for (1xn)...
* `#23204 <https://github.com/scipy/scipy/issues/23204>`__: BUG/TST: stats: XSLOW test failures of ``ttest_ind``
* `#23208 <https://github.com/scipy/scipy/issues/23208>`__: DOC: Missing docstrings for some warnings in scipy.io
* `#23231 <https://github.com/scipy/scipy/issues/23231>`__: DOC: Fix bug in release notes for scipy v1.12.0
* `#23237 <https://github.com/scipy/scipy/issues/23237>`__: DOC: Breadcrumbs missing and sidebar different in API reference...
* `#23248 <https://github.com/scipy/scipy/issues/23248>`__: BUG/TST: ``differentiate`` ``test_dtype`` fails with ``float16``...
* `#23272 <https://github.com/scipy/scipy/issues/23272>`__: BUG: linalg.interpolative.reconstruct_matrix_from_id: segfault...
* `#23297 <https://github.com/scipy/scipy/issues/23297>`__: BUG [nightly]: spatial.transform.Rotation.from_quat: fails for...
* `#23301 <https://github.com/scipy/scipy/issues/23301>`__: BUG: free-threading: functionality based on multiprocessing.Pool...
* `#23303 <https://github.com/scipy/scipy/issues/23303>`__: BUG: spatial.transform.Rotation: Warning Inconsistency
@@ -623,6 +687,8 @@ Issues closed for 1.17.0
* `#24046 <https://github.com/scipy/scipy/issues/24046>`__: TST: stats: TestStudentT.test_moments_t failure
* `#24052 <https://github.com/scipy/scipy/issues/24052>`__: CI: TypeError: Multiple namespaces for array inputs
* `#24089 <https://github.com/scipy/scipy/issues/24089>`__: ENH: spatial.transform: Add ``normalize`` argument to ``Rotation.from_matrix``
* `#24131 <https://github.com/scipy/scipy/issues/24131>`__: BUG: ``1.17.0rc1``\ : ModuleNotFoundError: No module named 'scipy.integrate._lso...
* `#24212 <https://github.com/scipy/scipy/issues/24212>`__: BENCH, MAINT: (sparse) asv suite compat with NumPy 2.4.0
************************
Pull requests for 1.17.0
@@ -1094,6 +1160,8 @@ Pull requests for 1.17.0
* `#24025 <https://github.com/scipy/scipy/pull/24025>`__: DOC: stats: add equations for Cramer von Mises functions
* `#24028 <https://github.com/scipy/scipy/pull/24028>`__: MAINT:integrate: Initialize variable to silence compiler warnings
* `#24029 <https://github.com/scipy/scipy/pull/24029>`__: ENH: stats.mood: add array API support
* `#24030 <https://github.com/scipy/scipy/pull/24030>`__: ENH: stats.anderson: add ``method`` parameter
* `#24031 <https://github.com/scipy/scipy/pull/24031>`__: ENH: stats.anderson_ksamp: add ``variant`` parameter
* `#24033 <https://github.com/scipy/scipy/pull/24033>`__: DEP: deprecate ``scipy.odr``
* `#24036 <https://github.com/scipy/scipy/pull/24036>`__: CI: run Windows OneAPI job on main every 2 days to prefetch cache
* `#24039 <https://github.com/scipy/scipy/pull/24039>`__: DOC/TST: fft: mark array api coverage with ``xp_capabilities``
@@ -1124,6 +1192,27 @@ Pull requests for 1.17.0
* `#24095 <https://github.com/scipy/scipy/pull/24095>`__: DOC: differential_evolution - fix typo in custom strategy example
* `#24097 <https://github.com/scipy/scipy/pull/24097>`__: ENH: stats: added marginal function to ``stats.multivariate_t``
* `#24098 <https://github.com/scipy/scipy/pull/24098>`__: TST: linalg: unskip a test
* `#24100 <https://github.com/scipy/scipy/pull/24100>`__: DOC: SciPy 1.17.0 release notes
* `#24106 <https://github.com/scipy/scipy/pull/24106>`__: BUG: interpolate: Fix h[m] -> h[n] typo in _deBoor_D derivative...
* `#24107 <https://github.com/scipy/scipy/pull/24107>`__: ENH: linalg/solve: move the tridiagonal solve slice iteration...
* `#24111 <https://github.com/scipy/scipy/pull/24111>`__: BLD/DEV: fix windows build
* `#24113 <https://github.com/scipy/scipy/pull/24113>`__: DOC: signal: mark some legacy functions as out of scope for array...
* `#24118 <https://github.com/scipy/scipy/pull/24118>`__: MAINT: version pins/prep for 1.17.0rc1
* `#24129 <https://github.com/scipy/scipy/pull/24129>`__: REL: set 1.17.0rc2 unreleased
* `#24132 <https://github.com/scipy/scipy/pull/24132>`__: DOC: update mailmap
* `#24133 <https://github.com/scipy/scipy/pull/24133>`__: BUG: integrate: fix no module named 'scipy.integrate._lsoda'
* `#24138 <https://github.com/scipy/scipy/pull/24138>`__: DOC: release notes: improve wording of sparse highlight
* `#24139 <https://github.com/scipy/scipy/pull/24139>`__: API: ``linalg.inv``\ : keyword-only ``assume_a`` and ``lower``...
* `#24140 <https://github.com/scipy/scipy/pull/24140>`__: API: ``signal.savgol_coeffs``\ : keyword-only ``xp`` and ``device``
* `#24143 <https://github.com/scipy/scipy/pull/24143>`__: TYP: generic ``signal.LinearTimeInvariant`` type
* `#24145 <https://github.com/scipy/scipy/pull/24145>`__: BUG: ``sparse``\ : fix ``coo_matrix.__setitem__`` signature
* `#24151 <https://github.com/scipy/scipy/pull/24151>`__: MAINT: linalg: raise for zero-size batch
* `#24155 <https://github.com/scipy/scipy/pull/24155>`__: MAINT: bump ``xsf`` to fix ``special.itj0y0`` regression
* `#24161 <https://github.com/scipy/scipy/pull/24161>`__: BUG:linalg.interpolative: Fix two edge cases for id_dist Cython...
* `#24174 <https://github.com/scipy/scipy/pull/24174>`__: REV:integrate: Revert the stepsize change in LSODA
* `#24213 <https://github.com/scipy/scipy/pull/24213>`__: TST: fft: Add allow_dask_compute=True to more places in fft
* `#24214 <https://github.com/scipy/scipy/pull/24214>`__: DOC: update 1.17.0 notes for RBFInterpolator
* `#24218 <https://github.com/scipy/scipy/pull/24218>`__: BENCH, MAINT: NumPy 2.4.0 bench compat
* `#24223 <https://github.com/scipy/scipy/pull/24223>`__: BLD: linalg: ``link_language: 'fortran'`` for ``_fblas``
* `#24239 <https://github.com/scipy/scipy/pull/24239>`__: MAINT: bump array-api-compat for JAX>=0.8.2 compat
* `#24240 <https://github.com/scipy/scipy/pull/24240>`__: MAINT: bump array-api-compat to 1.13 tag

pixi.lock (654 changes): file diff suppressed because it is too large.


@@ -152,7 +152,6 @@ blas-devel = "*"
### Default building ###
[feature.build-deps.dependencies]
compilers = "*"
ccache = "*"
pkg-config = "*"
ninja = "*"
@@ -165,9 +164,16 @@ pybind11 = "*"
numpy = "*"
blas-devel = "*"
[feature.build-deps.target.unix.dependencies]
compilers = "*"
[feature.build-deps.target.win-64.dependencies]
# force flang 21 until https://github.com/conda-forge/compilers-feedstock/pull/76 is in
flang_win-64 = "21.*"
[feature.win-openblas.target.win-64.dependencies]
openblas = "*"
blas-devel = { version = "*", build = "*openblas"}
blas-devel = { version = "*", build = "*openblas" }
# XXX: when updating this task, remember to update other build tasks if appropriate
[feature.build-task.tasks.build]
@@ -177,7 +183,7 @@ description = "Build SciPy (default settings)"
[feature.build-task.target.win-64.tasks.build]
cmd = "spin build --setup-args=-Dblas=openblas --setup-args=-Dlapack=openblas --setup-args=-Duse-g77-abi=true && cp .pixi/envs/build/Library/bin/openblas.dll build-install/Lib/site-packages/scipy/linalg/openblas.dll"
env = { CC = "ccache clang-cl", CXX = "ccache clang-cl", FC = "ccache $FC", FC_LD = "lld-link" }
env = { CC = "ccache clang-cl", CXX = "ccache clang-cl", FC = "ccache $FC" }
description = "Build SciPy (default settings)"


@@ -17,10 +17,18 @@
[build-system]
build-backend = 'mesonpy'
requires = [
"meson-python>=0.15.0",
"Cython>=3.0.8", # when updating version, also update check in meson.build
"pybind11>=2.13.2", # when updating version, also update check in scipy/meson.build
"pythran>=0.14.0",
# The upper bound on meson-python is pre-emptive only; 0.18.0
# is working at the time of writing, and chance of breakage
# in 0.19/0.20 is low
"meson-python>=0.15.0,<0.21.0",
# We need at least Cython 3.1.0 for free-threaded CPython;
# for other CPython versions, the regular pre-emptive
# Cython version bounds policy applies
"Cython>=3.0.8,<3.3.0",
# The upper bound on pybind11 is pre-emptive only
"pybind11>=2.13.2,<3.1.0",
# The upper bound on pythran is pre-emptive only;
"pythran>=0.14.0,<0.19.0",
# numpy requirement for wheel builds for distribution on PyPI - building
# against 2.x yields wheels that are also compatible with numpy 1.x at
@@ -28,12 +36,13 @@ requires = [
# Note that building against numpy 1.x works fine too - users and
# redistributors can do this by installing the numpy version they like and
# disabling build isolation.
"numpy>=2.0.0",
# NOTE: need numpy>=2.1.3 for free-threaded CPython support
"numpy>=2.0.0,<2.7",
]
[project]
name = "scipy"
version = "1.17.0.dev0"
version = "1.17.0rc3.dev0"
# TODO: add `license-files` once PEP 639 is accepted (see meson-python#88)
# at that point, no longer include them in `py3.install_sources()`
license = { file = "LICENSE.txt" }
@@ -46,7 +55,9 @@ maintainers = [
# https://scipy.github.io/devdocs/dev/core-dev/index.html#version-ranges-for-numpy-and-other-dependencies
requires-python = ">=3.11" # keep in sync with `min_python_version` in meson.build
dependencies = [
"numpy>=1.26.4",
# free-threaded CPython support requires more
# recent NumPy release at runtime (>= 2.1.3)
"numpy>=1.26.4,<2.7",
] # keep in sync with `min_numpy_version` in meson.build
readme = "README.rst"
classifiers = [


@@ -64,7 +64,7 @@ del _distributor_init
from scipy._lib import _pep440
# In maintenance branch, change to np_maxversion N+3 if numpy is at N
np_minversion = '1.26.4'
np_maxversion = '9.9.99'
np_maxversion = '2.7.0'
if (_pep440.parse(__numpy_version__) < _pep440.Version(np_minversion) or
_pep440.parse(__numpy_version__) >= _pep440.Version(np_maxversion)):
import warnings
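The min/max version gate in this hunk can be illustrated with a simplified comparison. The real check uses ``scipy._lib._pep440``; the plain tuple parse below (final releases only, no rc/dev suffixes) is an assumption made for brevity:

```python
def _vtuple(v):
    # simplified parse: handles "X.Y.Z" final releases only
    return tuple(int(p) for p in v.split(".")[:3])

np_minversion = "1.26.4"
np_maxversion = "2.7.0"

def numpy_ok(installed):
    # min bound inclusive, max bound exclusive, as in scipy/__init__.py
    return _vtuple(np_minversion) <= _vtuple(installed) < _vtuple(np_maxversion)
```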


@@ -1111,6 +1111,7 @@ The documentation is written assuming array arguments are of specified
"core" shapes. However, array argument(s) of this function may have additional
"batch" dimensions prepended to the core shape. In this case, the array is treated
as a batch of lower-dimensional slices; see :ref:`linalg_batch` for details.
Note that calls with zero-size batches are unsupported and will raise a ``ValueError``.
"""
@@ -1182,6 +1183,13 @@ def _apply_over_batch(*argdefs):
# Determine broadcasted batch shape
batch_shape = np.broadcast_shapes(*batch_shapes) # Gives OK error message
# We can't support zero-size batches right now because without data with
# which to call the function, the decorator doesn't even know the *number*
# of outputs, let alone their core shapes or dtypes.
if math.prod(batch_shape) == 0:
message = f'`{f.__name__}` does not support zero-size batches.'
raise ValueError(message)
# Broadcast arrays to appropriate shape
for i, (array, core_shape) in enumerate(zip(arrays, core_shapes)):
if array is None:
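The guard added here fires whenever the broadcasted batch shape contains a zero-length dimension, which ``math.prod`` detects. For example:

```python
import math
import numpy as np

# broadcasting a zero-length axis against a length-1 axis keeps the zero
batch_shape = np.broadcast_shapes((0, 3), (1, 3))
assert batch_shape == (0, 3)
assert math.prod(batch_shape) == 0   # the condition that raises ValueError
```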

@@ -1 +1 @@
Subproject commit 6c708d13e826fb850161babf34f306fe80cae875
Subproject commit 946ce4ad77968b94e93594c79653162426ec3224


@@ -472,7 +472,7 @@ def irfft(x, n=None, axis=-1, norm=None, overwrite_x=False, workers=None, *,
return (Dispatchable(x, np.ndarray),)
@xp_capabilities()
@xp_capabilities(allow_dask_compute=True)
@_dispatch
def hfft(x, n=None, axis=-1, norm=None, overwrite_x=False, workers=None, *,
plan=None):
@@ -624,7 +624,7 @@ def ihfft(x, n=None, axis=-1, norm=None, overwrite_x=False, workers=None, *,
return (Dispatchable(x, np.ndarray),)
@xp_capabilities()
@xp_capabilities(allow_dask_compute=True)
@_dispatch
def fftn(x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *,
plan=None):
@@ -1032,7 +1032,7 @@ def ifft2(x, s=None, axes=(-2, -1), norm=None, overwrite_x=False, workers=None,
return (Dispatchable(x, np.ndarray),)
@xp_capabilities()
@xp_capabilities(allow_dask_compute=True)
@_dispatch
def rfftn(x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *,
plan=None):


@@ -1254,7 +1254,7 @@ class lsoda(IntegratorBase):
with_jacobian=False,
rtol=1e-6, atol=1e-12,
lband=None, uband=None,
nsteps=5000, # Increased default for tighter tolerances
nsteps=500,
max_step=0.0, # corresponds to infinite
min_step=0.0,
first_step=0.0, # determined by solver


@@ -11,5 +11,5 @@ def __dir__():
def __getattr__(name):
return _sub_module_deprecation(sub_package="integrate", module="lsoda",
private_modules=["_lsoda"], all=__all__,
private_modules=["_odepack"], all=__all__,
attribute=name)


@@ -864,7 +864,16 @@ stoda_handle_corrector_failure(double* yh, int nyh, int* ncf, double told, const
return 0; // Successful recovery - caller should retry
}
/**
* @brief Computes the predicted values for the next step.
*
* This function computes the predicted values by effectively multiplying
* the yh array by the pascal triangle matrix. It also updates the time
* and checks if the Jacobian needs updating based on step size ratio changes.
*
* @param S Pointer to the common structure containing integrator state.
* @param yh1 Pointer to the Nordsieck history array.
*/
static void
stoda_get_predicted_values(lsoda_common_struct_t* S, double* yh1)
{
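The "multiplying the yh array by the pascal triangle matrix" step described in the new doc comment can be sketched in NumPy. This is an illustrative reimplementation of the classic Nordsieck prediction loop, with rows as ODE components and columns holding `y, h*y', (h^2/2)*y'', ...` (a layout assumption for the sketch, not the C code's storage):

```python
import numpy as np

def predict(yh):
    # Repeated in-place column additions, as in the classic LSODA double
    # loop; the net effect is multiplication by a unit Pascal (binomial)
    # matrix, which shifts the Taylor expansion from t to t + h.
    yh = yh.copy()
    order = yh.shape[1] - 1
    for k in range(order):
        for j in range(order - 1, k - 1, -1):
            yh[:, j] += yh[:, j + 1]
    return yh

# y = t^2 at t = 0 with h = 1: columns [y, h*y', (h^2/2)*y''] = [0, 0, 1].
# Prediction gives [1, 2, 1] = [y(1), h*y'(1), (h^2/2)*y''(1)].
print(predict(np.array([[0.0, 0.0, 1.0]])))
```

Evaluating the predicted history against the exact Taylor shift of a polynomial is a quick sanity check of the loop bounds.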
@@ -875,15 +884,10 @@ stoda_get_predicted_values(lsoda_common_struct_t* S, double* yh1)
// to force pjac to be called, if a jacobian is involved.
// in any case, pjac is called at least every msbp steps.
// NOTE: Only check rc/msbp conditions if Jacobian was NOT just computed.
// After prja sets rc=1.0 and jcur=1, we may retry with reduced step size
// which modifies rc. We should not request another Jacobian evaluation
// until jcur is reset to 0 (after corrector convergence at line 1124).
// This matches Fortran behavior and prevents excessive Jacobian evaluations.
if (S->jcur == 0) {
if (fabs(S->rc - 1.0) > S->ccmax) { S->ipup = S->miter; }
if (S->nst >= S->nslp + S->msbp) { S->ipup = S->miter; }
}
// Check if Jacobian needs updating based on rc changes or step count
// These checks match Fortran lines 232-233 in stoda.f
if (fabs(S->rc - 1.0) > S->ccmax) { S->ipup = S->miter; }
if (S->nst >= S->nslp + S->msbp) { S->ipup = S->miter; }
S->tn = S->tn + S->h;
@@ -898,7 +902,27 @@ stoda_get_predicted_values(lsoda_common_struct_t* S, double* yh1)
}
}
/**
* @brief Performs one step of the integration using the chosen method.
*
* This is the core stepping routine that implements the predictor-corrector
* algorithm. It handles both Adams (non-stiff) and BDF (stiff) methods,
* manages the corrector iteration, adjusts step size and order, and performs
* error control.
*
* @param neq Pointer to the number of equations.
* @param y Array containing the current solution vector.
* @param yh Nordsieck history array.
* @param nyh Leading dimension of the yh array.
* @param ewt Error weight vector.
* @param savf Array containing f evaluated at the current state.
* @param acor Array containing the accumulated corrections.
* @param wm Real work space for matrices.
* @param iwm Integer work space for matrix operations.
* @param f User-supplied function for evaluating dy/dt.
* @param jac User-supplied Jacobian function.
* @param S Pointer to the common structure containing integrator state.
*/
static void
stoda(
int* neq, double* y, double* yh, int nyh, double* ewt,
@@ -1179,7 +1203,7 @@ stoda(
if (pdh * rh1 > 0.00001) { rh1it = sm1[S->nq - 1] / pdh; }
rh1 = fmin(rh1, rh1it);
if (S->nq <= S->mxords)
if (S->nq > S->mxords)
{
nqm2 = S->mxords;
lm2 = S->mxords + 1;
@@ -1424,7 +1448,21 @@ stoda(
*********************************************************************************
*/
/**
* @brief Computes interpolated values of y and its derivatives.
*
* This function computes the k-th derivative of the interpolating polynomial
* at the time t. It is used to obtain values of the solution or its derivatives
* at times other than the integration steps.
*
* @param t Time at which to evaluate the interpolant.
* @param k Order of the derivative (0 for y itself, 1 for dy/dt, etc.).
* @param yh Nordsieck history array.
* @param nyh Leading dimension of the yh array.
* @param dky Output array containing the computed derivative.
* @param iflag Output flag: 0 if successful, -1 if k is out of range, -2 if t is out of range.
* @param S Pointer to the common structure containing integrator state.
*/
static void
intdy(const double t, const int k, double* yh, const int nyh, double* dky, int* iflag, lsoda_common_struct_t* S)
{
@@ -1472,12 +1510,19 @@ intdy(const double t, const int k, double* yh, const int nyh, double* dky, int*
/**
* @brief Sets the error weight vector for error control.
*
* This subroutine sets the error weight vector ewt according to
* This function sets the error weight vector ewt according to
* ewt(i) = rtol(i)*abs(ycur(i)) + atol(i), i = 1,...,n,
* with the subscript on rtol and/or atol possibly replaced by 1 above,
* depending on the value of itol.
*
* @param n Number of equations.
* @param itol Tolerance type indicator (1-4).
* @param rtol Relative tolerance array.
* @param atol Absolute tolerance array.
* @param ycur Current solution vector.
* @param ewt Output error weight vector.
*/
static void
ewset(const int n, const int itol, double* rtol, double* atol, double* ycur, double* ewt)
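The docstring formula above translates directly to NumPy. `ewset_scalar` is an illustrative helper for the simplest tolerance mode (scalar `rtol` and `atol`, i.e. `itol == 1`), not part of the C code:

```python
import numpy as np

def ewset_scalar(rtol, atol, ycur):
    # ewt[i] = rtol * |ycur[i]| + atol, the itol == 1 case of the
    # formula in the docstring (scalar rtol and atol for all components).
    ycur = np.asarray(ycur, dtype=float)
    return rtol * np.abs(ycur) + atol

ewt = ewset_scalar(1e-6, 1e-12, [-2.0, 0.0, 3.0])
# Error control compares the weighted RMS norm of the local error against 1,
# so components with larger |y| tolerate proportionally larger error.
```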
@@ -1516,9 +1561,14 @@ ewset(const int n, const int itol, double* rtol, double* atol, double* ycur, dou
}
/**
* @brief Handles error state tracking for the lsoda integrator.
*
* This helper function mimics the error handling at the end of lsoda
* in the original FORTRAN code, setting istate and incrementing illin.
* If illin reaches 5, istate is set to -8 to indicate a more severe error.
*
* @param istate Pointer to the integration state flag.
* @param illin Pointer to the illegal input counter.
*/
void lsoda_mark_error(int* istate, int* illin)
{
@@ -1528,7 +1578,32 @@ void lsoda_mark_error(int* istate, int* illin)
return;
}
/**
* @brief (L)ivermore (S)olver for (O)rdinary (D)ifferential Equations with (A)utomatic method switching.
*
* LSODA solves the initial value problem for stiff or non-stiff systems of first-order ODEs,
* dy/dt = f(t,y). It automatically selects between non-stiff (Adams) and stiff (BDF) methods
* based on the problem characteristics.
*
* @param f User-supplied function to evaluate dy/dt = f(t,y).
* @param neq Number of first-order ODEs.
* @param y Array of length neq containing the solution vector.
* @param t Pointer to the independent variable (time).
* @param tout The next point where output is desired.
* @param itol Tolerance type indicator (1-4).
* @param rtol Relative tolerance array.
* @param atol Absolute tolerance array.
* @param itask Task indicator (1-5).
* @param istate State indicator: 1 for first call, 2 for subsequent calls, negative for errors.
* @param iopt Optional input flag (0 or 1).
* @param rwork Real working array.
* @param lrw Length of rwork.
* @param iwork Integer working array.
* @param liw Length of iwork.
* @param jac User-supplied Jacobian function (optional, depends on jt).
* @param jt Jacobian type indicator.
* @param S Pointer to the common structure containing integrator state.
*/
void lsoda(
lsoda_func_t f, int neq, double* restrict y, double* t, double* tout, int itol, double* rtol, double* atol,
int* itask, int* istate, int* iopt, double* restrict rwork, int lrw, int* restrict iwork, int liw, lsoda_jac_t jac,


@@ -1337,7 +1337,7 @@ def solve_circulant(c, b, singular='raise', tol=None,
# matrix inversion
def inv(a, overwrite_a=False, check_finite=True, assume_a=None, lower=False):
def inv(a, overwrite_a=False, check_finite=True, *, assume_a=None, lower=False):
r"""
Compute the inverse of a matrix.
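The change above inserts a bare `*` into the signature, making `assume_a` and `lower` keyword-only. A toy signature (`inv_like`, not the real `inv`) shows the effect:

```python
# Everything after the bare `*` must be passed by keyword, so legacy
# positional call sites fail loudly with a TypeError instead of silently
# binding arguments to the wrong parameters.
def inv_like(a, overwrite_a=False, check_finite=True, *, assume_a=None, lower=False):
    return assume_a, lower

print(inv_like([[2.0]], assume_a='pos', lower=True))  # ('pos', True)
try:
    inv_like([[2.0]], False, True, 'pos')  # positional assume_a
except TypeError as exc:
    print('rejected:', type(exc).__name__)  # rejected: TypeError
```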


@@ -207,10 +207,10 @@ def idd_estrank(cnp.ndarray[cnp.float64_t, mode="c", ndim=2] a: NDArray, eps: fl
rta = rta[rng.permutation(m), :]
# idd_subselect pick randomly n2-many rows
subselect = rng.choice(m, n2, replace=False)
rta = rta[subselect, :]
# idd_subselect pick randomly n2-many rows if there are more than one row
if m > 1:
subselect = rng.choice(m, n2, replace=False)
rta = rta[subselect, :]
# Perform rfft on each column. Note that the first and the last
# element of the result is real valued (n2 is power of 2).
#
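The `m > 1` guard matters because sampling without replacement needs a population at least as large as the sample. A sketch with NumPy's `Generator` (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1234)

# Sampling n2 rows without replacement from m candidates works when the
# population is large enough ...
subselect = rng.choice(8, 4, replace=False)
assert len(set(subselect)) == 4

# ... but raises ValueError when it is not, e.g. a single-row input,
# which is why the fix skips subselection in that case.
try:
    rng.choice(1, 4, replace=False)
except ValueError as exc:
    print(type(exc).__name__)  # ValueError
```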


@@ -563,6 +563,7 @@ def reconstruct_matrix_from_id(B, idx, proj):
:class:`numpy.ndarray`
Reconstructed matrix.
"""
B = np.atleast_2d(_C_contiguous_copy(B))
if _is_real(B):
return _backend.idd_reconid(B, idx, proj)
else:


@@ -69,6 +69,7 @@ py3.extension_module('_fblas',
link_args: version_link_args,
dependencies: [lapack_lp64_dep, fortranobject_dep],
install: true,
link_language: 'fortran',
subdir: 'scipy/linalg'
)


@@ -609,3 +609,12 @@ class TestBatch:
message = "Batch support for sparse arrays is not available."
with pytest.raises(NotImplementedError, match=message):
linalg.clarkson_woodruff_transform(A, sketch_size=3, rng=rng)
@pytest.mark.parametrize('f, args', [
(linalg.toeplitz, (np.ones((0, 4)),)),
(linalg.eig, (np.ones((3, 0, 5, 5)),)),
])
def test_zero_size_batch(self, f, args):
message = "does not support zero-size batches."
with pytest.raises(ValueError, match=message):
f(*args)


@@ -20,6 +20,7 @@ time invariant systems.
# Merged discrete systems and added dlti
import warnings
from types import GenericAlias
# np.linalg.qr fails on some tests with LinAlgError: zgeqrf returns -7
# use scipy's qr until this is solved
@@ -44,6 +45,9 @@ __all__ = ['lti', 'dlti', 'TransferFunction', 'ZerosPolesGain', 'StateSpace',
class LinearTimeInvariant:
# generic type compatibility with scipy-stubs
__class_getitem__ = classmethod(GenericAlias)
def __new__(cls, *system, **kwargs):
"""Create a new object, don't allow direct instances."""
if cls is LinearTimeInvariant:


@@ -9,7 +9,7 @@ from ._arraytools import axis_slice
def savgol_coeffs(window_length, polyorder, deriv=0, delta=1.0, pos=None,
use="conv", xp=None, device=None):
use="conv", *, xp=None, device=None):
"""Compute the coefficients for a 1-D Savitzky-Golay FIR filter.
Parameters


@@ -1,7 +1,8 @@
import warnings
from types import GenericAlias
import numpy as np
import pytest
from pytest import raises as assert_raises
from scipy._lib._array_api import (
assert_almost_equal, xp_assert_equal, xp_assert_close, make_xp_test_case
@@ -1236,3 +1237,9 @@ class Test_freqresp:
expected = 1 / (s + 1)**4
assert_almost_equal(H.real, expected.real)
assert_almost_equal(H.imag, expected.imag)
@pytest.mark.parametrize(
"cls", [StateSpace, TransferFunction, ZerosPolesGain, lti, dlti]
)
def test_subscriptable_generic_types(cls):
assert isinstance(cls[np.float64], GenericAlias)
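The `__class_getitem__ = classmethod(GenericAlias)` one-liner exercised by this test is the stdlib idiom for making a class subscriptable for typing purposes. A minimal standalone version (`Parametrized` is an illustrative class):

```python
from types import GenericAlias

class Parametrized:
    # Delegating to types.GenericAlias means `Parametrized[X]` returns a
    # generic alias instead of raising TypeError, which is what static
    # type checkers (and scipy-stubs) expect of subscriptable classes.
    __class_getitem__ = classmethod(GenericAlias)

alias = Parametrized[float]
print(isinstance(alias, GenericAlias))   # True
print(alias.__origin__ is Parametrized)  # True
print(alias.__args__)                    # (<class 'float'>,)
```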


@@ -1877,5 +1877,5 @@ class coo_matrix(spmatrix, _coo_base):
def __getitem__(self, key):
raise TypeError("'coo_matrix' object is not subscriptable")
def __setitem__(self, key):
def __setitem__(self, key, x):
raise TypeError("'coo_matrix' object does not support item assignment")
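The fix above corrects the method arity: for `obj[key] = value`, Python invokes `__setitem__(self, key, value)`, so the old two-parameter override would fail with an arity error rather than the intended message. Minimal illustration with a stand-in class:

```python
class NoAssign:
    # Correct three-parameter form: the interpreter passes both the key
    # and the assigned value, so the override runs and raises the
    # intended message (mirroring the coo_matrix fix).
    def __setitem__(self, key, x):
        raise TypeError("'NoAssign' object does not support item assignment")

obj = NoAssign()
try:
    obj[0] = 1.0
except TypeError as exc:
    print(exc)  # 'NoAssign' object does not support item assignment
```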


@@ -968,7 +968,7 @@ def goodness_of_fit(dist, data, *, known_params=None, fit_params=None,
>>> import numpy as np
>>> from scipy import stats
>>> rng = np.random.default_rng()
>>> rng = np.random.default_rng(1638083107694713882823079058616272161)
>>> x = stats.uniform.rvs(size=75, random_state=rng)
were sampled from a normal distribution. To perform a KS test, the
@@ -1028,8 +1028,7 @@ def goodness_of_fit(dist, data, *, known_params=None, fit_params=None,
calculated exactly and is typically approximated using Monte Carlo methods
as described above. This is where `goodness_of_fit` excels.
>>> res = stats.goodness_of_fit(stats.norm, x, statistic='ks',
... rng=rng)
>>> res = stats.goodness_of_fit(stats.norm, x, statistic='ks', rng=rng)
>>> res.statistic, res.pvalue
(0.1119257570456813, 0.0196)
@@ -1043,25 +1042,21 @@ def goodness_of_fit(dist, data, *, known_params=None, fit_params=None,
statistic - resulting in a higher test power - can be used now that we can
approximate the null distribution
computationally. The Anderson-Darling statistic [1]_ tends to be more
sensitive, and critical values of the this statistic have been tabulated
sensitive, and critical values of this statistic have been tabulated
for various significance levels and sample sizes using Monte Carlo methods.
>>> res = stats.anderson(x, 'norm')
>>> res = stats.anderson(x, 'norm', method='interpolate')
>>> print(res.statistic)
1.2139573337497467
>>> print(res.critical_values)
[0.555 0.625 0.744 0.864 1.024]
>>> print(res.significance_level)
[15. 10. 5. 2.5 1. ]
>>> print(res.pvalue)
0.01
Here, the observed value of the statistic exceeds the critical value
corresponding with a 1% significance level. This tells us that the p-value
of the observed data is less than 1%, but what is it? We could interpolate
from these (already-interpolated) values, but `goodness_of_fit` can
of the observed data is 1% or less, but what is it? `goodness_of_fit` can
estimate it directly.
>>> res = stats.goodness_of_fit(stats.norm, x, statistic='ad',
... rng=rng)
>>> res = stats.goodness_of_fit(stats.norm, x, statistic='ad', rng=rng)
>>> res.statistic, res.pvalue
(1.2139573337497467, 0.0034)


@@ -12,6 +12,7 @@ from numpy import (isscalar, log, around, zeros,
from scipy import optimize, special, interpolate, stats
from scipy._lib._bunch import _make_tuple_bunch
from scipy._lib._util import _rename_parameter, _contains_nan, _get_nan
from scipy._lib.deprecation import _NoValue
import scipy._lib.array_api_extra as xpx
from scipy._lib._array_api import (
@@ -2192,8 +2193,19 @@ AndersonResult = _make_tuple_bunch('AndersonResult',
'significance_level'], ['fit_result'])
_anderson_warning_message = (
"""As of SciPy 1.17, users must choose a p-value calculation method by providing the
`method` parameter. `method='interpolate'` interpolates the p-value from pre-calculated
tables; `method` may also be an instance of `MonteCarloMethod` to approximate the
p-value via Monte Carlo simulation. When `method` is specified, the result object will
include a `pvalue` attribute and not attributes `critical_values`, `significance_level`,
or `fit_result`. Beginning in 1.19.0, these other attributes will no longer be
available, and a p-value will always be computed according to one of the available
`method` options.""".replace('\n', ' '))
@xp_capabilities(np_only=True)
def anderson(x, dist='norm'):
def anderson(x, dist='norm', *, method=None):
"""Anderson-Darling test for data coming from a particular distribution.
The Anderson-Darling test tests the null hypothesis that a sample is
@@ -2211,11 +2223,35 @@ def anderson(x, dist='norm'):
The type of distribution to test against. The default is 'norm'.
The names 'extreme1', 'gumbel_l' and 'gumbel' are synonyms for the
same distribution.
method : str or instance of `MonteCarloMethod`
Defines the method used to compute the p-value.
If `method` is ``"interpolate"``, the p-value is interpolated from
pre-calculated tables.
If `method` is an instance of `MonteCarloMethod`, the p-value is computed using
`scipy.stats.monte_carlo_test` with the provided configuration options and other
appropriate settings.
.. versionadded:: 1.17.0
If `method` is not specified, `anderson` will emit a ``FutureWarning``
specifying that the user must opt into a p-value calculation method.
When `method` is specified, the object returned will include a ``pvalue``
attribute, but no ``critical_values``, ``significance_level``, or
``fit_result`` attributes. Beginning in 1.19.0, these other attributes will
no longer be available, and a p-value will always be computed according to
one of the available `method` options.
Returns
-------
result : AndersonResult
An object with the following attributes:
If `method` is provided, this is an object with the following attributes:
statistic : float
The Anderson-Darling test statistic.
pvalue : float
The p-value corresponding with the test statistic, calculated according to
the specified `method`.
If `method` is unspecified, this is an object with the following attributes:
statistic : float
The Anderson-Darling test statistic.
@@ -2230,13 +2266,21 @@ def anderson(x, dist='norm'):
An object containing the results of fitting the distribution to
the data.
.. deprecated:: 1.17.0
The tuple-unpacking behavior of the return object and attributes
``critical_values``, ``significance_level``, and ``fit_result`` are
deprecated. Beginning in SciPy 1.19.0, these features will no longer be
available, and the object returned will have attributes ``statistic`` and
``pvalue``.
See Also
--------
kstest : The Kolmogorov-Smirnov test for goodness-of-fit.
Notes
-----
Critical values provided are for the following significance levels:
Critical values provided when `method` is unspecified are for the following
significance levels:
normal/exponential
15%, 10%, 5%, 2.5%, 1%
@@ -2295,18 +2339,15 @@ def anderson(x, dist='norm'):
>>> import numpy as np
>>> from scipy.stats import anderson
>>> rng = np.random.default_rng()
>>> rng = np.random.default_rng(9781234521)
>>> data = rng.random(size=35)
>>> res = anderson(data)
>>> res = anderson(data, dist='norm', method='interpolate')
>>> res.statistic
0.8398018749744764
>>> res.critical_values
array([0.548, 0.617, 0.735, 0.853, 1.011])
>>> res.significance_level
array([15. , 10. , 5. , 2.5, 1. ])
np.float64(0.9887620209957291)
>>> res.pvalue
np.float64(0.012111200538380142)
The value of the statistic (barely) exceeds the critical value associated
with a significance level of 2.5%, so the null hypothesis may be rejected
The p-value is approximately 0.012, so the null hypothesis may be rejected
at a significance level of 2.5%, but not at a significance level of 1%.
""" # numpy/numpydoc#87 # noqa: E501
@@ -2399,7 +2440,72 @@ def anderson(x, dist='norm'):
fit_result = FitResult(getattr(distributions, dist), y,
discrete=False, res=res)
return AndersonResult(A2, critical, sig, fit_result=fit_result)
if method is None:
warnings.warn(_anderson_warning_message, FutureWarning, stacklevel=2)
return AndersonResult(A2, critical, sig, fit_result=fit_result)
if method == 'interpolate':
sig = 1 - sig if dist == 'weibull_min' else sig / 100
pvalue = np.interp(A2, critical, sig)
elif isinstance(method, stats.MonteCarloMethod):
pvalue = _anderson_simulate_pvalue(x, dist, method)
else:
message = ("`method` must be either 'interpolate' or "
"an instance of `MonteCarloMethod`.")
raise ValueError(message)
return SignificanceResult(statistic=A2, pvalue=pvalue)
def _anderson_simulate_pvalue(x, dist, method):
message = ("The `___` attribute of a `MonteCarloMethod` object passed as the "
"`method` parameter of `scipy.stats.anderson` is ignored.")
method = method._asdict()
if method.pop('rvs', False):
warnings.warn(message.replace('___', 'rvs'), UserWarning, stacklevel=3)
if method.pop('batch', False):
warnings.warn(message.replace('___', 'batch'), UserWarning, stacklevel=3)
method['n_mc_samples'] = method.pop('n_resamples')
kwargs = {'known_params': {'loc': 0}} if dist == 'expon' else {}
dist = getattr(stats, dist)
res = stats.goodness_of_fit(dist, x, statistic='ad', **kwargs, **method)
return res.pvalue
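The `'interpolate'` branch boils down to `np.interp` over the tabulated (critical value, significance level) pairs. A sketch using the `'norm'` table quoted earlier in this diff (values are illustrative and depend on sample size):

```python
import numpy as np

# Tabulated critical values (increasing) and significance levels for the
# normal case, converted from percent to probabilities as in the code.
critical = np.array([0.555, 0.625, 0.744, 0.864, 1.024])
sig = np.array([15.0, 10.0, 5.0, 2.5, 1.0]) / 100

# A statistic between two table entries yields a linearly interpolated
# p-value; here 0.80 falls between the 5% and 2.5% critical values.
print(np.interp(0.80, critical, sig))

# np.interp clamps outside the table, which is exactly the saturation
# behavior: tiny statistics return the largest tabulated p-value and
# huge statistics the smallest.
print(np.interp(0.10, critical, sig))  # 0.15
print(np.interp(5.00, critical, sig))  # 0.01
```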
def _anderson_ksamp_continuous(samples, Z, Zstar, k, n, N):
"""Compute A2akN equation 3 of Scholz & Stephens.
Parameters
----------
samples : sequence of 1-D array_like
Array of sample arrays.
Z : array_like
Sorted array of all observations.
Zstar : array_like
Sorted array of unique observations. Unused.
k : int
Number of samples.
n : array_like
Number of observations in each sample.
N : int
Total number of observations.
Returns
-------
A2KN : float
The A2kN statistic of Scholz and Stephens (1987).
"""
A2kN = 0.
j = np.arange(1, N)
for i in arange(0, k):
s = np.sort(samples[i])
Mij = s.searchsorted(Z[:-1], side='right')
inner = (N*Mij - j*n[i])**2 / (j * (N - j))
A2kN += inner.sum() / n[i]
return A2kN / N
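The core of the helper above is the vectorized count `Mij` — how many observations of sample i are less than or equal to each of the first N-1 pooled order statistics — computed in one shot with `searchsorted`. A standalone check on small hand-verified data:

```python
import numpy as np

# Sorted sample i and the pooled sorted observations Z.
s = np.sort(np.array([0.2, 0.5, 0.9]))
Z = np.array([0.1, 0.2, 0.5, 0.7, 0.9, 1.0])

# Mij[j] = #{ x in s : x <= Z[j] } for j = 1..N-1; side='right' makes
# ties count as "<=" rather than "<".
Mij = s.searchsorted(Z[:-1], side='right')
print(Mij)  # [0 1 2 2 3]
```

Equation 3 of Scholz & Stephens then accumulates `(N*Mij - j*n_i)**2 / (j*(N - j))` over j for each sample, as in the loop above.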
def _anderson_ksamp_midrank(samples, Z, Zstar, k, n, N):
@@ -2488,7 +2594,7 @@ Anderson_ksampResult = _make_tuple_bunch(
@xp_capabilities(np_only=True)
def anderson_ksamp(samples, midrank=True, *, method=None):
def anderson_ksamp(samples, midrank=_NoValue, *, variant=_NoValue, method=None):
"""The Anderson-Darling test for k-samples.
The k-sample Anderson-Darling test is a modification of the
@@ -2502,10 +2608,20 @@ def anderson_ksamp(samples, midrank=True, *, method=None):
samples : sequence of 1-D array_like
Array of sample data in arrays.
midrank : bool, optional
Type of Anderson-Darling test which is computed. Default
Variant of Anderson-Darling test which is computed. Default
(True) is the midrank test applicable to continuous and
discrete populations. If False, the right side empirical
distribution is used.
.. deprecated:: 1.17.0
Use parameter `variant` instead.
variant : {'midrank', 'right', 'continuous'}
Variant of Anderson-Darling test to be computed. ``'midrank'`` is applicable
to both continuous and discrete populations. ``'right'`` and ``'continuous'``
perform alternative versions of the test for discrete and continuous
populations, respectively.
When `variant` is specified, the return object will not be unpackable as a
tuple, and only attributes ``statistic`` and ``pvalue`` will be present.
method : PermutationMethod, optional
Defines the method used to compute the p-value. If `method` is an
instance of `PermutationMethod`, the p-value is computed using
@@ -2523,6 +2639,10 @@ def anderson_ksamp(samples, midrank=True, *, method=None):
critical_values : array
The critical values for significance levels 25%, 10%, 5%, 2.5%, 1%,
0.5%, 0.1%.
.. deprecated:: 1.17.0
Present only when `variant` is unspecified.
pvalue : float
The approximate p-value of the test. If `method` is not
provided, the value is floored / capped at 0.1% / 25%.
@@ -2545,11 +2665,12 @@ def anderson_ksamp(samples, midrank=True, *, method=None):
distributions, in which ties between samples may occur. The
default of this routine is to compute the version based on the
midrank empirical distribution function. This test is applicable
to continuous and discrete data. If midrank is set to False, the
to continuous and discrete data. If `variant` is set to ``'right'``, the
right side empirical distribution is used for a test for discrete
data. According to [1]_, the two discrete test statistics differ
only slightly if a few collisions due to round-off errors occur in
the test not adjusted for ties between samples.
data; if `variant` is ``'continuous'``, the same test statistic and p-value are
computed for data with no ties, but with less computation. According to [1]_,
the two discrete test statistics differ only slightly if a few collisions due
to round-off errors occur in the test not adjusted for ties between samples.
The critical values corresponding to the significance levels from 0.01
to 0.25 are taken from [1]_. p-values are floored / capped
@@ -2569,41 +2690,33 @@ def anderson_ksamp(samples, midrank=True, *, method=None):
--------
>>> import numpy as np
>>> from scipy import stats
>>> rng = np.random.default_rng()
>>> res = stats.anderson_ksamp([rng.normal(size=50),
... rng.normal(loc=0.5, size=30)])
>>> rng = np.random.default_rng(44925884305279435)
>>> res = stats.anderson_ksamp([rng.normal(size=50), rng.normal(loc=0.5, size=30)],
... variant='midrank')
>>> res.statistic, res.pvalue
(1.974403288713695, 0.04991293614572478)
>>> res.critical_values
array([0.325, 1.226, 1.961, 2.718, 3.752, 4.592, 6.546])
(3.4444310693448936, 0.013106682406720973)
The null hypothesis that the two random samples come from the same
distribution can be rejected at the 5% level because the returned
test value is greater than the critical value for 5% (1.961) but
not at the 2.5% level. The interpolation gives an approximate
p-value of 4.99%.
p-value is less than 0.05, but not at the 1% level.
>>> samples = [rng.normal(size=50), rng.normal(size=30),
... rng.normal(size=20)]
>>> res = stats.anderson_ksamp(samples)
>>> res = stats.anderson_ksamp(samples, variant='continuous')
>>> res.statistic, res.pvalue
(-0.29103725200789504, 0.25)
>>> res.critical_values
array([ 0.44925884, 1.3052767 , 1.9434184 , 2.57696569, 3.41634856,
4.07210043, 5.56419101])
(-0.6309662273193832, 0.25)
The null hypothesis cannot be rejected for three samples from an
identical distribution. The reported p-value (25%) has been capped and
may not be very accurate (since it corresponds to the value 0.449
whereas the statistic is -0.291).
As we might expect, the null hypothesis cannot be rejected here for three samples
from an identical distribution. The reported p-value (25%) has been capped at the
maximum value for which pre-computed p-values are available.
In such cases where the p-value is capped or when sample sizes are
small, a permutation test may be more accurate.
>>> method = stats.PermutationMethod(n_resamples=9999, random_state=rng)
>>> res = stats.anderson_ksamp(samples, method=method)
>>> res = stats.anderson_ksamp(samples, variant='continuous', method=method)
>>> res.pvalue
0.5254
0.699
"""
k = len(samples)
@@ -2623,10 +2736,28 @@ def anderson_ksamp(samples, midrank=True, *, method=None):
raise ValueError("anderson_ksamp encountered sample without "
"observations")
if midrank:
if variant == _NoValue or midrank != _NoValue:
message = ("Parameter `variant` has been introduced to replace `midrank`; "
"`midrank` will be removed in SciPy 1.19.0. Specify `variant` to "
"silence this warning. Note that the returned object will no longer "
"be unpackable as a tuple, and `critical_values` will be omitted.")
warnings.warn(message, category=UserWarning, stacklevel=2)
return_critical_values = False
if variant == _NoValue:
return_critical_values = True
variant = 'midrank' if midrank else 'right'
if variant == 'midrank':
A2kN_fun = _anderson_ksamp_midrank
else:
elif variant == 'right':
A2kN_fun = _anderson_ksamp_right
elif variant == 'continuous':
A2kN_fun = _anderson_ksamp_continuous
else:
message = "`variant` must be one of 'midrank', 'right', or 'continuous'."
raise ValueError(message)
A2kN = A2kN_fun(samples, Z, Zstar, k, n, N)
def statistic(*samples):
@@ -2677,12 +2808,17 @@ def anderson_ksamp(samples, midrank=True, *, method=None):
else:
p = res.pvalue if method is not None else p
# create result object with alias for backward compatibility
res = Anderson_ksampResult(A2, critical, p)
res.significance_level = p
if return_critical_values:
# create result object with alias for backward compatibility
res = Anderson_ksampResult(A2, critical, p)
res.significance_level = p
else:
res = SignificanceResult(statistic=A2, pvalue=p)
return res
AnsariResult = namedtuple('AnsariResult', ('statistic', 'pvalue'))


@@ -916,11 +916,10 @@ class TestGoodnessOfFit:
# c that produced critical value of statistic found w/ root_scalar
x = stats.genextreme(0.051896837188595134, loc=0.5,
scale=1.5).rvs(size=1000, random_state=rng)
res = goodness_of_fit(stats.gumbel_r, x, statistic='ad',
rng=rng)
ref = stats.anderson(x, dist='gumbel_r')
assert_allclose(res.statistic, ref.critical_values[0])
assert_allclose(res.pvalue, ref.significance_level[0]/100, atol=5e-3)
res = goodness_of_fit(stats.gumbel_r, x, statistic='ad', rng=rng)
ref = stats.anderson(x, dist='gumbel_r', method='interpolate')
assert_allclose(res.statistic, ref.statistic)
assert_allclose(res.pvalue, ref.pvalue, atol=5e-3)
def test_against_filliben_norm(self):
# Test against `stats.fit` ref. [7] Section 8 "Example"


@@ -253,6 +253,7 @@ class TestShapiro:
assert_allclose(res.pvalue, 0.2313666489882, rtol=1e-6)
@pytest.mark.filterwarnings("ignore: As of SciPy 1.17: FutureWarning")
class TestAnderson:
def test_normal(self):
rs = RandomState(1234567890)
@@ -397,6 +398,89 @@ class TestAnderson:
assert_equal(_get_As_weibull(1/m), _Avals_weibull[0])
class TestAndersonMethod:
def test_warning(self):
message = "As of SciPy 1.17, users..."
with pytest.warns(FutureWarning, match=message):
stats.anderson([1, 2, 3], 'norm')
def test_method_input_validation(self):
message = "`method` must be either..."
with pytest.raises(ValueError, match=message):
stats.anderson([1, 2, 3], 'norm', method='ekki-ekki')
def test_monte_carlo_method(self):
rng = np.random.default_rng(94982389149239)
message = "The `rvs` attribute..."
with pytest.warns(UserWarning, match=message):
method = stats.MonteCarloMethod(rvs=rng.random)
stats.anderson([1, 2, 3], 'norm', method=method)
message = "The `batch` attribute..."
with pytest.warns(UserWarning, match=message):
method = stats.MonteCarloMethod(batch=10)
stats.anderson([1, 2, 3], 'norm', method=method)
method = stats.MonteCarloMethod(n_resamples=9, rng=rng)
res = stats.anderson([1, 2, 3], 'norm', method=method)
ten_p = res.pvalue * 10
# p-value will always be divisible by n_resamples + 1
assert np.round(ten_p) == ten_p
method = stats.MonteCarloMethod(rng=np.random.default_rng(23495984827))
ref = stats.anderson([1, 2, 3, 4, 5], 'norm', method=method)
method = stats.MonteCarloMethod(rng=np.random.default_rng(23495984827))
res = stats.anderson([1, 2, 3, 4, 5], 'norm', method=method)
assert res.pvalue == ref.pvalue # same random state -> same p-value
method = stats.MonteCarloMethod(rng=np.random.default_rng(23495984828))
res = stats.anderson([1, 2, 3, 4, 5], 'norm', method=method)
assert res.pvalue != ref.pvalue # different random state -> different p-value
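The divisibility assertion in the test above holds because a Monte Carlo p-value takes the form `(k + 1) / (n_resamples + 1)` for some integer count k, so with `n_resamples=9` every attainable p-value is a multiple of 0.1. A quick arithmetic check:

```python
# All p-values attainable from a Monte Carlo test with 9 resamples:
# (k + 1) / (9 + 1) for k = 0..9, i.e. multiples of 0.1.
n_resamples = 9
attainable = [(k + 1) / (n_resamples + 1) for k in range(n_resamples + 1)]
print(attainable[:3])  # [0.1, 0.2, 0.3]
print(all(round(10 * p) == 10 * p for p in attainable))  # True
```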
@pytest.mark.parametrize('dist_name, seed',
[('norm', 4202165767275),
('expon', 9094400417269),
pytest.param('logistic', 3776634590070, marks=pytest.mark.xslow),
pytest.param('gumbel_l', 7966588969335, marks=pytest.mark.xslow),
pytest.param('gumbel_r', 1886450383828, marks=pytest.mark.xslow)])
def test_method_consistency(self, dist_name, seed):
dist = getattr(stats, dist_name)
rng = np.random.default_rng(seed)
x = dist.rvs(size=50, random_state=rng)
ref = stats.anderson(x, dist_name, method='interpolate')
res = stats.anderson(x, dist_name, method=stats.MonteCarloMethod(rng=rng))
np.testing.assert_allclose(res.statistic, ref.statistic)
np.testing.assert_allclose(res.pvalue, ref.pvalue, atol=0.005)
@pytest.mark.parametrize('dist_name',
['norm', 'expon', 'logistic', 'gumbel_l', 'gumbel_r', 'weibull_min'])
def test_interpolate_saturation(self, dist_name):
dist = getattr(stats, dist_name)
rng = np.random.default_rng(4202165767276)
args = (3.5,) if dist_name == 'weibull_min' else tuple()
x = dist.rvs(*args, size=50, random_state=rng)
with pytest.warns(FutureWarning):
res = stats.anderson(x, dist_name)
pvalues = (1 - np.asarray(res.significance_level) if dist_name == 'weibull_min'
else np.asarray(res.significance_level) / 100)
pvalue_min = np.min(pvalues)
pvalue_max = np.max(pvalues)
statistic_min = np.min(res.critical_values)
statistic_max = np.max(res.critical_values)
# data drawn from distribution -> low statistic / high p-value
res = stats.anderson(x, dist_name, method='interpolate')
assert res.statistic < statistic_min
assert res.pvalue == pvalue_max
# data not from distribution -> high statistic / low p-value
res = stats.anderson(rng.random(size=50), dist_name, method='interpolate')
assert res.statistic > statistic_max
assert res.pvalue == pvalue_min
@pytest.mark.filterwarnings("ignore:Parameter `variant`...:UserWarning")
class TestAndersonKSamp:
def test_example1a(self):
# Example data from Scholz & Stephens (1987), originally
@@ -604,6 +688,63 @@ class TestAndersonKSamp:
assert_equal(res.significance_level, res.pvalue)

class TestAndersonKSampVariant:
    def test_variant_values(self):
        x = [1, 2, 2, 3, 4, 5]
        y = [1, 2, 3, 4, 4, 5, 6, 6, 6, 7]

        message = "Parameter `variant` has been introduced..."
        with pytest.warns(UserWarning, match=message):
            ref = stats.anderson_ksamp((x, y))
        assert len(ref) == 3 and hasattr(ref, 'critical_values')

        with pytest.warns(UserWarning, match=message):
            res = stats.anderson_ksamp((x, y), midrank=True)
        assert_equal(res.statistic, ref.statistic)
        assert_equal(res.pvalue, ref.pvalue)
        assert len(res) == 3 and hasattr(res, 'critical_values')

        with pytest.warns(UserWarning, match=message):
            res = stats.anderson_ksamp((x, y), midrank=False, variant='midrank')
        assert_equal(res.statistic, ref.statistic)
        assert_equal(res.pvalue, ref.pvalue)
        assert not hasattr(res, 'critical_values')

        res = stats.anderson_ksamp((x, y), variant='midrank')
        assert_equal(res.statistic, ref.statistic)
        assert_equal(res.pvalue, ref.pvalue)
        assert not hasattr(res, 'critical_values')

        with pytest.warns(UserWarning, match=message):
            ref = stats.anderson_ksamp((x, y), midrank=False)
        assert len(ref) == 3 and hasattr(ref, 'critical_values')

        with pytest.warns(UserWarning, match=message):
            res = stats.anderson_ksamp((x, y), midrank=True, variant='right')
        assert_equal(res.statistic, ref.statistic)
        assert_equal(res.pvalue, ref.pvalue)
        assert not hasattr(res, 'critical_values')

        res = stats.anderson_ksamp((x, y), variant='right')
        assert_equal(res.statistic, ref.statistic)
        assert_equal(res.pvalue, ref.pvalue)
        assert not hasattr(res, 'critical_values')

    def test_variant_input_validation(self):
        x = np.arange(10)
        message = "`variant` must be one of 'midrank', 'right', or 'continuous'."
        with pytest.raises(ValueError, match=message):
            stats.anderson_ksamp((x, x), variant='Camelot')

    @pytest.mark.parametrize('n_samples', [2, 3])
    def test_variant_continuous(self, n_samples):
        rng = np.random.default_rng(20182053007)
        samples = rng.random((n_samples, 15)) + 0.1*np.arange(n_samples)[:, np.newaxis]
        ref = stats.anderson_ksamp(samples, variant='right')
        res = stats.anderson_ksamp(samples, variant='continuous')
        assert_allclose(res.statistic, ref.statistic)
        assert_allclose(res.pvalue, ref.pvalue)
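For orientation, here is a minimal standalone sketch (not part of the diff) of what `stats.anderson_ksamp` reports in released SciPy, before the `variant` parameter tested above; the two small samples are the ones used in `test_variant_values`, and the "p-value capped/floored" `UserWarning` such small samples trigger is suppressed:

```python
import warnings

from scipy import stats

# Two small samples (the ones used in `test_variant_values` above).
x = [1, 2, 2, 3, 4, 5]
y = [1, 2, 3, 4, 4, 5, 6, 6, 6, 7]

# For samples this small the approximate p-value falls outside the
# tabulated range, so SciPy warns that it is floored/capped; suppress that.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", UserWarning)
    res = stats.anderson_ksamp((x, y))

print(res.statistic)           # normalized k-sample Anderson-Darling statistic
print(res.significance_level)  # approximate p-value, floored/capped to [0.001, 0.25]
```

The `critical_values` attribute exercised by the tests above is also present on this result; under the new `variant` API it is dropped, which is what the `hasattr(res, 'critical_values')` assertions check.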
@make_xp_test_case(stats.ansari)
class TestAnsari:


@@ -1022,22 +1022,19 @@ class TestMonteCarloHypothesisTest:
         assert_allclose(res.pvalue, expected.pvalue, atol=self.atol)
 
     @pytest.mark.slow
-    @pytest.mark.parametrize('dist_name', ('norm', 'logistic'))
-    @pytest.mark.parametrize('i', range(5))
-    def test_against_anderson(self, dist_name, i):
-        # test that monte_carlo_test can reproduce results of `anderson`. Note:
-        # `anderson` does not provide a p-value; it provides a list of
-        # significance levels and the associated critical value of the test
-        # statistic. `i` used to index this list.
+    @pytest.mark.parametrize('dist_name', ['norm', 'logistic'])
+    @pytest.mark.parametrize('target_statistic', [0.6, 0.7, 0.8])
+    def test_against_anderson(self, dist_name, target_statistic):
+        # test that monte_carlo_test can reproduce results of `anderson`.
 
-        # find the skewness for which the sample statistic matches one of the
-        # critical values provided by `stats.anderson`
+        # find the skewness for which the sample statistic is within the range of
+        # values tabulated by `anderson`
         def fun(a):
             rng = np.random.default_rng(394295467)
             x = stats.tukeylambda.rvs(a, size=100, random_state=rng)
-            expected = stats.anderson(x, dist_name)
-            return expected.statistic - expected.critical_values[i]
+            expected = stats.anderson(x, dist_name, method='interpolate')
+            return expected.statistic - target_statistic
 
         with warnings.catch_warnings():
             warnings.simplefilter("ignore", RuntimeWarning)
             sol = root(fun, x0=0)
@@ -1048,13 +1045,13 @@ class TestMonteCarloHypothesisTest:
         a = sol.x[0]
         rng = np.random.default_rng(394295467)
         x = stats.tukeylambda.rvs(a, size=100, random_state=rng)
-        expected = stats.anderson(x, dist_name)
+        expected = stats.anderson(x, dist_name, method='interpolate')
         expected_stat = expected.statistic
-        expected_p = expected.significance_level[i]/100
+        expected_p = expected.pvalue
 
         # perform equivalent Monte Carlo test and compare results
         def statistic1d(x):
-            return stats.anderson(x, dist_name).statistic
+            return stats.anderson(x, dist_name, method='interpolate').statistic
 
         dist_rvs = self.get_rvs(getattr(stats, dist_name).rvs, rng)
         with warnings.catch_warnings():
@@ -1 +1 @@
-Subproject commit dd14bf826356f3a365d4735060b82c943072d4ff
+Subproject commit 0d0a593fd31073af10062d0093144e13ae34f8f3