Compare commits


94 Commits

Author SHA1 Message Date
Evgeni Burovski 867638ec7b
Merge pull request #20776 from tylerjereddy/treddy_prep_1_13_2
REL, MAINT: prep for 1.13.2
2024-05-24 07:27:29 +03:00
Tyler Reddy e251bba12c REL, MAINT: prep for 1.13.2
* Prepare for SciPy `1.13.2`. I have no plans to do this release
as `1.14.0` looms, but the issue tracker may disagree, of course.

[skip cirrus]
2024-05-23 15:56:27 -06:00
Tyler Reddy 44e4ebaac9 REL: SciPy 1.13.1 release commit [wheel build]
* SciPy `1.13.1` release commit

[wheel build]
2024-05-22 11:51:53 -06:00
Tyler Reddy 2eb8e1b738
Merge pull request #20632 from tylerjereddy/treddy_prep_1_13_1_backports
MAINT: backports/prep for 1.13.1
2024-05-22 11:32:55 -06:00
Tyler Reddy 1a00d4856b DOC: PR 20632 updates
* Update the SciPy `1.13.1` release notes following substantial
backport activity.

[skip cirrus]
2024-05-22 09:43:06 -06:00
Daniel Schmitz 11e99bad58 BUG: stats: Fix `zipf.pmf` and `zipfian.pmf` for int32 `k` (#20702) [wheel build]
* BUG: fix zipf.pmf for integer k

* ENH: increase range for zipfian.pmf for integer k

---------

Co-authored-by: Tyler Reddy <tyler.je.reddy@gmail.com>

[wheel build]
2024-05-21 21:19:57 -06:00
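A minimal sketch of the fixed behavior described above (illustrative, not the actual patch): `zipf.pmf` should give finite, positive probabilities even when `k` is passed as a 32-bit integer array.

```python
import numpy as np
from scipy import stats

# zipf.pmf(k; a) = k**(-a) / zeta(a); integer k previously hit
# overflow-prone integer arithmetic for 32-bit dtypes.
k = np.arange(1, 6, dtype=np.int32)
p = stats.zipf.pmf(k, 2.0)

assert np.all(np.isfinite(p)) and np.all(p > 0)
# pmf(1; 2) = 1 / zeta(2) = 6 / pi**2
assert np.isclose(p[0], 6 / np.pi**2)
```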
Matt Haberland 506cbeb5c5 MAINT: stats.wilcoxon: fix bug with Ndim>1, shape[axis]>50, NaN, 'auto' (#20592) 2024-05-21 21:15:05 -06:00
Tyler Reddy c9d8613968 CI, MAINT: PR 20632 revisions
* Increase the Cirrus CI Linux aarch64
job timeout to 72 minutes; it timed
out twice in a row via manual restarts.
2024-05-21 21:01:15 -06:00
Tyler Reddy 95c50a4839 MAINT: PR 20632 revisions [wheel build]
* Need to pin `pythran` version in a CI job
because of a recent new release.

[wheel build]
2024-05-21 15:29:56 -06:00
Tyler Reddy e7abaf1baa MAINT: PR 20632 revisions [wheel build]
* Changes needed to get MacOS ARM wheels building
again, based on debug work on my fork.

[wheel build]
2024-05-21 15:17:58 -06:00
Tyler Reddy aa32933aa0 CI: PR 20632 revisions [wheel build]
* This is an empty commit to test the wheel
building in the above PR.

[wheel build]
2024-05-18 19:23:21 -06:00
Tyler Reddy 023b0fb7fb CI: PR 20632 revisions
* attempt to fix `M1 test - openblas` MacOS ARM CI job
based on some `gfortran` shims applied to a similar
job on `main`
2024-05-18 17:15:07 -06:00
Tyler Reddy 8b9a6d67c3 CI, MAINT: pin Python for MacOS conda
* We're seeing CI failures related to an undesirable
bump to Python `3.12` in this job, when the intention
was clearly to respect the Python version specified
in the GHA matrix. I didn't check too closely why
exactly it suddenly started happening, but some
packages weren't ready for `3.12` yet on this
job (`scikit-umfpack` in particular) and I don't
see too much harm in adding an extra pin to
respect the intention for the Python version.
2024-05-16 18:05:50 -06:00
thalassemia ce91a923c3 TST: Adapt to `__array__(copy=True)`
[skip circle]
2024-05-16 11:47:45 -06:00
Tyler Reddy 870e5b4a83 MAINT: import fixes
* more import cleanups after backport activity
to make the linter happy
2024-05-15 14:47:57 -06:00
Tyler Reddy f1575a83a1 Revert "MAINT: stats: update boost to improve `skewnorm.ppf` (#20643)"
This reverts commit ee6203de3b.
2024-05-15 14:38:38 -06:00
Tyler Reddy cec68d7e18 MAINT: import fixes
* some minor import fixes following a large
series of backports
2024-05-15 14:36:04 -06:00
Tyler Reddy 7db39e8f8b MAINT: release branch failures
* attempt to deal with gh-20576
2024-05-15 14:19:52 -06:00
Tyler Reddy 5c770e8177 BUG: value_indices unfixed types
* Fixes gh-19423

* Add a few more `case` statements to account for
the (i.e., Windows) data types that don't have a fixed
width, and add a regression test.

[skip circle] [skip cirrus]
2024-05-15 13:49:39 -06:00
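A short sketch of the scenario this fix covers (assuming a recent SciPy with the regression fix): `value_indices` should handle integer dtypes that lack a fixed-width alias on some platforms, such as the default integer on Windows.

```python
import numpy as np
from scipy import ndimage

# value_indices maps each distinct integer value to the index arrays
# where it occurs; the fix added `case` handling for non-fixed-width
# integer dtypes so this no longer errors on Windows.
arr = np.array([[1, 1, 2],
                [2, 3, 3]], dtype=np.int64)
val_idx = ndimage.value_indices(arr)

assert set(val_idx.keys()) == {1, 2, 3}
# Each entry is a tuple of index arrays usable for fancy indexing.
assert np.all(arr[val_idx[2]] == 2)
```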
Ralf Gommers 558272060a DEV: lint: disable UP031
Showed up as a linting error in an unrelated PR for me:
```
scipy/interpolate/_interpolate.py:918:30: UP032 [*] Use f-string instead of `format` call
scipy/interpolate/_interpolate.py:1972:30: UP032 [*] Use f-string instead of `format` call
```
This should not happen; the old code is fine, so this check needs to be
silenced or fixed separately. Similar to gh-20601.

[skip ci]
2024-05-15 13:38:10 -06:00
lucascolley 67d42523fc MAINT: lint: temporarily disable UP031
[lint only]
2024-05-15 13:28:03 -06:00
Sijo Valayakkad Manikandan 4fa2e23074 MAINT: added doc/source/.jupyterlite.doit.db to .gitignore See #20264 2024-05-15 13:25:02 -06:00
Dan Schult 198a73b6e2 BUG: sparse: Fix summing duplicates for CSR/CSC creation from (data,coords) (#20687)
* test and then fix duplicates for CSR/CSC creation from (data,coords)

* remove has_canonical_format check when summing duplicates.
2024-05-15 13:23:36 -06:00
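A minimal sketch of the fixed behavior: duplicate `(row, col)` entries must be summed, not overwritten, when building a CSR array directly from `(data, coords)`.

```python
import numpy as np
from scipy.sparse import csr_array

# Two entries land at (0, 0); canonical CSR form requires their sum.
data = np.array([1.0, 2.0, 4.0])
rows = np.array([0, 0, 1])
cols = np.array([0, 0, 1])
a = csr_array((data, (rows, cols)), shape=(2, 2))

assert a[0, 0] == 3.0  # 1.0 + 2.0 summed
assert a[1, 1] == 4.0
```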
Dan Schult d08c8d82fe BUG: sparse: Clean up 1D input handling to sparse array/matrix and add tests (#20444)
* test 1d input to sparse. Add FutureWarnings and ValueErrors

* remove matrix changes. Let them create 2D

* correct imports

* rebase on main and update to support 1D CSR input
2024-05-15 13:21:52 -06:00
Daniel Schmitz ee6203de3b MAINT: stats: update boost to improve `skewnorm.ppf` (#20643)
* MAINT: stats: update boost to improve `skewnorm.ppf`
2024-05-15 11:47:17 -06:00
Matt Haberland a536ab12bc MAINT: stats.yulesimon: fix kurtosis 2024-05-15 11:42:57 -06:00
Gregory Lee 74e9a4aeb6 fix origin handling for minimum_filter and maximum_filter with axes subset 2024-05-15 11:40:25 -06:00
Tyler Reddy b481e5de34 BUG: fix Vor/Delaunay segfaults
* Deals with the `spatial` part of gh-20623 (`Voronoi`
was also affected beyond the originally-reported `Delaunay`
segfault).

* Both classes are documented to accept arrays with two dimensions
only, so raise a `ValueError` in cases with other dimensions to
avoid the segfault.

* Other potential points of confusion here are the differences
between arrays with two dimensions and two-dimensional arrays
that represent generators with more than two dimensions via
the number of columns, but this is just how things are for
most array programming languages, of course. Also, I don't think
the docs explicitly say array-like for the generators, but this patch
only worked if I placed it after the coercion to array type.

[skip circle] [skip ci] [ci skip]
2024-05-05 11:48:58 -06:00
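For reference, a sketch of the documented usage that the fix enforces: both classes take a 2-D array of shape `(npoints, ndim)`, and other input ranks now raise `ValueError` instead of segfaulting.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

# The documented input shape: (npoints, ndim).
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tri = Delaunay(points)
vor = Voronoi(points)

assert tri.points.shape == (4, 2)
assert vor.points.shape == (4, 2)
```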
Edgar Andrés Margffoy Tuay f2d48738c3 BUG: prevent QHull message stream being closed twice (#20611) 2024-05-05 11:48:12 -06:00
Tyler Reddy de2a454824 DOC: update 1.13.1 relnotes
* Update the SciPy `1.13.1` release notes following
a series of backports.

[skip cirrus]
2024-05-02 16:04:17 -06:00
Tyler Reddy adaa430740 BUG: spherical_in old patch
* Apply "old version" of the patch provided
at gh-20527
2024-05-02 15:53:20 -06:00
h-vetinari a9afa6f50a BUG: special: handle uint arrays in factorial{,2,k} (#20607) 2024-05-02 13:52:34 -06:00
Scott Shambaugh 4b6d2e10e8 BUG: Fix error with 180 degree rotation in Rotation.align_vectors() with an infinite weight (#20573)
* Fix exact rotations at 180 deg, improve near 180 deg

Comments

* Tests for exact near 180 deg rotations

* Fix tests

* Code review updates

---------

Co-authored-by: Scott Shambaugh <scottshambaugh@users.noreply.github.com>
2024-05-02 13:48:49 -06:00
Matti Picus 57e3699f37 update openblas to 0.3.27 2024-05-02 13:47:10 -06:00
Ben Greiner c7b7322f3e BLD: Fix error message for f2py generation fail (#20530)
Co-authored-by: Ralf Gommers <ralf.gommers@gmail.com>
2024-05-02 13:45:46 -06:00
Philip Loche a2373cb6a4 BUG: fix spherical_in for n=1 and z=0 (#20527) 2024-05-02 13:43:37 -06:00
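A sketch of the fixed edge case: the modified spherical Bessel function i_1(z) behaves like z/3 near zero, so i_1(0) must be 0 rather than nan.

```python
from scipy.special import spherical_in

# After the fix, n=1 at z=0 returns the correct limit.
assert spherical_in(1, 0.0) == 0.0
# i_0(0) = 1 is the only nonzero value at the origin.
assert spherical_in(0, 0.0) == 1.0
```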
Evgeni Burovski 178a125729 BUG: linalg: fix eigh(1x1 array, driver='evd') f2py check (#20516)
f2py check was just too strict;
LAPACK docs indicate that for N=1, lwork>=1 is acceptable:

*  LWORK   (input) INTEGER
*          The dimension of the array WORK.
*          If N <= 1,               LWORK must be at least 1.
*          If JOBZ = 'N' and N > 1, LWORK must be at least 2*N+1.
*          If JOBZ = 'V' and N > 1, LWORK must be at least
*                                                1 + 6*N + 2*N**2.

https://www.netlib.org/lapack/explore-3.1.1-html/ssyevd.f.html

closes gh-20512
2024-05-02 13:38:47 -06:00
thalassemia 768f46bf82 TST: compare absolute values of U and VT in pydata-sparse SVD test 2024-05-02 13:35:38 -06:00
Evgeni Burovski 3ec0606782 TST: sparse/linalg: bump tolerance for an svds test
tol violation observed on conda-forge on win+blis; suggested in
https://github.com/scipy/scipy/pull/20474#issuecomment-2054269010
2024-05-02 13:34:10 -06:00
Evgeni Burovski 15cf993108 TST: interpolate: bump test tolerance in test_rbfinterp::test_pickleable
An exact equality was reported as flaky on conda-forge in
https://github.com/scipy/scipy/issues/20472

The tolerance violations are of the order of fp noise (< 2e-16), and I don't
think that pickling/unpickling guarantees bit-to-bit compatibility.
In principle, this may invoke recalculations and those may be subject
to fp noise. So I think it's OK to only require allclose(atol=eps)
instead of exact equality.
2024-05-02 13:34:04 -06:00
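A sketch of the relaxed expectation (illustrative of the test change, not the test itself): a pickle round-trip of an `RBFInterpolator` is only required to agree to within fp noise, not bit-for-bit.

```python
import pickle
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
points = rng.random((20, 2))
values = rng.random(20)
interp = RBFInterpolator(points, values)

# Unpickling may recompute internals, so compare with a tolerance of
# the order of fp noise rather than exact equality.
restored = pickle.loads(pickle.dumps(interp))
xi = rng.random((5, 2))
assert np.allclose(interp(xi), restored(xi), atol=1e-12)
```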
Evgeni Burovski 5006ef1710 TST: interpolate: bump test tolerance in TestInterpN
A small tolerance violation reported on conda-forge
in https://github.com/scipy/scipy/issues/20472
2024-05-02 13:33:57 -06:00
Evgeni Burovski 3c3ac73d28 TST: interpolate: bump test tolerance in TestRGI::test_derivatives
A slight tolerance violation was reported on conda-forge in
https://github.com/scipy/scipy/issues/20472
2024-05-02 13:33:50 -06:00
Evgeni Burovski 3687a970c2 BUG: linalg: fix ordering of complex conj gen eigenvalues
This was reported to cause test failures under windows + MKL in conda-forge
cf https://github.com/scipy/scipy/issues/20471
2024-05-02 10:52:59 -06:00
Jake Bowhay 85c9e29bf7 DOC: remove spurious backtic from release notes
[doc only]
2024-05-02 10:51:45 -06:00
Lucas Colley 1541f68e2f MAINT/DOC: fix syntax in 1.13.0 release notes (#20437) 2024-05-02 10:50:27 -06:00
Jake Bowhay eb8e3a9e28 DOC: add missing deprecations from 1.13.0 release notes
[doc only]
2024-05-01 20:30:06 -06:00
Atsushi Sakai e5666b6122 DOC: optimize: fix wrong optional argument name in `root(method="lm")`. (#20401) 2024-05-01 20:24:13 -06:00
Tyler Reddy ba975c47d1 BUG: sync pocketfft again
* Fixes gh-20300 by syncing `pocketfft` again, this time
to completely disable `aligned_alloc`; see https://github.com/scipy/pocketfft/pull/3
for details, but in short our more conservative shim
was not sufficient for `conda-forge`, so let's just do the same
thing NumPy did...

[skip cirrus] [skip circle]
2024-05-01 20:23:15 -06:00
Dan Schult 42de0940b7 align dok.pop() to dict.pop() case with no default (#20322) 2024-05-01 20:21:47 -06:00
Tyler Reddy dd1e525ffb
Merge pull request #20567 from rgommers/revert-f2py-cython-changes
REV: 1.13.x: revert changes to f2py and tempita handling in meson.build files
2024-04-24 13:51:21 -06:00
Ralf Gommers ca221b03ae Revert "BLD: improve use of Tempita in meson.build files"
This reverts commit 629872ac5b.
2024-04-24 10:32:54 +02:00
Ralf Gommers 185b85d292 Revert "BLD: use `generate_f2pymod` as a program in meson.build files"
This reverts commit 0d19f3c338.
2024-04-24 10:32:36 +02:00
Tyler Reddy 1cea374f6f
Merge pull request #20537 from DWesl/cygwin-compile-fixes
BLD: Move Python-including files to start of source.
2024-04-20 15:55:18 -06:00
DWesl d7d878fb97 BLD: Move Python-including files to start of source.
This should allow Cygwin to build in the next release.
2024-04-20 08:39:20 -04:00
Ralf Gommers 69f9431ae0
Merge pull request #20381 from tylerjereddy/treddy_prep_1_13_1
REL, MAINT: prep for 1.13.1
2024-04-03 07:38:13 +02:00
Tyler Reddy a6e357105d REL, MAINT: prep for 1.13.1
* Prepare for SciPy `1.13.1`.

[docs only]
2024-04-02 17:22:31 -06:00
Tyler Reddy 7dcd8c5993 REL: 1.13.0 release commit [wheel build]
* SciPy `1.13.0` "final" release commit.

[wheel build]
2024-04-02 12:31:54 -06:00
Tyler Reddy 15a69da577
Merge pull request #20375 from tylerjereddy/treddy_prep_1_13_0_final
MAINT, REL: Prepare for SciPy 1.13.0 "final" (proposing to skip RC2 for Numpy 2 series support)
2024-04-02 12:02:42 -06:00
Tyler Reddy 4cbb9e8668 DOC: PR 20375 revisions
* Fix a few typos and add a missing comment based
on code review.

[docs only]
2024-04-02 10:04:04 -06:00
Tyler Reddy 2431661d77 MAINT: PR 20375 revisions [wheel build]
* attempt wheel build since regular
CI is passing (empty commit)

[wheel build]
2024-04-01 18:01:35 -06:00
Tyler Reddy b85940a1b9 DOC, REL: set 1.13.0 final unreleased
* Prepare for SciPy `1.13.0` "final" release
instead of RC2 given urgency of supporting
NumPy 2 series.

* Remove note about SciPy `1.13.0` not being
released yet in the release notes, and update
the `1.13.0` release notes to reflect another
bump in the OpenBLAS version vendored in for PyPI
wheels.

* The usual version bumps removing `rc2` version
flags, and author/pr/issue list updates.
2024-04-01 15:11:05 -06:00
Tyler Reddy 13c30bdebc MAINT: spatial: simplify meson.build 2024-04-01 15:00:44 -06:00
Evgeni Burovski abb04b231c MAINT: spatial: use cython_lapack in spatial/_qhull.pyx 2024-04-01 15:00:38 -06:00
Atsushi Sakai 729ff0f73b BUG: interpolate: Fix wrong warning message if degree=-1 in `interpolate.RBFInterpolator` (#20364)
* BUG: interpolate: Fix wrong warning message if degree=-1 in interpolate.RBFInterpolator

* BUG: interpolate: add a comment for test
2024-04-01 14:59:39 -06:00
Tyler Reddy 8d82b0a993 MAINT, BUG: bump OpenBLAS (#20362)
* Fixes #20294.

* Pull in new OpenBLAS binaries as discussed in the above issue,
to solve a bug that affects downstream consumers like `scikit-learn`.

* Some additional shims were needed: bumping `cibuildwheel` to
improve GitHub Actions MacOS M1 runner support, and the `openblas_support`
module needed an adjustment to strip out an apparent syntax
error in the pkg-config file for openblas that we pull in from upstream.
2024-04-01 14:57:49 -06:00
Tyler Reddy 0e67a72fec MAINT: backport amos license update 2024-04-01 14:56:19 -06:00
Irwin Zaid 18e317b279 ENH: Converting amos to std::complex (#20359)
* Converting amos to std::complex

* Define M_PI if not available (for windows)

* Bump test tolerance

---------

Co-authored-by: izaid <hi@irwinzaid.com>
Co-authored-by: Albert Steppi <albert.steppi@gmail.com>
2024-04-01 14:54:00 -06:00
Shawn Hsu 6faa09972d DOC: remove outdated NumPy imports note (#20353) 2024-04-01 14:52:52 -06:00
Atsushi Sakai 4066cb45a9 BUG: optimize: Fix lint CI 2024-04-01 14:51:13 -06:00
Atsushi Sakai 108dd6cafe BUG: optimize: Fix wrong condition to check invalid optimization state in `scalar_search_wolfe2` 2024-04-01 14:51:06 -06:00
Evgeni Burovski e0bb57fc0a Update scipy/linalg/_decomp_svd.py
Co-authored-by: Pearu Peterson <pearu.peterson@gmail.com>
2024-04-01 14:50:31 -06:00
Evgeni Burovski f36468475d BUG: linalg: avoid a segfault in ?gesdd
linalg.svd with too large matrices may segfault if
max(m, n)*max(m, n) > int_max on large-memory machines (on smaller memory machines
it may fail with a MemoryError instead). The root cause is an integer overflow
in indexing 2D arrays, deep in the LAPACK code.
Thus, detect a possible error condition and bail out early.
2024-04-01 14:50:23 -06:00
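The guard can be sketched as follows (a hypothetical helper for illustration, not the actual SciPy code): LAPACK's `?gesdd` indexes its 2-D workspace with 32-bit integers, so shapes where `max(m, n)**2` overflows int32 must be rejected up front.

```python
import numpy as np

def check_gesdd_shape(m: int, n: int) -> None:
    # Hypothetical early bail-out mirroring the fix's error condition.
    int_max = np.iinfo(np.int32).max
    if max(m, n) ** 2 > int_max:
        raise ValueError(
            f"array dimensions ({m}, {n}) too large for ?gesdd's "
            "32-bit internal indexing"
        )

check_gesdd_shape(10_000, 500)         # fine: 10_000**2 < 2**31 - 1
try:
    check_gesdd_shape(50_000, 50_000)  # 2.5e9 > int32 max: rejected
except ValueError:
    pass
```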
Ralf Gommers 6b28037e06 BLD: require pybind11 >=2.12.0 for numpy 2.0 compatibility 2024-04-01 14:48:21 -06:00
Ralf Gommers 87a48d5e15 MAINT: revert pybind11 workaround for numpy 2.0 support
This was introduced in gh-20193; no longer needed with pybind11 2.12.0
2024-04-01 14:45:14 -06:00
Andrew Nelson f6516470a3 BUG: nelder-mead fix degenerate simplex 2024-04-01 14:41:41 -06:00
Evgeni Burovski d16d6a4b75 BUG: linalg: raise an error in dnrm2(..., incx<0)
See https://github.com/scipy/scipy/issues/16930 for discussion.

closes gh-16930
2024-04-01 14:40:40 -06:00
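A sketch of the wrapper's normal usage: the Euclidean norm via the BLAS `dnrm2` binding. The reference BLAS only defines `incx > 0` here, which is why a negative stride now raises instead of reading out of bounds.

```python
import numpy as np
from scipy.linalg.blas import dnrm2

x = np.array([3.0, 4.0])
assert np.isclose(dnrm2(x), 5.0)  # sqrt(3**2 + 4**2)

try:
    dnrm2(x, incx=-1)  # rejected by the fixed wrapper
except Exception:
    pass
```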
Evgeni Burovski bae492224f DOC: mention BSpline.insert_knot in the 1.13.0 release notes
[docs only]
2024-04-01 14:40:14 -06:00
Atsushi Sakai 7824f8fb69 BUG: signal: Fix scalar input issue of signal.lfilter (#20318) 2024-04-01 14:39:26 -06:00
CJ Carey f5da9a98c3 Restore random coordinate ordering to pre-1.12 2024-04-01 14:37:43 -06:00
Tyler Reddy 7247004e88 MAINT: bump pocketfft, MacOS patch
* related to gh-20300, though awaits `conda-forge` testing
on old MacOS versions for actual confirmation of fix

* pulls in changes described at https://github.com/mreineck/pocketfft/pull/11
and https://github.com/scipy/pocketfft/pull/2

* I'm honestly not sure how much we gain by being conservative and
not just turning the code path off entirely
(33ae5dc94c);
perhaps there are some performance advantages or something, but if
we have any more trouble with this we should obviously just do that
(and I suspect that's what NumPy should do for
https://github.com/numpy/numpy/issues/25940)

[skip cirrus] [skip circle]
2024-04-01 14:36:54 -06:00
Tim Butters 3aab521d91 BUG: optimize: `NewtonCG` `min` crashes with `xtol=0` (#20299)
* BUG: Optimize: NewtonCG min crashes with xtol=0

The minimize function crashes if `xtol` is set to `0`. This is
because the initial value of a loop variable is `2 * xtol`, which
is intended to ensure the loop runs at least once. However, if
`xtol = 0` then the loop never runs, a variable isn't instantiated
that is used outside of the loop, and an `UnboundLocalError` is
raised.

To fix this, the initial value of the loop variable is set to the
max float value.

* BUG: Optimize: UnboundLocalError for xtol=0 NCG

Corrected error from previous rework when switching to
`np.linalg.norm` that introduced the possibility of the unbound
variable error.

* TST: Optimize: NewtonCG Minimize when xtol=0

Regression test for gh-20214 which addressed an issue in the
Newton CG method which would raise an UnboundLocalError
if `xtol` was set to zero. It also ensures the method finds the
solution in this case.

* MAINT: Add comment to explain initial value

Comment added to explain the purpose of setting `update_l1norm` to the max float value - this is to ensure the preceding while loop executes at least once.

Co-authored-by: Christian Lorentzen <lorentzen.ch@gmail.com>

* TST: Test solution value as well as success

This is for the `xtol=0` test in the Newton CG minimize method.

* TST: Explicitly extract val from array

---------

Co-authored-by: Christian Lorentzen <lorentzen.ch@gmail.com>
2024-04-01 14:36:10 -06:00
Evgeni Burovski 2d0ee7c80a TST: linalg: fix complex sort in test_bad_geneig 2024-04-01 14:35:36 -06:00
Dan Schult c62f229c52 MAINT: sparse: add missing dict methods to DOK and tests (#20175)
* add missing dict methods to DOK and tests

* turn off some dict dunders for dok_array

* tweak tests
2024-04-01 14:35:05 -06:00
Elmar Zander 5360d89c0c BUG:optimize:Fix the inner loop of nnls solver
Co-Authored-By: Elmar Zander <elmar.zander@googlemail.com>
2024-04-01 14:34:13 -06:00
Tyler Reddy 19ad564133
Merge pull request #20374 from rgommers/1.13.0-deps
MAINT: update pybind11 and numpy build-time requirements for 1.13.0
2024-04-01 13:48:52 -06:00
Ralf Gommers 16f962160b MAINT: update pybind11 and numpy build-time requirements for 1.13.0 2024-04-01 20:24:45 +02:00
Tyler Reddy 2a8c4ca6ed
Merge pull request #20290 from tylerjereddy/treddy_version_bump_rc2
REL: set 1.13.0rc2 unreleased
2024-03-19 13:19:39 -06:00
Tyler Reddy dee2022d88 REL: 1.13.0rc2 unreleased
* set version to `1.13.0rc2` unreleased

[skip ci] [ci skip] [skip circle] [skip cirrus]
2024-03-19 13:02:05 -06:00
Tyler Reddy 4db2cc477f REL: 1.13.0rc1 rel commit [wheel build]
* SciPy `1.13.0rc1` release commit

[wheel build]
2024-03-18 20:11:13 -06:00
Tyler Reddy 38fdb9a82f
Merge pull request #20279 from tylerjereddy/treddy_1_13_rc1_prep
WIP, MAINT: 1.13.0rc1 prep [wheel build]
2024-03-18 19:48:21 -06:00
Tyler Reddy 932648e040 WIP, MAINT: 1.13.0rc1 prep [wheel build]
* A few more version-related changes, and some chance
of a screw-up here given the special circumstances of a
new major NumPy release, but we have to keep at it.

* mixed in with our own
version-related changes here are adjustments to avoid
some of the "nightly" shims for NumPy pre-release in wheel build
infrastructure itself, which perhaps also makes sense
to happen on `main` now that the NumPy 2 beta is on PyPI (depends
what you want to test though...);
I was somewhat concerned this may otherwise lead to us getting
nightly NumPy builds off of `main` instead of more official
builds from PyPI (in theory, we're supposed to support `2.1.0`
as well, but that seems like an extra degree of confusion
and not our desired build target for immediate RC1 wheels...)

* start probing the wheel builds to see what goes
wrong as we look to RC1 and building alongside NumPy 2

[wheel build]
2024-03-18 14:12:56 -06:00
Tyler Reddy 16521ed8c8
Merge pull request #20270 from rgommers/update-deps-for-1.13
BLD: update dependencies for 1.13.0 release and numpy 2.0
2024-03-18 13:20:09 -06:00
Ralf Gommers 3d62fcc0fe CI: restructure gcc-8 job to work with numpy 2.0.0b1/1.22.4
[skip cirrus] [skip circle]
2024-03-18 07:24:03 +01:00
Ralf Gommers 4aa4970dd9 BLD: update dependencies for 1.13.0 release and numpy 2.0 2024-03-10 19:17:09 +01:00
96 changed files with 1933 additions and 1068 deletions


@@ -235,21 +235,16 @@ jobs:
with:
python-version: "3.9"
- name: Setup OS
- name: Setup system dependencies
run: |
sudo apt-get -y update
sudo apt install -y g++-8 gcc-8 gfortran-8
sudo apt install -y libatlas-base-dev liblapack-dev libgmp-dev \
libmpfr-dev libmpc-dev pkg-config libsuitesparse-dev liblapack-dev
- name: Setup Python deps
- name: Setup Python build deps
run: |
pip install "numpy==1.22.4" &&
# build deps
pip install build meson-python ninja pythran pybind11 cython
# test deps
pip install gmpy2 threadpoolctl mpmath pooch pythran pybind11 pytest pytest-xdist==2.5.0 pytest-timeout hypothesis
pip install sparse
pip install build meson-python ninja "pythran<0.16.0" pybind11 cython "numpy>=2.0.0b1"
- name: Build wheel and install
run: |
@@ -260,6 +255,11 @@ jobs:
CC=gcc-8 CXX=g++-8 FC=gfortran-8 python -m build --wheel --no-isolation -Csetup-args=-Dblas=blas-atlas -Csetup-args=-Dlapack=lapack-atlas -Ccompile-args=-j2
python -m pip install dist/scipy*.whl
- name: Install test dependencies
run: |
# Downgrade numpy to oldest supported version
pip install gmpy2 threadpoolctl mpmath pooch pytest pytest-xdist==2.5.0 pytest-timeout hypothesis sparse "numpy==1.22.4"
- name: Run tests
run: |
# can't be in source directory


@@ -110,6 +110,7 @@ jobs:
shell: bash -l {0}
run: |
conda activate scipy-dev
mamba install python=${{ matrix.python-version}}
# optional test dependencies
mamba install scikit-umfpack scikit-sparse
@@ -166,13 +167,14 @@ jobs:
git submodule update --init
# for some reason gfortran is not on the path
GFORTRAN_LOC=$(brew --prefix gfortran)/bin/gfortran
GFORTRAN_LOC=$(which gfortran-13)
ln -s $GFORTRAN_LOC gfortran
export PATH=$PWD:$PATH
# make sure we have openblas
bash tools/wheels/cibw_before_build_macos.sh $PWD
export DYLD_LIBRARY_PATH=/usr/local/gfortran/lib:/opt/arm64-builds/lib
GFORTRAN_LIB=$(dirname `gfortran --print-file-name libgfortran.dylib`)
export DYLD_LIBRARY_PATH=$GFORTRAN_LIB
export PKG_CONFIG_PATH=/opt/arm64-builds/lib/pkgconfig
pip install click doit pydevtool rich_click meson cython pythran pybind11 ninja numpy


@@ -128,26 +128,18 @@ jobs:
run: |
if [[ ${{ matrix.buildplat[2] }} == 'arm64' ]]; then
# macosx_arm64
# use homebrew gfortran
LIB_PATH=$LIB_PATH:/opt/arm64-builds/lib
sudo xcode-select -s /Applications/Xcode_15.2.app
# for some reason gfortran is not on the path
GFORTRAN_LOC=$(brew --prefix gfortran)/bin/gfortran
ln -s $GFORTRAN_LOC gfortran
export PATH=$PWD:$PATH
echo "PATH=$PATH" >> "$GITHUB_ENV"
# location of the gfortran's libraries
GFORTRAN_LIB=$(dirname `gfortran --print-file-name libgfortran.dylib`)
GFORTRAN_LIB="\$(dirname \$(gfortran --print-file-name libgfortran.dylib))"
CIBW="MACOSX_DEPLOYMENT_TARGET=12.0\
MACOS_DEPLOYMENT_TARGET=12.0\
LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH\
LD_LIBRARY_PATH=$LIB_PATH:$LD_LIBRARY_PATH\
SDKROOT=$(xcrun --sdk macosx --show-sdk-path)\
_PYTHON_HOST_PLATFORM=macosx-12.0-arm64\
PIP_PRE=1\
PIP_NO_BUILD_ISOLATION=false\
PKG_CONFIG_PATH=/opt/arm64-builds/lib/pkgconfig\
PIP_EXTRA_INDEX_URL=https://pypi.anaconda.org/scientific-python-nightly-wheels/simple"
PKG_CONFIG_PATH=/opt/arm64-builds/lib/pkgconfig"
echo "CIBW_ENVIRONMENT_MACOS=$CIBW" >> "$GITHUB_ENV"
CIBW="sudo xcode-select -s /Applications/Xcode_15.2.app"
@@ -155,8 +147,8 @@ jobs:
echo "REPAIR_PATH=/opt/arm64-builds/lib" >> "$GITHUB_ENV"
CIBW="DYLD_LIBRARY_PATH=$GFORTRAN_LIB:/opt/arm64-builds/lib delocate-listdeps {wheel} &&\
DYLD_LIBRARY_PATH=$GFORTRAN_LIB:/opt/arm64-builds/lib delocate-wheel --require-archs {delocate_archs} -w {dest_dir} {wheel}"
CIBW="DYLD_LIBRARY_PATH=$GFORTRAN_LIB:$LIB_PATH delocate-listdeps {wheel} &&\
DYLD_LIBRARY_PATH=$GFORTRAN_LIB:$LIB_PATH delocate-wheel --require-archs {delocate_archs} -w {dest_dir} {wheel}"
echo "CIBW_REPAIR_WHEEL_COMMAND_MACOS=$CIBW" >> "$GITHUB_ENV"
else
@@ -171,10 +163,7 @@ jobs:
MACOS_DEPLOYMENT_TARGET=10.9\
SDKROOT=/Applications/Xcode_11.7.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk\
LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH\
_PYTHON_HOST_PLATFORM=macosx-10.9-x86_64\
PIP_PRE=1\
PIP_NO_BUILD_ISOLATION=false\
PIP_EXTRA_INDEX_URL=https://pypi.anaconda.org/scientific-python-nightly-wheels/simple"
_PYTHON_HOST_PLATFORM=macosx-10.9-x86_64"
echo "CIBW_ENVIRONMENT_MACOS=$CIBW" >> "$GITHUB_ENV"
CIBW="DYLD_LIBRARY_PATH=/usr/local/lib delocate-listdeps {wheel} &&\
@@ -183,29 +172,16 @@
fi
- name: Build wheels
uses: pypa/cibuildwheel@v2.16.5
uses: pypa/cibuildwheel@v2.17.0
env:
CIBW_BUILD: ${{ matrix.python[0] }}-${{ matrix.buildplat[1] }}*
CIBW_ARCHS: ${{ matrix.buildplat[2] }}
CIBW_ENVIRONMENT_PASS_LINUX: RUNNER_OS
CIBW_PRERELEASE_PYTHONS: True
# TODO remove the CIBW_BEFORE_BUILD_* lines once there are
# numpy2.0 wheels available on PyPI. Also remove/comment out the
# PIP_NO_BUILD_ISOLATION and PIP_EXTRA_INDEX_URL from CIBW_ENVIRONMENT
# (also for _MACOS and _WINDOWS below)
CIBW_BEFORE_BUILD_LINUX: "pip install numpy>=2.0.0.dev0 meson-python cython pythran pybind11 ninja; bash {project}/tools/wheels/cibw_before_build_linux.sh {project}"
CIBW_BEFORE_BUILD_WINDOWS: "pip install numpy>=2.0.0.dev0 meson-python cython pythran pybind11 ninja && bash {project}/tools/wheels/cibw_before_build_win.sh {project}"
CIBW_BEFORE_BUILD_MACOS: "pip install numpy>=2.0.0.dev0 meson-python cython pythran pybind11 ninja; bash {project}/tools/wheels/cibw_before_build_macos.sh {project}"
# Allow pip to find install nightly wheels if necessary
# Setting PIP_NO_BUILD_ISOLATION=false makes pip use build-isolation.
CIBW_ENVIRONMENT: "PIP_NO_BUILD_ISOLATION=false PIP_PRE=1 PIP_EXTRA_INDEX_URL=https://pypi.anaconda.org/scientific-python-nightly-wheels/simple"
CIBW_ENVIRONMENT_WINDOWS: >
PKG_CONFIG_PATH=c:/opt/64/lib/pkgconfig
PIP_PRE=1
PIP_EXTRA_INDEX_URL=https://pypi.anaconda.org/scientific-python-nightly-wheels/simple
PIP_NO_BUILD_ISOLATION=false
- uses: actions/upload-artifact@v4
with:

.gitignore vendored

@@ -94,6 +94,7 @@ build-install/
.doit.db.dir
.doit.db.db
.doit.db
doc/source/.jupyterlite.doit.db
# Logs and databases #
######################


@@ -1,6 +1,6 @@
build_and_store_wheels: &BUILD_AND_STORE_WHEELS
install_cibuildwheel_script:
- python -m pip install cibuildwheel==2.16.5
- python -m pip install cibuildwheel==2.17.0
cibuildwheel_script:
- cibuildwheel
wheels_artifacts:
@@ -19,6 +19,7 @@ cirrus_wheels_linux_aarch64_task:
platform: linux
cpu: 2
memory: 4G
timeout_in: 72m
matrix:
# build in a matrix because building and testing all four wheels in a
# single task takes longer than 60 mins (the default time limit for a
@@ -28,14 +29,7 @@ cirrus_wheels_linux_aarch64_task:
- env:
CIBW_BUILD: cp311-manylinux* cp312-manylinux*
env:
CIBW_PRERELEASE_PYTHONS: True
# TODO remove the CIBW_BEFORE_BUILD_LINUX line once there are numpy2.0 wheels available on PyPI.
# Also remove/comment out PIP_NO_BUILD_ISOLATION, PIP_EXTRA_INDEX_URL flags.
CIBW_BEFORE_BUILD_LINUX: "pip install numpy>=2.0.0.dev0 meson-python cython pythran pybind11 ninja;bash {project}/tools/wheels/cibw_before_build_linux.sh {project}"
CIBW_ENVIRONMENT: >
PIP_PRE=1
PIP_EXTRA_INDEX_URL=https://pypi.anaconda.org/scientific-python-nightly-wheels/simple
PIP_NO_BUILD_ISOLATION=false
CIBW_PRERELEASE_PYTHONS: False
build_script: |
apt install -y python3-venv python-is-python3


@@ -84,12 +84,6 @@ section that the submodule in question is public. Of course you can still use::
.. note:: SciPy is using a lazy loading mechanism which means that modules
are only loaded in memory when you first try to access them.
.. note::
The ``scipy`` namespace itself also contains functions imported from ``numpy``.
These functions still exist for backwards compatibility, but should be
imported from ``numpy`` directly.
API definition
--------------


@@ -8,6 +8,8 @@ see the `commit logs <https://github.com/scipy/scipy/commits/>`_.
.. toctree::
:maxdepth: 1
release/1.13.2-notes
release/1.13.1-notes
release/1.13.0-notes
release/1.12.0-notes
release/1.11.4-notes


@@ -2,14 +2,12 @@
SciPy 1.13.0 Release Notes
==========================
.. note:: SciPy 1.13.0 is not released yet!
.. contents::
SciPy 1.13.0 is the culmination of 3 months of hard work. This
out-of-band release aims to support NumPy ``2.0.0``, and is backwards
compatible to NumPy ``1.22.4``. The version of OpenBLAS used to build
the PyPI wheels has been increased to ``0.3.26``.
the PyPI wheels has been increased to ``0.3.26.dev``.
This release requires Python 3.9+ and NumPy 1.22.4 or greater.
@@ -51,6 +49,9 @@ New features
- The Modified Akima Interpolation has been added to
``interpolate.Akima1DInterpolator``, available via the new ``method``
argument.
- New method ``BSpline.insert_knot`` inserts a knot into a ``BSpline`` instance.
This routine is similar to the module-level `scipy.interpolate.insert`
function, and works with the BSpline objects instead of ``tck`` tuples.
- ``RegularGridInterpolator`` gained the functionality to compute derivatives
in place. For instance, ``RegularGridInterolator((x, y), values,
method="cubic")(xi, nu=(1, 1))`` evaluates the mixed second derivative,
@@ -139,13 +140,39 @@ Deprecated features
- Complex dtypes in ``PchipInterpolator`` and ``Akima1DInterpolator`` have
been deprecated and will raise an error in SciPy 1.15.0. If you are trying
to use the real components of the passed array, use ``np.real`` on ``y``.
- Non-integer values of ``n`` together with ``exact=True`` are deprecated for
`scipy.special.factorial`.
*********************
Expired Deprecations
*********************
There is an ongoing effort to follow through on long-standing deprecations.
The following previously deprecated features are affected:
- ``scipy.signal.{lsim2,impulse2,step2}`` have been removed in favour of
``scipy.signal.{lsim,impulse,step}``.
- Window functions can no longer be imported from the `scipy.signal` namespace and
instead should be accessed through either `scipy.signal.windows` or
`scipy.signal.get_window`.
- `scipy.sparse` no longer supports multi-Ellipsis indexing
- ``scipy.signal.{bspline,quadratic,cubic}`` have been removed in favour of alternatives
in `scipy.interpolate`.
- ``scipy.linalg.tri{,u,l}`` have been removed in favour of ``numpy.tri{,u,l}``.
- Non-integer arrays in `scipy.special.factorial` with ``exact=True`` now raise an
error.
- Functions from NumPy's main namespace which were exposed in SciPy's main
namespace, such as ``numpy.histogram`` exposed by ``scipy.histogram``, have
been removed from SciPy's main namespace. Please use the functions directly
from ``numpy``. This was originally performed for SciPy 1.12.0 however was missed from
the release notes so is included here for completeness.
******************************
Backwards incompatible changes
******************************
*************
Other changes
*************
@@ -175,10 +202,11 @@ Authors
* Jake Bowhay (25)
* Matthew Brett (1)
* Dietrich Brunn (7)
* Evgeni Burovski (48)
* Evgeni Burovski (65)
* Matthias Bussonnier (4)
* Tim Butters (1) +
* Cale (1) +
* CJ Carey (4)
* CJ Carey (5)
* Thomas A Caswell (1)
* Sean Cheah (44) +
* Lucas Colley (97)
@ -190,9 +218,10 @@ Authors
* fancidev (13) +
* Daniel Garcia (1) +
* Lukas Geiger (3)
* Ralf Gommers (139)
* Matt Haberland (79)
* Ralf Gommers (147)
* Matt Haberland (81)
* Tessa van der Heiden (2) +
* Shawn Hsu (1) +
* inky (3) +
* Jannes Münchmeyer (2) +
* Aditya Vidyadhar Kamath (2) +
@ -200,6 +229,7 @@ Authors
* Andrew Landau (1) +
* Eric Larson (7)
* Zhen-Qi Liu (1) +
* Christian Lorentzen (2)
* Adam Lugowski (4)
* m-maggi (6) +
* Chethin Manage (1) +
@ -213,28 +243,29 @@ Authors
* Juan Montesinos (1) +
* Juan F. Montesinos (1) +
* Takumasa Nakamura (1)
* Andrew Nelson (26)
* Andrew Nelson (27)
* Praveer Nidamaluri (1)
* Yagiz Olmez (5) +
* Dimitri Papadopoulos Orfanos (1)
* Drew Parsons (1) +
* Tirth Patel (7)
* Pearu Peterson (1)
* Matti Picus (3)
* Rambaud Pierrick (1) +
* Ilhan Polat (30)
* Quentin Barthélemy (1)
* Tyler Reddy (81)
* Tyler Reddy (117)
* Pamphile Roy (10)
* Atsushi Sakai (4)
* Atsushi Sakai (8)
* Daniel Schmitz (10)
* Dan Schult (16)
* Dan Schult (17)
* Eli Schwartz (4)
* Stefanie Senger (1) +
* Scott Shambaugh (2)
* Kevin Sheppard (2)
* sidsrinivasan (4) +
* Samuel St-Jean (1)
* Albert Steppi (30)
* Albert Steppi (31)
* Adam J. Stewart (4)
* Kai Striega (3)
* Ruikang Sun (1) +
@ -248,10 +279,11 @@ Authors
* Ben Wallace (1) +
* Xuefeng Xu (3)
* Xiao Yuan (5)
* Irwin Zaid (6)
* Irwin Zaid (8)
* Elmar Zander (1) +
* Mathias Zechmeister (1) +
A total of 91 people contributed to this release.
A total of 96 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully complete.
@ -266,7 +298,9 @@ Issues closed for 1.13.0
* `#9950 <https://github.com/scipy/scipy/issues/9950>`__: "++" initialization in kmeans2 fails for univariate data
* `#10317 <https://github.com/scipy/scipy/issues/10317>`__: scipy.stats.nbinom.interval returns wrong result for p=1
* `#10569 <https://github.com/scipy/scipy/issues/10569>`__: API: \`s\` argument different in scipy.fft and numpy.fft
* `#11359 <https://github.com/scipy/scipy/issues/11359>`__: lfilter error when input b is 0-dim
* `#11577 <https://github.com/scipy/scipy/issues/11577>`__: generalized eigenvalues are sometimes wrong (on some hardware)
* `#14001 <https://github.com/scipy/scipy/issues/14001>`__: Pycharm scipy SVD returning error code without message
* `#14176 <https://github.com/scipy/scipy/issues/14176>`__: Add option for terminating solver after n events
* `#14220 <https://github.com/scipy/scipy/issues/14220>`__: Documentation for dctn/idctn s-parameter is confusing
* `#14450 <https://github.com/scipy/scipy/issues/14450>`__: Passing a numpy array as sampling frequency to signal.iirfilter...
@ -275,7 +309,11 @@ Issues closed for 1.13.0
* `#15108 <https://github.com/scipy/scipy/issues/15108>`__: BUG: Seg. fault in scipy.sparse.linalg tests in PROPACK
* `#16098 <https://github.com/scipy/scipy/issues/16098>`__: BLD:1.8.0: SciPy is not LTO ready
* `#16792 <https://github.com/scipy/scipy/issues/16792>`__: BUG: Manually vectorizing scipy.linalg.expm fails in version...
* `#16930 <https://github.com/scipy/scipy/issues/16930>`__: BUG: scipy.linalg.blas.dnrm2 may return error result when incx...
* `#17004 <https://github.com/scipy/scipy/issues/17004>`__: Test failures for \`Test_SVDS_PROPACK.test_small_sigma2\` test...
* `#17125 <https://github.com/scipy/scipy/issues/17125>`__: BUG: osx-64 scipy 1.9.1 test_bad_geneig numerical error
* `#17172 <https://github.com/scipy/scipy/issues/17172>`__: BUG: scipy.linalg.expm, coshm, sinhm and tanhm fail for read-only...
* `#17362 <https://github.com/scipy/scipy/issues/17362>`__: Add support for Flexiblas
* `#17436 <https://github.com/scipy/scipy/issues/17436>`__: BUG: linalg.cholesky: segmentation fault with large matrix
* `#17530 <https://github.com/scipy/scipy/issues/17530>`__: Unnecessary approximation in \`scipy.stats.wilcoxon(x, y)\`
* `#17681 <https://github.com/scipy/scipy/issues/17681>`__: BUG: special: \`pbvv_seq\` is broken.
@ -286,6 +324,8 @@ Issues closed for 1.13.0
* `#18423 <https://github.com/scipy/scipy/issues/18423>`__: ENH: Adding the SDMN Fortran routine to the python Wrapped functions.
* `#18678 <https://github.com/scipy/scipy/issues/18678>`__: BUG: scipy.special.stdtrit is not thread-safe for df.size > 500
* `#18722 <https://github.com/scipy/scipy/issues/18722>`__: DOC: in optimize.quadratic_assignment 2opt method, partial_match...
* `#18767 <https://github.com/scipy/scipy/issues/18767>`__: Too-strict version restrictions on NumPy break distribution builds
* `#18773 <https://github.com/scipy/scipy/issues/18773>`__: BUG: Update oldest-supported-numpy metadata
* `#18902 <https://github.com/scipy/scipy/issues/18902>`__: DOC: make default bounds in scipy.optimize.linprog more obvious
* `#19088 <https://github.com/scipy/scipy/issues/19088>`__: \`pull-request-labeler\` misbehaving and therefore disabled again
* `#19181 <https://github.com/scipy/scipy/issues/19181>`__: TST: improve array API test skip decorators
@ -312,6 +352,7 @@ Issues closed for 1.13.0
* `#19774 <https://github.com/scipy/scipy/issues/19774>`__: DOC: Detail what "concatenate" means in the context of \`spatial.transform.Rotation.concatenate\`
* `#19799 <https://github.com/scipy/scipy/issues/19799>`__: DOC: array types: update array validation guidance
* `#19813 <https://github.com/scipy/scipy/issues/19813>`__: BUG: typo in specfun.f?
* `#19819 <https://github.com/scipy/scipy/issues/19819>`__: BUG: In RBFInterpolator, wrong warning message if degree=-1
* `#19831 <https://github.com/scipy/scipy/issues/19831>`__: Test failures with OpenBLAS 0.3.26
* `#19835 <https://github.com/scipy/scipy/issues/19835>`__: DOC: \`fft\` missing from list of subpackages
* `#19836 <https://github.com/scipy/scipy/issues/19836>`__: DOC: remove incorrect sentence about subpackage imports
@ -328,12 +369,14 @@ Issues closed for 1.13.0
* `#19951 <https://github.com/scipy/scipy/issues/19951>`__: BUG: boolean masking broken for sparse array classes
* `#19963 <https://github.com/scipy/scipy/issues/19963>`__: DOC: scipy.optimize with large differences in parameter scales
* `#19974 <https://github.com/scipy/scipy/issues/19974>`__: DOC/REL: retroactively add missing expired deprecations to 1.12.0...
* `#19991 <https://github.com/scipy/scipy/issues/19991>`__: BUG: Scipy Optimize with Nelder-Mead method has issues when specifying...
* `#19993 <https://github.com/scipy/scipy/issues/19993>`__: BUG: F_INT type conflict with f2py translation of INTEGER type...
* `#19998 <https://github.com/scipy/scipy/issues/19998>`__: DOC: Boundary conditions in splrep
* `#20001 <https://github.com/scipy/scipy/issues/20001>`__: BUG: scipy.stats.loglaplace may return negative moments
* `#20009 <https://github.com/scipy/scipy/issues/20009>`__: BUG: ShortTimeFFT fails with complex input
* `#20012 <https://github.com/scipy/scipy/issues/20012>`__: MAINT: Use NumPy sliding_window_view instead of as_strided in...
* `#20014 <https://github.com/scipy/scipy/issues/20014>`__: TST: signal: TestCorrelateReal failing on Meson 3.12 job
* `#20027 <https://github.com/scipy/scipy/issues/20027>`__: BUG: \`sparse.random\` returns transposed array in 1.12
* `#20031 <https://github.com/scipy/scipy/issues/20031>`__: TST: prefer \`pytest.warns\` over \`np.testing.assert_warns\`
* `#20034 <https://github.com/scipy/scipy/issues/20034>`__: TST: linalg: test_decomp_cossin.py::test_cossin_separate[float64]...
* `#20036 <https://github.com/scipy/scipy/issues/20036>`__: MAINT: implement scipy.stats.powerlaw._munp
@ -350,15 +393,24 @@ Issues closed for 1.13.0
* `#20129 <https://github.com/scipy/scipy/issues/20129>`__: BUG: regression: eval_chebyt gives wrong results for complex...
* `#20131 <https://github.com/scipy/scipy/issues/20131>`__: DOC: linalg: Unclear description for the output \`P\` of \`qr\`.
* `#20142 <https://github.com/scipy/scipy/issues/20142>`__: Typo in the doc of the Kstwobign distribution
* `#20156 <https://github.com/scipy/scipy/issues/20156>`__: BUG: sparse.dok_matrix throws KeyError for valid pop(key) since...
* `#20157 <https://github.com/scipy/scipy/issues/20157>`__: MAINT, TST: test_svds_parameter_tol failures
* `#20161 <https://github.com/scipy/scipy/issues/20161>`__: \`dev.py test\` fails to accept both \`--argument\` and \`--...
* `#20170 <https://github.com/scipy/scipy/issues/20170>`__: Test failures due to \`asarray(..., copy=False)\` semantics change...
* `#20180 <https://github.com/scipy/scipy/issues/20180>`__: deprecation warnings for Node.js 16 on GHA wheel build jobs
* `#20182 <https://github.com/scipy/scipy/issues/20182>`__: BUG: \`csr_row_index\` and \`csr_column_index\` error for mixed...
* `#20188 <https://github.com/scipy/scipy/issues/20188>`__: BUG: Raising scipy.spatial.transform.Rotation to power of 0 adds...
* `#20214 <https://github.com/scipy/scipy/issues/20214>`__: BUG: minimize(method="newton-cg") crashes with UnboundLocalError...
* `#20220 <https://github.com/scipy/scipy/issues/20220>`__: new problem on Cirrus with Homebrew Python in macOS arm64 jobs
* `#20225 <https://github.com/scipy/scipy/issues/20225>`__: CI/MAINT: \`choco\` error for invalid credentials
* `#20230 <https://github.com/scipy/scipy/issues/20230>`__: CI, DOC, TST: failure related to scipy/stats/_distn_infrastructure.py...
* `#20268 <https://github.com/scipy/scipy/issues/20268>`__: MAINT: failing prerelease deps job - "numpy.broadcast size changed"
* `#20291 <https://github.com/scipy/scipy/issues/20291>`__: BUG: Macro collision (\`complex\`) with Windows SDK in amos code
* `#20294 <https://github.com/scipy/scipy/issues/20294>`__: BUG: Hang on Windows in scikit-learn with 1.13rc1 and 1.14.dev...
* `#20300 <https://github.com/scipy/scipy/issues/20300>`__: BUG: SciPy 1.13.0rc1 not buildable on old macOS due to pocketfft...
* `#20302 <https://github.com/scipy/scipy/issues/20302>`__: BUG: scipy.optimize.nnls fails with exception
* `#20340 <https://github.com/scipy/scipy/issues/20340>`__: BUG: line_search_wolfe2 fails to converge due to a wrong condition
* `#20344 <https://github.com/scipy/scipy/issues/20344>`__: MAINT/DOC: remove outdated note about NumPy imports
************************
Pull requests for 1.13.0
@ -439,6 +491,7 @@ Pull requests for 1.13.0
* `#19773 <https://github.com/scipy/scipy/pull/19773>`__: DOC: stats: The docstring for scipy.stats.crystalball needs an...
* `#19775 <https://github.com/scipy/scipy/pull/19775>`__: DOC: Docstring and examples for Rotation.concatenate
* `#19776 <https://github.com/scipy/scipy/pull/19776>`__: ENH: stats.rankdata: vectorize calculation
* `#19777 <https://github.com/scipy/scipy/pull/19777>`__: ENH: add \`BSpline.insert_knot\` method
* `#19778 <https://github.com/scipy/scipy/pull/19778>`__: DOC, MAINT: fix make dist in rel process
* `#19780 <https://github.com/scipy/scipy/pull/19780>`__: MAINT: scipy.stats: replace \`_normtest_finish\`/\`_ttest_finish\`/etc......
* `#19781 <https://github.com/scipy/scipy/pull/19781>`__: CI, MAINT: switch to stable python release
@ -482,6 +535,7 @@ Pull requests for 1.13.0
* `#19871 <https://github.com/scipy/scipy/pull/19871>`__: MAINT: make isinstance check in \`stats._distn_infrastructure\`...
* `#19874 <https://github.com/scipy/scipy/pull/19874>`__: rankdata: ensure correct shape for empty inputs
* `#19876 <https://github.com/scipy/scipy/pull/19876>`__: MAINT: stats: Add tests to ensure consistency between \`wasserstein_distance\` and different backends of \`wasserstein_distance_nd\`
* `#19880 <https://github.com/scipy/scipy/pull/19880>`__: DOC: prepare 1.13.0 release notes
* `#19882 <https://github.com/scipy/scipy/pull/19882>`__: MAINT: vendor \`pocketfft\` as git submodule
* `#19885 <https://github.com/scipy/scipy/pull/19885>`__: MAINT: fix some small array API support issues
* `#19886 <https://github.com/scipy/scipy/pull/19886>`__: TST: stats: fix a few issues with non-reproducible seeding
@ -588,9 +642,11 @@ Pull requests for 1.13.0
* `#20153 <https://github.com/scipy/scipy/pull/20153>`__: BLD: interpolate: _interpnd_info does not need installing
* `#20154 <https://github.com/scipy/scipy/pull/20154>`__: ENH: sparse: implement fromkeys for _dok_base
* `#20163 <https://github.com/scipy/scipy/pull/20163>`__: MAINT: dev.py: allow --args after --
* `#20168 <https://github.com/scipy/scipy/pull/20168>`__: BUG: optimize: Fix constraint condition in inner loop of nnls
* `#20172 <https://github.com/scipy/scipy/pull/20172>`__: MAINT: (additional) array copy semantics shims
* `#20173 <https://github.com/scipy/scipy/pull/20173>`__: TST:special:Add partial tests for nrdtrimn and nrdtrisd
* `#20174 <https://github.com/scipy/scipy/pull/20174>`__: DOC: interpolate: \`splrep\` default boundary condition
* `#20175 <https://github.com/scipy/scipy/pull/20175>`__: MAINT: sparse: add missing dict methods to DOK and tests
* `#20176 <https://github.com/scipy/scipy/pull/20176>`__: MAINT: vulture/ruff fixups
* `#20181 <https://github.com/scipy/scipy/pull/20181>`__: MAINT: Avoid \`descr->elsize\` and use intp for it.
* `#20183 <https://github.com/scipy/scipy/pull/20183>`__: BUG: Fix fancy indexing on compressed sparse arrays with mixed...
@ -599,6 +655,7 @@ Pull requests for 1.13.0
* `#20191 <https://github.com/scipy/scipy/pull/20191>`__: BUG: Fix shape of single Rotation raised to the 0 or 1 power
* `#20193 <https://github.com/scipy/scipy/pull/20193>`__: MAINT: Bump \`npy2_compat.h\` and add temporary pybind11 workaround
* `#20195 <https://github.com/scipy/scipy/pull/20195>`__: ENH: linalg: allow readonly arrays in expm et al
* `#20197 <https://github.com/scipy/scipy/pull/20197>`__: TST: linalg: fix complex sort in test_bad_geneig
* `#20198 <https://github.com/scipy/scipy/pull/20198>`__: BLD: update minimum Cython version to 3.0.8
* `#20203 <https://github.com/scipy/scipy/pull/20203>`__: TST: linalg: undo xfail TestEig::test_singular
* `#20204 <https://github.com/scipy/scipy/pull/20204>`__: TST: linalg: add a regression test for a gen eig problem
@ -632,3 +689,25 @@ Pull requests for 1.13.0
* `#20259 <https://github.com/scipy/scipy/pull/20259>`__: BUG: linalg: fix \`expm\` for large arrays
* `#20261 <https://github.com/scipy/scipy/pull/20261>`__: BUG:linalg:Remove the 2x2 branch in expm
* `#20263 <https://github.com/scipy/scipy/pull/20263>`__: DOC/REL: add missing expired deprecations to 1.12.0 notes
* `#20266 <https://github.com/scipy/scipy/pull/20266>`__: MAINT: stats.wilcoxon: pass \`PermutationMethod\` options to...
* `#20270 <https://github.com/scipy/scipy/pull/20270>`__: BLD: update dependencies for 1.13.0 release and numpy 2.0
* `#20279 <https://github.com/scipy/scipy/pull/20279>`__: MAINT: 1.13.0rc1 prep [wheel build]
* `#20290 <https://github.com/scipy/scipy/pull/20290>`__: REL: set 1.13.0rc2 unreleased
* `#20299 <https://github.com/scipy/scipy/pull/20299>`__: BUG: Optimize: NewtonCG min crashes with xtol=0
* `#20313 <https://github.com/scipy/scipy/pull/20313>`__: MAINT: bump pocketfft, MacOS patch
* `#20314 <https://github.com/scipy/scipy/pull/20314>`__: BUG: sparse: Restore random coordinate ordering to pre-1.12 results
* `#20318 <https://github.com/scipy/scipy/pull/20318>`__: BUG: signal: Fix scalar input issue of signal.lfilter
* `#20327 <https://github.com/scipy/scipy/pull/20327>`__: DOC: mention BSpline.insert_knot in the 1.13.0 release notes
* `#20333 <https://github.com/scipy/scipy/pull/20333>`__: BUG: sync pocketfft again
* `#20337 <https://github.com/scipy/scipy/pull/20337>`__: MAINT: spatial: use cython_lapack in spatial/_qhull.pyx
* `#20341 <https://github.com/scipy/scipy/pull/20341>`__: BUG: linalg: raise an error in dnrm2(..., incx<0)
* `#20345 <https://github.com/scipy/scipy/pull/20345>`__: BUG: nelder-mead fix degenerate simplex
* `#20347 <https://github.com/scipy/scipy/pull/20347>`__: BLD: require pybind11 >=2.12.0 for numpy 2.0 compatibility
* `#20349 <https://github.com/scipy/scipy/pull/20349>`__: Do not segfault in svd(a) with VT.size > INT_MAX
* `#20350 <https://github.com/scipy/scipy/pull/20350>`__: BUG: optimize: Fix wrong condition to check invalid optimization...
* `#20353 <https://github.com/scipy/scipy/pull/20353>`__: DOC: remove outdated NumPy imports note
* `#20359 <https://github.com/scipy/scipy/pull/20359>`__: ENH: Converting amos to std::complex
* `#20361 <https://github.com/scipy/scipy/pull/20361>`__: ENH: Rest of amos translation
* `#20362 <https://github.com/scipy/scipy/pull/20362>`__: MAINT, BUG: bump OpenBLAS
* `#20364 <https://github.com/scipy/scipy/pull/20364>`__: BUG: interpolate: Fix wrong warning message if degree=-1 in \`interpolate.RBFInterpolator\`
* `#20374 <https://github.com/scipy/scipy/pull/20374>`__: MAINT: update pybind11 and numpy build-time requirements for...

@ -0,0 +1,102 @@
==========================
SciPy 1.13.1 Release Notes
==========================
.. contents::
SciPy 1.13.1 is a bug-fix release with no new features
compared to 1.13.0. The version of OpenBLAS shipped with
the PyPI binaries has been increased to 0.3.27.
Authors
=======
* Name (commits)
* h-vetinari (1)
* Jake Bowhay (2)
* Evgeni Burovski (6)
* Sean Cheah (2)
* Lucas Colley (2)
* DWesl (2)
* Ralf Gommers (7)
* Ben Greiner (1) +
* Matt Haberland (2)
* Gregory R. Lee (1)
* Philip Loche (1) +
* Sijo Valayakkad Manikandan (1) +
* Matti Picus (1)
* Tyler Reddy (62)
* Atsushi Sakai (1)
* Daniel Schmitz (2)
* Dan Schult (3)
* Scott Shambaugh (2)
* Edgar Andrés Margffoy Tuay (1)
A total of 19 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully complete.
Issues closed for 1.13.1
------------------------
* `#19423 <https://github.com/scipy/scipy/issues/19423>`__: BUG: \`scipy.ndimage.value_indices\` returns empty dict for \`intc\`/\`uintc\` dtype on Windows
* `#20264 <https://github.com/scipy/scipy/issues/20264>`__: DOC, MAINT: .jupyterlite.doit.db shows up untracked
* `#20392 <https://github.com/scipy/scipy/issues/20392>`__: DOC: optimize.root(method='lm') option
* `#20415 <https://github.com/scipy/scipy/issues/20415>`__: BUG: csr_array can no longer be initialized with 1D array
* `#20471 <https://github.com/scipy/scipy/issues/20471>`__: BUG: \`TestEig.test_falker\` fails on windows + MKL as well as...
* `#20491 <https://github.com/scipy/scipy/issues/20491>`__: BUG: Cannot find \`OpenBLAS\` on Cygwin
* `#20506 <https://github.com/scipy/scipy/issues/20506>`__: BUG: special.spherical_in: derivative at \`z=0, n=1\` incorrect
* `#20512 <https://github.com/scipy/scipy/issues/20512>`__: BUG: \`eigh\` fails for size 1 array with driver=evd
* `#20531 <https://github.com/scipy/scipy/issues/20531>`__: BUG: warning from \`optimize.least_squares\` for astropy with...
* `#20555 <https://github.com/scipy/scipy/issues/20555>`__: BUG: spatial: error in \`Rotation.align_vectors()\` with an infinite...
* `#20576 <https://github.com/scipy/scipy/issues/20576>`__: MAINT, TST: two types of failures observed on maintenance/1.13.x...
* `#20580 <https://github.com/scipy/scipy/issues/20580>`__: BUG: scipy.special.factorial2 doesn't handle \`uint32\` dtypes
* `#20591 <https://github.com/scipy/scipy/issues/20591>`__: BUG: scipy.stats.wilcoxon in 1.13 fails on 2D array with nan...
* `#20623 <https://github.com/scipy/scipy/issues/20623>`__: BUG: scipy.spatial.Delaunay, scipy.interpolate.LinearNDInterpolator...
* `#20648 <https://github.com/scipy/scipy/issues/20648>`__: BUG: stats.yulesimon: incorrect kurtosis values
* `#20652 <https://github.com/scipy/scipy/issues/20652>`__: BUG: incorrect origin tuple handling in ndimage \`minimum_filter\`...
* `#20660 <https://github.com/scipy/scipy/issues/20660>`__: BUG: spatial: \`Rotation.align_vectors()\` incorrect for anti-parallel...
* `#20670 <https://github.com/scipy/scipy/issues/20670>`__: BUG: sparse matrix creation in 1.13 with indices not summing...
* `#20692 <https://github.com/scipy/scipy/issues/20692>`__: BUG: stats.zipf: incorrect pmf values
* `#20714 <https://github.com/scipy/scipy/issues/20714>`__: CI: scipy installation failing in umfpack tests
Pull requests for 1.13.1
------------------------
* `#20280 <https://github.com/scipy/scipy/pull/20280>`__: MAINT: added doc/source/.jupyterlite.doit.db to .gitignore See...
* `#20322 <https://github.com/scipy/scipy/pull/20322>`__: BUG: sparse: align dok_array.pop() to dict.pop() for case with...
* `#20333 <https://github.com/scipy/scipy/pull/20333>`__: BUG: sync pocketfft again
* `#20381 <https://github.com/scipy/scipy/pull/20381>`__: REL, MAINT: prep for 1.13.1
* `#20401 <https://github.com/scipy/scipy/pull/20401>`__: DOC: optimize: fix wrong optional argument name in \`root(method="lm")\`.
* `#20435 <https://github.com/scipy/scipy/pull/20435>`__: DOC: add missing deprecations from 1.13.0 release notes
* `#20437 <https://github.com/scipy/scipy/pull/20437>`__: MAINT/DOC: fix syntax in 1.13.0 release notes
* `#20444 <https://github.com/scipy/scipy/pull/20444>`__: BUG: sparse: Clean up 1D input handling to sparse array/matrix...
* `#20449 <https://github.com/scipy/scipy/pull/20449>`__: DOC: remove spurious backtick from release notes
* `#20473 <https://github.com/scipy/scipy/pull/20473>`__: BUG: linalg: fix ordering of complex conj gen eigenvalues
* `#20474 <https://github.com/scipy/scipy/pull/20474>`__: TST: tolerance bumps for the conda-forge builds
* `#20484 <https://github.com/scipy/scipy/pull/20484>`__: TST: compare absolute values of U and VT in pydata-sparse SVD...
* `#20505 <https://github.com/scipy/scipy/pull/20505>`__: BUG: Include Python.h before system headers.
* `#20516 <https://github.com/scipy/scipy/pull/20516>`__: BUG: linalg: fix eigh(1x1 array, driver='evd') f2py check
* `#20527 <https://github.com/scipy/scipy/pull/20527>`__: BUG: \`spherical_in\` for \`n=0\` and \`z=0\`
* `#20530 <https://github.com/scipy/scipy/pull/20530>`__: BLD: Fix error message for f2py generation fail
* `#20533 <https://github.com/scipy/scipy/pull/20533>`__: TST: Adapt to \`__array__(copy=True)\`
* `#20537 <https://github.com/scipy/scipy/pull/20537>`__: BLD: Move Python-including files to start of source.
* `#20567 <https://github.com/scipy/scipy/pull/20567>`__: REV: 1.13.x: revert changes to f2py and tempita handling in meson.build...
* `#20569 <https://github.com/scipy/scipy/pull/20569>`__: update openblas to 0.3.27
* `#20573 <https://github.com/scipy/scipy/pull/20573>`__: BUG: Fix error with 180 degree rotation in Rotation.align_vectors()...
* `#20586 <https://github.com/scipy/scipy/pull/20586>`__: MAINT: optimize.linprog: fix bug when integrality is a list of...
* `#20592 <https://github.com/scipy/scipy/pull/20592>`__: MAINT: stats.wilcoxon: fix failure with multidimensional \`x\`...
* `#20601 <https://github.com/scipy/scipy/pull/20601>`__: MAINT: lint: temporarily disable UP031
* `#20607 <https://github.com/scipy/scipy/pull/20607>`__: BUG: handle uint arrays in factorial{,2,k}
* `#20611 <https://github.com/scipy/scipy/pull/20611>`__: BUG: prevent QHull message stream being closed twice
* `#20629 <https://github.com/scipy/scipy/pull/20629>`__: MAINT/DEV: lint: disable UP032
* `#20633 <https://github.com/scipy/scipy/pull/20633>`__: BUG: fix Vor/Delaunay segfaults
* `#20644 <https://github.com/scipy/scipy/pull/20644>`__: BUG: ndimage.value_indices: deal with unfixed types
* `#20653 <https://github.com/scipy/scipy/pull/20653>`__: BUG: ndimage: fix origin handling for \`{minimum, maximum}_filter\`
* `#20654 <https://github.com/scipy/scipy/pull/20654>`__: MAINT: stats.yulesimon: fix kurtosis
* `#20687 <https://github.com/scipy/scipy/pull/20687>`__: BUG: sparse: Fix summing duplicates for CSR/CSC creation from...
* `#20702 <https://github.com/scipy/scipy/pull/20702>`__: BUG: stats: Fix \`zipf.pmf\` and \`zipfian.pmf\` for int32 \`k\`
* `#20727 <https://github.com/scipy/scipy/pull/20727>`__: CI: pin Python for MacOS conda

@ -0,0 +1,23 @@
==========================
SciPy 1.13.2 Release Notes
==========================
.. contents::
SciPy 1.13.2 is a bug-fix release with no new features
compared to 1.13.1.
Authors
=======
* Name (commits)
Issues closed for 1.13.2
------------------------
Pull requests for 1.13.2
------------------------

@ -4,7 +4,7 @@ project(
# Note that the git commit hash cannot be added dynamically here (it is added
# in the dynamically generated and installed `scipy/version.py` though - see
# tools/version_utils.py
version: '1.13.0.dev0',
version: '1.13.2.dev0',
license: 'BSD-3',
meson_version: '>= 1.1.0',
default_options: [
@ -130,8 +130,8 @@ if not cc.links('', name: '-Wl,--version-script', args: ['-shared', version_link
version_link_args = []
endif
generate_f2pymod = find_program('tools/generate_f2pymod.py')
tempita = find_program('scipy/_build_utils/tempita.py')
generate_f2pymod = files('tools/generate_f2pymod.py')
tempita = files('scipy/_build_utils/tempita.py')
use_pythran = get_option('use-pythran')
if use_pythran

@ -6,40 +6,41 @@
# the most recent version on PyPI:
#
# "pybind11>=2.4.3,<2.5.0",
#
# Upper bounds in release branches must have notes on why they are added.
# Distro packages can ignore upper bounds added only to prevent future
# breakage; if we add pins or bounds because of known problems then they need
# them too.
[build-system]
build-backend = 'mesonpy'
requires = [
"meson-python>=0.15.0",
"Cython>=3.0.8", # when updating version, also update check in meson.build
"pybind11>=2.10.4",
"pythran>=0.14.0",
# The upper bound on meson-python is pre-emptive only (looser on purpose,
# since chance of breakage in 0.16/0.17 is low)
"meson-python>=0.15.0,<0.18.0",
# The upper bound on Cython is pre-emptive only
"Cython>=3.0.8,<3.1.0", # when updating version, also update check in meson.build
# The upper bound on pybind11 is pre-emptive only
"pybind11>=2.12.0,<2.13.0",
# The upper bound on Pythran is pre-emptive only
"pythran>=0.14.0,<0.16.0",
# When numpy 2.0.0rc1 comes out, we should update this to build against 2.0,
# and then runtime depend on the range 1.22.X to <2.3. No need to switch to
# 1.25.2 in the meantime (1.25.x is the first version which exports older C
# API versions by default).
# default numpy requirements
"numpy==1.22.4; python_version<='3.10' and platform_python_implementation != 'PyPy'",
"numpy==1.23.2; python_version=='3.11' and platform_python_implementation != 'PyPy'",
"numpy>=1.26.0,<1.27; python_version=='3.12'",
# PyPy requirements; 1.25.0 was the first version to have pypy-3.9 wheels,
# and 1.25.0 also changed the C API target to 1.19.x, so no longer a need
# for an exact pin.
"numpy>=1.25.0; python_version>='3.9' and platform_python_implementation=='PyPy'",
# For Python versions which aren't yet officially supported, we specify an
# unpinned NumPy which allows source distributions to be used and allows
# wheels to be used as soon as they become available.
# Python 3.13 has known issues that are only fixed in numpy 2.0.0.dev0
"numpy>=2.0.0.dev0; python_version>='3.13'",
# Comments on numpy build requirement range:
#
# 1. >=2.0.x is the numpy requirement for wheel builds for distribution
# on PyPI - building against 2.x yields wheels that are also
# ABI-compatible with numpy 1.x at runtime.
# 2. Note that building against numpy 1.x works fine too - users and
# redistributors can do this by installing the numpy version they like
# and disabling build isolation.
# 3. The <2.3 upper bound is for matching the numpy deprecation policy,
# it should not be loosened.
"numpy>=2.0.0rc1,<2.3",
]
[project]
name = "scipy"
version = "1.13.0.dev0"
version = "1.13.2.dev0"
# TODO: add `license-files` once PEP 639 is accepted (see meson-python#88)
# at that point, no longer include them in `py3.install_sources()`
license = {file = "LICENSE.txt"}
@ -51,11 +52,7 @@ maintainers = [
# release branches, see:
# https://scipy.github.io/devdocs/dev/core-dev/index.html#version-ranges-for-numpy-and-other-dependencies
requires-python = ">=3.9"
dependencies = [
# TODO: update to "pin-compatible" once possible, see
# https://github.com/mesonbuild/meson-python/issues/29
"numpy>=1.22.4",
]
dependencies = ["numpy>=1.22.4,<2.3"]
readme = "README.rst"
classifiers = [
"Development Status :: 5 - Production/Stable",

@ -2,7 +2,7 @@
# Do not edit this file; modify `pyproject.toml` instead and run `python tools/generate_requirements.py`.
meson-python>=0.15.0
Cython>=3.0.8
pybind11>=2.10.4
pybind11>=2.12.0
pythran>=0.14.0
ninja
numpy>=1.22.4

@ -67,7 +67,7 @@ del _distributor_init
from scipy._lib import _pep440
# In maintenance branch, change to np_maxversion N+3 if numpy is at N
np_minversion = '1.22.4'
np_maxversion = '9.9.99'
np_maxversion = '2.3.0'
if (_pep440.parse(__numpy_version__) < _pep440.Version(np_minversion) or
_pep440.parse(__numpy_version__) >= _pep440.Version(np_maxversion)):
import warnings

@ -1,4 +1,3 @@
#!/usr/bin/env python3
import sys
import os
import argparse
@ -31,8 +30,6 @@ def main():
help="Path to the input file")
parser.add_argument("-o", "--outdir", type=str,
help="Path to the output directory")
parser.add_argument("--outfile", type=str,
help="Path to the output file (use either this or outdir)")
parser.add_argument("-i", "--ignore", type=str,
help="An ignored input - may be useful to add a "
"dependency between custom targets")
@ -41,15 +38,12 @@ def main():
if not args.infile.endswith('.in'):
raise ValueError(f"Unexpected extension: {args.infile}")
if not (args.outdir or args.outfile):
raise ValueError("Missing `--outdir` or `--outfile` argument to tempita.py")
if not args.outdir:
raise ValueError("Missing `--outdir` argument to tempita.py")
if args.outfile:
outfile = args.outfile
else:
outdir_abs = os.path.join(os.getcwd(), args.outdir)
outfile = os.path.join(outdir_abs,
os.path.splitext(os.path.split(args.infile)[1])[0])
outdir_abs = os.path.join(os.getcwd(), args.outdir)
outfile = os.path.join(outdir_abs,
os.path.splitext(os.path.split(args.infile)[1])[0])
process_tempita(args.infile, outfile)

@ -1 +1 @@
Subproject commit c1347f538e9af666c7c1f3351ccb8eebaecbe1ea
Subproject commit 9367142748fcc9696a1c9e5a99b76ed9897c9daa

@ -26,6 +26,7 @@ class ParseCall(ast.NodeVisitor):
def visit_Name(self, node):
self.ls.append(node.id)
class FindFuncs(ast.NodeVisitor):
def __init__(self, filename):
super().__init__()
@ -96,6 +97,7 @@ def test_warning_calls_filters(warning_calls):
os.path.join('datasets', '__init__.py'),
os.path.join('optimize', '_optimize.py'),
os.path.join('optimize', '_constraints.py'),
os.path.join('optimize', '_nnls.py'),
os.path.join('signal', '_ltisys.py'),
os.path.join('sparse', '__init__.py'), # np.matrix pending-deprecation
os.path.join('stats', '_discrete_distns.py'), # gh-14901

@ -126,7 +126,7 @@ py3.extension_module('_odepack',
vode_module = custom_target('vode_module',
output: ['_vode-f2pywrappers.f', '_vodemodule.c'],
input: 'vode.pyf',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_vode',
@ -143,7 +143,7 @@ py3.extension_module('_vode',
lsoda_module = custom_target('lsoda_module',
output: ['_lsoda-f2pywrappers.f', '_lsodamodule.c'],
input: 'lsoda.pyf',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_lsoda',
@ -160,7 +160,7 @@ py3.extension_module('_lsoda',
_dop_module = custom_target('_dop_module',
output: ['_dop-f2pywrappers.f', '_dopmodule.c'],
input: 'dop.pyf',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_dop',
@ -184,7 +184,7 @@ py3.extension_module('_test_multivariate',
_test_odeint_banded_module = custom_target('_test_odeint_banded_module',
output: ['_test_odeint_bandedmodule.c', '_test_odeint_banded-f2pywrappers.f'],
input: 'tests/banded5x5.pyf',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_test_odeint_banded',

View File

@ -342,10 +342,11 @@ class RBFInterpolator:
degree = int(degree)
if degree < -1:
raise ValueError("`degree` must be at least -1.")
elif degree < min_degree:
elif -1 < degree < min_degree:
warnings.warn(
f"`degree` should not be below {min_degree} when `kernel` "
f"is '{kernel}'. The interpolant may not be uniquely "
f"`degree` should not be below {min_degree}, except -1, "
f"when `kernel` is '{kernel}'. "
f"The interpolant may not be uniquely "
f"solvable, and the smoothing parameter may have an "
f"unintuitive effect.",
UserWarning, stacklevel=2
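The validation branch above can be exercised in isolation; a minimal sketch, mirroring the patched `RBFInterpolator` logic (the helper name `check_degree` is hypothetical):

```python
import warnings

def check_degree(degree, min_degree, kernel):
    # Hypothetical standalone version of the patched validation:
    # degree == -1 disables the added polynomial term entirely, so only
    # 0 <= degree < min_degree should trigger the warning.
    degree = int(degree)
    if degree < -1:
        raise ValueError("`degree` must be at least -1.")
    elif -1 < degree < min_degree:
        warnings.warn(
            f"`degree` should not be below {min_degree}, except -1, "
            f"when `kernel` is '{kernel}'. The interpolant may not "
            "be uniquely solvable.",
            UserWarning, stacklevel=2)
    return degree
```

With this guard, `degree=-1` passes silently while a degree between 0 and the kernel's minimum still warns.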

View File

@ -145,7 +145,7 @@ py3.extension_module('_fitpack',
dfitpack_module = custom_target('dfitpack_module',
output: ['dfitpack-f2pywrappers.f', 'dfitpackmodule.c'],
input: 'src/fitpack.pyf',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
# TODO: Add flags for 64 bit ints

View File

@ -2,7 +2,7 @@ import pickle
import pytest
import numpy as np
from numpy.linalg import LinAlgError
from numpy.testing import assert_allclose, assert_array_equal
from numpy.testing import assert_allclose
from scipy.stats.qmc import Halton
from scipy.spatial import cKDTree
from scipy.interpolate._rbfinterp import (
@ -369,9 +369,18 @@ class _TestRBFInterpolator:
y = np.linspace(0, 1, 5)[:, None]
d = np.zeros(5)
for kernel, deg in _NAME_TO_MIN_DEGREE.items():
match = f'`degree` should not be below {deg}'
with pytest.warns(Warning, match=match):
self.build(y, d, epsilon=1.0, kernel=kernel, degree=deg-1)
# Only test kernels whose minimum degree is not 0.
if deg >= 1:
match = f'`degree` should not be below {deg}'
with pytest.warns(Warning, match=match):
self.build(y, d, epsilon=1.0, kernel=kernel, degree=deg-1)
def test_minus_one_degree(self):
# Make sure a degree of -1 is accepted without any warning.
y = np.linspace(0, 1, 5)[:, None]
d = np.zeros(5)
for kernel, _ in _NAME_TO_MIN_DEGREE.items():
self.build(y, d, epsilon=1.0, kernel=kernel, degree=-1)
def test_rank_error(self):
# An error should be raised when `kernel` is "thin_plate_spline" and
@ -406,7 +415,7 @@ class _TestRBFInterpolator:
yitp1 = interp(xitp)
yitp2 = pickle.loads(pickle.dumps(interp))(xitp)
assert_array_equal(yitp1, yitp2)
assert_allclose(yitp1, yitp2, atol=1e-16)
class TestRBFInterpolatorNeighborsNone(_TestRBFInterpolator):

View File

@ -137,7 +137,7 @@ class TestRegularGridInterpolator:
# 2nd derivatives of a linear function are zero
assert_allclose(interp(sample, nu=(0, 1, 1, 0)),
[0, 0, 0], atol=1e-12)
[0, 0, 0], atol=2e-12)
@parametrize_rgi_interp_methods
def test_complex(self, method):
@ -983,7 +983,11 @@ class TestInterpN:
v1 = interpn((x, y), values, sample, method=method)
v2 = interpn((x, y), np.asarray(values), sample, method=method)
assert_allclose(v1, v2)
if method == "quintic":
# https://github.com/scipy/scipy/issues/20472
assert_allclose(v1, v2, atol=5e-5, rtol=2e-6)
else:
assert_allclose(v1, v2)
def test_length_one_axis(self):
# gh-5890, gh-9524 : length-1 axis is legal for method='linear'.

View File

@ -2,6 +2,8 @@
// Use of this source code is governed by the BSD 2-clause license found in the LICENSE.txt file.
// SPDX-License-Identifier: BSD-2-Clause
#include "_fmm_core.hpp"
#include <fast_matrix_market/types.hpp>
#include <cstdint>
namespace fast_matrix_market {
@ -16,8 +18,6 @@ namespace fast_matrix_market {
}
#include <fast_matrix_market/fast_matrix_market.hpp>
#include "_fmm_core.hpp"
////////////////////////////////////////////////
//// Header methods
////////////////////////////////////////////////

View File

@ -1,7 +1,7 @@
_test_fortran_module = custom_target('_test_fortran_module',
output: ['_test_fortranmodule.c'],
input: '_test_fortran.pyf',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_test_fortran',

View File

@ -116,6 +116,20 @@ def svd(a, full_matrices=True, compute_uv=True, overwrite_a=False,
if lapack_driver not in ('gesdd', 'gesvd'):
message = f'lapack_driver must be "gesdd" or "gesvd", not "{lapack_driver}"'
raise ValueError(message)
if lapack_driver == 'gesdd' and compute_uv:
# XXX: revisit int32 when ILP64 lapack becomes a thing
max_mn, min_mn = (m, n) if m > n else (n, m)
if full_matrices:
if max_mn*max_mn > numpy.iinfo(numpy.int32).max:
raise ValueError(f"Indexing a matrix of size {max_mn} x {max_mn} "
"would incur integer overflow in LAPACK.")
else:
sz = max(m * min_mn, n * min_mn)
if max(m * min_mn, n * min_mn) > numpy.iinfo(numpy.int32).max:
raise ValueError(f"Indexing a matrix of {sz} elements would "
"incur an integer overflow in LAPACK.")
funcs = (lapack_driver, lapack_driver + '_lwork')
gesXd, gesXd_lwork = get_lapack_funcs(funcs, (a1,), ilp64='preferred')
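The thresholds in this guard can be checked without calling LAPACK; a minimal sketch under the same int32 assumption (the helper name `would_overflow_int32` is hypothetical):

```python
import numpy as np

def would_overflow_int32(m, n, full_matrices=True):
    # Sketch of the guard added to `svd` for the 'gesdd' driver: LAPACK's
    # 32-bit integer workspace indexing overflows once U/VT become too
    # large, so refuse the call up front instead of segfaulting.
    int32_max = np.iinfo(np.int32).max
    max_mn, min_mn = (m, n) if m > n else (n, m)
    if full_matrices:
        # full U/VT: the larger factor is max_mn x max_mn
        return max_mn * max_mn > int32_max
    # thin factors: m x min_mn and min_mn x n
    return max(m * min_mn, n * min_mn) > int32_max
```

For the gh-14001 shape (4799, 53130), `full_matrices=True` would require indexing a 53130 x 53130 `VT`, which overflows int32, while the thin factorization stays within range.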

View File

@ -12,6 +12,8 @@ class _FakeMatrix2:
self._data = data
def __array__(self, dtype=None, copy=None):
if copy:
return self._data.copy()
return self._data

View File

@ -388,7 +388,7 @@ function <prefix3>nrm2(n,x,offx,incx) result(n2)
<ftype3> dimension(*),intent(in) :: x
integer optional, intent(in),check(incx>0||incx<0) :: incx = 1
integer optional, intent(in),check(incx>0) :: incx = 1
integer optional,intent(in),depend(x) :: offx=0
check(offx>=0 && offx<len(x)) :: offx
@ -410,7 +410,7 @@ function <prefix4>nrm2(n,x,offx,incx) result(n2)
<ftype4> dimension(*),intent(in) :: x
integer optional, intent(in),check(incx>0||incx<0) :: incx = 1
integer optional, intent(in),check(incx>0) :: incx = 1
integer optional,intent(in),depend(x) :: offx=0
check(offx>=0 && offx<len(x)) :: offx

View File

@ -124,7 +124,7 @@ subroutine <prefix2>syevd(compute_v,lower,n,w,a,lda,work,lwork,iwork,liwork,info
<ftype2> dimension(n),intent(out),depend(n) :: w
integer optional,intent(in),depend(n,compute_v) :: lwork=max((compute_v?1+6*n+2*n*n:2*n+1),1)
check(lwork>=(compute_v?1+6*n+2*n*n:2*n+1)) :: lwork
check( (lwork>=((compute_v?1+6*n+2*n*n:2*n+1))) || ((n==1)&&(lwork>=1)) ) :: lwork
<ftype2> dimension(lwork),intent(hide,cache),depend(lwork) :: work
integer optional,intent(in),depend(n,compute_v) :: liwork = (compute_v?3+5*n:1)
@ -180,7 +180,7 @@ subroutine <prefix2c>heevd(compute_v,lower,n,w,a,lda,work,lwork,iwork,liwork,rwo
<ftype2> dimension(n),intent(out),depend(n) :: w
integer optional,intent(in),depend(n,compute_v) :: lwork=max((compute_v?2*n+n*n:n+1),1)
check(lwork>=(compute_v?2*n+n*n:n+1)) :: lwork
check( (lwork>=(compute_v?2*n+n*n:n+1)) || ((n==1)&&(lwork>=1)) ) :: lwork
<ftype2c> dimension(lwork),intent(hide,cache),depend(lwork) :: work
integer optional,intent(in),depend(n,compute_v) :: liwork = (compute_v?3+5*n:1)
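The relaxed f2py `check` clauses translate to a simple predicate; a sketch in Python of the `?syevd` case (the helper name is assumed for illustration):

```python
def syevd_lwork_ok(lwork, n, compute_v):
    # Sketch of the relaxed check: the usual minimum-lwork formula,
    # with an escape hatch for the n == 1 edge case where LAPACK
    # only needs a single workspace element.
    minimum = (1 + 6 * n + 2 * n * n) if compute_v else (2 * n + 1)
    return lwork >= minimum or (n == 1 and lwork >= 1)
```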

View File

@ -60,7 +60,7 @@ linalg_cython_gen = generator(cython,
fblas_module = custom_target('fblas_module',
output: ['_fblasmodule.c'],
input: 'fblas.pyf.src',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
# Note: we're linking LAPACK on purpose here. For some routines (e.g., spmv)
@ -80,7 +80,7 @@ py3.extension_module('_fblas',
flapack_module = custom_target('flapack_module',
output: ['_flapackmodule.c'],
input: 'flapack.pyf.src',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
# Note that -Wno-empty-body is Clang-specific and comes from `callstatement`s
@ -101,7 +101,7 @@ py3.extension_module('_flapack',
interpolative_module = custom_target('interpolative_module',
output: '_interpolativemodule.c',
input: 'interpolative.pyf',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
# id_dist contains a copy of FFTPACK, which has type mismatch warnings
@ -218,7 +218,7 @@ py3.extension_module('_decomp_lu_cython',
_decomp_update_pyx = custom_target('_decomp_update',
output: '_decomp_update.pyx',
input: '_decomp_update.pyx.in',
command: [tempita, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, tempita, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_decomp_update',
@ -234,7 +234,7 @@ py3.extension_module('_decomp_update',
_matfuncs_expm_pyx = custom_target('_matfuncs_expm',
output: '_matfuncs_expm.pyx',
input: '_matfuncs_expm.pyx.in',
command: [tempita, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, tempita, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_matfuncs_expm',

View File

@ -1102,3 +1102,13 @@ def test_gh_169309():
actual = scipy.linalg.blas.dnrm2(x, 5, 3, -1)
expected = math.sqrt(500)
assert_allclose(actual, expected)
def test_dnrm2_neg_incx():
# check that dnrm2(..., incx < 0) raises
# XXX: remove the test after the lowest supported BLAS implements
# negative incx (new in LAPACK 3.10)
x = np.repeat(10, 9)
incx = -1
with assert_raises(fblas.__fblas_error):
scipy.linalg.blas.dnrm2(x, 5, 3, incx)

View File

@ -59,22 +59,6 @@ COMPLEX_DTYPES = [np.complex64, np.complex128]
DTYPES = REAL_DTYPES + COMPLEX_DTYPES
def clear_fuss(ar, fuss_binary_bits=7):
"""Clears trailing `fuss_binary_bits` of mantissa of a floating number"""
x = np.asanyarray(ar)
if np.iscomplexobj(x):
return clear_fuss(x.real) + 1j * clear_fuss(x.imag)
significant_binary_bits = np.finfo(x.dtype).nmant
x_mant, x_exp = np.frexp(x)
f = 2.0**(significant_binary_bits - fuss_binary_bits)
x_mant *= f
np.rint(x_mant, out=x_mant)
x_mant /= f
return np.ldexp(x_mant, x_exp)
# XXX: This function should not be defined here, but somewhere in
# scipy.linalg namespace
def symrand(dim_or_eigv, rng):
@ -238,11 +222,18 @@ class TestEig:
assert_allclose(res[:, i], 0,
rtol=1e-13, atol=1e-13, err_msg=msg)
# try to consistently order eigenvalues, including complex conjugate pairs
w_fin = w[isfinite(w)]
wt_fin = wt[isfinite(wt)]
perm = argsort(clear_fuss(w_fin))
permt = argsort(clear_fuss(wt_fin))
assert_allclose(w[perm], wt[permt],
# prune noise in the real parts
w_fin = -1j * np.real_if_close(1j*w_fin, tol=1e-10)
wt_fin = -1j * np.real_if_close(1j*wt_fin, tol=1e-10)
perm = argsort(abs(w_fin) + w_fin.imag)
permt = argsort(abs(wt_fin) + wt_fin.imag)
assert_allclose(w_fin[perm], wt_fin[permt],
atol=1e-7, rtol=1e-7, err_msg=msg)
length = np.empty(len(vr))
@ -886,6 +877,17 @@ class TestEigh:
atol=1000*np.finfo(dtype_).eps,
rtol=0.)
@pytest.mark.parametrize('driver', ("ev", "evd", "evr", "evx"))
def test_1x1_lwork(self, driver):
w, v = eigh([[1]], driver=driver)
assert_allclose(w, array([1.]), atol=1e-15)
assert_allclose(v, array([[1.]]), atol=1e-15)
# complex case now
w, v = eigh([[1j]], driver=driver)
assert_allclose(w, array([0]), atol=1e-15)
assert_allclose(v, array([[1.]]), atol=1e-15)
@pytest.mark.parametrize('type', (1, 2, 3))
@pytest.mark.parametrize('driver', ("gv", "gvd", "gvx"))
def test_various_drivers_generalized(self, driver, type):
@ -1099,6 +1101,14 @@ class TestSVD_GESVD(TestSVD_GESDD):
lapack_driver = 'gesvd'
def test_svd_gesdd_nosegfault():
# svd(a) with {U,VT}.size > INT_MAX does not segfault
# cf https://github.com/scipy/scipy/issues/14001
df = np.ones((4799, 53130), dtype=np.float64)
with assert_raises(ValueError):
svd(df)
class TestSVDVals:
def test_empty(self):

View File

@ -85,7 +85,7 @@ npyrandom_path = _incdir_numpy_abs / '..' / '..' / 'random' / 'lib'
npymath_lib = cc.find_library('npymath', dirs: npymath_path)
npyrandom_lib = cc.find_library('npyrandom', dirs: npyrandom_path)
pybind11_dep = dependency('pybind11', version: '>=2.10.4')
pybind11_dep = dependency('pybind11', version: '>=2.12.0')
# Pythran include directory and build flags
if use_pythran
@ -275,12 +275,6 @@ if use_pythran
)
endif
# Used for templated C code (not Cython code since Cython is path-dependent)
tempita_gen = generator(tempita,
arguments : ['@INPUT@', '--outfile', '@OUTPUT@'],
output : '@BASENAME@',
)
# Check if compiler flags are supported. This is necessary to ensure that SciPy
# can be built with any supported compiler. We need so many warning flags
# because we want to be able to build with `-Werror` in CI; that ensures that

View File

@ -1266,7 +1266,7 @@ def _min_or_max_filter(input, size, footprint, structure, output, mode,
else:
output[...] = input[...]
else:
origins = _ni_support._normalize_sequence(origin, input.ndim)
origins = _ni_support._normalize_sequence(origin, num_axes)
if num_axes < input.ndim:
if footprint.ndim != num_axes:
raise RuntimeError("footprint array has incorrect shape")
@ -1274,17 +1274,23 @@ def _min_or_max_filter(input, size, footprint, structure, output, mode,
footprint,
tuple(ax for ax in range(input.ndim) if ax not in axes)
)
# set origin = 0 for any axes not being filtered
origins_temp = [0,] * input.ndim
for o, ax in zip(origins, axes):
origins_temp[ax] = o
origins = origins_temp
fshape = [ii for ii in footprint.shape if ii > 0]
if len(fshape) != input.ndim:
raise RuntimeError('footprint array has incorrect shape.')
for origin, lenf in zip(origins, fshape):
if (lenf // 2 + origin < 0) or (lenf // 2 + origin >= lenf):
raise ValueError('invalid origin')
raise ValueError("invalid origin")
if not footprint.flags.contiguous:
footprint = footprint.copy()
if structure is not None:
if len(structure.shape) != input.ndim:
raise RuntimeError('structure array has incorrect shape')
raise RuntimeError("structure array has incorrect shape")
if num_axes != structure.ndim:
structure = numpy.expand_dims(
structure,
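The origin handling above reduces to padding the per-axis origins out to the input's `ndim`; a minimal sketch, assuming already-normalized non-negative axes (the helper name is hypothetical):

```python
def expand_origins(origins, axes, ndim):
    # Sketch of the fix: `origin` is normalized against the number of
    # filtered axes (num_axes), then padded with zeros for every axis
    # that is not being filtered.
    origins_temp = [0] * ndim
    for o, ax in zip(origins, axes):
        origins_temp[ax] = o
    return origins_temp
```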

View File

@ -999,6 +999,11 @@ static PyObject *NI_ValueIndices(PyObject *self, PyObject *args)
case NPY_UINT32: CASE_VALUEINDICES_SET_MINMAX(npy_uint32); break;
case NPY_INT64: CASE_VALUEINDICES_SET_MINMAX(npy_int64); break;
case NPY_UINT64: CASE_VALUEINDICES_SET_MINMAX(npy_uint64); break;
default:
switch(arrType) {
case NPY_UINT: CASE_VALUEINDICES_SET_MINMAX(npy_uint); break;
case NPY_INT: CASE_VALUEINDICES_SET_MINMAX(npy_int); break;
}
}
NI_ITERATOR_NEXT(ndiIter, arrData);
}
@ -1016,6 +1021,11 @@ static PyObject *NI_ValueIndices(PyObject *self, PyObject *args)
case NPY_UINT32: CASE_VALUEINDICES_MAKEHISTOGRAM(npy_uint32); break;
case NPY_INT64: CASE_VALUEINDICES_MAKEHISTOGRAM(npy_int64); break;
case NPY_UINT64: CASE_VALUEINDICES_MAKEHISTOGRAM(npy_uint64); break;
default:
switch(arrType) {
case NPY_INT: CASE_VALUEINDICES_MAKEHISTOGRAM(npy_int); break;
case NPY_UINT: CASE_VALUEINDICES_MAKEHISTOGRAM(npy_uint); break;
}
}
}
@ -1047,6 +1057,11 @@ static PyObject *NI_ValueIndices(PyObject *self, PyObject *args)
case NPY_UINT32: CASE_VALUEINDICES_MAKE_VALUEOBJ_FROMOFFSET(npy_uint32, ii); break;
case NPY_INT64: CASE_VALUEINDICES_MAKE_VALUEOBJ_FROMOFFSET(npy_int64, ii); break;
case NPY_UINT64: CASE_VALUEINDICES_MAKE_VALUEOBJ_FROMOFFSET(npy_uint64, ii); break;
default:
switch(arrType) {
case NPY_INT: CASE_VALUEINDICES_MAKE_VALUEOBJ_FROMOFFSET(npy_int, ii); break;
case NPY_UINT: CASE_VALUEINDICES_MAKE_VALUEOBJ_FROMOFFSET(npy_uint, ii); break;
}
}
/* Create a tuple of <ndim> index arrays */
t = PyTuple_New(ndim);
@ -1093,6 +1108,11 @@ static PyObject *NI_ValueIndices(PyObject *self, PyObject *args)
case NPY_UINT32: CASE_VALUEINDICES_GET_VALUEOFFSET(npy_uint32); break;
case NPY_INT64: CASE_VALUEINDICES_GET_VALUEOFFSET(npy_int64); break;
case NPY_UINT64: CASE_VALUEINDICES_GET_VALUEOFFSET(npy_uint64); break;
default:
switch(arrType) {
case NPY_INT: CASE_VALUEINDICES_GET_VALUEOFFSET(npy_int); break;
case NPY_UINT: CASE_VALUEINDICES_GET_VALUEOFFSET(npy_uint); break;
}
}
if (ignoreValIsNone || (!valueIsIgnore)) {

View File

@ -8,6 +8,7 @@ from numpy.testing import (assert_equal, assert_allclose,
assert_array_almost_equal,
assert_array_equal, assert_almost_equal,
suppress_warnings, assert_)
import numpy as np
import pytest
from pytest import raises as assert_raises
@ -792,6 +793,32 @@ class TestNdimageFilters:
expected = filter_func(array, *args, size_3d, **kwargs)
assert_allclose(output, expected)
@pytest.mark.parametrize("filter_func, kwargs",
[(ndimage.minimum_filter, {}),
(ndimage.maximum_filter, {}),
(ndimage.median_filter, {}),
(ndimage.rank_filter, {"rank": 1}),
(ndimage.percentile_filter, {"percentile": 30})])
def test_filter_weights_subset_axes_origins(self, filter_func, kwargs):
axes = (-2, -1)
origins = (0, 1)
array = np.arange(6 * 8 * 12, dtype=np.float64).reshape(6, 8, 12)
axes = np.array(axes)
# weights with ndim matching len(axes)
footprint = np.ones((3, 5), dtype=bool)
footprint[0, 1] = 0 # make non-separable
output = filter_func(
array, footprint=footprint, axes=axes, origin=origins, **kwargs)
output0 = filter_func(
array, footprint=footprint, axes=axes, origin=0, **kwargs)
# output has origin shift on last axis relative to output0, so
# expect shifted arrays to be equal.
np.testing.assert_array_equal(output[:, :, 1:], output0[:, :, :-1])
@pytest.mark.parametrize(
'filter_func, args',
[(ndimage.gaussian_filter, (1.0,)), # args = (sigma,)

View File

@ -10,6 +10,7 @@ from numpy.testing import (
assert_equal,
suppress_warnings,
)
import pytest
from pytest import raises as assert_raises
import scipy.ndimage as ndimage
@ -1407,3 +1408,12 @@ class TestWatershedIft:
expected = [[1, 1],
[1, 1]]
assert_allclose(out, expected)
@pytest.mark.parametrize("dt", [np.intc, np.uintc])
def test_gh_19423(dt):
rng = np.random.default_rng(123)
max_val = 8
image = rng.integers(low=0, high=max_val, size=(10, 12)).astype(dtype=dt)
val_idx = ndimage.value_indices(image)
assert len(val_idx.keys()) == max_val

View File

@ -414,7 +414,7 @@ def scalar_search_wolfe2(phi, derphi, phi0=None,
return True
for i in range(maxiter):
if alpha1 == 0 or (amax is not None and alpha0 == amax):
if alpha1 == 0 or (amax is not None and alpha0 > amax):
# alpha1 == 0: This shouldn't happen. Perhaps the increment has
# slipped below machine precision?
alpha_star = None

View File

@ -1,6 +1,6 @@
import numpy as np
from scipy.linalg import solve
from scipy.linalg import solve, LinAlgWarning
import warnings
__all__ = ['nnls']
@ -112,43 +112,47 @@ def _nnls(A, b, maxiter=None, tol=None):
# Initialize vars
x = np.zeros(n, dtype=np.float64)
s = np.zeros(n, dtype=np.float64)
# Inactive constraint switches
P = np.zeros(n, dtype=bool)
# Projected residual
resid = Atb.copy().astype(np.float64) # x=0. Skip (-AtA @ x) term
w = Atb.copy().astype(np.float64) # x=0. Skip (-AtA @ x) term
# Overall iteration counter
# Outer loop is not counted, inner iter is counted across outer spins
iter = 0
while (not P.all()) and (resid[~P] > tol).any(): # B
while (not P.all()) and (w[~P] > tol).any(): # B
# Get the "most" active coeff index and move to inactive set
resid[P] = -np.inf
k = np.argmax(resid) # B.2
k = np.argmax(w * (~P)) # B.2
P[k] = True # B.3
# Iteration solution
s = np.zeros(n, dtype=np.float64)
P_ind = P.nonzero()[0]
s[P] = solve(AtA[P_ind[:, None], P_ind[None, :]], Atb[P],
assume_a='sym', check_finite=False) # B.4
s[:] = 0.
# B.4
with warnings.catch_warnings():
warnings.filterwarnings('ignore', message='Ill-conditioned matrix',
category=LinAlgWarning)
s[P] = solve(AtA[np.ix_(P, P)], Atb[P], assume_a='sym', check_finite=False)
# Inner loop
while (iter < maxiter) and (s[P].min() <= tol): # C.1
alpha_ind = ((s < tol) & P).nonzero()
alpha = (x[alpha_ind] / (x[alpha_ind] - s[alpha_ind])).min() # C.2
while (iter < maxiter) and (s[P].min() < 0): # C.1
iter += 1
inds = P * (s < 0)
alpha = (x[inds] / (x[inds] - s[inds])).min() # C.2
x *= (1 - alpha)
x += alpha*s
P[x < tol] = False
s[P] = solve(AtA[np.ix_(P, P)], Atb[P], assume_a='sym',
check_finite=False)
P[x <= tol] = False
with warnings.catch_warnings():
warnings.filterwarnings('ignore', message='Ill-conditioned matrix',
category=LinAlgWarning)
s[P] = solve(AtA[np.ix_(P, P)], Atb[P], assume_a='sym',
check_finite=False)
s[~P] = 0 # C.6
iter += 1
x[:] = s[:]
resid = Atb - AtA @ x
w[:] = Atb - AtA @ x
if iter == maxiter:
# Typically the following line should return
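A condensed, hedged sketch of the active-set (Lawson-Hanson) scheme the patched `_nnls` follows, using `np.linalg.solve` in place of `scipy.linalg.solve` and omitting the ill-conditioning warning filters:

```python
import numpy as np

def nnls_sketch(A, b, maxiter=None, tol=None):
    # Grow the passive set P greedily by the largest positive dual w,
    # and shrink it whenever the unconstrained solve on P drives a
    # coefficient negative (the inner loop). Simplified illustration,
    # not the exact scipy implementation.
    AtA, Atb = A.T @ A, A.T @ b
    n = A.shape[1]
    maxiter = maxiter or 3 * n
    tol = tol or 10 * max(A.shape) * np.spacing(1.0)
    x = np.zeros(n)
    P = np.zeros(n, dtype=bool)
    w = Atb.copy()                      # dual / projected residual at x = 0
    it = 0
    while (not P.all()) and (w[~P] > tol).any():
        k = np.argmax(w * (~P))         # most active coefficient
        P[k] = True
        s = np.zeros(n)
        s[P] = np.linalg.solve(AtA[np.ix_(P, P)], Atb[P])
        while (it < maxiter) and (s[P].min() < 0):
            it += 1
            inds = P * (s < 0)
            alpha = (x[inds] / (x[inds] - s[inds])).min()
            x = (1 - alpha) * x + alpha * s
            P[x <= tol] = False
            s = np.zeros(n)
            s[P] = np.linalg.solve(AtA[np.ix_(P, P)], Atb[P])
        x = s
        w = Atb - AtA @ x
    return x
```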

View File

@ -12,6 +12,7 @@ from scipy.linalg import norm, solve, inv, qr, svd, LinAlgError
import scipy.sparse.linalg
import scipy.sparse
from scipy.linalg import get_blas_funcs
from scipy._lib._util import copy_if_needed
from scipy._lib._util import getfullargspec_no_self as _getfullargspec
from ._linesearch import scalar_search_wolfe1, scalar_search_armijo
@ -701,7 +702,7 @@ class LowRankMatrix:
def collapse(self):
"""Collapse the low-rank matrix to a full-rank one."""
self.collapsed = np.array(self)
self.collapsed = np.array(self, copy=copy_if_needed)
self.cs = None
self.ds = None
self.alpha = None

View File

@ -797,6 +797,15 @@ def _minimize_neldermead(func, x0, args=(), callback=None,
maxfun = np.inf
if bounds is not None:
# The default simplex construction may make all entries (for a given
# parameter) greater than an upper bound if x0 is very close to the
# upper bound. If one simply clips the simplex to the bounds this could
# make the simplex entries degenerate. If that occurs reflect into the
# interior.
msk = sim > upper_bound
# reflect into the interior
sim = np.where(msk, 2*upper_bound - sim, sim)
# but make sure the reflection is no less than the lower_bound
sim = np.clip(sim, lower_bound, upper_bound)
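Isolated, the bound handling above reads as follows (the function name is assumed for illustration):

```python
import numpy as np

def reflect_into_bounds(sim, lower_bound, upper_bound):
    # Sketch of the Nelder-Mead fix: simplex vertices pushed past the
    # upper bound are reflected back into the interior, then clipped so
    # the reflection cannot fall below the lower bound.
    msk = sim > upper_bound
    sim = np.where(msk, 2 * upper_bound - sim, sim)
    return np.clip(sim, lower_bound, upper_bound)
```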
one2np1 = list(range(1, N + 1))
@ -1153,7 +1162,7 @@ def _line_search_wolfe12(f, fprime, xk, pk, gfk, old_fval, old_old_fval,
def fmin_bfgs(f, x0, fprime=None, args=(), gtol=1e-5, norm=np.inf,
epsilon=_epsilon, maxiter=None, full_output=0, disp=1,
retall=0, callback=None, xrtol=0, c1=1e-4, c2=0.9,
retall=0, callback=None, xrtol=0, c1=1e-4, c2=0.9,
hess_inv0=None):
"""
Minimize a function using the BFGS algorithm.
@ -1226,7 +1235,7 @@ def fmin_bfgs(f, x0, fprime=None, args=(), gtol=1e-5, norm=np.inf,
Optimize the function, `f`, whose gradient is given by `fprime`
using the quasi-Newton method of Broyden, Fletcher, Goldfarb,
and Shanno (BFGS).
Parameters `c1` and `c2` must satisfy ``0 < c1 < c2 < 1``.
See Also
@ -1298,7 +1307,7 @@ def fmin_bfgs(f, x0, fprime=None, args=(), gtol=1e-5, norm=np.inf,
def _minimize_bfgs(fun, x0, args=(), jac=None, callback=None,
gtol=1e-5, norm=np.inf, eps=_epsilon, maxiter=None,
disp=False, return_all=False, finite_diff_rel_step=None,
xrtol=0, c1=1e-4, c2=0.9,
xrtol=0, c1=1e-4, c2=0.9,
hess_inv0=None, **unknown_options):
"""
Minimization of scalar function of one or more variables using the
@ -2025,7 +2034,8 @@ def _minimize_newtoncg(fun, x0, args=(), jac=None, hess=None, hessp=None,
cg_maxiter = 20*len(x0)
xtol = len(x0) * avextol
update_l1norm = 2 * xtol
# Make sure we enter the while loop.
update_l1norm = np.finfo(float).max
xk = np.copy(x0)
if retall:
allvecs = [xk]
@ -2116,7 +2126,7 @@ def _minimize_newtoncg(fun, x0, args=(), jac=None, hess=None, hessp=None,
update_l1norm = np.linalg.norm(update, ord=1)
else:
if np.isnan(old_fval) or np.isnan(update).any():
if np.isnan(old_fval) or np.isnan(update_l1norm):
return terminate(3, _status_message['nan'])
msg = _status_message['success']

View File

@ -280,9 +280,9 @@ def _root_leastsq(fun, x0, args=(), jac=None,
maxiter : int
The maximum number of calls to the function. If zero, then
100*(N+1) is the maximum where N is the number of elements in x0.
epsfcn : float
eps : float
A suitable step length for the forward-difference approximation of
the Jacobian (for Dfun=None). If epsfcn is less than the machine
the Jacobian (for Dfun=None). If `eps` is less than the machine
precision, it is assumed that the relative errors in the functions
are of the order of the machine precision.
factor : float

View File

@ -8,7 +8,7 @@ _zeros_pyx = custom_target('_zeros_pyx',
output: '_zeros.pyx',
input: '_zeros.pyx.in',
command: [
tempita, '@INPUT@', '-o', '@OUTDIR@',
py3, tempita, '@INPUT@', '-o', '@OUTDIR@',
'--ignore', _dummy_init_cyoptimize[0]
]
)

View File

@ -92,7 +92,7 @@ py3.extension_module('_zeros',
lbfgsb_module = custom_target('lbfgsb_module',
output: ['_lbfgsb-f2pywrappers.f', '_lbfgsbmodule.c'],
input: 'lbfgsb_src/lbfgsb.pyf',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_lbfgsb',
@ -126,7 +126,7 @@ py3.extension_module('_moduleTNC',
cobyla_module = custom_target('cobyla_module',
output: ['_cobylamodule.c'],
input: 'cobyla/cobyla.pyf',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_cobyla',
@ -143,7 +143,7 @@ py3.extension_module('_cobyla',
minpack2_module = custom_target('minpack2_module',
output: ['_minpack2module.c'],
input: 'minpack2/minpack2.pyf',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_minpack2',
@ -160,7 +160,7 @@ py3.extension_module('_minpack2',
slsqp_module = custom_target('slsqp_module',
output: ['_slsqpmodule.c'],
input: 'slsqp/slsqp.pyf',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_slsqp',

View File

@ -17,8 +17,8 @@ def assert_wolfe(s, phi, derphi, c1=1e-4, c2=0.9, err_msg=""):
phi0 = phi(0)
derphi0 = derphi(0)
derphi1 = derphi(s)
msg = "s = {}; phi(0) = {}; phi(s) = {}; phi'(0) = {}; phi'(s) = {}; {}".format(
s, phi0, phi1, derphi0, derphi1, err_msg)
msg = (f"s = {s}; phi(0) = {phi0}; phi(s) = {phi1}; phi'(0) = {derphi0};"
f" phi'(s) = {derphi1}; {err_msg}")
assert phi1 <= phi0 + c1*s*derphi0, "Wolfe 1 failed: " + msg
assert abs(derphi1) <= abs(c2*derphi0), "Wolfe 2 failed: " + msg
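The two assertions implement the strong Wolfe conditions; a standalone sketch of the same check (the helper name is hypothetical):

```python
def satisfies_wolfe(s, phi, derphi, c1=1e-4, c2=0.9):
    # Sufficient-decrease (Wolfe 1) and strong curvature (Wolfe 2)
    # conditions for a step length s along the line phi(alpha).
    phi0, derphi0 = phi(0), derphi(0)
    return (phi(s) <= phi0 + c1 * s * derphi0
            and abs(derphi(s)) <= abs(c2 * derphi0))
```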
@ -159,9 +159,9 @@ class TestLineSearch:
def derphi(alpha):
return 2 * (alpha - 5)
s, _, _, _ = assert_warns(LineSearchWarning,
ls.scalar_search_wolfe2, phi, derphi, amax=0.001)
assert s is None
alpha_star, _, _, derphi_star = ls.scalar_search_wolfe2(phi, derphi, amax=0.001)
assert alpha_star is None # Not converged
assert derphi_star is None # Not converged
def test_scalar_search_wolfe2_regression(self):
# Regression test for gh-12157

View File

@ -42,3 +42,277 @@ class TestNNLS:
b = self.rng.uniform(size=5)
with assert_raises(RuntimeError):
nnls(a, b, maxiter=1)
def test_nnls_inner_loop_case1(self):
# See gh-20168
n = np.array(
[3, 2, 0, 1, 1, 1, 3, 8, 14, 16, 29, 23, 41, 47, 53, 57, 67, 76,
103, 89, 97, 94, 85, 95, 78, 78, 78, 77, 73, 50, 50, 56, 68, 98,
95, 112, 134, 145, 158, 172, 213, 234, 222, 215, 216, 216, 206,
183, 135, 156, 110, 92, 63, 60, 52, 29, 20, 16, 12, 5, 5, 5, 1, 2,
3, 0, 2])
k = np.array(
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0.7205812007860187, 0., 1.4411624015720375,
0.7205812007860187, 2.882324803144075, 5.76464960628815,
5.76464960628815, 12.249880413362318, 15.132205216506394,
20.176273622008523, 27.382085629868712, 48.27894045266326,
47.558359251877235, 68.45521407467177, 97.99904330689854,
108.0871801179028, 135.46926574777152, 140.51333415327366,
184.4687874012208, 171.49832578707245, 205.36564222401535,
244.27702706646033, 214.01261663344755, 228.42424064916793,
232.02714665309804, 205.36564222401535, 172.9394881886445,
191.67459940908097, 162.1307701768542, 153.48379576742198,
110.96950492104689, 103.04311171240067, 86.46974409432225,
60.528820866025576, 43.234872047161126, 23.779179625938617,
24.499760826724636, 17.29394881886445, 11.5292992125763,
5.76464960628815, 5.044068405502131, 3.6029060039300935, 0.,
2.882324803144075, 0., 0., 0.])
d = np.array(
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0.003889242101538, 0., 0.007606268390096, 0.,
0.025457371599973, 0.036952882091577, 0., 0.08518359183449,
0.048201126400243, 0.196234990022205, 0.144116240157247,
0.171145134062442, 0., 0., 0.269555036538714, 0., 0., 0.,
0.010893241091872, 0., 0., 0., 0., 0., 0., 0., 0.,
0.048167058272886, 0.011238724891049, 0., 0., 0.055162603456078,
0., 0., 0., 0., 0.027753339088588, 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0.])
# The following code sets up a system of equations such that
# $k_i-p_i*n_i$ is minimized for $p_i$ with weights $n_i$ and
# monotonicity constraints on $p_i$. This translates to a system of
# equations of the form $k_i - (d_1 + ... + d_i) * n_i$ and
# non-negativity constraints on the $d_i$. If $n_i$ is zero the
# system is modified such that $d_i - d_{i+1}$ is then minimized.
N = len(n)
A = np.diag(n) @ np.tril(np.ones((N, N)))
w = n ** 0.5
nz = (n == 0).nonzero()[0]
A[nz, nz] = 1
A[nz, np.minimum(nz + 1, N - 1)] = -1
w[nz] = 1
k[nz] = 0
W = np.diag(w)
# Small perturbations can already make the infinite loop go away (just
# uncomment the next line)
# k = k + 1e-10 * np.random.normal(size=N)
dact, _ = nnls(W @ A, W @ k)
assert_allclose(dact, d, rtol=0., atol=1e-10)
def test_nnls_inner_loop_case2(self):
# See gh-20168
n = np.array(
[1, 0, 1, 2, 2, 2, 3, 3, 5, 4, 14, 14, 19, 26, 36, 42, 36, 64, 64,
64, 81, 85, 85, 95, 95, 95, 75, 76, 69, 81, 62, 59, 68, 64, 71, 67,
74, 78, 118, 135, 153, 159, 210, 195, 218, 243, 236, 215, 196, 175,
185, 149, 144, 103, 104, 75, 56, 40, 32, 26, 17, 9, 12, 8, 2, 1, 1,
1])
k = np.array(
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.7064355064917867, 0., 0., 2.11930651947536,
0.7064355064917867, 0., 3.5321775324589333, 7.064355064917867,
11.302968103868587, 16.95445215580288, 20.486629688261814,
20.486629688261814, 37.44108184406469, 55.808405012851146,
78.41434122058831, 103.13958394780086, 105.965325973768,
125.74552015553803, 149.057891869767, 176.60887662294667,
197.09550631120848, 211.930651947536, 204.86629688261814,
233.8301526487814, 221.1143135319292, 195.6826352982249,
197.80194181770025, 191.4440222592742, 187.91184472681525,
144.11284332432447, 131.39700420747232, 116.5618585711448,
93.24948685691584, 89.01087381796512, 53.68909849337579,
45.211872415474346, 31.083162285638615, 24.72524272721253,
16.95445215580288, 9.890097090885014, 9.890097090885014,
2.8257420259671466, 2.8257420259671466, 1.4128710129835733,
0.7064355064917867, 1.4128710129835733])
d = np.array(
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.0021916146355674473, 0., 0.,
0.011252740799789484, 0., 0., 0.037746623295934395,
0.03602328132946222, 0.09509167709829734, 0.10505765870204821,
0.01391037014274718, 0.0188296228752321, 0.20723559202324254,
0.3056220879462608, 0.13304643490426477, 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.043185876949706214, 0.0037266261379722554,
0., 0., 0., 0., 0., 0.094797899357143, 0., 0., 0., 0., 0., 0., 0.,
0., 0.23450935613672663, 0., 0., 0.07064355064917871])
# The following code sets up a system of equations such that
# $k_i-p_i*n_i$ is minimized for $p_i$ with weights $n_i$ and
# monotonicity constraints on $p_i$. This translates to a system of
# equations of the form $k_i - (d_1 + ... + d_i) * n_i$ and
# non-negativity constraints on the $d_i$. If $n_i$ is zero the
# system is modified such that $d_i - d_{i+1}$ is then minimized.
N = len(n)
A = np.diag(n) @ np.tril(np.ones((N, N)))
w = n ** 0.5
nz = (n == 0).nonzero()[0]
A[nz, nz] = 1
A[nz, np.minimum(nz + 1, N - 1)] = -1
w[nz] = 1
k[nz] = 0
W = np.diag(w)
dact, _ = nnls(W @ A, W @ k, atol=1e-7)
p = np.cumsum(dact)
assert np.all(dact >= 0)
assert np.linalg.norm(k - n * p, ord=np.inf) < 28
assert_allclose(dact, d, rtol=0., atol=1e-10)
def test_nnls_gh20302(self):
# See gh-20302
A = np.array(
[0.33408569134321575, 0.11136189711440525, 0.049140798007949286,
0.03712063237146841, 0.055680948557202625, 0.16642814595936478,
0.11095209730624318, 0.09791993030943345, 0.14793612974165757,
0.44380838922497273, 0.11099502671044059, 0.11099502671044059,
0.14693672599330593, 0.3329850801313218, 1.498432860590948,
0.0832374225132955, 0.11098323001772734, 0.19589481249472837,
0.5919105600945457, 3.5514633605672747, 0.06658716751427037,
0.11097861252378394, 0.24485832778293645, 0.9248217710315328,
6.936163282736496, 0.05547609388181014, 0.11095218776362029,
0.29376003042571264, 1.3314262531634435, 11.982836278470993,
0.047506113282944136, 0.11084759766020298, 0.3423969672933396,
1.8105107617833156, 19.010362998724812, 0.041507335004505576,
0.11068622667868154, 0.39074115283013344, 2.361306169145206,
28.335674029742474, 0.03682846280947718, 0.11048538842843154,
0.4387861797121048, 2.9831054875676517, 40.2719240821633,
0.03311278164362387, 0.11037593881207958, 0.4870572300443105,
3.6791979604026523, 55.187969406039784, 0.030079304092299915,
0.11029078167176636, 0.5353496017200152, 4.448394860761242,
73.3985152025605, 0.02545939709595835, 0.11032405408248619,
0.6328767609778363, 6.214921713313388, 121.19097340961108,
0.022080881724881523, 0.11040440862440762, 0.7307742886903428,
8.28033064683057, 186.30743955368786, 0.020715838214945492,
0.1104844704797093, 0.7800578384588346, 9.42800814760186,
226.27219554244465, 0.01843179728340054, 0.11059078370040323,
0.8784095015912599, 11.94380463964355, 322.48272527037585,
0.015812787653789077, 0.11068951357652354, 1.0257259848595766,
16.27135849574896, 512.5477926160922, 0.014438550529330062,
0.11069555405819713, 1.1234754801775881, 19.519316032262093,
673.4164031130423, 0.012760770585072577, 0.110593345070629,
1.2688431112524712, 24.920367089248398, 971.8943164806875,
0.011427556646114315, 0.11046638091243838, 1.413623342459821,
30.967408782453557, 1347.0822820367298, 0.010033330264470307,
0.11036663290917338, 1.6071533470570285, 40.063087746029936,
1983.122843428482, 0.008950061496507258, 0.11038409179025618,
1.802244865119193, 50.37194055362024, 2795.642700725923,
0.008071078821135658, 0.11030474388885401, 1.9956465761433504,
61.80742482572119, 3801.1566267818534, 0.007191031207777556,
0.11026247851925586, 2.238160187262168, 77.7718015155818,
5366.2543045751445, 0.00636834224248, 0.11038459886965334,
2.5328963107984297, 99.49331844784753, 7760.4788389321075,
0.005624259098118485, 0.11061042892966355, 2.879742607664547,
128.34496770138628, 11358.529641572684, 0.0050354270614989555,
0.11077939535297703, 3.2263279459292575, 160.85168205252265,
15924.316523199741, 0.0044997853165982555, 0.1109947044760903,
3.6244287189055613, 202.60233390369015, 22488.859063309606,
0.004023601950058174, 0.1113196539516095, 4.07713905729421,
255.6270320242126, 31825.565487014468, 0.0036024117873727094,
0.111674765408554, 4.582933773135057, 321.9583486728612,
44913.18963986413, 0.003201503089582304, 0.11205260813538065,
5.191786833370116, 411.79333489752383, 64857.45024636,
0.0028633044552448853, 0.11262330857296549, 5.864295861648949,
522.7223161899905, 92521.84996562831, 0.0025691897303891965,
0.11304434813712465, 6.584584405106342, 656.5615739804199,
129999.19164812315, 0.0022992911894424675, 0.11343169867916175,
7.4080129906658305, 828.2026426227864, 183860.98666225857,
0.0020449922071108764, 0.11383789952917212, 8.388975556433872,
1058.2750599896935, 265097.9025274183, 0.001831274615120854,
0.11414945100919989, 9.419351803810935, 1330.564050780237,
373223.2162438565, 0.0016363333454631633, 0.11454333418242145,
10.6143816579462, 1683.787012481595, 530392.9089317025,
0.0014598610433380044, 0.11484240207592301, 11.959688127956882,
2132.0874753402027, 754758.9662704318, 0.0012985240015312626,
0.11513579480243862, 13.514425358573531, 2715.5160990137824,
1083490.9235064993, 0.0011614735761289934, 0.11537304189548002,
15.171418602667567, 3415.195870828736, 1526592.554260445,
0.0010347472698811352, 0.11554677847006009, 17.080800985009617,
4322.412404600832, 2172012.2333119176, 0.0009232988811258664,
0.1157201264344419, 19.20004861829407, 5453.349531598553,
3075689.135821584, 0.0008228871862975205, 0.11602709326795038,
21.65735242414206, 6920.203923780365, 4390869.389638642,
0.00073528900066722, 0.11642075843897651, 24.40223571298994,
8755.811207598026, 6238515.485413593, 0.0006602764384729194,
0.11752920604817965, 27.694443541914293, 11171.386093291572,
8948280.260726549, 0.0005935538977939806, 0.11851292825953147,
31.325508920763063, 14174.185724149384, 12735505.873148222,
0.0005310755355633124, 0.11913794514470308, 35.381052949627765,
17987.010118815077, 18157886.71494382, 0.00047239949671590953,
0.1190446731724092, 39.71342528048061, 22679.438775422022,
25718483.571328573, 0.00041829129789387623, 0.11851586773659825,
44.45299332965028, 28542.57147989741, 36391778.63686921,
0.00037321512015419886, 0.11880681324908665, 50.0668539579632,
36118.26128449941, 51739409.29004541, 0.0003315539616702064,
0.1184752823034871, 56.04387059062639, 45383.29960621684,
72976345.76679668, 0.00029456064937920213, 0.11831519416731286,
62.91195073220101, 57265.53993693082, 103507463.43600245,
0.00026301867496859703, 0.11862142241083726, 70.8217262087034,
72383.14781936012, 146901598.49939138, 0.00023618734450420032,
0.11966825454879482, 80.26535457124461, 92160.51176984518,
210125966.835247, 0.00021165918071578316, 0.12043407382728061,
90.7169587544247, 116975.56852918258, 299515943.218972,
0.00018757727511329545, 0.11992440455576689, 101.49899864101785,
147056.26174166967, 423080865.0307836, 0.00016654469159895833,
0.11957908856805206, 113.65970431102812, 184937.67016486943,
597533612.3026931, 0.00014717439179415048, 0.11872067604728138,
126.77899683346702, 231758.58906776624, 841283678.3159915,
0.00012868496382376066, 0.1166314722122684, 139.93635237349534,
287417.30847929465, 1172231492.6328032, 0.00011225559452625302,
0.11427619522772557, 154.0034283704458, 355281.4912295324,
1627544511.322488, 9.879511142981067e-05, 0.11295574406808354,
170.96532050841535, 442971.0111288653, 2279085852.2580123,
8.71257780313587e-05, 0.11192758284428547, 190.35067416684697,
554165.2523674504, 3203629323.93623, 7.665069027765277e-05,
0.11060694607065294, 211.28835951100046, 690933.608546013,
4486577387.093535, 6.734021094824451e-05, 0.10915848194710433,
234.24338803525194, 860487.9079859136, 6276829044.8032465,
5.9191625040287665e-05, 0.10776821865668373, 259.7454711820425,
1071699.0387579766, 8780430224.544102, 5.1856803674907676e-05,
0.10606444911641115, 287.1843540288165, 1331126.3723998806,
12251687131.5685, 4.503421404759231e-05, 0.10347361247668461,
314.7338642485931, 1638796.0697522392, 16944331963.203278,
3.90470387455642e-05, 0.1007804070023012, 344.3427560918527,
2014064.4865519698, 23392351979.057854, 3.46557661636393e-05,
0.10046706610839032, 385.56603915081587, 2533036.2523656,
33044724430.235435, 3.148745865254635e-05, 0.1025441570117926,
442.09038234164746, 3262712.3882769793, 47815050050.199135,
2.9790762078715404e-05, 0.1089845379379672, 527.8068231298969,
4375751.903321453, 72035815708.42941, 2.8772639817606534e-05,
0.11823636789048445, 643.2048194503195, 5989838.001888927,
110764084330.93005, 2.7951691815106586e-05, 0.12903432664913705,
788.5500418523591, 8249371.000613411, 171368308481.2427,
2.6844392423114212e-05, 0.1392060709754626, 955.6296403631383,
11230229.319931043, 262063016295.25085, 2.499458273851386e-05,
0.14559344445184325, 1122.7022399726002, 14820229.698461473,
388475270970.9214, 2.337386729019776e-05, 0.15294300496886065,
1324.8158105672455, 19644861.137128454, 578442936182.7473,
2.0081014872174113e-05, 0.14760215298210377, 1436.2385042492353,
23923681.729276657, 791311658718.4193, 1.773374462991839e-05,
0.14642752940923615, 1600.5596278736678, 29949429.82503553,
1112815989293.9326, 1.5303115839590797e-05, 0.14194150045081785,
1742.873058605698, 36634451.931305364, 1529085389160.7544,
1.3148448731163076e-05, 0.13699368732998807, 1889.5284359054356,
44614279.74469635, 2091762812969.9607, 1.1739194407590062e-05,
0.13739553134643406, 2128.794599579694, 56462810.11822766,
2973783283306.8145, 1.0293367506254706e-05, 0.13533033372723272,
2355.372854690074, 70176508.28667311, 4151852759764.441,
9.678312586863569e-06, 0.14293577249119244, 2794.531827932675,
93528671.31952812, 6215821967224.52, -1.174086323572049e-05,
0.1429501325944908, 3139.4804810720925, 118031680.16618933,
-6466892421886.174, -2.1188265307407812e-05, 0.1477108290912869,
3644.1133424610953, 153900132.62392554, -4828013117542.036,
-8.614483025123122e-05, 0.16037100755883044, 4444.386620899393,
210846007.89660168, -1766340937974.433, 4.981445776141726e-05,
0.16053420251962536, 4997.558254401547, 266327328.4755411,
3862250287024.725, 1.8500019169456637e-05, 0.15448417164977674,
5402.289867444643, 323399508.1475582, 12152445411933.408,
-5.647882376069748e-05, 0.1406372975946189, 5524.633133597753,
371512945.9909363, -4162951345292.1514, 2.8048523486337994e-05,
0.13183417571186926, 5817.462495763679, 439447252.3728975,
9294740538175.03]).reshape(89, 5)
b = np.ones(89, dtype=np.float64)
sol, rnorm = nnls(A, b)
assert_allclose(sol, np.array([0.61124315, 8.22262829, 0., 0., 0.]))
assert_allclose(rnorm, 1.0556460808977297)

View File

@ -213,9 +213,9 @@ class CheckOptimizeParameterized(CheckOptimize):
[[0, -5.25060743e-01, 4.87748473e-01],
[0, -5.24885582e-01, 4.87530347e-01]],
atol=1e-14, rtol=1e-7)
def test_bfgs_hess_inv0_neg(self):
# Ensure that BFGS does not accept neg. def. initial inverse
# Hessian estimate.
with pytest.raises(ValueError, match="'hess_inv0' matrix isn't "
"positive definite."):
@ -223,9 +223,9 @@ class CheckOptimizeParameterized(CheckOptimize):
opts = {'disp': self.disp, 'hess_inv0': -np.eye(5)}
optimize.minimize(optimize.rosen, x0=x0, method='BFGS', args=(),
options=opts)
def test_bfgs_hess_inv0_semipos(self):
# Ensure that BFGS does not accept semi pos. def. initial inverse
# Hessian estimate.
with pytest.raises(ValueError, match="'hess_inv0' matrix isn't "
"positive definite."):
@ -235,18 +235,18 @@ class CheckOptimizeParameterized(CheckOptimize):
opts = {'disp': self.disp, 'hess_inv0': hess_inv0}
optimize.minimize(optimize.rosen, x0=x0, method='BFGS', args=(),
options=opts)
def test_bfgs_hess_inv0_sanity(self):
# Ensure that BFGS handles `hess_inv0` parameter correctly.
fun = optimize.rosen
x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
opts = {'disp': self.disp, 'hess_inv0': 1e-2 * np.eye(5)}
res = optimize.minimize(fun, x0=x0, method='BFGS', args=(),
options=opts)
res_true = optimize.minimize(fun, x0=x0, method='BFGS', args=(),
options={'disp': self.disp})
assert_allclose(res.fun, res_true.fun, atol=1e-6)
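The `hess_inv0` option exercised above can be sketched standalone: it seeds BFGS with an initial inverse-Hessian estimate (here a scaled identity), which changes the early search direction but not the minimum found.

```python
import numpy as np
from scipy.optimize import minimize, rosen

# Seed BFGS with a positive definite initial inverse-Hessian estimate.
x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='BFGS',
               options={'hess_inv0': 1e-2 * np.eye(5)})
res_default = minimize(rosen, x0, method='BFGS')
assert res.success and res_default.success
assert abs(res.fun - res_default.fun) < 1e-4  # same minimum either way
```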
@pytest.mark.filterwarnings('ignore::UserWarning')
def test_bfgs_infinite(self):
# Test corner case where -Inf is the minimum. See gh-2019.
@ -293,7 +293,7 @@ class CheckOptimizeParameterized(CheckOptimize):
res_mod = optimize.minimize(optimize.rosen,
x0, method='bfgs', options={'c2': 1e-2})
assert res_default.nit > res_mod.nit
@pytest.mark.parametrize(["c1", "c2"], [[0.5, 2],
[-0.1, 0.1],
[0.2, 0.1]])
@ -499,6 +499,29 @@ class CheckOptimizeParameterized(CheckOptimize):
full_output=True, disp=False, retall=False,
initial_simplex=simplex)
def test_neldermead_x0_ub(self):
# checks whether minimisation occurs correctly for entries where
# x0 == ub
# gh-19991
def quad(x):
return np.sum(x**2)
res = optimize.minimize(
quad,
[1],
bounds=[(0, 1.)],
method='nelder-mead'
)
assert_allclose(res.x, [0])
res = optimize.minimize(
quad,
[1, 2],
bounds=[(0, 1.), (1, 3.)],
method='nelder-mead'
)
assert_allclose(res.x, [0, 1])
def test_ncg_negative_maxiter(self):
# Regression test for gh-8241
opts = {'maxiter': -1}
@ -507,6 +530,24 @@ class CheckOptimizeParameterized(CheckOptimize):
args=(), options=opts)
assert result.status == 1
def test_ncg_zero_xtol(self):
# Regression test for gh-20214
def cosine(x):
return np.cos(x[0])
def jac(x):
return -np.sin(x[0])
x0 = [0.1]
xtol = 0
result = optimize.minimize(cosine,
x0=x0,
jac=jac,
method="newton-cg",
options=dict(xtol=xtol))
assert result.status == 0
assert_almost_equal(result.x[0], np.pi)
def test_ncg(self):
# line-search Newton conjugate gradient optimization routine
if self.use_wrapper:

View File

@ -2087,6 +2087,7 @@ def lfilter(b, a, x, axis=-1, zi=None):
>>> plt.show()
"""
b = np.atleast_1d(b)
a = np.atleast_1d(a)
if len(a) == 1:
# This path only supports types fdgFDGO to mirror _linear_filter below.

View File

@ -1,10 +1,23 @@
correlate_nd_c = custom_target('_correlate_nd',
output: '_correlate_nd.c',
input: '_correlate_nd.c.in',
command: [py3, tempita, '@INPUT@', '-o', '@OUTDIR@']
)
lfilter_c = custom_target('_lfilter',
output: '_lfilter.c',
input: '_lfilter.c.in',
command: [py3, tempita, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_sigtools',
[
'_firfilter.c',
'_sigtoolsmodule.c',
'_firfilter.c',
'_sigtools.h',
'_medianfilter.c',
tempita_gen.process('_correlate_nd.c.in'),
tempita_gen.process('_lfilter.c.in'),
lfilter_c,
correlate_nd_c
],
dependencies: np_dep,
link_args: version_link_args,
@ -68,11 +81,14 @@ foreach pyx_file: pyx_files
)
endforeach
bspline_util = custom_target('_bspline_util',
output: '_bspline_util.c',
input: '_bspline_util.c.in',
command: [py3, tempita, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_spline',
[
'_splinemodule.c',
tempita_gen.process('_bspline_util.c.in'),
],
['_splinemodule.c', bspline_util],
dependencies: np_dep,
link_args: version_link_args,
install: true,

View File

@ -1864,6 +1864,14 @@ class _TestLinearFilter:
assert_equal(b, b0)
assert_equal(a, a0)
@pytest.mark.parametrize("a", [1.0, [1.0], np.array(1.0)])
@pytest.mark.parametrize("b", [1.0, [1.0], np.array(1.0)])
def test_scalar_input(self, a, b):
data = np.random.randn(10)
assert_allclose(
lfilter(np.array([1.0]), np.array([1.0]), data),
lfilter(b, a, data))
class TestLinearFilterFloat32(_TestLinearFilter):
dtype = np.dtype('f')

View File

@ -105,9 +105,9 @@ class _bsr_base(_cs_matrix, _minmax_mixin):
except Exception as e:
raise ValueError("unrecognized form for"
" %s_matrix constructor" % self.format) from e
arg1 = self._coo_container(
arg1, dtype=dtype
).tobsr(blocksize=blocksize)
if isinstance(self, sparray) and arg1.ndim != 2:
raise ValueError(f"BSR arrays don't support {arg1.ndim}D input. Use 2D")
arg1 = self._coo_container(arg1, dtype=dtype).tobsr(blocksize=blocksize)
self.indptr, self.indices, self.data, self._shape = (
arg1.indptr, arg1.indices, arg1.data, arg1._shape
)

View File

@ -7,7 +7,7 @@ import operator
import numpy as np
from scipy._lib._util import _prune_array, copy_if_needed
from ._base import _spbase, issparse, SparseEfficiencyWarning
from ._base import _spbase, issparse, SparseEfficiencyWarning, sparray
from ._data import _data_matrix, _minmax_mixin
from . import _sparsetools
from ._sparsetools import (get_csr_submatrix, csr_sample_offsets, csr_todense,
@ -55,6 +55,7 @@ class _cs_matrix(_data_matrix, _minmax_mixin, IndexMixin):
coo = self._coo_container(arg1, shape=shape, dtype=dtype)
arrays = coo._coo_to_compressed(self._swap)
self.indptr, self.indices, self.data, self._shape = arrays
self.sum_duplicates()
elif len(arg1) == 3:
# (data, indices, indptr) format
(data, indices, indptr) = arg1
@ -84,6 +85,10 @@ class _cs_matrix(_data_matrix, _minmax_mixin, IndexMixin):
except Exception as e:
msg = f"unrecognized {self.format}_matrix constructor usage"
raise ValueError(msg) from e
if isinstance(self, sparray) and arg1.ndim < 2 and self.format == "csc":
raise ValueError(
f"CSC arrays don't support {arg1.ndim}D input. Use 2D"
)
coo = self._coo_container(arg1, dtype=dtype)
arrays = coo._coo_to_compressed(self._swap)
self.indptr, self.indices, self.data, self._shape = arrays

View File

@ -1208,7 +1208,7 @@ def _random(shape, density=0.01, format=None, dtype=None,
# rng.choice uses int64 if first arg is an int
if tot_prod < np.iinfo(np.int64).max:
raveled_ind = rng.choice(tot_prod, size=size, replace=False)
ind = np.unravel_index(raveled_ind, shape=shape)
ind = np.unravel_index(raveled_ind, shape=shape, order='F')
else:
# for ravel indices bigger than dtype max, use sets to remove duplicates
ndim = len(shape)

View File

@ -79,7 +79,7 @@ class _coo_base(_data_matrix, _minmax_mixin):
if not is_array:
M = np.atleast_2d(M)
if M.ndim != 2:
raise TypeError('expected dimension <= 2 array or matrix')
raise TypeError(f'expected 2D array or matrix, not {M.ndim}D')
self._shape = check_shape(M.shape, allow_1d=is_array)
if shape is not None:

View File

@ -70,6 +70,8 @@ class _dia_base(_data_matrix):
except Exception as e:
raise ValueError("unrecognized form for"
" %s_matrix constructor" % self.format) from e
if isinstance(self, sparray) and arg1.ndim != 2:
raise ValueError(f"DIA arrays don't support {arg1.ndim}D input. Use 2D")
A = self._coo_container(arg1, dtype=dtype, shape=shape).todia()
self.data = A.data
self.offsets = A.offsets

View File

@ -89,6 +89,24 @@ class _dok_base(_spbase, IndexMixin, dict):
def clear(self):
return self._dict.clear()
def pop(self, /, *args):
return self._dict.pop(*args)
def __reversed__(self):
raise TypeError("reversed is not defined for dok_array type")
def __or__(self, other):
type_names = f"{type(self).__name__} and {type(other).__name__}"
raise TypeError(f"unsupported operand type for |: {type_names}")
def __ror__(self, other):
type_names = f"{type(self).__name__} and {type(other).__name__}"
raise TypeError(f"unsupported operand type for |: {type_names}")
def __ior__(self, other):
type_names = f"{type(self).__name__} and {type(other).__name__}"
raise TypeError(f"unsupported operand type for |: {type_names}")
def popitem(self):
return self._dict.popitem()
@ -414,11 +432,10 @@ class _dok_base(_spbase, IndexMixin, dict):
@classmethod
def fromkeys(cls, iterable, value=1, /):
tmp = dict.fromkeys(iterable, value)
keys = tmp.keys()
if isinstance(next(iter(keys)), tuple):
shape = tuple(max(idx) + 1 for idx in zip(*keys))
if isinstance(next(iter(tmp)), tuple):
shape = tuple(max(idx) + 1 for idx in zip(*tmp))
else:
shape = (max(keys) + 1,)
shape = (max(tmp) + 1,)
result = cls(shape, dtype=type(value))
result._dict = tmp
return result
@ -633,3 +650,23 @@ class dok_matrix(spmatrix, _dok_base):
return self._shape
shape = property(fget=get_shape, fset=set_shape)
def __reversed__(self):
return self._dict.__reversed__()
def __or__(self, other):
if isinstance(other, _dok_base):
return self._dict | other._dict
return self._dict | other
def __ror__(self, other):
if isinstance(other, _dok_base):
return self._dict | other._dict
return self._dict | other
def __ior__(self, other):
if isinstance(other, _dok_base):
self._dict |= other._dict
else:
self._dict |= other
return self

View File

@ -57,13 +57,14 @@ class _lil_base(_spbase, IndexMixin):
A = self._ascontainer(arg1)
except TypeError as e:
raise TypeError('unsupported matrix type') from e
else:
A = self._csr_container(A, dtype=dtype).tolil()
if isinstance(self, sparray) and A.ndim != 2:
raise ValueError(f"LIL arrays don't support {A.ndim}D input. Use 2D")
A = self._csr_container(A, dtype=dtype).tolil()
self._shape = check_shape(A.shape)
self.dtype = A.dtype
self.rows = A.rows
self.data = A.data
def __iadd__(self,other):
self[:,:] = self + other

View File

@ -95,7 +95,7 @@ arpack_lib = static_library('arpack_lib',
arpack_module = custom_target('arpack_module',
output: ['_arpackmodule.c', '_arpack-f2pywrappers.f'],
input: 'arpack.pyf.src',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
_arpack = py3.extension_module('_arpack',

View File

@ -266,7 +266,7 @@ class SVDSCommonTests:
else:
res = svds(A, k=k, which=which, solver=self.solver,
random_state=0)
_check_svds(A, k, *res, which=which, atol=8e-10)
_check_svds(A, k, *res, which=which, atol=1e-9, rtol=2e-13)
@pytest.mark.filterwarnings("ignore:Exited",
reason="Ignore LOBPCG early exit.")

View File

@ -99,7 +99,7 @@ foreach ele: elements
propack_module = custom_target('propack_module' + ele[0],
output: [ele[0] + '-f2pywrappers.f', ele[0] + 'module.c'],
input: ele[2],
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
propacklib = py3.extension_module(ele[0],

View File

@ -121,8 +121,8 @@ def test_svds(matrices):
u, s, vt = splin.svds(A_sparse, k=2, v0=v0)
assert_allclose(s, s0)
assert_allclose(u, u0)
assert_allclose(vt, vt0)
assert_allclose(np.abs(u), np.abs(u0))
assert_allclose(np.abs(vt), np.abs(vt0))
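The reason the assertions above compare absolute values can be sketched independently: singular vectors are only determined up to a sign flip, while singular values (returned by `svds` in ascending order) are unambiguous.

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
u, s, vt = svds(A, k=2)          # ascending singular values
U, S, Vt = np.linalg.svd(A)      # descending singular values
assert np.allclose(s[::-1], S[:2])
# each singular vector matches only up to an overall sign flip,
# so compare magnitudes
assert np.allclose(np.abs(vt[::-1]), np.abs(Vt[:2]))
```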
def test_lobpcg(matrices):

View File

@ -1,7 +1,7 @@
_csparsetools_pyx = custom_target('_csparsetools_pyx',
output: '_csparsetools.pyx',
input: '_csparsetools.pyx.in',
command: [tempita, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, tempita, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_csparsetools',

View File

@ -292,8 +292,8 @@ class _TestCommon:
datsp = self.datsp_dtypes[dtype]
assert_raises(ValueError, bool, datsp)
assert_(self.spcreator([1]))
assert_(not self.spcreator([0]))
assert_(self.spcreator([[1]]))
assert_(not self.spcreator([[0]]))
if isinstance(self, TestDOK):
pytest.skip("Cannot create a rank <= 2 DOK matrix.")
@ -3504,7 +3504,7 @@ class _TestMinMax:
assert_equal(X.max(), -1)
# and a fully sparse matrix
Z = self.spcreator(np.zeros(1))
Z = self.spcreator(np.zeros((1, 1)))
assert_equal(Z.min(), 0)
assert_equal(Z.max(), 0)
assert_equal(Z.max().dtype, Z.dtype)
@ -3784,8 +3784,8 @@ def sparse_test_class(getset=True, slicing=True, slicing_assign=True,
continue
old_cls = names.get(name)
if old_cls is not None:
raise ValueError("Test class {} overloads test {} defined in {}".format(
cls.__name__, name, old_cls.__name__))
raise ValueError(f"Test class {cls.__name__} overloads test "
f"{name} defined in {old_cls.__name__}")
names[name] = cls
return type("TestBase", bases, {})
@ -3851,6 +3851,10 @@ class TestCSR(sparse_test_class()):
dense = array([[2**63 + 1, 0], [0, 1]], dtype=np.uint64)
assert_array_equal(dense, csr.toarray())
# with duplicates (should sum the duplicates)
csr = csr_matrix(([1,1,1,1], ([0,2,2,0], [0,1,1,0])))
assert csr.nnz == 2
def test_constructor5(self):
# infer dimensions from arrays
indptr = array([0,1,3,3])
@ -4039,8 +4043,8 @@ class TestCSR(sparse_test_class()):
def test_binop_explicit_zeros(self):
# Check that binary ops don't introduce spurious explicit zeros.
# See gh-9619 for context.
a = csr_matrix([0, 1, 0])
b = csr_matrix([1, 1, 0])
a = csr_matrix([[0, 1, 0]])
b = csr_matrix([[1, 1, 0]])
assert (a + b).nnz == 2
assert a.multiply(b).nnz == 1
@ -4092,6 +4096,10 @@ class TestCSC(sparse_test_class()):
csc = csc_matrix((data,ij),(4,3))
assert_array_equal(arange(12).reshape(4, 3), csc.toarray())
# with duplicates (should sum the duplicates)
csc = csc_matrix(([1,1,1,1], ([0,2,2,0], [0,1,1,0])))
assert csc.nnz == 2
def test_constructor5(self):
# infer dimensions from arrays
indptr = array([0,1,3,3])
@ -4506,6 +4514,13 @@ class TestCOO(sparse_test_class(getset=False,
dok = coo.todok()
assert_array_equal(dok.toarray(), coo.toarray())
def test_tocompressed_duplicates(self):
coo = coo_matrix(([1,1,1,1], ([0,2,2,0], [0,1,1,0])))
csr = coo.tocsr()
assert_equal(csr.nnz + 2, coo.nnz)
csc = coo.tocsc()
assert_equal(csc.nnz + 2, coo.nnz)
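The duplicate handling checked above, sketched standalone: COO stores duplicate entries as-is, and converting to CSR or CSC sums them, so `nnz` shrinks accordingly.

```python
from scipy.sparse import coo_matrix

# Two pairs of duplicated entries at (0, 0) and (2, 1).
coo = coo_matrix(([1, 1, 1, 1], ([0, 2, 2, 0], [0, 1, 1, 0])))
assert coo.nnz == 4              # duplicates stored separately in COO
csr = coo.tocsr()
assert csr.nnz == 2              # conversion sums the duplicates
assert csr[0, 0] == 2 and csr[2, 1] == 2
```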
def test_eliminate_zeros(self):
data = array([1, 0, 0, 0, 2, 0, 3, 0])
row = array([0, 0, 0, 1, 1, 1, 1, 1])

View File

@ -5,6 +5,9 @@ import pytest
import numpy as np
import scipy as sp
from scipy.sparse import (
bsr_array, csc_array, dia_array, lil_array,
)
from scipy.sparse._sputils import supported_dtypes, matrix
from scipy._lib._util import ComplexWarning
@ -31,6 +34,15 @@ def datsp_math_dtypes(dat1d):
}
# Test init with 1D dense input
# sparrays which do not plan to support 1D
@pytest.mark.parametrize("spcreator", [bsr_array, csc_array, dia_array, lil_array])
def test_no_1d_support_in_init(spcreator):
with pytest.raises(ValueError, match="arrays don't support 1D input"):
spcreator([0, 1, 2, 3])
# Main tests class
@pytest.mark.parametrize("spcreator", spcreators)
class TestCommon1D:
"""test common functionality shared by 1D sparse formats"""
@ -54,6 +66,10 @@ class TestCommon1D:
A = np.array([-1, 0, 17, 0, -5, 0, 1, -4, 0, 0, 0, 0], 'd')
assert np.array_equal(-A, (-spcreator(A)).toarray())
def test_1d_supported_init(self, spcreator):
A = spcreator([0, 1, 2, 3])
assert A.ndim == 1
def test_reshape_1d_tofrom_row_or_column(self, spcreator):
# add a dimension 1d->2d
x = spcreator([1, 0, 7, 0, 0, 0, 0, -3, 0, 0, 0, 5])

View File

@ -1,29 +1,210 @@
from numpy.testing import assert_array_equal
from scipy.sparse import dok_array
import pytest
import numpy as np
from numpy.testing import assert_equal
import scipy as sp
from scipy.sparse import dok_array, dok_matrix
@pytest.fixture
def d():
return {(0, 1): 1, (0, 2): 2}
@pytest.fixture
def A():
return np.array([[0, 1, 2], [0, 0, 0], [0, 0, 0]])
@pytest.fixture(params=[dok_array, dok_matrix])
def Asp(request):
A = request.param((3, 3))
A[(0, 1)] = 1
A[(0, 2)] = 2
yield A
# Note: __iter__ and comparison dunders act like ndarrays for DOK, not dict.
# Dunders reversed, or, ror, ior work as dict for dok_matrix, raise for dok_array
# All other dict methods on DOK format act like dict methods (with extra checks).
# Start of tests
################
def test_dict_methods_covered(d, Asp):
d_methods = set(dir(d)) - {"__class_getitem__"}
asp_methods = set(dir(Asp))
assert d_methods < asp_methods
def test_clear(d, Asp):
assert d.items() == Asp.items()
d.clear()
Asp.clear()
assert d.items() == Asp.items()
def test_copy(d, Asp):
assert d.items() == Asp.items()
dd = d.copy()
asp = Asp.copy()
assert dd.items() == asp.items()
assert asp.items() == Asp.items()
asp[(0, 1)] = 3
assert Asp[(0, 1)] == 1
def test_fromkeys_default():
# test with default value
edges = [(0, 2), (1, 0), (2, 1)]
Xdok = dok_array.fromkeys(edges)
assert_array_equal(
Xdok.toarray(),
[[0, 0, 1], [1, 0, 0], [0, 1, 0]],
)
X = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
assert_equal(Xdok.toarray(), X)
def test_fromkeys_positional():
# test with positional value
edges = [(0, 2), (1, 0), (2, 1)]
Xdok = dok_array.fromkeys(edges, -1)
assert_array_equal(
Xdok.toarray(),
[[0, 0, -1], [-1, 0, 0], [0, -1, 0]],
)
X = [[0, 0, -1], [-1, 0, 0], [0, -1, 0]]
assert_equal(Xdok.toarray(), X)
def test_fromkeys_iterator():
it = ((a, a % 2) for a in range(4))
Xdok = dok_array.fromkeys(it)
assert_array_equal(
Xdok.toarray(),
[[1, 0], [0, 1], [1, 0], [0, 1]],
)
X = [[1, 0], [0, 1], [1, 0], [0, 1]]
assert_equal(Xdok.toarray(), X)
def test_get(d, Asp):
assert Asp.get((0, 1)) == d.get((0, 1))
assert Asp.get((0, 0), 99) == d.get((0, 0), 99)
with pytest.raises(IndexError, match="out of bounds"):
Asp.get((0, 4), 99)
def test_items(d, Asp):
assert Asp.items() == d.items()
def test_keys(d, Asp):
assert Asp.keys() == d.keys()
def test_pop(d, Asp):
assert d.pop((0, 1)) == 1
assert Asp.pop((0, 1)) == 1
assert d.items() == Asp.items()
assert Asp.pop((22, 21), None) is None
assert Asp.pop((22, 21), "other") == "other"
with pytest.raises(KeyError, match="(22, 21)"):
Asp.pop((22, 21))
with pytest.raises(TypeError, match="got an unexpected keyword argument"):
Asp.pop((22, 21), default=5)
def test_popitem(d, Asp):
assert d.popitem() == Asp.popitem()
assert d.items() == Asp.items()
def test_setdefault(d, Asp):
assert Asp.setdefault((0, 1), 4) == 1
assert Asp.setdefault((2, 2), 4) == 4
d.setdefault((0, 1), 4)
d.setdefault((2, 2), 4)
assert d.items() == Asp.items()
def test_update(d, Asp):
with pytest.raises(NotImplementedError):
Asp.update(Asp)
def test_values(d, Asp):
# Note: dict.values are strange: d={1: 1}; d.values() == d.values() is False
# Using list(d.values()) makes them comparable.
assert list(Asp.values()) == list(d.values())
def test_dunder_getitem(d, Asp):
assert Asp[(0, 1)] == d[(0, 1)]
def test_dunder_setitem(d, Asp):
Asp[(1, 1)] = 5
d[(1, 1)] = 5
assert d.items() == Asp.items()
def test_dunder_delitem(d, Asp):
del Asp[(0, 1)]
del d[(0, 1)]
assert d.items() == Asp.items()
def test_dunder_contains(d, Asp):
assert ((0, 1) in d) == ((0, 1) in Asp)
assert ((0, 0) in d) == ((0, 0) in Asp)
def test_dunder_len(d, Asp):
assert len(d) == len(Asp)
# Note: dunders reversed, or, ror, ior work as dict for dok_matrix, raise for dok_array
def test_dunder_reversed(d, Asp):
if isinstance(Asp, dok_array):
with pytest.raises(TypeError):
list(reversed(Asp))
else:
assert list(reversed(Asp)) == list(reversed(d))
def test_dunder_ior(d, Asp):
if isinstance(Asp, dok_array):
with pytest.raises(TypeError):
Asp |= Asp
else:
dd = {(0, 0): 5}
Asp |= dd
assert Asp[(0, 0)] == 5
d |= dd
assert d.items() == Asp.items()
dd |= Asp
assert dd.items() == Asp.items()
def test_dunder_or(d, Asp):
if isinstance(Asp, dok_array):
with pytest.raises(TypeError):
Asp | Asp
else:
assert d | d == Asp | d
assert d | d == Asp | Asp
def test_dunder_ror(d, Asp):
if isinstance(Asp, dok_array):
with pytest.raises(TypeError):
Asp | Asp
with pytest.raises(TypeError):
d | Asp
else:
assert Asp.__ror__(d) == Asp.__ror__(Asp)
assert d.__ror__(d) == Asp.__ror__(d)
assert d | Asp
# Note: comparison dunders, e.g. ==, >=, etc follow np.array not dict
def test_dunder_eq(A, Asp):
with np.testing.suppress_warnings() as sup:
sup.filter(sp.sparse.SparseEfficiencyWarning)
assert (Asp == Asp).toarray().all()
assert (A == Asp).all()
def test_dunder_ne(A, Asp):
assert not (Asp != Asp).toarray().any()
assert not (A != Asp).any()
def test_dunder_lt(A, Asp):
assert not (Asp < Asp).toarray().any()
assert not (A < Asp).any()
def test_dunder_gt(A, Asp):
assert not (Asp > Asp).toarray().any()
assert not (A > Asp).any()
def test_dunder_le(A, Asp):
with np.testing.suppress_warnings() as sup:
sup.filter(sp.sparse.SparseEfficiencyWarning)
assert (Asp <= Asp).toarray().all()
assert (A <= Asp).all()
def test_dunder_ge(A, Asp):
with np.testing.suppress_warnings() as sup:
sup.filter(sp.sparse.SparseEfficiencyWarning)
assert (Asp >= Asp).toarray().all()
assert (A >= Asp).all()
# Note: iter dunder follows np.array not dict
def test_dunder_iter(A, Asp):
if isinstance(Asp, dok_array):
with pytest.raises(NotImplementedError):
[a.toarray() for a in Asp]
else:
assert all((a == asp).all() for a, asp in zip(A, Asp))

View File

@ -23,6 +23,8 @@ from libc.math cimport NAN
from scipy._lib.messagestream cimport MessageStream
from libc.stdio cimport FILE
from scipy.linalg.cython_lapack cimport dgetrf, dgetrs, dgecon
np.import_array()
@ -170,6 +172,8 @@ cdef extern from "qhull_src/src/libqhull_r.h":
coordT* qh_sethalfspace_all(qhT *, int dim, int count, coordT* halfspaces, pointT *feasible)
cdef extern from "qhull_misc.h":
ctypedef int CBLAS_INT # actual type defined in the header file
void qhull_misc_lib_check()
int qh_new_qhull_scipy(qhT *, int dim, int numpoints, realT *points,
boolT ismalloc, char* qhull_cmd, void *outfile,
void *errfile, coordT* feaspoint) nogil
@ -201,20 +205,6 @@ cdef extern from "qhull_src/src/mem_r.h":
from libc.stdlib cimport qsort
#------------------------------------------------------------------------------
# LAPACK interface
#------------------------------------------------------------------------------
cdef extern from "qhull_misc.h":
ctypedef int CBLAS_INT # actual type defined in the header file
void qhull_misc_lib_check()
void qh_dgetrf(CBLAS_INT *m, CBLAS_INT *n, double *a, CBLAS_INT *lda, CBLAS_INT *ipiv,
CBLAS_INT *info) nogil
void qh_dgetrs(char *trans, CBLAS_INT *n, CBLAS_INT *nrhs, double *a, CBLAS_INT *lda,
CBLAS_INT *ipiv, double *b, CBLAS_INT *ldb, CBLAS_INT *info) nogil
void qh_dgecon(char *norm, CBLAS_INT *n, double *a, CBLAS_INT *lda, double *anorm,
double *rcond, double *work, CBLAS_INT *iwork, CBLAS_INT *info) nogil
#------------------------------------------------------------------------------
# Qhull wrapper
@ -371,7 +361,8 @@ cdef class _Qhull:
"qhull: did not free %d bytes (%d pieces)" %
(totlong, curlong))
self._messages.close()
if self._messages is not None:
self._messages.close()
@cython.final
def close(self):
@ -397,7 +388,8 @@ cdef class _Qhull:
"qhull: did not free %d bytes (%d pieces)" %
(totlong, curlong))
self._messages.close()
if self._messages is not None:
self._messages.close()
@cython.final
def get_points(self):
@ -1113,12 +1105,12 @@ def _get_barycentric_transforms(np.ndarray[np.double_t, ndim=2] points,
nrhs = ndim
lda = ndim
ldb = ndim
qh_dgetrf(&n, &n, <double*>T.data, &lda, ipiv, &info)
dgetrf(&n, &n, <double*>T.data, &lda, ipiv, &info)
# Check condition number
if info == 0:
qh_dgecon("1", &n, <double*>T.data, &lda, &anorm, &rcond,
work, iwork, &info)
dgecon("1", &n, <double*>T.data, &lda, &anorm, &rcond,
work, iwork, &info)
if rcond < rcond_limit:
# The transform seems singular
@ -1126,7 +1118,7 @@ def _get_barycentric_transforms(np.ndarray[np.double_t, ndim=2] points,
# Compute transform
if info == 0:
qh_dgetrs("N", &n, &nrhs, <double*>T.data, &lda, ipiv,
dgetrs("N", &n, &nrhs, <double*>T.data, &lda, ipiv,
(<double*>Tinvs.data) + ndim*(ndim+1)*isimplex,
&ldb, &info)
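The hunk above drops the `qh_`-prefixed LAPACK shims in favor of plain `dgetrf`/`dgecon`/`dgetrs` calls, so `_qhull` no longer needs its own BLAS declarations. The factor-then-estimate-condition sequence used for the barycentric transforms can be sketched from Python via `scipy.linalg.lapack` (the matrix below is arbitrary, purely illustrative, and not what qhull computes):

```python
import numpy as np
from scipy.linalg import lapack

# Illustrative stand-in for the barycentric-transform check:
# LU-factorize, then estimate the reciprocal 1-norm condition
# number, as the Cython code does with dgetrf/dgecon.
T = np.array([[4.0, 2.0],
              [1.0, 3.0]])
anorm = np.abs(T).sum(axis=0).max()   # 1-norm of T
lu, piv, info = lapack.dgetrf(T)
assert info == 0
rcond, info = lapack.dgecon(lu, anorm)  # default norm is the 1-norm
# a well-conditioned matrix gives rcond well away from zero;
# the real code compares rcond against a singularity cutoff
```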
@ -1812,6 +1804,8 @@ class Delaunay(_QhullUser):
if np.ma.isMaskedArray(points):
raise ValueError('Input points cannot be a masked array')
points = np.ascontiguousarray(points, dtype=np.double)
if points.ndim != 2:
raise ValueError("Input points array must have 2 dimensions.")
if qhull_options is None:
if not incremental:
@ -2602,6 +2596,8 @@ class Voronoi(_QhullUser):
if np.ma.isMaskedArray(points):
raise ValueError('Input points cannot be a masked array')
points = np.ascontiguousarray(points, dtype=np.double)
if points.ndim != 2:
raise ValueError("Input points array must have 2 dimensions.")
if qhull_options is None:
if not incremental:

View File

@ -40,7 +40,7 @@ py3.extension_module('_qhull',
'qhull_src/src'
],
link_args: version_link_args,
dependencies: [lapack, np_dep],
dependencies: [np_dep],
install: true,
subdir: 'scipy/spatial'
)
@ -80,7 +80,7 @@ py3.extension_module('_distance_wrap',
py3.extension_module('_distance_pybind',
['src/distance_pybind.cpp'],
include_directories: ['src/', '../_build_utils/src'],
include_directories: ['src/'],
dependencies: [np_dep, pybind11_dep],
link_args: version_link_args,
install: true,

View File

@ -1,19 +1,12 @@
/*
* Handle different Fortran conventions and qh_new_qhull_scipy entry point.
* Handle qh_new_qhull_scipy entry point.
*/
#ifndef QHULL_MISC_H_
#define QHULL_MISC_H_
/* for CBLAS_INT only*/
#include "npy_cblas.h"
void BLAS_FUNC(dgetrf)(CBLAS_INT*, CBLAS_INT*, double*, CBLAS_INT*, CBLAS_INT*, CBLAS_INT*);
void BLAS_FUNC(dgecon)(char*, CBLAS_INT*, double*, CBLAS_INT*, double*, double*, double*, CBLAS_INT*, CBLAS_INT*, size_t);
void BLAS_FUNC(dgetrs)(char*, CBLAS_INT*, CBLAS_INT*, double*, CBLAS_INT*, CBLAS_INT*, double*, CBLAS_INT*, CBLAS_INT*, size_t);
#define qh_dgetrf(m,n,a,lda,ipiv,info) BLAS_FUNC(dgetrf)(m,n,a,lda,ipiv,info)
#define qh_dgecon(norm,n,a,lda,anorm,rcond,work,iwork,info) BLAS_FUNC(dgecon)(norm,n,a,lda,anorm,rcond,work,iwork,info,1)
#define qh_dgetrs(trans,n,nrhs,a,lda,ipiv,b,ldb,info) BLAS_FUNC(dgetrs)(trans,n,nrhs,a,lda,ipiv,b,ldb,info,1)
#define qhull_misc_lib_check() QHULL_LIB_CHECK
#include "qhull_src/src/libqhull_r.h"

View File

@ -11,8 +11,6 @@
#include <sstream>
#include <string>
#include "npy_2_compat.h"
namespace py = pybind11;
namespace {
@ -376,20 +374,7 @@ template <typename Container>
py::array prepare_out_argument(const py::object& obj, const py::dtype& dtype,
const Container& out_shape) {
if (obj.is_none()) {
// TODO: The strides are only passed for NumPy 2.0 transition to allow
// pybind11 to catch up and can be removed at any time. Also
// remove `npy_2_compat.h` include and the `../_build_utils/src`
// in meson.build. (Same as PyArray_ITEMSIZE use above.)
npy_intp itemsize = PyDataType_ELSIZE((PyArray_Descr *)dtype.ptr());
Container strides;
if (strides.size() == 1) {
strides[0] = itemsize;
}
else {
strides[0] = itemsize * out_shape[1];
strides[1] = itemsize;
}
return py::array(dtype, out_shape, strides);
return py::array(dtype, out_shape);
}
if (!py::isinstance<py::array>(obj)) {

View File

@ -1176,3 +1176,11 @@ class Test_HalfspaceIntersection:
assert set(a) == set(b) # facet orientation can differ
assert_allclose(hs.dual_points, qhalf_points)
@pytest.mark.parametrize("diagram_type", [Voronoi, qhull.Delaunay])
def test_gh_20623(diagram_type):
rng = np.random.default_rng(123)
invalid_data = rng.random((4, 10, 3))
with pytest.raises(ValueError, match="dimensions"):
diagram_type(invalid_data)

View File

@ -3431,13 +3431,32 @@ cdef class Rotation:
# We first find the minimum angle rotation between the primary
# vectors.
cross = np.cross(b_pri[0], a_pri[0])
theta = atan2(_norm3(cross), np.dot(a_pri[0], b_pri[0]))
if theta < 1e-3: # small angle Taylor series approximation
cross_norm = _norm3(cross)
theta = atan2(cross_norm, _dot3(a_pri[0], b_pri[0]))
tolerance = 1e-3 # tolerance for small angle approximation (rad)
R_flip = cls.identity()
if (np.pi - theta) < tolerance:
# Near pi radians, the Taylor series approximation of x/sin(x)
# diverges, so for numerical stability we flip pi and then
# rotate back by the small angle pi - theta
if cross_norm == 0:
# For antiparallel vectors, cross = [0, 0, 0] so we need to
# manually set an arbitrary orthogonal axis of rotation
i = np.argmin(np.abs(a_pri[0]))
r = np.zeros(3)
r[i - 1], r[i - 2] = a_pri[0][i - 2], -a_pri[0][i - 1]
else:
r = cross # Shortest angle orthogonal axis of rotation
R_flip = Rotation.from_rotvec(r / np.linalg.norm(r) * np.pi)
theta = np.pi - theta
cross = -cross
if abs(theta) < tolerance:
# Small angle Taylor series approximation for numerical stability
theta2 = theta * theta
r = cross * (1 + theta2 / 6 + theta2 * theta2 * 7 / 360)
else:
r = cross * theta / np.sin(theta)
R_pri = cls.from_rotvec(r)
R_pri = cls.from_rotvec(r) * R_flip
if N == 1:
# No secondary vectors, so we are done
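The small-angle branch above replaces `theta / sin(theta)` with its Taylor series `1 + theta**2/6 + 7*theta**4/360`. A quick numerical check that, at the 1e-3 tolerance boundary, the truncated series already agrees with the exact ratio to well below double-precision noise:

```python
import math

# x/sin(x) = 1 + x^2/6 + 7x^4/360 + O(x^6); at theta = 1e-3 the
# dropped O(x^6) term is ~2e-21, far below machine epsilon
theta = 1e-3
theta2 = theta * theta
series = 1 + theta2 / 6 + theta2 * theta2 * 7 / 360
exact = theta / math.sin(theta)
assert abs(series - exact) < 1e-12
```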

View File

@ -1417,6 +1417,32 @@ def test_align_vectors_parallel():
assert_allclose(R.apply(b[0]), a[0], atol=atol)
def test_align_vectors_antiparallel():
# Test exact 180 deg rotation
atol = 1e-12
as_to_test = np.array([[[1, 0, 0], [0, 1, 0]],
[[0, 1, 0], [1, 0, 0]],
[[0, 0, 1], [0, 1, 0]]])
bs_to_test = [[-a[0], a[1]] for a in as_to_test]
for a, b in zip(as_to_test, bs_to_test):
R, _ = Rotation.align_vectors(a, b, weights=[np.inf, 1])
assert_allclose(R.magnitude(), np.pi, atol=atol)
assert_allclose(R.apply(b[0]), a[0], atol=atol)
# Test exact rotations near 180 deg
Rs = Rotation.random(100, random_state=0)
dRs = Rotation.from_rotvec(Rs.as_rotvec()*1e-4) # scale down to small angle
a = [[ 1, 0, 0], [0, 1, 0]]
b = [[-1, 0, 0], [0, 1, 0]]
as_to_test = []
for dR in dRs:
as_to_test.append([dR.apply(a[0]), a[1]])
for a in as_to_test:
R, _ = Rotation.align_vectors(a, b, weights=[np.inf, 1])
R2, _ = Rotation.align_vectors(a, b, weights=[1e10, 1])
assert_allclose(R.as_matrix(), R2.as_matrix(), atol=atol)
def test_align_vectors_primary_only():
atol = 1e-12
mats_a = Rotation.random(100, random_state=0).as_matrix()

View File

@ -1,118 +0,0 @@
/*
*
* This file accompanied with the implementation file _amos.c is a
* C translation of the Fortran code written by D.E. Amos with the
* following original description:
*
*
* A Portable Package for Bessel Functions of a Complex Argument
* and Nonnegative Order
*
* This algorithm is a package of subroutines for computing Bessel
* functions and Airy functions. The routines are updated
* versions of those routines found in TOMS algorithm 644.
*
* Disclaimer:
*
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* ISSUED BY SANDIA LABORATORIES,
* A PRIME CONTRACTOR TO THE
* UNITED STATES DEPARTMENT OF ENERGY
* * * * * * * * * * * * * * NOTICE * * * * * * * * * * * * * * *
* THIS REPORT WAS PREPARED AS AN ACCOUNT OF WORK SPONSORED BY THE
* UNITED STATES GOVERNMENT. NEITHER THE UNITED STATES NOR THE
* UNITED STATES DEPARTMENT OF ENERGY, NOR ANY OF THEIR
* EMPLOYEES, NOR ANY OF THEIR CONTRACTORS, SUBCONTRACTORS, OR THEIR
* EMPLOYEES, MAKES ANY WARRANTY, EXPRESS OR IMPLIED, OR ASSUMES ANY
* LEGAL LIABILITY OR RESPONSIBILITY FOR THE ACCURACY, COMPLETENESS
* OR USEFULNESS OF ANY INFORMATION, APPARATUS, PRODUCT OR PROCESS
* DISCLOSED, OR REPRESENTS THAT ITS USE WOULD NOT INFRINGE
* PRIVATELY OWNED RIGHTS.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* THIS CODE HAS BEEN APPROVED FOR UNLIMITED RELEASE.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
*
*
*
* The original Fortran code can be found at https://www.netlib.org/amos/
*
* References:
*
* [1]: Abramowitz, M. and Stegun, I. A., Handbook of Mathematical
* Functions, NBS Applied Math Series 55, U.S. Dept. of Commerce,
* Washington, D.C., 1955
*
* [2]: Amos, D. E., Algorithm 644, A Portable Package For Bessel
* Functions of a Complex Argument and Nonnegative Order, ACM
* Transactions on Mathematical Software, Vol. 12, No. 3,
* September 1986, Pages 265-273, DOI:10.1145/7921.214331
*
* [3]: Amos, D. E., Remark on Algorithm 644, ACM Transactions on
* Mathematical Software, Vol. 16, No. 4, December 1990, Page
* 404, DOI:10.1145/98267.98299
*
* [4]: Amos, D. E., A remark on Algorithm 644: "A portable package
* for Bessel functions of a complex argument and nonnegative order",
* ACM Transactions on Mathematical Software, Vol. 21, No. 4,
* December 1995, Pages 388-393, DOI:10.1145/212066.212078
*
* [5]: Cody, W. J., Algorithm 665, MACHAR: A Subroutine to
* Dynamically Determine Machine Parameters, ACM Transactions on
* Mathematical Software, Vol. 14, No. 4, December 1988, Pages
* 303-311, DOI:10.1145/50063.51907
*
*/
/*
* Copyright (C) 2024 SciPy developers
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* a. Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* b. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* c. Names of the SciPy Developers may not be used to endorse or promote
* products derived from this software without specific prior written
* permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
* OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef _AMOS_H
#define _AMOS_H
#ifdef __cplusplus
extern "C"
{
#endif /* __cplusplus */
#include <math.h>
#include <complex.h>
double complex amos_airy(double complex, int, int, int *, int *);
int amos_besh(double complex, double, int, int, int, double complex *, int *);
int amos_besi(double complex, double, int, int, double complex *, int *);
int amos_besj(double complex, double, int, int, double complex *, int *);
int amos_besk(double complex, double, int, int, double complex *, int *);
int amos_besy(double complex, double, int, int, double complex *, int *);
double complex amos_biry(double complex,int, int, int *);
#ifdef __cplusplus
} /* extern "C" */
#endif /* __cplusplus */
#endif /* ifndef */

View File

@ -2931,7 +2931,8 @@ def _factorialx_approx_core(n, k):
for r in np.unique(n_mod_k):
if r == 0:
continue
result[n_mod_k == r] *= corr(k, r)
# cast to int because uint types break on `-r`
result[n_mod_k == r] *= corr(k, int(r))
return result
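The `int(r)` cast matters because unary minus on a NumPy unsigned scalar wraps around instead of producing a negative value. A minimal demonstration of the failure mode:

```python
import numpy as np

r = np.uint8(3)
# negating an unsigned dtype wraps modulo 2**8 rather than going
# negative, which is why `corr(k, r)` needed a Python int
assert -r == 253          # 256 - 3, not -3
assert -int(r) == -3      # the cast restores the expected sign
```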

View File

@ -343,7 +343,10 @@ cdef inline double spherical_in_d_real(long n, double x) noexcept nogil:
return spherical_in_real(1, x)
else:
if x == 0:
return 0
if n == 1:
return 1 / 3.
else:
return 0
return (spherical_in_real(n - 1, x) -
(n + 1)*spherical_in_real(n, x)/x)
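The hunk above fixes the `x == 0` case of the modified spherical Bessel derivative: for `n == 1` the limit is 1/3, not 0 (since `i_1(x) ~ x/3` near zero). The recurrence the code uses, `i_n'(x) = i_{n-1}(x) - (n+1) i_n(x) / x`, can be checked at small `x` with the closed forms for `i_0` and `i_1` (helper names local to this sketch):

```python
import math

def i0(x):
    # modified spherical Bessel of the first kind, order 0
    return math.sinh(x) / x

def i1(x):
    # modified spherical Bessel of the first kind, order 1
    return math.cosh(x) / x - math.sinh(x) / x**2

# derivative recurrence for n = 1 approaches 1/3 as x -> 0,
# the value the fix now returns at x == 0
x = 1e-4
d = i0(x) - 2 * i1(x) / x
assert abs(d - 1/3) < 1e-6
```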

View File

@ -1,7 +1,7 @@
#include <complex>
#include "_wright.h"
#include <complex>
using namespace std;
extern "C" {

View File

@ -1,9 +1,18 @@
#include "amos_wrappers.h"
#define CADDR(z) ((double *) (&(z))), (&(((double *) (&(z)))[1]))
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
#ifndef CMPLX
#define CMPLX(x, y) ((double complex)((double)(x) + I * (double)(y)))
#endif /* CMPLX */
#include "amos_wrappers.h"
#include "special/amos/amos.h"
#define DO_SFERR(name, varp) \
do { \
if (nz !=0 || ierr != 0) { \
sf_error(name, (sf_error_t) ierr_to_sferr(nz, ierr), NULL);\
set_nan_if_no_computation_done(varp, ierr); \
} \
} while (0)
int ierr_to_sferr(int nz, int ierr) {
/* Return sf_error equivalents for ierr values */
@ -33,6 +42,10 @@ void set_nan_if_no_computation_done(npy_cdouble *v, int ierr) {
}
}
void set_nan_if_no_computation_done(std::complex<double> *var, int ierr) {
set_nan_if_no_computation_done(reinterpret_cast<npy_cdouble *>(var), ierr);
}
double sin_pi(double x)
{
if (floor(x) == x && fabs(x) < 1e14) {
@ -114,9 +127,9 @@ rotate_i(npy_cdouble i, npy_cdouble k, double v)
return w;
}
int cephes_airy(double x, double *ai, double *aip, double *bi, double *bip);
extern "C" int cephes_airy(double x, double *ai, double *aip, double *bi, double *bip);
int airy_wrap(double x, double *ai, double *aip, double *bi, double *bip)
extern "C" int airy_wrap(double x, double *ai, double *aip, double *bi, double *bip)
{
npy_cdouble z, zai, zaip, zbi, zbip;
@ -143,8 +156,8 @@ int cairy_wrap(npy_cdouble z, npy_cdouble *ai, npy_cdouble *aip, npy_cdouble *bi
int ierr = 0;
int kode = 1;
int nz;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex res;
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> res;
NPY_CSETREAL(ai, NAN);
NPY_CSETIMAG(ai, NAN);
@ -156,26 +169,26 @@ int cairy_wrap(npy_cdouble z, npy_cdouble *ai, npy_cdouble *aip, npy_cdouble *bi
NPY_CSETIMAG(bip, NAN);
res = amos_airy(z99, id, kode, &nz, &ierr);
NPY_CSETREAL(ai, creal(res));
NPY_CSETIMAG(ai, cimag(res));
NPY_CSETREAL(ai, res.real());
NPY_CSETIMAG(ai, res.imag());
DO_SFERR("airy:", ai);
nz = 0;
res = amos_biry(z99, id, kode, &ierr);
NPY_CSETREAL(bi, creal(res));
NPY_CSETIMAG(bi, cimag(res));
NPY_CSETREAL(bi, res.real());
NPY_CSETIMAG(bi, res.imag());
DO_SFERR("airy:", bi);
id = 1;
res = amos_airy(z99, id, kode, &nz, &ierr);
NPY_CSETREAL(aip, creal(res));
NPY_CSETIMAG(aip, cimag(res));
NPY_CSETREAL(aip, res.real());
NPY_CSETIMAG(aip, res.imag());
DO_SFERR("airy:", aip);
nz = 0;
res = amos_biry(z99, id, kode, &ierr);
NPY_CSETREAL(bip, creal(res));
NPY_CSETIMAG(bip, cimag(res));
NPY_CSETREAL(bip, res.real());
NPY_CSETIMAG(bip, res.imag());
DO_SFERR("airy:", bip);
return 0;
}
@ -185,8 +198,8 @@ int cairy_wrap_e(npy_cdouble z, npy_cdouble *ai, npy_cdouble *aip, npy_cdouble *
int kode = 2; /* Exponential scaling */
int nz, ierr;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex res;
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> res;
NPY_CSETREAL(ai, NAN);
NPY_CSETIMAG(ai, NAN);
@ -198,26 +211,26 @@ int cairy_wrap_e(npy_cdouble z, npy_cdouble *ai, npy_cdouble *aip, npy_cdouble *
NPY_CSETIMAG(bip, NAN);
res = amos_airy(z99, id, kode, &nz, &ierr);
NPY_CSETREAL(ai, creal(res));
NPY_CSETIMAG(ai, cimag(res));
NPY_CSETREAL(ai, res.real());
NPY_CSETIMAG(ai, std::imag(res));
DO_SFERR("airye:", ai);
nz = 0;
res = amos_biry(z99, id, kode, &ierr);
NPY_CSETREAL(bi, creal(res));
NPY_CSETIMAG(bi, cimag(res));
NPY_CSETREAL(bi, res.real());
NPY_CSETIMAG(bi, std::imag(res));
DO_SFERR("airye:", bi);
id = 1;
res = amos_airy(z99, id, kode, &nz, &ierr);
NPY_CSETREAL(aip, creal(res));
NPY_CSETIMAG(aip, cimag(res));
NPY_CSETREAL(aip, res.real());
NPY_CSETIMAG(aip, std::imag(res));
DO_SFERR("airye:", aip);
nz = 0;
res = amos_biry(z99, id, kode, &ierr);
NPY_CSETREAL(bip, creal(res));
NPY_CSETIMAG(bip, cimag(res));
NPY_CSETREAL(bip, res.real());
NPY_CSETIMAG(bip, std::imag(res));
DO_SFERR("airye:", bip);
return 0;
}
@ -228,8 +241,8 @@ int cairy_wrap_e_real(double z, double *ai, double *aip, double *bi, double *bip
int nz, ierr;
npy_cdouble cai, caip, cbi, cbip;
double complex z99 = z;
double complex res;
std::complex<double> z99 = z;
std::complex<double> res;
NPY_CSETREAL(&cai, NAN);
NPY_CSETIMAG(&cai, NAN);
@ -245,16 +258,16 @@ int cairy_wrap_e_real(double z, double *ai, double *aip, double *bi, double *bip
*ai = NAN;
} else {
res = amos_airy(z99, id, kode, &nz, &ierr);
NPY_CSETREAL(&cai, creal(res));
NPY_CSETIMAG(&cai, cimag(res));
NPY_CSETREAL(&cai, res.real());
NPY_CSETIMAG(&cai, std::imag(res));
DO_SFERR("airye:", &cai);
*ai = npy_creal(cai);
}
nz = 0;
res = amos_biry(z99, id, kode, &ierr);
NPY_CSETREAL(&cbi, creal(res));
NPY_CSETIMAG(&cbi, cimag(res));
NPY_CSETREAL(&cbi, res.real());
NPY_CSETIMAG(&cbi, std::imag(res));
DO_SFERR("airye:", &cbi);
*bi = npy_creal(cbi);
@ -263,16 +276,16 @@ int cairy_wrap_e_real(double z, double *ai, double *aip, double *bi, double *bip
*aip = NAN;
} else {
res = amos_airy(z99, id, kode, &nz, &ierr);
NPY_CSETREAL(&caip, creal(res));
NPY_CSETIMAG(&caip, cimag(res));
NPY_CSETREAL(&caip, res.real());
NPY_CSETIMAG(&caip, std::imag(res));
DO_SFERR("airye:", &caip);
*aip = npy_creal(caip);
}
nz = 0;
res = amos_biry(z99, id, kode, &ierr);
NPY_CSETREAL(&cbip, creal(res));
NPY_CSETIMAG(&cbip, cimag(res));
NPY_CSETREAL(&cbip, res.real());
NPY_CSETIMAG(&cbip, std::imag(res));
DO_SFERR("airye:", &cbip);
*bip = npy_creal(cbip);
return 0;
@ -285,9 +298,9 @@ npy_cdouble cbesi_wrap( double v, npy_cdouble z) {
int nz, ierr;
npy_cdouble cy, cy_k;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex cy99[1] = { NAN };
double complex cy_k99[1] = { NAN };
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> cy99[1] = { NAN };
std::complex<double> cy_k99[1] = { NAN };
NPY_CSETREAL(&cy, NAN);
NPY_CSETIMAG(&cy, NAN);
@ -302,8 +315,8 @@ npy_cdouble cbesi_wrap( double v, npy_cdouble z) {
sign = -1;
}
nz = amos_besi(z99, v, kode, n, cy99, &ierr);
NPY_CSETREAL(&cy, creal(cy99[0]));
NPY_CSETIMAG(&cy, cimag(cy99[0]));
NPY_CSETREAL(&cy, cy99[0].real());
NPY_CSETIMAG(&cy, std::imag(cy99[0]));
DO_SFERR("iv:", &cy);
if (ierr == 2) {
/* overflow */
@ -323,8 +336,8 @@ npy_cdouble cbesi_wrap( double v, npy_cdouble z) {
if (sign == -1) {
if (!reflect_i(&cy, v)) {
nz = amos_besk(z99, v, kode, n, cy_k99, &ierr);
NPY_CSETREAL(&cy_k, creal(cy_k99[0]));
NPY_CSETIMAG(&cy_k, cimag(cy_k99[0]));
NPY_CSETREAL(&cy_k, cy_k99[0].real());
NPY_CSETIMAG(&cy_k, std::imag(cy_k99[0]));
DO_SFERR("iv(kv):", &cy_k);
cy = rotate_i(cy, cy_k, v);
}
@ -340,9 +353,9 @@ npy_cdouble cbesi_wrap_e( double v, npy_cdouble z) {
int nz, ierr;
npy_cdouble cy, cy_k;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex cy99[1] = { NAN };
double complex cy_k99[1] = { NAN };
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> cy99[1] = { NAN };
std::complex<double> cy_k99[1] = { NAN };
NPY_CSETREAL(&cy, NAN);
NPY_CSETIMAG(&cy, NAN);
@ -357,15 +370,15 @@ npy_cdouble cbesi_wrap_e( double v, npy_cdouble z) {
sign = -1;
}
nz = amos_besi(z99, v, kode, n, cy99, &ierr);
NPY_CSETREAL(&cy, creal(cy99[0]));
NPY_CSETIMAG(&cy, cimag(cy99[0]));
NPY_CSETREAL(&cy, cy99[0].real());
NPY_CSETIMAG(&cy, std::imag(cy99[0]));
DO_SFERR("ive:", &cy);
if (sign == -1) {
if (!reflect_i(&cy, v)) {
nz = amos_besk(z99, v, kode, n, cy_k99, &ierr);
NPY_CSETREAL(&cy_k, creal(cy_k99[0]));
NPY_CSETIMAG(&cy_k, cimag(cy_k99[0]));
NPY_CSETREAL(&cy_k, cy_k99[0].real());
NPY_CSETIMAG(&cy_k, std::imag(cy_k99[0]));
DO_SFERR("ive(kv):", &cy_k);
/* adjust scaling to match zbesi */
cy_k = rotate(cy_k, -npy_cimag(z)/M_PI);
@ -400,9 +413,9 @@ npy_cdouble cbesj_wrap( double v, npy_cdouble z) {
int sign = 1;
npy_cdouble cy_j, cy_y;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex cy_j99[1] = { NAN };
double complex cy_y99[1] = { NAN };
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> cy_j99[1] = { NAN };
std::complex<double> cy_y99[1] = { NAN };
NPY_CSETREAL(&cy_j, NAN);
NPY_CSETIMAG(&cy_j, NAN);
@ -417,8 +430,8 @@ npy_cdouble cbesj_wrap( double v, npy_cdouble z) {
sign = -1;
}
nz = amos_besj(z99, v, kode, n, cy_j99, &ierr);
NPY_CSETREAL(&cy_j, creal(cy_j99[0]));
NPY_CSETIMAG(&cy_j, cimag(cy_j99[0]));
NPY_CSETREAL(&cy_j, cy_j99[0].real());
NPY_CSETIMAG(&cy_j, std::imag(cy_j99[0]));
DO_SFERR("jv:", &cy_j);
if (ierr == 2) {
/* overflow */
@ -430,8 +443,8 @@ npy_cdouble cbesj_wrap( double v, npy_cdouble z) {
if (sign == -1) {
if (!reflect_jy(&cy_j, v)) {
nz = amos_besy(z99, v, kode, n, cy_y99, &ierr);
NPY_CSETREAL(&cy_y, creal(cy_y99[0]));
NPY_CSETIMAG(&cy_y, cimag(cy_y99[0]));
NPY_CSETREAL(&cy_y, cy_y99[0].real());
NPY_CSETIMAG(&cy_y, std::imag(cy_y99[0]));
DO_SFERR("jv(yv):", &cy_y);
cy_j = rotate_jy(cy_j, cy_y, v);
}
@ -439,7 +452,7 @@ npy_cdouble cbesj_wrap( double v, npy_cdouble z) {
return cy_j;
}
double cephes_jv(double v, double x);
extern "C" double cephes_jv(double v, double x);
double cbesj_wrap_real(double v, double x)
{
@ -467,9 +480,9 @@ npy_cdouble cbesj_wrap_e( double v, npy_cdouble z) {
int sign = 1;
npy_cdouble cy_j, cy_y;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex cy_j99[1] = { NAN };
double complex cy_y99[1] = { NAN };
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> cy_j99[1] = { NAN };
std::complex<double> cy_y99[1] = { NAN };
NPY_CSETREAL(&cy_j, NAN);
NPY_CSETIMAG(&cy_j, NAN);
@ -484,14 +497,14 @@ npy_cdouble cbesj_wrap_e( double v, npy_cdouble z) {
sign = -1;
}
nz = amos_besj(z99, v, kode, n, cy_j99, &ierr);
NPY_CSETREAL(&cy_j, creal(cy_j99[0]));
NPY_CSETIMAG(&cy_j, cimag(cy_j99[0]));
NPY_CSETREAL(&cy_j, cy_j99[0].real());
NPY_CSETIMAG(&cy_j, std::imag(cy_j99[0]));
DO_SFERR("jve:", &cy_j);
if (sign == -1) {
if (!reflect_jy(&cy_j, v)) {
nz = amos_besy(z99, v, kode, n, cy_y99, &ierr);
NPY_CSETREAL(&cy_y, creal(cy_y99[0]));
NPY_CSETIMAG(&cy_y, cimag(cy_y99[0]));
NPY_CSETREAL(&cy_y, cy_y99[0].real());
NPY_CSETIMAG(&cy_y, std::imag(cy_y99[0]));
DO_SFERR("jve(yve):", &cy_y);
cy_j = rotate_jy(cy_j, cy_y, v);
}
@ -518,9 +531,9 @@ npy_cdouble cbesy_wrap( double v, npy_cdouble z) {
int sign = 1;
npy_cdouble cy_y, cy_j;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex cy_j99[1] = { NAN };
double complex cy_y99[1] = { NAN };
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> cy_j99[1] = { NAN };
std::complex<double> cy_y99[1] = { NAN };
NPY_CSETREAL(&cy_j, NAN);
NPY_CSETIMAG(&cy_j, NAN);
@ -543,8 +556,8 @@ npy_cdouble cbesy_wrap( double v, npy_cdouble z) {
}
else {
nz = amos_besy(z99, v, kode, n, cy_y99, &ierr);
NPY_CSETREAL(&cy_y, creal(cy_y99[0]));
NPY_CSETIMAG(&cy_y, cimag(cy_y99[0]));
NPY_CSETREAL(&cy_y, cy_y99[0].real());
NPY_CSETIMAG(&cy_y, std::imag(cy_y99[0]));
DO_SFERR("yv:", &cy_y);
if (ierr == 2) {
if (npy_creal(z) >= 0 && npy_cimag(z) == 0) {
@ -558,8 +571,8 @@ npy_cdouble cbesy_wrap( double v, npy_cdouble z) {
if (sign == -1) {
if (!reflect_jy(&cy_y, v)) {
nz = amos_besj(z99, v, kode, n, cy_j99, &ierr);
NPY_CSETREAL(&cy_j, creal(cy_j99[0]));
NPY_CSETIMAG(&cy_j, cimag(cy_j99[0]));
NPY_CSETREAL(&cy_j, cy_j99[0].real());
NPY_CSETIMAG(&cy_j, std::imag(cy_j99[0]));
// F_FUNC(zbesj,ZBESJ)(CADDR(z), &v, &kode, &n, CADDR(cy_j), &nz, &ierr);
DO_SFERR("yv(jv):", &cy_j);
cy_y = rotate_jy(cy_y, cy_j, -v);
@ -568,7 +581,7 @@ npy_cdouble cbesy_wrap( double v, npy_cdouble z) {
return cy_y;
}
double cephes_yv(double v, double x);
extern "C" double cephes_yv(double v, double x);
double cbesy_wrap_real(double v, double x)
{
@ -596,9 +609,9 @@ npy_cdouble cbesy_wrap_e( double v, npy_cdouble z) {
int sign = 1;
npy_cdouble cy_y, cy_j;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex cy_j99[1] = { NAN };
double complex cy_y99[1] = { NAN };
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> cy_j99[1] = { NAN };
std::complex<double> cy_y99[1] = { NAN };
NPY_CSETREAL(&cy_j, NAN);
NPY_CSETIMAG(&cy_j, NAN);
@ -613,8 +626,8 @@ npy_cdouble cbesy_wrap_e( double v, npy_cdouble z) {
sign = -1;
}
nz = amos_besy(z99, v, kode, n, cy_y99, &ierr);
NPY_CSETREAL(&cy_y, creal(cy_y99[0]));
NPY_CSETIMAG(&cy_y, cimag(cy_y99[0]));
NPY_CSETREAL(&cy_y, cy_y99[0].real());
NPY_CSETIMAG(&cy_y, std::imag(cy_y99[0]));
DO_SFERR("yve:", &cy_y);
if (ierr == 2) {
if (npy_creal(z) >= 0 && npy_cimag(z) == 0) {
@ -627,8 +640,8 @@ npy_cdouble cbesy_wrap_e( double v, npy_cdouble z) {
if (sign == -1) {
if (!reflect_jy(&cy_y, v)) {
nz = amos_besj(z99, v, kode, n, cy_j99, &ierr);
NPY_CSETREAL(&cy_j, creal(cy_j99[0]));
NPY_CSETIMAG(&cy_j, cimag(cy_j99[0]));
NPY_CSETREAL(&cy_j, cy_j99[0].real());
NPY_CSETIMAG(&cy_j, std::imag(cy_j99[0]));
DO_SFERR("yv(jv):", &cy_j);
cy_y = rotate_jy(cy_y, cy_j, -v);
}
@ -654,8 +667,8 @@ npy_cdouble cbesk_wrap( double v, npy_cdouble z) {
int nz, ierr;
npy_cdouble cy;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex cy99[1] = { NAN };
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> cy99[1] = { NAN };
NPY_CSETREAL(&cy, NAN);
NPY_CSETIMAG(&cy, NAN);
@ -668,8 +681,8 @@ npy_cdouble cbesk_wrap( double v, npy_cdouble z) {
v = -v;
}
nz = amos_besk(z99, v, kode, n, cy99, &ierr);
NPY_CSETREAL(&cy, creal(cy99[0]));
NPY_CSETIMAG(&cy, cimag(cy99[0]));
NPY_CSETREAL(&cy, std::real(cy99[0]));
NPY_CSETIMAG(&cy, std::imag(cy99[0]));
DO_SFERR("kv:", &cy);
if (ierr == 2) {
if (npy_creal(z) >= 0 && npy_cimag(z) == 0) {
@ -688,8 +701,8 @@ npy_cdouble cbesk_wrap_e( double v, npy_cdouble z) {
int nz, ierr;
npy_cdouble cy;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex cy99[1] = { NAN };
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> cy99[1] = { NAN };
NPY_CSETREAL(&cy, NAN);
NPY_CSETIMAG(&cy, NAN);
@ -702,8 +715,8 @@ npy_cdouble cbesk_wrap_e( double v, npy_cdouble z) {
v = -v;
}
nz = amos_besk(z99, v, kode, n, cy99, &ierr);
NPY_CSETREAL(&cy, creal(cy99[0]));
NPY_CSETIMAG(&cy, cimag(cy99[0]));
NPY_CSETREAL(&cy, std::real(cy99[0]));
NPY_CSETIMAG(&cy, std::imag(cy99[0]));
DO_SFERR("kve:", &cy);
if (ierr == 2) {
if (npy_creal(z) >= 0 && npy_cimag(z) == 0) {
@ -768,8 +781,8 @@ npy_cdouble cbesh_wrap1( double v, npy_cdouble z) {
int sign = 1;
npy_cdouble cy;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex cy99[1] = { NAN };
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> cy99[1] = { NAN };
NPY_CSETREAL(&cy, NAN);
NPY_CSETIMAG(&cy, NAN);
@ -782,8 +795,8 @@ npy_cdouble cbesh_wrap1( double v, npy_cdouble z) {
sign = -1;
}
nz = amos_besh(z99, v, kode, m, n, cy99, &ierr);
NPY_CSETREAL(&cy, creal(cy99[0]));
NPY_CSETIMAG(&cy, cimag(cy99[0]));
NPY_CSETREAL(&cy, std::real(cy99[0]));
NPY_CSETIMAG(&cy, std::imag(cy99[0]));
DO_SFERR("hankel1:", &cy);
if (sign == -1) {
cy = rotate(cy, v);
@ -799,8 +812,8 @@ npy_cdouble cbesh_wrap1_e( double v, npy_cdouble z) {
int sign = 1;
npy_cdouble cy;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex cy99[1] = { NAN };
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> cy99[1] = { NAN };
NPY_CSETREAL(&cy, NAN);
NPY_CSETIMAG(&cy, NAN);
@ -813,8 +826,8 @@ npy_cdouble cbesh_wrap1_e( double v, npy_cdouble z) {
sign = -1;
}
nz = amos_besh(z99, v, kode, m, n, cy99, &ierr);
NPY_CSETREAL(&cy, creal(cy99[0]));
NPY_CSETIMAG(&cy, cimag(cy99[0]));
NPY_CSETREAL(&cy, std::real(cy99[0]));
NPY_CSETIMAG(&cy, std::imag(cy99[0]));
DO_SFERR("hankel1e:", &cy);
if (sign == -1) {
cy = rotate(cy, v);
@ -830,8 +843,8 @@ npy_cdouble cbesh_wrap2( double v, npy_cdouble z) {
int sign = 1;
npy_cdouble cy;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex cy99[1] = { NAN };
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> cy99[1] = { NAN };
NPY_CSETREAL(&cy, NAN);
NPY_CSETIMAG(&cy, NAN);
@ -844,8 +857,8 @@ npy_cdouble cbesh_wrap2( double v, npy_cdouble z) {
sign = -1;
}
nz = amos_besh(z99, v, kode, m, n, cy99, &ierr);
NPY_CSETREAL(&cy, creal(cy99[0]));
NPY_CSETIMAG(&cy, cimag(cy99[0]));
NPY_CSETREAL(&cy, std::real(cy99[0]));
NPY_CSETIMAG(&cy, std::imag(cy99[0]));
DO_SFERR("hankel2:", &cy);
if (sign == -1) {
cy = rotate(cy, -v);
@ -861,8 +874,8 @@ npy_cdouble cbesh_wrap2_e( double v, npy_cdouble z) {
int sign = 1;
npy_cdouble cy;
double complex z99 = CMPLX(npy_creal(z), npy_cimag(z));
double complex cy99[1] = { NAN };
std::complex<double> z99(npy_creal(z), npy_cimag(z));
std::complex<double> cy99[1] = { NAN };
NPY_CSETREAL(&cy, NAN);
NPY_CSETIMAG(&cy, NAN);
@ -875,8 +888,8 @@ npy_cdouble cbesh_wrap2_e( double v, npy_cdouble z) {
sign = -1;
}
nz = amos_besh(z99, v, kode, m, n, cy99, &ierr);
NPY_CSETREAL(&cy, creal(cy99[0]));
NPY_CSETIMAG(&cy, cimag(cy99[0]));
NPY_CSETREAL(&cy, cy99[0].real());
NPY_CSETIMAG(&cy, cy99[0].imag());
DO_SFERR("hankel2e:", &cy);
if (sign == -1) {
cy = rotate(cy, -v);

View File

@ -5,21 +5,17 @@
* arguments.
*/
#ifndef _AMOS_WRAPPERS_H
#define _AMOS_WRAPPERS_H
#pragma once
#include "Python.h"
#include "sf_error.h"
#include "npy_2_complexcompat.h"
#include "_amos.h"
#include <numpy/npy_math.h>
#define DO_SFERR(name, varp) \
do { \
if (nz !=0 || ierr != 0) { \
sf_error(name, ierr_to_sferr(nz, ierr), NULL);\
set_nan_if_no_computation_done(varp, ierr); \
} \
} while (0)
#ifdef __cplusplus
extern "C"
{
#endif /* __cplusplus */
int ierr_to_sferr( int nz, int ierr);
void set_nan_if_no_computation_done(npy_cdouble *var, int ierr);
@ -58,9 +54,9 @@ int cbesy_(doublecomplex *, double *, int *, int *, doublecomplex *, int *, doub
int cbesh_(doublecomplex *, double *, int *, int *, int *, doublecomplex *, int *, int *);
*/
#endif
#ifdef __cplusplus
} /* extern "C" */
#endif /* __cplusplus */

View File

@ -104,16 +104,10 @@ cephes_sources = [
'cephes/zetac.c'
]
amos_lib = static_library('_amos',
['_amos.c'],
c_args: use_math_defines,
include_directories: ['../_lib', '../_build_utils/src'],
)
ufuncs_sources = [
'_cosine.c',
'scaled_exp1.c',
'amos_wrappers.c',
'amos_wrappers.cpp',
'specfun_wrappers.cpp',
'sf_error.c'
]
@ -207,7 +201,6 @@ py3.extension_module('_ufuncs',
],
link_args: version_link_args,
link_with: [
amos_lib,
cephes_lib
],
install: true,
@ -267,7 +260,7 @@ py3.extension_module('cython_special',
uf_cython_gen.process(cython_special[6]), # cython_special.pyx
'_cosine.c',
'scaled_exp1.c',
'amos_wrappers.c',
'amos_wrappers.cpp',
'specfun_wrappers.cpp',
'sf_error.c'
],
@ -276,7 +269,6 @@ py3.extension_module('cython_special',
link_args: version_link_args,
dependencies: [np_dep, npymath_lib, lapack],
link_with: [
amos_lib,
cephes_lib
],
install: true,

File diff suppressed because it is too large


View File

@ -2171,9 +2171,13 @@ class TestFactorialFunctions:
_check(special.factorialk(n, 3, exact=exact), exp_nucleus[3])
@pytest.mark.parametrize("exact", [True, False])
@pytest.mark.parametrize("dtype", [
None, int, np.int8, np.int16, np.int32, np.int64,
np.uint8, np.uint16, np.uint32, np.uint64
])
@pytest.mark.parametrize("dim", range(0, 5))
def test_factorialx_array_dimension(self, dim, exact):
n = np.array(5, ndmin=dim)
def test_factorialx_array_dimension(self, dim, dtype, exact):
n = np.array(5, dtype=dtype, ndmin=dim)
exp = {1: 120, 2: 15, 3: 10}
assert_allclose(special.factorial(n, exact=exact),
np.array(exp[1], ndmin=dim))

View File

@ -445,7 +445,7 @@ BOOST_TESTS = [
data(jvp, 'bessel_j_prime_large_data_ipp-bessel_j_prime_large_data',
(0,1), 2, rtol=1e-11),
data(jvp, 'bessel_j_prime_large_data_ipp-bessel_j_prime_large_data',
(0,1j), 2, rtol=1e-11),
(0,1j), 2, rtol=2e-11),
data(kn, 'bessel_k_int_data_ipp-bessel_k_int_data', (0,1), 2, rtol=1e-12),

View File

@ -286,9 +286,10 @@ class TestSphericalInDerivatives(SphericalDerivativesTestCase):
return spherical_in(n, z, derivative=True)
def test_spherical_in_d_zero(self):
n = np.array([1, 2, 3, 7, 15])
n = np.array([0, 1, 2, 3, 7, 15])
spherical_in(n, 0, derivative=False)
assert_allclose(spherical_in(n, 0, derivative=True),
np.zeros(5))
np.array([0, 1/3, 0, 0, 0, 0]))
class TestSphericalKnDerivatives(SphericalDerivativesTestCase):

View File

@ -692,12 +692,10 @@ class beta_gen(rv_continuous):
return _boost._beta_sf(x, a, b)
def _isf(self, x, a, b):
with np.errstate(over='ignore'): # see gh-17432
return _boost._beta_isf(x, a, b)
return sc.betainccinv(a, b, x)
def _ppf(self, q, a, b):
with np.errstate(over='ignore'): # see gh-17432
return _boost._beta_ppf(q, a, b)
return sc.betaincinv(a, b, q)
def _stats(self, a, b):
return (

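This hunk replaces the Boost-backed `_beta_isf`/`_beta_ppf` wrappers with `scipy.special.betainccinv` and `betaincinv` (the former was added in SciPy 1.11). A quick sanity check, as a sketch, that `betaincinv` inverts the regularized incomplete beta function — which is exactly what `_ppf` needs, since `betainc(a, b, x)` is the beta CDF:

```python
import numpy as np
from scipy import special

a, b, x = 2.5, 4.0, 0.3
p = special.betainc(a, b, x)          # CDF of beta(a, b) at x
x_back = special.betaincinv(a, b, p)  # ppf: inverse of the CDF
print(x_back)  # ~0.3
```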

@@ -1305,8 +1305,9 @@ class zipf_gen(rv_discrete):
return a > 1
def _pmf(self, k, a):
k = k.astype(np.float64)
# zipf.pmf(k, a) = 1/(zeta(a) * k**a)
Pk = 1.0 / special.zeta(a, 1) / k**a
Pk = 1.0 / special.zeta(a, 1) * k**-a
return Pk
def _munp(self, n, a):
@@ -1404,7 +1405,8 @@ class zipfian_gen(rv_discrete):
return 1, n
def _pmf(self, k, a, n):
return 1.0 / _gen_harmonic(n, a) / k**a
k = k.astype(np.float64)
return 1.0 / _gen_harmonic(n, a) * k**-a
def _cdf(self, k, a, n):
return _gen_harmonic(k, a) / _gen_harmonic(n, a)
@@ -1658,7 +1660,7 @@ class yulesimon_gen(rv_discrete):
np.inf)
g1 = np.where(alpha <= 2, np.nan, g1)
g2 = np.where(alpha > 4,
alpha + 3 + ((alpha**3 - 49 * alpha - 22) /
alpha + 3 + ((11 * alpha**3 - 49 * alpha - 22) /
(alpha * (alpha - 4) * (alpha - 3))),
np.inf)
g2 = np.where(alpha <= 2, np.nan, g2)

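The root cause behind the `zipf.pmf`/`zipfian.pmf` hunks above (gh-20692): for integer `a`, `k**a` with int32 `k` is evaluated in int32 arithmetic and silently wraps around, so both pmfs now cast `k` to float64 first (and use `k**-a` rather than dividing). A minimal reproduction of the overflow, using only NumPy:

```python
import numpy as np

k = np.array([1000], dtype=np.int32)
a = 9  # integer exponent, as in zipf(9)

wrapped = k ** a                    # int32 arithmetic: 1000**9 wraps around
exact = k.astype(np.float64) ** -a  # the fix: cast first, then negative power

print(wrapped[0])                   # a wrapped int32 value, not 10**27
print(exact[0])                     # ~1e-27
```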

@@ -142,7 +142,7 @@ distdiscrete = [
['poisson', (0.6,)],
['randint', (7, 31)],
['skellam', (15, 8)],
['zipf', (6.5,)],
['zipf', (6.6,)],
['zipfian', (0.75, 15)],
['zipfian', (1.25, 10)],
['yulesimon', (11.0,)],


@@ -102,6 +102,7 @@ def _wilcoxon_iv(x, y, zero_method, correction, alternative, method, axis):
"an instance of `stats.PermutationMethod`.")
if method not in methods:
raise ValueError(message)
output_z = True if method == 'approx' else False
# logic unchanged here for backward compatibility
n_zero = np.sum(d == 0, axis=-1)
@@ -127,7 +128,7 @@ def _wilcoxon_iv(x, y, zero_method, correction, alternative, method, axis):
if 0 < d.shape[-1] < 10 and method == "approx":
warnings.warn("Sample size too small for normal approximation.", stacklevel=2)
return d, zero_method, correction, alternative, method, axis
return d, zero_method, correction, alternative, method, axis, output_z
def _wilcoxon_statistic(d, zero_method='wilcox'):
@@ -196,7 +197,7 @@ def _wilcoxon_nd(x, y=None, zero_method='wilcox', correction=True,
alternative='two-sided', method='auto', axis=0):
temp = _wilcoxon_iv(x, y, zero_method, correction, alternative, method, axis)
d, zero_method, correction, alternative, method, axis = temp
d, zero_method, correction, alternative, method, axis, output_z = temp
if d.size == 0:
NaN = _get_nan(d)
@@ -232,6 +233,6 @@ def _wilcoxon_nd(x, y=None, zero_method='wilcox', correction=True,
z = -np.abs(z) if (alternative == 'two-sided' and method == 'approx') else z
res = _morestats.WilcoxonResult(statistic=statistic, pvalue=p[()])
if method == 'approx':
if output_z:
res.zstatistic = z[()]
return res

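The point of the `output_z` flag above: with `method='auto'` the resolved method can differ per slice of an N-d input, and attaching `zstatistic` only when the resolved method was `'approx'` made slices return results of different sizes — which `np.apply_along_axis` cannot stack. A toy demonstration of that failure mode (the `per_slice` function is a hypothetical stand-in, not SciPy's internals):

```python
import numpy as np

def per_slice(d):
    # Stand-in for the buggy behavior: slices that hit the 'approx'
    # path returned an extra zstatistic field.
    if np.isnan(d).any():
        return np.array([0.0, 1.0, -0.5])  # statistic, pvalue, zstatistic
    return np.array([0.0, 1.0])            # statistic, pvalue only

A = np.ones((2, 51))
A[1, 5] = np.nan

raised = False
try:
    np.apply_along_axis(per_slice, -1, A)
except ValueError:
    # the output buffer is sized from the first slice's result,
    # so a longer result from a later slice cannot be assigned
    raised = True
print("ragged per-slice results raise:", raised)
```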

@@ -35,7 +35,7 @@ py3.extension_module('_ansari_swilk_statistics',
mvn_module = custom_target('mvn_module',
output: ['_mvn-f2pywrappers.f', '_mvnmodule.c'],
input: 'mvn.pyf',
command: [generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
command: [py3, generate_f2pymod, '@INPUT@', '-o', '@OUTDIR@']
)
py3.extension_module('_mvn',


@@ -91,7 +91,9 @@ def test_moments(distname, arg):
check_mean_expect(distfn, arg, m, distname)
check_var_expect(distfn, arg, m, v, distname)
check_skew_expect(distfn, arg, m, v, s, distname)
if distname not in ['zipf', 'yulesimon', 'betanbinom']:
with np.testing.suppress_warnings() as sup:
if distname in ['zipf', 'betanbinom']:
sup.filter(RuntimeWarning)
check_kurt_expect(distfn, arg, m, v, k, distname)
# frozen distr moments


@@ -351,6 +351,14 @@ class TestZipfian:
assert_allclose(zipfian.stats(a, n, moments="mvsk"),
[mean, var, skew, kurtosis])
def test_pmf_integer_k(self):
k = np.arange(0, 1000)
k_int32 = k.astype(np.int32)
dist = zipfian(111, 22)
pmf = dist.pmf(k)
pmf_k_int32 = dist.pmf(k_int32)
assert_equal(pmf, pmf_k_int32)
class TestNCH:
np.random.seed(2) # seeds 0 and 1 had some xl = xu; randint failed
@@ -627,3 +635,14 @@ class TestBetaNBinom:
# return float(fourth_moment/var**2 - 3.)
assert_allclose(betanbinom.stats(n, a, b, moments="k"),
ref, rtol=3e-15)
class TestZipf:
def test_gh20692(self):
# test that int32 data for k generates same output as double
k = np.arange(0, 1000)
k_int32 = k.astype(np.int32)
dist = zipf(9)
pmf = dist.pmf(k)
pmf_k_int32 = dist.pmf(k_int32)
assert_equal(pmf, pmf_k_int32)


@@ -13,7 +13,7 @@ import platform
from numpy.testing import (assert_equal, assert_array_equal,
assert_almost_equal, assert_array_almost_equal,
assert_allclose, assert_, assert_warns,
assert_array_less, suppress_warnings, IS_PYPY)
assert_array_less, suppress_warnings)
import pytest
from pytest import raises as assert_raises
@@ -4454,7 +4454,7 @@ class TestBeta:
assert_equal(stats.beta.pdf(1, a, b), 5)
assert_equal(stats.beta.pdf(1-1e-310, a, b), 5)
@pytest.mark.xfail(IS_PYPY, reason="Does not convert boost warning")
@pytest.mark.xfail(reason="Does not warn on special codepath")
def test_boost_eval_issue_14606(self):
q, a, b = 0.995, 1.0e11, 1.0e13
with pytest.warns(RuntimeWarning):


@@ -1656,6 +1656,22 @@ class TestWilcoxon:
assert_equal(np.round(res.pvalue, 2), res.pvalue) # n_resamples used
assert_equal(res.pvalue, ref.pvalue) # random_state used
def test_method_auto_nan_propagate_ND_length_gt_50_gh20591(self):
# When method!='approx', nan_policy='propagate', and a slice of
# a >1 dimensional array input contained NaN, the result object of
# `wilcoxon` could (under yet other conditions) return `zstatistic`
# for some slices but not others. This resulted in an error because
# `apply_along_axis` would have to create a ragged array.
# Check that this is resolved.
rng = np.random.default_rng(235889269872456)
A = rng.normal(size=(51, 2)) # length along slice > exact threshold
A[5, 1] = np.nan
res = stats.wilcoxon(A)
ref = stats.wilcoxon(A, method='approx')
assert_allclose(res, ref)
assert hasattr(ref, 'zstatistic')
assert not hasattr(res, 'zstatistic')
class TestKstat:
def test_moments_normal_distribution(self):


@@ -1,4 +1,3 @@
#!/usr/bin/env python3
"""
Process f2py template files (`filename.pyf.src` -> `filename.pyf`)
@@ -290,9 +289,9 @@ def main():
cwd=os.getcwd())
out, err = p.communicate()
if not (p.returncode == 0):
raise RuntimeError(f"Writing {args.outfile} with f2py failed!\n"
f"{out}\n"
r"{err}")
raise RuntimeError(f"Processing {fname_pyf} with f2py failed!\n"
f"{out.decode()}\n"
f"{err.decode()}")
if __name__ == "__main__":

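The old error message above had two distinct bugs that this hunk fixes: `r"{err}"` is a raw string literal, not an f-string, so it never interpolated anything, and `out`/`err` from `Popen.communicate()` are bytes that need decoding before being embedded in text. A quick illustration:

```python
err = b"f2py: some error"

buggy = r"{err}"           # raw string: no interpolation at all
fixed = f"{err.decode()}"  # decode the bytes, then interpolate

print(buggy)  # {err}
print(fixed)  # f2py: some error
```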

@@ -11,7 +11,8 @@ target-version = "py39"
# and `B028` which checks that warnings include the `stacklevel` keyword.
# `B028` added in gh-19623.
select = ["E", "F", "PGH004", "UP", "B028"]
ignore = ["E741"]
# UP031 should be enabled once someone fixes the errors.
ignore = ["E741", "UP031", "UP032"]
# Allow unused variables when underscore-prefixed.
dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"


@@ -13,8 +13,8 @@ from tempfile import mkstemp, gettempdir
from urllib.request import urlopen, Request
from urllib.error import HTTPError
OPENBLAS_V = '0.3.26'
OPENBLAS_LONG = 'v0.3.26'
OPENBLAS_V = '0.3.27'
OPENBLAS_LONG = 'v0.3.27'
BASE_LOC = 'https://anaconda.org/multibuild-wheels-staging/openblas-libs'
NIGHTLY_BASE_LOC = (
'https://anaconda.org/scientific-python-nightly-wheels/openblas-libs'
@@ -231,6 +231,29 @@ def extract_tarfile_to(tarfileobj, target_path, archive_path):
yield member
tarfileobj.extractall(target_path, members=get_members())
reformat_pkg_file(target_path=target_path)
def reformat_pkg_file(target_path):
# attempt to deal with:
# https://github.com/scipy/scipy/pull/20362#issuecomment-2028517797
# NOTE: this machinery can be removed in the future because the
# problem is a simple typo fixed upstream now at:
# https://github.com/OpenMathLib/OpenBLAS/pull/4594
for root, dirs, files in os.walk(target_path):
for name in files:
if name.endswith(".pc") and "openblas" in name:
pkg_path = os.path.join(root, name)
new_pkg_lines = []
with open(pkg_path) as pkg_orig:
for line in pkg_orig:
if line.startswith("Libs:"):
new_line = line.replace("$(libprefix}", "${libprefix}")
new_pkg_lines.append(new_line)
else:
new_pkg_lines.append(line)
with open(pkg_path, "w") as new_pkg:
new_pkg.writelines(new_pkg_lines)
def make_init(dirname):

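The `reformat_pkg_file` helper above works around an upstream typo in OpenBLAS's generated pkg-config file, where `$(libprefix}` mixes `$(...)` and `${...}` syntax and breaks variable expansion. The core substitution, shown on a sample `Libs:` line (the exact line contents are illustrative, not copied from a real `.pc` file):

```python
line = "Libs: -L$(libprefix}/lib -lopenblas\n"
fixed = line.replace("$(libprefix}", "${libprefix}")
print(fixed)  # Libs: -L${libprefix}/lib -lopenblas
```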

@@ -5,9 +5,9 @@ import argparse
MAJOR = 1
MINOR = 13
MICRO = 0
MICRO = 2
ISRELEASED = False
IS_RELEASE_BRANCH = False
IS_RELEASE_BRANCH = True
VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)


@@ -34,3 +34,6 @@ else
unzip $target -d /c/opt/
cp /c/opt/64/bin/*.dll /c/opt/openblas/openblas_dll
fi
# attempt to deal with:
# https://github.com/scipy/scipy/pull/20362#issuecomment-2028517797
python -c "import tools.openblas_support as obs; obs.reformat_pkg_file('C:/opt/')"