Compare commits

...

89 Commits

Author SHA1 Message Date
Ralf Gommers 6036debefc
Merge pull request #21431 from tylerjereddy/treddy_prep_1.14.2
REL, MAINT: prep for 1.14.2
2024-08-22 08:28:06 +02:00
Tyler Reddy 8381001d52 REL, MAINT: prep for 1.14.2
* Prepare for SciPy `1.14.2`, just in case.

[skip cirrus]
2024-08-21 19:26:24 -06:00
Tyler Reddy 92d2a85927 REL: 1.14.1 rel commit [wheel build]
* SciPy `1.14.1` release commit.

[wheel build]
2024-08-20 16:43:12 -06:00
Tyler Reddy 85623a1fc9
Merge pull request #21362 from tylerjereddy/treddy_1.14.1_backports 2024-08-20 16:16:23 -06:00
Tyler Reddy d924005fb7 MAINT: PR 21362 revisions [wheel build]
* Revert some commits related to `setuptools` breakage.
See gh-21416.

[wheel build]
2024-08-20 14:10:13 -06:00
Tyler Reddy b901a4e5b3 MAINT, CI: PR 21362 revisions [wheel build]
* Restore build isolation for Python 3.13 wheel builds.

[wheel build]
2024-08-19 16:33:55 -06:00
Tyler Reddy 2a7ec60409 MAINT, BLD: PR 21362 revisions [wheel build]
* Try setting an upper bound to `setuptools` version
in `pyproject.toml` (this is probably terrible), to hack
around gh-21416.

[wheel build]
2024-08-19 16:24:11 -06:00
Tyler Reddy f4f084d10a MAINT, CI: PR 21362 revisions [wheel build]
* Try adding a workaround in the wheel builds
for gh-21416.

[wheel build]
2024-08-19 16:02:24 -06:00
Tyler Reddy b712fc6e85 DOC: update 1.14.1 relnotes [wheel build]
* Update the SciPy `1.14.1` release notes after
several more backports.

[wheel build]
2024-08-19 13:51:33 -06:00
Warren Weckesser cdd5aca303 MAINT: special: Accommodate changed integer handling in NumPy 2.0. (#21401)
* Add the 'p' character to the typecodes recognized by the script
  `_generate_pyx.py`. This typecode corresponds to the integer types
  `npy_intp` and `Py_ssize_t`.
* In Cython code:
  * Change the fused type `dl_number_t` to `dlp_number_t`, and
    include `Py_ssize_t` in the types it fuses.
  * Change other uses of `long` to `Py_ssize_t`.
* In the type signatures of ufuncs configured in `functions.json`,
  change occurrences of `i` or `l` in an input to `p`.
* Change use of `int` in the wrappers of `xsf` functions to `Py_ssize_t`.
  The thin wrappers cast this to `int` before calling the `xsf` function.
2024-08-19 13:44:05 -06:00
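The typecode mapping above can be sanity-checked from Python; a small illustrative check (not part of SciPy's generator script):

```python
import numpy as np

# NumPy's 'p' type character is the pointer-sized signed integer,
# i.e. np.intp -- the Python-side counterpart of the C-level
# npy_intp / Py_ssize_t mentioned in the commit message.
same = np.dtype('p') == np.dtype(np.intp)
print(same)
```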
Andrew Nelson 0f91838d8a BLD: cp313 wheels on `manylinux_aarch64` (#21409) 2024-08-19 12:16:29 -06:00
Tyler Reddy 6dd0b002bc MAINT, CI: wheel build changes [wheel build]
* Additional modified backporting from
gh-21000

Co-authored-by: Andrew Nelson <andyfaff@gmail.com>
2024-08-19 12:11:07 -06:00
Tyler Reddy 6d549b36cb CI, BLD: 3.13 wheel changes [wheel build]
* Custom backport of gh-21000 to support Python `3.13`
wheel builds.

Co-authored-by: Andrew Nelson <andyfaff@gmail.com>
2024-08-19 12:10:50 -06:00
Lucas Colley 1584e476ce MAINT: adapt to array-api-strict 2.0 (#21074)
* MAINT: adapt to array-api-strict 2.0
* DEV: enforce array-api-strict>=2.0

[skip cirrus] [skip circle]
2024-08-16 16:03:48 -06:00
Daniel Schmitz 5f8c2b66b8 BUG: sparse.linalg: Update `SuperLU` to fix unfilterable output for singular matrices (#21172)
[ci skip]
2024-08-16 13:11:05 -06:00
Tyler Reddy eb8e432ae7 MAINT, CI: PR 21362 revisions
* Pin `pybind11` in the "oldest gcc" CI job
where a newer version was getting pulled in and
causing problems given the upper bound in our
`pyproject.toml`.

[skip cirrus]
2024-08-15 17:40:59 -06:00
Dan Schult 35111a05a3 BUG: sparse: Fix 1D specialty hstack codes (#21400)
* add more hstack/vstack tests to cover coo and csr special cases

* change shape to _shape_as_2d where needed in specialty block() functions.

[skip cirrus]
2024-08-15 15:03:19 -06:00
Tyler Reddy 196f8c38cb MAINT, DOC: PR 21362 revisions
* Two `stats` tutorial examples had
floating point expected values that could
vary, so allow for this variation on the
release branch to prevent `smoke-tutorials`
failures.

* One of the examples/reasons is explained here:
https://github.com/scipy/scipy/pull/21362#issuecomment-2292214328

[docs only]
2024-08-15 14:55:19 -06:00
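A tolerant comparison of the kind these doctest fixes rely on can be sketched as follows (the value is illustrative, not taken from the SciPy tutorial):

```python
import math

# Instead of a doctest pinning an exact float that can vary across
# platforms or RNG draws, compare against a reference with a tolerance.
observed = 0.30000000000000004
close = math.isclose(observed, 0.3, rel_tol=1e-9)
print(close)  # True
```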
ਗਗਨਦੀਪ ਸਿੰਘ (Gagandeep Singh) 34f5674866 MAINT: Unskip `scipy.misc.test.test_decorator` for Python 3.13+ (#21104)
Closes gh-19572

[docs only]
2024-08-14 20:22:53 -06:00
Evgeni Burovski 921081fad3 DOC: stats: silence the doctest error
[docs only]
2024-08-14 20:15:43 -06:00
Dan Schult 223c7ebc02 BUG: sparse: fix failing doctests that print sparse (#21244)
* fix doctests using sparse in csgraph and io

* ensure no mixed spaces and tabs

[docs only]
2024-08-14 18:38:11 -06:00
Tyler Reddy 82145b536b DOC: sphinx upper bound
* Upper bounding the version of Sphinx used on the maintenance
branch seems to help avoid a doc build issue locally.

[docs only]
2024-08-13 17:22:43 -06:00
Melissa Weber Mendonça 8199e4c6ac DOC: Suppress autosummary module names warning
May be fixed by upstream in the future.

[docs only]
2024-08-13 16:59:44 -06:00
Adam Turner 5e40b45a72 Fix type of ``html_sidebars`` value in ``conf.py`` 2024-08-13 16:58:30 -06:00
Warren Weckesser c836f9701e DOC: Don't use Sphinx 8.0.0 until gh-21323 is resolved.
[docs only]
2024-08-13 16:46:11 -06:00
Tibor Völcker e787fcb5c7 STY: odr: Remove double newlines 2024-08-13 16:36:38 -06:00
Tibor Völcker 8ca58a6190 BUG: odr: fix pickling
The `__getattr__` method of the `odr` objects led to a recursion error
when unpickling. This is because, when unpickling, the instance is
checked for a `__setstate__` method before any attributes are
added.
The previous implementation assumed that the attributes were already set.
Closes gh-21306.
2024-08-13 16:36:30 -06:00
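The recursion and its fix can be sketched with a minimal stand-in class (hypothetical names, not the actual `odr` implementation):

```python
import pickle

class Output:
    """Sketch of an odr-style result object (illustrative only)."""

    def __init__(self):
        self.attrs = {"beta": [1.0, 2.0]}

    def __getattr__(self, name):
        # Without this guard, pickle's probe for __setstate__ during
        # unpickling would land here before any attributes exist, and
        # a bare `self.attrs` lookup would re-enter __getattr__
        # forever -> RecursionError.
        attrs = self.__dict__.get("attrs")
        if attrs is None or name not in attrs:
            raise AttributeError(name)
        return attrs[name]

restored = pickle.loads(pickle.dumps(Output()))
print(restored.beta)  # [1.0, 2.0]
```

Raising `AttributeError` for unknown names lets pickle fall back to its default state restoration instead of recursing.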
Tyler Reddy 8a753e4631 MAINT, DOC: update 1.14.1 relnotes
* Update the SciPy `1.14.1` release notes
following backport activity, and to clarify
that this release adds support for Python `3.13`.

* Also update `pyproject.toml` for `3.13` wheel
builds.

[skip cirrus]
2024-08-11 14:06:44 -06:00
Ewout ter Hoeven 712e2b2b39 CI: Update to cibuildwheel 2.20.0
cibuildwheel 2.20.0 uses the ABI-stable Python 3.13.0rc1 and builds Python 3.13 wheels by default.
2024-08-11 13:57:53 -06:00
Lucas Colley c45c03bbe1 BUG: stats: adapt to `np.floor` type promotion removal (#21283)
* BUG: stats: adapt to `np.floor` type promotion removal

`rv_discrete._cdf` relied on `np.floor` promoting its integer input to `np.float64`. This is no longer the case since numpy/numpy#26766.
2024-08-11 13:55:09 -06:00
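A defensive version of the pattern this fix addresses; the explicit cast keeps the result floating-point on every NumPy version:

```python
import numpy as np

# Under newer NumPy (numpy/numpy#26766), np.floor no longer promotes
# integer input to float64. Casting explicitly before calling floor
# keeps downstream arithmetic in floating point regardless of version.
k = np.asarray([1, 2, 3])                      # integer dtype
k_float = np.floor(np.asarray(k, dtype=np.float64))
print(k_float.dtype)                           # float64
```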
Ralf Gommers 314393d3f1 MAINT: sparse.linalg: update `SuperLU/colamd.c` to fix license issue (#21274)
This applies the upstream commit (e08261a7c9) that resolved a problem
with this file - it mentioned both BSD and LGPL licenses.

Closes gh-21234
2024-08-11 13:53:16 -06:00
Lucas Colley f07e96cdf9 BUG: fft: fix array-like input (#21209) 2024-08-11 13:52:13 -06:00
Ralf Gommers 7a8be336d6 BUG: signal: fix crash under free-threaded CPython in medfilt2d
This also improves the logic; neither `setjmp`/`longjmp` nor the
`check_malloc` function was a good way to handle errors
when memory allocation failed.
It looks like that approach was introduced because the type-generic
`*_medfilt2d` functions defined through a macro cannot have an `int`
return type (only `void` for macros). However, an integer
error value can instead be passed to the function as an extra
argument.

Closes gh-21142
2024-08-11 13:51:01 -06:00
Dan Schult 8532b8b086 BUG: sparse: fix 1D for vstack/hstack and Improve 1D error msgs in h/v-stacks/kron/kronsum (#21108)
* fix 1D for vstack/hstack and Improve 1D err msg stacks & kron/kronsum

* improve error messages in kronsum

Co-authored-by: CJ Carey <perimosocordiae@gmail.com>

* adjust exception test match string

---------

Co-authored-by: CJ Carey <perimosocordiae@gmail.com>
2024-08-11 13:49:53 -06:00
Dan Schult b9da365fd8 add release note for 1.14 re: sparse array iteration
[docs only]
2024-08-11 13:49:12 -06:00
Nick ODell 7477e8329a BLD: Enable open_memstream on newer glibc
On glibc >= 2.10, if `-std=c17` is set and no feature test macros are
defined, glibc will not declare `open_memstream`; musl behaves
similarly. This caused SciPy to unnecessarily open a temporary
file when capturing the error stream could have been done in memory.
2024-08-11 13:47:28 -06:00
h-vetinari 1e94f6badc MAINT: fix typo in small_dynamic_array.h (#21069)
on clang-19, this causes:
```
../scipy/_lib/_uarray/small_dynamic_array.h(145,18): error: reference to non-static member function must be called
  145 |     size_ = copy.size;
      |             ~~~~~^~~~
1 error generated.
```
I'm not sure how previous versions (much less other compilers) dealt with this,
as it seems that the `SmallDynamicArray` class has no `size` member or field at all.
2024-08-11 13:46:33 -06:00
Albert Steppi 4300542ad1 BUG: special: remove type punning (#21067)
to avoid warnings in LTO builds
2024-08-11 13:45:25 -06:00
Albert Steppi c86674041a BUG: special: Fixes for pro_rad1 (#21062)
* BUG: Fix indices in sphj

* Fix sphj argument order

* Fix translation error from original Fortran

* TST: Add test for pro_rad1

* Bump test tolerance for linux 32bit
2024-08-11 13:44:28 -06:00
Ralf Gommers a959627fef
Merge pull request #21038 from tylerjereddy/treddy_prep_1.14.1
REL, MAINT: prep for 1.14.1
2024-06-25 12:05:34 +02:00
Tyler Reddy 3403880a9d REL, MAINT: prep for 1.14.1
* Prepare for SciPy `1.14.1`.

[docs only]
2024-06-24 16:03:14 -06:00
Tyler Reddy 87c46641a8 REL: SciPy 1.14.0 rel commit [wheel build]
SciPy `1.14.0` release commit.

[wheel build]
2024-06-24 12:58:56 -06:00
Tyler Reddy ac63c817c6
Merge pull request #21019 from tylerjereddy/treddy_1.14.0_final_backports
MAINT: "final" backports for 1.14.0
2024-06-24 12:40:41 -06:00
Tyler Reddy 541003fb4d DOC: update 1.14 relnotes [wheel build]
* Update SciPy `1.14.0` release notes following more
backport activity.

[wheel build]
2024-06-24 11:31:50 -06:00
Matt Haberland a08d1ffe9e MAINT: stats.gstd: warn when an observation is <= 0 2024-06-23 20:28:31 -06:00
Jake Bowhay a4f711937e DEP: special.perm: deprecate non-integer `N` and `k` with `exact=True` (#20909)
Co-authored-by: Lucas Colley <lucas.colley8@gmail.com>
Co-authored-by: h-vetinari <h.vetinari@gmx.com>
Co-authored-by: Matt Haberland <mhaberla@calpoly.edu>
2024-06-23 20:27:11 -06:00
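For integer inputs, the exact value that `perm(N, k, exact=True)` computes is the ordinary k-permutation count, which the stdlib also provides:

```python
import math

# Exact number of k-permutations of N items: P(N, k) = N! / (N - k)!.
# scipy.special.perm(5, 2, exact=True) computes the same value for
# integer inputs; the deprecation restricts exact=True to exactly
# this integer case.
count = math.perm(5, 2)
print(count)  # 20
```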
Matt Haberland 73339fbb13 TST: stats: fix use of np.testing to compare xp-arrays 2024-06-21 14:55:27 -06:00
Tyler Reddy 0542df61c5 DOC: Update 1.14.0 release notes
* Update the SciPy `1.14.0` release notes
following a series of backports.
2024-06-21 11:55:31 -06:00
Matt Haberland f8e530cb3b STY: _lib._util: silence mypy
[skip cirrus] [skip circle]
2024-06-21 11:47:50 -06:00
Ilhan Polat e2cbda2036 TST:sparse.linalg: Skip test due to sensitivity to numerical noise 2024-06-21 11:38:38 -06:00
H. Vetinari 4fb2e6a6d4 TST: robustify test_nnls_inner_loop_case1 2024-06-21 11:37:46 -06:00
Ralf Gommers 8cb15897ca
Merge pull request #20970 from tylerjereddy/treddy_prep_1.14.0_final
REL: set 1.14.0 rc3 unreleased
2024-06-18 08:46:28 +02:00
Tyler Reddy 232615428e REL: set 1.14.0rc3 unreleased
* Set SciPy version to `1.14.0rc3` unreleased.

[ci skip] [skip ci] [skip circle]
2024-06-17 20:53:33 -06:00
Tyler Reddy eb8edc2254 REL: 1.14.0rc2 rel commit [wheel build]
SciPy `1.14.0rc2` release commit.

[wheel build]
2024-06-15 15:48:30 -06:00
Tyler Reddy 1d7765156b
Merge pull request #20933 from tylerjereddy/treddy_1.14.0rc2_backports
WIP, MAINT: 1.14.0rc2 backports
2024-06-15 15:33:09 -06:00
Tyler Reddy 21da6a9aee CI: PR 20933 wheel test [wheel build]
* An empty commit to test wheel builds prior
to SciPy `1.14.0rc2`.

[wheel build]
2024-06-14 16:00:27 -06:00
Tyler Reddy 877b1a6479 DOC: update 1.14.0 release notes 2024-06-14 14:42:38 -06:00
Luiz Eduardo Amaral 908f222ad1 DOC: update doctests to satisfy scipy-doctests==1.2.0 2024-06-14 14:30:41 -06:00
Tyler Reddy a2270c21ac DOC: update 1.14.0 release notes 2024-06-14 12:44:08 -06:00
Evgeni Burovski 990014f4f8 DOC: interpolate: fix the example in the tutorial 2024-06-14 12:38:43 -06:00
Evgeni Burovski e1c30dfc80 MAINT: adapt to a scipy-doctests change
Allow the filter-warnings context manager to not receive
a (doc)test.

This is useful to filter out DeprecationWarnings which are
emitted during test discovery. This is needed, e.g., for the
NumPy 1/2 transition, where things deprecated in NumPy 2
emit warnings on import.
2024-06-14 12:38:36 -06:00
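The general pattern, with a hypothetical stand-in for test discovery:

```python
import warnings

def discover_tests():
    # Stand-in for doctest collection that imports a module which
    # warns about a deprecated API at import time (illustrative only).
    warnings.warn("np.foo is deprecated", DeprecationWarning)
    return ["doctest-1"]

with warnings.catch_warnings():
    # Ignore the warning even though no individual (doc)test is
    # associated with it yet -- the situation described above.
    warnings.simplefilter("ignore", DeprecationWarning)
    collected = discover_tests()
print(collected)  # ['doctest-1']
```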
Tyler Reddy 47f7b181b1 DOC: update 1.14.0 release notes
* Update the SciPy `1.14.0` release notes
following a series of backports.
2024-06-14 11:58:32 -06:00
H. Vetinari 4f817ee1a6 loosen tolerance in test_krandinit slightly to pass with MKL
[skip ci] [ci skip] [skip circle]
2024-06-14 11:19:41 -06:00
H. Vetinari 5a70629407 loosen tolerance in test_x0_working to pass with alternate BLAS backends 2024-06-14 11:18:06 -06:00
Albert Steppi fcdc13ffcb BUG/BLD: special: Ensure symbols in `sf_error_state` shared library are exported/imported on MSVC (#20937)
* Add compile flag fpp for ifx, not just ifort

* Avoid designated initializers

- This is a C++20 feature which just happens to work in every
supported compiler except MSVC at the moment

* Make sure symbols in shared library are exported/imported on MSVC

Co-authored-by: h-vetinari <h.vetinari@gmx.com>
Co-authored-by: Ralf Gommers <ralf.gommers@gmail.com>

[skip ci] [ci skip] [skip circle]
2024-06-14 10:51:14 -06:00
Matt Haberland ab6ccd26d2 TST: adjust tolerances of failing tests
[ci skip] [skip ci] [skip circle]
2024-06-14 10:41:43 -06:00
Tyler Reddy 03247bd4c7 Revert "Merge pull request #18926 from andfoy/simplify_symiir"
This reverts commit 7febfdd1c9, reversing
changes made to 8ce393f117.

[ci skip] [skip ci] [skip circle]
2024-06-14 10:22:39 -06:00
Matt Haberland 764275a70d DOC: `array_api.rst`: update 1.14 functions with array API support (#20936)
[skip ci] [ci skip] [skip circle]
2024-06-11 13:54:28 -06:00
ਗਗਨਦੀਪ ਸਿੰਘ (Gagandeep Singh) 43b8f7d72e MAINT: special: fix msvc build by using new and delete in AMOS (#20920) 2024-06-11 13:51:04 -06:00
Ralf Gommers f7773213f6 BLD: optimize: use hidden visibility for static HiGHS libraries
This is a follow-up to gh-20477, where HiGHS wasn't touched on
purpose to avoid a merge conflict in another PR.

Minor useful side benefit: it shrinks the size of `_highs_wrapper.so`
by 0.4%

Closes gh-20256

[ci skip] [skip ci] [skip circle]
2024-06-10 13:20:00 -06:00
Evgeni Burovski 87d68fafa8 TST: linalg: bump tolerance in TestEig::test_singular
Some assertions have configurable atol/rtol, but one assertion had them
hardcoded, which caused tolerance problems in a Debian build with
reference LAPACK.

closes https://github.com/scipy/scipy/issues/20911
2024-06-10 13:02:59 -06:00
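The fix's general shape is to make the tolerances parameters rather than hardcoding them (function name and values here are illustrative):

```python
import numpy as np

def check_eigvals(computed, expected, rtol=1e-7, atol=0.0):
    # Pass atol/rtol through to the assertion instead of hardcoding
    # them, so builds against alternate BLAS/LAPACK implementations
    # can loosen the tolerance at the call site.
    np.testing.assert_allclose(computed, expected, rtol=rtol, atol=atol)
    return True

expected = np.array([1.0, 2.0])
computed = expected * (1 + 1e-9)   # tiny backend-dependent deviation
passed = check_eigvals(computed, expected)
print(passed)
```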
Melissa Weber Mendonça c93d919707 DOC: Write API reference titles in monospace font
Also splits module, class and object information for better readability.

[docs only]
2024-06-10 13:00:45 -06:00
Pamphile Roy de82ac88fe DOC: mailing list to forum (#20881) 2024-06-10 12:54:38 -06:00
Andrew Nelson dff4dd51d7 BLD: test delocate works by removing original lib (#20870)
Closes gh-20852

Co-authored-by: Matti Picus <matti.picus@gmail.com>
2024-06-10 12:50:13 -06:00
Matt Haberland 502befea52 MAINT: optimize.differential_evolution: add fail_slow exception
[skip ci]
2024-06-10 12:40:59 -06:00
Matt Haberland fa6675fd10 DOC: stats.bootstrap: warning admonition -> versionchanged
[skip cirrus] [skip circle]
2024-06-10 12:40:52 -06:00
Matt Haberland 3394b89e6a MAINT: stats.bootstrap: FutureWarning about broadcasting 2024-06-10 12:40:43 -06:00
h-vetinari d385aa4be1 DEP: deprecate trapz alias of `stats.trapezoid` distribution (#20828)
Co-authored-by: Jake Bowhay <60778417+j-bowhay@users.noreply.github.com>
2024-06-10 12:36:58 -06:00
ਗਗਨਦੀਪ ਸਿੰਘ (Gagandeep Singh) b07a143209 BLD: Warning fix from ``scipy/special/special/gamma.h`` (#20820)
* DEV: Rename gamma to gamma_complex for complex type

* DEV: Use gamma_complex

* DEV: Use template for std::complex

* DEV: Shift gamma to gamma.h
2024-06-10 12:22:01 -06:00
Samuel Le Meur-Diebolt 216a53d431 MAINT: stats._axis_nan_policy: fix `keepdims` when result objects have Python ints (#20734)
* MAINT: stats._axis_nan_policy: fix `keepdims` when result objects have Python ints 

---------

Co-authored-by: Matt Haberland <mhaberla@calpoly.edu>
2024-06-10 12:19:19 -06:00
Andrew Valentine b997dabed1 BUG: integrate: make `select_initial_step` aware of integration interval (#17348)
* Added t_bound optional argument to select_initial_step.

* Made t_bound a required argument for select_initial_step

* Check that solve_ivp honours specified bounds

* Add max_step arg to select_initial_step

* Fix broken test

* Test max_step honoured by select_initial_step

* Fixed error in docstring order

* Tidying in response to code review

* Fixed some overly long lines.

* Fixed bug in test_zero_interval and updated assert statements to reflect standard.

* Fixed bug in test_zero_interval to check against  last time step

* Minor cosmetic updates

* Minor cosmetic updates to function call

* Apply minor formatting fixes from code review

Co-authored-by: Jake Bowhay <60778417+j-bowhay@users.noreply.github.com>

* Formatting fixes

---------

Co-authored-by: Jake Bowhay <60778417+j-bowhay@users.noreply.github.com>
2024-06-10 12:17:23 -06:00
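A minimal sketch of the interval-awareness this PR adds (hypothetical helper; SciPy's `select_initial_step` folds `t_bound` and `max_step` into its error-based estimate rather than clamping afterwards):

```python
def clamp_initial_step(h0, t0, t_bound, max_step):
    # Whatever error-based estimate h0 the heuristic produces, the
    # first step must not overshoot the integration interval
    # [t0, t_bound] or exceed the user-supplied max_step.
    return min(h0, abs(t_bound - t0), max_step)

# A step of 0.5 would overshoot an interval of length 0.1:
h = clamp_initial_step(0.5, t0=0.0, t_bound=0.1, max_step=1.0)
print(h)  # 0.1
```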
Tyler Reddy a2a686a02e
Merge pull request #20851 from andyfaff/cob
DOC: add cobyqa website reference (#20841)
2024-06-09 20:35:56 -06:00
Andrew Nelson 44f78c09ad DOC: add cobyqa website reference (#20841)
Co-authored-by: Zaikun ZHANG <zaikunzhang@gmail.com>
2024-06-01 13:54:42 +10:00
Dan Schult 67d743ee8c
DOC: sparse: 1.14.0 release notes additions (#20838) 2024-05-30 19:38:41 +01:00
Tyler Reddy 5fe5704cd3
Merge pull request #20839 from tylerjereddy/treddy_version_bump_rc2
REL: set 1.14.0rc2 unreleased
2024-05-30 11:36:26 -06:00
Tyler Reddy 2d4a7d7898 REL: set 1.14.0rc2 unreleased
* Set SciPy version to `1.14.0rc2` unreleased.

[ci skip] [skip ci] [skip circle]
2024-05-30 10:53:12 -06:00
Tyler Reddy 3aa7154252 REL: 1.14.0rc1 rel commit [wheel build]
* SciPy `1.14.0rc1` release commit

[wheel build]
2024-05-29 17:57:42 -06:00
Tyler Reddy 3efd40b3d5
Merge pull request #20831 from tylerjereddy/treddy_deps_1.14
MAINT: version pins/prep for 1.14.0rc1
2024-05-29 16:40:48 -06:00
Tyler Reddy 0ceb0bcae0 MAINT: version pins/prep for 1.14.0rc1 [wheel build]
* Draft in the version pins and related
adjustments prior to the release of SciPy
`1.14.0rc1`.

[wheel build] [skip circle]
2024-05-29 14:51:56 -06:00
190 changed files with 3343 additions and 2387 deletions

View File

@@ -3,9 +3,9 @@ contact_links:
- name: Stack Overflow
url: https://stackoverflow.com/questions/tagged/scipy
about: Please ask and answer usage questions on Stack Overflow
- name: Developer Mailing list
url: https://mail.python.org/mailman3/lists/scipy-dev.python.org/
about: Development discussions and announcements on the mailing list
- name: Developer Forum
url: https://discuss.scientific-python.org/c/contributor/scipy
about: Development discussions and announcements on the forum
- name: Blank issue
url: https://github.com/scipy/scipy/issues/new
about: Please note that other templates should be used in most cases

View File

@@ -253,7 +253,7 @@ jobs:
- name: Setup Python build deps
run: |
pip install build meson-python ninja pythran pybind11 cython "numpy>=2.0.0b1"
pip install build meson-python ninja pythran "pybind11<2.13.0" cython "numpy>=2.0.0b1"
- name: Build wheel and install
run: |

View File

@@ -81,13 +81,12 @@ jobs:
# so easier to separate out here.
- [ubuntu-22.04, manylinux, x86_64, "", ""]
- [ubuntu-22.04, musllinux, x86_64, "", ""]
- [macos-12, macosx, x86_64, openblas, "10.9"]
- [macos-12, macosx, x86_64, openblas, "10.13"]
- [macos-13, macosx, x86_64, accelerate, "14.0"]
- [macos-14, macosx, arm64, openblas, "12.0"]
- [macos-14, macosx, arm64, openblas, "12.3"]
- [macos-14, macosx, arm64, accelerate, "14.0"]
- [windows-2019, win, AMD64, "", ""]
python: [["cp310", "3.10"], ["cp311", "3.11"], ["cp312", "3.12"]]
python: [["cp310", "3.10"], ["cp311", "3.11"], ["cp312", "3.12"], ["cp313", "3.13"]]
# python[0] is used to specify the python versions made by cibuildwheel
env:
@@ -116,13 +115,13 @@ jobs:
if: ${{ runner.os == 'Windows' && env.IS_32_BIT == 'false' }}
- name: windows - set PKG_CONFIG_PATH
if: ${{ runner.os == 'Windows' }}
run: |
$env:CIBW = "${{ github.workspace }}"
# It seems somewhere in the env passing, `\` is not
# passed through, so convert it to '/'
$env:CIBW=$env:CIBW.replace("\","/")
echo "CIBW_ENVIRONMENT_WINDOWS=PKG_CONFIG_PATH=$env:CIBW" >> $env:GITHUB_ENV
if: ${{ runner.os == 'Windows' }}
echo "CIBW_ENVIRONMENT=PKG_CONFIG_PATH=$env:CIBW" >> $env:GITHUB_ENV
- name: Setup macOS
if: startsWith( matrix.buildplat[0], 'macos-' )
@@ -135,8 +134,8 @@ jobs:
echo "PATH=$PATH" >> "$GITHUB_ENV"
LIB_PATH=$(dirname $(gfortran --print-file-name libgfortran.dylib))
fi
if [[ ${{ matrix.buildplat[4] }} == '10.9' ]]; then
# Newest version of Xcode that supports macOS 10.9
if [[ ${{ matrix.buildplat[4] }} == '10.13' ]]; then
# 20240621 macos-12 images span Xcode 13.1-->14.2
XCODE_VER='13.4.1'
else
XCODE_VER='15.2'
@@ -149,39 +148,79 @@ jobs:
CIBW="MACOSX_DEPLOYMENT_TARGET=${{ matrix.buildplat[4] }}\
SDKROOT=$(xcrun --sdk macosx --show-sdk-path)\
PKG_CONFIG_PATH=${{ github.workspace }}"
echo "CIBW_ENVIRONMENT_MACOS=$CIBW" >> "$GITHUB_ENV"
echo "CIBW_ENVIRONMENT=$CIBW" >> "$GITHUB_ENV"
echo "REPAIR_PATH=$LIB_PATH" >> "$GITHUB_ENV"
if [[ ${{ matrix.buildplat[3] }} == 'accelerate' ]]; then
PREFIX=DYLD_LIBRARY_PATH=$(dirname $(gfortran --print-file-name libgfortran.dylib))
else
# Use libgfortran and friends from the scipy-openblas wheel,
# which should be exactly the same as the ones from gfortran
# This will exclude the duplicates from gfortran in /opt/gfortran*
EXCLUDE="-e /gfortran"
fi
CIBW="$PREFIX delocate-listdeps {wheel} &&\
$PREFIX delocate-wheel $EXCLUDE --require-archs \
{delocate_archs} -w {dest_dir} {wheel}"
PREFIX=DYLD_LIBRARY_PATH="\$(dirname \$(gfortran --print-file-name libgfortran.dylib))"
# remove libgfortran from location used for linking (if any), to
# check wheel has bundled things correctly and all tests pass without
# needing installed gfortran
POSTFIX=" sudo rm -rf /opt/gfortran-darwin-x86_64-native &&\
sudo rm -rf /usr/local/gfortran/lib"
CIBW="$PREFIX delocate-listdeps -d {wheel} && echo "-----------" &&\
$PREFIX delocate-wheel -v $EXCLUDE --require-archs \
{delocate_archs} -w {dest_dir} {wheel} && echo "-----------" &&\
delocate-listdeps -d {dest_dir}/*.whl && echo "-----------" &&\
$POSTFIX"
# Rename x86 Accelerate wheel to test on macOS 13 runner
if [[ ${{ matrix.buildplat[0] }} == 'macos-13' && ${{ matrix.buildplat[4] }} == '14.0' ]]; then
CIBW+=" && mv {dest_dir}/\$(basename {wheel}) \
{dest_dir}/\$(echo \$(basename {wheel}) | sed 's/14_0/13_0/')"
fi
# macos-arm64-openblas wheels that target macos-12 need a
# MACOS_DEPLOYMENT_TARGET of 12.3 otherwise delocate complains.
# Unclear of cause, possibly build tool related.
# This results in wheels that have 12_3 in their name. Since Python
# has no concept of minor OS versions in packaging land rename the
# wheel back to 12.
if [[ ${{ matrix.buildplat[0] }} == 'macos-14' && ${{ matrix.buildplat[4] }} == '12.3' ]]; then
CIBW+=" && echo \$(ls {dest_dir}) && \
mv {dest_dir}/*.whl \$(find {dest_dir} -type f -name '*.whl' | sed 's/12_3/12_0/')"
fi
echo "CIBW_REPAIR_WHEEL_COMMAND_MACOS=$CIBW" >> "$GITHUB_ENV"
- name: Inject environment variable for python dev version
if: matrix.python[1] == '3.13'
shell: bash
run: |
# For dev versions of python need to use wheels from scientific-python-nightly-wheels
# When the python version is released please comment out this section, but do not remove
# (there will soon be another dev version to target).
DEPS0="pip install -U --pre numpy>=2.1.0rc1"
DEPS1="pip install ninja meson-python pybind11 pythran cython"
CIBW="$DEPS0;$DEPS1;bash {project}/tools/wheels/cibw_before_build_linux.sh {project}"
echo "CIBW_BEFORE_BUILD_LINUX=$CIBW" >> "$GITHUB_ENV"
CIBW="$DEPS0 && $DEPS1 && bash {project}/tools/wheels/cibw_before_build_win.sh {project}"
echo "CIBW_BEFORE_BUILD_WINDOWS=$CIBW" >> "$GITHUB_ENV"
CIBW="$DEPS0;$DEPS1;bash {project}/tools/wheels/cibw_before_build_macos.sh {project}"
echo "CIBW_BEFORE_BUILD_MACOS=$CIBW" >> "$GITHUB_ENV"
echo "CIBW_BEFORE_TEST=$DEPS0" >> "$GITHUB_ENV"
CIBW="pip; args: --no-build-isolation"
echo "CIBW_BUILD_FRONTEND=$CIBW" >> "$GITHUB_ENV"
- name: Build wheels
uses: pypa/cibuildwheel@v2.17.0
uses: pypa/cibuildwheel@v2.20.0
env:
CIBW_BUILD: ${{ matrix.python[0] }}-${{ matrix.buildplat[1] }}*
CIBW_ARCHS: ${{ matrix.buildplat[2] }}
CIBW_PRERELEASE_PYTHONS: True
- name: Rename after test (macOS x86 Accelerate only)
# Rename x86 Accelerate wheel back so it targets macOS >= 14
if: matrix.buildplat[0] == 'macos-13' && matrix.buildplat[4] == '14.0'
- name: Rename macOS wheels
if: startsWith( matrix.buildplat[0], 'macos-' )
run: |
mv ./wheelhouse/*.whl $(find ./wheelhouse -type f -name '*.whl' | sed 's/13_0/14_0/')
# macos-x86_64-accelerate wheels targeting macos-14 were renamed to 13
# so they could be tested. Shift wheel name back to targeting 14.
if [[ ${{ matrix.buildplat[0] }} == 'macos-13' && ${{ matrix.buildplat[4] }} == '14.0' ]]; then
mv ./wheelhouse/*.whl $(find ./wheelhouse -type f -name '*.whl' | sed 's/13_0/14_0/')
fi
- uses: actions/upload-artifact@v4
with:

View File

@@ -66,8 +66,9 @@ Writing code isn't the only way to contribute to SciPy. You can also:
- write grant proposals and help with other fundraising efforts
If you're unsure where to start or how your skills fit in, reach out! You can
ask on the mailing list or here, on GitHub, by leaving a
comment on a relevant issue that is already open.
ask on the `forum <https://discuss.scientific-python.org/c/contributor/scipy>`__
or here, on GitHub, by leaving a comment on a relevant issue that is already
open.
If you are new to contributing to open source, `this
guide <https://opensource.guide/how-to-contribute/>`__ helps explain why, what,

View File

@@ -1,6 +1,6 @@
build_and_store_wheels: &BUILD_AND_STORE_WHEELS
install_cibuildwheel_script:
- python -m pip install cibuildwheel==2.17.0
- python -m pip install cibuildwheel==2.20.0
cibuildwheel_script:
- cibuildwheel
wheels_artifacts:
@@ -24,18 +24,17 @@ cirrus_wheels_linux_aarch64_task:
# single task takes longer than 60 mins (the default time limit for a
# cirrus-ci task).
- env:
CIBW_BUILD: cp310-manylinux*
CIBW_BUILD: cp313-manylinux* cp310-manylinux*
- env:
CIBW_BUILD: cp311-manylinux* cp312-manylinux*
env:
CIBW_PRERELEASE_PYTHONS: True
# TODO remove the CIBW_BEFORE_BUILD_LINUX line once there are numpy2.0 wheels available on PyPI.
# Also remove/comment out PIP_NO_BUILD_ISOLATION, PIP_EXTRA_INDEX_URL flags.
CIBW_BEFORE_BUILD_LINUX: "pip install numpy>=2.0.0.dev0 meson-python cython pythran pybind11 ninja;bash {project}/tools/wheels/cibw_before_build_linux.sh {project}"
CIBW_ENVIRONMENT: >
PIP_PRE=1
PIP_EXTRA_INDEX_URL=https://pypi.anaconda.org/scientific-python-nightly-wheels/simple
PIP_NO_BUILD_ISOLATION=false
# The following settings are useful when targeting an unreleased Python version.
#CIBW_BEFORE_BUILD_LINUX: "pip install numpy>=2.0.0.dev0 meson-python cython pythran pybind11 ninja;bash {project}/tools/wheels/cibw_before_build_linux.sh {project}"
#CIBW_ENVIRONMENT: >
# PIP_PRE=1
# PIP_EXTRA_INDEX_URL=https://pypi.anaconda.org/scientific-python-nightly-wheels/simple
# PIP_NO_BUILD_ISOLATION=false
build_script: |
apt install -y python3-venv python-is-python3

View File

@@ -78,3 +78,22 @@ div.admonition>.admonition-title {
gap: 10px;
margin-bottom: 20px;
}
/* Wrap long titles in pages */
h1 {
word-wrap: break-word;
}
/* Monospace titles for API docs */
div.empty + section>h1 {
font-family: var(--pst-font-family-monospace);
}
.prename {
font-family: var(--pst-font-family-monospace);
font-size: var(--pst-font-size-h4);
}
.sig-prename {
display: none;
}

View File

@@ -1,8 +1,13 @@
:orphan:
.. raw:: html
<div class="prename">{{ module }}.{{ class }}.</div>
<div class="empty"></div>
{{ fullname }}
{{ underline }}
.. currentmodule:: {{ module }}
.. autoattribute:: {{ objname }}
.. autoattribute:: {{ objname }}

View File

@@ -1,4 +1,9 @@
{{ fullname }}
.. raw:: html
<div class="prename">{{ module }}.</div>
<div class="empty"></div>
{{ name }}
{{ underline }}
.. currentmodule:: {{ module }}

View File

@@ -0,0 +1,11 @@
.. raw:: html
<div class="prename">{{ module }}.</div>
<div class="empty"></div>
{{ name }}
{{ underline }}
.. currentmodule:: {{ module }}
.. autofunction:: {{ objname }}

View File

@@ -1,8 +1,13 @@
:orphan:
{{ fullname }}
.. raw:: html
<div class="prename">{{ module }}.{{ class }}.</div>
<div class="empty"></div>
{{ name }}
{{ underline }}
.. currentmodule:: {{ module }}
.. automethod:: {{ objname }}
.. automethod:: {{ objname }}

View File

@@ -1,4 +1,9 @@
{{ fullname }}
.. raw:: html
<div class="prename">{{ module }}.</div>
<div class="empty"></div>
{{ name }}
{{ underline }}
.. currentmodule:: {{ module }}

View File

@@ -1,8 +1,13 @@
:orphan:
.. raw:: html
<div class="prename">{{ module }}.{{ class }}.</div>
<div class="empty"></div>
{{ fullname }}
{{ underline }}
.. currentmodule:: {{ module }}
.. autoproperty:: {{ objname }}
.. autoproperty:: {{ objname }}

View File

@@ -190,6 +190,10 @@ warnings.filterwarnings(
message=r'.*path is deprecated.*',
category=DeprecationWarning,
)
# See https://github.com/sphinx-doc/sphinx/issues/12589
suppress_warnings = [
'autosummary.import_cycle',
]
# -----------------------------------------------------------------------------
# HTML output
@@ -201,7 +205,7 @@ html_logo = '_static/logo.svg'
html_favicon = '_static/favicon.ico'
html_sidebars = {
"index": "search-button-field",
"index": ["search-button-field"],
"**": ["search-button-field", "sidebar-nav-bs"]
}

View File

@@ -103,10 +103,19 @@ Support is provided in `scipy.special` for the following functions:
`scipy.special.erf`, `scipy.special.erfc`, `scipy.special.i0`,
`scipy.special.i0e`, `scipy.special.i1`, `scipy.special.i1e`,
`scipy.special.gammaln`, `scipy.special.gammainc`, `scipy.special.gammaincc`,
`scipy.special.logit`, and `scipy.special.expit`.
`scipy.special.logit`, `scipy.special.expit`, `scipy.special.entr`,
`scipy.special.rel_entr`, `scipy.special.xlogy`,
and `scipy.special.chdtrc`.
Support is provided in `scipy.stats` for the following functions:
`scipy.stats.pearsonr` and `scipy.stats.moment`.
`scipy.stats.describe`, `scipy.stats.moment`, `scipy.stats.skew`,
`scipy.stats.kurtosis`, `scipy.stats.kstat`, `scipy.stats.kstatvar`,
`scipy.stats.circmean`, `scipy.stats.circvar`, `scipy.stats.circstd`,
`scipy.stats.entropy`, `scipy.stats.variation`, `scipy.stats.sem`,
`scipy.stats.ttest_1samp`, `scipy.stats.pearsonr`, `scipy.stats.chisquare`,
`scipy.stats.skewtest`, `scipy.stats.kurtosistest`, `scipy.stats.normaltest`,
`scipy.stats.jarque_bera`, `scipy.stats.bartlett`, `scipy.stats.power_divergence`,
and `scipy.stats.monte_carlo_test`.
Implementation notes

View File

@@ -259,7 +259,7 @@ has a nice help page that outlines the process for `filing pull requests`_.
If your changes involve modifications to the API or addition/modification of a
function, you should initiate a code review. This involves sending an email to
the `SciPy mailing list`_ with a link to your PR along with a description of
the `SciPy forum`_ with a link to your PR along with a description of
and a motivation for your changes.
.. _pr-checklist:

View File

@@ -11,14 +11,14 @@ Code
Any significant decisions on adding (or not adding) new features, breaking
backwards compatibility or making other significant changes to the codebase
should be made on the scipy-dev mailing list after a discussion (preferably
should be made on the scipy-dev forum after a discussion (preferably
with full consensus).
Any non-trivial change (where trivial means a typo, or a one-liner maintenance
commit) has to go in through a pull request (PR). It has to be reviewed by
another developer. In case review doesn't happen quickly enough and it is
important that the PR is merged quickly, the submitter of the PR should send a
message to mailing list saying they intend to merge that PR without review
message to the forum saying they intend to merge that PR without review
at time X for reason Y unless someone reviews it before then.
Changes and new additions should be tested. Untested code is broken code.
@ -27,4 +27,4 @@ Commit rights
-------------
Who gets commit rights is decided by the SciPy Steering Council; changes in
commit rights will then be announced on the scipy-dev mailing list.
commit rights will then be announced on the scipy-dev forum.


@ -11,7 +11,7 @@ In general, it's not a good idea to remove something without warning users about
that removal first. Therefore, this is what should be done before removing
something from the public API:
#. Propose to deprecate the functionality on the scipy-dev mailing list and get
#. Propose to deprecate the functionality on the scipy-dev forum and get
agreement that that's OK.
#. Add a ``DeprecationWarning`` for it, which states that the functionality was
deprecated, and in which release. For Cython APIs, see
@ -27,4 +27,4 @@ when running the test suite so they don't pollute the output.
It's possible that there is reason to want to ignore this deprecation policy
for a particular deprecation; this can always be discussed on the scipy-dev
mailing list.
forum.


@ -77,7 +77,7 @@ classifiers in ``setup.py``, and mentioned in the release notes for each
release. All newly released Python versions will be supported as soon as
possible. For the general policy on dropping support for a Python or NumPy
version, see :ref:`NEP 29 <NEP29>`. The final decision on dropping support is
always taken on the scipy-dev mailing list.
always taken on the scipy-dev forum.
The lowest supported Numpy_ version for a SciPy version is mentioned in the
release notes and is encoded in ``pyproject.toml``, ``scipy/__init__.py`` and the
@ -90,7 +90,7 @@ Supported versions of optional dependencies and compilers is documented in
:ref:`toolchain-roadmap`. Note that not all versions of optional dependencies
that are supported are tested well or at all by SciPy's Continuous
Integration setup. Issues regarding this are dealt with as they come up in the
issue tracker or mailing list.
issue tracker or forum.
Building binary installers


@ -43,7 +43,7 @@ Dealing with pull requests
- When merging contributions, a committer is responsible for ensuring that
those meet the requirements outlined in :ref:`Contributing to SciPy <hacking>`.
Also check that new features and backwards compatibility breaks were discussed
on the scipy-dev mailing list.
on the scipy-dev forum.
- New code goes in via a pull request (PR).
- Merge new code with the green button. In case of merge conflicts, ask the PR
submitter to rebase (this may require providing some ``git`` instructions).


@ -21,7 +21,7 @@ These contributions cannot be accepted for inclusion in SciPy unless the
original code author is willing to (re)license their code under the
modified BSD (or compatible) license. If the original author agrees,
add a comment saying so to the source files and forward the relevant
communication to the scipy-dev mailing list.
communication to the scipy-dev forum.
Another common occurrence is for code to be translated or derived from code in
R, Octave (both GPL-licensed) or a commercial application. Such code also


@ -64,5 +64,5 @@
.. _`NumPy mailing list`: https://numpy.org/community/
.. _SciPy: https://www.scipy.org
.. _`SciPy github`: https://github.com/scipy/scipy
.. _`SciPy mailing list`: https://mail.python.org/mailman3/lists/scipy-dev.python.org/
.. _`SciPy forum`: https://discuss.scientific-python.org/c/contributor/scipy
.. _`SciPy repository`: https://github.com/scipy/scipy


@ -28,7 +28,7 @@ documentation, designs, or other work to the Project. Anyone can be a
Contributor. Contributors can be affiliated with any legal entity or
none. Contributors participate in the project by submitting, reviewing,
and discussing GitHub Pull Requests and Issues and participating in open
and public Project discussions on GitHub, mailing lists, and other
and public Project discussions on GitHub, forums, and other
channels. The foundation of Project participation is openness and
transparency.
@ -146,7 +146,7 @@ active over the last two years.
When considering potential Members, the Council will look at candidates with a
comprehensive view of their contributions. This will include, but is not limited
to, code, code review, infrastructure work, mailing list and chat participation,
to, code, code review, infrastructure work, forum and chat participation,
community help/building, education and outreach, design work, etc. We are
deliberately not setting arbitrary quantitative metrics (like “100 commits in
this repo”) to avoid encouraging behavior that plays to the metrics rather than
@ -186,10 +186,10 @@ responsible for:
the :ref:`scipy-roadmap`) bi-yearly, around mid-April and mid-October.
- At the same times of the year, summarizing any relevant
organizational updates and issues in the preceding period, and asking for
feedback/suggestions on the mailing list.
feedback/suggestions on the forum.
- Ensuring the composition of the Steering Council stays current.
- Ensuring matters discussed in private by the Steering Council get
summarized on the mailing list to keep the Community informed.
summarized on the forum to keep the Community informed.
- Ensuring other important organizational documents (e.g., Code of Conduct,
Fiscal Sponsorship Agreement) stay current after they are added.


@ -17,8 +17,7 @@ There are a lot of ways you can contribute:
- Reviewing open pull requests
- Triaging issues
- Working on the `scipy.org`_ website
- Answering questions and participating on the scipy-dev and scipy-user
`mailing lists`_.
- Answering questions and participating on the `forum`_.
Contributing new code
=====================
@ -41,14 +40,14 @@ more domain-specific code than SciPy.
Now if you have code that you would like to see included in SciPy, how do you
go about it? After checking that your code can be distributed in SciPy under a
compatible license (see :ref:`license-considerations`), the first step is to
discuss it on the scipy-dev mailing list. All new features, as well as changes to
discuss it on the scipy-dev `forum`_. All new features, as well as changes to
existing code, are discussed and decided on there. You can, and probably
should already start this discussion before your code is finished. Remember
that in order to be added to SciPy your code will need to be reviewed by
someone else, so try to find someone willing to review your work while you're
at it.
Assuming the outcome of the discussion on the mailing list is positive and you
Assuming the outcome of the discussion on the `forum`_ is positive and you
have a function or piece of code that does what you need it to do, what next?
Before code is added to SciPy, it at least has to have good documentation, unit
tests, benchmarks, and correct code style.
@ -120,13 +119,13 @@ Once you think your code is ready for inclusion in SciPy, you can send a pull
request (PR) on Github. We won't go into the details of how to work with git
here, this is described well in :ref:`git-development`
and on the `Github help pages`_. When you send the PR for a new
feature, be sure to also mention this on the scipy-dev mailing list. This can
feature, be sure to also mention this on the scipy-dev `forum`_. This can
prompt interested people to help review your PR. Assuming that you already got
positive feedback before on the general idea of your code/feature, the purpose
of the code review is to ensure that the code is correct, efficient and meets
the requirements outlined above. In many cases, the code review happens
relatively quickly, but it's possible that it stalls. If you have addressed
all feedback already given, it's perfectly fine to ask on the mailing list
all feedback already given, it's perfectly fine to ask on the `forum`_
again for review (after a reasonable amount of time, say a couple of weeks, has
passed). Once the review is completed, the PR is merged into the "main"
branch of SciPy.
@ -134,7 +133,7 @@ branch of SciPy.
The above describes the requirements and process for adding code to SciPy. It
doesn't yet answer the question though how decisions are made exactly. The
basic answer is: decisions are made by consensus, by everyone who chooses to
participate in the discussion on the mailing list. This includes developers,
participate in the discussion on the `forum`_. This includes developers,
other users and yourself. Aiming for consensus in the discussion is important
-- SciPy is a project by and for the scientific Python community. In those
rare cases that agreement cannot be reached, the maintainers of the module
@ -153,7 +152,7 @@ MIT, PSF) then it's OK. Code which is GPL or Apache licensed, has no
clear license, requires citation or is free for academic use only can't be
included in SciPy. Therefore if you copied existing code with such a license
or made a direct translation to Python of it, your code can't be included.
If you're unsure, please ask on the scipy-dev `mailing list <mailing lists>`_.
If you're unsure, please ask on the scipy-dev `forum`_.
*Why is SciPy under the BSD license and not, say, the GPL?*
@ -183,7 +182,7 @@ The discussion on code style and unit testing above applies equally to bug
fixes. It is usually best to start by writing a unit test that shows the
problem, i.e. it should pass but doesn't. Once you have that, you can fix the
code so that the test does pass. That should be enough to send a PR for this
issue. Unlike when adding new code, discussing this on the mailing list may
issue. Unlike when adding new code, discussing this on the `forum`_ may
not be necessary - if the old behavior of the code is clearly incorrect, no one
will object to having it fixed. It may be necessary to add some warning or
deprecation message for the changed behavior. This should be part of the
@ -236,7 +235,7 @@ thoughts in a comment) allows prioritizing maintenance work and finding related
issues easily when working on an existing function or subpackage. To read more
about issue triage, see :ref:`triaging`.
Participating in discussions on the scipy-user and scipy-dev `mailing lists`_ is
Participating in discussions on the scipy-user and scipy-dev `forum`_ is
a contribution in itself. Everyone who writes to those lists with a problem or
an idea would like to get responses, and writing such responses makes the
project and community function better and appear more welcoming.
@ -296,7 +295,7 @@ improvements, and submit your first PR!
.. _Pytest: https://pytest.org/
.. _mailing lists: https://scipy.org/community/#scipy-mailing-list
.. _forum: https://discuss.scientific-python.org/c/contributor/scipy
.. _Spyder: https://www.spyder-ide.org/


@ -82,7 +82,7 @@ is simple enough to fully document all attributes immediately below its name.
Some return classes are sufficiently complex to deserve their own rendered
documentation. This is fairly standard if the return class is public, but
return classes should only be public if 1) they are intended to be imported by
end-users and 2) if they have been approved by the mailing list. For complex,
end-users and 2) if they have been approved by the forum. For complex,
private return classes, please see how `~scipy.stats.binomtest` summarizes
`~scipy.stats._result_classes.BinomTestResult` and links to its documentation,
and note that ``BinomTestResult`` cannot be imported from `~scipy.stats`.
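As a concrete illustration of that pattern (a sketch using the public `binomtest` API; the counts are illustrative):

```python
from scipy.stats import binomtest

# binomtest returns a BinomTestResult instance; the class has its own
# rendered documentation but is intentionally not importable from scipy.stats
res = binomtest(k=7, n=10, p=0.5)
print(res.pvalue)
```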


@ -18,9 +18,9 @@ going and where help is needed most.
General
-------
This roadmap will be evolving together with SciPy. Updates can be submitted as
pull requests. For large or disruptive changes you may want to discuss
those first on the scipy-dev mailing list.
This roadmap will be evolving together with SciPy. Updates can be submitted as
pull requests. For large or disruptive changes you may want to discuss
those first on the scipy-dev forum.
API changes


@ -17,7 +17,7 @@ SciPy documentation
`Source Repository <https://github.com/scipy/scipy>`__ |
`Issues & Ideas <https://github.com/scipy/scipy/issues>`__ |
`Q&A Support <https://stackoverflow.com/questions/tagged/scipy>`__ |
`Mailing List <https://mail.python.org/mailman3/lists/scipy-dev.python.org/>`__
`Forum <https://discuss.scientific-python.org/c/contributor/scipy>`__
**SciPy** (pronounced "Sigh Pie") is an open-source software for mathematics,
science, and engineering.


@ -8,6 +8,8 @@ see the `commit logs <https://github.com/scipy/scipy/commits/>`_.
.. toctree::
:maxdepth: 1
release/1.14.2-notes
release/1.14.1-notes
release/1.14.0-notes
release/1.13.1-notes
release/1.13.0-notes


@ -2,8 +2,6 @@
SciPy 1.14.0 Release Notes
==========================
.. note:: SciPy 1.14.0 is not released yet!
.. contents::
SciPy 1.14.0 is the culmination of 3 months of hard work. It contains
@ -84,6 +82,14 @@ New features
`scipy.sparse` improvements
===========================
- Sparse arrays now support 1D shapes in COO, DOK and CSR formats.
These are the only formats in which we currently intend to support 1D shapes.
Other sparse array formats raise an exception for 1D input.
- Sparse array methods min/nanmin/argmin and max analogs now return 1D arrays.
Results are still COO format sparse arrays for min/nanmin and
dense ``np.ndarray`` for argmin.
- Iterating over ``csr_array`` or ``csc_array`` yields 1D (CSR) arrays.
- Sparse matrix and array objects improve their ``repr`` and ``str`` output.
- A special case has been added to handle multiplying a ``dia_array`` by a
scalar, which avoids a potentially costly conversion to CSR format.
- `scipy.sparse.csgraph.yen` has been added, allowing usage of Yen's K-Shortest
@ -160,6 +166,7 @@ As of 1.14.0, there is support for
- `scipy.stats`: (select functions)
- `scipy.stats.describe`
- `scipy.stats.moment`
- `scipy.stats.skew`
- `scipy.stats.kurtosis`
@ -197,7 +204,7 @@ Deprecated features
- The option ``quadrature="trapz"`` in `scipy.integrate.quad_vec` has been
deprecated in favour of ``quadrature="trapezoid"`` and will be removed in
SciPy 1.16.0.
- `scipy.special.comb` has deprecated support for use of ``exact=True`` in
- ``scipy.special.{comb,perm}`` have deprecated support for use of ``exact=True`` in
conjunction with non-integral ``N`` and/or ``k``.
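For the `quad_vec` deprecation, the replacement spelling can be sketched as follows (the integrand here is illustrative):

```python
import numpy as np
from scipy.integrate import quad_vec

# integrate exp(x) over [0, 1]; "trapezoid" replaces the deprecated "trapz"
res, err = quad_vec(np.exp, 0, 1, quadrature="trapezoid")
```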
@ -276,22 +283,23 @@ Other changes
Authors
*******
* Name (commits)
* h-vetinari (30)
* h-vetinari (34)
* Steven Adams (1) +
* Max Aehle (1) +
* Ataf Fazledin Ahamed (2) +
* Luiz Eduardo Amaral (1) +
* Trinh Quoc Anh (1) +
* Miguel A. Batalla (7) +
* Tim Beyer (1) +
* Andrea Blengino (1) +
* boatwrong (1)
* Jake Bowhay (47)
* Jake Bowhay (51)
* Dietrich Brunn (2)
* Evgeni Burovski (174)
* Evgeni Burovski (177)
* Tim Butters (7) +
* CJ Carey (5)
* Sean Cheah (46)
* Lucas Colley (72)
* Lucas Colley (73)
* Giuseppe "Peppe" Dilillo (1) +
* DWesl (2)
* Pieter Eendebak (5)
@ -300,11 +308,11 @@ Authors
* fancidev (2)
* Anthony Frazier (1) +
* Ilan Gold (1) +
* Ralf Gommers (122)
* Ralf Gommers (125)
* Rohit Goswami (28)
* Ben Greiner (1) +
* Lorenzo Gualniera (1) +
* Matt Haberland (250)
* Matt Haberland (260)
* Shawn Hsu (1) +
* Budjen Jovan (3) +
* Jozsef Kutas (1)
@ -316,37 +324,39 @@ Authors
* marinelay (2) +
* Nikolay Mayorov (1)
* Nicholas McKibben (2)
* Melissa Weber Mendonça (6)
* Melissa Weber Mendonça (7)
* João Mendes (1) +
* Samuel Le Meur-Diebolt (1) +
* Tomiță Militaru (2) +
* Andrew Nelson (32)
* Andrew Nelson (35)
* Lysandros Nikolaou (1)
* Nick ODell (5) +
* Jacob Ogle (1) +
* Pearu Peterson (1)
* Matti Picus (4)
* Ilhan Polat (8)
* Matti Picus (5)
* Ilhan Polat (9)
* pwcnorthrop (3) +
* Bharat Raghunathan (1)
* Tom M. Ragonneau (2) +
* Tyler Reddy (47)
* Pamphile Roy (17)
* Tyler Reddy (101)
* Pamphile Roy (18)
* Atsushi Sakai (9)
* Daniel Schmitz (5)
* Julien Schueller (2) +
* Dan Schult (12)
* Dan Schult (13)
* Tomer Sery (7)
* Scott Shambaugh (4)
* Tuhin Sharma (1) +
* Sheila-nk (4)
* Skylake (1) +
* Albert Steppi (214)
* Albert Steppi (215)
* Kai Striega (6)
* Zhibing Sun (2) +
* Nimish Telang (1) +
* toofooboo (1) +
* tpl2go (1) +
* Edgar Andrés Margffoy Tuay (44)
* Andrew Valentine (1)
* Valerix (1) +
* Christian Veenhuis (1)
* void (2) +
@ -356,9 +366,10 @@ Authors
* Xiao Yuan (1)
* Irwin Zaid (35)
* Elmar Zander (1) +
* ਗਗਨਦੀਪ ਸਿੰਘ (Gagandeep Singh) (2) +
* Zaikun ZHANG (1)
* ਗਗਨਦੀਪ ਸਿੰਘ (Gagandeep Singh) (4) +
A total of 81 people contributed to this release.
A total of 85 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully complete.
@ -373,7 +384,9 @@ Issues closed for 1.14.0
* `#8056 <https://github.com/scipy/scipy/issues/8056>`__: cho_factor and cho_solve don't support (0,0)-shape matrices
* `#8083 <https://github.com/scipy/scipy/issues/8083>`__: special.hyp2f1 returns the wrong values when c-a-b is an integer...
* `#8510 <https://github.com/scipy/scipy/issues/8510>`__: ValueError: failed to create intent(cache|hide)|optional array--...
* `#8848 <https://github.com/scipy/scipy/issues/8848>`__: \`integrate.solve_ivp\` try to evaluate the function with much...
* `#8856 <https://github.com/scipy/scipy/issues/8856>`__: LinearNDInterpolator not thread safe
* `#9198 <https://github.com/scipy/scipy/issues/9198>`__: \`solve_ivp\` RK45 can evaluate the function at times later than...
* `#9307 <https://github.com/scipy/scipy/issues/9307>`__: feature request: make \`scipy.stats.pearsonr\` accept 2-D arrays
* `#9459 <https://github.com/scipy/scipy/issues/9459>`__: BUG: linalg: lu and decompositions don't support (0, 1) or (0,...
* `#12515 <https://github.com/scipy/scipy/issues/12515>`__: scipy.linalg.pinvh gives incorrect results
@ -386,6 +399,7 @@ Issues closed for 1.14.0
* `#16748 <https://github.com/scipy/scipy/issues/16748>`__: None of the \`cython_\*\` APIs have any tests using Cython
* `#16926 <https://github.com/scipy/scipy/issues/16926>`__: TEST/BUG: Tolerance violation in test_solvers::test_solve_discrete_are
* `#17084 <https://github.com/scipy/scipy/issues/17084>`__: ENH: Exporting the removed component of detrend()
* `#17341 <https://github.com/scipy/scipy/issues/17341>`__: BUG: \`solve_ivp\` evaluates outside the requested interval for...
* `#17559 <https://github.com/scipy/scipy/issues/17559>`__: ENH: _mannwhitneyu.py computation of exact MWU statistics may...
* `#17658 <https://github.com/scipy/scipy/issues/17658>`__: Inconsistent support for empty matrices in linalg
* `#19322 <https://github.com/scipy/scipy/issues/19322>`__: BUG: \`rv_discrete.expect\` fails when duplicate positions
@ -403,10 +417,12 @@ Issues closed for 1.14.0
* `#20128 <https://github.com/scipy/scipy/issues/20128>`__: BUG: \`csr_array(int())\` errors
* `#20208 <https://github.com/scipy/scipy/issues/20208>`__: BUG: Test failures due to \`invalid value encountered in _beta_ppf\`...
* `#20247 <https://github.com/scipy/scipy/issues/20247>`__: ENH: Akima1DInterpolator Extrapolation
* `#20256 <https://github.com/scipy/scipy/issues/20256>`__: MAINT, BLD: symbol visibility warnings on MacOS ARM static lib...
* `#20277 <https://github.com/scipy/scipy/issues/20277>`__: Very noisy doc builds after jupyterlite-sphinx integration
* `#20296 <https://github.com/scipy/scipy/issues/20296>`__: CI: jupyterlite-sphinx pin breaks recent doc builds
* `#20324 <https://github.com/scipy/scipy/issues/20324>`__: MAINT, BUG (?): pearsonr statistic return type change
* `#20357 <https://github.com/scipy/scipy/issues/20357>`__: BUG: Memory usage in griddata function in version 1.12
* `#20358 <https://github.com/scipy/scipy/issues/20358>`__: TST, MAINT: failure in TestGroupDelay::test_singular against...
* `#20377 <https://github.com/scipy/scipy/issues/20377>`__: ENH: sparse: Update str dunder to handle 1D (and 2D better)
* `#20378 <https://github.com/scipy/scipy/issues/20378>`__: ENH: sparse: Update repr dunder to handle 1D (and maybe 2D better)
* `#20385 <https://github.com/scipy/scipy/issues/20385>`__: MAINT: special version hex cleanup
@ -420,6 +436,8 @@ Issues closed for 1.14.0
* `#20458 <https://github.com/scipy/scipy/issues/20458>`__: MAINT: more potential cleanups related to version bumps
* `#20461 <https://github.com/scipy/scipy/issues/20461>`__: DOC: some likely changes to release process docs
* `#20466 <https://github.com/scipy/scipy/issues/20466>`__: BUG: scipy.linalg.bandwidth returns incorrect upper bandwidth
* `#20470 <https://github.com/scipy/scipy/issues/20470>`__: BUG: \`TestNNLS.test_nnls_inner_loop_case1\` fails with MKL
* `#20486 <https://github.com/scipy/scipy/issues/20486>`__: DEP: deprecate and remove remaining usages of slur-adjacent "trapz"
* `#20488 <https://github.com/scipy/scipy/issues/20488>`__: BUG: When given invalid bounds, \`_minimize_neldermead\` raises...
* `#20492 <https://github.com/scipy/scipy/issues/20492>`__: DOC: linalg.solve_discrete_lyapunov: dead reference link
* `#20502 <https://github.com/scipy/scipy/issues/20502>`__: BUG: special.hyp2f1: local test failure
@ -431,7 +449,9 @@ Issues closed for 1.14.0
* `#20562 <https://github.com/scipy/scipy/issues/20562>`__: BUG: Invalid default bracket selection in _bracket_minimum.
* `#20564 <https://github.com/scipy/scipy/issues/20564>`__: TST: stats array API failure for test_skew_constant_value[torch]...
* `#20584 <https://github.com/scipy/scipy/issues/20584>`__: BUG: \`optimize.linprog\` fails with \`list\` type \`integrality\`...
* `#20587 <https://github.com/scipy/scipy/issues/20587>`__: BLD: warning from \`scipy/special/special/gamma.h\`
* `#20598 <https://github.com/scipy/scipy/issues/20598>`__: ENH: special: add log of wright_bessel
* `#20603 <https://github.com/scipy/scipy/issues/20603>`__: DOC: document switch from mailing list to discourse
* `#20614 <https://github.com/scipy/scipy/issues/20614>`__: DOC: dual_annealing optimizer does not pass bounds to minimizer...
* `#20618 <https://github.com/scipy/scipy/issues/20618>`__: BUG: scipy 'minimize' with method='trust-constr' with equality...
* `#20620 <https://github.com/scipy/scipy/issues/20620>`__: DOC: Suggested improvement to interp2d transition guide
@ -442,12 +462,20 @@ Issues closed for 1.14.0
* `#20683 <https://github.com/scipy/scipy/issues/20683>`__: DOC: A typo in ValueError raised by signal.iirdesign
* `#20691 <https://github.com/scipy/scipy/issues/20691>`__: ENH: Reintroduce Apple Accelerate support
* `#20697 <https://github.com/scipy/scipy/issues/20697>`__: BUG: special: algorithmic Error in \`ratevl\` in \`cephes/polevl.h\`
* `#20740 <https://github.com/scipy/scipy/issues/20740>`__: BLD/DEV: special: build warnings
* `#20755 <https://github.com/scipy/scipy/issues/20755>`__: BUG: stats: Two new test failures
* `#20768 <https://github.com/scipy/scipy/issues/20768>`__: BUG: optimize.minimize: garbage collection in \`lbfgs\`
* `#20783 <https://github.com/scipy/scipy/issues/20783>`__: BUG: Build failure on PyPy3.10 7.3.16: \`error: Py_Initialize...
* `#20797 <https://github.com/scipy/scipy/issues/20797>`__: BUG: special.hyp1f1: broken for complex argument
* `#20802 <https://github.com/scipy/scipy/issues/20802>`__: MAINT, TST: pytest-fail-slow and local concurrent runs/variability
* `#20840 <https://github.com/scipy/scipy/issues/20840>`__: BUG: first shared library in scipy fails to be consumed by MSVC
* `#20850 <https://github.com/scipy/scipy/issues/20850>`__: DOC: stats.bootstrap: improve documentation multidimensional...
* `#20852 <https://github.com/scipy/scipy/issues/20852>`__: BUG: Library not loaded: @rpath/libgfortran.5.dylib for scipy...
* `#20860 <https://github.com/scipy/scipy/issues/20860>`__: BUG/BLD: scipy-1.13.1 fails to build with msvc
* `#20901 <https://github.com/scipy/scipy/issues/20901>`__: BUG: \`zsh: abort python\` after \`scipy.linalg.sqrtm\` on empty...
* `#20911 <https://github.com/scipy/scipy/issues/20911>`__: TST: TestEig.test_singular failing tolerance with generic BLAS...
* `#20921 <https://github.com/scipy/scipy/issues/20921>`__: DOC: stats: wrong docstrings of \`\*Result\` classes
* `#20938 <https://github.com/scipy/scipy/issues/20938>`__: TST: tolerance violations with SciPy 1.14.0rc1 on linux-{aarch64,ppc64le}
* `#20943 <https://github.com/scipy/scipy/issues/20943>`__: TST: test failures on windows with SciPy 1.14.0rc1
************************
Pull requests for 1.14.0
@ -455,6 +483,7 @@ Pull requests for 1.14.0
* `#13534 <https://github.com/scipy/scipy/pull/13534>`__: ENH: Add more initialization methods for HessianUpdateStrategy
* `#15321 <https://github.com/scipy/scipy/pull/15321>`__: ENH: fft: Add \`prev_fast_len\` to complement \`next_fast_len\`
* `#17348 <https://github.com/scipy/scipy/pull/17348>`__: BUG: integrate: make \`select_initial_step\` aware of integration...
* `#17924 <https://github.com/scipy/scipy/pull/17924>`__: ENH: sparse.linalg: speed up \`spsolve_triangular\`
* `#18926 <https://github.com/scipy/scipy/pull/18926>`__: ENH: Move symiirorder1/2, cspline2d, qspline2d and spline_filter...
* `#19561 <https://github.com/scipy/scipy/pull/19561>`__: ENH: stats.power: add function to simulate hypothesis test power
@ -598,7 +627,7 @@ Pull requests for 1.14.0
* `#20588 <https://github.com/scipy/scipy/pull/20588>`__: DEP: special: remove legacy kwarg from special.comb and switch...
* `#20590 <https://github.com/scipy/scipy/pull/20590>`__: Revert "ENH: Use \`highspy\` in \`linprog\`"
* `#20593 <https://github.com/scipy/scipy/pull/20593>`__: ENH: constants: add array api support
* `#20595 <https://github.com/scipy/scipy/pull/20595>`__: ENH: stats.circ___: add array-API support
* `#20595 <https://github.com/scipy/scipy/pull/20595>`__: ENH: \`stats.circ___\`: add array-API support
* `#20597 <https://github.com/scipy/scipy/pull/20597>`__: ENH: stats.skewtest: add array-API support
* `#20600 <https://github.com/scipy/scipy/pull/20600>`__: TYP: update supported Mypy version from 1.0.0 to 1.10.0
* `#20604 <https://github.com/scipy/scipy/pull/20604>`__: ENH: stats.monte_carlo_test: add array API support
@ -649,6 +678,7 @@ Pull requests for 1.14.0
* `#20726 <https://github.com/scipy/scipy/pull/20726>`__: DOC: stats.{circmean, circvar, circstd}: improve accuracy/clarity
* `#20730 <https://github.com/scipy/scipy/pull/20730>`__: BUG: special: fix algorithmic error in \`ratevl\` in \`cephes/polevl.h\`
* `#20732 <https://github.com/scipy/scipy/pull/20732>`__: BUG: interpolate: do not segfault on bad boundary conditions
* `#20734 <https://github.com/scipy/scipy/pull/20734>`__: BUG: stats.ttest_1samp: fix use of \`keepdims\`
* `#20736 <https://github.com/scipy/scipy/pull/20736>`__: ENH: stats.normaltest/jarque_bera: add array-API support
* `#20737 <https://github.com/scipy/scipy/pull/20737>`__: TST, MAINT: run optimize array API tests and fix \`chandrupatla\`
* `#20738 <https://github.com/scipy/scipy/pull/20738>`__: DOC: sparse.csgraph.dijkstra: add warning for \`directed=False\`...
@ -671,6 +701,7 @@ Pull requests for 1.14.0
* `#20780 <https://github.com/scipy/scipy/pull/20780>`__: DEP: special.comb: deprecate \`exact=True\` for non-integer inputs
* `#20781 <https://github.com/scipy/scipy/pull/20781>`__: TST: stats: remove overhead of array_namespace in calls to _get_pvalue
* `#20782 <https://github.com/scipy/scipy/pull/20782>`__: ENH: stats: end-to-end array-API support for NHSTs with chi-squared...
* `#20784 <https://github.com/scipy/scipy/pull/20784>`__: DOC: SciPy 1.14.0 relnotes
* `#20787 <https://github.com/scipy/scipy/pull/20787>`__: DOC: interpolate: mention default kinds in interp2d transition...
* `#20788 <https://github.com/scipy/scipy/pull/20788>`__: ENH: optimize: improve \`cobyqa\` performance by reducing overhead...
* `#20789 <https://github.com/scipy/scipy/pull/20789>`__: DEP: stats.linregress: deprecate one-arg use
@ -687,3 +718,30 @@ Pull requests for 1.14.0
* `#20812 <https://github.com/scipy/scipy/pull/20812>`__: DOC: extend "building reproducible binaries" page
* `#20815 <https://github.com/scipy/scipy/pull/20815>`__: DOC: integrate: odeint user functions must not modify y.
* `#20819 <https://github.com/scipy/scipy/pull/20819>`__: REV: revert accidental \`cobyqa\` update in gh-17924
* `#20820 <https://github.com/scipy/scipy/pull/20820>`__: BLD: Warning fix from \`\`scipy/special/special/gamma.h\`\`
* `#20828 <https://github.com/scipy/scipy/pull/20828>`__: DEP: deprecate trapz alias of \`stats.trapezoid\` distribution
* `#20831 <https://github.com/scipy/scipy/pull/20831>`__: MAINT: version pins/prep for 1.14.0rc1
* `#20838 <https://github.com/scipy/scipy/pull/20838>`__: DOC: sparse: 1.14.0 release notes additions
* `#20839 <https://github.com/scipy/scipy/pull/20839>`__: REL: set 1.14.0rc2 unreleased
* `#20841 <https://github.com/scipy/scipy/pull/20841>`__: DOC: add cobyqa website reference
* `#20851 <https://github.com/scipy/scipy/pull/20851>`__: DOC: add cobyqa website reference (#20841)
* `#20858 <https://github.com/scipy/scipy/pull/20858>`__: MAINT: \`stats.bootstrap\`: emit \`FutureWarning\` about broadcasting
* `#20870 <https://github.com/scipy/scipy/pull/20870>`__: BLD: test delocate works by removing original lib [wheel build]
* `#20881 <https://github.com/scipy/scipy/pull/20881>`__: DOC: mailing list to forum
* `#20890 <https://github.com/scipy/scipy/pull/20890>`__: DOC: Write API reference titles in monospace font
* `#20909 <https://github.com/scipy/scipy/pull/20909>`__: DEP: special.perm: deprecate non-integer \`N\` and \`k\` with...
* `#20914 <https://github.com/scipy/scipy/pull/20914>`__: TST: linalg: bump tolerance in \`TestEig::test_singular\`
* `#20919 <https://github.com/scipy/scipy/pull/20919>`__: BLD: optimize: use hidden visibility for static HiGHS libraries
* `#20920 <https://github.com/scipy/scipy/pull/20920>`__: MAINT: special: fix msvc build by using \`new\` and \`delete\`...
* `#20923 <https://github.com/scipy/scipy/pull/20923>`__: DOC: update doctests to satisfy scipy-doctests==1.2.0
* `#20927 <https://github.com/scipy/scipy/pull/20927>`__: MAINT: adapt to a scipy-doctests change
* `#20933 <https://github.com/scipy/scipy/pull/20933>`__: MAINT: 1.14.0rc2 backports
* `#20936 <https://github.com/scipy/scipy/pull/20936>`__: DOC: \`array_api.rst\`: update 1.14 functions with array API...
* `#20937 <https://github.com/scipy/scipy/pull/20937>`__: BUG/BLD: special: Ensure symbols in \`sf_error_state\` shared...
* `#20945 <https://github.com/scipy/scipy/pull/20945>`__: TST: address tolerance violations with SciPy 1.14.0rc1 on linux-{aarch64,ppc64le}
* `#20952 <https://github.com/scipy/scipy/pull/20952>`__: TST: loosen tolerance in test_x0_working to pass with alternate...
* `#20953 <https://github.com/scipy/scipy/pull/20953>`__: TST: loosen tolerance in test_krandinit slightly to pass with...
* `#20961 <https://github.com/scipy/scipy/pull/20961>`__: TST: robustify test_nnls_inner_loop_case1
* `#20970 <https://github.com/scipy/scipy/pull/20970>`__: REL: set 1.14.0 rc3 unreleased
* `#20973 <https://github.com/scipy/scipy/pull/20973>`__: TST:sparse.linalg: Skip test due to sensitivity to numerical...
* `#20979 <https://github.com/scipy/scipy/pull/20979>`__: STY: \`_lib._util\`: address new mypy complaint in main


@ -0,0 +1,89 @@
==========================
SciPy 1.14.1 Release Notes
==========================
.. contents::
SciPy 1.14.1 adds support for Python 3.13, including binary
wheels on PyPI. Apart from that, it is a bug-fix release with
no new features compared to 1.14.0.
Authors
=======
* Name (commits)
* h-vetinari (1)
* Evgeni Burovski (1)
* CJ Carey (2)
* Lucas Colley (3)
* Ralf Gommers (3)
* Melissa Weber Mendonça (1)
* Andrew Nelson (3)
* Nick ODell (1)
* Tyler Reddy (36)
* Daniel Schmitz (1)
* Dan Schult (4)
* Albert Steppi (2)
* Ewout ter Hoeven (1)
* Tibor Völcker (2) +
* Adam Turner (1) +
* Warren Weckesser (2)
* ਗਗਨਦੀਪ ਸਿੰਘ (Gagandeep Singh) (1)

A total of 17 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully complete.

Issues closed for 1.14.1
------------------------

* `#19572 <https://github.com/scipy/scipy/issues/19572>`__: BUG: doccer: \`test_decorator\` fails with Python 3.13 due to...
* `#19911 <https://github.com/scipy/scipy/issues/19911>`__: BUG: open_memstream unavailable with glibc >= 2.10 + C99
* `#20992 <https://github.com/scipy/scipy/issues/20992>`__: ENH: 3.13 wheels
* `#20993 <https://github.com/scipy/scipy/issues/20993>`__: BUG: spsolve prints "dgstrf info" to stdout on singular matrices
* `#21058 <https://github.com/scipy/scipy/issues/21058>`__: BUG: \`special.pro_rad1\`: incorrect results
* `#21064 <https://github.com/scipy/scipy/issues/21064>`__: BUG: sparse: \`hstack/vstack\` between a sparse and ndarray breaks...
* `#21073 <https://github.com/scipy/scipy/issues/21073>`__: MAINT: \`cluster\`/\`stats\`: array API test failures in main
* `#21079 <https://github.com/scipy/scipy/issues/21079>`__: BUG: unable to securely deploy app as scipy 1.14.0 requires write...
* `#21142 <https://github.com/scipy/scipy/issues/21142>`__: BUG: signal: crash in \`signaltools\` on free-threaded Python,...
* `#21195 <https://github.com/scipy/scipy/issues/21195>`__: CI: documentation build failing?
* `#21207 <https://github.com/scipy/scipy/issues/21207>`__: BUG: \`fft.hfftn\` fails on list inputs
* `#21234 <https://github.com/scipy/scipy/issues/21234>`__: BUG: Files in SuperLU under LGPL license
* `#21238 <https://github.com/scipy/scipy/issues/21238>`__: BUG: io/scipy.sparse.csgraph: refguide check failure in main
* `#21250 <https://github.com/scipy/scipy/issues/21250>`__: DOC: \`sampling_tdr.rst\` failing in CircleCI smoke-tutorials...
* `#21272 <https://github.com/scipy/scipy/issues/21272>`__: BUG: dtype changed for argument to \`rv_discrete._pmf\`
* `#21306 <https://github.com/scipy/scipy/issues/21306>`__: BUG: odr: pickling is not possible
* `#21323 <https://github.com/scipy/scipy/issues/21323>`__: DOC: build failing in CI
* `#21408 <https://github.com/scipy/scipy/issues/21408>`__: BLD, CI: Cirrus 3.13 wheels?

Pull requests for 1.14.1
------------------------

* `#21000 <https://github.com/scipy/scipy/pull/21000>`__: BLD: make cp313 wheels [wheel build]
* `#21038 <https://github.com/scipy/scipy/pull/21038>`__: REL, MAINT: prep for 1.14.1
* `#21062 <https://github.com/scipy/scipy/pull/21062>`__: BUG: special: Fixes for pro_rad1
* `#21067 <https://github.com/scipy/scipy/pull/21067>`__: BUG: special: remove type punning to avoid warnings in LTO builds
* `#21069 <https://github.com/scipy/scipy/pull/21069>`__: MAINT: uarray: fix typo in \`small_dynamic_array.h\`
* `#21074 <https://github.com/scipy/scipy/pull/21074>`__: MAINT: adapt to array-api-strict 2.0
* `#21084 <https://github.com/scipy/scipy/pull/21084>`__: BLD: Enable \`open_memstream()\` on newer glibc
* `#21104 <https://github.com/scipy/scipy/pull/21104>`__: MAINT: Unskip \`scipy.misc.test.test_decorator\` for Python 3.13+
* `#21107 <https://github.com/scipy/scipy/pull/21107>`__: DOC: add release note for 1.14 sparse section about sparse array...
* `#21108 <https://github.com/scipy/scipy/pull/21108>`__: BUG: sparse: fix 1D for vstack/hstack and Improve 1D error msgs...
* `#21160 <https://github.com/scipy/scipy/pull/21160>`__: BUG: signal: fix crash under free-threaded CPython in medfilt2d
* `#21172 <https://github.com/scipy/scipy/pull/21172>`__: BUG: sparse.linalg: Update \`SuperLU\` to fix unfilterable output...
* `#21200 <https://github.com/scipy/scipy/pull/21200>`__: DOC: Fix type of \`\`html_sidebars\`\` value in \`\`conf.py\`\`
* `#21209 <https://github.com/scipy/scipy/pull/21209>`__: BUG: fft: fix array-like input
* `#21244 <https://github.com/scipy/scipy/pull/21244>`__: BUG: sparse: fix failing doctests in io and csgraph that print...
* `#21271 <https://github.com/scipy/scipy/pull/21271>`__: DOC: stats: silence the doctest error
* `#21274 <https://github.com/scipy/scipy/pull/21274>`__: MAINT: sparse.linalg: update \`SuperLU/colamd.c\` to fix license...
* `#21283 <https://github.com/scipy/scipy/pull/21283>`__: BUG: stats: adapt to \`np.floor\` type promotion removal
* `#21305 <https://github.com/scipy/scipy/pull/21305>`__: MAINT: \`stats.bartlett\`: ensure statistic is non-negative
* `#21315 <https://github.com/scipy/scipy/pull/21315>`__: CI: Update to cibuildwheel 2.20.0
* `#21320 <https://github.com/scipy/scipy/pull/21320>`__: BUG: odr: fix pickling
* `#21324 <https://github.com/scipy/scipy/pull/21324>`__: DOC: Don't use Sphinx 8.0.0 until gh-21323 is resolved.
* `#21400 <https://github.com/scipy/scipy/pull/21400>`__: BUG: sparse: Fix 1D specialty hstack codes
* `#21401 <https://github.com/scipy/scipy/pull/21401>`__: MAINT: special: Accommodate changed integer handling in NumPy...
* `#21409 <https://github.com/scipy/scipy/pull/21409>`__: BLD: cp313 wheels on \`manylinux_aarch64\`

View File

@ -0,0 +1,23 @@
==========================
SciPy 1.14.2 Release Notes
==========================

.. contents::

SciPy 1.14.2 is a bug-fix release with no new features
compared to 1.14.1.

Authors
=======

* Name (commits)

Issues closed for 1.14.2
------------------------

Pull requests for 1.14.2
------------------------

View File

@ -231,9 +231,9 @@ example that follows.
Notice that `sproot` may fail to find an obvious solution at the edge of the
approximation interval, :math:`x = 0`. If we define the spline on a slightly
larger interval, we recover both roots :math:`x = 0` and :math:`x = 2\pi`:
larger interval, we recover both roots :math:`x = 0` and :math:`x = \pi`:
>>> x = np.linspace(-np.pi/4, 2.*np.pi + np.pi/4, 21)
>>> x = np.linspace(-np.pi/4, np.pi + np.pi/4, 51)
>>> y = np.sin(x)
>>> tck = interpolate.splrep(x, y, s=0)
>>> interpolate.sproot(tck)

View File

@ -81,7 +81,7 @@ The demon replies:
>>> from scipy.stats import binom
>>> prob = binom.cdf(x, n, p)
>>> print(f"The correct answer is approximately {prob}")
The correct answer is approximately 0.18410080866334788
The correct answer is approximately 0.18410080866334788 # may vary
As your soul is being eaten, you take solace in the knowledge that your
simple Monte Carlo approach was more accurate than the normal

View File

@ -249,7 +249,7 @@ by visualizing the histogram of the samples:
>>> rng.rvs()
-1.526829048388144
>>> norm.rvs(random_state=urng1_copy)
1.3194816698862635
1.3194816698862635 # may vary
We can pass a ``domain`` parameter to truncate the distribution:

View File

@ -161,7 +161,7 @@ distribution:
>>> dist2.callbacks = 0 # don't consider evaluations during setup
>>> rvs = rng2.rvs(100000)
>>> dist2.callbacks # evaluations during sampling
84
84 # may vary
As we can see, far fewer PDF evaluations are required during sampling when
we increase the ``squeeze_hat_ratio``. The PPF-hat function is also more accurate:

View File

@ -4,7 +4,7 @@ project(
# Note that the git commit hash cannot be added dynamically here (it is added
# in the dynamically generated and installed `scipy/version.py` though - see
# tools/version_utils.py
version: '1.14.0.dev0',
version: '1.14.2.dev0',
license: 'BSD-3',
meson_version: '>= 1.1.0',
default_options: [

View File

@ -17,10 +17,17 @@
[build-system]
build-backend = 'mesonpy'
requires = [
"meson-python>=0.15.0",
"Cython>=3.0.8", # when updating version, also update check in meson.build
"pybind11>=2.12.0", # when updating version, also update check in scipy/meson.build
"pythran>=0.14.0",
# The upper bound on meson-python is pre-emptive only (looser
# on purpose, since chance of breakage in 0.17/0.18 is low with
# 0.16 working at time of writing)
"meson-python>=0.15.0,<0.19.0",
# The upper bound on Cython is pre-emptive only
"Cython>=3.0.8,<3.1.0", # when updating version, also update check in meson.build
# The upper bound on pybind11 is pre-emptive only
"pybind11>=2.12.0,<2.13.0", # when updating version, also update check in scipy/meson.build
# The upper bound on pythran is pre-emptive only; 0.16.1
# is released/working at time of writing.
"pythran>=0.14.0,<0.17.0",
# numpy requirement for wheel builds for distribution on PyPI - building
# against 2.x yields wheels that are also compatible with numpy 1.x at
@ -28,12 +35,13 @@ requires = [
# Note that building against numpy 1.x works fine too - users and
# redistributors can do this by installing the numpy version they like and
# disabling build isolation.
"numpy>=2.0.0rc1",
"numpy>=2.0.0rc1,<2.3",
"numpy>=2.1.0rc1; python_version>='3.13'",
]
[project]
name = "scipy"
version = "1.14.0.dev0"
version = "1.14.2.dev0"
# TODO: add `license-files` once PEP 639 is accepted (see meson-python#88)
# at that point, no longer include them in `py3.install_sources()`
license = { file = "LICENSE.txt" }
@ -45,7 +53,7 @@ maintainers = [
# release branches, see:
# https://scipy.github.io/devdocs/dev/core-dev/index.html#version-ranges-for-numpy-and-other-dependencies
requires-python = ">=3.10"
dependencies = ["numpy>=1.23.5"] # keep in sync with `min_numpy_version` in meson.build
dependencies = ["numpy>=1.23.5,<2.3"] # keep in sync with `min_numpy_version` in meson.build
readme = "README.rst"
classifiers = [
"Development Status :: 5 - Production/Stable",
@ -80,13 +88,13 @@ test = [
"scikit-umfpack",
"pooch",
"hypothesis>=6.30",
"array-api-strict",
"array-api-strict>=2.0",
"Cython",
"meson",
'ninja; sys_platform != "emscripten"',
]
doc = [
"sphinx>=5.0.0",
"sphinx>=5.0.0,<=7.3.7",
"pydata-sphinx-theme>=0.15.2",
"sphinx-design>=0.4.0",
"matplotlib>=3.5",
@ -142,13 +150,13 @@ before-build = "bash {project}/tools/wheels/cibw_before_build_linux.sh {project}
[tool.cibuildwheel.linux.environment]
# /project will be the $PWD equivalent inside the docker used to build the wheel
PKG_CONFIG_PATH="/project/"
PKG_CONFIG_PATH = "/project/"
[tool.cibuildwheel.macos]
before-build = "bash {project}/tools/wheels/cibw_before_build_macos.sh {project}"
[tool.cibuildwheel.macos.environment]
PKG_CONFIG_PATH="{project}"
PKG_CONFIG_PATH = "{project}"
[tool.cibuildwheel.windows]
before-build = "bash {project}/tools/wheels/cibw_before_build_win.sh {project}"
@ -156,6 +164,6 @@ repair-wheel-command = "bash ./tools/wheels/repair_windows.sh {wheel} {dest_dir}
[tool.cibuildwheel.windows.environment]
# This does not work because pkg-config does not like backslashes,
PKG_CONFIG_PATH="{project}"
PKG_CONFIG_PATH = "{project}"
# do this instead (which will override this setting)
# set CIBW_ENVIRONMENT_WINDOWS=PKG_CONFIG_PATH=PWD.replace('\\', '/')

View File

@ -17,3 +17,4 @@ filterwarnings =
ignore:.*`numpy.core` has been made officially private.*:DeprecationWarning
ignore:.*In the future `np.long` will be defined as.*:FutureWarning
ignore:.*JAX is multithreaded.*:RuntimeWarning
ignore:.*The 2023.12 version of the array API specification is still preliminary.*:UserWarning

View File

@ -1,6 +1,6 @@
# Generated via tools/generate_requirements.py.
# Do not edit this file; modify `pyproject.toml` instead and run `python tools/generate_requirements.py`.
sphinx>=5.0.0
sphinx>=5.0.0,<=7.3.7
pydata-sphinx-theme>=0.15.2
sphinx-design>=0.4.0
matplotlib>=3.5

View File

@ -11,7 +11,7 @@ threadpoolctl
# scikit-umfpack # circular dependency issues
pooch
hypothesis>=6.30
array-api-strict
array-api-strict>=2.0
Cython
meson
ninja; sys_platform != "emscripten"

View File

@ -67,7 +67,7 @@ del _distributor_init
from scipy._lib import _pep440
# In maintenance branch, change to np_maxversion N+3 if numpy is at N
np_minversion = '1.23.5'
np_maxversion = '9.9.99'
np_maxversion = '2.3.0'
if (_pep440.parse(__numpy_version__) < _pep440.Version(np_minversion) or
_pep440.parse(__numpy_version__) >= _pep440.Version(np_maxversion)):
import warnings
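Setting ``np_maxversion = '2.3.0'`` activates the half-open version gate above for NumPy 2.3 and later. A simplified, stdlib-only sketch of such a gate (it assumes plain numeric ``X.Y.Z`` strings, unlike the real ``_pep440`` parser, and the function names are illustrative):

```python
def version_tuple(v):
    """Parse a plain 'X.Y.Z' version string (no rc/dev suffix handling)."""
    return tuple(int(part) for part in v.split("."))

def numpy_version_ok(version, minversion="1.23.5", maxversion="2.3.0"):
    # Accept versions in the half-open range [minversion, maxversion).
    return version_tuple(minversion) <= version_tuple(version) < version_tuple(maxversion)

print(numpy_version_ok("2.0.1"))  # in range -> True
```

In the real module, a version outside the range only triggers a ``UserWarning`` rather than a hard failure.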

View File

@ -0,0 +1,18 @@
#pragma once
// SCIPY_DLL
// inspired by https://github.com/abseil/abseil-cpp/blob/20240116.2/absl/base/config.h#L736-L753
//
// When building sf_error_state as a DLL, this macro expands to `__declspec(dllexport)`
// so we can annotate symbols appropriately as being exported. When used in
// headers consuming a DLL, this macro expands to `__declspec(dllimport)` so
// that consumers know the symbol is defined inside the DLL. In all other cases,
// the macro expands to nothing.
// Note: SCIPY_DLL_{EX,IM}PORTS are set in scipy/special/meson.build
#if defined(SCIPY_DLL_EXPORTS)
#define SCIPY_DLL __declspec(dllexport)
#elif defined(SCIPY_DLL_IMPORTS)
#define SCIPY_DLL __declspec(dllimport)
#else
#define SCIPY_DLL
#endif

View File

@ -142,7 +142,7 @@ public:
clear();
size_ = copy.size;
size_ = copy.size_;
try {
allocate();
} catch (...) {

View File

@ -28,7 +28,7 @@ if np.lib.NumpyVersion(np.__version__) >= '1.25.0':
DTypePromotionError
)
else:
from numpy import (
from numpy import ( # type: ignore[attr-defined, no-redef]
AxisError, ComplexWarning, VisibleDeprecationWarning # noqa: F401
)
DTypePromotionError = TypeError # type: ignore

View File

@ -87,7 +87,7 @@ py3.extension_module('_test_deprecation_def',
conf_memstream = configuration_data()
if meson.get_compiler('c').has_function('open_memstream',
prefix: '#include <stdio.h>')
prefix: '#define _POSIX_C_SOURCE 200809L\n#include <stdio.h>')
conf_memstream.set('has_openmemstream', '1')
else
conf_memstream.set('has_openmemstream', '0')

View File

@ -3321,7 +3321,7 @@ def dendrogram(Z, p=30, truncate_mode=None, color_threshold=None,
if color_threshold is None or (isinstance(color_threshold, str) and
color_threshold == 'default'):
color_threshold = max(Z[:, 2]) * 0.7
color_threshold = xp.max(Z[:, 2]) * 0.7
R = {'icoord': icoord_list, 'dcoord': dcoord_list, 'ivl': ivl,
'leaves': lvs, 'color_list': color_list}
@ -3355,7 +3355,7 @@ def dendrogram(Z, p=30, truncate_mode=None, color_threshold=None,
above_threshold_color=above_threshold_color)
if not no_plot:
mh = max(Z[:, 2])
mh = xp.max(Z[:, 2])
_plot_dendrogram(icoord_list, dcoord_list, ivl, p, n, mh, orientation,
no_labels, color_list,
leaf_font_size=leaf_font_size,

View File

@ -946,9 +946,9 @@ class TestDendrogram:
def test_labels_as_array_or_list(self, xp):
# test for gh-12418
Z = linkage(xp.asarray(hierarchy_test_data.ytdist), 'single')
labels = xp.asarray([1, 3, 2, 6, 4, 5])
result1 = dendrogram(Z, labels=labels, no_plot=True)
result2 = dendrogram(Z, labels=list(labels), no_plot=True)
labels = [1, 3, 2, 6, 4, 5]
result1 = dendrogram(Z, labels=xp.asarray(labels), no_plot=True)
result2 = dendrogram(Z, labels=labels, no_plot=True)
assert result1 == result2
@pytest.mark.skipif(not have_matplotlib, reason="no matplotlib")

View File

@ -355,7 +355,7 @@ class TestKMean:
init = _krandinit(data, k, rng, xp)
orig_cov = cov(data.T)
init_cov = cov(init.T)
xp_assert_close(orig_cov, init_cov, atol=1e-2)
xp_assert_close(orig_cov, init_cov, atol=1.1e-2)
def test_kmeans2_empty(self, xp):
# Regression test for gh-1032.

View File

@ -13,6 +13,7 @@ import hypothesis
from scipy._lib._fpumode import get_fpu_mode
from scipy._lib._testutils import FPUModeChangeWarning
from scipy._lib._array_api import SCIPY_ARRAY_API, SCIPY_DEVICE
from scipy._lib import _pep440
try:
from scipy_doctest.conftest import dt_config
@ -117,6 +118,11 @@ if SCIPY_ARRAY_API and isinstance(SCIPY_ARRAY_API, str):
try:
import array_api_strict
xp_available_backends.update({'array_api_strict': array_api_strict})
if _pep440.parse(array_api_strict.__version__) < _pep440.Version('2.0'):
raise ImportError("array-api-strict must be >= version 2.0")
array_api_strict.set_array_api_strict_flags(
api_version='2023.12'
)
except ImportError:
pass
@ -267,7 +273,7 @@ if HAVE_SCPDT:
# FIXME: populate the dict once
@contextmanager
def warnings_errors_and_rng(test):
def warnings_errors_and_rng(test=None):
"""Temporarily turn (almost) all warnings to errors.
Filter out known warnings which we allow.
@ -337,11 +343,11 @@ if HAVE_SCPDT:
with _fixed_default_rng():
np.random.seed(None)
with warnings.catch_warnings():
if test.name in known_warnings:
if test and test.name in known_warnings:
warnings.filterwarnings('ignore',
**known_warnings[test.name])
yield
elif test.name in legit:
elif test and test.name in legit:
yield
else:
warnings.simplefilter('error', Warning)

View File

@ -25,6 +25,7 @@ def _execute_1D(func_str, pocketfft_func, x, n, axis, norm, overwrite_x, workers
xp = array_namespace(x)
if is_numpy(xp):
x = np.asarray(x)
return pocketfft_func(x, n=n, axis=axis, norm=norm,
overwrite_x=overwrite_x, workers=workers, plan=plan)
@ -42,6 +43,7 @@ def _execute_nD(func_str, pocketfft_func, x, s, axes, norm, overwrite_x, workers
xp = array_namespace(x)
if is_numpy(xp):
x = np.asarray(x)
return pocketfft_func(x, s=s, axes=axes, norm=norm,
overwrite_x=overwrite_x, workers=workers, plan=plan)
@ -151,6 +153,7 @@ def hfftn(x, s=None, axes=None, norm=None,
overwrite_x=False, workers=None, *, plan=None):
xp = array_namespace(x)
if is_numpy(xp):
x = np.asarray(x)
return _pocketfft.hfftn(x, s, axes, norm, overwrite_x, workers, plan=plan)
if is_complex(x, xp):
x = xp.conj(x)
@ -167,6 +170,7 @@ def ihfftn(x, s=None, axes=None, norm=None,
overwrite_x=False, workers=None, *, plan=None):
xp = array_namespace(x)
if is_numpy(xp):
x = np.asarray(x)
return _pocketfft.ihfftn(x, s, axes, norm, overwrite_x, workers, plan=plan)
return xp.conj(rfftn(x, s, axes, _swap_direction(norm),
overwrite_x, workers, plan=plan))
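The repeated ``x = np.asarray(x)`` guard added above is what restores list input support on the NumPy path (gh-21207): array-likes are coerced to ndarrays before being handed to pocketfft. A minimal sketch of the pattern, with a hypothetical dispatcher name:

```python
import numpy as np

def _dispatch(func, x):
    """Sketch of the NumPy-path guard: coerce array-likes (e.g. nested
    lists) to ndarrays with np.asarray before calling the backend."""
    x = np.asarray(x)  # lists, tuples, etc. become ndarrays here
    return func(x)

# A nested list now works with a backend function that expects ndarrays.
result = _dispatch(np.fft.fft, [[1.0, 1.0], [1.0, 1.0]])
```

For non-NumPy array API backends, the real functions skip this coercion and operate on the backend's own arrays.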

View File

@ -13,6 +13,7 @@ LN_2 = np.log(2)
def fht(a, dln, mu, offset=0.0, bias=0.0):
xp = array_namespace(a)
a = xp.asarray(a)
# size of transform
n = a.shape[-1]
@ -40,6 +41,7 @@ def fht(a, dln, mu, offset=0.0, bias=0.0):
def ifht(A, dln, mu, offset=0.0, bias=0.0):
xp = array_namespace(A)
A = xp.asarray(A)
# size of transform
n = A.shape[-1]

View File

@ -40,7 +40,7 @@ def fft1(x):
phase = np.arange(L).reshape(-1, 1) * phase
return np.sum(x*np.exp(phase), axis=1)
class TestFFT1D:
class TestFFT:
def test_identity(self, xp):
maxlen = 512
@ -339,6 +339,24 @@ class TestFFT1D:
# Check both numerical results and exact dtype matches
xp_assert_close(res_fft, x)
@skip_xp_backends(np_only=True,
reasons=['array-likes only supported for NumPy backend'])
@pytest.mark.parametrize("op", [fft.fft, fft.ifft,
fft.fft2, fft.ifft2,
fft.fftn, fft.ifftn,
fft.rfft, fft.irfft,
fft.rfft2, fft.irfft2,
fft.rfftn, fft.irfftn,
fft.hfft, fft.ihfft,
fft.hfft2, fft.ihfft2,
fft.hfftn, fft.ihfftn,])
def test_array_like(self, xp, op):
x = [[[1.0, 1.0], [1.0, 1.0]],
[[1.0, 1.0], [1.0, 1.0]],
[[1.0, 1.0], [1.0, 1.0]]]
xp_assert_close(op(x), op(xp.asarray(x)))
@skip_xp_backends(np_only=True)
@pytest.mark.parametrize(
"dtype",

View File

@ -8,7 +8,8 @@ from scipy.special import poch
from scipy.conftest import array_api_compatible
from scipy._lib._array_api import xp_assert_close
pytestmark = array_api_compatible
pytestmark = [array_api_compatible, pytest.mark.usefixtures("skip_xp_backends"),]
skip_xp_backends = pytest.mark.skip_xp_backends
def test_fht_agrees_with_fftlog(xp):
@ -167,3 +168,12 @@ def test_fht_exact(n, xp):
At = xp.asarray((2/k)**gamma * poch((mu+1-gamma)/2, gamma))
xp_assert_close(A, At)
@skip_xp_backends(np_only=True,
reasons=['array-likes only supported for NumPy backend'])
@pytest.mark.parametrize("op", [fht, ifht])
def test_array_like(xp, op):
x = [[[1.0, 1.0], [1.0, 1.0]],
[[1.0, 1.0], [1.0, 1.0]],
[[1.0, 1.0], [1.0, 1.0]]]
xp_assert_close(op(x, 1.0, 2.0), op(xp.asarray(x), 1.0, 2.0))

View File

@ -238,3 +238,12 @@ def test_orthogonalize_dcst3(func, norm, xp):
y1 = func(x, type=3, norm=norm, orthogonalize=True)
y2 = func(x2, type=3, norm=norm, orthogonalize=False)
xp_assert_close(y1, y2)
@skip_xp_backends(np_only=True,
reasons=['array-likes only supported for NumPy backend'])
@pytest.mark.parametrize("func", [dct, idct, dctn, idctn, dst, idst, dstn, idstn])
def test_array_like(xp, func):
x = [[[1.0, 1.0], [1.0, 1.0]],
[[1.0, 1.0], [1.0, 1.0]],
[[1.0, 1.0], [1.0, 1.0]]]
xp_assert_close(func(x), func(xp.asarray(x)))

View File

@ -204,7 +204,8 @@ class BDF(OdeSolver):
self.rtol, self.atol = validate_tol(rtol, atol, self.n)
f = self.fun(self.t, self.y)
if first_step is None:
self.h_abs = select_initial_step(self.fun, self.t, self.y, f,
self.h_abs = select_initial_step(self.fun, self.t, self.y,
t_bound, max_step, f,
self.direction, 1,
self.rtol, self.atol)
else:

View File

@ -65,7 +65,8 @@ def norm(x):
return np.linalg.norm(x) / x.size ** 0.5
def select_initial_step(fun, t0, y0, f0, direction, order, rtol, atol):
def select_initial_step(fun, t0, y0, t_bound,
max_step, f0, direction, order, rtol, atol):
"""Empirically select a good initial step.
The algorithm is described in [1]_.
@ -78,6 +79,11 @@ def select_initial_step(fun, t0, y0, f0, direction, order, rtol, atol):
Initial value of the independent variable.
y0 : ndarray, shape (n,)
Initial value of the dependent variable.
t_bound : float
End-point of the integration interval; used to ensure that
``t0 + step <= t_bound`` and that ``fun`` is only evaluated in
the interval ``[t0, t_bound]``.
max_step : float
Maximum allowable step size.
f0 : ndarray, shape (n,)
Initial value of the derivative, i.e., ``fun(t0, y0)``.
direction : float
@ -103,6 +109,10 @@ def select_initial_step(fun, t0, y0, f0, direction, order, rtol, atol):
if y0.size == 0:
return np.inf
interval_length = abs(t_bound - t0)
if interval_length == 0.0:
return 0.0
scale = atol + np.abs(y0) * rtol
d0 = norm(y0 / scale)
d1 = norm(f0 / scale)
@ -110,7 +120,8 @@ def select_initial_step(fun, t0, y0, f0, direction, order, rtol, atol):
h0 = 1e-6
else:
h0 = 0.01 * d0 / d1
# Check t0+h0*direction doesn't take us beyond t_bound
h0 = min(h0, interval_length)
y1 = y0 + h0 * direction * f0
f1 = fun(t0 + h0 * direction, y1)
d2 = norm((f1 - f0) / scale) / h0
@ -120,7 +131,7 @@ def select_initial_step(fun, t0, y0, f0, direction, order, rtol, atol):
else:
h1 = (0.01 / max(d1, d2)) ** (1 / (order + 1))
return min(100 * h0, h1)
return min(100 * h0, h1, interval_length, max_step)
class OdeSolution:
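Taken together, the changes above clamp the trial step so that the solver never evaluates ``fun`` outside ``[t0, t_bound]`` and never exceeds ``max_step``. A minimal sketch of that clamping logic (simplified; not SciPy's actual implementation, which also computes the heuristic estimates ``h0`` and ``h1``):

```python
import numpy as np

def clamped_initial_step(h0, h1, t0, t_bound, max_step):
    """Clamp two candidate initial steps as in the patch above (sketch).

    h0 and h1 are heuristic step estimates; the result is bounded by the
    interval length and the user-supplied max_step.
    """
    interval_length = abs(t_bound - t0)
    if interval_length == 0.0:
        # Degenerate interval: nothing to integrate.
        return 0.0
    # Don't let the probe step t0 + h0*direction leave [t0, t_bound].
    h0 = min(h0, interval_length)
    return min(100 * h0, h1, interval_length, max_step)

print(clamped_initial_step(0.5, 2.0, 5.0, 9.0, max_step=1.0))  # -> 1.0
```

With ``max_step=np.inf`` and an interval longer than both estimates, the clamps are inactive and the original behavior is recovered.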

View File

@ -305,7 +305,7 @@ class Radau(OdeSolver):
# the error.
if first_step is None:
self.h_abs = select_initial_step(
self.fun, self.t, self.y, self.f, self.direction,
self.fun, self.t, self.y, t_bound, max_step, self.f, self.direction,
3, self.rtol, self.atol)
else:
self.h_abs = validate_first_step(first_step, t0, t_bound)

View File

@ -94,7 +94,7 @@ class RungeKutta(OdeSolver):
self.f = self.fun(self.t, self.y)
if first_step is None:
self.h_abs = select_initial_step(
self.fun, self.t, self.y, self.f, self.direction,
self.fun, self.t, self.y, t_bound, max_step, self.f, self.direction,
self.error_estimator_order, self.rtol, self.atol)
else:
self.h_abs = validate_first_step(first_step, t0, t_bound)

View File

@ -7,7 +7,7 @@ import numpy as np
from scipy.optimize._numdiff import group_columns
from scipy.integrate import solve_ivp, RK23, RK45, DOP853, Radau, BDF, LSODA
from scipy.integrate import OdeSolution
from scipy.integrate._ivp.common import num_jac
from scipy.integrate._ivp.common import num_jac, select_initial_step
from scipy.integrate._ivp.base import ConstantDenseOutput
from scipy.sparse import coo_matrix, csc_matrix
@ -598,7 +598,7 @@ def test_max_step():
solver = method(fun_rational, t_span[0], y0, t_span[1],
rtol=rtol, atol=atol, max_step=1e-20)
message = solver.step()
message = solver.step() # First step succeeds but second step fails.
assert_equal(solver.status, 'failed')
assert_("step size is less" in message)
assert_raises(RuntimeError, solver.step)
@ -1128,9 +1128,117 @@ def test_args_single_value():
sol = solve_ivp(fun_with_arg, (0, 0.1), [1], args=(-1,))
assert_allclose(sol.y[0, -1], np.exp(-0.1))
@pytest.mark.parametrize("f0_fill", [np.nan, np.inf])
def test_initial_state_finiteness(f0_fill):
# regression test for gh-17846
msg = "All components of the initial state `y0` must be finite."
with pytest.raises(ValueError, match=msg):
solve_ivp(fun_zero, [0, 10], np.full(3, f0_fill))
@pytest.mark.parametrize('method', ['RK23', 'RK45', 'DOP853', 'Radau', 'BDF'])
def test_zero_interval(method):
# Case where upper and lower limits of integration are the same
# Result of integration should match initial state.
# f[y(t)] = 2y(t)
def f(t, y):
return 2 * y
res = solve_ivp(f, (0.0, 0.0), np.array([1.0]), method=method)
assert res.success
assert_allclose(res.y[0, -1], 1.0)
@pytest.mark.parametrize('method', ['RK23', 'RK45', 'DOP853', 'Radau', 'BDF'])
def test_tbound_respected_small_interval(method):
"""Regression test for gh-17341"""
SMALL = 1e-4
# f[y(t)] = 2y(t) on t in [0,SMALL]
# undefined otherwise
def f(t, y):
if t > SMALL:
raise ValueError("Function was evaluated outside interval")
return 2 * y
res = solve_ivp(f, (0.0, SMALL), np.array([1]), method=method)
assert res.success
@pytest.mark.parametrize('method', ['RK23', 'RK45', 'DOP853', 'Radau', 'BDF'])
def test_tbound_respected_larger_interval(method):
"""Regression test for gh-8848"""
def V(r):
return -11/r + 10 * r / (0.05 + r**2)
def func(t, p):
if t < -17 or t > 2:
raise ValueError("Function was evaluated outside interval")
P = p[0]
Q = p[1]
r = np.exp(t)
dPdr = r * Q
dQdr = -2.0 * r * ((-0.2 - V(r)) * P + 1 / r * Q)
return np.array([dPdr, dQdr])
result = solve_ivp(func,
(-17, 2),
y0=np.array([1, -11]),
max_step=0.03,
vectorized=False,
t_eval=None,
atol=1e-8,
rtol=1e-5)
assert result.success
@pytest.mark.parametrize('method', ['RK23', 'RK45', 'DOP853', 'Radau', 'BDF'])
def test_tbound_respected_oscillator(method):
"Regression test for gh-9198"
def reactions_func(t, y):
if (t > 205):
raise ValueError("Called outside interval")
yprime = np.array([1.73307544e-02,
6.49376470e-06,
0.00000000e+00,
0.00000000e+00])
return yprime
def run_sim2(t_end, n_timepoints=10, shortest_delay_line=10000000):
init_state = np.array([134.08298555, 138.82348612, 100., 0.])
t0 = 100.0
t1 = 200.0
return solve_ivp(reactions_func,
(t0, t1),
init_state.copy(),
dense_output=True,
max_step=t1 - t0)
result = run_sim2(1000, 100, 100)
assert result.success
def test_initial_maxstep():
"""Verify that select_initial_step respects max_step"""
rtol = 1e-3
atol = 1e-6
y0 = np.array([1/3, 2/9])
for (t0, t_bound) in ((5, 9), (5, 1)):
for method_order in [RK23.error_estimator_order,
RK45.error_estimator_order,
DOP853.error_estimator_order,
3, #RADAU
1 #BDF
]:
step_no_max = select_initial_step(fun_rational, t0, y0, t_bound,
np.inf,
fun_rational(t0,y0),
np.sign(t_bound - t0),
method_order,
rtol, atol)
max_step = step_no_max/2
step_with_max = select_initial_step(fun_rational, t0, y0, t_bound,
max_step,
fun_rational(t0, y0),
np.sign(t_bound - t0),
method_order,
rtol, atol)
assert_equal(max_step, step_with_max)

View File

@ -498,9 +498,12 @@ def hb_read(path_or_open_file):
>>> data = csr_array(eye(3)) # create a sparse array
>>> hb_write("data.hb", data) # write a hb file
>>> print(hb_read("data.hb")) # read a hb file
(np.int32(0), np.int32(0)) 1.0
(np.int32(1), np.int32(1)) 1.0
(np.int32(2), np.int32(2)) 1.0
<Compressed Sparse Column sparse matrix of dtype 'float64'
with 3 stored elements and shape (3, 3)>
Coords Values
(0, 0) 1.0
(1, 1) 1.0
(2, 2) 1.0
"""
def _get_matrix(fid):
hb = HBFile(fid)
@ -548,9 +551,12 @@ def hb_write(path_or_open_file, m, hb_info=None):
>>> data = csr_array(eye(3)) # create a sparse array
>>> hb_write("data.hb", data) # write a hb file
>>> print(hb_read("data.hb")) # read a hb file
(np.int32(0), np.int32(0)) 1.0
(np.int32(1), np.int32(1)) 1.0
(np.int32(2), np.int32(2)) 1.0
<Compressed Sparse Column sparse matrix of dtype 'float64'
with 3 stored elements and shape (3, 3)>
Coords Values
(0, 0) 1.0
(1, 1) 1.0
(2, 2) 1.0
"""
m = m.tocsc(copy=False)

View File

@ -181,7 +181,8 @@ class TestEig:
assert_equal(w, np.inf)
assert_allclose(vr, 1)
def _check_gen_eig(self, A, B, atol_homog=1e-13, rtol_homog=1e-13):
def _check_gen_eig(self, A, B, atol_homog=1e-13, rtol_homog=1e-13,
atol=1e-13, rtol=1e-13):
if B is not None:
A, B = asarray(A), asarray(B)
B0 = B
@ -230,7 +231,7 @@ class TestEig:
for i in range(res.shape[1]):
if np.all(isfinite(res[:, i])):
assert_allclose(res[:, i], 0,
rtol=1e-13, atol=1e-13, err_msg=msg)
rtol=rtol, atol=atol, err_msg=msg)
# try to consistently order eigenvalues, including complex conjugate pairs
w_fin = w[isfinite(w)]
@ -269,7 +270,7 @@ class TestEig:
[24, 35, 18, 21, 22]])
with np.errstate(all='ignore'):
self._check_gen_eig(A, B, atol_homog=5e-13)
self._check_gen_eig(A, B, atol_homog=5e-13, atol=5e-13)
def test_falker(self):
# Test matrices giving some Nan generalized eigenvalues.

View File

@ -422,7 +422,7 @@ fortran_ignore_warnings = ff.get_supported_arguments(
# Intel Fortran (ifort) does not run the preprocessor by default, if Fortran
# code uses preprocessor statements, add this compile flag to it.
_fflag_fpp = []
if ff.get_id() == 'intel-cl'
if ff.get_id() in ['intel-cl', 'intel-llvm-cl']
if is_windows
_fflag_fpp = ff.get_supported_arguments('/fpp')
else

View File

@ -83,23 +83,32 @@ def test_decorator():
""" Docstring
%(strtest3)s
"""
assert_equal(func.__doc__, """ Docstring
def expected():
""" Docstring
Another test
with some indent
""")
"""
assert_equal(func.__doc__, expected.__doc__)
# without unindentation of parameters
decorator = doccer.filldoc(doc_dict, False)
# The docstring should be unindented for Python 3.13+
# because of https://github.com/python/cpython/issues/81283
decorator = doccer.filldoc(doc_dict, False if \
sys.version_info < (3, 13) else True)
@decorator
def func():
""" Docstring
%(strtest3)s
"""
assert_equal(func.__doc__, """ Docstring
def expected():
""" Docstring
Another test
with some indent
""")
"""
assert_equal(func.__doc__, expected.__doc__)
@pytest.mark.skipif(DOCSTRINGS_STRIPPED, reason="docstrings stripped")

View File

@ -286,7 +286,7 @@ class Data:
def __getattr__(self, attr):
""" Dispatch attribute access to the metadata dictionary.
"""
if attr in self.meta:
if attr != "meta" and attr in self.meta:
return self.meta[attr]
else:
raise AttributeError("'%s' not in metadata" % attr)
@ -408,17 +408,18 @@ class RealData(Data):
return weights
def __getattr__(self, attr):
lookup_tbl = {('wd', 'sx'): (self._sd2wt, self.sx),
('wd', 'covx'): (self._cov2wt, self.covx),
('we', 'sy'): (self._sd2wt, self.sy),
('we', 'covy'): (self._cov2wt, self.covy)}
if attr not in ('wd', 'we'):
if attr in self.meta:
if attr != "meta" and attr in self.meta:
return self.meta[attr]
else:
raise AttributeError("'%s' not in metadata" % attr)
else:
lookup_tbl = {('wd', 'sx'): (self._sd2wt, self.sx),
('wd', 'covx'): (self._cov2wt, self.covx),
('we', 'sy'): (self._sd2wt, self.sy),
('we', 'covy'): (self._cov2wt, self.covy)}
func, arg = lookup_tbl[(attr, self._ga_flags[attr])]
if arg is not None:
@ -533,7 +534,7 @@ class Model:
""" Dispatch attribute access to the metadata.
"""
if attr in self.meta:
if attr != "meta" and attr in self.meta:
return self.meta[attr]
else:
raise AttributeError("'%s' not in metadata" % attr)
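The ``attr != "meta"`` guard added in each ``__getattr__`` is what makes these classes picklable (gh-21306): unpickling restores ``__dict__`` without running ``__init__``, so a lookup of ``self.meta`` before it exists would re-enter ``__getattr__`` and recurse forever. A stripped-down illustration of the pattern (a hypothetical class, not SciPy's):

```python
import pickle

class Meta:
    def __init__(self, **kwargs):
        self.meta = kwargs

    def __getattr__(self, attr):
        # Without the attr != "meta" guard, pickle's probing for special
        # methods before __dict__ is restored would recurse: looking up
        # self.meta calls __getattr__("meta"), which looks up self.meta, ...
        if attr != "meta" and attr in self.meta:
            return self.meta[attr]
        raise AttributeError("'%s' not in metadata" % attr)

obj = pickle.loads(pickle.dumps(Meta(label="x")))
print(obj.label)  # -> x
```

With the guard, ``__getattr__("meta")`` raises ``AttributeError`` immediately, which pickle treats as "special method absent" and falls back to the default reconstruction.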

View File

@ -1,3 +1,4 @@
import pickle
import tempfile
import shutil
import os
@ -563,3 +564,43 @@ class TestODR:
odr_obj.set_job(fit_type=0, del_init=1)
# Just make sure that it runs without raising an exception.
odr_obj.run()
def test_pickling_data(self):
x = np.linspace(0.0, 5.0)
y = 1.0 * x + 2.0
data = Data(x, y)
obj_pickle = pickle.dumps(data)
del data
pickle.loads(obj_pickle)
def test_pickling_real_data(self):
x = np.linspace(0.0, 5.0)
y = 1.0 * x + 2.0
data = RealData(x, y)
obj_pickle = pickle.dumps(data)
del data
pickle.loads(obj_pickle)
def test_pickling_model(self):
obj_pickle = pickle.dumps(unilinear)
pickle.loads(obj_pickle)
def test_pickling_odr(self):
x = np.linspace(0.0, 5.0)
y = 1.0 * x + 2.0
odr_obj = ODR(Data(x, y), unilinear)
obj_pickle = pickle.dumps(odr_obj)
del odr_obj
pickle.loads(obj_pickle)
def test_pickling_output(self):
x = np.linspace(0.0, 5.0)
y = 1.0 * x + 2.0
output = ODR(Data(x, y), unilinear).run()
obj_pickle = pickle.dumps(output)
del output
pickle.loads(obj_pickle)

View File

@ -10,7 +10,7 @@ def _minimize_cobyqa(fun, x0, args=(), bounds=None, constraints=(),
**unknown_options):
"""
Minimize a scalar function of one or more variables using the
Constrained Optimization BY Quadratic Approximations (COBYQA) algorithm.
Constrained Optimization BY Quadratic Approximations (COBYQA) algorithm [1]_.
.. versionadded:: 1.14.0
@ -40,6 +40,11 @@ def _minimize_cobyqa(fun, x0, args=(), bounds=None, constraints=(),
if all the lower and upper bounds are finite, the variables are scaled
to be within the range :math:`[-1, 1]`. If any of the lower or upper
bounds is infinite, the variables are not scaled.
References
----------
.. [1] COBYQA
https://www.cobyqa.com/stable/
"""
from .._lib.cobyqa import minimize # import here to avoid circular imports

View File

@@ -51,7 +51,8 @@ basiclu_lib = static_library('basiclu',
'../../_lib/highs/src',
'../../_lib/highs/src/ipm/basiclu/include'
],
c_args: [Wno_unused_variable, highs_define_macros]
c_args: [Wno_unused_variable, highs_define_macros],
gnu_symbol_visibility: 'inlineshidden',
)
highs_flags = [
@@ -109,7 +110,8 @@ ipx_lib = static_library('ipx',
'cython/src/'
],
dependencies: thread_dep,
cpp_args: [highs_flags, highs_define_macros]
cpp_args: [highs_flags, highs_define_macros],
gnu_symbol_visibility: 'inlineshidden',
)
highs_lib = static_library('highs',
@@ -226,7 +228,8 @@ highs_lib = static_library('highs',
'../../_lib/highs/src/util/',
],
dependencies: thread_dep,
cpp_args: [highs_flags, highs_define_macros]
cpp_args: [highs_flags, highs_define_macros],
gnu_symbol_visibility: 'inlineshidden',
)
_highs_wrapper = py3.extension_module('_highs_wrapper',

View File

@@ -1420,6 +1420,7 @@ class TestDifferentialEvolutionSolver:
assert_(np.all(res.x >= np.array(bounds)[:, 0]))
assert_(np.all(res.x <= np.array(bounds)[:, 1]))
@pytest.mark.fail_slow(5)
def test_L9(self):
# Lampinen ([5]) test problem 9

View File

@@ -64,7 +64,7 @@ class TestDifferentiate:
assert_equal(res.df.shape, shape)
ref_error = [ref.error for ref in refs]
assert_allclose(res.error.ravel(), ref_error, atol=5e-15)
assert_allclose(res.error.ravel(), ref_error, atol=1e-12)
assert_equal(res.error.shape, shape)
ref_success = [ref.success for ref in refs]

View File

@@ -97,7 +97,7 @@ class TestNNLS:
# Small perturbations can already make the infinite loop go away (just
# uncomment the next line)
# k = k + 1e-10 * np.random.normal(size=N)
k = k + 1e-10 * np.random.normal(size=N)
dact, _ = nnls(W @ A, W @ k)
assert_allclose(dact, d, rtol=0., atol=1e-10)
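For context, NNLS solves min ||Ax - b|| subject to x >= 0. A toy projected-gradient sketch, assuming only NumPy — deliberately not SciPy's active-set implementation, whose potential infinite loop this test exercises:

```python
import numpy as np

def nnls_pg(A, b, iters=5000, lr=None):
    # Projected gradient descent for min ||Ax - b||^2 s.t. x >= 0:
    # a plain gradient step followed by clipping to the feasible set.
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for this quadratic
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(0.0, x - lr * A.T @ (A @ x - b))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])
b = A @ x_true
x = nnls_pg(A, b)
assert np.all(x >= 0) and np.allclose(A @ x, b, atol=1e-6)
```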

View File

@@ -310,10 +310,13 @@ from ._max_len_seq import max_len_seq
from ._upfirdn import upfirdn
from ._spline import (
sepfir2d
cspline2d,
qspline2d,
sepfir2d,
symiirorder1,
symiirorder2,
)
from ._splines import *
from ._bsplines import *
from ._filter_design import *
from ._fir_filter_design import *
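The re-exported `symiirorder1` applies the mirror-symmetric first-order smoothing filter described in `_spline.c.in` (forward pass, then a reversed, scaled pass). A rough pure-NumPy sketch of that cascade, with truncated-sum boundary handling; this is illustrative only, not the library routine:

```python
import numpy as np

def symiir1_sketch(x, c0, z1):
    n = len(x)
    yp = np.empty_like(x)
    # Forward starting condition: yp[0] = x[0] + sum_k z1**(k+1) * x[k],
    # truncated at the signal length.
    k = np.arange(n)
    yp[0] = x[0] + np.sum(z1 ** (k + 1) * x)
    for i in range(1, n):                 # yp[i] = x[i] + z1 * yp[i-1]
        yp[i] = x[i] + z1 * yp[i - 1]
    y = np.empty_like(x)
    y[-1] = -c0 / (z1 - 1.0) * yp[-1]     # backward starting condition
    for i in range(n - 2, -1, -1):        # y[i] = z1 * y[i+1] + c0 * yp[i]
        y[i] = z1 * y[i + 1] + c0 * yp[i]
    return y

# A constant signal passes through with unit gain when c0 = (1 - z1)**2,
# since H(1) = c0 / (1 - z1)**2.
x = np.ones(50)
y = symiir1_sketch(x, c0=0.25, z1=0.5)
assert np.allclose(y, 1.0)
```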

View File

@@ -22,6 +22,20 @@
#define ABSQ(a) ( (a*CONJ(a)))
#endif
void
compute_root_from_lambda(double lambda, double *r, double *omega)
{
double xi;
double tmp, tmp2;
tmp = sqrt(3 + 144*lambda);
xi = 1 - 96*lambda + 24*lambda * tmp;
*omega = atan(sqrt((144*lambda - 1.0)/xi));
tmp2 = sqrt(xi);
*r = (24*lambda - 1 - tmp2)/(24*lambda) * sqrt((48*lambda + 24*lambda*tmp))/tmp2;
return;
}
{{py:
@@ -37,195 +51,180 @@ RTYPES = RTYPE + RTYPE
}}
{{for SUB, TYP in zip(RNAME, RTYPE)}}
// Approximate steady state conditions for the system cs / (1 - a2 * z^-1 - a3 * z^-2)
static {{TYP}} {{SUB}}_hc(int, {{TYP}}, double, double);
// Approximate steady state conditions for the system cs / (1 - a2 * z - a3 * z^2)
static {{TYP}} {{SUB}}_hs(int, {{TYP}}, double, double);
{{endfor}}
/* Implement the following difference equation */
/* y[n] = a1 * x[n] + a2 * y[n-1] */
/* with a given starting value loaded into the array */
{{for SUB, TYP, RTYP in zip(NAMES, TYPES, RTYPES)}}
{{for SUB, TYP in zip(NAMES, TYPES)}}
{{if SUB in CNAME}}
#ifdef __GNUC__
{{endif}}
int {{SUB}}_SYM_IIR1_initial({{TYP}} z1, {{TYP}} *x, {{TYP}} *yp0,
int M, int N, {{RTYP}} precision)
void {{SUB}}_IIR_order1({{TYP}} a1, {{TYP}} a2, {{TYP}} *x, {{TYP}} *y,
int N, int stridex, int stridey)
{
{{TYP}} powz1, diff;
{{RTYP}} err;
int k;
{{TYP}} *yvec = y + stridey;
{{TYP}} *xvec = x + stridex;
int n;
if (ABSQ(z1) >= 1.0) return -2; /* z1 not less than 1 */
/* Fix starting value assuming mirror-symmetric boundary conditions. */
for(int i = 0; i < M; i++) {
yp0[i] = x[N * i];
for (n = 1; n < N; n++) {
*yvec = *xvec * a1 + *(yvec - stridey) * a2;
yvec += stridey;
xvec += stridex;
}
powz1 = 1.0;
k = 0;
precision *= precision;
do {
powz1 *= z1;
for(int i = 0; i < M; i++) {
yp0[i] += powz1 * x[N * i + k];
}
diff = powz1;
err = ABSQ(diff);
k++;
} while((err > precision) && (k < N));
if (k >= N){
/* sum did not converge */
return -3;
}
return 0;
}
{{if SUB in CNAME}}
#endif
{{endif}}
{{endfor}}
/**
Compute the starting initial conditions for the system
cs / (1 - a2 * z^-1 - a3 * z^-2) against signals x, where:
a2 = 2 * r * cos(omega)
a3 = - r ** 2
cs = 1 - 2 * r * cos(omega) + r ** 2
/* Implement the following difference equation */
/* y[n] = a1 * x[n] + a2 * y[n-1] + a3 * y[n-2] */
/* with two starting values loaded into the array */
Arguments
---------
x: double* or float*
2D strided pointer signal of size (M, N). When M > 1, multiple signals
will be processed independently.
yp: double* or float*
Strided output state condition pointer of size (M, 2).
yp[:, 0] will contain the initial conditions y[n - 1],
whereas yp[:, 1] will contain the initial conditions y[n - 2].
M: int
Number of signals to compute initial conditions for.
N: int
Length of the signals.
precision: double or float
Precision up to which the initial conditions will be computed.
**/
{{for SUB, TYP in zip(RNAME, RTYPE)}}
int {{SUB}}_SYM_IIR2_initial_fwd(double r, double omega,
{{TYP}} *x, {{TYP}} *yp, int M, int N, {{TYP}} precision) {
/* Fix starting values assuming mirror-symmetric boundary conditions. */
{{TYP}} cs = 1 - 2 * r * cos(omega) + r * r;
{{for SUB, TYP in zip(NAMES, TYPES)}}
{{if SUB in CNAME}}
#ifdef __GNUC__
{{endif}}
void {{SUB}}_IIR_order2({{TYP}} a1, {{TYP}} a2, {{TYP}} a3,
{{TYP}} *x, {{TYP}} *y, int N, int stridex, int stridey)
{
{{TYP}} *yvec = y + 2*stridey;
{{TYP}} *xvec = x + 2*stridex;
int n;
for(int i = 0; i < M; i++) {
// Compute starting condition y[n - 1]
yp[2 * i] = {{SUB}}_hc(0, cs, r, omega) * x[N * i];
for (n = 2; n < N; n++) {
*yvec = *xvec * a1 + *(yvec - stridey) * a2 + *(yvec - 2*stridey) * a3;
yvec += stridey;
xvec += stridex;
}
int k = 0;
precision *= precision;
{{TYP}} err;
{{TYP}} diff;
do {
diff = {{SUB}}_hc(k+1, cs, r, omega);
for(int i = 0; i < M; i++) {
// Keep computing starting condition y[n - 1]
yp[2 * i] += diff * x[N * i + k];
}
err = diff * diff;
k++;
} while((err > precision) && (k < N));
if (k >= N) {return -3;} /* sum did not converge */
for(int i = 0; i < M; i++) {
// Compute starting condition y[n - 2]
yp[2 * i + 1] = {{SUB}}_hc(0, cs, r, omega) * x[N * i + 1];
yp[2 * i + 1] += {{SUB}}_hc(1, cs, r, omega) * x[N * i];
}
k = 0;
do {
diff = {{SUB}}_hc(k+2, cs, r, omega);
for(int i = 0; i < M; i++) {
// Keep computing starting condition y[n - 2]
yp[2 * i + 1] += diff * x[N * i + k];
}
err = diff * diff;
k++;
} while((err > precision) && (k < N));
if (k >= N) {return -3;} /* sum did not converge */
return 0;
}
{{if SUB in CNAME}}
#endif
{{endif}}
{{endfor}}
/**
Compute the starting initial conditions for the system (run backwards)
cs / (1 - a2 * z - a3 * z^2) against signal x, where:
a2 = 2 * r * cos(omega)
a3 = - r ** 2
cs = 1 - 2 * r * cos(omega) + r ** 2
/* Implement a second order IIR difference equation using a cascade
of first order sections. Suppose the transfer function is
cs
H(z) = -------------------
(1-z1/z) ( 1-z2/z)
Arguments
---------
x: double* or float*
2D strided pointer signal of size (M, N). When M > 1, multiple signals
will be processed independently.
yp: double* or float*
Output state condition pointer of size (M, 2), yp[:, 0] will contain the
initial conditions y[n + 1], whereas yp[:, 1] will contain the initial
condition y[n + 2].
M: int
Number of signals to compute initial conditions for.
N: int
Length of the signals.
precision: double or float
Precision up to which the initial conditions will be computed.
**/
{{for SUB, TYP in zip(RNAME, RTYPE)}}
int {{SUB}}_SYM_IIR2_initial_bwd(double r, double omega,
{{TYP}} *x, {{TYP}} *yp, int M, int N, {{TYP}} precision) {
double rsq = r * r;
{{TYP}} cs = 1 - 2 * r * cos(omega) + rsq;
then the following pair is implemented with one starting value loaded in
the output array and the starting value for the intermediate array
passed in as yp0.
/* Fix starting values assuming mirror-symmetric boundary conditions. */
int k = 0;
y1[n] = x[n] + z1 y1[n-1]
yp[n] = cs y1[n] + z2 yp[n-1]
{{TYP}} err;
{{TYP}} diff;
*/
do {
diff = ({{SUB}}_hs(k, cs, rsq, omega) + {{SUB}}_hs(k+1, cs, rsq, omega));
for(int i = 0; i < M; i++) {
// Compute initial condition y[n + 1]
yp[2 * i] += diff * x[N * i + N - 1 - k];
}
err = diff * diff;
k++;
} while((err > precision) && (k < N));
if (k >= N) {return -3;} /* sum did not converge */
{{for SUB, TYP in zip(NAMES, TYPES)}}
{{if SUB in CNAME}}
#ifdef __GNUC__
{{endif}}
void {{SUB}}_IIR_order2_cascade({{TYP}} cs, {{TYP}} z1, {{TYP}} z2,
{{TYP}} y1_0, {{TYP}} *x, {{TYP}} *yp,
int N, int stridex, int stridey)
{
{{TYP}} *yvec = yp + stridey;
{{TYP}} *xvec = x + stridex;
int n;
for (n = 1; n < N; n++) {
y1_0 = *xvec + y1_0 * z1;
*yvec = cs * y1_0 + *(yvec - stridey) * z2;
yvec += stridey;
xvec += stridex;
}
}
{{if SUB in CNAME}}
#endif
{{endif}}
{{endfor}}
/* Implement a smoothing IIR filter with mirror-symmetric boundary conditions
using a cascade of first-order sections. The second section uses a
reversed sequence. This implements the following transfer function:
c0
H(z) = ---------------------------
(1-z1/z) (1 - z1 z)
with the following difference equations:
yp[n] = x[n] + z1 yp[n-1]
with starting condition:
yp[0] = x[0] + Sum(z1^(k+1) x[k],k=0..Infinity)
and
y[n] = z1 y[n+1] + c0 yp[n]
with starting condition:
y[N-1] = z1 / (z1-1) yp[N-1]
The resulting signal will have mirror symmetric boundary conditions as well.
If memory could not be allocated for the temporary vector yp, the
function returns -1 otherwise it returns 0.
z1 should be less than 1;
*/
{{for SUB, TYP, RTYP in zip(NAMES, TYPES, RTYPES)}}
{{if SUB in CNAME}}
#ifdef __GNUC__
{{endif}}
int {{SUB}}_IIR_forback1({{TYP}} c0, {{TYP}} z1, {{TYP}} *x, {{TYP}} *y,
int N, int stridex, int stridey, {{RTYP}} precision)
{
{{TYP}} *yp = NULL;
{{TYP}} *xptr = x;
{{TYP}} yp0, powz1, diff;
{{RTYP}} err;
int k;
if (ABSQ(z1) >= 1.0) return -2; /* z1 not less than 1 */
/* Initialize memory for loop */
if ((yp = malloc(N*sizeof({{TYP}})))==NULL) return -1;
/* Fix starting value assuming mirror-symmetric boundary conditions. */
yp0 = x[0];
powz1 = 1.0;
k = 0;
precision *= precision;
do {
diff = ({{SUB}}_hs(k-1, cs, rsq, omega) + {{SUB}}_hs(k+2, cs, rsq, omega));
for(int i = 0; i < M; i++) {
// Compute initial condition y[n + 2]
yp[2 * i + 1] += diff * x[N * i + N - 1 - k];
}
err = diff * diff;
k++;
yp[0] = yp0;
powz1 *= z1;
yp0 += powz1 * (*xptr);
diff = powz1;
err = ABSQ(diff);
xptr += stridex;
k++;
} while((err > precision) && (k < N));
if (k >= N){
/* sum did not converge */
free(yp);
return -3;
}
yp[0] = yp0;
if (k >= N) {return -3;} /* sum did not converge */
{{SUB}}_IIR_order1(1.0, z1, x, yp, N, stridex, 1);
*(y + (N - 1)*stridey) = -c0 / (z1 - 1.0) * yp[N-1];
{{SUB}}_IIR_order1(c0, z1, yp + N - 1, y + (N - 1)*stridey, N, -1, -stridey);
free(yp);
return 0;
}
{{if SUB in CNAME}}
#endif
{{endif}}
{{endfor}}
/* h must be odd length */
@@ -349,10 +348,12 @@ int {{SUB}}_separable_2Dconvolve_mirror({{TYP}} *in, {{TYP}} *out,
{{endif}}
{{endfor}}
/**
Approximate the steady state of a two-pole filter in polar form for a
step input.
**/
{{for SUB, TYP in zip(RNAME, RTYPE)}}
static {{TYP}} {{SUB}}_hc(int, {{TYP}}, double, double);
static {{TYP}} {{SUB}}_hs(int, {{TYP}}, double, double);
{{endfor}}
{{for SUB, TYP in zip(RNAME, RTYPE)}}
{{TYP}} {{SUB}}_hc(int k, {{TYP}} cs, double r, double omega)
{
@@ -365,11 +366,6 @@ step input.
}
{{endfor}}
/**
Approximate the steady state of a two-pole filter in polar form run backwards
for a step input.
**/
{{for SUB, TYP in zip(RNAME, RTYPE)}}
{{TYP}} {{SUB}}_hs(int k, {{TYP}} cs, double rsq, double omega)
{
@@ -396,3 +392,284 @@ for a step input.
}
{{endfor}}
/* Implement a smoothing IIR filter with mirror-symmetric boundary conditions
using a cascade of second-order sections. The second section uses a
reversed sequence. This implements the following transfer function:
cs^2
H(z) = --------------------------------------
(1 - a2/z - a3/z^2) (1 - a2 z -a3 z^2 )
where a2 = (2 r cos omega)
a3 = - r^2
cs = 1 - 2 r cos omega + r^2
with the following difference equations:
yp[n] = cs*x[n] - b1 yp[n-1] - b2 yp[n-2]
with starting conditions:
yp[0] = hc[0] x[0] + Sum(hc[k+1]*x[k],k=0..Infinity)
yp[1] = hc[0] x[1] + hc[1] x[0] + Sum(hc[k+2] x[k], k=0..Infinity)
and
y[n] = cs*yp[n] - b1 y[n+1] -b2 y[n+2]
with starting conditions:
y[N-1] = Sum((hs[k] + hs[k+1])x[N-1-k],k=0..Infinity)
y[N-2] = Sum((hs[k-1] + hs[k+2])x[N-1-k],k=0..Infinity)
The resulting signal will have mirror symmetric boundary conditions as well.
If memory could not be allocated for the temporary vector yp, the
function returns -1 otherwise it returns 0.
z1 should be less than 1;
*/
{{for SUB, TYP in zip(RNAME, RTYPE)}}
int {{SUB}}_IIR_forback2 (double r, double omega, {{TYP}} *x, {{TYP}} *y,
int N, int stridex, int stridey, {{TYP}} precision) {
{{TYP}} cs;
{{TYP}} *yp = NULL;
{{TYP}} *yptr;
{{TYP}} *xptr;
{{TYP}} yp0;
{{TYP}} yp1;
double rsq;
{{TYP}} diff;
{{TYP}} err;
{{TYP}} a2, a3;
int k;
if (r >= 1.0) return -2; /* z1 not less than 1 */
/* Initialize memory for loop */
if ((yp = malloc(N*sizeof({{TYP}})))==NULL) return -1;
rsq = r * r;
a2 = 2 * r * cos(omega);
a3 = -rsq;
cs = 1 - 2 * r * cos(omega) + rsq;
/* Fix starting values assuming mirror-symmetric boundary conditions. */
yp0 = {{SUB}}_hc(0, cs, r, omega) * x[0];
k = 0;
precision *= precision;
xptr = x;
do {
yp[0] = yp0;
diff = {{SUB}}_hc(k+1, cs, r, omega);
yp0 += diff * (*xptr);
err = diff * diff;
xptr += stridex;
k++;
} while((err > precision) && (k < N));
if (k >= N) {free(yp); return -3;} /* sum did not converge */
yp[0] = yp0;
yp1 = {{SUB}}_hc(0, cs, r, omega) * (*(x+stridex));
yp1 += {{SUB}}_hc(1, cs, r, omega) * x[0];
k = 0;
xptr = x;
do {
yp[1] = yp1;
diff = {{SUB}}_hc(k+2, cs, r, omega);
yp1 += diff * (*xptr);
err = diff * diff;
xptr += stridex;
k++;
} while((err > precision) && (k < N));
if (k >= N) {free(yp); return -3;} /* sum did not converge */
yp[1] = yp1;
{{SUB}}_IIR_order2(cs, a2, a3, x, yp, N, stridex, 1);
/* Fix starting values assuming mirror-symmetric boundary conditions. */
yp0 = 0.0;
k = 0;
yptr = y + (N-1)*stridey;
xptr = x + (N-1)*stridex;
do {
*yptr = yp0;
diff = ({{SUB}}_hs(k, cs, rsq, omega) + {{SUB}}_hs(k+1, cs, rsq, omega));
yp0 += diff * (*xptr);
err = diff * diff;
xptr -= stridex;
k++;
} while((err > precision) && (k < N));
if (k >= N) {free(yp); return -3;} /* sum did not converge */
*yptr = yp0;
yp1 = 0.0;
k = 0;
yptr -= stridey; /* Initialize in next-to-last slot in output array */
xptr = x + (N-1)*stridex;
do {
*yptr = yp1;
diff = ({{SUB}}_hs(k-1, cs, rsq, omega) + {{SUB}}_hs(k+2, cs, rsq, omega));
yp1 += diff * (*xptr);
err = diff * diff;
xptr -= stridex;
k++;
} while((err > precision) && (k < N));
if (k >= N) {free(yp); return -3;} /* sum did not converge */
*yptr = yp1;
{{SUB}}_IIR_order2(cs, a2, a3, yp+N-1, yptr+stridey, N, -1, -stridey);
free(yp);
return 0;
}
{{endfor}}
/* Find the cubic spline coefficients of an image
image is M rows by N columns stored row-wise in memory (vary column number
first). It will be replaced with the spline coefficients.
lambda is a smoothing parameter (lambda = 100 approximately corresponds
to a cutoff frequency of 0.1*(sample freq))
strides is an integer array [rowstride, colstride]
telling how much memory in units of sizeof(DATA TYPE) bytes to skip
to get to the next element.
*/
/* to get the (smoothed) image back mirror-symmetric convolve with a length
three separable FIR filter [1.0, 4.0, 1.0]/ 6.0
*/
{{for SUB, TYP in zip(RNAME, RTYPE)}}
int {{SUB}}_cubic_spline2D({{TYP}} *image, {{TYP}} *coeffs, int M, int N, double lambda,
npy_intp *strides, npy_intp *cstrides, {{TYP}} precision) {
double r, omega;
{{TYP}} *inptr;
{{TYP}} *coptr;
{{TYP}} *tmpmem;
{{TYP}} *tptr;
int m, n, retval=0;
tmpmem = malloc(N*M*sizeof({{TYP}}));
if (tmpmem == NULL) return -1;
if (lambda <= 1.0 / 144.0) {
/* normal cubic spline */
r = -2 + sqrt(3.0);
/* Loop over rows */
inptr = image;
tptr = tmpmem;
for (m = 0; m < M; m++) {
retval = {{SUB}}_IIR_forback1 (-r*6.0, r, inptr, tptr, N, strides[1], 1, precision);
if (retval < 0) break;
inptr += strides[0];
tptr += N;
}
if (retval >=0) {
/* Loop over columns */
tptr = tmpmem;
coptr = coeffs;
for (n = 0; n < N; n++) {
retval = {{SUB}}_IIR_forback1 (-r*6.0, r, tptr, coptr, M, N, cstrides[0], precision);
if (retval < 0) break;
coptr += cstrides[1];
tptr += 1;
}
}
free(tmpmem);
return retval;
}
/* Smoothing spline */
/* Compute r and omega from lambda */
compute_root_from_lambda(lambda, &r, &omega);
/* Loop over rows */
inptr = image;
tptr = tmpmem;
for (m = 0; m < M; m++) {
retval = {{SUB}}_IIR_forback2 (r, omega, inptr, tptr, N, strides[1],
1, precision);
if (retval < 0) break;
inptr += strides[0];
tptr += N;
}
/* Loop over columns */
tptr = tmpmem;
coptr = coeffs;
for (n = 0; n < N; n++) {
retval = {{SUB}}_IIR_forback2 (r, omega, tptr, coptr, M, N,
cstrides[0], precision);
if (retval < 0) break;
coptr += cstrides[1];
tptr += 1;
}
free(tmpmem);
return retval;
}
{{endfor}}
/* Find the quadratic spline coefficients of an image
image is M rows by N columns stored row-wise in memory (vary column number
first). It will be replaced with the spline coefficients.
lambda is a smoothing parameter (lambda = 100 approximately corresponds
to a cutoff frequency of 0.1*(sample freq))
must be zero for now.
strides is an integer array [rowstride, colstride]
telling how much memory in units of sizeof(DATA TYPE) bytes to skip
to get to the next element.
*/
/* to get the (smoothed) image back mirror-symmetric convolve with a length
three separable FIR filter [1.0, 6.0, 1.0]/ 8.0
*/
{{for SUB, TYP in zip(RNAME, RTYPE)}}
int {{SUB}}_quadratic_spline2D({{TYP}} *image, {{TYP}} *coeffs, int M, int N, double lambda,
npy_intp *strides, npy_intp *cstrides, {{TYP}} precision) {
double r;
{{TYP}} *inptr;
{{TYP}} *coptr;
{{TYP}} *tmpmem;
{{TYP}} *tptr;
int m,n, retval=0;
if (lambda > 0) return -2;
tmpmem = malloc(N*M*sizeof({{TYP}}));
if (tmpmem == NULL) return -1;
/* normal quadratic spline */
r = -3 + 2*sqrt(2.0);
/* Loop over rows */
inptr = image;
tptr = tmpmem;
for (m = 0; m < M; m++) {
retval = {{SUB}}_IIR_forback1 (-r*8.0, r, inptr, tptr, N, strides[1], 1, precision);
if (retval < 0) break;
inptr += strides[0];
tptr += N;
}
if (retval >=0) {
/* Loop over columns */
tptr = tmpmem;
coptr = coeffs;
for (n = 0; n < N; n++) {
retval = {{SUB}}_IIR_forback1 (-r*8.0, r, tptr, coptr, M, N, cstrides[0], precision);
if (retval < 0) break;
coptr += cstrides[1];
tptr += 1;
}
}
free(tmpmem);
return retval;
}
{{endfor}}
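The new `compute_root_from_lambda` helper above maps the smoothing parameter lambda to the pole radius `r` and angle `omega` of the second-order smoothing filter. A direct stdlib transcription for sanity checking (valid for lambda > 1/144, below which the plain cubic-spline branch is used instead):

```python
import math

def compute_root_from_lambda(lamb):
    """Pole radius r and angle omega for smoothing parameter lamb
    (transcribed from the C helper; requires lamb > 1/144)."""
    tmp = math.sqrt(3 + 144 * lamb)
    xi = 1 - 96 * lamb + 24 * lamb * tmp
    omega = math.atan(math.sqrt((144 * lamb - 1.0) / xi))
    tmp2 = math.sqrt(xi)
    r = ((24 * lamb - 1 - tmp2) / (24 * lamb)
         * math.sqrt(48 * lamb + 24 * lamb * tmp) / tmp2)
    return r, omega

r, omega = compute_root_from_lambda(100.0)
# Stability requires r < 1 (IIR_forback2 rejects r >= 1 with code -2).
assert 0 < r < 1 and 0 < omega < math.pi / 2
```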

View File

@@ -1,20 +1,15 @@
from numpy import (asarray, pi, zeros_like,
array, arctan2, tan, ones, arange, floor,
r_, atleast_1d, sqrt, exp, greater, cos, add, sin,
moveaxis, abs, arctan, complex64, float32)
from scipy._lib._util import normalize_axis_index
r_, atleast_1d, sqrt, exp, greater, cos, add, sin)
# From splinemodule.c
from ._spline import sepfir2d
from ._splines import symiirorder1, symiirorder2
from ._spline import cspline2d, sepfir2d
from ._signaltools import lfilter, sosfilt, lfiltic
from scipy.interpolate import BSpline
__all__ = ['spline_filter', 'gauss_spline',
'cspline1d', 'qspline1d', 'qspline2d', 'cspline2d',
'cspline1d_eval', 'qspline1d_eval']
'cspline1d', 'qspline1d', 'cspline1d_eval', 'qspline1d_eval']
def spline_filter(Iin, lmbda=5.0):
@@ -273,16 +268,6 @@ def _quadratic_coeff(signal):
return output * 8.0
def compute_root_from_lambda(lamb):
tmp = sqrt(3 + 144 * lamb)
xi = 1 - 96 * lamb + 24 * lamb * tmp
omega = arctan(sqrt((144 * lamb - 1.0) / xi))
tmp2 = sqrt(xi)
r = ((24 * lamb - 1 - tmp2) / (24 * lamb) *
sqrt(48*lamb + 24 * lamb * tmp) / tmp2)
return r, omega
def cspline1d(signal, lamb=0.0):
"""
Compute cubic spline coefficients for rank-1 array.
@@ -384,118 +369,6 @@ def qspline1d(signal, lamb=0.0):
return _quadratic_coeff(signal)
def collapse_2d(x, axis):
x = moveaxis(x, axis, -1)
x_shape = x.shape
x = x.reshape(-1, x.shape[-1])
if not x.flags.c_contiguous:
x = x.copy()
return x, x_shape
def symiirorder_nd(func, input, *args, axis=-1, **kwargs):
axis = normalize_axis_index(axis, input.ndim)
input_shape = input.shape
input_ndim = input.ndim
if input.ndim > 1:
input, input_shape = collapse_2d(input, axis)
out = func(input, *args, **kwargs)
if input_ndim > 1:
out = out.reshape(input_shape)
out = moveaxis(out, -1, axis)
if not out.flags.c_contiguous:
out = out.copy()
return out
def qspline2d(signal, lamb=0.0, precision=-1.0):
"""
Coefficients for 2-D quadratic (2nd order) B-spline.
Return the second-order B-spline coefficients over a regularly spaced
input grid for the two-dimensional input image.
Parameters
----------
input : ndarray
The input signal.
lamb : float
Specifies the amount of smoothing in the transfer function.
precision : float
Specifies the precision for computing the infinite sum needed to apply
mirror-symmetric boundary conditions.
Returns
-------
output : ndarray
The filtered signal.
"""
if precision < 0.0 or precision >= 1.0:
if signal.dtype in [float32, complex64]:
precision = 1e-3
else:
precision = 1e-6
if lamb > 0:
raise ValueError('lambda must be negative or zero')
# normal quadratic spline
r = -3 + 2 * sqrt(2.0)
c0 = -r * 8.0
z1 = r
out = symiirorder_nd(symiirorder1, signal, c0, z1, precision, axis=-1)
out = symiirorder_nd(symiirorder1, out, c0, z1, precision, axis=0)
return out
def cspline2d(signal, lamb=0.0, precision=-1.0):
"""
Coefficients for 2-D cubic (3rd order) B-spline.
Return the third-order B-spline coefficients over a regularly spaced
input grid for the two-dimensional input image.
Parameters
----------
input : ndarray
The input signal.
lamb : float
Specifies the amount of smoothing in the transfer function.
precision : float
Specifies the precision for computing the infinite sum needed to apply
mirror-symmetric boundary conditions.
Returns
-------
output : ndarray
The filtered signal.
"""
if precision < 0.0 or precision >= 1.0:
if signal.dtype in [float32, complex64]:
precision = 1e-3
else:
precision = 1e-6
if lamb <= 1 / 144.0:
# Normal cubic spline
r = -2 + sqrt(3.0)
out = symiirorder_nd(
symiirorder1, signal, -r * 6.0, r, precision=precision, axis=-1)
out = symiirorder_nd(
symiirorder1, out, -r * 6.0, r, precision=precision, axis=0)
return out
r, omega = compute_root_from_lambda(lamb)
out = symiirorder_nd(symiirorder2, signal, r, omega,
precision=precision, axis=-1)
out = symiirorder_nd(symiirorder2, out, r, omega,
precision=precision, axis=0)
return out
def cspline1d_eval(cj, newx, dx=1.0, x0=0):
"""Evaluate a cubic spline at the new set of points.

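The removed `collapse_2d`/`symiirorder_nd` helpers flattened an N-D array so a 1-D routine could filter along one axis. The axis-collapsing trick they used, sketched with NumPy (illustrative only):

```python
import numpy as np

def collapse_2d(x, axis):
    # Move the target axis last, then flatten the leading axes so a 1-D
    # filter can be applied row by row (as the removed helper did).
    x = np.moveaxis(x, axis, -1)
    shape = x.shape                    # shape after the move, for restoring
    flat = np.ascontiguousarray(x.reshape(-1, shape[-1]))
    return flat, shape

a = np.arange(24.0).reshape(2, 3, 4)
flat, shape = collapse_2d(a, axis=1)
assert flat.shape == (8, 3) and shape == (2, 4, 3)
# Undo: reshape back and move the filtering axis home.
restored = np.moveaxis(flat.reshape(shape), -1, 1)
assert np.array_equal(restored, a)
```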
View File

@@ -7,10 +7,9 @@
/* defined below */
void f_medfilt2(float*,float*,npy_intp*,npy_intp*);
void d_medfilt2(double*,double*,npy_intp*,npy_intp*);
void b_medfilt2(unsigned char*,unsigned char*,npy_intp*,npy_intp*);
extern char *check_malloc (size_t);
void f_medfilt2(float*,float*,npy_intp*,npy_intp*,int*);
void d_medfilt2(double*,double*,npy_intp*,npy_intp*,int*);
void b_medfilt2(unsigned char*,unsigned char*,npy_intp*,npy_intp*,int*);
/* The QUICK_SELECT routine is based on Hoare's Quickselect algorithm,
@@ -72,7 +71,7 @@ TYPE NAME(TYPE arr[], int n) \
/* 2-D median filter with zero-padding on edges. */
#define MEDIAN_FILTER_2D(NAME, TYPE, SELECT) \
void NAME(TYPE* in, TYPE* out, npy_intp* Nwin, npy_intp* Ns) \
void NAME(TYPE* in, TYPE* out, npy_intp* Nwin, npy_intp* Ns, int* errnum) \
{ \
int nx, ny, hN[2]; \
int pre_x, pre_y, pos_x, pos_y; \
@@ -80,7 +79,11 @@ void NAME(TYPE* in, TYPE* out, npy_intp* Nwin, npy_intp* Ns)
TYPE *myvals, *fptr1, *fptr2, *ptr1, *ptr2; \
\
totN = Nwin[0] * Nwin[1]; \
myvals = (TYPE *) check_malloc( totN * sizeof(TYPE)); \
myvals = (TYPE *) malloc( totN * sizeof(TYPE)); \
if (myvals == NULL) { \
*errnum = -1; \
return; \
} \
\
Py_BEGIN_ALLOW_THREADS \
\
@@ -118,6 +121,7 @@ void NAME(TYPE* in, TYPE* out, npy_intp* Nwin, npy_intp* Ns)
Py_END_ALLOW_THREADS \
\
free(myvals); \
*errnum = 0; \
}

View File

@@ -10,25 +10,11 @@ is granted under the SciPy License.
#include "npy_2_compat.h"
#include "_sigtools.h"
#include <setjmp.h>
#include <stdlib.h>
#define PYERR(message) {PyErr_SetString(PyExc_ValueError, message); goto fail;}
jmp_buf MALLOC_FAIL;
char *check_malloc(size_t size)
{
char *the_block = malloc(size);
if (the_block == NULL) {
printf("\nERROR: unable to allocate %zu bytes!\n", size);
longjmp(MALLOC_FAIL,-1);
}
return the_block;
}
/********************************************************
*
* Code taken from remez.c by Erik Kvaleberg which was
@@ -947,14 +933,14 @@ fail:
static char doc_median2d[] = "filt = _median2d(data, size)";
extern void f_medfilt2(float*,float*,npy_intp*,npy_intp*);
extern void d_medfilt2(double*,double*,npy_intp*,npy_intp*);
extern void b_medfilt2(unsigned char*,unsigned char*,npy_intp*,npy_intp*);
extern void f_medfilt2(float*,float*,npy_intp*,npy_intp*,int*);
extern void d_medfilt2(double*,double*,npy_intp*,npy_intp*,int*);
extern void b_medfilt2(unsigned char*,unsigned char*,npy_intp*,npy_intp*,int*);
static PyObject *_sigtools_median2d(PyObject *NPY_UNUSED(dummy), PyObject *args)
{
PyObject *image=NULL, *size=NULL;
int typenum;
int typenum, errnum=-2;
PyArrayObject *a_image=NULL, *a_size=NULL;
PyArrayObject *a_out=NULL;
npy_intp Nwin[2] = {3,3};
@@ -977,29 +963,30 @@ static PyObject *_sigtools_median2d(PyObject *NPY_UNUSED(dummy), PyObject *args)
a_out = (PyArrayObject *)PyArray_SimpleNew(2, PyArray_DIMS(a_image), typenum);
if (a_out == NULL) goto fail;
if (setjmp(MALLOC_FAIL)) {
PYERR("Memory allocation error.");
}
else {
switch (typenum) {
case NPY_UBYTE:
b_medfilt2((unsigned char *)PyArray_DATA(a_image),
switch (typenum) {
case NPY_UBYTE:
b_medfilt2((unsigned char *)PyArray_DATA(a_image),
(unsigned char *)PyArray_DATA(a_out),
Nwin, PyArray_DIMS(a_image));
break;
case NPY_FLOAT:
f_medfilt2((float *)PyArray_DATA(a_image),
Nwin, PyArray_DIMS(a_image),
&errnum);
break;
case NPY_FLOAT:
f_medfilt2((float *)PyArray_DATA(a_image),
(float *)PyArray_DATA(a_out), Nwin,
PyArray_DIMS(a_image));
break;
case NPY_DOUBLE:
d_medfilt2((double *)PyArray_DATA(a_image),
PyArray_DIMS(a_image),
&errnum);
break;
case NPY_DOUBLE:
d_medfilt2((double *)PyArray_DATA(a_image),
(double *)PyArray_DATA(a_out), Nwin,
PyArray_DIMS(a_image));
break;
default:
PYERR("2D median filter only supports uint8, float32, and float64.");
}
PyArray_DIMS(a_image),
&errnum);
break;
default:
PYERR("2D median filter only supports uint8, float32, and float64.");
}
if (errnum != 0) {
PYERR("ERROR: unable to allocate enough memory in _medfilt2d!\n");
}
Py_DECREF(a_image);

View File

@@ -1,34 +0,0 @@
import numpy as np
from numpy.typing import NDArray
FloatingArray = NDArray[np.float32] | NDArray[np.float64]
ComplexArray = NDArray[np.complex64] | NDArray[np.complex128]
FloatingComplexArray = FloatingArray | ComplexArray
def symiirorder1_ic(signal: FloatingComplexArray,
c0: float,
z1: float,
precision: float) -> FloatingComplexArray:
...
def symiirorder2_ic_fwd(signal: FloatingArray,
r: float,
omega: float,
precision: float) -> FloatingArray:
...
def symiirorder2_ic_bwd(signal: FloatingArray,
r: float,
omega: float,
precision: float) -> FloatingArray:
...
def sepfir2d(input: FloatingComplexArray,
hrow: FloatingComplexArray,
hcol: FloatingComplexArray) -> FloatingComplexArray:
...

View File

@@ -18,6 +18,166 @@ convert_strides(npy_intp* instrides,npy_intp* convstrides,int size,int N)
}
}
static char doc_cspline2d[] = "out = cspline2d(input, lambda=0.0, precision=-1.0)\n"
"\n"
" Coefficients for 2-D cubic (3rd order) B-spline.\n"
"\n"
" Return the third-order B-spline coefficients over a regularly spaced\n"
" input grid for the two-dimensional input image.\n"
"\n"
" Parameters\n"
" ----------\n"
" input : ndarray\n"
" The input signal.\n"
" lambda : float\n"
" Specifies the amount of smoothing in the transfer function.\n"
" precision : float\n"
" Specifies the precision for computing the infinite sum needed to apply mirror-\n"
" symmetric boundary conditions.\n"
"\n"
" Returns\n"
" -------\n"
" output : ndarray\n"
" The filtered signal.\n"
"\n"
" Examples\n"
" --------\n"
" Examples are given :ref:`in the tutorial <tutorial-signal-bsplines>`.\n"
"\n";
static PyObject *cspline2d(PyObject *NPY_UNUSED(dummy), PyObject *args)
{
PyObject *image=NULL;
PyArrayObject *a_image=NULL, *ck=NULL;
double lambda = 0.0;
double precision = -1.0;
int thetype, M, N, retval=0;
npy_intp outstrides[2], instrides[2];
if (!PyArg_ParseTuple(args, "O|dd", &image, &lambda, &precision)) return NULL;
thetype = PyArray_ObjectType(image, NPY_FLOAT);
thetype = PyArray_MIN(thetype, NPY_DOUBLE);
a_image = (PyArrayObject *)PyArray_FromObject(image, thetype, 2, 2);
if (a_image == NULL) goto fail;
ck = (PyArrayObject *)PyArray_SimpleNew(2, PyArray_DIMS(a_image), thetype);
if (ck == NULL) goto fail;
M = PyArray_DIMS(a_image)[0];
N = PyArray_DIMS(a_image)[1];
convert_strides(PyArray_STRIDES(a_image), instrides, PyArray_ITEMSIZE(a_image), 2);
outstrides[0] = N;
outstrides[1] = 1;
if (thetype == NPY_FLOAT) {
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-3;
retval = S_cubic_spline2D((float *)PyArray_DATA(a_image),
(float *)PyArray_DATA(ck),
M, N, lambda, instrides, outstrides, precision);
}
else if (thetype == NPY_DOUBLE) {
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-6;
retval = D_cubic_spline2D((double *)PyArray_DATA(a_image),
(double *)PyArray_DATA(ck),
M, N, lambda, instrides, outstrides, precision);
}
if (retval == -3) PYERR("Precision too high. Error did not converge.");
if (retval < 0) PYERR("Problem occurred inside routine");
Py_DECREF(a_image);
return PyArray_Return(ck);
fail:
Py_XDECREF(a_image);
Py_XDECREF(ck);
return NULL;
}
static char doc_qspline2d[] = "out = qspline2d(input, lambda=0.0, precision=-1.0)\n"
"\n"
" Coefficients for 2-D quadratic (2nd order) B-spline:\n"
"\n"
" Return the second-order B-spline coefficients over a regularly spaced\n"
" input grid for the two-dimensional input image.\n"
"\n"
" Parameters\n"
" ----------\n"
" input : ndarray\n"
" The input signal.\n"
" lambda : float\n"
" Specifies the amount of smoothing in the transfer function.\n"
" precision : float\n"
" Specifies the precision for computing the infinite sum needed to apply mirror-\n"
" symmetric boundary conditions.\n"
"\n"
" Returns\n"
" -------\n"
" output : ndarray\n"
" The filtered signal.\n"
"\n"
" Examples\n"
" --------\n"
" Examples are given :ref:`in the tutorial <tutorial-signal-bsplines>`.\n"
"\n";
static PyObject *qspline2d(PyObject *NPY_UNUSED(dummy), PyObject *args)
{
PyObject *image=NULL;
PyArrayObject *a_image=NULL, *ck=NULL;
double lambda = 0.0;
double precision = -1.0;
int thetype, M, N, retval=0;
npy_intp outstrides[2], instrides[2];
if (!PyArg_ParseTuple(args, "O|dd", &image, &lambda, &precision)) return NULL;
if (lambda != 0.0) PYERR("Smoothing spline not yet implemented.");
thetype = PyArray_ObjectType(image, NPY_FLOAT);
thetype = PyArray_MIN(thetype, NPY_DOUBLE);
a_image = (PyArrayObject *)PyArray_FromObject(image, thetype, 2, 2);
if (a_image == NULL) goto fail;
ck = (PyArrayObject *)PyArray_SimpleNew(2, PyArray_DIMS(a_image), thetype);
if (ck == NULL) goto fail;
M = PyArray_DIMS(a_image)[0];
N = PyArray_DIMS(a_image)[1];
convert_strides(PyArray_STRIDES(a_image), instrides, PyArray_ITEMSIZE(a_image), 2);
outstrides[0] = N;
outstrides[1] = 1;
if (thetype == NPY_FLOAT) {
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-3;
retval = S_quadratic_spline2D((float *)PyArray_DATA(a_image),
(float *)PyArray_DATA(ck),
M, N, lambda, instrides, outstrides, precision);
}
else if (thetype == NPY_DOUBLE) {
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-6;
retval = D_quadratic_spline2D((double *)PyArray_DATA(a_image),
(double *)PyArray_DATA(ck),
M, N, lambda, instrides, outstrides, precision);
}
if (retval == -3) PYERR("Precision too high. Error did not converge.");
if (retval < 0) PYERR("Problem occurred inside routine.");
Py_DECREF(a_image);
return PyArray_Return(ck);
fail:
Py_XDECREF(a_image);
Py_XDECREF(ck);
return NULL;
}
static char doc_FIRsepsym2d[] = "out = sepfir2d(input, hrow, hcol)\n"
"\n"
" Convolve with a 2-D separable FIR filter.\n"
@@ -138,109 +298,101 @@ static PyObject *FIRsepsym2d(PyObject *NPY_UNUSED(dummy), PyObject *args)
}
static char doc_IIRsymorder1_ic[] = "out = symiirorder1_ic(input, z1, precision=-1.0)\n"
static char doc_IIRsymorder1[] = "out = symiirorder1(input, c0, z1, precision=-1.0)\n"
"\n"
" Compute the (forward) mirror-symmetric boundary conditions for a smoothing\n"
" IIR filter that is composed of cascaded first-order sections.\n"
" Implement a smoothing IIR filter with mirror-symmetric boundary conditions\n"
" using a cascade of first-order sections. The second section uses a\n"
" reversed sequence. This implements a system with the following\n"
" transfer function and mirror-symmetric boundary conditions::\n"
"\n"
" The starting condition returned by this function is computed based on\n"
" the following transfer function::\n"
"\n"
" 1 \n"
" H(z) = ------------ \n"
" (1 - z1/z) \n"
" c0 \n"
" H(z) = --------------------- \n"
" (1-z1/z) (1 - z1 z) \n"
"\n"
" The resulting signal will have mirror symmetric boundary conditions as well.\n"
"\n"
" Parameters\n"
" ----------\n"
" input : ndarray\n"
" The input signal. If 2D, then it will find the initial conditions \n"
" for each of the elements on the last axis.\n"
" z1 : scalar\n"
" Parameter in the transfer function.\n"
" The input signal.\n"
" c0, z1 : scalar\n"
" Parameters in the transfer function.\n"
" precision :\n"
" Specifies the precision for calculating initial conditions\n"
" of the recursive filter based on mirror-symmetric input.\n"
"\n"
" Returns\n"
" -------\n"
" z_0 : ndarray\n"
" The mirror-symmetric initial condition for the forward IIR filter.";
" output : ndarray\n"
" The filtered signal.";
static PyObject *IIRsymorder1_ic(PyObject *NPY_UNUSED(dummy), PyObject *args)
static PyObject *IIRsymorder1(PyObject *NPY_UNUSED(dummy), PyObject *args)
{
PyObject *sig=NULL;
PyArrayObject *a_sig=NULL, *out=NULL;
npy_intp* in_size;
Py_complex z1;
Py_complex c0, z1;
double precision = -1.0;
int thetype, ret;
npy_intp M, N;
PyArray_Descr* dtype;
int thetype, N, ret;
npy_intp outstrides, instrides;
if (!PyArg_ParseTuple(args, "OD|d", &sig, &z1, &precision))
if (!PyArg_ParseTuple(args, "ODD|d", &sig, &c0, &z1, &precision))
return NULL;
thetype = PyArray_ObjectType(sig, NPY_FLOAT);
thetype = PyArray_MIN(thetype, NPY_CDOUBLE);
a_sig = (PyArrayObject *)PyArray_FromObject(sig, thetype, 1, 2);
a_sig = (PyArrayObject *)PyArray_FromObject(sig, thetype, 1, 1);
if (a_sig == NULL) goto fail;
in_size = PyArray_DIMS(a_sig);
M = 1;
N = in_size[0];
if(PyArray_NDIM(a_sig) > 1) {
M = in_size[0];
N = in_size[1];
}
const npy_intp sz[2] = {M, 1};
dtype = PyArray_DescrFromType(thetype);
out = (PyArrayObject *)PyArray_Empty(2, sz, dtype, 0);
out = (PyArrayObject *)PyArray_SimpleNew(1, PyArray_DIMS(a_sig), thetype);
if (out == NULL) goto fail;
N = PyArray_DIMS(a_sig)[0];
convert_strides(PyArray_STRIDES(a_sig), &instrides, PyArray_ITEMSIZE(a_sig), 1);
outstrides = 1;
switch (thetype) {
case NPY_FLOAT:
{
float rc0 = c0.real;
float rz1 = z1.real;
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-6;
ret = S_SYM_IIR1_initial(rz1, (float *)PyArray_DATA(a_sig),
(float *)PyArray_DATA(out), M, N,
(float )precision);
ret = S_IIR_forback1 (rc0, rz1, (float *)PyArray_DATA(a_sig),
(float *)PyArray_DATA(out), N,
instrides, outstrides, (float )precision);
}
break;
case NPY_DOUBLE:
{
double rc0 = c0.real;
double rz1 = z1.real;
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-11;
ret = D_SYM_IIR1_initial(rz1, (double *)PyArray_DATA(a_sig),
(double *)PyArray_DATA(out), M, N,
precision);
ret = D_IIR_forback1 (rc0, rz1, (double *)PyArray_DATA(a_sig),
(double *)PyArray_DATA(out), N,
instrides, outstrides, precision);
}
break;
#ifdef __GNUC__
case NPY_CFLOAT:
{
__complex__ float zc0 = c0.real + 1.0i*c0.imag;
__complex__ float zz1 = z1.real + 1.0i*z1.imag;
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-6;
ret = C_SYM_IIR1_initial (zz1, (__complex__ float *)PyArray_DATA(a_sig),
(__complex__ float *)PyArray_DATA(out), M, N,
(float )precision);
ret = C_IIR_forback1 (zc0, zz1, (__complex__ float *)PyArray_DATA(a_sig),
(__complex__ float *)PyArray_DATA(out), N,
instrides, outstrides, (float )precision);
}
break;
case NPY_CDOUBLE:
{
__complex__ double zc0 = c0.real + 1.0i*c0.imag;
__complex__ double zz1 = z1.real + 1.0i*z1.imag;
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-11;
ret = Z_SYM_IIR1_initial (zz1, (__complex__ double *)PyArray_DATA(a_sig),
(__complex__ double *)PyArray_DATA(out), M, N,
precision);
ret = Z_IIR_forback1 (zc0, zz1, (__complex__ double *)PyArray_DATA(a_sig),
(__complex__ double *)PyArray_DATA(out), N,
instrides, outstrides, precision);
}
break;
#endif
@@ -267,17 +419,15 @@ static PyObject *IIRsymorder1_ic(PyObject *NPY_UNUSED(dummy), PyObject *args)
}
static char doc_IIRsymorder2_ic_fwd[] = "out = symiirorder2_ic_fwd(input, r, omega, precision=-1.0)\n"
static char doc_IIRsymorder2[] = "out = symiirorder2(input, r, omega, precision=-1.0)\n"
"\n"
" Compute the (forward) mirror-symmetric boundary conditions for a smoothing\n"
" IIR filter that is composed of cascaded second-order sections.\n"
" Implement a smoothing IIR filter with mirror-symmetric boundary conditions\n"
" using a cascade of second-order sections. The second section uses a\n"
" reversed sequence. This implements the following transfer function::\n"
"\n"
" The starting condition returned by this function is computed based on\n"
" the following transfer function::\n"
"\n"
" cs\n"
" H(z) = -------------------\n"
" (1 - a2/z - a3/z^2)\n"
" cs^2\n"
" H(z) = ---------------------------------------\n"
" (1 - a2/z - a3/z^2) (1 - a2 z - a3 z^2 )\n"
"\n"
" where::\n"
"\n"
@@ -297,182 +447,55 @@ static char doc_IIRsymorder2_ic_fwd[] = "out = symiirorder2_ic_fwd(input, r, ome
"\n"
" Returns\n"
" -------\n"
" zi : ndarray\n"
" The mirror-symmetric initial condition for the forward IIR filter.";
" output : ndarray\n"
" The filtered signal.";
static PyObject *IIRsymorder2_ic_fwd(PyObject *NPY_UNUSED(dummy), PyObject *args)
static PyObject *IIRsymorder2(PyObject *NPY_UNUSED(dummy), PyObject *args)
{
PyObject *sig=NULL;
PyArrayObject *a_sig=NULL, *out=NULL;
npy_intp* in_size;
double r, omega;
double precision = -1.0;
int thetype, ret;
npy_intp N, M;
PyArray_Descr* dtype;
int thetype, N, ret;
npy_intp outstrides, instrides;
if (!PyArg_ParseTuple(args, "Odd|d", &sig, &r, &omega, &precision))
return NULL;
thetype = PyArray_ObjectType(sig, NPY_FLOAT);
thetype = PyArray_MIN(thetype, NPY_DOUBLE);
a_sig = (PyArrayObject *)PyArray_FromObject(sig, thetype, 1, 2);
a_sig = (PyArrayObject *)PyArray_FromObject(sig, thetype, 1, 1);
if (a_sig == NULL) goto fail;
in_size = PyArray_DIMS(a_sig);
M = 1;
N = in_size[0];
if(PyArray_NDIM(a_sig) > 1) {
M = in_size[0];
N = in_size[1];
}
dtype = PyArray_DescrFromType(thetype);
const npy_intp sz[2] = {M, 2};
out = (PyArrayObject *)PyArray_Empty(2, sz, dtype, 0);
out = (PyArrayObject *)PyArray_SimpleNew(1, PyArray_DIMS(a_sig), thetype);
if (out == NULL) goto fail;
N = PyArray_DIMS(a_sig)[0];
convert_strides(PyArray_STRIDES(a_sig), &instrides, PyArray_ITEMSIZE(a_sig), 1);
outstrides = 1;
switch (thetype) {
case NPY_FLOAT:
{
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-6;
ret = S_SYM_IIR2_initial_fwd(r, omega, (float *)PyArray_DATA(a_sig),
(float *)PyArray_DATA(out), M, N,
(float )precision);
}
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-6;
ret = S_IIR_forback2 (r, omega, (float *)PyArray_DATA(a_sig),
(float *)PyArray_DATA(out), N,
instrides, outstrides, precision);
break;
case NPY_DOUBLE:
{
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-11;
ret = D_SYM_IIR2_initial_fwd(r, omega, (double *)PyArray_DATA(a_sig),
(double *)PyArray_DATA(out), M, N,
precision);
}
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-11;
ret = D_IIR_forback2 (r, omega, (double *)PyArray_DATA(a_sig),
(double *)PyArray_DATA(out), N,
instrides, outstrides, precision);
break;
default:
PYERR("Incorrect type.");
}
if (ret == 0) {
Py_DECREF(a_sig);
return PyArray_Return(out);
}
if (ret == -1) PYERR("Could not allocate enough memory.");
if (ret == -2) PYERR("|z1| must be less than 1.0");
if (ret == -3) PYERR("Sum to find symmetric boundary conditions did not converge.");
PYERR("Unknown error.");
fail:
Py_XDECREF(a_sig);
Py_XDECREF(out);
return NULL;
}
static char doc_IIRsymorder2_ic_bwd[] = "out = symiirorder2_ic_bwd(input, r, omega, precision=-1.0)\n"
"\n"
" Compute the (backward) mirror-symmetric boundary conditions for a smoothing\n"
" IIR filter that is composed of cascaded second-order sections.\n"
"\n"
" The starting condition returned by this function is computed based on\n"
" the following transfer function::\n"
"\n"
" cs\n"
" H(z) = -------------------\n"
" (1 - a2 z - a3 z^2)\n"
"\n"
" where::\n"
"\n"
" a2 = (2 r cos omega)\n"
" a3 = - r^2\n"
" cs = 1 - 2 r cos omega + r^2\n"
"\n"
" Parameters\n"
" ----------\n"
" input : ndarray\n"
" The input signal.\n"
" r, omega : float\n"
" Parameters in the transfer function.\n"
" precision : float\n"
" Specifies the precision for calculating initial conditions\n"
" of the recursive filter based on mirror-symmetric input.\n"
"\n"
" Returns\n"
" -------\n"
" zi : ndarray\n"
" The mirror-symmetric initial condition for the forward IIR filter.";
static PyObject *IIRsymorder2_ic_bwd(PyObject *NPY_UNUSED(dummy), PyObject *args)
{
PyObject *sig=NULL;
PyArrayObject *a_sig=NULL, *out=NULL;
npy_intp* in_size;
double r, omega;
double precision = -1.0;
int thetype, ret;
npy_intp M, N;
PyArray_Descr* dtype;
if (!PyArg_ParseTuple(args, "Odd|d", &sig, &r, &omega, &precision))
return NULL;
thetype = PyArray_ObjectType(sig, NPY_FLOAT);
thetype = PyArray_MIN(thetype, NPY_DOUBLE);
a_sig = (PyArrayObject *)PyArray_FromObject(sig, thetype, 1, 2);
if (a_sig == NULL) goto fail;
in_size = PyArray_DIMS(a_sig);
M = 1;
N = in_size[0];
if(PyArray_NDIM(a_sig) > 1) {
M = in_size[0];
N = in_size[1];
}
dtype = PyArray_DescrFromType(thetype);
const npy_intp sz[2] = {M, 2};
out = (PyArrayObject *)PyArray_Zeros(2, sz, dtype, 0);
if (out == NULL) goto fail;
switch (thetype) {
case NPY_FLOAT:
{
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-6;
ret = S_SYM_IIR2_initial_bwd(r, omega, (float *)PyArray_DATA(a_sig),
(float *)PyArray_DATA(out), M, N,
(float )precision);
}
break;
case NPY_DOUBLE:
{
if ((precision <= 0.0) || (precision > 1.0)) precision = 1e-11;
ret = D_SYM_IIR2_initial_bwd(r, omega, (double *)PyArray_DATA(a_sig),
(double *)PyArray_DATA(out), M, N,
precision);
}
break;
default:
PYERR("Incorrect type.");
}
if (ret == 0) {
Py_DECREF(a_sig);
return PyArray_Return(out);
}
if (ret == -1) PYERR("Could not allocate enough memory.");
if (ret == -2) PYERR("|z1| must be less than 1.0");
if (ret == -3) PYERR("Sum to find symmetric boundary conditions did not converge.");
PYERR("Unknown error.");
if (ret < 0) PYERR("Problem occurred inside routine.");
Py_DECREF(a_sig);
return PyArray_Return(out);
fail:
Py_XDECREF(a_sig);
@@ -483,10 +506,11 @@ static PyObject *IIRsymorder2_ic_bwd(PyObject *NPY_UNUSED(dummy), PyObject *args
static struct PyMethodDef toolbox_module_methods[] = {
{"cspline2d", cspline2d, METH_VARARGS, doc_cspline2d},
{"qspline2d", qspline2d, METH_VARARGS, doc_qspline2d},
{"sepfir2d", FIRsepsym2d, METH_VARARGS, doc_FIRsepsym2d},
{"symiirorder1_ic", IIRsymorder1_ic, METH_VARARGS, doc_IIRsymorder1_ic},
{"symiirorder2_ic_fwd", IIRsymorder2_ic_fwd, METH_VARARGS, doc_IIRsymorder2_ic_fwd},
{"symiirorder2_ic_bwd", IIRsymorder2_ic_bwd, METH_VARARGS,doc_IIRsymorder2_ic_bwd },
{"symiirorder1", IIRsymorder1, METH_VARARGS, doc_IIRsymorder1},
{"symiirorder2", IIRsymorder2, METH_VARARGS, doc_IIRsymorder2},
{NULL, NULL, 0, NULL} /* sentinel */
};


@@ -7,19 +7,21 @@
static void convert_strides(npy_intp*,npy_intp*,int,int);
extern int S_SYM_IIR1_initial(float, float*, float*, int, int, float);
extern int S_SYM_IIR2_initial_fwd(double, double, float*, float*, int, int, float);
extern int S_SYM_IIR2_initial_bwd(double, double, float*, float*, int, int, float);
extern int S_cubic_spline2D(float*,float*,int,int,double,npy_intp*,npy_intp*,float);
extern int S_quadratic_spline2D(float*,float*,int,int,double,npy_intp*,npy_intp*,float);
extern int S_IIR_forback1(float,float,float*,float*,int,int,int,float);
extern int S_IIR_forback2(double,double,float*,float*,int,int,int,float);
extern int S_separable_2Dconvolve_mirror(float*,float*,int,int,float*,float*,int,int,npy_intp*,npy_intp*);
extern int D_SYM_IIR1_initial(double, double*, double*, int, int, double);
extern int D_SYM_IIR2_initial_fwd(double, double, double*, double*, int, int, double);
extern int D_SYM_IIR2_initial_bwd(double, double, double*, double*, int, int, double);
extern int D_cubic_spline2D(double*,double*,int,int,double,npy_intp*,npy_intp*,double);
extern int D_quadratic_spline2D(double*,double*,int,int,double,npy_intp*,npy_intp*,double);
extern int D_IIR_forback1(double,double,double*,double*,int,int,int,double);
extern int D_IIR_forback2(double,double,double*,double*,int,int,int,double);
extern int D_separable_2Dconvolve_mirror(double*,double*,int,int,double*,double*,int,int,npy_intp*,npy_intp*);
#ifdef __GNUC__
extern int C_SYM_IIR1_initial(__complex__ float, __complex__ float*, __complex__ float*, int, int, float);
extern int C_IIR_forback1(__complex__ float,__complex__ float,__complex__ float*,__complex__ float*,int,int,int,float);
extern int C_separable_2Dconvolve_mirror(__complex__ float*,__complex__ float*,int,int,__complex__ float*,__complex__ float*,int,int,npy_intp*,npy_intp*);
extern int Z_SYM_IIR1_initial(__complex__ double, __complex__ double*, __complex__ double*, int, int, double);
extern int Z_IIR_forback1(__complex__ double,__complex__ double,__complex__ double*,__complex__ double*,int,int,int,double);
extern int Z_separable_2Dconvolve_mirror(__complex__ double*,__complex__ double*,int,int,__complex__ double*,__complex__ double*,int,int,npy_intp*,npy_intp*);
#endif


@ -1,162 +0,0 @@
import numpy as np
from ._arraytools import axis_slice, axis_reverse
from ._signaltools import lfilter, sosfilt
from ._spline import symiirorder1_ic, symiirorder2_ic_fwd, symiirorder2_ic_bwd
__all__ = ['symiirorder1', 'symiirorder2']
def symiirorder1(signal, c0, z1, precision=-1.0):
"""
Implement a smoothing IIR filter with mirror-symmetric boundary conditions
using a cascade of first-order sections. The second section uses a
reversed sequence. This implements a system with the following
transfer function and mirror-symmetric boundary conditions::
c0
H(z) = ---------------------
(1-z1/z) (1 - z1 z)
The resulting signal will have mirror symmetric boundary conditions
as well.
Parameters
----------
input : ndarray
The input signal. If 2D, then the filter will be applied in a batched
fashion across the last axis.
c0, z1 : scalar
Parameters in the transfer function.
precision :
Specifies the precision for calculating initial conditions
of the recursive filter based on mirror-symmetric input.
Returns
-------
output : ndarray
The filtered signal.
"""
if np.abs(z1) >= 1:
raise ValueError('|z1| must be less than 1.0')
if signal.ndim > 2:
raise ValueError('Input must be 1D or 2D')
squeeze_dim = False
if signal.ndim == 1:
signal = signal[None, :]
squeeze_dim = True
if np.issubdtype(signal.dtype, np.integer):
signal = signal.astype(np.float64)
y0 = symiirorder1_ic(signal, z1, precision)
# Apply first the system 1 / (1 - z1 * z^-1)
b = np.ones(1, dtype=signal.dtype)
a = np.r_[1, -z1]
a = a.astype(signal.dtype)
# Compute the initial state for lfilter.
zii = y0 * z1
y1, _ = lfilter(b, a, axis_slice(signal, 1), zi=zii)
y1 = np.c_[y0, y1]
# Compute backward symmetric condition and apply the system
# c0 / (1 - z1 * z)
b = np.asarray([c0], dtype=signal.dtype)
out_last = -c0 / (z1 - 1.0) * axis_slice(y1, -1)
# Compute the initial state for lfilter.
zii = out_last * z1
out, _ = lfilter(b, a, axis_slice(y1, -2, step=-1), zi=zii)
out = np.c_[axis_reverse(out), out_last]
if squeeze_dim:
out = out[0]
return out
def symiirorder2(input, r, omega, precision=-1.0):
"""
Implement a smoothing IIR filter with mirror-symmetric boundary conditions
using a cascade of second-order sections. The second section uses a
reversed sequence. This implements the following transfer function::
cs^2
H(z) = ---------------------------------------
(1 - a2/z - a3/z^2) (1 - a2 z - a3 z^2 )
where::
a2 = 2 * r * cos(omega)
a3 = - r ** 2
cs = 1 - 2 * r * cos(omega) + r ** 2
Parameters
----------
input : ndarray
The input signal.
r, omega : float
Parameters in the transfer function.
precision : float
Specifies the precision for calculating initial conditions
of the recursive filter based on mirror-symmetric input.
Returns
-------
output : ndarray
The filtered signal.
"""
if r >= 1.0:
raise ValueError('r must be less than 1.0')
if input.ndim > 2:
raise ValueError('Input must be 1D or 2D')
if not input.flags.c_contiguous:
input = input.copy()
squeeze_dim = False
if input.ndim == 1:
input = input[None, :]
squeeze_dim = True
if np.issubdtype(input.dtype, np.integer):
input = input.astype(np.float64)
rsq = r * r
a2 = 2 * r * np.cos(omega)
a3 = -rsq
cs = np.atleast_1d(1 - 2 * r * np.cos(omega) + rsq)
sos = np.atleast_2d(np.r_[cs, 0, 0, 1, -a2, -a3]).astype(input.dtype)
# Find the starting (forward) conditions.
ic_fwd = symiirorder2_ic_fwd(input, r, omega, precision)
# Apply first the system cs / (1 - a2 * z^-1 - a3 * z^-2)
# Compute the initial conditions in the form expected by sosfilt
# coef = np.asarray([[a3, a2], [0, a3]], dtype=input.dtype)
coef = np.r_[a3, a2, 0, a3].reshape(2, 2).astype(input.dtype)
zi = np.matmul(coef, ic_fwd[:, :, None])[:, :, 0]
y_fwd, _ = sosfilt(sos, axis_slice(input, 2), zi=zi[None])
y_fwd = np.c_[ic_fwd, y_fwd]
# Then compute the symmetric backward starting conditions
ic_bwd = symiirorder2_ic_bwd(input, r, omega, precision)
# Apply the system cs / (1 - a2 * z^1 - a3 * z^2)
# Compute the initial conditions in the form expected by sosfilt
zi = np.matmul(coef, ic_bwd[:, :, None])[:, :, 0]
y, _ = sosfilt(sos, axis_slice(y_fwd, -3, step=-1), zi=zi[None])
out = np.c_[axis_reverse(y), axis_reverse(ic_bwd)]
if squeeze_dim:
out = out[0]
return out


@@ -94,8 +94,6 @@ py3.install_sources([
'_short_time_fft.py',
'_signaltools.py',
'_spectral_py.py',
'_spline.pyi',
'_splines.py',
'_upfirdn.py',
'_waveforms.py',
'_wavelets.py',


@@ -6,7 +6,8 @@ import warnings
from . import _spline
__all__ = ['sepfir2d'] # noqa: F822
__all__ = [ # noqa: F822
'cspline2d', 'qspline2d', 'sepfir2d', 'symiirorder1', 'symiirorder2']
def __dir__():


@@ -17,7 +17,6 @@ py3.install_sources([
'test_short_time_fft.py',
'test_signaltools.py',
'test_spectral.py',
'test_splines.py',
'test_upfirdn.py',
'test_waveforms.py',
'test_wavelets.py',


@@ -1,312 +0,0 @@
# pylint: disable=missing-docstring
import numpy as np
from numpy import array
from numpy.testing import assert_allclose, assert_raises
import pytest
from scipy.signal._spline import (
symiirorder1_ic, symiirorder2_ic_fwd, symiirorder2_ic_bwd)
from scipy.signal._splines import symiirorder1, symiirorder2
def _compute_symiirorder2_bwd_hs(k, cs, rsq, omega):
cssq = cs * cs
k = np.abs(k)
rsupk = np.power(rsq, k / 2.0)
c0 = (cssq * (1.0 + rsq) / (1.0 - rsq) /
(1 - 2 * rsq * np.cos(2 * omega) + rsq * rsq))
gamma = (1.0 - rsq) / (1.0 + rsq) / np.tan(omega)
return c0 * rsupk * (np.cos(omega * k) + gamma * np.sin(omega * k))
class TestSymIIR:
@pytest.mark.parametrize(
'dtype', [np.float32, np.float64, np.complex64, np.complex128])
@pytest.mark.parametrize('precision', [-1.0, 0.7, 0.5, 0.25, 0.0075])
def test_symiir1_ic(self, dtype, precision):
c_precision = precision
if precision <= 0.0 or precision > 1.0:
if dtype in {np.float32, np.complex64}:
c_precision = 1e-6
else:
c_precision = 1e-11
# Symmetrical initial conditions for a IIR filter of order 1 are:
# x[0] + z1 * \sum{k = 0}^{n - 1} x[k] * z1^k
# Check the initial condition for a low-pass filter
# with coefficient b = 0.85 on a step signal. The initial condition is
# a geometric series: 1 + b * \sum_{k = 0}^{n - 1} u[k] b^k.
# Finding the initial condition corresponds to
# 1. Computing the index n such that b**n < precision, which
# corresponds to ceil(log(precision) / log(b))
# 2. Computing the geometric series until n, this can be computed
# using the partial sum formula: (1 - b**n) / (1 - b)
# This holds due to the input being a step signal.
b = 0.85
n_exp = int(np.ceil(np.log(c_precision) / np.log(b)))
expected = np.asarray([[(1 - b ** n_exp) / (1 - b)]], dtype=dtype)
expected = 1 + b * expected
# Create a step signal of size n + 1
x = np.ones(n_exp + 1, dtype=dtype)
assert_allclose(symiirorder1_ic(x, b, precision), expected,
atol=2e-6, rtol=2e-7)
# Check the conditions for a exponential decreasing signal with base 2.
# Same conditions hold, as the product of 0.5^n * 0.85^n is
# still a geometric series
b_d = array(b, dtype=dtype)
expected = np.asarray(
[[(1 - (0.5 * b_d) ** n_exp) / (1 - (0.5 * b_d))]], dtype=dtype)
expected = 1 + b_d * expected
# Create an exponential decreasing signal of size n + 1
x = 2 ** -np.arange(n_exp + 1, dtype=dtype)
assert_allclose(symiirorder1_ic(x, b, precision), expected,
atol=2e-6, rtol=2e-7)
def test_symiir1_ic_fails(self):
# Test that symiirorder1_ic fails whenever \sum_{n = 1}^{n} b^n > eps
b = 0.85
# Create a step signal of size 100
x = np.ones(100, dtype=np.float64)
# Compute the closed form for the geometrical series
precision = 1 / (1 - b)
assert_raises(ValueError, symiirorder1_ic, x, b, precision)
# Test that symiirorder1_ic fails when |z1| >= 1
assert_raises(ValueError, symiirorder1_ic, x, 1.0, -1)
assert_raises(ValueError, symiirorder1_ic, x, 2.0, -1)
@pytest.mark.parametrize(
'dtype', [np.float32, np.float64, np.complex64, np.complex128])
@pytest.mark.parametrize('precision', [-1.0, 0.7, 0.5, 0.25, 0.0075])
def test_symiir1(self, dtype, precision):
c_precision = precision
if precision <= 0.0 or precision > 1.0:
if dtype in {np.float32, np.complex64}:
c_precision = 1e-6
else:
c_precision = 1e-11
# Test for a low-pass filter with c0 = 0.15 and z1 = 0.85
# using an unit step over 200 samples.
c0 = 0.15
z1 = 0.85
n = 200
signal = np.ones(n, dtype=dtype)
# Find the initial condition. See test_symiir1_ic for a detailed
# explanation
n_exp = int(np.ceil(np.log(c_precision) / np.log(z1)))
initial = np.asarray((1 - z1 ** n_exp) / (1 - z1), dtype=dtype)
initial = 1 + z1 * initial
# Forward pass
# The transfer function for the system 1 / (1 - z1 * z^-1) when
# applied to an unit step with initial conditions y0 is
# 1 / (1 - z1 * z^-1) * (z^-1 / (1 - z^-1) + y0)
# Solving the inverse Z-transform for the given expression yields:
# y[n] = y0 * z1**n * u[n] +
# -z1 / (1 - z1) * z1**(k - 1) * u[k - 1] +
# 1 / (1 - z1) * u[k - 1]
# d is the Kronecker delta function, and u is the unit step
# y0 * z1**n * u[n]
pos = np.arange(n, dtype=dtype)
comp1 = initial * z1**pos
# -z1 / (1 - z1) * z1**(k - 1) * u[k - 1]
comp2 = np.zeros(n, dtype=dtype)
comp2[1:] = -z1 / (1 - z1) * z1**pos[:-1]
# 1 / (1 - z1) * u[k - 1]
comp3 = np.zeros(n, dtype=dtype)
comp3[1:] = 1 / (1 - z1)
expected_fwd = comp1 + comp2 + comp3
# Reverse condition
sym_cond = -c0 / (z1 - 1.0) * expected_fwd[-1]
# Backward pass
# The transfer function for the forward result is equivalent to
# the forward system times c0 / (1 - z1 * z).
# Computing a closed form for the complete expression is difficult
# The result will be computed iteratively from the difference equation
exp_out = np.zeros(n, dtype=dtype)
exp_out[0] = sym_cond
for i in range(1, n):
exp_out[i] = c0 * expected_fwd[n - 1 - i] + z1 * exp_out[i - 1]
exp_out = exp_out[::-1]
out = symiirorder1(signal, c0, z1, precision)
assert_allclose(out, exp_out, atol=4e-6, rtol=6e-7)
@pytest.mark.parametrize(
'dtype', [np.float32, np.float64])
@pytest.mark.parametrize('precision', [-1.0, 0.7, 0.5, 0.25, 0.0075])
def test_symiir2_initial_fwd(self, dtype, precision):
c_precision = precision
if precision <= 0.0 or precision > 1.0:
if dtype in {np.float32, np.complex64}:
c_precision = 1e-6
else:
c_precision = 1e-11
# Compute the initial conditions for a order-two symmetrical low-pass
# filter with r = 0.5 and omega = pi / 3 for an unit step input.
r = np.asarray(0.5, dtype=dtype)
omega = np.asarray(np.pi / 3.0, dtype=dtype)
cs = 1 - 2 * r * np.cos(omega) + r**2
# The index n for the initial condition is bound from 0 to the
# first position where sin(omega * (n + 2)) = 0 => omega * (n + 2) = pi
# For omega = pi / 3, the maximum initial condition occurs when
# sqrt(3) / 2 * r**n < precision.
# => n = log(2 * sqrt(3) / 3 * precision) / log(r)
ub = np.ceil(np.log(c_precision / np.sin(omega)) / np.log(c_precision))
lb = np.ceil(np.pi / omega) - 2
n_exp = min(ub, lb)
# The forward initial condition for a filter of order two is:
# \frac{cs}{\sin(\omega)} \sum_{n = 0}^{N - 1} {
# r^(n + 1) \sin{\omega(n + 2)}} + cs
# The closed expression for this sum is:
# s[n] = 2 * r * np.cos(omega) -
# r**2 - r**(n + 2) * np.sin(omega * (n + 3)) / np.sin(omega) +
# r**(n + 3) * np.sin(omega * (n + 2)) / np.sin(omega) + cs
fwd_initial_1 = (
cs +
2 * r * np.cos(omega) -
r**2 -
r**(n_exp + 2) * np.sin(omega * (n_exp + 3)) / np.sin(omega) +
r**(n_exp + 3) * np.sin(omega * (n_exp + 2)) / np.sin(omega))
# The second initial condition is given by
# s[n] = 1 / np.sin(omega) * (
# r**2 * np.sin(3 * omega) -
# r**3 * np.sin(2 * omega) -
# r**(n + 3) * np.sin(omega * (n + 4)) +
# r**(n + 4) * np.sin(omega * (n + 3)))
ub = np.ceil(np.log(c_precision / np.sin(omega)) / np.log(c_precision))
lb = np.ceil(np.pi / omega) - 3
n_exp = min(ub, lb)
fwd_initial_2 = (
cs + cs * 2 * r * np.cos(omega) +
(r**2 * np.sin(3 * omega) -
r**3 * np.sin(2 * omega) -
r**(n_exp + 3) * np.sin(omega * (n_exp + 4)) +
r**(n_exp + 4) * np.sin(omega * (n_exp + 3))) / np.sin(omega))
expected = np.r_[fwd_initial_1, fwd_initial_2][None, :]
n = 100
signal = np.ones(n, dtype=dtype)
out = symiirorder2_ic_fwd(signal, r, omega, precision)
assert_allclose(out, expected, atol=4e-6, rtol=6e-7)
@pytest.mark.parametrize(
'dtype', [np.float32, np.float64])
@pytest.mark.parametrize('precision', [-1.0, 0.7, 0.5, 0.25, 0.0075])
def test_symiir2_initial_bwd(self, dtype, precision):
c_precision = precision
if precision <= 0.0 or precision > 1.0:
if dtype in {np.float32, np.complex64}:
c_precision = 1e-6
else:
c_precision = 1e-11
r = np.asarray(0.5, dtype=dtype)
omega = np.asarray(np.pi / 3.0, dtype=dtype)
cs = 1 - 2 * r * np.cos(omega) + r * r
a2 = 2 * r * np.cos(omega)
a3 = -r * r
n = 100
signal = np.ones(n, dtype=dtype)
# Compute initial forward conditions
ic = symiirorder2_ic_fwd(signal, r, omega, precision)
out = np.zeros(n + 2, dtype=dtype)
out[:2] = ic[0]
# Apply the forward system cs / (1 - a2 * z^-1 - a3 * z^-2))
for i in range(2, n + 2):
out[i] = cs * signal[i - 2] + a2 * out[i - 1] + a3 * out[i - 2]
# Find the backward initial conditions
ic2 = np.zeros(2, dtype=dtype)
idx = np.arange(n)
diff = (_compute_symiirorder2_bwd_hs(idx, cs, r * r, omega) +
_compute_symiirorder2_bwd_hs(idx + 1, cs, r * r, omega))
ic2_0_all = np.cumsum(diff * out[:1:-1])
pos = np.where(diff ** 2 < c_precision)[0]
ic2[0] = ic2_0_all[pos[0]]
diff = (_compute_symiirorder2_bwd_hs(idx - 1, cs, r * r, omega) +
_compute_symiirorder2_bwd_hs(idx + 2, cs, r * r, omega))
ic2_1_all = np.cumsum(diff * out[:1:-1])
pos = np.where(diff ** 2 < c_precision)[0]
ic2[1] = ic2_1_all[pos[0]]
out_ic = symiirorder2_ic_bwd(out, r, omega, precision)[0]
assert_allclose(out_ic, ic2, atol=4e-6, rtol=6e-7)
@pytest.mark.parametrize(
'dtype', [np.float32, np.float64])
@pytest.mark.parametrize('precision', [-1.0, 0.7, 0.5, 0.25, 0.0075])
def test_symiir2(self, dtype, precision):
r = np.asarray(0.5, dtype=dtype)
omega = np.asarray(np.pi / 3.0, dtype=dtype)
cs = 1 - 2 * r * np.cos(omega) + r * r
a2 = 2 * r * np.cos(omega)
a3 = -r * r
n = 100
signal = np.ones(n, dtype=dtype)
# Compute initial forward conditions
ic = symiirorder2_ic_fwd(signal, r, omega, precision)
out1 = np.zeros(n + 2, dtype=dtype)
out1[:2] = ic[0]
# Apply the forward system cs / (1 - a2 * z^-1 - a3 * z^-2))
for i in range(2, n + 2):
out1[i] = cs * signal[i - 2] + a2 * out1[i - 1] + a3 * out1[i - 2]
# Find the backward initial conditions
ic2 = symiirorder2_ic_bwd(out1, r, omega, precision)[0]
# Apply the system cs / (1 - a2 * z - a3 * z^2)) in backwards
exp = np.empty(n, dtype=dtype)
exp[-2:] = ic2[::-1]
for i in range(n - 3, -1, -1):
exp[i] = cs * out1[i] + a2 * exp[i + 1] + a3 * exp[i + 2]
out = symiirorder2(signal, r, omega, precision)
assert_allclose(out, exp, atol=4e-6, rtol=6e-7)
def test_symiir1_integer_input(self):
s = np.where(np.arange(100) % 2, -1, 1)
expected = symiirorder1(s.astype(float), 0.5, 0.5)
out = symiirorder1(s, 0.5, 0.5)
assert_allclose(out, expected)
def test_symiir2_integer_input(self):
s = np.where(np.arange(100) % 2, -1, 1)
expected = symiirorder2(s.astype(float), 0.5, np.pi / 3.0)
out = symiirorder2(s, 0.5, np.pi / 3.0)
assert_allclose(out, expected)


@@ -487,10 +487,14 @@ def kron(A, B, format=None):
coo_sparse = coo_matrix
B = coo_sparse(B)
if B.ndim != 2:
raise ValueError(f"kron requires 2D input arrays. `B` is {B.ndim}D.")
# B is fairly dense, use BSR
if (format is None or format == "bsr") and 2*B.nnz >= B.shape[0] * B.shape[1]:
A = csr_sparse(A,copy=True)
if A.ndim != 2:
raise ValueError(f"kron requires 2D input arrays. `A` is {A.ndim}D.")
output_shape = (A.shape[0]*B.shape[0], A.shape[1]*B.shape[1])
if A.nnz == 0 or B.nnz == 0:
@@ -505,6 +509,8 @@ def kron(A, B, format=None):
else:
# use COO
A = coo_sparse(A)
if A.ndim != 2:
raise ValueError(f"kron requires 2D input arrays. `A` is {A.ndim}D.")
output_shape = (A.shape[0]*B.shape[0], A.shape[1]*B.shape[1])
if A.nnz == 0 or B.nnz == 0:
@@ -570,9 +576,12 @@ def kronsum(A, B, format=None):
A = coo_sparse(A)
B = coo_sparse(B)
if A.ndim != 2:
raise ValueError(f"kronsum requires 2D inputs. `A` is {A.ndim}D.")
if B.ndim != 2:
raise ValueError(f"kronsum requires 2D inputs. `B` is {B.ndim}D.")
if A.shape[0] != A.shape[1]:
raise ValueError('A is not square')
if B.shape[0] != B.shape[1]:
raise ValueError('B is not square')
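The new 2-D checks in `kron` and `kronsum` only reject non-2-D inputs; the happy path is unchanged. A minimal cross-check against NumPy's dense Kronecker product (the small matrices here are arbitrary test values):

```python
import numpy as np
from scipy.sparse import coo_array, kron

A = coo_array(np.array([[0, 2], [5, 0]]))
B = coo_array(np.array([[1, 0], [0, 1]]))
K = kron(A, B)

# Dense cross-check: the sparse result must match np.kron exactly.
assert np.array_equal(K.toarray(), np.kron(A.toarray(), B.toarray()))
```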
@@ -593,23 +602,23 @@ def _compressed_sparse_stack(blocks, axis, return_spmatrix):
"""
other_axis = 1 if axis == 0 else 0
data = np.concatenate([b.data for b in blocks])
constant_dim = blocks[0].shape[other_axis]
constant_dim = blocks[0]._shape_as_2d[other_axis]
idx_dtype = get_index_dtype(arrays=[b.indptr for b in blocks],
maxval=max(data.size, constant_dim))
indices = np.empty(data.size, dtype=idx_dtype)
indptr = np.empty(sum(b.shape[axis] for b in blocks) + 1, dtype=idx_dtype)
indptr = np.empty(sum(b._shape_as_2d[axis] for b in blocks) + 1, dtype=idx_dtype)
last_indptr = idx_dtype(0)
sum_dim = 0
sum_indices = 0
for b in blocks:
if b.shape[other_axis] != constant_dim:
if b._shape_as_2d[other_axis] != constant_dim:
raise ValueError(f'incompatible dimensions for axis {other_axis}')
indices[sum_indices:sum_indices+b.indices.size] = b.indices
sum_indices += b.indices.size
idxs = slice(sum_dim, sum_dim + b.shape[axis])
idxs = slice(sum_dim, sum_dim + b._shape_as_2d[axis])
indptr[idxs] = b.indptr[:-1]
indptr[idxs] += last_indptr
sum_dim += b.shape[axis]
sum_dim += b._shape_as_2d[axis]
last_indptr += b.indptr[-1]
indptr[-1] = last_indptr
# TODO remove this if-structure when sparse matrices removed
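The switch from `shape` to `_shape_as_2d` in the stacking helpers is what lets 1-D sparse arrays participate in stacking. A short sketch, assuming a SciPy version that contains this change (1-D `csr_array` support):

```python
import numpy as np
from scipy.sparse import csr_array, vstack

# Two 1-D sparse arrays; stacking them depends on the _shape_as_2d
# handling added here, which treats each as a 1 x N row block.
a = csr_array(np.array([1, 0, 2]))
b = csr_array(np.array([0, 3, 0]))
stacked = vstack([a, b])
```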
@@ -643,7 +652,7 @@ def _stack_along_minor_axis(blocks, axis):
# check for incompatible dimensions
other_axis = 1 if axis == 0 else 0
other_axis_dims = {b.shape[other_axis] for b in blocks}
other_axis_dims = {b._shape_as_2d[other_axis] for b in blocks}
if len(other_axis_dims) > 1:
raise ValueError(f'Mismatching dimensions along axis {other_axis}: '
f'{other_axis_dims}')
@@ -659,10 +668,10 @@ def _stack_along_minor_axis(blocks, axis):
# - The max value in indptr is the number of non-zero entries. This is
# exceedingly unlikely to require int64, but is checked out of an
# abundance of caution.
sum_dim = sum(b.shape[axis] for b in blocks)
sum_dim = sum(b._shape_as_2d[axis] for b in blocks)
nnz = sum(len(b.indices) for b in blocks)
idx_dtype = get_index_dtype(maxval=max(sum_dim - 1, nnz))
stack_dim_cat = np.array([b.shape[axis] for b in blocks], dtype=idx_dtype)
stack_dim_cat = np.array([b._shape_as_2d[axis] for b in blocks], dtype=idx_dtype)
if data_cat.size > 0:
indptr_cat = np.concatenate(indptr_list).astype(idx_dtype)
indices_cat = (np.concatenate([b.indices for b in blocks])
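`get_index_dtype` above picks the narrowest safe integer type for the concatenated index arrays. A minimal sketch of the idea (`pick_index_dtype` is a hypothetical stand-in, not SciPy's actual helper):

```python
import numpy as np

def pick_index_dtype(maxval):
    # Use int32 for compressed indices unless the largest value that
    # must be represented could overflow it; then fall back to int64.
    return np.int64 if maxval > np.iinfo(np.int32).max else np.int32
```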
@@ -940,19 +949,19 @@ def _block(blocks, format, dtype, return_spmatrix=False):
block_mask[i,j] = True
if brow_lengths[i] == 0:
brow_lengths[i] = A.shape[0]
elif brow_lengths[i] != A.shape[0]:
brow_lengths[i] = A._shape_as_2d[0]
elif brow_lengths[i] != A._shape_as_2d[0]:
msg = (f'blocks[{i},:] has incompatible row dimensions. '
f'Got blocks[{i},{j}].shape[0] == {A.shape[0]}, '
f'Got blocks[{i},{j}].shape[0] == {A._shape_as_2d[0]}, '
f'expected {brow_lengths[i]}.')
raise ValueError(msg)
if bcol_lengths[j] == 0:
bcol_lengths[j] = A.shape[1]
elif bcol_lengths[j] != A.shape[1]:
bcol_lengths[j] = A._shape_as_2d[1]
elif bcol_lengths[j] != A._shape_as_2d[1]:
msg = (f'blocks[:,{j}] has incompatible column '
f'dimensions. '
f'Got blocks[{i},{j}].shape[1] == {A.shape[1]}, '
f'Got blocks[{i},{j}].shape[1] == {A._shape_as_2d[1]}, '
f'expected {bcol_lengths[j]}.')
raise ValueError(msg)


@@ -59,11 +59,14 @@ def reverse_cuthill_mckee(graph, symmetric_mode=False):
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(np.int32(0), np.int32(1)) 1
(np.int32(0), np.int32(2)) 2
(np.int32(1), np.int32(3)) 1
(np.int32(2), np.int32(0)) 2
(np.int32(2), np.int32(3)) 3
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 5 stored elements and shape (4, 4)>
Coords Values
(0, 1) 1
(0, 2) 2
(1, 3) 1
(2, 0) 2
(2, 3) 3
>>> reverse_cuthill_mckee(graph)
array([3, 2, 1, 0], dtype=int32)
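Reverse Cuthill-McKee is essentially a breadth-first traversal that visits neighbours in order of increasing degree and then reverses the visit order. A pure-Python sketch reproducing the ordering in the example above (assuming the symmetrized graph; `rcm_order` is an illustrative name, and SciPy's compiled version also does pseudo-peripheral start-node selection):

```python
from collections import deque

def rcm_order(adj):
    # adj: dict node -> set of neighbours (undirected graph).
    # Returns the reverse Cuthill-McKee permutation as a list of nodes.
    deg = {u: len(vs) for u, vs in adj.items()}
    visited = set()
    order = []
    for start in sorted(adj, key=lambda u: (deg[u], u)):
        if start in visited:
            continue
        visited.add(start)
        q = deque([start])
        while q:
            u = q.popleft()
            order.append(u)
            for v in sorted(adj[u] - visited, key=lambda v: (deg[v], v)):
                visited.add(v)
                q.append(v)
    return order[::-1]  # reversing the BFS order gives RCM
```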
@@ -215,14 +218,17 @@ def structural_rank(graph):
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(np.int32(0), np.int32(1)) 1
(np.int32(0), np.int32(2)) 2
(np.int32(1), np.int32(0)) 1
(np.int32(1), np.int32(3)) 1
(np.int32(2), np.int32(0)) 2
(np.int32(2), np.int32(3)) 3
(np.int32(3), np.int32(1)) 1
(np.int32(3), np.int32(2)) 3
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 8 stored elements and shape (4, 4)>
Coords Values
(0, 1) 1
(0, 2) 2
(1, 0) 1
(1, 3) 1
(2, 0) 2
(2, 3) 3
(3, 1) 1
(3, 2) 3
>>> structural_rank(graph)
4
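The structural rank equals the size of a maximum bipartite matching between the rows and columns of the nonzero pattern. A pure-Python sketch using Kuhn's augmenting-path algorithm (not SciPy's implementation; the row-list input format is illustrative):

```python
def structural_rank_dense(pattern):
    # pattern: list of rows, each a list of column indices with nonzeros.
    # Structural rank = size of a maximum bipartite matching.
    match_col = {}  # column -> matched row

    def try_row(r, seen):
        # Try to match row r, augmenting along alternating paths.
        for c in pattern[r]:
            if c in seen:
                continue
            seen.add(c)
            if c not in match_col or try_row(match_col[c], seen):
                match_col[c] = r
                return True
        return False

    return sum(try_row(r, set()) for r in range(len(pattern)))
```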


@@ -148,11 +148,14 @@ def shortest_path(csgraph, method='auto',
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(np.int32(0), np.int32(1)) 1
(np.int32(0), np.int32(2)) 2
(np.int32(1), np.int32(3)) 1
(np.int32(2), np.int32(0)) 2
(np.int32(2), np.int32(3)) 3
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 5 stored elements and shape (4, 4)>
Coords Values
(0, 1) 1
(0, 2) 2
(1, 3) 1
(2, 0) 2
(2, 3) 3
>>> dist_matrix, predecessors = shortest_path(csgraph=graph, directed=False, indices=0, return_predecessors=True)
>>> dist_matrix
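For the example graph above, the distances from node 0 can be reproduced with a plain heap-based Dijkstra sketch (names are illustrative; SciPy's implementation is compiled Cython):

```python
import heapq

def dijkstra_dist(n, edges, source):
    # edges: (u, v, w) triples treated as undirected.
    # Returns the list of shortest distances from source.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [float("inf")] * n
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```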
@@ -294,11 +297,14 @@ def floyd_warshall(csgraph, directed=True,
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(np.int32(0), np.int32(1)) 1
(np.int32(0), np.int32(2)) 2
(np.int32(1), np.int32(3)) 1
(np.int32(2), np.int32(0)) 2
(np.int32(2), np.int32(3)) 3
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 5 stored elements and shape (4, 4)>
Coords Values
(0, 1) 1
(0, 2) 2
(1, 3) 1
(2, 0) 2
(2, 3) 3
>>> dist_matrix, predecessors = floyd_warshall(csgraph=graph, directed=False, return_predecessors=True)
>>> dist_matrix
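Floyd-Warshall relaxes every node pair through every intermediate node in O(n^3); a minimal dense sketch (not SciPy's implementation):

```python
def floyd_warshall_dense(dist):
    # dist: n x n nested lists, inf where no edge, 0 on the diagonal.
    # Updates dist in place to all-pairs shortest distances.
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```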
@@ -520,10 +526,13 @@ def dijkstra(csgraph, directed=True, indices=None,
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(np.int32(0), np.int32(1)) 1
(np.int32(0), np.int32(2)) 2
(np.int32(1), np.int32(3)) 1
(np.int32(2), np.int32(3)) 3
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 4 stored elements and shape (4, 4)>
Coords Values
(0, 1) 1
(0, 2) 2
(1, 3) 1
(2, 3) 3
>>> dist_matrix, predecessors = dijkstra(csgraph=graph, directed=False, indices=0, return_predecessors=True)
>>> dist_matrix
@@ -1005,11 +1014,14 @@ def bellman_ford(csgraph, directed=True, indices=None,
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(np.int32(0), np.int32(1)) 1
(np.int32(0), np.int32(2)) 2
(np.int32(1), np.int32(3)) 1
(np.int32(2), np.int32(0)) 2
(np.int32(2), np.int32(3)) 3
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 5 stored elements and shape (4, 4)>
Coords Values
(0, 1) 1
(0, 2) 2
(1, 3) 1
(2, 0) 2
(2, 3) 3
>>> dist_matrix, predecessors = bellman_ford(csgraph=graph, directed=False, indices=0, return_predecessors=True)
>>> dist_matrix
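Bellman-Ford relaxes every edge n-1 times, which also tolerates negative edge weights; a minimal sketch (an undirected graph is modelled by listing each edge in both directions):

```python
def bellman_ford_dist(n, edges, source):
    # edges: directed (u, v, w) triples; assumes no negative cycles.
    dist = [float("inf")] * n
    dist[source] = 0
    for _ in range(n - 1):          # n-1 rounds suffice to converge
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```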
@@ -1242,11 +1254,14 @@ def johnson(csgraph, directed=True, indices=None,
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(np.int32(0), np.int32(1)) 1
(np.int32(0), np.int32(2)) 2
(np.int32(1), np.int32(3)) 1
(np.int32(2), np.int32(0)) 2
(np.int32(2), np.int32(3)) 3
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 5 stored elements and shape (4, 4)>
Coords Values
(0, 1) 1
(0, 2) 2
(1, 3) 1
(2, 0) 2
(2, 3) 3
>>> dist_matrix, predecessors = johnson(csgraph=graph, directed=False, indices=0, return_predecessors=True)
>>> dist_matrix
@@ -1772,11 +1787,14 @@ def yen(
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(np.int32(0), np.int32(1)) 1
(np.int32(0), np.int32(2)) 2
(np.int32(1), np.int32(3)) 1
(np.int32(2), np.int32(0)) 2
(np.int32(2), np.int32(3)) 3
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 5 stored elements and shape (4, 4)>
Coords Values
(0, 1) 1
(0, 2) 2
(1, 3) 1
(2, 0) 2
(2, 3) 3
>>> dist_array, predecessors = yen(csgraph=graph, source=0, sink=3, K=2,
... directed=False, return_predecessors=True)


@@ -453,10 +453,13 @@ def reconstruct_path(csgraph, predecessors, directed=True):
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(np.int32(0), np.int32(1)) 1
(np.int32(0), np.int32(2)) 2
(np.int32(1), np.int32(3)) 1
(np.int32(2), np.int32(3)) 3
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 4 stored elements and shape (4, 4)>
Coords Values
(0, 1) 1
(0, 2) 2
(1, 3) 1
(2, 3) 3
>>> pred = np.array([-9999, 0, 0, 1], dtype=np.int32)
@@ -562,10 +565,13 @@ def construct_dist_matrix(graph,
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(np.int32(0), np.int32(1)) 1
(np.int32(0), np.int32(2)) 2
(np.int32(1), np.int32(3)) 1
(np.int32(2), np.int32(3)) 3
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 4 stored elements and shape (4, 4)>
Coords Values
(0, 1) 1
(0, 2) 2
(1, 3) 1
(2, 3) 3
>>> pred = np.array([[-9999, 0, 0, 2],
... [1, -9999, 0, 1],


@@ -75,10 +75,13 @@ def connected_components(csgraph, directed=True, connection='weak',
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(np.int32(0), np.int32(1)) 1
(np.int32(0), np.int32(2)) 1
(np.int32(1), np.int32(2)) 1
(np.int32(3), np.int32(4)) 1
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 4 stored elements and shape (5, 5)>
Coords Values
(0, 1) 1
(0, 2) 1
(1, 2) 1
(3, 4) 1
>>> n_components, labels = connected_components(csgraph=graph, directed=False, return_labels=True)
>>> n_components
@@ -332,11 +335,14 @@ cpdef breadth_first_order(csgraph, i_start,
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(np.int32(0), np.int32(1)) 1
(np.int32(0), np.int32(2)) 2
(np.int32(1), np.int32(3)) 1
(np.int32(2), np.int32(0)) 2
(np.int32(2), np.int32(3)) 3
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 5 stored elements and shape (4, 4)>
Coords Values
(0, 1) 1
(0, 2) 2
(1, 3) 1
(2, 0) 2
(2, 3) 3
>>> breadth_first_order(graph,0)
(array([0, 1, 2, 3], dtype=int32), array([-9999, 0, 0, 1], dtype=int32))
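The breadth-first order and predecessor array in the example can be reproduced with a small queue-based sketch (`breadth_first_order_py` is an illustrative name; -9999 mirrors csgraph's null-predecessor marker):

```python
from collections import deque

def breadth_first_order_py(adj, start):
    # adj: list of neighbour lists (directed graph).
    # Returns (visit order, predecessor of each node).
    pred = [-9999] * len(adj)
    order = [start]
    seen = {start}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                pred[v] = u
                order.append(v)
                q.append(v)
    return order, pred
```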
@@ -538,11 +544,14 @@ cpdef depth_first_order(csgraph, i_start,
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(np.int32(0), np.int32(1)) 1
(np.int32(0), np.int32(2)) 2
(np.int32(1), np.int32(3)) 1
(np.int32(2), np.int32(0)) 2
(np.int32(2), np.int32(3)) 3
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 5 stored elements and shape (4, 4)>
Coords Values
(0, 1) 1
(0, 2) 2
(1, 3) 1
(2, 0) 2
(2, 3) 3
>>> depth_first_order(graph,0)
(array([0, 1, 3, 2], dtype=int32), array([-9999, 0, 0, 1], dtype=int32))
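Likewise, the depth-first order follows recursive descent into the first unvisited neighbour; a minimal sketch matching the example output:

```python
def depth_first_order_py(adj, start):
    # adj: list of neighbour lists (directed graph).
    # Returns (visit order, predecessors); -9999 marks "no predecessor".
    pred = [-9999] * len(adj)
    order = []
    seen = set()

    def visit(u):
        seen.add(u)
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                pred[v] = u
                visit(v)

    visit(start)
    return order, pred
```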


@@ -79,15 +79,9 @@ are described below.
To install the library, type:
make install
To run the installation test, type:
To run the regression tests, type:
make test (or: ctest)
The test results are in the files below:
build/TESTING/s_test.out # single precision, real
build/TESTING/d_test.out # double precision, real
build/TESTING/c_test.out # single precision, complex
build/TESTING/z_test.out # double precision, complex
2. Manual installation with makefile.
Before installing the package, please examine the three things dependent


@@ -0,0 +1,6 @@
This directory contains a bundled version of SuperLU from
https://github.com/xiaoyeli/superlu at
commit 18958c35006c63071813df6808d7809247caac89.
We use all .c and .h files from the SRC folder except `mc64ad.c`,
which is license-incompatible, and applied
the patches from `scipychanges.patch`.


@@ -44,7 +44,7 @@ int cfill_diag(int n, NCformat *Astore)
}
if (fill)
{
nzval_new = complexMalloc(nnz + fill);
nzval_new = singlecomplexMalloc(nnz + fill);
rowind_new = intMalloc(nnz + fill);
fill = 0;
for (i = 0; i < n; i++)
@@ -94,7 +94,7 @@ int cdominate(int n, NCformat *Astore)
}
if (fill)
{
nzval_new = complexMalloc(nnz + fill);
nzval_new = singlecomplexMalloc(nnz + fill);
rowind_new = intMalloc(nnz+ fill);
fill = 0;
for (i = 0; i < n; i++)


@@ -37,7 +37,7 @@ at the top-level directory.
*
* CGSCON estimates the reciprocal of the condition number of a general
* real matrix A, in either the 1-norm or the infinity-norm, using
* the LU factorization computed by CGETRF. *
* the LU factorization computed by CGSTRF. *
*
* An estimate is obtained for norm(inv(A)), and the reciprocal of the
* condition number is computed as
@@ -119,7 +119,7 @@ cgscon(char *norm, SuperMatrix *L, SuperMatrix *U,
return;
}
work = complexCalloc( 3*L->nrow );
work = singlecomplexCalloc( 3*L->nrow );
if ( !work )


@@ -21,8 +21,7 @@ at the top-level directory.
*/
#include "slu_cdefs.h"
/*!
* Computes approximate solutions using the ILU factorization from cgsitrf()
/*! \brief
*
* <pre>
* Purpose
@@ -374,7 +373,7 @@ at the top-level directory.
* mem_usage (output) mem_usage_t*
* Record the memory usage statistics, consisting of following fields:
* - for_lu (float)
* The amount of space used in bytes for L\\U data structures.
* The amount of space used in bytes for L\U data structures.
* - total_needed (float)
* The amount of space needed in bytes to perform factorization.
* - expansions (int)
@@ -455,12 +454,12 @@ cgsisx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r,
}
/* Test the input parameters */
if (options->Fact != DOFACT && options->Fact != SamePattern &&
options->Fact != SamePattern_SameRowPerm &&
options->Fact != FACTORED &&
options->Trans != NOTRANS && options->Trans != TRANS &&
options->Trans != CONJ &&
options->Equil != NO && options->Equil != YES)
if ( (options->Fact != DOFACT && options->Fact != SamePattern &&
options->Fact != SamePattern_SameRowPerm &&
options->Fact != FACTORED) ||
(options->Trans != NOTRANS && options->Trans != TRANS &&
options->Trans != CONJ) ||
(options->Equil != NO && options->Equil != YES) )
*info = -1;
else if ( A->nrow != A->ncol || A->nrow < 0 ||
(A->Stype != SLU_NC && A->Stype != SLU_NR) ||
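The fix above matters because the old condition chained every comparison with `&&`: any single field holding an allowed value made the whole expression false, so invalid parameter combinations slipped through. The corrected version ORs three independently parenthesized per-field checks. The same logic in a Python sketch (the string values are illustrative stand-ins for the C enums):

```python
def params_invalid(fact, trans, equil):
    # A parameter set is invalid if ANY field is outside its allowed set
    # (this is the corrected OR-of-groups form, not the buggy AND chain).
    facts = {"DOFACT", "SamePattern", "SamePattern_SameRowPerm", "FACTORED"}
    transes = {"NOTRANS", "TRANS", "CONJ"}
    equils = {"NO", "YES"}
    return fact not in facts or trans not in transes or equil not in equils
```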


@@ -540,7 +540,7 @@ cgsitrf(superlu_options_t *options, SuperMatrix *A, int relax, int panel_size,
tol_U = SUPERLU_MAX(drop_tol, tol_U * 0.5);
}
if (drop_sum.r != 0.0 && drop_sum.i != 0.0)
if (drop_sum.r != 0.0 || drop_sum.i != 0.0)
{
omega = SUPERLU_MIN(2.0*(1.0-alpha)/c_abs1(&drop_sum), 1.0);
cs_mult(&drop_sum, &drop_sum, omega);
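The `&&` to `||` change above fixes a complex-zero test: a complex value is nonzero when either its real or its imaginary part is nonzero, while the old AND only fired when both were, skipping purely real or purely imaginary drop sums. A one-line Python sketch of the corrected predicate:

```python
def is_nonzero_complex(re, im):
    # Corrected: EITHER part nonzero means the value is nonzero.
    return re != 0.0 or im != 0.0
```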


@@ -227,7 +227,7 @@ cgsrfs(trans_t trans, SuperMatrix *A, SuperMatrix *L, SuperMatrix *U,
colequ = strncmp(equed, "C", 1)==0 || strncmp(equed, "B", 1)==0;
/* Allocate working space */
work = complexMalloc(2*A->nrow);
work = singlecomplexMalloc(2*A->nrow);
rwork = (float *) SUPERLU_MALLOC( A->nrow * sizeof(float) );
iwork = int32Malloc(A->nrow);
if ( !work || !rwork || !iwork )


@@ -232,9 +232,12 @@ cgssv(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r,
/* Solve the system A*X=B, overwriting B with X. */
int info1;
cgstrs (trans, L, U, perm_c, perm_r, B, stat, &info1);
} else {
}
#if ( PRNTlevel>=1 )
else {
printf("cgstrf info %lld\n", (long long) *info); fflush(stdout);
}
#endif
utime[SOLVE] = SuperLU_timer_() - t;


@@ -21,9 +21,7 @@ at the top-level directory.
*/
#include "slu_cdefs.h"
/*!
* Solves the system of linear equations using
* the LU factorization from cgstrf()
/*! \brief
*
* <pre>
* Purpose
@@ -245,7 +243,7 @@ at the top-level directory.
* to hold data structures for factors L and U.
* On exit, if fact is not 'F', L and U point to this array.
*
* lwork (input) int
* lwork (input) int_t
* Specifies the size of work array in bytes.
* = 0: allocate space internally by system malloc;
* > 0: use user-supplied work array of length lwork in bytes,
@@ -326,7 +324,7 @@ at the top-level directory.
* mem_usage (output) mem_usage_t*
* Record the memory usage statistics, consisting of following fields:
* - for_lu (float)
* The amount of space used in bytes for L\\U data structures.
* The amount of space used in bytes for L\U data structures.
* - total_needed (float)
* The amount of space needed in bytes to perform factorization.
* - expansions (int)
@@ -364,7 +362,6 @@ cgssvx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r,
GlobalLU_t *Glu, mem_usage_t *mem_usage, SuperLUStat_t *stat, int_t *info )
{
DNformat *Bstore, *Xstore;
singlecomplex *Bmat, *Xmat;
int ldb, ldx, nrhs;
@@ -411,13 +408,13 @@ printf("dgssvx: Fact=%4d, Trans=%4d, equed=%c\n",
#endif
/* Test the input parameters */
if (options->Fact != DOFACT && options->Fact != SamePattern &&
options->Fact != SamePattern_SameRowPerm &&
options->Fact != FACTORED &&
options->Trans != NOTRANS && options->Trans != TRANS &&
options->Trans != CONJ &&
options->Equil != NO && options->Equil != YES)
*info = -1;
if ( (options->Fact != DOFACT && options->Fact != SamePattern &&
options->Fact != SamePattern_SameRowPerm &&
options->Fact != FACTORED) ||
(options->Trans != NOTRANS && options->Trans != TRANS &&
options->Trans != CONJ) ||
(options->Equil != NO && options->Equil != YES) )
*info = -1;
else if ( A->nrow != A->ncol || A->nrow < 0 ||
(A->Stype != SLU_NC && A->Stype != SLU_NR) ||
A->Dtype != SLU_C || A->Mtype != SLU_GE )


@@ -137,9 +137,9 @@ cgstrs (trans_t trans, SuperMatrix *L, SuperMatrix *U,
}
n = L->nrow;
work = complexCalloc((size_t) n * (size_t) nrhs);
work = singlecomplexCalloc((size_t) n * (size_t) nrhs);
if ( !work ) ABORT("Malloc fails for local work[].");
soln = complexMalloc((size_t) n);
soln = singlecomplexMalloc((size_t) n);
if ( !soln ) ABORT("Malloc fails for local soln[].");
Bmat = Bstore->nzval;


@@ -116,7 +116,7 @@ cldperm(int job, int n, int_t nnz, int_t colptr[], int_t adjncy[],
for (i = 0; i <= n; ++i) ++colptr[i];
for (i = 0; i < nnz; ++i) ++adjncy[i];
#if ( DEBUGlevel>=2 )
printf("LDPERM(): n %d, nnz %d\n", n, nnz);
printf("LDPERM(): n %d, nnz %lld\n", n, (long long) nnz);
slu_PrintInt10("colptr", n+1, colptr);
slu_PrintInt10("adjncy", nnz, adjncy);
#endif
@@ -152,7 +152,7 @@ cldperm(int job, int n, int_t nnz, int_t colptr[], int_t adjncy[],
#if ( DEBUGlevel>=2 )
slu_PrintInt10("perm", n, perm);
printf(".. After MC64AD info %lld\tsize of matching %d\n", (long long)info[0], num);
printf(".. After MC64AD info %lld\tsize of matching %lld\n", (long long)info[0], (long long) num);
#endif
if ( info[0] == 1 ) { /* Structurally singular */
printf(".. The last %d permutations:\n", (int)(n-num));


@@ -679,13 +679,13 @@ cStackCompress(GlobalLU_t *Glu)
void
callocateA(int n, int_t nnz, singlecomplex **a, int_t **asub, int_t **xa)
{
*a = (singlecomplex *) complexMalloc(nnz);
*a = (singlecomplex *) singlecomplexMalloc(nnz);
*asub = (int_t *) intMalloc(nnz);
*xa = (int_t *) intMalloc(n+1);
}
singlecomplex *complexMalloc(size_t n)
singlecomplex *singlecomplexMalloc(size_t n)
{
singlecomplex *buf;
buf = (singlecomplex *) SUPERLU_MALLOC(n * (size_t) sizeof(singlecomplex));
@@ -695,7 +695,7 @@ singlecomplex *complexMalloc(size_t n)
return (buf);
}
singlecomplex *complexCalloc(size_t n)
singlecomplex *singlecomplexCalloc(size_t n)
{
singlecomplex *buf;
register size_t i;


@@ -1,4 +1,4 @@
/*! \file
/*
Copyright (c) 2003, The Regents of the University of California, through
Lawrence Berkeley National Laboratory (subject to receipt of any required
approvals from U.S. Dept. of Energy)
@@ -8,6 +8,11 @@ All rights reserved.
The source code is distributed under BSD license, see the file License.txt
at the top-level directory.
*/
/*! \file
* \brief Approximate minimum degree column ordering algorithm
*
* \ingroup Common
*/
/* ========================================================================== */
/* === colamd/symamd - a sparse matrix column ordering algorithm ============ */
/* ========================================================================== */
@@ -60,30 +65,7 @@ at the top-level directory.
COLAMD is also available under alternate licenses, contact T. Davis
for details.
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301
USA
Permission is hereby granted to use or copy this program under the
terms of the GNU LGPL, provided that the Copyright, this License,
and the Availability of the original version is retained on all copies.
User documentation of any code that uses this code or any modified
version of this code must cite the Copyright, this License, the
Availability note, and "Used by permission." Permission to modify
the code and to distribute modified code is granted, provided the
Copyright, this License, and the Availability note are retained,
and a notice that the code was modified is included.
See COLAMD/Doc/License.txt for the license.
Availability:
@@ -685,6 +667,37 @@ at the top-level directory.
#define NULL ((void *) 0)
#endif
/* ========================================================================== */
/* === int or SuiteSparse_long ============================================== */
/* ========================================================================== */
#ifdef DLONG
#define Int SuiteSparse_long
#define ID SuiteSparse_long_id
#define Int_MAX SuiteSparse_long_max
#define COLAMD_recommended colamd_l_recommended
#define COLAMD_set_defaults colamd_l_set_defaults
#define COLAMD_MAIN colamd_l
#define SYMAMD_MAIN symamd_l
#define COLAMD_report colamd_l_report
#define SYMAMD_report symamd_l_report
#else
#define Int int
#define ID "%d"
#define Int_MAX INT_MAX
#define COLAMD_recommended colamd_recommended
#define COLAMD_set_defaults colamd_set_defaults
#define COLAMD_MAIN colamd
#define SYMAMD_MAIN symamd
#define COLAMD_report colamd_report
#define SYMAMD_report symamd_report
#endif
/* ========================================================================== */
/* === Row and Column structures ============================================ */
@@ -1597,7 +1610,6 @@ PUBLIC Int COLAMD_MAIN /* returns TRUE if successful, FALSE otherwise*/
return (TRUE) ;
}
#if 0
/* ========================================================================== */
/* === colamd_report ======================================================== */
@@ -1624,7 +1636,7 @@ PUBLIC void SYMAMD_report
print_report ("symamd", stats) ;
}
#endif
/* ========================================================================== */
/* === NON-USER-CALLABLE ROUTINES: ========================================== */
@@ -3189,15 +3201,15 @@ PRIVATE void print_report
SUITESPARSE_PRINTF(
"%s: number of duplicate or out-of-order row indices: %d\n",
method, (int) i3) ;
method, i3) ;
SUITESPARSE_PRINTF(
"%s: last seen duplicate or out-of-order row index: %d\n",
method, (int) INDEX (i2)) ;
method, INDEX (i2)) ;
SUITESPARSE_PRINTF(
"%s: last seen in column: %d",
method, (int) INDEX (i1)) ;
method, INDEX (i1)) ;
/* no break - fall through to next case instead */
@@ -3207,15 +3219,15 @@ PRIVATE void print_report
SUITESPARSE_PRINTF(
"%s: number of dense or empty rows ignored: %d\n",
method, (int) stats [COLAMD_DENSE_ROW]) ;
method, stats [COLAMD_DENSE_ROW]) ;
SUITESPARSE_PRINTF(
"%s: number of dense or empty columns ignored: %d\n",
method, (int) stats [COLAMD_DENSE_COL]) ;
method, stats [COLAMD_DENSE_COL]) ;
SUITESPARSE_PRINTF(
"%s: number of garbage collections performed: %d\n",
method, (int) stats [COLAMD_DEFRAG_COUNT]) ;
method, stats [COLAMD_DEFRAG_COUNT]) ;
break ;
case COLAMD_ERROR_A_not_present:
@@ -3232,24 +3244,24 @@ PRIVATE void print_report
case COLAMD_ERROR_nrow_negative:
SUITESPARSE_PRINTF("Invalid number of rows (%d).\n", (int) i1) ;
SUITESPARSE_PRINTF("Invalid number of rows (%d).\n", i1) ;
break ;
case COLAMD_ERROR_ncol_negative:
SUITESPARSE_PRINTF("Invalid number of columns (%d).\n", (int) i1) ;
SUITESPARSE_PRINTF("Invalid number of columns (%d).\n", i1) ;
break ;
case COLAMD_ERROR_nnz_negative:
SUITESPARSE_PRINTF(
"Invalid number of nonzero entries (%d).\n", (int) i1) ;
"Invalid number of nonzero entries (%d).\n", i1) ;
break ;
case COLAMD_ERROR_p0_nonzero:
SUITESPARSE_PRINTF(
"Invalid column pointer, p [0] = %d, must be zero.\n", (int)i1);
"Invalid column pointer, p [0] = %d, must be zero.\n", i1);
break ;
case COLAMD_ERROR_A_too_small:
@@ -3257,21 +3269,21 @@ PRIVATE void print_report
SUITESPARSE_PRINTF("Array A too small.\n") ;
SUITESPARSE_PRINTF(
" Need Alen >= %d, but given only Alen = %d.\n",
(int) i1, (int) i2) ;
i1, i2) ;
break ;
case COLAMD_ERROR_col_length_negative:
SUITESPARSE_PRINTF
("Column %d has a negative number of nonzero entries (%d).\n",
(int) INDEX (i1), (int) i2) ;
INDEX (i1), i2) ;
break ;
case COLAMD_ERROR_row_index_out_of_bounds:
SUITESPARSE_PRINTF
("Row index (row %d) out of bounds (%d to %d) in column %d.\n",
(int) INDEX (i2), (int) INDEX (0), (int) INDEX (i3-1), (int) INDEX (i1)) ;
INDEX (i2), INDEX (0), INDEX (i3-1), INDEX (i1)) ;
break ;
case COLAMD_ERROR_out_of_memory:

Some files were not shown because too many files have changed in this diff.