Compare commits

...

110 Commits

Author SHA1 Message Date
Tyler Reddy bc29b5bf5f REL: SciPy 1.2.3 release commit 2020-01-20 13:47:15 -07:00
Tyler Reddy 33183d282d
Merge pull request #10758 from tylerjereddy/scipy_123_backports
MAINT: SciPy 1.2.3 LTS backports / prep
2020-01-20 12:11:58 -07:00
Tyler Reddy ad406305c1 DOC: update SciPy 1.2.3 relnotes
* update SciPy 1.2.3 release notes to
include backports for: gh-11126, gh-10906,
gh-10633, gh-10138, and gh-10076.

* briefly mention in the notes that this
is part of the Python 2.7 LTS support series
2020-01-20 10:59:23 -07:00
Tyler Reddy 4e3f0923ae Revert "BUG: scipy/interpolate: fix PPoly/Cubic*Spline roots() extrapolation for single-interval case"
This reverts commit 8e5842303c.
2020-01-20 10:33:10 -07:00
Pauli Virtanen 9d9caa805b BUG: scipy/interpolate: fix PPoly/Cubic*Spline roots() extrapolation for single-interval case
Fix mistake in PPoly.roots(extrapolate=True) when a piecewise polynomial
has only a single interval. Previously, roots to the right of the
interval were not considered.
2020-01-20 10:15:52 -07:00
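The fixed behavior can be illustrated with a minimal sketch (hypothetical values; assumes a SciPy build containing this fix): a piecewise polynomial with a single interval whose second root lies to the right of it.

```python
import numpy as np
from scipy.interpolate import PPoly

# p(x) = (x - 1)(x - 3) = x**2 - 4*x + 3 on the single interval [0, 2].
# The root at x = 3 lies to the right of the interval and is only
# reported when extrapolate=True -- the case this commit fixes.
p = PPoly(np.array([[1.0], [-4.0], [3.0]]), np.array([0.0, 2.0]))
r = np.sort(p.roots(extrapolate=True))
```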
Pauli Virtanen 55a97c53dc BUG: interpolate/fitpack: fix memory leak in splprep 2020-01-20 10:13:36 -07:00
Pauli Virtanen fd3e365ae0 BUG: interpolate/fitpack: fix leak in splprep for task!=0
This is the non-default code path, which is not usually exercised.
2020-01-20 10:13:16 -07:00
Pauli Virtanen 2985d452d2 BUG: sparse/linalg: fix expm for np.matrix inputs (#10906) 2020-01-20 10:12:52 -07:00
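A minimal sketch of the input type this commit addresses (hypothetical values; assumes post-fix behavior): the matrix exponential of a zero matrix given as `np.matrix` should be the identity.

```python
import numpy as np
from scipy.sparse.linalg import expm

# gh-10906: expm previously mishandled np.matrix inputs; the matrix
# exponential of the zero matrix is the identity.
result = expm(np.matrix(np.zeros((2, 2))))
```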
Evgeni Burovski fd643051d8 BUG: interpolate: integral(a, b) should be zero
Integrals should be zero when both limits are outside
of the interpolation range

fixes gh-7906
2020-01-20 10:12:15 -07:00
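A minimal sketch of the fixed behavior (hypothetical values; assumes a build containing this fix): integrating entirely outside the data range yields zero.

```python
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

x = np.linspace(0.0, 1.0, 10)
spl = InterpolatedUnivariateSpline(x, x**2)
# Both limits lie beyond the data range [0, 1]; after the fix the
# integral over that region is zero instead of a spurious value.
val = spl.integral(2.0, 3.0)
```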
Warren Weckesser 4f12826fc2 BUG: special: Invalid arguments to ellip_harm can crash Python.
For example,

    In [1]: from scipy.special import ellip_harm

    In [2]: n = 4

    In [3]: p = 2*n + 2  # Too big!

    In [4]: ellip_harm(0.5, 2.0, n, p, 0.2)

    python(74331,0x7fff7c7ce310) malloc: *** error for object 0x120c1d110: pointer being freed was not allocated
    *** set a breakpoint in malloc_error_break to debug
    Abort trap: 6

The problem was that when the function `lame_coefficients` detected
invalid arguments, it returned without initializing bufferp[0].  In
normal use, bufferp[0] is malloc'ed by `lame_coefficients` and must
be freed by the caller.  If `lame_coefficients` failed to initialize
bufferp[0] with NULL, the value returned in bufferp[0] would be garbage,
and that would cause a crash when the caller attempted to free the memory.
2020-01-20 10:10:25 -07:00
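For reference, the order constraint is 1 <= p <= 2*n + 1; a sketch at the upper edge of the valid range, mirroring the argument values in the report above (hypothetical example, not from the commit):

```python
from scipy.special import ellip_harm

n = 4
p = 2 * n + 1  # largest valid order; p = 2*n + 2 was the crashing input
value = ellip_harm(0.5, 2.0, n, p, 0.2)
```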
Evgeni Burovski 4942c3b95c BUG: optimize: fix curve_fit for mixed float32/float64 input
curve_fit sometimes fails if xdata and ydata dtypes differ
(one is float32, and the other is float64), or both are float32.

Thus always cast them to float64.

fixes gh-9581, fixes gh-7117
2020-01-20 10:09:30 -07:00
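A minimal sketch of the mixed-dtype case (hypothetical model and values; assumes post-fix behavior): `xdata` is float32 while `ydata` is float64, and the fit still converges because both are cast to float64 internally.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(-b * x)

xdata = np.linspace(0.0, 4.0, 50).astype(np.float32)  # float32 x ...
ydata = model(xdata.astype(np.float64), 2.5, 1.3)     # ... float64 y
popt, _ = curve_fit(model, xdata, ydata)
```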
Tyler Reddy 91595eafce DOC: update SciPy 1.2.3 relnotes
* update SciPy 1.2.3 LTS release notes
after backporting gh-10882, gh-11199,
gh-10961, and gh-10833.
2020-01-20 09:59:27 -07:00
Tyler Reddy 905f1a8ca4 TST: test_gh_4915() compatibility
* adjust test_gh_4915() such that it is
compatible with Python 2.7
2020-01-20 09:41:50 -07:00
Nikolay Mayorov d6b10d5108 BUG: Fix signal.unique_roots (#10961)
* BUG, TST: Make unique_roots work properly with complex roots + tests

* DOC: Update docstring of unique_roots

* BUG: Do correct sorting of pairs of close roots in unique_roots

* DOC: Tweak docstring of signal.unique_roots

* ENH: Improve asymptotic complexity of signal.unique_roots

* DOC: Update docstring of signal.unique_roots
2020-01-19 20:20:52 -07:00
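The complex-root handling this PR fixes can be sketched as follows (hypothetical values; assumes post-fix behavior):

```python
from scipy.signal import unique_roots

# Two nearly identical complex roots are collapsed into a single root
# of multiplicity 2; the third root stays separate.
uniq, mult = unique_roots([1.0 + 1.0j, 1.0 + 1.0j + 1e-9, 2.0], tol=1e-6)
```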
Eric Larson 7517d5c36e BUG: Fix subspace_angles for complex values 2020-01-19 20:12:07 -07:00
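A minimal sketch of the complex-valued case (hypothetical values; assumes post-fix behavior): a subspace compared with itself must have all principal angles equal to zero.

```python
import numpy as np
from scipy.linalg import subspace_angles

# Complex inputs previously gave wrong angles; comparing a complex
# subspace with itself should yield all zeros.
A = np.array([[1.0 + 1.0j, 0.0], [0.0, 1.0 - 1.0j], [1.0j, 1.0]])
angles = subspace_angles(A, A)
```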
Pauli Virtanen 25e8fad635 BUG: sparse.linalg: mistake in unsymm. real shift-invert ARPACK eigenvalue selection
For real-valued unsymmetric problems, ARPACK can in some cases return
one extra eigenvalue.

The code that drops the extraneous eigenvalue did not take into account
that in shift-invert mode, the which= keyword applies to the
*transformed* eigenvalues, and dropped sometimes the wrong eigenvalue.

Moreover, fix sort order in this case for 'LM', 'LR', 'LI' modes
(largest items come first).
2020-01-19 20:09:28 -07:00
Pauli Virtanen 46a2c47d05 BUG: sparse/arpack: fix incorrect code for complex hermitian M
The code used transposition as a shortcut for csc format conversion,
but this is incorrect for complex-valued matrices.
2020-01-19 20:07:31 -07:00
Pauli Virtanen 4f7edc686e TST: sparse/arpack: test complex hermitian M eigenvalue problem 2020-01-19 20:07:21 -07:00
Tyler Reddy f5bb5e00a2 MAINT: CircleCI fixes
* try installing xindy via apt, as
CircleCI "make latex" doc build currently
fails with a missing xindy issue
2020-01-18 16:03:43 -07:00
Tyler Reddy 52d21aefa3 MAINT: adjust broadcast shim again
* hoist the version specific application
of the _broadcast_arrays() shim from gh-10379
to an import-level decision, to avoid performance
hits on potential indexing hot paths
2020-01-17 15:59:36 -07:00
Tyler Reddy 1ced0d4363 DOC: update SciPy 1.2.3 relnotes
* update SciPy 1.2.3 LTS release notes for partial
backport of gh-9992
2020-01-17 15:34:51 -07:00
Tyler Reddy da43d7efb7 MAINT: add missing font
* partial backport of gh-9992 to fix CircleCI
doc build
2020-01-17 15:31:07 -07:00
Tyler Reddy f2d5f8eced DOC: update 1.2.3 relnotes
* update SciPy 1.2.3 LTS release notes to
account for a partial backport of gh-10540
2020-01-17 15:03:09 -07:00
Eric Larson 768d5e5168 MAINT: Bump tolerance on test for OSX 2020-01-17 15:00:22 -07:00
Tyler Reddy eff68b79f1 DOC: fftpack tutorial fix
* fix a small refguide-related error
in fftpack tutorial
2020-01-17 13:56:52 -07:00
Tyler Reddy 899ccdce1c DOC: update 1.2.3 relnotes
* update 1.2.3 release notes to reflect backporting of
gh-11269
2020-01-17 11:08:15 -07:00
ananyashreyjain 16aecefd8a Fix: Sparse matrix constructor data type detection changes on Numpy 1.18.0 (#11269) 2020-01-17 11:05:08 -07:00
Tyler Reddy e8516760cb TST: add skips in test_cont_basic
* test_cont_basic() had some test cases
that were not stable on MacOS; skip
these two cases based on reviewer
suggestion in gh-10758
2020-01-16 21:19:18 -07:00
Tyler Reddy 4d62127900 DOC: refguide fixes for LTS branch
* pearsonr() docstring was missing some
references, probably because some doc
updates leaked in from gh-9562

* brentq() docstring See Also section
was moved to Notes section to fix
a refguide problem, matching the
solution in master branch

* repair a minor formatting issue
in the fsolve() docstring See Also
section that was causing a refguide
failure, based on the state of this
section in master
2020-01-16 21:13:30 -07:00
Tyler Reddy 77f4e1afca MAINT: adjust _broadcast_arrays shim
* the _broadcast_arrays() shim function
now only adjusts writeability for NumPy >= 1.17.0
since it will otherwise cause issues for old
versions of NumPy supported for LTS branch
(i.e., 1.8.2)
2020-01-14 17:35:46 -07:00
Tyler Reddy e9658fcb15 TST: TestIQR.test_scale adjustment
* apparently for NumPy versions in the 1.17.x series and above
TestIQR.test_scale() can expect fewer warnings
2020-01-14 16:33:08 -07:00
Tyler Reddy 545c674935 DOC: 1.2.3 release notes adjustment
* the HalfspaceIntersection PR has been removed
from the LTS branch based on discussion in backport
PR 10758 (non-trivial to determine cause of related
failure), so release notes adjusted accordingly
2020-01-14 16:09:06 -07:00
Tyler Reddy 7c59ea8dcc Revert "BUG: spatial/qhull: get HalfspaceIntersection.dual_points from the correct array"
This reverts commit a0a84332dd.
2020-01-14 16:04:21 -07:00
Tyler Reddy 1231639d12 MAINT: re-pin Sphinx version
* re-pin Sphinx version for Circle CI
docs build config based on LTS branch
backport PR reviewer feedback
2020-01-12 20:53:16 -07:00
Tyler Reddy be888f1465 DOC: update 1.2.3 release notes
* gh-10457 has been removed from the 1.2.3 LTS
release, so adjust the release notes accordingly
2020-01-12 20:46:36 -07:00
Tyler Reddy 49849676c8 Revert "Fix 5040"
This reverts commit 9e08c44fa2.
2020-01-12 20:45:14 -07:00
Tyler Reddy 2d5a1bec75 MAINT: backport PR 10379
* backport gh-10379, which involves
changes to some files that don't even exist in the
LTS branch; naturally, some of the changes differ
from the original PR, but all of the FutureWarning
cases have now been suppressed in the full test suite

* adjust release notes accordingly
2020-01-12 20:23:13 -07:00
Tyler Reddy f0be5be4bc DOC: update release notes for 1.2.3
* early draft of updated release notes
for SciPy 1.2.3
2019-09-02 16:50:04 -06:00
Pauli Virtanen a0a84332dd BUG: spatial/qhull: get HalfspaceIntersection.dual_points from the correct array
Qhull stores half-space intersection points in the `points` array, which
gives the canonical ordering for the dual hull points.

The order of points in `vertex_list` may be permuted, and should not be
used here.
2019-09-02 16:35:59 -06:00
Yu Feng 9e08c44fa2 Fix 5040
Allow ckdtree to take empty data input of shape (0, m).
2019-09-02 16:33:56 -06:00
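The fixed behavior can be sketched as follows (hypothetical values; assumes post-fix SciPy, where a query on an empty tree reports a missing neighbour as an infinite distance):

```python
import numpy as np
from scipy.spatial import cKDTree

# gh-5040: a (0, m) data array is accepted; querying the empty tree
# reports "no neighbour" as an infinite distance.
tree = cKDTree(np.empty((0, 3)))
dist, idx = tree.query([0.0, 0.0, 0.0])
```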
Eric Larson 12ddb82ac2 MAINT: Bump to latest numpydoc 2019-09-02 16:28:58 -06:00
Eric Larson 0da9dc1189 MAINT: Unpin sphinx 2019-09-02 16:28:44 -06:00
Eric Larson 5c5a7f7cab MAINT: Fix doc build bugs 2019-09-02 16:26:51 -06:00
Pauli Virtanen 477f7e5dd1 MAINT: only warn on imp deprecation warnings 2019-09-02 16:20:50 -06:00
Pauli Virtanen 29612b7ad8 MAINT: fix version.py loading in setup.py 2019-09-02 16:19:45 -06:00
Tyler Reddy f392e0118a TST, MAINT: adjustments for pytest 5.0
* prefer importlib over imp to avoid deprecation warnings
since master branch no longer has a support burden for Python 2

* pytest-faulthandler can be removed from infrastructure and
docs because it has been merged into pytest core now
2019-09-02 16:18:53 -06:00
David Hagen da8b7304b5 TST: Mark test with pytest.mark.slow 2019-09-02 16:10:55 -06:00
David Hagen dbbc2e7d43 TEST: Use pytest.mark.timeout on tests that could hang 2019-09-02 16:10:49 -06:00
David Hagen 339286d30b TEST: Use pytest on stiff integration test 2019-09-02 16:10:41 -06:00
David Hagen 1a415784db TEST: Add regression test for stiff solvers not handling stiff 2019-09-02 16:10:34 -06:00
David Hagen b544737603 BUG: Pass jac=None directly to lsoda
Closes gh-9901. If the Jacobian is not provided, LSODA must receive `jac=None`. Otherwise, it will neither compute the Jacobian via finite differences nor use the stiff solver.
2019-09-02 16:10:21 -06:00
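A minimal sketch of the scenario (hypothetical ODE; assumes post-fix behavior): no Jacobian is supplied, so the LSODA wrapper must pass `jac=None` and let LSODA estimate the Jacobian by finite differences.

```python
import numpy as np
from scipy.integrate import solve_ivp

# y' = -y, y(0) = 1, so y(1) = exp(-1); no jac argument is given.
sol = solve_ivp(lambda t, y: -y, (0.0, 1.0), [1.0], method='LSODA')
```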
Matt Haberland 80530a9690 BUG: optimize: wrap ma.mask, which could be 0d, in np.atleast_1d 2019-09-02 16:07:32 -06:00
Matt Haberland e8b57a3730 BUG: optimize: dummy commit to investigate #10303 on CI 2019-09-02 16:07:23 -06:00
G. D. McBain 798df5dc52 DOC: reconstruct SuperLU permutation matrices in one step 2019-09-02 16:00:44 -06:00
Ralf Gommers 75fa5f1e41
Merge pull request #10276 from tylerjereddy/prep_123
MAINT: prepare LTS branch for SciPy 1.2.3
2019-06-07 19:34:09 +02:00
Tyler Reddy fa662fd41d MAINT: prepare LTS branch for SciPy 1.2.3 2019-06-07 10:10:29 -07:00
Tyler Reddy 600e7f576a REL: SciPy 1.2.2 release commit 2019-06-06 10:33:39 -07:00
Tyler Reddy 2ab922f987
Merge pull request #10254 from tylerjereddy/backport_122_June_3
MAINT: Backports / updates for 1.2.2 (LTS)
2019-06-06 10:06:39 -07:00
Tyler Reddy 2fcfa3d1a2 MAINT: updates / prep for 1.2.2
* update 1.2.x LTS branch CI to use OpenBLAS
0.3.7.dev because this is the version that will
be used for wheels & that contains a patch
for SkylakeX kernel issues

* fill in release notes for SciPy 1.2.2

* Azure and Appveyor CI adjusted to use distutils
from NumPy LTS (1.16.x) branch instead of
replacing with NumPy master branch distutils.
This is because the latter is no longer Python 2.7
compliant on Windows

* remove PyPy from CircleCI build matrix for the same
reasons provided in gh-10085

* test_parallel() in test__differential_evolution.py
has been adjusted to be Python 2/3 compatible
2019-06-05 15:50:00 -07:00
Kevin Sheppard f094fb181a MAINT: Use math.factorial rather than np.math.factorial
Switch import from numpy to math since function always from math
2019-06-03 14:40:07 -07:00
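For context: `np.math` was only a re-export of the stdlib `math` module (deprecated in NumPy 1.25 and removed in 2.0), so importing `math` directly is the portable spelling:

```python
import math

# Identical to the former np.math.factorial, without the numpy alias.
result = math.factorial(10)
```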
Kevin Sheppard fa355e23b4 BUG: Ensure factorial is not too large in kendalltau
Avoid excessively large factorial in kendall tau when sample size is large

closes #9611
2019-06-03 14:39:57 -07:00
Kai 9369382a2c DOC: Remove extra blankline 2019-06-03 14:35:54 -07:00
Kai c43b86b5c9 BUG: Avoid inplace modification of input array in newton
optimize.newton previously modified the user's input array when
performing calculations. This PR makes an explicit copy of the
array before any in-place modifications are performed.
2019-06-03 14:35:46 -07:00
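The fixed behavior can be sketched as follows (hypothetical function and starting points; assumes post-fix SciPy):

```python
import numpy as np
from scipy import optimize

x0 = np.array([1.0, 2.0, 3.0])
x0_before = x0.copy()
# The vectorized newton previously overwrote x0 in place; after the fix
# the caller's array is left untouched while the roots are still found.
roots = optimize.newton(lambda x: x**2 - 2.0, x0)
```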
Andrew Nelson d57cc5c80b TST: two ways of testing parallel solve 2019-06-03 14:21:16 -07:00
Andrew Nelson 550110919e MAINT: ``MapWrapper.__exit__`` should terminate 2019-06-03 14:21:05 -07:00
Tyler Reddy f1063a07c7 TST: confine scikit-umfpack testing
* scikit-umfpack is now only used in the
single "full" test matrix entry in Travis CI;
it was previously preventing proper testing
with NumPy 1.13.3 (oldest supported version)
2019-06-03 14:16:33 -07:00
Tyler Dawson d66f41751c Update description for nnz on csc.py (#10141)
This is to resolve issue #10132
2019-05-06 13:56:10 -04:00
Ralf Gommers c5e87a5009
Merge pull request #9805 from tylerjereddy/LTS_122
MAINT: prepare LTS branch for 1.2.2 & pavement.py backport
2019-02-11 11:26:15 -08:00
Tyler Reddy c5a8e94ec8 MAINT: backport pavement.py fixes from gh-9607. 2019-02-11 10:18:18 -08:00
Tyler Reddy afd85536f9 MAINT: prepare LTS branch for SciPy 1.2.2 2019-02-11 10:15:28 -08:00
Ralf Gommers c3fa90dcfc
Merge pull request #9787 from tylerjereddy/issue_9777
TST: xfail problematic test for 1.2.x LTS branch
2019-02-07 22:51:26 -08:00
Tyler Reddy ef1659a0a9 TST: xfail problematic test for 1.2.1 LTS
* test_splrep_errors is now xfailed when SciPy
is tested with NumPy < 1.14.0 because of an
f2py-related requirement identified in gh-9777
2019-02-07 21:11:02 -08:00
Tyler Reddy 70f04c2186 REL: 1.2.1 release commit 2019-02-06 15:32:33 -08:00
Tyler Reddy a16af808b5
Merge pull request #9767 from tylerjereddy/relnotes-121
DOC: update 1.2.1 release notes
2019-02-06 14:55:11 -08:00
Ralf Gommers fb0af04f32 TST: reduce max memory usage for sparse.linalg.lgmres test
Follow-up to gh-9598.  Now really closes gh-9595.
2019-02-06 10:48:53 -08:00
Tyler Reddy 33b0fa4ccb DOC: update 1.2.1 release notes 2019-02-05 11:29:58 -08:00
Tyler Reddy 664565acf1
Merge pull request #9743 from tylerjereddy/backport-gh9739
MAINT: backport gh-9739
2019-02-02 16:24:37 -08:00
Eric Moore 214dae102c BUG: qr_updates fails if u is exactly in span Q
This change (from < to <=) is in Reichel's paper and was missed
when the code was first written.

closes gh-9733
2019-02-01 16:37:05 -08:00
Tyler Reddy 54818a41b0
Merge pull request #9741 from tylerjereddy/backports_feb1_2019
MAINT: a few backports as 1.2.1 approaches
2019-02-01 16:29:52 -08:00
Tyler Reddy 8fd5532a48 TST: adjust to vmImage dispatch in Azure 2019-02-01 12:29:51 -08:00
Tyler Reddy 7e9764e325 TST: pin mingw for Azure Win CI. 2019-02-01 12:29:33 -08:00
Evgeni Burovski 419dbdc07b BUG: stats: weighted KDE does not modify the weights array 2019-02-01 12:28:51 -08:00
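A minimal sketch of the fixed behavior (hypothetical data; assumes post-fix SciPy): the caller's weights array survives KDE construction unchanged.

```python
import numpy as np
from scipy.stats import gaussian_kde

data = np.random.RandomState(0).normal(size=100)
weights = np.ones(100)
weights_before = weights.copy()
# gaussian_kde normalizes the weights internally; the fix ensures it
# does so on a copy rather than in the caller's array.
kde = gaussian_kde(data, weights=weights)
```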
Tyler Reddy 92bc62d609
Merge pull request #9683 from tylerjereddy/backport_gh_9612
MAINT: backport gh-9612, others
2019-01-18 10:54:15 -08:00
Tyler Reddy 671d767eec MAINT: partial backport of gh-9681. 2019-01-15 14:22:50 -08:00
Evgeni Burovski 491e27f2a2 TST: interpolate: avoid pytest deprecations
pytest.raises(..., message=...) is deprecated,
the recommended parameter is match=...
2019-01-14 15:48:23 -08:00
Tyler Reddy 203ca4e96d MAINT: backport gh-9612
* don't use array newton unless size is greater than 1
2019-01-14 10:24:14 -08:00
Ralf Gommers caf1fe1c09
Merge pull request #9633 from tylerjereddy/backport-dec28-1.2.1
MAINT: prepare branch for 1.2.1 & backport gh-9615
2019-01-02 21:23:23 -08:00
Tyler Reddy 443127321f MAINT: address gh-9633 review comments
* 1.2.1 release notes now mention that
installation from source with Python 2.7
is once again possible

* test_cdf_nolan_samples can now pass
on this branch after selectively backporting
a commit hunk from 24f1c26 that includes an
updated warning filter

* fixed a merge conflict resolution issue causing
a syntax error in .travis.yml
2019-01-02 19:58:19 -08:00
Tyler Reddy 4000666be2 MAINT: backport gh-9615. 2018-12-28 11:20:16 -08:00
Tyler Reddy 4c1db3943d REL: bump version to 1.2.1 unreleased 2018-12-28 10:58:42 -08:00
Tyler Reddy 722bfc3f2c REL: 1.2.0 release commit 2018-12-17 09:54:00 -08:00
Tyler Reddy 1c87cfca4d TST: reduce max memory usage for sparse.linalg.gcrotmk test. (#9600)
Closes gh-9595

This test calls `np.random.choice` with a large sparse array of shape
(10000, 10000). Under the hood, `choice` ends up constructing a dense
array of the same size explicitly, in `np.random.permutation`. That
takes 0.8 GB for int64. That then goes on to `np.random.shuffle`, which
creates some extra buffers. The `MemoryError` seems to be simply because
we're pushing the 2 GB limit on 32-bit Python.
2018-12-15 14:57:33 -08:00
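The 0.8 GB figure quoted above follows directly from the array size:

```python
# A dense int64 array with one entry per element of the (10000, 10000)
# shape: 1e8 entries at 8 bytes each.
n_rows = n_cols = 10000
gigabytes = n_rows * n_cols * 8 / 1e9
```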
Ralf Gommers f006e82f3b
Merge pull request #9597 from tylerjereddy/dec14_updates_12x_branch
MAINT: backports for 1.2.x branch -- mostly CI
2018-12-14 20:37:17 -08:00
Tyler Reddy 03a70f591c TST: restore Azure 64-bit Python 2.7 2018-12-14 14:01:55 -08:00
Tyler Reddy 47d0edc543 TST: Add 32-bit testing to CI
* Add 32-bit Linux testing to Azure
CI; 32-bit testing has been missing
from our CI for quite some time
2018-12-14 13:49:40 -08:00
Tyler Reddy c356d0c7b4 TST: add Windows Azure CI 2018-12-14 13:47:08 -08:00
Tyler Reddy 2f67a4bae4 REL: set version to unreleased 1.2.0 2018-12-14 13:44:53 -08:00
Tyler Reddy c5f8abc381 REL: updated 1.2.0rc2 release commit following gh-9572 reversion. 2018-12-04 21:50:11 -08:00
Ralf Gommers 2bee5acbcf
Merge pull request #9572 from tylerjereddy/revert_noprefix
REV: restore noprefix.h
2018-12-04 21:01:48 -08:00
Tyler Reddy 89ebe5783e REV: restore noprefix.h
* revert gh-9561, which is causing
wheel build failures and is far too
tricky to debug this close to
the 1.2.0 release
2018-12-04 11:02:22 -08:00
Tyler Reddy 0aec4171cd REL: 1.2.0rc2 release commit 2018-12-04 09:17:20 -08:00
Ralf Gommers 7209e14ca0
Merge pull request #9566 from tylerjereddy/rc2_backports
MAINT: multi backports for 1.2.0rc2
2018-12-03 20:43:14 -08:00
Tyler Reddy 845b8e54dc MAINT: multi backports for 1.2.0rc2
* backports for 1.2.0rc2: gh-9541, gh-9549,
gh-9550, gh-9561

* appropriate release note updates for the above
backports, and some release note updates from
the rc1 backports

* increment rc version number and set as unreleased
2018-12-03 16:26:07 -08:00
Tyler Reddy 3488be84f0
Merge pull request #9534 from tylerjereddy/multi_backports
MAINT: multiple backports for 1.2.x
2018-11-25 09:13:17 -08:00
Tyler Reddy f3e7a417d8 MAINT: multiple backports for 1.2.x
* backports for gh-9486, gh-9494,
gh-9512, gh-9507, gh-9526
2018-11-24 17:34:01 -08:00
Ralf Gommers d914fa9cdb
Merge pull request #9532 from tylerjereddy/pr_9493_backport
TST: backport gh-9493 to 1.2.x branch
2018-11-24 14:59:02 -08:00
Tyler Reddy 61bac6446e TST: 32-bit test_distance.py now passes
* the floating point precision stringency
of test_pdist_jensenshannon_iris_nonC() has
been adjusted to allow it to pass with
a 32-bit Python interpreter, as required
by the wheels infrastructure
2018-11-24 13:46:58 -08:00
Ilhan Polat d78390093a
Merge pull request #9506 from rgommers/relnotes-py27
DOC: add note that 1.2.x is the last release to support Python 2.7
2018-11-19 21:57:29 +01:00
Ralf Gommers c2c7121853 DOC: add note that 1.2.x is the last release to support Python 2.7
[ci skip]
2018-11-19 12:46:43 -08:00
Tyler Reddy dbf668065e
REL: set version to 1.2.0rc1 2018-11-13 15:20:19 -08:00
72 changed files with 1142 additions and 423 deletions


@@ -21,7 +21,10 @@ jobs:
name: install Debian dependencies
command: |
sudo apt-get update
sudo apt-get install libatlas-dev libatlas-base-dev liblapack-dev gfortran libgmp-dev libmpfr-dev libfreetype6-dev libpng-dev zlib1g zlib1g-dev texlive-fonts-recommended texlive-latex-recommended texlive-latex-extra texlive-generic-extra latexmk texlive-xetex
sudo apt-get install libatlas-dev libatlas-base-dev liblapack-dev gfortran libgmp-dev libmpfr-dev libfreetype6-dev libpng-dev zlib1g zlib1g-dev texlive-fonts-recommended texlive-latex-recommended texlive-latex-extra texlive-generic-extra latexmk texlive-xetex fonts-freefont-otf
sudo sh -c 'echo "deb http://ftp.de.debian.org/debian sid main" >> /etc/apt/sources.list'
sudo apt-get update
sudo apt-get install xindy
- run:
name: setup Python venv
@@ -30,7 +33,8 @@ jobs:
. venv/bin/activate
pip install --install-option="--no-cython-compile" cython
pip install numpy
pip install nose mpmath argparse Pillow codecov matplotlib Sphinx==1.7.2
pip install nose mpmath argparse Pillow codecov matplotlib "Sphinx<2.1"
pip install pybind11
- run:
name: build docs
@@ -91,55 +95,6 @@ jobs:
git commit -m "Docs build of $CIRCLE_SHA1";
git push --set-upstream origin gh-pages --force
# Run test suite on pypy3
pypy3:
docker:
- image: pypy:3-6.0.0
steps:
- restore_cache:
keys:
- pypy3-ccache-{{ .Branch }}
- pypy3-ccache
- checkout
- run:
name: setup
command: |
apt-get -yq update
apt-get -yq install libatlas-dev libatlas-base-dev liblapack-dev gfortran ccache
ccache -M 512M
export CCACHE_COMPRESS=1
export NPY_NUM_BUILD_JOBS=`pypy3 -c 'import multiprocessing as mp; print(mp.cpu_count())'`
export PATH=/usr/lib/ccache:$PATH
# XXX: use "numpy>=1.15.0" when it's released
pypy3 -mpip install --upgrade pip setuptools wheel
pypy3 -mpip install --no-build-isolation --extra-index https://antocuni.github.io/pypy-wheels/ubuntu pytest pytest-xdist Tempita "Cython>=0.28.2" mpmath
pypy3 -mpip install --no-build-isolation git+https://github.com/numpy/numpy.git@db552b5b6b37f2ff085b304751d7a2ebed26adc9
- run:
name: build
command: |
export CCACHE_COMPRESS=1
export PATH=/usr/lib/ccache:$PATH
# Limit parallelism for Cythonization to 4 processes, to
# avoid exceeding CircleCI memory limits
export SCIPY_NUM_CYTHONIZE_JOBS=4
export NPY_NUM_BUILD_JOBS=`pypy3 -c 'import multiprocessing as mp; print(mp.cpu_count())'`
# Less aggressive optimization flags for faster compilation
OPT="-O1" FOPT="-O1" pypy3 setup.py build
- save_cache:
key: pypy3-ccache-{{ .Branch }}-{{ .BuildNum }}
paths:
- ~/.ccache
- ~/.cache/pip
- run:
name: test
command: |
# CircleCI has 4G memory limit, play it safe
export SCIPY_AVAILABLE_MEM=1G
pypy3 runtests.py -- -rfEX -n 3 --durations=30
workflows:
version: 2
default:
@@ -152,4 +107,3 @@ workflows:
filters:
branches:
only: master
- pypy3


@@ -21,7 +21,8 @@ matrix:
script:
- PYFLAKES_NODOCTEST=1 pyflakes scipy benchmarks/benchmarks | grep -E -v 'unable to detect undefined names|assigned to but never used|imported but unused|redefinition of unused|may be undefined, or defined from star imports' > test.out; cat test.out; test \! -s test.out
- pycodestyle scipy benchmarks/benchmarks
- python: 2.7
- |
grep -rlIP '[^\x00-\x7F]' scipy | grep '\.pyx\?' | sort > unicode.out; grep -rlI '# -\*- coding: \(utf-8\|latin-1\) -\*-' scipy | grep '\.pyx\?' | sort > coding.out; comm -23 unicode.out coding.out > test_code.out; cat test_code.out; test \! -s test_code.out
env:
- TESTMODE=fast
- COVERAGE=
@@ -116,13 +117,13 @@ before_install:
export CFLAGS="-arch x86_64"
export CXXFLAGS="-arch x86_64"
printenv
wget https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a43.ssl.cf2.rackcdn.com/openblas-0.2.20-345-g6a6ffaff-Darwin-x86_64-1becaaa.tar.gz -O openblas.tar.gz
wget https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a43.ssl.cf2.rackcdn.com/openblas-v0.3.5-274-g6a8b4269-macosx_10_9_x86_64-gf_1becaaa.tar.gz -O openblas.tar.gz
mkdir openblas
tar -xzf openblas.tar.gz -C openblas
# Modify the openblas dylib so it can be used in its current location
# Also make it use the current install location for libgfortran, libquadmath, and libgcc_s.
pushd openblas/usr/local/lib
install_name_tool -id $TRAVIS_BUILD_DIR/openblas/usr/local/lib/libopenblasp-r0.3.0.dev.dylib libopenblas.dylib
install_name_tool -id $TRAVIS_BUILD_DIR/openblas/usr/local/lib/libopenblasp-r0.3.7.dev.dylib libopenblas.dylib
install_name_tool -change /usr/local/gfortran/lib/libgfortran.3.dylib `$FC -v 2>&1 | perl -nle 'print $1 if m{--libdir=([^\s]+)}'`/libgfortran.3.dylib libopenblas.dylib
install_name_tool -change /usr/local/gfortran/lib/libquadmath.0.dylib `$FC -v 2>&1 | perl -nle 'print $1 if m{--libdir=([^\s]+)}'`/libquadmath.0.dylib libopenblas.dylib
install_name_tool -change /usr/local/gfortran/lib/libgcc_s.1.dylib `$FC -v 2>&1 | perl -nle 'print $1 if m{--libdir=([^\s]+)}'`/libgcc_s.1.dylib libopenblas.dylib
@@ -144,24 +145,11 @@ before_install:
- travis_retry pip install --upgrade pip setuptools wheel
- travis_retry pip install cython
- if [ -n "$NUMPYSPEC" ]; then travis_retry pip install $NUMPYSPEC; fi
- travis_retry pip install --upgrade pytest pytest-xdist pytest-faulthandler mpmath argparse Pillow codecov
- travis_retry pip install --upgrade pytest pytest-xdist mpmath argparse Pillow codecov
- travis_retry pip install gmpy2 # speeds up mpmath (scipy.special tests)
- |
if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
# optional sparse.linalg dependency (test linux only, no suitesparse installed on osx)
if [ -z "$NUMPYSPEC" ]; then
# numpy must be installed to build scikit-umfpack
travis_retry pip install numpy
fi
travis_retry pip install scikit-umfpack
if [ -z "$NUMPYSPEC" ]; then
# cleanup after ourselves
travis_retry pip uninstall -y numpy
fi
fi
- |
if [ "${TESTMODE}" == "full" ]; then
travis_retry pip install pytest-cov coverage matplotlib
travis_retry pip install pytest-cov coverage matplotlib scikit-umfpack
fi
- |
if [ "${REFGUIDE_CHECK}" == "1" ]; then


@@ -12,8 +12,8 @@ environment:
global:
MINGW_32: C:\mingw-w64\i686-6.3.0-posix-dwarf-rt_v5-rev1\mingw32\bin
MINGW_64: C:\mingw-w64\x86_64-6.3.0-posix-seh-rt_v5-rev1\mingw64\bin
OPENBLAS_32: https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a43.ssl.cf2.rackcdn.com/openblas-5f998ef_gcc7_1_0_win32.zip
OPENBLAS_64: https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a43.ssl.cf2.rackcdn.com/openblas-5f998ef_gcc7_1_0_win64.zip
OPENBLAS_32: https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a43.ssl.cf2.rackcdn.com/openblas-v0.3.5-274-g6a8b4269-win32-gcc_7_1_0.zip
OPENBLAS_64: https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a43.ssl.cf2.rackcdn.com/openblas-v0.3.5-274-g6a8b4269-win_amd64-gcc_7_1_0.zip
NUMPY_HEAD: https://github.com/numpy/numpy.git
NUMPY_BRANCH: master
APPVEYOR_SAVE_CACHE_ON_ERROR: true
@@ -137,17 +137,6 @@ install:
# Install the scipy test dependencies.
- '%CMD_IN_ENV% pip install -U --timeout 5 --retries 2 -r tools/ci/appveyor/requirements.txt'
# Replace numpy distutils with a version that can build with msvc + mingw-gfortran
- ps: |
$NumpyDir = $((python -c 'import os; import numpy; print(os.path.dirname(numpy.__file__))') | Out-String).Trim()
rm -r -Force "$NumpyDir\distutils"
$tmpdir = New-TemporaryFile | %{ rm $_; mkdir $_ }
echo $env:NUMPY_HEAD
echo $env:NUMPY_BRANCH
git clone -q --depth=1 -b $env:NUMPY_BRANCH $env:NUMPY_HEAD $tmpdir
mv $tmpdir\numpy\distutils $NumpyDir
build_script:
- ps: |
$PYTHON_ARCH = $env:PYTHON_ARCH

azure-pipelines.yml (new file, 136 lines)

@@ -0,0 +1,136 @@
trigger:
# start a new build for every push
batch: False
branches:
include:
- master
- maintenance/*
jobs:
- job: Linux_Python_36_32bit_full
pool:
vmImage: 'ubuntu-16.04'
steps:
- script: |
docker pull i386/ubuntu:bionic
docker run -v $(pwd):/scipy i386/ubuntu:bionic /bin/bash -c "cd scipy && \
apt-get -y update && \
apt-get -y install python3.6-dev python3-pip pkg-config libpng-dev libjpeg8-dev libfreetype6-dev && \
pip3 install setuptools wheel 'numpy<1.17' cython==0.29 pytest pytest-timeout pytest-xdist pytest-env Pillow mpmath matplotlib && \
apt-get -y install gfortran-5 wget && \
cd .. && \
mkdir openblas && cd openblas && \
wget https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a43.ssl.cf2.rackcdn.com/openblas-v0.3.5-274-g6a8b4269-manylinux1_i686.tar.gz && \
tar zxvf openblas-v0.3.5-274-g6a8b4269-manylinux1_i686.tar.gz && \
cp -r ./usr/local/lib/* /usr/lib && \
cp ./usr/local/include/* /usr/include && \
cd ../scipy && \
F77=gfortran-5 F90=gfortran-5 python3 runtests.py --mode=full -- -n auto -rsx --junitxml=junit/test-results.xml"
displayName: 'Run 32-bit Ubuntu Docker Build / Tests'
- task: PublishTestResults@2
inputs:
testResultsFiles: '**/test-*.xml'
testRunTitle: 'Publish test results for Python 3.6-32 bit full Linux'
- job: Windows
pool:
vmImage: 'VS2017-Win2016'
variables:
OPENBLAS_32: https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a43.ssl.cf2.rackcdn.com/openblas-v0.3.5-274-g6a8b4269-win32-gcc_7_1_0.zip
OPENBLAS_64: https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a43.ssl.cf2.rackcdn.com/openblas-v0.3.5-274-g6a8b4269-win_amd64-gcc_7_1_0.zip
strategy:
maxParallel: 4
matrix:
Python27-64bit-full:
PYTHON_VERSION: '2.7'
PYTHON_ARCH: 'x64'
TEST_MODE: full
OPENBLAS: $(OPENBLAS_64)
BITS: 64
Python36-32bit-full:
PYTHON_VERSION: '3.6'
PYTHON_ARCH: 'x86'
TEST_MODE: full
OPENBLAS: $(OPENBLAS_32)
BITS: 32
Python35-64bit-full:
PYTHON_VERSION: '3.5'
PYTHON_ARCH: 'x64'
TEST_MODE: full
OPENBLAS: $(OPENBLAS_64)
BITS: 64
Python36-64bit-full:
PYTHON_VERSION: '3.6'
PYTHON_ARCH: 'x64'
TEST_MODE: full
OPENBLAS: $(OPENBLAS_64)
BITS: 64
Python37-64bit-full:
PYTHON_VERSION: '3.7'
PYTHON_ARCH: 'x64'
TEST_MODE: full
OPENBLAS: $(OPENBLAS_64)
BITS: 64
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: $(PYTHON_VERSION)
addToPath: true
architecture: $(PYTHON_ARCH)
# as noted by numba project, currently need
# specific VC install for Python 2.7
- powershell: |
$wc = New-Object net.webclient
$wc.Downloadfile("https://download.microsoft.com/download/7/9/6/796EF2E4-801B-4FC4-AB28-B59FBF6D907B/VCForPython27.msi", "VCForPython27.msi")
      Start-Process "VCForPython27.msi" /qn -Wait
    displayName: 'Install VC 9.0'
    condition: eq(variables['PYTHON_VERSION'], '2.7')
  - script: python -m pip install --upgrade pip setuptools wheel
    displayName: 'Install tools'
  - powershell: |
      $wc = New-Object net.webclient;
      $wc.Downloadfile("$(OPENBLAS)", "openblas.zip")
      $tmpdir = New-TemporaryFile | %{ rm $_; mkdir $_ }
      Expand-Archive "openblas.zip" $tmpdir
      $pyversion = python -c "from __future__ import print_function; import sys; print(sys.version.split()[0])"
      Write-Host "Python Version: $pyversion"
      $target = "C:\\hostedtoolcache\\windows\\Python\\$pyversion\\$(PYTHON_ARCH)\\lib\\openblas.a"
      Write-Host "target path: $target"
      cp $tmpdir\$(BITS)\lib\libopenblas_v0.3.5-274-g6a8b4269-gcc_7_1_0.a $target
    displayName: 'Download / Install OpenBLAS'
  - powershell: |
      choco install -y mingw --forcex86 --force --version=6.4.0
    displayName: 'Install 32-bit mingw for 32-bit builds'
    condition: eq(variables['BITS'], 32)
  - powershell: |
      choco install -y mingw --force --version=6.4.0
    displayName: 'Install 64-bit mingw for 64-bit builds'
    condition: eq(variables['BITS'], 64)
  - bash: python -m pip install 'numpy<1.17' cython==0.28.5 pytest pytest-timeout pytest-xdist pytest-env Pillow mpmath matplotlib
    displayName: 'Install dependencies'
  - powershell: |
      If ($(BITS) -eq 32) {
          # 32-bit build requires careful adjustments
          # until Microsoft has a switch we can use
          # directly for i686 mingw
          $env:NPY_DISTUTILS_APPEND_FLAGS = 1
          $env:CFLAGS = "-m32"
          $env:LDFLAGS = "-m32"
          refreshenv
      }
      $env:PATH = "C:\\ProgramData\\chocolatey\\lib\\mingw\\tools\\install\\mingw$(BITS)\\bin;" + $env:PATH
      mkdir dist
      pip wheel --no-build-isolation -v -v -v --wheel-dir=dist .
      ls dist -r | Foreach-Object {
          pip install $_.FullName
      }
    displayName: 'Build SciPy'
  - powershell: |
      $env:PATH = "C:\\ProgramData\\chocolatey\\lib\\mingw\\tools\\install\\mingw$(BITS)\\bin;" + $env:PATH
      python runtests.py -n --mode=$(TEST_MODE) -- -n auto -rsx --junitxml=junit/test-results.xml
    displayName: 'Run SciPy Test Suite'
  - task: PublishTestResults@2
    inputs:
      testResultsFiles: '**/test-*.xml'
      testRunTitle: 'Publish test results for Python $(python.version)'

View File

@ -2,8 +2,6 @@
SciPy 1.2.0 Release Notes
==========================
.. note:: Scipy 1.2.0 is not released yet!
.. contents::
SciPy 1.2.0 is the culmination of 6 months of hard work. It contains
@ -19,12 +17,16 @@ Our development attention will now shift to bug-fix releases on the
This release requires Python 2.7 or 3.4+ and NumPy 1.8.2 or greater.
.. note:: This will be the last SciPy release to support Python 2.7.
Consequently, the 1.2.x series will be a long term support (LTS)
release; we will backport bug fixes until 1 Jan 2020.
For running on PyPy, PyPy3 6.0+ and NumPy 1.15.0 are required.
Highlights of this release
--------------------------
- 1-D root finding improvements with a new solver, ``toms748``, and a new
unified interface, ``root_scalar``
- New ``dual_annealing`` optimization method that combines stochastic and
local deterministic searching
@ -45,7 +47,7 @@ Proper spline coefficient calculations have been added for the ``mirror``,
`scipy.fftpack` improvements
--------------------------------
DCT-IV, DST-IV, DCT-I, and DST-I orthonormalization are now supported in
`scipy.fftpack`.
`scipy.interpolate` improvements
@ -67,11 +69,11 @@ The function ``softmax`` was added to `scipy.special`.
`scipy.optimize` improvements
-----------------------------
The one-dimensional nonlinear solvers have been given a unified interface
`scipy.optimize.root_scalar`, similar to the `scipy.optimize.root` interface
for multi-dimensional solvers. ``scipy.optimize.root_scalar(f, bracket=[a, b],
method="brenth")`` is equivalent to ``scipy.optimize.brenth(f, a, b)``. If no
``method`` is specified, an appropriate one will be selected based upon the
bracket and the number of derivatives available.
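The equivalence between the unified interface and the function-specific solvers can be checked directly; a minimal sketch, assuming SciPy >= 1.2 and an illustrative cubic with a single bracketed root:

```python
import numpy as np
from scipy.optimize import root_scalar, brenth

def f(x):
    return x**3 - 1.0  # single real root at x = 1

# unified interface vs. the legacy function-specific solver
sol = root_scalar(f, bracket=[0, 2], method="brenth")
legacy = brenth(f, 0, 2)
assert np.isclose(sol.root, legacy)
assert np.isclose(sol.root, 1.0)
```

Omitting ``method`` here would also work: with a bracket and no derivatives, a bracketing solver is selected automatically.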
The so-called Algorithm 748 of Alefeld, Potra and Shi for root-finding within
@ -81,27 +83,27 @@ of approximately 1.65 (for sufficiently well-behaved functions.)
``differential_evolution`` now has the ``updating`` and ``workers`` keywords.
The first chooses between continuous updating of the best solution vector (the
default), or once per generation. Continuous updating can lead to faster
convergence. The ``workers`` keyword accepts an ``int`` or map-like callable,
and parallelises the solver (having the side effect of updating once per
generation). Supplying an ``int`` evaluates the trial solutions in N parallel
parts. Supplying a map-like callable allows other parallelisation approaches
(such as ``mpi4py``, or ``joblib``) to be used.
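A minimal sketch of the map-like ``workers`` interface, using the builtin ``map`` as a serial stand-in for a parallel executor such as ``multiprocessing.Pool.map`` (assumes SciPy >= 1.2; ``rosen`` is SciPy's Rosenbrock test function):

```python
import numpy as np
from scipy.optimize import differential_evolution, rosen

# any map-like callable is accepted for `workers`; with workers supplied,
# the best-solution update happens once per generation ('deferred')
result = differential_evolution(rosen, bounds=[(0, 2), (0, 2)],
                                updating='deferred', workers=map,
                                seed=1)
# the Rosenbrock global minimum is at [1, 1]
assert np.allclose(result.x, [1.0, 1.0], atol=1e-3)
```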
``dual_annealing`` (and ``shgo`` below) is a powerful new general purpose
global optimization (GO) algorithm. ``dual_annealing`` uses two annealing
processes to accelerate the convergence towards the global minimum of an
objective mathematical function. The first annealing process controls the
stochastic Markov chain searching and the second annealing process controls the
deterministic minimization. So, dual annealing is a hybrid method that takes
advantage of stochastic and local deterministic searching in an efficient way.
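A minimal sketch of ``dual_annealing`` on the Rastrigin function, a standard multimodal test problem (the objective and bounds here are illustrative, not from the notes):

```python
import numpy as np
from scipy.optimize import dual_annealing

def rastrigin(x):
    # classic multimodal test function; global minimum 0 at the origin
    return np.sum(x * x - 10 * np.cos(2 * np.pi * x)) + 10 * x.size

result = dual_annealing(rastrigin, bounds=[(-5.12, 5.12)] * 2, seed=1)
assert result.fun < 1e-4          # found the global basin,
assert np.allclose(result.x, 0.0, atol=1e-3)  # not a local minimum
```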
``shgo`` (simplicial homology global optimization) is a similar algorithm
appropriate for solving black box and derivative free optimization (DFO)
problems. The algorithm generally converges to the global solution in finite
time. The convergence holds for non-linear inequality and
equality constraints. In addition to returning a global minimum, the
algorithm also returns any other global and local minima found after every
iteration. This makes it useful for exploring the solutions in a domain.
`scipy.optimize.newton` can now accept a scalar or an array
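A minimal sketch of the vectorized behavior, assuming SciPy >= 1.2 (the quadratics here are illustrative): with an array ``x0``, one root-finding iteration runs per element.

```python
import numpy as np
from scipy.optimize import newton

# one secant solve per element: roots of x**2 - c for several values of c
c = np.arange(1.0, 5.0)
roots = newton(lambda x, c: x**2 - c, np.full_like(c, 1.5), args=(c,))
assert np.allclose(roots, np.sqrt(c))
```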
@ -113,18 +115,18 @@ be used on multiple threads.
---------------------------
Digital filter design functions now include a parameter to specify the sampling
rate. Previously, digital filters could only be specified using normalized
frequency, but different functions used different scales (e.g. 0 to 1 for
``butter`` vs 0 to π for ``freqz``), leading to errors and confusion. With
the ``fs`` parameter, ordinary frequencies can now be entered directly into
functions, with the normalization handled internally.
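A minimal sketch of the ``fs`` parameter, assuming SciPy >= 1.2 (the 100 Hz cutoff and 1000 Hz sampling rate are illustrative):

```python
import numpy as np
from scipy.signal import butter

# design a 4th-order Butterworth lowpass with a 100 Hz cutoff,
# specifying the 1000 Hz sampling rate directly via `fs` ...
b_fs, a_fs = butter(4, 100, fs=1000)
# ... instead of normalizing the cutoff by the Nyquist rate by hand
b_norm, a_norm = butter(4, 100 / (1000 / 2))
assert np.allclose(b_fs, b_norm) and np.allclose(a_fs, a_norm)
```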
``find_peaks`` and related functions no longer raise an exception if the
properties of a peak have unexpected values (e.g. a prominence of 0). A
``PeakPropertyWarning`` is given instead.
The new keyword argument ``plateau_size`` was added to ``find_peaks``.
``plateau_size`` may be used to select peaks based on the length of the
flat top of a peak.
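A minimal sketch of ``plateau_size`` on an illustrative toy signal; passing the condition also makes the measured plateau properties available in the returned dictionary:

```python
import numpy as np
from scipy.signal import find_peaks

x = np.array([0, 1, 1, 1, 0, 2, 0])
# keep peaks whose flat top is at least one sample wide (here: both),
# and report the measured plateau properties
peaks, props = find_peaks(x, plateau_size=1)
assert peaks.tolist() == [2, 5]                    # middle sample of each plateau
assert props["plateau_sizes"].tolist() == [3, 1]
```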
``welch()`` and ``csd()`` methods in `scipy.signal` now support calculation
@ -140,12 +142,12 @@ conversions is now improved.
The issue where SuperLU or UMFPACK solvers crashed on matrices with
non-canonical format in `scipy.sparse.linalg` was fixed. The solver wrapper
canonicalizes the matrix if necessary before calling the SuperLU or UMFPACK
solver.
The ``largest`` option of `scipy.sparse.linalg.lobpcg()` was fixed to have
a correct (and expected) behavior. The order of the eigenvalues was made
consistent with the ARPACK solver (``eigs()``), i.e. ascending for the
smallest eigenvalues, and descending for the largest eigenvalues.
The `scipy.sparse.random` function is now faster and also supports integer and
@ -155,20 +157,20 @@ complex values by passing the appropriate value to the ``dtype`` argument.
----------------------------
The function `scipy.spatial.distance.jaccard` was modified to return 0 instead
of ``np.nan`` when two all-zero vectors are compared.
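A minimal check of the new behavior (the vectors are illustrative):

```python
import numpy as np
from scipy.spatial.distance import jaccard

u = np.zeros(4, dtype=bool)
v = np.zeros(4, dtype=bool)
# previously np.nan; two identical all-zero vectors now have distance 0
assert jaccard(u, v) == 0.0
```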
Support for the Jensen Shannon distance, the square-root of the divergence, has
been added under `scipy.spatial.distance.jensenshannon`
An optional keyword was added to the function
`scipy.spatial.cKDTree.query_ball_point()` to sort or not sort the returned
indices. Not sorting the indices can speed up calls.
A new category of quaternion-based transformations are available in
`scipy.spatial.transform`, including spherical linear interpolation of
rotations (``Slerp``), conversions to and from quaternions, Euler angles,
and general rotation and inversion capabilities
(`spatial.transform.Rotation`), and uniform random sampling of 3D
rotations (`spatial.transform.Rotation.random`).
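A minimal sketch of the new quaternion-based API (the particular rotation is illustrative):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# a 90 degree rotation about z, round-tripped through quaternions
r = Rotation.from_euler('z', 90, degrees=True)
q = r.as_quat()            # scalar-last [x, y, z, w] convention
r2 = Rotation.from_quat(q)
v = r2.apply([1, 0, 0])
assert np.allclose(v, [0, 1, 0])
```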
`scipy.stats` improvements
@ -182,10 +184,10 @@ values.
Added a general method to sample random variates based on the density only, in
the new function ``rvs_ratio_uniforms``.
The Yule-Simon distribution (``yulesimon``) was added -- this is a new
discrete probability distribution.
``stats`` and ``mstats`` now have access to a new regression method,
``siegelslopes``, a robust linear regression algorithm.
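A minimal sketch of ``siegelslopes`` on illustrative data with one gross outlier:

```python
import numpy as np
from scipy.stats import siegelslopes

x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[8] = 100.0  # a single gross outlier
slope, intercept = siegelslopes(y, x)
# the repeated-median fit recovers the underlying line despite the outlier
assert np.isclose(slope, 2.0) and np.isclose(intercept, 1.0)
```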
`scipy.stats.gaussian_kde` now has the ability to deal with weighted samples,
@ -200,8 +202,8 @@ and ``mstats``
`scipy.linalg` improvements
--------------------------
`scipy.linalg.lapack` now exposes the LAPACK routines using the Rectangular
Full Packed storage (RFP) for upper triangular, lower triangular, symmetric,
or Hermitian matrices; the upper trapezoidal fat matrix RZ decomposition
routines are now available as well.
@ -221,34 +223,34 @@ The function ``scipy.linalg.subspace_angles(A, B)`` now gives correct
results for all angles. Before this, the function only returned
correct values for those angles which were greater than pi/4.
Support for the Bento build system has been removed. Bento has not been
maintained for several years, and did not have good Python 3 or wheel support,
hence it was time to remove it.
The required signature of the `scipy.optimize.linprog` ``method=simplex``
callback function has changed. Before iteration begins, the simplex solver
first converts the problem into a standard form that does not, in general,
have the same variables or constraints
as the problem defined by the user. Previously, the simplex solver would pass a
user-specified callback function several separate arguments, such as the
current solution vector ``xk``, corresponding to this standard form problem.
Unfortunately, the relationship between the standard form problem and the
user-defined problem was not documented, limiting the utility of the
information passed to the callback function.
In addition to numerous bug fix changes, the simplex solver now passes a
user-specified callback function a single ``OptimizeResult`` object containing
information that corresponds directly to the user-defined problem. In future
releases, this ``OptimizeResult`` object may be expanded to include additional
information, such as variables corresponding to the standard-form problem and
information concerning the relationship between the standard-form and
user-defined problems.
The implementation of `scipy.sparse.random` has changed, and this affects the
numerical values returned for both ``sparse.random`` and ``sparse.rand`` for
some matrix shapes and a given seed.
`scipy.optimize.newton` will no longer use Halley's method in cases where it
negatively impacts convergence.
Other changes
@ -294,6 +296,7 @@ Authors
* emmi474 +
* Stefan Endres +
* Thomas Etherington +
* Piotr Figiel
* Alex Fikl +
* fo40225 +
* Joseph Fox-Rabinovitz
@ -400,6 +403,9 @@ This list of names is automatically generated, and may not be fully complete.
Issues closed for 1.2.0
-----------------------
* `#9520 <https://github.com/scipy/scipy/issues/9520>`__: signal.correlate with method='fft' doesn't benefit from long...
* `#9547 <https://github.com/scipy/scipy/issues/9547>`__: signature of dual_annealing doesn't match other optimizers
* `#9540 <https://github.com/scipy/scipy/issues/9540>`__: SciPy v1.2.0rc1 cannot be imported on Python 2.7.15
* `#1240 <https://github.com/scipy/scipy/issues/1240>`__: Allowing multithreaded use of minpack through scipy.optimize...
* `#1432 <https://github.com/scipy/scipy/issues/1432>`__: scipy.stats.mode extremely slow (Trac #905)
* `#3372 <https://github.com/scipy/scipy/issues/3372>`__: Please add Sphinx search field to online scipy html docs
@ -461,6 +467,14 @@ Issues closed for 1.2.0
Pull requests for 1.2.0
-----------------------
* `#9526 <https://github.com/scipy/scipy/pull/9526>`__: TST: relax precision requirements in signal.correlate tests
* `#9507 <https://github.com/scipy/scipy/pull/9507>`__: CI: MAINT: Skip a ckdtree test on pypy
* `#9512 <https://github.com/scipy/scipy/pull/9512>`__: TST: test_random_sampling 32-bit handling
* `#9494 <https://github.com/scipy/scipy/pull/9494>`__: TST: test_kolmogorov xfail 32-bit
* `#9486 <https://github.com/scipy/scipy/pull/9486>`__: BUG: fix sparse random int handling
* `#9550 <https://github.com/scipy/scipy/pull/9550>`__: BUG: scipy/_lib/_numpy_compat: get_randint
* `#9549 <https://github.com/scipy/scipy/pull/9549>`__: MAINT: make dual_annealing signature match other optimizers
* `#9541 <https://github.com/scipy/scipy/pull/9541>`__: BUG: fix SyntaxError due to non-ascii character on Python 2.7
* `#7352 <https://github.com/scipy/scipy/pull/7352>`__: ENH: add Brunner Munzel test to scipy.stats.
* `#7373 <https://github.com/scipy/scipy/pull/7373>`__: BUG: Jaccard distance for all-zero arrays would return np.nan
* `#7374 <https://github.com/scipy/scipy/pull/7374>`__: ENH: Add PDF, CDF and parameter estimation for Stable Distributions

View File

@ -0,0 +1,44 @@
==========================
SciPy 1.2.1 Release Notes
==========================
.. contents::
SciPy 1.2.1 is a bug-fix release with no new features compared to 1.2.0.
Most importantly, it solves the issue that 1.2.0 cannot be installed
from source on Python 2.7 because of non-ascii character issues.
It is also notable that SciPy 1.2.1 wheels were built with OpenBLAS
0.3.5.dev, which may alleviate some linear algebra issues observed
in SciPy 1.2.0.
Authors
=======
* Eric Larson
* Mark Mikofski
* Evgeni Burovski
* Ralf Gommers
* Eric Moore
* Tyler Reddy
Issues closed for 1.2.1
-----------------------
* `#9606 <https://github.com/scipy/scipy/issues/9606>`__: SyntaxError: Non-ASCII character '\xe2' in file scipy/stats/_continuous_distns.py on line 3346, but no encoding declared
* `#9608 <https://github.com/scipy/scipy/issues/9608>`__: Version 1.2.0 introduces `too many indices for array` error in...
* `#9709 <https://github.com/scipy/scipy/issues/9709>`__: scipy.stats.gaussian_kde normalizes the weights keyword argument...
* `#9733 <https://github.com/scipy/scipy/issues/9733>`__: scipy.linalg.qr_update gives NaN result
* `#9724 <https://github.com/scipy/scipy/issues/9724>`__: CI: Is scipy.scipy Windows Python36-32bit-full working?
Pull requests for 1.2.1
-----------------------
* `#9612 <https://github.com/scipy/scipy/pull/9612>`__: BUG: don't use array newton unless size is greater than 1
* `#9615 <https://github.com/scipy/scipy/pull/9615>`__: ENH: Add test for encoding
* `#9720 <https://github.com/scipy/scipy/pull/9720>`__: BUG: stats: weighted KDE does not modify the weights array
* `#9739 <https://github.com/scipy/scipy/pull/9739>`__: BUG: qr_updates fails if u is exactly in span Q
* `#9725 <https://github.com/scipy/scipy/pull/9725>`__: TST: pin mingw for Azure Win CI
* `#9736 <https://github.com/scipy/scipy/pull/9736>`__: TST: adjust to vmImage dispatch in Azure
* `#9681 <https://github.com/scipy/scipy/pull/9681>`__: BUG: Fix failing stats tests (partial backport)
* `#9662 <https://github.com/scipy/scipy/pull/9662>`__: TST: interpolate: avoid pytest deprecations

View File

@ -0,0 +1,39 @@
==========================
SciPy 1.2.2 Release Notes
==========================
.. contents::
SciPy 1.2.2 is a bug-fix release with no new features compared to 1.2.1.
Importantly, the SciPy 1.2.2 wheels are built with OpenBLAS 0.3.7.dev to
alleviate issues with SkylakeX AVX512 kernels.
Authors
=======
* CJ Carey
* Tyler Dawson +
* Ralf Gommers
* Kai Striega
* Andrew Nelson
* Tyler Reddy
* Kevin Sheppard +
A total of 7 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully complete.
Issues closed for 1.2.2
-----------------------
* `#9611 <https://github.com/scipy/scipy/issues/9611>`__: Overflow error with new way of p-value calculation in kendall tau correlation for perfectly monotonic vectors
* `#9964 <https://github.com/scipy/scipy/issues/9964>`__: optimize.newton : overwrites x0 argument when it is a numpy array
* `#9784 <https://github.com/scipy/scipy/issues/9784>`__: TST: Minimum NumPy version is not being CI tested
* `#10132 <https://github.com/scipy/scipy/issues/10132>`__: Docs: Description of nnz attribute of sparse.csc_matrix misleading
Pull requests for 1.2.2
-----------------------
* `#10056 <https://github.com/scipy/scipy/pull/10056>`__: BUG: Ensure factorial is not too large in kendaltau
* `#9991 <https://github.com/scipy/scipy/pull/9991>`__: BUG: Avoid inplace modification of input array in newton
* `#9788 <https://github.com/scipy/scipy/pull/9788>`__: TST, BUG: f2py-related issues with NumPy < 1.14.0
* `#9749 <https://github.com/scipy/scipy/pull/9749>`__: BUG: MapWrapper.__exit__ should terminate
* `#10141 <https://github.com/scipy/scipy/pull/10141>`__: Update description for nnz on csc.py

View File

@ -0,0 +1,63 @@
==========================
SciPy 1.2.3 Release Notes
==========================
.. contents::
SciPy 1.2.3 is a bug-fix release with no new features compared to 1.2.2. It is
part of the long-term support (LTS) release series for Python 2.7.
Authors
=======
* Geordie McBain
* Matt Haberland
* David Hagen
* Tyler Reddy
* Pauli Virtanen
* Eric Larson
* Yu Feng
* ananyashreyjain
* Nikolay Mayorov
* Evgeni Burovski
* Warren Weckesser
Issues closed for 1.2.3
-----------------------
* `#4915 <https://github.com/scipy/scipy/issues/4915>`__: Bug in unique_roots in scipy.signal.signaltools.py for roots with same magnitude
* `#5546 <https://github.com/scipy/scipy/issues/5546>`__: ValueError raised if scipy.sparse.linalg.expm recieves array larger than 200x200
* `#7117 <https://github.com/scipy/scipy/issues/7117>`__: Warn users when using float32 input data to curve_fit and friends
* `#7906 <https://github.com/scipy/scipy/issues/7906>`__: Wrong result from scipy.interpolate.UnivariateSpline.integral for out-of-bounds
* `#9581 <https://github.com/scipy/scipy/issues/9581>`__: Least-squares minimization fails silently when x and y data are different types
* `#9901 <https://github.com/scipy/scipy/issues/9901>`__: lsoda fails to detect stiff problem when called from solve_ivp
* `#9988 <https://github.com/scipy/scipy/issues/9988>`__: doc build broken with Sphinx 2.0.0
* `#10303 <https://github.com/scipy/scipy/issues/10303>`__: BUG: optimize: `linprog` failing TestLinprogSimplexBland::test_unbounded_below_no_presolve_corrected
* `#10376 <https://github.com/scipy/scipy/issues/10376>`__: TST: Travis CI fails (with pytest 5.0 ?)
* `#10384 <https://github.com/scipy/scipy/issues/10384>`__: CircleCI doc build failing on new warnings
* `#10535 <https://github.com/scipy/scipy/issues/10535>`__: TST: master branch CI failures
* `#11121 <https://github.com/scipy/scipy/issues/11121>`__: Calls to `scipy.interpolate.splprep` increase RAM usage.
* `#11198 <https://github.com/scipy/scipy/issues/11198>`__: BUG: sparse eigs (arpack) shift-invert drops the smallest eigenvalue for some k
* `#11266 <https://github.com/scipy/scipy/issues/11266>`__: Sparse matrix constructor data type detection changes on Numpy 1.18.0
Pull requests for 1.2.3
-----------------------
* `#9992 <https://github.com/scipy/scipy/pull/9992>`__: MAINT: Undo Sphinx pin
* `#10071 <https://github.com/scipy/scipy/pull/10071>`__: DOC: reconstruct SuperLU permutation matrices avoiding SparseEfficiencyWarning
* `#10076 <https://github.com/scipy/scipy/pull/10076>`__: BUG: optimize: fix curve_fit for mixed float32/float64 input
* `#10138 <https://github.com/scipy/scipy/pull/10138>`__: BUG: special: Invalid arguments to ellip_harm can crash Python.
* `#10306 <https://github.com/scipy/scipy/pull/10306>`__: BUG: optimize: Fix for 10303
* `#10309 <https://github.com/scipy/scipy/pull/10309>`__: BUG: Pass jac=None directly to lsoda
* `#10377 <https://github.com/scipy/scipy/pull/10377>`__: TST, MAINT: adjustments for pytest 5.0
* `#10379 <https://github.com/scipy/scipy/pull/10379>`__: BUG: sparse: set writeability to be forward-compatible with numpy>=1.17
* `#10426 <https://github.com/scipy/scipy/pull/10426>`__: MAINT: Fix doc build bugs
* `#10540 <https://github.com/scipy/scipy/pull/10540>`__: MAINT: Fix Travis and Circle
* `#10633 <https://github.com/scipy/scipy/pull/10633>`__: BUG: interpolate: integral(a, b) should be zero when both limits are outside of the interpolation range
* `#10833 <https://github.com/scipy/scipy/pull/10833>`__: BUG: Fix subspace_angles for complex values
* `#10882 <https://github.com/scipy/scipy/pull/10882>`__: BUG: sparse/arpack: fix incorrect code for complex hermitian M
* `#10906 <https://github.com/scipy/scipy/pull/10906>`__: BUG: sparse/linalg: fix expm for np.matrix inputs
* `#10961 <https://github.com/scipy/scipy/pull/10961>`__: BUG: Fix signal.unique_roots
* `#11126 <https://github.com/scipy/scipy/pull/11126>`__: BUG: interpolate/fitpack: fix memory leak in splprep
* `#11199 <https://github.com/scipy/scipy/pull/11199>`__: BUG: sparse.linalg: mistake in unsymm. real shift-invert ARPACK eigenvalue selection
* `#11269 <https://github.com/scipy/scipy/pull/11269>`__: Fix: Sparse matrix constructor data type detection changes on Numpy 1.18.0

View File

@ -0,0 +1,8 @@
:orphan:
{{ fullname }}
{{ underline }}
.. currentmodule:: {{ module }}
.. autoproperty:: {{ objname }}

View File

@ -0,0 +1 @@
.. include:: ../release/1.2.1-notes.rst

View File

@ -0,0 +1 @@
.. include:: ../release/1.2.2-notes.rst

View File

@ -0,0 +1 @@
.. include:: ../release/1.2.3-notes.rst

View File

@ -5,6 +5,9 @@ Release Notes
.. toctree::
:maxdepth: 1
release.1.2.3
release.1.2.2
release.1.2.1
release.1.2.0
release.1.1.0
release.1.0.1

View File

@ -127,7 +127,7 @@ truncated for illustrative purposes).
>>> from scipy.signal import blackman
>>> w = blackman(N)
>>> ywf = fft(y*w)
>>> xf = np.linspace(0.0, 1.0/(2.0*T), N/2)
>>> xf = np.linspace(0.0, 1.0/(2.0*T), N//2)
>>> import matplotlib.pyplot as plt
>>> plt.semilogy(xf[1:N//2], 2.0/N * np.abs(yf[1:N//2]), '-b')
>>> plt.semilogy(xf[1:N//2], 2.0/N * np.abs(ywf[1:N//2]), '-r')

View File

@ -323,7 +323,7 @@ Matrix Market files
Wav sound files (:mod:`scipy.io.wavfile`)
-----------------------------------------
.. module:: scipy.io.wavfile
.. currentmodule:: scipy.io.wavfile
.. autosummary::
@ -333,7 +333,7 @@ Wav sound files (:mod:`scipy.io.wavfile`)
Arff files (:mod:`scipy.io.arff`)
---------------------------------
.. automodule:: scipy.io.arff
.. currentmodule:: scipy.io.arff
.. autosummary::
@ -342,7 +342,7 @@ Arff files (:mod:`scipy.io.arff`)
Netcdf (:mod:`scipy.io.netcdf`)
-------------------------------
.. module:: scipy.io.netcdf
.. currentmodule:: scipy.io
.. autosummary::

@ -1 +1 @@
Subproject commit f33c5d95fee205ff0aba1d402a3fc8558bf24829
Subproject commit 8efdc7b00916dbd4d1e9b65e94e4a179dae67afc

View File

@ -74,7 +74,7 @@ try:
from paver.tasks import VERSION as _PVER
if not _PVER >= '1.0':
raise RuntimeError("paver version >= 1.0 required (was %s)" % _PVER)
except ImportError, e:
except (ImportError, e):
raise RuntimeError("paver version >= 1.0 required")
import paver
@ -113,11 +113,11 @@ except AttributeError:
#-----------------------------------
# Source of the release notes
RELEASE = 'doc/release/1.2.0-notes.rst'
RELEASE = 'doc/release/1.2.3-notes.rst'
# Start/end of the log (from git)
LOG_START = 'v1.1.0'
LOG_END = 'master'
LOG_START = 'v1.2.2'
LOG_END = 'maintenance/1.2.x'
#-------------------------------------------------------
@ -319,7 +319,7 @@ def sdist():
sh('git submodule update')
# Fix file permissions
sh('chmod a+rX -R *')
sh('chmod -R a+rX *')
# To be sure to bypass paver when building sdist... paver + scipy.distutils
# do not play well together.
@ -658,7 +658,7 @@ def compute_md5(idirs):
released = paver.path.path(idirs).listdir()
checksums = []
for f in sorted(released):
m = md5(open(f, 'r').read())
m = md5(open(f, 'rb').read())
checksums.append('%s %s' % (m.hexdigest(), os.path.basename(f)))
return checksums
@ -669,7 +669,7 @@ def compute_sha256(idirs):
released = paver.path.path(idirs).listdir()
checksums = []
for f in sorted(released):
m = sha256(open(f, 'r').read())
m = sha256(open(f, 'rb').read())
checksums.append('%s %s' % (m.hexdigest(), os.path.basename(f)))
return checksums
@ -716,7 +716,7 @@ def write_log_task(filename='Changelog'):
['git', 'log', '%s..%s' % (LOG_START, LOG_END)],
stdout=subprocess.PIPE)
out = st.communicate()[0]
out = st.communicate()[0].decode()
a = open(filename, 'w')
a.writelines(out)
a.close()

View File

@ -10,5 +10,8 @@ filterwarnings =
ignore:the matrix subclass is not the recommended way*:PendingDeprecationWarning
ignore:Using or importing the ABCs from 'collections'*:DeprecationWarning
ignore:can't resolve package from __spec__ or __package__, falling back on __name__ and __path__:ImportWarning
once:the imp module is deprecated in favour of importlib.*:DeprecationWarning
once:the imp module is deprecated in favour of importlib.*:PendingDeprecationWarning
env =
PYTHONHASHSEED=0

View File

@ -40,9 +40,9 @@ class LowLevelCallable(tuple):
Attributes
----------
function
Callback function given
Callback function given.
user_data
User data given
User data given.
signature
Signature of the function.

View File

@ -94,30 +94,10 @@ else:
# In NumPy versions previous to 1.11.0 the randint function and the randint
# method of RandomState only work with int32 values.
def get_randint(random_state):
def randint_patched(*args, **kwargs):
try:
low = args[0]
except IndexError:
low = None
high = kwargs.pop('high', None)
dtype = kwargs.pop('dtype', None)
if high is None:
high = low
low = 0
low_min = np.iinfo(np.int32).min
if low is None:
low = low_min
else:
low = max(low, low_min)
high_max = np.iinfo(np.int32).max
if high is None:
high = high_max
else:
high = min(high, high_max)
integers = random_state.randint(low, high=high, **kwargs)
def randint_patched(low, high, size, dtype=np.int32):
low = max(low, np.iinfo(dtype).min, np.iinfo(np.int32).min)
high = min(high, np.iinfo(dtype).max, np.iinfo(np.int32).max)
integers = random_state.randint(low, high=high, size=size)
return integers.astype(dtype, copy=False)
return randint_patched
@ -799,6 +779,3 @@ else:
c = dot(X, X_T.conj())
c *= 1. / np.float64(fact)
return c.squeeze()

View File

@ -11,6 +11,19 @@ import inspect
import numpy as np
def _broadcast_arrays(a, b):
"""
Same as np.broadcast_arrays(a, b) but old writeability rules.
Numpy >= 1.17.0 transitions broadcast_arrays to return
read-only arrays. Set writeability explicitly to avoid warnings.
Retain the old writeability rules, as our Cython code assumes
the old behavior.
"""
# backport based on gh-10379
x, y = np.broadcast_arrays(a, b)
x.flags.writeable = a.flags.writeable
y.flags.writeable = b.flags.writeable
return x, y
def _valarray(shape, value=np.nan, typecode=None):
"""Return an array of all value.
@ -387,6 +400,7 @@ class MapWrapper(object):
def __del__(self):
self.close()
self.terminate()
def terminate(self):
if self._own_pool:
@ -402,11 +416,8 @@ class MapWrapper(object):
def __exit__(self, exc_type, exc_value, traceback):
if self._own_pool:
if exc_type is None:
self.pool.close()
self.pool.join()
else:
self.pool.terminate()
self.pool.close()
self.pool.terminate()
def __call__(self, func, iterable):
# only accept one iterable because that's all Pool.map accepts

View File

@ -125,10 +125,6 @@ class LSODA(OdeSolver):
rtol, atol = validate_tol(rtol, atol, self.n)
if jac is None: # No lambda as PEP8 insists.
def jac():
return None
solver = ode(self.fun, jac)
solver.set_integrator('lsoda', rtol=rtol, atol=atol, max_step=max_step,
min_step=min_step, first_step=first_step,
@ -150,7 +146,7 @@ class LSODA(OdeSolver):
itask = integrator.call_args[2]
integrator.call_args[2] = 5
solver._y, solver.t = integrator.run(
solver.f, solver.jac, solver._y, solver.t,
solver.f, solver.jac or (lambda: None), solver._y, solver.t,
self.t_bound, solver.f_params, solver.jac_params)
integrator.call_args[2] = itask

View File

@ -2,6 +2,7 @@ from __future__ import division, print_function, absolute_import
from itertools import product
from numpy.testing import (assert_, assert_allclose,
assert_equal, assert_no_warnings)
import pytest
from pytest import raises as assert_raises
from scipy._lib._numpy_compat import suppress_warnings
import numpy as np
@ -296,6 +297,30 @@ def test_integration_const_jac():
assert_allclose(res.sol(res.t), res.y, rtol=1e-14, atol=1e-14)
@pytest.mark.slow
@pytest.mark.parametrize('method', ['Radau', 'BDF', 'LSODA'])
def test_integration_stiff(method):
rtol = 1e-6
atol = 1e-6
y0 = [1e4, 0, 0]
tspan = [0, 1e8]
def fun_robertson(t, state):
x, y, z = state
return [
-0.04 * x + 1e4 * y * z,
0.04 * x - 1e4 * y * z - 3e7 * y * y,
3e7 * y * y,
]
res = solve_ivp(fun_robertson, tspan, y0, rtol=rtol,
atol=atol, method=method)
# If the stiff mode is not activated correctly, these numbers will be much bigger
assert res.nfev < 5000
assert res.njev < 200
def test_events():
def event_rational_1(t, y):
return y[0] - y[1] ** 0.7

View File

@ -1,4 +1,5 @@
subroutine fpintb(t,n,bint,nk1,x,y)
implicit none
c subroutine fpintb calculates integrals of the normalized b-splines
c nj,k+1(x) of degree k, defined on the set of knots t(j),j=1,2,...n.
c it makes use of the formulae of gaffney for the calculation of
@ -47,6 +48,7 @@ c the integration limits are arranged in increasing order.
min = 1
30 if(a.lt.t(k1)) a = t(k1)
if(b.gt.t(nk1+1)) b = t(nk1+1)
if(a.gt.b) go to 160
c using the expression of gaffney for the indefinite integral of a
c b-spline we find that
c bint(j) = (t(j+k+1)-t(j))*(res(j,b)-res(j,a))/(k+1)

View File

@ -1,4 +1,5 @@
real*8 function splint(t,n,c,k,a,b,wrk)
implicit none
c function splint calculates the integral of a spline function s(x)
c of degree k, which is given in its normalized b-spline representation
c

View File

@ -432,6 +432,8 @@ fitpack_parcur(PyObject *dummy, PyObject *args)
}
n = no = ap_t->dimensions[0];
memcpy(t, ap_t->data, n*sizeof(double));
Py_DECREF(ap_t);
ap_t = NULL;
}
if (iopt == 1) {
memcpy(wrk, ap_wrk->data, n*sizeof(double));
@ -461,10 +463,18 @@ fitpack_parcur(PyObject *dummy, PyObject *args)
goto fail;
}
if (iopt == 0|| n > no) {
Py_XDECREF(ap_wrk);
ap_wrk = NULL;
Py_XDECREF(ap_iwrk);
ap_iwrk = NULL;
dims[0] = n;
ap_wrk = (PyArrayObject *)PyArray_SimpleNew(1, dims, NPY_DOUBLE);
if (ap_wrk == NULL) {
goto fail;
}
ap_iwrk = (PyArrayObject *)PyArray_SimpleNew(1, dims, NPY_INT);
if (ap_wrk == NULL || ap_iwrk == NULL) {
if (ap_iwrk == NULL) {
goto fail;
}
}


@ -10,6 +10,7 @@ from scipy.interpolate import (BSpline, BPoly, PPoly, make_interp_spline,
make_lsq_spline, _bspl, splev, splrep, splprep, splder, splantider,
sproot, splint, insert)
import scipy.linalg as sl
from scipy._lib._version import NumpyVersion
from scipy.interpolate._bsplines import _not_a_knot, _augknt
import scipy.interpolate._fitpack_impl as _impl
@ -614,15 +615,16 @@ class TestInterop(object):
b = BSpline(*tck)
assert_allclose(y, b(x), atol=1e-15)
@pytest.mark.xfail(NumpyVersion(np.__version__) < '1.14.0',
reason='requires NumPy >= 1.14.0')
def test_splrep_errors(self):
# test that both "old" and "new" splrep raise for an n-D ``y`` array
# with n > 1
x, y = self.xx, self.yy
y2 = np.c_[y, y]
msg = "failed in converting 3rd argument `y' of dfitpack.curfit to C/Fortran array"
with assert_raises(Exception, message=msg):
with assert_raises(ValueError):
splrep(x, y2)
with assert_raises(Exception, message=msg):
with assert_raises(ValueError):
_impl.splrep(x, y2)
# input below minimum size


@ -141,6 +141,16 @@ class TestUnivariateSpline(object):
assert_allclose(spl2(0.6) - spl2(0.2),
spl.integral(0.2, 0.6))
def test_integral_out_of_bounds(self):
# Regression test for gh-7906: .integral(a, b) is wrong if both
# a and b are out-of-bounds
x = np.linspace(0., 1., 7)
for ext in range(4):
f = UnivariateSpline(x, x, s=0, ext=ext)
for (a, b) in [(1, 1), (1, 5), (2, 5),
(0, 0), (-2, 0), (-2, -1)]:
assert_allclose(f.integral(a, b), 0, atol=1e-15)
def test_nan(self):
# bail out early if the input data contains nans
x = np.arange(10, dtype=float)
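The gh-7906 behaviour exercised above can be sketched outside the test suite; a minimal example with an interpolating spline through `y = x`, whose exact integral over `[0.2, 0.6]` is `(0.6**2 - 0.2**2) / 2 = 0.16`:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0., 1., 7)
f = UnivariateSpline(x, x, s=0)   # interpolating spline through y = x

inside = f.integral(0.2, 0.6)     # within the data range: exact value 0.16
outside = f.integral(2, 5)        # both limits beyond the data: must be 0
```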


@ -661,4 +661,3 @@ class TestCubicSpline(object):
# periodic condition, y[-1] must be equal to y[0]:
assert_raises(ValueError, CubicSpline, x, y, 0, 'periodic', True)


@ -1,5 +1,5 @@
# -*- encoding:utf-8 -*-
"""
# -*- coding: utf-8 -*-
u"""
==================================
Input and output (:mod:`scipy.io`)
==================================


@ -268,6 +268,12 @@ cdef bint blas_t_less_than(blas_t x, blas_t y) nogil:
else:
return x.real < y.real
cdef bint blas_t_less_equal(blas_t x, blas_t y) nogil:
if blas_t is float or blas_t is double:
return x <= y
else:
return x.real <= y.real
cdef int to_lwork(blas_t a, blas_t b) nogil:
cdef int ai, bi
if blas_t is float or blas_t is double:
@ -1133,7 +1139,7 @@ cdef int reorth(int m, int n, blas_t* q, int* qs, bint qisF, blas_t* u,
wpnorm = nrm2(m, u, us[0])
if blas_t_less_than(wpnorm, wnorm*inv_root2): # u lies in span(q)
if blas_t_less_equal(wpnorm, wnorm*inv_root2): # u lies in span(q)
scal(m, 0, u, us[0])
axpy(n, 1, s, 1, s+n, 1)
scal(n, unorm, s, 1)


@ -472,15 +472,15 @@ def subspace_angles(A, B):
del B
# 2. Compute SVD for cosine
QA_T_QB = dot(QA.T, QB)
sigma = svdvals(QA_T_QB)
QA_H_QB = dot(QA.T.conj(), QB)
sigma = svdvals(QA_H_QB)
# 3. Compute matrix B
if QA.shape[1] >= QB.shape[1]:
B = QB - dot(QA, QA_T_QB)
B = QB - dot(QA, QA_H_QB)
else:
B = QA - dot(QB, QA_T_QB.T)
del QA, QB, QA_T_QB
B = QA - dot(QB, QA_H_QB.T.conj())
del QA, QB, QA_H_QB
# 4. Compute SVD for sine
mask = sigma ** 2 >= 0.5
@ -490,6 +490,7 @@ def subspace_angles(A, B):
mu_arcsin = 0.
# 5. Compute the principal angles
# with reverse ordering of sigma because smallest sigma belongs to largest angle theta
# with reverse ordering of sigma because smallest sigma belongs to largest
# angle theta
theta = where(mask, mu_arcsin, arccos(clip(sigma[::-1], -1., 1.)))
return theta
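The Hermitian-transpose fix above only matters for complex inputs; a small sketch with vectors chosen so that the span of `a` lies inside the span of `b`, making the principal angle zero:

```python
import numpy as np
from scipy.linalg import subspace_angles

a = np.array([[1 + 1j], [0]])
b = np.array([[1 - 1j, 0], [0, 1]])

# (1+1j, 0) = 1j * (1-1j, 0), so span(a) is contained in span(b);
# with the plain (non-conjugated) transpose this angle came out wrong
theta = subspace_angles(a, b)
```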


@ -2736,6 +2736,14 @@ def test_subspace_angles():
expected = np.array([np.pi/2, 0, 0])
assert_allclose(subspace_angles(A, B), expected, rtol=1e-12)
# Complex
# second column in "b" does not affect result, just there so that
# b can have more cols than a, and vice-versa (both conditional code paths)
a = [[1 + 1j], [0]]
b = [[1 - 1j, 0], [0, 1]]
assert_allclose(subspace_angles(a, b), 0., atol=1e-14)
assert_allclose(subspace_angles(b, a), 0., atol=1e-14)
class TestCDF2RDF(object):


@ -1617,6 +1617,15 @@ class BaseQRupdate(BaseQRdeltas):
assert_raises(ValueError, qr_update, q0, r0, u[:,0], v[:,0])
assert_raises(ValueError, qr_update, q0, r0, u, v)
def test_u_exactly_in_span_q(self):
q = np.array([[0, 0], [0, 0], [1, 0], [0, 1]], self.dtype)
r = np.array([[1, 0], [0, 1]], self.dtype)
u = np.array([0, 0, 0, -1], self.dtype)
v = np.array([1, 2], self.dtype)
q1, r1 = qr_update(q, r, u, v)
a1 = np.dot(q, r) + np.outer(u, v.conj())
check_qr(q1, r1, a1, self.rtol, self.atol, False)
class TestQRupdate_f(BaseQRupdate):
dtype = np.dtype('f')
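The degenerate case tested above, `u` lying exactly in the span of `Q`, can be checked directly against the rank-1 update identity `A1 = Q R + u v^H`; a hedged sketch using the same matrices as the test:

```python
import numpy as np
from scipy.linalg import qr_update

q = np.array([[0., 0.], [0., 0.], [1., 0.], [0., 1.]])
r = np.eye(2)
u = np.array([0., 0., 0., -1.])   # u is -q[:, 1], i.e. exactly in span(q)
v = np.array([1., 2.])

q1, r1 = qr_update(q, r, u, v)
a1 = q.dot(r) + np.outer(u, v)    # the updated matrix q1 @ r1 must factor
```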


@ -802,10 +802,12 @@ class DifferentialEvolutionSolver(object):
def __exit__(self, *args):
# to make sure resources are closed down
self._mapwrapper.close()
self._mapwrapper.terminate()
def __del__(self):
# to make sure resources are closed down
self._mapwrapper.close()
self._mapwrapper.terminate()
def __next__(self):
"""


@ -414,11 +414,11 @@ class LocalSearchWrapper(object):
return e, x_tmp
def dual_annealing(func, x0, bounds, args=(), maxiter=1000,
def dual_annealing(func, bounds, args=(), maxiter=1000,
local_search_options={}, initial_temp=5230.,
restart_temp_ratio=2.e-5, visit=2.62, accept=-5.0,
maxfun=1e7, seed=None, no_local_search=False,
callback=None):
callback=None, x0=None):
"""
Find the global minimum of a function using Dual Annealing.
@ -429,10 +429,6 @@ def dual_annealing(func, x0, bounds, args=(), maxiter=1000,
``f(x, *args)``, where ``x`` is the argument in the form of a 1-D array
and ``args`` is a tuple of any additional fixed parameters needed to
completely specify the function.
x0 : ndarray, shape(n,)
A single initial starting point coordinates. If ``None`` is provided,
initial coordinates are automatically generated (using the ``reset``
method from the internal ``EnergyState`` class).
bounds : sequence, shape (n, 2)
Bounds for variables. ``(min, max)`` pairs for each element in ``x``,
defining bounds for the objective function parameter.
@ -495,6 +491,8 @@ def dual_annealing(func, x0, bounds, args=(), maxiter=1000,
- 2: detection done in the dual annealing process.
If the callback implementation returns True, the algorithm will stop.
x0 : ndarray, shape(n,), optional
Coordinates of a single n-dimensional starting point.
Returns
-------
@ -582,7 +580,7 @@ def dual_annealing(func, x0, bounds, args=(), maxiter=1000,
>>> func = lambda x: np.sum(x*x - 10*np.cos(2*np.pi*x)) + 10*np.size(x)
>>> lw = [-5.12] * 10
>>> up = [5.12] * 10
>>> ret = dual_annealing(func, None, bounds=list(zip(lw, up)), seed=1234)
>>> ret = dual_annealing(func, bounds=list(zip(lw, up)), seed=1234)
>>> print("global minimum: xmin = {0}, f(xmin) = {1:.6f}".format(
... ret.x, ret.fun))
global minimum: xmin = [-4.26437714e-09 -3.91699361e-09 -1.86149218e-09 -3.97165720e-09
@ -605,7 +603,10 @@ def dual_annealing(func, x0, bounds, args=(), maxiter=1000,
raise ValueError('Some bounds values are inf values or nan values')
# Checking that bounds are consistent
if not np.all(lower < upper):
raise ValueError('Bounds are note consistent min < max')
raise ValueError('Bounds are not consistent min < max')
# Checking that bounds are the same length
if not len(lower) == len(upper):
raise ValueError('Bounds do not have the same dimensions')
# Wrapper for the objective function
func_wrapper = ObjectiveFunWrapper(func, maxfun, *args)
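With the reordered signature above, `bounds` becomes the second positional argument and `x0` an optional keyword; a minimal call on a simple sphere function (the objective here is illustrative, not from the source):

```python
import numpy as np
from scipy.optimize import dual_annealing

func = lambda x: np.sum(x * x)      # sphere function, global minimum 0 at the origin
bounds = [(-5.0, 5.0)] * 2

# bounds now follows func directly; a starting point would be passed as x0=...
ret = dual_annealing(func, bounds, seed=1234, maxiter=200)
```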


@ -63,7 +63,8 @@ def _pivot_col(T, tol=1.0E-12, bland=False):
if ma.count() == 0:
return False, np.nan
if bland:
return True, np.nonzero(ma.mask == False)[0][0]
# ma.mask is sometimes 0d
return True, np.nonzero(np.logical_not(np.atleast_1d(ma.mask)))[0][0]
return True, np.ma.nonzero(ma == ma.min())[0][0]
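The "`ma.mask` is sometimes 0d" comment above refers to a NumPy pitfall: when nothing is masked, `ma.mask` can be the scalar `nomask` rather than a boolean array, and `np.nonzero` on a 0-d input fails. A small pure-NumPy illustration (the data is hypothetical):

```python
import numpy as np

ma = np.ma.masked_array([-2.0, 3.0])   # nothing masked: ma.mask is the scalar nomask
# np.nonzero(ma.mask == False) would then operate on a 0-d value;
# promoting with np.atleast_1d makes the Bland's-rule column lookup robust
idx = np.nonzero(np.logical_not(np.atleast_1d(ma.mask)))[0][0]
```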


@ -129,7 +129,7 @@ def fsolve(func, x0, args=(), fprime=None, full_output=0,
See Also
--------
root : Interface to root finding algorithms for multivariate
functions. See the 'hybr' `method` in particular.
functions. See the ``method=='hybr'`` in particular.
Notes
-----
@ -712,6 +712,10 @@ def curve_fit(f, xdata, ydata, p0=None, sigma=None, absolute_sigma=False,
else:
xdata = np.asarray(xdata)
# optimization may produce garbage for float32 inputs, cast them to float64
xdata = xdata.astype(float)
ydata = ydata.astype(float)
# Determine type of sigma
if sigma is not None:
sigma = np.asarray(sigma)
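The float64 cast above is what makes low-precision inputs safe; a quick sketch of a float32/float32 fit of the kind that previously produced garbage (gh-7117, gh-9581), on illustrative linear data:

```python
import numpy as np
from scipy.optimize import curve_fit

x = np.arange(-3, 5).astype(np.float32)
y = (1.5 * x + 3.0).astype(np.float32)

def linear(x, a, b):
    return a * x + b

# inputs are cast to float64 internally, so float32 data now fits cleanly
popt, pcov = curve_fit(linear, x, y)
```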


@ -1,6 +1,8 @@
"""
Unit tests for the differential global minimization algorithm.
"""
import multiprocessing
from scipy.optimize import _differentialevolution
from scipy.optimize._differentialevolution import DifferentialEvolutionSolver
from scipy.optimize import differential_evolution
@ -527,8 +529,18 @@ class TestDifferentialEvolutionSolver(object):
def test_parallel(self):
# smoke test for parallelisation with deferred updating
bounds = [(0., 2.), (0., 2.)]
with DifferentialEvolutionSolver(rosen, bounds,
updating='deferred',
try:
p = multiprocessing.Pool(2)
with DifferentialEvolutionSolver(rosen, bounds,
updating='deferred',
workers=p.map) as solver:
assert_(solver._mapwrapper.pool is not None)
assert_(solver._updating == 'deferred')
solver.solve()
finally:
p.close()
with DifferentialEvolutionSolver(rosen, bounds, updating='deferred',
workers=2) as solver:
assert_(solver._mapwrapper.pool is not None)
assert_(solver._updating == 'deferred')


@ -101,41 +101,41 @@ class TestDualAnnealing(TestCase):
def test_low_dim(self):
ret = dual_annealing(
self.func, None, self.ld_bounds, seed=self.seed)
self.func, self.ld_bounds, seed=self.seed)
assert_allclose(ret.fun, 0., atol=1e-12)
def test_high_dim(self):
ret = dual_annealing(self.func, None, self.hd_bounds)
ret = dual_annealing(self.func, self.hd_bounds)
assert_allclose(ret.fun, 0., atol=1e-12)
def test_low_dim_no_ls(self):
ret = dual_annealing(self.func, None, self.ld_bounds,
ret = dual_annealing(self.func, self.ld_bounds,
no_local_search=True)
assert_allclose(ret.fun, 0., atol=1e-4)
def test_high_dim_no_ls(self):
ret = dual_annealing(self.func, None, self.hd_bounds,
ret = dual_annealing(self.func, self.hd_bounds,
no_local_search=True)
assert_allclose(ret.fun, 0., atol=1e-4)
def test_nb_fun_call(self):
ret = dual_annealing(self.func, None, self.ld_bounds)
ret = dual_annealing(self.func, self.ld_bounds)
assert_equal(self.nb_fun_call, ret.nfev)
def test_nb_fun_call_no_ls(self):
ret = dual_annealing(self.func, None, self.ld_bounds,
no_local_search=True)
ret = dual_annealing(self.func, self.ld_bounds,
no_local_search=True)
assert_equal(self.nb_fun_call, ret.nfev)
def test_max_reinit(self):
assert_raises(ValueError, dual_annealing, self.weirdfunc, None,
self.ld_bounds)
assert_raises(ValueError, dual_annealing, self.weirdfunc,
self.ld_bounds)
def test_reproduce(self):
seed = 1234
res1 = dual_annealing(self.func, None, self.ld_bounds, seed=seed)
res2 = dual_annealing(self.func, None, self.ld_bounds, seed=seed)
res3 = dual_annealing(self.func, None, self.ld_bounds, seed=seed)
res1 = dual_annealing(self.func, self.ld_bounds, seed=seed)
res2 = dual_annealing(self.func, self.ld_bounds, seed=seed)
res3 = dual_annealing(self.func, self.ld_bounds, seed=seed)
# If we have reproducible results, x components found has to
# be exactly the same, which is not the case with no seeding
assert_equal(res1.x, res2.x)
@ -143,22 +143,22 @@ class TestDualAnnealing(TestCase):
def test_bounds_integrity(self):
wrong_bounds = [(-5.12, 5.12), (1, 0), (5.12, 5.12)]
assert_raises(ValueError, dual_annealing, self.func, None,
wrong_bounds)
assert_raises(ValueError, dual_annealing, self.func,
wrong_bounds)
def test_bound_validity(self):
invalid_bounds = [(-5, 5), (-np.inf, 0), (-5, 5)]
assert_raises(ValueError, dual_annealing, self.func, None,
invalid_bounds)
assert_raises(ValueError, dual_annealing, self.func,
invalid_bounds)
invalid_bounds = [(-5, 5), (0, np.inf), (-5, 5)]
assert_raises(ValueError, dual_annealing, self.func, None,
invalid_bounds)
assert_raises(ValueError, dual_annealing, self.func,
invalid_bounds)
invalid_bounds = [(-5, 5), (0, np.nan), (-5, 5)]
assert_raises(ValueError, dual_annealing, self.func, None,
invalid_bounds)
assert_raises(ValueError, dual_annealing, self.func,
invalid_bounds)
def test_max_fun_ls(self):
ret = dual_annealing(self.func, None, self.ld_bounds, maxfun=100)
ret = dual_annealing(self.func, self.ld_bounds, maxfun=100)
ls_max_iter = min(max(
len(self.ld_bounds) * LocalSearchWrapper.LS_MAXITER_RATIO,
@ -167,30 +167,30 @@ class TestDualAnnealing(TestCase):
assert ret.nfev <= 100 + ls_max_iter
def test_max_fun_no_ls(self):
ret = dual_annealing(self.func, None, self.ld_bounds,
ret = dual_annealing(self.func, self.ld_bounds,
no_local_search=True, maxfun=500)
assert ret.nfev <= 500
def test_maxiter(self):
ret = dual_annealing(self.func, None, self.ld_bounds, maxiter=700)
ret = dual_annealing(self.func, self.ld_bounds, maxiter=700)
assert ret.nit <= 700
# Testing that args are passed correctly for dual_annealing
def test_fun_args_ls(self):
ret = dual_annealing(self.func, None, self.ld_bounds,
ret = dual_annealing(self.func, self.ld_bounds,
args=((3.14159, )))
assert_allclose(ret.fun, 3.14159, atol=1e-6)
# Testing that args are passed correctly for pure simulated annealing
def test_fun_args_no_ls(self):
ret = dual_annealing(self.func, None, self.ld_bounds,
ret = dual_annealing(self.func, self.ld_bounds,
args=((3.14159, )), no_local_search=True)
assert_allclose(ret.fun, 3.14159, atol=1e-4)
def test_callback_stop(self):
# Testing that callback make the algorithm stop for
# fun value <= 1.0 (see callback method)
ret = dual_annealing(self.func, None, self.ld_bounds,
ret = dual_annealing(self.func, self.ld_bounds,
callback=self.callback)
assert ret.fun <= 1.0
assert 'stop early' in ret.message[0]
@ -199,7 +199,7 @@ class TestDualAnnealing(TestCase):
minimizer_opts = {
'method': 'Nelder-Mead',
}
ret = dual_annealing(self.func, None, self.ld_bounds,
ret = dual_annealing(self.func, self.ld_bounds,
local_search_options=minimizer_opts)
assert_allclose(ret.fun, 0., atol=1e-6)
@ -207,7 +207,7 @@ class TestDualAnnealing(TestCase):
minimizer_opts = {
'method': 'Powell',
}
ret = dual_annealing(self.func, None, self.ld_bounds,
ret = dual_annealing(self.func, self.ld_bounds,
local_search_options=minimizer_opts)
assert_allclose(ret.fun, 0., atol=1e-8)
@ -215,7 +215,7 @@ class TestDualAnnealing(TestCase):
minimizer_opts = {
'method': 'CG',
}
ret = dual_annealing(self.func, None, self.ld_bounds,
ret = dual_annealing(self.func, self.ld_bounds,
local_search_options=minimizer_opts)
assert_allclose(ret.fun, 0., atol=1e-8)
@ -223,7 +223,7 @@ class TestDualAnnealing(TestCase):
minimizer_opts = {
'method': 'BFGS',
}
ret = dual_annealing(self.func, None, self.ld_bounds,
ret = dual_annealing(self.func, self.ld_bounds,
local_search_options=minimizer_opts)
assert_allclose(ret.fun, 0., atol=1e-8)
@ -231,7 +231,7 @@ class TestDualAnnealing(TestCase):
minimizer_opts = {
'method': 'TNC',
}
ret = dual_annealing(self.func, None, self.ld_bounds,
ret = dual_annealing(self.func, self.ld_bounds,
local_search_options=minimizer_opts)
assert_allclose(ret.fun, 0., atol=1e-8)
@ -239,7 +239,7 @@ class TestDualAnnealing(TestCase):
minimizer_opts = {
'method': 'COBYLA',
}
ret = dual_annealing(self.func, None, self.ld_bounds,
ret = dual_annealing(self.func, self.ld_bounds,
local_search_options=minimizer_opts)
assert_allclose(ret.fun, 0., atol=1e-5)
@ -247,20 +247,20 @@ class TestDualAnnealing(TestCase):
minimizer_opts = {
'method': 'SLSQP',
}
ret = dual_annealing(self.func, None, self.ld_bounds,
ret = dual_annealing(self.func, self.ld_bounds,
local_search_options=minimizer_opts)
assert_allclose(ret.fun, 0., atol=1e-7)
def test_wrong_restart_temp(self):
assert_raises(ValueError, dual_annealing, self.func, None,
assert_raises(ValueError, dual_annealing, self.func,
self.ld_bounds, restart_temp_ratio=1)
assert_raises(ValueError, dual_annealing, self.func, None,
assert_raises(ValueError, dual_annealing, self.func,
self.ld_bounds, restart_temp_ratio=0)
def test_gradient_gnev(self):
minimizer_opts = {
'jac': self.rosen_der_wrapper,
}
ret = dual_annealing(rosen, None, self.ld_bounds,
ret = dual_annealing(rosen, self.ld_bounds,
local_search_options=minimizer_opts)
assert ret.njev == self.ngev


@ -3,6 +3,8 @@ Unit tests for optimization routines from minpack.py.
"""
from __future__ import division, print_function, absolute_import
import warnings
from numpy.testing import (assert_, assert_almost_equal, assert_array_equal,
assert_array_almost_equal, assert_allclose)
from pytest import raises as assert_raises
@ -707,6 +709,55 @@ class TestCurveFit(object):
assert_allclose(popt1, popt2, atol=1e-14)
assert_allclose(pcov1, pcov2, atol=1e-14)
def test_dtypes(self):
# regression test for gh-9581: curve_fit fails if x and y dtypes differ
x = np.arange(-3, 5)
y = 1.5*x + 3.0 + 0.5*np.sin(x)
def func(x, a, b):
return a*x + b
for method in ['lm', 'trf', 'dogbox']:
for dtx in [np.float32, np.float64]:
for dty in [np.float32, np.float64]:
x = x.astype(dtx)
y = y.astype(dty)
with warnings.catch_warnings():
warnings.simplefilter("error", OptimizeWarning)
p, cov = curve_fit(func, x, y, method=method)
assert np.isfinite(cov).all()
assert not np.allclose(p, 1) # curve_fit's initial value
def test_dtypes2(self):
# regression test for gh-7117: curve_fit fails if
# both inputs are float32
def hyperbola(x, s_1, s_2, o_x, o_y, c):
b_2 = (s_1 + s_2) / 2
b_1 = (s_2 - s_1) / 2
return o_y + b_1*(x-o_x) + b_2*np.sqrt((x-o_x)**2 + c**2/4)
min_fit = np.array([-3.0, 0.0, -2.0, -10.0, 0.0])
max_fit = np.array([0.0, 3.0, 3.0, 0.0, 10.0])
guess = np.array([-2.5/3.0, 4/3.0, 1.0, -4.0, 0.5])
params = [-2, .4, -1, -5, 9.5]
xdata = np.array([-32, -16, -8, 4, 4, 8, 16, 32])
ydata = hyperbola(xdata, *params)
# run optimization twice, with xdata being float32 and float64
popt_64, _ = curve_fit(f=hyperbola, xdata=xdata, ydata=ydata, p0=guess,
bounds=(min_fit, max_fit))
xdata = xdata.astype(np.float32)
ydata = hyperbola(xdata, *params)
popt_32, _ = curve_fit(f=hyperbola, xdata=xdata, ydata=ydata, p0=guess,
bounds=(min_fit, max_fit))
assert_allclose(popt_32, popt_64, atol=2e-5)
class TestFixedPoint(object):


@ -5,7 +5,8 @@ from math import sqrt, exp, sin, cos
from numpy.testing import (assert_warns, assert_,
assert_allclose,
assert_equal)
assert_equal,
assert_array_equal)
import numpy as np
from numpy import finfo, power, nan, isclose
@ -406,6 +407,13 @@ class TestBasic(object):
dfunc = lambda x: 2*x
assert_warns(RuntimeWarning, zeros.newton, func, 0.0, dfunc)
def test_newton_does_not_modify_x0(self):
# https://github.com/scipy/scipy/issues/9964
x0 = np.array([0.1, 3])
x0_copy = x0.copy() # Copy to test for equality.
newton(np.sin, x0, np.cos)
assert_array_equal(x0, x0_copy)
def test_gh_5555():
root = 0.1
@ -619,3 +627,37 @@ def test_gh_8881():
# Check that it now succeeds.
rt, r = newton(f, x0, fprime=fp, fprime2=fpp, full_output=True)
assert(r.converged)
def test_gh_9608_preserve_array_shape():
"""
Test that shape is preserved for array inputs even if fprime or fprime2 is
scalar
"""
def f(x):
return x**2
def fp(x):
return 2 * x
def fpp(x):
return 2
x0 = np.array([-2], dtype=np.float32)
rt, r = newton(f, x0, fprime=fp, fprime2=fpp, full_output=True)
assert(r.converged)
x0_array = np.array([-2, -3], dtype=np.float32)
# This next invocation should fail
with pytest.raises(IndexError):
result = zeros.newton(
f, x0_array, fprime=fp, fprime2=fpp, full_output=True
)
def fpp_array(x):
return 2*np.ones(np.shape(x), dtype=np.float32)
result = zeros.newton(
f, x0_array, fprime=fp, fprime2=fpp_array, full_output=True
)
assert result.converged.all()


@ -99,9 +99,10 @@ def newton(func, x0, fprime=None, args=(), tol=1.48e-8, maxiter=50,
derivative `fprime2` of `func` is also provided, then Halley's method is
used.
If `x0` is a sequence, then `newton` returns an array, and `func` must be
vectorized and return a sequence or array of the same shape as its first
argument.
If `x0` is a sequence with more than one item, then `newton` returns an
array, and `func` must be vectorized and return a sequence or array of the
same shape as its first argument. If `fprime` or `fprime2` is given then
its return must also have the same shape.
Parameters
----------
@ -258,7 +259,7 @@ def newton(func, x0, fprime=None, args=(), tol=1.48e-8, maxiter=50,
raise ValueError("tol too small (%g <= 0)" % tol)
if maxiter < 1:
raise ValueError("maxiter must be greater than 0")
if not np.isscalar(x0):
if np.size(x0) > 1:
return _array_newton(func, x0, fprime, args, tol, maxiter, fprime2,
full_output)
@ -349,13 +350,15 @@ def _array_newton(func, x0, fprime, args, tol, maxiter, fprime2, full_output):
A vectorized version of Newton, Halley, and secant methods for arrays.
Do not use this method directly. This method is called from `newton`
when ``np.isscalar(x0)`` is True. For docstring, see `newton`.
when ``np.size(x0) > 1`` is ``True``. For docstring, see `newton`.
"""
# Explicitly copy `x0` as `p` will be modified inplace, but, the
# user's array should not be altered.
try:
p = np.asarray(x0, dtype=float)
p = np.array(x0, copy=True, dtype=float)
except TypeError:
# can't convert complex to float
p = np.asarray(x0)
p = np.array(x0, copy=True)
failures = np.ones_like(p, dtype=bool)
nz_der = np.ones_like(failures)
@ -693,8 +696,12 @@ def brentq(f, a, b, args=(),
Object containing information about the convergence. In particular,
``r.converged`` is True if the routine converged.
See Also
--------
Notes
-----
`f` must be continuous. f(a) and f(b) must have opposite signs.
Related functions fall into several classes:
multivariate local optimizers
`fmin`, `fmin_powell`, `fmin_cg`, `fmin_bfgs`, `fmin_ncg`
nonlinear least squares minimizer
@ -712,10 +719,6 @@ def brentq(f, a, b, args=(),
scalar fixed-point finder
`fixed_point`
Notes
-----
`f` must be continuous. f(a) and f(b) must have opposite signs.
References
----------
.. [Brent1973]
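The `np.asarray` to `np.array(copy=True)` change in `_array_newton` above fixes aliasing: when the input is already float64, `np.asarray` returns the caller's array itself, so the in-place iteration mutated `x0`. A pure-NumPy illustration of the difference:

```python
import numpy as np

x0 = np.array([0.1, 3.0])

p_alias = np.asarray(x0, dtype=float)          # dtype already matches: no copy is made
p_copy = np.array(x0, copy=True, dtype=float)  # always an independent buffer

p_alias[0] = 99.0   # writes through to x0 -- the bug newton had
```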


@ -7,7 +7,7 @@ import operator
import threading
import sys
import timeit
from scipy.spatial import cKDTree
from . import sigtools, dlti
from ._upfirdn import upfirdn, _output_len
from scipy._lib.six import callable
@ -1716,32 +1716,42 @@ def cmplx_sort(p):
def unique_roots(p, tol=1e-3, rtype='min'):
"""
Determine unique roots and their multiplicities from a list of roots.
"""Determine unique roots and their multiplicities from a list of roots.
Parameters
----------
p : array_like
The list of roots.
tol : float, optional
The tolerance for two roots to be considered equal. Default is 1e-3.
rtype : {'max', 'min, 'avg'}, optional
The tolerance for two roots to be considered equal in terms of
the distance between them. Default is 1e-3. Refer to Notes about
the details on roots grouping.
rtype : {'max', 'maximum', 'min', 'minimum', 'avg', 'mean'}, optional
How to determine the returned root if multiple roots are within
`tol` of each other.
- 'max': pick the maximum of those roots.
- 'min': pick the minimum of those roots.
- 'avg': take the average of those roots.
- 'max', 'maximum': pick the maximum of those roots
- 'min', 'minimum': pick the minimum of those roots
- 'avg', 'mean': take the average of those roots
When finding minimum or maximum among complex roots they are compared
first by the real part and then by the imaginary part.
Returns
-------
pout : ndarray
The list of unique roots, sorted from low to high.
mult : ndarray
unique : ndarray
The list of unique roots.
multiplicity : ndarray
The multiplicity of each root.
Notes
-----
If we have 3 roots ``a``, ``b`` and ``c``, such that ``a`` is close to
``b`` and ``b`` is close to ``c`` (distance is less than `tol`), then it
doesn't necessarily mean that ``a`` is close to ``c``. It means that roots
grouping is not unique. In this function we use "greedy" grouping going
through the roots in the order they are given in the input `p`.
This utility function is not specific to roots but can be used for any
sequence of values for which uniqueness and multiplicity has to be
determined. For a more general routine, see `numpy.unique`.
@ -1756,39 +1766,40 @@ def unique_roots(p, tol=1e-3, rtype='min'):
>>> uniq[mult > 1]
array([ 1.305])
"""
if rtype in ['max', 'maximum']:
comproot = np.max
reduce = np.max
elif rtype in ['min', 'minimum']:
comproot = np.min
reduce = np.min
elif rtype in ['avg', 'mean']:
comproot = np.mean
reduce = np.mean
else:
raise ValueError("`rtype` must be one of "
"{'max', 'maximum', 'min', 'minimum', 'avg', 'mean'}")
p = asarray(p) * 1.0
tol = abs(tol)
p, indx = cmplx_sort(p)
pout = []
mult = []
indx = -1
curp = p[0] + 5 * tol
sameroots = []
for k in range(len(p)):
tr = p[k]
if abs(tr - curp) < tol:
sameroots.append(tr)
curp = comproot(sameroots)
pout[indx] = curp
mult[indx] += 1
else:
pout.append(tr)
curp = tr
sameroots = [tr]
indx += 1
mult.append(1)
return array(pout), array(mult)
p = np.asarray(p)
points = np.empty((len(p), 2))
points[:, 0] = np.real(p)
points[:, 1] = np.imag(p)
tree = cKDTree(points)
p_unique = []
p_multiplicity = []
used = np.zeros(len(p), dtype=bool)
for i in range(len(p)):
if used[i]:
continue
group = tree.query_ball_point(points[i], tol)
group = [x for x in group if not used[x]]
p_unique.append(reduce(p[group]))
p_multiplicity.append(len(group))
used[group] = True
return np.asarray(p_unique), np.asarray(p_multiplicity)
def invres(r, p, k, tol=1e-3, rtype='avg'):
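The cKDTree-based grouping above replaces the old sort-and-scan loop; a condensed sketch of the same greedy grouping on a toy set of real roots (tolerance 0.1, `min` reduction, data chosen for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

p = np.array([1.0, 1.05, 2.0])
points = np.c_[p.real, p.imag]            # roots as points in the complex plane
tree = cKDTree(points)

used = np.zeros(len(p), dtype=bool)
unique, multiplicity = [], []
for i in range(len(p)):
    if used[i]:
        continue
    # greedy: group every not-yet-used root within tol of root i
    group = [j for j in tree.query_ball_point(points[i], 0.1) if not used[j]]
    unique.append(p[group].min())         # the rtype='min' reduction
    multiplicity.append(len(group))
    used[group] = True
```

As the reworked docstring notes, grouping by pairwise distance is not unique (close-to is not transitive), so the result depends on the order roots are visited.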


@ -549,7 +549,7 @@ class TestMinimumPhase(object):
k = [0.349585548646686, 0.373552164395447, 0.326082685363438,
0.077152207480935, -0.129943946349364, -0.059355880509749]
m = minimum_phase(h, 'hilbert')
assert_allclose(m, k, rtol=2e-3)
assert_allclose(m, k, rtol=5e-3)
# f=[0 0.8 0.9 1];
# a=[0 0 1 1];


@ -24,7 +24,7 @@ from scipy.signal import (
correlate, convolve, convolve2d, fftconvolve, choose_conv_method,
hilbert, hilbert2, lfilter, lfilter_zi, filtfilt, butter, zpk2tf, zpk2sos,
invres, invresz, vectorstrength, lfiltic, tf2sos, sosfilt, sosfiltfilt,
sosfilt_zi, tf2zpk, BadCoefficients)
sosfilt_zi, tf2zpk, BadCoefficients, unique_roots)
from scipy.signal.windows import hann
from scipy.signal.signaltools import _filtfilt_gust
@ -1517,6 +1517,14 @@ class TestCorrelateReal(object):
pass
return decimal
def equal_tolerance_fft(self, res_dt):
# FFT implementations convert longdouble arguments down to
# double so don't expect better precision, see gh-9520
if res_dt == np.longdouble:
return self.equal_tolerance(np.double)
else:
return self.equal_tolerance(res_dt)
def test_method(self, dt):
if dt == Decimal:
method = choose_conv_method([Decimal(4)], [Decimal(3)])
@ -1526,8 +1534,8 @@ class TestCorrelateReal(object):
y_fft = correlate(a, b, method='fft')
y_direct = correlate(a, b, method='direct')
assert_array_almost_equal(y_r, y_fft, decimal=self.equal_tolerance(y_fft.dtype))
assert_array_almost_equal(y_r, y_direct, decimal=self.equal_tolerance(y_fft.dtype))
assert_array_almost_equal(y_r, y_fft, decimal=self.equal_tolerance_fft(y_fft.dtype))
assert_array_almost_equal(y_r, y_direct, decimal=self.equal_tolerance(y_direct.dtype))
assert_equal(y_fft.dtype, dt)
assert_equal(y_direct.dtype, dt)
@ -1658,8 +1666,13 @@ class TestCorrelateComplex(object):
# The decimal precision to be used for comparing results.
# This value will be passed as the 'decimal' keyword argument of
# assert_array_almost_equal().
# Since correlate may chose to use FFT method which converts
# longdoubles to doubles internally don't expect better precision
# for longdouble than for double (see gh-9520).
def decimal(self, dt):
if dt == np.clongdouble:
dt = np.cdouble
return int(2 * np.finfo(dt).precision / 3)
def _setup_rank1(self, dt, mode):
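The precision cap above follows directly from `np.finfo`; a self-contained version of the helper (the 2/3 scaling mirrors the test class, gh-9520 being the internal longdouble-to-double FFT downcast):

```python
import numpy as np

def decimal(dt):
    # FFT backends downcast (c)longdouble to (c)double internally (gh-9520),
    # so never expect more correct digits than double precision provides
    if dt == np.clongdouble:
        dt = np.cdouble
    return int(2 * np.finfo(dt).precision / 3)
```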
@ -2639,3 +2652,81 @@ class TestDeconvolve(object):
recorded = [0, 2, 1, 0, 2, 3, 1, 0, 0]
recovered, remainder = signal.deconvolve(recorded, impulse_response)
assert_allclose(recovered, original)
class TestUniqueRoots(object):
def test_real_no_repeat(self):
p = [-1.0, -0.5, 0.3, 1.2, 10.0]
unique, multiplicity = unique_roots(p)
assert_almost_equal(unique, p, decimal=15)
assert_equal(multiplicity, np.ones(len(p)))
def test_real_repeat(self):
p = [-1.0, -0.95, -0.89, -0.8, 0.5, 1.0, 1.05]
unique, multiplicity = unique_roots(p, tol=1e-1, rtype='min')
assert_almost_equal(unique, [-1.0, -0.89, 0.5, 1.0], decimal=15)
assert_equal(multiplicity, [2, 2, 1, 2])
unique, multiplicity = unique_roots(p, tol=1e-1, rtype='max')
assert_almost_equal(unique, [-0.95, -0.8, 0.5, 1.05], decimal=15)
assert_equal(multiplicity, [2, 2, 1, 2])
unique, multiplicity = unique_roots(p, tol=1e-1, rtype='avg')
assert_almost_equal(unique, [-0.975, -0.845, 0.5, 1.025], decimal=15)
assert_equal(multiplicity, [2, 2, 1, 2])
def test_complex_no_repeat(self):
p = [-1.0, 1.0j, 0.5 + 0.5j, -1.0 - 1.0j, 3.0 + 2.0j]
unique, multiplicity = unique_roots(p)
assert_almost_equal(unique, p, decimal=15)
assert_equal(multiplicity, np.ones(len(p)))
def test_complex_repeat(self):
p = [-1.0, -1.0 + 0.05j, -0.95 + 0.15j, -0.90 + 0.15j, 0.0,
0.5 + 0.5j, 0.45 + 0.55j]
unique, multiplicity = unique_roots(p, tol=1e-1, rtype='min')
assert_almost_equal(unique, [-1.0, -0.95 + 0.15j, 0.0, 0.45 + 0.55j],
decimal=15)
assert_equal(multiplicity, [2, 2, 1, 2])
unique, multiplicity = unique_roots(p, tol=1e-1, rtype='max')
assert_almost_equal(unique,
[-1.0 + 0.05j, -0.90 + 0.15j, 0.0, 0.5 + 0.5j],
decimal=15)
assert_equal(multiplicity, [2, 2, 1, 2])
unique, multiplicity = unique_roots(p, tol=1e-1, rtype='avg')
assert_almost_equal(
unique, [-1.0 + 0.025j, -0.925 + 0.15j, 0.0, 0.475 + 0.525j],
decimal=15)
assert_equal(multiplicity, [2, 2, 1, 2])
def test_gh_4915(self):
p = np.roots(np.convolve(np.ones(5), np.ones(5)))
true_roots = [-(-1 + 0j)**(1/5),
(-1 + 0j)**(4/5),
-(-1 + 0j)**(3/5),
(-1 + 0j)**(2/5)]
unique, multiplicity = unique_roots(p)
unique = np.sort(unique)
assert_almost_equal(np.sort(unique), true_roots, decimal=7)
assert_equal(multiplicity, [2, 2, 2, 2])
def test_complex_roots_extra(self):
unique, multiplicity = unique_roots([1.0, 1.0j, 1.0])
assert_almost_equal(unique, [1.0, 1.0j], decimal=15)
assert_equal(multiplicity, [2, 1])
unique, multiplicity = unique_roots([1, 1 + 2e-9, 1e-9 + 1j], tol=0.1)
assert_almost_equal(unique, [1.0, 1e-9 + 1.0j], decimal=15)
assert_equal(multiplicity, [2, 1])
def test_single_unique_root(self):
p = np.random.rand(100) + 1j * np.random.rand(100)
unique, multiplicity = unique_roots(p, 2)
assert_almost_equal(unique, [np.min(p)], decimal=15)
assert_equal(multiplicity, [100])


@ -53,7 +53,7 @@ class csc_matrix(_cs_matrix, IndexMixin):
ndim : int
Number of dimensions (this is always 2)
nnz
Number of nonzero elements
Number of stored values, including explicit zeros
data
Data array of the matrix
indices


@ -11,7 +11,14 @@ from bisect import bisect_left
import numpy as np
from scipy._lib._version import NumpyVersion
from scipy._lib.six import xrange, zip
if NumpyVersion(np.__version__) >= '1.17.0':
from scipy._lib._util import _broadcast_arrays
else:
from numpy import broadcast_arrays as _broadcast_arrays
from .base import spmatrix, isspmatrix
from .sputils import (getdtype, isshape, isscalarlike, IndexMixin,
upcast_scalar, get_index_dtype, isintlike, check_shape,
@ -344,7 +351,7 @@ class lil_matrix(spmatrix, IndexMixin):
# Make x and i into the same shape
x = np.asarray(x, dtype=self.dtype)
x, _ = np.broadcast_arrays(x, i)
x, _ = _broadcast_arrays(x, i)
if x.shape != i.shape:
raise ValueError("shape mismatch in assignment")


@ -69,10 +69,8 @@ add_newdoc('scipy.sparse.linalg.dsolve._superlu', 'SuperLU',
The permutation matrices can be constructed:
>>> Pr = csc_matrix((4, 4))
>>> Pr[lu.perm_r, np.arange(4)] = 1
>>> Pc = csc_matrix((4, 4))
>>> Pc[np.arange(4), lu.perm_c] = 1
>>> Pr = csc_matrix((np.ones(4), (lu.perm_r, np.arange(4))))
>>> Pc = csc_matrix((np.ones(4), (np.arange(4), lu.perm_c)))
We can reassemble the original matrix:
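That reassembly can be checked end-to-end; a sketch using the same permutation-matrix construction (the matrix values are illustrative, taken to be nonsingular):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

A = csc_matrix(np.array([[1., 2., 0., 4.],
                         [1., 0., 0., 1.],
                         [1., 0., 2., 1.],
                         [2., 2., 1., 0.]]))
lu = splu(A)

n = A.shape[0]
Pr = csc_matrix((np.ones(n), (lu.perm_r, np.arange(n))))
Pc = csc_matrix((np.ones(n), (np.arange(n), lu.perm_c)))

# undo the row/column permutations to recover A from L @ U
A_again = (Pr.T @ (lu.L @ lu.U) @ Pc.T).toarray()
```

The one-shot `csc_matrix((data, (rows, cols)))` construction shown in the new docstring avoids the inefficient item-by-item assignment into an empty CSC matrix that the old doctest used.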


@ -852,22 +852,27 @@ class _UnsymmetricArpackParams(_ArpackParams):
z = z[:, :nreturned]
else:
# we got one extra eigenvalue (likely a cc pair, but which?)
# cut at approx precision for sorting
rd = np.round(d, decimals=_ndigits[self.tp])
if self.mode in (1, 2):
rd = d
elif self.mode in (3, 4):
rd = 1 / (d - self.sigma)
if self.which in ['LR', 'SR']:
ind = np.argsort(rd.real)
elif self.which in ['LI', 'SI']:
# for LI,SI ARPACK returns largest,smallest
# abs(imaginary) why?
# abs(imaginary) (complex pairs come together)
ind = np.argsort(abs(rd.imag))
else:
ind = np.argsort(abs(rd))
if self.which in ['LR', 'LM', 'LI']:
d = d[ind[-k:]]
z = z[:, ind[-k:]]
if self.which in ['SR', 'SM', 'SI']:
d = d[ind[:k]]
z = z[:, ind[:k]]
ind = ind[-k:][::-1]
elif self.which in ['SR', 'SM', 'SI']:
ind = ind[:k]
d = d[ind]
z = z[:, ind]
else:
# complex is so much simpler...
d, z, ierr =\
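The sorting fix above matters because in shift-invert modes (3, 4) ARPACK works with the transformed spectrum `1/(d - sigma)`, so the extra eigenvalue must be discarded based on the transformed values, not the raw ones. This is also why `which='LM'` combined with a `sigma` returns the eigenvalues nearest `sigma`; a small sketch:

```python
import numpy as np
from scipy.sparse.linalg import eigs

# Diagonal matrix with known eigenvalues 1..10.
A = np.diag(np.arange(1.0, 11.0))

# Shift-invert around sigma=5.1: 'LM' applied to 1/(d - sigma) selects
# the eigenvalues closest to sigma, i.e. 5, 6 and 4.
w, v = eigs(A, k=3, sigma=5.1, which='LM')
```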
@ -1026,20 +1031,28 @@ class IterOpInv(LinearOperator):
return self.OP.dtype
def get_inv_matvec(M, symmetric=False, tol=0):
def _fast_spmatrix_to_csc(A, hermitian=False):
"""Convert sparse matrix to CSC (by transposing, if possible)"""
if (isspmatrix_csr(A) and hermitian
and not np.issubdtype(A.dtype, np.complexfloating)):
return A.T
else:
return A.tocsc()
def get_inv_matvec(M, hermitian=False, tol=0):
if isdense(M):
return LuInv(M).matvec
elif isspmatrix(M):
if isspmatrix_csr(M) and symmetric:
M = M.T
M = _fast_spmatrix_to_csc(M, hermitian=hermitian)
return SpLuInv(M).matvec
else:
return IterInv(M, tol=tol).matvec
def get_OPinv_matvec(A, M, sigma, symmetric=False, tol=0):
def get_OPinv_matvec(A, M, sigma, hermitian=False, tol=0):
if sigma == 0:
return get_inv_matvec(A, symmetric=symmetric, tol=tol)
return get_inv_matvec(A, hermitian=hermitian, tol=tol)
if M is None:
#M is the identity matrix
@ -1053,9 +1066,8 @@ def get_OPinv_matvec(A, M, sigma, symmetric=False, tol=0):
return LuInv(A).matvec
elif isspmatrix(A):
A = A - sigma * eye(A.shape[0])
if symmetric and isspmatrix_csr(A):
A = A.T
return SpLuInv(A.tocsc()).matvec
A = _fast_spmatrix_to_csc(A, hermitian=hermitian)
return SpLuInv(A).matvec
else:
return IterOpInv(_aslinearoperator_with_dtype(A),
M, sigma, tol=tol).matvec
@ -1069,9 +1081,8 @@ def get_OPinv_matvec(A, M, sigma, symmetric=False, tol=0):
return LuInv(A - sigma * M).matvec
else:
OP = A - sigma * M
if symmetric and isspmatrix_csr(OP):
OP = OP.T
return SpLuInv(OP.tocsc()).matvec
OP = _fast_spmatrix_to_csc(OP, hermitian=hermitian)
return SpLuInv(OP).matvec
# ARPACK is not threadsafe or reentrant (SAVE variables), so we need a
@ -1288,7 +1299,7 @@ def eigs(A, k=6, M=None, sigma=None, which='LM', v0=None,
#general eigenvalue problem
mode = 2
if Minv is None:
Minv_matvec = get_inv_matvec(M, symmetric=True, tol=tol)
Minv_matvec = get_inv_matvec(M, hermitian=True, tol=tol)
else:
Minv = _aslinearoperator_with_dtype(Minv)
Minv_matvec = Minv.matvec
@ -1314,7 +1325,7 @@ def eigs(A, k=6, M=None, sigma=None, which='LM', v0=None,
raise ValueError("Minv should not be specified when sigma is")
if OPinv is None:
Minv_matvec = get_OPinv_matvec(A, M, sigma,
symmetric=False, tol=tol)
hermitian=False, tol=tol)
else:
OPinv = _aslinearoperator_with_dtype(OPinv)
Minv_matvec = OPinv.matvec
@ -1603,7 +1614,7 @@ def eigsh(A, k=6, M=None, sigma=None, which='LM', v0=None,
#general eigenvalue problem
mode = 2
if Minv is None:
Minv_matvec = get_inv_matvec(M, symmetric=True, tol=tol)
Minv_matvec = get_inv_matvec(M, hermitian=True, tol=tol)
else:
Minv = _aslinearoperator_with_dtype(Minv)
Minv_matvec = Minv.matvec
@ -1619,7 +1630,7 @@ def eigsh(A, k=6, M=None, sigma=None, which='LM', v0=None,
matvec = None
if OPinv is None:
Minv_matvec = get_OPinv_matvec(A, M, sigma,
symmetric=True, tol=tol)
hermitian=True, tol=tol)
else:
OPinv = _aslinearoperator_with_dtype(OPinv)
Minv_matvec = OPinv.matvec
@ -1634,7 +1645,7 @@ def eigsh(A, k=6, M=None, sigma=None, which='LM', v0=None,
mode = 4
if OPinv is None:
Minv_matvec = get_OPinv_matvec(A, M, sigma,
symmetric=True, tol=tol)
hermitian=True, tol=tol)
else:
Minv_matvec = _aslinearoperator_with_dtype(OPinv).matvec
matvec = _aslinearoperator_with_dtype(A).matvec
@ -1646,7 +1657,7 @@ def eigsh(A, k=6, M=None, sigma=None, which='LM', v0=None,
matvec = _aslinearoperator_with_dtype(A).matvec
if OPinv is None:
Minv_matvec = get_OPinv_matvec(A, M, sigma,
symmetric=True, tol=tol)
hermitian=True, tol=tol)
else:
Minv_matvec = _aslinearoperator_with_dtype(OPinv).matvec
if M is None:

View File

@ -7,6 +7,7 @@ To run tests locally:
"""
import threading
import itertools
import numpy as np
@ -17,7 +18,7 @@ import pytest
from numpy import dot, conj, random
from scipy.linalg import eig, eigh, hilbert, svd
from scipy.sparse import csc_matrix, csr_matrix, isspmatrix, diags
from scipy.sparse import csc_matrix, csr_matrix, isspmatrix, diags, rand
from scipy.sparse.linalg import LinearOperator, aslinearoperator
from scipy.sparse.linalg.eigen.arpack import eigs, eigsh, svds, \
ArpackNoConvergence, arpack
@ -208,7 +209,7 @@ def eval_evec(symmetric, d, typ, k, which, v0=None, sigma=None,
ac = mattype(a)
if general:
b = d['bmat'].astype(typ.lower())
b = d['bmat'].astype(typ)
bc = mattype(b)
# get exact eigenvalues
@ -301,6 +302,8 @@ class SymmetricParams:
pos_definite=True).astype('f').astype('d')
Ac = generate_matrix(N, hermitian=True, pos_definite=True,
complex=True).astype('F').astype('D')
Mc = generate_matrix(N, hermitian=True, pos_definite=True,
complex=True).astype('F').astype('D')
v0 = np.random.random(N)
# standard symmetric problem
@ -329,8 +332,15 @@ class SymmetricParams:
GH['v0'] = v0
GH['eval'] = eigh(GH['mat'], GH['bmat'], eigvals_only=True)
# general hermitian problem with hermitian M
GHc = DictWithRepr("gen-hermitian-Mc")
GHc['mat'] = Ac
GHc['bmat'] = Mc
GHc['v0'] = v0
GHc['eval'] = eigh(GHc['mat'], GHc['bmat'], eigvals_only=True)
self.real_test_cases = [SS, GS]
self.complex_test_cases = [SH, GH]
self.complex_test_cases = [SH, GH, GHc]
class NonSymmetricParams:
@ -963,3 +973,42 @@ def test_eigsh_for_k_greater():
# Test 'A' for different types
assert_raises(TypeError, eigsh, aslinearoperator(A), k=4)
assert_raises(TypeError, eigsh, A_sparse, M=M_dense, k=4)
def test_real_eigs_real_k_subset():
np.random.seed(1)
n = 10
A = rand(n, n, density=0.5)
A.data *= 2
A.data -= 1
v0 = np.ones(n)
whichs = ['LM', 'SM', 'LR', 'SR', 'LI', 'SI']
dtypes = [np.float32, np.float64]
for which, sigma, dtype in itertools.product(whichs, [None, 0, 5], dtypes):
prev_w = np.array([], dtype=dtype)
eps = np.finfo(dtype).eps
for k in range(1, 9):
w, z = eigs(A.astype(dtype), k=k, which=which, sigma=sigma,
v0=v0.astype(dtype), tol=0)
assert_allclose(np.linalg.norm(A.dot(z) - z * w), 0, atol=np.sqrt(eps))
# Check that the set of eigenvalues for `k` is a subset of that for `k+1`
dist = abs(prev_w[:,None] - w).min(axis=1)
assert_allclose(dist, 0, atol=np.sqrt(eps))
prev_w = w
# Check sort order
if sigma is None:
d = w
else:
d = 1 / (w - sigma)
if which == 'LM':
# ARPACK is systematic for 'LM', but sort order
# appears not well defined for other modes
assert np.all(np.diff(abs(d)) <= 1e-6)

View File

@ -61,8 +61,8 @@ class TestGCROTMK(object):
def test_arnoldi(self):
np.random.rand(1234)
A = eye(10000) + rand(10000,10000,density=1e-4)
b = np.random.rand(10000)
A = eye(2000) + rand(2000, 2000, density=5e-4)
b = np.random.rand(2000)
# The inner arnoldi should be equivalent to gmres
with suppress_warnings() as sup:

View File

@ -5,6 +5,9 @@ from __future__ import division, print_function, absolute_import
from numpy.testing import assert_, assert_allclose, assert_equal
import pytest
from platform import python_implementation
import numpy as np
from numpy import zeros, array, allclose
from scipy.linalg import norm
@ -16,13 +19,13 @@ from scipy.sparse.linalg.isolve import lgmres, gmres
from scipy._lib._numpy_compat import suppress_warnings
Am = csr_matrix(array([[-2,1,0,0,0,9],
[1,-2,1,0,5,0],
[0,1,-2,1,0,0],
[0,0,1,-2,1,0],
[0,3,0,1,-2,1],
[1,0,0,0,1,-2]]))
b = array([1,2,3,4,5,6])
Am = csr_matrix(array([[-2, 1, 0, 0, 0, 9],
[1, -2, 1, 0, 5, 0],
[0, 1, -2, 1, 0, 0],
[0, 0, 1, -2, 1, 0],
[0, 3, 0, 1, -2, 1],
[1, 0, 0, 0, 1, -2]]))
b = array([1, 2, 3, 4, 5, 6])
count = [0]
@ -38,7 +41,8 @@ def do_solve(**kw):
count[0] = 0
with suppress_warnings() as sup:
sup.filter(DeprecationWarning, ".*called without specifying.*")
x0, flag = lgmres(A, b, x0=zeros(A.shape[0]), inner_m=6, tol=1e-14, **kw)
x0, flag = lgmres(A, b, x0=zeros(A.shape[0]),
inner_m=6, tol=1e-14, **kw)
count_0 = count[0]
assert_(allclose(A*x0, b, rtol=1e-12, atol=1e-12), norm(A*x0-b))
return x0, count_0
@ -65,7 +69,8 @@ class TestLGMRES(object):
assert_(len(outer_v) > 0)
assert_(len(outer_v) <= 6)
x1, count_1 = do_solve(outer_k=6, outer_v=outer_v, prepend_outer_v=True)
x1, count_1 = do_solve(outer_k=6, outer_v=outer_v,
prepend_outer_v=True)
assert_(count_1 == 2, count_1)
assert_(count_1 < count_0/2)
assert_(allclose(x1, x0, rtol=1e-14))
@ -73,31 +78,37 @@ class TestLGMRES(object):
# ---
outer_v = []
x0, count_0 = do_solve(outer_k=6, outer_v=outer_v, store_outer_Av=False)
x0, count_0 = do_solve(outer_k=6, outer_v=outer_v,
store_outer_Av=False)
assert_(array([v[1] is None for v in outer_v]).all())
assert_(len(outer_v) > 0)
assert_(len(outer_v) <= 6)
x1, count_1 = do_solve(outer_k=6, outer_v=outer_v, prepend_outer_v=True)
x1, count_1 = do_solve(outer_k=6, outer_v=outer_v,
prepend_outer_v=True)
assert_(count_1 == 3, count_1)
assert_(count_1 < count_0/2)
assert_(allclose(x1, x0, rtol=1e-14))
@pytest.mark.skipif(python_implementation() == 'PyPy',
reason="Fails on PyPy CI runs. See #9507")
def test_arnoldi(self):
np.random.rand(1234)
A = eye(10000) + rand(10000,10000,density=1e-4)
b = np.random.rand(10000)
A = eye(2000) + rand(2000, 2000, density=5e-4)
b = np.random.rand(2000)
# The inner arnoldi should be equivalent to gmres
with suppress_warnings() as sup:
sup.filter(DeprecationWarning, ".*called without specifying.*")
x0, flag0 = lgmres(A, b, x0=zeros(A.shape[0]), inner_m=15, maxiter=1)
x1, flag1 = gmres(A, b, x0=zeros(A.shape[0]), restart=15, maxiter=1)
x0, flag0 = lgmres(A, b, x0=zeros(A.shape[0]),
inner_m=15, maxiter=1)
x1, flag1 = gmres(A, b, x0=zeros(A.shape[0]),
restart=15, maxiter=1)
assert_equal(flag0, 1)
assert_equal(flag1, 1)
assert_(np.linalg.norm(A.dot(x0) - b) > 1e-3)
assert_(np.linalg.norm(A.dot(x0) - b) > 4e-4)
assert_allclose(x0, x1)
@ -134,7 +145,7 @@ class TestLGMRES(object):
def test_nans(self):
A = eye(3, format='lil')
A[1,1] = np.nan
A[1, 1] = np.nan
b = np.ones(3)
with suppress_warnings() as sup:

View File

@ -611,7 +611,7 @@ def _expm(A, use_exact_onenorm):
# algorithms.
# Avoid indiscriminate asarray() to allow sparse or other strange arrays.
if isinstance(A, (list, tuple)):
if isinstance(A, (list, tuple, np.matrix)):
A = np.asarray(A)
if len(A.shape) != 2 or A.shape[0] != A.shape[1]:
raise ValueError('expected a square matrix')
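The widened `isinstance` check above converts `np.matrix` input to a plain ndarray up front, because the code paths used for larger inputs do not cope with `np.matrix` semantics (gh-5546). A sketch using a nilpotent matrix, whose exponential is exactly `I + A` since `A @ A == 0`:

```python
import warnings
import numpy as np
from scipy.linalg import expm

A = np.zeros((3, 3))
A[0, 1] = 1.0          # A @ A == 0, so expm(A) == I + A exactly
B = expm(A)

with warnings.catch_warnings():
    # np.matrix emits a PendingDeprecationWarning; silence it for the demo
    warnings.simplefilter("ignore")
    B_mat = expm(np.matrix(A))
```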

View File

@ -524,6 +524,17 @@ class TestExpM(object):
atol = 1e-13 * abs(expected).max()
assert_allclose(got, expected, atol=atol)
def test_matrix_input(self):
# Large np.matrix inputs should work, gh-5546
A = np.zeros((200, 200))
A[-1,0] = 1
B0 = expm(A)
with suppress_warnings() as sup:
sup.filter(DeprecationWarning, "the matrix subclass.*")
sup.filter(PendingDeprecationWarning, "the matrix subclass.*")
B = expm(np.matrix(A))
assert_allclose(B, B0)
class TestOperators(object):

View File

@ -7,11 +7,18 @@ import operator
import warnings
import numpy as np
from scipy._lib._version import NumpyVersion
if NumpyVersion(np.__version__) >= '1.17.0':
from scipy._lib._util import _broadcast_arrays
else:
from numpy import broadcast_arrays as _broadcast_arrays
__all__ = ['upcast', 'getdtype', 'isscalarlike', 'isintlike',
'isshape', 'issequence', 'isdense', 'ismatrix', 'get_sum_dtype']
supported_dtypes = ['bool', 'int8', 'uint8', 'short', 'ushort', 'intc',
'uintc', 'longlong', 'ulonglong', 'single', 'double',
'uintc', 'l', 'L', 'longlong', 'ulonglong', 'single', 'double',
'longdouble', 'csingle', 'cdouble', 'clongdouble']
supported_dtypes = [np.typeDict[x] for x in supported_dtypes]
@ -455,7 +462,7 @@ class IndexMixin(object):
# row vector special case
j = np.atleast_1d(j)
if i.ndim == 1:
i, j = np.broadcast_arrays(i, j)
i, j = _broadcast_arrays(i, j)
i = i[:, None]
j = j[:, None]
return i, j
@ -464,7 +471,7 @@ class IndexMixin(object):
if i_slice and j.ndim > 1:
raise IndexError('index returns 3-dim structure')
i, j = np.broadcast_arrays(i, j)
i, j = _broadcast_arrays(i, j)
if i.ndim == 1:
# return column vectors for 1-D indexing

View File

@ -1157,7 +1157,7 @@ class TestPdist(object):
_assert_within_tol(Y_test1, Y_right, eps, verbose > 2)
def test_pdist_jensenshannon_iris_nonC(self):
eps = 5e-13
eps = 5e-12
X = eo['iris']
Y_right = eo['pdist-jensenshannon-iris']
Y_test2 = pdist(X, 'test_jensenshannon')

View File

@ -3,10 +3,11 @@
from __future__ import division, print_function, absolute_import
from numpy.testing import (assert_equal, assert_array_equal,
assert_almost_equal, assert_array_almost_equal, assert_)
from numpy.testing import (assert_equal, assert_array_equal, assert_,
assert_almost_equal, assert_array_almost_equal)
from pytest import raises as assert_raises
import pytest
from platform import python_implementation
import numpy as np
from scipy.spatial import KDTree, Rectangle, distance_matrix, cKDTree
from scipy.spatial.ckdtree import cKDTreeNode
@ -441,8 +442,8 @@ def test_random_ball_vectorized_compiled():
r = T.query_ball_point(np.random.randn(2,3,m),1)
assert_equal(r.shape,(2,3))
assert_(isinstance(r[0,0],list))
def test_query_ball_point_multithreading():
np.random.seed(0)
n = 5000
@ -452,15 +453,15 @@ def test_query_ball_point_multithreading():
l1 = T.query_ball_point(points,0.003,n_jobs=1)
l2 = T.query_ball_point(points,0.003,n_jobs=64)
l3 = T.query_ball_point(points,0.003,n_jobs=-1)
for i in range(n):
if l1[i] or l2[i]:
assert_array_equal(l1[i],l2[i])
for i in range(n):
if l1[i] or l3[i]:
assert_array_equal(l1[i],l3[i])
class two_trees_consistency:
@ -714,7 +715,7 @@ class Test_sparse_distance_matrix_compiled(sparse_distance_matrix_consistency):
M1 = self.T1.sparse_distance_matrix(self.T2, self.r)
M2 = self.ref_T1.sparse_distance_matrix(self.ref_T2, self.r)
assert_array_almost_equal(M1.todense(), M2.todense(), decimal=14)
def test_against_logic_error_regression(self):
# regression test for gh-5077 logic error
np.random.seed(0)
@ -730,7 +731,7 @@ class Test_sparse_distance_matrix_compiled(sparse_distance_matrix_consistency):
for j in range(self.n):
v = self.data1[i,:] - self.data2[j,:]
ref[i,j] = np.dot(v,v)
ref = np.sqrt(ref)
ref = np.sqrt(ref)
ref[ref > self.r] = 0.
# test return type 'dict'
dist = np.zeros((self.n,self.n))
@ -740,7 +741,7 @@ class Test_sparse_distance_matrix_compiled(sparse_distance_matrix_consistency):
assert_array_almost_equal(ref, dist, decimal=14)
# test return type 'ndarray'
dist = np.zeros((self.n,self.n))
r = self.T1.sparse_distance_matrix(self.T2, self.r,
r = self.T1.sparse_distance_matrix(self.T2, self.r,
output_type='ndarray')
for k in range(r.shape[0]):
i = r['i'][k]
@ -749,11 +750,11 @@ class Test_sparse_distance_matrix_compiled(sparse_distance_matrix_consistency):
dist[i,j] = v
assert_array_almost_equal(ref, dist, decimal=14)
# test return type 'dok_matrix'
r = self.T1.sparse_distance_matrix(self.T2, self.r,
r = self.T1.sparse_distance_matrix(self.T2, self.r,
output_type='dok_matrix')
assert_array_almost_equal(ref, r.todense(), decimal=14)
# test return type 'coo_matrix'
r = self.T1.sparse_distance_matrix(self.T2, self.r,
r = self.T1.sparse_distance_matrix(self.T2, self.r,
output_type='coo_matrix')
assert_array_almost_equal(ref, r.todense(), decimal=14)
@ -858,11 +859,11 @@ def test_ckdtree_query_pairs():
l0 = sorted(brute)
# test default return type
s = T.query_pairs(r)
l1 = sorted(s)
l1 = sorted(s)
assert_array_equal(l0,l1)
# test return type 'set'
s = T.query_pairs(r, output_type='set')
l1 = sorted(s)
l1 = sorted(s)
assert_array_equal(l0,l1)
# test return type 'ndarray'
s = set()
@ -871,8 +872,8 @@ def test_ckdtree_query_pairs():
s.add((int(arr[i,0]),int(arr[i,1])))
l2 = sorted(s)
assert_array_equal(l0,l2)
def test_ball_point_ints():
# Regression test for #1373.
x, y = np.mgrid[0:4, 0:4]
@ -942,10 +943,10 @@ def test_ckdtree_pickle_boxsize():
T1 = T1.query(points, k=5)[-1]
T2 = T2.query(points, k=5)[-1]
assert_array_equal(T1, T2)
def test_ckdtree_copy_data():
# check if copy_data=True makes the kd-tree
# impervious to data corruption by modification of
# impervious to data corruption by modification of
# the data array
np.random.seed(0)
n = 5000
@ -957,7 +958,7 @@ def test_ckdtree_copy_data():
points[...] = np.random.randn(n, k)
T2 = T.query(q, k=5)[-1]
assert_array_equal(T1, T2)
def test_ckdtree_parallel():
# check if parallel=True also generates correct
# query results
@ -972,7 +973,7 @@ def test_ckdtree_parallel():
assert_array_equal(T1, T2)
assert_array_equal(T1, T3)
def test_ckdtree_view():
def test_ckdtree_view():
# Check that the nodes can be correctly viewed from Python.
# This test also sanity checks each node in the cKDTree, and
# thus verifies the internal structure of the kd-tree.
@ -981,11 +982,11 @@ def test_ckdtree_view():
k = 4
points = np.random.randn(n, k)
kdtree = cKDTree(points)
# walk the whole kd-tree and sanity check each node
def recurse_tree(n):
assert_(isinstance(n, cKDTreeNode))
if n.split_dim == -1:
assert_(isinstance(n, cKDTreeNode))
if n.split_dim == -1:
assert_(n.lesser is None)
assert_(n.greater is None)
assert_(n.indices.shape[0] <= kdtree.leafsize)
@ -995,7 +996,7 @@ def test_ckdtree_view():
x = n.lesser.data_points[:, n.split_dim]
y = n.greater.data_points[:, n.split_dim]
assert_(x.max() < y.min())
recurse_tree(kdtree.tree)
# check that indices are correctly retrieved
n = kdtree.tree
@ -1058,7 +1059,7 @@ def test_ckdtree_box():
dd1, ii1 = kdtree.query(data + 1.0, k, p=p)
assert_almost_equal(dd, dd1)
assert_equal(ii, ii1)
dd1, ii1 = kdtree.query(data - 1.0, k, p=p)
assert_almost_equal(dd, dd1)
assert_equal(ii, ii1)
@ -1122,6 +1123,9 @@ def simulate_periodic_box(kdtree, data, k, boxsize, p):
result.sort(order='dd')
return result['dd'][:, :k], result['ii'][:,:k]
@pytest.mark.skipif(python_implementation() == 'PyPy',
reason="Fails on PyPy CI runs. See #9507")
def test_ckdtree_memuse():
# unit test adaptation of gh-5630
@ -1177,13 +1181,13 @@ def test_ckdtree_weights():
for i in range(10):
# since weights are uniform, these shall agree:
c1 = tree1.count_neighbors(tree1, np.linspace(0, 10, i))
c2 = tree1.count_neighbors(tree1, np.linspace(0, 10, i),
c2 = tree1.count_neighbors(tree1, np.linspace(0, 10, i),
weights=(weights, weights))
c3 = tree1.count_neighbors(tree1, np.linspace(0, 10, i),
c3 = tree1.count_neighbors(tree1, np.linspace(0, 10, i),
weights=(weights, None))
c4 = tree1.count_neighbors(tree1, np.linspace(0, 10, i),
c4 = tree1.count_neighbors(tree1, np.linspace(0, 10, i),
weights=(None, weights))
c5 = tree1.count_neighbors(tree1, np.linspace(0, 10, i),
c5 = tree1.count_neighbors(tree1, np.linspace(0, 10, i),
weights=weights)
assert_array_equal(c1, c2)
@ -1222,12 +1226,12 @@ def test_ckdtree_count_neighbous_multiple_r():
nnc = kdtree.count_neighbors(kdtree, r0, cumulative=False)
assert_equal(n0, nnc.cumsum())
for i, r in zip(itertools.permutations(i0),
for i, r in zip(itertools.permutations(i0),
itertools.permutations(r0)):
# permute n0 by i and it shall agree
# permute n0 by i and it shall agree
n = kdtree.count_neighbors(kdtree, r)
assert_array_equal(n, n0[list(i)])
def test_len0_arrays():
# make sure len-0 arrays are handled correctly
# in range queries (gh-5639)
@ -1276,7 +1280,7 @@ def test_len0_arrays():
def test_ckdtree_duplicated_inputs():
# check ckdtree with duplicated inputs
n = 1024
n = 1024
for m in range(1, 8):
data = np.concatenate([
np.ones((n // 2, m)) * 1,

View File

@ -1571,8 +1571,8 @@ class Rotation(object):
References
----------
.. [1] F. Landis Markley,
Attitude determination using vector observations: a fast
optimal matrix algorithm, Journal of Astronautical Sciences,
"Attitude determination using vector observations: a fast
optimal matrix algorithm", Journal of Astronautical Sciences,
Vol. 41, No.2, 1993, pp. 261-280.
.. [2] F. Landis Markley,
"Attitude determination using vector observations and the

View File

@ -49,6 +49,11 @@ cdef extern from "lapack_defs.h":
cdef inline double* lame_coefficients(double h2, double k2, int n, int p,
void **bufferp, double signm,
double signn) nogil:
# Ensure that the caller can safely call free(*bufferp) even if an
# invalid argument is found in the following validation code.
bufferp[0] = NULL
if n < 0:
sf_error.error("ellip_harm", sf_error.ARG, "invalid value for n")
return NULL

View File

@ -271,3 +271,11 @@ def test_ellip_harm():
points = np.array(points)
assert_func_equal(ellip_harm, ellip_harm_known, points, rtol=1e-12)
def test_ellip_harm_invalid_p():
# Regression test. This should return nan.
n = 4
# Make p > 2*n + 1.
p = 2*n + 2
result = ellip_harm(0.5, 2.0, n, p, 0.2)
assert np.isnan(result)

View File

@ -1,6 +1,8 @@
from __future__ import division, print_function, absolute_import
import itertools
import sys
import pytest
import numpy as np
from numpy.testing import assert_
@ -228,6 +230,8 @@ class TestSmirnovp(object):
dataset0 = np.column_stack([n, x, pp])
FuncData(_smirnovp, dataset0, (0, 1), 2, rtol=_rtol).check(dtypes=[int, float, float])
@pytest.mark.xfail(sys.maxsize <= 2**32,
reason="requires 64-bit platform")
def test_oneovernclose(self):
# Check derivative at x=1/n (Discontinuous at x=1/n, test on either side: x=1/n +/- 2epsilon)
n = np.arange(3, 20)

View File

@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
#
# Author: Travis Oliphant 2002-2011 with contributions from
# SciPy Developers 2004-2011
@ -3342,7 +3343,7 @@ class norminvgauss_gen(rv_continuous):
O. Barndorff-Nielsen, "Normal Inverse Gaussian Distributions and Stochastic
Volatility Modelling", Scandinavian Journal of Statistics, Vol. 24,
pp. 113, 1997.
pp. 1-13, 1997.
%(example)s
@ -3379,7 +3380,7 @@ norminvgauss = norminvgauss_gen(name="norminvgauss")
class invweibull_gen(rv_continuous):
r"""An inverted Weibull continuous random variable.
u"""An inverted Weibull continuous random variable.
This distribution is also known as the Fréchet distribution or the
type II extreme value distribution.
@ -3392,7 +3393,7 @@ class invweibull_gen(rv_continuous):
.. math::
f(x, c) = c x^{-c-1} \exp(-x^{-c})
f(x, c) = c x^{-c-1} \\exp(-x^{-c})
for :math:`x > 0`, :math:`c > 0`.
@ -3994,7 +3995,8 @@ class levy_stable_gen(rv_continuous):
fft_n_points_two_power = getattr(self, 'pdf_fft_n_points_two_power', None)
# group data in unique arrays of alpha, beta pairs
uniq_param_pairs = np.vstack({tuple(row) for row in data_in[:,1:]})
uniq_param_pairs = np.vstack(
list({tuple(row) for row in data_in[:,1:]}))
for pair in uniq_param_pairs:
data_mask = np.all(data_in[:,1:] == pair, axis=-1)
data_subset = data_in[data_mask]
@ -4002,8 +4004,10 @@ class levy_stable_gen(rv_continuous):
data_out[data_mask] = np.array([levy_stable._cdf_single_value_zolotarev(_x, _alpha, _beta)
for _x, _alpha, _beta in data_subset]).reshape(len(data_subset), 1)
else:
warnings.warn('Cumulative density calculations experimental for FFT method.' +
' Use zolatarev method instead.', RuntimeWarning)
warnings.warn(u'FFT method is considered experimental for ' +
u'cumulative distribution function ' +
u"evaluations. Use Zolotarev's method instead.",
RuntimeWarning)
_alpha, _beta = pair
_x = data_subset[:,(0,)]

View File

@ -11,6 +11,7 @@ LinregressResult = namedtuple('LinregressResult', ('slope', 'intercept',
'rvalue', 'pvalue',
'stderr'))
def linregress(x, y=None):
"""
Calculate a linear least-squares regression for two sets of measurements.
@ -327,7 +328,7 @@ def siegelslopes(y, x=None, method="hierarchical"):
.. [2] A. Stein and M. Werman, "Finding the repeated median regression
line", Proceedings of the Third Annual ACM-SIAM Symposium on
Discrete Algorithms, pp. 409413, 1992.
Discrete Algorithms, pp. 409-413, 1992.
Examples
--------

View File

@ -197,7 +197,7 @@ class gaussian_kde(object):
self.d, self.n = self.dataset.shape
if weights is not None:
self._weights = atleast_1d(weights)
self._weights = atleast_1d(weights).astype(float)
self._weights /= sum(self._weights)
if self.weights.ndim != 1:
raise ValueError("`weights` input should be one-dimensional.")
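The `astype(float)` above guards the in-place normalisation on the next line: with an integer `weights` array, `self._weights /= sum(...)` would fail or truncate, and the cast also copies, so the caller's array is never modified (gh-9709). A quick sketch:

```python
import numpy as np
from scipy.stats import gaussian_kde

values = np.array([0.2, 13.5, 21.0, 75.0, 99.0])
weights = np.array([1, 2, 4, 8, 16])       # integer weights are fine now
kde = gaussian_kde(values, weights=weights)
```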

View File

@ -3010,10 +3010,48 @@ def pearsonr(x, y):
where :math:`m_x` is the mean of the vector :math:`x` and :math:`m_y` is
the mean of the vector :math:`y`.
Under the assumption that x and y are drawn from independent normal
distributions (so the population correlation coefficient is 0), the
probability density function of the sample correlation coefficient r
is ([1]_, [2]_)::
(1 - r**2)**(n/2 - 2)
f(r) = ---------------------
B(1/2, n/2 - 1)
where n is the number of samples, and B is the beta function. This
is sometimes referred to as the exact distribution of r. This is
the distribution that is used in `pearsonr` to compute the p-value.
The distribution is a beta distribution on the interval [-1, 1],
with equal shape parameters a = b = n/2 - 1. In terms of SciPy's
implementation of the beta distribution, the distribution of r is::
dist = scipy.stats.beta(n/2 - 1, n/2 - 1, loc=-1, scale=2)
The p-value returned by `pearsonr` is a two-sided p-value. For a
given sample with correlation coefficient r, the p-value is
the probability that abs(r') of a random sample x' and y' drawn from
the population with zero correlation would be greater than or equal
to abs(r). In terms of the object ``dist`` shown above, the p-value
for a given r and length n can be computed as::
p = 2*dist.cdf(-abs(r))
When n is 2, the above continuous distribution is not well-defined.
One can interpret the limit of the beta distribution as the shape
parameters a and b approach a = b = 0 as a discrete distribution with
equal probability masses at r = 1 and r = -1. More directly, one
can observe that, given the data x = [x1, x2] and y = [y1, y2], and
assuming x1 != x2 and y1 != y2, the only possible values for r are 1
and -1. Because abs(r') for any sample x' and y' with length 2 will
be 1, the two-sided p-value for a sample of length 2 is always 1.
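The recipe in the paragraphs above can be checked numerically; a minimal sketch comparing `pearsonr`'s p-value with the beta-distribution formula, on arbitrary random data:

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(1234)
x = rng.randn(25)
y = rng.randn(25)
r, p = stats.pearsonr(x, y)

# Exact null distribution of r: beta(n/2 - 1, n/2 - 1) on [-1, 1].
n = len(x)
dist = stats.beta(n/2 - 1, n/2 - 1, loc=-1, scale=2)
p_beta = 2 * dist.cdf(-abs(r))
```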
References
----------
http://www.statsoft.com/textbook/glosp.html#Pearson%20Correlation
.. [1] "Pearson correlation coefficient", Wikipedia,
https://en.wikipedia.org/wiki/Pearson_correlation_coefficient
.. [2] Student, "Probable error of a correlation coefficient",
Biometrika, Volume 6, Issue 2-3, 1 September 1908, pp. 302-310.
Examples
--------
@ -3629,11 +3667,10 @@ def kendalltau(x, y, initial_lexsort=None, nan_policy='propagate', method='auto'
elif size == 2:
pvalue = 1.0
elif c == 0:
pvalue = 2.0/np.math.factorial(size)
pvalue = 2.0/math.factorial(size) if size < 171 else 0.0
elif c == 1:
pvalue = 2.0/np.math.factorial(size-1)
pvalue = 2.0/math.factorial(size-1) if (size-1) < 171 else 0.0
else:
old = [0.0]*(c+1)
new = [0.0]*(c+1)
new[0] = 1.0
new[1] = 1.0
@ -3643,7 +3680,7 @@ def kendalltau(x, y, initial_lexsort=None, nan_policy='propagate', method='auto'
new[k] += new[k-1]
for k in range(j,c+1):
new[k] += new[k-1] - old[k-j]
pvalue = 2.0*sum(new)/np.math.factorial(size)
pvalue = 2.0*sum(new)/math.factorial(size) if size < 171 else 0.0
elif method == 'asymptotic':
# con_minus_dis is approx normally distributed with this variance [3]_
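The `size < 171` guards above exist because `math.factorial(171)` no longer fits in a double, so dividing a float by it raises `OverflowError`; by that point the exact p-value has underflowed to zero anyway. A sketch of the guard in isolation (`exact_tail` is a hypothetical helper mirroring the guarded expression, not SciPy API):

```python
import math

def exact_tail(size):
    # Fall back to 0.0 once the factorial exceeds float range,
    # as kendalltau's exact branch now does.
    return 2.0 / math.factorial(size) if size < 171 else 0.0

# float(math.factorial(170)) ~ 7.3e306 still fits in a double; 171! does not.
small = exact_tail(170)
zero = exact_tail(171)
```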

View File

@ -136,7 +136,7 @@ def test_cont_basic(distname, arg):
check_pickling(distfn, arg)
# Entropy
if distname not in ['ksone', 'kstwobign']:
if distname not in ['ksone', 'kstwobign', 'ncf', 'crystalball']:
check_entropy(distfn, arg, distname)
if distfn.numargs == 0:

View File

@ -1586,7 +1586,9 @@ class TestLevyStable(object):
stats.levy_stable.pdf_fft_min_points_threshold = fft_min_points
subdata = data[filter_func(data)] if filter_func is not None else data
with suppress_warnings() as sup:
sup.record(RuntimeWarning, "Cumulative density calculations experimental for FFT method.*")
sup.record(RuntimeWarning, 'FFT method is considered ' +
'experimental for cumulative distribution ' +
'function evaluations.*')
p = stats.levy_stable.cdf(subdata['x'], subdata['alpha'], subdata['beta'], scale=1, loc=0)
subdata2 = rec_append_fields(subdata, 'calc', p)
failures = subdata2[(np.abs(p-subdata['p']) >= 1.5*10.**(-decimal_places)) | np.isnan(p)]

View File

@ -3,7 +3,7 @@ from __future__ import division, print_function, absolute_import
from scipy import stats
import numpy as np
from numpy.testing import (assert_almost_equal, assert_,
assert_array_almost_equal, assert_array_almost_equal_nulp)
assert_array_almost_equal, assert_array_almost_equal_nulp, assert_allclose)
import pytest
from pytest import raises as assert_raises
@ -366,3 +366,28 @@ def test_pdf_logpdf_weighted():
pdf = np.log(gkde.evaluate(xn))
pdf2 = gkde.logpdf(xn)
assert_almost_equal(pdf, pdf2, decimal=12)
def test_weights_intact():
# regression test for gh-9709: weights are not modified
np.random.seed(12345)
vals = np.random.lognormal(size=100)
weights = np.random.choice([1.0, 10.0, 100], size=vals.size)
orig_weights = weights.copy()
stats.gaussian_kde(np.log10(vals), weights=weights)
assert_allclose(weights, orig_weights, atol=1e-14, rtol=1e-14)
def test_weights_integer():
# integer weights are OK, cf gh-9709 (comment)
np.random.seed(12345)
values = [0.2, 13.5, 21.0, 75.0, 99.0]
weights = [1, 2, 4, 8, 16] # a list of integers
pdf_i = stats.gaussian_kde(values, weights=weights)
pdf_f = stats.gaussian_kde(values, weights=np.float64(weights))
xn = [0.3, 11, 88]
assert_allclose(pdf_i.evaluate(xn),
pdf_f.evaluate(xn), atol=1e-14, rtol=1e-14)

View File

@ -977,6 +977,20 @@ def test_weightedtau():
assert_approx_equal(tau, -0.56694968153682723)
def test_kendall_tau_large():
n = 172.
x = np.arange(n)
y = np.arange(n)
_, pval = stats.kendalltau(x, y, method='exact')
assert_equal(pval, 0.0)
y[-1], y[-2] = y[-2], y[-1]
_, pval = stats.kendalltau(x, y, method='exact')
assert_equal(pval, 0.0)
y[-3], y[-4] = y[-4], y[-3]
_, pval = stats.kendalltau(x, y, method='exact')
assert_equal(pval, 0.0)
def test_weightedtau_vs_quadratic():
# Trivial quadratic implementation, all parameters mandatory
def wkq(x, y, rank, weigher, add):
@ -2070,7 +2084,10 @@ class TestIQR(object):
np.array([2, np.nan, 2]) / 1.3489795)
assert_equal(stats.iqr(x, axis=1, scale=2.0, nan_policy='propagate'),
[1, np.nan, 1])
_check_warnings(w, RuntimeWarning, 6)
if numpy_version <= '1.16.6':
_check_warnings(w, RuntimeWarning, 6)
else:
_check_warnings(w, RuntimeWarning, 0)
if numpy_version < '1.9.0a':
with warnings.catch_warnings(record=True) as w:

View File

@ -58,8 +58,8 @@ Operating System :: MacOS
MAJOR = 1
MINOR = 2
MICRO = 0
ISRELEASED = False
MICRO = 3
ISRELEASED = True
VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)
@ -111,9 +111,9 @@ def get_version_info():
elif os.path.exists('scipy/version.py'):
# must be a source distribution, use existing version file
# load it as a separate module to not load scipy/__init__.py
import imp
version = imp.load_source('scipy.version', 'scipy/version.py')
GIT_REVISION = version.git_revision
import runpy
ns = runpy.run_path('scipy/version.py')
GIT_REVISION = ns['git_revision']
else:
GIT_REVISION = "Unknown"

View File

@ -1,10 +1,9 @@
numpy
numpy<1.17
cython
pytest
pytest-timeout
pytest-xdist
pytest-env
pytest-faulthandler
Pillow
mpmath
matplotlib