Compare commits

...

62 Commits

Author SHA1 Message Date
Ralf Gommers 83eb32d32f REL: set version to 1.0.1 2018-03-23 20:12:35 -07:00
Ralf Gommers c977d14c49 DOC: add SciPy 1.0.1 release notes 2018-03-23 20:12:29 -07:00
Oleksandr Pavlyk 5d7a2c70ab Fix Cython 0.28 build break of qhull.pyx
Fixes https://github.com/cython/cython/issues/2154
while maintaining ability to use Cython 0.27.3 and earlier
2018-03-23 20:10:05 -07:00
Ralf Gommers 31125f500e
Merge pull request #8573 from rgommers/prep-1.0.1
Backports for 1.0.1
2018-03-22 11:59:48 -07:00
Ralf Gommers 0e5391d26f TST: fix refguide check issue with figure returned from voronoi_plot_2d 2018-03-18 21:44:35 -07:00
Pauli Virtanen d1b220c311 TST: stats: don't use exact equality check for hdmedian test
The test uses floating-point arithmetic, so better allow for rounding
error.

(cherry picked from commit c8438ce4b6)
2018-03-18 20:44:52 -07:00
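
A minimal sketch of the pattern this commit applies (illustrative data, not the actual test):

    import numpy as np
    from numpy.testing import assert_allclose
    from scipy.stats.mstats import hdmedian

    data = np.ma.array([1.0, 2.0, 3.0, 4.0, 5.0])  # symmetric around 3.0
    # Brittle: exact equality on a floating-point result.
    # assert hdmedian(data) == 3.0
    # Robust: allow for rounding error in the estimator.
    assert_allclose(hdmedian(data), 3.0, atol=1e-8)
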
Alessandro Pietro Bardelli baea883e1a BUG: missing check on minkowski w kwarg
(cherry picked from commit 32b3ec49a3)
2018-03-18 20:41:41 -07:00
Ralf Gommers ed92dbc68d CI: fix issue with macOS build on TravisCI. Taken over from gh-8197. 2018-03-18 20:06:51 -07:00
Ralf Gommers b2071dff4d TST: silence RuntimeWarning for stats test with nan's.
This was causing a failure on Appveyor in CI; not easily
reproducible on other platforms.
2018-03-18 19:54:23 -07:00
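
A generic way to silence such a warning locally (a sketch of the pattern, not the actual test change):

    import numpy as np

    a = np.array([-1.0, 4.0])
    with np.errstate(invalid="ignore"):   # suppress the RuntimeWarning
        result = np.sqrt(a)               # NaN for the negative entry
    assert np.isnan(result[0]) and result[1] == 2.0
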
Ralf Gommers f1bfbd30b8 CI: run refguide check with numpy 1.13.3 (avoid printing changes in 1.14) 2018-03-18 19:48:22 -07:00
Pauli Virtanen c6df39a45b BUG: signal/signaltools: fix wrong refcounting in PyArray_OrderFilterND
Use copyswap to get correct refcounting when copying values out from
sort_buffer (which has borrowed references).  Add a test that previously
triggered a segfault.

(cherry picked from commit 81bea4e943)
2018-03-18 19:41:29 -07:00
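
PyArray_OrderFilterND backs scipy.signal.order_filter; a hedged sketch of the kind of call that exercises it (that object-dtype input is what hit the borrowed-reference path is our assumption):

    import numpy as np
    from scipy.signal import order_filter

    a = np.arange(25, dtype=object).reshape(5, 5)  # object dtype is refcounted
    domain = np.ones((3, 3), dtype=bool)
    out = order_filter(a, domain, 4)               # rank 4 of 9: window median
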
xoviat 56441eb7d9 MAINT: resolve UPDATEIFCOPY deprecation errors (#8150)
(cherry picked from commit 73f723a6ec)
2018-03-17 23:53:44 -07:00
xoviat a923628e8d MAINT: fix NumPy deprecation test failures
Replace uses of `fromstring` on non-strings with
`frombuffer`. Closes gh-8067.

(cherry picked from commit ac6c856f31)
2018-03-17 23:16:32 -07:00
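
The replacement pattern, sketched:

    import numpy as np

    raw = np.arange(4, dtype=np.int32).tobytes()
    # Deprecated on binary (non-text) input:
    #   a = np.fromstring(raw, dtype=np.int32)
    a = np.frombuffer(raw, dtype=np.int32)  # same result, no DeprecationWarning
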
Ilhan Polat d328777962 Fixed Cython version to 0.27.3
Cython 0.28 broke the Appveyor tests.

(cherry picked from commit a0f063f348)
2018-03-17 23:07:40 -07:00
Philip DeBoer e5e65e2edf BUG: Fix random orthogonal matrices
(cherry picked from commit d1a9225e37)
2018-03-17 14:46:01 -07:00
Philip DeBoer 1fed376ab8 Fix handling of random_state
(cherry picked from commit 4cd688daa2)
2018-03-17 14:45:53 -07:00
Ted Ying 5367ec1389 TST: Add a test for stats.ortho_group's distribution of pairwise distances.
(cherry picked from commit c297e70a6b)
2018-03-17 14:45:38 -07:00
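
What the fixed stats.ortho_group provides, in a minimal sketch (matrices drawn from the Haar distribution over O(N), so determinants of both signs occur):

    import numpy as np
    from scipy.stats import ortho_group

    m = ortho_group.rvs(dim=3, random_state=1234)
    assert np.allclose(m @ m.T, np.eye(3))         # orthogonal
    assert np.isclose(abs(np.linalg.det(m)), 1.0)  # det is +1 or -1
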
Warren Weckesser 63d298f543 BUG: linalg: Fixed typo in flapack.pyf.src.
(cherry picked from commit faa46f7c1a)
2018-03-17 14:45:00 -07:00
Andrew Nelson 3e7d1c9e96 MAINT: prevent scipy.optimize.root segfault
(cherry picked from commit 9c647b774b)
(cherry picked from commit 2a8ece356a)
2018-03-17 14:44:28 -07:00
Anant Prakash 2d34679f41 REV: reintroduce csgraph import in scipy.sparse
(cherry picked from commit 66f886da74)
2018-03-17 14:43:16 -07:00
Denis Laxalde b50895dc33 BUG: declare "gfk" variable before call of terminate() in newton-cg
In _minimize_newtoncg(), the first call to terminate() might occur
before the "gfk" variable has been declared, leading to:

    NameError: free variable 'gfk' referenced before assignment in enclosing scope

This happens for instance if maxiter is negative as demonstrated in
added test case.

Closes #8241.

(cherry picked from commit be7496a2b3)
2018-03-17 14:42:50 -07:00
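
A hedged repro sketch following the description above (the objective function is ours, chosen only to trigger the code path):

    import numpy as np
    from scipy.optimize import minimize

    def fun(x):
        return (x[0] - 1.0) ** 2

    def jac(x):
        return np.array([2.0 * (x[0] - 1.0)])

    # With a negative maxiter, terminate() runs on the first check; this used
    # to raise NameError on 'gfk' instead of returning a result object.
    res = minimize(fun, [10.0], method='Newton-CG', jac=jac,
                   options={'maxiter': -1})
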
Matt Haberland 198a6d0102 BUG: optimize: fixed linprog presolve bug with trivial bounds #8234
(cherry-pick of the 3 commits in gh-8237)
2018-03-17 14:41:42 -07:00
Pauli Virtanen df86db185b MAINT: ndimage: move check before initializing iterators + explicit list of types 2018-03-17 14:40:24 -07:00
Saurabh Agarwal ce160d2d10 BUG: ndimage: check early whether input data type is supported
Also add a test for it.
2018-03-17 14:40:14 -07:00
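
A sketch of the failure mode (see gh-8207) and the user-side workaround:

    import numpy as np
    from scipy import ndimage

    a = np.zeros((8, 8), dtype=np.float16)
    # ndimage.gaussian_filter(a, sigma=1)  # unsupported dtype: now rejected
    #                                      # early instead of segfaulting
    out = ndimage.gaussian_filter(a.astype(np.float32), sigma=1)  # supported
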
Eric Larson abe0a94f40 FIX: Fix for cobyla 2018-03-17 14:17:03 -07:00
Mihai Capotă 54edac3d87 BUG: fix solve_lyapunov import
Currently, `scipy.linalg.solve_lyapunov` is not importable.

The documentation says:
"The old name is kept for backwards-compatibility."
https://docs.scipy.org/doc/scipy/reference/release.1.0.0.html#other-changes

However, the old name is missing from `__all__`, so it is not picked up
by `__init__.py`, despite the request in issue #6622 which sparked the
rename:
https://github.com/scipy/scipy/issues/6622#issuecomment-249624589
2018-03-17 14:16:00 -07:00
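
The mechanism, in a self-contained sketch built around a hypothetical module:

    import sys
    import types

    mod = types.ModuleType("mymod")  # hypothetical stand-in for scipy.linalg
    exec("__all__ = ['f']\ndef f(): pass\ndef g(): pass", mod.__dict__)
    sys.modules["mymod"] = mod

    ns = {}
    exec("from mymod import *", ns)
    assert "f" in ns and "g" not in ns  # g is skipped: missing from __all__
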
Ralf Gommers 7511430b36 REL: set version to 1.0.1.dev0 2017-10-25 23:40:50 +13:00
Ralf Gommers 11509c4a98 REL: set version to 1.0.0 2017-10-25 18:34:16 +13:00
Ralf Gommers 6893a1887c DOC: improve the 1.0.0 release notes.
[ci skip]
2017-10-25 18:34:16 +13:00
Ralf Gommers 883de6242a Merge pull request #8071 from rgommers/skip-linalg-test
TST: skip a problematic linalg test for 1.0.x
2017-10-25 18:22:57 +13:00
Ralf Gommers 23b3fe6721 TST: skip a problematic linalg test for 1.0.x
See gh-7500 and gh-8064.
2017-10-25 08:54:12 +13:00
Ralf Gommers 8bd19de6bb Merge pull request #8070 from rgommers/CoC-backport
DOC: add a SciPy Code of Conduct. backport of gh-7963
2017-10-25 08:37:36 +13:00
Ralf Gommers 7c09b93905 DOC: add a SciPy Code of Conduct.
This code of conduct has been discussed on the mailing list so far.
It starts with an adaptation of the Apache CoC.
The later bits about reporting and handling reports are derived from
the Contributor Covenant and the Jupyter CoC.

DOC: update Code of Conduct for review comments on gh-7963.

Version of email text on urgent interventions

I added a rewritten and more specific version of the text I sent to the
mailing list today, as discussed on the hangout.

I also tried to address Nathaniel's point about the importance of
addressing subtle discrimination, particularly against women.  I wasn't
sure what to add to the procedure, but I thought at least it was worth
emphasizing that we are aware of the problem, and that we intend the
procedure to apply.

DOC: update CoC for last review comments on gh-7963.

[ci skip]

(backport of gh-7963)
2017-10-24 22:17:48 +13:00
Ilhan Polat 9bcaf3f6c5 TST: Relax test_trsm precision to 5 decimals
And switch to triangular solver.

(backport of gh-7986)
2017-10-18 11:13:28 +13:00
Ralf Gommers 69c0d190f7 REL: set version to 1.0.0rc2 2017-10-17 20:22:28 +13:00
Ralf Gommers c2408b5857 Merge pull request #8041 from rgommers/rc2
MAINT: backports of PRs for 1.0.0rc2
2017-10-17 20:20:06 +13:00
Ralf Gommers 1fc7221049 DOC: update release notes for changes in 1.0.0rc2
[ci skip]
2017-10-17 20:16:27 +13:00
Pauli Virtanen 275745eb87 MAINT: refguide_check: add more matplotlib-related stopwords 2017-10-17 20:15:38 +13:00
Josh Wilson 2894096b43 TST: fix test failures from Matplotlib 2.1 update
Closes gh-7997.
2017-10-17 20:15:38 +13:00
Josh Wilson 5ba17256b0 MAINT: fix test failures with NumPy master
The failures were caused by `friedmanchisquare` calling `np.nonzero`
on empty arrays, and fixed by instead checking whether the size of the
arrays was positive. Closes gh-8016.

(backport of gh-8019)
2017-10-17 20:15:37 +13:00
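
The guard pattern the commit describes, sketched generically:

    import numpy as np

    def nonzero_indices(arr):
        # Check the size first instead of calling np.nonzero unconditionally
        # on inputs that newer NumPy started rejecting.
        if arr.size > 0:
            return np.nonzero(arr)[0]
        return np.array([], dtype=np.intp)
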
Pauli Virtanen 17a2e9bfe9 BUG: signal: fix crash in lfilter
(cherry picked from commit e04b1d2c88)
2017-10-17 20:15:37 +13:00
Pauli Virtanen 15ed0270fd MAINT: signal: drop unused py3-compat defines
(cherry picked from commit 4d208a55dd)
2017-10-17 20:15:37 +13:00
Oleksandr Pavlyk 7ca603a3c3 BUG: fixed keyword in 'info' in _get_mem_available utility
This addresses GH-7972

(cherry picked from commit 26892c2169)
2017-10-16 22:20:07 +13:00
Pauli Virtanen b8584aaa84 TST: special: mark a failing hyp2f1 test as xfail
mpmath prior to 1.0.0 had the same bug, so this wasn't caught before.

(cherry picked from commit b28b292621)
2017-10-16 22:18:21 +13:00
Pauli Virtanen db1b6c3659 TST: special: remove workaround for mpmath>=1.0.0 in a test
(cherry picked from commit c00e206d6c)
2017-10-16 22:18:13 +13:00
Pauli Virtanen 670cb02ac4 BUG: optimize: revert changes to bfgs in gh-7165
This reverts commit 56e42a5eef, reversing
changes made to c324145ac0.

Further testing showed some issues (gh-7959):

- The "if k == 0" hessian scaling on first step does not necessarily
  play along well with the initial step scaling done using the line
  search. Whether it really is incorrect is unclear, but this would
  require further investigation looking at the test cases in
  statsmodels.

- If this condition is removed, it appears the new formula may lose
  precision when yk_sk is large, indicated by failures in
  TestOptimizeSimple.test_initial_step_scaling.

(cherry picked from commit b0bf5d8d76)
2017-10-16 22:17:19 +13:00
Ralf Gommers f7f861796d DOC: update SciPy Roadmap for 1.0 release and recent discussions.
(cherry picked from commit 75dfc03601)
2017-10-16 22:16:27 +13:00
Ralf Gommers 8dbde91b3e DOC: add note on checking for deprecations before upgrade to release notes
[ci skip]
2017-10-16 22:15:35 +13:00
Ralf Gommers 2e9f10792b REL: set version to 1.0.0rc2.dev0
[ci skip]
2017-10-16 22:14:25 +13:00
Ralf Gommers 4f57a98d14 REL: set version to 1.0.0rc1 2017-09-27 22:31:21 +13:00
Ralf Gommers 4d3c4be755 DOC: update 1.0.0 release notes for backports
[ci skip]
2017-09-27 22:20:44 +13:00
Robert Cimrman 6cbb1f8219 FIX: fix UmfpackContext family selection in _get_umf_family() for win-amd64
- allow synonyms using np.sctypeDict
- add error message for unsupported matrix/indices types

(cherry picked from commit 9affaa6b12)
2017-09-27 22:03:29 +13:00
Ralf Gommers 774b17d341 Merge pull request #7938 from rgommers/backports
MAINT: backports from 1.0.x
2017-09-27 22:02:09 +13:00
Ralf Gommers 243426cca6 TST: skip an integrate.ode test that emits spurious warnings.
See gh-7888 for details.
2017-09-26 20:11:53 +13:00
Warren Weckesser 959df3ab4a MAINT: signal: Make multidim. behavior of freqz match 0.19.1.
(cherry picked from commit d88f427213)
2017-09-26 20:09:51 +13:00
Eric Larson f452f3c344 FIX: Avoid bad __del__ (close) behavior
(cherry picked from commit 4f42d42b64)
2017-09-26 20:08:17 +13:00
Ralf Gommers 73bfdfab4f TST: mark two optimize.linprog tests as xfail. See gh-7877.
Also fix up an assert to be more informative.

(cherry picked from commit ee48a1eb74)
2017-09-26 20:07:21 +13:00
Nick Papior 365713ca71 BUG: linalg: fix speed regression in solve()
This fixes the speed regression in gh-7847. Since the default
change from sv to svx, the solve routine suffered a huge performance
penalty for anything but low-order NRHS.

This commit fixes that issue by converting to the sv routines,
with condition number checking.

However, the main benefit of using svx was the easy check
of the condition number to assert a non-singular matrix.
This commit adds a call to the con routines to extract the
appropriate condition number. This forces the addition of:

   - lamch (machine precision extraction)
   - gecon (condition number calculation from LU factorization)
   - lange (1/I-norm of matrix)

This commit adds a ValueError issued for complex matrices
and transposed=True (which was a bug in prior versions).
The complex transposed case could be solved, but it is ambiguous whether
it should be the transpose or the Hermitian transpose. Hence, a ValueError is
raised. See master (#7879) for a fix.

A couple of additional tests have been added, mainly to check the arguments.

(cherry picked from commit 8e56f42775)
2017-09-26 20:06:30 +13:00
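
A hedged sketch of the ?con workflow described above, using the LAPACK wrappers this commit added (not the actual SciPy internals):

    import numpy as np
    from scipy.linalg.lapack import dgecon, dgetrf, dlange

    a = np.array([[3.0, 1.0], [1.0, 2.0]])
    anorm = dlange('1', a)           # 1-norm of A
    lu, piv, info = dgetrf(a)        # LU factorization
    rcond, info = dgecon(lu, anorm)  # reciprocal condition number (1-norm)
    if rcond < np.finfo(float).eps:
        raise np.linalg.LinAlgError("matrix is singular to working precision")
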
Nick Papior bbf799084e MAINT: changed defaults to lower in sytf2, sytrf and hetrf
For consistency the default lower flag is now False.

Signed-off-by: Nick Papior <nickpapior@gmail.com>
(cherry picked from commit 3c08ecae62)
2017-09-26 20:05:37 +13:00
Alessandro Pietro Bardelli eb1c5d4875 REV: revert changes to spatial.distance.wminkowski
DOC: Update 1.0.0-notes.rst

Deprecation of wminkowski and hamming.
Deprecation of args in Xdist.
Different implementation/behaviour of weighted minkowski.

(cherry picked from commit 40eb5d973c)

REV: restored code for wminkowski and soft deprecate it

(cherry picked from commit d82cb89f8c)

DOC: slight changes to release note edits for spatial.distance

(cherry picked from commit 1c2907e54e)
2017-09-26 20:03:14 +13:00
Ralf Gommers 2ab5d9f02f REL: set version to 1.0.0rc1.dev0 2017-09-26 20:01:13 +13:00
Ralf Gommers dfe9a1f433 REL: set version to 1.0.0b1 2017-09-17 21:54:03 +12:00
60 changed files with 1491 additions and 415 deletions

View File

@@ -35,6 +35,7 @@ matrix:
- COVERAGE=
- USE_WHEEL=1
- REFGUIDE_CHECK=1
- NUMPYSPEC="numpy==1.13.3"
- python: 3.4
env:
- TESTMODE=fast
@@ -74,6 +75,7 @@ before_install:
export PATH=/usr/lib/ccache:$PATH
free -m
elif [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
brew cask uninstall oclint
brew install gcc ccache
touch config.sh
git clone https://github.com/matthew-brett/multibuild.git
@@ -101,16 +103,16 @@ before_install:
- df -h
- ulimit -a
- mkdir builds
- pushd builds
- cd builds
# Install gmpy2 dependencies
- mkdir -p $HOME/.local
- wget ftp://ftp.gnu.org/gnu/mpc/mpc-1.0.2.tar.gz
- tar xzvf mpc-1.0.2.tar.gz
- pushd mpc-1.0.2
- cd mpc-1.0.2
- ./configure --prefix=$HOME/.local
- make
- make install
- popd
- cd ..
- export CPATH=$HOME/.local/include
- export LIBRARY_PATH=$HOME/.local/lib
- export LD_LIBRARY_PATH=$HOME/.local/lib
@@ -129,7 +131,7 @@ before_install:
fi
- python -V
- ccache -s
- popd
- cd ..
- set -o pipefail
script:
- python -c 'import numpy as np; print("relaxed strides checking:", np.ones((10,1),order="C").flags.f_contiguous)'
@@ -151,14 +153,14 @@ script:
echo "setup.py sdist"
python tools/suppress_output.py python setup.py sdist
# Move out of source directory to avoid finding local scipy
pushd dist
cd dist
# Use pip --build option to make ccache work better.
# However, this option is partially broken
# (see https://github.com/pypa/pip/issues/4242)
# and some shenanigans are needed to make it work.
echo "pip install"
python ../tools/suppress_output.py pip install --build="$HOME/builds" --upgrade "file://`echo -n $PWD/scipy*`#egg=scipy" -v
popd
cd ..
USE_WHEEL_BUILD="--no-build"
fi
- export SCIPY_AVAILABLE_MEM=3G
@@ -169,9 +171,9 @@ after_success:
- ccache -s
# Upload coverage information
- if [ "${COVERAGE}" == "--coverage" ]; then
pushd build/testenv/lib/python*/site-packages/;
cd build/testenv/lib/python*/site-packages/;
codecov;
popd;
cd ..;
fi
notifications:
# Perhaps we should have status emails sent to the mailing list, but

View File

@@ -182,6 +182,7 @@ Charles Masson for the Wasserstein and the Cramér-von Mises statistical
distances.
Felix Lenders for implementing trust-trlib method.
Dezmond Goff for adding optional out parameter to pdist/cdist
Nick R. Papior for allowing a wider choice of solvers
Institutions
------------

View File

@@ -1,5 +1,5 @@
Name: scipy
Version: 0.19.0.dev0
Version: 1.0.1.dev0
Summary: SciPy: Scientific Library for Python
Url: https://www.scipy.org
DownloadUrl: https://github.com/scipy/scipy/releases

View File

@@ -5,16 +5,12 @@ SciPy Roadmap
Most of this roadmap is intended to provide a high-level view on what is
most needed per SciPy submodule in terms of new functionality, bug fixes, etc.
Part of those are must-haves for the ``1.0`` version of Scipy.
Furthermore it contains ideas for major new features - those are marked as such,
and are not needed for SciPy to become 1.0. Things not mentioned in this roadmap are
Besides important "business as usual" changes, it contains ideas for major new
features - those are marked as such, and are expected to take significant
dedicated effort. Things not mentioned in this roadmap are
not necessarily unimportant or out of scope, however we (the SciPy developers)
want to provide to our users and contributors a clear picture of where SciPy is
going and where help is needed most urgently.
When a module is in a 1.0-ready state, it means that it has the functionality
we consider essential and has an API and code quality (including documentation
and tests) that's of high enough quality.
going and where help is needed most.
General
@@ -26,20 +22,12 @@ those first on the scipy-dev mailing list.
API changes
```````````
In general, we want to take advantage of the major version change to fix some
known warts in the API. The change from 0.x.x to 1.x.x is the chance to fix
those API issues that we all know are ugly warts. Example: unify the
convention for specifying tolerances (including absolute, relative, argument
and function value tolerances) of the optimization functions. More API issues
will be noted in the module sections below. However, there should be
*clear value* in making a breaking change. The 1.0 version label is not a
license to just break things - see it as a normal release with a somewhat
more aggressive/extensive set of cleanups.
In general, we want to evolve the API to remove known warts,
*however, as far as possible, without breaking backwards compatibility*.
It should be made more clear what is public and what is private in SciPy.
Everything private should be underscored as much as possible. Now this is done
consistently when we add new code, but for 1.0 it should also be done for
existing code.
Also, it should be made (even) more clear what is public and what is private in
SciPy. Everything private should be named starting with an underscore as much
as possible.
Test coverage
@@ -48,19 +36,18 @@ Test coverage of code added in the last few years is quite good, and we aim for
a high coverage for all new code that is added. However, there is still a
significant amount of old code for which coverage is poor. Bringing that up to
the current standard is probably not realistic, but we should plug the biggest
holes. Additionally the coverage should be tracked over time and we should
ensure it only goes up.
holes.
Besides coverage there is also the issue of correctness - older code may have a
few tests that provide decent statement coverage, but that doesn't necessarily
say much about whether the code does what it says on the box. Therefore code
review of some parts of the code (``stats`` and ``signal`` in particular) is
necessary.
review of some parts of the code (``stats``, ``signal`` and ``ndimage`` in
particular) is necessary.
Documentation
`````````````
The documentation is in decent shape. Expanding of current docstrings and
The documentation is in good shape. Expanding of current docstrings and
putting them in the standard NumPy format should continue, so the number of
reST errors and glitches in the html docs decreases. Most modules also have a
tutorial in the reference guide that is a good introduction, however there are
@@ -69,9 +56,6 @@ a few missing or incomplete tutorials - this should be fixed.
Other
`````
Scipy 1.0 will likely contain more backwards-incompatible changes than a minor
release. Therefore we will have a longer-lived maintenance branch of the last
0.X release.
Regarding Cython code:
@@ -79,23 +63,21 @@ Regarding Cython code:
.so files too large. This needs measuring.
- Cython's old syntax for using NumPy arrays should be removed and replaced
with Cython memoryviews.
- New feature idea: more of the currently wrapped libraries should export
Cython-importable versions that can be used without linking.
Regarding build environments:
- NumPy and SciPy should both build from source on Windows with a MinGW-w64
toolchain and be compatible with Python
installations compiled with either the same MinGW or with MSVC.
- SciPy builds from source on Windows now with a MSVC + MinGW-w64 gfortran
toolchain. This still needs to prove itself, but is looking good so far.
- Support for Accelerate will be dropped, likely in SciPy 1.1.0. If there is
enough interest, we may want to write wrappers so the BLAS part of Accelerate
can still be used.
- Bento development has stopped, so the Bento build will remain in an experimental,
use-at-your-own-risk status. Only the people that use it will be
responsible for keeping the Bento build updated.
A more complete continuous integration setup is needed; at the moment we often
find out right before a release that there are issues on some less-often used
platform or Python version. At least needed are Windows (MSVC and MingwPy),
Linux and OS X builds, coverage of the lowest and highest Python and NumPy
versions that are supported.
Continuous integration is in good shape, it covers Windows, macOS and Linux, as well
as a range of versions of our dependencies and building release quality wheels.
Modules
-------
@@ -132,23 +114,16 @@ integrate
Needed for ODE solvers:
- Documentation is pretty bad, needs fixing
- A promising new ODE solver interface is in progress: gh-6326.
This needs to be finished and merged. After that, older API can
possibly be deprecated.
- A new ODE solver interface (``solve_ivp``) was added in SciPy 1.0.0.
In the future we can consider (soft-)deprecating the older API.
The numerical integration functions are in good shape. Support for integrating
complex-valued functions and integrating multiple intervals (see gh-3325) could
be added, but is not required for SciPy 1.0.
be added.
interpolate
```````````
Needed:
- Both fitpack and fitpack2 interfaces will be kept.
- splmake is deprecated; it is a different spline representation, and we need exactly one
- interp1d/interp2d are somewhat ugly but widely used, so we keep them.
Ideas for new features:
@@ -187,16 +162,13 @@ Ideas for new features:
misc
````
``scipy.misc`` will be removed as a public module. The functions in it can be
moved to other modules:
``scipy.misc`` will be removed as a public module. Most functions in it have
been moved to another submodule or deprecated. The few that are left:
- ``pilutil``, images : ``ndimage``
- ``comb``, ``factorial``, ``logsumexp``, ``pade`` : ``special``
- ``doccer`` : move to ``scipy._lib``
- ``doccer`` : move to ``scipy._lib`` (making it private)
- ``info``, ``who`` : these are NumPy functions
- ``derivative``, ``central_diff_weight`` : remove, replace with more extensive
functionality for numerical differentiation - likely in a new module
``scipy.diff`` (see below)
- ``derivative``, ``central_diff_weight`` : remove, possibly replacing them
with more extensive functionality for numerical differentiation.
ndimage
@@ -258,10 +230,10 @@ things are done in `interpolate`), and eliminate any duplication.
*Filter design*: merge `firwin` and `firwin2` so `firwin2` can be removed.
*Continuous-Time Linear Systems*: remove `lsim2`, `impulse2`, `step2`. Make
`lsim`, `impulse` and `step` "just work" for any input system. Improve
performance of ltisys (less internal transformations between different
representations). Fill gaps in lti system conversion functions.
*Continuous-Time Linear Systems*: remove `lsim2`, `impulse2`, `step2`. The
`lsim`, `impulse` and `step` functions now "just work" for any input system.
Further improve the performance of ``ltisys`` (fewer internal transformations
between different representations). Fill gaps in lti system conversion functions.
*Second Order Sections*: Make SOS filtering equally capable as existing
methods. This includes ltisys objects, an `lfiltic` equivalent, and numerically
@@ -362,20 +334,6 @@ stats
probably, this fits better in Statsmodels (which already has a lot more KDE
functionality).
``stats.mstats`` is a useful module for working with data with missing values.
One problem it has though is that in many cases the functions have diverged
from their counterparts in ``scipy.stats``. The ``mstats`` functions should be
updated so that the two sets of functions are consistent.
weave
`````
This module is deprecated and will be removed for Scipy 1.0. It has been
packaged as a separate package "weave", which users that still rely on the
functionality provided by ``scipy.weave`` should use.
Also note that this is the only module that was not ported to Python 3.
New modules under discussion
----------------------------
@@ -386,6 +344,9 @@ Currently Scipy doesn't provide much support for numerical differentiation.
A new ``scipy.diff`` module for that is discussed in
https://github.com/scipy/scipy/issues/2035. There's also a fairly detailed
GSoC proposal to build on, see `here <https://github.com/scipy/scipy/wiki/Proposal:-add-finite-difference-numerical-derivatives-as-scipy.diff>`_.
There has been a second (unsuccessful) GSoC project in 2017. Recent discussion
and the host of alternatives available make it unlikely that a new ``scipy.diff``
submodule will be added in the near future.
There is also ``approx_derivative`` in ``optimize``, which is still private
but could form a solid basis for this module.

View File

@@ -2,18 +2,127 @@
SciPy 1.0.0 Release Notes
==========================
.. note:: Scipy 1.0.0 is not released yet!
.. contents::
SciPy 1.0.0 is the culmination of 8 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and
better documentation. There have been a number of deprecations and
API changes in this release, which are documented below. All users
are encouraged to upgrade to this release, as there are a large number
of bug-fixes and optimizations. Moreover, our development attention
will now shift to bug-fix releases on the 1.0.x branch, and on adding
new features on the master branch.
We are extremely pleased to announce the release of SciPy 1.0, 16 years after
version 0.1 saw the light of day. It has been a long, productive journey to
get here, and we anticipate many more exciting new features and releases in the
future.
Why 1.0 now?
------------
A version number should reflect the maturity of a project - and SciPy has been
a mature and stable library, heavily used in production settings, for a long
time already. From that perspective, the 1.0 version number is long
overdue.
Some key project goals, both technical (e.g. Windows wheels and continuous
integration) and organisational (a governance structure, code of conduct and a
roadmap), have been achieved recently.
Many of us are a bit perfectionist, and therefore are reluctant to call
something "1.0" because it may imply that it's "finished" or "we are 100% happy
with it". This is normal for many open source projects, however that doesn't
make it right. We acknowledge to ourselves that it's not perfect, and there
are some dusty corners left (that will probably always be the case). Despite
that, SciPy is extremely useful to its users, on average has high quality code
and documentation, and gives the stability and backwards compatibility
guarantees that a 1.0 label implies.
Some history and perspectives
-----------------------------
- 2001: the first SciPy release
- 2005: transition to NumPy
- 2007: creation of scikits
- 2008: scipy.spatial module and first Cython code added
- 2010: moving to a 6-monthly release cycle
- 2011: SciPy development moves to GitHub
- 2011: Python 3 support
- 2012: adding a sparse graph module and unified optimization interface
- 2012: removal of scipy.maxentropy
- 2013: continuous integration with TravisCI
- 2015: adding Cython interface for BLAS/LAPACK and a benchmark suite
- 2017: adding a unified C API with scipy.LowLevelCallable; removal of scipy.weave
- 2017: SciPy 1.0 release
**Pauli Virtanen** is SciPy's Benevolent Dictator For Life (BDFL). He says:
*Truthfully speaking, we could have released a SciPy 1.0 a long time ago, so I'm
happy we do it now at long last. The project has a long history, and during the
years it has matured also as a software project. I believe it has well proved
its merit to warrant a version number starting with unity.*
*Since its conception 15+ years ago, SciPy has largely been written by and for
scientists, to provide a box of basic tools that they need. Over time, the set
of people active in its development has undergone some rotation, and we have
evolved towards a somewhat more systematic approach to development. Regardless,
this underlying drive has stayed the same, and I think it will also continue
propelling the project forward in future. This is all good, since not long
after 1.0 comes 1.1.*
**Travis Oliphant** is one of SciPy's creators. He says:
*I'm honored to write a note of congratulations to the SciPy developers and the
entire SciPy community for the release of SciPy 1.0. This release represents
a dream of many that has been patiently pursued by a stalwart group of pioneers
for nearly 2 decades. Efforts have been broad and consistent over that time
from many hundreds of people. From initial discussions to efforts coding and
packaging to documentation efforts to extensive conference and community
building, the SciPy effort has been a global phenomenon that it has been a
privilege to participate in.*
*The idea of SciPy was already in multiple people's minds in 1997 when I first
joined the Python community as a young graduate student who had just fallen in
love with the expressibility and extensibility of Python. The internet was
just starting to bring together like-minded mathematicians and scientists in
nascent electronically-connected communities. In 1998, there was a concerted
discussion on the matrix-SIG, python mailing list with people like Paul
Barrett, Joe Harrington, Perry Greenfield, Paul Dubois, Konrad Hinsen, David
Ascher, and others. This discussion encouraged me in 1998 and 1999 to
procrastinate my PhD and spend a lot of time writing extension modules to
Python that mostly wrapped battle-tested Fortran and C-code making it available
to the Python user. This work attracted the help of others like Robert Kern,
Pearu Peterson and Eric Jones who joined their efforts with mine in 2000 so
that by 2001, the first SciPy release was ready. This was long before GitHub
simplified collaboration and input from others; back then the "patch" command
and email were how you helped a project improve.*
*Since that time, hundreds of people have spent an enormous amount of time
improving the SciPy library and the community surrounding this library has
dramatically grown. I stopped being able to participate actively in developing
the SciPy library around 2010. Fortunately, at that time, Pauli Virtanen and
Ralf Gommers picked up the pace of development supported by dozens of other key
contributors such as David Cournapeau, Evgeni Burovski, Josef Perktold, and
Warren Weckesser. While I have only been able to admire the development of
SciPy from a distance for the past 7 years, I have never lost my love of the
project and the concept of community-driven development. I remain driven
even now by a desire to help sustain the development of not only the SciPy
library but many other affiliated and related open-source projects. I am
extremely pleased that SciPy is in the hands of a world-wide community of
talented developers who will ensure that SciPy remains an example of how
grass-roots, community-driven development can succeed.*
**Fernando Perez** offers a wider community perspective:
*The existence of a nascent Scipy library, and the incredible --if tiny by
today's standards-- community surrounding it is what drew me into the
scientific Python world while still a physics graduate student in 2001. Today,
I am awed when I see these tools power everything from high school education to
the research that led to the 2017 Nobel Prize in physics.*
*Don't be fooled by the 1.0 number: this project is a mature cornerstone of the
modern scientific computing ecosystem. I am grateful for the many who have
made it possible, and hope to be able to contribute again to it in the future.
My sincere congratulations to the whole team!*
Highlights of this release
--------------------------
Some of the highlights of this release are:
@@ -27,7 +136,16 @@ Some of the highlights of this release are:
- Many new BLAS and LAPACK functions were wrapped. The BLAS wrappers are now
complete.
This release requires Python 2.7 or 3.4+ and NumPy 1.8.2 or greater.
Upgrading and compatibility
---------------------------
There have been a number of deprecations and API changes in this release, which
are documented below. Before upgrading, we recommend that users check that
their own code does not use deprecated SciPy functionality (to do so, run your
code with ``python -Wd`` and check for ``DeprecationWarning`` s).
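
An equivalent in-process check, sketched:

    import warnings

    # Same effect as running ``python -Wd``: show deprecated SciPy calls.
    warnings.simplefilter("default", DeprecationWarning)
    # Or turn them into errors so a test suite fails fast:
    # warnings.simplefilter("error", DeprecationWarning)
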
This release requires Python 2.7 or >=3.4 and NumPy 1.8.2 or greater.
This is also the last release to support LAPACK 3.1.x - 3.3.x. Moving the
lowest supported LAPACK version to >3.2.x was long blocked by Apple Accelerate
@@ -187,6 +305,18 @@ The ``fillvalue`` of `scipy.signal.convolve2d` will be cast directly to the
dtypes of the input arrays in the future and checked that it is a scalar or
an array with a single element.
``scipy.spatial.distance.matching`` is deprecated. It is an alias of
`scipy.spatial.distance.hamming`, which should be used instead.
The implementation of `scipy.spatial.distance.wminkowski` was based on a wrong
interpretation of the metric definition. In SciPy 1.0 it has only been
deprecated in the documentation, to preserve backwards compatibility; users are
recommended to switch to the corrected `scipy.spatial.distance.minkowski`,
which implements the intended behaviour.
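
A hedged sketch of the recommended migration:

    import numpy as np
    from scipy.spatial.distance import minkowski

    u = np.array([1.0, 0.0])
    v = np.array([0.0, 1.0])
    w = np.array([2.0, 1.0])
    # Instead of the deprecated wminkowski(u, v, p, w), pass the weights to
    # the corrected minkowski via its ``w`` keyword:
    d = minkowski(u, v, p=2, w=w)
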
Positional arguments of `scipy.spatial.distance.pdist` and
`scipy.spatial.distance.cdist` should be replaced with their keyword versions.
Backwards incompatible changes
==============================
@@ -303,6 +433,7 @@ Authors
* Patrick Callier
* Mark Campanelli +
* CJ Carey
* Robert Cimrman
* Adam Cox +
* Michael Danilov +
* David Haberthür +
@@ -356,11 +487,12 @@ Authors
* Nathan Musoke +
* Andrew Nelson
* M.J. Nichol
* Nico Schlömer +
* Juan Nunez-Iglesias
* Arno Onken +
* Nick Papior +
* Dima Pasechnik +
* Ashwin Pathak +
* Oleksandr Pavlyk +
* Stefan Peterson
* Ilhan Polat
* Andrey Portnoy +
@@ -378,6 +510,7 @@ Authors
* Matt Ruffalo +
* Eli Sadoff +
* Pim Schellart
* Nico Schlömer +
* Klaus Sembritzki +
* Nikolay Shebanov +
* Jonathan Tammo Siebert
@@ -403,7 +536,7 @@ Authors
* Nikolay Zinov +
* Zé Vinícius +
A total of 118 people contributed to this release.
A total of 121 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully complete.
@@ -472,6 +605,10 @@ Issues closed for 1.0.0
- `#7776 <https://github.com/scipy/scipy/issues/7776>`__: tests require `nose`
- `#7798 <https://github.com/scipy/scipy/issues/7798>`__: contributor names for 1.0 release notes
- `#7828 <https://github.com/scipy/scipy/issues/7828>`__: 32-bit Linux test errors on TestCephes
- `#7893 <https://github.com/scipy/scipy/issues/7893>`__: scipy.spatial.distance.wminkowski behaviour change in 1.0.0b1
- `#7898 <https://github.com/scipy/scipy/issues/7898>`__: DOC: Window functions
- `#7959 <https://github.com/scipy/scipy/issues/7959>`__: BUG maybe: fmin_bfgs possibly broken in 1.0
- `#7969 <https://github.com/scipy/scipy/issues/7969>`__: scipy 1.0.0rc1 windows wheels depend on missing msvcp140.dll
Pull requests for 1.0.0
@@ -772,4 +909,20 @@ Pull requests for 1.0.0
- `#7872 <https://github.com/scipy/scipy/pull/7872>`__: TST: silence RuntimeWarning for stats.truncnorm test marked as...
- `#7874 <https://github.com/scipy/scipy/pull/7874>`__: TST: fix an optimize.linprog test that fails intermittently.
- `#7875 <https://github.com/scipy/scipy/pull/7875>`__: TST: filter two integration warnings in stats tests.
- `#7876 <https://github.com/scipy/scipy/pull/7876>`__: GEN: Add comments to the tests for clarification
- `#7891 <https://github.com/scipy/scipy/pull/7891>`__: ENH: backport #7879 to 1.0.x
- `#7902 <https://github.com/scipy/scipy/pull/7902>`__: MAINT: signal: Make freqz handling of multidim. arrays match...
- `#7905 <https://github.com/scipy/scipy/pull/7905>`__: REV: restore wminkowski
- `#7908 <https://github.com/scipy/scipy/pull/7908>`__: FIX: Avoid bad ``__del__`` (close) behavior
- `#7918 <https://github.com/scipy/scipy/pull/7918>`__: TST: mark two optimize.linprog tests as xfail. See gh-7877.
- `#7929 <https://github.com/scipy/scipy/pull/7929>`__: MAINT: changed defaults to lower in sytf2, sytrf and hetrf
- `#7939 <https://github.com/scipy/scipy/pull/7939>`__: Fix umfpack solver construction for win-amd64
- `#7948 <https://github.com/scipy/scipy/pull/7948>`__: DOC: add note on checking for deprecations before upgrade to...
- `#7952 <https://github.com/scipy/scipy/pull/7952>`__: DOC: update SciPy Roadmap for 1.0 release and recent discussions.
- `#7960 <https://github.com/scipy/scipy/pull/7960>`__: BUG: optimize: revert changes to bfgs in gh-7165
- `#7962 <https://github.com/scipy/scipy/pull/7962>`__: TST: special: mark a failing hyp2f1 test as xfail
- `#7973 <https://github.com/scipy/scipy/pull/7973>`__: BUG: fixed keyword in 'info' in ``_get_mem_available`` utility
- `#8001 <https://github.com/scipy/scipy/pull/8001>`__: TST: fix test failures from Matplotlib 2.1 update
- `#8010 <https://github.com/scipy/scipy/pull/8010>`__: BUG: signal: fix crash in lfilter
- `#8019 <https://github.com/scipy/scipy/pull/8019>`__: MAINT: fix test failures with NumPy master

View File

@@ -0,0 +1,69 @@
==========================
SciPy 1.0.1 Release Notes
==========================
.. contents::
SciPy 1.0.1 is a bug-fix release with no new features compared to 1.0.0.
Probably the most important change is a fix for an incompatibility between
SciPy 1.0.0 and ``numpy.f2py`` in the NumPy master branch.
Authors
=======
* Saurabh Agarwal +
* Alessandro Pietro Bardelli
* Philip DeBoer
* Ralf Gommers
* Matt Haberland
* Eric Larson
* Denis Laxalde
* Mihai Capotă +
* Andrew Nelson
* Oleksandr Pavlyk
* Ilhan Polat
* Anant Prakash +
* Pauli Virtanen
* Warren Weckesser
* @xoviat
* Ted Ying +
A total of 16 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully complete.
Issues closed for 1.0.1
-----------------------
- `#7493 <https://github.com/scipy/scipy/issues/7493>`__: `ndimage.morphology` functions are broken with numpy 1.13.0
- `#8118 <https://github.com/scipy/scipy/issues/8118>`__: minimize_cobyla broken if `disp=True` passed
- `#8142 <https://github.com/scipy/scipy/issues/8142>`__: scipy-v1.0.0 pdist with metric=`minkowski` raises `ValueError:...
- `#8173 <https://github.com/scipy/scipy/issues/8173>`__: `scipy.stats.ortho_group` produces all negative determinants...
- `#8207 <https://github.com/scipy/scipy/issues/8207>`__: gaussian_filter seg faults on float16 numpy arrays
- `#8234 <https://github.com/scipy/scipy/issues/8234>`__: `scipy.optimize.linprog` `interior-point` presolve bug with trivial...
- `#8243 <https://github.com/scipy/scipy/issues/8243>`__: Make csgraph importable again via `from scipy.sparse import*`
- `#8320 <https://github.com/scipy/scipy/issues/8320>`__: scipy.root segfaults with optimizer 'lm'
Pull requests for 1.0.1
-----------------------
- `#8068 <https://github.com/scipy/scipy/pull/8068>`__: BUG: fix numpy deprecation test failures
- `#8082 <https://github.com/scipy/scipy/pull/8082>`__: BUG: fix solve_lyapunov import
- `#8144 <https://github.com/scipy/scipy/pull/8144>`__: MRG: Fix for cobyla
- `#8150 <https://github.com/scipy/scipy/pull/8150>`__: MAINT: resolve UPDATEIFCOPY deprecation errors
- `#8156 <https://github.com/scipy/scipy/pull/8156>`__: BUG: missing check on minkowski w kwarg
- `#8187 <https://github.com/scipy/scipy/pull/8187>`__: BUG: Sign of elements in random orthogonal 2D matrices in "ortho_group_gen"...
- `#8197 <https://github.com/scipy/scipy/pull/8197>`__: CI: uninstall oclint
- `#8215 <https://github.com/scipy/scipy/pull/8215>`__: Fixes Numpy datatype compatibility issues
- `#8237 <https://github.com/scipy/scipy/pull/8237>`__: BUG: optimize: fix bug when variables fixed by bounds are inconsistent...
- `#8248 <https://github.com/scipy/scipy/pull/8248>`__: BUG: declare "gfk" variable before call of terminate() in newton-cg
- `#8280 <https://github.com/scipy/scipy/pull/8280>`__: REV: reintroduce csgraph import in scipy.sparse
- `#8322 <https://github.com/scipy/scipy/pull/8322>`__: MAINT: prevent scipy.optimize.root segfault closes #8320
- `#8334 <https://github.com/scipy/scipy/pull/8334>`__: TST: stats: don't use exact equality check for hdmedian test
- `#8477 <https://github.com/scipy/scipy/pull/8477>`__: BUG: signal/signaltools: fix wrong refcounting in PyArray_OrderFilterND
- `#8530 <https://github.com/scipy/scipy/pull/8530>`__: BUG: linalg: Fixed typo in flapack.pyf.src.
- `#8566 <https://github.com/scipy/scipy/pull/8566>`__: CI: Temporarily pin Cython version to 0.27.3
- `#8573 <https://github.com/scipy/scipy/pull/8573>`__: Backports for 1.0.1
- `#8581 <https://github.com/scipy/scipy/pull/8581>`__: Fix Cython 0.28 build break of qhull.pyx

View File

@@ -0,0 +1,165 @@
SciPy Code of Conduct
=====================
Introduction
------------
This code of conduct applies to all spaces managed by the SciPy project,
including all public and private mailing lists, issue trackers, wikis, blogs,
Twitter, and any other communication channel used by our community. The SciPy
project does not organise in-person events; however, events related to our
community should have a code of conduct similar in spirit to this one.
This code of conduct should be honored by everyone who participates in
the SciPy community formally or informally, or claims any affiliation with the
project, in any project-related activities and especially when representing the
project, in any role.
This code is not exhaustive or complete. It serves to distill our common
understanding of a collaborative, shared environment and goals. Please try to
follow this code in spirit as much as in letter, to create a friendly and
productive environment that enriches the surrounding community.
Specific Guidelines
-------------------
We strive to:
1. Be open. We invite anyone to participate in our community. We prefer to use
public methods of communication for project-related messages, unless
discussing something sensitive. This applies to messages for help or
project-related support, too; not only is a public support request much more
likely to result in an answer to a question, it also ensures that any
inadvertent mistakes in answering are more easily detected and corrected.
2. Be empathetic, welcoming, friendly, and patient. We work together to resolve
conflict, and assume good intentions. We may all experience some frustration
from time to time, but we do not allow frustration to turn into a personal
attack. A community where people feel uncomfortable or threatened is not a
productive one.
3. Be collaborative. Our work will be used by other people, and in turn we will
depend on the work of others. When we make something for the benefit of the
project, we are willing to explain to others how it works, so that they can
build on the work to make it even better. Any decision we make will affect
users and colleagues, and we take those consequences seriously when making
decisions.
4. Be inquisitive. Nobody knows everything! Asking questions early avoids many
problems later, so we encourage questions, although we may direct them to
the appropriate forum. We will try hard to be responsive and helpful.
5. Be careful in the words that we choose. We are careful and respectful in
our communication and we take responsibility for our own speech. Be kind to
others. Do not insult or put down other participants. We will not accept
harassment or other exclusionary behaviour, such as:
- Violent threats or language directed against another person.
- Sexist, racist, or otherwise discriminatory jokes and language.
- Posting sexually explicit or violent material.
- Posting (or threatening to post) other people's personally identifying information ("doxing").
- Sharing private content, such as emails sent privately or non-publicly, or unlogged forums such as IRC channel history, without the sender's consent.
- Personal insults, especially those using racist or sexist terms.
- Unwelcome sexual attention.
- Excessive profanity. Please avoid swearwords; people differ greatly in their sensitivity to swearing.
- Repeated harassment of others. In general, if someone asks you to stop, then stop.
- Advocating for, or encouraging, any of the above behaviour.
Diversity Statement
-------------------
The SciPy project welcomes and encourages participation by everyone. We are
committed to being a community that everyone enjoys being part of. Although
we may not always be able to accommodate each individual's preferences, we try
our best to treat everyone kindly.
No matter how you identify yourself or how others perceive you: we welcome you.
Though no list can hope to be comprehensive, we explicitly honour diversity in:
age, culture, ethnicity, genotype, gender identity or expression, language,
national origin, neurotype, phenotype, political beliefs, profession, race,
religion, sexual orientation, socioeconomic status, subculture and technical
ability.
Though we welcome people fluent in all languages, SciPy development is
conducted in English.
Standards for behaviour in the SciPy community are detailed in the Code of
Conduct above. Participants in our community should uphold these standards
in all their interactions and help others to do so as well (see next section).
Reporting Guidelines
--------------------
We know that it is painfully common for internet communication to start at or
devolve into obvious and flagrant abuse. We also recognize that sometimes
people may have a bad day, or be unaware of some of the guidelines in this Code
of Conduct. Please keep this in mind when deciding on how to respond to a
breach of this Code.
For clearly intentional breaches, report those to the Code of Conduct committee
(see below). For possibly unintentional breaches, you may reply to the person
and point out this code of conduct (either in public or in private, whatever is
most appropriate). If you would prefer not to do that, please feel free to
report to the Code of Conduct Committee directly, or ask the Committee for
advice, in confidence.
You can report issues to the SciPy Code of Conduct committee, at
scipy-conduct@googlegroups.com. Currently, the committee consists of:
- Stefan van der Walt
- Nathaniel J. Smith
- Ralf Gommers
If your report involves any members of the committee, or if they feel they have
a conflict of interest in handling it, then they will recuse themselves from
considering your report. Alternatively, if for any reason you feel
uncomfortable making a report to the committee, then you can also contact:
- Chair of the SciPy Steering Committee: Ralf Gommers, or
- Executive Director of NumFOCUS: Leah Silen
Incident reporting resolution & Code of Conduct enforcement
-----------------------------------------------------------
*This section summarizes the most important points, more details can be found
in* :ref:`CoC_reporting_manual`.
We will investigate and respond to all complaints. The SciPy Code of Conduct
Committee and the SciPy Steering Committee (if involved) will protect the
identity of the reporter, and treat the content of complaints as confidential
(unless the reporter agrees otherwise).
In case of severe and obvious breaches, e.g. personal threat or violent, sexist
or racist language, we will immediately disconnect the originator from SciPy
communication channels; please see the manual for details.
In cases not involving clear severe and obvious breaches of this code of
conduct, the process for acting on any received code of conduct violation
report will be:
1. acknowledge report is received
2. reasonable discussion/feedback
3. mediation (if feedback didn't help, and only if both reporter and reportee agree to this)
4. enforcement via transparent decision (see :ref:`CoC_resolutions`) by the
Code of Conduct Committee
The committee will respond to any report as soon as possible, and at most
within 72 hours.
Endnotes
--------
We are thankful to the groups behind the following documents, from which we
drew content and inspiration:
- `The Apache Foundation Code of Conduct <https://www.apache.org/foundation/policies/conduct.html>`_
- `The Contributor Covenant <https://www.contributor-covenant.org/version/1/4/code-of-conduct/>`_
- `Jupyter Code of Conduct <https://github.com/jupyter/governance/tree/master/conduct>`_
- `Open Source Guides - Code of Conduct <https://opensource.guide/code-of-conduct/>`_

View File

@@ -0,0 +1,218 @@
.. _CoC_reporting_manual:
SciPy Code of Conduct - How to follow up on a report
----------------------------------------------------
This is the manual followed by SciPy's Code of Conduct Committee. It's used
when we respond to an issue to make sure we're consistent and fair.
Enforcing the Code of Conduct impacts our community today and for the future.
It's an action that we do not take lightly. When reviewing enforcement
measures, the Code of Conduct Committee will keep the following values and
guidelines in mind:
* Act in a personal manner rather than impersonal. The Committee can engage
the parties to understand the situation, while respecting the privacy and any
necessary confidentiality of reporters. However, sometimes it is necessary
to communicate with one or more individuals directly: the Committee's goal is
to improve the health of our community rather than only produce a formal
decision.
* Emphasize empathy for individuals rather than judging behavior, avoiding
binary labels of "good" and "bad/evil". Overt, clear-cut aggression and
harassment exist, and we will address them firmly. But many scenarios
that can prove challenging to resolve are those where normal disagreements
devolve into unhelpful or harmful behavior from multiple parties.
Understanding the full context and finding a path that re-engages all is
hard, but ultimately the most productive for our community.
* We understand that email is a difficult medium and can be isolating.
Receiving criticism over email, without personal contact, can be
particularly painful. This makes it especially important to keep an
atmosphere of open-minded respect of the views of others. It also means
that we must be transparent in our actions, and that we will do everything
in our power to make sure that all our members are treated fairly and with
sympathy.
* Discrimination can be subtle and it can be unconscious. It can show itself
as unfairness and hostility in otherwise ordinary interactions. We know
that this does occur, and we will take care to look out for it. We would
very much like to hear from you if you feel you have been treated unfairly,
and we will use these procedures to make sure that your complaint is heard
and addressed.
* Help increase engagement in good discussion practice: try to identify where
discussion may have broken down and provide actionable information, pointers
and resources that can lead to positive change on these points.
* Be mindful of the needs of new members: provide them with explicit support
and consideration, with the aim of increasing participation from
underrepresented groups in particular.
* Individuals come from different cultural backgrounds and native languages.
Try to identify any honest misunderstandings caused by a non-native speaker
and help them understand the issue and what they can change to avoid causing
offence. Complex discussion in a foreign language can be very intimidating,
and we want to grow our diversity also across nationalities and cultures.
*Mediation*: voluntary, informal mediation is a tool at our disposal. In
contexts such as when two or more parties have all escalated to the point of
inappropriate behavior (something sadly common in human conflict), it may be
useful to facilitate a mediation process. This is only an example: the
Committee can consider mediation in any case, mindful that the process is meant
to be strictly voluntary and no party can be pressured to participate. If the
Committee suggests mediation, it should:
* Find a candidate who can serve as a mediator.
* Obtain the agreement of the reporter(s). The reporter(s) have complete
freedom to decline the mediation idea, or to propose an alternate mediator.
* Obtain the agreement of the reported person(s).
* Settle on the mediator: while parties can propose a different mediator than
the suggested candidate, only if common agreement is reached on all terms can
the process move forward.
* Establish a timeline for mediation to complete, ideally within two weeks.
The mediator will engage with all the parties and seek a resolution that is
satisfactory to all. Upon completion, the mediator will provide a report
(vetted by all parties to the process) to the Committee, with recommendations
on further steps. The Committee will then evaluate these results (whether
satisfactory resolution was achieved or not) and decide on any additional
action deemed necessary.
How the committee will respond to reports
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When the committee (or a committee member) receives a report, they will first
determine whether the report is about a clear and severe breach (as defined
below). If so, immediate action needs to be taken in addition to the regular
report handling process.
Clear and severe breach actions
+++++++++++++++++++++++++++++++
We know that it is painfully common for internet communication to start at or
devolve into obvious and flagrant abuse. We will deal quickly with clear and
severe breaches like personal threats, violent, sexist or racist language.
When a member of the Code of Conduct committee becomes aware of a clear and
severe breach, they will do the following:
* Immediately disconnect the originator from all SciPy communication channels.
* Reply to the reporter that their report has been received and that the
originator has been disconnected.
* In every case, the moderator should make a reasonable effort to contact the
originator, and tell them specifically how their language or actions
qualify as a "clear and severe breach". The moderator should also say
that, if the originator believes this is unfair or they want to be
reconnected to SciPy, they have the right to ask for a review, as below, by
the Code of Conduct Committee.
The moderator should copy this explanation to the Code of Conduct Committee.
* The Code of Conduct Committee will formally review and sign off on all cases
where this mechanism has been applied to make sure it is not being used to
control ordinary heated disagreement.
Report handling
+++++++++++++++
When a report is sent to the committee they will immediately reply to the
reporter to confirm receipt. This reply must be sent within 72 hours, and the
group should strive to respond much quicker than that.
If a report doesn't contain enough information, the committee will obtain all
relevant data before acting. The committee is empowered to act on the Steering
Council's behalf in contacting any individuals involved to get a more complete
account of events.
The committee will then review the incident and determine, to the best of their
ability:
* What happened.
* Whether this event constitutes a Code of Conduct violation.
* Who are the responsible party(ies).
* Whether this is an ongoing situation, and there is a threat to anyone's
physical safety.
This information will be collected in writing, and whenever possible the
group's deliberations will be recorded and retained (i.e. chat transcripts,
email discussions, recorded conference calls, summaries of voice conversations,
etc).
It is important to retain an archive of all activities of this committee to
ensure consistency in behavior and provide institutional memory for the
project. To assist in this, the default channel of discussion for this
committee will be a private mailing list accessible to current and future
members of the committee as well as members of the Steering Council upon
justified request. If the Committee finds the need to use off-list
communications (e.g. phone calls for early/rapid response), it should in all
cases summarize these back to the list so there's a good record of the process.
The Code of Conduct Committee should aim to have a resolution agreed upon within
two weeks. In the event that a resolution can't be determined in that time, the
committee will respond to the reporter(s) with an update and projected timeline
for resolution.
.. _CoC_resolutions:
Resolutions
~~~~~~~~~~~
The committee must agree on a resolution by consensus. If the group cannot reach
consensus and deadlocks for over a week, the group will turn the matter over to
the Steering Council for resolution.
Possible responses may include:
* Taking no further action
- if we determine no violations have occurred.
- if the matter has been resolved publicly while the committee was considering responses.
* Coordinating voluntary mediation: if all involved parties agree, the
Committee may facilitate a mediation process as detailed above.
* A public reminder, pointing out that particular behavior, actions, or
language have been judged inappropriate in the current context and why, or
can be hurtful to some people, and requesting that the community self-adjust.
* A private reprimand from the committee to the individual(s) involved. In this
case, the group chair will deliver that reprimand to the individual(s) over
email, cc'ing the group.
* A public reprimand. In this case, the committee chair will deliver that
reprimand in the same venue in which the violation occurred, within the limits
of practicality: e.g., the original mailing list for an email violation; for
a chat room discussion where the person or context may be gone, the reprimand
can be delivered by other means. The group may choose to publish this message
elsewhere for documentation purposes.
* A request for a public or private apology, assuming the reporter agrees to
this idea; they may at their discretion refuse further contact with the
violator. The chair will deliver this request. The committee may, if it
chooses, attach "strings" to this request: for example, the group may ask a
violator to apologize in order to retain their membership on a mailing list.
* A "mutually agreed upon hiatus" where the committee asks the individual to
temporarily refrain from community participation. If the individual chooses
not to take a temporary break voluntarily, the committee may issue a
"mandatory cooling off period".
* A permanent or temporary ban from some or all SciPy spaces (mailing lists,
gitter.im, etc.). The group will maintain records of all such bans so that
they may be reviewed in the future or otherwise maintained.
Once a resolution is agreed upon, but before it is enacted, the committee will
contact the original reporter and any other affected parties and explain the
proposed resolution. The committee will ask if this resolution is acceptable,
and must note feedback for the record.
Finally, the committee will make a report to the SciPy Steering Council (as
well as the SciPy core team in the event of an ongoing resolution, such as a
ban).
The committee will never publicly discuss the issue; all public statements will
be made by the chair of the Code of Conduct Committee or the SciPy Steering
Council.
Conflicts of Interest
~~~~~~~~~~~~~~~~~~~~~
In the event of any conflict of interest, a committee member must immediately
notify the other members, and recuse themselves if necessary.

View File

@ -33,6 +33,7 @@ maintenance activities and policies.
.. toctree::
:maxdepth: 1
dev/conduct/code_of_conduct
hacking
dev/index
dev/governance/governance

View File

@ -0,0 +1 @@
.. include:: ../release/1.0.1-notes.rst

View File

@ -5,6 +5,7 @@ Release Notes
.. toctree::
:maxdepth: 1
release.1.0.1
release.1.0.0
release.0.19.1
release.0.19.0

View File

@ -136,9 +136,9 @@ through the ``jac`` parameter as illustrated below.
... options={'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 32 # may vary
Function evaluations: 34
Gradient evaluations: 34
Iterations: 51 # may vary
Function evaluations: 63
Gradient evaluations: 63
>>> res.x
array([1., 1., 1., 1., 1.])
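The iteration counts in this documented output changed because 1.0.1 reverts
the BFGS implementation (see the ``optimize.py`` diff further down). A minimal
sketch of the documented call pattern, assuming the BFGS method shown in the
surrounding docstring and using ``rosen``/``rosen_der`` from
``scipy.optimize``::

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
    # Supplying the exact gradient through ``jac`` avoids finite-difference
    # gradient evaluations; the printed counts may vary across versions.
    res = minimize(rosen, x0, method='BFGS', jac=rosen_der,
                   options={'disp': True})
    print(res.x)  # approximately [1. 1. 1. 1. 1.]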

View File

@ -115,11 +115,11 @@ except AttributeError:
#-----------------------------------
# Source of the release notes
RELEASE = 'doc/release/1.0.0-notes.rst'
RELEASE = 'doc/release/1.0.1-notes.rst'
# Start/end of the log (from git)
LOG_START = 'v0.19.0'
LOG_END = 'master'
LOG_START = 'v1.0.0'
LOG_END = '1.0.x'
#-------------------------------------------------------

View File

@ -123,6 +123,6 @@ def _get_mem_available():
# Linux >= 3.14
return info['memavailable']
else:
return info['memfree'] + info['memcached']
return info['memfree'] + info['cached']
return None

View File

@ -14,6 +14,7 @@ from scipy._lib.six import xrange
from numpy.testing import (
assert_, assert_array_almost_equal,
assert_allclose, assert_array_equal, assert_equal)
import pytest
from pytest import raises as assert_raises
from scipy.integrate import odeint, ode, complex_ode
@ -611,6 +612,7 @@ class ODECheckParameterUse(object):
solver.set_jac_params(omega)
self._check_solver(solver)
@pytest.mark.skip("Gives spurious warning messages, see gh-7888")
def test_warns_on_failure(self):
# Set nsteps small to ensure failure
solver = self._get_solver(f, jac)

View File

@ -281,15 +281,15 @@ def _read_array(f, typecode, array_desc):
warnings.warn("Not able to verify number of bytes from header")
# Read bytes as numpy array
array = np.fromstring(f.read(array_desc['nbytes']),
dtype=DTYPE_DICT[typecode])
array = np.frombuffer(f.read(array_desc['nbytes']),
dtype=DTYPE_DICT[typecode])
elif typecode in [2, 12]:
# These are 2 byte types, need to skip every two as they are not packed
array = np.fromstring(f.read(array_desc['nbytes']*2),
dtype=DTYPE_DICT[typecode])[1::2]
array = np.frombuffer(f.read(array_desc['nbytes']*2),
dtype=DTYPE_DICT[typecode])[1::2]
else:
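The same ``fromstring`` -> ``frombuffer`` substitution recurs in several files
below (netcdf, wavfile, misc). A minimal sketch of the replacement pattern,
with hypothetical data; note that ``np.frombuffer`` returns a read-only view
of the buffer, so a copy is needed before mutating::

    import numpy as np

    raw = np.arange(4, dtype='<i4').tobytes()
    a = np.frombuffer(raw, dtype='<i4')  # replaces deprecated np.fromstring
    b = a.copy()                         # frombuffer views are read-only
    b[0] = 99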

View File

@ -45,7 +45,7 @@ import mmap as mm
import numpy as np
from numpy.compat import asbytes, asstr
from numpy import fromstring, dtype, empty, array, asarray
from numpy import frombuffer, dtype, empty, array, asarray
from numpy import little_endian as LITTLE_ENDIAN
from functools import reduce
@ -276,7 +276,7 @@ class netcdf_file(object):
def close(self):
"""Closes the NetCDF file."""
if not self.fp.closed:
if hasattr(self, 'fp') and not self.fp.closed:
try:
self.flush()
finally:
@ -511,8 +511,8 @@ class netcdf_file(object):
# Handle rec vars with shape[0] < nrecs.
if self._recs > len(var.data):
shape = (self._recs,) + var.data.shape[1:]
# Resize in-place does not always work since
# the array might not be single-segment
# Resize in-place does not always work since
# the array might not be single-segment
try:
var.data.resize(shape)
except ValueError:
@ -584,7 +584,7 @@ class netcdf_file(object):
if not magic == b'CDF':
raise TypeError("Error: %s is not a valid NetCDF 3 file" %
self.filename)
self.__dict__['version_byte'] = fromstring(self.fp.read(1), '>b')[0]
self.__dict__['version_byte'] = frombuffer(self.fp.read(1), '>b')[0]
# Read file headers and set data.
self._read_numrecs()
@ -678,7 +678,7 @@ class netcdf_file(object):
else:
pos = self.fp.tell()
self.fp.seek(begin_)
data = fromstring(self.fp.read(a_size), dtype=dtype_)
data = frombuffer(self.fp.read(a_size), dtype=dtype_)
data.shape = shape
self.fp.seek(pos)
@ -700,7 +700,7 @@ class netcdf_file(object):
else:
pos = self.fp.tell()
self.fp.seek(begin)
rec_array = fromstring(self.fp.read(self._recs*self._recsize), dtype=dtypes)
rec_array = frombuffer(self.fp.read(self._recs*self._recsize), dtype=dtypes)
rec_array.shape = (self._recs,)
self.fp.seek(pos)
@ -743,7 +743,7 @@ class netcdf_file(object):
self.fp.read(-count % 4) # read padding
if typecode is not 'c':
values = fromstring(values, dtype='>%s' % typecode)
values = frombuffer(values, dtype='>%s' % typecode)
if values.shape == (1,):
values = values[0]
else:
@ -761,14 +761,14 @@ class netcdf_file(object):
_pack_int32 = _pack_int
def _unpack_int(self):
return int(fromstring(self.fp.read(4), '>i')[0])
return int(frombuffer(self.fp.read(4), '>i')[0])
_unpack_int32 = _unpack_int
def _pack_int64(self, value):
self.fp.write(array(value, '>q').tostring())
def _unpack_int64(self):
return fromstring(self.fp.read(8), '>q')[0]
return frombuffer(self.fp.read(8), '>q')[0]
def _pack_string(self, s):
count = len(s)
@ -987,8 +987,8 @@ class netcdf_variable(object):
recs = rec_index + 1
if recs > len(self.data):
shape = (recs,) + self._shape[1:]
# Resize in-place does not always work since
# the array might not be single-segment
# Resize in-place does not always work since
# the array might not be single-segment
try:
self.data.resize(shape)
except ValueError:

View File

@ -126,7 +126,7 @@ def _read_data_chunk(fid, format_tag, channels, bit_depth, is_big_endian,
else:
dtype += 'f%d' % bytes_per_sample
if not mmap:
data = numpy.fromstring(fid.read(size), dtype=dtype)
data = numpy.frombuffer(fid.read(size), dtype=dtype)
else:
start = fid.tell()
data = numpy.memmap(fid, dtype=dtype, mode='c', offset=start,

View File

@ -25,6 +25,7 @@ from .special_matrices import kron, block_diag
__all__ = ['solve_sylvester',
'solve_continuous_lyapunov', 'solve_discrete_lyapunov',
'solve_lyapunov',
'solve_continuous_are', 'solve_discrete_are']
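This hunk only restores ``solve_lyapunov`` (the backwards-compatible alias of
``solve_continuous_lyapunov``) to ``__all__``. A minimal usage sketch of the
underlying solver for ``A X + X A^H = Q``, with hypothetical inputs::

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    a = np.array([[-3.0, -2.0], [-1.0, -1.0]])
    q = np.eye(2)
    x = solve_continuous_lyapunov(a, q)
    print(np.allclose(a.dot(x) + x.dot(a.conj().T), q))  # True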

View File

@ -22,6 +22,24 @@ __all__ = ['solve', 'solve_triangular', 'solveh_banded', 'solve_banded',
# Linear equations
def _solve_check(n, info, lamch=None, rcond=None):
""" Check arguments during the different steps of the solution phase """
if info < 0:
raise ValueError('LAPACK reported an illegal value in {}-th argument'
'.'.format(-info))
elif 0 < info:
raise LinAlgError('Matrix is singular.')
if lamch is None:
return
E = lamch('E')
if rcond < E:
warnings.warn('scipy.linalg.solve\nIll-conditioned matrix detected.'
' Result is not guaranteed to be accurate.\nReciprocal '
'condition number/precision: {} / {}'.format(rcond, E),
RuntimeWarning)
def solve(a, b, sym_pos=False, lower=False, overwrite_a=False,
overwrite_b=False, debug=None, check_finite=True, assume_a='gen',
transposed=False):
@ -73,8 +91,8 @@ def solve(a, b, sym_pos=False, lower=False, overwrite_a=False,
assume_a : str, optional
Valid entries are explained above.
transposed: bool, optional
If True, depending on the data type ``a^T x = b`` or ``a^H x = b`` is
solved (only taken into account for ``'gen'``).
If True, solve ``a^T x = b`` for real matrices; raises
`NotImplementedError` for complex matrices (only checked when True).
Returns
-------
@ -89,6 +107,8 @@ def solve(a, b, sym_pos=False, lower=False, overwrite_a=False,
If the matrix is singular.
RuntimeWarning
If an ill-conditioned input a is detected.
NotImplementedError
If transposed is True and input a is a complex matrix.
Examples
--------
@ -111,12 +131,11 @@ def solve(a, b, sym_pos=False, lower=False, overwrite_a=False,
numpy.dot() behavior and the returned result is still 1D array.
The generic, symmetric, hermitian and positive definite solutions are
obtained via calling ?GESVX, ?SYSVX, ?HESVX, and ?POSVX routines of
obtained via calling ?GESV, ?SYSV, ?HESV, and ?POSV routines of
LAPACK respectively.
"""
# Flags for 1D or nD right hand side
b_is_1D = False
b_is_ND = False
a1 = atleast_2d(_asarray_validated(a, check_finite=check_finite))
b1 = atleast_1d(_asarray_validated(b, check_finite=check_finite))
@ -138,90 +157,106 @@ def solve(a, b, sym_pos=False, lower=False, overwrite_a=False,
if b1.size == 0:
return np.asfortranarray(b1.copy())
# regularize 1D b arrays to 2D and catch nD RHS arrays
# regularize 1D b arrays to 2D
if b1.ndim == 1:
if n == 1:
b1 = b1[None, :]
else:
b1 = b1[:, None]
b_is_1D = True
elif b1.ndim > 2:
b_is_ND = True
r_or_c = complex if np.iscomplexobj(a1) else float
# Backwards compatibility - old keyword.
if sym_pos:
assume_a = 'pos'
if assume_a in ('gen', 'sym', 'her', 'pos'):
_structure = assume_a
else:
if assume_a not in ('gen', 'sym', 'her', 'pos'):
raise ValueError('{} is not a recognized matrix structure'
''.format(assume_a))
# Deprecate keyword "debug"
if debug is not None:
warnings.warn('Use of the "debug" keyword is deprecated '
'and this keyword will be removed in the future '
'and this keyword will be removed in future '
'versions of SciPy.', DeprecationWarning)
# Backwards compatibility - old keyword.
if sym_pos:
assume_a = 'pos'
if _structure == 'gen':
gesvx = get_lapack_funcs('gesvx', (a1, b1))
trans_conj = 'N'
if transposed:
trans_conj = 'T' if r_or_c is float else 'H'
(_, _, _, _, _, _, _,
x, rcond, _, _, info) = gesvx(a1, b1,
trans=trans_conj,
overwrite_a=overwrite_a,
overwrite_b=overwrite_b
)
elif _structure == 'sym':
sysvx, sysvx_lw = get_lapack_funcs(('sysvx', 'sysvx_lwork'), (a1, b1))
lwork = _compute_lwork(sysvx_lw, n, lower)
_, _, _, _, x, rcond, _, _, info = sysvx(a1, b1, lwork=lwork,
lower=lower,
overwrite_a=overwrite_a,
overwrite_b=overwrite_b
)
elif _structure == 'her':
hesvx, hesvx_lw = get_lapack_funcs(('hesvx', 'hesvx_lwork'), (a1, b1))
lwork = _compute_lwork(hesvx_lw, n, lower)
_, _, x, rcond, _, _, info = hesvx(a1, b1, lwork=lwork,
lower=lower,
overwrite_a=overwrite_a,
overwrite_b=overwrite_b
)
# Get the correct lamch function.
# The LAMCH functions only exist for S and D,
# so for complex values we have to convert to real/double.
if a1.dtype.char in 'fF': # single precision
lamch = get_lapack_funcs('lamch', dtype='f')
else:
posvx = get_lapack_funcs('posvx', (a1, b1))
_, _, _, _, _, x, rcond, _, _, info = posvx(a1, b1,
lower=lower,
overwrite_a=overwrite_a,
overwrite_b=overwrite_b
)
lamch = get_lapack_funcs('lamch', dtype='d')
# Currently we do not have the other forms of the norm calculators
# lansy, lanpo, lanhe.
# However, in any case they only reduce computations slightly...
lange = get_lapack_funcs('lange', (a1,))
# Since the I-norm and 1-norm are the same for symmetric matrices
# we can collect them all in this one call
# Note however, that when issuing 'gen' and form!='none', then
# the I-norm should be used
if transposed:
trans = 1
norm = 'I'
if np.iscomplexobj(a1):
raise NotImplementedError('scipy.linalg.solve can currently '
'not solve a^T x = b or a^H x = b '
'for complex matrices.')
else:
trans = 0
norm = '1'
anorm = lange(norm, a1)
# Generalized case 'gesv'
if assume_a == 'gen':
gecon, getrf, getrs = get_lapack_funcs(('gecon', 'getrf', 'getrs'),
(a1, b1))
lu, ipvt, info = getrf(a1, overwrite_a=overwrite_a)
_solve_check(n, info)
x, info = getrs(lu, ipvt, b1,
trans=trans, overwrite_b=overwrite_b)
_solve_check(n, info)
rcond, info = gecon(lu, anorm, norm=norm)
# Hermitian case 'hesv'
elif assume_a == 'her':
hecon, hesv, hesv_lw = get_lapack_funcs(('hecon', 'hesv', 'hesv_lwork'),
(a1, b1))
lwork = _compute_lwork(hesv_lw, n, lower)
lu, ipvt, x, info = hesv(a1, b1, lwork=lwork,
lower=lower,
overwrite_a=overwrite_a,
overwrite_b=overwrite_b)
_solve_check(n, info)
rcond, info = hecon(lu, ipvt, anorm)
# Symmetric case 'sysv'
elif assume_a == 'sym':
sycon, sysv, sysv_lw = get_lapack_funcs(('sycon', 'sysv', 'sysv_lwork'),
(a1, b1))
lwork = _compute_lwork(sysv_lw, n, lower)
lu, ipvt, x, info = sysv(a1, b1, lwork=lwork,
lower=lower,
overwrite_a=overwrite_a,
overwrite_b=overwrite_b)
_solve_check(n, info)
rcond, info = sycon(lu, ipvt, anorm)
# Positive definite case 'posv'
else:
pocon, posv = get_lapack_funcs(('pocon', 'posv'),
(a1, b1))
lu, x, info = posv(a1, b1, lower=lower,
overwrite_a=overwrite_a,
overwrite_b=overwrite_b)
_solve_check(n, info)
rcond, info = pocon(lu, anorm)
_solve_check(n, info, lamch, rcond)
# Unlike ?xxSV, ?xxSVX writes the solution x to a separate array, and
# overwrites b with its scaled version which is thrown away. Thus, the
# solution does not admit the same shape with the original b. For
# backwards compatibility, we reshape it manually.
if b_is_1D:
x = x.ravel()
if b_is_ND:
x = x.reshape(*b1.shape, order='F')
if info < 0:
raise ValueError('LAPACK reported an illegal value in {}-th argument'
'.'.format(-info))
elif info == 0:
return x
elif 0 < info <= n:
raise LinAlgError('Matrix is singular.')
elif info > n:
warnings.warn('scipy.linalg.solve\nIll-conditioned matrix detected.'
' Result is not guaranteed to be accurate.\nReciprocal'
' condition number: {}'.format(rcond), RuntimeWarning)
return x
return x
def solve_triangular(a, b, trans=0, lower=False, unit_diagonal=False,
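The rewrite replaces the expert ?xxSVX drivers with the plain ?xxSV
factor/solve/condition-estimate sequence shown above. From the caller's side
the keywords are unchanged; a minimal sketch with hypothetical data::

    import numpy as np
    from scipy.linalg import solve

    a = np.array([[3.0, 2.0], [2.0, 4.0]])  # symmetric positive definite
    b = np.array([1.0, 2.0])

    x = solve(a, b, assume_a='pos')  # dispatches to ?POSV
    # transposed=True solves a^T x = b; per the hunk above it now raises
    # NotImplementedError for complex input.
    xt = solve(a, b, transposed=True)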

View File

@ -657,7 +657,7 @@ interface
callstatement (*f2py_func)((lower?"L":"U"),&n,a,&lda,ipiv,&info)
callprotoargument char*,int*,<ctype>*,int*,int*,int*
integer optional,intent(in),check(lower==0||lower==1):: lower = 1
integer optional,intent(in),check(lower==0||lower==1):: lower = 0
integer depend(a),intent(hide):: n = shape(a,0)
<ftype> dimension(n,n),intent(in,out,copy,out=ldu):: a
integer depend(a),intent(hide):: lda = max(shape(a,0),1)
@ -677,7 +677,7 @@ interface
callstatement (*f2py_func)((lower?"L":"U"),&n,a,&lda,ipiv,work,&lwork,&info)
callprotoargument char*,int*,<ctype>*,int*,int*,<ctype>*,int*,int*
integer optional,intent(in),check(lower==0||lower==1):: lower = 1
integer optional,intent(in),check(lower==0||lower==1):: lower = 0
integer depend(a),intent(hide):: n = shape(a,0)
<ftype> dimension(n,n),intent(in,out,copy,out=ldu):: a
integer depend(a),intent(hide):: lda = max(shape(a,0),1)
@ -881,7 +881,7 @@ interface
callstatement (*f2py_func)((lower?"L":"U"),&n,a,&lda,ipiv,work,&lwork,&info)
callprotoargument char*,int*,<ctype2c>*,int*,int*,<ctype2c>*,int*,int*
integer optional,intent(in),check(lower==0||lower==1):: lower = 1
integer optional,intent(in),check(lower==0||lower==1):: lower = 0
integer depend(a),intent(hide):: n = shape(a,0)
<ftype2c> dimension(n,n),intent(in,out,copy,out=ldu):: a
integer depend(a),intent(hide):: lda = max(shape(a,0),1)
@ -1757,7 +1757,7 @@ interface
callprotoargument int*,int*,int*,<ctype2c>*,int*,<ctype2c>*,int*,int*,<ctype2>*,int*,<ctype2c>*,int*,<ctype2>*,int*
integer intent(in) :: m
integer intent(in)):: n
integer intent(in) :: n
integer intent(hide) :: maxmn = MAX(m,n)
<ftype2c> intent(hide) :: a

View File

@ -730,11 +730,16 @@ class TestSolve(object):
def test_transposed_keyword(self):
A = np.arange(9).reshape(3, 3) + 1
x = solve(np.tril(A)/9, np.ones(3), transposed=1)
x = solve(np.tril(A)/9, np.ones(3), transposed=True)
assert_array_almost_equal(x, [1.2, 0.2, 1])
x = solve(np.tril(A)/9, np.ones(3), transposed=0)
x = solve(np.tril(A)/9, np.ones(3), transposed=False)
assert_array_almost_equal(x, [9, -5.4, -1.2])
def test_transposed_notimplemented(self):
a = np.eye(3).astype(complex)
with assert_raises(NotImplementedError):
solve(a, a, transposed=True)
def test_nonsquare_a(self):
assert_raises(ValueError, solve, [1, 2], 1)
@ -745,6 +750,8 @@ class TestSolve(object):
def test_assume_a_keyword(self):
assert_raises(ValueError, solve, 1, 1, assume_a='zxcv')
@pytest.mark.skip(reason="Failure on OS X (gh-7500), "
"crash on Windows (gh-8064)")
def test_all_type_size_routine_combinations(self):
sizes = [10, 100, 1000]
assume_as = ['gen', 'sym', 'pos', 'her']
@ -771,13 +778,27 @@ class TestSolve(object):
elif assume_a == 'pos':
a = a.conj().T.dot(a) + 0.1*np.eye(size)
x = solve(a, b, assume_a=assume_a)
tol = 1e-12 if dtype in (np.float64, np.complex128) else 1e-6
if assume_a in ['gen', 'sym', 'her']:
# We revert the tolerance from before
# 4b4a6e7c34fa4060533db38f9a819b98fa81476c
if dtype in (np.float32, np.complex64):
tol *= 10
x = solve(a, b, assume_a=assume_a)
assert_allclose(a.dot(x), b,
atol=tol * size,
rtol=tol * size,
err_msg=err_msg)
if assume_a == 'sym' and dtype not in (np.complex64, np.complex128):
x = solve(a, b, assume_a=assume_a, transposed=True)
assert_allclose(a.dot(x), b,
atol=tol * size,
rtol=tol * size,
err_msg=err_msg)
class TestSolveTriangular(object):

View File

@ -23,7 +23,8 @@ from numpy import float32, float64, complex64, complex128, arange, triu, \
nonzero
from numpy.random import rand, seed
from scipy.linalg import _fblas as fblas, get_blas_funcs, toeplitz, solve
from scipy.linalg import _fblas as fblas, get_blas_funcs, toeplitz, solve, \
solve_triangular
try:
from scipy.linalg import _cblas as cblas
@ -1030,6 +1031,7 @@ class TestTRMM(object):
def test_trsm():
seed(1234)
for ind, dtype in enumerate(DTYPES):
tol = np.finfo(dtype).eps*1000
func, = get_blas_funcs(('trsm',), dtype=dtype)
# Test protection against size mismatches
@ -1052,26 +1054,26 @@ def test_trsm():
x1 = func(alpha=alpha, a=A, b=B1)
assert_equal(B1.shape, x1.shape)
x2 = solve(Au, alpha*B1)
assert_array_almost_equal(x1, x2)
assert_allclose(x1, x2, atol=tol)
x1 = func(alpha=alpha, a=A, b=B1, trans_a=1)
x2 = solve(Au.T, alpha*B1)
assert_array_almost_equal(x1, x2)
assert_allclose(x1, x2, atol=tol)
x1 = func(alpha=alpha, a=A, b=B1, trans_a=2)
x2 = solve(Au.conj().T, alpha*B1)
assert_array_almost_equal(x1, x2)
assert_allclose(x1, x2, atol=tol)
x1 = func(alpha=alpha, a=A, b=B1, diag=1)
Au[arange(m), arange(m)] = dtype(1)
x2 = solve(Au, alpha*B1)
assert_array_almost_equal(x1, x2)
assert_allclose(x1, x2, atol=tol)
x1 = func(alpha=alpha, a=A, b=B2, diag=1, side=1)
x2 = solve(Au.conj().T, alpha*B2.conj().T)
assert_array_almost_equal(x1, x2.conj().T)
assert_allclose(x1, x2.conj().T, atol=tol)
x1 = func(alpha=alpha, a=A, b=B2, diag=1, side=1, lower=1)
Al[arange(m), arange(m)] = dtype(1)
x2 = solve(Al.conj().T, alpha*B2.conj().T)
assert_array_almost_equal(x1, x2.conj().T)
assert_allclose(x1, x2.conj().T, atol=tol)
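The looser ``assert_allclose`` tolerances scale with machine epsilon per
dtype. A minimal sketch of the BLAS ``trsm`` call under test, checked against
``solve_triangular`` on hypothetical data::

    import numpy as np
    from scipy.linalg import get_blas_funcs, solve_triangular

    A = np.triu(np.random.rand(4, 4)) + 4 * np.eye(4)  # well-conditioned upper triangle
    B = np.random.rand(4, 3)
    trsm, = get_blas_funcs(('trsm',), (A, B))
    x1 = trsm(alpha=1.0, a=A, b=B)            # BLAS triangular solve of A x = alpha*B
    x2 = solve_triangular(A, B, lower=False)  # reference solution
    print(np.allclose(x1, x2))                # True, up to dtype-scaled tolerance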

View File

@ -5,7 +5,7 @@ Functions which are common and require SciPy Base and Level 1 SciPy
from __future__ import division, print_function, absolute_import
from numpy import arange, newaxis, hstack, product, array, fromstring
from numpy import arange, newaxis, hstack, product, array, frombuffer
__all__ = ['central_diff_weights', 'derivative', 'ascent', 'face']
@ -196,7 +196,7 @@ def face(gray=False):
with open(os.path.join(os.path.dirname(__file__), 'face.dat'), 'rb') as f:
rawdata = f.read()
data = bz2.decompress(rawdata)
face = fromstring(data, dtype='uint8')
face = frombuffer(data, dtype='uint8')
face.shape = (768, 1024, 3)
if gray is True:
face = (0.21 * face[:,:,0] + 0.71 * face[:,:,1] + 0.07 * face[:,:,2]).astype('uint8')

View File

@ -45,6 +45,10 @@
#include "ccallback.h"
#include "numpy/npy_3kcompat.h"
#if NPY_API_VERSION >= 0x0000000c
#define HAVE_WRITEBACKIFCOPY
#endif
typedef struct {
PyObject *extra_arguments;
PyObject *extra_keywords;
@ -106,7 +110,11 @@ NI_ObjectToOptionalInputArray(PyObject *object, PyArrayObject **array)
static int
NI_ObjectToOutputArray(PyObject *object, PyArrayObject **array)
{
int flags = NPY_ARRAY_BEHAVED_NS | NPY_ARRAY_UPDATEIFCOPY;
#ifdef HAVE_WRITEBACKIFCOPY
int flags = NPY_ARRAY_BEHAVED_NS | NPY_ARRAY_WRITEBACKIFCOPY;
#else
int flags = NPY_ARRAY_BEHAVED_NS | NPY_ARRAY_UPDATEIFCOPY;
#endif
/*
* This would also be caught by the PyArray_CheckFromAny call, but
* we check it explicitly here to provide a saner error message.
@ -190,6 +198,9 @@ static PyObject *Py_Correlate1D(PyObject *obj, PyObject *args)
NI_Correlate1D(input, weights, axis, output, (NI_ExtendMode)mode, cval,
origin);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
Py_XDECREF(input);
@ -218,6 +229,9 @@ static PyObject *Py_Correlate(PyObject *obj, PyObject *args)
NI_Correlate(input, weights, output, (NI_ExtendMode)mode, cval,
origin.ptr);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
Py_XDECREF(input);
@ -243,6 +257,9 @@ static PyObject *Py_UniformFilter1D(PyObject *obj, PyObject *args)
NI_UniformFilter1D(input, filter_size, axis, output, (NI_ExtendMode)mode,
cval, origin);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
Py_XDECREF(input);
@ -266,6 +283,9 @@ static PyObject *Py_MinOrMaxFilter1D(PyObject *obj, PyObject *args)
NI_MinOrMaxFilter1D(input, filter_size, axis, output, (NI_ExtendMode)mode,
cval, origin, minimum);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
Py_XDECREF(input);
@ -297,6 +317,9 @@ static PyObject *Py_MinOrMaxFilter(PyObject *obj, PyObject *args)
NI_MinOrMaxFilter(input, footprint, structure, output, (NI_ExtendMode)mode,
cval, origin.ptr, minimum);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
Py_XDECREF(input);
@ -328,6 +351,9 @@ static PyObject *Py_RankFilter(PyObject *obj, PyObject *args)
NI_RankFilter(input, rank, footprint, output, (NI_ExtendMode)mode, cval,
origin.ptr);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
Py_XDECREF(input);
@ -453,6 +479,9 @@ static PyObject *Py_GenericFilter1D(PyObject *obj, PyObject *args)
NI_GenericFilter1D(input, func, data, filter_size, axis, output,
(NI_ExtendMode)mode, cval, origin);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
if (callback.py_function != NULL || callback.c_function != NULL) {
@ -577,6 +606,9 @@ static PyObject *Py_GenericFilter(PyObject *obj, PyObject *args)
NI_GenericFilter(input, func, data, footprint, output, (NI_ExtendMode)mode,
cval, origin.ptr);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
if (callback.py_function != NULL || callback.c_function != NULL) {
@ -604,6 +636,9 @@ static PyObject *Py_FourierFilter(PyObject *obj, PyObject *args)
goto exit;
NI_FourierFilter(input, parameters, n, axis, output, filter_type);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
Py_XDECREF(input);
@ -626,6 +661,9 @@ static PyObject *Py_FourierShift(PyObject *obj, PyObject *args)
goto exit;
NI_FourierShift(input, shifts, n, axis, output);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
Py_XDECREF(input);
@ -646,6 +684,9 @@ static PyObject *Py_SplineFilter1D(PyObject *obj, PyObject *args)
goto exit;
NI_SplineFilter1D(input, order, axis, output);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
Py_XDECREF(input);
@ -778,6 +819,9 @@ static PyObject *Py_GeometricTransform(PyObject *obj, PyObject *args)
NI_GeometricTransform(input, func, data, matrix, shift, coordinates,
output, order, (NI_ExtendMode)mode, cval);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
if (callback.py_function != NULL || callback.c_function != NULL) {
@ -807,6 +851,9 @@ static PyObject *Py_ZoomShift(PyObject *obj, PyObject *args)
goto exit;
NI_ZoomShift(input, zoom, shift, output, order, (NI_ExtendMode)mode, cval);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
Py_XDECREF(input);
@ -918,6 +965,9 @@ static PyObject *Py_WatershedIFT(PyObject *obj, PyObject *args)
goto exit;
NI_WatershedIFT(input, markers, strct, output);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
Py_XDECREF(input);
@ -943,6 +993,9 @@ static PyObject *Py_DistanceTransformBruteForce(PyObject *obj,
goto exit;
NI_DistanceTransformBruteForce(input, metric, sampling, output, features);
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
Py_XDECREF(input);
@ -1034,6 +1087,10 @@ static PyObject *Py_BinaryErosion(PyObject *obj, PyObject *args)
if (return_coordinates) {
cobj = NpyCapsule_FromVoidPtr(coordinate_list, _FreeCoordinateList);
}
#ifdef HAVE_WRITEBACKIFCOPY
PyArray_ResolveWritebackIfCopy(output);
#endif
exit:
Py_XDECREF(input);
Py_XDECREF(strct);
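These C-level changes matter when NumPy hands the filters a temporary copy of
a user-supplied ``output`` array (for instance a misaligned or byte-swapped
one); ``PyArray_ResolveWritebackIfCopy`` then writes the result back to the
original. From Python the behavior is unchanged; a minimal sketch of the
affected call pattern, with hypothetical data::

    import numpy as np
    from scipy import ndimage

    x = np.arange(12, dtype=np.float64).reshape(3, 4)
    out = np.empty_like(x)
    # A preallocated output array goes through the writeback-if-copy path
    # whenever NumPy has to substitute a temporary for it.
    ndimage.uniform_filter1d(x, size=3, output=out)
    print(out)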

View File

@ -158,6 +158,7 @@ int NI_InitLineBuffer(PyArrayObject *array, int axis, npy_intp size1,
NI_ExtendMode extend_mode, double extend_value, NI_LineBuffer *buffer)
{
npy_intp line_length = 0, array_lines = 0, size;
int array_type;
size = PyArray_SIZE(array);
/* check if the buffer is big enough: */
@ -165,6 +166,37 @@ int NI_InitLineBuffer(PyArrayObject *array, int axis, npy_intp size1,
PyErr_SetString(PyExc_RuntimeError, "buffer too small");
return 0;
}
/*
* Check that the data type is supported, against the types listed in
* NI_ArrayToLineBuffer
*/
array_type = NI_CanonicalType(PyArray_TYPE(array));
switch (array_type) {
case NPY_BOOL:
case NPY_UBYTE:
case NPY_USHORT:
case NPY_UINT:
case NPY_ULONG:
case NPY_ULONGLONG:
case NPY_BYTE:
case NPY_SHORT:
case NPY_INT:
case NPY_LONG:
case NPY_LONGLONG:
case NPY_FLOAT:
case NPY_DOUBLE:
break;
default:
#if PY_VERSION_HEX >= 0x03040000
PyErr_Format(PyExc_RuntimeError, "array type %R not supported",
(PyObject *)PyArray_DTYPE(array));
#else
PyErr_Format(PyExc_RuntimeError, "array type %d not supported",
array_type);
#endif
return 0;
}
/* Initialize a line iterator to move over the array: */
if (!NI_InitPointIterator(array, &(buffer->iterator)))
return 0;
@ -178,7 +210,7 @@ int NI_InitLineBuffer(PyArrayObject *array, int axis, npy_intp size1,
buffer->array_data = (void *)PyArray_DATA(array);
buffer->buffer_data = buffer_data;
buffer->buffer_lines = buffer_lines;
buffer->array_type = NI_CanonicalType(PyArray_TYPE(array));
buffer->array_type = array_type;
buffer->array_lines = array_lines;
buffer->next_line = 0;
buffer->size1 = size1;

View File

@ -400,3 +400,11 @@ def test_footprint_all_zeros():
kernel = np.zeros((3, 3), bool)
with assert_raises(ValueError):
sndi.maximum_filter(arr, footprint=kernel)
def test_gaussian_filter():
# Test gaussian filter with np.float16
# gh-8207
data = np.array([1], dtype=np.float16)
sigma = 1.0
with assert_raises(RuntimeError):
sndi.gaussian_filter(data, sigma)
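With the new type check, unsupported dtypes such as ``np.float16`` fail with a
``RuntimeError`` instead of crashing. As a sketch, casting to a supported
float type first is the practical workaround::

    import numpy as np
    import scipy.ndimage as sndi

    data = np.array([1, 2, 3], dtype=np.float16)
    # float16 is not among the supported line-buffer types; cast first.
    result = sndi.gaussian_filter(data.astype(np.float32), sigma=1.0)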

View File

@ -64,7 +64,7 @@ int raw_multipack_calling_function(int *n, double *x, double *fvec, int *iflag)
PyArrayObject *result_array = NULL;
result_array = (PyArrayObject *)call_python_function(multipack_python_function, *n, x, multipack_extra_arguments, 1, minpack_error);
result_array = (PyArrayObject *)call_python_function(multipack_python_function, *n, x, multipack_extra_arguments, 1, minpack_error, *n);
if (result_array == NULL) {
*iflag = -1;
return -1;
@ -90,7 +90,7 @@ int jac_multipack_calling_function(int *n, double *x, double *fvec, double *fjac
PyArrayObject *result_array;
if (*iflag == 1) {
result_array = (PyArrayObject *)call_python_function(multipack_python_function, *n, x, multipack_extra_arguments, 1, minpack_error);
result_array = (PyArrayObject *)call_python_function(multipack_python_function, *n, x, multipack_extra_arguments, 1, minpack_error, *n);
if (result_array == NULL) {
*iflag = -1;
return -1;
@ -98,7 +98,7 @@ int jac_multipack_calling_function(int *n, double *x, double *fvec, double *fjac
memcpy(fvec, PyArray_DATA(result_array), (*n)*sizeof(double));
}
else { /* iflag == 2 */
result_array = (PyArrayObject *)call_python_function(multipack_python_jacobian, *n, x, multipack_extra_arguments, 2, minpack_error);
result_array = (PyArrayObject *)call_python_function(multipack_python_jacobian, *n, x, multipack_extra_arguments, 2, minpack_error, (*n)*(*ldfjac));
if (result_array == NULL) {
*iflag = -1;
return -1;
@ -123,7 +123,7 @@ int raw_multipack_lm_function(int *m, int *n, double *x, double *fvec, int *ifla
PyArrayObject *result_array = NULL;
result_array = (PyArrayObject *)call_python_function(multipack_python_function,*n, x, multipack_extra_arguments, 1, minpack_error);
result_array = (PyArrayObject *)call_python_function(multipack_python_function,*n, x, multipack_extra_arguments, 1, minpack_error, *m);
if (result_array == NULL) {
*iflag = -1;
return -1;
@ -148,7 +148,7 @@ int jac_multipack_lm_function(int *m, int *n, double *x, double *fvec, double *f
PyArrayObject *result_array;
if (*iflag == 1) {
result_array = (PyArrayObject *)call_python_function(multipack_python_function, *n, x, multipack_extra_arguments, 1, minpack_error);
result_array = (PyArrayObject *)call_python_function(multipack_python_function, *n, x, multipack_extra_arguments, 1, minpack_error, *m);
if (result_array == NULL) {
*iflag = -1;
return -1;
@ -156,7 +156,7 @@ int jac_multipack_lm_function(int *m, int *n, double *x, double *fvec, double *f
memcpy(fvec, PyArray_DATA(result_array), (*m)*sizeof(double));
}
else { /* iflag == 2 */
result_array = (PyArrayObject *)call_python_function(multipack_python_jacobian, *n, x, multipack_extra_arguments, 2, minpack_error);
result_array = (PyArrayObject *)call_python_function(multipack_python_jacobian, *n, x, multipack_extra_arguments, 2, minpack_error, (*n)*(*ldfjac));
if (result_array == NULL) {
*iflag = -1;
return -1;
@ -186,7 +186,7 @@ int smjac_multipack_lm_function(int *m, int *n, double *x, double *fvec, double
PyArrayObject *result_array;
if (*iflag == 1) {
result_array = (PyArrayObject *)call_python_function(multipack_python_function, *n, x, multipack_extra_arguments, 1, minpack_error);
result_array = (PyArrayObject *)call_python_function(multipack_python_function, *n, x, multipack_extra_arguments, 1, minpack_error, *m);
if (result_array == NULL) {
*iflag = -1;
return -1;
@ -209,7 +209,7 @@ int smjac_multipack_lm_function(int *m, int *n, double *x, double *fvec, double
return -1;
}
result_array = (PyArrayObject *)call_python_function(multipack_python_jacobian, *n, x, newargs, 2, minpack_error);
result_array = (PyArrayObject *)call_python_function(multipack_python_jacobian, *n, x, newargs, 2, minpack_error, *n);
if (result_array == NULL) {
Py_DECREF(newargs);
*iflag = -1;
@ -260,7 +260,7 @@ static PyObject *minpack_hybrd(PyObject *dummy, PyObject *args) {
if (maxfev < 0) maxfev = 200*(n+1);
/* Setup array to hold the function evaluations */
ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args, 1, minpack_error);
ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args, 1, minpack_error, -1);
if (ap_fvec == NULL) goto fail;
fvec = (double *) PyArray_DATA(ap_fvec);
if (PyArray_NDIM(ap_fvec) == 0)
@ -361,7 +361,7 @@ static PyObject *minpack_hybrj(PyObject *dummy, PyObject *args) {
if (maxfev < 0) maxfev = 100*(n+1);
/* Setup array to hold the function evaluations */
ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args, 1, minpack_error);
ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args, 1, minpack_error, -1);
if (ap_fvec == NULL) goto fail;
fvec = (double *) PyArray_DATA(ap_fvec);
if (PyArray_NDIM(ap_fvec) == 0)
@ -467,7 +467,7 @@ static PyObject *minpack_lmdif(PyObject *dummy, PyObject *args) {
if (maxfev < 0) maxfev = 200*(n+1);
/* Setup array to hold the function evaluations and find it's size*/
ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args, 1, minpack_error);
ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args, 1, minpack_error, -1);
if (ap_fvec == NULL) goto fail;
fvec = (double *) PyArray_DATA(ap_fvec);
m = (PyArray_NDIM(ap_fvec) > 0 ? PyArray_DIMS(ap_fvec)[0] : 1);
@ -562,7 +562,7 @@ static PyObject *minpack_lmder(PyObject *dummy, PyObject *args) {
if (maxfev < 0) maxfev = 100*(n+1);
/* Setup array to hold the function evaluations */
ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args, 1, minpack_error);
ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args, 1, minpack_error, -1);
if (ap_fvec == NULL) goto fail;
fvec = (double *) PyArray_DATA(ap_fvec);

View File

@ -543,6 +543,21 @@ def _presolve(c, A_ub, b_ub, A_eq, b_eq, bounds, rr):
# identical bounds indicate that variable can be removed
i_f = np.abs(lb - ub) < tol # indices of "fixed" variables
i_nf = np.logical_not(i_f) # indices of "not fixed" variables
# test_bounds_equal_but_infeasible
if np.all(i_f): # if bounds define solution, check for consistency
residual = b_eq - A_eq.dot(lb)
slack = b_ub - A_ub.dot(lb)
if ((A_ub.size > 0 and np.any(slack < 0)) or
(A_eq.size > 0 and not np.allclose(residual, 0))):
status = 2
message = ("The problem is (trivially) infeasible because the "
"bounds fix all variables to values inconsistent with "
"the constraints")
complete = True
return (c, c0, A_ub, b_ub, A_eq, b_eq, bounds,
x, undo, complete, status, message)
if np.any(i_f):
c0 += c[i_f].dot(lb[i_f])
b_eq = b_eq - A_eq[:, i_f].dot(lb[i_f])
@ -562,7 +577,7 @@ def _presolve(c, A_ub, b_ub, A_eq, b_eq, bounds, rr):
# test_empty_constraint_1
if c.size == 0:
status = 0
message = ("The solution was determined in presolve as there are"
message = ("The solution was determined in presolve as there are "
"no non-trivial constraints.")
elif (np.any(np.logical_and(c < 0, ub == np.inf)) or
np.any(np.logical_and(c > 0, lb == -np.inf))):
@ -570,8 +585,8 @@ def _presolve(c, A_ub, b_ub, A_eq, b_eq, bounds, rr):
status = 3
message = ("If feasible, the problem is (trivially) unbounded "
"because there are no constraints and at least one "
" element of c is negative. If you wish to check "
" whether the problem is infeasible, turn presolve "
"element of c is negative. If you wish to check "
"whether the problem is infeasible, turn presolve "
"off.")
else: # test_empty_constraint_2
status = 0
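The new presolve branch reports trivially infeasible problems (bounds pin all
variables to values the constraints reject) instead of handing them to the
solver. A minimal sketch using the data from ``test_bounds_equal_but_infeasible``
further down::

    from scipy.optimize import linprog

    c = [-4, 1]
    A_ub = [[7, -2], [0, 1], [2, -2]]
    b_ub = [14, 0, 3]
    bounds = [(2, 2), (0, None)]  # x0 is fixed to 2 by its bounds
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds,
                  method='interior-point')
    print(res.status)  # 2: infeasible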

View File

@ -347,9 +347,9 @@ def minimize(fun, x0, args=(), method=None, jac=None, hess=None,
... options={'gtol': 1e-6, 'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 33
Function evaluations: 35
Gradient evaluations: 35
Iterations: 26
Function evaluations: 31
Gradient evaluations: 31
>>> res.x
array([ 1., 1., 1., 1., 1.])
>>> print(res.message)

View File

@ -196,8 +196,7 @@ def _minimize_cobyla(fun, x0, args=(), constraints=(),
_check_unknown_options(unknown_options)
maxfun = maxiter
rhoend = tol
if not disp:
iprint = 0
iprint = int(bool(disp))
# check constraints
if isinstance(constraints, dict):

View File

@ -110,7 +110,7 @@ static PyObject *multipack_python_jacobian=NULL;
static PyObject *multipack_extra_arguments=NULL; /* a tuple */
static int multipack_jac_transpose=1;
static PyObject *call_python_function(PyObject *func, npy_intp n, double *x, PyObject *args, int dim, PyObject *error_obj)
static PyObject *call_python_function(PyObject *func, npy_intp n, double *x, PyObject *args, int dim, PyObject *error_obj, npy_intp out_size)
{
/*
This is a generic function to call a python function that takes a 1-D
@ -130,6 +130,7 @@ static PyObject *call_python_function(PyObject *func, npy_intp n, double *x, PyO
PyObject *arg1 = NULL;
PyObject *result = NULL;
PyArrayObject *result_array = NULL;
npy_intp fvec_sz = 0;
/* Build sequence argument from inputs */
sequence = (PyArrayObject *)PyArray_SimpleNewFromData(1, &n, NPY_DOUBLE, (char *)x);
@ -158,6 +159,13 @@ static PyObject *call_python_function(PyObject *func, npy_intp n, double *x, PyO
if ((result_array = (PyArrayObject *)PyArray_ContiguousFromObject(result, NPY_DOUBLE, dim-1, dim))==NULL)
PYERR2(error_obj,"Result from function call is not a proper array of floats.");
fvec_sz = PyArray_SIZE(result_array);
if(out_size != -1 && fvec_sz != out_size){
PyErr_SetString(PyExc_ValueError, "The array returned by a function changed size between calls");
Py_DECREF(result_array);
goto fail;
}
Py_DECREF(result);
Py_DECREF(arglist);
return (PyObject *)result_array;

View File

@ -38,7 +38,6 @@ from .linesearch import (line_search_wolfe1, line_search_wolfe2,
line_search_wolfe2 as line_search,
LineSearchWarning)
from scipy._lib._util import getargspec_no_self as _getargspec
from scipy.linalg import get_blas_funcs
# standard status messages of optimizers
@ -947,11 +946,6 @@ def _minimize_bfgs(fun, x0, args=(), jac=None, callback=None,
I = numpy.eye(N, dtype=int)
Hk = I
# get needed blas functions
syr = get_blas_funcs('syr', dtype='d') # Symmetric rank 1 update
syr2 = get_blas_funcs('syr2', dtype='d') # Symmetric rank 2 update
symv = get_blas_funcs('symv', dtype='d') # Symmetric matrix-vector product
# Sets the initial step guess to dx ~ 1
old_fval = f(x0)
old_old_fval = old_fval + np.linalg.norm(gfk) / 2
@ -963,7 +957,7 @@ def _minimize_bfgs(fun, x0, args=(), jac=None, callback=None,
warnflag = 0
gnorm = vecnorm(gfk, ord=norm)
while (gnorm > gtol) and (k < maxiter):
pk = symv(-1, Hk, gfk)
pk = -numpy.dot(Hk, gfk)
try:
alpha_k, fc, gc, old_fval, old_old_fval, gfkp1 = \
_line_search_wolfe12(f, myfprime, xk, pk, gfk,
@ -985,6 +979,7 @@ def _minimize_bfgs(fun, x0, args=(), jac=None, callback=None,
gfk = gfkp1
if callback is not None:
callback(xk)
k += 1
gnorm = vecnorm(gfk, ord=norm)
if (gnorm <= gtol):
break
@ -995,9 +990,8 @@ def _minimize_bfgs(fun, x0, args=(), jac=None, callback=None,
warnflag = 2
break
yk_sk = np.dot(yk, sk)
try: # this was handled in numeric, let it remain for more safety
rhok = 1.0 / yk_sk
rhok = 1.0 / (numpy.dot(yk, sk))
except ZeroDivisionError:
rhok = 1000.0
if disp:
@ -1006,31 +1000,10 @@ def _minimize_bfgs(fun, x0, args=(), jac=None, callback=None,
rhok = 1000.0
if disp:
print("Divide-by-zero encountered: rhok assumed large")
# Heuristic to adjust Hk for k == 0
# described at Nocedal/Wright "Numerical Optimization"
# p.143 formula (6.20)
if k == 0:
Hk = yk_sk / np.dot(yk, yk)*I
# Implement BFGS update using the formula:
# Hk <- Hk + ((Hk yk).T yk+sk.T yk)*(rhok**2)*sk sk.T -rhok*[(Hk yk)sk.T +sk(Hk yk).T]
# This formula is equivalent to (6.17) from
# Nocedal/Wright "Numerical Optimization"
# written in a more efficient way for implementation.
Hk_yk = symv(1, Hk, yk)
c = rhok**2 * (yk_sk+Hk_yk.dot(yk))
Hk = syr2(-rhok, sk, Hk_yk, a=Hk)
Hk = syr(c, sk, a=Hk)
k += 1
# The matrix Hk is obtained from the
# symmetric representation that was being
# used to store it.
Hk_triu = numpy.triu(Hk)
Hk_diag = numpy.diag(Hk)
Hk = Hk_triu + Hk_triu.T - numpy.diag(Hk_diag)
A1 = I - sk[:, numpy.newaxis] * yk[numpy.newaxis, :] * rhok
A2 = I - yk[:, numpy.newaxis] * sk[numpy.newaxis, :] * rhok
Hk = numpy.dot(A1, numpy.dot(Hk, A2)) + (rhok * sk[:, numpy.newaxis] *
sk[numpy.newaxis, :])
fval = old_fval
if np.isnan(fval):
@ -1565,6 +1538,7 @@ def _minimize_newtoncg(fun, x0, args=(), jac=None, hess=None, hessp=None,
if retall:
allvecs = [xk]
k = 0
gfk = None
old_fval = f(x0)
old_old_fval = None
float64eps = numpy.finfo(numpy.float64).eps
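The revert restores the textbook inverse-Hessian update,
``H+ = (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T`` with
``rho = 1/(y^T s)``, in place of the BLAS ``syr``/``syr2``/``symv``
formulation. A standalone sketch of that update::

    import numpy as np

    def bfgs_update(Hk, sk, yk):
        # Nocedal & Wright, "Numerical Optimization", eq. (6.17)
        rhok = 1.0 / np.dot(yk, sk)
        I = np.eye(len(sk))
        A1 = I - rhok * np.outer(sk, yk)
        A2 = I - rhok * np.outer(yk, sk)
        return A1.dot(Hk).dot(A2) + rhok * np.outer(sk, sk)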

View File

@ -4,6 +4,7 @@ Unit tests for optimization routines from _root.py.
from __future__ import division, print_function, absolute_import
from numpy.testing import assert_
from pytest import raises as assert_raises
import numpy as np
from scipy.optimize import root
@ -45,3 +46,26 @@ class TestRoot(object):
x, y = z
return np.array([x**3 - 1, y**3 - f])
root(func, [1.1, 1.1], args=1.5)
def test_f_size(self):
# gh8320
# check that decreasing the size of the returned array raises an error
# and doesn't segfault
class fun(object):
def __init__(self):
self.count = 0
def __call__(self, x):
self.count += 1
if not (self.count % 5):
ret = x[0] + 0.5 * (x[0] - x[1]) ** 3 - 1.0
else:
ret = ([x[0] + 0.5 * (x[0] - x[1]) ** 3 - 1.0,
0.5 * (x[1] - x[0]) ** 3 + x[1]])
return ret
F = fun()
with assert_raises(ValueError):
sol = root(F, [0.1, 0.0], method='lm')

View File

@ -25,8 +25,9 @@ class TestCobyla(object):
return -self.con1(x)
def test_simple(self):
# use disp=True as smoke test for gh-8118
x = fmin_cobyla(self.fun, self.x0, [self.con1, self.con2], rhobeg=1,
rhoend=1e-5, maxfun=100)
rhoend=1e-5, maxfun=100, disp=True)
assert_allclose(x, self.solution, atol=1e-4)
def test_minimize_simple(self):

View File

@ -10,6 +10,8 @@ from scipy.optimize import linprog, OptimizeWarning
from scipy._lib._numpy_compat import _assert_warns, suppress_warnings
from scipy.sparse.linalg import MatrixRankWarning
import pytest
def magic_square(n):
np.random.seed(0)
@ -115,7 +117,11 @@ def _assert_success(res, desired_fun=None, desired_x=None,
# res: linprog result object
# desired_fun: desired objective function value or None
# desired_x: desired solution or None
assert_(res.success)
if not res.success:
msg = "linprog status {0}, message: {1}".format(res.status,
res.message)
raise AssertionError(msg)
assert_equal(res.status, 0)
if desired_fun is not None:
assert_allclose(res.fun, desired_fun,
@ -347,6 +353,7 @@ class LinprogCommonTests(object):
with suppress_warnings() as sup:
sup.filter(RuntimeWarning, "scipy.linalg.solve\nIll...")
sup.filter(OptimizeWarning, "A_eq does not appear...")
sup.filter(OptimizeWarning, "Solving system with option...")
res = linprog(c=cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds,
method=self.method, options=self.options)
_assert_success(res, desired_fun=14)
@ -654,6 +661,24 @@ class TestLinprogSimplex(LinprogCommonTests):
class BaseTestLinprogIP(LinprogCommonTests):
method = "interior-point"
def test_bounds_equal_but_infeasible(self):
c = [-4, 1]
A_ub = [[7, -2], [0, 1], [2, -2]]
b_ub = [14, 0, 3]
bounds = [(2, 2), (0, None)]
res = linprog(c=c, A_ub=A_ub, b_ub=b_ub, bounds=bounds,
method=self.method)
_assert_infeasible(res)
def test_bounds_equal_but_infeasible2(self):
c = [-4, 1]
A_eq = [[7, -2], [0, 1], [2, -2]]
b_eq = [14, 0, 3]
bounds = [(2, 2), (0, None)]
res = linprog(c=c, A_eq=A_eq, b_eq=b_eq, bounds=bounds,
method=self.method)
_assert_infeasible(res)
def test_magic_square_bug_7044(self):
# test linprog with a problem with a rank-deficient A_eq matrix
A, b, c, N = magic_square(3)
@ -865,6 +890,7 @@ class TestLinprogIPSpecific:
A, b, c = lpgen_2d(20, 20)
with suppress_warnings() as sup:
sup.filter(RuntimeWarning, "scipy.linalg.solve\nIll...")
sup.filter(OptimizeWarning, "Solving system with option...")
res = linprog(c, A_ub=A, b_ub=b, method=self.method,
options={"ip": True, "disp": True})
# ip code is independent of sparse/dense
@ -902,6 +928,11 @@ class TestLinprogIPSpecific:
class TestLinprogIPSparse(BaseTestLinprogIP):
options = {"sparse": True}
@pytest.mark.xfail(reason='Fails with ATLAS, see gh-7877')
def test_bug_6690(self):
# Test defined in base class, but can't mark as xfail there
super(TestLinprogIPSparse, self).test_bug_6690()
def test_magic_square_sparse_no_presolve(self):
# test linprog with a problem with a rank-deficient A_eq matrix
A, b, c, N = magic_square(3)
@ -935,3 +966,8 @@ class TestLinprogIPDense(BaseTestLinprogIP):
class TestLinprogIPSparsePresolve(BaseTestLinprogIP):
options = {"sparse": True, "_sparse_presolve": True}
@pytest.mark.xfail(reason='Fails with ATLAS, see gh-7877')
def test_bug_6690(self):
# Test defined in base class, but can't mark as xfail there
super(TestLinprogIPSparsePresolve, self).test_bug_6690()

View File

@ -158,14 +158,14 @@ class CheckOptimizeParameterized(CheckOptimize):
atol=1e-6)
# Ensure that function call counts are 'known good'; these are from
# Scipy 1.0.0. Don't allow them to increase.
assert_(self.funccalls == 9, self.funccalls)
assert_(self.gradcalls == 7, self.gradcalls)
# Scipy 0.7.0. Don't allow them to increase.
assert_(self.funccalls == 10, self.funccalls)
assert_(self.gradcalls == 8, self.gradcalls)
# Ensure that the function behaves the same; this is from Scipy 1.0.0
# Ensure that the function behaves the same; this is from Scipy 0.7.0
assert_allclose(self.trace[6:8],
[[7.323472e-15, -5.248650e-01, 4.875251e-01],
[7.323472e-15, -5.248650e-01, 4.875251e-01]],
[[0, -5.25060743e-01, 4.87748473e-01],
[0, -5.24885582e-01, 4.87530347e-01]],
atol=1e-14, rtol=1e-7)
def test_bfgs_infinite(self):
@ -321,6 +321,14 @@ class CheckOptimizeParameterized(CheckOptimize):
full_output=True, disp=False, retall=False,
initial_simplex=simplex)
def test_ncg_negative_maxiter(self):
# Regression test for gh-8241
opts = {'maxiter': -1}
result = optimize.minimize(self.func, self.startparams,
method='Newton-CG', jac=self.grad,
args=(), options=opts)
assert_(result.status == 1)
def test_ncg(self):
# line-search Newton conjugate gradient optimization routine
if self.use_wrapper:

View File

@ -260,26 +260,35 @@ def freqz(b, a=1, worN=None, whole=False, plot=None):
Given the M-order numerator `b` and N-order denominator `a` of a digital
filter, compute its frequency response::
            jw                 -jw              -jwM
   jw    B(e  )    b[0] + b[1]e    + ... + b[M]e
H(e  ) = ------ = -----------------------------------
            jw                 -jw              -jwN
         A(e  )    a[0] + a[1]e    + ... + a[N]e
Parameters
----------
b : array_like
Numerator of a linear filter. Must be 1D.
Numerator of a linear filter. If `b` has dimension greater than 1,
it is assumed that the coefficients are stored in the first dimension,
and ``b.shape[1:]``, ``a.shape[1:]``, and the shape of the frequencies
array must be compatible for broadcasting.
a : array_like
Denominator of a linear filter. Must be 1D.
Denominator of a linear filter. If `b` has dimension greater than 1,
it is assumed that the coefficients are stored in the first dimension,
and ``b.shape[1:]``, ``a.shape[1:]``, and the shape of the frequencies
array must be compatible for broadcasting.
worN : {None, int, array_like}, optional
If None (default), then compute at 512 frequencies equally spaced
around the unit circle.
If a single integer, then compute at that many frequencies.
If None (default), then compute at 512 equally spaced frequencies.
If a single integer, then compute at that many frequencies. This is
a convenient alternative to::
np.linspace(0, 2*pi if whole else pi, N, endpoint=False)
Using a number that is fast for FFT computations can result in
faster computations (see Notes).
If an array_like, compute the response at the frequencies given (in
radians/sample; must be 1D).
radians/sample).
whole : bool, optional
Normally, frequencies are computed from 0 to the Nyquist frequency,
pi radians/sample (upper-half of unit-circle). If `whole` is True,
@ -318,6 +327,7 @@ def freqz(b, a=1, worN=None, whole=False, plot=None):
3. The denominator coefficients are a single value (``a.shape[0] == 1``).
4. `worN` is at least as long as the numerator coefficients
(``worN >= b.shape[0]``).
5. If ``b.ndim > 1``, then ``b.shape[-1] == 1``.
For long FIR filters, the FFT approach can have lower error and be much
faster than the equivalent direct polynomial calculation.
@ -345,13 +355,51 @@ def freqz(b, a=1, worN=None, whole=False, plot=None):
>>> plt.axis('tight')
>>> plt.show()
Broadcasting Examples
Suppose we have two FIR filters whose coefficients are stored in the
rows of an array with shape (2, 25). For this demonstration we'll
use random data:
>>> np.random.seed(42)
>>> b = np.random.rand(2, 25)
To compute the frequency response for these two filters with one call
to `freqz`, we must pass in ``b.T``, because `freqz` expects the first
axis to hold the coefficients. We must then extend the shape with a
trivial dimension of length 1 to allow broadcasting with the array
of frequencies. That is, we pass in ``b.T[..., np.newaxis]``, which has
shape (25, 2, 1):
>>> w, h = signal.freqz(b.T[..., np.newaxis], worN=1024)
>>> w.shape
(1024,)
>>> h.shape
(2, 1024)
Now suppose we have two transfer functions, with the same numerator
coefficients ``b = [0.5, 0.5]``. The coefficients for the two denominators
are stored in the first dimension of the two-dimensional array `a`::
a = [   1      1  ]
    [ -0.25, -0.5 ]
>>> b = np.array([0.5, 0.5])
>>> a = np.array([[1, 1], [-0.25, -0.5]])
Only `a` is more than one-dimensional. To make it compatible for
broadcasting with the frequencies, we extend it with a trivial dimension
in the call to `freqz`:
>>> w, h = signal.freqz(b, a[..., np.newaxis], worN=1024)
>>> w.shape
(1024,)
>>> h.shape
(2, 1024)
"""
b = atleast_1d(b)
a = atleast_1d(a)
if b.ndim != 1:
raise ValueError('b must be 1D')
if a.ndim != 1:
raise ValueError('a must be 1D')
if worN is None:
worN = 512
@ -362,20 +410,20 @@ def freqz(b, a=1, worN=None, whole=False, plot=None):
except TypeError: # not int-like
w = atleast_1d(worN)
else:
if worN <= 0:
raise ValueError('worN must be positive, got %s' % (worN,))
if worN < 0:
raise ValueError('worN must be nonnegative, got %s' % (worN,))
lastpoint = 2 * pi if whole else pi
w = np.linspace(0, lastpoint, worN, endpoint=False)
min_size = b.size
if (a.size == 1 and worN >= min_size and
fftpack.next_fast_len(worN) == worN):
if (a.size == 1 and worN >= b.shape[0] and
fftpack.next_fast_len(worN) == worN and
(b.ndim == 1 or (b.shape[-1] == 1))):
# if worN is fast, 2 * worN will be fast, too, so no need to check
n_fft = worN if whole else worN * 2
if np.isrealobj(b) and np.isrealobj(a):
fft_func = np.fft.rfft
else:
fft_func = fftpack.fft
h = fft_func(b, n=n_fft)[:worN]
h = fft_func(b, n=n_fft, axis=0)[:worN]
h /= a
if fft_func is np.fft.rfft and whole:
# exclude DC and maybe Nyquist (no need to use axis_reverse
@ -383,16 +431,17 @@ def freqz(b, a=1, worN=None, whole=False, plot=None):
stop = -1 if n_fft % 2 == 1 else -2
h_flip = slice(stop, 0, -1)
h = np.concatenate((h, h[h_flip].conj()))
if b.ndim > 1:
# Last axis of h has length 1, so drop it.
h = h[..., 0]
# Rotate the first axis of h to the end.
h = np.rollaxis(h, 0, h.ndim)
del worN
if w.ndim != 1:
raise ValueError('w must be 1D')
if h is None: # still need to compute using freqs w
if w.size == 0:
raise ValueError('w must have at least one element, got 0')
zm1 = exp(-1j * w)
h = npp_polyval(zm1, b)
h /= npp_polyval(zm1, a)
h = (npp_polyval(zm1, b, tensor=False) /
npp_polyval(zm1, a, tensor=False))
if plot is not None:
plot(w, h)

View File

@ -193,9 +193,14 @@ scipy_signal_sigtools_linear_filter(PyObject * NPY_UNUSED(dummy), PyObject * arg
if (str == NULL) {
goto fail;
}
#if PY_VERSION_HEX >= 0x03000000
msg = PyUnicode_FromFormat(
"input type '%U' not supported\n", str);
#else
s = PyString_AsString(str);
msg = PyString_FromFormat(
"input type '%s' not supported\n", s);
#endif
Py_DECREF(str);
if (msg == NULL) {
goto fail;

View File

@ -2,12 +2,6 @@
#define _SCIPY_PRIVATE_SIGNAL_SIGTOOLS_H_
#include "Python.h"
#if PY_VERSION_HEX >= 0x03000000
#define PyString_AsString PyBytes_AsString
#define PyString_FromFormat PyBytes_FromFormat
#endif
#include "numpy/noprefix.h"
#define BOUNDARY_MASK 12

View File

@ -852,6 +852,7 @@ PyObject *PyArray_OrderFilterND(PyObject *op1, PyObject *op2, int order) {
intp *ret_ind;
CompareFunction compare_func;
char *zptr=NULL;
PyArray_CopySwapFunc *copyswap;
/* Get Array objects from input */
typenum = PyArray_ObjectType(op1, 0);
@ -907,6 +908,8 @@ PyObject *PyArray_OrderFilterND(PyObject *op1, PyObject *op2, int order) {
os = PyArray_ITEMSIZE(ret);
op = PyArray_DATA(ret);
copyswap = PyArray_DESCR(ret)->f->copyswap;
bytes_in_array = PyArray_NDIM(ap1)*sizeof(intp);
mode_dep = malloc(bytes_in_array);
for (k = 0; k < PyArray_NDIM(ap1); k++) {
@ -980,8 +983,15 @@ PyObject *PyArray_OrderFilterND(PyObject *op1, PyObject *op2, int order) {
fill_buffer(ap1_ptr,ap1,ap2,sort_buffer,n2,check,b_ind,temp_ind,offsets);
qsort(sort_buffer, n2_nonzero, is1, compare_func);
memcpy(op, sort_buffer + order*is1, os);
/*
* Use copyswap for correct refcounting with object arrays
* (sort_buffer has borrowed references, op owns references). Note
* also that os == PyArray_ITEMSIZE(ret) and we are copying a single
* scalar here.
*/
copyswap(op, sort_buffer + order*is1, 0, NULL);
/* increment index counter */
incr = increment(ret_ind,PyArray_NDIM(ret),PyArray_DIMS(ret));
/* increment to next output index */

View File

@ -502,6 +502,12 @@ class TestFreqz(object):
assert_array_almost_equal(w, np.pi * np.arange(9) / 9.)
assert_array_almost_equal(h, np.ones(9))
for a in [1, np.ones(2)]:
w, h = freqz(np.ones(2), a, worN=0)
assert_equal(w.shape, (0,))
assert_equal(h.shape, (0,))
assert_equal(h.dtype, np.dtype('complex128'))
t = np.linspace(0, 1, 4, endpoint=False)
for b, a, h_whole in zip(
([1., 0, 0, 0], np.sin(2 * np.pi * t)),
@ -519,13 +525,6 @@ class TestFreqz(object):
assert_array_almost_equal(w, expected_w)
assert_array_almost_equal(h, h_whole)
# force 1D
b = np.array(b)
a = np.array(a)
assert_raises(ValueError, freqz, b[np.newaxis], a, w)
assert_raises(ValueError, freqz, b, a[np.newaxis], w)
assert_raises(ValueError, freqz, b, a, w[np.newaxis])
def test_basic_whole(self):
w, h = freqz([1.0], worN=8, whole=True)
assert_array_almost_equal(w, 2 * np.pi * np.arange(8.0) / 8)
@ -605,6 +604,70 @@ class TestFreqz(object):
assert_array_almost_equal(w, expected_w)
assert_array_almost_equal(h, expected_h)
def test_broadcasting1(self):
# Test broadcasting with worN an integer or a 1-D array,
# b and a are n-dimensional arrays.
np.random.seed(123)
b = np.random.rand(3, 5, 1)
a = np.random.rand(2, 1)
for whole in [False, True]:
# Test with worN being integers (one fast for FFT and one not),
# a 1-D array, and an empty array.
for worN in [16, 17, np.linspace(0, 1, 10), np.array([])]:
w, h = freqz(b, a, worN=worN, whole=whole)
for k in range(b.shape[1]):
bk = b[:, k, 0]
ak = a[:, 0]
ww, hh = freqz(bk, ak, worN=worN, whole=whole)
assert_allclose(ww, w)
assert_allclose(hh, h[k])
def test_broadcasting2(self):
# Test broadcasting with worN an integer or a 1-D array,
# b is an n-dimensional array, and a is left at the default value.
np.random.seed(123)
b = np.random.rand(3, 5, 1)
for whole in [False, True]:
for worN in [16, 17, np.linspace(0, 1, 10)]:
w, h = freqz(b, worN=worN, whole=whole)
for k in range(b.shape[1]):
bk = b[:, k, 0]
ww, hh = freqz(bk, worN=worN, whole=whole)
assert_allclose(ww, w)
assert_allclose(hh, h[k])
def test_broadcasting3(self):
# Test broadcasting where b.shape[-1] is the same length
# as worN, and a is left at the default value.
np.random.seed(123)
N = 16
b = np.random.rand(3, N)
for whole in [False, True]:
for worN in [N, np.linspace(0, 1, N)]:
w, h = freqz(b, worN=worN, whole=whole)
assert_equal(w.size, N)
for k in range(N):
bk = b[:, k]
ww, hh = freqz(bk, worN=w[k], whole=whole)
assert_allclose(ww, w[k])
assert_allclose(hh, h[k])
def test_broadcasting4(self):
# Test broadcasting with worN a 2-D array.
np.random.seed(123)
b = np.random.rand(4, 2, 1, 1)
a = np.random.rand(5, 2, 1, 1)
for whole in [False, True]:
for worN in [np.random.rand(6, 7), np.empty((6, 0))]:
w, h = freqz(b, a, worN=worN, whole=whole)
assert_array_equal(w, worN)
assert_equal(h.shape, (2,) + worN.shape)
for k in range(2):
ww, hh = freqz(b[:, k, 0, 0], a[:, k, 0, 0], worN=worN.ravel(),
whole=whole)
assert_equal(ww, worN.ravel())
assert_allclose(hh, h[k, :, :].ravel())
class TestSOSFreqz(object):

View File

@ -551,6 +551,21 @@ class TestMedFilt(object):
a.strides = 16
assert_(signal.medfilt(a, 1) == 5.)
def test_refcounting(self):
# Check a refcounting-related crash
a = Decimal(123)
x = np.array([a, a], dtype=object)
if hasattr(sys, 'getrefcount'):
n = 2 * sys.getrefcount(a)
else:
n = 10
# Shouldn't segfault:
for j in range(n):
signal.medfilt(x)
if hasattr(sys, 'getrefcount'):
assert_(sys.getrefcount(a) < n)
assert_equal(x, [a, a])
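The pattern used here, snapshot a refcount, run the operation in a loop, then check the count has not grown, generalizes to any suspected reference leak. A minimal sketch of such a helper (hypothetical name; CPython only, since PyPy has no sys.getrefcount):

    import sys

    def assert_no_refleak(op, obj, iterations=100):
        # Run `op` repeatedly; if it leaks a reference to `obj` each
        # time, the refcount grows by ~`iterations` and the check fails.
        if not hasattr(sys, 'getrefcount'):
            return  # non-refcounting implementations: nothing to measure
        before = sys.getrefcount(obj)
        for _ in range(iterations):
            op()
        assert sys.getrefcount(obj) <= before + 1

For instance, assert_no_refleak(lambda: signal.medfilt(x), a) exercises the same code path as the test above.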
class TestWiener(object):
@ -1183,6 +1198,11 @@ def test_lfilter_bad_object():
assert_raises(TypeError, lfilter, [None], [1.0], [1.0, 2.0, 3.0])
def test_lfilter_notimplemented_input():
# Should not crash, gh-7991
assert_raises(NotImplementedError, lfilter, [2,3], [4,5], [1,2,3,4,5])
@pytest.mark.parametrize('dt', [np.ubyte, np.byte, np.ushort, np.short, np.uint, int,
np.ulonglong, np.longlong, np.float32, np.float64,
np.longdouble, Decimal])

View File

@ -237,6 +237,9 @@ from .construct import *
from .extract import *
from ._matrix_io import *
# For backward compatibility with v0.19.
from . import csgraph
__all__ = [s for s in dir() if not s.startswith('_')]
from scipy._lib._testutils import PytestTester

View File

@ -59,9 +59,26 @@ def use_solver(**kwargs):
def _get_umf_family(A):
"""Get umfpack family string given the sparse matrix dtype."""
family = {'di': 'di', 'Di': 'zi', 'dl': 'dl', 'Dl': 'zl'}
dt = A.dtype.char + A.indices.dtype.char
return family[dt]
_families = {
(np.float64, np.int32): 'di',
(np.complex128, np.int32): 'zi',
(np.float64, np.int64): 'dl',
(np.complex128, np.int64): 'zl'
}
f_type = np.sctypeDict[A.dtype.name]
i_type = np.sctypeDict[A.indices.dtype.name]
try:
family = _families[(f_type, i_type)]
except KeyError:
msg = 'only float64 or complex128 matrices with int32 or int64' \
' indices are supported! (got: matrix: %s, indices: %s)' \
% (f_type, i_type)
raise ValueError(msg)
return family
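The rewritten _get_umf_family replaces dtype-char concatenation with an explicit (value dtype, index dtype) table, so unsupported combinations now fail with a clear ValueError instead of a bare KeyError. The lookup can be sketched like this (a paraphrase, not the scipy internals):

    import numpy as np

    _families = {
        (np.float64, np.int32): 'di',
        (np.complex128, np.int32): 'zi',
        (np.float64, np.int64): 'dl',
        (np.complex128, np.int64): 'zl',
    }

    def umf_family(value_type, index_type):
        # Map supported dtype pairs to a UMFPACK family string.
        try:
            return _families[(value_type, index_type)]
        except KeyError:
            raise ValueError('only float64/complex128 values with '
                             'int32/int64 indices are supported')

    umf_family(np.float64, np.int32)     # -> 'di'
    umf_family(np.complex128, np.int64)  # -> 'zl'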
def spsolve(A, b, permc_spec=None, use_umfpack=True):
"""Solve the sparse linear system Ax=b, where b may be a vector or a matrix.

View File

@ -199,7 +199,10 @@ class TestExpM(object):
tiny = 1e-17
A_logm_perturbed = A_logm.copy()
A_logm_perturbed[1, 0] = tiny
A_expm_logm_perturbed = expm(A_logm_perturbed)
with suppress_warnings() as sup:
sup.filter(RuntimeWarning,
"scipy.linalg.solve\nIll-conditioned.*")
A_expm_logm_perturbed = expm(A_logm_perturbed)
rtol = 1e-4
atol = 100 * tiny
assert_(not np.allclose(A_expm_logm_perturbed, A, rtol=rtol, atol=atol))
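numpy.testing.suppress_warnings filters by warning class and a message regex, and only within the with block, so the test stays quiet under -W error without hiding unrelated warnings. A minimal sketch:

    import numpy as np
    from numpy.testing import suppress_warnings

    with suppress_warnings() as sup:
        # Only RuntimeWarnings whose message matches the regex are filtered.
        sup.filter(RuntimeWarning, "divide by zero")
        np.log(0.0)   # warns "divide by zero encountered"; silenced here
    # Outside the block, warning behaviour is back to normal.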

View File

@ -41,6 +41,9 @@
#define PyInt_AsSsize_t PyLong_AsSsize_t
#define PyInt_FromSsize_t PyLong_FromSsize_t
#endif
#if NPY_API_VERSION >= 0x0000000c
#define HAVE_WRITEBACKIFCOPY
#endif
static const int supported_I_typenums[] = {NPY_INT32, NPY_INT64};
static const int n_supported_I_typenums = sizeof(supported_I_typenums) / sizeof(int);
@ -444,6 +447,11 @@ fail:
--j;
continue;
}
#ifdef HAVE_WRITEBACKIFCOPY
if (is_output[j] && arg_arrays[j] != NULL && PyArray_Check(arg_arrays[j])) {
PyArray_ResolveWritebackIfCopy((PyArrayObject *)arg_arrays[j]);
}
#endif
Py_XDECREF(arg_arrays[j]);
if (*p == 'i' && arg_list[j] != NULL) {
std::free(arg_list[j]);
@ -578,11 +586,16 @@ static PyObject *c_array_from_object(PyObject *obj, int typenum, int is_output)
}
}
else {
#ifdef HAVE_WRITEBACKIFCOPY
int flags = NPY_C_CONTIGUOUS|NPY_WRITEABLE|NPY_ARRAY_WRITEBACKIFCOPY|NPY_NOTSWAPPED;
#else
int flags = NPY_C_CONTIGUOUS|NPY_WRITEABLE|NPY_UPDATEIFCOPY|NPY_NOTSWAPPED;
#endif
if (typenum == -1) {
return PyArray_FROM_OF(obj, NPY_C_CONTIGUOUS|NPY_WRITEABLE|NPY_UPDATEIFCOPY|NPY_NOTSWAPPED);
return PyArray_FROM_OF(obj, flags);
}
else {
return PyArray_FROM_OTF(obj, typenum, NPY_C_CONTIGUOUS|NPY_WRITEABLE|NPY_UPDATEIFCOPY|NPY_NOTSWAPPED);
return PyArray_FROM_OTF(obj, typenum, flags);
}
}
}

View File

@ -275,8 +275,6 @@ def _validate_mahalanobis_kwargs(X, m, n, **kwargs):
def _validate_minkowski_kwargs(X, m, n, **kwargs):
if 'w' in kwargs:
kwargs['w'] = _convert_to_double(kwargs['w'])
if 'p' not in kwargs:
kwargs['p'] = 2.
return kwargs
@ -329,6 +327,13 @@ def _validate_vector(u, dtype=None):
return u
def _validate_weights(w, dtype=np.double):
w = _validate_vector(w, dtype=dtype)
if np.any(w < 0):
raise ValueError("Input weights should be all non-negative")
return w
def _validate_wminkowski_kwargs(X, m, n, **kwargs):
w = kwargs.pop('w', None)
if w is None:
@ -443,7 +448,7 @@ def minkowski(u, v, p=2, w=None):
{||u-v||}_p = (\\sum{|u_i - v_i|^p})^{1/p}.
\\left(\\sum{(|w_i (u_i - v_i)|^p)}\\right)^{1/p}.
\\left(\\sum{w_i(|(u_i - v_i)|^p)}\\right)^{1/p}.
Parameters
----------
@ -469,7 +474,7 @@ def minkowski(u, v, p=2, w=None):
raise ValueError("p must be at least 1")
u_v = u - v
if w is not None:
w = _validate_vector(w)
w = _validate_weights(w)
if p == 1:
root_w = w
if p == 2:
@ -486,18 +491,41 @@ def minkowski(u, v, p=2, w=None):
# deprecated `wminkowski`. Not done at once because it would be annoying for
# downstream libraries that used `wminkowski` and support multiple scipy
# versions.
def wminkowski(*args, **kwds):
return minkowski(*args, **kwds)
def wminkowski(u, v, p, w):
"""
Computes the weighted Minkowski distance between two 1-D arrays.
The weighted Minkowski distance between `u` and `v`, defined as
if minkowski.__doc__ is not None:
doc = minkowski.__doc__.replace("Minkowski", "Weighted Minkowski")
doc += """Notes
.. math::
\\left(\\sum{(|w_i (u_i - v_i)|^p)}\\right)^{1/p}.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
p : int
The order of the norm of the difference :math:`{||u-v||}_p`.
w : (N,) array_like
The weight vector.
Returns
-------
wminkowski : double
The weighted Minkowski distance between vectors `u` and `v`.
Notes
-----
`wminkowski` is DEPRECATED. It is simply an alias of `minkowski` in
scipy >= 1.0.
`wminkowski` is DEPRECATED. It implements a definition where weights
are powered. It is recommended to use the weighted version of `minkowski`
instead. This function will be removed in a future version of scipy.
"""
wminkowski.__doc__ = doc
w = _validate_vector(w)
return minkowski(u, v, p=p, w=w**p)
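The shim forwards w**p because the deprecated definition put the weight inside the absolute value before raising to the p-th power, i.e. (sum |w_i (u_i - v_i)|^p)^(1/p), while minkowski now computes (sum w_i |u_i - v_i|^p)^(1/p). A sketch verifying the two agree under that substitution:

    import numpy as np
    from scipy.spatial.distance import minkowski

    u = np.array([1.0, 0.0, 0.0])
    v = np.array([0.0, 1.0, 0.0])
    w = np.array([1.0, 2.0, 0.5])
    p = 3

    # Deprecated definition: weights inside the absolute value, then powered.
    old = np.sum(np.abs(w * (u - v))**p)**(1.0 / p)

    # New weighting: passing w**p reproduces the old number exactly.
    new = minkowski(u, v, p=p, w=w**p)
    assert np.isclose(old, new)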
def euclidean(u, v, w=None):
"""
@ -509,7 +537,7 @@ def euclidean(u, v, w=None):
{||u-v||}_2
\\left(\\sum{(|w_i (u_i - v_i)|^2)}\\right)^{1/2}
\\left(\\sum{(w_i |(u_i - v_i)|^2)}\\right)^{1/2}
Parameters
----------
@ -1675,8 +1703,6 @@ def pdist(X, metric='euclidean', *args, **kwargs):
if(mstr in _METRICS['seuclidean'].aka or
mstr in _METRICS['mahalanobis'].aka):
raise ValueError("metric %s incompatible with weights" % mstr)
if mstr in _METRICS['wminkowski'].aka:
mstr = "minkowski"
# need to use python version for weighting
kwargs['out'] = out
mstr = "test_%s" % mstr
@ -2404,8 +2430,6 @@ def cdist(XA, XB, metric='euclidean', *args, **kwargs):
if(mstr in _METRICS['seuclidean'].aka or
mstr in _METRICS['mahalanobis'].aka):
raise ValueError("metric %s incompatible with weights" % mstr)
if mstr in _METRICS['wminkowski'].aka:
mstr = "minkowski"
# need to use python version for weighting
kwargs['out'] = out
mstr = "test_%s" % mstr

View File

@ -2152,7 +2152,16 @@ def tsearch(tri, xi):
"""
return tri.find_simplex(xi)
Delaunay.add_points.__func__.__doc__ = _QhullUser._add_points.__doc__
# Set the docstring of dst to the docstring of src, working around a change in Cython 0.28
# See https://github.com/scipy/scipy/pull/8581
def _copy_docstr(dst, src):
try:
dst.__doc__ = src.__doc__
except AttributeError:
dst.__func__.__doc__ = src.__func__.__doc__
_copy_docstr(Delaunay.add_points, _QhullUser._add_points)
#------------------------------------------------------------------------------
# Delaunay triangulation interface, for low-level C
@ -2347,7 +2356,7 @@ class ConvexHull(_QhullUser):
self._vertices = np.unique(self.simplices)
return self._vertices
ConvexHull.add_points.__func__.__doc__ = _QhullUser._add_points.__doc__
_copy_docstr(ConvexHull.add_points, _QhullUser._add_points)
#------------------------------------------------------------------------------
# Voronoi diagrams
@ -2418,7 +2427,7 @@ class Voronoi(_QhullUser):
Plot it:
>>> import matplotlib.pyplot as plt
>>> voronoi_plot_2d(vor)
>>> fig = voronoi_plot_2d(vor)
>>> plt.show()
The Voronoi vertices:
@ -2499,7 +2508,8 @@ class Voronoi(_QhullUser):
self.ridge_vertices))
return self._ridge_dict
Voronoi.add_points.__func__.__doc__ = _QhullUser._add_points.__doc__
_copy_docstr(Voronoi.add_points, _QhullUser._add_points)
#------------------------------------------------------------------------------
# Halfspace Intersection

View File

@ -58,7 +58,8 @@ from scipy.spatial.distance import (braycurtis, canberra, chebyshev, cityblock,
hamming, jaccard, kulsinski, mahalanobis,
matching, minkowski, rogerstanimoto,
russellrao, seuclidean, sokalmichener,
sokalsneath, sqeuclidean, yule, wminkowski)
sokalsneath, sqeuclidean, yule)
from scipy.spatial.distance import wminkowski as old_wminkowski
_filenames = [
"cdist-X1.txt",
@ -221,7 +222,7 @@ def _assert_within_tol(a, b, atol=0, rtol=0, verbose_=False):
def _rand_split(arrays, weights, axis, split_per, seed=None):
# inverse operation for stats.collapse_weights
weights = np.array(weights) # modified inplace; need a copy
weights = np.array(weights, dtype=np.float64) # modified inplace; need a copy
seeded_rand = np.random.RandomState(seed)
def mytake(a, ix, axis):
@ -236,7 +237,7 @@ def _rand_split(arrays, weights, axis, split_per, seed=None):
prev_w = weights[split_ix]
q = seeded_rand.rand()
weights[split_ix] = q * prev_w
weights = np.append(weights, (1-q) * prev_w)
weights = np.append(weights, (1. - q) * prev_w)
arrays = [np.append(a, mytake(a, split_ix, axis=axis),
axis=axis) for a in arrays]
return arrays, weights
@ -362,7 +363,7 @@ wyule = _weight_checked(yule)
wdice = _weight_checked(dice)
wcosine = _weight_checked(cosine)
wcorrelation = _weight_checked(correlation)
wminkowski = _weight_checked(wminkowski, const_test=False)
wminkowski = _weight_checked(minkowski, const_test=False)
wjaccard = _weight_checked(jaccard)
weuclidean = _weight_checked(euclidean, const_test=False)
wsqeuclidean = _weight_checked(sqeuclidean, const_test=False)
@ -533,6 +534,8 @@ class TestCdist(object):
for metric in _METRICS_NAMES:
if verbose > 2:
print("testing: ", metric, " with: ", eo_name)
if metric == 'wminkowski':
continue
if metric in {'dice', 'yule', 'kulsinski', 'matching',
'rogerstanimoto', 'russellrao', 'sokalmichener',
'sokalsneath'} and 'bool' not in eo_name:
@ -1294,6 +1297,8 @@ class TestPdist(object):
# NOTE: num samples needs to be > than dimensions for mahalanobis
X = eo[eo_name][::5, ::2]
for metric in _METRICS_NAMES:
if metric == 'wminkowski':
continue
if verbose > 2:
print("testing: ", metric, " with: ", eo_name)
if metric in {'dice', 'yule', 'kulsinski', 'matching',
@ -1391,6 +1396,24 @@ class TestSomeDistanceFunctions(object):
assert_almost_equal(dist1p5, (1.0 + 2.0**1.5)**(2. / 3))
dist2 = wminkowski(x, y, p=2)
def test_old_wminkowski(self):
with suppress_warnings() as wrn:
wrn.filter(message="`wminkowski` is deprecated")
w = np.array([1.0, 2.0, 0.5])
for x, y in self.cases:
dist1 = old_wminkowski(x, y, p=1, w=w)
assert_almost_equal(dist1, 3.0)
dist1p5 = old_wminkowski(x, y, p=1.5, w=w)
assert_almost_equal(dist1p5, (2.0**1.5+1.0)**(2./3))
dist2 = old_wminkowski(x, y, p=2, w=w)
assert_almost_equal(dist2, np.sqrt(5))
# test weights Issue #7893
arr = np.arange(4)
w = np.full_like(arr, 4)
assert_almost_equal(old_wminkowski(arr, arr + 1, p=2, w=w), 8.0)
assert_almost_equal(wminkowski(arr, arr + 1, p=2, w=w), 4.0)
def test_euclidean(self):
for x, y in self.cases:
dist = weuclidean(x, y)
@ -1786,6 +1809,18 @@ def test_hamming_string_array():
assert_allclose(whamming(a, b), desired)
def test_minkowski_w():
# Regression test for gh-8142.
arr_in = np.array([[83.33333333, 100., 83.33333333, 100., 36.,
60., 90., 150., 24., 48.],
[83.33333333, 100., 83.33333333, 100., 36.,
60., 90., 150., 24., 48.]])
pdist(arr_in, metric='minkowski', p=1, w=None)
cdist(arr_in, arr_in, metric='minkowski', p=1, w=None)
pdist(arr_in, metric='minkowski', p=1)
cdist(arr_in, arr_in, metric='minkowski', p=1)
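The regression in gh-8142 was that an explicit w=None was routed into the weighted code path and then validated as a weight vector; with the check restored, passing w=None behaves exactly like omitting w. A quick sketch:

    import numpy as np
    from scipy.spatial.distance import pdist, cdist

    x = np.random.rand(5, 3)
    assert np.allclose(pdist(x, metric='minkowski', p=1),
                       pdist(x, metric='minkowski', p=1, w=None))
    assert np.allclose(cdist(x, x, metric='minkowski', p=1),
                       cdist(x, x, metric='minkowski', p=1, w=None))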
def test_sqeuclidean_dtypes():
# Assert that sqeuclidean returns the right types of values.
# Integer types should be converted to floating for stability.

View File

@ -9,6 +9,8 @@ from numpy.testing import assert_, assert_allclose
from numpy import pi
import pytest
from distutils.version import LooseVersion
import scipy.special as sc
from scipy._lib.six import with_metaclass
from scipy.special._testutils import (
@ -102,11 +104,12 @@ def test_hyp0f1_gh_1609():
# hyp2f1
# ------------------------------------------------------------------------------
@check_version(mpmath, '0.14')
@pytest.mark.xfail(reason="hyp2f1 produces wrong/nonstandard values (gh-7961)")
@check_version(mpmath, '1.0.0')
def test_hyp2f1_strange_points():
pts = [
(2, -1, -1, 0.7),
(2, -2, -2, 0.7),
(2, -1, -1, 0.7), # expected: 2.4
(2, -2, -2, 0.7), # expected: 3.87
]
kw = dict(eliminate=True)
dataset = [p + (float(mpmath.hyp2f1(*p, **kw)),) for p in pts]
@ -1750,14 +1753,18 @@ class TestSystematic(object):
@pytest.mark.xfail(condition=_is_32bit_platform, reason="see gh-3551 for bad points")
def test_rf(self):
def mppoch(a, m):
# deal with cases where the result in double precision
# hits exactly a non-positive integer, but the
# corresponding extended-precision mpf floats don't
if float(a + m) == int(a + m) and float(a + m) <= 0:
a = mpmath.mpf(a)
m = int(a + m) - a
return mpmath.rf(a, m)
if LooseVersion(mpmath.__version__) >= LooseVersion("1.0.0"):
# no workarounds needed
mppoch = mpmath.rf
else:
def mppoch(a, m):
# deal with cases where the result in double precision
# hits exactly a non-positive integer, but the
# corresponding extended-precision mpf floats don't
if float(a + m) == int(a + m) and float(a + m) <= 0:
a = mpmath.mpf(a)
m = int(a + m) - a
return mpmath.rf(a, m)
assert_mpmath_equal(sc.poch,
mppoch,

View File

@ -3308,6 +3308,8 @@ class special_ortho_group_gen(multi_rv_generic):
Random size N-dimensional matrices, dimension (size, dim, dim)
"""
random_state = self._get_random_state(random_state)
size = int(size)
if size > 1:
return np.array([self.rvs(dim, size=1, random_state=random_state)
@ -3315,23 +3317,19 @@ class special_ortho_group_gen(multi_rv_generic):
dim = self._process_parameters(dim)
random_state = self._get_random_state(random_state)
H = np.eye(dim)
D = np.ones((dim,))
for n in range(1, dim):
x = random_state.normal(size=(dim-n+1,))
D[n-1] = np.sign(x[0])
x[0] -= D[n-1]*np.sqrt((x*x).sum())
D = np.empty((dim,))
for n in range(dim-1):
x = random_state.normal(size=(dim-n,))
D[n] = np.sign(x[0]) if x[0] != 0 else 1
x[0] += D[n]*np.sqrt((x*x).sum())
# Householder transformation
Hx = (np.eye(dim-n+1)
Hx = (np.eye(dim-n)
- 2.*np.outer(x, x)/(x*x).sum())
mat = np.eye(dim)
mat[n-1:, n-1:] = Hx
mat[n:, n:] = Hx
H = np.dot(H, mat)
# Fix the last sign such that the determinant is 1
D[-1] = (-1)**(1-(dim % 2))*D.prod()
D[-1] = (-1)**(dim-1)*D[:-1].prod()
# Equivalent to np.dot(np.diag(D), H) but faster, apparently
H = (D*H.T).T
return H
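The corrected loop composes one Householder reflection per column (sizes dim down to 2) and then fixes the final sign entry: dim-1 reflections contribute det (-1)^(dim-1), which the choice of D[-1] cancels, so the determinant comes out exactly +1. A self-contained paraphrase of the fixed loop, with checks:

    import numpy as np

    def sample_so_n(dim, rng=np.random):
        # Build a rotation from Householder reflections, mirroring the
        # fixed loop above: one reflection per column, sizes dim..2.
        H = np.eye(dim)
        D = np.empty(dim)
        for n in range(dim - 1):
            x = rng.normal(size=(dim - n,))
            D[n] = np.sign(x[0]) if x[0] != 0 else 1
            x[0] += D[n] * np.sqrt((x * x).sum())
            mat = np.eye(dim)
            mat[n:, n:] = np.eye(dim - n) - 2. * np.outer(x, x) / (x * x).sum()
            H = np.dot(H, mat)
        D[-1] = (-1)**(dim - 1) * D[:-1].prod()  # force det(H) == +1
        return (D * H.T).T

    R = sample_so_n(4)
    assert np.allclose(np.dot(R, R.T), np.eye(4))   # orthogonal
    assert np.isclose(np.linalg.det(R), 1.0)        # special: det +1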
@ -3448,6 +3446,8 @@ class ortho_group_gen(multi_rv_generic):
Random size N-dimensional matrices, dimension (size, dim, dim)
"""
random_state = self._get_random_state(random_state)
size = int(size)
if size > 1:
return np.array([self.rvs(dim, size=1, random_state=random_state)
@ -3455,19 +3455,17 @@ class ortho_group_gen(multi_rv_generic):
dim = self._process_parameters(dim)
random_state = self._get_random_state(random_state)
H = np.eye(dim)
for n in range(1, dim):
x = random_state.normal(size=(dim-n+1,))
for n in range(dim):
x = random_state.normal(size=(dim-n,))
# random sign, 50/50, but chosen carefully to avoid roundoff error
D = np.sign(x[0])
D = np.sign(x[0]) if x[0] != 0 else 1
x[0] += D*np.sqrt((x*x).sum())
# Householder transformation
Hx = -D*(np.eye(dim-n+1)
Hx = -D*(np.eye(dim-n)
- 2.*np.outer(x, x)/(x*x).sum())
mat = np.eye(dim)
mat[n-1:, n-1:] = Hx
mat[n:, n:] = Hx
H = np.dot(H, mat)
return H
@ -3518,7 +3516,6 @@ class random_correlation_gen(multi_rv_generic):
[-0.20387311, 1. , -0.24351129, 0.06703474],
[ 0.18366501, -0.24351129, 1. , 0.38530195],
[-0.04953711, 0.06703474, 0.38530195, 1. ]])
>>> import scipy.linalg
>>> e, v = scipy.linalg.eigh(x)
>>> e
@ -3735,6 +3732,8 @@ class unitary_group_gen(multi_rv_generic):
Random size N-dimensional matrices, dimension (size, dim, dim)
"""
random_state = self._get_random_state(random_state)
size = int(size)
if size > 1:
return np.array([self.rvs(dim, size=1, random_state=random_state)
@ -3742,8 +3741,6 @@ class unitary_group_gen(multi_rv_generic):
dim = self._process_parameters(dim)
random_state = self._get_random_state(random_state)
z = 1/math.sqrt(2)*(random_state.normal(size=(dim,dim)) +
1j*random_state.normal(size=(dim,dim)))
q, r = scipy.linalg.qr(z)

View File

@ -2598,8 +2598,8 @@ def friedmanchisquare(*args):
ranked = ranked._data
(k,n) = ranked.shape
# Ties correction
repeats = np.array([find_repeats(_) for _ in ranked.T], dtype=object)
ties = repeats[repeats.nonzero()].reshape(-1,2)[:,-1].astype(int)
repeats = [find_repeats(row) for row in ranked.T]
ties = np.array([y for x, y in repeats if x.size > 0])
tie_correction = 1 - (ties**3-ties).sum()/float(n*(k**3-k))
ssbg = np.sum((ranked.sum(-1) - n*(k+1)/2.)**2)
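The replacement avoids building a ragged object array (which numpy 1.14 handles differently) and simply gathers per-block tie counts; the correction itself is the standard Friedman adjustment 1 - sum(t**3 - t) / (n*(k**3 - k)). A small sketch for one block, assuming mstats.find_repeats returns the repeated values and their counts:

    import numpy as np
    from scipy.stats.mstats import find_repeats

    # Within-block ranks with one tie: two observations share rank 1.5.
    block_ranks = np.array([1.5, 1.5, 3.0, 4.0])
    repeats, counts = find_repeats(block_ranks)   # -> [1.5], [2]

    k, n = 4, 1                # treatments per block, number of blocks
    ties = counts.astype(int)
    tie_correction = 1 - (ties**3 - ties).sum() / float(n * (k**3 - k))
    # One tie of size 2 among 4 treatments: 1 - 6/60 = 0.9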

View File

@ -921,12 +921,11 @@ class TestPpccPlot(object):
def test_plot_kwarg(self):
# Check with the matplotlib.pyplot module
fig = plt.figure()
fig.add_subplot(111)
ax = fig.add_subplot(111)
stats.ppcc_plot(self.x, -20, 20, plot=plt)
plt.close()
fig.delaxes(ax)
# Check that a Matplotlib Axes object is accepted
fig.add_subplot(111)
ax = fig.add_subplot(111)
stats.ppcc_plot(self.x, -20, 20, plot=ax)
plt.close()
@ -1118,12 +1117,11 @@ class TestBoxcoxNormplot(object):
def test_plot_kwarg(self):
# Check with the matplotlib.pyplot module
fig = plt.figure()
fig.add_subplot(111)
ax = fig.add_subplot(111)
stats.boxcox_normplot(self.x, -20, 20, plot=plt)
plt.close()
fig.delaxes(ax)
# Check that a Matplotlib Axes object is accepted
fig.add_subplot(111)
ax = fig.add_subplot(111)
stats.boxcox_normplot(self.x, -20, 20, plot=ax)
plt.close()
@ -1426,4 +1424,3 @@ class TestMedianTest(object):
exp_stat, exp_p, dof, e = stats.chi2_contingency(tbl, correction=False)
assert_allclose(stat, exp_stat)
assert_allclose(p, exp_p)

View File

@ -20,10 +20,10 @@ def test_compare_medians_ms():
def test_hdmedian():
# 1-D array
x = ma.arange(11)
assert_equal(ms.hdmedian(x), 5)
assert_allclose(ms.hdmedian(x), 5, rtol=1e-14)
x.mask = ma.make_mask(x)
x.mask[:7] = False
assert_equal(ms.hdmedian(x), 3)
assert_allclose(ms.hdmedian(x), 3, rtol=1e-14)
# Check that `var` keyword returns a value. TODO: check whether returned
# value is actually correct.
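hdmedian is a weighted sum of order statistics computed in floating point, so comparing it to an integer with exact equality is fragile; assert_allclose with rtol=1e-14 still catches real errors while tolerating a final-ulp difference. The classic illustration:

    import numpy as np
    from numpy.testing import assert_allclose

    x = 0.1 + 0.2                         # 0.30000000000000004 in binary
    assert_allclose(x, 0.3, rtol=1e-14)   # passes: off by about one ulp
    # assert_equal(x, 0.3) would fail on exactly this kind of rounding.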

View File

@ -1288,9 +1288,9 @@ class TestSpecialOrthoGroup(object):
def test_reproducibility(self):
np.random.seed(514)
x = special_ortho_group.rvs(3)
expected = np.array([[0.99394515, -0.04527879, 0.10011432],
[-0.04821555, 0.63900322, 0.76769144],
[-0.09873351, -0.76787024, 0.63295101]])
expected = np.array([[-0.99394515, -0.04527879, 0.10011432],
[0.04821555, -0.99846897, 0.02711042],
[0.09873351, 0.03177334, 0.99460653]])
assert_array_almost_equal(x, expected)
random_state = np.random.RandomState(seed=514)
@ -1334,7 +1334,7 @@ class TestSpecialOrthoGroup(object):
# Generate samples
dim = 5
samples = 1000 # Not too many, or the test takes too long
ks_prob = 0.39 # ...so don't expect much precision
ks_prob = .05
np.random.seed(514)
xs = special_ortho_group.rvs(dim, size=samples)
@ -1355,16 +1355,16 @@ class TestSpecialOrthoGroup(object):
class TestOrthoGroup(object):
def test_reproducibility(self):
np.random.seed(514)
np.random.seed(515)
x = ortho_group.rvs(3)
x2 = ortho_group.rvs(3, random_state=514)
x2 = ortho_group.rvs(3, random_state=515)
# Note this matrix has det -1, distinguishing O(N) from SO(N)
expected = np.array([[0.993945, -0.045279, 0.100114],
[-0.048216, -0.998469, 0.02711],
[-0.098734, 0.031773, 0.994607]])
assert_almost_equal(np.linalg.det(x), -1)
expected = np.array([[0.94449759, -0.21678569, -0.24683651],
[-0.13147569, -0.93800245, 0.3207266],
[0.30106219, 0.27047251, 0.9144431]])
assert_array_almost_equal(x, expected)
assert_array_almost_equal(x2, expected)
assert_almost_equal(np.linalg.det(x), -1)
def test_invalid_dim(self):
assert_raises(ValueError, ortho_group.rvs, None)
@ -1373,18 +1373,25 @@ class TestOrthoGroup(object):
assert_raises(ValueError, ortho_group.rvs, 2.5)
def test_det_and_ortho(self):
xs = [ortho_group.rvs(dim)
for dim in range(2,12)
for i in range(3)]
xs = [[ortho_group.rvs(dim)
for i in range(10)]
for dim in range(2,12)]
# Test that determinants are always +1
dets = [np.fabs(np.linalg.det(x)) for x in xs]
assert_allclose(dets, [1.]*30, rtol=1e-13)
# Test that abs determinants are always +1
dets = np.array([[np.linalg.det(x) for x in xx] for xx in xs])
assert_allclose(np.fabs(dets), np.ones(dets.shape), rtol=1e-13)
# Test that we get both positive and negative determinants
# Check that we have at least one and fewer than 10 negative dets in a
# sample of 10; the rest are positive by the previous test.
# Test each dimension separately
assert_array_less([0]*10, [np.where(d < 0)[0].shape[0] for d in dets])
assert_array_less([np.where(d < 0)[0].shape[0] for d in dets], [10]*10)
# Test that these are orthogonal matrices
for x in xs:
assert_array_almost_equal(np.dot(x, x.T),
np.eye(x.shape[0]))
for xx in xs:
for x in xx:
assert_array_almost_equal(np.dot(x, x.T),
np.eye(x.shape[0]))
def test_haar(self):
# Test that the distribution is constant under rotation
@ -1394,7 +1401,7 @@ class TestOrthoGroup(object):
# Generate samples
dim = 5
samples = 1000 # Not too many, or the test takes too long
ks_prob = 0.39 # ...so don't expect much precision
ks_prob = .05
np.random.seed(518) # Note that the test is sensitive to seed too
xs = ortho_group.rvs(dim, size=samples)
@ -1413,6 +1420,31 @@ class TestOrthoGroup(object):
ks_tests = [ks_2samp(proj[p0], proj[p1])[1] for (p0, p1) in pairs]
assert_array_less([ks_prob]*len(pairs), ks_tests)
def test_pairwise_distances(self):
# Test that the distribution of pairwise distances is close to correct.
np.random.seed(514)
def random_ortho(dim):
u, _s, v = np.linalg.svd(np.random.normal(size=(dim, dim)))
return np.dot(u, v)
for dim in range(2, 6):
def generate_test_statistics(rvs, N=1000, eps=1e-10):
stats = np.array([
np.sum((rvs(dim=dim) - rvs(dim=dim))**2)
for _ in range(N)
])
# Add a bit of noise to account for numeric accuracy.
stats += np.random.uniform(-eps, eps, size=stats.shape)
return stats
expected = generate_test_statistics(random_ortho)
actual = generate_test_statistics(scipy.stats.ortho_group.rvs)
_D, p = scipy.stats.ks_2samp(expected, actual)
assert_array_less(.05, p)
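ks_2samp returns the Kolmogorov-Smirnov statistic and a p-value for the hypothesis that two samples were drawn from the same distribution, which is why the test can compare scipy's sampler against an independent SVD-based one. Roughly:

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.RandomState(0)
    a = rng.normal(size=1000)
    b = rng.normal(size=1000)           # same distribution as a
    c = rng.normal(loc=1.0, size=1000)  # shifted distribution

    stat_same, p_same = ks_2samp(a, b)  # p typically large: not rejected
    stat_diff, p_diff = ks_2samp(a, c)  # p essentially zero: rejected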
class TestRandomCorrelation(object):
def test_reproducibility(self):
np.random.seed(514)

View File

@ -789,11 +789,12 @@ def test_weightedtau():
# NaNs
x = [12, 2, 1, 12, 2]
y = [1, 4, 7, 1, np.nan]
tau, p_value = stats.weightedtau(x, y)
assert_approx_equal(tau, -0.56694968153682723)
x = [12, 2, np.nan, 12, 2]
tau, p_value = stats.weightedtau(x, y)
assert_approx_equal(tau, -0.56694968153682723)
with np.errstate(invalid="ignore"):
tau, p_value = stats.weightedtau(x, y)
assert_approx_equal(tau, -0.56694968153682723)
x = [12, 2, np.nan, 12, 2]
tau, p_value = stats.weightedtau(x, y)
assert_approx_equal(tau, -0.56694968153682723)
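np.errstate scopes numpy's floating-point error handling to the with block, so only the NaN comparisons inside this call are silenced and the default "invalid value" warning returns afterwards. For instance:

    import numpy as np

    a = np.array([1.0, np.nan])
    with np.errstate(invalid="ignore"):
        a < 1.0   # NaN comparison: no RuntimeWarning inside the block
    # Outside the block, the same comparison warns
    # "invalid value encountered" again.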
def test_weightedtau_vs_quadratic():

View File

@ -60,8 +60,8 @@ Operating System :: MacOS
MAJOR = 1
MINOR = 0
MICRO = 0
ISRELEASED = False
MICRO = 1
ISRELEASED = True
VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)

View File

@ -1,5 +1,5 @@
numpy
cython
cython==0.27.3
pytest-timeout
pytest-xdist
pytest-env

View File

@ -511,7 +511,8 @@ class Checker(doctest.OutputChecker):
stopwords = {'plt.', '.hist', '.show', '.ylim', '.subplot(',
'set_title', 'imshow', 'plt.show', 'ax.axis', 'plt.plot(',
'.bar(', '.title', '.ylabel', '.xlabel', 'set_ylim', 'set_xlim',
'# reformatted'}
'# reformatted', '.set_xlabel(', '.set_ylabel(', '.set_zlabel(',
'.set(xlim=', '.set(ylim=', '.set(xlabel=', '.set(ylabel='}
def __init__(self, parse_namedtuples=True, ns=None, atol=1e-8, rtol=1e-2):
self.parse_namedtuples = parse_namedtuples