# File: scipy-main/scipy/constants/codata.py (repo: scipy)
# This file is not meant for public use and will be removed in SciPy v2.0.0.
# Use the `scipy.constants` namespace for importing the functions
# included below.
import warnings

from . import _codata

__all__ = [  # noqa: F822
    'physical_constants', 'value', 'unit', 'precision', 'find',
    'ConstantWarning', 'txt2002', 'txt2006', 'txt2010', 'txt2014',
    'txt2018', 'parse_constants_2002to2014',
    'parse_constants_2018toXXXX', 'k', 'c', 'mu0', 'epsilon0',
    'exact_values', 'key', 'val', 'v'
]


def __dir__():
    return __all__


def __getattr__(name):
    if name not in __all__:
        raise AttributeError(
            "scipy.constants.codata is deprecated and has no attribute "
            f"{name}. Try looking in scipy.constants instead.")

    warnings.warn(f"Please use `{name}` from the `scipy.constants` namespace, "
                  "the `scipy.constants.codata` namespace is deprecated.",
                  category=DeprecationWarning, stacklevel=2)

    return getattr(_codata, name)
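The shim above relies on the PEP 562 module-level `__getattr__` hook: attribute access on the deprecated module emits a `DeprecationWarning` and forwards to the private implementation module. A minimal self-contained sketch of the same pattern; the module name `legacy_shim` and the `answer` attribute are hypothetical stand-ins, not SciPy names:

```python
import sys
import types
import warnings

# Hypothetical stand-in for the private implementation module (_codata).
_impl = types.SimpleNamespace(answer=42)

# Build a throwaway module and give it a PEP 562 __getattr__ that warns
# and forwards lookups to the implementation, as codata.py does above.
shim = types.ModuleType("legacy_shim")
shim.__all__ = ['answer']

def _getattr(name):
    if name not in shim.__all__:
        raise AttributeError(
            f"legacy_shim is deprecated and has no attribute {name}.")
    warnings.warn(f"Please import `{name}` from the new namespace; "
                  "`legacy_shim` is deprecated.",
                  category=DeprecationWarning, stacklevel=2)
    return getattr(_impl, name)

shim.__getattr__ = _getattr
sys.modules["legacy_shim"] = shim

import legacy_shim

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = legacy_shim.answer

print(result)                       # 42
print(caught[0].category.__name__)  # DeprecationWarning
```

Attribute lookup on a module first checks the module's `__dict__`; only on a miss does Python 3.7+ fall back to the module-level `__getattr__`, which is why defined names stay warning-free while forwarded ones warn.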
# File: scipy-main/scipy/constants/__init__.py (repo: scipy)
r"""
==================================
Constants (:mod:`scipy.constants`)
==================================
.. currentmodule:: scipy.constants
Physical and mathematical constants and units.
Mathematical constants
======================
================ =================================================================
``pi`` Pi
``golden`` Golden ratio
``golden_ratio`` Golden ratio
================ =================================================================
Physical constants
==================
=========================== =================================================================
``c`` speed of light in vacuum
``speed_of_light`` speed of light in vacuum
``mu_0`` the magnetic constant :math:`\mu_0`
``epsilon_0`` the electric constant (vacuum permittivity), :math:`\epsilon_0`
``h`` the Planck constant :math:`h`
``Planck`` the Planck constant :math:`h`
``hbar`` :math:`\hbar = h/(2\pi)`
``G`` Newtonian constant of gravitation
``gravitational_constant`` Newtonian constant of gravitation
``g`` standard acceleration of gravity
``e`` elementary charge
``elementary_charge`` elementary charge
``R`` molar gas constant
``gas_constant`` molar gas constant
``alpha`` fine-structure constant
``fine_structure`` fine-structure constant
``N_A`` Avogadro constant
``Avogadro`` Avogadro constant
``k`` Boltzmann constant
``Boltzmann`` Boltzmann constant
``sigma`` Stefan-Boltzmann constant :math:`\sigma`
``Stefan_Boltzmann`` Stefan-Boltzmann constant :math:`\sigma`
``Wien`` Wien displacement law constant
``Rydberg`` Rydberg constant
``m_e`` electron mass
``electron_mass`` electron mass
``m_p`` proton mass
``proton_mass`` proton mass
``m_n`` neutron mass
``neutron_mass`` neutron mass
=========================== =================================================================
Constants database
------------------
In addition to the above variables, :mod:`scipy.constants` also contains the
2018 CODATA recommended values [CODATA2018]_ database containing more physical
constants.
.. autosummary::
   :toctree: generated/

   value     -- Value in physical_constants indexed by key
   unit      -- Unit in physical_constants indexed by key
   precision -- Relative precision in physical_constants indexed by key
   find      -- Return list of physical_constant keys with a given string
   ConstantWarning -- Constant sought not in newest CODATA data set
.. data:: physical_constants

   Dictionary of physical constants, of the format
   ``physical_constants[name] = (value, unit, uncertainty)``.
Available constants:
====================================================================== ====
%(constant_names)s
====================================================================== ====
Units
=====
SI prefixes
-----------
============ =================================================================
``quetta`` :math:`10^{30}`
``ronna`` :math:`10^{27}`
``yotta`` :math:`10^{24}`
``zetta`` :math:`10^{21}`
``exa`` :math:`10^{18}`
``peta`` :math:`10^{15}`
``tera`` :math:`10^{12}`
``giga`` :math:`10^{9}`
``mega`` :math:`10^{6}`
``kilo`` :math:`10^{3}`
``hecto`` :math:`10^{2}`
``deka`` :math:`10^{1}`
``deci`` :math:`10^{-1}`
``centi`` :math:`10^{-2}`
``milli`` :math:`10^{-3}`
``micro`` :math:`10^{-6}`
``nano`` :math:`10^{-9}`
``pico`` :math:`10^{-12}`
``femto`` :math:`10^{-15}`
``atto`` :math:`10^{-18}`
``zepto`` :math:`10^{-21}`
``yocto`` :math:`10^{-24}`
``ronto`` :math:`10^{-27}`
``quecto`` :math:`10^{-30}`
============ =================================================================
Binary prefixes
---------------
============ =================================================================
``kibi`` :math:`2^{10}`
``mebi`` :math:`2^{20}`
``gibi`` :math:`2^{30}`
``tebi`` :math:`2^{40}`
``pebi`` :math:`2^{50}`
``exbi`` :math:`2^{60}`
``zebi`` :math:`2^{70}`
``yobi`` :math:`2^{80}`
============ =================================================================
Mass
----
================= ============================================================
``gram`` :math:`10^{-3}` kg
``metric_ton`` :math:`10^{3}` kg
``grain`` one grain in kg
``lb``            one pound (avoirdupois) in kg
``pound``         one pound (avoirdupois) in kg
``blob`` one inch version of a slug in kg (added in 1.0.0)
``slinch`` one inch version of a slug in kg (added in 1.0.0)
``slug`` one slug in kg (added in 1.0.0)
``oz`` one ounce in kg
``ounce`` one ounce in kg
``stone`` one stone in kg
``long_ton`` one long ton in kg
``short_ton`` one short ton in kg
``troy_ounce`` one Troy ounce in kg
``troy_pound`` one Troy pound in kg
``carat`` one carat in kg
``m_u`` atomic mass constant (in kg)
``u`` atomic mass constant (in kg)
``atomic_mass`` atomic mass constant (in kg)
================= ============================================================
Angle
-----
================= ============================================================
``degree`` degree in radians
``arcmin`` arc minute in radians
``arcminute`` arc minute in radians
``arcsec`` arc second in radians
``arcsecond`` arc second in radians
================= ============================================================
Time
----
================= ============================================================
``minute`` one minute in seconds
``hour`` one hour in seconds
``day`` one day in seconds
``week`` one week in seconds
``year`` one year (365 days) in seconds
``Julian_year`` one Julian year (365.25 days) in seconds
================= ============================================================
Length
------
===================== ============================================================
``inch`` one inch in meters
``foot`` one foot in meters
``yard`` one yard in meters
``mile`` one mile in meters
``mil`` one mil in meters
``pt`` one point in meters
``point`` one point in meters
``survey_foot`` one survey foot in meters
``survey_mile`` one survey mile in meters
``nautical_mile`` one nautical mile in meters
``fermi`` one Fermi in meters
``angstrom`` one Angstrom in meters
``micron`` one micron in meters
``au`` one astronomical unit in meters
``astronomical_unit`` one astronomical unit in meters
``light_year`` one light year in meters
``parsec`` one parsec in meters
===================== ============================================================
Pressure
--------
================= ============================================================
``atm`` standard atmosphere in pascals
``atmosphere`` standard atmosphere in pascals
``bar`` one bar in pascals
``torr`` one torr (mmHg) in pascals
``mmHg`` one torr (mmHg) in pascals
``psi`` one psi in pascals
================= ============================================================
Area
----
================= ============================================================
``hectare`` one hectare in square meters
``acre`` one acre in square meters
================= ============================================================
Volume
------
=================== ========================================================
``liter`` one liter in cubic meters
``litre`` one liter in cubic meters
``gallon`` one gallon (US) in cubic meters
``gallon_US`` one gallon (US) in cubic meters
``gallon_imp`` one gallon (UK) in cubic meters
``fluid_ounce`` one fluid ounce (US) in cubic meters
``fluid_ounce_US`` one fluid ounce (US) in cubic meters
``fluid_ounce_imp`` one fluid ounce (UK) in cubic meters
``bbl`` one barrel in cubic meters
``barrel`` one barrel in cubic meters
=================== ========================================================
Speed
-----
================== ==========================================================
``kmh`` kilometers per hour in meters per second
``mph`` miles per hour in meters per second
``mach`` one Mach (approx., at 15 C, 1 atm) in meters per second
``speed_of_sound`` one Mach (approx., at 15 C, 1 atm) in meters per second
``knot`` one knot in meters per second
================== ==========================================================
Temperature
-----------
===================== =======================================================
``zero_Celsius`` zero of Celsius scale in Kelvin
``degree_Fahrenheit`` one Fahrenheit (only differences) in Kelvins
===================== =======================================================
.. autosummary::
   :toctree: generated/

   convert_temperature
Energy
------
==================== =======================================================
``eV`` one electron volt in Joules
``electron_volt`` one electron volt in Joules
``calorie`` one calorie (thermochemical) in Joules
``calorie_th`` one calorie (thermochemical) in Joules
``calorie_IT`` one calorie (International Steam Table calorie, 1956) in Joules
``erg`` one erg in Joules
``Btu`` one British thermal unit (International Steam Table) in Joules
``Btu_IT`` one British thermal unit (International Steam Table) in Joules
``Btu_th`` one British thermal unit (thermochemical) in Joules
``ton_TNT`` one ton of TNT in Joules
==================== =======================================================
Power
-----
==================== =======================================================
``hp`` one horsepower in watts
``horsepower`` one horsepower in watts
==================== =======================================================
Force
-----
==================== =======================================================
``dyn`` one dyne in newtons
``dyne`` one dyne in newtons
``lbf`` one pound force in newtons
``pound_force`` one pound force in newtons
``kgf`` one kilogram force in newtons
``kilogram_force`` one kilogram force in newtons
==================== =======================================================
Optics
------
.. autosummary::
   :toctree: generated/

   lambda2nu
   nu2lambda
References
==========
.. [CODATA2018] CODATA Recommended Values of the Fundamental
Physical Constants 2018.
https://physics.nist.gov/cuu/Constants/
"""
# Modules contributed by BasSw (wegwerp@gmail.com)
from ._codata import *
from ._constants import *
from ._codata import _obsolete_constants, physical_constants

# Deprecated namespaces, to be removed in v2.0.0
from . import codata, constants

_constant_names_list = [(_k.lower(), _k, _v)
                        for _k, _v in physical_constants.items()
                        if _k not in _obsolete_constants]
_constant_names = "\n".join(["``{}``{} {} {}".format(_x[1], " "*(66 - len(_x[1])),
                                                     _x[2][0], _x[2][1])
                             for _x in sorted(_constant_names_list)])
if __doc__:
    __doc__ = __doc__ % dict(constant_names=_constant_names)

del _constant_names
del _constant_names_list

__all__ = [s for s in dir() if not s.startswith('_')]

from scipy._lib._testutils import PytestTester
test = PytestTester(__name__)
del PytestTester
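The `physical_constants` database and its helpers (`value`, `unit`, `precision`, `find`) documented above all follow one scheme: a key maps to a `(value, unit, uncertainty)` tuple. A self-contained sketch of that access pattern using a two-entry stand-in dictionary (the real database holds hundreds of CODATA entries; both values below are exact by definition in the 2019 SI revision):

```python
# Stand-in for the CODATA database: key -> (value, unit, uncertainty).
physical_constants = {
    "speed of light in vacuum": (299792458.0, "m s^-1", 0.0),
    "Avogadro constant": (6.02214076e23, "mol^-1", 0.0),
}

def value(key):
    """Value in physical_constants indexed by key."""
    return physical_constants[key][0]

def unit(key):
    """Unit in physical_constants indexed by key."""
    return physical_constants[key][1]

def precision(key):
    """Relative precision: absolute uncertainty divided by the value."""
    val, _, uncertainty = physical_constants[key]
    return uncertainty / val

def find(sub=''):
    """Sorted list of keys containing the substring `sub`."""
    return sorted(k for k in physical_constants if sub.lower() in k.lower())

print(value("speed of light in vacuum"))  # 299792458.0
print(find("constant"))                   # ['Avogadro constant']
```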
# File: scipy-main/scipy/constants/_codata.py (repo: scipy)
"""
Fundamental Physical Constants
------------------------------
These constants are taken from CODATA Recommended Values of the Fundamental
Physical Constants 2018.
Object
------
physical_constants : dict
    A dictionary containing physical constants. Keys are the names of
    physical constants; values are tuples ``(value, units, precision)``.
Functions
---------
value(key):
    Returns the value of the physical constant named by `key`.
unit(key):
    Returns the units of the physical constant named by `key`.
precision(key):
    Returns the relative precision of the physical constant named by `key`.
find(sub):
    Prints or returns a list of keys containing the string `sub`;
    by default, all keys.
Source
------
The values of the constants provided at this site are recommended for
international use by CODATA and are the latest available. Termed the "2018
CODATA recommended values," they are generally recognized worldwide for use in
all fields of science and technology. The values became available on 20 May
2019 and replaced the 2014 CODATA set. Also available is an introduction to the
constants for non-experts at
https://physics.nist.gov/cuu/Constants/introduction.html
References
----------
The bibliography covers theoretical and experimental publications relevant
to the fundamental constants and closely related precision measurements
published since the mid-1980s, as well as many older papers of particular
interest, some of which date back to the 1800s. To search the bibliography,
visit
https://physics.nist.gov/cuu/Constants/
"""
# Compiled by Charles Harris, dated October 3, 2002
# updated to 2002 values by BasSw, 2006
# Updated to 2006 values by Vincent Davis June 2010
# Updated to 2014 values by Joseph Booker, 2015
# Updated to 2018 values by Jakob Jakobson, 2019
from __future__ import annotations
import warnings
from typing import Any
__all__ = ['physical_constants', 'value', 'unit', 'precision', 'find',
'ConstantWarning']
"""
Source: https://physics.nist.gov/cuu/Constants/
The values of the constants provided at this site are recommended for
international use by CODATA and are the latest available. Termed the "2018
CODATA recommended values," they are generally recognized worldwide for use in
all fields of science and technology. The values became available on 20 May
2019 and replaced the 2014 CODATA set.
"""
#
# Source: https://physics.nist.gov/cuu/Constants/
#
# Quantity Value Uncertainty Unit
# ---------------------------------------------------- --------------------- -------------------- -------------
txt2002 = """\
Wien displacement law constant 2.897 7685e-3 0.000 0051e-3 m K
atomic unit of 1st hyperpolarizablity 3.206 361 51e-53 0.000 000 28e-53 C^3 m^3 J^-2
atomic unit of 2nd hyperpolarizablity 6.235 3808e-65 0.000 0011e-65 C^4 m^4 J^-3
atomic unit of electric dipole moment 8.478 353 09e-30 0.000 000 73e-30 C m
atomic unit of electric polarizablity 1.648 777 274e-41 0.000 000 016e-41 C^2 m^2 J^-1
atomic unit of electric quadrupole moment 4.486 551 24e-40 0.000 000 39e-40 C m^2
atomic unit of magn. dipole moment 1.854 801 90e-23 0.000 000 16e-23 J T^-1
atomic unit of magn. flux density 2.350 517 42e5 0.000 000 20e5 T
deuteron magn. moment 0.433 073 482e-26 0.000 000 038e-26 J T^-1
deuteron magn. moment to Bohr magneton ratio 0.466 975 4567e-3 0.000 000 0050e-3
deuteron magn. moment to nuclear magneton ratio 0.857 438 2329 0.000 000 0092
deuteron-electron magn. moment ratio -4.664 345 548e-4 0.000 000 050e-4
deuteron-proton magn. moment ratio 0.307 012 2084 0.000 000 0045
deuteron-neutron magn. moment ratio -0.448 206 52 0.000 000 11
electron gyromagn. ratio 1.760 859 74e11 0.000 000 15e11 s^-1 T^-1
electron gyromagn. ratio over 2 pi 28 024.9532 0.0024 MHz T^-1
electron magn. moment -928.476 412e-26 0.000 080e-26 J T^-1
electron magn. moment to Bohr magneton ratio -1.001 159 652 1859 0.000 000 000 0038
electron magn. moment to nuclear magneton ratio -1838.281 971 07 0.000 000 85
electron magn. moment anomaly 1.159 652 1859e-3 0.000 000 0038e-3
electron to shielded proton magn. moment ratio -658.227 5956 0.000 0071
electron to shielded helion magn. moment ratio 864.058 255 0.000 010
electron-deuteron magn. moment ratio -2143.923 493 0.000 023
electron-muon magn. moment ratio 206.766 9894 0.000 0054
electron-neutron magn. moment ratio 960.920 50 0.000 23
electron-proton magn. moment ratio -658.210 6862 0.000 0066
magn. constant 12.566 370 614...e-7 0 N A^-2
magn. flux quantum 2.067 833 72e-15 0.000 000 18e-15 Wb
muon magn. moment -4.490 447 99e-26 0.000 000 40e-26 J T^-1
muon magn. moment to Bohr magneton ratio -4.841 970 45e-3 0.000 000 13e-3
muon magn. moment to nuclear magneton ratio -8.890 596 98 0.000 000 23
muon-proton magn. moment ratio -3.183 345 118 0.000 000 089
neutron gyromagn. ratio 1.832 471 83e8 0.000 000 46e8 s^-1 T^-1
neutron gyromagn. ratio over 2 pi 29.164 6950 0.000 0073 MHz T^-1
neutron magn. moment -0.966 236 45e-26 0.000 000 24e-26 J T^-1
neutron magn. moment to Bohr magneton ratio -1.041 875 63e-3 0.000 000 25e-3
neutron magn. moment to nuclear magneton ratio -1.913 042 73 0.000 000 45
neutron to shielded proton magn. moment ratio -0.684 996 94 0.000 000 16
neutron-electron magn. moment ratio 1.040 668 82e-3 0.000 000 25e-3
neutron-proton magn. moment ratio -0.684 979 34 0.000 000 16
proton gyromagn. ratio 2.675 222 05e8 0.000 000 23e8 s^-1 T^-1
proton gyromagn. ratio over 2 pi 42.577 4813 0.000 0037 MHz T^-1
proton magn. moment 1.410 606 71e-26 0.000 000 12e-26 J T^-1
proton magn. moment to Bohr magneton ratio 1.521 032 206e-3 0.000 000 015e-3
proton magn. moment to nuclear magneton ratio 2.792 847 351 0.000 000 028
proton magn. shielding correction 25.689e-6 0.015e-6
proton-neutron magn. moment ratio -1.459 898 05 0.000 000 34
shielded helion gyromagn. ratio 2.037 894 70e8 0.000 000 18e8 s^-1 T^-1
shielded helion gyromagn. ratio over 2 pi 32.434 1015 0.000 0028 MHz T^-1
shielded helion magn. moment -1.074 553 024e-26 0.000 000 093e-26 J T^-1
shielded helion magn. moment to Bohr magneton ratio -1.158 671 474e-3 0.000 000 014e-3
shielded helion magn. moment to nuclear magneton ratio -2.127 497 723 0.000 000 025
shielded helion to proton magn. moment ratio -0.761 766 562 0.000 000 012
shielded helion to shielded proton magn. moment ratio -0.761 786 1313 0.000 000 0033
shielded helion gyromagn. ratio 2.037 894 70e8 0.000 000 18e8 s^-1 T^-1
shielded helion gyromagn. ratio over 2 pi 32.434 1015 0.000 0028 MHz T^-1
shielded proton magn. moment 1.410 570 47e-26 0.000 000 12e-26 J T^-1
shielded proton magn. moment to Bohr magneton ratio 1.520 993 132e-3 0.000 000 016e-3
shielded proton magn. moment to nuclear magneton ratio 2.792 775 604 0.000 000 030
{220} lattice spacing of silicon 192.015 5965e-12 0.000 0070e-12 m"""
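SciPy parses these tables with fixed-width-column helpers such as `parse_constants_2002to2014` (listed in `codata.py`'s `__all__`). In this dump the column alignment has been collapsed to single spaces, so fixed-width slicing no longer applies; a hedged regex-based sketch that recovers `(name, value, uncertainty, unit)` from one such collapsed line (the regex is an illustration, not SciPy's parser):

```python
import re

# One whitespace-collapsed line from the CODATA text above.
LINE = ("deuteron mass energy equivalent in MeV "
        "1875.612 793 0.000 047 MeV")

# A numeric field: digit groups separated by single spaces, an optional
# decimal point, optional "..." (truncated exact values), an optional
# exponent; or the literal "(exact)".
NUM = r"(?:-?\d[\d ]*\.?[\d ]*(?:\.\.\.)? ?(?:e-?\d+)?|\(exact\))"

m = re.match(rf"(?P<name>.+?) (?P<value>{NUM}) (?P<unc>{NUM})(?: (?P<unit>.+))?$",
             LINE)
name = m["name"]
# Drop the digit-group spaces (and any "...") before converting to float.
value = float(m["value"].replace(" ", "").replace("...", ""))
unc = 0.0 if m["unc"] == "(exact)" else float(m["unc"].replace(" ", ""))
unit = m["unit"] or ""
print((name, value, unc, unit))
# ('deuteron mass energy equivalent in MeV', 1875.612793, 4.7e-05, 'MeV')
```

The non-greedy name group stops at the first token that parses as a number, which is why names containing unit-like words ("in MeV") are still split correctly.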
txt2006 = """\
{220} lattice spacing of silicon 192.015 5762 e-12 0.000 0050 e-12 m
alpha particle-electron mass ratio 7294.299 5365 0.000 0031
alpha particle mass 6.644 656 20 e-27 0.000 000 33 e-27 kg
alpha particle mass energy equivalent 5.971 919 17 e-10 0.000 000 30 e-10 J
alpha particle mass energy equivalent in MeV 3727.379 109 0.000 093 MeV
alpha particle mass in u 4.001 506 179 127 0.000 000 000 062 u
alpha particle molar mass 4.001 506 179 127 e-3 0.000 000 000 062 e-3 kg mol^-1
alpha particle-proton mass ratio 3.972 599 689 51 0.000 000 000 41
Angstrom star 1.000 014 98 e-10 0.000 000 90 e-10 m
atomic mass constant 1.660 538 782 e-27 0.000 000 083 e-27 kg
atomic mass constant energy equivalent 1.492 417 830 e-10 0.000 000 074 e-10 J
atomic mass constant energy equivalent in MeV 931.494 028 0.000 023 MeV
atomic mass unit-electron volt relationship 931.494 028 e6 0.000 023 e6 eV
atomic mass unit-hartree relationship 3.423 177 7149 e7 0.000 000 0049 e7 E_h
atomic mass unit-hertz relationship 2.252 342 7369 e23 0.000 000 0032 e23 Hz
atomic mass unit-inverse meter relationship 7.513 006 671 e14 0.000 000 011 e14 m^-1
atomic mass unit-joule relationship 1.492 417 830 e-10 0.000 000 074 e-10 J
atomic mass unit-kelvin relationship 1.080 9527 e13 0.000 0019 e13 K
atomic mass unit-kilogram relationship 1.660 538 782 e-27 0.000 000 083 e-27 kg
atomic unit of 1st hyperpolarizability 3.206 361 533 e-53 0.000 000 081 e-53 C^3 m^3 J^-2
atomic unit of 2nd hyperpolarizability 6.235 380 95 e-65 0.000 000 31 e-65 C^4 m^4 J^-3
atomic unit of action 1.054 571 628 e-34 0.000 000 053 e-34 J s
atomic unit of charge 1.602 176 487 e-19 0.000 000 040 e-19 C
atomic unit of charge density 1.081 202 300 e12 0.000 000 027 e12 C m^-3
atomic unit of current 6.623 617 63 e-3 0.000 000 17 e-3 A
atomic unit of electric dipole mom. 8.478 352 81 e-30 0.000 000 21 e-30 C m
atomic unit of electric field 5.142 206 32 e11 0.000 000 13 e11 V m^-1
atomic unit of electric field gradient 9.717 361 66 e21 0.000 000 24 e21 V m^-2
atomic unit of electric polarizability 1.648 777 2536 e-41 0.000 000 0034 e-41 C^2 m^2 J^-1
atomic unit of electric potential 27.211 383 86 0.000 000 68 V
atomic unit of electric quadrupole mom. 4.486 551 07 e-40 0.000 000 11 e-40 C m^2
atomic unit of energy 4.359 743 94 e-18 0.000 000 22 e-18 J
atomic unit of force 8.238 722 06 e-8 0.000 000 41 e-8 N
atomic unit of length 0.529 177 208 59 e-10 0.000 000 000 36 e-10 m
atomic unit of mag. dipole mom. 1.854 801 830 e-23 0.000 000 046 e-23 J T^-1
atomic unit of mag. flux density 2.350 517 382 e5 0.000 000 059 e5 T
atomic unit of magnetizability 7.891 036 433 e-29 0.000 000 027 e-29 J T^-2
atomic unit of mass 9.109 382 15 e-31 0.000 000 45 e-31 kg
atomic unit of momentum 1.992 851 565 e-24 0.000 000 099 e-24 kg m s^-1
atomic unit of permittivity 1.112 650 056... e-10 (exact) F m^-1
atomic unit of time 2.418 884 326 505 e-17 0.000 000 000 016 e-17 s
atomic unit of velocity 2.187 691 2541 e6 0.000 000 0015 e6 m s^-1
Avogadro constant 6.022 141 79 e23 0.000 000 30 e23 mol^-1
Bohr magneton 927.400 915 e-26 0.000 023 e-26 J T^-1
Bohr magneton in eV/T 5.788 381 7555 e-5 0.000 000 0079 e-5 eV T^-1
Bohr magneton in Hz/T 13.996 246 04 e9 0.000 000 35 e9 Hz T^-1
Bohr magneton in inverse meters per tesla 46.686 4515 0.000 0012 m^-1 T^-1
Bohr magneton in K/T 0.671 7131 0.000 0012 K T^-1
Bohr radius 0.529 177 208 59 e-10 0.000 000 000 36 e-10 m
Boltzmann constant 1.380 6504 e-23 0.000 0024 e-23 J K^-1
Boltzmann constant in eV/K 8.617 343 e-5 0.000 015 e-5 eV K^-1
Boltzmann constant in Hz/K 2.083 6644 e10 0.000 0036 e10 Hz K^-1
Boltzmann constant in inverse meters per kelvin 69.503 56 0.000 12 m^-1 K^-1
characteristic impedance of vacuum 376.730 313 461... (exact) ohm
classical electron radius 2.817 940 2894 e-15 0.000 000 0058 e-15 m
Compton wavelength 2.426 310 2175 e-12 0.000 000 0033 e-12 m
Compton wavelength over 2 pi 386.159 264 59 e-15 0.000 000 53 e-15 m
conductance quantum 7.748 091 7004 e-5 0.000 000 0053 e-5 S
conventional value of Josephson constant 483 597.9 e9 (exact) Hz V^-1
conventional value of von Klitzing constant 25 812.807 (exact) ohm
Cu x unit 1.002 076 99 e-13 0.000 000 28 e-13 m
deuteron-electron mag. mom. ratio -4.664 345 537 e-4 0.000 000 039 e-4
deuteron-electron mass ratio 3670.482 9654 0.000 0016
deuteron g factor 0.857 438 2308 0.000 000 0072
deuteron mag. mom. 0.433 073 465 e-26 0.000 000 011 e-26 J T^-1
deuteron mag. mom. to Bohr magneton ratio 0.466 975 4556 e-3 0.000 000 0039 e-3
deuteron mag. mom. to nuclear magneton ratio 0.857 438 2308 0.000 000 0072
deuteron mass 3.343 583 20 e-27 0.000 000 17 e-27 kg
deuteron mass energy equivalent 3.005 062 72 e-10 0.000 000 15 e-10 J
deuteron mass energy equivalent in MeV 1875.612 793 0.000 047 MeV
deuteron mass in u 2.013 553 212 724 0.000 000 000 078 u
deuteron molar mass 2.013 553 212 724 e-3 0.000 000 000 078 e-3 kg mol^-1
deuteron-neutron mag. mom. ratio -0.448 206 52 0.000 000 11
deuteron-proton mag. mom. ratio 0.307 012 2070 0.000 000 0024
deuteron-proton mass ratio 1.999 007 501 08 0.000 000 000 22
deuteron rms charge radius 2.1402 e-15 0.0028 e-15 m
electric constant 8.854 187 817... e-12 (exact) F m^-1
electron charge to mass quotient -1.758 820 150 e11 0.000 000 044 e11 C kg^-1
electron-deuteron mag. mom. ratio -2143.923 498 0.000 018
electron-deuteron mass ratio 2.724 437 1093 e-4 0.000 000 0012 e-4
electron g factor -2.002 319 304 3622 0.000 000 000 0015
electron gyromag. ratio 1.760 859 770 e11 0.000 000 044 e11 s^-1 T^-1
electron gyromag. ratio over 2 pi 28 024.953 64 0.000 70 MHz T^-1
electron mag. mom. -928.476 377 e-26 0.000 023 e-26 J T^-1
electron mag. mom. anomaly 1.159 652 181 11 e-3 0.000 000 000 74 e-3
electron mag. mom. to Bohr magneton ratio -1.001 159 652 181 11 0.000 000 000 000 74
electron mag. mom. to nuclear magneton ratio -1838.281 970 92 0.000 000 80
electron mass 9.109 382 15 e-31 0.000 000 45 e-31 kg
electron mass energy equivalent 8.187 104 38 e-14 0.000 000 41 e-14 J
electron mass energy equivalent in MeV 0.510 998 910 0.000 000 013 MeV
electron mass in u 5.485 799 0943 e-4 0.000 000 0023 e-4 u
electron molar mass 5.485 799 0943 e-7 0.000 000 0023 e-7 kg mol^-1
electron-muon mag. mom. ratio 206.766 9877 0.000 0052
electron-muon mass ratio 4.836 331 71 e-3 0.000 000 12 e-3
electron-neutron mag. mom. ratio 960.920 50 0.000 23
electron-neutron mass ratio 5.438 673 4459 e-4 0.000 000 0033 e-4
electron-proton mag. mom. ratio -658.210 6848 0.000 0054
electron-proton mass ratio 5.446 170 2177 e-4 0.000 000 0024 e-4
electron-tau mass ratio 2.875 64 e-4 0.000 47 e-4
electron to alpha particle mass ratio 1.370 933 555 70 e-4 0.000 000 000 58 e-4
electron to shielded helion mag. mom. ratio 864.058 257 0.000 010
electron to shielded proton mag. mom. ratio -658.227 5971 0.000 0072
electron volt 1.602 176 487 e-19 0.000 000 040 e-19 J
electron volt-atomic mass unit relationship 1.073 544 188 e-9 0.000 000 027 e-9 u
electron volt-hartree relationship 3.674 932 540 e-2 0.000 000 092 e-2 E_h
electron volt-hertz relationship 2.417 989 454 e14 0.000 000 060 e14 Hz
electron volt-inverse meter relationship 8.065 544 65 e5 0.000 000 20 e5 m^-1
electron volt-joule relationship 1.602 176 487 e-19 0.000 000 040 e-19 J
electron volt-kelvin relationship 1.160 4505 e4 0.000 0020 e4 K
electron volt-kilogram relationship 1.782 661 758 e-36 0.000 000 044 e-36 kg
elementary charge 1.602 176 487 e-19 0.000 000 040 e-19 C
elementary charge over h 2.417 989 454 e14 0.000 000 060 e14 A J^-1
Faraday constant 96 485.3399 0.0024 C mol^-1
Faraday constant for conventional electric current 96 485.3401 0.0048 C_90 mol^-1
Fermi coupling constant 1.166 37 e-5 0.000 01 e-5 GeV^-2
fine-structure constant 7.297 352 5376 e-3 0.000 000 0050 e-3
first radiation constant 3.741 771 18 e-16 0.000 000 19 e-16 W m^2
first radiation constant for spectral radiance 1.191 042 759 e-16 0.000 000 059 e-16 W m^2 sr^-1
hartree-atomic mass unit relationship 2.921 262 2986 e-8 0.000 000 0042 e-8 u
hartree-electron volt relationship 27.211 383 86 0.000 000 68 eV
Hartree energy 4.359 743 94 e-18 0.000 000 22 e-18 J
Hartree energy in eV 27.211 383 86 0.000 000 68 eV
hartree-hertz relationship 6.579 683 920 722 e15 0.000 000 000 044 e15 Hz
hartree-inverse meter relationship 2.194 746 313 705 e7 0.000 000 000 015 e7 m^-1
hartree-joule relationship 4.359 743 94 e-18 0.000 000 22 e-18 J
hartree-kelvin relationship 3.157 7465 e5 0.000 0055 e5 K
hartree-kilogram relationship 4.850 869 34 e-35 0.000 000 24 e-35 kg
helion-electron mass ratio 5495.885 2765 0.000 0052
helion mass 5.006 411 92 e-27 0.000 000 25 e-27 kg
helion mass energy equivalent 4.499 538 64 e-10 0.000 000 22 e-10 J
helion mass energy equivalent in MeV 2808.391 383 0.000 070 MeV
helion mass in u 3.014 932 2473 0.000 000 0026 u
helion molar mass 3.014 932 2473 e-3 0.000 000 0026 e-3 kg mol^-1
helion-proton mass ratio 2.993 152 6713 0.000 000 0026
hertz-atomic mass unit relationship 4.439 821 6294 e-24 0.000 000 0064 e-24 u
hertz-electron volt relationship 4.135 667 33 e-15 0.000 000 10 e-15 eV
hertz-hartree relationship 1.519 829 846 006 e-16 0.000 000 000010e-16 E_h
hertz-inverse meter relationship 3.335 640 951... e-9 (exact) m^-1
hertz-joule relationship 6.626 068 96 e-34 0.000 000 33 e-34 J
hertz-kelvin relationship 4.799 2374 e-11 0.000 0084 e-11 K
hertz-kilogram relationship 7.372 496 00 e-51 0.000 000 37 e-51 kg
inverse fine-structure constant 137.035 999 679 0.000 000 094
inverse meter-atomic mass unit relationship 1.331 025 0394 e-15 0.000 000 0019 e-15 u
inverse meter-electron volt relationship 1.239 841 875 e-6 0.000 000 031 e-6 eV
inverse meter-hartree relationship 4.556 335 252 760 e-8 0.000 000 000 030 e-8 E_h
inverse meter-hertz relationship 299 792 458 (exact) Hz
inverse meter-joule relationship 1.986 445 501 e-25 0.000 000 099 e-25 J
inverse meter-kelvin relationship 1.438 7752 e-2 0.000 0025 e-2 K
inverse meter-kilogram relationship 2.210 218 70 e-42 0.000 000 11 e-42 kg
inverse of conductance quantum 12 906.403 7787 0.000 0088 ohm
Josephson constant 483 597.891 e9 0.012 e9 Hz V^-1
joule-atomic mass unit relationship 6.700 536 41 e9 0.000 000 33 e9 u
joule-electron volt relationship 6.241 509 65 e18 0.000 000 16 e18 eV
joule-hartree relationship 2.293 712 69 e17 0.000 000 11 e17 E_h
joule-hertz relationship 1.509 190 450 e33 0.000 000 075 e33 Hz
joule-inverse meter relationship 5.034 117 47 e24 0.000 000 25 e24 m^-1
joule-kelvin relationship 7.242 963 e22 0.000 013 e22 K
joule-kilogram relationship 1.112 650 056... e-17 (exact) kg
kelvin-atomic mass unit relationship 9.251 098 e-14 0.000 016 e-14 u
kelvin-electron volt relationship 8.617 343 e-5 0.000 015 e-5 eV
kelvin-hartree relationship 3.166 8153 e-6 0.000 0055 e-6 E_h
kelvin-hertz relationship 2.083 6644 e10 0.000 0036 e10 Hz
kelvin-inverse meter relationship 69.503 56 0.000 12 m^-1
kelvin-joule relationship 1.380 6504 e-23 0.000 0024 e-23 J
kelvin-kilogram relationship 1.536 1807 e-40 0.000 0027 e-40 kg
kilogram-atomic mass unit relationship 6.022 141 79 e26 0.000 000 30 e26 u
kilogram-electron volt relationship 5.609 589 12 e35 0.000 000 14 e35 eV
kilogram-hartree relationship 2.061 486 16 e34 0.000 000 10 e34 E_h
kilogram-hertz relationship 1.356 392 733 e50 0.000 000 068 e50 Hz
kilogram-inverse meter relationship 4.524 439 15 e41 0.000 000 23 e41 m^-1
kilogram-joule relationship 8.987 551 787... e16 (exact) J
kilogram-kelvin relationship 6.509 651 e39 0.000 011 e39 K
lattice parameter of silicon 543.102 064 e-12 0.000 014 e-12 m
Loschmidt constant (273.15 K, 101.325 kPa) 2.686 7774 e25 0.000 0047 e25 m^-3
mag. constant 12.566 370 614... e-7 (exact) N A^-2
mag. flux quantum 2.067 833 667 e-15 0.000 000 052 e-15 Wb
molar gas constant 8.314 472 0.000 015 J mol^-1 K^-1
molar mass constant 1 e-3 (exact) kg mol^-1
molar mass of carbon-12 12 e-3 (exact) kg mol^-1
molar Planck constant 3.990 312 6821 e-10 0.000 000 0057 e-10 J s mol^-1
molar Planck constant times c 0.119 626 564 72 0.000 000 000 17 J m mol^-1
molar volume of ideal gas (273.15 K, 100 kPa) 22.710 981 e-3 0.000 040 e-3 m^3 mol^-1
molar volume of ideal gas (273.15 K, 101.325 kPa) 22.413 996 e-3 0.000 039 e-3 m^3 mol^-1
molar volume of silicon 12.058 8349 e-6 0.000 0011 e-6 m^3 mol^-1
Mo x unit 1.002 099 55 e-13 0.000 000 53 e-13 m
muon Compton wavelength 11.734 441 04 e-15 0.000 000 30 e-15 m
muon Compton wavelength over 2 pi 1.867 594 295 e-15 0.000 000 047 e-15 m
muon-electron mass ratio 206.768 2823 0.000 0052
muon g factor -2.002 331 8414 0.000 000 0012
muon mag. mom. -4.490 447 86 e-26 0.000 000 16 e-26 J T^-1
muon mag. mom. anomaly 1.165 920 69 e-3 0.000 000 60 e-3
muon mag. mom. to Bohr magneton ratio -4.841 970 49 e-3 0.000 000 12 e-3
muon mag. mom. to nuclear magneton ratio -8.890 597 05 0.000 000 23
muon mass 1.883 531 30 e-28 0.000 000 11 e-28 kg
muon mass energy equivalent 1.692 833 510 e-11 0.000 000 095 e-11 J
muon mass energy equivalent in MeV 105.658 3668 0.000 0038 MeV
muon mass in u 0.113 428 9256 0.000 000 0029 u
muon molar mass 0.113 428 9256 e-3 0.000 000 0029 e-3 kg mol^-1
muon-neutron mass ratio 0.112 454 5167 0.000 000 0029
muon-proton mag. mom. ratio -3.183 345 137 0.000 000 085
muon-proton mass ratio 0.112 609 5261 0.000 000 0029
muon-tau mass ratio 5.945 92 e-2 0.000 97 e-2
natural unit of action 1.054 571 628 e-34 0.000 000 053 e-34 J s
natural unit of action in eV s 6.582 118 99 e-16 0.000 000 16 e-16 eV s
natural unit of energy 8.187 104 38 e-14 0.000 000 41 e-14 J
natural unit of energy in MeV 0.510 998 910 0.000 000 013 MeV
natural unit of length 386.159 264 59 e-15 0.000 000 53 e-15 m
natural unit of mass 9.109 382 15 e-31 0.000 000 45 e-31 kg
natural unit of momentum 2.730 924 06 e-22 0.000 000 14 e-22 kg m s^-1
natural unit of momentum in MeV/c 0.510 998 910 0.000 000 013 MeV/c
natural unit of time 1.288 088 6570 e-21 0.000 000 0018 e-21 s
natural unit of velocity 299 792 458 (exact) m s^-1
neutron Compton wavelength 1.319 590 8951 e-15 0.000 000 0020 e-15 m
neutron Compton wavelength over 2 pi 0.210 019 413 82 e-15 0.000 000 000 31 e-15 m
neutron-electron mag. mom. ratio 1.040 668 82 e-3 0.000 000 25 e-3
neutron-electron mass ratio 1838.683 6605 0.000 0011
neutron g factor -3.826 085 45 0.000 000 90
neutron gyromag. ratio 1.832 471 85 e8 0.000 000 43 e8 s^-1 T^-1
neutron gyromag. ratio over 2 pi 29.164 6954 0.000 0069 MHz T^-1
neutron mag. mom. -0.966 236 41 e-26 0.000 000 23 e-26 J T^-1
neutron mag. mom. to Bohr magneton ratio -1.041 875 63 e-3 0.000 000 25 e-3
neutron mag. mom. to nuclear magneton ratio -1.913 042 73 0.000 000 45
neutron mass 1.674 927 211 e-27 0.000 000 084 e-27 kg
neutron mass energy equivalent 1.505 349 505 e-10 0.000 000 075 e-10 J
neutron mass energy equivalent in MeV 939.565 346 0.000 023 MeV
neutron mass in u 1.008 664 915 97 0.000 000 000 43 u
neutron molar mass 1.008 664 915 97 e-3 0.000 000 000 43 e-3 kg mol^-1
neutron-muon mass ratio 8.892 484 09 0.000 000 23
neutron-proton mag. mom. ratio -0.684 979 34 0.000 000 16
neutron-proton mass ratio 1.001 378 419 18 0.000 000 000 46
neutron-tau mass ratio 0.528 740 0.000 086
neutron to shielded proton mag. mom. ratio -0.684 996 94 0.000 000 16
Newtonian constant of gravitation 6.674 28 e-11 0.000 67 e-11 m^3 kg^-1 s^-2
Newtonian constant of gravitation over h-bar c 6.708 81 e-39 0.000 67 e-39 (GeV/c^2)^-2
nuclear magneton 5.050 783 24 e-27 0.000 000 13 e-27 J T^-1
nuclear magneton in eV/T 3.152 451 2326 e-8 0.000 000 0045 e-8 eV T^-1
nuclear magneton in inverse meters per tesla 2.542 623 616 e-2 0.000 000 064 e-2 m^-1 T^-1
nuclear magneton in K/T 3.658 2637 e-4 0.000 0064 e-4 K T^-1
nuclear magneton in MHz/T 7.622 593 84 0.000 000 19 MHz T^-1
Planck constant 6.626 068 96 e-34 0.000 000 33 e-34 J s
Planck constant in eV s 4.135 667 33 e-15 0.000 000 10 e-15 eV s
Planck constant over 2 pi 1.054 571 628 e-34 0.000 000 053 e-34 J s
Planck constant over 2 pi in eV s 6.582 118 99 e-16 0.000 000 16 e-16 eV s
Planck constant over 2 pi times c in MeV fm 197.326 9631 0.000 0049 MeV fm
Planck length 1.616 252 e-35 0.000 081 e-35 m
Planck mass 2.176 44 e-8 0.000 11 e-8 kg
Planck mass energy equivalent in GeV 1.220 892 e19 0.000 061 e19 GeV
Planck temperature 1.416 785 e32 0.000 071 e32 K
Planck time 5.391 24 e-44 0.000 27 e-44 s
proton charge to mass quotient 9.578 833 92 e7 0.000 000 24 e7 C kg^-1
proton Compton wavelength 1.321 409 8446 e-15 0.000 000 0019 e-15 m
proton Compton wavelength over 2 pi 0.210 308 908 61 e-15 0.000 000 000 30 e-15 m
proton-electron mass ratio 1836.152 672 47 0.000 000 80
proton g factor 5.585 694 713 0.000 000 046
proton gyromag. ratio 2.675 222 099 e8 0.000 000 070 e8 s^-1 T^-1
proton gyromag. ratio over 2 pi 42.577 4821 0.000 0011 MHz T^-1
proton mag. mom. 1.410 606 662 e-26 0.000 000 037 e-26 J T^-1
proton mag. mom. to Bohr magneton ratio 1.521 032 209 e-3 0.000 000 012 e-3
proton mag. mom. to nuclear magneton ratio 2.792 847 356 0.000 000 023
proton mag. shielding correction 25.694 e-6 0.014 e-6
proton mass 1.672 621 637 e-27 0.000 000 083 e-27 kg
proton mass energy equivalent 1.503 277 359 e-10 0.000 000 075 e-10 J
proton mass energy equivalent in MeV 938.272 013 0.000 023 MeV
proton mass in u 1.007 276 466 77 0.000 000 000 10 u
proton molar mass 1.007 276 466 77 e-3 0.000 000 000 10 e-3 kg mol^-1
proton-muon mass ratio 8.880 243 39 0.000 000 23
proton-neutron mag. mom. ratio -1.459 898 06 0.000 000 34
proton-neutron mass ratio 0.998 623 478 24 0.000 000 000 46
proton rms charge radius 0.8768 e-15 0.0069 e-15 m
proton-tau mass ratio 0.528 012 0.000 086
quantum of circulation 3.636 947 5199 e-4 0.000 000 0050 e-4 m^2 s^-1
quantum of circulation times 2 7.273 895 040 e-4 0.000 000 010 e-4 m^2 s^-1
Rydberg constant 10 973 731.568 527 0.000 073 m^-1
Rydberg constant times c in Hz 3.289 841 960 361 e15 0.000 000 000 022 e15 Hz
Rydberg constant times hc in eV 13.605 691 93 0.000 000 34 eV
Rydberg constant times hc in J 2.179 871 97 e-18 0.000 000 11 e-18 J
Sackur-Tetrode constant (1 K, 100 kPa) -1.151 7047 0.000 0044
Sackur-Tetrode constant (1 K, 101.325 kPa) -1.164 8677 0.000 0044
second radiation constant 1.438 7752 e-2 0.000 0025 e-2 m K
shielded helion gyromag. ratio 2.037 894 730 e8 0.000 000 056 e8 s^-1 T^-1
shielded helion gyromag. ratio over 2 pi 32.434 101 98 0.000 000 90 MHz T^-1
shielded helion mag. mom. -1.074 552 982 e-26 0.000 000 030 e-26 J T^-1
shielded helion mag. mom. to Bohr magneton ratio -1.158 671 471 e-3 0.000 000 014 e-3
shielded helion mag. mom. to nuclear magneton ratio -2.127 497 718 0.000 000 025
shielded helion to proton mag. mom. ratio -0.761 766 558 0.000 000 011
shielded helion to shielded proton mag. mom. ratio -0.761 786 1313 0.000 000 0033
shielded proton gyromag. ratio 2.675 153 362 e8 0.000 000 073 e8 s^-1 T^-1
shielded proton gyromag. ratio over 2 pi 42.576 3881 0.000 0012 MHz T^-1
shielded proton mag. mom. 1.410 570 419 e-26 0.000 000 038 e-26 J T^-1
shielded proton mag. mom. to Bohr magneton ratio 1.520 993 128 e-3 0.000 000 017 e-3
shielded proton mag. mom. to nuclear magneton ratio 2.792 775 598 0.000 000 030
speed of light in vacuum 299 792 458 (exact) m s^-1
standard acceleration of gravity 9.806 65 (exact) m s^-2
standard atmosphere 101 325 (exact) Pa
Stefan-Boltzmann constant 5.670 400 e-8 0.000 040 e-8 W m^-2 K^-4
tau Compton wavelength 0.697 72 e-15 0.000 11 e-15 m
tau Compton wavelength over 2 pi 0.111 046 e-15 0.000 018 e-15 m
tau-electron mass ratio 3477.48 0.57
tau mass 3.167 77 e-27 0.000 52 e-27 kg
tau mass energy equivalent 2.847 05 e-10 0.000 46 e-10 J
tau mass energy equivalent in MeV 1776.99 0.29 MeV
tau mass in u 1.907 68 0.000 31 u
tau molar mass 1.907 68 e-3 0.000 31 e-3 kg mol^-1
tau-muon mass ratio 16.8183 0.0027
tau-neutron mass ratio 1.891 29 0.000 31
tau-proton mass ratio 1.893 90 0.000 31
Thomson cross section 0.665 245 8558 e-28 0.000 000 0027 e-28 m^2
triton-electron mag. mom. ratio -1.620 514 423 e-3 0.000 000 021 e-3
triton-electron mass ratio 5496.921 5269 0.000 0051
triton g factor 5.957 924 896 0.000 000 076
triton mag. mom. 1.504 609 361 e-26 0.000 000 042 e-26 J T^-1
triton mag. mom. to Bohr magneton ratio 1.622 393 657 e-3 0.000 000 021 e-3
triton mag. mom. to nuclear magneton ratio 2.978 962 448 0.000 000 038
triton mass 5.007 355 88 e-27 0.000 000 25 e-27 kg
triton mass energy equivalent 4.500 387 03 e-10 0.000 000 22 e-10 J
triton mass energy equivalent in MeV 2808.920 906 0.000 070 MeV
triton mass in u 3.015 500 7134 0.000 000 0025 u
triton molar mass 3.015 500 7134 e-3 0.000 000 0025 e-3 kg mol^-1
triton-neutron mag. mom. ratio -1.557 185 53 0.000 000 37
triton-proton mag. mom. ratio 1.066 639 908 0.000 000 010
triton-proton mass ratio 2.993 717 0309 0.000 000 0025
unified atomic mass unit 1.660 538 782 e-27 0.000 000 083 e-27 kg
von Klitzing constant 25 812.807 557 0.000 018 ohm
weak mixing angle 0.222 55 0.000 56
Wien frequency displacement law constant 5.878 933 e10 0.000 010 e10 Hz K^-1
Wien wavelength displacement law constant 2.897 7685 e-3 0.000 0051 e-3 m K"""
txt2010 = """\
{220} lattice spacing of silicon 192.015 5714 e-12 0.000 0032 e-12 m
alpha particle-electron mass ratio 7294.299 5361 0.000 0029
alpha particle mass 6.644 656 75 e-27 0.000 000 29 e-27 kg
alpha particle mass energy equivalent 5.971 919 67 e-10 0.000 000 26 e-10 J
alpha particle mass energy equivalent in MeV 3727.379 240 0.000 082 MeV
alpha particle mass in u 4.001 506 179 125 0.000 000 000 062 u
alpha particle molar mass 4.001 506 179 125 e-3 0.000 000 000 062 e-3 kg mol^-1
alpha particle-proton mass ratio 3.972 599 689 33 0.000 000 000 36
Angstrom star 1.000 014 95 e-10 0.000 000 90 e-10 m
atomic mass constant 1.660 538 921 e-27 0.000 000 073 e-27 kg
atomic mass constant energy equivalent 1.492 417 954 e-10 0.000 000 066 e-10 J
atomic mass constant energy equivalent in MeV 931.494 061 0.000 021 MeV
atomic mass unit-electron volt relationship 931.494 061 e6 0.000 021 e6 eV
atomic mass unit-hartree relationship 3.423 177 6845 e7 0.000 000 0024 e7 E_h
atomic mass unit-hertz relationship 2.252 342 7168 e23 0.000 000 0016 e23 Hz
atomic mass unit-inverse meter relationship 7.513 006 6042 e14 0.000 000 0053 e14 m^-1
atomic mass unit-joule relationship 1.492 417 954 e-10 0.000 000 066 e-10 J
atomic mass unit-kelvin relationship 1.080 954 08 e13 0.000 000 98 e13 K
atomic mass unit-kilogram relationship 1.660 538 921 e-27 0.000 000 073 e-27 kg
atomic unit of 1st hyperpolarizability 3.206 361 449 e-53 0.000 000 071 e-53 C^3 m^3 J^-2
atomic unit of 2nd hyperpolarizability 6.235 380 54 e-65 0.000 000 28 e-65 C^4 m^4 J^-3
atomic unit of action 1.054 571 726 e-34 0.000 000 047 e-34 J s
atomic unit of charge 1.602 176 565 e-19 0.000 000 035 e-19 C
atomic unit of charge density 1.081 202 338 e12 0.000 000 024 e12 C m^-3
atomic unit of current 6.623 617 95 e-3 0.000 000 15 e-3 A
atomic unit of electric dipole mom. 8.478 353 26 e-30 0.000 000 19 e-30 C m
atomic unit of electric field 5.142 206 52 e11 0.000 000 11 e11 V m^-1
atomic unit of electric field gradient 9.717 362 00 e21 0.000 000 21 e21 V m^-2
atomic unit of electric polarizability 1.648 777 2754 e-41 0.000 000 0016 e-41 C^2 m^2 J^-1
atomic unit of electric potential 27.211 385 05 0.000 000 60 V
atomic unit of electric quadrupole mom. 4.486 551 331 e-40 0.000 000 099 e-40 C m^2
atomic unit of energy 4.359 744 34 e-18 0.000 000 19 e-18 J
atomic unit of force 8.238 722 78 e-8 0.000 000 36 e-8 N
atomic unit of length 0.529 177 210 92 e-10 0.000 000 000 17 e-10 m
atomic unit of mag. dipole mom. 1.854 801 936 e-23 0.000 000 041 e-23 J T^-1
atomic unit of mag. flux density 2.350 517 464 e5 0.000 000 052 e5 T
atomic unit of magnetizability 7.891 036 607 e-29 0.000 000 013 e-29 J T^-2
atomic unit of mass 9.109 382 91 e-31 0.000 000 40 e-31 kg
atomic unit of mom.um 1.992 851 740 e-24 0.000 000 088 e-24 kg m s^-1
atomic unit of permittivity 1.112 650 056... e-10 (exact) F m^-1
atomic unit of time 2.418 884 326 502e-17 0.000 000 000 012e-17 s
atomic unit of velocity 2.187 691 263 79 e6 0.000 000 000 71 e6 m s^-1
Avogadro constant 6.022 141 29 e23 0.000 000 27 e23 mol^-1
Bohr magneton 927.400 968 e-26 0.000 020 e-26 J T^-1
Bohr magneton in eV/T 5.788 381 8066 e-5 0.000 000 0038 e-5 eV T^-1
Bohr magneton in Hz/T 13.996 245 55 e9 0.000 000 31 e9 Hz T^-1
Bohr magneton in inverse meters per tesla 46.686 4498 0.000 0010 m^-1 T^-1
Bohr magneton in K/T 0.671 713 88 0.000 000 61 K T^-1
Bohr radius 0.529 177 210 92 e-10 0.000 000 000 17 e-10 m
Boltzmann constant 1.380 6488 e-23 0.000 0013 e-23 J K^-1
Boltzmann constant in eV/K 8.617 3324 e-5 0.000 0078 e-5 eV K^-1
Boltzmann constant in Hz/K 2.083 6618 e10 0.000 0019 e10 Hz K^-1
Boltzmann constant in inverse meters per kelvin 69.503 476 0.000 063 m^-1 K^-1
characteristic impedance of vacuum 376.730 313 461... (exact) ohm
classical electron radius 2.817 940 3267 e-15 0.000 000 0027 e-15 m
Compton wavelength 2.426 310 2389 e-12 0.000 000 0016 e-12 m
Compton wavelength over 2 pi 386.159 268 00 e-15 0.000 000 25 e-15 m
conductance quantum 7.748 091 7346 e-5 0.000 000 0025 e-5 S
conventional value of Josephson constant 483 597.9 e9 (exact) Hz V^-1
conventional value of von Klitzing constant 25 812.807 (exact) ohm
Cu x unit 1.002 076 97 e-13 0.000 000 28 e-13 m
deuteron-electron mag. mom. ratio -4.664 345 537 e-4 0.000 000 039 e-4
deuteron-electron mass ratio 3670.482 9652 0.000 0015
deuteron g factor 0.857 438 2308 0.000 000 0072
deuteron mag. mom. 0.433 073 489 e-26 0.000 000 010 e-26 J T^-1
deuteron mag. mom. to Bohr magneton ratio 0.466 975 4556 e-3 0.000 000 0039 e-3
deuteron mag. mom. to nuclear magneton ratio 0.857 438 2308 0.000 000 0072
deuteron mass 3.343 583 48 e-27 0.000 000 15 e-27 kg
deuteron mass energy equivalent 3.005 062 97 e-10 0.000 000 13 e-10 J
deuteron mass energy equivalent in MeV 1875.612 859 0.000 041 MeV
deuteron mass in u 2.013 553 212 712 0.000 000 000 077 u
deuteron molar mass 2.013 553 212 712 e-3 0.000 000 000 077 e-3 kg mol^-1
deuteron-neutron mag. mom. ratio -0.448 206 52 0.000 000 11
deuteron-proton mag. mom. ratio 0.307 012 2070 0.000 000 0024
deuteron-proton mass ratio 1.999 007 500 97 0.000 000 000 18
deuteron rms charge radius 2.1424 e-15 0.0021 e-15 m
electric constant 8.854 187 817... e-12 (exact) F m^-1
electron charge to mass quotient -1.758 820 088 e11 0.000 000 039 e11 C kg^-1
electron-deuteron mag. mom. ratio -2143.923 498 0.000 018
electron-deuteron mass ratio 2.724 437 1095 e-4 0.000 000 0011 e-4
electron g factor -2.002 319 304 361 53 0.000 000 000 000 53
electron gyromag. ratio 1.760 859 708 e11 0.000 000 039 e11 s^-1 T^-1
electron gyromag. ratio over 2 pi 28 024.952 66 0.000 62 MHz T^-1
electron-helion mass ratio 1.819 543 0761 e-4 0.000 000 0017 e-4
electron mag. mom. -928.476 430 e-26 0.000 021 e-26 J T^-1
electron mag. mom. anomaly 1.159 652 180 76 e-3 0.000 000 000 27 e-3
electron mag. mom. to Bohr magneton ratio -1.001 159 652 180 76 0.000 000 000 000 27
electron mag. mom. to nuclear magneton ratio -1838.281 970 90 0.000 000 75
electron mass 9.109 382 91 e-31 0.000 000 40 e-31 kg
electron mass energy equivalent 8.187 105 06 e-14 0.000 000 36 e-14 J
electron mass energy equivalent in MeV 0.510 998 928 0.000 000 011 MeV
electron mass in u 5.485 799 0946 e-4 0.000 000 0022 e-4 u
electron molar mass 5.485 799 0946 e-7 0.000 000 0022 e-7 kg mol^-1
electron-muon mag. mom. ratio 206.766 9896 0.000 0052
electron-muon mass ratio 4.836 331 66 e-3 0.000 000 12 e-3
electron-neutron mag. mom. ratio 960.920 50 0.000 23
electron-neutron mass ratio 5.438 673 4461 e-4 0.000 000 0032 e-4
electron-proton mag. mom. ratio -658.210 6848 0.000 0054
electron-proton mass ratio 5.446 170 2178 e-4 0.000 000 0022 e-4
electron-tau mass ratio 2.875 92 e-4 0.000 26 e-4
electron to alpha particle mass ratio 1.370 933 555 78 e-4 0.000 000 000 55 e-4
electron to shielded helion mag. mom. ratio 864.058 257 0.000 010
electron to shielded proton mag. mom. ratio -658.227 5971 0.000 0072
electron-triton mass ratio 1.819 200 0653 e-4 0.000 000 0017 e-4
electron volt 1.602 176 565 e-19 0.000 000 035 e-19 J
electron volt-atomic mass unit relationship 1.073 544 150 e-9 0.000 000 024 e-9 u
electron volt-hartree relationship 3.674 932 379 e-2 0.000 000 081 e-2 E_h
electron volt-hertz relationship 2.417 989 348 e14 0.000 000 053 e14 Hz
electron volt-inverse meter relationship 8.065 544 29 e5 0.000 000 18 e5 m^-1
electron volt-joule relationship 1.602 176 565 e-19 0.000 000 035 e-19 J
electron volt-kelvin relationship 1.160 4519 e4 0.000 0011 e4 K
electron volt-kilogram relationship 1.782 661 845 e-36 0.000 000 039 e-36 kg
elementary charge 1.602 176 565 e-19 0.000 000 035 e-19 C
elementary charge over h 2.417 989 348 e14 0.000 000 053 e14 A J^-1
Faraday constant 96 485.3365 0.0021 C mol^-1
Faraday constant for conventional electric current 96 485.3321 0.0043 C_90 mol^-1
Fermi coupling constant 1.166 364 e-5 0.000 005 e-5 GeV^-2
fine-structure constant 7.297 352 5698 e-3 0.000 000 0024 e-3
first radiation constant 3.741 771 53 e-16 0.000 000 17 e-16 W m^2
first radiation constant for spectral radiance 1.191 042 869 e-16 0.000 000 053 e-16 W m^2 sr^-1
hartree-atomic mass unit relationship 2.921 262 3246 e-8 0.000 000 0021 e-8 u
hartree-electron volt relationship 27.211 385 05 0.000 000 60 eV
Hartree energy 4.359 744 34 e-18 0.000 000 19 e-18 J
Hartree energy in eV 27.211 385 05 0.000 000 60 eV
hartree-hertz relationship 6.579 683 920 729 e15 0.000 000 000 033 e15 Hz
hartree-inverse meter relationship 2.194 746 313 708 e7 0.000 000 000 011 e7 m^-1
hartree-joule relationship 4.359 744 34 e-18 0.000 000 19 e-18 J
hartree-kelvin relationship 3.157 7504 e5 0.000 0029 e5 K
hartree-kilogram relationship 4.850 869 79 e-35 0.000 000 21 e-35 kg
helion-electron mass ratio 5495.885 2754 0.000 0050
helion g factor -4.255 250 613 0.000 000 050
helion mag. mom. -1.074 617 486 e-26 0.000 000 027 e-26 J T^-1
helion mag. mom. to Bohr magneton ratio -1.158 740 958 e-3 0.000 000 014 e-3
helion mag. mom. to nuclear magneton ratio -2.127 625 306 0.000 000 025
helion mass 5.006 412 34 e-27 0.000 000 22 e-27 kg
helion mass energy equivalent 4.499 539 02 e-10 0.000 000 20 e-10 J
helion mass energy equivalent in MeV 2808.391 482 0.000 062 MeV
helion mass in u 3.014 932 2468 0.000 000 0025 u
helion molar mass 3.014 932 2468 e-3 0.000 000 0025 e-3 kg mol^-1
helion-proton mass ratio 2.993 152 6707 0.000 000 0025
hertz-atomic mass unit relationship 4.439 821 6689 e-24 0.000 000 0031 e-24 u
hertz-electron volt relationship 4.135 667 516 e-15 0.000 000 091 e-15 eV
hertz-hartree relationship 1.519 829 8460045e-16 0.000 000 0000076e-16 E_h
hertz-inverse meter relationship 3.335 640 951... e-9 (exact) m^-1
hertz-joule relationship 6.626 069 57 e-34 0.000 000 29 e-34 J
hertz-kelvin relationship 4.799 2434 e-11 0.000 0044 e-11 K
hertz-kilogram relationship 7.372 496 68 e-51 0.000 000 33 e-51 kg
inverse fine-structure constant 137.035 999 074 0.000 000 044
inverse meter-atomic mass unit relationship 1.331 025 051 20 e-15 0.000 000 000 94 e-15 u
inverse meter-electron volt relationship 1.239 841 930 e-6 0.000 000 027 e-6 eV
inverse meter-hartree relationship 4.556 335 252 755 e-8 0.000 000 000 023 e-8 E_h
inverse meter-hertz relationship 299 792 458 (exact) Hz
inverse meter-joule relationship 1.986 445 684 e-25 0.000 000 088 e-25 J
inverse meter-kelvin relationship 1.438 7770 e-2 0.000 0013 e-2 K
inverse meter-kilogram relationship 2.210 218 902 e-42 0.000 000 098 e-42 kg
inverse of conductance quantum 12 906.403 7217 0.000 0042 ohm
Josephson constant 483 597.870 e9 0.011 e9 Hz V^-1
joule-atomic mass unit relationship 6.700 535 85 e9 0.000 000 30 e9 u
joule-electron volt relationship 6.241 509 34 e18 0.000 000 14 e18 eV
joule-hartree relationship 2.293 712 48 e17 0.000 000 10 e17 E_h
joule-hertz relationship 1.509 190 311 e33 0.000 000 067 e33 Hz
joule-inverse meter relationship 5.034 117 01 e24 0.000 000 22 e24 m^-1
joule-kelvin relationship 7.242 9716 e22 0.000 0066 e22 K
joule-kilogram relationship 1.112 650 056... e-17 (exact) kg
kelvin-atomic mass unit relationship 9.251 0868 e-14 0.000 0084 e-14 u
kelvin-electron volt relationship 8.617 3324 e-5 0.000 0078 e-5 eV
kelvin-hartree relationship 3.166 8114 e-6 0.000 0029 e-6 E_h
kelvin-hertz relationship 2.083 6618 e10 0.000 0019 e10 Hz
kelvin-inverse meter relationship 69.503 476 0.000 063 m^-1
kelvin-joule relationship 1.380 6488 e-23 0.000 0013 e-23 J
kelvin-kilogram relationship 1.536 1790 e-40 0.000 0014 e-40 kg
kilogram-atomic mass unit relationship 6.022 141 29 e26 0.000 000 27 e26 u
kilogram-electron volt relationship 5.609 588 85 e35 0.000 000 12 e35 eV
kilogram-hartree relationship 2.061 485 968 e34 0.000 000 091 e34 E_h
kilogram-hertz relationship 1.356 392 608 e50 0.000 000 060 e50 Hz
kilogram-inverse meter relationship 4.524 438 73 e41 0.000 000 20 e41 m^-1
kilogram-joule relationship 8.987 551 787... e16 (exact) J
kilogram-kelvin relationship 6.509 6582 e39 0.000 0059 e39 K
lattice parameter of silicon 543.102 0504 e-12 0.000 0089 e-12 m
Loschmidt constant (273.15 K, 100 kPa) 2.651 6462 e25 0.000 0024 e25 m^-3
Loschmidt constant (273.15 K, 101.325 kPa) 2.686 7805 e25 0.000 0024 e25 m^-3
mag. constant 12.566 370 614... e-7 (exact) N A^-2
mag. flux quantum 2.067 833 758 e-15 0.000 000 046 e-15 Wb
molar gas constant 8.314 4621 0.000 0075 J mol^-1 K^-1
molar mass constant 1 e-3 (exact) kg mol^-1
molar mass of carbon-12 12 e-3 (exact) kg mol^-1
molar Planck constant 3.990 312 7176 e-10 0.000 000 0028 e-10 J s mol^-1
molar Planck constant times c 0.119 626 565 779 0.000 000 000 084 J m mol^-1
molar volume of ideal gas (273.15 K, 100 kPa) 22.710 953 e-3 0.000 021 e-3 m^3 mol^-1
molar volume of ideal gas (273.15 K, 101.325 kPa) 22.413 968 e-3 0.000 020 e-3 m^3 mol^-1
molar volume of silicon 12.058 833 01 e-6 0.000 000 80 e-6 m^3 mol^-1
Mo x unit 1.002 099 52 e-13 0.000 000 53 e-13 m
muon Compton wavelength 11.734 441 03 e-15 0.000 000 30 e-15 m
muon Compton wavelength over 2 pi 1.867 594 294 e-15 0.000 000 047 e-15 m
muon-electron mass ratio 206.768 2843 0.000 0052
muon g factor -2.002 331 8418 0.000 000 0013
muon mag. mom. -4.490 448 07 e-26 0.000 000 15 e-26 J T^-1
muon mag. mom. anomaly 1.165 920 91 e-3 0.000 000 63 e-3
muon mag. mom. to Bohr magneton ratio -4.841 970 44 e-3 0.000 000 12 e-3
muon mag. mom. to nuclear magneton ratio -8.890 596 97 0.000 000 22
muon mass 1.883 531 475 e-28 0.000 000 096 e-28 kg
muon mass energy equivalent 1.692 833 667 e-11 0.000 000 086 e-11 J
muon mass energy equivalent in MeV 105.658 3715 0.000 0035 MeV
muon mass in u 0.113 428 9267 0.000 000 0029 u
muon molar mass 0.113 428 9267 e-3 0.000 000 0029 e-3 kg mol^-1
muon-neutron mass ratio 0.112 454 5177 0.000 000 0028
muon-proton mag. mom. ratio -3.183 345 107 0.000 000 084
muon-proton mass ratio 0.112 609 5272 0.000 000 0028
muon-tau mass ratio 5.946 49 e-2 0.000 54 e-2
natural unit of action 1.054 571 726 e-34 0.000 000 047 e-34 J s
natural unit of action in eV s 6.582 119 28 e-16 0.000 000 15 e-16 eV s
natural unit of energy 8.187 105 06 e-14 0.000 000 36 e-14 J
natural unit of energy in MeV 0.510 998 928 0.000 000 011 MeV
natural unit of length 386.159 268 00 e-15 0.000 000 25 e-15 m
natural unit of mass 9.109 382 91 e-31 0.000 000 40 e-31 kg
natural unit of mom.um 2.730 924 29 e-22 0.000 000 12 e-22 kg m s^-1
natural unit of mom.um in MeV/c 0.510 998 928 0.000 000 011 MeV/c
natural unit of time 1.288 088 668 33 e-21 0.000 000 000 83 e-21 s
natural unit of velocity 299 792 458 (exact) m s^-1
neutron Compton wavelength 1.319 590 9068 e-15 0.000 000 0011 e-15 m
neutron Compton wavelength over 2 pi 0.210 019 415 68 e-15 0.000 000 000 17 e-15 m
neutron-electron mag. mom. ratio 1.040 668 82 e-3 0.000 000 25 e-3
neutron-electron mass ratio 1838.683 6605 0.000 0011
neutron g factor -3.826 085 45 0.000 000 90
neutron gyromag. ratio 1.832 471 79 e8 0.000 000 43 e8 s^-1 T^-1
neutron gyromag. ratio over 2 pi 29.164 6943 0.000 0069 MHz T^-1
neutron mag. mom. -0.966 236 47 e-26 0.000 000 23 e-26 J T^-1
neutron mag. mom. to Bohr magneton ratio -1.041 875 63 e-3 0.000 000 25 e-3
neutron mag. mom. to nuclear magneton ratio -1.913 042 72 0.000 000 45
neutron mass 1.674 927 351 e-27 0.000 000 074 e-27 kg
neutron mass energy equivalent 1.505 349 631 e-10 0.000 000 066 e-10 J
neutron mass energy equivalent in MeV 939.565 379 0.000 021 MeV
neutron mass in u 1.008 664 916 00 0.000 000 000 43 u
neutron molar mass 1.008 664 916 00 e-3 0.000 000 000 43 e-3 kg mol^-1
neutron-muon mass ratio 8.892 484 00 0.000 000 22
neutron-proton mag. mom. ratio -0.684 979 34 0.000 000 16
neutron-proton mass difference 2.305 573 92 e-30 0.000 000 76 e-30
neutron-proton mass difference energy equivalent 2.072 146 50 e-13 0.000 000 68 e-13
neutron-proton mass difference energy equivalent in MeV 1.293 332 17 0.000 000 42
neutron-proton mass difference in u 0.001 388 449 19 0.000 000 000 45
neutron-proton mass ratio 1.001 378 419 17 0.000 000 000 45
neutron-tau mass ratio 0.528 790 0.000 048
neutron to shielded proton mag. mom. ratio -0.684 996 94 0.000 000 16
Newtonian constant of gravitation 6.673 84 e-11 0.000 80 e-11 m^3 kg^-1 s^-2
Newtonian constant of gravitation over h-bar c 6.708 37 e-39 0.000 80 e-39 (GeV/c^2)^-2
nuclear magneton 5.050 783 53 e-27 0.000 000 11 e-27 J T^-1
nuclear magneton in eV/T 3.152 451 2605 e-8 0.000 000 0022 e-8 eV T^-1
nuclear magneton in inverse meters per tesla 2.542 623 527 e-2 0.000 000 056 e-2 m^-1 T^-1
nuclear magneton in K/T 3.658 2682 e-4 0.000 0033 e-4 K T^-1
nuclear magneton in MHz/T 7.622 593 57 0.000 000 17 MHz T^-1
Planck constant 6.626 069 57 e-34 0.000 000 29 e-34 J s
Planck constant in eV s 4.135 667 516 e-15 0.000 000 091 e-15 eV s
Planck constant over 2 pi 1.054 571 726 e-34 0.000 000 047 e-34 J s
Planck constant over 2 pi in eV s 6.582 119 28 e-16 0.000 000 15 e-16 eV s
Planck constant over 2 pi times c in MeV fm 197.326 9718 0.000 0044 MeV fm
Planck length 1.616 199 e-35 0.000 097 e-35 m
Planck mass 2.176 51 e-8 0.000 13 e-8 kg
Planck mass energy equivalent in GeV 1.220 932 e19 0.000 073 e19 GeV
Planck temperature 1.416 833 e32 0.000 085 e32 K
Planck time 5.391 06 e-44 0.000 32 e-44 s
proton charge to mass quotient 9.578 833 58 e7 0.000 000 21 e7 C kg^-1
proton Compton wavelength 1.321 409 856 23 e-15 0.000 000 000 94 e-15 m
proton Compton wavelength over 2 pi 0.210 308 910 47 e-15 0.000 000 000 15 e-15 m
proton-electron mass ratio 1836.152 672 45 0.000 000 75
proton g factor 5.585 694 713 0.000 000 046
proton gyromag. ratio 2.675 222 005 e8 0.000 000 063 e8 s^-1 T^-1
proton gyromag. ratio over 2 pi 42.577 4806 0.000 0010 MHz T^-1
proton mag. mom. 1.410 606 743 e-26 0.000 000 033 e-26 J T^-1
proton mag. mom. to Bohr magneton ratio 1.521 032 210 e-3 0.000 000 012 e-3
proton mag. mom. to nuclear magneton ratio 2.792 847 356 0.000 000 023
proton mag. shielding correction 25.694 e-6 0.014 e-6
proton mass 1.672 621 777 e-27 0.000 000 074 e-27 kg
proton mass energy equivalent 1.503 277 484 e-10 0.000 000 066 e-10 J
proton mass energy equivalent in MeV 938.272 046 0.000 021 MeV
proton mass in u 1.007 276 466 812 0.000 000 000 090 u
proton molar mass 1.007 276 466 812 e-3 0.000 000 000 090 e-3 kg mol^-1
proton-muon mass ratio 8.880 243 31 0.000 000 22
proton-neutron mag. mom. ratio -1.459 898 06 0.000 000 34
proton-neutron mass ratio 0.998 623 478 26 0.000 000 000 45
proton rms charge radius 0.8775 e-15 0.0051 e-15 m
proton-tau mass ratio 0.528 063 0.000 048
quantum of circulation 3.636 947 5520 e-4 0.000 000 0024 e-4 m^2 s^-1
quantum of circulation times 2 7.273 895 1040 e-4 0.000 000 0047 e-4 m^2 s^-1
Rydberg constant 10 973 731.568 539 0.000 055 m^-1
Rydberg constant times c in Hz 3.289 841 960 364 e15 0.000 000 000 017 e15 Hz
Rydberg constant times hc in eV 13.605 692 53 0.000 000 30 eV
Rydberg constant times hc in J 2.179 872 171 e-18 0.000 000 096 e-18 J
Sackur-Tetrode constant (1 K, 100 kPa) -1.151 7078 0.000 0023
Sackur-Tetrode constant (1 K, 101.325 kPa) -1.164 8708 0.000 0023
second radiation constant 1.438 7770 e-2 0.000 0013 e-2 m K
shielded helion gyromag. ratio 2.037 894 659 e8 0.000 000 051 e8 s^-1 T^-1
shielded helion gyromag. ratio over 2 pi 32.434 100 84 0.000 000 81 MHz T^-1
shielded helion mag. mom. -1.074 553 044 e-26 0.000 000 027 e-26 J T^-1
shielded helion mag. mom. to Bohr magneton ratio -1.158 671 471 e-3 0.000 000 014 e-3
shielded helion mag. mom. to nuclear magneton ratio -2.127 497 718 0.000 000 025
shielded helion to proton mag. mom. ratio -0.761 766 558 0.000 000 011
shielded helion to shielded proton mag. mom. ratio -0.761 786 1313 0.000 000 0033
shielded proton gyromag. ratio 2.675 153 268 e8 0.000 000 066 e8 s^-1 T^-1
shielded proton gyromag. ratio over 2 pi 42.576 3866 0.000 0010 MHz T^-1
shielded proton mag. mom. 1.410 570 499 e-26 0.000 000 035 e-26 J T^-1
shielded proton mag. mom. to Bohr magneton ratio 1.520 993 128 e-3 0.000 000 017 e-3
shielded proton mag. mom. to nuclear magneton ratio 2.792 775 598 0.000 000 030
speed of light in vacuum 299 792 458 (exact) m s^-1
standard acceleration of gravity 9.806 65 (exact) m s^-2
standard atmosphere 101 325 (exact) Pa
standard-state pressure 100 000 (exact) Pa
Stefan-Boltzmann constant 5.670 373 e-8 0.000 021 e-8 W m^-2 K^-4
tau Compton wavelength 0.697 787 e-15 0.000 063 e-15 m
tau Compton wavelength over 2 pi 0.111 056 e-15 0.000 010 e-15 m
tau-electron mass ratio 3477.15 0.31
tau mass 3.167 47 e-27 0.000 29 e-27 kg
tau mass energy equivalent 2.846 78 e-10 0.000 26 e-10 J
tau mass energy equivalent in MeV 1776.82 0.16 MeV
tau mass in u 1.907 49 0.000 17 u
tau molar mass 1.907 49 e-3 0.000 17 e-3 kg mol^-1
tau-muon mass ratio 16.8167 0.0015
tau-neutron mass ratio 1.891 11 0.000 17
tau-proton mass ratio 1.893 72 0.000 17
Thomson cross section 0.665 245 8734 e-28 0.000 000 0013 e-28 m^2
triton-electron mass ratio 5496.921 5267 0.000 0050
triton g factor 5.957 924 896 0.000 000 076
triton mag. mom. 1.504 609 447 e-26 0.000 000 038 e-26 J T^-1
triton mag. mom. to Bohr magneton ratio 1.622 393 657 e-3 0.000 000 021 e-3
triton mag. mom. to nuclear magneton ratio 2.978 962 448 0.000 000 038
triton mass 5.007 356 30 e-27 0.000 000 22 e-27 kg
triton mass energy equivalent 4.500 387 41 e-10 0.000 000 20 e-10 J
triton mass energy equivalent in MeV 2808.921 005 0.000 062 MeV
triton mass in u 3.015 500 7134 0.000 000 0025 u
triton molar mass 3.015 500 7134 e-3 0.000 000 0025 e-3 kg mol^-1
triton-proton mass ratio 2.993 717 0308 0.000 000 0025
unified atomic mass unit 1.660 538 921 e-27 0.000 000 073 e-27 kg
von Klitzing constant 25 812.807 4434 0.000 0084 ohm
weak mixing angle 0.2223 0.0021
Wien frequency displacement law constant 5.878 9254 e10 0.000 0053 e10 Hz K^-1
Wien wavelength displacement law constant 2.897 7721 e-3 0.000 0026 e-3 m K"""
txt2014 = """\
{220} lattice spacing of silicon 192.015 5714 e-12 0.000 0032 e-12 m
alpha particle-electron mass ratio 7294.299 541 36 0.000 000 24
alpha particle mass 6.644 657 230 e-27 0.000 000 082 e-27 kg
alpha particle mass energy equivalent 5.971 920 097 e-10 0.000 000 073 e-10 J
alpha particle mass energy equivalent in MeV 3727.379 378 0.000 023 MeV
alpha particle mass in u 4.001 506 179 127 0.000 000 000 063 u
alpha particle molar mass 4.001 506 179 127 e-3 0.000 000 000 063 e-3 kg mol^-1
alpha particle-proton mass ratio 3.972 599 689 07 0.000 000 000 36
Angstrom star 1.000 014 95 e-10 0.000 000 90 e-10 m
atomic mass constant 1.660 539 040 e-27 0.000 000 020 e-27 kg
atomic mass constant energy equivalent 1.492 418 062 e-10 0.000 000 018 e-10 J
atomic mass constant energy equivalent in MeV 931.494 0954 0.000 0057 MeV
atomic mass unit-electron volt relationship 931.494 0954 e6 0.000 0057 e6 eV
atomic mass unit-hartree relationship 3.423 177 6902 e7 0.000 000 0016 e7 E_h
atomic mass unit-hertz relationship 2.252 342 7206 e23 0.000 000 0010 e23 Hz
atomic mass unit-inverse meter relationship 7.513 006 6166 e14 0.000 000 0034 e14 m^-1
atomic mass unit-joule relationship 1.492 418 062 e-10 0.000 000 018 e-10 J
atomic mass unit-kelvin relationship 1.080 954 38 e13 0.000 000 62 e13 K
atomic mass unit-kilogram relationship 1.660 539 040 e-27 0.000 000 020 e-27 kg
atomic unit of 1st hyperpolarizability 3.206 361 329 e-53 0.000 000 020 e-53 C^3 m^3 J^-2
atomic unit of 2nd hyperpolarizability 6.235 380 085 e-65 0.000 000 077 e-65 C^4 m^4 J^-3
atomic unit of action 1.054 571 800 e-34 0.000 000 013 e-34 J s
atomic unit of charge 1.602 176 6208 e-19 0.000 000 0098 e-19 C
atomic unit of charge density 1.081 202 3770 e12 0.000 000 0067 e12 C m^-3
atomic unit of current 6.623 618 183 e-3 0.000 000 041 e-3 A
atomic unit of electric dipole mom. 8.478 353 552 e-30 0.000 000 052 e-30 C m
atomic unit of electric field 5.142 206 707 e11 0.000 000 032 e11 V m^-1
atomic unit of electric field gradient 9.717 362 356 e21 0.000 000 060 e21 V m^-2
atomic unit of electric polarizability 1.648 777 2731 e-41 0.000 000 0011 e-41 C^2 m^2 J^-1
atomic unit of electric potential 27.211 386 02 0.000 000 17 V
atomic unit of electric quadrupole mom. 4.486 551 484 e-40 0.000 000 028 e-40 C m^2
atomic unit of energy 4.359 744 650 e-18 0.000 000 054 e-18 J
atomic unit of force 8.238 723 36 e-8 0.000 000 10 e-8 N
atomic unit of length 0.529 177 210 67 e-10 0.000 000 000 12 e-10 m
atomic unit of mag. dipole mom. 1.854 801 999 e-23 0.000 000 011 e-23 J T^-1
atomic unit of mag. flux density 2.350 517 550 e5 0.000 000 014 e5 T
atomic unit of magnetizability 7.891 036 5886 e-29 0.000 000 0090 e-29 J T^-2
atomic unit of mass 9.109 383 56 e-31 0.000 000 11 e-31 kg
atomic unit of mom.um 1.992 851 882 e-24 0.000 000 024 e-24 kg m s^-1
atomic unit of permittivity 1.112 650 056... e-10 (exact) F m^-1
atomic unit of time 2.418 884 326509e-17 0.000 000 000014e-17 s
atomic unit of velocity 2.187 691 262 77 e6 0.000 000 000 50 e6 m s^-1
Avogadro constant 6.022 140 857 e23 0.000 000 074 e23 mol^-1
Bohr magneton 927.400 9994 e-26 0.000 0057 e-26 J T^-1
Bohr magneton in eV/T 5.788 381 8012 e-5 0.000 000 0026 e-5 eV T^-1
Bohr magneton in Hz/T 13.996 245 042 e9 0.000 000 086 e9 Hz T^-1
Bohr magneton in inverse meters per tesla 46.686 448 14 0.000 000 29 m^-1 T^-1
Bohr magneton in K/T 0.671 714 05 0.000 000 39 K T^-1
Bohr radius 0.529 177 210 67 e-10 0.000 000 000 12 e-10 m
Boltzmann constant 1.380 648 52 e-23 0.000 000 79 e-23 J K^-1
Boltzmann constant in eV/K 8.617 3303 e-5 0.000 0050 e-5 eV K^-1
Boltzmann constant in Hz/K 2.083 6612 e10 0.000 0012 e10 Hz K^-1
Boltzmann constant in inverse meters per kelvin 69.503 457 0.000 040 m^-1 K^-1
characteristic impedance of vacuum 376.730 313 461... (exact) ohm
classical electron radius 2.817 940 3227 e-15 0.000 000 0019 e-15 m
Compton wavelength 2.426 310 2367 e-12 0.000 000 0011 e-12 m
Compton wavelength over 2 pi 386.159 267 64 e-15 0.000 000 18 e-15 m
conductance quantum 7.748 091 7310 e-5 0.000 000 0018 e-5 S
conventional value of Josephson constant 483 597.9 e9 (exact) Hz V^-1
conventional value of von Klitzing constant 25 812.807 (exact) ohm
Cu x unit 1.002 076 97 e-13 0.000 000 28 e-13 m
deuteron-electron mag. mom. ratio -4.664 345 535 e-4 0.000 000 026 e-4
deuteron-electron mass ratio 3670.482 967 85 0.000 000 13
deuteron g factor 0.857 438 2311 0.000 000 0048
deuteron mag. mom. 0.433 073 5040 e-26 0.000 000 0036 e-26 J T^-1
deuteron mag. mom. to Bohr magneton ratio 0.466 975 4554 e-3 0.000 000 0026 e-3
deuteron mag. mom. to nuclear magneton ratio 0.857 438 2311 0.000 000 0048
deuteron mass 3.343 583 719 e-27 0.000 000 041 e-27 kg
deuteron mass energy equivalent 3.005 063 183 e-10 0.000 000 037 e-10 J
deuteron mass energy equivalent in MeV 1875.612 928 0.000 012 MeV
deuteron mass in u 2.013 553 212 745 0.000 000 000 040 u
deuteron molar mass 2.013 553 212 745 e-3 0.000 000 000 040 e-3 kg mol^-1
deuteron-neutron mag. mom. ratio -0.448 206 52 0.000 000 11
deuteron-proton mag. mom. ratio 0.307 012 2077 0.000 000 0015
deuteron-proton mass ratio 1.999 007 500 87 0.000 000 000 19
deuteron rms charge radius 2.1413 e-15 0.0025 e-15 m
electric constant 8.854 187 817... e-12 (exact) F m^-1
electron charge to mass quotient -1.758 820 024 e11 0.000 000 011 e11 C kg^-1
electron-deuteron mag. mom. ratio -2143.923 499 0.000 012
electron-deuteron mass ratio 2.724 437 107 484 e-4 0.000 000 000 096 e-4
electron g factor -2.002 319 304 361 82 0.000 000 000 000 52
electron gyromag. ratio 1.760 859 644 e11 0.000 000 011 e11 s^-1 T^-1
electron gyromag. ratio over 2 pi 28 024.951 64 0.000 17 MHz T^-1
electron-helion mass ratio 1.819 543 074 854 e-4 0.000 000 000 088 e-4
electron mag. mom. -928.476 4620 e-26 0.000 0057 e-26 J T^-1
electron mag. mom. anomaly 1.159 652 180 91 e-3 0.000 000 000 26 e-3
electron mag. mom. to Bohr magneton ratio -1.001 159 652 180 91 0.000 000 000 000 26
electron mag. mom. to nuclear magneton ratio -1838.281 972 34 0.000 000 17
electron mass 9.109 383 56 e-31 0.000 000 11 e-31 kg
electron mass energy equivalent 8.187 105 65 e-14 0.000 000 10 e-14 J
electron mass energy equivalent in MeV 0.510 998 9461 0.000 000 0031 MeV
electron mass in u 5.485 799 090 70 e-4 0.000 000 000 16 e-4 u
electron molar mass 5.485 799 090 70 e-7 0.000 000 000 16 e-7 kg mol^-1
electron-muon mag. mom. ratio 206.766 9880 0.000 0046
electron-muon mass ratio 4.836 331 70 e-3 0.000 000 11 e-3
electron-neutron mag. mom. ratio 960.920 50 0.000 23
electron-neutron mass ratio 5.438 673 4428 e-4 0.000 000 0027 e-4
electron-proton mag. mom. ratio -658.210 6866 0.000 0020
electron-proton mass ratio 5.446 170 213 52 e-4 0.000 000 000 52 e-4
electron-tau mass ratio 2.875 92 e-4 0.000 26 e-4
electron to alpha particle mass ratio 1.370 933 554 798 e-4 0.000 000 000 045 e-4
electron to shielded helion mag. mom. ratio 864.058 257 0.000 010
electron to shielded proton mag. mom. ratio -658.227 5971 0.000 0072
electron-triton mass ratio 1.819 200 062 203 e-4 0.000 000 000 084 e-4
electron volt 1.602 176 6208 e-19 0.000 000 0098 e-19 J
electron volt-atomic mass unit relationship 1.073 544 1105 e-9 0.000 000 0066 e-9 u
electron volt-hartree relationship 3.674 932 248 e-2 0.000 000 023 e-2 E_h
electron volt-hertz relationship 2.417 989 262 e14 0.000 000 015 e14 Hz
electron volt-inverse meter relationship 8.065 544 005 e5 0.000 000 050 e5 m^-1
electron volt-joule relationship 1.602 176 6208 e-19 0.000 000 0098 e-19 J
electron volt-kelvin relationship 1.160 452 21 e4 0.000 000 67 e4 K
electron volt-kilogram relationship 1.782 661 907 e-36 0.000 000 011 e-36 kg
elementary charge 1.602 176 6208 e-19 0.000 000 0098 e-19 C
elementary charge over h 2.417 989 262 e14 0.000 000 015 e14 A J^-1
Faraday constant 96 485.332 89 0.000 59 C mol^-1
Faraday constant for conventional electric current 96 485.3251 0.0012 C_90 mol^-1
Fermi coupling constant 1.166 3787 e-5 0.000 0006 e-5 GeV^-2
fine-structure constant 7.297 352 5664 e-3 0.000 000 0017 e-3
first radiation constant 3.741 771 790 e-16 0.000 000 046 e-16 W m^2
first radiation constant for spectral radiance 1.191 042 953 e-16 0.000 000 015 e-16 W m^2 sr^-1
hartree-atomic mass unit relationship 2.921 262 3197 e-8 0.000 000 0013 e-8 u
hartree-electron volt relationship 27.211 386 02 0.000 000 17 eV
Hartree energy 4.359 744 650 e-18 0.000 000 054 e-18 J
Hartree energy in eV 27.211 386 02 0.000 000 17 eV
hartree-hertz relationship 6.579 683 920 711 e15 0.000 000 000 039 e15 Hz
hartree-inverse meter relationship 2.194 746 313 702 e7 0.000 000 000 013 e7 m^-1
hartree-joule relationship 4.359 744 650 e-18 0.000 000 054 e-18 J
hartree-kelvin relationship 3.157 7513 e5 0.000 0018 e5 K
hartree-kilogram relationship 4.850 870 129 e-35 0.000 000 060 e-35 kg
helion-electron mass ratio 5495.885 279 22 0.000 000 27
helion g factor -4.255 250 616 0.000 000 050
helion mag. mom. -1.074 617 522 e-26 0.000 000 014 e-26 J T^-1
helion mag. mom. to Bohr magneton ratio -1.158 740 958 e-3 0.000 000 014 e-3
helion mag. mom. to nuclear magneton ratio -2.127 625 308 0.000 000 025
helion mass 5.006 412 700 e-27 0.000 000 062 e-27 kg
helion mass energy equivalent 4.499 539 341 e-10 0.000 000 055 e-10 J
helion mass energy equivalent in MeV 2808.391 586 0.000 017 MeV
helion mass in u 3.014 932 246 73 0.000 000 000 12 u
helion molar mass 3.014 932 246 73 e-3 0.000 000 000 12 e-3 kg mol^-1
helion-proton mass ratio 2.993 152 670 46 0.000 000 000 29
hertz-atomic mass unit relationship 4.439 821 6616 e-24 0.000 000 0020 e-24 u
hertz-electron volt relationship 4.135 667 662 e-15 0.000 000 025 e-15 eV
hertz-hartree relationship 1.5198298460088 e-16 0.0000000000090e-16 E_h
hertz-inverse meter relationship 3.335 640 951... e-9 (exact) m^-1
hertz-joule relationship 6.626 070 040 e-34 0.000 000 081 e-34 J
hertz-kelvin relationship 4.799 2447 e-11 0.000 0028 e-11 K
hertz-kilogram relationship 7.372 497 201 e-51 0.000 000 091 e-51 kg
inverse fine-structure constant 137.035 999 139 0.000 000 031
inverse meter-atomic mass unit relationship 1.331 025 049 00 e-15 0.000 000 000 61 e-15 u
inverse meter-electron volt relationship 1.239 841 9739 e-6 0.000 000 0076 e-6 eV
inverse meter-hartree relationship 4.556 335 252 767 e-8 0.000 000 000 027 e-8 E_h
inverse meter-hertz relationship 299 792 458 (exact) Hz
inverse meter-joule relationship 1.986 445 824 e-25 0.000 000 024 e-25 J
inverse meter-kelvin relationship 1.438 777 36 e-2 0.000 000 83 e-2 K
inverse meter-kilogram relationship 2.210 219 057 e-42 0.000 000 027 e-42 kg
inverse of conductance quantum 12 906.403 7278 0.000 0029 ohm
Josephson constant 483 597.8525 e9 0.0030 e9 Hz V^-1
joule-atomic mass unit relationship 6.700 535 363 e9 0.000 000 082 e9 u
joule-electron volt relationship 6.241 509 126 e18 0.000 000 038 e18 eV
joule-hartree relationship 2.293 712 317 e17 0.000 000 028 e17 E_h
joule-hertz relationship 1.509 190 205 e33 0.000 000 019 e33 Hz
joule-inverse meter relationship 5.034 116 651 e24 0.000 000 062 e24 m^-1
joule-kelvin relationship 7.242 9731 e22 0.000 0042 e22 K
joule-kilogram relationship 1.112 650 056... e-17 (exact) kg
kelvin-atomic mass unit relationship 9.251 0842 e-14 0.000 0053 e-14 u
kelvin-electron volt relationship 8.617 3303 e-5 0.000 0050 e-5 eV
kelvin-hartree relationship 3.166 8105 e-6 0.000 0018 e-6 E_h
kelvin-hertz relationship 2.083 6612 e10 0.000 0012 e10 Hz
kelvin-inverse meter relationship 69.503 457 0.000 040 m^-1
kelvin-joule relationship 1.380 648 52 e-23 0.000 000 79 e-23 J
kelvin-kilogram relationship 1.536 178 65 e-40 0.000 000 88 e-40 kg
kilogram-atomic mass unit relationship 6.022 140 857 e26 0.000 000 074 e26 u
kilogram-electron volt relationship 5.609 588 650 e35 0.000 000 034 e35 eV
kilogram-hartree relationship 2.061 485 823 e34 0.000 000 025 e34 E_h
kilogram-hertz relationship 1.356 392 512 e50 0.000 000 017 e50 Hz
kilogram-inverse meter relationship 4.524 438 411 e41 0.000 000 056 e41 m^-1
kilogram-joule relationship 8.987 551 787... e16 (exact) J
kilogram-kelvin relationship 6.509 6595 e39 0.000 0037 e39 K
lattice parameter of silicon 543.102 0504 e-12 0.000 0089 e-12 m
Loschmidt constant (273.15 K, 100 kPa) 2.651 6467 e25 0.000 0015 e25 m^-3
Loschmidt constant (273.15 K, 101.325 kPa) 2.686 7811 e25 0.000 0015 e25 m^-3
mag. constant 12.566 370 614... e-7 (exact) N A^-2
mag. flux quantum 2.067 833 831 e-15 0.000 000 013 e-15 Wb
molar gas constant 8.314 4598 0.000 0048 J mol^-1 K^-1
molar mass constant 1 e-3 (exact) kg mol^-1
molar mass of carbon-12 12 e-3 (exact) kg mol^-1
molar Planck constant 3.990 312 7110 e-10 0.000 000 0018 e-10 J s mol^-1
molar Planck constant times c 0.119 626 565 582 0.000 000 000 054 J m mol^-1
molar volume of ideal gas (273.15 K, 100 kPa) 22.710 947 e-3 0.000 013 e-3 m^3 mol^-1
molar volume of ideal gas (273.15 K, 101.325 kPa) 22.413 962 e-3 0.000 013 e-3 m^3 mol^-1
molar volume of silicon 12.058 832 14 e-6 0.000 000 61 e-6 m^3 mol^-1
Mo x unit 1.002 099 52 e-13 0.000 000 53 e-13 m
muon Compton wavelength 11.734 441 11 e-15 0.000 000 26 e-15 m
muon Compton wavelength over 2 pi 1.867 594 308 e-15 0.000 000 042 e-15 m
muon-electron mass ratio 206.768 2826 0.000 0046
muon g factor -2.002 331 8418 0.000 000 0013
muon mag. mom. -4.490 448 26 e-26 0.000 000 10 e-26 J T^-1
muon mag. mom. anomaly 1.165 920 89 e-3 0.000 000 63 e-3
muon mag. mom. to Bohr magneton ratio -4.841 970 48 e-3 0.000 000 11 e-3
muon mag. mom. to nuclear magneton ratio -8.890 597 05 0.000 000 20
muon mass 1.883 531 594 e-28 0.000 000 048 e-28 kg
muon mass energy equivalent 1.692 833 774 e-11 0.000 000 043 e-11 J
muon mass energy equivalent in MeV 105.658 3745 0.000 0024 MeV
muon mass in u 0.113 428 9257 0.000 000 0025 u
muon molar mass 0.113 428 9257 e-3 0.000 000 0025 e-3 kg mol^-1
muon-neutron mass ratio 0.112 454 5167 0.000 000 0025
muon-proton mag. mom. ratio -3.183 345 142 0.000 000 071
muon-proton mass ratio 0.112 609 5262 0.000 000 0025
muon-tau mass ratio 5.946 49 e-2 0.000 54 e-2
natural unit of action 1.054 571 800 e-34 0.000 000 013 e-34 J s
natural unit of action in eV s 6.582 119 514 e-16 0.000 000 040 e-16 eV s
natural unit of energy 8.187 105 65 e-14 0.000 000 10 e-14 J
natural unit of energy in MeV 0.510 998 9461 0.000 000 0031 MeV
natural unit of length 386.159 267 64 e-15 0.000 000 18 e-15 m
natural unit of mass 9.109 383 56 e-31 0.000 000 11 e-31 kg
natural unit of mom.um 2.730 924 488 e-22 0.000 000 034 e-22 kg m s^-1
natural unit of mom.um in MeV/c 0.510 998 9461 0.000 000 0031 MeV/c
natural unit of time 1.288 088 667 12 e-21 0.000 000 000 58 e-21 s
natural unit of velocity 299 792 458 (exact) m s^-1
neutron Compton wavelength 1.319 590 904 81 e-15 0.000 000 000 88 e-15 m
neutron Compton wavelength over 2 pi 0.210 019 415 36 e-15 0.000 000 000 14 e-15 m
neutron-electron mag. mom. ratio 1.040 668 82 e-3 0.000 000 25 e-3
neutron-electron mass ratio 1838.683 661 58 0.000 000 90
neutron g factor -3.826 085 45 0.000 000 90
neutron gyromag. ratio 1.832 471 72 e8 0.000 000 43 e8 s^-1 T^-1
neutron gyromag. ratio over 2 pi 29.164 6933 0.000 0069 MHz T^-1
neutron mag. mom. -0.966 236 50 e-26 0.000 000 23 e-26 J T^-1
neutron mag. mom. to Bohr magneton ratio -1.041 875 63 e-3 0.000 000 25 e-3
neutron mag. mom. to nuclear magneton ratio -1.913 042 73 0.000 000 45
neutron mass 1.674 927 471 e-27 0.000 000 021 e-27 kg
neutron mass energy equivalent 1.505 349 739 e-10 0.000 000 019 e-10 J
neutron mass energy equivalent in MeV 939.565 4133 0.000 0058 MeV
neutron mass in u 1.008 664 915 88 0.000 000 000 49 u
neutron molar mass 1.008 664 915 88 e-3 0.000 000 000 49 e-3 kg mol^-1
neutron-muon mass ratio 8.892 484 08 0.000 000 20
neutron-proton mag. mom. ratio -0.684 979 34 0.000 000 16
neutron-proton mass difference 2.305 573 77 e-30 0.000 000 85 e-30
neutron-proton mass difference energy equivalent 2.072 146 37 e-13 0.000 000 76 e-13
neutron-proton mass difference energy equivalent in MeV 1.293 332 05 0.000 000 48
neutron-proton mass difference in u 0.001 388 449 00 0.000 000 000 51
neutron-proton mass ratio 1.001 378 418 98 0.000 000 000 51
neutron-tau mass ratio 0.528 790 0.000 048
neutron to shielded proton mag. mom. ratio -0.684 996 94 0.000 000 16
Newtonian constant of gravitation 6.674 08 e-11 0.000 31 e-11 m^3 kg^-1 s^-2
Newtonian constant of gravitation over h-bar c 6.708 61 e-39 0.000 31 e-39 (GeV/c^2)^-2
nuclear magneton 5.050 783 699 e-27 0.000 000 031 e-27 J T^-1
nuclear magneton in eV/T 3.152 451 2550 e-8 0.000 000 0015 e-8 eV T^-1
nuclear magneton in inverse meters per tesla 2.542 623 432 e-2 0.000 000 016 e-2 m^-1 T^-1
nuclear magneton in K/T 3.658 2690 e-4 0.000 0021 e-4 K T^-1
nuclear magneton in MHz/T 7.622 593 285 0.000 000 047 MHz T^-1
Planck constant 6.626 070 040 e-34 0.000 000 081 e-34 J s
Planck constant in eV s 4.135 667 662 e-15 0.000 000 025 e-15 eV s
Planck constant over 2 pi 1.054 571 800 e-34 0.000 000 013 e-34 J s
Planck constant over 2 pi in eV s 6.582 119 514 e-16 0.000 000 040 e-16 eV s
Planck constant over 2 pi times c in MeV fm 197.326 9788 0.000 0012 MeV fm
Planck length 1.616 229 e-35 0.000 038 e-35 m
Planck mass 2.176 470 e-8 0.000 051 e-8 kg
Planck mass energy equivalent in GeV 1.220 910 e19 0.000 029 e19 GeV
Planck temperature 1.416 808 e32 0.000 033 e32 K
Planck time 5.391 16 e-44 0.000 13 e-44 s
proton charge to mass quotient 9.578 833 226 e7 0.000 000 059 e7 C kg^-1
proton Compton wavelength 1.321 409 853 96 e-15 0.000 000 000 61 e-15 m
proton Compton wavelength over 2 pi 0.210 308910109e-15 0.000 000 000097e-15 m
proton-electron mass ratio 1836.152 673 89 0.000 000 17
proton g factor 5.585 694 702 0.000 000 017
proton gyromag. ratio 2.675 221 900 e8 0.000 000 018 e8 s^-1 T^-1
proton gyromag. ratio over 2 pi 42.577 478 92 0.000 000 29 MHz T^-1
proton mag. mom. 1.410 606 7873 e-26 0.000 000 0097 e-26 J T^-1
proton mag. mom. to Bohr magneton ratio 1.521 032 2053 e-3 0.000 000 0046 e-3
proton mag. mom. to nuclear magneton ratio 2.792 847 3508 0.000 000 0085
proton mag. shielding correction 25.691 e-6 0.011 e-6
proton mass 1.672 621 898 e-27 0.000 000 021 e-27 kg
proton mass energy equivalent 1.503 277 593 e-10 0.000 000 018 e-10 J
proton mass energy equivalent in MeV 938.272 0813 0.000 0058 MeV
proton mass in u 1.007 276 466 879 0.000 000 000 091 u
proton molar mass 1.007 276 466 879 e-3 0.000 000 000 091 e-3 kg mol^-1
proton-muon mass ratio 8.880 243 38 0.000 000 20
proton-neutron mag. mom. ratio -1.459 898 05 0.000 000 34
proton-neutron mass ratio 0.998 623 478 44 0.000 000 000 51
proton rms charge radius 0.8751 e-15 0.0061 e-15 m
proton-tau mass ratio 0.528 063 0.000 048
quantum of circulation 3.636 947 5486 e-4 0.000 000 0017 e-4 m^2 s^-1
quantum of circulation times 2 7.273 895 0972 e-4 0.000 000 0033 e-4 m^2 s^-1
Rydberg constant 10 973 731.568 508 0.000 065 m^-1
Rydberg constant times c in Hz 3.289 841 960 355 e15 0.000 000 000 019 e15 Hz
Rydberg constant times hc in eV 13.605 693 009 0.000 000 084 eV
Rydberg constant times hc in J 2.179 872 325 e-18 0.000 000 027 e-18 J
Sackur-Tetrode constant (1 K, 100 kPa) -1.151 7084 0.000 0014
Sackur-Tetrode constant (1 K, 101.325 kPa) -1.164 8714 0.000 0014
second radiation constant 1.438 777 36 e-2 0.000 000 83 e-2 m K
shielded helion gyromag. ratio 2.037 894 585 e8 0.000 000 027 e8 s^-1 T^-1
shielded helion gyromag. ratio over 2 pi 32.434 099 66 0.000 000 43 MHz T^-1
shielded helion mag. mom. -1.074 553 080 e-26 0.000 000 014 e-26 J T^-1
shielded helion mag. mom. to Bohr magneton ratio -1.158 671 471 e-3 0.000 000 014 e-3
shielded helion mag. mom. to nuclear magneton ratio -2.127 497 720 0.000 000 025
shielded helion to proton mag. mom. ratio -0.761 766 5603 0.000 000 0092
shielded helion to shielded proton mag. mom. ratio -0.761 786 1313 0.000 000 0033
shielded proton gyromag. ratio 2.675 153 171 e8 0.000 000 033 e8 s^-1 T^-1
shielded proton gyromag. ratio over 2 pi 42.576 385 07 0.000 000 53 MHz T^-1
shielded proton mag. mom. 1.410 570 547 e-26 0.000 000 018 e-26 J T^-1
shielded proton mag. mom. to Bohr magneton ratio 1.520 993 128 e-3 0.000 000 017 e-3
shielded proton mag. mom. to nuclear magneton ratio 2.792 775 600 0.000 000 030
speed of light in vacuum 299 792 458 (exact) m s^-1
standard acceleration of gravity 9.806 65 (exact) m s^-2
standard atmosphere 101 325 (exact) Pa
standard-state pressure 100 000 (exact) Pa
Stefan-Boltzmann constant 5.670 367 e-8 0.000 013 e-8 W m^-2 K^-4
tau Compton wavelength 0.697 787 e-15 0.000 063 e-15 m
tau Compton wavelength over 2 pi 0.111 056 e-15 0.000 010 e-15 m
tau-electron mass ratio 3477.15 0.31
tau mass 3.167 47 e-27 0.000 29 e-27 kg
tau mass energy equivalent 2.846 78 e-10 0.000 26 e-10 J
tau mass energy equivalent in MeV 1776.82 0.16 MeV
tau mass in u 1.907 49 0.000 17 u
tau molar mass 1.907 49 e-3 0.000 17 e-3 kg mol^-1
tau-muon mass ratio 16.8167 0.0015
tau-neutron mass ratio 1.891 11 0.000 17
tau-proton mass ratio 1.893 72 0.000 17
Thomson cross section 0.665 245 871 58 e-28 0.000 000 000 91 e-28 m^2
triton-electron mass ratio 5496.921 535 88 0.000 000 26
triton g factor 5.957 924 920 0.000 000 028
triton mag. mom. 1.504 609 503 e-26 0.000 000 012 e-26 J T^-1
triton mag. mom. to Bohr magneton ratio 1.622 393 6616 e-3 0.000 000 0076 e-3
triton mag. mom. to nuclear magneton ratio 2.978 962 460 0.000 000 014
triton mass 5.007 356 665 e-27 0.000 000 062 e-27 kg
triton mass energy equivalent 4.500 387 735 e-10 0.000 000 055 e-10 J
triton mass energy equivalent in MeV 2808.921 112 0.000 017 MeV
triton mass in u 3.015 500 716 32 0.000 000 000 11 u
triton molar mass 3.015 500 716 32 e-3 0.000 000 000 11 e-3 kg mol^-1
triton-proton mass ratio 2.993 717 033 48 0.000 000 000 22
unified atomic mass unit 1.660 539 040 e-27 0.000 000 020 e-27 kg
von Klitzing constant 25 812.807 4555 0.000 0059 ohm
weak mixing angle 0.2223 0.0021
Wien frequency displacement law constant 5.878 9238 e10 0.000 0034 e10 Hz K^-1
Wien wavelength displacement law constant 2.897 7729 e-3 0.000 0017 e-3 m K"""
txt2018 = """\
alpha particle-electron mass ratio 7294.299 541 42 0.000 000 24
alpha particle mass 6.644 657 3357 e-27 0.000 000 0020 e-27 kg
alpha particle mass energy equivalent 5.971 920 1914 e-10 0.000 000 0018 e-10 J
alpha particle mass energy equivalent in MeV 3727.379 4066 0.000 0011 MeV
alpha particle mass in u 4.001 506 179 127 0.000 000 000 063 u
alpha particle molar mass 4.001 506 1777 e-3 0.000 000 0012 e-3 kg mol^-1
alpha particle-proton mass ratio 3.972 599 690 09 0.000 000 000 22
alpha particle relative atomic mass 4.001 506 179 127 0.000 000 000 063
Angstrom star 1.000 014 95 e-10 0.000 000 90 e-10 m
atomic mass constant 1.660 539 066 60 e-27 0.000 000 000 50 e-27 kg
atomic mass constant energy equivalent 1.492 418 085 60 e-10 0.000 000 000 45 e-10 J
atomic mass constant energy equivalent in MeV 931.494 102 42 0.000 000 28 MeV
atomic mass unit-electron volt relationship 9.314 941 0242 e8 0.000 000 0028 e8 eV
atomic mass unit-hartree relationship 3.423 177 6874 e7 0.000 000 0010 e7 E_h
atomic mass unit-hertz relationship 2.252 342 718 71 e23 0.000 000 000 68 e23 Hz
atomic mass unit-inverse meter relationship 7.513 006 6104 e14 0.000 000 0023 e14 m^-1
atomic mass unit-joule relationship 1.492 418 085 60 e-10 0.000 000 000 45 e-10 J
atomic mass unit-kelvin relationship 1.080 954 019 16 e13 0.000 000 000 33 e13 K
atomic mass unit-kilogram relationship 1.660 539 066 60 e-27 0.000 000 000 50 e-27 kg
atomic unit of 1st hyperpolarizability 3.206 361 3061 e-53 0.000 000 0015 e-53 C^3 m^3 J^-2
atomic unit of 2nd hyperpolarizability 6.235 379 9905 e-65 0.000 000 0038 e-65 C^4 m^4 J^-3
atomic unit of action 1.054 571 817... e-34 (exact) J s
atomic unit of charge 1.602 176 634 e-19 (exact) C
atomic unit of charge density 1.081 202 384 57 e12 0.000 000 000 49 e12 C m^-3
atomic unit of current 6.623 618 237 510 e-3 0.000 000 000 013 e-3 A
atomic unit of electric dipole mom. 8.478 353 6255 e-30 0.000 000 0013 e-30 C m
atomic unit of electric field 5.142 206 747 63 e11 0.000 000 000 78 e11 V m^-1
atomic unit of electric field gradient 9.717 362 4292 e21 0.000 000 0029 e21 V m^-2
atomic unit of electric polarizability 1.648 777 274 36 e-41 0.000 000 000 50 e-41 C^2 m^2 J^-1
atomic unit of electric potential 27.211 386 245 988 0.000 000 000 053 V
atomic unit of electric quadrupole mom. 4.486 551 5246 e-40 0.000 000 0014 e-40 C m^2
atomic unit of energy 4.359 744 722 2071 e-18 0.000 000 000 0085 e-18 J
atomic unit of force 8.238 723 4983 e-8 0.000 000 0012 e-8 N
atomic unit of length 5.291 772 109 03 e-11 0.000 000 000 80 e-11 m
atomic unit of mag. dipole mom. 1.854 802 015 66 e-23 0.000 000 000 56 e-23 J T^-1
atomic unit of mag. flux density 2.350 517 567 58 e5 0.000 000 000 71 e5 T
atomic unit of magnetizability 7.891 036 6008 e-29 0.000 000 0048 e-29 J T^-2
atomic unit of mass 9.109 383 7015 e-31 0.000 000 0028 e-31 kg
atomic unit of momentum 1.992 851 914 10 e-24 0.000 000 000 30 e-24 kg m s^-1
atomic unit of permittivity 1.112 650 055 45 e-10 0.000 000 000 17 e-10 F m^-1
atomic unit of time 2.418 884 326 5857 e-17 0.000 000 000 0047 e-17 s
atomic unit of velocity 2.187 691 263 64 e6 0.000 000 000 33 e6 m s^-1
Avogadro constant 6.022 140 76 e23 (exact) mol^-1
Bohr magneton 9.274 010 0783 e-24 0.000 000 0028 e-24 J T^-1
Bohr magneton in eV/T 5.788 381 8060 e-5 0.000 000 0017 e-5 eV T^-1
Bohr magneton in Hz/T 1.399 624 493 61 e10 0.000 000 000 42 e10 Hz T^-1
Bohr magneton in inverse meter per tesla 46.686 447 783 0.000 000 014 m^-1 T^-1
Bohr magneton in K/T 0.671 713 815 63 0.000 000 000 20 K T^-1
Bohr radius 5.291 772 109 03 e-11 0.000 000 000 80 e-11 m
Boltzmann constant 1.380 649 e-23 (exact) J K^-1
Boltzmann constant in eV/K 8.617 333 262... e-5 (exact) eV K^-1
Boltzmann constant in Hz/K 2.083 661 912... e10 (exact) Hz K^-1
Boltzmann constant in inverse meter per kelvin 69.503 480 04... (exact) m^-1 K^-1
characteristic impedance of vacuum 376.730 313 668 0.000 000 057 ohm
classical electron radius 2.817 940 3262 e-15 0.000 000 0013 e-15 m
Compton wavelength 2.426 310 238 67 e-12 0.000 000 000 73 e-12 m
conductance quantum 7.748 091 729... e-5 (exact) S
conventional value of ampere-90 1.000 000 088 87... (exact) A
conventional value of coulomb-90 1.000 000 088 87... (exact) C
conventional value of farad-90 0.999 999 982 20... (exact) F
conventional value of henry-90 1.000 000 017 79... (exact) H
conventional value of Josephson constant 483 597.9 e9 (exact) Hz V^-1
conventional value of ohm-90 1.000 000 017 79... (exact) ohm
conventional value of volt-90 1.000 000 106 66... (exact) V
conventional value of von Klitzing constant 25 812.807 (exact) ohm
conventional value of watt-90 1.000 000 195 53... (exact) W
Cu x unit 1.002 076 97 e-13 0.000 000 28 e-13 m
deuteron-electron mag. mom. ratio -4.664 345 551 e-4 0.000 000 012 e-4
deuteron-electron mass ratio 3670.482 967 88 0.000 000 13
deuteron g factor 0.857 438 2338 0.000 000 0022
deuteron mag. mom. 4.330 735 094 e-27 0.000 000 011 e-27 J T^-1
deuteron mag. mom. to Bohr magneton ratio 4.669 754 570 e-4 0.000 000 012 e-4
deuteron mag. mom. to nuclear magneton ratio 0.857 438 2338 0.000 000 0022
deuteron mass 3.343 583 7724 e-27 0.000 000 0010 e-27 kg
deuteron mass energy equivalent 3.005 063 231 02 e-10 0.000 000 000 91 e-10 J
deuteron mass energy equivalent in MeV 1875.612 942 57 0.000 000 57 MeV
deuteron mass in u 2.013 553 212 745 0.000 000 000 040 u
deuteron molar mass 2.013 553 212 05 e-3 0.000 000 000 61 e-3 kg mol^-1
deuteron-neutron mag. mom. ratio -0.448 206 53 0.000 000 11
deuteron-proton mag. mom. ratio 0.307 012 209 39 0.000 000 000 79
deuteron-proton mass ratio 1.999 007 501 39 0.000 000 000 11
deuteron relative atomic mass 2.013 553 212 745 0.000 000 000 040
deuteron rms charge radius 2.127 99 e-15 0.000 74 e-15 m
electron charge to mass quotient -1.758 820 010 76 e11 0.000 000 000 53 e11 C kg^-1
electron-deuteron mag. mom. ratio -2143.923 4915 0.000 0056
electron-deuteron mass ratio 2.724 437 107 462 e-4 0.000 000 000 096 e-4
electron g factor -2.002 319 304 362 56 0.000 000 000 000 35
electron gyromag. ratio 1.760 859 630 23 e11 0.000 000 000 53 e11 s^-1 T^-1
electron gyromag. ratio in MHz/T 28 024.951 4242 0.000 0085 MHz T^-1
electron-helion mass ratio 1.819 543 074 573 e-4 0.000 000 000 079 e-4
electron mag. mom. -9.284 764 7043 e-24 0.000 000 0028 e-24 J T^-1
electron mag. mom. anomaly 1.159 652 181 28 e-3 0.000 000 000 18 e-3
electron mag. mom. to Bohr magneton ratio -1.001 159 652 181 28 0.000 000 000 000 18
electron mag. mom. to nuclear magneton ratio -1838.281 971 88 0.000 000 11
electron mass 9.109 383 7015 e-31 0.000 000 0028 e-31 kg
electron mass energy equivalent 8.187 105 7769 e-14 0.000 000 0025 e-14 J
electron mass energy equivalent in MeV 0.510 998 950 00 0.000 000 000 15 MeV
electron mass in u 5.485 799 090 65 e-4 0.000 000 000 16 e-4 u
electron molar mass 5.485 799 0888 e-7 0.000 000 0017 e-7 kg mol^-1
electron-muon mag. mom. ratio 206.766 9883 0.000 0046
electron-muon mass ratio 4.836 331 69 e-3 0.000 000 11 e-3
electron-neutron mag. mom. ratio 960.920 50 0.000 23
electron-neutron mass ratio 5.438 673 4424 e-4 0.000 000 0026 e-4
electron-proton mag. mom. ratio -658.210 687 89 0.000 000 20
electron-proton mass ratio 5.446 170 214 87 e-4 0.000 000 000 33 e-4
electron relative atomic mass 5.485 799 090 65 e-4 0.000 000 000 16 e-4
electron-tau mass ratio 2.875 85 e-4 0.000 19 e-4
electron to alpha particle mass ratio 1.370 933 554 787 e-4 0.000 000 000 045 e-4
electron to shielded helion mag. mom. ratio 864.058 257 0.000 010
electron to shielded proton mag. mom. ratio -658.227 5971 0.000 0072
electron-triton mass ratio 1.819 200 062 251 e-4 0.000 000 000 090 e-4
electron volt 1.602 176 634 e-19 (exact) J
electron volt-atomic mass unit relationship 1.073 544 102 33 e-9 0.000 000 000 32 e-9 u
electron volt-hartree relationship 3.674 932 217 5655 e-2 0.000 000 000 0071 e-2 E_h
electron volt-hertz relationship 2.417 989 242... e14 (exact) Hz
electron volt-inverse meter relationship 8.065 543 937... e5 (exact) m^-1
electron volt-joule relationship 1.602 176 634 e-19 (exact) J
electron volt-kelvin relationship 1.160 451 812... e4 (exact) K
electron volt-kilogram relationship 1.782 661 921... e-36 (exact) kg
elementary charge 1.602 176 634 e-19 (exact) C
elementary charge over h-bar 1.519 267 447... e15 (exact) A J^-1
Faraday constant 96 485.332 12... (exact) C mol^-1
Fermi coupling constant 1.166 3787 e-5 0.000 0006 e-5 GeV^-2
fine-structure constant 7.297 352 5693 e-3 0.000 000 0011 e-3
first radiation constant 3.741 771 852... e-16 (exact) W m^2
first radiation constant for spectral radiance 1.191 042 972... e-16 (exact) W m^2 sr^-1
hartree-atomic mass unit relationship 2.921 262 322 05 e-8 0.000 000 000 88 e-8 u
hartree-electron volt relationship 27.211 386 245 988 0.000 000 000 053 eV
Hartree energy 4.359 744 722 2071 e-18 0.000 000 000 0085 e-18 J
Hartree energy in eV 27.211 386 245 988 0.000 000 000 053 eV
hartree-hertz relationship 6.579 683 920 502 e15 0.000 000 000 013 e15 Hz
hartree-inverse meter relationship 2.194 746 313 6320 e7 0.000 000 000 0043 e7 m^-1
hartree-joule relationship 4.359 744 722 2071 e-18 0.000 000 000 0085 e-18 J
hartree-kelvin relationship 3.157 750 248 0407 e5 0.000 000 000 0061 e5 K
hartree-kilogram relationship 4.850 870 209 5432 e-35 0.000 000 000 0094 e-35 kg
helion-electron mass ratio 5495.885 280 07 0.000 000 24
helion g factor -4.255 250 615 0.000 000 050
helion mag. mom. -1.074 617 532 e-26 0.000 000 013 e-26 J T^-1
helion mag. mom. to Bohr magneton ratio -1.158 740 958 e-3 0.000 000 014 e-3
helion mag. mom. to nuclear magneton ratio -2.127 625 307 0.000 000 025
helion mass 5.006 412 7796 e-27 0.000 000 0015 e-27 kg
helion mass energy equivalent 4.499 539 4125 e-10 0.000 000 0014 e-10 J
helion mass energy equivalent in MeV 2808.391 607 43 0.000 000 85 MeV
helion mass in u 3.014 932 247 175 0.000 000 000 097 u
helion molar mass 3.014 932 246 13 e-3 0.000 000 000 91 e-3 kg mol^-1
helion-proton mass ratio 2.993 152 671 67 0.000 000 000 13
helion relative atomic mass 3.014 932 247 175 0.000 000 000 097
helion shielding shift 5.996 743 e-5 0.000 010 e-5
hertz-atomic mass unit relationship 4.439 821 6652 e-24 0.000 000 0013 e-24 u
hertz-electron volt relationship 4.135 667 696... e-15 (exact) eV
hertz-hartree relationship 1.519 829 846 0570 e-16 0.000 000 000 0029 e-16 E_h
hertz-inverse meter relationship 3.335 640 951... e-9 (exact) m^-1
hertz-joule relationship 6.626 070 15 e-34 (exact) J
hertz-kelvin relationship 4.799 243 073... e-11 (exact) K
hertz-kilogram relationship 7.372 497 323... e-51 (exact) kg
hyperfine transition frequency of Cs-133 9 192 631 770 (exact) Hz
inverse fine-structure constant 137.035 999 084 0.000 000 021
inverse meter-atomic mass unit relationship 1.331 025 050 10 e-15 0.000 000 000 40 e-15 u
inverse meter-electron volt relationship 1.239 841 984... e-6 (exact) eV
inverse meter-hartree relationship 4.556 335 252 9120 e-8 0.000 000 000 0088 e-8 E_h
inverse meter-hertz relationship 299 792 458 (exact) Hz
inverse meter-joule relationship 1.986 445 857... e-25 (exact) J
inverse meter-kelvin relationship 1.438 776 877... e-2 (exact) K
inverse meter-kilogram relationship 2.210 219 094... e-42 (exact) kg
inverse of conductance quantum 12 906.403 72... (exact) ohm
Josephson constant 483 597.848 4... e9 (exact) Hz V^-1
joule-atomic mass unit relationship 6.700 535 2565 e9 0.000 000 0020 e9 u
joule-electron volt relationship 6.241 509 074... e18 (exact) eV
joule-hartree relationship 2.293 712 278 3963 e17 0.000 000 000 0045 e17 E_h
joule-hertz relationship 1.509 190 179... e33 (exact) Hz
joule-inverse meter relationship 5.034 116 567... e24 (exact) m^-1
joule-kelvin relationship 7.242 970 516... e22 (exact) K
joule-kilogram relationship 1.112 650 056... e-17 (exact) kg
kelvin-atomic mass unit relationship 9.251 087 3014 e-14 0.000 000 0028 e-14 u
kelvin-electron volt relationship 8.617 333 262... e-5 (exact) eV
kelvin-hartree relationship 3.166 811 563 4556 e-6 0.000 000 000 0061 e-6 E_h
kelvin-hertz relationship 2.083 661 912... e10 (exact) Hz
kelvin-inverse meter relationship 69.503 480 04... (exact) m^-1
kelvin-joule relationship 1.380 649 e-23 (exact) J
kelvin-kilogram relationship 1.536 179 187... e-40 (exact) kg
kilogram-atomic mass unit relationship 6.022 140 7621 e26 0.000 000 0018 e26 u
kilogram-electron volt relationship 5.609 588 603... e35 (exact) eV
kilogram-hartree relationship 2.061 485 788 7409 e34 0.000 000 000 0040 e34 E_h
kilogram-hertz relationship 1.356 392 489... e50 (exact) Hz
kilogram-inverse meter relationship 4.524 438 335... e41 (exact) m^-1
kilogram-joule relationship 8.987 551 787... e16 (exact) J
kilogram-kelvin relationship 6.509 657 260... e39 (exact) K
lattice parameter of silicon 5.431 020 511 e-10 0.000 000 089 e-10 m
lattice spacing of ideal Si (220) 1.920 155 716 e-10 0.000 000 032 e-10 m
Loschmidt constant (273.15 K, 100 kPa) 2.651 645 804... e25 (exact) m^-3
Loschmidt constant (273.15 K, 101.325 kPa) 2.686 780 111... e25 (exact) m^-3
luminous efficacy 683 (exact) lm W^-1
mag. flux quantum 2.067 833 848... e-15 (exact) Wb
molar gas constant 8.314 462 618... (exact) J mol^-1 K^-1
molar mass constant 0.999 999 999 65 e-3 0.000 000 000 30 e-3 kg mol^-1
molar mass of carbon-12 11.999 999 9958 e-3 0.000 000 0036 e-3 kg mol^-1
molar Planck constant 3.990 312 712... e-10 (exact) J Hz^-1 mol^-1
molar volume of ideal gas (273.15 K, 100 kPa) 22.710 954 64... e-3 (exact) m^3 mol^-1
molar volume of ideal gas (273.15 K, 101.325 kPa) 22.413 969 54... e-3 (exact) m^3 mol^-1
molar volume of silicon 1.205 883 199 e-5 0.000 000 060 e-5 m^3 mol^-1
Mo x unit 1.002 099 52 e-13 0.000 000 53 e-13 m
muon Compton wavelength 1.173 444 110 e-14 0.000 000 026 e-14 m
muon-electron mass ratio 206.768 2830 0.000 0046
muon g factor -2.002 331 8418 0.000 000 0013
muon mag. mom. -4.490 448 30 e-26 0.000 000 10 e-26 J T^-1
muon mag. mom. anomaly 1.165 920 89 e-3 0.000 000 63 e-3
muon mag. mom. to Bohr magneton ratio -4.841 970 47 e-3 0.000 000 11 e-3
muon mag. mom. to nuclear magneton ratio -8.890 597 03 0.000 000 20
muon mass 1.883 531 627 e-28 0.000 000 042 e-28 kg
muon mass energy equivalent 1.692 833 804 e-11 0.000 000 038 e-11 J
muon mass energy equivalent in MeV 105.658 3755 0.000 0023 MeV
muon mass in u 0.113 428 9259 0.000 000 0025 u
muon molar mass 1.134 289 259 e-4 0.000 000 025 e-4 kg mol^-1
muon-neutron mass ratio 0.112 454 5170 0.000 000 0025
muon-proton mag. mom. ratio -3.183 345 142 0.000 000 071
muon-proton mass ratio 0.112 609 5264 0.000 000 0025
muon-tau mass ratio 5.946 35 e-2 0.000 40 e-2
natural unit of action 1.054 571 817... e-34 (exact) J s
natural unit of action in eV s 6.582 119 569... e-16 (exact) eV s
natural unit of energy 8.187 105 7769 e-14 0.000 000 0025 e-14 J
natural unit of energy in MeV 0.510 998 950 00 0.000 000 000 15 MeV
natural unit of length 3.861 592 6796 e-13 0.000 000 0012 e-13 m
natural unit of mass 9.109 383 7015 e-31 0.000 000 0028 e-31 kg
natural unit of momentum 2.730 924 530 75 e-22 0.000 000 000 82 e-22 kg m s^-1
natural unit of momentum in MeV/c 0.510 998 950 00 0.000 000 000 15 MeV/c
natural unit of time 1.288 088 668 19 e-21 0.000 000 000 39 e-21 s
natural unit of velocity 299 792 458 (exact) m s^-1
neutron Compton wavelength 1.319 590 905 81 e-15 0.000 000 000 75 e-15 m
neutron-electron mag. mom. ratio 1.040 668 82 e-3 0.000 000 25 e-3
neutron-electron mass ratio 1838.683 661 73 0.000 000 89
neutron g factor -3.826 085 45 0.000 000 90
neutron gyromag. ratio 1.832 471 71 e8 0.000 000 43 e8 s^-1 T^-1
neutron gyromag. ratio in MHz/T 29.164 6931 0.000 0069 MHz T^-1
neutron mag. mom. -9.662 3651 e-27 0.000 0023 e-27 J T^-1
neutron mag. mom. to Bohr magneton ratio -1.041 875 63 e-3 0.000 000 25 e-3
neutron mag. mom. to nuclear magneton ratio -1.913 042 73 0.000 000 45
neutron mass 1.674 927 498 04 e-27 0.000 000 000 95 e-27 kg
neutron mass energy equivalent 1.505 349 762 87 e-10 0.000 000 000 86 e-10 J
neutron mass energy equivalent in MeV 939.565 420 52 0.000 000 54 MeV
neutron mass in u 1.008 664 915 95 0.000 000 000 49 u
neutron molar mass 1.008 664 915 60 e-3 0.000 000 000 57 e-3 kg mol^-1
neutron-muon mass ratio 8.892 484 06 0.000 000 20
neutron-proton mag. mom. ratio -0.684 979 34 0.000 000 16
neutron-proton mass difference 2.305 574 35 e-30 0.000 000 82 e-30 kg
neutron-proton mass difference energy equivalent 2.072 146 89 e-13 0.000 000 74 e-13 J
neutron-proton mass difference energy equivalent in MeV 1.293 332 36 0.000 000 46 MeV
neutron-proton mass difference in u 1.388 449 33 e-3 0.000 000 49 e-3 u
neutron-proton mass ratio 1.001 378 419 31 0.000 000 000 49
neutron relative atomic mass 1.008 664 915 95 0.000 000 000 49
neutron-tau mass ratio 0.528 779 0.000 036
neutron to shielded proton mag. mom. ratio -0.684 996 94 0.000 000 16
Newtonian constant of gravitation 6.674 30 e-11 0.000 15 e-11 m^3 kg^-1 s^-2
Newtonian constant of gravitation over h-bar c 6.708 83 e-39 0.000 15 e-39 (GeV/c^2)^-2
nuclear magneton 5.050 783 7461 e-27 0.000 000 0015 e-27 J T^-1
nuclear magneton in eV/T 3.152 451 258 44 e-8 0.000 000 000 96 e-8 eV T^-1
nuclear magneton in inverse meter per tesla 2.542 623 413 53 e-2 0.000 000 000 78 e-2 m^-1 T^-1
nuclear magneton in K/T 3.658 267 7756 e-4 0.000 000 0011 e-4 K T^-1
nuclear magneton in MHz/T 7.622 593 2291 0.000 000 0023 MHz T^-1
Planck constant 6.626 070 15 e-34 (exact) J Hz^-1
Planck constant in eV/Hz 4.135 667 696... e-15 (exact) eV Hz^-1
Planck length 1.616 255 e-35 0.000 018 e-35 m
Planck mass 2.176 434 e-8 0.000 024 e-8 kg
Planck mass energy equivalent in GeV 1.220 890 e19 0.000 014 e19 GeV
Planck temperature 1.416 784 e32 0.000 016 e32 K
Planck time 5.391 247 e-44 0.000 060 e-44 s
proton charge to mass quotient 9.578 833 1560 e7 0.000 000 0029 e7 C kg^-1
proton Compton wavelength 1.321 409 855 39 e-15 0.000 000 000 40 e-15 m
proton-electron mass ratio 1836.152 673 43 0.000 000 11
proton g factor 5.585 694 6893 0.000 000 0016
proton gyromag. ratio 2.675 221 8744 e8 0.000 000 0011 e8 s^-1 T^-1
proton gyromag. ratio in MHz/T 42.577 478 518 0.000 000 018 MHz T^-1
proton mag. mom. 1.410 606 797 36 e-26 0.000 000 000 60 e-26 J T^-1
proton mag. mom. to Bohr magneton ratio 1.521 032 202 30 e-3 0.000 000 000 46 e-3
proton mag. mom. to nuclear magneton ratio 2.792 847 344 63 0.000 000 000 82
proton mag. shielding correction 2.5689 e-5 0.0011 e-5
proton mass 1.672 621 923 69 e-27 0.000 000 000 51 e-27 kg
proton mass energy equivalent 1.503 277 615 98 e-10 0.000 000 000 46 e-10 J
proton mass energy equivalent in MeV 938.272 088 16 0.000 000 29 MeV
proton mass in u 1.007 276 466 621 0.000 000 000 053 u
proton molar mass 1.007 276 466 27 e-3 0.000 000 000 31 e-3 kg mol^-1
proton-muon mass ratio 8.880 243 37 0.000 000 20
proton-neutron mag. mom. ratio -1.459 898 05 0.000 000 34
proton-neutron mass ratio 0.998 623 478 12 0.000 000 000 49
proton relative atomic mass 1.007 276 466 621 0.000 000 000 053
proton rms charge radius 8.414 e-16 0.019 e-16 m
proton-tau mass ratio 0.528 051 0.000 036
quantum of circulation 3.636 947 5516 e-4 0.000 000 0011 e-4 m^2 s^-1
quantum of circulation times 2 7.273 895 1032 e-4 0.000 000 0022 e-4 m^2 s^-1
reduced Compton wavelength 3.861 592 6796 e-13 0.000 000 0012 e-13 m
reduced muon Compton wavelength 1.867 594 306 e-15 0.000 000 042 e-15 m
reduced neutron Compton wavelength 2.100 194 1552 e-16 0.000 000 0012 e-16 m
reduced Planck constant 1.054 571 817... e-34 (exact) J s
reduced Planck constant in eV s 6.582 119 569... e-16 (exact) eV s
reduced Planck constant times c in MeV fm 197.326 980 4... (exact) MeV fm
reduced proton Compton wavelength 2.103 089 103 36 e-16 0.000 000 000 64 e-16 m
reduced tau Compton wavelength 1.110 538 e-16 0.000 075 e-16 m
Rydberg constant 10 973 731.568 160 0.000 021 m^-1
Rydberg constant times c in Hz 3.289 841 960 2508 e15 0.000 000 000 0064 e15 Hz
Rydberg constant times hc in eV 13.605 693 122 994 0.000 000 000 026 eV
Rydberg constant times hc in J 2.179 872 361 1035 e-18 0.000 000 000 0042 e-18 J
Sackur-Tetrode constant (1 K, 100 kPa) -1.151 707 537 06 0.000 000 000 45
Sackur-Tetrode constant (1 K, 101.325 kPa) -1.164 870 523 58 0.000 000 000 45
second radiation constant 1.438 776 877... e-2 (exact) m K
shielded helion gyromag. ratio 2.037 894 569 e8 0.000 000 024 e8 s^-1 T^-1
shielded helion gyromag. ratio in MHz/T 32.434 099 42 0.000 000 38 MHz T^-1
shielded helion mag. mom. -1.074 553 090 e-26 0.000 000 013 e-26 J T^-1
shielded helion mag. mom. to Bohr magneton ratio -1.158 671 471 e-3 0.000 000 014 e-3
shielded helion mag. mom. to nuclear magneton ratio -2.127 497 719 0.000 000 025
shielded helion to proton mag. mom. ratio -0.761 766 5618 0.000 000 0089
shielded helion to shielded proton mag. mom. ratio -0.761 786 1313 0.000 000 0033
shielded proton gyromag. ratio 2.675 153 151 e8 0.000 000 029 e8 s^-1 T^-1
shielded proton gyromag. ratio in MHz/T 42.576 384 74 0.000 000 46 MHz T^-1
shielded proton mag. mom. 1.410 570 560 e-26 0.000 000 015 e-26 J T^-1
shielded proton mag. mom. to Bohr magneton ratio 1.520 993 128 e-3 0.000 000 017 e-3
shielded proton mag. mom. to nuclear magneton ratio 2.792 775 599 0.000 000 030
shielding difference of d and p in HD 2.0200 e-8 0.0020 e-8
shielding difference of t and p in HT 2.4140 e-8 0.0020 e-8
speed of light in vacuum 299 792 458 (exact) m s^-1
standard acceleration of gravity 9.806 65 (exact) m s^-2
standard atmosphere 101 325 (exact) Pa
standard-state pressure 100 000 (exact) Pa
Stefan-Boltzmann constant 5.670 374 419... e-8 (exact) W m^-2 K^-4
tau Compton wavelength 6.977 71 e-16 0.000 47 e-16 m
tau-electron mass ratio 3477.23 0.23
tau energy equivalent 1776.86 0.12 MeV
tau mass 3.167 54 e-27 0.000 21 e-27 kg
tau mass energy equivalent 2.846 84 e-10 0.000 19 e-10 J
tau mass in u 1.907 54 0.000 13 u
tau molar mass 1.907 54 e-3 0.000 13 e-3 kg mol^-1
tau-muon mass ratio 16.8170 0.0011
tau-neutron mass ratio 1.891 15 0.000 13
tau-proton mass ratio 1.893 76 0.000 13
Thomson cross section 6.652 458 7321 e-29 0.000 000 0060 e-29 m^2
triton-electron mass ratio 5496.921 535 73 0.000 000 27
triton g factor 5.957 924 931 0.000 000 012
triton mag. mom. 1.504 609 5202 e-26 0.000 000 0030 e-26 J T^-1
triton mag. mom. to Bohr magneton ratio 1.622 393 6651 e-3 0.000 000 0032 e-3
triton mag. mom. to nuclear magneton ratio 2.978 962 4656 0.000 000 0059
triton mass 5.007 356 7446 e-27 0.000 000 0015 e-27 kg
triton mass energy equivalent 4.500 387 8060 e-10 0.000 000 0014 e-10 J
triton mass energy equivalent in MeV 2808.921 132 98 0.000 000 85 MeV
triton mass in u 3.015 500 716 21 0.000 000 000 12 u
triton molar mass 3.015 500 715 17 e-3 0.000 000 000 92 e-3 kg mol^-1
triton-proton mass ratio 2.993 717 034 14 0.000 000 000 15
triton relative atomic mass 3.015 500 716 21 0.000 000 000 12
triton to proton mag. mom. ratio 1.066 639 9191 0.000 000 0021
unified atomic mass unit 1.660 539 066 60 e-27 0.000 000 000 50 e-27 kg
vacuum electric permittivity 8.854 187 8128 e-12 0.000 000 0013 e-12 F m^-1
vacuum mag. permeability 1.256 637 062 12 e-6 0.000 000 000 19 e-6 N A^-2
von Klitzing constant 25 812.807 45... (exact) ohm
weak mixing angle 0.222 90 0.000 30
Wien frequency displacement law constant 5.878 925 757... e10 (exact) Hz K^-1
Wien wavelength displacement law constant 2.897 771 955... e-3 (exact) m K
W to Z mass ratio 0.881 53 0.000 17 """
# -----------------------------------------------------------------------------
physical_constants: dict[str, tuple[float, str, float]] = {}
def parse_constants_2002to2014(d: str) -> dict[str, tuple[float, str, float]]:
constants = {}
for line in d.split('\n'):
name = line[:55].rstrip()
val = float(line[55:77].replace(' ', '').replace('...', ''))
uncert = float(line[77:99].replace(' ', '').replace('(exact)', '0'))
units = line[99:].rstrip()
constants[name] = (val, units, uncert)
return constants
def parse_constants_2018toXXXX(d: str) -> dict[str, tuple[float, str, float]]:
constants = {}
for line in d.split('\n'):
name = line[:60].rstrip()
val = float(line[60:85].replace(' ', '').replace('...', ''))
uncert = float(line[85:110].replace(' ', '').replace('(exact)', '0'))
units = line[110:].rstrip()
constants[name] = (val, units, uncert)
return constants
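The fixed-width parsing above is easy to check by hand. A minimal, self-contained sketch of the 2018+ layout (name in columns 0-59, value in 60-84, uncertainty in 85-109, units from column 110), applied to one line constructed here rather than taken from the table:

```python
# Sketch of the 2018+ fixed-width parsing on one hand-built line.
# Column layout: name [0:60], value [60:85], uncertainty [85:110], units [110:].
line = ("speed of light in vacuum".ljust(60)
        + "299 792 458".ljust(25)
        + "(exact)".ljust(25)
        + "m s^-1")
name = line[:60].rstrip()
value = float(line[60:85].replace(' ', '').replace('...', ''))
uncert = float(line[85:110].replace(' ', '').replace('(exact)', '0'))
units = line[110:].rstrip()
print(name, value, uncert, units)
```

Stripping the group-separator spaces before calling `float` is what lets entries like `299 792 458` parse; `(exact)` is mapped to an uncertainty of `0`.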
_physical_constants_2002 = parse_constants_2002to2014(txt2002)
_physical_constants_2006 = parse_constants_2002to2014(txt2006)
_physical_constants_2010 = parse_constants_2002to2014(txt2010)
_physical_constants_2014 = parse_constants_2002to2014(txt2014)
_physical_constants_2018 = parse_constants_2018toXXXX(txt2018)
physical_constants.update(_physical_constants_2002)
physical_constants.update(_physical_constants_2006)
physical_constants.update(_physical_constants_2010)
physical_constants.update(_physical_constants_2014)
physical_constants.update(_physical_constants_2018)
_current_constants = _physical_constants_2018
_current_codata = "CODATA 2018"
# check obsolete values
_obsolete_constants = {}
for k in physical_constants:
if k not in _current_constants:
_obsolete_constants[k] = True
# generate some additional aliases
_aliases = {}
for k in _physical_constants_2002:
if 'magn.' in k:
_aliases[k] = k.replace('magn.', 'mag.')
for k in _physical_constants_2006:
if 'momentum' in k:
_aliases[k] = k.replace('momentum', 'mom.um')
for k in _physical_constants_2018:
if 'momentum' in k:
_aliases[k] = k.replace('momentum', 'mom.um')
# CODATA 2018: renamed and no longer exact; use as aliases
_aliases['mag. constant'] = 'vacuum mag. permeability'
_aliases['electric constant'] = 'vacuum electric permittivity'
class ConstantWarning(DeprecationWarning):
"""Accessing a constant no longer in current CODATA data set"""
pass
def _check_obsolete(key: str) -> None:
if key in _obsolete_constants and key not in _aliases:
warnings.warn("Constant '{}' is not in current {} data set".format(
key, _current_codata), ConstantWarning)
def value(key: str) -> float:
"""
Value in physical_constants indexed by key
Parameters
----------
key : Python string
Key in dictionary `physical_constants`
Returns
-------
value : float
Value in `physical_constants` corresponding to `key`
Examples
--------
>>> from scipy import constants
>>> constants.value('elementary charge')
1.602176634e-19
"""
_check_obsolete(key)
return physical_constants[key][0]
def unit(key: str) -> str:
"""
Unit in physical_constants indexed by key
Parameters
----------
key : Python string
Key in dictionary `physical_constants`
Returns
-------
unit : Python string
Unit in `physical_constants` corresponding to `key`
Examples
--------
>>> from scipy import constants
>>> constants.unit('proton mass')
'kg'
"""
_check_obsolete(key)
return physical_constants[key][1]
def precision(key: str) -> float:
"""
Relative precision in physical_constants indexed by key
Parameters
----------
key : Python string
Key in dictionary `physical_constants`
Returns
-------
prec : float
Relative precision in `physical_constants` corresponding to `key`
Examples
--------
>>> from scipy import constants
    >>> round(constants.precision('proton mass'), 12)
    3.05e-10
"""
_check_obsolete(key)
return physical_constants[key][2] / physical_constants[key][0]
def find(sub: str | None = None, disp: bool = False) -> Any:
"""
Return list of physical_constant keys containing a given string.
Parameters
----------
sub : str
Sub-string to search keys for. By default, return all keys.
disp : bool
If True, print the keys that are found and return None.
Otherwise, return the list of keys without printing anything.
Returns
-------
keys : list or None
If `disp` is False, the list of keys is returned.
Otherwise, None is returned.
Examples
--------
>>> from scipy.constants import find, physical_constants
Which keys in the ``physical_constants`` dictionary contain 'boltzmann'?
>>> find('boltzmann')
['Boltzmann constant',
'Boltzmann constant in Hz/K',
'Boltzmann constant in eV/K',
'Boltzmann constant in inverse meter per kelvin',
'Stefan-Boltzmann constant']
Get the constant called 'Boltzmann constant in Hz/K':
>>> physical_constants['Boltzmann constant in Hz/K']
(20836619120.0, 'Hz K^-1', 0.0)
Find constants with 'radius' in the key:
>>> find('radius')
['Bohr radius',
'classical electron radius',
'deuteron rms charge radius',
'proton rms charge radius']
>>> physical_constants['classical electron radius']
(2.8179403262e-15, 'm', 1.3e-24)
"""
if sub is None:
result = list(_current_constants.keys())
else:
result = [key for key in _current_constants
if sub.lower() in key.lower()]
result.sort()
if disp:
for key in result:
print(key)
return
else:
return result
c = value('speed of light in vacuum')
mu0 = value('vacuum mag. permeability')
epsilon0 = value('vacuum electric permittivity')
# Table is lacking some digits for exact values: calculate from definition
exact_values = {
'joule-kilogram relationship': (1 / (c * c), 'kg', 0.0),
'kilogram-joule relationship': (c * c, 'J', 0.0),
'hertz-inverse meter relationship': (1 / c, 'm^-1', 0.0),
}
# sanity check
for key in exact_values:
val = physical_constants[key][0]
if abs(exact_values[key][0] - val) / val > 1e-9:
raise ValueError("Constants.codata: exact values too far off.")
if exact_values[key][2] == 0 and physical_constants[key][2] != 0:
raise ValueError("Constants.codata: value not exact")
physical_constants.update(exact_values)
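The sanity check above can be reproduced with plain floats: the exact `'kilogram-joule relationship'` is `c**2` by the definition E = m c^2, while the table stores only a truncated decimal, so the two agree to well within the 1e-9 tolerance:

```python
# Reproduce the exact-value sanity check with plain floats: the exact
# kilogram-joule relationship is c**2 (E = m c^2), while the table entry
# "8.987 551 787..." is truncated after nine decimal digits.
c = 299792458.0
exact = c * c                 # J per kg, exact by definition
table = 8.987551787e16        # truncated table value
rel_err = abs(exact - table) / exact
print(rel_err)
```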
_tested_keys = ['natural unit of velocity',
'natural unit of action',
'natural unit of action in eV s',
'natural unit of mass',
'natural unit of energy',
'natural unit of energy in MeV',
'natural unit of mom.um',
'natural unit of mom.um in MeV/c',
'natural unit of length',
'natural unit of time']
# finally, insert aliases for values
for k, v in list(_aliases.items()):
if v in _current_constants or v in _tested_keys:
physical_constants[k] = physical_constants[v]
else:
del _aliases[k]
| 155,622
| 87.978273
| 124
|
py
|
scipy
|
scipy-main/scipy/constants/_constants.py
|
"""
Collection of physical constants and conversion factors.
Most constants are in SI units, so you can do
print('10 mile per minute is', 10*mile/minute, 'm/s or', 10*mile/(minute*knot), 'knots')
The list is not meant to be comprehensive, but just convenient for everyday use.
"""
from __future__ import annotations
import math as _math
from typing import TYPE_CHECKING, Any
from ._codata import value as _cd
import numpy as _np
if TYPE_CHECKING:
import numpy.typing as npt
"""
BasSw 2006
physical constants: imported from CODATA
unit conversion: see e.g., NIST special publication 811
Use at own risk: double-check values before calculating your Mars orbit-insertion burn.
Some constants exist in a few variants, which are marked with suffixes.
The ones without any suffix should be the most common ones.
"""
__all__ = [
'Avogadro', 'Boltzmann', 'Btu', 'Btu_IT', 'Btu_th', 'G',
'Julian_year', 'N_A', 'Planck', 'R', 'Rydberg',
'Stefan_Boltzmann', 'Wien', 'acre', 'alpha',
'angstrom', 'arcmin', 'arcminute', 'arcsec',
'arcsecond', 'astronomical_unit', 'atm',
'atmosphere', 'atomic_mass', 'atto', 'au', 'bar',
'barrel', 'bbl', 'blob', 'c', 'calorie',
'calorie_IT', 'calorie_th', 'carat', 'centi',
'convert_temperature', 'day', 'deci', 'degree',
'degree_Fahrenheit', 'deka', 'dyn', 'dyne', 'e',
'eV', 'electron_mass', 'electron_volt',
'elementary_charge', 'epsilon_0', 'erg',
'exa', 'exbi', 'femto', 'fermi', 'fine_structure',
'fluid_ounce', 'fluid_ounce_US', 'fluid_ounce_imp',
'foot', 'g', 'gallon', 'gallon_US', 'gallon_imp',
'gas_constant', 'gibi', 'giga', 'golden', 'golden_ratio',
'grain', 'gram', 'gravitational_constant', 'h', 'hbar',
'hectare', 'hecto', 'horsepower', 'hour', 'hp',
'inch', 'k', 'kgf', 'kibi', 'kilo', 'kilogram_force',
'kmh', 'knot', 'lambda2nu', 'lb', 'lbf',
'light_year', 'liter', 'litre', 'long_ton', 'm_e',
'm_n', 'm_p', 'm_u', 'mach', 'mebi', 'mega',
'metric_ton', 'micro', 'micron', 'mil', 'mile',
'milli', 'minute', 'mmHg', 'mph', 'mu_0', 'nano',
'nautical_mile', 'neutron_mass', 'nu2lambda',
'ounce', 'oz', 'parsec', 'pebi', 'peta',
'pi', 'pico', 'point', 'pound', 'pound_force',
'proton_mass', 'psi', 'pt', 'quecto', 'quetta', 'ronna', 'ronto',
'short_ton', 'sigma', 'slinch', 'slug', 'speed_of_light',
'speed_of_sound', 'stone', 'survey_foot',
'survey_mile', 'tebi', 'tera', 'ton_TNT',
'torr', 'troy_ounce', 'troy_pound', 'u',
'week', 'yard', 'year', 'yobi', 'yocto',
'yotta', 'zebi', 'zepto', 'zero_Celsius', 'zetta'
]
# mathematical constants
pi = _math.pi
golden = golden_ratio = (1 + _math.sqrt(5)) / 2
# SI prefixes
quetta = 1e30
ronna = 1e27
yotta = 1e24
zetta = 1e21
exa = 1e18
peta = 1e15
tera = 1e12
giga = 1e9
mega = 1e6
kilo = 1e3
hecto = 1e2
deka = 1e1
deci = 1e-1
centi = 1e-2
milli = 1e-3
micro = 1e-6
nano = 1e-9
pico = 1e-12
femto = 1e-15
atto = 1e-18
zepto = 1e-21
yocto = 1e-24
ronto = 1e-27
quecto = 1e-30
# binary prefixes
kibi = 2**10
mebi = 2**20
gibi = 2**30
tebi = 2**40
pebi = 2**50
exbi = 2**60
zebi = 2**70
yobi = 2**80
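The SI and binary prefixes defined above are ordinary numeric factors, so mixing them is plain arithmetic; for example, expressing 3 GiB in decimal gigabytes:

```python
# The prefixes are ordinary numbers; e.g. 3 GiB in decimal gigabytes.
gibi = 2**30
giga = 1e9
print(3 * gibi / giga)   # ~3.221 GB
```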
# physical constants
c = speed_of_light = _cd('speed of light in vacuum')
mu_0 = _cd('vacuum mag. permeability')
epsilon_0 = _cd('vacuum electric permittivity')
h = Planck = _cd('Planck constant')
hbar = h / (2 * pi)
G = gravitational_constant = _cd('Newtonian constant of gravitation')
g = _cd('standard acceleration of gravity')
e = elementary_charge = _cd('elementary charge')
R = gas_constant = _cd('molar gas constant')
alpha = fine_structure = _cd('fine-structure constant')
N_A = Avogadro = _cd('Avogadro constant')
k = Boltzmann = _cd('Boltzmann constant')
sigma = Stefan_Boltzmann = _cd('Stefan-Boltzmann constant')
Wien = _cd('Wien wavelength displacement law constant')
Rydberg = _cd('Rydberg constant')
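As a usage sketch of the constants above (the exact CODATA values are inlined here so the snippet stands alone): the energy of a 500 nm photon via E = h c / lambda, converted to electronvolts:

```python
# Photon energy from wavelength, E = h*c/lambda, then converted to eV.
# Values inlined (all three are exact in the 2019 SI redefinition).
h = 6.62607015e-34      # Planck constant, J Hz^-1
c = 299792458.0         # speed of light, m s^-1
e = 1.602176634e-19     # elementary charge, C; also 1 eV in joules
wavelength = 500e-9     # 500 nm, green light
E_joule = h * c / wavelength
E_eV = E_joule / e
print(E_eV)             # ~2.48 eV
```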
# mass in kg
gram = 1e-3
metric_ton = 1e3
grain = 64.79891e-6
lb = pound = 7000 * grain # avoirdupois
blob = slinch = pound * g / 0.0254 # lbf*s**2/in (added in 1.0.0)
slug = blob / 12 # lbf*s**2/foot (added in 1.0.0)
oz = ounce = pound / 16
stone = 14 * pound
long_ton = 2240 * pound
short_ton = 2000 * pound
troy_ounce = 480 * grain # only for metals / gems
troy_pound = 12 * troy_ounce
carat = 200e-6
m_e = electron_mass = _cd('electron mass')
m_p = proton_mass = _cd('proton mass')
m_n = neutron_mass = _cd('neutron mass')
m_u = u = atomic_mass = _cd('atomic mass constant')
# angle in rad
degree = pi / 180
arcmin = arcminute = degree / 60
arcsec = arcsecond = arcmin / 60
# time in second
minute = 60.0
hour = 60 * minute
day = 24 * hour
week = 7 * day
year = 365 * day
Julian_year = 365.25 * day
# length in meter
inch = 0.0254
foot = 12 * inch
yard = 3 * foot
mile = 1760 * yard
mil = inch / 1000
pt = point = inch / 72 # typography
survey_foot = 1200.0 / 3937
survey_mile = 5280 * survey_foot
nautical_mile = 1852.0
fermi = 1e-15
angstrom = 1e-10
micron = 1e-6
au = astronomical_unit = 149597870700.0
light_year = Julian_year * c
parsec = au / arcsec
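The parsec definition above (`au / arcsec`) can be checked numerically against the accepted value of about 3.0857e16 m:

```python
import math

# parsec = au / (1 arcsecond in radians), as defined above
au = 149597870700.0                # astronomical unit, m (exact, IAU 2012)
arcsec = math.pi / (180 * 3600)    # 1 arcsecond in radians
parsec = au / arcsec
print(parsec)                      # ~3.0857e16 m
```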
# pressure in pascal
atm = atmosphere = _cd('standard atmosphere')
bar = 1e5
torr = mmHg = atm / 760
psi = pound * g / (inch * inch)
# area in meter**2
hectare = 1e4
acre = 43560 * foot**2
# volume in meter**3
litre = liter = 1e-3
gallon = gallon_US = 231 * inch**3 # US
# pint = gallon_US / 8
fluid_ounce = fluid_ounce_US = gallon_US / 128
bbl = barrel = 42 * gallon_US # for oil
gallon_imp = 4.54609e-3 # UK
fluid_ounce_imp = gallon_imp / 160
# speed in meter per second
kmh = 1e3 / hour
mph = mile / hour
mach = speed_of_sound = 340.5  # approx. speed of sound in dry air at 15 degrees Celsius and 1 atm
knot = nautical_mile / hour
# temperature in kelvin
zero_Celsius = 273.15
degree_Fahrenheit = 1/1.8 # only for differences
# energy in joule
eV = electron_volt = elementary_charge # * 1 Volt
calorie = calorie_th = 4.184
calorie_IT = 4.1868
erg = 1e-7
Btu_th = pound * degree_Fahrenheit * calorie_th / gram
Btu = Btu_IT = pound * degree_Fahrenheit * calorie_IT / gram
ton_TNT = 1e9 * calorie_th
# Wh = watt_hour
# power in watt
hp = horsepower = 550 * foot * pound * g
# force in newton
dyn = dyne = 1e-5
lbf = pound_force = pound * g
kgf = kilogram_force = g # * 1 kg
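Every unit above is just a factor to SI, so conversion is multiplication and division. A self-contained sketch (factors re-derived inline exactly as above) converting 60 mph to m/s and 1 psi to Pa:

```python
# Derived-unit chains as plain arithmetic: each name is its SI value.
inch = 0.0254
foot = 12 * inch
mile = 1760 * 3 * foot       # 1760 yards of 3 feet
hour = 3600.0
mph = mile / hour
grain = 64.79891e-6
pound = 7000 * grain
g = 9.80665                  # standard gravity, m s^-2
psi = pound * g / (inch * inch)
print(60 * mph, psi)         # 60 mph in m/s; 1 psi in Pa
```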
# functions for conversions that are not linear
def convert_temperature(
val: npt.ArrayLike,
old_scale: str,
new_scale: str,
) -> Any:
"""
Convert from a temperature scale to another one among Celsius, Kelvin,
Fahrenheit, and Rankine scales.
Parameters
----------
val : array_like
Value(s) of the temperature(s) to be converted expressed in the
original scale.
old_scale : str
Specifies as a string the original scale from which the temperature
value(s) will be converted. Supported scales are Celsius ('Celsius',
'celsius', 'C' or 'c'), Kelvin ('Kelvin', 'kelvin', 'K', 'k'),
Fahrenheit ('Fahrenheit', 'fahrenheit', 'F' or 'f'), and Rankine
('Rankine', 'rankine', 'R', 'r').
new_scale : str
Specifies as a string the new scale to which the temperature
value(s) will be converted. Supported scales are Celsius ('Celsius',
'celsius', 'C' or 'c'), Kelvin ('Kelvin', 'kelvin', 'K', 'k'),
Fahrenheit ('Fahrenheit', 'fahrenheit', 'F' or 'f'), and Rankine
('Rankine', 'rankine', 'R', 'r').
Returns
-------
res : float or array of floats
Value(s) of the converted temperature(s) expressed in the new scale.
Notes
-----
.. versionadded:: 0.18.0
Examples
--------
>>> from scipy.constants import convert_temperature
>>> import numpy as np
>>> convert_temperature(np.array([-40, 40]), 'Celsius', 'Kelvin')
array([ 233.15, 313.15])
"""
# Convert from `old_scale` to Kelvin
if old_scale.lower() in ['celsius', 'c']:
tempo = _np.asanyarray(val) + zero_Celsius
elif old_scale.lower() in ['kelvin', 'k']:
tempo = _np.asanyarray(val)
elif old_scale.lower() in ['fahrenheit', 'f']:
tempo = (_np.asanyarray(val) - 32) * 5 / 9 + zero_Celsius
elif old_scale.lower() in ['rankine', 'r']:
tempo = _np.asanyarray(val) * 5 / 9
else:
        raise NotImplementedError("'%s' scale is unsupported: supported "
                                  "scales are 'Celsius', 'Kelvin', "
                                  "'Fahrenheit', and 'Rankine'" % old_scale)
# and from Kelvin to `new_scale`.
if new_scale.lower() in ['celsius', 'c']:
res = tempo - zero_Celsius
elif new_scale.lower() in ['kelvin', 'k']:
res = tempo
elif new_scale.lower() in ['fahrenheit', 'f']:
res = (tempo - zero_Celsius) * 9 / 5 + 32
elif new_scale.lower() in ['rankine', 'r']:
res = tempo * 9 / 5
else:
raise NotImplementedError("'%s' scale is unsupported: supported "
"scales are 'Celsius', 'Kelvin', "
"'Fahrenheit', and 'Rankine'" % new_scale)
return res
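A minimal scalar version of the two-step conversion above: every input scale is first mapped to Kelvin, then Kelvin is mapped to the target scale (the numpy handling is omitted here):

```python
# Scalar Fahrenheit -> Celsius via the same Kelvin pivot used above.
zero_Celsius = 273.15

def fahrenheit_to_celsius(f):
    kelvin = (f - 32) * 5 / 9 + zero_Celsius   # Fahrenheit -> Kelvin
    return kelvin - zero_Celsius               # Kelvin -> Celsius

print(fahrenheit_to_celsius(212.0))   # boiling point of water
```

Pivoting through Kelvin keeps the function at 2n branches for n scales instead of n**2 pairwise formulas.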
# optics
def lambda2nu(lambda_: npt.ArrayLike) -> Any:
"""
Convert wavelength to optical frequency
Parameters
----------
lambda_ : array_like
Wavelength(s) to be converted.
Returns
-------
nu : float or array of floats
Equivalent optical frequency.
Notes
-----
Computes ``nu = c / lambda`` where c = 299792458.0, i.e., the
(vacuum) speed of light in meters/second.
Examples
--------
>>> from scipy.constants import lambda2nu, speed_of_light
>>> import numpy as np
>>> lambda2nu(np.array((1, speed_of_light)))
    array([2.99792458e+08, 1.00000000e+00])
"""
return c / _np.asanyarray(lambda_)
def nu2lambda(nu: npt.ArrayLike) -> Any:
"""
Convert optical frequency to wavelength.
Parameters
----------
nu : array_like
Optical frequency to be converted.
Returns
-------
lambda : float or array of floats
Equivalent wavelength(s).
Notes
-----
Computes ``lambda = c / nu`` where c = 299792458.0, i.e., the
(vacuum) speed of light in meters/second.
Examples
--------
>>> from scipy.constants import nu2lambda, speed_of_light
>>> import numpy as np
>>> nu2lambda(np.array((1, speed_of_light)))
    array([2.99792458e+08, 1.00000000e+00])
"""
return c / _np.asanyarray(nu)
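`lambda2nu` and `nu2lambda` above are the same map (x -> c / x), so each inverts the other; a scalar check without numpy:

```python
# c / x is an involution: applying it twice returns the input.
c = 299792458.0
lambda_ = 1550e-9            # telecom C-band wavelength, m
nu = c / lambda_             # lambda2nu
roundtrip = c / nu           # nu2lambda
print(nu, roundtrip)
```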
| 10,369
| 27.646409
| 93
|
py
|
scipy
|
scipy-main/scipy/constants/tests/test_constants.py
|
from numpy.testing import assert_equal, assert_allclose
import scipy.constants as sc
def test_convert_temperature():
assert_equal(sc.convert_temperature(32, 'f', 'Celsius'), 0)
assert_equal(sc.convert_temperature([0, 0], 'celsius', 'Kelvin'),
[273.15, 273.15])
assert_equal(sc.convert_temperature([0, 0], 'kelvin', 'c'),
[-273.15, -273.15])
assert_equal(sc.convert_temperature([32, 32], 'f', 'k'), [273.15, 273.15])
assert_equal(sc.convert_temperature([273.15, 273.15], 'kelvin', 'F'),
[32, 32])
assert_equal(sc.convert_temperature([0, 0], 'C', 'fahrenheit'), [32, 32])
assert_allclose(sc.convert_temperature([0, 0], 'c', 'r'), [491.67, 491.67],
rtol=0., atol=1e-13)
assert_allclose(sc.convert_temperature([491.67, 491.67], 'Rankine', 'C'),
[0., 0.], rtol=0., atol=1e-13)
assert_allclose(sc.convert_temperature([491.67, 491.67], 'r', 'F'),
[32., 32.], rtol=0., atol=1e-13)
assert_allclose(sc.convert_temperature([32, 32], 'fahrenheit', 'R'),
[491.67, 491.67], rtol=0., atol=1e-13)
assert_allclose(sc.convert_temperature([273.15, 273.15], 'K', 'R'),
[491.67, 491.67], rtol=0., atol=1e-13)
assert_allclose(sc.convert_temperature([491.67, 0.], 'rankine', 'kelvin'),
[273.15, 0.], rtol=0., atol=1e-13)
def test_lambda_to_nu():
assert_equal(sc.lambda2nu([sc.speed_of_light, 1]), [1, sc.speed_of_light])
def test_nu_to_lambda():
assert_equal(sc.nu2lambda([sc.speed_of_light, 1]), [1, sc.speed_of_light])
| 1,632
| 44.361111
| 79
|
py
|
scipy
|
scipy-main/scipy/constants/tests/__init__.py
| 0
| 0
| 0
|
py
|
|
scipy
|
scipy-main/scipy/constants/tests/test_codata.py
|
from scipy.constants import find, value, ConstantWarning, c, speed_of_light
from numpy.testing import (assert_equal, assert_, assert_almost_equal,
suppress_warnings)
import scipy.constants._codata as _cd
def test_find():
keys = find('weak mixing', disp=False)
assert_equal(keys, ['weak mixing angle'])
keys = find('qwertyuiop', disp=False)
assert_equal(keys, [])
keys = find('natural unit', disp=False)
assert_equal(keys, sorted(['natural unit of velocity',
'natural unit of action',
'natural unit of action in eV s',
'natural unit of mass',
'natural unit of energy',
'natural unit of energy in MeV',
'natural unit of momentum',
'natural unit of momentum in MeV/c',
'natural unit of length',
'natural unit of time']))
def test_basic_table_parse():
c_s = 'speed of light in vacuum'
assert_equal(value(c_s), c)
assert_equal(value(c_s), speed_of_light)
def test_basic_lookup():
assert_equal('%d %s' % (_cd.c, _cd.unit('speed of light in vacuum')),
'299792458 m s^-1')
def test_find_all():
assert_(len(find(disp=False)) > 300)
def test_find_single():
assert_equal(find('Wien freq', disp=False)[0],
'Wien frequency displacement law constant')
def test_2002_vs_2006():
assert_almost_equal(value('magn. flux quantum'),
value('mag. flux quantum'))
def test_exact_values():
# Check that updating stored values with exact ones worked.
with suppress_warnings() as sup:
sup.filter(ConstantWarning)
for key in _cd.exact_values:
assert_((_cd.exact_values[key][0] - value(key)) / value(key) == 0)
| 1,959
| 32.793103
| 78
|
py
|
scipy
|
scipy-main/scipy/interpolate/_interpnd_info.py
|
"""
Here we perform some symbolic computations required for the N-D
interpolation routines in `interpnd.pyx`.
"""
from sympy import symbols, binomial, Matrix
def _estimate_gradients_2d_global():
    #
    # Compute the matrices A and B of the quadratic form obtained by
    # integrating w''(x)**2 over [0, 1], where w is the cubic Bernstein
    # polynomial below with end values f1, f2 and end derivatives
    # df1, -df2. Minimizing that integral over the end derivatives
    # yields the global gradient estimate used in interpnd.pyx.
    #
f1, f2, df1, df2, x = symbols(['f1', 'f2', 'df1', 'df2', 'x'])
c = [f1, (df1 + 3*f1)/3, (df2 + 3*f2)/3, f2]
w = 0
for k in range(4):
w += binomial(3, k) * c[k] * x**k*(1-x)**(3-k)
wpp = w.diff(x, 2).expand()
intwpp2 = (wpp**2).integrate((x, 0, 1)).expand()
A = Matrix([[intwpp2.coeff(df1**2), intwpp2.coeff(df1*df2)/2],
[intwpp2.coeff(df1*df2)/2, intwpp2.coeff(df2**2)]])
B = Matrix([[intwpp2.coeff(df1).subs(df2, 0)],
[intwpp2.coeff(df2).subs(df1, 0)]]) / 2
print("A")
print(A)
print("B")
print(B)
print("solution")
print(A.inv() * B)
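A numeric sketch (plain Python, no sympy) of the Bernstein-form cubic used above: with control points `c = [f1, (df1 + 3*f1)/3, (df2 + 3*f2)/3, f2]`, the curve interpolates f1 and f2 at the ends, with end slopes df1 and -df2:

```python
from math import comb

# Numeric check of the Bernstein cubic: w(0) = f1, w(1) = f2,
# w'(0) = df1, w'(1) = -df2 (sample values chosen arbitrarily).
f1, f2, df1, df2 = 1.0, 2.0, 0.5, -0.25
c = [f1, (df1 + 3*f1)/3, (df2 + 3*f2)/3, f2]

def w(x):
    return sum(comb(3, k) * c[k] * x**k * (1 - x)**(3 - k) for k in range(4))

eps = 1e-6
print(w(0.0), w(1.0))                 # endpoint values f1, f2
print((w(eps) - w(0.0)) / eps)        # ~ df1
print((w(1.0) - w(1 - eps)) / eps)    # ~ -df2
```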
| 869
| 21.894737
| 67
|
py
|
scipy
|
scipy-main/scipy/interpolate/fitpack2.py
|
# This file is not meant for public use and will be removed in SciPy v2.0.0.
# Use the `scipy.interpolate` namespace for importing the functions
# included below.
import warnings
from . import _fitpack2
__all__ = [ # noqa: F822
'BivariateSpline',
'InterpolatedUnivariateSpline',
'LSQBivariateSpline',
'LSQSphereBivariateSpline',
'LSQUnivariateSpline',
'RectBivariateSpline',
'RectSphereBivariateSpline',
'SmoothBivariateSpline',
'SmoothSphereBivariateSpline',
'SphereBivariateSpline',
'UnivariateSpline',
'array',
'concatenate',
'dfitpack',
'dfitpack_int',
'diff',
'ones',
'ravel',
'zeros',
]
def __dir__():
return __all__
def __getattr__(name):
if name not in __all__:
raise AttributeError(
"scipy.interpolate.fitpack2 is deprecated and has no attribute "
f"{name}. Try looking in scipy.interpolate instead.")
warnings.warn(f"Please use `{name}` from the `scipy.interpolate` namespace, "
"the `scipy.interpolate.fitpack2` namespace is deprecated.",
category=DeprecationWarning, stacklevel=2)
return getattr(_fitpack2, name)
| 1,195
| 24.446809
| 81
|
py
|
scipy
|
scipy-main/scipy/interpolate/_fitpack_py.py
|
__all__ = ['splrep', 'splprep', 'splev', 'splint', 'sproot', 'spalde',
'bisplrep', 'bisplev', 'insert', 'splder', 'splantider']
import numpy as np
# These are in the API for fitpack even if not used in fitpack.py itself.
from ._fitpack_impl import bisplrep, bisplev, dblint # noqa: F401
from . import _fitpack_impl as _impl
from ._bsplines import BSpline
def splprep(x, w=None, u=None, ub=None, ue=None, k=3, task=0, s=None, t=None,
full_output=0, nest=None, per=0, quiet=1):
"""
Find the B-spline representation of an N-D curve.
Given a list of N rank-1 arrays, `x`, which represent a curve in
N-dimensional space parametrized by `u`, find a smooth approximating
spline curve g(`u`). Uses the FORTRAN routine parcur from FITPACK.
Parameters
----------
x : array_like
A list of sample vector arrays representing the curve.
w : array_like, optional
Strictly positive rank-1 array of weights the same length as `x[0]`.
The weights are used in computing the weighted least-squares spline
fit. If the errors in the `x` values have standard-deviation given by
the vector d, then `w` should be 1/d. Default is ``ones(len(x[0]))``.
u : array_like, optional
        An array of parameter values. If not given, these values are
        calculated automatically, with ``M = len(x[0])``, as::

            v[0] = 0
            v[i] = v[i-1] + distance(x[i], x[i-1])
            u[i] = v[i] / v[M-1]
ub, ue : int, optional
The end-points of the parameters interval. Defaults to
u[0] and u[-1].
k : int, optional
Degree of the spline. Cubic splines are recommended.
Even values of `k` should be avoided especially with a small s-value.
``1 <= k <= 5``, default is 3.
task : int, optional
If task==0 (default), find t and c for a given smoothing factor, s.
If task==1, find t and c for another value of the smoothing factor, s.
There must have been a previous call with task=0 or task=1
for the same set of data.
If task=-1 find the weighted least square spline for a given set of
knots, t.
s : float, optional
A smoothing condition. The amount of smoothness is determined by
satisfying the conditions: ``sum((w * (y - g))**2,axis=0) <= s``,
where g(x) is the smoothed interpolation of (x,y). The user can
use `s` to control the trade-off between closeness and smoothness
of fit. Larger `s` means more smoothing while smaller values of `s`
indicate less smoothing. Recommended values of `s` depend on the
weights, w. If the weights represent the inverse of the
standard-deviation of y, then a good `s` value should be found in
the range ``(m-sqrt(2*m),m+sqrt(2*m))``, where m is the number of
data points in x, y, and w.
t : array, optional
The knots needed for ``task=-1``.
There must be at least ``2*k+2`` knots.
full_output : int, optional
If non-zero, then return optional outputs.
    nest : int, optional
        An over-estimate of the total number of knots of the spline to
        help in determining the storage space. By default nest=m/2.
        ``nest=m+k+1`` is always large enough.
per : int, optional
If non-zero, data points are considered periodic with period
``x[m-1] - x[0]`` and a smooth periodic spline approximation is
returned. Values of ``y[m-1]`` and ``w[m-1]`` are not used.
quiet : int, optional
Non-zero to suppress messages.
Returns
-------
tck : tuple
A tuple, ``(t,c,k)`` containing the vector of knots, the B-spline
coefficients, and the degree of the spline.
u : array
An array of the values of the parameter.
fp : float
The weighted sum of squared residuals of the spline approximation.
    ier : int
        An integer flag about splprep success. Success is indicated
        if ier<=0. If ier in [1,2,3] an error occurred but was not raised.
Otherwise an error is raised.
msg : str
A message corresponding to the integer flag, ier.
See Also
--------
splrep, splev, sproot, spalde, splint,
bisplrep, bisplev
UnivariateSpline, BivariateSpline
BSpline
make_interp_spline
Notes
-----
See `splev` for evaluation of the spline and its derivatives.
The number of dimensions N must be smaller than 11.
The number of coefficients in the `c` array is ``k+1`` less than the number
of knots, ``len(t)``. This is in contrast with `splrep`, which zero-pads
the array of coefficients to have the same length as the array of knots.
These additional coefficients are ignored by evaluation routines, `splev`
and `BSpline`.
References
----------
    .. [1] P. Dierckx, "Algorithms for smoothing data with periodic and
        parametric splines", Computer Graphics and Image Processing,
        20 (1982) 171-184.
.. [2] P. Dierckx, "Algorithms for smoothing data with periodic and
parametric splines", report tw55, Dept. Computer Science,
K.U.Leuven, 1981.
.. [3] P. Dierckx, "Curve and surface fitting with splines", Monographs on
Numerical Analysis, Oxford University Press, 1993.
Examples
--------
Generate a discretization of a limacon curve in the polar coordinates:
>>> import numpy as np
>>> phi = np.linspace(0, 2.*np.pi, 40)
>>> r = 0.5 + np.cos(phi) # polar coords
>>> x, y = r * np.cos(phi), r * np.sin(phi) # convert to cartesian
And interpolate:
>>> from scipy.interpolate import splprep, splev
>>> tck, u = splprep([x, y], s=0)
>>> new_points = splev(u, tck)
Notice that (i) we force interpolation by using `s=0`,
(ii) the parameterization, ``u``, is generated automatically.
Now plot the result:
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> ax.plot(x, y, 'ro')
>>> ax.plot(new_points[0], new_points[1], 'r-')
>>> plt.show()
"""
res = _impl.splprep(x, w, u, ub, ue, k, task, s, t, full_output, nest, per,
quiet)
return res
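The default parameterization described for `u` above (normalized cumulative chord length) can be reproduced in plain NumPy. ``chord_length_param`` is an illustrative helper, not part of FITPACK:

```python
import numpy as np

def chord_length_param(x):
    """Default splprep parameterization: normalized cumulative chord length."""
    pts = np.asarray(x, dtype=float).T                  # (M, N) sample points
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # distance(x[i], x[i-1])
    v = np.concatenate(([0.0], np.cumsum(seg)))         # v[0] = 0, cumulative sums
    return v / v[-1]                                    # u[i] = v[i] / v[M-1]

# A 3-4-5 right-angle polyline: chord lengths 3 and 4, total length 7.
u = chord_length_param([[0.0, 3.0, 3.0], [0.0, 0.0, 4.0]])
print(u)  # [0.0, 3/7, 1.0]
```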
def splrep(x, y, w=None, xb=None, xe=None, k=3, task=0, s=None, t=None,
full_output=0, per=0, quiet=1):
"""
Find the B-spline representation of a 1-D curve.
Given the set of data points ``(x[i], y[i])`` determine a smooth spline
approximation of degree k on the interval ``xb <= x <= xe``.
Parameters
----------
x, y : array_like
The data points defining a curve y = f(x).
w : array_like, optional
Strictly positive rank-1 array of weights the same length as x and y.
The weights are used in computing the weighted least-squares spline
fit. If the errors in the y values have standard-deviation given by the
vector d, then w should be 1/d. Default is ones(len(x)).
xb, xe : float, optional
The interval to fit. If None, these default to x[0] and x[-1]
respectively.
k : int, optional
The degree of the spline fit. It is recommended to use cubic splines.
Even values of k should be avoided especially with small s values.
1 <= k <= 5
task : {1, 0, -1}, optional
If task==0 find t and c for a given smoothing factor, s.
If task==1 find t and c for another value of the smoothing factor, s.
There must have been a previous call with task=0 or task=1 for the same
        set of data (t will be stored and used internally).
If task=-1 find the weighted least square spline for a given set of
knots, t. These should be interior knots as knots on the ends will be
added automatically.
s : float, optional
A smoothing condition. The amount of smoothness is determined by
satisfying the conditions: ``sum((w * (y - g))**2,axis=0) <= s`` where g(x)
is the smoothed interpolation of (x,y). The user can use s to control
the tradeoff between closeness and smoothness of fit. Larger s means
more smoothing while smaller values of s indicate less smoothing.
Recommended values of s depend on the weights, w. If the weights
represent the inverse of the standard-deviation of y, then a good s
value should be found in the range (m-sqrt(2*m),m+sqrt(2*m)) where m is
the number of datapoints in x, y, and w. default : s=m-sqrt(2*m) if
weights are supplied. s = 0.0 (interpolating) if no weights are
supplied.
t : array_like, optional
The knots needed for task=-1. If given then task is automatically set
to -1.
full_output : bool, optional
If non-zero, then return optional outputs.
per : bool, optional
If non-zero, data points are considered periodic with period x[m-1] -
x[0] and a smooth periodic spline approximation is returned. Values of
y[m-1] and w[m-1] are not used.
quiet : bool, optional
Non-zero to suppress messages.
Returns
-------
tck : tuple
A tuple (t,c,k) containing the vector of knots, the B-spline
coefficients, and the degree of the spline.
fp : array, optional
The weighted sum of squared residuals of the spline approximation.
ier : int, optional
An integer flag about splrep success. Success is indicated if ier<=0.
If ier in [1,2,3] an error occurred but was not raised. Otherwise an
error is raised.
msg : str, optional
A message corresponding to the integer flag, ier.
See Also
--------
UnivariateSpline, BivariateSpline
splprep, splev, sproot, spalde, splint
bisplrep, bisplev
BSpline
make_interp_spline
Notes
-----
See `splev` for evaluation of the spline and its derivatives. Uses the
FORTRAN routine ``curfit`` from FITPACK.
The user is responsible for assuring that the values of `x` are unique.
Otherwise, `splrep` will not return sensible results.
If provided, knots `t` must satisfy the Schoenberg-Whitney conditions,
i.e., there must be a subset of data points ``x[j]`` such that
``t[j] < x[j] < t[j+k+1]``, for ``j=0, 1,...,n-k-2``.
This routine zero-pads the coefficients array ``c`` to have the same length
as the array of knots ``t`` (the trailing ``k + 1`` coefficients are ignored
by the evaluation routines, `splev` and `BSpline`.) This is in contrast with
`splprep`, which does not zero-pad the coefficients.
References
----------
Based on algorithms described in [1]_, [2]_, [3]_, and [4]_:
.. [1] P. Dierckx, "An algorithm for smoothing, differentiation and
integration of experimental data using spline functions",
J.Comp.Appl.Maths 1 (1975) 165-184.
.. [2] P. Dierckx, "A fast algorithm for smoothing data on a rectangular
grid while using spline functions", SIAM J.Numer.Anal. 19 (1982)
1286-1304.
.. [3] P. Dierckx, "An improved algorithm for curve fitting with spline
functions", report tw54, Dept. Computer Science,K.U. Leuven, 1981.
.. [4] P. Dierckx, "Curve and surface fitting with splines", Monographs on
Numerical Analysis, Oxford University Press, 1993.
Examples
--------
You can interpolate 1-D points with a B-spline curve.
Further examples are given in
:ref:`in the tutorial <tutorial-interpolate_splXXX>`.
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.interpolate import splev, splrep
>>> x = np.linspace(0, 10, 10)
>>> y = np.sin(x)
>>> spl = splrep(x, y)
>>> x2 = np.linspace(0, 10, 200)
>>> y2 = splev(x2, spl)
>>> plt.plot(x, y, 'o', x2, y2)
>>> plt.show()
"""
res = _impl.splrep(x, y, w, xb, xe, k, task, s, t, full_output, per, quiet)
return res
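The recommended range for the smoothing factor `s` quoted in the docstring, ``(m - sqrt(2*m), m + sqrt(2*m))``, is easy to evaluate for a given sample size. A small helper (illustrative, valid when the weights are the inverse standard deviations of ``y``):

```python
from math import sqrt

def recommended_s_range(m):
    # m: number of data points in x, y, and w
    half_width = sqrt(2 * m)
    return m - half_width, m + half_width

lo, hi = recommended_s_range(100)
print(round(lo, 2), round(hi, 2))  # 85.86 114.14
```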
def splev(x, tck, der=0, ext=0):
"""
Evaluate a B-spline or its derivatives.
Given the knots and coefficients of a B-spline representation, evaluate
the value of the smoothing polynomial and its derivatives. This is a
wrapper around the FORTRAN routines splev and splder of FITPACK.
Parameters
----------
x : array_like
An array of points at which to return the value of the smoothed
spline or its derivatives. If `tck` was returned from `splprep`,
then the parameter values, u should be given.
tck : 3-tuple or a BSpline object
If a tuple, then it should be a sequence of length 3 returned by
`splrep` or `splprep` containing the knots, coefficients, and degree
of the spline. (Also see Notes.)
der : int, optional
The order of derivative of the spline to compute (must be less than
or equal to k, the degree of the spline).
ext : int, optional
Controls the value returned for elements of ``x`` not in the
interval defined by the knot sequence.
        * if ext=0, return the extrapolated value.
        * if ext=1, return 0.
        * if ext=2, raise a ValueError.
        * if ext=3, return the boundary value.
The default value is 0.
Returns
-------
y : ndarray or list of ndarrays
An array of values representing the spline function evaluated at
the points in `x`. If `tck` was returned from `splprep`, then this
is a list of arrays representing the curve in an N-D space.
See Also
--------
splprep, splrep, sproot, spalde, splint
bisplrep, bisplev
BSpline
Notes
-----
Manipulating the tck-tuples directly is not recommended. In new code,
prefer using `BSpline` objects.
References
----------
.. [1] C. de Boor, "On calculating with b-splines", J. Approximation
Theory, 6, p.50-62, 1972.
.. [2] M. G. Cox, "The numerical evaluation of b-splines", J. Inst. Maths
Applics, 10, p.134-149, 1972.
.. [3] P. Dierckx, "Curve and surface fitting with splines", Monographs
on Numerical Analysis, Oxford University Press, 1993.
Examples
--------
Examples are given :ref:`in the tutorial <tutorial-interpolate_splXXX>`.
"""
if isinstance(tck, BSpline):
if tck.c.ndim > 1:
mesg = ("Calling splev() with BSpline objects with c.ndim > 1 is "
"not allowed. Use BSpline.__call__(x) instead.")
raise ValueError(mesg)
# remap the out-of-bounds behavior
try:
extrapolate = {0: True, }[ext]
except KeyError as e:
raise ValueError("Extrapolation mode %s is not supported "
"by BSpline." % ext) from e
return tck(x, der, extrapolate=extrapolate)
else:
return _impl.splev(x, tck, der, ext)
def splint(a, b, tck, full_output=0):
"""
Evaluate the definite integral of a B-spline between two given points.
Parameters
----------
a, b : float
The end-points of the integration interval.
tck : tuple or a BSpline instance
If a tuple, then it should be a sequence of length 3, containing the
vector of knots, the B-spline coefficients, and the degree of the
spline (see `splev`).
full_output : int, optional
Non-zero to return optional output.
Returns
-------
integral : float
The resulting integral.
wrk : ndarray
An array containing the integrals of the normalized B-splines
defined on the set of knots.
(Only returned if `full_output` is non-zero)
See Also
--------
splprep, splrep, sproot, spalde, splev
bisplrep, bisplev
BSpline
Notes
-----
`splint` silently assumes that the spline function is zero outside the data
interval (`a`, `b`).
Manipulating the tck-tuples directly is not recommended. In new code,
prefer using the `BSpline` objects.
References
----------
    .. [1] P.W. Gaffney, "The calculation of indefinite integrals of b-splines",
J. Inst. Maths Applics, 17, p.37-41, 1976.
.. [2] P. Dierckx, "Curve and surface fitting with splines", Monographs
on Numerical Analysis, Oxford University Press, 1993.
Examples
--------
Examples are given :ref:`in the tutorial <tutorial-interpolate_splXXX>`.
"""
if isinstance(tck, BSpline):
if tck.c.ndim > 1:
mesg = ("Calling splint() with BSpline objects with c.ndim > 1 is "
"not allowed. Use BSpline.integrate() instead.")
raise ValueError(mesg)
        if full_output != 0:
            import warnings  # local import: `warnings` is not imported at module level here
            mesg = ("full_output = %s is not supported. Proceeding as if "
                    "full_output = 0" % full_output)
            warnings.warn(mesg, stacklevel=2)
return tck.integrate(a, b, extrapolate=False)
else:
return _impl.splint(a, b, tck, full_output)
def sproot(tck, mest=10):
"""
Find the roots of a cubic B-spline.
Given the knots (>=8) and coefficients of a cubic B-spline return the
roots of the spline.
Parameters
----------
tck : tuple or a BSpline object
If a tuple, then it should be a sequence of length 3, containing the
vector of knots, the B-spline coefficients, and the degree of the
spline.
The number of knots must be >= 8, and the degree must be 3.
        The knots must be a monotonically increasing sequence.
mest : int, optional
An estimate of the number of zeros (Default is 10).
Returns
-------
zeros : ndarray
An array giving the roots of the spline.
See Also
--------
splprep, splrep, splint, spalde, splev
bisplrep, bisplev
BSpline
Notes
-----
Manipulating the tck-tuples directly is not recommended. In new code,
prefer using the `BSpline` objects.
References
----------
.. [1] C. de Boor, "On calculating with b-splines", J. Approximation
Theory, 6, p.50-62, 1972.
.. [2] M. G. Cox, "The numerical evaluation of b-splines", J. Inst. Maths
Applics, 10, p.134-149, 1972.
.. [3] P. Dierckx, "Curve and surface fitting with splines", Monographs
on Numerical Analysis, Oxford University Press, 1993.
Examples
--------
For some data, this method may miss a root. This happens when one of
the spline knots (which FITPACK places automatically) happens to
coincide with the true root. A workaround is to convert to `PPoly`,
which uses a different root-finding algorithm.
For example,
>>> x = [1.96, 1.97, 1.98, 1.99, 2.00, 2.01, 2.02, 2.03, 2.04, 2.05]
>>> y = [-6.365470e-03, -4.790580e-03, -3.204320e-03, -1.607270e-03,
... 4.440892e-16, 1.616930e-03, 3.243000e-03, 4.877670e-03,
... 6.520430e-03, 8.170770e-03]
>>> from scipy.interpolate import splrep, sproot, PPoly
>>> tck = splrep(x, y, s=0)
>>> sproot(tck)
array([], dtype=float64)
Converting to a PPoly object does find the roots at `x=2`:
>>> ppoly = PPoly.from_spline(tck)
>>> ppoly.roots(extrapolate=False)
array([2.])
Further examples are given :ref:`in the tutorial
<tutorial-interpolate_splXXX>`.
"""
if isinstance(tck, BSpline):
if tck.c.ndim > 1:
mesg = ("Calling sproot() with BSpline objects with c.ndim > 1 is "
"not allowed.")
raise ValueError(mesg)
t, c, k = tck.tck
# _impl.sproot expects the interpolation axis to be last, so roll it.
# NB: This transpose is a no-op if c is 1D.
sh = tuple(range(c.ndim))
c = c.transpose(sh[1:] + (0,))
return _impl.sproot((t, c, k), mest)
else:
return _impl.sproot(tck, mest)
def spalde(x, tck):
"""
Evaluate all derivatives of a B-spline.
Given the knots and coefficients of a cubic B-spline compute all
derivatives up to order k at a point (or set of points).
Parameters
----------
x : array_like
A point or a set of points at which to evaluate the derivatives.
Note that ``t(k) <= x <= t(n-k+1)`` must hold for each `x`.
tck : tuple
A tuple (t,c,k) containing the vector of knots,
the B-spline coefficients, and the degree of the spline.
Returns
-------
results : {ndarray, list of ndarrays}
An array (or a list of arrays) containing all derivatives
up to order k inclusive for each point `x`.
See Also
--------
splprep, splrep, splint, sproot, splev, bisplrep, bisplev,
UnivariateSpline, BivariateSpline
References
----------
.. [1] de Boor C : On calculating with b-splines, J. Approximation Theory
6 (1972) 50-62.
.. [2] Cox M.G. : The numerical evaluation of b-splines, J. Inst. Maths
applics 10 (1972) 134-149.
.. [3] Dierckx P. : Curve and surface fitting with splines, Monographs on
Numerical Analysis, Oxford University Press, 1993.
"""
if isinstance(tck, BSpline):
raise TypeError("spalde does not accept BSpline instances.")
else:
return _impl.spalde(x, tck)
def insert(x, tck, m=1, per=0):
"""
Insert knots into a B-spline.
Given the knots and coefficients of a B-spline representation, create a
new B-spline with a knot inserted `m` times at point `x`.
This is a wrapper around the FORTRAN routine insert of FITPACK.
Parameters
----------
x (u) : array_like
A 1-D point at which to insert a new knot(s). If `tck` was returned
from ``splprep``, then the parameter values, u should be given.
tck : a `BSpline` instance or a tuple
If tuple, then it is expected to be a tuple (t,c,k) containing
the vector of knots, the B-spline coefficients, and the degree of
the spline.
m : int, optional
The number of times to insert the given knot (its multiplicity).
Default is 1.
per : int, optional
If non-zero, the input spline is considered periodic.
Returns
-------
BSpline instance or a tuple
A new B-spline with knots t, coefficients c, and degree k.
``t(k+1) <= x <= t(n-k)``, where k is the degree of the spline.
In case of a periodic spline (``per != 0``) there must be
either at least k interior knots t(j) satisfying ``t(k+1)<t(j)<=x``
or at least k interior knots t(j) satisfying ``x<=t(j)<t(n-k)``.
A tuple is returned iff the input argument `tck` is a tuple, otherwise
a BSpline object is constructed and returned.
Notes
-----
Based on algorithms from [1]_ and [2]_.
Manipulating the tck-tuples directly is not recommended. In new code,
prefer using the `BSpline` objects.
References
----------
.. [1] W. Boehm, "Inserting new knots into b-spline curves.",
Computer Aided Design, 12, p.199-201, 1980.
    .. [2] P. Dierckx, "Curve and surface fitting with splines", Monographs on
        Numerical Analysis, Oxford University Press, 1993.
Examples
--------
You can insert knots into a B-spline.
>>> from scipy.interpolate import splrep, insert
>>> import numpy as np
>>> x = np.linspace(0, 10, 5)
>>> y = np.sin(x)
>>> tck = splrep(x, y)
>>> tck[0]
array([ 0., 0., 0., 0., 5., 10., 10., 10., 10.])
A knot is inserted:
>>> tck_inserted = insert(3, tck)
>>> tck_inserted[0]
array([ 0., 0., 0., 0., 3., 5., 10., 10., 10., 10.])
Some knots are inserted:
>>> tck_inserted2 = insert(8, tck, m=3)
>>> tck_inserted2[0]
array([ 0., 0., 0., 0., 5., 8., 8., 8., 10., 10., 10., 10.])
"""
if isinstance(tck, BSpline):
t, c, k = tck.tck
# FITPACK expects the interpolation axis to be last, so roll it over
# NB: if c array is 1D, transposes are no-ops
sh = tuple(range(c.ndim))
c = c.transpose(sh[1:] + (0,))
t_, c_, k_ = _impl.insert(x, (t, c, k), m, per)
# and roll the last axis back
c_ = np.asarray(c_)
c_ = c_.transpose((sh[-1],) + sh[:-1])
return BSpline(t_, c_, k_)
else:
return _impl.insert(x, tck, m, per)
def splder(tck, n=1):
"""
Compute the spline representation of the derivative of a given spline
Parameters
----------
tck : BSpline instance or a tuple of (t, c, k)
Spline whose derivative to compute
n : int, optional
Order of derivative to evaluate. Default: 1
Returns
-------
`BSpline` instance or tuple
Spline of order k2=k-n representing the derivative
of the input spline.
A tuple is returned iff the input argument `tck` is a tuple, otherwise
a BSpline object is constructed and returned.
See Also
--------
splantider, splev, spalde
BSpline
Notes
-----
.. versionadded:: 0.13.0
Examples
--------
This can be used for finding maxima of a curve:
>>> from scipy.interpolate import splrep, splder, sproot
>>> import numpy as np
>>> x = np.linspace(0, 10, 70)
>>> y = np.sin(x)
>>> spl = splrep(x, y, k=4)
Now, differentiate the spline and find the zeros of the
derivative. (NB: `sproot` only works for order 3 splines, so we
fit an order 4 spline):
>>> dspl = splder(spl)
>>> sproot(dspl) / np.pi
array([ 0.50000001, 1.5 , 2.49999998])
This agrees well with roots :math:`\\pi/2 + n\\pi` of
:math:`\\cos(x) = \\sin'(x)`.
"""
if isinstance(tck, BSpline):
return tck.derivative(n)
else:
return _impl.splder(tck, n)
def splantider(tck, n=1):
"""
Compute the spline for the antiderivative (integral) of a given spline.
Parameters
----------
tck : BSpline instance or a tuple of (t, c, k)
Spline whose antiderivative to compute
n : int, optional
Order of antiderivative to evaluate. Default: 1
Returns
-------
BSpline instance or a tuple of (t2, c2, k2)
Spline of order k2=k+n representing the antiderivative of the input
spline.
A tuple is returned iff the input argument `tck` is a tuple, otherwise
a BSpline object is constructed and returned.
See Also
--------
splder, splev, spalde
BSpline
Notes
-----
The `splder` function is the inverse operation of this function.
Namely, ``splder(splantider(tck))`` is identical to `tck`, modulo
rounding error.
.. versionadded:: 0.13.0
Examples
--------
>>> from scipy.interpolate import splrep, splder, splantider, splev
>>> import numpy as np
>>> x = np.linspace(0, np.pi/2, 70)
>>> y = 1 / np.sqrt(1 - 0.8*np.sin(x)**2)
>>> spl = splrep(x, y)
The derivative is the inverse operation of the antiderivative,
although some floating point error accumulates:
>>> splev(1.7, spl), splev(1.7, splder(splantider(spl)))
(array(2.1565429877197317), array(2.1565429877201865))
Antiderivative can be used to evaluate definite integrals:
>>> ispl = splantider(spl)
>>> splev(np.pi/2, ispl) - splev(0, ispl)
2.2572053588768486
This is indeed an approximation to the complete elliptic integral
:math:`K(m) = \\int_0^{\\pi/2} [1 - m\\sin^2 x]^{-1/2} dx`:
>>> from scipy.special import ellipk
>>> ellipk(0.8)
2.2572053268208538
"""
if isinstance(tck, BSpline):
return tck.antiderivative(n)
else:
return _impl.splantider(tck, n)
| 27,528 | 33.97967 | 83 | py |
| scipy | scipy-main/scipy/interpolate/setup.py |
import os
from os.path import join
def configuration(parent_package='',top_path=None):
from numpy.distutils.misc_util import Configuration
from scipy._build_utils import (get_f2py_int64_options,
ilp64_pre_build_hook,
uses_blas64, numpy_nodepr_api)
if uses_blas64():
# TODO: Note that fitpack does not use BLAS/LAPACK.
# The reason why we use 64-bit ints only in this case
# is because scipy._build_utils knows the 64-bit int
# flags for too few Fortran compilers, so we cannot turn
# this on by default.
pre_build_hook = ilp64_pre_build_hook
f2py_options = get_f2py_int64_options()
define_macros = [("HAVE_ILP64", None)]
else:
pre_build_hook = None
f2py_options = None
define_macros = []
config = Configuration('interpolate', parent_package, top_path)
fitpack_src = [join('fitpack', '*.f')]
config.add_library('fitpack', sources=fitpack_src,
_pre_build_hook=pre_build_hook)
config.add_extension('interpnd',
sources=['interpnd.c'])
config.add_extension('_rgi_cython',
sources=['_rgi_cython.c'])
config.add_extension('_ppoly',
sources=['_ppoly.c'])
config.add_extension('_bspl',
sources=['_bspl.c'],
depends=['src/__fitpack.h'])
config.add_extension('_fitpack',
sources=['src/_fitpackmodule.c'],
libraries=['fitpack'],
define_macros=define_macros + numpy_nodepr_api['define_macros'],
depends=(['src/__fitpack.h']
+ fitpack_src)
)
config.add_extension('dfitpack',
sources=['src/fitpack.pyf'],
libraries=['fitpack'],
define_macros=define_macros,
depends=fitpack_src,
f2py_options=f2py_options
)
if int(os.environ.get('SCIPY_USE_PYTHRAN', 1)):
from pythran.dist import PythranExtension
ext = PythranExtension(
'scipy.interpolate._rbfinterp_pythran',
sources=['scipy/interpolate/_rbfinterp_pythran.py'],
config=['compiler.blas=none']
)
config.ext_modules.append(ext)
config.add_data_dir('tests')
return config
if __name__ == '__main__':
from numpy.distutils.core import setup
setup(**configuration(top_path='').todict())
| 2,692 | 33.974026 | 89 | py |
| scipy | scipy-main/scipy/interpolate/rbf.py |
# This file is not meant for public use and will be removed in SciPy v2.0.0.
# Use the `scipy.interpolate` namespace for importing the functions
# included below.
import warnings
from . import _rbf
__all__ = [ # noqa: F822
'Rbf',
'cdist',
'linalg',
'pdist',
'squareform',
'xlogy',
]
def __dir__():
return __all__
def __getattr__(name):
if name not in __all__:
raise AttributeError(
"scipy.interpolate.rbf is deprecated and has no attribute "
f"{name}. Try looking in scipy.interpolate instead.")
warnings.warn(f"Please use `{name}` from the `scipy.interpolate` namespace, "
"the `scipy.interpolate.rbf` namespace is deprecated.",
category=DeprecationWarning, stacklevel=2)
return getattr(_rbf, name)
| 818 | 23.088235 | 81 | py |
| scipy | scipy-main/scipy/interpolate/_rbfinterp.py |
"""Module for RBF interpolation."""
import warnings
from itertools import combinations_with_replacement
import numpy as np
from numpy.linalg import LinAlgError
from scipy.spatial import KDTree
from scipy.special import comb
from scipy.linalg.lapack import dgesv # type: ignore[attr-defined]
from ._rbfinterp_pythran import (_build_system,
_build_evaluation_coefficients,
_polynomial_matrix)
__all__ = ["RBFInterpolator"]
# These RBFs are implemented.
_AVAILABLE = {
"linear",
"thin_plate_spline",
"cubic",
"quintic",
"multiquadric",
"inverse_multiquadric",
"inverse_quadratic",
"gaussian"
}
# The shape parameter does not need to be specified when using these RBFs.
_SCALE_INVARIANT = {"linear", "thin_plate_spline", "cubic", "quintic"}
# For RBFs that are conditionally positive definite of order m, the interpolant
# should include polynomial terms with degree >= m - 1. Define the minimum
# degrees here. These values are from Chapter 8 of Fasshauer's "Meshfree
# Approximation Methods with MATLAB". The RBFs that are not in this dictionary
# are positive definite and do not need polynomial terms.
_NAME_TO_MIN_DEGREE = {
"multiquadric": 0,
"linear": 0,
"thin_plate_spline": 1,
"cubic": 1,
"quintic": 2
}
def _monomial_powers(ndim, degree):
"""Return the powers for each monomial in a polynomial.
Parameters
----------
ndim : int
Number of variables in the polynomial.
degree : int
Degree of the polynomial.
Returns
-------
(nmonos, ndim) int ndarray
Array where each row contains the powers for each variable in a
monomial.
"""
nmonos = comb(degree + ndim, ndim, exact=True)
out = np.zeros((nmonos, ndim), dtype=int)
count = 0
for deg in range(degree + 1):
for mono in combinations_with_replacement(range(ndim), deg):
# `mono` is a tuple of variables in the current monomial with
# multiplicity indicating power (e.g., (0, 1, 1) represents x*y**2)
for var in mono:
out[count, var] += 1
count += 1
return out
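The monomial enumeration above can be illustrated with a small pure-Python rendition (``monomial_powers`` below repeats the same logic with list rows instead of a NumPy array, and `math.comb` in place of `scipy.special.comb`, so it is self-contained):

```python
from itertools import combinations_with_replacement
from math import comb

def monomial_powers(ndim, degree):
    # Same enumeration as _monomial_powers above: each tuple `mono` lists
    # variables with multiplicity, e.g. (0, 1, 1) represents x * y**2.
    rows = []
    for deg in range(degree + 1):
        for mono in combinations_with_replacement(range(ndim), deg):
            row = [0] * ndim
            for var in mono:
                row[var] += 1
            rows.append(row)
    assert len(rows) == comb(degree + ndim, ndim)  # nmonos check
    return rows

# Degree-2 polynomial in (x, y): 1, x, y, x**2, x*y, y**2
print(monomial_powers(2, 2))
# [[0, 0], [1, 0], [0, 1], [2, 0], [1, 1], [0, 2]]
```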
def _build_and_solve_system(y, d, smoothing, kernel, epsilon, powers):
"""Build and solve the RBF interpolation system of equations.
Parameters
----------
y : (P, N) float ndarray
Data point coordinates.
d : (P, S) float ndarray
Data values at `y`.
smoothing : (P,) float ndarray
Smoothing parameter for each data point.
kernel : str
Name of the RBF.
epsilon : float
Shape parameter.
powers : (R, N) int ndarray
The exponents for each monomial in the polynomial.
Returns
-------
coeffs : (P + R, S) float ndarray
Coefficients for each RBF and monomial.
shift : (N,) float ndarray
Domain shift used to create the polynomial matrix.
scale : (N,) float ndarray
Domain scaling used to create the polynomial matrix.
"""
lhs, rhs, shift, scale = _build_system(
y, d, smoothing, kernel, epsilon, powers
)
_, _, coeffs, info = dgesv(lhs, rhs, overwrite_a=True, overwrite_b=True)
if info < 0:
raise ValueError(f"The {-info}-th argument had an illegal value.")
elif info > 0:
msg = "Singular matrix."
nmonos = powers.shape[0]
if nmonos > 0:
pmat = _polynomial_matrix((y - shift)/scale, powers)
rank = np.linalg.matrix_rank(pmat)
if rank < nmonos:
msg = (
"Singular matrix. The matrix of monomials evaluated at "
"the data point coordinates does not have full column "
f"rank ({rank}/{nmonos})."
)
raise LinAlgError(msg)
return shift, scale, coeffs
class RBFInterpolator:
"""Radial basis function (RBF) interpolation in N dimensions.
Parameters
----------
y : (npoints, ndims) array_like
2-D array of data point coordinates.
d : (npoints, ...) array_like
N-D array of data values at `y`. The length of `d` along the first
axis must be equal to the length of `y`. Unlike some interpolators, the
interpolation axis cannot be changed.
neighbors : int, optional
If specified, the value of the interpolant at each evaluation point
will be computed using only this many nearest data points. All the data
points are used by default.
smoothing : float or (npoints, ) array_like, optional
Smoothing parameter. The interpolant perfectly fits the data when this
is set to 0. For large values, the interpolant approaches a least
squares fit of a polynomial with the specified degree. Default is 0.
kernel : str, optional
Type of RBF. This should be one of
- 'linear' : ``-r``
- 'thin_plate_spline' : ``r**2 * log(r)``
- 'cubic' : ``r**3``
- 'quintic' : ``-r**5``
- 'multiquadric' : ``-sqrt(1 + r**2)``
- 'inverse_multiquadric' : ``1/sqrt(1 + r**2)``
- 'inverse_quadratic' : ``1/(1 + r**2)``
- 'gaussian' : ``exp(-r**2)``
Default is 'thin_plate_spline'.
epsilon : float, optional
Shape parameter that scales the input to the RBF. If `kernel` is
'linear', 'thin_plate_spline', 'cubic', or 'quintic', this defaults to
1 and can be ignored because it has the same effect as scaling the
smoothing parameter. Otherwise, this must be specified.
degree : int, optional
Degree of the added polynomial. For some RBFs the interpolant may not
be well-posed if the polynomial degree is too small. Those RBFs and
their corresponding minimum degrees are
- 'multiquadric' : 0
- 'linear' : 0
- 'thin_plate_spline' : 1
- 'cubic' : 1
- 'quintic' : 2
The default value is the minimum degree for `kernel` or 0 if there is
no minimum degree. Set this to -1 for no added polynomial.
Notes
-----
An RBF is a scalar valued function in N-dimensional space whose value at
:math:`x` can be expressed in terms of :math:`r=||x - c||`, where :math:`c`
is the center of the RBF.
An RBF interpolant for the vector of data values :math:`d`, which are from
locations :math:`y`, is a linear combination of RBFs centered at :math:`y`
plus a polynomial with a specified degree. The RBF interpolant is written
as
.. math::
f(x) = K(x, y) a + P(x) b,
where :math:`K(x, y)` is a matrix of RBFs with centers at :math:`y`
evaluated at the points :math:`x`, and :math:`P(x)` is a matrix of
monomials, which span polynomials with the specified degree, evaluated at
:math:`x`. The coefficients :math:`a` and :math:`b` are the solution to the
linear equations
.. math::
(K(y, y) + \\lambda I) a + P(y) b = d
and
.. math::
P(y)^T a = 0,
where :math:`\\lambda` is a non-negative smoothing parameter that controls
how well we want to fit the data. The data are fit exactly when the
smoothing parameter is 0.
The above system is uniquely solvable if the following requirements are
met:
- :math:`P(y)` must have full column rank. :math:`P(y)` always has full
column rank when `degree` is -1 or 0. When `degree` is 1,
:math:`P(y)` has full column rank if the data point locations are not
all collinear (N=2), coplanar (N=3), etc.
- If `kernel` is 'multiquadric', 'linear', 'thin_plate_spline',
'cubic', or 'quintic', then `degree` must not be lower than the
minimum value listed above.
- If `smoothing` is 0, then each data point location must be distinct.
When using an RBF that is not scale invariant ('multiquadric',
'inverse_multiquadric', 'inverse_quadratic', or 'gaussian'), an appropriate
shape parameter must be chosen (e.g., through cross validation). Smaller
values for the shape parameter correspond to wider RBFs. The problem can
become ill-conditioned or singular when the shape parameter is too small.
The memory required to solve for the RBF interpolation coefficients
increases quadratically with the number of data points, which can become
impractical when interpolating more than about a thousand data points.
To overcome memory limitations for large interpolation problems, the
`neighbors` argument can be specified to compute an RBF interpolant for
each evaluation point using only the nearest data points.
.. versionadded:: 1.7.0
See Also
--------
NearestNDInterpolator
LinearNDInterpolator
CloughTocher2DInterpolator
References
----------
.. [1] Fasshauer, G., 2007. Meshfree Approximation Methods with Matlab.
World Scientific Publishing Co.
.. [2] http://amadeus.math.iit.edu/~fass/603_ch3.pdf
.. [3] Wahba, G., 1990. Spline Models for Observational Data. SIAM.
.. [4] http://pages.stat.wisc.edu/~wahba/stat860public/lect/lect8/lect8.pdf
Examples
--------
Demonstrate interpolating scattered data to a grid in 2-D.
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.interpolate import RBFInterpolator
>>> from scipy.stats.qmc import Halton
>>> rng = np.random.default_rng()
>>> xobs = 2*Halton(2, seed=rng).random(100) - 1
>>> yobs = np.sum(xobs, axis=1)*np.exp(-6*np.sum(xobs**2, axis=1))
>>> xgrid = np.mgrid[-1:1:50j, -1:1:50j]
>>> xflat = xgrid.reshape(2, -1).T
>>> yflat = RBFInterpolator(xobs, yobs)(xflat)
>>> ygrid = yflat.reshape(50, 50)
>>> fig, ax = plt.subplots()
>>> ax.pcolormesh(*xgrid, ygrid, vmin=-0.25, vmax=0.25, shading='gouraud')
>>> p = ax.scatter(*xobs.T, c=yobs, s=50, ec='k', vmin=-0.25, vmax=0.25)
>>> fig.colorbar(p)
>>> plt.show()
"""
def __init__(self, y, d,
neighbors=None,
smoothing=0.0,
kernel="thin_plate_spline",
epsilon=None,
degree=None):
y = np.asarray(y, dtype=float, order="C")
if y.ndim != 2:
raise ValueError("`y` must be a 2-dimensional array.")
ny, ndim = y.shape
d_dtype = complex if np.iscomplexobj(d) else float
d = np.asarray(d, dtype=d_dtype, order="C")
if d.shape[0] != ny:
raise ValueError(
f"Expected the first axis of `d` to have length {ny}."
)
d_shape = d.shape[1:]
d = d.reshape((ny, -1))
# If `d` is complex, convert it to a float array with twice as many
# columns. Otherwise, the LHS matrix would need to be converted to
# complex and take up 2x more memory than necessary.
d = d.view(float)
if np.isscalar(smoothing):
smoothing = np.full(ny, smoothing, dtype=float)
else:
smoothing = np.asarray(smoothing, dtype=float, order="C")
if smoothing.shape != (ny,):
raise ValueError(
"Expected `smoothing` to be a scalar or have shape "
f"({ny},)."
)
kernel = kernel.lower()
if kernel not in _AVAILABLE:
raise ValueError(f"`kernel` must be one of {_AVAILABLE}.")
if epsilon is None:
if kernel in _SCALE_INVARIANT:
epsilon = 1.0
else:
raise ValueError(
"`epsilon` must be specified if `kernel` is not one of "
f"{_SCALE_INVARIANT}."
)
else:
epsilon = float(epsilon)
min_degree = _NAME_TO_MIN_DEGREE.get(kernel, -1)
if degree is None:
degree = max(min_degree, 0)
else:
degree = int(degree)
if degree < -1:
raise ValueError("`degree` must be at least -1.")
elif degree < min_degree:
warnings.warn(
f"`degree` should not be below {min_degree} when `kernel` "
f"is '{kernel}'. The interpolant may not be uniquely "
"solvable, and the smoothing parameter may have an "
"unintuitive effect.",
UserWarning
)
if neighbors is None:
nobs = ny
else:
# Make sure the number of nearest neighbors used for interpolation
# does not exceed the number of observations.
neighbors = int(min(neighbors, ny))
nobs = neighbors
powers = _monomial_powers(ndim, degree)
# The polynomial matrix must have full column rank in order for the
# interpolant to be well-posed, which is not possible if there are
# fewer observations than monomials.
if powers.shape[0] > nobs:
raise ValueError(
f"At least {powers.shape[0]} data points are required when "
f"`degree` is {degree} and the number of dimensions is {ndim}."
)
if neighbors is None:
shift, scale, coeffs = _build_and_solve_system(
y, d, smoothing, kernel, epsilon, powers
)
# Make these attributes private since they do not always exist.
self._shift = shift
self._scale = scale
self._coeffs = coeffs
else:
self._tree = KDTree(y)
self.y = y
self.d = d
self.d_shape = d_shape
self.d_dtype = d_dtype
self.neighbors = neighbors
self.smoothing = smoothing
self.kernel = kernel
self.epsilon = epsilon
self.powers = powers
def _chunk_evaluator(
self,
x,
y,
shift,
scale,
coeffs,
memory_budget=1000000
):
"""
Evaluate the interpolation while controlling memory consumption.
We chunk the input if we need more memory than specified.
Parameters
----------
x : (Q, N) float ndarray
array of points on which to evaluate
y: (P, N) float ndarray
array of points on which we know function values
shift: (N, ) ndarray
Domain shift used to create the polynomial matrix.
scale : (N,) float ndarray
Domain scaling used to create the polynomial matrix.
coeffs: (P+R, S) float ndarray
Coefficients in front of basis functions
memory_budget: int
Total amount of memory (in units of sizeof(float)) we wish
to devote for storing the array of coefficients for
interpolated points. If we need more memory than that, we
chunk the input.
Returns
-------
(Q, S) float ndarray
Interpolated array
"""
nx, ndim = x.shape
if self.neighbors is None:
nnei = len(y)
else:
nnei = self.neighbors
# in each chunk we consume the same space we already occupy
chunksize = memory_budget // (self.powers.shape[0] + nnei) + 1
if chunksize <= nx:
out = np.empty((nx, self.d.shape[1]), dtype=float)
for i in range(0, nx, chunksize):
vec = _build_evaluation_coefficients(
x[i:i + chunksize, :],
y,
self.kernel,
self.epsilon,
self.powers,
shift,
scale)
out[i:i + chunksize, :] = np.dot(vec, coeffs)
else:
vec = _build_evaluation_coefficients(
x,
y,
self.kernel,
self.epsilon,
self.powers,
shift,
scale)
out = np.dot(vec, coeffs)
return out
def __call__(self, x):
"""Evaluate the interpolant at `x`.
Parameters
----------
x : (Q, N) array_like
Evaluation point coordinates.
Returns
-------
(Q, ...) ndarray
Values of the interpolant at `x`.
"""
x = np.asarray(x, dtype=float, order="C")
if x.ndim != 2:
raise ValueError("`x` must be a 2-dimensional array.")
nx, ndim = x.shape
if ndim != self.y.shape[1]:
raise ValueError("Expected the second axis of `x` to have length "
f"{self.y.shape[1]}.")
# Our memory budget for storing RBF coefficients is
# based on how many floats in memory we already occupy
# If this number is below 1e6 we just use 1e6
# This memory budget is used to decide how we chunk
# the inputs
memory_budget = max(x.size + self.y.size + self.d.size, 1000000)
if self.neighbors is None:
out = self._chunk_evaluator(
x,
self.y,
self._shift,
self._scale,
self._coeffs,
memory_budget=memory_budget)
else:
# Get the indices of the k nearest observation points to each
# evaluation point.
_, yindices = self._tree.query(x, self.neighbors)
if self.neighbors == 1:
# `KDTree` squeezes the output when neighbors=1.
yindices = yindices[:, None]
# Multiple evaluation points may have the same neighborhood of
# observation points. Make the neighborhoods unique so that we only
# compute the interpolation coefficients once for each
# neighborhood.
yindices = np.sort(yindices, axis=1)
yindices, inv = np.unique(yindices, return_inverse=True, axis=0)
# `inv` tells us which neighborhood will be used by each evaluation
# point. Now we find which evaluation points will be using each
# neighborhood.
xindices = [[] for _ in range(len(yindices))]
for i, j in enumerate(inv):
xindices[j].append(i)
out = np.empty((nx, self.d.shape[1]), dtype=float)
for xidx, yidx in zip(xindices, yindices):
# `yidx` are the indices of the observations in this
# neighborhood. `xidx` are the indices of the evaluation points
# that are using this neighborhood.
xnbr = x[xidx]
ynbr = self.y[yidx]
dnbr = self.d[yidx]
snbr = self.smoothing[yidx]
shift, scale, coeffs = _build_and_solve_system(
ynbr,
dnbr,
snbr,
self.kernel,
self.epsilon,
self.powers,
)
out[xidx] = self._chunk_evaluator(
xnbr,
ynbr,
shift,
scale,
coeffs,
memory_budget=memory_budget)
out = out.view(self.d_dtype)
out = out.reshape((nx, ) + self.d_shape)
return out
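As a usage sketch of the `neighbors` code path implemented above (not part of the original source; it assumes only the documented `RBFInterpolator` API), the interpolant can be restricted to the nearest observations so that a dense 2000x2000 system is never formed:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# 2000 scattered observations of a smooth function. A full solve would
# build a 2000x2000 system; with `neighbors=50`, each evaluation point
# solves a small local system instead.
rng = np.random.default_rng(0)
xobs = rng.uniform(-1, 1, (2000, 2))
yobs = np.exp(-np.sum(xobs**2, axis=1))

interp = RBFInterpolator(xobs, yobs, neighbors=50)

xeval = rng.uniform(-0.5, 0.5, (10, 2))
yeval = interp(xeval)
yexact = np.exp(-np.sum(xeval**2, axis=1))
print(np.max(np.abs(yeval - yexact)))  # small interpolation error
```

Because evaluation points with identical neighborhoods share one local solve (see `__call__` above), clustered queries are cheap.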
# File: scipy-main/scipy/interpolate/_pade.py
from numpy import zeros, asarray, eye, poly1d, hstack, r_
from scipy import linalg
__all__ = ["pade"]
def pade(an, m, n=None):
"""
Return Pade approximation to a polynomial as the ratio of two polynomials.
Parameters
----------
an : (N,) array_like
Taylor series coefficients.
m : int
The order of the returned approximating polynomial `q`.
n : int, optional
The order of the returned approximating polynomial `p`. By default,
the order is ``len(an)-1-m``.
Returns
-------
p, q : Polynomial class
The Pade approximation of the polynomial defined by `an` is
``p(x)/q(x)``.
Examples
--------
>>> import numpy as np
>>> from scipy.interpolate import pade
>>> e_exp = [1.0, 1.0, 1.0/2.0, 1.0/6.0, 1.0/24.0, 1.0/120.0]
>>> p, q = pade(e_exp, 2)
>>> e_exp.reverse()
>>> e_poly = np.poly1d(e_exp)
Compare ``e_poly(x)`` and the Pade approximation ``p(x)/q(x)``
>>> e_poly(1)
2.7166666666666668
>>> p(1)/q(1)
2.7179487179487181
"""
an = asarray(an)
if n is None:
n = len(an) - 1 - m
if n < 0:
raise ValueError("Order of q <m> must be smaller than len(an)-1.")
if n < 0:
raise ValueError("Order of p <n> must be greater than 0.")
N = m + n
if N > len(an)-1:
raise ValueError("Order of q+p <m+n> must be smaller than len(an).")
an = an[:N+1]
Akj = eye(N+1, n+1, dtype=an.dtype)
Bkj = zeros((N+1, m), dtype=an.dtype)
for row in range(1, m+1):
Bkj[row,:row] = -(an[:row])[::-1]
for row in range(m+1, N+1):
Bkj[row,:] = -(an[row-m:row])[::-1]
C = hstack((Akj, Bkj))
pq = linalg.solve(C, an)
p = pq[:n+1]
q = r_[1.0, pq[n+1:]]
return poly1d(p[::-1]), poly1d(q[::-1])
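A small usage sketch (not from the original source) showing why a Pade approximant is preferred over the truncated Taylor series it is built from; the coefficients of log(1 + x) are assumed known:

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of log(1 + x): 0, 1, -1/2, 1/3, -1/4, 1/5
an = [0.0, 1.0, -1.0/2.0, 1.0/3.0, -1.0/4.0, 1.0/5.0]
p, q = pade(an, 2)  # denominator order m=2, numerator order n=len(an)-1-m=3

x = 0.5
approx = p(x) / q(x)
exact = np.log(1 + x)
# The rational approximant is far closer to log(1.5) than the degree-5
# truncated series evaluated at the same point.
print(approx, exact)
```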
# File: scipy-main/scipy/interpolate/_ndgriddata.py
"""
Convenience interface to N-D interpolation
.. versionadded:: 0.9
"""
import numpy as np
from .interpnd import LinearNDInterpolator, NDInterpolatorBase, \
CloughTocher2DInterpolator, _ndim_coords_from_arrays
from scipy.spatial import cKDTree
__all__ = ['griddata', 'NearestNDInterpolator', 'LinearNDInterpolator',
'CloughTocher2DInterpolator']
#------------------------------------------------------------------------------
# Nearest-neighbor interpolation
#------------------------------------------------------------------------------
class NearestNDInterpolator(NDInterpolatorBase):
"""NearestNDInterpolator(x, y).
Nearest-neighbor interpolation in N > 1 dimensions.
.. versionadded:: 0.9
Methods
-------
__call__
Parameters
----------
x : (npoints, ndims) 2-D ndarray of floats
Data point coordinates.
y : (npoints, ) 1-D ndarray of float or complex
Data values.
rescale : boolean, optional
Rescale points to unit cube before performing interpolation.
This is useful if some of the input dimensions have
incommensurable units and differ by many orders of magnitude.
.. versionadded:: 0.14.0
tree_options : dict, optional
Options passed to the underlying ``cKDTree``.
.. versionadded:: 0.17.0
See Also
--------
griddata :
Interpolate unstructured D-D data.
LinearNDInterpolator :
Piecewise linear interpolant in N dimensions.
CloughTocher2DInterpolator :
Piecewise cubic, C1 smooth, curvature-minimizing interpolant in 2D.
interpn : Interpolation on a regular grid or rectilinear grid.
RegularGridInterpolator : Interpolation on a regular or rectilinear grid
in arbitrary dimensions (`interpn` wraps this
class).
Notes
-----
Uses ``scipy.spatial.cKDTree``
.. note:: For data on a regular grid use `interpn` instead.
Examples
--------
We can interpolate values on a 2D plane:
>>> from scipy.interpolate import NearestNDInterpolator
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> rng = np.random.default_rng()
>>> x = rng.random(10) - 0.5
>>> y = rng.random(10) - 0.5
>>> z = np.hypot(x, y)
>>> X = np.linspace(min(x), max(x))
>>> Y = np.linspace(min(y), max(y))
>>> X, Y = np.meshgrid(X, Y) # 2D grid for interpolation
>>> interp = NearestNDInterpolator(list(zip(x, y)), z)
>>> Z = interp(X, Y)
>>> plt.pcolormesh(X, Y, Z, shading='auto')
>>> plt.plot(x, y, "ok", label="input point")
>>> plt.legend()
>>> plt.colorbar()
>>> plt.axis("equal")
>>> plt.show()
"""
def __init__(self, x, y, rescale=False, tree_options=None):
NDInterpolatorBase.__init__(self, x, y, rescale=rescale,
need_contiguous=False,
need_values=False)
if tree_options is None:
tree_options = dict()
self.tree = cKDTree(self.points, **tree_options)
self.values = np.asarray(y)
def __call__(self, *args):
"""
Evaluate interpolator at given points.
Parameters
----------
x1, x2, ... xn : array-like of float
Points where to interpolate data at.
x1, x2, ... xn can be array-like of float with broadcastable shape.
or x1 can be array-like of float with shape ``(..., ndim)``
"""
        # For the sake of enabling subclassing, NDInterpolatorBase._set_xi
        # performs some operations which are not required by
        # NearestNDInterpolator.__call__, hence here we operate on xi
        # directly, without calling a parent class function.
xi = _ndim_coords_from_arrays(args, ndim=self.points.shape[1])
xi = self._check_call_shape(xi)
xi = self._scale_x(xi)
dist, i = self.tree.query(xi)
return self.values[i]
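A minimal sketch of the class above (not from the original source): each query point simply receives the value attached to its closest data point, here in 3-D.

```python
import numpy as np
from scipy.interpolate import NearestNDInterpolator

# Three labelled points in 3-D; queries snap to the nearest one.
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
vals = np.array([1.0, 2.0, 3.0])
interp = NearestNDInterpolator(pts, vals)

# First query is closest to pts[0], second to pts[1].
out = interp([[0.1, 0.1, 0.0], [0.9, 0.1, 0.1]])
print(out)  # [1. 2.]
```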
#------------------------------------------------------------------------------
# Convenience interface function
#------------------------------------------------------------------------------
def griddata(points, values, xi, method='linear', fill_value=np.nan,
rescale=False):
"""
Interpolate unstructured D-D data.
Parameters
----------
points : 2-D ndarray of floats with shape (n, D), or length D tuple of 1-D ndarrays with shape (n,).
Data point coordinates.
values : ndarray of float or complex, shape (n,)
Data values.
xi : 2-D ndarray of floats with shape (m, D), or length D tuple of ndarrays broadcastable to the same shape.
Points at which to interpolate data.
method : {'linear', 'nearest', 'cubic'}, optional
Method of interpolation. One of
``nearest``
return the value at the data point closest to
the point of interpolation. See `NearestNDInterpolator` for
more details.
``linear``
tessellate the input point set to N-D
simplices, and interpolate linearly on each simplex. See
`LinearNDInterpolator` for more details.
``cubic`` (1-D)
return the value determined from a cubic
spline.
``cubic`` (2-D)
return the value determined from a
piecewise cubic, continuously differentiable (C1), and
approximately curvature-minimizing polynomial surface. See
`CloughTocher2DInterpolator` for more details.
fill_value : float, optional
Value used to fill in for requested points outside of the
convex hull of the input points. If not provided, then the
default is ``nan``. This option has no effect for the
'nearest' method.
rescale : bool, optional
Rescale points to unit cube before performing interpolation.
This is useful if some of the input dimensions have
incommensurable units and differ by many orders of magnitude.
.. versionadded:: 0.14.0
Returns
-------
ndarray
Array of interpolated values.
See Also
--------
LinearNDInterpolator :
Piecewise linear interpolant in N dimensions.
NearestNDInterpolator :
Nearest-neighbor interpolation in N dimensions.
CloughTocher2DInterpolator :
Piecewise cubic, C1 smooth, curvature-minimizing interpolant in 2D.
interpn : Interpolation on a regular grid or rectilinear grid.
RegularGridInterpolator : Interpolation on a regular or rectilinear grid
in arbitrary dimensions (`interpn` wraps this
class).
Notes
-----
.. versionadded:: 0.9
.. note:: For data on a regular grid use `interpn` instead.
Examples
--------
Suppose we want to interpolate the 2-D function
>>> import numpy as np
>>> def func(x, y):
... return x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2
on a grid in [0, 1]x[0, 1]
>>> grid_x, grid_y = np.mgrid[0:1:100j, 0:1:200j]
but we only know its values at 1000 data points:
>>> rng = np.random.default_rng()
>>> points = rng.random((1000, 2))
>>> values = func(points[:,0], points[:,1])
This can be done with `griddata` -- below we try out all of the
interpolation methods:
>>> from scipy.interpolate import griddata
>>> grid_z0 = griddata(points, values, (grid_x, grid_y), method='nearest')
>>> grid_z1 = griddata(points, values, (grid_x, grid_y), method='linear')
>>> grid_z2 = griddata(points, values, (grid_x, grid_y), method='cubic')
One can see that the exact result is reproduced by all of the
methods to some degree, but for this smooth function the piecewise
cubic interpolant gives the best results:
>>> import matplotlib.pyplot as plt
>>> plt.subplot(221)
>>> plt.imshow(func(grid_x, grid_y).T, extent=(0,1,0,1), origin='lower')
>>> plt.plot(points[:,0], points[:,1], 'k.', ms=1)
>>> plt.title('Original')
>>> plt.subplot(222)
>>> plt.imshow(grid_z0.T, extent=(0,1,0,1), origin='lower')
>>> plt.title('Nearest')
>>> plt.subplot(223)
>>> plt.imshow(grid_z1.T, extent=(0,1,0,1), origin='lower')
>>> plt.title('Linear')
>>> plt.subplot(224)
>>> plt.imshow(grid_z2.T, extent=(0,1,0,1), origin='lower')
>>> plt.title('Cubic')
>>> plt.gcf().set_size_inches(6, 6)
>>> plt.show()
"""
points = _ndim_coords_from_arrays(points)
if points.ndim < 2:
ndim = points.ndim
else:
ndim = points.shape[-1]
if ndim == 1 and method in ('nearest', 'linear', 'cubic'):
from ._interpolate import interp1d
points = points.ravel()
if isinstance(xi, tuple):
if len(xi) != 1:
raise ValueError("invalid number of dimensions in xi")
xi, = xi
# Sort points/values together, necessary as input for interp1d
idx = np.argsort(points)
points = points[idx]
values = values[idx]
if method == 'nearest':
fill_value = 'extrapolate'
ip = interp1d(points, values, kind=method, axis=0, bounds_error=False,
fill_value=fill_value)
return ip(xi)
elif method == 'nearest':
ip = NearestNDInterpolator(points, values, rescale=rescale)
return ip(xi)
elif method == 'linear':
ip = LinearNDInterpolator(points, values, fill_value=fill_value,
rescale=rescale)
return ip(xi)
elif method == 'cubic' and ndim == 2:
ip = CloughTocher2DInterpolator(points, values, fill_value=fill_value,
rescale=rescale)
return ip(xi)
else:
raise ValueError("Unknown interpolation method %r for "
"%d dimensional data" % (method, ndim))
# File: scipy-main/scipy/interpolate/_fitpack2.py
"""
fitpack --- curve and surface fitting with splines
fitpack is based on a collection of Fortran routines DIERCKX
by P. Dierckx (see http://www.netlib.org/dierckx/) transformed
to double routines by Pearu Peterson.
"""
# Created by Pearu Peterson, June,August 2003
__all__ = [
'UnivariateSpline',
'InterpolatedUnivariateSpline',
'LSQUnivariateSpline',
'BivariateSpline',
'LSQBivariateSpline',
'SmoothBivariateSpline',
'LSQSphereBivariateSpline',
'SmoothSphereBivariateSpline',
'RectBivariateSpline',
'RectSphereBivariateSpline']
import warnings
from numpy import zeros, concatenate, ravel, diff, array, ones
import numpy as np
from . import _fitpack_impl
from . import dfitpack
dfitpack_int = dfitpack.types.intvar.dtype
# ############### Univariate spline ####################
_curfit_messages = {1: """
The required storage space exceeds the available storage space, as
specified by the parameter nest: nest too small. If nest is already
large (say nest > m/2), it may also indicate that s is too small.
The approximation returned is the weighted least-squares spline
according to the knots t[0],t[1],...,t[n-1]. (n=nest) the parameter fp
gives the corresponding weighted sum of squared residuals (fp>s).
""",
2: """
A theoretically impossible result was found during the iteration
process for finding a smoothing spline with fp = s: s too small.
There is an approximation returned but the corresponding weighted sum
of squared residuals does not satisfy the condition abs(fp-s)/s < tol.""",
3: """
The maximal number of iterations maxit (set to 20 by the program)
allowed for finding a smoothing spline with fp=s has been reached: s
too small.
There is an approximation returned but the corresponding weighted sum
of squared residuals does not satisfy the condition abs(fp-s)/s < tol.""",
10: """
Error on entry, no approximation returned. The following conditions
must hold:
xb<=x[0]<x[1]<...<x[m-1]<=xe, w[i]>0, i=0..m-1
if iopt=-1:
xb<t[k+1]<t[k+2]<...<t[n-k-2]<xe"""
}
# UnivariateSpline, ext parameter can be an int or a string
_extrap_modes = {0: 0, 'extrapolate': 0,
1: 1, 'zeros': 1,
2: 2, 'raise': 2,
3: 3, 'const': 3}
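A small sketch (not part of the original source) of the `_extrap_modes` mapping above, using `UnivariateSpline` as defined below to evaluate outside the data range under each mode:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# An interpolating cubic spline through samples of x**2; since the data
# are exactly quadratic, the spline reproduces x**2 on [0, 3].
x = np.linspace(0, 3, 11)
spl = UnivariateSpline(x, x**2, s=0)

print(spl(4.0, ext='extrapolate'))  # the last cubic piece extended past x=3
print(spl(4.0, ext='zeros'))        # 0.0
print(spl(4.0, ext='const'))        # boundary value spl(3.0) = 9.0
```

`ext='raise'` would instead raise a ``ValueError`` for the same query.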
class UnivariateSpline:
"""
1-D smoothing spline fit to a given set of data points.
Fits a spline y = spl(x) of degree `k` to the provided `x`, `y` data. `s`
specifies the number of knots by specifying a smoothing condition.
Parameters
----------
x : (N,) array_like
1-D array of independent input data. Must be increasing;
must be strictly increasing if `s` is 0.
y : (N,) array_like
1-D array of dependent input data, of the same length as `x`.
w : (N,) array_like, optional
Weights for spline fitting. Must be positive. If `w` is None,
weights are all 1. Default is None.
bbox : (2,) array_like, optional
2-sequence specifying the boundary of the approximation interval. If
`bbox` is None, ``bbox=[x[0], x[-1]]``. Default is None.
k : int, optional
Degree of the smoothing spline. Must be 1 <= `k` <= 5.
``k = 3`` is a cubic spline. Default is 3.
s : float or None, optional
Positive smoothing factor used to choose the number of knots. Number
of knots will be increased until the smoothing condition is satisfied::
sum((w[i] * (y[i]-spl(x[i])))**2, axis=0) <= s
However, because of numerical issues, the actual condition is::
abs(sum((w[i] * (y[i]-spl(x[i])))**2, axis=0) - s) < 0.001 * s
If `s` is None, `s` will be set as `len(w)` for a smoothing spline
that uses all data points.
If 0, spline will interpolate through all data points. This is
equivalent to `InterpolatedUnivariateSpline`.
Default is None.
        The user can use `s` to control the tradeoff between closeness
        and smoothness of fit. Larger `s` means more smoothing while smaller
        values of `s` indicate less smoothing.
Recommended values of `s` depend on the weights, `w`. If the weights
represent the inverse of the standard-deviation of `y`, then a good
`s` value should be found in the range (m-sqrt(2*m),m+sqrt(2*m))
where m is the number of datapoints in `x`, `y`, and `w`. This means
``s = len(w)`` should be a good value if ``1/w[i]`` is an
estimate of the standard deviation of ``y[i]``.
ext : int or str, optional
Controls the extrapolation mode for elements
not in the interval defined by the knot sequence.
* if ext=0 or 'extrapolate', return the extrapolated value.
* if ext=1 or 'zeros', return 0
* if ext=2 or 'raise', raise a ValueError
        * if ext=3 or 'const', return the boundary value.
Default is 0.
check_finite : bool, optional
Whether to check that the input arrays contain only finite numbers.
Disabling may give a performance gain, but may result in problems
(crashes, non-termination or non-sensical results) if the inputs
do contain infinities or NaNs.
Default is False.
See Also
--------
BivariateSpline :
a base class for bivariate splines.
SmoothBivariateSpline :
a smoothing bivariate spline through the given points
LSQBivariateSpline :
a bivariate spline using weighted least-squares fitting
RectSphereBivariateSpline :
a bivariate spline over a rectangular mesh on a sphere
SmoothSphereBivariateSpline :
a smoothing bivariate spline in spherical coordinates
LSQSphereBivariateSpline :
a bivariate spline in spherical coordinates using weighted
least-squares fitting
RectBivariateSpline :
a bivariate spline over a rectangular mesh
InterpolatedUnivariateSpline :
        an interpolating univariate spline for a given set of data points.
bisplrep :
a function to find a bivariate B-spline representation of a surface
bisplev :
a function to evaluate a bivariate B-spline and its derivatives
splrep :
a function to find the B-spline representation of a 1-D curve
splev :
a function to evaluate a B-spline or its derivatives
sproot :
a function to find the roots of a cubic B-spline
splint :
a function to evaluate the definite integral of a B-spline between two
given points
spalde :
a function to evaluate all derivatives of a B-spline
Notes
-----
The number of data points must be larger than the spline degree `k`.
**NaN handling**: If the input arrays contain ``nan`` values, the result
is not useful, since the underlying spline fitting routines cannot deal
with ``nan``. A workaround is to use zero weights for not-a-number
data points:
>>> import numpy as np
>>> from scipy.interpolate import UnivariateSpline
>>> x, y = np.array([1, 2, 3, 4]), np.array([1, np.nan, 3, 4])
>>> w = np.isnan(y)
>>> y[w] = 0.
>>> spl = UnivariateSpline(x, y, w=~w)
Notice the need to replace a ``nan`` by a numerical value (precise value
does not matter as long as the corresponding weight is zero.)
References
----------
Based on algorithms described in [1]_, [2]_, [3]_, and [4]_:
.. [1] P. Dierckx, "An algorithm for smoothing, differentiation and
integration of experimental data using spline functions",
J.Comp.Appl.Maths 1 (1975) 165-184.
.. [2] P. Dierckx, "A fast algorithm for smoothing data on a rectangular
grid while using spline functions", SIAM J.Numer.Anal. 19 (1982)
1286-1304.
.. [3] P. Dierckx, "An improved algorithm for curve fitting with spline
functions", report tw54, Dept. Computer Science,K.U. Leuven, 1981.
.. [4] P. Dierckx, "Curve and surface fitting with splines", Monographs on
Numerical Analysis, Oxford University Press, 1993.
Examples
--------
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.interpolate import UnivariateSpline
>>> rng = np.random.default_rng()
>>> x = np.linspace(-3, 3, 50)
>>> y = np.exp(-x**2) + 0.1 * rng.standard_normal(50)
>>> plt.plot(x, y, 'ro', ms=5)
Use the default value for the smoothing parameter:
>>> spl = UnivariateSpline(x, y)
>>> xs = np.linspace(-3, 3, 1000)
>>> plt.plot(xs, spl(xs), 'g', lw=3)
Manually change the amount of smoothing:
>>> spl.set_smoothing_factor(0.5)
>>> plt.plot(xs, spl(xs), 'b', lw=3)
>>> plt.show()
"""
def __init__(self, x, y, w=None, bbox=[None]*2, k=3, s=None,
ext=0, check_finite=False):
x, y, w, bbox, self.ext = self.validate_input(x, y, w, bbox, k, s, ext,
check_finite)
# _data == x,y,w,xb,xe,k,s,n,t,c,fp,fpint,nrdata,ier
data = dfitpack.fpcurf0(x, y, k, w=w, xb=bbox[0],
xe=bbox[1], s=s)
if data[-1] == 1:
# nest too small, setting to maximum bound
data = self._reset_nest(data)
self._data = data
self._reset_class()
@staticmethod
def validate_input(x, y, w, bbox, k, s, ext, check_finite):
x, y, bbox = np.asarray(x), np.asarray(y), np.asarray(bbox)
if w is not None:
w = np.asarray(w)
if check_finite:
w_finite = np.isfinite(w).all() if w is not None else True
if (not np.isfinite(x).all() or not np.isfinite(y).all() or
not w_finite):
raise ValueError("x and y array must not contain "
"NaNs or infs.")
if s is None or s > 0:
if not np.all(diff(x) >= 0.0):
raise ValueError("x must be increasing if s > 0")
else:
if not np.all(diff(x) > 0.0):
raise ValueError("x must be strictly increasing if s = 0")
        if x.size != y.size:
            raise ValueError("x and y should have the same length")
        elif w is not None and not x.size == y.size == w.size:
            raise ValueError("x, y, and w should have the same length")
elif bbox.shape != (2,):
raise ValueError("bbox shape should be (2,)")
elif not (1 <= k <= 5):
raise ValueError("k should be 1 <= k <= 5")
elif s is not None and not s >= 0.0:
raise ValueError("s should be s >= 0.0")
try:
ext = _extrap_modes[ext]
except KeyError as e:
raise ValueError("Unknown extrapolation mode %s." % ext) from e
return x, y, w, bbox, ext
@classmethod
def _from_tck(cls, tck, ext=0):
"""Construct a spline object from given tck"""
self = cls.__new__(cls)
t, c, k = tck
self._eval_args = tck
# _data == x,y,w,xb,xe,k,s,n,t,c,fp,fpint,nrdata,ier
self._data = (None, None, None, None, None, k, None, len(t), t,
c, None, None, None, None)
self.ext = ext
return self
def _reset_class(self):
data = self._data
n, t, c, k, ier = data[7], data[8], data[9], data[5], data[-1]
self._eval_args = t[:n], c[:n], k
if ier == 0:
# the spline returned has a residual sum of squares fp
# such that abs(fp-s)/s <= tol with tol a relative
# tolerance set to 0.001 by the program
pass
elif ier == -1:
# the spline returned is an interpolating spline
self._set_class(InterpolatedUnivariateSpline)
elif ier == -2:
# the spline returned is the weighted least-squares
# polynomial of degree k. In this extreme case fp gives
# the upper bound fp0 for the smoothing factor s.
self._set_class(LSQUnivariateSpline)
else:
# error
if ier == 1:
self._set_class(LSQUnivariateSpline)
message = _curfit_messages.get(ier, 'ier=%s' % (ier))
warnings.warn(message)
def _set_class(self, cls):
self._spline_class = cls
if self.__class__ in (UnivariateSpline, InterpolatedUnivariateSpline,
LSQUnivariateSpline):
self.__class__ = cls
else:
# It's an unknown subclass -- don't change class. cf. #731
pass
def _reset_nest(self, data, nest=None):
n = data[10]
if nest is None:
k, m = data[5], len(data[0])
nest = m+k+1 # this is the maximum bound for nest
else:
if not n <= nest:
raise ValueError("`nest` can only be increased")
t, c, fpint, nrdata = (np.resize(data[j], nest) for j in
[8, 9, 11, 12])
args = data[:8] + (t, c, n, fpint, nrdata, data[13])
data = dfitpack.fpcurf1(*args)
return data
def set_smoothing_factor(self, s):
""" Continue spline computation with the given smoothing
factor s and with the knots found at the last call.
This routine modifies the spline in place.
"""
data = self._data
if data[6] == -1:
            warnings.warn('smoothing factor unchanged for '
                          'LSQ spline with fixed knots')
return
args = data[:6] + (s,) + data[7:]
data = dfitpack.fpcurf1(*args)
if data[-1] == 1:
# nest too small, setting to maximum bound
data = self._reset_nest(data)
self._data = data
self._reset_class()
def __call__(self, x, nu=0, ext=None):
"""
Evaluate spline (or its nu-th derivative) at positions x.
Parameters
----------
x : array_like
A 1-D array of points at which to return the value of the smoothed
spline or its derivatives. Note: `x` can be unordered but the
evaluation is more efficient if `x` is (partially) ordered.
nu : int
The order of derivative of the spline to compute.
ext : int
Controls the value returned for elements of `x` not in the
interval defined by the knot sequence.
* if ext=0 or 'extrapolate', return the extrapolated value.
* if ext=1 or 'zeros', return 0
* if ext=2 or 'raise', raise a ValueError
* if ext=3 or 'const', return the boundary value.
The default value is 0, passed from the initialization of
UnivariateSpline.
"""
x = np.asarray(x)
# empty input yields empty output
if x.size == 0:
return array([])
if ext is None:
ext = self.ext
else:
try:
ext = _extrap_modes[ext]
except KeyError as e:
raise ValueError("Unknown extrapolation mode %s." % ext) from e
return _fitpack_impl.splev(x, self._eval_args, der=nu, ext=ext)
def get_knots(self):
""" Return positions of interior knots of the spline.
Internally, the knot vector contains ``2*k`` additional boundary knots.
"""
data = self._data
k, n = data[5], data[7]
return data[8][k:n-k]
def get_coeffs(self):
"""Return spline coefficients."""
data = self._data
k, n = data[5], data[7]
return data[9][:n-k-1]
def get_residual(self):
"""Return weighted sum of squared residuals of the spline approximation.
This is equivalent to::
sum((w[i] * (y[i]-spl(x[i])))**2, axis=0)
"""
return self._data[10]
def integral(self, a, b):
""" Return definite integral of the spline between two given points.
Parameters
----------
a : float
Lower limit of integration.
b : float
Upper limit of integration.
Returns
-------
integral : float
The value of the definite integral of the spline between limits.
Examples
--------
>>> import numpy as np
>>> from scipy.interpolate import UnivariateSpline
>>> x = np.linspace(0, 3, 11)
>>> y = x**2
>>> spl = UnivariateSpline(x, y)
>>> spl.integral(0, 3)
9.0
which agrees with :math:`\\int x^2 dx = x^3 / 3` between the limits
of 0 and 3.
A caveat is that this routine assumes the spline to be zero outside of
the data limits:
>>> spl.integral(-1, 4)
9.0
>>> spl.integral(-1, 0)
0.0
"""
return _fitpack_impl.splint(a, b, self._eval_args)
def derivatives(self, x):
""" Return all derivatives of the spline at the point x.
Parameters
----------
x : float
The point to evaluate the derivatives at.
Returns
-------
der : ndarray, shape(k+1,)
Derivatives of the orders 0 to k.
Examples
--------
>>> import numpy as np
>>> from scipy.interpolate import UnivariateSpline
>>> x = np.linspace(0, 3, 11)
>>> y = x**2
>>> spl = UnivariateSpline(x, y)
>>> spl.derivatives(1.5)
array([2.25, 3.0, 2.0, 0])
"""
return _fitpack_impl.spalde(x, self._eval_args)
def roots(self):
""" Return the zeros of the spline.
Notes
-----
Restriction: only cubic splines are supported by FITPACK. For non-cubic
splines, use `PPoly.roots` (see below for an example).
Examples
--------
For some data, this method may miss a root. This happens when one of
the spline knots (which FITPACK places automatically) happens to
coincide with the true root. A workaround is to convert to `PPoly`,
which uses a different root-finding algorithm.
For example,
>>> x = [1.96, 1.97, 1.98, 1.99, 2.00, 2.01, 2.02, 2.03, 2.04, 2.05]
>>> y = [-6.365470e-03, -4.790580e-03, -3.204320e-03, -1.607270e-03,
... 4.440892e-16, 1.616930e-03, 3.243000e-03, 4.877670e-03,
... 6.520430e-03, 8.170770e-03]
>>> from scipy.interpolate import UnivariateSpline
>>> spl = UnivariateSpline(x, y, s=0)
>>> spl.roots()
array([], dtype=float64)
Converting to a PPoly object does find the roots at `x=2`:
>>> from scipy.interpolate import splrep, PPoly
>>> tck = splrep(x, y, s=0)
>>> ppoly = PPoly.from_spline(tck)
>>> ppoly.roots(extrapolate=False)
array([2.])
See Also
--------
sproot
PPoly.roots
"""
k = self._data[5]
if k == 3:
t = self._eval_args[0]
mest = 3 * (len(t) - 7)
return _fitpack_impl.sproot(self._eval_args, mest=mest)
raise NotImplementedError('finding roots unsupported for '
'non-cubic splines')
def derivative(self, n=1):
"""
Construct a new spline representing the derivative of this spline.
Parameters
----------
n : int, optional
Order of derivative to evaluate. Default: 1
Returns
-------
spline : UnivariateSpline
Spline of order k2=k-n representing the derivative of this
spline.
See Also
--------
splder, antiderivative
Notes
-----
.. versionadded:: 0.13.0
Examples
--------
This can be used for finding maxima of a curve:
>>> import numpy as np
>>> from scipy.interpolate import UnivariateSpline
>>> x = np.linspace(0, 10, 70)
>>> y = np.sin(x)
>>> spl = UnivariateSpline(x, y, k=4, s=0)
Now, differentiate the spline and find the zeros of the
derivative. (NB: `sproot` only works for order 3 splines, so we
fit an order 4 spline):
>>> spl.derivative().roots() / np.pi
array([ 0.50000001, 1.5 , 2.49999998])
This agrees well with roots :math:`\\pi/2 + n\\pi` of
:math:`\\cos(x) = \\sin'(x)`.
"""
tck = _fitpack_impl.splder(self._eval_args, n)
# if self.ext is 'const', derivative.ext will be 'zeros'
ext = 1 if self.ext == 3 else self.ext
return UnivariateSpline._from_tck(tck, ext=ext)
def antiderivative(self, n=1):
"""
Construct a new spline representing the antiderivative of this spline.
Parameters
----------
n : int, optional
Order of antiderivative to evaluate. Default: 1
Returns
-------
spline : UnivariateSpline
Spline of order k2=k+n representing the antiderivative of this
spline.
Notes
-----
.. versionadded:: 0.13.0
See Also
--------
splantider, derivative
Examples
--------
>>> import numpy as np
>>> from scipy.interpolate import UnivariateSpline
>>> x = np.linspace(0, np.pi/2, 70)
>>> y = 1 / np.sqrt(1 - 0.8*np.sin(x)**2)
>>> spl = UnivariateSpline(x, y, s=0)
The derivative is the inverse operation of the antiderivative,
although some floating point error accumulates:
>>> spl(1.7), spl.antiderivative().derivative()(1.7)
(array(2.1565429877197317), array(2.1565429877201865))
Antiderivative can be used to evaluate definite integrals:
>>> ispl = spl.antiderivative()
>>> ispl(np.pi/2) - ispl(0)
2.2572053588768486
This is indeed an approximation to the complete elliptic integral
:math:`K(m) = \\int_0^{\\pi/2} [1 - m\\sin^2 x]^{-1/2} dx`:
>>> from scipy.special import ellipk
>>> ellipk(0.8)
2.2572053268208538
"""
tck = _fitpack_impl.splantider(self._eval_args, n)
return UnivariateSpline._from_tck(tck, self.ext)
class InterpolatedUnivariateSpline(UnivariateSpline):
"""
1-D interpolating spline for a given set of data points.
Fits a spline y = spl(x) of degree `k` to the provided `x`, `y` data.
Spline function passes through all provided points. Equivalent to
`UnivariateSpline` with `s` = 0.
Parameters
----------
x : (N,) array_like
Input dimension of data points -- must be strictly increasing
y : (N,) array_like
Input dimension of data points
w : (N,) array_like, optional
Weights for spline fitting. Must be positive. If None (default),
weights are all 1.
bbox : (2,) array_like, optional
2-sequence specifying the boundary of the approximation interval. If
None (default), ``bbox=[x[0], x[-1]]``.
k : int, optional
Degree of the smoothing spline. Must be ``1 <= k <= 5``. Default is
``k = 3``, a cubic spline.
ext : int or str, optional
Controls the extrapolation mode for elements
not in the interval defined by the knot sequence.
* if ext=0 or 'extrapolate', return the extrapolated value.
* if ext=1 or 'zeros', return 0
* if ext=2 or 'raise', raise a ValueError
* if ext=3 or 'const', return the boundary value.
The default value is 0.
check_finite : bool, optional
Whether to check that the input arrays contain only finite numbers.
Disabling may give a performance gain, but may result in problems
(crashes, non-termination or nonsensical results) if the inputs
do contain infinities or NaNs.
Default is False.
See Also
--------
UnivariateSpline :
a smooth univariate spline to fit a given set of data points.
LSQUnivariateSpline :
a spline for which knots are user-selected
SmoothBivariateSpline :
a smoothing bivariate spline through the given points
LSQBivariateSpline :
a bivariate spline using weighted least-squares fitting
splrep :
a function to find the B-spline representation of a 1-D curve
splev :
a function to evaluate a B-spline or its derivatives
sproot :
a function to find the roots of a cubic B-spline
splint :
a function to evaluate the definite integral of a B-spline between two
given points
spalde :
a function to evaluate all derivatives of a B-spline
Notes
-----
The number of data points must be larger than the spline degree `k`.
Examples
--------
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.interpolate import InterpolatedUnivariateSpline
>>> rng = np.random.default_rng()
>>> x = np.linspace(-3, 3, 50)
>>> y = np.exp(-x**2) + 0.1 * rng.standard_normal(50)
>>> spl = InterpolatedUnivariateSpline(x, y)
>>> plt.plot(x, y, 'ro', ms=5)
>>> xs = np.linspace(-3, 3, 1000)
>>> plt.plot(xs, spl(xs), 'g', lw=3, alpha=0.7)
>>> plt.show()
Notice that the ``spl(x)`` interpolates `y`:
>>> spl.get_residual()
0.0
"""
def __init__(self, x, y, w=None, bbox=[None]*2, k=3,
ext=0, check_finite=False):
x, y, w, bbox, self.ext = self.validate_input(x, y, w, bbox, k, None,
ext, check_finite)
if not np.all(diff(x) > 0.0):
raise ValueError('x must be strictly increasing')
# _data == x,y,w,xb,xe,k,s,n,t,c,fp,fpint,nrdata,ier
self._data = dfitpack.fpcurf0(x, y, k, w=w, xb=bbox[0],
xe=bbox[1], s=0)
self._reset_class()
_fpchec_error_string = """The input parameters have been rejected by fpchec. \
This means that at least one of the following conditions is violated:
1) k+1 <= n-k-1 <= m
2) t(1) <= t(2) <= ... <= t(k+1)
t(n-k) <= t(n-k+1) <= ... <= t(n)
3) t(k+1) < t(k+2) < ... < t(n-k)
4) t(k+1) <= x(i) <= t(n-k)
5) The conditions specified by Schoenberg and Whitney must hold
for at least one subset of data points, i.e., there must be a
subset of data points y(j) such that
t(j) < y(j) < t(j+k+1), j=1,2,...,n-k-1
"""
class LSQUnivariateSpline(UnivariateSpline):
"""
1-D spline with explicit internal knots.
Fits a spline y = spl(x) of degree `k` to the provided `x`, `y` data. `t`
specifies the internal knots of the spline
Parameters
----------
x : (N,) array_like
Input dimension of data points -- must be increasing
y : (N,) array_like
Input dimension of data points
t : (M,) array_like
interior knots of the spline. Must be in ascending order and::
bbox[0] < t[0] < ... < t[-1] < bbox[-1]
w : (N,) array_like, optional
weights for spline fitting. Must be positive. If None (default),
weights are all 1.
bbox : (2,) array_like, optional
2-sequence specifying the boundary of the approximation interval. If
None (default), ``bbox = [x[0], x[-1]]``.
k : int, optional
Degree of the smoothing spline. Must be 1 <= `k` <= 5.
Default is `k` = 3, a cubic spline.
ext : int or str, optional
Controls the extrapolation mode for elements
not in the interval defined by the knot sequence.
* if ext=0 or 'extrapolate', return the extrapolated value.
* if ext=1 or 'zeros', return 0
* if ext=2 or 'raise', raise a ValueError
* if ext=3 or 'const', return the boundary value.
The default value is 0.
check_finite : bool, optional
Whether to check that the input arrays contain only finite numbers.
Disabling may give a performance gain, but may result in problems
(crashes, non-termination or nonsensical results) if the inputs
do contain infinities or NaNs.
Default is False.
Raises
------
ValueError
If the interior knots do not satisfy the Schoenberg-Whitney conditions
See Also
--------
UnivariateSpline :
a smooth univariate spline to fit a given set of data points.
InterpolatedUnivariateSpline :
an interpolating univariate spline for a given set of data points.
splrep :
a function to find the B-spline representation of a 1-D curve
splev :
a function to evaluate a B-spline or its derivatives
sproot :
a function to find the roots of a cubic B-spline
splint :
a function to evaluate the definite integral of a B-spline between two
given points
spalde :
a function to evaluate all derivatives of a B-spline
Notes
-----
The number of data points must be larger than the spline degree `k`.
Knots `t` must satisfy the Schoenberg-Whitney conditions,
i.e., there must be a subset of data points ``x[j]`` such that
``t[j] < x[j] < t[j+k+1]``, for ``j=0, 1,...,n-k-2``.
Examples
--------
>>> import numpy as np
>>> from scipy.interpolate import LSQUnivariateSpline, UnivariateSpline
>>> import matplotlib.pyplot as plt
>>> rng = np.random.default_rng()
>>> x = np.linspace(-3, 3, 50)
>>> y = np.exp(-x**2) + 0.1 * rng.standard_normal(50)
Fit a smoothing spline with pre-defined internal knots:
>>> t = [-1, 0, 1]
>>> spl = LSQUnivariateSpline(x, y, t)
>>> xs = np.linspace(-3, 3, 1000)
>>> plt.plot(x, y, 'ro', ms=5)
>>> plt.plot(xs, spl(xs), 'g-', lw=3)
>>> plt.show()
Check the knot vector:
>>> spl.get_knots()
array([-3., -1., 0., 1., 3.])
Constructing an LSQ spline using the knots from another spline:
>>> x = np.arange(10)
>>> s = UnivariateSpline(x, x, s=0)
>>> s.get_knots()
array([ 0., 2., 3., 4., 5., 6., 7., 9.])
>>> knt = s.get_knots()
>>> s1 = LSQUnivariateSpline(x, x, knt[1:-1]) # Chop 1st and last knot
>>> s1.get_knots()
array([ 0., 2., 3., 4., 5., 6., 7., 9.])
"""
def __init__(self, x, y, t, w=None, bbox=[None]*2, k=3,
ext=0, check_finite=False):
x, y, w, bbox, self.ext = self.validate_input(x, y, w, bbox, k, None,
ext, check_finite)
if not np.all(diff(x) >= 0.0):
raise ValueError('x must be increasing')
# _data == x,y,w,xb,xe,k,s,n,t,c,fp,fpint,nrdata,ier
xb = bbox[0]
xe = bbox[1]
if xb is None:
xb = x[0]
if xe is None:
xe = x[-1]
t = concatenate(([xb]*(k+1), t, [xe]*(k+1)))
n = len(t)
if not np.all(t[k+1:n-k]-t[k:n-k-1] > 0, axis=0):
raise ValueError('Interior knots t must satisfy '
'Schoenberg-Whitney conditions')
if not dfitpack.fpchec(x, t, k) == 0:
raise ValueError(_fpchec_error_string)
data = dfitpack.fpcurfm1(x, y, k, t, w=w, xb=xb, xe=xe)
self._data = data[:-3] + (None, None, data[-1])
self._reset_class()
# ############### Bivariate spline ####################
class _BivariateSplineBase:
""" Base class for Bivariate spline s(x,y) interpolation on the rectangle
[xb,xe] x [yb, ye] calculated from a given set of data points
(x,y,z).
See Also
--------
bisplrep :
a function to find a bivariate B-spline representation of a surface
bisplev :
a function to evaluate a bivariate B-spline and its derivatives
BivariateSpline :
a base class for bivariate splines.
SphereBivariateSpline :
a bivariate spline on a spherical grid
"""
@classmethod
def _from_tck(cls, tck):
"""Construct a spline object from given tck and degree"""
self = cls.__new__(cls)
if len(tck) != 5:
raise ValueError("tck should be a 5 element tuple of tx,"
" ty, c, kx, ky")
self.tck = tck[:3]
self.degrees = tck[3:]
return self
def get_residual(self):
""" Return weighted sum of squared residuals of the spline
approximation: sum ((w[i]*(z[i]-s(x[i],y[i])))**2,axis=0)
"""
return self.fp
def get_knots(self):
""" Return a tuple (tx,ty) where tx,ty contain knots positions
of the spline with respect to x-, y-variable, respectively.
The position of interior and additional knots are given as
t[k+1:-k-1] and t[:k+1]=b, t[-k-1:]=e, respectively.
"""
return self.tck[:2]
def get_coeffs(self):
""" Return spline coefficients."""
return self.tck[2]
def __call__(self, x, y, dx=0, dy=0, grid=True):
"""
Evaluate the spline or its derivatives at given positions.
Parameters
----------
x, y : array_like
Input coordinates.
If `grid` is False, evaluate the spline at points ``(x[i],
y[i]), i=0, ..., len(x)-1``. Standard Numpy broadcasting
is obeyed.
If `grid` is True: evaluate spline at the grid points
defined by the coordinate arrays x, y. The arrays must be
sorted in increasing order.
The ordering of axes is consistent with
``np.meshgrid(..., indexing="ij")`` and inconsistent with the
default ordering ``np.meshgrid(..., indexing="xy")``.
dx : int
Order of x-derivative
.. versionadded:: 0.14.0
dy : int
Order of y-derivative
.. versionadded:: 0.14.0
grid : bool
Whether to evaluate the results on a grid spanned by the
input arrays, or at points specified by the input arrays.
.. versionadded:: 0.14.0
Examples
--------
Suppose that we want to bilinearly interpolate an exponentially decaying
function in 2 dimensions.
>>> import numpy as np
>>> from scipy.interpolate import RectBivariateSpline
We sample the function on a coarse grid. Note that the default indexing="xy"
of meshgrid would result in an unexpected (transposed) result after
interpolation.
>>> xarr = np.linspace(-3, 3, 100)
>>> yarr = np.linspace(-3, 3, 100)
>>> xgrid, ygrid = np.meshgrid(xarr, yarr, indexing="ij")
The function to interpolate decays faster along one axis than the other.
>>> zdata = np.exp(-np.sqrt((xgrid / 2) ** 2 + ygrid**2))
Next we sample on a finer grid using interpolation (kx=ky=1 for bilinear).
>>> rbs = RectBivariateSpline(xarr, yarr, zdata, kx=1, ky=1)
>>> xarr_fine = np.linspace(-3, 3, 200)
>>> yarr_fine = np.linspace(-3, 3, 200)
>>> xgrid_fine, ygrid_fine = np.meshgrid(xarr_fine, yarr_fine, indexing="ij")
>>> zdata_interp = rbs(xgrid_fine, ygrid_fine, grid=False)
And check that the result agrees with the input by plotting both.
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax1 = fig.add_subplot(1, 2, 1, aspect="equal")
>>> ax2 = fig.add_subplot(1, 2, 2, aspect="equal")
>>> ax1.imshow(zdata)
>>> ax2.imshow(zdata_interp)
>>> plt.show()
"""
x = np.asarray(x)
y = np.asarray(y)
tx, ty, c = self.tck[:3]
kx, ky = self.degrees
if grid:
if x.size == 0 or y.size == 0:
return np.zeros((x.size, y.size), dtype=self.tck[2].dtype)
if (x.size >= 2) and (not np.all(np.diff(x) >= 0.0)):
raise ValueError("x must be strictly increasing when `grid` is True")
if (y.size >= 2) and (not np.all(np.diff(y) >= 0.0)):
raise ValueError("y must be strictly increasing when `grid` is True")
if dx or dy:
z, ier = dfitpack.parder(tx, ty, c, kx, ky, dx, dy, x, y)
if not ier == 0:
raise ValueError("Error code returned by parder: %s" % ier)
else:
z, ier = dfitpack.bispev(tx, ty, c, kx, ky, x, y)
if not ier == 0:
raise ValueError("Error code returned by bispev: %s" % ier)
else:
# standard Numpy broadcasting
if x.shape != y.shape:
x, y = np.broadcast_arrays(x, y)
shape = x.shape
x = x.ravel()
y = y.ravel()
if x.size == 0 or y.size == 0:
return np.zeros(shape, dtype=self.tck[2].dtype)
if dx or dy:
z, ier = dfitpack.pardeu(tx, ty, c, kx, ky, dx, dy, x, y)
if not ier == 0:
raise ValueError("Error code returned by pardeu: %s" % ier)
else:
z, ier = dfitpack.bispeu(tx, ty, c, kx, ky, x, y)
if not ier == 0:
raise ValueError("Error code returned by bispeu: %s" % ier)
z = z.reshape(shape)
return z
def partial_derivative(self, dx, dy):
"""Construct a new spline representing a partial derivative of this
spline.
Parameters
----------
dx, dy : int
Orders of the derivative in x and y respectively. They must be
non-negative integers and less than the respective degree of the
original spline (self) in that direction (``kx``, ``ky``).
Returns
-------
spline :
A new spline of degrees (``kx - dx``, ``ky - dy``) representing the
derivative of this spline.
Notes
-----
.. versionadded:: 1.9.0
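An illustrative sketch, differentiating an exactly polynomial surface:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = y = np.linspace(0, 1, 10)
z = np.outer(x**2, np.ones_like(y))     # z(x, y) = x**2
spl = RectBivariateSpline(x, y, z)      # kx = ky = 3, interpolating
dspl = spl.partial_derivative(1, 0)     # d/dx, a degree-(2, 3) spline
val = float(dspl(0.5, 0.5, grid=False))  # d/dx x**2 = 2x -> 1.0 at x = 0.5
```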
"""
if dx == 0 and dy == 0:
return self
else:
kx, ky = self.degrees
if not (dx >= 0 and dy >= 0):
raise ValueError("order of derivative must be positive or"
" zero")
if not (dx < kx and dy < ky):
raise ValueError("order of derivative must be less than"
" degree of spline")
tx, ty, c = self.tck[:3]
newc, ier = dfitpack.pardtc(tx, ty, c, kx, ky, dx, dy)
if ier != 0:
# This should not happen under normal conditions.
raise ValueError("Unexpected error code returned by"
" pardtc: %d" % ier)
nx = len(tx)
ny = len(ty)
newtx = tx[dx:nx - dx]
newty = ty[dy:ny - dy]
newkx, newky = kx - dx, ky - dy
newclen = (nx - dx - kx - 1) * (ny - dy - ky - 1)
return _DerivedBivariateSpline._from_tck((newtx, newty,
newc[:newclen],
newkx, newky))
_surfit_messages = {1: """
The required storage space exceeds the available storage space: nxest
or nyest too small, or s too small.
The weighted least-squares spline corresponds to the current set of
knots.""",
2: """
A theoretically impossible result was found during the iteration
process for finding a smoothing spline with fp = s: s too small or
badly chosen eps.
Weighted sum of squared residuals does not satisfy abs(fp-s)/s < tol.""",
3: """
the maximal number of iterations maxit (set to 20 by the program)
allowed for finding a smoothing spline with fp=s has been reached:
s too small.
Weighted sum of squared residuals does not satisfy abs(fp-s)/s < tol.""",
4: """
No more knots can be added because the number of b-spline coefficients
(nx-kx-1)*(ny-ky-1) already exceeds the number of data points m:
either s or m too small.
The weighted least-squares spline corresponds to the current set of
knots.""",
5: """
No more knots can be added because the additional knot would (quasi)
coincide with an old one: s too small or too large a weight to an
inaccurate data point.
The weighted least-squares spline corresponds to the current set of
knots.""",
10: """
Error on entry, no approximation returned. The following conditions
must hold:
xb<=x[i]<=xe, yb<=y[i]<=ye, w[i]>0, i=0..m-1
If iopt==-1, then
xb<tx[kx+1]<tx[kx+2]<...<tx[nx-kx-2]<xe
yb<ty[ky+1]<ty[ky+2]<...<ty[ny-ky-2]<ye""",
-3: """
The coefficients of the spline returned have been computed as the
minimal norm least-squares solution of a (numerically) rank deficient
system (deficiency=%i). If deficiency is large, the results may be
inaccurate. Deficiency may strongly depend on the value of eps."""
}
class BivariateSpline(_BivariateSplineBase):
"""
Base class for bivariate splines.
This describes a spline ``s(x, y)`` of degrees ``kx`` and ``ky`` on
the rectangle ``[xb, xe] * [yb, ye]`` calculated from a given set
of data points ``(x, y, z)``.
This class is meant to be subclassed, not instantiated directly.
To construct these splines, call either `SmoothBivariateSpline` or
`LSQBivariateSpline` or `RectBivariateSpline`.
See Also
--------
UnivariateSpline :
a smooth univariate spline to fit a given set of data points.
SmoothBivariateSpline :
a smoothing bivariate spline through the given points
LSQBivariateSpline :
a bivariate spline using weighted least-squares fitting
RectSphereBivariateSpline :
a bivariate spline over a rectangular mesh on a sphere
SmoothSphereBivariateSpline :
a smoothing bivariate spline in spherical coordinates
LSQSphereBivariateSpline :
a bivariate spline in spherical coordinates using weighted
least-squares fitting
RectBivariateSpline :
a bivariate spline over a rectangular mesh.
bisplrep :
a function to find a bivariate B-spline representation of a surface
bisplev :
a function to evaluate a bivariate B-spline and its derivatives
"""
def ev(self, xi, yi, dx=0, dy=0):
"""
Evaluate the spline at points
Returns the interpolated value at ``(xi[i], yi[i]),
i=0,...,len(xi)-1``.
Parameters
----------
xi, yi : array_like
Input coordinates. Standard Numpy broadcasting is obeyed.
The ordering of axes is consistent with
``np.meshgrid(..., indexing="ij")`` and inconsistent with the
default ordering ``np.meshgrid(..., indexing="xy")``.
dx : int, optional
Order of x-derivative
.. versionadded:: 0.14.0
dy : int, optional
Order of y-derivative
.. versionadded:: 0.14.0
Examples
--------
Suppose that we want to bilinearly interpolate an exponentially decaying
function in 2 dimensions.
>>> import numpy as np
>>> from scipy.interpolate import RectBivariateSpline
>>> def f(x, y):
... return np.exp(-np.sqrt((x / 2) ** 2 + y**2))
We sample the function on a coarse grid and set up the interpolator. Note that
the default ``indexing="xy"`` of meshgrid would result in an unexpected (transposed)
result after interpolation.
>>> xarr = np.linspace(-3, 3, 21)
>>> yarr = np.linspace(-3, 3, 21)
>>> xgrid, ygrid = np.meshgrid(xarr, yarr, indexing="ij")
>>> zdata = f(xgrid, ygrid)
>>> rbs = RectBivariateSpline(xarr, yarr, zdata, kx=1, ky=1)
Next we sample the function along a diagonal slice through the coordinate space
on a finer grid using interpolation.
>>> xinterp = np.linspace(-3, 3, 201)
>>> yinterp = np.linspace(3, -3, 201)
>>> zinterp = rbs.ev(xinterp, yinterp)
And check that the interpolation passes through the function evaluations as a
function of the distance from the origin along the slice.
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax1 = fig.add_subplot(1, 1, 1)
>>> ax1.plot(np.sqrt(xarr**2 + yarr**2), np.diag(zdata), "or")
>>> ax1.plot(np.sqrt(xinterp**2 + yinterp**2), zinterp, "-b")
>>> plt.show()
"""
return self.__call__(xi, yi, dx=dx, dy=dy, grid=False)
def integral(self, xa, xb, ya, yb):
"""
Evaluate the integral of the spline over area [xa,xb] x [ya,yb].
Parameters
----------
xa, xb : float
The end-points of the x integration interval.
ya, yb : float
The end-points of the y integration interval.
Returns
-------
integ : float
The value of the resulting integral.
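For example, integrating a constant surface over the unit square
(illustrative sketch):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = y = np.linspace(0, 1, 10)
z = np.ones((10, 10))                   # constant surface z = 1
spl = RectBivariateSpline(x, y, z)
area = float(spl.integral(0, 1, 0, 1))  # integral of 1 over [0,1] x [0,1]
```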
"""
tx, ty, c = self.tck[:3]
kx, ky = self.degrees
return dfitpack.dblint(tx, ty, c, kx, ky, xa, xb, ya, yb)
@staticmethod
def _validate_input(x, y, z, w, kx, ky, eps):
x, y, z = np.asarray(x), np.asarray(y), np.asarray(z)
if not x.size == y.size == z.size:
raise ValueError('x, y, and z should have the same length')
if w is not None:
w = np.asarray(w)
if x.size != w.size:
raise ValueError('x, y, z, and w should have the same length')
elif not np.all(w >= 0.0):
raise ValueError('w should be positive')
if (eps is not None) and (not 0.0 < eps < 1.0):
raise ValueError('eps should be within the open interval (0, 1)')
if not x.size >= (kx + 1) * (ky + 1):
raise ValueError('The length of x, y and z should be at least'
' (kx+1) * (ky+1)')
return x, y, z, w
class _DerivedBivariateSpline(_BivariateSplineBase):
"""Bivariate spline constructed from the coefficients and knots of another
spline.
Notes
-----
The class is not meant to be instantiated directly from the data to be
interpolated or smoothed. As a result, its ``fp`` attribute and
``get_residual`` method are inherited but overridden; ``AttributeError`` is
raised when they are accessed.
The other inherited attributes can be used as usual.
"""
_invalid_why = ("is unavailable, because _DerivedBivariateSpline"
" instance is not constructed from data that are to be"
" interpolated or smoothed, but derived from the"
" underlying knots and coefficients of another spline"
" object")
@property
def fp(self):
raise AttributeError("attribute \"fp\" %s" % self._invalid_why)
def get_residual(self):
raise AttributeError("method \"get_residual\" %s" % self._invalid_why)
class SmoothBivariateSpline(BivariateSpline):
"""
Smooth bivariate spline approximation.
Parameters
----------
x, y, z : array_like
1-D sequences of data points (order is not important).
w : array_like, optional
Positive 1-D sequence of weights, of same length as `x`, `y` and `z`.
bbox : array_like, optional
Sequence of length 4 specifying the boundary of the rectangular
approximation domain. By default,
``bbox=[min(x), max(x), min(y), max(y)]``.
kx, ky : ints, optional
Degrees of the bivariate spline. Default is 3.
s : float, optional
Positive smoothing factor defined for estimation condition:
``sum((w[i]*(z[i]-s(x[i], y[i])))**2, axis=0) <= s``
Default ``s=len(w)`` which should be a good value if ``1/w[i]`` is an
estimate of the standard deviation of ``z[i]``.
eps : float, optional
A threshold for determining the effective rank of an over-determined
linear system of equations. `eps` should have a value within the open
interval ``(0, 1)``, the default is 1e-16.
See Also
--------
BivariateSpline :
a base class for bivariate splines.
UnivariateSpline :
a smooth univariate spline to fit a given set of data points.
LSQBivariateSpline :
a bivariate spline using weighted least-squares fitting
RectSphereBivariateSpline :
a bivariate spline over a rectangular mesh on a sphere
SmoothSphereBivariateSpline :
a smoothing bivariate spline in spherical coordinates
LSQSphereBivariateSpline :
a bivariate spline in spherical coordinates using weighted
least-squares fitting
RectBivariateSpline :
a bivariate spline over a rectangular mesh
bisplrep :
a function to find a bivariate B-spline representation of a surface
bisplev :
a function to evaluate a bivariate B-spline and its derivatives
Notes
-----
The length of `x`, `y` and `z` should be at least ``(kx+1) * (ky+1)``.
If the input data is such that input dimensions have incommensurate
units and differ by many orders of magnitude, the interpolant may have
numerical artifacts. Consider rescaling the data before interpolating.
This routine constructs spline knot vectors automatically via the FITPACK
algorithm. The spline knots may be placed away from the data points. For
some data sets, this routine may fail to construct an interpolating spline,
even if one is requested via ``s=0`` parameter. In such situations, it is
recommended to use `bisplrep` / `bisplev` directly instead of this routine
and, if needed, increase the values of ``nxest`` and ``nyest`` parameters
of `bisplrep`.
For linear interpolation, prefer `LinearNDInterpolator`.
See ``https://gist.github.com/ev-br/8544371b40f414b7eaf3fe6217209bff``
for discussion.
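An illustrative sketch: gridded samples of the plane ``z = x + y`` are
recovered by a bilinear (``kx=ky=1``) fit, since the plane lies in the
spline space:

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Scattered samples of an exactly planar surface z = x + y, taken on a grid
xx, yy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
x, y, z = xx.ravel(), yy.ravel(), (xx + yy).ravel()
spl = SmoothBivariateSpline(x, y, z, kx=1, ky=1)
val = float(spl(0.5, 0.5, grid=False))  # the plane is recovered, ~1.0
```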
"""
def __init__(self, x, y, z, w=None, bbox=[None] * 4, kx=3, ky=3, s=None,
eps=1e-16):
x, y, z, w = self._validate_input(x, y, z, w, kx, ky, eps)
bbox = ravel(bbox)
if not bbox.shape == (4,):
raise ValueError('bbox shape should be (4,)')
if s is not None and not s >= 0.0:
raise ValueError("s should be s >= 0.0")
xb, xe, yb, ye = bbox
nx, tx, ny, ty, c, fp, wrk1, ier = dfitpack.surfit_smth(x, y, z, w,
xb, xe, yb,
ye, kx, ky,
s=s, eps=eps,
lwrk2=1)
if ier > 10:  # lwrk2 was too small, re-run
nx, tx, ny, ty, c, fp, wrk1, ier = dfitpack.surfit_smth(x, y, z, w,
xb, xe, yb,
ye, kx, ky,
s=s,
eps=eps,
lwrk2=ier)
if ier in [0, -1, -2]: # normal return
pass
else:
message = _surfit_messages.get(ier, 'ier=%s' % (ier))
warnings.warn(message)
self.fp = fp
self.tck = tx[:nx], ty[:ny], c[:(nx-kx-1)*(ny-ky-1)]
self.degrees = kx, ky
class LSQBivariateSpline(BivariateSpline):
"""
Weighted least-squares bivariate spline approximation.
Parameters
----------
x, y, z : array_like
1-D sequences of data points (order is not important).
tx, ty : array_like
Strictly ordered 1-D sequences of knots coordinates.
w : array_like, optional
Positive 1-D array of weights, of the same length as `x`, `y` and `z`.
bbox : (4,) array_like, optional
Sequence of length 4 specifying the boundary of the rectangular
approximation domain. By default,
``bbox=[min(x,tx),max(x,tx), min(y,ty),max(y,ty)]``.
kx, ky : ints, optional
Degrees of the bivariate spline. Default is 3.
eps : float, optional
A threshold for determining the effective rank of an over-determined
linear system of equations. `eps` should have a value within the open
interval ``(0, 1)``, the default is 1e-16.
See Also
--------
BivariateSpline :
a base class for bivariate splines.
UnivariateSpline :
a smooth univariate spline to fit a given set of data points.
SmoothBivariateSpline :
a smoothing bivariate spline through the given points
RectSphereBivariateSpline :
a bivariate spline over a rectangular mesh on a sphere
SmoothSphereBivariateSpline :
a smoothing bivariate spline in spherical coordinates
LSQSphereBivariateSpline :
a bivariate spline in spherical coordinates using weighted
least-squares fitting
RectBivariateSpline :
a bivariate spline over a rectangular mesh.
bisplrep :
a function to find a bivariate B-spline representation of a surface
bisplev :
a function to evaluate a bivariate B-spline and its derivatives
Notes
-----
The length of `x`, `y` and `z` should be at least ``(kx+1) * (ky+1)``.
If the input data is such that input dimensions have incommensurate
units and differ by many orders of magnitude, the interpolant may have
numerical artifacts. Consider rescaling the data before interpolating.
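An illustrative sketch: samples of ``z = x*y`` on a grid are reproduced by
a bilinear fit with user-chosen interior knots, since ``x*y`` lies in the
tensor-product spline space:

```python
import numpy as np
from scipy.interpolate import LSQBivariateSpline

# Gridded samples of z = x*y, fitted with interior knots at 0.3 and 0.7
xx, yy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
x, y, z = xx.ravel(), yy.ravel(), (xx * yy).ravel()
spl = LSQBivariateSpline(x, y, z, [0.3, 0.7], [0.3, 0.7], kx=1, ky=1)
val = float(spl(0.5, 0.5, grid=False))  # exact to rounding, ~0.25
```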
"""
def __init__(self, x, y, z, tx, ty, w=None, bbox=[None]*4, kx=3, ky=3,
eps=None):
x, y, z, w = self._validate_input(x, y, z, w, kx, ky, eps)
bbox = ravel(bbox)
if not bbox.shape == (4,):
raise ValueError('bbox shape should be (4,)')
nx = 2*kx+2+len(tx)
ny = 2*ky+2+len(ty)
# The Fortran subroutine "surfit" (called as dfitpack.surfit_lsq)
# requires that the knot arrays passed as input should be "real
# array(s) of dimension nmax" where "nmax" refers to the greater of nx
# and ny. We pad the tx1/ty1 arrays here so that this is satisfied, and
# slice them to the desired sizes upon return.
nmax = max(nx, ny)
tx1 = zeros((nmax,), float)
ty1 = zeros((nmax,), float)
tx1[kx+1:nx-kx-1] = tx
ty1[ky+1:ny-ky-1] = ty
xb, xe, yb, ye = bbox
tx1, ty1, c, fp, ier = dfitpack.surfit_lsq(x, y, z, nx, tx1, ny, ty1,
w, xb, xe, yb, ye,
kx, ky, eps, lwrk2=1)
if ier > 10:
tx1, ty1, c, fp, ier = dfitpack.surfit_lsq(x, y, z,
nx, tx1, ny, ty1, w,
xb, xe, yb, ye,
kx, ky, eps, lwrk2=ier)
        if ier not in [0, -1, -2]:  # anything else is an abnormal return
            if ier < -2:
                deficiency = (nx-kx-1)*(ny-ky-1)+ier
                message = _surfit_messages.get(-3) % (deficiency)
            else:
                message = _surfit_messages.get(ier, 'ier=%s' % (ier))
            warnings.warn(message)
self.fp = fp
self.tck = tx1[:nx], ty1[:ny], c
self.degrees = kx, ky
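A minimal usage sketch of ``LSQBivariateSpline``; the test function, sample
count, and knot locations below are illustrative assumptions, not part of the
library:

```python
import numpy as np
from scipy.interpolate import LSQBivariateSpline

# Scattered samples of a smooth test function (hypothetical data).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 3.0, 200)
y = rng.uniform(0.0, 3.0, 200)
z = np.sin(x) * np.cos(y)

# Interior knots must lie strictly inside the data range.
tx = ty = np.linspace(0.5, 2.5, 4)
spl = LSQBivariateSpline(x, y, z, tx, ty)

# The weighted least-squares fit should reproduce the function closely.
print(abs(spl(1.5, 1.5)[0, 0] - np.sin(1.5) * np.cos(1.5)))
```

Note that evaluating the spline at scalar points returns a 2-D array (the
grid form), hence the ``[0, 0]`` indexing.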
class RectBivariateSpline(BivariateSpline):
"""
Bivariate spline approximation over a rectangular mesh.
Can be used for both smoothing and interpolating data.
Parameters
----------
x,y : array_like
1-D arrays of coordinates in strictly ascending order.
Evaluated points outside the data range will be extrapolated.
z : array_like
2-D array of data with shape (x.size,y.size).
bbox : array_like, optional
Sequence of length 4 specifying the boundary of the rectangular
approximation domain, which means the start and end spline knots of
each dimension are set by these values. By default,
``bbox=[min(x), max(x), min(y), max(y)]``.
kx, ky : ints, optional
Degrees of the bivariate spline. Default is 3.
s : float, optional
Positive smoothing factor defined for estimation condition:
``sum((z[i]-f(x[i], y[i]))**2, axis=0) <= s`` where f is a spline
function. Default is ``s=0``, which is for interpolation.
See Also
--------
BivariateSpline :
a base class for bivariate splines.
UnivariateSpline :
a smooth univariate spline to fit a given set of data points.
SmoothBivariateSpline :
a smoothing bivariate spline through the given points
LSQBivariateSpline :
a bivariate spline using weighted least-squares fitting
RectSphereBivariateSpline :
a bivariate spline over a rectangular mesh on a sphere
SmoothSphereBivariateSpline :
a smoothing bivariate spline in spherical coordinates
LSQSphereBivariateSpline :
a bivariate spline in spherical coordinates using weighted
least-squares fitting
bisplrep :
a function to find a bivariate B-spline representation of a surface
bisplev :
a function to evaluate a bivariate B-spline and its derivatives
Notes
-----
If the input data is such that input dimensions have incommensurate
units and differ by many orders of magnitude, the interpolant may have
numerical artifacts. Consider rescaling the data before interpolating.
"""
def __init__(self, x, y, z, bbox=[None] * 4, kx=3, ky=3, s=0):
x, y, bbox = ravel(x), ravel(y), ravel(bbox)
z = np.asarray(z)
if not np.all(diff(x) > 0.0):
raise ValueError('x must be strictly increasing')
if not np.all(diff(y) > 0.0):
raise ValueError('y must be strictly increasing')
if not x.size == z.shape[0]:
raise ValueError('x dimension of z must have same number of '
'elements as x')
if not y.size == z.shape[1]:
raise ValueError('y dimension of z must have same number of '
'elements as y')
if not bbox.shape == (4,):
raise ValueError('bbox shape should be (4,)')
        if s is not None and not s >= 0.0:
            raise ValueError("s should be non-negative")
z = ravel(z)
xb, xe, yb, ye = bbox
nx, tx, ny, ty, c, fp, ier = dfitpack.regrid_smth(x, y, z, xb, xe, yb,
ye, kx, ky, s)
if ier not in [0, -1, -2]:
msg = _surfit_messages.get(ier, 'ier=%s' % (ier))
raise ValueError(msg)
self.fp = fp
self.tck = tx[:nx], ty[:ny], c[:(nx - kx - 1) * (ny - ky - 1)]
self.degrees = kx, ky
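Unlike the other classes above, ``RectBivariateSpline``'s docstring carries no
Examples section; a short interpolation sketch follows (the grid and the test
function are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Grid samples of a smooth function (illustrative choice).
x = np.linspace(0.0, 4.0, 25)
y = np.linspace(0.0, 4.0, 30)
z = np.sin(x)[:, None] * np.cos(y)[None, :]   # shape (x.size, y.size)

# s=0 (the default) interpolates the data exactly at the grid points.
spl = RectBivariateSpline(x, y, z)
print(spl(2.1, 3.3)[0, 0])   # close to np.sin(2.1) * np.cos(3.3)
```

The returned object is evaluated on the grid spanned by its arguments by
default; pass ``grid=False`` (or use ``ev``) for pointwise evaluation.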
_spherefit_messages = _surfit_messages.copy()
_spherefit_messages[10] = """
ERROR. On entry, the input data are controlled on validity. The following
restrictions must be satisfied:
-1<=iopt<=1, m>=2, ntest>=8 ,npest >=8, 0<eps<1,
0<=teta(i)<=pi, 0<=phi(i)<=2*pi, w(i)>0, i=1,...,m
lwrk1 >= 185+52*v+10*u+14*u*v+8*(u-1)*v**2+8*m
kwrk >= m+(ntest-7)*(npest-7)
if iopt=-1: 8<=nt<=ntest , 9<=np<=npest
0<tt(5)<tt(6)<...<tt(nt-4)<pi
0<tp(5)<tp(6)<...<tp(np-4)<2*pi
if iopt>=0: s>=0
if one of these conditions is found to be violated,control
is immediately repassed to the calling program. in that
case there is no approximation returned."""
_spherefit_messages[-3] = """
WARNING. The coefficients of the spline returned have been computed as the
minimal norm least-squares solution of a (numerically) rank
deficient system (deficiency=%i, rank=%i). Especially if the rank
deficiency, which is computed by 6+(nt-8)*(np-7)+ier, is large,
the results may be inaccurate. They could also seriously depend on
the value of eps."""
class SphereBivariateSpline(_BivariateSplineBase):
"""
    Bivariate spline s(x,y) of degree 3 on a sphere, calculated from a
given set of data points (theta,phi,r).
.. versionadded:: 0.11.0
See Also
--------
bisplrep :
a function to find a bivariate B-spline representation of a surface
bisplev :
a function to evaluate a bivariate B-spline and its derivatives
UnivariateSpline :
a smooth univariate spline to fit a given set of data points.
SmoothBivariateSpline :
a smoothing bivariate spline through the given points
LSQUnivariateSpline :
a univariate spline using weighted least-squares fitting
"""
def __call__(self, theta, phi, dtheta=0, dphi=0, grid=True):
"""
Evaluate the spline or its derivatives at given positions.
Parameters
----------
theta, phi : array_like
Input coordinates.
If `grid` is False, evaluate the spline at points
``(theta[i], phi[i]), i=0, ..., len(x)-1``. Standard
Numpy broadcasting is obeyed.
If `grid` is True: evaluate spline at the grid points
defined by the coordinate arrays theta, phi. The arrays
must be sorted to increasing order.
The ordering of axes is consistent with
``np.meshgrid(..., indexing="ij")`` and inconsistent with the
default ordering ``np.meshgrid(..., indexing="xy")``.
dtheta : int, optional
Order of theta-derivative
.. versionadded:: 0.14.0
dphi : int
Order of phi-derivative
.. versionadded:: 0.14.0
grid : bool
Whether to evaluate the results on a grid spanned by the
input arrays, or at points specified by the input arrays.
.. versionadded:: 0.14.0
Examples
--------
Suppose that we want to use splines to interpolate a bivariate function on a sphere.
The value of the function is known on a grid of longitudes and colatitudes.
>>> import numpy as np
>>> from scipy.interpolate import RectSphereBivariateSpline
>>> def f(theta, phi):
... return np.sin(theta) * np.cos(phi)
We evaluate the function on the grid. Note that the default indexing="xy"
of meshgrid would result in an unexpected (transposed) result after
interpolation.
>>> thetaarr = np.linspace(0, np.pi, 22)[1:-1]
>>> phiarr = np.linspace(0, 2 * np.pi, 21)[:-1]
>>> thetagrid, phigrid = np.meshgrid(thetaarr, phiarr, indexing="ij")
>>> zdata = f(thetagrid, phigrid)
We next set up the interpolator and use it to evaluate the function
on a finer grid.
>>> rsbs = RectSphereBivariateSpline(thetaarr, phiarr, zdata)
>>> thetaarr_fine = np.linspace(0, np.pi, 200)
>>> phiarr_fine = np.linspace(0, 2 * np.pi, 200)
>>> zdata_fine = rsbs(thetaarr_fine, phiarr_fine)
        Finally we plot the coarsely sampled input data alongside the
        finely sampled interpolated data to check that they agree.
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax1 = fig.add_subplot(1, 2, 1)
>>> ax2 = fig.add_subplot(1, 2, 2)
>>> ax1.imshow(zdata)
>>> ax2.imshow(zdata_fine)
>>> plt.show()
"""
theta = np.asarray(theta)
phi = np.asarray(phi)
if theta.size > 0 and (theta.min() < 0. or theta.max() > np.pi):
raise ValueError("requested theta out of bounds.")
return _BivariateSplineBase.__call__(self, theta, phi,
dx=dtheta, dy=dphi, grid=grid)
def ev(self, theta, phi, dtheta=0, dphi=0):
"""
Evaluate the spline at points
Returns the interpolated value at ``(theta[i], phi[i]),
i=0,...,len(theta)-1``.
Parameters
----------
theta, phi : array_like
            Input coordinates. Standard Numpy broadcasting is obeyed.
            The ordering of axes is consistent with
            ``np.meshgrid(..., indexing="ij")`` and inconsistent with the
            default ordering ``np.meshgrid(..., indexing="xy")``.
dtheta : int, optional
Order of theta-derivative
.. versionadded:: 0.14.0
dphi : int, optional
Order of phi-derivative
.. versionadded:: 0.14.0
Examples
--------
Suppose that we want to use splines to interpolate a bivariate function on a sphere.
The value of the function is known on a grid of longitudes and colatitudes.
>>> import numpy as np
>>> from scipy.interpolate import RectSphereBivariateSpline
>>> def f(theta, phi):
... return np.sin(theta) * np.cos(phi)
We evaluate the function on the grid. Note that the default indexing="xy"
of meshgrid would result in an unexpected (transposed) result after
interpolation.
>>> thetaarr = np.linspace(0, np.pi, 22)[1:-1]
>>> phiarr = np.linspace(0, 2 * np.pi, 21)[:-1]
>>> thetagrid, phigrid = np.meshgrid(thetaarr, phiarr, indexing="ij")
>>> zdata = f(thetagrid, phigrid)
We next set up the interpolator and use it to evaluate the function
at points not on the original grid.
>>> rsbs = RectSphereBivariateSpline(thetaarr, phiarr, zdata)
>>> thetainterp = np.linspace(thetaarr[0], thetaarr[-1], 200)
>>> phiinterp = np.linspace(phiarr[0], phiarr[-1], 200)
>>> zinterp = rsbs.ev(thetainterp, phiinterp)
Finally we plot the original data for a diagonal slice through the
initial grid, and the spline approximation along the same slice.
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax1 = fig.add_subplot(1, 1, 1)
>>> ax1.plot(np.sin(thetaarr) * np.sin(phiarr), np.diag(zdata), "or")
>>> ax1.plot(np.sin(thetainterp) * np.sin(phiinterp), zinterp, "-b")
>>> plt.show()
"""
return self.__call__(theta, phi, dtheta=dtheta, dphi=dphi, grid=False)
class SmoothSphereBivariateSpline(SphereBivariateSpline):
"""
Smooth bivariate spline approximation in spherical coordinates.
.. versionadded:: 0.11.0
Parameters
----------
theta, phi, r : array_like
1-D sequences of data points (order is not important). Coordinates
must be given in radians. Theta must lie within the interval
``[0, pi]``, and phi must lie within the interval ``[0, 2pi]``.
w : array_like, optional
Positive 1-D sequence of weights.
s : float, optional
Positive smoothing factor defined for estimation condition:
``sum((w(i)*(r(i) - s(theta(i), phi(i))))**2, axis=0) <= s``
Default ``s=len(w)`` which should be a good value if ``1/w[i]`` is an
estimate of the standard deviation of ``r[i]``.
eps : float, optional
A threshold for determining the effective rank of an over-determined
linear system of equations. `eps` should have a value within the open
interval ``(0, 1)``, the default is 1e-16.
See Also
--------
BivariateSpline :
a base class for bivariate splines.
UnivariateSpline :
a smooth univariate spline to fit a given set of data points.
SmoothBivariateSpline :
a smoothing bivariate spline through the given points
LSQBivariateSpline :
a bivariate spline using weighted least-squares fitting
RectSphereBivariateSpline :
a bivariate spline over a rectangular mesh on a sphere
LSQSphereBivariateSpline :
a bivariate spline in spherical coordinates using weighted
least-squares fitting
RectBivariateSpline :
a bivariate spline over a rectangular mesh.
bisplrep :
a function to find a bivariate B-spline representation of a surface
bisplev :
a function to evaluate a bivariate B-spline and its derivatives
Notes
-----
For more information, see the FITPACK_ site about this function.
.. _FITPACK: http://www.netlib.org/dierckx/sphere.f
Examples
--------
Suppose we have global data on a coarse grid (the input data does not
have to be on a grid):
>>> import numpy as np
>>> theta = np.linspace(0., np.pi, 7)
>>> phi = np.linspace(0., 2*np.pi, 9)
>>> data = np.empty((theta.shape[0], phi.shape[0]))
>>> data[:,0], data[0,:], data[-1,:] = 0., 0., 0.
>>> data[1:-1,1], data[1:-1,-1] = 1., 1.
>>> data[1,1:-1], data[-2,1:-1] = 1., 1.
>>> data[2:-2,2], data[2:-2,-2] = 2., 2.
>>> data[2,2:-2], data[-3,2:-2] = 2., 2.
>>> data[3,3:-2] = 3.
>>> data = np.roll(data, 4, 1)
We need to set up the interpolator object
>>> lats, lons = np.meshgrid(theta, phi)
>>> from scipy.interpolate import SmoothSphereBivariateSpline
>>> lut = SmoothSphereBivariateSpline(lats.ravel(), lons.ravel(),
... data.T.ravel(), s=3.5)
As a first test, we'll see what the algorithm returns when run on the
input coordinates
>>> data_orig = lut(theta, phi)
Finally we interpolate the data to a finer grid
>>> fine_lats = np.linspace(0., np.pi, 70)
>>> fine_lons = np.linspace(0., 2 * np.pi, 90)
>>> data_smth = lut(fine_lats, fine_lons)
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax1 = fig.add_subplot(131)
>>> ax1.imshow(data, interpolation='nearest')
>>> ax2 = fig.add_subplot(132)
>>> ax2.imshow(data_orig, interpolation='nearest')
>>> ax3 = fig.add_subplot(133)
>>> ax3.imshow(data_smth, interpolation='nearest')
>>> plt.show()
"""
def __init__(self, theta, phi, r, w=None, s=0., eps=1E-16):
theta, phi, r = np.asarray(theta), np.asarray(phi), np.asarray(r)
# input validation
if not ((0.0 <= theta).all() and (theta <= np.pi).all()):
raise ValueError('theta should be between [0, pi]')
if not ((0.0 <= phi).all() and (phi <= 2.0 * np.pi).all()):
raise ValueError('phi should be between [0, 2pi]')
        if w is not None:
            w = np.asarray(w)
            if w.ndim == 0:
                # broadcast a scalar weight to every data point
                w = np.full(len(theta), float(w))
            if not (w >= 0.0).all():
                raise ValueError('w should be positive')
        if not s >= 0.0:
            raise ValueError('s should be non-negative')
        if not 0.0 < eps < 1.0:
            raise ValueError('eps should be between (0, 1)')
nt_, tt_, np_, tp_, c, fp, ier = dfitpack.spherfit_smth(theta, phi,
r, w=w, s=s,
eps=eps)
if ier not in [0, -1, -2]:
message = _spherefit_messages.get(ier, 'ier=%s' % (ier))
raise ValueError(message)
self.fp = fp
self.tck = tt_[:nt_], tp_[:np_], c[:(nt_ - 4) * (np_ - 4)]
self.degrees = (3, 3)
def __call__(self, theta, phi, dtheta=0, dphi=0, grid=True):
theta = np.asarray(theta)
phi = np.asarray(phi)
if phi.size > 0 and (phi.min() < 0. or phi.max() > 2. * np.pi):
raise ValueError("requested phi out of bounds.")
return SphereBivariateSpline.__call__(self, theta, phi, dtheta=dtheta,
dphi=dphi, grid=grid)
class LSQSphereBivariateSpline(SphereBivariateSpline):
"""
Weighted least-squares bivariate spline approximation in spherical
coordinates.
Determines a smoothing bicubic spline according to a given
set of knots in the `theta` and `phi` directions.
.. versionadded:: 0.11.0
Parameters
----------
theta, phi, r : array_like
1-D sequences of data points (order is not important). Coordinates
must be given in radians. Theta must lie within the interval
``[0, pi]``, and phi must lie within the interval ``[0, 2pi]``.
tt, tp : array_like
Strictly ordered 1-D sequences of knots coordinates.
Coordinates must satisfy ``0 < tt[i] < pi``, ``0 < tp[i] < 2*pi``.
w : array_like, optional
Positive 1-D sequence of weights, of the same length as `theta`, `phi`
and `r`.
eps : float, optional
A threshold for determining the effective rank of an over-determined
linear system of equations. `eps` should have a value within the
open interval ``(0, 1)``, the default is 1e-16.
See Also
--------
BivariateSpline :
a base class for bivariate splines.
UnivariateSpline :
a smooth univariate spline to fit a given set of data points.
SmoothBivariateSpline :
a smoothing bivariate spline through the given points
LSQBivariateSpline :
a bivariate spline using weighted least-squares fitting
RectSphereBivariateSpline :
a bivariate spline over a rectangular mesh on a sphere
SmoothSphereBivariateSpline :
a smoothing bivariate spline in spherical coordinates
RectBivariateSpline :
a bivariate spline over a rectangular mesh.
bisplrep :
a function to find a bivariate B-spline representation of a surface
bisplev :
a function to evaluate a bivariate B-spline and its derivatives
Notes
-----
For more information, see the FITPACK_ site about this function.
.. _FITPACK: http://www.netlib.org/dierckx/sphere.f
Examples
--------
Suppose we have global data on a coarse grid (the input data does not
have to be on a grid):
>>> from scipy.interpolate import LSQSphereBivariateSpline
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> theta = np.linspace(0, np.pi, num=7)
>>> phi = np.linspace(0, 2*np.pi, num=9)
>>> data = np.empty((theta.shape[0], phi.shape[0]))
>>> data[:,0], data[0,:], data[-1,:] = 0., 0., 0.
>>> data[1:-1,1], data[1:-1,-1] = 1., 1.
>>> data[1,1:-1], data[-2,1:-1] = 1., 1.
>>> data[2:-2,2], data[2:-2,-2] = 2., 2.
>>> data[2,2:-2], data[-3,2:-2] = 2., 2.
>>> data[3,3:-2] = 3.
>>> data = np.roll(data, 4, 1)
We need to set up the interpolator object. Here, we must also specify the
coordinates of the knots to use.
>>> lats, lons = np.meshgrid(theta, phi)
>>> knotst, knotsp = theta.copy(), phi.copy()
>>> knotst[0] += .0001
>>> knotst[-1] -= .0001
>>> knotsp[0] += .0001
>>> knotsp[-1] -= .0001
>>> lut = LSQSphereBivariateSpline(lats.ravel(), lons.ravel(),
... data.T.ravel(), knotst, knotsp)
As a first test, we'll see what the algorithm returns when run on the
input coordinates
>>> data_orig = lut(theta, phi)
Finally we interpolate the data to a finer grid
>>> fine_lats = np.linspace(0., np.pi, 70)
>>> fine_lons = np.linspace(0., 2*np.pi, 90)
>>> data_lsq = lut(fine_lats, fine_lons)
>>> fig = plt.figure()
>>> ax1 = fig.add_subplot(131)
>>> ax1.imshow(data, interpolation='nearest')
>>> ax2 = fig.add_subplot(132)
>>> ax2.imshow(data_orig, interpolation='nearest')
>>> ax3 = fig.add_subplot(133)
>>> ax3.imshow(data_lsq, interpolation='nearest')
>>> plt.show()
"""
def __init__(self, theta, phi, r, tt, tp, w=None, eps=1E-16):
theta, phi, r = np.asarray(theta), np.asarray(phi), np.asarray(r)
tt, tp = np.asarray(tt), np.asarray(tp)
if not ((0.0 <= theta).all() and (theta <= np.pi).all()):
raise ValueError('theta should be between [0, pi]')
if not ((0.0 <= phi).all() and (phi <= 2*np.pi).all()):
raise ValueError('phi should be between [0, 2pi]')
if not ((0.0 < tt).all() and (tt < np.pi).all()):
raise ValueError('tt should be between (0, pi)')
if not ((0.0 < tp).all() and (tp < 2*np.pi).all()):
raise ValueError('tp should be between (0, 2pi)')
        if w is not None:
            w = np.asarray(w)
            if w.ndim == 0:
                # broadcast a scalar weight to every data point
                w = np.full(len(theta), float(w))
            if not (w >= 0.0).all():
                raise ValueError('w should be positive')
        if not 0.0 < eps < 1.0:
            raise ValueError('eps should be between (0, 1)')
nt_, np_ = 8 + len(tt), 8 + len(tp)
tt_, tp_ = zeros((nt_,), float), zeros((np_,), float)
tt_[4:-4], tp_[4:-4] = tt, tp
tt_[-4:], tp_[-4:] = np.pi, 2. * np.pi
tt_, tp_, c, fp, ier = dfitpack.spherfit_lsq(theta, phi, r, tt_, tp_,
w=w, eps=eps)
if ier > 0:
message = _spherefit_messages.get(ier, 'ier=%s' % (ier))
raise ValueError(message)
self.fp = fp
self.tck = tt_, tp_, c
self.degrees = (3, 3)
def __call__(self, theta, phi, dtheta=0, dphi=0, grid=True):
theta = np.asarray(theta)
phi = np.asarray(phi)
if phi.size > 0 and (phi.min() < 0. or phi.max() > 2. * np.pi):
raise ValueError("requested phi out of bounds.")
return SphereBivariateSpline.__call__(self, theta, phi, dtheta=dtheta,
dphi=dphi, grid=grid)
_spfit_messages = _surfit_messages.copy()
_spfit_messages[10] = """
ERROR: on entry, the input data are controlled on validity
the following restrictions must be satisfied.
-1<=iopt(1)<=1, 0<=iopt(2)<=1, 0<=iopt(3)<=1,
-1<=ider(1)<=1, 0<=ider(2)<=1, ider(2)=0 if iopt(2)=0.
-1<=ider(3)<=1, 0<=ider(4)<=1, ider(4)=0 if iopt(3)=0.
mu >= mumin (see above), mv >= 4, nuest >=8, nvest >= 8,
kwrk>=5+mu+mv+nuest+nvest,
lwrk >= 12+nuest*(mv+nvest+3)+nvest*24+4*mu+8*mv+max(nuest,mv+nvest)
0< u(i-1)<u(i)< pi,i=2,..,mu,
-pi<=v(1)< pi, v(1)<v(i-1)<v(i)<v(1)+2*pi, i=3,...,mv
if iopt(1)=-1: 8<=nu<=min(nuest,mu+6+iopt(2)+iopt(3))
0<tu(5)<tu(6)<...<tu(nu-4)< pi
8<=nv<=min(nvest,mv+7)
v(1)<tv(5)<tv(6)<...<tv(nv-4)<v(1)+2*pi
the schoenberg-whitney conditions, i.e. there must be
subset of grid co-ordinates uu(p) and vv(q) such that
tu(p) < uu(p) < tu(p+4) ,p=1,...,nu-4
(iopt(2)=1 and iopt(3)=1 also count for a uu-value
tv(q) < vv(q) < tv(q+4) ,q=1,...,nv-4
(vv(q) is either a value v(j) or v(j)+2*pi)
if iopt(1)>=0: s>=0
if s=0: nuest>=mu+6+iopt(2)+iopt(3), nvest>=mv+7
if one of these conditions is found to be violated,control is
immediately repassed to the calling program. in that case there is no
approximation returned."""
class RectSphereBivariateSpline(SphereBivariateSpline):
"""
Bivariate spline approximation over a rectangular mesh on a sphere.
Can be used for smoothing data.
.. versionadded:: 0.11.0
Parameters
----------
u : array_like
1-D array of colatitude coordinates in strictly ascending order.
Coordinates must be given in radians and lie within the open interval
``(0, pi)``.
v : array_like
1-D array of longitude coordinates in strictly ascending order.
Coordinates must be given in radians. First element (``v[0]``) must lie
within the interval ``[-pi, pi)``. Last element (``v[-1]``) must satisfy
``v[-1] <= v[0] + 2*pi``.
r : array_like
2-D array of data with shape ``(u.size, v.size)``.
s : float, optional
Positive smoothing factor defined for estimation condition
(``s=0`` is for interpolation).
pole_continuity : bool or (bool, bool), optional
Order of continuity at the poles ``u=0`` (``pole_continuity[0]``) and
``u=pi`` (``pole_continuity[1]``). The order of continuity at the pole
will be 1 or 0 when this is True or False, respectively.
Defaults to False.
pole_values : float or (float, float), optional
Data values at the poles ``u=0`` and ``u=pi``. Either the whole
parameter or each individual element can be None. Defaults to None.
pole_exact : bool or (bool, bool), optional
Data value exactness at the poles ``u=0`` and ``u=pi``. If True, the
value is considered to be the right function value, and it will be
fitted exactly. If False, the value will be considered to be a data
value just like the other data values. Defaults to False.
pole_flat : bool or (bool, bool), optional
For the poles at ``u=0`` and ``u=pi``, specify whether or not the
approximation has vanishing derivatives. Defaults to False.
See Also
--------
BivariateSpline :
a base class for bivariate splines.
UnivariateSpline :
a smooth univariate spline to fit a given set of data points.
SmoothBivariateSpline :
a smoothing bivariate spline through the given points
LSQBivariateSpline :
a bivariate spline using weighted least-squares fitting
SmoothSphereBivariateSpline :
a smoothing bivariate spline in spherical coordinates
LSQSphereBivariateSpline :
a bivariate spline in spherical coordinates using weighted
least-squares fitting
RectBivariateSpline :
a bivariate spline over a rectangular mesh.
bisplrep :
a function to find a bivariate B-spline representation of a surface
bisplev :
a function to evaluate a bivariate B-spline and its derivatives
Notes
-----
Currently, only the smoothing spline approximation (``iopt[0] = 0`` and
``iopt[0] = 1`` in the FITPACK routine) is supported. The exact
least-squares spline approximation is not implemented yet.
    When actually performing the interpolation, the requested `v` values must
    lie within the same interval of length ``2*pi`` from which the original
    `v` values were chosen.
For more information, see the FITPACK_ site about this function.
.. _FITPACK: http://www.netlib.org/dierckx/spgrid.f
Examples
--------
Suppose we have global data on a coarse grid
>>> import numpy as np
>>> lats = np.linspace(10, 170, 9) * np.pi / 180.
>>> lons = np.linspace(0, 350, 18) * np.pi / 180.
>>> data = np.dot(np.atleast_2d(90. - np.linspace(-80., 80., 18)).T,
... np.atleast_2d(180. - np.abs(np.linspace(0., 350., 9)))).T
We want to interpolate it to a global one-degree grid
>>> new_lats = np.linspace(1, 180, 180) * np.pi / 180
>>> new_lons = np.linspace(1, 360, 360) * np.pi / 180
>>> new_lats, new_lons = np.meshgrid(new_lats, new_lons)
We need to set up the interpolator object
>>> from scipy.interpolate import RectSphereBivariateSpline
>>> lut = RectSphereBivariateSpline(lats, lons, data)
Finally we interpolate the data. The `RectSphereBivariateSpline` object
only takes 1-D arrays as input, therefore we need to do some reshaping.
>>> data_interp = lut.ev(new_lats.ravel(),
... new_lons.ravel()).reshape((360, 180)).T
Looking at the original and the interpolated data, one can see that the
interpolant reproduces the original data very well:
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax1 = fig.add_subplot(211)
>>> ax1.imshow(data, interpolation='nearest')
>>> ax2 = fig.add_subplot(212)
>>> ax2.imshow(data_interp, interpolation='nearest')
>>> plt.show()
Choosing the optimal value of ``s`` can be a delicate task. Recommended
values for ``s`` depend on the accuracy of the data values. If the user
has an idea of the statistical errors on the data, she can also find a
proper estimate for ``s``. By assuming that, if she specifies the
right ``s``, the interpolator will use a spline ``f(u,v)`` which exactly
reproduces the function underlying the data, she can evaluate
``sum((r(i,j)-s(u(i),v(j)))**2)`` to find a good estimate for this ``s``.
For example, if she knows that the statistical errors on her
``r(i,j)``-values are not greater than 0.1, she may expect that a good
``s`` should have a value not larger than ``u.size * v.size * (0.1)**2``.
If nothing is known about the statistical error in ``r(i,j)``, ``s`` must
be determined by trial and error. The best is then to start with a very
large value of ``s`` (to determine the least-squares polynomial and the
corresponding upper bound ``fp0`` for ``s``) and then to progressively
decrease the value of ``s`` (say by a factor 10 in the beginning, i.e.
``s = fp0 / 10, fp0 / 100, ...`` and more carefully as the approximation
shows more detail) to obtain closer fits.
The interpolation results for different values of ``s`` give some insight
into this process:
>>> fig2 = plt.figure()
>>> s = [3e9, 2e9, 1e9, 1e8]
>>> for idx, sval in enumerate(s, 1):
... lut = RectSphereBivariateSpline(lats, lons, data, s=sval)
... data_interp = lut.ev(new_lats.ravel(),
... new_lons.ravel()).reshape((360, 180)).T
... ax = fig2.add_subplot(2, 2, idx)
... ax.imshow(data_interp, interpolation='nearest')
... ax.set_title(f"s = {sval:g}")
>>> plt.show()
"""
def __init__(self, u, v, r, s=0., pole_continuity=False, pole_values=None,
pole_exact=False, pole_flat=False):
iopt = np.array([0, 0, 0], dtype=dfitpack_int)
ider = np.array([-1, 0, -1, 0], dtype=dfitpack_int)
if pole_values is None:
pole_values = (None, None)
elif isinstance(pole_values, (float, np.float32, np.float64)):
pole_values = (pole_values, pole_values)
if isinstance(pole_continuity, bool):
pole_continuity = (pole_continuity, pole_continuity)
if isinstance(pole_exact, bool):
pole_exact = (pole_exact, pole_exact)
if isinstance(pole_flat, bool):
pole_flat = (pole_flat, pole_flat)
r0, r1 = pole_values
iopt[1:] = pole_continuity
if r0 is None:
ider[0] = -1
else:
ider[0] = pole_exact[0]
if r1 is None:
ider[2] = -1
else:
ider[2] = pole_exact[1]
ider[1], ider[3] = pole_flat
u, v = np.ravel(u), np.ravel(v)
r = np.asarray(r)
if not (0.0 < u[0] and u[-1] < np.pi):
raise ValueError('u should be between (0, pi)')
if not -np.pi <= v[0] < np.pi:
raise ValueError('v[0] should be between [-pi, pi)')
        if not v[-1] <= v[0] + 2*np.pi:
            raise ValueError('v[-1] should not exceed v[0] + 2*pi')
if not np.all(np.diff(u) > 0.0):
raise ValueError('u must be strictly increasing')
if not np.all(np.diff(v) > 0.0):
raise ValueError('v must be strictly increasing')
if not u.size == r.shape[0]:
raise ValueError('u dimension of r must have same number of '
'elements as u')
if not v.size == r.shape[1]:
raise ValueError('v dimension of r must have same number of '
'elements as v')
if pole_continuity[1] is False and pole_flat[1] is True:
raise ValueError('if pole_continuity is False, so must be '
'pole_flat')
if pole_continuity[0] is False and pole_flat[0] is True:
raise ValueError('if pole_continuity is False, so must be '
'pole_flat')
        if not s >= 0.0:
            raise ValueError('s should be non-negative')
r = np.ravel(r)
nu, tu, nv, tv, c, fp, ier = dfitpack.regrid_smth_spher(iopt, ider,
u.copy(),
v.copy(),
r.copy(),
r0, r1, s)
if ier not in [0, -1, -2]:
msg = _spfit_messages.get(ier, 'ier=%s' % (ier))
raise ValueError(msg)
self.fp = fp
self.tck = tu[:nu], tv[:nv], c[:(nu - 4) * (nv-4)]
self.degrees = (3, 3)
self.v0 = v[0]
def __call__(self, theta, phi, dtheta=0, dphi=0, grid=True):
theta = np.asarray(theta)
phi = np.asarray(phi)
return SphereBivariateSpline.__call__(self, theta, phi, dtheta=dtheta,
dphi=dphi, grid=grid)
# === scipy-main/scipy/interpolate/polyint.py ===
# This file is not meant for public use and will be removed in SciPy v2.0.0.
# Use the `scipy.interpolate` namespace for importing the functions
# included below.
import warnings
from . import _polyint
__all__ = [ # noqa: F822
'BarycentricInterpolator',
'KroghInterpolator',
'approximate_taylor_polynomial',
'barycentric_interpolate',
'factorial',
'float_factorial',
'krogh_interpolate',
]
def __dir__():
return __all__
def __getattr__(name):
if name not in __all__:
raise AttributeError(
"scipy.interpolate.polyint is deprecated and has no attribute "
f"{name}. Try looking in scipy.interpolate instead.")
warnings.warn(f"Please use `{name}` from the `scipy.interpolate` namespace, "
"the `scipy.interpolate.polyint` namespace is deprecated.",
category=DeprecationWarning, stacklevel=2)
return getattr(_polyint, name)
# === scipy-main/scipy/interpolate/fitpack.py ===
# This file is not meant for public use and will be removed in SciPy v2.0.0.
# Use the `scipy.interpolate` namespace for importing the functions
# included below.
import warnings
from . import _fitpack_py
__all__ = [ # noqa: F822
'BSpline',
'bisplev',
'bisplrep',
'dblint',
'insert',
'spalde',
'splantider',
'splder',
'splev',
'splint',
'splprep',
'splrep',
'sproot',
]
def __dir__():
return __all__
def __getattr__(name):
if name not in __all__:
raise AttributeError(
"scipy.interpolate.fitpack is deprecated and has no attribute "
f"{name}. Try looking in scipy.interpolate instead.")
warnings.warn(f"Please use `{name}` from the `scipy.interpolate` namespace, "
"the `scipy.interpolate.fitpack` namespace is deprecated.",
category=DeprecationWarning, stacklevel=2)
return getattr(_fitpack_py, name)
# === scipy-main/scipy/interpolate/_bsplines.py ===
import operator
from math import prod
import numpy as np
from scipy._lib._util import normalize_axis_index
from scipy.linalg import (get_lapack_funcs, LinAlgError,
cholesky_banded, cho_solve_banded,
solve, solve_banded)
from scipy.optimize import minimize_scalar
from . import _bspl
from . import _fitpack_impl
from scipy.sparse import csr_array
from scipy.special import poch
from itertools import combinations
__all__ = ["BSpline", "make_interp_spline", "make_lsq_spline",
"make_smoothing_spline"]
def _get_dtype(dtype):
"""Return np.complex128 for complex dtypes, np.float64 otherwise."""
if np.issubdtype(dtype, np.complexfloating):
        return np.complex128
    else:
        return np.float64
def _as_float_array(x, check_finite=False):
"""Convert the input into a C contiguous float array.
NB: Upcasts half- and single-precision floats to double precision.
"""
x = np.ascontiguousarray(x)
dtyp = _get_dtype(x.dtype)
x = x.astype(dtyp, copy=False)
if check_finite and not np.isfinite(x).all():
raise ValueError("Array must not contain infs or nans.")
return x
def _dual_poly(j, k, t, y):
"""
Dual polynomial of the B-spline B_{j,k,t} -
polynomial which is associated with B_{j,k,t}:
$p_{j,k}(y) = (y - t_{j+1})(y - t_{j+2})...(y - t_{j+k})$
"""
if k == 0:
return 1
return np.prod([(y - t[j + i]) for i in range(1, k + 1)])
def _diff_dual_poly(j, k, y, d, t):
"""
d-th derivative of the dual polynomial $p_{j,k}(y)$
"""
if d == 0:
return _dual_poly(j, k, t, y)
if d == k:
return poch(1, k)
comb = list(combinations(range(j + 1, j + k + 1), d))
res = 0
for i in range(len(comb) * len(comb[0])):
res += np.prod([(y - t[j + p]) for p in range(1, k + 1)
if (j + p) not in comb[i//d]])
return res
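The dual-polynomial formula above can be checked by hand: for ``j = 0``,
``k = 2`` and knots ``t = [0, 1, 2, 3, 4]``, ``p_{0,2}(y) = (y - 1)(y - 2)``,
so ``p_{0,2}(3) = 2``. A standalone sketch (the formula is reimplemented here
so it runs outside this module):

```python
import numpy as np

def dual_poly(j, k, t, y):
    # p_{j,k}(y) = (y - t[j+1]) * ... * (y - t[j+k]); empty product for k == 0
    if k == 0:
        return 1
    return np.prod([(y - t[j + i]) for i in range(1, k + 1)])

t = [0, 1, 2, 3, 4]
print(dual_poly(0, 2, t, 3.0))  # (3 - 1) * (3 - 2) = 2.0
```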
class BSpline:
r"""Univariate spline in the B-spline basis.
.. math::
S(x) = \sum_{j=0}^{n-1} c_j B_{j, k; t}(x)
where :math:`B_{j, k; t}` are B-spline basis functions of degree `k`
and knots `t`.
Parameters
----------
t : ndarray, shape (n+k+1,)
knots
c : ndarray, shape (>=n, ...)
spline coefficients
k : int
B-spline degree
extrapolate : bool or 'periodic', optional
whether to extrapolate beyond the base interval, ``t[k] .. t[n]``,
or to return nans.
If True, extrapolates the first and last polynomial pieces of b-spline
functions active on the base interval.
If 'periodic', periodic extrapolation is used.
Default is True.
axis : int, optional
Interpolation axis. Default is zero.
Attributes
----------
t : ndarray
knot vector
c : ndarray
spline coefficients
k : int
spline degree
extrapolate : bool
If True, extrapolates the first and last polynomial pieces of b-spline
functions active on the base interval.
axis : int
Interpolation axis.
tck : tuple
A read-only equivalent of ``(self.t, self.c, self.k)``
Methods
-------
__call__
basis_element
derivative
antiderivative
integrate
construct_fast
design_matrix
from_power_basis
Notes
-----
B-spline basis elements are defined via
.. math::
B_{i, 0}(x) = 1, \textrm{if $t_i \le x < t_{i+1}$, otherwise $0$,}
B_{i, k}(x) = \frac{x - t_i}{t_{i+k} - t_i} B_{i, k-1}(x)
+ \frac{t_{i+k+1} - x}{t_{i+k+1} - t_{i+1}} B_{i+1, k-1}(x)
**Implementation details**
- At least ``k+1`` coefficients are required for a spline of degree `k`,
so that ``n >= k+1``. Additional coefficients, ``c[j]`` with
``j > n``, are ignored.
- B-spline basis elements of degree `k` form a partition of unity on the
*base interval*, ``t[k] <= x <= t[n]``.
Examples
--------
Translating the recursive definition of B-splines into Python code, we have:
>>> def B(x, k, i, t):
... if k == 0:
... return 1.0 if t[i] <= x < t[i+1] else 0.0
... if t[i+k] == t[i]:
... c1 = 0.0
... else:
... c1 = (x - t[i])/(t[i+k] - t[i]) * B(x, k-1, i, t)
... if t[i+k+1] == t[i+1]:
... c2 = 0.0
... else:
... c2 = (t[i+k+1] - x)/(t[i+k+1] - t[i+1]) * B(x, k-1, i+1, t)
... return c1 + c2
>>> def bspline(x, t, c, k):
... n = len(t) - k - 1
... assert (n >= k+1) and (len(c) >= n)
... return sum(c[i] * B(x, k, i, t) for i in range(n))
Note that this is an inefficient (if straightforward) way to
evaluate B-splines --- this spline class does it in an equivalent,
but much more efficient way.
Here we construct a quadratic spline function on the base interval
``2 <= x <= 4`` and compare with the naive way of evaluating the spline:
>>> from scipy.interpolate import BSpline
>>> k = 2
>>> t = [0, 1, 2, 3, 4, 5, 6]
>>> c = [-1, 2, 0, -1]
>>> spl = BSpline(t, c, k)
>>> spl(2.5)
array(1.375)
>>> bspline(2.5, t, c, k)
1.375
Note that outside of the base interval results differ. This is because
`BSpline` extrapolates the first and last polynomial pieces of B-spline
functions active on the base interval.
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> fig, ax = plt.subplots()
>>> xx = np.linspace(1.5, 4.5, 50)
    >>> ax.plot(xx, [bspline(x, t, c, k) for x in xx], 'r-', lw=3, label='naive')
>>> ax.plot(xx, spl(xx), 'b-', lw=4, alpha=0.7, label='BSpline')
>>> ax.grid(True)
>>> ax.legend(loc='best')
>>> plt.show()
References
----------
.. [1] Tom Lyche and Knut Morken, Spline methods,
http://www.uio.no/studier/emner/matnat/ifi/INF-MAT5340/v05/undervisningsmateriale/
.. [2] Carl de Boor, A practical guide to splines, Springer, 2001.
"""
def __init__(self, t, c, k, extrapolate=True, axis=0):
super().__init__()
self.k = operator.index(k)
self.c = np.asarray(c)
self.t = np.ascontiguousarray(t, dtype=np.float64)
if extrapolate == 'periodic':
self.extrapolate = extrapolate
else:
self.extrapolate = bool(extrapolate)
n = self.t.shape[0] - self.k - 1
axis = normalize_axis_index(axis, self.c.ndim)
# Note that the normalized axis is stored in the object.
self.axis = axis
if axis != 0:
# roll the interpolation axis to be the first one in self.c
# More specifically, the target shape for self.c is (n, ...),
# and axis !=0 means that we have c.shape (..., n, ...)
# ^
# axis
self.c = np.moveaxis(self.c, axis, 0)
if k < 0:
raise ValueError("Spline order cannot be negative.")
if self.t.ndim != 1:
raise ValueError("Knot vector must be one-dimensional.")
if n < self.k + 1:
raise ValueError("Need at least %d knots for degree %d" %
(2*k + 2, k))
if (np.diff(self.t) < 0).any():
raise ValueError("Knots must be in a non-decreasing order.")
if len(np.unique(self.t[k:n+1])) < 2:
raise ValueError("Need at least two internal knots.")
if not np.isfinite(self.t).all():
raise ValueError("Knots should not have nans or infs.")
if self.c.ndim < 1:
raise ValueError("Coefficients must be at least 1-dimensional.")
if self.c.shape[0] < n:
raise ValueError("Knots, coefficients and degree are inconsistent.")
dt = _get_dtype(self.c.dtype)
self.c = np.ascontiguousarray(self.c, dtype=dt)
@classmethod
def construct_fast(cls, t, c, k, extrapolate=True, axis=0):
"""Construct a spline without making checks.
Accepts same parameters as the regular constructor. Input arrays
        `t` and `c` must be of correct shape and dtype.
"""
self = object.__new__(cls)
self.t, self.c, self.k = t, c, k
self.extrapolate = extrapolate
self.axis = axis
return self
@property
def tck(self):
"""Equivalent to ``(self.t, self.c, self.k)`` (read-only).
"""
return self.t, self.c, self.k
@classmethod
def basis_element(cls, t, extrapolate=True):
"""Return a B-spline basis element ``B(x | t[0], ..., t[k+1])``.
Parameters
----------
t : ndarray, shape (k+2,)
internal knots
extrapolate : bool or 'periodic', optional
whether to extrapolate beyond the base interval, ``t[0] .. t[k+1]``,
or to return nans.
If 'periodic', periodic extrapolation is used.
Default is True.
Returns
-------
basis_element : callable
A callable representing a B-spline basis element for the knot
vector `t`.
Notes
-----
The degree of the B-spline, `k`, is inferred from the length of `t` as
        ``len(t)-2``. The knot vector is constructed by appending and prepending
        ``k`` elements to internal knots `t`.
Examples
--------
Construct a cubic B-spline:
>>> import numpy as np
>>> from scipy.interpolate import BSpline
>>> b = BSpline.basis_element([0, 1, 2, 3, 4])
>>> k = b.k
>>> b.t[k:-k]
array([ 0., 1., 2., 3., 4.])
>>> k
3
Construct a quadratic B-spline on ``[0, 1, 1, 2]``, and compare
to its explicit form:
>>> t = [0, 1, 1, 2]
>>> b = BSpline.basis_element(t)
>>> def f(x):
... return np.where(x < 1, x*x, (2. - x)**2)
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> x = np.linspace(0, 2, 51)
>>> ax.plot(x, b(x), 'g', lw=3)
>>> ax.plot(x, f(x), 'r', lw=8, alpha=0.4)
>>> ax.grid(True)
>>> plt.show()
"""
k = len(t) - 2
t = _as_float_array(t)
t = np.r_[(t[0]-1,) * k, t, (t[-1]+1,) * k]
c = np.zeros_like(t)
c[k] = 1.
return cls.construct_fast(t, c, k, extrapolate)
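A quick sanity check of `basis_element` using the public API; the numbers follow from the uniform quadratic B-spline, which vanishes at the ends of its support and peaks at the midpoint with value 3/4:

```python
import numpy as np
from scipy.interpolate import BSpline

# Degree is inferred from the knot length: k = len(t) - 2 = 2 here.
b = BSpline.basis_element([0, 1, 2, 3])

peak = float(b(1.5))   # midpoint of the support
edge = float(b(0.0))   # left end of the support
```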
@classmethod
def design_matrix(cls, x, t, k, extrapolate=False):
"""
Returns a design matrix as a CSR format sparse array.
Parameters
----------
x : array_like, shape (n,)
Points to evaluate the spline at.
t : array_like, shape (nt,)
Sorted 1D array of knots.
k : int
B-spline degree.
extrapolate : bool or 'periodic', optional
Whether to extrapolate based on the first and last intervals
or raise an error. If 'periodic', periodic extrapolation is used.
Default is False.
.. versionadded:: 1.10.0
Returns
-------
design_matrix : `csr_array` object
            Sparse matrix in CSR format where each row contains all the basis
            elements of the input row (first row = basis elements of x[0],
            ..., last row = basis elements of x[-1]).
Examples
--------
Construct a design matrix for a B-spline
>>> from scipy.interpolate import make_interp_spline, BSpline
>>> import numpy as np
>>> x = np.linspace(0, np.pi * 2, 4)
>>> y = np.sin(x)
>>> k = 3
>>> bspl = make_interp_spline(x, y, k=k)
>>> design_matrix = bspl.design_matrix(x, bspl.t, k)
>>> design_matrix.toarray()
[[1. , 0. , 0. , 0. ],
[0.2962963 , 0.44444444, 0.22222222, 0.03703704],
[0.03703704, 0.22222222, 0.44444444, 0.2962963 ],
[0. , 0. , 0. , 1. ]]
Construct a design matrix for some vector of knots
>>> k = 2
>>> t = [-1, 0, 1, 2, 3, 4, 5, 6]
>>> x = [1, 2, 3, 4]
>>> design_matrix = BSpline.design_matrix(x, t, k).toarray()
>>> design_matrix
[[0.5, 0.5, 0. , 0. , 0. ],
[0. , 0.5, 0.5, 0. , 0. ],
[0. , 0. , 0.5, 0.5, 0. ],
[0. , 0. , 0. , 0.5, 0.5]]
This result is equivalent to the one created in the sparse format
>>> c = np.eye(len(t) - k - 1)
>>> design_matrix_gh = BSpline(t, c, k)(x)
>>> np.allclose(design_matrix, design_matrix_gh, atol=1e-14)
True
Notes
-----
.. versionadded:: 1.8.0
        In each row of the design matrix all the basis elements are evaluated
        at a certain point (first row - x[0], ..., last row - x[-1]).
        `nt` is the length of the vector of knots: since there are
        `nt - k - 1` basis elements, `nt` must be at least `2 * k + 2`
        to have at least `k + 1` basis elements.
Out of bounds `x` raises a ValueError.
"""
x = _as_float_array(x, True)
t = _as_float_array(t, True)
if extrapolate != 'periodic':
extrapolate = bool(extrapolate)
if k < 0:
raise ValueError("Spline order cannot be negative.")
if t.ndim != 1 or np.any(t[1:] < t[:-1]):
raise ValueError(f"Expect t to be a 1-D sorted array_like, but "
f"got t={t}.")
# There are `nt - k - 1` basis elements in a BSpline built on the
# vector of knots with length `nt`, so to have at least `k + 1` basis
# elements we need to have at least `2 * k + 2` elements in the vector
# of knots.
if len(t) < 2 * k + 2:
raise ValueError(f"Length t is not enough for k={k}.")
if extrapolate == 'periodic':
# With periodic extrapolation we map x to the segment
# [t[k], t[n]].
n = t.size - k - 1
x = t[k] + (x - t[k]) % (t[n] - t[k])
extrapolate = False
elif not extrapolate and (
(min(x) < t[k]) or (max(x) > t[t.shape[0] - k - 1])
):
# Checks from `find_interval` function
raise ValueError(f'Out of bounds w/ x = {x}.')
# Compute number of non-zeros of final CSR array in order to determine
# the dtype of indices and indptr of the CSR array.
n = x.shape[0]
nnz = n * (k + 1)
if nnz < np.iinfo(np.int32).max:
int_dtype = np.int32
else:
int_dtype = np.int64
# Preallocate indptr and indices
indices = np.empty(n * (k + 1), dtype=int_dtype)
indptr = np.arange(0, (n + 1) * (k + 1), k + 1, dtype=int_dtype)
# indptr is not passed to Cython as it is already fully computed
data, indices = _bspl._make_design_matrix(
x, t, k, extrapolate, indices
)
return csr_array(
(data, indices, indptr),
shape=(x.shape[0], t.shape[0] - k - 1)
)
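Within the base interval the B-spline basis is a partition of unity, so every row of the design matrix sums to one. This makes a quick self-test of `design_matrix` (public API, available since SciPy 1.8):

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3
t = np.array([0., 0, 0, 0, 1, 2, 3, 4, 4, 4, 4])   # clamped cubic knots
x = np.linspace(0, 4, 9)                           # all inside [t[k], t[-k-1]]

dm = BSpline.design_matrix(x, t, k)
row_sums = dm.toarray().sum(axis=1)
```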
def __call__(self, x, nu=0, extrapolate=None):
"""
Evaluate a spline function.
Parameters
----------
x : array_like
points to evaluate the spline at.
nu : int, optional
derivative to evaluate (default is 0).
extrapolate : bool or 'periodic', optional
whether to extrapolate based on the first and last intervals
or return nans. If 'periodic', periodic extrapolation is used.
Default is `self.extrapolate`.
Returns
-------
y : array_like
Shape is determined by replacing the interpolation axis
in the coefficient array with the shape of `x`.
"""
if extrapolate is None:
extrapolate = self.extrapolate
x = np.asarray(x)
x_shape, x_ndim = x.shape, x.ndim
        x = np.ascontiguousarray(x.ravel(), dtype=np.float64)
# With periodic extrapolation we map x to the segment
# [self.t[k], self.t[n]].
if extrapolate == 'periodic':
n = self.t.size - self.k - 1
x = self.t[self.k] + (x - self.t[self.k]) % (self.t[n] -
self.t[self.k])
extrapolate = False
out = np.empty((len(x), prod(self.c.shape[1:])), dtype=self.c.dtype)
self._ensure_c_contiguous()
self._evaluate(x, nu, extrapolate, out)
out = out.reshape(x_shape + self.c.shape[1:])
if self.axis != 0:
# transpose to move the calculated values to the interpolation axis
l = list(range(out.ndim))
l = l[x_ndim:x_ndim+self.axis] + l[:x_ndim] + l[x_ndim+self.axis:]
out = out.transpose(l)
return out
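The `axis` handling can be illustrated with a batch of splines stored along a non-interpolation axis; evaluating the batched spline is equivalent to evaluating each 1-D spline separately (a small sketch using the public constructor):

```python
import numpy as np
from scipy.interpolate import BSpline

t = np.array([0., 0, 0, 0, 1, 2, 3, 4, 4, 4, 4])
c = np.arange(14, dtype=float).reshape(2, 7)   # two coefficient sets
spl = BSpline(t, c, 3, axis=1)                 # interpolation axis is 1

xx = np.linspace(0, 4, 5)
out = spl(xx)                                  # shape (2, len(xx))
row0 = BSpline(t, c[0], 3)(xx)                 # first spline on its own
```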
def _evaluate(self, xp, nu, extrapolate, out):
_bspl.evaluate_spline(self.t, self.c.reshape(self.c.shape[0], -1),
self.k, xp, nu, extrapolate, out)
def _ensure_c_contiguous(self):
"""
c and t may be modified by the user. The Cython code expects
that they are C contiguous.
"""
if not self.t.flags.c_contiguous:
self.t = self.t.copy()
if not self.c.flags.c_contiguous:
self.c = self.c.copy()
def derivative(self, nu=1):
"""Return a B-spline representing the derivative.
Parameters
----------
nu : int, optional
Derivative order.
Default is 1.
Returns
-------
b : BSpline object
A new instance representing the derivative.
See Also
--------
splder, splantider
"""
c = self.c
# pad the c array if needed
ct = len(self.t) - len(c)
if ct > 0:
c = np.r_[c, np.zeros((ct,) + c.shape[1:])]
tck = _fitpack_impl.splder((self.t, c, self.k), nu)
return self.construct_fast(*tck, extrapolate=self.extrapolate,
axis=self.axis)
def antiderivative(self, nu=1):
"""Return a B-spline representing the antiderivative.
Parameters
----------
nu : int, optional
Antiderivative order. Default is 1.
Returns
-------
b : BSpline object
A new instance representing the antiderivative.
Notes
-----
If antiderivative is computed and ``self.extrapolate='periodic'``,
it will be set to False for the returned instance. This is done because
the antiderivative is no longer periodic and its correct evaluation
outside of the initially given x interval is difficult.
See Also
--------
splder, splantider
"""
c = self.c
# pad the c array if needed
ct = len(self.t) - len(c)
if ct > 0:
c = np.r_[c, np.zeros((ct,) + c.shape[1:])]
tck = _fitpack_impl.splantider((self.t, c, self.k), nu)
if self.extrapolate == 'periodic':
extrapolate = False
else:
extrapolate = self.extrapolate
return self.construct_fast(*tck, extrapolate=extrapolate,
axis=self.axis)
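The two operations are (one-sided) inverses: differentiating the antiderivative recovers the original spline up to round-off. A small check with the public API:

```python
import numpy as np
from scipy.interpolate import BSpline

t = np.array([0., 0, 0, 0, 1, 2, 3, 4, 4, 4, 4])
c = np.array([1., 2, 0, -1, 3, 2, 1])
spl = BSpline(t, c, 3)

# antiderivative followed by derivative should be the identity
roundtrip = spl.antiderivative().derivative()
xx = np.linspace(0, 4, 21)
err = float(np.max(np.abs(roundtrip(xx) - spl(xx))))
```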
def integrate(self, a, b, extrapolate=None):
"""Compute a definite integral of the spline.
Parameters
----------
a : float
Lower limit of integration.
b : float
Upper limit of integration.
extrapolate : bool or 'periodic', optional
whether to extrapolate beyond the base interval,
``t[k] .. t[-k-1]``, or take the spline to be zero outside of the
base interval. If 'periodic', periodic extrapolation is used.
If None (default), use `self.extrapolate`.
Returns
-------
I : array_like
Definite integral of the spline over the interval ``[a, b]``.
Examples
--------
Construct the linear spline ``x if x < 1 else 2 - x`` on the base
interval :math:`[0, 2]`, and integrate it
>>> from scipy.interpolate import BSpline
>>> b = BSpline.basis_element([0, 1, 2])
>>> b.integrate(0, 1)
array(0.5)
If the integration limits are outside of the base interval, the result
is controlled by the `extrapolate` parameter
>>> b.integrate(-1, 1)
array(0.0)
>>> b.integrate(-1, 1, extrapolate=False)
array(0.5)
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> ax.grid(True)
>>> ax.axvline(0, c='r', lw=5, alpha=0.5) # base interval
>>> ax.axvline(2, c='r', lw=5, alpha=0.5)
>>> xx = [-1, 1, 2]
>>> ax.plot(xx, b(xx))
>>> plt.show()
"""
if extrapolate is None:
extrapolate = self.extrapolate
# Prepare self.t and self.c.
self._ensure_c_contiguous()
# Swap integration bounds if needed.
sign = 1
if b < a:
a, b = b, a
sign = -1
n = self.t.size - self.k - 1
if extrapolate != "periodic" and not extrapolate:
# Shrink the integration interval, if needed.
a = max(a, self.t[self.k])
b = min(b, self.t[n])
if self.c.ndim == 1:
# Fast path: use FITPACK's routine
# (cf _fitpack_impl.splint).
integral = _fitpack_impl.splint(a, b, self.tck)
return integral * sign
out = np.empty((2, prod(self.c.shape[1:])), dtype=self.c.dtype)
# Compute the antiderivative.
c = self.c
ct = len(self.t) - len(c)
if ct > 0:
c = np.r_[c, np.zeros((ct,) + c.shape[1:])]
ta, ca, ka = _fitpack_impl.splantider((self.t, c, self.k), 1)
if extrapolate == 'periodic':
# Split the integral into the part over period (can be several
# of them) and the remaining part.
ts, te = self.t[self.k], self.t[n]
period = te - ts
interval = b - a
n_periods, left = divmod(interval, period)
if n_periods > 0:
# Evaluate the difference of antiderivatives.
                x = np.asarray([ts, te], dtype=np.float64)
_bspl.evaluate_spline(ta, ca.reshape(ca.shape[0], -1),
ka, x, 0, False, out)
integral = out[1] - out[0]
integral *= n_periods
else:
integral = np.zeros((1, prod(self.c.shape[1:])),
dtype=self.c.dtype)
# Map a to [ts, te], b is always a + left.
a = ts + (a - ts) % period
b = a + left
# If b <= te then we need to integrate over [a, b], otherwise
# over [a, te] and from xs to what is remained.
if b <= te:
                x = np.asarray([a, b], dtype=np.float64)
_bspl.evaluate_spline(ta, ca.reshape(ca.shape[0], -1),
ka, x, 0, False, out)
integral += out[1] - out[0]
else:
                x = np.asarray([a, te], dtype=np.float64)
_bspl.evaluate_spline(ta, ca.reshape(ca.shape[0], -1),
ka, x, 0, False, out)
integral += out[1] - out[0]
                x = np.asarray([ts, ts + b - te], dtype=np.float64)
_bspl.evaluate_spline(ta, ca.reshape(ca.shape[0], -1),
ka, x, 0, False, out)
integral += out[1] - out[0]
else:
# Evaluate the difference of antiderivatives.
            x = np.asarray([a, b], dtype=np.float64)
_bspl.evaluate_spline(ta, ca.reshape(ca.shape[0], -1),
ka, x, 0, extrapolate, out)
integral = out[1] - out[0]
integral *= sign
return integral.reshape(ca.shape[1:])
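For the linear "hat" element the definite integral is known in closed form (the area of a triangle with base 2 and height 1), which gives a convenient cross-check against adaptive quadrature:

```python
from scipy.integrate import quad
from scipy.interpolate import BSpline

b = BSpline.basis_element([0, 1, 2])   # linear hat function on [0, 2]
i_spl = float(b.integrate(0, 2))       # exact: 0.5 * 2 * 1 = 1.0
i_quad, _ = quad(b, 0, 2)              # adaptive quadrature cross-check
```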
@classmethod
def from_power_basis(cls, pp, bc_type='not-a-knot'):
r"""
Construct a polynomial in the B-spline basis
from a piecewise polynomial in the power basis.
For now, accepts ``CubicSpline`` instances only.
Parameters
----------
pp : CubicSpline
A piecewise polynomial in the power basis, as created
by ``CubicSpline``
bc_type : string, optional
Boundary condition type as in ``CubicSpline``: one of the
``not-a-knot``, ``natural``, ``clamped``, or ``periodic``.
            Necessary for constructing an instance of the ``BSpline`` class.
Default is ``not-a-knot``.
Returns
-------
b : BSpline object
A new instance representing the initial polynomial
in the B-spline basis.
Notes
-----
.. versionadded:: 1.8.0
Accepts only ``CubicSpline`` instances for now.
        The algorithm follows from differentiating Marsden's identity [1]:
        each coefficient of the spline interpolation function in the B-spline
        basis is computed as follows:
.. math::
c_j = \sum_{m=0}^{k} \frac{(k-m)!}{k!}
c_{m,i} (-1)^{k-m} D^m p_{j,k}(x_i)
:math:`c_{m, i}` - a coefficient of CubicSpline,
        :math:`D^m p_{j, k}(x_i)` - the m-th derivative of the dual polynomial
        at :math:`x_i`.
``k`` always equals 3 for now.
        First ``n - 2`` coefficients are computed at :math:`x_i = x_j`, e.g.
.. math::
            c_1 = \sum_{m=0}^{k} \frac{(k-m)!}{k!} c_{m,1} D^m p_{j,3}(x_1)
        Last ``nod + 2`` coefficients are computed at ``x[-2]``, where ``nod``
        is the number of derivatives at the ends.
For example, consider :math:`x = [0, 1, 2, 3, 4]`,
:math:`y = [1, 1, 1, 1, 1]` and bc_type = ``natural``
The coefficients of CubicSpline in the power basis:
:math:`[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0],
[0, 0, 0, 0, 0], [1, 1, 1, 1, 1]]`
The knot vector: :math:`t = [0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4]`
In this case
.. math::
c_j = \frac{0!}{k!} c_{3, i} k! = c_{3, i} = 1,~j = 0, ..., 6
References
----------
.. [1] Tom Lyche and Knut Morken, Spline Methods, 2005, Section 3.1.2
"""
from ._cubic import CubicSpline
if not isinstance(pp, CubicSpline):
            raise NotImplementedError("Only CubicSpline objects are accepted "
                                      "for now. Got %s instead." % type(pp))
x = pp.x
coef = pp.c
k = pp.c.shape[0] - 1
n = x.shape[0]
if bc_type == 'not-a-knot':
t = _not_a_knot(x, k)
elif bc_type == 'natural' or bc_type == 'clamped':
t = _augknt(x, k)
elif bc_type == 'periodic':
t = _periodic_knots(x, k)
else:
raise TypeError('Unknown boundary condition: %s' % bc_type)
nod = t.shape[0] - (n + k + 1) # number of derivatives at the ends
c = np.zeros(n + nod, dtype=pp.c.dtype)
for m in range(k + 1):
for i in range(n - 2):
c[i] += poch(k + 1, -m) * coef[m, i]\
* np.power(-1, k - m)\
* _diff_dual_poly(i, k, x[i], m, t)
for j in range(n - 2, n + nod):
c[j] += poch(k + 1, -m) * coef[m, n - 2]\
* np.power(-1, k - m)\
* _diff_dual_poly(j, k, x[n - 2], m, t)
return cls.construct_fast(t, c, k, pp.extrapolate, pp.axis)
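Round-tripping through `from_power_basis` can be checked with the public API: convert a `CubicSpline` and verify that both representations evaluate identically:

```python
import numpy as np
from scipy.interpolate import BSpline, CubicSpline

x = np.linspace(0, 4, 6)
y = x**3 - 2 * x
pp = CubicSpline(x, y, bc_type='natural')
b = BSpline.from_power_basis(pp, bc_type='natural')

# both representations describe the same piecewise polynomial
xx = np.linspace(0, 4, 33)
err = float(np.max(np.abs(b(xx) - pp(xx))))
```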
#################################
# Interpolating spline helpers #
#################################
def _not_a_knot(x, k):
"""Given data x, construct the knot vector w/ not-a-knot BC.
cf de Boor, XIII(12)."""
x = np.asarray(x)
if k % 2 != 1:
raise ValueError("Odd degree for now only. Got %s." % k)
m = (k - 1) // 2
t = x[m+1:-m-1]
t = np.r_[(x[0],)*(k+1), t, (x[-1],)*(k+1)]
return t
def _augknt(x, k):
"""Construct a knot vector appropriate for the order-k interpolation."""
return np.r_[(x[0],)*k, x, (x[-1],)*k]
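The two knot constructions can be sketched standalone (re-implementing the helpers above for a concrete cubic case):

```python
import numpy as np

x = np.linspace(0., 4., 5)   # 5 data sites
k = 3

# Not-a-knot: drop m = (k-1)//2 interior sites from each end, then clamp
# with k+1 copies of each boundary site (cf. de Boor, XIII(12)).
m = (k - 1) // 2
t_nak = np.r_[(x[0],) * (k + 1), x[m + 1:-m - 1], (x[-1],) * (k + 1)]

# Augmented knots: keep every site and pad with k copies of each boundary.
t_aug = np.r_[(x[0],) * k, x, (x[-1],) * k]
```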
def _convert_string_aliases(deriv, target_shape):
if isinstance(deriv, str):
if deriv == "clamped":
deriv = [(1, np.zeros(target_shape))]
elif deriv == "natural":
deriv = [(2, np.zeros(target_shape))]
else:
            raise ValueError("Unknown boundary condition: %s" % deriv)
return deriv
def _process_deriv_spec(deriv):
if deriv is not None:
try:
ords, vals = zip(*deriv)
except TypeError as e:
msg = ("Derivatives, `bc_type`, should be specified as a pair of "
"iterables of pairs of (order, value).")
raise ValueError(msg) from e
else:
ords, vals = [], []
return np.atleast_1d(ords, vals)
def _woodbury_algorithm(A, ur, ll, b, k):
'''
Solve a cyclic banded linear system with upper right
and lower blocks of size ``(k-1) / 2`` using
the Woodbury formula
Parameters
----------
A : 2-D array, shape(k, n)
Matrix of diagonals of original matrix (see
``solve_banded`` documentation).
ur : 2-D array, shape(bs, bs)
Upper right block matrix.
ll : 2-D array, shape(bs, bs)
Lower left block matrix.
b : 1-D array, shape(n,)
Vector of constant terms of the system of linear equations.
k : int
B-spline degree.
Returns
-------
c : 1-D array, shape(n,)
Solution of the original system of linear equations.
Notes
-----
This algorithm works only for systems with banded matrix A plus
a correction term U @ V.T, where the matrix U @ V.T gives upper right
and lower left block of A
The system is solved with the following steps:
1. New systems of linear equations are constructed:
A @ z_i = u_i,
        u_i - column vector of U,
i = 1, ..., k - 1
2. Matrix Z is formed from vectors z_i:
Z = [ z_1 | z_2 | ... | z_{k - 1} ]
3. Matrix H = (1 + V.T @ Z)^{-1}
4. The system A' @ y = b is solved
5. x = y - Z @ (H @ V.T @ y)
Also, ``n`` should be greater than ``k``, otherwise corner block
elements will intersect with diagonals.
Examples
--------
Consider the case of n = 8, k = 5 (size of blocks - 2 x 2).
The matrix of a system: U: V:
x x x * * a b a b 0 0 0 0 1 0
x x x x * * c 0 c 0 0 0 0 0 1
x x x x x * * 0 0 0 0 0 0 0 0
* x x x x x * 0 0 0 0 0 0 0 0
* * x x x x x 0 0 0 0 0 0 0 0
d * * x x x x 0 0 d 0 1 0 0 0
e f * * x x x 0 0 e f 0 1 0 0
References
----------
.. [1] William H. Press, Saul A. Teukolsky, William T. Vetterling
and Brian P. Flannery, Numerical Recipes, 2007, Section 2.7.3
'''
k_mod = k - k % 2
bs = int((k - 1) / 2) + (k + 1) % 2
n = A.shape[1] + 1
U = np.zeros((n - 1, k_mod))
VT = np.zeros((k_mod, n - 1)) # V transpose
# upper right block
U[:bs, :bs] = ur
VT[np.arange(bs), np.arange(bs) - bs] = 1
# lower left block
U[-bs:, -bs:] = ll
VT[np.arange(bs) - bs, np.arange(bs)] = 1
Z = solve_banded((bs, bs), A, U)
H = solve(np.identity(k_mod) + VT @ Z, np.identity(k_mod))
y = solve_banded((bs, bs), A, b)
c = y - Z @ (H @ (VT @ y))
return c
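The steps above are the standard Woodbury identity; a dense-algebra sketch (hypothetical random data, not the banded storage the helper actually uses) shows the formula in action:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 2

# Diagonally dominant tridiagonal "core" plus a rank-r correction U @ V.T,
# mirroring the role of the periodic corner blocks.
A = (np.diag(rng.uniform(2, 3, n))
     + np.diag(rng.uniform(0, 0.5, n - 1), 1)
     + np.diag(rng.uniform(0, 0.5, n - 1), -1))
U = 0.5 * rng.standard_normal((n, r))
V = 0.5 * rng.standard_normal((n, r))
b = rng.standard_normal(n)

# Woodbury: (A + U V^T)^{-1} b = y - Z (I + V^T Z)^{-1} (V^T y),
# where A y = b and A Z = U (steps 1-5 in the docstring above).
y = np.linalg.solve(A, b)
Z = np.linalg.solve(A, U)
H = np.linalg.inv(np.eye(r) + V.T @ Z)
x_wood = y - Z @ (H @ (V.T @ y))

# direct dense solve for reference
x_ref = np.linalg.solve(A + U @ V.T, b)
```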
def _periodic_knots(x, k):
'''
    Returns a vector of knots taken on a circle.
'''
xc = np.copy(x)
n = len(xc)
if k % 2 == 0:
dx = np.diff(xc)
xc[1: -1] -= dx[:-1] / 2
dx = np.diff(xc)
t = np.zeros(n + 2 * k)
t[k: -k] = xc
for i in range(0, k):
# filling first `k` elements in descending order
t[k - i - 1] = t[k - i] - dx[-(i % (n - 1)) - 1]
# filling last `k` elements in ascending order
t[-k + i] = t[-k + i - 1] + dx[i % (n - 1)]
return t
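For odd `k` this simply continues the spacing pattern of `x` around the circle; a standalone re-implementation of the helper makes the wrap-around visible:

```python
import numpy as np

def periodic_knots(x, k):
    # Mirrors the helper above: extend x by k knots on each side,
    # reusing the interior spacings cyclically.
    xc = np.copy(x)
    n = len(xc)
    if k % 2 == 0:
        dx = np.diff(xc)
        xc[1:-1] -= dx[:-1] / 2
    dx = np.diff(xc)
    t = np.zeros(n + 2 * k)
    t[k:-k] = xc
    for i in range(k):
        t[k - i - 1] = t[k - i] - dx[-(i % (n - 1)) - 1]
        t[-k + i] = t[-k + i - 1] + dx[i % (n - 1)]
    return t

x = np.array([0., 1., 3., 6.])   # non-uniform spacings 1, 2, 3
t = periodic_knots(x, 3)         # spacings repeat: 1, 2, 3, 1, 2, 3, ...
```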
def _make_interp_per_full_matr(x, y, t, k):
'''
Returns a solution of a system for B-spline interpolation with periodic
boundary conditions. First ``k - 1`` rows of matrix are condtions of
periodicity (continuity of ``k - 1`` derivatives at the boundary points).
Last ``n`` rows are interpolation conditions.
RHS is ``k - 1`` zeros and ``n`` ordinates in this case.
Parameters
----------
x : 1-D array, shape (n,)
Values of x - coordinate of a given set of points.
y : 1-D array, shape (n,)
Values of y - coordinate of a given set of points.
t : 1-D array, shape(n+2*k,)
Vector of knots.
k : int
The maximum degree of spline
Returns
-------
c : 1-D array, shape (n+k-1,)
B-spline coefficients
Notes
-----
``t`` is supposed to be taken on circle.
'''
x, y, t = map(np.asarray, (x, y, t))
n = x.size
# LHS: the collocation matrix + derivatives at edges
matr = np.zeros((n + k - 1, n + k - 1))
# derivatives at x[0] and x[-1]:
for i in range(k - 1):
bb = _bspl.evaluate_all_bspl(t, k, x[0], k, nu=i + 1)
matr[i, : k + 1] += bb
bb = _bspl.evaluate_all_bspl(t, k, x[-1], n + k - 1, nu=i + 1)[:-1]
matr[i, -k:] -= bb
# collocation matrix
for i in range(n):
xval = x[i]
# find interval
if xval == t[k]:
left = k
else:
left = np.searchsorted(t, xval) - 1
# fill a row
bb = _bspl.evaluate_all_bspl(t, k, xval, left)
matr[i + k - 1, left-k:left+1] = bb
# RHS
b = np.r_[[0] * (k - 1), y]
c = solve(matr, b)
return c
def _make_periodic_spline(x, y, t, k, axis):
'''
Compute the (coefficients of) interpolating B-spline with periodic
boundary conditions.
Parameters
----------
x : array_like, shape (n,)
Abscissas.
y : array_like, shape (n,)
Ordinates.
k : int
B-spline degree.
t : array_like, shape (n + 2 * k,).
Knots taken on a circle, ``k`` on the left and ``k`` on the right
of the vector ``x``.
Returns
-------
b : a BSpline object of the degree ``k`` and with knots ``t``.
Notes
-----
The original system is formed by ``n + k - 1`` equations where the first
``k - 1`` of them stand for the ``k - 1`` derivatives continuity on the
edges while the other equations correspond to an interpolating case
(matching all the input points). Due to a special form of knot vector, it
can be proved that in the original system the first and last ``k``
coefficients of a spline function are the same, respectively. It follows
from the fact that all ``k - 1`` derivatives are equal term by term at ends
and that the matrix of the original system of linear equations is
    non-degenerate. So, we can reduce the number of equations to ``n - 1``
    (the first ``k - 1`` equations can be dropped). Another trick of this
    implementation is a cyclic shift of the B-spline values, which exploits
    the equality of the ``k`` unknown coefficients. With this we obtain a
    system matrix with upper right and lower left blocks and ``k`` diagonals,
    which allows using the Woodbury formula to speed up the computations.
'''
n = y.shape[0]
extradim = prod(y.shape[1:])
y_new = y.reshape(n, extradim)
c = np.zeros((n + k - 1, extradim))
# n <= k case is solved with full matrix
if n <= k:
for i in range(extradim):
c[:, i] = _make_interp_per_full_matr(x, y_new[:, i], t, k)
c = np.ascontiguousarray(c.reshape((n + k - 1,) + y.shape[1:]))
return BSpline.construct_fast(t, c, k, extrapolate='periodic', axis=axis)
nt = len(t) - k - 1
# size of block elements
kul = int(k / 2)
# kl = ku = k
    ab = np.zeros((3 * k + 1, nt), dtype=np.float64, order='F')
# upper right and lower left blocks
ur = np.zeros((kul, kul))
ll = np.zeros_like(ur)
# `offset` is made to shift all the non-zero elements to the end of the
# matrix
_bspl._colloc(x, t, k, ab, offset=k)
# remove zeros before the matrix
ab = ab[-k - (k + 1) % 2:, :]
# The least elements in rows (except repetitions) are diagonals
# of block matrices. Upper right matrix is an upper triangular
# matrix while lower left is a lower triangular one.
for i in range(kul):
ur += np.diag(ab[-i - 1, i: kul], k=i)
ll += np.diag(ab[i, -kul - (k % 2): n - 1 + 2 * kul - i], k=-i)
# remove elements that occur in the last point
# (first and last points are equivalent)
A = ab[:, kul: -k + kul]
for i in range(extradim):
cc = _woodbury_algorithm(A, ur, ll, y_new[:, i][:-1], k)
c[:, i] = np.concatenate((cc[-kul:], cc, cc[:kul + k % 2]))
c = np.ascontiguousarray(c.reshape((n + k - 1,) + y.shape[1:]))
return BSpline.construct_fast(t, c, k, extrapolate='periodic', axis=axis)
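The resulting spline is genuinely periodic: shifting the argument by one period leaves values and derivatives unchanged. A check via the public `make_interp_spline` interface:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0, 2 * np.pi, 15)
y = np.sin(x)                  # y[0] == y[-1] up to round-off, as required
spl = make_interp_spline(x, y, k=3, bc_type='periodic')

# shifting by one period changes neither values nor slopes
z = np.linspace(0.3, 1.1, 5)
err0 = float(np.max(np.abs(spl(z + 2 * np.pi) - spl(z))))
err1 = float(np.max(np.abs(spl(z + 2 * np.pi, 1) - spl(z, 1))))
```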
def make_interp_spline(x, y, k=3, t=None, bc_type=None, axis=0,
check_finite=True):
"""Compute the (coefficients of) interpolating B-spline.
Parameters
----------
x : array_like, shape (n,)
Abscissas.
y : array_like, shape (n, ...)
Ordinates.
k : int, optional
B-spline degree. Default is cubic, ``k = 3``.
t : array_like, shape (nt + k + 1,), optional.
Knots.
The number of knots needs to agree with the number of data points and
the number of derivatives at the edges. Specifically, ``nt - n`` must
equal ``len(deriv_l) + len(deriv_r)``.
bc_type : 2-tuple or None
Boundary conditions.
Default is None, which means choosing the boundary conditions
automatically. Otherwise, it must be a length-two tuple where the first
element (``deriv_l``) sets the boundary conditions at ``x[0]`` and
the second element (``deriv_r``) sets the boundary conditions at
``x[-1]``. Each of these must be an iterable of pairs
``(order, value)`` which gives the values of derivatives of specified
orders at the given edge of the interpolation interval.
Alternatively, the following string aliases are recognized:
* ``"clamped"``: The first derivatives at the ends are zero. This is
equivalent to ``bc_type=([(1, 0.0)], [(1, 0.0)])``.
* ``"natural"``: The second derivatives at ends are zero. This is
equivalent to ``bc_type=([(2, 0.0)], [(2, 0.0)])``.
* ``"not-a-knot"`` (default): The first and second segments are the
same polynomial. This is equivalent to having ``bc_type=None``.
* ``"periodic"``: The values and the first ``k-1`` derivatives at the
ends are equivalent.
axis : int, optional
Interpolation axis. Default is 0.
check_finite : bool, optional
Whether to check that the input arrays contain only finite numbers.
Disabling may give a performance gain, but may result in problems
(crashes, non-termination) if the inputs do contain infinities or NaNs.
Default is True.
Returns
-------
b : a BSpline object of the degree ``k`` and with knots ``t``.
See Also
--------
BSpline : base class representing the B-spline objects
CubicSpline : a cubic spline in the polynomial basis
make_lsq_spline : a similar factory function for spline fitting
UnivariateSpline : a wrapper over FITPACK spline fitting routines
splrep : a wrapper over FITPACK spline fitting routines
Examples
--------
Use cubic interpolation on Chebyshev nodes:
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> def cheb_nodes(N):
... jj = 2.*np.arange(N) + 1
... x = np.cos(np.pi * jj / 2 / N)[::-1]
... return x
>>> x = cheb_nodes(20)
>>> y = np.sqrt(1 - x**2)
>>> from scipy.interpolate import BSpline, make_interp_spline
>>> b = make_interp_spline(x, y)
>>> np.allclose(b(x), y)
True
Note that the default is a cubic spline with a not-a-knot boundary condition
>>> b.k
3
Here we use a 'natural' spline, with zero 2nd derivatives at edges:
>>> l, r = [(2, 0.0)], [(2, 0.0)]
>>> b_n = make_interp_spline(x, y, bc_type=(l, r)) # or, bc_type="natural"
>>> np.allclose(b_n(x), y)
True
>>> x0, x1 = x[0], x[-1]
>>> np.allclose([b_n(x0, 2), b_n(x1, 2)], [0, 0])
True
Interpolation of parametric curves is also supported. As an example, we
compute a discretization of a snail curve in polar coordinates
>>> phi = np.linspace(0, 2.*np.pi, 40)
>>> r = 0.3 + np.cos(phi)
>>> x, y = r*np.cos(phi), r*np.sin(phi) # convert to Cartesian coordinates
Build an interpolating curve, parameterizing it by the angle
>>> spl = make_interp_spline(phi, np.c_[x, y])
Evaluate the interpolant on a finer grid (note that we transpose the result
to unpack it into a pair of x- and y-arrays)
>>> phi_new = np.linspace(0, 2.*np.pi, 100)
>>> x_new, y_new = spl(phi_new).T
Plot the result
>>> plt.plot(x, y, 'o')
>>> plt.plot(x_new, y_new, '-')
>>> plt.show()
Build a B-spline curve with 2 dimensional y
>>> x = np.linspace(0, 2*np.pi, 10)
>>> y = np.array([np.sin(x), np.cos(x)])
Periodic condition is satisfied because y coordinates of points on the ends
are equivalent
>>> ax = plt.axes(projection='3d')
>>> xx = np.linspace(0, 2*np.pi, 100)
>>> bspl = make_interp_spline(x, y, k=5, bc_type='periodic', axis=1)
>>> ax.plot3D(xx, *bspl(xx))
>>> ax.scatter3D(x, *y, color='red')
>>> plt.show()
"""
# convert string aliases for the boundary conditions
if bc_type is None or bc_type == 'not-a-knot' or bc_type == 'periodic':
deriv_l, deriv_r = None, None
elif isinstance(bc_type, str):
deriv_l, deriv_r = bc_type, bc_type
else:
try:
deriv_l, deriv_r = bc_type
except TypeError as e:
raise ValueError("Unknown boundary condition: %s" % bc_type) from e
y = np.asarray(y)
axis = normalize_axis_index(axis, y.ndim)
x = _as_float_array(x, check_finite)
y = _as_float_array(y, check_finite)
y = np.moveaxis(y, axis, 0) # now internally interp axis is zero
# sanity check the input
if bc_type == 'periodic' and not np.allclose(y[0], y[-1], atol=1e-15):
raise ValueError("First and last points do not match while "
"periodic case expected")
if x.size != y.shape[0]:
raise ValueError('Shapes of x {} and y {} are incompatible'
.format(x.shape, y.shape))
if np.any(x[1:] == x[:-1]):
raise ValueError("Expect x to not have duplicates")
if x.ndim != 1 or np.any(x[1:] < x[:-1]):
raise ValueError("Expect x to be a 1D strictly increasing sequence.")
# special-case k=0 right away
if k == 0:
if any(_ is not None for _ in (t, deriv_l, deriv_r)):
raise ValueError("Too much info for k=0: t and bc_type can only "
"be None.")
t = np.r_[x, x[-1]]
c = np.asarray(y)
c = np.ascontiguousarray(c, dtype=_get_dtype(c.dtype))
return BSpline.construct_fast(t, c, k, axis=axis)
# special-case k=1 (e.g., Lyche and Morken, Eq.(2.16))
if k == 1 and t is None:
if not (deriv_l is None and deriv_r is None):
raise ValueError("Too much info for k=1: bc_type can only be None.")
t = np.r_[x[0], x, x[-1]]
c = np.asarray(y)
c = np.ascontiguousarray(c, dtype=_get_dtype(c.dtype))
return BSpline.construct_fast(t, c, k, axis=axis)
k = operator.index(k)
if bc_type == 'periodic' and t is not None:
raise NotImplementedError("For periodic case t is constructed "
"automatically and can not be passed "
"manually")
# come up with a sensible knot vector, if needed
if t is None:
if deriv_l is None and deriv_r is None:
if bc_type == 'periodic':
t = _periodic_knots(x, k)
elif k == 2:
# OK, it's a bit ad hoc: Greville sites + omit
# 2nd and 2nd-to-last points, a la not-a-knot
t = (x[1:] + x[:-1]) / 2.
t = np.r_[(x[0],)*(k+1),
t[1:-1],
(x[-1],)*(k+1)]
else:
t = _not_a_knot(x, k)
else:
t = _augknt(x, k)
t = _as_float_array(t, check_finite)
if k < 0:
raise ValueError("Expect non-negative k.")
if t.ndim != 1 or np.any(t[1:] < t[:-1]):
raise ValueError("Expect t to be a 1-D sorted array_like.")
if t.size < x.size + k + 1:
raise ValueError('Got %d knots, need at least %d.' %
(t.size, x.size + k + 1))
if (x[0] < t[k]) or (x[-1] > t[-k]):
raise ValueError('Out of bounds w/ x = %s.' % x)
if bc_type == 'periodic':
return _make_periodic_spline(x, y, t, k, axis)
# Here : deriv_l, r = [(nu, value), ...]
deriv_l = _convert_string_aliases(deriv_l, y.shape[1:])
deriv_l_ords, deriv_l_vals = _process_deriv_spec(deriv_l)
nleft = deriv_l_ords.shape[0]
deriv_r = _convert_string_aliases(deriv_r, y.shape[1:])
deriv_r_ords, deriv_r_vals = _process_deriv_spec(deriv_r)
nright = deriv_r_ords.shape[0]
# have `n` conditions for `nt` coefficients; need nt-n derivatives
n = x.size
nt = t.size - k - 1
if nt - n != nleft + nright:
raise ValueError("The number of derivatives at boundaries does not "
"match: expected {}, got {}+{}".format(nt-n, nleft, nright))
# bail out if the `y` array is zero-sized
if y.size == 0:
c = np.zeros((nt,) + y.shape[1:], dtype=float)
return BSpline.construct_fast(t, c, k, axis=axis)
# set up the LHS: the collocation matrix + derivatives at boundaries
kl = ku = k
ab = np.zeros((2*kl + ku + 1, nt), dtype=np.float64, order='F')
_bspl._colloc(x, t, k, ab, offset=nleft)
if nleft > 0:
_bspl._handle_lhs_derivatives(t, k, x[0], ab, kl, ku, deriv_l_ords)
if nright > 0:
_bspl._handle_lhs_derivatives(t, k, x[-1], ab, kl, ku, deriv_r_ords,
offset=nt-nright)
# set up the RHS: values to interpolate (+ derivative values, if any)
extradim = prod(y.shape[1:])
rhs = np.empty((nt, extradim), dtype=y.dtype)
if nleft > 0:
rhs[:nleft] = deriv_l_vals.reshape(-1, extradim)
rhs[nleft:nt - nright] = y.reshape(-1, extradim)
if nright > 0:
rhs[nt - nright:] = deriv_r_vals.reshape(-1, extradim)
# solve Ab @ x = rhs; this is the relevant part of linalg.solve_banded
if check_finite:
ab, rhs = map(np.asarray_chkfinite, (ab, rhs))
gbsv, = get_lapack_funcs(('gbsv',), (ab, rhs))
lu, piv, c, info = gbsv(kl, ku, ab, rhs,
overwrite_ab=True, overwrite_b=True)
if info > 0:
raise LinAlgError("Collocation matrix is singular.")
elif info < 0:
raise ValueError('illegal value in %d-th argument of internal gbsv' % -info)
c = np.ascontiguousarray(c.reshape((nt,) + y.shape[1:]))
return BSpline.construct_fast(t, c, k, axis=axis)
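A brief usage sketch of the routine above, using only the public `scipy.interpolate` API: with `bc_type="natural"` the resulting cubic spline interpolates the data and, by construction, has vanishing second derivative at both edges.

```python
# Sketch: make_interp_spline with bc_type="natural" -- the cubic spline
# passes through the data and has zero curvature at both edges.
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0, 3, 11)
y = np.sin(x)

b = make_interp_spline(x, y, k=3, bc_type="natural")

# interpolation property
assert np.allclose(b(x), y)
# natural boundary condition: vanishing 2nd derivative at both ends
assert np.allclose([b(x[0], 2), b(x[-1], 2)], [0.0, 0.0])
```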
def make_lsq_spline(x, y, t, k=3, w=None, axis=0, check_finite=True):
r"""Compute the (coefficients of) an LSQ (Least SQuared) based
fitting B-spline.
The result is a linear combination
.. math::
S(x) = \sum_j c_j B_j(x; t)
of the B-spline basis elements, :math:`B_j(x; t)`, which minimizes
.. math::
\sum_{j} \left( w_j \times (S(x_j) - y_j) \right)^2
Parameters
----------
x : array_like, shape (m,)
Abscissas.
y : array_like, shape (m, ...)
Ordinates.
t : array_like, shape (n + k + 1,).
Knots.
Knots and data points must satisfy Schoenberg-Whitney conditions.
k : int, optional
B-spline degree. Default is cubic, ``k = 3``.
w : array_like, shape (m,), optional
Weights for spline fitting. Must be positive. If ``None``,
then weights are all equal.
Default is ``None``.
axis : int, optional
Interpolation axis. Default is zero.
check_finite : bool, optional
Whether to check that the input arrays contain only finite numbers.
Disabling may give a performance gain, but may result in problems
(crashes, non-termination) if the inputs do contain infinities or NaNs.
Default is True.
Returns
-------
b : a BSpline object of the degree ``k`` with knots ``t``.
See Also
--------
BSpline : base class representing the B-spline objects
make_interp_spline : a similar factory function for interpolating splines
LSQUnivariateSpline : a FITPACK-based spline fitting routine
splrep : a FITPACK-based fitting routine
Notes
-----
The number of data points must be larger than the spline degree ``k``.
Knots ``t`` must satisfy the Schoenberg-Whitney conditions,
i.e., there must be a subset of data points ``x[j]`` such that
``t[j] < x[j] < t[j+k+1]``, for ``j=0, 1,...,n-k-2``.
Examples
--------
Generate some noisy data:
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> rng = np.random.default_rng()
>>> x = np.linspace(-3, 3, 50)
>>> y = np.exp(-x**2) + 0.1 * rng.standard_normal(50)
Now fit a smoothing cubic spline with pre-defined internal knots.
Here we make the knot vector (k+1)-regular by adding boundary knots:
>>> from scipy.interpolate import make_lsq_spline, BSpline
>>> t = [-1, 0, 1]
>>> k = 3
>>> t = np.r_[(x[0],)*(k+1),
... t,
... (x[-1],)*(k+1)]
>>> spl = make_lsq_spline(x, y, t, k)
For comparison, we also construct an interpolating spline for the same
set of data:
>>> from scipy.interpolate import make_interp_spline
>>> spl_i = make_interp_spline(x, y)
Plot both:
>>> xs = np.linspace(-3, 3, 100)
>>> plt.plot(x, y, 'ro', ms=5)
>>> plt.plot(xs, spl(xs), 'g-', lw=3, label='LSQ spline')
>>> plt.plot(xs, spl_i(xs), 'b-', lw=3, alpha=0.7, label='interp spline')
>>> plt.legend(loc='best')
>>> plt.show()
**NaN handling**: If the input arrays contain ``nan`` values, the result is
not useful since the underlying spline fitting routines cannot deal with
``nan``. A workaround is to use zero weights for not-a-number data points:
>>> y[8] = np.nan
>>> w = np.isnan(y)
>>> y[w] = 0.
>>> tck = make_lsq_spline(x, y, t, w=~w)
Notice the need to replace a ``nan`` by a numerical value (the precise
value does not matter as long as the corresponding weight is zero).
"""
x = _as_float_array(x, check_finite)
y = _as_float_array(y, check_finite)
t = _as_float_array(t, check_finite)
if w is not None:
w = _as_float_array(w, check_finite)
else:
w = np.ones_like(x)
k = operator.index(k)
axis = normalize_axis_index(axis, y.ndim)
y = np.moveaxis(y, axis, 0) # now internally interp axis is zero
if x.ndim != 1 or np.any(x[1:] - x[:-1] <= 0):
raise ValueError("Expect x to be a 1-D sorted array_like.")
if x.shape[0] < k+1:
raise ValueError("Need more x points.")
if k < 0:
raise ValueError("Expect non-negative k.")
if t.ndim != 1 or np.any(t[1:] - t[:-1] < 0):
raise ValueError("Expect t to be a 1-D sorted array_like.")
if x.size != y.shape[0]:
raise ValueError('Shapes of x {} and y {} are incompatible'
.format(x.shape, y.shape))
if k > 0 and np.any((x < t[k]) | (x > t[-k])):
raise ValueError('Out of bounds w/ x = %s.' % x)
if x.size != w.size:
raise ValueError('Shapes of x {} and w {} are incompatible'
.format(x.shape, w.shape))
# number of coefficients
n = t.size - k - 1
# construct A.T @ A and rhs with A the collocation matrix, and
# rhs = A.T @ y for solving the LSQ problem ``A.T @ A @ c = A.T @ y``
lower = True
extradim = prod(y.shape[1:])
ab = np.zeros((k+1, n), dtype=np.float64, order='F')
rhs = np.zeros((n, extradim), dtype=y.dtype, order='F')
_bspl._norm_eq_lsq(x, t, k,
y.reshape(-1, extradim),
w,
ab, rhs)
rhs = rhs.reshape((n,) + y.shape[1:])
# have observation matrix & rhs, can solve the LSQ problem
cho_decomp = cholesky_banded(ab, overwrite_ab=True, lower=lower,
check_finite=check_finite)
c = cho_solve_banded((cho_decomp, lower), rhs, overwrite_b=True,
check_finite=check_finite)
c = np.ascontiguousarray(c)
return BSpline.construct_fast(t, c, k, axis=axis)
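A short sketch of a least-squares fit with the function above, mirroring its docstring example: the interior knots are chosen coarser than the data and padded to a (k+1)-regular knot vector (the specific knot values here are illustrative, not prescribed by the source).

```python
# Sketch: least-squares spline fit on a coarse, (k+1)-regular knot vector.
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(1234)
x = np.linspace(-3, 3, 50)
y = np.exp(-x**2) + 0.05 * rng.standard_normal(50)

k = 3
# interior knots, padded with (k+1) boundary knots on each side
t = np.r_[(x[0],) * (k + 1), [-1.0, 0.0, 1.0], (x[-1],) * (k + 1)]
spl = make_lsq_spline(x, y, t, k)

# the result is a cubic B-spline on the requested 11-knot vector
assert spl.k == 3 and spl.t.size == 11
```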
#############################
# Smoothing spline helpers #
#############################
def _compute_optimal_gcv_parameter(X, wE, y, w):
"""
Returns an optimal regularization parameter based on the GCV criterion [1].
Parameters
----------
X : array, shape (5, n)
5 bands of the design matrix ``X`` stored in LAPACK banded storage.
wE : array, shape (5, n)
5 bands of the penalty matrix :math:`W^{-1} E` stored in LAPACK banded
storage.
y : array, shape (n,)
Ordinates.
w : array, shape (n,)
Vector of weights.
Returns
-------
lam : float
Regularization parameter that is optimal in the sense of the GCV
criterion.
Notes
-----
No checks are performed.
References
----------
.. [1] G. Wahba, "Estimating the smoothing parameter" in Spline models
for observational data, Philadelphia, Pennsylvania: Society for
Industrial and Applied Mathematics, 1990, pp. 45-65.
:doi:`10.1137/1.9781611970128`
"""
def compute_banded_symmetric_XT_W_Y(X, w, Y):
"""
Assuming that the product :math:`X^T W Y` is symmetric and both ``X``
and ``Y`` are 5-banded, compute the unique bands of the product.
Parameters
----------
X : array, shape (5, n)
5 bands of the matrix ``X`` stored in LAPACK banded storage.
w : array, shape (n,)
Array of weights
Y : array, shape (5, n)
5 bands of the matrix ``Y`` stored in LAPACK banded storage.
Returns
-------
res : array, shape (4, n)
The result of the product :math:`X^T W Y` stored in the banded way.
Notes
-----
Since the matrices ``X`` and ``Y`` are 5-banded, their product
:math:`X^T W Y` is 7-banded. It is also symmetric, so we can store only
unique diagonals.
"""
# compute W Y
W_Y = np.copy(Y)
W_Y[2] *= w
for i in range(2):
W_Y[i, 2 - i:] *= w[:-2 + i]
W_Y[3 + i, :-1 - i] *= w[1 + i:]
n = X.shape[1]
res = np.zeros((4, n))
for i in range(n):
for j in range(min(n-i, 4)):
res[-j-1, i + j] = sum(X[j:, i] * W_Y[:5-j, i + j])
return res
def compute_b_inv(A):
"""
Compute the 3 central bands of the inverse of the matrix
:math:`A=U^T D^{-1} U`, assuming that ``U`` is a unit upper
triangular banded matrix, using an algorithm proposed in [1].
Parameters
----------
A : array, shape (4, n)
Matrix to invert, stored in LAPACK banded storage.
Returns
-------
B : array, shape (4, n)
3 unique bands of the symmetric matrix that is an inverse to ``A``.
The first row is filled with zeros.
Notes
-----
The algorithm is based on the Cholesky decomposition and, therefore,
raises a LinAlgError if the matrix ``A`` is not (numerically)
positive definite.
Both matrices ``A`` and ``B`` are stored in LAPACK banded storage.
References
----------
.. [1] M. F. Hutchinson and F. R. de Hoog, "Smoothing noisy data with
spline functions," Numerische Mathematik, vol. 47, no. 1,
pp. 99-106, 1985.
:doi:`10.1007/BF01389878`
"""
def find_b_inv_elem(i, j, U, D, B):
rng = min(3, n - i - 1)
rng_sum = 0.
if j == 0:
# use 2-nd formula from [1]
for k in range(1, rng + 1):
rng_sum -= U[-k - 1, i + k] * B[-k - 1, i + k]
rng_sum += D[i]
B[-1, i] = rng_sum
else:
# use 1-st formula from [1]
for k in range(1, rng + 1):
diag = abs(k - j)
ind = i + min(k, j)
rng_sum -= U[-k - 1, i + k] * B[-diag - 1, ind + diag]
B[-j - 1, i + j] = rng_sum
U = cholesky_banded(A)
for i in range(2, 5):
U[-i, i-1:] /= U[-1, :-i+1]
D = 1. / (U[-1])**2
U[-1] /= U[-1]
n = U.shape[1]
B = np.zeros(shape=(4, n))
for i in range(n - 1, -1, -1):
for j in range(min(3, n - i - 1), -1, -1):
find_b_inv_elem(i, j, U, D, B)
# the first row contains garbage and should be removed
B[0] = [0.] * n
return B
def _gcv(lam, X, XtWX, wE, XtE):
r"""
Computes the generalized cross-validation criteria [1].
Parameters
----------
lam : float, (:math:`\lambda \geq 0`)
Regularization parameter.
X : array, shape (5, n)
Matrix is stored in LAPACK banded storage.
XtWX : array, shape (4, n)
Product :math:`X^T W X` stored in LAPACK banded storage.
wE : array, shape (5, n)
Matrix :math:`W^{-1} E` stored in LAPACK banded storage.
XtE : array, shape (4, n)
Product :math:`X^T E` stored in LAPACK banded storage.
Returns
-------
res : float
Value of the GCV criterion with the regularization parameter
:math:`\lambda`.
Notes
-----
The criterion is computed from formula (1.3.2) in [3]:
.. math::
GCV(\lambda) = \dfrac{1}{n} \sum\limits_{k = 1}^{n} \dfrac{\left(
y_k - f_{\lambda}(x_k) \right)^2}{\left( 1 - \operatorname{tr}(A)/n \right)^2}
The criterion is discussed in section 1.3 of [3].
The numerator is computed using (2.2.4) [3] and the denominator is
computed using an algorithm from [2] (see in the ``compute_b_inv``
function).
References
----------
.. [1] G. Wahba, "Estimating the smoothing parameter" in Spline models
for observational data, Philadelphia, Pennsylvania: Society for
Industrial and Applied Mathematics, 1990, pp. 45-65.
:doi:`10.1137/1.9781611970128`
.. [2] M. F. Hutchinson and F. R. de Hoog, "Smoothing noisy data with
spline functions," Numerische Mathematik, vol. 47, no. 1,
pp. 99-106, 1985.
:doi:`10.1007/BF01389878`
.. [3] E. Zemlyanoy, "Generalized cross-validation smoothing splines",
BSc thesis, 2022. Might be available (in Russian)
`here <https://www.hse.ru/ba/am/students/diplomas/620910604>`_
"""
# Compute the numerator from (2.2.4) [3]
n = X.shape[1]
c = solve_banded((2, 2), X + lam * wE, y)
res = np.zeros(n)
# compute ``W^{-1} E c`` with respect to banded-storage of ``E``
tmp = wE * c
for i in range(n):
for j in range(max(0, i - n + 3), min(5, i + 3)):
res[i] += tmp[j, i + 2 - j]
numer = np.linalg.norm(lam * res)**2 / n
# compute the denominator
lhs = XtWX + lam * XtE
try:
b_banded = compute_b_inv(lhs)
# compute the trace of the product b_banded @ XtX
tr = b_banded * XtWX
tr[:-1] *= 2
# find the denominator
denom = (1 - sum(sum(tr)) / n)**2
except LinAlgError:
# cholesky decomposition cannot be performed
raise ValueError('Seems like the problem is ill-posed')
res = numer / denom
return res
n = X.shape[1]
XtWX = compute_banded_symmetric_XT_W_Y(X, w, X)
XtE = compute_banded_symmetric_XT_W_Y(X, w, wE)
def fun(lam):
return _gcv(lam, X, XtWX, wE, XtE)
gcv_est = minimize_scalar(fun, bounds=(0, n), method='bounded')
if gcv_est.success:
return gcv_est.x
raise ValueError(f"Unable to find minimum of the GCV "
f"function: {gcv_est.message}")
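The GCV search above relies on `scipy.optimize.minimize_scalar` with a bounded method. A minimal standalone use of the same optimizer pattern; the quadratic objective is a made-up stand-in for the GCV function, not part of the source.

```python
# Sketch: bounded scalar minimization, the same pattern used by
# _compute_optimal_gcv_parameter. The objective is a toy stand-in
# for the GCV function with a known minimum at lam = 2.
from scipy.optimize import minimize_scalar

res = minimize_scalar(lambda lam: (lam - 2.0) ** 2 + 1.0,
                      bounds=(0.0, 10.0), method='bounded')
assert res.success
assert abs(res.x - 2.0) < 1e-4
```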
def _coeff_of_divided_diff(x):
"""
Returns the coefficients of the divided difference.
Parameters
----------
x : array, shape (n,)
Array which is used for the computation of divided difference.
Returns
-------
res : array_like, shape (n,)
Coefficients of the divided difference.
Notes
-----
Vector ``x`` should have unique elements, otherwise a division by
zero might occur.
No checks are performed.
"""
n = x.shape[0]
res = np.zeros(n)
for i in range(n):
pp = 1.
for k in range(n):
if k != i:
pp *= (x[i] - x[k])
res[i] = 1. / pp
return res
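To illustrate the helper above: the returned coefficients are :math:`1/\prod_{k \ne i}(x_i - x_k)`, so contracting them with function values gives the leading divided difference. A standalone re-derivation (not the module's code path):

```python
# Sketch: divided-difference coefficients, re-derived standalone.
# res[i] = 1 / prod_{k != i} (x_i - x_k); contracting with f(x_i)
# yields the highest-order divided difference of f.
import numpy as np

def coeff_of_divided_diff(x):
    x = np.asarray(x, dtype=float)
    return np.array([1.0 / np.prod(xi - np.delete(x, i))
                     for i, xi in enumerate(x)])

x = np.array([0.0, 1.0, 2.0])
res = coeff_of_divided_diff(x)
assert np.allclose(res, [0.5, -1.0, 0.5])
# the 2nd divided difference of f(x) = x**2 is its leading coefficient, 1
assert np.isclose(res @ x**2, 1.0)
```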
def make_smoothing_spline(x, y, w=None, lam=None):
r"""
Compute the (coefficients of) smoothing cubic spline function using
``lam`` to control the tradeoff between the amount of smoothness of
the curve and its proximity to the data. If ``lam`` is None, it is
found using the GCV criterion [1].
A smoothing spline is found as a solution to the regularized weighted
linear regression problem:
.. math::
\sum\limits_{i=1}^n w_i\lvert y_i - f(x_i) \rvert^2 +
\lambda\int\limits_{x_1}^{x_n} (f^{(2)}(u))^2 d u
where :math:`f` is a spline function, :math:`w` is a vector of weights and
:math:`\lambda` is a regularization parameter.
If ``lam`` is None, we use the GCV criterion to find an optimal
regularization parameter, otherwise we solve the regularized weighted
linear regression problem with given parameter. The parameter controls
the tradeoff in the following way: the larger the parameter becomes, the
smoother the function gets.
Parameters
----------
x : array_like, shape (n,)
Abscissas. `n` must be at least 5.
y : array_like, shape (n,)
Ordinates. `n` must be at least 5.
w : array_like, shape (n,), optional
Vector of weights. Default is ``np.ones_like(x)``.
lam : float, (:math:`\lambda \geq 0`), optional
Regularization parameter. If ``lam`` is None, then it is found from
the GCV criteria. Default is None.
Returns
-------
func : a BSpline object.
A callable representing a spline in the B-spline basis
as a solution of the problem of smoothing splines using
the GCV criterion [1] if ``lam`` is None, otherwise using the
given parameter ``lam``.
Notes
-----
This algorithm is a clean-room reimplementation of the algorithm
introduced by Woltring in FORTRAN [2]. The original version cannot be
used in SciPy source code because of license issues. The details of
the reimplementation are discussed in [4] (in Russian).
If the vector of weights ``w`` is None, all points are weighted
equally, i.e. the vector of weights is a vector of ones.
Note that in weighted residual sum of squares, weights are not squared:
:math:`\sum\limits_{i=1}^n w_i\lvert y_i - f(x_i) \rvert^2` while in
``splrep`` the sum is built from the squared weights.
If the initial problem is ill-posed (for example, the product
:math:`X^T W X`, where :math:`X` is the design matrix, is not
positive definite), a ValueError is raised.
References
----------
.. [1] G. Wahba, "Estimating the smoothing parameter" in Spline models for
observational data, Philadelphia, Pennsylvania: Society for Industrial
and Applied Mathematics, 1990, pp. 45-65.
:doi:`10.1137/1.9781611970128`
.. [2] H. J. Woltring, A Fortran package for generalized, cross-validatory
spline smoothing and differentiation, Advances in Engineering
Software, vol. 8, no. 2, pp. 104-113, 1986.
:doi:`10.1016/0141-1195(86)90098-7`
.. [3] T. Hastie, J. Friedman, and R. Tisbshirani, "Smoothing Splines" in
The elements of Statistical Learning: Data Mining, Inference, and
prediction, New York: Springer, 2017, pp. 241-249.
:doi:`10.1007/978-0-387-84858-7`
.. [4] E. Zemlyanoy, "Generalized cross-validation smoothing splines",
BSc thesis, 2022.
`<https://www.hse.ru/ba/am/students/diplomas/620910604>`_ (in
Russian)
Examples
--------
Generate some noisy data
>>> import numpy as np
>>> np.random.seed(1234)
>>> n = 200
>>> def func(x):
... return x**3 + x**2 * np.sin(4 * x)
>>> x = np.sort(np.random.random_sample(n) * 4 - 2)
>>> y = func(x) + np.random.normal(scale=1.5, size=n)
Make a smoothing spline function
>>> from scipy.interpolate import make_smoothing_spline
>>> spl = make_smoothing_spline(x, y)
Plot both
>>> import matplotlib.pyplot as plt
>>> grid = np.linspace(x[0], x[-1], 400)
>>> plt.plot(grid, spl(grid), label='Spline')
>>> plt.plot(grid, func(grid), label='Original function')
>>> plt.scatter(x, y, marker='.')
>>> plt.legend(loc='best')
>>> plt.show()
"""
x = np.ascontiguousarray(x, dtype=float)
y = np.ascontiguousarray(y, dtype=float)
if any(x[1:] - x[:-1] <= 0):
raise ValueError('``x`` should be an ascending array')
if x.ndim != 1 or y.ndim != 1 or x.shape[0] != y.shape[0]:
raise ValueError('``x`` and ``y`` should be one dimensional and the'
' same size')
if w is None:
w = np.ones(len(x))
else:
w = np.ascontiguousarray(w)
if any(w <= 0):
raise ValueError('Invalid vector of weights')
t = np.r_[[x[0]] * 3, x, [x[-1]] * 3]
n = x.shape[0]
if n <= 4:
raise ValueError('``x`` and ``y`` length must be at least 5')
# It is known that the solution to the stated minimization problem exists
# and is a natural cubic spline with vector of knots equal to the unique
# elements of ``x`` [3], so we will solve the problem in the basis of
# natural splines.
# create design matrix in the B-spline basis
X_bspl = BSpline.design_matrix(x, t, 3)
# move from B-spline basis to the basis of natural splines using equations
# (2.1.7) [4]
# central elements
X = np.zeros((5, n))
for i in range(1, 4):
X[i, 2: -2] = X_bspl[i: i - 4, 3: -3][np.diag_indices(n - 4)]
# first elements
X[1, 1] = X_bspl[0, 0]
X[2, :2] = ((x[2] + x[1] - 2 * x[0]) * X_bspl[0, 0],
X_bspl[1, 1] + X_bspl[1, 2])
X[3, :2] = ((x[2] - x[0]) * X_bspl[1, 1], X_bspl[2, 2])
# last elements
X[1, -2:] = (X_bspl[-3, -3], (x[-1] - x[-3]) * X_bspl[-2, -2])
X[2, -2:] = (X_bspl[-2, -3] + X_bspl[-2, -2],
(2 * x[-1] - x[-2] - x[-3]) * X_bspl[-1, -1])
X[3, -2] = X_bspl[-1, -1]
# create penalty matrix and divide it by vector of weights: W^{-1} E
wE = np.zeros((5, n))
wE[2:, 0] = _coeff_of_divided_diff(x[:3]) / w[:3]
wE[1:, 1] = _coeff_of_divided_diff(x[:4]) / w[:4]
for j in range(2, n - 2):
wE[:, j] = (x[j+2] - x[j-2]) * _coeff_of_divided_diff(x[j-2:j+3])\
/ w[j-2: j+3]
wE[:-1, -2] = -_coeff_of_divided_diff(x[-4:]) / w[-4:]
wE[:-2, -1] = _coeff_of_divided_diff(x[-3:]) / w[-3:]
wE *= 6
if lam is None:
lam = _compute_optimal_gcv_parameter(X, wE, y, w)
elif lam < 0.:
raise ValueError('Regularization parameter should be non-negative')
# solve the initial problem in the basis of natural splines
c = solve_banded((2, 2), X + lam * wE, y)
# move back to B-spline basis using equations (2.2.10) [4]
c_ = np.r_[c[0] * (t[5] + t[4] - 2 * t[3]) + c[1],
c[0] * (t[5] - t[3]) + c[1],
c[1: -1],
c[-1] * (t[-4] - t[-6]) + c[-2],
c[-1] * (2 * t[-4] - t[-5] - t[-6]) + c[-2]]
return BSpline.construct_fast(t, c_, 3)
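A short usage sketch of the function above, comparing an explicit ``lam`` with the GCV-chosen one (assumes SciPy >= 1.10, where `make_smoothing_spline` was added; the data here are illustrative).

```python
# Sketch: make_smoothing_spline with GCV-chosen vs explicit ``lam``.
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(1234)
x = np.sort(rng.uniform(-2, 2, 100))
y = x**3 + rng.normal(scale=0.5, size=100)

spl_gcv = make_smoothing_spline(x, y)              # lam found by GCV
spl_stiff = make_smoothing_spline(x, y, lam=10.0)  # heavy smoothing

grid = np.linspace(x[0], x[-1], 200)
# both splines evaluate to finite values on the data range
assert np.all(np.isfinite(spl_gcv(grid)))
assert np.all(np.isfinite(spl_stiff(grid)))
```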
# scipy-main/scipy/interpolate/_rgi.py
__all__ = ['RegularGridInterpolator', 'interpn']
import itertools
import numpy as np
from .interpnd import _ndim_coords_from_arrays
from ._cubic import PchipInterpolator
from ._rgi_cython import evaluate_linear_2d, find_indices
from ._bsplines import make_interp_spline
from ._fitpack2 import RectBivariateSpline
def _check_points(points):
descending_dimensions = []
grid = []
for i, p in enumerate(points):
# early make points float
# see https://github.com/scipy/scipy/pull/17230
p = np.asarray(p, dtype=float)
if not np.all(p[1:] > p[:-1]):
if np.all(p[1:] < p[:-1]):
# input is descending, so make it ascending
descending_dimensions.append(i)
p = np.flip(p)
else:
raise ValueError(
"The points in dimension %d must be strictly "
"ascending or descending" % i)
# see https://github.com/scipy/scipy/issues/17716
p = np.ascontiguousarray(p)
grid.append(p)
return tuple(grid), tuple(descending_dimensions)
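As the helper above shows, descending axes are flipped to ascending internally (and the values are flipped to match in `__init__`), so from the user's side `RegularGridInterpolator` accepts descending grids transparently. A small sketch (descending-axis support assumes SciPy >= 1.9):

```python
# Sketch: RegularGridInterpolator with a strictly descending axis;
# linear interpolation reproduces the (linear) underlying function.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

x_desc = np.array([4.0, 2.0, 0.0])      # strictly descending axis
y = np.array([0.0, 1.0])
values = x_desc[:, None] + y[None, :]   # f(x, y) = x + y on the grid

interp = RegularGridInterpolator((x_desc, y), values)
assert np.isclose(interp([[3.0, 0.5]])[0], 3.5)   # exact for linear f
assert np.isclose(interp([[2.0, 1.0]])[0], 3.0)   # a grid point
```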
def _check_dimensionality(points, values):
if len(points) > values.ndim:
raise ValueError("There are %d point arrays, but values has %d "
"dimensions" % (len(points), values.ndim))
for i, p in enumerate(points):
if not np.asarray(p).ndim == 1:
raise ValueError("The points in dimension %d must be "
"1-dimensional" % i)
if not values.shape[i] == len(p):
raise ValueError("There are %d points and %d values in "
"dimension %d" % (len(p), values.shape[i], i))
class RegularGridInterpolator:
"""
Interpolation on a regular or rectilinear grid in arbitrary dimensions.
The data must be defined on a rectilinear grid; that is, a rectangular
grid with even or uneven spacing. Linear, nearest-neighbor, and spline
interpolation are supported. After setting up the interpolator object,
the interpolation method may be chosen at each evaluation.
Parameters
----------
points : tuple of ndarray of float, with shapes (m1, ), ..., (mn, )
The points defining the regular grid in n dimensions. The points in
each dimension (i.e., every element of the points tuple) must be
strictly ascending or descending.
values : array_like, shape (m1, ..., mn, ...)
The data on the regular grid in n dimensions. Complex data is
accepted.
method : str, optional
The method of interpolation to perform. Supported are "linear",
"nearest", "slinear", "cubic", "quintic" and "pchip". This
parameter will become the default for the object's ``__call__``
method. Default is "linear".
bounds_error : bool, optional
If True, when interpolated values are requested outside of the
domain of the input data, a ValueError is raised.
If False, then `fill_value` is used.
Default is True.
fill_value : float or None, optional
The value to use for points outside of the interpolation domain.
If None, values outside the domain are extrapolated.
Default is ``np.nan``.
Methods
-------
__call__
Attributes
----------
grid : tuple of ndarrays
The points defining the regular grid in n dimensions.
This tuple defines the full grid via
``np.meshgrid(*grid, indexing='ij')``
values : ndarray
Data values at the grid.
method : str
Interpolation method.
fill_value : float or ``None``
Use this value for out-of-bounds arguments to `__call__`.
bounds_error : bool
If ``True``, out-of-bounds arguments raise a ``ValueError``.
Notes
-----
Contrary to `LinearNDInterpolator` and `NearestNDInterpolator`, this class
avoids expensive triangulation of the input data by taking advantage of the
regular grid structure.
In other words, this class assumes that the data is defined on a
*rectilinear* grid.
.. versionadded:: 0.14
The 'slinear' (k=1), 'cubic' (k=3), and 'quintic' (k=5) methods are
tensor-product spline interpolators, where `k` is the spline degree.
If any dimension has fewer points than ``k + 1``, an error is raised.
.. versionadded:: 1.9
If the input data is such that dimensions have incommensurate
units and differ by many orders of magnitude, the interpolant may have
numerical artifacts. Consider rescaling the data before interpolating.
Examples
--------
**Evaluate a function on the points of a 3-D grid**
As a first example, we evaluate a simple example function on the points of
a 3-D grid:
>>> from scipy.interpolate import RegularGridInterpolator
>>> import numpy as np
>>> def f(x, y, z):
... return 2 * x**3 + 3 * y**2 - z
>>> x = np.linspace(1, 4, 11)
>>> y = np.linspace(4, 7, 22)
>>> z = np.linspace(7, 9, 33)
>>> xg, yg, zg = np.meshgrid(x, y, z, indexing='ij', sparse=True)
>>> data = f(xg, yg, zg)
``data`` is now a 3-D array with ``data[i, j, k] = f(x[i], y[j], z[k])``.
Next, define an interpolating function from this data:
>>> interp = RegularGridInterpolator((x, y, z), data)
Evaluate the interpolating function at the two points
``(x,y,z) = (2.1, 6.2, 8.3)`` and ``(3.3, 5.2, 7.1)``:
>>> pts = np.array([[2.1, 6.2, 8.3],
... [3.3, 5.2, 7.1]])
>>> interp(pts)
array([ 125.80469388, 146.30069388])
which is indeed a close approximation to
>>> f(2.1, 6.2, 8.3), f(3.3, 5.2, 7.1)
(125.54200000000002, 145.894)
**Interpolate and extrapolate a 2D dataset**
As a second example, we interpolate and extrapolate a 2D data set:
>>> x, y = np.array([-2, 0, 4]), np.array([-2, 0, 2, 5])
>>> def ff(x, y):
... return x**2 + y**2
>>> xg, yg = np.meshgrid(x, y, indexing='ij')
>>> data = ff(xg, yg)
>>> interp = RegularGridInterpolator((x, y), data,
... bounds_error=False, fill_value=None)
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax = fig.add_subplot(projection='3d')
>>> ax.scatter(xg.ravel(), yg.ravel(), data.ravel(),
... s=60, c='k', label='data')
Evaluate and plot the interpolator on a finer grid
>>> xx = np.linspace(-4, 9, 31)
>>> yy = np.linspace(-4, 9, 31)
>>> X, Y = np.meshgrid(xx, yy, indexing='ij')
>>> # interpolator
>>> ax.plot_wireframe(X, Y, interp((X, Y)), rstride=3, cstride=3,
... alpha=0.4, color='m', label='linear interp')
>>> # ground truth
>>> ax.plot_wireframe(X, Y, ff(X, Y), rstride=3, cstride=3,
... alpha=0.4, label='ground truth')
>>> plt.legend()
>>> plt.show()
Other examples are given
:ref:`in the tutorial <tutorial-interpolate_regular_grid_interpolator>`.
See Also
--------
NearestNDInterpolator : Nearest neighbor interpolation on *unstructured*
data in N dimensions
LinearNDInterpolator : Piecewise linear interpolant on *unstructured* data
in N dimensions
interpn : a convenience function which wraps `RegularGridInterpolator`
scipy.ndimage.map_coordinates : interpolation on grids with equal spacing
(suitable for e.g., N-D image resampling)
References
----------
.. [1] Python package *regulargrid* by Johannes Buchner, see
https://pypi.python.org/pypi/regulargrid/
.. [2] Wikipedia, "Trilinear interpolation",
https://en.wikipedia.org/wiki/Trilinear_interpolation
.. [3] Weiser, Alan, and Sergio E. Zarantonello. "A note on piecewise linear
and multilinear table interpolation in many dimensions." MATH.
COMPUT. 50.181 (1988): 189-196.
https://www.ams.org/journals/mcom/1988-50-181/S0025-5718-1988-0917826-0/S0025-5718-1988-0917826-0.pdf
:doi:`10.1090/S0025-5718-1988-0917826-0`
"""
# this class is based on code originally programmed by Johannes Buchner,
# see https://github.com/JohannesBuchner/regulargrid
_SPLINE_DEGREE_MAP = {"slinear": 1, "cubic": 3, "quintic": 5, "pchip": 3}
_SPLINE_METHODS = list(_SPLINE_DEGREE_MAP.keys())
_ALL_METHODS = ["linear", "nearest"] + _SPLINE_METHODS
def __init__(self, points, values, method="linear", bounds_error=True,
fill_value=np.nan):
if method not in self._ALL_METHODS:
raise ValueError("Method '%s' is not defined" % method)
elif method in self._SPLINE_METHODS:
self._validate_grid_dimensions(points, method)
self.method = method
self.bounds_error = bounds_error
self.grid, self._descending_dimensions = _check_points(points)
self.values = self._check_values(values)
self._check_dimensionality(self.grid, self.values)
self.fill_value = self._check_fill_value(self.values, fill_value)
if self._descending_dimensions:
self.values = np.flip(values, axis=self._descending_dimensions)
def _check_dimensionality(self, grid, values):
_check_dimensionality(grid, values)
def _check_points(self, points):
return _check_points(points)
def _check_values(self, values):
if not hasattr(values, 'ndim'):
# allow reasonable duck-typed values
values = np.asarray(values)
if hasattr(values, 'dtype') and hasattr(values, 'astype'):
if not np.issubdtype(values.dtype, np.inexact):
values = values.astype(float)
return values
def _check_fill_value(self, values, fill_value):
if fill_value is not None:
fill_value_dtype = np.asarray(fill_value).dtype
if (hasattr(values, 'dtype') and not
np.can_cast(fill_value_dtype, values.dtype,
casting='same_kind')):
raise ValueError("fill_value must be either 'None' or "
"of a type compatible with values")
return fill_value
def __call__(self, xi, method=None):
"""
Interpolation at coordinates.
Parameters
----------
xi : ndarray of shape (..., ndim)
The coordinates to evaluate the interpolator at.
method : str, optional
The method of interpolation to perform. Supported are "linear",
"nearest", "slinear", "cubic", "quintic" and "pchip". Default is
the method chosen when the interpolator was created.
Returns
-------
values_x : ndarray, shape xi.shape[:-1] + values.shape[ndim:]
Interpolated values at `xi`. See notes for behaviour when
``xi.ndim == 1``.
Notes
-----
In the case that ``xi.ndim == 1`` a new axis is inserted into
the 0 position of the returned array, values_x, so its shape is
instead ``(1,) + values.shape[ndim:]``.
Examples
--------
Here we define a nearest-neighbor interpolator of a simple function
>>> import numpy as np
>>> x, y = np.array([0, 1, 2]), np.array([1, 3, 7])
>>> def f(x, y):
... return x**2 + y**2
>>> data = f(*np.meshgrid(x, y, indexing='ij', sparse=True))
>>> from scipy.interpolate import RegularGridInterpolator
>>> interp = RegularGridInterpolator((x, y), data, method='nearest')
By construction, the interpolator uses the nearest-neighbor
interpolation
>>> interp([[1.5, 1.3], [0.3, 4.5]])
array([2., 9.])
We can however evaluate the linear interpolant by overriding the
`method` parameter
>>> interp([[1.5, 1.3], [0.3, 4.5]], method='linear')
array([ 4.7, 24.3])
"""
is_method_changed = self.method != method
method = self.method if method is None else method
if method not in self._ALL_METHODS:
raise ValueError("Method '%s' is not defined" % method)
xi, xi_shape, ndim, nans, out_of_bounds = self._prepare_xi(xi)
if method == "linear":
indices, norm_distances = self._find_indices(xi.T)
if (ndim == 2 and hasattr(self.values, 'dtype') and
self.values.ndim == 2 and self.values.flags.writeable and
self.values.dtype in (np.float64, np.complex128) and
self.values.dtype.byteorder == '='):
# until cython supports const fused types, the fast path
# cannot support non-writeable values
# a fast path
out = np.empty(indices.shape[1], dtype=self.values.dtype)
result = evaluate_linear_2d(self.values,
indices,
norm_distances,
self.grid,
out)
else:
result = self._evaluate_linear(indices, norm_distances)
elif method == "nearest":
indices, norm_distances = self._find_indices(xi.T)
result = self._evaluate_nearest(indices, norm_distances)
elif method in self._SPLINE_METHODS:
if is_method_changed:
self._validate_grid_dimensions(self.grid, method)
result = self._evaluate_spline(xi, method)
if not self.bounds_error and self.fill_value is not None:
result[out_of_bounds] = self.fill_value
# f(nan) = nan, if any
if np.any(nans):
result[nans] = np.nan
return result.reshape(xi_shape[:-1] + self.values.shape[ndim:])
def _prepare_xi(self, xi):
ndim = len(self.grid)
xi = _ndim_coords_from_arrays(xi, ndim=ndim)
if xi.shape[-1] != len(self.grid):
raise ValueError("The requested sample points xi have dimension "
f"{xi.shape[-1]} but this "
f"RegularGridInterpolator has dimension {ndim}")
xi_shape = xi.shape
xi = xi.reshape(-1, xi_shape[-1])
xi = np.asarray(xi, dtype=float)
# find nans in input
nans = np.any(np.isnan(xi), axis=-1)
if self.bounds_error:
for i, p in enumerate(xi.T):
if not np.logical_and(np.all(self.grid[i][0] <= p),
np.all(p <= self.grid[i][-1])):
raise ValueError("One of the requested xi is out of bounds "
"in dimension %d" % i)
out_of_bounds = None
else:
out_of_bounds = self._find_out_of_bounds(xi.T)
return xi, xi_shape, ndim, nans, out_of_bounds
def _evaluate_linear(self, indices, norm_distances):
# slice for broadcasting over trailing dimensions in self.values
vslice = (slice(None),) + (None,)*(self.values.ndim - len(indices))
# Compute shifting up front before zipping everything together
shift_norm_distances = [1 - yi for yi in norm_distances]
shift_indices = [i + 1 for i in indices]
# The formula for linear interpolation in 2d takes the form:
# values = self.values[(i0, i1)] * (1 - y0) * (1 - y1) + \
# self.values[(i0, i1 + 1)] * (1 - y0) * y1 + \
# self.values[(i0 + 1, i1)] * y0 * (1 - y1) + \
# self.values[(i0 + 1, i1 + 1)] * y0 * y1
# We pair i with 1 - yi (zipped1) and i + 1 with yi (zipped2)
zipped1 = zip(indices, shift_norm_distances)
zipped2 = zip(shift_indices, norm_distances)
# Take all products of zipped1 and zipped2 and iterate over them
# to get the terms in the above formula. This corresponds to iterating
# over the vertices of a hypercube.
hypercube = itertools.product(*zip(zipped1, zipped2))
value = np.array([0.])
for h in hypercube:
edge_indices, weights = zip(*h)
weight = np.array([1.])
for w in weights:
weight = weight * w
term = np.asarray(self.values[edge_indices]) * weight[vslice]
value = value + term # cannot use += because broadcasting
return value
def _evaluate_nearest(self, indices, norm_distances):
idx_res = [np.where(yi <= .5, i, i + 1)
for i, yi in zip(indices, norm_distances)]
return self.values[tuple(idx_res)]
def _validate_grid_dimensions(self, points, method):
k = self._SPLINE_DEGREE_MAP[method]
for i, point in enumerate(points):
ndim = len(np.atleast_1d(point))
if ndim <= k:
                raise ValueError(f"There are {ndim} points in dimension {i},"
                                 f" but method {method} requires at least "
                                 f"{k+1} points per dimension.")
def _evaluate_spline(self, xi, method):
# ensure xi is 2D list of points to evaluate (`m` is the number of
# points and `n` is the number of interpolation dimensions,
# ``n == len(self.grid)``.)
if xi.ndim == 1:
xi = xi.reshape((1, xi.size))
m, n = xi.shape
# Reorder the axes: n-dimensional process iterates over the
# interpolation axes from the last axis downwards: E.g. for a 4D grid
# the order of axes is 3, 2, 1, 0. Each 1D interpolation works along
# the 0th axis of its argument array (for 1D routine it's its ``y``
# array). Thus permute the interpolation axes of `values` *and keep
# trailing dimensions trailing*.
axes = tuple(range(self.values.ndim))
axx = axes[:n][::-1] + axes[n:]
values = self.values.transpose(axx)
if method == 'pchip':
_eval_func = self._do_pchip
else:
_eval_func = self._do_spline_fit
k = self._SPLINE_DEGREE_MAP[method]
# Non-stationary procedure: difficult to vectorize this part entirely
# into numpy-level operations. Unfortunately this requires explicit
# looping over each point in xi.
# can at least vectorize the first pass across all points in the
# last variable of xi.
last_dim = n - 1
first_values = _eval_func(self.grid[last_dim],
values,
xi[:, last_dim],
k)
# the rest of the dimensions have to be on a per point-in-xi basis
shape = (m, *self.values.shape[n:])
result = np.empty(shape, dtype=self.values.dtype)
for j in range(m):
# Main process: Apply 1D interpolate in each dimension
# sequentially, starting with the last dimension.
# These are then "folded" into the next dimension in-place.
folded_values = first_values[j, ...]
for i in range(last_dim-1, -1, -1):
# Interpolate for each 1D from the last dimensions.
# This collapses each 1D sequence into a scalar.
folded_values = _eval_func(self.grid[i],
folded_values,
xi[j, i],
k)
result[j, ...] = folded_values
return result
@staticmethod
def _do_spline_fit(x, y, pt, k):
local_interp = make_interp_spline(x, y, k=k, axis=0)
values = local_interp(pt)
return values
@staticmethod
def _do_pchip(x, y, pt, k):
local_interp = PchipInterpolator(x, y, axis=0)
values = local_interp(pt)
return values
def _find_indices(self, xi):
return find_indices(self.grid, xi)
def _find_out_of_bounds(self, xi):
# check for out of bounds xi
out_of_bounds = np.zeros((xi.shape[1]), dtype=bool)
# iterate through dimensions
for x, grid in zip(xi, self.grid):
out_of_bounds += x < grid[0]
out_of_bounds += x > grid[-1]
return out_of_bounds
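The per-call `method` override handled in `__call__` above can be exercised with a minimal standalone sketch (usage only, not part of the module source); the grid and function mirror the class docstring example:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 7.0])
data = x[:, None] ** 2 + y[None, :] ** 2   # f(x, y) = x**2 + y**2 on the grid

# Constructed with 'nearest', but any supported method can be
# requested per call without rebuilding the interpolator.
interp = RegularGridInterpolator((x, y), data, method='nearest')
nearest = interp([[1.5, 1.3]])                   # snaps to node (1, 1) -> 2.0
linear = interp([[1.5, 1.3]], method='linear')   # bilinear -> 4.7
```

The nearest/linear values match the outputs shown in the class docstring.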
def interpn(points, values, xi, method="linear", bounds_error=True,
fill_value=np.nan):
"""
Multidimensional interpolation on regular or rectilinear grids.
Strictly speaking, not all regular grids are supported - this function
works on *rectilinear* grids, that is, a rectangular grid with even or
uneven spacing.
Parameters
----------
points : tuple of ndarray of float, with shapes (m1, ), ..., (mn, )
The points defining the regular grid in n dimensions. The points in
        each dimension (i.e. every element of the points tuple) must be
strictly ascending or descending.
values : array_like, shape (m1, ..., mn, ...)
        The data on the regular grid in n dimensions. Complex data is
        accepted.
xi : ndarray of shape (..., ndim)
The coordinates to sample the gridded data at
method : str, optional
The method of interpolation to perform. Supported are "linear",
"nearest", "slinear", "cubic", "quintic", "pchip", and "splinef2d".
"splinef2d" is only supported for 2-dimensional data.
bounds_error : bool, optional
If True, when interpolated values are requested outside of the
domain of the input data, a ValueError is raised.
If False, then `fill_value` is used.
fill_value : number, optional
If provided, the value to use for points outside of the
interpolation domain. If None, values outside
the domain are extrapolated. Extrapolation is not supported by method
"splinef2d".
Returns
-------
values_x : ndarray, shape xi.shape[:-1] + values.shape[ndim:]
Interpolated values at `xi`. See notes for behaviour when
``xi.ndim == 1``.
See Also
--------
NearestNDInterpolator : Nearest neighbor interpolation on unstructured
data in N dimensions
LinearNDInterpolator : Piecewise linear interpolant on unstructured data
in N dimensions
RegularGridInterpolator : interpolation on a regular or rectilinear grid
in arbitrary dimensions (`interpn` wraps this
class).
RectBivariateSpline : Bivariate spline approximation over a rectangular mesh
scipy.ndimage.map_coordinates : interpolation on grids with equal spacing
(suitable for e.g., N-D image resampling)
Notes
-----
.. versionadded:: 0.14
In the case that ``xi.ndim == 1`` a new axis is inserted into
the 0 position of the returned array, values_x, so its shape is
instead ``(1,) + values.shape[ndim:]``.
If the input data is such that input dimensions have incommensurate
units and differ by many orders of magnitude, the interpolant may have
numerical artifacts. Consider rescaling the data before interpolation.
Examples
--------
Evaluate a simple example function on the points of a regular 3-D grid:
>>> import numpy as np
>>> from scipy.interpolate import interpn
>>> def value_func_3d(x, y, z):
... return 2 * x + 3 * y - z
>>> x = np.linspace(0, 4, 5)
>>> y = np.linspace(0, 5, 6)
>>> z = np.linspace(0, 6, 7)
>>> points = (x, y, z)
>>> values = value_func_3d(*np.meshgrid(*points, indexing='ij'))
Evaluate the interpolating function at a point
>>> point = np.array([2.21, 3.12, 1.15])
>>> print(interpn(points, values, point))
[12.63]
"""
# sanity check 'method' kwarg
if method not in ["linear", "nearest", "cubic", "quintic", "pchip",
"splinef2d", "slinear"]:
raise ValueError("interpn only understands the methods 'linear', "
"'nearest', 'slinear', 'cubic', 'quintic', 'pchip', "
f"and 'splinef2d'. You provided {method}.")
if not hasattr(values, 'ndim'):
values = np.asarray(values)
ndim = values.ndim
if ndim > 2 and method == "splinef2d":
raise ValueError("The method splinef2d can only be used for "
"2-dimensional input data")
if not bounds_error and fill_value is None and method == "splinef2d":
raise ValueError("The method splinef2d does not support extrapolation.")
# sanity check consistency of input dimensions
if len(points) > ndim:
raise ValueError("There are %d point arrays, but values has %d "
"dimensions" % (len(points), ndim))
if len(points) != ndim and method == 'splinef2d':
raise ValueError("The method splinef2d can only be used for "
"scalar data with one point per coordinate")
grid, descending_dimensions = _check_points(points)
_check_dimensionality(grid, values)
# sanity check requested xi
xi = _ndim_coords_from_arrays(xi, ndim=len(grid))
if xi.shape[-1] != len(grid):
raise ValueError("The requested sample points xi have dimension "
"%d, but this RegularGridInterpolator has "
"dimension %d" % (xi.shape[-1], len(grid)))
if bounds_error:
for i, p in enumerate(xi.T):
if not np.logical_and(np.all(grid[i][0] <= p),
np.all(p <= grid[i][-1])):
raise ValueError("One of the requested xi is out of bounds "
"in dimension %d" % i)
# perform interpolation
if method in ["linear", "nearest", "slinear", "cubic", "quintic", "pchip"]:
interp = RegularGridInterpolator(points, values, method=method,
bounds_error=bounds_error,
fill_value=fill_value)
return interp(xi)
elif method == "splinef2d":
xi_shape = xi.shape
xi = xi.reshape(-1, xi.shape[-1])
# RectBivariateSpline doesn't support fill_value; we need to wrap here
idx_valid = np.all((grid[0][0] <= xi[:, 0], xi[:, 0] <= grid[0][-1],
grid[1][0] <= xi[:, 1], xi[:, 1] <= grid[1][-1]),
axis=0)
result = np.empty_like(xi[:, 0])
# make a copy of values for RectBivariateSpline
interp = RectBivariateSpline(points[0], points[1], values[:])
result[idx_valid] = interp.ev(xi[idx_valid, 0], xi[idx_valid, 1])
result[np.logical_not(idx_valid)] = fill_value
return result.reshape(xi_shape[:-1])
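A short standalone sketch (not part of the module source) of the `bounds_error=False` path above: in-bounds points are interpolated while out-of-bounds points receive `fill_value`:

```python
import numpy as np
from scipy.interpolate import interpn

x = np.linspace(0, 4, 5)
y = np.linspace(0, 5, 6)
values = x[:, None] + y[None, :]          # f(x, y) = x + y on the grid

pts = np.array([[1.5, 2.5],               # inside the grid
                [10.0, 0.0]])             # outside in the x dimension
out = interpn((x, y), values, pts, bounds_error=False, fill_value=-1.0)
# linear interpolation of a linear function is exact, so out[0] == 4.0;
# the out-of-bounds point takes the fill value, so out[1] == -1.0
```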
scipy-main/scipy/interpolate/_rbf.py
"""rbf - Radial basis functions for interpolation/smoothing scattered N-D data.
Written by John Travers <jtravs@gmail.com>, February 2007
Based closely on Matlab code by Alex Chirokov
Additional, large, improvements by Robert Hetland
Some additional alterations by Travis Oliphant
Interpolation with multi-dimensional target domain by Josua Sassen
Permission to use, modify, and distribute this software is given under the
terms of the SciPy (BSD style) license. See LICENSE.txt that came with
this distribution for specifics.
NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.
Copyright (c) 2006-2007, Robert Hetland <hetland@tamu.edu>
Copyright (c) 2007, John Travers <jtravs@gmail.com>
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of Robert Hetland nor the names of any
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
import numpy as np
from scipy import linalg
from scipy.special import xlogy
from scipy.spatial.distance import cdist, pdist, squareform
__all__ = ['Rbf']
class Rbf:
"""
Rbf(*args, **kwargs)
A class for radial basis function interpolation of functions from
N-D scattered data to an M-D domain.
.. legacy:: class
`Rbf` is legacy code, for new usage please use `RBFInterpolator`
instead.
Parameters
----------
*args : arrays
x, y, z, ..., d, where x, y, z, ... are the coordinates of the nodes
and d is the array of values at the nodes
function : str or callable, optional
The radial basis function, based on the radius, r, given by the norm
(default is Euclidean distance); the default is 'multiquadric'::
'multiquadric': sqrt((r/self.epsilon)**2 + 1)
'inverse': 1.0/sqrt((r/self.epsilon)**2 + 1)
'gaussian': exp(-(r/self.epsilon)**2)
'linear': r
'cubic': r**3
'quintic': r**5
'thin_plate': r**2 * log(r)
If callable, then it must take 2 arguments (self, r). The epsilon
parameter will be available as self.epsilon. Other keyword
arguments passed in will be available as well.
epsilon : float, optional
Adjustable constant for gaussian or multiquadrics functions
- defaults to approximate average distance between nodes (which is
a good start).
smooth : float, optional
Values greater than zero increase the smoothness of the
approximation. 0 is for interpolation (default), the function will
always go through the nodal points in this case.
norm : str, callable, optional
A function that returns the 'distance' between two points, with
inputs as arrays of positions (x, y, z, ...), and an output as an
array of distance. E.g., the default: 'euclidean', such that the result
is a matrix of the distances from each point in ``x1`` to each point in
``x2``. For more options, see documentation of
        `scipy.spatial.distance.cdist`.
mode : str, optional
Mode of the interpolation, can be '1-D' (default) or 'N-D'. When it is
'1-D' the data `d` will be considered as 1-D and flattened
internally. When it is 'N-D' the data `d` is assumed to be an array of
shape (n_samples, m), where m is the dimension of the target domain.
Attributes
----------
N : int
The number of data points (as determined by the input arrays).
di : ndarray
The 1-D array of data values at each of the data coordinates `xi`.
xi : ndarray
The 2-D array of data coordinates.
function : str or callable
The radial basis function. See description under Parameters.
epsilon : float
Parameter used by gaussian or multiquadrics functions. See Parameters.
smooth : float
Smoothing parameter. See description under Parameters.
norm : str or callable
The distance function. See description under Parameters.
mode : str
Mode of the interpolation. See description under Parameters.
nodes : ndarray
A 1-D array of node values for the interpolation.
A : internal property, do not use
See Also
--------
RBFInterpolator
Examples
--------
>>> import numpy as np
>>> from scipy.interpolate import Rbf
>>> rng = np.random.default_rng()
>>> x, y, z, d = rng.random((4, 50))
>>> rbfi = Rbf(x, y, z, d) # radial basis function interpolator instance
>>> xi = yi = zi = np.linspace(0, 1, 20)
>>> di = rbfi(xi, yi, zi) # interpolated values
>>> di.shape
(20,)
"""
# Available radial basis functions that can be selected as strings;
# they all start with _h_ (self._init_function relies on that)
def _h_multiquadric(self, r):
return np.sqrt((1.0/self.epsilon*r)**2 + 1)
def _h_inverse_multiquadric(self, r):
return 1.0/np.sqrt((1.0/self.epsilon*r)**2 + 1)
def _h_gaussian(self, r):
return np.exp(-(1.0/self.epsilon*r)**2)
def _h_linear(self, r):
return r
def _h_cubic(self, r):
return r**3
def _h_quintic(self, r):
return r**5
def _h_thin_plate(self, r):
return xlogy(r**2, r)
# Setup self._function and do smoke test on initial r
def _init_function(self, r):
if isinstance(self.function, str):
self.function = self.function.lower()
_mapped = {'inverse': 'inverse_multiquadric',
'inverse multiquadric': 'inverse_multiquadric',
'thin-plate': 'thin_plate'}
if self.function in _mapped:
self.function = _mapped[self.function]
func_name = "_h_" + self.function
if hasattr(self, func_name):
self._function = getattr(self, func_name)
else:
functionlist = [x[3:] for x in dir(self)
if x.startswith('_h_')]
                raise ValueError("function must be a callable or one of " +
                                 ", ".join(functionlist))
elif callable(self.function):
allow_one = False
if hasattr(self.function, 'func_code') or \
hasattr(self.function, '__code__'):
val = self.function
allow_one = True
elif hasattr(self.function, "__call__"):
val = self.function.__call__.__func__
else:
raise ValueError("Cannot determine number of arguments to "
"function")
argcount = val.__code__.co_argcount
if allow_one and argcount == 1:
self._function = self.function
elif argcount == 2:
self._function = self.function.__get__(self, Rbf)
else:
raise ValueError("Function argument must take 1 or 2 "
"arguments.")
a0 = self._function(r)
if a0.shape != r.shape:
raise ValueError("Callable must take array and return array of "
"the same shape")
return a0
def __init__(self, *args, **kwargs):
# `args` can be a variable number of arrays; we flatten them and store
# them as a single 2-D array `xi` of shape (n_args-1, array_size),
# plus a 1-D array `di` for the values.
# All arrays must have the same number of elements
        self.xi = np.asarray([np.asarray(a, dtype=np.float64).flatten()
                              for a in args[:-1]])
self.N = self.xi.shape[-1]
self.mode = kwargs.pop('mode', '1-D')
if self.mode == '1-D':
self.di = np.asarray(args[-1]).flatten()
self._target_dim = 1
elif self.mode == 'N-D':
self.di = np.asarray(args[-1])
self._target_dim = self.di.shape[-1]
else:
raise ValueError("Mode has to be 1-D or N-D.")
if not all([x.size == self.di.shape[0] for x in self.xi]):
raise ValueError("All arrays must be equal length.")
self.norm = kwargs.pop('norm', 'euclidean')
self.epsilon = kwargs.pop('epsilon', None)
if self.epsilon is None:
            # default epsilon is the "average distance between nodes" based
            # on a bounding hypercube
ximax = np.amax(self.xi, axis=1)
ximin = np.amin(self.xi, axis=1)
edges = ximax - ximin
edges = edges[np.nonzero(edges)]
self.epsilon = np.power(np.prod(edges)/self.N, 1.0/edges.size)
self.smooth = kwargs.pop('smooth', 0.0)
self.function = kwargs.pop('function', 'multiquadric')
# attach anything left in kwargs to self for use by any user-callable
# function or to save on the object returned.
for item, value in kwargs.items():
setattr(self, item, value)
# Compute weights
if self._target_dim > 1: # If we have more than one target dimension,
# we first factorize the matrix
self.nodes = np.zeros((self.N, self._target_dim), dtype=self.di.dtype)
lu, piv = linalg.lu_factor(self.A)
for i in range(self._target_dim):
self.nodes[:, i] = linalg.lu_solve((lu, piv), self.di[:, i])
else:
self.nodes = linalg.solve(self.A, self.di)
@property
def A(self):
# this only exists for backwards compatibility: self.A was available
# and, at least technically, public.
r = squareform(pdist(self.xi.T, self.norm)) # Pairwise norm
return self._init_function(r) - np.eye(self.N)*self.smooth
def _call_norm(self, x1, x2):
return cdist(x1.T, x2.T, self.norm)
def __call__(self, *args):
args = [np.asarray(x) for x in args]
if not all([x.shape == y.shape for x in args for y in args]):
raise ValueError("Array lengths must be equal")
if self._target_dim > 1:
shp = args[0].shape + (self._target_dim,)
else:
shp = args[0].shape
        xa = np.asarray([a.flatten() for a in args], dtype=np.float64)
r = self._call_norm(xa, self.xi)
return np.dot(self._function(r), self.nodes).reshape(shp)
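A standalone sketch (not part of the module source) of two usage patterns documented above: a string-named basis with the default exact interpolation, and a custom two-argument callable that reads `self.epsilon`:

```python
import numpy as np
from scipy.interpolate import Rbf

x = np.linspace(0, 10, 9)
y = np.sin(x)

# smooth=0 (the default) means exact interpolation: the interpolant
# reproduces the data at the nodes.
rbf = Rbf(x, y)                 # default 'multiquadric' basis
at_nodes = rbf(x)

# A custom basis must accept (self, r); epsilon is available on self.
rbf_gauss = Rbf(x, y, function=lambda self, r: np.exp(-(r / self.epsilon) ** 2))
gauss_at_nodes = rbf_gauss(x)
```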
scipy-main/scipy/interpolate/_fitpack_impl.py
"""
fitpack (dierckx in netlib) --- A Python-C wrapper to FITPACK (by P. Dierckx).
FITPACK is a collection of FORTRAN programs for curve and surface
fitting with splines and tensor product splines.
See
https://web.archive.org/web/20010524124604/http://www.cs.kuleuven.ac.be:80/cwis/research/nalag/research/topics/fitpack.html
or
http://www.netlib.org/dierckx/
Copyright 2002 Pearu Peterson all rights reserved,
Pearu Peterson <pearu@cens.ioc.ee>
Permission to use, modify, and distribute this software is given under the
terms of the SciPy (BSD style) license. See LICENSE.txt that came with
this distribution for specifics.
NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.
TODO: Make interfaces to the following fitpack functions:
For univariate splines: cocosp, concon, fourco, insert
For bivariate splines: profil, regrid, parsur, surev
"""
__all__ = ['splrep', 'splprep', 'splev', 'splint', 'sproot', 'spalde',
'bisplrep', 'bisplev', 'insert', 'splder', 'splantider']
import warnings
import numpy as np
from . import _fitpack
from numpy import (atleast_1d, array, ones, zeros, sqrt, ravel, transpose,
empty, iinfo, asarray)
# Try to replace _fitpack interface with
# f2py-generated version
from . import dfitpack
dfitpack_int = dfitpack.types.intvar.dtype
def _int_overflow(x, exception, msg=None):
    """Cast the value to a dfitpack_int and raise the given exception if the
    value cannot fit.
    """
if x > iinfo(dfitpack_int).max:
if msg is None:
msg = f'{x!r} cannot fit into an {dfitpack_int!r}'
raise exception(msg)
return dfitpack_int.type(x)
_iermess = {
0: ["The spline has a residual sum of squares fp such that "
"abs(fp-s)/s<=0.001", None],
-1: ["The spline is an interpolating spline (fp=0)", None],
-2: ["The spline is weighted least-squares polynomial of degree k.\n"
"fp gives the upper bound fp0 for the smoothing factor s", None],
1: ["The required storage space exceeds the available storage space.\n"
"Probable causes: data (x,y) size is too small or smoothing parameter"
"\ns is too small (fp>s).", ValueError],
2: ["A theoretically impossible result when finding a smoothing spline\n"
"with fp = s. Probable cause: s too small. (abs(fp-s)/s>0.001)",
ValueError],
3: ["The maximal number of iterations (20) allowed for finding smoothing\n"
"spline with fp=s has been reached. Probable cause: s too small.\n"
"(abs(fp-s)/s>0.001)", ValueError],
10: ["Error on input data", ValueError],
'unknown': ["An error occurred", TypeError]
}
_iermess2 = {
0: ["The spline has a residual sum of squares fp such that "
"abs(fp-s)/s<=0.001", None],
-1: ["The spline is an interpolating spline (fp=0)", None],
-2: ["The spline is weighted least-squares polynomial of degree kx and ky."
"\nfp gives the upper bound fp0 for the smoothing factor s", None],
-3: ["Warning. The coefficients of the spline have been computed as the\n"
"minimal norm least-squares solution of a rank deficient system.",
None],
1: ["The required storage space exceeds the available storage space.\n"
"Probable causes: nxest or nyest too small or s is too small. (fp>s)",
ValueError],
2: ["A theoretically impossible result when finding a smoothing spline\n"
"with fp = s. Probable causes: s too small or badly chosen eps.\n"
"(abs(fp-s)/s>0.001)", ValueError],
3: ["The maximal number of iterations (20) allowed for finding smoothing\n"
"spline with fp=s has been reached. Probable cause: s too small.\n"
"(abs(fp-s)/s>0.001)", ValueError],
4: ["No more knots can be added because the number of B-spline\n"
"coefficients already exceeds the number of data points m.\n"
"Probable causes: either s or m too small. (fp>s)", ValueError],
5: ["No more knots can be added because the additional knot would\n"
"coincide with an old one. Probable cause: s too small or too large\n"
"a weight to an inaccurate data point. (fp>s)", ValueError],
10: ["Error on input data", ValueError],
11: ["rwrk2 too small, i.e., there is not enough workspace for computing\n"
"the minimal least-squares solution of a rank deficient system of\n"
"linear equations.", ValueError],
'unknown': ["An error occurred", TypeError]
}
_parcur_cache = {'t': array([], float), 'wrk': array([], float),
'iwrk': array([], dfitpack_int), 'u': array([], float),
'ub': 0, 'ue': 1}
def splprep(x, w=None, u=None, ub=None, ue=None, k=3, task=0, s=None, t=None,
full_output=0, nest=None, per=0, quiet=1):
# see the docstring of `_fitpack_py/splprep`
if task <= 0:
_parcur_cache = {'t': array([], float), 'wrk': array([], float),
'iwrk': array([], dfitpack_int), 'u': array([], float),
'ub': 0, 'ue': 1}
x = atleast_1d(x)
idim, m = x.shape
if per:
for i in range(idim):
if x[i][0] != x[i][-1]:
if not quiet:
                        warnings.warn(RuntimeWarning('Setting x[%d][%d]=x[%d][0]' %
                                                     (i, m - 1, i)))
x[i][-1] = x[i][0]
if not 0 < idim < 11:
raise TypeError('0 < idim < 11 must hold')
if w is None:
w = ones(m, float)
else:
w = atleast_1d(w)
ipar = (u is not None)
if ipar:
_parcur_cache['u'] = u
if ub is None:
_parcur_cache['ub'] = u[0]
else:
_parcur_cache['ub'] = ub
if ue is None:
_parcur_cache['ue'] = u[-1]
else:
_parcur_cache['ue'] = ue
else:
_parcur_cache['u'] = zeros(m, float)
if not (1 <= k <= 5):
        raise TypeError('1 <= k = %d <= 5 must hold' % k)
if not (-1 <= task <= 1):
raise TypeError('task must be -1, 0 or 1')
if (not len(w) == m) or (ipar == 1 and (not len(u) == m)):
raise TypeError('Mismatch of input dimensions')
if s is None:
s = m - sqrt(2*m)
if t is None and task == -1:
raise TypeError('Knots must be given for task=-1')
if t is not None:
_parcur_cache['t'] = atleast_1d(t)
n = len(_parcur_cache['t'])
if task == -1 and n < 2*k + 2:
raise TypeError('There must be at least 2*k+2 knots for task=-1')
if m <= k:
raise TypeError('m > k must hold')
if nest is None:
nest = m + 2*k
if (task >= 0 and s == 0) or (nest < 0):
if per:
nest = m + 2*k
else:
nest = m + k + 1
nest = max(nest, 2*k + 3)
u = _parcur_cache['u']
ub = _parcur_cache['ub']
ue = _parcur_cache['ue']
t = _parcur_cache['t']
wrk = _parcur_cache['wrk']
iwrk = _parcur_cache['iwrk']
t, c, o = _fitpack._parcur(ravel(transpose(x)), w, u, ub, ue, k,
task, ipar, s, t, nest, wrk, iwrk, per)
_parcur_cache['u'] = o['u']
_parcur_cache['ub'] = o['ub']
_parcur_cache['ue'] = o['ue']
_parcur_cache['t'] = t
_parcur_cache['wrk'] = o['wrk']
_parcur_cache['iwrk'] = o['iwrk']
ier = o['ier']
fp = o['fp']
n = len(t)
u = o['u']
c.shape = idim, n - k - 1
tcku = [t, list(c), k], u
if ier <= 0 and not quiet:
warnings.warn(RuntimeWarning(_iermess[ier][0] +
"\tk=%d n=%d m=%d fp=%f s=%f" %
(k, len(t), m, fp, s)))
if ier > 0 and not full_output:
if ier in [1, 2, 3]:
warnings.warn(RuntimeWarning(_iermess[ier][0]))
else:
try:
raise _iermess[ier][1](_iermess[ier][0])
except KeyError as e:
raise _iermess['unknown'][1](_iermess['unknown'][0]) from e
if full_output:
try:
return tcku, fp, ier, _iermess[ier][0]
except KeyError:
return tcku, fp, ier, _iermess['unknown'][0]
else:
return tcku
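A standalone usage sketch (not part of the module) of the parametric fit above: `splprep` returns the knots/coefficients plus the parameter values `u`, and `s=0` forces interpolation through the input points:

```python
import numpy as np
from scipy.interpolate import splprep, splev

theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
x, y = np.cos(theta), np.sin(theta)

# s=0: interpolating parametric spline through the (x, y) samples
(tck, u) = splprep([x, y], s=0)
new_x, new_y = splev(u, tck)    # evaluating at u recovers the input points
```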
_curfit_cache = {'t': array([], float), 'wrk': array([], float),
'iwrk': array([], dfitpack_int)}
def splrep(x, y, w=None, xb=None, xe=None, k=3, task=0, s=None, t=None,
full_output=0, per=0, quiet=1):
# see the docstring of `_fitpack_py/splrep`
if task <= 0:
_curfit_cache = {}
x, y = map(atleast_1d, [x, y])
m = len(x)
if w is None:
w = ones(m, float)
if s is None:
s = 0.0
else:
w = atleast_1d(w)
if s is None:
s = m - sqrt(2*m)
if not len(w) == m:
raise TypeError('len(w)=%d is not equal to m=%d' % (len(w), m))
if (m != len(y)) or (m != len(w)):
raise TypeError('Lengths of the first three arguments (x,y,w) must '
'be equal')
if not (1 <= k <= 5):
raise TypeError('Given degree of the spline (k=%d) is not supported. '
'(1<=k<=5)' % k)
if m <= k:
raise TypeError('m > k must hold')
if xb is None:
xb = x[0]
if xe is None:
xe = x[-1]
if not (-1 <= task <= 1):
raise TypeError('task must be -1, 0 or 1')
if t is not None:
task = -1
if task == -1:
if t is None:
raise TypeError('Knots must be given for task=-1')
numknots = len(t)
_curfit_cache['t'] = empty((numknots + 2*k + 2,), float)
_curfit_cache['t'][k+1:-k-1] = t
nest = len(_curfit_cache['t'])
elif task == 0:
if per:
nest = max(m + 2*k, 2*k + 3)
else:
nest = max(m + k + 1, 2*k + 3)
t = empty((nest,), float)
_curfit_cache['t'] = t
if task <= 0:
if per:
_curfit_cache['wrk'] = empty((m*(k + 1) + nest*(8 + 5*k),), float)
else:
_curfit_cache['wrk'] = empty((m*(k + 1) + nest*(7 + 3*k),), float)
_curfit_cache['iwrk'] = empty((nest,), dfitpack_int)
try:
t = _curfit_cache['t']
wrk = _curfit_cache['wrk']
iwrk = _curfit_cache['iwrk']
except KeyError as e:
raise TypeError("must call with task=1 only after"
" call with task=0,-1") from e
if not per:
n, c, fp, ier = dfitpack.curfit(task, x, y, w, t, wrk, iwrk,
xb, xe, k, s)
else:
n, c, fp, ier = dfitpack.percur(task, x, y, w, t, wrk, iwrk, k, s)
tck = (t[:n], c[:n], k)
if ier <= 0 and not quiet:
_mess = (_iermess[ier][0] + "\tk=%d n=%d m=%d fp=%f s=%f" %
(k, len(t), m, fp, s))
warnings.warn(RuntimeWarning(_mess))
if ier > 0 and not full_output:
if ier in [1, 2, 3]:
warnings.warn(RuntimeWarning(_iermess[ier][0]))
else:
try:
raise _iermess[ier][1](_iermess[ier][0])
except KeyError as e:
raise _iermess['unknown'][1](_iermess['unknown'][0]) from e
if full_output:
try:
return tck, fp, ier, _iermess[ier][0]
except KeyError:
return tck, fp, ier, _iermess['unknown'][0]
else:
return tck
def splev(x, tck, der=0, ext=0):
# see the docstring of `_fitpack_py/splev`
t, c, k = tck
try:
c[0][0]
parametric = True
except Exception:
parametric = False
if parametric:
return list(map(lambda c, x=x, t=t, k=k, der=der:
splev(x, [t, c, k], der, ext), c))
else:
if not (0 <= der <= k):
raise ValueError("0<=der=%d<=k=%d must hold" % (der, k))
if ext not in (0, 1, 2, 3):
raise ValueError("ext = %s not in (0, 1, 2, 3) " % ext)
x = asarray(x)
shape = x.shape
x = atleast_1d(x).ravel()
if der == 0:
y, ier = dfitpack.splev(t, c, k, x, ext)
else:
y, ier = dfitpack.splder(t, c, k, x, der, ext)
if ier == 10:
raise ValueError("Invalid input data")
if ier == 1:
raise ValueError("Found x value not in the domain")
if ier:
raise TypeError("An error occurred")
return y.reshape(shape)
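The `der` argument above selects between `dfitpack.splev` and `dfitpack.splder`; a standalone round-trip sketch (not part of the module):

```python
import numpy as np
from scipy.interpolate import splrep, splev

x = np.linspace(0, 2 * np.pi, 50)
tck = splrep(x, np.sin(x), s=0)   # interpolating cubic spline (k=3)

y_at_nodes = splev(x, tck)        # der=0: spline values
dy = splev(x, tck, der=1)         # der=1: first derivative, approx cos(x)
```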
def splint(a, b, tck, full_output=0):
# see the docstring of `_fitpack_py/splint`
t, c, k = tck
try:
c[0][0]
parametric = True
except Exception:
parametric = False
if parametric:
return list(map(lambda c, a=a, b=b, t=t, k=k:
splint(a, b, [t, c, k]), c))
else:
aint, wrk = dfitpack.splint(t, c, k, a, b)
if full_output:
return aint, wrk
else:
return aint
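A standalone sketch (not part of the module): `splint` integrates the fitted spline over `[a, b]`:

```python
import numpy as np
from scipy.interpolate import splrep, splint

x = np.linspace(0, np.pi, 50)
tck = splrep(x, np.sin(x), s=0)

area = splint(0, np.pi, tck)      # exact integral of sin on [0, pi] is 2
```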
def sproot(tck, mest=10):
# see the docstring of `_fitpack_py/sproot`
t, c, k = tck
if k != 3:
raise ValueError("sproot works only for cubic (k=3) splines")
try:
c[0][0]
parametric = True
except Exception:
parametric = False
if parametric:
return list(map(lambda c, t=t, k=k, mest=mest:
sproot([t, c, k], mest), c))
else:
if len(t) < 8:
            raise TypeError("The number of knots %d must be >= 8" % len(t))
z, m, ier = dfitpack.sproot(t, c, mest)
if ier == 10:
raise TypeError("Invalid input data. "
"t1<=..<=t4<t5<..<tn-3<=..<=tn must hold.")
if ier == 0:
return z[:m]
if ier == 1:
warnings.warn(RuntimeWarning("The number of zeros exceeds mest"))
return z[:m]
raise TypeError("Unknown error")
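A short sketch of `sproot` through the public API; note the k=3 restriction enforced above, which `splrep`'s default degree satisfies:

```python
import numpy as np
from scipy.interpolate import splrep, sproot

# sproot requires a cubic (k=3) spline, which is splrep's default.
x = np.linspace(0, 10, 200)
tck = splrep(x, np.sin(x), s=0)
roots = sproot(tck)   # approximates the zeros of sin on [0, 10]
```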
def spalde(x, tck):
# see the docstring of `_fitpack_py/spalde`
t, c, k = tck
try:
c[0][0]
parametric = True
except Exception:
parametric = False
if parametric:
return list(map(lambda c, x=x, t=t, k=k:
spalde(x, [t, c, k]), c))
else:
x = atleast_1d(x)
if len(x) > 1:
return list(map(lambda x, tck=tck: spalde(x, tck), x))
d, ier = dfitpack.spalde(t, c, k+1, x[0])
if ier == 0:
return d
if ier == 10:
raise TypeError("Invalid input data. t(k)<=x<=t(n-k+1) must hold.")
raise TypeError("Unknown error")
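A sketch of `spalde` via the public API. Since x**3 lies in the cubic-spline space, the s=0 interpolant reproduces it, and `spalde` returns all derivatives of order 0..k at the point:

```python
import numpy as np
from scipy.interpolate import splrep, spalde

x = np.linspace(0, 2, 20)
tck = splrep(x, x**3, s=0)
derivs = spalde(1.0, tck)   # [f, f', f'', f'''] at x=1, close to [1, 3, 6, 6]
```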
# def _curfit(x,y,w=None,xb=None,xe=None,k=3,task=0,s=None,t=None,
# full_output=0,nest=None,per=0,quiet=1):
_surfit_cache = {'tx': array([], float), 'ty': array([], float),
'wrk': array([], float), 'iwrk': array([], dfitpack_int)}
def bisplrep(x, y, z, w=None, xb=None, xe=None, yb=None, ye=None,
kx=3, ky=3, task=0, s=None, eps=1e-16, tx=None, ty=None,
full_output=0, nxest=None, nyest=None, quiet=1):
"""
Find a bivariate B-spline representation of a surface.
Given a set of data points (x[i], y[i], z[i]) representing a surface
z=f(x,y), compute a B-spline representation of the surface. Based on
the routine SURFIT from FITPACK.
Parameters
----------
x, y, z : ndarray
Rank-1 arrays of data points.
w : ndarray, optional
Rank-1 array of weights. By default ``w=np.ones(len(x))``.
xb, xe : float, optional
End points of approximation interval in `x`.
By default ``xb = x.min(), xe=x.max()``.
yb, ye : float, optional
End points of approximation interval in `y`.
By default ``yb=y.min(), ye = y.max()``.
kx, ky : int, optional
The degrees of the spline (1 <= kx, ky <= 5).
Third order (kx=ky=3) is recommended.
task : int, optional
If task=0, find knots in x and y and coefficients for a given
smoothing factor, s.
If task=1, find knots and coefficients for another value of the
smoothing factor, s. bisplrep must have been previously called
with task=0 or task=1.
If task=-1, find coefficients for a given set of knots tx, ty.
s : float, optional
A non-negative smoothing factor. If weights correspond
to the inverse of the standard-deviation of the errors in z,
then a good s-value should be found in the range
``(m-sqrt(2*m),m+sqrt(2*m))`` where m=len(x).
eps : float, optional
A threshold for determining the effective rank of an
over-determined linear system of equations (0 < eps < 1).
`eps` is not likely to need changing.
tx, ty : ndarray, optional
Rank-1 arrays of the knots of the spline for task=-1
full_output : int, optional
Non-zero to return optional outputs.
nxest, nyest : int, optional
Over-estimates of the total number of knots. If None then
``nxest = max(kx+sqrt(m/2),2*kx+3)``,
``nyest = max(ky+sqrt(m/2),2*ky+3)``.
quiet : int, optional
Non-zero to suppress printing of messages.
Returns
-------
tck : array_like
A list [tx, ty, c, kx, ky] containing the knots (tx, ty) and
coefficients (c) of the bivariate B-spline representation of the
surface along with the degree of the spline.
fp : ndarray
The weighted sum of squared residuals of the spline approximation.
ier : int
An integer flag about splrep success. Success is indicated if
ier<=0. If ier in [1,2,3] an error occurred but was not raised.
Otherwise an error is raised.
msg : str
A message corresponding to the integer flag, ier.
See Also
--------
splprep, splrep, splint, sproot, splev
UnivariateSpline, BivariateSpline
Notes
-----
See `bisplev` to evaluate the value of the B-spline given its tck
representation.
If the input data is such that input dimensions have incommensurate
units and differ by many orders of magnitude, the interpolant may have
numerical artifacts. Consider rescaling the data before interpolation.
References
----------
.. [1] Dierckx P.:An algorithm for surface fitting with spline functions
Ima J. Numer. Anal. 1 (1981) 267-283.
.. [2] Dierckx P.:An algorithm for surface fitting with spline functions
report tw50, Dept. Computer Science,K.U.Leuven, 1980.
.. [3] Dierckx P.:Curve and surface fitting with splines, Monographs on
Numerical Analysis, Oxford University Press, 1993.
Examples
--------
Examples are given :ref:`in the tutorial <tutorial-interpolate_2d_spline>`.
"""
x, y, z = map(ravel, [x, y, z]) # ensure 1-d arrays.
m = len(x)
if not (m == len(y) == len(z)):
raise TypeError('len(x)==len(y)==len(z) must hold.')
if w is None:
w = ones(m, float)
else:
w = atleast_1d(w)
if not len(w) == m:
raise TypeError('len(w)=%d is not equal to m=%d' % (len(w), m))
if xb is None:
xb = x.min()
if xe is None:
xe = x.max()
if yb is None:
yb = y.min()
if ye is None:
ye = y.max()
if not (-1 <= task <= 1):
raise TypeError('task must be -1, 0 or 1')
if s is None:
s = m - sqrt(2*m)
if tx is None and task == -1:
raise TypeError('Knots_x must be given for task=-1')
if tx is not None:
_surfit_cache['tx'] = atleast_1d(tx)
nx = len(_surfit_cache['tx'])
if ty is None and task == -1:
raise TypeError('Knots_y must be given for task=-1')
if ty is not None:
_surfit_cache['ty'] = atleast_1d(ty)
ny = len(_surfit_cache['ty'])
if task == -1 and nx < 2*kx+2:
raise TypeError('There must be at least 2*kx+2 knots_x for task=-1')
if task == -1 and ny < 2*ky+2:
        raise TypeError('There must be at least 2*ky+2 knots_y for task=-1')
if not ((1 <= kx <= 5) and (1 <= ky <= 5)):
raise TypeError('Given degree of the spline (kx,ky=%d,%d) is not '
'supported. (1<=k<=5)' % (kx, ky))
if m < (kx + 1)*(ky + 1):
raise TypeError('m >= (kx+1)(ky+1) must hold')
if nxest is None:
nxest = int(kx + sqrt(m/2))
if nyest is None:
nyest = int(ky + sqrt(m/2))
nxest, nyest = max(nxest, 2*kx + 3), max(nyest, 2*ky + 3)
if task >= 0 and s == 0:
nxest = int(kx + sqrt(3*m))
nyest = int(ky + sqrt(3*m))
if task == -1:
_surfit_cache['tx'] = atleast_1d(tx)
_surfit_cache['ty'] = atleast_1d(ty)
tx, ty = _surfit_cache['tx'], _surfit_cache['ty']
wrk = _surfit_cache['wrk']
u = nxest - kx - 1
v = nyest - ky - 1
km = max(kx, ky) + 1
ne = max(nxest, nyest)
bx, by = kx*v + ky + 1, ky*u + kx + 1
b1, b2 = bx, bx + v - ky
if bx > by:
b1, b2 = by, by + u - kx
msg = "Too many data points to interpolate"
lwrk1 = _int_overflow(u*v*(2 + b1 + b2) +
2*(u + v + km*(m + ne) + ne - kx - ky) + b2 + 1,
OverflowError,
msg=msg)
lwrk2 = _int_overflow(u*v*(b2 + 1) + b2, OverflowError, msg=msg)
tx, ty, c, o = _fitpack._surfit(x, y, z, w, xb, xe, yb, ye, kx, ky,
task, s, eps, tx, ty, nxest, nyest,
wrk, lwrk1, lwrk2)
    _surfit_cache['tx'] = tx
    _surfit_cache['ty'] = ty
    _surfit_cache['wrk'] = o['wrk']
ier, fp = o['ier'], o['fp']
tck = [tx, ty, c, kx, ky]
ierm = min(11, max(-3, ier))
if ierm <= 0 and not quiet:
_mess = (_iermess2[ierm][0] +
"\tkx,ky=%d,%d nx,ny=%d,%d m=%d fp=%f s=%f" %
(kx, ky, len(tx), len(ty), m, fp, s))
warnings.warn(RuntimeWarning(_mess))
if ierm > 0 and not full_output:
if ier in [1, 2, 3, 4, 5]:
_mess = ("\n\tkx,ky=%d,%d nx,ny=%d,%d m=%d fp=%f s=%f" %
(kx, ky, len(tx), len(ty), m, fp, s))
warnings.warn(RuntimeWarning(_iermess2[ierm][0] + _mess))
else:
try:
raise _iermess2[ierm][1](_iermess2[ierm][0])
except KeyError as e:
raise _iermess2['unknown'][1](_iermess2['unknown'][0]) from e
if full_output:
try:
return tck, fp, ier, _iermess2[ierm][0]
except KeyError:
return tck, fp, ier, _iermess2['unknown'][0]
else:
return tck
def bisplev(x, y, tck, dx=0, dy=0):
"""
Evaluate a bivariate B-spline and its derivatives.
Return a rank-2 array of spline function values (or spline derivative
values) at points given by the cross-product of the rank-1 arrays `x` and
`y`. In special cases, return an array or just a float if either `x` or
`y` or both are floats. Based on BISPEV and PARDER from FITPACK.
Parameters
----------
x, y : ndarray
Rank-1 arrays specifying the domain over which to evaluate the
spline or its derivative.
tck : tuple
A sequence of length 5 returned by `bisplrep` containing the knot
locations, the coefficients, and the degree of the spline:
[tx, ty, c, kx, ky].
dx, dy : int, optional
The orders of the partial derivatives in `x` and `y` respectively.
Returns
-------
vals : ndarray
The B-spline or its derivative evaluated over the set formed by
the cross-product of `x` and `y`.
See Also
--------
splprep, splrep, splint, sproot, splev
UnivariateSpline, BivariateSpline
Notes
-----
See `bisplrep` to generate the `tck` representation.
References
----------
.. [1] Dierckx P. : An algorithm for surface fitting
with spline functions
Ima J. Numer. Anal. 1 (1981) 267-283.
.. [2] Dierckx P. : An algorithm for surface fitting
with spline functions
report tw50, Dept. Computer Science,K.U.Leuven, 1980.
.. [3] Dierckx P. : Curve and surface fitting with splines,
Monographs on Numerical Analysis, Oxford University Press, 1993.
Examples
--------
Examples are given :ref:`in the tutorial <tutorial-interpolate_2d_spline>`.
"""
tx, ty, c, kx, ky = tck
if not (0 <= dx < kx):
raise ValueError("0 <= dx = %d < kx = %d must hold" % (dx, kx))
if not (0 <= dy < ky):
raise ValueError("0 <= dy = %d < ky = %d must hold" % (dy, ky))
x, y = map(atleast_1d, [x, y])
if (len(x.shape) != 1) or (len(y.shape) != 1):
raise ValueError("First two entries should be rank-1 arrays.")
msg = "Too many data points to interpolate."
_int_overflow(x.size * y.size, MemoryError, msg=msg)
if dx != 0 or dy != 0:
_int_overflow((tx.size - kx - 1)*(ty.size - ky - 1),
MemoryError, msg=msg)
z, ier = dfitpack.parder(tx, ty, c, kx, ky, dx, dy, x, y)
else:
z, ier = dfitpack.bispev(tx, ty, c, kx, ky, x, y)
if ier == 10:
raise ValueError("Invalid input data")
if ier:
raise TypeError("An error occurred")
z.shape = len(x), len(y)
if len(z) > 1:
return z
if len(z[0]) > 1:
return z[0]
return z[0][0]
def dblint(xa, xb, ya, yb, tck):
"""Evaluate the integral of a spline over area [xa,xb] x [ya,yb].
Parameters
----------
xa, xb : float
The end-points of the x integration interval.
ya, yb : float
The end-points of the y integration interval.
tck : list [tx, ty, c, kx, ky]
A sequence of length 5 returned by bisplrep containing the knot
locations tx, ty, the coefficients c, and the degrees kx, ky
of the spline.
Returns
-------
integ : float
The value of the resulting integral.
"""
tx, ty, c, kx, ky = tck
return dfitpack.dblint(tx, ty, c, kx, ky, xa, xb, ya, yb)
def insert(x, tck, m=1, per=0):
# see the docstring of `_fitpack_py/insert`
t, c, k = tck
try:
c[0][0]
parametric = True
except Exception:
parametric = False
if parametric:
cc = []
for c_vals in c:
tt, cc_val, kk = insert(x, [t, c_vals, k], m)
cc.append(cc_val)
return (tt, cc, kk)
else:
tt, cc, ier = _fitpack._insert(per, t, c, k, x, m)
if ier == 10:
raise ValueError("Invalid input data")
if ier:
raise TypeError("An error occurred")
return (tt, cc, k)
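A sketch of knot insertion through the public API; inserting a knot changes the B-spline representation but leaves the curve itself unchanged (up to roundoff):

```python
import numpy as np
from scipy.interpolate import splrep, splev, insert

x = np.linspace(0, 10, 30)
tck = splrep(x, np.cos(x), s=0)
tck2 = insert(4.5, tck)   # one extra knot at x = 4.5

xs = np.linspace(0, 10, 7)
max_diff = np.max(np.abs(splev(xs, tck) - splev(xs, tck2)))
```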
def splder(tck, n=1):
# see the docstring of `_fitpack_py/splder`
if n < 0:
return splantider(tck, -n)
t, c, k = tck
if n > k:
raise ValueError(("Order of derivative (n = {!r}) must be <= "
"order of spline (k = {!r})").format(n, tck[2]))
# Extra axes for the trailing dims of the `c` array:
sh = (slice(None),) + ((None,)*len(c.shape[1:]))
with np.errstate(invalid='raise', divide='raise'):
try:
for j in range(n):
# See e.g. Schumaker, Spline Functions: Basic Theory, Chapter 5
# Compute the denominator in the differentiation formula.
            # (and append trailing dims, if necessary)
dt = t[k+1:-1] - t[1:-k-1]
dt = dt[sh]
# Compute the new coefficients
c = (c[1:-1-k] - c[:-2-k]) * k / dt
# Pad coefficient array to same size as knots (FITPACK
# convention)
c = np.r_[c, np.zeros((k,) + c.shape[1:])]
# Adjust knots
t = t[1:-1]
k -= 1
except FloatingPointError as e:
raise ValueError(("The spline has internal repeated knots "
"and is not differentiable %d times") % n) from e
return t, c, k
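A sketch of `splder` via the public API: it returns a new tck of one degree lower that represents the derivative spline:

```python
import numpy as np
from scipy.interpolate import splrep, splev, splder

x = np.linspace(0, 2 * np.pi, 50)
tck = splrep(x, np.sin(x), s=0)
dtck = splder(tck)       # tck of the first-derivative spline (k: 3 -> 2)
d1 = splev(1.0, dtck)    # approximates cos(1.0)
```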
def splantider(tck, n=1):
# see the docstring of `_fitpack_py/splantider`
if n < 0:
return splder(tck, -n)
t, c, k = tck
# Extra axes for the trailing dims of the `c` array:
sh = (slice(None),) + (None,)*len(c.shape[1:])
for j in range(n):
# This is the inverse set of operations to splder.
# Compute the multiplier in the antiderivative formula.
dt = t[k+1:] - t[:-k-1]
dt = dt[sh]
# Compute the new coefficients
c = np.cumsum(c[:-k-1] * dt, axis=0) / (k + 1)
c = np.r_[np.zeros((1,) + c.shape[1:]),
c,
[c[-1]] * (k+2)]
# New knots
t = np.r_[t[0], t, t[-1]]
k += 1
return t, c, k
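A sketch of `splantider` via the public API; evaluating the antiderivative spline at the interval endpoints recovers the definite integral:

```python
import numpy as np
from scipy.interpolate import splrep, splev, splantider

x = np.linspace(0, np.pi, 50)
tck = splrep(x, np.sin(x), s=0)
atck = splantider(tck)   # antiderivative spline (k: 3 -> 4)
# F(pi) - F(0) recovers the integral of sin on [0, pi], which is 2.
definite = splev(np.pi, atck) - splev(0, atck)
```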
| 28,486 | 34.475716 | 124 | py | scipy | scipy-main/scipy/interpolate/_interpolate.py |
__all__ = ['interp1d', 'interp2d', 'lagrange', 'PPoly', 'BPoly', 'NdPPoly']
from math import prod
import warnings
import numpy as np
from numpy import (array, transpose, searchsorted, atleast_1d, atleast_2d,
ravel, poly1d, asarray, intp)
import scipy.special as spec
from scipy.special import comb
from . import _fitpack_py
from . import dfitpack
from ._polyint import _Interpolator1D
from . import _ppoly
from .interpnd import _ndim_coords_from_arrays
from ._bsplines import make_interp_spline, BSpline
def lagrange(x, w):
r"""
Return a Lagrange interpolating polynomial.
Given two 1-D arrays `x` and `w,` returns the Lagrange interpolating
polynomial through the points ``(x, w)``.
Warning: This implementation is numerically unstable. Do not expect to
be able to use more than about 20 points even if they are chosen optimally.
Parameters
----------
x : array_like
`x` represents the x-coordinates of a set of datapoints.
w : array_like
`w` represents the y-coordinates of a set of datapoints, i.e., f(`x`).
Returns
-------
lagrange : `numpy.poly1d` instance
The Lagrange interpolating polynomial.
Examples
--------
Interpolate :math:`f(x) = x^3` by 3 points.
>>> import numpy as np
>>> from scipy.interpolate import lagrange
>>> x = np.array([0, 1, 2])
>>> y = x**3
>>> poly = lagrange(x, y)
Since there are only 3 points, Lagrange polynomial has degree 2. Explicitly,
it is given by
.. math::
\begin{aligned}
L(x) &= 1\times \frac{x (x - 2)}{-1} + 8\times \frac{x (x-1)}{2} \\
&= x (-2 + 3x)
\end{aligned}
>>> from numpy.polynomial.polynomial import Polynomial
>>> Polynomial(poly.coef[::-1]).coef
array([ 0., -2., 3.])
>>> import matplotlib.pyplot as plt
>>> x_new = np.arange(0, 2.1, 0.1)
>>> plt.scatter(x, y, label='data')
>>> plt.plot(x_new, Polynomial(poly.coef[::-1])(x_new), label='Polynomial')
>>> plt.plot(x_new, 3*x_new**2 - 2*x_new + 0*x_new,
... label=r"$3 x^2 - 2 x$", linestyle='-.')
>>> plt.legend()
>>> plt.show()
"""
M = len(x)
p = poly1d(0.0)
for j in range(M):
pt = poly1d(w[j])
for k in range(M):
if k == j:
continue
fac = x[j]-x[k]
pt *= poly1d([1.0, -x[k]])/fac
p += pt
return p
# !! Need to find argument for keeping initialize. If it isn't
# !! found, get rid of it!
dep_mesg = """\
`interp2d` is deprecated in SciPy 1.10 and will be removed in SciPy 1.13.0.
For legacy code, nearly bug-for-bug compatible replacements are
`RectBivariateSpline` on regular grids, and `bisplrep`/`bisplev` for
scattered 2D data.
In new code, for regular grids use `RegularGridInterpolator` instead.
For scattered data, prefer `LinearNDInterpolator` or
`CloughTocher2DInterpolator`.
For more details see
`https://scipy.github.io/devdocs/notebooks/interp_transition_guide.html`
"""
class interp2d:
"""
interp2d(x, y, z, kind='linear', copy=True, bounds_error=False,
fill_value=None)
.. deprecated:: 1.10.0
`interp2d` is deprecated in SciPy 1.10 and will be removed in SciPy
1.13.0.
For legacy code, nearly bug-for-bug compatible replacements are
`RectBivariateSpline` on regular grids, and `bisplrep`/`bisplev` for
scattered 2D data.
In new code, for regular grids use `RegularGridInterpolator` instead.
For scattered data, prefer `LinearNDInterpolator` or
`CloughTocher2DInterpolator`.
For more details see
`https://scipy.github.io/devdocs/notebooks/interp_transition_guide.html
<https://scipy.github.io/devdocs/notebooks/interp_transition_guide.html>`_
Interpolate over a 2-D grid.
`x`, `y` and `z` are arrays of values used to approximate some function
f: ``z = f(x, y)`` which returns a scalar value `z`. This class returns a
function whose call method uses spline interpolation to find the value
of new points.
If `x` and `y` represent a regular grid, consider using
`RectBivariateSpline`.
If `z` is a vector value, consider using `interpn`.
Note that calling `interp2d` with NaNs present in input values, or with
    decreasing values in `x` and `y` results in undefined behaviour.
Methods
-------
__call__
Parameters
----------
x, y : array_like
Arrays defining the data point coordinates.
The data point coordinates need to be sorted by increasing order.
If the points lie on a regular grid, `x` can specify the column
coordinates and `y` the row coordinates, for example::
>>> x = [0,1,2]; y = [0,3]; z = [[1,2,3], [4,5,6]]
Otherwise, `x` and `y` must specify the full coordinates for each
point, for example::
>>> x = [0,1,2,0,1,2]; y = [0,0,0,3,3,3]; z = [1,4,2,5,3,6]
If `x` and `y` are multidimensional, they are flattened before use.
z : array_like
The values of the function to interpolate at the data points. If
`z` is a multidimensional array, it is flattened before use assuming
Fortran-ordering (order='F'). The length of a flattened `z` array
is either len(`x`)*len(`y`) if `x` and `y` specify the column and
row coordinates or ``len(z) == len(x) == len(y)`` if `x` and `y`
specify coordinates for each point.
kind : {'linear', 'cubic', 'quintic'}, optional
The kind of spline interpolation to use. Default is 'linear'.
copy : bool, optional
If True, the class makes internal copies of x, y and z.
If False, references may be used. The default is to copy.
bounds_error : bool, optional
If True, when interpolated values are requested outside of the
domain of the input data (x,y), a ValueError is raised.
If False, then `fill_value` is used.
fill_value : number, optional
If provided, the value to use for points outside of the
interpolation domain. If omitted (None), values outside
the domain are extrapolated via nearest-neighbor extrapolation.
See Also
--------
RectBivariateSpline :
Much faster 2-D interpolation if your input data is on a grid
bisplrep, bisplev :
Spline interpolation based on FITPACK
BivariateSpline : a more recent wrapper of the FITPACK routines
interp1d : 1-D version of this function
RegularGridInterpolator : interpolation on a regular or rectilinear grid
in arbitrary dimensions.
interpn : Multidimensional interpolation on regular grids (wraps
`RegularGridInterpolator` and `RectBivariateSpline`).
Notes
-----
The minimum number of data points required along the interpolation
axis is ``(k+1)**2``, with k=1 for linear, k=3 for cubic and k=5 for
quintic interpolation.
The interpolator is constructed by `bisplrep`, with a smoothing factor
of 0. If more control over smoothing is needed, `bisplrep` should be
used directly.
The coordinates of the data points to interpolate `xnew` and `ynew`
have to be sorted by ascending order.
`interp2d` is legacy and is not
recommended for use in new code. New code should use
`RegularGridInterpolator` instead.
Examples
--------
Construct a 2-D grid and interpolate on it:
>>> import numpy as np
>>> from scipy import interpolate
>>> x = np.arange(-5.01, 5.01, 0.25)
>>> y = np.arange(-5.01, 5.01, 0.25)
>>> xx, yy = np.meshgrid(x, y)
>>> z = np.sin(xx**2+yy**2)
>>> f = interpolate.interp2d(x, y, z, kind='cubic')
Now use the obtained interpolation function and plot the result:
>>> import matplotlib.pyplot as plt
>>> xnew = np.arange(-5.01, 5.01, 1e-2)
>>> ynew = np.arange(-5.01, 5.01, 1e-2)
>>> znew = f(xnew, ynew)
>>> plt.plot(x, z[0, :], 'ro-', xnew, znew[0, :], 'b-')
>>> plt.show()
"""
def __init__(self, x, y, z, kind='linear', copy=True, bounds_error=False,
fill_value=None):
warnings.warn(dep_mesg, DeprecationWarning, stacklevel=2)
x = ravel(x)
y = ravel(y)
z = asarray(z)
rectangular_grid = (z.size == len(x) * len(y))
if rectangular_grid:
if z.ndim == 2:
if z.shape != (len(y), len(x)):
raise ValueError("When on a regular grid with x.size = m "
"and y.size = n, if z.ndim == 2, then z "
"must have shape (n, m)")
if not np.all(x[1:] >= x[:-1]):
j = np.argsort(x)
x = x[j]
z = z[:, j]
if not np.all(y[1:] >= y[:-1]):
j = np.argsort(y)
y = y[j]
z = z[j, :]
z = ravel(z.T)
else:
z = ravel(z)
if len(x) != len(y):
raise ValueError(
"x and y must have equal lengths for non rectangular grid")
if len(z) != len(x):
raise ValueError(
"Invalid length for input z for non rectangular grid")
interpolation_types = {'linear': 1, 'cubic': 3, 'quintic': 5}
try:
kx = ky = interpolation_types[kind]
except KeyError as e:
raise ValueError(
f"Unsupported interpolation type {repr(kind)}, must be "
f"either of {', '.join(map(repr, interpolation_types))}."
) from e
if not rectangular_grid:
# TODO: surfit is really not meant for interpolation!
self.tck = _fitpack_py.bisplrep(x, y, z, kx=kx, ky=ky, s=0.0)
else:
nx, tx, ny, ty, c, fp, ier = dfitpack.regrid_smth(
x, y, z, None, None, None, None,
kx=kx, ky=ky, s=0.0)
self.tck = (tx[:nx], ty[:ny], c[:(nx - kx - 1) * (ny - ky - 1)],
kx, ky)
self.bounds_error = bounds_error
self.fill_value = fill_value
self.x, self.y, self.z = (array(a, copy=copy) for a in (x, y, z))
self.x_min, self.x_max = np.amin(x), np.amax(x)
self.y_min, self.y_max = np.amin(y), np.amax(y)
def __call__(self, x, y, dx=0, dy=0, assume_sorted=False):
"""Interpolate the function.
Parameters
----------
x : 1-D array
x-coordinates of the mesh on which to interpolate.
y : 1-D array
y-coordinates of the mesh on which to interpolate.
dx : int >= 0, < kx
Order of partial derivatives in x.
dy : int >= 0, < ky
Order of partial derivatives in y.
assume_sorted : bool, optional
If False, values of `x` and `y` can be in any order and they are
sorted first.
If True, `x` and `y` have to be arrays of monotonically
increasing values.
Returns
-------
z : 2-D array with shape (len(y), len(x))
The interpolated values.
"""
warnings.warn(dep_mesg, DeprecationWarning, stacklevel=2)
x = atleast_1d(x)
y = atleast_1d(y)
if x.ndim != 1 or y.ndim != 1:
raise ValueError("x and y should both be 1-D arrays")
if not assume_sorted:
x = np.sort(x, kind="mergesort")
y = np.sort(y, kind="mergesort")
if self.bounds_error or self.fill_value is not None:
out_of_bounds_x = (x < self.x_min) | (x > self.x_max)
out_of_bounds_y = (y < self.y_min) | (y > self.y_max)
any_out_of_bounds_x = np.any(out_of_bounds_x)
any_out_of_bounds_y = np.any(out_of_bounds_y)
if self.bounds_error and (any_out_of_bounds_x or any_out_of_bounds_y):
raise ValueError("Values out of range; x must be in %r, y in %r"
% ((self.x_min, self.x_max),
(self.y_min, self.y_max)))
z = _fitpack_py.bisplev(x, y, self.tck, dx, dy)
z = atleast_2d(z)
z = transpose(z)
if self.fill_value is not None:
if any_out_of_bounds_x:
z[:, out_of_bounds_x] = self.fill_value
if any_out_of_bounds_y:
z[out_of_bounds_y, :] = self.fill_value
if len(z) == 1:
z = z[0]
return array(z)
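For the regular-grid case, the deprecation message above points at `RegularGridInterpolator`. A minimal migration sketch; note that it expects `indexing='ij'`-style data, unlike the (ny, nx) layout used by `interp2d`:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

x = np.arange(-5.0, 5.0, 0.25)
y = np.arange(-5.0, 5.0, 0.25)
xx, yy = np.meshgrid(x, y, indexing='ij')  # 'ij', not interp2d's (ny, nx)
z = np.sin(xx**2 + yy**2)

f = RegularGridInterpolator((x, y), z)     # default: linear interpolation
val = f([[0.5, 0.5]])[0]                   # evaluate at one (x, y) point
```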
def _check_broadcast_up_to(arr_from, shape_to, name):
"""Helper to check that arr_from broadcasts up to shape_to"""
shape_from = arr_from.shape
if len(shape_to) >= len(shape_from):
for t, f in zip(shape_to[::-1], shape_from[::-1]):
if f != 1 and f != t:
break
else: # all checks pass, do the upcasting that we need later
if arr_from.size != 1 and arr_from.shape != shape_to:
arr_from = np.ones(shape_to, arr_from.dtype) * arr_from
return arr_from.ravel()
# at least one check failed
raise ValueError('%s argument must be able to broadcast up '
'to shape %s but had shape %s'
% (name, shape_to, shape_from))
def _do_extrapolate(fill_value):
"""Helper to check if fill_value == "extrapolate" without warnings"""
return (isinstance(fill_value, str) and
fill_value == 'extrapolate')
class interp1d(_Interpolator1D):
"""
Interpolate a 1-D function.
.. legacy:: class
For a guide to the intended replacements for `interp1d` see
:ref:`tutorial-interpolate_1Dsection`.
`x` and `y` are arrays of values used to approximate some function f:
``y = f(x)``. This class returns a function whose call method uses
interpolation to find the value of new points.
Parameters
----------
x : (npoints, ) array_like
A 1-D array of real values.
y : (..., npoints, ...) array_like
A N-D array of real values. The length of `y` along the interpolation
axis must be equal to the length of `x`. Use the ``axis`` parameter
to select correct axis. Unlike other interpolators, the default
interpolation axis is the last axis of `y`.
kind : str or int, optional
Specifies the kind of interpolation as a string or as an integer
specifying the order of the spline interpolator to use.
The string has to be one of 'linear', 'nearest', 'nearest-up', 'zero',
'slinear', 'quadratic', 'cubic', 'previous', or 'next'. 'zero',
'slinear', 'quadratic' and 'cubic' refer to a spline interpolation of
zeroth, first, second or third order; 'previous' and 'next' simply
return the previous or next value of the point; 'nearest-up' and
'nearest' differ when interpolating half-integers (e.g. 0.5, 1.5)
in that 'nearest-up' rounds up and 'nearest' rounds down. Default
is 'linear'.
axis : int, optional
Axis in the ``y`` array corresponding to the x-coordinate values. Unlike
other interpolators, defaults to ``axis=-1``.
copy : bool, optional
If True, the class makes internal copies of x and y.
If False, references to `x` and `y` are used. The default is to copy.
bounds_error : bool, optional
If True, a ValueError is raised any time interpolation is attempted on
a value outside of the range of x (where extrapolation is
necessary). If False, out of bounds values are assigned `fill_value`.
By default, an error is raised unless ``fill_value="extrapolate"``.
fill_value : array-like or (array-like, array_like) or "extrapolate", optional
- if a ndarray (or float), this value will be used to fill in for
requested points outside of the data range. If not provided, then
the default is NaN. The array-like must broadcast properly to the
dimensions of the non-interpolation axes.
- If a two-element tuple, then the first element is used as a
fill value for ``x_new < x[0]`` and the second element is used for
``x_new > x[-1]``. Anything that is not a 2-element tuple (e.g.,
list or ndarray, regardless of shape) is taken to be a single
array-like argument meant to be used for both bounds as
``below, above = fill_value, fill_value``. Using a two-element tuple
or ndarray requires ``bounds_error=False``.
.. versionadded:: 0.17.0
- If "extrapolate", then points outside the data range will be
extrapolated.
.. versionadded:: 0.17.0
assume_sorted : bool, optional
If False, values of `x` can be in any order and they are sorted first.
If True, `x` has to be an array of monotonically increasing values.
Attributes
----------
fill_value
Methods
-------
__call__
See Also
--------
splrep, splev
Spline interpolation/smoothing based on FITPACK.
UnivariateSpline : An object-oriented wrapper of the FITPACK routines.
interp2d : 2-D interpolation
Notes
-----
Calling `interp1d` with NaNs present in input values results in
undefined behaviour.
Input values `x` and `y` must be convertible to `float` values like
`int` or `float`.
If the values in `x` are not unique, the resulting behavior is
undefined and specific to the choice of `kind`, i.e., changing
`kind` will change the behavior for duplicates.
Examples
--------
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy import interpolate
>>> x = np.arange(0, 10)
>>> y = np.exp(-x/3.0)
>>> f = interpolate.interp1d(x, y)
>>> xnew = np.arange(0, 9, 0.1)
>>> ynew = f(xnew) # use interpolation function returned by `interp1d`
>>> plt.plot(x, y, 'o', xnew, ynew, '-')
>>> plt.show()
"""
def __init__(self, x, y, kind='linear', axis=-1,
copy=True, bounds_error=None, fill_value=np.nan,
assume_sorted=False):
""" Initialize a 1-D linear interpolation class."""
_Interpolator1D.__init__(self, x, y, axis=axis)
self.bounds_error = bounds_error # used by fill_value setter
self.copy = copy
if kind in ['zero', 'slinear', 'quadratic', 'cubic']:
order = {'zero': 0, 'slinear': 1,
'quadratic': 2, 'cubic': 3}[kind]
kind = 'spline'
elif isinstance(kind, int):
order = kind
kind = 'spline'
elif kind not in ('linear', 'nearest', 'nearest-up', 'previous',
'next'):
raise NotImplementedError("%s is unsupported: Use fitpack "
"routines for other types." % kind)
x = array(x, copy=self.copy)
y = array(y, copy=self.copy)
if not assume_sorted:
ind = np.argsort(x, kind="mergesort")
x = x[ind]
y = np.take(y, ind, axis=axis)
if x.ndim != 1:
raise ValueError("the x array must have exactly one dimension.")
if y.ndim == 0:
raise ValueError("the y array must have at least one dimension.")
# Force-cast y to a floating-point type, if it's not yet one
if not issubclass(y.dtype.type, np.inexact):
y = y.astype(np.float_)
# Backward compatibility
self.axis = axis % y.ndim
# Interpolation goes internally along the first axis
self.y = y
self._y = self._reshape_yi(self.y)
self.x = x
del y, x # clean up namespace to prevent misuse; use attributes
self._kind = kind
# Adjust to interpolation kind; store reference to *unbound*
# interpolation methods, in order to avoid circular references to self
# stored in the bound instance methods, and therefore delayed garbage
# collection. See: https://docs.python.org/reference/datamodel.html
if kind in ('linear', 'nearest', 'nearest-up', 'previous', 'next'):
# Make a "view" of the y array that is rotated to the interpolation
# axis.
minval = 1
if kind == 'nearest':
# Do division before addition to prevent possible integer
# overflow
self._side = 'left'
self.x_bds = self.x / 2.0
self.x_bds = self.x_bds[1:] + self.x_bds[:-1]
self._call = self.__class__._call_nearest
elif kind == 'nearest-up':
# Do division before addition to prevent possible integer
# overflow
self._side = 'right'
self.x_bds = self.x / 2.0
self.x_bds = self.x_bds[1:] + self.x_bds[:-1]
self._call = self.__class__._call_nearest
elif kind == 'previous':
# Side for np.searchsorted and index for clipping
self._side = 'left'
self._ind = 0
# Move x by one floating point value to the left
self._x_shift = np.nextafter(self.x, -np.inf)
self._call = self.__class__._call_previousnext
if _do_extrapolate(fill_value):
self._check_and_update_bounds_error_for_extrapolation()
# assume y is sorted by x ascending order here.
fill_value = (np.nan, np.take(self.y, -1, axis))
elif kind == 'next':
self._side = 'right'
self._ind = 1
# Move x by one floating point value to the right
self._x_shift = np.nextafter(self.x, np.inf)
self._call = self.__class__._call_previousnext
if _do_extrapolate(fill_value):
self._check_and_update_bounds_error_for_extrapolation()
# assume y is sorted by x ascending order here.
fill_value = (np.take(self.y, 0, axis), np.nan)
else:
# Check if we can delegate to numpy.interp (2x-10x faster).
np_types = (np.float_, np.int_)
cond = self.x.dtype in np_types and self.y.dtype in np_types
cond = cond and self.y.ndim == 1
cond = cond and not _do_extrapolate(fill_value)
if cond:
self._call = self.__class__._call_linear_np
else:
self._call = self.__class__._call_linear
else:
minval = order + 1
rewrite_nan = False
xx, yy = self.x, self._y
if order > 1:
# Quadratic or cubic spline. If input contains even a single
# nan, then the output is all nans. We cannot just feed data
# with nans to make_interp_spline because it calls LAPACK.
# So, we make up a bogus x and y with no nans and use it
# to get the correct shape of the output, which we then fill
# with nans.
# For slinear or zero order spline, we just pass nans through.
mask = np.isnan(self.x)
if mask.any():
sx = self.x[~mask]
if sx.size == 0:
raise ValueError("`x` array is all-nan")
xx = np.linspace(np.nanmin(self.x),
np.nanmax(self.x),
len(self.x))
rewrite_nan = True
if np.isnan(self._y).any():
yy = np.ones_like(self._y)
rewrite_nan = True
self._spline = make_interp_spline(xx, yy, k=order,
check_finite=False)
if rewrite_nan:
self._call = self.__class__._call_nan_spline
else:
self._call = self.__class__._call_spline
if len(self.x) < minval:
raise ValueError("x and y arrays must have at "
"least %d entries" % minval)
        self.fill_value = fill_value  # calls the setter, can modify bounds_error
@property
def fill_value(self):
"""The fill value."""
# backwards compat: mimic a public attribute
return self._fill_value_orig
@fill_value.setter
def fill_value(self, fill_value):
# extrapolation only works for nearest neighbor and linear methods
if _do_extrapolate(fill_value):
self._check_and_update_bounds_error_for_extrapolation()
self._extrapolate = True
else:
broadcast_shape = (self.y.shape[:self.axis] +
self.y.shape[self.axis + 1:])
if len(broadcast_shape) == 0:
broadcast_shape = (1,)
# it's either a pair (_below_range, _above_range) or a single value
# for both above and below range
if isinstance(fill_value, tuple) and len(fill_value) == 2:
below_above = [np.asarray(fill_value[0]),
np.asarray(fill_value[1])]
names = ('fill_value (below)', 'fill_value (above)')
for ii in range(2):
below_above[ii] = _check_broadcast_up_to(
below_above[ii], broadcast_shape, names[ii])
else:
fill_value = np.asarray(fill_value)
below_above = [_check_broadcast_up_to(
fill_value, broadcast_shape, 'fill_value')] * 2
self._fill_value_below, self._fill_value_above = below_above
self._extrapolate = False
if self.bounds_error is None:
self.bounds_error = True
# backwards compat: fill_value was a public attr; make it writeable
self._fill_value_orig = fill_value
def _check_and_update_bounds_error_for_extrapolation(self):
if self.bounds_error:
raise ValueError("Cannot extrapolate and raise "
"at the same time.")
self.bounds_error = False
def _call_linear_np(self, x_new):
# Note that out-of-bounds values are taken care of in self._evaluate
return np.interp(x_new, self.x, self.y)
def _call_linear(self, x_new):
# 2. Find where in the original data, the values to interpolate
# would be inserted.
# Note: If x_new[n] == x[m], then m is returned by searchsorted.
x_new_indices = searchsorted(self.x, x_new)
# 3. Clip x_new_indices so that they are within the range of
# self.x indices and at least 1. Removes mis-interpolation
# of x_new[n] = x[0]
x_new_indices = x_new_indices.clip(1, len(self.x)-1).astype(int)
# 4. Calculate the slope of regions that each x_new value falls in.
lo = x_new_indices - 1
hi = x_new_indices
x_lo = self.x[lo]
x_hi = self.x[hi]
y_lo = self._y[lo]
y_hi = self._y[hi]
# Note that the following two expressions rely on the specifics of the
# broadcasting semantics.
slope = (y_hi - y_lo) / (x_hi - x_lo)[:, None]
# 5. Calculate the actual value for each entry in x_new.
y_new = slope*(x_new - x_lo)[:, None] + y_lo
return y_new
def _call_nearest(self, x_new):
""" Find nearest neighbor interpolated y_new = f(x_new)."""
# 2. Find where in the averaged data the values to interpolate
# would be inserted.
# Note: use side='left' (right) to searchsorted() to define the
# halfway point to be nearest to the left (right) neighbor
x_new_indices = searchsorted(self.x_bds, x_new, side=self._side)
# 3. Clip x_new_indices so that they are within the range of x indices.
x_new_indices = x_new_indices.clip(0, len(self.x)-1).astype(intp)
# 4. Calculate the actual value for each entry in x_new.
y_new = self._y[x_new_indices]
return y_new
def _call_previousnext(self, x_new):
"""Use previous/next neighbor of x_new, y_new = f(x_new)."""
# 1. Get index of left/right value
x_new_indices = searchsorted(self._x_shift, x_new, side=self._side)
# 2. Clip x_new_indices so that they are within the range of x indices.
x_new_indices = x_new_indices.clip(1-self._ind,
len(self.x)-self._ind).astype(intp)
# 3. Calculate the actual value for each entry in x_new.
y_new = self._y[x_new_indices+self._ind-1]
return y_new
def _call_spline(self, x_new):
return self._spline(x_new)
def _call_nan_spline(self, x_new):
out = self._spline(x_new)
out[...] = np.nan
return out
def _evaluate(self, x_new):
        # 1. Handle values in x_new that are outside of x. Raise an error
        # or return masks indicating the out-of-bounds values, depending
        # on the bounds_error attribute.
x_new = asarray(x_new)
y_new = self._call(self, x_new)
if not self._extrapolate:
below_bounds, above_bounds = self._check_bounds(x_new)
if len(y_new) > 0:
# Note fill_value must be broadcast up to the proper size
# and flattened to work here
y_new[below_bounds] = self._fill_value_below
y_new[above_bounds] = self._fill_value_above
return y_new
def _check_bounds(self, x_new):
"""Check the inputs for being in the bounds of the interpolated data.
Parameters
----------
x_new : array
Returns
-------
out_of_bounds : bool array
The mask on x_new of values that are out of the bounds.
"""
# If self.bounds_error is True, we raise an error if any x_new values
# fall outside the range of x. Otherwise, we return an array indicating
# which values are outside the boundary region.
below_bounds = x_new < self.x[0]
above_bounds = x_new > self.x[-1]
if self.bounds_error and below_bounds.any():
below_bounds_value = x_new[np.argmax(below_bounds)]
raise ValueError("A value ({}) in x_new is below "
"the interpolation range's minimum value ({})."
.format(below_bounds_value, self.x[0]))
if self.bounds_error and above_bounds.any():
above_bounds_value = x_new[np.argmax(above_bounds)]
raise ValueError("A value ({}) in x_new is above "
"the interpolation range's maximum value ({})."
.format(above_bounds_value, self.x[-1]))
# !! Should we emit a warning if some values are out of bounds?
# !! matlab does not.
return below_bounds, above_bounds
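The dispatch logic above selects the fast `np.interp` path (`_call_linear_np`) for plain 1-D float data with `kind='linear'`, and the shifted-`x` `searchsorted` lookups (`_call_previousnext`) for the step-wise kinds. A minimal usage sketch of those paths (the sample data is made up for illustration):

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.array([0.0, 1.0, 2.0, 3.0])
y = x ** 2  # sample values: [0, 1, 4, 9]

# 'linear' on 1-D float data takes the np.interp fast path
f_lin = interp1d(x, y, kind='linear')
# 'previous'/'next' use the nextafter-shifted searchsorted lookup
f_prev = interp1d(x, y, kind='previous')
f_next = interp1d(x, y, kind='next')

print(f_lin(1.5))   # 2.5: halfway between y[1] = 1 and y[2] = 4
print(f_prev(1.5))  # 1.0: value at the previous breakpoint, x = 1
print(f_next(1.5))  # 4.0: value at the next breakpoint, x = 2
```

With the default `bounds_error=True`, querying outside `[0, 3]` raises the `ValueError` produced in `_check_bounds`.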
class _PPolyBase:
"""Base class for piecewise polynomials."""
__slots__ = ('c', 'x', 'extrapolate', 'axis')
def __init__(self, c, x, extrapolate=None, axis=0):
self.c = np.asarray(c)
self.x = np.ascontiguousarray(x, dtype=np.float64)
if extrapolate is None:
extrapolate = True
elif extrapolate != 'periodic':
extrapolate = bool(extrapolate)
self.extrapolate = extrapolate
if self.c.ndim < 2:
raise ValueError("Coefficients array must be at least "
"2-dimensional.")
if not (0 <= axis < self.c.ndim - 1):
raise ValueError("axis=%s must be between 0 and %s" %
(axis, self.c.ndim-1))
self.axis = axis
if axis != 0:
# move the interpolation axis to be the first one in self.c
# More specifically, the target shape for self.c is (k, m, ...),
# and axis !=0 means that we have c.shape (..., k, m, ...)
# ^
# axis
# So we roll two of them.
self.c = np.moveaxis(self.c, axis+1, 0)
self.c = np.moveaxis(self.c, axis+1, 0)
if self.x.ndim != 1:
raise ValueError("x must be 1-dimensional")
if self.x.size < 2:
raise ValueError("at least 2 breakpoints are needed")
if self.c.ndim < 2:
raise ValueError("c must have at least 2 dimensions")
if self.c.shape[0] == 0:
raise ValueError("polynomial must be at least of order 0")
if self.c.shape[1] != self.x.size-1:
raise ValueError("number of coefficients != len(x)-1")
dx = np.diff(self.x)
if not (np.all(dx >= 0) or np.all(dx <= 0)):
raise ValueError("`x` must be strictly increasing or decreasing.")
dtype = self._get_dtype(self.c.dtype)
self.c = np.ascontiguousarray(self.c, dtype=dtype)
def _get_dtype(self, dtype):
if np.issubdtype(dtype, np.complexfloating) \
or np.issubdtype(self.c.dtype, np.complexfloating):
return np.complex_
else:
return np.float_
@classmethod
def construct_fast(cls, c, x, extrapolate=None, axis=0):
"""
Construct the piecewise polynomial without making checks.
Takes the same parameters as the constructor. Input arguments
``c`` and ``x`` must be arrays of the correct shape and type. The
``c`` array can only be of dtypes float and complex, and ``x``
array must have dtype float.
"""
self = object.__new__(cls)
self.c = c
self.x = x
self.axis = axis
if extrapolate is None:
extrapolate = True
self.extrapolate = extrapolate
return self
def _ensure_c_contiguous(self):
"""
c and x may be modified by the user. The Cython code expects
that they are C contiguous.
"""
if not self.x.flags.c_contiguous:
self.x = self.x.copy()
if not self.c.flags.c_contiguous:
self.c = self.c.copy()
def extend(self, c, x):
"""
Add additional breakpoints and coefficients to the polynomial.
Parameters
----------
c : ndarray, size (k, m, ...)
Additional coefficients for polynomials in intervals. Note that
the first additional interval will be formed using one of the
``self.x`` end points.
x : ndarray, size (m,)
Additional breakpoints. Must be sorted in the same order as
``self.x`` and either to the right or to the left of the current
breakpoints.
"""
c = np.asarray(c)
x = np.asarray(x)
if c.ndim < 2:
raise ValueError("invalid dimensions for c")
if x.ndim != 1:
raise ValueError("invalid dimensions for x")
if x.shape[0] != c.shape[1]:
raise ValueError("Shapes of x {} and c {} are incompatible"
.format(x.shape, c.shape))
if c.shape[2:] != self.c.shape[2:] or c.ndim != self.c.ndim:
raise ValueError("Shapes of c {} and self.c {} are incompatible"
.format(c.shape, self.c.shape))
if c.size == 0:
return
dx = np.diff(x)
if not (np.all(dx >= 0) or np.all(dx <= 0)):
raise ValueError("`x` is not sorted.")
if self.x[-1] >= self.x[0]:
if not x[-1] >= x[0]:
                raise ValueError("`x` is in a different order "
                                 "than `self.x`.")
if x[0] >= self.x[-1]:
action = 'append'
elif x[-1] <= self.x[0]:
action = 'prepend'
else:
                raise ValueError("`x` is neither to the left nor to "
                                 "the right of `self.x`.")
else:
if not x[-1] <= x[0]:
                raise ValueError("`x` is in a different order "
                                 "than `self.x`.")
if x[0] <= self.x[-1]:
action = 'append'
elif x[-1] >= self.x[0]:
action = 'prepend'
else:
                raise ValueError("`x` is neither to the left nor to "
                                 "the right of `self.x`.")
dtype = self._get_dtype(c.dtype)
k2 = max(c.shape[0], self.c.shape[0])
c2 = np.zeros((k2, self.c.shape[1] + c.shape[1]) + self.c.shape[2:],
dtype=dtype)
if action == 'append':
c2[k2-self.c.shape[0]:, :self.c.shape[1]] = self.c
c2[k2-c.shape[0]:, self.c.shape[1]:] = c
self.x = np.r_[self.x, x]
elif action == 'prepend':
c2[k2-self.c.shape[0]:, :c.shape[1]] = c
c2[k2-c.shape[0]:, c.shape[1]:] = self.c
self.x = np.r_[x, self.x]
self.c = c2
def __call__(self, x, nu=0, extrapolate=None):
"""
Evaluate the piecewise polynomial or its derivative.
Parameters
----------
x : array_like
Points to evaluate the interpolant at.
nu : int, optional
Order of derivative to evaluate. Must be non-negative.
extrapolate : {bool, 'periodic', None}, optional
If bool, determines whether to extrapolate to out-of-bounds points
based on first and last intervals, or to return NaNs.
If 'periodic', periodic extrapolation is used.
If None (default), use `self.extrapolate`.
Returns
-------
y : array_like
Interpolated values. Shape is determined by replacing
the interpolation axis in the original array with the shape of x.
Notes
-----
Derivatives are evaluated piecewise for each polynomial
segment, even if the polynomial is not differentiable at the
breakpoints. The polynomial intervals are considered half-open,
``[a, b)``, except for the last interval which is closed
``[a, b]``.
"""
if extrapolate is None:
extrapolate = self.extrapolate
x = np.asarray(x)
x_shape, x_ndim = x.shape, x.ndim
x = np.ascontiguousarray(x.ravel(), dtype=np.float_)
# With periodic extrapolation we map x to the segment
# [self.x[0], self.x[-1]].
if extrapolate == 'periodic':
x = self.x[0] + (x - self.x[0]) % (self.x[-1] - self.x[0])
extrapolate = False
out = np.empty((len(x), prod(self.c.shape[2:])), dtype=self.c.dtype)
self._ensure_c_contiguous()
self._evaluate(x, nu, extrapolate, out)
out = out.reshape(x_shape + self.c.shape[2:])
if self.axis != 0:
# transpose to move the calculated values to the interpolation axis
l = list(range(out.ndim))
l = l[x_ndim:x_ndim+self.axis] + l[:x_ndim] + l[x_ndim+self.axis:]
out = out.transpose(l)
return out
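The coefficient layout handled by `__call__` above, with the half-open interval convention and per-segment derivatives, can be illustrated with a small hand-built piecewise linear polynomial (a sketch with made-up coefficients):

```python
import numpy as np
from scipy.interpolate import PPoly

# Order-1 (linear) pieces on [0, 1) and [1, 2]. Row 0 holds the
# coefficient of (x - x[i]); row 1 holds the constant term, so
# piece 0 is S(x) = x and piece 1 is S(x) = 2*(x - 1) + 1.
c = np.array([[1.0, 2.0],
              [0.0, 1.0]])
x = np.array([0.0, 1.0, 2.0])
p = PPoly(c, x)

print(p(0.5))        # 0.5  (first piece)
print(p(1.5))        # 2.0  (second piece: 2*0.5 + 1)
print(p(1.5, nu=1))  # 2.0  (slope of the second piece)
```

Note that derivatives (`nu=1`) are evaluated per segment, so the jump in slope at the breakpoint `x = 1` is not an error.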
class PPoly(_PPolyBase):
"""
Piecewise polynomial in terms of coefficients and breakpoints
The polynomial between ``x[i]`` and ``x[i + 1]`` is written in the
local power basis::
S = sum(c[m, i] * (xp - x[i])**(k-m) for m in range(k+1))
where ``k`` is the degree of the polynomial.
Parameters
----------
c : ndarray, shape (k, m, ...)
Polynomial coefficients, order `k` and `m` intervals.
x : ndarray, shape (m+1,)
Polynomial breakpoints. Must be sorted in either increasing or
decreasing order.
extrapolate : bool or 'periodic', optional
If bool, determines whether to extrapolate to out-of-bounds points
based on first and last intervals, or to return NaNs. If 'periodic',
periodic extrapolation is used. Default is True.
axis : int, optional
Interpolation axis. Default is zero.
Attributes
----------
x : ndarray
Breakpoints.
c : ndarray
Coefficients of the polynomials. They are reshaped
to a 3-D array with the last dimension representing
the trailing dimensions of the original coefficient array.
axis : int
Interpolation axis.
Methods
-------
__call__
derivative
antiderivative
integrate
solve
roots
extend
from_spline
from_bernstein_basis
construct_fast
See also
--------
BPoly : piecewise polynomials in the Bernstein basis
Notes
-----
High-order polynomials in the power basis can be numerically
unstable. Precision problems can start to appear for orders
larger than 20-30.
"""
def _evaluate(self, x, nu, extrapolate, out):
_ppoly.evaluate(self.c.reshape(self.c.shape[0], self.c.shape[1], -1),
self.x, x, nu, bool(extrapolate), out)
def derivative(self, nu=1):
"""
Construct a new piecewise polynomial representing the derivative.
Parameters
----------
nu : int, optional
Order of derivative to evaluate. Default is 1, i.e., compute the
first derivative. If negative, the antiderivative is returned.
Returns
-------
pp : PPoly
            Piecewise polynomial of order k2 = k - nu representing the derivative
of this polynomial.
Notes
-----
Derivatives are evaluated piecewise for each polynomial
segment, even if the polynomial is not differentiable at the
breakpoints. The polynomial intervals are considered half-open,
``[a, b)``, except for the last interval which is closed
``[a, b]``.
"""
if nu < 0:
return self.antiderivative(-nu)
# reduce order
if nu == 0:
c2 = self.c.copy()
else:
c2 = self.c[:-nu, :].copy()
if c2.shape[0] == 0:
# derivative of order 0 is zero
c2 = np.zeros((1,) + c2.shape[1:], dtype=c2.dtype)
# multiply by the correct rising factorials
factor = spec.poch(np.arange(c2.shape[0], 0, -1), nu)
c2 *= factor[(slice(None),) + (None,)*(c2.ndim-1)]
# construct a compatible polynomial
return self.construct_fast(c2, self.x, self.extrapolate, self.axis)
def antiderivative(self, nu=1):
"""
Construct a new piecewise polynomial representing the antiderivative.
        The antiderivative is the indefinite integral of the function;
        differentiation is its inverse operation.
Parameters
----------
nu : int, optional
Order of antiderivative to evaluate. Default is 1, i.e., compute
the first integral. If negative, the derivative is returned.
Returns
-------
pp : PPoly
            Piecewise polynomial of order k2 = k + nu representing
the antiderivative of this polynomial.
Notes
-----
The antiderivative returned by this function is continuous and
        continuously differentiable to order nu - 1, up to floating point
rounding error.
        If the antiderivative is computed and ``self.extrapolate='periodic'``,
        ``extrapolate`` is set to False for the returned instance, because
        the antiderivative is no longer periodic and evaluating it correctly
        outside of the initially given x interval is difficult.
"""
if nu <= 0:
return self.derivative(-nu)
c = np.zeros((self.c.shape[0] + nu, self.c.shape[1]) + self.c.shape[2:],
dtype=self.c.dtype)
c[:-nu] = self.c
# divide by the correct rising factorials
factor = spec.poch(np.arange(self.c.shape[0], 0, -1), nu)
c[:-nu] /= factor[(slice(None),) + (None,)*(c.ndim-1)]
# fix continuity of added degrees of freedom
self._ensure_c_contiguous()
_ppoly.fix_continuity(c.reshape(c.shape[0], c.shape[1], -1),
self.x, nu - 1)
if self.extrapolate == 'periodic':
extrapolate = False
else:
extrapolate = self.extrapolate
# construct a compatible polynomial
return self.construct_fast(c, self.x, extrapolate, self.axis)
def integrate(self, a, b, extrapolate=None):
"""
Compute a definite integral over a piecewise polynomial.
Parameters
----------
a : float
Lower integration bound
b : float
Upper integration bound
extrapolate : {bool, 'periodic', None}, optional
If bool, determines whether to extrapolate to out-of-bounds points
based on first and last intervals, or to return NaNs.
If 'periodic', periodic extrapolation is used.
If None (default), use `self.extrapolate`.
Returns
-------
ig : array_like
Definite integral of the piecewise polynomial over [a, b]
"""
if extrapolate is None:
extrapolate = self.extrapolate
# Swap integration bounds if needed
sign = 1
if b < a:
a, b = b, a
sign = -1
range_int = np.empty((prod(self.c.shape[2:]),), dtype=self.c.dtype)
self._ensure_c_contiguous()
# Compute the integral.
if extrapolate == 'periodic':
# Split the integral into the part over period (can be several
# of them) and the remaining part.
xs, xe = self.x[0], self.x[-1]
period = xe - xs
interval = b - a
n_periods, left = divmod(interval, period)
if n_periods > 0:
_ppoly.integrate(
self.c.reshape(self.c.shape[0], self.c.shape[1], -1),
self.x, xs, xe, False, out=range_int)
range_int *= n_periods
else:
range_int.fill(0)
# Map a to [xs, xe], b is always a + left.
a = xs + (a - xs) % period
b = a + left
# If b <= xe then we need to integrate over [a, b], otherwise
            # over [a, xe] and the remainder starting from xs.
remainder_int = np.empty_like(range_int)
if b <= xe:
_ppoly.integrate(
self.c.reshape(self.c.shape[0], self.c.shape[1], -1),
self.x, a, b, False, out=remainder_int)
range_int += remainder_int
else:
_ppoly.integrate(
self.c.reshape(self.c.shape[0], self.c.shape[1], -1),
self.x, a, xe, False, out=remainder_int)
range_int += remainder_int
_ppoly.integrate(
self.c.reshape(self.c.shape[0], self.c.shape[1], -1),
self.x, xs, xs + left + a - xe, False, out=remainder_int)
range_int += remainder_int
else:
_ppoly.integrate(
self.c.reshape(self.c.shape[0], self.c.shape[1], -1),
self.x, a, b, bool(extrapolate), out=range_int)
# Return
range_int *= sign
return range_int.reshape(self.c.shape[2:])
def solve(self, y=0., discontinuity=True, extrapolate=None):
"""
Find real solutions of the equation ``pp(x) == y``.
Parameters
----------
y : float, optional
Right-hand side. Default is zero.
discontinuity : bool, optional
Whether to report sign changes across discontinuities at
breakpoints as roots.
extrapolate : {bool, 'periodic', None}, optional
If bool, determines whether to return roots from the polynomial
extrapolated based on first and last intervals, 'periodic' works
the same as False. If None (default), use `self.extrapolate`.
Returns
-------
roots : ndarray
Roots of the polynomial(s).
If the PPoly object describes multiple polynomials, the
            return value is an object array in which each element is an
ndarray containing the roots.
Notes
-----
This routine works only on real-valued polynomials.
If the piecewise polynomial contains sections that are
identically zero, the root list will contain the start point
of the corresponding interval, followed by a ``nan`` value.
If the polynomial is discontinuous across a breakpoint, and
there is a sign change across the breakpoint, this is reported
        if the `discontinuity` parameter is True.
Examples
--------
Finding roots of ``[x**2 - 1, (x - 1)**2]`` defined on intervals
``[-2, 1], [1, 2]``:
>>> import numpy as np
>>> from scipy.interpolate import PPoly
>>> pp = PPoly(np.array([[1, -4, 3], [1, 0, 0]]).T, [-2, 1, 2])
>>> pp.solve()
array([-1., 1.])
"""
if extrapolate is None:
extrapolate = self.extrapolate
self._ensure_c_contiguous()
if np.issubdtype(self.c.dtype, np.complexfloating):
raise ValueError("Root finding is only for "
"real-valued polynomials")
y = float(y)
r = _ppoly.real_roots(self.c.reshape(self.c.shape[0], self.c.shape[1], -1),
self.x, y, bool(discontinuity),
bool(extrapolate))
if self.c.ndim == 2:
return r[0]
else:
r2 = np.empty(prod(self.c.shape[2:]), dtype=object)
# this for-loop is equivalent to ``r2[...] = r``, but that's broken
# in NumPy 1.6.0
for ii, root in enumerate(r):
r2[ii] = root
return r2.reshape(self.c.shape[2:])
def roots(self, discontinuity=True, extrapolate=None):
"""
Find real roots of the piecewise polynomial.
Parameters
----------
discontinuity : bool, optional
Whether to report sign changes across discontinuities at
breakpoints as roots.
extrapolate : {bool, 'periodic', None}, optional
If bool, determines whether to return roots from the polynomial
extrapolated based on first and last intervals, 'periodic' works
the same as False. If None (default), use `self.extrapolate`.
Returns
-------
roots : ndarray
Roots of the polynomial(s).
If the PPoly object describes multiple polynomials, the
            return value is an object array in which each element is an
ndarray containing the roots.
See Also
--------
PPoly.solve
"""
return self.solve(0, discontinuity, extrapolate)
@classmethod
def from_spline(cls, tck, extrapolate=None):
"""
Construct a piecewise polynomial from a spline
Parameters
----------
tck
A spline, as returned by `splrep` or a BSpline object.
extrapolate : bool or 'periodic', optional
If bool, determines whether to extrapolate to out-of-bounds points
based on first and last intervals, or to return NaNs.
If 'periodic', periodic extrapolation is used. Default is True.
Examples
--------
Construct an interpolating spline and convert it to a `PPoly` instance
>>> import numpy as np
>>> from scipy.interpolate import splrep, PPoly
>>> x = np.linspace(0, 1, 11)
>>> y = np.sin(2*np.pi*x)
>>> tck = splrep(x, y, s=0)
>>> p = PPoly.from_spline(tck)
>>> isinstance(p, PPoly)
True
Note that this function only supports 1D splines out of the box.
If the ``tck`` object represents a parametric spline (e.g. constructed
by `splprep` or a `BSpline` with ``c.ndim > 1``), you will need to loop
over the dimensions manually.
>>> from scipy.interpolate import splprep, splev
>>> t = np.linspace(0, 1, 11)
>>> x = np.sin(2*np.pi*t)
>>> y = np.cos(2*np.pi*t)
>>> (t, c, k), u = splprep([x, y], s=0)
Note that ``c`` is a list of two arrays of length 11.
>>> unew = np.arange(0, 1.01, 0.01)
>>> out = splev(unew, (t, c, k))
To convert this spline to the power basis, we convert each
component of the list of b-spline coefficients, ``c``, into the
corresponding cubic polynomial.
>>> polys = [PPoly.from_spline((t, cj, k)) for cj in c]
>>> polys[0].c.shape
(4, 14)
Note that the coefficients of the polynomials `polys` are in the
power basis and their dimensions reflect just that: here 4 is the order
(degree+1), and 14 is the number of intervals---which is nothing but
the length of the knot array of the original `tck` minus one.
Optionally, we can stack the components into a single `PPoly` along
the third dimension:
>>> cc = np.dstack([p.c for p in polys]) # has shape = (4, 14, 2)
>>> poly = PPoly(cc, polys[0].x)
>>> np.allclose(poly(unew).T, # note the transpose to match `splev`
... out, atol=1e-15)
True
"""
if isinstance(tck, BSpline):
t, c, k = tck.tck
if extrapolate is None:
extrapolate = tck.extrapolate
else:
t, c, k = tck
cvals = np.empty((k + 1, len(t)-1), dtype=c.dtype)
for m in range(k, -1, -1):
y = _fitpack_py.splev(t[:-1], tck, der=m)
cvals[k - m, :] = y/spec.gamma(m+1)
return cls.construct_fast(cvals, t, extrapolate)
@classmethod
def from_bernstein_basis(cls, bp, extrapolate=None):
"""
Construct a piecewise polynomial in the power basis
from a polynomial in Bernstein basis.
Parameters
----------
bp : BPoly
A Bernstein basis polynomial, as created by BPoly
extrapolate : bool or 'periodic', optional
If bool, determines whether to extrapolate to out-of-bounds points
based on first and last intervals, or to return NaNs.
If 'periodic', periodic extrapolation is used. Default is True.
"""
if not isinstance(bp, BPoly):
raise TypeError(".from_bernstein_basis only accepts BPoly instances. "
"Got %s instead." % type(bp))
dx = np.diff(bp.x)
k = bp.c.shape[0] - 1 # polynomial order
rest = (None,)*(bp.c.ndim-2)
c = np.zeros_like(bp.c)
for a in range(k+1):
factor = (-1)**a * comb(k, a) * bp.c[a]
for s in range(a, k+1):
val = comb(k-a, s-a) * (-1)**s
c[k-s] += factor * val / dx[(slice(None),)+rest]**s
if extrapolate is None:
extrapolate = bp.extrapolate
return cls.construct_fast(c, bp.x, extrapolate, bp.axis)
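`from_bernstein_basis` above performs an exact change of basis, so the converted polynomial evaluates to the same values as the original. A quick round-trip check (a sketch reusing the quadratic from the `BPoly` docstring example):

```python
import numpy as np
from scipy.interpolate import BPoly, PPoly

# One quadratic piece on [0, 1] with Bernstein coefficients 1, 2, 3,
# i.e. B(x) = 1*(1-x)**2 + 2*2*x*(1-x) + 3*x**2, which simplifies to 1 + 2*x.
bp = BPoly([[1.0], [2.0], [3.0]], [0.0, 1.0])
pp = PPoly.from_bernstein_basis(bp)

xs = np.linspace(0.0, 1.0, 5)
print(np.allclose(bp(xs), pp(xs)))  # True: same polynomial, different basis
```

The inverse conversion is `BPoly.from_power_basis`, defined further below.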
class BPoly(_PPolyBase):
"""Piecewise polynomial in terms of coefficients and breakpoints.
The polynomial between ``x[i]`` and ``x[i + 1]`` is written in the
Bernstein polynomial basis::
S = sum(c[a, i] * b(a, k; x) for a in range(k+1)),
where ``k`` is the degree of the polynomial, and::
b(a, k; x) = binom(k, a) * t**a * (1 - t)**(k - a),
with ``t = (x - x[i]) / (x[i+1] - x[i])`` and ``binom`` is the binomial
coefficient.
Parameters
----------
c : ndarray, shape (k, m, ...)
Polynomial coefficients, order `k` and `m` intervals
x : ndarray, shape (m+1,)
Polynomial breakpoints. Must be sorted in either increasing or
decreasing order.
extrapolate : bool, optional
If bool, determines whether to extrapolate to out-of-bounds points
based on first and last intervals, or to return NaNs. If 'periodic',
periodic extrapolation is used. Default is True.
axis : int, optional
Interpolation axis. Default is zero.
Attributes
----------
x : ndarray
Breakpoints.
c : ndarray
Coefficients of the polynomials. They are reshaped
to a 3-D array with the last dimension representing
the trailing dimensions of the original coefficient array.
axis : int
Interpolation axis.
Methods
-------
__call__
extend
derivative
antiderivative
integrate
construct_fast
from_power_basis
from_derivatives
See also
--------
PPoly : piecewise polynomials in the power basis
Notes
-----
Properties of Bernstein polynomials are well documented in the literature,
see for example [1]_ [2]_ [3]_.
References
----------
.. [1] https://en.wikipedia.org/wiki/Bernstein_polynomial
.. [2] Kenneth I. Joy, Bernstein polynomials,
http://www.idav.ucdavis.edu/education/CAGDNotes/Bernstein-Polynomials.pdf
.. [3] E. H. Doha, A. H. Bhrawy, and M. A. Saker, Boundary Value Problems,
           vol 2011, article ID 829543, :doi:`10.1155/2011/829543`.
Examples
--------
>>> from scipy.interpolate import BPoly
>>> x = [0, 1]
>>> c = [[1], [2], [3]]
>>> bp = BPoly(c, x)
This creates a 2nd order polynomial
.. math::
B(x) = 1 \\times b_{0, 2}(x) + 2 \\times b_{1, 2}(x) + 3 \\times b_{2, 2}(x) \\\\
= 1 \\times (1-x)^2 + 2 \\times 2 x (1 - x) + 3 \\times x^2
"""
def _evaluate(self, x, nu, extrapolate, out):
_ppoly.evaluate_bernstein(
self.c.reshape(self.c.shape[0], self.c.shape[1], -1),
self.x, x, nu, bool(extrapolate), out)
def derivative(self, nu=1):
"""
Construct a new piecewise polynomial representing the derivative.
Parameters
----------
nu : int, optional
Order of derivative to evaluate. Default is 1, i.e., compute the
first derivative. If negative, the antiderivative is returned.
Returns
-------
bp : BPoly
Piecewise polynomial of order k - nu representing the derivative of
this polynomial.
"""
if nu < 0:
return self.antiderivative(-nu)
if nu > 1:
bp = self
for k in range(nu):
bp = bp.derivative()
return bp
# reduce order
if nu == 0:
c2 = self.c.copy()
else:
# For a polynomial
# B(x) = \sum_{a=0}^{k} c_a b_{a, k}(x),
# we use the fact that
# b'_{a, k} = k ( b_{a-1, k-1} - b_{a, k-1} ),
# which leads to
# B'(x) = \sum_{a=0}^{k-1} (c_{a+1} - c_a) b_{a, k-1}
#
# finally, for an interval [y, y + dy] with dy != 1,
# we need to correct for an extra power of dy
rest = (None,)*(self.c.ndim-2)
k = self.c.shape[0] - 1
dx = np.diff(self.x)[(None, slice(None))+rest]
c2 = k * np.diff(self.c, axis=0) / dx
if c2.shape[0] == 0:
# derivative of order 0 is zero
c2 = np.zeros((1,) + c2.shape[1:], dtype=c2.dtype)
# construct a compatible polynomial
return self.construct_fast(c2, self.x, self.extrapolate, self.axis)
def antiderivative(self, nu=1):
"""
Construct a new piecewise polynomial representing the antiderivative.
Parameters
----------
nu : int, optional
Order of antiderivative to evaluate. Default is 1, i.e., compute
the first integral. If negative, the derivative is returned.
Returns
-------
bp : BPoly
Piecewise polynomial of order k + nu representing the
antiderivative of this polynomial.
Notes
-----
        If the antiderivative is computed and ``self.extrapolate='periodic'``,
        ``extrapolate`` is set to False for the returned instance, because
        the antiderivative is no longer periodic and evaluating it correctly
        outside of the initially given x interval is difficult.
"""
if nu <= 0:
return self.derivative(-nu)
if nu > 1:
bp = self
for k in range(nu):
bp = bp.antiderivative()
return bp
# Construct the indefinite integrals on individual intervals
c, x = self.c, self.x
k = c.shape[0]
c2 = np.zeros((k+1,) + c.shape[1:], dtype=c.dtype)
c2[1:, ...] = np.cumsum(c, axis=0) / k
delta = x[1:] - x[:-1]
c2 *= delta[(None, slice(None)) + (None,)*(c.ndim-2)]
# Now fix continuity: on the very first interval, take the integration
# constant to be zero; on an interval [x_j, x_{j+1}) with j>0,
# the integration constant is then equal to the jump of the `bp` at x_j.
# The latter is given by the coefficient of B_{n+1, n+1}
# *on the previous interval* (other B. polynomials are zero at the
# breakpoint). Finally, use the fact that BPs form a partition of unity.
        c2[:, 1:] += np.cumsum(c2[k, :], axis=0)[:-1]
if self.extrapolate == 'periodic':
extrapolate = False
else:
extrapolate = self.extrapolate
return self.construct_fast(c2, x, extrapolate, axis=self.axis)
def integrate(self, a, b, extrapolate=None):
"""
Compute a definite integral over a piecewise polynomial.
Parameters
----------
a : float
Lower integration bound
b : float
Upper integration bound
extrapolate : {bool, 'periodic', None}, optional
Whether to extrapolate to out-of-bounds points based on first
and last intervals, or to return NaNs. If 'periodic', periodic
extrapolation is used. If None (default), use `self.extrapolate`.
Returns
-------
array_like
Definite integral of the piecewise polynomial over [a, b]
"""
# XXX: can probably use instead the fact that
# \int_0^{1} B_{j, n}(x) \dx = 1/(n+1)
ib = self.antiderivative()
if extrapolate is None:
extrapolate = self.extrapolate
        # ib.extrapolate shouldn't be 'periodic'; it is converted to
        # False for 'periodic' in the antiderivative() call.
if extrapolate != 'periodic':
ib.extrapolate = extrapolate
if extrapolate == 'periodic':
# Split the integral into the part over period (can be several
# of them) and the remaining part.
# For simplicity and clarity convert to a <= b case.
if a <= b:
sign = 1
else:
a, b = b, a
sign = -1
xs, xe = self.x[0], self.x[-1]
period = xe - xs
interval = b - a
n_periods, left = divmod(interval, period)
res = n_periods * (ib(xe) - ib(xs))
# Map a and b to [xs, xe].
a = xs + (a - xs) % period
b = a + left
# If b <= xe then we need to integrate over [a, b], otherwise
            # over [a, xe] and the remainder starting from xs.
if b <= xe:
res += ib(b) - ib(a)
else:
res += ib(xe) - ib(a) + ib(xs + left + a - xe) - ib(xs)
return sign * res
else:
return ib(b) - ib(a)
def extend(self, c, x):
k = max(self.c.shape[0], c.shape[0])
self.c = self._raise_degree(self.c, k - self.c.shape[0])
c = self._raise_degree(c, k - c.shape[0])
return _PPolyBase.extend(self, c, x)
extend.__doc__ = _PPolyBase.extend.__doc__
@classmethod
def from_power_basis(cls, pp, extrapolate=None):
"""
Construct a piecewise polynomial in Bernstein basis
from a power basis polynomial.
Parameters
----------
pp : PPoly
A piecewise polynomial in the power basis
extrapolate : bool or 'periodic', optional
If bool, determines whether to extrapolate to out-of-bounds points
based on first and last intervals, or to return NaNs.
If 'periodic', periodic extrapolation is used. Default is True.
"""
if not isinstance(pp, PPoly):
raise TypeError(".from_power_basis only accepts PPoly instances. "
"Got %s instead." % type(pp))
dx = np.diff(pp.x)
k = pp.c.shape[0] - 1 # polynomial order
rest = (None,)*(pp.c.ndim-2)
c = np.zeros_like(pp.c)
for a in range(k+1):
factor = pp.c[a] / comb(k, k-a) * dx[(slice(None),)+rest]**(k-a)
for j in range(k-a, k+1):
c[j] += factor * comb(j, k-a)
if extrapolate is None:
extrapolate = pp.extrapolate
return cls.construct_fast(c, pp.x, extrapolate, pp.axis)
@classmethod
def from_derivatives(cls, xi, yi, orders=None, extrapolate=None):
"""Construct a piecewise polynomial in the Bernstein basis,
compatible with the specified values and derivatives at breakpoints.
Parameters
----------
xi : array_like
sorted 1-D array of x-coordinates
yi : array_like or list of array_likes
``yi[i][j]`` is the ``j``th derivative known at ``xi[i]``
orders : None or int or array_like of ints. Default: None.
Specifies the degree of local polynomials. If not None, some
derivatives are ignored.
extrapolate : bool or 'periodic', optional
If bool, determines whether to extrapolate to out-of-bounds points
based on first and last intervals, or to return NaNs.
If 'periodic', periodic extrapolation is used. Default is True.
Notes
-----
If ``k`` derivatives are specified at a breakpoint ``x``, the
constructed polynomial is exactly ``k`` times continuously
differentiable at ``x``, unless the ``order`` is provided explicitly.
In the latter case, the smoothness of the polynomial at
the breakpoint is controlled by the ``order``.
Deduces the number of derivatives to match at each end
from ``order`` and the number of derivatives available. If
possible it uses the same number of derivatives from
each end; if the number is odd it tries to take the
extra one from y2. In any case if not enough derivatives
are available at one end or another it draws enough to
make up the total from the other end.
If the order is too high and not enough derivatives are available,
an exception is raised.
Examples
--------
>>> from scipy.interpolate import BPoly
>>> BPoly.from_derivatives([0, 1], [[1, 2], [3, 4]])
Creates a polynomial `f(x)` of degree 3, defined on `[0, 1]`
such that `f(0) = 1, df/dx(0) = 2, f(1) = 3, df/dx(1) = 4`
>>> BPoly.from_derivatives([0, 1, 2], [[0, 1], [0], [2]])
Creates a piecewise polynomial `f(x)`, such that
`f(0) = f(1) = 0`, `f(2) = 2`, and `df/dx(0) = 1`.
Based on the number of derivatives provided, the order of the
local polynomials is 2 on `[0, 1]` and 1 on `[1, 2]`.
Notice that no restriction is imposed on the derivatives at
``x = 1`` and ``x = 2``.
Indeed, the explicit form of the polynomial is::
f(x) = | x * (1 - x), 0 <= x < 1
| 2 * (x - 1), 1 <= x <= 2
So that f'(1-0) = -1 and f'(1+0) = 2
"""
xi = np.asarray(xi)
if len(xi) != len(yi):
raise ValueError("xi and yi need to have the same length")
if np.any(xi[1:] - xi[:-1] <= 0):
raise ValueError("x coordinates are not in increasing order")
# number of intervals
m = len(xi) - 1
# global poly order is k-1, local orders are <=k and can vary
try:
k = max(len(yi[i]) + len(yi[i+1]) for i in range(m))
except TypeError as e:
raise ValueError(
"Using a 1-D array for y? Please .reshape(-1, 1)."
) from e
if orders is None:
orders = [None] * m
else:
if isinstance(orders, (int, np.integer)):
orders = [orders] * m
k = max(k, max(orders))
if any(o <= 0 for o in orders):
raise ValueError("Orders must be positive.")
c = []
for i in range(m):
y1, y2 = yi[i], yi[i+1]
if orders[i] is None:
n1, n2 = len(y1), len(y2)
else:
n = orders[i]+1
n1 = min(n//2, len(y1))
n2 = min(n - n1, len(y2))
n1 = min(n - n2, len(y1))
if n1+n2 != n:
mesg = ("Point %g has %d derivatives, point %g"
" has %d derivatives, but order %d requested" % (
xi[i], len(y1), xi[i+1], len(y2), orders[i]))
raise ValueError(mesg)
if not (n1 <= len(y1) and n2 <= len(y2)):
raise ValueError("`order` input incompatible with"
" length y1 or y2.")
b = BPoly._construct_from_derivatives(xi[i], xi[i+1],
y1[:n1], y2[:n2])
if len(b) < k:
b = BPoly._raise_degree(b, k - len(b))
c.append(b)
c = np.asarray(c)
return cls(c.swapaxes(0, 1), xi, extrapolate)
@staticmethod
def _construct_from_derivatives(xa, xb, ya, yb):
r"""Compute the coefficients of a polynomial in the Bernstein basis
given the values and derivatives at the edges.
Return the coefficients of a polynomial in the Bernstein basis
defined on ``[xa, xb]`` and having the values and derivatives at the
endpoints `xa` and `xb` as specified by `ya` and `yb`.
The polynomial constructed is of the minimal possible degree, i.e.,
if the lengths of `ya` and `yb` are `na` and `nb`, the degree
of the polynomial is ``na + nb - 1``.
Parameters
----------
xa : float
Left-hand end point of the interval
xb : float
Right-hand end point of the interval
ya : array_like
Derivatives at `xa`. `ya[0]` is the value of the function, and
`ya[i]` for ``i > 0`` is the value of the ``i``th derivative.
yb : array_like
Derivatives at `xb`.
Returns
-------
array
coefficient array of a polynomial having specified derivatives
Notes
-----
This uses several properties of Bernstein basis functions.
First of all,
.. math:: b'_{a, n} = n (b_{a-1, n-1} - b_{a, n-1})
If B(x) is a linear combination of the form
.. math:: B(x) = \sum_{a=0}^{n} c_a b_{a, n},
then :math:`B'(x) = n \sum_{a=0}^{n-1} (c_{a+1} - c_{a}) b_{a, n-1}`.
Iterating the latter one, one finds for the q-th derivative
.. math:: B^{q}(x) = n!/(n-q)! \sum_{a=0}^{n-q} Q_a b_{a, n-q},
with
.. math:: Q_a = \sum_{j=0}^{q} (-)^{j+q} comb(q, j) c_{j+a}
This way, only ``a=0`` contributes to :math:`B^{q}(x = xa)`, and
`c_q` are found one by one by iterating `q = 0, ..., na`.
At ``x = xb`` it's the same with ``a = n - q``.
"""
ya, yb = np.asarray(ya), np.asarray(yb)
if ya.shape[1:] != yb.shape[1:]:
raise ValueError('Shapes of ya {} and yb {} are incompatible'
.format(ya.shape, yb.shape))
dta, dtb = ya.dtype, yb.dtype
if (np.issubdtype(dta, np.complexfloating) or
np.issubdtype(dtb, np.complexfloating)):
dt = np.complex_
else:
dt = np.float_
na, nb = len(ya), len(yb)
n = na + nb
c = np.empty((na+nb,) + ya.shape[1:], dtype=dt)
# compute coefficients of a polynomial degree na+nb-1
# walk left-to-right
for q in range(0, na):
c[q] = ya[q] / spec.poch(n - q, q) * (xb - xa)**q
for j in range(0, q):
c[q] -= (-1)**(j+q) * comb(q, j) * c[j]
# now walk right-to-left
for q in range(0, nb):
c[-q-1] = yb[q] / spec.poch(n - q, q) * (-1)**q * (xb - xa)**q
for j in range(0, q):
c[-q-1] -= (-1)**(j+1) * comb(q, j+1) * c[-q+j]
return c
@staticmethod
def _raise_degree(c, d):
r"""Raise a degree of a polynomial in the Bernstein basis.
Given the coefficients of a polynomial degree `k`, return (the
coefficients of) the equivalent polynomial of degree `k+d`.
Parameters
----------
c : array_like
coefficient array, 1-D
d : integer
Returns
-------
array
coefficient array, 1-D array of length `c.shape[0] + d`
Notes
-----
This uses the fact that a Bernstein polynomial `b_{a, k}` can be
identically represented as a linear combination of polynomials of
a higher degree `k+d`:
.. math:: b_{a, k} = comb(k, a) \sum_{j=0}^{d} b_{a+j, k+d} \
comb(d, j) / comb(k+d, a+j)
"""
if d == 0:
return c
k = c.shape[0] - 1
out = np.zeros((c.shape[0] + d,) + c.shape[1:], dtype=c.dtype)
for a in range(c.shape[0]):
f = c[a] * comb(k, a)
for j in range(d+1):
out[a+j] += f * comb(d, j) / comb(k+d, a+j)
return out
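A quick usage sketch of the ``from_derivatives`` constructor documented above (assumes SciPy is available; the example mirrors the docstring case of a cubic Hermite segment):

```python
import numpy as np
from scipy.interpolate import BPoly

# Cubic Hermite segment on [0, 1]: value 1 and slope 2 at x = 0,
# value 3 and slope 4 at x = 1.
bp = BPoly.from_derivatives([0, 1], [[1, 2], [3, 4]])

assert np.isclose(bp(0), 1) and np.isclose(bp(1), 3)
db = bp.derivative()
assert np.isclose(db(0), 2) and np.isclose(db(1), 4)
```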
class NdPPoly:
"""
Piecewise tensor product polynomial
The value at point ``xp = (x', y', z', ...)`` is evaluated by first
computing the interval indices `i` such that::
x[0][i[0]] <= x' < x[0][i[0]+1]
x[1][i[1]] <= y' < x[1][i[1]+1]
...
and then computing::
S = sum(c[k0-m0-1,...,kn-mn-1,i[0],...,i[n]]
* (xp[0] - x[0][i[0]])**m0
* ...
* (xp[n] - x[n][i[n]])**mn
for m0 in range(k[0]+1)
...
for mn in range(k[n]+1))
where ``k[j]`` is the degree of the polynomial in dimension j. This
representation is the piecewise multivariate power basis.
Parameters
----------
c : ndarray, shape (k0, ..., kn, m0, ..., mn, ...)
Polynomial coefficients, with polynomial order `kj` and
`mj+1` intervals for each dimension `j`.
x : ndim-tuple of ndarrays, shapes (mj+1,)
Polynomial breakpoints for each dimension. These must be
sorted in increasing order.
extrapolate : bool, optional
Whether to extrapolate to out-of-bounds points based on first
and last intervals, or to return NaNs. Default: True.
Attributes
----------
x : tuple of ndarrays
Breakpoints.
c : ndarray
Coefficients of the polynomials.
Methods
-------
__call__
derivative
antiderivative
integrate
integrate_1d
construct_fast
See also
--------
PPoly : piecewise polynomials in 1D
Notes
-----
High-order polynomials in the power basis can be numerically
unstable.
"""
def __init__(self, c, x, extrapolate=None):
self.x = tuple(np.ascontiguousarray(v, dtype=np.float64) for v in x)
self.c = np.asarray(c)
if extrapolate is None:
extrapolate = True
self.extrapolate = bool(extrapolate)
ndim = len(self.x)
if any(v.ndim != 1 for v in self.x):
raise ValueError("x arrays must all be 1-dimensional")
if any(v.size < 2 for v in self.x):
raise ValueError("x arrays must all contain at least 2 points")
if c.ndim < 2*ndim:
raise ValueError("c must have at least 2*len(x) dimensions")
if any(np.any(v[1:] - v[:-1] < 0) for v in self.x):
raise ValueError("x-coordinates are not in increasing order")
if any(a != b.size - 1 for a, b in zip(c.shape[ndim:2*ndim], self.x)):
raise ValueError("x and c do not agree on the number of intervals")
dtype = self._get_dtype(self.c.dtype)
self.c = np.ascontiguousarray(self.c, dtype=dtype)
@classmethod
def construct_fast(cls, c, x, extrapolate=None):
"""
Construct the piecewise polynomial without making checks.
Takes the same parameters as the constructor. Input arguments
``c`` and ``x`` must be arrays of the correct shape and type. The
``c`` array can only be of dtypes float and complex, and ``x``
array must have dtype float.
"""
self = object.__new__(cls)
self.c = c
self.x = x
if extrapolate is None:
extrapolate = True
self.extrapolate = extrapolate
return self
def _get_dtype(self, dtype):
if np.issubdtype(dtype, np.complexfloating) \
or np.issubdtype(self.c.dtype, np.complexfloating):
return np.complex_
else:
return np.float_
def _ensure_c_contiguous(self):
if not self.c.flags.c_contiguous:
self.c = self.c.copy()
if not isinstance(self.x, tuple):
self.x = tuple(self.x)
def __call__(self, x, nu=None, extrapolate=None):
"""
Evaluate the piecewise polynomial or its derivative
Parameters
----------
x : array-like
Points to evaluate the interpolant at.
nu : tuple, optional
Orders of derivatives to evaluate. Each must be non-negative.
extrapolate : bool, optional
Whether to extrapolate to out-of-bounds points based on first
and last intervals, or to return NaNs.
Returns
-------
y : array-like
Interpolated values. Shape is determined by replacing
the interpolation axis in the original array with the shape of x.
Notes
-----
Derivatives are evaluated piecewise for each polynomial
segment, even if the polynomial is not differentiable at the
breakpoints. The polynomial intervals are considered half-open,
``[a, b)``, except for the last interval which is closed
``[a, b]``.
"""
if extrapolate is None:
extrapolate = self.extrapolate
else:
extrapolate = bool(extrapolate)
ndim = len(self.x)
x = _ndim_coords_from_arrays(x)
x_shape = x.shape
x = np.ascontiguousarray(x.reshape(-1, x.shape[-1]), dtype=np.float_)
if nu is None:
nu = np.zeros((ndim,), dtype=np.intc)
else:
nu = np.asarray(nu, dtype=np.intc)
if nu.ndim != 1 or nu.shape[0] != ndim:
raise ValueError("invalid number of derivative orders nu")
dim1 = prod(self.c.shape[:ndim])
dim2 = prod(self.c.shape[ndim:2*ndim])
dim3 = prod(self.c.shape[2*ndim:])
ks = np.array(self.c.shape[:ndim], dtype=np.intc)
out = np.empty((x.shape[0], dim3), dtype=self.c.dtype)
self._ensure_c_contiguous()
_ppoly.evaluate_nd(self.c.reshape(dim1, dim2, dim3),
self.x,
ks,
x,
nu,
bool(extrapolate),
out)
return out.reshape(x_shape[:-1] + self.c.shape[2*ndim:])
def _derivative_inplace(self, nu, axis):
"""
Compute 1-D derivative along a selected dimension in-place
May result in a non-contiguous c array.
"""
if nu < 0:
return self._antiderivative_inplace(-nu, axis)
ndim = len(self.x)
axis = axis % ndim
# reduce order
if nu == 0:
# noop
return
else:
sl = [slice(None)]*ndim
sl[axis] = slice(None, -nu, None)
c2 = self.c[tuple(sl)]
if c2.shape[axis] == 0:
# derivative of order 0 is zero
shp = list(c2.shape)
shp[axis] = 1
c2 = np.zeros(shp, dtype=c2.dtype)
# multiply by the correct rising factorials
factor = spec.poch(np.arange(c2.shape[axis], 0, -1), nu)
sl = [None]*c2.ndim
sl[axis] = slice(None)
c2 *= factor[tuple(sl)]
self.c = c2
def _antiderivative_inplace(self, nu, axis):
"""
Compute 1-D antiderivative along a selected dimension
May result in a non-contiguous c array.
"""
if nu <= 0:
return self._derivative_inplace(-nu, axis)
ndim = len(self.x)
axis = axis % ndim
perm = list(range(ndim))
perm[0], perm[axis] = perm[axis], perm[0]
perm = perm + list(range(ndim, self.c.ndim))
c = self.c.transpose(perm)
c2 = np.zeros((c.shape[0] + nu,) + c.shape[1:],
dtype=c.dtype)
c2[:-nu] = c
# divide by the correct rising factorials
factor = spec.poch(np.arange(c.shape[0], 0, -1), nu)
c2[:-nu] /= factor[(slice(None),) + (None,)*(c.ndim-1)]
# fix continuity of added degrees of freedom
perm2 = list(range(c2.ndim))
perm2[1], perm2[ndim+axis] = perm2[ndim+axis], perm2[1]
c2 = c2.transpose(perm2)
c2 = c2.copy()
_ppoly.fix_continuity(c2.reshape(c2.shape[0], c2.shape[1], -1),
self.x[axis], nu-1)
c2 = c2.transpose(perm2)
c2 = c2.transpose(perm)
# Done
self.c = c2
def derivative(self, nu):
"""
Construct a new piecewise polynomial representing the derivative.
Parameters
----------
nu : ndim-tuple of int
Order of derivatives to evaluate for each dimension.
If negative, the antiderivative is returned.
Returns
-------
pp : NdPPoly
Piecewise polynomial of orders (k[0] - nu[0], ..., k[n] - nu[n])
representing the derivative of this polynomial.
Notes
-----
Derivatives are evaluated piecewise for each polynomial
segment, even if the polynomial is not differentiable at the
breakpoints. The polynomial intervals in each dimension are
considered half-open, ``[a, b)``, except for the last interval
which is closed ``[a, b]``.
"""
p = self.construct_fast(self.c.copy(), self.x, self.extrapolate)
for axis, n in enumerate(nu):
p._derivative_inplace(n, axis)
p._ensure_c_contiguous()
return p
def antiderivative(self, nu):
"""
Construct a new piecewise polynomial representing the antiderivative.
Antiderivative is also the indefinite integral of the function,
and derivative is its inverse operation.
Parameters
----------
nu : ndim-tuple of int
Order of derivatives to evaluate for each dimension.
If negative, the derivative is returned.
Returns
-------
pp : NdPPoly
Piecewise polynomial of orders (k[0] + nu[0], ..., k[n] + nu[n])
representing the antiderivative of this polynomial.
Notes
-----
The antiderivative returned by this function is continuous and
continuously differentiable to order n-1, up to floating point
rounding error.
"""
p = self.construct_fast(self.c.copy(), self.x, self.extrapolate)
for axis, n in enumerate(nu):
p._antiderivative_inplace(n, axis)
p._ensure_c_contiguous()
return p
def integrate_1d(self, a, b, axis, extrapolate=None):
r"""
Compute NdPPoly representation for one dimensional definite integral
The result is a piecewise polynomial representing the integral:
.. math::
p(y, z, ...) = \int_a^b dx\, p(x, y, z, ...)
where the dimension integrated over is specified with the
`axis` parameter.
Parameters
----------
a, b : float
Lower and upper bound for integration.
axis : int
Dimension over which to compute the 1-D integrals
extrapolate : bool, optional
Whether to extrapolate to out-of-bounds points based on first
and last intervals, or to return NaNs.
Returns
-------
ig : NdPPoly or array-like
Definite integral of the piecewise polynomial over [a, b].
If the polynomial was 1D, an array is returned,
otherwise, an NdPPoly object.
"""
if extrapolate is None:
extrapolate = self.extrapolate
else:
extrapolate = bool(extrapolate)
ndim = len(self.x)
axis = int(axis) % ndim
# reuse 1-D integration routines
c = self.c
swap = list(range(c.ndim))
swap.insert(0, swap[axis])
del swap[axis + 1]
swap.insert(1, swap[ndim + axis])
del swap[ndim + axis + 1]
c = c.transpose(swap)
p = PPoly.construct_fast(c.reshape(c.shape[0], c.shape[1], -1),
self.x[axis],
extrapolate=extrapolate)
out = p.integrate(a, b, extrapolate=extrapolate)
# Construct result
if ndim == 1:
return out.reshape(c.shape[2:])
else:
c = out.reshape(c.shape[2:])
x = self.x[:axis] + self.x[axis+1:]
return self.construct_fast(c, x, extrapolate=extrapolate)
def integrate(self, ranges, extrapolate=None):
"""
Compute a definite integral over a piecewise polynomial.
Parameters
----------
ranges : ndim-tuple of 2-tuples float
Sequence of lower and upper bounds for each dimension,
``[(a[0], b[0]), ..., (a[ndim-1], b[ndim-1])]``
extrapolate : bool, optional
Whether to extrapolate to out-of-bounds points based on first
and last intervals, or to return NaNs.
Returns
-------
ig : array_like
Definite integral of the piecewise polynomial over
[a[0], b[0]] x ... x [a[ndim-1], b[ndim-1]]
"""
ndim = len(self.x)
if extrapolate is None:
extrapolate = self.extrapolate
else:
extrapolate = bool(extrapolate)
if not hasattr(ranges, '__len__') or len(ranges) != ndim:
raise ValueError("Range not a sequence of correct length")
self._ensure_c_contiguous()
# Reuse 1D integration routine
c = self.c
for n, (a, b) in enumerate(ranges):
swap = list(range(c.ndim))
swap.insert(1, swap[ndim - n])
del swap[ndim - n + 1]
c = c.transpose(swap)
p = PPoly.construct_fast(c, self.x[n], extrapolate=extrapolate)
out = p.integrate(a, b, extrapolate=extrapolate)
c = out.reshape(c.shape[2:])
return c
# --- scipy-main/scipy/interpolate/_rbfinterp_pythran.py ---
import numpy as np
def linear(r):
return -r
def thin_plate_spline(r):
if r == 0:
return 0.0
else:
return r**2*np.log(r)
def cubic(r):
return r**3
def quintic(r):
return -r**5
def multiquadric(r):
return -np.sqrt(r**2 + 1)
def inverse_multiquadric(r):
return 1/np.sqrt(r**2 + 1)
def inverse_quadratic(r):
return 1/(r**2 + 1)
def gaussian(r):
return np.exp(-r**2)
NAME_TO_FUNC = {
"linear": linear,
"thin_plate_spline": thin_plate_spline,
"cubic": cubic,
"quintic": quintic,
"multiquadric": multiquadric,
"inverse_multiquadric": inverse_multiquadric,
"inverse_quadratic": inverse_quadratic,
"gaussian": gaussian
}
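A standalone sanity check of a few of these radial kernels (the definitions are re-stated here so the snippet runs on its own):

```python
import numpy as np

def thin_plate_spline(r):
    # r**2 * log(r), continuously extended by 0 at r = 0
    return 0.0 if r == 0 else r**2 * np.log(r)

def gaussian(r):
    return np.exp(-r**2)

assert thin_plate_spline(0.0) == 0.0
assert thin_plate_spline(1.0) == 0.0       # log(1) == 0
assert np.isclose(gaussian(0.0), 1.0)
assert gaussian(2.0) < gaussian(1.0)       # decreasing in r
```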
def kernel_vector(x, y, kernel_func, out):
"""Evaluate RBFs, with centers at `y`, at the point `x`."""
for i in range(y.shape[0]):
out[i] = kernel_func(np.linalg.norm(x - y[i]))
def polynomial_vector(x, powers, out):
"""Evaluate monomials, with exponents from `powers`, at the point `x`."""
for i in range(powers.shape[0]):
out[i] = np.prod(x**powers[i])
def kernel_matrix(x, kernel_func, out):
"""Evaluate RBFs, with centers at `x`, at `x`."""
for i in range(x.shape[0]):
for j in range(i+1):
out[i, j] = kernel_func(np.linalg.norm(x[i] - x[j]))
out[j, i] = out[i, j]
def polynomial_matrix(x, powers, out):
"""Evaluate monomials, with exponents from `powers`, at `x`."""
for i in range(x.shape[0]):
for j in range(powers.shape[0]):
out[i, j] = np.prod(x[i]**powers[j])
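The monomial evaluation in `polynomial_matrix` can be checked against a vectorized NumPy equivalent (a sketch, not part of the pythran-compiled module):

```python
import numpy as np

x = np.array([[2.0, 3.0]])                           # one 2-D point
powers = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])  # 1, x, y, x*y

# Broadcast equivalent of the double loop above: out[i, j] = prod(x[i]**powers[j]).
out = np.prod(x[:, None, :] ** powers[None, :, :], axis=-1)
assert np.allclose(out[0], [1.0, 2.0, 3.0, 6.0])
```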
# pythran export _kernel_matrix(float[:, :], str)
def _kernel_matrix(x, kernel):
"""Return RBFs, with centers at `x`, evaluated at `x`."""
out = np.empty((x.shape[0], x.shape[0]), dtype=float)
kernel_func = NAME_TO_FUNC[kernel]
kernel_matrix(x, kernel_func, out)
return out
# pythran export _polynomial_matrix(float[:, :], int[:, :])
def _polynomial_matrix(x, powers):
"""Return monomials, with exponents from `powers`, evaluated at `x`."""
out = np.empty((x.shape[0], powers.shape[0]), dtype=float)
polynomial_matrix(x, powers, out)
return out
# pythran export _build_system(float[:, :],
# float[:, :],
# float[:],
# str,
# float,
# int[:, :])
def _build_system(y, d, smoothing, kernel, epsilon, powers):
"""Build the system used to solve for the RBF interpolant coefficients.
Parameters
----------
y : (P, N) float ndarray
Data point coordinates.
d : (P, S) float ndarray
Data values at `y`.
smoothing : (P,) float ndarray
Smoothing parameter for each data point.
kernel : str
Name of the RBF.
epsilon : float
Shape parameter.
powers : (R, N) int ndarray
The exponents for each monomial in the polynomial.
Returns
-------
lhs : (P + R, P + R) float ndarray
Left-hand side matrix.
rhs : (P + R, S) float ndarray
Right-hand side matrix.
shift : (N,) float ndarray
Domain shift used to create the polynomial matrix.
scale : (N,) float ndarray
Domain scaling used to create the polynomial matrix.
"""
p = d.shape[0]
s = d.shape[1]
r = powers.shape[0]
kernel_func = NAME_TO_FUNC[kernel]
# Shift and scale the polynomial domain to be between -1 and 1
mins = np.min(y, axis=0)
maxs = np.max(y, axis=0)
shift = (maxs + mins)/2
scale = (maxs - mins)/2
# The scale may be zero if there is a single point or all the points have
# the same value for some dimension. Avoid division by zero by replacing
# zeros with ones.
scale[scale == 0.0] = 1.0
yeps = y*epsilon
yhat = (y - shift)/scale
# Transpose to make the array fortran contiguous. This is required for
# dgesv to not make a copy of lhs.
lhs = np.empty((p + r, p + r), dtype=float).T
kernel_matrix(yeps, kernel_func, lhs[:p, :p])
polynomial_matrix(yhat, powers, lhs[:p, p:])
lhs[p:, :p] = lhs[:p, p:].T
lhs[p:, p:] = 0.0
for i in range(p):
lhs[i, i] += smoothing[i]
# Transpose to make the array fortran contiguous.
rhs = np.empty((s, p + r), dtype=float).T
rhs[:p] = d
rhs[p:] = 0.0
return lhs, rhs, shift, scale
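The shift/scale normalization used above maps the data sites into ``[-1, 1]`` per dimension before building the polynomial block; a small standalone check:

```python
import numpy as np

y = np.array([[0.0, 2.0],
              [4.0, 2.0],
              [2.0, 6.0]])            # three 2-D data sites
mins, maxs = y.min(axis=0), y.max(axis=0)
shift = (maxs + mins) / 2
scale = (maxs - mins) / 2
scale[scale == 0.0] = 1.0            # guard against degenerate dimensions

yhat = (y - shift) / scale
assert yhat.min() >= -1.0 and yhat.max() <= 1.0
```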
# pythran export _build_evaluation_coefficients(float[:, :],
# float[:, :],
# str,
# float,
# int[:, :],
# float[:],
# float[:])
def _build_evaluation_coefficients(x, y, kernel, epsilon, powers,
shift, scale):
"""Construct the coefficients needed to evaluate
the RBF.
Parameters
----------
x : (Q, N) float ndarray
Evaluation point coordinates.
y : (P, N) float ndarray
Data point coordinates.
kernel : str
Name of the RBF.
epsilon : float
Shape parameter.
powers : (R, N) int ndarray
The exponents for each monomial in the polynomial.
shift : (N,) float ndarray
Shifts the polynomial domain for numerical stability.
scale : (N,) float ndarray
Scales the polynomial domain for numerical stability.
Returns
-------
(Q, P + R) float ndarray
"""
q = x.shape[0]
p = y.shape[0]
r = powers.shape[0]
kernel_func = NAME_TO_FUNC[kernel]
yeps = y*epsilon
xeps = x*epsilon
xhat = (x - shift)/scale
vec = np.empty((q, p + r), dtype=float)
for i in range(q):
kernel_vector(xeps[i], yeps, kernel_func, vec[i, :p])
polynomial_vector(xhat[i], powers, vec[i, p:])
return vec
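The pieces above fit together as in this minimal, standalone RBF interpolation sketch (linear kernel ``-r``, no polynomial tail, no smoothing — so the system reduces to the kernel matrix alone):

```python
import numpy as np

y = np.array([0.0, 1.0, 2.0])        # 1-D data sites
d = np.sin(y)                        # data values at the sites

# Kernel matrix phi(|y_i - y_j|) with phi(r) = -r.
K = -np.abs(y[:, None] - y[None, :])
coef = np.linalg.solve(K, d)

# Evaluating at the data sites reproduces the data (interpolation property).
assert np.allclose(K @ coef, d)
```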
# --- scipy-main/scipy/interpolate/_polyint.py ---
import warnings
import numpy as np
from scipy.special import factorial
from scipy._lib._util import _asarray_validated, float_factorial
__all__ = ["KroghInterpolator", "krogh_interpolate", "BarycentricInterpolator",
"barycentric_interpolate", "approximate_taylor_polynomial"]
def _isscalar(x):
"""Check whether x is of a scalar type, or 0-dim"""
return np.isscalar(x) or hasattr(x, 'shape') and x.shape == ()
class _Interpolator1D:
"""
Common features in univariate interpolation
Deal with input data type and interpolation axis rolling. The
actual interpolator can assume the y-data is of shape (n, r) where
`n` is the number of x-points, and `r` the number of variables,
and use self.dtype as the y-data type.
Attributes
----------
_y_axis
Axis along which the interpolation goes in the original array
_y_extra_shape
Additional trailing shape of the input arrays, excluding
the interpolation axis.
dtype
Dtype of the y-data arrays. Can be set via _set_dtype, which
forces it to be float or complex.
Methods
-------
__call__
_prepare_x
_finish_y
_reshape_yi
_set_yi
_set_dtype
_evaluate
"""
__slots__ = ('_y_axis', '_y_extra_shape', 'dtype')
def __init__(self, xi=None, yi=None, axis=None):
self._y_axis = axis
self._y_extra_shape = None
self.dtype = None
if yi is not None:
self._set_yi(yi, xi=xi, axis=axis)
def __call__(self, x):
"""
Evaluate the interpolant
Parameters
----------
x : array_like
Points to evaluate the interpolant at.
Returns
-------
y : array_like
Interpolated values. Shape is determined by replacing
the interpolation axis in the original array with the shape of x.
Notes
-----
Input values `x` must be convertible to `float` values like `int`
or `float`.
"""
x, x_shape = self._prepare_x(x)
y = self._evaluate(x)
return self._finish_y(y, x_shape)
def _evaluate(self, x):
"""
Actually evaluate the value of the interpolator.
"""
raise NotImplementedError()
def _prepare_x(self, x):
"""Reshape input x array to 1-D"""
x = _asarray_validated(x, check_finite=False, as_inexact=True)
x_shape = x.shape
return x.ravel(), x_shape
def _finish_y(self, y, x_shape):
"""Reshape interpolated y back to an N-D array similar to initial y"""
y = y.reshape(x_shape + self._y_extra_shape)
if self._y_axis != 0 and x_shape != ():
nx = len(x_shape)
ny = len(self._y_extra_shape)
s = (list(range(nx, nx + self._y_axis))
+ list(range(nx)) + list(range(nx+self._y_axis, nx+ny)))
y = y.transpose(s)
return y
def _reshape_yi(self, yi, check=False):
yi = np.moveaxis(np.asarray(yi), self._y_axis, 0)
if check and yi.shape[1:] != self._y_extra_shape:
ok_shape = "{!r} + (N,) + {!r}".format(self._y_extra_shape[-self._y_axis:],
self._y_extra_shape[:-self._y_axis])
raise ValueError("Data must be of shape %s" % ok_shape)
return yi.reshape((yi.shape[0], -1))
def _set_yi(self, yi, xi=None, axis=None):
if axis is None:
axis = self._y_axis
if axis is None:
raise ValueError("no interpolation axis specified")
yi = np.asarray(yi)
shape = yi.shape
if shape == ():
shape = (1,)
if xi is not None and shape[axis] != len(xi):
raise ValueError("x and y arrays must be equal in length along "
"interpolation axis.")
self._y_axis = (axis % yi.ndim)
self._y_extra_shape = yi.shape[:self._y_axis]+yi.shape[self._y_axis+1:]
self.dtype = None
self._set_dtype(yi.dtype)
def _set_dtype(self, dtype, union=False):
if np.issubdtype(dtype, np.complexfloating) \
or np.issubdtype(self.dtype, np.complexfloating):
self.dtype = np.complex_
else:
if not union or self.dtype != np.complex_:
self.dtype = np.float_
class _Interpolator1DWithDerivatives(_Interpolator1D):
def derivatives(self, x, der=None):
"""
Evaluate many derivatives of the polynomial at the point x
Produce an array of all derivative values at the point x.
Parameters
----------
x : array_like
Point or points at which to evaluate the derivatives
der : int or None, optional
How many derivatives to extract; None for all potentially
nonzero derivatives (that is a number equal to the number
of points). This number includes the function value as 0th
derivative.
Returns
-------
d : ndarray
Array with derivatives; d[j] contains the jth derivative.
Shape of d[j] is determined by replacing the interpolation
axis in the original array with the shape of x.
Examples
--------
>>> from scipy.interpolate import KroghInterpolator
>>> KroghInterpolator([0,0,0],[1,2,3]).derivatives(0)
array([1.0,2.0,3.0])
>>> KroghInterpolator([0,0,0],[1,2,3]).derivatives([0,0])
array([[1.0,1.0],
[2.0,2.0],
[3.0,3.0]])
"""
x, x_shape = self._prepare_x(x)
y = self._evaluate_derivatives(x, der)
y = y.reshape((y.shape[0],) + x_shape + self._y_extra_shape)
if self._y_axis != 0 and x_shape != ():
nx = len(x_shape)
ny = len(self._y_extra_shape)
s = ([0] + list(range(nx+1, nx + self._y_axis+1))
+ list(range(1, nx+1)) +
list(range(nx+1+self._y_axis, nx+ny+1)))
y = y.transpose(s)
return y
def derivative(self, x, der=1):
"""
Evaluate one derivative of the polynomial at the point x
Parameters
----------
x : array_like
Point or points at which to evaluate the derivatives
der : integer, optional
Which derivative to extract. This number includes the
function value as 0th derivative.
Returns
-------
d : ndarray
Derivative interpolated at the x-points. Shape of d is
determined by replacing the interpolation axis in the
original array with the shape of x.
Notes
-----
This is computed by evaluating all derivatives up to the desired
one (using self.derivatives()) and then discarding the rest.
"""
x, x_shape = self._prepare_x(x)
y = self._evaluate_derivatives(x, der+1)
return self._finish_y(y[der], x_shape)
class KroghInterpolator(_Interpolator1DWithDerivatives):
"""
Interpolating polynomial for a set of points.
The polynomial passes through all the pairs (xi,yi). One may
additionally specify a number of derivatives at each point xi;
this is done by repeating the value xi and specifying the
derivatives as successive yi values.
Allows evaluation of the polynomial and all its derivatives.
For reasons of numerical stability, this function does not compute
the coefficients of the polynomial, although they can be obtained
by evaluating all the derivatives.
Parameters
----------
xi : array_like, shape (npoints, )
Known x-coordinates. Must be sorted in increasing order.
yi : array_like, shape (..., npoints, ...)
Known y-coordinates. When an xi occurs two or more times in
a row, the corresponding yi's represent derivative values. The length of `yi`
along the interpolation axis must be equal to the length of `xi`. Use the
`axis` parameter to select the correct axis.
axis : int, optional
Axis in the `yi` array corresponding to the x-coordinate values. Defaults to
``axis=0``.
Notes
-----
Be aware that the algorithms implemented here are not necessarily
the most numerically stable known. Moreover, even in a world of
exact computation, unless the x coordinates are chosen very
carefully - Chebyshev zeros (e.g., cos(i*pi/n)) are a good choice -
polynomial interpolation itself is a very ill-conditioned process
due to the Runge phenomenon. In general, even with well-chosen
x values, degrees higher than about thirty cause problems with
numerical instability in this code.
Based on [1]_.
References
----------
.. [1] Krogh, "Efficient Algorithms for Polynomial Interpolation
and Numerical Differentiation", 1970.
Examples
--------
To produce a polynomial that is zero at 0 and 1 and has
derivative 2 at 0, call
>>> from scipy.interpolate import KroghInterpolator
>>> KroghInterpolator([0,0,1],[0,2,0])
This constructs the quadratic 2*X - 2*X**2. The derivative condition
is indicated by the repeated zero in the xi array; the corresponding
yi values are 0, the function value, and 2, the derivative value.
For another example, given xi, yi, and a derivative ypi for each
point, appropriate arrays can be constructed as:
>>> import numpy as np
>>> rng = np.random.default_rng()
>>> xi = np.linspace(0, 1, 5)
>>> yi, ypi = rng.random((2, 5))
>>> xi_k, yi_k = np.repeat(xi, 2), np.ravel(np.dstack((yi,ypi)))
>>> KroghInterpolator(xi_k, yi_k)
To produce a vector-valued polynomial, supply a higher-dimensional
array for yi:
>>> KroghInterpolator([0,1],[[2,3],[4,5]])
This constructs a linear polynomial giving (2,3) at 0 and (4,5) at 1.
"""
def __init__(self, xi, yi, axis=0):
_Interpolator1DWithDerivatives.__init__(self, xi, yi, axis)
self.xi = np.asarray(xi)
self.yi = self._reshape_yi(yi)
self.n, self.r = self.yi.shape
if (deg := self.xi.size) > 30:
warnings.warn(f"{deg} degrees provided, degrees higher than about"
" thirty cause problems with numerical instability "
"with 'KroghInterpolator'", stacklevel=2)
c = np.zeros((self.n+1, self.r), dtype=self.dtype)
c[0] = self.yi[0]
Vk = np.zeros((self.n, self.r), dtype=self.dtype)
for k in range(1, self.n):
s = 0
while s <= k and xi[k-s] == xi[k]:
s += 1
s -= 1
Vk[0] = self.yi[k]/float_factorial(s)
for i in range(k-s):
if xi[i] == xi[k]:
raise ValueError("Elements of `xi` can't be equal.")
if s == 0:
Vk[i+1] = (c[i]-Vk[i])/(xi[i]-xi[k])
else:
Vk[i+1] = (Vk[i+1]-Vk[i])/(xi[i]-xi[k])
c[k] = Vk[k-s]
self.c = c
def _evaluate(self, x):
pi = 1
p = np.zeros((len(x), self.r), dtype=self.dtype)
p += self.c[0,np.newaxis,:]
for k in range(1, self.n):
w = x - self.xi[k-1]
pi = w*pi
p += pi[:,np.newaxis] * self.c[k]
return p
def _evaluate_derivatives(self, x, der=None):
n = self.n
r = self.r
if der is None:
der = self.n
pi = np.zeros((n, len(x)))
w = np.zeros((n, len(x)))
pi[0] = 1
p = np.zeros((len(x), self.r), dtype=self.dtype)
p += self.c[0, np.newaxis, :]
for k in range(1, n):
w[k-1] = x - self.xi[k-1]
pi[k] = w[k-1] * pi[k-1]
p += pi[k, :, np.newaxis] * self.c[k]
cn = np.zeros((max(der, n+1), len(x), r), dtype=self.dtype)
cn[:n+1, :, :] += self.c[:n+1, np.newaxis, :]
cn[0] = p
for k in range(1, n):
for i in range(1, n-k+1):
pi[i] = w[k+i-1]*pi[i-1] + pi[i]
cn[k] = cn[k] + pi[i, :, np.newaxis]*cn[k+i]
cn[k] *= float_factorial(k)
cn[n, :, :] = 0
return cn[:der]
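A quick sanity sketch of the derivative machinery above, using only the public `KroghInterpolator` API defined in this module: interpolating exact samples of a cubic reproduces the cubic, so its derivatives are recovered exactly.

```python
# Sketch: Krogh interpolation through samples of f(x) = x**3 recovers
# the cubic exactly, so derivatives match 3*x**2 and 6*x.
import numpy as np
from scipy.interpolate import KroghInterpolator

xi = np.array([0.0, 1.0, 2.0, 3.0])
P = KroghInterpolator(xi, xi**3)

value = P(2.0)                      # f(2)   = 8
first = P.derivative(2.0, der=1)    # f'(2)  = 12
second = P.derivative(2.0, der=2)   # f''(2) = 12
```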
def krogh_interpolate(xi, yi, x, der=0, axis=0):
"""
Convenience function for polynomial interpolation.
See `KroghInterpolator` for more details.
Parameters
----------
xi : array_like
Known x-coordinates.
yi : array_like
Known y-coordinates, of shape ``(xi.size, R)``. Interpreted as
vectors of length R, or scalars if R=1.
x : array_like
Point or points at which to evaluate the derivatives.
der : int or list, optional
How many derivatives to extract; None for all potentially
nonzero derivatives (that is a number equal to the number
of points), or a list of derivatives to extract. This number
includes the function value as 0th derivative.
axis : int, optional
Axis in the yi array corresponding to the x-coordinate values.
Returns
-------
d : ndarray
If the interpolator's values are R-D then the
returned array will be the number of derivatives by N by R.
If `x` is a scalar, the middle dimension will be dropped; if
the `yi` are scalars then the last dimension will be dropped.
See Also
--------
KroghInterpolator : Krogh interpolator
Notes
-----
Construction of the interpolating polynomial is a relatively expensive
process. If you want to evaluate it repeatedly consider using the class
KroghInterpolator (which is what this function uses).
Examples
--------
We can interpolate 2D observed data using krogh interpolation:
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.interpolate import krogh_interpolate
>>> x_observed = np.linspace(0.0, 10.0, 11)
>>> y_observed = np.sin(x_observed)
>>> x = np.linspace(min(x_observed), max(x_observed), num=100)
>>> y = krogh_interpolate(x_observed, y_observed, x)
>>> plt.plot(x_observed, y_observed, "o", label="observation")
>>> plt.plot(x, y, label="krogh interpolation")
>>> plt.legend()
>>> plt.show()
"""
P = KroghInterpolator(xi, yi, axis=axis)
if der == 0:
return P(x)
elif _isscalar(der):
return P.derivative(x,der=der)
else:
return P.derivatives(x,der=np.amax(der)+1)[der]
def approximate_taylor_polynomial(f,x,degree,scale,order=None):
"""
Estimate the Taylor polynomial of f at x by polynomial fitting.
Parameters
----------
f : callable
The function whose Taylor polynomial is sought. Should accept
a vector of `x` values.
x : scalar
The point at which the polynomial is to be evaluated.
degree : int
The degree of the Taylor polynomial
scale : scalar
The width of the interval to use to evaluate the Taylor polynomial.
Function values spread over a range this wide are used to fit the
polynomial. Must be chosen carefully.
order : int or None, optional
The order of the polynomial to be used in the fitting; `f` will be
evaluated ``order+1`` times. If None, use `degree`.
Returns
-------
p : poly1d instance
The Taylor polynomial (translated to the origin, so that
for example p(0)=f(x)).
Notes
-----
The appropriate choice of "scale" is a trade-off; too large and the
function differs from its Taylor polynomial too much to get a good
answer, too small and round-off errors overwhelm the higher-order terms.
The algorithm used becomes numerically unstable around order 30 even
under ideal circumstances.
Choosing order somewhat larger than degree may improve the higher-order
terms.
Examples
--------
We can calculate Taylor approximation polynomials of sin function with
various degrees:
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.interpolate import approximate_taylor_polynomial
>>> x = np.linspace(-10.0, 10.0, num=100)
>>> plt.plot(x, np.sin(x), label="sin curve")
>>> for degree in np.arange(1, 15, step=2):
... sin_taylor = approximate_taylor_polynomial(np.sin, 0, degree, 1,
... order=degree + 2)
... plt.plot(x, sin_taylor(x), label=f"degree={degree}")
>>> plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left',
... borderaxespad=0.0, shadow=True)
>>> plt.tight_layout()
>>> plt.axis([-10, 10, -10, 10])
>>> plt.show()
"""
if order is None:
order = degree
n = order+1
# Choose n points that cluster near the endpoints of the interval in
# a way that avoids the Runge phenomenon. Ensure, by including the
# endpoint or not as appropriate, that one point always falls at x
# exactly.
    xs = scale*np.cos(np.linspace(0, np.pi, n, endpoint=n % 2)) + x
P = KroghInterpolator(xs, f(xs))
d = P.derivatives(x,der=degree+1)
return np.poly1d((d/factorial(np.arange(degree+1)))[::-1])
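A brief check of the function above (a sketch; the scale and tolerance are illustrative assumptions): the degree-3 fit of `exp` around 0 should reproduce the Taylor coefficients 1, 1, 1/2, 1/6.

```python
import numpy as np
from scipy.interpolate import approximate_taylor_polynomial

# Degree-3 Taylor polynomial of exp around 0; a small scale keeps the
# fitting window narrow, and order=degree+2 sharpens the top coefficients.
p = approximate_taylor_polynomial(np.exp, 0.0, degree=3, scale=0.5, order=5)
coeffs = p.coefficients[::-1]       # ascending order: approx [1, 1, 1/2, 1/6]
```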
class BarycentricInterpolator(_Interpolator1D):
"""The interpolating polynomial for a set of points
Constructs a polynomial that passes through a given set of points.
Allows evaluation of the polynomial, efficient changing of the y
values to be interpolated, and updating by adding more x values.
For reasons of numerical stability, this function does not compute
the coefficients of the polynomial.
The values yi need to be provided before the function is
evaluated, but none of the preprocessing depends on them, so rapid
updates are possible.
Parameters
----------
xi : array_like, shape (npoints, )
1-D array of x coordinates of the points the polynomial
should pass through
yi : array_like, shape (..., npoints, ...), optional
N-D array of y coordinates of the points the polynomial should pass through.
If None, the y values will be supplied later via the `set_y` method.
The length of `yi` along the interpolation axis must be equal to the length
of `xi`. Use the ``axis`` parameter to select correct axis.
axis : int, optional
Axis in the yi array corresponding to the x-coordinate values. Defaults
to ``axis=0``.
Notes
-----
This class uses a "barycentric interpolation" method that treats
the problem as a special case of rational function interpolation.
This algorithm is quite stable, numerically, but even in a world of
exact computation, unless the x coordinates are chosen very
carefully - Chebyshev zeros (e.g., cos(i*pi/n)) are a good choice -
polynomial interpolation itself is a very ill-conditioned process
due to the Runge phenomenon.
Based on Berrut and Trefethen 2004, "Barycentric Lagrange Interpolation".
"""
def __init__(self, xi, yi=None, axis=0):
_Interpolator1D.__init__(self, xi, yi, axis)
        self.xi = np.asarray(xi, dtype=float)
self.set_yi(yi)
self.n = len(self.xi)
# See page 510 of Berrut and Trefethen 2004 for an explanation of the
# capacity scaling and the suggestion of using a random permutation of
# the input factors.
# At the moment, the permutation is not performed for xi that are
# appended later through the add_xi interface. It's not clear to me how
# to implement that and it seems that most situations that require
# these numerical stability improvements will be able to provide all
# the points to the constructor.
self._inv_capacity = 4.0 / (np.max(self.xi) - np.min(self.xi))
permute = np.random.permutation(self.n)
inv_permute = np.zeros(self.n, dtype=np.int32)
inv_permute[permute] = np.arange(self.n)
self.wi = np.zeros(self.n)
for i in range(self.n):
dist = self._inv_capacity * (self.xi[i] - self.xi[permute])
dist[inv_permute[i]] = 1.0
self.wi[i] = 1.0 / np.prod(dist)
def set_yi(self, yi, axis=None):
"""
Update the y values to be interpolated
The barycentric interpolation algorithm requires the calculation
of weights, but these depend only on the xi. The yi can be changed
at any time.
Parameters
----------
yi : array_like
The y coordinates of the points the polynomial should pass through.
If None, the y values will be supplied later.
axis : int, optional
Axis in the yi array corresponding to the x-coordinate values.
"""
if yi is None:
self.yi = None
return
self._set_yi(yi, xi=self.xi, axis=axis)
self.yi = self._reshape_yi(yi)
self.n, self.r = self.yi.shape
def add_xi(self, xi, yi=None):
"""
Add more x values to the set to be interpolated
The barycentric interpolation algorithm allows easy updating by
adding more points for the polynomial to pass through.
Parameters
----------
xi : array_like
The x coordinates of the points that the polynomial should pass
through.
yi : array_like, optional
The y coordinates of the points the polynomial should pass through.
Should have shape ``(xi.size, R)``; if R > 1 then the polynomial is
vector-valued.
If `yi` is not given, the y values will be supplied later. `yi`
should be given if and only if the interpolator has y values
specified.
"""
if yi is not None:
if self.yi is None:
raise ValueError("No previous yi value to update!")
yi = self._reshape_yi(yi, check=True)
self.yi = np.vstack((self.yi,yi))
else:
if self.yi is not None:
raise ValueError("No update to yi provided!")
old_n = self.n
self.xi = np.concatenate((self.xi,xi))
self.n = len(self.xi)
self.wi **= -1
old_wi = self.wi
self.wi = np.zeros(self.n)
self.wi[:old_n] = old_wi
for j in range(old_n, self.n):
self.wi[:j] *= self._inv_capacity * (self.xi[j]-self.xi[:j])
self.wi[j] = np.multiply.reduce(
self._inv_capacity * (self.xi[:j]-self.xi[j])
)
self.wi **= -1
def __call__(self, x):
"""Evaluate the interpolating polynomial at the points x
Parameters
----------
x : array_like
Points to evaluate the interpolant at.
Returns
-------
y : array_like
Interpolated values. Shape is determined by replacing
the interpolation axis in the original array with the shape of x.
Notes
-----
Currently the code computes an outer product between x and the
weights, that is, it constructs an intermediate array of size
N by len(x), where N is the degree of the polynomial.
"""
return _Interpolator1D.__call__(self, x)
def _evaluate(self, x):
if x.size == 0:
p = np.zeros((0, self.r), dtype=self.dtype)
else:
c = x[..., np.newaxis] - self.xi
z = c == 0
c[z] = 1
c = self.wi/c
with np.errstate(divide='ignore'):
p = np.dot(c, self.yi) / np.sum(c, axis=-1)[..., np.newaxis]
# Now fix where x==some xi
r = np.nonzero(z)
if len(r) == 1: # evaluation at a scalar
if len(r[0]) > 0: # equals one of the points
p = self.yi[r[0][0]]
else:
p[r[:-1]] = self.yi[r[-1]]
return p
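The split described in the class docstring, weights computed once from `xi` while `yi` stays swappable, can be sketched with the public API (the sample values here are an illustrative assumption):

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

xi = [0.0, 1.0, 2.0]
B = BarycentricInterpolator(xi, [0.0, 1.0, 4.0])   # samples of x**2
mid = B(1.5)                 # quadratic recovered exactly: 2.25
B.set_yi([0.0, 2.0, 8.0])    # reuse the same weights for 2*x**2
mid2 = B(1.5)                # 4.5
```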
def barycentric_interpolate(xi, yi, x, axis=0):
"""
Convenience function for polynomial interpolation.
Constructs a polynomial that passes through a given set of points,
then evaluates the polynomial. For reasons of numerical stability,
this function does not compute the coefficients of the polynomial.
This function uses a "barycentric interpolation" method that treats
the problem as a special case of rational function interpolation.
This algorithm is quite stable, numerically, but even in a world of
exact computation, unless the `x` coordinates are chosen very
carefully - Chebyshev zeros (e.g., cos(i*pi/n)) are a good choice -
polynomial interpolation itself is a very ill-conditioned process
due to the Runge phenomenon.
Parameters
----------
xi : array_like
1-D array of x coordinates of the points the polynomial should
pass through
yi : array_like
The y coordinates of the points the polynomial should pass through.
x : scalar or array_like
Points to evaluate the interpolator at.
axis : int, optional
Axis in the yi array corresponding to the x-coordinate values.
Returns
-------
y : scalar or array_like
Interpolated values. Shape is determined by replacing
the interpolation axis in the original array with the shape of x.
See Also
--------
    BarycentricInterpolator : Barycentric interpolator
Notes
-----
Construction of the interpolation weights is a relatively slow process.
If you want to call this many times with the same xi (but possibly
varying yi or x) you should use the class `BarycentricInterpolator`.
This is what this function uses internally.
Examples
--------
We can interpolate 2D observed data using barycentric interpolation:
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.interpolate import barycentric_interpolate
>>> x_observed = np.linspace(0.0, 10.0, 11)
>>> y_observed = np.sin(x_observed)
>>> x = np.linspace(min(x_observed), max(x_observed), num=100)
>>> y = barycentric_interpolate(x_observed, y_observed, x)
>>> plt.plot(x_observed, y_observed, "o", label="observation")
>>> plt.plot(x, y, label="barycentric interpolation")
>>> plt.legend()
>>> plt.show()
"""
return BarycentricInterpolator(xi, yi, axis=axis)(x)
scipy-main/scipy/interpolate/_ndbspline.py
import operator
import numpy as np
from math import prod
from . import _bspl # type: ignore
__all__ = ["NdBSpline"]
def _get_dtype(dtype):
"""Return np.complex128 for complex dtypes, np.float64 otherwise."""
if np.issubdtype(dtype, np.complexfloating):
        return np.complex128
    else:
        return np.float64
class NdBSpline:
"""Tensor product spline object.
The value at point ``xp = (x1, x2, ..., xN)`` is evaluated as a linear
combination of products of one-dimensional b-splines in each of the ``N``
dimensions::
c[i1, i2, ..., iN] * B(x1; i1, t1) * B(x2; i2, t2) * ... * B(xN; iN, tN)
Here ``B(x; i, t)`` is the ``i``-th b-spline defined by the knot vector
``t`` evaluated at ``x``.
Parameters
----------
t : tuple of 1D ndarrays
knot vectors in directions 1, 2, ... N,
``len(t[i]) == n[i] + k + 1``
c : ndarray, shape (n1, n2, ..., nN, ...)
b-spline coefficients
k : int or length-d tuple of integers
spline degrees.
A single integer is interpreted as having this degree for
all dimensions.
extrapolate : bool, optional
Whether to extrapolate out-of-bounds inputs, or return `nan`.
Default is to extrapolate.
Attributes
----------
t : tuple of ndarrays
Knots vectors.
c : ndarray
        Coefficients of the tensor-product spline.
k : tuple of integers
Degrees for each dimension.
extrapolate : bool, optional
Whether to extrapolate or return nans for out-of-bounds inputs.
Defaults to true.
Methods
-------
__call__
See Also
--------
BSpline : a one-dimensional B-spline object
NdPPoly : an N-dimensional piecewise tensor product polynomial
"""
def __init__(self, t, c, k, *, extrapolate=None):
ndim = len(t)
try:
len(k)
except TypeError:
# make k a tuple
k = (k,)*ndim
if len(k) != ndim:
raise ValueError(f"{len(t) = } != {len(k) = }.")
self.k = tuple(operator.index(ki) for ki in k)
self.t = tuple(np.ascontiguousarray(ti, dtype=float) for ti in t)
self.c = np.asarray(c)
if extrapolate is None:
extrapolate = True
self.extrapolate = bool(extrapolate)
for d in range(ndim):
td = self.t[d]
kd = self.k[d]
n = td.shape[0] - kd - 1
if kd < 0:
raise ValueError(f"Spline degree in dimension {d} cannot be"
f" negative.")
if td.ndim != 1:
raise ValueError(f"Knot vector in dimension {d} must be"
f" one-dimensional.")
if n < kd + 1:
raise ValueError(f"Need at least {2*kd + 2} knots for degree"
f" {kd} in dimension {d}.")
if (np.diff(td) < 0).any():
raise ValueError(f"Knots in dimension {d} must be in a"
f" non-decreasing order.")
if len(np.unique(td[kd:n + 1])) < 2:
raise ValueError(f"Need at least two internal knots in"
f" dimension {d}.")
if not np.isfinite(td).all():
raise ValueError(f"Knots in dimension {d} should not have"
f" nans or infs.")
if self.c.ndim < ndim:
raise ValueError(f"Coefficients must be at least"
f" {d}-dimensional.")
if self.c.shape[d] != n:
raise ValueError(f"Knots, coefficients and degree in dimension"
f" {d} are inconsistent:"
f" got {self.c.shape[d]} coefficients for"
f" {len(td)} knots, need at least {n} for"
f" k={k}.")
dt = _get_dtype(self.c.dtype)
self.c = np.ascontiguousarray(self.c, dtype=dt)
def __call__(self, xi, *, nu=None, extrapolate=None):
"""Evaluate the tensor product b-spline at ``xi``.
Parameters
----------
xi : array_like, shape(..., ndim)
The coordinates to evaluate the interpolator at.
This can be a list or tuple of ndim-dimensional points
or an array with the shape (num_points, ndim).
        nu : array_like, optional, shape (ndim,)
            Orders of derivatives to evaluate. Each must be non-negative.
            Defaults to the zeroth derivative.
        extrapolate : bool, optional
            Whether to extrapolate based on first and last intervals in each
            dimension, or return `nan`. Defaults to ``self.extrapolate``.
Returns
-------
values : ndarray, shape ``xi.shape[:-1] + self.c.shape[ndim:]``
Interpolated values at ``xi``
"""
ndim = len(self.t)
if extrapolate is None:
extrapolate = self.extrapolate
extrapolate = bool(extrapolate)
if nu is None:
nu = np.zeros((ndim,), dtype=np.intc)
else:
nu = np.asarray(nu, dtype=np.intc)
if nu.ndim != 1 or nu.shape[0] != ndim:
raise ValueError("invalid number of derivative orders nu")
# prepare xi : shape (..., m1, ..., md) -> (1, m1, ..., md)
xi = np.asarray(xi, dtype=float)
xi_shape = xi.shape
xi = xi.reshape(-1, xi_shape[-1])
xi = np.ascontiguousarray(xi)
if xi_shape[-1] != ndim:
raise ValueError(f"Shapes: xi.shape={xi_shape} and ndim={ndim}")
# prepare k & t
_k = np.asarray(self.k)
# pack the knots into a single array
len_t = [len(ti) for ti in self.t]
_t = np.empty((ndim, max(len_t)), dtype=float)
_t.fill(np.nan)
for d in range(ndim):
_t[d, :len(self.t[d])] = self.t[d]
# tabulate the flat indices for iterating over the (k+1)**ndim subarray
shape = tuple(kd + 1 for kd in self.k)
indices = np.unravel_index(np.arange(prod(shape)), shape)
_indices_k1d = np.asarray(indices, dtype=np.intp).T
# prepare the coefficients: flatten the trailing dimensions
c1 = self.c.reshape(self.c.shape[:ndim] + (-1,))
c1r = c1.ravel()
# XXX: is c1r guaranteed to be C-contiguous? Cython code assumes it is.
# replacement for np.ravel_multi_index for indexing of `c1`:
_strides_c1 = np.asarray([s // c1.dtype.itemsize
for s in c1.strides], dtype=np.intp)
num_c_tr = c1.shape[-1] # # of trailing coefficients
out = np.empty(xi.shape[:-1] + (num_c_tr,), dtype=c1.dtype)
_bspl.evaluate_ndbspline(xi,
_t,
_k,
nu,
extrapolate,
c1r,
num_c_tr,
_strides_c1,
_indices_k1d,
out,)
return out.reshape(xi_shape[:-1] + self.c.shape[ndim:])
scipy-main/scipy/interpolate/__init__.py
"""
========================================
Interpolation (:mod:`scipy.interpolate`)
========================================
.. currentmodule:: scipy.interpolate
Sub-package for objects used in interpolation.
As listed below, this sub-package contains spline functions and classes,
1-D and multidimensional (univariate and multivariate)
interpolation classes, Lagrange and Taylor polynomial interpolators, and
wrappers for `FITPACK <http://www.netlib.org/dierckx/>`__
and DFITPACK functions.
Univariate interpolation
========================
.. autosummary::
:toctree: generated/
interp1d
BarycentricInterpolator
KroghInterpolator
barycentric_interpolate
krogh_interpolate
pchip_interpolate
CubicHermiteSpline
PchipInterpolator
Akima1DInterpolator
CubicSpline
PPoly
BPoly
Multivariate interpolation
==========================
Unstructured data:
.. autosummary::
:toctree: generated/
griddata
LinearNDInterpolator
NearestNDInterpolator
CloughTocher2DInterpolator
RBFInterpolator
Rbf
interp2d
For data on a grid:
.. autosummary::
:toctree: generated/
interpn
RegularGridInterpolator
RectBivariateSpline
.. seealso::
`scipy.ndimage.map_coordinates`
Tensor product polynomials:
.. autosummary::
:toctree: generated/
NdPPoly
NdBSpline
1-D Splines
===========
.. autosummary::
:toctree: generated/
BSpline
make_interp_spline
make_lsq_spline
make_smoothing_spline
Functional interface to FITPACK routines:
.. autosummary::
:toctree: generated/
splrep
splprep
splev
splint
sproot
spalde
splder
splantider
insert
Object-oriented FITPACK interface:
.. autosummary::
:toctree: generated/
UnivariateSpline
InterpolatedUnivariateSpline
LSQUnivariateSpline
2-D Splines
===========
For data on a grid:
.. autosummary::
:toctree: generated/
RectBivariateSpline
RectSphereBivariateSpline
For unstructured data:
.. autosummary::
:toctree: generated/
BivariateSpline
SmoothBivariateSpline
SmoothSphereBivariateSpline
LSQBivariateSpline
LSQSphereBivariateSpline
Low-level interface to FITPACK functions:
.. autosummary::
:toctree: generated/
bisplrep
bisplev
Additional tools
================
.. autosummary::
:toctree: generated/
lagrange
approximate_taylor_polynomial
pade
.. seealso::
`scipy.ndimage.map_coordinates`,
`scipy.ndimage.spline_filter`,
`scipy.signal.resample`,
`scipy.signal.bspline`,
`scipy.signal.gauss_spline`,
`scipy.signal.qspline1d`,
`scipy.signal.cspline1d`,
`scipy.signal.qspline1d_eval`,
`scipy.signal.cspline1d_eval`,
`scipy.signal.qspline2d`,
`scipy.signal.cspline2d`.
``pchip`` is an alias of `PchipInterpolator` for backward compatibility
(should not be used in new code).
"""
from ._interpolate import *
from ._fitpack_py import *
# New interface to fitpack library:
from ._fitpack2 import *
from ._rbf import Rbf
from ._rbfinterp import *
from ._polyint import *
from ._cubic import *
from ._ndgriddata import *
from ._bsplines import *
from ._pade import *
from ._rgi import *
from ._ndbspline import NdBSpline
# Deprecated namespaces, to be removed in v2.0.0
from . import fitpack, fitpack2, interpolate, ndgriddata, polyint, rbf
__all__ = [s for s in dir() if not s.startswith('_')]
from scipy._lib._testutils import PytestTester
test = PytestTester(__name__)
del PytestTester
# Backward compatibility
pchip = PchipInterpolator
scipy-main/scipy/interpolate/_cubic.py
"""Interpolation algorithms using piecewise cubic polynomials."""
import numpy as np
from . import PPoly
from ._polyint import _isscalar
from scipy.linalg import solve_banded, solve
__all__ = ["CubicHermiteSpline", "PchipInterpolator", "pchip_interpolate",
"Akima1DInterpolator", "CubicSpline"]
def prepare_input(x, y, axis, dydx=None):
"""Prepare input for cubic spline interpolators.
All data are converted to numpy arrays and checked for correctness.
Axes equal to `axis` of arrays `y` and `dydx` are moved to be the 0th
axis. The value of `axis` is converted to lie in
[0, number of dimensions of `y`).
"""
x, y = map(np.asarray, (x, y))
if np.issubdtype(x.dtype, np.complexfloating):
raise ValueError("`x` must contain real values.")
x = x.astype(float)
if np.issubdtype(y.dtype, np.complexfloating):
dtype = complex
else:
dtype = float
if dydx is not None:
dydx = np.asarray(dydx)
if y.shape != dydx.shape:
raise ValueError("The shapes of `y` and `dydx` must be identical.")
if np.issubdtype(dydx.dtype, np.complexfloating):
dtype = complex
dydx = dydx.astype(dtype, copy=False)
y = y.astype(dtype, copy=False)
axis = axis % y.ndim
if x.ndim != 1:
raise ValueError("`x` must be 1-dimensional.")
if x.shape[0] < 2:
raise ValueError("`x` must contain at least 2 elements.")
if x.shape[0] != y.shape[axis]:
raise ValueError("The length of `y` along `axis`={} doesn't "
"match the length of `x`".format(axis))
if not np.all(np.isfinite(x)):
raise ValueError("`x` must contain only finite values.")
if not np.all(np.isfinite(y)):
raise ValueError("`y` must contain only finite values.")
if dydx is not None and not np.all(np.isfinite(dydx)):
raise ValueError("`dydx` must contain only finite values.")
dx = np.diff(x)
if np.any(dx <= 0):
raise ValueError("`x` must be strictly increasing sequence.")
y = np.moveaxis(y, axis, 0)
if dydx is not None:
dydx = np.moveaxis(dydx, axis, 0)
return x, dx, y, axis, dydx
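The axis normalization performed above can be illustrated in isolation (a plain-NumPy sketch, not part of the module): `np.moveaxis` brings the interpolation axis to the front so that sample `i` is simply `y0[i]`.

```python
import numpy as np

# y varies along axis=1; moving that axis to the front lets downstream
# spline code address the i-th data site as y0[i].
y = np.arange(24).reshape(2, 3, 4)
axis = 1 % y.ndim                    # same normalization as prepare_input
y0 = np.moveaxis(y, axis, 0)         # shape (3, 2, 4)
```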
class CubicHermiteSpline(PPoly):
"""Piecewise-cubic interpolator matching values and first derivatives.
The result is represented as a `PPoly` instance.
Parameters
----------
x : array_like, shape (n,)
1-D array containing values of the independent variable.
Values must be real, finite and in strictly increasing order.
y : array_like
Array containing values of the dependent variable. It can have
arbitrary number of dimensions, but the length along ``axis``
(see below) must match the length of ``x``. Values must be finite.
dydx : array_like
Array containing derivatives of the dependent variable. It can have
arbitrary number of dimensions, but the length along ``axis``
(see below) must match the length of ``x``. Values must be finite.
axis : int, optional
Axis along which `y` is assumed to be varying. Meaning that for
``x[i]`` the corresponding values are ``np.take(y, i, axis=axis)``.
Default is 0.
extrapolate : {bool, 'periodic', None}, optional
If bool, determines whether to extrapolate to out-of-bounds points
based on first and last intervals, or to return NaNs. If 'periodic',
periodic extrapolation is used. If None (default), it is set to True.
Attributes
----------
x : ndarray, shape (n,)
Breakpoints. The same ``x`` which was passed to the constructor.
c : ndarray, shape (4, n-1, ...)
Coefficients of the polynomials on each segment. The trailing
dimensions match the dimensions of `y`, excluding ``axis``.
For example, if `y` is 1-D, then ``c[k, i]`` is a coefficient for
``(x-x[i])**(3-k)`` on the segment between ``x[i]`` and ``x[i+1]``.
axis : int
Interpolation axis. The same axis which was passed to the
constructor.
Methods
-------
__call__
derivative
antiderivative
integrate
roots
See Also
--------
Akima1DInterpolator : Akima 1D interpolator.
PchipInterpolator : PCHIP 1-D monotonic cubic interpolator.
CubicSpline : Cubic spline data interpolator.
PPoly : Piecewise polynomial in terms of coefficients and breakpoints
Notes
-----
If you want to create a higher-order spline matching higher-order
derivatives, use `BPoly.from_derivatives`.
References
----------
.. [1] `Cubic Hermite spline
<https://en.wikipedia.org/wiki/Cubic_Hermite_spline>`_
on Wikipedia.
"""
def __init__(self, x, y, dydx, axis=0, extrapolate=None):
if extrapolate is None:
extrapolate = True
x, dx, y, axis, dydx = prepare_input(x, y, axis, dydx)
dxr = dx.reshape([dx.shape[0]] + [1] * (y.ndim - 1))
slope = np.diff(y, axis=0) / dxr
t = (dydx[:-1] + dydx[1:] - 2 * slope) / dxr
c = np.empty((4, len(x) - 1) + y.shape[1:], dtype=t.dtype)
c[0] = t / dxr
c[1] = (slope - dydx[:-1]) / dxr - t
c[2] = dydx[:-1]
c[3] = y[:-1]
super().__init__(c, x, extrapolate=extrapolate)
self.axis = axis
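A short sketch of the construction above: feeding exact values and first derivatives of a cubic makes each Hermite segment reproduce that cubic.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

x = np.array([0.0, 1.0, 2.0])
y = x**3
dydx = 3 * x**2                     # exact first derivatives
spl = CubicHermiteSpline(x, y, dydx)

val = spl(1.5)                      # 1.5**3 = 3.375
der = spl(1.5, nu=1)                # 3 * 1.5**2 = 6.75
```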
class PchipInterpolator(CubicHermiteSpline):
r"""PCHIP 1-D monotonic cubic interpolation.
``x`` and ``y`` are arrays of values used to approximate some function f,
with ``y = f(x)``. The interpolant uses monotonic cubic splines
to find the value of new points. (PCHIP stands for Piecewise Cubic
Hermite Interpolating Polynomial).
Parameters
----------
x : ndarray, shape (npoints, )
A 1-D array of monotonically increasing real values. ``x`` cannot
include duplicate values (otherwise f is overspecified)
y : ndarray, shape (..., npoints, ...)
A N-D array of real values. ``y``'s length along the interpolation
axis must be equal to the length of ``x``. Use the ``axis``
parameter to select the interpolation axis.
axis : int, optional
Axis in the ``y`` array corresponding to the x-coordinate values. Defaults
to ``axis=0``.
extrapolate : bool, optional
Whether to extrapolate to out-of-bounds points based on first
and last intervals, or to return NaNs.
Methods
-------
__call__
derivative
antiderivative
roots
See Also
--------
CubicHermiteSpline : Piecewise-cubic interpolator.
Akima1DInterpolator : Akima 1D interpolator.
CubicSpline : Cubic spline data interpolator.
PPoly : Piecewise polynomial in terms of coefficients and breakpoints.
Notes
-----
The interpolator preserves monotonicity in the interpolation data and does
not overshoot if the data is not smooth.
The first derivatives are guaranteed to be continuous, but the second
derivatives may jump at :math:`x_k`.
Determines the derivatives at the points :math:`x_k`, :math:`f'_k`,
by using PCHIP algorithm [1]_.
Let :math:`h_k = x_{k+1} - x_k`, and :math:`d_k = (y_{k+1} - y_k) / h_k`
are the slopes at internal points :math:`x_k`.
If the signs of :math:`d_k` and :math:`d_{k-1}` are different or either of
them equals zero, then :math:`f'_k = 0`. Otherwise, it is given by the
weighted harmonic mean
.. math::
\frac{w_1 + w_2}{f'_k} = \frac{w_1}{d_{k-1}} + \frac{w_2}{d_k}
where :math:`w_1 = 2 h_k + h_{k-1}` and :math:`w_2 = h_k + 2 h_{k-1}`.
The end slopes are set using a one-sided scheme [2]_.
References
----------
.. [1] F. N. Fritsch and J. Butland,
A method for constructing local
monotone piecewise cubic interpolants,
SIAM J. Sci. Comput., 5(2), 300-304 (1984).
:doi:`10.1137/0905021`.
.. [2] see, e.g., C. Moler, Numerical Computing with Matlab, 2004.
:doi:`10.1137/1.9780898717952`
"""
def __init__(self, x, y, axis=0, extrapolate=None):
x, _, y, axis, _ = prepare_input(x, y, axis)
xp = x.reshape((x.shape[0],) + (1,)*(y.ndim-1))
dk = self._find_derivatives(xp, y)
super().__init__(x, y, dk, axis=0, extrapolate=extrapolate)
self.axis = axis
@staticmethod
def _edge_case(h0, h1, m0, m1):
# one-sided three-point estimate for the derivative
d = ((2*h0 + h1)*m0 - h0*m1) / (h0 + h1)
# try to preserve shape
mask = np.sign(d) != np.sign(m0)
mask2 = (np.sign(m0) != np.sign(m1)) & (np.abs(d) > 3.*np.abs(m0))
mmm = (~mask) & mask2
d[mask] = 0.
d[mmm] = 3.*m0[mmm]
return d
@staticmethod
def _find_derivatives(x, y):
        # Determine the derivatives at the points y_k, d_k, by using
        # the PCHIP algorithm:
# We choose the derivatives at the point x_k by
# Let m_k be the slope of the kth segment (between k and k+1)
# If m_k=0 or m_{k-1}=0 or sgn(m_k) != sgn(m_{k-1}) then d_k == 0
# else use weighted harmonic mean:
# w_1 = 2h_k + h_{k-1}, w_2 = h_k + 2h_{k-1}
# 1/d_k = 1/(w_1 + w_2)*(w_1 / m_k + w_2 / m_{k-1})
# where h_k is the spacing between x_k and x_{k+1}
y_shape = y.shape
if y.ndim == 1:
# So that _edge_case doesn't end up assigning to scalars
x = x[:, None]
y = y[:, None]
hk = x[1:] - x[:-1]
mk = (y[1:] - y[:-1]) / hk
if y.shape[0] == 2:
# edge case: only have two points, use linear interpolation
dk = np.zeros_like(y)
dk[0] = mk
dk[1] = mk
return dk.reshape(y_shape)
smk = np.sign(mk)
condition = (smk[1:] != smk[:-1]) | (mk[1:] == 0) | (mk[:-1] == 0)
w1 = 2*hk[1:] + hk[:-1]
w2 = hk[1:] + 2*hk[:-1]
# values where division by zero occurs will be excluded
# by 'condition' afterwards
with np.errstate(divide='ignore', invalid='ignore'):
whmean = (w1/mk[:-1] + w2/mk[1:]) / (w1 + w2)
dk = np.zeros_like(y)
dk[1:-1][condition] = 0.0
dk[1:-1][~condition] = 1.0 / whmean[~condition]
# special case endpoints, as suggested in
# Cleve Moler, Numerical Computing with MATLAB, Chap 3.6 (pchiptx.m)
dk[0] = PchipInterpolator._edge_case(hk[0], hk[1], mk[0], mk[1])
dk[-1] = PchipInterpolator._edge_case(hk[-1], hk[-2], mk[-1], mk[-2])
return dk.reshape(y_shape)
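The weighted-harmonic-mean rule from the class docstring can be checked numerically against the interpolator itself (the sample data here are an illustrative assumption):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

x = np.array([0.0, 1.0, 3.0])
y = np.array([0.0, 1.0, 7.0])        # segment slopes: d0 = 1, d1 = 3
h0, h1 = np.diff(x)
d0, d1 = np.diff(y) / np.diff(x)

# Weighted harmonic mean, as in the PchipInterpolator notes
w1 = 2*h1 + h0
w2 = h1 + 2*h0
fp = (w1 + w2) / (w1/d0 + w2/d1)     # 27/19

fp_spl = PchipInterpolator(x, y).derivative()(1.0)
```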
def pchip_interpolate(xi, yi, x, der=0, axis=0):
"""
Convenience function for pchip interpolation.
xi and yi are arrays of values used to approximate some function f,
with ``yi = f(xi)``. The interpolant uses monotonic cubic splines
to find the value of new points x and the derivatives there.
See `scipy.interpolate.PchipInterpolator` for details.
Parameters
----------
xi : array_like
A sorted list of x-coordinates, of length N.
yi : array_like
A 1-D array of real values. `yi`'s length along the interpolation
axis must be equal to the length of `xi`. If N-D array, use axis
parameter to select correct axis.
x : scalar or array_like
Of length M.
der : int or list, optional
Derivatives to extract. The 0th derivative can be included to
return the function value.
axis : int, optional
Axis in the yi array corresponding to the x-coordinate values.
Returns
-------
y : scalar or array_like
The result, of length R or length M or M by R.
See Also
--------
PchipInterpolator : PCHIP 1-D monotonic cubic interpolator.
Examples
--------
We can interpolate 2D observed data using pchip interpolation:
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.interpolate import pchip_interpolate
>>> x_observed = np.linspace(0.0, 10.0, 11)
>>> y_observed = np.sin(x_observed)
>>> x = np.linspace(min(x_observed), max(x_observed), num=100)
>>> y = pchip_interpolate(x_observed, y_observed, x)
>>> plt.plot(x_observed, y_observed, "o", label="observation")
>>> plt.plot(x, y, label="pchip interpolation")
>>> plt.legend()
>>> plt.show()
"""
P = PchipInterpolator(xi, yi, axis=axis)
if der == 0:
return P(x)
elif _isscalar(der):
return P.derivative(der)(x)
else:
return [P.derivative(nu)(x) for nu in der]
class Akima1DInterpolator(CubicHermiteSpline):
"""
Akima interpolator
Fit piecewise cubic polynomials, given vectors x and y. The interpolation
method by Akima uses a continuously differentiable sub-spline built from
piecewise cubic polynomials. The resultant curve passes through the given
data points and will appear smooth and natural.
Parameters
----------
x : ndarray, shape (npoints, )
1-D array of monotonically increasing real values.
y : ndarray, shape (..., npoints, ...)
N-D array of real values. The length of ``y`` along the interpolation axis
must be equal to the length of ``x``. Use the ``axis`` parameter to
select the interpolation axis.
axis : int, optional
Axis in the ``y`` array corresponding to the x-coordinate values. Defaults
to ``axis=0``.
Methods
-------
__call__
derivative
antiderivative
roots
See Also
--------
PchipInterpolator : PCHIP 1-D monotonic cubic interpolator.
CubicSpline : Cubic spline data interpolator.
PPoly : Piecewise polynomial in terms of coefficients and breakpoints
Notes
-----
.. versionadded:: 0.14
Use only for precise data, as the fitted curve passes through the given
points exactly. This routine is useful for plotting a pleasingly smooth
curve through a few given points.
References
----------
.. [1] Hiroshi Akima, "A new method of interpolation and smooth curve
       fitting based on local procedures", J. ACM, October 1970, 17(4),
       589-602.
"""
def __init__(self, x, y, axis=0):
# Original implementation in MATLAB by N. Shamsundar (BSD licensed), see
# https://www.mathworks.com/matlabcentral/fileexchange/1814-akima-interpolation
x, dx, y, axis, _ = prepare_input(x, y, axis)
# determine slopes between breakpoints
m = np.empty((x.size + 3, ) + y.shape[1:])
dx = dx[(slice(None), ) + (None, ) * (y.ndim - 1)]
m[2:-2] = np.diff(y, axis=0) / dx
# add two additional points on the left ...
m[1] = 2. * m[2] - m[3]
m[0] = 2. * m[1] - m[2]
# ... and on the right
m[-2] = 2. * m[-3] - m[-4]
m[-1] = 2. * m[-2] - m[-3]
# if m1 == m2 != m3 == m4, the slope at the breakpoint is not
# defined. This is the fill value:
t = .5 * (m[3:] + m[:-3])
# get the denominator of the slope t
dm = np.abs(np.diff(m, axis=0))
f1 = dm[2:]
f2 = dm[:-2]
f12 = f1 + f2
# This is the mask of where the slope at the breakpoint is defined:
ind = np.nonzero(f12 > 1e-9 * np.max(f12, initial=-np.inf))
x_ind, y_ind = ind[0], ind[1:]
# Set the slope at breakpoint
t[ind] = (f1[ind] * m[(x_ind + 1,) + y_ind] +
f2[ind] * m[(x_ind + 2,) + y_ind]) / f12[ind]
super().__init__(x, y, t, axis=0, extrapolate=False)
self.axis = axis
def extend(self, c, x, right=True):
raise NotImplementedError("Extending a 1-D Akima interpolator is not "
"yet implemented")
# These are inherited from PPoly, but they do not produce an Akima
# interpolator. Hence stub them out.
@classmethod
def from_spline(cls, tck, extrapolate=None):
raise NotImplementedError("This method does not make sense for "
"an Akima interpolator.")
@classmethod
def from_bernstein_basis(cls, bp, extrapolate=None):
raise NotImplementedError("This method does not make sense for "
"an Akima interpolator.")
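As a quick check of the slope logic above (a sketch assuming SciPy is importable): when the data lie on a straight line, every local slope `m` is equal, `f12` is zero everywhere, and the fill value `t = .5 * (m[3:] + m[:-3])` collapses to the common slope, so the fit is exact.

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator

# All breakpoint slopes are equal for linear data, so the interpolant
# reproduces the line and its derivative (up to roundoff).
x = np.arange(5.0)
y = 3.0 * x + 1.0
ak = Akima1DInterpolator(x, y)
mid_value = float(ak(1.5))            # expected 3 * 1.5 + 1 = 5.5
mid_slope = float(ak.derivative()(2.2))  # expected 3.0
```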
class CubicSpline(CubicHermiteSpline):
"""Cubic spline data interpolator.
Interpolate data with a piecewise cubic polynomial which is twice
continuously differentiable [1]_. The result is represented as a `PPoly`
instance with breakpoints matching the given data.
Parameters
----------
x : array_like, shape (n,)
1-D array containing values of the independent variable.
Values must be real, finite and in strictly increasing order.
y : array_like
Array containing values of the dependent variable. It can have
arbitrary number of dimensions, but the length along ``axis``
(see below) must match the length of ``x``. Values must be finite.
axis : int, optional
Axis along which `y` is assumed to be varying. Meaning that for
``x[i]`` the corresponding values are ``np.take(y, i, axis=axis)``.
Default is 0.
bc_type : string or 2-tuple, optional
Boundary condition type. Two additional equations, given by the
boundary conditions, are required to determine all coefficients of
polynomials on each segment [2]_.
If `bc_type` is a string, then the specified condition will be applied
at both ends of a spline. Available conditions are:
* 'not-a-knot' (default): The first and second segment at a curve end
are the same polynomial. It is a good default when there is no
information on boundary conditions.
* 'periodic': The interpolated function is assumed to be periodic
of period ``x[-1] - x[0]``. The first and last value of `y` must be
identical: ``y[0] == y[-1]``. This boundary condition will result in
``y'[0] == y'[-1]`` and ``y''[0] == y''[-1]``.
* 'clamped': The first derivatives at curve ends are zero. Assuming
a 1D `y`, ``bc_type=((1, 0.0), (1, 0.0))`` is the same condition.
* 'natural': The second derivatives at curve ends are zero. Assuming
a 1D `y`, ``bc_type=((2, 0.0), (2, 0.0))`` is the same condition.
If `bc_type` is a 2-tuple, the first and the second value will be
applied at the curve start and end respectively. The tuple values can
be one of the previously mentioned strings (except 'periodic') or a
tuple `(order, deriv_values)`, which allows specifying arbitrary
derivatives at curve ends:
* `order`: the derivative order, 1 or 2.
* `deriv_value`: array_like containing derivative values, shape must
be the same as `y`, excluding ``axis`` dimension. For example, if
`y` is 1-D, then `deriv_value` must be a scalar. If `y` is 3-D with
the shape (n0, n1, n2) and axis=2, then `deriv_value` must be 2-D
and have the shape (n0, n1).
extrapolate : {bool, 'periodic', None}, optional
If bool, determines whether to extrapolate to out-of-bounds points
based on first and last intervals, or to return NaNs. If 'periodic',
periodic extrapolation is used. If None (default), ``extrapolate`` is
set to 'periodic' for ``bc_type='periodic'`` and to True otherwise.
Attributes
----------
x : ndarray, shape (n,)
Breakpoints. The same ``x`` which was passed to the constructor.
c : ndarray, shape (4, n-1, ...)
Coefficients of the polynomials on each segment. The trailing
dimensions match the dimensions of `y`, excluding ``axis``.
For example, if `y` is 1-d, then ``c[k, i]`` is a coefficient for
``(x-x[i])**(3-k)`` on the segment between ``x[i]`` and ``x[i+1]``.
axis : int
Interpolation axis. The same axis which was passed to the
constructor.
Methods
-------
__call__
derivative
antiderivative
integrate
roots
See Also
--------
Akima1DInterpolator : Akima 1D interpolator.
PchipInterpolator : PCHIP 1-D monotonic cubic interpolator.
PPoly : Piecewise polynomial in terms of coefficients and breakpoints.
Notes
-----
Parameters `bc_type` and ``extrapolate`` work independently, i.e. the
former controls only construction of a spline, and the latter only
evaluation.
When a boundary condition is 'not-a-knot' and n = 2, it is replaced by
a condition that the first derivative is equal to the linear interpolant
slope. When both boundary conditions are 'not-a-knot' and n = 3, the
solution is sought as a parabola passing through given points.
When 'not-a-knot' boundary conditions are applied to both ends, the
resulting spline will be the same as returned by `splrep` (with ``s=0``)
and `InterpolatedUnivariateSpline`, but these two methods use a
representation in B-spline basis.
.. versionadded:: 0.18.0
Examples
--------
In this example the cubic spline is used to interpolate a sampled sinusoid.
You can see that the spline continuity property holds for the first and
second derivatives and is violated only for the third derivative.
>>> import numpy as np
>>> from scipy.interpolate import CubicSpline
>>> import matplotlib.pyplot as plt
>>> x = np.arange(10)
>>> y = np.sin(x)
>>> cs = CubicSpline(x, y)
>>> xs = np.arange(-0.5, 9.6, 0.1)
>>> fig, ax = plt.subplots(figsize=(6.5, 4))
>>> ax.plot(x, y, 'o', label='data')
>>> ax.plot(xs, np.sin(xs), label='true')
>>> ax.plot(xs, cs(xs), label="S")
>>> ax.plot(xs, cs(xs, 1), label="S'")
>>> ax.plot(xs, cs(xs, 2), label="S''")
>>> ax.plot(xs, cs(xs, 3), label="S'''")
>>> ax.set_xlim(-0.5, 9.5)
>>> ax.legend(loc='lower left', ncol=2)
>>> plt.show()
In the second example, the unit circle is interpolated with a spline. A
periodic boundary condition is used. You can see that the first derivative
values, ds/dx=0, ds/dy=1 at the periodic point (1, 0) are correctly
computed. Note that a circle cannot be exactly represented by a cubic
spline. To increase precision, more breakpoints would be required.
>>> theta = 2 * np.pi * np.linspace(0, 1, 5)
>>> y = np.c_[np.cos(theta), np.sin(theta)]
>>> cs = CubicSpline(theta, y, bc_type='periodic')
>>> print("ds/dx={:.1f} ds/dy={:.1f}".format(cs(0, 1)[0], cs(0, 1)[1]))
ds/dx=0.0 ds/dy=1.0
>>> xs = 2 * np.pi * np.linspace(0, 1, 100)
>>> fig, ax = plt.subplots(figsize=(6.5, 4))
>>> ax.plot(y[:, 0], y[:, 1], 'o', label='data')
>>> ax.plot(np.cos(xs), np.sin(xs), label='true')
>>> ax.plot(cs(xs)[:, 0], cs(xs)[:, 1], label='spline')
>>> ax.axes.set_aspect('equal')
>>> ax.legend(loc='center')
>>> plt.show()
The third example is the interpolation of a polynomial y = x**3 on the
interval 0 <= x <= 1. A cubic spline can represent this function exactly.
To achieve that we need to specify values and first derivatives at
endpoints of the interval. Note that y' = 3 * x**2 and thus y'(0) = 0 and
y'(1) = 3.
>>> cs = CubicSpline([0, 1], [0, 1], bc_type=((1, 0), (1, 3)))
>>> x = np.linspace(0, 1)
>>> np.allclose(x**3, cs(x))
True
References
----------
.. [1] `Cubic Spline Interpolation
<https://en.wikiversity.org/wiki/Cubic_Spline_Interpolation>`_
on Wikiversity.
.. [2] Carl de Boor, "A Practical Guide to Splines", Springer-Verlag, 1978.
"""
def __init__(self, x, y, axis=0, bc_type='not-a-knot', extrapolate=None):
x, dx, y, axis, _ = prepare_input(x, y, axis)
n = len(x)
bc, y = self._validate_bc(bc_type, y, y.shape[1:], axis)
if extrapolate is None:
if bc[0] == 'periodic':
extrapolate = 'periodic'
else:
extrapolate = True
if y.size == 0:
# bail out early for zero-sized arrays
s = np.zeros_like(y)
else:
dxr = dx.reshape([dx.shape[0]] + [1] * (y.ndim - 1))
slope = np.diff(y, axis=0) / dxr
# If bc is 'not-a-knot' this change is just a convention.
# If bc is 'periodic' then we already checked that y[0] == y[-1],
# so the spline is just a constant; we handle this case in the
# same way by setting the first derivatives to slope, which is 0.
if n == 2:
if bc[0] in ['not-a-knot', 'periodic']:
bc[0] = (1, slope[0])
if bc[1] in ['not-a-knot', 'periodic']:
bc[1] = (1, slope[0])
# This is a special case, when both conditions are 'not-a-knot'
# and n == 3. In this case 'not-a-knot' can't be handled regularly
# as both conditions are identical. We handle this case by
# constructing a parabola passing through given points.
if n == 3 and bc[0] == 'not-a-knot' and bc[1] == 'not-a-knot':
A = np.zeros((3, 3)) # This is a standard matrix.
b = np.empty((3,) + y.shape[1:], dtype=y.dtype)
A[0, 0] = 1
A[0, 1] = 1
A[1, 0] = dx[1]
A[1, 1] = 2 * (dx[0] + dx[1])
A[1, 2] = dx[0]
A[2, 1] = 1
A[2, 2] = 1
b[0] = 2 * slope[0]
b[1] = 3 * (dxr[0] * slope[1] + dxr[1] * slope[0])
b[2] = 2 * slope[1]
s = solve(A, b, overwrite_a=True, overwrite_b=True,
check_finite=False)
elif n == 3 and bc[0] == 'periodic':
# When there are only 3 points we compute the derivatives manually
s = np.empty((n,) + y.shape[1:], dtype=y.dtype)
t = (slope / dxr).sum() / (1. / dxr).sum()
s.fill(t)
else:
# Find derivative values at each x[i] by solving a tridiagonal
# system.
A = np.zeros((3, n)) # This is a banded matrix representation.
b = np.empty((n,) + y.shape[1:], dtype=y.dtype)
# Filling the system for i=1..n-2
# (x[i-1] - x[i]) * s[i-1] +\
# 2 * ((x[i] - x[i-1]) + (x[i+1] - x[i])) * s[i] +\
# (x[i] - x[i-1]) * s[i+1] =\
# 3 * ((x[i+1] - x[i])*(y[i] - y[i-1])/(x[i] - x[i-1]) +\
# (x[i] - x[i-1])*(y[i+1] - y[i])/(x[i+1] - x[i]))
A[1, 1:-1] = 2 * (dx[:-1] + dx[1:]) # The diagonal
A[0, 2:] = dx[:-1] # The upper diagonal
A[-1, :-2] = dx[1:] # The lower diagonal
b[1:-1] = 3 * (dxr[1:] * slope[:-1] + dxr[:-1] * slope[1:])
bc_start, bc_end = bc
if bc_start == 'periodic':
# Due to the periodicity, and because y[-1] = y[0], the
# linear system has (n-1) unknowns/equations instead of n:
A = A[:, 0:-1]
A[1, 0] = 2 * (dx[-1] + dx[0])
A[0, 1] = dx[-1]
b = b[:-1]
# Also, due to the periodicity, the system is not tri-diagonal.
# We need to compute a "condensed" matrix of shape (n-2, n-2).
# See https://web.archive.org/web/20151220180652/http://www.cfm.brown.edu/people/gk/chap6/node14.html
# for more explanations.
# The condensed matrix is obtained by removing the last column
# and last row of the (n-1, n-1) system matrix. The removed
# values are saved in scalar variables with the (n-1, n-1)
# system matrix indices forming their names:
a_m1_0 = dx[-2] # lower left corner value: A[-1, 0]
a_m1_m2 = dx[-1]
a_m1_m1 = 2 * (dx[-1] + dx[-2])
a_m2_m1 = dx[-3]
a_0_m1 = dx[0]
b[0] = 3 * (dxr[0] * slope[-1] + dxr[-1] * slope[0])
b[-1] = 3 * (dxr[-1] * slope[-2] + dxr[-2] * slope[-1])
Ac = A[:, :-1]
b1 = b[:-1]
b2 = np.zeros_like(b1)
b2[0] = -a_0_m1
b2[-1] = -a_m2_m1
# s1 and s2 are the solutions of (n-2, n-2) system
s1 = solve_banded((1, 1), Ac, b1, overwrite_ab=False,
overwrite_b=False, check_finite=False)
s2 = solve_banded((1, 1), Ac, b2, overwrite_ab=False,
overwrite_b=False, check_finite=False)
# computing the s[n-2] solution:
s_m1 = ((b[-1] - a_m1_0 * s1[0] - a_m1_m2 * s1[-1]) /
(a_m1_m1 + a_m1_0 * s2[0] + a_m1_m2 * s2[-1]))
# s is the solution of the (n, n) system:
s = np.empty((n,) + y.shape[1:], dtype=y.dtype)
s[:-2] = s1 + s_m1 * s2
s[-2] = s_m1
s[-1] = s[0]
else:
if bc_start == 'not-a-knot':
A[1, 0] = dx[1]
A[0, 1] = x[2] - x[0]
d = x[2] - x[0]
b[0] = ((dxr[0] + 2*d) * dxr[1] * slope[0] +
dxr[0]**2 * slope[1]) / d
elif bc_start[0] == 1:
A[1, 0] = 1
A[0, 1] = 0
b[0] = bc_start[1]
elif bc_start[0] == 2:
A[1, 0] = 2 * dx[0]
A[0, 1] = dx[0]
b[0] = -0.5 * bc_start[1] * dx[0]**2 + 3 * (y[1] - y[0])
if bc_end == 'not-a-knot':
A[1, -1] = dx[-2]
A[-1, -2] = x[-1] - x[-3]
d = x[-1] - x[-3]
b[-1] = ((dxr[-1]**2*slope[-2] +
(2*d + dxr[-1])*dxr[-2]*slope[-1]) / d)
elif bc_end[0] == 1:
A[1, -1] = 1
A[-1, -2] = 0
b[-1] = bc_end[1]
elif bc_end[0] == 2:
A[1, -1] = 2 * dx[-1]
A[-1, -2] = dx[-1]
b[-1] = 0.5 * bc_end[1] * dx[-1]**2 + 3 * (y[-1] - y[-2])
s = solve_banded((1, 1), A, b, overwrite_ab=True,
overwrite_b=True, check_finite=False)
super().__init__(x, y, s, axis=0, extrapolate=extrapolate)
self.axis = axis
@staticmethod
def _validate_bc(bc_type, y, expected_deriv_shape, axis):
"""Validate and prepare boundary conditions.
Returns
-------
validated_bc : 2-tuple
Boundary conditions for a curve start and end.
y : ndarray
y cast to a complex dtype if one of the boundary conditions has
complex dtype.
"""
if isinstance(bc_type, str):
if bc_type == 'periodic':
if not np.allclose(y[0], y[-1], rtol=1e-15, atol=1e-15):
raise ValueError(
"The first and last `y` point along axis {} must "
"be identical (within machine precision) when "
"bc_type='periodic'.".format(axis))
bc_type = (bc_type, bc_type)
else:
if len(bc_type) != 2:
raise ValueError("`bc_type` must contain 2 elements to "
"specify start and end conditions.")
if 'periodic' in bc_type:
raise ValueError("'periodic' `bc_type` is defined for both "
"curve ends and cannot be used with other "
"boundary conditions.")
validated_bc = []
for bc in bc_type:
if isinstance(bc, str):
if bc == 'clamped':
validated_bc.append((1, np.zeros(expected_deriv_shape)))
elif bc == 'natural':
validated_bc.append((2, np.zeros(expected_deriv_shape)))
elif bc in ['not-a-knot', 'periodic']:
validated_bc.append(bc)
else:
raise ValueError(f"bc_type={bc} is not allowed.")
else:
try:
deriv_order, deriv_value = bc
except Exception as e:
raise ValueError(
"A specified derivative value must be "
"given in the form (order, value)."
) from e
if deriv_order not in [1, 2]:
raise ValueError("The specified derivative order must "
"be 1 or 2.")
deriv_value = np.asarray(deriv_value)
if deriv_value.shape != expected_deriv_shape:
raise ValueError(
"`deriv_value` shape {} is not the expected one {}."
.format(deriv_value.shape, expected_deriv_shape))
if np.issubdtype(deriv_value.dtype, np.complexfloating):
y = y.astype(complex, copy=False)
validated_bc.append((deriv_order, deriv_value))
return validated_bc, y
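The periodic branch above avoids a dense solve by condensing the cyclic tridiagonal system: drop the last row and column, solve the condensed tridiagonal system for two right-hand sides (`s1`, `s2`), and recover the removed unknown. A self-contained plain-Python sketch of that trick (the helper names are mine, not SciPy's), verified by checking the residual of the full cyclic system:

```python
def thomas_solve(lower, diag, upper, rhs):
    # Standard Thomas algorithm for a tridiagonal system where
    # lower[i] = A[i, i-1] (lower[0] unused) and upper[i] = A[i, i+1].
    n = len(diag)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = upper[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i] * cp[i - 1]
        cp[i] = upper[i] / denom if i < n - 1 else 0.0
        dp[i] = (rhs[i] - lower[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cyclic_tridiag_solve(lower, diag, upper, corner_ur, corner_ll, rhs):
    # Condensation: drop the last row/column, solve the condensed system
    # for two right-hand sides, recover the removed unknown t, combine.
    m = len(diag)
    b2 = [0.0] * (m - 1)
    b2[0] = -corner_ur        # removed-column entry A[0, m-1]
    b2[-1] = -upper[m - 2]    # removed-column entry A[m-2, m-1]
    s1 = thomas_solve(lower[:m - 1], diag[:m - 1], upper[:m - 1], rhs[:m - 1])
    s2 = thomas_solve(lower[:m - 1], diag[:m - 1], upper[:m - 1], b2)
    t = ((rhs[-1] - corner_ll * s1[0] - lower[-1] * s1[-1]) /
         (diag[-1] + corner_ll * s2[0] + lower[-1] * s2[-1]))
    return [a + t * b for a, b in zip(s1, s2)] + [t]

# Verify on a diagonally dominant cyclic system via its residual.
diag = [4.0] * 5
lower = [0.0, 1.0, 1.0, 1.0, 1.0]
upper = [1.0, 1.0, 1.0, 1.0, 0.0]
rhs = [1.0, 2.0, 3.0, 4.0, 5.0]
sol = cyclic_tridiag_solve(lower, diag, upper, 1.0, 1.0, rhs)
residual = []
for i in range(5):
    v = diag[i] * sol[i]
    if i > 0:
        v += lower[i] * sol[i - 1]
    if i < 4:
        v += upper[i] * sol[i + 1]
    v += sol[4] if i == 0 else 0.0    # corner A[0, 4]
    v += sol[0] if i == 4 else 0.0    # corner A[4, 0]
    residual.append(abs(v - rhs[i]))
```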
# scipy-main/scipy/interpolate/interpolate.py
# This file is not meant for public use and will be removed in SciPy v2.0.0.
# Use the `scipy.interpolate` namespace for importing the functions
# included below.
import warnings
from . import _interpolate
__all__ = [ # noqa: F822
'BPoly',
'BSpline',
'NdPPoly',
'PPoly',
'RectBivariateSpline',
'RegularGridInterpolator',
'array',
'asarray',
'atleast_1d',
'atleast_2d',
'comb',
'dfitpack',
'interp1d',
'interp2d',
'interpn',
'intp',
'itertools',
'lagrange',
'make_interp_spline',
'poly1d',
'prod',
'ravel',
'searchsorted',
'spec',
'transpose',
]
def __dir__():
return __all__
def __getattr__(name):
if name not in __all__:
raise AttributeError(
"scipy.interpolate.interpolate is deprecated and has no attribute "
f"{name}. Try looking in scipy.interpolate instead.")
warnings.warn(f"Please use `{name}` from the `scipy.interpolate` namespace, "
"the `scipy.interpolate.interpolate` namespace is deprecated.",
category=DeprecationWarning, stacklevel=2)
return getattr(_interpolate, name)
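The module above is an instance of the PEP 562 lazy-deprecation shim pattern: a module-level `__getattr__` warns and forwards to the private implementation. A minimal standalone sketch (the `demo_shim` module and its contents are hypothetical, not part of SciPy):

```python
import types
import warnings

# Private implementation the shim forwards to (hypothetical contents).
_impl = types.SimpleNamespace(answer=42)

shim = types.ModuleType("demo_shim")

def _shim_getattr(name):
    if not hasattr(_impl, name):
        raise AttributeError(
            f"demo_shim is deprecated and has no attribute {name}.")
    warnings.warn(f"Please use `{name}` from the private namespace, "
                  "the `demo_shim` namespace is deprecated.",
                  category=DeprecationWarning, stacklevel=2)
    return getattr(_impl, name)

# PEP 562: attribute lookups that miss the module dict fall back to
# the module-level __getattr__.
shim.__getattr__ = _shim_getattr

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = shim.answer  # warns, then forwards to _impl.answer
```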
# scipy-main/scipy/interpolate/ndgriddata.py
# This file is not meant for public use and will be removed in SciPy v2.0.0.
# Use the `scipy.interpolate` namespace for importing the functions
# included below.
import warnings
from . import _ndgriddata
__all__ = [ # noqa: F822
'CloughTocher2DInterpolator',
'LinearNDInterpolator',
'NDInterpolatorBase',
'NearestNDInterpolator',
'cKDTree',
'griddata',
]
def __dir__():
return __all__
def __getattr__(name):
if name not in __all__:
raise AttributeError(
"scipy.interpolate.ndgriddata is deprecated and has no attribute "
f"{name}. Try looking in scipy.interpolate instead.")
warnings.warn(f"Please use `{name}` from the `scipy.interpolate` namespace, "
"the `scipy.interpolate.ndgriddata` namespace is deprecated.",
category=DeprecationWarning, stacklevel=2)
return getattr(_ndgriddata, name)
# scipy-main/scipy/interpolate/tests/test_interpolate.py
from numpy.testing import (assert_, assert_equal, assert_almost_equal,
assert_array_almost_equal, assert_array_equal,
assert_allclose, suppress_warnings)
from pytest import raises as assert_raises
import pytest
from numpy import mgrid, pi, sin, ogrid, poly1d, linspace
import numpy as np
from scipy.interpolate import (interp1d, interp2d, lagrange, PPoly, BPoly,
splrep, splev, splantider, splint, sproot, Akima1DInterpolator,
NdPPoly, BSpline)
from scipy.special import poch, gamma
from scipy.interpolate import _ppoly
from scipy._lib._gcutils import assert_deallocated, IS_PYPY
from scipy.integrate import nquad
from scipy.special import binom
class TestInterp2D:
def test_interp2d(self):
y, x = mgrid[0:2:20j, 0:pi:21j]
z = sin(x+0.5*y)
with suppress_warnings() as sup:
sup.filter(DeprecationWarning)
II = interp2d(x, y, z)
assert_almost_equal(II(1.0, 2.0), sin(2.0), decimal=2)
v, u = ogrid[0:2:24j, 0:pi:25j]
assert_almost_equal(II(u.ravel(), v.ravel()),
sin(u+0.5*v), decimal=2)
def test_interp2d_meshgrid_input(self):
# Ticket #703
x = linspace(0, 2, 16)
y = linspace(0, pi, 21)
z = sin(x[None, :] + y[:, None]/2.)
with suppress_warnings() as sup:
sup.filter(DeprecationWarning)
II = interp2d(x, y, z)
assert_almost_equal(II(1.0, 2.0), sin(2.0), decimal=2)
def test_interp2d_meshgrid_input_unsorted(self):
np.random.seed(1234)
x = linspace(0, 2, 16)
y = linspace(0, pi, 21)
z = sin(x[None, :] + y[:, None] / 2.)
with suppress_warnings() as sup:
sup.filter(DeprecationWarning)
ip1 = interp2d(x.copy(), y.copy(), z, kind='cubic')
np.random.shuffle(x)
z = sin(x[None, :] + y[:, None]/2.)
ip2 = interp2d(x.copy(), y.copy(), z, kind='cubic')
np.random.shuffle(x)
np.random.shuffle(y)
z = sin(x[None, :] + y[:, None] / 2.)
ip3 = interp2d(x, y, z, kind='cubic')
x = linspace(0, 2, 31)
y = linspace(0, pi, 30)
assert_equal(ip1(x, y), ip2(x, y))
assert_equal(ip1(x, y), ip3(x, y))
def test_interp2d_eval_unsorted(self):
y, x = mgrid[0:2:20j, 0:pi:21j]
z = sin(x + 0.5*y)
with suppress_warnings() as sup:
sup.filter(DeprecationWarning)
func = interp2d(x, y, z)
xe = np.array([3, 4, 5])
ye = np.array([5.3, 7.1])
assert_allclose(func(xe, ye), func(xe, ye[::-1]))
assert_raises(ValueError, func, xe, ye[::-1], 0, 0, True)
def test_interp2d_linear(self):
# Ticket #898
a = np.zeros([5, 5])
a[2, 2] = 1.0
x = y = np.arange(5)
with suppress_warnings() as sup:
sup.filter(DeprecationWarning)
b = interp2d(x, y, a, 'linear')
assert_almost_equal(b(2.0, 1.5), np.array([0.5]), decimal=2)
assert_almost_equal(b(2.0, 2.5), np.array([0.5]), decimal=2)
def test_interp2d_bounds(self):
x = np.linspace(0, 1, 5)
y = np.linspace(0, 2, 7)
z = x[None, :]**2 + y[:, None]
ix = np.linspace(-1, 3, 31)
iy = np.linspace(-1, 3, 33)
with suppress_warnings() as sup:
sup.filter(DeprecationWarning)
b = interp2d(x, y, z, bounds_error=True)
assert_raises(ValueError, b, ix, iy)
b = interp2d(x, y, z, fill_value=np.nan)
iz = b(ix, iy)
mx = (ix < 0) | (ix > 1)
my = (iy < 0) | (iy > 2)
assert_(np.isnan(iz[my, :]).all())
assert_(np.isnan(iz[:, mx]).all())
assert_(np.isfinite(iz[~my, :][:, ~mx]).all())
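These tests silence `interp2d`'s `DeprecationWarning`; the documented migration target for data on a regular grid is `RegularGridInterpolator`. A small sketch of the replacement pattern (assuming SciPy is importable) — note the value layout, with `values[i, j]` corresponding to `(x[i], y[j])`:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

x = np.linspace(0.0, 1.0, 5)
y = np.linspace(0.0, 2.0, 7)
# values[i, j] corresponds to (x[i], y[j]); bilinear interpolation
# reproduces the bilinear function z = x + y exactly.
values = x[:, None] + y[None, :]
interp = RegularGridInterpolator((x, y), values, method="linear")
pt = float(interp([[0.5, 1.5]])[0])  # expected 0.5 + 1.5 = 2.0
```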
class TestInterp1D:
def setup_method(self):
self.x5 = np.arange(5.)
self.x10 = np.arange(10.)
self.y10 = np.arange(10.)
self.x25 = self.x10.reshape((2,5))
self.x2 = np.arange(2.)
self.y2 = np.arange(2.)
self.x1 = np.array([0.])
self.y1 = np.array([0.])
self.y210 = np.arange(20.).reshape((2, 10))
self.y102 = np.arange(20.).reshape((10, 2))
self.y225 = np.arange(20.).reshape((2, 2, 5))
self.y25 = np.arange(10.).reshape((2, 5))
self.y235 = np.arange(30.).reshape((2, 3, 5))
self.y325 = np.arange(30.).reshape((3, 2, 5))
# Edge updated test matrix 1
# array([[ 30, 1, 2, 3, 4, 5, 6, 7, 8, -30],
# [ 30, 11, 12, 13, 14, 15, 16, 17, 18, -30]])
self.y210_edge_updated = np.arange(20.).reshape((2, 10))
self.y210_edge_updated[:, 0] = 30
self.y210_edge_updated[:, -1] = -30
# Edge updated test matrix 2
# array([[ 30, 30],
# [ 2, 3],
# [ 4, 5],
# [ 6, 7],
# [ 8, 9],
# [ 10, 11],
# [ 12, 13],
# [ 14, 15],
# [ 16, 17],
# [-30, -30]])
self.y102_edge_updated = np.arange(20.).reshape((10, 2))
self.y102_edge_updated[0, :] = 30
self.y102_edge_updated[-1, :] = -30
self.fill_value = -100.0
def test_validation(self):
# Make sure that appropriate exceptions are raised when invalid values
# are given to the constructor.
# These should all work.
for kind in ('nearest', 'nearest-up', 'zero', 'linear', 'slinear',
'quadratic', 'cubic', 'previous', 'next'):
interp1d(self.x10, self.y10, kind=kind)
interp1d(self.x10, self.y10, kind=kind, fill_value="extrapolate")
interp1d(self.x10, self.y10, kind='linear', fill_value=(-1, 1))
interp1d(self.x10, self.y10, kind='linear',
fill_value=np.array([-1]))
interp1d(self.x10, self.y10, kind='linear',
fill_value=(-1,))
interp1d(self.x10, self.y10, kind='linear',
fill_value=-1)
interp1d(self.x10, self.y10, kind='linear',
fill_value=(-1, -1))
interp1d(self.x10, self.y10, kind=0)
interp1d(self.x10, self.y10, kind=1)
interp1d(self.x10, self.y10, kind=2)
interp1d(self.x10, self.y10, kind=3)
interp1d(self.x10, self.y210, kind='linear', axis=-1,
fill_value=(-1, -1))
interp1d(self.x2, self.y210, kind='linear', axis=0,
fill_value=np.ones(10))
interp1d(self.x2, self.y210, kind='linear', axis=0,
fill_value=(np.ones(10), np.ones(10)))
interp1d(self.x2, self.y210, kind='linear', axis=0,
fill_value=(np.ones(10), -1))
# x array must be 1D.
assert_raises(ValueError, interp1d, self.x25, self.y10)
# y array cannot be a scalar.
assert_raises(ValueError, interp1d, self.x10, np.array(0))
# Check for x and y arrays having the same length.
assert_raises(ValueError, interp1d, self.x10, self.y2)
assert_raises(ValueError, interp1d, self.x2, self.y10)
assert_raises(ValueError, interp1d, self.x10, self.y102)
interp1d(self.x10, self.y210)
interp1d(self.x10, self.y102, axis=0)
# Check for x and y having at least 1 element.
assert_raises(ValueError, interp1d, self.x1, self.y10)
assert_raises(ValueError, interp1d, self.x10, self.y1)
# Bad fill values
assert_raises(ValueError, interp1d, self.x10, self.y10, kind='linear',
fill_value=(-1, -1, -1)) # doesn't broadcast
assert_raises(ValueError, interp1d, self.x10, self.y10, kind='linear',
fill_value=[-1, -1, -1]) # doesn't broadcast
assert_raises(ValueError, interp1d, self.x10, self.y10, kind='linear',
fill_value=np.array((-1, -1, -1))) # doesn't broadcast
assert_raises(ValueError, interp1d, self.x10, self.y10, kind='linear',
fill_value=[[-1]]) # doesn't broadcast
assert_raises(ValueError, interp1d, self.x10, self.y10, kind='linear',
fill_value=[-1, -1]) # doesn't broadcast
assert_raises(ValueError, interp1d, self.x10, self.y10, kind='linear',
fill_value=np.array([])) # doesn't broadcast
assert_raises(ValueError, interp1d, self.x10, self.y10, kind='linear',
fill_value=()) # doesn't broadcast
assert_raises(ValueError, interp1d, self.x2, self.y210, kind='linear',
axis=0, fill_value=[-1, -1]) # doesn't broadcast
assert_raises(ValueError, interp1d, self.x2, self.y210, kind='linear',
axis=0, fill_value=(0., [-1, -1])) # above doesn't bc
def test_init(self):
# Check that the attributes are initialized appropriately by the
# constructor.
assert_(interp1d(self.x10, self.y10).copy)
assert_(not interp1d(self.x10, self.y10, copy=False).copy)
assert_(interp1d(self.x10, self.y10).bounds_error)
assert_(not interp1d(self.x10, self.y10, bounds_error=False).bounds_error)
assert_(np.isnan(interp1d(self.x10, self.y10).fill_value))
assert_equal(interp1d(self.x10, self.y10, fill_value=3.0).fill_value,
3.0)
assert_equal(interp1d(self.x10, self.y10, fill_value=(1.0, 2.0)).fill_value,
(1.0, 2.0))
assert_equal(interp1d(self.x10, self.y10).axis, 0)
assert_equal(interp1d(self.x10, self.y210).axis, 1)
assert_equal(interp1d(self.x10, self.y102, axis=0).axis, 0)
assert_array_equal(interp1d(self.x10, self.y10).x, self.x10)
assert_array_equal(interp1d(self.x10, self.y10).y, self.y10)
assert_array_equal(interp1d(self.x10, self.y210).y, self.y210)
def test_assume_sorted(self):
# Check for unsorted arrays
interp10 = interp1d(self.x10, self.y10)
interp10_unsorted = interp1d(self.x10[::-1], self.y10[::-1])
assert_array_almost_equal(interp10_unsorted(self.x10), self.y10)
assert_array_almost_equal(interp10_unsorted(1.2), np.array([1.2]))
assert_array_almost_equal(interp10_unsorted([2.4, 5.6, 6.0]),
interp10([2.4, 5.6, 6.0]))
# Check assume_sorted keyword (defaults to False)
interp10_assume_kw = interp1d(self.x10[::-1], self.y10[::-1],
assume_sorted=False)
assert_array_almost_equal(interp10_assume_kw(self.x10), self.y10)
interp10_assume_kw2 = interp1d(self.x10[::-1], self.y10[::-1],
assume_sorted=True)
# Should raise an error for unsorted input if assume_sorted=True
assert_raises(ValueError, interp10_assume_kw2, self.x10)
# Check that if y is a 2-D array, things are still consistent
interp10_y_2d = interp1d(self.x10, self.y210)
interp10_y_2d_unsorted = interp1d(self.x10[::-1], self.y210[:, ::-1])
assert_array_almost_equal(interp10_y_2d(self.x10),
interp10_y_2d_unsorted(self.x10))
def test_linear(self):
for kind in ['linear', 'slinear']:
self._check_linear(kind)
def _check_linear(self, kind):
# Check the actual implementation of linear interpolation.
interp10 = interp1d(self.x10, self.y10, kind=kind)
assert_array_almost_equal(interp10(self.x10), self.y10)
assert_array_almost_equal(interp10(1.2), np.array([1.2]))
assert_array_almost_equal(interp10([2.4, 5.6, 6.0]),
np.array([2.4, 5.6, 6.0]))
# test fill_value="extrapolate"
extrapolator = interp1d(self.x10, self.y10, kind=kind,
fill_value='extrapolate')
assert_allclose(extrapolator([-1., 0, 9, 11]),
[-1, 0, 9, 11], rtol=1e-14)
opts = dict(kind=kind,
fill_value='extrapolate',
bounds_error=True)
assert_raises(ValueError, interp1d, self.x10, self.y10, **opts)
def test_linear_dtypes(self):
# regression test for gh-5898, where 1D linear interpolation has been
# delegated to numpy.interp for all float dtypes, and the latter was
# not handling e.g. np.float128.
for dtyp in [np.float16,
np.float32,
np.float64,
np.longdouble]:
x = np.arange(8, dtype=dtyp)
y = x
yp = interp1d(x, y, kind='linear')(x)
assert_equal(yp.dtype, dtyp)
assert_allclose(yp, y, atol=1e-15)
# regression test for gh-14531, where 1D linear interpolation has been
# extended to delegate to numpy.interp for integer dtypes
x = [0, 1, 2]
y = [np.nan, 0, 1]
yp = interp1d(x, y)(x)
assert_allclose(yp, y, atol=1e-15)
def test_slinear_dtypes(self):
# regression test for gh-7273: 1D slinear interpolation fails with
# float32 inputs
dt_r = [np.float16, np.float32, np.float64]
dt_rc = dt_r + [np.complex64, np.complex128]
spline_kinds = ['slinear', 'zero', 'quadratic', 'cubic']
for dtx in dt_r:
x = np.arange(0, 10, dtype=dtx)
for dty in dt_rc:
y = np.exp(-x/3.0).astype(dty)
for dtn in dt_r:
xnew = x.astype(dtn)
for kind in spline_kinds:
f = interp1d(x, y, kind=kind, bounds_error=False)
assert_allclose(f(xnew), y, atol=1e-7,
err_msg=f"{dtx}, {dty} {dtn}")
def test_cubic(self):
# Check the actual implementation of spline interpolation.
interp10 = interp1d(self.x10, self.y10, kind='cubic')
assert_array_almost_equal(interp10(self.x10), self.y10)
assert_array_almost_equal(interp10(1.2), np.array([1.2]))
assert_array_almost_equal(interp10(1.5), np.array([1.5]))
assert_array_almost_equal(interp10([2.4, 5.6, 6.0]),
np.array([2.4, 5.6, 6.0]),)
def test_nearest(self):
# Check the actual implementation of nearest-neighbour interpolation.
# Nearest asserts that half-integer case (1.5) rounds down to 1
interp10 = interp1d(self.x10, self.y10, kind='nearest')
assert_array_almost_equal(interp10(self.x10), self.y10)
assert_array_almost_equal(interp10(1.2), np.array(1.))
assert_array_almost_equal(interp10(1.5), np.array(1.))
assert_array_almost_equal(interp10([2.4, 5.6, 6.0]),
np.array([2., 6., 6.]),)
# test fill_value="extrapolate"
extrapolator = interp1d(self.x10, self.y10, kind='nearest',
fill_value='extrapolate')
assert_allclose(extrapolator([-1., 0, 9, 11]),
[0, 0, 9, 9], rtol=1e-14)
opts = dict(kind='nearest',
fill_value='extrapolate',
bounds_error=True)
assert_raises(ValueError, interp1d, self.x10, self.y10, **opts)
def test_nearest_up(self):
# Check the actual implementation of nearest-neighbour interpolation.
# Nearest-up asserts that half-integer case (1.5) rounds up to 2
interp10 = interp1d(self.x10, self.y10, kind='nearest-up')
assert_array_almost_equal(interp10(self.x10), self.y10)
assert_array_almost_equal(interp10(1.2), np.array(1.))
assert_array_almost_equal(interp10(1.5), np.array(2.))
assert_array_almost_equal(interp10([2.4, 5.6, 6.0]),
np.array([2., 6., 6.]),)
# test fill_value="extrapolate"
extrapolator = interp1d(self.x10, self.y10, kind='nearest-up',
fill_value='extrapolate')
assert_allclose(extrapolator([-1., 0, 9, 11]),
[0, 0, 9, 9], rtol=1e-14)
opts = dict(kind='nearest-up',
fill_value='extrapolate',
bounds_error=True)
assert_raises(ValueError, interp1d, self.x10, self.y10, **opts)
def test_previous(self):
# Check the actual implementation of previous interpolation.
interp10 = interp1d(self.x10, self.y10, kind='previous')
assert_array_almost_equal(interp10(self.x10), self.y10)
assert_array_almost_equal(interp10(1.2), np.array(1.))
assert_array_almost_equal(interp10(1.5), np.array(1.))
assert_array_almost_equal(interp10([2.4, 5.6, 6.0]),
np.array([2., 5., 6.]),)
# test fill_value="extrapolate"
extrapolator = interp1d(self.x10, self.y10, kind='previous',
fill_value='extrapolate')
assert_allclose(extrapolator([-1., 0, 9, 11]),
[np.nan, 0, 9, 9], rtol=1e-14)
# Tests for gh-9591
interpolator1D = interp1d(self.x10, self.y10, kind="previous",
fill_value='extrapolate')
assert_allclose(interpolator1D([-1, -2, 5, 8, 12, 25]),
[np.nan, np.nan, 5, 8, 9, 9])
interpolator2D = interp1d(self.x10, self.y210, kind="previous",
fill_value='extrapolate')
assert_allclose(interpolator2D([-1, -2, 5, 8, 12, 25]),
[[np.nan, np.nan, 5, 8, 9, 9],
[np.nan, np.nan, 15, 18, 19, 19]])
interpolator2DAxis0 = interp1d(self.x10, self.y102, kind="previous",
axis=0, fill_value='extrapolate')
assert_allclose(interpolator2DAxis0([-2, 5, 12]),
[[np.nan, np.nan],
[10, 11],
[18, 19]])
opts = dict(kind='previous',
fill_value='extrapolate',
bounds_error=True)
assert_raises(ValueError, interp1d, self.x10, self.y10, **opts)
# Tests for gh-16813
interpolator1D = interp1d([0, 1, 2],
[0, 1, -1], kind="previous",
fill_value='extrapolate',
assume_sorted=True)
assert_allclose(interpolator1D([-2, -1, 0, 1, 2, 3, 5]),
[np.nan, np.nan, 0, 1, -1, -1, -1])
interpolator1D = interp1d([2, 0, 1], # x is not ascending
[-1, 0, 1], kind="previous",
fill_value='extrapolate',
assume_sorted=False)
assert_allclose(interpolator1D([-2, -1, 0, 1, 2, 3, 5]),
[np.nan, np.nan, 0, 1, -1, -1, -1])
interpolator2D = interp1d(self.x10, self.y210_edge_updated,
kind="previous",
fill_value='extrapolate')
assert_allclose(interpolator2D([-1, -2, 5, 8, 12, 25]),
[[np.nan, np.nan, 5, 8, -30, -30],
[np.nan, np.nan, 15, 18, -30, -30]])
interpolator2DAxis0 = interp1d(self.x10, self.y102_edge_updated,
kind="previous",
axis=0, fill_value='extrapolate')
assert_allclose(interpolator2DAxis0([-2, 5, 12]),
[[np.nan, np.nan],
[10, 11],
[-30, -30]])
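The one-sided NaN extrapolation asserted above can be demonstrated directly: `'previous'` has no data point before `x[0]`, so extrapolating below the range yields NaN, while `'next'` has none after `x[-1]`, so extrapolating above yields NaN.

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.arange(10, dtype=float)
f_prev = interp1d(x, x, kind='previous', fill_value='extrapolate')
f_next = interp1d(x, x, kind='next', fill_value='extrapolate')

print(f_prev([-1., 5., 11.]))  # [nan  5.  9.]
print(f_next([-1., 5., 11.]))  # [ 0.  5. nan]
```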
def test_next(self):
# Check the actual implementation of next interpolation.
interp10 = interp1d(self.x10, self.y10, kind='next')
assert_array_almost_equal(interp10(self.x10), self.y10)
assert_array_almost_equal(interp10(1.2), np.array(2.))
assert_array_almost_equal(interp10(1.5), np.array(2.))
assert_array_almost_equal(interp10([2.4, 5.6, 6.0]),
np.array([3., 6., 6.]),)
# test fill_value="extrapolate"
extrapolator = interp1d(self.x10, self.y10, kind='next',
fill_value='extrapolate')
assert_allclose(extrapolator([-1., 0, 9, 11]),
[0, 0, 9, np.nan], rtol=1e-14)
# Tests for gh-9591
interpolator1D = interp1d(self.x10, self.y10, kind="next",
fill_value='extrapolate')
assert_allclose(interpolator1D([-1, -2, 5, 8, 12, 25]),
[0, 0, 5, 8, np.nan, np.nan])
interpolator2D = interp1d(self.x10, self.y210, kind="next",
fill_value='extrapolate')
assert_allclose(interpolator2D([-1, -2, 5, 8, 12, 25]),
[[0, 0, 5, 8, np.nan, np.nan],
[10, 10, 15, 18, np.nan, np.nan]])
interpolator2DAxis0 = interp1d(self.x10, self.y102, kind="next",
axis=0, fill_value='extrapolate')
assert_allclose(interpolator2DAxis0([-2, 5, 12]),
[[0, 1],
[10, 11],
[np.nan, np.nan]])
opts = dict(kind='next',
fill_value='extrapolate',
bounds_error=True)
assert_raises(ValueError, interp1d, self.x10, self.y10, **opts)
# Tests for gh-16813
interpolator1D = interp1d([0, 1, 2],
[0, 1, -1], kind="next",
fill_value='extrapolate',
assume_sorted=True)
assert_allclose(interpolator1D([-2, -1, 0, 1, 2, 3, 5]),
[0, 0, 0, 1, -1, np.nan, np.nan])
interpolator1D = interp1d([2, 0, 1], # x is not ascending
[-1, 0, 1], kind="next",
fill_value='extrapolate',
assume_sorted=False)
assert_allclose(interpolator1D([-2, -1, 0, 1, 2, 3, 5]),
[0, 0, 0, 1, -1, np.nan, np.nan])
interpolator2D = interp1d(self.x10, self.y210_edge_updated,
kind="next",
fill_value='extrapolate')
assert_allclose(interpolator2D([-1, -2, 5, 8, 12, 25]),
[[30, 30, 5, 8, np.nan, np.nan],
[30, 30, 15, 18, np.nan, np.nan]])
interpolator2DAxis0 = interp1d(self.x10, self.y102_edge_updated,
kind="next",
axis=0, fill_value='extrapolate')
assert_allclose(interpolator2DAxis0([-2, 5, 12]),
[[30, 30],
[10, 11],
[np.nan, np.nan]])
def test_zero(self):
# Check the actual implementation of zero-order spline interpolation.
interp10 = interp1d(self.x10, self.y10, kind='zero')
assert_array_almost_equal(interp10(self.x10), self.y10)
assert_array_almost_equal(interp10(1.2), np.array(1.))
assert_array_almost_equal(interp10(1.5), np.array(1.))
assert_array_almost_equal(interp10([2.4, 5.6, 6.0]),
np.array([2., 5., 6.]))
def bounds_check_helper(self, interpolant, test_array, fail_value):
# Asserts that a ValueError is raised and that the error message
# contains the value causing this exception.
assert_raises(ValueError, interpolant, test_array)
try:
interpolant(test_array)
except ValueError as err:
assert f"{fail_value}" in str(err)
def _bounds_check(self, kind='linear'):
# Test that our handling of out-of-bounds input is correct.
extrap10 = interp1d(self.x10, self.y10, fill_value=self.fill_value,
bounds_error=False, kind=kind)
assert_array_equal(extrap10(11.2), np.array(self.fill_value))
assert_array_equal(extrap10(-3.4), np.array(self.fill_value))
assert_array_equal(extrap10([[[11.2], [-3.4], [12.6], [19.3]]]),
np.array(self.fill_value),)
assert_array_equal(extrap10._check_bounds(
np.array([-1.0, 0.0, 5.0, 9.0, 11.0])),
np.array([[True, False, False, False, False],
[False, False, False, False, True]]))
raises_bounds_error = interp1d(self.x10, self.y10, bounds_error=True,
kind=kind)
self.bounds_check_helper(raises_bounds_error, -1.0, -1.0)
self.bounds_check_helper(raises_bounds_error, 11.0, 11.0)
self.bounds_check_helper(raises_bounds_error, [0.0, -1.0, 0.0], -1.0)
self.bounds_check_helper(raises_bounds_error, [0.0, 1.0, 21.0], 21.0)
raises_bounds_error([0.0, 5.0, 9.0])
def _bounds_check_int_nan_fill(self, kind='linear'):
x = np.arange(10).astype(np.int_)
y = np.arange(10).astype(np.int_)
c = interp1d(x, y, kind=kind, fill_value=np.nan, bounds_error=False)
yi = c(x - 1)
assert_(np.isnan(yi[0]))
assert_array_almost_equal(yi, np.r_[np.nan, y[:-1]])
def test_bounds(self):
for kind in ('linear', 'cubic', 'nearest', 'previous', 'next',
'slinear', 'zero', 'quadratic'):
self._bounds_check(kind)
self._bounds_check_int_nan_fill(kind)
def _check_fill_value(self, kind):
interp = interp1d(self.x10, self.y10, kind=kind,
fill_value=(-100, 100), bounds_error=False)
assert_array_almost_equal(interp(10), 100)
assert_array_almost_equal(interp(-10), -100)
assert_array_almost_equal(interp([-10, 10]), [-100, 100])
# Proper broadcasting:
# interp along axis of length 5
# other dim=(2, 3), (3, 2), (2, 2), or (2,)
# one singleton fill_value (works for all)
for y in (self.y235, self.y325, self.y225, self.y25):
interp = interp1d(self.x5, y, kind=kind, axis=-1,
fill_value=100, bounds_error=False)
assert_array_almost_equal(interp(10), 100)
assert_array_almost_equal(interp(-10), 100)
assert_array_almost_equal(interp([-10, 10]), 100)
# singleton lower, singleton upper
interp = interp1d(self.x5, y, kind=kind, axis=-1,
fill_value=(-100, 100), bounds_error=False)
assert_array_almost_equal(interp(10), 100)
assert_array_almost_equal(interp(-10), -100)
if y.ndim == 3:
result = [[[-100, 100]] * y.shape[1]] * y.shape[0]
else:
result = [[-100, 100]] * y.shape[0]
assert_array_almost_equal(interp([-10, 10]), result)
# one broadcastable (3,) fill_value
fill_value = [100, 200, 300]
for y in (self.y325, self.y225):
assert_raises(ValueError, interp1d, self.x5, y, kind=kind,
axis=-1, fill_value=fill_value, bounds_error=False)
interp = interp1d(self.x5, self.y235, kind=kind, axis=-1,
fill_value=fill_value, bounds_error=False)
assert_array_almost_equal(interp(10), [[100, 200, 300]] * 2)
assert_array_almost_equal(interp(-10), [[100, 200, 300]] * 2)
assert_array_almost_equal(interp([-10, 10]), [[[100, 100],
[200, 200],
[300, 300]]] * 2)
# one broadcastable (2,) fill_value
fill_value = [100, 200]
assert_raises(ValueError, interp1d, self.x5, self.y235, kind=kind,
axis=-1, fill_value=fill_value, bounds_error=False)
for y in (self.y225, self.y325, self.y25):
interp = interp1d(self.x5, y, kind=kind, axis=-1,
fill_value=fill_value, bounds_error=False)
result = [100, 200]
if y.ndim == 3:
result = [result] * y.shape[0]
assert_array_almost_equal(interp(10), result)
assert_array_almost_equal(interp(-10), result)
result = [[100, 100], [200, 200]]
if y.ndim == 3:
result = [result] * y.shape[0]
assert_array_almost_equal(interp([-10, 10]), result)
# broadcastable (3,) lower, singleton upper
fill_value = (np.array([-100, -200, -300]), 100)
for y in (self.y325, self.y225):
assert_raises(ValueError, interp1d, self.x5, y, kind=kind,
axis=-1, fill_value=fill_value, bounds_error=False)
interp = interp1d(self.x5, self.y235, kind=kind, axis=-1,
fill_value=fill_value, bounds_error=False)
assert_array_almost_equal(interp(10), 100)
assert_array_almost_equal(interp(-10), [[-100, -200, -300]] * 2)
assert_array_almost_equal(interp([-10, 10]), [[[-100, 100],
[-200, 100],
[-300, 100]]] * 2)
# broadcastable (2,) lower, singleton upper
fill_value = (np.array([-100, -200]), 100)
assert_raises(ValueError, interp1d, self.x5, self.y235, kind=kind,
axis=-1, fill_value=fill_value, bounds_error=False)
for y in (self.y225, self.y325, self.y25):
interp = interp1d(self.x5, y, kind=kind, axis=-1,
fill_value=fill_value, bounds_error=False)
assert_array_almost_equal(interp(10), 100)
result = [-100, -200]
if y.ndim == 3:
result = [result] * y.shape[0]
assert_array_almost_equal(interp(-10), result)
result = [[-100, 100], [-200, 100]]
if y.ndim == 3:
result = [result] * y.shape[0]
assert_array_almost_equal(interp([-10, 10]), result)
# broadcastable (3,) lower, broadcastable (3,) upper
fill_value = ([-100, -200, -300], [100, 200, 300])
for y in (self.y325, self.y225):
assert_raises(ValueError, interp1d, self.x5, y, kind=kind,
axis=-1, fill_value=fill_value, bounds_error=False)
for ii in range(2): # check ndarray as well as list here
if ii == 1:
fill_value = tuple(np.array(f) for f in fill_value)
interp = interp1d(self.x5, self.y235, kind=kind, axis=-1,
fill_value=fill_value, bounds_error=False)
assert_array_almost_equal(interp(10), [[100, 200, 300]] * 2)
assert_array_almost_equal(interp(-10), [[-100, -200, -300]] * 2)
assert_array_almost_equal(interp([-10, 10]), [[[-100, 100],
[-200, 200],
[-300, 300]]] * 2)
# broadcastable (2,) lower, broadcastable (2,) upper
fill_value = ([-100, -200], [100, 200])
assert_raises(ValueError, interp1d, self.x5, self.y235, kind=kind,
axis=-1, fill_value=fill_value, bounds_error=False)
for y in (self.y325, self.y225, self.y25):
interp = interp1d(self.x5, y, kind=kind, axis=-1,
fill_value=fill_value, bounds_error=False)
result = [100, 200]
if y.ndim == 3:
result = [result] * y.shape[0]
assert_array_almost_equal(interp(10), result)
result = [-100, -200]
if y.ndim == 3:
result = [result] * y.shape[0]
assert_array_almost_equal(interp(-10), result)
result = [[-100, 100], [-200, 200]]
if y.ndim == 3:
result = [result] * y.shape[0]
assert_array_almost_equal(interp([-10, 10]), result)
# one broadcastable (2, 2) array-like
fill_value = [[100, 200], [1000, 2000]]
for y in (self.y235, self.y325, self.y25):
assert_raises(ValueError, interp1d, self.x5, y, kind=kind,
axis=-1, fill_value=fill_value, bounds_error=False)
for ii in range(2):
if ii == 1:
fill_value = np.array(fill_value)
interp = interp1d(self.x5, self.y225, kind=kind, axis=-1,
fill_value=fill_value, bounds_error=False)
assert_array_almost_equal(interp(10), [[100, 200], [1000, 2000]])
assert_array_almost_equal(interp(-10), [[100, 200], [1000, 2000]])
assert_array_almost_equal(interp([-10, 10]), [[[100, 100],
[200, 200]],
[[1000, 1000],
[2000, 2000]]])
# broadcastable (2, 2) lower, broadcastable (2, 2) upper
fill_value = ([[-100, -200], [-1000, -2000]],
[[100, 200], [1000, 2000]])
for y in (self.y235, self.y325, self.y25):
assert_raises(ValueError, interp1d, self.x5, y, kind=kind,
axis=-1, fill_value=fill_value, bounds_error=False)
for ii in range(2):
if ii == 1:
fill_value = (np.array(fill_value[0]), np.array(fill_value[1]))
interp = interp1d(self.x5, self.y225, kind=kind, axis=-1,
fill_value=fill_value, bounds_error=False)
assert_array_almost_equal(interp(10), [[100, 200], [1000, 2000]])
assert_array_almost_equal(interp(-10), [[-100, -200],
[-1000, -2000]])
assert_array_almost_equal(interp([-10, 10]), [[[-100, 100],
[-200, 200]],
[[-1000, 1000],
[-2000, 2000]]])
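The simplest form of the two-element `fill_value` checked above — one value used below the data range and one above — works like this:

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.arange(10, dtype=float)
# fill_value=(below, above): -100 is used for x < x[0], 100 for x > x[-1]
f = interp1d(x, x, bounds_error=False, fill_value=(-100, 100))
print(f([-5., 4.5, 50.]))  # [-100.    4.5  100. ]
```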
def test_fill_value(self):
# test that two-element fill value works
for kind in ('linear', 'nearest', 'cubic', 'slinear', 'quadratic',
'zero', 'previous', 'next'):
self._check_fill_value(kind)
def test_fill_value_writeable(self):
# backwards compat: fill_value is a public writeable attribute
interp = interp1d(self.x10, self.y10, fill_value=123.0)
assert_equal(interp.fill_value, 123.0)
interp.fill_value = 321.0
assert_equal(interp.fill_value, 321.0)
def _nd_check_interp(self, kind='linear'):
# Check the behavior when the inputs and outputs are multidimensional.
# Multidimensional input.
interp10 = interp1d(self.x10, self.y10, kind=kind)
assert_array_almost_equal(interp10(np.array([[3., 5.], [2., 7.]])),
np.array([[3., 5.], [2., 7.]]))
# Scalar input -> 0-dim scalar array output
assert_(isinstance(interp10(1.2), np.ndarray))
assert_equal(interp10(1.2).shape, ())
# Multidimensional outputs.
interp210 = interp1d(self.x10, self.y210, kind=kind)
assert_array_almost_equal(interp210(1.), np.array([1., 11.]))
assert_array_almost_equal(interp210(np.array([1., 2.])),
np.array([[1., 2.], [11., 12.]]))
interp102 = interp1d(self.x10, self.y102, axis=0, kind=kind)
assert_array_almost_equal(interp102(1.), np.array([2.0, 3.0]))
assert_array_almost_equal(interp102(np.array([1., 3.])),
np.array([[2., 3.], [6., 7.]]))
# Both at the same time!
x_new = np.array([[3., 5.], [2., 7.]])
assert_array_almost_equal(interp210(x_new),
np.array([[[3., 5.], [2., 7.]],
[[13., 15.], [12., 17.]]]))
assert_array_almost_equal(interp102(x_new),
np.array([[[6., 7.], [10., 11.]],
[[4., 5.], [14., 15.]]]))
def _nd_check_shape(self, kind='linear'):
# Check large N-D output shape
a = [4, 5, 6, 7]
y = np.arange(np.prod(a)).reshape(*a)
for n, s in enumerate(a):
x = np.arange(s)
z = interp1d(x, y, axis=n, kind=kind)
assert_array_almost_equal(z(x), y, err_msg=kind)
x2 = np.arange(2*3*1).reshape((2,3,1)) / 12.
b = list(a)
b[n:n+1] = [2,3,1]
assert_array_almost_equal(z(x2).shape, b, err_msg=kind)
def test_nd(self):
for kind in ('linear', 'cubic', 'slinear', 'quadratic', 'nearest',
'zero', 'previous', 'next'):
self._nd_check_interp(kind)
self._nd_check_shape(kind)
def _check_complex(self, dtype=np.complex128, kind='linear'):
x = np.array([1, 2.5, 3, 3.1, 4, 6.4, 7.9, 8.0, 9.5, 10])
y = x * x ** (1 + 2j)
y = y.astype(dtype)
# simple test
c = interp1d(x, y, kind=kind)
assert_array_almost_equal(y[:-1], c(x)[:-1])
# check against interpolating real+imag separately
xi = np.linspace(1, 10, 31)
cr = interp1d(x, y.real, kind=kind)
ci = interp1d(x, y.imag, kind=kind)
assert_array_almost_equal(c(xi).real, cr(xi))
assert_array_almost_equal(c(xi).imag, ci(xi))
def test_complex(self):
for kind in ('linear', 'nearest', 'cubic', 'slinear', 'quadratic',
'zero', 'previous', 'next'):
self._check_complex(np.complex64, kind)
self._check_complex(np.complex128, kind)
@pytest.mark.skipif(IS_PYPY, reason="Test not meaningful on PyPy")
def test_circular_refs(self):
# Test interp1d can be automatically garbage collected
x = np.linspace(0, 1)
y = np.linspace(0, 1)
# Confirm interp can be released from memory after use
with assert_deallocated(interp1d, x, y) as interp:
interp([0.1, 0.2])
del interp
def test_overflow_nearest(self):
# Test that the x range doesn't overflow when given integers as input
for kind in ('nearest', 'previous', 'next'):
x = np.array([0, 50, 127], dtype=np.int8)
ii = interp1d(x, x, kind=kind)
assert_array_almost_equal(ii(x), x)
def test_local_nans(self):
# check that for local interpolation kinds (slinear, zero) a single nan
# only affects its local neighborhood
x = np.arange(10).astype(float)
y = x.copy()
y[6] = np.nan
for kind in ('zero', 'slinear'):
ir = interp1d(x, y, kind=kind)
vals = ir([4.9, 7.0])
assert_(np.isfinite(vals).all())
def test_spline_nans(self):
# Backwards compat: a single nan makes the whole spline interpolation
# return nans in an array of the correct shape. And it doesn't raise,
# just quiet nans because of backcompat.
x = np.arange(8).astype(float)
y = x.copy()
yn = y.copy()
yn[3] = np.nan
for kind in ['quadratic', 'cubic']:
ir = interp1d(x, y, kind=kind)
irn = interp1d(x, yn, kind=kind)
for xnew in (6, [1, 6], [[1, 6], [3, 5]]):
xnew = np.asarray(xnew)
out, outn = ir(x), irn(x)
assert_(np.isnan(outn).all())
assert_equal(out.shape, outn.shape)
def test_all_nans(self):
# regression test for gh-11637: interp1d core dumps with all-nan `x`
x = np.ones(10) * np.nan
y = np.arange(10)
with assert_raises(ValueError):
interp1d(x, y, kind='cubic')
def test_read_only(self):
x = np.arange(0, 10)
y = np.exp(-x / 3.0)
xnew = np.arange(0, 9, 0.1)
# Check both read-only and not read-only:
for xnew_writeable in (True, False):
xnew.flags.writeable = xnew_writeable
x.flags.writeable = False
for kind in ('linear', 'nearest', 'zero', 'slinear', 'quadratic',
'cubic'):
f = interp1d(x, y, kind=kind)
vals = f(xnew)
assert_(np.isfinite(vals).all())
@pytest.mark.parametrize(
"kind", ("linear", "nearest", "nearest-up", "previous", "next")
)
def test_single_value(self, kind):
# https://github.com/scipy/scipy/issues/4043
f = interp1d([1.5], [6], kind=kind, bounds_error=False,
fill_value=(2, 10))
assert_array_equal(f([1, 1.5, 2]), [2, 6, 10])
# check still error if bounds_error=True
f = interp1d([1.5], [6], kind=kind, bounds_error=True)
with assert_raises(ValueError, match="x_new is above"):
f(2.0)
class TestLagrange:
def test_lagrange(self):
p = poly1d([5,2,1,4,3])
xs = np.arange(len(p.coeffs))
ys = p(xs)
pl = lagrange(xs,ys)
assert_array_almost_equal(p.coeffs,pl.coeffs)
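The round-trip checked in `test_lagrange` rests on a basic fact: a degree-n polynomial is determined by n+1 samples, so `lagrange` recovers it exactly (up to floating point). A small sketch:

```python
import numpy as np
from numpy import poly1d
from scipy.interpolate import lagrange

p = poly1d([2, -3, 1])    # 2*x**2 - 3*x + 1
xs = np.arange(3)         # three samples determine a quadratic
pl = lagrange(xs, p(xs))
print(np.allclose(pl.coeffs, p.coeffs))  # True
```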
class TestAkima1DInterpolator:
def test_eval(self):
x = np.arange(0., 11.)
y = np.array([0., 2., 1., 3., 2., 6., 5.5, 5.5, 2.7, 5.1, 3.])
ak = Akima1DInterpolator(x, y)
xi = np.array([0., 0.5, 1., 1.5, 2.5, 3.5, 4.5, 5.1, 6.5, 7.2,
8.6, 9.9, 10.])
yi = np.array([0., 1.375, 2., 1.5, 1.953125, 2.484375,
4.1363636363636366866103344, 5.9803623910336236590978842,
5.5067291516462386624652936, 5.2031367459745245795943447,
4.1796554159017080820603951, 3.4110386597938129327189927,
3.])
assert_allclose(ak(xi), yi)
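As with the evaluation test above, an Akima interpolator always passes through its data points; a minimal sketch:

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator

x = np.arange(5, dtype=float)
y = x**2
ak = Akima1DInterpolator(x, y)
# The interpolant reproduces the node values exactly.
print(np.allclose(ak(x), y))  # True
```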
def test_eval_2d(self):
x = np.arange(0., 11.)
y = np.array([0., 2., 1., 3., 2., 6., 5.5, 5.5, 2.7, 5.1, 3.])
y = np.column_stack((y, 2. * y))
ak = Akima1DInterpolator(x, y)
xi = np.array([0., 0.5, 1., 1.5, 2.5, 3.5, 4.5, 5.1, 6.5, 7.2,
8.6, 9.9, 10.])
yi = np.array([0., 1.375, 2., 1.5, 1.953125, 2.484375,
4.1363636363636366866103344,
5.9803623910336236590978842,
5.5067291516462386624652936,
5.2031367459745245795943447,
4.1796554159017080820603951,
3.4110386597938129327189927, 3.])
yi = np.column_stack((yi, 2. * yi))
assert_allclose(ak(xi), yi)
def test_eval_3d(self):
x = np.arange(0., 11.)
y_ = np.array([0., 2., 1., 3., 2., 6., 5.5, 5.5, 2.7, 5.1, 3.])
y = np.empty((11, 2, 2))
y[:, 0, 0] = y_
y[:, 1, 0] = 2. * y_
y[:, 0, 1] = 3. * y_
y[:, 1, 1] = 4. * y_
ak = Akima1DInterpolator(x, y)
xi = np.array([0., 0.5, 1., 1.5, 2.5, 3.5, 4.5, 5.1, 6.5, 7.2,
8.6, 9.9, 10.])
yi = np.empty((13, 2, 2))
yi_ = np.array([0., 1.375, 2., 1.5, 1.953125, 2.484375,
4.1363636363636366866103344,
5.9803623910336236590978842,
5.5067291516462386624652936,
5.2031367459745245795943447,
4.1796554159017080820603951,
3.4110386597938129327189927, 3.])
yi[:, 0, 0] = yi_
yi[:, 1, 0] = 2. * yi_
yi[:, 0, 1] = 3. * yi_
yi[:, 1, 1] = 4. * yi_
assert_allclose(ak(xi), yi)
def test_degenerate_case_multidimensional(self):
# This test is for issue #5683.
x = np.array([0, 1, 2])
y = np.vstack((x, x**2)).T
ak = Akima1DInterpolator(x, y)
x_eval = np.array([0.5, 1.5])
y_eval = ak(x_eval)
assert_allclose(y_eval, np.vstack((x_eval, x_eval**2)).T)
def test_extend(self):
x = np.arange(0., 11.)
y = np.array([0., 2., 1., 3., 2., 6., 5.5, 5.5, 2.7, 5.1, 3.])
ak = Akima1DInterpolator(x, y)
match = "Extending a 1-D Akima interpolator is not yet implemented"
with pytest.raises(NotImplementedError, match=match):
ak.extend(None, None)
class TestPPolyCommon:
# test basic functionality for PPoly and BPoly
def test_sort_check(self):
c = np.array([[1, 4], [2, 5], [3, 6]])
x = np.array([0, 1, 0.5])
assert_raises(ValueError, PPoly, c, x)
assert_raises(ValueError, BPoly, c, x)
def test_ctor_c(self):
# wrong shape: `c` must be at least 2D
with assert_raises(ValueError):
PPoly([1, 2], [0, 1])
def test_extend(self):
# Test adding new points to the piecewise polynomial
np.random.seed(1234)
order = 3
x = np.unique(np.r_[0, 10 * np.random.rand(30), 10])
c = 2*np.random.rand(order+1, len(x)-1, 2, 3) - 1
for cls in (PPoly, BPoly):
pp = cls(c[:,:9], x[:10])
pp.extend(c[:,9:], x[10:])
pp2 = cls(c[:, 10:], x[10:])
pp2.extend(c[:, :10], x[:10])
pp3 = cls(c, x)
assert_array_equal(pp.c, pp3.c)
assert_array_equal(pp.x, pp3.x)
assert_array_equal(pp2.c, pp3.c)
assert_array_equal(pp2.x, pp3.x)
def test_extend_diff_orders(self):
# Test extending a polynomial with one of a different order
np.random.seed(1234)
x = np.linspace(0, 1, 6)
c = np.random.rand(2, 5)
x2 = np.linspace(1, 2, 6)
c2 = np.random.rand(4, 5)
for cls in (PPoly, BPoly):
pp1 = cls(c, x)
pp2 = cls(c2, x2)
pp_comb = cls(c, x)
pp_comb.extend(c2, x2[1:])
# NB. doesn't match to pp1 at the endpoint, because pp1 is not
# continuous with pp2 as we took random coefs.
xi1 = np.linspace(0, 1, 300, endpoint=False)
xi2 = np.linspace(1, 2, 300)
assert_allclose(pp1(xi1), pp_comb(xi1))
assert_allclose(pp2(xi2), pp_comb(xi2))
def test_extend_descending(self):
np.random.seed(0)
order = 3
x = np.sort(np.random.uniform(0, 10, 20))
c = np.random.rand(order + 1, x.shape[0] - 1, 2, 3)
for cls in (PPoly, BPoly):
p = cls(c, x)
p1 = cls(c[:, :9], x[:10])
p1.extend(c[:, 9:], x[10:])
p2 = cls(c[:, 10:], x[10:])
p2.extend(c[:, :10], x[:10])
assert_array_equal(p1.c, p.c)
assert_array_equal(p1.x, p.x)
assert_array_equal(p2.c, p.c)
assert_array_equal(p2.x, p.x)
def test_shape(self):
np.random.seed(1234)
c = np.random.rand(8, 12, 5, 6, 7)
x = np.sort(np.random.rand(13))
xp = np.random.rand(3, 4)
for cls in (PPoly, BPoly):
p = cls(c, x)
assert_equal(p(xp).shape, (3, 4, 5, 6, 7))
# 'scalars'
for cls in (PPoly, BPoly):
p = cls(c[..., 0, 0, 0], x)
assert_equal(np.shape(p(0.5)), ())
assert_equal(np.shape(p(np.array(0.5))), ())
assert_raises(ValueError, p, np.array([[0.1, 0.2], [0.4]], dtype=object))
def test_complex_coef(self):
np.random.seed(12345)
x = np.sort(np.random.random(13))
c = np.random.random((8, 12)) * (1. + 0.3j)
c_re, c_im = c.real, c.imag
xp = np.random.random(5)
for cls in (PPoly, BPoly):
p, p_re, p_im = cls(c, x), cls(c_re, x), cls(c_im, x)
for nu in [0, 1, 2]:
assert_allclose(p(xp, nu).real, p_re(xp, nu))
assert_allclose(p(xp, nu).imag, p_im(xp, nu))
def test_axis(self):
np.random.seed(12345)
c = np.random.rand(3, 4, 5, 6, 7, 8)
c_s = c.shape
xp = np.random.random((1, 2))
for axis in (0, 1, 2, 3):
m = c.shape[axis+1]
x = np.sort(np.random.rand(m+1))
for cls in (PPoly, BPoly):
p = cls(c, x, axis=axis)
assert_equal(p.c.shape,
c_s[axis:axis+2] + c_s[:axis] + c_s[axis+2:])
res = p(xp)
targ_shape = c_s[:axis] + xp.shape + c_s[2+axis:]
assert_equal(res.shape, targ_shape)
# deriv/antideriv does not drop the axis
for p1 in [cls(c, x, axis=axis).derivative(),
cls(c, x, axis=axis).derivative(2),
cls(c, x, axis=axis).antiderivative(),
cls(c, x, axis=axis).antiderivative(2)]:
assert_equal(p1.axis, p.axis)
# c array needs two axes for the coefficients and intervals, so
# 0 <= axis < c.ndim-1; raise otherwise
for axis in (-1, 4, 5, 6):
for cls in (BPoly, PPoly):
assert_raises(ValueError, cls, **dict(c=c, x=x, axis=axis))
class TestPolySubclassing:
class P(PPoly):
pass
class B(BPoly):
pass
def _make_polynomials(self):
np.random.seed(1234)
x = np.sort(np.random.random(3))
c = np.random.random((4, 2))
return self.P(c, x), self.B(c, x)
def test_derivative(self):
pp, bp = self._make_polynomials()
for p in (pp, bp):
pd = p.derivative()
assert_equal(p.__class__, pd.__class__)
ppa = pp.antiderivative()
assert_equal(pp.__class__, ppa.__class__)
def test_from_spline(self):
np.random.seed(1234)
x = np.sort(np.r_[0, np.random.rand(11), 1])
y = np.random.rand(len(x))
spl = splrep(x, y, s=0)
pp = self.P.from_spline(spl)
assert_equal(pp.__class__, self.P)
def test_conversions(self):
pp, bp = self._make_polynomials()
pp1 = self.P.from_bernstein_basis(bp)
assert_equal(pp1.__class__, self.P)
bp1 = self.B.from_power_basis(pp)
assert_equal(bp1.__class__, self.B)
def test_from_derivatives(self):
x = [0, 1, 2]
y = [[1], [2], [3]]
bp = self.B.from_derivatives(x, y)
assert_equal(bp.__class__, self.B)
class TestPPoly:
def test_simple(self):
c = np.array([[1, 4], [2, 5], [3, 6]])
x = np.array([0, 0.5, 1])
p = PPoly(c, x)
assert_allclose(p(0.3), 1*0.3**2 + 2*0.3 + 3)
assert_allclose(p(0.7), 4*(0.7-0.5)**2 + 5*(0.7-0.5) + 6)
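The convention exercised by `test_simple` is that `PPoly` coefficients are per-interval, highest power first, in the *local* variable `x - x[i]`:

```python
import numpy as np
from scipy.interpolate import PPoly

# On [0, 0.5): p(x) = 1*x**2 + 2*x + 3
# On [0.5, 1]: p(x) = 4*(x-0.5)**2 + 5*(x-0.5) + 6
c = np.array([[1., 4.], [2., 5.], [3., 6.]])
x = np.array([0., 0.5, 1.])
p = PPoly(c, x)
print(float(p(0.7)))  # 4*0.2**2 + 5*0.2 + 6 = 7.16
```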
def test_periodic(self):
c = np.array([[1, 4], [2, 5], [3, 6]])
x = np.array([0, 0.5, 1])
p = PPoly(c, x, extrapolate='periodic')
assert_allclose(p(1.3), 1 * 0.3 ** 2 + 2 * 0.3 + 3)
assert_allclose(p(-0.3), 4 * (0.7 - 0.5) ** 2 + 5 * (0.7 - 0.5) + 6)
assert_allclose(p(1.3, 1), 2 * 0.3 + 2)
assert_allclose(p(-0.3, 1), 8 * (0.7 - 0.5) + 5)
def test_read_only(self):
c = np.array([[1, 4], [2, 5], [3, 6]])
x = np.array([0, 0.5, 1])
xnew = np.array([0, 0.1, 0.2])
PPoly(c, x, extrapolate='periodic')
for writeable in (True, False):
x.flags.writeable = writeable
c.flags.writeable = writeable
f = PPoly(c, x)
vals = f(xnew)
assert_(np.isfinite(vals).all())
def test_descending(self):
def binom_matrix(power):
n = np.arange(power + 1).reshape(-1, 1)
k = np.arange(power + 1)
B = binom(n, k)
return B[::-1, ::-1]
np.random.seed(0)
power = 3
for m in [10, 20, 30]:
x = np.sort(np.random.uniform(0, 10, m + 1))
ca = np.random.uniform(-2, 2, size=(power + 1, m))
h = np.diff(x)
h_powers = h[None, :] ** np.arange(power + 1)[::-1, None]
B = binom_matrix(power)
cap = ca * h_powers
cdp = np.dot(B.T, cap)
cd = cdp / h_powers
pa = PPoly(ca, x, extrapolate=True)
pd = PPoly(cd[:, ::-1], x[::-1], extrapolate=True)
x_test = np.random.uniform(-10, 20, 100)
assert_allclose(pa(x_test), pd(x_test), rtol=1e-13)
assert_allclose(pa(x_test, 1), pd(x_test, 1), rtol=1e-13)
pa_d = pa.derivative()
pd_d = pd.derivative()
assert_allclose(pa_d(x_test), pd_d(x_test), rtol=1e-13)
# Antiderivatives won't be equal because fixing continuity is
# done in the reverse order, but the differences (definite
# integrals) should agree.
pa_i = pa.antiderivative()
pd_i = pd.antiderivative()
for a, b in np.random.uniform(-10, 20, (5, 2)):
int_a = pa.integrate(a, b)
int_d = pd.integrate(a, b)
assert_allclose(int_a, int_d, rtol=1e-13)
assert_allclose(pa_i(b) - pa_i(a), pd_i(b) - pd_i(a),
rtol=1e-13)
roots_d = pd.roots()
roots_a = pa.roots()
assert_allclose(roots_a, np.sort(roots_d), rtol=1e-12)
def test_multi_shape(self):
c = np.random.rand(6, 2, 1, 2, 3)
x = np.array([0, 0.5, 1])
p = PPoly(c, x)
assert_equal(p.x.shape, x.shape)
assert_equal(p.c.shape, c.shape)
assert_equal(p(0.3).shape, c.shape[2:])
assert_equal(p(np.random.rand(5, 6)).shape, (5, 6) + c.shape[2:])
dp = p.derivative()
assert_equal(dp.c.shape, (5, 2, 1, 2, 3))
ip = p.antiderivative()
assert_equal(ip.c.shape, (7, 2, 1, 2, 3))
def test_construct_fast(self):
np.random.seed(1234)
c = np.array([[1, 4], [2, 5], [3, 6]], dtype=float)
x = np.array([0, 0.5, 1])
p = PPoly.construct_fast(c, x)
assert_allclose(p(0.3), 1*0.3**2 + 2*0.3 + 3)
assert_allclose(p(0.7), 4*(0.7-0.5)**2 + 5*(0.7-0.5) + 6)
def test_vs_alternative_implementations(self):
np.random.seed(1234)
c = np.random.rand(3, 12, 22)
x = np.sort(np.r_[0, np.random.rand(11), 1])
p = PPoly(c, x)
xp = np.r_[0.3, 0.5, 0.33, 0.6]
expected = _ppoly_eval_1(c, x, xp)
assert_allclose(p(xp), expected)
expected = _ppoly_eval_2(c[:,:,0], x, xp)
assert_allclose(p(xp)[:,0], expected)
def test_from_spline(self):
np.random.seed(1234)
x = np.sort(np.r_[0, np.random.rand(11), 1])
y = np.random.rand(len(x))
spl = splrep(x, y, s=0)
pp = PPoly.from_spline(spl)
xi = np.linspace(0, 1, 200)
assert_allclose(pp(xi), splev(xi, spl))
# make sure .from_spline accepts BSpline objects
b = BSpline(*spl)
ppp = PPoly.from_spline(b)
assert_allclose(ppp(xi), b(xi))
# BSpline's extrapolate attribute propagates unless overridden
t, c, k = spl
for extrap in (None, True, False):
b = BSpline(t, c, k, extrapolate=extrap)
p = PPoly.from_spline(b)
assert_equal(p.extrapolate, b.extrapolate)
def test_derivative_simple(self):
np.random.seed(1234)
c = np.array([[4, 3, 2, 1]]).T
dc = np.array([[3*4, 2*3, 2]]).T
ddc = np.array([[2*3*4, 1*2*3]]).T
x = np.array([0, 1])
pp = PPoly(c, x)
dpp = PPoly(dc, x)
ddpp = PPoly(ddc, x)
assert_allclose(pp.derivative().c, dpp.c)
assert_allclose(pp.derivative(2).c, ddpp.c)
def test_derivative_eval(self):
np.random.seed(1234)
x = np.sort(np.r_[0, np.random.rand(11), 1])
y = np.random.rand(len(x))
spl = splrep(x, y, s=0)
pp = PPoly.from_spline(spl)
xi = np.linspace(0, 1, 200)
for dx in range(0, 3):
assert_allclose(pp(xi, dx), splev(xi, spl, dx))
def test_derivative(self):
np.random.seed(1234)
x = np.sort(np.r_[0, np.random.rand(11), 1])
y = np.random.rand(len(x))
spl = splrep(x, y, s=0, k=5)
pp = PPoly.from_spline(spl)
xi = np.linspace(0, 1, 200)
for dx in range(0, 10):
assert_allclose(pp(xi, dx), pp.derivative(dx)(xi),
err_msg="dx=%d" % (dx,))
def test_antiderivative_of_constant(self):
# https://github.com/scipy/scipy/issues/4216
p = PPoly([[1.]], [0, 1])
assert_equal(p.antiderivative().c, PPoly([[1], [0]], [0, 1]).c)
assert_equal(p.antiderivative().x, PPoly([[1], [0]], [0, 1]).x)
def test_antiderivative_regression_4355(self):
# https://github.com/scipy/scipy/issues/4355
p = PPoly([[1., 0.5]], [0, 1, 2])
q = p.antiderivative()
assert_equal(q.c, [[1, 0.5], [0, 1]])
assert_equal(q.x, [0, 1, 2])
assert_allclose(p.integrate(0, 2), 1.5)
assert_allclose(q(2) - q(0), 1.5)
def test_antiderivative_simple(self):
np.random.seed(1234)
# [ p1(x) = 3*x**2 + 2*x + 1,
# p2(x) = 1.6875]
c = np.array([[3, 2, 1], [0, 0, 1.6875]]).T
# [ pp1(x) = x**3 + x**2 + x,
# pp2(x) = 1.6875*(x - 0.25) + pp1(0.25)]
ic = np.array([[1, 1, 1, 0], [0, 0, 1.6875, 0.328125]]).T
# [ ppp1(x) = (1/4)*x**4 + (1/3)*x**3 + (1/2)*x**2,
# ppp2(x) = (1.6875/2)*(x - 0.25)**2 + pp1(0.25)*x + ppp1(0.25)]
iic = np.array([[1/4, 1/3, 1/2, 0, 0],
[0, 0, 1.6875/2, 0.328125, 0.037434895833333336]]).T
x = np.array([0, 0.25, 1])
pp = PPoly(c, x)
ipp = pp.antiderivative()
iipp = pp.antiderivative(2)
iipp2 = ipp.antiderivative()
assert_allclose(ipp.x, x)
assert_allclose(ipp.c.T, ic.T)
assert_allclose(iipp.c.T, iic.T)
assert_allclose(iipp2.c.T, iic.T)
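A compact illustration of the inverse relationship the antiderivative tests rely on: `derivative()` undoes `antiderivative()`, recovering the original coefficients (the integration constants introduced for continuity are differentiated away).

```python
import numpy as np
from scipy.interpolate import PPoly

c = np.array([[1., 4.], [2., 5.], [3., 6.]])
x = np.array([0., 0.5, 1.])
p = PPoly(c, x)
q = p.antiderivative().derivative()
print(np.allclose(p.c, q.c))  # True
```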
def test_antiderivative_vs_derivative(self):
np.random.seed(1234)
x = np.linspace(0, 1, 30)**2
y = np.random.rand(len(x))
spl = splrep(x, y, s=0, k=5)
pp = PPoly.from_spline(spl)
for dx in range(0, 10):
ipp = pp.antiderivative(dx)
# check that derivative is inverse op
pp2 = ipp.derivative(dx)
assert_allclose(pp.c, pp2.c)
# check continuity
for k in range(dx):
pp2 = ipp.derivative(k)
r = 1e-13
endpoint = r*pp2.x[:-1] + (1 - r)*pp2.x[1:]
assert_allclose(pp2(pp2.x[1:]), pp2(endpoint),
rtol=1e-7, err_msg="dx=%d k=%d" % (dx, k))
def test_antiderivative_vs_spline(self):
np.random.seed(1234)
x = np.sort(np.r_[0, np.random.rand(11), 1])
y = np.random.rand(len(x))
spl = splrep(x, y, s=0, k=5)
pp = PPoly.from_spline(spl)
for dx in range(0, 10):
pp2 = pp.antiderivative(dx)
spl2 = splantider(spl, dx)
xi = np.linspace(0, 1, 200)
assert_allclose(pp2(xi), splev(xi, spl2),
rtol=1e-7)
def test_antiderivative_continuity(self):
c = np.array([[2, 1, 2, 2], [2, 1, 3, 3]]).T
x = np.array([0, 0.5, 1])
p = PPoly(c, x)
ip = p.antiderivative()
# check continuity
assert_allclose(ip(0.5 - 1e-9), ip(0.5 + 1e-9), rtol=1e-8)
# check that only lowest order coefficients were changed
p2 = ip.derivative()
assert_allclose(p2.c, p.c)
def test_integrate(self):
np.random.seed(1234)
x = np.sort(np.r_[0, np.random.rand(11), 1])
y = np.random.rand(len(x))
spl = splrep(x, y, s=0, k=5)
pp = PPoly.from_spline(spl)
a, b = 0.3, 0.9
ig = pp.integrate(a, b)
ipp = pp.antiderivative()
assert_allclose(ig, ipp(b) - ipp(a))
assert_allclose(ig, splint(a, b, spl))
a, b = -0.3, 0.9
ig = pp.integrate(a, b, extrapolate=True)
assert_allclose(ig, ipp(b) - ipp(a))
assert_(np.isnan(pp.integrate(a, b, extrapolate=False)).all())
def test_integrate_readonly(self):
x = np.array([1, 2, 4])
c = np.array([[0., 0.], [-1., -1.], [2., -0.], [1., 2.]])
for writeable in (True, False):
x.flags.writeable = writeable
P = PPoly(c, x)
vals = P.integrate(1, 4)
assert_(np.isfinite(vals).all())
def test_integrate_periodic(self):
x = np.array([1, 2, 4])
c = np.array([[0., 0.], [-1., -1.], [2., -0.], [1., 2.]])
P = PPoly(c, x, extrapolate='periodic')
I = P.antiderivative()
period_int = I(4) - I(1)
assert_allclose(P.integrate(1, 4), period_int)
assert_allclose(P.integrate(-10, -7), period_int)
assert_allclose(P.integrate(-10, -4), 2 * period_int)
assert_allclose(P.integrate(1.5, 2.5), I(2.5) - I(1.5))
assert_allclose(P.integrate(3.5, 5), I(2) - I(1) + I(4) - I(3.5))
assert_allclose(P.integrate(3.5 + 12, 5 + 12),
I(2) - I(1) + I(4) - I(3.5))
assert_allclose(P.integrate(3.5, 5 + 12),
I(2) - I(1) + I(4) - I(3.5) + 4 * period_int)
assert_allclose(P.integrate(0, -1), I(2) - I(3))
assert_allclose(P.integrate(-9, -10), I(2) - I(3))
assert_allclose(P.integrate(0, -10), I(2) - I(3) - 3 * period_int)
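The periodic-integration behaviour exercised above can be sketched outside the test suite with a minimal one-piece polynomial (names here are illustrative, not part of the tests): p(x) = x on [0, 1] repeated periodically integrates to 0.5 per period.

```python
import numpy as np
from scipy.interpolate import PPoly

# p(x) = x on the single interval [0, 1]; coefficients are highest power first.
p = PPoly(np.array([[1.0], [0.0]]), np.array([0.0, 1.0]),
          extrapolate='periodic')

one_period = p.integrate(0, 1)    # integral of x over one period: 0.5
three_periods = p.integrate(0, 3) # three full periods: 1.5
```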
def test_roots(self):
x = np.linspace(0, 1, 31)**2
y = np.sin(30*x)
spl = splrep(x, y, s=0, k=3)
pp = PPoly.from_spline(spl)
r = pp.roots()
r = r[(r >= 0 - 1e-15) & (r <= 1 + 1e-15)]
assert_allclose(r, sproot(spl), atol=1e-15)
def test_roots_idzero(self):
# Roots for piecewise polynomials with identically zero
# sections.
c = np.array([[-1, 0.25], [0, 0], [-1, 0.25]]).T
x = np.array([0, 0.4, 0.6, 1.0])
pp = PPoly(c, x)
assert_array_equal(pp.roots(),
[0.25, 0.4, np.nan, 0.6 + 0.25])
# ditto for p.solve(const) with sections identically equal const
const = 2.
c1 = c.copy()
c1[1, :] += const
pp1 = PPoly(c1, x)
assert_array_equal(pp1.solve(const),
[0.25, 0.4, np.nan, 0.6 + 0.25])
def test_roots_all_zero(self):
# test the code path for the polynomial being identically zero everywhere
c = [[0], [0]]
x = [0, 1]
p = PPoly(c, x)
assert_array_equal(p.roots(), [0, np.nan])
assert_array_equal(p.solve(0), [0, np.nan])
assert_array_equal(p.solve(1), [])
c = [[0, 0], [0, 0]]
x = [0, 1, 2]
p = PPoly(c, x)
assert_array_equal(p.roots(), [0, np.nan, 1, np.nan])
assert_array_equal(p.solve(0), [0, np.nan, 1, np.nan])
assert_array_equal(p.solve(1), [])
def test_roots_repeated(self):
# Check roots repeated in multiple sections are reported only
# once.
# [(x + 1)**2 - 1, -x**2] ; x == 0 is a repeated root
c = np.array([[1, 0, -1], [-1, 0, 0]]).T
x = np.array([-1, 0, 1])
pp = PPoly(c, x)
assert_array_equal(pp.roots(), [-2, 0])
assert_array_equal(pp.roots(extrapolate=False), [0])
def test_roots_discont(self):
# Check that a discontinuity across zero is reported as root
c = np.array([[1], [-1]]).T
x = np.array([0, 0.5, 1])
pp = PPoly(c, x)
assert_array_equal(pp.roots(), [0.5])
assert_array_equal(pp.roots(discontinuity=False), [])
# ditto for a discontinuity across y:
assert_array_equal(pp.solve(0.5), [0.5])
assert_array_equal(pp.solve(0.5, discontinuity=False), [])
assert_array_equal(pp.solve(1.5), [])
assert_array_equal(pp.solve(1.5, discontinuity=False), [])
def test_roots_random(self):
# Check high-order polynomials with random coefficients
np.random.seed(1234)
num = 0
for extrapolate in (True, False):
for order in range(0, 20):
x = np.unique(np.r_[0, 10 * np.random.rand(30), 10])
c = 2*np.random.rand(order+1, len(x)-1, 2, 3) - 1
pp = PPoly(c, x)
for y in [0, np.random.random()]:
r = pp.solve(y, discontinuity=False, extrapolate=extrapolate)
for i in range(2):
for j in range(3):
rr = r[i,j]
if rr.size > 0:
# Check that the reported roots indeed are roots
num += rr.size
val = pp(rr, extrapolate=extrapolate)[:,i,j]
cmpval = pp(rr, nu=1,
extrapolate=extrapolate)[:,i,j]
msg = f"({extrapolate!r}) r = {repr(rr)}"
assert_allclose((val-y) / cmpval, 0, atol=1e-7,
err_msg=msg)
# Check that we checked a number of roots
assert_(num > 100, repr(num))
def test_roots_croots(self):
# Test the complex root finding algorithm
np.random.seed(1234)
for k in range(1, 15):
c = np.random.rand(k, 1, 130)
if k == 3:
# add a case with zero discriminant
c[:,0,0] = 1, 2, 1
for y in [0, np.random.random()]:
w = np.empty(c.shape, dtype=complex)
_ppoly._croots_poly1(c, w)
if k == 1:
assert_(np.isnan(w).all())
continue
res = 0
cres = 0
for i in range(k):
res += c[i,None] * w**(k-1-i)
cres += abs(c[i,None] * w**(k-1-i))
with np.errstate(invalid='ignore'):
res /= cres
res = res.ravel()
res = res[~np.isnan(res)]
assert_allclose(res, 0, atol=1e-10)
def test_extrapolate_attr(self):
# [ 1 - x**2 ]
c = np.array([[-1, 0, 1]]).T
x = np.array([0, 1])
for extrapolate in [True, False, None]:
pp = PPoly(c, x, extrapolate=extrapolate)
pp_d = pp.derivative()
pp_i = pp.antiderivative()
if extrapolate is False:
assert_(np.isnan(pp([-0.1, 1.1])).all())
assert_(np.isnan(pp_i([-0.1, 1.1])).all())
assert_(np.isnan(pp_d([-0.1, 1.1])).all())
assert_equal(pp.roots(), [1])
else:
assert_allclose(pp([-0.1, 1.1]), [1-0.1**2, 1-1.1**2])
assert_(not np.isnan(pp_i([-0.1, 1.1])).any())
assert_(not np.isnan(pp_d([-0.1, 1.1])).any())
assert_allclose(pp.roots(), [1, -1])
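The `roots`/`solve` machinery tested in this class can be illustrated with a small sketch (a hypothetical example, not taken from the suite): `roots()` finds where the polynomial is zero, and `solve(y)` finds where it equals an arbitrary constant.

```python
import numpy as np
from scipy.interpolate import PPoly

# p(x) = x**2 - 4 on the single interval [0, 3]; its root there is x = 2.
p = PPoly(np.array([[1.0], [0.0], [-4.0]]), np.array([0.0, 3.0]))

r = p.roots(extrapolate=False)        # zeros of p inside [0, 3]
s = p.solve(-3.0, extrapolate=False)  # solutions of x**2 - 4 == -3, i.e. x = 1
```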
class TestBPoly:
def test_simple(self):
x = [0, 1]
c = [[3]]
bp = BPoly(c, x)
assert_allclose(bp(0.1), 3.)
def test_simple2(self):
x = [0, 1]
c = [[3], [1]]
bp = BPoly(c, x) # 3*(1-x) + 1*x
assert_allclose(bp(0.1), 3*0.9 + 1.*0.1)
def test_simple3(self):
x = [0, 1]
c = [[3], [1], [4]]
bp = BPoly(c, x) # 3 * (1-x)**2 + 2 * x (1-x) + 4 * x**2
assert_allclose(bp(0.2),
3 * 0.8*0.8 + 1 * 2*0.2*0.8 + 4 * 0.2*0.2)
def test_simple4(self):
x = [0, 1]
c = [[1], [1], [1], [2]]
bp = BPoly(c, x)
assert_allclose(bp(0.3), 0.7**3 +
3 * 0.7**2 * 0.3 +
3 * 0.7 * 0.3**2 +
2 * 0.3**3)
def test_simple5(self):
x = [0, 1]
c = [[1], [1], [8], [2], [1]]
bp = BPoly(c, x)
assert_allclose(bp(0.3), 0.7**4 +
4 * 0.7**3 * 0.3 +
8 * 6 * 0.7**2 * 0.3**2 +
2 * 4 * 0.7 * 0.3**3 +
0.3**4)
def test_periodic(self):
x = [0, 1, 3]
c = [[3, 0], [0, 0], [0, 2]]
# [3*(1-x)**2, 2*((x-1)/2)**2]
bp = BPoly(c, x, extrapolate='periodic')
assert_allclose(bp(3.4), 3 * 0.6**2)
assert_allclose(bp(-1.3), 2 * (0.7/2)**2)
assert_allclose(bp(3.4, 1), -6 * 0.6)
assert_allclose(bp(-1.3, 1), 2 * (0.7/2))
def test_descending(self):
np.random.seed(0)
power = 3
for m in [10, 20, 30]:
x = np.sort(np.random.uniform(0, 10, m + 1))
ca = np.random.uniform(-0.1, 0.1, size=(power + 1, m))
# We need only to flip coefficients to get it right!
cd = ca[::-1].copy()
pa = BPoly(ca, x, extrapolate=True)
pd = BPoly(cd[:, ::-1], x[::-1], extrapolate=True)
x_test = np.random.uniform(-10, 20, 100)
assert_allclose(pa(x_test), pd(x_test), rtol=1e-13)
assert_allclose(pa(x_test, 1), pd(x_test, 1), rtol=1e-13)
pa_d = pa.derivative()
pd_d = pd.derivative()
assert_allclose(pa_d(x_test), pd_d(x_test), rtol=1e-13)
# Antiderivatives won't be equal because fixing continuity is
# done in the reverse order, but surely the differences should be
# equal.
pa_i = pa.antiderivative()
pd_i = pd.antiderivative()
for a, b in np.random.uniform(-10, 20, (5, 2)):
int_a = pa.integrate(a, b)
int_d = pd.integrate(a, b)
assert_allclose(int_a, int_d, rtol=1e-12)
assert_allclose(pa_i(b) - pa_i(a), pd_i(b) - pd_i(a),
rtol=1e-12)
def test_multi_shape(self):
c = np.random.rand(6, 2, 1, 2, 3)
x = np.array([0, 0.5, 1])
p = BPoly(c, x)
assert_equal(p.x.shape, x.shape)
assert_equal(p.c.shape, c.shape)
assert_equal(p(0.3).shape, c.shape[2:])
assert_equal(p(np.random.rand(5,6)).shape,
(5,6)+c.shape[2:])
dp = p.derivative()
assert_equal(dp.c.shape, (5, 2, 1, 2, 3))
def test_interval_length(self):
x = [0, 2]
c = [[3], [1], [4]]
bp = BPoly(c, x)
xval = 0.1
s = xval / 2 # s = (x - xa) / (xb - xa)
assert_allclose(bp(xval), 3 * (1-s)*(1-s) + 1 * 2*s*(1-s) + 4 * s*s)
def test_two_intervals(self):
x = [0, 1, 3]
c = [[3, 0], [0, 0], [0, 2]]
bp = BPoly(c, x) # [3*(1-x)**2, 2*((x-1)/2)**2]
assert_allclose(bp(0.4), 3 * 0.6*0.6)
assert_allclose(bp(1.7), 2 * (0.7/2)**2)
def test_extrapolate_attr(self):
x = [0, 2]
c = [[3], [1], [4]]
bp = BPoly(c, x)
for extrapolate in (True, False, None):
bp = BPoly(c, x, extrapolate=extrapolate)
bp_d = bp.derivative()
if extrapolate is False:
assert_(np.isnan(bp([-0.1, 2.1])).all())
assert_(np.isnan(bp_d([-0.1, 2.1])).all())
else:
assert_(not np.isnan(bp([-0.1, 2.1])).any())
assert_(not np.isnan(bp_d([-0.1, 2.1])).any())
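For reference, the coefficient convention these BPoly tests rely on: `c[j]` weights the Bernstein basis polynomial b_{j,k}(x) = C(k,j) * s**j * (1-s)**(k-j) with s scaled to the interval. A minimal sketch (illustrative only) selecting a single basis polynomial:

```python
import numpy as np
from scipy.interpolate import BPoly

# Coefficients [1, 0, 0] on [0, 1] pick out b_{0,2}(x) = (1 - x)**2 alone.
bp = BPoly(np.array([[1.0], [0.0], [0.0]]), np.array([0.0, 1.0]))

x = np.linspace(0, 1, 5)
vals = bp(x)  # should reproduce (1 - x)**2 pointwise
```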
class TestBPolyCalculus:
def test_derivative(self):
x = [0, 1, 3]
c = [[3, 0], [0, 0], [0, 2]]
bp = BPoly(c, x) # [3*(1-x)**2, 2*((x-1)/2)**2]
bp_der = bp.derivative()
assert_allclose(bp_der(0.4), -6*(0.6))
assert_allclose(bp_der(1.7), 0.7)
# derivatives in-place
assert_allclose([bp(0.4, nu=1), bp(0.4, nu=2), bp(0.4, nu=3)],
[-6*(1-0.4), 6., 0.])
assert_allclose([bp(1.7, nu=1), bp(1.7, nu=2), bp(1.7, nu=3)],
[0.7, 1., 0])
def test_derivative_ppoly(self):
# make sure it's consistent w/ power basis
np.random.seed(1234)
m, k = 5, 8 # number of intervals, order
x = np.sort(np.random.random(m))
c = np.random.random((k, m-1))
bp = BPoly(c, x)
pp = PPoly.from_bernstein_basis(bp)
for d in range(k):
bp = bp.derivative()
pp = pp.derivative()
xp = np.linspace(x[0], x[-1], 21)
assert_allclose(bp(xp), pp(xp))
def test_deriv_inplace(self):
np.random.seed(1234)
m, k = 5, 8 # number of intervals, order
x = np.sort(np.random.random(m))
c = np.random.random((k, m-1))
# test both real and complex coefficients
for cc in [c.copy(), c*(1. + 2.j)]:
bp = BPoly(cc, x)
xp = np.linspace(x[0], x[-1], 21)
for i in range(k):
assert_allclose(bp(xp, i), bp.derivative(i)(xp))
def test_antiderivative_simple(self):
# f(x) = x for x \in [0, 1),
# (x-1)/2 for x \in [1, 3]
#
# antiderivative is then
# F(x) = x**2 / 2 for x \in [0, 1),
# 0.5*x*(x/2 - 1) + A for x \in [1, 3]
# where A = 3/4 for continuity at x = 1.
x = [0, 1, 3]
c = [[0, 0], [1, 1]]
bp = BPoly(c, x)
bi = bp.antiderivative()
xx = np.linspace(0, 3, 11)
assert_allclose(bi(xx),
np.where(xx < 1, xx**2 / 2.,
0.5 * xx * (xx/2. - 1) + 3./4),
atol=1e-12, rtol=1e-12)
def test_der_antider(self):
np.random.seed(1234)
x = np.sort(np.random.random(11))
c = np.random.random((4, 10, 2, 3))
bp = BPoly(c, x)
xx = np.linspace(x[0], x[-1], 100)
assert_allclose(bp.antiderivative().derivative()(xx),
bp(xx), atol=1e-12, rtol=1e-12)
def test_antider_ppoly(self):
np.random.seed(1234)
x = np.sort(np.random.random(11))
c = np.random.random((4, 10, 2, 3))
bp = BPoly(c, x)
pp = PPoly.from_bernstein_basis(bp)
xx = np.linspace(x[0], x[-1], 10)
assert_allclose(bp.antiderivative(2)(xx),
pp.antiderivative(2)(xx), atol=1e-12, rtol=1e-12)
def test_antider_continuous(self):
np.random.seed(1234)
x = np.sort(np.random.random(11))
c = np.random.random((4, 10))
bp = BPoly(c, x).antiderivative()
xx = bp.x[1:-1]
assert_allclose(bp(xx - 1e-14),
bp(xx + 1e-14), atol=1e-12, rtol=1e-12)
def test_integrate(self):
np.random.seed(1234)
x = np.sort(np.random.random(11))
c = np.random.random((4, 10))
bp = BPoly(c, x)
pp = PPoly.from_bernstein_basis(bp)
assert_allclose(bp.integrate(0, 1),
pp.integrate(0, 1), atol=1e-12, rtol=1e-12)
def test_integrate_extrap(self):
c = [[1]]
x = [0, 1]
b = BPoly(c, x)
# default is extrapolate=True
assert_allclose(b.integrate(0, 2), 2., atol=1e-14)
# .integrate argument overrides self.extrapolate
b1 = BPoly(c, x, extrapolate=False)
assert_(np.isnan(b1.integrate(0, 2)))
assert_allclose(b1.integrate(0, 2, extrapolate=True), 2., atol=1e-14)
def test_integrate_periodic(self):
x = np.array([1, 2, 4])
c = np.array([[0., 0.], [-1., -1.], [2., -0.], [1., 2.]])
P = BPoly.from_power_basis(PPoly(c, x), extrapolate='periodic')
I = P.antiderivative()
period_int = I(4) - I(1)
assert_allclose(P.integrate(1, 4), period_int)
assert_allclose(P.integrate(-10, -7), period_int)
assert_allclose(P.integrate(-10, -4), 2 * period_int)
assert_allclose(P.integrate(1.5, 2.5), I(2.5) - I(1.5))
assert_allclose(P.integrate(3.5, 5), I(2) - I(1) + I(4) - I(3.5))
assert_allclose(P.integrate(3.5 + 12, 5 + 12),
I(2) - I(1) + I(4) - I(3.5))
assert_allclose(P.integrate(3.5, 5 + 12),
I(2) - I(1) + I(4) - I(3.5) + 4 * period_int)
assert_allclose(P.integrate(0, -1), I(2) - I(3))
assert_allclose(P.integrate(-9, -10), I(2) - I(3))
assert_allclose(P.integrate(0, -10), I(2) - I(3) - 3 * period_int)
def test_antider_neg(self):
        # .derivative(-nu) ==> .antiderivative(nu) and vice versa
c = [[1]]
x = [0, 1]
b = BPoly(c, x)
xx = np.linspace(0, 1, 21)
assert_allclose(b.derivative(-1)(xx), b.antiderivative()(xx),
atol=1e-12, rtol=1e-12)
assert_allclose(b.derivative(1)(xx), b.antiderivative(-1)(xx),
atol=1e-12, rtol=1e-12)
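The negative-order dispatch checked in `test_antider_neg` can be seen directly on the simplest possible case (a sketch, not part of the suite): the antiderivative of the constant 1, pinned to zero at the left breakpoint, is x.

```python
import numpy as np
from scipy.interpolate import BPoly

b = BPoly([[1.0]], [0.0, 1.0])  # the constant polynomial 1 on [0, 1]
xx = np.linspace(0, 1, 11)

# A negative order simply dispatches to the inverse operation.
same = np.allclose(b.derivative(-1)(xx), b.antiderivative()(xx))
anti = b.antiderivative()(xx)  # integral of 1 starting from x = 0 is x
```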
class TestPolyConversions:
def test_bp_from_pp(self):
x = [0, 1, 3]
c = [[3, 2], [1, 8], [4, 3]]
pp = PPoly(c, x)
bp = BPoly.from_power_basis(pp)
pp1 = PPoly.from_bernstein_basis(bp)
xp = [0.1, 1.4]
assert_allclose(pp(xp), bp(xp))
assert_allclose(pp(xp), pp1(xp))
def test_bp_from_pp_random(self):
np.random.seed(1234)
m, k = 5, 8 # number of intervals, order
x = np.sort(np.random.random(m))
c = np.random.random((k, m-1))
pp = PPoly(c, x)
bp = BPoly.from_power_basis(pp)
pp1 = PPoly.from_bernstein_basis(bp)
xp = np.linspace(x[0], x[-1], 21)
assert_allclose(pp(xp), bp(xp))
assert_allclose(pp(xp), pp1(xp))
def test_pp_from_bp(self):
x = [0, 1, 3]
c = [[3, 3], [1, 1], [4, 2]]
bp = BPoly(c, x)
pp = PPoly.from_bernstein_basis(bp)
bp1 = BPoly.from_power_basis(pp)
xp = [0.1, 1.4]
assert_allclose(bp(xp), pp(xp))
assert_allclose(bp(xp), bp1(xp))
def test_broken_conversions(self):
# regression test for gh-10597: from_power_basis only accepts PPoly etc.
x = [0, 1, 3]
c = [[3, 3], [1, 1], [4, 2]]
pp = PPoly(c, x)
with assert_raises(TypeError):
PPoly.from_bernstein_basis(pp)
bp = BPoly(c, x)
with assert_raises(TypeError):
BPoly.from_power_basis(bp)
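A hand-checkable instance of the conversions exercised above (illustrative, not from the suite): the linear polynomial x in the power basis becomes the Bernstein coefficients [0, 1], since x = 0*(1-x) + 1*x.

```python
import numpy as np
from scipy.interpolate import PPoly, BPoly

# p(x) = x on [0, 1] in the power basis (highest power first).
pp = PPoly(np.array([[1.0], [0.0]]), np.array([0.0, 1.0]))
bp = BPoly.from_power_basis(pp)

xs = np.linspace(0, 1, 7)
agree = np.allclose(bp(xs), pp(xs))  # same polynomial in either basis
```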
class TestBPolyFromDerivatives:
def test_make_poly_1(self):
c1 = BPoly._construct_from_derivatives(0, 1, [2], [3])
assert_allclose(c1, [2., 3.])
def test_make_poly_2(self):
c1 = BPoly._construct_from_derivatives(0, 1, [1, 0], [1])
assert_allclose(c1, [1., 1., 1.])
# f'(0) = 3
c2 = BPoly._construct_from_derivatives(0, 1, [2, 3], [1])
assert_allclose(c2, [2., 7./2, 1.])
# f'(1) = 3
c3 = BPoly._construct_from_derivatives(0, 1, [2], [1, 3])
assert_allclose(c3, [2., -0.5, 1.])
def test_make_poly_3(self):
# f'(0)=2, f''(0)=3
c1 = BPoly._construct_from_derivatives(0, 1, [1, 2, 3], [4])
assert_allclose(c1, [1., 5./3, 17./6, 4.])
# f'(1)=2, f''(1)=3
c2 = BPoly._construct_from_derivatives(0, 1, [1], [4, 2, 3])
assert_allclose(c2, [1., 19./6, 10./3, 4.])
# f'(0)=2, f'(1)=3
c3 = BPoly._construct_from_derivatives(0, 1, [1, 2], [4, 3])
assert_allclose(c3, [1., 5./3, 3., 4.])
def test_make_poly_12(self):
np.random.seed(12345)
ya = np.r_[0, np.random.random(5)]
yb = np.r_[0, np.random.random(5)]
c = BPoly._construct_from_derivatives(0, 1, ya, yb)
pp = BPoly(c[:, None], [0, 1])
for j in range(6):
assert_allclose([pp(0.), pp(1.)], [ya[j], yb[j]])
pp = pp.derivative()
def test_raise_degree(self):
np.random.seed(12345)
x = [0, 1]
k, d = 8, 5
c = np.random.random((k, 1, 2, 3, 4))
bp = BPoly(c, x)
c1 = BPoly._raise_degree(c, d)
bp1 = BPoly(c1, x)
xp = np.linspace(0, 1, 11)
assert_allclose(bp(xp), bp1(xp))
def test_xi_yi(self):
assert_raises(ValueError, BPoly.from_derivatives, [0, 1], [0])
def test_coords_order(self):
xi = [0, 0, 1]
yi = [[0], [0], [0]]
assert_raises(ValueError, BPoly.from_derivatives, xi, yi)
def test_zeros(self):
xi = [0, 1, 2, 3]
yi = [[0, 0], [0], [0, 0], [0, 0]] # NB: will have to raise the degree
pp = BPoly.from_derivatives(xi, yi)
assert_(pp.c.shape == (4, 3))
ppd = pp.derivative()
for xp in [0., 0.1, 1., 1.1, 1.9, 2., 2.5]:
assert_allclose([pp(xp), ppd(xp)], [0., 0.])
def _make_random_mk(self, m, k):
# k derivatives at each breakpoint
np.random.seed(1234)
xi = np.asarray([1. * j**2 for j in range(m+1)])
yi = [np.random.random(k) for j in range(m+1)]
return xi, yi
def test_random_12(self):
m, k = 5, 12
xi, yi = self._make_random_mk(m, k)
pp = BPoly.from_derivatives(xi, yi)
for order in range(k//2):
assert_allclose(pp(xi), [yy[order] for yy in yi])
pp = pp.derivative()
def test_order_zero(self):
m, k = 5, 12
xi, yi = self._make_random_mk(m, k)
assert_raises(ValueError, BPoly.from_derivatives,
**dict(xi=xi, yi=yi, orders=0))
def test_orders_too_high(self):
m, k = 5, 12
xi, yi = self._make_random_mk(m, k)
BPoly.from_derivatives(xi, yi, orders=2*k-1) # this is still ok
assert_raises(ValueError, BPoly.from_derivatives, # but this is not
**dict(xi=xi, yi=yi, orders=2*k))
def test_orders_global(self):
m, k = 5, 12
xi, yi = self._make_random_mk(m, k)
# ok, this is confusing. Local polynomials will be of the order 5
# which means that up to the 2nd derivatives will be used at each point
order = 5
pp = BPoly.from_derivatives(xi, yi, orders=order)
for j in range(order//2+1):
assert_allclose(pp(xi[1:-1] - 1e-12), pp(xi[1:-1] + 1e-12))
pp = pp.derivative()
assert_(not np.allclose(pp(xi[1:-1] - 1e-12), pp(xi[1:-1] + 1e-12)))
# now repeat with `order` being even: on each interval, it uses
# order//2 'derivatives' @ the right-hand endpoint and
        # order//2+1 'derivatives' @ the left-hand endpoint
order = 6
pp = BPoly.from_derivatives(xi, yi, orders=order)
for j in range(order//2):
assert_allclose(pp(xi[1:-1] - 1e-12), pp(xi[1:-1] + 1e-12))
pp = pp.derivative()
assert_(not np.allclose(pp(xi[1:-1] - 1e-12), pp(xi[1:-1] + 1e-12)))
def test_orders_local(self):
m, k = 7, 12
xi, yi = self._make_random_mk(m, k)
orders = [o + 1 for o in range(m)]
for i, x in enumerate(xi[1:-1]):
pp = BPoly.from_derivatives(xi, yi, orders=orders)
for j in range(orders[i] // 2 + 1):
assert_allclose(pp(x - 1e-12), pp(x + 1e-12))
pp = pp.derivative()
assert_(not np.allclose(pp(x - 1e-12), pp(x + 1e-12)))
def test_yi_trailing_dims(self):
m, k = 7, 5
xi = np.sort(np.random.random(m+1))
yi = np.random.random((m+1, k, 6, 7, 8))
pp = BPoly.from_derivatives(xi, yi)
assert_equal(pp.c.shape, (2*k, m, 6, 7, 8))
def test_gh_5430(self):
# At least one of these raises an error unless gh-5430 is
# fixed. In py2k an int is implemented using a C long, so
# which one fails depends on your system. In py3k there is only
# one arbitrary precision integer type, so both should fail.
orders = np.int32(1)
p = BPoly.from_derivatives([0, 1], [[0], [0]], orders=orders)
assert_almost_equal(p(0), 0)
orders = np.int64(1)
p = BPoly.from_derivatives([0, 1], [[0], [0]], orders=orders)
assert_almost_equal(p(0), 0)
orders = 1
# This worked before; make sure it still works
p = BPoly.from_derivatives([0, 1], [[0], [0]], orders=orders)
assert_almost_equal(p(0), 0)
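A worked instance of `from_derivatives` Hermite interpolation (a sketch with made-up data, not from the tests): each entry of `yi` lists the function value followed by successive derivatives at the matching breakpoint.

```python
import numpy as np
from scipy.interpolate import BPoly

# Cubic Hermite data: f(0) = 0, f'(0) = 1 and f(1) = 1, f'(1) = 1.
# The unique cubic matching these four conditions is f(x) = x itself.
bp = BPoly.from_derivatives([0, 1], [[0, 1], [1, 1]])

xx = np.linspace(0, 1, 9)
vals = bp(xx)
```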
class TestNdPPoly:
def test_simple_1d(self):
np.random.seed(1234)
c = np.random.rand(4, 5)
x = np.linspace(0, 1, 5+1)
xi = np.random.rand(200)
p = NdPPoly(c, (x,))
v1 = p((xi,))
v2 = _ppoly_eval_1(c[:,:,None], x, xi).ravel()
assert_allclose(v1, v2)
def test_simple_2d(self):
np.random.seed(1234)
c = np.random.rand(4, 5, 6, 7)
x = np.linspace(0, 1, 6+1)
y = np.linspace(0, 1, 7+1)**2
xi = np.random.rand(200)
yi = np.random.rand(200)
v1 = np.empty([len(xi), 1], dtype=c.dtype)
v1.fill(np.nan)
_ppoly.evaluate_nd(c.reshape(4*5, 6*7, 1),
(x, y),
np.array([4, 5], dtype=np.intc),
np.c_[xi, yi],
np.array([0, 0], dtype=np.intc),
1,
v1)
v1 = v1.ravel()
v2 = _ppoly2d_eval(c, (x, y), xi, yi)
assert_allclose(v1, v2)
p = NdPPoly(c, (x, y))
for nu in (None, (0, 0), (0, 1), (1, 0), (2, 3), (9, 2)):
v1 = p(np.c_[xi, yi], nu=nu)
v2 = _ppoly2d_eval(c, (x, y), xi, yi, nu=nu)
assert_allclose(v1, v2, err_msg=repr(nu))
def test_simple_3d(self):
np.random.seed(1234)
c = np.random.rand(4, 5, 6, 7, 8, 9)
x = np.linspace(0, 1, 7+1)
y = np.linspace(0, 1, 8+1)**2
z = np.linspace(0, 1, 9+1)**3
xi = np.random.rand(40)
yi = np.random.rand(40)
zi = np.random.rand(40)
p = NdPPoly(c, (x, y, z))
for nu in (None, (0, 0, 0), (0, 1, 0), (1, 0, 0), (2, 3, 0),
(6, 0, 2)):
v1 = p((xi, yi, zi), nu=nu)
v2 = _ppoly3d_eval(c, (x, y, z), xi, yi, zi, nu=nu)
assert_allclose(v1, v2, err_msg=repr(nu))
def test_simple_4d(self):
np.random.seed(1234)
c = np.random.rand(4, 5, 6, 7, 8, 9, 10, 11)
x = np.linspace(0, 1, 8+1)
y = np.linspace(0, 1, 9+1)**2
z = np.linspace(0, 1, 10+1)**3
u = np.linspace(0, 1, 11+1)**4
xi = np.random.rand(20)
yi = np.random.rand(20)
zi = np.random.rand(20)
ui = np.random.rand(20)
p = NdPPoly(c, (x, y, z, u))
v1 = p((xi, yi, zi, ui))
v2 = _ppoly4d_eval(c, (x, y, z, u), xi, yi, zi, ui)
assert_allclose(v1, v2)
def test_deriv_1d(self):
np.random.seed(1234)
c = np.random.rand(4, 5)
x = np.linspace(0, 1, 5+1)
p = NdPPoly(c, (x,))
# derivative
dp = p.derivative(nu=[1])
p1 = PPoly(c, x)
dp1 = p1.derivative()
assert_allclose(dp.c, dp1.c)
# antiderivative
dp = p.antiderivative(nu=[2])
p1 = PPoly(c, x)
dp1 = p1.antiderivative(2)
assert_allclose(dp.c, dp1.c)
def test_deriv_3d(self):
np.random.seed(1234)
c = np.random.rand(4, 5, 6, 7, 8, 9)
x = np.linspace(0, 1, 7+1)
y = np.linspace(0, 1, 8+1)**2
z = np.linspace(0, 1, 9+1)**3
p = NdPPoly(c, (x, y, z))
# differentiate vs x
p1 = PPoly(c.transpose(0, 3, 1, 2, 4, 5), x)
dp = p.derivative(nu=[2])
dp1 = p1.derivative(2)
assert_allclose(dp.c,
dp1.c.transpose(0, 2, 3, 1, 4, 5))
# antidifferentiate vs y
p1 = PPoly(c.transpose(1, 4, 0, 2, 3, 5), y)
dp = p.antiderivative(nu=[0, 1, 0])
dp1 = p1.antiderivative(1)
assert_allclose(dp.c,
dp1.c.transpose(2, 0, 3, 4, 1, 5))
# differentiate vs z
p1 = PPoly(c.transpose(2, 5, 0, 1, 3, 4), z)
dp = p.derivative(nu=[0, 0, 3])
dp1 = p1.derivative(3)
assert_allclose(dp.c,
dp1.c.transpose(2, 3, 0, 4, 5, 1))
def test_deriv_3d_simple(self):
# Integrate to obtain function x y**2 z**4 / (2! 4!)
c = np.ones((1, 1, 1, 3, 4, 5))
x = np.linspace(0, 1, 3+1)**1
y = np.linspace(0, 1, 4+1)**2
z = np.linspace(0, 1, 5+1)**3
p = NdPPoly(c, (x, y, z))
ip = p.antiderivative((1, 0, 4))
ip = ip.antiderivative((0, 2, 0))
xi = np.random.rand(20)
yi = np.random.rand(20)
zi = np.random.rand(20)
assert_allclose(ip((xi, yi, zi)),
xi * yi**2 * zi**4 / (gamma(3)*gamma(5)))
def test_integrate_2d(self):
np.random.seed(1234)
c = np.random.rand(4, 5, 16, 17)
x = np.linspace(0, 1, 16+1)**1
y = np.linspace(0, 1, 17+1)**2
# make continuously differentiable so that nquad() has an
# easier time
c = c.transpose(0, 2, 1, 3)
cx = c.reshape(c.shape[0], c.shape[1], -1).copy()
_ppoly.fix_continuity(cx, x, 2)
c = cx.reshape(c.shape)
c = c.transpose(0, 2, 1, 3)
c = c.transpose(1, 3, 0, 2)
cx = c.reshape(c.shape[0], c.shape[1], -1).copy()
_ppoly.fix_continuity(cx, y, 2)
c = cx.reshape(c.shape)
c = c.transpose(2, 0, 3, 1).copy()
# Check integration
p = NdPPoly(c, (x, y))
for ranges in [[(0, 1), (0, 1)],
[(0, 0.5), (0, 1)],
[(0, 1), (0, 0.5)],
[(0.3, 0.7), (0.6, 0.2)]]:
ig = p.integrate(ranges)
ig2, err2 = nquad(lambda x, y: p((x, y)), ranges,
opts=[dict(epsrel=1e-5, epsabs=1e-5)]*2)
assert_allclose(ig, ig2, rtol=1e-5, atol=1e-5,
err_msg=repr(ranges))
def test_integrate_1d(self):
np.random.seed(1234)
c = np.random.rand(4, 5, 6, 16, 17, 18)
x = np.linspace(0, 1, 16+1)**1
y = np.linspace(0, 1, 17+1)**2
z = np.linspace(0, 1, 18+1)**3
# Check 1-D integration
p = NdPPoly(c, (x, y, z))
u = np.random.rand(200)
v = np.random.rand(200)
a, b = 0.2, 0.7
px = p.integrate_1d(a, b, axis=0)
pax = p.antiderivative((1, 0, 0))
assert_allclose(px((u, v)), pax((b, u, v)) - pax((a, u, v)))
py = p.integrate_1d(a, b, axis=1)
pay = p.antiderivative((0, 1, 0))
assert_allclose(py((u, v)), pay((u, b, v)) - pay((u, a, v)))
pz = p.integrate_1d(a, b, axis=2)
paz = p.antiderivative((0, 0, 1))
assert_allclose(pz((u, v)), paz((u, v, b)) - paz((u, v, a)))
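The NdPPoly coefficient layout used throughout this class, `c[k1, ..., kn, j1, ..., jn]`, is easiest to see on a trivial sketch (illustrative only): one degree-0 coefficient per dimension, so the polynomial is identically 1 on every cell.

```python
import numpy as np
from scipy.interpolate import NdPPoly

# Constant 2-D piecewise polynomial on [0, 1] x [0, 1] with a 2 x 2 grid
# of cells: shape (1, 1, 2, 2) = (x-degree+1, y-degree+1, x-cells, y-cells).
c = np.ones((1, 1, 2, 2))
x = y = np.array([0.0, 0.5, 1.0])
p = NdPPoly(c, (x, y))

vals = p((np.array([0.1, 0.7]), np.array([0.3, 0.9])))
```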
def _ppoly_eval_1(c, x, xps):
"""Evaluate piecewise polynomial manually"""
out = np.zeros((len(xps), c.shape[2]))
for i, xp in enumerate(xps):
if xp < 0 or xp > 1:
out[i,:] = np.nan
continue
j = np.searchsorted(x, xp) - 1
d = xp - x[j]
assert_(x[j] <= xp < x[j+1])
r = sum(c[k,j] * d**(c.shape[0]-k-1)
for k in range(c.shape[0]))
out[i,:] = r
return out
def _ppoly_eval_2(coeffs, breaks, xnew, fill=np.nan):
"""Evaluate piecewise polynomial manually (another way)"""
a = breaks[0]
b = breaks[-1]
K = coeffs.shape[0]
saveshape = np.shape(xnew)
xnew = np.ravel(xnew)
res = np.empty_like(xnew)
mask = (xnew >= a) & (xnew <= b)
res[~mask] = fill
xx = xnew.compress(mask)
indxs = np.searchsorted(breaks, xx)-1
indxs = indxs.clip(0, len(breaks))
pp = coeffs
diff = xx - breaks.take(indxs)
V = np.vander(diff, N=K)
values = np.array([np.dot(V[k, :], pp[:, indxs[k]]) for k in range(len(xx))])
res[mask] = values
res.shape = saveshape
return res
def _dpow(x, y, n):
"""
d^n (x**y) / dx^n
"""
if n < 0:
raise ValueError("invalid derivative order")
elif n > y:
return 0
else:
return poch(y - n + 1, n) * x**(y - n)
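The Pochhammer identity behind `_dpow` is d^n(x^y)/dx^n = y*(y-1)*...*(y-n+1) * x^(y-n) = poch(y-n+1, n) * x^(y-n). A quick numeric check of that formula (standalone, not part of the helper):

```python
from scipy.special import poch

# d/dx x**3 = 3*x**2, which is 12 at x = 2: poch(3, 1) * 2**2.
first = poch(3 - 1 + 1, 1) * 2.0**(3 - 1)

# d^2/dx^2 x**3 = 6*x, which is also 12 at x = 2: poch(2, 2) * 2**1.
second = poch(3 - 2 + 1, 2) * 2.0**(3 - 2)
```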
def _ppoly2d_eval(c, xs, xnew, ynew, nu=None):
"""
Straightforward evaluation of 2-D piecewise polynomial
"""
if nu is None:
nu = (0, 0)
out = np.empty((len(xnew),), dtype=c.dtype)
nx, ny = c.shape[:2]
for jout, (x, y) in enumerate(zip(xnew, ynew)):
if not ((xs[0][0] <= x <= xs[0][-1]) and
(xs[1][0] <= y <= xs[1][-1])):
out[jout] = np.nan
continue
j1 = np.searchsorted(xs[0], x) - 1
j2 = np.searchsorted(xs[1], y) - 1
s1 = x - xs[0][j1]
s2 = y - xs[1][j2]
val = 0
for k1 in range(c.shape[0]):
for k2 in range(c.shape[1]):
val += (c[nx-k1-1,ny-k2-1,j1,j2]
* _dpow(s1, k1, nu[0])
* _dpow(s2, k2, nu[1]))
out[jout] = val
return out
def _ppoly3d_eval(c, xs, xnew, ynew, znew, nu=None):
"""
Straightforward evaluation of 3-D piecewise polynomial
"""
if nu is None:
nu = (0, 0, 0)
out = np.empty((len(xnew),), dtype=c.dtype)
nx, ny, nz = c.shape[:3]
for jout, (x, y, z) in enumerate(zip(xnew, ynew, znew)):
if not ((xs[0][0] <= x <= xs[0][-1]) and
(xs[1][0] <= y <= xs[1][-1]) and
(xs[2][0] <= z <= xs[2][-1])):
out[jout] = np.nan
continue
j1 = np.searchsorted(xs[0], x) - 1
j2 = np.searchsorted(xs[1], y) - 1
j3 = np.searchsorted(xs[2], z) - 1
s1 = x - xs[0][j1]
s2 = y - xs[1][j2]
s3 = z - xs[2][j3]
val = 0
for k1 in range(c.shape[0]):
for k2 in range(c.shape[1]):
for k3 in range(c.shape[2]):
val += (c[nx-k1-1,ny-k2-1,nz-k3-1,j1,j2,j3]
* _dpow(s1, k1, nu[0])
* _dpow(s2, k2, nu[1])
* _dpow(s3, k3, nu[2]))
out[jout] = val
return out
def _ppoly4d_eval(c, xs, xnew, ynew, znew, unew, nu=None):
"""
Straightforward evaluation of 4-D piecewise polynomial
"""
if nu is None:
nu = (0, 0, 0, 0)
out = np.empty((len(xnew),), dtype=c.dtype)
mx, my, mz, mu = c.shape[:4]
for jout, (x, y, z, u) in enumerate(zip(xnew, ynew, znew, unew)):
if not ((xs[0][0] <= x <= xs[0][-1]) and
(xs[1][0] <= y <= xs[1][-1]) and
(xs[2][0] <= z <= xs[2][-1]) and
(xs[3][0] <= u <= xs[3][-1])):
out[jout] = np.nan
continue
j1 = np.searchsorted(xs[0], x) - 1
j2 = np.searchsorted(xs[1], y) - 1
j3 = np.searchsorted(xs[2], z) - 1
j4 = np.searchsorted(xs[3], u) - 1
s1 = x - xs[0][j1]
s2 = y - xs[1][j2]
s3 = z - xs[2][j3]
s4 = u - xs[3][j4]
val = 0
for k1 in range(c.shape[0]):
for k2 in range(c.shape[1]):
for k3 in range(c.shape[2]):
for k4 in range(c.shape[3]):
val += (c[mx-k1-1,my-k2-1,mz-k3-1,mu-k4-1,j1,j2,j3,j4]
* _dpow(s1, k1, nu[0])
* _dpow(s2, k2, nu[1])
* _dpow(s3, k3, nu[2])
* _dpow(s4, k4, nu[3]))
out[jout] = val
return out
# File: scipy-main/scipy/interpolate/tests/test_pade.py
from numpy.testing import (assert_array_equal, assert_array_almost_equal)
from scipy.interpolate import pade
def test_pade_trivial():
nump, denomp = pade([1.0], 0)
assert_array_equal(nump.c, [1.0])
assert_array_equal(denomp.c, [1.0])
nump, denomp = pade([1.0], 0, 0)
assert_array_equal(nump.c, [1.0])
assert_array_equal(denomp.c, [1.0])
def test_pade_4term_exp():
# First four Taylor coefficients of exp(x).
# Unlike poly1d, the first array element is the zero-order term.
an = [1.0, 1.0, 0.5, 1.0/6]
nump, denomp = pade(an, 0)
assert_array_almost_equal(nump.c, [1.0/6, 0.5, 1.0, 1.0])
assert_array_almost_equal(denomp.c, [1.0])
nump, denomp = pade(an, 1)
assert_array_almost_equal(nump.c, [1.0/6, 2.0/3, 1.0])
assert_array_almost_equal(denomp.c, [-1.0/3, 1.0])
nump, denomp = pade(an, 2)
assert_array_almost_equal(nump.c, [1.0/3, 1.0])
assert_array_almost_equal(denomp.c, [1.0/6, -2.0/3, 1.0])
nump, denomp = pade(an, 3)
assert_array_almost_equal(nump.c, [1.0])
assert_array_almost_equal(denomp.c, [-1.0/6, 0.5, -1.0, 1.0])
# Testing inclusion of optional parameter
nump, denomp = pade(an, 0, 3)
assert_array_almost_equal(nump.c, [1.0/6, 0.5, 1.0, 1.0])
assert_array_almost_equal(denomp.c, [1.0])
nump, denomp = pade(an, 1, 2)
assert_array_almost_equal(nump.c, [1.0/6, 2.0/3, 1.0])
assert_array_almost_equal(denomp.c, [-1.0/3, 1.0])
nump, denomp = pade(an, 2, 1)
assert_array_almost_equal(nump.c, [1.0/3, 1.0])
assert_array_almost_equal(denomp.c, [1.0/6, -2.0/3, 1.0])
nump, denomp = pade(an, 3, 0)
assert_array_almost_equal(nump.c, [1.0])
assert_array_almost_equal(denomp.c, [-1.0/6, 0.5, -1.0, 1.0])
# Testing reducing array.
nump, denomp = pade(an, 0, 2)
assert_array_almost_equal(nump.c, [0.5, 1.0, 1.0])
assert_array_almost_equal(denomp.c, [1.0])
nump, denomp = pade(an, 1, 1)
assert_array_almost_equal(nump.c, [1.0/2, 1.0])
assert_array_almost_equal(denomp.c, [-1.0/2, 1.0])
nump, denomp = pade(an, 2, 0)
assert_array_almost_equal(nump.c, [1.0])
assert_array_almost_equal(denomp.c, [1.0/2, -1.0, 1.0])
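The [2/1] approximant constructed above is already a good local model of exp: a short sketch (illustrative) evaluating the returned `poly1d` pair near zero.

```python
import numpy as np
from scipy.interpolate import pade

# [2/1] Pade approximant of exp(x) from its first four Taylor coefficients.
p, q = pade([1.0, 1.0, 0.5, 1.0 / 6], 1)

x = 0.1
approx = p(x) / q(x)          # rational approximation to exp(0.1)
err = abs(approx - np.exp(x))  # much smaller than the truncated Taylor error
```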
def test_pade_ints():
# Simple test sequences (one of ints, one of floats).
an_int = [1, 2, 3, 4]
an_flt = [1.0, 2.0, 3.0, 4.0]
# Make sure integer arrays give the same result as float arrays with same values.
for i in range(0, len(an_int)):
for j in range(0, len(an_int) - i):
# Create float and int pade approximation for given order.
nump_int, denomp_int = pade(an_int, i, j)
nump_flt, denomp_flt = pade(an_flt, i, j)
# Check that they are the same.
assert_array_equal(nump_int.c, nump_flt.c)
assert_array_equal(denomp_int.c, denomp_flt.c)
def test_pade_complex():
# Test sequence with known solutions - see page 6 of 10.1109/PESGM.2012.6344759.
# Variable x is parameter - these tests will work with any complex number.
x = 0.2 + 0.6j
an = [1.0, x, -x*x.conjugate(), x.conjugate()*(x**2) + x*(x.conjugate()**2),
-(x**3)*x.conjugate() - 3*(x*x.conjugate())**2 - x*(x.conjugate()**3)]
nump, denomp = pade(an, 1, 1)
assert_array_almost_equal(nump.c, [x + x.conjugate(), 1.0])
assert_array_almost_equal(denomp.c, [x.conjugate(), 1.0])
nump, denomp = pade(an, 1, 2)
assert_array_almost_equal(nump.c, [x**2, 2*x + x.conjugate(), 1.0])
assert_array_almost_equal(denomp.c, [x + x.conjugate(), 1.0])
nump, denomp = pade(an, 2, 2)
assert_array_almost_equal(nump.c, [x**2 + x*x.conjugate() + x.conjugate()**2, 2*(x + x.conjugate()), 1.0])
assert_array_almost_equal(denomp.c, [x.conjugate()**2, x + 2*x.conjugate(), 1.0])
# File: scipy-main/scipy/interpolate/tests/test_interpnd.py
import os
import numpy as np
from numpy.testing import (assert_equal, assert_allclose, assert_almost_equal,
suppress_warnings)
from pytest import raises as assert_raises
import pytest
import scipy.interpolate.interpnd as interpnd
import scipy.spatial._qhull as qhull
import pickle
def data_file(basename):
return os.path.join(os.path.abspath(os.path.dirname(__file__)),
'data', basename)
class TestLinearNDInterpolation:
def test_smoketest(self):
# Test at single points
x = np.array([(0,0), (-0.5,-0.5), (-0.5,0.5), (0.5, 0.5), (0.25, 0.3)],
dtype=np.double)
y = np.arange(x.shape[0], dtype=np.double)
yi = interpnd.LinearNDInterpolator(x, y)(x)
assert_almost_equal(y, yi)
def test_smoketest_alternate(self):
# Test at single points, alternate calling convention
x = np.array([(0,0), (-0.5,-0.5), (-0.5,0.5), (0.5, 0.5), (0.25, 0.3)],
dtype=np.double)
y = np.arange(x.shape[0], dtype=np.double)
yi = interpnd.LinearNDInterpolator((x[:,0], x[:,1]), y)(x[:,0], x[:,1])
assert_almost_equal(y, yi)
def test_complex_smoketest(self):
# Test at single points
x = np.array([(0,0), (-0.5,-0.5), (-0.5,0.5), (0.5, 0.5), (0.25, 0.3)],
dtype=np.double)
y = np.arange(x.shape[0], dtype=np.double)
y = y - 3j*y
yi = interpnd.LinearNDInterpolator(x, y)(x)
assert_almost_equal(y, yi)
def test_tri_input(self):
# Test at single points
x = np.array([(0,0), (-0.5,-0.5), (-0.5,0.5), (0.5, 0.5), (0.25, 0.3)],
dtype=np.double)
y = np.arange(x.shape[0], dtype=np.double)
y = y - 3j*y
tri = qhull.Delaunay(x)
yi = interpnd.LinearNDInterpolator(tri, y)(x)
assert_almost_equal(y, yi)
def test_square(self):
# Test barycentric interpolation on a square against a manual
# implementation
points = np.array([(0,0), (0,1), (1,1), (1,0)], dtype=np.double)
values = np.array([1., 2., -3., 5.], dtype=np.double)
# NB: assume triangles (0, 1, 3) and (1, 2, 3)
#
# 1----2
# | \ |
# | \ |
# 0----3
def ip(x, y):
t1 = (x + y <= 1)
t2 = ~t1
x1 = x[t1]
y1 = y[t1]
x2 = x[t2]
y2 = y[t2]
z = 0*x
z[t1] = (values[0]*(1 - x1 - y1)
+ values[1]*y1
+ values[3]*x1)
z[t2] = (values[2]*(x2 + y2 - 1)
+ values[1]*(1 - x2)
+ values[3]*(1 - y2))
return z
xx, yy = np.broadcast_arrays(np.linspace(0, 1, 14)[:,None],
np.linspace(0, 1, 14)[None,:])
xx = xx.ravel()
yy = yy.ravel()
xi = np.array([xx, yy]).T.copy()
zi = interpnd.LinearNDInterpolator(points, values)(xi)
assert_almost_equal(zi, ip(xx, yy))
def test_smoketest_rescale(self):
# Test at single points
x = np.array([(0, 0), (-5, -5), (-5, 5), (5, 5), (2.5, 3)],
dtype=np.double)
y = np.arange(x.shape[0], dtype=np.double)
yi = interpnd.LinearNDInterpolator(x, y, rescale=True)(x)
assert_almost_equal(y, yi)
def test_square_rescale(self):
# Test barycentric interpolation on a rectangle with rescaling
        # against the same implementation without rescaling
points = np.array([(0,0), (0,100), (10,100), (10,0)], dtype=np.double)
values = np.array([1., 2., -3., 5.], dtype=np.double)
xx, yy = np.broadcast_arrays(np.linspace(0, 10, 14)[:,None],
np.linspace(0, 100, 14)[None,:])
xx = xx.ravel()
yy = yy.ravel()
xi = np.array([xx, yy]).T.copy()
zi = interpnd.LinearNDInterpolator(points, values)(xi)
zi_rescaled = interpnd.LinearNDInterpolator(points, values,
rescale=True)(xi)
assert_almost_equal(zi, zi_rescaled)
def test_tripoints_input_rescale(self):
# Test at single points
x = np.array([(0,0), (-5,-5), (-5,5), (5, 5), (2.5, 3)],
dtype=np.double)
y = np.arange(x.shape[0], dtype=np.double)
y = y - 3j*y
tri = qhull.Delaunay(x)
yi = interpnd.LinearNDInterpolator(tri.points, y)(x)
yi_rescale = interpnd.LinearNDInterpolator(tri.points, y,
rescale=True)(x)
assert_almost_equal(yi, yi_rescale)
def test_tri_input_rescale(self):
# Test at single points
x = np.array([(0,0), (-5,-5), (-5,5), (5, 5), (2.5, 3)],
dtype=np.double)
y = np.arange(x.shape[0], dtype=np.double)
y = y - 3j*y
tri = qhull.Delaunay(x)
match = ("Rescaling is not supported when passing a "
"Delaunay triangulation as ``points``.")
with pytest.raises(ValueError, match=match):
interpnd.LinearNDInterpolator(tri, y, rescale=True)(x)
def test_pickle(self):
# Test at single points
np.random.seed(1234)
x = np.random.rand(30, 2)
y = np.random.rand(30) + 1j*np.random.rand(30)
ip = interpnd.LinearNDInterpolator(x, y)
ip2 = pickle.loads(pickle.dumps(ip))
assert_almost_equal(ip(0.5, 0.5), ip2(0.5, 0.5))
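The exactness-at-data-points property these smoke tests rely on can be seen in a standalone sketch using the public API (illustrative only, not part of the test suite):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Scattered 2-D points and one value per point.
x = np.array([(0, 0), (-0.5, -0.5), (-0.5, 0.5), (0.5, 0.5), (0.25, 0.3)])
y = np.arange(x.shape[0], dtype=float)

# The piecewise-linear interpolant reproduces the data at its input points.
ip = LinearNDInterpolator(x, y)
print(ip(x))  # equals y at the data points
```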
class TestEstimateGradients2DGlobal:
def test_smoketest(self):
x = np.array([(0, 0), (0, 2),
(1, 0), (1, 2), (0.25, 0.75), (0.6, 0.8)], dtype=float)
tri = qhull.Delaunay(x)
# Should be exact for linear functions, independent of triangulation
funcs = [
(lambda x, y: 0*x + 1, (0, 0)),
(lambda x, y: 0 + x, (1, 0)),
(lambda x, y: -2 + y, (0, 1)),
(lambda x, y: 3 + 3*x + 14.15*y, (3, 14.15))
]
for j, (func, grad) in enumerate(funcs):
z = func(x[:,0], x[:,1])
dz = interpnd.estimate_gradients_2d_global(tri, z, tol=1e-6)
assert_equal(dz.shape, (6, 2))
assert_allclose(dz, np.array(grad)[None,:] + 0*dz,
rtol=1e-5, atol=1e-5, err_msg="item %d" % j)
def test_regression_2359(self):
# Check regression --- for certain point sets, gradient
# estimation could end up in an infinite loop
points = np.load(data_file('estimate_gradients_hang.npy'))
values = np.random.rand(points.shape[0])
tri = qhull.Delaunay(points)
# This should not hang
with suppress_warnings() as sup:
sup.filter(interpnd.GradientEstimationWarning,
"Gradient estimation did not converge")
interpnd.estimate_gradients_2d_global(tri, values, maxiter=1)
class TestCloughTocher2DInterpolator:
def _check_accuracy(self, func, x=None, tol=1e-6, alternate=False, rescale=False, **kw):
np.random.seed(1234)
if x is None:
x = np.array([(0, 0), (0, 1),
(1, 0), (1, 1), (0.25, 0.75), (0.6, 0.8),
(0.5, 0.2)],
dtype=float)
if not alternate:
ip = interpnd.CloughTocher2DInterpolator(x, func(x[:,0], x[:,1]),
tol=1e-6, rescale=rescale)
else:
ip = interpnd.CloughTocher2DInterpolator((x[:,0], x[:,1]),
func(x[:,0], x[:,1]),
tol=1e-6, rescale=rescale)
p = np.random.rand(50, 2)
if not alternate:
a = ip(p)
else:
a = ip(p[:,0], p[:,1])
b = func(p[:,0], p[:,1])
try:
assert_allclose(a, b, **kw)
except AssertionError:
print("_check_accuracy: abs(a-b):", abs(a - b))
print("ip.grad:", ip.grad)
raise
def test_linear_smoketest(self):
# Should be exact for linear functions, independent of triangulation
funcs = [
lambda x, y: 0*x + 1,
lambda x, y: 0 + x,
lambda x, y: -2 + y,
lambda x, y: 3 + 3*x + 14.15*y,
]
for j, func in enumerate(funcs):
self._check_accuracy(func, tol=1e-13, atol=1e-7, rtol=1e-7,
err_msg="Function %d" % j)
self._check_accuracy(func, tol=1e-13, atol=1e-7, rtol=1e-7,
alternate=True,
err_msg="Function (alternate) %d" % j)
# check rescaling
self._check_accuracy(func, tol=1e-13, atol=1e-7, rtol=1e-7,
err_msg="Function (rescaled) %d" % j, rescale=True)
self._check_accuracy(func, tol=1e-13, atol=1e-7, rtol=1e-7,
alternate=True, rescale=True,
err_msg="Function (alternate, rescaled) %d" % j)
def test_quadratic_smoketest(self):
# Should be reasonably accurate for quadratic functions
funcs = [
lambda x, y: x**2,
lambda x, y: y**2,
lambda x, y: x**2 - y**2,
lambda x, y: x*y,
]
for j, func in enumerate(funcs):
self._check_accuracy(func, tol=1e-9, atol=0.22, rtol=0,
err_msg="Function %d" % j)
self._check_accuracy(func, tol=1e-9, atol=0.22, rtol=0,
err_msg="Function %d" % j, rescale=True)
def test_tri_input(self):
# Test at single points
x = np.array([(0,0), (-0.5,-0.5), (-0.5,0.5), (0.5, 0.5), (0.25, 0.3)],
dtype=np.double)
y = np.arange(x.shape[0], dtype=np.double)
y = y - 3j*y
tri = qhull.Delaunay(x)
yi = interpnd.CloughTocher2DInterpolator(tri, y)(x)
assert_almost_equal(y, yi)
def test_tri_input_rescale(self):
# Test at single points
x = np.array([(0,0), (-5,-5), (-5,5), (5, 5), (2.5, 3)],
dtype=np.double)
y = np.arange(x.shape[0], dtype=np.double)
y = y - 3j*y
tri = qhull.Delaunay(x)
match = ("Rescaling is not supported when passing a "
"Delaunay triangulation as ``points``.")
with pytest.raises(ValueError, match=match):
interpnd.CloughTocher2DInterpolator(tri, y, rescale=True)(x)
def test_tripoints_input_rescale(self):
# Test at single points
x = np.array([(0,0), (-5,-5), (-5,5), (5, 5), (2.5, 3)],
dtype=np.double)
y = np.arange(x.shape[0], dtype=np.double)
y = y - 3j*y
tri = qhull.Delaunay(x)
yi = interpnd.CloughTocher2DInterpolator(tri.points, y)(x)
yi_rescale = interpnd.CloughTocher2DInterpolator(tri.points, y, rescale=True)(x)
assert_almost_equal(yi, yi_rescale)
def test_dense(self):
# Should be more accurate for dense meshes
funcs = [
lambda x, y: x**2,
lambda x, y: y**2,
lambda x, y: x**2 - y**2,
lambda x, y: x*y,
lambda x, y: np.cos(2*np.pi*x)*np.sin(2*np.pi*y)
]
np.random.seed(4321) # use a different seed than the check!
grid = np.r_[np.array([(0,0), (0,1), (1,0), (1,1)], dtype=float),
np.random.rand(30*30, 2)]
for j, func in enumerate(funcs):
self._check_accuracy(func, x=grid, tol=1e-9, atol=5e-3, rtol=1e-2,
err_msg="Function %d" % j)
self._check_accuracy(func, x=grid, tol=1e-9, atol=5e-3, rtol=1e-2,
err_msg="Function %d" % j, rescale=True)
def test_wrong_ndim(self):
x = np.random.randn(30, 3)
y = np.random.randn(30)
assert_raises(ValueError, interpnd.CloughTocher2DInterpolator, x, y)
def test_pickle(self):
# Test at single points
np.random.seed(1234)
x = np.random.rand(30, 2)
y = np.random.rand(30) + 1j*np.random.rand(30)
ip = interpnd.CloughTocher2DInterpolator(x, y)
ip2 = pickle.loads(pickle.dumps(ip))
assert_almost_equal(ip(0.5, 0.5), ip2(0.5, 0.5))
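The linear-reproduction property asserted by `test_linear_smoketest` above can also be demonstrated directly with the public class (an illustrative sketch, not part of the suite):

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

# Data sampled from the linear function f(x, y) = x + 2*y.
pts = np.array([(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5)], dtype=float)
vals = pts[:, 0] + 2 * pts[:, 1]

# The C1 cubic interpolant reproduces linear data (up to the solver tol).
ip = CloughTocher2DInterpolator(pts, vals)
print(ip(0.25, 0.25))  # ~0.75
```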
def test_boundary_tri_symmetry(self):
        # Interpolation at neighbourless triangles should be symmetric
        # under mirroring of the triangle.
# Equilateral triangle
points = np.array([(0, 0), (1, 0), (0.5, np.sqrt(3)/2)])
values = np.array([1, 0, 0])
ip = interpnd.CloughTocher2DInterpolator(points, values)
# Set gradient to zero at vertices
ip.grad[...] = 0
# Interpolation should be symmetric vs. bisector
alpha = 0.3
p1 = np.array([0.5 * np.cos(alpha), 0.5 * np.sin(alpha)])
p2 = np.array([0.5 * np.cos(np.pi/3 - alpha), 0.5 * np.sin(np.pi/3 - alpha)])
v1 = ip(p1)
v2 = ip(p2)
assert_allclose(v1, v2)
# ... and affine invariant
np.random.seed(1)
A = np.random.randn(2, 2)
b = np.random.randn(2)
points = A.dot(points.T).T + b[None,:]
p1 = A.dot(p1) + b
p2 = A.dot(p2) + b
ip = interpnd.CloughTocher2DInterpolator(points, values)
ip.grad[...] = 0
w1 = ip(p1)
w2 = ip(p2)
assert_allclose(w1, v1)
assert_allclose(w2, v2)
# scipy-main/scipy/interpolate/tests/test_rgi.py
import itertools
import pytest
import numpy as np
from numpy.testing import (assert_allclose, assert_equal, assert_warns,
assert_array_almost_equal, assert_array_equal)
from pytest import raises as assert_raises
from scipy.interpolate import (RegularGridInterpolator, interpn,
RectBivariateSpline,
NearestNDInterpolator, LinearNDInterpolator)
from scipy.sparse._sputils import matrix
parametrize_rgi_interp_methods = pytest.mark.parametrize(
"method", ['linear', 'nearest', 'slinear', 'cubic', 'quintic', 'pchip']
)
class TestRegularGridInterpolator:
def _get_sample_4d(self):
# create a 4-D grid of 3 points in each dimension
points = [(0., .5, 1.)] * 4
values = np.asarray([0., .5, 1.])
values0 = values[:, np.newaxis, np.newaxis, np.newaxis]
values1 = values[np.newaxis, :, np.newaxis, np.newaxis]
values2 = values[np.newaxis, np.newaxis, :, np.newaxis]
values3 = values[np.newaxis, np.newaxis, np.newaxis, :]
values = (values0 + values1 * 10 + values2 * 100 + values3 * 1000)
return points, values
def _get_sample_4d_2(self):
# create another 4-D grid of 3 points in each dimension
points = [(0., .5, 1.)] * 2 + [(0., 5., 10.)] * 2
values = np.asarray([0., .5, 1.])
values0 = values[:, np.newaxis, np.newaxis, np.newaxis]
values1 = values[np.newaxis, :, np.newaxis, np.newaxis]
values2 = values[np.newaxis, np.newaxis, :, np.newaxis]
values3 = values[np.newaxis, np.newaxis, np.newaxis, :]
values = (values0 + values1 * 10 + values2 * 100 + values3 * 1000)
return points, values
def _get_sample_4d_3(self):
# create another 4-D grid of 7 points in each dimension
points = [(0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0)] * 4
values = np.asarray([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
values0 = values[:, np.newaxis, np.newaxis, np.newaxis]
values1 = values[np.newaxis, :, np.newaxis, np.newaxis]
values2 = values[np.newaxis, np.newaxis, :, np.newaxis]
values3 = values[np.newaxis, np.newaxis, np.newaxis, :]
values = (values0 + values1 * 10 + values2 * 100 + values3 * 1000)
return points, values
def _get_sample_4d_4(self):
# create another 4-D grid of 2 points in each dimension
points = [(0.0, 1.0)] * 4
values = np.asarray([0.0, 1.0])
values0 = values[:, np.newaxis, np.newaxis, np.newaxis]
values1 = values[np.newaxis, :, np.newaxis, np.newaxis]
values2 = values[np.newaxis, np.newaxis, :, np.newaxis]
values3 = values[np.newaxis, np.newaxis, np.newaxis, :]
values = (values0 + values1 * 10 + values2 * 100 + values3 * 1000)
return points, values
@parametrize_rgi_interp_methods
def test_list_input(self, method):
points, values = self._get_sample_4d_3()
sample = np.asarray([[0.1, 0.1, 1., .9], [0.2, 0.1, .45, .8],
[0.5, 0.5, .5, .5]])
interp = RegularGridInterpolator(points,
values.tolist(),
method=method)
v1 = interp(sample.tolist())
interp = RegularGridInterpolator(points,
values,
method=method)
v2 = interp(sample)
assert_allclose(v1, v2)
@pytest.mark.parametrize('method', ['cubic', 'quintic', 'pchip'])
def test_spline_dim_error(self, method):
points, values = self._get_sample_4d_4()
match = "points in dimension"
        # Check error raised when creating the interpolator
with pytest.raises(ValueError, match=match):
RegularGridInterpolator(points, values, method=method)
        # Check error raised when calling the interpolator
interp = RegularGridInterpolator(points, values)
sample = np.asarray([[0.1, 0.1, 1., .9], [0.2, 0.1, .45, .8],
[0.5, 0.5, .5, .5]])
with pytest.raises(ValueError, match=match):
interp(sample, method=method)
@pytest.mark.parametrize(
"points_values, sample",
[
(
_get_sample_4d,
np.asarray(
[[0.1, 0.1, 1.0, 0.9],
[0.2, 0.1, 0.45, 0.8],
[0.5, 0.5, 0.5, 0.5]]
),
),
(_get_sample_4d_2, np.asarray([0.1, 0.1, 10.0, 9.0])),
],
)
def test_linear_and_slinear_close(self, points_values, sample):
points, values = points_values(self)
interp = RegularGridInterpolator(points, values, method="linear")
v1 = interp(sample)
interp = RegularGridInterpolator(points, values, method="slinear")
v2 = interp(sample)
assert_allclose(v1, v2)
@parametrize_rgi_interp_methods
def test_complex(self, method):
points, values = self._get_sample_4d_3()
values = values - 2j*values
sample = np.asarray([[0.1, 0.1, 1., .9], [0.2, 0.1, .45, .8],
[0.5, 0.5, .5, .5]])
interp = RegularGridInterpolator(points, values, method=method)
rinterp = RegularGridInterpolator(points, values.real, method=method)
iinterp = RegularGridInterpolator(points, values.imag, method=method)
v1 = interp(sample)
v2 = rinterp(sample) + 1j*iinterp(sample)
assert_allclose(v1, v2)
def test_cubic_vs_pchip(self):
x, y = [1, 2, 3, 4], [1, 2, 3, 4]
xg, yg = np.meshgrid(x, y, indexing='ij')
values = (lambda x, y: x**4 * y**4)(xg, yg)
cubic = RegularGridInterpolator((x, y), values, method='cubic')
pchip = RegularGridInterpolator((x, y), values, method='pchip')
vals_cubic = cubic([1.5, 2])
vals_pchip = pchip([1.5, 2])
assert not np.allclose(vals_cubic, vals_pchip, atol=1e-14, rtol=0)
def test_linear_xi1d(self):
points, values = self._get_sample_4d_2()
interp = RegularGridInterpolator(points, values)
sample = np.asarray([0.1, 0.1, 10., 9.])
wanted = 1001.1
assert_array_almost_equal(interp(sample), wanted)
def test_linear_xi3d(self):
points, values = self._get_sample_4d()
interp = RegularGridInterpolator(points, values)
sample = np.asarray([[0.1, 0.1, 1., .9], [0.2, 0.1, .45, .8],
[0.5, 0.5, .5, .5]])
wanted = np.asarray([1001.1, 846.2, 555.5])
assert_array_almost_equal(interp(sample), wanted)
@pytest.mark.parametrize(
"sample, wanted",
[
(np.asarray([0.1, 0.1, 0.9, 0.9]), 1100.0),
(np.asarray([0.1, 0.1, 0.1, 0.1]), 0.0),
(np.asarray([0.0, 0.0, 0.0, 0.0]), 0.0),
(np.asarray([1.0, 1.0, 1.0, 1.0]), 1111.0),
(np.asarray([0.1, 0.4, 0.6, 0.9]), 1055.0),
],
)
def test_nearest(self, sample, wanted):
points, values = self._get_sample_4d()
interp = RegularGridInterpolator(points, values, method="nearest")
assert_array_almost_equal(interp(sample), wanted)
def test_linear_edges(self):
points, values = self._get_sample_4d()
interp = RegularGridInterpolator(points, values)
sample = np.asarray([[0., 0., 0., 0.], [1., 1., 1., 1.]])
wanted = np.asarray([0., 1111.])
assert_array_almost_equal(interp(sample), wanted)
def test_valid_create(self):
# create a 2-D grid of 3 points in each dimension
points = [(0., .5, 1.), (0., 1., .5)]
values = np.asarray([0., .5, 1.])
values0 = values[:, np.newaxis]
values1 = values[np.newaxis, :]
values = (values0 + values1 * 10)
assert_raises(ValueError, RegularGridInterpolator, points, values)
points = [((0., .5, 1.), ), (0., .5, 1.)]
assert_raises(ValueError, RegularGridInterpolator, points, values)
points = [(0., .5, .75, 1.), (0., .5, 1.)]
assert_raises(ValueError, RegularGridInterpolator, points, values)
points = [(0., .5, 1.), (0., .5, 1.), (0., .5, 1.)]
assert_raises(ValueError, RegularGridInterpolator, points, values)
points = [(0., .5, 1.), (0., .5, 1.)]
assert_raises(ValueError, RegularGridInterpolator, points, values,
method="undefmethod")
def test_valid_call(self):
points, values = self._get_sample_4d()
interp = RegularGridInterpolator(points, values)
sample = np.asarray([[0., 0., 0., 0.], [1., 1., 1., 1.]])
assert_raises(ValueError, interp, sample, "undefmethod")
sample = np.asarray([[0., 0., 0.], [1., 1., 1.]])
assert_raises(ValueError, interp, sample)
sample = np.asarray([[0., 0., 0., 0.], [1., 1., 1., 1.1]])
assert_raises(ValueError, interp, sample)
def test_out_of_bounds_extrap(self):
points, values = self._get_sample_4d()
interp = RegularGridInterpolator(points, values, bounds_error=False,
fill_value=None)
sample = np.asarray([[-.1, -.1, -.1, -.1], [1.1, 1.1, 1.1, 1.1],
[21, 2.1, -1.1, -11], [2.1, 2.1, -1.1, -1.1]])
wanted = np.asarray([0., 1111., 11., 11.])
assert_array_almost_equal(interp(sample, method="nearest"), wanted)
wanted = np.asarray([-111.1, 1222.1, -11068., -1186.9])
assert_array_almost_equal(interp(sample, method="linear"), wanted)
def test_out_of_bounds_extrap2(self):
points, values = self._get_sample_4d_2()
interp = RegularGridInterpolator(points, values, bounds_error=False,
fill_value=None)
sample = np.asarray([[-.1, -.1, -.1, -.1], [1.1, 1.1, 1.1, 1.1],
[21, 2.1, -1.1, -11], [2.1, 2.1, -1.1, -1.1]])
wanted = np.asarray([0., 11., 11., 11.])
assert_array_almost_equal(interp(sample, method="nearest"), wanted)
wanted = np.asarray([-12.1, 133.1, -1069., -97.9])
assert_array_almost_equal(interp(sample, method="linear"), wanted)
def test_out_of_bounds_fill(self):
points, values = self._get_sample_4d()
interp = RegularGridInterpolator(points, values, bounds_error=False,
fill_value=np.nan)
sample = np.asarray([[-.1, -.1, -.1, -.1], [1.1, 1.1, 1.1, 1.1],
[2.1, 2.1, -1.1, -1.1]])
wanted = np.asarray([np.nan, np.nan, np.nan])
assert_array_almost_equal(interp(sample, method="nearest"), wanted)
assert_array_almost_equal(interp(sample, method="linear"), wanted)
sample = np.asarray([[0.1, 0.1, 1., .9], [0.2, 0.1, .45, .8],
[0.5, 0.5, .5, .5]])
wanted = np.asarray([1001.1, 846.2, 555.5])
assert_array_almost_equal(interp(sample), wanted)
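A minimal sketch of the out-of-bounds behaviour exercised above (illustrative only): with `bounds_error=False`, queries outside the grid return `fill_value`, while in-bounds queries interpolate normally.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

x = np.linspace(0, 1, 5)
y = np.linspace(0, 1, 5)
values = 0 * x[:, None] + y[None, :]  # depends linearly on y only

interp = RegularGridInterpolator((x, y), values,
                                 bounds_error=False, fill_value=np.nan)
print(interp([[0.5, 0.25]]))  # in bounds: [0.25]
print(interp([[2.0, 0.25]]))  # out of bounds: [nan]
```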
def test_nearest_compare_qhull(self):
points, values = self._get_sample_4d()
interp = RegularGridInterpolator(points, values, method="nearest")
        points_qhull = np.asarray(list(itertools.product(*points)))
values_qhull = values.reshape(-1)
interp_qhull = NearestNDInterpolator(points_qhull, values_qhull)
sample = np.asarray([[0.1, 0.1, 1., .9], [0.2, 0.1, .45, .8],
[0.5, 0.5, .5, .5]])
assert_array_almost_equal(interp(sample), interp_qhull(sample))
def test_linear_compare_qhull(self):
points, values = self._get_sample_4d()
interp = RegularGridInterpolator(points, values)
        points_qhull = np.asarray(list(itertools.product(*points)))
values_qhull = values.reshape(-1)
interp_qhull = LinearNDInterpolator(points_qhull, values_qhull)
sample = np.asarray([[0.1, 0.1, 1., .9], [0.2, 0.1, .45, .8],
[0.5, 0.5, .5, .5]])
assert_array_almost_equal(interp(sample), interp_qhull(sample))
@pytest.mark.parametrize("method", ["nearest", "linear"])
def test_duck_typed_values(self, method):
x = np.linspace(0, 2, 5)
y = np.linspace(0, 1, 7)
values = MyValue((5, 7))
interp = RegularGridInterpolator((x, y), values, method=method)
v1 = interp([0.4, 0.7])
interp = RegularGridInterpolator((x, y), values._v, method=method)
v2 = interp([0.4, 0.7])
assert_allclose(v1, v2)
def test_invalid_fill_value(self):
np.random.seed(1234)
x = np.linspace(0, 2, 5)
y = np.linspace(0, 1, 7)
values = np.random.rand(5, 7)
# integers can be cast to floats
RegularGridInterpolator((x, y), values, fill_value=1)
# complex values cannot
assert_raises(ValueError, RegularGridInterpolator,
(x, y), values, fill_value=1+2j)
def test_fillvalue_type(self):
# from #3703; test that interpolator object construction succeeds
values = np.ones((10, 20, 30), dtype='>f4')
points = [np.arange(n) for n in values.shape]
# xi = [(1, 1, 1)]
RegularGridInterpolator(points, values)
RegularGridInterpolator(points, values, fill_value=0.)
def test_length_one_axis(self):
        # gh-5890, gh-9524: length-1 axis is legal for method='linear'.
# Along the axis it's linear interpolation; away from the length-1
# axis, it's an extrapolation, so fill_value should be used.
def f(x, y):
return x + y
x = np.linspace(1, 1, 1)
y = np.linspace(1, 10, 10)
data = f(*np.meshgrid(x, y, indexing="ij", sparse=True))
interp = RegularGridInterpolator((x, y), data, method="linear",
bounds_error=False, fill_value=101)
# check values at the grid
assert_allclose(interp(np.array([[1, 1], [1, 5], [1, 10]])),
[2, 6, 11],
atol=1e-14)
# check off-grid interpolation is indeed linear
assert_allclose(interp(np.array([[1, 1.4], [1, 5.3], [1, 10]])),
[2.4, 6.3, 11],
atol=1e-14)
        # check extrapolation w/ fill_value
assert_allclose(interp(np.array([1.1, 2.4])),
interp.fill_value,
atol=1e-14)
# check extrapolation: linear along the `y` axis, const along `x`
interp.fill_value = None
assert_allclose(interp([[1, 0.3], [1, 11.5]]),
[1.3, 12.5], atol=1e-15)
assert_allclose(interp([[1.5, 0.3], [1.9, 11.5]]),
[1.3, 12.5], atol=1e-15)
# extrapolation with method='nearest'
interp = RegularGridInterpolator((x, y), data, method="nearest",
bounds_error=False, fill_value=None)
assert_allclose(interp([[1.5, 1.8], [-4, 5.1]]),
[3, 6],
atol=1e-15)
@pytest.mark.parametrize("fill_value", [None, np.nan, np.pi])
@pytest.mark.parametrize("method", ['linear', 'nearest'])
def test_length_one_axis2(self, fill_value, method):
options = {"fill_value": fill_value, "bounds_error": False,
"method": method}
x = np.linspace(0, 2*np.pi, 20)
z = np.sin(x)
fa = RegularGridInterpolator((x,), z[:], **options)
fb = RegularGridInterpolator((x, [0]), z[:, None], **options)
x1a = np.linspace(-1, 2*np.pi+1, 100)
za = fa(x1a)
# evaluated at provided y-value, fb should behave exactly as fa
y1b = np.zeros(100)
zb = fb(np.vstack([x1a, y1b]).T)
assert_allclose(zb, za)
# evaluated at a different y-value, fb should return fill value
y1b = np.ones(100)
zb = fb(np.vstack([x1a, y1b]).T)
if fill_value is None:
assert_allclose(zb, za)
else:
assert_allclose(zb, fill_value)
@pytest.mark.parametrize("method", ['nearest', 'linear'])
def test_nan_x_1d(self, method):
        # gh-6624: if x is nan, result should be nan
f = RegularGridInterpolator(([1, 2, 3],), [10, 20, 30], fill_value=1,
bounds_error=False, method=method)
assert np.isnan(f([np.nan]))
# test arbitrary nan pattern
rng = np.random.default_rng(8143215468)
x = rng.random(size=100)*4
i = rng.random(size=100) > 0.5
x[i] = np.nan
with np.errstate(invalid='ignore'):
# out-of-bounds comparisons, `out_of_bounds += x < grid[0]`,
# generate numpy warnings if `x` contains nans.
# These warnings should propagate to user (since `x` is user
# input) and we simply filter them out.
res = f(x)
assert_equal(res[i], np.nan)
assert_equal(res[~i], f(x[~i]))
# also test the length-one axis f(nan)
x = [1, 2, 3]
y = [1, ]
data = np.ones((3, 1))
f = RegularGridInterpolator((x, y), data, fill_value=1,
bounds_error=False, method=method)
assert np.isnan(f([np.nan, 1]))
assert np.isnan(f([1, np.nan]))
@pytest.mark.parametrize("method", ['nearest', 'linear'])
def test_nan_x_2d(self, method):
x, y = np.array([0, 1, 2]), np.array([1, 3, 7])
def f(x, y):
return x**2 + y**2
xg, yg = np.meshgrid(x, y, indexing='ij', sparse=True)
data = f(xg, yg)
interp = RegularGridInterpolator((x, y), data,
method=method, bounds_error=False)
with np.errstate(invalid='ignore'):
res = interp([[1.5, np.nan], [1, 1]])
assert_allclose(res[1], 2, atol=1e-14)
assert np.isnan(res[0])
# test arbitrary nan pattern
rng = np.random.default_rng(8143215468)
x = rng.random(size=100)*4-1
y = rng.random(size=100)*8
i1 = rng.random(size=100) > 0.5
i2 = rng.random(size=100) > 0.5
i = i1 | i2
x[i1] = np.nan
y[i2] = np.nan
z = np.array([x, y]).T
with np.errstate(invalid='ignore'):
# out-of-bounds comparisons, `out_of_bounds += x < grid[0]`,
# generate numpy warnings if `x` contains nans.
# These warnings should propagate to user (since `x` is user
# input) and we simply filter them out.
res = interp(z)
assert_equal(res[i], np.nan)
assert_equal(res[~i], interp(z[~i]))
@parametrize_rgi_interp_methods
@pytest.mark.parametrize(("ndims", "func"), [
(2, lambda x, y: 2 * x ** 3 + 3 * y ** 2),
(3, lambda x, y, z: 2 * x ** 3 + 3 * y ** 2 - z),
(4, lambda x, y, z, a: 2 * x ** 3 + 3 * y ** 2 - z + a),
(5, lambda x, y, z, a, b: 2 * x ** 3 + 3 * y ** 2 - z + a * b),
])
def test_descending_points_nd(self, method, ndims, func):
rng = np.random.default_rng(42)
sample_low = 1
sample_high = 5
test_points = rng.uniform(sample_low, sample_high, size=(2, ndims))
ascending_points = [np.linspace(sample_low, sample_high, 12)
for _ in range(ndims)]
ascending_values = func(*np.meshgrid(*ascending_points,
indexing="ij",
sparse=True))
ascending_interp = RegularGridInterpolator(ascending_points,
ascending_values,
method=method)
ascending_result = ascending_interp(test_points)
descending_points = [xi[::-1] for xi in ascending_points]
descending_values = func(*np.meshgrid(*descending_points,
indexing="ij",
sparse=True))
descending_interp = RegularGridInterpolator(descending_points,
descending_values,
method=method)
descending_result = descending_interp(test_points)
assert_array_equal(ascending_result, descending_result)
def test_invalid_points_order(self):
def val_func_2d(x, y):
return 2 * x ** 3 + 3 * y ** 2
x = np.array([.5, 2., 0., 4., 5.5]) # not ascending or descending
y = np.array([.5, 2., 3., 4., 5.5])
points = (x, y)
values = val_func_2d(*np.meshgrid(*points, indexing='ij',
sparse=True))
match = "must be strictly ascending or descending"
with pytest.raises(ValueError, match=match):
RegularGridInterpolator(points, values)
@parametrize_rgi_interp_methods
def test_fill_value(self, method):
interp = RegularGridInterpolator([np.arange(6)], np.ones(6),
method=method, bounds_error=False)
assert np.isnan(interp([10]))
@parametrize_rgi_interp_methods
def test_nonscalar_values(self, method):
        # Verify that non-scalar-valued `values` also work
points = [(0.0, 0.5, 1.0, 1.5, 2.0, 2.5)] * 2 + [
(0.0, 5.0, 10.0, 15.0, 20, 25.0)
] * 2
rng = np.random.default_rng(1234)
values = rng.random((6, 6, 6, 6, 8))
sample = rng.random((7, 3, 4))
interp = RegularGridInterpolator(points, values, method=method,
bounds_error=False)
v = interp(sample)
assert_equal(v.shape, (7, 3, 8), err_msg=method)
vs = []
for j in range(8):
interp = RegularGridInterpolator(points, values[..., j],
method=method,
bounds_error=False)
vs.append(interp(sample))
v2 = np.array(vs).transpose(1, 2, 0)
assert_allclose(v, v2, atol=1e-14, err_msg=method)
@parametrize_rgi_interp_methods
@pytest.mark.parametrize("flip_points", [False, True])
def test_nonscalar_values_2(self, method, flip_points):
        # Verify that non-scalar-valued `values` also work: use different
        # lengths of axes to simplify tracing the internals
points = [(0.0, 0.5, 1.0, 1.5, 2.0, 2.5),
(0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0),
(0.0, 5.0, 10.0, 15.0, 20, 25.0, 35.0, 36.0),
(0.0, 5.0, 10.0, 15.0, 20, 25.0, 35.0, 36.0, 47)]
        # verify that strictly decreasing dimensions work
if flip_points:
points = [tuple(reversed(p)) for p in points]
rng = np.random.default_rng(1234)
trailing_points = (3, 2)
# NB: values has a `num_trailing_dims` trailing dimension
values = rng.random((6, 7, 8, 9, *trailing_points))
sample = rng.random(4) # a single sample point !
interp = RegularGridInterpolator(points, values, method=method,
bounds_error=False)
v = interp(sample)
# v has a single sample point *per entry in the trailing dimensions*
assert v.shape == (1, *trailing_points)
        # check the values, too: manually loop over the trailing dimensions
vs = np.empty(values.shape[-2:])
for i in range(values.shape[-2]):
for j in range(values.shape[-1]):
interp = RegularGridInterpolator(points, values[..., i, j],
method=method,
bounds_error=False)
vs[i, j] = interp(sample).item()
v2 = np.expand_dims(vs, axis=0)
assert_allclose(v, v2, atol=1e-14, err_msg=method)
def test_nonscalar_values_linear_2D(self):
# Verify that non-scalar values work in the 2D fast path
method = 'linear'
points = [(0.0, 0.5, 1.0, 1.5, 2.0, 2.5),
(0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0), ]
rng = np.random.default_rng(1234)
trailing_points = (3, 4)
# NB: values has a `num_trailing_dims` trailing dimension
values = rng.random((6, 7, *trailing_points))
sample = rng.random(2) # a single sample point !
interp = RegularGridInterpolator(points, values, method=method,
bounds_error=False)
v = interp(sample)
# v has a single sample point *per entry in the trailing dimensions*
assert v.shape == (1, *trailing_points)
        # check the values, too: manually loop over the trailing dimensions
vs = np.empty(values.shape[-2:])
for i in range(values.shape[-2]):
for j in range(values.shape[-1]):
interp = RegularGridInterpolator(points, values[..., i, j],
method=method,
bounds_error=False)
vs[i, j] = interp(sample).item()
v2 = np.expand_dims(vs, axis=0)
assert_allclose(v, v2, atol=1e-14, err_msg=method)
@pytest.mark.parametrize(
"dtype",
[np.float32, np.float64, np.complex64, np.complex128]
)
@pytest.mark.parametrize("xi_dtype", [np.float32, np.float64])
def test_float32_values(self, dtype, xi_dtype):
# regression test for gh-17718: values.dtype=float32 fails
def f(x, y):
return 2 * x**3 + 3 * y**2
x = np.linspace(1, 4, 11)
y = np.linspace(4, 7, 22)
xg, yg = np.meshgrid(x, y, indexing='ij', sparse=True)
data = f(xg, yg)
data = data.astype(dtype)
interp = RegularGridInterpolator((x, y), data)
pts = np.array([[2.1, 6.2],
[3.3, 5.2]], dtype=xi_dtype)
        # the values here are just what the call returns; the test checks
        # that the call succeeds at all, instead of failing because cython
        # lacks a float32 kernel
assert_allclose(interp(pts), [134.10469388, 153.40069388], atol=1e-7)
class MyValue:
"""
Minimal indexable object
"""
def __init__(self, shape):
self.ndim = 2
self.shape = shape
self._v = np.arange(np.prod(shape)).reshape(shape)
def __getitem__(self, idx):
return self._v[idx]
def __array_interface__(self):
return None
def __array__(self):
raise RuntimeError("No array representation")
class TestInterpN:
def _sample_2d_data(self):
x = np.array([.5, 2., 3., 4., 5.5, 6.])
y = np.array([.5, 2., 3., 4., 5.5, 6.])
z = np.array(
[
[1, 2, 1, 2, 1, 1],
[1, 2, 1, 2, 1, 1],
[1, 2, 3, 2, 1, 1],
[1, 2, 2, 2, 1, 1],
[1, 2, 1, 2, 1, 1],
[1, 2, 2, 2, 1, 1],
]
)
return x, y, z
def test_spline_2d(self):
x, y, z = self._sample_2d_data()
lut = RectBivariateSpline(x, y, z)
xi = np.array([[1, 2.3, 5.3, 0.5, 3.3, 1.2, 3],
[1, 3.3, 1.2, 4.0, 5.0, 1.0, 3]]).T
assert_array_almost_equal(interpn((x, y), z, xi, method="splinef2d"),
lut.ev(xi[:, 0], xi[:, 1]))
@parametrize_rgi_interp_methods
def test_list_input(self, method):
x, y, z = self._sample_2d_data()
xi = np.array([[1, 2.3, 5.3, 0.5, 3.3, 1.2, 3],
[1, 3.3, 1.2, 4.0, 5.0, 1.0, 3]]).T
v1 = interpn((x, y), z, xi, method=method)
v2 = interpn(
(x.tolist(), y.tolist()), z.tolist(), xi.tolist(), method=method
)
assert_allclose(v1, v2, err_msg=method)
def test_spline_2d_outofbounds(self):
x = np.array([.5, 2., 3., 4., 5.5])
y = np.array([.5, 2., 3., 4., 5.5])
z = np.array([[1, 2, 1, 2, 1], [1, 2, 1, 2, 1], [1, 2, 3, 2, 1],
[1, 2, 2, 2, 1], [1, 2, 1, 2, 1]])
lut = RectBivariateSpline(x, y, z)
xi = np.array([[1, 2.3, 6.3, 0.5, 3.3, 1.2, 3],
[1, 3.3, 1.2, -4.0, 5.0, 1.0, 3]]).T
actual = interpn((x, y), z, xi, method="splinef2d",
bounds_error=False, fill_value=999.99)
expected = lut.ev(xi[:, 0], xi[:, 1])
expected[2:4] = 999.99
assert_array_almost_equal(actual, expected)
# no extrapolation for splinef2d
assert_raises(ValueError, interpn, (x, y), z, xi, method="splinef2d",
bounds_error=False, fill_value=None)
def _sample_4d_data(self):
points = [(0., .5, 1.)] * 2 + [(0., 5., 10.)] * 2
values = np.asarray([0., .5, 1.])
values0 = values[:, np.newaxis, np.newaxis, np.newaxis]
values1 = values[np.newaxis, :, np.newaxis, np.newaxis]
values2 = values[np.newaxis, np.newaxis, :, np.newaxis]
values3 = values[np.newaxis, np.newaxis, np.newaxis, :]
values = (values0 + values1 * 10 + values2 * 100 + values3 * 1000)
return points, values
def test_linear_4d(self):
# create a 4-D grid of 3 points in each dimension
points, values = self._sample_4d_data()
interp_rg = RegularGridInterpolator(points, values)
sample = np.asarray([[0.1, 0.1, 10., 9.]])
wanted = interpn(points, values, sample, method="linear")
assert_array_almost_equal(interp_rg(sample), wanted)
def test_4d_linear_outofbounds(self):
# create a 4-D grid of 3 points in each dimension
points, values = self._sample_4d_data()
sample = np.asarray([[0.1, -0.1, 10.1, 9.]])
wanted = 999.99
actual = interpn(points, values, sample, method="linear",
bounds_error=False, fill_value=999.99)
assert_array_almost_equal(actual, wanted)
def test_nearest_4d(self):
# create a 4-D grid of 3 points in each dimension
points, values = self._sample_4d_data()
interp_rg = RegularGridInterpolator(points, values, method="nearest")
sample = np.asarray([[0.1, 0.1, 10., 9.]])
wanted = interpn(points, values, sample, method="nearest")
assert_array_almost_equal(interp_rg(sample), wanted)
def test_4d_nearest_outofbounds(self):
# create a 4-D grid of 3 points in each dimension
points, values = self._sample_4d_data()
sample = np.asarray([[0.1, -0.1, 10.1, 9.]])
wanted = 999.99
actual = interpn(points, values, sample, method="nearest",
bounds_error=False, fill_value=999.99)
assert_array_almost_equal(actual, wanted)
def test_xi_1d(self):
# verify that 1-D xi works as expected
points, values = self._sample_4d_data()
sample = np.asarray([0.1, 0.1, 10., 9.])
v1 = interpn(points, values, sample, bounds_error=False)
v2 = interpn(points, values, sample[None,:], bounds_error=False)
assert_allclose(v1, v2)
def test_xi_nd(self):
# verify that higher-d xi works as expected
points, values = self._sample_4d_data()
np.random.seed(1234)
sample = np.random.rand(2, 3, 4)
v1 = interpn(points, values, sample, method='nearest',
bounds_error=False)
assert_equal(v1.shape, (2, 3))
v2 = interpn(points, values, sample.reshape(-1, 4),
method='nearest', bounds_error=False)
assert_allclose(v1, v2.reshape(v1.shape))
@parametrize_rgi_interp_methods
def test_xi_broadcast(self, method):
# verify that the interpolators broadcast xi
x, y, values = self._sample_2d_data()
points = (x, y)
xi = np.linspace(0, 1, 2)
yi = np.linspace(0, 3, 3)
sample = (xi[:, None], yi[None, :])
v1 = interpn(points, values, sample, method=method, bounds_error=False)
assert_equal(v1.shape, (2, 3))
xx, yy = np.meshgrid(xi, yi)
sample = np.c_[xx.T.ravel(), yy.T.ravel()]
v2 = interpn(points, values, sample,
method=method, bounds_error=False)
assert_allclose(v1, v2.reshape(v1.shape))
@parametrize_rgi_interp_methods
def test_nonscalar_values(self, method):
        # Verify that non-scalar valued values also work
points = [(0.0, 0.5, 1.0, 1.5, 2.0, 2.5)] * 2 + [
(0.0, 5.0, 10.0, 15.0, 20, 25.0)
] * 2
rng = np.random.default_rng(1234)
values = rng.random((6, 6, 6, 6, 8))
sample = rng.random((7, 3, 4))
v = interpn(points, values, sample, method=method,
bounds_error=False)
assert_equal(v.shape, (7, 3, 8), err_msg=method)
vs = [interpn(points, values[..., j], sample, method=method,
bounds_error=False) for j in range(8)]
v2 = np.array(vs).transpose(1, 2, 0)
assert_allclose(v, v2, atol=1e-14, err_msg=method)
@parametrize_rgi_interp_methods
def test_nonscalar_values_2(self, method):
        # Verify that non-scalar valued values also work: use different
        # lengths of axes to simplify tracing the internals
points = [(0.0, 0.5, 1.0, 1.5, 2.0, 2.5),
(0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0),
(0.0, 5.0, 10.0, 15.0, 20, 25.0, 35.0, 36.0),
(0.0, 5.0, 10.0, 15.0, 20, 25.0, 35.0, 36.0, 47)]
rng = np.random.default_rng(1234)
trailing_points = (3, 2)
# NB: values has a `num_trailing_dims` trailing dimension
values = rng.random((6, 7, 8, 9, *trailing_points))
        sample = rng.random(4)  # a single sample point!
v = interpn(points, values, sample, method=method, bounds_error=False)
# v has a single sample point *per entry in the trailing dimensions*
assert v.shape == (1, *trailing_points)
        # check the values, too: manually loop over the trailing dimensions
vs = [[
interpn(points, values[..., i, j], sample, method=method,
bounds_error=False) for i in range(values.shape[-2])
] for j in range(values.shape[-1])]
assert_allclose(v, np.asarray(vs).T, atol=1e-14, err_msg=method)
def test_non_scalar_values_splinef2d(self):
# Vector-valued splines supported with fitpack
points, values = self._sample_4d_data()
np.random.seed(1234)
values = np.random.rand(3, 3, 3, 3, 6)
sample = np.random.rand(7, 11, 4)
assert_raises(ValueError, interpn, points, values, sample,
method='splinef2d')
@parametrize_rgi_interp_methods
def test_complex(self, method):
x, y, values = self._sample_2d_data()
points = (x, y)
values = values - 2j*values
sample = np.array([[1, 2.3, 5.3, 0.5, 3.3, 1.2, 3],
[1, 3.3, 1.2, 4.0, 5.0, 1.0, 3]]).T
v1 = interpn(points, values, sample, method=method)
v2r = interpn(points, values.real, sample, method=method)
v2i = interpn(points, values.imag, sample, method=method)
v2 = v2r + 1j*v2i
assert_allclose(v1, v2)
    def test_complex_splinef2d(self):
        # Complex-valued data is not supported by splinef2d
x, y, values = self._sample_2d_data()
points = (x, y)
values = values - 2j*values
sample = np.array([[1, 2.3, 5.3, 0.5, 3.3, 1.2, 3],
[1, 3.3, 1.2, 4.0, 5.0, 1.0, 3]]).T
with assert_warns(np.ComplexWarning):
interpn(points, values, sample, method='splinef2d')
@pytest.mark.parametrize(
"method",
["linear", "nearest"]
)
def test_duck_typed_values(self, method):
x = np.linspace(0, 2, 5)
y = np.linspace(0, 1, 7)
values = MyValue((5, 7))
v1 = interpn((x, y), values, [0.4, 0.7], method=method)
v2 = interpn((x, y), values._v, [0.4, 0.7], method=method)
assert_allclose(v1, v2)
@parametrize_rgi_interp_methods
def test_matrix_input(self, method):
x = np.linspace(0, 2, 6)
y = np.linspace(0, 1, 7)
values = matrix(np.random.rand(6, 7))
sample = np.random.rand(3, 7, 2)
v1 = interpn((x, y), values, sample, method=method)
v2 = interpn((x, y), np.asarray(values), sample, method=method)
assert_allclose(v1, v2)
def test_length_one_axis(self):
# gh-5890, gh-9524 : length-1 axis is legal for method='linear'.
# Along the axis it's linear interpolation; away from the length-1
# axis, it's an extrapolation, so fill_value should be used.
values = np.array([[0.1, 1, 10]])
xi = np.array([[1, 2.2], [1, 3.2], [1, 3.8]])
res = interpn(([1], [2, 3, 4]), values, xi)
wanted = [0.9*0.2 + 0.1, # on [2, 3) it's 0.9*(x-2) + 0.1
9*0.2 + 1, # on [3, 4] it's 9*(x-3) + 1
9*0.8 + 1]
assert_allclose(res, wanted, atol=1e-15)
# check extrapolation
xi = np.array([[1.1, 2.2], [1.5, 3.2], [-2.3, 3.8]])
res = interpn(([1], [2, 3, 4]), values, xi,
bounds_error=False, fill_value=None)
assert_allclose(res, wanted, atol=1e-15)
def test_descending_points(self):
def value_func_4d(x, y, z, a):
return 2 * x ** 3 + 3 * y ** 2 - z - a
x1 = np.array([0, 1, 2, 3])
x2 = np.array([0, 10, 20, 30])
x3 = np.array([0, 10, 20, 30])
x4 = np.array([0, .1, .2, .30])
points = (x1, x2, x3, x4)
values = value_func_4d(
*np.meshgrid(*points, indexing='ij', sparse=True))
pts = (0.1, 0.3, np.transpose(np.linspace(0, 30, 4)),
np.linspace(0, 0.3, 4))
correct_result = interpn(points, values, pts)
x1_descend = x1[::-1]
x2_descend = x2[::-1]
x3_descend = x3[::-1]
x4_descend = x4[::-1]
points_shuffled = (x1_descend, x2_descend, x3_descend, x4_descend)
values_shuffled = value_func_4d(
*np.meshgrid(*points_shuffled, indexing='ij', sparse=True))
test_result = interpn(points_shuffled, values_shuffled, pts)
assert_array_equal(correct_result, test_result)
def test_invalid_points_order(self):
x = np.array([.5, 2., 0., 4., 5.5]) # not ascending or descending
y = np.array([.5, 2., 3., 4., 5.5])
z = np.array([[1, 2, 1, 2, 1], [1, 2, 1, 2, 1], [1, 2, 3, 2, 1],
[1, 2, 2, 2, 1], [1, 2, 1, 2, 1]])
xi = np.array([[1, 2.3, 6.3, 0.5, 3.3, 1.2, 3],
[1, 3.3, 1.2, -4.0, 5.0, 1.0, 3]]).T
match = "must be strictly ascending or descending"
with pytest.raises(ValueError, match=match):
interpn((x, y), z, xi)
def test_invalid_xi_dimensions(self):
# https://github.com/scipy/scipy/issues/16519
points = [(0, 1)]
values = [0, 1]
xi = np.ones((1, 1, 3))
msg = ("The requested sample points xi have dimension 3, but this "
"RegularGridInterpolator has dimension 1")
with assert_raises(ValueError, match=msg):
interpn(points, values, xi)
def test_readonly_grid(self):
# https://github.com/scipy/scipy/issues/17716
x = np.linspace(0, 4, 5)
y = np.linspace(0, 5, 6)
z = np.linspace(0, 6, 7)
points = (x, y, z)
values = np.ones((5, 6, 7))
point = np.array([2.21, 3.12, 1.15])
for d in points:
d.flags.writeable = False
values.flags.writeable = False
point.flags.writeable = False
interpn(points, values, point)
RegularGridInterpolator(points, values)(point)
def test_2d_readonly_grid(self):
# https://github.com/scipy/scipy/issues/17716
# test special 2d case
x = np.linspace(0, 4, 5)
y = np.linspace(0, 5, 6)
points = (x, y)
values = np.ones((5, 6))
point = np.array([2.21, 3.12])
for d in points:
d.flags.writeable = False
values.flags.writeable = False
point.flags.writeable = False
interpn(points, values, point)
RegularGridInterpolator(points, values)(point)
def test_non_c_contiguous_grid(self):
# https://github.com/scipy/scipy/issues/17716
x = np.linspace(0, 4, 5)
x = np.vstack((x, np.empty_like(x))).T.copy()[:, 0]
assert not x.flags.c_contiguous
y = np.linspace(0, 5, 6)
z = np.linspace(0, 6, 7)
points = (x, y, z)
values = np.ones((5, 6, 7))
point = np.array([2.21, 3.12, 1.15])
interpn(points, values, point)
RegularGridInterpolator(points, values)(point)
@pytest.mark.parametrize("dtype", ['>f8', '<f8'])
def test_endianness(self, dtype):
# https://github.com/scipy/scipy/issues/17716
# test special 2d case
x = np.linspace(0, 4, 5, dtype=dtype)
y = np.linspace(0, 5, 6, dtype=dtype)
points = (x, y)
values = np.ones((5, 6), dtype=dtype)
point = np.array([2.21, 3.12], dtype=dtype)
interpn(points, values, point)
RegularGridInterpolator(points, values)(point)
# ============================================================================
# scipy / scipy-main/scipy/interpolate/tests/test_rbfinterp.py
# ============================================================================
import pickle
import pytest
import numpy as np
from numpy.linalg import LinAlgError
from numpy.testing import assert_allclose, assert_array_equal
from scipy.stats.qmc import Halton
from scipy.spatial import cKDTree
from scipy.interpolate._rbfinterp import (
_AVAILABLE, _SCALE_INVARIANT, _NAME_TO_MIN_DEGREE, _monomial_powers,
RBFInterpolator
)
from scipy.interpolate import _rbfinterp_pythran
def _vandermonde(x, degree):
# Returns a matrix of monomials that span polynomials with the specified
# degree evaluated at x.
powers = _monomial_powers(x.shape[1], degree)
return _rbfinterp_pythran._polynomial_matrix(x, powers)
def _1d_test_function(x):
# Test function used in Wahba's "Spline Models for Observational Data".
# domain ~= (0, 3), range ~= (-1.0, 0.2)
x = x[:, 0]
y = 4.26*(np.exp(-x) - 4*np.exp(-2*x) + 3*np.exp(-3*x))
return y
def _2d_test_function(x):
# Franke's test function.
# domain ~= (0, 1) X (0, 1), range ~= (0.0, 1.2)
x1, x2 = x[:, 0], x[:, 1]
term1 = 0.75 * np.exp(-(9*x1-2)**2/4 - (9*x2-2)**2/4)
term2 = 0.75 * np.exp(-(9*x1+1)**2/49 - (9*x2+1)/10)
term3 = 0.5 * np.exp(-(9*x1-7)**2/4 - (9*x2-3)**2/4)
term4 = -0.2 * np.exp(-(9*x1-4)**2 - (9*x2-7)**2)
y = term1 + term2 + term3 + term4
return y
def _is_conditionally_positive_definite(kernel, m):
# Tests whether the kernel is conditionally positive definite of order m.
# See chapter 7 of Fasshauer's "Meshfree Approximation Methods with
# MATLAB".
nx = 10
ntests = 100
for ndim in [1, 2, 3, 4, 5]:
# Generate sample points with a Halton sequence to avoid samples that
        # are too close to each other, which can make the matrix singular.
seq = Halton(ndim, scramble=False, seed=np.random.RandomState())
for _ in range(ntests):
x = 2*seq.random(nx) - 1
A = _rbfinterp_pythran._kernel_matrix(x, kernel)
P = _vandermonde(x, m - 1)
Q, R = np.linalg.qr(P, mode='complete')
# Q2 forms a basis spanning the space where P.T.dot(x) = 0. Project
# A onto this space, and then see if it is positive definite using
# the Cholesky decomposition. If not, then the kernel is not c.p.d.
# of order m.
Q2 = Q[:, P.shape[1]:]
B = Q2.T.dot(A).dot(Q2)
try:
np.linalg.cholesky(B)
except np.linalg.LinAlgError:
return False
return True
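The projection trick above (restrict the kernel matrix to the null space of the polynomial basis, then attempt a Cholesky factorization) is the standard numerical test for conditional positive definiteness. For an unconditionally positive definite kernel such as the Gaussian, the Cholesky check can be run on the kernel matrix directly; a minimal numpy-only sketch, where the sample points and shape parameter are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((10, 2))                                  # 10 arbitrary points in 2-D
d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)      # pairwise squared distances
A = np.exp(-5.0 * d2)                                    # Gaussian kernel matrix
L = np.linalg.cholesky(A)                                # succeeds: A is positive definite
assert np.allclose(L @ L.T, A)
```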
# Sorting the parametrize arguments is necessary to avoid a parallelization
# issue described here: https://github.com/pytest-dev/pytest-xdist/issues/432.
@pytest.mark.parametrize('kernel', sorted(_AVAILABLE))
def test_conditionally_positive_definite(kernel):
# Test if each kernel in _AVAILABLE is conditionally positive definite of
# order m, where m comes from _NAME_TO_MIN_DEGREE. This is a necessary
# condition for the smoothed RBF interpolant to be well-posed in general.
m = _NAME_TO_MIN_DEGREE.get(kernel, -1) + 1
assert _is_conditionally_positive_definite(kernel, m)
class _TestRBFInterpolator:
@pytest.mark.parametrize('kernel', sorted(_SCALE_INVARIANT))
def test_scale_invariance_1d(self, kernel):
# Verify that the functions in _SCALE_INVARIANT are insensitive to the
# shape parameter (when smoothing == 0) in 1d.
seq = Halton(1, scramble=False, seed=np.random.RandomState())
x = 3*seq.random(50)
y = _1d_test_function(x)
xitp = 3*seq.random(50)
yitp1 = self.build(x, y, epsilon=1.0, kernel=kernel)(xitp)
yitp2 = self.build(x, y, epsilon=2.0, kernel=kernel)(xitp)
assert_allclose(yitp1, yitp2, atol=1e-8)
@pytest.mark.parametrize('kernel', sorted(_SCALE_INVARIANT))
def test_scale_invariance_2d(self, kernel):
# Verify that the functions in _SCALE_INVARIANT are insensitive to the
# shape parameter (when smoothing == 0) in 2d.
seq = Halton(2, scramble=False, seed=np.random.RandomState())
x = seq.random(100)
y = _2d_test_function(x)
xitp = seq.random(100)
yitp1 = self.build(x, y, epsilon=1.0, kernel=kernel)(xitp)
yitp2 = self.build(x, y, epsilon=2.0, kernel=kernel)(xitp)
assert_allclose(yitp1, yitp2, atol=1e-8)
@pytest.mark.parametrize('kernel', sorted(_AVAILABLE))
def test_extreme_domains(self, kernel):
# Make sure the interpolant remains numerically stable for very
# large/small domains.
seq = Halton(2, scramble=False, seed=np.random.RandomState())
scale = 1e50
shift = 1e55
x = seq.random(100)
y = _2d_test_function(x)
xitp = seq.random(100)
if kernel in _SCALE_INVARIANT:
yitp1 = self.build(x, y, kernel=kernel)(xitp)
yitp2 = self.build(
x*scale + shift, y,
kernel=kernel
)(xitp*scale + shift)
else:
yitp1 = self.build(x, y, epsilon=5.0, kernel=kernel)(xitp)
yitp2 = self.build(
x*scale + shift, y,
epsilon=5.0/scale,
kernel=kernel
)(xitp*scale + shift)
assert_allclose(yitp1, yitp2, atol=1e-8)
def test_polynomial_reproduction(self):
# If the observed data comes from a polynomial, then the interpolant
# should be able to reproduce the polynomial exactly, provided that
# `degree` is sufficiently high.
rng = np.random.RandomState(0)
seq = Halton(2, scramble=False, seed=rng)
degree = 3
x = seq.random(50)
xitp = seq.random(50)
P = _vandermonde(x, degree)
Pitp = _vandermonde(xitp, degree)
poly_coeffs = rng.normal(0.0, 1.0, P.shape[1])
y = P.dot(poly_coeffs)
yitp1 = Pitp.dot(poly_coeffs)
yitp2 = self.build(x, y, degree=degree)(xitp)
assert_allclose(yitp1, yitp2, atol=1e-8)
@pytest.mark.slow
def test_chunking(self, monkeypatch):
# If the observed data comes from a polynomial, then the interpolant
# should be able to reproduce the polynomial exactly, provided that
# `degree` is sufficiently high.
rng = np.random.RandomState(0)
seq = Halton(2, scramble=False, seed=rng)
degree = 3
largeN = 1000 + 33
        # large enough to exercise the chunked evaluation path of RBFInterpolator
x = seq.random(50)
xitp = seq.random(largeN)
P = _vandermonde(x, degree)
Pitp = _vandermonde(xitp, degree)
poly_coeffs = rng.normal(0.0, 1.0, P.shape[1])
y = P.dot(poly_coeffs)
yitp1 = Pitp.dot(poly_coeffs)
interp = self.build(x, y, degree=degree)
ce_real = interp._chunk_evaluator
def _chunk_evaluator(*args, **kwargs):
kwargs.update(memory_budget=100)
return ce_real(*args, **kwargs)
monkeypatch.setattr(interp, '_chunk_evaluator', _chunk_evaluator)
yitp2 = interp(xitp)
assert_allclose(yitp1, yitp2, atol=1e-8)
def test_vector_data(self):
# Make sure interpolating a vector field is the same as interpolating
# each component separately.
seq = Halton(2, scramble=False, seed=np.random.RandomState())
x = seq.random(100)
xitp = seq.random(100)
y = np.array([_2d_test_function(x),
_2d_test_function(x[:, ::-1])]).T
yitp1 = self.build(x, y)(xitp)
yitp2 = self.build(x, y[:, 0])(xitp)
yitp3 = self.build(x, y[:, 1])(xitp)
assert_allclose(yitp1[:, 0], yitp2)
assert_allclose(yitp1[:, 1], yitp3)
def test_complex_data(self):
# Interpolating complex input should be the same as interpolating the
# real and complex components.
seq = Halton(2, scramble=False, seed=np.random.RandomState())
x = seq.random(100)
xitp = seq.random(100)
y = _2d_test_function(x) + 1j*_2d_test_function(x[:, ::-1])
yitp1 = self.build(x, y)(xitp)
yitp2 = self.build(x, y.real)(xitp)
yitp3 = self.build(x, y.imag)(xitp)
assert_allclose(yitp1.real, yitp2)
assert_allclose(yitp1.imag, yitp3)
@pytest.mark.parametrize('kernel', sorted(_AVAILABLE))
def test_interpolation_misfit_1d(self, kernel):
# Make sure that each kernel, with its default `degree` and an
# appropriate `epsilon`, does a good job at interpolation in 1d.
seq = Halton(1, scramble=False, seed=np.random.RandomState())
x = 3*seq.random(50)
xitp = 3*seq.random(50)
y = _1d_test_function(x)
ytrue = _1d_test_function(xitp)
yitp = self.build(x, y, epsilon=5.0, kernel=kernel)(xitp)
mse = np.mean((yitp - ytrue)**2)
assert mse < 1.0e-4
@pytest.mark.parametrize('kernel', sorted(_AVAILABLE))
def test_interpolation_misfit_2d(self, kernel):
# Make sure that each kernel, with its default `degree` and an
# appropriate `epsilon`, does a good job at interpolation in 2d.
seq = Halton(2, scramble=False, seed=np.random.RandomState())
x = seq.random(100)
xitp = seq.random(100)
y = _2d_test_function(x)
ytrue = _2d_test_function(xitp)
yitp = self.build(x, y, epsilon=5.0, kernel=kernel)(xitp)
mse = np.mean((yitp - ytrue)**2)
assert mse < 2.0e-4
@pytest.mark.parametrize('kernel', sorted(_AVAILABLE))
def test_smoothing_misfit(self, kernel):
# Make sure we can find a smoothing parameter for each kernel that
# removes a sufficient amount of noise.
rng = np.random.RandomState(0)
seq = Halton(1, scramble=False, seed=rng)
noise = 0.2
rmse_tol = 0.1
smoothing_range = 10**np.linspace(-4, 1, 20)
x = 3*seq.random(100)
y = _1d_test_function(x) + rng.normal(0.0, noise, (100,))
ytrue = _1d_test_function(x)
rmse_within_tol = False
for smoothing in smoothing_range:
ysmooth = self.build(
x, y,
epsilon=1.0,
smoothing=smoothing,
kernel=kernel)(x)
rmse = np.sqrt(np.mean((ysmooth - ytrue)**2))
if rmse < rmse_tol:
rmse_within_tol = True
break
assert rmse_within_tol
def test_array_smoothing(self):
# Test using an array for `smoothing` to give less weight to a known
# outlier.
rng = np.random.RandomState(0)
seq = Halton(1, scramble=False, seed=rng)
degree = 2
x = seq.random(50)
P = _vandermonde(x, degree)
poly_coeffs = rng.normal(0.0, 1.0, P.shape[1])
y = P.dot(poly_coeffs)
y_with_outlier = np.copy(y)
y_with_outlier[10] += 1.0
smoothing = np.zeros((50,))
smoothing[10] = 1000.0
yitp = self.build(x, y_with_outlier, smoothing=smoothing)(x)
# Should be able to reproduce the uncorrupted data almost exactly.
assert_allclose(yitp, y, atol=1e-4)
def test_inconsistent_x_dimensions_error(self):
# ValueError should be raised if the observation points and evaluation
# points have a different number of dimensions.
y = Halton(2, scramble=False, seed=np.random.RandomState()).random(10)
d = _2d_test_function(y)
x = Halton(1, scramble=False, seed=np.random.RandomState()).random(10)
match = 'Expected the second axis of `x`'
with pytest.raises(ValueError, match=match):
self.build(y, d)(x)
def test_inconsistent_d_length_error(self):
y = np.linspace(0, 1, 5)[:, None]
d = np.zeros(1)
match = 'Expected the first axis of `d`'
with pytest.raises(ValueError, match=match):
self.build(y, d)
def test_y_not_2d_error(self):
y = np.linspace(0, 1, 5)
d = np.zeros(5)
match = '`y` must be a 2-dimensional array.'
with pytest.raises(ValueError, match=match):
self.build(y, d)
def test_inconsistent_smoothing_length_error(self):
y = np.linspace(0, 1, 5)[:, None]
d = np.zeros(5)
smoothing = np.ones(1)
match = 'Expected `smoothing` to be'
with pytest.raises(ValueError, match=match):
self.build(y, d, smoothing=smoothing)
def test_invalid_kernel_name_error(self):
y = np.linspace(0, 1, 5)[:, None]
d = np.zeros(5)
match = '`kernel` must be one of'
with pytest.raises(ValueError, match=match):
self.build(y, d, kernel='test')
def test_epsilon_not_specified_error(self):
y = np.linspace(0, 1, 5)[:, None]
d = np.zeros(5)
for kernel in _AVAILABLE:
if kernel in _SCALE_INVARIANT:
continue
match = '`epsilon` must be specified'
with pytest.raises(ValueError, match=match):
self.build(y, d, kernel=kernel)
def test_x_not_2d_error(self):
y = np.linspace(0, 1, 5)[:, None]
x = np.linspace(0, 1, 5)
d = np.zeros(5)
match = '`x` must be a 2-dimensional array.'
with pytest.raises(ValueError, match=match):
self.build(y, d)(x)
def test_not_enough_observations_error(self):
y = np.linspace(0, 1, 1)[:, None]
d = np.zeros(1)
match = 'At least 2 data points are required'
with pytest.raises(ValueError, match=match):
self.build(y, d, kernel='thin_plate_spline')
def test_degree_warning(self):
y = np.linspace(0, 1, 5)[:, None]
d = np.zeros(5)
for kernel, deg in _NAME_TO_MIN_DEGREE.items():
match = f'`degree` should not be below {deg}'
with pytest.warns(Warning, match=match):
self.build(y, d, epsilon=1.0, kernel=kernel, degree=deg-1)
def test_rank_error(self):
# An error should be raised when `kernel` is "thin_plate_spline" and
# observations are 2-D and collinear.
y = np.array([[2.0, 0.0], [1.0, 0.0], [0.0, 0.0]])
d = np.array([0.0, 0.0, 0.0])
match = 'does not have full column rank'
with pytest.raises(LinAlgError, match=match):
self.build(y, d, kernel='thin_plate_spline')(y)
def test_single_point(self):
# Make sure interpolation still works with only one point (in 1, 2, and
# 3 dimensions).
for dim in [1, 2, 3]:
y = np.zeros((1, dim))
d = np.ones((1,))
f = self.build(y, d, kernel='linear')(y)
assert_allclose(d, f)
def test_pickleable(self):
# Make sure we can pickle and unpickle the interpolant without any
# changes in the behavior.
seq = Halton(1, scramble=False, seed=np.random.RandomState(2305982309))
x = 3*seq.random(50)
xitp = 3*seq.random(50)
y = _1d_test_function(x)
interp = self.build(x, y)
yitp1 = interp(xitp)
yitp2 = pickle.loads(pickle.dumps(interp))(xitp)
assert_array_equal(yitp1, yitp2)
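For readers following these tests, the basic `RBFInterpolator` usage pattern they exercise can be sketched standalone; the sample sizes and error tolerance below are illustrative choices, not values from the test suite:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 3.0, (50, 1))      # observation points, shape (n, ndim)
y = np.sin(x[:, 0])                     # observed values
itp = RBFInterpolator(x, y)             # default kernel is 'thin_plate_spline'
xitp = np.linspace(0.1, 2.9, 20)[:, None]
err = np.abs(itp(xitp) - np.sin(xitp[:, 0])).max()
assert err < 0.05                       # smooth target, dense data: small error
```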
class TestRBFInterpolatorNeighborsNone(_TestRBFInterpolator):
def build(self, *args, **kwargs):
return RBFInterpolator(*args, **kwargs)
def test_smoothing_limit_1d(self):
# For large smoothing parameters, the interpolant should approach a
# least squares fit of a polynomial with the specified degree.
seq = Halton(1, scramble=False, seed=np.random.RandomState())
degree = 3
smoothing = 1e8
x = 3*seq.random(50)
xitp = 3*seq.random(50)
y = _1d_test_function(x)
yitp1 = self.build(
x, y,
degree=degree,
smoothing=smoothing
)(xitp)
P = _vandermonde(x, degree)
Pitp = _vandermonde(xitp, degree)
yitp2 = Pitp.dot(np.linalg.lstsq(P, y, rcond=None)[0])
assert_allclose(yitp1, yitp2, atol=1e-8)
def test_smoothing_limit_2d(self):
# For large smoothing parameters, the interpolant should approach a
# least squares fit of a polynomial with the specified degree.
seq = Halton(2, scramble=False, seed=np.random.RandomState())
degree = 3
smoothing = 1e8
x = seq.random(100)
xitp = seq.random(100)
y = _2d_test_function(x)
yitp1 = self.build(
x, y,
degree=degree,
smoothing=smoothing
)(xitp)
P = _vandermonde(x, degree)
Pitp = _vandermonde(xitp, degree)
yitp2 = Pitp.dot(np.linalg.lstsq(P, y, rcond=None)[0])
assert_allclose(yitp1, yitp2, atol=1e-8)
class TestRBFInterpolatorNeighbors20(_TestRBFInterpolator):
# RBFInterpolator using 20 nearest neighbors.
def build(self, *args, **kwargs):
return RBFInterpolator(*args, **kwargs, neighbors=20)
def test_equivalent_to_rbf_interpolator(self):
seq = Halton(2, scramble=False, seed=np.random.RandomState())
x = seq.random(100)
xitp = seq.random(100)
y = _2d_test_function(x)
yitp1 = self.build(x, y)(xitp)
yitp2 = []
tree = cKDTree(x)
for xi in xitp:
_, nbr = tree.query(xi, 20)
yitp2.append(RBFInterpolator(x[nbr], y[nbr])(xi[None])[0])
assert_allclose(yitp1, yitp2, atol=1e-8)
class TestRBFInterpolatorNeighborsInf(TestRBFInterpolatorNeighborsNone):
# RBFInterpolator using neighbors=np.inf. This should give exactly the same
# results as neighbors=None, but it will be slower.
def build(self, *args, **kwargs):
return RBFInterpolator(*args, **kwargs, neighbors=np.inf)
def test_equivalent_to_rbf_interpolator(self):
seq = Halton(1, scramble=False, seed=np.random.RandomState())
x = 3*seq.random(50)
xitp = 3*seq.random(50)
y = _1d_test_function(x)
yitp1 = self.build(x, y)(xitp)
yitp2 = RBFInterpolator(x, y)(xitp)
assert_allclose(yitp1, yitp2, atol=1e-8)
# ============================================================================
# scipy / scipy-main/scipy/interpolate/tests/test_rbf.py
# ============================================================================
# Created by John Travers, Robert Hetland, 2007
""" Test functions for rbf module """
import numpy as np
from numpy.testing import (assert_, assert_array_almost_equal,
assert_almost_equal)
from numpy import linspace, sin, cos, random, exp, allclose
from scipy.interpolate._rbf import Rbf
FUNCTIONS = ('multiquadric', 'inverse multiquadric', 'gaussian',
'cubic', 'quintic', 'thin-plate', 'linear')
def check_rbf1d_interpolation(function):
# Check that the Rbf function interpolates through the nodes (1D)
x = linspace(0,10,9)
y = sin(x)
rbf = Rbf(x, y, function=function)
yi = rbf(x)
assert_array_almost_equal(y, yi)
assert_almost_equal(rbf(float(x[0])), y[0])
def check_rbf2d_interpolation(function):
# Check that the Rbf function interpolates through the nodes (2D).
x = random.rand(50,1)*4-2
y = random.rand(50,1)*4-2
z = x*exp(-x**2-1j*y**2)
rbf = Rbf(x, y, z, epsilon=2, function=function)
zi = rbf(x, y)
zi.shape = x.shape
assert_array_almost_equal(z, zi)
def check_rbf3d_interpolation(function):
# Check that the Rbf function interpolates through the nodes (3D).
x = random.rand(50, 1)*4 - 2
y = random.rand(50, 1)*4 - 2
z = random.rand(50, 1)*4 - 2
d = x*exp(-x**2 - y**2)
rbf = Rbf(x, y, z, d, epsilon=2, function=function)
di = rbf(x, y, z)
di.shape = x.shape
assert_array_almost_equal(di, d)
def test_rbf_interpolation():
for function in FUNCTIONS:
check_rbf1d_interpolation(function)
check_rbf2d_interpolation(function)
check_rbf3d_interpolation(function)
def check_2drbf1d_interpolation(function):
# Check that the 2-D Rbf function interpolates through the nodes (1D)
x = linspace(0, 10, 9)
y0 = sin(x)
y1 = cos(x)
y = np.vstack([y0, y1]).T
rbf = Rbf(x, y, function=function, mode='N-D')
yi = rbf(x)
assert_array_almost_equal(y, yi)
assert_almost_equal(rbf(float(x[0])), y[0])
def check_2drbf2d_interpolation(function):
# Check that the 2-D Rbf function interpolates through the nodes (2D).
x = random.rand(50, ) * 4 - 2
y = random.rand(50, ) * 4 - 2
z0 = x * exp(-x ** 2 - 1j * y ** 2)
z1 = y * exp(-y ** 2 - 1j * x ** 2)
z = np.vstack([z0, z1]).T
rbf = Rbf(x, y, z, epsilon=2, function=function, mode='N-D')
zi = rbf(x, y)
zi.shape = z.shape
assert_array_almost_equal(z, zi)
def check_2drbf3d_interpolation(function):
# Check that the 2-D Rbf function interpolates through the nodes (3D).
x = random.rand(50, ) * 4 - 2
y = random.rand(50, ) * 4 - 2
z = random.rand(50, ) * 4 - 2
d0 = x * exp(-x ** 2 - y ** 2)
d1 = y * exp(-y ** 2 - x ** 2)
d = np.vstack([d0, d1]).T
rbf = Rbf(x, y, z, d, epsilon=2, function=function, mode='N-D')
di = rbf(x, y, z)
di.shape = d.shape
assert_array_almost_equal(di, d)
def test_2drbf_interpolation():
for function in FUNCTIONS:
check_2drbf1d_interpolation(function)
check_2drbf2d_interpolation(function)
check_2drbf3d_interpolation(function)
def check_rbf1d_regularity(function, atol):
# Check that the Rbf function approximates a smooth function well away
# from the nodes.
x = linspace(0, 10, 9)
y = sin(x)
rbf = Rbf(x, y, function=function)
xi = linspace(0, 10, 100)
yi = rbf(xi)
msg = "abs-diff: %f" % abs(yi - sin(xi)).max()
assert_(allclose(yi, sin(xi), atol=atol), msg)
def test_rbf_regularity():
tolerances = {
'multiquadric': 0.1,
'inverse multiquadric': 0.15,
'gaussian': 0.15,
'cubic': 0.15,
'quintic': 0.1,
'thin-plate': 0.1,
'linear': 0.2
}
for function in FUNCTIONS:
check_rbf1d_regularity(function, tolerances.get(function, 1e-2))
def check_2drbf1d_regularity(function, atol):
# Check that the 2-D Rbf function approximates a smooth function well away
# from the nodes.
x = linspace(0, 10, 9)
y0 = sin(x)
y1 = cos(x)
y = np.vstack([y0, y1]).T
rbf = Rbf(x, y, function=function, mode='N-D')
xi = linspace(0, 10, 100)
yi = rbf(xi)
msg = "abs-diff: %f" % abs(yi - np.vstack([sin(xi), cos(xi)]).T).max()
assert_(allclose(yi, np.vstack([sin(xi), cos(xi)]).T, atol=atol), msg)
def test_2drbf_regularity():
tolerances = {
'multiquadric': 0.1,
'inverse multiquadric': 0.15,
'gaussian': 0.15,
'cubic': 0.15,
'quintic': 0.1,
'thin-plate': 0.15,
'linear': 0.2
}
for function in FUNCTIONS:
check_2drbf1d_regularity(function, tolerances.get(function, 1e-2))
def check_rbf1d_stability(function):
# Check that the Rbf function with default epsilon is not subject
# to overshoot. Regression for issue #4523.
#
# Generate some data (fixed random seed hence deterministic)
np.random.seed(1234)
x = np.linspace(0, 10, 50)
z = x + 4.0 * np.random.randn(len(x))
rbf = Rbf(x, z, function=function)
xi = np.linspace(0, 10, 1000)
yi = rbf(xi)
    # subtract the linear trend and make sure there are no spikes
assert_(np.abs(yi-xi).max() / np.abs(z-x).max() < 1.1)
def test_rbf_stability():
for function in FUNCTIONS:
check_rbf1d_stability(function)
def test_default_construction():
# Check that the Rbf class can be constructed with the default
# multiquadric basis function. Regression test for ticket #1228.
x = linspace(0,10,9)
y = sin(x)
rbf = Rbf(x, y)
yi = rbf(x)
assert_array_almost_equal(y, yi)
def test_function_is_callable():
# Check that the Rbf class can be constructed with function=callable.
x = linspace(0,10,9)
y = sin(x)
def linfunc(x):
return x
rbf = Rbf(x, y, function=linfunc)
yi = rbf(x)
assert_array_almost_equal(y, yi)
def test_two_arg_function_is_callable():
# Check that the Rbf class can be constructed with a two argument
# function=callable.
def _func(self, r):
return self.epsilon + r
x = linspace(0,10,9)
y = sin(x)
rbf = Rbf(x, y, function=_func)
yi = rbf(x)
assert_array_almost_equal(y, yi)
def test_rbf_epsilon_none():
x = linspace(0, 10, 9)
y = sin(x)
Rbf(x, y, epsilon=None)
def test_rbf_epsilon_none_collinear():
    # Check that collinear points in one dimension don't cause an error
    # due to epsilon = 0
x = [1, 2, 3]
y = [4, 4, 4]
z = [5, 6, 7]
rbf = Rbf(x, y, z, epsilon=None)
assert_(rbf.epsilon > 0)
# ============================================================================
# scipy / scipy-main/scipy/interpolate/tests/test_polyint.py
# ============================================================================
import warnings
import io
import numpy as np
from numpy.testing import (
assert_almost_equal, assert_array_equal, assert_array_almost_equal,
assert_allclose, assert_equal, assert_)
from pytest import raises as assert_raises
import pytest
from scipy.interpolate import (
KroghInterpolator, krogh_interpolate,
BarycentricInterpolator, barycentric_interpolate,
approximate_taylor_polynomial, CubicHermiteSpline, pchip,
PchipInterpolator, pchip_interpolate, Akima1DInterpolator, CubicSpline,
make_interp_spline)
def check_shape(interpolator_cls, x_shape, y_shape, deriv_shape=None, axis=0,
extra_args={}):
np.random.seed(1234)
x = [-1, 0, 1, 2, 3, 4]
s = list(range(1, len(y_shape)+1))
s.insert(axis % (len(y_shape)+1), 0)
y = np.random.rand(*((6,) + y_shape)).transpose(s)
xi = np.zeros(x_shape)
if interpolator_cls is CubicHermiteSpline:
dydx = np.random.rand(*((6,) + y_shape)).transpose(s)
yi = interpolator_cls(x, y, dydx, axis=axis, **extra_args)(xi)
else:
yi = interpolator_cls(x, y, axis=axis, **extra_args)(xi)
target_shape = ((deriv_shape or ()) + y.shape[:axis]
+ x_shape + y.shape[axis:][1:])
assert_equal(yi.shape, target_shape)
# check it works also with lists
if x_shape and y.size > 0:
if interpolator_cls is CubicHermiteSpline:
interpolator_cls(list(x), list(y), list(dydx), axis=axis,
**extra_args)(list(xi))
else:
interpolator_cls(list(x), list(y), axis=axis,
**extra_args)(list(xi))
# check also values
if xi.size > 0 and deriv_shape is None:
bs_shape = y.shape[:axis] + (1,)*len(x_shape) + y.shape[axis:][1:]
yv = y[((slice(None,),)*(axis % y.ndim)) + (1,)]
yv = yv.reshape(bs_shape)
yi, y = np.broadcast_arrays(yi, yv)
assert_allclose(yi, y)
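The shape rule that `check_shape` verifies is: the evaluation result has shape `y.shape[:axis] + xi.shape + y.shape[axis:][1:]`, with any derivative axes prepended. A small sketch of that rule for `KroghInterpolator` (the array sizes are arbitrary illustrative choices):

```python
import numpy as np
from scipy.interpolate import KroghInterpolator

x = np.arange(6.0)
y = np.random.rand(6, 2)          # two stacked problems along the trailing axis
p = KroghInterpolator(x, y)       # interpolation axis defaults to 0
out = p(np.zeros((3, 4)))         # evaluate at a (3, 4) array of points
# result shape: xi.shape followed by the non-interpolation dims of y
assert out.shape == (3, 4, 2)
```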
SHAPES = [(), (0,), (1,), (6, 2, 5)]
def test_shapes():
def spl_interp(x, y, axis):
return make_interp_spline(x, y, axis=axis)
for ip in [KroghInterpolator, BarycentricInterpolator, CubicHermiteSpline,
pchip, Akima1DInterpolator, CubicSpline, spl_interp]:
for s1 in SHAPES:
for s2 in SHAPES:
for axis in range(-len(s2), len(s2)):
if ip != CubicSpline:
check_shape(ip, s1, s2, None, axis)
else:
for bc in ['natural', 'clamped']:
extra = {'bc_type': bc}
check_shape(ip, s1, s2, None, axis, extra)
def test_derivs_shapes():
def krogh_derivs(x, y, axis=0):
return KroghInterpolator(x, y, axis).derivatives
for s1 in SHAPES:
for s2 in SHAPES:
for axis in range(-len(s2), len(s2)):
check_shape(krogh_derivs, s1, s2, (6,), axis)
def test_deriv_shapes():
def krogh_deriv(x, y, axis=0):
return KroghInterpolator(x, y, axis).derivative
def pchip_deriv(x, y, axis=0):
return pchip(x, y, axis).derivative()
def pchip_deriv2(x, y, axis=0):
return pchip(x, y, axis).derivative(2)
def pchip_antideriv(x, y, axis=0):
return pchip(x, y, axis).antiderivative()
def pchip_antideriv2(x, y, axis=0):
return pchip(x, y, axis).antiderivative(2)
def pchip_deriv_inplace(x, y, axis=0):
class P(PchipInterpolator):
def __call__(self, x):
return PchipInterpolator.__call__(self, x, 1)
return P(x, y, axis)
def akima_deriv(x, y, axis=0):
return Akima1DInterpolator(x, y, axis).derivative()
def akima_antideriv(x, y, axis=0):
return Akima1DInterpolator(x, y, axis).antiderivative()
def cspline_deriv(x, y, axis=0):
return CubicSpline(x, y, axis).derivative()
def cspline_antideriv(x, y, axis=0):
return CubicSpline(x, y, axis).antiderivative()
def bspl_deriv(x, y, axis=0):
return make_interp_spline(x, y, axis=axis).derivative()
def bspl_antideriv(x, y, axis=0):
return make_interp_spline(x, y, axis=axis).antiderivative()
for ip in [krogh_deriv, pchip_deriv, pchip_deriv2, pchip_deriv_inplace,
pchip_antideriv, pchip_antideriv2, akima_deriv, akima_antideriv,
cspline_deriv, cspline_antideriv, bspl_deriv, bspl_antideriv]:
for s1 in SHAPES:
for s2 in SHAPES:
for axis in range(-len(s2), len(s2)):
check_shape(ip, s1, s2, (), axis)
def test_complex():
x = [1, 2, 3, 4]
y = [1, 2, 1j, 3]
for ip in [KroghInterpolator, BarycentricInterpolator, pchip, CubicSpline]:
p = ip(x, y)
assert_allclose(y, p(x))
dydx = [0, -1j, 2, 3j]
p = CubicHermiteSpline(x, y, dydx)
assert_allclose(y, p(x))
assert_allclose(dydx, p(x, 1))
class TestKrogh:
def setup_method(self):
self.true_poly = np.poly1d([-2,3,1,5,-4])
self.test_xs = np.linspace(-1,1,100)
self.xs = np.linspace(-1,1,5)
self.ys = self.true_poly(self.xs)
def test_lagrange(self):
P = KroghInterpolator(self.xs,self.ys)
assert_almost_equal(self.true_poly(self.test_xs),P(self.test_xs))
def test_scalar(self):
P = KroghInterpolator(self.xs,self.ys)
assert_almost_equal(self.true_poly(7),P(7))
assert_almost_equal(self.true_poly(np.array(7)), P(np.array(7)))
def test_derivatives(self):
P = KroghInterpolator(self.xs,self.ys)
D = P.derivatives(self.test_xs)
for i in range(D.shape[0]):
assert_almost_equal(self.true_poly.deriv(i)(self.test_xs),
D[i])
def test_low_derivatives(self):
P = KroghInterpolator(self.xs,self.ys)
D = P.derivatives(self.test_xs,len(self.xs)+2)
for i in range(D.shape[0]):
assert_almost_equal(self.true_poly.deriv(i)(self.test_xs),
D[i])
def test_derivative(self):
P = KroghInterpolator(self.xs,self.ys)
m = 10
r = P.derivatives(self.test_xs,m)
for i in range(m):
assert_almost_equal(P.derivative(self.test_xs,i),r[i])
def test_high_derivative(self):
P = KroghInterpolator(self.xs,self.ys)
for i in range(len(self.xs), 2*len(self.xs)):
assert_almost_equal(P.derivative(self.test_xs,i),
np.zeros(len(self.test_xs)))
def test_hermite(self):
P = KroghInterpolator(self.xs,self.ys)
assert_almost_equal(self.true_poly(self.test_xs),P(self.test_xs))
def test_vector(self):
xs = [0, 1, 2]
ys = np.array([[0,1],[1,0],[2,1]])
P = KroghInterpolator(xs,ys)
Pi = [KroghInterpolator(xs,ys[:,i]) for i in range(ys.shape[1])]
test_xs = np.linspace(-1,3,100)
assert_almost_equal(P(test_xs),
np.asarray([p(test_xs) for p in Pi]).T)
assert_almost_equal(P.derivatives(test_xs),
np.transpose(np.asarray([p.derivatives(test_xs) for p in Pi]),
(1,2,0)))
def test_empty(self):
P = KroghInterpolator(self.xs,self.ys)
assert_array_equal(P([]), [])
def test_shapes_scalarvalue(self):
P = KroghInterpolator(self.xs,self.ys)
assert_array_equal(np.shape(P(0)), ())
assert_array_equal(np.shape(P(np.array(0))), ())
assert_array_equal(np.shape(P([0])), (1,))
assert_array_equal(np.shape(P([0,1])), (2,))
def test_shapes_scalarvalue_derivative(self):
P = KroghInterpolator(self.xs,self.ys)
n = P.n
assert_array_equal(np.shape(P.derivatives(0)), (n,))
assert_array_equal(np.shape(P.derivatives(np.array(0))), (n,))
assert_array_equal(np.shape(P.derivatives([0])), (n,1))
assert_array_equal(np.shape(P.derivatives([0,1])), (n,2))
def test_shapes_vectorvalue(self):
P = KroghInterpolator(self.xs,np.outer(self.ys,np.arange(3)))
assert_array_equal(np.shape(P(0)), (3,))
assert_array_equal(np.shape(P([0])), (1,3))
assert_array_equal(np.shape(P([0,1])), (2,3))
def test_shapes_1d_vectorvalue(self):
P = KroghInterpolator(self.xs,np.outer(self.ys,[1]))
assert_array_equal(np.shape(P(0)), (1,))
assert_array_equal(np.shape(P([0])), (1,1))
assert_array_equal(np.shape(P([0,1])), (2,1))
def test_shapes_vectorvalue_derivative(self):
P = KroghInterpolator(self.xs,np.outer(self.ys,np.arange(3)))
n = P.n
assert_array_equal(np.shape(P.derivatives(0)), (n,3))
assert_array_equal(np.shape(P.derivatives([0])), (n,1,3))
assert_array_equal(np.shape(P.derivatives([0,1])), (n,2,3))
def test_wrapper(self):
P = KroghInterpolator(self.xs, self.ys)
ki = krogh_interpolate
assert_almost_equal(P(self.test_xs), ki(self.xs, self.ys, self.test_xs))
assert_almost_equal(P.derivative(self.test_xs, 2),
ki(self.xs, self.ys, self.test_xs, der=2))
assert_almost_equal(P.derivatives(self.test_xs, 2),
ki(self.xs, self.ys, self.test_xs, der=[0, 1]))
def test_int_inputs(self):
# Check input args are cast correctly to floats, gh-3669
x = [0, 234, 468, 702, 936, 1170, 1404, 2340, 3744, 6084, 8424,
13104, 60000]
offset_cdf = np.array([-0.95, -0.86114777, -0.8147762, -0.64072425,
-0.48002351, -0.34925329, -0.26503107,
-0.13148093, -0.12988833, -0.12979296,
-0.12973574, -0.08582937, 0.05])
f = KroghInterpolator(x, offset_cdf)
assert_allclose(abs((f(x) - offset_cdf) / f.derivative(x, 1)),
0, atol=1e-10)
def test_derivatives_complex(self):
# regression test for gh-7381: krogh.derivatives(0) fails complex y
x, y = np.array([-1, -1, 0, 1, 1]), np.array([1, 1.0j, 0, -1, 1.0j])
func = KroghInterpolator(x, y)
cmplx = func.derivatives(0)
cmplx2 = (KroghInterpolator(x, y.real).derivatives(0) +
1j*KroghInterpolator(x, y.imag).derivatives(0))
assert_allclose(cmplx, cmplx2, atol=1e-15)
def test_high_degree_warning(self):
with pytest.warns(UserWarning, match="40 degrees provided,"):
KroghInterpolator(np.arange(40), np.ones(40))
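A minimal standalone illustration of the Hermite-style behaviour exercised above: repeating an abscissa in `xi` makes the corresponding `yi` entries act as successive derivatives at that point.

```python
from scipy.interpolate import KroghInterpolator

# At x=0 both the value (1.0) and the first derivative (2.0) are pinned;
# at x=1 only the value (5.0) is. The result is the quadratic 1 + 2x + 2x**2.
p = KroghInterpolator([0, 0, 1], [1.0, 2.0, 5.0])
print(p(0.0), p.derivative(0.0, 1), p(1.0))  # -> 1.0 2.0 5.0
```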
class TestTaylor:
def test_exponential(self):
degree = 5
p = approximate_taylor_polynomial(np.exp, 0, degree, 1, 15)
for i in range(degree+1):
assert_almost_equal(p(0),1)
p = p.deriv()
assert_almost_equal(p(0),0)
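For reference, the same API outside the test harness. `approximate_taylor_polynomial` estimates the Taylor polynomial numerically from function samples, so comparisons should use a loose tolerance; the bound below is a conservative sketch, not a guaranteed error estimate.

```python
import numpy as np
from scipy.interpolate import approximate_taylor_polynomial

# Degree-3 Taylor polynomial of sin around 0 (approximately x - x**3/6),
# estimated from function values within |x| <= scale.
p = approximate_taylor_polynomial(np.sin, 0, degree=3, scale=1.0)
print(abs(p(0.1) - np.sin(0.1)) < 1e-2)  # close agreement near 0
```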
class TestBarycentric:
def setup_method(self):
self.true_poly = np.poly1d([-2, 3, 1, 5, -4])
self.test_xs = np.linspace(-1, 1, 100)
self.xs = np.linspace(-1, 1, 5)
self.ys = self.true_poly(self.xs)
def test_lagrange(self):
P = BarycentricInterpolator(self.xs, self.ys)
assert_almost_equal(self.true_poly(self.test_xs), P(self.test_xs))
def test_scalar(self):
P = BarycentricInterpolator(self.xs, self.ys)
assert_almost_equal(self.true_poly(7), P(7))
assert_almost_equal(self.true_poly(np.array(7)), P(np.array(7)))
def test_delayed(self):
P = BarycentricInterpolator(self.xs)
P.set_yi(self.ys)
assert_almost_equal(self.true_poly(self.test_xs), P(self.test_xs))
def test_append(self):
P = BarycentricInterpolator(self.xs[:3], self.ys[:3])
P.add_xi(self.xs[3:], self.ys[3:])
assert_almost_equal(self.true_poly(self.test_xs), P(self.test_xs))
def test_vector(self):
xs = [0, 1, 2]
ys = np.array([[0, 1], [1, 0], [2, 1]])
BI = BarycentricInterpolator
P = BI(xs, ys)
Pi = [BI(xs, ys[:, i]) for i in range(ys.shape[1])]
test_xs = np.linspace(-1, 3, 100)
assert_almost_equal(P(test_xs),
np.asarray([p(test_xs) for p in Pi]).T)
def test_shapes_scalarvalue(self):
P = BarycentricInterpolator(self.xs, self.ys)
assert_array_equal(np.shape(P(0)), ())
assert_array_equal(np.shape(P(np.array(0))), ())
assert_array_equal(np.shape(P([0])), (1,))
assert_array_equal(np.shape(P([0, 1])), (2,))
def test_shapes_vectorvalue(self):
P = BarycentricInterpolator(self.xs, np.outer(self.ys, np.arange(3)))
assert_array_equal(np.shape(P(0)), (3,))
assert_array_equal(np.shape(P([0])), (1, 3))
assert_array_equal(np.shape(P([0, 1])), (2, 3))
def test_shapes_1d_vectorvalue(self):
P = BarycentricInterpolator(self.xs, np.outer(self.ys, [1]))
assert_array_equal(np.shape(P(0)), (1,))
assert_array_equal(np.shape(P([0])), (1, 1))
assert_array_equal(np.shape(P([0,1])), (2, 1))
def test_wrapper(self):
P = BarycentricInterpolator(self.xs, self.ys)
values = barycentric_interpolate(self.xs, self.ys, self.test_xs)
assert_almost_equal(P(self.test_xs), values)
def test_int_input(self):
x = 1000 * np.arange(1, 11) # np.prod(x[-1] - x[:-1]) overflows
y = np.arange(1, 11)
value = barycentric_interpolate(x, y, 1000 * 9.5)
assert_almost_equal(value, 9.5)
def test_large_chebyshev(self):
        # The barycentric weights for Chebyshev points of the second kind
        # are known in closed form. Naive calculation of the weights will fail
# for large N because of numerical underflow and overflow. We test
# correctness for large N against analytical Chebyshev weights.
# Without capacity scaling or permutation, n=800 fails,
# With just capacity scaling, n=1097 fails
# With both capacity scaling and random permutation, n=30000 succeeds
n = 800
j = np.arange(n + 1).astype(np.float64)
x = np.cos(j * np.pi / n)
# See page 506 of Berrut and Trefethen 2004 for this formula
w = (-1) ** j
w[0] *= 0.5
w[-1] *= 0.5
P = BarycentricInterpolator(x)
# It's okay to have a constant scaling factor in the weights because it
# cancels out in the evaluation of the polynomial.
factor = P.wi[0]
assert_almost_equal(P.wi / (2 * factor), w)
def test_warning(self):
        # Test that the divide-by-zero warning is properly ignored when the
        # evaluation points coincide with the interpolation nodes
P = BarycentricInterpolator([0, 1], [1, 2])
with np.errstate(divide='raise'):
yi = P(P.xi)
        # Additionally check that the interpolated values equal the node values
assert_almost_equal(yi, P.yi.ravel())
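The `test_delayed` pattern above is the intended workflow when many data sets share the same nodes: the barycentric weights depend only on `xs`, so they can be computed once and reused via `set_yi`.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

x = np.linspace(-1, 1, 5)
P = BarycentricInterpolator(x)  # weights computed once, from x alone
P.set_yi(x**2)
print(P(0.5))   # -> 0.25 (five nodes reproduce a quadratic exactly)
P.set_yi(x**3)  # reuse the same weights for a second data set
print(P(0.5))   # -> 0.125
```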
class TestPCHIP:
def _make_random(self, npts=20):
np.random.seed(1234)
xi = np.sort(np.random.random(npts))
yi = np.random.random(npts)
return pchip(xi, yi), xi, yi
def test_overshoot(self):
# PCHIP should not overshoot
p, xi, yi = self._make_random()
for i in range(len(xi)-1):
x1, x2 = xi[i], xi[i+1]
y1, y2 = yi[i], yi[i+1]
if y1 > y2:
y1, y2 = y2, y1
xp = np.linspace(x1, x2, 10)
yp = p(xp)
assert_(((y1 <= yp + 1e-15) & (yp <= y2 + 1e-15)).all())
def test_monotone(self):
        # PCHIP should preserve monotonicity
p, xi, yi = self._make_random()
for i in range(len(xi)-1):
x1, x2 = xi[i], xi[i+1]
y1, y2 = yi[i], yi[i+1]
xp = np.linspace(x1, x2, 10)
yp = p(xp)
            assert_(((y2-y1) * (yp[1:] - yp[:-1]) > 0).all())
def test_cast(self):
# regression test for integer input data, see gh-3453
data = np.array([[0, 4, 12, 27, 47, 60, 79, 87, 99, 100],
[-33, -33, -19, -2, 12, 26, 38, 45, 53, 55]])
xx = np.arange(100)
curve = pchip(data[0], data[1])(xx)
data1 = data * 1.0
curve1 = pchip(data1[0], data1[1])(xx)
assert_allclose(curve, curve1, atol=1e-14, rtol=1e-14)
def test_nag(self):
# Example from NAG C implementation,
# http://nag.com/numeric/cl/nagdoc_cl25/html/e01/e01bec.html
# suggested in gh-5326 as a smoke test for the way the derivatives
# are computed (see also gh-3453)
dataStr = '''
7.99 0.00000E+0
8.09 0.27643E-4
8.19 0.43750E-1
8.70 0.16918E+0
9.20 0.46943E+0
10.00 0.94374E+0
12.00 0.99864E+0
15.00 0.99992E+0
20.00 0.99999E+0
'''
data = np.loadtxt(io.StringIO(dataStr))
pch = pchip(data[:,0], data[:,1])
resultStr = '''
7.9900 0.0000
9.1910 0.4640
10.3920 0.9645
11.5930 0.9965
12.7940 0.9992
13.9950 0.9998
15.1960 0.9999
16.3970 1.0000
17.5980 1.0000
18.7990 1.0000
20.0000 1.0000
'''
result = np.loadtxt(io.StringIO(resultStr))
assert_allclose(result[:,1], pch(result[:,0]), rtol=0., atol=5e-5)
def test_endslopes(self):
# this is a smoke test for gh-3453: PCHIP interpolator should not
# set edge slopes to zero if the data do not suggest zero edge derivatives
x = np.array([0.0, 0.1, 0.25, 0.35])
y1 = np.array([279.35, 0.5e3, 1.0e3, 2.5e3])
y2 = np.array([279.35, 2.5e3, 1.50e3, 1.0e3])
for pp in (pchip(x, y1), pchip(x, y2)):
for t in (x[0], x[-1]):
assert_(pp(t, 1) != 0)
def test_all_zeros(self):
x = np.arange(10)
y = np.zeros_like(x)
# this should work and not generate any warnings
with warnings.catch_warnings():
warnings.filterwarnings('error')
pch = pchip(x, y)
xx = np.linspace(0, 9, 101)
assert_equal(pch(xx), 0.)
def test_two_points(self):
# regression test for gh-6222: pchip([0, 1], [0, 1]) fails because
# it tries to use a three-point scheme to estimate edge derivatives,
# while there are only two points available.
# Instead, it should construct a linear interpolator.
x = np.linspace(0, 1, 11)
p = pchip([0, 1], [0, 2])
assert_allclose(p(x), 2*x, atol=1e-15)
def test_pchip_interpolate(self):
assert_array_almost_equal(
pchip_interpolate([1,2,3], [4,5,6], [0.5], der=1),
[1.])
assert_array_almost_equal(
pchip_interpolate([1,2,3], [4,5,6], [0.5], der=0),
[3.5])
assert_array_almost_equal(
pchip_interpolate([1,2,3], [4,5,6], [0.5], der=[0, 1]),
[[3.5], [1]])
def test_roots(self):
# regression test for gh-6357: .roots method should work
p = pchip([0, 1], [-1, 1])
r = p.roots()
assert_allclose(r, 0.5)
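The overshoot and monotonicity properties tested above in a compact, self-contained sketch: on monotone step-like data, PCHIP stays within the data range and never reverses direction.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone step-like data: PCHIP must neither overshoot nor break monotonicity.
x = np.array([0., 1., 2., 3., 4.])
y = np.array([0., 0., 1., 1., 1.])
xx = np.linspace(0, 4, 201)
yy = PchipInterpolator(x, y)(xx)
print(yy.min() >= -1e-12 and yy.max() <= 1 + 1e-12)  # stays inside [0, 1]
print(np.all(np.diff(yy) >= -1e-12))                 # nondecreasing
```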
class TestCubicSpline:
@staticmethod
def check_correctness(S, bc_start='not-a-knot', bc_end='not-a-knot',
tol=1e-14):
"""Check that spline coefficients satisfy the continuity and boundary
conditions."""
x = S.x
c = S.c
dx = np.diff(x)
dx = dx.reshape([dx.shape[0]] + [1] * (c.ndim - 2))
dxi = dx[:-1]
# Check C2 continuity.
assert_allclose(c[3, 1:], c[0, :-1] * dxi**3 + c[1, :-1] * dxi**2 +
c[2, :-1] * dxi + c[3, :-1], rtol=tol, atol=tol)
assert_allclose(c[2, 1:], 3 * c[0, :-1] * dxi**2 +
2 * c[1, :-1] * dxi + c[2, :-1], rtol=tol, atol=tol)
assert_allclose(c[1, 1:], 3 * c[0, :-1] * dxi + c[1, :-1],
rtol=tol, atol=tol)
# Check that we found a parabola, the third derivative is 0.
if x.size == 3 and bc_start == 'not-a-knot' and bc_end == 'not-a-knot':
assert_allclose(c[0], 0, rtol=tol, atol=tol)
return
# Check periodic boundary conditions.
if bc_start == 'periodic':
assert_allclose(S(x[0], 0), S(x[-1], 0), rtol=tol, atol=tol)
assert_allclose(S(x[0], 1), S(x[-1], 1), rtol=tol, atol=tol)
assert_allclose(S(x[0], 2), S(x[-1], 2), rtol=tol, atol=tol)
return
# Check other boundary conditions.
if bc_start == 'not-a-knot':
if x.size == 2:
slope = (S(x[1]) - S(x[0])) / dx[0]
assert_allclose(S(x[0], 1), slope, rtol=tol, atol=tol)
else:
assert_allclose(c[0, 0], c[0, 1], rtol=tol, atol=tol)
elif bc_start == 'clamped':
assert_allclose(S(x[0], 1), 0, rtol=tol, atol=tol)
elif bc_start == 'natural':
assert_allclose(S(x[0], 2), 0, rtol=tol, atol=tol)
else:
order, value = bc_start
assert_allclose(S(x[0], order), value, rtol=tol, atol=tol)
if bc_end == 'not-a-knot':
if x.size == 2:
slope = (S(x[1]) - S(x[0])) / dx[0]
assert_allclose(S(x[1], 1), slope, rtol=tol, atol=tol)
else:
assert_allclose(c[0, -1], c[0, -2], rtol=tol, atol=tol)
elif bc_end == 'clamped':
assert_allclose(S(x[-1], 1), 0, rtol=tol, atol=tol)
elif bc_end == 'natural':
assert_allclose(S(x[-1], 2), 0, rtol=2*tol, atol=2*tol)
else:
order, value = bc_end
assert_allclose(S(x[-1], order), value, rtol=tol, atol=tol)
def check_all_bc(self, x, y, axis):
deriv_shape = list(y.shape)
del deriv_shape[axis]
first_deriv = np.empty(deriv_shape)
first_deriv.fill(2)
second_deriv = np.empty(deriv_shape)
second_deriv.fill(-1)
bc_all = [
'not-a-knot',
'natural',
'clamped',
(1, first_deriv),
(2, second_deriv)
]
for bc in bc_all[:3]:
S = CubicSpline(x, y, axis=axis, bc_type=bc)
self.check_correctness(S, bc, bc)
for bc_start in bc_all:
for bc_end in bc_all:
S = CubicSpline(x, y, axis=axis, bc_type=(bc_start, bc_end))
self.check_correctness(S, bc_start, bc_end, tol=2e-14)
def test_general(self):
x = np.array([-1, 0, 0.5, 2, 4, 4.5, 5.5, 9])
y = np.array([0, -0.5, 2, 3, 2.5, 1, 1, 0.5])
for n in [2, 3, x.size]:
self.check_all_bc(x[:n], y[:n], 0)
Y = np.empty((2, n, 2))
Y[0, :, 0] = y[:n]
Y[0, :, 1] = y[:n] - 1
Y[1, :, 0] = y[:n] + 2
Y[1, :, 1] = y[:n] + 3
self.check_all_bc(x[:n], Y, 1)
def test_periodic(self):
for n in [2, 3, 5]:
x = np.linspace(0, 2 * np.pi, n)
y = np.cos(x)
S = CubicSpline(x, y, bc_type='periodic')
self.check_correctness(S, 'periodic', 'periodic')
Y = np.empty((2, n, 2))
Y[0, :, 0] = y
Y[0, :, 1] = y + 2
Y[1, :, 0] = y - 1
Y[1, :, 1] = y + 5
S = CubicSpline(x, Y, axis=1, bc_type='periodic')
self.check_correctness(S, 'periodic', 'periodic')
def test_periodic_eval(self):
x = np.linspace(0, 2 * np.pi, 10)
y = np.cos(x)
S = CubicSpline(x, y, bc_type='periodic')
assert_almost_equal(S(1), S(1 + 2 * np.pi), decimal=15)
def test_second_derivative_continuity_gh_11758(self):
# gh-11758: C2 continuity fail
x = np.array([0.9, 1.3, 1.9, 2.1, 2.6, 3.0, 3.9, 4.4, 4.7, 5.0, 6.0,
7.0, 8.0, 9.2, 10.5, 11.3, 11.6, 12.0, 12.6, 13.0, 13.3])
y = np.array([1.3, 1.5, 1.85, 2.1, 2.6, 2.7, 2.4, 2.15, 2.05, 2.1,
2.25, 2.3, 2.25, 1.95, 1.4, 0.9, 0.7, 0.6, 0.5, 0.4, 1.3])
S = CubicSpline(x, y, bc_type='periodic', extrapolate='periodic')
self.check_correctness(S, 'periodic', 'periodic')
def test_three_points(self):
# gh-11758: Fails computing a_m2_m1
        # In this case, s (first derivatives) could be found manually by solving
        # a system of 2 linear equations, which gives
        # s[i] = (h1*m2 + h2*m1) / (h1 + h2), where h1 = x[1] - x[0],
        # h2 = x[2] - x[1], m1 = (y[1] - y[0]) / h1, m2 = (y[2] - y[1]) / h2
x = np.array([1.0, 2.75, 3.0])
y = np.array([1.0, 15.0, 1.0])
S = CubicSpline(x, y, bc_type='periodic')
self.check_correctness(S, 'periodic', 'periodic')
assert_allclose(S.derivative(1)(x), np.array([-48.0, -48.0, -48.0]))
def test_dtypes(self):
x = np.array([0, 1, 2, 3], dtype=int)
y = np.array([-5, 2, 3, 1], dtype=int)
S = CubicSpline(x, y)
self.check_correctness(S)
y = np.array([-1+1j, 0.0, 1-1j, 0.5-1.5j])
S = CubicSpline(x, y)
self.check_correctness(S)
S = CubicSpline(x, x ** 3, bc_type=("natural", (1, 2j)))
self.check_correctness(S, "natural", (1, 2j))
y = np.array([-5, 2, 3, 1])
S = CubicSpline(x, y, bc_type=[(1, 2 + 0.5j), (2, 0.5 - 1j)])
self.check_correctness(S, (1, 2 + 0.5j), (2, 0.5 - 1j))
def test_small_dx(self):
rng = np.random.RandomState(0)
x = np.sort(rng.uniform(size=100))
y = 1e4 + rng.uniform(size=100)
S = CubicSpline(x, y)
self.check_correctness(S, tol=1e-13)
def test_incorrect_inputs(self):
x = np.array([1, 2, 3, 4])
y = np.array([1, 2, 3, 4])
xc = np.array([1 + 1j, 2, 3, 4])
xn = np.array([np.nan, 2, 3, 4])
xo = np.array([2, 1, 3, 4])
yn = np.array([np.nan, 2, 3, 4])
y3 = [1, 2, 3]
x1 = [1]
y1 = [1]
assert_raises(ValueError, CubicSpline, xc, y)
assert_raises(ValueError, CubicSpline, xn, y)
assert_raises(ValueError, CubicSpline, x, yn)
assert_raises(ValueError, CubicSpline, xo, y)
assert_raises(ValueError, CubicSpline, x, y3)
assert_raises(ValueError, CubicSpline, x[:, np.newaxis], y)
assert_raises(ValueError, CubicSpline, x1, y1)
wrong_bc = [('periodic', 'clamped'),
((2, 0), (3, 10)),
((1, 0), ),
(0., 0.),
'not-a-typo']
for bc_type in wrong_bc:
assert_raises(ValueError, CubicSpline, x, y, 0, bc_type, True)
# Shapes mismatch when giving arbitrary derivative values:
Y = np.c_[y, y]
bc1 = ('clamped', (1, 0))
bc2 = ('clamped', (1, [0, 0, 0]))
bc3 = ('clamped', (1, [[0, 0]]))
assert_raises(ValueError, CubicSpline, x, Y, 0, bc1, True)
assert_raises(ValueError, CubicSpline, x, Y, 0, bc2, True)
assert_raises(ValueError, CubicSpline, x, Y, 0, bc3, True)
# periodic condition, y[-1] must be equal to y[0]:
assert_raises(ValueError, CubicSpline, x, y, 0, 'periodic', True)
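The closed form quoted in `test_three_points` can be checked directly: for a periodic cubic spline on three points, the shared nodal slope reduces to s = (h1*m2 + h2*m1) / (h1 + h2).

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([1.0, 2.75, 3.0])
y = np.array([1.0, 15.0, 1.0])
h1, h2 = x[1] - x[0], x[2] - x[1]
m1, m2 = (y[1] - y[0]) / h1, (y[2] - y[1]) / h2
s = (h1 * m2 + h2 * m1) / (h1 + h2)     # closed-form slope: -48.0
S = CubicSpline(x, y, bc_type='periodic')
print(np.allclose(S(x, 1), s))          # all nodal slopes match the formula
```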
def test_CubicHermiteSpline_correctness():
x = [0, 2, 7]
y = [-1, 2, 3]
dydx = [0, 3, 7]
s = CubicHermiteSpline(x, y, dydx)
assert_allclose(s(x), y, rtol=1e-15)
assert_allclose(s(x, 1), dydx, rtol=1e-15)
def test_CubicHermiteSpline_error_handling():
x = [1, 2, 3]
y = [0, 3, 5]
dydx = [1, -1, 2, 3]
assert_raises(ValueError, CubicHermiteSpline, x, y, dydx)
dydx_with_nan = [1, 0, np.nan]
assert_raises(ValueError, CubicHermiteSpline, x, y, dydx_with_nan)
def test_roots_extrapolate_gh_11185():
x = np.array([0.001, 0.002])
y = np.array([1.66066935e-06, 1.10410807e-06])
dy = np.array([-1.60061854, -1.600619])
p = CubicHermiteSpline(x, y, dy)
# roots(extrapolate=True) for a polynomial with a single interval
# should return all three real roots
r = p.roots(extrapolate=True)
assert_equal(p.c.shape[1], 1)
assert_equal(r.size, 3)
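`roots` is inherited from `PPoly`; a small sketch of the `extrapolate` switch on data where the spline (here a parabola, since three points with not-a-knot conditions collapse to a single polynomial) has a root outside the data range:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Three points -> the not-a-knot spline is the parabola x**2 - 1, roots at +/-1.
S = CubicSpline([0., 1., 2.], [-1., 0., 3.])
r_in = S.roots(extrapolate=False)
r_all = S.roots(extrapolate=True)
print(np.any(np.isclose(r_in, 1.0)))    # root inside [0, 2] is found
print(np.any(np.isclose(r_all, -1.0)))  # extrapolated root also appears
```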
class TestZeroSizeArrays:
# regression tests for gh-17241 : CubicSpline et al must not segfault
# when y.size == 0
# The two methods below are _almost_ the same, but not quite:
# one is for objects which have the `bc_type` argument (CubicSpline)
# and the other one is for those which do not (Pchip, Akima1D)
@pytest.mark.parametrize('y', [np.zeros((10, 0, 5)),
np.zeros((10, 5, 0))])
@pytest.mark.parametrize('bc_type',
['not-a-knot', 'periodic', 'natural', 'clamped'])
@pytest.mark.parametrize('axis', [0, 1, 2])
@pytest.mark.parametrize('cls', [make_interp_spline, CubicSpline])
def test_zero_size(self, cls, y, bc_type, axis):
x = np.arange(10)
xval = np.arange(3)
obj = cls(x, y, bc_type=bc_type)
assert obj(xval).size == 0
assert obj(xval).shape == xval.shape + y.shape[1:]
# Also check with an explicit non-default axis
yt = np.moveaxis(y, 0, axis) # (10, 0, 5) --> (0, 10, 5) if axis=1 etc
obj = cls(x, yt, bc_type=bc_type, axis=axis)
sh = yt.shape[:axis] + (xval.size, ) + yt.shape[axis+1:]
assert obj(xval).size == 0
assert obj(xval).shape == sh
@pytest.mark.parametrize('y', [np.zeros((10, 0, 5)),
np.zeros((10, 5, 0))])
@pytest.mark.parametrize('axis', [0, 1, 2])
@pytest.mark.parametrize('cls', [PchipInterpolator, Akima1DInterpolator])
def test_zero_size_2(self, cls, y, axis):
x = np.arange(10)
xval = np.arange(3)
obj = cls(x, y)
assert obj(xval).size == 0
assert obj(xval).shape == xval.shape + y.shape[1:]
# Also check with an explicit non-default axis
yt = np.moveaxis(y, 0, axis) # (10, 0, 5) --> (0, 10, 5) if axis=1 etc
obj = cls(x, yt, axis=axis)
sh = yt.shape[:axis] + (xval.size, ) + yt.shape[axis+1:]
assert obj(xval).size == 0
assert obj(xval).shape == sh
# ---------------------------------------------------------------------------
# scipy-main/scipy/interpolate/tests/test_fitpack.py
# ---------------------------------------------------------------------------
import itertools
import os
import numpy as np
from numpy.testing import (assert_equal, assert_allclose, assert_,
assert_almost_equal, assert_array_almost_equal)
from pytest import raises as assert_raises
import pytest
from scipy._lib._testutils import check_free_memory
from scipy.interpolate import RectBivariateSpline
from scipy.interpolate._fitpack_py import (splrep, splev, bisplrep, bisplev,
sproot, splprep, splint, spalde, splder, splantider, insert, dblint)
from scipy.interpolate.dfitpack import regrid_smth
from scipy.interpolate._fitpack2 import dfitpack_int
def data_file(basename):
return os.path.join(os.path.abspath(os.path.dirname(__file__)),
'data', basename)
def norm2(x):
return np.sqrt(np.dot(x.T, x))
def f1(x, d=0):
"""Derivatives of sin->cos->-sin->-cos."""
if d % 4 == 0:
return np.sin(x)
if d % 4 == 1:
return np.cos(x)
if d % 4 == 2:
return -np.sin(x)
if d % 4 == 3:
return -np.cos(x)
def makepairs(x, y):
"""Helper function to create an array of pairs of x and y."""
xy = np.array(list(itertools.product(np.asarray(x), np.asarray(y))))
return xy.T
class TestSmokeTests:
"""
Smoke tests (with a few asserts) for fitpack routines -- mostly
check that they are runnable
"""
def check_1(self, per=0, s=0, a=0, b=2*np.pi, at_nodes=False,
xb=None, xe=None):
if xb is None:
xb = a
if xe is None:
xe = b
N = 20
# nodes and middle points of the nodes
x = np.linspace(a, b, N + 1)
x1 = a + (b - a) * np.arange(1, N, dtype=float) / float(N - 1)
v = f1(x)
def err_est(k, d):
# Assume f has all derivatives < 1
h = 1.0 / N
tol = 5 * h**(.75*(k-d))
if s > 0:
tol += 1e5*s
return tol
for k in range(1, 6):
tck = splrep(x, v, s=s, per=per, k=k, xe=xe)
tt = tck[0][k:-k] if at_nodes else x1
for d in range(k+1):
tol = err_est(k, d)
err = norm2(f1(tt, d) - splev(tt, tck, d)) / norm2(f1(tt, d))
assert err < tol
def check_2(self, per=0, N=20, ia=0, ib=2*np.pi):
a, b, dx = 0, 2*np.pi, 0.2*np.pi
x = np.linspace(a, b, N+1) # nodes
v = np.sin(x)
def err_est(k, d):
# Assume f has all derivatives < 1
h = 1.0 / N
tol = 5 * h**(.75*(k-d))
return tol
nk = []
for k in range(1, 6):
tck = splrep(x, v, s=0, per=per, k=k, xe=b)
nk.append([splint(ia, ib, tck), spalde(dx, tck)])
k = 1
for r in nk:
d = 0
for dr in r[1]:
tol = err_est(k, d)
assert_allclose(dr, f1(dx, d), atol=0, rtol=tol)
d = d+1
k = k+1
def test_smoke_splrep_splev(self):
self.check_1(s=1e-6)
self.check_1(b=1.5*np.pi)
self.check_1(b=1.5*np.pi, xe=2*np.pi, per=1, s=1e-1)
@pytest.mark.parametrize('per', [0, 1])
@pytest.mark.parametrize('at_nodes', [True, False])
def test_smoke_splrep_splev_2(self, per, at_nodes):
self.check_1(per=per, at_nodes=at_nodes)
@pytest.mark.parametrize('N', [20, 50])
@pytest.mark.parametrize('per', [0, 1])
def test_smoke_splint_spalde(self, N, per):
self.check_2(per=per, N=N)
@pytest.mark.parametrize('N', [20, 50])
@pytest.mark.parametrize('per', [0, 1])
def test_smoke_splint_spalde_iaib(self, N, per):
self.check_2(ia=0.2*np.pi, ib=np.pi, N=N, per=per)
def test_smoke_sproot(self):
# sproot is only implemented for k=3
a, b = 0.1, 15
x = np.linspace(a, b, 20)
v = np.sin(x)
for k in [1, 2, 4, 5]:
tck = splrep(x, v, s=0, per=0, k=k, xe=b)
with assert_raises(ValueError):
sproot(tck)
k = 3
tck = splrep(x, v, s=0, k=3)
roots = sproot(tck)
assert_allclose(splev(roots, tck), 0, atol=1e-10, rtol=1e-10)
assert_allclose(roots, np.pi * np.array([1, 2, 3, 4]), rtol=1e-3)
@pytest.mark.parametrize('N', [20, 50])
@pytest.mark.parametrize('k', [1, 2, 3, 4, 5])
def test_smoke_splprep_splrep_splev(self, N, k):
a, b, dx = 0, 2.*np.pi, 0.2*np.pi
x = np.linspace(a, b, N+1) # nodes
v = np.sin(x)
tckp, u = splprep([x, v], s=0, per=0, k=k, nest=-1)
uv = splev(dx, tckp)
err1 = abs(uv[1] - np.sin(uv[0]))
assert err1 < 1e-2
tck = splrep(x, v, s=0, per=0, k=k)
err2 = abs(splev(uv[0], tck) - np.sin(uv[0]))
assert err2 < 1e-2
# Derivatives of parametric cubic spline at u (first function)
if k == 3:
tckp, u = splprep([x, v], s=0, per=0, k=k, nest=-1)
for d in range(1, k+1):
uv = splev(dx, tckp, d)
def test_smoke_bisplrep_bisplev(self):
xb, xe = 0, 2.*np.pi
yb, ye = 0, 2.*np.pi
kx, ky = 3, 3
Nx, Ny = 20, 20
def f2(x, y):
return np.sin(x+y)
x = np.linspace(xb, xe, Nx + 1)
y = np.linspace(yb, ye, Ny + 1)
xy = makepairs(x, y)
tck = bisplrep(xy[0], xy[1], f2(xy[0], xy[1]), s=0, kx=kx, ky=ky)
tt = [tck[0][kx:-kx], tck[1][ky:-ky]]
t2 = makepairs(tt[0], tt[1])
v1 = bisplev(tt[0], tt[1], tck)
v2 = f2(t2[0], t2[1])
v2.shape = len(tt[0]), len(tt[1])
assert norm2(np.ravel(v1 - v2)) < 1e-2
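The bivariate smoke test above in condensed form. This is an illustrative sketch, not a benchmark, so the fit tolerance is deliberately loose.

```python
import numpy as np
from scipy.interpolate import bisplrep, bisplev

# Fit sin(x + y) on a 15x15 grid with light smoothing, then evaluate
# the fitted surface at a single point.
x, y = np.meshgrid(np.linspace(0, 3, 15), np.linspace(0, 3, 15))
z = np.sin(x + y)
tck = bisplrep(x.ravel(), y.ravel(), z.ravel(), s=0.1)
print(abs(bisplev(1.0, 1.5, tck) - np.sin(2.5)) < 0.1)  # close to sin(2.5)
```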
class TestSplev:
def test_1d_shape(self):
x = [1,2,3,4,5]
y = [4,5,6,7,8]
tck = splrep(x, y)
z = splev([1], tck)
assert_equal(z.shape, (1,))
z = splev(1, tck)
assert_equal(z.shape, ())
def test_2d_shape(self):
x = [1, 2, 3, 4, 5]
y = [4, 5, 6, 7, 8]
tck = splrep(x, y)
t = np.array([[1.0, 1.5, 2.0, 2.5],
[3.0, 3.5, 4.0, 4.5]])
z = splev(t, tck)
z0 = splev(t[0], tck)
z1 = splev(t[1], tck)
        assert_equal(z, np.vstack((z0, z1)))
def test_extrapolation_modes(self):
# test extrapolation modes
# * if ext=0, return the extrapolated value.
# * if ext=1, return 0
# * if ext=2, raise a ValueError
# * if ext=3, return the boundary value.
x = [1,2,3]
y = [0,2,4]
tck = splrep(x, y, k=1)
rstl = [[-2, 6], [0, 0], None, [0, 4]]
for ext in (0, 1, 3):
assert_array_almost_equal(splev([0, 4], tck, ext=ext), rstl[ext])
assert_raises(ValueError, splev, [0, 4], tck, ext=2)
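The extrapolation table in `test_extrapolation_modes` as a standalone sketch; the spline is exactly y = 2*(x - 1), so the `ext=0` values are easy to verify by hand.

```python
import numpy as np
from scipy.interpolate import splrep, splev

tck = splrep([1, 2, 3], [0, 2, 4], k=1)  # linear spline, y = 2*(x - 1)
print(splev([0, 4], tck, ext=0))         # extrapolate      -> [-2., 6.]
print(splev([0, 4], tck, ext=1))         # zero outside     -> [0., 0.]
print(splev([0, 4], tck, ext=3))         # clamp to boundary -> [0., 4.]
```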
class TestSplder:
def setup_method(self):
# non-uniform grid, just to make it sure
x = np.linspace(0, 1, 100)**3
y = np.sin(20 * x)
self.spl = splrep(x, y)
# double check that knots are non-uniform
assert_(np.ptp(np.diff(self.spl[0])) > 0)
def test_inverse(self):
# Check that antiderivative + derivative is identity.
for n in range(5):
spl2 = splantider(self.spl, n)
spl3 = splder(spl2, n)
assert_allclose(self.spl[0], spl3[0])
assert_allclose(self.spl[1], spl3[1])
assert_equal(self.spl[2], spl3[2])
def test_splder_vs_splev(self):
# Check derivative vs. FITPACK
for n in range(3+1):
# Also extrapolation!
xx = np.linspace(-1, 2, 2000)
if n == 3:
# ... except that FITPACK extrapolates strangely for
# order 0, so let's not check that.
xx = xx[(xx >= 0) & (xx <= 1)]
dy = splev(xx, self.spl, n)
spl2 = splder(self.spl, n)
dy2 = splev(xx, spl2)
if n == 1:
assert_allclose(dy, dy2, rtol=2e-6)
else:
assert_allclose(dy, dy2)
def test_splantider_vs_splint(self):
# Check antiderivative vs. FITPACK
spl2 = splantider(self.spl)
# no extrapolation, splint assumes function is zero outside
# range
xx = np.linspace(0, 1, 20)
for x1 in xx:
for x2 in xx:
y1 = splint(x1, x2, self.spl)
y2 = splev(x2, spl2) - splev(x1, spl2)
assert_allclose(y1, y2)
def test_order0_diff(self):
assert_raises(ValueError, splder, self.spl, 4)
def test_kink(self):
# Should refuse to differentiate splines with kinks
spl2 = insert(0.5, self.spl, m=2)
splder(spl2, 2) # Should work
assert_raises(ValueError, splder, spl2, 3)
spl2 = insert(0.5, self.spl, m=3)
splder(spl2, 1) # Should work
assert_raises(ValueError, splder, spl2, 2)
spl2 = insert(0.5, self.spl, m=4)
assert_raises(ValueError, splder, spl2, 1)
def test_multidim(self):
# c can have trailing dims
for n in range(3):
t, c, k = self.spl
c2 = np.c_[c, c, c]
c2 = np.dstack((c2, c2))
spl2 = splantider((t, c2, k), n)
spl3 = splder(spl2, n)
assert_allclose(t, spl3[0])
assert_allclose(c2, spl3[1])
assert_equal(k, spl3[2])
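A compact round-trip sketch of the derivative/antiderivative machinery tested above: `splder` and `splantider` return new tck tuples, and the antiderivative difference agrees with `splint`.

```python
import numpy as np
from scipy.interpolate import splrep, splev, splder, splantider, splint

x = np.linspace(0, 1, 20)
tck = splrep(x, x**2)    # s=0 by default: interpolates y = x**2 exactly
dtck = splder(tck)       # tck of the derivative, 2*x
atck = splantider(tck)   # tck of an antiderivative
print(np.allclose(splev(0.5, dtck), 1.0))            # d/dx x**2 at 0.5
print(np.allclose(splev(1.0, atck) - splev(0.0, atck),
                  splint(0.0, 1.0, tck)))            # both equal 1/3
```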
class TestSplint:
def test_len_c(self):
n, k = 7, 3
x = np.arange(n)
y = x**3
t, c, k = splrep(x, y, s=0)
# note that len(c) == len(t) == 11 (== len(x) + 2*(k-1))
assert len(t) == len(c) == n + 2*(k-1)
# integrate directly: $\int_0^6 x^3 dx = 6^4 / 4$
res = splint(0, 6, (t, c, k))
assert_allclose(res, 6**4 / 4, atol=1e-15)
# check that the coefficients past len(t) - k - 1 are ignored
c0 = c.copy()
c0[len(t)-k-1:] = np.nan
res0 = splint(0, 6, (t, c0, k))
assert_allclose(res0, 6**4 / 4, atol=1e-15)
# however, all other coefficients *are* used
c0[6] = np.nan
assert np.isnan(splint(0, 6, (t, c0, k)))
# check that the coefficient array can have length `len(t) - k - 1`
c1 = c[:len(t) - k - 1]
res1 = splint(0, 6, (t, c1, k))
assert_allclose(res1, 6**4 / 4, atol=1e-15)
        # however shorter c arrays raise. The error from f2py is a
        # `dfitpack.error`, which is an Exception but not ValueError etc.
with assert_raises(Exception, match=r">=n-k-1"):
splint(0, 1, (np.ones(10), np.ones(5), 3))
class TestBisplrep:
def test_overflow(self):
from numpy.lib.stride_tricks import as_strided
if dfitpack_int.itemsize == 8:
size = 1500000**2
else:
size = 400**2
# Don't allocate a real array, as it's very big, but rely
# on that it's not referenced
x = as_strided(np.zeros(()), shape=(size,))
assert_raises(OverflowError, bisplrep, x, x, x, w=x,
xb=0, xe=1, yb=0, ye=1, s=0)
def test_regression_1310(self):
# Regression test for gh-1310
data = np.load(data_file('bug-1310.npz'))['data']
# Shouldn't crash -- the input data triggers work array sizes
# that caused previously some data to not be aligned on
# sizeof(double) boundaries in memory, which made the Fortran
# code to crash when compiled with -O3
bisplrep(data[:,0], data[:,1], data[:,2], kx=3, ky=3, s=0,
full_output=True)
@pytest.mark.skipif(dfitpack_int != np.int64, reason="needs ilp64 fitpack")
def test_ilp64_bisplrep(self):
check_free_memory(28000) # VM size, doesn't actually use the pages
x = np.linspace(0, 1, 400)
y = np.linspace(0, 1, 400)
x, y = np.meshgrid(x, y)
z = np.zeros_like(x)
tck = bisplrep(x, y, z, kx=3, ky=3, s=0)
assert_allclose(bisplev(0.5, 0.5, tck), 0.0)
def test_dblint():
# Basic test to see it runs and gives the correct result on a trivial
# problem. Note that `dblint` is not exposed in the interpolate namespace.
x = np.linspace(0, 1)
y = np.linspace(0, 1)
xx, yy = np.meshgrid(x, y)
rect = RectBivariateSpline(x, y, 4 * xx * yy)
tck = list(rect.tck)
tck.extend(rect.degrees)
assert_almost_equal(dblint(0, 1, 0, 1, tck), 1)
assert_almost_equal(dblint(0, 0.5, 0, 1, tck), 0.25)
assert_almost_equal(dblint(0.5, 1, 0, 1, tck), 0.75)
assert_almost_equal(dblint(-100, 100, -100, 100, tck), 1)
def test_splev_der_k():
# regression test for gh-2188: splev(x, tck, der=k) gives garbage or crashes
# for x outside of knot range
# test case from gh-2188
tck = (np.array([0., 0., 2.5, 2.5]),
np.array([-1.56679978, 2.43995873, 0., 0.]),
1)
t, c, k = tck
x = np.array([-3, 0, 2.5, 3])
# an explicit form of the linear spline
assert_allclose(splev(x, tck), c[0] + (c[1] - c[0]) * x/t[2])
assert_allclose(splev(x, tck, 1), (c[1]-c[0]) / t[2])
# now check a random spline vs splder
np.random.seed(1234)
x = np.sort(np.random.random(30))
y = np.random.random(30)
t, c, k = splrep(x, y)
x = [t[0] - 1., t[-1] + 1.]
tck2 = splder((t, c, k), k)
assert_allclose(splev(x, (t, c, k), k), splev(x, tck2))
def test_splprep_segfault():
# regression test for gh-3847: splprep segfaults if knots are specified
# for task=-1
t = np.arange(0, 1.1, 0.1)
x = np.sin(2*np.pi*t)
y = np.cos(2*np.pi*t)
tck, u = splprep([x, y], s=0)
np.arange(0, 1.01, 0.01)
uknots = tck[0] # using the knots from the previous fitting
tck, u = splprep([x, y], task=-1, t=uknots) # here is the crash
def test_bisplev_integer_overflow():
np.random.seed(1)
x = np.linspace(0, 1, 11)
y = x
z = np.random.randn(11, 11).ravel()
kx = 1
ky = 1
nx, tx, ny, ty, c, fp, ier = regrid_smth(
x, y, z, None, None, None, None, kx=kx, ky=ky, s=0.0)
tck = (tx[:nx], ty[:ny], c[:(nx - kx - 1) * (ny - ky - 1)], kx, ky)
xp = np.zeros([2621440])
yp = np.zeros([2621440])
assert_raises((RuntimeError, MemoryError), bisplev, xp, yp, tck)
@pytest.mark.xslow
def test_gh_1766():
# this should fail gracefully instead of segfaulting (int overflow)
size = 22
kx, ky = 3, 3
def f2(x, y):
return np.sin(x+y)
x = np.linspace(0, 10, size)
y = np.linspace(50, 700, size)
xy = makepairs(x, y)
tck = bisplrep(xy[0], xy[1], f2(xy[0], xy[1]), s=0, kx=kx, ky=ky)
# the size value here can either segfault
# or produce a MemoryError on main
tx_ty_size = 500000
tck[0] = np.arange(tx_ty_size)
tck[1] = np.arange(tx_ty_size) * 4
tt_0 = np.arange(50)
tt_1 = np.arange(50) * 3
with pytest.raises(MemoryError):
bisplev(tt_0, tt_1, tck, 1, 1)
def test_spalde_scalar_input():
# Ticket #629
x = np.linspace(0, 10)
y = x**3
tck = splrep(x, y, k=3, t=[5])
res = spalde(np.float64(1), tck)
des = np.array([1., 3., 6., 6.])
assert_almost_equal(res, des)
# --- scipy-main/scipy/interpolate/tests/test_gil.py ---
import itertools
import threading
import time
import numpy as np
from numpy.testing import assert_equal
import pytest
import scipy.interpolate
class TestGIL:
"""Check if the GIL is properly released by scipy.interpolate functions."""
def setup_method(self):
self.messages = []
def log(self, message):
self.messages.append(message)
def make_worker_thread(self, target, args):
log = self.log
class WorkerThread(threading.Thread):
def run(self):
log('interpolation started')
target(*args)
log('interpolation complete')
return WorkerThread()
@pytest.mark.slow
@pytest.mark.xfail(reason='race conditions, may depend on system load')
def test_rectbivariatespline(self):
def generate_params(n_points):
x = y = np.linspace(0, 1000, n_points)
x_grid, y_grid = np.meshgrid(x, y)
z = x_grid * y_grid
return x, y, z
def calibrate_delay(requested_time):
for n_points in itertools.count(5000, 1000):
args = generate_params(n_points)
time_started = time.time()
interpolate(*args)
if time.time() - time_started > requested_time:
return args
def interpolate(x, y, z):
scipy.interpolate.RectBivariateSpline(x, y, z)
args = calibrate_delay(requested_time=3)
worker_thread = self.make_worker_thread(interpolate, args)
worker_thread.start()
for i in range(3):
time.sleep(0.5)
self.log('working')
worker_thread.join()
assert_equal(self.messages, [
'interpolation started',
'working',
'working',
'working',
'interpolation complete',
])
# --- scipy-main/scipy/interpolate/tests/test_bsplines.py ---
import operator
import itertools
import numpy as np
from numpy.testing import assert_equal, assert_allclose, assert_
from pytest import raises as assert_raises
import pytest
from scipy.interpolate import (
BSpline, BPoly, PPoly, make_interp_spline, make_lsq_spline, _bspl,
splev, splrep, splprep, splder, splantider, sproot, splint, insert,
CubicSpline, NdBSpline, make_smoothing_spline
)
import scipy.linalg as sl
from scipy.interpolate._bsplines import (_not_a_knot, _augknt,
_woodbury_algorithm, _periodic_knots,
_make_interp_per_full_matr)
import scipy.interpolate._fitpack_impl as _impl
import os
class TestBSpline:
def test_ctor(self):
# knots should be an ordered 1-D array of finite real numbers
assert_raises((TypeError, ValueError), BSpline,
**dict(t=[1, 1.j], c=[1.], k=0))
with np.errstate(invalid='ignore'):
assert_raises(ValueError, BSpline, **dict(t=[1, np.nan], c=[1.], k=0))
assert_raises(ValueError, BSpline, **dict(t=[1, np.inf], c=[1.], k=0))
assert_raises(ValueError, BSpline, **dict(t=[1, -1], c=[1.], k=0))
assert_raises(ValueError, BSpline, **dict(t=[[1], [1]], c=[1.], k=0))
# for n+k+1 knots and degree k need at least n coefficients
assert_raises(ValueError, BSpline, **dict(t=[0, 1, 2], c=[1], k=0))
assert_raises(ValueError, BSpline,
**dict(t=[0, 1, 2, 3, 4], c=[1., 1.], k=2))
# non-integer orders
assert_raises(TypeError, BSpline,
**dict(t=[0., 0., 1., 2., 3., 4.], c=[1., 1., 1.], k="cubic"))
assert_raises(TypeError, BSpline,
**dict(t=[0., 0., 1., 2., 3., 4.], c=[1., 1., 1.], k=2.5))
# basic interval cannot have measure zero (here: [1..1])
assert_raises(ValueError, BSpline,
**dict(t=[0., 0, 1, 1, 2, 3], c=[1., 1, 1], k=2))
# tck vs self.tck
n, k = 11, 3
t = np.arange(n+k+1)
c = np.random.random(n)
b = BSpline(t, c, k)
assert_allclose(t, b.t)
assert_allclose(c, b.c)
assert_equal(k, b.k)
def test_tck(self):
b = _make_random_spline()
tck = b.tck
assert_allclose(b.t, tck[0], atol=1e-15, rtol=1e-15)
assert_allclose(b.c, tck[1], atol=1e-15, rtol=1e-15)
assert_equal(b.k, tck[2])
# b.tck is read-only
with pytest.raises(AttributeError):
b.tck = 'foo'
def test_degree_0(self):
xx = np.linspace(0, 1, 10)
b = BSpline(t=[0, 1], c=[3.], k=0)
assert_allclose(b(xx), 3)
b = BSpline(t=[0, 0.35, 1], c=[3, 4], k=0)
assert_allclose(b(xx), np.where(xx < 0.35, 3, 4))
def test_degree_1(self):
t = [0, 1, 2, 3, 4]
c = [1, 2, 3]
k = 1
b = BSpline(t, c, k)
x = np.linspace(1, 3, 50)
assert_allclose(c[0]*B_012(x) + c[1]*B_012(x-1) + c[2]*B_012(x-2),
b(x), atol=1e-14)
assert_allclose(splev(x, (t, c, k)), b(x), atol=1e-14)
def test_bernstein(self):
# a special knot vector: Bernstein polynomials
k = 3
t = np.asarray([0]*(k+1) + [1]*(k+1))
c = np.asarray([1., 2., 3., 4.])
bp = BPoly(c.reshape(-1, 1), [0, 1])
bspl = BSpline(t, c, k)
xx = np.linspace(-1., 2., 10)
assert_allclose(bp(xx, extrapolate=True),
bspl(xx, extrapolate=True), atol=1e-14)
assert_allclose(splev(xx, (t, c, k)),
bspl(xx), atol=1e-14)
def test_rndm_naive_eval(self):
# test random coefficient spline *on the base interval*,
# t[k] <= x < t[-k-1]
b = _make_random_spline()
t, c, k = b.tck
xx = np.linspace(t[k], t[-k-1], 50)
y_b = b(xx)
y_n = [_naive_eval(x, t, c, k) for x in xx]
assert_allclose(y_b, y_n, atol=1e-14)
y_n2 = [_naive_eval_2(x, t, c, k) for x in xx]
assert_allclose(y_b, y_n2, atol=1e-14)
def test_rndm_splev(self):
b = _make_random_spline()
t, c, k = b.tck
xx = np.linspace(t[k], t[-k-1], 50)
assert_allclose(b(xx), splev(xx, (t, c, k)), atol=1e-14)
def test_rndm_splrep(self):
np.random.seed(1234)
x = np.sort(np.random.random(20))
y = np.random.random(20)
tck = splrep(x, y)
b = BSpline(*tck)
t, k = b.t, b.k
xx = np.linspace(t[k], t[-k-1], 80)
assert_allclose(b(xx), splev(xx, tck), atol=1e-14)
def test_rndm_unity(self):
b = _make_random_spline()
b.c = np.ones_like(b.c)
xx = np.linspace(b.t[b.k], b.t[-b.k-1], 100)
assert_allclose(b(xx), 1.)
def test_vectorization(self):
n, k = 22, 3
t = np.sort(np.random.random(n))
c = np.random.random(size=(n, 6, 7))
b = BSpline(t, c, k)
tm, tp = t[k], t[-k-1]
xx = tm + (tp - tm) * np.random.random((3, 4, 5))
assert_equal(b(xx).shape, (3, 4, 5, 6, 7))
def test_len_c(self):
# for n+k+1 knots, only first n coefs are used.
# and BTW this is consistent with FITPACK
n, k = 33, 3
t = np.sort(np.random.random(n+k+1))
c = np.random.random(n)
# pad coefficients with random garbage
c_pad = np.r_[c, np.random.random(k+1)]
b, b_pad = BSpline(t, c, k), BSpline(t, c_pad, k)
dt = t[-1] - t[0]
xx = np.linspace(t[0] - dt, t[-1] + dt, 50)
assert_allclose(b(xx), b_pad(xx), atol=1e-14)
assert_allclose(b(xx), splev(xx, (t, c, k)), atol=1e-14)
assert_allclose(b(xx), splev(xx, (t, c_pad, k)), atol=1e-14)
def test_endpoints(self):
# base interval is closed
b = _make_random_spline()
t, _, k = b.tck
tm, tp = t[k], t[-k-1]
for extrap in (True, False):
assert_allclose(b([tm, tp], extrap),
b([tm + 1e-10, tp - 1e-10], extrap), atol=1e-9)
def test_continuity(self):
# assert continuity at internal knots
b = _make_random_spline()
t, _, k = b.tck
assert_allclose(b(t[k+1:-k-1] - 1e-10), b(t[k+1:-k-1] + 1e-10),
atol=1e-9)
def test_extrap(self):
b = _make_random_spline()
t, c, k = b.tck
dt = t[-1] - t[0]
xx = np.linspace(t[k] - dt, t[-k-1] + dt, 50)
mask = (t[k] < xx) & (xx < t[-k-1])
# extrap has no effect within the base interval
assert_allclose(b(xx[mask], extrapolate=True),
b(xx[mask], extrapolate=False))
# extrapolated values agree with FITPACK
assert_allclose(b(xx, extrapolate=True),
splev(xx, (t, c, k), ext=0))
def test_default_extrap(self):
# BSpline defaults to extrapolate=True
b = _make_random_spline()
t, _, k = b.tck
xx = [t[0] - 1, t[-1] + 1]
yy = b(xx)
assert_(not np.all(np.isnan(yy)))
def test_periodic_extrap(self):
np.random.seed(1234)
t = np.sort(np.random.random(8))
c = np.random.random(4)
k = 3
b = BSpline(t, c, k, extrapolate='periodic')
n = t.size - (k + 1)
dt = t[-1] - t[0]
xx = np.linspace(t[k] - dt, t[n] + dt, 50)
xy = t[k] + (xx - t[k]) % (t[n] - t[k])
assert_allclose(b(xx), splev(xy, (t, c, k)))
# Direct check
xx = [-1, 0, 0.5, 1]
xy = t[k] + (xx - t[k]) % (t[n] - t[k])
assert_equal(b(xx, extrapolate='periodic'), b(xy, extrapolate=True))
def test_ppoly(self):
b = _make_random_spline()
t, c, k = b.tck
pp = PPoly.from_spline((t, c, k))
xx = np.linspace(t[k], t[-k], 100)
assert_allclose(b(xx), pp(xx), atol=1e-14, rtol=1e-14)
def test_derivative_rndm(self):
b = _make_random_spline()
t, c, k = b.tck
xx = np.linspace(t[0], t[-1], 50)
xx = np.r_[xx, t]
for der in range(1, k+1):
yd = splev(xx, (t, c, k), der=der)
assert_allclose(yd, b(xx, nu=der), atol=1e-14)
# higher derivatives all vanish
assert_allclose(b(xx, nu=k+1), 0, atol=1e-14)
def test_derivative_jumps(self):
# example from de Boor, Chap IX, example (24)
# NB: knots augmented & corresp coefs are zeroed out
# in agreement with the convention (29)
k = 2
t = [-1, -1, 0, 1, 1, 3, 4, 6, 6, 6, 7, 7]
np.random.seed(1234)
c = np.r_[0, 0, np.random.random(5), 0, 0]
b = BSpline(t, c, k)
# b is continuous at x != 6 (triple knot)
x = np.asarray([1, 3, 4, 6])
assert_allclose(b(x[x != 6] - 1e-10),
b(x[x != 6] + 1e-10))
assert_(not np.allclose(b(6.-1e-10), b(6+1e-10)))
# 1st derivative jumps at double knots, 1 & 6:
x0 = np.asarray([3, 4])
assert_allclose(b(x0 - 1e-10, nu=1),
b(x0 + 1e-10, nu=1))
x1 = np.asarray([1, 6])
assert_(not np.all(np.allclose(b(x1 - 1e-10, nu=1),
b(x1 + 1e-10, nu=1))))
# 2nd derivative is not guaranteed to be continuous either
assert_(not np.all(np.allclose(b(x - 1e-10, nu=2),
b(x + 1e-10, nu=2))))
def test_basis_element_quadratic(self):
xx = np.linspace(-1, 4, 20)
b = BSpline.basis_element(t=[0, 1, 2, 3])
assert_allclose(b(xx),
splev(xx, (b.t, b.c, b.k)), atol=1e-14)
assert_allclose(b(xx),
B_0123(xx), atol=1e-14)
b = BSpline.basis_element(t=[0, 1, 1, 2])
xx = np.linspace(0, 2, 10)
assert_allclose(b(xx),
np.where(xx < 1, xx*xx, (2.-xx)**2), atol=1e-14)
def test_basis_element_rndm(self):
b = _make_random_spline()
t, c, k = b.tck
xx = np.linspace(t[k], t[-k-1], 20)
assert_allclose(b(xx), _sum_basis_elements(xx, t, c, k), atol=1e-14)
def test_cmplx(self):
b = _make_random_spline()
t, c, k = b.tck
cc = c * (1. + 3.j)
b = BSpline(t, cc, k)
b_re = BSpline(t, b.c.real, k)
b_im = BSpline(t, b.c.imag, k)
xx = np.linspace(t[k], t[-k-1], 20)
assert_allclose(b(xx).real, b_re(xx), atol=1e-14)
assert_allclose(b(xx).imag, b_im(xx), atol=1e-14)
def test_nan(self):
# nan in, nan out.
b = BSpline.basis_element([0, 1, 1, 2])
assert_(np.isnan(b(np.nan)))
def test_derivative_method(self):
b = _make_random_spline(k=5)
t, c, k = b.tck
b0 = BSpline(t, c, k)
xx = np.linspace(t[k], t[-k-1], 20)
for j in range(1, k):
b = b.derivative()
assert_allclose(b0(xx, j), b(xx), atol=1e-12, rtol=1e-12)
def test_antiderivative_method(self):
b = _make_random_spline()
t, c, k = b.tck
xx = np.linspace(t[k], t[-k-1], 20)
assert_allclose(b.antiderivative().derivative()(xx),
b(xx), atol=1e-14, rtol=1e-14)
# repeat with N-D array for c
c = np.c_[c, c, c]
c = np.dstack((c, c))
b = BSpline(t, c, k)
assert_allclose(b.antiderivative().derivative()(xx),
b(xx), atol=1e-14, rtol=1e-14)
def test_integral(self):
b = BSpline.basis_element([0, 1, 2]) # x for x < 1 else 2 - x
assert_allclose(b.integrate(0, 1), 0.5)
assert_allclose(b.integrate(1, 0), -1 * 0.5)
assert_allclose(b.integrate(1, 0), -0.5)
        # outside of [0, 2]: either extrapolate or return zeros; default is to extrapolate
assert_allclose(b.integrate(-1, 1), 0)
assert_allclose(b.integrate(-1, 1, extrapolate=True), 0)
assert_allclose(b.integrate(-1, 1, extrapolate=False), 0.5)
assert_allclose(b.integrate(1, -1, extrapolate=False), -1 * 0.5)
# Test ``_fitpack._splint()``
assert_allclose(b.integrate(1, -1, extrapolate=False),
_impl.splint(1, -1, b.tck))
# Test ``extrapolate='periodic'``.
b.extrapolate = 'periodic'
i = b.antiderivative()
period_int = i(2) - i(0)
assert_allclose(b.integrate(0, 2), period_int)
assert_allclose(b.integrate(2, 0), -1 * period_int)
assert_allclose(b.integrate(-9, -7), period_int)
assert_allclose(b.integrate(-8, -4), 2 * period_int)
assert_allclose(b.integrate(0.5, 1.5), i(1.5) - i(0.5))
assert_allclose(b.integrate(1.5, 3), i(1) - i(0) + i(2) - i(1.5))
assert_allclose(b.integrate(1.5 + 12, 3 + 12),
i(1) - i(0) + i(2) - i(1.5))
assert_allclose(b.integrate(1.5, 3 + 12),
i(1) - i(0) + i(2) - i(1.5) + 6 * period_int)
assert_allclose(b.integrate(0, -1), i(0) - i(1))
assert_allclose(b.integrate(-9, -10), i(0) - i(1))
assert_allclose(b.integrate(0, -9), i(1) - i(2) - 4 * period_int)
def test_integrate_ppoly(self):
# test .integrate method to be consistent with PPoly.integrate
x = [0, 1, 2, 3, 4]
b = make_interp_spline(x, x)
b.extrapolate = 'periodic'
p = PPoly.from_spline(b)
for x0, x1 in [(-5, 0.5), (0.5, 5), (-4, 13)]:
assert_allclose(b.integrate(x0, x1),
p.integrate(x0, x1))
def test_subclassing(self):
# classmethods should not decay to the base class
class B(BSpline):
pass
b = B.basis_element([0, 1, 2, 2])
assert_equal(b.__class__, B)
assert_equal(b.derivative().__class__, B)
assert_equal(b.antiderivative().__class__, B)
@pytest.mark.parametrize('axis', range(-4, 4))
def test_axis(self, axis):
n, k = 22, 3
t = np.linspace(0, 1, n + k + 1)
sh = [6, 7, 8]
# We need the positive axis for some of the indexing and slices used
# in this test.
pos_axis = axis % 4
sh.insert(pos_axis, n) # [22, 6, 7, 8] etc
c = np.random.random(size=sh)
b = BSpline(t, c, k, axis=axis)
assert_equal(b.c.shape,
[sh[pos_axis],] + sh[:pos_axis] + sh[pos_axis+1:])
xp = np.random.random((3, 4, 5))
assert_equal(b(xp).shape,
sh[:pos_axis] + list(xp.shape) + sh[pos_axis+1:])
# -c.ndim <= axis < c.ndim
for ax in [-c.ndim - 1, c.ndim]:
assert_raises(np.AxisError, BSpline,
**dict(t=t, c=c, k=k, axis=ax))
# derivative, antiderivative keeps the axis
for b1 in [BSpline(t, c, k, axis=axis).derivative(),
BSpline(t, c, k, axis=axis).derivative(2),
BSpline(t, c, k, axis=axis).antiderivative(),
BSpline(t, c, k, axis=axis).antiderivative(2)]:
assert_equal(b1.axis, b.axis)
def test_neg_axis(self):
k = 2
t = [0, 1, 2, 3, 4, 5, 6]
c = np.array([[-1, 2, 0, -1], [2, 0, -3, 1]])
spl = BSpline(t, c, k, axis=-1)
spl0 = BSpline(t, c[0], k)
spl1 = BSpline(t, c[1], k)
assert_equal(spl(2.5), [spl0(2.5), spl1(2.5)])
def test_design_matrix_bc_types(self):
'''
        Splines with different boundary conditions are built on different
        types of knot vectors. Since the design matrix depends only on the
        knot vector, `k` and `x`, it is useful to run the tests for different
        boundary conditions (and hence different knot vectors).
'''
def run_design_matrix_tests(n, k, bc_type):
'''
To avoid repetition of code the following function is provided.
'''
np.random.seed(1234)
x = np.sort(np.random.random_sample(n) * 40 - 20)
y = np.random.random_sample(n) * 40 - 20
if bc_type == "periodic":
y[0] = y[-1]
bspl = make_interp_spline(x, y, k=k, bc_type=bc_type)
c = np.eye(len(bspl.t) - k - 1)
des_matr_def = BSpline(bspl.t, c, k)(x)
des_matr_csr = BSpline.design_matrix(x,
bspl.t,
k).toarray()
assert_allclose(des_matr_csr @ bspl.c, y, atol=1e-14)
assert_allclose(des_matr_def, des_matr_csr, atol=1e-14)
# "clamped" and "natural" work only with `k = 3`
n = 11
k = 3
for bc in ["clamped", "natural"]:
run_design_matrix_tests(n, k, bc)
# "not-a-knot" works with odd `k`
for k in range(3, 8, 2):
run_design_matrix_tests(n, k, "not-a-knot")
# "periodic" works with any `k` (even more than `n`)
n = 5 # smaller `n` to test `k > n` case
for k in range(2, 7):
run_design_matrix_tests(n, k, "periodic")
@pytest.mark.parametrize('extrapolate', [False, True, 'periodic'])
@pytest.mark.parametrize('degree', range(5))
def test_design_matrix_same_as_BSpline_call(self, extrapolate, degree):
"""Test that design_matrix(x) is equivalent to BSpline(..)(x)."""
np.random.seed(1234)
x = np.random.random_sample(10 * (degree + 1))
xmin, xmax = np.amin(x), np.amax(x)
k = degree
t = np.r_[np.linspace(xmin - 2, xmin - 1, degree),
np.linspace(xmin, xmax, 2 * (degree + 1)),
np.linspace(xmax + 1, xmax + 2, degree)]
c = np.eye(len(t) - k - 1)
bspline = BSpline(t, c, k, extrapolate)
assert_allclose(
bspline(x), BSpline.design_matrix(x, t, k, extrapolate).toarray()
)
# extrapolation regime
x = np.array([xmin - 10, xmin - 1, xmax + 1.5, xmax + 10])
if not extrapolate:
with pytest.raises(ValueError):
BSpline.design_matrix(x, t, k, extrapolate)
else:
assert_allclose(
bspline(x),
BSpline.design_matrix(x, t, k, extrapolate).toarray()
)
def test_design_matrix_x_shapes(self):
# test for different `x` shapes
np.random.seed(1234)
n = 10
k = 3
x = np.sort(np.random.random_sample(n) * 40 - 20)
y = np.random.random_sample(n) * 40 - 20
bspl = make_interp_spline(x, y, k=k)
for i in range(1, 4):
xc = x[:i]
yc = y[:i]
des_matr_csr = BSpline.design_matrix(xc,
bspl.t,
k).toarray()
assert_allclose(des_matr_csr @ bspl.c, yc, atol=1e-14)
def test_design_matrix_t_shapes(self):
# test for minimal possible `t` shape
t = [1., 1., 1., 2., 3., 4., 4., 4.]
des_matr = BSpline.design_matrix(2., t, 3).toarray()
assert_allclose(des_matr,
[[0.25, 0.58333333, 0.16666667, 0.]],
atol=1e-14)
def test_design_matrix_asserts(self):
np.random.seed(1234)
n = 10
k = 3
x = np.sort(np.random.random_sample(n) * 40 - 20)
y = np.random.random_sample(n) * 40 - 20
bspl = make_interp_spline(x, y, k=k)
# invalid vector of knots (should be a 1D non-descending array)
# here the actual vector of knots is reversed, so it is invalid
with assert_raises(ValueError):
BSpline.design_matrix(x, bspl.t[::-1], k)
k = 2
t = [0., 1., 2., 3., 4., 5.]
x = [1., 2., 3., 4.]
# out of bounds
with assert_raises(ValueError):
BSpline.design_matrix(x, t, k)
@pytest.mark.parametrize('bc_type', ['natural', 'clamped',
'periodic', 'not-a-knot'])
def test_from_power_basis(self, bc_type):
np.random.seed(1234)
x = np.sort(np.random.random(20))
y = np.random.random(20)
if bc_type == 'periodic':
y[-1] = y[0]
cb = CubicSpline(x, y, bc_type=bc_type)
bspl = BSpline.from_power_basis(cb, bc_type=bc_type)
xx = np.linspace(0, 1, 20)
assert_allclose(cb(xx), bspl(xx), atol=1e-15)
bspl_new = make_interp_spline(x, y, bc_type=bc_type)
assert_allclose(bspl.c, bspl_new.c, atol=1e-15)
@pytest.mark.parametrize('bc_type', ['natural', 'clamped',
'periodic', 'not-a-knot'])
def test_from_power_basis_complex(self, bc_type):
np.random.seed(1234)
x = np.sort(np.random.random(20))
y = np.random.random(20) + np.random.random(20) * 1j
if bc_type == 'periodic':
y[-1] = y[0]
cb = CubicSpline(x, y, bc_type=bc_type)
bspl = BSpline.from_power_basis(cb, bc_type=bc_type)
bspl_new_real = make_interp_spline(x, y.real, bc_type=bc_type)
bspl_new_imag = make_interp_spline(x, y.imag, bc_type=bc_type)
assert_equal(bspl.c.dtype, (bspl_new_real.c
+ 1j * bspl_new_imag.c).dtype)
assert_allclose(bspl.c, bspl_new_real.c
+ 1j * bspl_new_imag.c, atol=1e-15)
def test_from_power_basis_exmp(self):
'''
For x = [0, 1, 2, 3, 4] and y = [1, 1, 1, 1, 1]
the coefficients of Cubic Spline in the power basis:
$[[0, 0, 0, 0, 0],\\$
$[0, 0, 0, 0, 0],\\$
$[0, 0, 0, 0, 0],\\$
$[1, 1, 1, 1, 1]]$
        It can be shown explicitly that the coefficients of the interpolating
        function in the B-spline basis are c = [1, 1, 1, 1, 1, 1, 1]
'''
x = np.array([0, 1, 2, 3, 4])
y = np.array([1, 1, 1, 1, 1])
bspl = BSpline.from_power_basis(CubicSpline(x, y, bc_type='natural'),
bc_type='natural')
assert_allclose(bspl.c, [1, 1, 1, 1, 1, 1, 1], atol=1e-15)
def test_read_only(self):
# BSpline must work on read-only knots and coefficients.
t = np.array([0, 1])
c = np.array([3.0])
t.setflags(write=False)
c.setflags(write=False)
xx = np.linspace(0, 1, 10)
xx.setflags(write=False)
b = BSpline(t=t, c=c, k=0)
assert_allclose(b(xx), 3)
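The design-matrix tests in the class above rely on the identity that the matrix of basis-function values applied to the coefficient vector reproduces the interpolated data. A minimal self-contained sketch (variable names are illustrative):

```python
import numpy as np
from scipy.interpolate import BSpline, make_interp_spline

x = np.linspace(0, 1, 8)
y = np.sin(2 * np.pi * x)
spl = make_interp_spline(x, y, k=3)

# Row i of the design matrix holds every basis function evaluated at x[i],
# so multiplying by the coefficient vector recovers the data.
A = BSpline.design_matrix(x, spl.t, spl.k).toarray()
assert np.allclose(A @ spl.c, y, atol=1e-10)
```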
def test_knots_multiplicity():
# Take a spline w/ random coefficients, throw in knots of varying
# multiplicity.
def check_splev(b, j, der=0, atol=1e-14, rtol=1e-14):
# check evaluations against FITPACK, incl extrapolations
t, c, k = b.tck
x = np.unique(t)
        x = np.r_[t[0]-0.1, 0.5*(x[1:] + x[:-1]), t[-1]+0.1]
assert_allclose(splev(x, (t, c, k), der), b(x, der),
atol=atol, rtol=rtol, err_msg=f'der = {der} k = {b.k}')
# test loop itself
# [the index `j` is for interpreting the traceback in case of a failure]
for k in [1, 2, 3, 4, 5]:
b = _make_random_spline(k=k)
for j, b1 in enumerate(_make_multiples(b)):
check_splev(b1, j)
for der in range(1, k+1):
check_splev(b1, j, der, 1e-12, 1e-12)
### stolen from @pv, verbatim
def _naive_B(x, k, i, t):
"""
Naive way to compute B-spline basis functions. Useful only for testing!
computes B(x; t[i],..., t[i+k+1])
"""
if k == 0:
return 1.0 if t[i] <= x < t[i+1] else 0.0
if t[i+k] == t[i]:
c1 = 0.0
else:
c1 = (x - t[i])/(t[i+k] - t[i]) * _naive_B(x, k-1, i, t)
if t[i+k+1] == t[i+1]:
c2 = 0.0
else:
c2 = (t[i+k+1] - x)/(t[i+k+1] - t[i+1]) * _naive_B(x, k-1, i+1, t)
return (c1 + c2)
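The recursion in `_naive_B` is the Cox-de Boor formula. One consequence worth remembering is that the basis functions sum to one on the base interval (partition of unity). A quick self-contained check — the helper below mirrors `_naive_B` rather than importing it:

```python
import numpy as np

def naive_B(x, k, i, t):
    # Cox-de Boor recursion for B(x; t[i], ..., t[i+k+1]).
    if k == 0:
        return 1.0 if t[i] <= x < t[i+1] else 0.0
    c1 = 0.0 if t[i+k] == t[i] else (
        (x - t[i]) / (t[i+k] - t[i]) * naive_B(x, k-1, i, t))
    c2 = 0.0 if t[i+k+1] == t[i+1] else (
        (t[i+k+1] - x) / (t[i+k+1] - t[i+1]) * naive_B(x, k-1, i+1, t))
    return c1 + c2

k = 3
t = np.arange(12, dtype=float)    # uniform knot vector
n = len(t) - k - 1
# On t[k] <= x < t[n] the n basis functions sum to exactly 1.
for x in np.linspace(t[k], t[n], 7, endpoint=False):
    total = sum(naive_B(x, k, i, t) for i in range(n))
    assert abs(total - 1.0) < 1e-12
```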
### stolen from @pv, verbatim
def _naive_eval(x, t, c, k):
"""
Naive B-spline evaluation. Useful only for testing!
"""
if x == t[k]:
i = k
else:
i = np.searchsorted(t, x) - 1
assert t[i] <= x <= t[i+1]
assert i >= k and i < len(t) - k
return sum(c[i-j] * _naive_B(x, k, i-j, t) for j in range(0, k+1))
def _naive_eval_2(x, t, c, k):
"""Naive B-spline evaluation, another way."""
n = len(t) - (k+1)
assert n >= k+1
assert len(c) >= n
assert t[k] <= x <= t[n]
return sum(c[i] * _naive_B(x, k, i, t) for i in range(n))
def _sum_basis_elements(x, t, c, k):
n = len(t) - (k+1)
assert n >= k+1
assert len(c) >= n
s = 0.
for i in range(n):
b = BSpline.basis_element(t[i:i+k+2], extrapolate=False)(x)
s += c[i] * np.nan_to_num(b) # zero out out-of-bounds elements
return s
def B_012(x):
""" A linear B-spline function B(x | 0, 1, 2)."""
x = np.atleast_1d(x)
return np.piecewise(x, [(x < 0) | (x > 2),
(x >= 0) & (x < 1),
(x >= 1) & (x <= 2)],
[lambda x: 0., lambda x: x, lambda x: 2.-x])
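The hat function above is exactly the degree-1 basis element on knots (0, 1, 2); a quick cross-check against `BSpline.basis_element` (a sketch, independent of the test helpers):

```python
import numpy as np
from scipy.interpolate import BSpline

b = BSpline.basis_element([0, 1, 2])   # linear "hat": peak of 1 at x=1
xx = np.linspace(0, 2, 9)
hat = np.where(xx < 1, xx, 2.0 - xx)   # explicit piecewise form on [0, 2]
assert np.allclose(b(xx), hat, atol=1e-14)
```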
def B_0123(x, der=0):
"""A quadratic B-spline function B(x | 0, 1, 2, 3)."""
x = np.atleast_1d(x)
conds = [x < 1, (x > 1) & (x < 2), x > 2]
if der == 0:
funcs = [lambda x: x*x/2.,
lambda x: 3./4 - (x-3./2)**2,
lambda x: (3.-x)**2 / 2]
elif der == 2:
funcs = [lambda x: 1.,
lambda x: -2.,
lambda x: 1.]
else:
raise ValueError('never be here: der=%s' % der)
pieces = np.piecewise(x, conds, funcs)
return pieces
def _make_random_spline(n=35, k=3):
np.random.seed(123)
t = np.sort(np.random.random(n+k+1))
c = np.random.random(n)
return BSpline.construct_fast(t, c, k)
def _make_multiples(b):
"""Increase knot multiplicity."""
c, k = b.c, b.k
t1 = b.t.copy()
t1[17:19] = t1[17]
t1[22] = t1[21]
yield BSpline(t1, c, k)
t1 = b.t.copy()
t1[:k+1] = t1[0]
yield BSpline(t1, c, k)
t1 = b.t.copy()
t1[-k-1:] = t1[-1]
yield BSpline(t1, c, k)
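Raising knot multiplicity, as `_make_multiples` does, is closely related to knot insertion: `insert` adds a knot while leaving the spline's values unchanged. A minimal sketch (names are illustrative):

```python
import numpy as np
from scipy.interpolate import insert, splev, splrep

rng = np.random.default_rng(0)
x = np.sort(rng.random(20))
y = np.cos(3 * x)
tck = splrep(x, y)

# Insert a new knot at the midpoint of the knot range; the two tck
# representations evaluate to the same spline.
t = tck[0]
xnew = 0.5 * (t[0] + t[-1])
tck2 = insert(xnew, tck)
xx = np.linspace(t[0], t[-1], 50)
assert np.allclose(splev(xx, tck), splev(xx, tck2), atol=1e-12)
```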
class TestInterop:
#
# Test that FITPACK-based spl* functions can deal with BSpline objects
#
def setup_method(self):
xx = np.linspace(0, 4.*np.pi, 41)
yy = np.cos(xx)
b = make_interp_spline(xx, yy)
self.tck = (b.t, b.c, b.k)
self.xx, self.yy, self.b = xx, yy, b
self.xnew = np.linspace(0, 4.*np.pi, 21)
c2 = np.c_[b.c, b.c, b.c]
self.c2 = np.dstack((c2, c2))
self.b2 = BSpline(b.t, self.c2, b.k)
def test_splev(self):
xnew, b, b2 = self.xnew, self.b, self.b2
# check that splev works with 1-D array of coefficients
# for array and scalar `x`
assert_allclose(splev(xnew, b),
b(xnew), atol=1e-15, rtol=1e-15)
assert_allclose(splev(xnew, b.tck),
b(xnew), atol=1e-15, rtol=1e-15)
assert_allclose([splev(x, b) for x in xnew],
b(xnew), atol=1e-15, rtol=1e-15)
        # With N-D coefficients, there's a quirk: splev(x, BSpline) would
        # mean BSpline(x), which differs from the legacy tck behavior, so
        # it raises and asks for BSpline.__call__ instead
with assert_raises(ValueError, match="Calling splev.. with BSpline"):
splev(xnew, b2)
# However, splev(x, BSpline.tck) needs some transposes. This is because
# BSpline interpolates along the first axis, while the legacy FITPACK
# wrapper does list(map(...)) which effectively interpolates along the
# last axis. Like so:
sh = tuple(range(1, b2.c.ndim)) + (0,) # sh = (1, 2, 0)
cc = b2.c.transpose(sh)
tck = (b2.t, cc, b2.k)
assert_allclose(splev(xnew, tck),
b2(xnew).transpose(sh), atol=1e-15, rtol=1e-15)
def test_splrep(self):
x, y = self.xx, self.yy
# test that "new" splrep is equivalent to _impl.splrep
tck = splrep(x, y)
t, c, k = _impl.splrep(x, y)
assert_allclose(tck[0], t, atol=1e-15)
assert_allclose(tck[1], c, atol=1e-15)
assert_equal(tck[2], k)
# also cover the `full_output=True` branch
tck_f, _, _, _ = splrep(x, y, full_output=True)
assert_allclose(tck_f[0], t, atol=1e-15)
assert_allclose(tck_f[1], c, atol=1e-15)
assert_equal(tck_f[2], k)
# test that the result of splrep roundtrips with splev:
# evaluate the spline on the original `x` points
yy = splev(x, tck)
assert_allclose(y, yy, atol=1e-15)
# ... and also it roundtrips if wrapped in a BSpline
b = BSpline(*tck)
assert_allclose(y, b(x), atol=1e-15)
def test_splrep_errors(self):
        # test that both "old" and "new" splrep raise for a ``y`` array
        # with ndim > 1
x, y = self.xx, self.yy
y2 = np.c_[y, y]
with assert_raises(ValueError):
splrep(x, y2)
with assert_raises(ValueError):
_impl.splrep(x, y2)
# input below minimum size
with assert_raises(TypeError, match="m > k must hold"):
splrep(x[:3], y[:3])
with assert_raises(TypeError, match="m > k must hold"):
_impl.splrep(x[:3], y[:3])
def test_splprep(self):
x = np.arange(15).reshape((3, 5))
b, u = splprep(x)
tck, u1 = _impl.splprep(x)
# test the roundtrip with splev for both "old" and "new" output
assert_allclose(u, u1, atol=1e-15)
assert_allclose(splev(u, b), x, atol=1e-15)
assert_allclose(splev(u, tck), x, atol=1e-15)
# cover the ``full_output=True`` branch
(b_f, u_f), _, _, _ = splprep(x, s=0, full_output=True)
assert_allclose(u, u_f, atol=1e-15)
assert_allclose(splev(u_f, b_f), x, atol=1e-15)
def test_splprep_errors(self):
# test that both "old" and "new" code paths raise for x.ndim > 2
x = np.arange(3*4*5).reshape((3, 4, 5))
with assert_raises(ValueError, match="too many values to unpack"):
splprep(x)
with assert_raises(ValueError, match="too many values to unpack"):
_impl.splprep(x)
# input below minimum size
x = np.linspace(0, 40, num=3)
with assert_raises(TypeError, match="m > k must hold"):
splprep([x])
with assert_raises(TypeError, match="m > k must hold"):
_impl.splprep([x])
# automatically calculated parameters are non-increasing
# see gh-7589
x = [-50.49072266, -50.49072266, -54.49072266, -54.49072266]
with assert_raises(ValueError, match="Invalid inputs"):
splprep([x])
with assert_raises(ValueError, match="Invalid inputs"):
_impl.splprep([x])
# given non-increasing parameter values u
x = [1, 3, 2, 4]
u = [0, 0.3, 0.2, 1]
with assert_raises(ValueError, match="Invalid inputs"):
splprep(*[[x], None, u])
def test_sproot(self):
b, b2 = self.b, self.b2
roots = np.array([0.5, 1.5, 2.5, 3.5])*np.pi
# sproot accepts a BSpline obj w/ 1-D coef array
assert_allclose(sproot(b), roots, atol=1e-7, rtol=1e-7)
assert_allclose(sproot((b.t, b.c, b.k)), roots, atol=1e-7, rtol=1e-7)
# ... and deals with trailing dimensions if coef array is N-D
with assert_raises(ValueError, match="Calling sproot.. with BSpline"):
sproot(b2, mest=50)
# and legacy behavior is preserved for a tck tuple w/ N-D coef
c2r = b2.c.transpose(1, 2, 0)
rr = np.asarray(sproot((b2.t, c2r, b2.k), mest=50))
assert_equal(rr.shape, (3, 2, 4))
assert_allclose(rr - roots, 0, atol=1e-12)
def test_splint(self):
# test that splint accepts BSpline objects
b, b2 = self.b, self.b2
assert_allclose(splint(0, 1, b),
splint(0, 1, b.tck), atol=1e-14)
assert_allclose(splint(0, 1, b),
b.integrate(0, 1), atol=1e-14)
# ... and deals with N-D arrays of coefficients
with assert_raises(ValueError, match="Calling splint.. with BSpline"):
splint(0, 1, b2)
# and the legacy behavior is preserved for a tck tuple w/ N-D coef
c2r = b2.c.transpose(1, 2, 0)
integr = np.asarray(splint(0, 1, (b2.t, c2r, b2.k)))
assert_equal(integr.shape, (3, 2))
assert_allclose(integr,
splint(0, 1, b), atol=1e-14)
def test_splder(self):
for b in [self.b, self.b2]:
# pad the c array (FITPACK convention)
ct = len(b.t) - len(b.c)
if ct > 0:
b.c = np.r_[b.c, np.zeros((ct,) + b.c.shape[1:])]
for n in [1, 2, 3]:
bd = splder(b)
tck_d = _impl.splder((b.t, b.c, b.k))
assert_allclose(bd.t, tck_d[0], atol=1e-15)
assert_allclose(bd.c, tck_d[1], atol=1e-15)
assert_equal(bd.k, tck_d[2])
assert_(isinstance(bd, BSpline))
assert_(isinstance(tck_d, tuple)) # back-compat: tck in and out
def test_splantider(self):
for b in [self.b, self.b2]:
# pad the c array (FITPACK convention)
ct = len(b.t) - len(b.c)
if ct > 0:
b.c = np.r_[b.c, np.zeros((ct,) + b.c.shape[1:])]
for n in [1, 2, 3]:
bd = splantider(b)
tck_d = _impl.splantider((b.t, b.c, b.k))
assert_allclose(bd.t, tck_d[0], atol=1e-15)
assert_allclose(bd.c, tck_d[1], atol=1e-15)
assert_equal(bd.k, tck_d[2])
assert_(isinstance(bd, BSpline))
assert_(isinstance(tck_d, tuple)) # back-compat: tck in and out
def test_insert(self):
b, b2, xx = self.b, self.b2, self.xx
j = b.t.size // 2
tn = 0.5*(b.t[j] + b.t[j+1])
bn, tck_n = insert(tn, b), insert(tn, (b.t, b.c, b.k))
assert_allclose(splev(xx, bn),
splev(xx, tck_n), atol=1e-15)
assert_(isinstance(bn, BSpline))
assert_(isinstance(tck_n, tuple)) # back-compat: tck in, tck out
# for N-D array of coefficients, BSpline.c needs to be transposed
# after that, the results are equivalent.
sh = tuple(range(b2.c.ndim))
c_ = b2.c.transpose(sh[1:] + (0,))
tck_n2 = insert(tn, (b2.t, c_, b2.k))
bn2 = insert(tn, b2)
# need a transpose for comparing the results, cf test_splev
assert_allclose(np.asarray(splev(xx, tck_n2)).transpose(2, 0, 1),
bn2(xx), atol=1e-15)
assert_(isinstance(bn2, BSpline))
assert_(isinstance(tck_n2, tuple)) # back-compat: tck in, tck out
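The transpose gymnastics in `test_splev` and `test_insert` above come down to one axis convention: `BSpline` interpolates along the first axis of `c`, while the legacy FITPACK wrappers map along the last. A self-contained sketch of the equivalence (names are illustrative):

```python
import numpy as np
from scipy.interpolate import make_interp_spline, splev

x = np.linspace(0, 2 * np.pi, 12)
y = np.c_[np.sin(x), np.cos(x)]      # two curves, shape (12, 2)
b = make_interp_spline(x, y)         # interpolates along the first axis

# The legacy tck form expects the interpolation axis last, hence the
# transpose of the coefficient array (and of the splev output).
tck = (b.t, b.c.T, b.k)
xx = np.linspace(0, 2 * np.pi, 30)
assert np.allclose(np.asarray(splev(xx, tck)).T, b(xx), atol=1e-12)
```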
class TestInterp:
#
# Test basic ways of constructing interpolating splines.
#
xx = np.linspace(0., 2.*np.pi)
yy = np.sin(xx)
def test_non_int_order(self):
with assert_raises(TypeError):
make_interp_spline(self.xx, self.yy, k=2.5)
def test_order_0(self):
b = make_interp_spline(self.xx, self.yy, k=0)
assert_allclose(b(self.xx), self.yy, atol=1e-14, rtol=1e-14)
b = make_interp_spline(self.xx, self.yy, k=0, axis=-1)
assert_allclose(b(self.xx), self.yy, atol=1e-14, rtol=1e-14)
def test_linear(self):
b = make_interp_spline(self.xx, self.yy, k=1)
assert_allclose(b(self.xx), self.yy, atol=1e-14, rtol=1e-14)
b = make_interp_spline(self.xx, self.yy, k=1, axis=-1)
assert_allclose(b(self.xx), self.yy, atol=1e-14, rtol=1e-14)
@pytest.mark.parametrize('k', [0, 1, 2, 3])
def test_incompatible_x_y(self, k):
x = [0, 1, 2, 3, 4, 5]
y = [0, 1, 2, 3, 4, 5, 6, 7]
with assert_raises(ValueError, match="Shapes of x"):
make_interp_spline(x, y, k=k)
@pytest.mark.parametrize('k', [0, 1, 2, 3])
def test_broken_x(self, k):
x = [0, 1, 1, 2, 3, 4] # duplicates
y = [0, 1, 2, 3, 4, 5]
with assert_raises(ValueError, match="x to not have duplicates"):
make_interp_spline(x, y, k=k)
x = [0, 2, 1, 3, 4, 5] # unsorted
with assert_raises(ValueError, match="Expect x to be a 1D strictly"):
make_interp_spline(x, y, k=k)
x = [0, 1, 2, 3, 4, 5]
x = np.asarray(x).reshape((1, -1)) # not 1D
with assert_raises(ValueError, match="Expect x to be a 1D strictly"):
make_interp_spline(x, y, k=k)
def test_not_a_knot(self):
for k in [3, 5]:
b = make_interp_spline(self.xx, self.yy, k)
assert_allclose(b(self.xx), self.yy, atol=1e-14, rtol=1e-14)
def test_periodic(self):
# k = 5 here for more derivatives
b = make_interp_spline(self.xx, self.yy, k=5, bc_type='periodic')
assert_allclose(b(self.xx), self.yy, atol=1e-14, rtol=1e-14)
# in the periodic case the first k-1 derivatives are expected
# to be equal at the boundaries
for i in range(1, 5):
assert_allclose(b(self.xx[0], nu=i), b(self.xx[-1], nu=i), atol=1e-11)
# tests for axis=-1
b = make_interp_spline(self.xx, self.yy, k=5, bc_type='periodic', axis=-1)
assert_allclose(b(self.xx), self.yy, atol=1e-14, rtol=1e-14)
for i in range(1, 5):
assert_allclose(b(self.xx[0], nu=i), b(self.xx[-1], nu=i), atol=1e-11)
@pytest.mark.parametrize('k', [2, 3, 4, 5, 6, 7])
def test_periodic_random(self, k):
# tests for both cases (k > n and k <= n)
n = 5
np.random.seed(1234)
x = np.sort(np.random.random_sample(n) * 10)
y = np.random.random_sample(n) * 100
y[0] = y[-1]
b = make_interp_spline(x, y, k=k, bc_type='periodic')
assert_allclose(b(x), y, atol=1e-14)
def test_periodic_axis(self):
n = self.xx.shape[0]
np.random.seed(1234)
x = np.random.random_sample(n) * 2 * np.pi
x = np.sort(x)
x[0] = 0.
x[-1] = 2 * np.pi
y = np.zeros((2, n))
y[0] = np.sin(x)
y[1] = np.cos(x)
b = make_interp_spline(x, y, k=5, bc_type='periodic', axis=1)
for i in range(n):
assert_allclose(b(x[i]), y[:, i], atol=1e-14)
assert_allclose(b(x[0]), b(x[-1]), atol=1e-14)
def test_periodic_points_exception(self):
# first and last points should match when periodic case expected
np.random.seed(1234)
k = 5
n = 8
x = np.sort(np.random.random_sample(n))
y = np.random.random_sample(n)
y[0] = y[-1] - 1 # to be sure that they are not equal
with assert_raises(ValueError):
make_interp_spline(x, y, k=k, bc_type='periodic')
def test_periodic_knots_exception(self):
# `periodic` case does not work with passed vector of knots
np.random.seed(1234)
k = 3
n = 7
x = np.sort(np.random.random_sample(n))
y = np.random.random_sample(n)
t = np.zeros(n + 2 * k)
with assert_raises(ValueError):
make_interp_spline(x, y, k, t, 'periodic')
@pytest.mark.parametrize('k', [2, 3, 4, 5])
def test_periodic_splev(self, k):
# compare values of the periodic b-spline with splev
b = make_interp_spline(self.xx, self.yy, k=k, bc_type='periodic')
tck = splrep(self.xx, self.yy, per=True, k=k)
spl = splev(self.xx, tck)
assert_allclose(spl, b(self.xx), atol=1e-14)
# comparison derivatives of periodic b-spline with splev
for i in range(1, k):
spl = splev(self.xx, tck, der=i)
assert_allclose(spl, b(self.xx, nu=i), atol=1e-10)
def test_periodic_cubic(self):
# comparison values of cubic periodic b-spline with CubicSpline
b = make_interp_spline(self.xx, self.yy, k=3, bc_type='periodic')
cub = CubicSpline(self.xx, self.yy, bc_type='periodic')
assert_allclose(b(self.xx), cub(self.xx), atol=1e-14)
# edge case: Cubic interpolation on 3 points
n = 3
x = np.sort(np.random.random_sample(n) * 10)
y = np.random.random_sample(n) * 100
y[0] = y[-1]
b = make_interp_spline(x, y, k=3, bc_type='periodic')
cub = CubicSpline(x, y, bc_type='periodic')
assert_allclose(b(x), cub(x), atol=1e-14)
def test_periodic_full_matrix(self):
# comparison values of cubic periodic b-spline with
# solution of the system with full matrix
k = 3
b = make_interp_spline(self.xx, self.yy, k=k, bc_type='periodic')
t = _periodic_knots(self.xx, k)
c = _make_interp_per_full_matr(self.xx, self.yy, t, k)
b1 = np.vectorize(lambda x: _naive_eval(x, t, c, k))
assert_allclose(b(self.xx), b1(self.xx), atol=1e-14)
def test_quadratic_deriv(self):
der = [(1, 8.)] # order, value: f'(x) = 8.
# derivative at right-hand edge
b = make_interp_spline(self.xx, self.yy, k=2, bc_type=(None, der))
assert_allclose(b(self.xx), self.yy, atol=1e-14, rtol=1e-14)
assert_allclose(b(self.xx[-1], 1), der[0][1], atol=1e-14, rtol=1e-14)
# derivative at left-hand edge
b = make_interp_spline(self.xx, self.yy, k=2, bc_type=(der, None))
assert_allclose(b(self.xx), self.yy, atol=1e-14, rtol=1e-14)
assert_allclose(b(self.xx[0], 1), der[0][1], atol=1e-14, rtol=1e-14)
def test_cubic_deriv(self):
k = 3
# first derivatives at left & right edges:
der_l, der_r = [(1, 3.)], [(1, 4.)]
b = make_interp_spline(self.xx, self.yy, k, bc_type=(der_l, der_r))
assert_allclose(b(self.xx), self.yy, atol=1e-14, rtol=1e-14)
assert_allclose([b(self.xx[0], 1), b(self.xx[-1], 1)],
[der_l[0][1], der_r[0][1]], atol=1e-14, rtol=1e-14)
# 'natural' cubic spline, zero out 2nd derivatives at the boundaries
der_l, der_r = [(2, 0)], [(2, 0)]
b = make_interp_spline(self.xx, self.yy, k, bc_type=(der_l, der_r))
assert_allclose(b(self.xx), self.yy, atol=1e-14, rtol=1e-14)
def test_quintic_derivs(self):
k, n = 5, 7
x = np.arange(n).astype(np.float64)
y = np.sin(x)
der_l = [(1, -12.), (2, 1)]
der_r = [(1, 8.), (2, 3.)]
b = make_interp_spline(x, y, k=k, bc_type=(der_l, der_r))
assert_allclose(b(x), y, atol=1e-14, rtol=1e-14)
assert_allclose([b(x[0], 1), b(x[0], 2)],
[val for (nu, val) in der_l])
assert_allclose([b(x[-1], 1), b(x[-1], 2)],
[val for (nu, val) in der_r])
@pytest.mark.xfail(reason='unstable')
def test_cubic_deriv_unstable(self):
# 1st and 2nd derivative at x[0], no derivative information at x[-1]
# The problem is not that it fails [who would use this anyway],
# the problem is that it fails *silently*, and I've no idea
# how to detect this sort of instability.
# In this particular case: it's OK for len(t) < 20, goes haywire
# at larger `len(t)`.
k = 3
t = _augknt(self.xx, k)
der_l = [(1, 3.), (2, 4.)]
b = make_interp_spline(self.xx, self.yy, k, t, bc_type=(der_l, None))
assert_allclose(b(self.xx), self.yy, atol=1e-14, rtol=1e-14)
def test_knots_not_data_sites(self):
# Knots need not coincide with the data sites.
# use a quadratic spline, knots are at data averages,
# two additional constraints are zero 2nd derivatives at edges
k = 2
t = np.r_[(self.xx[0],)*(k+1),
(self.xx[1:] + self.xx[:-1]) / 2.,
(self.xx[-1],)*(k+1)]
b = make_interp_spline(self.xx, self.yy, k, t,
bc_type=([(2, 0)], [(2, 0)]))
assert_allclose(b(self.xx), self.yy, atol=1e-14, rtol=1e-14)
assert_allclose([b(self.xx[0], 2), b(self.xx[-1], 2)], [0., 0.],
atol=1e-14)
def test_minimum_points_and_deriv(self):
# interpolation of f(x) = x**3 between 0 and 1. f'(x) = 3 * x**2 and
# f'(0) = 0, f'(1) = 3.
k = 3
x = [0., 1.]
y = [0., 1.]
b = make_interp_spline(x, y, k, bc_type=([(1, 0.)], [(1, 3.)]))
xx = np.linspace(0., 1.)
yy = xx**3
assert_allclose(b(xx), yy, atol=1e-14, rtol=1e-14)
def test_deriv_spec(self):
# If one of the derivatives is omitted, the spline definition is
# incomplete.
x = y = [1.0, 2, 3, 4, 5, 6]
with assert_raises(ValueError):
make_interp_spline(x, y, bc_type=([(1, 0.)], None))
with assert_raises(ValueError):
make_interp_spline(x, y, bc_type=(1, 0.))
with assert_raises(ValueError):
make_interp_spline(x, y, bc_type=[(1, 0.)])
with assert_raises(ValueError):
make_interp_spline(x, y, bc_type=42)
# CubicSpline expects `bc_type=(left_pair, right_pair)`, while
# here we expect `bc_type=(iterable, iterable)`.
l, r = (1, 0.0), (1, 0.0)
with assert_raises(ValueError):
make_interp_spline(x, y, bc_type=(l, r))
def test_complex(self):
k = 3
xx = self.xx
yy = self.yy + 1.j*self.yy
# first derivatives at left & right edges:
der_l, der_r = [(1, 3.j)], [(1, 4.+2.j)]
b = make_interp_spline(xx, yy, k, bc_type=(der_l, der_r))
assert_allclose(b(xx), yy, atol=1e-14, rtol=1e-14)
assert_allclose([b(xx[0], 1), b(xx[-1], 1)],
[der_l[0][1], der_r[0][1]], atol=1e-14, rtol=1e-14)
# also test zero and first order
for k in (0, 1):
b = make_interp_spline(xx, yy, k=k)
assert_allclose(b(xx), yy, atol=1e-14, rtol=1e-14)
def test_int_xy(self):
x = np.arange(10).astype(np.int_)
y = np.arange(10).astype(np.int_)
# Cython chokes on "buffer type mismatch" (construction) or
# "no matching signature found" (evaluation)
for k in (0, 1, 2, 3):
b = make_interp_spline(x, y, k=k)
b(x)
def test_sliced_input(self):
# Cython code chokes on non C contiguous arrays
xx = np.linspace(-1, 1, 100)
x = xx[::5]
y = xx[::5]
for k in (0, 1, 2, 3):
make_interp_spline(x, y, k=k)
def test_check_finite(self):
# check_finite defaults to True; nans and such trigger a ValueError
x = np.arange(10).astype(float)
y = x**2
for z in [np.nan, np.inf, -np.inf]:
y[-1] = z
assert_raises(ValueError, make_interp_spline, x, y)
@pytest.mark.parametrize('k', [1, 2, 3, 5])
def test_list_input(self, k):
# regression test for gh-8714: TypeError for x, y being lists and k=2
x = list(range(10))
y = [a**2 for a in x]
make_interp_spline(x, y, k=k)
def test_multiple_rhs(self):
yy = np.c_[np.sin(self.xx), np.cos(self.xx)]
der_l = [(1, [1., 2.])]
der_r = [(1, [3., 4.])]
b = make_interp_spline(self.xx, yy, k=3, bc_type=(der_l, der_r))
assert_allclose(b(self.xx), yy, atol=1e-14, rtol=1e-14)
assert_allclose(b(self.xx[0], 1), der_l[0][1], atol=1e-14, rtol=1e-14)
assert_allclose(b(self.xx[-1], 1), der_r[0][1], atol=1e-14, rtol=1e-14)
def test_shapes(self):
np.random.seed(1234)
k, n = 3, 22
x = np.sort(np.random.random(size=n))
y = np.random.random(size=(n, 5, 6, 7))
b = make_interp_spline(x, y, k)
assert_equal(b.c.shape, (n, 5, 6, 7))
# now throw in some derivatives
d_l = [(1, np.random.random((5, 6, 7)))]
d_r = [(1, np.random.random((5, 6, 7)))]
b = make_interp_spline(x, y, k, bc_type=(d_l, d_r))
assert_equal(b.c.shape, (n + k - 1, 5, 6, 7))
def test_string_aliases(self):
yy = np.sin(self.xx)
# a single string is duplicated
b1 = make_interp_spline(self.xx, yy, k=3, bc_type='natural')
b2 = make_interp_spline(self.xx, yy, k=3, bc_type=([(2, 0)], [(2, 0)]))
assert_allclose(b1.c, b2.c, atol=1e-15)
# two strings are handled
b1 = make_interp_spline(self.xx, yy, k=3,
bc_type=('natural', 'clamped'))
b2 = make_interp_spline(self.xx, yy, k=3,
bc_type=([(2, 0)], [(1, 0)]))
assert_allclose(b1.c, b2.c, atol=1e-15)
# one-sided BCs are OK
b1 = make_interp_spline(self.xx, yy, k=2, bc_type=(None, 'clamped'))
b2 = make_interp_spline(self.xx, yy, k=2, bc_type=(None, [(1, 0.0)]))
assert_allclose(b1.c, b2.c, atol=1e-15)
# 'not-a-knot' is equivalent to None
b1 = make_interp_spline(self.xx, yy, k=3, bc_type='not-a-knot')
b2 = make_interp_spline(self.xx, yy, k=3, bc_type=None)
assert_allclose(b1.c, b2.c, atol=1e-15)
# unknown strings do not pass
with assert_raises(ValueError):
make_interp_spline(self.xx, yy, k=3, bc_type='typo')
# string aliases are handled for 2D values
yy = np.c_[np.sin(self.xx), np.cos(self.xx)]
der_l = [(1, [0., 0.])]
der_r = [(2, [0., 0.])]
b2 = make_interp_spline(self.xx, yy, k=3, bc_type=(der_l, der_r))
b1 = make_interp_spline(self.xx, yy, k=3,
bc_type=('clamped', 'natural'))
assert_allclose(b1.c, b2.c, atol=1e-15)
# ... and for N-D values:
np.random.seed(1234)
k, n = 3, 22
x = np.sort(np.random.random(size=n))
y = np.random.random(size=(n, 5, 6, 7))
# now throw in some derivatives
d_l = [(1, np.zeros((5, 6, 7)))]
d_r = [(1, np.zeros((5, 6, 7)))]
b1 = make_interp_spline(x, y, k, bc_type=(d_l, d_r))
b2 = make_interp_spline(x, y, k, bc_type='clamped')
assert_allclose(b1.c, b2.c, atol=1e-15)
def test_full_matrix(self):
np.random.seed(1234)
k, n = 3, 7
x = np.sort(np.random.random(size=n))
y = np.random.random(size=n)
t = _not_a_knot(x, k)
b = make_interp_spline(x, y, k, t)
cf = make_interp_full_matr(x, y, t, k)
assert_allclose(b.c, cf, atol=1e-14, rtol=1e-14)
def test_woodbury(self):
'''
Check the Woodbury algorithm implementation on a banded matrix with
random elements on the diagonals plus dense blocks in the lower-left
and upper-right corners.
'''
np.random.seed(1234)
n = 201
for k in range(3, 32, 2):
offset = int((k - 1) / 2)
a = np.diagflat(np.random.random((1, n)))
for i in range(1, offset + 1):
a[:-i, i:] += np.diagflat(np.random.random((1, n - i)))
a[i:, :-i] += np.diagflat(np.random.random((1, n - i)))
ur = np.random.random((offset, offset))
a[:offset, -offset:] = ur
ll = np.random.random((offset, offset))
a[-offset:, :offset] = ll
d = np.zeros((k, n))
for i, j in enumerate(range(offset, -offset - 1, -1)):
if j < 0:
d[i, :j] = np.diagonal(a, offset=j)
else:
d[i, j:] = np.diagonal(a, offset=j)
b = np.random.random(n)
assert_allclose(_woodbury_algorithm(d, ur, ll, b, k),
np.linalg.solve(a, b), atol=1e-14)
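The almost-banded system above is solved internally via the Woodbury matrix identity. A minimal standalone sketch of the identity itself, with a diagonal core matrix and a rank-2 correction (all sizes and values here are illustrative assumptions, not the scipy internals):

```python
import numpy as np

# Woodbury identity demo: solve (A + U @ V) x = b by inverting only the
# easy core matrix A and a small r x r "capacitance" matrix.
rng = np.random.default_rng(0)
n, r = 8, 2
A = np.diag(rng.uniform(1., 2., size=n))   # easy-to-invert core matrix
U = rng.normal(size=(n, r))
V = rng.normal(size=(r, n))
b = rng.normal(size=n)

Ainv = np.diag(1. / np.diag(A))            # A is diagonal: invert cheaply
# x = A^{-1} b - A^{-1} U (I + V A^{-1} U)^{-1} V A^{-1} b
small = np.eye(r) + V @ Ainv @ U           # r x r capacitance matrix
x = Ainv @ b - Ainv @ U @ np.linalg.solve(small, V @ (Ainv @ b))

x_direct = np.linalg.solve(A + U @ V, b)   # reference: dense direct solve
assert np.allclose(x, x_direct, atol=1e-10)
```

The payoff in the spline setting is that A is banded and r is tiny (the corner blocks), so the cost stays linear in n instead of cubic.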
def make_interp_full_matr(x, y, t, k):
"""Assemble an spline order k with knots t to interpolate
y(x) using full matrices.
Not-a-knot BC only.
This routine is here for testing only (even though it's functional).
"""
assert x.size == y.size
assert t.size == x.size + k + 1
n = x.size
A = np.zeros((n, n), dtype=np.float64)
for j in range(n):
xval = x[j]
if xval == t[k]:
left = k
else:
left = np.searchsorted(t, xval) - 1
# fill a row
bb = _bspl.evaluate_all_bspl(t, k, xval, left)
A[j, left-k:left+1] = bb
c = sl.solve(A, y)
return c
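The helper above fills the collocation matrix row by row through the private `_bspl` module. The public `BSpline.design_matrix` classmethod (available since SciPy 1.8) produces the same matrix in sparse form; this sketch solves the not-a-knot interpolation system with it and cross-checks against `make_interp_spline` (the data here are arbitrary):

```python
import numpy as np
from scipy.interpolate import BSpline, make_interp_spline

rng = np.random.default_rng(1234)
n, k = 8, 3
x = np.sort(rng.uniform(size=n))
y = rng.uniform(size=n)

# not-a-knot knot vector for cubic splines: drop the second data site
# from each end, clamp the boundary knots
t = np.r_[(x[0],)*(k+1), x[2:-2], (x[-1],)*(k+1)]
A = BSpline.design_matrix(x, t, k).toarray()   # (n, n) collocation matrix
c = np.linalg.solve(A, y)                      # dense solve, for testing

b = make_interp_spline(x, y, k)                # same system, solved banded
assert np.allclose(c, b.c, atol=1e-8)
```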
def make_lsq_full_matrix(x, y, t, k=3):
"""Make the least-square spline, full matrices."""
x, y, t = map(np.asarray, (x, y, t))
m = x.size
n = t.size - k - 1
A = np.zeros((m, n), dtype=np.float64)
for j in range(m):
xval = x[j]
# find interval
if xval == t[k]:
left = k
else:
left = np.searchsorted(t, xval) - 1
# fill a row
bb = _bspl.evaluate_all_bspl(t, k, xval, left)
A[j, left-k:left+1] = bb
# have observation matrix, can solve the LSQ problem
B = np.dot(A.T, A)
Y = np.dot(A.T, y)
c = sl.solve(B, Y)
return c, (A, Y)
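The routine above solves the least-squares problem via the normal equations, `A.T @ A @ c = A.T @ y`. The same idea in a minimal standalone form, with a polynomial design matrix standing in for the b-spline observation matrix (basis choice and data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1234)
x = np.sort(rng.uniform(size=50))
y = np.cos(3 * x) + 0.01 * rng.normal(size=x.size)

A = np.vander(x, 4)                               # cubic polynomial basis
c_normal = np.linalg.solve(A.T @ A, A.T @ y)      # normal equations
c_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)   # QR/SVD-based reference
assert np.allclose(c_normal, c_lstsq, atol=1e-8)
```

Note the normal equations square the condition number of `A`, which is why `lstsq` is preferred in production code; for well-conditioned test matrices like this one the two agree closely.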
class TestLSQ:
#
# Test make_lsq_spline
#
np.random.seed(1234)
n, k = 13, 3
x = np.sort(np.random.random(n))
y = np.random.random(n)
t = _augknt(np.linspace(x[0], x[-1], 7), k)
def test_lstsq(self):
# check LSQ construction vs a full matrix version
x, y, t, k = self.x, self.y, self.t, self.k
c0, AY = make_lsq_full_matrix(x, y, t, k)
b = make_lsq_spline(x, y, t, k)
assert_allclose(b.c, c0)
assert_equal(b.c.shape, (t.size - k - 1,))
# also check against numpy.lstsq
aa, yy = AY
c1, _, _, _ = np.linalg.lstsq(aa, y, rcond=-1)
assert_allclose(b.c, c1)
def test_weights(self):
# weights = 1 is same as None
x, y, t, k = self.x, self.y, self.t, self.k
w = np.ones_like(x)
b = make_lsq_spline(x, y, t, k)
b_w = make_lsq_spline(x, y, t, k, w=w)
assert_allclose(b.t, b_w.t, atol=1e-14)
assert_allclose(b.c, b_w.c, atol=1e-14)
assert_equal(b.k, b_w.k)
def test_multiple_rhs(self):
x, t, k, n = self.x, self.t, self.k, self.n
y = np.random.random(size=(n, 5, 6, 7))
b = make_lsq_spline(x, y, t, k)
assert_equal(b.c.shape, (t.size-k-1, 5, 6, 7))
def test_complex(self):
# complex-valued `y`
x, t, k = self.x, self.t, self.k
yc = self.y * (1. + 2.j)
b = make_lsq_spline(x, yc, t, k)
b_re = make_lsq_spline(x, yc.real, t, k)
b_im = make_lsq_spline(x, yc.imag, t, k)
assert_allclose(b(x), b_re(x) + 1.j*b_im(x), atol=1e-15, rtol=1e-15)
def test_int_xy(self):
x = np.arange(10).astype(np.int_)
y = np.arange(10).astype(np.int_)
t = _augknt(x, k=1)
# Cython chokes on "buffer type mismatch"
make_lsq_spline(x, y, t, k=1)
def test_sliced_input(self):
# Cython code chokes on non C contiguous arrays
xx = np.linspace(-1, 1, 100)
x = xx[::3]
y = xx[::3]
t = _augknt(x, 1)
make_lsq_spline(x, y, t, k=1)
def test_checkfinite(self):
# check_finite defaults to True; nans and such trigger a ValueError
x = np.arange(12).astype(float)
y = x**2
t = _augknt(x, 3)
for z in [np.nan, np.inf, -np.inf]:
y[-1] = z
assert_raises(ValueError, make_lsq_spline, x, y, t)
def test_read_only(self):
# Check that make_lsq_spline works with read only arrays
x, y, t = self.x, self.y, self.t
x.setflags(write=False)
y.setflags(write=False)
t.setflags(write=False)
make_lsq_spline(x=x, y=y, t=t)
def data_file(basename):
return os.path.join(os.path.abspath(os.path.dirname(__file__)),
'data', basename)
class TestSmoothingSpline:
#
# test make_smoothing_spline
#
def test_invalid_input(self):
np.random.seed(1234)
n = 100
x = np.sort(np.random.random_sample(n) * 4 - 2)
y = x**2 * np.sin(4 * x) + x**3 + np.random.normal(0., 1.5, n)
# ``x`` and ``y`` should have same shapes (1-D array)
with assert_raises(ValueError):
make_smoothing_spline(x, y[1:])
with assert_raises(ValueError):
make_smoothing_spline(x[1:], y)
with assert_raises(ValueError):
make_smoothing_spline(x.reshape(1, n), y)
# ``x`` should be an ascending array
with assert_raises(ValueError):
make_smoothing_spline(x[::-1], y)
x_dupl = np.copy(x)
x_dupl[0] = x_dupl[1]
with assert_raises(ValueError):
make_smoothing_spline(x_dupl, y)
# x and y length must be larger than 5
x = np.arange(4)
y = np.ones(4)
exception_message = "``x`` and ``y`` length must be larger than 5"
with pytest.raises(ValueError, match=exception_message):
make_smoothing_spline(x, y)
def test_compare_with_GCVSPL(self):
"""
Data is generated in the following way:
>>> np.random.seed(1234)
>>> n = 100
>>> x = np.sort(np.random.random_sample(n) * 4 - 2)
>>> y = np.sin(x) + np.random.normal(scale=.5, size=n)
>>> np.savetxt('x.csv', x)
>>> np.savetxt('y.csv', y)
We obtain the reference result by running the GCV smoothing spline
package (by Woltring, gcvspl) on the sample data points, using its
Octave port (https://github.com/srkuberski/gcvspl).
In order to use this implementation, one should clone the repository
and open the folder in Octave.
In Octave, we load up ``x`` and ``y`` (generated from Python code
above):
>>> x = csvread('x.csv');
>>> y = csvread('y.csv');
Then, in order to access the implementation, we compile gcvspl files in
Octave:
>>> mex gcvsplmex.c gcvspl.c
>>> mex spldermex.c gcvspl.c
The first function computes the vector of unknowns from the dataset
(x, y) while the second one evaluates the spline in certain points
with known vector of coefficients.
>>> c = gcvsplmex( x, y, 2 );
>>> y0 = spldermex( x, c, 2, x, 0 );
If we want to compare the results of the gcvspl code, we can save
``y0`` in csv file:
>>> csvwrite('y0.csv', y0);
"""
# load the data sample
data = np.load(data_file('gcvspl.npz'))
# data points
x = data['x']
y = data['y']
y_GCVSPL = data['y_GCVSPL']
y_compr = make_smoothing_spline(x, y)(x)
# such tolerance is explained by the fact that the spline is built
# using an iterative algorithm for minimizing the GCV criteria. These
# algorithms may vary, so the tolerance should be rather low.
assert_allclose(y_compr, y_GCVSPL, atol=1e-4, rtol=1e-4)
def test_non_regularized_case(self):
"""
In case the regularization parameter is 0, the resulting spline
is an interpolation spline with natural boundary conditions.
"""
# create data sample
np.random.seed(1234)
n = 100
x = np.sort(np.random.random_sample(n) * 4 - 2)
y = x**2 * np.sin(4 * x) + x**3 + np.random.normal(0., 1.5, n)
spline_GCV = make_smoothing_spline(x, y, lam=0.)
spline_interp = make_interp_spline(x, y, 3, bc_type='natural')
grid = np.linspace(x[0], x[-1], 2 * n)
assert_allclose(spline_GCV(grid),
spline_interp(grid),
atol=1e-15)
def test_weighted_smoothing_spline(self):
# create data sample
np.random.seed(1234)
n = 100
x = np.sort(np.random.random_sample(n) * 4 - 2)
y = x**2 * np.sin(4 * x) + x**3 + np.random.normal(0., 1.5, n)
spl = make_smoothing_spline(x, y)
# in order not to iterate over all of the indices, we select 10 of
# them randomly
for ind in np.random.choice(range(100), size=10):
w = np.ones(n)
w[ind] = 30.
spl_w = make_smoothing_spline(x, y, w)
# check that spline with weight in a certain point is closer to the
# original point than the one without weights
orig = abs(spl(x[ind]) - y[ind])
weighted = abs(spl_w(x[ind]) - y[ind])
if orig < weighted:
raise ValueError(f'Spline with weights should be closer to the'
f' points than the original one: {orig:.4} < '
f'{weighted:.4}')
################################
# NdBSpline tests
def bspline2(xy, t, c, k):
"""A naive 2D tensort product spline evaluation."""
x, y = xy
tx, ty = t
nx = len(tx) - k - 1
assert (nx >= k+1)
ny = len(ty) - k - 1
assert (ny >= k+1)
return sum(c[ix, iy] * B(x, k, ix, tx) * B(y, k, iy, ty)
for ix in range(nx) for iy in range(ny))
def B(x, k, i, t):
if k == 0:
return 1.0 if t[i] <= x < t[i+1] else 0.0
if t[i+k] == t[i]:
c1 = 0.0
else:
c1 = (x - t[i])/(t[i+k] - t[i]) * B(x, k-1, i, t)
if t[i+k+1] == t[i+1]:
c2 = 0.0
else:
c2 = (t[i+k+1] - x)/(t[i+k+1] - t[i+1]) * B(x, k-1, i+1, t)
return c1 + c2
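The `B` helper above is the Cox-de Boor recursion. One of its basic consequences is the partition of unity: on a clamped knot vector, the non-zero b-splines at any point of the base interval sum to one. A self-contained check (the recursion is repeated here so the sketch runs on its own):

```python
import numpy as np

def cox_de_boor(x, k, i, t):
    # same recursion as B() above, restated for self-containment
    if k == 0:
        return 1.0 if t[i] <= x < t[i+1] else 0.0
    c1 = 0.0 if t[i+k] == t[i] else \
        (x - t[i]) / (t[i+k] - t[i]) * cox_de_boor(x, k-1, i, t)
    c2 = 0.0 if t[i+k+1] == t[i+1] else \
        (t[i+k+1] - x) / (t[i+k+1] - t[i+1]) * cox_de_boor(x, k-1, i+1, t)
    return c1 + c2

k = 3
t = np.r_[[0.]*(k+1), 0.25, 0.5, 0.75, [1.]*(k+1)]  # clamped cubic knots
n = len(t) - k - 1
for x in [0.1, 0.4, 0.77]:       # interior points (x = 1 is excluded by
    total = sum(cox_de_boor(x, k, i, t) for i in range(n))  # the half-open
    assert abs(total - 1.0) < 1e-12                         # base case)
```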
def bspline(x, t, c, k):
n = len(t) - k - 1
assert (n >= k+1) and (len(c) >= n)
return sum(c[i] * B(x, k, i, t) for i in range(n))
class NdBSpline0:
def __init__(self, t, c, k=3):
"""Tensor product spline object.
c[i1, i2, ..., id] * B(x1, i1) * B(x2, i2) * ... * B(xd, id)
Parameters
----------
t : tuple of 1D ndarrays
knot vectors in directions 1, 2, ... d
``len(t[i]) == n[i] + k + 1``
c : ndarray, shape (n1, n2, ..., nd, ...)
b-spline coefficients
k : int or length-d tuple of integers
spline degrees.
"""
ndim = len(t)
assert ndim <= len(c.shape)
try:
len(k)
except TypeError:
# make k a tuple
k = (k,)*ndim
self.k = tuple(operator.index(ki) for ki in k)
self.t = tuple(np.asarray(ti, dtype=float) for ti in t)
self.c = c
def __call__(self, x):
ndim = len(self.t)
# a single evaluation point: `x` is a 1D array_like, shape (ndim,)
assert len(x) == ndim
# get the indices in an ndim-dimensional vector
i = [None] * ndim
for d in range(ndim):
td, xd = self.t[d], x[d]
k = self.k[d]
# find the index for x[d]
if xd == td[k]:
i[d] = k
else:
i[d] = np.searchsorted(td, xd) - 1
assert td[i[d]] <= xd <= td[i[d]+1]
assert i[d] >= k and i[d] < len(td) - k
i = tuple(i)
# iterate over the dimensions, form linear combinations of
# products B(x_1) * B(x_2) * ... B(x_N) of (k+1)**N b-splines
# which are non-zero at `i = (i_1, i_2, ..., i_N)`.
result = 0
iters = [range(i[d] - self.k[d], i[d] + 1) for d in range(ndim)]
for idx in itertools.product(*iters):
term = self.c[idx] * np.prod([B(x[d], self.k[d], idx[d], self.t[d])
for d in range(ndim)])
result += term
return result
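For a separable surface f(x, y) = f1(x) * f2(y), the tensor-product coefficients are the outer product of the two 1D coefficient vectors, and evaluation reduces to two 1D steps: contract the coefficient array with the y basis first, then evaluate the resulting 1D spline in x. A small cross-check of this scheme using only public scipy classes (the sample functions are arbitrary):

```python
import numpy as np
from scipy.interpolate import BSpline, make_interp_spline

x = np.arange(6.)
spl_x = make_interp_spline(x, x**3, k=3)
spl_y = make_interp_spline(x, x**3 + 2*x, k=3)
c2 = np.outer(spl_x.c, spl_y.c)        # tensor-product coefficients

x0, y0 = 1.5, 2.5
# values of all y basis splines at y0, via an identity coefficient matrix
by = BSpline(spl_y.t, np.eye(len(spl_y.c)), spl_y.k)(y0)
# contract along y, then evaluate the remaining 1D spline in x
val = BSpline(spl_x.t, c2 @ by, spl_x.k)(x0)
assert np.allclose(val, spl_x(x0) * spl_y(y0), atol=1e-12)
```

This is the same sum-of-products that `NdBSpline0.__call__` forms term by term with `itertools.product`, just factored dimension by dimension.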
class TestNdBSpline:
def test_1D(self):
# test ndim=1 agrees with BSpline
rng = np.random.default_rng(12345)
n, k = 11, 3
n_tr = 7
t = np.sort(rng.uniform(size=n + k + 1))
c = rng.uniform(size=(n, n_tr))
b = BSpline(t, c, k)
nb = NdBSpline((t,), c, k)
xi = rng.uniform(size=21)
# NdBSpline expects xi.shape=(npts, ndim)
assert_allclose(nb(xi[:, None]),
b(xi), atol=1e-14)
assert nb(xi[:, None]).shape == (xi.shape[0], c.shape[1])
def make_2d_case(self):
# make a 2D separable spline
x = np.arange(6)
y = x**3
spl = make_interp_spline(x, y, k=3)
y_1 = x**3 + 2*x
spl_1 = make_interp_spline(x, y_1, k=3)
t2 = (spl.t, spl_1.t)
c2 = spl.c[:, None] * spl_1.c[None, :]
return t2, c2, 3
def make_2d_mixed(self):
# make a 2D separable spline w/ kx=3, ky=2
x = np.arange(6)
y = x**3
spl = make_interp_spline(x, y, k=3)
x = np.arange(5) + 1.5
y_1 = x**2 + 2*x
spl_1 = make_interp_spline(x, y_1, k=2)
t2 = (spl.t, spl_1.t)
c2 = spl.c[:, None] * spl_1.c[None, :]
return t2, c2, spl.k, spl_1.k
def test_2D_separable(self):
xi = [(1.5, 2.5), (2.5, 1), (0.5, 1.5)]
t2, c2, k = self.make_2d_case()
target = [x**3 * (y**3 + 2*y) for (x, y) in xi]
# sanity check: bspline2 gives the product as constructed
assert_allclose([bspline2(xy, t2, c2, k) for xy in xi],
target,
atol=1e-14)
# check evaluation on a 2D array: the 1D array of 2D points
bspl2 = NdBSpline(t2, c2, k=3)
assert bspl2(xi).shape == (len(xi), )
assert_allclose(bspl2(xi),
target, atol=1e-14)
# now check on a multidim xi
rng = np.random.default_rng(12345)
xi = rng.uniform(size=(4, 3, 2)) * 5
result = bspl2(xi)
assert result.shape == (4, 3)
# also check the values
x, y = xi.reshape((-1, 2)).T
assert_allclose(result.ravel(),
x**3 * (y**3 + 2*y), atol=1e-14)
def test_2D_separable_2(self):
# test `c` with trailing dimensions, i.e. c.ndim > ndim
ndim = 2
xi = [(1.5, 2.5), (2.5, 1), (0.5, 1.5)]
target = [x**3 * (y**3 + 2*y) for (x, y) in xi]
t2, c2, k = self.make_2d_case()
c2_4 = np.dstack((c2, c2, c2, c2)) # c22.shape = (6, 6, 4)
xy = (1.5, 2.5)
bspl2_4 = NdBSpline(t2, c2_4, k=3)
result = bspl2_4(xy)
val_single = NdBSpline(t2, c2, k)(xy)
assert result.shape == (4,)
assert_allclose(result,
[val_single, ]*4, atol=1e-14)
# now try the array xi: the output shape is (3, 4), where 3
# is the number of points in xi and 4 is the trailing dimension of c
assert bspl2_4(xi).shape == np.shape(xi)[:-1] + bspl2_4.c.shape[ndim:]
assert_allclose(bspl2_4(xi) - np.asarray(target)[:, None],
0, atol=5e-14)
# two trailing dimensions
c2_22 = c2_4.reshape((6, 6, 2, 2))
bspl2_22 = NdBSpline(t2, c2_22, k=3)
result = bspl2_22(xy)
assert result.shape == (2, 2)
assert_allclose(result,
[[val_single, val_single],
[val_single, val_single]], atol=1e-14)
# now try the array xi : the output shape is (3, 2, 2)
# for 3 points in xi and c trailing dimensions being (2, 2)
assert (bspl2_22(xi).shape ==
np.shape(xi)[:-1] + bspl2_22.c.shape[ndim:])
assert_allclose(bspl2_22(xi) - np.asarray(target)[:, None, None],
0, atol=5e-14)
def test_2D_random(self):
rng = np.random.default_rng(12345)
k = 3
tx = np.r_[0, 0, 0, 0, np.sort(rng.uniform(size=7)) * 3, 3, 3, 3, 3]
ty = np.r_[0, 0, 0, 0, np.sort(rng.uniform(size=8)) * 4, 4, 4, 4, 4]
c = rng.uniform(size=(tx.size-k-1, ty.size-k-1))
spl = NdBSpline((tx, ty), c, k=k)
xi = (1., 1.)
assert_allclose(spl(xi),
bspline2(xi, (tx, ty), c, k), atol=1e-14)
xi = np.c_[[1, 1.5, 2],
[1.1, 1.6, 2.1]]
assert_allclose(spl(xi),
[bspline2(xy, (tx, ty), c, k) for xy in xi],
atol=1e-14)
def test_2D_mixed(self):
t2, c2, kx, ky = self.make_2d_mixed()
xi = [(1.4, 4.5), (2.5, 2.4), (4.5, 3.5)]
target = [x**3 * (y**2 + 2*y) for (x, y) in xi]
bspl2 = NdBSpline(t2, c2, k=(kx, ky))
assert bspl2(xi).shape == (len(xi), )
assert_allclose(bspl2(xi),
target, atol=1e-14)
def test_2D_derivative(self):
t2, c2, kx, ky = self.make_2d_mixed()
xi = [(1.4, 4.5), (2.5, 2.4), (4.5, 3.5)]
bspl2 = NdBSpline(t2, c2, k=(kx, ky))
der = bspl2(xi, nu=(1, 0))
assert_allclose(der,
[3*x**2 * (y**2 + 2*y) for x, y in xi], atol=1e-14)
der = bspl2(xi, nu=(1, 1))
assert_allclose(der,
[3*x**2 * (2*y + 2) for x, y in xi], atol=1e-14)
der = bspl2(xi, nu=(0, 0))
assert_allclose(der,
[x**3 * (y**2 + 2*y) for x, y in xi], atol=1e-14)
def test_2D_mixed_random(self):
rng = np.random.default_rng(12345)
kx, ky = 2, 3
tx = np.r_[0, 0, 0, 0, np.sort(rng.uniform(size=7)) * 3, 3, 3, 3, 3]
ty = np.r_[0, 0, 0, 0, np.sort(rng.uniform(size=8)) * 4, 4, 4, 4, 4]
c = rng.uniform(size=(tx.size - kx - 1, ty.size - ky - 1))
xi = np.c_[[1, 1.5, 2],
[1.1, 1.6, 2.1]]
bspl2 = NdBSpline((tx, ty), c, k=(kx, ky))
bspl2_0 = NdBSpline0((tx, ty), c, k=(kx, ky))
assert_allclose(bspl2(xi),
[bspl2_0(xp) for xp in xi], atol=1e-14)
def make_3d_case(self):
# make a 3D separable spline
x = np.arange(6)
y = x**3
spl = make_interp_spline(x, y, k=3)
y_1 = x**3 + 2*x
spl_1 = make_interp_spline(x, y_1, k=3)
y_2 = x**3 + 3*x + 1
spl_2 = make_interp_spline(x, y_2, k=3)
t2 = (spl.t, spl_1.t, spl_2.t)
c2 = (spl.c[:, None, None] *
spl_1.c[None, :, None] *
spl_2.c[None, None, :])
return t2, c2, 3
def test_3D_separable(self):
rng = np.random.default_rng(12345)
x, y, z = rng.uniform(size=(3, 11)) * 5
target = x**3 * (y**3 + 2*y) * (z**3 + 3*z + 1)
t3, c3, k = self.make_3d_case()
bspl3 = NdBSpline(t3, c3, k=3)
xi = list(zip(x, y, z))
result = bspl3(xi)
assert result.shape == (11,)
assert_allclose(result, target, atol=1e-14)
def test_3D_derivative(self):
t3, c3, k = self.make_3d_case()
bspl3 = NdBSpline(t3, c3, k=3)
rng = np.random.default_rng(12345)
x, y, z = rng.uniform(size=(3, 11)) * 5
xi = list(zip(x, y, z))
assert_allclose(bspl3(xi, nu=(1, 0, 0)),
3*x**2 * (y**3 + 2*y) * (z**3 + 3*z + 1), atol=1e-14)
assert_allclose(bspl3(xi, nu=(2, 0, 0)),
6*x * (y**3 + 2*y) * (z**3 + 3*z + 1), atol=1e-14)
assert_allclose(bspl3(xi, nu=(2, 1, 0)),
6*x * (3*y**2 + 2) * (z**3 + 3*z + 1), atol=1e-14)
assert_allclose(bspl3(xi, nu=(2, 1, 3)),
6*x * (3*y**2 + 2) * (6), atol=1e-14)
assert_allclose(bspl3(xi, nu=(2, 1, 4)),
np.zeros(len(xi)), atol=1e-14)
def test_3D_random(self):
rng = np.random.default_rng(12345)
k = 3
tx = np.r_[0, 0, 0, 0, np.sort(rng.uniform(size=7)) * 3, 3, 3, 3, 3]
ty = np.r_[0, 0, 0, 0, np.sort(rng.uniform(size=8)) * 4, 4, 4, 4, 4]
tz = np.r_[0, 0, 0, 0, np.sort(rng.uniform(size=8)) * 4, 4, 4, 4, 4]
c = rng.uniform(size=(tx.size-k-1, ty.size-k-1, tz.size-k-1))
spl = NdBSpline((tx, ty, tz), c, k=k)
spl_0 = NdBSpline0((tx, ty, tz), c, k=k)
xi = (1., 1., 1)
assert_allclose(spl(xi), spl_0(xi), atol=1e-14)
xi = np.c_[[1, 1.5, 2],
[1.1, 1.6, 2.1],
[0.9, 1.4, 1.9]]
assert_allclose(spl(xi), [spl_0(xp) for xp in xi], atol=1e-14)
def test_3D_random_complex(self):
rng = np.random.default_rng(12345)
k = 3
tx = np.r_[0, 0, 0, 0, np.sort(rng.uniform(size=7)) * 3, 3, 3, 3, 3]
ty = np.r_[0, 0, 0, 0, np.sort(rng.uniform(size=8)) * 4, 4, 4, 4, 4]
tz = np.r_[0, 0, 0, 0, np.sort(rng.uniform(size=8)) * 4, 4, 4, 4, 4]
c = (rng.uniform(size=(tx.size-k-1, ty.size-k-1, tz.size-k-1)) +
rng.uniform(size=(tx.size-k-1, ty.size-k-1, tz.size-k-1))*1j)
spl = NdBSpline((tx, ty, tz), c, k=k)
spl_re = NdBSpline((tx, ty, tz), c.real, k=k)
spl_im = NdBSpline((tx, ty, tz), c.imag, k=k)
xi = np.c_[[1, 1.5, 2],
[1.1, 1.6, 2.1],
[0.9, 1.4, 1.9]]
assert_allclose(spl(xi),
spl_re(xi) + 1j*spl_im(xi), atol=1e-14)
@pytest.mark.parametrize('cls_extrap', [None, True])
@pytest.mark.parametrize('call_extrap', [None, True])
def test_extrapolate_3D_separable(self, cls_extrap, call_extrap):
# test that extrapolate=True does extrapolate
t3, c3, k = self.make_3d_case()
bspl3 = NdBSpline(t3, c3, k=3, extrapolate=cls_extrap)
# evaluate out of bounds
x, y, z = [-2, -1, 7], [-3, -0.5, 6.5], [-1, -1.5, 7.5]
x, y, z = map(np.asarray, (x, y, z))
xi = list(zip(x, y, z))
target = x**3 * (y**3 + 2*y) * (z**3 + 3*z + 1)
result = bspl3(xi, extrapolate=call_extrap)
assert_allclose(result, target, atol=1e-14)
@pytest.mark.parametrize('extrap', [(False, True), (True, None)])
def test_extrapolate_3D_separable_2(self, extrap):
# test that call(..., extrapolate=None) defers to self.extrapolate,
# otherwise supersedes self.extrapolate
t3, c3, k = self.make_3d_case()
cls_extrap, call_extrap = extrap
bspl3 = NdBSpline(t3, c3, k=3, extrapolate=cls_extrap)
# evaluate out of bounds
x, y, z = [-2, -1, 7], [-3, -0.5, 6.5], [-1, -1.5, 7.5]
x, y, z = map(np.asarray, (x, y, z))
xi = list(zip(x, y, z))
target = x**3 * (y**3 + 2*y) * (z**3 + 3*z + 1)
result = bspl3(xi, extrapolate=call_extrap)
assert_allclose(result, target, atol=1e-14)
def test_extrapolate_false_3D_separable(self):
# test that extrapolate=False produces nans for out-of-bounds values
t3, c3, k = self.make_3d_case()
bspl3 = NdBSpline(t3, c3, k=3)
# evaluate out of bounds and inside
x, y, z = [-2, 1, 7], [-3, 0.5, 6.5], [-1, 1.5, 7.5]
x, y, z = map(np.asarray, (x, y, z))
xi = list(zip(x, y, z))
target = x**3 * (y**3 + 2*y) * (z**3 + 3*z + 1)
result = bspl3(xi, extrapolate=False)
assert np.isnan(result[0])
assert np.isnan(result[-1])
assert_allclose(result[1:-1], target[1:-1], atol=1e-14)
def test_x_nan_3D(self):
# test that spline(nan) is nan
t3, c3, k = self.make_3d_case()
bspl3 = NdBSpline(t3, c3, k=3)
# evaluate out of bounds and inside
x = np.asarray([-2, 3, np.nan, 1, 2, 7, np.nan])
y = np.asarray([-3, 3.5, 1, np.nan, 3, 6.5, 6.5])
z = np.asarray([-1, 3.5, 2, 3, np.nan, 7.5, 7.5])
xi = list(zip(x, y, z))
target = x**3 * (y**3 + 2*y) * (z**3 + 3*z + 1)
mask = np.isnan(x) | np.isnan(y) | np.isnan(z)
target[mask] = np.nan
result = bspl3(xi)
assert np.isnan(result[mask]).all()
assert_allclose(result, target, atol=1e-14)
def test_non_c_contiguous(self):
# check that non C-contiguous inputs are OK
rng = np.random.default_rng(12345)
kx, ky = 3, 3
tx = np.sort(rng.uniform(low=0, high=4, size=16))
tx = np.r_[(tx[0],)*kx, tx, (tx[-1],)*kx]
ty = np.sort(rng.uniform(low=0, high=4, size=16))
ty = np.r_[(ty[0],)*ky, ty, (ty[-1],)*ky]
assert not tx[::2].flags.c_contiguous
assert not ty[::2].flags.c_contiguous
c = rng.uniform(size=(tx.size//2 - kx - 1, ty.size//2 - ky - 1))
c = c.T
assert not c.flags.c_contiguous
xi = np.c_[[1, 1.5, 2],
[1.1, 1.6, 2.1]]
bspl2 = NdBSpline((tx[::2], ty[::2]), c, k=(kx, ky))
bspl2_0 = NdBSpline0((tx[::2], ty[::2]), c, k=(kx, ky))
assert_allclose(bspl2(xi),
[bspl2_0(xp) for xp in xi], atol=1e-14)
def test_readonly(self):
t3, c3, k = self.make_3d_case()
bspl3 = NdBSpline(t3, c3, k=3)
for i in range(3):
t3[i].flags.writeable = False
c3.flags.writeable = False
bspl3_ = NdBSpline(t3, c3, k=3)
assert bspl3((1, 2, 3)) == bspl3_((1, 2, 3))
# scipy-main/scipy/interpolate/tests/__init__.py  (empty file)

# scipy-main/scipy/interpolate/tests/test_ndgriddata.py
import numpy as np
from numpy.testing import assert_equal, assert_array_equal, assert_allclose
import pytest
from pytest import raises as assert_raises
from scipy.interpolate import (griddata, NearestNDInterpolator,
LinearNDInterpolator,
CloughTocher2DInterpolator)
parametrize_interpolators = pytest.mark.parametrize(
"interpolator", [NearestNDInterpolator, LinearNDInterpolator,
CloughTocher2DInterpolator]
)
class TestGriddata:
def test_fill_value(self):
x = [(0,0), (0,1), (1,0)]
y = [1, 2, 3]
yi = griddata(x, y, [(1,1), (1,2), (0,0)], fill_value=-1)
assert_array_equal(yi, [-1., -1, 1])
yi = griddata(x, y, [(1,1), (1,2), (0,0)])
assert_array_equal(yi, [np.nan, np.nan, 1])
def test_alternative_call(self):
x = np.array([(0,0), (-0.5,-0.5), (-0.5,0.5), (0.5, 0.5), (0.25, 0.3)],
dtype=np.double)
y = (np.arange(x.shape[0], dtype=np.double)[:,None]
+ np.array([0,1])[None,:])
for method in ('nearest', 'linear', 'cubic'):
for rescale in (True, False):
msg = repr((method, rescale))
yi = griddata((x[:,0], x[:,1]), y, (x[:,0], x[:,1]), method=method,
rescale=rescale)
assert_allclose(y, yi, atol=1e-14, err_msg=msg)
def test_multivalue_2d(self):
x = np.array([(0,0), (-0.5,-0.5), (-0.5,0.5), (0.5, 0.5), (0.25, 0.3)],
dtype=np.double)
y = (np.arange(x.shape[0], dtype=np.double)[:,None]
+ np.array([0,1])[None,:])
for method in ('nearest', 'linear', 'cubic'):
for rescale in (True, False):
msg = repr((method, rescale))
yi = griddata(x, y, x, method=method, rescale=rescale)
assert_allclose(y, yi, atol=1e-14, err_msg=msg)
def test_multipoint_2d(self):
x = np.array([(0,0), (-0.5,-0.5), (-0.5,0.5), (0.5, 0.5), (0.25, 0.3)],
dtype=np.double)
y = np.arange(x.shape[0], dtype=np.double)
xi = x[:,None,:] + np.array([0,0,0])[None,:,None]
for method in ('nearest', 'linear', 'cubic'):
for rescale in (True, False):
msg = repr((method, rescale))
yi = griddata(x, y, xi, method=method, rescale=rescale)
assert_equal(yi.shape, (5, 3), err_msg=msg)
assert_allclose(yi, np.tile(y[:,None], (1, 3)),
atol=1e-14, err_msg=msg)
def test_complex_2d(self):
x = np.array([(0,0), (-0.5,-0.5), (-0.5,0.5), (0.5, 0.5), (0.25, 0.3)],
dtype=np.double)
y = np.arange(x.shape[0], dtype=np.double)
y = y - 2j*y[::-1]
xi = x[:,None,:] + np.array([0,0,0])[None,:,None]
for method in ('nearest', 'linear', 'cubic'):
for rescale in (True, False):
msg = repr((method, rescale))
yi = griddata(x, y, xi, method=method, rescale=rescale)
assert_equal(yi.shape, (5, 3), err_msg=msg)
assert_allclose(yi, np.tile(y[:,None], (1, 3)),
atol=1e-14, err_msg=msg)
def test_1d(self):
x = np.array([1, 2.5, 3, 4.5, 5, 6])
y = np.array([1, 2, 0, 3.9, 2, 1])
for method in ('nearest', 'linear', 'cubic'):
assert_allclose(griddata(x, y, x, method=method), y,
err_msg=method, atol=1e-14)
assert_allclose(griddata(x.reshape(6, 1), y, x, method=method), y,
err_msg=method, atol=1e-14)
assert_allclose(griddata((x,), y, (x,), method=method), y,
err_msg=method, atol=1e-14)
def test_1d_borders(self):
# Test for nearest neighbor case with xi outside
# the range of the values.
x = np.array([1, 2.5, 3, 4.5, 5, 6])
y = np.array([1, 2, 0, 3.9, 2, 1])
xi = np.array([0.9, 6.5])
yi_should = np.array([1.0, 1.0])
method = 'nearest'
assert_allclose(griddata(x, y, xi,
method=method), yi_should,
err_msg=method,
atol=1e-14)
assert_allclose(griddata(x.reshape(6, 1), y, xi,
method=method), yi_should,
err_msg=method,
atol=1e-14)
assert_allclose(griddata((x, ), y, (xi, ),
method=method), yi_should,
err_msg=method,
atol=1e-14)
def test_1d_unsorted(self):
x = np.array([2.5, 1, 4.5, 5, 6, 3])
y = np.array([1, 2, 0, 3.9, 2, 1])
for method in ('nearest', 'linear', 'cubic'):
assert_allclose(griddata(x, y, x, method=method), y,
err_msg=method, atol=1e-10)
assert_allclose(griddata(x.reshape(6, 1), y, x, method=method), y,
err_msg=method, atol=1e-10)
assert_allclose(griddata((x,), y, (x,), method=method), y,
err_msg=method, atol=1e-10)
def test_square_rescale_manual(self):
points = np.array([(0,0), (0,100), (10,100), (10,0), (1, 5)], dtype=np.double)
points_rescaled = np.array([(0,0), (0,1), (1,1), (1,0), (0.1, 0.05)], dtype=np.double)
values = np.array([1., 2., -3., 5., 9.], dtype=np.double)
xx, yy = np.broadcast_arrays(np.linspace(0, 10, 14)[:,None],
np.linspace(0, 100, 14)[None,:])
xx = xx.ravel()
yy = yy.ravel()
xi = np.array([xx, yy]).T.copy()
for method in ('nearest', 'linear', 'cubic'):
msg = method
zi = griddata(points_rescaled, values, xi/np.array([10, 100.]),
method=method)
zi_rescaled = griddata(points, values, xi, method=method,
rescale=True)
assert_allclose(zi, zi_rescaled, err_msg=msg,
atol=1e-12)
def test_xi_1d(self):
# Check that 1-D xi is interpreted as a coordinate
x = np.array([(0,0), (-0.5,-0.5), (-0.5,0.5), (0.5, 0.5), (0.25, 0.3)],
dtype=np.double)
y = np.arange(x.shape[0], dtype=np.double)
y = y - 2j*y[::-1]
xi = np.array([0.5, 0.5])
for method in ('nearest', 'linear', 'cubic'):
p1 = griddata(x, y, xi, method=method)
p2 = griddata(x, y, xi[None,:], method=method)
assert_allclose(p1, p2, err_msg=method)
xi1 = np.array([0.5])
xi3 = np.array([0.5, 0.5, 0.5])
assert_raises(ValueError, griddata, x, y, xi1,
method=method)
assert_raises(ValueError, griddata, x, y, xi3,
method=method)
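The calling conventions exercised by TestGriddata can be sketched in isolation. This is a hedged toy example with made-up corner data for f(x, y) = x + y, not taken from the test suite; the 'linear' method reproduces a linear function exactly inside the convex hull:

```python
import numpy as np
from scipy.interpolate import griddata

points = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
values = points[:, 0] + points[:, 1]   # f(x, y) = x + y at the corners

# the in-hull point recovers the plane exactly; the out-of-hull point
# falls back to fill_value
zi = griddata(points, values, [(0.5, 0.5), (2.0, 2.0)],
              method='linear', fill_value=-1.0)
```

Without `fill_value`, out-of-hull points evaluate to nan instead, which is what `test_fill_value` above checks.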
class TestNearestNDInterpolator:
def test_nearest_options(self):
# smoke test that NearestNDInterpolator accepts cKDTree options
npts, nd = 4, 3
x = np.arange(npts*nd).reshape((npts, nd))
y = np.arange(npts)
nndi = NearestNDInterpolator(x, y)
opts = {'balanced_tree': False, 'compact_nodes': False}
nndi_o = NearestNDInterpolator(x, y, tree_options=opts)
assert_allclose(nndi(x), nndi_o(x), atol=1e-14)
def test_nearest_list_argument(self):
nd = np.array([[0, 0, 0, 0, 1, 0, 1],
[0, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1, 2]])
d = nd[:, 3:]
# z is np.array
NI = NearestNDInterpolator((d[0], d[1]), d[2])
assert_array_equal(NI([0.1, 0.9], [0.1, 0.9]), [0, 2])
# z is list
NI = NearestNDInterpolator((d[0], d[1]), list(d[2]))
assert_array_equal(NI([0.1, 0.9], [0.1, 0.9]), [0, 2])
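A quick standalone sketch of the snapping behavior these tests rely on, using toy data sites (the values below are invented for illustration):

```python
import numpy as np
from scipy.interpolate import NearestNDInterpolator

x = np.array([[0., 0.], [1., 0.], [0., 1.]])
y = np.array([10., 20., 30.])
interp = NearestNDInterpolator(x, y)

# each query point takes the value of its closest data site
out = interp([[0.1, 0.1], [0.9, 0.1]])
```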
class TestNDInterpolators:
@parametrize_interpolators
def test_broadcastable_input(self, interpolator):
# input data
np.random.seed(0)
x = np.random.random(10)
y = np.random.random(10)
z = np.hypot(x, y)
# x-y grid for interpolation
X = np.linspace(min(x), max(x))
Y = np.linspace(min(y), max(y))
X, Y = np.meshgrid(X, Y)
XY = np.vstack((X.ravel(), Y.ravel())).T
interp = interpolator(list(zip(x, y)), z)
# single array input
interp_points0 = interp(XY)
# tuple input
interp_points1 = interp((X, Y))
interp_points2 = interp((X, 0.0))
# broadcastable input
interp_points3 = interp(X, Y)
interp_points4 = interp(X, 0.0)
assert_equal(interp_points0.size ==
interp_points1.size ==
interp_points2.size ==
interp_points3.size ==
interp_points4.size, True)
@parametrize_interpolators
def test_read_only(self, interpolator):
# input data
np.random.seed(0)
xy = np.random.random((10, 2))
x, y = xy[:, 0], xy[:, 1]
z = np.hypot(x, y)
# interpolation points
XY = np.random.random((50, 2))
xy.setflags(write=False)
z.setflags(write=False)
XY.setflags(write=False)
interp = interpolator(xy, z)
interp(XY)
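To complement the broadcastable-input tests above, here is a hedged minimal sketch of LinearNDInterpolator on toy data; unlike `griddata` with `fill_value`, points outside the convex hull yield nan by default:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# four corners of the unit square with values of f(x, y) = 2*x + y
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
vals = 2*pts[:, 0] + pts[:, 1]
interp = LinearNDInterpolator(pts, vals)

# a linear function is reproduced exactly inside the hull;
# the second query point lies outside and evaluates to nan
out = interp([[0.25, 0.5], [2.0, 2.0]])
```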
# scipy-main/scipy/interpolate/tests/test_fitpack2.py
# Created by Pearu Peterson, June 2003
import itertools
import numpy as np
from numpy.testing import (assert_equal, assert_almost_equal, assert_array_equal,
assert_array_almost_equal, assert_allclose, suppress_warnings)
from pytest import raises as assert_raises
from numpy import array, diff, linspace, meshgrid, ones, pi, shape
from scipy.interpolate._fitpack_py import bisplrep, bisplev, splrep, spalde
from scipy.interpolate._fitpack2 import (UnivariateSpline,
LSQUnivariateSpline, InterpolatedUnivariateSpline,
LSQBivariateSpline, SmoothBivariateSpline, RectBivariateSpline,
LSQSphereBivariateSpline, SmoothSphereBivariateSpline,
RectSphereBivariateSpline)
class TestUnivariateSpline:
def test_linear_constant(self):
x = [1,2,3]
y = [3,3,3]
lut = UnivariateSpline(x,y,k=1)
assert_array_almost_equal(lut.get_knots(),[1,3])
assert_array_almost_equal(lut.get_coeffs(),[3,3])
assert_almost_equal(lut.get_residual(),0.0)
assert_array_almost_equal(lut([1,1.5,2]),[3,3,3])
def test_preserve_shape(self):
x = [1, 2, 3]
y = [0, 2, 4]
lut = UnivariateSpline(x, y, k=1)
arg = 2
assert_equal(shape(arg), shape(lut(arg)))
assert_equal(shape(arg), shape(lut(arg, nu=1)))
arg = [1.5, 2, 2.5]
assert_equal(shape(arg), shape(lut(arg)))
assert_equal(shape(arg), shape(lut(arg, nu=1)))
def test_linear_1d(self):
x = [1,2,3]
y = [0,2,4]
lut = UnivariateSpline(x,y,k=1)
assert_array_almost_equal(lut.get_knots(),[1,3])
assert_array_almost_equal(lut.get_coeffs(),[0,4])
assert_almost_equal(lut.get_residual(),0.0)
assert_array_almost_equal(lut([1,1.5,2]),[0,1,2])
def test_subclassing(self):
# See #731
class ZeroSpline(UnivariateSpline):
def __call__(self, x):
return 0*array(x)
sp = ZeroSpline([1,2,3,4,5], [3,2,3,2,3], k=2)
assert_array_equal(sp([1.5, 2.5]), [0., 0.])
def test_empty_input(self):
# Test whether empty input returns an empty output. Ticket 1014
x = [1,3,5,7,9]
y = [0,4,9,12,21]
spl = UnivariateSpline(x, y, k=3)
assert_array_equal(spl([]), array([]))
def test_roots(self):
x = [1, 3, 5, 7, 9]
y = [0, 4, 9, 12, 21]
spl = UnivariateSpline(x, y, k=3)
assert_almost_equal(spl.roots()[0], 1.050290639101332)
def test_roots_length(self): # for gh18335
x = np.linspace(0, 50 * np.pi, 1000)
y = np.cos(x)
spl = UnivariateSpline(x, y, s=0)
assert_equal(len(spl.roots()), 50)
def test_derivatives(self):
x = [1, 3, 5, 7, 9]
y = [0, 4, 9, 12, 21]
spl = UnivariateSpline(x, y, k=3)
assert_almost_equal(spl.derivatives(3.5),
[5.5152902, 1.7146577, -0.1830357, 0.3125])
def test_derivatives_2(self):
x = np.arange(8)
y = x**3 + 2.*x**2
tck = splrep(x, y, s=0)
ders = spalde(3, tck)
assert_allclose(ders, [45., # 3**3 + 2*(3)**2
39., # 3*(3)**2 + 4*(3)
22., # 6*(3) + 4
6.], # 6*3**0
atol=1e-15)
spl = UnivariateSpline(x, y, s=0, k=3)
assert_allclose(spl.derivatives(3),
ders,
atol=1e-15)
def test_resize_regression(self):
"""Regression test for #1375."""
x = [-1., -0.65016502, -0.58856235, -0.26903553, -0.17370892,
-0.10011001, 0., 0.10011001, 0.17370892, 0.26903553, 0.58856235,
0.65016502, 1.]
y = [1.,0.62928599, 0.5797223, 0.39965815, 0.36322694, 0.3508061,
0.35214793, 0.3508061, 0.36322694, 0.39965815, 0.5797223,
0.62928599, 1.]
w = [1.00000000e+12, 6.88875973e+02, 4.89314737e+02, 4.26864807e+02,
6.07746770e+02, 4.51341444e+02, 3.17480210e+02, 4.51341444e+02,
6.07746770e+02, 4.26864807e+02, 4.89314737e+02, 6.88875973e+02,
1.00000000e+12]
spl = UnivariateSpline(x=x, y=y, w=w, s=None)
desired = array([0.35100374, 0.51715855, 0.87789547, 0.98719344])
assert_allclose(spl([0.1, 0.5, 0.9, 0.99]), desired, atol=5e-4)
def test_out_of_range_regression(self):
# Test different extrapolation modes. See ticket 3557
x = np.arange(5, dtype=float)
y = x**3
xp = linspace(-8, 13, 100)
xp_zeros = xp.copy()
xp_zeros[np.logical_or(xp_zeros < 0., xp_zeros > 4.)] = 0
xp_clip = xp.copy()
xp_clip[xp_clip < x[0]] = x[0]
xp_clip[xp_clip > x[-1]] = x[-1]
for cls in [UnivariateSpline, InterpolatedUnivariateSpline]:
spl = cls(x=x, y=y)
for ext in [0, 'extrapolate']:
assert_allclose(spl(xp, ext=ext), xp**3, atol=1e-16)
assert_allclose(cls(x, y, ext=ext)(xp), xp**3, atol=1e-16)
for ext in [1, 'zeros']:
assert_allclose(spl(xp, ext=ext), xp_zeros**3, atol=1e-16)
assert_allclose(cls(x, y, ext=ext)(xp), xp_zeros**3, atol=1e-16)
for ext in [2, 'raise']:
assert_raises(ValueError, spl, xp, **dict(ext=ext))
for ext in [3, 'const']:
assert_allclose(spl(xp, ext=ext), xp_clip**3, atol=1e-16)
assert_allclose(cls(x, y, ext=ext)(xp), xp_clip**3, atol=1e-16)
# also test LSQUnivariateSpline [which needs explicit knots]
t = spl.get_knots()[3:4] # interior knots w/ default k=3
spl = LSQUnivariateSpline(x, y, t)
assert_allclose(spl(xp, ext=0), xp**3, atol=1e-16)
assert_allclose(spl(xp, ext=1), xp_zeros**3, atol=1e-16)
assert_raises(ValueError, spl, xp, **dict(ext=2))
assert_allclose(spl(xp, ext=3), xp_clip**3, atol=1e-16)
# also make sure that unknown values for `ext` are caught early
for ext in [-1, 'unknown']:
spl = UnivariateSpline(x, y)
assert_raises(ValueError, spl, xp, **dict(ext=ext))
assert_raises(ValueError, UnivariateSpline,
**dict(x=x, y=y, ext=ext))
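As a concrete illustration of the `ext` modes tested above, here is a small hedged sketch on toy quadratic data (an interpolating cubic spline reproduces x**2 exactly, so the boundary values used by 'const' are 0 and 16):

```python
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

x = np.arange(5, dtype=float)
spl = InterpolatedUnivariateSpline(x, x**2)   # exact fit, default k=3
xp = np.array([-1.0, 6.0])                    # both outside [0, 4]

full = spl(xp, ext='extrapolate')   # continue the polynomial pieces
zeros = spl(xp, ext='zeros')        # returns 0 outside the data range
const = spl(xp, ext='const')        # clamp to the boundary values
```

`ext='raise'` (or 2) raises ValueError instead, which the test exercises via `assert_raises`.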
def test_lsq_fpchec(self):
xs = np.arange(100) * 1.
ys = np.arange(100) * 1.
knots = np.linspace(0, 99, 10)
bbox = (-1, 101)
assert_raises(ValueError, LSQUnivariateSpline, xs, ys, knots,
bbox=bbox)
def test_derivative_and_antiderivative(self):
# Thin wrappers to splder/splantider, so light smoke test only.
x = np.linspace(0, 1, 70)**3
y = np.cos(x)
spl = UnivariateSpline(x, y, s=0)
spl2 = spl.antiderivative(2).derivative(2)
assert_allclose(spl(0.3), spl2(0.3))
spl2 = spl.antiderivative(1)
assert_allclose(spl2(0.6) - spl2(0.2),
spl.integral(0.2, 0.6))
def test_derivative_extrapolation(self):
# Regression test for gh-10195: for a const-extrapolation spline
# its derivative evaluates to zero for extrapolation
x_values = [1, 2, 4, 6, 8.5]
y_values = [0.5, 0.8, 1.3, 2.5, 5]
f = UnivariateSpline(x_values, y_values, ext='const', k=3)
x = [-1, 0, -0.5, 9, 9.5, 10]
assert_allclose(f.derivative()(x), 0, atol=1e-15)
def test_integral_out_of_bounds(self):
# Regression test for gh-7906: .integral(a, b) is wrong if both
# a and b are out-of-bounds
x = np.linspace(0., 1., 7)
for ext in range(4):
f = UnivariateSpline(x, x, s=0, ext=ext)
for (a, b) in [(1, 1), (1, 5), (2, 5),
(0, 0), (-2, 0), (-2, -1)]:
assert_allclose(f.integral(a, b), 0, atol=1e-15)
def test_nan(self):
# bail out early if the input data contains nans
x = np.arange(10, dtype=float)
y = x**3
w = np.ones_like(x)
# also test LSQUnivariateSpline [which needs explicit knots]
spl = UnivariateSpline(x, y, check_finite=True)
t = spl.get_knots()[3:4] # interior knots w/ default k=3
y_end = y[-1]
for z in [np.nan, np.inf, -np.inf]:
y[-1] = z
assert_raises(ValueError, UnivariateSpline,
**dict(x=x, y=y, check_finite=True))
assert_raises(ValueError, InterpolatedUnivariateSpline,
**dict(x=x, y=y, check_finite=True))
assert_raises(ValueError, LSQUnivariateSpline,
**dict(x=x, y=y, t=t, check_finite=True))
y[-1] = y_end # check valid y but invalid w
w[-1] = z
assert_raises(ValueError, UnivariateSpline,
**dict(x=x, y=y, w=w, check_finite=True))
assert_raises(ValueError, InterpolatedUnivariateSpline,
**dict(x=x, y=y, w=w, check_finite=True))
assert_raises(ValueError, LSQUnivariateSpline,
**dict(x=x, y=y, t=t, w=w, check_finite=True))
def test_strictly_increasing_x(self):
# Test that x is required to be strictly increasing for
# UnivariateSpline if s=0 and for InterpolatedUnivariateSpline,
# but merely increasing for UnivariateSpline if s>0
# and for LSQUnivariateSpline; see gh-8535
xx = np.arange(10, dtype=float)
yy = xx**3
x = np.arange(10, dtype=float)
x[1] = x[0]
y = x**3
w = np.ones_like(x)
# also test LSQUnivariateSpline [which needs explicit knots]
spl = UnivariateSpline(xx, yy, check_finite=True)
t = spl.get_knots()[3:4] # interior knots w/ default k=3
UnivariateSpline(x=x, y=y, w=w, s=1, check_finite=True)
LSQUnivariateSpline(x=x, y=y, t=t, w=w, check_finite=True)
assert_raises(ValueError, UnivariateSpline,
**dict(x=x, y=y, s=0, check_finite=True))
assert_raises(ValueError, InterpolatedUnivariateSpline,
**dict(x=x, y=y, check_finite=True))
def test_increasing_x(self):
# Test that x is required to be increasing, see gh-8535
xx = np.arange(10, dtype=float)
yy = xx**3
x = np.arange(10, dtype=float)
x[1] = x[0] - 1.0
y = x**3
w = np.ones_like(x)
# also test LSQUnivariateSpline [which needs explicit knots]
spl = UnivariateSpline(xx, yy, check_finite=True)
t = spl.get_knots()[3:4] # interior knots w/ default k=3
assert_raises(ValueError, UnivariateSpline,
**dict(x=x, y=y, check_finite=True))
assert_raises(ValueError, InterpolatedUnivariateSpline,
**dict(x=x, y=y, check_finite=True))
assert_raises(ValueError, LSQUnivariateSpline,
**dict(x=x, y=y, t=t, w=w, check_finite=True))
def test_invalid_input_for_univariate_spline(self):
with assert_raises(ValueError) as info:
x_values = [1, 2, 4, 6, 8.5]
y_values = [0.5, 0.8, 1.3, 2.5]
UnivariateSpline(x_values, y_values)
assert "x and y should have a same length" in str(info.value)
with assert_raises(ValueError) as info:
x_values = [1, 2, 4, 6, 8.5]
y_values = [0.5, 0.8, 1.3, 2.5, 2.8]
w_values = [-1.0, 1.0, 1.0, 1.0]
UnivariateSpline(x_values, y_values, w=w_values)
assert "x, y, and w should have a same length" in str(info.value)
with assert_raises(ValueError) as info:
bbox = (-1)
UnivariateSpline(x_values, y_values, bbox=bbox)
assert "bbox shape should be (2,)" in str(info.value)
with assert_raises(ValueError) as info:
UnivariateSpline(x_values, y_values, k=6)
assert "k should be 1 <= k <= 5" in str(info.value)
with assert_raises(ValueError) as info:
UnivariateSpline(x_values, y_values, s=-1.0)
assert "s should be s >= 0.0" in str(info.value)
def test_invalid_input_for_interpolated_univariate_spline(self):
with assert_raises(ValueError) as info:
x_values = [1, 2, 4, 6, 8.5]
y_values = [0.5, 0.8, 1.3, 2.5]
InterpolatedUnivariateSpline(x_values, y_values)
assert "x and y should have a same length" in str(info.value)
with assert_raises(ValueError) as info:
x_values = [1, 2, 4, 6, 8.5]
y_values = [0.5, 0.8, 1.3, 2.5, 2.8]
w_values = [-1.0, 1.0, 1.0, 1.0]
InterpolatedUnivariateSpline(x_values, y_values, w=w_values)
assert "x, y, and w should have a same length" in str(info.value)
with assert_raises(ValueError) as info:
bbox = (-1)
InterpolatedUnivariateSpline(x_values, y_values, bbox=bbox)
assert "bbox shape should be (2,)" in str(info.value)
with assert_raises(ValueError) as info:
InterpolatedUnivariateSpline(x_values, y_values, k=6)
assert "k should be 1 <= k <= 5" in str(info.value)
def test_invalid_input_for_lsq_univariate_spline(self):
x_values = [1, 2, 4, 6, 8.5]
y_values = [0.5, 0.8, 1.3, 2.5, 2.8]
spl = UnivariateSpline(x_values, y_values, check_finite=True)
t_values = spl.get_knots()[3:4] # interior knots w/ default k=3
with assert_raises(ValueError) as info:
x_values = [1, 2, 4, 6, 8.5]
y_values = [0.5, 0.8, 1.3, 2.5]
LSQUnivariateSpline(x_values, y_values, t_values)
assert "x and y should have a same length" in str(info.value)
with assert_raises(ValueError) as info:
x_values = [1, 2, 4, 6, 8.5]
y_values = [0.5, 0.8, 1.3, 2.5, 2.8]
w_values = [1.0, 1.0, 1.0, 1.0]
LSQUnivariateSpline(x_values, y_values, t_values, w=w_values)
assert "x, y, and w should have a same length" in str(info.value)
with assert_raises(ValueError) as info:
bbox = (100, -100)
LSQUnivariateSpline(x_values, y_values, t_values, bbox=bbox)
assert "Interior knots t must satisfy Schoenberg-Whitney conditions" in str(info.value)
with assert_raises(ValueError) as info:
bbox = (-1)
LSQUnivariateSpline(x_values, y_values, t_values, bbox=bbox)
assert "bbox shape should be (2,)" in str(info.value)
with assert_raises(ValueError) as info:
LSQUnivariateSpline(x_values, y_values, t_values, k=6)
assert "k should be 1 <= k <= 5" in str(info.value)
def test_array_like_input(self):
x_values = np.array([1, 2, 4, 6, 8.5])
y_values = np.array([0.5, 0.8, 1.3, 2.5, 2.8])
w_values = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
bbox = np.array([-100, 100])
# np.array input
spl1 = UnivariateSpline(x=x_values, y=y_values, w=w_values,
bbox=bbox)
# list input
spl2 = UnivariateSpline(x=x_values.tolist(), y=y_values.tolist(),
w=w_values.tolist(), bbox=bbox.tolist())
assert_allclose(spl1([0.1, 0.5, 0.9, 0.99]),
spl2([0.1, 0.5, 0.9, 0.99]))
def test_fpknot_oob_crash(self):
# https://github.com/scipy/scipy/issues/3691
x = range(109)
y = [0., 0., 0., 0., 0., 10.9, 0., 11., 0.,
0., 0., 10.9, 0., 0., 0., 0., 0., 0.,
10.9, 0., 0., 0., 11., 0., 0., 0., 10.9,
0., 0., 0., 10.5, 0., 0., 0., 10.7, 0.,
0., 0., 11., 0., 0., 0., 0., 0., 0.,
10.9, 0., 0., 10.7, 0., 0., 0., 10.6, 0.,
0., 0., 10.5, 0., 0., 10.7, 0., 0., 10.5,
0., 0., 11.5, 0., 0., 0., 10.7, 0., 0.,
10.7, 0., 0., 10.9, 0., 0., 10.8, 0., 0.,
0., 10.7, 0., 0., 10.6, 0., 0., 0., 10.4,
0., 0., 10.6, 0., 0., 10.5, 0., 0., 0.,
10.7, 0., 0., 0., 10.4, 0., 0., 0., 10.8, 0.]
with suppress_warnings() as sup:
r = sup.record(
UserWarning,
r"""
The maximal number of iterations maxit \(set to 20 by the program\)
allowed for finding a smoothing spline with fp=s has been reached: s
too small.
There is an approximation returned but the corresponding weighted sum
of squared residuals does not satisfy the condition abs\(fp-s\)/s < tol.""")
UnivariateSpline(x, y, k=1)
assert_equal(len(r), 1)
class TestLSQBivariateSpline:
# NOTE: The systems in this test class are rank-deficient
def test_linear_constant(self):
x = [1,1,1,2,2,2,3,3,3]
y = [1,2,3,1,2,3,1,2,3]
z = [3,3,3,3,3,3,3,3,3]
s = 0.1
tx = [1+s,3-s]
ty = [1+s,3-s]
with suppress_warnings() as sup:
r = sup.record(UserWarning, "\nThe coefficients of the spline")
lut = LSQBivariateSpline(x,y,z,tx,ty,kx=1,ky=1)
assert_equal(len(r), 1)
assert_almost_equal(lut(2,2), 3.)
def test_bilinearity(self):
x = [1,1,1,2,2,2,3,3,3]
y = [1,2,3,1,2,3,1,2,3]
z = [0,7,8,3,4,7,1,3,4]
s = 0.1
tx = [1+s,3-s]
ty = [1+s,3-s]
with suppress_warnings() as sup:
# This seems to fail (ier=1, see ticket 1642).
sup.filter(UserWarning, "\nThe coefficients of the spline")
lut = LSQBivariateSpline(x,y,z,tx,ty,kx=1,ky=1)
tx, ty = lut.get_knots()
for xa, xb in zip(tx[:-1], tx[1:]):
for ya, yb in zip(ty[:-1], ty[1:]):
for t in [0.1, 0.5, 0.9]:
for s in [0.3, 0.4, 0.7]:
xp = xa*(1-t) + xb*t
yp = ya*(1-s) + yb*s
zp = (+ lut(xa, ya)*(1-t)*(1-s)
+ lut(xb, ya)*t*(1-s)
+ lut(xa, yb)*(1-t)*s
+ lut(xb, yb)*t*s)
assert_almost_equal(lut(xp,yp), zp)
def test_integral(self):
x = [1,1,1,2,2,2,8,8,8]
y = [1,2,3,1,2,3,1,2,3]
z = array([0,7,8,3,4,7,1,3,4])
s = 0.1
tx = [1+s,3-s]
ty = [1+s,3-s]
with suppress_warnings() as sup:
r = sup.record(UserWarning, "\nThe coefficients of the spline")
lut = LSQBivariateSpline(x, y, z, tx, ty, kx=1, ky=1)
assert_equal(len(r), 1)
tx, ty = lut.get_knots()
tz = lut(tx, ty)
trpz = .25*(diff(tx)[:,None]*diff(ty)[None,:]
* (tz[:-1,:-1]+tz[1:,:-1]+tz[:-1,1:]+tz[1:,1:])).sum()
assert_almost_equal(lut.integral(tx[0], tx[-1], ty[0], ty[-1]),
trpz)
def test_empty_input(self):
# Test whether empty input returns an empty output. Ticket 1014
x = [1,1,1,2,2,2,3,3,3]
y = [1,2,3,1,2,3,1,2,3]
z = [3,3,3,3,3,3,3,3,3]
s = 0.1
tx = [1+s,3-s]
ty = [1+s,3-s]
with suppress_warnings() as sup:
r = sup.record(UserWarning, "\nThe coefficients of the spline")
lut = LSQBivariateSpline(x, y, z, tx, ty, kx=1, ky=1)
assert_equal(len(r), 1)
assert_array_equal(lut([], []), np.zeros((0,0)))
assert_array_equal(lut([], [], grid=False), np.zeros((0,)))
def test_invalid_input(self):
s = 0.1
tx = [1 + s, 3 - s]
ty = [1 + s, 3 - s]
with assert_raises(ValueError) as info:
x = np.linspace(1.0, 10.0)
y = np.linspace(1.0, 10.0)
z = np.linspace(1.0, 10.0, num=10)
LSQBivariateSpline(x, y, z, tx, ty)
assert "x, y, and z should have a same length" in str(info.value)
with assert_raises(ValueError) as info:
x = np.linspace(1.0, 10.0)
y = np.linspace(1.0, 10.0)
z = np.linspace(1.0, 10.0)
w = np.linspace(1.0, 10.0, num=20)
LSQBivariateSpline(x, y, z, tx, ty, w=w)
assert "x, y, z, and w should have a same length" in str(info.value)
with assert_raises(ValueError) as info:
w = np.linspace(-1.0, 10.0)
LSQBivariateSpline(x, y, z, tx, ty, w=w)
assert "w should be positive" in str(info.value)
with assert_raises(ValueError) as info:
bbox = (-100, 100, -100)
LSQBivariateSpline(x, y, z, tx, ty, bbox=bbox)
assert "bbox shape should be (4,)" in str(info.value)
with assert_raises(ValueError) as info:
LSQBivariateSpline(x, y, z, tx, ty, kx=10, ky=10)
assert "The length of x, y and z should be at least (kx+1) * (ky+1)" in \
str(info.value)
with assert_raises(ValueError) as exc_info:
LSQBivariateSpline(x, y, z, tx, ty, eps=0.0)
assert "eps should be between (0, 1)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
LSQBivariateSpline(x, y, z, tx, ty, eps=1.0)
assert "eps should be between (0, 1)" in str(exc_info.value)
def test_array_like_input(self):
s = 0.1
tx = np.array([1 + s, 3 - s])
ty = np.array([1 + s, 3 - s])
x = np.linspace(1.0, 10.0)
y = np.linspace(1.0, 10.0)
z = np.linspace(1.0, 10.0)
w = np.linspace(1.0, 10.0)
bbox = np.array([1.0, 10.0, 1.0, 10.0])
with suppress_warnings() as sup:
r = sup.record(UserWarning, "\nThe coefficients of the spline")
# np.array input
spl1 = LSQBivariateSpline(x, y, z, tx, ty, w=w, bbox=bbox)
# list input
spl2 = LSQBivariateSpline(x.tolist(), y.tolist(), z.tolist(),
tx.tolist(), ty.tolist(), w=w.tolist(),
bbox=bbox)
assert_allclose(spl1(2.0, 2.0), spl2(2.0, 2.0))
assert_equal(len(r), 2)
def test_unequal_length_of_knots(self):
"""Test for the case when the input knot-location arrays in x and y are
of different lengths.
"""
x, y = np.mgrid[0:100, 0:100]
x = x.ravel()
y = y.ravel()
z = 3.0 * np.ones_like(x)
tx = np.linspace(0.1, 98.0, 29)
ty = np.linspace(0.1, 98.0, 33)
with suppress_warnings() as sup:
r = sup.record(UserWarning, "\nThe coefficients of the spline")
lut = LSQBivariateSpline(x,y,z,tx,ty)
assert_equal(len(r), 1)
assert_almost_equal(lut(x, y, grid=False), z)
class TestSmoothBivariateSpline:
def test_linear_constant(self):
x = [1,1,1,2,2,2,3,3,3]
y = [1,2,3,1,2,3,1,2,3]
z = [3,3,3,3,3,3,3,3,3]
lut = SmoothBivariateSpline(x,y,z,kx=1,ky=1)
assert_array_almost_equal(lut.get_knots(),([1,1,3,3],[1,1,3,3]))
assert_array_almost_equal(lut.get_coeffs(),[3,3,3,3])
assert_almost_equal(lut.get_residual(),0.0)
assert_array_almost_equal(lut([1,1.5,2],[1,1.5]),[[3,3],[3,3],[3,3]])
def test_linear_1d(self):
x = [1,1,1,2,2,2,3,3,3]
y = [1,2,3,1,2,3,1,2,3]
z = [0,0,0,2,2,2,4,4,4]
lut = SmoothBivariateSpline(x,y,z,kx=1,ky=1)
assert_array_almost_equal(lut.get_knots(),([1,1,3,3],[1,1,3,3]))
assert_array_almost_equal(lut.get_coeffs(),[0,0,4,4])
assert_almost_equal(lut.get_residual(),0.0)
assert_array_almost_equal(lut([1,1.5,2],[1,1.5]),[[0,0],[1,1],[2,2]])
def test_integral(self):
x = [1,1,1,2,2,2,4,4,4]
y = [1,2,3,1,2,3,1,2,3]
z = array([0,7,8,3,4,7,1,3,4])
with suppress_warnings() as sup:
# This seems to fail (ier=1, see ticket 1642).
sup.filter(UserWarning, "\nThe required storage space")
lut = SmoothBivariateSpline(x, y, z, kx=1, ky=1, s=0)
tx = [1,2,4]
ty = [1,2,3]
tz = lut(tx, ty)
trpz = .25*(diff(tx)[:,None]*diff(ty)[None,:]
* (tz[:-1,:-1]+tz[1:,:-1]+tz[:-1,1:]+tz[1:,1:])).sum()
assert_almost_equal(lut.integral(tx[0], tx[-1], ty[0], ty[-1]), trpz)
lut2 = SmoothBivariateSpline(x, y, z, kx=2, ky=2, s=0)
assert_almost_equal(lut2.integral(tx[0], tx[-1], ty[0], ty[-1]), trpz,
decimal=0) # the quadratures give 23.75 and 23.85
tz = lut(tx[:-1], ty[:-1])
trpz = .25*(diff(tx[:-1])[:,None]*diff(ty[:-1])[None,:]
* (tz[:-1,:-1]+tz[1:,:-1]+tz[:-1,1:]+tz[1:,1:])).sum()
assert_almost_equal(lut.integral(tx[0], tx[-2], ty[0], ty[-2]), trpz)
def test_rerun_lwrk2_too_small(self):
# in this setting, lwrk2 is too small in the default run. Here we
# check for equality with the bisplrep/bisplev output because there,
# an automatic re-run of the spline representation is done if ier>10.
x = np.linspace(-2, 2, 80)
y = np.linspace(-2, 2, 80)
z = x + y
xi = np.linspace(-1, 1, 100)
yi = np.linspace(-2, 2, 100)
tck = bisplrep(x, y, z)
res1 = bisplev(xi, yi, tck)
interp_ = SmoothBivariateSpline(x, y, z)
res2 = interp_(xi, yi)
assert_almost_equal(res1, res2)
def test_invalid_input(self):
with assert_raises(ValueError) as info:
x = np.linspace(1.0, 10.0)
y = np.linspace(1.0, 10.0)
z = np.linspace(1.0, 10.0, num=10)
SmoothBivariateSpline(x, y, z)
assert "x, y, and z should have a same length" in str(info.value)
with assert_raises(ValueError) as info:
x = np.linspace(1.0, 10.0)
y = np.linspace(1.0, 10.0)
z = np.linspace(1.0, 10.0)
w = np.linspace(1.0, 10.0, num=20)
SmoothBivariateSpline(x, y, z, w=w)
assert "x, y, z, and w should have a same length" in str(info.value)
with assert_raises(ValueError) as info:
w = np.linspace(-1.0, 10.0)
SmoothBivariateSpline(x, y, z, w=w)
assert "w should be positive" in str(info.value)
with assert_raises(ValueError) as info:
bbox = (-100, 100, -100)
SmoothBivariateSpline(x, y, z, bbox=bbox)
assert "bbox shape should be (4,)" in str(info.value)
with assert_raises(ValueError) as info:
SmoothBivariateSpline(x, y, z, kx=10, ky=10)
assert "The length of x, y and z should be at least (kx+1) * (ky+1)" in\
str(info.value)
with assert_raises(ValueError) as info:
SmoothBivariateSpline(x, y, z, s=-1.0)
assert "s should be s >= 0.0" in str(info.value)
with assert_raises(ValueError) as exc_info:
SmoothBivariateSpline(x, y, z, eps=0.0)
assert "eps should be between (0, 1)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
SmoothBivariateSpline(x, y, z, eps=1.0)
assert "eps should be between (0, 1)" in str(exc_info.value)
def test_array_like_input(self):
x = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3])
y = np.array([1, 2, 3, 1, 2, 3, 1, 2, 3])
z = np.array([3, 3, 3, 3, 3, 3, 3, 3, 3])
w = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1])
bbox = np.array([1.0, 3.0, 1.0, 3.0])
# np.array input
spl1 = SmoothBivariateSpline(x, y, z, w=w, bbox=bbox, kx=1, ky=1)
# list input
spl2 = SmoothBivariateSpline(x.tolist(), y.tolist(), z.tolist(),
bbox=bbox.tolist(), w=w.tolist(),
kx=1, ky=1)
assert_allclose(spl1(0.1, 0.5), spl2(0.1, 0.5))
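The trapezoid-based `integral` checks above can be sketched more simply on gridded data with the related RectBivariateSpline (also imported by this module). The grid values below are toy data for f(x, y) = x*y, whose exact integral over [0, 1] x [0, 2] is 1:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.linspace(0, 1, 11)
y = np.linspace(0, 2, 21)
z = x[:, None] * y[None, :]          # f(x, y) = x*y sampled on the grid

spl = RectBivariateSpline(x, y, z)   # bicubic fit is exact for x*y
val = spl.integral(0, 1, 0, 2)       # close to the analytic value 1.0
```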
class TestLSQSphereBivariateSpline:
def setup_method(self):
# define the input data and coordinates
ntheta, nphi = 70, 90
theta = linspace(0.5/(ntheta - 1), 1 - 0.5/(ntheta - 1), ntheta) * pi
phi = linspace(0.5/(nphi - 1), 1 - 0.5/(nphi - 1), nphi) * 2. * pi
data = ones((theta.shape[0], phi.shape[0]))
# define knots and extract data values at the knots
knotst = theta[::5]
knotsp = phi[::5]
knotdata = data[::5, ::5]
# calculate spline coefficients
lats, lons = meshgrid(theta, phi)
lut_lsq = LSQSphereBivariateSpline(lats.ravel(), lons.ravel(),
data.T.ravel(), knotst, knotsp)
self.lut_lsq = lut_lsq
self.data = knotdata
self.new_lons, self.new_lats = knotsp, knotst
def test_linear_constant(self):
assert_almost_equal(self.lut_lsq.get_residual(), 0.0)
assert_array_almost_equal(self.lut_lsq(self.new_lats, self.new_lons),
self.data)
def test_empty_input(self):
assert_array_almost_equal(self.lut_lsq([], []), np.zeros((0,0)))
assert_array_almost_equal(self.lut_lsq([], [], grid=False), np.zeros((0,)))
def test_invalid_input(self):
ntheta, nphi = 70, 90
theta = linspace(0.5 / (ntheta - 1), 1 - 0.5 / (ntheta - 1),
ntheta) * pi
phi = linspace(0.5 / (nphi - 1), 1 - 0.5 / (nphi - 1), nphi) * 2. * pi
data = ones((theta.shape[0], phi.shape[0]))
# define knots and extract data values at the knots
knotst = theta[::5]
knotsp = phi[::5]
with assert_raises(ValueError) as exc_info:
invalid_theta = linspace(-0.1, 1.0, num=ntheta) * pi
invalid_lats, lons = meshgrid(invalid_theta, phi)
LSQSphereBivariateSpline(invalid_lats.ravel(), lons.ravel(),
data.T.ravel(), knotst, knotsp)
assert "theta should be between [0, pi]" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
invalid_theta = linspace(0.1, 1.1, num=ntheta) * pi
invalid_lats, lons = meshgrid(invalid_theta, phi)
LSQSphereBivariateSpline(invalid_lats.ravel(), lons.ravel(),
data.T.ravel(), knotst, knotsp)
assert "theta should be between [0, pi]" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
invalid_phi = linspace(-0.1, 1.0, num=ntheta) * 2.0 * pi
lats, invalid_lons = meshgrid(theta, invalid_phi)
LSQSphereBivariateSpline(lats.ravel(), invalid_lons.ravel(),
data.T.ravel(), knotst, knotsp)
assert "phi should be between [0, 2pi]" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
invalid_phi = linspace(0.0, 1.1, num=ntheta) * 2.0 * pi
lats, invalid_lons = meshgrid(theta, invalid_phi)
LSQSphereBivariateSpline(lats.ravel(), invalid_lons.ravel(),
data.T.ravel(), knotst, knotsp)
assert "phi should be between [0, 2pi]" in str(exc_info.value)
lats, lons = meshgrid(theta, phi)
with assert_raises(ValueError) as exc_info:
invalid_knotst = np.copy(knotst)
invalid_knotst[0] = -0.1
LSQSphereBivariateSpline(lats.ravel(), lons.ravel(),
data.T.ravel(), invalid_knotst, knotsp)
assert "tt should be between (0, pi)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
invalid_knotst = np.copy(knotst)
invalid_knotst[0] = pi
LSQSphereBivariateSpline(lats.ravel(), lons.ravel(),
data.T.ravel(), invalid_knotst, knotsp)
assert "tt should be between (0, pi)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
invalid_knotsp = np.copy(knotsp)
invalid_knotsp[0] = -0.1
LSQSphereBivariateSpline(lats.ravel(), lons.ravel(),
data.T.ravel(), knotst, invalid_knotsp)
assert "tp should be between (0, 2pi)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
invalid_knotsp = np.copy(knotsp)
invalid_knotsp[0] = 2 * pi
LSQSphereBivariateSpline(lats.ravel(), lons.ravel(),
data.T.ravel(), knotst, invalid_knotsp)
assert "tp should be between (0, 2pi)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
invalid_w = array([-1.0, 1.0, 1.5, 0.5, 1.0, 1.5, 0.5, 1.0, 1.0])
LSQSphereBivariateSpline(lats.ravel(), lons.ravel(), data.T.ravel(),
knotst, knotsp, w=invalid_w)
assert "w should be positive" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
LSQSphereBivariateSpline(lats.ravel(), lons.ravel(), data.T.ravel(),
knotst, knotsp, eps=0.0)
assert "eps should be between (0, 1)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
LSQSphereBivariateSpline(lats.ravel(), lons.ravel(), data.T.ravel(),
knotst, knotsp, eps=1.0)
assert "eps should be between (0, 1)" in str(exc_info.value)
def test_array_like_input(self):
ntheta, nphi = 70, 90
theta = linspace(0.5 / (ntheta - 1), 1 - 0.5 / (ntheta - 1),
ntheta) * pi
phi = linspace(0.5 / (nphi - 1), 1 - 0.5 / (nphi - 1),
nphi) * 2. * pi
lats, lons = meshgrid(theta, phi)
data = ones((theta.shape[0], phi.shape[0]))
# define knots and extract data values at the knots
knotst = theta[::5]
knotsp = phi[::5]
w = ones(lats.ravel().shape[0])
# np.array input
spl1 = LSQSphereBivariateSpline(lats.ravel(), lons.ravel(),
data.T.ravel(), knotst, knotsp, w=w)
# list input
spl2 = LSQSphereBivariateSpline(lats.ravel().tolist(),
lons.ravel().tolist(),
data.T.ravel().tolist(),
knotst.tolist(),
knotsp.tolist(), w=w.tolist())
assert_array_almost_equal(spl1(1.0, 1.0), spl2(1.0, 1.0))
class TestSmoothSphereBivariateSpline:
def setup_method(self):
theta = array([.25*pi, .25*pi, .25*pi, .5*pi, .5*pi, .5*pi, .75*pi,
.75*pi, .75*pi])
phi = array([.5 * pi, pi, 1.5 * pi, .5 * pi, pi, 1.5 * pi, .5 * pi, pi,
1.5 * pi])
r = array([3, 3, 3, 3, 3, 3, 3, 3, 3])
self.lut = SmoothSphereBivariateSpline(theta, phi, r, s=1E10)
def test_linear_constant(self):
assert_almost_equal(self.lut.get_residual(), 0.)
assert_array_almost_equal(self.lut([1, 1.5, 2],[1, 1.5]),
[[3, 3], [3, 3], [3, 3]])
def test_empty_input(self):
assert_array_almost_equal(self.lut([], []), np.zeros((0,0)))
assert_array_almost_equal(self.lut([], [], grid=False), np.zeros((0,)))
def test_invalid_input(self):
theta = array([.25 * pi, .25 * pi, .25 * pi, .5 * pi, .5 * pi, .5 * pi,
.75 * pi, .75 * pi, .75 * pi])
phi = array([.5 * pi, pi, 1.5 * pi, .5 * pi, pi, 1.5 * pi, .5 * pi, pi,
1.5 * pi])
r = array([3, 3, 3, 3, 3, 3, 3, 3, 3])
with assert_raises(ValueError) as exc_info:
invalid_theta = array([-0.1 * pi, .25 * pi, .25 * pi, .5 * pi,
.5 * pi, .5 * pi, .75 * pi, .75 * pi,
.75 * pi])
SmoothSphereBivariateSpline(invalid_theta, phi, r, s=1E10)
assert "theta should be between [0, pi]" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
invalid_theta = array([.25 * pi, .25 * pi, .25 * pi, .5 * pi,
.5 * pi, .5 * pi, .75 * pi, .75 * pi,
1.1 * pi])
SmoothSphereBivariateSpline(invalid_theta, phi, r, s=1E10)
assert "theta should be between [0, pi]" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
invalid_phi = array([-.1 * pi, pi, 1.5 * pi, .5 * pi, pi, 1.5 * pi,
.5 * pi, pi, 1.5 * pi])
SmoothSphereBivariateSpline(theta, invalid_phi, r, s=1E10)
assert "phi should be between [0, 2pi]" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
invalid_phi = array([1.0 * pi, pi, 1.5 * pi, .5 * pi, pi, 1.5 * pi,
.5 * pi, pi, 2.1 * pi])
SmoothSphereBivariateSpline(theta, invalid_phi, r, s=1E10)
assert "phi should be between [0, 2pi]" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
invalid_w = array([-1.0, 1.0, 1.5, 0.5, 1.0, 1.5, 0.5, 1.0, 1.0])
SmoothSphereBivariateSpline(theta, phi, r, w=invalid_w, s=1E10)
assert "w should be positive" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
SmoothSphereBivariateSpline(theta, phi, r, s=-1.0)
assert "s should be positive" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
SmoothSphereBivariateSpline(theta, phi, r, eps=-1.0)
assert "eps should be between (0, 1)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
SmoothSphereBivariateSpline(theta, phi, r, eps=1.0)
assert "eps should be between (0, 1)" in str(exc_info.value)
def test_array_like_input(self):
theta = np.array([.25 * pi, .25 * pi, .25 * pi, .5 * pi, .5 * pi,
.5 * pi, .75 * pi, .75 * pi, .75 * pi])
phi = np.array([.5 * pi, pi, 1.5 * pi, .5 * pi, pi, 1.5 * pi, .5 * pi,
pi, 1.5 * pi])
r = np.array([3, 3, 3, 3, 3, 3, 3, 3, 3])
w = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
# np.array input
spl1 = SmoothSphereBivariateSpline(theta, phi, r, w=w, s=1E10)
# list input
spl2 = SmoothSphereBivariateSpline(theta.tolist(), phi.tolist(),
r.tolist(), w=w.tolist(), s=1E10)
assert_array_almost_equal(spl1(1.0, 1.0), spl2(1.0, 1.0))
class TestRectBivariateSpline:
def test_defaults(self):
x = array([1,2,3,4,5])
y = array([1,2,3,4,5])
z = array([[1,2,1,2,1],[1,2,1,2,1],[1,2,3,2,1],[1,2,2,2,1],[1,2,1,2,1]])
lut = RectBivariateSpline(x,y,z)
assert_array_almost_equal(lut(x,y),z)
def test_evaluate(self):
x = array([1,2,3,4,5])
y = array([1,2,3,4,5])
z = array([[1,2,1,2,1],[1,2,1,2,1],[1,2,3,2,1],[1,2,2,2,1],[1,2,1,2,1]])
lut = RectBivariateSpline(x,y,z)
xi = [1, 2.3, 5.3, 0.5, 3.3, 1.2, 3]
yi = [1, 3.3, 1.2, 4.0, 5.0, 1.0, 3]
zi = lut.ev(xi, yi)
zi2 = array([lut(xp, yp)[0,0] for xp, yp in zip(xi, yi)])
assert_almost_equal(zi, zi2)
def test_derivatives_grid(self):
x = array([1,2,3,4,5])
y = array([1,2,3,4,5])
z = array([[1,2,1,2,1],[1,2,1,2,1],[1,2,3,2,1],[1,2,2,2,1],[1,2,1,2,1]])
dx = array([[0,0,-20,0,0],[0,0,13,0,0],[0,0,4,0,0],
[0,0,-11,0,0],[0,0,4,0,0]])/6.
dy = array([[4,-1,0,1,-4],[4,-1,0,1,-4],[0,1.5,0,-1.5,0],
[2,.25,0,-.25,-2],[4,-1,0,1,-4]])
dxdy = array([[40,-25,0,25,-40],[-26,16.25,0,-16.25,26],
[-8,5,0,-5,8],[22,-13.75,0,13.75,-22],[-8,5,0,-5,8]])/6.
lut = RectBivariateSpline(x,y,z)
assert_array_almost_equal(lut(x,y,dx=1),dx)
assert_array_almost_equal(lut(x,y,dy=1),dy)
assert_array_almost_equal(lut(x,y,dx=1,dy=1),dxdy)
def test_derivatives(self):
x = array([1,2,3,4,5])
y = array([1,2,3,4,5])
z = array([[1,2,1,2,1],[1,2,1,2,1],[1,2,3,2,1],[1,2,2,2,1],[1,2,1,2,1]])
dx = array([0,0,2./3,0,0])
dy = array([4,-1,0,-.25,-4])
dxdy = array([160,65,0,55,32])/24.
lut = RectBivariateSpline(x,y,z)
assert_array_almost_equal(lut(x,y,dx=1,grid=False),dx)
assert_array_almost_equal(lut(x,y,dy=1,grid=False),dy)
assert_array_almost_equal(lut(x,y,dx=1,dy=1,grid=False),dxdy)
def test_partial_derivative_method_grid(self):
x = array([1, 2, 3, 4, 5])
y = array([1, 2, 3, 4, 5])
z = array([[1, 2, 1, 2, 1],
[1, 2, 1, 2, 1],
[1, 2, 3, 2, 1],
[1, 2, 2, 2, 1],
[1, 2, 1, 2, 1]])
dx = array([[0, 0, -20, 0, 0],
[0, 0, 13, 0, 0],
[0, 0, 4, 0, 0],
[0, 0, -11, 0, 0],
[0, 0, 4, 0, 0]]) / 6.
dy = array([[4, -1, 0, 1, -4],
[4, -1, 0, 1, -4],
[0, 1.5, 0, -1.5, 0],
[2, .25, 0, -.25, -2],
[4, -1, 0, 1, -4]])
dxdy = array([[40, -25, 0, 25, -40],
[-26, 16.25, 0, -16.25, 26],
[-8, 5, 0, -5, 8],
[22, -13.75, 0, 13.75, -22],
[-8, 5, 0, -5, 8]]) / 6.
lut = RectBivariateSpline(x, y, z)
assert_array_almost_equal(lut.partial_derivative(1, 0)(x, y), dx)
assert_array_almost_equal(lut.partial_derivative(0, 1)(x, y), dy)
assert_array_almost_equal(lut.partial_derivative(1, 1)(x, y), dxdy)
def test_partial_derivative_method(self):
x = array([1, 2, 3, 4, 5])
y = array([1, 2, 3, 4, 5])
z = array([[1, 2, 1, 2, 1],
[1, 2, 1, 2, 1],
[1, 2, 3, 2, 1],
[1, 2, 2, 2, 1],
[1, 2, 1, 2, 1]])
dx = array([0, 0, 2./3, 0, 0])
dy = array([4, -1, 0, -.25, -4])
dxdy = array([160, 65, 0, 55, 32]) / 24.
lut = RectBivariateSpline(x, y, z)
assert_array_almost_equal(lut.partial_derivative(1, 0)(x, y,
grid=False),
dx)
assert_array_almost_equal(lut.partial_derivative(0, 1)(x, y,
grid=False),
dy)
assert_array_almost_equal(lut.partial_derivative(1, 1)(x, y,
grid=False),
dxdy)
def test_partial_derivative_order_too_large(self):
x = array([0, 1, 2, 3, 4], dtype=float)
y = x.copy()
z = ones((x.size, y.size))
lut = RectBivariateSpline(x, y, z)
with assert_raises(ValueError):
lut.partial_derivative(4, 1)
def test_broadcast(self):
x = array([1,2,3,4,5])
y = array([1,2,3,4,5])
z = array([[1,2,1,2,1],[1,2,1,2,1],[1,2,3,2,1],[1,2,2,2,1],[1,2,1,2,1]])
lut = RectBivariateSpline(x,y,z)
assert_allclose(lut(x, y), lut(x[:,None], y[None,:], grid=False))
def test_invalid_input(self):
with assert_raises(ValueError) as info:
x = array([6, 2, 3, 4, 5])
y = array([1, 2, 3, 4, 5])
z = array([[1, 2, 1, 2, 1], [1, 2, 1, 2, 1], [1, 2, 3, 2, 1],
[1, 2, 2, 2, 1], [1, 2, 1, 2, 1]])
RectBivariateSpline(x, y, z)
assert "x must be strictly increasing" in str(info.value)
with assert_raises(ValueError) as info:
x = array([1, 2, 3, 4, 5])
y = array([2, 2, 3, 4, 5])
z = array([[1, 2, 1, 2, 1], [1, 2, 1, 2, 1], [1, 2, 3, 2, 1],
[1, 2, 2, 2, 1], [1, 2, 1, 2, 1]])
RectBivariateSpline(x, y, z)
assert "y must be strictly increasing" in str(info.value)
with assert_raises(ValueError) as info:
x = array([1, 2, 3, 4, 5])
y = array([1, 2, 3, 4, 5])
z = array([[1, 2, 1, 2, 1], [1, 2, 1, 2, 1], [1, 2, 3, 2, 1],
[1, 2, 2, 2, 1]])
RectBivariateSpline(x, y, z)
assert "x dimension of z must have same number of elements as x"\
in str(info.value)
with assert_raises(ValueError) as info:
x = array([1, 2, 3, 4, 5])
y = array([1, 2, 3, 4, 5])
z = array([[1, 2, 1, 2], [1, 2, 1, 2], [1, 2, 3, 2],
[1, 2, 2, 2], [1, 2, 1, 2]])
RectBivariateSpline(x, y, z)
assert "y dimension of z must have same number of elements as y"\
in str(info.value)
with assert_raises(ValueError) as info:
x = array([1, 2, 3, 4, 5])
y = array([1, 2, 3, 4, 5])
z = array([[1, 2, 1, 2, 1], [1, 2, 1, 2, 1], [1, 2, 3, 2, 1],
[1, 2, 2, 2, 1], [1, 2, 1, 2, 1]])
bbox = (-100, 100, -100)
RectBivariateSpline(x, y, z, bbox=bbox)
assert "bbox shape should be (4,)" in str(info.value)
with assert_raises(ValueError) as info:
RectBivariateSpline(x, y, z, s=-1.0)
assert "s should be s >= 0.0" in str(info.value)
def test_array_like_input(self):
x = array([1, 2, 3, 4, 5])
y = array([1, 2, 3, 4, 5])
z = array([[1, 2, 1, 2, 1], [1, 2, 1, 2, 1], [1, 2, 3, 2, 1],
[1, 2, 2, 2, 1], [1, 2, 1, 2, 1]])
bbox = array([1, 5, 1, 5])
spl1 = RectBivariateSpline(x, y, z, bbox=bbox)
spl2 = RectBivariateSpline(x.tolist(), y.tolist(), z.tolist(),
bbox=bbox.tolist())
assert_array_almost_equal(spl1(1.0, 1.0), spl2(1.0, 1.0))
def test_not_increasing_input(self):
# gh-8565
NSamp = 20
Theta = np.random.uniform(0, np.pi, NSamp)
Phi = np.random.uniform(0, 2 * np.pi, NSamp)
Data = np.ones(NSamp)
Interpolator = SmoothSphereBivariateSpline(Theta, Phi, Data, s=3.5)
NLon = 6
NLat = 3
GridPosLats = np.arange(NLat) / NLat * np.pi
GridPosLons = np.arange(NLon) / NLon * 2 * np.pi
# No error
Interpolator(GridPosLats, GridPosLons)
nonGridPosLats = GridPosLats.copy()
nonGridPosLats[2] = 0.001
with assert_raises(ValueError) as exc_info:
Interpolator(nonGridPosLats, GridPosLons)
assert "x must be strictly increasing" in str(exc_info.value)
nonGridPosLons = GridPosLons.copy()
nonGridPosLons[2] = 0.001
with assert_raises(ValueError) as exc_info:
Interpolator(GridPosLats, nonGridPosLons)
assert "y must be strictly increasing" in str(exc_info.value)
class TestRectSphereBivariateSpline:
def test_defaults(self):
y = linspace(0.01, 2*pi-0.01, 7)
x = linspace(0.01, pi-0.01, 7)
z = array([[1,2,1,2,1,2,1],[1,2,1,2,1,2,1],[1,2,3,2,1,2,1],
[1,2,2,2,1,2,1],[1,2,1,2,1,2,1],[1,2,2,2,1,2,1],
[1,2,1,2,1,2,1]])
lut = RectSphereBivariateSpline(x,y,z)
assert_array_almost_equal(lut(x,y),z)
def test_evaluate(self):
y = linspace(0.01, 2*pi-0.01, 7)
x = linspace(0.01, pi-0.01, 7)
z = array([[1,2,1,2,1,2,1],[1,2,1,2,1,2,1],[1,2,3,2,1,2,1],
[1,2,2,2,1,2,1],[1,2,1,2,1,2,1],[1,2,2,2,1,2,1],
[1,2,1,2,1,2,1]])
lut = RectSphereBivariateSpline(x,y,z)
yi = [0.2, 1, 2.3, 2.35, 3.0, 3.99, 5.25]
xi = [1.5, 0.4, 1.1, 0.45, 0.2345, 1., 0.0001]
zi = lut.ev(xi, yi)
zi2 = array([lut(xp, yp)[0,0] for xp, yp in zip(xi, yi)])
assert_almost_equal(zi, zi2)
def test_invalid_input(self):
data = np.dot(np.atleast_2d(90. - np.linspace(-80., 80., 18)).T,
np.atleast_2d(180. - np.abs(np.linspace(0., 350., 9)))).T
with assert_raises(ValueError) as exc_info:
lats = np.linspace(-1, 170, 9) * np.pi / 180.
lons = np.linspace(0, 350, 18) * np.pi / 180.
RectSphereBivariateSpline(lats, lons, data)
assert "u should be between (0, pi)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
lats = np.linspace(10, 181, 9) * np.pi / 180.
lons = np.linspace(0, 350, 18) * np.pi / 180.
RectSphereBivariateSpline(lats, lons, data)
assert "u should be between (0, pi)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
lats = np.linspace(10, 170, 9) * np.pi / 180.
lons = np.linspace(-181, 10, 18) * np.pi / 180.
RectSphereBivariateSpline(lats, lons, data)
assert "v[0] should be between [-pi, pi)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
lats = np.linspace(10, 170, 9) * np.pi / 180.
lons = np.linspace(-10, 360, 18) * np.pi / 180.
RectSphereBivariateSpline(lats, lons, data)
assert "v[-1] should be v[0] + 2pi or less" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
lats = np.linspace(10, 170, 9) * np.pi / 180.
lons = np.linspace(10, 350, 18) * np.pi / 180.
RectSphereBivariateSpline(lats, lons, data, s=-1)
assert "s should be positive" in str(exc_info.value)
def test_derivatives_grid(self):
y = linspace(0.01, 2*pi-0.01, 7)
x = linspace(0.01, pi-0.01, 7)
z = array([[1,2,1,2,1,2,1],[1,2,1,2,1,2,1],[1,2,3,2,1,2,1],
[1,2,2,2,1,2,1],[1,2,1,2,1,2,1],[1,2,2,2,1,2,1],
[1,2,1,2,1,2,1]])
lut = RectSphereBivariateSpline(x,y,z)
y = linspace(0.02, 2*pi-0.02, 7)
x = linspace(0.02, pi-0.02, 7)
assert_allclose(lut(x, y, dtheta=1), _numdiff_2d(lut, x, y, dx=1),
rtol=1e-4, atol=1e-4)
assert_allclose(lut(x, y, dphi=1), _numdiff_2d(lut, x, y, dy=1),
rtol=1e-4, atol=1e-4)
assert_allclose(lut(x, y, dtheta=1, dphi=1), _numdiff_2d(lut, x, y, dx=1, dy=1, eps=1e-6),
rtol=1e-3, atol=1e-3)
assert_array_equal(lut(x, y, dtheta=1),
lut.partial_derivative(1, 0)(x, y))
assert_array_equal(lut(x, y, dphi=1),
lut.partial_derivative(0, 1)(x, y))
assert_array_equal(lut(x, y, dtheta=1, dphi=1),
lut.partial_derivative(1, 1)(x, y))
assert_array_equal(lut(x, y, dtheta=1, grid=False),
lut.partial_derivative(1, 0)(x, y, grid=False))
assert_array_equal(lut(x, y, dphi=1, grid=False),
lut.partial_derivative(0, 1)(x, y, grid=False))
assert_array_equal(lut(x, y, dtheta=1, dphi=1, grid=False),
lut.partial_derivative(1, 1)(x, y, grid=False))
def test_derivatives(self):
y = linspace(0.01, 2*pi-0.01, 7)
x = linspace(0.01, pi-0.01, 7)
z = array([[1,2,1,2,1,2,1],[1,2,1,2,1,2,1],[1,2,3,2,1,2,1],
[1,2,2,2,1,2,1],[1,2,1,2,1,2,1],[1,2,2,2,1,2,1],
[1,2,1,2,1,2,1]])
lut = RectSphereBivariateSpline(x,y,z)
y = linspace(0.02, 2*pi-0.02, 7)
x = linspace(0.02, pi-0.02, 7)
assert_equal(lut(x, y, dtheta=1, grid=False).shape, x.shape)
assert_allclose(lut(x, y, dtheta=1, grid=False),
_numdiff_2d(lambda x,y: lut(x,y,grid=False), x, y, dx=1),
rtol=1e-4, atol=1e-4)
assert_allclose(lut(x, y, dphi=1, grid=False),
_numdiff_2d(lambda x,y: lut(x,y,grid=False), x, y, dy=1),
rtol=1e-4, atol=1e-4)
assert_allclose(lut(x, y, dtheta=1, dphi=1, grid=False),
_numdiff_2d(lambda x,y: lut(x,y,grid=False), x, y, dx=1, dy=1, eps=1e-6),
rtol=1e-3, atol=1e-3)
def test_invalid_input_2(self):
data = np.dot(np.atleast_2d(90. - np.linspace(-80., 80., 18)).T,
np.atleast_2d(180. - np.abs(np.linspace(0., 350., 9)))).T
with assert_raises(ValueError) as exc_info:
lats = np.linspace(0, 170, 9) * np.pi / 180.
lons = np.linspace(0, 350, 18) * np.pi / 180.
RectSphereBivariateSpline(lats, lons, data)
assert "u should be between (0, pi)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
lats = np.linspace(10, 180, 9) * np.pi / 180.
lons = np.linspace(0, 350, 18) * np.pi / 180.
RectSphereBivariateSpline(lats, lons, data)
assert "u should be between (0, pi)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
lats = np.linspace(10, 170, 9) * np.pi / 180.
lons = np.linspace(-181, 10, 18) * np.pi / 180.
RectSphereBivariateSpline(lats, lons, data)
assert "v[0] should be between [-pi, pi)" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
lats = np.linspace(10, 170, 9) * np.pi / 180.
lons = np.linspace(-10, 360, 18) * np.pi / 180.
RectSphereBivariateSpline(lats, lons, data)
assert "v[-1] should be v[0] + 2pi or less" in str(exc_info.value)
with assert_raises(ValueError) as exc_info:
lats = np.linspace(10, 170, 9) * np.pi / 180.
lons = np.linspace(10, 350, 18) * np.pi / 180.
RectSphereBivariateSpline(lats, lons, data, s=-1)
assert "s should be positive" in str(exc_info.value)
def test_array_like_input(self):
y = linspace(0.01, 2 * pi - 0.01, 7)
x = linspace(0.01, pi - 0.01, 7)
z = array([[1, 2, 1, 2, 1, 2, 1], [1, 2, 1, 2, 1, 2, 1],
[1, 2, 3, 2, 1, 2, 1],
[1, 2, 2, 2, 1, 2, 1], [1, 2, 1, 2, 1, 2, 1],
[1, 2, 2, 2, 1, 2, 1],
[1, 2, 1, 2, 1, 2, 1]])
# np.array input
spl1 = RectSphereBivariateSpline(x, y, z)
# list input
spl2 = RectSphereBivariateSpline(x.tolist(), y.tolist(), z.tolist())
assert_array_almost_equal(spl1(x, y), spl2(x, y))
def test_negative_evaluation(self):
lats = np.array([25, 30, 35, 40, 45])
lons = np.array([-90, -85, -80, -75, 70])
mesh = np.meshgrid(lats, lons)
data = mesh[0] + mesh[1] # lon + lat value
lat_r = np.radians(lats)
lon_r = np.radians(lons)
interpolator = RectSphereBivariateSpline(lat_r, lon_r, data)
query_lat = np.radians(np.array([35, 37.5]))
query_lon = np.radians(np.array([-80, -77.5]))
data_interp = interpolator(query_lat, query_lon)
ans = np.array([[-45.0, -42.480862],
[-49.0625, -46.54315]])
assert_array_almost_equal(data_interp, ans)
def test_pole_continuity_gh_14591(self):
# regression test for https://github.com/scipy/scipy/issues/14591
        # with pole_continuity=(True, True), the internal work array size
# was too small, leading to a FITPACK data validation error.
# The reproducer in gh-14591 was using a NetCDF4 file with
# 361x507 arrays, so here we trivialize array sizes to a minimum
# which still demonstrates the issue.
u = np.arange(1, 10) * np.pi / 10
v = np.arange(1, 10) * np.pi / 10
r = np.zeros((9, 9))
for p in [(True, True), (True, False), (False, False)]:
RectSphereBivariateSpline(u, v, r, s=0, pole_continuity=p)
def _numdiff_2d(func, x, y, dx=0, dy=0, eps=1e-8):
if dx == 0 and dy == 0:
return func(x, y)
elif dx == 1 and dy == 0:
return (func(x + eps, y) - func(x - eps, y)) / (2*eps)
elif dx == 0 and dy == 1:
return (func(x, y + eps) - func(x, y - eps)) / (2*eps)
elif dx == 1 and dy == 1:
return (func(x + eps, y + eps) - func(x - eps, y + eps)
- func(x + eps, y - eps) + func(x - eps, y - eps)) / (2*eps)**2
else:
raise ValueError("invalid derivative order")
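The helper above approximates first and mixed partial derivatives with first-order central differences. A minimal standalone sketch (hypothetical names, not part of the test suite) showing the same `dx=1` stencil recovering an analytic partial derivative:

```python
import numpy as np

def central_dx(func, x, y, eps=1e-6):
    # same dx=1, dy=0 stencil as _numdiff_2d above
    return (func(x + eps, y) - func(x - eps, y)) / (2 * eps)

f = lambda x, y: np.sin(x) * np.cos(y)
approx = central_dx(f, 1.0, 2.0)
exact = np.cos(1.0) * np.cos(2.0)   # analytic d/dx of sin(x)*cos(y)
```

With `eps=1e-6` the truncation error of the central scheme is far below the `1e-4` tolerances used in the tests above.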
class Test_DerivedBivariateSpline:
"""Test the creation, usage, and attribute access of the (private)
_DerivedBivariateSpline class.
"""
def setup_method(self):
x = np.concatenate(list(zip(range(10), range(10))))
y = np.concatenate(list(zip(range(10), range(1, 11))))
z = np.concatenate((np.linspace(3, 1, 10), np.linspace(1, 3, 10)))
with suppress_warnings() as sup:
sup.record(UserWarning, "\nThe coefficients of the spline")
self.lut_lsq = LSQBivariateSpline(x, y, z,
linspace(0.5, 19.5, 4),
linspace(1.5, 20.5, 4),
eps=1e-2)
self.lut_smooth = SmoothBivariateSpline(x, y, z)
xx = linspace(0, 1, 20)
yy = xx + 1.0
zz = array([np.roll(z, i) for i in range(z.size)])
self.lut_rect = RectBivariateSpline(xx, yy, zz)
self.orders = list(itertools.product(range(3), range(3)))
def test_creation_from_LSQ(self):
for nux, nuy in self.orders:
lut_der = self.lut_lsq.partial_derivative(nux, nuy)
a = lut_der(3.5, 3.5, grid=False)
b = self.lut_lsq(3.5, 3.5, dx=nux, dy=nuy, grid=False)
assert_equal(a, b)
def test_creation_from_Smooth(self):
for nux, nuy in self.orders:
lut_der = self.lut_smooth.partial_derivative(nux, nuy)
a = lut_der(5.5, 5.5, grid=False)
b = self.lut_smooth(5.5, 5.5, dx=nux, dy=nuy, grid=False)
assert_equal(a, b)
def test_creation_from_Rect(self):
for nux, nuy in self.orders:
lut_der = self.lut_rect.partial_derivative(nux, nuy)
a = lut_der(0.5, 1.5, grid=False)
b = self.lut_rect(0.5, 1.5, dx=nux, dy=nuy, grid=False)
assert_equal(a, b)
def test_invalid_attribute_fp(self):
der = self.lut_rect.partial_derivative(1, 1)
with assert_raises(AttributeError):
der.fp
def test_invalid_attribute_get_residual(self):
der = self.lut_smooth.partial_derivative(1, 1)
with assert_raises(AttributeError):
der.get_residual()
| 58,667
| 42.329394
| 98
|
py
|
scipy
|
scipy-main/scipy/spatial/_plotutils.py
|
import numpy as np
from scipy._lib.decorator import decorator as _decorator
__all__ = ['delaunay_plot_2d', 'convex_hull_plot_2d', 'voronoi_plot_2d']
@_decorator
def _held_figure(func, obj, ax=None, **kw):
import matplotlib.pyplot as plt
if ax is None:
fig = plt.figure()
ax = fig.gca()
return func(obj, ax=ax, **kw)
# As of matplotlib 2.0, the "hold" mechanism is deprecated.
# When matplotlib 1.x is no longer supported, this check can be removed.
was_held = getattr(ax, 'ishold', lambda: True)()
if was_held:
return func(obj, ax=ax, **kw)
try:
ax.hold(True)
return func(obj, ax=ax, **kw)
finally:
ax.hold(was_held)
def _adjust_bounds(ax, points):
margin = 0.1 * np.ptp(points, axis=0)
xy_min = points.min(axis=0) - margin
xy_max = points.max(axis=0) + margin
ax.set_xlim(xy_min[0], xy_max[0])
ax.set_ylim(xy_min[1], xy_max[1])
@_held_figure
def delaunay_plot_2d(tri, ax=None):
"""
Plot the given Delaunay triangulation in 2-D
Parameters
----------
tri : scipy.spatial.Delaunay instance
Triangulation to plot
ax : matplotlib.axes.Axes instance, optional
Axes to plot on
Returns
-------
fig : matplotlib.figure.Figure instance
Figure for the plot
See Also
--------
Delaunay
matplotlib.pyplot.triplot
Notes
-----
Requires Matplotlib.
Examples
--------
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.spatial import Delaunay, delaunay_plot_2d
The Delaunay triangulation of a set of random points:
>>> rng = np.random.default_rng()
>>> points = rng.random((30, 2))
>>> tri = Delaunay(points)
Plot it:
>>> _ = delaunay_plot_2d(tri)
>>> plt.show()
"""
if tri.points.shape[1] != 2:
raise ValueError("Delaunay triangulation is not 2-D")
x, y = tri.points.T
ax.plot(x, y, 'o')
ax.triplot(x, y, tri.simplices.copy())
_adjust_bounds(ax, tri.points)
return ax.figure
@_held_figure
def convex_hull_plot_2d(hull, ax=None):
"""
Plot the given convex hull diagram in 2-D
Parameters
----------
hull : scipy.spatial.ConvexHull instance
Convex hull to plot
ax : matplotlib.axes.Axes instance, optional
Axes to plot on
Returns
-------
fig : matplotlib.figure.Figure instance
Figure for the plot
See Also
--------
ConvexHull
Notes
-----
Requires Matplotlib.
Examples
--------
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.spatial import ConvexHull, convex_hull_plot_2d
The convex hull of a random set of points:
>>> rng = np.random.default_rng()
>>> points = rng.random((30, 2))
>>> hull = ConvexHull(points)
Plot it:
>>> _ = convex_hull_plot_2d(hull)
>>> plt.show()
"""
from matplotlib.collections import LineCollection
if hull.points.shape[1] != 2:
raise ValueError("Convex hull is not 2-D")
ax.plot(hull.points[:, 0], hull.points[:, 1], 'o')
line_segments = [hull.points[simplex] for simplex in hull.simplices]
ax.add_collection(LineCollection(line_segments,
colors='k',
linestyle='solid'))
_adjust_bounds(ax, hull.points)
return ax.figure
@_held_figure
def voronoi_plot_2d(vor, ax=None, **kw):
"""
Plot the given Voronoi diagram in 2-D
Parameters
----------
vor : scipy.spatial.Voronoi instance
Diagram to plot
ax : matplotlib.axes.Axes instance, optional
Axes to plot on
show_points : bool, optional
Add the Voronoi points to the plot.
show_vertices : bool, optional
Add the Voronoi vertices to the plot.
line_colors : string, optional
Specifies the line color for polygon boundaries
line_width : float, optional
Specifies the line width for polygon boundaries
line_alpha : float, optional
Specifies the line alpha for polygon boundaries
point_size : float, optional
Specifies the size of points
Returns
-------
fig : matplotlib.figure.Figure instance
Figure for the plot
See Also
--------
Voronoi
Notes
-----
Requires Matplotlib.
Examples
--------
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.spatial import Voronoi, voronoi_plot_2d
Create a set of points for the example:
>>> rng = np.random.default_rng()
>>> points = rng.random((10,2))
Generate the Voronoi diagram for the points:
>>> vor = Voronoi(points)
Use `voronoi_plot_2d` to plot the diagram:
>>> fig = voronoi_plot_2d(vor)
Use `voronoi_plot_2d` to plot the diagram again, with some settings
customized:
>>> fig = voronoi_plot_2d(vor, show_vertices=False, line_colors='orange',
... line_width=2, line_alpha=0.6, point_size=2)
>>> plt.show()
"""
from matplotlib.collections import LineCollection
if vor.points.shape[1] != 2:
raise ValueError("Voronoi diagram is not 2-D")
if kw.get('show_points', True):
point_size = kw.get('point_size', None)
ax.plot(vor.points[:, 0], vor.points[:, 1], '.', markersize=point_size)
if kw.get('show_vertices', True):
ax.plot(vor.vertices[:, 0], vor.vertices[:, 1], 'o')
line_colors = kw.get('line_colors', 'k')
line_width = kw.get('line_width', 1.0)
line_alpha = kw.get('line_alpha', 1.0)
center = vor.points.mean(axis=0)
ptp_bound = np.ptp(vor.points, axis=0)
finite_segments = []
infinite_segments = []
for pointidx, simplex in zip(vor.ridge_points, vor.ridge_vertices):
simplex = np.asarray(simplex)
if np.all(simplex >= 0):
finite_segments.append(vor.vertices[simplex])
else:
i = simplex[simplex >= 0][0] # finite end Voronoi vertex
t = vor.points[pointidx[1]] - vor.points[pointidx[0]] # tangent
t /= np.linalg.norm(t)
n = np.array([-t[1], t[0]]) # normal
midpoint = vor.points[pointidx].mean(axis=0)
direction = np.sign(np.dot(midpoint - center, n)) * n
            if vor.furthest_site:
direction = -direction
far_point = vor.vertices[i] + direction * ptp_bound.max()
infinite_segments.append([vor.vertices[i], far_point])
ax.add_collection(LineCollection(finite_segments,
colors=line_colors,
lw=line_width,
alpha=line_alpha,
linestyle='solid'))
ax.add_collection(LineCollection(infinite_segments,
colors=line_colors,
lw=line_width,
alpha=line_alpha,
linestyle='dashed'))
_adjust_bounds(ax, vor.points)
return ax.figure
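For ridges with a vertex at infinity, `voronoi_plot_2d` extends a dashed segment along the normal of the line joining the two generating points, oriented away from the centroid of all generators. A small sketch of that direction computation with made-up coordinates (not taken from the function above):

```python
import numpy as np

# Three generators; consider the ridge between the first two, which has
# one vertex at infinity in the full Voronoi diagram.
points = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])
center = points.mean(axis=0)                 # centroid of all generators
t = points[1] - points[0]                    # tangent between the pair
t = t / np.linalg.norm(t)
n = np.array([-t[1], t[0]])                  # normal to the tangent
midpoint = points[[0, 1]].mean(axis=0)
direction = np.sign(np.dot(midpoint - center, n)) * n
# direction points away from the centroid, i.e. (0, -1) here
```

Scaling `direction` by `ptp_bound.max()` (as the function does) guarantees the dashed segment leaves the plotted bounding box.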
| 7,176
| 25.581481
| 79
|
py
|
scipy
|
scipy-main/scipy/spatial/_kdtree.py
|
# Copyright Anne M. Archibald 2008
# Released under the scipy license
import numpy as np
from ._ckdtree import cKDTree, cKDTreeNode
__all__ = ['minkowski_distance_p', 'minkowski_distance',
'distance_matrix',
'Rectangle', 'KDTree']
def minkowski_distance_p(x, y, p=2):
"""Compute the pth power of the L**p distance between two arrays.
For efficiency, this function computes the L**p distance but does
not extract the pth root. If `p` is 1 or infinity, this is equal to
the actual L**p distance.
The last dimensions of `x` and `y` must be the same length. Any
other dimensions must be compatible for broadcasting.
Parameters
----------
x : (..., K) array_like
Input array.
y : (..., K) array_like
Input array.
p : float, 1 <= p <= infinity
Which Minkowski p-norm to use.
Returns
-------
dist : ndarray
pth power of the distance between the input arrays.
Examples
--------
>>> from scipy.spatial import minkowski_distance_p
>>> minkowski_distance_p([[0, 0], [0, 0]], [[1, 1], [0, 1]])
array([2, 1])
"""
x = np.asarray(x)
y = np.asarray(y)
# Find smallest common datatype with float64 (return type of this
# function) - addresses #10262.
# Don't just cast to float64 for complex input case.
common_datatype = np.promote_types(np.promote_types(x.dtype, y.dtype),
'float64')
# Make sure x and y are NumPy arrays of correct datatype.
x = x.astype(common_datatype)
y = y.astype(common_datatype)
if p == np.inf:
return np.amax(np.abs(y-x), axis=-1)
elif p == 1:
return np.sum(np.abs(y-x), axis=-1)
else:
return np.sum(np.abs(y-x)**p, axis=-1)
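The promotion with float64 in `minkowski_distance_p` (the gh-10262 fix referenced in the comments) guards against overflow when small integer dtypes are passed in. A hedged sketch of the failure mode it avoids:

```python
import numpy as np

x = np.array([0, 0], dtype=np.int8)
y = np.array([100, 100], dtype=np.int8)

# Squaring the int8 differences directly would wrap around; promoting
# with float64 first (as minkowski_distance_p does) gives the true value.
common = np.promote_types(np.promote_types(x.dtype, y.dtype), 'float64')
d = np.sum(np.abs(y.astype(common) - x.astype(common)) ** 2, axis=-1)
```

For complex input, the same `promote_types` call picks a complex result type rather than discarding the imaginary part, which is why the code does not simply cast to float64.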
def minkowski_distance(x, y, p=2):
"""Compute the L**p distance between two arrays.
The last dimensions of `x` and `y` must be the same length. Any
other dimensions must be compatible for broadcasting.
Parameters
----------
x : (..., K) array_like
Input array.
y : (..., K) array_like
Input array.
p : float, 1 <= p <= infinity
Which Minkowski p-norm to use.
Returns
-------
dist : ndarray
Distance between the input arrays.
Examples
--------
>>> from scipy.spatial import minkowski_distance
>>> minkowski_distance([[0, 0], [0, 0]], [[1, 1], [0, 1]])
array([ 1.41421356, 1. ])
"""
x = np.asarray(x)
y = np.asarray(y)
if p == np.inf or p == 1:
return minkowski_distance_p(x, y, p)
else:
return minkowski_distance_p(x, y, p)**(1./p)
class Rectangle:
"""Hyperrectangle class.
Represents a Cartesian product of intervals.
"""
def __init__(self, maxes, mins):
"""Construct a hyperrectangle."""
self.maxes = np.maximum(maxes,mins).astype(float)
self.mins = np.minimum(maxes,mins).astype(float)
self.m, = self.maxes.shape
def __repr__(self):
return "<Rectangle %s>" % list(zip(self.mins, self.maxes))
def volume(self):
"""Total volume."""
return np.prod(self.maxes-self.mins)
def split(self, d, split):
"""Produce two hyperrectangles by splitting.
In general, if you need to compute maximum and minimum
distances to the children, it can be done more efficiently
by updating the maximum and minimum distances to the parent.
Parameters
----------
d : int
Axis to split hyperrectangle along.
split : float
Position along axis `d` to split at.
"""
mid = np.copy(self.maxes)
mid[d] = split
less = Rectangle(self.mins, mid)
mid = np.copy(self.mins)
mid[d] = split
greater = Rectangle(mid, self.maxes)
return less, greater
def min_distance_point(self, x, p=2.):
"""
Return the minimum distance between input and points in the
hyperrectangle.
Parameters
----------
x : array_like
Input.
p : float, optional
Input.
"""
return minkowski_distance(
0, np.maximum(0, np.maximum(self.mins-x, x-self.maxes)),
p
)
def max_distance_point(self, x, p=2.):
"""
Return the maximum distance between input and points in the hyperrectangle.
Parameters
----------
x : array_like
Input array.
p : float, optional
Input.
"""
return minkowski_distance(0, np.maximum(self.maxes-x, x-self.mins), p)
def min_distance_rectangle(self, other, p=2.):
"""
Compute the minimum distance between points in the two hyperrectangles.
Parameters
----------
other : hyperrectangle
Input.
p : float
Input.
"""
return minkowski_distance(
0,
np.maximum(0, np.maximum(self.mins-other.maxes,
other.mins-self.maxes)),
p
)
def max_distance_rectangle(self, other, p=2.):
"""
Compute the maximum distance between points in the two hyperrectangles.
Parameters
----------
other : hyperrectangle
Input.
p : float, optional
Input.
"""
return minkowski_distance(
0, np.maximum(self.maxes-other.mins, other.maxes-self.mins), p)
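# A minimal usage sketch of the Rectangle class above (it is exported from
# scipy.spatial; this assumes SciPy is installed): split the unit square
# along axis 0 and query distances from a point outside the rectangle.
#
# import numpy as np
# from scipy.spatial import Rectangle
#
# rect = Rectangle([1.0, 1.0], [0.0, 0.0])   # maxes, mins -> unit square
# less, greater = rect.split(0, 0.5)         # halve along the first axis
# print(less.volume(), greater.volume())     # 0.5 0.5
#
# x = [2.0, 0.5]                             # point outside the square
# print(rect.min_distance_point(x))          # 1.0 (closest face is x0 = 1)
# print(rect.max_distance_point(x))          # distance to the far corner

The same sketch as a standalone snippet:

```python
# Sketch: splitting a hyperrectangle and querying point distances.
# Assumes SciPy is installed; Rectangle is exported from scipy.spatial.
import numpy as np
from scipy.spatial import Rectangle

rect = Rectangle([1.0, 1.0], [0.0, 0.0])   # maxes, mins -> unit square
less, greater = rect.split(0, 0.5)         # halve along the first axis
print(less.volume(), greater.volume())     # 0.5 0.5

x = [2.0, 0.5]                             # point outside the square
print(rect.min_distance_point(x))          # 1.0 (closest face is x0 = 1)
print(rect.max_distance_point(x))          # distance to the far corner
```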
class KDTree(cKDTree):
"""kd-tree for quick nearest-neighbor lookup.
This class provides an index into a set of k-dimensional points
which can be used to rapidly look up the nearest neighbors of any
point.
Parameters
----------
data : array_like, shape (n,m)
The n data points of dimension m to be indexed. This array is
not copied unless this is necessary to produce a contiguous
array of doubles, and so modifying this data will result in
bogus results. The data are also copied if the kd-tree is built
with copy_data=True.
leafsize : positive int, optional
The number of points at which the algorithm switches over to
brute-force. Default: 10.
compact_nodes : bool, optional
If True, the kd-tree is built to shrink the hyperrectangles to
the actual data range. This usually gives a more compact tree that
is robust against degenerated input data and gives faster queries
at the expense of longer build time. Default: True.
copy_data : bool, optional
If True the data is always copied to protect the kd-tree against
data corruption. Default: False.
balanced_tree : bool, optional
If True, the median is used to split the hyperrectangles instead of
the midpoint. This usually gives a more compact tree and
faster queries at the expense of longer build time. Default: True.
boxsize : array_like or scalar, optional
        Apply an m-d toroidal topology to the KDTree. The topology is generated
by :math:`x_i + n_i L_i` where :math:`n_i` are integers and :math:`L_i`
is the boxsize along i-th dimension. The input data shall be wrapped
into :math:`[0, L_i)`. A ValueError is raised if any of the data is
outside of this bound.
Notes
-----
The algorithm used is described in Maneewongvatana and Mount 1999.
The general idea is that the kd-tree is a binary tree, each of whose
nodes represents an axis-aligned hyperrectangle. Each node specifies
an axis and splits the set of points based on whether their coordinate
along that axis is greater than or less than a particular value.
During construction, the axis and splitting point are chosen by the
"sliding midpoint" rule, which ensures that the cells do not all
become long and thin.
The tree can be queried for the r closest neighbors of any given point
(optionally returning only those within some maximum distance of the
point). It can also be queried, with a substantial gain in efficiency,
for the r approximate closest neighbors.
For large dimensions (20 is already large) do not expect this to run
significantly faster than brute force. High-dimensional nearest-neighbor
queries are a substantial open problem in computer science.
Attributes
----------
data : ndarray, shape (n,m)
The n data points of dimension m to be indexed. This array is
not copied unless this is necessary to produce a contiguous
array of doubles. The data are also copied if the kd-tree is built
with `copy_data=True`.
leafsize : positive int
The number of points at which the algorithm switches over to
brute-force.
m : int
The dimension of a single data-point.
n : int
The number of data points.
maxes : ndarray, shape (m,)
The maximum value in each dimension of the n data points.
mins : ndarray, shape (m,)
The minimum value in each dimension of the n data points.
size : int
The number of nodes in the tree.
"""
class node:
@staticmethod
def _create(ckdtree_node=None):
"""Create either an inner or leaf node, wrapping a cKDTreeNode instance"""
if ckdtree_node is None:
return KDTree.node(ckdtree_node)
elif ckdtree_node.split_dim == -1:
return KDTree.leafnode(ckdtree_node)
else:
return KDTree.innernode(ckdtree_node)
def __init__(self, ckdtree_node=None):
if ckdtree_node is None:
ckdtree_node = cKDTreeNode()
self._node = ckdtree_node
def __lt__(self, other):
return id(self) < id(other)
def __gt__(self, other):
return id(self) > id(other)
def __le__(self, other):
return id(self) <= id(other)
def __ge__(self, other):
return id(self) >= id(other)
def __eq__(self, other):
return id(self) == id(other)
class leafnode(node):
@property
def idx(self):
return self._node.indices
@property
def children(self):
return self._node.children
class innernode(node):
def __init__(self, ckdtreenode):
assert isinstance(ckdtreenode, cKDTreeNode)
super().__init__(ckdtreenode)
self.less = KDTree.node._create(ckdtreenode.lesser)
self.greater = KDTree.node._create(ckdtreenode.greater)
@property
def split_dim(self):
return self._node.split_dim
@property
def split(self):
return self._node.split
@property
def children(self):
return self._node.children
@property
def tree(self):
if not hasattr(self, "_tree"):
self._tree = KDTree.node._create(super().tree)
return self._tree
def __init__(self, data, leafsize=10, compact_nodes=True, copy_data=False,
balanced_tree=True, boxsize=None):
data = np.asarray(data)
if data.dtype.kind == 'c':
raise TypeError("KDTree does not work with complex data")
# Note KDTree has different default leafsize from cKDTree
super().__init__(data, leafsize, compact_nodes, copy_data,
balanced_tree, boxsize)
def query(
self, x, k=1, eps=0, p=2, distance_upper_bound=np.inf, workers=1):
r"""Query the kd-tree for nearest neighbors.
Parameters
----------
x : array_like, last dimension self.m
An array of points to query.
k : int or Sequence[int], optional
Either the number of nearest neighbors to return, or a list of the
k-th nearest neighbors to return, starting from 1.
eps : nonnegative float, optional
Return approximate nearest neighbors; the kth returned value
is guaranteed to be no further than (1+eps) times the
distance to the real kth nearest neighbor.
p : float, 1<=p<=infinity, optional
Which Minkowski p-norm to use.
1 is the sum-of-absolute-values distance ("Manhattan" distance).
2 is the usual Euclidean distance.
infinity is the maximum-coordinate-difference distance.
A large, finite p may cause a ValueError if overflow can occur.
distance_upper_bound : nonnegative float, optional
Return only neighbors within this distance. This is used to prune
tree searches, so if you are doing a series of nearest-neighbor
queries, it may help to supply the distance to the nearest neighbor
of the most recent point.
workers : int, optional
Number of workers to use for parallel processing. If -1 is given
all CPU threads are used. Default: 1.
.. versionadded:: 1.6.0
Returns
-------
d : float or array of floats
The distances to the nearest neighbors.
If ``x`` has shape ``tuple+(self.m,)``, then ``d`` has shape
``tuple+(k,)``.
When k == 1, the last dimension of the output is squeezed.
Missing neighbors are indicated with infinite distances.
Hits are sorted by distance (nearest first).
.. versionchanged:: 1.9.0
Previously if ``k=None``, then `d` was an object array of
shape ``tuple``, containing lists of distances. This behavior
has been removed, use `query_ball_point` instead.
i : integer or array of integers
The index of each neighbor in ``self.data``.
``i`` is the same shape as d.
Missing neighbors are indicated with ``self.n``.
Examples
--------
>>> import numpy as np
>>> from scipy.spatial import KDTree
>>> x, y = np.mgrid[0:5, 2:8]
>>> tree = KDTree(np.c_[x.ravel(), y.ravel()])
To query the nearest neighbours and return squeezed result, use
>>> dd, ii = tree.query([[0, 0], [2.2, 2.9]], k=1)
>>> print(dd, ii, sep='\n')
[2. 0.2236068]
[ 0 13]
To query the nearest neighbours and return unsqueezed result, use
>>> dd, ii = tree.query([[0, 0], [2.2, 2.9]], k=[1])
>>> print(dd, ii, sep='\n')
[[2. ]
[0.2236068]]
[[ 0]
[13]]
To query the second nearest neighbours and return unsqueezed result,
use
>>> dd, ii = tree.query([[0, 0], [2.2, 2.9]], k=[2])
>>> print(dd, ii, sep='\n')
[[2.23606798]
[0.80622577]]
[[ 6]
[19]]
To query the first and second nearest neighbours, use
>>> dd, ii = tree.query([[0, 0], [2.2, 2.9]], k=2)
>>> print(dd, ii, sep='\n')
[[2. 2.23606798]
[0.2236068 0.80622577]]
[[ 0 6]
[13 19]]
or, be more specific
>>> dd, ii = tree.query([[0, 0], [2.2, 2.9]], k=[1, 2])
>>> print(dd, ii, sep='\n')
[[2. 2.23606798]
[0.2236068 0.80622577]]
[[ 0 6]
[13 19]]
"""
x = np.asarray(x)
if x.dtype.kind == 'c':
raise TypeError("KDTree does not work with complex data")
if k is None:
raise ValueError("k must be an integer or a sequence of integers")
d, i = super().query(x, k, eps, p, distance_upper_bound, workers)
if isinstance(i, int):
i = np.intp(i)
return d, i
def query_ball_point(self, x, r, p=2., eps=0, workers=1,
return_sorted=None, return_length=False):
"""Find all points within distance r of point(s) x.
Parameters
----------
x : array_like, shape tuple + (self.m,)
The point or points to search for neighbors of.
r : array_like, float
The radius of points to return, must broadcast to the length of x.
p : float, optional
Which Minkowski p-norm to use. Should be in the range [1, inf].
A finite large p may cause a ValueError if overflow can occur.
eps : nonnegative float, optional
Approximate search. Branches of the tree are not explored if their
nearest points are further than ``r / (1 + eps)``, and branches are
added in bulk if their furthest points are nearer than
``r * (1 + eps)``.
workers : int, optional
Number of jobs to schedule for parallel processing. If -1 is given
all processors are used. Default: 1.
.. versionadded:: 1.6.0
return_sorted : bool, optional
            Sorts returned indices if True and does not sort them if False. If
            None, does not sort single point queries, but does sort
            multi-point queries, which was the behavior before this option
            was added.
.. versionadded:: 1.6.0
return_length : bool, optional
Return the number of points inside the radius instead of a list
of the indices.
.. versionadded:: 1.6.0
Returns
-------
results : list or array of lists
If `x` is a single point, returns a list of the indices of the
neighbors of `x`. If `x` is an array of points, returns an object
array of shape tuple containing lists of neighbors.
Notes
-----
If you have many points whose neighbors you want to find, you may save
substantial amounts of time by putting them in a KDTree and using
query_ball_tree.
Examples
--------
>>> import numpy as np
>>> from scipy import spatial
>>> x, y = np.mgrid[0:5, 0:5]
>>> points = np.c_[x.ravel(), y.ravel()]
>>> tree = spatial.KDTree(points)
>>> sorted(tree.query_ball_point([2, 0], 1))
[5, 10, 11, 15]
Query multiple points and plot the results:
>>> import matplotlib.pyplot as plt
>>> points = np.asarray(points)
>>> plt.plot(points[:,0], points[:,1], '.')
>>> for results in tree.query_ball_point(([2, 0], [3, 3]), 1):
... nearby_points = points[results]
... plt.plot(nearby_points[:,0], nearby_points[:,1], 'o')
>>> plt.margins(0.1, 0.1)
>>> plt.show()
"""
x = np.asarray(x)
if x.dtype.kind == 'c':
raise TypeError("KDTree does not work with complex data")
return super().query_ball_point(
x, r, p, eps, workers, return_sorted, return_length)
def query_ball_tree(self, other, r, p=2., eps=0):
"""
Find all pairs of points between `self` and `other` whose distance is
at most r.
Parameters
----------
other : KDTree instance
The tree containing points to search against.
r : float
The maximum distance, has to be positive.
p : float, optional
Which Minkowski norm to use. `p` has to meet the condition
``1 <= p <= infinity``.
eps : float, optional
Approximate search. Branches of the tree are not explored
if their nearest points are further than ``r/(1+eps)``, and
branches are added in bulk if their furthest points are nearer
than ``r * (1+eps)``. `eps` has to be non-negative.
Returns
-------
results : list of lists
For each element ``self.data[i]`` of this tree, ``results[i]`` is a
list of the indices of its neighbors in ``other.data``.
Examples
--------
You can search all pairs of points between two kd-trees within a distance:
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> from scipy.spatial import KDTree
>>> rng = np.random.default_rng()
>>> points1 = rng.random((15, 2))
>>> points2 = rng.random((15, 2))
>>> plt.figure(figsize=(6, 6))
>>> plt.plot(points1[:, 0], points1[:, 1], "xk", markersize=14)
>>> plt.plot(points2[:, 0], points2[:, 1], "og", markersize=14)
>>> kd_tree1 = KDTree(points1)
>>> kd_tree2 = KDTree(points2)
>>> indexes = kd_tree1.query_ball_tree(kd_tree2, r=0.2)
>>> for i in range(len(indexes)):
... for j in indexes[i]:
... plt.plot([points1[i, 0], points2[j, 0]],
... [points1[i, 1], points2[j, 1]], "-r")
>>> plt.show()
"""
return super().query_ball_tree(other, r, p, eps)
def query_pairs(self, r, p=2., eps=0, output_type='set'):
"""Find all pairs of points in `self` whose distance is at most r.
Parameters
----------
r : positive float
The maximum distance.
p : float, optional
Which Minkowski norm to use. `p` has to meet the condition
``1 <= p <= infinity``.
eps : float, optional
Approximate search. Branches of the tree are not explored
if their nearest points are further than ``r/(1+eps)``, and
branches are added in bulk if their furthest points are nearer
than ``r * (1+eps)``. `eps` has to be non-negative.
output_type : string, optional
Choose the output container, 'set' or 'ndarray'. Default: 'set'
.. versionadded:: 1.6.0
Returns
-------
results : set or ndarray
Set of pairs ``(i,j)``, with ``i < j``, for which the corresponding
            positions are close. If output_type is 'ndarray', an ndarray is
returned instead of a set.
Examples
--------
You can search all pairs of points in a kd-tree within a distance:
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> from scipy.spatial import KDTree
>>> rng = np.random.default_rng()
>>> points = rng.random((20, 2))
>>> plt.figure(figsize=(6, 6))
>>> plt.plot(points[:, 0], points[:, 1], "xk", markersize=14)
>>> kd_tree = KDTree(points)
>>> pairs = kd_tree.query_pairs(r=0.2)
>>> for (i, j) in pairs:
... plt.plot([points[i, 0], points[j, 0]],
... [points[i, 1], points[j, 1]], "-r")
>>> plt.show()
"""
return super().query_pairs(r, p, eps, output_type)
def count_neighbors(self, other, r, p=2., weights=None, cumulative=True):
"""Count how many nearby pairs can be formed.
        Count the number of pairs ``(x1,x2)`` that can be formed, with ``x1``
from ``self`` and ``x2`` drawn from ``other``, and where
``distance(x1, x2, p) <= r``.
        Data points in ``self`` and ``other`` are optionally weighted by the
        ``weights`` argument (see below).
This is adapted from the "two-point correlation" algorithm described by
Gray and Moore [1]_. See notes for further discussion.
Parameters
----------
other : KDTree
The other tree to draw points from, can be the same tree as self.
r : float or one-dimensional array of floats
The radius to produce a count for. Multiple radii are searched with
a single tree traversal.
            If the count is non-cumulative (``cumulative=False``), ``r`` defines
the edges of the bins, and must be non-decreasing.
p : float, optional
1<=p<=infinity.
Which Minkowski p-norm to use.
Default 2.0.
A finite large p may cause a ValueError if overflow can occur.
weights : tuple, array_like, or None, optional
If None, the pair-counting is unweighted.
If given as a tuple, weights[0] is the weights of points in
``self``, and weights[1] is the weights of points in ``other``;
either can be None to indicate the points are unweighted.
If given as an array_like, weights is the weights of points in
``self`` and ``other``. For this to make sense, ``self`` and
``other`` must be the same tree. If ``self`` and ``other`` are two
different trees, a ``ValueError`` is raised.
Default: None
.. versionadded:: 1.6.0
cumulative : bool, optional
Whether the returned counts are cumulative. When cumulative is set
to ``False`` the algorithm is optimized to work with a large number
of bins (>10) specified by ``r``. When ``cumulative`` is set to
True, the algorithm is optimized to work with a small number of
``r``. Default: True
.. versionadded:: 1.6.0
Returns
-------
result : scalar or 1-D array
The number of pairs. For unweighted counts, the result is integer.
For weighted counts, the result is float.
If cumulative is False, ``result[i]`` contains the counts with
``(-inf if i == 0 else r[i-1]) < R <= r[i]``
Notes
-----
Pair-counting is the basic operation used to calculate the two point
correlation functions from a data set composed of position of objects.
Two point correlation function measures the clustering of objects and
is widely used in cosmology to quantify the large scale structure
in our Universe, but it may be useful for data analysis in other fields
        where self-similar assembly of objects also occurs.
The Landy-Szalay estimator for the two point correlation function of
``D`` measures the clustering signal in ``D``. [2]_
For example, given the position of two sets of objects,
- objects ``D`` (data) contains the clustering signal, and
- objects ``R`` (random) that contains no signal,
.. math::
\\xi(r) = \\frac{<D, D> - 2 f <D, R> + f^2<R, R>}{f^2<R, R>},
where the brackets represents counting pairs between two data sets
in a finite bin around ``r`` (distance), corresponding to setting
`cumulative=False`, and ``f = float(len(D)) / float(len(R))`` is the
ratio between number of objects from data and random.
The algorithm implemented here is loosely based on the dual-tree
        algorithm described in [1]_. We switch between two different
        pair-cumulation schemes depending on the setting of ``cumulative``.
        The computing time of the method used for
        ``cumulative == False`` does not scale with the total number of bins.
The algorithm for ``cumulative == True`` scales linearly with the
number of bins, though it is slightly faster when only
1 or 2 bins are used. [5]_.
As an extension to the naive pair-counting,
weighted pair-counting counts the product of weights instead
of number of pairs.
Weighted pair-counting is used to estimate marked correlation functions
([3]_, section 2.2),
or to properly calculate the average of data per distance bin
(e.g. [4]_, section 2.1 on redshift).
.. [1] Gray and Moore,
"N-body problems in statistical learning",
Mining the sky, 2000,
https://arxiv.org/abs/astro-ph/0012333
.. [2] Landy and Szalay,
"Bias and variance of angular correlation functions",
The Astrophysical Journal, 1993,
http://adsabs.harvard.edu/abs/1993ApJ...412...64L
.. [3] Sheth, Connolly and Skibba,
"Marked correlations in galaxy formation models",
Arxiv e-print, 2005,
https://arxiv.org/abs/astro-ph/0511773
.. [4] Hawkins, et al.,
"The 2dF Galaxy Redshift Survey: correlation functions,
peculiar velocities and the matter density of the Universe",
Monthly Notices of the Royal Astronomical Society, 2002,
http://adsabs.harvard.edu/abs/2003MNRAS.346...78H
.. [5] https://github.com/scipy/scipy/pull/5647#issuecomment-168474926
Examples
--------
        You can count the number of neighbor pairs between two kd-trees within
        a distance:
>>> import numpy as np
>>> from scipy.spatial import KDTree
>>> rng = np.random.default_rng()
>>> points1 = rng.random((5, 2))
>>> points2 = rng.random((5, 2))
>>> kd_tree1 = KDTree(points1)
>>> kd_tree2 = KDTree(points2)
>>> kd_tree1.count_neighbors(kd_tree2, 0.2)
1
        This number is the same as the total number of pairs found by
`query_ball_tree`:
>>> indexes = kd_tree1.query_ball_tree(kd_tree2, r=0.2)
>>> sum([len(i) for i in indexes])
1
"""
return super().count_neighbors(other, r, p, weights, cumulative)
def sparse_distance_matrix(
self, other, max_distance, p=2., output_type='dok_matrix'):
"""Compute a sparse distance matrix.
Computes a distance matrix between two KDTrees, leaving as zero
any distance greater than max_distance.
Parameters
----------
other : KDTree
max_distance : positive float
p : float, 1<=p<=infinity
Which Minkowski p-norm to use.
A finite large p may cause a ValueError if overflow can occur.
output_type : string, optional
Which container to use for output data. Options: 'dok_matrix',
'coo_matrix', 'dict', or 'ndarray'. Default: 'dok_matrix'.
.. versionadded:: 1.6.0
Returns
-------
result : dok_matrix, coo_matrix, dict or ndarray
Sparse matrix representing the results in "dictionary of keys"
            format. If a dict is returned, the keys are (i,j) tuples of
            indices. If output_type is 'ndarray', a record array with fields
            'i', 'j', and 'v' is returned.
Examples
--------
You can compute a sparse distance matrix between two kd-trees:
>>> import numpy as np
>>> from scipy.spatial import KDTree
>>> rng = np.random.default_rng()
>>> points1 = rng.random((5, 2))
>>> points2 = rng.random((5, 2))
>>> kd_tree1 = KDTree(points1)
>>> kd_tree2 = KDTree(points2)
>>> sdm = kd_tree1.sparse_distance_matrix(kd_tree2, 0.3)
>>> sdm.toarray()
array([[0. , 0. , 0.12295571, 0. , 0. ],
[0. , 0. , 0. , 0. , 0. ],
[0.28942611, 0. , 0. , 0.2333084 , 0. ],
[0. , 0. , 0. , 0. , 0. ],
[0.24617575, 0.29571802, 0.26836782, 0. , 0. ]])
You can check distances above the `max_distance` are zeros:
>>> from scipy.spatial import distance_matrix
>>> distance_matrix(points1, points2)
array([[0.56906522, 0.39923701, 0.12295571, 0.8658745 , 0.79428925],
[0.37327919, 0.7225693 , 0.87665969, 0.32580855, 0.75679479],
[0.28942611, 0.30088013, 0.6395831 , 0.2333084 , 0.33630734],
[0.31994999, 0.72658602, 0.71124834, 0.55396483, 0.90785663],
[0.24617575, 0.29571802, 0.26836782, 0.57714465, 0.6473269 ]])
"""
return super().sparse_distance_matrix(
other, max_distance, p, output_type)
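A small sketch of the missing-neighbor convention documented in `query` above:
when fewer than ``k`` neighbors fall within ``distance_upper_bound``, the
empty slots are reported with infinite distance and index ``self.n``
(assumes SciPy is installed; the data values here are arbitrary).

```python
# Missing neighbors under distance_upper_bound are flagged with an inf
# distance and an index equal to tree.n, per the KDTree.query docstring.
import numpy as np
from scipy.spatial import KDTree

tree = KDTree(np.array([[0.0], [10.0]]))
d, i = tree.query([0.1], k=2, distance_upper_bound=1.0)
print(d)  # [0.1 inf] -- the second slot found no point within the bound
print(i)  # [0 2]     -- 2 == tree.n marks the missing neighbor
```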
def distance_matrix(x, y, p=2, threshold=1000000):
"""Compute the distance matrix.
Returns the matrix of all pair-wise distances.
Parameters
----------
x : (M, K) array_like
Matrix of M vectors in K dimensions.
y : (N, K) array_like
Matrix of N vectors in K dimensions.
p : float, 1 <= p <= infinity
Which Minkowski p-norm to use.
threshold : positive int
        If ``M * N * K`` > `threshold`, the algorithm uses a Python loop
        instead of large temporary arrays.
Returns
-------
result : (M, N) ndarray
Matrix containing the distance from every vector in `x` to every vector
in `y`.
Examples
--------
>>> from scipy.spatial import distance_matrix
>>> distance_matrix([[0,0],[0,1]], [[1,0],[1,1]])
array([[ 1. , 1.41421356],
[ 1.41421356, 1. ]])
"""
x = np.asarray(x)
m, k = x.shape
y = np.asarray(y)
n, kk = y.shape
if k != kk:
raise ValueError(f"x contains {k}-dimensional vectors but y contains "
f"{kk}-dimensional vectors")
if m*n*k <= threshold:
return minkowski_distance(x[:,np.newaxis,:],y[np.newaxis,:,:],p)
else:
result = np.empty((m,n),dtype=float) # FIXME: figure out the best dtype
if m < n:
for i in range(m):
result[i,:] = minkowski_distance(x[i],y,p)
else:
for j in range(n):
result[:,j] = minkowski_distance(x,y[j],p)
return result
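The two code paths in `distance_matrix` above (vectorized broadcast versus the
per-row Python loop) return the same values; a quick sketch forcing the loop
path with a tiny `threshold` (assumes SciPy is installed):

```python
# Both branches of distance_matrix agree; threshold=1 forces the loop path
# because M * N * K = 8 exceeds it for this 2x2 example in 2 dimensions.
import numpy as np
from scipy.spatial import distance_matrix

x = np.array([[0.0, 0.0], [0.0, 1.0]])
y = np.array([[1.0, 0.0], [1.0, 1.0]])
fast = distance_matrix(x, y)               # vectorized path
slow = distance_matrix(x, y, threshold=1)  # row-by-row loop path
print(np.allclose(fast, slow))             # True
```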
| 33,444
| 35.313789
| 86
|
py
|
scipy
|
scipy-main/scipy/spatial/_spherical_voronoi.py
|
"""
Spherical Voronoi Code
.. versionadded:: 0.18.0
"""
#
# Copyright (C) Tyler Reddy, Ross Hemsley, Edd Edmondson,
# Nikolai Nowaczyk, Joe Pitt-Francis, 2015.
#
# Distributed under the same BSD license as SciPy.
#
import numpy as np
import scipy
from . import _voronoi
from scipy.spatial import cKDTree
__all__ = ['SphericalVoronoi']
def calculate_solid_angles(R):
"""Calculates the solid angles of plane triangles. Implements the method of
Van Oosterom and Strackee [VanOosterom]_ with some modifications. Assumes
that input points have unit norm."""
# Original method uses a triple product `R1 . (R2 x R3)` for the numerator.
# This is equal to the determinant of the matrix [R1 R2 R3], which can be
# computed with better stability.
numerator = np.linalg.det(R)
denominator = 1 + (np.einsum('ij,ij->i', R[:, 0], R[:, 1]) +
np.einsum('ij,ij->i', R[:, 1], R[:, 2]) +
np.einsum('ij,ij->i', R[:, 2], R[:, 0]))
return np.abs(2 * np.arctan2(numerator, denominator))
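A standalone check of the Van Oosterom-Strackee formula used above: the unit
octant triangle (vertices ``e1, e2, e3``) covers one eighth of the sphere, so
its solid angle should come out as ``pi/2``. This repeats the computation of
`calculate_solid_angles` inline, since that helper is module-private.

```python
# Van Oosterom-Strackee solid angle of one spherical triangle, shape (1, 3, 3).
import numpy as np

R = np.eye(3)[np.newaxis, :, :]          # vertices e1, e2, e3, all unit norm
numerator = np.linalg.det(R)             # equals the triple product R1.(R2xR3)
denominator = 1 + (np.einsum('ij,ij->i', R[:, 0], R[:, 1]) +
                   np.einsum('ij,ij->i', R[:, 1], R[:, 2]) +
                   np.einsum('ij,ij->i', R[:, 2], R[:, 0]))
omega = np.abs(2 * np.arctan2(numerator, denominator))
print(omega)  # [1.57079633], i.e. pi/2 = (4*pi)/8
```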
class SphericalVoronoi:
""" Voronoi diagrams on the surface of a sphere.
.. versionadded:: 0.18.0
Parameters
----------
points : ndarray of floats, shape (npoints, ndim)
Coordinates of points from which to construct a spherical
Voronoi diagram.
radius : float, optional
Radius of the sphere (Default: 1)
center : ndarray of floats, shape (ndim,)
Center of sphere (Default: origin)
threshold : float
Threshold for detecting duplicate points and
mismatches between points and sphere parameters.
(Default: 1e-06)
Attributes
----------
points : double array of shape (npoints, ndim)
the points in `ndim` dimensions to generate the Voronoi diagram from
radius : double
radius of the sphere
center : double array of shape (ndim,)
center of the sphere
vertices : double array of shape (nvertices, ndim)
Voronoi vertices corresponding to points
regions : list of list of integers of shape (npoints, _ )
the n-th entry is a list consisting of the indices
of the vertices belonging to the n-th point in points
Methods
-------
calculate_areas
Calculates the areas of the Voronoi regions. For 2D point sets, the
regions are circular arcs. The sum of the areas is `2 * pi * radius`.
For 3D point sets, the regions are spherical polygons. The sum of the
areas is `4 * pi * radius**2`.
Raises
------
ValueError
If there are duplicates in `points`.
If the provided `radius` is not consistent with `points`.
Notes
-----
The spherical Voronoi diagram algorithm proceeds as follows. The Convex
Hull of the input points (generators) is calculated, and is equivalent to
their Delaunay triangulation on the surface of the sphere [Caroli]_.
The Convex Hull neighbour information is then used to
order the Voronoi region vertices around each generator. The latter
approach is substantially less sensitive to floating point issues than
angle-based methods of Voronoi region vertex sorting.
Empirical assessment of spherical Voronoi algorithm performance suggests
quadratic time complexity (loglinear is optimal, but algorithms are more
challenging to implement).
References
----------
.. [Caroli] Caroli et al. Robust and Efficient Delaunay triangulations of
points on or close to a sphere. Research Report RR-7004, 2009.
.. [VanOosterom] Van Oosterom and Strackee. The solid angle of a plane
triangle. IEEE Transactions on Biomedical Engineering,
2, 1983, pp 125--126.
See Also
--------
Voronoi : Conventional Voronoi diagrams in N dimensions.
Examples
--------
Do some imports and take some points on a cube:
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.spatial import SphericalVoronoi, geometric_slerp
>>> from mpl_toolkits.mplot3d import proj3d
>>> # set input data
>>> points = np.array([[0, 0, 1], [0, 0, -1], [1, 0, 0],
... [0, 1, 0], [0, -1, 0], [-1, 0, 0], ])
Calculate the spherical Voronoi diagram:
>>> radius = 1
>>> center = np.array([0, 0, 0])
>>> sv = SphericalVoronoi(points, radius, center)
Generate plot:
>>> # sort vertices (optional, helpful for plotting)
>>> sv.sort_vertices_of_regions()
>>> t_vals = np.linspace(0, 1, 2000)
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111, projection='3d')
>>> # plot the unit sphere for reference (optional)
>>> u = np.linspace(0, 2 * np.pi, 100)
>>> v = np.linspace(0, np.pi, 100)
>>> x = np.outer(np.cos(u), np.sin(v))
>>> y = np.outer(np.sin(u), np.sin(v))
>>> z = np.outer(np.ones(np.size(u)), np.cos(v))
>>> ax.plot_surface(x, y, z, color='y', alpha=0.1)
>>> # plot generator points
>>> ax.scatter(points[:, 0], points[:, 1], points[:, 2], c='b')
>>> # plot Voronoi vertices
>>> ax.scatter(sv.vertices[:, 0], sv.vertices[:, 1], sv.vertices[:, 2],
... c='g')
>>> # indicate Voronoi regions (as Euclidean polygons)
>>> for region in sv.regions:
... n = len(region)
... for i in range(n):
... start = sv.vertices[region][i]
... end = sv.vertices[region][(i + 1) % n]
... result = geometric_slerp(start, end, t_vals)
... ax.plot(result[..., 0],
... result[..., 1],
... result[..., 2],
... c='k')
>>> ax.azim = 10
>>> ax.elev = 40
>>> _ = ax.set_xticks([])
>>> _ = ax.set_yticks([])
>>> _ = ax.set_zticks([])
>>> fig.set_size_inches(4, 4)
>>> plt.show()
"""
def __init__(self, points, radius=1, center=None, threshold=1e-06):
if radius is None:
raise ValueError('`radius` is `None`. '
'Please provide a floating point number '
'(i.e. `radius=1`).')
self.radius = float(radius)
self.points = np.array(points).astype(np.double)
self._dim = self.points.shape[1]
if center is None:
self.center = np.zeros(self._dim)
else:
self.center = np.array(center, dtype=float)
# test degenerate input
self._rank = np.linalg.matrix_rank(self.points - self.points[0],
tol=threshold * self.radius)
if self._rank < self._dim:
raise ValueError(f"Rank of input points must be at least {self._dim}")
if cKDTree(self.points).query_pairs(threshold * self.radius):
raise ValueError("Duplicate generators present.")
radii = np.linalg.norm(self.points - self.center, axis=1)
max_discrepancy = np.abs(radii - self.radius).max()
if max_discrepancy >= threshold * self.radius:
raise ValueError("Radius inconsistent with generators.")
self._calc_vertices_regions()
def _calc_vertices_regions(self):
"""
Calculates the Voronoi vertices and regions of the generators stored
in self.points. The vertices will be stored in self.vertices and the
regions in self.regions.
This algorithm was discussed at PyData London 2015 by
Tyler Reddy, Ross Hemsley and Nikolai Nowaczyk
"""
# get Convex Hull
conv = scipy.spatial.ConvexHull(self.points)
# get circumcenters of Convex Hull triangles from facet equations
# for 3D input circumcenters will have shape: (2N-4, 3)
self.vertices = self.radius * conv.equations[:, :-1] + self.center
self._simplices = conv.simplices
# calculate regions from triangulation
# for 3D input simplex_indices will have shape: (2N-4,)
simplex_indices = np.arange(len(self._simplices))
# for 3D input tri_indices will have shape: (6N-12,)
tri_indices = np.column_stack([simplex_indices] * self._dim).ravel()
# for 3D input point_indices will have shape: (6N-12,)
point_indices = self._simplices.ravel()
# for 3D input indices will have shape: (6N-12,)
indices = np.argsort(point_indices, kind='mergesort')
# for 3D input flattened_groups will have shape: (6N-12,)
flattened_groups = tri_indices[indices].astype(np.intp)
# intervals will have shape: (N+1,)
intervals = np.cumsum(np.bincount(point_indices + 1))
# split flattened groups to get nested list of unsorted regions
groups = [list(flattened_groups[intervals[i]:intervals[i + 1]])
for i in range(len(intervals) - 1)]
self.regions = groups
def sort_vertices_of_regions(self):
"""Sort indices of the vertices to be (counter-)clockwise ordered.
Raises
------
TypeError
If the points are not three-dimensional.
Notes
-----
For each region in regions, it sorts the indices of the Voronoi
vertices such that the resulting points are in a clockwise or
counterclockwise order around the generator point.
This is done as follows: Recall that the n-th region in regions
surrounds the n-th generator in points and that the k-th
Voronoi vertex in vertices is the circumcenter of the k-th triangle
in self._simplices. For each region n, we choose the first triangle
(=Voronoi vertex) in self._simplices and a vertex of that triangle
not equal to the center n. These determine a unique neighbor of that
triangle, which is then chosen as the second triangle. The second
triangle will have a unique vertex not equal to the current vertex or
the center. This determines a unique neighbor of the second triangle,
which is then chosen as the third triangle and so forth. We proceed
through all the triangles (=Voronoi vertices) belonging to the
generator in points and obtain a sorted version of the vertices
of its surrounding region.
"""
if self._dim != 3:
raise TypeError("Only supported for three-dimensional point sets")
_voronoi.sort_vertices_of_regions(self._simplices, self.regions)
def _calculate_areas_3d(self):
self.sort_vertices_of_regions()
sizes = [len(region) for region in self.regions]
csizes = np.cumsum(sizes)
num_regions = csizes[-1]
# We create a set of triangles consisting of one point and two Voronoi
# vertices. The vertices of each triangle are adjacent in the sorted
# regions list.
point_indices = [i for i, size in enumerate(sizes)
for j in range(size)]
nbrs1 = np.array([r for region in self.regions for r in region])
# The calculation of nbrs2 is a vectorized version of:
# np.array([r for region in self.regions for r in np.roll(region, 1)])
nbrs2 = np.roll(nbrs1, 1)
indices = np.roll(csizes, 1)
indices[0] = 0
nbrs2[indices] = nbrs1[csizes - 1]
# Normalize points and vertices.
pnormalized = (self.points - self.center) / self.radius
vnormalized = (self.vertices - self.center) / self.radius
# Create the complete set of triangles and calculate their solid angles
triangles = np.hstack([pnormalized[point_indices],
vnormalized[nbrs1],
vnormalized[nbrs2]
]).reshape((num_regions, 3, 3))
triangle_solid_angles = calculate_solid_angles(triangles)
# Sum the solid angles of the triangles in each region
solid_angles = np.cumsum(triangle_solid_angles)[csizes - 1]
solid_angles[1:] -= solid_angles[:-1]
# Get polygon areas using A = omega * r**2
return solid_angles * self.radius**2
def _calculate_areas_2d(self):
# Find start and end points of arcs
arcs = self.points[self._simplices] - self.center
# Calculate the angle subtended by arcs
cosine = np.einsum('ij,ij->i', arcs[:, 0], arcs[:, 1])
sine = np.abs(np.linalg.det(arcs))
theta = np.arctan2(sine, cosine)
# Get areas using A = r * theta
areas = self.radius * theta
# Correct arcs which go the wrong way (single-hemisphere inputs)
signs = np.sign(np.einsum('ij,ij->i', arcs[:, 0],
self.vertices - self.center))
indices = np.where(signs < 0)
areas[indices] = 2 * np.pi * self.radius - areas[indices]
return areas
def calculate_areas(self):
"""Calculates the areas of the Voronoi regions.
For 2D point sets, the regions are circular arcs. The sum of the areas
is `2 * pi * radius`.
For 3D point sets, the regions are spherical polygons. The sum of the
areas is `4 * pi * radius**2`.
.. versionadded:: 1.5.0
Returns
-------
areas : double array of shape (npoints,)
The areas of the Voronoi regions.
"""
if self._dim == 2:
return self._calculate_areas_2d()
elif self._dim == 3:
return self._calculate_areas_3d()
else:
raise TypeError("Only supported for 2D and 3D point sets")
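The 2-D and 3-D branches above imply simple conservation checks: the region areas must sum to the full circle circumference (`2 * pi * radius`) or sphere surface area (`4 * pi * radius**2`). A minimal sketch, using a hypothetical octahedron input whose six Voronoi regions are congruent by symmetry:

```python
import numpy as np
from scipy.spatial import SphericalVoronoi

# Octahedron vertices on the unit sphere (hypothetical example data).
points = np.array([[1, 0, 0], [-1, 0, 0],
                   [0, 1, 0], [0, -1, 0],
                   [0, 0, 1], [0, 0, -1]], dtype=float)
sv = SphericalVoronoi(points)
areas = sv.calculate_areas()

# The six congruent regions partition the sphere: the sum is 4*pi*r**2
# and each region covers one sixth of the surface.
print(np.allclose(areas.sum(), 4 * np.pi))   # True
print(np.allclose(areas, 4 * np.pi / 6))     # True
```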
| 13,564
| 38.548105
| 82
|
py
|
scipy
|
scipy-main/scipy/spatial/setup.py
|
from os.path import join, dirname
import glob
def pre_build_hook(build_ext, ext):
from scipy._build_utils.compiler_helper import (set_cxx_flags_hook,
try_add_flag)
cc = build_ext._cxx_compiler
args = ext.extra_compile_args
set_cxx_flags_hook(build_ext, ext)
if cc.compiler_type == 'msvc':
# Ignore "structured exceptions" which are non-standard MSVC extensions
args.append('/EHsc')
else:
# Don't export library symbols
try_add_flag(args, cc, '-fvisibility=hidden')
def configuration(parent_package='', top_path=None):
from numpy.distutils.misc_util import Configuration, get_numpy_include_dirs
from numpy.distutils.system_info import get_info
from scipy._build_utils import combine_dict, uses_blas64, numpy_nodepr_api
from scipy._build_utils.compiler_helper import set_cxx_flags_hook
from distutils.sysconfig import get_python_inc
import pybind11
config = Configuration('spatial', parent_package, top_path)
config.add_data_dir('tests')
# spatial.transform
config.add_subpackage('transform')
# qhull
qhull_src = sorted(glob.glob(join(dirname(__file__), 'qhull_src',
'src', '*.c')))
inc_dirs = [get_python_inc()]
if inc_dirs[0] != get_python_inc(plat_specific=1):
inc_dirs.append(get_python_inc(plat_specific=1))
inc_dirs.append(get_numpy_include_dirs())
inc_dirs.append(join(dirname(dirname(__file__)), '_lib'))
inc_dirs.append(join(dirname(dirname(__file__)), '_build_utils', 'src'))
if uses_blas64():
lapack_opt = get_info('lapack_ilp64_opt')
else:
lapack_opt = get_info('lapack_opt')
cfg = combine_dict(lapack_opt, include_dirs=inc_dirs)
config.add_extension('_qhull',
sources=['_qhull.c', 'qhull_misc.c'] + qhull_src,
**cfg)
# cKDTree
ckdtree_src = ['query.cxx',
'build.cxx',
'query_pairs.cxx',
'count_neighbors.cxx',
'query_ball_point.cxx',
'query_ball_tree.cxx',
'sparse_distances.cxx']
ckdtree_src = [join('ckdtree', 'src', x) for x in ckdtree_src]
ckdtree_headers = ['ckdtree_decl.h',
'coo_entries.h',
'distance_base.h',
'distance.h',
'ordered_pair.h',
'rectangle.h']
ckdtree_headers = [join('ckdtree', 'src', x) for x in ckdtree_headers]
ckdtree_dep = ['_ckdtree.cxx'] + ckdtree_headers + ckdtree_src
ext = config.add_extension('_ckdtree',
sources=['_ckdtree.cxx'] + ckdtree_src,
depends=ckdtree_dep,
include_dirs=inc_dirs + [join('ckdtree', 'src')])
ext._pre_build_hook = set_cxx_flags_hook
# _distance_wrap
config.add_extension('_distance_wrap',
sources=[join('src', 'distance_wrap.c')],
depends=[join('src', 'distance_impl.h')],
include_dirs=[
get_numpy_include_dirs(),
join(dirname(dirname(__file__)), '_lib')],
**numpy_nodepr_api)
distance_pybind_includes = [
pybind11.get_include(True),
pybind11.get_include(False),
get_numpy_include_dirs()]
ext = config.add_extension('_distance_pybind',
sources=[join('src', 'distance_pybind.cpp')],
depends=[join('src', 'function_ref.h'),
join('src', 'views.h'),
join('src', 'distance_metrics.h')],
include_dirs=distance_pybind_includes,
language='c++',
**numpy_nodepr_api)
ext._pre_build_hook = pre_build_hook
config.add_extension('_voronoi',
sources=['_voronoi.c'])
config.add_extension('_hausdorff',
sources=['_hausdorff.c'])
# Add license files
config.add_data_files('qhull_src/COPYING.txt')
# Type stubs
config.add_data_files('*.pyi')
return config
if __name__ == '__main__':
from numpy.distutils.core import setup
setup(**configuration(top_path='').todict())
| 4,502
| 35.314516
| 79
|
py
|
scipy
|
scipy-main/scipy/spatial/kdtree.py
|
# This file is not meant for public use and will be removed in SciPy v2.0.0.
# Use the `scipy.spatial` namespace for importing the functions
# included below.
import warnings
from . import _kdtree
__all__ = [ # noqa: F822
'KDTree',
'Rectangle',
'cKDTree',
'cKDTreeNode',
'distance_matrix',
'minkowski_distance',
'minkowski_distance_p',
]
def __dir__():
return __all__
def __getattr__(name):
if name not in __all__:
raise AttributeError(
"scipy.spatial.kdtree is deprecated and has no attribute "
f"{name}. Try looking in scipy.spatial instead.")
warnings.warn(f"Please use `{name}` from the `scipy.spatial` namespace, "
"the `scipy.spatial.kdtree` namespace is deprecated.",
category=DeprecationWarning, stacklevel=2)
return getattr(_kdtree, name)
| 870
| 23.885714
| 77
|
py
|
scipy
|
scipy-main/scipy/spatial/qhull.py
|
# This file is not meant for public use and will be removed in SciPy v2.0.0.
# Use the `scipy.spatial` namespace for importing the functions
# included below.
import warnings
from . import _qhull
__all__ = [ # noqa: F822
'ConvexHull',
'Delaunay',
'HalfspaceIntersection',
'QhullError',
'Voronoi',
'os',
'sys',
'tempfile',
'threading',
'tsearch',
]
def __dir__():
return __all__
def __getattr__(name):
if name not in __all__:
raise AttributeError(
"scipy.spatial.qhull is deprecated and has no attribute "
f"{name}. Try looking in scipy.spatial instead.")
warnings.warn(f"Please use `{name}` from the `scipy.spatial` namespace, "
"the `scipy.spatial.qhull` namespace is deprecated.",
category=DeprecationWarning, stacklevel=2)
return getattr(_qhull, name)
| 889
| 22.421053
| 77
|
py
|
scipy
|
scipy-main/scipy/spatial/_procrustes.py
|
"""
This module provides functions to perform full Procrustes analysis.
This code was originally written by Justin Kuczynski and ported over from
scikit-bio by Yoshiki Vazquez-Baeza.
"""
import numpy as np
from scipy.linalg import orthogonal_procrustes
__all__ = ['procrustes']
def procrustes(data1, data2):
r"""Procrustes analysis, a similarity test for two data sets.
Each input matrix is a set of points or vectors (the rows of the matrix).
The dimension of the space is the number of columns of each matrix. Given
two identically sized matrices, procrustes standardizes both such that:
- :math:`tr(AA^{T}) = 1`.
- Both sets of points are centered around the origin.
Procrustes ([1]_, [2]_) then applies the optimal transform to the second
matrix (including scaling/dilation, rotations, and reflections) to minimize
:math:`M^{2}=\sum(data1-data2)^{2}`, or the sum of the squares of the
pointwise differences between the two input datasets.
This function was not designed to handle datasets with different numbers of
datapoints (rows). If two data sets have different dimensionality
(different number of columns), simply add columns of zeros to the smaller
of the two.
Parameters
----------
data1 : array_like
        Matrix of n rows representing points in k (columns) space. `data1`
        is the reference data; after it is standardized, the data from
        `data2` will be transformed to fit the pattern in `data1` (must
        have >1 unique points).
data2 : array_like
n rows of data in k space to be fit to `data1`. Must be the same
shape ``(numrows, numcols)`` as data1 (must have >1 unique points).
Returns
-------
mtx1 : array_like
A standardized version of `data1`.
mtx2 : array_like
The orientation of `data2` that best fits `data1`. Centered, but not
necessarily :math:`tr(AA^{T}) = 1`.
disparity : float
:math:`M^{2}` as defined above.
Raises
------
ValueError
If the input arrays are not two-dimensional.
If the shape of the input arrays is different.
If the input arrays have zero columns or zero rows.
See Also
--------
scipy.linalg.orthogonal_procrustes
scipy.spatial.distance.directed_hausdorff : Another similarity test
for two data sets
Notes
-----
- The disparity should not depend on the order of the input matrices, but
the output matrices will, as only the first output matrix is guaranteed
to be scaled such that :math:`tr(AA^{T}) = 1`.
    - Duplicate data points are generally ok; duplicating a data point will
      increase its effect on the Procrustes fit.
- The disparity scales as the number of points per input matrix.
References
----------
.. [1] Krzanowski, W. J. (2000). "Principles of Multivariate analysis".
.. [2] Gower, J. C. (1975). "Generalized procrustes analysis".
Examples
--------
>>> import numpy as np
>>> from scipy.spatial import procrustes
The matrix ``b`` is a rotated, shifted, scaled and mirrored version of
``a`` here:
>>> a = np.array([[1, 3], [1, 2], [1, 1], [2, 1]], 'd')
>>> b = np.array([[4, -2], [4, -4], [4, -6], [2, -6]], 'd')
>>> mtx1, mtx2, disparity = procrustes(a, b)
>>> round(disparity)
    0
"""
mtx1 = np.array(data1, dtype=np.double, copy=True)
mtx2 = np.array(data2, dtype=np.double, copy=True)
if mtx1.ndim != 2 or mtx2.ndim != 2:
raise ValueError("Input matrices must be two-dimensional")
if mtx1.shape != mtx2.shape:
raise ValueError("Input matrices must be of same shape")
if mtx1.size == 0:
raise ValueError("Input matrices must be >0 rows and >0 cols")
# translate all the data to the origin
mtx1 -= np.mean(mtx1, 0)
mtx2 -= np.mean(mtx2, 0)
norm1 = np.linalg.norm(mtx1)
norm2 = np.linalg.norm(mtx2)
if norm1 == 0 or norm2 == 0:
raise ValueError("Input matrices must contain >1 unique points")
# change scaling of data (in rows) such that trace(mtx*mtx') = 1
mtx1 /= norm1
mtx2 /= norm2
# transform mtx2 to minimize disparity
R, s = orthogonal_procrustes(mtx1, mtx2)
mtx2 = np.dot(mtx2, R.T) * s
# measure the dissimilarity between the two datasets
disparity = np.sum(np.square(mtx1 - mtx2))
return mtx1, mtx2, disparity
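A quick sanity check of the steps above (centering, unit Frobenius norm, optimal orthogonal transform), using hypothetical data: any similarity transform of the reference, including a reflection, should give near-zero disparity.

```python
import numpy as np
from scipy.spatial import procrustes

a = np.array([[1.0, 3.0], [1.0, 2.0], [1.0, 1.0], [2.0, 1.0]])
# Mirrored (columns swapped), scaled, and shifted copy of `a`.
b = 2.0 * a[:, ::-1] + 5.0
mtx1, mtx2, disparity = procrustes(a, b)

# The standardized reference satisfies tr(AA^T) = 1.
print(np.isclose(np.trace(mtx1 @ mtx1.T), 1.0))  # True
# Reflection, scaling, and translation are all removed by the fit.
print(disparity < 1e-12)                          # True
```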
| 4,427
| 32.293233
| 79
|
py
|
scipy
|
scipy-main/scipy/spatial/ckdtree.py
|
# This file is not meant for public use and will be removed in SciPy v2.0.0.
# Use the `scipy.spatial` namespace for importing the functions
# included below.
import warnings
from . import _ckdtree
__all__ = [ # noqa: F822
'cKDTree',
'cKDTreeNode',
'coo_entries',
'operator',
'ordered_pairs',
'os',
'scipy',
'threading',
]
def __dir__():
return __all__
def __getattr__(name):
if name not in __all__:
raise AttributeError(
"scipy.spatial.ckdtree is deprecated and has no attribute "
f"{name}. Try looking in scipy.spatial instead.")
warnings.warn(f"Please use `{name}` from the `scipy.spatial` namespace, "
"the `scipy.spatial.ckdtree` namespace is deprecated.",
category=DeprecationWarning, stacklevel=2)
return getattr(_ckdtree, name)
| 862
| 22.972222
| 77
|
py
|
scipy
|
scipy-main/scipy/spatial/_geometric_slerp.py
|
from __future__ import annotations
__all__ = ['geometric_slerp']
import warnings
from typing import TYPE_CHECKING
import numpy as np
from scipy.spatial.distance import euclidean
if TYPE_CHECKING:
import numpy.typing as npt
def _geometric_slerp(start, end, t):
# create an orthogonal basis using QR decomposition
basis = np.vstack([start, end])
Q, R = np.linalg.qr(basis.T)
signs = 2 * (np.diag(R) >= 0) - 1
Q = Q.T * signs.T[:, np.newaxis]
R = R.T * signs.T[:, np.newaxis]
# calculate the angle between `start` and `end`
c = np.dot(start, end)
s = np.linalg.det(R)
omega = np.arctan2(s, c)
# interpolate
start, end = Q
s = np.sin(t * omega)
c = np.cos(t * omega)
return start * c[:, np.newaxis] + end * s[:, np.newaxis]
def geometric_slerp(
start: npt.ArrayLike,
end: npt.ArrayLike,
t: npt.ArrayLike,
tol: float = 1e-7,
) -> np.ndarray:
"""
Geometric spherical linear interpolation.
The interpolation occurs along a unit-radius
great circle arc in arbitrary dimensional space.
Parameters
----------
start : (n_dimensions, ) array-like
Single n-dimensional input coordinate in a 1-D array-like
object. `n` must be greater than 1.
end : (n_dimensions, ) array-like
Single n-dimensional input coordinate in a 1-D array-like
object. `n` must be greater than 1.
t : float or (n_points,) 1D array-like
A float or 1D array-like of doubles representing interpolation
parameters, with values required in the inclusive interval
between 0 and 1. A common approach is to generate the array
with ``np.linspace(0, 1, n_pts)`` for linearly spaced points.
Ascending, descending, and scrambled orders are permitted.
tol : float
The absolute tolerance for determining if the start and end
coordinates are antipodes.
Returns
-------
result : (t.size, D)
An array of doubles containing the interpolated
spherical path and including start and
end when 0 and 1 t are used. The
interpolated values should correspond to the
same sort order provided in the t array. The result
may be 1-dimensional if ``t`` is a float.
Raises
------
ValueError
If ``start`` and ``end`` are antipodes, not on the
unit n-sphere, or for a variety of degenerate conditions.
See Also
--------
scipy.spatial.transform.Slerp : 3-D Slerp that works with quaternions
Notes
-----
The implementation is based on the mathematical formula provided in [1]_,
and the first known presentation of this algorithm, derived from study of
4-D geometry, is credited to Glenn Davis in a footnote of the original
quaternion Slerp publication by Ken Shoemake [2]_.
.. versionadded:: 1.5.0
References
----------
.. [1] https://en.wikipedia.org/wiki/Slerp#Geometric_Slerp
.. [2] Ken Shoemake (1985) Animating rotation with quaternion curves.
ACM SIGGRAPH Computer Graphics, 19(3): 245-254.
Examples
--------
Interpolate four linearly-spaced values on the circumference of
a circle spanning 90 degrees:
>>> import numpy as np
>>> from scipy.spatial import geometric_slerp
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)
>>> start = np.array([1, 0])
>>> end = np.array([0, 1])
>>> t_vals = np.linspace(0, 1, 4)
>>> result = geometric_slerp(start,
... end,
... t_vals)
The interpolated results should be at 30 degree intervals
recognizable on the unit circle:
>>> ax.scatter(result[...,0], result[...,1], c='k')
>>> circle = plt.Circle((0, 0), 1, color='grey')
>>> ax.add_artist(circle)
>>> ax.set_aspect('equal')
>>> plt.show()
Attempting to interpolate between antipodes on a circle is
ambiguous because there are two possible paths, and on a
sphere there are infinite possible paths on the geodesic surface.
Nonetheless, one of the ambiguous paths is returned along
with a warning:
>>> opposite_pole = np.array([-1, 0])
>>> with np.testing.suppress_warnings() as sup:
... sup.filter(UserWarning)
... geometric_slerp(start,
... opposite_pole,
... t_vals)
array([[ 1.00000000e+00, 0.00000000e+00],
[ 5.00000000e-01, 8.66025404e-01],
[-5.00000000e-01, 8.66025404e-01],
[-1.00000000e+00, 1.22464680e-16]])
Extend the original example to a sphere and plot interpolation
points in 3D:
>>> from mpl_toolkits.mplot3d import proj3d
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111, projection='3d')
Plot the unit sphere for reference (optional):
>>> u = np.linspace(0, 2 * np.pi, 100)
>>> v = np.linspace(0, np.pi, 100)
>>> x = np.outer(np.cos(u), np.sin(v))
>>> y = np.outer(np.sin(u), np.sin(v))
>>> z = np.outer(np.ones(np.size(u)), np.cos(v))
>>> ax.plot_surface(x, y, z, color='y', alpha=0.1)
Interpolating over a larger number of points
may provide the appearance of a smooth curve on
the surface of the sphere, which is also useful
for discretized integration calculations on a
sphere surface:
>>> start = np.array([1, 0, 0])
>>> end = np.array([0, 0, 1])
>>> t_vals = np.linspace(0, 1, 200)
>>> result = geometric_slerp(start,
... end,
... t_vals)
>>> ax.plot(result[...,0],
... result[...,1],
... result[...,2],
... c='k')
>>> plt.show()
"""
start = np.asarray(start, dtype=np.float64)
end = np.asarray(end, dtype=np.float64)
t = np.asarray(t)
if t.ndim > 1:
raise ValueError("The interpolation parameter "
"value must be one dimensional.")
if start.ndim != 1 or end.ndim != 1:
raise ValueError("Start and end coordinates "
"must be one-dimensional")
if start.size != end.size:
raise ValueError("The dimensions of start and "
"end must match (have same size)")
if start.size < 2 or end.size < 2:
raise ValueError("The start and end coordinates must "
"both be in at least two-dimensional "
"space")
if np.array_equal(start, end):
return np.linspace(start, start, t.size)
# for points that violate equation for n-sphere
for coord in [start, end]:
if not np.allclose(np.linalg.norm(coord), 1.0,
rtol=1e-9,
atol=0):
raise ValueError("start and end are not"
" on a unit n-sphere")
if not isinstance(tol, float):
raise ValueError("tol must be a float")
else:
tol = np.fabs(tol)
coord_dist = euclidean(start, end)
# diameter of 2 within tolerance means antipodes, which is a problem
# for all unit n-spheres (even the 0-sphere would have an ambiguous path)
if np.allclose(coord_dist, 2.0, rtol=0, atol=tol):
warnings.warn("start and end are antipodes"
" using the specified tolerance;"
" this may cause ambiguous slerp paths")
t = np.asarray(t, dtype=np.float64)
if t.size == 0:
return np.empty((0, start.size))
if t.min() < 0 or t.max() > 1:
raise ValueError("interpolation parameter must be in [0, 1]")
if t.ndim == 0:
return _geometric_slerp(start,
end,
np.atleast_1d(t)).ravel()
else:
return _geometric_slerp(start,
end,
t)
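The construction above keeps every interpolated point on the unit sphere and advances at constant angular velocity; both properties can be checked directly. A small sketch with hypothetical endpoints 90 degrees apart:

```python
import numpy as np
from scipy.spatial import geometric_slerp

start = np.array([1.0, 0.0, 0.0])
end = np.array([0.0, 1.0, 0.0])
t = np.linspace(0, 1, 5)
path = geometric_slerp(start, end, t)

# Every interpolated point stays on the unit sphere.
print(np.allclose(np.linalg.norm(path, axis=1), 1.0))      # True
# The angle to `start` grows linearly: 0, 22.5, 45, 67.5, 90 degrees.
angles = np.degrees(np.arccos(np.clip(path @ start, -1.0, 1.0)))
print(np.allclose(angles, [0.0, 22.5, 45.0, 67.5, 90.0]))  # True
```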
| 7,945
| 32.108333
| 77
|
py
|
scipy
|
scipy-main/scipy/spatial/__init__.py
|
"""
=============================================================
Spatial algorithms and data structures (:mod:`scipy.spatial`)
=============================================================
.. currentmodule:: scipy.spatial
.. toctree::
:hidden:
spatial.distance
Spatial transformations
=======================
These are contained in the `scipy.spatial.transform` submodule.
Nearest-neighbor queries
========================
.. autosummary::
:toctree: generated/
KDTree -- class for efficient nearest-neighbor queries
cKDTree -- class for efficient nearest-neighbor queries (faster implementation)
Rectangle
Distance metrics
================
Distance metrics are contained in the :mod:`scipy.spatial.distance` submodule.
Delaunay triangulation, convex hulls, and Voronoi diagrams
==========================================================
.. autosummary::
:toctree: generated/
Delaunay -- compute Delaunay triangulation of input points
ConvexHull -- compute a convex hull for input points
   Voronoi -- compute a Voronoi diagram from input points
SphericalVoronoi -- compute a Voronoi diagram from input points on the surface of a sphere
HalfspaceIntersection -- compute the intersection points of input halfspaces
Plotting helpers
================
.. autosummary::
:toctree: generated/
delaunay_plot_2d -- plot 2-D triangulation
convex_hull_plot_2d -- plot 2-D convex hull
voronoi_plot_2d -- plot 2-D Voronoi diagram
.. seealso:: :ref:`Tutorial <qhulltutorial>`
Simplex representation
======================
The simplices (triangles, tetrahedra, etc.) appearing in the Delaunay
tessellation (N-D simplices), convex hull facets, and Voronoi ridges
(N-1-D simplices) are represented in the following scheme::
tess = Delaunay(points)
hull = ConvexHull(points)
voro = Voronoi(points)
# coordinates of the jth vertex of the ith simplex
tess.points[tess.simplices[i, j], :] # tessellation element
hull.points[hull.simplices[i, j], :] # convex hull facet
voro.vertices[voro.ridge_vertices[i, j], :] # ridge between Voronoi cells
For Delaunay triangulations and convex hulls, the neighborhood
structure of the simplices satisfies the condition:
``tess.neighbors[i, j]`` is the neighboring simplex of the ``i``-th
simplex, opposite to its ``j``-th vertex. It is -1 in case of no neighbor.
Convex hull facets also define a hyperplane equation::
(hull.equations[i,:-1] * coord).sum() + hull.equations[i,-1] == 0
Similar hyperplane equations for the Delaunay triangulation correspond
to the convex hull facets on the corresponding N+1-D
paraboloid.
The Delaunay triangulation objects offer a method for locating the
simplex containing a given point, and barycentric coordinate
computations.
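The indexing scheme and hyperplane equations described above can be exercised directly; the random points below are a hypothetical example:

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)
pts = rng.random((10, 2))
tess = Delaunay(pts)
hull = ConvexHull(pts)

# Coordinates of the 0th vertex of the 0th simplex, per the scheme above.
vertex = tess.points[tess.simplices[0, 0], :]

# Each hull facet's hyperplane equation vanishes on the facet's own vertices.
i = 0
residual = (hull.equations[i, :-1] @ hull.points[hull.simplices[i, 0]]
            + hull.equations[i, -1])
print(np.isclose(residual, 0.0))   # True
```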
Functions
---------
.. autosummary::
:toctree: generated/
tsearch
distance_matrix
minkowski_distance
minkowski_distance_p
procrustes
geometric_slerp
Warnings / Errors used in :mod:`scipy.spatial`
----------------------------------------------
.. autosummary::
:toctree: generated/
QhullError
"""
from ._kdtree import *
from ._ckdtree import *
from ._qhull import *
from ._spherical_voronoi import SphericalVoronoi
from ._plotutils import *
from ._procrustes import procrustes
from ._geometric_slerp import geometric_slerp
# Deprecated namespaces, to be removed in v2.0.0
from . import ckdtree, kdtree, qhull
__all__ = [s for s in dir() if not s.startswith('_')]
from . import distance, transform
__all__ += ['distance', 'transform']
from scipy._lib._testutils import PytestTester
test = PytestTester(__name__)
del PytestTester
| 3,683
| 27.338462
| 93
|
py
|
scipy
|
scipy-main/scipy/spatial/distance.py
|
"""
Distance computations (:mod:`scipy.spatial.distance`)
=====================================================
.. sectionauthor:: Damian Eads
Function reference
------------------
Distance matrix computation from a collection of raw observation vectors
stored in a rectangular array.
.. autosummary::
:toctree: generated/
pdist -- pairwise distances between observation vectors.
cdist -- distances between two collections of observation vectors
squareform -- convert distance matrix to a condensed one and vice versa
directed_hausdorff -- directed Hausdorff distance between arrays
Predicates for checking the validity of distance matrices, both
condensed and redundant. Also contained in this module are functions
for computing the number of observations in a distance matrix.
.. autosummary::
:toctree: generated/
is_valid_dm -- checks for a valid distance matrix
is_valid_y -- checks for a valid condensed distance matrix
num_obs_dm -- # of observations in a distance matrix
num_obs_y -- # of observations in a condensed distance matrix
Distance functions between two numeric vectors ``u`` and ``v``. Computing
distances over a large collection of vectors is inefficient for these
functions. Use ``pdist`` for this purpose.
.. autosummary::
:toctree: generated/
braycurtis -- the Bray-Curtis distance.
canberra -- the Canberra distance.
chebyshev -- the Chebyshev distance.
cityblock -- the Manhattan distance.
correlation -- the Correlation distance.
cosine -- the Cosine distance.
euclidean -- the Euclidean distance.
jensenshannon -- the Jensen-Shannon distance.
mahalanobis -- the Mahalanobis distance.
minkowski -- the Minkowski distance.
seuclidean -- the normalized Euclidean distance.
sqeuclidean -- the squared Euclidean distance.
Distance functions between two boolean vectors (representing sets) ``u`` and
``v``. As in the case of numerical vectors, ``pdist`` is more efficient for
computing the distances between all pairs.
.. autosummary::
:toctree: generated/
dice -- the Dice dissimilarity.
hamming -- the Hamming distance.
jaccard -- the Jaccard distance.
kulczynski1 -- the Kulczynski 1 distance.
rogerstanimoto -- the Rogers-Tanimoto dissimilarity.
russellrao -- the Russell-Rao dissimilarity.
sokalmichener -- the Sokal-Michener dissimilarity.
sokalsneath -- the Sokal-Sneath dissimilarity.
yule -- the Yule dissimilarity.
:func:`hamming` also operates over discrete numerical vectors.
"""
# Copyright (C) Damian Eads, 2007-2008. New BSD License.
__all__ = [
'braycurtis',
'canberra',
'cdist',
'chebyshev',
'cityblock',
'correlation',
'cosine',
'dice',
'directed_hausdorff',
'euclidean',
'hamming',
'is_valid_dm',
'is_valid_y',
'jaccard',
'jensenshannon',
'kulczynski1',
'mahalanobis',
'minkowski',
'num_obs_dm',
'num_obs_y',
'pdist',
'rogerstanimoto',
'russellrao',
'seuclidean',
'sokalmichener',
'sokalsneath',
'sqeuclidean',
'squareform',
'yule'
]
import os
import warnings
import numpy as np
import dataclasses
from typing import Optional, Callable
from functools import partial
from scipy._lib._util import _asarray_validated
from . import _distance_wrap
from . import _hausdorff
from ..linalg import norm
from ..special import rel_entr
from . import _distance_pybind
def _extra_windows_error_checks(x, out, required_shape, **kwargs):
# TODO: remove this function when distutils
# build system is removed because pybind11 error
# handling should suffice per gh-18108
if os.name == "nt" and out is not None:
if out.shape != required_shape:
raise ValueError("Output array has incorrect shape.")
if not out.flags["C_CONTIGUOUS"]:
raise ValueError("Output array must be C-contiguous.")
if not np.can_cast(x.dtype, out.dtype):
raise ValueError("Wrong out dtype.")
if os.name == "nt" and "w" in kwargs:
w = kwargs["w"]
if w is not None:
if (w < 0).sum() > 0:
raise ValueError("Input weights should be all non-negative")
def _copy_array_if_base_present(a):
"""Copy the array if its base points to a parent array."""
if a.base is not None:
return a.copy()
return a
def _correlation_cdist_wrap(XA, XB, dm, **kwargs):
XA = XA - XA.mean(axis=1, keepdims=True)
XB = XB - XB.mean(axis=1, keepdims=True)
_distance_wrap.cdist_cosine_double_wrap(XA, XB, dm, **kwargs)
def _correlation_pdist_wrap(X, dm, **kwargs):
X2 = X - X.mean(axis=1, keepdims=True)
_distance_wrap.pdist_cosine_double_wrap(X2, dm, **kwargs)
def _convert_to_type(X, out_type):
return np.ascontiguousarray(X, dtype=out_type)
def _nbool_correspond_all(u, v, w=None):
if u.dtype == v.dtype == bool and w is None:
not_u = ~u
not_v = ~v
nff = (not_u & not_v).sum()
nft = (not_u & v).sum()
ntf = (u & not_v).sum()
ntt = (u & v).sum()
else:
dtype = np.result_type(int, u.dtype, v.dtype)
u = u.astype(dtype)
v = v.astype(dtype)
not_u = 1.0 - u
not_v = 1.0 - v
if w is not None:
not_u = w * not_u
u = w * u
nff = (not_u * not_v).sum()
nft = (not_u * v).sum()
ntf = (u * not_v).sum()
ntt = (u * v).sum()
return (nff, nft, ntf, ntt)
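The `(nff, nft, ntf, ntt)` counts computed above feed the boolean dissimilarities listed in the module docstring; for unweighted boolean inputs they partition the vector length. A hypothetical check:

```python
import numpy as np

u = np.array([True, False, True, True])
v = np.array([True, True, False, True])

not_u, not_v = ~u, ~v
nff = (not_u & not_v).sum()   # both False
nft = (not_u & v).sum()       # False in u, True in v
ntf = (u & not_v).sum()       # True in u, False in v
ntt = (u & v).sum()           # both True

print((nff, nft, ntf, ntt))              # (0, 1, 1, 2)
print(nff + nft + ntf + ntt == u.size)   # True
```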
def _nbool_correspond_ft_tf(u, v, w=None):
if u.dtype == v.dtype == bool and w is None:
not_u = ~u
not_v = ~v
nft = (not_u & v).sum()
ntf = (u & not_v).sum()
else:
dtype = np.result_type(int, u.dtype, v.dtype)
u = u.astype(dtype)
v = v.astype(dtype)
not_u = 1.0 - u
not_v = 1.0 - v
if w is not None:
not_u = w * not_u
u = w * u
nft = (not_u * v).sum()
ntf = (u * not_v).sum()
return (nft, ntf)
def _validate_cdist_input(XA, XB, mA, mB, n, metric_info, **kwargs):
# get supported types
types = metric_info.types
# choose best type
typ = types[types.index(XA.dtype)] if XA.dtype in types else types[0]
# validate data
XA = _convert_to_type(XA, out_type=typ)
XB = _convert_to_type(XB, out_type=typ)
# validate kwargs
_validate_kwargs = metric_info.validator
if _validate_kwargs:
kwargs = _validate_kwargs((XA, XB), mA + mB, n, **kwargs)
return XA, XB, typ, kwargs
def _validate_weight_with_size(X, m, n, **kwargs):
w = kwargs.pop('w', None)
if w is None:
return kwargs
if w.ndim != 1 or w.shape[0] != n:
raise ValueError("Weights must have same size as input vector. "
f"{w.shape[0]} vs. {n}")
kwargs['w'] = _validate_weights(w)
return kwargs
def _validate_hamming_kwargs(X, m, n, **kwargs):
w = kwargs.get('w', np.ones((n,), dtype='double'))
if w.ndim != 1 or w.shape[0] != n:
raise ValueError("Weights must have same size as input vector. %d vs. %d" % (w.shape[0], n))
kwargs['w'] = _validate_weights(w)
return kwargs
def _validate_mahalanobis_kwargs(X, m, n, **kwargs):
VI = kwargs.pop('VI', None)
if VI is None:
if m <= n:
# There are fewer observations than the dimension of
# the observations.
raise ValueError("The number of observations (%d) is too "
"small; the covariance matrix is "
"singular. For observations with %d "
"dimensions, at least %d observations "
"are required." % (m, n, n + 1))
if isinstance(X, tuple):
X = np.vstack(X)
CV = np.atleast_2d(np.cov(X.astype(np.double, copy=False).T))
VI = np.linalg.inv(CV).T.copy()
kwargs["VI"] = _convert_to_double(VI)
return kwargs
def _validate_minkowski_kwargs(X, m, n, **kwargs):
kwargs = _validate_weight_with_size(X, m, n, **kwargs)
if 'p' not in kwargs:
kwargs['p'] = 2.
else:
if kwargs['p'] <= 0:
raise ValueError("p must be greater than 0")
return kwargs
def _validate_pdist_input(X, m, n, metric_info, **kwargs):
# get supported types
types = metric_info.types
# choose best type
typ = types[types.index(X.dtype)] if X.dtype in types else types[0]
# validate data
X = _convert_to_type(X, out_type=typ)
# validate kwargs
_validate_kwargs = metric_info.validator
if _validate_kwargs:
kwargs = _validate_kwargs(X, m, n, **kwargs)
return X, typ, kwargs
def _validate_seuclidean_kwargs(X, m, n, **kwargs):
V = kwargs.pop('V', None)
if V is None:
if isinstance(X, tuple):
X = np.vstack(X)
V = np.var(X.astype(np.double, copy=False), axis=0, ddof=1)
else:
V = np.asarray(V, order='c')
if len(V.shape) != 1:
raise ValueError('Variance vector V must '
'be one-dimensional.')
if V.shape[0] != n:
raise ValueError('Variance vector V must be of the same '
'dimension as the vectors on which the distances '
'are computed.')
kwargs['V'] = _convert_to_double(V)
return kwargs
def _validate_vector(u, dtype=None):
# XXX Is order='c' really necessary?
u = np.asarray(u, dtype=dtype, order='c')
if u.ndim == 1:
return u
raise ValueError("Input vector should be 1-D.")
def _validate_weights(w, dtype=np.double):
w = _validate_vector(w, dtype=dtype)
if np.any(w < 0):
raise ValueError("Input weights should be all non-negative")
return w
def directed_hausdorff(u, v, seed=0):
"""
Compute the directed Hausdorff distance between two 2-D arrays.
Distances between pairs are calculated using a Euclidean metric.
Parameters
----------
u : (M,N) array_like
Input array.
v : (O,N) array_like
Input array.
    seed : int or None
        Local `numpy.random.RandomState` seed. Default is 0, which gives
        a deterministic shuffling of `u` and `v` and thus guarantees
        reproducibility.
Returns
-------
d : double
The directed Hausdorff distance between arrays `u` and `v`,
index_1 : int
index of point contributing to Hausdorff pair in `u`
index_2 : int
index of point contributing to Hausdorff pair in `v`
Raises
------
ValueError
An exception is thrown if `u` and `v` do not have
the same number of columns.
See Also
--------
scipy.spatial.procrustes : Another similarity test for two data sets
Notes
-----
    Uses the early break technique and the random sampling approach
    described by [1]_. Although worst-case performance is ``O(m * o)``
    (as with the brute force algorithm), this is unlikely in practice
    because the input data would have to force the algorithm to explore
    every single point interaction, even after the input points have been
    shuffled. The best-case performance is ``O(m)``, achieved when the
    inner loop finds a distance smaller than ``cmax`` and breaks early as
    often as possible. The authors have formally shown that the average
    runtime is closer to ``O(m)``.
.. versionadded:: 0.19.0
References
----------
.. [1] A. A. Taha and A. Hanbury, "An efficient algorithm for
calculating the exact Hausdorff distance." IEEE Transactions On
Pattern Analysis And Machine Intelligence, vol. 37 pp. 2153-63,
2015.
Examples
--------
Find the directed Hausdorff distance between two 2-D arrays of
coordinates:
>>> from scipy.spatial.distance import directed_hausdorff
>>> import numpy as np
>>> u = np.array([(1.0, 0.0),
... (0.0, 1.0),
... (-1.0, 0.0),
... (0.0, -1.0)])
>>> v = np.array([(2.0, 0.0),
... (0.0, 2.0),
... (-2.0, 0.0),
... (0.0, -4.0)])
>>> directed_hausdorff(u, v)[0]
2.23606797749979
>>> directed_hausdorff(v, u)[0]
3.0
Find the general (symmetric) Hausdorff distance between two 2-D
arrays of coordinates:
>>> max(directed_hausdorff(u, v)[0], directed_hausdorff(v, u)[0])
3.0
Find the indices of the points that generate the Hausdorff distance
(the Hausdorff pair):
>>> directed_hausdorff(v, u)[1:]
(3, 3)
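As a point of contrast with the early-break algorithm described in the Notes, the directed Hausdorff distance can be written as a brute-force NumPy computation (an illustrative sketch only; the function name is ours, not part of SciPy):

```python
import numpy as np

def brute_directed_hausdorff(u, v):
    # Pairwise Euclidean distances: d[i, j] = ||u[i] - v[j]||
    d = np.sqrt(((u[:, None, :] - v[None, :, :]) ** 2).sum(axis=-1))
    # For each point of u, the distance to its nearest point in v;
    # the directed Hausdorff distance is the worst such nearest distance.
    return d.min(axis=1).max()

u = np.array([(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)])
v = np.array([(2.0, 0.0), (0.0, 2.0), (-2.0, 0.0), (0.0, -4.0)])
```

This reproduces the example values above but always does all ``m * o`` comparisons, which is exactly the cost the shuffled early-break approach avoids on average.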
"""
u = np.asarray(u, dtype=np.float64, order='c')
v = np.asarray(v, dtype=np.float64, order='c')
if u.shape[1] != v.shape[1]:
raise ValueError('u and v need to have the same '
'number of columns')
result = _hausdorff.directed_hausdorff(u, v, seed)
return result
def minkowski(u, v, p=2, w=None):
"""
Compute the Minkowski distance between two 1-D arrays.
The Minkowski distance between 1-D arrays `u` and `v`,
is defined as
.. math::
        {\\|u-v\\|}_p = (\\sum{|u_i - v_i|^p})^{1/p},

    or, with weights :math:`w_i`,

    .. math::

        \\left(\\sum{w_i |u_i - v_i|^p}\\right)^{1/p}.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
p : scalar
The order of the norm of the difference :math:`{\\|u-v\\|}_p`. Note
that for :math:`0 < p < 1`, the triangle inequality only holds with
an additional multiplicative factor, i.e. it is only a quasi-metric.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
minkowski : double
The Minkowski distance between vectors `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.minkowski([1, 0, 0], [0, 1, 0], 1)
2.0
>>> distance.minkowski([1, 0, 0], [0, 1, 0], 2)
1.4142135623730951
>>> distance.minkowski([1, 0, 0], [0, 1, 0], 3)
1.2599210498948732
>>> distance.minkowski([1, 1, 0], [0, 1, 0], 1)
1.0
>>> distance.minkowski([1, 1, 0], [0, 1, 0], 2)
1.0
>>> distance.minkowski([1, 1, 0], [0, 1, 0], 3)
1.0
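The weighted case reduces to an unweighted p-norm by absorbing ``w ** (1/p)`` into the difference vector, which is the same trick used internally below; a minimal NumPy sketch (the example values here are ours):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
w = np.array([2.0, 1.0, 0.5])
p = 3
# |w**(1/p) * (u - v)|**p == w * |u - v|**p, so the weighted Minkowski
# distance is just the p-norm of the scaled difference vector.
d = np.linalg.norm(w ** (1.0 / p) * (u - v), ord=p)
```

Here ``d`` equals ``(2*1 + 1*1)**(1/3)``, i.e. the cube root of 3.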
"""
u = _validate_vector(u)
v = _validate_vector(v)
if p <= 0:
raise ValueError("p must be greater than 0")
u_v = u - v
if w is not None:
w = _validate_weights(w)
if p == 1:
root_w = w
elif p == 2:
# better precision and speed
root_w = np.sqrt(w)
elif p == np.inf:
root_w = (w != 0)
else:
root_w = np.power(w, 1/p)
u_v = root_w * u_v
dist = norm(u_v, ord=p)
return dist
def euclidean(u, v, w=None):
"""
Computes the Euclidean distance between two 1-D arrays.
The Euclidean distance between 1-D arrays `u` and `v`, is defined as
.. math::
        {\\|u-v\\|}_2,

    or, with weights :math:`w_i`,

    .. math::

        \\left(\\sum{w_i |u_i - v_i|^2}\\right)^{1/2}.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
euclidean : double
The Euclidean distance between vectors `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.euclidean([1, 0, 0], [0, 1, 0])
1.4142135623730951
>>> distance.euclidean([1, 1, 0], [0, 1, 0])
1.0
"""
return minkowski(u, v, p=2, w=w)
def sqeuclidean(u, v, w=None):
"""
Compute the squared Euclidean distance between two 1-D arrays.
The squared Euclidean distance between `u` and `v` is defined as
.. math::
\\sum_i{w_i |u_i - v_i|^2}
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
sqeuclidean : double
The squared Euclidean distance between vectors `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.sqeuclidean([1, 0, 0], [0, 1, 0])
2.0
>>> distance.sqeuclidean([1, 1, 0], [0, 1, 0])
1.0
"""
# Preserve float dtypes, but convert everything else to np.float64
# for stability.
utype, vtype = None, None
if not (hasattr(u, "dtype") and np.issubdtype(u.dtype, np.inexact)):
utype = np.float64
if not (hasattr(v, "dtype") and np.issubdtype(v.dtype, np.inexact)):
vtype = np.float64
u = _validate_vector(u, dtype=utype)
v = _validate_vector(v, dtype=vtype)
u_v = u - v
u_v_w = u_v # only want weights applied once
if w is not None:
w = _validate_weights(w)
u_v_w = w * u_v
return np.dot(u_v, u_v_w)
def correlation(u, v, w=None, centered=True):
"""
Compute the correlation distance between two 1-D arrays.
The correlation distance between `u` and `v`, is
defined as
.. math::
1 - \\frac{(u - \\bar{u}) \\cdot (v - \\bar{v})}
{{\\|(u - \\bar{u})\\|}_2 {\\|(v - \\bar{v})\\|}_2}
where :math:`\\bar{u}` is the mean of the elements of `u`
and :math:`x \\cdot y` is the dot product of :math:`x` and :math:`y`.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
centered : bool, optional
If True, `u` and `v` will be centered. Default is True.
Returns
-------
correlation : double
        The correlation distance between 1-D arrays `u` and `v`.
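    As a worked illustration of the formula above, the (unweighted)
    correlation distance can be computed in plain NumPy; this mirrors,
    but is not, the implementation below (the function name is ours):

```python
import numpy as np

def correlation_distance(u, v):
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    # Center both vectors, then compute 1 minus the cosine of the
    # angle between the centered vectors.
    uc = u - u.mean()
    vc = v - v.mean()
    return 1.0 - np.dot(uc, vc) / (np.linalg.norm(uc) * np.linalg.norm(vc))
```

    Perfectly anti-correlated vectors give the maximum value 2.0, and
    perfectly correlated vectors give 0.0.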
"""
u = _validate_vector(u)
v = _validate_vector(v)
if w is not None:
w = _validate_weights(w)
if centered:
umu = np.average(u, weights=w)
vmu = np.average(v, weights=w)
u = u - umu
v = v - vmu
uv = np.average(u * v, weights=w)
uu = np.average(np.square(u), weights=w)
vv = np.average(np.square(v), weights=w)
dist = 1.0 - uv / np.sqrt(uu * vv)
# Return absolute value to avoid small negative value due to rounding
return np.abs(dist)
def cosine(u, v, w=None):
"""
Compute the Cosine distance between 1-D arrays.
The Cosine distance between `u` and `v`, is defined as
.. math::
1 - \\frac{u \\cdot v}
{\\|u\\|_2 \\|v\\|_2}.
where :math:`u \\cdot v` is the dot product of :math:`u` and
:math:`v`.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
cosine : double
The Cosine distance between vectors `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.cosine([1, 0, 0], [0, 1, 0])
1.0
>>> distance.cosine([100, 0, 0], [0, 1, 0])
1.0
>>> distance.cosine([1, 1, 0], [0, 1, 0])
0.29289321881345254
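    The definition can be spelled out directly in NumPy. This is an
    illustration of the formula, not the code path used here (which goes
    through `correlation` with ``centered=False``); the function name is
    ours:

```python
import numpy as np

def cosine_distance(u, v):
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    # 1 minus the cosine similarity of the two vectors
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
```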
"""
# cosine distance is also referred to as 'uncentered correlation',
# or 'reflective correlation'
# clamp the result to 0-2
return max(0, min(correlation(u, v, w=w, centered=False), 2.0))
def hamming(u, v, w=None):
"""
Compute the Hamming distance between two 1-D arrays.
The Hamming distance between 1-D arrays `u` and `v`, is simply the
proportion of disagreeing components in `u` and `v`. If `u` and `v` are
boolean vectors, the Hamming distance is
.. math::
\\frac{c_{01} + c_{10}}{n}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n`.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
hamming : double
The Hamming distance between vectors `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.hamming([1, 0, 0], [0, 1, 0])
0.66666666666666663
>>> distance.hamming([1, 0, 0], [1, 1, 0])
0.33333333333333331
>>> distance.hamming([1, 0, 0], [2, 0, 0])
0.33333333333333331
>>> distance.hamming([1, 0, 0], [3, 0, 0])
0.33333333333333331
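    With uniform weights, the definition reduces to the mean of the
    elementwise disagreements; a one-line NumPy sketch (example values
    are ours):

```python
import numpy as np

u = np.array([1, 0, 0])
v = np.array([0, 1, 0])
# Proportion of positions where u and v disagree
d = np.mean(u != v)
```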
"""
u = _validate_vector(u)
v = _validate_vector(v)
if u.shape != v.shape:
raise ValueError('The 1d arrays must have equal lengths.')
u_ne_v = u != v
if w is not None:
w = _validate_weights(w)
if w.shape != u.shape:
raise ValueError("'w' should have the same length as 'u' and 'v'.")
return np.average(u_ne_v, weights=w)
def jaccard(u, v, w=None):
"""
Compute the Jaccard-Needham dissimilarity between two boolean 1-D arrays.
The Jaccard-Needham dissimilarity between 1-D boolean arrays `u` and `v`,
is defined as
.. math::
\\frac{c_{TF} + c_{FT}}
{c_{TT} + c_{FT} + c_{TF}}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
jaccard : double
The Jaccard distance between vectors `u` and `v`.
Notes
-----
    When both `u` and `v` lead to a ``0/0`` division, i.e., there is no
    overlap between the items in the vectors, the returned distance is 0.
    See the Wikipedia page on the Jaccard index [1]_, and this paper [2]_.
.. versionchanged:: 1.2.0
Previously, when `u` and `v` lead to a `0/0` division, the function
would return NaN. This was changed to return 0 instead.
References
----------
.. [1] https://en.wikipedia.org/wiki/Jaccard_index
.. [2] S. Kosub, "A note on the triangle inequality for the Jaccard
distance", 2016, :arxiv:`1612.02696`
Examples
--------
>>> from scipy.spatial import distance
>>> distance.jaccard([1, 0, 0], [0, 1, 0])
1.0
>>> distance.jaccard([1, 0, 0], [1, 1, 0])
0.5
>>> distance.jaccard([1, 0, 0], [1, 2, 0])
0.5
>>> distance.jaccard([1, 0, 0], [1, 1, 1])
0.66666666666666663
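    For general (not necessarily boolean) vectors, the distance counts
    disagreements among the positions where at least one vector is
    nonzero, as in this NumPy sketch (illustrative; the function name
    is ours):

```python
import numpy as np

def jaccard_distance(u, v):
    u = np.asarray(u)
    v = np.asarray(v)
    nonzero = (u != 0) | (v != 0)   # positions that matter
    unequal = (u != v) & nonzero    # disagreements among them
    b = nonzero.sum()
    # by convention, 0/0 gives distance 0 (see Notes)
    return unequal.sum() / b if b != 0 else 0.0
```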
"""
u = _validate_vector(u)
v = _validate_vector(v)
nonzero = np.bitwise_or(u != 0, v != 0)
unequal_nonzero = np.bitwise_and((u != v), nonzero)
if w is not None:
w = _validate_weights(w)
nonzero = w * nonzero
unequal_nonzero = w * unequal_nonzero
a = np.double(unequal_nonzero.sum())
b = np.double(nonzero.sum())
return (a / b) if b != 0 else 0
def kulczynski1(u, v, *, w=None):
"""
Compute the Kulczynski 1 dissimilarity between two boolean 1-D arrays.
The Kulczynski 1 dissimilarity between two boolean 1-D arrays `u` and `v`
of length ``n``, is defined as
.. math::
\\frac{c_{11}}
{c_{01} + c_{10}}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k \\in {0, 1, ..., n-1}`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
kulczynski1 : float
The Kulczynski 1 distance between vectors `u` and `v`.
Notes
-----
    This measure has a minimum value of 0 and no upper limit.
    It is undefined when there are no non-matches.
.. versionadded:: 1.8.0
References
----------
.. [1] Kulczynski S. et al. Bulletin
International de l'Academie Polonaise des Sciences
et des Lettres, Classe des Sciences Mathematiques
et Naturelles, Serie B (Sciences Naturelles). 1927;
Supplement II: 57-203.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.kulczynski1([1, 0, 0], [0, 1, 0])
0.0
>>> distance.kulczynski1([True, False, False], [True, True, False])
1.0
>>> distance.kulczynski1([True, False, False], [True])
0.5
>>> distance.kulczynski1([1, 0, 0], [3, 1, 0])
-3.0
"""
u = _validate_vector(u)
v = _validate_vector(v)
if w is not None:
w = _validate_weights(w)
(_, nft, ntf, ntt) = _nbool_correspond_all(u, v, w=w)
return ntt / (ntf + nft)
def seuclidean(u, v, V):
"""
Return the standardized Euclidean distance between two 1-D arrays.
The standardized Euclidean distance between two n-vectors `u` and `v` is
.. math::
\\sqrt{\\sum\\limits_i \\frac{1}{V_i} \\left(u_i-v_i \\right)^2}
    ``V`` is the variance vector; ``V[i]`` is the variance computed over all
    the i-th components of the points.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
V : (N,) array_like
        `V` is a 1-D array of component variances. It is usually computed
        among a larger collection of vectors.
Returns
-------
seuclidean : double
The standardized Euclidean distance between vectors `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.seuclidean([1, 0, 0], [0, 1, 0], [0.1, 0.1, 0.1])
4.4721359549995796
>>> distance.seuclidean([1, 0, 0], [0, 1, 0], [1, 0.1, 0.1])
3.3166247903553998
>>> distance.seuclidean([1, 0, 0], [0, 1, 0], [10, 0.1, 0.1])
3.1780497164141406
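    Equivalently (and this is how it is computed below, as a weighted
    Euclidean distance with weights ``1/V``), the distance can be written
    directly in NumPy (example values are ours):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
V = np.array([0.1, 0.1, 0.1])
# square root of the variance-scaled squared differences
d = np.sqrt(((u - v) ** 2 / V).sum())
```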
"""
u = _validate_vector(u)
v = _validate_vector(v)
V = _validate_vector(V, dtype=np.float64)
if V.shape[0] != u.shape[0] or u.shape[0] != v.shape[0]:
raise TypeError('V must be a 1-D array of the same dimension '
'as u and v.')
return euclidean(u, v, w=1/V)
def cityblock(u, v, w=None):
"""
Compute the City Block (Manhattan) distance.
Computes the Manhattan distance between two 1-D arrays `u` and `v`,
which is defined as
.. math::
\\sum_i {\\left| u_i - v_i \\right|}.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
cityblock : double
The City Block (Manhattan) distance between vectors `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.cityblock([1, 0, 0], [0, 1, 0])
2
>>> distance.cityblock([1, 0, 0], [0, 2, 0])
3
>>> distance.cityblock([1, 0, 0], [1, 1, 0])
1
"""
u = _validate_vector(u)
v = _validate_vector(v)
l1_diff = abs(u - v)
if w is not None:
w = _validate_weights(w)
l1_diff = w * l1_diff
return l1_diff.sum()
def mahalanobis(u, v, VI):
"""
Compute the Mahalanobis distance between two 1-D arrays.
The Mahalanobis distance between 1-D arrays `u` and `v`, is defined as
.. math::
\\sqrt{ (u-v) V^{-1} (u-v)^T }
where ``V`` is the covariance matrix. Note that the argument `VI`
is the inverse of ``V``.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
VI : array_like
The inverse of the covariance matrix.
Returns
-------
mahalanobis : double
The Mahalanobis distance between vectors `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> iv = [[1, 0.5, 0.5], [0.5, 1, 0.5], [0.5, 0.5, 1]]
>>> distance.mahalanobis([1, 0, 0], [0, 1, 0], iv)
1.0
>>> distance.mahalanobis([0, 2, 0], [0, 1, 0], iv)
1.0
>>> distance.mahalanobis([2, 0, 0], [0, 1, 0], iv)
1.7320508075688772
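    The quadratic form above can be evaluated directly; an illustrative
    sketch using the same ``iv`` as in the examples:

```python
import numpy as np

iv = np.array([[1, 0.5, 0.5], [0.5, 1, 0.5], [0.5, 0.5, 1]])
delta = np.array([1.0, 0.0, 0.0]) - np.array([0.0, 1.0, 0.0])
# sqrt( (u - v) VI (u - v)^T )
d = np.sqrt(delta @ iv @ delta)
```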
"""
u = _validate_vector(u)
v = _validate_vector(v)
VI = np.atleast_2d(VI)
delta = u - v
m = np.dot(np.dot(delta, VI), delta)
return np.sqrt(m)
def chebyshev(u, v, w=None):
"""
Compute the Chebyshev distance.
Computes the Chebyshev distance between two 1-D arrays `u` and `v`,
which is defined as
.. math::
\\max_i {|u_i-v_i|}.
Parameters
----------
u : (N,) array_like
Input vector.
v : (N,) array_like
Input vector.
    w : (N,) array_like, optional
        The weights for each value in `u` and `v`. Default is None,
        which gives each value a weight of 1.0. Entries with a weight
        of zero are excluded from the maximum.
Returns
-------
chebyshev : double
The Chebyshev distance between vectors `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.chebyshev([1, 0, 0], [0, 1, 0])
1
>>> distance.chebyshev([1, 1, 0], [0, 1, 0])
1
"""
u = _validate_vector(u)
v = _validate_vector(v)
if w is not None:
w = _validate_weights(w)
has_weight = w > 0
if has_weight.sum() < w.size:
u = u[has_weight]
v = v[has_weight]
return max(abs(u - v))
def braycurtis(u, v, w=None):
"""
Compute the Bray-Curtis distance between two 1-D arrays.
Bray-Curtis distance is defined as
.. math::
\\sum{|u_i-v_i|} / \\sum{|u_i+v_i|}
The Bray-Curtis distance is in the range [0, 1] if all coordinates are
positive, and is undefined if the inputs are of length zero.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
braycurtis : double
The Bray-Curtis distance between 1-D arrays `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.braycurtis([1, 0, 0], [0, 1, 0])
1.0
>>> distance.braycurtis([1, 1, 0], [0, 1, 0])
0.33333333333333331
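    The ratio of elementwise L1 sums can be written directly in NumPy
    (example values are ours):

```python
import numpy as np

u = np.array([1.0, 1.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
# sum |u_i - v_i| divided by sum |u_i + v_i|
d = np.abs(u - v).sum() / np.abs(u + v).sum()
```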
"""
u = _validate_vector(u)
v = _validate_vector(v, dtype=np.float64)
l1_diff = abs(u - v)
l1_sum = abs(u + v)
if w is not None:
w = _validate_weights(w)
l1_diff = w * l1_diff
l1_sum = w * l1_sum
return l1_diff.sum() / l1_sum.sum()
def canberra(u, v, w=None):
"""
Compute the Canberra distance between two 1-D arrays.
The Canberra distance is defined as
.. math::
d(u,v) = \\sum_i \\frac{|u_i-v_i|}
{|u_i|+|v_i|}.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
canberra : double
The Canberra distance between vectors `u` and `v`.
Notes
-----
    When `u[i]` and `v[i]` are 0 for a given i, the fraction 0/0 = 0 is
    used in the calculation.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.canberra([1, 0, 0], [0, 1, 0])
2.0
>>> distance.canberra([1, 1, 0], [0, 1, 0])
1.0
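    With the 0/0 = 0 convention from the Notes, the definition can be
    sketched in NumPy as follows (illustrative; the function name is
    ours):

```python
import numpy as np

def canberra_distance(u, v):
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    with np.errstate(invalid='ignore'):
        d = np.abs(u - v) / (np.abs(u) + np.abs(v))
    # nansum implements the 0/0 -> 0 convention (0/0 yields NaN)
    return np.nansum(d)
```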
"""
u = _validate_vector(u)
v = _validate_vector(v, dtype=np.float64)
if w is not None:
w = _validate_weights(w)
with np.errstate(invalid='ignore'):
abs_uv = abs(u - v)
abs_u = abs(u)
abs_v = abs(v)
d = abs_uv / (abs_u + abs_v)
if w is not None:
d = w * d
d = np.nansum(d)
return d
def jensenshannon(p, q, base=None, *, axis=0, keepdims=False):
"""
Compute the Jensen-Shannon distance (metric) between
two probability arrays. This is the square root
of the Jensen-Shannon divergence.
The Jensen-Shannon distance between two probability
vectors `p` and `q` is defined as,
.. math::
\\sqrt{\\frac{D(p \\parallel m) + D(q \\parallel m)}{2}}
where :math:`m` is the pointwise mean of :math:`p` and :math:`q`
and :math:`D` is the Kullback-Leibler divergence.
This routine will normalize `p` and `q` if they don't sum to 1.0.
Parameters
----------
p : (N,) array_like
left probability vector
q : (N,) array_like
right probability vector
base : double, optional
the base of the logarithm used to compute the output
if not given, then the routine uses the default base of
scipy.stats.entropy.
axis : int, optional
Axis along which the Jensen-Shannon distances are computed. The default
is 0.
.. versionadded:: 1.7.0
keepdims : bool, optional
If this is set to `True`, the reduced axes are left in the
result as dimensions with size one. With this option,
the result will broadcast correctly against the input array.
Default is False.
.. versionadded:: 1.7.0
Returns
-------
js : double or ndarray
The Jensen-Shannon distances between `p` and `q` along the `axis`.
Notes
-----
.. versionadded:: 1.2.0
Examples
--------
>>> from scipy.spatial import distance
>>> import numpy as np
>>> distance.jensenshannon([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 2.0)
1.0
>>> distance.jensenshannon([1.0, 0.0], [0.5, 0.5])
0.46450140402245893
>>> distance.jensenshannon([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])
0.0
>>> a = np.array([[1, 2, 3, 4],
... [5, 6, 7, 8],
... [9, 10, 11, 12]])
>>> b = np.array([[13, 14, 15, 16],
... [17, 18, 19, 20],
... [21, 22, 23, 24]])
>>> distance.jensenshannon(a, b, axis=0)
array([0.1954288, 0.1447697, 0.1138377, 0.0927636])
>>> distance.jensenshannon(a, b, axis=1)
array([0.1402339, 0.0399106, 0.0201815])
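    A pure-NumPy sketch of the 1-D computation (normalize, form the
    pointwise mean, sum the two KL divergences, halve, take the square
    root); `rel_entr` is replaced here by an explicit masked KL term,
    and the function name is ours:

```python
import numpy as np

def js_distance(p, q):
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    q = np.asarray(q, dtype=float)
    q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0  # 0 * log(0/m) contributes 0 by convention
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return np.sqrt(0.5 * (kl(p, m) + kl(q, m)))
```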
"""
p = np.asarray(p)
q = np.asarray(q)
p = p / np.sum(p, axis=axis, keepdims=True)
q = q / np.sum(q, axis=axis, keepdims=True)
m = (p + q) / 2.0
left = rel_entr(p, m)
right = rel_entr(q, m)
left_sum = np.sum(left, axis=axis, keepdims=keepdims)
right_sum = np.sum(right, axis=axis, keepdims=keepdims)
js = left_sum + right_sum
if base is not None:
js /= np.log(base)
return np.sqrt(js / 2.0)
def yule(u, v, w=None):
"""
Compute the Yule dissimilarity between two boolean 1-D arrays.
The Yule dissimilarity is defined as
.. math::
\\frac{R}{c_{TT} * c_{FF} + \\frac{R}{2}}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n` and :math:`R = 2.0 * c_{TF} * c_{FT}`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
yule : double
The Yule dissimilarity between vectors `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.yule([1, 0, 0], [0, 1, 0])
2.0
>>> distance.yule([1, 1, 0], [0, 1, 0])
0.0
"""
u = _validate_vector(u)
v = _validate_vector(v)
if w is not None:
w = _validate_weights(w)
(nff, nft, ntf, ntt) = _nbool_correspond_all(u, v, w=w)
half_R = ntf * nft
if half_R == 0:
return 0.0
else:
return float(2.0 * half_R / (ntt * nff + half_R))
def dice(u, v, w=None):
"""
Compute the Dice dissimilarity between two boolean 1-D arrays.
The Dice dissimilarity between `u` and `v`, is
.. math::
\\frac{c_{TF} + c_{FT}}
{2c_{TT} + c_{FT} + c_{TF}}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n`.
Parameters
----------
u : (N,) array_like, bool
Input 1-D array.
v : (N,) array_like, bool
Input 1-D array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
dice : double
The Dice dissimilarity between 1-D arrays `u` and `v`.
Notes
-----
This function computes the Dice dissimilarity index. To compute the
Dice similarity index, convert one to the other with similarity =
1 - dissimilarity.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.dice([1, 0, 0], [0, 1, 0])
1.0
>>> distance.dice([1, 0, 0], [1, 1, 0])
0.3333333333333333
>>> distance.dice([1, 0, 0], [2, 0, 0])
-0.3333333333333333
"""
u = _validate_vector(u)
v = _validate_vector(v)
if w is not None:
w = _validate_weights(w)
if u.dtype == v.dtype == bool and w is None:
ntt = (u & v).sum()
else:
dtype = np.result_type(int, u.dtype, v.dtype)
u = u.astype(dtype)
v = v.astype(dtype)
if w is None:
ntt = (u * v).sum()
else:
ntt = (u * v * w).sum()
(nft, ntf) = _nbool_correspond_ft_tf(u, v, w=w)
return float((ntf + nft) / np.array(2.0 * ntt + ntf + nft))
def rogerstanimoto(u, v, w=None):
"""
Compute the Rogers-Tanimoto dissimilarity between two boolean 1-D arrays.
The Rogers-Tanimoto dissimilarity between two boolean 1-D arrays
`u` and `v`, is defined as
.. math::
\\frac{R}
{c_{TT} + c_{FF} + R}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n` and :math:`R = 2(c_{TF} + c_{FT})`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
rogerstanimoto : double
The Rogers-Tanimoto dissimilarity between vectors
`u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.rogerstanimoto([1, 0, 0], [0, 1, 0])
0.8
>>> distance.rogerstanimoto([1, 0, 0], [1, 1, 0])
0.5
>>> distance.rogerstanimoto([1, 0, 0], [2, 0, 0])
-1.0
"""
u = _validate_vector(u)
v = _validate_vector(v)
if w is not None:
w = _validate_weights(w)
(nff, nft, ntf, ntt) = _nbool_correspond_all(u, v, w=w)
return float(2.0 * (ntf + nft)) / float(ntt + nff + (2.0 * (ntf + nft)))
def russellrao(u, v, w=None):
"""
Compute the Russell-Rao dissimilarity between two boolean 1-D arrays.
The Russell-Rao dissimilarity between two boolean 1-D arrays, `u` and
`v`, is defined as
.. math::
\\frac{n - c_{TT}}
{n}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
russellrao : double
The Russell-Rao dissimilarity between vectors `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.russellrao([1, 0, 0], [0, 1, 0])
1.0
>>> distance.russellrao([1, 0, 0], [1, 1, 0])
0.6666666666666666
>>> distance.russellrao([1, 0, 0], [2, 0, 0])
0.3333333333333333
"""
u = _validate_vector(u)
v = _validate_vector(v)
if u.dtype == v.dtype == bool and w is None:
ntt = (u & v).sum()
n = float(len(u))
elif w is None:
ntt = (u * v).sum()
n = float(len(u))
else:
w = _validate_weights(w)
ntt = (u * v * w).sum()
n = w.sum()
return float(n - ntt) / n
def sokalmichener(u, v, w=None):
"""
Compute the Sokal-Michener dissimilarity between two boolean 1-D arrays.
The Sokal-Michener dissimilarity between boolean 1-D arrays `u` and `v`,
is defined as
.. math::
\\frac{R}
{S + R}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n`, :math:`R = 2 * (c_{TF} + c_{FT})` and
:math:`S = c_{FF} + c_{TT}`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
sokalmichener : double
The Sokal-Michener dissimilarity between vectors `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.sokalmichener([1, 0, 0], [0, 1, 0])
0.8
>>> distance.sokalmichener([1, 0, 0], [1, 1, 0])
0.5
>>> distance.sokalmichener([1, 0, 0], [2, 0, 0])
-1.0
"""
u = _validate_vector(u)
v = _validate_vector(v)
if w is not None:
w = _validate_weights(w)
nff, nft, ntf, ntt = _nbool_correspond_all(u, v, w=w)
return float(2.0 * (ntf + nft)) / float(ntt + nff + 2.0 * (ntf + nft))
def sokalsneath(u, v, w=None):
"""
Compute the Sokal-Sneath dissimilarity between two boolean 1-D arrays.
The Sokal-Sneath dissimilarity between `u` and `v`,
.. math::
\\frac{R}
{c_{TT} + R}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n` and :math:`R = 2(c_{TF} + c_{FT})`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
w : (N,) array_like, optional
The weights for each value in `u` and `v`. Default is None,
which gives each value a weight of 1.0
Returns
-------
sokalsneath : double
The Sokal-Sneath dissimilarity between vectors `u` and `v`.
Examples
--------
>>> from scipy.spatial import distance
>>> distance.sokalsneath([1, 0, 0], [0, 1, 0])
1.0
>>> distance.sokalsneath([1, 0, 0], [1, 1, 0])
0.66666666666666663
>>> distance.sokalsneath([1, 0, 0], [2, 1, 0])
0.0
>>> distance.sokalsneath([1, 0, 0], [3, 1, 0])
-2.0
"""
u = _validate_vector(u)
v = _validate_vector(v)
if u.dtype == v.dtype == bool and w is None:
ntt = (u & v).sum()
elif w is None:
ntt = (u * v).sum()
else:
w = _validate_weights(w)
ntt = (u * v * w).sum()
(nft, ntf) = _nbool_correspond_ft_tf(u, v, w=w)
denom = np.array(ntt + 2.0 * (ntf + nft))
if not denom.any():
raise ValueError('Sokal-Sneath dissimilarity is not defined for '
'vectors that are entirely false.')
return float(2.0 * (ntf + nft)) / denom
_convert_to_double = partial(_convert_to_type, out_type=np.double)
_convert_to_bool = partial(_convert_to_type, out_type=bool)
# adding python-only wrappers to _distance_wrap module
_distance_wrap.pdist_correlation_double_wrap = _correlation_pdist_wrap
_distance_wrap.cdist_correlation_double_wrap = _correlation_cdist_wrap
@dataclasses.dataclass(frozen=True)
class CDistMetricWrapper:
metric_name: str
def __call__(self, XA, XB, *, out=None, **kwargs):
XA = np.ascontiguousarray(XA)
XB = np.ascontiguousarray(XB)
mA, n = XA.shape
mB, _ = XB.shape
metric_name = self.metric_name
metric_info = _METRICS[metric_name]
XA, XB, typ, kwargs = _validate_cdist_input(
XA, XB, mA, mB, n, metric_info, **kwargs)
w = kwargs.pop('w', None)
if w is not None:
metric = metric_info.dist_func
return _cdist_callable(
XA, XB, metric=metric, out=out, w=w, **kwargs)
dm = _prepare_out_argument(out, np.double, (mA, mB))
# get cdist wrapper
cdist_fn = getattr(_distance_wrap, f'cdist_{metric_name}_{typ}_wrap')
cdist_fn(XA, XB, dm, **kwargs)
return dm
@dataclasses.dataclass(frozen=True)
class CDistWeightedMetricWrapper:
metric_name: str
weighted_metric: str
def __call__(self, XA, XB, *, out=None, **kwargs):
XA = np.ascontiguousarray(XA)
XB = np.ascontiguousarray(XB)
mA, n = XA.shape
mB, _ = XB.shape
metric_name = self.metric_name
XA, XB, typ, kwargs = _validate_cdist_input(
XA, XB, mA, mB, n, _METRICS[metric_name], **kwargs)
dm = _prepare_out_argument(out, np.double, (mA, mB))
w = kwargs.pop('w', None)
if w is not None:
metric_name = self.weighted_metric
kwargs['w'] = w
# get cdist wrapper
cdist_fn = getattr(_distance_wrap, f'cdist_{metric_name}_{typ}_wrap')
cdist_fn(XA, XB, dm, **kwargs)
return dm
@dataclasses.dataclass(frozen=True)
class PDistMetricWrapper:
metric_name: str
def __call__(self, X, *, out=None, **kwargs):
X = np.ascontiguousarray(X)
m, n = X.shape
metric_name = self.metric_name
metric_info = _METRICS[metric_name]
X, typ, kwargs = _validate_pdist_input(
X, m, n, metric_info, **kwargs)
out_size = (m * (m - 1)) // 2
w = kwargs.pop('w', None)
if w is not None:
metric = metric_info.dist_func
return _pdist_callable(
X, metric=metric, out=out, w=w, **kwargs)
dm = _prepare_out_argument(out, np.double, (out_size,))
# get pdist wrapper
pdist_fn = getattr(_distance_wrap, f'pdist_{metric_name}_{typ}_wrap')
pdist_fn(X, dm, **kwargs)
return dm
@dataclasses.dataclass(frozen=True)
class PDistWeightedMetricWrapper:
metric_name: str
weighted_metric: str
def __call__(self, X, *, out=None, **kwargs):
X = np.ascontiguousarray(X)
m, n = X.shape
metric_name = self.metric_name
X, typ, kwargs = _validate_pdist_input(
X, m, n, _METRICS[metric_name], **kwargs)
out_size = (m * (m - 1)) // 2
dm = _prepare_out_argument(out, np.double, (out_size,))
w = kwargs.pop('w', None)
if w is not None:
metric_name = self.weighted_metric
kwargs['w'] = w
# get pdist wrapper
pdist_fn = getattr(_distance_wrap, f'pdist_{metric_name}_{typ}_wrap')
pdist_fn(X, dm, **kwargs)
return dm
@dataclasses.dataclass(frozen=True)
class MetricInfo:
# Name of python distance function
canonical_name: str
# All aliases, including canonical_name
aka: set[str]
# unvectorized distance function
dist_func: Callable
# Optimized cdist function
cdist_func: Callable
# Optimized pdist function
pdist_func: Callable
# function that checks kwargs and computes default values:
# f(X, m, n, **kwargs)
validator: Optional[Callable] = None
# list of supported types:
# X (pdist) and XA (cdist) are used to choose the type. if there is no
# match the first type is used. Default double
types: list[str] = dataclasses.field(default_factory=lambda: ['double'])
# true if out array must be C-contiguous
requires_contiguous_out: bool = True
# Registry of implemented metrics:
_METRIC_INFOS = [
MetricInfo(
canonical_name='braycurtis',
aka={'braycurtis'},
dist_func=braycurtis,
cdist_func=_distance_pybind.cdist_braycurtis,
pdist_func=_distance_pybind.pdist_braycurtis,
),
MetricInfo(
canonical_name='canberra',
aka={'canberra'},
dist_func=canberra,
cdist_func=_distance_pybind.cdist_canberra,
pdist_func=_distance_pybind.pdist_canberra,
),
MetricInfo(
canonical_name='chebyshev',
aka={'chebychev', 'chebyshev', 'cheby', 'cheb', 'ch'},
dist_func=chebyshev,
cdist_func=_distance_pybind.cdist_chebyshev,
pdist_func=_distance_pybind.pdist_chebyshev,
),
MetricInfo(
canonical_name='cityblock',
aka={'cityblock', 'cblock', 'cb', 'c'},
dist_func=cityblock,
cdist_func=_distance_pybind.cdist_cityblock,
pdist_func=_distance_pybind.pdist_cityblock,
),
MetricInfo(
canonical_name='correlation',
aka={'correlation', 'co'},
dist_func=correlation,
cdist_func=CDistMetricWrapper('correlation'),
pdist_func=PDistMetricWrapper('correlation'),
),
MetricInfo(
canonical_name='cosine',
aka={'cosine', 'cos'},
dist_func=cosine,
cdist_func=CDistMetricWrapper('cosine'),
pdist_func=PDistMetricWrapper('cosine'),
),
MetricInfo(
canonical_name='dice',
aka={'dice'},
types=['bool'],
dist_func=dice,
cdist_func=_distance_pybind.cdist_dice,
pdist_func=_distance_pybind.pdist_dice,
),
MetricInfo(
canonical_name='euclidean',
aka={'euclidean', 'euclid', 'eu', 'e'},
dist_func=euclidean,
cdist_func=_distance_pybind.cdist_euclidean,
pdist_func=_distance_pybind.pdist_euclidean,
),
MetricInfo(
canonical_name='hamming',
aka={'matching', 'hamming', 'hamm', 'ha', 'h'},
types=['double', 'bool'],
validator=_validate_hamming_kwargs,
dist_func=hamming,
cdist_func=_distance_pybind.cdist_hamming,
pdist_func=_distance_pybind.pdist_hamming,
),
MetricInfo(
canonical_name='jaccard',
aka={'jaccard', 'jacc', 'ja', 'j'},
types=['double', 'bool'],
dist_func=jaccard,
cdist_func=_distance_pybind.cdist_jaccard,
pdist_func=_distance_pybind.pdist_jaccard,
),
MetricInfo(
canonical_name='jensenshannon',
aka={'jensenshannon', 'js'},
dist_func=jensenshannon,
cdist_func=CDistMetricWrapper('jensenshannon'),
pdist_func=PDistMetricWrapper('jensenshannon'),
),
MetricInfo(
canonical_name='kulczynski1',
aka={'kulczynski1'},
types=['bool'],
dist_func=kulczynski1,
cdist_func=_distance_pybind.cdist_kulczynski1,
pdist_func=_distance_pybind.pdist_kulczynski1,
),
MetricInfo(
canonical_name='mahalanobis',
aka={'mahalanobis', 'mahal', 'mah'},
validator=_validate_mahalanobis_kwargs,
dist_func=mahalanobis,
cdist_func=CDistMetricWrapper('mahalanobis'),
pdist_func=PDistMetricWrapper('mahalanobis'),
),
MetricInfo(
canonical_name='minkowski',
aka={'minkowski', 'mi', 'm', 'pnorm'},
validator=_validate_minkowski_kwargs,
dist_func=minkowski,
cdist_func=_distance_pybind.cdist_minkowski,
pdist_func=_distance_pybind.pdist_minkowski,
),
MetricInfo(
canonical_name='rogerstanimoto',
aka={'rogerstanimoto'},
types=['bool'],
dist_func=rogerstanimoto,
cdist_func=_distance_pybind.cdist_rogerstanimoto,
pdist_func=_distance_pybind.pdist_rogerstanimoto,
),
MetricInfo(
canonical_name='russellrao',
aka={'russellrao'},
types=['bool'],
dist_func=russellrao,
cdist_func=_distance_pybind.cdist_russellrao,
pdist_func=_distance_pybind.pdist_russellrao,
),
MetricInfo(
canonical_name='seuclidean',
aka={'seuclidean', 'se', 's'},
validator=_validate_seuclidean_kwargs,
dist_func=seuclidean,
cdist_func=CDistMetricWrapper('seuclidean'),
pdist_func=PDistMetricWrapper('seuclidean'),
),
MetricInfo(
canonical_name='sokalmichener',
aka={'sokalmichener'},
types=['bool'],
dist_func=sokalmichener,
cdist_func=_distance_pybind.cdist_sokalmichener,
pdist_func=_distance_pybind.pdist_sokalmichener,
),
MetricInfo(
canonical_name='sokalsneath',
aka={'sokalsneath'},
types=['bool'],
dist_func=sokalsneath,
cdist_func=_distance_pybind.cdist_sokalsneath,
pdist_func=_distance_pybind.pdist_sokalsneath,
),
MetricInfo(
canonical_name='sqeuclidean',
aka={'sqeuclidean', 'sqe', 'sqeuclid'},
dist_func=sqeuclidean,
cdist_func=_distance_pybind.cdist_sqeuclidean,
pdist_func=_distance_pybind.pdist_sqeuclidean,
),
MetricInfo(
canonical_name='yule',
aka={'yule'},
types=['bool'],
dist_func=yule,
cdist_func=_distance_pybind.cdist_yule,
pdist_func=_distance_pybind.pdist_yule,
),
]
_METRICS = {info.canonical_name: info for info in _METRIC_INFOS}
_METRIC_ALIAS = {alias: info
for info in _METRIC_INFOS
for alias in info.aka}
_METRICS_NAMES = list(_METRICS.keys())
_TEST_METRICS = {'test_' + info.canonical_name: info for info in _METRIC_INFOS}
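The registry reduces metric dispatch to a single dictionary lookup keyed by alias. A minimal, self-contained sketch of the same pattern (hypothetical names ``_Info`` and ``dist``, not the scipy internals):

```python
import dataclasses
from typing import Callable

@dataclasses.dataclass(frozen=True)
class _Info:
    canonical_name: str
    aka: set
    dist_func: Callable

# Hypothetical mini-registry mirroring _METRIC_INFOS / _METRIC_ALIAS.
_infos = [
    _Info('euclidean', {'euclidean', 'euclid', 'e'},
          lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5),
    _Info('cityblock', {'cityblock', 'cb', 'c'},
          lambda u, v: sum(abs(a - b) for a, b in zip(u, v))),
]
_alias = {name: info for info in _infos for name in info.aka}

def dist(u, v, metric='euclidean'):
    # Resolve any alias to its canonical metric, then delegate.
    info = _alias.get(metric.lower())
    if info is None:
        raise ValueError(f'Unknown Distance Metric: {metric}')
    return info.dist_func(u, v)
```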
def pdist(X, metric='euclidean', *, out=None, **kwargs):
"""
Pairwise distances between observations in n-dimensional space.
See Notes for common calling conventions.
Parameters
----------
X : array_like
An m by n array of m original observations in an
n-dimensional space.
metric : str or function, optional
The distance metric to use. The distance function can
be 'braycurtis', 'canberra', 'chebyshev', 'cityblock',
'correlation', 'cosine', 'dice', 'euclidean', 'hamming',
'jaccard', 'jensenshannon', 'kulczynski1',
'mahalanobis', 'matching', 'minkowski', 'rogerstanimoto',
'russellrao', 'seuclidean', 'sokalmichener', 'sokalsneath',
'sqeuclidean', 'yule'.
out : ndarray
The output array.
If not None, condensed distance matrix Y is stored in this array.
**kwargs : dict, optional
Extra arguments to `metric`: refer to each metric documentation for a
list of all possible arguments.
Some possible arguments:
p : scalar
The p-norm to apply for Minkowski, weighted and unweighted.
Default: 2.
w : ndarray
The weight vector for metrics that support weights (e.g., Minkowski).
V : ndarray
The variance vector for standardized Euclidean.
Default: var(X, axis=0, ddof=1)
VI : ndarray
The inverse of the covariance matrix for Mahalanobis.
Default: inv(cov(X.T)).T
Returns
-------
Y : ndarray
Returns a condensed distance matrix Y. For each :math:`i` and :math:`j`
(where :math:`i<j<m`), where m is the number of original observations,
the metric ``dist(u=X[i], v=X[j])`` is computed and stored in entry ``m
* i + j - ((i + 2) * (i + 1)) // 2``.
See Also
--------
squareform : converts between condensed distance matrices and
square distance matrices.
Notes
-----
See ``squareform`` for information on how to calculate the index of
this entry or to convert the condensed distance matrix to a
redundant square matrix.
The following are common calling conventions.
1. ``Y = pdist(X, 'euclidean')``
Computes the distance between m points using Euclidean distance
(2-norm) as the distance metric between the points. The points
are arranged as m n-dimensional row vectors in the matrix X.
2. ``Y = pdist(X, 'minkowski', p=2.)``
Computes the distances using the Minkowski distance
:math:`\\|u-v\\|_p` (:math:`p`-norm) where :math:`p > 0` (note
that this is only a quasi-metric if :math:`0 < p < 1`).
3. ``Y = pdist(X, 'cityblock')``
Computes the city block or Manhattan distance between the
points.
4. ``Y = pdist(X, 'seuclidean', V=None)``
Computes the standardized Euclidean distance. The standardized
Euclidean distance between two n-vectors ``u`` and ``v`` is
.. math::
\\sqrt{\\sum {(u_i-v_i)^2 / V[x_i]}}
V is the variance vector; V[i] is the variance computed over all
the i'th components of the points. If not passed, it is
automatically computed.
5. ``Y = pdist(X, 'sqeuclidean')``
Computes the squared Euclidean distance :math:`\\|u-v\\|_2^2` between
the vectors.
6. ``Y = pdist(X, 'cosine')``
Computes the cosine distance between vectors u and v,
.. math::
1 - \\frac{u \\cdot v}
{{\\|u\\|}_2 {\\|v\\|}_2}
where :math:`\\|*\\|_2` is the 2-norm of its argument ``*``, and
:math:`u \\cdot v` is the dot product of ``u`` and ``v``.
7. ``Y = pdist(X, 'correlation')``
Computes the correlation distance between vectors u and v. This is
.. math::
1 - \\frac{(u - \\bar{u}) \\cdot (v - \\bar{v})}
{{\\|(u - \\bar{u})\\|}_2 {\\|(v - \\bar{v})\\|}_2}
where :math:`\\bar{v}` is the mean of the elements of vector v,
and :math:`x \\cdot y` is the dot product of :math:`x` and :math:`y`.
8. ``Y = pdist(X, 'hamming')``
Computes the normalized Hamming distance, or the proportion of
those vector elements between two n-vectors ``u`` and ``v``
which disagree. To save memory, the matrix ``X`` can be of type
boolean.
9. ``Y = pdist(X, 'jaccard')``
Computes the Jaccard distance between the points. Given two
vectors, ``u`` and ``v``, the Jaccard distance is the
proportion of those elements ``u[i]`` and ``v[i]`` that
disagree where at least one of them is non-zero.
10. ``Y = pdist(X, 'jensenshannon')``
Computes the Jensen-Shannon distance between two probability arrays.
Given two probability vectors, :math:`p` and :math:`q`, the
Jensen-Shannon distance is
.. math::
\\sqrt{\\frac{D(p \\parallel m) + D(q \\parallel m)}{2}}
where :math:`m` is the pointwise mean of :math:`p` and :math:`q`
and :math:`D` is the Kullback-Leibler divergence.
11. ``Y = pdist(X, 'chebyshev')``
Computes the Chebyshev distance between the points. The
Chebyshev distance between two n-vectors ``u`` and ``v`` is the
maximum norm-1 distance between their respective elements. More
precisely, the distance is given by
.. math::
d(u,v) = \\max_i {|u_i-v_i|}
12. ``Y = pdist(X, 'canberra')``
Computes the Canberra distance between the points. The
Canberra distance between two points ``u`` and ``v`` is
.. math::
d(u,v) = \\sum_i \\frac{|u_i-v_i|}
{|u_i|+|v_i|}
13. ``Y = pdist(X, 'braycurtis')``
Computes the Bray-Curtis distance between the points. The
Bray-Curtis distance between two points ``u`` and ``v`` is
.. math::
d(u,v) = \\frac{\\sum_i {|u_i-v_i|}}
{\\sum_i {|u_i+v_i|}}
14. ``Y = pdist(X, 'mahalanobis', VI=None)``
Computes the Mahalanobis distance between the points. The
Mahalanobis distance between two points ``u`` and ``v`` is
:math:`\\sqrt{(u-v)(1/V)(u-v)^T}` where :math:`(1/V)` (the ``VI``
variable) is the inverse covariance. If ``VI`` is not None,
``VI`` will be used as the inverse covariance matrix.
15. ``Y = pdist(X, 'yule')``
Computes the Yule distance between each pair of boolean
vectors. (see yule function documentation)
16. ``Y = pdist(X, 'matching')``
Synonym for 'hamming'.
17. ``Y = pdist(X, 'dice')``
Computes the Dice distance between each pair of boolean
vectors. (see dice function documentation)
18. ``Y = pdist(X, 'kulczynski1')``
Computes the kulczynski1 distance between each pair of
boolean vectors. (see kulczynski1 function documentation)
19. ``Y = pdist(X, 'rogerstanimoto')``
Computes the Rogers-Tanimoto distance between each pair of
boolean vectors. (see rogerstanimoto function documentation)
20. ``Y = pdist(X, 'russellrao')``
Computes the Russell-Rao distance between each pair of
boolean vectors. (see russellrao function documentation)
21. ``Y = pdist(X, 'sokalmichener')``
Computes the Sokal-Michener distance between each pair of
boolean vectors. (see sokalmichener function documentation)
22. ``Y = pdist(X, 'sokalsneath')``
Computes the Sokal-Sneath distance between each pair of
boolean vectors. (see sokalsneath function documentation)
23. ``Y = pdist(X, f)``
Computes the distance between all pairs of vectors in X
using the user supplied 2-arity function f. For example,
Euclidean distance between the vectors could be computed
as follows::
dm = pdist(X, lambda u, v: np.sqrt(((u-v)**2).sum()))
Note that you should avoid passing a reference to one of
the distance functions defined in this library. For example,::
dm = pdist(X, sokalsneath)
would calculate the pair-wise distances between the vectors in
X using the Python function sokalsneath. This would result in
sokalsneath being called :math:`{n \\choose 2}` times, which
is inefficient. Instead, the optimized C version is more
efficient, and we call it using the following syntax.::
dm = pdist(X, 'sokalsneath')
Examples
--------
>>> import numpy as np
>>> from scipy.spatial.distance import pdist
``x`` is an array of five points in three-dimensional space.
>>> x = np.array([[2, 0, 2], [2, 2, 3], [-2, 4, 5], [0, 1, 9], [2, 2, 4]])
``pdist(x)`` with no additional arguments computes the 10 pairwise
Euclidean distances:
>>> pdist(x)
array([2.23606798, 6.40312424, 7.34846923, 2.82842712, 4.89897949,
6.40312424, 1. , 5.38516481, 4.58257569, 5.47722558])
The following computes the pairwise Minkowski distances with ``p = 3.5``:
>>> pdist(x, metric='minkowski', p=3.5)
array([2.04898923, 5.1154929 , 7.02700737, 2.43802731, 4.19042714,
6.03956994, 1. , 4.45128103, 4.10636143, 5.0619695 ])
The pairwise city block or Manhattan distances:
>>> pdist(x, metric='cityblock')
array([ 3., 11., 10., 4., 8., 9., 1., 9., 7., 8.])
"""
# You can also call this as:
# Y = pdist(X, 'test_abc')
# where 'abc' is the metric being tested. This computes the distance
# between all pairs of vectors in X using the distance metric 'abc' but
# with a more succinct, verifiable, but less efficient implementation.
X = _asarray_validated(X, sparse_ok=False, objects_ok=True, mask_ok=True,
check_finite=False)
s = X.shape
if len(s) != 2:
raise ValueError('A 2-dimensional array must be passed.')
m, n = s
if callable(metric):
mstr = getattr(metric, '__name__', 'UnknownCustomMetric')
metric_info = _METRIC_ALIAS.get(mstr, None)
if metric_info is not None:
X, typ, kwargs = _validate_pdist_input(
X, m, n, metric_info, **kwargs)
return _pdist_callable(X, metric=metric, out=out, **kwargs)
elif isinstance(metric, str):
mstr = metric.lower()
metric_info = _METRIC_ALIAS.get(mstr, None)
if metric_info is not None:
pdist_fn = metric_info.pdist_func
_extra_windows_error_checks(X, out, (m * (m - 1) / 2,), **kwargs)
return pdist_fn(X, out=out, **kwargs)
elif mstr.startswith("test_"):
metric_info = _TEST_METRICS.get(mstr, None)
if metric_info is None:
raise ValueError(f'Unknown "Test" Distance Metric: {mstr[5:]}')
X, typ, kwargs = _validate_pdist_input(
X, m, n, metric_info, **kwargs)
return _pdist_callable(
X, metric=metric_info.dist_func, out=out, **kwargs)
else:
raise ValueError('Unknown Distance Metric: %s' % mstr)
else:
raise TypeError('2nd argument metric must be a string identifier '
'or a function.')
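The condensed layout documented above places ``dist(X[i], X[j])`` at index ``m * i + j - ((i + 2) * (i + 1)) // 2``. A pure-Python check (illustrative only, not part of the module) that this formula enumerates every condensed slot exactly once, in order:

```python
def condensed_index(m, i, j):
    # Index of the (i, j) pair (i < j) in the condensed distance vector
    # produced by pdist for m observations.
    return m * i + j - ((i + 2) * (i + 1)) // 2

m = 5
idx = [condensed_index(m, i, j) for i in range(m - 1) for j in range(i + 1, m)]
# The (i, j) pairs, visited in row-major order, hit 0 .. m*(m-1)//2 - 1.
assert idx == list(range(m * (m - 1) // 2))
```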
def squareform(X, force="no", checks=True):
"""
Convert a vector-form distance vector to a square-form distance
matrix, and vice-versa.
Parameters
----------
X : array_like
Either a condensed or redundant distance matrix.
force : str, optional
As with MATLAB(TM), if force is equal to ``'tovector'`` or
``'tomatrix'``, the input will be treated as a distance matrix or
distance vector respectively.
checks : bool, optional
If set to False, no checks will be made for matrix
symmetry nor zero diagonals. This is useful if it is known that
``X - X.T`` is small and ``diag(X)`` is close to zero.
These values are ignored anyway so they do not disrupt the
squareform transformation.
Returns
-------
Y : ndarray
If a condensed distance matrix is passed, a redundant one is
returned, or if a redundant one is passed, a condensed distance
matrix is returned.
Notes
-----
1. ``v = squareform(X)``
Given a square n-by-n symmetric distance matrix ``X``,
``v = squareform(X)`` returns a ``n * (n-1) / 2``
(i.e. binomial coefficient n choose 2) sized vector `v`
where :math:`v[{n \\choose 2} - {n-i \\choose 2} + (j-i-1)]`
is the distance between distinct points ``i`` and ``j``.
If ``X`` is non-square or asymmetric, an error is raised.
2. ``X = squareform(v)``
Given a ``n * (n-1) / 2`` sized vector ``v``
for some integer ``n >= 1`` encoding distances as described,
``X = squareform(v)`` returns a n-by-n distance matrix ``X``.
The ``X[i, j]`` and ``X[j, i]`` values are set to
:math:`v[{n \\choose 2} - {n-i \\choose 2} + (j-i-1)]`
and all diagonal elements are zero.
In SciPy 0.19.0, ``squareform`` stopped casting all input types to
float64, and started returning arrays of the same dtype as the input.
Examples
--------
>>> import numpy as np
>>> from scipy.spatial.distance import pdist, squareform
``x`` is an array of five points in three-dimensional space.
>>> x = np.array([[2, 0, 2], [2, 2, 3], [-2, 4, 5], [0, 1, 9], [2, 2, 4]])
``pdist(x)`` computes the Euclidean distances between each pair of
points in ``x``. The distances are returned in a one-dimensional
array with length ``5*(5 - 1)/2 = 10``.
>>> distvec = pdist(x)
>>> distvec
array([2.23606798, 6.40312424, 7.34846923, 2.82842712, 4.89897949,
6.40312424, 1. , 5.38516481, 4.58257569, 5.47722558])
``squareform(distvec)`` returns the 5x5 distance matrix.
>>> m = squareform(distvec)
>>> m
array([[0. , 2.23606798, 6.40312424, 7.34846923, 2.82842712],
[2.23606798, 0. , 4.89897949, 6.40312424, 1. ],
[6.40312424, 4.89897949, 0. , 5.38516481, 4.58257569],
[7.34846923, 6.40312424, 5.38516481, 0. , 5.47722558],
[2.82842712, 1. , 4.58257569, 5.47722558, 0. ]])
When given a square distance matrix ``m``, ``squareform(m)`` returns
the one-dimensional condensed distance vector associated with the
matrix. In this case, we recover ``distvec``.
>>> squareform(m)
array([2.23606798, 6.40312424, 7.34846923, 2.82842712, 4.89897949,
6.40312424, 1. , 5.38516481, 4.58257569, 5.47722558])
"""
X = np.ascontiguousarray(X)
s = X.shape
if force.lower() == 'tomatrix':
if len(s) != 1:
raise ValueError("Forcing 'tomatrix' but input X is not a "
"distance vector.")
elif force.lower() == 'tovector':
if len(s) != 2:
raise ValueError("Forcing 'tovector' but input X is not a "
"distance matrix.")
# X = squareform(v)
if len(s) == 1:
if s[0] == 0:
return np.zeros((1, 1), dtype=X.dtype)
# Grab the closest value to the square root of the number
# of elements times 2 to see if the number of elements
# is indeed a binomial coefficient.
d = int(np.ceil(np.sqrt(s[0] * 2)))
# Check that v is of valid dimensions.
if d * (d - 1) != s[0] * 2:
raise ValueError('Incompatible vector size. It must be a binomial '
'coefficient n choose 2 for some integer n >= 2.')
# Allocate memory for the distance matrix.
M = np.zeros((d, d), dtype=X.dtype)
# Since the C code does not support striding using strides.
# The dimensions are used instead.
X = _copy_array_if_base_present(X)
# Fill in the values of the distance matrix.
_distance_wrap.to_squareform_from_vector_wrap(M, X)
# Return the distance matrix.
return M
elif len(s) == 2:
if s[0] != s[1]:
raise ValueError('The matrix argument must be square.')
if checks:
is_valid_dm(X, throw=True, name='X')
# One-side of the dimensions is set here.
d = s[0]
if d <= 1:
return np.array([], dtype=X.dtype)
# Create a vector.
v = np.zeros((d * (d - 1)) // 2, dtype=X.dtype)
# Since the C code does not support striding using strides.
# The dimensions are used instead.
X = _copy_array_if_base_present(X)
# Convert the vector to squareform.
_distance_wrap.to_vector_from_squareform_wrap(X, v)
return v
else:
raise ValueError(('The first argument must be a one- or two-dimensional '
'array. A %d-dimensional array is not '
'permitted') % len(s))
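The C helpers do the actual packing; the vector-to-matrix direction can be sketched in pure Python for a vector of length ``n * (n - 1) / 2`` (an illustrative re-implementation using ``math.isqrt`` for an exact size check, not the optimized wrapper):

```python
import math

def to_square(v):
    # Recover n from len(v) == n * (n - 1) / 2, then unpack symmetrically
    # with a zero diagonal, mirroring squareform's vector-to-matrix path.
    d = math.isqrt(2 * len(v)) + 1
    if d * (d - 1) // 2 != len(v):
        raise ValueError('length is not a binomial coefficient n choose 2')
    M = [[0.0] * d for _ in range(d)]
    k = 0
    for i in range(d - 1):
        for j in range(i + 1, d):
            M[i][j] = M[j][i] = v[k]
            k += 1
    return M
```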
def is_valid_dm(D, tol=0.0, throw=False, name="D", warning=False):
"""
Return True if input array is a valid distance matrix.
Distance matrices must be 2-dimensional numpy arrays.
They must have a zero-diagonal, and they must be symmetric.
Parameters
----------
D : array_like
The candidate object to test for validity.
tol : float, optional
The distance matrix should be symmetric. `tol` is the maximum
difference between entries ``ij`` and ``ji`` for the distance
metric to be considered symmetric.
throw : bool, optional
An exception is thrown if the distance matrix passed is not valid.
name : str, optional
The name of the variable to be checked. This is useful if
throw is set to True so the offending variable can be identified
in the exception message when an exception is thrown.
warning : bool, optional
Instead of throwing an exception, a warning message is
issued.
Returns
-------
valid : bool
True if the variable `D` passed is a valid distance matrix.
Notes
-----
Small numerical differences in `D` and `D.T` and non-zeroness of
the diagonal are ignored if they are within the tolerance specified
by `tol`.
Examples
--------
>>> import numpy as np
>>> from scipy.spatial.distance import is_valid_dm
This matrix is a valid distance matrix.
>>> d = np.array([[0.0, 1.1, 1.2, 1.3],
... [1.1, 0.0, 1.0, 1.4],
... [1.2, 1.0, 0.0, 1.5],
... [1.3, 1.4, 1.5, 0.0]])
>>> is_valid_dm(d)
True
In the following examples, the input is not a valid distance matrix.
Not square:
>>> is_valid_dm([[0, 2, 2], [2, 0, 2]])
False
Nonzero diagonal element:
>>> is_valid_dm([[0, 1, 1], [1, 2, 3], [1, 3, 0]])
False
Not symmetric:
>>> is_valid_dm([[0, 1, 3], [2, 0, 1], [3, 1, 0]])
False
"""
D = np.asarray(D, order='c')
valid = True
try:
s = D.shape
if len(D.shape) != 2:
if name:
raise ValueError(('Distance matrix \'%s\' must have shape=2 '
'(i.e. be two-dimensional).') % name)
else:
raise ValueError('Distance matrix must have shape=2 (i.e. '
'be two-dimensional).')
if tol == 0.0:
if not (D == D.T).all():
if name:
raise ValueError(('Distance matrix \'%s\' must be '
'symmetric.') % name)
else:
raise ValueError('Distance matrix must be symmetric.')
if not (D[range(0, s[0]), range(0, s[0])] == 0).all():
if name:
raise ValueError(('Distance matrix \'%s\' diagonal must '
'be zero.') % name)
else:
raise ValueError('Distance matrix diagonal must be zero.')
else:
if not (D - D.T <= tol).all():
if name:
raise ValueError(('Distance matrix \'%s\' must be '
'symmetric within tolerance %5.5f.')
% (name, tol))
else:
raise ValueError('Distance matrix must be symmetric within'
' tolerance %5.5f.' % tol)
if not (D[range(0, s[0]), range(0, s[0])] <= tol).all():
if name:
raise ValueError(('Distance matrix \'%s\' diagonal must be'
' close to zero within tolerance %5.5f.')
% (name, tol))
else:
raise ValueError(('Distance matrix diagonal must be'
' close to zero within tolerance %5.5f.')
% tol)
except Exception as e:
if throw:
raise
if warning:
warnings.warn(str(e))
valid = False
return valid
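The checks above boil down to three predicates: square shape, symmetry within ``tol``, and a near-zero diagonal. A list-based sketch of the same test (note it uses ``abs`` for the symmetry comparison, a slight strengthening of the one-sided ``D - D.T <= tol`` check above):

```python
def is_valid_dm_sketch(D, tol=0.0):
    # D is a list of lists; return True iff it is square, symmetric
    # within tol, and has a diagonal within tol of zero.
    n = len(D)
    if any(len(row) != n for row in D):
        return False
    for i in range(n):
        if abs(D[i][i]) > tol:
            return False
        for j in range(i + 1, n):
            if abs(D[i][j] - D[j][i]) > tol:
                return False
    return True
```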
def is_valid_y(y, warning=False, throw=False, name=None):
"""
Return True if the input array is a valid condensed distance matrix.
Condensed distance matrices must be 1-dimensional numpy arrays.
Their length must be a binomial coefficient :math:`{n \\choose 2}`
for some positive integer n.
Parameters
----------
y : array_like
The condensed distance matrix.
warning : bool, optional
Invokes a warning if the variable passed is not a valid
condensed distance matrix. The warning message explains why
the distance matrix is not valid. `name` is used when
referencing the offending variable.
throw : bool, optional
Throws an exception if the variable passed is not a valid
condensed distance matrix.
name : str, optional
Used when referencing the offending variable in the
warning or exception message.
Returns
-------
bool
True if the input array is a valid condensed distance matrix,
False otherwise.
Examples
--------
>>> from scipy.spatial.distance import is_valid_y
This vector is a valid condensed distance matrix. The length is 6,
which corresponds to ``n = 4``, since ``4*(4 - 1)/2`` is 6.
>>> v = [1.0, 1.2, 1.0, 0.5, 1.3, 0.9]
>>> is_valid_y(v)
True
An input vector with length, say, 7, is not a valid condensed distance
matrix.
>>> is_valid_y([1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7])
False
"""
y = np.asarray(y, order='c')
valid = True
try:
if len(y.shape) != 1:
if name:
raise ValueError(('Condensed distance matrix \'%s\' must '
'have shape=1 (i.e. be one-dimensional).')
% name)
else:
raise ValueError('Condensed distance matrix must have shape=1 '
'(i.e. be one-dimensional).')
n = y.shape[0]
d = int(np.ceil(np.sqrt(n * 2)))
if (d * (d - 1) / 2) != n:
if name:
raise ValueError(('Length n of condensed distance matrix '
'\'%s\' must be a binomial coefficient, i.e. '
'there must be a k such that '
'(k \\choose 2)=n)!') % name)
else:
raise ValueError('Length n of condensed distance matrix must '
'be a binomial coefficient, i.e. there must '
'be a k such that (k \\choose 2)=n)!')
except Exception as e:
if throw:
raise
if warning:
warnings.warn(str(e))
valid = False
return valid
def num_obs_dm(d):
"""
Return the number of original observations that correspond to a
square, redundant distance matrix.
Parameters
----------
d : array_like
The target distance matrix.
Returns
-------
num_obs_dm : int
The number of observations in the redundant distance matrix.
"""
d = np.asarray(d, order='c')
is_valid_dm(d, tol=np.inf, throw=True, name='d')
return d.shape[0]
def num_obs_y(Y):
"""
Return the number of original observations that correspond to a
condensed distance matrix.
Parameters
----------
Y : array_like
Condensed distance matrix.
Returns
-------
n : int
The number of observations in the condensed distance matrix `Y`.
"""
Y = np.asarray(Y, order='c')
is_valid_y(Y, throw=True, name='Y')
k = Y.shape[0]
if k == 0:
raise ValueError("The number of observations cannot be determined on "
"an empty distance matrix.")
d = int(np.ceil(np.sqrt(k * 2)))
if (d * (d - 1) / 2) != k:
raise ValueError("Invalid condensed distance matrix passed. Must be "
"some k where k=(n choose 2) for some n >= 2.")
return d
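The inversion performed here, recovering ``n`` from ``k = n * (n - 1) / 2``, can also be written exactly with integer arithmetic (a sketch equivalent to the ``ceil(sqrt(2 * k))`` trick; unlike `num_obs_y`, it accepts ``k == 0``):

```python
import math

def num_obs_from_condensed(k):
    # Solve n * (n - 1) / 2 == k for n via the quadratic formula,
    # using math.isqrt so the arithmetic stays exact for large k.
    n = (1 + math.isqrt(1 + 8 * k)) // 2
    if n * (n - 1) // 2 != k:
        raise ValueError('not a binomial coefficient n choose 2')
    return n
```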
def _prepare_out_argument(out, dtype, expected_shape):
if out is None:
return np.empty(expected_shape, dtype=dtype)
if out.shape != expected_shape:
raise ValueError("Output array has incorrect shape.")
if not out.flags.c_contiguous:
raise ValueError("Output array must be C-contiguous.")
if out.dtype != np.double:
raise ValueError("Output array must be double type.")
return out
def _pdist_callable(X, *, out, metric, **kwargs):
n = X.shape[0]
out_size = (n * (n - 1)) // 2
dm = _prepare_out_argument(out, np.double, (out_size,))
k = 0
for i in range(X.shape[0] - 1):
for j in range(i + 1, X.shape[0]):
dm[k] = metric(X[i], X[j], **kwargs)
k += 1
return dm
def _cdist_callable(XA, XB, *, out, metric, **kwargs):
mA = XA.shape[0]
mB = XB.shape[0]
dm = _prepare_out_argument(out, np.double, (mA, mB))
for i in range(mA):
for j in range(mB):
dm[i, j] = metric(XA[i], XB[j], **kwargs)
return dm
def cdist(XA, XB, metric='euclidean', *, out=None, **kwargs):
"""
Compute distance between each pair of the two collections of inputs.
See Notes for common calling conventions.
Parameters
----------
XA : array_like
An :math:`m_A` by :math:`n` array of :math:`m_A`
original observations in an :math:`n`-dimensional space.
Inputs are converted to float type.
XB : array_like
An :math:`m_B` by :math:`n` array of :math:`m_B`
original observations in an :math:`n`-dimensional space.
Inputs are converted to float type.
metric : str or callable, optional
The distance metric to use. If a string, the distance function can be
'braycurtis', 'canberra', 'chebyshev', 'cityblock', 'correlation',
'cosine', 'dice', 'euclidean', 'hamming', 'jaccard', 'jensenshannon',
'kulczynski1', 'mahalanobis', 'matching', 'minkowski',
'rogerstanimoto', 'russellrao', 'seuclidean', 'sokalmichener',
'sokalsneath', 'sqeuclidean', 'yule'.
**kwargs : dict, optional
Extra arguments to `metric`: refer to each metric documentation for a
list of all possible arguments.
Some possible arguments:
p : scalar
The p-norm to apply for Minkowski, weighted and unweighted.
Default: 2.
w : array_like
The weight vector for metrics that support weights (e.g., Minkowski).
V : array_like
The variance vector for standardized Euclidean.
Default: var(vstack([XA, XB]), axis=0, ddof=1)
VI : array_like
The inverse of the covariance matrix for Mahalanobis.
Default: inv(cov(vstack([XA, XB]).T)).T
out : ndarray
The output array
If not None, the distance matrix Y is stored in this array.
Returns
-------
Y : ndarray
A :math:`m_A` by :math:`m_B` distance matrix is returned.
For each :math:`i` and :math:`j`, the metric
``dist(u=XA[i], v=XB[j])`` is computed and stored in the
:math:`ij` th entry.
Raises
------
ValueError
An exception is thrown if `XA` and `XB` do not have
the same number of columns.
Notes
-----
The following are common calling conventions:
1. ``Y = cdist(XA, XB, 'euclidean')``
Computes the distance between :math:`m` points using
Euclidean distance (2-norm) as the distance metric between the
points. The points are arranged as :math:`m`
:math:`n`-dimensional row vectors in the matrix X.
2. ``Y = cdist(XA, XB, 'minkowski', p=2.)``
Computes the distances using the Minkowski distance
:math:`\\|u-v\\|_p` (:math:`p`-norm) where :math:`p > 0` (note
that this is only a quasi-metric if :math:`0 < p < 1`).
3. ``Y = cdist(XA, XB, 'cityblock')``
Computes the city block or Manhattan distance between the
points.
4. ``Y = cdist(XA, XB, 'seuclidean', V=None)``
Computes the standardized Euclidean distance. The standardized
Euclidean distance between two n-vectors ``u`` and ``v`` is
.. math::
\\sqrt{\\sum {(u_i-v_i)^2 / V[x_i]}}.
V is the variance vector; V[i] is the variance computed over all
the i'th components of the points. If not passed, it is
automatically computed.
5. ``Y = cdist(XA, XB, 'sqeuclidean')``
Computes the squared Euclidean distance :math:`\\|u-v\\|_2^2` between
the vectors.
6. ``Y = cdist(XA, XB, 'cosine')``
Computes the cosine distance between vectors u and v,
.. math::
1 - \\frac{u \\cdot v}
{{\\|u\\|}_2 {\\|v\\|}_2}
where :math:`\\|*\\|_2` is the 2-norm of its argument ``*``, and
:math:`u \\cdot v` is the dot product of :math:`u` and :math:`v`.
7. ``Y = cdist(XA, XB, 'correlation')``
Computes the correlation distance between vectors u and v. This is
.. math::
1 - \\frac{(u - \\bar{u}) \\cdot (v - \\bar{v})}
{{\\|(u - \\bar{u})\\|}_2 {\\|(v - \\bar{v})\\|}_2}
where :math:`\\bar{v}` is the mean of the elements of vector v,
and :math:`x \\cdot y` is the dot product of :math:`x` and :math:`y`.
8. ``Y = cdist(XA, XB, 'hamming')``
Computes the normalized Hamming distance, or the proportion of
those vector elements between two n-vectors ``u`` and ``v``
which disagree. To save memory, the matrix ``X`` can be of type
boolean.
9. ``Y = cdist(XA, XB, 'jaccard')``
Computes the Jaccard distance between the points. Given two
vectors, ``u`` and ``v``, the Jaccard distance is the
proportion of those elements ``u[i]`` and ``v[i]`` that
disagree where at least one of them is non-zero.
10. ``Y = cdist(XA, XB, 'jensenshannon')``
Computes the Jensen-Shannon distance between two probability arrays.
Given two probability vectors, :math:`p` and :math:`q`, the
Jensen-Shannon distance is
.. math::
\\sqrt{\\frac{D(p \\parallel m) + D(q \\parallel m)}{2}}
where :math:`m` is the pointwise mean of :math:`p` and :math:`q`
and :math:`D` is the Kullback-Leibler divergence.
11. ``Y = cdist(XA, XB, 'chebyshev')``
Computes the Chebyshev distance between the points. The
Chebyshev distance between two n-vectors ``u`` and ``v`` is the
maximum norm-1 distance between their respective elements. More
precisely, the distance is given by
.. math::
d(u,v) = \\max_i {|u_i-v_i|}.
12. ``Y = cdist(XA, XB, 'canberra')``
Computes the Canberra distance between the points. The
Canberra distance between two points ``u`` and ``v`` is
.. math::
d(u,v) = \\sum_i \\frac{|u_i-v_i|}
{|u_i|+|v_i|}.
13. ``Y = cdist(XA, XB, 'braycurtis')``
Computes the Bray-Curtis distance between the points. The
Bray-Curtis distance between two points ``u`` and ``v`` is
.. math::
d(u,v) = \\frac{\\sum_i (|u_i-v_i|)}
{\\sum_i (|u_i+v_i|)}
14. ``Y = cdist(XA, XB, 'mahalanobis', VI=None)``
Computes the Mahalanobis distance between the points. The
Mahalanobis distance between two points ``u`` and ``v`` is
:math:`\\sqrt{(u-v)(1/V)(u-v)^T}` where :math:`(1/V)` (the ``VI``
variable) is the inverse covariance. If ``VI`` is not None,
``VI`` will be used as the inverse covariance matrix.
15. ``Y = cdist(XA, XB, 'yule')``
Computes the Yule distance between the boolean
vectors. (see `yule` function documentation)
16. ``Y = cdist(XA, XB, 'matching')``
Synonym for 'hamming'.
17. ``Y = cdist(XA, XB, 'dice')``
Computes the Dice distance between the boolean vectors. (see
`dice` function documentation)
18. ``Y = cdist(XA, XB, 'kulczynski1')``
Computes the kulczynski distance between the boolean
vectors. (see `kulczynski1` function documentation)
19. ``Y = cdist(XA, XB, 'rogerstanimoto')``
Computes the Rogers-Tanimoto distance between the boolean
vectors. (see `rogerstanimoto` function documentation)
20. ``Y = cdist(XA, XB, 'russellrao')``
Computes the Russell-Rao distance between the boolean
vectors. (see `russellrao` function documentation)
21. ``Y = cdist(XA, XB, 'sokalmichener')``
Computes the Sokal-Michener distance between the boolean
vectors. (see `sokalmichener` function documentation)
22. ``Y = cdist(XA, XB, 'sokalsneath')``
Computes the Sokal-Sneath distance between the vectors. (see
`sokalsneath` function documentation)
23. ``Y = cdist(XA, XB, f)``
Computes the distance between all pairs of vectors in X
using the user supplied 2-arity function f. For example,
Euclidean distance between the vectors could be computed
as follows::
dm = cdist(XA, XB, lambda u, v: np.sqrt(((u-v)**2).sum()))
Note that you should avoid passing a reference to one of
the distance functions defined in this library. For example,::
dm = cdist(XA, XB, sokalsneath)
would calculate the pair-wise distances between the vectors in
X using the Python function `sokalsneath`. This would result in
sokalsneath being called :math:`{n \\choose 2}` times, which
is inefficient. Instead, the optimized C version is more
efficient, and we call it using the following syntax::
dm = cdist(XA, XB, 'sokalsneath')
Examples
--------
Find the Euclidean distances between four 2-D coordinates:
>>> from scipy.spatial import distance
>>> import numpy as np
>>> coords = [(35.0456, -85.2672),
... (35.1174, -89.9711),
... (35.9728, -83.9422),
... (36.1667, -86.7833)]
>>> distance.cdist(coords, coords, 'euclidean')
array([[ 0. , 4.7044, 1.6172, 1.8856],
[ 4.7044, 0. , 6.0893, 3.3561],
[ 1.6172, 6.0893, 0. , 2.8477],
[ 1.8856, 3.3561, 2.8477, 0. ]])
Find the Manhattan distance from a 3-D point to the corners of the unit
cube:
>>> a = np.array([[0, 0, 0],
... [0, 0, 1],
... [0, 1, 0],
... [0, 1, 1],
... [1, 0, 0],
... [1, 0, 1],
... [1, 1, 0],
... [1, 1, 1]])
>>> b = np.array([[ 0.1, 0.2, 0.4]])
>>> distance.cdist(a, b, 'cityblock')
array([[ 0.7],
[ 0.9],
[ 1.3],
[ 1.5],
[ 1.5],
[ 1.7],
[ 2.1],
[ 2.3]])
"""
# You can also call this as:
# Y = cdist(XA, XB, 'test_abc')
# where 'abc' is the metric being tested. This computes the distance
# between all pairs of vectors in XA and XB using the distance metric 'abc'
# but with a more succinct, verifiable, but less efficient implementation.
XA = np.asarray(XA)
XB = np.asarray(XB)
s = XA.shape
sB = XB.shape
if len(s) != 2:
raise ValueError('XA must be a 2-dimensional array.')
if len(sB) != 2:
raise ValueError('XB must be a 2-dimensional array.')
if s[1] != sB[1]:
        raise ValueError('XA and XB must have the same number of columns '
                         '(i.e. feature dimension).')
mA = s[0]
mB = sB[0]
n = s[1]
if callable(metric):
mstr = getattr(metric, '__name__', 'Unknown')
metric_info = _METRIC_ALIAS.get(mstr, None)
if metric_info is not None:
XA, XB, typ, kwargs = _validate_cdist_input(
XA, XB, mA, mB, n, metric_info, **kwargs)
return _cdist_callable(XA, XB, metric=metric, out=out, **kwargs)
elif isinstance(metric, str):
mstr = metric.lower()
metric_info = _METRIC_ALIAS.get(mstr, None)
if metric_info is not None:
cdist_fn = metric_info.cdist_func
_extra_windows_error_checks(XA, out, (mA, mB), **kwargs)
return cdist_fn(XA, XB, out=out, **kwargs)
elif mstr.startswith("test_"):
metric_info = _TEST_METRICS.get(mstr, None)
if metric_info is None:
raise ValueError(f'Unknown "Test" Distance Metric: {mstr[5:]}')
XA, XB, typ, kwargs = _validate_cdist_input(
XA, XB, mA, mB, n, metric_info, **kwargs)
return _cdist_callable(
XA, XB, metric=metric_info.dist_func, out=out, **kwargs)
else:
raise ValueError('Unknown Distance Metric: %s' % mstr)
else:
raise TypeError('2nd argument metric must be a string identifier '
'or a function.')
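The callable-vs-string dispatch above can be illustrated with a numpy-only sketch (the helper `naive_cdist` below is hypothetical, not part of scipy): it loops over every (row of XA, row of XB) pair the way the callable path does, which is exactly the per-pair Python-call overhead the docstring warns about when a function such as `sokalsneath` is passed instead of its string name.

```python
import numpy as np

def naive_cdist(XA, XB, metric):
    """Hypothetical reference loop: one metric call per (row of XA, row of XB)."""
    XA = np.asarray(XA, dtype=float)
    XB = np.asarray(XB, dtype=float)
    out = np.empty((XA.shape[0], XB.shape[0]))
    for i, u in enumerate(XA):
        for j, v in enumerate(XB):
            out[i, j] = metric(u, v)
    return out

XA = [[0.0, 0.0], [1.0, 1.0]]
XB = [[3.0, 4.0]]
# Euclidean distance via a user-supplied 2-arity callable
d = naive_cdist(XA, XB, lambda u, v: np.sqrt(((u - v) ** 2).sum()))
```

The string form (`cdist(XA, XB, 'euclidean')`) produces the same numbers from the optimized C path without the per-pair Python calls.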
# ---------------------------------------------------------------------------
# File: scipy-main/scipy/spatial/tests/test_distance.py (scipy repo)
# ---------------------------------------------------------------------------
#
# Author: Damian Eads
# Date: April 17, 2008
#
# Copyright (C) 2008 Damian Eads
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
#
# 3. The name of the author may not be used to endorse or promote
# products derived from this software without specific prior
# written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS
# OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
# GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import sys
import os.path
from functools import wraps, partial
import weakref
import numpy as np
import warnings
from numpy.linalg import norm
from numpy.testing import (verbose, assert_,
assert_array_equal, assert_equal,
assert_almost_equal, assert_allclose,
break_cycles, IS_PYPY)
import pytest
from scipy.spatial.distance import (
squareform, pdist, cdist, num_obs_y, num_obs_dm, is_valid_dm, is_valid_y,
_validate_vector, _METRICS_NAMES)
# these were missing: chebyshev cityblock
# jensenshannon and seuclidean are referenced by string name.
from scipy.spatial.distance import (braycurtis, canberra, chebyshev, cityblock,
correlation, cosine, dice, euclidean,
hamming, jaccard, jensenshannon,
kulczynski1, mahalanobis,
minkowski, rogerstanimoto,
russellrao, seuclidean, sokalmichener, # noqa: F401
sokalsneath, sqeuclidean, yule)
@pytest.fixture(params=_METRICS_NAMES, scope="session")
def metric(request):
"""
Fixture for all metrics in scipy.spatial.distance
"""
return request.param
_filenames = [
"cdist-X1.txt",
"cdist-X2.txt",
"iris.txt",
"pdist-boolean-inp.txt",
"pdist-chebyshev-ml-iris.txt",
"pdist-chebyshev-ml.txt",
"pdist-cityblock-ml-iris.txt",
"pdist-cityblock-ml.txt",
"pdist-correlation-ml-iris.txt",
"pdist-correlation-ml.txt",
"pdist-cosine-ml-iris.txt",
"pdist-cosine-ml.txt",
"pdist-double-inp.txt",
"pdist-euclidean-ml-iris.txt",
"pdist-euclidean-ml.txt",
"pdist-hamming-ml.txt",
"pdist-jaccard-ml.txt",
"pdist-jensenshannon-ml-iris.txt",
"pdist-jensenshannon-ml.txt",
"pdist-minkowski-3.2-ml-iris.txt",
"pdist-minkowski-3.2-ml.txt",
"pdist-minkowski-5.8-ml-iris.txt",
"pdist-seuclidean-ml-iris.txt",
"pdist-seuclidean-ml.txt",
"pdist-spearman-ml.txt",
"random-bool-data.txt",
"random-double-data.txt",
"random-int-data.txt",
"random-uint-data.txt",
]
_tdist = np.array([[0, 662, 877, 255, 412, 996],
[662, 0, 295, 468, 268, 400],
[877, 295, 0, 754, 564, 138],
[255, 468, 754, 0, 219, 869],
[412, 268, 564, 219, 0, 669],
[996, 400, 138, 869, 669, 0]], dtype='double')
_ytdist = squareform(_tdist)
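For reference, `squareform` applied to a symmetric, zero-diagonal matrix such as `_tdist` keeps only the strict upper triangle, read row by row, as a condensed vector of n*(n-1)/2 entries. A numpy-only sketch of that convention:

```python
import numpy as np

m = np.array([[0., 1., 2.],
              [1., 0., 3.],
              [2., 3., 0.]])
# condensed form: strict upper triangle, row-major -> n*(n-1)/2 entries
condensed = m[np.triu_indices(3, k=1)]
```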
# A hashmap of expected output arrays for the tests. These arrays
# come from a list of text files, which are read prior to testing.
# Each test loads inputs and outputs from this dictionary.
eo = {}
def load_testing_files():
for fn in _filenames:
name = fn.replace(".txt", "").replace("-ml", "")
fqfn = os.path.join(os.path.dirname(__file__), 'data', fn)
        with open(fqfn) as fp:
            eo[name] = np.loadtxt(fp)
eo['pdist-boolean-inp'] = np.bool_(eo['pdist-boolean-inp'])
eo['random-bool-data'] = np.bool_(eo['random-bool-data'])
eo['random-float32-data'] = np.float32(eo['random-double-data'])
eo['random-int-data'] = np.int_(eo['random-int-data'])
eo['random-uint-data'] = np.uint(eo['random-uint-data'])
load_testing_files()
def _is_32bit():
return np.intp(0).itemsize < 8
def _chk_asarrays(arrays, axis=None):
arrays = [np.asanyarray(a) for a in arrays]
if axis is None:
# np < 1.10 ravel removes subclass from arrays
arrays = [np.ravel(a) if a.ndim != 1 else a
for a in arrays]
axis = 0
arrays = tuple(np.atleast_1d(a) for a in arrays)
if axis < 0:
if not all(a.ndim == arrays[0].ndim for a in arrays):
raise ValueError("array ndim must be the same for neg axis")
axis = range(arrays[0].ndim)[axis]
return arrays + (axis,)
def _chk_weights(arrays, weights=None, axis=None,
force_weights=False, simplify_weights=True,
pos_only=False, neg_check=False,
nan_screen=False, mask_screen=False,
ddof=None):
chked = _chk_asarrays(arrays, axis=axis)
arrays, axis = chked[:-1], chked[-1]
simplify_weights = simplify_weights and not force_weights
if not force_weights and mask_screen:
force_weights = any(np.ma.getmask(a) is not np.ma.nomask for a in arrays)
if nan_screen:
has_nans = [np.isnan(np.sum(a)) for a in arrays]
if any(has_nans):
mask_screen = True
force_weights = True
arrays = tuple(np.ma.masked_invalid(a) if has_nan else a
for a, has_nan in zip(arrays, has_nans))
if weights is not None:
weights = np.asanyarray(weights)
elif force_weights:
weights = np.ones(arrays[0].shape[axis])
else:
return arrays + (weights, axis)
if ddof:
weights = _freq_weights(weights)
if mask_screen:
weights = _weight_masked(arrays, weights, axis)
if not all(weights.shape == (a.shape[axis],) for a in arrays):
raise ValueError("weights shape must match arrays along axis")
if neg_check and (weights < 0).any():
raise ValueError("weights cannot be negative")
if pos_only:
pos_weights = np.nonzero(weights > 0)[0]
if pos_weights.size < weights.size:
arrays = tuple(np.take(a, pos_weights, axis=axis) for a in arrays)
weights = weights[pos_weights]
if simplify_weights and (weights == 1).all():
weights = None
return arrays + (weights, axis)
def _freq_weights(weights):
if weights is None:
return weights
int_weights = weights.astype(int)
if (weights != int_weights).any():
raise ValueError("frequency (integer count-type) weights required %s" % weights)
return int_weights
def _weight_masked(arrays, weights, axis):
if axis is None:
axis = 0
weights = np.asanyarray(weights)
for a in arrays:
axis_mask = np.ma.getmask(a)
if axis_mask is np.ma.nomask:
continue
if a.ndim > 1:
not_axes = tuple(i for i in range(a.ndim) if i != axis)
axis_mask = axis_mask.any(axis=not_axes)
weights *= 1 - axis_mask.astype(int)
return weights
def _rand_split(arrays, weights, axis, split_per, seed=None):
# Coerce `arrays` to float64 if integer, to avoid nan-to-integer issues
arrays = [arr.astype(np.float64) if np.issubdtype(arr.dtype, np.integer)
else arr for arr in arrays]
# inverse operation for stats.collapse_weights
weights = np.array(weights, dtype=np.float64) # modified inplace; need a copy
seeded_rand = np.random.RandomState(seed)
def mytake(a, ix, axis):
record = np.asanyarray(np.take(a, ix, axis=axis))
return record.reshape([a.shape[i] if i != axis else 1
for i in range(a.ndim)])
n_obs = arrays[0].shape[axis]
assert all(a.shape[axis] == n_obs for a in arrays), "data must be aligned on sample axis"
for i in range(int(split_per) * n_obs):
split_ix = seeded_rand.randint(n_obs + i)
prev_w = weights[split_ix]
q = seeded_rand.rand()
weights[split_ix] = q * prev_w
weights = np.append(weights, (1. - q) * prev_w)
arrays = [np.append(a, mytake(a, split_ix, axis=axis),
axis=axis) for a in arrays]
return arrays, weights
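The invariant `_rand_split` relies on (assuming a weighted statistic such as the mean) is that splitting one observation into two copies whose weights sum to the original leaves the statistic unchanged; a minimal numpy sketch:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 1.0, 1.0])
q = 0.3  # arbitrary split fraction, as drawn by seeded_rand.rand() above
x_split = np.append(x, x[2])                  # duplicate observation 2
w_split = w.copy()
w_split[2] = q * w[2]                         # original copy keeps weight q*w
w_split = np.append(w_split, (1.0 - q) * w[2])  # new copy gets the remainder
m_before = np.average(x, weights=w)
m_after = np.average(x_split, weights=w_split)
```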
def _rough_check(a, b, compare_assert=partial(assert_allclose, atol=1e-5),
key=lambda x: x, w=None):
check_a = key(a)
check_b = key(b)
try:
if np.array(check_a != check_b).any(): # try strict equality for string types
compare_assert(check_a, check_b)
except AttributeError: # masked array
compare_assert(check_a, check_b)
except (TypeError, ValueError): # nested data structure
for a_i, b_i in zip(check_a, check_b):
_rough_check(a_i, b_i, compare_assert=compare_assert)
# diff from test_stats:
# n_args=2, weight_arg='w', default_axis=None
# ma_safe = False, nan_safe = False
def _weight_checked(fn, n_args=2, default_axis=None, key=lambda x: x, weight_arg='w',
squeeze=True, silent=False,
ones_test=True, const_test=True, dup_test=True,
split_test=True, dud_test=True, ma_safe=False, ma_very_safe=False, nan_safe=False,
split_per=1.0, seed=0, compare_assert=partial(assert_allclose, atol=1e-5)):
"""runs fn on its arguments 2 or 3 ways, checks that the results are the same,
then returns the same thing it would have returned before"""
@wraps(fn)
def wrapped(*args, **kwargs):
result = fn(*args, **kwargs)
arrays = args[:n_args]
rest = args[n_args:]
weights = kwargs.get(weight_arg, None)
axis = kwargs.get('axis', default_axis)
chked = _chk_weights(arrays, weights=weights, axis=axis, force_weights=True, mask_screen=True)
arrays, weights, axis = chked[:-2], chked[-2], chked[-1]
if squeeze:
arrays = [np.atleast_1d(a.squeeze()) for a in arrays]
try:
            # WEIGHTS CHECK 1: EQUAL WEIGHTED OBSERVATIONS
args = tuple(arrays) + rest
if ones_test:
kwargs[weight_arg] = weights
_rough_check(result, fn(*args, **kwargs), key=key)
if const_test:
kwargs[weight_arg] = weights * 101.0
_rough_check(result, fn(*args, **kwargs), key=key)
kwargs[weight_arg] = weights * 0.101
try:
_rough_check(result, fn(*args, **kwargs), key=key)
except Exception as e:
raise type(e)((e, arrays, weights)) from e
# WEIGHTS CHECK 2: ADDL 0-WEIGHTED OBS
if dud_test:
# add randomly resampled rows, weighted at 0
dud_arrays, dud_weights = _rand_split(arrays, weights, axis, split_per=split_per, seed=seed)
dud_weights[:weights.size] = weights # not exactly 1 because of masked arrays
dud_weights[weights.size:] = 0
dud_args = tuple(dud_arrays) + rest
kwargs[weight_arg] = dud_weights
_rough_check(result, fn(*dud_args, **kwargs), key=key)
# increase the value of those 0-weighted rows
for a in dud_arrays:
indexer = [slice(None)] * a.ndim
indexer[axis] = slice(weights.size, None)
indexer = tuple(indexer)
a[indexer] = a[indexer] * 101
dud_args = tuple(dud_arrays) + rest
_rough_check(result, fn(*dud_args, **kwargs), key=key)
# set those 0-weighted rows to NaNs
for a in dud_arrays:
indexer = [slice(None)] * a.ndim
indexer[axis] = slice(weights.size, None)
indexer = tuple(indexer)
a[indexer] = a[indexer] * np.nan
if kwargs.get("nan_policy", None) == "omit" and nan_safe:
dud_args = tuple(dud_arrays) + rest
_rough_check(result, fn(*dud_args, **kwargs), key=key)
# mask out those nan values
if ma_safe:
dud_arrays = [np.ma.masked_invalid(a) for a in dud_arrays]
dud_args = tuple(dud_arrays) + rest
_rough_check(result, fn(*dud_args, **kwargs), key=key)
if ma_very_safe:
kwargs[weight_arg] = None
_rough_check(result, fn(*dud_args, **kwargs), key=key)
del dud_arrays, dud_args, dud_weights
# WEIGHTS CHECK 3: DUPLICATE DATA (DUMB SPLITTING)
if dup_test:
dup_arrays = [np.append(a, a, axis=axis) for a in arrays]
dup_weights = np.append(weights, weights) / 2.0
dup_args = tuple(dup_arrays) + rest
kwargs[weight_arg] = dup_weights
_rough_check(result, fn(*dup_args, **kwargs), key=key)
del dup_args, dup_arrays, dup_weights
# WEIGHT CHECK 3: RANDOM SPLITTING
if split_test and split_per > 0:
split_arrays, split_weights = _rand_split(arrays, weights, axis, split_per=split_per, seed=seed)
split_args = tuple(split_arrays) + rest
kwargs[weight_arg] = split_weights
_rough_check(result, fn(*split_args, **kwargs), key=key)
except NotImplementedError as e:
# when some combination of arguments makes weighting impossible,
# this is the desired response
if not silent:
warnings.warn(f"{fn.__name__} NotImplemented weights: {e}")
return result
return wrapped
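The duplicate-data check above (WEIGHTS CHECK 3) encodes the property that repeating every observation while halving its weight leaves a weighted distance unchanged; a numpy-only weighted Hamming distance (`whamming_np` is a hypothetical helper mirroring scipy's definition) shows the idea:

```python
import numpy as np

def whamming_np(u, v, w):
    # weighted fraction of disagreeing entries
    u, v, w = np.asarray(u), np.asarray(v), np.asarray(w, dtype=float)
    return float(np.sum(w * (u != v)) / np.sum(w))

u = np.array([1, 0, 1, 1])
v = np.array([1, 1, 0, 1])
w = np.ones(4)
d_plain = whamming_np(u, v, w)
# duplicate every observation, halve every weight
d_dup = whamming_np(np.tile(u, 2), np.tile(v, 2), np.append(w, w) / 2.0)
```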
wcdist = _weight_checked(cdist, default_axis=1, squeeze=False)
wcdist_no_const = _weight_checked(cdist, default_axis=1, squeeze=False, const_test=False)
wpdist = _weight_checked(pdist, default_axis=1, squeeze=False, n_args=1)
wpdist_no_const = _weight_checked(pdist, default_axis=1, squeeze=False, const_test=False, n_args=1)
wrogerstanimoto = _weight_checked(rogerstanimoto)
wmatching = whamming = _weight_checked(hamming, dud_test=False)
wyule = _weight_checked(yule)
wdice = _weight_checked(dice)
wcityblock = _weight_checked(cityblock)
wchebyshev = _weight_checked(chebyshev)
wcosine = _weight_checked(cosine)
wcorrelation = _weight_checked(correlation)
wkulczynski1 = _weight_checked(kulczynski1)
wjaccard = _weight_checked(jaccard)
weuclidean = _weight_checked(euclidean, const_test=False)
wsqeuclidean = _weight_checked(sqeuclidean, const_test=False)
wbraycurtis = _weight_checked(braycurtis)
wcanberra = _weight_checked(canberra, const_test=False)
wsokalsneath = _weight_checked(sokalsneath)
wsokalmichener = _weight_checked(sokalmichener)
wrussellrao = _weight_checked(russellrao)
class TestCdist:
def setup_method(self):
self.rnd_eo_names = ['random-float32-data', 'random-int-data',
'random-uint-data', 'random-double-data',
'random-bool-data']
self.valid_upcasts = {'bool': [np.uint, np.int_, np.float32, np.double],
'uint': [np.int_, np.float32, np.double],
'int': [np.float32, np.double],
'float32': [np.double]}
def test_cdist_extra_args(self, metric):
# Tests that args and kwargs are correctly handled
X1 = [[1., 2., 3.], [1.2, 2.3, 3.4], [2.2, 2.3, 4.4]]
X2 = [[7., 5., 8.], [7.5, 5.8, 8.4], [5.5, 5.8, 4.4]]
kwargs = {"N0tV4l1D_p4raM": 3.14, "w": np.arange(3)}
args = [3.14] * 200
with pytest.raises(TypeError):
cdist(X1, X2, metric=metric, **kwargs)
with pytest.raises(TypeError):
cdist(X1, X2, metric=eval(metric), **kwargs)
with pytest.raises(TypeError):
cdist(X1, X2, metric="test_" + metric, **kwargs)
with pytest.raises(TypeError):
cdist(X1, X2, metric=metric, *args)
with pytest.raises(TypeError):
cdist(X1, X2, metric=eval(metric), *args)
with pytest.raises(TypeError):
cdist(X1, X2, metric="test_" + metric, *args)
def test_cdist_extra_args_custom(self):
# Tests that args and kwargs are correctly handled
# also for custom metric
def _my_metric(x, y, arg, kwarg=1, kwarg2=2):
return arg + kwarg + kwarg2
X1 = [[1., 2., 3.], [1.2, 2.3, 3.4], [2.2, 2.3, 4.4]]
X2 = [[7., 5., 8.], [7.5, 5.8, 8.4], [5.5, 5.8, 4.4]]
kwargs = {"N0tV4l1D_p4raM": 3.14, "w": np.arange(3)}
args = [3.14] * 200
with pytest.raises(TypeError):
cdist(X1, X2, _my_metric)
with pytest.raises(TypeError):
cdist(X1, X2, _my_metric, *args)
with pytest.raises(TypeError):
cdist(X1, X2, _my_metric, **kwargs)
with pytest.raises(TypeError):
cdist(X1, X2, _my_metric, kwarg=2.2, kwarg2=3.3)
with pytest.raises(TypeError):
cdist(X1, X2, _my_metric, 1, 2, kwarg=2.2)
with pytest.raises(TypeError):
cdist(X1, X2, _my_metric, 1.1, 2.2, 3.3)
with pytest.raises(TypeError):
cdist(X1, X2, _my_metric, 1.1, 2.2)
with pytest.raises(TypeError):
cdist(X1, X2, _my_metric, 1.1)
with pytest.raises(TypeError):
cdist(X1, X2, _my_metric, 1.1, kwarg=2.2, kwarg2=3.3)
# this should work
assert_allclose(cdist(X1, X2, metric=_my_metric,
arg=1.1, kwarg2=3.3), 5.4)
def test_cdist_euclidean_random_unicode(self):
eps = 1e-15
X1 = eo['cdist-X1']
X2 = eo['cdist-X2']
Y1 = wcdist_no_const(X1, X2, 'euclidean')
Y2 = wcdist_no_const(X1, X2, 'test_euclidean')
assert_allclose(Y1, Y2, rtol=eps, verbose=verbose > 2)
@pytest.mark.parametrize("p", [0.1, 0.25, 1.0, 1.23,
2.0, 3.8, 4.6, np.inf])
def test_cdist_minkowski_random(self, p):
eps = 1e-13
X1 = eo['cdist-X1']
X2 = eo['cdist-X2']
Y1 = wcdist_no_const(X1, X2, 'minkowski', p=p)
Y2 = wcdist_no_const(X1, X2, 'test_minkowski', p=p)
assert_allclose(Y1, Y2, atol=0, rtol=eps, verbose=verbose > 2)
def test_cdist_cosine_random(self):
eps = 1e-14
X1 = eo['cdist-X1']
X2 = eo['cdist-X2']
Y1 = wcdist(X1, X2, 'cosine')
# Naive implementation
def norms(X):
return np.linalg.norm(X, axis=1).reshape(-1, 1)
Y2 = 1 - np.dot((X1 / norms(X1)), (X2 / norms(X2)).T)
assert_allclose(Y1, Y2, rtol=eps, verbose=verbose > 2)
def test_cdist_mahalanobis(self):
# 1-dimensional observations
x1 = np.array([[2], [3]])
x2 = np.array([[2], [5]])
dist = cdist(x1, x2, metric='mahalanobis')
assert_allclose(dist, [[0.0, np.sqrt(4.5)], [np.sqrt(0.5), np.sqrt(2)]])
# 2-dimensional observations
x1 = np.array([[0, 0], [-1, 0]])
x2 = np.array([[0, 2], [1, 0], [0, -2]])
dist = cdist(x1, x2, metric='mahalanobis')
rt2 = np.sqrt(2)
assert_allclose(dist, [[rt2, rt2, rt2], [2, 2 * rt2, 2]])
# Too few observations
with pytest.raises(ValueError):
cdist([[0, 1]], [[2, 3]], metric='mahalanobis')
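The expected values in the 1-D case above follow directly from the Mahalanobis definition sqrt((u - v)^T VI (u - v)), with VI estimated as the inverse sample variance of all four pooled observations (2, 3, 2, 5); a short numpy check of the d(2, 5) entry:

```python
import numpy as np

pts = np.array([2.0, 3.0, 2.0, 5.0])   # x1 and x2 pooled
VI = 1.0 / np.var(pts, ddof=1)         # scalar inverse covariance in 1-D
d = np.sqrt((2.0 - 5.0) ** 2 * VI)
```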
def test_cdist_custom_notdouble(self):
class myclass:
pass
def _my_metric(x, y):
if not isinstance(x[0], myclass) or not isinstance(y[0], myclass):
raise ValueError("Type has been changed")
return 1.123
data = np.array([[myclass()]], dtype=object)
cdist_y = cdist(data, data, metric=_my_metric)
right_y = 1.123
assert_equal(cdist_y, right_y, verbose=verbose > 2)
def _check_calling_conventions(self, X1, X2, metric, eps=1e-07, **kwargs):
# helper function for test_cdist_calling_conventions
try:
y1 = cdist(X1, X2, metric=metric, **kwargs)
y2 = cdist(X1, X2, metric=eval(metric), **kwargs)
y3 = cdist(X1, X2, metric="test_" + metric, **kwargs)
except Exception as e:
e_cls = e.__class__
if verbose > 2:
print(e_cls.__name__)
print(e)
with pytest.raises(e_cls):
cdist(X1, X2, metric=metric, **kwargs)
with pytest.raises(e_cls):
cdist(X1, X2, metric=eval(metric), **kwargs)
with pytest.raises(e_cls):
cdist(X1, X2, metric="test_" + metric, **kwargs)
else:
assert_allclose(y1, y2, rtol=eps, verbose=verbose > 2)
assert_allclose(y1, y3, rtol=eps, verbose=verbose > 2)
def test_cdist_calling_conventions(self, metric):
# Ensures that specifying the metric with a str or scipy function
# gives the same behaviour (i.e. same result or same exception).
# NOTE: The correctness should be checked within each metric tests.
for eo_name in self.rnd_eo_names:
# subsampling input data to speed-up tests
# NOTE: num samples needs to be > than dimensions for mahalanobis
X1 = eo[eo_name][::5, ::-2]
X2 = eo[eo_name][1::5, ::2]
if verbose > 2:
print("testing: ", metric, " with: ", eo_name)
if metric in {'dice', 'yule',
'rogerstanimoto',
'russellrao', 'sokalmichener',
'sokalsneath',
'kulczynski1'} and 'bool' not in eo_name:
# python version permits non-bools e.g. for fuzzy logic
continue
self._check_calling_conventions(X1, X2, metric)
# Testing built-in metrics with extra args
if metric == "seuclidean":
X12 = np.vstack([X1, X2]).astype(np.double)
V = np.var(X12, axis=0, ddof=1)
self._check_calling_conventions(X1, X2, metric, V=V)
elif metric == "mahalanobis":
X12 = np.vstack([X1, X2]).astype(np.double)
V = np.atleast_2d(np.cov(X12.T))
VI = np.array(np.linalg.inv(V).T)
self._check_calling_conventions(X1, X2, metric, VI=VI)
def test_cdist_dtype_equivalence(self, metric):
# Tests that the result is not affected by type up-casting
eps = 1e-07
tests = [(eo['random-bool-data'], self.valid_upcasts['bool']),
(eo['random-uint-data'], self.valid_upcasts['uint']),
(eo['random-int-data'], self.valid_upcasts['int']),
(eo['random-float32-data'], self.valid_upcasts['float32'])]
for test in tests:
X1 = test[0][::5, ::-2]
X2 = test[0][1::5, ::2]
try:
y1 = cdist(X1, X2, metric=metric)
except Exception as e:
e_cls = e.__class__
if verbose > 2:
print(e_cls.__name__)
print(e)
for new_type in test[1]:
X1new = new_type(X1)
X2new = new_type(X2)
with pytest.raises(e_cls):
cdist(X1new, X2new, metric=metric)
else:
for new_type in test[1]:
y2 = cdist(new_type(X1), new_type(X2), metric=metric)
assert_allclose(y1, y2, rtol=eps, verbose=verbose > 2)
def test_cdist_out(self, metric):
# Test that out parameter works properly
eps = 1e-15
X1 = eo['cdist-X1']
X2 = eo['cdist-X2']
out_r, out_c = X1.shape[0], X2.shape[0]
kwargs = dict()
if metric == 'minkowski':
kwargs['p'] = 1.23
out1 = np.empty((out_r, out_c), dtype=np.double)
Y1 = cdist(X1, X2, metric, **kwargs)
Y2 = cdist(X1, X2, metric, out=out1, **kwargs)
# test that output is numerically equivalent
assert_allclose(Y1, Y2, rtol=eps, verbose=verbose > 2)
# test that Y_test1 and out1 are the same object
assert_(Y2 is out1)
# test for incorrect shape
out2 = np.empty((out_r-1, out_c+1), dtype=np.double)
with pytest.raises(ValueError):
cdist(X1, X2, metric, out=out2, **kwargs)
# test for C-contiguous order
out3 = np.empty(
(2 * out_r, 2 * out_c), dtype=np.double)[::2, ::2]
out4 = np.empty((out_r, out_c), dtype=np.double, order='F')
with pytest.raises(ValueError):
cdist(X1, X2, metric, out=out3, **kwargs)
with pytest.raises(ValueError):
cdist(X1, X2, metric, out=out4, **kwargs)
# test for incorrect dtype
out5 = np.empty((out_r, out_c), dtype=np.int64)
with pytest.raises(ValueError):
cdist(X1, X2, metric, out=out5, **kwargs)
def test_striding(self, metric):
# test that striding is handled correct with calls to
# _copy_array_if_base_present
eps = 1e-15
X1 = eo['cdist-X1'][::2, ::2]
X2 = eo['cdist-X2'][::2, ::2]
X1_copy = X1.copy()
X2_copy = X2.copy()
# confirm equivalence
assert_equal(X1, X1_copy)
assert_equal(X2, X2_copy)
# confirm contiguity
assert_(not X1.flags.c_contiguous)
assert_(not X2.flags.c_contiguous)
assert_(X1_copy.flags.c_contiguous)
assert_(X2_copy.flags.c_contiguous)
kwargs = dict()
if metric == 'minkowski':
kwargs['p'] = 1.23
Y1 = cdist(X1, X2, metric, **kwargs)
Y2 = cdist(X1_copy, X2_copy, metric, **kwargs)
# test that output is numerically equivalent
assert_allclose(Y1, Y2, rtol=eps, verbose=verbose > 2)
def test_cdist_refcount(self, metric):
x1 = np.random.rand(10, 10)
x2 = np.random.rand(10, 10)
kwargs = dict()
if metric == 'minkowski':
kwargs['p'] = 1.23
out = cdist(x1, x2, metric=metric, **kwargs)
# Check reference counts aren't messed up. If we only hold weak
# references, the arrays should be deallocated.
weak_refs = [weakref.ref(v) for v in (x1, x2, out)]
del x1, x2, out
if IS_PYPY:
break_cycles()
assert all(weak_ref() is None for weak_ref in weak_refs)
class TestPdist:
def setup_method(self):
self.rnd_eo_names = ['random-float32-data', 'random-int-data',
'random-uint-data', 'random-double-data',
'random-bool-data']
self.valid_upcasts = {'bool': [np.uint, np.int_, np.float32, np.double],
'uint': [np.int_, np.float32, np.double],
'int': [np.float32, np.double],
'float32': [np.double]}
def test_pdist_extra_args(self, metric):
# Tests that args and kwargs are correctly handled
X1 = [[1., 2.], [1.2, 2.3], [2.2, 2.3]]
kwargs = {"N0tV4l1D_p4raM": 3.14, "w": np.arange(2)}
args = [3.14] * 200
with pytest.raises(TypeError):
pdist(X1, metric=metric, **kwargs)
with pytest.raises(TypeError):
pdist(X1, metric=eval(metric), **kwargs)
with pytest.raises(TypeError):
pdist(X1, metric="test_" + metric, **kwargs)
with pytest.raises(TypeError):
pdist(X1, metric=metric, *args)
with pytest.raises(TypeError):
pdist(X1, metric=eval(metric), *args)
with pytest.raises(TypeError):
pdist(X1, metric="test_" + metric, *args)
def test_pdist_extra_args_custom(self):
# Tests that args and kwargs are correctly handled
# also for custom metric
def _my_metric(x, y, arg, kwarg=1, kwarg2=2):
return arg + kwarg + kwarg2
X1 = [[1., 2.], [1.2, 2.3], [2.2, 2.3]]
kwargs = {"N0tV4l1D_p4raM": 3.14, "w": np.arange(2)}
args = [3.14] * 200
with pytest.raises(TypeError):
pdist(X1, _my_metric)
with pytest.raises(TypeError):
pdist(X1, _my_metric, *args)
with pytest.raises(TypeError):
pdist(X1, _my_metric, **kwargs)
with pytest.raises(TypeError):
pdist(X1, _my_metric, kwarg=2.2, kwarg2=3.3)
with pytest.raises(TypeError):
pdist(X1, _my_metric, 1, 2, kwarg=2.2)
with pytest.raises(TypeError):
pdist(X1, _my_metric, 1.1, 2.2, 3.3)
with pytest.raises(TypeError):
pdist(X1, _my_metric, 1.1, 2.2)
with pytest.raises(TypeError):
pdist(X1, _my_metric, 1.1)
with pytest.raises(TypeError):
pdist(X1, _my_metric, 1.1, kwarg=2.2, kwarg2=3.3)
        # this should work
assert_allclose(pdist(X1, metric=_my_metric,
arg=1.1, kwarg2=3.3), 5.4)
def test_pdist_euclidean_random(self):
eps = 1e-07
X = eo['pdist-double-inp']
Y_right = eo['pdist-euclidean']
Y_test1 = wpdist_no_const(X, 'euclidean')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_euclidean_random_u(self):
eps = 1e-07
X = eo['pdist-double-inp']
Y_right = eo['pdist-euclidean']
Y_test1 = wpdist_no_const(X, 'euclidean')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_euclidean_random_float32(self):
eps = 1e-07
X = np.float32(eo['pdist-double-inp'])
Y_right = eo['pdist-euclidean']
Y_test1 = wpdist_no_const(X, 'euclidean')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_euclidean_random_nonC(self):
eps = 1e-07
X = eo['pdist-double-inp']
Y_right = eo['pdist-euclidean']
Y_test2 = wpdist_no_const(X, 'test_euclidean')
assert_allclose(Y_test2, Y_right, rtol=eps)
@pytest.mark.slow
def test_pdist_euclidean_iris_double(self):
eps = 1e-7
X = eo['iris']
Y_right = eo['pdist-euclidean-iris']
Y_test1 = wpdist_no_const(X, 'euclidean')
assert_allclose(Y_test1, Y_right, rtol=eps)
@pytest.mark.slow
def test_pdist_euclidean_iris_float32(self):
eps = 1e-5
X = np.float32(eo['iris'])
Y_right = eo['pdist-euclidean-iris']
Y_test1 = wpdist_no_const(X, 'euclidean')
assert_allclose(Y_test1, Y_right, rtol=eps, verbose=verbose > 2)
@pytest.mark.slow
def test_pdist_euclidean_iris_nonC(self):
# Test pdist(X, 'test_euclidean') [the non-C implementation] on the
# Iris data set.
eps = 1e-7
X = eo['iris']
Y_right = eo['pdist-euclidean-iris']
Y_test2 = wpdist_no_const(X, 'test_euclidean')
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_seuclidean_random(self):
eps = 1e-7
X = eo['pdist-double-inp']
Y_right = eo['pdist-seuclidean']
Y_test1 = pdist(X, 'seuclidean')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_seuclidean_random_float32(self):
eps = 1e-7
X = np.float32(eo['pdist-double-inp'])
Y_right = eo['pdist-seuclidean']
Y_test1 = pdist(X, 'seuclidean')
assert_allclose(Y_test1, Y_right, rtol=eps)
# Check no error is raise when V has float32 dtype (#11171).
V = np.var(X, axis=0, ddof=1)
Y_test2 = pdist(X, 'seuclidean', V=V)
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_seuclidean_random_nonC(self):
        # Test pdist(X, 'test_seuclidean') [the non-C implementation]
eps = 1e-07
X = eo['pdist-double-inp']
Y_right = eo['pdist-seuclidean']
Y_test2 = pdist(X, 'test_seuclidean')
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_seuclidean_iris(self):
eps = 1e-7
X = eo['iris']
Y_right = eo['pdist-seuclidean-iris']
Y_test1 = pdist(X, 'seuclidean')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_seuclidean_iris_float32(self):
# Tests pdist(X, 'seuclidean') on the Iris data set (float32).
eps = 1e-5
X = np.float32(eo['iris'])
Y_right = eo['pdist-seuclidean-iris']
Y_test1 = pdist(X, 'seuclidean')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_seuclidean_iris_nonC(self):
# Test pdist(X, 'test_seuclidean') [the non-C implementation] on the
# Iris data set.
eps = 1e-7
X = eo['iris']
Y_right = eo['pdist-seuclidean-iris']
Y_test2 = pdist(X, 'test_seuclidean')
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_cosine_random(self):
eps = 1e-7
X = eo['pdist-double-inp']
Y_right = eo['pdist-cosine']
Y_test1 = wpdist(X, 'cosine')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_cosine_random_float32(self):
eps = 1e-7
X = np.float32(eo['pdist-double-inp'])
Y_right = eo['pdist-cosine']
Y_test1 = wpdist(X, 'cosine')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_cosine_random_nonC(self):
# Test pdist(X, 'test_cosine') [the non-C implementation]
eps = 1e-7
X = eo['pdist-double-inp']
Y_right = eo['pdist-cosine']
Y_test2 = wpdist(X, 'test_cosine')
assert_allclose(Y_test2, Y_right, rtol=eps)
@pytest.mark.slow
def test_pdist_cosine_iris(self):
eps = 1e-05
X = eo['iris']
Y_right = eo['pdist-cosine-iris']
Y_test1 = wpdist(X, 'cosine')
assert_allclose(Y_test1, Y_right, atol=eps)
@pytest.mark.slow
def test_pdist_cosine_iris_float32(self):
eps = 1e-05
X = np.float32(eo['iris'])
Y_right = eo['pdist-cosine-iris']
Y_test1 = wpdist(X, 'cosine')
assert_allclose(Y_test1, Y_right, atol=eps, verbose=verbose > 2)
@pytest.mark.slow
def test_pdist_cosine_iris_nonC(self):
eps = 1e-05
X = eo['iris']
Y_right = eo['pdist-cosine-iris']
Y_test2 = wpdist(X, 'test_cosine')
assert_allclose(Y_test2, Y_right, atol=eps)
def test_pdist_cosine_bounds(self):
# Test adapted from @joernhees's example at gh-5208: case where
# cosine distance used to be negative. XXX: very sensitive to the
# specific norm computation.
x = np.abs(np.random.RandomState(1337).rand(91))
X = np.vstack([x, x])
assert_(wpdist(X, 'cosine')[0] >= 0,
msg='cosine distance should be non-negative')
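Cosine distance is 1 minus the cosine of the angle between the vectors, so it should lie in [0, 2] up to floating-point error (the gh-5208 case above is exactly such an error for identical vectors); a minimal numpy sketch of the definition:

```python
import numpy as np

def cosine_dist(u, v):
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

d_orth = cosine_dist([1.0, 0.0], [0.0, 1.0])    # orthogonal vectors
d_opp = cosine_dist([1.0, 0.0], [-1.0, 0.0])    # opposite vectors
```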
def test_pdist_cityblock_random(self):
eps = 1e-7
X = eo['pdist-double-inp']
Y_right = eo['pdist-cityblock']
Y_test1 = wpdist_no_const(X, 'cityblock')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_cityblock_random_float32(self):
eps = 1e-7
X = np.float32(eo['pdist-double-inp'])
Y_right = eo['pdist-cityblock']
Y_test1 = wpdist_no_const(X, 'cityblock')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_cityblock_random_nonC(self):
eps = 1e-7
X = eo['pdist-double-inp']
Y_right = eo['pdist-cityblock']
Y_test2 = wpdist_no_const(X, 'test_cityblock')
assert_allclose(Y_test2, Y_right, rtol=eps)
@pytest.mark.slow
def test_pdist_cityblock_iris(self):
eps = 1e-14
X = eo['iris']
Y_right = eo['pdist-cityblock-iris']
Y_test1 = wpdist_no_const(X, 'cityblock')
assert_allclose(Y_test1, Y_right, rtol=eps)
@pytest.mark.slow
def test_pdist_cityblock_iris_float32(self):
eps = 1e-5
X = np.float32(eo['iris'])
Y_right = eo['pdist-cityblock-iris']
Y_test1 = wpdist_no_const(X, 'cityblock')
assert_allclose(Y_test1, Y_right, rtol=eps, verbose=verbose > 2)
@pytest.mark.slow
def test_pdist_cityblock_iris_nonC(self):
# Test pdist(X, 'test_cityblock') [the non-C implementation] on the
# Iris data set.
eps = 1e-14
X = eo['iris']
Y_right = eo['pdist-cityblock-iris']
Y_test2 = wpdist_no_const(X, 'test_cityblock')
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_correlation_random(self):
eps = 1e-7
X = eo['pdist-double-inp']
Y_right = eo['pdist-correlation']
Y_test1 = wpdist(X, 'correlation')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_correlation_random_float32(self):
eps = 1e-7
X = np.float32(eo['pdist-double-inp'])
Y_right = eo['pdist-correlation']
Y_test1 = wpdist(X, 'correlation')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_correlation_random_nonC(self):
eps = 1e-7
X = eo['pdist-double-inp']
Y_right = eo['pdist-correlation']
Y_test2 = wpdist(X, 'test_correlation')
assert_allclose(Y_test2, Y_right, rtol=eps)
@pytest.mark.slow
def test_pdist_correlation_iris(self):
eps = 1e-7
X = eo['iris']
Y_right = eo['pdist-correlation-iris']
Y_test1 = wpdist(X, 'correlation')
assert_allclose(Y_test1, Y_right, rtol=eps)
@pytest.mark.slow
def test_pdist_correlation_iris_float32(self):
eps = 1e-7
X = eo['iris']
Y_right = np.float32(eo['pdist-correlation-iris'])
Y_test1 = wpdist(X, 'correlation')
assert_allclose(Y_test1, Y_right, rtol=eps, verbose=verbose > 2)
@pytest.mark.slow
def test_pdist_correlation_iris_nonC(self):
if sys.maxsize > 2**32:
eps = 1e-7
else:
pytest.skip("see gh-16456")
X = eo['iris']
Y_right = eo['pdist-correlation-iris']
Y_test2 = wpdist(X, 'test_correlation')
assert_allclose(Y_test2, Y_right, rtol=eps)
@pytest.mark.parametrize("p", [0.1, 0.25, 1.0, 2.0, 3.2, np.inf])
def test_pdist_minkowski_random_p(self, p):
eps = 1e-13
X = eo['pdist-double-inp']
Y1 = wpdist_no_const(X, 'minkowski', p=p)
Y2 = wpdist_no_const(X, 'test_minkowski', p=p)
assert_allclose(Y1, Y2, atol=0, rtol=eps)
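As a quick sanity sketch of the metric these tests exercise: the Minkowski distance is (sum(|u_i - v_i|**p))**(1/p), so p=2 reduces to the Euclidean distance (plain NumPy/SciPy, no test data needed):

```python
import numpy as np
from scipy.spatial.distance import minkowski

# For |u - v| = [0, 1, 2]: the sum of squares is 5, so the p=2 Minkowski
# distance equals sqrt(5), the ordinary Euclidean distance.
u = np.array([1.0, 2.0, 3.0])
v = np.array([1.0, 1.0, 5.0])
d2 = minkowski(u, v, p=2)
```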
def test_pdist_minkowski_random(self):
eps = 1e-7
X = eo['pdist-double-inp']
Y_right = eo['pdist-minkowski-3.2']
Y_test1 = wpdist_no_const(X, 'minkowski', p=3.2)
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_minkowski_random_float32(self):
eps = 1e-7
X = np.float32(eo['pdist-double-inp'])
Y_right = eo['pdist-minkowski-3.2']
Y_test1 = wpdist_no_const(X, 'minkowski', p=3.2)
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_minkowski_random_nonC(self):
eps = 1e-7
X = eo['pdist-double-inp']
Y_right = eo['pdist-minkowski-3.2']
Y_test2 = wpdist_no_const(X, 'test_minkowski', p=3.2)
assert_allclose(Y_test2, Y_right, rtol=eps)
@pytest.mark.slow
def test_pdist_minkowski_3_2_iris(self):
eps = 1e-7
X = eo['iris']
Y_right = eo['pdist-minkowski-3.2-iris']
Y_test1 = wpdist_no_const(X, 'minkowski', p=3.2)
assert_allclose(Y_test1, Y_right, rtol=eps)
@pytest.mark.slow
def test_pdist_minkowski_3_2_iris_float32(self):
eps = 1e-5
X = np.float32(eo['iris'])
Y_right = eo['pdist-minkowski-3.2-iris']
Y_test1 = wpdist_no_const(X, 'minkowski', p=3.2)
assert_allclose(Y_test1, Y_right, rtol=eps)
@pytest.mark.slow
def test_pdist_minkowski_3_2_iris_nonC(self):
eps = 1e-7
X = eo['iris']
Y_right = eo['pdist-minkowski-3.2-iris']
Y_test2 = wpdist_no_const(X, 'test_minkowski', p=3.2)
assert_allclose(Y_test2, Y_right, rtol=eps)
@pytest.mark.slow
def test_pdist_minkowski_5_8_iris(self):
eps = 1e-7
X = eo['iris']
Y_right = eo['pdist-minkowski-5.8-iris']
Y_test1 = wpdist_no_const(X, 'minkowski', p=5.8)
assert_allclose(Y_test1, Y_right, rtol=eps)
@pytest.mark.slow
def test_pdist_minkowski_5_8_iris_float32(self):
eps = 1e-5
X = np.float32(eo['iris'])
Y_right = eo['pdist-minkowski-5.8-iris']
Y_test1 = wpdist_no_const(X, 'minkowski', p=5.8)
assert_allclose(Y_test1, Y_right, rtol=eps, verbose=verbose > 2)
@pytest.mark.slow
def test_pdist_minkowski_5_8_iris_nonC(self):
eps = 1e-7
X = eo['iris']
Y_right = eo['pdist-minkowski-5.8-iris']
Y_test2 = wpdist_no_const(X, 'test_minkowski', p=5.8)
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_mahalanobis(self):
# 1-dimensional observations
x = np.array([2.0, 2.0, 3.0, 5.0]).reshape(-1, 1)
dist = pdist(x, metric='mahalanobis')
assert_allclose(dist, [0.0, np.sqrt(0.5), np.sqrt(4.5),
np.sqrt(0.5), np.sqrt(4.5), np.sqrt(2.0)])
# 2-dimensional observations
x = np.array([[0, 0], [-1, 0], [0, 2], [1, 0], [0, -2]])
dist = pdist(x, metric='mahalanobis')
rt2 = np.sqrt(2)
assert_allclose(dist, [rt2, rt2, rt2, rt2, 2, 2 * rt2, 2, 2, 2 * rt2, 2])
# Too few observations
with pytest.raises(ValueError):
wpdist([[0, 1], [2, 3]], metric='mahalanobis')
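The expected values for the 1-D case above can be reproduced by hand; this sketch assumes, as `pdist` does by default, that `VI` is the inverse of the sample covariance of the observations:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Sample variance of [2, 2, 3, 5] (ddof=1) is 2, so VI = [[0.5]] and the
# distance between observations i and j is sqrt(0.5 * (x_i - x_j)**2).
x = np.array([2.0, 2.0, 3.0, 5.0]).reshape(-1, 1)
VI = np.linalg.inv(np.atleast_2d(np.cov(x.T)))
d01 = mahalanobis(x[0], x[1], VI)  # identical observations
d02 = mahalanobis(x[0], x[2], VI)
```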
def test_pdist_hamming_random(self):
eps = 1e-15
X = eo['pdist-boolean-inp']
Y_right = eo['pdist-hamming']
Y_test1 = wpdist(X, 'hamming')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_hamming_random_float32(self):
eps = 1e-15
X = np.float32(eo['pdist-boolean-inp'])
Y_right = eo['pdist-hamming']
Y_test1 = wpdist(X, 'hamming')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_hamming_random_nonC(self):
eps = 1e-15
X = eo['pdist-boolean-inp']
Y_right = eo['pdist-hamming']
Y_test2 = wpdist(X, 'test_hamming')
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_dhamming_random(self):
eps = 1e-15
X = np.float64(eo['pdist-boolean-inp'])
Y_right = eo['pdist-hamming']
Y_test1 = wpdist(X, 'hamming')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_dhamming_random_float32(self):
eps = 1e-15
X = np.float32(eo['pdist-boolean-inp'])
Y_right = eo['pdist-hamming']
Y_test1 = wpdist(X, 'hamming')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_dhamming_random_nonC(self):
eps = 1e-15
X = np.float64(eo['pdist-boolean-inp'])
Y_right = eo['pdist-hamming']
Y_test2 = wpdist(X, 'test_hamming')
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_jaccard_random(self):
eps = 1e-8
X = eo['pdist-boolean-inp']
Y_right = eo['pdist-jaccard']
Y_test1 = wpdist(X, 'jaccard')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_jaccard_random_float32(self):
eps = 1e-8
X = np.float32(eo['pdist-boolean-inp'])
Y_right = eo['pdist-jaccard']
Y_test1 = wpdist(X, 'jaccard')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_jaccard_random_nonC(self):
eps = 1e-8
X = eo['pdist-boolean-inp']
Y_right = eo['pdist-jaccard']
Y_test2 = wpdist(X, 'test_jaccard')
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_djaccard_random(self):
eps = 1e-8
X = np.float64(eo['pdist-boolean-inp'])
Y_right = eo['pdist-jaccard']
Y_test1 = wpdist(X, 'jaccard')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_djaccard_random_float32(self):
eps = 1e-8
X = np.float32(eo['pdist-boolean-inp'])
Y_right = eo['pdist-jaccard']
Y_test1 = wpdist(X, 'jaccard')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_djaccard_allzeros(self):
eps = 1e-15
Y = pdist(np.zeros((5, 3)), 'jaccard')
assert_allclose(np.zeros(10), Y, rtol=eps)
def test_pdist_djaccard_random_nonC(self):
eps = 1e-8
X = np.float64(eo['pdist-boolean-inp'])
Y_right = eo['pdist-jaccard']
Y_test2 = wpdist(X, 'test_jaccard')
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_jensenshannon_random(self):
eps = 1e-11
X = eo['pdist-double-inp']
Y_right = eo['pdist-jensenshannon']
Y_test1 = pdist(X, 'jensenshannon')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_jensenshannon_random_float32(self):
eps = 1e-8
X = np.float32(eo['pdist-double-inp'])
Y_right = eo['pdist-jensenshannon']
Y_test1 = pdist(X, 'jensenshannon')
assert_allclose(Y_test1, Y_right, rtol=eps, verbose=verbose > 2)
def test_pdist_jensenshannon_random_nonC(self):
eps = 1e-11
X = eo['pdist-double-inp']
Y_right = eo['pdist-jensenshannon']
Y_test2 = pdist(X, 'test_jensenshannon')
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_jensenshannon_iris(self):
if _is_32bit():
# Test failing on 32-bit Linux on Azure otherwise, see gh-12810
eps = 2.5e-10
else:
eps = 1e-12
X = eo['iris']
Y_right = eo['pdist-jensenshannon-iris']
Y_test1 = pdist(X, 'jensenshannon')
assert_allclose(Y_test1, Y_right, atol=eps)
def test_pdist_jensenshannon_iris_float32(self):
eps = 1e-06
X = np.float32(eo['iris'])
Y_right = eo['pdist-jensenshannon-iris']
Y_test1 = pdist(X, 'jensenshannon')
assert_allclose(Y_test1, Y_right, atol=eps, verbose=verbose > 2)
def test_pdist_jensenshannon_iris_nonC(self):
eps = 5e-5
X = eo['iris']
Y_right = eo['pdist-jensenshannon-iris']
Y_test2 = pdist(X, 'test_jensenshannon')
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_djaccard_allzeros_nonC(self):
eps = 1e-15
Y = pdist(np.zeros((5, 3)), 'test_jaccard')
assert_allclose(np.zeros(10), Y, rtol=eps)
def test_pdist_chebyshev_random(self):
eps = 1e-8
X = eo['pdist-double-inp']
Y_right = eo['pdist-chebyshev']
Y_test1 = pdist(X, 'chebyshev')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_chebyshev_random_float32(self):
eps = 1e-7
X = np.float32(eo['pdist-double-inp'])
Y_right = eo['pdist-chebyshev']
Y_test1 = pdist(X, 'chebyshev')
assert_allclose(Y_test1, Y_right, rtol=eps, verbose=verbose > 2)
def test_pdist_chebyshev_random_nonC(self):
eps = 1e-8
X = eo['pdist-double-inp']
Y_right = eo['pdist-chebyshev']
Y_test2 = pdist(X, 'test_chebyshev')
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_chebyshev_iris(self):
eps = 1e-14
X = eo['iris']
Y_right = eo['pdist-chebyshev-iris']
Y_test1 = pdist(X, 'chebyshev')
assert_allclose(Y_test1, Y_right, rtol=eps)
def test_pdist_chebyshev_iris_float32(self):
eps = 1e-5
X = np.float32(eo['iris'])
Y_right = eo['pdist-chebyshev-iris']
Y_test1 = pdist(X, 'chebyshev')
assert_allclose(Y_test1, Y_right, rtol=eps, verbose=verbose > 2)
def test_pdist_chebyshev_iris_nonC(self):
eps = 1e-14
X = eo['iris']
Y_right = eo['pdist-chebyshev-iris']
Y_test2 = pdist(X, 'test_chebyshev')
assert_allclose(Y_test2, Y_right, rtol=eps)
def test_pdist_matching_mtica1(self):
# Test matching(*,*) with mtica example #1 (nums).
m = wmatching(np.array([1, 0, 1, 1, 0]),
np.array([1, 1, 0, 1, 1]))
m2 = wmatching(np.array([1, 0, 1, 1, 0], dtype=bool),
np.array([1, 1, 0, 1, 1], dtype=bool))
assert_allclose(m, 0.6, rtol=0, atol=1e-10)
assert_allclose(m2, 0.6, rtol=0, atol=1e-10)
def test_pdist_matching_mtica2(self):
# Test matching(*,*) with mtica example #2.
m = wmatching(np.array([1, 0, 1]),
np.array([1, 1, 0]))
m2 = wmatching(np.array([1, 0, 1], dtype=bool),
np.array([1, 1, 0], dtype=bool))
assert_allclose(m, 2 / 3, rtol=0, atol=1e-10)
assert_allclose(m2, 2 / 3, rtol=0, atol=1e-10)
def test_pdist_jaccard_mtica1(self):
m = wjaccard(np.array([1, 0, 1, 1, 0]),
np.array([1, 1, 0, 1, 1]))
m2 = wjaccard(np.array([1, 0, 1, 1, 0], dtype=bool),
np.array([1, 1, 0, 1, 1], dtype=bool))
assert_allclose(m, 0.6, rtol=0, atol=1e-10)
assert_allclose(m2, 0.6, rtol=0, atol=1e-10)
def test_pdist_jaccard_mtica2(self):
m = wjaccard(np.array([1, 0, 1]),
np.array([1, 1, 0]))
m2 = wjaccard(np.array([1, 0, 1], dtype=bool),
np.array([1, 1, 0], dtype=bool))
assert_allclose(m, 2 / 3, rtol=0, atol=1e-10)
assert_allclose(m2, 2 / 3, rtol=0, atol=1e-10)
def test_pdist_yule_mtica1(self):
m = wyule(np.array([1, 0, 1, 1, 0]),
np.array([1, 1, 0, 1, 1]))
m2 = wyule(np.array([1, 0, 1, 1, 0], dtype=bool),
np.array([1, 1, 0, 1, 1], dtype=bool))
if verbose > 2:
print(m)
assert_allclose(m, 2, rtol=0, atol=1e-10)
assert_allclose(m2, 2, rtol=0, atol=1e-10)
def test_pdist_yule_mtica2(self):
m = wyule(np.array([1, 0, 1]),
np.array([1, 1, 0]))
m2 = wyule(np.array([1, 0, 1], dtype=bool),
np.array([1, 1, 0], dtype=bool))
if verbose > 2:
print(m)
assert_allclose(m, 2, rtol=0, atol=1e-10)
assert_allclose(m2, 2, rtol=0, atol=1e-10)
def test_pdist_dice_mtica1(self):
m = wdice(np.array([1, 0, 1, 1, 0]),
np.array([1, 1, 0, 1, 1]))
m2 = wdice(np.array([1, 0, 1, 1, 0], dtype=bool),
np.array([1, 1, 0, 1, 1], dtype=bool))
if verbose > 2:
print(m)
assert_allclose(m, 3 / 7, rtol=0, atol=1e-10)
assert_allclose(m2, 3 / 7, rtol=0, atol=1e-10)
def test_pdist_dice_mtica2(self):
m = wdice(np.array([1, 0, 1]),
np.array([1, 1, 0]))
m2 = wdice(np.array([1, 0, 1], dtype=bool),
np.array([1, 1, 0], dtype=bool))
if verbose > 2:
print(m)
assert_allclose(m, 0.5, rtol=0, atol=1e-10)
assert_allclose(m2, 0.5, rtol=0, atol=1e-10)
def test_pdist_sokalsneath_mtica1(self):
m = sokalsneath(np.array([1, 0, 1, 1, 0]),
np.array([1, 1, 0, 1, 1]))
m2 = sokalsneath(np.array([1, 0, 1, 1, 0], dtype=bool),
np.array([1, 1, 0, 1, 1], dtype=bool))
if verbose > 2:
print(m)
assert_allclose(m, 3 / 4, rtol=0, atol=1e-10)
assert_allclose(m2, 3 / 4, rtol=0, atol=1e-10)
def test_pdist_sokalsneath_mtica2(self):
m = wsokalsneath(np.array([1, 0, 1]),
np.array([1, 1, 0]))
m2 = wsokalsneath(np.array([1, 0, 1], dtype=bool),
np.array([1, 1, 0], dtype=bool))
if verbose > 2:
print(m)
assert_allclose(m, 4 / 5, rtol=0, atol=1e-10)
assert_allclose(m2, 4 / 5, rtol=0, atol=1e-10)
def test_pdist_rogerstanimoto_mtica1(self):
m = wrogerstanimoto(np.array([1, 0, 1, 1, 0]),
np.array([1, 1, 0, 1, 1]))
m2 = wrogerstanimoto(np.array([1, 0, 1, 1, 0], dtype=bool),
np.array([1, 1, 0, 1, 1], dtype=bool))
if verbose > 2:
print(m)
assert_allclose(m, 3 / 4, rtol=0, atol=1e-10)
assert_allclose(m2, 3 / 4, rtol=0, atol=1e-10)
def test_pdist_rogerstanimoto_mtica2(self):
m = wrogerstanimoto(np.array([1, 0, 1]),
np.array([1, 1, 0]))
m2 = wrogerstanimoto(np.array([1, 0, 1], dtype=bool),
np.array([1, 1, 0], dtype=bool))
if verbose > 2:
print(m)
assert_allclose(m, 4 / 5, rtol=0, atol=1e-10)
assert_allclose(m2, 4 / 5, rtol=0, atol=1e-10)
def test_pdist_russellrao_mtica1(self):
m = wrussellrao(np.array([1, 0, 1, 1, 0]),
np.array([1, 1, 0, 1, 1]))
m2 = wrussellrao(np.array([1, 0, 1, 1, 0], dtype=bool),
np.array([1, 1, 0, 1, 1], dtype=bool))
if verbose > 2:
print(m)
assert_allclose(m, 3 / 5, rtol=0, atol=1e-10)
assert_allclose(m2, 3 / 5, rtol=0, atol=1e-10)
def test_pdist_russellrao_mtica2(self):
m = wrussellrao(np.array([1, 0, 1]),
np.array([1, 1, 0]))
m2 = wrussellrao(np.array([1, 0, 1], dtype=bool),
np.array([1, 1, 0], dtype=bool))
if verbose > 2:
print(m)
assert_allclose(m, 2 / 3, rtol=0, atol=1e-10)
assert_allclose(m2, 2 / 3, rtol=0, atol=1e-10)
@pytest.mark.slow
def test_pdist_canberra_match(self):
D = eo['iris']
if verbose > 2:
print(D.shape, D.dtype)
eps = 1e-15
y1 = wpdist_no_const(D, "canberra")
y2 = wpdist_no_const(D, "test_canberra")
assert_allclose(y1, y2, rtol=eps, verbose=verbose > 2)
def test_pdist_canberra_ticket_711(self):
        # Test pdist(X, 'canberra') to check that Canberra gives the right
        # result for the example reported in gh-1238.
eps = 1e-8
pdist_y = wpdist_no_const(([3.3], [3.4]), "canberra")
right_y = 0.01492537
assert_allclose(pdist_y, right_y, atol=eps, verbose=verbose > 2)
def test_pdist_custom_notdouble(self):
        # Tests that the data type is not altered when using a custom metric.
class myclass:
pass
def _my_metric(x, y):
if not isinstance(x[0], myclass) or not isinstance(y[0], myclass):
raise ValueError("Type has been changed")
return 1.123
data = np.array([[myclass()], [myclass()]], dtype=object)
pdist_y = pdist(data, metric=_my_metric)
right_y = 1.123
assert_equal(pdist_y, right_y, verbose=verbose > 2)
def _check_calling_conventions(self, X, metric, eps=1e-07, **kwargs):
# helper function for test_pdist_calling_conventions
try:
y1 = pdist(X, metric=metric, **kwargs)
y2 = pdist(X, metric=eval(metric), **kwargs)
y3 = pdist(X, metric="test_" + metric, **kwargs)
except Exception as e:
e_cls = e.__class__
if verbose > 2:
print(e_cls.__name__)
print(e)
with pytest.raises(e_cls):
pdist(X, metric=metric, **kwargs)
with pytest.raises(e_cls):
pdist(X, metric=eval(metric), **kwargs)
with pytest.raises(e_cls):
pdist(X, metric="test_" + metric, **kwargs)
else:
assert_allclose(y1, y2, rtol=eps, verbose=verbose > 2)
assert_allclose(y1, y3, rtol=eps, verbose=verbose > 2)
def test_pdist_calling_conventions(self, metric):
# Ensures that specifying the metric with a str or scipy function
# gives the same behaviour (i.e. same result or same exception).
# NOTE: The correctness should be checked within each metric tests.
# NOTE: Extra args should be checked with a dedicated test
for eo_name in self.rnd_eo_names:
# subsampling input data to speed-up tests
# NOTE: num samples needs to be > than dimensions for mahalanobis
X = eo[eo_name][::5, ::2]
if verbose > 2:
print("testing: ", metric, " with: ", eo_name)
if metric in {'dice', 'yule', 'matching',
'rogerstanimoto', 'russellrao', 'sokalmichener',
'sokalsneath',
'kulczynski1'} and 'bool' not in eo_name:
# python version permits non-bools e.g. for fuzzy logic
continue
self._check_calling_conventions(X, metric)
# Testing built-in metrics with extra args
if metric == "seuclidean":
V = np.var(X.astype(np.double), axis=0, ddof=1)
self._check_calling_conventions(X, metric, V=V)
elif metric == "mahalanobis":
V = np.atleast_2d(np.cov(X.astype(np.double).T))
VI = np.array(np.linalg.inv(V).T)
self._check_calling_conventions(X, metric, VI=VI)
def test_pdist_dtype_equivalence(self, metric):
# Tests that the result is not affected by type up-casting
eps = 1e-07
tests = [(eo['random-bool-data'], self.valid_upcasts['bool']),
(eo['random-uint-data'], self.valid_upcasts['uint']),
(eo['random-int-data'], self.valid_upcasts['int']),
(eo['random-float32-data'], self.valid_upcasts['float32'])]
for test in tests:
X1 = test[0][::5, ::2]
try:
y1 = pdist(X1, metric=metric)
except Exception as e:
e_cls = e.__class__
if verbose > 2:
print(e_cls.__name__)
print(e)
for new_type in test[1]:
X2 = new_type(X1)
with pytest.raises(e_cls):
pdist(X2, metric=metric)
else:
for new_type in test[1]:
y2 = pdist(new_type(X1), metric=metric)
assert_allclose(y1, y2, rtol=eps, verbose=verbose > 2)
def test_pdist_out(self, metric):
# Test that out parameter works properly
eps = 1e-15
X = eo['random-float32-data'][::5, ::2]
out_size = int((X.shape[0] * (X.shape[0] - 1)) / 2)
kwargs = dict()
if metric == 'minkowski':
kwargs['p'] = 1.23
out1 = np.empty(out_size, dtype=np.double)
Y_right = pdist(X, metric, **kwargs)
Y_test1 = pdist(X, metric, out=out1, **kwargs)
# test that output is numerically equivalent
assert_allclose(Y_test1, Y_right, rtol=eps)
# test that Y_test1 and out1 are the same object
assert_(Y_test1 is out1)
# test for incorrect shape
out2 = np.empty(out_size + 3, dtype=np.double)
with pytest.raises(ValueError):
pdist(X, metric, out=out2, **kwargs)
# test for (C-)contiguous output
out3 = np.empty(2 * out_size, dtype=np.double)[::2]
with pytest.raises(ValueError):
pdist(X, metric, out=out3, **kwargs)
# test for incorrect dtype
out5 = np.empty(out_size, dtype=np.int64)
with pytest.raises(ValueError):
pdist(X, metric, out=out5, **kwargs)
def test_striding(self, metric):
        # test that striding is handled correctly with calls to
# _copy_array_if_base_present
eps = 1e-15
X = eo['random-float32-data'][::5, ::2]
X_copy = X.copy()
# confirm contiguity
assert_(not X.flags.c_contiguous)
assert_(X_copy.flags.c_contiguous)
kwargs = dict()
if metric == 'minkowski':
kwargs['p'] = 1.23
Y1 = pdist(X, metric, **kwargs)
Y2 = pdist(X_copy, metric, **kwargs)
# test that output is numerically equivalent
assert_allclose(Y1, Y2, rtol=eps, verbose=verbose > 2)
class TestSomeDistanceFunctions:
def setup_method(self):
# 1D arrays
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 1.0, 5.0])
self.cases = [(x, y)]
def test_minkowski(self):
for x, y in self.cases:
dist1 = minkowski(x, y, p=1)
assert_almost_equal(dist1, 3.0)
dist1p5 = minkowski(x, y, p=1.5)
assert_almost_equal(dist1p5, (1.0 + 2.0**1.5)**(2. / 3))
dist2 = minkowski(x, y, p=2)
assert_almost_equal(dist2, 5.0 ** 0.5)
dist0p25 = minkowski(x, y, p=0.25)
assert_almost_equal(dist0p25, (1.0 + 2.0 ** 0.25) ** 4)
# Check that casting input to minimum scalar type doesn't affect result
# (issue #10262). This could be extended to more test inputs with
# np.min_scalar_type(np.max(input_matrix)).
a = np.array([352, 916])
b = np.array([350, 660])
assert_equal(minkowski(a, b),
minkowski(a.astype('uint16'), b.astype('uint16')))
def test_euclidean(self):
for x, y in self.cases:
dist = weuclidean(x, y)
assert_almost_equal(dist, np.sqrt(5))
def test_sqeuclidean(self):
for x, y in self.cases:
dist = wsqeuclidean(x, y)
assert_almost_equal(dist, 5.0)
def test_cosine(self):
for x, y in self.cases:
dist = wcosine(x, y)
assert_almost_equal(dist, 1.0 - 18.0 / (np.sqrt(14) * np.sqrt(27)))
def test_correlation(self):
xm = np.array([-1.0, 0, 1.0])
ym = np.array([-4.0 / 3, -4.0 / 3, 5.0 - 7.0 / 3])
for x, y in self.cases:
dist = wcorrelation(x, y)
assert_almost_equal(dist, 1.0 - np.dot(xm, ym) / (norm(xm) * norm(ym)))
def test_correlation_positive(self):
        # Regression test for gh-12320 (negative return value due to rounding).
x = np.array([0., 0., 0., 0., 0., 0., -2., 0., 0., 0., -2., -2., -2.,
0., -2., 0., -2., 0., 0., -1., -2., 0., 1., 0., 0., -2.,
0., 0., -2., 0., -2., -2., -2., -2., -2., -2., 0.])
y = np.array([1., 1., 1., 1., 1., 1., -1., 1., 1., 1., -1., -1., -1.,
1., -1., 1., -1., 1., 1., 0., -1., 1., 2., 1., 1., -1.,
1., 1., -1., 1., -1., -1., -1., -1., -1., -1., 1.])
dist = correlation(x, y)
assert 0 <= dist <= 10 * np.finfo(np.float64).eps
def test_mahalanobis(self):
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 1.0, 5.0])
vi = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]])
for x, y in self.cases:
dist = mahalanobis(x, y, vi)
assert_almost_equal(dist, np.sqrt(6.0))
class TestSquareForm:
checked_dtypes = [np.float64, np.float32, np.int32, np.int8, bool]
def test_squareform_matrix(self):
for dtype in self.checked_dtypes:
self.check_squareform_matrix(dtype)
def test_squareform_vector(self):
for dtype in self.checked_dtypes:
self.check_squareform_vector(dtype)
def check_squareform_matrix(self, dtype):
A = np.zeros((0, 0), dtype=dtype)
rA = squareform(A)
assert_equal(rA.shape, (0,))
assert_equal(rA.dtype, dtype)
A = np.zeros((1, 1), dtype=dtype)
rA = squareform(A)
assert_equal(rA.shape, (0,))
assert_equal(rA.dtype, dtype)
A = np.array([[0, 4.2], [4.2, 0]], dtype=dtype)
rA = squareform(A)
assert_equal(rA.shape, (1,))
assert_equal(rA.dtype, dtype)
assert_array_equal(rA, np.array([4.2], dtype=dtype))
def check_squareform_vector(self, dtype):
v = np.zeros((0,), dtype=dtype)
rv = squareform(v)
assert_equal(rv.shape, (1, 1))
assert_equal(rv.dtype, dtype)
assert_array_equal(rv, [[0]])
v = np.array([8.3], dtype=dtype)
rv = squareform(v)
assert_equal(rv.shape, (2, 2))
assert_equal(rv.dtype, dtype)
assert_array_equal(rv, np.array([[0, 8.3], [8.3, 0]], dtype=dtype))
def test_squareform_multi_matrix(self):
for n in range(2, 5):
self.check_squareform_multi_matrix(n)
def check_squareform_multi_matrix(self, n):
X = np.random.rand(n, 4)
Y = wpdist_no_const(X)
assert_equal(len(Y.shape), 1)
A = squareform(Y)
Yr = squareform(A)
s = A.shape
k = 0
if verbose >= 3:
print(A.shape, Y.shape, Yr.shape)
assert_equal(len(s), 2)
assert_equal(len(Yr.shape), 1)
assert_equal(s[0], s[1])
for i in range(0, s[0]):
for j in range(i + 1, s[1]):
if i != j:
assert_equal(A[i, j], Y[k])
k += 1
else:
assert_equal(A[i, j], 0)
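The round-trip property checked above can be summarised in a few lines: a condensed vector of length n*(n-1)/2 maps to an n-by-n symmetric matrix with a zero diagonal, and back (plain SciPy, no test data):

```python
import numpy as np
from scipy.spatial.distance import squareform

# n = 3 observations: the condensed form holds the 3 upper-triangle entries.
v = np.array([1.0, 2.0, 3.0])
A = squareform(v)      # [[0, 1, 2], [1, 0, 3], [2, 3, 0]]
back = squareform(A)   # recovers v
```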
class TestNumObsY:
def test_num_obs_y_multi_matrix(self):
for n in range(2, 10):
X = np.random.rand(n, 4)
Y = wpdist_no_const(X)
assert_equal(num_obs_y(Y), n)
def test_num_obs_y_1(self):
        # Tests num_obs_y(y) on a condensed distance matrix over 1
        # observation. Expecting exception.
with pytest.raises(ValueError):
self.check_y(1)
def test_num_obs_y_2(self):
# Tests num_obs_y(y) on a condensed distance matrix over 2
# observations.
assert_(self.check_y(2))
def test_num_obs_y_3(self):
assert_(self.check_y(3))
def test_num_obs_y_4(self):
assert_(self.check_y(4))
def test_num_obs_y_5_10(self):
for i in range(5, 16):
self.minit(i)
def test_num_obs_y_2_100(self):
# Tests num_obs_y(y) on 100 improper condensed distance matrices.
# Expecting exception.
a = set()
for n in range(2, 16):
a.add(n * (n - 1) / 2)
for i in range(5, 105):
if i not in a:
with pytest.raises(ValueError):
self.bad_y(i)
def minit(self, n):
assert_(self.check_y(n))
def bad_y(self, n):
y = np.random.rand(n)
return num_obs_y(y)
def check_y(self, n):
return num_obs_y(self.make_y(n)) == n
def make_y(self, n):
return np.random.rand((n * (n - 1)) // 2)
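`num_obs_y` inverts m = n*(n-1)/2, i.e. n = (1 + sqrt(1 + 8*m)) / 2, and rejects lengths that are not of that form; a minimal sketch:

```python
import numpy as np
from scipy.spatial.distance import num_obs_y

y = np.zeros(10)   # 10 == 5 * 4 / 2, so 5 observations
n = num_obs_y(y)
```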
class TestNumObsDM:
def test_num_obs_dm_multi_matrix(self):
for n in range(1, 10):
X = np.random.rand(n, 4)
Y = wpdist_no_const(X)
A = squareform(Y)
if verbose >= 3:
print(A.shape, Y.shape)
assert_equal(num_obs_dm(A), n)
def test_num_obs_dm_0(self):
        # Tests num_obs_dm(D) on a 0x0 distance matrix.
assert_(self.check_D(0))
def test_num_obs_dm_1(self):
# Tests num_obs_dm(D) on a 1x1 distance matrix.
assert_(self.check_D(1))
def test_num_obs_dm_2(self):
assert_(self.check_D(2))
def test_num_obs_dm_3(self):
        assert_(self.check_D(3))
def test_num_obs_dm_4(self):
assert_(self.check_D(4))
def check_D(self, n):
return num_obs_dm(self.make_D(n)) == n
def make_D(self, n):
return np.random.rand(n, n)
def is_valid_dm_throw(D):
return is_valid_dm(D, throw=True)
class TestIsValidDM:
def test_is_valid_dm_improper_shape_1D_E(self):
D = np.zeros((5,), dtype=np.double)
with pytest.raises(ValueError):
is_valid_dm_throw((D))
def test_is_valid_dm_improper_shape_1D_F(self):
D = np.zeros((5,), dtype=np.double)
assert_equal(is_valid_dm(D), False)
def test_is_valid_dm_improper_shape_3D_E(self):
D = np.zeros((3, 3, 3), dtype=np.double)
with pytest.raises(ValueError):
is_valid_dm_throw((D))
def test_is_valid_dm_improper_shape_3D_F(self):
D = np.zeros((3, 3, 3), dtype=np.double)
assert_equal(is_valid_dm(D), False)
def test_is_valid_dm_nonzero_diagonal_E(self):
y = np.random.rand(10)
D = squareform(y)
for i in range(0, 5):
D[i, i] = 2.0
with pytest.raises(ValueError):
is_valid_dm_throw((D))
def test_is_valid_dm_nonzero_diagonal_F(self):
y = np.random.rand(10)
D = squareform(y)
for i in range(0, 5):
D[i, i] = 2.0
assert_equal(is_valid_dm(D), False)
def test_is_valid_dm_asymmetric_E(self):
y = np.random.rand(10)
D = squareform(y)
D[1, 3] = D[3, 1] + 1
with pytest.raises(ValueError):
is_valid_dm_throw((D))
def test_is_valid_dm_asymmetric_F(self):
y = np.random.rand(10)
D = squareform(y)
D[1, 3] = D[3, 1] + 1
assert_equal(is_valid_dm(D), False)
def test_is_valid_dm_correct_1_by_1(self):
D = np.zeros((1, 1), dtype=np.double)
assert_equal(is_valid_dm(D), True)
def test_is_valid_dm_correct_2_by_2(self):
y = np.random.rand(1)
D = squareform(y)
assert_equal(is_valid_dm(D), True)
def test_is_valid_dm_correct_3_by_3(self):
y = np.random.rand(3)
D = squareform(y)
assert_equal(is_valid_dm(D), True)
def test_is_valid_dm_correct_4_by_4(self):
y = np.random.rand(6)
D = squareform(y)
assert_equal(is_valid_dm(D), True)
def test_is_valid_dm_correct_5_by_5(self):
y = np.random.rand(10)
D = squareform(y)
assert_equal(is_valid_dm(D), True)
def is_valid_y_throw(y):
return is_valid_y(y, throw=True)
class TestIsValidY:
# If test case name ends on "_E" then an exception is expected for the
# given input, if it ends in "_F" then False is expected for the is_valid_y
# check. Otherwise the input is expected to be valid.
def test_is_valid_y_improper_shape_2D_E(self):
y = np.zeros((3, 3,), dtype=np.double)
with pytest.raises(ValueError):
is_valid_y_throw((y))
def test_is_valid_y_improper_shape_2D_F(self):
y = np.zeros((3, 3,), dtype=np.double)
assert_equal(is_valid_y(y), False)
def test_is_valid_y_improper_shape_3D_E(self):
y = np.zeros((3, 3, 3), dtype=np.double)
with pytest.raises(ValueError):
is_valid_y_throw((y))
def test_is_valid_y_improper_shape_3D_F(self):
y = np.zeros((3, 3, 3), dtype=np.double)
assert_equal(is_valid_y(y), False)
def test_is_valid_y_correct_2_by_2(self):
y = self.correct_n_by_n(2)
assert_equal(is_valid_y(y), True)
def test_is_valid_y_correct_3_by_3(self):
y = self.correct_n_by_n(3)
assert_equal(is_valid_y(y), True)
def test_is_valid_y_correct_4_by_4(self):
y = self.correct_n_by_n(4)
assert_equal(is_valid_y(y), True)
def test_is_valid_y_correct_5_by_5(self):
y = self.correct_n_by_n(5)
assert_equal(is_valid_y(y), True)
def test_is_valid_y_2_100(self):
a = set()
for n in range(2, 16):
a.add(n * (n - 1) / 2)
for i in range(5, 105):
if i not in a:
with pytest.raises(ValueError):
self.bad_y(i)
def bad_y(self, n):
y = np.random.rand(n)
return is_valid_y(y, throw=True)
def correct_n_by_n(self, n):
y = np.random.rand((n * (n - 1)) // 2)
return y
@pytest.mark.parametrize("p", [-10.0, -0.5, 0.0])
def test_bad_p(p):
    # Raise ValueError if p <= 0.
with pytest.raises(ValueError):
minkowski([1, 2], [3, 4], p)
with pytest.raises(ValueError):
minkowski([1, 2], [3, 4], p, [1, 1])
def test_sokalsneath_all_false():
# Regression test for ticket #876
with pytest.raises(ValueError):
sokalsneath([False, False, False], [False, False, False])
def test_canberra():
# Regression test for ticket #1430.
assert_equal(wcanberra([1, 2, 3], [2, 4, 6]), 1)
assert_equal(wcanberra([1, 1, 0, 0], [1, 0, 1, 0]), 2)
def test_braycurtis():
# Regression test for ticket #1430.
assert_almost_equal(wbraycurtis([1, 2, 3], [2, 4, 6]), 1. / 3, decimal=15)
assert_almost_equal(wbraycurtis([1, 1, 0, 0], [1, 0, 1, 0]), 0.5, decimal=15)
def test_euclideans():
# Regression test for ticket #1328.
x1 = np.array([1, 1, 1])
x2 = np.array([0, 0, 0])
# Basic test of the calculation.
assert_almost_equal(wsqeuclidean(x1, x2), 3.0, decimal=14)
assert_almost_equal(weuclidean(x1, x2), np.sqrt(3), decimal=14)
# Check flattening for (1, N) or (N, 1) inputs
with pytest.raises(ValueError, match="Input vector should be 1-D"):
        weuclidean(x1[np.newaxis, :], x2[np.newaxis, :])
with pytest.raises(ValueError, match="Input vector should be 1-D"):
wsqeuclidean(x1[np.newaxis, :], x2[np.newaxis, :])
with pytest.raises(ValueError, match="Input vector should be 1-D"):
wsqeuclidean(x1[:, np.newaxis], x2[:, np.newaxis])
# Distance metrics only defined for vectors (= 1-D)
x = np.arange(4).reshape(2, 2)
with pytest.raises(ValueError):
weuclidean(x, x)
with pytest.raises(ValueError):
wsqeuclidean(x, x)
# Another check, with random data.
rs = np.random.RandomState(1234567890)
x = rs.rand(10)
y = rs.rand(10)
d1 = weuclidean(x, y)
d2 = wsqeuclidean(x, y)
assert_almost_equal(d1**2, d2, decimal=14)
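The identity checked at the end (euclidean squared equals sqeuclidean) holds by definition; a standalone sketch:

```python
import numpy as np
from scipy.spatial.distance import euclidean, sqeuclidean

x = np.array([1.0, 1.0, 1.0])
y = np.zeros(3)
e = euclidean(x, y)    # sqrt(3)
s = sqeuclidean(x, y)  # 3.0
```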
def test_hamming_unequal_length():
# Regression test for gh-4290.
x = [0, 0, 1]
y = [1, 0, 1, 0]
# Used to give an AttributeError from ndarray.mean called on bool
with pytest.raises(ValueError):
whamming(x, y)
def test_hamming_unequal_length_with_w():
u = [0, 0, 1]
v = [0, 0, 1]
w = [1, 0, 1, 0]
msg = "'w' should have the same length as 'u' and 'v'."
with pytest.raises(ValueError, match=msg):
whamming(u, v, w)
def test_hamming_string_array():
# https://github.com/scikit-learn/scikit-learn/issues/4014
a = np.array(['eggs', 'spam', 'spam', 'eggs', 'spam', 'spam', 'spam',
'spam', 'spam', 'spam', 'spam', 'eggs', 'eggs', 'spam',
'eggs', 'eggs', 'eggs', 'eggs', 'eggs', 'spam'],
dtype='|S4')
b = np.array(['eggs', 'spam', 'spam', 'eggs', 'eggs', 'spam', 'spam',
'spam', 'spam', 'eggs', 'spam', 'eggs', 'spam', 'eggs',
'spam', 'spam', 'eggs', 'spam', 'spam', 'eggs'],
dtype='|S4')
desired = 0.45
assert_allclose(whamming(a, b), desired)
def test_minkowski_w():
# Regression test for gh-8142.
arr_in = np.array([[83.33333333, 100., 83.33333333, 100., 36.,
60., 90., 150., 24., 48.],
[83.33333333, 100., 83.33333333, 100., 36.,
60., 90., 150., 24., 48.]])
p0 = pdist(arr_in, metric='minkowski', p=1, w=None)
c0 = cdist(arr_in, arr_in, metric='minkowski', p=1, w=None)
p1 = pdist(arr_in, metric='minkowski', p=1)
c1 = cdist(arr_in, arr_in, metric='minkowski', p=1)
assert_allclose(p0, p1, rtol=1e-15)
assert_allclose(c0, c1, rtol=1e-15)
def test_sqeuclidean_dtypes():
# Assert that sqeuclidean returns the right types of values.
# Integer types should be converted to floating for stability.
# Floating point types should be the same as the input.
x = [1, 2, 3]
y = [4, 5, 6]
for dtype in [np.int8, np.int16, np.int32, np.int64]:
d = wsqeuclidean(np.asarray(x, dtype=dtype), np.asarray(y, dtype=dtype))
assert_(np.issubdtype(d.dtype, np.floating))
for dtype in [np.uint8, np.uint16, np.uint32, np.uint64]:
umax = np.iinfo(dtype).max
d1 = wsqeuclidean([0], np.asarray([umax], dtype=dtype))
d2 = wsqeuclidean(np.asarray([umax], dtype=dtype), [0])
assert_equal(d1, d2)
assert_equal(d1, np.float64(umax)**2)
dtypes = [np.float32, np.float64, np.complex64, np.complex128]
for dtype in ['float16', 'float128']:
# These aren't present in older numpy versions; float128 may also not
# be present on all platforms.
if hasattr(np, dtype):
dtypes.append(getattr(np, dtype))
for dtype in dtypes:
d = wsqeuclidean(np.asarray(x, dtype=dtype), np.asarray(y, dtype=dtype))
assert_equal(d.dtype, dtype)
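The promotion rationale in the comments above can be seen directly with unsigned inputs, where the raw subtraction wraps around. This standalone sketch (plain NumPy, not scipy's implementation) illustrates why `sqeuclidean` converts integer types to floating point:

```python
import numpy as np

# Subtracting unsigned integers wraps modulo 2**8, so computing
# (u - v)**2 in uint8 gives a meaningless result; promoting to
# float first recovers the true squared difference.
u = np.array([0], dtype=np.uint8)
v = np.array([255], dtype=np.uint8)

wrapped = (u - v) ** 2                      # 0 - 255 wraps to 1
promoted = (u.astype(np.float64) - v) ** 2  # exact: 255**2

assert wrapped[0] == 1
assert promoted[0] == 65025.0
```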
def test_sokalmichener():
# Test that sokalmichener has the same result for bool and int inputs.
p = [True, True, False]
q = [True, False, True]
x = [int(b) for b in p]
y = [int(b) for b in q]
dist1 = sokalmichener(p, q)
dist2 = sokalmichener(x, y)
# These should be exactly the same.
assert_equal(dist1, dist2)
def test_sokalmichener_with_weight():
    # from:   | 1 | 0 |
    # to:     | 1 | 1 |
    # weight: | 1 | 0.2 |
ntf = 0 * 1 + 0 * 0.2
nft = 0 * 1 + 1 * 0.2
ntt = 1 * 1 + 0 * 0.2
nff = 0 * 1 + 0 * 0.2
expected = 2 * (nft + ntf) / (ntt + nff + 2 * (nft + ntf))
assert_almost_equal(expected, 0.2857143)
actual = sokalmichener([1, 0], [1, 1], w=[1, 0.2])
assert_almost_equal(expected, actual)
a1 = [False, False, True, True, True, False, False, True, True, True, True,
True, True, False, True, False, False, False, True, True]
a2 = [True, True, True, False, False, True, True, True, False, True,
True, True, True, True, False, False, False, True, True, True]
for w in [0.05, 0.1, 1.0, 20.0]:
assert_almost_equal(sokalmichener(a2, a1, [w]), 0.6666666666666666)
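As a cross-check of the hand-counted quantities above, here is a minimal standalone sketch of the weighted dissimilarity in its 2R / (S + 2R) form; `weighted_sokalmichener` is an illustrative helper, not scipy's implementation:

```python
import numpy as np

# Hypothetical helper mirroring the hand computation above:
# weighted pattern counts feeding 2R / (S + 2R), where
# R = ntf + nft (disagreements) and S = ntt + nff (agreements).
def weighted_sokalmichener(u, v, w):
    u = np.asarray(u, dtype=bool)
    v = np.asarray(v, dtype=bool)
    w = np.asarray(w, dtype=float)
    ntt = (w * (u & v)).sum()    # weighted true/true matches
    nff = (w * (~u & ~v)).sum()  # weighted false/false matches
    r = 2.0 * ((w * (u & ~v)).sum() + (w * (~u & v)).sum())
    return r / (ntt + nff + r)

d = weighted_sokalmichener([1, 0], [1, 1], [1, 0.2])
assert abs(d - 0.2857143) < 1e-6
```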
def test_modifies_input(metric):
# test whether cdist or pdist modifies input arrays
X1 = np.asarray([[1., 2., 3.],
[1.2, 2.3, 3.4],
[2.2, 2.3, 4.4],
[22.2, 23.3, 44.4]])
X1_copy = X1.copy()
cdist(X1, X1, metric)
pdist(X1, metric)
assert_array_equal(X1, X1_copy)
def test_Xdist_deprecated_args(metric):
# testing both cdist and pdist deprecated warnings
X1 = np.asarray([[1., 2., 3.],
[1.2, 2.3, 3.4],
[2.2, 2.3, 4.4],
[22.2, 23.3, 44.4]])
with pytest.raises(TypeError):
cdist(X1, X1, metric, 2.)
with pytest.raises(TypeError):
pdist(X1, metric, 2.)
for arg in ["p", "V", "VI"]:
kwargs = {arg: "foo"}
if ((arg == "V" and metric == "seuclidean")
or (arg == "VI" and metric == "mahalanobis")
or (arg == "p" and metric == "minkowski")):
continue
with pytest.raises(TypeError):
cdist(X1, X1, metric, **kwargs)
with pytest.raises(TypeError):
pdist(X1, metric, **kwargs)
def test_Xdist_non_negative_weights(metric):
X = eo['random-float32-data'][::5, ::2]
w = np.ones(X.shape[1])
w[::5] = -w[::5]
if metric in ['seuclidean', 'mahalanobis', 'jensenshannon']:
pytest.skip("not applicable")
for m in [metric, eval(metric), "test_" + metric]:
with pytest.raises(ValueError):
pdist(X, m, w=w)
with pytest.raises(ValueError):
cdist(X, X, m, w=w)
def test__validate_vector():
x = [1, 2, 3]
y = _validate_vector(x)
assert_array_equal(y, x)
y = _validate_vector(x, dtype=np.float64)
assert_array_equal(y, x)
assert_equal(y.dtype, np.float64)
x = [1]
y = _validate_vector(x)
assert_equal(y.ndim, 1)
assert_equal(y, x)
x = 1
with pytest.raises(ValueError, match="Input vector should be 1-D"):
_validate_vector(x)
x = np.arange(5).reshape(1, -1, 1)
with pytest.raises(ValueError, match="Input vector should be 1-D"):
_validate_vector(x)
x = [[1, 2], [3, 4]]
with pytest.raises(ValueError, match="Input vector should be 1-D"):
_validate_vector(x)
def test_yule_all_same():
# Test yule avoids a divide by zero when exactly equal
x = np.ones((2, 6), dtype=bool)
d = wyule(x[0], x[0])
assert d == 0.0
d = pdist(x, 'yule')
assert_equal(d, [0.0])
d = cdist(x[:1], x[:1], 'yule')
assert_equal(d, [[0.0]])
def test_jensenshannon():
assert_almost_equal(jensenshannon([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 2.0),
1.0)
assert_almost_equal(jensenshannon([1.0, 0.0], [0.5, 0.5]),
0.46450140402245893)
assert_almost_equal(jensenshannon([1.0, 0.0, 0.0], [1.0, 0.0, 0.0]), 0.0)
assert_almost_equal(jensenshannon([[1.0, 2.0]], [[0.5, 1.5]], axis=0),
[0.0, 0.0])
assert_almost_equal(jensenshannon([[1.0, 2.0]], [[0.5, 1.5]], axis=1),
[0.0649045])
assert_almost_equal(jensenshannon([[1.0, 2.0]], [[0.5, 1.5]], axis=0,
keepdims=True), [[0.0, 0.0]])
assert_almost_equal(jensenshannon([[1.0, 2.0]], [[0.5, 1.5]], axis=1,
keepdims=True), [[0.0649045]])
a = np.array([[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]])
b = np.array([[13, 14, 15, 16],
[17, 18, 19, 20],
[21, 22, 23, 24]])
assert_almost_equal(jensenshannon(a, b, axis=0),
[0.1954288, 0.1447697, 0.1138377, 0.0927636])
assert_almost_equal(jensenshannon(a, b, axis=1),
[0.1402339, 0.0399106, 0.0201815])
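The scalar cases above can be reproduced from the definition itself. This sketch (assuming the natural-log base, scipy's default) computes the Jensen-Shannon distance as the square root of the averaged KL divergences to the midpoint distribution; `js_distance` is an illustrative name:

```python
import numpy as np

# Standalone sketch: Jensen-Shannon distance from its definition,
# sqrt(0.5 * (KL(p || m) + KL(q || m))) with m = (p + q) / 2.
def js_distance(p, q):
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0) is taken as 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))

    return np.sqrt(0.5 * (kl(p, m) + kl(q, m)))

d = js_distance([1.0, 0.0], [0.5, 0.5])
assert abs(d - 0.46450140402245893) < 1e-10
```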
def test_gh_17703():
arr_1 = np.array([1, 0, 0])
arr_2 = np.array([2, 0, 0])
expected = dice(arr_1, arr_2)
actual = pdist([arr_1, arr_2], metric='dice')
assert_allclose(actual, expected)
actual = cdist(np.atleast_2d(arr_1),
np.atleast_2d(arr_2), metric='dice')
assert_allclose(actual, expected)
# scipy-main/scipy/spatial/tests/test_qhull.py
import os
import copy
import numpy as np
from numpy.testing import (assert_equal, assert_almost_equal,
assert_, assert_allclose, assert_array_equal)
import pytest
from pytest import raises as assert_raises
import scipy.spatial._qhull as qhull
from scipy.spatial import cKDTree as KDTree
from scipy.spatial import Voronoi
import itertools
def sorted_tuple(x):
return tuple(sorted(x))
def assert_unordered_tuple_list_equal(a, b, tpl=tuple):
if isinstance(a, np.ndarray):
a = a.tolist()
if isinstance(b, np.ndarray):
b = b.tolist()
a = list(map(tpl, a))
a.sort()
b = list(map(tpl, b))
b.sort()
assert_equal(a, b)
np.random.seed(1234)
points = [(0,0), (0,1), (1,0), (1,1), (0.5, 0.5), (0.5, 1.5)]
pathological_data_1 = np.array([
[-3.14,-3.14], [-3.14,-2.36], [-3.14,-1.57], [-3.14,-0.79],
[-3.14,0.0], [-3.14,0.79], [-3.14,1.57], [-3.14,2.36],
[-3.14,3.14], [-2.36,-3.14], [-2.36,-2.36], [-2.36,-1.57],
[-2.36,-0.79], [-2.36,0.0], [-2.36,0.79], [-2.36,1.57],
[-2.36,2.36], [-2.36,3.14], [-1.57,-0.79], [-1.57,0.79],
[-1.57,-1.57], [-1.57,0.0], [-1.57,1.57], [-1.57,-3.14],
[-1.57,-2.36], [-1.57,2.36], [-1.57,3.14], [-0.79,-1.57],
[-0.79,1.57], [-0.79,-3.14], [-0.79,-2.36], [-0.79,-0.79],
[-0.79,0.0], [-0.79,0.79], [-0.79,2.36], [-0.79,3.14],
[0.0,-3.14], [0.0,-2.36], [0.0,-1.57], [0.0,-0.79], [0.0,0.0],
[0.0,0.79], [0.0,1.57], [0.0,2.36], [0.0,3.14], [0.79,-3.14],
[0.79,-2.36], [0.79,-0.79], [0.79,0.0], [0.79,0.79],
[0.79,2.36], [0.79,3.14], [0.79,-1.57], [0.79,1.57],
[1.57,-3.14], [1.57,-2.36], [1.57,2.36], [1.57,3.14],
[1.57,-1.57], [1.57,0.0], [1.57,1.57], [1.57,-0.79],
[1.57,0.79], [2.36,-3.14], [2.36,-2.36], [2.36,-1.57],
[2.36,-0.79], [2.36,0.0], [2.36,0.79], [2.36,1.57],
[2.36,2.36], [2.36,3.14], [3.14,-3.14], [3.14,-2.36],
[3.14,-1.57], [3.14,-0.79], [3.14,0.0], [3.14,0.79],
[3.14,1.57], [3.14,2.36], [3.14,3.14],
])
pathological_data_2 = np.array([
[-1, -1], [-1, 0], [-1, 1],
[0, -1], [0, 0], [0, 1],
    [1, -1 - np.finfo(np.float64).eps], [1, 0], [1, 1],
])
bug_2850_chunks = [np.random.rand(10, 2),
np.array([[0,0], [0,1], [1,0], [1,1]]) # add corners
]
# same with some additional chunks
bug_2850_chunks_2 = (bug_2850_chunks +
[np.random.rand(10, 2),
0.25 + np.array([[0,0], [0,1], [1,0], [1,1]])])
DATASETS = {
'some-points': np.asarray(points),
'random-2d': np.random.rand(30, 2),
'random-3d': np.random.rand(30, 3),
'random-4d': np.random.rand(30, 4),
'random-5d': np.random.rand(30, 5),
'random-6d': np.random.rand(10, 6),
'random-7d': np.random.rand(10, 7),
'random-8d': np.random.rand(10, 8),
'pathological-1': pathological_data_1,
'pathological-2': pathological_data_2
}
INCREMENTAL_DATASETS = {
'bug-2850': (bug_2850_chunks, None),
'bug-2850-2': (bug_2850_chunks_2, None),
}
def _add_inc_data(name, chunksize):
"""
Generate incremental datasets from basic data sets
"""
points = DATASETS[name]
ndim = points.shape[1]
opts = None
nmin = ndim + 2
if name == 'some-points':
# since Qz is not allowed, use QJ
opts = 'QJ Pp'
elif name == 'pathological-1':
# include enough points so that we get different x-coordinates
nmin = 12
chunks = [points[:nmin]]
for j in range(nmin, len(points), chunksize):
chunks.append(points[j:j+chunksize])
new_name = "%s-chunk-%d" % (name, chunksize)
assert new_name not in INCREMENTAL_DATASETS
INCREMENTAL_DATASETS[new_name] = (chunks, opts)
for name in DATASETS:
for chunksize in 1, 4, 16:
_add_inc_data(name, chunksize)
class Test_Qhull:
def test_swapping(self):
# Check that Qhull state swapping works
x = qhull._Qhull(b'v',
np.array([[0,0],[0,1],[1,0],[1,1.],[0.5,0.5]]),
b'Qz')
xd = copy.deepcopy(x.get_voronoi_diagram())
y = qhull._Qhull(b'v',
np.array([[0,0],[0,1],[1,0],[1,2.]]),
b'Qz')
yd = copy.deepcopy(y.get_voronoi_diagram())
xd2 = copy.deepcopy(x.get_voronoi_diagram())
x.close()
yd2 = copy.deepcopy(y.get_voronoi_diagram())
y.close()
assert_raises(RuntimeError, x.get_voronoi_diagram)
assert_raises(RuntimeError, y.get_voronoi_diagram)
assert_allclose(xd[0], xd2[0])
assert_unordered_tuple_list_equal(xd[1], xd2[1], tpl=sorted_tuple)
assert_unordered_tuple_list_equal(xd[2], xd2[2], tpl=sorted_tuple)
assert_unordered_tuple_list_equal(xd[3], xd2[3], tpl=sorted_tuple)
assert_array_equal(xd[4], xd2[4])
assert_allclose(yd[0], yd2[0])
assert_unordered_tuple_list_equal(yd[1], yd2[1], tpl=sorted_tuple)
assert_unordered_tuple_list_equal(yd[2], yd2[2], tpl=sorted_tuple)
assert_unordered_tuple_list_equal(yd[3], yd2[3], tpl=sorted_tuple)
assert_array_equal(yd[4], yd2[4])
x.close()
assert_raises(RuntimeError, x.get_voronoi_diagram)
y.close()
assert_raises(RuntimeError, y.get_voronoi_diagram)
def test_issue_8051(self):
        points = np.array([[0, 0], [0, 1], [0, 2],
                           [1, 0], [1, 1], [1, 2],
                           [2, 0], [2, 1], [2, 2]])
Voronoi(points)
class TestUtilities:
"""
Check that utility functions work.
"""
def test_find_simplex(self):
# Simple check that simplex finding works
points = np.array([(0,0), (0,1), (1,1), (1,0)], dtype=np.double)
tri = qhull.Delaunay(points)
# +---+
# |\ 0|
# | \ |
# |1 \|
# +---+
assert_equal(tri.simplices, [[1, 3, 2], [3, 1, 0]])
for p in [(0.25, 0.25, 1),
(0.75, 0.75, 0),
(0.3, 0.2, 1)]:
i = tri.find_simplex(p[:2])
assert_equal(i, p[2], err_msg=f'{p!r}')
j = qhull.tsearch(tri, p[:2])
assert_equal(i, j)
def test_plane_distance(self):
# Compare plane distance from hyperplane equations obtained from Qhull
# to manually computed plane equations
x = np.array([(0,0), (1, 1), (1, 0), (0.99189033, 0.37674127),
(0.99440079, 0.45182168)], dtype=np.double)
p = np.array([0.99966555, 0.15685619], dtype=np.double)
tri = qhull.Delaunay(x)
z = tri.lift_points(x)
pz = tri.lift_points(p)
dist = tri.plane_distance(p)
for j, v in enumerate(tri.simplices):
x1 = z[v[0]]
x2 = z[v[1]]
x3 = z[v[2]]
n = np.cross(x1 - x3, x2 - x3)
n /= np.sqrt(np.dot(n, n))
n *= -np.sign(n[2])
d = np.dot(n, pz - x3)
assert_almost_equal(dist[j], d)
def test_convex_hull(self):
        # Simple check that the convex hull seems to work
points = np.array([(0,0), (0,1), (1,1), (1,0)], dtype=np.double)
tri = qhull.Delaunay(points)
# +---+
# |\ 0|
# | \ |
# |1 \|
# +---+
assert_equal(tri.convex_hull, [[3, 2], [1, 2], [1, 0], [3, 0]])
def test_volume_area(self):
        # Basic check that we get back the correct volume and area for a cube
points = np.array([(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0),
(0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 1)])
hull = qhull.ConvexHull(points)
assert_allclose(hull.volume, 1., rtol=1e-14,
err_msg="Volume of cube is incorrect")
assert_allclose(hull.area, 6., rtol=1e-14,
err_msg="Area of cube is incorrect")
def test_random_volume_area(self):
        # Test that the results for a random 10-point convex hull are
        # coherent with the output of qconvex Qt s FA
points = np.array([(0.362568364506, 0.472712355305, 0.347003084477),
(0.733731893414, 0.634480295684, 0.950513180209),
(0.511239955611, 0.876839441267, 0.418047827863),
(0.0765906233393, 0.527373281342, 0.6509863541),
(0.146694972056, 0.596725793348, 0.894860986685),
(0.513808585741, 0.069576205858, 0.530890338876),
(0.512343805118, 0.663537132612, 0.037689295973),
(0.47282965018, 0.462176697655, 0.14061843691),
(0.240584597123, 0.778660020591, 0.722913476339),
(0.951271745935, 0.967000673944, 0.890661319684)])
hull = qhull.ConvexHull(points)
assert_allclose(hull.volume, 0.14562013, rtol=1e-07,
err_msg="Volume of random polyhedron is incorrect")
assert_allclose(hull.area, 1.6670425, rtol=1e-07,
err_msg="Area of random polyhedron is incorrect")
def test_incremental_volume_area_random_input(self):
"""Test that incremental mode gives the same volume/area as
non-incremental mode and incremental mode with restart"""
nr_points = 20
dim = 3
points = np.random.random((nr_points, dim))
inc_hull = qhull.ConvexHull(points[:dim+1, :], incremental=True)
inc_restart_hull = qhull.ConvexHull(points[:dim+1, :], incremental=True)
for i in range(dim+1, nr_points):
hull = qhull.ConvexHull(points[:i+1, :])
inc_hull.add_points(points[i:i+1, :])
inc_restart_hull.add_points(points[i:i+1, :], restart=True)
assert_allclose(hull.volume, inc_hull.volume, rtol=1e-7)
assert_allclose(hull.volume, inc_restart_hull.volume, rtol=1e-7)
assert_allclose(hull.area, inc_hull.area, rtol=1e-7)
assert_allclose(hull.area, inc_restart_hull.area, rtol=1e-7)
def _check_barycentric_transforms(self, tri, err_msg="",
unit_cube=False,
unit_cube_tol=0):
"""Check that a triangulation has reasonable barycentric transforms"""
vertices = tri.points[tri.simplices]
sc = 1/(tri.ndim + 1.0)
centroids = vertices.sum(axis=1) * sc
# Either: (i) the simplex has a `nan` barycentric transform,
# or, (ii) the centroid is in the simplex
def barycentric_transform(tr, x):
r = tr[:,-1,:]
Tinv = tr[:,:-1,:]
return np.einsum('ijk,ik->ij', Tinv, x - r)
eps = np.finfo(float).eps
c = barycentric_transform(tri.transform, centroids)
with np.errstate(invalid="ignore"):
ok = np.isnan(c).all(axis=1) | (abs(c - sc)/sc < 0.1).all(axis=1)
assert_(ok.all(), f"{err_msg} {np.nonzero(~ok)}")
# Invalid simplices must be (nearly) zero volume
q = vertices[:,:-1,:] - vertices[:,-1,None,:]
volume = np.array([np.linalg.det(q[k,:,:])
for k in range(tri.nsimplex)])
ok = np.isfinite(tri.transform[:,0,0]) | (volume < np.sqrt(eps))
assert_(ok.all(), f"{err_msg} {np.nonzero(~ok)}")
# Also, find_simplex for the centroid should end up in some
# simplex for the non-degenerate cases
j = tri.find_simplex(centroids)
ok = (j != -1) | np.isnan(tri.transform[:,0,0])
assert_(ok.all(), f"{err_msg} {np.nonzero(~ok)}")
if unit_cube:
# If in unit cube, no interior point should be marked out of hull
at_boundary = (centroids <= unit_cube_tol).any(axis=1)
at_boundary |= (centroids >= 1 - unit_cube_tol).any(axis=1)
ok = (j != -1) | at_boundary
assert_(ok.all(), f"{err_msg} {np.nonzero(~ok)}")
def test_degenerate_barycentric_transforms(self):
# The triangulation should not produce invalid barycentric
# transforms that stump the simplex finding
data = np.load(os.path.join(os.path.dirname(__file__), 'data',
'degenerate_pointset.npz'))
points = data['c']
data.close()
tri = qhull.Delaunay(points)
# Check that there are not too many invalid simplices
bad_count = np.isnan(tri.transform[:,0,0]).sum()
assert_(bad_count < 23, bad_count)
# Check the transforms
self._check_barycentric_transforms(tri)
@pytest.mark.slow
def test_more_barycentric_transforms(self):
# Triangulate some "nasty" grids
eps = np.finfo(float).eps
npoints = {2: 70, 3: 11, 4: 5, 5: 3}
for ndim in range(2, 6):
            # Generate a uniform grid in the n-d unit cube
x = np.linspace(0, 1, npoints[ndim])
grid = np.c_[list(map(np.ravel, np.broadcast_arrays(*np.ix_(*([x]*ndim)))))].T
err_msg = "ndim=%d" % ndim
# Check using regular grid
tri = qhull.Delaunay(grid)
self._check_barycentric_transforms(tri, err_msg=err_msg,
unit_cube=True)
# Check with eps-perturbations
np.random.seed(1234)
m = (np.random.rand(grid.shape[0]) < 0.2)
grid[m,:] += 2*eps*(np.random.rand(*grid[m,:].shape) - 0.5)
tri = qhull.Delaunay(grid)
self._check_barycentric_transforms(tri, err_msg=err_msg,
unit_cube=True,
unit_cube_tol=2*eps)
# Check with duplicated data
tri = qhull.Delaunay(np.r_[grid, grid])
self._check_barycentric_transforms(tri, err_msg=err_msg,
unit_cube=True,
unit_cube_tol=2*eps)
class TestVertexNeighborVertices:
def _check(self, tri):
expected = [set() for j in range(tri.points.shape[0])]
for s in tri.simplices:
for a in s:
for b in s:
if a != b:
expected[a].add(b)
indptr, indices = tri.vertex_neighbor_vertices
got = [set(map(int, indices[indptr[j]:indptr[j+1]]))
for j in range(tri.points.shape[0])]
assert_equal(got, expected, err_msg=f"{got!r} != {expected!r}")
def test_triangle(self):
points = np.array([(0,0), (0,1), (1,0)], dtype=np.double)
tri = qhull.Delaunay(points)
self._check(tri)
def test_rectangle(self):
points = np.array([(0,0), (0,1), (1,1), (1,0)], dtype=np.double)
tri = qhull.Delaunay(points)
self._check(tri)
def test_complicated(self):
points = np.array([(0,0), (0,1), (1,1), (1,0),
(0.5, 0.5), (0.9, 0.5)], dtype=np.double)
tri = qhull.Delaunay(points)
self._check(tri)
class TestDelaunay:
"""
Check that triangulation works.
"""
def test_masked_array_fails(self):
masked_array = np.ma.masked_all(1)
assert_raises(ValueError, qhull.Delaunay, masked_array)
def test_array_with_nans_fails(self):
points_with_nan = np.array([(0,0), (0,1), (1,1), (1,np.nan)], dtype=np.double)
assert_raises(ValueError, qhull.Delaunay, points_with_nan)
def test_nd_simplex(self):
# simple smoke test: triangulate a n-dimensional simplex
for nd in range(2, 8):
points = np.zeros((nd+1, nd))
for j in range(nd):
points[j,j] = 1.0
points[-1,:] = 1.0
tri = qhull.Delaunay(points)
tri.simplices.sort()
assert_equal(tri.simplices, np.arange(nd+1, dtype=int)[None, :])
assert_equal(tri.neighbors, -1 + np.zeros((nd+1), dtype=int)[None,:])
def test_2d_square(self):
# simple smoke test: 2d square
points = np.array([(0,0), (0,1), (1,1), (1,0)], dtype=np.double)
tri = qhull.Delaunay(points)
assert_equal(tri.simplices, [[1, 3, 2], [3, 1, 0]])
assert_equal(tri.neighbors, [[-1, -1, 1], [-1, -1, 0]])
def test_duplicate_points(self):
x = np.array([0, 1, 0, 1], dtype=np.float64)
y = np.array([0, 0, 1, 1], dtype=np.float64)
xp = np.r_[x, x]
yp = np.r_[y, y]
# shouldn't fail on duplicate points
qhull.Delaunay(np.c_[x, y])
qhull.Delaunay(np.c_[xp, yp])
def test_pathological(self):
# both should succeed
points = DATASETS['pathological-1']
tri = qhull.Delaunay(points)
assert_equal(tri.points[tri.simplices].max(), points.max())
assert_equal(tri.points[tri.simplices].min(), points.min())
points = DATASETS['pathological-2']
tri = qhull.Delaunay(points)
assert_equal(tri.points[tri.simplices].max(), points.max())
assert_equal(tri.points[tri.simplices].min(), points.min())
def test_joggle(self):
# Check that the option QJ indeed guarantees that all input points
# occur as vertices of the triangulation
points = np.random.rand(10, 2)
points = np.r_[points, points] # duplicate input data
tri = qhull.Delaunay(points, qhull_options="QJ Qbb Pp")
assert_array_equal(np.unique(tri.simplices.ravel()),
np.arange(len(points)))
def test_coplanar(self):
# Check that the coplanar point output option indeed works
points = np.random.rand(10, 2)
points = np.r_[points, points] # duplicate input data
tri = qhull.Delaunay(points)
assert_(len(np.unique(tri.simplices.ravel())) == len(points)//2)
assert_(len(tri.coplanar) == len(points)//2)
assert_(len(np.unique(tri.coplanar[:,2])) == len(points)//2)
assert_(np.all(tri.vertex_to_simplex >= 0))
def test_furthest_site(self):
points = [(0, 0), (0, 1), (1, 0), (0.5, 0.5), (1.1, 1.1)]
tri = qhull.Delaunay(points, furthest_site=True)
expected = np.array([(1, 4, 0), (4, 2, 0)]) # from Qhull
assert_array_equal(tri.simplices, expected)
@pytest.mark.parametrize("name", sorted(INCREMENTAL_DATASETS))
def test_incremental(self, name):
# Test incremental construction of the triangulation
chunks, opts = INCREMENTAL_DATASETS[name]
points = np.concatenate(chunks, axis=0)
obj = qhull.Delaunay(chunks[0], incremental=True,
qhull_options=opts)
for chunk in chunks[1:]:
obj.add_points(chunk)
obj2 = qhull.Delaunay(points)
obj3 = qhull.Delaunay(chunks[0], incremental=True,
qhull_options=opts)
if len(chunks) > 1:
obj3.add_points(np.concatenate(chunks[1:], axis=0),
restart=True)
# Check that the incremental mode agrees with upfront mode
if name.startswith('pathological'):
# XXX: These produce valid but different triangulations.
# They look OK when plotted, but how to check them?
assert_array_equal(np.unique(obj.simplices.ravel()),
np.arange(points.shape[0]))
assert_array_equal(np.unique(obj2.simplices.ravel()),
np.arange(points.shape[0]))
else:
assert_unordered_tuple_list_equal(obj.simplices, obj2.simplices,
tpl=sorted_tuple)
assert_unordered_tuple_list_equal(obj2.simplices, obj3.simplices,
tpl=sorted_tuple)
def assert_hulls_equal(points, facets_1, facets_2):
# Check that two convex hulls constructed from the same point set
# are equal
facets_1 = set(map(sorted_tuple, facets_1))
facets_2 = set(map(sorted_tuple, facets_2))
if facets_1 != facets_2 and points.shape[1] == 2:
# The direct check fails for the pathological cases
# --- then the convex hull from Delaunay differs (due
# to rounding error etc.) from the hull computed
# otherwise, by the question whether (tricoplanar)
# points that lie almost exactly on the hull are
# included as vertices of the hull or not.
#
# So we check the result, and accept it if the Delaunay
# hull line segments are a subset of the usual hull.
eps = 1000 * np.finfo(float).eps
for a, b in facets_1:
for ap, bp in facets_2:
t = points[bp] - points[ap]
t /= np.linalg.norm(t) # tangent
n = np.array([-t[1], t[0]]) # normal
# check that the two line segments are parallel
# to the same line
c1 = np.dot(n, points[b] - points[ap])
c2 = np.dot(n, points[a] - points[ap])
if not np.allclose(np.dot(c1, n), 0):
continue
if not np.allclose(np.dot(c2, n), 0):
continue
# Check that the segment (a, b) is contained in (ap, bp)
c1 = np.dot(t, points[a] - points[ap])
c2 = np.dot(t, points[b] - points[ap])
c3 = np.dot(t, points[bp] - points[ap])
if c1 < -eps or c1 > c3 + eps:
continue
if c2 < -eps or c2 > c3 + eps:
continue
# OK:
break
else:
raise AssertionError("comparison fails")
# it was OK
return
assert_equal(facets_1, facets_2)
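The tangent/normal containment test in the fallback branch above can be isolated into a small 2-D sketch. The helper name `segment_contained` and the test points are illustrative only:

```python
import numpy as np

# Minimal sketch of the check used above: does segment (a, b) lie
# on, and within, segment (ap, bp)?  Project onto the unit tangent
# and normal of (ap, bp).
def segment_contained(a, b, ap, bp, eps=1e-12):
    t = bp - ap
    t = t / np.linalg.norm(t)    # unit tangent
    n = np.array([-t[1], t[0]])  # unit normal
    # both endpoints must lie on the supporting line...
    if abs(n @ (a - ap)) > eps or abs(n @ (b - ap)) > eps:
        return False
    # ...and their tangential coordinates within [0, |bp - ap|]
    c1, c2, c3 = t @ (a - ap), t @ (b - ap), t @ (bp - ap)
    return -eps <= c1 <= c3 + eps and -eps <= c2 <= c3 + eps

inner = segment_contained(np.array([0.25, 0.0]), np.array([0.75, 0.0]),
                          np.array([0.0, 0.0]), np.array([1.0, 0.0]))
outer = segment_contained(np.array([0.5, 0.1]), np.array([0.75, 0.0]),
                          np.array([0.0, 0.0]), np.array([1.0, 0.0]))
```

`inner` is contained; `outer` fails because its first endpoint lies off the supporting line.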
class TestConvexHull:
def test_masked_array_fails(self):
masked_array = np.ma.masked_all(1)
assert_raises(ValueError, qhull.ConvexHull, masked_array)
def test_array_with_nans_fails(self):
points_with_nan = np.array([(0,0), (1,1), (2,np.nan)], dtype=np.double)
assert_raises(ValueError, qhull.ConvexHull, points_with_nan)
@pytest.mark.parametrize("name", sorted(DATASETS))
def test_hull_consistency_tri(self, name):
# Check that a convex hull returned by qhull in ndim
# and the hull constructed from ndim delaunay agree
points = DATASETS[name]
tri = qhull.Delaunay(points)
hull = qhull.ConvexHull(points)
assert_hulls_equal(points, tri.convex_hull, hull.simplices)
# Check that the hull extremes are as expected
if points.shape[1] == 2:
assert_equal(np.unique(hull.simplices), np.sort(hull.vertices))
else:
assert_equal(np.unique(hull.simplices), hull.vertices)
@pytest.mark.parametrize("name", sorted(INCREMENTAL_DATASETS))
def test_incremental(self, name):
# Test incremental construction of the convex hull
chunks, _ = INCREMENTAL_DATASETS[name]
points = np.concatenate(chunks, axis=0)
obj = qhull.ConvexHull(chunks[0], incremental=True)
for chunk in chunks[1:]:
obj.add_points(chunk)
obj2 = qhull.ConvexHull(points)
obj3 = qhull.ConvexHull(chunks[0], incremental=True)
if len(chunks) > 1:
obj3.add_points(np.concatenate(chunks[1:], axis=0),
restart=True)
# Check that the incremental mode agrees with upfront mode
assert_hulls_equal(points, obj.simplices, obj2.simplices)
assert_hulls_equal(points, obj.simplices, obj3.simplices)
def test_vertices_2d(self):
# The vertices should be in counterclockwise order in 2-D
np.random.seed(1234)
points = np.random.rand(30, 2)
hull = qhull.ConvexHull(points)
assert_equal(np.unique(hull.simplices), np.sort(hull.vertices))
# Check counterclockwiseness
x, y = hull.points[hull.vertices].T
angle = np.arctan2(y - y.mean(), x - x.mean())
assert_(np.all(np.diff(np.unwrap(angle)) > 0))
def test_volume_area(self):
# Basic check that we get back the correct volume and area for a cube
points = np.array([(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0),
(0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 1)])
tri = qhull.ConvexHull(points)
assert_allclose(tri.volume, 1., rtol=1e-14)
assert_allclose(tri.area, 6., rtol=1e-14)
@pytest.mark.parametrize("incremental", [False, True])
def test_good2d(self, incremental):
# Make sure the QGn option gives the correct value of "good".
points = np.array([[0.2, 0.2],
[0.2, 0.4],
[0.4, 0.4],
[0.4, 0.2],
[0.3, 0.6]])
hull = qhull.ConvexHull(points=points,
incremental=incremental,
qhull_options='QG4')
expected = np.array([False, True, False, False], dtype=bool)
actual = hull.good
assert_equal(actual, expected)
@pytest.mark.parametrize("visibility", [
"QG4", # visible=True
"QG-4", # visible=False
])
@pytest.mark.parametrize("new_gen, expected", [
# add generator that places QG4 inside hull
# so all facets are invisible
(np.array([[0.3, 0.7]]),
np.array([False, False, False, False, False], dtype=bool)),
# adding a generator on the opposite side of the square
# should preserve the single visible facet & add one invisible
# facet
(np.array([[0.3, -0.7]]),
np.array([False, True, False, False, False], dtype=bool)),
# split the visible facet on top of the square into two
# visible facets, with visibility at the end of the array
# because add_points concatenates
(np.array([[0.3, 0.41]]),
np.array([False, False, False, True, True], dtype=bool)),
# with our current Qhull options, coplanarity will not count
# for visibility; this case shifts one visible & one invisible
# facet & adds a coplanar facet
# simplex at index position 2 is the shifted visible facet
# the final simplex is the coplanar facet
(np.array([[0.5, 0.6], [0.6, 0.6]]),
np.array([False, False, True, False, False], dtype=bool)),
# place the new generator such that it envelops the query
# point within the convex hull, but only just barely within
# the double precision limit
# NOTE: testing exact degeneracy is less predictable than this
# scenario, perhaps because of the default Qt option we have
# enabled for Qhull to handle precision matters
(np.array([[0.3, 0.6 + 1e-16]]),
np.array([False, False, False, False, False], dtype=bool)),
])
def test_good2d_incremental_changes(self, new_gen, expected,
visibility):
# use the usual square convex hull
# generators from test_good2d
points = np.array([[0.2, 0.2],
[0.2, 0.4],
[0.4, 0.4],
[0.4, 0.2],
[0.3, 0.6]])
hull = qhull.ConvexHull(points=points,
incremental=True,
qhull_options=visibility)
hull.add_points(new_gen)
actual = hull.good
if '-' in visibility:
expected = np.invert(expected)
assert_equal(actual, expected)
@pytest.mark.parametrize("incremental", [False, True])
def test_good2d_no_option(self, incremental):
        # handle the case where the "good" attribute doesn't exist
        # because QGn or QG-n wasn't specified
points = np.array([[0.2, 0.2],
[0.2, 0.4],
[0.4, 0.4],
[0.4, 0.2],
[0.3, 0.6]])
hull = qhull.ConvexHull(points=points,
incremental=incremental)
actual = hull.good
assert actual is None
# preserve None after incremental addition
if incremental:
hull.add_points(np.zeros((1, 2)))
actual = hull.good
assert actual is None
@pytest.mark.parametrize("incremental", [False, True])
def test_good2d_inside(self, incremental):
# Make sure the QGn option gives the correct value of "good".
# When point n is inside the convex hull of the rest, good is
# all False.
points = np.array([[0.2, 0.2],
[0.2, 0.4],
[0.4, 0.4],
[0.4, 0.2],
[0.3, 0.3]])
hull = qhull.ConvexHull(points=points,
incremental=incremental,
qhull_options='QG4')
expected = np.array([False, False, False, False], dtype=bool)
actual = hull.good
assert_equal(actual, expected)
@pytest.mark.parametrize("incremental", [False, True])
def test_good3d(self, incremental):
# Make sure the QGn option gives the correct value of "good"
# for a 3d figure
points = np.array([[0.0, 0.0, 0.0],
[0.90029516, -0.39187448, 0.18948093],
[0.48676420, -0.72627633, 0.48536925],
[0.57651530, -0.81179274, -0.09285832],
[0.67846893, -0.71119562, 0.18406710]])
hull = qhull.ConvexHull(points=points,
incremental=incremental,
qhull_options='QG0')
expected = np.array([True, False, False, False], dtype=bool)
assert_equal(hull.good, expected)
class TestVoronoi:
@pytest.mark.parametrize("qhull_opts, extra_pts", [
# option Qz (default for SciPy) will add
# an extra point at infinity
("Qbb Qc Qz", 1),
("Qbb Qc", 0),
])
@pytest.mark.parametrize("n_pts", [50, 100])
@pytest.mark.parametrize("ndim", [2, 3])
def test_point_region_structure(self,
qhull_opts,
n_pts,
extra_pts,
ndim):
# see gh-16773
rng = np.random.default_rng(7790)
points = rng.random((n_pts, ndim))
vor = Voronoi(points, qhull_options=qhull_opts)
pt_region = vor.point_region
assert pt_region.max() == n_pts - 1 + extra_pts
assert pt_region.size == len(vor.regions) - extra_pts
assert len(vor.regions) == n_pts + extra_pts
assert vor.points.shape[0] == n_pts
# if there is an empty sublist in the Voronoi
# regions data structure, it should never be
# indexed because it corresponds to an internally
# added point at infinity and is not a member of the
# generators (input points)
if extra_pts:
sublens = [len(x) for x in vor.regions]
# only one point at infinity (empty region)
# is allowed
assert sublens.count(0) == 1
assert sublens.index(0) not in pt_region
def test_masked_array_fails(self):
masked_array = np.ma.masked_all(1)
assert_raises(ValueError, qhull.Voronoi, masked_array)
def test_simple(self):
# Simple case with known Voronoi diagram
points = [(0, 0), (0, 1), (0, 2),
(1, 0), (1, 1), (1, 2),
(2, 0), (2, 1), (2, 2)]
# qhull v o Fv Qbb Qc Qz < dat
output = """
2
5 10 1
-10.101 -10.101
0.5 0.5
0.5 1.5
1.5 0.5
1.5 1.5
2 0 1
3 2 0 1
2 0 2
3 3 0 1
4 1 2 4 3
3 4 0 2
2 0 3
3 4 0 3
2 0 4
0
12
4 0 3 0 1
4 0 1 0 1
4 1 4 1 2
4 1 2 0 2
4 2 5 0 2
4 3 4 1 3
4 3 6 0 3
4 4 5 2 4
4 4 7 3 4
4 5 8 0 4
4 6 7 0 3
4 7 8 0 4
"""
self._compare_qvoronoi(points, output)
def _compare_qvoronoi(self, points, output, **kw):
"""Compare to output from 'qvoronoi o Fv < data' to Voronoi()"""
# Parse output
output = [list(map(float, x.split())) for x in output.strip().splitlines()]
nvertex = int(output[1][0])
vertices = list(map(tuple, output[3:2+nvertex])) # exclude inf
nregion = int(output[1][1])
regions = [[int(y)-1 for y in x[1:]]
for x in output[2+nvertex:2+nvertex+nregion]]
ridge_points = [[int(y) for y in x[1:3]]
for x in output[3+nvertex+nregion:]]
ridge_vertices = [[int(y)-1 for y in x[3:]]
for x in output[3+nvertex+nregion:]]
# Compare results
vor = qhull.Voronoi(points, **kw)
def sorttuple(x):
return tuple(sorted(x))
assert_allclose(vor.vertices, vertices)
assert_equal(set(map(tuple, vor.regions)),
set(map(tuple, regions)))
p1 = list(zip(list(map(sorttuple, ridge_points)), list(map(sorttuple, ridge_vertices))))
p2 = list(zip(list(map(sorttuple, vor.ridge_points.tolist())),
list(map(sorttuple, vor.ridge_vertices))))
p1.sort()
p2.sort()
assert_equal(p1, p2)
@pytest.mark.parametrize("name", sorted(DATASETS))
def test_ridges(self, name):
# Check that the ridges computed by Voronoi indeed separate
# the regions of nearest neighborhood, by comparing the result
# to KDTree.
points = DATASETS[name]
tree = KDTree(points)
vor = qhull.Voronoi(points)
for p, v in vor.ridge_dict.items():
# consider only finite ridges
if not np.all(np.asarray(v) >= 0):
continue
ridge_midpoint = vor.vertices[v].mean(axis=0)
d = 1e-6 * (points[p[0]] - ridge_midpoint)
dist, k = tree.query(ridge_midpoint + d, k=1)
assert_equal(k, p[0])
dist, k = tree.query(ridge_midpoint - d, k=1)
assert_equal(k, p[1])
def test_furthest_site(self):
points = [(0, 0), (0, 1), (1, 0), (0.5, 0.5), (1.1, 1.1)]
# qhull v o Fv Qbb Qc Qu < dat
output = """
2
3 5 1
-10.101 -10.101
0.6000000000000001 0.5
0.5 0.6000000000000001
3 0 2 1
2 0 1
2 0 2
0
3 0 2 1
5
4 0 2 0 2
4 0 4 1 2
4 0 1 0 1
4 1 4 0 1
4 2 4 0 2
"""
self._compare_qvoronoi(points, output, furthest_site=True)
def test_furthest_site_flag(self):
points = [(0, 0), (0, 1), (1, 0), (0.5, 0.5), (1.1, 1.1)]
vor = Voronoi(points)
        assert_equal(vor.furthest_site, False)
        vor = Voronoi(points, furthest_site=True)
        assert_equal(vor.furthest_site, True)
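The flag exercised above can be shown with a standalone sketch using the same five points as the test:

```python
import numpy as np
from scipy.spatial import Voronoi

points = np.array([(0, 0), (0, 1), (1, 0), (0.5, 0.5), (1.1, 1.1)])

# Nearest-site (ordinary) diagram vs. furthest-site diagram.
vor_near = Voronoi(points)
vor_far = Voronoi(points, furthest_site=True)
# furthest_site simply records which kind of diagram was built.
```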
@pytest.mark.parametrize("name", sorted(INCREMENTAL_DATASETS))
def test_incremental(self, name):
# Test incremental construction of the triangulation
if INCREMENTAL_DATASETS[name][0][0].shape[1] > 3:
# too slow (testing of the result --- qhull is still fast)
return
chunks, opts = INCREMENTAL_DATASETS[name]
points = np.concatenate(chunks, axis=0)
obj = qhull.Voronoi(chunks[0], incremental=True,
qhull_options=opts)
for chunk in chunks[1:]:
obj.add_points(chunk)
obj2 = qhull.Voronoi(points)
obj3 = qhull.Voronoi(chunks[0], incremental=True,
qhull_options=opts)
if len(chunks) > 1:
obj3.add_points(np.concatenate(chunks[1:], axis=0),
restart=True)
# -- Check that the incremental mode agrees with upfront mode
assert_equal(len(obj.point_region), len(obj2.point_region))
assert_equal(len(obj.point_region), len(obj3.point_region))
# The vertices may be in different order or duplicated in
# the incremental map
for objx in obj, obj3:
vertex_map = {-1: -1}
for i, v in enumerate(objx.vertices):
for j, v2 in enumerate(obj2.vertices):
if np.allclose(v, v2):
vertex_map[i] = j
def remap(x):
if hasattr(x, '__len__'):
return tuple({remap(y) for y in x})
try:
return vertex_map[x]
except KeyError as e:
raise AssertionError("incremental result has spurious vertex at %r"
% (objx.vertices[x],)) from e
def simplified(x):
items = set(map(sorted_tuple, x))
if () in items:
items.remove(())
items = [x for x in items if len(x) > 1]
items.sort()
return items
assert_equal(
simplified(remap(objx.regions)),
simplified(obj2.regions)
)
assert_equal(
simplified(remap(objx.ridge_vertices)),
simplified(obj2.ridge_vertices)
)
# XXX: compare ridge_points --- not clear exactly how to do this
class Test_HalfspaceIntersection:
def assert_unordered_allclose(self, arr1, arr2, rtol=1e-7):
"""Check that every line in arr1 is only once in arr2"""
assert_equal(arr1.shape, arr2.shape)
truths = np.zeros((arr1.shape[0],), dtype=bool)
for l1 in arr1:
indexes = np.nonzero((abs(arr2 - l1) < rtol).all(axis=1))[0]
assert_equal(indexes.shape, (1,))
truths[indexes[0]] = True
assert_(truths.all())
@pytest.mark.parametrize("dt", [np.float64, int])
def test_cube_halfspace_intersection(self, dt):
halfspaces = np.array([[-1, 0, 0],
[0, -1, 0],
[1, 0, -2],
[0, 1, -2]], dtype=dt)
feasible_point = np.array([1, 1], dtype=dt)
points = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
hull = qhull.HalfspaceIntersection(halfspaces, feasible_point)
assert_allclose(hull.intersections, points)
def test_self_dual_polytope_intersection(self):
fname = os.path.join(os.path.dirname(__file__), 'data',
'selfdual-4d-polytope.txt')
ineqs = np.genfromtxt(fname)
halfspaces = -np.hstack((ineqs[:, 1:], ineqs[:, :1]))
feas_point = np.array([0., 0., 0., 0.])
hs = qhull.HalfspaceIntersection(halfspaces, feas_point)
assert_equal(hs.intersections.shape, (24, 4))
assert_almost_equal(hs.dual_volume, 32.0)
assert_equal(len(hs.dual_facets), 24)
for facet in hs.dual_facets:
assert_equal(len(facet), 6)
dists = halfspaces[:, -1] + halfspaces[:, :-1].dot(feas_point)
self.assert_unordered_allclose((halfspaces[:, :-1].T/dists).T, hs.dual_points)
points = itertools.permutations([0., 0., 0.5, -0.5])
for point in points:
assert_equal(np.sum((hs.intersections == point).all(axis=1)), 1)
def test_wrong_feasible_point(self):
halfspaces = np.array([[-1.0, 0.0, 0.0],
[0.0, -1.0, 0.0],
[1.0, 0.0, -1.0],
[0.0, 1.0, -1.0]])
feasible_point = np.array([0.5, 0.5, 0.5])
#Feasible point is (ndim,) instead of (ndim-1,)
assert_raises(ValueError, qhull.HalfspaceIntersection, halfspaces, feasible_point)
feasible_point = np.array([[0.5], [0.5]])
#Feasible point is (ndim-1, 1) instead of (ndim-1,)
assert_raises(ValueError, qhull.HalfspaceIntersection, halfspaces, feasible_point)
feasible_point = np.array([[0.5, 0.5]])
#Feasible point is (1, ndim-1) instead of (ndim-1,)
assert_raises(ValueError, qhull.HalfspaceIntersection, halfspaces, feasible_point)
feasible_point = np.array([-0.5, -0.5])
#Feasible point is outside feasible region
assert_raises(qhull.QhullError, qhull.HalfspaceIntersection, halfspaces, feasible_point)
def test_incremental(self):
#Cube
halfspaces = np.array([[0., 0., -1., -0.5],
[0., -1., 0., -0.5],
[-1., 0., 0., -0.5],
[1., 0., 0., -0.5],
[0., 1., 0., -0.5],
[0., 0., 1., -0.5]])
#Cut each summit
extra_normals = np.array([[1., 1., 1.],
[1., 1., -1.],
[1., -1., 1.],
[1, -1., -1.]])
offsets = np.array([[-1.]]*8)
extra_halfspaces = np.hstack((np.vstack((extra_normals, -extra_normals)),
offsets))
feas_point = np.array([0., 0., 0.])
inc_hs = qhull.HalfspaceIntersection(halfspaces, feas_point, incremental=True)
inc_res_hs = qhull.HalfspaceIntersection(halfspaces, feas_point, incremental=True)
for i, ehs in enumerate(extra_halfspaces):
inc_hs.add_halfspaces(ehs[np.newaxis, :])
inc_res_hs.add_halfspaces(ehs[np.newaxis, :], restart=True)
total = np.vstack((halfspaces, extra_halfspaces[:i+1, :]))
hs = qhull.HalfspaceIntersection(total, feas_point)
assert_allclose(inc_hs.halfspaces, inc_res_hs.halfspaces)
assert_allclose(inc_hs.halfspaces, hs.halfspaces)
#Direct computation and restart should have points in same order
assert_allclose(hs.intersections, inc_res_hs.intersections)
#Incremental will have points in different order than direct computation
self.assert_unordered_allclose(inc_hs.intersections, hs.intersections)
inc_hs.close()
def test_cube(self):
# Halfspaces of the cube:
halfspaces = np.array([[-1., 0., 0., 0.], # x >= 0
[1., 0., 0., -1.], # x <= 1
[0., -1., 0., 0.], # y >= 0
[0., 1., 0., -1.], # y <= 1
[0., 0., -1., 0.], # z >= 0
[0., 0., 1., -1.]]) # z <= 1
point = np.array([0.5, 0.5, 0.5])
hs = qhull.HalfspaceIntersection(halfspaces, point)
# qhalf H0.5,0.5,0.5 o < input.txt
qhalf_points = np.array([
[-2, 0, 0],
[2, 0, 0],
[0, -2, 0],
[0, 2, 0],
[0, 0, -2],
[0, 0, 2]])
qhalf_facets = [
[2, 4, 0],
[4, 2, 1],
[5, 2, 0],
[2, 5, 1],
[3, 4, 1],
[4, 3, 0],
[5, 3, 1],
[3, 5, 0]]
assert len(qhalf_facets) == len(hs.dual_facets)
for a, b in zip(qhalf_facets, hs.dual_facets):
assert set(a) == set(b) # facet orientation can differ
assert_allclose(hs.dual_points, qhalf_points)
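A minimal standalone counterpart to the cube case above, on a unit square (not taken from the test suite; the stacked `[normal, offset]` format with `normal . x + offset <= 0` is the same one the tests use):

```python
import numpy as np
from scipy.spatial import HalfspaceIntersection

halfspaces = np.array([[-1., 0., 0.],   # x >= 0
                       [1., 0., -1.],   # x <= 1
                       [0., -1., 0.],   # y >= 0
                       [0., 1., -1.]])  # y <= 1
interior = np.array([0.5, 0.5])         # any strictly feasible point works
hs = HalfspaceIntersection(halfspaces, interior)
# The four corners of the square come back as intersections.
corners = set(map(tuple, np.round(hs.intersections, 9)))
```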
# === scipy-main/scipy/spatial/tests/test__plotutils.py ===
import pytest
from numpy.testing import assert_, assert_array_equal, suppress_warnings
try:
import matplotlib
matplotlib.rcParams['backend'] = 'Agg'
import matplotlib.pyplot as plt
has_matplotlib = True
except Exception:
has_matplotlib = False
from scipy.spatial import \
delaunay_plot_2d, voronoi_plot_2d, convex_hull_plot_2d, \
Delaunay, Voronoi, ConvexHull
@pytest.mark.skipif(not has_matplotlib, reason="Matplotlib not available")
class TestPlotting:
points = [(0,0), (0,1), (1,0), (1,1)]
def test_delaunay(self):
# Smoke test
fig = plt.figure()
obj = Delaunay(self.points)
s_before = obj.simplices.copy()
with suppress_warnings() as sup:
# filter can be removed when matplotlib 1.x is dropped
sup.filter(message="The ishold function was deprecated in version")
r = delaunay_plot_2d(obj, ax=fig.gca())
assert_array_equal(obj.simplices, s_before) # shouldn't modify
assert_(r is fig)
delaunay_plot_2d(obj, ax=fig.gca())
def test_voronoi(self):
# Smoke test
fig = plt.figure()
obj = Voronoi(self.points)
with suppress_warnings() as sup:
# filter can be removed when matplotlib 1.x is dropped
sup.filter(message="The ishold function was deprecated in version")
r = voronoi_plot_2d(obj, ax=fig.gca())
assert_(r is fig)
voronoi_plot_2d(obj)
voronoi_plot_2d(obj, show_vertices=False)
def test_convex_hull(self):
# Smoke test
fig = plt.figure()
tri = ConvexHull(self.points)
with suppress_warnings() as sup:
# filter can be removed when matplotlib 1.x is dropped
sup.filter(message="The ishold function was deprecated in version")
r = convex_hull_plot_2d(tri, ax=fig.gca())
assert_(r is fig)
convex_hull_plot_2d(tri)
# === scipy-main/scipy/spatial/tests/test__procrustes.py ===
import numpy as np
from numpy.testing import assert_allclose, assert_equal, assert_almost_equal
from pytest import raises as assert_raises
from scipy.spatial import procrustes
class TestProcrustes:
def setup_method(self):
"""creates inputs"""
# an L
self.data1 = np.array([[1, 3], [1, 2], [1, 1], [2, 1]], 'd')
# a larger, shifted, mirrored L
self.data2 = np.array([[4, -2], [4, -4], [4, -6], [2, -6]], 'd')
# an L shifted up 1, right 1, and with point 4 shifted an extra .5
# to the right
# pointwise distance disparity with data1: 3*(2) + (1 + 1.5^2)
self.data3 = np.array([[2, 4], [2, 3], [2, 2], [3, 2.5]], 'd')
# data4, data5 are standardized (trace(A*A') = 1).
# procrustes should return an identical copy if they are used
# as the first matrix argument.
shiftangle = np.pi / 8
self.data4 = np.array([[1, 0], [0, 1], [-1, 0],
[0, -1]], 'd') / np.sqrt(4)
self.data5 = np.array([[np.cos(shiftangle), np.sin(shiftangle)],
[np.cos(np.pi / 2 - shiftangle),
np.sin(np.pi / 2 - shiftangle)],
[-np.cos(shiftangle),
-np.sin(shiftangle)],
[-np.cos(np.pi / 2 - shiftangle),
-np.sin(np.pi / 2 - shiftangle)]],
'd') / np.sqrt(4)
def test_procrustes(self):
# tests procrustes' ability to match two matrices.
#
# the second matrix is a rotated, shifted, scaled, and mirrored version
# of the first, in two dimensions only
#
# can shift, mirror, and scale an 'L'?
a, b, disparity = procrustes(self.data1, self.data2)
assert_allclose(b, a)
assert_almost_equal(disparity, 0.)
# if first mtx is standardized, leaves first mtx unchanged?
m4, m5, disp45 = procrustes(self.data4, self.data5)
assert_equal(m4, self.data4)
# at worst, data3 is an 'L' with one point off by .5
m1, m3, disp13 = procrustes(self.data1, self.data3)
#assert_(disp13 < 0.5 ** 2)
def test_procrustes2(self):
# procrustes disparity should not depend on order of matrices
m1, m3, disp13 = procrustes(self.data1, self.data3)
m3_2, m1_2, disp31 = procrustes(self.data3, self.data1)
assert_almost_equal(disp13, disp31)
# try with 3d, 8 pts per
rand1 = np.array([[2.61955202, 0.30522265, 0.55515826],
[0.41124708, -0.03966978, -0.31854548],
[0.91910318, 1.39451809, -0.15295084],
[2.00452023, 0.50150048, 0.29485268],
[0.09453595, 0.67528885, 0.03283872],
[0.07015232, 2.18892599, -1.67266852],
[0.65029688, 1.60551637, 0.80013549],
[-0.6607528, 0.53644208, 0.17033891]])
rand3 = np.array([[0.0809969, 0.09731461, -0.173442],
[-1.84888465, -0.92589646, -1.29335743],
[0.67031855, -1.35957463, 0.41938621],
[0.73967209, -0.20230757, 0.52418027],
[0.17752796, 0.09065607, 0.29827466],
[0.47999368, -0.88455717, -0.57547934],
[-0.11486344, -0.12608506, -0.3395779],
[-0.86106154, -0.28687488, 0.9644429]])
res1, res3, disp13 = procrustes(rand1, rand3)
res3_2, res1_2, disp31 = procrustes(rand3, rand1)
assert_almost_equal(disp13, disp31)
def test_procrustes_shape_mismatch(self):
assert_raises(ValueError, procrustes,
np.array([[1, 2], [3, 4]]),
np.array([[5, 6, 7], [8, 9, 10]]))
def test_procrustes_empty_rows_or_cols(self):
empty = np.array([[]])
assert_raises(ValueError, procrustes, empty, empty)
def test_procrustes_no_variation(self):
assert_raises(ValueError, procrustes,
np.array([[42, 42], [42, 42]]),
np.array([[45, 45], [45, 45]]))
def test_procrustes_bad_number_of_dimensions(self):
# fewer dimensions in one dataset
assert_raises(ValueError, procrustes,
np.array([1, 1, 2, 3, 5, 8]),
np.array([[1, 2], [3, 4]]))
# fewer dimensions in both datasets
assert_raises(ValueError, procrustes,
np.array([1, 1, 2, 3, 5, 8]),
np.array([1, 1, 2, 3, 5, 8]))
# zero dimensions
assert_raises(ValueError, procrustes, np.array(7), np.array(11))
# extra dimensions
assert_raises(ValueError, procrustes,
np.array([[[11], [7]]]),
np.array([[[5, 13]]]))
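A self-contained illustration of the invariance these tests rely on: `procrustes` removes translation, uniform scaling, and orthogonal transforms, so a scaled and shifted copy of a shape matches it exactly (the data here is the same "L" used in `setup_method`):

```python
import numpy as np
from scipy.spatial import procrustes

a = np.array([[1., 3.], [1., 2.], [1., 1.], [2., 1.]])
b = 2.0 * a + 5.0          # scaled and translated copy of `a`
mtx1, mtx2, disparity = procrustes(a, b)
# disparity is zero up to rounding: the two shapes are identical
```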
# === scipy-main/scipy/spatial/tests/test_kdtree.py ===
# Copyright Anne M. Archibald 2008
# Released under the scipy license
import os
from numpy.testing import (assert_equal, assert_array_equal, assert_,
assert_almost_equal, assert_array_almost_equal,
assert_allclose)
from pytest import raises as assert_raises
import pytest
from platform import python_implementation
import numpy as np
from scipy.spatial import KDTree, Rectangle, distance_matrix, cKDTree
from scipy.spatial._ckdtree import cKDTreeNode
from scipy.spatial import minkowski_distance
import itertools
@pytest.fixture(params=[KDTree, cKDTree])
def kdtree_type(request):
return request.param
def KDTreeTest(kls):
"""Class decorator to create test cases for KDTree and cKDTree
Tests use the class variable ``kdtree_type`` as the tree constructor.
"""
if not kls.__name__.startswith('_Test'):
raise RuntimeError("Expected a class name starting with _Test")
for tree in (KDTree, cKDTree):
test_name = kls.__name__[1:] + '_' + tree.__name__
if test_name in globals():
raise RuntimeError("Duplicated test name: " + test_name)
# Create a new sub-class with kdtree_type defined
test_case = type(test_name, (kls,), {'kdtree_type': tree})
globals()[test_name] = test_case
return kls
def distance_box(a, b, p, boxsize):
diff = a - b
diff[diff > 0.5 * boxsize] -= boxsize
diff[diff < -0.5 * boxsize] += boxsize
d = minkowski_distance(diff, 0, p)
return d
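A quick sanity check of the minimum-image helper above (reproduced here so the snippet is self-contained): two points near opposite edges of a unit box are far apart in plain Euclidean distance but close under periodic wrapping.

```python
import numpy as np
from scipy.spatial import minkowski_distance

def distance_box(a, b, p, boxsize):
    # wrap each coordinate difference into [-boxsize/2, boxsize/2]
    diff = a - b
    diff[diff > 0.5 * boxsize] -= boxsize
    diff[diff < -0.5 * boxsize] += boxsize
    return minkowski_distance(diff, 0, p)

a = np.array([0.95, 0.5])
b = np.array([0.05, 0.5])
d_plain = minkowski_distance(a, b, 2)   # 0.9 without wrapping
d_wrap = distance_box(a, b, 2, 1.0)     # 0.1 across the boundary
```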
class ConsistencyTests:
def distance(self, a, b, p):
return minkowski_distance(a, b, p)
def test_nearest(self):
x = self.x
d, i = self.kdtree.query(x, 1)
assert_almost_equal(d**2, np.sum((x-self.data[i])**2))
eps = 1e-8
assert_(np.all(np.sum((self.data-x[np.newaxis, :])**2, axis=1) > d**2-eps))
def test_m_nearest(self):
x = self.x
m = self.m
dd, ii = self.kdtree.query(x, m)
d = np.amax(dd)
i = ii[np.argmax(dd)]
assert_almost_equal(d**2, np.sum((x-self.data[i])**2))
eps = 1e-8
assert_equal(np.sum(np.sum((self.data-x[np.newaxis, :])**2, axis=1) < d**2+eps), m)
def test_points_near(self):
x = self.x
d = self.d
dd, ii = self.kdtree.query(x, k=self.kdtree.n, distance_upper_bound=d)
eps = 1e-8
hits = 0
for near_d, near_i in zip(dd, ii):
if near_d == np.inf:
continue
hits += 1
assert_almost_equal(near_d**2, np.sum((x-self.data[near_i])**2))
assert_(near_d < d+eps, f"near_d={near_d:g} should be less than {d:g}")
assert_equal(np.sum(self.distance(self.data, x, 2) < d**2+eps), hits)
def test_points_near_l1(self):
x = self.x
d = self.d
dd, ii = self.kdtree.query(x, k=self.kdtree.n, p=1, distance_upper_bound=d)
eps = 1e-8
hits = 0
for near_d, near_i in zip(dd, ii):
if near_d == np.inf:
continue
hits += 1
assert_almost_equal(near_d, self.distance(x, self.data[near_i], 1))
assert_(near_d < d+eps, f"near_d={near_d:g} should be less than {d:g}")
assert_equal(np.sum(self.distance(self.data, x, 1) < d+eps), hits)
def test_points_near_linf(self):
x = self.x
d = self.d
dd, ii = self.kdtree.query(x, k=self.kdtree.n, p=np.inf, distance_upper_bound=d)
eps = 1e-8
hits = 0
for near_d, near_i in zip(dd, ii):
if near_d == np.inf:
continue
hits += 1
assert_almost_equal(near_d, self.distance(x, self.data[near_i], np.inf))
assert_(near_d < d+eps, f"near_d={near_d:g} should be less than {d:g}")
assert_equal(np.sum(self.distance(self.data, x, np.inf) < d+eps), hits)
def test_approx(self):
x = self.x
k = self.k
eps = 0.1
d_real, i_real = self.kdtree.query(x, k)
d, i = self.kdtree.query(x, k, eps=eps)
assert_(np.all(d <= d_real*(1+eps)))
@KDTreeTest
class _Test_random(ConsistencyTests):
def setup_method(self):
self.n = 100
self.m = 4
np.random.seed(1234)
self.data = np.random.randn(self.n, self.m)
self.kdtree = self.kdtree_type(self.data, leafsize=2)
self.x = np.random.randn(self.m)
self.d = 0.2
self.k = 10
@KDTreeTest
class _Test_random_far(_Test_random):
def setup_method(self):
super().setup_method()
self.x = np.random.randn(self.m)+10
@KDTreeTest
class _Test_small(ConsistencyTests):
def setup_method(self):
self.data = np.array([[0, 0, 0],
[0, 0, 1],
[0, 1, 0],
[0, 1, 1],
[1, 0, 0],
[1, 0, 1],
[1, 1, 0],
[1, 1, 1]])
self.kdtree = self.kdtree_type(self.data)
self.n = self.kdtree.n
self.m = self.kdtree.m
np.random.seed(1234)
self.x = np.random.randn(3)
self.d = 0.5
self.k = 4
def test_nearest(self):
assert_array_equal(
self.kdtree.query((0, 0, 0.1), 1),
(0.1, 0))
def test_nearest_two(self):
assert_array_equal(
self.kdtree.query((0, 0, 0.1), 2),
([0.1, 0.9], [0, 1]))
@KDTreeTest
class _Test_small_nonleaf(_Test_small):
def setup_method(self):
super().setup_method()
self.kdtree = self.kdtree_type(self.data, leafsize=1)
class Test_vectorization_KDTree:
def setup_method(self):
self.data = np.array([[0, 0, 0],
[0, 0, 1],
[0, 1, 0],
[0, 1, 1],
[1, 0, 0],
[1, 0, 1],
[1, 1, 0],
[1, 1, 1]])
self.kdtree = KDTree(self.data)
def test_single_query(self):
d, i = self.kdtree.query(np.array([0, 0, 0]))
assert_(isinstance(d, float))
assert_(np.issubdtype(i, np.signedinteger))
def test_vectorized_query(self):
d, i = self.kdtree.query(np.zeros((2, 4, 3)))
assert_equal(np.shape(d), (2, 4))
assert_equal(np.shape(i), (2, 4))
def test_single_query_multiple_neighbors(self):
s = 23
kk = self.kdtree.n+s
d, i = self.kdtree.query(np.array([0, 0, 0]), k=kk)
assert_equal(np.shape(d), (kk,))
assert_equal(np.shape(i), (kk,))
assert_(np.all(~np.isfinite(d[-s:])))
assert_(np.all(i[-s:] == self.kdtree.n))
def test_vectorized_query_multiple_neighbors(self):
s = 23
kk = self.kdtree.n+s
d, i = self.kdtree.query(np.zeros((2, 4, 3)), k=kk)
assert_equal(np.shape(d), (2, 4, kk))
assert_equal(np.shape(i), (2, 4, kk))
assert_(np.all(~np.isfinite(d[:, :, -s:])))
assert_(np.all(i[:, :, -s:] == self.kdtree.n))
def test_query_raises_for_k_none(self):
x = 1.0
with pytest.raises(ValueError, match="k must be an integer or*"):
self.kdtree.query(x, k=None)
class Test_vectorization_cKDTree:
def setup_method(self):
self.data = np.array([[0, 0, 0],
[0, 0, 1],
[0, 1, 0],
[0, 1, 1],
[1, 0, 0],
[1, 0, 1],
[1, 1, 0],
[1, 1, 1]])
self.kdtree = cKDTree(self.data)
def test_single_query(self):
d, i = self.kdtree.query([0, 0, 0])
assert_(isinstance(d, float))
assert_(isinstance(i, int))
def test_vectorized_query(self):
d, i = self.kdtree.query(np.zeros((2, 4, 3)))
assert_equal(np.shape(d), (2, 4))
assert_equal(np.shape(i), (2, 4))
def test_vectorized_query_noncontiguous_values(self):
np.random.seed(1234)
qs = np.random.randn(3, 1000).T
ds, i_s = self.kdtree.query(qs)
for q, d, i in zip(qs, ds, i_s):
assert_equal(self.kdtree.query(q), (d, i))
def test_single_query_multiple_neighbors(self):
s = 23
kk = self.kdtree.n+s
d, i = self.kdtree.query([0, 0, 0], k=kk)
assert_equal(np.shape(d), (kk,))
assert_equal(np.shape(i), (kk,))
assert_(np.all(~np.isfinite(d[-s:])))
assert_(np.all(i[-s:] == self.kdtree.n))
def test_vectorized_query_multiple_neighbors(self):
s = 23
kk = self.kdtree.n+s
d, i = self.kdtree.query(np.zeros((2, 4, 3)), k=kk)
assert_equal(np.shape(d), (2, 4, kk))
assert_equal(np.shape(i), (2, 4, kk))
assert_(np.all(~np.isfinite(d[:, :, -s:])))
assert_(np.all(i[:, :, -s:] == self.kdtree.n))
class ball_consistency:
tol = 0.0
def distance(self, a, b, p):
return minkowski_distance(a * 1.0, b * 1.0, p)
def test_in_ball(self):
x = np.atleast_2d(self.x)
d = np.broadcast_to(self.d, x.shape[:-1])
l = self.T.query_ball_point(x, self.d, p=self.p, eps=self.eps)
for i, ind in enumerate(l):
dist = self.distance(self.data[ind], x[i], self.p) - d[i]*(1.+self.eps)
norm = self.distance(self.data[ind], x[i], self.p) + d[i]*(1.+self.eps)
assert_array_equal(dist < self.tol * norm, True)
def test_found_all(self):
x = np.atleast_2d(self.x)
d = np.broadcast_to(self.d, x.shape[:-1])
l = self.T.query_ball_point(x, self.d, p=self.p, eps=self.eps)
for i, ind in enumerate(l):
c = np.ones(self.T.n, dtype=bool)
c[ind] = False
dist = self.distance(self.data[c], x[i], self.p) - d[i]/(1.+self.eps)
norm = self.distance(self.data[c], x[i], self.p) + d[i]/(1.+self.eps)
assert_array_equal(dist > -self.tol * norm, True)
@KDTreeTest
class _Test_random_ball(ball_consistency):
def setup_method(self):
n = 100
m = 4
np.random.seed(1234)
self.data = np.random.randn(n, m)
self.T = self.kdtree_type(self.data, leafsize=2)
self.x = np.random.randn(m)
self.p = 2.
self.eps = 0
self.d = 0.2
@KDTreeTest
class _Test_random_ball_periodic(ball_consistency):
def distance(self, a, b, p):
return distance_box(a, b, p, 1.0)
def setup_method(self):
n = 10000
m = 4
np.random.seed(1234)
self.data = np.random.uniform(size=(n, m))
self.T = self.kdtree_type(self.data, leafsize=2, boxsize=1)
self.x = np.full(m, 0.1)
self.p = 2.
self.eps = 0
self.d = 0.2
def test_in_ball_outside(self):
l = self.T.query_ball_point(self.x + 1.0, self.d, p=self.p, eps=self.eps)
for i in l:
assert_(self.distance(self.data[i], self.x, self.p) <= self.d*(1.+self.eps))
l = self.T.query_ball_point(self.x - 1.0, self.d, p=self.p, eps=self.eps)
for i in l:
assert_(self.distance(self.data[i], self.x, self.p) <= self.d*(1.+self.eps))
def test_found_all_outside(self):
c = np.ones(self.T.n, dtype=bool)
l = self.T.query_ball_point(self.x + 1.0, self.d, p=self.p, eps=self.eps)
c[l] = False
assert_(np.all(self.distance(self.data[c], self.x, self.p) >= self.d/(1.+self.eps)))
l = self.T.query_ball_point(self.x - 1.0, self.d, p=self.p, eps=self.eps)
c[l] = False
assert_(np.all(self.distance(self.data[c], self.x, self.p) >= self.d/(1.+self.eps)))
@KDTreeTest
class _Test_random_ball_largep_issue9890(ball_consistency):
# allow some roundoff errors due to numerical issues
tol = 1e-13
def setup_method(self):
n = 1000
m = 2
np.random.seed(123)
self.data = np.random.randint(100, 1000, size=(n, m))
self.T = self.kdtree_type(self.data)
self.x = self.data
self.p = 100
self.eps = 0
self.d = 10
@KDTreeTest
class _Test_random_ball_approx(_Test_random_ball):
def setup_method(self):
super().setup_method()
self.eps = 0.1
@KDTreeTest
class _Test_random_ball_approx_periodic(_Test_random_ball):
def setup_method(self):
super().setup_method()
self.eps = 0.1
@KDTreeTest
class _Test_random_ball_far(_Test_random_ball):
def setup_method(self):
super().setup_method()
self.d = 2.
@KDTreeTest
class _Test_random_ball_far_periodic(_Test_random_ball_periodic):
def setup_method(self):
super().setup_method()
self.d = 2.
@KDTreeTest
class _Test_random_ball_l1(_Test_random_ball):
def setup_method(self):
super().setup_method()
self.p = 1
@KDTreeTest
class _Test_random_ball_linf(_Test_random_ball):
def setup_method(self):
super().setup_method()
self.p = np.inf
def test_random_ball_vectorized(kdtree_type):
n = 20
m = 5
np.random.seed(1234)
T = kdtree_type(np.random.randn(n, m))
r = T.query_ball_point(np.random.randn(2, 3, m), 1)
assert_equal(r.shape, (2, 3))
assert_(isinstance(r[0, 0], list))
def test_query_ball_point_multithreading(kdtree_type):
np.random.seed(0)
n = 5000
k = 2
points = np.random.randn(n, k)
T = kdtree_type(points)
l1 = T.query_ball_point(points, 0.003, workers=1)
l2 = T.query_ball_point(points, 0.003, workers=64)
l3 = T.query_ball_point(points, 0.003, workers=-1)
for i in range(n):
if l1[i] or l2[i]:
assert_array_equal(l1[i], l2[i])
for i in range(n):
if l1[i] or l3[i]:
assert_array_equal(l1[i], l3[i])
class two_trees_consistency:
def distance(self, a, b, p):
return minkowski_distance(a, b, p)
def test_all_in_ball(self):
r = self.T1.query_ball_tree(self.T2, self.d, p=self.p, eps=self.eps)
for i, l in enumerate(r):
for j in l:
assert_(self.distance(self.data1[i], self.data2[j], self.p) <= self.d*(1.+self.eps))
def test_found_all(self):
r = self.T1.query_ball_tree(self.T2, self.d, p=self.p, eps=self.eps)
for i, l in enumerate(r):
c = np.ones(self.T2.n, dtype=bool)
c[l] = False
assert_(np.all(self.distance(self.data2[c], self.data1[i], self.p) >= self.d/(1.+self.eps)))
@KDTreeTest
class _Test_two_random_trees(two_trees_consistency):
def setup_method(self):
n = 50
m = 4
np.random.seed(1234)
self.data1 = np.random.randn(n, m)
self.T1 = self.kdtree_type(self.data1, leafsize=2)
self.data2 = np.random.randn(n, m)
self.T2 = self.kdtree_type(self.data2, leafsize=2)
self.p = 2.
self.eps = 0
self.d = 0.2
@KDTreeTest
class _Test_two_random_trees_periodic(two_trees_consistency):
def distance(self, a, b, p):
return distance_box(a, b, p, 1.0)
def setup_method(self):
n = 50
m = 4
np.random.seed(1234)
self.data1 = np.random.uniform(size=(n, m))
self.T1 = self.kdtree_type(self.data1, leafsize=2, boxsize=1.0)
self.data2 = np.random.uniform(size=(n, m))
self.T2 = self.kdtree_type(self.data2, leafsize=2, boxsize=1.0)
self.p = 2.
self.eps = 0
self.d = 0.2
@KDTreeTest
class _Test_two_random_trees_far(_Test_two_random_trees):
def setup_method(self):
super().setup_method()
self.d = 2
@KDTreeTest
class _Test_two_random_trees_far_periodic(_Test_two_random_trees_periodic):
def setup_method(self):
super().setup_method()
self.d = 2
@KDTreeTest
class _Test_two_random_trees_linf(_Test_two_random_trees):
def setup_method(self):
super().setup_method()
self.p = np.inf
@KDTreeTest
class _Test_two_random_trees_linf_periodic(_Test_two_random_trees_periodic):
def setup_method(self):
super().setup_method()
self.p = np.inf
class Test_rectangle:
def setup_method(self):
self.rect = Rectangle([0, 0], [1, 1])
def test_min_inside(self):
assert_almost_equal(self.rect.min_distance_point([0.5, 0.5]), 0)
def test_min_one_side(self):
assert_almost_equal(self.rect.min_distance_point([0.5, 1.5]), 0.5)
def test_min_two_sides(self):
assert_almost_equal(self.rect.min_distance_point([2, 2]), np.sqrt(2))
def test_max_inside(self):
assert_almost_equal(self.rect.max_distance_point([0.5, 0.5]), 1/np.sqrt(2))
def test_max_one_side(self):
assert_almost_equal(self.rect.max_distance_point([0.5, 1.5]), np.hypot(0.5, 1.5))
def test_max_two_sides(self):
assert_almost_equal(self.rect.max_distance_point([2, 2]), 2*np.sqrt(2))
def test_split(self):
less, greater = self.rect.split(0, 0.1)
assert_array_equal(less.maxes, [0.1, 1])
assert_array_equal(less.mins, [0, 0])
assert_array_equal(greater.maxes, [1, 1])
assert_array_equal(greater.mins, [0.1, 0])
def test_distance_l2():
assert_almost_equal(minkowski_distance([0, 0], [1, 1], 2), np.sqrt(2))
def test_distance_l1():
assert_almost_equal(minkowski_distance([0, 0], [1, 1], 1), 2)
def test_distance_linf():
assert_almost_equal(minkowski_distance([0, 0], [1, 1], np.inf), 1)
def test_distance_vectorization():
np.random.seed(1234)
x = np.random.randn(10, 1, 3)
y = np.random.randn(1, 7, 3)
assert_equal(minkowski_distance(x, y).shape, (10, 7))
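As the test shows, `minkowski_distance` broadcasts over leading axes with the last axis as the coordinate axis; a concrete check with the same shapes:

```python
import numpy as np
from scipy.spatial import minkowski_distance

x = np.zeros((10, 1, 3))
y = np.ones((1, 7, 3))
# Leading axes broadcast to (10, 7); every pair differs by the unit
# vector (1, 1, 1), so each distance is sqrt(3).
d = minkowski_distance(x, y)
```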
class count_neighbors_consistency:
def test_one_radius(self):
r = 0.2
assert_equal(self.T1.count_neighbors(self.T2, r),
np.sum([len(l) for l in self.T1.query_ball_tree(self.T2, r)]))
def test_large_radius(self):
r = 1000
assert_equal(self.T1.count_neighbors(self.T2, r),
np.sum([len(l) for l in self.T1.query_ball_tree(self.T2, r)]))
def test_multiple_radius(self):
rs = np.exp(np.linspace(np.log(0.01), np.log(10), 3))
results = self.T1.count_neighbors(self.T2, rs)
assert_(np.all(np.diff(results) >= 0))
for r, result in zip(rs, results):
assert_equal(self.T1.count_neighbors(self.T2, r), result)
@KDTreeTest
class _Test_count_neighbors(count_neighbors_consistency):
def setup_method(self):
n = 50
m = 2
np.random.seed(1234)
self.T1 = self.kdtree_type(np.random.randn(n, m), leafsize=2)
self.T2 = self.kdtree_type(np.random.randn(n, m), leafsize=2)
class sparse_distance_matrix_consistency:
def distance(self, a, b, p):
return minkowski_distance(a, b, p)
def test_consistency_with_neighbors(self):
M = self.T1.sparse_distance_matrix(self.T2, self.r)
r = self.T1.query_ball_tree(self.T2, self.r)
for i, l in enumerate(r):
for j in l:
assert_almost_equal(M[i, j],
self.distance(self.T1.data[i], self.T2.data[j], self.p),
decimal=14)
for ((i, j), d) in M.items():
assert_(j in r[i])
def test_zero_distance(self):
# raises an exception for bug 870 (FIXME: Does it?)
self.T1.sparse_distance_matrix(self.T1, self.r)
def test_consistency(self):
# Test consistency with a distance_matrix
M1 = self.T1.sparse_distance_matrix(self.T2, self.r)
expected = distance_matrix(self.T1.data, self.T2.data)
expected[expected > self.r] = 0
assert_array_almost_equal(M1.toarray(), expected, decimal=14)
def test_against_logic_error_regression(self):
# regression test for gh-5077 logic error
np.random.seed(0)
too_many = np.array(np.random.randn(18, 2), dtype=int)
tree = self.kdtree_type(
too_many, balanced_tree=False, compact_nodes=False)
d = tree.sparse_distance_matrix(tree, 3).toarray()
assert_array_almost_equal(d, d.T, decimal=14)
def test_ckdtree_return_types(self):
# brute-force reference
ref = np.zeros((self.n, self.n))
for i in range(self.n):
for j in range(self.n):
v = self.data1[i, :] - self.data2[j, :]
ref[i, j] = np.dot(v, v)
ref = np.sqrt(ref)
ref[ref > self.r] = 0.
# test return type 'dict'
dist = np.zeros((self.n, self.n))
r = self.T1.sparse_distance_matrix(self.T2, self.r, output_type='dict')
for i, j in r.keys():
dist[i, j] = r[(i, j)]
assert_array_almost_equal(ref, dist, decimal=14)
# test return type 'ndarray'
dist = np.zeros((self.n, self.n))
r = self.T1.sparse_distance_matrix(self.T2, self.r,
output_type='ndarray')
for k in range(r.shape[0]):
i = r['i'][k]
j = r['j'][k]
v = r['v'][k]
dist[i, j] = v
assert_array_almost_equal(ref, dist, decimal=14)
# test return type 'dok_matrix'
r = self.T1.sparse_distance_matrix(self.T2, self.r,
output_type='dok_matrix')
assert_array_almost_equal(ref, r.toarray(), decimal=14)
# test return type 'coo_matrix'
r = self.T1.sparse_distance_matrix(self.T2, self.r,
output_type='coo_matrix')
assert_array_almost_equal(ref, r.toarray(), decimal=14)
@KDTreeTest
class _Test_sparse_distance_matrix(sparse_distance_matrix_consistency):
def setup_method(self):
n = 50
m = 4
np.random.seed(1234)
data1 = np.random.randn(n, m)
data2 = np.random.randn(n, m)
self.T1 = self.kdtree_type(data1, leafsize=2)
self.T2 = self.kdtree_type(data2, leafsize=2)
self.r = 0.5
self.p = 2
self.data1 = data1
self.data2 = data2
self.n = n
self.m = m
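Independently of the harness, the API checked here can be sketched like this (the point count, radius, and variable names are illustrative choices):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pts = rng.random((20, 2))
t1 = cKDTree(pts)
t2 = cKDTree(pts)

# Pairs farther apart than max_distance are simply absent from the
# sparse result, so the dense view is zero beyond the cutoff.
M = t1.sparse_distance_matrix(t2, 0.3, output_type='coo_matrix')
dense = M.toarray()
```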
def test_distance_matrix():
m = 10
n = 11
k = 4
np.random.seed(1234)
xs = np.random.randn(m, k)
ys = np.random.randn(n, k)
ds = distance_matrix(xs, ys)
assert_equal(ds.shape, (m, n))
for i in range(m):
for j in range(n):
assert_almost_equal(minkowski_distance(xs[i], ys[j]), ds[i, j])
def test_distance_matrix_looping():
m = 10
n = 11
k = 4
np.random.seed(1234)
xs = np.random.randn(m, k)
ys = np.random.randn(n, k)
ds = distance_matrix(xs, ys)
dsl = distance_matrix(xs, ys, threshold=1)
assert_equal(ds, dsl)
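A tiny standalone counterpart to these checks, showing the `d[i, j]` convention directly:

```python
import numpy as np
from scipy.spatial import distance_matrix

xs = np.array([[0., 0.], [1., 0.]])
ys = np.array([[0., 0.], [0., 1.], [1., 1.]])
# d[i, j] is the Euclidean distance from xs[i] to ys[j]
d = distance_matrix(xs, ys)
```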
def check_onetree_query(T, d):
r = T.query_ball_tree(T, d)
s = set()
for i, l in enumerate(r):
for j in l:
if i < j:
s.add((i, j))
assert_(s == T.query_pairs(d))
def test_onetree_query(kdtree_type):
np.random.seed(0)
n = 50
k = 4
points = np.random.randn(n, k)
T = kdtree_type(points)
check_onetree_query(T, 0.1)
points = np.random.randn(3*n, k)
points[:n] *= 0.001
points[n:2*n] += 2
T = kdtree_type(points)
check_onetree_query(T, 0.1)
check_onetree_query(T, 0.001)
check_onetree_query(T, 0.00001)
check_onetree_query(T, 1e-6)
def test_query_pairs_single_node(kdtree_type):
tree = kdtree_type([[0, 1]])
assert_equal(tree.query_pairs(0.5), set())
def test_kdtree_query_pairs(kdtree_type):
np.random.seed(0)
n = 50
k = 2
r = 0.1
r2 = r**2
points = np.random.randn(n, k)
T = kdtree_type(points)
# brute force reference
brute = set()
for i in range(n):
for j in range(i+1, n):
v = points[i, :] - points[j, :]
if np.dot(v, v) <= r2:
brute.add((i, j))
l0 = sorted(brute)
# test default return type
s = T.query_pairs(r)
l1 = sorted(s)
assert_array_equal(l0, l1)
# test return type 'set'
s = T.query_pairs(r, output_type='set')
l1 = sorted(s)
assert_array_equal(l0, l1)
# test return type 'ndarray'
s = set()
arr = T.query_pairs(r, output_type='ndarray')
for i in range(arr.shape[0]):
s.add((int(arr[i, 0]), int(arr[i, 1])))
l2 = sorted(s)
assert_array_equal(l0, l2)
def test_query_pairs_eps(kdtree_type):
spacing = np.sqrt(2)
# irrational spacing to have potential rounding errors
x_range = np.linspace(0, 3 * spacing, 4)
y_range = np.linspace(0, 3 * spacing, 4)
xy_array = [(xi, yi) for xi in x_range for yi in y_range]
tree = kdtree_type(xy_array)
pairs_eps = tree.query_pairs(r=spacing, eps=.1)
# result: 24 with eps, 16 without due to rounding
pairs = tree.query_pairs(r=spacing * 1.01)
# result: 24
assert_equal(pairs, pairs_eps)
def test_ball_point_ints(kdtree_type):
# Regression test for #1373.
x, y = np.mgrid[0:4, 0:4]
points = list(zip(x.ravel(), y.ravel()))
tree = kdtree_type(points)
assert_equal(sorted([4, 8, 9, 12]),
sorted(tree.query_ball_point((2, 0), 1)))
points = np.asarray(points, dtype=float)
tree = kdtree_type(points)
assert_equal(sorted([4, 8, 9, 12]),
sorted(tree.query_ball_point((2, 0), 1)))
def test_kdtree_comparisons():
# Regression test: node comparisons were done wrong in 0.12 w/Py3.
nodes = [KDTree.node() for _ in range(3)]
assert_equal(sorted(nodes), sorted(nodes[::-1]))
def test_kdtree_build_modes(kdtree_type):
    # check that different build modes for KDTree give identical query results
np.random.seed(0)
n = 5000
k = 4
points = np.random.randn(n, k)
T1 = kdtree_type(points).query(points, k=5)[-1]
T2 = kdtree_type(points, compact_nodes=False).query(points, k=5)[-1]
T3 = kdtree_type(points, balanced_tree=False).query(points, k=5)[-1]
T4 = kdtree_type(points, compact_nodes=False,
balanced_tree=False).query(points, k=5)[-1]
assert_array_equal(T1, T2)
assert_array_equal(T1, T3)
assert_array_equal(T1, T4)
def test_kdtree_pickle(kdtree_type):
# test if it is possible to pickle a KDTree
import pickle
np.random.seed(0)
n = 50
k = 4
points = np.random.randn(n, k)
T1 = kdtree_type(points)
tmp = pickle.dumps(T1)
T2 = pickle.loads(tmp)
T1 = T1.query(points, k=5)[-1]
T2 = T2.query(points, k=5)[-1]
assert_array_equal(T1, T2)
def test_kdtree_pickle_boxsize(kdtree_type):
# test if it is possible to pickle a periodic KDTree
import pickle
np.random.seed(0)
n = 50
k = 4
points = np.random.uniform(size=(n, k))
T1 = kdtree_type(points, boxsize=1.0)
tmp = pickle.dumps(T1)
T2 = pickle.loads(tmp)
T1 = T1.query(points, k=5)[-1]
T2 = T2.query(points, k=5)[-1]
assert_array_equal(T1, T2)
def test_kdtree_copy_data(kdtree_type):
# check if copy_data=True makes the kd-tree
# impervious to data corruption by modification of
    # the data array
np.random.seed(0)
n = 5000
k = 4
points = np.random.randn(n, k)
T = kdtree_type(points, copy_data=True)
q = points.copy()
T1 = T.query(q, k=5)[-1]
points[...] = np.random.randn(n, k)
T2 = T.query(q, k=5)[-1]
assert_array_equal(T1, T2)
def test_ckdtree_parallel(kdtree_type, monkeypatch):
    # check that workers != 1 also generates correct query results
np.random.seed(0)
n = 5000
k = 4
points = np.random.randn(n, k)
T = kdtree_type(points)
T1 = T.query(points, k=5, workers=64)[-1]
T2 = T.query(points, k=5, workers=-1)[-1]
T3 = T.query(points, k=5)[-1]
assert_array_equal(T1, T2)
assert_array_equal(T1, T3)
monkeypatch.setattr(os, 'cpu_count', lambda: None)
with pytest.raises(NotImplementedError, match="Cannot determine the"):
T.query(points, 1, workers=-1)
def test_ckdtree_view():
# Check that the nodes can be correctly viewed from Python.
# This test also sanity checks each node in the cKDTree, and
# thus verifies the internal structure of the kd-tree.
np.random.seed(0)
n = 100
k = 4
points = np.random.randn(n, k)
kdtree = cKDTree(points)
# walk the whole kd-tree and sanity check each node
def recurse_tree(n):
assert_(isinstance(n, cKDTreeNode))
if n.split_dim == -1:
assert_(n.lesser is None)
assert_(n.greater is None)
assert_(n.indices.shape[0] <= kdtree.leafsize)
else:
recurse_tree(n.lesser)
recurse_tree(n.greater)
x = n.lesser.data_points[:, n.split_dim]
y = n.greater.data_points[:, n.split_dim]
assert_(x.max() < y.min())
recurse_tree(kdtree.tree)
# check that indices are correctly retrieved
n = kdtree.tree
assert_array_equal(np.sort(n.indices), range(100))
# check that data_points are correctly retrieved
assert_array_equal(kdtree.data[n.indices, :], n.data_points)
# KDTree is specialized to type double points, so no need to make
# a unit test corresponding to test_ball_point_ints()
def test_kdtree_list_k(kdtree_type):
    # check query with a list of k values
n = 200
m = 2
klist = [1, 2, 3]
kint = 3
np.random.seed(1234)
data = np.random.uniform(size=(n, m))
kdtree = kdtree_type(data, leafsize=1)
# check agreement between arange(1, k+1) and k
dd, ii = kdtree.query(data, klist)
dd1, ii1 = kdtree.query(data, kint)
assert_equal(dd, dd1)
assert_equal(ii, ii1)
# now check skipping one element
klist = np.array([1, 3])
kint = 3
dd, ii = kdtree.query(data, kint)
dd1, ii1 = kdtree.query(data, klist)
assert_equal(dd1, dd[..., klist - 1])
assert_equal(ii1, ii[..., klist - 1])
# check k == 1 special case
# and k == [1] non-special case
dd, ii = kdtree.query(data, 1)
dd1, ii1 = kdtree.query(data, [1])
assert_equal(len(dd.shape), 1)
assert_equal(len(dd1.shape), 2)
assert_equal(dd, np.ravel(dd1))
assert_equal(ii, np.ravel(ii1))
def test_kdtree_box(kdtree_type):
# check ckdtree periodic boundary
n = 2000
m = 3
k = 3
np.random.seed(1234)
data = np.random.uniform(size=(n, m))
kdtree = kdtree_type(data, leafsize=1, boxsize=1.0)
# use the standard python KDTree for the simulated periodic box
kdtree2 = kdtree_type(data, leafsize=1)
for p in [1, 2, 3.0, np.inf]:
dd, ii = kdtree.query(data, k, p=p)
dd1, ii1 = kdtree.query(data + 1.0, k, p=p)
assert_almost_equal(dd, dd1)
assert_equal(ii, ii1)
dd1, ii1 = kdtree.query(data - 1.0, k, p=p)
assert_almost_equal(dd, dd1)
assert_equal(ii, ii1)
dd2, ii2 = simulate_periodic_box(kdtree2, data, k, boxsize=1.0, p=p)
assert_almost_equal(dd, dd2)
assert_equal(ii, ii2)
def test_kdtree_box_0boxsize(kdtree_type):
# check ckdtree periodic boundary that mimics non-periodic
n = 2000
m = 2
k = 3
np.random.seed(1234)
data = np.random.uniform(size=(n, m))
kdtree = kdtree_type(data, leafsize=1, boxsize=0.0)
# use the standard python KDTree for the simulated periodic box
kdtree2 = kdtree_type(data, leafsize=1)
for p in [1, 2, np.inf]:
dd, ii = kdtree.query(data, k, p=p)
dd1, ii1 = kdtree2.query(data, k, p=p)
assert_almost_equal(dd, dd1)
assert_equal(ii, ii1)
def test_kdtree_box_upper_bounds(kdtree_type):
data = np.linspace(0, 2, 10).reshape(-1, 2)
data[:, 1] += 10
with pytest.raises(ValueError):
kdtree_type(data, leafsize=1, boxsize=1.0)
with pytest.raises(ValueError):
kdtree_type(data, leafsize=1, boxsize=(0.0, 2.0))
# skip a dimension.
kdtree_type(data, leafsize=1, boxsize=(2.0, 0.0))
def test_kdtree_box_lower_bounds(kdtree_type):
data = np.linspace(-1, 1, 10)
assert_raises(ValueError, kdtree_type, data, leafsize=1, boxsize=1.0)
def simulate_periodic_box(kdtree, data, k, boxsize, p):
dd = []
ii = []
x = np.arange(3 ** data.shape[1])
nn = np.array(np.unravel_index(x, [3] * data.shape[1])).T
nn = nn - 1.0
for n in nn:
image = data + n * 1.0 * boxsize
dd2, ii2 = kdtree.query(image, k, p=p)
dd2 = dd2.reshape(-1, k)
ii2 = ii2.reshape(-1, k)
dd.append(dd2)
ii.append(ii2)
dd = np.concatenate(dd, axis=-1)
ii = np.concatenate(ii, axis=-1)
result = np.empty([len(data), len(nn) * k], dtype=[
('ii', 'i8'),
('dd', 'f8')])
result['ii'][:] = ii
result['dd'][:] = dd
result.sort(order='dd')
return result['dd'][:, :k], result['ii'][:, :k]
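`simulate_periodic_box` above emulates periodic boundaries by querying all `3**m` shifted images of the data and keeping the closest results. A minimal standalone sketch (not part of the original suite; the sample points are made up for illustration) of the same image-enumeration idea, applied to a single minimum-image distance:

```python
import itertools
import numpy as np

def periodic_min_distance(a, b, boxsize):
    # Enumerate the 3**m shifted images of b and take the closest one,
    # mirroring the image enumeration used in simulate_periodic_box.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    best = np.inf
    for shift in itertools.product([-1.0, 0.0, 1.0], repeat=len(a)):
        image = b + np.asarray(shift) * boxsize
        best = min(best, float(np.linalg.norm(a - image)))
    return best

# Two points near opposite edges of a unit box are close periodically:
# plain Euclidean distance is 0.9, minimum-image distance is 0.1.
d = periodic_min_distance([0.05, 0.5], [0.95, 0.5], boxsize=1.0)
```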
@pytest.mark.skipif(python_implementation() == 'PyPy',
reason="Fails on PyPy CI runs. See #9507")
def test_ckdtree_memuse():
# unit test adaptation of gh-5630
# NOTE: this will fail when run via valgrind,
# because rss is no longer a reliable memory usage indicator.
try:
import resource
except ImportError:
# resource is not available on Windows
return
# Make some data
dx, dy = 0.05, 0.05
y, x = np.mgrid[slice(1, 5 + dy, dy),
slice(1, 5 + dx, dx)]
z = np.sin(x)**10 + np.cos(10 + y*x) * np.cos(x)
z_copy = np.empty_like(z)
z_copy[:] = z
# Place FILLVAL in z_copy at random number of random locations
FILLVAL = 99.
mask = np.random.randint(0, z.size, np.random.randint(50) + 5)
z_copy.flat[mask] = FILLVAL
    igood = np.vstack(np.nonzero(z_copy != FILLVAL)).T
    ibad = np.vstack(np.nonzero(z_copy == FILLVAL)).T
mem_use = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
# burn-in
for i in range(10):
tree = cKDTree(igood)
# count memleaks while constructing and querying cKDTree
num_leaks = 0
for i in range(100):
mem_use = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
tree = cKDTree(igood)
dist, iquery = tree.query(ibad, k=4, p=2)
new_mem_use = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
if new_mem_use > mem_use:
num_leaks += 1
# ideally zero leaks, but errors might accidentally happen
# outside cKDTree
assert_(num_leaks < 10)
def test_kdtree_weights(kdtree_type):
data = np.linspace(0, 1, 4).reshape(-1, 1)
tree1 = kdtree_type(data, leafsize=1)
weights = np.ones(len(data), dtype='f4')
nw = tree1._build_weights(weights)
assert_array_equal(nw, [4, 2, 1, 1, 2, 1, 1])
assert_raises(ValueError, tree1._build_weights, weights[:-1])
for i in range(10):
# since weights are uniform, these shall agree:
c1 = tree1.count_neighbors(tree1, np.linspace(0, 10, i))
c2 = tree1.count_neighbors(tree1, np.linspace(0, 10, i),
weights=(weights, weights))
c3 = tree1.count_neighbors(tree1, np.linspace(0, 10, i),
weights=(weights, None))
c4 = tree1.count_neighbors(tree1, np.linspace(0, 10, i),
weights=(None, weights))
tree1.count_neighbors(tree1, np.linspace(0, 10, i),
weights=weights)
assert_array_equal(c1, c2)
assert_array_equal(c1, c3)
assert_array_equal(c1, c4)
for i in range(len(data)):
# this tests removal of one data point by setting weight to 0
w1 = weights.copy()
w1[i] = 0
data2 = data[w1 != 0]
tree2 = kdtree_type(data2)
c1 = tree1.count_neighbors(tree1, np.linspace(0, 10, 100),
weights=(w1, w1))
# "c2 is correct"
c2 = tree2.count_neighbors(tree2, np.linspace(0, 10, 100))
assert_array_equal(c1, c2)
        # check that passing a single weights array for two
        # different trees raises
assert_raises(ValueError, tree1.count_neighbors,
tree2, np.linspace(0, 10, 100), weights=w1)
def test_kdtree_count_neighbors_multiple_r(kdtree_type):
n = 2000
m = 2
np.random.seed(1234)
data = np.random.normal(size=(n, m))
kdtree = kdtree_type(data, leafsize=1)
r0 = [0, 0.01, 0.01, 0.02, 0.05]
i0 = np.arange(len(r0))
n0 = kdtree.count_neighbors(kdtree, r0)
nnc = kdtree.count_neighbors(kdtree, r0, cumulative=False)
assert_equal(n0, nnc.cumsum())
for i, r in zip(itertools.permutations(i0),
itertools.permutations(r0)):
# permute n0 by i and it shall agree
n = kdtree.count_neighbors(kdtree, r)
assert_array_equal(n, n0[list(i)])
def test_len0_arrays(kdtree_type):
# make sure len-0 arrays are handled correctly
# in range queries (gh-5639)
np.random.seed(1234)
X = np.random.rand(10, 2)
Y = np.random.rand(10, 2)
tree = kdtree_type(X)
# query_ball_point (single)
d, i = tree.query([.5, .5], k=1)
z = tree.query_ball_point([.5, .5], 0.1*d)
assert_array_equal(z, [])
# query_ball_point (multiple)
d, i = tree.query(Y, k=1)
mind = d.min()
z = tree.query_ball_point(Y, 0.1*mind)
y = np.empty(shape=(10, ), dtype=object)
y.fill([])
assert_array_equal(y, z)
# query_ball_tree
other = kdtree_type(Y)
y = tree.query_ball_tree(other, 0.1*mind)
assert_array_equal(10*[[]], y)
# count_neighbors
y = tree.count_neighbors(other, 0.1*mind)
assert_(y == 0)
# sparse_distance_matrix
y = tree.sparse_distance_matrix(other, 0.1*mind, output_type='dok_matrix')
assert_array_equal(y == np.zeros((10, 10)), True)
y = tree.sparse_distance_matrix(other, 0.1*mind, output_type='coo_matrix')
assert_array_equal(y == np.zeros((10, 10)), True)
y = tree.sparse_distance_matrix(other, 0.1*mind, output_type='dict')
assert_equal(y, {})
y = tree.sparse_distance_matrix(other, 0.1*mind, output_type='ndarray')
_dtype = [('i', np.intp), ('j', np.intp), ('v', np.float64)]
res_dtype = np.dtype(_dtype, align=True)
z = np.empty(shape=(0, ), dtype=res_dtype)
assert_array_equal(y, z)
# query_pairs
d, i = tree.query(X, k=2)
mind = d[:, -1].min()
y = tree.query_pairs(0.1*mind, output_type='set')
assert_equal(y, set())
y = tree.query_pairs(0.1*mind, output_type='ndarray')
z = np.empty(shape=(0, 2), dtype=np.intp)
assert_array_equal(y, z)
def test_kdtree_duplicated_inputs(kdtree_type):
# check kdtree with duplicated inputs
n = 1024
for m in range(1, 8):
data = np.ones((n, m))
data[n//2:] = 2
for balanced, compact in itertools.product((False, True), repeat=2):
kdtree = kdtree_type(data, balanced_tree=balanced,
compact_nodes=compact, leafsize=1)
assert kdtree.size == 3
tree = (kdtree.tree if kdtree_type is cKDTree else
kdtree.tree._node)
assert_equal(
np.sort(tree.lesser.indices),
np.arange(0, n // 2))
assert_equal(
np.sort(tree.greater.indices),
np.arange(n // 2, n))
def test_kdtree_noncumulative_nondecreasing(kdtree_type):
    # count_neighbors with cumulative=False requires the radii to be
    # non-decreasing; a decreasing sequence shall raise
kdtree = kdtree_type([[0]], leafsize=1)
assert_raises(ValueError, kdtree.count_neighbors,
kdtree, [0.1, 0], cumulative=False)
def test_short_knn(kdtree_type):
# The test case is based on github: #6425 by @SteveDoyle2
xyz = np.array([
[0., 0., 0.],
[1.01, 0., 0.],
[0., 1., 0.],
[0., 1.01, 0.],
[1., 0., 0.],
[1., 1., 0.]],
dtype='float64')
ckdt = kdtree_type(xyz)
deq, ieq = ckdt.query(xyz, k=4, distance_upper_bound=0.2)
assert_array_almost_equal(deq,
[[0., np.inf, np.inf, np.inf],
[0., 0.01, np.inf, np.inf],
[0., 0.01, np.inf, np.inf],
[0., 0.01, np.inf, np.inf],
[0., 0.01, np.inf, np.inf],
[0., np.inf, np.inf, np.inf]])
def test_query_ball_point_vector_r(kdtree_type):
np.random.seed(1234)
data = np.random.normal(size=(100, 3))
query = np.random.normal(size=(100, 3))
tree = kdtree_type(data)
d = np.random.uniform(0, 0.3, size=len(query))
rvector = tree.query_ball_point(query, d)
rscalar = [tree.query_ball_point(qi, di) for qi, di in zip(query, d)]
for a, b in zip(rvector, rscalar):
assert_array_equal(sorted(a), sorted(b))
def test_query_ball_point_length(kdtree_type):
np.random.seed(1234)
data = np.random.normal(size=(100, 3))
query = np.random.normal(size=(100, 3))
tree = kdtree_type(data)
d = 0.3
length = tree.query_ball_point(query, d, return_length=True)
length2 = [len(ind) for ind in tree.query_ball_point(query, d, return_length=False)]
length3 = [len(tree.query_ball_point(qi, d)) for qi in query]
length4 = [tree.query_ball_point(qi, d, return_length=True) for qi in query]
assert_array_equal(length, length2)
assert_array_equal(length, length3)
assert_array_equal(length, length4)
def test_discontiguous(kdtree_type):
np.random.seed(1234)
data = np.random.normal(size=(100, 3))
d_contiguous = np.arange(100) * 0.04
d_discontiguous = np.ascontiguousarray(
np.arange(100)[::-1] * 0.04)[::-1]
query_contiguous = np.random.normal(size=(100, 3))
query_discontiguous = np.ascontiguousarray(query_contiguous.T).T
assert query_discontiguous.strides[-1] != query_contiguous.strides[-1]
assert d_discontiguous.strides[-1] != d_contiguous.strides[-1]
tree = kdtree_type(data)
length1 = tree.query_ball_point(query_contiguous,
d_contiguous, return_length=True)
length2 = tree.query_ball_point(query_discontiguous,
d_discontiguous, return_length=True)
assert_array_equal(length1, length2)
d1, i1 = tree.query(query_contiguous, 1)
d2, i2 = tree.query(query_discontiguous, 1)
assert_array_equal(d1, d2)
assert_array_equal(i1, i2)
@pytest.mark.parametrize("balanced_tree, compact_nodes",
[(True, False),
(True, True),
(False, False),
(False, True)])
def test_kdtree_empty_input(kdtree_type, balanced_tree, compact_nodes):
# https://github.com/scipy/scipy/issues/5040
np.random.seed(1234)
empty_v3 = np.empty(shape=(0, 3))
query_v3 = np.ones(shape=(1, 3))
query_v2 = np.ones(shape=(2, 3))
tree = kdtree_type(empty_v3, balanced_tree=balanced_tree,
compact_nodes=compact_nodes)
length = tree.query_ball_point(query_v3, 0.3, return_length=True)
assert length == 0
dd, ii = tree.query(query_v2, 2)
assert ii.shape == (2, 2)
assert dd.shape == (2, 2)
assert np.isinf(dd).all()
N = tree.count_neighbors(tree, [0, 1])
assert_array_equal(N, [0, 0])
M = tree.sparse_distance_matrix(tree, 0.3)
assert M.shape == (0, 0)
@KDTreeTest
class _Test_sorted_query_ball_point:
def setup_method(self):
np.random.seed(1234)
self.x = np.random.randn(100, 1)
self.ckdt = self.kdtree_type(self.x)
def test_return_sorted_True(self):
idxs_list = self.ckdt.query_ball_point(self.x, 1., return_sorted=True)
for idxs in idxs_list:
assert_array_equal(idxs, sorted(idxs))
for xi in self.x:
idxs = self.ckdt.query_ball_point(xi, 1., return_sorted=True)
assert_array_equal(idxs, sorted(idxs))
def test_return_sorted_None(self):
"""Previous behavior was to sort the returned indices if there were
multiple points per query but not sort them if there was a single point
per query."""
idxs_list = self.ckdt.query_ball_point(self.x, 1.)
for idxs in idxs_list:
assert_array_equal(idxs, sorted(idxs))
idxs_list_single = [self.ckdt.query_ball_point(xi, 1.) for xi in self.x]
idxs_list_False = self.ckdt.query_ball_point(self.x, 1., return_sorted=False)
for idxs0, idxs1 in zip(idxs_list_False, idxs_list_single):
assert_array_equal(idxs0, idxs1)
def test_kdtree_complex_data():
# Test that KDTree rejects complex input points (gh-9108)
points = np.random.rand(10, 2).view(complex)
with pytest.raises(TypeError, match="complex data"):
t = KDTree(points)
t = KDTree(points.real)
with pytest.raises(TypeError, match="complex data"):
t.query(points)
with pytest.raises(TypeError, match="complex data"):
t.query_ball_point(points, r=1)
def test_kdtree_tree_access():
# Test KDTree.tree can be used to traverse the KDTree
np.random.seed(1234)
points = np.random.rand(100, 4)
t = KDTree(points)
root = t.tree
assert isinstance(root, KDTree.innernode)
assert root.children == points.shape[0]
# Visit the tree and assert some basic properties for each node
nodes = [root]
while nodes:
n = nodes.pop(-1)
if isinstance(n, KDTree.leafnode):
assert isinstance(n.children, int)
assert n.children == len(n.idx)
assert_array_equal(points[n.idx], n._node.data_points)
else:
assert isinstance(n, KDTree.innernode)
assert isinstance(n.split_dim, int)
assert 0 <= n.split_dim < t.m
assert isinstance(n.split, float)
assert isinstance(n.children, int)
assert n.children == n.less.children + n.greater.children
nodes.append(n.greater)
nodes.append(n.less)
def test_kdtree_attributes():
# Test KDTree's attributes are available
np.random.seed(1234)
points = np.random.rand(100, 4)
t = KDTree(points)
assert isinstance(t.m, int)
assert t.n == points.shape[0]
assert isinstance(t.n, int)
assert t.m == points.shape[1]
assert isinstance(t.leafsize, int)
assert t.leafsize == 10
assert_array_equal(t.maxes, np.amax(points, axis=0))
assert_array_equal(t.mins, np.amin(points, axis=0))
assert t.data is points
@pytest.mark.parametrize("kdtree_class", [KDTree, cKDTree])
def test_kdtree_count_neighbors_weighted(kdtree_class):
np.random.seed(1234)
r = np.arange(0.05, 1, 0.05)
A = np.random.random(21).reshape((7,3))
B = np.random.random(45).reshape((15,3))
wA = np.random.random(7)
wB = np.random.random(15)
kdA = kdtree_class(A)
kdB = kdtree_class(B)
nAB = kdA.count_neighbors(kdB, r, cumulative=False, weights=(wA,wB))
# Compare against brute-force
weights = wA[None, :] * wB[:, None]
dist = np.linalg.norm(A[None, :, :] - B[:, None, :], axis=-1)
expect = [np.sum(weights[(prev_radius < dist) & (dist <= radius)])
for prev_radius, radius in zip(itertools.chain([0], r[:-1]), r)]
assert_allclose(nAB, expect)
def test_kdtree_nan():
vals = [1, 5, -10, 7, -4, -16, -6, 6, 3, -11]
n = len(vals)
data = np.concatenate([vals, np.full(n, np.nan)])[:, None]
with pytest.raises(ValueError, match="must be finite"):
KDTree(data)
def test_nonfinite_inputs_gh_18223():
rng = np.random.default_rng(12345)
coords = rng.uniform(size=(100, 3), low=0.0, high=0.1)
t = KDTree(coords, balanced_tree=False, compact_nodes=False)
bad_coord = [np.nan for _ in range(3)]
with pytest.raises(ValueError, match="must be finite"):
t.query(bad_coord)
with pytest.raises(ValueError, match="must be finite"):
t.query_ball_point(bad_coord, 1)
coords[0, :] = np.nan
with pytest.raises(ValueError, match="must be finite"):
KDTree(coords, balanced_tree=True, compact_nodes=False)
with pytest.raises(ValueError, match="must be finite"):
KDTree(coords, balanced_tree=False, compact_nodes=True)
with pytest.raises(ValueError, match="must be finite"):
KDTree(coords, balanced_tree=True, compact_nodes=True)
with pytest.raises(ValueError, match="must be finite"):
KDTree(coords, balanced_tree=False, compact_nodes=False)
@pytest.mark.parametrize("incantation", [cKDTree, KDTree])
def test_gh_18800(incantation):
# our prohibition on non-finite values
# in kd-tree workflows means we need
# coercion to NumPy arrays enforced
class ArrLike(np.ndarray):
def __new__(cls, input_array):
obj = np.asarray(input_array).view(cls)
# we override all() to mimic the problem
# pandas DataFrames encountered in gh-18800
obj.all = None
return obj
def __array_finalize__(self, obj):
if obj is None:
return
self.all = getattr(obj, 'all', None)
points = [
[66.22, 32.54],
[22.52, 22.39],
[31.01, 81.21],
]
arr = np.array(points)
arr_like = ArrLike(arr)
tree = incantation(points, 10)
tree.query(arr_like, 1)
tree.query_ball_point(arr_like, 200)
# scipy-main/scipy/spatial/tests/__init__.py (empty)

# scipy-main/scipy/spatial/tests/test_slerp.py
import numpy as np
from numpy.testing import assert_allclose
import pytest
from scipy.spatial import geometric_slerp
def _generate_spherical_points(ndim=3, n_pts=2):
# generate uniform points on sphere
# see: https://stackoverflow.com/a/23785326
# tentatively extended to arbitrary dims
# for 0-sphere it will always produce antipodes
np.random.seed(123)
points = np.random.normal(size=(n_pts, ndim))
points /= np.linalg.norm(points, axis=1)[:, np.newaxis]
return points[0], points[1]
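The generator above relies on a standard trick: normalizing Gaussian samples yields points uniformly distributed on the sphere. A minimal sketch (not part of the original suite) checking the unit-norm property that the slerp tests depend on:

```python
import numpy as np

rng = np.random.default_rng(123)
samples = rng.normal(size=(1000, 3))
# project each Gaussian sample onto the unit sphere
samples /= np.linalg.norm(samples, axis=1)[:, np.newaxis]
norms = np.linalg.norm(samples, axis=1)
# every normalized sample lies on the unit 2-sphere
max_dev = float(np.abs(norms - 1.0).max())
```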
class TestGeometricSlerp:
# Test various properties of the geometric slerp code
@pytest.mark.parametrize("n_dims", [2, 3, 5, 7, 9])
@pytest.mark.parametrize("n_pts", [0, 3, 17])
def test_shape_property(self, n_dims, n_pts):
# geometric_slerp output shape should match
# input dimensionality & requested number
# of interpolation points
start, end = _generate_spherical_points(n_dims, 2)
actual = geometric_slerp(start=start,
end=end,
t=np.linspace(0, 1, n_pts))
assert actual.shape == (n_pts, n_dims)
@pytest.mark.parametrize("n_dims", [2, 3, 5, 7, 9])
@pytest.mark.parametrize("n_pts", [3, 17])
def test_include_ends(self, n_dims, n_pts):
# geometric_slerp should return a data structure
# that includes the start and end coordinates
# when t includes 0 and 1 ends
# this is convenient for plotting surfaces represented
# by interpolations for example
        # the point generator always produces antipodes on the
        # 0-sphere, so the smallest dimensionality tested here is 2
start, end = _generate_spherical_points(n_dims, 2)
actual = geometric_slerp(start=start,
end=end,
t=np.linspace(0, 1, n_pts))
assert_allclose(actual[0], start)
assert_allclose(actual[-1], end)
@pytest.mark.parametrize("start, end", [
# both arrays are not flat
(np.zeros((1, 3)), np.ones((1, 3))),
# only start array is not flat
(np.zeros((1, 3)), np.ones(3)),
# only end array is not flat
(np.zeros(1), np.ones((3, 1))),
])
def test_input_shape_flat(self, start, end):
# geometric_slerp should handle input arrays that are
# not flat appropriately
with pytest.raises(ValueError, match='one-dimensional'):
geometric_slerp(start=start,
end=end,
t=np.linspace(0, 1, 10))
@pytest.mark.parametrize("start, end", [
# 7-D and 3-D ends
(np.zeros(7), np.ones(3)),
# 2-D and 1-D ends
(np.zeros(2), np.ones(1)),
# empty, "3D" will also get caught this way
(np.array([]), np.ones(3)),
])
def test_input_dim_mismatch(self, start, end):
# geometric_slerp must appropriately handle cases where
# an interpolation is attempted across two different
# dimensionalities
with pytest.raises(ValueError, match='dimensions'):
geometric_slerp(start=start,
end=end,
t=np.linspace(0, 1, 10))
@pytest.mark.parametrize("start, end", [
# both empty
(np.array([]), np.array([])),
])
def test_input_at_least1d(self, start, end):
# empty inputs to geometric_slerp must
# be handled appropriately when not detected
# by mismatch
with pytest.raises(ValueError, match='at least two-dim'):
geometric_slerp(start=start,
end=end,
t=np.linspace(0, 1, 10))
@pytest.mark.parametrize("start, end, expected", [
# North and South Poles are definitely antipodes
# but should be handled gracefully now
(np.array([0, 0, 1.0]), np.array([0, 0, -1.0]), "warning"),
# this case will issue a warning & be handled
# gracefully as well;
# North Pole was rotated very slightly
# using r = R.from_euler('x', 0.035, degrees=True)
# to achieve Euclidean distance offset from diameter by
# 9.328908379124812e-08, within the default tol
(np.array([0.00000000e+00,
-6.10865200e-04,
9.99999813e-01]), np.array([0, 0, -1.0]), "warning"),
# this case should succeed without warning because a
# sufficiently large
# rotation was applied to North Pole point to shift it
# to a Euclidean distance of 2.3036691931821451e-07
# from South Pole, which is larger than tol
(np.array([0.00000000e+00,
-9.59930941e-04,
9.99999539e-01]), np.array([0, 0, -1.0]), "success"),
])
def test_handle_antipodes(self, start, end, expected):
# antipodal points must be handled appropriately;
# there are an infinite number of possible geodesic
# interpolations between them in higher dims
if expected == "warning":
with pytest.warns(UserWarning, match='antipodes'):
res = geometric_slerp(start=start,
end=end,
t=np.linspace(0, 1, 10))
else:
res = geometric_slerp(start=start,
end=end,
t=np.linspace(0, 1, 10))
# antipodes or near-antipodes should still produce
# slerp paths on the surface of the sphere (but they
# may be ambiguous):
assert_allclose(np.linalg.norm(res, axis=1), 1.0)
@pytest.mark.parametrize("start, end, expected", [
# 2-D with n_pts=4 (two new interpolation points)
# this is an actual circle
(np.array([1, 0]),
np.array([0, 1]),
np.array([[1, 0],
[np.sqrt(3) / 2, 0.5], # 30 deg on unit circle
[0.5, np.sqrt(3) / 2], # 60 deg on unit circle
[0, 1]])),
# likewise for 3-D (add z = 0 plane)
# this is an ordinary sphere
(np.array([1, 0, 0]),
np.array([0, 1, 0]),
np.array([[1, 0, 0],
[np.sqrt(3) / 2, 0.5, 0],
[0.5, np.sqrt(3) / 2, 0],
[0, 1, 0]])),
# for 5-D, pad more columns with constants
# zeros are easiest--non-zero values on unit
# circle are more difficult to reason about
# at higher dims
(np.array([1, 0, 0, 0, 0]),
np.array([0, 1, 0, 0, 0]),
np.array([[1, 0, 0, 0, 0],
[np.sqrt(3) / 2, 0.5, 0, 0, 0],
[0.5, np.sqrt(3) / 2, 0, 0, 0],
[0, 1, 0, 0, 0]])),
])
def test_straightforward_examples(self, start, end, expected):
# some straightforward interpolation tests, sufficiently
# simple to use the unit circle to deduce expected values;
# for larger dimensions, pad with constants so that the
# data is N-D but simpler to reason about
actual = geometric_slerp(start=start,
end=end,
t=np.linspace(0, 1, 4))
assert_allclose(actual, expected, atol=1e-16)
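The expected values above follow from the classic slerp formula for unit vectors, sin((1-t)ω)/sin(ω)·p + sin(tω)/sin(ω)·q, where ω is the angle between the endpoints. A minimal hedged sketch (not part of the original suite; `slerp` here is a local helper, not the scipy API) reproducing the 30-degree point on the unit circle:

```python
import numpy as np

def slerp(p, q, t):
    # classic spherical linear interpolation for unit vectors
    omega = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) * p + np.sin(t * omega) * q) / so

# t = 1/3 along the quarter circle from (1, 0) to (0, 1) is the
# 30-degree point [sqrt(3)/2, 0.5], matching the expected table above
pt = slerp(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 1.0 / 3.0)
```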
@pytest.mark.parametrize("t", [
# both interval ends clearly violate limits
np.linspace(-20, 20, 300),
# only one interval end violating limit slightly
np.linspace(-0.0001, 0.0001, 17),
])
def test_t_values_limits(self, t):
# geometric_slerp() should appropriately handle
# interpolation parameters < 0 and > 1
with pytest.raises(ValueError, match='interpolation parameter'):
_ = geometric_slerp(start=np.array([1, 0]),
end=np.array([0, 1]),
t=t)
@pytest.mark.parametrize("start, end", [
(np.array([1]),
np.array([0])),
(np.array([0]),
np.array([1])),
(np.array([-17.7]),
np.array([165.9])),
])
def test_0_sphere_handling(self, start, end):
# it does not make sense to interpolate the set of
# two points that is the 0-sphere
with pytest.raises(ValueError, match='at least two-dim'):
_ = geometric_slerp(start=start,
end=end,
t=np.linspace(0, 1, 4))
@pytest.mark.parametrize("tol", [
# an integer currently raises
5,
# string raises
"7",
# list and arrays also raise
[5, 6, 7], np.array(9.0),
])
def test_tol_type(self, tol):
# geometric_slerp() should raise if tol is not
# a suitable float type
with pytest.raises(ValueError, match='must be a float'):
_ = geometric_slerp(start=np.array([1, 0]),
end=np.array([0, 1]),
t=np.linspace(0, 1, 5),
tol=tol)
@pytest.mark.parametrize("tol", [
-5e-6,
-7e-10,
])
def test_tol_sign(self, tol):
# geometric_slerp() currently handles negative
# tol values, as long as they are floats
_ = geometric_slerp(start=np.array([1, 0]),
end=np.array([0, 1]),
t=np.linspace(0, 1, 5),
tol=tol)
@pytest.mark.parametrize("start, end", [
# 1-sphere (circle) with one point at origin
# and the other on the circle
(np.array([1, 0]), np.array([0, 0])),
# 2-sphere (normal sphere) with both points
# just slightly off sphere by the same amount
# in different directions
(np.array([1 + 1e-6, 0, 0]),
np.array([0, 1 - 1e-6, 0])),
# same thing in 4-D
(np.array([1 + 1e-6, 0, 0, 0]),
np.array([0, 1 - 1e-6, 0, 0])),
])
def test_unit_sphere_enforcement(self, start, end):
# geometric_slerp() should raise on input that clearly
# cannot be on an n-sphere of radius 1
with pytest.raises(ValueError, match='unit n-sphere'):
geometric_slerp(start=start,
end=end,
t=np.linspace(0, 1, 5))
@pytest.mark.parametrize("start, end", [
# 1-sphere 45 degree case
(np.array([1, 0]),
np.array([np.sqrt(2) / 2.,
np.sqrt(2) / 2.])),
# 2-sphere 135 degree case
(np.array([1, 0]),
np.array([-np.sqrt(2) / 2.,
np.sqrt(2) / 2.])),
])
@pytest.mark.parametrize("t_func", [
np.linspace, np.logspace])
def test_order_handling(self, start, end, t_func):
# geometric_slerp() should handle scenarios with
# ascending and descending t value arrays gracefully;
# results should simply be reversed
# for scrambled / unsorted parameters, the same values
# should be returned, just in scrambled order
num_t_vals = 20
np.random.seed(789)
forward_t_vals = t_func(0, 10, num_t_vals)
# normalize to max of 1
forward_t_vals /= forward_t_vals.max()
reverse_t_vals = np.flipud(forward_t_vals)
shuffled_indices = np.arange(num_t_vals)
np.random.shuffle(shuffled_indices)
scramble_t_vals = forward_t_vals.copy()[shuffled_indices]
forward_results = geometric_slerp(start=start,
end=end,
t=forward_t_vals)
reverse_results = geometric_slerp(start=start,
end=end,
t=reverse_t_vals)
scrambled_results = geometric_slerp(start=start,
end=end,
t=scramble_t_vals)
# check fidelity to input order
assert_allclose(forward_results, np.flipud(reverse_results))
assert_allclose(forward_results[shuffled_indices],
scrambled_results)
@pytest.mark.parametrize("t", [
# string:
"15, 5, 7",
# complex numbers currently produce a warning
# but not sure we need to worry about it too much:
# [3 + 1j, 5 + 2j],
])
def test_t_values_conversion(self, t):
with pytest.raises(ValueError):
_ = geometric_slerp(start=np.array([1]),
end=np.array([0]),
t=t)
def test_accept_arraylike(self):
# array-like support requested by reviewer
# in gh-10380
actual = geometric_slerp([1, 0], [0, 1], [0, 1/3, 0.5, 2/3, 1])
# expected values are based on visual inspection
# of the unit circle for the progressions along
# the circumference provided in t
expected = np.array([[1, 0],
[np.sqrt(3) / 2, 0.5],
[np.sqrt(2) / 2,
np.sqrt(2) / 2],
[0.5, np.sqrt(3) / 2],
[0, 1]], dtype=np.float64)
# Tyler's original Cython implementation of geometric_slerp
# can pass at atol=0 here, but on balance we will accept
# 1e-16 for an implementation that avoids Cython and
# makes up accuracy ground elsewhere
assert_allclose(actual, expected, atol=1e-16)
def test_scalar_t(self):
# when t is a scalar, return value is a single
# interpolated point of the appropriate dimensionality
# requested by reviewer in gh-10380
actual = geometric_slerp([1, 0], [0, 1], 0.5)
expected = np.array([np.sqrt(2) / 2,
np.sqrt(2) / 2], dtype=np.float64)
assert actual.shape == (2,)
assert_allclose(actual, expected)
@pytest.mark.parametrize('start', [
np.array([1, 0, 0]),
np.array([0, 1]),
])
@pytest.mark.parametrize('t', [
np.array(1),
np.array([1]),
np.array([[1]]),
np.array([[[1]]]),
np.array([]),
np.linspace(0, 1, 5),
])
def test_degenerate_input(self, start, t):
if np.asarray(t).ndim > 1:
with pytest.raises(ValueError):
geometric_slerp(start=start, end=start, t=t)
else:
shape = (t.size,) + start.shape
expected = np.full(shape, start)
actual = geometric_slerp(start=start, end=start, t=t)
assert_allclose(actual, expected)
# Check that degenerate and non-degenerate
# inputs yield the same size
non_degenerate = geometric_slerp(start=start, end=start[::-1], t=t)
assert actual.size == non_degenerate.size
@pytest.mark.parametrize('k', np.logspace(-10, -1, 10))
def test_numerical_stability_pi(self, k):
# geometric_slerp should have excellent numerical
# stability for angles approaching pi between
# the start and end points
angle = np.pi - k
ts = np.linspace(0, 1, 100)
P = np.array([1, 0, 0, 0])
Q = np.array([np.cos(angle), np.sin(angle), 0, 0])
# the test should only be enforced for cases where
# geometric_slerp determines that the input is actually
# on the unit sphere
with np.testing.suppress_warnings() as sup:
sup.filter(UserWarning)
result = geometric_slerp(P, Q, ts, 1e-18)
norms = np.linalg.norm(result, axis=1)
error = np.max(np.abs(norms - 1))
assert error < 4e-15
@pytest.mark.parametrize('t', [
[[0, 0.5]],
[[[[[[[[[0, 0.5]]]]]]]]],
])
def test_interpolation_param_ndim(self, t):
# regression test for gh-14465
arr1 = np.array([0, 1])
arr2 = np.array([1, 0])
with pytest.raises(ValueError):
geometric_slerp(start=arr1,
end=arr2,
t=t)
with pytest.raises(ValueError):
geometric_slerp(start=arr1,
end=arr1,
t=t)
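# The tests above exercise scipy.spatial.geometric_slerp; the underlying
# great-circle interpolation can be sketched in a few lines of NumPy.
# This standalone helper (`slerp` is an illustrative name, not scipy API)
# reproduces the expected values used in test_accept_arraylike above.
import numpy as np

def slerp(p, q, t):
    # angle between the two unit vectors, clipped for numerical safety
    omega = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    t = np.asarray(t, dtype=float)[..., None]
    return (np.sin((1 - t) * omega) * p + np.sin(t * omega) * q) / np.sin(omega)

_pts = slerp(np.array([1.0, 0.0]), np.array([0.0, 1.0]), [0, 1/3, 0.5, 2/3, 1])
assert np.allclose(_pts[2], [np.sqrt(2) / 2, np.sqrt(2) / 2])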
# File: scipy-main/scipy/spatial/tests/test_spherical_voronoi.py
import numpy as np
import itertools
from numpy.testing import (assert_equal,
assert_almost_equal,
assert_array_equal,
assert_array_almost_equal)
import pytest
from pytest import raises as assert_raises
from scipy.spatial import SphericalVoronoi, distance
from scipy.optimize import linear_sum_assignment
from scipy.constants import golden as phi
from scipy.special import gamma
TOL = 1E-10
def _generate_tetrahedron():
return np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]])
def _generate_cube():
return np.array(list(itertools.product([-1, 1.], repeat=3)))
def _generate_octahedron():
return np.array([[-1, 0, 0], [+1, 0, 0], [0, -1, 0],
[0, +1, 0], [0, 0, -1], [0, 0, +1]])
def _generate_dodecahedron():
x1 = _generate_cube()
x2 = np.array([[0, -phi, -1 / phi],
[0, -phi, +1 / phi],
[0, +phi, -1 / phi],
[0, +phi, +1 / phi]])
x3 = np.array([[-1 / phi, 0, -phi],
[+1 / phi, 0, -phi],
[-1 / phi, 0, +phi],
[+1 / phi, 0, +phi]])
x4 = np.array([[-phi, -1 / phi, 0],
[-phi, +1 / phi, 0],
[+phi, -1 / phi, 0],
[+phi, +1 / phi, 0]])
return np.concatenate((x1, x2, x3, x4))
def _generate_icosahedron():
x = np.array([[0, -1, -phi],
[0, -1, +phi],
[0, +1, -phi],
[0, +1, +phi]])
return np.concatenate([np.roll(x, i, axis=1) for i in range(3)])
def _generate_polytope(name):
polygons = ["triangle", "square", "pentagon", "hexagon", "heptagon",
"octagon", "nonagon", "decagon", "undecagon", "dodecagon"]
polyhedra = ["tetrahedron", "cube", "octahedron", "dodecahedron",
"icosahedron"]
if name not in polygons and name not in polyhedra:
raise ValueError("unrecognized polytope")
if name in polygons:
n = polygons.index(name) + 3
thetas = np.linspace(0, 2 * np.pi, n, endpoint=False)
p = np.vstack([np.cos(thetas), np.sin(thetas)]).T
elif name == "tetrahedron":
p = _generate_tetrahedron()
elif name == "cube":
p = _generate_cube()
elif name == "octahedron":
p = _generate_octahedron()
elif name == "dodecahedron":
p = _generate_dodecahedron()
elif name == "icosahedron":
p = _generate_icosahedron()
return p / np.linalg.norm(p, axis=1, keepdims=True)
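# A quick self-contained check of the golden-ratio construction above:
# cyclic rolls of the (0, +/-1, +/-phi) block yield the 12 icosahedron
# vertices, all of which normalize onto the unit sphere.
import numpy as np

_phi = (1 + np.sqrt(5)) / 2
_x = np.array([[0, -1, -_phi], [0, -1, +_phi], [0, +1, -_phi], [0, +1, +_phi]])
_ico = np.concatenate([np.roll(_x, i, axis=1) for i in range(3)])
_ico = _ico / np.linalg.norm(_ico, axis=1, keepdims=True)
assert _ico.shape == (12, 3)
assert np.allclose(np.linalg.norm(_ico, axis=1), 1.0)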
def _hypersphere_area(dim, radius):
# https://en.wikipedia.org/wiki/N-sphere#Closed_forms
return 2 * np.pi**(dim / 2) / gamma(dim / 2) * radius**(dim - 1)
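# Sanity check of the closed form above (helper name `_area` is purely
# illustrative): dim=2 gives a circle's circumference 2*pi*r, dim=3 gives
# a sphere's surface 4*pi*r**2, and dim=4 gives the 3-sphere's 2*pi**2*r**3.
import math

def _area(dim, radius):
    return 2 * math.pi**(dim / 2) / math.gamma(dim / 2) * radius**(dim - 1)

assert abs(_area(2, 1.0) - 2 * math.pi) < 1e-12
assert abs(_area(3, 2.0) - 16 * math.pi) < 1e-12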
def _sample_sphere(n, dim, seed=None):
# Sample points uniformly at random from the hypersphere
rng = np.random.RandomState(seed=seed)
points = rng.randn(n, dim)
points /= np.linalg.norm(points, axis=1, keepdims=True)
return points
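# The normalized-Gaussian trick above is the standard way to sample
# uniformly from a hypersphere: a standard normal vector has a
# rotationally symmetric density, so its direction is uniform.
import numpy as np

_rng = np.random.default_rng(1)
_unit = _rng.standard_normal((1000, 4))
_unit /= np.linalg.norm(_unit, axis=1, keepdims=True)
assert np.allclose(np.linalg.norm(_unit, axis=1), 1.0)
# uniform directions average out: the sample mean should be near zero
assert np.linalg.norm(_unit.mean(axis=0)) < 0.1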
class TestSphericalVoronoi:
def setup_method(self):
self.points = np.array([
[-0.78928481, -0.16341094, 0.59188373],
[-0.66839141, 0.73309634, 0.12578818],
[0.32535778, -0.92476944, -0.19734181],
[-0.90177102, -0.03785291, -0.43055335],
[0.71781344, 0.68428936, 0.12842096],
[-0.96064876, 0.23492353, -0.14820556],
[0.73181537, -0.22025898, -0.6449281],
[0.79979205, 0.54555747, 0.25039913]]
)
def test_constructor(self):
center = np.array([1, 2, 3])
radius = 2
s1 = SphericalVoronoi(self.points)
# user input checks in SphericalVoronoi now require
# the radius / center to match the generators so adjust
# accordingly here
s2 = SphericalVoronoi(self.points * radius, radius)
s3 = SphericalVoronoi(self.points + center, center=center)
s4 = SphericalVoronoi(self.points * radius + center, radius, center)
assert_array_equal(s1.center, np.array([0, 0, 0]))
assert_equal(s1.radius, 1)
assert_array_equal(s2.center, np.array([0, 0, 0]))
assert_equal(s2.radius, 2)
assert_array_equal(s3.center, center)
assert_equal(s3.radius, 1)
assert_array_equal(s4.center, center)
assert_equal(s4.radius, radius)
# Test a non-sequence/-ndarray based array-like
s5 = SphericalVoronoi(memoryview(self.points)) # type: ignore[arg-type]
assert_array_equal(s5.center, np.array([0, 0, 0]))
assert_equal(s5.radius, 1)
def test_vertices_regions_translation_invariance(self):
sv_origin = SphericalVoronoi(self.points)
center = np.array([1, 1, 1])
sv_translated = SphericalVoronoi(self.points + center, center=center)
assert_equal(sv_origin.regions, sv_translated.regions)
assert_array_almost_equal(sv_origin.vertices + center,
sv_translated.vertices)
def test_vertices_regions_scaling_invariance(self):
sv_unit = SphericalVoronoi(self.points)
sv_scaled = SphericalVoronoi(self.points * 2, 2)
assert_equal(sv_unit.regions, sv_scaled.regions)
assert_array_almost_equal(sv_unit.vertices * 2,
sv_scaled.vertices)
def test_old_radius_api_error(self):
with pytest.raises(ValueError, match='`radius` is `None`. *'):
SphericalVoronoi(self.points, radius=None)
def test_sort_vertices_of_regions(self):
sv = SphericalVoronoi(self.points)
unsorted_regions = sv.regions
sv.sort_vertices_of_regions()
assert_equal(sorted(sv.regions), sorted(unsorted_regions))
def test_sort_vertices_of_regions_flattened(self):
expected = sorted([[0, 6, 5, 2, 3], [2, 3, 10, 11, 8, 7], [0, 6, 4, 1],
[4, 8, 7, 5, 6], [9, 11, 10], [2, 7, 5],
[1, 4, 8, 11, 9], [0, 3, 10, 9, 1]])
expected = list(itertools.chain(*sorted(expected))) # type: ignore
sv = SphericalVoronoi(self.points)
sv.sort_vertices_of_regions()
actual = list(itertools.chain(*sorted(sv.regions)))
assert_array_equal(actual, expected)
def test_sort_vertices_of_regions_dimensionality(self):
points = np.array([[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[0.5, 0.5, 0.5, 0.5]])
with pytest.raises(TypeError, match="three-dimensional"):
sv = SphericalVoronoi(points)
sv.sort_vertices_of_regions()
def test_num_vertices(self):
# for any n >= 3, a spherical Voronoi diagram has 2n - 4
# vertices; this is a direct consequence of Euler's formula
# as explained by Dinis and Mamede (2010) Proceedings of the
# 2010 International Symposium on Voronoi Diagrams in Science
# and Engineering
sv = SphericalVoronoi(self.points)
expected = self.points.shape[0] * 2 - 4
actual = sv.vertices.shape[0]
assert_equal(actual, expected)
def test_voronoi_circles(self):
sv = SphericalVoronoi(self.points)
for vertex in sv.vertices:
distances = distance.cdist(sv.points, np.array([vertex]))
closest = np.array(sorted(distances)[0:3])
assert_almost_equal(closest[0], closest[1], 7, str(vertex))
assert_almost_equal(closest[0], closest[2], 7, str(vertex))
def test_duplicate_point_handling(self):
# an exception should be raised for degenerate generators
# related to Issue# 7046
self.degenerate = np.concatenate((self.points, self.points))
with assert_raises(ValueError):
SphericalVoronoi(self.degenerate)
def test_incorrect_radius_handling(self):
# an exception should be raised if the radius provided
# cannot possibly match the input generators
with assert_raises(ValueError):
SphericalVoronoi(self.points, radius=0.98)
def test_incorrect_center_handling(self):
# an exception should be raised if the center provided
# cannot possibly match the input generators
with assert_raises(ValueError):
SphericalVoronoi(self.points, center=[0.1, 0, 0])
@pytest.mark.parametrize("dim", range(2, 6))
@pytest.mark.parametrize("shift", [False, True])
def test_single_hemisphere_handling(self, dim, shift):
n = 10
points = _sample_sphere(n, dim, seed=0)
points[:, 0] = np.abs(points[:, 0])
center = (np.arange(dim) + 1) * shift
sv = SphericalVoronoi(points + center, center=center)
dots = np.einsum('ij,ij->i', sv.vertices - center,
sv.points[sv._simplices[:, 0]] - center)
circumradii = np.arccos(np.clip(dots, -1, 1))
assert np.max(circumradii) > np.pi / 2
@pytest.mark.parametrize("n", [1, 2, 10])
@pytest.mark.parametrize("dim", range(2, 6))
@pytest.mark.parametrize("shift", [False, True])
def test_rank_deficient(self, n, dim, shift):
center = (np.arange(dim) + 1) * shift
points = _sample_sphere(n, dim - 1, seed=0)
points = np.hstack([points, np.zeros((n, 1))])
with pytest.raises(ValueError, match="Rank of input points"):
SphericalVoronoi(points + center, center=center)
@pytest.mark.parametrize("dim", range(2, 6))
def test_higher_dimensions(self, dim):
n = 100
points = _sample_sphere(n, dim, seed=0)
sv = SphericalVoronoi(points)
assert sv.vertices.shape[1] == dim
assert len(sv.regions) == n
# verify Euler characteristic
cell_counts = []
simplices = np.sort(sv._simplices)
for i in range(1, dim + 1):
cells = []
for indices in itertools.combinations(range(dim), i):
cells.append(simplices[:, list(indices)])
cells = np.unique(np.concatenate(cells), axis=0)
cell_counts.append(len(cells))
expected_euler = 1 + (-1)**(dim-1)
actual_euler = sum([(-1)**i * e for i, e in enumerate(cell_counts)])
assert expected_euler == actual_euler
@pytest.mark.parametrize("dim", range(2, 6))
def test_cross_polytope_regions(self, dim):
# The hypercube is the dual of the cross-polytope, so the voronoi
# vertices of the cross-polytope lie on the points of the hypercube.
# generate points of the cross-polytope
points = np.concatenate((-np.eye(dim), np.eye(dim)))
sv = SphericalVoronoi(points)
assert all([len(e) == 2**(dim - 1) for e in sv.regions])
# generate points of the hypercube
expected = np.vstack(list(itertools.product([-1, 1], repeat=dim)))
expected = expected.astype(np.float64) / np.sqrt(dim)
# test that Voronoi vertices are correctly placed
dist = distance.cdist(sv.vertices, expected)
res = linear_sum_assignment(dist)
assert dist[res].sum() < TOL
@pytest.mark.parametrize("dim", range(2, 6))
def test_hypercube_regions(self, dim):
# The cross-polytope is the dual of the hypercube, so the voronoi
# vertices of the hypercube lie on the points of the cross-polytope.
# generate points of the hypercube
points = np.vstack(list(itertools.product([-1, 1], repeat=dim)))
points = points.astype(np.float64) / np.sqrt(dim)
sv = SphericalVoronoi(points)
# generate points of the cross-polytope
expected = np.concatenate((-np.eye(dim), np.eye(dim)))
# test that Voronoi vertices are correctly placed
dist = distance.cdist(sv.vertices, expected)
res = linear_sum_assignment(dist)
assert dist[res].sum() < TOL
@pytest.mark.parametrize("n", [10, 500])
@pytest.mark.parametrize("dim", [2, 3])
@pytest.mark.parametrize("radius", [0.5, 1, 2])
@pytest.mark.parametrize("shift", [False, True])
@pytest.mark.parametrize("single_hemisphere", [False, True])
def test_area_reconstitution(self, n, dim, radius, shift,
single_hemisphere):
points = _sample_sphere(n, dim, seed=0)
# move all points to one side of the sphere for single-hemisphere test
if single_hemisphere:
points[:, 0] = np.abs(points[:, 0])
center = (np.arange(dim) + 1) * shift
points = radius * points + center
sv = SphericalVoronoi(points, radius=radius, center=center)
areas = sv.calculate_areas()
assert_almost_equal(areas.sum(), _hypersphere_area(dim, radius))
@pytest.mark.parametrize("poly", ["triangle", "dodecagon",
"tetrahedron", "cube", "octahedron",
"dodecahedron", "icosahedron"])
def test_equal_area_reconstitution(self, poly):
points = _generate_polytope(poly)
n, dim = points.shape
sv = SphericalVoronoi(points)
areas = sv.calculate_areas()
assert_almost_equal(areas, _hypersphere_area(dim, 1) / n)
def test_area_unsupported_dimension(self):
dim = 4
points = np.concatenate((-np.eye(dim), np.eye(dim)))
sv = SphericalVoronoi(points)
with pytest.raises(TypeError, match="Only supported"):
sv.calculate_areas()
@pytest.mark.parametrize("radius", [1, 1.])
@pytest.mark.parametrize("center", [None, (1, 2, 3), (1., 2., 3.)])
def test_attribute_types(self, radius, center):
points = radius * self.points
if center is not None:
points += center
sv = SphericalVoronoi(points, radius=radius, center=center)
assert sv.points.dtype is np.dtype(np.float64)
assert sv.center.dtype is np.dtype(np.float64)
assert isinstance(sv.radius, float)
def test_region_types(self):
# Tests that region integer type does not change
# See Issue #13412
sv = SphericalVoronoi(self.points)
dtype = type(sv.regions[0][0])
sv.sort_vertices_of_regions()
assert type(sv.regions[0][0]) == dtype
sv.sort_vertices_of_regions()
assert type(sv.regions[0][0]) == dtype
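# Worked example of the duality checked in test_cross_polytope_regions:
# for octahedron generators (n = 6), the Voronoi vertices are the
# 2n - 4 = 8 cube corners projected onto the unit sphere.
import numpy as np
from scipy.spatial import SphericalVoronoi

_gen = np.concatenate((-np.eye(3), np.eye(3)))
_sv = SphericalVoronoi(_gen)
assert _sv.vertices.shape == (8, 3)
assert np.allclose(np.abs(_sv.vertices), 1 / np.sqrt(3))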
# File: scipy-main/scipy/spatial/tests/test_hausdorff.py
import numpy as np
from numpy.testing import (assert_allclose,
assert_array_equal,
assert_equal)
import pytest
from scipy.spatial.distance import directed_hausdorff
from scipy.spatial import distance
from scipy._lib._util import check_random_state
class TestHausdorff:
# Test various properties of the directed Hausdorff code.
def setup_method(self):
np.random.seed(1234)
random_angles = np.random.random(100) * np.pi * 2
random_columns = np.column_stack(
(random_angles, random_angles, np.zeros(100)))
random_columns[..., 0] = np.cos(random_columns[..., 0])
random_columns[..., 1] = np.sin(random_columns[..., 1])
random_columns_2 = np.column_stack(
(random_angles, random_angles, np.zeros(100)))
random_columns_2[1:, 0] = np.cos(random_columns_2[1:, 0]) * 2.0
random_columns_2[1:, 1] = np.sin(random_columns_2[1:, 1]) * 2.0
# move one point farther out so we don't have two perfect circles
random_columns_2[0, 0] = np.cos(random_columns_2[0, 0]) * 3.3
random_columns_2[0, 1] = np.sin(random_columns_2[0, 1]) * 3.3
self.path_1 = random_columns
self.path_2 = random_columns_2
self.path_1_4d = np.insert(self.path_1, 3, 5, axis=1)
self.path_2_4d = np.insert(self.path_2, 3, 27, axis=1)
def test_symmetry(self):
# Ensure that the directed (asymmetric) Hausdorff distance is
# actually asymmetric
forward = directed_hausdorff(self.path_1, self.path_2)[0]
reverse = directed_hausdorff(self.path_2, self.path_1)[0]
assert forward != reverse
def test_brute_force_comparison_forward(self):
# Ensure that the algorithm for directed_hausdorff gives the
# same result as the simple / brute force approach in the
# forward direction.
actual = directed_hausdorff(self.path_1, self.path_2)[0]
# brute force over rows:
expected = max(np.amin(distance.cdist(self.path_1, self.path_2),
axis=1))
assert_allclose(actual, expected)
def test_brute_force_comparison_reverse(self):
# Ensure that the algorithm for directed_hausdorff gives the
# same result as the simple / brute force approach in the
# reverse direction.
actual = directed_hausdorff(self.path_2, self.path_1)[0]
# brute force over columns:
expected = max(np.amin(distance.cdist(self.path_1, self.path_2),
axis=0))
assert_allclose(actual, expected)
def test_degenerate_case(self):
# The directed Hausdorff distance must be zero if both input
# data arrays match.
actual = directed_hausdorff(self.path_1, self.path_1)[0]
assert_allclose(actual, 0.0)
def test_2d_data_forward(self):
# Ensure that 2D data is handled properly for a simple case
# relative to brute force approach.
actual = directed_hausdorff(self.path_1[..., :2],
self.path_2[..., :2])[0]
expected = max(np.amin(distance.cdist(self.path_1[..., :2],
self.path_2[..., :2]),
axis=1))
assert_allclose(actual, expected)
def test_4d_data_reverse(self):
# Ensure that 4D data is handled properly for a simple case
# relative to brute force approach.
actual = directed_hausdorff(self.path_2_4d, self.path_1_4d)[0]
# brute force over columns:
expected = max(np.amin(distance.cdist(self.path_1_4d, self.path_2_4d),
axis=0))
assert_allclose(actual, expected)
def test_indices(self):
# Ensure that correct point indices are returned -- they should
# correspond to the Hausdorff pair
path_simple_1 = np.array([[-1,-12],[0,0], [1,1], [3,7], [1,2]])
path_simple_2 = np.array([[0,0], [1,1], [4,100], [10,9]])
actual = directed_hausdorff(path_simple_2, path_simple_1)[1:]
expected = (2, 3)
assert_array_equal(actual, expected)
def test_random_state(self):
# ensure that the global random state is not modified because
# the directed Hausdorff algorithm uses randomization
rs = check_random_state(None)
old_global_state = rs.get_state()
directed_hausdorff(self.path_1, self.path_2)
rs2 = check_random_state(None)
new_global_state = rs2.get_state()
assert_equal(new_global_state, old_global_state)
@pytest.mark.parametrize("seed", [None, 27870671])
def test_random_state_None_int(self, seed):
# check that seed values of None or int do not alter global
# random state
rs = check_random_state(None)
old_global_state = rs.get_state()
directed_hausdorff(self.path_1, self.path_2, seed)
rs2 = check_random_state(None)
new_global_state = rs2.get_state()
assert_equal(new_global_state, old_global_state)
def test_invalid_dimensions(self):
# Ensure that a ValueError is raised when the number of columns
# is not the same
rng = np.random.default_rng(189048172503940875434364128139223470523)
A = rng.random((3, 2))
B = rng.random((3, 5))
msg = r"need to have the same number of columns"
with pytest.raises(ValueError, match=msg):
directed_hausdorff(A, B)
@pytest.mark.parametrize("A, B, seed, expected", [
# the two cases from gh-11332
([(0,0)],
[(0,1), (0,0)],
0,
(0.0, 0, 1)),
([(0,0)],
[(0,1), (0,0)],
1,
(0.0, 0, 1)),
# slightly more complex case
([(-5, 3), (0,0)],
[(0,1), (0,0), (-5, 3)],
77098,
# the maximum minimum distance will
# be the last one found, but a unique
# solution is not guaranteed more broadly
(0.0, 1, 1)),
])
def test_subsets(self, A, B, seed, expected):
# verify fix for gh-11332
actual = directed_hausdorff(u=A, v=B, seed=seed)
# check distance
assert_allclose(actual[0], expected[0])
# check indices
assert actual[1:] == expected[1:]
@pytest.mark.xslow
def test_massive_arr_overflow():
# on 64-bit systems we should be able to
# handle arrays that exceed the indexing
# size of a 32-bit signed integer
try:
import psutil
except ModuleNotFoundError:
pytest.skip("psutil required to check available memory")
if psutil.virtual_memory().available < 80*2**30:
# Don't run the test if there is less than 80 gig of RAM available.
pytest.skip('insufficient memory available to run this test')
size = int(3e9)
arr1 = np.zeros(shape=(size, 2))
arr2 = np.zeros(shape=(3, 2))
arr1[size - 1] = [5, 5]
actual = directed_hausdorff(u=arr1, v=arr2)
assert_allclose(actual[0], 7.0710678118654755)
assert_allclose(actual[1], size - 1)
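# The brute-force definition used for comparison in the tests above,
# spelled out on a tiny example: the directed Hausdorff distance from
# u to v is the largest of the nearest-neighbour distances u -> v.
import numpy as np
from scipy.spatial.distance import cdist, directed_hausdorff

_u = np.array([[0.0, 0.0], [1.0, 0.0]])
_v = np.array([[0.0, 1.0], [3.0, 0.0]])
_d = directed_hausdorff(_u, _v)[0]
_brute = max(np.amin(cdist(_u, _v), axis=1))
assert np.isclose(_d, _brute)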
# File: scipy-main/scipy/spatial/transform/setup.py
def configuration(parent_package='', top_path=None):
from numpy.distutils.misc_util import Configuration
config = Configuration('transform', parent_package, top_path)
config.add_data_dir('tests')
config.add_data_files('_rotation.pyi')
config.add_extension('_rotation',
sources=['_rotation.c'])
return config
# File: scipy-main/scipy/spatial/transform/_rotation_groups.py
import numpy as np
from scipy.constants import golden as phi
def icosahedral(cls):
g1 = tetrahedral(cls).as_quat()
a = 0.5
b = 0.5 / phi
c = phi / 2
g2 = np.array([[+a, +b, +c, 0],
[+a, +b, -c, 0],
[+a, +c, 0, +b],
[+a, +c, 0, -b],
[+a, -b, +c, 0],
[+a, -b, -c, 0],
[+a, -c, 0, +b],
[+a, -c, 0, -b],
[+a, 0, +b, +c],
[+a, 0, +b, -c],
[+a, 0, -b, +c],
[+a, 0, -b, -c],
[+b, +a, 0, +c],
[+b, +a, 0, -c],
[+b, +c, +a, 0],
[+b, +c, -a, 0],
[+b, -a, 0, +c],
[+b, -a, 0, -c],
[+b, -c, +a, 0],
[+b, -c, -a, 0],
[+b, 0, +c, +a],
[+b, 0, +c, -a],
[+b, 0, -c, +a],
[+b, 0, -c, -a],
[+c, +a, +b, 0],
[+c, +a, -b, 0],
[+c, +b, 0, +a],
[+c, +b, 0, -a],
[+c, -a, +b, 0],
[+c, -a, -b, 0],
[+c, -b, 0, +a],
[+c, -b, 0, -a],
[+c, 0, +a, +b],
[+c, 0, +a, -b],
[+c, 0, -a, +b],
[+c, 0, -a, -b],
[0, +a, +c, +b],
[0, +a, +c, -b],
[0, +a, -c, +b],
[0, +a, -c, -b],
[0, +b, +a, +c],
[0, +b, +a, -c],
[0, +b, -a, +c],
[0, +b, -a, -c],
[0, +c, +b, +a],
[0, +c, +b, -a],
[0, +c, -b, +a],
[0, +c, -b, -a]])
return cls.from_quat(np.concatenate((g1, g2)))
def octahedral(cls):
g1 = tetrahedral(cls).as_quat()
c = np.sqrt(2) / 2
g2 = np.array([[+c, 0, 0, +c],
[0, +c, 0, +c],
[0, 0, +c, +c],
[0, 0, -c, +c],
[0, -c, 0, +c],
[-c, 0, 0, +c],
[0, +c, +c, 0],
[0, -c, +c, 0],
[+c, 0, +c, 0],
[-c, 0, +c, 0],
[+c, +c, 0, 0],
[-c, +c, 0, 0]])
return cls.from_quat(np.concatenate((g1, g2)))
def tetrahedral(cls):
g1 = np.eye(4)
c = 0.5
g2 = np.array([[c, -c, -c, +c],
[c, -c, +c, +c],
[c, +c, -c, +c],
[c, +c, +c, +c],
[c, -c, -c, -c],
[c, -c, +c, -c],
[c, +c, -c, -c],
[c, +c, +c, -c]])
return cls.from_quat(np.concatenate((g1, g2)))
def dicyclic(cls, n, axis=2):
g1 = cyclic(cls, n, axis).as_rotvec()
thetas = np.linspace(0, np.pi, n, endpoint=False)
rv = np.pi * np.vstack([np.zeros(n), np.cos(thetas), np.sin(thetas)]).T
g2 = np.roll(rv, axis, axis=1)
return cls.from_rotvec(np.concatenate((g1, g2)))
def cyclic(cls, n, axis=2):
thetas = np.linspace(0, 2 * np.pi, n, endpoint=False)
rv = np.vstack([thetas, np.zeros(n), np.zeros(n)]).T
return cls.from_rotvec(np.roll(rv, axis, axis=1))
def create_group(cls, group, axis='Z'):
if not isinstance(group, str):
raise ValueError("`group` argument must be a string")
permitted_axes = ['x', 'y', 'z', 'X', 'Y', 'Z']
if axis not in permitted_axes:
raise ValueError("`axis` must be one of " + ", ".join(permitted_axes))
if group in ['I', 'O', 'T']:
symbol = group
order = 1
elif group[:1] in ['C', 'D'] and group[1:].isdigit():
symbol = group[:1]
order = int(group[1:])
else:
raise ValueError("`group` must be one of 'I', 'O', 'T', 'Dn', 'Cn'")
if order < 1:
raise ValueError("Group order must be positive")
axis = 'xyz'.index(axis.lower())
if symbol == 'I':
return icosahedral(cls)
elif symbol == 'O':
return octahedral(cls)
elif symbol == 'T':
return tetrahedral(cls)
elif symbol == 'D':
return dicyclic(cls, order, axis=axis)
elif symbol == 'C':
return cyclic(cls, order, axis=axis)
else:
assert False
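# The group orders implied by the constructions above: T has 12 elements
# (4 + 8 quaternions), O has 24, I has 60, Cn has n and Dn has 2n. A
# standalone check through the public Rotation.create_group entry point:
from scipy.spatial.transform import Rotation as _R

assert len(_R.create_group('T')) == 12
assert len(_R.create_group('O')) == 24
assert len(_R.create_group('I')) == 60
assert len(_R.create_group('C5')) == 5
assert len(_R.create_group('D4')) == 8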
# File: scipy-main/scipy/spatial/transform/rotation.py
# This file is not meant for public use and will be removed in SciPy v2.0.0.
# Use the `scipy.spatial` namespace for importing the functions
# included below.
import warnings
from . import _rotation
__all__ = [ # noqa: F822
'Rotation',
'Slerp',
'check_random_state',
'create_group',
're',
]
def __dir__():
return __all__
def __getattr__(name):
if name not in __all__:
raise AttributeError(
"scipy.spatial.transform.rotation is deprecated and has no attribute "
f"{name}. Try looking in scipy.spatial instead.")
warnings.warn(f"Please use `{name}` from the `scipy.spatial.transform` "
"namespace, the `scipy.spatial.transform.rotation` namespace"
" is deprecated.",
category=DeprecationWarning, stacklevel=2)
return getattr(_rotation, name)
# File: scipy-main/scipy/spatial/transform/__init__.py
"""
Spatial Transformations (:mod:`scipy.spatial.transform`)
========================================================
.. currentmodule:: scipy.spatial.transform
This package implements various spatial transformations. For now,
only rotations are supported.
Rotations in 3 dimensions
-------------------------
.. autosummary::
:toctree: generated/
Rotation
Slerp
RotationSpline
"""
from ._rotation import Rotation, Slerp
from ._rotation_spline import RotationSpline
# Deprecated namespaces, to be removed in v2.0.0
from . import rotation
__all__ = ['Rotation', 'Slerp', 'RotationSpline']
from scipy._lib._testutils import PytestTester
test = PytestTester(__name__)
del PytestTester
# File: scipy-main/scipy/spatial/transform/_rotation_spline.py
import numpy as np
from scipy.linalg import solve_banded
from ._rotation import Rotation
def _create_skew_matrix(x):
"""Create skew-symmetric matrices corresponding to vectors.
Parameters
----------
x : ndarray, shape (n, 3)
Set of vectors.
Returns
-------
ndarray, shape (n, 3, 3)
"""
result = np.zeros((len(x), 3, 3))
result[:, 0, 1] = -x[:, 2]
result[:, 0, 2] = x[:, 1]
result[:, 1, 0] = x[:, 2]
result[:, 1, 2] = -x[:, 0]
result[:, 2, 0] = -x[:, 1]
result[:, 2, 1] = x[:, 0]
return result
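# The skew-symmetric matrix built above represents the cross product:
# S(x) @ b equals np.cross(x, b). A quick check with an inlined copy of
# the single-vector construction:
import numpy as np

_x = np.array([0.3, -0.5, 0.7])
_b = np.array([1.0, 2.0, -1.0])
_S = np.array([[0.0, -_x[2], _x[1]],
               [_x[2], 0.0, -_x[0]],
               [-_x[1], _x[0], 0.0]])
assert np.allclose(_S @ _b, np.cross(_x, _b))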
def _matrix_vector_product_of_stacks(A, b):
"""Compute the product of stack of matrices and vectors."""
return np.einsum("ijk,ik->ij", A, b)
def _angular_rate_to_rotvec_dot_matrix(rotvecs):
"""Compute matrices to transform angular rates to rot. vector derivatives.
The matrices depend on the current attitude represented as a rotation
vector.
Parameters
----------
rotvecs : ndarray, shape (n, 3)
Set of rotation vectors.
Returns
-------
ndarray, shape (n, 3, 3)
"""
norm = np.linalg.norm(rotvecs, axis=1)
k = np.empty_like(norm)
mask = norm > 1e-4
nm = norm[mask]
k[mask] = (1 - 0.5 * nm / np.tan(0.5 * nm)) / nm**2
mask = ~mask
nm = norm[mask]
k[mask] = 1/12 + 1/720 * nm**2
skew = _create_skew_matrix(rotvecs)
result = np.empty((len(rotvecs), 3, 3))
result[:] = np.identity(3)
result[:] += 0.5 * skew
result[:] += k[:, None, None] * np.matmul(skew, skew)
return result
def _rotvec_dot_to_angular_rate_matrix(rotvecs):
"""Compute matrices to transform rot. vector derivatives to angular rates.
The matrices depend on the current attitude represented as a rotation
vector.
Parameters
----------
rotvecs : ndarray, shape (n, 3)
Set of rotation vectors.
Returns
-------
ndarray, shape (n, 3, 3)
"""
norm = np.linalg.norm(rotvecs, axis=1)
k1 = np.empty_like(norm)
k2 = np.empty_like(norm)
mask = norm > 1e-4
nm = norm[mask]
k1[mask] = (1 - np.cos(nm)) / nm ** 2
k2[mask] = (nm - np.sin(nm)) / nm ** 3
mask = ~mask
nm = norm[mask]
k1[mask] = 0.5 - nm ** 2 / 24
k2[mask] = 1 / 6 - nm ** 2 / 120
skew = _create_skew_matrix(rotvecs)
result = np.empty((len(rotvecs), 3, 3))
result[:] = np.identity(3)
result[:] -= k1[:, None, None] * skew
result[:] += k2[:, None, None] * np.matmul(skew, skew)
return result
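# The two transforms above are exact matrix inverses of each other; using
# S**3 = -||r||**2 * S the closed forms can be verified symbolically.
# A numerical spot check for a single rotation vector, with the skew
# matrix inlined:
import numpy as np

_rv = np.array([0.3, -0.5, 0.7])
_n = np.linalg.norm(_rv)
_S = np.array([[0.0, -_rv[2], _rv[1]],
               [_rv[2], 0.0, -_rv[0]],
               [-_rv[1], _rv[0], 0.0]])
_A = np.eye(3) + 0.5 * _S + (1 - 0.5 * _n / np.tan(0.5 * _n)) / _n**2 * _S @ _S
_A_inv = (np.eye(3) - (1 - np.cos(_n)) / _n**2 * _S
          + (_n - np.sin(_n)) / _n**3 * _S @ _S)
assert np.allclose(_A @ _A_inv, np.eye(3))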
def _angular_acceleration_nonlinear_term(rotvecs, rotvecs_dot):
"""Compute the non-linear term in angular acceleration.
The angular acceleration contains a quadratic term with respect to
the derivative of the rotation vector. This function computes that.
Parameters
----------
rotvecs : ndarray, shape (n, 3)
Set of rotation vectors.
rotvecs_dot : ndarray, shape (n, 3)
Set of rotation vector derivatives.
Returns
-------
ndarray, shape (n, 3)
"""
norm = np.linalg.norm(rotvecs, axis=1)
dp = np.sum(rotvecs * rotvecs_dot, axis=1)
cp = np.cross(rotvecs, rotvecs_dot)
ccp = np.cross(rotvecs, cp)
dccp = np.cross(rotvecs_dot, cp)
k1 = np.empty_like(norm)
k2 = np.empty_like(norm)
k3 = np.empty_like(norm)
mask = norm > 1e-4
nm = norm[mask]
k1[mask] = (-nm * np.sin(nm) - 2 * (np.cos(nm) - 1)) / nm ** 4
k2[mask] = (-2 * nm + 3 * np.sin(nm) - nm * np.cos(nm)) / nm ** 5
k3[mask] = (nm - np.sin(nm)) / nm ** 3
mask = ~mask
nm = norm[mask]
k1[mask] = 1/12 - nm ** 2 / 180
    k2[mask] = -1/60 + nm ** 2 / 1260
k3[mask] = 1/6 - nm ** 2 / 120
dp = dp[:, None]
k1 = k1[:, None]
k2 = k2[:, None]
k3 = k3[:, None]
return dp * (k1 * cp + k2 * ccp) + k3 * dccp
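# Spot check of the small-angle series used above: the expansion of
# (-2*x + 3*sin(x) - x*cos(x)) / x**5 is -1/60 + x**2/1260 + O(x**4),
# so the exact and Taylor branches agree near the 1e-4 switchover.
import numpy as np

_x = 0.1
_exact = (-2 * _x + 3 * np.sin(_x) - _x * np.cos(_x)) / _x**5
_series = -1 / 60 + _x**2 / 1260
assert np.isclose(_exact, _series, rtol=1e-6)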
def _compute_angular_rate(rotvecs, rotvecs_dot):
"""Compute angular rates given rotation vectors and its derivatives.
Parameters
----------
rotvecs : ndarray, shape (n, 3)
Set of rotation vectors.
rotvecs_dot : ndarray, shape (n, 3)
Set of rotation vector derivatives.
Returns
-------
ndarray, shape (n, 3)
"""
return _matrix_vector_product_of_stacks(
_rotvec_dot_to_angular_rate_matrix(rotvecs), rotvecs_dot)
def _compute_angular_acceleration(rotvecs, rotvecs_dot, rotvecs_dot_dot):
"""Compute angular acceleration given rotation vector and its derivatives.
Parameters
----------
rotvecs : ndarray, shape (n, 3)
Set of rotation vectors.
rotvecs_dot : ndarray, shape (n, 3)
Set of rotation vector derivatives.
rotvecs_dot_dot : ndarray, shape (n, 3)
Set of rotation vector second derivatives.
Returns
-------
ndarray, shape (n, 3)
"""
return (_compute_angular_rate(rotvecs, rotvecs_dot_dot) +
_angular_acceleration_nonlinear_term(rotvecs, rotvecs_dot))
def _create_block_3_diagonal_matrix(A, B, d):
"""Create a 3-diagonal block matrix as banded.
The matrix has the following structure:
DB...
ADB..
.ADB.
..ADB
...AD
    The blocks A, B and D are 3-by-3 matrices. The D matrices have the form
    d * I.
Parameters
----------
A : ndarray, shape (n, 3, 3)
Stack of A blocks.
B : ndarray, shape (n, 3, 3)
Stack of B blocks.
d : ndarray, shape (n + 1,)
Values for diagonal blocks.
Returns
-------
ndarray, shape (11, 3 * (n + 1))
Matrix in the banded form as used by `scipy.linalg.solve_banded`.
"""
ind = np.arange(3)
ind_blocks = np.arange(len(A))
A_i = np.empty_like(A, dtype=int)
A_i[:] = ind[:, None]
A_i += 3 * (1 + ind_blocks[:, None, None])
A_j = np.empty_like(A, dtype=int)
A_j[:] = ind
A_j += 3 * ind_blocks[:, None, None]
B_i = np.empty_like(B, dtype=int)
B_i[:] = ind[:, None]
B_i += 3 * ind_blocks[:, None, None]
B_j = np.empty_like(B, dtype=int)
B_j[:] = ind
B_j += 3 * (1 + ind_blocks[:, None, None])
diag_i = diag_j = np.arange(3 * len(d))
i = np.hstack((A_i.ravel(), B_i.ravel(), diag_i))
j = np.hstack((A_j.ravel(), B_j.ravel(), diag_j))
values = np.hstack((A.ravel(), B.ravel(), np.repeat(d, 3)))
u = 5
l = 5
result = np.zeros((u + l + 1, 3 * len(d)))
result[u + i - j, j] = values
return result
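# The banded layout above follows scipy.linalg.solve_banded's convention
# ab[u + i - j, j] = a[i, j]. A self-contained sketch that mirrors the
# block-tridiagonal structure for two off-diagonal block pairs and
# compares a banded solve against a dense one (all names illustrative):
import numpy as np
from scipy.linalg import solve_banded as _solve_banded

_rng = np.random.default_rng(0)
_nb = 2
_Ab = _rng.standard_normal((_nb, 3, 3))
_Bb = _rng.standard_normal((_nb, 3, 3))
_db = _rng.standard_normal(_nb + 1) + 10.0  # keep the system well conditioned
_size = 3 * (_nb + 1)
_dense = np.zeros((_size, _size))
for _k in range(_nb + 1):
    _dense[3*_k:3*_k+3, 3*_k:3*_k+3] = _db[_k] * np.eye(3)
for _k in range(_nb):
    _dense[3*_k+3:3*_k+6, 3*_k:3*_k+3] = _Ab[_k]  # sub-diagonal blocks
    _dense[3*_k:3*_k+3, 3*_k+3:3*_k+6] = _Bb[_k]  # super-diagonal blocks
_u = _l = 5
_ab = np.zeros((_u + _l + 1, _size))
for _i in range(_size):
    for _j in range(max(0, _i - _l), min(_size, _i + _u + 1)):
        _ab[_u + _i - _j, _j] = _dense[_i, _j]
_rhs = _rng.standard_normal(_size)
assert np.allclose(_solve_banded((_l, _u), _ab, _rhs),
                   np.linalg.solve(_dense, _rhs))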
class RotationSpline:
"""Interpolate rotations with continuous angular rate and acceleration.
    The rotation vectors between consecutive orientations are cubic functions
    of time, and the angular rate and acceleration are guaranteed to be
    continuous. Such interpolation is analogous to cubic spline interpolation.
Refer to [1]_ for math and implementation details.
Parameters
----------
times : array_like, shape (N,)
Times of the known rotations. At least 2 times must be specified.
rotations : `Rotation` instance
Rotations to perform the interpolation between. Must contain N
rotations.
Methods
-------
__call__
References
----------
.. [1] `Smooth Attitude Interpolation
<https://github.com/scipy/scipy/files/2932755/attitude_interpolation.pdf>`_
Examples
--------
>>> from scipy.spatial.transform import Rotation, RotationSpline
>>> import numpy as np
Define the sequence of times and rotations from the Euler angles:
>>> times = [0, 10, 20, 40]
>>> angles = [[-10, 20, 30], [0, 15, 40], [-30, 45, 30], [20, 45, 90]]
>>> rotations = Rotation.from_euler('XYZ', angles, degrees=True)
Create the interpolator object:
>>> spline = RotationSpline(times, rotations)
Interpolate the Euler angles, angular rate and acceleration:
>>> angular_rate = np.rad2deg(spline(times, 1))
>>> angular_acceleration = np.rad2deg(spline(times, 2))
>>> times_plot = np.linspace(times[0], times[-1], 100)
>>> angles_plot = spline(times_plot).as_euler('XYZ', degrees=True)
>>> angular_rate_plot = np.rad2deg(spline(times_plot, 1))
>>> angular_acceleration_plot = np.rad2deg(spline(times_plot, 2))
On this plot you see that Euler angles are continuous and smooth:
>>> import matplotlib.pyplot as plt
>>> plt.plot(times_plot, angles_plot)
>>> plt.plot(times, angles, 'x')
>>> plt.title("Euler angles")
>>> plt.show()
The angular rate is also smooth:
>>> plt.plot(times_plot, angular_rate_plot)
>>> plt.plot(times, angular_rate, 'x')
>>> plt.title("Angular rate")
>>> plt.show()
The angular acceleration is continuous, but not smooth. Also note that
the angular acceleration is not a piecewise-linear function, because
it is different from the second derivative of the rotation vector (which
is a piecewise-linear function as in the cubic spline).
>>> plt.plot(times_plot, angular_acceleration_plot)
>>> plt.plot(times, angular_acceleration, 'x')
>>> plt.title("Angular acceleration")
>>> plt.show()
"""
# Parameters for the solver for angular rate.
MAX_ITER = 10
TOL = 1e-9
def _solve_for_angular_rates(self, dt, angular_rates, rotvecs):
angular_rate_first = angular_rates[0].copy()
A = _angular_rate_to_rotvec_dot_matrix(rotvecs)
A_inv = _rotvec_dot_to_angular_rate_matrix(rotvecs)
M = _create_block_3_diagonal_matrix(
2 * A_inv[1:-1] / dt[1:-1, None, None],
2 * A[1:-1] / dt[1:-1, None, None],
4 * (1 / dt[:-1] + 1 / dt[1:]))
b0 = 6 * (rotvecs[:-1] * dt[:-1, None] ** -2 +
rotvecs[1:] * dt[1:, None] ** -2)
b0[0] -= 2 / dt[0] * A_inv[0].dot(angular_rate_first)
b0[-1] -= 2 / dt[-1] * A[-1].dot(angular_rates[-1])
for iteration in range(self.MAX_ITER):
rotvecs_dot = _matrix_vector_product_of_stacks(A, angular_rates)
delta_beta = _angular_acceleration_nonlinear_term(
rotvecs[:-1], rotvecs_dot[:-1])
b = b0 - delta_beta
angular_rates_new = solve_banded((5, 5), M, b.ravel())
angular_rates_new = angular_rates_new.reshape((-1, 3))
delta = np.abs(angular_rates_new - angular_rates[:-1])
angular_rates[:-1] = angular_rates_new
if np.all(delta < self.TOL * (1 + np.abs(angular_rates_new))):
break
rotvecs_dot = _matrix_vector_product_of_stacks(A, angular_rates)
angular_rates = np.vstack((angular_rate_first, angular_rates[:-1]))
return angular_rates, rotvecs_dot
def __init__(self, times, rotations):
from scipy.interpolate import PPoly
if rotations.single:
raise ValueError("`rotations` must be a sequence of rotations.")
if len(rotations) == 1:
raise ValueError("`rotations` must contain at least 2 rotations.")
times = np.asarray(times, dtype=float)
if times.ndim != 1:
raise ValueError("`times` must be 1-dimensional.")
if len(times) != len(rotations):
raise ValueError("Expected number of rotations to be equal to "
"number of timestamps given, got {} rotations "
"and {} timestamps."
.format(len(rotations), len(times)))
dt = np.diff(times)
if np.any(dt <= 0):
raise ValueError("Values in `times` must be in a strictly "
"increasing order.")
rotvecs = (rotations[:-1].inv() * rotations[1:]).as_rotvec()
angular_rates = rotvecs / dt[:, None]
if len(rotations) == 2:
rotvecs_dot = angular_rates
else:
angular_rates, rotvecs_dot = self._solve_for_angular_rates(
dt, angular_rates, rotvecs)
dt = dt[:, None]
coeff = np.empty((4, len(times) - 1, 3))
coeff[0] = (-2 * rotvecs + dt * angular_rates
+ dt * rotvecs_dot) / dt ** 3
coeff[1] = (3 * rotvecs - 2 * dt * angular_rates
- dt * rotvecs_dot) / dt ** 2
coeff[2] = angular_rates
coeff[3] = 0
self.times = times
self.rotations = rotations
self.interpolator = PPoly(coeff, times)
def __call__(self, times, order=0):
"""Compute interpolated values.
Parameters
----------
times : float or array_like
Times of interest.
order : {0, 1, 2}, optional
Order of differentiation:
* 0 (default) : return Rotation
* 1 : return the angular rate in rad/sec
* 2 : return the angular acceleration in rad/sec/sec
Returns
-------
Interpolated Rotation, angular rate or acceleration.
"""
if order not in [0, 1, 2]:
raise ValueError("`order` must be 0, 1 or 2.")
times = np.asarray(times, dtype=float)
if times.ndim > 1:
raise ValueError("`times` must be at most 1-dimensional.")
        single_time = times.ndim == 0
        times = np.atleast_1d(times)
        rotvecs = self.interpolator(times)
        if order == 0:
            index = np.searchsorted(self.times, times, side='right')
            index -= 1
            index[index < 0] = 0
            n_segments = len(self.times) - 1
            index[index > n_segments - 1] = n_segments - 1
            result = self.rotations[index] * Rotation.from_rotvec(rotvecs)
        elif order == 1:
            rotvecs_dot = self.interpolator(times, 1)
            result = _compute_angular_rate(rotvecs, rotvecs_dot)
        elif order == 2:
            rotvecs_dot = self.interpolator(times, 1)
            rotvecs_dot_dot = self.interpolator(times, 2)
            result = _compute_angular_acceleration(rotvecs, rotvecs_dot,
                                                   rotvecs_dot_dot)
        else:
            assert False
        if single_time:
            result = result[0]
        return result
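As a quick sanity check of the interface above, a minimal usage sketch (the specific times and angles are illustrative, not part of the library):

```python
import numpy as np
from scipy.spatial.transform import Rotation, RotationSpline

# Build a spline through three attitudes and evaluate it with each
# supported differentiation order.
times = [0.0, 1.0, 2.0]
rotations = Rotation.from_euler(
    'xyz', [[0, 0, 0], [10, 20, 30], [40, 50, 60]], degrees=True)
spline = RotationSpline(times, rotations)

attitude = spline(0.5)     # order=0: a Rotation instance
rate = spline(0.5, 1)      # order=1: angular rate in rad/s, shape (3,)
accel = spline(0.5, 2)     # order=2: angular acceleration in rad/s**2
```

At the knot times the spline reproduces the input rotations exactly, which is a convenient property to test against.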
# File: scipy-main/scipy/spatial/transform/tests/test_rotation_spline.py (repo: scipy)
from itertools import product
import numpy as np
from numpy.testing import assert_allclose
from pytest import raises
from scipy.spatial.transform import Rotation, RotationSpline
from scipy.spatial.transform._rotation_spline import (
_angular_rate_to_rotvec_dot_matrix,
_rotvec_dot_to_angular_rate_matrix,
_matrix_vector_product_of_stacks,
_angular_acceleration_nonlinear_term,
_create_block_3_diagonal_matrix)
def test_angular_rate_to_rotvec_conversions():
np.random.seed(0)
rv = np.random.randn(4, 3)
A = _angular_rate_to_rotvec_dot_matrix(rv)
A_inv = _rotvec_dot_to_angular_rate_matrix(rv)
    # When the rotation vector is aligned with the angular rate,
    # the rotation vector rate and the angular rate coincide.
assert_allclose(_matrix_vector_product_of_stacks(A, rv), rv)
assert_allclose(_matrix_vector_product_of_stacks(A_inv, rv), rv)
# A and A_inv must be reciprocal to each other.
I_stack = np.empty((4, 3, 3))
I_stack[:] = np.eye(3)
assert_allclose(np.matmul(A, A_inv), I_stack, atol=1e-15)
def test_angular_rate_nonlinear_term():
    # The only simple test is to check that the term is zero when
    # the rotation vector rate is parallel to the rotation vector.
np.random.seed(0)
rv = np.random.rand(4, 3)
assert_allclose(_angular_acceleration_nonlinear_term(rv, rv), 0,
atol=1e-19)
def test_create_block_3_diagonal_matrix():
np.random.seed(0)
A = np.empty((4, 3, 3))
A[:] = np.arange(1, 5)[:, None, None]
B = np.empty((4, 3, 3))
B[:] = -np.arange(1, 5)[:, None, None]
d = 10 * np.arange(10, 15)
banded = _create_block_3_diagonal_matrix(A, B, d)
# Convert the banded matrix to the full matrix.
k, l = list(zip(*product(np.arange(banded.shape[0]),
np.arange(banded.shape[1]))))
k = np.asarray(k)
l = np.asarray(l)
i = k - 5 + l
j = l
values = banded.ravel()
mask = (i >= 0) & (i < 15)
i = i[mask]
j = j[mask]
values = values[mask]
full = np.zeros((15, 15))
full[i, j] = values
zero = np.zeros((3, 3))
eye = np.eye(3)
# Create the reference full matrix in the most straightforward manner.
ref = np.block([
[d[0] * eye, B[0], zero, zero, zero],
[A[0], d[1] * eye, B[1], zero, zero],
[zero, A[1], d[2] * eye, B[2], zero],
[zero, zero, A[2], d[3] * eye, B[3]],
[zero, zero, zero, A[3], d[4] * eye],
])
assert_allclose(full, ref, atol=1e-19)
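The diagonal-ordered banded storage that `_create_block_3_diagonal_matrix` produces is the format `scipy.linalg.solve_banded` consumes. A minimal tridiagonal sketch of that storage convention (illustrative values, not the spline's actual system):

```python
import numpy as np
from scipy.linalg import solve_banded

# Diagonal-ordered storage: row i of `ab` holds diagonal number u - i,
# here with one superdiagonal (u=1) and one subdiagonal (l=1).
ab = np.array([
    [0.0, 1.0, 1.0],   # superdiagonal (first entry unused)
    [4.0, 4.0, 4.0],   # main diagonal
    [1.0, 1.0, 0.0],   # subdiagonal (last entry unused)
])
b = np.array([1.0, 2.0, 3.0])
x = solve_banded((1, 1), ab, b)

# The equivalent dense system, for verification.
A = np.diag([4.0, 4.0, 4.0]) + np.diag([1.0, 1.0], 1) + np.diag([1.0, 1.0], -1)
```

`_solve_for_angular_rates` uses the same convention with five bands on each side of the diagonal, since each block row couples three consecutive 3-vectors.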
def test_spline_2_rotations():
times = [0, 10]
rotations = Rotation.from_euler('xyz', [[0, 0, 0], [10, -20, 30]],
degrees=True)
spline = RotationSpline(times, rotations)
rv = (rotations[0].inv() * rotations[1]).as_rotvec()
rate = rv / (times[1] - times[0])
times_check = np.array([-1, 5, 12])
dt = times_check - times[0]
rv_ref = rate * dt[:, None]
assert_allclose(spline(times_check).as_rotvec(), rv_ref)
assert_allclose(spline(times_check, 1), np.resize(rate, (3, 3)))
assert_allclose(spline(times_check, 2), 0, atol=1e-16)
def test_constant_attitude():
times = np.arange(10)
rotations = Rotation.from_rotvec(np.ones((10, 3)))
spline = RotationSpline(times, rotations)
times_check = np.linspace(-1, 11)
assert_allclose(spline(times_check).as_rotvec(), 1, rtol=1e-15)
assert_allclose(spline(times_check, 1), 0, atol=1e-17)
assert_allclose(spline(times_check, 2), 0, atol=1e-17)
assert_allclose(spline(5.5).as_rotvec(), 1, rtol=1e-15)
assert_allclose(spline(5.5, 1), 0, atol=1e-17)
assert_allclose(spline(5.5, 2), 0, atol=1e-17)
def test_spline_properties():
times = np.array([0, 5, 15, 27])
angles = [[-5, 10, 27], [3, 5, 38], [-12, 10, 25], [-15, 20, 11]]
rotations = Rotation.from_euler('xyz', angles, degrees=True)
spline = RotationSpline(times, rotations)
assert_allclose(spline(times).as_euler('xyz', degrees=True), angles)
assert_allclose(spline(0).as_euler('xyz', degrees=True), angles[0])
h = 1e-8
rv0 = spline(times).as_rotvec()
rvm = spline(times - h).as_rotvec()
rvp = spline(times + h).as_rotvec()
# rtol bumped from 1e-15 to 1.5e-15 in gh18414 for linux 32 bit
assert_allclose(rv0, 0.5 * (rvp + rvm), rtol=1.5e-15)
r0 = spline(times, 1)
rm = spline(times - h, 1)
rp = spline(times + h, 1)
assert_allclose(r0, 0.5 * (rm + rp), rtol=1e-14)
a0 = spline(times, 2)
am = spline(times - h, 2)
ap = spline(times + h, 2)
assert_allclose(a0, am, rtol=1e-7)
assert_allclose(a0, ap, rtol=1e-7)
def test_error_handling():
raises(ValueError, RotationSpline, [1.0], Rotation.random())
r = Rotation.random(10)
t = np.arange(10).reshape(5, 2)
raises(ValueError, RotationSpline, t, r)
t = np.arange(9)
raises(ValueError, RotationSpline, t, r)
t = np.arange(10)
t[5] = 0
raises(ValueError, RotationSpline, t, r)
t = np.arange(10)
s = RotationSpline(t, r)
raises(ValueError, s, 10, -1)
raises(ValueError, s, np.arange(10).reshape(5, 2))
# File: scipy-main/scipy/spatial/transform/tests/test_rotation.py (repo: scipy)
import pytest
import numpy as np
from numpy.testing import assert_equal, assert_array_almost_equal
from numpy.testing import assert_allclose
from scipy.spatial.transform import Rotation, Slerp
from scipy.stats import special_ortho_group
from itertools import permutations
import pickle
import copy
def test_generic_quat_matrix():
x = np.array([[3, 4, 0, 0], [5, 12, 0, 0]])
r = Rotation.from_quat(x)
expected_quat = x / np.array([[5], [13]])
assert_array_almost_equal(r.as_quat(), expected_quat)
def test_from_single_1d_quaternion():
x = np.array([3, 4, 0, 0])
r = Rotation.from_quat(x)
expected_quat = x / 5
assert_array_almost_equal(r.as_quat(), expected_quat)
def test_from_single_2d_quaternion():
x = np.array([[3, 4, 0, 0]])
r = Rotation.from_quat(x)
expected_quat = x / 5
assert_array_almost_equal(r.as_quat(), expected_quat)
def test_from_square_quat_matrix():
# Ensure proper norm array broadcasting
x = np.array([
[3, 0, 0, 4],
[5, 0, 12, 0],
[0, 0, 0, 1],
[-1, -1, -1, 1],
[0, 0, 0, -1], # Check double cover
[-1, -1, -1, -1] # Check double cover
])
r = Rotation.from_quat(x)
expected_quat = x / np.array([[5], [13], [1], [2], [1], [2]])
assert_array_almost_equal(r.as_quat(), expected_quat)
def test_quat_double_to_canonical_single_cover():
x = np.array([
[-1, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, -1, 0],
[0, 0, 0, -1],
[-1, -1, -1, -1]
])
r = Rotation.from_quat(x)
expected_quat = np.abs(x) / np.linalg.norm(x, axis=1)[:, None]
assert_array_almost_equal(r.as_quat(canonical=True), expected_quat)
def test_malformed_1d_from_quat():
with pytest.raises(ValueError):
Rotation.from_quat(np.array([1, 2, 3]))
def test_malformed_2d_from_quat():
with pytest.raises(ValueError):
Rotation.from_quat(np.array([
[1, 2, 3, 4, 5],
[4, 5, 6, 7, 8]
]))
def test_zero_norms_from_quat():
x = np.array([
[3, 4, 0, 0],
[0, 0, 0, 0],
[5, 0, 12, 0]
])
with pytest.raises(ValueError):
Rotation.from_quat(x)
def test_as_matrix_single_1d_quaternion():
quat = [0, 0, 0, 1]
mat = Rotation.from_quat(quat).as_matrix()
# mat.shape == (3,3) due to 1d input
assert_array_almost_equal(mat, np.eye(3))
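The shape convention exercised by the two tests above can be shown in isolation: a 1-D quaternion yields a single rotation, while a 2-D input with one row yields a stack of length one.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# 1-D input -> single rotation, matrix of shape (3, 3).
m_single = Rotation.from_quat([0, 0, 0, 1]).as_matrix()

# 2-D input with one row -> stack of rotations, shape (1, 3, 3).
m_stack = Rotation.from_quat([[0, 0, 0, 1]]).as_matrix()
```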
def test_as_matrix_single_2d_quaternion():
quat = [[0, 0, 1, 1]]
mat = Rotation.from_quat(quat).as_matrix()
assert_equal(mat.shape, (1, 3, 3))
expected_mat = np.array([
[0, -1, 0],
[1, 0, 0],
[0, 0, 1]
])
assert_array_almost_equal(mat[0], expected_mat)
def test_as_matrix_from_square_input():
quats = [
[0, 0, 1, 1],
[0, 1, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, -1]
]
mat = Rotation.from_quat(quats).as_matrix()
assert_equal(mat.shape, (4, 3, 3))
expected0 = np.array([
[0, -1, 0],
[1, 0, 0],
[0, 0, 1]
])
assert_array_almost_equal(mat[0], expected0)
expected1 = np.array([
[0, 0, 1],
[0, 1, 0],
[-1, 0, 0]
])
assert_array_almost_equal(mat[1], expected1)
assert_array_almost_equal(mat[2], np.eye(3))
assert_array_almost_equal(mat[3], np.eye(3))
def test_as_matrix_from_generic_input():
quats = [
[0, 0, 1, 1],
[0, 1, 0, 1],
[1, 2, 3, 4]
]
mat = Rotation.from_quat(quats).as_matrix()
assert_equal(mat.shape, (3, 3, 3))
expected0 = np.array([
[0, -1, 0],
[1, 0, 0],
[0, 0, 1]
])
assert_array_almost_equal(mat[0], expected0)
expected1 = np.array([
[0, 0, 1],
[0, 1, 0],
[-1, 0, 0]
])
assert_array_almost_equal(mat[1], expected1)
expected2 = np.array([
[0.4, -2, 2.2],
[2.8, 1, 0.4],
[-1, 2, 2]
]) / 3
assert_array_almost_equal(mat[2], expected2)
def test_from_single_2d_matrix():
mat = [
[0, 0, 1],
[1, 0, 0],
[0, 1, 0]
]
expected_quat = [0.5, 0.5, 0.5, 0.5]
assert_array_almost_equal(
Rotation.from_matrix(mat).as_quat(),
expected_quat)
def test_from_single_3d_matrix():
mat = np.array([
[0, 0, 1],
[1, 0, 0],
[0, 1, 0]
]).reshape((1, 3, 3))
expected_quat = np.array([0.5, 0.5, 0.5, 0.5]).reshape((1, 4))
assert_array_almost_equal(
Rotation.from_matrix(mat).as_quat(),
expected_quat)
def test_from_matrix_calculation():
expected_quat = np.array([1, 1, 6, 1]) / np.sqrt(39)
mat = np.array([
[-0.8974359, -0.2564103, 0.3589744],
[0.3589744, -0.8974359, 0.2564103],
[0.2564103, 0.3589744, 0.8974359]
])
assert_array_almost_equal(
Rotation.from_matrix(mat).as_quat(),
expected_quat)
assert_array_almost_equal(
Rotation.from_matrix(mat.reshape((1, 3, 3))).as_quat(),
expected_quat.reshape((1, 4)))
def test_matrix_calculation_pipeline():
mat = special_ortho_group.rvs(3, size=10, random_state=0)
assert_array_almost_equal(Rotation.from_matrix(mat).as_matrix(), mat)
def test_from_matrix_ortho_output():
rnd = np.random.RandomState(0)
mat = rnd.random_sample((100, 3, 3))
ortho_mat = Rotation.from_matrix(mat).as_matrix()
mult_result = np.einsum('...ij,...jk->...ik', ortho_mat,
ortho_mat.transpose((0, 2, 1)))
eye3d = np.zeros((100, 3, 3))
for i in range(3):
eye3d[:, i, i] = 1.0
assert_array_almost_equal(mult_result, eye3d)
def test_from_1d_single_rotvec():
rotvec = [1, 0, 0]
expected_quat = np.array([0.4794255, 0, 0, 0.8775826])
result = Rotation.from_rotvec(rotvec)
assert_array_almost_equal(result.as_quat(), expected_quat)
def test_from_2d_single_rotvec():
rotvec = [[1, 0, 0]]
expected_quat = np.array([[0.4794255, 0, 0, 0.8775826]])
result = Rotation.from_rotvec(rotvec)
assert_array_almost_equal(result.as_quat(), expected_quat)
def test_from_generic_rotvec():
rotvec = [
[1, 2, 2],
[1, -1, 0.5],
[0, 0, 0]
]
expected_quat = np.array([
[0.3324983, 0.6649967, 0.6649967, 0.0707372],
[0.4544258, -0.4544258, 0.2272129, 0.7316889],
[0, 0, 0, 1]
])
assert_array_almost_equal(
Rotation.from_rotvec(rotvec).as_quat(),
expected_quat)
def test_from_rotvec_small_angle():
rotvec = np.array([
[5e-4 / np.sqrt(3), -5e-4 / np.sqrt(3), 5e-4 / np.sqrt(3)],
[0.2, 0.3, 0.4],
[0, 0, 0]
])
quat = Rotation.from_rotvec(rotvec).as_quat()
# cos(theta/2) ~~ 1 for small theta
assert_allclose(quat[0, 3], 1)
# sin(theta/2) / theta ~~ 0.5 for small theta
assert_allclose(quat[0, :3], rotvec[0] * 0.5)
assert_allclose(quat[1, 3], 0.9639685)
assert_allclose(
quat[1, :3],
np.array([
0.09879603932153465,
0.14819405898230198,
0.19759207864306931
]))
assert_equal(quat[2], np.array([0, 0, 0, 1]))
def test_degrees_from_rotvec():
rotvec1 = [1.0 / np.cbrt(3), 1.0 / np.cbrt(3), 1.0 / np.cbrt(3)]
rot1 = Rotation.from_rotvec(rotvec1, degrees=True)
quat1 = rot1.as_quat()
rotvec2 = np.deg2rad(rotvec1)
rot2 = Rotation.from_rotvec(rotvec2)
quat2 = rot2.as_quat()
assert_allclose(quat1, quat2)
def test_malformed_1d_from_rotvec():
with pytest.raises(ValueError, match='Expected `rot_vec` to have shape'):
Rotation.from_rotvec([1, 2])
def test_malformed_2d_from_rotvec():
with pytest.raises(ValueError, match='Expected `rot_vec` to have shape'):
Rotation.from_rotvec([
[1, 2, 3, 4],
[5, 6, 7, 8]
])
def test_as_generic_rotvec():
quat = np.array([
[1, 2, -1, 0.5],
[1, -1, 1, 0.0003],
[0, 0, 0, 1]
])
quat /= np.linalg.norm(quat, axis=1)[:, None]
rotvec = Rotation.from_quat(quat).as_rotvec()
angle = np.linalg.norm(rotvec, axis=1)
assert_allclose(quat[:, 3], np.cos(angle/2))
assert_allclose(np.cross(rotvec, quat[:, :3]), np.zeros((3, 3)))
def test_as_rotvec_single_1d_input():
quat = np.array([1, 2, -3, 2])
expected_rotvec = np.array([0.5772381, 1.1544763, -1.7317144])
actual_rotvec = Rotation.from_quat(quat).as_rotvec()
assert_equal(actual_rotvec.shape, (3,))
assert_allclose(actual_rotvec, expected_rotvec)
def test_as_rotvec_single_2d_input():
quat = np.array([[1, 2, -3, 2]])
expected_rotvec = np.array([[0.5772381, 1.1544763, -1.7317144]])
actual_rotvec = Rotation.from_quat(quat).as_rotvec()
assert_equal(actual_rotvec.shape, (1, 3))
assert_allclose(actual_rotvec, expected_rotvec)
def test_as_rotvec_degrees():
# x->y, y->z, z->x
mat = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
rot = Rotation.from_matrix(mat)
rotvec = rot.as_rotvec(degrees=True)
angle = np.linalg.norm(rotvec)
assert_allclose(angle, 120.0)
assert_allclose(rotvec[0], rotvec[1])
assert_allclose(rotvec[1], rotvec[2])
def test_rotvec_calc_pipeline():
# Include small angles
rotvec = np.array([
[0, 0, 0],
[1, -1, 2],
[-3e-4, 3.5e-4, 7.5e-5]
])
assert_allclose(Rotation.from_rotvec(rotvec).as_rotvec(), rotvec)
assert_allclose(Rotation.from_rotvec(rotvec, degrees=True).as_rotvec(degrees=True), rotvec)
def test_from_1d_single_mrp():
mrp = [0, 0, 1.0]
expected_quat = np.array([0, 0, 1, 0])
result = Rotation.from_mrp(mrp)
assert_array_almost_equal(result.as_quat(), expected_quat)
def test_from_2d_single_mrp():
mrp = [[0, 0, 1.0]]
expected_quat = np.array([[0, 0, 1, 0]])
result = Rotation.from_mrp(mrp)
assert_array_almost_equal(result.as_quat(), expected_quat)
def test_from_generic_mrp():
mrp = np.array([
[1, 2, 2],
[1, -1, 0.5],
[0, 0, 0]])
expected_quat = np.array([
[0.2, 0.4, 0.4, -0.8],
[0.61538462, -0.61538462, 0.30769231, -0.38461538],
[0, 0, 0, 1]])
assert_array_almost_equal(Rotation.from_mrp(mrp).as_quat(), expected_quat)
def test_malformed_1d_from_mrp():
with pytest.raises(ValueError, match='Expected `mrp` to have shape'):
Rotation.from_mrp([1, 2])
def test_malformed_2d_from_mrp():
with pytest.raises(ValueError, match='Expected `mrp` to have shape'):
Rotation.from_mrp([
[1, 2, 3, 4],
[5, 6, 7, 8]
])
def test_as_generic_mrp():
quat = np.array([
[1, 2, -1, 0.5],
[1, -1, 1, 0.0003],
[0, 0, 0, 1]])
quat /= np.linalg.norm(quat, axis=1)[:, None]
expected_mrp = np.array([
[0.33333333, 0.66666667, -0.33333333],
[0.57725028, -0.57725028, 0.57725028],
[0, 0, 0]])
assert_array_almost_equal(Rotation.from_quat(quat).as_mrp(), expected_mrp)
def test_past_180_degree_rotation():
    # Ensure that a rotation past 180 degrees is returned in MRPs as the
    # equivalent rotation of magnitude below 180 degrees;
    # here 270 degrees should be returned as -90 degrees.
expected_mrp = np.array([-np.tan(np.pi/2/4), 0.0, 0])
assert_array_almost_equal(Rotation.from_euler('xyz', [270, 0, 0], degrees=True).as_mrp(), expected_mrp)
def test_as_mrp_single_1d_input():
quat = np.array([1, 2, -3, 2])
expected_mrp = np.array([0.16018862, 0.32037724, -0.48056586])
actual_mrp = Rotation.from_quat(quat).as_mrp()
assert_equal(actual_mrp.shape, (3,))
assert_allclose(actual_mrp, expected_mrp)
def test_as_mrp_single_2d_input():
quat = np.array([[1, 2, -3, 2]])
expected_mrp = np.array([[0.16018862, 0.32037724, -0.48056586]])
actual_mrp = Rotation.from_quat(quat).as_mrp()
assert_equal(actual_mrp.shape, (1, 3))
assert_allclose(actual_mrp, expected_mrp)
def test_mrp_calc_pipeline():
actual_mrp = np.array([
[0, 0, 0],
[1, -1, 2],
[0.41421356, 0, 0],
[0.1, 0.2, 0.1]])
expected_mrp = np.array([
[0, 0, 0],
[-0.16666667, 0.16666667, -0.33333333],
[0.41421356, 0, 0],
[0.1, 0.2, 0.1]])
assert_allclose(Rotation.from_mrp(actual_mrp).as_mrp(), expected_mrp)
def test_from_euler_single_rotation():
quat = Rotation.from_euler('z', 90, degrees=True).as_quat()
expected_quat = np.array([0, 0, 1, 1]) / np.sqrt(2)
assert_allclose(quat, expected_quat)
def test_single_intrinsic_extrinsic_rotation():
extrinsic = Rotation.from_euler('z', 90, degrees=True).as_matrix()
intrinsic = Rotation.from_euler('Z', 90, degrees=True).as_matrix()
assert_allclose(extrinsic, intrinsic)
def test_from_euler_rotation_order():
# Intrinsic rotation is same as extrinsic with order reversed
rnd = np.random.RandomState(0)
a = rnd.randint(low=0, high=180, size=(6, 3))
b = a[:, ::-1]
x = Rotation.from_euler('xyz', a, degrees=True).as_quat()
y = Rotation.from_euler('ZYX', b, degrees=True).as_quat()
assert_allclose(x, y)
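The ordering rule tested above can be stated concretely: an extrinsic `'xyz'` rotation with angles `(a, b, c)` equals the intrinsic `'ZYX'` rotation with the same angles in reversed order (the angle values here are illustrative).

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Extrinsic rotations compose about fixed axes, intrinsic about body
# axes; reversing both the axis sequence and the angle order gives the
# same composed rotation.
ext = Rotation.from_euler('xyz', [30, 40, 50], degrees=True)
intr = Rotation.from_euler('ZYX', [50, 40, 30], degrees=True)
```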
def test_from_euler_elementary_extrinsic_rotation():
# Simple test to check if extrinsic rotations are implemented correctly
mat = Rotation.from_euler('zx', [90, 90], degrees=True).as_matrix()
expected_mat = np.array([
[0, -1, 0],
[0, 0, -1],
[1, 0, 0]
])
assert_array_almost_equal(mat, expected_mat)
def test_from_euler_intrinsic_rotation_312():
angles = [
[30, 60, 45],
[30, 60, 30],
[45, 30, 60]
]
mat = Rotation.from_euler('ZXY', angles, degrees=True).as_matrix()
assert_array_almost_equal(mat[0], np.array([
[0.3061862, -0.2500000, 0.9185587],
[0.8838835, 0.4330127, -0.1767767],
[-0.3535534, 0.8660254, 0.3535534]
]))
assert_array_almost_equal(mat[1], np.array([
[0.5334936, -0.2500000, 0.8080127],
[0.8080127, 0.4330127, -0.3995191],
[-0.2500000, 0.8660254, 0.4330127]
]))
assert_array_almost_equal(mat[2], np.array([
[0.0473672, -0.6123725, 0.7891491],
[0.6597396, 0.6123725, 0.4355958],
[-0.7500000, 0.5000000, 0.4330127]
]))
def test_from_euler_intrinsic_rotation_313():
angles = [
[30, 60, 45],
[30, 60, 30],
[45, 30, 60]
]
mat = Rotation.from_euler('ZXZ', angles, degrees=True).as_matrix()
assert_array_almost_equal(mat[0], np.array([
[0.43559574, -0.78914913, 0.4330127],
[0.65973961, -0.04736717, -0.750000],
[0.61237244, 0.61237244, 0.500000]
]))
assert_array_almost_equal(mat[1], np.array([
[0.6250000, -0.64951905, 0.4330127],
[0.64951905, 0.1250000, -0.750000],
[0.4330127, 0.750000, 0.500000]
]))
assert_array_almost_equal(mat[2], np.array([
[-0.1767767, -0.91855865, 0.35355339],
[0.88388348, -0.30618622, -0.35355339],
[0.4330127, 0.25000000, 0.8660254]
]))
def test_from_euler_extrinsic_rotation_312():
angles = [
[30, 60, 45],
[30, 60, 30],
[45, 30, 60]
]
mat = Rotation.from_euler('zxy', angles, degrees=True).as_matrix()
assert_array_almost_equal(mat[0], np.array([
[0.91855865, 0.1767767, 0.35355339],
[0.25000000, 0.4330127, -0.8660254],
[-0.30618622, 0.88388348, 0.35355339]
]))
assert_array_almost_equal(mat[1], np.array([
[0.96650635, -0.0580127, 0.2500000],
[0.25000000, 0.4330127, -0.8660254],
[-0.0580127, 0.89951905, 0.4330127]
]))
assert_array_almost_equal(mat[2], np.array([
[0.65973961, -0.04736717, 0.7500000],
[0.61237244, 0.61237244, -0.5000000],
[-0.43559574, 0.78914913, 0.4330127]
]))
def test_from_euler_extrinsic_rotation_313():
angles = [
[30, 60, 45],
[30, 60, 30],
[45, 30, 60]
]
mat = Rotation.from_euler('zxz', angles, degrees=True).as_matrix()
assert_array_almost_equal(mat[0], np.array([
[0.43559574, -0.65973961, 0.61237244],
[0.78914913, -0.04736717, -0.61237244],
[0.4330127, 0.75000000, 0.500000]
]))
assert_array_almost_equal(mat[1], np.array([
[0.62500000, -0.64951905, 0.4330127],
[0.64951905, 0.12500000, -0.750000],
[0.4330127, 0.75000000, 0.500000]
]))
assert_array_almost_equal(mat[2], np.array([
[-0.1767767, -0.88388348, 0.4330127],
[0.91855865, -0.30618622, -0.250000],
[0.35355339, 0.35355339, 0.8660254]
]))
def test_as_euler_asymmetric_axes():
# helper function for mean error tests
def test_stats(error, mean_max, rms_max):
mean = np.mean(error, axis=0)
std = np.std(error, axis=0)
rms = np.hypot(mean, std)
assert np.all(np.abs(mean) < mean_max)
assert np.all(rms < rms_max)
rnd = np.random.RandomState(0)
n = 1000
angles = np.empty((n, 3))
angles[:, 0] = rnd.uniform(low=-np.pi, high=np.pi, size=(n,))
angles[:, 1] = rnd.uniform(low=-np.pi / 2, high=np.pi / 2, size=(n,))
angles[:, 2] = rnd.uniform(low=-np.pi, high=np.pi, size=(n,))
for seq_tuple in permutations('xyz'):
# Extrinsic rotations
seq = ''.join(seq_tuple)
rotation = Rotation.from_euler(seq, angles)
angles_quat = rotation.as_euler(seq)
angles_mat = rotation._as_euler_from_matrix(seq)
assert_allclose(angles, angles_quat, atol=0, rtol=1e-12)
assert_allclose(angles, angles_mat, atol=0, rtol=1e-12)
test_stats(angles_quat - angles, 1e-15, 1e-14)
test_stats(angles_mat - angles, 1e-15, 1e-14)
# Intrinsic rotations
seq = seq.upper()
rotation = Rotation.from_euler(seq, angles)
angles_quat = rotation.as_euler(seq)
angles_mat = rotation._as_euler_from_matrix(seq)
assert_allclose(angles, angles_quat, atol=0, rtol=1e-12)
assert_allclose(angles, angles_mat, atol=0, rtol=1e-12)
test_stats(angles_quat - angles, 1e-15, 1e-14)
test_stats(angles_mat - angles, 1e-15, 1e-14)
def test_as_euler_symmetric_axes():
# helper function for mean error tests
def test_stats(error, mean_max, rms_max):
mean = np.mean(error, axis=0)
std = np.std(error, axis=0)
rms = np.hypot(mean, std)
assert np.all(np.abs(mean) < mean_max)
assert np.all(rms < rms_max)
rnd = np.random.RandomState(0)
n = 1000
angles = np.empty((n, 3))
angles[:, 0] = rnd.uniform(low=-np.pi, high=np.pi, size=(n,))
angles[:, 1] = rnd.uniform(low=0, high=np.pi, size=(n,))
angles[:, 2] = rnd.uniform(low=-np.pi, high=np.pi, size=(n,))
for seq_tuple in permutations('xyz'):
# Extrinsic rotations
seq = ''.join([seq_tuple[0], seq_tuple[1], seq_tuple[0]])
rotation = Rotation.from_euler(seq, angles)
angles_quat = rotation.as_euler(seq)
angles_mat = rotation._as_euler_from_matrix(seq)
assert_allclose(angles, angles_quat, atol=0, rtol=1e-13)
assert_allclose(angles, angles_mat, atol=0, rtol=1e-9)
test_stats(angles_quat - angles, 1e-16, 1e-14)
test_stats(angles_mat - angles, 1e-15, 1e-13)
# Intrinsic rotations
seq = seq.upper()
rotation = Rotation.from_euler(seq, angles)
angles_quat = rotation.as_euler(seq)
angles_mat = rotation._as_euler_from_matrix(seq)
assert_allclose(angles, angles_quat, atol=0, rtol=1e-13)
assert_allclose(angles, angles_mat, atol=0, rtol=1e-9)
test_stats(angles_quat - angles, 1e-16, 1e-14)
test_stats(angles_mat - angles, 1e-15, 1e-13)
def test_as_euler_degenerate_asymmetric_axes():
# Since we cannot check for angle equality, we check for rotation matrix
# equality
angles = np.array([
[45, 90, 35],
[35, -90, 20],
[35, 90, 25],
[25, -90, 15]])
with pytest.warns(UserWarning, match="Gimbal lock"):
for seq_tuple in permutations('xyz'):
# Extrinsic rotations
seq = ''.join(seq_tuple)
rotation = Rotation.from_euler(seq, angles, degrees=True)
mat_expected = rotation.as_matrix()
angle_estimates = rotation.as_euler(seq, degrees=True)
mat_estimated = Rotation.from_euler(
seq, angle_estimates, degrees=True
).as_matrix()
assert_array_almost_equal(mat_expected, mat_estimated)
# Intrinsic rotations
seq = seq.upper()
rotation = Rotation.from_euler(seq, angles, degrees=True)
mat_expected = rotation.as_matrix()
angle_estimates = rotation.as_euler(seq, degrees=True)
mat_estimated = Rotation.from_euler(
seq, angle_estimates, degrees=True
).as_matrix()
assert_array_almost_equal(mat_expected, mat_estimated)
def test_as_euler_degenerate_symmetric_axes():
# Since we cannot check for angle equality, we check for rotation matrix
# equality
angles = np.array([
[15, 0, 60],
[35, 0, 75],
[60, 180, 35],
[15, -180, 25]])
with pytest.warns(UserWarning, match="Gimbal lock"):
for seq_tuple in permutations('xyz'):
# Extrinsic rotations
seq = ''.join([seq_tuple[0], seq_tuple[1], seq_tuple[0]])
rotation = Rotation.from_euler(seq, angles, degrees=True)
mat_expected = rotation.as_matrix()
angle_estimates = rotation.as_euler(seq, degrees=True)
mat_estimated = Rotation.from_euler(
seq, angle_estimates, degrees=True
).as_matrix()
assert_array_almost_equal(mat_expected, mat_estimated)
# Intrinsic rotations
seq = seq.upper()
rotation = Rotation.from_euler(seq, angles, degrees=True)
mat_expected = rotation.as_matrix()
angle_estimates = rotation.as_euler(seq, degrees=True)
mat_estimated = Rotation.from_euler(
seq, angle_estimates, degrees=True
).as_matrix()
assert_array_almost_equal(mat_expected, mat_estimated)
def test_as_euler_degenerate_compare_algorithms():
# this test makes sure that both algorithms are doing the same choices
# in degenerate cases
# asymmetric axes
angles = np.array([
[45, 90, 35],
[35, -90, 20],
[35, 90, 25],
[25, -90, 15]])
with pytest.warns(UserWarning, match="Gimbal lock"):
for seq_tuple in permutations('xyz'):
# Extrinsic rotations
seq = ''.join(seq_tuple)
rot = Rotation.from_euler(seq, angles, degrees=True)
estimates_matrix = rot._as_euler_from_matrix(seq, degrees=True)
estimates_quat = rot.as_euler(seq, degrees=True)
assert_allclose(estimates_matrix[:, [0, 2]],
estimates_quat[:, [0, 2]], atol=0, rtol=1e-12)
assert_allclose(estimates_matrix[:, 1], estimates_quat[:, 1],
atol=0, rtol=1e-7)
# Intrinsic rotations
seq = seq.upper()
rot = Rotation.from_euler(seq, angles, degrees=True)
estimates_matrix = rot._as_euler_from_matrix(seq, degrees=True)
estimates_quat = rot.as_euler(seq, degrees=True)
assert_allclose(estimates_matrix[:, [0, 2]],
estimates_quat[:, [0, 2]], atol=0, rtol=1e-12)
assert_allclose(estimates_matrix[:, 1], estimates_quat[:, 1],
atol=0, rtol=1e-7)
# symmetric axes
# Absolute error tolerance must be looser to directly compare the results
# from both algorithms, because of numerical loss of precision for the
# method _as_euler_from_matrix near a zero angle value
angles = np.array([
[15, 0, 60],
[35, 0, 75],
[60, 180, 35],
[15, -180, 25]])
idx = angles[:, 1] == 0 # find problematic angles indices
with pytest.warns(UserWarning, match="Gimbal lock"):
for seq_tuple in permutations('xyz'):
# Extrinsic rotations
seq = ''.join([seq_tuple[0], seq_tuple[1], seq_tuple[0]])
rot = Rotation.from_euler(seq, angles, degrees=True)
estimates_matrix = rot._as_euler_from_matrix(seq, degrees=True)
estimates_quat = rot.as_euler(seq, degrees=True)
assert_allclose(estimates_matrix[:, [0, 2]],
estimates_quat[:, [0, 2]], atol=0, rtol=1e-12)
assert_allclose(estimates_matrix[~idx, 1], estimates_quat[~idx, 1],
atol=0, rtol=1e-7)
assert_allclose(estimates_matrix[idx, 1], estimates_quat[idx, 1],
atol=1e-6) # problematic, angles[1] = 0
# Intrinsic rotations
seq = seq.upper()
rot = Rotation.from_euler(seq, angles, degrees=True)
estimates_matrix = rot._as_euler_from_matrix(seq, degrees=True)
estimates_quat = rot.as_euler(seq, degrees=True)
assert_allclose(estimates_matrix[:, [0, 2]],
estimates_quat[:, [0, 2]], atol=0, rtol=1e-12)
assert_allclose(estimates_matrix[~idx, 1], estimates_quat[~idx, 1],
atol=0, rtol=1e-7)
assert_allclose(estimates_matrix[idx, 1], estimates_quat[idx, 1],
atol=1e-6) # problematic, angles[1] = 0
def test_inv():
rnd = np.random.RandomState(0)
n = 10
p = Rotation.random(num=n, random_state=rnd)
q = p.inv()
p_mat = p.as_matrix()
q_mat = q.as_matrix()
result1 = np.einsum('...ij,...jk->...ik', p_mat, q_mat)
result2 = np.einsum('...ij,...jk->...ik', q_mat, p_mat)
eye3d = np.empty((n, 3, 3))
eye3d[:] = np.eye(3)
assert_array_almost_equal(result1, eye3d)
assert_array_almost_equal(result2, eye3d)
def test_inv_single_rotation():
rnd = np.random.RandomState(0)
p = Rotation.random(random_state=rnd)
q = p.inv()
p_mat = p.as_matrix()
q_mat = q.as_matrix()
res1 = np.dot(p_mat, q_mat)
res2 = np.dot(q_mat, p_mat)
eye = np.eye(3)
assert_array_almost_equal(res1, eye)
assert_array_almost_equal(res2, eye)
x = Rotation.random(num=1, random_state=rnd)
y = x.inv()
x_matrix = x.as_matrix()
y_matrix = y.as_matrix()
result1 = np.einsum('...ij,...jk->...ik', x_matrix, y_matrix)
result2 = np.einsum('...ij,...jk->...ik', y_matrix, x_matrix)
eye3d = np.empty((1, 3, 3))
eye3d[:] = np.eye(3)
assert_array_almost_equal(result1, eye3d)
assert_array_almost_equal(result2, eye3d)
def test_identity_magnitude():
n = 10
assert_allclose(Rotation.identity(n).magnitude(), 0)
assert_allclose(Rotation.identity(n).inv().magnitude(), 0)
def test_single_identity_magnitude():
assert Rotation.identity().magnitude() == 0
assert Rotation.identity().inv().magnitude() == 0
def test_identity_invariance():
n = 10
p = Rotation.random(n, random_state=0)
result = p * Rotation.identity(n)
assert_array_almost_equal(p.as_quat(), result.as_quat())
result = result * p.inv()
assert_array_almost_equal(result.magnitude(), np.zeros(n))
def test_single_identity_invariance():
n = 10
p = Rotation.random(n, random_state=0)
result = p * Rotation.identity()
assert_array_almost_equal(p.as_quat(), result.as_quat())
result = result * p.inv()
assert_array_almost_equal(result.magnitude(), np.zeros(n))
def test_magnitude():
r = Rotation.from_quat(np.eye(4))
result = r.magnitude()
assert_array_almost_equal(result, [np.pi, np.pi, np.pi, 0])
r = Rotation.from_quat(-np.eye(4))
result = r.magnitude()
assert_array_almost_equal(result, [np.pi, np.pi, np.pi, 0])
def test_magnitude_single_rotation():
r = Rotation.from_quat(np.eye(4))
result1 = r[0].magnitude()
assert_allclose(result1, np.pi)
result2 = r[3].magnitude()
assert_allclose(result2, 0)
def test_mean():
axes = np.concatenate((-np.eye(3), np.eye(3)))
thetas = np.linspace(0, np.pi / 2, 100)
for t in thetas:
r = Rotation.from_rotvec(t * axes)
assert_allclose(r.mean().magnitude(), 0, atol=1E-10)
def test_weighted_mean():
# test that doubling a weight is equivalent to including a rotation twice.
axes = np.array([[0, 0, 0], [1, 0, 0], [1, 0, 0]])
thetas = np.linspace(0, np.pi / 2, 100)
for t in thetas:
rw = Rotation.from_rotvec(t * axes[:2])
mw = rw.mean(weights=[1, 2])
r = Rotation.from_rotvec(t * axes)
m = r.mean()
assert_allclose((m * mw.inv()).magnitude(), 0, atol=1E-10)
def test_mean_invalid_weights():
with pytest.raises(ValueError, match="non-negative"):
r = Rotation.from_quat(np.eye(4))
r.mean(weights=-np.ones(4))
def test_reduction_no_indices():
result = Rotation.identity().reduce(return_indices=False)
assert isinstance(result, Rotation)
def test_reduction_none_indices():
result = Rotation.identity().reduce(return_indices=True)
assert type(result) == tuple
assert len(result) == 3
reduced, left_best, right_best = result
assert left_best is None
assert right_best is None
def test_reduction_scalar_calculation():
rng = np.random.RandomState(0)
l = Rotation.random(5, random_state=rng)
r = Rotation.random(10, random_state=rng)
p = Rotation.random(7, random_state=rng)
reduced, left_best, right_best = p.reduce(l, r, return_indices=True)
# Loop implementation of the vectorized calculation in Rotation.reduce
scalars = np.zeros((len(l), len(p), len(r)))
for i, li in enumerate(l):
for j, pj in enumerate(p):
for k, rk in enumerate(r):
scalars[i, j, k] = np.abs((li * pj * rk).as_quat()[3])
scalars = np.reshape(np.moveaxis(scalars, 1, 0), (scalars.shape[1], -1))
max_ind = np.argmax(np.reshape(scalars, (len(p), -1)), axis=1)
left_best_check = max_ind // len(r)
right_best_check = max_ind % len(r)
assert (left_best == left_best_check).all()
assert (right_best == right_best_check).all()
reduced_check = l[left_best_check] * p * r[right_best_check]
mag = (reduced.inv() * reduced_check).magnitude()
assert_array_almost_equal(mag, np.zeros(len(p)))
def test_apply_single_rotation_single_point():
mat = np.array([
[0, -1, 0],
[1, 0, 0],
[0, 0, 1]
])
r_1d = Rotation.from_matrix(mat)
r_2d = Rotation.from_matrix(np.expand_dims(mat, axis=0))
v_1d = np.array([1, 2, 3])
v_2d = np.expand_dims(v_1d, axis=0)
v1d_rotated = np.array([-2, 1, 3])
v2d_rotated = np.expand_dims(v1d_rotated, axis=0)
assert_allclose(r_1d.apply(v_1d), v1d_rotated)
assert_allclose(r_1d.apply(v_2d), v2d_rotated)
assert_allclose(r_2d.apply(v_1d), v2d_rotated)
assert_allclose(r_2d.apply(v_2d), v2d_rotated)
v1d_inverse = np.array([2, -1, 3])
v2d_inverse = np.expand_dims(v1d_inverse, axis=0)
assert_allclose(r_1d.apply(v_1d, inverse=True), v1d_inverse)
assert_allclose(r_1d.apply(v_2d, inverse=True), v2d_inverse)
assert_allclose(r_2d.apply(v_1d, inverse=True), v2d_inverse)
assert_allclose(r_2d.apply(v_2d, inverse=True), v2d_inverse)
def test_apply_single_rotation_multiple_points():
mat = np.array([
[0, -1, 0],
[1, 0, 0],
[0, 0, 1]
])
r1 = Rotation.from_matrix(mat)
r2 = Rotation.from_matrix(np.expand_dims(mat, axis=0))
v = np.array([[1, 2, 3], [4, 5, 6]])
v_rotated = np.array([[-2, 1, 3], [-5, 4, 6]])
assert_allclose(r1.apply(v), v_rotated)
assert_allclose(r2.apply(v), v_rotated)
v_inverse = np.array([[2, -1, 3], [5, -4, 6]])
assert_allclose(r1.apply(v, inverse=True), v_inverse)
assert_allclose(r2.apply(v, inverse=True), v_inverse)
def test_apply_multiple_rotations_single_point():
mat = np.empty((2, 3, 3))
mat[0] = np.array([
[0, -1, 0],
[1, 0, 0],
[0, 0, 1]
])
mat[1] = np.array([
[1, 0, 0],
[0, 0, -1],
[0, 1, 0]
])
r = Rotation.from_matrix(mat)
v1 = np.array([1, 2, 3])
v2 = np.expand_dims(v1, axis=0)
v_rotated = np.array([[-2, 1, 3], [1, -3, 2]])
assert_allclose(r.apply(v1), v_rotated)
assert_allclose(r.apply(v2), v_rotated)
v_inverse = np.array([[2, -1, 3], [1, 3, -2]])
assert_allclose(r.apply(v1, inverse=True), v_inverse)
assert_allclose(r.apply(v2, inverse=True), v_inverse)
def test_apply_multiple_rotations_multiple_points():
mat = np.empty((2, 3, 3))
mat[0] = np.array([
[0, -1, 0],
[1, 0, 0],
[0, 0, 1]
])
mat[1] = np.array([
[1, 0, 0],
[0, 0, -1],
[0, 1, 0]
])
r = Rotation.from_matrix(mat)
v = np.array([[1, 2, 3], [4, 5, 6]])
v_rotated = np.array([[-2, 1, 3], [4, -6, 5]])
assert_allclose(r.apply(v), v_rotated)
v_inverse = np.array([[2, -1, 3], [4, 6, -5]])
assert_allclose(r.apply(v, inverse=True), v_inverse)
def test_getitem():
mat = np.empty((2, 3, 3))
mat[0] = np.array([
[0, -1, 0],
[1, 0, 0],
[0, 0, 1]
])
mat[1] = np.array([
[1, 0, 0],
[0, 0, -1],
[0, 1, 0]
])
r = Rotation.from_matrix(mat)
assert_allclose(r[0].as_matrix(), mat[0], atol=1e-15)
assert_allclose(r[1].as_matrix(), mat[1], atol=1e-15)
assert_allclose(r[:-1].as_matrix(), np.expand_dims(mat[0], axis=0), atol=1e-15)
def test_getitem_single():
with pytest.raises(TypeError, match='not subscriptable'):
Rotation.identity()[0]
def test_setitem_single():
r = Rotation.identity()
with pytest.raises(TypeError, match='not subscriptable'):
r[0] = Rotation.identity()
def test_setitem_slice():
rng = np.random.RandomState(seed=0)
r1 = Rotation.random(10, random_state=rng)
r2 = Rotation.random(5, random_state=rng)
r1[1:6] = r2
assert_equal(r1[1:6].as_quat(), r2.as_quat())
def test_setitem_integer():
rng = np.random.RandomState(seed=0)
r1 = Rotation.random(10, random_state=rng)
r2 = Rotation.random(random_state=rng)
r1[1] = r2
assert_equal(r1[1].as_quat(), r2.as_quat())
def test_setitem_wrong_type():
r = Rotation.random(10, random_state=0)
with pytest.raises(TypeError, match='Rotation object'):
r[0] = 1
def test_n_rotations():
mat = np.empty((2, 3, 3))
mat[0] = np.array([
[0, -1, 0],
[1, 0, 0],
[0, 0, 1]
])
mat[1] = np.array([
[1, 0, 0],
[0, 0, -1],
[0, 1, 0]
])
r = Rotation.from_matrix(mat)
assert_equal(len(r), 2)
assert_equal(len(r[:-1]), 1)
def test_align_vectors_no_rotation():
x = np.array([[1, 2, 3], [4, 5, 6]])
y = x.copy()
r, rmsd = Rotation.align_vectors(x, y)
assert_array_almost_equal(r.as_matrix(), np.eye(3))
assert_allclose(rmsd, 0, atol=1e-6)
def test_align_vectors_no_noise():
rnd = np.random.RandomState(0)
c = Rotation.random(random_state=rnd)
b = rnd.normal(size=(5, 3))
a = c.apply(b)
est, rmsd = Rotation.align_vectors(a, b)
assert_allclose(c.as_quat(), est.as_quat())
assert_allclose(rmsd, 0, atol=1e-7)
def test_align_vectors_improper_rotation():
# Tests correct logic for issue #10444
x = np.array([[0.89299824, -0.44372674, 0.0752378],
[0.60221789, -0.47564102, -0.6411702]])
y = np.array([[0.02386536, -0.82176463, 0.5693271],
[-0.27654929, -0.95191427, -0.1318321]])
est, rmsd = Rotation.align_vectors(x, y)
assert_allclose(x, est.apply(y), atol=1e-6)
assert_allclose(rmsd, 0, atol=1e-7)
def test_align_vectors_scaled_weights():
rng = np.random.RandomState(0)
c = Rotation.random(random_state=rng)
b = rng.normal(size=(5, 3))
a = c.apply(b)
est1, rmsd1, cov1 = Rotation.align_vectors(a, b, np.ones(5), True)
est2, rmsd2, cov2 = Rotation.align_vectors(a, b, 2 * np.ones(5), True)
assert_allclose(est1.as_matrix(), est2.as_matrix())
assert_allclose(np.sqrt(2) * rmsd1, rmsd2)
assert_allclose(cov1, cov2)
def test_align_vectors_noise():
rnd = np.random.RandomState(0)
n_vectors = 100
rot = Rotation.random(random_state=rnd)
vectors = rnd.normal(size=(n_vectors, 3))
result = rot.apply(vectors)
# The paper adds noise as independently distributed angular errors
sigma = np.deg2rad(1)
tolerance = 1.5 * sigma
noise = Rotation.from_rotvec(
rnd.normal(
size=(n_vectors, 3),
scale=sigma
)
)
# Attitude errors must preserve norm. Hence apply individual random
# rotations to each vector.
noisy_result = noise.apply(result)
est, rmsd, cov = Rotation.align_vectors(noisy_result, vectors,
return_sensitivity=True)
# Use rotation composition to measure closeness
error_vector = (rot * est.inv()).as_rotvec()
assert_allclose(error_vector[0], 0, atol=tolerance)
assert_allclose(error_vector[1], 0, atol=tolerance)
assert_allclose(error_vector[2], 0, atol=tolerance)
# Check error bounds using covariance matrix
cov *= sigma
assert_allclose(cov[0, 0], 0, atol=tolerance)
assert_allclose(cov[1, 1], 0, atol=tolerance)
assert_allclose(cov[2, 2], 0, atol=tolerance)
assert_allclose(rmsd, np.sum((noisy_result - est.apply(vectors))**2)**0.5)
def test_align_vectors_single_vector():
with pytest.warns(UserWarning, match="Optimal rotation is not"):
r_estimate, rmsd = Rotation.align_vectors([[1, -1, 1]], [[1, 1, -1]])
assert_allclose(rmsd, 0, atol=1e-16)
def test_align_vectors_invalid_input():
with pytest.raises(ValueError, match="Expected input `a` to have shape"):
Rotation.align_vectors([1, 2, 3], [[1, 2, 3]])
with pytest.raises(ValueError, match="Expected input `b` to have shape"):
Rotation.align_vectors([[1, 2, 3]], [1, 2, 3])
with pytest.raises(ValueError, match="Expected inputs `a` and `b` "
"to have same shapes"):
Rotation.align_vectors([[1, 2, 3],[4, 5, 6]], [[1, 2, 3]])
with pytest.raises(ValueError,
match="Expected `weights` to be 1 dimensional"):
Rotation.align_vectors([[1, 2, 3]], [[1, 2, 3]], weights=[[1]])
with pytest.raises(ValueError,
match="Expected `weights` to have number of values"):
Rotation.align_vectors([[1, 2, 3]], [[1, 2, 3]], weights=[1, 2])
with pytest.raises(ValueError,
match="`weights` may not contain negative values"):
Rotation.align_vectors([[1, 2, 3]], [[1, 2, 3]], weights=[-1])
def test_random_rotation_shape():
rnd = np.random.RandomState(0)
assert_equal(Rotation.random(random_state=rnd).as_quat().shape, (4,))
assert_equal(Rotation.random(None, random_state=rnd).as_quat().shape, (4,))
assert_equal(Rotation.random(1, random_state=rnd).as_quat().shape, (1, 4))
assert_equal(Rotation.random(5, random_state=rnd).as_quat().shape, (5, 4))
def test_slerp():
rnd = np.random.RandomState(0)
key_rots = Rotation.from_quat(rnd.uniform(size=(5, 4)))
key_quats = key_rots.as_quat()
key_times = [0, 1, 2, 3, 4]
interpolator = Slerp(key_times, key_rots)
times = [0, 0.5, 0.25, 1, 1.5, 2, 2.75, 3, 3.25, 3.60, 4]
interp_rots = interpolator(times)
interp_quats = interp_rots.as_quat()
# Dot products are affected by sign of quaternions
interp_quats[interp_quats[:, -1] < 0] *= -1
# To check for quaternion equality, perform the same sign fix on the keys
key_quats[key_quats[:, -1] < 0] *= -1
# Equality at keyframes, including both endpoints
assert_allclose(interp_quats[0], key_quats[0])
assert_allclose(interp_quats[3], key_quats[1])
assert_allclose(interp_quats[5], key_quats[2])
assert_allclose(interp_quats[7], key_quats[3])
assert_allclose(interp_quats[10], key_quats[4])
# Constant angular velocity between keyframes. Check by equating
# cos(theta) between quaternion pairs with equal time difference.
cos_theta1 = np.sum(interp_quats[0] * interp_quats[2])
cos_theta2 = np.sum(interp_quats[2] * interp_quats[1])
assert_allclose(cos_theta1, cos_theta2)
cos_theta4 = np.sum(interp_quats[3] * interp_quats[4])
cos_theta5 = np.sum(interp_quats[4] * interp_quats[5])
assert_allclose(cos_theta4, cos_theta5)
# theta1: 0 -> 0.25, theta3 : 0.5 -> 1
# Use double angle formula for double the time difference
cos_theta3 = np.sum(interp_quats[1] * interp_quats[3])
assert_allclose(cos_theta3, 2 * (cos_theta1**2) - 1)
# Miscellaneous checks
assert_equal(len(interp_rots), len(times))
def test_slerp_rot_is_rotation():
with pytest.raises(TypeError, match="must be a `Rotation` instance"):
r = np.array([[1,2,3,4],
[0,0,0,1]])
t = np.array([0, 1])
Slerp(t, r)
def test_slerp_single_rot():
msg = "must be a sequence of at least 2 rotations"
with pytest.raises(ValueError, match=msg):
r = Rotation.from_quat([1, 2, 3, 4])
Slerp([1], r)
def test_slerp_rot_len1():
msg = "must be a sequence of at least 2 rotations"
with pytest.raises(ValueError, match=msg):
r = Rotation.from_quat([[1, 2, 3, 4]])
Slerp([1], r)
def test_slerp_time_dim_mismatch():
with pytest.raises(ValueError,
match="times to be specified in a 1 dimensional array"):
rnd = np.random.RandomState(0)
r = Rotation.from_quat(rnd.uniform(size=(2, 4)))
t = np.array([[1],
[2]])
Slerp(t, r)
def test_slerp_num_rotations_mismatch():
with pytest.raises(ValueError, match="number of rotations to be equal to "
"number of timestamps"):
rnd = np.random.RandomState(0)
r = Rotation.from_quat(rnd.uniform(size=(5, 4)))
t = np.arange(7)
Slerp(t, r)
def test_slerp_equal_times():
with pytest.raises(ValueError, match="strictly increasing order"):
rnd = np.random.RandomState(0)
r = Rotation.from_quat(rnd.uniform(size=(5, 4)))
t = [0, 1, 2, 2, 4]
Slerp(t, r)
def test_slerp_decreasing_times():
with pytest.raises(ValueError, match="strictly increasing order"):
rnd = np.random.RandomState(0)
r = Rotation.from_quat(rnd.uniform(size=(5, 4)))
t = [0, 1, 3, 2, 4]
Slerp(t, r)
def test_slerp_call_time_dim_mismatch():
rnd = np.random.RandomState(0)
r = Rotation.from_quat(rnd.uniform(size=(5, 4)))
t = np.arange(5)
s = Slerp(t, r)
with pytest.raises(ValueError,
match="`times` must be at most 1-dimensional."):
interp_times = np.array([[3.5],
[4.2]])
s(interp_times)
def test_slerp_call_time_out_of_range():
rnd = np.random.RandomState(0)
r = Rotation.from_quat(rnd.uniform(size=(5, 4)))
t = np.arange(5) + 1
s = Slerp(t, r)
with pytest.raises(ValueError, match="times must be within the range"):
s([0, 1, 2])
with pytest.raises(ValueError, match="times must be within the range"):
s([1, 2, 6])
def test_slerp_call_scalar_time():
r = Rotation.from_euler('X', [0, 80], degrees=True)
s = Slerp([0, 1], r)
r_interpolated = s(0.25)
r_interpolated_expected = Rotation.from_euler('X', 20, degrees=True)
delta = r_interpolated * r_interpolated_expected.inv()
assert_allclose(delta.magnitude(), 0, atol=1e-16)
def test_multiplication_stability():
qs = Rotation.random(50, random_state=0)
rs = Rotation.random(1000, random_state=1)
for q in qs:
rs *= q * rs
assert_allclose(np.linalg.norm(rs.as_quat(), axis=1), 1)
def test_rotation_within_numpy_array():
single = Rotation.random(random_state=0)
multiple = Rotation.random(2, random_state=1)
array = np.array(single)
assert_equal(array.shape, ())
array = np.array(multiple)
assert_equal(array.shape, (2,))
assert_allclose(array[0].as_matrix(), multiple[0].as_matrix())
assert_allclose(array[1].as_matrix(), multiple[1].as_matrix())
array = np.array([single])
assert_equal(array.shape, (1,))
assert_equal(array[0], single)
array = np.array([multiple])
assert_equal(array.shape, (1, 2))
assert_allclose(array[0, 0].as_matrix(), multiple[0].as_matrix())
assert_allclose(array[0, 1].as_matrix(), multiple[1].as_matrix())
array = np.array([single, multiple], dtype=object)
assert_equal(array.shape, (2,))
assert_equal(array[0], single)
assert_equal(array[1], multiple)
array = np.array([multiple, multiple, multiple])
assert_equal(array.shape, (3, 2))
def test_pickling():
r = Rotation.from_quat([0, 0, np.sin(np.pi/4), np.cos(np.pi/4)])
pkl = pickle.dumps(r)
unpickled = pickle.loads(pkl)
assert_allclose(r.as_matrix(), unpickled.as_matrix(), atol=1e-15)
def test_deepcopy():
r = Rotation.from_quat([0, 0, np.sin(np.pi/4), np.cos(np.pi/4)])
r1 = copy.deepcopy(r)
assert_allclose(r.as_matrix(), r1.as_matrix(), atol=1e-15)
def test_as_euler_contiguous():
r = Rotation.from_quat([0, 0, 0, 1])
e1 = r.as_euler('xyz') # extrinsic euler rotation
e2 = r.as_euler('XYZ') # intrinsic
assert e1.flags['C_CONTIGUOUS'] is True
assert e2.flags['C_CONTIGUOUS'] is True
assert all(i >= 0 for i in e1.strides)
assert all(i >= 0 for i in e2.strides)
def test_concatenate():
rotation = Rotation.random(10, random_state=0)
sizes = [1, 2, 3, 1, 3]
starts = [0] + list(np.cumsum(sizes))
split = [rotation[i:i + n] for i, n in zip(starts, sizes)]
result = Rotation.concatenate(split)
assert_equal(rotation.as_quat(), result.as_quat())
def test_concatenate_wrong_type():
with pytest.raises(TypeError, match='Rotation objects only'):
Rotation.concatenate([Rotation.identity(), 1, None])
# Regression test for gh-16663
def test_len_and_bool():
rotation_multi_empty = Rotation(np.empty((0, 4)))
rotation_multi_one = Rotation([[0, 0, 0, 1]])
rotation_multi = Rotation([[0, 0, 0, 1], [0, 0, 0, 1]])
rotation_single = Rotation([0, 0, 0, 1])
assert len(rotation_multi_empty) == 0
assert len(rotation_multi_one) == 1
assert len(rotation_multi) == 2
with pytest.raises(TypeError, match="Single rotation has no len()."):
len(rotation_single)
# Rotation should always be truthy. See gh-16663
assert rotation_multi_empty
assert rotation_multi_one
assert rotation_multi
assert rotation_single
# ---- File: scipy-main/scipy/spatial/transform/tests/test_rotation_groups.py ----
import pytest
import numpy as np
from numpy.testing import assert_array_almost_equal
from scipy.spatial.transform import Rotation
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist
from scipy.constants import golden as phi
from scipy.spatial import cKDTree
TOL = 1E-12
NS = range(1, 13)
NAMES = ["I", "O", "T"] + ["C%d" % n for n in NS] + ["D%d" % n for n in NS]
SIZES = [60, 24, 12] + list(NS) + [2 * n for n in NS]
def _calculate_rmsd(P, Q):
"""Calculates the root-mean-square distance between the points of P and Q.
The distance is taken as the minimum over all possible matchings. It is
zero if P and Q are identical and non-zero if not.
"""
distance_matrix = cdist(P, Q, metric='sqeuclidean')
matching = linear_sum_assignment(distance_matrix)
return np.sqrt(distance_matrix[matching].sum())
def _generate_pyramid(n, axis):
thetas = np.linspace(0, 2 * np.pi, n + 1)[:-1]
P = np.vstack([np.zeros(n), np.cos(thetas), np.sin(thetas)]).T
P = np.concatenate((P, [[1, 0, 0]]))
return np.roll(P, axis, axis=1)
def _generate_prism(n, axis):
thetas = np.linspace(0, 2 * np.pi, n + 1)[:-1]
bottom = np.vstack([-np.ones(n), np.cos(thetas), np.sin(thetas)]).T
top = np.vstack([+np.ones(n), np.cos(thetas), np.sin(thetas)]).T
P = np.concatenate((bottom, top))
return np.roll(P, axis, axis=1)
def _generate_icosahedron():
x = np.array([[0, -1, -phi],
[0, -1, +phi],
[0, +1, -phi],
[0, +1, +phi]])
return np.concatenate([np.roll(x, i, axis=1) for i in range(3)])
def _generate_octahedron():
return np.array([[-1, 0, 0], [+1, 0, 0], [0, -1, 0],
[0, +1, 0], [0, 0, -1], [0, 0, +1]])
def _generate_tetrahedron():
return np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]])
@pytest.mark.parametrize("name", [-1, None, True, np.array(['C3'])])
def test_group_type(name):
with pytest.raises(ValueError,
match="must be a string"):
Rotation.create_group(name)
@pytest.mark.parametrize("name", ["Q", " ", "CA", "C ", "DA", "D ", "I2", ""])
def test_group_name(name):
with pytest.raises(ValueError,
match="must be one of 'I', 'O', 'T', 'Dn', 'Cn'"):
Rotation.create_group(name)
@pytest.mark.parametrize("name", ["C0", "D0"])
def test_group_order_positive(name):
with pytest.raises(ValueError,
match="Group order must be positive"):
Rotation.create_group(name)
@pytest.mark.parametrize("axis", ['A', 'b', 0, 1, 2, 4, False, None])
def test_axis_valid(axis):
with pytest.raises(ValueError,
match="`axis` must be one of"):
Rotation.create_group("C1", axis)
def test_icosahedral():
"""The icosahedral group fixes the rotations of an icosahedron. Here we
test that the icosahedron is invariant after application of the elements
of the rotation group."""
P = _generate_icosahedron()
for g in Rotation.create_group("I"):
g = Rotation.from_quat(g.as_quat())
assert _calculate_rmsd(P, g.apply(P)) < TOL
def test_octahedral():
"""Test that the octahedral group correctly fixes the rotations of an
octahedron."""
P = _generate_octahedron()
for g in Rotation.create_group("O"):
assert _calculate_rmsd(P, g.apply(P)) < TOL
def test_tetrahedral():
"""Test that the tetrahedral group correctly fixes the rotations of a
tetrahedron."""
P = _generate_tetrahedron()
for g in Rotation.create_group("T"):
assert _calculate_rmsd(P, g.apply(P)) < TOL
@pytest.mark.parametrize("n", NS)
@pytest.mark.parametrize("axis", 'XYZ')
def test_dicyclic(n, axis):
"""Test that the dicyclic group correctly fixes the rotations of a
prism."""
P = _generate_prism(n, axis='XYZ'.index(axis))
for g in Rotation.create_group("D%d" % n, axis=axis):
assert _calculate_rmsd(P, g.apply(P)) < TOL
@pytest.mark.parametrize("n", NS)
@pytest.mark.parametrize("axis", 'XYZ')
def test_cyclic(n, axis):
"""Test that the cyclic group correctly fixes the rotations of a
pyramid."""
P = _generate_pyramid(n, axis='XYZ'.index(axis))
for g in Rotation.create_group("C%d" % n, axis=axis):
assert _calculate_rmsd(P, g.apply(P)) < TOL
@pytest.mark.parametrize("name, size", zip(NAMES, SIZES))
def test_group_sizes(name, size):
assert len(Rotation.create_group(name)) == size
@pytest.mark.parametrize("name, size", zip(NAMES, SIZES))
def test_group_no_duplicates(name, size):
g = Rotation.create_group(name)
kdtree = cKDTree(g.as_quat())
assert len(kdtree.query_pairs(1E-3)) == 0
@pytest.mark.parametrize("name, size", zip(NAMES, SIZES))
def test_group_symmetry(name, size):
g = Rotation.create_group(name)
q = np.concatenate((-g.as_quat(), g.as_quat()))
distance = np.sort(cdist(q, q))
deltas = np.max(distance, axis=0) - np.min(distance, axis=0)
assert (deltas < TOL).all()
@pytest.mark.parametrize("name", NAMES)
def test_reduction(name):
"""Test that the elements of the rotation group are correctly
mapped onto the identity rotation."""
g = Rotation.create_group(name)
f = g.reduce(g)
assert_array_almost_equal(f.magnitude(), np.zeros(len(g)))
@pytest.mark.parametrize("name", NAMES)
def test_single_reduction(name):
g = Rotation.create_group(name)
f = g[-1].reduce(g)
assert_array_almost_equal(f.magnitude(), 0)
assert f.as_quat().shape == (4,)
# ---- File: scipy-main/scipy/spatial/transform/tests/__init__.py (empty) ----

# ---- File: scipy-main/scipy/_lib/decorator.py ----
# ######################### LICENSE ############################ #
# Copyright (c) 2005-2015, Michele Simionato
# All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
# Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# Redistributions in bytecode form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in
# the documentation and/or other materials provided with the
# distribution.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
# OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
# DAMAGE.
"""
Decorator module, see https://pypi.python.org/pypi/decorator
for the documentation.
"""
import re
import sys
import inspect
import operator
import itertools
import collections
from inspect import getfullargspec
__version__ = '4.0.5'
def get_init(cls):
return cls.__init__
# getargspec has been deprecated in Python 3.5
ArgSpec = collections.namedtuple(
'ArgSpec', 'args varargs varkw defaults')
def getargspec(f):
"""A replacement for inspect.getargspec"""
spec = getfullargspec(f)
return ArgSpec(spec.args, spec.varargs, spec.varkw, spec.defaults)
DEF = re.compile(r'\s*def\s*([_\w][_\w\d]*)\s*\(')
# basic functionality
class FunctionMaker:
"""
An object with the ability to create functions with a given signature.
It has attributes name, doc, module, signature, defaults, dict, and
methods update and make.
"""
# Atomic get-and-increment provided by the GIL
_compile_count = itertools.count()
def __init__(self, func=None, name=None, signature=None,
defaults=None, doc=None, module=None, funcdict=None):
self.shortsignature = signature
if func:
# func can be a class or a callable, but not an instance method
self.name = func.__name__
if self.name == '<lambda>': # small hack for lambda functions
self.name = '_lambda_'
self.doc = func.__doc__
self.module = func.__module__
if inspect.isfunction(func):
argspec = getfullargspec(func)
self.annotations = getattr(func, '__annotations__', {})
for a in ('args', 'varargs', 'varkw', 'defaults', 'kwonlyargs',
'kwonlydefaults'):
setattr(self, a, getattr(argspec, a))
for i, arg in enumerate(self.args):
setattr(self, 'arg%d' % i, arg)
allargs = list(self.args)
allshortargs = list(self.args)
if self.varargs:
allargs.append('*' + self.varargs)
allshortargs.append('*' + self.varargs)
elif self.kwonlyargs:
allargs.append('*') # single star syntax
for a in self.kwonlyargs:
allargs.append('%s=None' % a)
allshortargs.append(f'{a}={a}')
if self.varkw:
allargs.append('**' + self.varkw)
allshortargs.append('**' + self.varkw)
self.signature = ', '.join(allargs)
self.shortsignature = ', '.join(allshortargs)
self.dict = func.__dict__.copy()
# func=None happens when decorating a caller
if name:
self.name = name
if signature is not None:
self.signature = signature
if defaults:
self.defaults = defaults
if doc:
self.doc = doc
if module:
self.module = module
if funcdict:
self.dict = funcdict
# check existence of required attributes
assert hasattr(self, 'name')
if not hasattr(self, 'signature'):
raise TypeError('You are decorating a non-function: %s' % func)
def update(self, func, **kw):
"Update the signature of func with the data in self"
func.__name__ = self.name
func.__doc__ = getattr(self, 'doc', None)
func.__dict__ = getattr(self, 'dict', {})
func.__defaults__ = getattr(self, 'defaults', ())
func.__kwdefaults__ = getattr(self, 'kwonlydefaults', None)
func.__annotations__ = getattr(self, 'annotations', None)
try:
frame = sys._getframe(3)
except AttributeError: # for IronPython and similar implementations
callermodule = '?'
else:
callermodule = frame.f_globals.get('__name__', '?')
func.__module__ = getattr(self, 'module', callermodule)
func.__dict__.update(kw)
def make(self, src_templ, evaldict=None, addsource=False, **attrs):
"Make a new function from a given template and update the signature"
src = src_templ % vars(self) # expand name and signature
evaldict = evaldict or {}
mo = DEF.match(src)
if mo is None:
raise SyntaxError('not a valid function template\n%s' % src)
name = mo.group(1) # extract the function name
names = set([name] + [arg.strip(' *') for arg in
self.shortsignature.split(',')])
for n in names:
if n in ('_func_', '_call_'):
raise NameError(f'{n} is overridden in\n{src}')
if not src.endswith('\n'): # add a newline just for safety
src += '\n' # this is needed in old versions of Python
# Ensure each generated function has a unique filename for profilers
# (such as cProfile) that depend on the tuple of (<filename>,
# <definition line>, <function name>) being unique.
filename = '<decorator-gen-%d>' % (next(self._compile_count),)
try:
code = compile(src, filename, 'single')
exec(code, evaldict)
except: # noqa: E722
print('Error in generated code:', file=sys.stderr)
print(src, file=sys.stderr)
raise
func = evaldict[name]
if addsource:
attrs['__source__'] = src
self.update(func, **attrs)
return func
@classmethod
def create(cls, obj, body, evaldict, defaults=None,
doc=None, module=None, addsource=True, **attrs):
"""
Create a function from the strings name, signature, and body.
evaldict is the evaluation dictionary. If addsource is true, an
attribute __source__ is added to the result. The attributes attrs
are added, if any.
"""
if isinstance(obj, str): # "name(signature)"
name, rest = obj.strip().split('(', 1)
signature = rest[:-1] # strip the trailing right paren
func = None
else: # a function
name = None
signature = None
func = obj
self = cls(func, name, signature, defaults, doc, module)
ibody = '\n'.join(' ' + line for line in body.splitlines())
return self.make('def %(name)s(%(signature)s):\n' + ibody,
evaldict, addsource, **attrs)
def decorate(func, caller):
"""
decorate(func, caller) decorates a function using a caller.
"""
evaldict = func.__globals__.copy()
evaldict['_call_'] = caller
evaldict['_func_'] = func
fun = FunctionMaker.create(
func, "return _call_(_func_, %(shortsignature)s)",
evaldict, __wrapped__=func)
if hasattr(func, '__qualname__'):
fun.__qualname__ = func.__qualname__
return fun
def decorator(caller, _func=None):
"""decorator(caller) converts a caller function into a decorator"""
if _func is not None: # return a decorated function
# this is obsolete behavior; you should use decorate instead
return decorate(_func, caller)
# else return a decorator function
if inspect.isclass(caller):
name = caller.__name__.lower()
callerfunc = get_init(caller)
doc = 'decorator({}) converts functions/generators into ' \
'factories of {} objects'.format(caller.__name__, caller.__name__)
elif inspect.isfunction(caller):
if caller.__name__ == '<lambda>':
name = '_lambda_'
else:
name = caller.__name__
callerfunc = caller
doc = caller.__doc__
else: # assume caller is an object with a __call__ method
name = caller.__class__.__name__.lower()
callerfunc = caller.__call__.__func__
doc = caller.__call__.__doc__
evaldict = callerfunc.__globals__.copy()
evaldict['_call_'] = caller
evaldict['_decorate_'] = decorate
return FunctionMaker.create(
'%s(func)' % name, 'return _decorate_(func, _call_)',
evaldict, doc=doc, module=caller.__module__,
__wrapped__=caller)
# ####################### contextmanager ####################### #
try: # Python >= 3.2
from contextlib import _GeneratorContextManager
except ImportError: # Python >= 2.5
from contextlib import GeneratorContextManager as _GeneratorContextManager
class ContextManager(_GeneratorContextManager):
def __call__(self, func):
"""Context manager decorator"""
return FunctionMaker.create(
func, "with _self_: return _func_(%(shortsignature)s)",
dict(_self_=self, _func_=func), __wrapped__=func)
init = getfullargspec(_GeneratorContextManager.__init__)
n_args = len(init.args)
if n_args == 2 and not init.varargs: # (self, genobj) Python 2.7
def __init__(self, g, *a, **k):
return _GeneratorContextManager.__init__(self, g(*a, **k))
ContextManager.__init__ = __init__
elif n_args == 2 and init.varargs: # (self, gen, *a, **k) Python 3.4
pass
elif n_args == 4: # (self, gen, args, kwds) Python 3.5
def __init__(self, g, *a, **k):
return _GeneratorContextManager.__init__(self, g, a, k)
ContextManager.__init__ = __init__
contextmanager = decorator(ContextManager)
# ############################ dispatch_on ############################ #
def append(a, vancestors):
"""
Append ``a`` to the list of the virtual ancestors, unless it is already
included.
"""
add = True
for j, va in enumerate(vancestors):
if issubclass(va, a):
add = False
break
if issubclass(a, va):
vancestors[j] = a
add = False
if add:
vancestors.append(a)
# inspired from simplegeneric by P.J. Eby and functools.singledispatch
def dispatch_on(*dispatch_args):
"""
Factory of decorators turning a function into a generic function
dispatching on the given arguments.
"""
assert dispatch_args, 'No dispatch args passed'
dispatch_str = '(%s,)' % ', '.join(dispatch_args)
def check(arguments, wrong=operator.ne, msg=''):
"""Make sure one passes the expected number of arguments"""
if wrong(len(arguments), len(dispatch_args)):
raise TypeError('Expected %d arguments, got %d%s' %
(len(dispatch_args), len(arguments), msg))
def gen_func_dec(func):
"""Decorator turning a function into a generic function"""
# first check the dispatch arguments
argset = set(getfullargspec(func).args)
if not set(dispatch_args) <= argset:
raise NameError('Unknown dispatch arguments %s' % dispatch_str)
typemap = {}
def vancestors(*types):
"""
Get a list of sets of virtual ancestors for the given types
"""
check(types)
ras = [[] for _ in range(len(dispatch_args))]
for types_ in typemap:
for t, type_, ra in zip(types, types_, ras):
if issubclass(t, type_) and type_ not in t.__mro__:
append(type_, ra)
return [set(ra) for ra in ras]
def ancestors(*types):
"""
Get a list of virtual MROs, one for each type
"""
check(types)
lists = []
for t, vas in zip(types, vancestors(*types)):
n_vas = len(vas)
if n_vas > 1:
raise RuntimeError(
f'Ambiguous dispatch for {t}: {vas}')
elif n_vas == 1:
va, = vas
mro = type('t', (t, va), {}).__mro__[1:]
else:
mro = t.__mro__
lists.append(mro[:-1]) # discard t and object
return lists
def register(*types):
"""
Decorator to register an implementation for the given types
"""
check(types)
def dec(f):
check(getfullargspec(f).args, operator.lt, ' in ' + f.__name__)
typemap[types] = f
return f
return dec
def dispatch_info(*types):
"""
            A utility to introspect the dispatch algorithm
"""
check(types)
lst = [tuple(a.__name__ for a in anc)
for anc in itertools.product(*ancestors(*types))]
return lst
def _dispatch(dispatch_args, *args, **kw):
types = tuple(type(arg) for arg in dispatch_args)
try: # fast path
f = typemap[types]
except KeyError:
pass
else:
return f(*args, **kw)
combinations = itertools.product(*ancestors(*types))
next(combinations) # the first one has been already tried
for types_ in combinations:
f = typemap.get(types_)
if f is not None:
return f(*args, **kw)
# else call the default implementation
return func(*args, **kw)
return FunctionMaker.create(
func, 'return _f_(%s, %%(shortsignature)s)' % dispatch_str,
dict(_f_=_dispatch), register=register, default=func,
typemap=typemap, vancestors=vancestors, ancestors=ancestors,
dispatch_info=dispatch_info, __wrapped__=func)
gen_func_dec.__name__ = 'dispatch_on' + dispatch_str
return gen_func_dec
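The core idea behind `dispatch_on` — register implementations in a typemap, then walk the argument type's MRO to find the best match — can be shown with a much smaller, standalone sketch. This is an illustration of the lookup strategy only, not the real `FunctionMaker`-based implementation; the `describe` function and its registrations are made up for the example.

```python
# Simplified single-argument dispatch: a typemap plus an MRO walk.
def dispatch_on_first_arg(func):
    typemap = {}

    def register(typ):
        def dec(impl):
            typemap[typ] = impl
            return impl
        return dec

    def wrapper(arg, *args, **kw):
        # Walk the MRO of the argument's type to find the closest match.
        for cls in type(arg).__mro__:
            impl = typemap.get(cls)
            if impl is not None:
                return impl(arg, *args, **kw)
        return func(arg, *args, **kw)  # fall back to the default

    wrapper.register = register
    return wrapper


@dispatch_on_first_arg
def describe(x):
    return 'something else'


@describe.register(int)
def describe_int(x):
    return 'an integer'


@describe.register(list)
def describe_list(x):
    return 'a list'


print(describe(3))       # an integer
print(describe([1, 2]))  # a list
print(describe(3.5))     # something else
```

Because the lookup walks `__mro__`, subclasses inherit registrations: `describe(True)` also reports `'an integer'`, since `bool` derives from `int`.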
| 15,045
| 36.615
| 79
|
py
|
scipy
|
scipy-main/scipy/_lib/_bunch.py
|
import sys as _sys
from keyword import iskeyword as _iskeyword
def _validate_names(typename, field_names, extra_field_names):
"""
Ensure that all the given names are valid Python identifiers that
do not start with '_'. Also check that there are no duplicates
among field_names + extra_field_names.
"""
for name in [typename] + field_names + extra_field_names:
if type(name) is not str:
raise TypeError('typename and all field names must be strings')
if not name.isidentifier():
raise ValueError('typename and all field names must be valid '
f'identifiers: {name!r}')
if _iskeyword(name):
raise ValueError('typename and all field names cannot be a '
f'keyword: {name!r}')
seen = set()
for name in field_names + extra_field_names:
if name.startswith('_'):
raise ValueError('Field names cannot start with an underscore: '
f'{name!r}')
if name in seen:
raise ValueError(f'Duplicate field name: {name!r}')
seen.add(name)
# Note: This code is adapted from CPython:Lib/collections/__init__.py
def _make_tuple_bunch(typename, field_names, extra_field_names=None,
module=None):
"""
Create a namedtuple-like class with additional attributes.
This function creates a subclass of tuple that acts like a namedtuple
and that has additional attributes.
The additional attributes are listed in `extra_field_names`. The
values assigned to these attributes are not part of the tuple.
The reason this function exists is to allow functions in SciPy
    that currently return a tuple or a namedtuple to return objects
that have additional attributes, while maintaining backwards
compatibility.
This should only be used to enhance *existing* functions in SciPy.
New functions are free to create objects as return values without
having to maintain backwards compatibility with an old tuple or
namedtuple return value.
Parameters
----------
typename : str
The name of the type.
field_names : list of str
List of names of the values to be stored in the tuple. These names
will also be attributes of instances, so the values in the tuple
can be accessed by indexing or as attributes. At least one name
is required. See the Notes for additional restrictions.
extra_field_names : list of str, optional
List of names of values that will be stored as attributes of the
object. See the notes for additional restrictions.
Returns
-------
cls : type
The new class.
Notes
-----
There are restrictions on the names that may be used in `field_names`
and `extra_field_names`:
* The names must be unique--no duplicates allowed.
* The names must be valid Python identifiers, and must not begin with
an underscore.
* The names must not be Python keywords (e.g. 'def', 'and', etc., are
not allowed).
Examples
--------
>>> from scipy._lib._bunch import _make_tuple_bunch
Create a class that acts like a namedtuple with length 2 (with field
names `x` and `y`) that will also have the attributes `w` and `beta`:
>>> Result = _make_tuple_bunch('Result', ['x', 'y'], ['w', 'beta'])
`Result` is the new class. We call it with keyword arguments to create
a new instance with given values.
>>> result1 = Result(x=1, y=2, w=99, beta=0.5)
>>> result1
Result(x=1, y=2, w=99, beta=0.5)
`result1` acts like a tuple of length 2:
>>> len(result1)
2
>>> result1[:]
(1, 2)
The values assigned when the instance was created are available as
attributes:
>>> result1.y
2
>>> result1.beta
0.5
"""
if len(field_names) == 0:
raise ValueError('field_names must contain at least one name')
if extra_field_names is None:
extra_field_names = []
_validate_names(typename, field_names, extra_field_names)
typename = _sys.intern(str(typename))
field_names = tuple(map(_sys.intern, field_names))
extra_field_names = tuple(map(_sys.intern, extra_field_names))
all_names = field_names + extra_field_names
arg_list = ', '.join(field_names)
full_list = ', '.join(all_names)
repr_fmt = ''.join(('(',
', '.join(f'{name}=%({name})r' for name in all_names),
')'))
tuple_new = tuple.__new__
_dict, _tuple, _zip = dict, tuple, zip
# Create all the named tuple methods to be added to the class namespace
s = f"""\
def __new__(_cls, {arg_list}, **extra_fields):
return _tuple_new(_cls, ({arg_list},))
def __init__(self, {arg_list}, **extra_fields):
for key in self._extra_fields:
if key not in extra_fields:
raise TypeError("missing keyword argument '%s'" % (key,))
for key, val in extra_fields.items():
if key not in self._extra_fields:
raise TypeError("unexpected keyword argument '%s'" % (key,))
self.__dict__[key] = val
def __setattr__(self, key, val):
if key in {repr(field_names)}:
raise AttributeError("can't set attribute %r of class %r"
% (key, self.__class__.__name__))
else:
self.__dict__[key] = val
"""
del arg_list
namespace = {'_tuple_new': tuple_new,
'__builtins__': dict(TypeError=TypeError,
AttributeError=AttributeError),
'__name__': f'namedtuple_{typename}'}
exec(s, namespace)
__new__ = namespace['__new__']
__new__.__doc__ = f'Create new instance of {typename}({full_list})'
__init__ = namespace['__init__']
__init__.__doc__ = f'Instantiate instance of {typename}({full_list})'
__setattr__ = namespace['__setattr__']
def __repr__(self):
'Return a nicely formatted representation string'
return self.__class__.__name__ + repr_fmt % self._asdict()
def _asdict(self):
'Return a new dict which maps field names to their values.'
out = _dict(_zip(self._fields, self))
out.update(self.__dict__)
return out
def __getnewargs_ex__(self):
'Return self as a plain tuple. Used by copy and pickle.'
return _tuple(self), self.__dict__
# Modify function metadata to help with introspection and debugging
for method in (__new__, __repr__, _asdict, __getnewargs_ex__):
method.__qualname__ = f'{typename}.{method.__name__}'
# Build-up the class namespace dictionary
# and use type() to build the result class
class_namespace = {
'__doc__': f'{typename}({full_list})',
'_fields': field_names,
'__new__': __new__,
'__init__': __init__,
'__repr__': __repr__,
'__setattr__': __setattr__,
'_asdict': _asdict,
'_extra_fields': extra_field_names,
'__getnewargs_ex__': __getnewargs_ex__,
}
for index, name in enumerate(field_names):
def _get(self, index=index):
return self[index]
class_namespace[name] = property(_get)
for name in extra_field_names:
def _get(self, name=name):
return self.__dict__[name]
class_namespace[name] = property(_get)
result = type(typename, (tuple,), class_namespace)
# For pickling to work, the __module__ variable needs to be set to the
# frame where the named tuple is created. Bypass this step in environments
# where sys._getframe is not defined (Jython for example) or sys._getframe
# is not defined for arguments greater than 0 (IronPython), or where the
# user has specified a particular module.
if module is None:
try:
module = _sys._getframe(1).f_globals.get('__name__', '__main__')
except (AttributeError, ValueError):
pass
if module is not None:
result.__module__ = module
__new__.__module__ = module
return result
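The class that `_make_tuple_bunch` generates with `exec` can be written by hand for one concrete case, which makes the pattern easier to see: the tuple part holds the positional fields, while the extra fields live in the instance `__dict__` behind properties. The `Result` class below is a hand-rolled sketch of that pattern (mirroring the docstring example), not the generated code itself.

```python
# Hand-written sketch of the tuple-plus-attributes pattern that
# _make_tuple_bunch generates dynamically.
class Result(tuple):
    _fields = ('x', 'y')

    def __new__(cls, x, y, **extra):
        # Only the positional fields become part of the tuple.
        return tuple.__new__(cls, (x, y))

    def __init__(self, x, y, **extra):
        # Extra fields are stored as ordinary instance attributes.
        self.__dict__.update(extra)

    # Positional fields are exposed as read-only properties.
    x = property(lambda self: self[0])
    y = property(lambda self: self[1])

    def __repr__(self):
        extras = ', '.join(f'{k}={v!r}' for k, v in self.__dict__.items())
        return f'Result(x={self[0]!r}, y={self[1]!r}, {extras})'


r = Result(1, 2, w=99, beta=0.5)
print(len(r), r[:])   # 2 (1, 2)
print(r.x, r.beta)    # 1 0.5
```

The generated class adds validation (required extra fields, rejected unknown keywords, `__setattr__` guards) and pickling support on top of this skeleton.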
| 8,116
| 34.915929
| 79
|
py
|
scipy
|
scipy-main/scipy/_lib/_threadsafety.py
|
import threading
import scipy._lib.decorator
__all__ = ['ReentrancyError', 'ReentrancyLock', 'non_reentrant']
class ReentrancyError(RuntimeError):
pass
class ReentrancyLock:
"""
Threading lock that raises an exception for reentrant calls.
Calls from different threads are serialized, and nested calls from the
    same thread result in an error.
The object can be used as a context manager or to decorate functions
via the decorate() method.
"""
def __init__(self, err_msg):
self._rlock = threading.RLock()
self._entered = False
self._err_msg = err_msg
def __enter__(self):
self._rlock.acquire()
if self._entered:
self._rlock.release()
raise ReentrancyError(self._err_msg)
self._entered = True
def __exit__(self, type, value, traceback):
self._entered = False
self._rlock.release()
def decorate(self, func):
def caller(func, *a, **kw):
with self:
return func(*a, **kw)
return scipy._lib.decorator.decorate(func, caller)
def non_reentrant(err_msg=None):
"""
Decorate a function with a threading lock and prevent reentrant calls.
"""
def decorator(func):
msg = err_msg
if msg is None:
msg = "%s is not re-entrant" % func.__name__
lock = ReentrancyLock(msg)
return lock.decorate(func)
return decorator
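The reentrancy guard is easiest to exercise with a self-contained variant that drops the `scipy._lib.decorator` dependency and uses only the context-manager protocol. The names `SimpleReentrancyLock` and `solve` below are illustrative, not part of the module above.

```python
import threading


class ReentrancyError(RuntimeError):
    pass


class SimpleReentrancyLock:
    """Context manager that errors on nested entry from the same thread."""

    def __init__(self, err_msg):
        self._rlock = threading.RLock()
        self._entered = False
        self._err_msg = err_msg

    def __enter__(self):
        self._rlock.acquire()
        if self._entered:
            # Same thread re-entered: release and fail loudly.
            self._rlock.release()
            raise ReentrancyError(self._err_msg)
        self._entered = True

    def __exit__(self, exc_type, exc_value, traceback):
        self._entered = False
        self._rlock.release()


lock = SimpleReentrancyLock("solver is not re-entrant")


def solve(depth=0):
    with lock:
        if depth == 0:
            solve(depth + 1)  # nested call from the same thread


try:
    solve()
except ReentrancyError as exc:
    print(exc)  # solver is not re-entrant
```

An `RLock` alone would silently allow the nested call; the explicit `_entered` flag is what turns reentrancy into an error while still serializing calls from different threads.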
| 1,455
| 23.677966
| 74
|
py
|
scipy
|
scipy-main/scipy/_lib/_testutils.py
|
"""
Generic test utilities.
"""
import os
import re
import sys
import numpy as np
import inspect
import sysconfig
__all__ = ['PytestTester', 'check_free_memory', '_TestPythranFunc', 'IS_MUSL']
IS_MUSL = False
try:
# Note that packaging is not a dependency, hence we need this try-except:
from packaging.tags import sys_tags
_tags = list(sys_tags())
if 'musllinux' in _tags[0].platform:
IS_MUSL = True
except ImportError:
# fallback to sysconfig (might be flaky)
v = sysconfig.get_config_var('HOST_GNU_TYPE') or ''
if 'musl' in v:
IS_MUSL = True
class FPUModeChangeWarning(RuntimeWarning):
"""Warning about FPU mode change"""
pass
class PytestTester:
"""
Run tests for this namespace
``scipy.test()`` runs tests for all of SciPy, with the default settings.
    When used from a submodule (e.g., ``scipy.cluster.test()``), only the tests
for that namespace are run.
Parameters
----------
label : {'fast', 'full'}, optional
Whether to run only the fast tests, or also those marked as slow.
Default is 'fast'.
verbose : int, optional
Test output verbosity. Default is 1.
extra_argv : list, optional
Arguments to pass through to Pytest.
doctests : bool, optional
Whether to run doctests or not. Default is False.
coverage : bool, optional
Whether to run tests with code coverage measurements enabled.
Default is False.
tests : list of str, optional
List of module names to run tests for. By default, uses the module
from which the ``test`` function is called.
parallel : int, optional
Run tests in parallel with pytest-xdist, if number given is larger than
1. Default is 1.
"""
def __init__(self, module_name):
self.module_name = module_name
def __call__(self, label="fast", verbose=1, extra_argv=None, doctests=False,
coverage=False, tests=None, parallel=None):
import pytest
module = sys.modules[self.module_name]
module_path = os.path.abspath(module.__path__[0])
pytest_args = ['--showlocals', '--tb=short']
if doctests:
raise ValueError("Doctests not supported")
if extra_argv:
pytest_args += list(extra_argv)
if verbose and int(verbose) > 1:
pytest_args += ["-" + "v"*(int(verbose)-1)]
if coverage:
pytest_args += ["--cov=" + module_path]
if label == "fast":
pytest_args += ["-m", "not slow"]
elif label != "full":
pytest_args += ["-m", label]
if tests is None:
tests = [self.module_name]
if parallel is not None and parallel > 1:
if _pytest_has_xdist():
pytest_args += ['-n', str(parallel)]
else:
import warnings
warnings.warn('Could not run tests in parallel because '
'pytest-xdist plugin is not available.')
pytest_args += ['--pyargs'] + list(tests)
try:
code = pytest.main(pytest_args)
except SystemExit as exc:
code = exc.code
return (code == 0)
class _TestPythranFunc:
'''
These are situations that can be tested in our pythran tests:
- A function with multiple array arguments and then
other positional and keyword arguments.
    - A function with array-like keywords (e.g. `def somefunc(x0, x1=None)`).
Note: list/tuple input is not yet tested!
    `self.arguments`: a dictionary whose keys are argument indices and
    whose values are tuples of (array value, all supported dtypes)
    `self.partialfunc`: a function used to freeze non-array arguments
    that are of no interest in the original function
'''
ALL_INTEGER = [np.int8, np.int16, np.int32, np.int64, np.intc, np.intp]
ALL_FLOAT = [np.float32, np.float64]
ALL_COMPLEX = [np.complex64, np.complex128]
def setup_method(self):
self.arguments = {}
self.partialfunc = None
self.expected = None
def get_optional_args(self, func):
# get optional arguments with its default value,
# used for testing keywords
signature = inspect.signature(func)
optional_args = {}
for k, v in signature.parameters.items():
if v.default is not inspect.Parameter.empty:
optional_args[k] = v.default
return optional_args
def get_max_dtype_list_length(self):
# get the max supported dtypes list length in all arguments
max_len = 0
for arg_idx in self.arguments:
cur_len = len(self.arguments[arg_idx][1])
if cur_len > max_len:
max_len = cur_len
return max_len
def get_dtype(self, dtype_list, dtype_idx):
# get the dtype from dtype_list via index
# if the index is out of range, then return the last dtype
if dtype_idx > len(dtype_list)-1:
return dtype_list[-1]
else:
return dtype_list[dtype_idx]
def test_all_dtypes(self):
for type_idx in range(self.get_max_dtype_list_length()):
args_array = []
for arg_idx in self.arguments:
new_dtype = self.get_dtype(self.arguments[arg_idx][1],
type_idx)
args_array.append(self.arguments[arg_idx][0].astype(new_dtype))
self.pythranfunc(*args_array)
def test_views(self):
args_array = []
for arg_idx in self.arguments:
args_array.append(self.arguments[arg_idx][0][::-1][::-1])
self.pythranfunc(*args_array)
def test_strided(self):
args_array = []
for arg_idx in self.arguments:
args_array.append(np.repeat(self.arguments[arg_idx][0],
2, axis=0)[::2])
self.pythranfunc(*args_array)
def _pytest_has_xdist():
"""
    Check whether the pytest-xdist plugin, which provides parallel test runs, is installed
"""
# Check xdist exists without importing, otherwise pytests emits warnings
from importlib.util import find_spec
return find_spec('xdist') is not None
def check_free_memory(free_mb):
"""
Check *free_mb* of memory is available, otherwise do pytest.skip
"""
import pytest
try:
mem_free = _parse_size(os.environ['SCIPY_AVAILABLE_MEM'])
msg = '{} MB memory required, but environment SCIPY_AVAILABLE_MEM={}'.format(
free_mb, os.environ['SCIPY_AVAILABLE_MEM'])
except KeyError:
mem_free = _get_mem_available()
if mem_free is None:
pytest.skip("Could not determine available memory; set SCIPY_AVAILABLE_MEM "
"variable to free memory in MB to run the test.")
msg = '{} MB memory required, but {} MB available'.format(
free_mb, mem_free/1e6)
if mem_free < free_mb * 1e6:
pytest.skip(msg)
def _parse_size(size_str):
suffixes = {'': 1e6,
'b': 1.0,
'k': 1e3, 'M': 1e6, 'G': 1e9, 'T': 1e12,
'kb': 1e3, 'Mb': 1e6, 'Gb': 1e9, 'Tb': 1e12,
'kib': 1024.0, 'Mib': 1024.0**2, 'Gib': 1024.0**3, 'Tib': 1024.0**4}
m = re.match(r'^\s*(\d+)\s*({})\s*$'.format('|'.join(suffixes.keys())),
size_str,
re.I)
if not m or m.group(2) not in suffixes:
raise ValueError("Invalid size string")
return float(m.group(1)) * suffixes[m.group(2)]
def _get_mem_available():
"""
Get information about memory available, not counting swap.
"""
try:
import psutil
return psutil.virtual_memory().available
except (ImportError, AttributeError):
pass
if sys.platform.startswith('linux'):
info = {}
with open('/proc/meminfo') as f:
for line in f:
p = line.split()
info[p[0].strip(':').lower()] = float(p[1]) * 1e3
if 'memavailable' in info:
# Linux >= 3.14
return info['memavailable']
else:
return info['memfree'] + info['cached']
return None
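The suffix-table parsing used by `_parse_size` above can be reproduced as a standalone function: a regex splits the number from the unit, and a dict maps each unit to a byte multiplier (bare numbers are interpreted as MB). This is a self-contained sketch of the same approach, with a slightly shortened suffix table.

```python
import re

# Unit -> bytes multiplier; '' means the bare number is in MB.
SUFFIXES = {'': 1e6, 'b': 1.0,
            'k': 1e3, 'M': 1e6, 'G': 1e9, 'T': 1e12,
            'kb': 1e3, 'Mb': 1e6, 'Gb': 1e9, 'Tb': 1e12,
            'kib': 1024.0, 'Mib': 1024.0**2, 'Gib': 1024.0**3}


def parse_size(size_str):
    pattern = r'^\s*(\d+)\s*({})\s*$'.format('|'.join(SUFFIXES))
    m = re.match(pattern, size_str, re.I)
    # re.I lets the regex match e.g. 'gib', but the exact-case dict
    # lookup then rejects it, matching the original's behavior.
    if not m or m.group(2) not in SUFFIXES:
        raise ValueError("Invalid size string")
    return float(m.group(1)) * SUFFIXES[m.group(2)]


print(parse_size('512'))    # 512000000.0  (bare numbers mean MB)
print(parse_size('2 Gib'))  # 2147483648.0
```

Note the asymmetry inherited from the table: decimal suffixes (`k`, `M`, `G`) are powers of 1000, while the `ib` suffixes are powers of 1024.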
| 8,273
| 31.069767
| 88
|
py
|
scipy
|
scipy-main/scipy/_lib/uarray.py
|
"""`uarray` provides functions for generating multimethods that dispatch to
multiple different backends
This should be imported, rather than `_uarray` so that an installed version could
be used instead, if available. This means that users can call
`uarray.set_backend` directly instead of going through SciPy.
"""
# Prefer an installed version of uarray, if available
try:
import uarray as _uarray
except ImportError:
_has_uarray = False
else:
from scipy._lib._pep440 import Version as _Version
_has_uarray = _Version(_uarray.__version__) >= _Version("0.8")
del _uarray
del _Version
if _has_uarray:
from uarray import *
from uarray import _Function
else:
from ._uarray import *
from ._uarray import _Function
del _has_uarray
| 773
| 23.1875
| 81
|
py
|
scipy
|
scipy-main/scipy/_lib/_pep440.py
|
"""Utility to compare pep440 compatible version strings.
The LooseVersion and StrictVersion classes that distutils provides don't
work; they don't recognize anything like alpha/beta/rc/dev versions.
"""
# Copyright (c) Donald Stufft and individual contributors.
# All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
import collections
import itertools
import re
__all__ = [
"parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN",
]
# BEGIN packaging/_structures.py
class Infinity:
def __repr__(self):
return "Infinity"
def __hash__(self):
return hash(repr(self))
def __lt__(self, other):
return False
def __le__(self, other):
return False
def __eq__(self, other):
return isinstance(other, self.__class__)
def __ne__(self, other):
return not isinstance(other, self.__class__)
def __gt__(self, other):
return True
def __ge__(self, other):
return True
def __neg__(self):
return NegativeInfinity
Infinity = Infinity()
class NegativeInfinity:
def __repr__(self):
return "-Infinity"
def __hash__(self):
return hash(repr(self))
def __lt__(self, other):
return True
def __le__(self, other):
return True
def __eq__(self, other):
return isinstance(other, self.__class__)
def __ne__(self, other):
return not isinstance(other, self.__class__)
def __gt__(self, other):
return False
def __ge__(self, other):
return False
def __neg__(self):
return Infinity
# BEGIN packaging/version.py
NegativeInfinity = NegativeInfinity()
_Version = collections.namedtuple(
"_Version",
["epoch", "release", "dev", "pre", "post", "local"],
)
def parse(version):
"""
Parse the given version string and return either a :class:`Version` object
or a :class:`LegacyVersion` object depending on if the given version is
a valid PEP 440 version or a legacy version.
"""
try:
return Version(version)
except InvalidVersion:
return LegacyVersion(version)
class InvalidVersion(ValueError):
"""
An invalid version was found, users should refer to PEP 440.
"""
class _BaseVersion:
def __hash__(self):
return hash(self._key)
def __lt__(self, other):
return self._compare(other, lambda s, o: s < o)
def __le__(self, other):
return self._compare(other, lambda s, o: s <= o)
def __eq__(self, other):
return self._compare(other, lambda s, o: s == o)
def __ge__(self, other):
return self._compare(other, lambda s, o: s >= o)
def __gt__(self, other):
return self._compare(other, lambda s, o: s > o)
def __ne__(self, other):
return self._compare(other, lambda s, o: s != o)
def _compare(self, other, method):
if not isinstance(other, _BaseVersion):
return NotImplemented
return method(self._key, other._key)
class LegacyVersion(_BaseVersion):
def __init__(self, version):
self._version = str(version)
self._key = _legacy_cmpkey(self._version)
def __str__(self):
return self._version
def __repr__(self):
return f"<LegacyVersion({repr(str(self))})>"
@property
def public(self):
return self._version
@property
def base_version(self):
return self._version
@property
def local(self):
return None
@property
def is_prerelease(self):
return False
@property
def is_postrelease(self):
return False
_legacy_version_component_re = re.compile(
r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE,
)
_legacy_version_replacement_map = {
"pre": "c", "preview": "c", "-": "final-", "rc": "c", "dev": "@",
}
def _parse_version_parts(s):
for part in _legacy_version_component_re.split(s):
part = _legacy_version_replacement_map.get(part, part)
if not part or part == ".":
continue
if part[:1] in "0123456789":
# pad for numeric comparison
yield part.zfill(8)
else:
yield "*" + part
# ensure that alpha/beta/candidate are before final
yield "*final"
def _legacy_cmpkey(version):
# We hardcode an epoch of -1 here. A PEP 440 version can only have an epoch
# greater than or equal to 0. This will effectively put the LegacyVersion,
    # which uses the de facto standard originally implemented by setuptools,
    # as sorting before all PEP 440 versions.
epoch = -1
# This scheme is taken from pkg_resources.parse_version setuptools prior to
# its adoption of the packaging library.
parts = []
for part in _parse_version_parts(version.lower()):
if part.startswith("*"):
# remove "-" before a prerelease tag
if part < "*final":
while parts and parts[-1] == "*final-":
parts.pop()
# remove trailing zeros from each series of numeric parts
while parts and parts[-1] == "00000000":
parts.pop()
parts.append(part)
parts = tuple(parts)
return epoch, parts
# Deliberately not anchored to the start and end of the string, to make it
# easier for 3rd party code to reuse
VERSION_PATTERN = r"""
v?
(?:
(?:(?P<epoch>[0-9]+)!)? # epoch
(?P<release>[0-9]+(?:\.[0-9]+)*) # release segment
(?P<pre> # pre-release
[-_\.]?
(?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
[-_\.]?
(?P<pre_n>[0-9]+)?
)?
(?P<post> # post release
(?:-(?P<post_n1>[0-9]+))
|
(?:
[-_\.]?
(?P<post_l>post|rev|r)
[-_\.]?
(?P<post_n2>[0-9]+)?
)
)?
(?P<dev> # dev release
[-_\.]?
(?P<dev_l>dev)
[-_\.]?
(?P<dev_n>[0-9]+)?
)?
)
(?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))? # local version
"""
class Version(_BaseVersion):
_regex = re.compile(
r"^\s*" + VERSION_PATTERN + r"\s*$",
re.VERBOSE | re.IGNORECASE,
)
def __init__(self, version):
# Validate the version and parse it into pieces
match = self._regex.search(version)
if not match:
raise InvalidVersion(f"Invalid version: '{version}'")
# Store the parsed out pieces of the version
self._version = _Version(
epoch=int(match.group("epoch")) if match.group("epoch") else 0,
release=tuple(int(i) for i in match.group("release").split(".")),
pre=_parse_letter_version(
match.group("pre_l"),
match.group("pre_n"),
),
post=_parse_letter_version(
match.group("post_l"),
match.group("post_n1") or match.group("post_n2"),
),
dev=_parse_letter_version(
match.group("dev_l"),
match.group("dev_n"),
),
local=_parse_local_version(match.group("local")),
)
# Generate a key which will be used for sorting
self._key = _cmpkey(
self._version.epoch,
self._version.release,
self._version.pre,
self._version.post,
self._version.dev,
self._version.local,
)
def __repr__(self):
return f"<Version({repr(str(self))})>"
def __str__(self):
parts = []
# Epoch
if self._version.epoch != 0:
parts.append(f"{self._version.epoch}!")
# Release segment
parts.append(".".join(str(x) for x in self._version.release))
# Pre-release
if self._version.pre is not None:
parts.append("".join(str(x) for x in self._version.pre))
# Post-release
if self._version.post is not None:
parts.append(f".post{self._version.post[1]}")
# Development release
if self._version.dev is not None:
parts.append(f".dev{self._version.dev[1]}")
# Local version segment
if self._version.local is not None:
parts.append(
"+{}".format(".".join(str(x) for x in self._version.local))
)
return "".join(parts)
@property
def public(self):
return str(self).split("+", 1)[0]
@property
def base_version(self):
parts = []
# Epoch
if self._version.epoch != 0:
parts.append(f"{self._version.epoch}!")
# Release segment
parts.append(".".join(str(x) for x in self._version.release))
return "".join(parts)
@property
def local(self):
version_string = str(self)
if "+" in version_string:
return version_string.split("+", 1)[1]
@property
def is_prerelease(self):
return bool(self._version.dev or self._version.pre)
@property
def is_postrelease(self):
return bool(self._version.post)
def _parse_letter_version(letter, number):
if letter:
# We assume there is an implicit 0 in a pre-release if there is
# no numeral associated with it.
if number is None:
number = 0
# We normalize any letters to their lower-case form
letter = letter.lower()
# We consider some words to be alternate spellings of other words and
# in those cases we want to normalize the spellings to our preferred
# spelling.
if letter == "alpha":
letter = "a"
elif letter == "beta":
letter = "b"
elif letter in ["c", "pre", "preview"]:
letter = "rc"
elif letter in ["rev", "r"]:
letter = "post"
return letter, int(number)
if not letter and number:
# We assume that if we are given a number but not given a letter,
# then this is using the implicit post release syntax (e.g., 1.0-1)
letter = "post"
return letter, int(number)
_local_version_seperators = re.compile(r"[\._-]")
def _parse_local_version(local):
"""
Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
"""
if local is not None:
return tuple(
part.lower() if not part.isdigit() else int(part)
for part in _local_version_seperators.split(local)
)
def _cmpkey(epoch, release, pre, post, dev, local):
# When we compare a release version, we want to compare it with all of the
# trailing zeros removed. So we'll use a reverse the list, drop all the now
# leading zeros until we come to something non-zero, then take the rest,
# re-reverse it back into the correct order, and make it a tuple and use
# that for our sorting key.
release = tuple(
reversed(list(
itertools.dropwhile(
lambda x: x == 0,
reversed(release),
)
))
)
# We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
# We'll do this by abusing the pre-segment, but we _only_ want to do this
# if there is no pre- or a post-segment. If we have one of those, then
# the normal sorting rules will handle this case correctly.
if pre is None and post is None and dev is not None:
pre = -Infinity
# Versions without a pre-release (except as noted above) should sort after
# those with one.
elif pre is None:
pre = Infinity
# Versions without a post-segment should sort before those with one.
if post is None:
post = -Infinity
# Versions without a development segment should sort after those with one.
if dev is None:
dev = Infinity
if local is None:
# Versions without a local segment should sort before those with one.
local = -Infinity
else:
# Versions with a local segment need that segment parsed to implement
# the sorting rules in PEP440.
# - Alphanumeric segments sort before numeric segments
# - Alphanumeric segments sort lexicographically
# - Numeric segments sort numerically
# - Shorter versions sort before longer versions when the prefixes
# match exactly
local = tuple(
(i, "") if isinstance(i, int) else (-Infinity, i)
for i in local
)
return epoch, release, pre, post, dev, local
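One non-obvious step in `_cmpkey` is the release-tuple normalization: trailing zeros are stripped (via reverse, `dropwhile`, reverse) so that `1.0` and `1.0.0` compare equal. The helper name `release_key` below is made up; it isolates just that transformation from the function above.

```python
import itertools


def release_key(release):
    # Strip trailing zeros: reverse, drop leading zeros, reverse back.
    return tuple(reversed(list(
        itertools.dropwhile(lambda x: x == 0, reversed(release)))))


print(release_key((1, 0, 0)) == release_key((1, 0)))  # True
print(release_key((1, 0, 1)))                         # (1, 0, 1)
```

Interior zeros survive (as the second example shows); only the trailing run is removed, which is exactly what PEP 440 equality requires.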
| 14,005
| 27.70082
| 79
|
py
|
scipy
|
scipy-main/scipy/_lib/_unuran_utils.py
|
"""Helper functions to get location of UNU.RAN source files."""
import pathlib
from typing import Union
def _unuran_dir(ret_path: bool = False) -> Union[pathlib.Path, str]:
"""Directory where root unuran/ directory lives."""
p = pathlib.Path(__file__).parent / "unuran"
return p if ret_path else str(p)
| 318
| 28
| 68
|
py
|
scipy
|
scipy-main/scipy/_lib/setup.py
|
import os
def check_boost_submodule():
from scipy._lib._boost_utils import _boost_dir
if not os.path.exists(_boost_dir(ret_path=True).parent / 'README.md'):
raise RuntimeError("Missing the `boost` submodule! Run `git submodule "
"update --init` to fix this.")
def check_highs_submodule():
from scipy._lib._highs_utils import _highs_dir
if not os.path.exists(_highs_dir() / 'README.md'):
raise RuntimeError("Missing the `highs` submodule! Run `git submodule "
"update --init` to fix this.")
def build_clib_pre_build_hook(cmd, ext):
from scipy._build_utils.compiler_helper import get_cxx_std_flag
std_flag = get_cxx_std_flag(cmd.compiler)
ext.setdefault('extra_compiler_args', [])
if std_flag is not None:
ext['extra_compiler_args'].append(std_flag)
def configuration(parent_package='',top_path=None):
from numpy.distutils.misc_util import Configuration
from scipy._lib._boost_utils import _boost_dir
check_boost_submodule()
check_highs_submodule()
config = Configuration('_lib', parent_package, top_path)
config.add_data_files('tests/*.py')
# Deal with array_api_compat git submodule
config.add_data_files('array_api_compat/array_api_compat/*.py')
config.add_data_files('array_api_compat/array_api_compat/common/*.py')
config.add_data_files('array_api_compat/array_api_compat/cupy/*.py')
config.add_data_files('array_api_compat/array_api_compat/numpy/*.py')
config.add_data_files('array_api_compat/array_api_compat/torch/*.py')
include_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), 'src'))
depends = [os.path.join(include_dir, 'ccallback.h')]
config.add_extension("_ccallback_c",
sources=["_ccallback_c.c"],
depends=depends,
include_dirs=[include_dir])
config.add_extension("_test_ccallback",
sources=["src/_test_ccallback.c"],
depends=depends,
include_dirs=[include_dir])
config.add_extension("_fpumode",
sources=["_fpumode.c"])
def get_messagestream_config(ext, build_dir):
# Generate a header file containing defines
config_cmd = config.get_config_cmd()
defines = []
if config_cmd.check_func('open_memstream', decl=True, call=True):
defines.append(('HAVE_OPEN_MEMSTREAM', '1'))
target = os.path.join(os.path.dirname(__file__), 'src',
'messagestream_config.h')
with open(target, 'w') as f:
for name, value in defines:
f.write(f'#define {name} {value}\n')
depends = [os.path.join(include_dir, 'messagestream.h')]
config.add_extension("messagestream",
sources=["messagestream.c"] + [get_messagestream_config],
depends=depends,
include_dirs=[include_dir])
config.add_extension("_test_deprecation_call",
sources=["_test_deprecation_call.c"],
include_dirs=[include_dir])
config.add_extension("_test_deprecation_def",
sources=["_test_deprecation_def.c"],
include_dirs=[include_dir])
config.add_subpackage('_uarray')
# ensure Boost was checked out and builds
config.add_library(
'test_boost_build',
sources=['tests/test_boost_build.cpp'],
include_dirs=_boost_dir(),
language='c++',
_pre_build_hook=build_clib_pre_build_hook)
return config
if __name__ == '__main__':
from numpy.distutils.core import setup
setup(**configuration(top_path='').todict())
| 3,817
| 35.711538
| 82
|
py
|
scipy
|
scipy-main/scipy/_lib/_util.py
|
import re
from contextlib import contextmanager
import functools
import operator
import warnings
import numbers
from collections import namedtuple
import inspect
import math
from typing import (
Optional,
Union,
TYPE_CHECKING,
TypeVar,
)
import numpy as np
if np.lib.NumpyVersion(np.__version__) >= '1.25.0':
from numpy.exceptions import AxisError
else:
from numpy import AxisError
IntNumber = Union[int, np.integer]
DecimalNumber = Union[float, np.floating, np.integer]
# Since Generator was introduced in numpy 1.17, the following condition is needed for
# backward compatibility
if TYPE_CHECKING:
SeedType = Optional[Union[IntNumber, np.random.Generator,
np.random.RandomState]]
GeneratorType = TypeVar("GeneratorType", bound=Union[np.random.Generator,
np.random.RandomState])
try:
from numpy.random import Generator as Generator
except ImportError:
class Generator(): # type: ignore[no-redef]
pass
def _lazywhere(cond, arrays, f, fillvalue=None, f2=None):
"""
np.where(cond, x, fillvalue) always evaluates x even where cond is False.
This one only evaluates f(arr1[cond], arr2[cond], ...).
Examples
--------
>>> import numpy as np
>>> a, b = np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8])
>>> def f(a, b):
... return a*b
>>> _lazywhere(a > 2, (a, b), f, np.nan)
array([ nan, nan, 21., 32.])
Notice, it assumes that all `arrays` are of the same shape, or can be
broadcasted together.
"""
cond = np.asarray(cond)
if fillvalue is None:
if f2 is None:
raise ValueError("One of (fillvalue, f2) must be given.")
else:
fillvalue = np.nan
else:
if f2 is not None:
raise ValueError("Only one of (fillvalue, f2) can be given.")
args = np.broadcast_arrays(cond, *arrays)
cond, arrays = args[0], args[1:]
temp = tuple(np.extract(cond, arr) for arr in arrays)
tcode = np.mintypecode([a.dtype.char for a in arrays])
out = np.full(np.shape(arrays[0]), fill_value=fillvalue, dtype=tcode)
np.place(out, cond, f(*temp))
if f2 is not None:
temp = tuple(np.extract(~cond, arr) for arr in arrays)
np.place(out, ~cond, f2(*temp))
return out
def _lazyselect(condlist, choicelist, arrays, default=0):
"""
Mimic `np.select(condlist, choicelist)`.
Notice, it assumes that all `arrays` are of the same shape or can be
broadcasted together.
All functions in `choicelist` must accept array arguments in the order
given in `arrays` and must return an array of the same shape as broadcasted
`arrays`.
Examples
--------
>>> import numpy as np
>>> x = np.arange(6)
    >>> np.select([x < 3, x > 3], [x**2, x**3], default=0)
array([ 0, 1, 4, 0, 64, 125])
>>> _lazyselect([x < 3, x > 3], [lambda x: x**2, lambda x: x**3], (x,))
array([ 0., 1., 4., 0., 64., 125.])
>>> a = -np.ones_like(x)
>>> _lazyselect([x < 3, x > 3],
... [lambda x, a: x**2, lambda x, a: a * x**3],
... (x, a), default=np.nan)
array([ 0., 1., 4., nan, -64., -125.])
"""
arrays = np.broadcast_arrays(*arrays)
tcode = np.mintypecode([a.dtype.char for a in arrays])
out = np.full(np.shape(arrays[0]), fill_value=default, dtype=tcode)
for func, cond in zip(choicelist, condlist):
if np.all(cond is False):
continue
cond, _ = np.broadcast_arrays(cond, arrays[0])
temp = tuple(np.extract(cond, arr) for arr in arrays)
np.place(out, cond, func(*temp))
return out
def _aligned_zeros(shape, dtype=float, order="C", align=None):
"""Allocate a new ndarray with aligned memory.
    Primary use case for this currently is working around an f2py issue
in NumPy 1.9.1, where dtype.alignment is such that np.zeros() does
not necessarily create arrays aligned up to it.
"""
dtype = np.dtype(dtype)
if align is None:
align = dtype.alignment
if not hasattr(shape, '__len__'):
shape = (shape,)
size = functools.reduce(operator.mul, shape) * dtype.itemsize
buf = np.empty(size + align + 1, np.uint8)
offset = buf.__array_interface__['data'][0] % align
if offset != 0:
offset = align - offset
# Note: slices producing 0-size arrays do not necessarily change
# data pointer --- so we use and allocate size+1
buf = buf[offset:offset+size+1][:-1]
data = np.ndarray(shape, dtype, buf, order=order)
data.fill(0)
return data
def _prune_array(array):
"""Return an array equivalent to the input array. If the input
array is a view of a much larger array, copy its contents to a
newly allocated array. Otherwise, return the input unchanged.
"""
if array.base is not None and array.size < array.base.size // 2:
return array.copy()
return array
def float_factorial(n: int) -> float:
"""Compute the factorial and return as a float
Returns infinity when result is too large for a double
"""
return float(math.factorial(n)) if n < 171 else np.inf
# copy-pasted from scikit-learn utils/validation.py
# change this to scipy.stats._qmc.check_random_state once numpy 1.16 is dropped
def check_random_state(seed):
"""Turn `seed` into a `np.random.RandomState` instance.
Parameters
----------
seed : {None, int, `numpy.random.Generator`, `numpy.random.RandomState`}, optional
If `seed` is None (or `np.random`), the `numpy.random.RandomState`
singleton is used.
If `seed` is an int, a new ``RandomState`` instance is used,
seeded with `seed`.
If `seed` is already a ``Generator`` or ``RandomState`` instance then
that instance is used.
Returns
-------
seed : {`numpy.random.Generator`, `numpy.random.RandomState`}
Random number generator.
"""
if seed is None or seed is np.random:
return np.random.mtrand._rand
if isinstance(seed, (numbers.Integral, np.integer)):
return np.random.RandomState(seed)
if isinstance(seed, (np.random.RandomState, np.random.Generator)):
return seed
raise ValueError('%r cannot be used to seed a numpy.random.RandomState'
' instance' % seed)
def _asarray_validated(a, check_finite=True,
sparse_ok=False, objects_ok=False, mask_ok=False,
as_inexact=False):
"""
Helper function for SciPy argument validation.
Many SciPy linear algebra functions do support arbitrary array-like
input arguments. Examples of commonly unsupported inputs include
matrices containing inf/nan, sparse matrix representations, and
matrices with complicated elements.
Parameters
----------
a : array_like
The array-like input.
check_finite : bool, optional
Whether to check that the input matrices contain only finite numbers.
Disabling may give a performance gain, but may result in problems
(crashes, non-termination) if the inputs do contain infinities or NaNs.
Default: True
sparse_ok : bool, optional
True if scipy sparse matrices are allowed.
objects_ok : bool, optional
        True if arrays with dtype('O') are allowed.
mask_ok : bool, optional
True if masked arrays are allowed.
as_inexact : bool, optional
True to convert the input array to a np.inexact dtype.
Returns
-------
ret : ndarray
The converted validated array.
"""
if not sparse_ok:
import scipy.sparse
if scipy.sparse.issparse(a):
msg = ('Sparse matrices are not supported by this function. '
'Perhaps one of the scipy.sparse.linalg functions '
'would work instead.')
raise ValueError(msg)
if not mask_ok:
if np.ma.isMaskedArray(a):
raise ValueError('masked arrays are not supported')
toarray = np.asarray_chkfinite if check_finite else np.asarray
a = toarray(a)
if not objects_ok:
if a.dtype is np.dtype('O'):
raise ValueError('object arrays are not supported')
if as_inexact:
if not np.issubdtype(a.dtype, np.inexact):
a = toarray(a, dtype=np.float_)
return a
def _validate_int(k, name, minimum=None):
"""
Validate a scalar integer.
    This function can be used to validate an argument to a function
that expects the value to be an integer. It uses `operator.index`
to validate the value (so, for example, k=2.0 results in a
TypeError).
Parameters
----------
k : int
The value to be validated.
name : str
The name of the parameter.
minimum : int, optional
An optional lower bound.
"""
try:
k = operator.index(k)
except TypeError:
raise TypeError(f'{name} must be an integer.') from None
if minimum is not None and k < minimum:
raise ValueError(f'{name} must be an integer not less '
f'than {minimum}') from None
return k
# Add a replacement for inspect.getfullargspec()/
# The version below is borrowed from Django,
# https://github.com/django/django/pull/4846.
# Note an inconsistency between inspect.getfullargspec(func) and
# inspect.signature(func). If `func` is a bound method, the latter does *not*
# list `self` as a first argument, while the former *does*.
# Hence, cook up a common ground replacement: `getfullargspec_no_self` which
# mimics `inspect.getfullargspec` but does not list `self`.
#
# This way, the caller code does not need to know whether it uses a legacy
# .getfullargspec or a bright and shiny .signature.
FullArgSpec = namedtuple('FullArgSpec',
['args', 'varargs', 'varkw', 'defaults',
'kwonlyargs', 'kwonlydefaults', 'annotations'])
def getfullargspec_no_self(func):
"""inspect.getfullargspec replacement using inspect.signature.
If func is a bound method, do not list the 'self' parameter.
Parameters
----------
func : callable
A callable to inspect
Returns
-------
fullargspec : FullArgSpec(args, varargs, varkw, defaults, kwonlyargs,
kwonlydefaults, annotations)
NOTE: if the first argument of `func` is self, it is *not*, I repeat
*not*, included in fullargspec.args.
This is done for consistency between inspect.getargspec() under
Python 2.x, and inspect.signature() under Python 3.x.
"""
sig = inspect.signature(func)
args = [
p.name for p in sig.parameters.values()
if p.kind in [inspect.Parameter.POSITIONAL_OR_KEYWORD,
inspect.Parameter.POSITIONAL_ONLY]
]
varargs = [
p.name for p in sig.parameters.values()
if p.kind == inspect.Parameter.VAR_POSITIONAL
]
varargs = varargs[0] if varargs else None
varkw = [
p.name for p in sig.parameters.values()
if p.kind == inspect.Parameter.VAR_KEYWORD
]
varkw = varkw[0] if varkw else None
defaults = tuple(
p.default for p in sig.parameters.values()
if (p.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD and
p.default is not p.empty)
) or None
kwonlyargs = [
p.name for p in sig.parameters.values()
if p.kind == inspect.Parameter.KEYWORD_ONLY
]
kwdefaults = {p.name: p.default for p in sig.parameters.values()
if p.kind == inspect.Parameter.KEYWORD_ONLY and
p.default is not p.empty}
annotations = {p.name: p.annotation for p in sig.parameters.values()
if p.annotation is not p.empty}
return FullArgSpec(args, varargs, varkw, defaults, kwonlyargs,
kwdefaults or None, annotations)
class _FunctionWrapper:
"""
Object to wrap user's function, allowing picklability
"""
def __init__(self, f, args):
self.f = f
self.args = [] if args is None else args
def __call__(self, x):
return self.f(x, *self.args)
class MapWrapper:
"""
Parallelisation wrapper for working with map-like callables, such as
`multiprocessing.Pool.map`.
Parameters
----------
pool : int or map-like callable
If `pool` is an integer, then it specifies the number of threads to
use for parallelization. If ``int(pool) == 1``, then no parallel
processing is used and the map builtin is used.
If ``pool == -1``, then the pool will utilize all available CPUs.
If `pool` is a map-like callable that follows the same
calling sequence as the built-in map function, then this callable is
used for parallelization.
"""
def __init__(self, pool=1):
self.pool = None
self._mapfunc = map
self._own_pool = False
if callable(pool):
self.pool = pool
self._mapfunc = self.pool
else:
from multiprocessing import Pool
# user supplies a number
if int(pool) == -1:
# use as many processors as possible
self.pool = Pool()
self._mapfunc = self.pool.map
self._own_pool = True
elif int(pool) == 1:
pass
elif int(pool) > 1:
# use the number of processors requested
self.pool = Pool(processes=int(pool))
self._mapfunc = self.pool.map
self._own_pool = True
else:
raise RuntimeError("Number of workers specified must be -1,"
" an int >= 1, or an object with a 'map' "
"method")
def __enter__(self):
return self
def terminate(self):
if self._own_pool:
self.pool.terminate()
def join(self):
if self._own_pool:
self.pool.join()
def close(self):
if self._own_pool:
self.pool.close()
def __exit__(self, exc_type, exc_value, traceback):
if self._own_pool:
self.pool.close()
self.pool.terminate()
def __call__(self, func, iterable):
# only accept one iterable because that's all Pool.map accepts
try:
return self._mapfunc(func, iterable)
except TypeError as e:
# wrong number of arguments
raise TypeError("The map-like callable must be of the"
" form f(func, iterable)") from e
def rng_integers(gen, low, high=None, size=None, dtype='int64',
endpoint=False):
"""
Return random integers from low (inclusive) to high (exclusive), or if
endpoint=True, low (inclusive) to high (inclusive). Replaces
`RandomState.randint` (with endpoint=False) and
`RandomState.random_integers` (with endpoint=True).
Return random integers from the "discrete uniform" distribution of the
specified dtype. If high is None (the default), then results are from
0 to low.
Parameters
----------
gen : {None, np.random.RandomState, np.random.Generator}
Random number generator. If None, then the np.random.RandomState
singleton is used.
low : int or array-like of ints
Lowest (signed) integers to be drawn from the distribution (unless
high=None, in which case this parameter is 0 and this value is used
for high).
high : int or array-like of ints
If provided, one above the largest (signed) integer to be drawn from
the distribution (see above for behavior if high=None). If array-like,
must contain integer values.
size : array-like of ints, optional
Output shape. If the given shape is, e.g., (m, n, k), then m * n * k
samples are drawn. Default is None, in which case a single value is
returned.
dtype : {str, dtype}, optional
Desired dtype of the result. All dtypes are determined by their name,
i.e., 'int64', 'int', etc, so byteorder is not available and a specific
precision may have different C types depending on the platform.
The default value is np.int_.
endpoint : bool, optional
If True, sample from the interval [low, high] instead of the default
        [low, high). Defaults to False.
Returns
-------
out: int or ndarray of ints
size-shaped array of random integers from the appropriate distribution,
or a single such random int if size not provided.
"""
if isinstance(gen, Generator):
return gen.integers(low, high=high, size=size, dtype=dtype,
endpoint=endpoint)
else:
if gen is None:
# default is RandomState singleton used by np.random.
gen = np.random.mtrand._rand
if endpoint:
# inclusive of endpoint
# remember that low and high can be arrays, so don't modify in
# place
if high is None:
return gen.randint(low + 1, size=size, dtype=dtype)
if high is not None:
return gen.randint(low, high=high + 1, size=size, dtype=dtype)
# exclusive
return gen.randint(low, high=high, size=size, dtype=dtype)
@contextmanager
def _fixed_default_rng(seed=1638083107694713882823079058616272161):
"""Context with a fixed np.random.default_rng seed."""
orig_fun = np.random.default_rng
np.random.default_rng = lambda seed=seed: orig_fun(seed)
try:
yield
finally:
np.random.default_rng = orig_fun
def _rng_html_rewrite(func):
"""Rewrite the HTML rendering of ``np.random.default_rng``.
This is intended to decorate
``numpydoc.docscrape_sphinx.SphinxDocString._str_examples``.
    Examples are only run by Sphinx when there are plots involved. Even so,
it does not change the result values getting printed.
"""
# hexadecimal or number seed, case-insensitive
pattern = re.compile(r'np.random.default_rng\((0x[0-9A-F]+|\d+)\)', re.I)
def _wrapped(*args, **kwargs):
res = func(*args, **kwargs)
lines = [
re.sub(pattern, 'np.random.default_rng()', line)
for line in res
]
return lines
return _wrapped
def _argmin(a, keepdims=False, axis=None):
"""
argmin with a `keepdims` parameter.
See https://github.com/numpy/numpy/issues/8710
If axis is not None, a.shape[axis] must be greater than 0.
"""
res = np.argmin(a, axis=axis)
if keepdims and axis is not None:
res = np.expand_dims(res, axis=axis)
return res
def _first_nonnan(a, axis):
"""
Return the first non-nan value along the given axis.
If a slice is all nan, nan is returned for that slice.
The shape of the return value corresponds to ``keepdims=True``.
Examples
--------
>>> import numpy as np
>>> nan = np.nan
>>> a = np.array([[ 3., 3., nan, 3.],
[ 1., nan, 2., 4.],
[nan, nan, 9., -1.],
[nan, 5., 4., 3.],
[ 2., 2., 2., 2.],
[nan, nan, nan, nan]])
>>> _first_nonnan(a, axis=0)
array([[3., 3., 2., 3.]])
>>> _first_nonnan(a, axis=1)
array([[ 3.],
[ 1.],
[ 9.],
[ 5.],
[ 2.],
[nan]])
"""
k = _argmin(np.isnan(a), axis=axis, keepdims=True)
return np.take_along_axis(a, k, axis=axis)
def _nan_allsame(a, axis, keepdims=False):
"""
Determine if the values along an axis are all the same.
nan values are ignored.
`a` must be a numpy array.
`axis` is assumed to be normalized; that is, 0 <= axis < a.ndim.
For an axis of length 0, the result is True. That is, we adopt the
convention that ``allsame([])`` is True. (There are no values in the
input that are different.)
`True` is returned for slices that are all nan--not because all the
values are the same, but because this is equivalent to ``allsame([])``.
Examples
--------
>>> import numpy as np
>>> a = np.array([[ 3., 3., nan, 3.],
[ 1., nan, 2., 4.],
[nan, nan, 9., -1.],
[nan, 5., 4., 3.],
[ 2., 2., 2., 2.],
[nan, nan, nan, nan]])
>>> _nan_allsame(a, axis=1, keepdims=True)
array([[ True],
[False],
[False],
[False],
[ True],
[ True]])
"""
if axis is None:
if a.size == 0:
return True
a = a.ravel()
axis = 0
else:
shp = a.shape
if shp[axis] == 0:
shp = shp[:axis] + (1,)*keepdims + shp[axis + 1:]
return np.full(shp, fill_value=True, dtype=bool)
a0 = _first_nonnan(a, axis=axis)
return ((a0 == a) | np.isnan(a)).all(axis=axis, keepdims=keepdims)
def _contains_nan(a, nan_policy='propagate', use_summation=True,
policies=None):
if not isinstance(a, np.ndarray):
use_summation = False # some array_likes ignore nans (e.g. pandas)
if policies is None:
policies = ['propagate', 'raise', 'omit']
if nan_policy not in policies:
raise ValueError("nan_policy must be one of {%s}" %
', '.join("'%s'" % s for s in policies))
if np.issubdtype(a.dtype, np.inexact):
# The summation method avoids creating a (potentially huge) array.
if use_summation:
with np.errstate(invalid='ignore', over='ignore'):
contains_nan = np.isnan(np.sum(a))
else:
contains_nan = np.isnan(a).any()
elif np.issubdtype(a.dtype, object):
contains_nan = False
for el in a.ravel():
# isnan doesn't work on non-numeric elements
if np.issubdtype(type(el), np.number) and np.isnan(el):
contains_nan = True
break
else:
# Only `object` and `inexact` arrays can have NaNs
contains_nan = False
if contains_nan and nan_policy == 'raise':
raise ValueError("The input contains nan values")
return contains_nan, nan_policy
def _rename_parameter(old_name, new_name, dep_version=None):
"""
Generate decorator for backward-compatible keyword renaming.
Apply the decorator generated by `_rename_parameter` to functions with a
recently renamed parameter to maintain backward-compatibility.
After decoration, the function behaves as follows:
If only the new parameter is passed into the function, behave as usual.
If only the old parameter is passed into the function (as a keyword), raise
a DeprecationWarning if `dep_version` is provided, and behave as usual
otherwise.
If both old and new parameters are passed into the function, raise a
DeprecationWarning if `dep_version` is provided, and raise the appropriate
TypeError (function got multiple values for argument).
Parameters
----------
old_name : str
Old name of parameter
new_name : str
New name of parameter
dep_version : str, optional
Version of SciPy in which old parameter was deprecated in the format
'X.Y.Z'. If supplied, the deprecation message will indicate that
support for the old parameter will be removed in version 'X.Y+2.Z'
Notes
-----
Untested with functions that accept *args. Probably won't work as written.
"""
def decorator(fun):
@functools.wraps(fun)
def wrapper(*args, **kwargs):
if old_name in kwargs:
if dep_version:
end_version = dep_version.split('.')
end_version[1] = str(int(end_version[1]) + 2)
end_version = '.'.join(end_version)
message = (f"Use of keyword argument `{old_name}` is "
f"deprecated and replaced by `{new_name}`. "
f"Support for `{old_name}` will be removed "
f"in SciPy {end_version}.")
warnings.warn(message, DeprecationWarning, stacklevel=2)
if new_name in kwargs:
message = (f"{fun.__name__}() got multiple values for "
f"argument now known as `{new_name}`")
raise TypeError(message)
kwargs[new_name] = kwargs.pop(old_name)
return fun(*args, **kwargs)
return wrapper
return decorator
def _rng_spawn(rng, n_children):
# spawns independent RNGs from a parent RNG
bg = rng._bit_generator
ss = bg._seed_seq
child_rngs = [np.random.Generator(type(bg)(child_ss))
for child_ss in ss.spawn(n_children)]
return child_rngs
def _get_nan(*data):
# Get NaN of appropriate dtype for data
data = [np.asarray(item) for item in data]
dtype = np.result_type(*data, np.half) # must be a float16 at least
return np.array(np.nan, dtype=dtype)[()]
def normalize_axis_index(axis, ndim):
# Check if `axis` is in the correct range and normalize it
if axis < -ndim or axis >= ndim:
msg = f"axis {axis} is out of bounds for array of dimension {ndim}"
raise AxisError(msg)
if axis < 0:
axis = axis + ndim
return axis
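The `getfullargspec_no_self` helper above exists because `inspect.signature` omits `self` for bound methods while `inspect.getfullargspec` does not. That asymmetry can be demonstrated with a minimal stdlib sketch (the helper name here is illustrative):

```python
import inspect

def positional_names(func):
    """Positional parameter names of `func`, via inspect.signature."""
    sig = inspect.signature(func)
    return [p.name for p in sig.parameters.values()
            if p.kind in (inspect.Parameter.POSITIONAL_OR_KEYWORD,
                          inspect.Parameter.POSITIONAL_ONLY)]

class A:
    def method(self, x, y=1):
        return x + y

# On a *bound* method, inspect.signature already drops 'self' ...
print(positional_names(A().method))  # ['x', 'y']
# ... while on the plain function attribute it is still listed.
print(positional_names(A.method))    # ['self', 'x', 'y']
```

This is why `getfullargspec_no_self` can build its `FullArgSpec` directly from `inspect.signature` and get consistent results for functions and bound methods alike.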
| 25,661 | 33.216 | 86 | py |
| scipy | scipy-main/scipy/_lib/_highs_utils.py |
"""Helper functions to get location of source files."""
import pathlib
def _highs_dir() -> pathlib.Path:
"""Directory where root highs/ directory lives."""
p = pathlib.Path(__file__).parent / 'highs'
return p
| 224 | 21.5 | 55 | py |
| scipy | scipy-main/scipy/_lib/deprecation.py |
import functools
import warnings
from importlib import import_module
__all__ = ["_deprecated"]
# Object to use as default value for arguments to be deprecated. This should
# be used over 'None' as the user could parse 'None' as a positional argument
_NoValue = object()
def _sub_module_deprecation(*, sub_package, module, private_module, all,
attribute):
"""Helper function for deprecating modules that are public but were
intended to be private.
Parameters
----------
sub_package : str
        Subpackage the module belongs to, e.g. stats
module : str
Public but intended private module to deprecate
private_module : str
Private replacement for `module`
all : list
``__all__`` belonging to `module`
attribute : str
The attribute in `module` being accessed
"""
if attribute not in all:
raise AttributeError(
f"`scipy.{sub_package}.{module}` has no attribute `{attribute}`; furthermore, "
f"`scipy.{sub_package}.{module}` is deprecated and will be removed in "
"SciPy 2.0.0.")
attr = getattr(import_module(f"scipy.{sub_package}"), attribute, None)
if attr is not None:
message = (f"Please import `{attribute}` from the `scipy.{sub_package}` namespace; "
f"the `scipy.{sub_package}.{module}` namespace is deprecated and "
"will be removed in SciPy 2.0.0.")
else:
message = (f"`scipy.{sub_package}.{module}.{attribute}` is deprecated along with "
f"the `scipy.{sub_package}.{module}` namespace. "
f"`scipy.{sub_package}.{module}.{attribute}` will be removed in SciPy 1.13.0, and "
f"the `scipy.{sub_package}.{module}` namespace will be removed in SciPy 2.0.0.")
warnings.warn(message, category=DeprecationWarning, stacklevel=3)
return getattr(import_module(f"scipy.{sub_package}.{private_module}"), attribute)
def _deprecated(msg, stacklevel=2):
"""Deprecate a function by emitting a warning on use."""
def wrap(fun):
if isinstance(fun, type):
warnings.warn(
f"Trying to deprecate class {fun!r}",
category=RuntimeWarning, stacklevel=2)
return fun
@functools.wraps(fun)
def call(*args, **kwargs):
warnings.warn(msg, category=DeprecationWarning,
stacklevel=stacklevel)
return fun(*args, **kwargs)
call.__doc__ = fun.__doc__
return call
return wrap
class _DeprecationHelperStr:
"""
Helper class used by deprecate_cython_api
"""
def __init__(self, content, message):
self._content = content
self._message = message
def __hash__(self):
return hash(self._content)
def __eq__(self, other):
res = (self._content == other)
if res:
warnings.warn(self._message, category=DeprecationWarning,
stacklevel=2)
return res
def deprecate_cython_api(module, routine_name, new_name=None, message=None):
"""
Deprecate an exported cdef function in a public Cython API module.
Only functions can be deprecated; typedefs etc. cannot.
Parameters
----------
module : module
Public Cython API module (e.g. scipy.linalg.cython_blas).
routine_name : str
Name of the routine to deprecate. May also be a fused-type
        routine (in which case all of its specializations are deprecated).
new_name : str
New name to include in the deprecation warning message
message : str
Additional text in the deprecation warning message
Examples
--------
Usually, this function would be used in the top-level of the
module ``.pyx`` file:
>>> from scipy._lib.deprecation import deprecate_cython_api
>>> import scipy.linalg.cython_blas as mod
>>> deprecate_cython_api(mod, "dgemm", "dgemm_new",
... message="Deprecated in Scipy 1.5.0")
>>> del deprecate_cython_api, mod
After this, Cython modules that use the deprecated function emit a
deprecation warning when they are imported.
"""
old_name = f"{module.__name__}.{routine_name}"
if new_name is None:
depdoc = "`%s` is deprecated!" % old_name
else:
depdoc = "`%s` is deprecated, use `%s` instead!" % \
(old_name, new_name)
if message is not None:
depdoc += "\n" + message
d = module.__pyx_capi__
# Check if the function is a fused-type function with a mangled name
j = 0
has_fused = False
while True:
fused_name = f"__pyx_fuse_{j}{routine_name}"
if fused_name in d:
has_fused = True
d[_DeprecationHelperStr(fused_name, depdoc)] = d.pop(fused_name)
j += 1
else:
break
# If not, apply deprecation to the named routine
if not has_fused:
d[_DeprecationHelperStr(routine_name, depdoc)] = d.pop(routine_name)
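The warn-on-call pattern that `_deprecated` implements can be exercised end to end with only the stdlib; this is a minimal sketch (the decorator and function names are illustrative, not the SciPy API):

```python
import functools
import warnings

def deprecated(msg):
    """Emit a DeprecationWarning each time the wrapped function is called."""
    def wrap(fun):
        @functools.wraps(fun)
        def call(*args, **kwargs):
            warnings.warn(msg, category=DeprecationWarning, stacklevel=2)
            return fun(*args, **kwargs)
        return call
    return wrap

@deprecated("old_add is deprecated, use new_add instead")
def old_add(a, b):
    return a + b

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_add(1, 2)

print(result)                       # 3
print(caught[0].category.__name__)  # DeprecationWarning
```

`functools.wraps` keeps the wrapped function's name and docstring, matching the `call.__doc__ = fun.__doc__` behavior in `_deprecated` above.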
| 5,086 | 31.819355 | 102 | py |
| scipy | scipy-main/scipy/_lib/_boost_utils.py |
'''Helper functions to get location of header files.'''
import pathlib
from typing import Union
def _boost_dir(ret_path: bool = False) -> Union[pathlib.Path, str]:
'''Directory where root Boost/ directory lives.'''
p = pathlib.Path(__file__).parent / 'boost_math/include'
return p if ret_path else str(p)
| 320 | 28.181818 | 67 | py |
| scipy | scipy-main/scipy/_lib/doccer.py |
''' Utilities to allow inserting docstring fragments for common
parameters into function and method docstrings'''
import sys
__all__ = [
'docformat', 'inherit_docstring_from', 'indentcount_lines',
'filldoc', 'unindent_dict', 'unindent_string', 'extend_notes_in_docstring',
'replace_notes_in_docstring', 'doc_replace'
]
def docformat(docstring, docdict=None):
''' Fill a function docstring from variables in dictionary
Adapt the indent of the inserted docs
Parameters
----------
docstring : string
docstring from function, possibly with dict formatting strings
docdict : dict, optional
dictionary with keys that match the dict formatting strings
and values that are docstring fragments to be inserted. The
indentation of the inserted docstrings is set to match the
minimum indentation of the ``docstring`` by adding this
indentation to all lines of the inserted string, except the
first.
Returns
-------
outstring : string
string with requested ``docdict`` strings inserted
Examples
--------
>>> docformat(' Test string with %(value)s', {'value':'inserted value'})
' Test string with inserted value'
>>> docstring = 'First line\\n Second line\\n %(value)s'
>>> inserted_string = "indented\\nstring"
>>> docdict = {'value': inserted_string}
>>> docformat(docstring, docdict)
'First line\\n Second line\\n indented\\n string'
'''
if not docstring:
return docstring
if docdict is None:
docdict = {}
if not docdict:
return docstring
lines = docstring.expandtabs().splitlines()
# Find the minimum indent of the main docstring, after first line
if len(lines) < 2:
icount = 0
else:
icount = indentcount_lines(lines[1:])
indent = ' ' * icount
# Insert this indent to dictionary docstrings
indented = {}
for name, dstr in docdict.items():
lines = dstr.expandtabs().splitlines()
try:
newlines = [lines[0]]
for line in lines[1:]:
newlines.append(indent+line)
indented[name] = '\n'.join(newlines)
except IndexError:
indented[name] = dstr
return docstring % indented
def inherit_docstring_from(cls):
"""
This decorator modifies the decorated function's docstring by
replacing occurrences of '%(super)s' with the docstring of the
method of the same name from the class `cls`.
If the decorated method has no docstring, it is simply given the
    docstring of `cls`'s method.
Parameters
----------
cls : Python class or instance
A class with a method with the same name as the decorated method.
The docstring of the method in this class replaces '%(super)s' in the
docstring of the decorated method.
Returns
-------
f : function
The decorator function that modifies the __doc__ attribute
of its argument.
Examples
--------
In the following, the docstring for Bar.func created using the
docstring of `Foo.func`.
>>> class Foo:
... def func(self):
... '''Do something useful.'''
... return
...
>>> class Bar(Foo):
... @inherit_docstring_from(Foo)
... def func(self):
... '''%(super)s
... Do it fast.
... '''
... return
...
>>> b = Bar()
>>> b.func.__doc__
'Do something useful.\n Do it fast.\n '
"""
def _doc(func):
cls_docstring = getattr(cls, func.__name__).__doc__
func_docstring = func.__doc__
if func_docstring is None:
func.__doc__ = cls_docstring
else:
new_docstring = func_docstring % dict(super=cls_docstring)
func.__doc__ = new_docstring
return func
return _doc
def extend_notes_in_docstring(cls, notes):
"""
This decorator replaces the decorated function's docstring
with the docstring from corresponding method in `cls`.
It extends the 'Notes' section of that docstring to include
the given `notes`.
"""
def _doc(func):
cls_docstring = getattr(cls, func.__name__).__doc__
# If python is called with -OO option,
# there is no docstring
if cls_docstring is None:
return func
end_of_notes = cls_docstring.find(' References\n')
if end_of_notes == -1:
end_of_notes = cls_docstring.find(' Examples\n')
if end_of_notes == -1:
end_of_notes = len(cls_docstring)
func.__doc__ = (cls_docstring[:end_of_notes] + notes +
cls_docstring[end_of_notes:])
return func
return _doc
def replace_notes_in_docstring(cls, notes):
"""
This decorator replaces the decorated function's docstring
with the docstring from corresponding method in `cls`.
It replaces the 'Notes' section of that docstring with
the given `notes`.
"""
def _doc(func):
cls_docstring = getattr(cls, func.__name__).__doc__
notes_header = ' Notes\n -----\n'
# If python is called with -OO option,
# there is no docstring
if cls_docstring is None:
return func
start_of_notes = cls_docstring.find(notes_header)
end_of_notes = cls_docstring.find(' References\n')
if end_of_notes == -1:
end_of_notes = cls_docstring.find(' Examples\n')
if end_of_notes == -1:
end_of_notes = len(cls_docstring)
func.__doc__ = (cls_docstring[:start_of_notes + len(notes_header)] +
notes +
cls_docstring[end_of_notes:])
return func
return _doc
def indentcount_lines(lines):
''' Minimum indent for all lines in line list
>>> lines = [' one', ' two', ' three']
>>> indentcount_lines(lines)
1
>>> lines = []
>>> indentcount_lines(lines)
0
>>> lines = [' one']
>>> indentcount_lines(lines)
1
>>> indentcount_lines([' '])
0
'''
indentno = sys.maxsize
for line in lines:
stripped = line.lstrip()
if stripped:
indentno = min(indentno, len(line) - len(stripped))
if indentno == sys.maxsize:
return 0
return indentno
def filldoc(docdict, unindent_params=True):
''' Return docstring decorator using docdict variable dictionary
Parameters
----------
docdict : dictionary
dictionary containing name, docstring fragment pairs
unindent_params : {False, True}, boolean, optional
If True, strip common indentation from all parameters in
docdict
Returns
-------
decfunc : function
decorator that applies dictionary to input function docstring
'''
if unindent_params:
docdict = unindent_dict(docdict)
def decorate(f):
f.__doc__ = docformat(f.__doc__, docdict)
return f
return decorate
def unindent_dict(docdict):
''' Unindent all strings in a docdict '''
can_dict = {}
for name, dstr in docdict.items():
can_dict[name] = unindent_string(dstr)
return can_dict
def unindent_string(docstring):
''' Set docstring to minimum indent for all lines, including first
>>> unindent_string(' two')
'two'
>>> unindent_string(' two\\n three')
'two\\n three'
'''
lines = docstring.expandtabs().splitlines()
icount = indentcount_lines(lines)
if icount == 0:
return docstring
return '\n'.join([line[icount:] for line in lines])
def doc_replace(obj, oldval, newval):
"""Decorator to take the docstring from obj, with oldval replaced by newval
Equivalent to ``func.__doc__ = obj.__doc__.replace(oldval, newval)``
Parameters
----------
obj : object
The object to take the docstring from.
oldval : string
The string to replace from the original docstring.
newval : string
The string to replace ``oldval`` with.
"""
# __doc__ may be None for optimized Python (-OO)
doc = (obj.__doc__ or '').replace(oldval, newval)
def inner(func):
func.__doc__ = doc
return func
return inner
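A minimal standalone sketch of how the `doc_replace` decorator behaves (the `Greeter` class and `greet` function are hypothetical examples, not part of this module):

```python
def doc_replace(obj, oldval, newval):
    # Copy the docstring from obj, substituting oldval -> newval.
    # (obj.__doc__ may be None under python -OO, hence the `or ''`.)
    doc = (obj.__doc__ or '').replace(oldval, newval)

    def inner(func):
        func.__doc__ = doc
        return func

    return inner


class Greeter:
    """Say hello to NAME politely."""


@doc_replace(Greeter, 'NAME', 'the user')
def greet():
    pass


print(greet.__doc__)  # Say hello to the user politely.
```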

# === scipy-main/scipy/_lib/_disjoint_set.py ===

"""
Disjoint set data structure
"""
class DisjointSet:
""" Disjoint set data structure for incremental connectivity queries.
.. versionadded:: 1.6.0
Attributes
----------
n_subsets : int
The number of subsets.
Methods
-------
add
merge
connected
subset
subset_size
subsets
__getitem__
Notes
-----
This class implements the disjoint set [1]_, also known as the *union-find*
or *merge-find* data structure. The *find* operation (implemented in
`__getitem__`) implements the *path halving* variant. The *merge* method
implements the *merge by size* variant.
References
----------
.. [1] https://en.wikipedia.org/wiki/Disjoint-set_data_structure
Examples
--------
>>> from scipy.cluster.hierarchy import DisjointSet
Initialize a disjoint set:
>>> disjoint_set = DisjointSet([1, 2, 3, 'a', 'b'])
Merge some subsets:
>>> disjoint_set.merge(1, 2)
True
>>> disjoint_set.merge(3, 'a')
True
>>> disjoint_set.merge('a', 'b')
True
>>> disjoint_set.merge('b', 'b')
False
Find root elements:
>>> disjoint_set[2]
1
>>> disjoint_set['b']
3
Test connectivity:
>>> disjoint_set.connected(1, 2)
True
>>> disjoint_set.connected(1, 'b')
False
List elements in disjoint set:
>>> list(disjoint_set)
[1, 2, 3, 'a', 'b']
Get the subset containing 'a':
>>> disjoint_set.subset('a')
{'a', 3, 'b'}
Get the size of the subset containing 'a' (without actually instantiating
the subset):
>>> disjoint_set.subset_size('a')
3
Get all subsets in the disjoint set:
>>> disjoint_set.subsets()
[{1, 2}, {'a', 3, 'b'}]
"""
def __init__(self, elements=None):
self.n_subsets = 0
self._sizes = {}
self._parents = {}
# _nbrs is a circular linked list which links connected elements.
self._nbrs = {}
# _indices tracks the element insertion order in `__iter__`.
self._indices = {}
if elements is not None:
for x in elements:
self.add(x)
def __iter__(self):
"""Returns an iterator of the elements in the disjoint set.
Elements are ordered by insertion order.
"""
return iter(self._indices)
def __len__(self):
return len(self._indices)
def __contains__(self, x):
return x in self._indices
def __getitem__(self, x):
"""Find the root element of `x`.
Parameters
----------
x : hashable object
Input element.
Returns
-------
root : hashable object
Root element of `x`.
"""
if x not in self._indices:
raise KeyError(x)
# find by "path halving"
parents = self._parents
while self._indices[x] != self._indices[parents[x]]:
parents[x] = parents[parents[x]]
x = parents[x]
return x
def add(self, x):
"""Add element `x` to disjoint set
"""
if x in self._indices:
return
self._sizes[x] = 1
self._parents[x] = x
self._nbrs[x] = x
self._indices[x] = len(self._indices)
self.n_subsets += 1
def merge(self, x, y):
"""Merge the subsets of `x` and `y`.
The smaller subset (the child) is merged into the larger subset (the
parent). If the subsets are of equal size, the root element which was
first inserted into the disjoint set is selected as the parent.
Parameters
----------
x, y : hashable object
Elements to merge.
Returns
-------
merged : bool
True if `x` and `y` were in disjoint sets, False otherwise.
"""
xr = self[x]
yr = self[y]
if self._indices[xr] == self._indices[yr]:
return False
sizes = self._sizes
if (sizes[xr], self._indices[yr]) < (sizes[yr], self._indices[xr]):
xr, yr = yr, xr
self._parents[yr] = xr
self._sizes[xr] += self._sizes[yr]
self._nbrs[xr], self._nbrs[yr] = self._nbrs[yr], self._nbrs[xr]
self.n_subsets -= 1
return True
def connected(self, x, y):
"""Test whether `x` and `y` are in the same subset.
Parameters
----------
x, y : hashable object
Elements to test.
Returns
-------
result : bool
True if `x` and `y` are in the same set, False otherwise.
"""
return self._indices[self[x]] == self._indices[self[y]]
def subset(self, x):
"""Get the subset containing `x`.
Parameters
----------
x : hashable object
Input element.
Returns
-------
result : set
Subset containing `x`.
"""
if x not in self._indices:
raise KeyError(x)
result = [x]
nxt = self._nbrs[x]
while self._indices[nxt] != self._indices[x]:
result.append(nxt)
nxt = self._nbrs[nxt]
return set(result)
def subset_size(self, x):
"""Get the size of the subset containing `x`.
Note that this method is faster than ``len(self.subset(x))`` because
the size is directly read off an internal field, without the need to
instantiate the full subset.
Parameters
----------
x : hashable object
Input element.
Returns
-------
result : int
Size of the subset containing `x`.
"""
return self._sizes[self[x]]
def subsets(self):
"""Get all the subsets in the disjoint set.
Returns
-------
result : list
Subsets in the disjoint set.
"""
result = []
visited = set()
for x in self:
if x not in visited:
xset = self.subset(x)
visited.update(xset)
result.append(xset)
return result

# === scipy-main/scipy/_lib/_gcutils.py ===

"""
Module for testing automatic garbage collection of objects
.. autosummary::
:toctree: generated/
set_gc_state - enable or disable garbage collection
gc_state - context manager for given state of garbage collector
assert_deallocated - context manager to check for circular references on object
"""
import weakref
import gc
from contextlib import contextmanager
from platform import python_implementation
__all__ = ['set_gc_state', 'gc_state', 'assert_deallocated']
IS_PYPY = python_implementation() == 'PyPy'
class ReferenceError(AssertionError):
pass
def set_gc_state(state):
""" Set status of garbage collector """
if gc.isenabled() == state:
return
if state:
gc.enable()
else:
gc.disable()
@contextmanager
def gc_state(state):
""" Context manager to set state of garbage collector to `state`
Parameters
----------
state : bool
True for gc enabled, False for disabled
Examples
--------
>>> with gc_state(False):
... assert not gc.isenabled()
>>> with gc_state(True):
... assert gc.isenabled()
"""
orig_state = gc.isenabled()
set_gc_state(state)
yield
set_gc_state(orig_state)
@contextmanager
def assert_deallocated(func, *args, **kwargs):
"""Context manager to check that object is deallocated
This is useful for checking that an object can be freed directly by
reference counting, without requiring gc to break reference cycles.
GC is disabled inside the context manager.
This check is not available on PyPy.
Parameters
----------
func : callable
Callable to create object to check
\\*args : sequence
positional arguments to `func` in order to create object to check
\\*\\*kwargs : dict
keyword arguments to `func` in order to create object to check
Examples
--------
>>> class C: pass
>>> with assert_deallocated(C) as c:
... # do something
... del c
>>> class C:
... def __init__(self):
... self._circular = self # Make circular reference
>>> with assert_deallocated(C) as c: #doctest: +IGNORE_EXCEPTION_DETAIL
... # do something
... del c
Traceback (most recent call last):
...
ReferenceError: Remaining reference(s) to object
"""
if IS_PYPY:
raise RuntimeError("assert_deallocated is unavailable on PyPy")
with gc_state(False):
obj = func(*args, **kwargs)
ref = weakref.ref(obj)
yield obj
del obj
if ref() is not None:
raise ReferenceError("Remaining reference(s) to object")
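The core of `assert_deallocated`, inlined as a self-contained sketch: with the garbage collector disabled, only reference counting can free the object, so a surviving weakref proves a reference cycle. This assumes CPython's immediate refcount-based deallocation (which is exactly why the check is unavailable on PyPy):

```python
import gc
import weakref


class Node:
    pass


gc.disable()           # ensure only reference counting can free the object
try:
    obj = Node()
    ref = weakref.ref(obj)
    del obj            # last reference gone -> CPython frees it immediately
    leaked = ref() is not None
finally:
    gc.enable()

print(leaked)  # False on CPython (no cycle, so refcounting freed it)
```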

# === scipy-main/scipy/_lib/_finite_differences.py ===

from numpy import arange, newaxis, hstack, prod, array
def _central_diff_weights(Np, ndiv=1):
"""
Return weights for an Np-point central derivative.
Assumes equally-spaced function points.
If weights are in the vector w, then
derivative is w[0] * f(x-ho*dx) + ... + w[-1] * f(x+ho*dx)
Parameters
----------
Np : int
Number of points for the central derivative.
ndiv : int, optional
Number of divisions. Default is 1.
Returns
-------
w : ndarray
Weights for an Np-point central derivative. Its size is `Np`.
Notes
-----
Can be inaccurate for a large number of points.
Examples
--------
We can calculate a derivative value of a function.
>>> def f(x):
... return 2 * x**2 + 3
>>> x = 3.0 # derivative point
>>> h = 0.1 # differential step
>>> Np = 3 # point number for central derivative
>>> weights = _central_diff_weights(Np) # weights for first derivative
>>> vals = [f(x + (i - Np/2) * h) for i in range(Np)]
>>> sum(w * v for (w, v) in zip(weights, vals))/h
11.79999999999998
This value is close to the analytical solution:
f'(x) = 4x, so f'(3) = 12
References
----------
.. [1] https://en.wikipedia.org/wiki/Finite_difference
"""
if Np < ndiv + 1:
raise ValueError(
"Number of points must be at least the derivative order + 1."
)
if Np % 2 == 0:
raise ValueError("The number of points must be odd.")
from scipy import linalg
ho = Np >> 1
x = arange(-ho, ho + 1.0)
x = x[:, newaxis]
X = x**0.0
for k in range(1, Np):
X = hstack([X, x**k])
w = prod(arange(1, ndiv + 1), axis=0) * linalg.inv(X)[ndiv]
return w
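For the common 3-point case the weights computed above are `[-1/2, 0, 1/2]`. A minimal sketch applying them directly, using the same test function as the docstring (the helper name is illustrative, not part of this module):

```python
def central_first_derivative(f, x0, dx=0.1):
    # Hard-coded 3-point weights [-1/2, 0, 1/2]; exact for quadratics.
    w = (-0.5, 0.0, 0.5)
    vals = (f(x0 - dx), f(x0), f(x0 + dx))
    return sum(wi * vi for wi, vi in zip(w, vals)) / dx


d = central_first_derivative(lambda x: 2 * x**2 + 3, 3.0)
print(abs(d - 12.0) < 1e-9)  # True: f'(x) = 4x, so f'(3) = 12
```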
def _derivative(func, x0, dx=1.0, n=1, args=(), order=3):
"""
Find the nth derivative of a function at a point.
Given a function, use a central difference formula with spacing `dx` to
compute the nth derivative at `x0`.
Parameters
----------
func : function
Input function.
x0 : float
The point at which the nth derivative is found.
dx : float, optional
Spacing.
n : int, optional
Order of the derivative. Default is 1.
args : tuple, optional
Arguments
order : int, optional
Number of points to use, must be odd.
Notes
-----
Decreasing the step size too small can result in round-off error.
Examples
--------
>>> def f(x):
... return x**3 + x**2
>>> _derivative(f, 1.0, dx=1e-6)
4.9999999999217337
"""
if order < n + 1:
raise ValueError(
"'order' (the number of points used to compute the derivative), "
"must be at least the derivative order 'n' + 1."
)
if order % 2 == 0:
raise ValueError(
"'order' (the number of points used to compute the derivative) "
"must be odd."
)
# pre-computed for n=1 and 2 and low-order for speed.
if n == 1:
if order == 3:
weights = array([-1, 0, 1]) / 2.0
elif order == 5:
weights = array([1, -8, 0, 8, -1]) / 12.0
elif order == 7:
weights = array([-1, 9, -45, 0, 45, -9, 1]) / 60.0
elif order == 9:
weights = array([3, -32, 168, -672, 0, 672, -168, 32, -3]) / 840.0
else:
weights = _central_diff_weights(order, 1)
elif n == 2:
if order == 3:
weights = array([1, -2.0, 1])
elif order == 5:
weights = array([-1, 16, -30, 16, -1]) / 12.0
elif order == 7:
weights = array([2, -27, 270, -490, 270, -27, 2]) / 180.0
elif order == 9:
weights = (
array([-9, 128, -1008, 8064, -14350, 8064, -1008, 128, -9])
/ 5040.0
)
else:
weights = _central_diff_weights(order, 2)
else:
weights = _central_diff_weights(order, n)
val = 0.0
ho = order >> 1
for k in range(order):
val += weights[k] * func(x0 + (k - ho) * dx, *args)
return val / prod((dx,) * n, axis=0)
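A sketch of the `n=2, order=3` branch above, with weights `[1, -2, 1]` and the `dx**n` scaling applied by hand (the helper name is illustrative):

```python
def central_second_derivative(f, x0, dx=0.1):
    # Weights [1, -2, 1]; the O(dx**2) error term involves f'''' and
    # therefore vanishes for cubics, making the result essentially exact.
    return (f(x0 - dx) - 2.0 * f(x0) + f(x0 + dx)) / dx**2


d2 = central_second_derivative(lambda x: x**3, 2.0)
print(abs(d2 - 12.0) < 1e-6)  # True: f''(x) = 6x, so f''(2) = 12
```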

# === scipy-main/scipy/_lib/__init__.py ===

"""
Module containing private utility functions
===========================================
The ``scipy._lib`` namespace is empty (for now). Tests for all
utilities in submodules of ``_lib`` can be run with::
from scipy import _lib
_lib.test()
"""
from scipy._lib._testutils import PytestTester
test = PytestTester(__name__)
del PytestTester

# === scipy-main/scipy/_lib/_docscrape.py ===

"""Extract reference documentation from the NumPy source tree.
"""
# copied from numpydoc/docscrape.py
import inspect
import textwrap
import re
import pydoc
from warnings import warn
from collections import namedtuple
from collections.abc import Callable, Mapping
import copy
import sys
def strip_blank_lines(l): # noqa
"Remove leading and trailing blank lines from a list of lines"
while l and not l[0].strip():
del l[0]
while l and not l[-1].strip():
del l[-1]
return l
class Reader:
"""A line-based string reader.
"""
def __init__(self, data):
"""
Parameters
----------
data : str
String with lines separated by '\\n'.
"""
if isinstance(data, list):
self._str = data
else:
self._str = data.split('\n') # store string as list of lines
self.reset()
def __getitem__(self, n):
return self._str[n]
def reset(self):
self._l = 0 # current line nr
def read(self):
if not self.eof():
out = self[self._l]
self._l += 1
return out
else:
return ''
def seek_next_non_empty_line(self):
for l in self[self._l:]: # noqa
if l.strip():
break
else:
self._l += 1
def eof(self):
return self._l >= len(self._str)
def read_to_condition(self, condition_func):
start = self._l
for line in self[start:]:
if condition_func(line):
return self[start:self._l]
self._l += 1
if self.eof():
return self[start:self._l+1]
return []
def read_to_next_empty_line(self):
self.seek_next_non_empty_line()
def is_empty(line):
return not line.strip()
return self.read_to_condition(is_empty)
def read_to_next_unindented_line(self):
def is_unindented(line):
return (line.strip() and (len(line.lstrip()) == len(line)))
return self.read_to_condition(is_unindented)
def peek(self, n=0):
if self._l + n < len(self._str):
return self[self._l + n]
else:
return ''
def is_empty(self):
return not ''.join(self._str).strip()
class ParseError(Exception):
def __str__(self):
message = self.args[0]
if hasattr(self, 'docstring'):
message = f"{message} in {self.docstring!r}"
return message
Parameter = namedtuple('Parameter', ['name', 'type', 'desc'])
class NumpyDocString(Mapping):
"""Parses a numpydoc string to an abstract representation
Instances define a mapping from section title to structured data.
"""
sections = {
'Signature': '',
'Summary': [''],
'Extended Summary': [],
'Parameters': [],
'Returns': [],
'Yields': [],
'Receives': [],
'Raises': [],
'Warns': [],
'Other Parameters': [],
'Attributes': [],
'Methods': [],
'See Also': [],
'Notes': [],
'Warnings': [],
'References': '',
'Examples': '',
'index': {}
}
def __init__(self, docstring, config={}):
orig_docstring = docstring
docstring = textwrap.dedent(docstring).split('\n')
self._doc = Reader(docstring)
self._parsed_data = copy.deepcopy(self.sections)
try:
self._parse()
except ParseError as e:
e.docstring = orig_docstring
raise
def __getitem__(self, key):
return self._parsed_data[key]
def __setitem__(self, key, val):
if key not in self._parsed_data:
self._error_location("Unknown section %s" % key, error=False)
else:
self._parsed_data[key] = val
def __iter__(self):
return iter(self._parsed_data)
def __len__(self):
return len(self._parsed_data)
def _is_at_section(self):
self._doc.seek_next_non_empty_line()
if self._doc.eof():
return False
l1 = self._doc.peek().strip() # e.g. Parameters
if l1.startswith('.. index::'):
return True
l2 = self._doc.peek(1).strip() # ---------- or ==========
return l2.startswith('-'*len(l1)) or l2.startswith('='*len(l1))
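The section-detection rule in `_is_at_section` boils down to "a non-empty title line underlined by dashes or equals signs at least as long as the title". A standalone sketch of that check (with an extra empty-title guard added for safety):

```python
def is_section_header(title_line, underline_line):
    # A numpydoc section title is followed by a line of '-' or '='
    # at least as long as the stripped title, e.g. "Parameters" over
    # "----------".
    l1, l2 = title_line.strip(), underline_line.strip()
    return bool(l1) and (l2.startswith('-' * len(l1)) or
                         l2.startswith('=' * len(l1)))


print(is_section_header('Parameters', '----------'))  # True
print(is_section_header('Parameters', '---'))         # False
```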
def _strip(self, doc):
i = 0
j = 0
for i, line in enumerate(doc):
if line.strip():
break
for j, line in enumerate(doc[::-1]):
if line.strip():
break
return doc[i:len(doc)-j]
def _read_to_next_section(self):
section = self._doc.read_to_next_empty_line()
while not self._is_at_section() and not self._doc.eof():
if not self._doc.peek(-1).strip(): # previous line was empty
section += ['']
section += self._doc.read_to_next_empty_line()
return section
def _read_sections(self):
while not self._doc.eof():
data = self._read_to_next_section()
name = data[0].strip()
if name.startswith('..'): # index section
yield name, data[1:]
elif len(data) < 2:
yield StopIteration
else:
yield name, self._strip(data[2:])
def _parse_param_list(self, content, single_element_is_type=False):
r = Reader(content)
params = []
while not r.eof():
header = r.read().strip()
if ' : ' in header:
arg_name, arg_type = header.split(' : ')[:2]
else:
if single_element_is_type:
arg_name, arg_type = '', header
else:
arg_name, arg_type = header, ''
desc = r.read_to_next_unindented_line()
desc = dedent_lines(desc)
desc = strip_blank_lines(desc)
params.append(Parameter(arg_name, arg_type, desc))
return params
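The header-splitting logic of `_parse_param_list` in isolation: a `" : "` separator divides name from type, and a bare header is treated as a name with an empty type (mirroring the default, non-`single_element_is_type` branch). The helper name here is illustrative:

```python
def split_param_header(header):
    # "name : type" -> (name, type); a bare name gets an empty type.
    header = header.strip()
    if ' : ' in header:
        name, typ = header.split(' : ')[:2]
    else:
        name, typ = header, ''
    return name, typ


print(split_param_header('dx : float, optional'))  # ('dx', 'float, optional')
print(split_param_header('args'))                  # ('args', '')
```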
# See also supports the following formats.
#
# <FUNCNAME>
# <FUNCNAME> SPACE* COLON SPACE+ <DESC> SPACE*
# <FUNCNAME> ( COMMA SPACE+ <FUNCNAME>)+ (COMMA | PERIOD)? SPACE*
# <FUNCNAME> ( COMMA SPACE+ <FUNCNAME>)* SPACE* COLON SPACE+ <DESC> SPACE*
# <FUNCNAME> is one of
# <PLAIN_FUNCNAME>
# COLON <ROLE> COLON BACKTICK <PLAIN_FUNCNAME> BACKTICK
# where
# <PLAIN_FUNCNAME> is a legal function name, and
# <ROLE> is any nonempty sequence of word characters.
# Examples: func_f1 :meth:`func_h1` :obj:`~baz.obj_r` :class:`class_j`
# <DESC> is a string describing the function.
_role = r":(?P<role>\w+):"
_funcbacktick = r"`(?P<name>(?:~\w+\.)?[a-zA-Z0-9_\.-]+)`"
_funcplain = r"(?P<name2>[a-zA-Z0-9_\.-]+)"
_funcname = r"(" + _role + _funcbacktick + r"|" + _funcplain + r")"
_funcnamenext = _funcname.replace('role', 'rolenext')
_funcnamenext = _funcnamenext.replace('name', 'namenext')
_description = r"(?P<description>\s*:(\s+(?P<desc>\S+.*))?)?\s*$"
_func_rgx = re.compile(r"^\s*" + _funcname + r"\s*")
_line_rgx = re.compile(
r"^\s*" +
r"(?P<allfuncs>" + # group for all function names
_funcname +
r"(?P<morefuncs>([,]\s+" + _funcnamenext + r")*)" +
r")" + # end of "allfuncs"
# Some function lists have a trailing comma (or period) '\s*'
r"(?P<trailing>[,\.])?" +
_description)
# Empty <DESC> elements are replaced with '..'
empty_description = '..'
def _parse_see_also(self, content):
"""
func_name : Descriptive text
continued text
another_func_name : Descriptive text
func_name1, func_name2, :meth:`func_name`, func_name3
"""
items = []
def parse_item_name(text):
"""Match ':role:`name`' or 'name'."""
m = self._func_rgx.match(text)
if not m:
                raise ParseError("%s is not an item name" % text)
role = m.group('role')
name = m.group('name') if role else m.group('name2')
return name, role, m.end()
rest = []
for line in content:
if not line.strip():
continue
line_match = self._line_rgx.match(line)
description = None
if line_match:
description = line_match.group('desc')
if line_match.group('trailing') and description:
self._error_location(
'Unexpected comma or period after function list at '
'index %d of line "%s"' % (line_match.end('trailing'),
line),
error=False)
if not description and line.startswith(' '):
rest.append(line.strip())
elif line_match:
funcs = []
text = line_match.group('allfuncs')
while True:
if not text.strip():
break
name, role, match_end = parse_item_name(text)
funcs.append((name, role))
text = text[match_end:].strip()
if text and text[0] == ',':
text = text[1:].strip()
rest = list(filter(None, [description]))
items.append((funcs, rest))
else:
                raise ParseError("%s is not an item name" % line)
return items
def _parse_index(self, section, content):
"""
.. index:: default
:refguide: something, else, and more
"""
def strip_each_in(lst):
return [s.strip() for s in lst]
out = {}
section = section.split('::')
if len(section) > 1:
out['default'] = strip_each_in(section[1].split(','))[0]
for line in content:
line = line.split(':')
if len(line) > 2:
out[line[1]] = strip_each_in(line[2].split(','))
return out
def _parse_summary(self):
"""Grab signature (if given) and summary"""
if self._is_at_section():
return
# If several signatures present, take the last one
while True:
summary = self._doc.read_to_next_empty_line()
summary_str = " ".join([s.strip() for s in summary]).strip()
compiled = re.compile(r'^([\w., ]+=)?\s*[\w\.]+\(.*\)$')
if compiled.match(summary_str):
self['Signature'] = summary_str
if not self._is_at_section():
continue
break
if summary is not None:
self['Summary'] = summary
if not self._is_at_section():
self['Extended Summary'] = self._read_to_next_section()
def _parse(self):
self._doc.reset()
self._parse_summary()
sections = list(self._read_sections())
section_names = {section for section, content in sections}
has_returns = 'Returns' in section_names
has_yields = 'Yields' in section_names
# We could do more tests, but we are not. Arbitrarily.
if has_returns and has_yields:
msg = 'Docstring contains both a Returns and Yields section.'
raise ValueError(msg)
if not has_yields and 'Receives' in section_names:
msg = 'Docstring contains a Receives section but not Yields.'
raise ValueError(msg)
for (section, content) in sections:
if not section.startswith('..'):
section = (s.capitalize() for s in section.split(' '))
section = ' '.join(section)
if self.get(section):
self._error_location("The section %s appears twice"
% section)
if section in ('Parameters', 'Other Parameters', 'Attributes',
'Methods'):
self[section] = self._parse_param_list(content)
elif section in ('Returns', 'Yields', 'Raises', 'Warns',
'Receives'):
self[section] = self._parse_param_list(
content, single_element_is_type=True)
elif section.startswith('.. index::'):
self['index'] = self._parse_index(section, content)
elif section == 'See Also':
self['See Also'] = self._parse_see_also(content)
else:
self[section] = content
def _error_location(self, msg, error=True):
if hasattr(self, '_obj'):
# we know where the docs came from:
try:
filename = inspect.getsourcefile(self._obj)
except TypeError:
filename = None
msg = msg + (" in the docstring of %s in %s."
% (self._obj, filename))
if error:
raise ValueError(msg)
else:
warn(msg)
# string conversion routines
def _str_header(self, name, symbol='-'):
return [name, len(name)*symbol]
def _str_indent(self, doc, indent=4):
out = []
for line in doc:
out += [' '*indent + line]
return out
def _str_signature(self):
if self['Signature']:
return [self['Signature'].replace('*', r'\*')] + ['']
else:
return ['']
def _str_summary(self):
if self['Summary']:
return self['Summary'] + ['']
else:
return []
def _str_extended_summary(self):
if self['Extended Summary']:
return self['Extended Summary'] + ['']
else:
return []
def _str_param_list(self, name):
out = []
if self[name]:
out += self._str_header(name)
for param in self[name]:
parts = []
if param.name:
parts.append(param.name)
if param.type:
parts.append(param.type)
out += [' : '.join(parts)]
if param.desc and ''.join(param.desc).strip():
out += self._str_indent(param.desc)
out += ['']
return out
def _str_section(self, name):
out = []
if self[name]:
out += self._str_header(name)
out += self[name]
out += ['']
return out
def _str_see_also(self, func_role):
if not self['See Also']:
return []
out = []
out += self._str_header("See Also")
out += ['']
last_had_desc = True
for funcs, desc in self['See Also']:
assert isinstance(funcs, list)
links = []
for func, role in funcs:
if role:
link = f':{role}:`{func}`'
elif func_role:
link = f':{func_role}:`{func}`'
else:
link = "`%s`_" % func
links.append(link)
link = ', '.join(links)
out += [link]
if desc:
out += self._str_indent([' '.join(desc)])
last_had_desc = True
else:
last_had_desc = False
out += self._str_indent([self.empty_description])
if last_had_desc:
out += ['']
out += ['']
return out
def _str_index(self):
idx = self['index']
out = []
output_index = False
default_index = idx.get('default', '')
if default_index:
output_index = True
out += ['.. index:: %s' % default_index]
for section, references in idx.items():
if section == 'default':
continue
output_index = True
out += [' :{}: {}'.format(section, ', '.join(references))]
if output_index:
return out
else:
return ''
def __str__(self, func_role=''):
out = []
out += self._str_signature()
out += self._str_summary()
out += self._str_extended_summary()
for param_list in ('Parameters', 'Returns', 'Yields', 'Receives',
'Other Parameters', 'Raises', 'Warns'):
out += self._str_param_list(param_list)
out += self._str_section('Warnings')
out += self._str_see_also(func_role)
for s in ('Notes', 'References', 'Examples'):
out += self._str_section(s)
for param_list in ('Attributes', 'Methods'):
out += self._str_param_list(param_list)
out += self._str_index()
return '\n'.join(out)
def indent(str, indent=4): # noqa
indent_str = ' '*indent
if str is None:
return indent_str
lines = str.split('\n')
return '\n'.join(indent_str + l for l in lines) # noqa
def dedent_lines(lines):
"""Deindent a list of lines maximally"""
return textwrap.dedent("\n".join(lines)).split("\n")
def header(text, style='-'):
return text + '\n' + style*len(text) + '\n'
class FunctionDoc(NumpyDocString):
def __init__(self, func, role='func', doc=None, config={}):
self._f = func
self._role = role # e.g. "func" or "meth"
if doc is None:
if func is None:
raise ValueError("No function or docstring given")
doc = inspect.getdoc(func) or ''
NumpyDocString.__init__(self, doc, config)
def get_func(self):
func_name = getattr(self._f, '__name__', self.__class__.__name__)
if inspect.isclass(self._f):
func = getattr(self._f, '__call__', self._f.__init__)
else:
func = self._f
return func, func_name
def __str__(self):
out = ''
func, func_name = self.get_func()
roles = {'func': 'function',
'meth': 'method'}
if self._role:
if self._role not in roles:
print("Warning: invalid role %s" % self._role)
out += '.. {}:: {}\n \n\n'.format(roles.get(self._role, ''),
func_name)
out += super().__str__(func_role=self._role)
return out
class ClassDoc(NumpyDocString):
extra_public_methods = ['__call__']
def __init__(self, cls, doc=None, modulename='', func_doc=FunctionDoc,
config={}):
if not inspect.isclass(cls) and cls is not None:
raise ValueError("Expected a class or None, but got %r" % cls)
self._cls = cls
if 'sphinx' in sys.modules:
from sphinx.ext.autodoc import ALL
else:
ALL = object()
self.show_inherited_members = config.get(
'show_inherited_class_members', True)
if modulename and not modulename.endswith('.'):
modulename += '.'
self._mod = modulename
if doc is None:
if cls is None:
raise ValueError("No class or documentation string given")
doc = pydoc.getdoc(cls)
NumpyDocString.__init__(self, doc)
_members = config.get('members', [])
if _members is ALL:
_members = None
_exclude = config.get('exclude-members', [])
if config.get('show_class_members', True) and _exclude is not ALL:
def splitlines_x(s):
if not s:
return []
else:
return s.splitlines()
for field, items in [('Methods', self.methods),
('Attributes', self.properties)]:
if not self[field]:
doc_list = []
for name in sorted(items):
if (name in _exclude or
(_members and name not in _members)):
continue
try:
doc_item = pydoc.getdoc(getattr(self._cls, name))
doc_list.append(
Parameter(name, '', splitlines_x(doc_item)))
except AttributeError:
pass # method doesn't exist
self[field] = doc_list
@property
def methods(self):
if self._cls is None:
return []
return [name for name, func in inspect.getmembers(self._cls)
if ((not name.startswith('_')
or name in self.extra_public_methods)
and isinstance(func, Callable)
and self._is_show_member(name))]
@property
def properties(self):
if self._cls is None:
return []
return [name for name, func in inspect.getmembers(self._cls)
if (not name.startswith('_') and
(func is None or isinstance(func, property) or
inspect.isdatadescriptor(func))
and self._is_show_member(name))]
def _is_show_member(self, name):
if self.show_inherited_members:
return True # show all class members
if name not in self._cls.__dict__:
return False # class member is inherited, we do not show it
return True

# === scipy-main/scipy/_lib/_ccallback.py ===

from . import _ccallback_c
import ctypes
PyCFuncPtr = ctypes.CFUNCTYPE(ctypes.c_void_p).__bases__[0]
ffi = None
class CData:
pass
def _import_cffi():
global ffi, CData
if ffi is not None:
return
try:
import cffi
ffi = cffi.FFI()
CData = ffi.CData
except ImportError:
ffi = False
class LowLevelCallable(tuple):
"""
Low-level callback function.
Some functions in SciPy take as arguments callback functions, which
can either be python callables or low-level compiled functions. Using
compiled callback functions can improve performance somewhat by
avoiding wrapping data in Python objects.
Such low-level functions in SciPy are wrapped in `LowLevelCallable`
objects, which can be constructed from function pointers obtained from
ctypes, cffi, Cython, or contained in Python `PyCapsule` objects.
.. seealso::
Functions accepting low-level callables:
`scipy.integrate.quad`, `scipy.ndimage.generic_filter`,
`scipy.ndimage.generic_filter1d`, `scipy.ndimage.geometric_transform`
Usage examples:
:ref:`ndimage-ccallbacks`, :ref:`quad-callbacks`
Parameters
----------
function : {PyCapsule, ctypes function pointer, cffi function pointer}
Low-level callback function.
user_data : {PyCapsule, ctypes void pointer, cffi void pointer}
User data to pass on to the callback function.
signature : str, optional
Signature of the function. If omitted, determined from *function*,
if possible.
Attributes
----------
function
Callback function given.
user_data
User data given.
signature
Signature of the function.
Methods
-------
from_cython
Class method for constructing callables from Cython C-exported
functions.
Notes
-----
The argument ``function`` can be one of:
- PyCapsule, whose name contains the C function signature
- ctypes function pointer
- cffi function pointer
The signature of the low-level callback must match one of those expected
by the routine it is passed to.
If constructing low-level functions from a PyCapsule, the name of the
capsule must be the corresponding signature, in the format::
return_type (arg1_type, arg2_type, ...)
For example::
"void (double)"
"double (double, int *, void *)"
The context of a PyCapsule passed in as ``function`` is used as ``user_data``,
if an explicit value for ``user_data`` was not given.
"""
# Make the class immutable
__slots__ = ()
def __new__(cls, function, user_data=None, signature=None):
# We need to hold a reference to the function & user data,
# to prevent them going out of scope
item = cls._parse_callback(function, user_data, signature)
return tuple.__new__(cls, (item, function, user_data))
def __repr__(self):
return f"LowLevelCallable({self.function!r}, {self.user_data!r})"
@property
def function(self):
return tuple.__getitem__(self, 1)
@property
def user_data(self):
return tuple.__getitem__(self, 2)
@property
def signature(self):
return _ccallback_c.get_capsule_signature(tuple.__getitem__(self, 0))
def __getitem__(self, idx):
raise ValueError()
@classmethod
def from_cython(cls, module, name, user_data=None, signature=None):
"""
Create a low-level callback function from an exported Cython function.
Parameters
----------
module : module
Cython module where the exported function resides
name : str
Name of the exported function
user_data : {PyCapsule, ctypes void pointer, cffi void pointer}, optional
User data to pass on to the callback function.
signature : str, optional
Signature of the function. If omitted, determined from *function*.
"""
try:
function = module.__pyx_capi__[name]
except AttributeError as e:
raise ValueError("Given module is not a Cython module with __pyx_capi__ attribute") from e
except KeyError as e:
raise ValueError(f"No function {name!r} found in __pyx_capi__ of the module") from e
return cls(function, user_data, signature)
@classmethod
def _parse_callback(cls, obj, user_data=None, signature=None):
_import_cffi()
if isinstance(obj, LowLevelCallable):
func = tuple.__getitem__(obj, 0)
elif isinstance(obj, PyCFuncPtr):
func, signature = _get_ctypes_func(obj, signature)
elif isinstance(obj, CData):
func, signature = _get_cffi_func(obj, signature)
elif _ccallback_c.check_capsule(obj):
func = obj
else:
raise ValueError("Given input is not a callable or a low-level callable (pycapsule/ctypes/cffi)")
if isinstance(user_data, ctypes.c_void_p):
context = _get_ctypes_data(user_data)
elif isinstance(user_data, CData):
context = _get_cffi_data(user_data)
elif user_data is None:
context = 0
elif _ccallback_c.check_capsule(user_data):
context = user_data
else:
raise ValueError("Given user data is not a valid low-level void* pointer (pycapsule/ctypes/cffi)")
return _ccallback_c.get_raw_capsule(func, signature, context)
#
# ctypes helpers
#
def _get_ctypes_func(func, signature=None):
# Get function pointer
func_ptr = ctypes.cast(func, ctypes.c_void_p).value
# Construct function signature
if signature is None:
signature = _typename_from_ctypes(func.restype) + " ("
for j, arg in enumerate(func.argtypes):
if j == 0:
signature += _typename_from_ctypes(arg)
else:
signature += ", " + _typename_from_ctypes(arg)
signature += ")"
return func_ptr, signature
def _typename_from_ctypes(item):
if item is None:
return "void"
elif item is ctypes.c_void_p:
return "void *"
name = item.__name__
pointer_level = 0
while name.startswith("LP_"):
pointer_level += 1
name = name[3:]
if name.startswith('c_'):
name = name[2:]
if pointer_level > 0:
name += " " + "*"*pointer_level
return name
def _get_ctypes_data(data):
# Get voidp pointer
return ctypes.cast(data, ctypes.c_void_p).value
#
# CFFI helpers
#
def _get_cffi_func(func, signature=None):
# Get function pointer
func_ptr = ffi.cast('uintptr_t', func)
# Get signature
if signature is None:
signature = ffi.getctype(ffi.typeof(func)).replace('(*)', ' ')
return func_ptr, signature
def _get_cffi_data(data):
# Get pointer
return ffi.cast('uintptr_t', data)
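The ctypes helpers above turn a function pointer into a raw address plus a C signature string derived from `restype`/`argtypes`. A standalone sketch of that derivation (the `py_callback` function and `typename` helper are illustrative; `typename` is a simplified version of `_typename_from_ctypes` with no pointer-level handling):

```python
import ctypes

# Build a C-callable pointer for a hypothetical callback
# double f(double, void *) from a plain Python function.
CALLBACK = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double, ctypes.c_void_p)

def py_callback(x, user_data):
    return x * x

c_func = CALLBACK(py_callback)

# As in _get_ctypes_func: extract the raw address of the function pointer.
func_ptr = ctypes.cast(c_func, ctypes.c_void_p).value

def typename(item):
    # Map a ctypes type to its C name (simplified: no LP_* pointer levels).
    if item is None:
        return "void"
    if item is ctypes.c_void_p:
        return "void *"
    name = item.__name__
    return name[2:] if name.startswith("c_") else name

# Build the signature string from restype and argtypes, as the helper does.
signature = typename(c_func.restype) + " (" + ", ".join(
    typename(arg) for arg in c_func.argtypes) + ")"
print(signature)  # double (double, void *)
```

The resulting `(func_ptr, signature)` pair is what `LowLevelCallable` wraps into a capsule.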
# [end of file: 6,963 chars, avg line 27.080645, max line 110, ext: py]

# repo: scipy | file: scipy-main/scipy/_lib/_tmpdirs.py
''' Contexts for *with* statement providing temporary directories
'''
import os
from contextlib import contextmanager
from shutil import rmtree
from tempfile import mkdtemp
@contextmanager
def tempdir():
"""Create and return a temporary directory. This has the same
behavior as mkdtemp but can be used as a context manager.
Upon exiting the context, the directory and everything contained
in it are removed.
Examples
--------
>>> import os
>>> with tempdir() as tmpdir:
... fname = os.path.join(tmpdir, 'example_file.txt')
... with open(fname, 'wt') as fobj:
... _ = fobj.write('a string\\n')
>>> os.path.exists(tmpdir)
False
"""
d = mkdtemp()
yield d
rmtree(d)
@contextmanager
def in_tempdir():
''' Create, return, and change directory to a temporary directory
Examples
--------
>>> import os
>>> my_cwd = os.getcwd()
>>> with in_tempdir() as tmpdir:
... _ = open('test.txt', 'wt').write('some text')
... assert os.path.isfile('test.txt')
... assert os.path.isfile(os.path.join(tmpdir, 'test.txt'))
>>> os.path.exists(tmpdir)
False
>>> os.getcwd() == my_cwd
True
'''
pwd = os.getcwd()
d = mkdtemp()
os.chdir(d)
yield d
os.chdir(pwd)
rmtree(d)
@contextmanager
def in_dir(dir=None):
""" Change directory to given directory for duration of ``with`` block
Useful when you want to use `in_tempdir` for the final test, but
you are still debugging. For example, you may want to do this in the end:
>>> with in_tempdir() as tmpdir:
... # do something complicated which might break
... pass
But, indeed, the complicated thing does break, and meanwhile, the
``in_tempdir`` context manager wiped out the directory with the
temporary files that you wanted for debugging. So, while debugging, you
replace with something like:
>>> with in_dir() as tmpdir: # Use working directory by default
... # do something complicated which might break
... pass
You can then look at the temporary file outputs to debug what is happening,
fix, and finally replace ``in_dir`` with ``in_tempdir`` again.
"""
cwd = os.getcwd()
if dir is None:
yield cwd
return
os.chdir(dir)
yield dir
os.chdir(cwd)
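For context, the create-yield-remove pattern used by `tempdir` above can be written standalone as follows. Note this sketch adds a `try`/`finally` so cleanup also runs if the body raises; the original relies on the body not raising:

```python
import os
from contextlib import contextmanager
from shutil import rmtree
from tempfile import mkdtemp

@contextmanager
def tempdir():
    # Create a fresh directory, hand it to the with-block, and remove it
    # afterwards (try/finally so cleanup also runs on exceptions).
    d = mkdtemp()
    try:
        yield d
    finally:
        rmtree(d)

with tempdir() as d:
    fname = os.path.join(d, "demo.txt")
    with open(fname, "w") as fobj:
        fobj.write("a string\n")
    assert os.path.isfile(fname)

assert not os.path.exists(d)  # directory removed on exit
```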
# [end of file: 2,374 chars, avg line 26.298851, max line 79, ext: py]

# repo: scipy | file: scipy-main/scipy/_lib/_array_api.py
"""Utility functions to use Python Array API compatible libraries.
For the context about the Array API see:
https://data-apis.org/array-api/latest/purpose_and_scope.html
The SciPy use case of the Array API is described on the following page:
https://data-apis.org/array-api/latest/use_cases.html#use-case-scipy
"""
from __future__ import annotations
import os
import numpy as np
import scipy._lib.array_api_compat.array_api_compat as array_api_compat
from scipy._lib.array_api_compat.array_api_compat import size
import scipy._lib.array_api_compat.array_api_compat.numpy as array_api_compat_numpy
__all__ = ['array_namespace', 'as_xparray', 'size']
# To enable array API and strict array-like input validation
SCIPY_ARRAY_API: str | bool = os.environ.get("SCIPY_ARRAY_API", False)
# To control the default device - for use in the test suite only
SCIPY_DEVICE = os.environ.get("SCIPY_DEVICE", "cpu")
_GLOBAL_CONFIG = {
"SCIPY_ARRAY_API": SCIPY_ARRAY_API,
"SCIPY_DEVICE": SCIPY_DEVICE,
}
def compliance_scipy(*arrays):
"""Raise exceptions on known-bad subclasses.
    The following subclasses are not supported and raise an error:
- `np.ma.MaskedArray`
- `numpy.matrix`
- Any array-like which is not Array API compatible
"""
for array in arrays:
if isinstance(array, np.ma.MaskedArray):
raise TypeError("'numpy.ma.MaskedArray' are not supported")
elif isinstance(array, np.matrix):
raise TypeError("'numpy.matrix' are not supported")
elif not array_api_compat.is_array_api_obj(array):
raise TypeError("Only support Array API compatible arrays")
elif array.dtype is np.dtype('O'):
raise TypeError('object arrays are not supported')
def _check_finite(array, xp):
"""Check for NaNs or Infs."""
msg = "array must not contain infs or NaNs"
try:
if not xp.all(xp.isfinite(array)):
raise ValueError(msg)
except TypeError:
raise ValueError(msg)
def array_namespace(*arrays):
"""Get the array API compatible namespace for the arrays xs.
Parameters
----------
*arrays : sequence of array_like
Arrays used to infer the common namespace.
Returns
-------
namespace : module
Common namespace.
Notes
-----
Thin wrapper around `array_api_compat.array_namespace`.
1. Check for the global switch: SCIPY_ARRAY_API. This can also be accessed
dynamically through ``_GLOBAL_CONFIG['SCIPY_ARRAY_API']``.
    2. `compliance_scipy` raises exceptions on known-bad subclasses. See
       its definition for more details.
When the global switch is False, it defaults to the `numpy` namespace.
In that case, there is no compliance check. This is a convenience to
ease the adoption. Otherwise, arrays must comply with the new rules.
"""
if not _GLOBAL_CONFIG["SCIPY_ARRAY_API"]:
# here we could wrap the namespace if needed
return array_api_compat_numpy
arrays = [array for array in arrays if array is not None]
compliance_scipy(*arrays)
return array_api_compat.array_namespace(*arrays)
def as_xparray(
array, dtype=None, order=None, copy=None, *, xp=None, check_finite=False
):
"""SciPy-specific replacement for `np.asarray` with `order` and `check_finite`.
Memory layout parameter `order` is not exposed in the Array API standard.
`order` is only enforced if the input array implementation
is NumPy based, otherwise `order` is just silently ignored.
    `check_finite` is also not a keyword in the array API standard; it is
    included here for convenience, rather than requiring a separate function
    call inside SciPy functions.
"""
if xp is None:
xp = array_namespace(array)
if xp.__name__ in {"numpy", "scipy._lib.array_api_compat.array_api_compat.numpy"}:
# Use NumPy API to support order
if copy is True:
array = np.array(array, order=order, dtype=dtype)
else:
array = np.asarray(array, order=order, dtype=dtype)
# At this point array is a NumPy ndarray. We convert it to an array
# container that is consistent with the input's namespace.
array = xp.asarray(array)
else:
array = xp.asarray(array, dtype=dtype, copy=copy)
if check_finite:
_check_finite(array, xp)
return array
def atleast_nd(x, *, ndim, xp):
"""Recursively expand the dimension to have at least `ndim`."""
x = xp.asarray(x)
if x.ndim < ndim:
x = xp.expand_dims(x, axis=0)
x = atleast_nd(x, ndim=ndim, xp=xp)
return x
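`atleast_nd` recursively prepends axes until the requested rank is reached. A quick check with plain NumPy as the namespace (the function is reproduced here so the snippet is self-contained):

```python
import numpy as np

def atleast_nd(x, *, ndim, xp):
    # Same recursion as above: prepend axes until x.ndim >= ndim.
    x = xp.asarray(x)
    if x.ndim < ndim:
        x = xp.expand_dims(x, axis=0)
        x = atleast_nd(x, ndim=ndim, xp=xp)
    return x

a = atleast_nd(3.5, ndim=3, xp=np)
print(a.shape)  # (1, 1, 1) - scalar promoted to 3-D
b = atleast_nd(np.ones((2, 2)), ndim=2, xp=np)
print(b.shape)  # (2, 2) - already at least 2-D, unchanged
```

Because the namespace `xp` is a parameter, the same code works for any Array API compatible library that provides `asarray` and `expand_dims`.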
# [end of file: 4,640 chars, avg line 32.388489, max line 86, ext: py]

# repo: scipy | file: scipy-main/scipy/_lib/_uarray/setup.py
def pre_build_hook(build_ext, ext):
from scipy._build_utils.compiler_helper import (
set_cxx_flags_hook, try_add_flag)
cc = build_ext._cxx_compiler
args = ext.extra_compile_args
set_cxx_flags_hook(build_ext, ext)
if cc.compiler_type == 'msvc':
args.append('/EHsc')
else:
try_add_flag(args, cc, '-fvisibility=hidden')
def configuration(parent_package='', top_path=None):
from numpy.distutils.misc_util import Configuration
config = Configuration('_uarray', parent_package, top_path)
config.add_data_files('LICENSE')
ext = config.add_extension('_uarray',
sources=['_uarray_dispatch.cxx', 'vectorcall.cxx'],
depends=['small_dynamic_array.h', 'vectorcall.h'],
language='c++')
ext._pre_build_hook = pre_build_hook
return config
if __name__ == '__main__':
from numpy.distutils.core import setup
setup(**configuration(top_path='').todict())
# [end of file: 1,014 chars, avg line 31.741935, max line 82, ext: py]

# repo: scipy | file: scipy-main/scipy/_lib/_uarray/__init__.py
"""
.. note:
If you are looking for overrides for NumPy-specific methods, see the
documentation for :obj:`unumpy`. This page explains how to write
back-ends and multimethods.
``uarray`` is built around a back-end protocol, and overridable multimethods.
It is necessary to define multimethods for back-ends to be able to override them.
See the documentation of :obj:`generate_multimethod` on how to write multimethods.
Let's start with the simplest:
``__ua_domain__`` defines the back-end *domain*. The domain is a
period-separated string naming the modules you extend plus the submodule. For
example, if a submodule ``module2.submodule`` extends ``module1``
(i.e., it exposes dispatchables marked as types available in ``module1``),
then the domain string should be ``"module1.module2.submodule"``.
For the purpose of this demonstration, we'll be creating an object and setting
its attributes directly. However, note that you can use a module or your own type
as a backend as well.
>>> class Backend: pass
>>> be = Backend()
>>> be.__ua_domain__ = "ua_examples"
It might be useful at this point to sidetrack to the documentation of
:obj:`generate_multimethod` to find out how to generate a multimethod
overridable by :obj:`uarray`. Needless to say, writing a backend and
creating multimethods are mostly orthogonal activities, and knowing
one doesn't necessarily require knowledge of the other, although it
is certainly helpful. We expect core API designers/specifiers to write the
multimethods, and implementors to override them. But, as is often the case,
similar people write both.
Without further ado, here's an example multimethod:
>>> import uarray as ua
>>> from uarray import Dispatchable
>>> def override_me(a, b):
... return Dispatchable(a, int),
>>> def override_replacer(args, kwargs, dispatchables):
... return (dispatchables[0], args[1]), {}
>>> overridden_me = ua.generate_multimethod(
... override_me, override_replacer, "ua_examples"
... )
Next comes the part about overriding the multimethod. This requires
the ``__ua_function__`` protocol, and the ``__ua_convert__``
protocol. The ``__ua_function__`` protocol has the signature
``(method, args, kwargs)`` where ``method`` is the passed
multimethod, ``args``/``kwargs`` specify the arguments and ``dispatchables``
is the list of converted dispatchables passed in.
>>> def __ua_function__(method, args, kwargs):
... return method.__name__, args, kwargs
>>> be.__ua_function__ = __ua_function__
The other protocol of interest is the ``__ua_convert__`` protocol. It has the
signature ``(dispatchables, coerce)``. When ``coerce`` is ``False``, conversion
between the formats should ideally be an ``O(1)`` operation; in particular, no
memory copying should be involved, only views of the existing data.
>>> def __ua_convert__(dispatchables, coerce):
... for d in dispatchables:
... if d.type is int:
... if coerce and d.coercible:
... yield str(d.value)
... else:
... yield d.value
>>> be.__ua_convert__ = __ua_convert__
Now that we have defined the backend, the next thing to do is to call the multimethod.
>>> with ua.set_backend(be):
... overridden_me(1, "2")
('override_me', (1, '2'), {})
Note that the marked type has no effect on the actual type of the passed object.
We can also coerce the type of the input.
>>> with ua.set_backend(be, coerce=True):
... overridden_me(1, "2")
... overridden_me(1.0, "2")
('override_me', ('1', '2'), {})
('override_me', ('1.0', '2'), {})
Another feature is that if you remove ``__ua_convert__``, the arguments are not
converted at all and it's up to the backend to handle that.
>>> del be.__ua_convert__
>>> with ua.set_backend(be):
... overridden_me(1, "2")
('override_me', (1, '2'), {})
You also have the option to return ``NotImplemented``, in which case processing moves on
to the next back-end, which in this case, doesn't exist. The same applies to
``__ua_convert__``.
>>> be.__ua_function__ = lambda *a, **kw: NotImplemented
>>> with ua.set_backend(be):
... overridden_me(1, "2")
Traceback (most recent call last):
...
uarray.BackendNotImplementedError: ...
The last possibility is if we don't have ``__ua_convert__``, in which case the job is left
up to ``__ua_function__``, but putting things back into arrays after conversion will not be
possible.
"""
from ._backend import *
__version__ = '0.8.8.dev0+aa94c5a4.scipy'
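The protocol described in the docstring above boils down to a try-each-backend loop. A minimal standalone sketch of that loop (names like `StrBackend` and `dispatch` are illustrative, not uarray's actual internals, which live in the C++ extension):

```python
class BackendNotImplementedError(Exception):
    pass

def dispatch(backends, method, args, dispatch_indices):
    # Try each backend in order: convert the dispatchable arguments via
    # __ua_convert__, then call __ua_function__. NotImplemented (or a
    # refused conversion) means "move on to the next backend".
    for backend in backends:
        convert = getattr(backend, "__ua_convert__", None)
        call_args = list(args)
        if convert is not None:
            converted = convert([args[i] for i in dispatch_indices], False)
            if converted is NotImplemented:
                continue
            for i, value in zip(dispatch_indices, converted):
                call_args[i] = value
        result = backend.__ua_function__(method, tuple(call_args), {})
        if result is not NotImplemented:
            return result
    raise BackendNotImplementedError(method.__name__)

class StrBackend:
    # Coerces dispatchables to str and reports what was called.
    @staticmethod
    def __ua_convert__(values, coerce):
        return [str(v) for v in values]

    @staticmethod
    def __ua_function__(method, args, kwargs):
        return (method.__name__, args)

def override_me(a, b):
    return a, b

print(dispatch([StrBackend], override_me, (1, "2"), [0]))
# ('override_me', ('1', '2'))
```

When the list of backends is exhausted without an answer, a `BackendNotImplementedError` is raised, matching the behaviour shown in the doctests above.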
# [end of file: 4,493 chars, avg line 37.410256, max line 91, ext: py]

# repo: scipy | file: scipy-main/scipy/_lib/_uarray/_backend.py
import typing
import types
import inspect
import functools
from . import _uarray
import copyreg
import pickle
import contextlib
from ._uarray import ( # type: ignore
BackendNotImplementedError,
_Function,
_SkipBackendContext,
_SetBackendContext,
_BackendState,
)
__all__ = [
"set_backend",
"set_global_backend",
"skip_backend",
"register_backend",
"determine_backend",
"determine_backend_multi",
"clear_backends",
"create_multimethod",
"generate_multimethod",
"_Function",
"BackendNotImplementedError",
"Dispatchable",
"wrap_single_convertor",
"wrap_single_convertor_instance",
"all_of_type",
"mark_as",
"set_state",
"get_state",
"reset_state",
"_BackendState",
"_SkipBackendContext",
"_SetBackendContext",
]
ArgumentExtractorType = typing.Callable[..., typing.Tuple["Dispatchable", ...]]
ArgumentReplacerType = typing.Callable[
[typing.Tuple, typing.Dict, typing.Tuple], typing.Tuple[typing.Tuple, typing.Dict]
]
def unpickle_function(mod_name, qname, self_):
import importlib
try:
module = importlib.import_module(mod_name)
qname = qname.split(".")
func = module
for q in qname:
func = getattr(func, q)
if self_ is not None:
func = types.MethodType(func, self_)
return func
except (ImportError, AttributeError) as e:
from pickle import UnpicklingError
raise UnpicklingError from e
def pickle_function(func):
mod_name = getattr(func, "__module__", None)
qname = getattr(func, "__qualname__", None)
self_ = getattr(func, "__self__", None)
try:
test = unpickle_function(mod_name, qname, self_)
except pickle.UnpicklingError:
test = None
if test is not func:
raise pickle.PicklingError(
f"Can't pickle {func}: it's not the same object as {test}"
)
return unpickle_function, (mod_name, qname, self_)
def pickle_state(state):
return _uarray._BackendState._unpickle, state._pickle()
def pickle_set_backend_context(ctx):
return _SetBackendContext, ctx._pickle()
def pickle_skip_backend_context(ctx):
return _SkipBackendContext, ctx._pickle()
copyreg.pickle(_Function, pickle_function)
copyreg.pickle(_uarray._BackendState, pickle_state)
copyreg.pickle(_SetBackendContext, pickle_set_backend_context)
copyreg.pickle(_SkipBackendContext, pickle_skip_backend_context)
def get_state():
"""
Returns an opaque object containing the current state of all the backends.
Can be used for synchronization between threads/processes.
See Also
--------
set_state
Sets the state returned by this function.
"""
return _uarray.get_state()
@contextlib.contextmanager
def reset_state():
"""
Returns a context manager that resets all state once exited.
See Also
--------
set_state
Context manager that sets the backend state.
get_state
Gets a state to be set by this context manager.
"""
with set_state(get_state()):
yield
@contextlib.contextmanager
def set_state(state):
"""
A context manager that sets the state of the backends to one returned by :obj:`get_state`.
See Also
--------
get_state
Gets a state to be set by this context manager.
"""
old_state = get_state()
_uarray.set_state(state)
try:
yield
finally:
_uarray.set_state(old_state, True)
def create_multimethod(*args, **kwargs):
"""
Creates a decorator for generating multimethods.
This function creates a decorator that can be used with an argument
extractor in order to generate a multimethod. Other than for the
argument extractor, all arguments are passed on to
:obj:`generate_multimethod`.
See Also
--------
generate_multimethod
Generates a multimethod.
"""
def wrapper(a):
return generate_multimethod(a, *args, **kwargs)
return wrapper
def generate_multimethod(
argument_extractor: ArgumentExtractorType,
argument_replacer: ArgumentReplacerType,
domain: str,
default: typing.Optional[typing.Callable] = None,
):
"""
Generates a multimethod.
Parameters
----------
argument_extractor : ArgumentExtractorType
A callable which extracts the dispatchable arguments. Extracted arguments
should be marked by the :obj:`Dispatchable` class. It has the same signature
as the desired multimethod.
argument_replacer : ArgumentReplacerType
A callable with the signature (args, kwargs, dispatchables), which should also
return an (args, kwargs) pair with the dispatchables replaced inside the args/kwargs.
domain : str
A string value indicating the domain of this multimethod.
default: Optional[Callable], optional
The default implementation of this multimethod, where ``None`` (the default) specifies
there is no default implementation.
Examples
--------
In this example, ``a`` is to be dispatched over, so we return it, while marking it as an ``int``.
The trailing comma is needed because the args have to be returned as an iterable.
>>> def override_me(a, b):
... return Dispatchable(a, int),
Next, we define the argument replacer that replaces the dispatchables inside args/kwargs with the
supplied ones.
>>> def override_replacer(args, kwargs, dispatchables):
... return (dispatchables[0], args[1]), {}
Next, we define the multimethod.
>>> overridden_me = generate_multimethod(
... override_me, override_replacer, "ua_examples"
... )
Notice that there's no default implementation, unless you supply one.
>>> overridden_me(1, "a")
Traceback (most recent call last):
...
uarray.BackendNotImplementedError: ...
>>> overridden_me2 = generate_multimethod(
... override_me, override_replacer, "ua_examples", default=lambda x, y: (x, y)
... )
>>> overridden_me2(1, "a")
(1, 'a')
See Also
--------
uarray
See the module documentation for how to override the method by creating backends.
"""
kw_defaults, arg_defaults, opts = get_defaults(argument_extractor)
ua_func = _Function(
argument_extractor,
argument_replacer,
domain,
arg_defaults,
kw_defaults,
default,
)
return functools.update_wrapper(ua_func, argument_extractor)
def set_backend(backend, coerce=False, only=False):
"""
A context manager that sets the preferred backend.
Parameters
----------
backend
The backend to set.
coerce
Whether or not to coerce to a specific backend's types. Implies ``only``.
only
Whether or not this should be the last backend to try.
See Also
--------
skip_backend: A context manager that allows skipping of backends.
set_global_backend: Set a single, global backend for a domain.
"""
try:
return backend.__ua_cache__["set", coerce, only]
except AttributeError:
backend.__ua_cache__ = {}
except KeyError:
pass
ctx = _SetBackendContext(backend, coerce, only)
backend.__ua_cache__["set", coerce, only] = ctx
return ctx
def skip_backend(backend):
"""
A context manager that allows one to skip a given backend from processing
entirely. This allows one to use another backend's code in a library that
is also a consumer of the same backend.
Parameters
----------
backend
The backend to skip.
See Also
--------
set_backend: A context manager that allows setting of backends.
set_global_backend: Set a single, global backend for a domain.
"""
try:
return backend.__ua_cache__["skip"]
except AttributeError:
backend.__ua_cache__ = {}
except KeyError:
pass
ctx = _SkipBackendContext(backend)
backend.__ua_cache__["skip"] = ctx
return ctx
def get_defaults(f):
sig = inspect.signature(f)
kw_defaults = {}
arg_defaults = []
opts = set()
for k, v in sig.parameters.items():
if v.default is not inspect.Parameter.empty:
kw_defaults[k] = v.default
if v.kind in (
inspect.Parameter.POSITIONAL_ONLY,
inspect.Parameter.POSITIONAL_OR_KEYWORD,
):
arg_defaults.append(v.default)
opts.add(k)
return kw_defaults, tuple(arg_defaults), opts
def set_global_backend(backend, coerce=False, only=False, *, try_last=False):
"""
This utility method replaces the default backend for permanent use. It
will be tried in the list of backends automatically, unless the
``only`` flag is set on a backend. This will be the first tried
backend outside the :obj:`set_backend` context manager.
Note that this method is not thread-safe.
.. warning::
We caution library authors against using this function in
their code. We do *not* support this use-case. This function
is meant to be used only by users themselves, or by a reference
implementation, if one exists.
Parameters
----------
backend
The backend to register.
coerce : bool
Whether to coerce input types when trying this backend.
only : bool
If ``True``, no more backends will be tried if this fails.
Implied by ``coerce=True``.
try_last : bool
If ``True``, the global backend is tried after registered backends.
See Also
--------
set_backend: A context manager that allows setting of backends.
skip_backend: A context manager that allows skipping of backends.
"""
_uarray.set_global_backend(backend, coerce, only, try_last)
def register_backend(backend):
"""
    This utility method registers a backend for permanent use. It
will be tried in the list of backends automatically, unless the
``only`` flag is set on a backend.
Note that this method is not thread-safe.
Parameters
----------
backend
The backend to register.
"""
_uarray.register_backend(backend)
def clear_backends(domain, registered=True, globals=False):
"""
This utility method clears registered backends.
.. warning::
We caution library authors against using this function in
their code. We do *not* support this use-case. This function
is meant to be used only by users themselves.
.. warning::
Do NOT use this method inside a multimethod call, or the
program is likely to crash.
Parameters
----------
domain : Optional[str]
The domain for which to de-register backends. ``None`` means
de-register for all domains.
registered : bool
Whether or not to clear registered backends. See :obj:`register_backend`.
globals : bool
Whether or not to clear global backends. See :obj:`set_global_backend`.
See Also
--------
register_backend : Register a backend globally.
set_global_backend : Set a global backend.
"""
_uarray.clear_backends(domain, registered, globals)
class Dispatchable:
"""
A utility class which marks an argument with a specific dispatch type.
Attributes
----------
value
The value of the Dispatchable.
type
The type of the Dispatchable.
Examples
--------
>>> x = Dispatchable(1, str)
>>> x
<Dispatchable: type=<class 'str'>, value=1>
See Also
--------
all_of_type
Marks all unmarked parameters of a function.
mark_as
Allows one to create a utility function to mark as a given type.
"""
def __init__(self, value, dispatch_type, coercible=True):
self.value = value
self.type = dispatch_type
self.coercible = coercible
def __getitem__(self, index):
return (self.type, self.value)[index]
def __str__(self):
return "<{}: type={!r}, value={!r}>".format(
type(self).__name__, self.type, self.value
)
__repr__ = __str__
def mark_as(dispatch_type):
"""
Creates a utility function to mark something as a specific type.
Examples
--------
>>> mark_int = mark_as(int)
>>> mark_int(1)
<Dispatchable: type=<class 'int'>, value=1>
"""
return functools.partial(Dispatchable, dispatch_type=dispatch_type)
def all_of_type(arg_type):
"""
Marks all unmarked arguments as a given type.
Examples
--------
>>> @all_of_type(str)
... def f(a, b):
... return a, Dispatchable(b, int)
>>> f('a', 1)
(<Dispatchable: type=<class 'str'>, value='a'>, <Dispatchable: type=<class 'int'>, value=1>)
"""
def outer(func):
@functools.wraps(func)
def inner(*args, **kwargs):
extracted_args = func(*args, **kwargs)
return tuple(
Dispatchable(arg, arg_type)
if not isinstance(arg, Dispatchable)
else arg
for arg in extracted_args
)
return inner
return outer
def wrap_single_convertor(convert_single):
"""
Wraps a ``__ua_convert__`` defined for a single element to all elements.
If any of them return ``NotImplemented``, the operation is assumed to be
undefined.
Accepts a signature of (value, type, coerce).
"""
@functools.wraps(convert_single)
def __ua_convert__(dispatchables, coerce):
converted = []
for d in dispatchables:
c = convert_single(d.value, d.type, coerce and d.coercible)
if c is NotImplemented:
return NotImplemented
converted.append(c)
return converted
return __ua_convert__
def wrap_single_convertor_instance(convert_single):
"""
Wraps a ``__ua_convert__`` defined for a single element to all elements.
If any of them return ``NotImplemented``, the operation is assumed to be
undefined.
Accepts a signature of (value, type, coerce).
"""
@functools.wraps(convert_single)
def __ua_convert__(self, dispatchables, coerce):
converted = []
for d in dispatchables:
c = convert_single(self, d.value, d.type, coerce and d.coercible)
if c is NotImplemented:
return NotImplemented
converted.append(c)
return converted
return __ua_convert__
def determine_backend(value, dispatch_type, *, domain, only=True, coerce=False):
"""Set the backend to the first active backend that supports ``value``
This is useful for functions that call multimethods without any dispatchable
arguments. You can use :func:`determine_backend` to ensure the same backend
is used everywhere in a block of multimethod calls.
Parameters
----------
value
The value being tested
dispatch_type
The dispatch type associated with ``value``, aka
":ref:`marking <MarkingGlossary>`".
domain: string
The domain to query for backends and set.
coerce: bool
Whether or not to allow coercion to the backend's types. Implies ``only``.
only: bool
Whether or not this should be the last backend to try.
See Also
--------
set_backend: For when you know which backend to set
Notes
-----
Support is determined by the ``__ua_convert__`` protocol. Backends not
supporting the type must return ``NotImplemented`` from their
``__ua_convert__`` if they don't support input of that type.
Examples
--------
Suppose we have two backends ``BackendA`` and ``BackendB`` each supporting
different types, ``TypeA`` and ``TypeB``. Neither supporting the other type:
>>> with ua.set_backend(ex.BackendA):
... ex.call_multimethod(ex.TypeB(), ex.TypeB())
Traceback (most recent call last):
...
uarray.BackendNotImplementedError: ...
Now consider a multimethod that creates a new object of ``TypeA``, or
``TypeB`` depending on the active backend.
>>> with ua.set_backend(ex.BackendA), ua.set_backend(ex.BackendB):
... res = ex.creation_multimethod()
... ex.call_multimethod(res, ex.TypeA())
Traceback (most recent call last):
...
uarray.BackendNotImplementedError: ...
``res`` is an object of ``TypeB`` because ``BackendB`` is set in the
innermost with statement. So, ``call_multimethod`` fails since the types
don't match.
Instead, we need to first find a backend suitable for all of our objects.
>>> with ua.set_backend(ex.BackendA), ua.set_backend(ex.BackendB):
... x = ex.TypeA()
... with ua.determine_backend(x, "mark", domain="ua_examples"):
... res = ex.creation_multimethod()
... ex.call_multimethod(res, x)
TypeA
"""
dispatchables = (Dispatchable(value, dispatch_type, coerce),)
backend = _uarray.determine_backend(domain, dispatchables, coerce)
return set_backend(backend, coerce=coerce, only=only)
def determine_backend_multi(
dispatchables, *, domain, only=True, coerce=False, **kwargs
):
"""Set a backend supporting all ``dispatchables``
This is useful for functions that call multimethods without any dispatchable
arguments. You can use :func:`determine_backend_multi` to ensure the same
backend is used everywhere in a block of multimethod calls involving
multiple arrays.
Parameters
----------
dispatchables: Sequence[Union[uarray.Dispatchable, Any]]
The dispatchables that must be supported
domain: string
The domain to query for backends and set.
coerce: bool
Whether or not to allow coercion to the backend's types. Implies ``only``.
only: bool
Whether or not this should be the last backend to try.
dispatch_type: Optional[Any]
The default dispatch type associated with ``dispatchables``, aka
":ref:`marking <MarkingGlossary>`".
See Also
--------
determine_backend: For a single dispatch value
set_backend: For when you know which backend to set
Notes
-----
Support is determined by the ``__ua_convert__`` protocol. Backends not
supporting the type must return ``NotImplemented`` from their
``__ua_convert__`` if they don't support input of that type.
Examples
--------
:func:`determine_backend` allows the backend to be set from a single
object. :func:`determine_backend_multi` allows multiple objects to be
checked simultaneously for support in the backend. Suppose we have a
``BackendAB`` which supports ``TypeA`` and ``TypeB`` in the same call,
and a ``BackendBC`` that doesn't support ``TypeA``.
>>> with ua.set_backend(ex.BackendAB), ua.set_backend(ex.BackendBC):
... a, b = ex.TypeA(), ex.TypeB()
... with ua.determine_backend_multi(
... [ua.Dispatchable(a, "mark"), ua.Dispatchable(b, "mark")],
... domain="ua_examples"
... ):
... res = ex.creation_multimethod()
... ex.call_multimethod(res, a, b)
TypeA
This won't call ``BackendBC`` because it doesn't support ``TypeA``.
    We can also leave out the ``ua.Dispatchable`` if we specify the
default ``dispatch_type`` for the ``dispatchables`` argument.
>>> with ua.set_backend(ex.BackendAB), ua.set_backend(ex.BackendBC):
... a, b = ex.TypeA(), ex.TypeB()
... with ua.determine_backend_multi(
... [a, b], dispatch_type="mark", domain="ua_examples"
... ):
... res = ex.creation_multimethod()
... ex.call_multimethod(res, a, b)
TypeA
"""
if "dispatch_type" in kwargs:
disp_type = kwargs.pop("dispatch_type")
dispatchables = tuple(
d if isinstance(d, Dispatchable) else Dispatchable(d, disp_type)
for d in dispatchables
)
else:
dispatchables = tuple(dispatchables)
if not all(isinstance(d, Dispatchable) for d in dispatchables):
raise TypeError("dispatchables must be instances of uarray.Dispatchable")
if len(kwargs) != 0:
raise TypeError(f"Received unexpected keyword arguments: {kwargs}")
backend = _uarray.determine_backend(domain, dispatchables, coerce)
return set_backend(backend, coerce=coerce, only=only)
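To see `wrap_single_convertor` in action outside a full backend, here is a self-contained sketch; the `Dispatchable` stub and the sample convertor are illustrative stand-ins for the real classes above:

```python
import functools

class Dispatchable:
    # Minimal stand-in for uarray.Dispatchable: a (value, type) pair
    # plus a coercibility flag.
    def __init__(self, value, dispatch_type, coercible=True):
        self.value = value
        self.type = dispatch_type
        self.coercible = coercible

def wrap_single_convertor(convert_single):
    # Same logic as above: lift a per-element convertor to a list of
    # dispatchables; any NotImplemented aborts the whole conversion.
    @functools.wraps(convert_single)
    def __ua_convert__(dispatchables, coerce):
        converted = []
        for d in dispatchables:
            c = convert_single(d.value, d.type, coerce and d.coercible)
            if c is NotImplemented:
                return NotImplemented
            converted.append(c)
        return converted
    return __ua_convert__

@wrap_single_convertor
def convert(value, dispatch_type, coerce):
    # Sample convertor: handles int dispatchables only, coercing to str.
    if dispatch_type is int:
        return str(value) if coerce else value
    return NotImplemented

print(convert([Dispatchable(1, int), Dispatchable(2, int)], True))
# ['1', '2']
print(convert([Dispatchable(1.0, float)], True))
# NotImplemented
```

`wrap_single_convertor_instance` is identical except that the wrapped convertor also receives `self`, for backends implemented as classes.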
# [end of file: 20,476 chars, avg line 28.128023, max line 101, ext: py]

# repo: scipy | file: scipy-main/scipy/_lib/tests/test_tmpdirs.py
""" Test tmpdirs module """
from os import getcwd
from os.path import realpath, abspath, dirname, isfile, join as pjoin, exists
from scipy._lib._tmpdirs import tempdir, in_tempdir, in_dir
from numpy.testing import assert_, assert_equal
MY_PATH = abspath(__file__)
MY_DIR = dirname(MY_PATH)
def test_tempdir():
with tempdir() as tmpdir:
fname = pjoin(tmpdir, 'example_file.txt')
with open(fname, "w") as fobj:
            fobj.write('a string\n')
assert_(not exists(tmpdir))
def test_in_tempdir():
my_cwd = getcwd()
with in_tempdir() as tmpdir:
with open('test.txt', "w") as f:
f.write('some text')
assert_(isfile('test.txt'))
assert_(isfile(pjoin(tmpdir, 'test.txt')))
assert_(not exists(tmpdir))
assert_equal(getcwd(), my_cwd)
def test_given_directory():
# Test InGivenDirectory
cwd = getcwd()
with in_dir() as tmpdir:
assert_equal(tmpdir, abspath(cwd))
assert_equal(tmpdir, abspath(getcwd()))
with in_dir(MY_DIR) as tmpdir:
assert_equal(tmpdir, MY_DIR)
assert_equal(realpath(MY_DIR), realpath(abspath(getcwd())))
# We were deleting the given directory! Check not so now.
assert_(isfile(MY_PATH))
# [end of file: 1,240 chars, avg line 27.860465, max line 77, ext: py]

# repo: scipy | file: scipy-main/scipy/_lib/tests/test__testutils.py
import sys
from scipy._lib._testutils import _parse_size, _get_mem_available
import pytest
def test__parse_size():
expected = {
'12': 12e6,
'12 b': 12,
'12k': 12e3,
' 12 M ': 12e6,
' 12 G ': 12e9,
' 12Tb ': 12e12,
'12 Mib ': 12 * 1024.0**2,
'12Tib': 12 * 1024.0**4,
}
for inp, outp in sorted(expected.items()):
if outp is None:
with pytest.raises(ValueError):
_parse_size(inp)
else:
assert _parse_size(inp) == outp
def test__mem_available():
# May return None on non-Linux platforms
available = _get_mem_available()
if sys.platform.startswith('linux'):
assert available >= 0
else:
assert available is None or available >= 0
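The expected values in `test__parse_size` imply a particular unit scheme: a bare number means megabytes, `b` means bytes, `k`/`M`/`G`/`T` are decimal (power-of-1000) prefixes, and an `ib` suffix (`Mib`, `Tib`) selects binary (power-of-1024) units, all case-insensitively with optional whitespace. A hypothetical re-implementation matching that behavior (not scipy's actual `_parse_size`) could look like:

```python
import re


def parse_size(size_str):
    # Suffix -> multiplier table; the empty suffix means megabytes,
    # matching the '12' -> 12e6 case in the test above.
    suffixes = {'': 1e6, 'b': 1.0,
                'k': 1e3, 'm': 1e6, 'g': 1e9, 't': 1e12,
                'kb': 1e3, 'mb': 1e6, 'gb': 1e9, 'tb': 1e12,
                'kib': 1024.0, 'mib': 1024.0**2,
                'gib': 1024.0**3, 'tib': 1024.0**4}
    # Sort alternatives longest-first so 'mib' is tried before 'm'.
    alts = '|'.join(sorted(suffixes, key=len, reverse=True))
    m = re.match(rf'^\s*(\d+)\s*({alts})\s*$', size_str, re.IGNORECASE)
    if m is None:
        raise ValueError(f'invalid size specifier: {size_str!r}')
    return float(m.group(1)) * suffixes[m.group(2).lower()]
```

Sorting the regex alternatives by length matters: with `m` before `mib`, the engine would match the shorter prefix and then fail on the leftover `ib`.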
[end of file: 800 chars, avg line length 23.27, max line length 65, py]

--- scipy-main/scipy/_lib/tests/test_warnings.py (repo: scipy) ---
"""
Tests which scan for certain occurrences in the code, they may not find
all of these occurrences but should catch almost all. This file was adapted
from NumPy.
"""
import os
from pathlib import Path
import ast
import tokenize
import scipy
import pytest
class ParseCall(ast.NodeVisitor):
def __init__(self):
self.ls = []
def visit_Attribute(self, node):
ast.NodeVisitor.generic_visit(self, node)
self.ls.append(node.attr)
def visit_Name(self, node):
self.ls.append(node.id)
class FindFuncs(ast.NodeVisitor):
def __init__(self, filename):
super().__init__()
self.__filename = filename
self.bad_filters = []
self.bad_stacklevels = []
def visit_Call(self, node):
p = ParseCall()
p.visit(node.func)
ast.NodeVisitor.generic_visit(self, node)
if p.ls[-1] == 'simplefilter' or p.ls[-1] == 'filterwarnings':
if node.args[0].s == "ignore":
self.bad_filters.append(
f"{self.__filename}:{node.lineno}")
if p.ls[-1] == 'warn' and (
len(p.ls) == 1 or p.ls[-2] == 'warnings'):
if self.__filename == "_lib/tests/test_warnings.py":
# This file
return
# See if stacklevel exists:
if len(node.args) == 3:
return
args = {kw.arg for kw in node.keywords}
if "stacklevel" not in args:
self.bad_stacklevels.append(
f"{self.__filename}:{node.lineno}")
@pytest.fixture(scope="session")
def warning_calls():
# combined "ignore" and stacklevel error
base = Path(scipy.__file__).parent
bad_filters = []
bad_stacklevels = []
for path in base.rglob("*.py"):
# use tokenize to auto-detect encoding on systems where no
# default encoding is defined (e.g., LANG='C')
with tokenize.open(str(path)) as file:
tree = ast.parse(file.read(), filename=str(path))
finder = FindFuncs(path.relative_to(base))
finder.visit(tree)
bad_filters.extend(finder.bad_filters)
bad_stacklevels.extend(finder.bad_stacklevels)
return bad_filters, bad_stacklevels
@pytest.mark.slow
def test_warning_calls_filters(warning_calls):
bad_filters, bad_stacklevels = warning_calls
# We try not to add filters in the code base, because those filters aren't
# thread-safe. We aim to only filter in tests with
# np.testing.suppress_warnings. However, in some cases it may prove
# necessary to filter out warnings, because we can't (easily) fix the root
# cause for them and we don't want users to see some warnings when they use
# SciPy correctly. So we list exceptions here. Add new entries only if
# there's a good reason.
allowed_filters = (
os.path.join('datasets', '_fetchers.py'),
os.path.join('datasets', '__init__.py'),
os.path.join('optimize', '_optimize.py'),
os.path.join('sparse', '__init__.py'), # np.matrix pending-deprecation
os.path.join('stats', '_discrete_distns.py'), # gh-14901
os.path.join('stats', '_continuous_distns.py'),
)
bad_filters = [item for item in bad_filters if item.split(':')[0] not in
allowed_filters]
if bad_filters:
raise AssertionError(
"warning ignore filter should not be used, instead, use\n"
"numpy.testing.suppress_warnings (in tests only);\n"
"found in:\n {}".format(
"\n ".join(bad_filters)))
@pytest.mark.slow
@pytest.mark.xfail(reason="stacklevels currently missing")
def test_warning_calls_stacklevels(warning_calls):
bad_filters, bad_stacklevels = warning_calls
msg = ""
if bad_filters:
msg += ("warning ignore filter should not be used, instead, use\n"
"numpy.testing.suppress_warnings (in tests only);\n"
"found in:\n {}".format("\n ".join(bad_filters)))
msg += "\n\n"
if bad_stacklevels:
msg += "warnings should have an appropriate stacklevel:\n {}".format(
"\n ".join(bad_stacklevels))
if msg:
raise AssertionError(msg)
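The `FindFuncs` visitor above walks the AST of every source file, flagging `warnings.warn` calls that pass neither a third positional argument nor a `stacklevel` keyword. The core of that check can be condensed into a few lines; this is a simplified sketch (not the scipy checker itself), run here on an inline source string:

```python
import ast

SRC = """
import warnings
warnings.warn("no stacklevel here")
warnings.warn("has one", DeprecationWarning, stacklevel=2)
"""


def missing_stacklevel(source):
    # Return line numbers of warnings.warn(...) calls that omit a
    # stacklevel -- a compact version of the FindFuncs visitor above.
    bad = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "warn"):
            keywords = {kw.arg for kw in node.keywords}
            # Three positional args means stacklevel was passed
            # positionally, mirroring the `len(node.args) == 3` check.
            if len(node.args) < 3 and "stacklevel" not in keywords:
                bad.append(node.lineno)
    return bad
```

On `SRC`, only the first `warn` call (line 3, counting the leading blank line) is flagged; the second passes `stacklevel=2` and is accepted.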
[end of file: 4,275 chars, avg line length 31.39, max line length 80, py]

--- scipy-main/scipy/_lib/tests/test__util.py (repo: scipy) ---
from multiprocessing import Pool
from multiprocessing.pool import Pool as PWL
import os
import re
import math
from fractions import Fraction
import numpy as np
from numpy.testing import assert_equal, assert_
import pytest
from pytest import raises as assert_raises, deprecated_call
import scipy
from scipy._lib._util import (_aligned_zeros, check_random_state, MapWrapper,
getfullargspec_no_self, FullArgSpec,
rng_integers, _validate_int, _rename_parameter,
_contains_nan, _rng_html_rewrite)
def test__aligned_zeros():
niter = 10
def check(shape, dtype, order, align):
err_msg = repr((shape, dtype, order, align))
x = _aligned_zeros(shape, dtype, order, align=align)
if align is None:
align = np.dtype(dtype).alignment
assert_equal(x.__array_interface__['data'][0] % align, 0)
if hasattr(shape, '__len__'):
assert_equal(x.shape, shape, err_msg)
else:
assert_equal(x.shape, (shape,), err_msg)
assert_equal(x.dtype, dtype)
if order == "C":
assert_(x.flags.c_contiguous, err_msg)
elif order == "F":
if x.size > 0:
# Size-0 arrays get invalid flags on NumPy 1.5
assert_(x.flags.f_contiguous, err_msg)
elif order is None:
assert_(x.flags.c_contiguous, err_msg)
else:
raise ValueError()
# try various alignments
for align in [1, 2, 3, 4, 8, 16, 32, 64, None]:
for n in [0, 1, 3, 11]:
for order in ["C", "F", None]:
for dtype in [np.uint8, np.float64]:
for shape in [n, (1, 2, 3, n)]:
for j in range(niter):
check(shape, dtype, order, align)
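The alignment property asserted above (`data pointer % align == 0`) is typically achieved by over-allocating a byte buffer and starting the array at the first suitably aligned address inside it. A hypothetical sketch of that technique (not scipy's actual `_aligned_zeros`, which also handles `order` and edge cases):

```python
import numpy as np


def aligned_zeros(shape, dtype=np.float64, align=64):
    # Over-allocate by `align` bytes, then slice so the array data
    # starts at the first multiple of `align` inside the buffer.
    dtype = np.dtype(dtype)
    if not hasattr(shape, '__len__'):
        shape = (shape,)
    size = int(np.prod(shape)) * dtype.itemsize
    buf = np.zeros(size + align, dtype=np.uint8)
    start = buf.ctypes.data
    offset = (-start) % align          # bytes to skip to reach alignment
    return buf[offset:offset + size].view(dtype).reshape(shape)
```

The returned array is a view into the over-allocated buffer, so the buffer stays alive as long as the view does; the data pointer is aligned by construction.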
def test_check_random_state():
# If seed is None, return the RandomState singleton used by np.random.
# If seed is an int, return a new RandomState instance seeded with seed.
# If seed is already a RandomState instance, return it.
# Otherwise raise ValueError.
rsi = check_random_state(1)
assert_equal(type(rsi), np.random.RandomState)
rsi = check_random_state(rsi)
assert_equal(type(rsi), np.random.RandomState)
rsi = check_random_state(None)
assert_equal(type(rsi), np.random.RandomState)
assert_raises(ValueError, check_random_state, 'a')
if hasattr(np.random, 'Generator'):
# np.random.Generator is only available in NumPy >= 1.17
rg = np.random.Generator(np.random.PCG64())
rsi = check_random_state(rg)
assert_equal(type(rsi), np.random.Generator)
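The behavior spelled out in the comments above (None means the global singleton, an int seeds a fresh `RandomState`, an existing RNG object passes through, anything else raises) can be sketched in a few lines. This is a hypothetical re-implementation matching the tested contract, not the scipy helper itself:

```python
import numpy as np


def check_random_state(seed):
    # None -> the RandomState singleton behind the np.random.* functions
    if seed is None:
        return np.random.mtrand._rand
    # int -> a new RandomState seeded with it
    if isinstance(seed, (int, np.integer)):
        return np.random.RandomState(seed)
    # existing RNG objects (legacy or new-style) pass through untouched
    if isinstance(seed, (np.random.RandomState, np.random.Generator)):
        return seed
    raise ValueError(f"{seed!r} cannot be used to seed a RandomState")
```

Passing an existing generator through unchanged is what lets callers thread one RNG through a whole computation for reproducibility.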
def test_getfullargspec_no_self():
p = MapWrapper(1)
argspec = getfullargspec_no_self(p.__init__)
assert_equal(argspec, FullArgSpec(['pool'], None, None, (1,), [],
None, {}))
argspec = getfullargspec_no_self(p.__call__)
assert_equal(argspec, FullArgSpec(['func', 'iterable'], None, None, None,
[], None, {}))
class _rv_generic:
def _rvs(self, a, b=2, c=3, *args, size=None, **kwargs):
return None
rv_obj = _rv_generic()
argspec = getfullargspec_no_self(rv_obj._rvs)
assert_equal(argspec, FullArgSpec(['a', 'b', 'c'], 'args', 'kwargs',
(2, 3), ['size'], {'size': None}, {}))
def test_mapwrapper_serial():
in_arg = np.arange(10.)
out_arg = np.sin(in_arg)
p = MapWrapper(1)
assert_(p._mapfunc is map)
assert_(p.pool is None)
assert_(p._own_pool is False)
out = list(p(np.sin, in_arg))
assert_equal(out, out_arg)
with assert_raises(RuntimeError):
p = MapWrapper(0)
def test_pool():
with Pool(2) as p:
p.map(math.sin, [1, 2, 3, 4])
def test_mapwrapper_parallel():
in_arg = np.arange(10.)
out_arg = np.sin(in_arg)
with MapWrapper(2) as p:
out = p(np.sin, in_arg)
assert_equal(list(out), out_arg)
assert_(p._own_pool is True)
assert_(isinstance(p.pool, PWL))
assert_(p._mapfunc is not None)
# the context manager should've closed the internal pool
# check that it has by asking it to calculate again.
with assert_raises(Exception) as excinfo:
p(np.sin, in_arg)
assert_(excinfo.type is ValueError)
# can also set a PoolWrapper up with a map-like callable instance
with Pool(2) as p:
q = MapWrapper(p.map)
assert_(q._own_pool is False)
q.close()
# closing the PoolWrapper shouldn't close the internal pool
# because it didn't create it
out = p.map(np.sin, in_arg)
assert_equal(list(out), out_arg)
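The tests above exercise three construction modes of the wrapper: `1` means plain serial `map`, `N > 1` spins up a worker pool the wrapper owns, and a map-like callable is used directly without ownership. A simplified, hypothetical take on that pattern (scipy's `MapWrapper` also supports `-1` for all cores and context-manager cleanup):

```python
from multiprocessing import Pool


class SimpleMapWrapper:
    """Hypothetical, simplified version of the MapWrapper pattern."""

    def __init__(self, workers=1):
        self.pool = None
        self._own_pool = False
        if callable(workers):
            # Any map-like callable can be supplied directly; we don't
            # own it, so close() must not touch it.
            self._mapfunc = workers
        elif workers == 1:
            self._mapfunc = map          # serial evaluation
        elif int(workers) > 1:
            self.pool = Pool(processes=int(workers))
            self._mapfunc = self.pool.map
            self._own_pool = True
        else:
            raise RuntimeError(f"invalid number of workers: {workers}")

    def __call__(self, func, iterable):
        return self._mapfunc(func, iterable)

    def close(self):
        # Only shut down a pool this wrapper created itself.
        if self._own_pool and self.pool is not None:
            self.pool.close()
            self.pool.join()
```

The ownership flag is the key design point: as the last test above checks, closing the wrapper must not close a pool that the caller passed in.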
# get our custom ones and a few from the "import *" cases
@pytest.mark.parametrize(
'key', ('ifft', 'diag', 'arccos', 'randn', 'rand', 'array'))
def test_numpy_deprecation(key):
"""Test that 'from numpy import *' functions are deprecated."""
if key in ('ifft', 'diag', 'arccos'):
arg = [1.0, 0.]
elif key == 'finfo':
arg = float
else:
arg = 2
func = getattr(scipy, key)
match = r'scipy\.%s is deprecated.*2\.0\.0' % key
with deprecated_call(match=match) as dep:
func(arg) # deprecated
# in case we catch more than one dep warning
fnames = [os.path.splitext(d.filename)[0] for d in dep.list]
basenames = [os.path.basename(fname) for fname in fnames]
assert 'test__util' in basenames
if key in ('rand', 'randn'):
root = np.random
elif key == 'ifft':
root = np.fft
else:
root = np
func_np = getattr(root, key)
func_np(arg) # not deprecated
assert func_np is not func
# classes should remain classes
if isinstance(func_np, type):
assert isinstance(func, type)
def test_numpy_deprecation_functionality():
# Check that the deprecation wrappers don't break basic NumPy
# functionality
with deprecated_call():
x = scipy.array([1, 2, 3], dtype=scipy.float64)
assert x.dtype == scipy.float64
assert x.dtype == np.float64
x = scipy.finfo(scipy.float32)
assert x.eps == np.finfo(np.float32).eps
assert scipy.float64 == np.float64
assert issubclass(np.float64, scipy.float64)
def test_rng_integers():
rng = np.random.RandomState()
# test that numbers are inclusive of high point
arr = rng_integers(rng, low=2, high=5, size=100, endpoint=True)
assert np.max(arr) == 5
assert np.min(arr) == 2
assert arr.shape == (100, )
# test that numbers are inclusive of high point
arr = rng_integers(rng, low=5, size=100, endpoint=True)
assert np.max(arr) == 5
assert np.min(arr) == 0
assert arr.shape == (100, )
# test that numbers are exclusive of high point
arr = rng_integers(rng, low=2, high=5, size=100, endpoint=False)
assert np.max(arr) == 4
assert np.min(arr) == 2
assert arr.shape == (100, )
# test that numbers are exclusive of high point
arr = rng_integers(rng, low=5, size=100, endpoint=False)
assert np.max(arr) == 4
assert np.min(arr) == 0
assert arr.shape == (100, )
# now try with np.random.Generator
try:
rng = np.random.default_rng()
except AttributeError:
return
# test that numbers are inclusive of high point
arr = rng_integers(rng, low=2, high=5, size=100, endpoint=True)
assert np.max(arr) == 5
assert np.min(arr) == 2
assert arr.shape == (100, )
# test that numbers are inclusive of high point
arr = rng_integers(rng, low=5, size=100, endpoint=True)
assert np.max(arr) == 5
assert np.min(arr) == 0
assert arr.shape == (100, )
# test that numbers are exclusive of high point
arr = rng_integers(rng, low=2, high=5, size=100, endpoint=False)
assert np.max(arr) == 4
assert np.min(arr) == 2
assert arr.shape == (100, )
# test that numbers are exclusive of high point
arr = rng_integers(rng, low=5, size=100, endpoint=False)
assert np.max(arr) == 4
assert np.min(arr) == 0
assert arr.shape == (100, )
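`rng_integers` papers over an API difference: `np.random.Generator.integers` accepts `endpoint` directly, while the legacy `RandomState.randint` is always exclusive of `high`. A hypothetical shim with the semantics the tests above check (inclusive bounds when `endpoint=True`, for both RNG types):

```python
import numpy as np


def rng_integers(gen, low, high=None, size=None, endpoint=False):
    # New-style Generator: delegate, it supports `endpoint` natively.
    if isinstance(gen, np.random.Generator):
        return gen.integers(low, high=high, size=size, endpoint=endpoint)
    # Legacy RandomState: randint excludes `high`, so widen the bounds
    # by one to emulate an inclusive endpoint.
    if endpoint:
        if high is None:
            low, high = 0, low + 1   # randint(low) means range [0, low)
        else:
            high = high + 1
    return gen.randint(low, high=high, size=size)
```

Note the `high is None` special case: for `randint`, a single argument is the exclusive upper bound, so the inclusive form needs `randint(0, low + 1)`.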
class TestValidateInt:
@pytest.mark.parametrize('n', [4, np.uint8(4), np.int16(4), np.array(4)])
def test_validate_int(self, n):
n = _validate_int(n, 'n')
assert n == 4
@pytest.mark.parametrize('n', [4.0, np.array([4]), Fraction(4, 1)])
def test_validate_int_bad(self, n):
with pytest.raises(TypeError, match='n must be an integer'):
_validate_int(n, 'n')
def test_validate_int_below_min(self):
with pytest.raises(ValueError, match='n must be an integer not '
'less than 0'):
_validate_int(-1, 'n', 0)
class TestRenameParameter:
# check that wrapper `_rename_parameter` for backward-compatible
# keyword renaming works correctly
# Example method/function that still accepts keyword `old`
@_rename_parameter("old", "new")
def old_keyword_still_accepted(self, new):
return new
# Example method/function for which keyword `old` is deprecated
@_rename_parameter("old", "new", dep_version="1.9.0")
def old_keyword_deprecated(self, new):
return new
def test_old_keyword_still_accepted(self):
# positional argument and both keyword work identically
res1 = self.old_keyword_still_accepted(10)
res2 = self.old_keyword_still_accepted(new=10)
res3 = self.old_keyword_still_accepted(old=10)
assert res1 == res2 == res3 == 10
# unexpected keyword raises an error
message = re.escape("old_keyword_still_accepted() got an unexpected")
with pytest.raises(TypeError, match=message):
self.old_keyword_still_accepted(unexpected=10)
# multiple values for the same parameter raises an error
message = re.escape("old_keyword_still_accepted() got multiple")
with pytest.raises(TypeError, match=message):
self.old_keyword_still_accepted(10, new=10)
with pytest.raises(TypeError, match=message):
self.old_keyword_still_accepted(10, old=10)
with pytest.raises(TypeError, match=message):
self.old_keyword_still_accepted(new=10, old=10)
def test_old_keyword_deprecated(self):
# positional argument and both keyword work identically,
# but use of old keyword results in DeprecationWarning
dep_msg = "Use of keyword argument `old` is deprecated"
res1 = self.old_keyword_deprecated(10)
res2 = self.old_keyword_deprecated(new=10)
with pytest.warns(DeprecationWarning, match=dep_msg):
res3 = self.old_keyword_deprecated(old=10)
assert res1 == res2 == res3 == 10
# unexpected keyword raises an error
message = re.escape("old_keyword_deprecated() got an unexpected")
with pytest.raises(TypeError, match=message):
self.old_keyword_deprecated(unexpected=10)
# multiple values for the same parameter raises an error and,
# if old keyword is used, results in DeprecationWarning
message = re.escape("old_keyword_deprecated() got multiple")
with pytest.raises(TypeError, match=message):
self.old_keyword_deprecated(10, new=10)
with pytest.raises(TypeError, match=message), \
pytest.warns(DeprecationWarning, match=dep_msg):
self.old_keyword_deprecated(10, old=10)
with pytest.raises(TypeError, match=message), \
pytest.warns(DeprecationWarning, match=dep_msg):
self.old_keyword_deprecated(new=10, old=10)
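The contract tested above (old keyword silently rerouted, optionally with a `DeprecationWarning`; both keywords together is an error) fits naturally in a small decorator. This is a hypothetical sketch of such a keyword-renaming wrapper, not scipy's exact `_rename_parameter`:

```python
import functools
import warnings


def rename_parameter(old_name, new_name, dep_version=None):
    # Decorator that lets callers keep using `old_name=` while the
    # wrapped function only knows about `new_name`.
    def decorator(fun):
        @functools.wraps(fun)
        def wrapper(*args, **kwargs):
            if old_name in kwargs:
                if dep_version:
                    warnings.warn(
                        f"Use of keyword argument `{old_name}` is "
                        "deprecated", DeprecationWarning, stacklevel=2)
                if new_name in kwargs:
                    raise TypeError(
                        f"{fun.__name__}() got multiple values for "
                        f"argument now known as `{new_name}`")
                kwargs[new_name] = kwargs.pop(old_name)
            return fun(*args, **kwargs)
        return wrapper
    return decorator
```

Mixing a positional value with `old=` also errors, but for free: after the rename, Python itself raises the usual "got multiple values" `TypeError`.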
class TestContainsNaNTest:
def test_policy(self):
data = np.array([1, 2, 3, np.nan])
contains_nan, nan_policy = _contains_nan(data, nan_policy="propagate")
assert contains_nan
assert nan_policy == "propagate"
contains_nan, nan_policy = _contains_nan(data, nan_policy="omit")
assert contains_nan
assert nan_policy == "omit"
msg = "The input contains nan values"
with pytest.raises(ValueError, match=msg):
_contains_nan(data, nan_policy="raise")
msg = "nan_policy must be one of"
with pytest.raises(ValueError, match=msg):
_contains_nan(data, nan_policy="nan")
def test_contains_nan_1d(self):
data1 = np.array([1, 2, 3])
assert not _contains_nan(data1)[0]
data2 = np.array([1, 2, 3, np.nan])
assert _contains_nan(data2)[0]
data3 = np.array([np.nan, 2, 3, np.nan])
assert _contains_nan(data3)[0]
data4 = np.array([1, 2, "3", np.nan]) # converted to string "nan"
assert not _contains_nan(data4)[0]
data5 = np.array([1, 2, "3", np.nan], dtype='object')
assert _contains_nan(data5)[0]
def test_contains_nan_2d(self):
data1 = np.array([[1, 2], [3, 4]])
assert not _contains_nan(data1)[0]
data2 = np.array([[1, 2], [3, np.nan]])
assert _contains_nan(data2)[0]
data3 = np.array([["1", 2], [3, np.nan]]) # converted to string "nan"
assert not _contains_nan(data3)[0]
data4 = np.array([["1", 2], [3, np.nan]], dtype='object')
assert _contains_nan(data4)[0]
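The object-dtype cases above hint at the implementation trick: `np.isnan` cannot be applied directly to object (or string) arrays, so the check needs a fallback path. A hypothetical sketch consistent with the tested behavior (not scipy's exact `_contains_nan`):

```python
import numpy as np


def contains_nan(a, nan_policy='propagate'):
    if nan_policy not in ('propagate', 'omit', 'raise'):
        raise ValueError(
            "nan_policy must be one of {'propagate', 'omit', 'raise'}")
    try:
        # Fast path: summing propagates NaN, one isnan call suffices.
        has_nan = bool(np.isnan(np.sum(a)))
    except TypeError:
        # Object/string dtypes: fall back to an element-wise check.
        # String arrays hold the text "nan", not a float, so they
        # correctly report no NaNs here.
        has_nan = any(isinstance(el, float) and np.isnan(el)
                      for el in np.ravel(a))
    if has_nan and nan_policy == 'raise':
        raise ValueError("The input contains nan values")
    return has_nan, nan_policy
```

The sum trick is why the fast path is cheap: one reduction plus one scalar `isnan`, instead of a full boolean mask.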
def test__rng_html_rewrite():
def mock_str():
lines = [
'np.random.default_rng(8989843)',
'np.random.default_rng(seed)',
'np.random.default_rng(0x9a71b21474694f919882289dc1559ca)',
' bob ',
]
return lines
res = _rng_html_rewrite(mock_str)()
ref = [
'np.random.default_rng()',
'np.random.default_rng(seed)',
'np.random.default_rng()',
' bob ',
]
assert res == ref
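The expected output above shows the rewrite rule: literal seeds (decimal or hex) inside `default_rng(...)` are stripped, while symbolic seeds like `seed` survive. A hypothetical decorator with that behavior, sketched with a single regex (not scipy's `_rng_html_rewrite`):

```python
import re


def strip_rng_seeds(lines_func):
    # Rewrite ``np.random.default_rng(<literal>)`` to drop the literal
    # seed; symbolic arguments (e.g. ``seed``) are left untouched.
    pattern = re.compile(r'default_rng\((0x[0-9a-f]+|\d+)\)', re.IGNORECASE)

    def wrapper(*args, **kwargs):
        lines = lines_func(*args, **kwargs)
        return [pattern.sub('default_rng()', line) for line in lines]
    return wrapper
```

Ordering the alternation with the hex branch first matters: a hex literal like `0x9a...` starts with a digit, so `\d+` alone would only consume the leading `0` and leave the match incomplete.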
[end of file: 13,835 chars, avg line length 33.42, max line length 78, py]