| instance_id (string, 13-45 chars) | pull_number (int64, 7 to 30.1k) | repo (83 classes) | version (68 classes) | base_commit (string, 40 chars) | created_at (date, 2013-05-16 18:15:55 to 2025-01-08 15:12:50) | patch (string, 347 to 35.2k chars) | test_patch (string, 432 to 113k chars) | non_py_patch (string, 0 to 18.3k chars) | new_components (list, 0-40 items) | FAIL_TO_PASS (list, 1 to 2.53k items) | PASS_TO_PASS (list, 0 to 1.7k items) | problem_statement (string, 607 to 52.7k chars) | hints_text (string, 0 to 57.4k chars) | environment_setup_commit (167 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
RDFLib__rdflib-1237 | 1,237 | RDFLib/rdflib | null | 8e3c771a5c6a2787b5a34a9288c7d5d6967ecbbe | 2021-01-27T22:01:52Z | diff --git a/rdflib/namespace.py b/rdflib/namespace.py
index aa1b4d0d8..993d82a4d 100644
--- a/rdflib/namespace.py
+++ b/rdflib/namespace.py
@@ -112,7 +112,11 @@ class Namespace(str):
rdflib.term.URIRef(u'http://example.org/Person')
>>> n['first-name'] # as item - for things that are not valid python identifiers
rdflib.term.URIRef(u'http://example.org/first-name')
-
+ >>> n.Person in n
+ True
+ >>> n2 = Namespace("http://example2.org/")
+ >>> n.Person in n2
+ False
"""
def __new__(cls, value):
@@ -141,6 +145,9 @@ def __getattr__(self, name):
def __repr__(self):
return "Namespace(%r)" % str(self)
+
+ def __contains__(self, ref):
+ return ref.startswith(self) # test namespace membership with "ref in ns" syntax
class URIPattern(str):
@@ -187,11 +194,10 @@ def __init__(self, uri, terms):
self.__uris[t] = URIRef(self.uri + t)
def term(self, name):
- uri = self.__uris.get(name)
- if uri is None:
+ if name not in self.__uris:
raise KeyError("term '{}' not in namespace '{}'".format(name, self.uri))
else:
- return uri
+ return self.__uris[name]
def __getitem__(self, key, default=None):
return self.term(key)
@@ -213,9 +219,12 @@ def __repr__(self):
def __dir__(self):
return list(self._ClosedNamespace__uris)
+
+ def __contains__(self, ref):
+ return ref in self.__uris.values() # test namespace membership with "ref in ns" syntax
def _ipython_key_completions_(self):
- return dir(self)
+ return dir(self.uri)
class _RDFNamespace(ClosedNamespace):
@@ -590,6 +599,13 @@ def __init__(self, graph):
self.bind("rdfs", RDFS)
self.bind("xsd", XSD)
+ def __contians__(self, ref):
+ # checks if a reference is in any of the managed namespaces with syntax
+ # "ref in manager". Note that we don't use "ref in ns", as
+ # NamespaceManager.namespaces() returns Iterator[Tuple[str, URIRef]]
+ # rather than Iterator[Tuple[str, Namespace]]
+ return any(ref.startswith(ns) for prefix, ns in self.namespaces())
+
def reset(self):
self.__cache = {}
self.__strie = {}
| diff --git a/test/test_namespace.py b/test/test_namespace.py
index 2706467ff..12fa0600b 100644
--- a/test/test_namespace.py
+++ b/test/test_namespace.py
@@ -1,7 +1,7 @@
import unittest
from rdflib.graph import Graph
-from rdflib.namespace import FOAF
+from rdflib.namespace import Namespace, FOAF, RDF, RDFS, SH
from rdflib.term import URIRef
@@ -112,3 +112,18 @@ def add_not_in_namespace(s):
add_not_in_namespace("givenName"),
URIRef("http://xmlns.com/foaf/0.1/givenName"),
)
+
+ def test_contains_method(self):
+ """Tests for Namespace.__contains__() methods."""
+
+ ref = URIRef('http://www.w3.org/ns/shacl#example')
+ self.assertTrue(type(SH) == Namespace, "SH no longer a Namespace, update test.")
+ self.assertTrue(ref in SH, "sh:example not in SH")
+
+ ref = URIRef('http://www.w3.org/2000/01/rdf-schema#label')
+ self.assertTrue(ref in RDFS, "ClosedNamespace(RDFS) does not include rdfs:label")
+ ref = URIRef('http://www.w3.org/2000/01/rdf-schema#example')
+ self.assertFalse(ref in RDFS, "ClosedNamespace(RDFS) includes out-of-ns member rdfs:example")
+
+ ref = URIRef('http://www.w3.org/1999/02/22-rdf-syntax-ns#type')
+ self.assertTrue(ref in RDF, "_RDFNamespace does not include rdf:type")
\ No newline at end of file
| [
{
"components": [
{
"doc": "",
"lines": [
149,
150
],
"name": "Namespace.__contains__",
"signature": "def __contains__(self, ref):",
"type": "function"
},
{
"doc": "",
"lines": [
223,
224
... | [
"test/test_namespace.py::NamespacePrefixTest::test_contains_method"
] | [
"test/test_namespace.py::NamespacePrefixTest::test_closed_namespace",
"test/test_namespace.py::NamespacePrefixTest::test_compute_qname",
"test/test_namespace.py::NamespacePrefixTest::test_n3",
"test/test_namespace.py::NamespacePrefixTest::test_n32",
"test/test_namespace.py::NamespacePrefixTest::test_reset",... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Adding Namespace.__contains__()
No issue for this, just a quick addition for usability/literacy while working with the library.
## Proposed Changes
- Adds a `Namespace.__contains__()` method to enable syntax sugar of `ref in ns`. This is a natural way to discover if an arbitrary `URIRef` is in the `Namespace`.
- Current method would be manually writing `ref.startswith(ns)`, which doesn't preserve semantics
This has possible extension to `NamespaceManager`, testing if a `URIRef` is in any managed namespace.
```
class NamespaceManager():
def __contians__(self, ref):
return any(ref.startswith(ns) for x, ns in self.namespaces())
```
Note that in `NamespaceManager`, namespaces are stored as `URIRef`s, not `Namespace` objects :/
Hoping this starts a discussion about some usability concerns.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in rdflib/namespace.py]
(definition of Namespace.__contains__:)
def __contains__(self, ref):
(definition of ClosedNamespace.__contains__:)
def __contains__(self, ref):
(definition of NamespaceManager.__contians__:)
def __contians__(self, ref):
[end of new definitions in rdflib/namespace.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 0c11debb5178157baeac27b735e49a757916d2a6 | ||
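The `__contains__` additions in the RDFLib patch above can be demonstrated with a standalone sketch. The `MiniNamespace` and `MiniClosedNamespace` classes below are simplified stand-ins for illustration only, not RDFLib's actual `Namespace`/`ClosedNamespace` classes:

```python
# Simplified stand-ins mirroring the two __contains__ bodies in the
# patch above (illustrative only; not the real rdflib API).

class MiniNamespace(str):
    """An open namespace: any URI sharing the prefix is a member."""
    def __contains__(self, ref):
        # test namespace membership with "ref in ns" syntax
        return ref.startswith(self)

class MiniClosedNamespace:
    """A closed namespace: only explicitly listed terms are members."""
    def __init__(self, uri, terms):
        self.uri = uri
        self._uris = {t: uri + t for t in terms}
    def __contains__(self, ref):
        # only URIs built from the known term list count as members
        return ref in self._uris.values()

ns = MiniNamespace("http://example.org/")
print("http://example.org/Person" in ns)     # True
print("http://example2.org/Person" in ns)    # False

closed = MiniClosedNamespace("http://example.org/", ["label", "comment"])
print("http://example.org/label" in closed)  # True
print("http://example.org/other" in closed)  # False
```

Note the asymmetry the patch encodes: the open namespace accepts any URI with the right prefix, while the closed namespace rejects prefix-matching URIs whose local name is not in its term list.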
pvlib__pvlib-python-1150 | 1,150 | pvlib/pvlib-python | 0.8 | 750b1fad5242e46a3abccd9768f1c4b0de799fa7 | 2021-01-27T21:05:28Z | diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 5586c4e44e..7372dad3df 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -630,8 +630,8 @@ ModelChain model definitions.
modelchain.ModelChain.desoto
modelchain.ModelChain.pvsyst
modelchain.ModelChain.pvwatts_dc
- modelchain.ModelChain.snlinverter
- modelchain.ModelChain.adrinverter
+ modelchain.ModelChain.sandia_inverter
+ modelchain.ModelChain.adr_inverter
modelchain.ModelChain.pvwatts_inverter
modelchain.ModelChain.ashrae_aoi_loss
modelchain.ModelChain.physical_aoi_loss
diff --git a/docs/sphinx/source/whatsnew/v0.9.0.rst b/docs/sphinx/source/whatsnew/v0.9.0.rst
index 3aaff14bdb..605898cf61 100644
--- a/docs/sphinx/source/whatsnew/v0.9.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.9.0.rst
@@ -36,6 +36,10 @@ Breaking changes
* ``irradiance.liujordan`` and ``ForecastModel.cloud_cover_to_irradiance_liujordan``
have been removed. (:pull:`1136`)
+* ``ModelChain.snlinverter`` changed to ``ModelChain.sandia_inverter``.
+ ``ModelChain.adrinverter`` changed to ``ModelChain.adr_inverter``.
+ (:pull:`1150`)
+
Deprecations
~~~~~~~~~~~~
@@ -76,9 +80,9 @@ Enhancements
* Support for :py:func:`~pvlib.inverter.sandia_multi` and
:py:func:`~pvlib.inverter.pvwatts_multi` added to
:py:class:`~pvlib.pvsystem.PVSystem` and
- :py:class:`~pvlib.modelchain.ModelChain` (as ``ac_model='sandia_multi'``
- and ``ac_model='pvwatts_multi'``).
- (:pull:`1076`, :issue:`1067`, :pull:`1132`, :issue:`1117`)
+ :py:class:`~pvlib.modelchain.ModelChain` (as ``ac_model='sandia'``
+ and ``ac_model='pvwatts'``).
+ (:pull:`1076`, :issue:`1067`, :pull:`1132`, :issue:`1117`, :pull:`1150`)
* :py:class:`~pvlib.modelchain.ModelChain` 'run_model' methods now
automatically switch to using ``'effective_irradiance'`` (if available) for
cell temperature models, when ``'poa_global'`` is not provided in input
@@ -88,8 +92,8 @@ Enhancements
``pvsystem.PVSystem.strings_per_inverter``. Note that both attributes still
default to 1. (:pull:`1138`)
* :py:meth:`~pvlib.pvsystem.PVSystem.get_ac` is added to calculate AC power
- from DC power. Use parameter 'model' to specify which inverter model to use.
- (:pull:`1147`, :issue:`998`)
+ from DC power. Use parameter ``model`` to specify which inverter model to use.
+ (:pull:`1147`, :issue:`998`, :pull:`1150`)
Bug fixes
~~~~~~~~~
diff --git a/pvlib/modelchain.py b/pvlib/modelchain.py
index c9c578bf96..0685d56336 100644
--- a/pvlib/modelchain.py
+++ b/pvlib/modelchain.py
@@ -770,15 +770,11 @@ def ac_model(self, model):
elif isinstance(model, str):
model = model.lower()
if model == 'sandia':
- self._ac_model = self.snlinverter
- elif model == 'sandia_multi':
- self._ac_model = self.sandia_multi_inverter
+ self._ac_model = self.sandia_inverter
elif model in 'adr':
- self._ac_model = self.adrinverter
+ self._ac_model = self.adr_inverter
elif model == 'pvwatts':
self._ac_model = self.pvwatts_inverter
- elif model == 'pvwatts_multi':
- self._ac_model = self.pvwatts_multi_inverter
else:
raise ValueError(model + ' is not a valid AC power model')
else:
@@ -787,12 +783,15 @@ def ac_model(self, model):
def infer_ac_model(self):
"""Infer AC power model from system attributes."""
inverter_params = set(self.system.inverter_parameters.keys())
- if self.system.num_arrays > 1:
- return self._infer_ac_model_multi(inverter_params)
if _snl_params(inverter_params):
- return self.snlinverter
+ return self.sandia_inverter
if _adr_params(inverter_params):
- return self.adrinverter
+ if self.system.num_arrays > 1:
+ raise ValueError(
+ 'The adr inverter function cannot be used for an inverter',
+ ' with multiple MPPT inputs')
+ else:
+ return self.adr_inverter
if _pvwatts_params(inverter_params):
return self.pvwatts_inverter
raise ValueError('could not infer AC model from '
@@ -800,40 +799,25 @@ def infer_ac_model(self):
'system.inverter_parameters or explicitly '
'set the model with the ac_model kwarg.')
- def _infer_ac_model_multi(self, inverter_params):
- if _snl_params(inverter_params):
- return self.sandia_multi_inverter
- elif _pvwatts_params(inverter_params):
- return self.pvwatts_multi_inverter
- raise ValueError('could not infer multi-array AC model from '
- 'system.inverter_parameters. Only sandia and pvwatts '
- 'inverter models support multiple '
- 'Arrays. Check system.inverter_parameters or '
- 'explicitly set the model with the ac_model kwarg.')
-
- def sandia_multi_inverter(self):
- self.results.ac = self.system.sandia_multi(
- _tuple_from_dfs(self.results.dc, 'v_mp'),
- _tuple_from_dfs(self.results.dc, 'p_mp')
+ def sandia_inverter(self):
+ self.results.ac = self.system.get_ac(
+ 'sandia',
+ _tuple_from_dfs(self.results.dc, 'p_mp'),
+ v_dc=_tuple_from_dfs(self.results.dc, 'v_mp')
)
return self
- def pvwatts_multi_inverter(self):
- self.results.ac = self.system.pvwatts_multi(self.results.dc)
- return self
-
- def snlinverter(self):
- self.results.ac = self.system.snlinverter(self.results.dc['v_mp'],
- self.results.dc['p_mp'])
- return self
-
- def adrinverter(self):
- self.results.ac = self.system.adrinverter(self.results.dc['v_mp'],
- self.results.dc['p_mp'])
+ def adr_inverter(self):
+ self.results.ac = self.system.get_ac(
+ 'adr',
+ self.results.dc['p_mp'],
+ v_dc=self.results.dc['v_mp']
+ )
return self
def pvwatts_inverter(self):
- self.results.ac = self.system.pvwatts_ac(self.results.dc).fillna(0)
+ ac = self.system.get_ac('pvwatts', self.results.dc)
+ self.results.ac = ac.fillna(0)
return self
@property
diff --git a/pvlib/pvsystem.py b/pvlib/pvsystem.py
index cb003f2b3f..33696bce81 100644
--- a/pvlib/pvsystem.py
+++ b/pvlib/pvsystem.py
@@ -8,7 +8,6 @@
import io
import os
from urllib.request import urlopen
-import warnings
import numpy as np
import pandas as pd
@@ -17,7 +16,6 @@
from pvlib import (atmosphere, iam, inverter, irradiance,
singlediode as _singlediode, temperature)
from pvlib.tools import _build_kwargs
-from pvlib._deprecation import pvlibDeprecationWarning
# a dict of required parameter names for each DC power model
@@ -921,6 +919,7 @@ def get_ac(self, model, p_dc, v_dc=None):
model + ' is not a valid AC power model.',
' model must be one of "sandia", "adr" or "pvwatts"')
+ @deprecated('0.9', alternative='PVSystem.get_ac', removal='0.10')
def snlinverter(self, v_dc, p_dc):
"""Uses :py:func:`pvlib.inverter.sandia` to calculate AC power based on
``self.inverter_parameters`` and the input voltage and power.
@@ -929,19 +928,7 @@ def snlinverter(self, v_dc, p_dc):
"""
return inverter.sandia(v_dc, p_dc, self.inverter_parameters)
- def sandia_multi(self, v_dc, p_dc):
- """Uses :py:func:`pvlib.inverter.sandia_multi` to calculate AC power
- based on ``self.inverter_parameters`` and the input voltage and power.
-
- The parameters `v_dc` and `p_dc` must be tuples with length equal to
- ``self.num_arrays`` if the system has more than one array.
-
- See :py:func:`pvlib.inverter.sandia_multi` for details.
- """
- v_dc = self._validate_per_array(v_dc)
- p_dc = self._validate_per_array(p_dc)
- return inverter.sandia_multi(v_dc, p_dc, self.inverter_parameters)
-
+ @deprecated('0.9', alternative='PVSystem.get_ac', removal='0.10')
def adrinverter(self, v_dc, p_dc):
"""Uses :py:func:`pvlib.inverter.adr` to calculate AC power based on
``self.inverter_parameters`` and the input voltage and power.
@@ -1009,6 +996,7 @@ def pvwatts_losses(self):
self.losses_parameters)
return pvwatts_losses(**kwargs)
+ @deprecated('0.9', alternative='PVSystem.get_ac', removal='0.10')
def pvwatts_ac(self, pdc):
"""
Calculates AC power according to the PVWatts model using
@@ -1023,20 +1011,6 @@ def pvwatts_ac(self, pdc):
return inverter.pvwatts(pdc, self.inverter_parameters['pdc0'],
**kwargs)
- def pvwatts_multi(self, p_dc):
- """Uses :py:func:`pvlib.inverter.pvwatts_multi` to calculate AC power
- based on ``self.inverter_parameters`` and the input voltage and power.
-
- The parameter `p_dc` must be a tuple with length equal to
- ``self.num_arrays`` if the system has more than one array.
-
- See :py:func:`pvlib.inverter.pvwatts_multi` for details.
- """
- p_dc = self._validate_per_array(p_dc)
- kwargs = _build_kwargs(['eta_inv_nom', 'eta_inv_ref'],
- self.inverter_parameters)
- return inverter.pvwatts_multi(p_dc, self.inverter_parameters['pdc0'],
- **kwargs)
@property
@_unwrap_single_value
def module_parameters(self):
| diff --git a/pvlib/tests/test_modelchain.py b/pvlib/tests/test_modelchain.py
index 9085b02b13..63f5c56153 100644
--- a/pvlib/tests/test_modelchain.py
+++ b/pvlib/tests/test_modelchain.py
@@ -3,7 +3,7 @@
import numpy as np
import pandas as pd
-from pvlib import iam, modelchain, pvsystem, temperature
+from pvlib import iam, modelchain, pvsystem, temperature, inverter
from pvlib.modelchain import ModelChain
from pvlib.pvsystem import PVSystem
from pvlib.tracking import SingleAxisTracker
@@ -480,7 +480,7 @@ def test_ModelChain_invalid_inverter_params_arrays(
sapm_dc_snl_ac_system_same_arrays.inverter_parameters = \
inverter_params[inverter]
with pytest.raises(ValueError,
- match=r'Only sandia and pvwatts inverter models'):
+ match=r'adr inverter function cannot'):
ModelChain(sapm_dc_snl_ac_system_same_arrays, location)
@@ -1235,27 +1235,36 @@ def acdc(mc):
mc.results.ac = mc.results.dc
-@pytest.mark.parametrize('ac_model', ['sandia', 'adr',
- 'pvwatts', 'sandia_multi',
- 'pvwatts_multi'])
+@pytest.mark.parametrize('inverter_model', ['sandia', 'adr',
+ 'pvwatts', 'sandia_multi',
+ 'pvwatts_multi'])
def test_ac_models(sapm_dc_snl_ac_system, cec_dc_adr_ac_system,
- pvwatts_dc_pvwatts_ac_system, location, ac_model,
- weather, mocker):
+ pvwatts_dc_pvwatts_ac_system, cec_dc_snl_ac_arrays,
+ pvwatts_dc_pvwatts_ac_system_arrays,
+ location, inverter_model, weather, mocker):
ac_systems = {'sandia': sapm_dc_snl_ac_system,
- 'sandia_multi': sapm_dc_snl_ac_system,
+ 'sandia_multi': cec_dc_snl_ac_arrays,
'adr': cec_dc_adr_ac_system,
'pvwatts': pvwatts_dc_pvwatts_ac_system,
- 'pvwatts_multi': pvwatts_dc_pvwatts_ac_system}
- ac_method_name = {'sandia': 'snlinverter',
- 'sandia_multi': 'sandia_multi',
- 'adr': 'adrinverter',
- 'pvwatts': 'pvwatts_ac',
- 'pvwatts_multi': 'pvwatts_multi'}
- system = ac_systems[ac_model]
-
+ 'pvwatts_multi': pvwatts_dc_pvwatts_ac_system_arrays}
+ inverter_to_ac_model = {
+ 'sandia': 'sandia',
+ 'sandia_multi': 'sandia',
+ 'adr': 'adr',
+ 'pvwatts': 'pvwatts',
+ 'pvwatts_multi': 'pvwatts'}
+ ac_model = inverter_to_ac_model[inverter_model]
+ system = ac_systems[inverter_model]
+
+ mc_inferred = ModelChain(system, location,
+ aoi_model='no_loss', spectral_model='no_loss')
mc = ModelChain(system, location, ac_model=ac_model,
aoi_model='no_loss', spectral_model='no_loss')
- m = mocker.spy(system, ac_method_name[ac_model])
+
+ # tests ModelChain.infer_ac_model
+ assert mc_inferred.ac_model.__name__ == mc.ac_model.__name__
+
+ m = mocker.spy(inverter, inverter_model)
mc.run_model(weather)
assert m.call_count == 1
assert isinstance(mc.results.ac, pd.Series)
@@ -1447,7 +1456,7 @@ def test_losses_models_no_loss(pvwatts_dc_pvwatts_ac_system, location, weather,
def test_invalid_dc_model_params(sapm_dc_snl_ac_system, cec_dc_snl_ac_system,
pvwatts_dc_pvwatts_ac_system, location):
- kwargs = {'dc_model': 'sapm', 'ac_model': 'snlinverter',
+ kwargs = {'dc_model': 'sapm', 'ac_model': 'sandia',
'aoi_model': 'no_loss', 'spectral_model': 'no_loss',
'temperature_model': 'sapm', 'losses_model': 'no_loss'}
sapm_dc_snl_ac_system.module_parameters.pop('A0') # remove a parameter
@@ -1488,9 +1497,9 @@ def test_bad_get_orientation():
def test_with_sapm_pvsystem_arrays(sapm_dc_snl_ac_system_Array, location,
weather):
mc = ModelChain.with_sapm(sapm_dc_snl_ac_system_Array, location,
- ac_model='sandia_multi')
+ ac_model='sandia')
assert mc.dc_model == mc.sapm
- assert mc.ac_model == mc.sandia_multi_inverter
+ assert mc.ac_model == mc.sandia_inverter
mc.run_model(weather)
assert mc.results
@@ -1625,7 +1634,7 @@ def test_ModelChain___repr__(sapm_dc_snl_ac_system, location, strategy,
' solar_position_method: nrel_numpy',
' airmass_model: kastenyoung1989',
' dc_model: sapm',
- ' ac_model: snlinverter',
+ ' ac_model: sandia_inverter',
' aoi_model: sapm_aoi_loss',
' spectral_model: sapm_spectral_loss',
' temperature_model: sapm_temp',
@@ -1778,7 +1787,7 @@ def test_inconsistent_array_params(location,
)
with pytest.raises(ValueError, match=temperature_error):
ModelChain(different_temp_system, location,
- ac_model='sandia_multi',
+ ac_model='sandia',
aoi_model='no_loss', spectral_model='no_loss',
temperature_model='sapm')
diff --git a/pvlib/tests/test_pvsystem.py b/pvlib/tests/test_pvsystem.py
index dd6a51abc2..c1b2cd76d9 100644
--- a/pvlib/tests/test_pvsystem.py
+++ b/pvlib/tests/test_pvsystem.py
@@ -5,7 +5,8 @@
import pandas as pd
import pytest
-from conftest import assert_series_equal, assert_frame_equal
+from conftest import (
+ assert_series_equal, assert_frame_equal, fail_on_pvlib_version)
from numpy.testing import assert_allclose
import unittest.mock as mock
@@ -15,6 +16,7 @@
from pvlib import irradiance
from pvlib.location import Location
from pvlib import temperature
+from pvlib._deprecation import pvlibDeprecationWarning
@pytest.mark.parametrize('iam_model,model_params', [
@@ -1383,12 +1385,12 @@ def test_PVSystem_get_ac_sandia(cec_inverter_parameters, mocker):
vdcs = pd.Series(np.linspace(0, 50, 3))
idcs = pd.Series(np.linspace(0, 11, 3))
pdcs = idcs * vdcs
- pacs = system.get_ac('sandia', vdcs, pdcs)
- assert_series_equal(pacs, pd.Series([-0.020000, 132.004308, 250.000000]))
+ pacs = system.get_ac('sandia', pdcs, v_dc=vdcs)
inv_fun.assert_called_once()
+ assert_series_equal(pacs, pd.Series([-0.020000, 132.004308, 250.000000]))
-# remove after deprecation period for PVSystem.snlinverter
+@fail_on_pvlib_version('0.10')
def test_PVSystem_snlinverter(cec_inverter_parameters):
system = pvsystem.PVSystem(
inverter=cec_inverter_parameters['Name'],
@@ -1397,8 +1399,8 @@ def test_PVSystem_snlinverter(cec_inverter_parameters):
vdcs = pd.Series(np.linspace(0,50,3))
idcs = pd.Series(np.linspace(0,11,3))
pdcs = idcs * vdcs
-
- pacs = system.snlinverter(vdcs, pdcs)
+ with pytest.warns(pvlibDeprecationWarning):
+ pacs = system.snlinverter(vdcs, pdcs)
assert_series_equal(pacs, pd.Series([-0.020000, 132.004308, 250.000000]))
@@ -1412,9 +1414,9 @@ def test_PVSystem_get_ac_sandia_multi(cec_inverter_parameters, mocker):
vdcs = pd.Series(np.linspace(0, 50, 3))
idcs = pd.Series(np.linspace(0, 11, 3)) / 2
pdcs = idcs * vdcs
- pacs = system.get_ac('sandia', (vdcs, vdcs), (pdcs, pdcs))
- assert_series_equal(pacs, pd.Series([-0.020000, 132.004308, 250.000000]))
+ pacs = system.get_ac('sandia', (pdcs, pdcs), v_dc=(vdcs, vdcs))
inv_fun.assert_called_once()
+ assert_series_equal(pacs, pd.Series([-0.020000, 132.004308, 250.000000]))
with pytest.raises(ValueError,
match="Length mismatch for per-array parameter"):
system.get_ac('sandia', vdcs, (pdcs, pdcs))
@@ -1426,52 +1428,6 @@ def test_PVSystem_get_ac_sandia_multi(cec_inverter_parameters, mocker):
system.get_ac('sandia', (vdcs, vdcs), (pdcs, pdcs, pdcs))
-# remove after deprecation period for PVSystem.sandia_multi
-def test_PVSystem_sandia_multi(cec_inverter_parameters):
- system = pvsystem.PVSystem(
- arrays=[pvsystem.Array(), pvsystem.Array()],
- inverter=cec_inverter_parameters['Name'],
- inverter_parameters=cec_inverter_parameters,
- )
- vdcs = pd.Series(np.linspace(0, 50, 3))
- idcs = pd.Series(np.linspace(0, 11, 3)) / 2
- pdcs = idcs * vdcs
- pacs = system.sandia_multi((vdcs, vdcs), (pdcs, pdcs))
- assert_series_equal(pacs, pd.Series([-0.020000, 132.004308, 250.000000]))
- with pytest.raises(ValueError,
- match="Length mismatch for per-array parameter"):
- system.sandia_multi(vdcs, (pdcs, pdcs))
- with pytest.raises(ValueError,
- match="Length mismatch for per-array parameter"):
- system.sandia_multi(vdcs, (pdcs,))
- with pytest.raises(ValueError,
- match="Length mismatch for per-array parameter"):
- system.sandia_multi((vdcs, vdcs), (pdcs, pdcs, pdcs))
-
-
-# remove after deprecation period for PVSystem.sandia_multi
-def test_PVSystem_sandia_multi_single_array(cec_inverter_parameters):
- system = pvsystem.PVSystem(
- arrays=[pvsystem.Array()],
- inverter=cec_inverter_parameters['Name'],
- inverter_parameters=cec_inverter_parameters,
- )
- vdcs = pd.Series(np.linspace(0, 50, 3))
- idcs = pd.Series(np.linspace(0, 11, 3))
- pdcs = idcs * vdcs
-
- pacs = system.sandia_multi(vdcs, pdcs)
- assert_series_equal(pacs, pd.Series([-0.020000, 132.004308, 250.000000]))
- pacs = system.sandia_multi((vdcs,), (pdcs,))
- assert_series_equal(pacs, pd.Series([-0.020000, 132.004308, 250.000000]))
- with pytest.raises(ValueError,
- match="Length mismatch for per-array parameter"):
- system.sandia_multi((vdcs, vdcs), pdcs)
- with pytest.raises(ValueError,
- match="Length mismatch for per-array parameter"):
- system.sandia_multi((vdcs,), (pdcs, pdcs))
-
-
def test_PVSystem_get_ac_pvwatts(pvwatts_system_defaults, mocker):
mocker.spy(inverter, 'pvwatts')
pdc = 50
@@ -2010,51 +1966,28 @@ def test_PVSystem_pvwatts_losses(pvwatts_system_defaults, mocker):
assert out < expected
-# remove after deprecation period for PVSystem.pvwatts_ac
+@fail_on_pvlib_version('0.10')
def test_PVSystem_pvwatts_ac(pvwatts_system_defaults, mocker):
mocker.spy(inverter, 'pvwatts')
pdc = 50
- out = pvwatts_system_defaults.pvwatts_ac(pdc)
+ with pytest.warns(pvlibDeprecationWarning):
+ out = pvwatts_system_defaults.pvwatts_ac(pdc)
inverter.pvwatts.assert_called_once_with(
pdc, **pvwatts_system_defaults.inverter_parameters)
assert out < pdc
-# remove after deprecation period for PVSystem.pvwatts_ac
+@fail_on_pvlib_version('0.10')
def test_PVSystem_pvwatts_ac_kwargs(pvwatts_system_kwargs, mocker):
mocker.spy(inverter, 'pvwatts')
pdc = 50
- out = pvwatts_system_kwargs.pvwatts_ac(pdc)
+ with pytest.warns(pvlibDeprecationWarning):
+ out = pvwatts_system_kwargs.pvwatts_ac(pdc)
inverter.pvwatts.assert_called_once_with(
pdc, **pvwatts_system_kwargs.inverter_parameters)
assert out < pdc
-# remove after deprecation period for PVSystem.pvwatts_ac
-def test_PVSystem_pvwatts_multi(pvwatts_system_defaults,
- pvwatts_system_kwargs):
- expected = [pd.Series([0.0, 48.123524, 86.400000]),
- pd.Series([0.0, 45.893550, 85.500000])]
- systems = [pvwatts_system_defaults, pvwatts_system_kwargs]
- for base_sys, exp in zip(systems, expected):
- system = pvsystem.PVSystem(
- arrays=[pvsystem.Array(), pvsystem.Array()],
- inverter_parameters=base_sys.inverter_parameters,
- )
- pdcs = pd.Series([0., 25., 50.])
- pacs = system.pvwatts_multi((pdcs, pdcs))
- assert_series_equal(pacs, exp)
- with pytest.raises(ValueError,
- match="Length mismatch for per-array parameter"):
- system.pvwatts_multi((pdcs,))
- with pytest.raises(ValueError,
- match="Length mismatch for per-array parameter"):
- system.pvwatts_multi(pdcs)
- with pytest.raises(ValueError,
- match="Length mismatch for per-array parameter"):
- system.pvwatts_multi((pdcs, pdcs, pdcs))
-
-
def test_PVSystem_num_arrays():
system_one = pvsystem.PVSystem()
system_two = pvsystem.PVSystem(arrays=[pvsystem.Array(), pvsystem.Array()])
| diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 5586c4e44e..7372dad3df 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -630,8 +630,8 @@ ModelChain model definitions.
modelchain.ModelChain.desoto
modelchain.ModelChain.pvsyst
modelchain.ModelChain.pvwatts_dc
- modelchain.ModelChain.snlinverter
- modelchain.ModelChain.adrinverter
+ modelchain.ModelChain.sandia_inverter
+ modelchain.ModelChain.adr_inverter
modelchain.ModelChain.pvwatts_inverter
modelchain.ModelChain.ashrae_aoi_loss
modelchain.ModelChain.physical_aoi_loss
diff --git a/docs/sphinx/source/whatsnew/v0.9.0.rst b/docs/sphinx/source/whatsnew/v0.9.0.rst
index 3aaff14bdb..605898cf61 100644
--- a/docs/sphinx/source/whatsnew/v0.9.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.9.0.rst
@@ -36,6 +36,10 @@ Breaking changes
* ``irradiance.liujordan`` and ``ForecastModel.cloud_cover_to_irradiance_liujordan``
have been removed. (:pull:`1136`)
+* ``ModelChain.snlinverter`` changed to ``ModelChain.sandia_inverter``.
+ ``ModelChain.adrinverter`` changed to ``ModelChain.adr_inverter``.
+ (:pull:`1150`)
+
Deprecations
~~~~~~~~~~~~
@@ -76,9 +80,9 @@ Enhancements
* Support for :py:func:`~pvlib.inverter.sandia_multi` and
:py:func:`~pvlib.inverter.pvwatts_multi` added to
:py:class:`~pvlib.pvsystem.PVSystem` and
- :py:class:`~pvlib.modelchain.ModelChain` (as ``ac_model='sandia_multi'``
- and ``ac_model='pvwatts_multi'``).
- (:pull:`1076`, :issue:`1067`, :pull:`1132`, :issue:`1117`)
+ :py:class:`~pvlib.modelchain.ModelChain` (as ``ac_model='sandia'``
+ and ``ac_model='pvwatts'``).
+ (:pull:`1076`, :issue:`1067`, :pull:`1132`, :issue:`1117`, :pull:`1150`)
* :py:class:`~pvlib.modelchain.ModelChain` 'run_model' methods now
automatically switch to using ``'effective_irradiance'`` (if available) for
cell temperature models, when ``'poa_global'`` is not provided in input
@@ -88,8 +92,8 @@ Enhancements
``pvsystem.PVSystem.strings_per_inverter``. Note that both attributes still
default to 1. (:pull:`1138`)
* :py:meth:`~pvlib.pvsystem.PVSystem.get_ac` is added to calculate AC power
- from DC power. Use parameter 'model' to specify which inverter model to use.
- (:pull:`1147`, :issue:`998`)
+ from DC power. Use parameter ``model`` to specify which inverter model to use.
+ (:pull:`1147`, :issue:`998`, :pull:`1150`)
Bug fixes
~~~~~~~~~
| [
{
"components": [
{
"doc": "",
"lines": [
802,
808
],
"name": "ModelChain.sandia_inverter",
"signature": "def sandia_inverter(self):",
"type": "function"
},
{
"doc": "",
"lines": [
810,
816
... | [
"pvlib/tests/test_modelchain.py::test_ModelChain_invalid_inverter_params_arrays[adr]",
"pvlib/tests/test_modelchain.py::test_ac_models[sandia_multi]",
"pvlib/tests/test_modelchain.py::test_ac_models[pvwatts_multi]",
"pvlib/tests/test_modelchain.py::test_with_sapm_pvsystem_arrays",
"pvlib/tests/test_modelcha... | [
"pvlib/tests/test_modelchain.py::test_ModelChain_creation",
"pvlib/tests/test_modelchain.py::test_with_sapm",
"pvlib/tests/test_modelchain.py::test_with_pvwatts",
"pvlib/tests/test_modelchain.py::test_orientation_strategy[None-expected0]",
"pvlib/tests/test_modelchain.py::test_orientation_strategy[None-expe... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
refactor ModelChain inverter methods to use PVSystem.get_ac
- follow up to #1147
- [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)
- [x] Tests added
- [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.
- [x] Adds description and name entries in the appropriate "what's new" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).
- [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.
- [x] Pull request is nearly complete and ready for detailed review.
- [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.
One could argue there should be a deprecation period for the `ModelChain.snlinverter` and `ModelChain.adrinverter` methods, but I'd be shocked if anyone is directly calling them.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in pvlib/modelchain.py]
(definition of ModelChain.sandia_inverter:)
def sandia_inverter(self):
(definition of ModelChain.adr_inverter:)
def adr_inverter(self):
[end of new definitions in pvlib/modelchain.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 311781d2380997044da0e484dc90aa146a74ca44 | |
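The pvlib refactor above funnels every `ModelChain` inverter method through a single `PVSystem.get_ac(model, p_dc, v_dc=None)` dispatcher. The sketch below illustrates only that dispatch pattern; `MiniPVSystem` and the toy `_sandia`/`_pvwatts` functions are hypothetical placeholders, not pvlib's real inverter models:

```python
# Dispatch pattern from the refactor above; _sandia/_pvwatts are toy
# placeholders, NOT pvlib.inverter.sandia / pvlib.inverter.pvwatts.

def _sandia(v_dc, p_dc, params):
    # the real model uses v_dc; this toy one just clips scaled power
    return min(p_dc * params["eta"], params["pac0"])

def _pvwatts(p_dc, params):
    return min(p_dc * params["eta"], params["pdc0"])

class MiniPVSystem:
    def __init__(self, inverter_parameters):
        self.inverter_parameters = inverter_parameters

    def get_ac(self, model, p_dc, v_dc=None):
        """Single entry point: `model` selects the inverter function."""
        model = model.lower()
        if model == "sandia":
            return _sandia(v_dc, p_dc, self.inverter_parameters)
        elif model == "pvwatts":
            return _pvwatts(p_dc, self.inverter_parameters)
        raise ValueError(model + " is not a valid AC power model.")

system = MiniPVSystem({"eta": 0.5, "pac0": 250.0, "pdc0": 250.0})
print(system.get_ac("sandia", p_dc=200.0, v_dc=48.0))  # 100.0
print(system.get_ac("pvwatts", p_dc=600.0))            # 250.0
```

With this shape, the thin `ModelChain.sandia_inverter`/`adr_inverter`/`pvwatts_inverter` wrappers in the patch only pick fields out of `results.dc` and delegate to `get_ac`, so model selection, validation, and deprecation live in one place.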
sphinx-doc__sphinx-8771 | 8,771 | sphinx-doc/sphinx | 4.0 | 647510e184f3bb8c14d82911b6a0e1d9f8a2b15b | 2021-01-27T17:09:38Z | diff --git a/CHANGES b/CHANGES
index a2ef4f73ee8..4198e57bb25 100644
--- a/CHANGES
+++ b/CHANGES
@@ -192,6 +192,9 @@ Features added
* #8775: autodoc: Support type union operator (PEP-604) in Python 3.10 or above
* #8297: autodoc: Allow to extend :confval:`autodoc_default_options` via
directive options
+* #759: autodoc: Add a new configuration :confval:`autodoc_preserve_defaults` as
+ an experimental feature. It preserves the default argument values of
+ functions in source code and keep them not evaluated for readability.
* #8619: html: kbd role generates customizable HTML tags for compound keys
* #8634: html: Allow to change the order of JS/CSS via ``priority`` parameter
for :meth:`Sphinx.add_js_file()` and :meth:`Sphinx.add_css_file()`
diff --git a/doc/usage/extensions/autodoc.rst b/doc/usage/extensions/autodoc.rst
index f69ac8c5c05..ad58a1eb2b7 100644
--- a/doc/usage/extensions/autodoc.rst
+++ b/doc/usage/extensions/autodoc.rst
@@ -586,6 +586,16 @@ There are also config values that you can set:
.. __: https://mypy.readthedocs.io/en/latest/kinds_of_types.html#type-aliases
.. versionadded:: 3.3
+.. confval:: autodoc_preserve_defaults
+
+ If True, the default argument values of functions will be not evaluated on
+ generating document. It preserves them as is in the source code.
+
+ .. versionadded:: 4.0
+
+ Added as an experimental feature. This will be integrated into autodoc core
+ in the future.
+
.. confval:: autodoc_warningiserror
This value controls the behavior of :option:`sphinx-build -W` during
diff --git a/sphinx/ext/autodoc/__init__.py b/sphinx/ext/autodoc/__init__.py
index 8d6781a1a5c..0b570930149 100644
--- a/sphinx/ext/autodoc/__init__.py
+++ b/sphinx/ext/autodoc/__init__.py
@@ -2634,6 +2634,7 @@ def setup(app: Sphinx) -> Dict[str, Any]:
app.connect('config-inited', migrate_autodoc_member_order, priority=800)
+ app.setup_extension('sphinx.ext.autodoc.preserve_defaults')
app.setup_extension('sphinx.ext.autodoc.type_comment')
app.setup_extension('sphinx.ext.autodoc.typehints')
diff --git a/sphinx/ext/autodoc/preserve_defaults.py b/sphinx/ext/autodoc/preserve_defaults.py
new file mode 100644
index 00000000000..3d859fe8eee
--- /dev/null
+++ b/sphinx/ext/autodoc/preserve_defaults.py
@@ -0,0 +1,88 @@
+"""
+ sphinx.ext.autodoc.preserve_defaults
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ Preserve the default argument values of function signatures in source code
+ and keep them not evaluated for readability.
+
+ :copyright: Copyright 2007-2021 by the Sphinx team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import ast
+import inspect
+from typing import Any, Dict
+
+from sphinx.application import Sphinx
+from sphinx.locale import __
+from sphinx.pycode.ast import parse as ast_parse
+from sphinx.pycode.ast import unparse as ast_unparse
+from sphinx.util import logging
+
+logger = logging.getLogger(__name__)
+
+
+class DefaultValue:
+ def __init__(self, name: str) -> None:
+ self.name = name
+
+ def __repr__(self) -> str:
+ return self.name
+
+
+def get_function_def(obj: Any) -> ast.FunctionDef:
+ """Get FunctionDef object from living object.
+ This tries to parse original code for living object and returns
+ AST node for given *obj*.
+ """
+ try:
+ source = inspect.getsource(obj)
+ if source.startswith((' ', r'\t')):
+ # subject is placed inside class or block. To read its docstring,
+ # this adds if-block before the declaration.
+ module = ast_parse('if True:\n' + source)
+ return module.body[0].body[0] # type: ignore
+ else:
+ module = ast_parse(source)
+ return module.body[0] # type: ignore
+ except (OSError, TypeError): # failed to load source code
+ return None
+
+
+def update_defvalue(app: Sphinx, obj: Any, bound_method: bool) -> None:
+ """Update defvalue info of *obj* using type_comments."""
+ if not app.config.autodoc_preserve_defaults:
+ return
+
+ try:
+ function = get_function_def(obj)
+ if function.args.defaults or function.args.kw_defaults:
+ sig = inspect.signature(obj)
+ defaults = list(function.args.defaults)
+ kw_defaults = list(function.args.kw_defaults)
+ parameters = list(sig.parameters.values())
+ for i, param in enumerate(parameters):
+ if param.default is not param.empty:
+ if param.kind in (param.POSITIONAL_ONLY, param.POSITIONAL_OR_KEYWORD):
+ value = DefaultValue(ast_unparse(defaults.pop(0))) # type: ignore
+ parameters[i] = param.replace(default=value)
+ else:
+ value = DefaultValue(ast_unparse(kw_defaults.pop(0))) # type: ignore
+ parameters[i] = param.replace(default=value)
+ sig = sig.replace(parameters=parameters)
+ obj.__signature__ = sig
+ except (AttributeError, TypeError):
+ # failed to update signature (ex. built-in or extension types)
+ pass
+ except NotImplementedError as exc: # failed to ast.unparse()
+ logger.warning(__("Failed to parse a default argument value for %r: %s"), obj, exc)
+
+
+def setup(app: Sphinx) -> Dict[str, Any]:
+ app.add_config_value('autodoc_preserve_defaults', False, True)
+ app.connect('autodoc-before-process-signature', update_defvalue)
+
+ return {
+ 'version': '1.0',
+ 'parallel_read_safe': True
+ }
| diff --git a/tests/roots/test-ext-autodoc/target/preserve_defaults.py b/tests/roots/test-ext-autodoc/target/preserve_defaults.py
new file mode 100644
index 00000000000..c927fa03521
--- /dev/null
+++ b/tests/roots/test-ext-autodoc/target/preserve_defaults.py
@@ -0,0 +1,19 @@
+from datetime import datetime
+from typing import Any
+
+CONSTANT = 'foo'
+SENTINEL = object()
+
+
+def foo(name: str = CONSTANT,
+ sentinal: Any = SENTINEL,
+ now: datetime = datetime.now()) -> None:
+ """docstring"""
+
+
+class Class:
+ """docstring"""
+
+ def meth(self, name: str = CONSTANT, sentinal: Any = SENTINEL,
+ now: datetime = datetime.now()) -> None:
+ """docstring"""
diff --git a/tests/test_ext_autodoc_preserve_defaults.py b/tests/test_ext_autodoc_preserve_defaults.py
new file mode 100644
index 00000000000..f9833c29171
--- /dev/null
+++ b/tests/test_ext_autodoc_preserve_defaults.py
@@ -0,0 +1,45 @@
+"""
+ test_ext_autodoc_preserve_defaults
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ Test the autodoc extension.
+
+ :copyright: Copyright 2007-2021 by the Sphinx team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import pytest
+
+from .test_ext_autodoc import do_autodoc
+
+
+@pytest.mark.sphinx('html', testroot='ext-autodoc',
+ confoverrides={'autodoc_preserve_defaults': True})
+def test_preserve_defaults(app):
+ options = {"members": None}
+ actual = do_autodoc(app, 'module', 'target.preserve_defaults', options)
+ assert list(actual) == [
+ '',
+ '.. py:module:: target.preserve_defaults',
+ '',
+ '',
+ '.. py:class:: Class()',
+ ' :module: target.preserve_defaults',
+ '',
+ ' docstring',
+ '',
+ '',
+ ' .. py:method:: Class.meth(name: str = CONSTANT, sentinal: Any = SENTINEL, '
+ 'now: datetime.datetime = datetime.now()) -> None',
+ ' :module: target.preserve_defaults',
+ '',
+ ' docstring',
+ '',
+ '',
+ '.. py:function:: foo(name: str = CONSTANT, sentinal: Any = SENTINEL, now: '
+ 'datetime.datetime = datetime.now()) -> None',
+ ' :module: target.preserve_defaults',
+ '',
+ ' docstring',
+ '',
+ ]
| diff --git a/CHANGES b/CHANGES
index a2ef4f73ee8..4198e57bb25 100644
--- a/CHANGES
+++ b/CHANGES
@@ -192,6 +192,9 @@ Features added
* #8775: autodoc: Support type union operator (PEP-604) in Python 3.10 or above
* #8297: autodoc: Allow to extend :confval:`autodoc_default_options` via
directive options
+* #759: autodoc: Add a new configuration :confval:`autodoc_preserve_defaults` as
+ an experimental feature. It preserves the default argument values of
+ functions in source code and keep them not evaluated for readability.
* #8619: html: kbd role generates customizable HTML tags for compound keys
* #8634: html: Allow to change the order of JS/CSS via ``priority`` parameter
for :meth:`Sphinx.add_js_file()` and :meth:`Sphinx.add_css_file()`
diff --git a/doc/usage/extensions/autodoc.rst b/doc/usage/extensions/autodoc.rst
index f69ac8c5c05..ad58a1eb2b7 100644
--- a/doc/usage/extensions/autodoc.rst
+++ b/doc/usage/extensions/autodoc.rst
@@ -586,6 +586,16 @@ There are also config values that you can set:
.. __: https://mypy.readthedocs.io/en/latest/kinds_of_types.html#type-aliases
.. versionadded:: 3.3
+.. confval:: autodoc_preserve_defaults
+
+ If True, the default argument values of functions will be not evaluated on
+ generating document. It preserves them as is in the source code.
+
+ .. versionadded:: 4.0
+
+ Added as an experimental feature. This will be integrated into autodoc core
+ in the future.
+
.. confval:: autodoc_warningiserror
This value controls the behavior of :option:`sphinx-build -W` during
| [
{
"components": [
{
"doc": "",
"lines": [
25,
30
],
"name": "DefaultValue",
"signature": "class DefaultValue:",
"type": "class"
},
{
"doc": "",
"lines": [
26,
27
],
"name": "... | [
"tests/test_ext_autodoc_preserve_defaults.py::test_preserve_defaults"
] | [] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Fix #759: autodoc: Add sphinx.ext.autodoc.preserve_defaults extension
### Feature or Bugfix
- Feature
### Purpose
- Add a new extension `sphinx.ext.autodoc.preserve_defaults`.
- It preserves the default argument values of function signatures in the source code
and keeps them unevaluated for readability. This is an experimental
extension and will be integrated into autodoc core in Sphinx 4.0.
- refs: #759 and #2844
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sphinx/ext/autodoc/preserve_defaults.py]
(definition of DefaultValue:)
class DefaultValue:
(definition of DefaultValue.__init__:)
def __init__(self, name: str) -> None:
(definition of DefaultValue.__repr__:)
def __repr__(self) -> str:
(definition of get_function_def:)
def get_function_def(obj: Any) -> ast.FunctionDef:
"""Get FunctionDef object from living object.
This tries to parse original code for living object and returns
AST node for given *obj*."""
(definition of update_defvalue:)
def update_defvalue(app: Sphinx, obj: Any, bound_method: bool) -> None:
"""Update defvalue info of *obj* using type_comments."""
(definition of setup:)
def setup(app: Sphinx) -> Dict[str, Any]:
[end of new definitions in sphinx/ext/autodoc/preserve_defaults.py]
</definitions>
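The core trick the definitions above rely on can be sketched in a few lines (this is a simplification assuming only positional-or-keyword parameters; the real extension also parses the function's source with `ast` to recover the default expressions): wrap each default in an object whose `repr` is the original source expression, then attach the rewritten signature via `__signature__`, which `inspect` (and therefore autodoc) honors.

```python
import inspect

CONSTANT = "foo"

class DefaultValue:
    """Placeholder whose repr is the original source expression."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

def greet(name=CONSTANT):
    """docstring"""

# Without intervention, the evaluated value leaks into the signature.
assert str(inspect.signature(greet)) == "(name='foo')"

# Swap each default for a repr-preserving placeholder and attach
# the rewritten signature; inspect.signature() will pick it up.
sig = inspect.signature(greet)
params = [p.replace(default=DefaultValue("CONSTANT"))
          for p in sig.parameters.values()]
greet.__signature__ = sig.replace(parameters=params)

print(inspect.signature(greet))  # (name=CONSTANT)
```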
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Keeping original signatures for functions/methods
I'm using Sphinx's autodoc feature to document my API.
Example:
```python
DEFAULT_OPTION = 'default'
def do_something(msg, option=DEFAULT_OPTION):
print msg
```
The generated documentation now shows the following signature:
```
do_something(msg, option='default')
```
It would be nice if there was a way to tell Sphinx to keep the name of the constant value, i.e.
```
do_something(msg, option=DEFAULT_OPTION)
```
At the moment the only alternative is to write all signature by hand again.
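The hand-written override mentioned here uses autodoc's existing ability to accept an explicit signature inside the directive itself, which replaces the introspected one, e.g.:

```rst
.. autofunction:: do_something(msg, option=DEFAULT_OPTION)
```

This works per function but must be kept in sync with the code by hand, which is the burden the feature request aims to remove.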
---
- Bitbucket: https://bitbucket.org/birkenfeld/sphinx/issue/759
- Originally reported by: Sebastian Rahlf
- Originally created at: 2011-08-31T11:29:50.797
----------
+1
@shimizukawa, @martinpengellyphillips you may try my fork with this feature.
Install [sphinx](https://github.com/hypnocat/sphinx), and add to conf.py
autodoc_dumb_docstring = True
then rebuild docs. Feel free to msg me.
Another alternative that would be nice and might work easily is to allow the user to specify the value of an argument instead of the entire signature.
- Unspecified arguments would be parsed by sphinx as usually.
- Only the specified arguments would be annotated differently.
- Omitting `:annotation:` would remove `myarg` from the signature so that it didn't show up at all, which is useful for secret optional args.
- Specifying blank `:annotation:` would omit the default altogether but the keyword argument would still show up, just with no default value.
- If `myarg` is not in the list of arguments parsed from the signature, then a warning is raised and the directive does nothing.
- Markup roles like `:class:`, `:data:`, etc. can be used to link to the specified default.
- If `myarg` is not a keyword argument, then should it warn or should it specify the default in defiance of all logic? probably warn.
For example, given the following `mymodule.py` file:
```
"""a module with special treatment of arguments"""
DEFAULT = "a very long value that I don't want to display in my function, class and method signatures"
"""a description of the default"""
def myfunc(args, myarg=DEFAULT, kwargs=None):
"""
a function with an ugly default arg value
:param args: some arguments
:param myarg: a keyword arg with a default specified by a module constant
:param kwargs: some keyword arguments
"""
pass
class MyClass():
"""a class with an ugly default arg value in its constructor"""
def __init__(self, args, myarg=DEFAULT, kwargs=None):
pass
def mymeth(self, args, myarg=DEFAULT, kwargs=None):
"""a method with an ugly default arg value"""
pass
```
use `mymodule.rst` file with the following:
```
.. automodule:: mymodule
.. autodata:: DEFAULT
:annotation: a default value
The value of ``myarg`` in this function is replaced by :data:`~mymodule.DEFAULT`
.. autofunction:: myfunc
.. argument:: myarg
:annotation: :data:`~mymodule.DEFAULT`
The value of ``myarg`` in this class constructor is not shown
.. autoclass:: MyClass
:members:
.. argument:: myarg
:annotation:
The value of ``myarg`` in this class method is hidden
.. automethod:: MyClass.mymeth
.. argument:: myarg
```
would output the following:
> mymodule
> a module with special treatment of arguments
>
> mymodule.DEFAULT = 'a default value'
> a description of the default
>
> The value of `myarg` in this function is replaced by :data:`~mymodule.DEFAULT`
>
> mymodule.myfunc(args, myarg=**DEFAULT**, kwargs=None)
> a function with an ugly default arg value
>
> **Parameters**
> - args - some arguments
> - myarg - a keyword arg with a default specified by a module constant
> - kwargs - some keyword arguments
>
> The value of `myarg` in this class constructor is not shown
>
> mymodule.MyClass(args, myarg=, kwargs=None)
> a class with an ugly default arg value in its constructor
>
> The value of `myarg` in this class method is hidden
>
> mymodule.MyClass.mymeth(args, kwargs=None)
> a method with an ugly default arg value
--------------------
</issues> | cd75f8fea1cdd509615b58ba2b32756795fbaa03 |
scikit-learn__scikit-learn-19278 | 19,278 | scikit-learn/scikit-learn | 1.0 | f081f5829182529eb1ab666f79c9b917b0b07f09 | 2021-01-26T10:48:43Z | diff --git a/doc/whats_new/v1.0.rst b/doc/whats_new/v1.0.rst
index 230802e10ef2a..bdea970117f96 100644
--- a/doc/whats_new/v1.0.rst
+++ b/doc/whats_new/v1.0.rst
@@ -452,6 +452,14 @@ Changelog
class methods and will be removed in 1.2.
:pr:`18543` by `Guillaume Lemaitre`_.
+- |API| :class:`metrics.DetCurveDisplay` exposes two class methods
+ :func:`~metrics.DetCurveDisplay.from_estimator` and
+ :func:`~metrics.DetCurveDisplay.from_predictions` allowing to create
+ a confusion matrix plot using an estimator or the predictions.
+ :func:`metrics.plot_det_curve` is deprecated in favor of these two
+ class methods and will be removed in 1.2.
+ :pr:`19278` by `Guillaume Lemaitre`_.
+
- |Enhancement| A fix to raise an error in :func:`metrics.hinge_loss` when
``pred_decision`` is 1d whereas it is a multiclass classification or when
``pred_decision`` parameter is not consistent with the ``labels`` parameter.
diff --git a/examples/model_selection/plot_det.py b/examples/model_selection/plot_det.py
index 2e27dd07ee684..5c97dc0b02ebd 100644
--- a/examples/model_selection/plot_det.py
+++ b/examples/model_selection/plot_det.py
@@ -8,7 +8,7 @@
for the same classification task.
DET curves are commonly plotted in normal deviate scale.
-To achieve this `plot_det_curve` transforms the error rates as returned by the
+To achieve this the DET display transforms the error rates as returned by the
:func:`~sklearn.metrics.det_curve` and the axis scale using
:func:`scipy.stats.norm`.
@@ -51,7 +51,7 @@
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
-from sklearn.metrics import plot_det_curve
+from sklearn.metrics import DetCurveDisplay
from sklearn.metrics import plot_roc_curve
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
@@ -81,7 +81,7 @@
clf.fit(X_train, y_train)
plot_roc_curve(clf, X_test, y_test, ax=ax_roc, name=name)
- plot_det_curve(clf, X_test, y_test, ax=ax_det, name=name)
+ DetCurveDisplay.from_estimator(clf, X_test, y_test, ax=ax_det, name=name)
ax_roc.set_title('Receiver Operating Characteristic (ROC) curves')
ax_det.set_title('Detection Error Tradeoff (DET) curves')
diff --git a/sklearn/metrics/_base.py b/sklearn/metrics/_base.py
index 514026238aaee..5640848b1a9d4 100644
--- a/sklearn/metrics/_base.py
+++ b/sklearn/metrics/_base.py
@@ -246,6 +246,6 @@ def _check_pos_label_consistency(pos_label, y_true):
"{-1, 1} or pass pos_label explicitly."
)
elif pos_label is None:
- pos_label = 1.0
+ pos_label = 1
return pos_label
diff --git a/sklearn/metrics/_plot/det_curve.py b/sklearn/metrics/_plot/det_curve.py
index 18914681cb51c..210fbdd54943b 100644
--- a/sklearn/metrics/_plot/det_curve.py
+++ b/sklearn/metrics/_plot/det_curve.py
@@ -3,14 +3,17 @@
from .base import _get_response
from .. import det_curve
+from .._base import _check_pos_label_consistency
from ...utils import check_matplotlib_support
+from ...utils import deprecated
class DetCurveDisplay:
"""DET curve visualization.
- It is recommend to use :func:`~sklearn.metrics.plot_det_curve` to create a
+ It is recommend to use :func:`~sklearn.metrics.DetCurveDisplay.from_estimator`
+ or :func:`~sklearn.metrics.DetCurveDisplay.from_predictions` to create a
visualizer. All parameters are stored as attributes.
Read more in the :ref:`User Guide <visualizations>`.
@@ -45,7 +48,10 @@ class DetCurveDisplay:
See Also
--------
det_curve : Compute error rates for different probability thresholds.
- plot_det_curve : Plot detection error tradeoff (DET) curve.
+ DetCurveDisplay.from_estimator : Plot DET curve given an estimator and
+ some data.
+ DetCurveDisplay.from_predictions : Plot DET curve given the true and
+ predicted labels.
Examples
--------
@@ -69,6 +75,210 @@ def __init__(self, *, fpr, fnr, estimator_name=None, pos_label=None):
self.estimator_name = estimator_name
self.pos_label = pos_label
+ @classmethod
+ def from_estimator(
+ cls,
+ estimator,
+ X,
+ y,
+ *,
+ sample_weight=None,
+ response_method="auto",
+ pos_label=None,
+ name=None,
+ ax=None,
+ **kwargs,
+ ):
+ """Plot DET curve given an estimator and data.
+
+ Read more in the :ref:`User Guide <visualizations>`.
+
+ .. versionadded:: 1.0
+
+ Parameters
+ ----------
+ estimator : estimator instance
+ Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
+ in which the last estimator is a classifier.
+
+ X : {array-like, sparse matrix} of shape (n_samples, n_features)
+ Input values.
+
+ y : array-like of shape (n_samples,)
+ Target values.
+
+ sample_weight : array-like of shape (n_samples,), default=None
+ Sample weights.
+
+ response_method : {'predict_proba', 'decision_function', 'auto'} \
+ default='auto'
+ Specifies whether to use :term:`predict_proba` or
+ :term:`decision_function` as the predicted target response. If set
+ to 'auto', :term:`predict_proba` is tried first and if it does not
+ exist :term:`decision_function` is tried next.
+
+ pos_label : str or int, default=None
+ The label of the positive class. When `pos_label=None`, if `y_true`
+ is in {-1, 1} or {0, 1}, `pos_label` is set to 1, otherwise an
+ error will be raised.
+
+ name : str, default=None
+ Name of DET curve for labeling. If `None`, use the name of the
+ estimator.
+
+ ax : matplotlib axes, default=None
+ Axes object to plot on. If `None`, a new figure and axes is
+ created.
+
+ **kwargs : dict
+ Additional keywords arguments passed to matplotlib `plot` function.
+
+ Returns
+ -------
+ display : :class:`~sklearn.metrics.DetCurveDisplay`
+ Object that stores computed values.
+
+ See Also
+ --------
+ det_curve : Compute error rates for different probability thresholds.
+ DetCurveDisplay.from_predictions : Plot DET curve given the true and
+ predicted labels.
+ plot_roc_curve : Plot Receiver operating characteristic (ROC) curve.
+
+ Examples
+ --------
+ >>> import matplotlib.pyplot as plt
+ >>> from sklearn.datasets import make_classification
+ >>> from sklearn.metrics import DetCurveDisplay
+ >>> from sklearn.model_selection import train_test_split
+ >>> from sklearn.svm import SVC
+ >>> X, y = make_classification(random_state=0)
+ >>> X_train, X_test, y_train, y_test = train_test_split(
+ ... X, y, random_state=0)
+ >>> clf = SVC(random_state=0).fit(X_train, y_train)
+ >>> DetCurveDisplay.from_estimator(
+ ... clf, X_test, y_test)
+ <...>
+ >>> plt.show()
+ """
+ check_matplotlib_support(f"{cls.__name__}.from_estimator")
+
+ name = estimator.__class__.__name__ if name is None else name
+
+ y_pred, pos_label = _get_response(
+ X,
+ estimator,
+ response_method,
+ pos_label=pos_label,
+ )
+
+ return cls.from_predictions(
+ y_true=y,
+ y_pred=y_pred,
+ sample_weight=sample_weight,
+ name=name,
+ ax=ax,
+ pos_label=pos_label,
+ **kwargs,
+ )
+
+ @classmethod
+ def from_predictions(
+ cls,
+ y_true,
+ y_pred,
+ *,
+ sample_weight=None,
+ pos_label=None,
+ name=None,
+ ax=None,
+ **kwargs,
+ ):
+ """Plot DET curve given the true and
+ predicted labels.
+
+ Read more in the :ref:`User Guide <visualizations>`.
+
+ .. versionadded:: 1.0
+
+ Parameters
+ ----------
+ y_true : array-like of shape (n_samples,)
+ True labels.
+
+ y_pred : array-like of shape (n_samples,)
+ Target scores, can either be probability estimates of the positive
+ class, confidence values, or non-thresholded measure of decisions
+ (as returned by `decision_function` on some classifiers).
+
+ sample_weight : array-like of shape (n_samples,), default=None
+ Sample weights.
+
+ pos_label : str or int, default=None
+ The label of the positive class. When `pos_label=None`, if `y_true`
+ is in {-1, 1} or {0, 1}, `pos_label` is set to 1, otherwise an
+ error will be raised.
+
+ name : str, default=None
+ Name of DET curve for labeling. If `None`, name will be set to
+ `"Classifier"`.
+
+ ax : matplotlib axes, default=None
+ Axes object to plot on. If `None`, a new figure and axes is
+ created.
+
+ **kwargs : dict
+ Additional keywords arguments passed to matplotlib `plot` function.
+
+ Returns
+ -------
+ display : :class:`~sklearn.metrics.DetCurveDisplay`
+ Object that stores computed values.
+
+ See Also
+ --------
+ det_curve : Compute error rates for different probability thresholds.
+ DetCurveDisplay.from_estimator : Plot DET curve given an estimator and
+ some data.
+ plot_roc_curve : Plot Receiver operating characteristic (ROC) curve.
+
+ Examples
+ --------
+ >>> import matplotlib.pyplot as plt
+ >>> from sklearn.datasets import make_classification
+ >>> from sklearn.metrics import DetCurveDisplay
+ >>> from sklearn.model_selection import train_test_split
+ >>> from sklearn.svm import SVC
+ >>> X, y = make_classification(random_state=0)
+ >>> X_train, X_test, y_train, y_test = train_test_split(
+ ... X, y, random_state=0)
+ >>> clf = SVC(random_state=0).fit(X_train, y_train)
+ >>> y_pred = clf.decision_function(X_test)
+ >>> DetCurveDisplay.from_predictions(
+ ... y_test, y_pred)
+ <...>
+ >>> plt.show()
+ """
+ check_matplotlib_support(f"{cls.__name__}.from_predictions")
+ fpr, fnr, _ = det_curve(
+ y_true,
+ y_pred,
+ pos_label=pos_label,
+ sample_weight=sample_weight,
+ )
+
+ pos_label = _check_pos_label_consistency(pos_label, y_true)
+ name = "Classifier" if name is None else name
+
+ viz = DetCurveDisplay(
+ fpr=fpr,
+ fnr=fnr,
+ estimator_name=name,
+ pos_label=pos_label,
+ )
+
+ return viz.plot(ax=ax, name=name, **kwargs)
+
def plot(self, ax=None, *, name=None, **kwargs):
"""Plot visualization.
@@ -82,6 +292,9 @@ def plot(self, ax=None, *, name=None, **kwargs):
Name of DET curve for labeling. If `None`, use the name of the
estimator.
+ **kwargs : dict
+ Additional keywords arguments passed to matplotlib `plot` function.
+
Returns
-------
display : :class:`~sklearn.metrics.plot.DetCurveDisplay`
@@ -132,6 +345,12 @@ def plot(self, ax=None, *, name=None, **kwargs):
return self
+@deprecated(
+ "Function plot_det_curve is deprecated in 1.0 and will be "
+ "removed in 1.2. Use one of the class methods: "
+ "DetCurveDisplay.from_predictions or "
+ "DetCurveDisplay.from_estimator."
+)
def plot_det_curve(
estimator,
X,
@@ -152,6 +371,12 @@ def plot_det_curve(
.. versionadded:: 0.24
+ .. deprecated:: 1.0
+ `plot_det_curve` is deprecated in 1.0 and will be removed in
+ 1.2. Use one of the following class methods:
+ :func:`~sklearn.metrics.DetCurveDisplay.from_predictions` or
+ :func:`~sklearn.metrics.DetCurveDisplay.from_estimator`.
+
Parameters
----------
estimator : estimator instance
@@ -186,6 +411,9 @@ def plot_det_curve(
When `pos_label=None`, if `y_true` is in {-1, 1} or {0, 1},
`pos_label` is set to 1, otherwise an error will be raised.
+ **kwargs : dict
+ Additional keywords arguments passed to matplotlib `plot` function.
+
Returns
-------
display : :class:`~sklearn.metrics.DetCurveDisplay`
@@ -194,7 +422,10 @@ def plot_det_curve(
See Also
--------
det_curve : Compute error rates for different probability thresholds.
- DetCurveDisplay : DET curve visualization.
+ DetCurveDisplay.from_estimator : Plot DET curve given an estimator and
+ some data.
+ DetCurveDisplay.from_predictions : Plot DET curve given the true and
+ predicted labels.
plot_roc_curve : Plot Receiver operating characteristic (ROC) curve.
Examples
@@ -207,7 +438,7 @@ def plot_det_curve(
>>> clf = svm.SVC(random_state=0)
>>> clf.fit(X_train, y_train)
SVC(random_state=0)
- >>> metrics.plot_det_curve(clf, X_test, y_test)
+ >>> metrics.plot_det_curve(clf, X_test, y_test) # doctest: +SKIP
<...>
>>> plt.show()
"""
diff --git a/sklearn/metrics/_ranking.py b/sklearn/metrics/_ranking.py
index 603d7c4d5be56..dab10d086bfe9 100644
--- a/sklearn/metrics/_ranking.py
+++ b/sklearn/metrics/_ranking.py
@@ -280,7 +280,10 @@ def det_curve(y_true, y_score, pos_label=None, sample_weight=None):
See Also
--------
- plot_det_curve : Plot detection error tradeoff (DET) curve.
+ DetCurveDisplay.from_estimator : Plot DET curve given an estimator and
+ some data.
+ DetCurveDisplay.from_predictions : Plot DET curve given the true and
+ predicted labels.
DetCurveDisplay : DET curve visualization.
roc_curve : Compute Receiver operating characteristic (ROC) curve.
precision_recall_curve : Compute precision-recall curve.
| diff --git a/sklearn/metrics/_plot/tests/test_common_curve_display.py b/sklearn/metrics/_plot/tests/test_common_curve_display.py
new file mode 100644
index 0000000000000..8e676018f222e
--- /dev/null
+++ b/sklearn/metrics/_plot/tests/test_common_curve_display.py
@@ -0,0 +1,138 @@
+import pytest
+
+from sklearn.base import ClassifierMixin, clone
+from sklearn.compose import make_column_transformer
+from sklearn.datasets import load_iris
+from sklearn.exceptions import NotFittedError
+from sklearn.linear_model import LogisticRegression
+from sklearn.pipeline import make_pipeline
+from sklearn.preprocessing import StandardScaler
+from sklearn.tree import DecisionTreeClassifier
+
+from sklearn.metrics import DetCurveDisplay
+
+
+@pytest.fixture(scope="module")
+def data():
+ return load_iris(return_X_y=True)
+
+
+@pytest.fixture(scope="module")
+def data_binary(data):
+ X, y = data
+ return X[y < 2], y[y < 2]
+
+
+@pytest.mark.parametrize("Display", [DetCurveDisplay])
+def test_display_curve_error_non_binary(pyplot, data, Display):
+ """Check that a proper error is raised when only binary classification is
+ supported."""
+ X, y = data
+ clf = DecisionTreeClassifier().fit(X, y)
+
+ msg = "DecisionTreeClassifier should be a binary classifier"
+ with pytest.raises(ValueError, match=msg):
+ Display.from_estimator(clf, X, y)
+
+
+@pytest.mark.parametrize(
+ "response_method, msg",
+ [
+ (
+ "predict_proba",
+ "response method predict_proba is not defined in MyClassifier",
+ ),
+ (
+ "decision_function",
+ "response method decision_function is not defined in MyClassifier",
+ ),
+ (
+ "auto",
+ "response method decision_function or predict_proba is not "
+ "defined in MyClassifier",
+ ),
+ (
+ "bad_method",
+ "response_method must be 'predict_proba', 'decision_function' or 'auto'",
+ ),
+ ],
+)
+@pytest.mark.parametrize("Display", [DetCurveDisplay])
+def test_display_curve_error_no_response(
+ pyplot,
+ data_binary,
+ response_method,
+ msg,
+ Display,
+):
+ """Check that a proper error is raised when the response method requested
+ is not defined for the given trained classifier."""
+ X, y = data_binary
+
+ class MyClassifier(ClassifierMixin):
+ def fit(self, X, y):
+ self.classes_ = [0, 1]
+ return self
+
+ clf = MyClassifier().fit(X, y)
+
+ with pytest.raises(ValueError, match=msg):
+ Display.from_estimator(clf, X, y, response_method=response_method)
+
+
+@pytest.mark.parametrize("Display", [DetCurveDisplay])
+@pytest.mark.parametrize("constructor_name", ["from_estimator", "from_predictions"])
+def test_display_curve_estimator_name_multiple_calls(
+ pyplot,
+ data_binary,
+ Display,
+ constructor_name,
+):
+ """Check that passing `name` when calling `plot` will overwrite the original name
+ in the legend."""
+ X, y = data_binary
+ clf_name = "my hand-crafted name"
+ clf = LogisticRegression().fit(X, y)
+ y_pred = clf.predict_proba(X)[:, 1]
+
+ # safe guard for the binary if/else construction
+ assert constructor_name in ("from_estimator", "from_predictions")
+
+ if constructor_name == "from_estimator":
+ disp = Display.from_estimator(clf, X, y, name=clf_name)
+ else:
+ disp = Display.from_predictions(y, y_pred, name=clf_name)
+ assert disp.estimator_name == clf_name
+ pyplot.close("all")
+ disp.plot()
+ assert clf_name in disp.line_.get_label()
+ pyplot.close("all")
+ clf_name = "another_name"
+ disp.plot(name=clf_name)
+ assert clf_name in disp.line_.get_label()
+
+
+@pytest.mark.parametrize(
+ "clf",
+ [
+ LogisticRegression(),
+ make_pipeline(StandardScaler(), LogisticRegression()),
+ make_pipeline(
+ make_column_transformer((StandardScaler(), [0, 1])), LogisticRegression()
+ ),
+ ],
+)
+@pytest.mark.parametrize("Display", [DetCurveDisplay])
+def test_display_curve_not_fitted_errors(pyplot, data_binary, clf, Display):
+ """Check that a proper error is raised when the classifier is not
+ fitted."""
+ X, y = data_binary
+ # clone since we parametrize the test and the classifier will be fitted
+ # when testing the second and subsequent plotting function
+ model = clone(clf)
+ with pytest.raises(NotFittedError):
+ Display.from_estimator(model, X, y)
+ model.fit(X, y)
+ disp = Display.from_estimator(model, X, y)
+ assert model.__class__.__name__ in disp.line_.get_label()
+ assert disp.estimator_name == model.__class__.__name__
diff --git a/sklearn/metrics/_plot/tests/test_det_curve_display.py b/sklearn/metrics/_plot/tests/test_det_curve_display.py
new file mode 100644
index 0000000000000..5d7a26d5e49a0
--- /dev/null
+++ b/sklearn/metrics/_plot/tests/test_det_curve_display.py
@@ -0,0 +1,108 @@
+import pytest
+import numpy as np
+from numpy.testing import assert_allclose
+
+from sklearn.datasets import load_iris
+from sklearn.linear_model import LogisticRegression
+
+from sklearn.metrics import det_curve
+from sklearn.metrics import DetCurveDisplay
+
+
+@pytest.mark.parametrize("constructor_name", ["from_estimator", "from_predictions"])
+@pytest.mark.parametrize("response_method", ["predict_proba", "decision_function"])
+@pytest.mark.parametrize("with_sample_weight", [True, False])
+@pytest.mark.parametrize("with_strings", [True, False])
+def test_det_curve_display(
+ pyplot, constructor_name, response_method, with_sample_weight, with_strings
+):
+ X, y = load_iris(return_X_y=True)
+ # Binarize the data with only the two first classes
+ X, y = X[y < 2], y[y < 2]
+
+ pos_label = None
+ if with_strings:
+ y = np.array(["c", "b"])[y]
+ pos_label = "c"
+
+ if with_sample_weight:
+ rng = np.random.RandomState(42)
+ sample_weight = rng.randint(1, 4, size=(X.shape[0]))
+ else:
+ sample_weight = None
+
+ lr = LogisticRegression()
+ lr.fit(X, y)
+ y_pred = getattr(lr, response_method)(X)
+ if y_pred.ndim == 2:
+ y_pred = y_pred[:, 1]
+
+ # safe guard for the binary if/else construction
+ assert constructor_name in ("from_estimator", "from_predictions")
+
+ common_kwargs = {
+ "name": lr.__class__.__name__,
+ "alpha": 0.8,
+ "sample_weight": sample_weight,
+ "pos_label": pos_label,
+ }
+ if constructor_name == "from_estimator":
+ disp = DetCurveDisplay.from_estimator(lr, X, y, **common_kwargs)
+ else:
+ disp = DetCurveDisplay.from_predictions(y, y_pred, **common_kwargs)
+
+ fpr, fnr, _ = det_curve(
+ y,
+ y_pred,
+ sample_weight=sample_weight,
+ pos_label=pos_label,
+ )
+
+ assert_allclose(disp.fpr, fpr)
+ assert_allclose(disp.fnr, fnr)
+
+ assert disp.estimator_name == "LogisticRegression"
+
+ # cannot fail thanks to pyplot fixture
+ import matplotlib as mpl # noqa
+
+ assert isinstance(disp.line_, mpl.lines.Line2D)
+ assert disp.line_.get_alpha() == 0.8
+ assert isinstance(disp.ax_, mpl.axes.Axes)
+ assert isinstance(disp.figure_, mpl.figure.Figure)
+ assert disp.line_.get_label() == "LogisticRegression"
+
+ expected_pos_label = 1 if pos_label is None else pos_label
+ expected_ylabel = f"False Negative Rate (Positive label: {expected_pos_label})"
+ expected_xlabel = f"False Positive Rate (Positive label: {expected_pos_label})"
+ assert disp.ax_.get_ylabel() == expected_ylabel
+ assert disp.ax_.get_xlabel() == expected_xlabel
+
+
+@pytest.mark.parametrize(
+ "constructor_name, expected_clf_name",
+ [
+ ("from_estimator", "LogisticRegression"),
+ ("from_predictions", "Classifier"),
+ ],
+)
+def test_det_curve_display_default_name(
+ pyplot,
+ constructor_name,
+ expected_clf_name,
+):
+ # Check the default name display in the figure when `name` is not provided
+ X, y = load_iris(return_X_y=True)
+ # Binarize the data with only the two first classes
+ X, y = X[y < 2], y[y < 2]
+
+ lr = LogisticRegression().fit(X, y)
+ y_pred = lr.predict_proba(X)[:, 1]
+
+ if constructor_name == "from_estimator":
+ disp = DetCurveDisplay.from_estimator(lr, X, y)
+ else:
+ disp = DetCurveDisplay.from_predictions(y, y_pred)
+
+ assert disp.estimator_name == expected_clf_name
+ assert disp.line_.get_label() == expected_clf_name
diff --git a/sklearn/metrics/_plot/tests/test_plot_curve_common.py b/sklearn/metrics/_plot/tests/test_plot_curve_common.py
index 952f6423ad52c..bf86977d6e4d4 100644
--- a/sklearn/metrics/_plot/tests/test_plot_curve_common.py
+++ b/sklearn/metrics/_plot/tests/test_plot_curve_common.py
@@ -25,6 +25,7 @@ def data_binary(data):
return X[y < 2], y[y < 2]
+@pytest.mark.filterwarnings("ignore: Function plot_det_curve is deprecated")
@pytest.mark.parametrize("plot_func", [plot_det_curve, plot_roc_curve])
def test_plot_curve_error_non_binary(pyplot, data, plot_func):
X, y = data
@@ -36,6 +37,7 @@ def test_plot_curve_error_non_binary(pyplot, data, plot_func):
plot_func(clf, X, y)
+@pytest.mark.filterwarnings("ignore: Function plot_det_curve is deprecated")
@pytest.mark.parametrize(
"response_method, msg",
[
@@ -79,6 +81,7 @@ def fit(self, X, y):
plot_func(clf, X, y, response_method=response_method)
+@pytest.mark.filterwarnings("ignore: Function plot_det_curve is deprecated")
@pytest.mark.parametrize("plot_func", [plot_det_curve, plot_roc_curve])
def test_plot_curve_estimator_name_multiple_calls(pyplot, data_binary, plot_func):
# non-regression test checking that the `name` used when calling
@@ -97,6 +100,7 @@ def test_plot_curve_estimator_name_multiple_calls(pyplot, data_binary, plot_func
assert clf_name in disp.line_.get_label()
+@pytest.mark.filterwarnings("ignore: Function plot_det_curve is deprecated")
@pytest.mark.parametrize(
"clf",
[
diff --git a/sklearn/metrics/_plot/tests/test_plot_det_curve.py b/sklearn/metrics/_plot/tests/test_plot_det_curve.py
index 81faf3caa7954..31f840a6a1ff5 100644
--- a/sklearn/metrics/_plot/tests/test_plot_det_curve.py
+++ b/sklearn/metrics/_plot/tests/test_plot_det_curve.py
@@ -1,3 +1,4 @@
+# TODO: remove this file when plot_det_curve will be removed in 1.2
import pytest
import numpy as np
from numpy.testing import assert_allclose
@@ -20,6 +21,7 @@ def data_binary(data):
return X[y < 2], y[y < 2]
+@pytest.mark.filterwarnings("ignore: Function plot_det_curve is deprecated")
@pytest.mark.parametrize("response_method", ["predict_proba", "decision_function"])
@pytest.mark.parametrize("with_sample_weight", [True, False])
@pytest.mark.parametrize("with_strings", [True, False])
| diff --git a/doc/whats_new/v1.0.rst b/doc/whats_new/v1.0.rst
index 230802e10ef2a..bdea970117f96 100644
--- a/doc/whats_new/v1.0.rst
+++ b/doc/whats_new/v1.0.rst
@@ -452,6 +452,14 @@ Changelog
class methods and will be removed in 1.2.
:pr:`18543` by `Guillaume Lemaitre`_.
+- |API| :class:`metrics.DetCurveDisplay` exposes two class methods
+ :func:`~metrics.DetCurveDisplay.from_estimator` and
+ :func:`~metrics.DetCurveDisplay.from_predictions` allowing to create
+ a DET curve plot using an estimator or the predictions.
+ :func:`metrics.plot_det_curve` is deprecated in favor of these two
+ class methods and will be removed in 1.2.
+ :pr:`19278` by `Guillaume Lemaitre`_.
+
- |Enhancement| A fix to raise an error in :func:`metrics.hinge_loss` when
``pred_decision`` is 1d whereas it is a multiclass classification or when
``pred_decision`` parameter is not consistent with the ``labels`` parameter.
| [
{
"components": [
{
"doc": "Plot DET curve given an estimator and data.\n\nRead more in the :ref:`User Guide <visualizations>`.\n\n.. versionadded:: 1.0\n\nParameters\n----------\nestimator : estimator instance\n Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`\n in which th... | [
"sklearn/metrics/_plot/tests/test_common_curve_display.py::test_display_curve_error_non_binary[DetCurveDisplay]",
"sklearn/metrics/_plot/tests/test_common_curve_display.py::test_display_curve_error_no_response[DetCurveDisplay-predict_proba-response",
"sklearn/metrics/_plot/tests/test_common_curve_display.py::te... | [
"sklearn/metrics/_plot/tests/test_plot_curve_common.py::test_plot_curve_error_non_binary[plot_det_curve]",
"sklearn/metrics/_plot/tests/test_plot_curve_common.py::test_plot_curve_error_non_binary[plot_roc_curve]",
"sklearn/metrics/_plot/tests/test_plot_curve_common.py::test_plot_curve_error_no_response[plot_det... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
API deprecate plot_det_curve in favor of display class methods
Address some of #15880
Create 2 class methods `from_estimator` and `from_predictions` to build the plot for the DET curve.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sklearn/metrics/_plot/det_curve.py]
(definition of DetCurveDisplay.from_estimator:)
def from_estimator( cls, estimator, X, y, *, sample_weight=None, response_method="auto", pos_label=None, name=None, ax=None, **kwargs, ):
"""Plot DET curve given an estimator and data.
Read more in the :ref:`User Guide <visualizations>`.
.. versionadded:: 1.0
Parameters
----------
estimator : estimator instance
Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
in which the last estimator is a classifier.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
response_method : {'predict_proba', 'decision_function', 'auto'} default='auto'
Specifies whether to use :term:`predict_proba` or
:term:`decision_function` as the predicted target response. If set
to 'auto', :term:`predict_proba` is tried first and if it does not
exist :term:`decision_function` is tried next.
pos_label : str or int, default=None
The label of the positive class. When `pos_label=None`, if `y_true`
is in {-1, 1} or {0, 1}, `pos_label` is set to 1, otherwise an
error will be raised.
name : str, default=None
Name of DET curve for labeling. If `None`, use the name of the
estimator.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
**kwargs : dict
Additional keywords arguments passed to matplotlib `plot` function.
Returns
-------
display : :class:`~sklearn.metrics.DetCurveDisplay`
Object that stores computed values.
See Also
--------
det_curve : Compute error rates for different probability thresholds.
DetCurveDisplay.from_predictions : Plot DET curve given the true and
predicted labels.
plot_roc_curve : Plot Receiver operating characteristic (ROC) curve.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import DetCurveDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.svm import SVC
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> clf = SVC(random_state=0).fit(X_train, y_train)
>>> DetCurveDisplay.from_estimator(
... clf, X_test, y_test)
<...>
>>> plt.show()"""
(definition of DetCurveDisplay.from_predictions:)
def from_predictions( cls, y_true, y_pred, *, sample_weight=None, pos_label=None, name=None, ax=None, **kwargs, ):
"""Plot DET curve given the true and
predicted labels.
Read more in the :ref:`User Guide <visualizations>`.
.. versionadded:: 1.0
Parameters
----------
y_true : array-like of shape (n_samples,)
True labels.
y_pred : array-like of shape (n_samples,)
Target scores, can either be probability estimates of the positive
class, confidence values, or non-thresholded measure of decisions
(as returned by `decision_function` on some classifiers).
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
pos_label : str or int, default=None
The label of the positive class. When `pos_label=None`, if `y_true`
is in {-1, 1} or {0, 1}, `pos_label` is set to 1, otherwise an
error will be raised.
name : str, default=None
Name of DET curve for labeling. If `None`, name will be set to
`"Classifier"`.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
**kwargs : dict
Additional keywords arguments passed to matplotlib `plot` function.
Returns
-------
display : :class:`~sklearn.metrics.DetCurveDisplay`
Object that stores computed values.
See Also
--------
det_curve : Compute error rates for different probability thresholds.
DetCurveDisplay.from_estimator : Plot DET curve given an estimator and
some data.
plot_roc_curve : Plot Receiver operating characteristic (ROC) curve.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import DetCurveDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.svm import SVC
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> clf = SVC(random_state=0).fit(X_train, y_train)
>>> y_pred = clf.decision_function(X_test)
>>> DetCurveDisplay.from_predictions(
... y_test, y_pred)
<...>
>>> plt.show()"""
[end of new definitions in sklearn/metrics/_plot/det_curve.py]
</definitions>
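Both class methods defined above delegate the error-rate computation to `det_curve`. As a rough illustration of what that computation involves — a simplified sketch, not the actual scikit-learn implementation (in particular it keeps the endpoint thresholds that `det_curve` trims away, and `det_points` is a hypothetical name) — the false positive and false negative rates at each distinct score threshold can be computed as:

```python
import numpy as np

def det_points(y_true, y_score, pos_label=1):
    # Simplified sketch: at each distinct score threshold, compute the
    # false positive rate and false negative rate for predictions
    # `score >= threshold` (highest threshold first).
    y_true = np.asarray(y_true) == pos_label
    order = np.argsort(y_score)[::-1]
    y_true, y_score = y_true[order], np.asarray(y_score)[order]
    distinct = np.where(np.diff(y_score))[0]
    idx = np.r_[distinct, y_true.size - 1]  # last index of each distinct score
    tps = np.cumsum(y_true)[idx]            # true positives at each threshold
    fps = 1 + idx - tps                     # false positives at each threshold
    fpr = fps / fps[-1]                     # rate over all negatives
    fnr = (tps[-1] - tps) / tps[-1]         # rate over all positives
    return fpr, fnr, y_score[idx]

fpr, fnr, thr = det_points([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
# fpr: [0.0, 0.5, 0.5, 1.0], fnr: [0.5, 0.5, 0.0, 0.0]
```

`from_predictions` then plots `fnr` against `fpr` on deviate-scaled axes; the display object only stores the arrays and the matplotlib artists.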
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 3c732b9f6a77e95dfa6beb154ca2e1e7848b74f9 | |
pydata__xarray-4836 | 4,836 | pydata/xarray | 0.12 | bc35548d96caaec225be9a26afbbaa94069c9494 | 2021-01-21T12:38:58Z | diff --git a/xarray/backends/cfgrib_.py b/xarray/backends/cfgrib_.py
index 4a0ac7d67f9..65c5bc2a02b 100644
--- a/xarray/backends/cfgrib_.py
+++ b/xarray/backends/cfgrib_.py
@@ -12,7 +12,7 @@
BackendEntrypoint,
)
from .locks import SerializableLock, ensure_lock
-from .store import open_backend_dataset_store
+from .store import StoreBackendEntrypoint
try:
import cfgrib
@@ -86,62 +86,58 @@ def get_encoding(self):
return encoding
-def guess_can_open_cfgrib(store_spec):
- try:
- _, ext = os.path.splitext(store_spec)
- except TypeError:
- return False
- return ext in {".grib", ".grib2", ".grb", ".grb2"}
-
-
-def open_backend_dataset_cfgrib(
- filename_or_obj,
- *,
- mask_and_scale=True,
- decode_times=None,
- concat_characters=None,
- decode_coords=None,
- drop_variables=None,
- use_cftime=None,
- decode_timedelta=None,
- lock=None,
- indexpath="{path}.{short_hash}.idx",
- filter_by_keys={},
- read_keys=[],
- encode_cf=("parameter", "time", "geography", "vertical"),
- squeeze=True,
- time_dims=("time", "step"),
-):
-
- store = CfGribDataStore(
+class CfgribfBackendEntrypoint(BackendEntrypoint):
+ def guess_can_open(self, store_spec):
+ try:
+ _, ext = os.path.splitext(store_spec)
+ except TypeError:
+ return False
+ return ext in {".grib", ".grib2", ".grb", ".grb2"}
+
+ def open_dataset(
+ self,
filename_or_obj,
- indexpath=indexpath,
- filter_by_keys=filter_by_keys,
- read_keys=read_keys,
- encode_cf=encode_cf,
- squeeze=squeeze,
- time_dims=time_dims,
- lock=lock,
- )
-
- with close_on_error(store):
- ds = open_backend_dataset_store(
- store,
- mask_and_scale=mask_and_scale,
- decode_times=decode_times,
- concat_characters=concat_characters,
- decode_coords=decode_coords,
- drop_variables=drop_variables,
- use_cftime=use_cftime,
- decode_timedelta=decode_timedelta,
+ *,
+ mask_and_scale=True,
+ decode_times=None,
+ concat_characters=None,
+ decode_coords=None,
+ drop_variables=None,
+ use_cftime=None,
+ decode_timedelta=None,
+ lock=None,
+ indexpath="{path}.{short_hash}.idx",
+ filter_by_keys={},
+ read_keys=[],
+ encode_cf=("parameter", "time", "geography", "vertical"),
+ squeeze=True,
+ time_dims=("time", "step"),
+ ):
+
+ store = CfGribDataStore(
+ filename_or_obj,
+ indexpath=indexpath,
+ filter_by_keys=filter_by_keys,
+ read_keys=read_keys,
+ encode_cf=encode_cf,
+ squeeze=squeeze,
+ time_dims=time_dims,
+ lock=lock,
)
- return ds
-
-
-cfgrib_backend = BackendEntrypoint(
- open_dataset=open_backend_dataset_cfgrib, guess_can_open=guess_can_open_cfgrib
-)
+ store_entrypoint = StoreBackendEntrypoint()
+ with close_on_error(store):
+ ds = store_entrypoint.open_dataset(
+ store,
+ mask_and_scale=mask_and_scale,
+ decode_times=decode_times,
+ concat_characters=concat_characters,
+ decode_coords=decode_coords,
+ drop_variables=drop_variables,
+ use_cftime=use_cftime,
+ decode_timedelta=decode_timedelta,
+ )
+ return ds
if has_cfgrib:
- BACKEND_ENTRYPOINTS["cfgrib"] = cfgrib_backend
+ BACKEND_ENTRYPOINTS["cfgrib"] = CfgribfBackendEntrypoint
diff --git a/xarray/backends/common.py b/xarray/backends/common.py
index adb70658fab..e2905d0866b 100644
--- a/xarray/backends/common.py
+++ b/xarray/backends/common.py
@@ -1,7 +1,7 @@
import logging
import time
import traceback
-from typing import Dict
+from typing import Dict, Tuple, Type, Union
import numpy as np
@@ -344,12 +344,13 @@ def encode(self, variables, attributes):
class BackendEntrypoint:
- __slots__ = ("guess_can_open", "open_dataset", "open_dataset_parameters")
+ open_dataset_parameters: Union[Tuple, None] = None
- def __init__(self, open_dataset, open_dataset_parameters=None, guess_can_open=None):
- self.open_dataset = open_dataset
- self.open_dataset_parameters = open_dataset_parameters
- self.guess_can_open = guess_can_open
+ def open_dataset(self):
+ raise NotImplementedError
+ def guess_can_open(self, store_spec):
+ return False
-BACKEND_ENTRYPOINTS: Dict[str, BackendEntrypoint] = {}
+
+BACKEND_ENTRYPOINTS: Dict[str, Type[BackendEntrypoint]] = {}
diff --git a/xarray/backends/h5netcdf_.py b/xarray/backends/h5netcdf_.py
index 562600de4b6..aa892c4f89c 100644
--- a/xarray/backends/h5netcdf_.py
+++ b/xarray/backends/h5netcdf_.py
@@ -23,7 +23,7 @@
_get_datatype,
_nc4_require_group,
)
-from .store import open_backend_dataset_store
+from .store import StoreBackendEntrypoint
try:
import h5netcdf
@@ -328,62 +328,61 @@ def close(self, **kwargs):
self._manager.close(**kwargs)
-def guess_can_open_h5netcdf(store_spec):
- try:
- return read_magic_number(store_spec).startswith(b"\211HDF\r\n\032\n")
- except TypeError:
- pass
-
- try:
- _, ext = os.path.splitext(store_spec)
- except TypeError:
- return False
-
- return ext in {".nc", ".nc4", ".cdf"}
-
-
-def open_backend_dataset_h5netcdf(
- filename_or_obj,
- *,
- mask_and_scale=True,
- decode_times=None,
- concat_characters=None,
- decode_coords=None,
- drop_variables=None,
- use_cftime=None,
- decode_timedelta=None,
- format=None,
- group=None,
- lock=None,
- invalid_netcdf=None,
- phony_dims=None,
-):
-
- store = H5NetCDFStore.open(
+class H5netcdfBackendEntrypoint(BackendEntrypoint):
+ def guess_can_open(self, store_spec):
+ try:
+ return read_magic_number(store_spec).startswith(b"\211HDF\r\n\032\n")
+ except TypeError:
+ pass
+
+ try:
+ _, ext = os.path.splitext(store_spec)
+ except TypeError:
+ return False
+
+ return ext in {".nc", ".nc4", ".cdf"}
+
+ def open_dataset(
+ self,
filename_or_obj,
- format=format,
- group=group,
- lock=lock,
- invalid_netcdf=invalid_netcdf,
- phony_dims=phony_dims,
- )
+ *,
+ mask_and_scale=True,
+ decode_times=None,
+ concat_characters=None,
+ decode_coords=None,
+ drop_variables=None,
+ use_cftime=None,
+ decode_timedelta=None,
+ format=None,
+ group=None,
+ lock=None,
+ invalid_netcdf=None,
+ phony_dims=None,
+ ):
- ds = open_backend_dataset_store(
- store,
- mask_and_scale=mask_and_scale,
- decode_times=decode_times,
- concat_characters=concat_characters,
- decode_coords=decode_coords,
- drop_variables=drop_variables,
- use_cftime=use_cftime,
- decode_timedelta=decode_timedelta,
- )
- return ds
+ store = H5NetCDFStore.open(
+ filename_or_obj,
+ format=format,
+ group=group,
+ lock=lock,
+ invalid_netcdf=invalid_netcdf,
+ phony_dims=phony_dims,
+ )
+ store_entrypoint = StoreBackendEntrypoint()
+
+ ds = store_entrypoint.open_dataset(
+ store,
+ mask_and_scale=mask_and_scale,
+ decode_times=decode_times,
+ concat_characters=concat_characters,
+ decode_coords=decode_coords,
+ drop_variables=drop_variables,
+ use_cftime=use_cftime,
+ decode_timedelta=decode_timedelta,
+ )
+ return ds
-h5netcdf_backend = BackendEntrypoint(
- open_dataset=open_backend_dataset_h5netcdf, guess_can_open=guess_can_open_h5netcdf
-)
if has_h5netcdf:
- BACKEND_ENTRYPOINTS["h5netcdf"] = h5netcdf_backend
+ BACKEND_ENTRYPOINTS["h5netcdf"] = H5netcdfBackendEntrypoint
diff --git a/xarray/backends/netCDF4_.py b/xarray/backends/netCDF4_.py
index 5bb4eec837b..e3d87aaf83f 100644
--- a/xarray/backends/netCDF4_.py
+++ b/xarray/backends/netCDF4_.py
@@ -22,7 +22,7 @@
from .file_manager import CachingFileManager, DummyFileManager
from .locks import HDF5_LOCK, NETCDFC_LOCK, combine_locks, ensure_lock, get_write_lock
from .netcdf3 import encode_nc3_attr_value, encode_nc3_variable
-from .store import open_backend_dataset_store
+from .store import StoreBackendEntrypoint
try:
import netCDF4
@@ -512,65 +512,62 @@ def close(self, **kwargs):
self._manager.close(**kwargs)
-def guess_can_open_netcdf4(store_spec):
- if isinstance(store_spec, str) and is_remote_uri(store_spec):
- return True
- try:
- _, ext = os.path.splitext(store_spec)
- except TypeError:
- return False
- return ext in {".nc", ".nc4", ".cdf"}
-
-
-def open_backend_dataset_netcdf4(
- filename_or_obj,
- mask_and_scale=True,
- decode_times=None,
- concat_characters=None,
- decode_coords=None,
- drop_variables=None,
- use_cftime=None,
- decode_timedelta=None,
- group=None,
- mode="r",
- format="NETCDF4",
- clobber=True,
- diskless=False,
- persist=False,
- lock=None,
- autoclose=False,
-):
+class NetCDF4BackendEntrypoint(BackendEntrypoint):
+ def guess_can_open(self, store_spec):
+ if isinstance(store_spec, str) and is_remote_uri(store_spec):
+ return True
+ try:
+ _, ext = os.path.splitext(store_spec)
+ except TypeError:
+ return False
+ return ext in {".nc", ".nc4", ".cdf"}
- store = NetCDF4DataStore.open(
+ def open_dataset(
+ self,
filename_or_obj,
- mode=mode,
- format=format,
- group=group,
- clobber=clobber,
- diskless=diskless,
- persist=persist,
- lock=lock,
- autoclose=autoclose,
- )
+ mask_and_scale=True,
+ decode_times=None,
+ concat_characters=None,
+ decode_coords=None,
+ drop_variables=None,
+ use_cftime=None,
+ decode_timedelta=None,
+ group=None,
+ mode="r",
+ format="NETCDF4",
+ clobber=True,
+ diskless=False,
+ persist=False,
+ lock=None,
+ autoclose=False,
+ ):
- with close_on_error(store):
- ds = open_backend_dataset_store(
- store,
- mask_and_scale=mask_and_scale,
- decode_times=decode_times,
- concat_characters=concat_characters,
- decode_coords=decode_coords,
- drop_variables=drop_variables,
- use_cftime=use_cftime,
- decode_timedelta=decode_timedelta,
+ store = NetCDF4DataStore.open(
+ filename_or_obj,
+ mode=mode,
+ format=format,
+ group=group,
+ clobber=clobber,
+ diskless=diskless,
+ persist=persist,
+ lock=lock,
+ autoclose=autoclose,
)
- return ds
-
-netcdf4_backend = BackendEntrypoint(
- open_dataset=open_backend_dataset_netcdf4, guess_can_open=guess_can_open_netcdf4
-)
+ store_entrypoint = StoreBackendEntrypoint()
+ with close_on_error(store):
+ ds = store_entrypoint.open_dataset(
+ store,
+ mask_and_scale=mask_and_scale,
+ decode_times=decode_times,
+ concat_characters=concat_characters,
+ decode_coords=decode_coords,
+ drop_variables=drop_variables,
+ use_cftime=use_cftime,
+ decode_timedelta=decode_timedelta,
+ )
+ return ds
if has_netcdf4:
- BACKEND_ENTRYPOINTS["netcdf4"] = netcdf4_backend
+ BACKEND_ENTRYPOINTS["netcdf4"] = NetCDF4BackendEntrypoint
diff --git a/xarray/backends/plugins.py b/xarray/backends/plugins.py
index 6d3ec7e7da5..b8cd2bf6378 100644
--- a/xarray/backends/plugins.py
+++ b/xarray/backends/plugins.py
@@ -36,6 +36,7 @@ def remove_duplicates(backend_entrypoints):
def detect_parameters(open_dataset):
signature = inspect.signature(open_dataset)
parameters = signature.parameters
+ parameters_list = []
for name, param in parameters.items():
if param.kind in (
inspect.Parameter.VAR_KEYWORD,
@@ -45,7 +46,9 @@ def detect_parameters(open_dataset):
f"All the parameters in {open_dataset!r} signature should be explicit. "
"*args and **kwargs is not supported"
)
- return tuple(parameters)
+ if name != "self":
+ parameters_list.append(name)
+ return tuple(parameters_list)
def create_engines_dict(backend_entrypoints):
@@ -57,8 +60,8 @@ def create_engines_dict(backend_entrypoints):
return engines
-def set_missing_parameters(engines):
- for name, backend in engines.items():
+def set_missing_parameters(backend_entrypoints):
+ for name, backend in backend_entrypoints.items():
if backend.open_dataset_parameters is None:
open_dataset = backend.open_dataset
backend.open_dataset_parameters = detect_parameters(open_dataset)
@@ -70,7 +73,10 @@ def build_engines(entrypoints):
external_backend_entrypoints = create_engines_dict(pkg_entrypoints)
backend_entrypoints.update(external_backend_entrypoints)
set_missing_parameters(backend_entrypoints)
- return backend_entrypoints
+ engines = {}
+ for name, backend in backend_entrypoints.items():
+ engines[name] = backend()
+ return engines
@functools.lru_cache(maxsize=1)
diff --git a/xarray/backends/pseudonetcdf_.py b/xarray/backends/pseudonetcdf_.py
index c2bfd519bed..80485fce459 100644
--- a/xarray/backends/pseudonetcdf_.py
+++ b/xarray/backends/pseudonetcdf_.py
@@ -11,7 +11,7 @@
)
from .file_manager import CachingFileManager
from .locks import HDF5_LOCK, NETCDFC_LOCK, combine_locks, ensure_lock
-from .store import open_backend_dataset_store
+from .store import StoreBackendEntrypoint
try:
from PseudoNetCDF import pncopen
@@ -100,57 +100,55 @@ def close(self):
self._manager.close()
-def open_backend_dataset_pseudonetcdf(
- filename_or_obj,
- mask_and_scale=False,
- decode_times=None,
- concat_characters=None,
- decode_coords=None,
- drop_variables=None,
- use_cftime=None,
- decode_timedelta=None,
- mode=None,
- lock=None,
- **format_kwargs,
-):
-
- store = PseudoNetCDFDataStore.open(
- filename_or_obj, lock=lock, mode=mode, **format_kwargs
+class PseudoNetCDFBackendEntrypoint(BackendEntrypoint):
+
+ # *args and **kwargs are not allowed in open_backend_dataset_ kwargs,
+ # unless the open_dataset_parameters are explicitly defined like this:
+ open_dataset_parameters = (
+ "filename_or_obj",
+ "mask_and_scale",
+ "decode_times",
+ "concat_characters",
+ "decode_coords",
+ "drop_variables",
+ "use_cftime",
+ "decode_timedelta",
+ "mode",
+ "lock",
)
- with close_on_error(store):
- ds = open_backend_dataset_store(
- store,
- mask_and_scale=mask_and_scale,
- decode_times=decode_times,
- concat_characters=concat_characters,
- decode_coords=decode_coords,
- drop_variables=drop_variables,
- use_cftime=use_cftime,
- decode_timedelta=decode_timedelta,
+ def open_dataset(
+ self,
+ filename_or_obj,
+ mask_and_scale=False,
+ decode_times=None,
+ concat_characters=None,
+ decode_coords=None,
+ drop_variables=None,
+ use_cftime=None,
+ decode_timedelta=None,
+ mode=None,
+ lock=None,
+ **format_kwargs,
+ ):
+ store = PseudoNetCDFDataStore.open(
+ filename_or_obj, lock=lock, mode=mode, **format_kwargs
)
- return ds
-
-
-# *args and **kwargs are not allowed in open_backend_dataset_ kwargs,
-# unless the open_dataset_parameters are explicity defined like this:
-open_dataset_parameters = (
- "filename_or_obj",
- "mask_and_scale",
- "decode_times",
- "concat_characters",
- "decode_coords",
- "drop_variables",
- "use_cftime",
- "decode_timedelta",
- "mode",
- "lock",
-)
-pseudonetcdf_backend = BackendEntrypoint(
- open_dataset=open_backend_dataset_pseudonetcdf,
- open_dataset_parameters=open_dataset_parameters,
-)
+
+ store_entrypoint = StoreBackendEntrypoint()
+ with close_on_error(store):
+ ds = store_entrypoint.open_dataset(
+ store,
+ mask_and_scale=mask_and_scale,
+ decode_times=decode_times,
+ concat_characters=concat_characters,
+ decode_coords=decode_coords,
+ drop_variables=drop_variables,
+ use_cftime=use_cftime,
+ decode_timedelta=decode_timedelta,
+ )
+ return ds
if has_pseudonetcdf:
- BACKEND_ENTRYPOINTS["pseudonetcdf"] = pseudonetcdf_backend
+ BACKEND_ENTRYPOINTS["pseudonetcdf"] = PseudoNetCDFBackendEntrypoint
diff --git a/xarray/backends/pydap_.py b/xarray/backends/pydap_.py
index c5ce943a10a..7f8622ca66e 100644
--- a/xarray/backends/pydap_.py
+++ b/xarray/backends/pydap_.py
@@ -11,7 +11,7 @@
BackendEntrypoint,
robust_getitem,
)
-from .store import open_backend_dataset_store
+from .store import StoreBackendEntrypoint
try:
import pydap.client
@@ -107,45 +107,41 @@ def get_dimensions(self):
return Frozen(self.ds.dimensions)
-def guess_can_open_pydap(store_spec):
- return isinstance(store_spec, str) and is_remote_uri(store_spec)
+class PydapBackendEntrypoint(BackendEntrypoint):
+ def guess_can_open(self, store_spec):
+ return isinstance(store_spec, str) and is_remote_uri(store_spec)
-
-def open_backend_dataset_pydap(
- filename_or_obj,
- mask_and_scale=True,
- decode_times=None,
- concat_characters=None,
- decode_coords=None,
- drop_variables=None,
- use_cftime=None,
- decode_timedelta=None,
- session=None,
-):
-
- store = PydapDataStore.open(
+ def open_dataset(
+ self,
filename_or_obj,
- session=session,
- )
-
- with close_on_error(store):
- ds = open_backend_dataset_store(
- store,
- mask_and_scale=mask_and_scale,
- decode_times=decode_times,
- concat_characters=concat_characters,
- decode_coords=decode_coords,
- drop_variables=drop_variables,
- use_cftime=use_cftime,
- decode_timedelta=decode_timedelta,
+ mask_and_scale=True,
+ decode_times=None,
+ concat_characters=None,
+ decode_coords=None,
+ drop_variables=None,
+ use_cftime=None,
+ decode_timedelta=None,
+ session=None,
+ ):
+ store = PydapDataStore.open(
+ filename_or_obj,
+ session=session,
)
- return ds
-
-pydap_backend = BackendEntrypoint(
- open_dataset=open_backend_dataset_pydap, guess_can_open=guess_can_open_pydap
-)
+ store_entrypoint = StoreBackendEntrypoint()
+ with close_on_error(store):
+ ds = store_entrypoint.open_dataset(
+ store,
+ mask_and_scale=mask_and_scale,
+ decode_times=decode_times,
+ concat_characters=concat_characters,
+ decode_coords=decode_coords,
+ drop_variables=drop_variables,
+ use_cftime=use_cftime,
+ decode_timedelta=decode_timedelta,
+ )
+ return ds
if has_pydap:
- BACKEND_ENTRYPOINTS["pydap"] = pydap_backend
+ BACKEND_ENTRYPOINTS["pydap"] = PydapBackendEntrypoint
diff --git a/xarray/backends/pynio_.py b/xarray/backends/pynio_.py
index 261daa69880..41c99efd076 100644
--- a/xarray/backends/pynio_.py
+++ b/xarray/backends/pynio_.py
@@ -11,7 +11,7 @@
)
from .file_manager import CachingFileManager
from .locks import HDF5_LOCK, NETCDFC_LOCK, SerializableLock, combine_locks, ensure_lock
-from .store import open_backend_dataset_store
+from .store import StoreBackendEntrypoint
try:
import Nio
@@ -97,41 +97,39 @@ def close(self):
self._manager.close()
-def open_backend_dataset_pynio(
- filename_or_obj,
- mask_and_scale=True,
- decode_times=None,
- concat_characters=None,
- decode_coords=None,
- drop_variables=None,
- use_cftime=None,
- decode_timedelta=None,
- mode="r",
- lock=None,
-):
-
- store = NioDataStore(
+class PynioBackendEntrypoint(BackendEntrypoint):
+ def open_dataset(
filename_or_obj,
- mode=mode,
- lock=lock,
- )
-
- with close_on_error(store):
- ds = open_backend_dataset_store(
- store,
- mask_and_scale=mask_and_scale,
- decode_times=decode_times,
- concat_characters=concat_characters,
- decode_coords=decode_coords,
- drop_variables=drop_variables,
- use_cftime=use_cftime,
- decode_timedelta=decode_timedelta,
+ mask_and_scale=True,
+ decode_times=None,
+ concat_characters=None,
+ decode_coords=None,
+ drop_variables=None,
+ use_cftime=None,
+ decode_timedelta=None,
+ mode="r",
+ lock=None,
+ ):
+ store = NioDataStore(
+ filename_or_obj,
+ mode=mode,
+ lock=lock,
)
- return ds
-
-pynio_backend = BackendEntrypoint(open_dataset=open_backend_dataset_pynio)
+ store_entrypoint = StoreBackendEntrypoint()
+ with close_on_error(store):
+ ds = store_entrypoint.open_dataset(
+ store,
+ mask_and_scale=mask_and_scale,
+ decode_times=decode_times,
+ concat_characters=concat_characters,
+ decode_coords=decode_coords,
+ drop_variables=drop_variables,
+ use_cftime=use_cftime,
+ decode_timedelta=decode_timedelta,
+ )
+ return ds
if has_pynio:
- BACKEND_ENTRYPOINTS["pynio"] = pynio_backend
+ BACKEND_ENTRYPOINTS["pynio"] = PynioBackendEntrypoint
diff --git a/xarray/backends/scipy_.py b/xarray/backends/scipy_.py
index df51d07d686..ddc157ed8e4 100644
--- a/xarray/backends/scipy_.py
+++ b/xarray/backends/scipy_.py
@@ -15,7 +15,7 @@
from .file_manager import CachingFileManager, DummyFileManager
from .locks import ensure_lock, get_write_lock
from .netcdf3 import encode_nc3_attr_value, encode_nc3_variable, is_valid_nc3_name
-from .store import open_backend_dataset_store
+from .store import StoreBackendEntrypoint
try:
import scipy.io
@@ -232,56 +232,54 @@ def close(self):
self._manager.close()
-def guess_can_open_scipy(store_spec):
- try:
- return read_magic_number(store_spec).startswith(b"CDF")
- except TypeError:
- pass
+class ScipyBackendEntrypoint(BackendEntrypoint):
+ def guess_can_open(self, store_spec):
+ try:
+ return read_magic_number(store_spec).startswith(b"CDF")
+ except TypeError:
+ pass
- try:
- _, ext = os.path.splitext(store_spec)
- except TypeError:
- return False
- return ext in {".nc", ".nc4", ".cdf", ".gz"}
-
-
-def open_backend_dataset_scipy(
- filename_or_obj,
- mask_and_scale=True,
- decode_times=None,
- concat_characters=None,
- decode_coords=None,
- drop_variables=None,
- use_cftime=None,
- decode_timedelta=None,
- mode="r",
- format=None,
- group=None,
- mmap=None,
- lock=None,
-):
-
- store = ScipyDataStore(
- filename_or_obj, mode=mode, format=format, group=group, mmap=mmap, lock=lock
- )
- with close_on_error(store):
- ds = open_backend_dataset_store(
- store,
- mask_and_scale=mask_and_scale,
- decode_times=decode_times,
- concat_characters=concat_characters,
- decode_coords=decode_coords,
- drop_variables=drop_variables,
- use_cftime=use_cftime,
- decode_timedelta=decode_timedelta,
- )
- return ds
+ try:
+ _, ext = os.path.splitext(store_spec)
+ except TypeError:
+ return False
+ return ext in {".nc", ".nc4", ".cdf", ".gz"}
+
+ def open_dataset(
+ self,
+ filename_or_obj,
+ mask_and_scale=True,
+ decode_times=None,
+ concat_characters=None,
+ decode_coords=None,
+ drop_variables=None,
+ use_cftime=None,
+ decode_timedelta=None,
+ mode="r",
+ format=None,
+ group=None,
+ mmap=None,
+ lock=None,
+ ):
+ store = ScipyDataStore(
+ filename_or_obj, mode=mode, format=format, group=group, mmap=mmap, lock=lock
+ )
-scipy_backend = BackendEntrypoint(
- open_dataset=open_backend_dataset_scipy, guess_can_open=guess_can_open_scipy
-)
+ store_entrypoint = StoreBackendEntrypoint()
+ with close_on_error(store):
+ ds = store_entrypoint.open_dataset(
+ store,
+ mask_and_scale=mask_and_scale,
+ decode_times=decode_times,
+ concat_characters=concat_characters,
+ decode_coords=decode_coords,
+ drop_variables=drop_variables,
+ use_cftime=use_cftime,
+ decode_timedelta=decode_timedelta,
+ )
+ return ds
if has_scipy:
- BACKEND_ENTRYPOINTS["scipy"] = scipy_backend
+ BACKEND_ENTRYPOINTS["scipy"] = ScipyBackendEntrypoint
diff --git a/xarray/backends/store.py b/xarray/backends/store.py
index 66fca0d39c3..d57b3ab9df8 100644
--- a/xarray/backends/store.py
+++ b/xarray/backends/store.py
@@ -3,47 +3,43 @@
from .common import BACKEND_ENTRYPOINTS, AbstractDataStore, BackendEntrypoint
-def guess_can_open_store(store_spec):
- return isinstance(store_spec, AbstractDataStore)
-
-
-def open_backend_dataset_store(
- store,
- *,
- mask_and_scale=True,
- decode_times=True,
- concat_characters=True,
- decode_coords=True,
- drop_variables=None,
- use_cftime=None,
- decode_timedelta=None,
-):
- vars, attrs = store.load()
- encoding = store.get_encoding()
-
- vars, attrs, coord_names = conventions.decode_cf_variables(
- vars,
- attrs,
- mask_and_scale=mask_and_scale,
- decode_times=decode_times,
- concat_characters=concat_characters,
- decode_coords=decode_coords,
- drop_variables=drop_variables,
- use_cftime=use_cftime,
- decode_timedelta=decode_timedelta,
- )
-
- ds = Dataset(vars, attrs=attrs)
- ds = ds.set_coords(coord_names.intersection(vars))
- ds.set_close(store.close)
- ds.encoding = encoding
-
- return ds
-
-
-store_backend = BackendEntrypoint(
- open_dataset=open_backend_dataset_store, guess_can_open=guess_can_open_store
-)
-
-
-BACKEND_ENTRYPOINTS["store"] = store_backend
+class StoreBackendEntrypoint(BackendEntrypoint):
+ def guess_can_open(self, store_spec):
+ return isinstance(store_spec, AbstractDataStore)
+
+ def open_dataset(
+ self,
+ store,
+ *,
+ mask_and_scale=True,
+ decode_times=True,
+ concat_characters=True,
+ decode_coords=True,
+ drop_variables=None,
+ use_cftime=None,
+ decode_timedelta=None,
+ ):
+ vars, attrs = store.load()
+ encoding = store.get_encoding()
+
+ vars, attrs, coord_names = conventions.decode_cf_variables(
+ vars,
+ attrs,
+ mask_and_scale=mask_and_scale,
+ decode_times=decode_times,
+ concat_characters=concat_characters,
+ decode_coords=decode_coords,
+ drop_variables=drop_variables,
+ use_cftime=use_cftime,
+ decode_timedelta=decode_timedelta,
+ )
+
+ ds = Dataset(vars, attrs=attrs)
+ ds = ds.set_coords(coord_names.intersection(vars))
+ ds.set_close(store.close)
+ ds.encoding = encoding
+
+ return ds
+
+
+BACKEND_ENTRYPOINTS["store"] = StoreBackendEntrypoint
diff --git a/xarray/backends/zarr.py b/xarray/backends/zarr.py
index ceeb23cac9b..1d667a38b53 100644
--- a/xarray/backends/zarr.py
+++ b/xarray/backends/zarr.py
@@ -15,7 +15,7 @@
BackendEntrypoint,
_encode_variable_name,
)
-from .store import open_backend_dataset_store
+from .store import StoreBackendEntrypoint
try:
import zarr
@@ -670,49 +670,48 @@ def open_zarr(
return ds
-def open_backend_dataset_zarr(
- filename_or_obj,
- mask_and_scale=True,
- decode_times=None,
- concat_characters=None,
- decode_coords=None,
- drop_variables=None,
- use_cftime=None,
- decode_timedelta=None,
- group=None,
- mode="r",
- synchronizer=None,
- consolidated=False,
- consolidate_on_close=False,
- chunk_store=None,
-):
-
- store = ZarrStore.open_group(
+class ZarrBackendEntrypoint(BackendEntrypoint):
+ def open_dataset(
+ self,
filename_or_obj,
- group=group,
- mode=mode,
- synchronizer=synchronizer,
- consolidated=consolidated,
- consolidate_on_close=consolidate_on_close,
- chunk_store=chunk_store,
- )
-
- with close_on_error(store):
- ds = open_backend_dataset_store(
- store,
- mask_and_scale=mask_and_scale,
- decode_times=decode_times,
- concat_characters=concat_characters,
- decode_coords=decode_coords,
- drop_variables=drop_variables,
- use_cftime=use_cftime,
- decode_timedelta=decode_timedelta,
+ mask_and_scale=True,
+ decode_times=None,
+ concat_characters=None,
+ decode_coords=None,
+ drop_variables=None,
+ use_cftime=None,
+ decode_timedelta=None,
+ group=None,
+ mode="r",
+ synchronizer=None,
+ consolidated=False,
+ consolidate_on_close=False,
+ chunk_store=None,
+ ):
+ store = ZarrStore.open_group(
+ filename_or_obj,
+ group=group,
+ mode=mode,
+ synchronizer=synchronizer,
+ consolidated=consolidated,
+ consolidate_on_close=consolidate_on_close,
+ chunk_store=chunk_store,
)
- return ds
-
-zarr_backend = BackendEntrypoint(open_dataset=open_backend_dataset_zarr)
+ store_entrypoint = StoreBackendEntrypoint()
+ with close_on_error(store):
+ ds = store_entrypoint.open_dataset(
+ store,
+ mask_and_scale=mask_and_scale,
+ decode_times=decode_times,
+ concat_characters=concat_characters,
+ decode_coords=decode_coords,
+ drop_variables=drop_variables,
+ use_cftime=use_cftime,
+ decode_timedelta=decode_timedelta,
+ )
+ return ds
if has_zarr:
- BACKEND_ENTRYPOINTS["zarr"] = zarr_backend
+ BACKEND_ENTRYPOINTS["zarr"] = ZarrBackendEntrypoint
| diff --git a/xarray/tests/test_plugins.py b/xarray/tests/test_plugins.py
index 38ebce6da1a..64a1c563dba 100644
--- a/xarray/tests/test_plugins.py
+++ b/xarray/tests/test_plugins.py
@@ -6,19 +6,24 @@
from xarray.backends import common, plugins
-def dummy_open_dataset_args(filename_or_obj, *args):
- pass
+class DummyBackendEntrypointArgs(common.BackendEntrypoint):
+ def open_dataset(filename_or_obj, *args):
+ pass
-def dummy_open_dataset_kwargs(filename_or_obj, **kwargs):
- pass
+class DummyBackendEntrypointKwargs(common.BackendEntrypoint):
+ def open_dataset(filename_or_obj, **kwargs):
+ pass
-def dummy_open_dataset(filename_or_obj, *, decoder):
- pass
+class DummyBackendEntrypoint1(common.BackendEntrypoint):
+ def open_dataset(self, filename_or_obj, *, decoder):
+ pass
-dummy_cfgrib = common.BackendEntrypoint(dummy_open_dataset)
+class DummyBackendEntrypoint2(common.BackendEntrypoint):
+ def open_dataset(self, filename_or_obj, *, decoder):
+ pass
@pytest.fixture
@@ -65,46 +70,48 @@ def test_create_engines_dict():
def test_set_missing_parameters():
- backend_1 = common.BackendEntrypoint(dummy_open_dataset)
- backend_2 = common.BackendEntrypoint(dummy_open_dataset, ("filename_or_obj",))
+ backend_1 = DummyBackendEntrypoint1
+ backend_2 = DummyBackendEntrypoint2
+ backend_2.open_dataset_parameters = ("filename_or_obj",)
engines = {"engine_1": backend_1, "engine_2": backend_2}
plugins.set_missing_parameters(engines)
assert len(engines) == 2
- engine_1 = engines["engine_1"]
- assert engine_1.open_dataset_parameters == ("filename_or_obj", "decoder")
- engine_2 = engines["engine_2"]
- assert engine_2.open_dataset_parameters == ("filename_or_obj",)
+ assert backend_1.open_dataset_parameters == ("filename_or_obj", "decoder")
+ assert backend_2.open_dataset_parameters == ("filename_or_obj",)
+
+ backend = DummyBackendEntrypointKwargs()
+ backend.open_dataset_parameters = ("filename_or_obj", "decoder")
+ plugins.set_missing_parameters({"engine": backend})
+ assert backend.open_dataset_parameters == ("filename_or_obj", "decoder")
+
+ backend = DummyBackendEntrypointArgs()
+ backend.open_dataset_parameters = ("filename_or_obj", "decoder")
+ plugins.set_missing_parameters({"engine": backend})
+ assert backend.open_dataset_parameters == ("filename_or_obj", "decoder")
def test_set_missing_parameters_raise_error():
- backend = common.BackendEntrypoint(dummy_open_dataset_args)
+ backend = DummyBackendEntrypointKwargs()
with pytest.raises(TypeError):
plugins.set_missing_parameters({"engine": backend})
- backend = common.BackendEntrypoint(
- dummy_open_dataset_args, ("filename_or_obj", "decoder")
- )
- plugins.set_missing_parameters({"engine": backend})
-
- backend = common.BackendEntrypoint(dummy_open_dataset_kwargs)
+ backend = DummyBackendEntrypointArgs()
with pytest.raises(TypeError):
plugins.set_missing_parameters({"engine": backend})
- backend = common.BackendEntrypoint(
- dummy_open_dataset_kwargs, ("filename_or_obj", "decoder")
- )
- plugins.set_missing_parameters({"engine": backend})
-
-@mock.patch("pkg_resources.EntryPoint.load", mock.MagicMock(return_value=dummy_cfgrib))
+@mock.patch(
+ "pkg_resources.EntryPoint.load",
+ mock.MagicMock(return_value=DummyBackendEntrypoint1),
+)
def test_build_engines():
- dummy_cfgrib_pkg_entrypoint = pkg_resources.EntryPoint.parse(
+ dummy_pkg_entrypoint = pkg_resources.EntryPoint.parse(
"cfgrib = xarray.tests.test_plugins:backend_1"
)
- backend_entrypoints = plugins.build_engines([dummy_cfgrib_pkg_entrypoint])
- assert backend_entrypoints["cfgrib"] is dummy_cfgrib
+ backend_entrypoints = plugins.build_engines([dummy_pkg_entrypoint])
+ assert isinstance(backend_entrypoints["cfgrib"], DummyBackendEntrypoint1)
assert backend_entrypoints["cfgrib"].open_dataset_parameters == (
"filename_or_obj",
"decoder",
| [
{
"components": [
{
"doc": "",
"lines": [
89,
139
],
"name": "CfgribfBackendEntrypoint",
"signature": "class CfgribfBackendEntrypoint(BackendEntrypoint):",
"type": "class"
},
{
"doc": "",
"lines": [
9... | [
"xarray/tests/test_plugins.py::test_set_missing_parameters",
"xarray/tests/test_plugins.py::test_set_missing_parameters_raise_error",
"xarray/tests/test_plugins.py::test_build_engines"
] | [
"xarray/tests/test_plugins.py::test_remove_duplicates",
"xarray/tests/test_plugins.py::test_remove_duplicates_warnings",
"xarray/tests/test_plugins.py::test_create_engines_dict"
] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
backend interface, now it uses subclassing
Currently, the interface between the backend and xarray is the class/container BackendEntrypoint, that must be instantiated by the backend.
With this pull request, BackendEntrypoint is replaced by AbstractBackendEntrypoint. The backend will inherit from this class.
Reason for these changes:
- This type of interface is more standard.
- [x] Tests updated
- [x] Passes `pre-commit run --all-files`
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in xarray/backends/cfgrib_.py]
(definition of CfgribfBackendEntrypoint:)
class CfgribfBackendEntrypoint(BackendEntrypoint):
(definition of CfgribfBackendEntrypoint.guess_can_open:)
def guess_can_open(self, store_spec):
(definition of CfgribfBackendEntrypoint.open_dataset:)
def open_dataset( self, filename_or_obj, *, mask_and_scale=True, decode_times=None, concat_characters=None, decode_coords=None, drop_variables=None, use_cftime=None, decode_timedelta=None, lock=None, indexpath="{path}.{short_hash}.idx", filter_by_keys={}, read_keys=[], encode_cf=("parameter", "time", "geography", "vertical"), squeeze=True, time_dims=("time", "step"), ):
[end of new definitions in xarray/backends/cfgrib_.py]
[start of new definitions in xarray/backends/common.py]
(definition of BackendEntrypoint.open_dataset:)
def open_dataset(self):
(definition of BackendEntrypoint.guess_can_open:)
def guess_can_open(self, store_spec):
[end of new definitions in xarray/backends/common.py]
[start of new definitions in xarray/backends/h5netcdf_.py]
(definition of H5netcdfBackendEntrypoint:)
class H5netcdfBackendEntrypoint(BackendEntrypoint):
(definition of H5netcdfBackendEntrypoint.guess_can_open:)
def guess_can_open(self, store_spec):
(definition of H5netcdfBackendEntrypoint.open_dataset:)
def open_dataset( self, filename_or_obj, *, mask_and_scale=True, decode_times=None, concat_characters=None, decode_coords=None, drop_variables=None, use_cftime=None, decode_timedelta=None, format=None, group=None, lock=None, invalid_netcdf=None, phony_dims=None, ):
[end of new definitions in xarray/backends/h5netcdf_.py]
[start of new definitions in xarray/backends/netCDF4_.py]
(definition of NetCDF4BackendEntrypoint:)
class NetCDF4BackendEntrypoint(BackendEntrypoint):
(definition of NetCDF4BackendEntrypoint.guess_can_open:)
def guess_can_open(self, store_spec):
(definition of NetCDF4BackendEntrypoint.open_dataset:)
def open_dataset( self, filename_or_obj, mask_and_scale=True, decode_times=None, concat_characters=None, decode_coords=None, drop_variables=None, use_cftime=None, decode_timedelta=None, group=None, mode="r", format="NETCDF4", clobber=True, diskless=False, persist=False, lock=None, autoclose=False, ):
[end of new definitions in xarray/backends/netCDF4_.py]
[start of new definitions in xarray/backends/pseudonetcdf_.py]
(definition of PseudoNetCDFBackendEntrypoint:)
class PseudoNetCDFBackendEntrypoint(BackendEntrypoint):
(definition of PseudoNetCDFBackendEntrypoint.open_dataset:)
def open_dataset( self, filename_or_obj, mask_and_scale=False, decode_times=None, concat_characters=None, decode_coords=None, drop_variables=None, use_cftime=None, decode_timedelta=None, mode=None, lock=None, **format_kwargs, ):
[end of new definitions in xarray/backends/pseudonetcdf_.py]
[start of new definitions in xarray/backends/pydap_.py]
(definition of PydapBackendEntrypoint:)
class PydapBackendEntrypoint(BackendEntrypoint):
(definition of PydapBackendEntrypoint.guess_can_open:)
def guess_can_open(self, store_spec):
(definition of PydapBackendEntrypoint.open_dataset:)
def open_dataset( self, filename_or_obj, mask_and_scale=True, decode_times=None, concat_characters=None, decode_coords=None, drop_variables=None, use_cftime=None, decode_timedelta=None, session=None, ):
[end of new definitions in xarray/backends/pydap_.py]
[start of new definitions in xarray/backends/pynio_.py]
(definition of PynioBackendEntrypoint:)
class PynioBackendEntrypoint(BackendEntrypoint):
(definition of PynioBackendEntrypoint.open_dataset:)
def open_dataset( filename_or_obj, mask_and_scale=True, decode_times=None, concat_characters=None, decode_coords=None, drop_variables=None, use_cftime=None, decode_timedelta=None, mode="r", lock=None, ):
[end of new definitions in xarray/backends/pynio_.py]
[start of new definitions in xarray/backends/scipy_.py]
(definition of ScipyBackendEntrypoint:)
class ScipyBackendEntrypoint(BackendEntrypoint):
(definition of ScipyBackendEntrypoint.guess_can_open:)
def guess_can_open(self, store_spec):
(definition of ScipyBackendEntrypoint.open_dataset:)
def open_dataset( self, filename_or_obj, mask_and_scale=True, decode_times=None, concat_characters=None, decode_coords=None, drop_variables=None, use_cftime=None, decode_timedelta=None, mode="r", format=None, group=None, mmap=None, lock=None, ):
[end of new definitions in xarray/backends/scipy_.py]
[start of new definitions in xarray/backends/store.py]
(definition of StoreBackendEntrypoint:)
class StoreBackendEntrypoint(BackendEntrypoint):
(definition of StoreBackendEntrypoint.guess_can_open:)
def guess_can_open(self, store_spec):
(definition of StoreBackendEntrypoint.open_dataset:)
def open_dataset( self, store, *, mask_and_scale=True, decode_times=True, concat_characters=True, decode_coords=True, drop_variables=None, use_cftime=None, decode_timedelta=None, ):
[end of new definitions in xarray/backends/store.py]
[start of new definitions in xarray/backends/zarr.py]
(definition of ZarrBackendEntrypoint:)
class ZarrBackendEntrypoint(BackendEntrypoint):
(definition of ZarrBackendEntrypoint.open_dataset:)
def open_dataset( self, filename_or_obj, mask_and_scale=True, decode_times=None, concat_characters=None, decode_coords=None, drop_variables=None, use_cftime=None, decode_timedelta=None, group=None, mode="r", synchronizer=None, consolidated=False, consolidate_on_close=False, chunk_store=None, ):
[end of new definitions in xarray/backends/zarr.py]
</definitions>
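The definitions above all follow the same pattern: each engine ships a subclass of ``BackendEntrypoint`` and registers the class (not an instance) in ``BACKEND_ENTRYPOINTS``. A minimal, self-contained sketch of that pattern is shown below; the stand-in base class replaces the real one in ``xarray.backends.common`` so the snippet runs on its own, and the engine name ``"mydata"`` and the ``.mydata`` suffix are invented for illustration.

```python
class BackendEntrypoint:
    """Stand-in for xarray.backends.common.BackendEntrypoint."""
    open_dataset_parameters = None

    def open_dataset(self, filename_or_obj, **kwargs):
        raise NotImplementedError

    def guess_can_open(self, store_spec):
        return False


class MyBackendEntrypoint(BackendEntrypoint):
    """A backend now inherits and overrides, instead of instantiating
    BackendEntrypoint with free functions."""

    def guess_can_open(self, store_spec):
        return str(store_spec).endswith(".mydata")

    def open_dataset(self, filename_or_obj, *, drop_variables=None):
        # A real backend would return an xarray.Dataset here.
        return {"source": filename_or_obj, "drop_variables": drop_variables}


# The registry maps engine names to classes; dispatch instantiates them.
BACKEND_ENTRYPOINTS = {"mydata": MyBackendEntrypoint}

entrypoint = BACKEND_ENTRYPOINTS["mydata"]()
assert entrypoint.guess_can_open("obs.mydata")
print(entrypoint.open_dataset("obs.mydata"))
# -> {'source': 'obs.mydata', 'drop_variables': None}
```

This mirrors how the refactored plugin machinery instantiates each registered class before calling ``guess_can_open`` and ``open_dataset``.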
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1c198a191127c601d091213c4b3292a8bb3054e1 | ||
sympy__sympy-20827 | 20,827 | sympy/sympy | 1.8 | 3cef36ad8d79fb359236192bd067c267a9a51271 | 2021-01-19T22:28:05Z | diff --git a/sympy/tensor/array/array_expressions.py b/sympy/tensor/array/array_expressions.py
new file mode 100644
index 000000000000..678b57ffc0e0
--- /dev/null
+++ b/sympy/tensor/array/array_expressions.py
@@ -0,0 +1,22 @@
+from sympy import Expr, ImmutableDenseNDimArray
+from sympy.core.sympify import _sympify
+
+
+class ZeroArray(Expr):
+ """
+ Symbolic array of zeros. Equivalent to ``ZeroMatrix`` for matrices.
+ """
+
+ def __new__(cls, *shape):
+ shape = map(_sympify, shape)
+ obj = Expr.__new__(cls, *shape)
+ return obj
+
+ @property
+ def shape(self):
+ return self._args
+
+ def as_explicit(self):
+ if any(not i.is_Integer for i in self.shape):
+ raise ValueError("Cannot return explicit form for symbolic shape.")
+ return ImmutableDenseNDimArray.zeros(*self.shape)
| diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index 9ad07fb7782e..d9533aa5adf3 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -4361,6 +4361,12 @@ def test_sympy__tensor__array__array_derivatives__ArrayDerivative():
assert _test_args(arrder)
+def test_sympy__tensor__array__array_expressions__ZeroArray():
+ from sympy.tensor.array.array_expressions import ZeroArray
+ m, n, k = symbols("m n k")
+ za = ZeroArray(m, n, k, 2)
+ assert _test_args(za)
+
def test_sympy__tensor__functions__TensorProduct():
from sympy.tensor.functions import TensorProduct
A = MatrixSymbol('A', 3, 3)
diff --git a/sympy/tensor/array/tests/test_array_expressions.py b/sympy/tensor/array/tests/test_array_expressions.py
new file mode 100644
index 000000000000..5c6451a90e38
--- /dev/null
+++ b/sympy/tensor/array/tests/test_array_expressions.py
@@ -0,0 +1,15 @@
+from sympy import symbols
+from sympy.tensor.array.array_expressions import ZeroArray
+from sympy.testing.pytest import raises
+
+
+def test_zero_array():
+ za = ZeroArray(3, 2, 4)
+ assert za.shape == (3, 2, 4)
+ za_e = za.as_explicit()
+ assert za_e.shape == (3, 2, 4)
+
+ m, n, k = symbols("m n k")
+ za = ZeroArray(m, n, k, 2)
+ assert za.shape == (m, n, k, 2)
+ raises(ValueError, lambda: za.as_explicit())
| [
{
"components": [
{
"doc": "Symbolic array of zeros. Equivalent to ``ZeroMatrix`` for matrices.",
"lines": [
5,
22
],
"name": "ZeroArray",
"signature": "class ZeroArray(Expr):",
"type": "class"
},
{
"doc": "",
... | [
"test_sympy__tensor__array__array_expressions__ZeroArray"
] | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_predicates",
"test_sympy__assumptions__assume__UndefinedPredicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers_... | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add class ZeroArray for array expressions of zero-valued elements
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
#### Brief description of what is fixed or changed
#### Other comments
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* tensor
* Add class ZeroArray for array expressions of zero-valued elements.
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/tensor/array/array_expressions.py]
(definition of ZeroArray:)
class ZeroArray(Expr):
"""Symbolic array of zeros. Equivalent to ``ZeroMatrix`` for matrices."""
(definition of ZeroArray.__new__:)
def __new__(cls, *shape):
(definition of ZeroArray.shape:)
def shape(self):
(definition of ZeroArray.as_explicit:)
def as_explicit(self):
[end of new definitions in sympy/tensor/array/array_expressions.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 284db64e44124856accce4abff75d2d8990c301d | ||
pvlib__pvlib-python-1132 | 1,132 | pvlib/pvlib-python | 0.8 | f94b5ad0d64e176f79105c27b9c37cc6493269f0 | 2021-01-18T23:05:20Z | diff --git a/docs/sphinx/source/whatsnew/v0.9.0.rst b/docs/sphinx/source/whatsnew/v0.9.0.rst
index 1094973bd5..8b323fdad8 100644
--- a/docs/sphinx/source/whatsnew/v0.9.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.9.0.rst
@@ -42,10 +42,12 @@ Enhancements
:py:class:`~pvlib.modelchain.ModelChain`. This includes substantial API
enhancements for accepting different weather input for each ``Array`` in the
system. (:pull:`1076`, :issue:`1067`)
-* Support for :py:func:`~pvlib.inverter.sandia_multi` added to
+* Support for :py:func:`~pvlib.inverter.sandia_multi` and
+ :py:func:`~pvlib.inverter.pvwatts_multi` added to
:py:class:`~pvlib.pvsystem.PVSystem` and
- :py:class:`~pvlib.modelchain.ModelChain` (as ``ac_model='sandia_multi'``).
- (:pull:`1076`, :issue:`1067`)
+ :py:class:`~pvlib.modelchain.ModelChain` (as ``ac_model='sandia_multi'``
+ and ``ac_model='pvwatts_multi'``).
+ (:pull:`1076`, :issue:`1067`, :pull:`1132`, :issue:`1117`)
* :py:class:`~pvlib.modelchain.ModelChain` 'run_model' methods now
automatically switch to using ``'effective_irradiance'`` (if available) for
cell temperature models, when ``'poa_global'`` is not provided in input
diff --git a/pvlib/inverter.py b/pvlib/inverter.py
index 3c2fca9123..a192f5e1c2 100644
--- a/pvlib/inverter.py
+++ b/pvlib/inverter.py
@@ -414,7 +414,7 @@ def pvwatts_multi(pdc, pdc0, eta_inv_nom=0.96, eta_inv_ref=0.9637):
DC power on each MPPT input of the inverter. If type is array, must
be 2d with axis 0 being the MPPT inputs. Same unit as ``pdc0``.
pdc0: numeric
- DC input limit of the inverter. Same unit as ``pdc``.
+ Total DC power limit of the inverter. Same unit as ``pdc``.
eta_inv_nom: numeric, default 0.96
Nominal inverter efficiency. [unitless]
eta_inv_ref: numeric, default 0.9637
diff --git a/pvlib/modelchain.py b/pvlib/modelchain.py
index 4c67e835f4..2e010fa024 100644
--- a/pvlib/modelchain.py
+++ b/pvlib/modelchain.py
@@ -769,6 +769,8 @@ def ac_model(self, model):
self._ac_model = self.adrinverter
elif model == 'pvwatts':
self._ac_model = self.pvwatts_inverter
+ elif model == 'pvwatts_multi':
+ self._ac_model = self.pvwatts_multi_inverter
else:
raise ValueError(model + ' is not a valid AC power model')
else:
@@ -793,10 +795,11 @@ def infer_ac_model(self):
def _infer_ac_model_multi(self, inverter_params):
if _snl_params(inverter_params):
return self.sandia_multi_inverter
+ elif _pvwatts_params(inverter_params):
+ return self.pvwatts_multi_inverter
raise ValueError('could not infer multi-array AC model from '
- 'system.inverter_parameters. Not all ac models '
- 'support systems with mutiple Arrays. '
- 'Only sandia_multi supports multiple '
+ 'system.inverter_parameters. Only sandia and pvwatts '
+ 'inverter models support multiple '
'Arrays. Check system.inverter_parameters or '
'explicitly set the model with the ac_model kwarg.')
@@ -807,6 +810,10 @@ def sandia_multi_inverter(self):
)
return self
+ def pvwatts_multi_inverter(self):
+ self.results.ac = self.system.pvwatts_multi(self.results.dc)
+ return self
+
def snlinverter(self):
self.results.ac = self.system.snlinverter(self.results.dc['v_mp'],
self.results.dc['p_mp'])
diff --git a/pvlib/pvsystem.py b/pvlib/pvsystem.py
index 819d00c703..651c9eba4b 100644
--- a/pvlib/pvsystem.py
+++ b/pvlib/pvsystem.py
@@ -1000,6 +1000,21 @@ def pvwatts_ac(self, pdc):
return inverter.pvwatts(pdc, self.inverter_parameters['pdc0'],
**kwargs)
+ def pvwatts_multi(self, p_dc):
+ """Uses :py:func:`pvlib.inverter.pvwatts_multi` to calculate AC power
+ based on ``self.inverter_parameters`` and the input voltage and power.
+
+ The parameter `p_dc` must be a tuple with length equal to
+ ``self.num_arrays`` if the system has more than one array.
+
+ See :py:func:`pvlib.inverter.pvwatts_multi` for details.
+ """
+ p_dc = self._validate_per_array(p_dc)
+ kwargs = _build_kwargs(['eta_inv_nom', 'eta_inv_ref'],
+ self.inverter_parameters)
+ return inverter.pvwatts_multi(p_dc, self.inverter_parameters['pdc0'],
+ **kwargs)
+
@deprecated('0.8', alternative='PVSystem, Location, and ModelChain',
name='PVSystem.localize', removal='0.9')
def localize(self, location=None, latitude=None, longitude=None,
| diff --git a/pvlib/tests/test_modelchain.py b/pvlib/tests/test_modelchain.py
index 2987ae563f..10acc469eb 100644
--- a/pvlib/tests/test_modelchain.py
+++ b/pvlib/tests/test_modelchain.py
@@ -166,6 +166,26 @@ def pvwatts_dc_pvwatts_ac_system(sapm_temperature_cs5p_220m):
return system
+@pytest.fixture(scope="function")
+def pvwatts_dc_pvwatts_ac_system_arrays(sapm_temperature_cs5p_220m):
+ module_parameters = {'pdc0': 220, 'gamma_pdc': -0.003}
+ temp_model_params = sapm_temperature_cs5p_220m.copy()
+ inverter_parameters = {'pdc0': 220, 'eta_inv_nom': 0.95}
+ array_one = pvsystem.Array(
+ surface_tilt=32.2, surface_azimuth=180,
+ module_parameters=module_parameters.copy(),
+ temperature_model_parameters=temp_model_params.copy()
+ )
+ array_two = pvsystem.Array(
+ surface_tilt=42.2, surface_azimuth=220,
+ module_parameters=module_parameters.copy(),
+ temperature_model_parameters=temp_model_params.copy()
+ )
+ system = PVSystem(
+ arrays=[array_one, array_two], inverter_parameters=inverter_parameters)
+ return system
+
+
@pytest.fixture(scope="function")
def pvwatts_dc_pvwatts_ac_faiman_temp_system():
module_parameters = {'pdc0': 220, 'gamma_pdc': -0.003}
@@ -452,16 +472,15 @@ def test_run_model_from_irradiance_arrays_no_loss_input_type(
)
-@pytest.mark.parametrize('inverter', ['adr', 'pvwatts'])
+@pytest.mark.parametrize('inverter', ['adr'])
def test_ModelChain_invalid_inverter_params_arrays(
inverter, sapm_dc_snl_ac_system_same_arrays,
location, adr_inverter_parameters):
- inverter_params = {'adr': adr_inverter_parameters,
- 'pvwatts': {'pdc0': 220, 'eta_inv_nom': 0.95}}
+ inverter_params = {'adr': adr_inverter_parameters}
sapm_dc_snl_ac_system_same_arrays.inverter_parameters = \
inverter_params[inverter]
with pytest.raises(ValueError,
- match=r'Only sandia_multi supports multiple Arrays\.'):
+ match=r'Only sandia and pvwatts inverter models'):
ModelChain(sapm_dc_snl_ac_system_same_arrays, location)
@@ -570,10 +589,15 @@ def test_prepare_inputs_missing_irrad_component(
mc.prepare_inputs(weather)
+@pytest.mark.parametrize('ac_model', ['sandia', 'pvwatts'])
@pytest.mark.parametrize("input_type", [tuple, list])
-def test_run_model_arrays_weather(sapm_dc_snl_ac_system_same_arrays, location,
- input_type):
- mc = ModelChain(sapm_dc_snl_ac_system_same_arrays, location)
+def test_run_model_arrays_weather(sapm_dc_snl_ac_system_same_arrays,
+ pvwatts_dc_pvwatts_ac_system_arrays,
+ location, ac_model, input_type):
+ system = {'sandia': sapm_dc_snl_ac_system_same_arrays,
+ 'pvwatts': pvwatts_dc_pvwatts_ac_system_arrays}
+ mc = ModelChain(system[ac_model], location, aoi_model='no_loss',
+ spectral_model='no_loss')
times = pd.date_range('20200101 1200-0700', periods=2, freq='2H')
weather_one = pd.DataFrame({'dni': [900, 800],
'ghi': [600, 500],
@@ -1171,18 +1195,21 @@ def acdc(mc):
@pytest.mark.parametrize('ac_model', ['sandia', 'adr',
- 'pvwatts', 'sandia_multi'])
+ 'pvwatts', 'sandia_multi',
+ 'pvwatts_multi'])
def test_ac_models(sapm_dc_snl_ac_system, cec_dc_adr_ac_system,
pvwatts_dc_pvwatts_ac_system, location, ac_model,
weather, mocker):
ac_systems = {'sandia': sapm_dc_snl_ac_system,
'sandia_multi': sapm_dc_snl_ac_system,
'adr': cec_dc_adr_ac_system,
- 'pvwatts': pvwatts_dc_pvwatts_ac_system}
+ 'pvwatts': pvwatts_dc_pvwatts_ac_system,
+ 'pvwatts_multi': pvwatts_dc_pvwatts_ac_system}
ac_method_name = {'sandia': 'snlinverter',
'sandia_multi': 'sandia_multi',
'adr': 'adrinverter',
- 'pvwatts': 'pvwatts_ac'}
+ 'pvwatts': 'pvwatts_ac',
+ 'pvwatts_multi': 'pvwatts_multi'}
system = ac_systems[ac_model]
mc = ModelChain(system, location, ac_model=ac_model,
diff --git a/pvlib/tests/test_pvsystem.py b/pvlib/tests/test_pvsystem.py
index cf9305f5ce..dfc62df8db 100644
--- a/pvlib/tests/test_pvsystem.py
+++ b/pvlib/tests/test_pvsystem.py
@@ -1884,7 +1884,8 @@ def test_pvwatts_losses_series():
assert_series_equal(expected, out)
-def make_pvwatts_system_defaults():
+@pytest.fixture
+def pvwatts_system_defaults():
module_parameters = {'pdc0': 100, 'gamma_pdc': -0.003}
inverter_parameters = {'pdc0': 90}
system = pvsystem.PVSystem(module_parameters=module_parameters,
@@ -1892,7 +1893,8 @@ def make_pvwatts_system_defaults():
return system
-def make_pvwatts_system_kwargs():
+@pytest.fixture
+def pvwatts_system_kwargs():
module_parameters = {'pdc0': 100, 'gamma_pdc': -0.003, 'temp_ref': 20}
inverter_parameters = {'pdc0': 90, 'eta_inv_nom': 0.95, 'eta_inv_ref': 1.0}
system = pvsystem.PVSystem(module_parameters=module_parameters,
@@ -1900,27 +1902,25 @@ def make_pvwatts_system_kwargs():
return system
-def test_PVSystem_pvwatts_dc(mocker):
+def test_PVSystem_pvwatts_dc(pvwatts_system_defaults, mocker):
mocker.spy(pvsystem, 'pvwatts_dc')
- system = make_pvwatts_system_defaults()
irrad = 900
temp_cell = 30
expected = 90
- out = system.pvwatts_dc(irrad, temp_cell)
- pvsystem.pvwatts_dc.assert_called_once_with(irrad, temp_cell,
- **system.module_parameters)
+ out = pvwatts_system_defaults.pvwatts_dc(irrad, temp_cell)
+ pvsystem.pvwatts_dc.assert_called_once_with(
+ irrad, temp_cell, **pvwatts_system_defaults.module_parameters)
assert_allclose(expected, out, atol=10)
-def test_PVSystem_pvwatts_dc_kwargs(mocker):
+def test_PVSystem_pvwatts_dc_kwargs(pvwatts_system_kwargs, mocker):
mocker.spy(pvsystem, 'pvwatts_dc')
- system = make_pvwatts_system_kwargs()
irrad = 900
temp_cell = 30
expected = 90
- out = system.pvwatts_dc(irrad, temp_cell)
- pvsystem.pvwatts_dc.assert_called_once_with(irrad, temp_cell,
- **system.module_parameters)
+ out = pvwatts_system_kwargs.pvwatts_dc(irrad, temp_cell)
+ pvsystem.pvwatts_dc.assert_called_once_with(
+ irrad, temp_cell, **pvwatts_system_kwargs.module_parameters)
assert_allclose(expected, out, atol=10)
@@ -1977,37 +1977,59 @@ def test_PVSystem_multiple_array_pvwatts_dc_value_error():
# ValueError is raised for non-tuple iterable with correct length
system.pvwatts_dc((1, 1, 1), pd.Series([1, 2, 3]))
-def test_PVSystem_pvwatts_losses(mocker):
+
+def test_PVSystem_pvwatts_losses(pvwatts_system_defaults, mocker):
mocker.spy(pvsystem, 'pvwatts_losses')
- system = make_pvwatts_system_defaults()
age = 1
- system.losses_parameters = dict(age=age)
+ pvwatts_system_defaults.losses_parameters = dict(age=age)
expected = 15
- out = system.pvwatts_losses()
+ out = pvwatts_system_defaults.pvwatts_losses()
pvsystem.pvwatts_losses.assert_called_once_with(age=age)
assert out < expected
-def test_PVSystem_pvwatts_ac(mocker):
+def test_PVSystem_pvwatts_ac(pvwatts_system_defaults, mocker):
mocker.spy(inverter, 'pvwatts')
- system = make_pvwatts_system_defaults()
pdc = 50
- out = system.pvwatts_ac(pdc)
- inverter.pvwatts.assert_called_once_with(pdc,
- **system.inverter_parameters)
+ out = pvwatts_system_defaults.pvwatts_ac(pdc)
+ inverter.pvwatts.assert_called_once_with(
+ pdc, **pvwatts_system_defaults.inverter_parameters)
assert out < pdc
-def test_PVSystem_pvwatts_ac_kwargs(mocker):
+def test_PVSystem_pvwatts_ac_kwargs(pvwatts_system_kwargs, mocker):
mocker.spy(inverter, 'pvwatts')
- system = make_pvwatts_system_kwargs()
pdc = 50
- out = system.pvwatts_ac(pdc)
- inverter.pvwatts.assert_called_once_with(pdc,
- **system.inverter_parameters)
+ out = pvwatts_system_kwargs.pvwatts_ac(pdc)
+ inverter.pvwatts.assert_called_once_with(
+ pdc, **pvwatts_system_kwargs.inverter_parameters)
assert out < pdc
+def test_PVSystem_pvwatts_multi(pvwatts_system_defaults,
+ pvwatts_system_kwargs):
+ expected = [pd.Series([0.0, 48.123524, 86.400000]),
+ pd.Series([0.0, 45.893550, 85.500000])]
+ systems = [pvwatts_system_defaults, pvwatts_system_kwargs]
+ for base_sys, exp in zip(systems, expected):
+ system = pvsystem.PVSystem(
+ arrays=[pvsystem.Array(), pvsystem.Array()],
+ inverter_parameters=base_sys.inverter_parameters,
+ )
+ pdcs = pd.Series([0., 25., 50.])
+ pacs = system.pvwatts_multi((pdcs, pdcs))
+ assert_series_equal(pacs, exp)
+ with pytest.raises(ValueError,
+ match="Length mismatch for per-array parameter"):
+ system.pvwatts_multi((pdcs,))
+ with pytest.raises(ValueError,
+ match="Length mismatch for per-array parameter"):
+ system.pvwatts_multi(pdcs)
+ with pytest.raises(ValueError,
+ match="Length mismatch for per-array parameter"):
+ system.pvwatts_multi((pdcs, pdcs, pdcs))
+
+
def test_PVSystem_num_arrays():
system_one = pvsystem.PVSystem()
system_two = pvsystem.PVSystem(arrays=[pvsystem.Array(), pvsystem.Array()])
| diff --git a/docs/sphinx/source/whatsnew/v0.9.0.rst b/docs/sphinx/source/whatsnew/v0.9.0.rst
index 1094973bd5..8b323fdad8 100644
--- a/docs/sphinx/source/whatsnew/v0.9.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.9.0.rst
@@ -42,10 +42,12 @@ Enhancements
:py:class:`~pvlib.modelchain.ModelChain`. This includes substantial API
enhancements for accepting different weather input for each ``Array`` in the
system. (:pull:`1076`, :issue:`1067`)
-* Support for :py:func:`~pvlib.inverter.sandia_multi` added to
+* Support for :py:func:`~pvlib.inverter.sandia_multi` and
+ :py:func:`~pvlib.inverter.pvwatts_multi` added to
:py:class:`~pvlib.pvsystem.PVSystem` and
- :py:class:`~pvlib.modelchain.ModelChain` (as ``ac_model='sandia_multi'``).
- (:pull:`1076`, :issue:`1067`)
+ :py:class:`~pvlib.modelchain.ModelChain` (as ``ac_model='sandia_multi'``
+ and ``ac_model='pvwatts_multi'``).
+ (:pull:`1076`, :issue:`1067`, :pull:`1132`, :issue:`1117`)
* :py:class:`~pvlib.modelchain.ModelChain` 'run_model' methods now
automatically switch to using ``'effective_irradiance'`` (if available) for
cell temperature models, when ``'poa_global'`` is not provided in input
| [
{
"components": [
{
"doc": "",
"lines": [
813,
815
],
"name": "ModelChain.pvwatts_multi_inverter",
"signature": "def pvwatts_multi_inverter(self):",
"type": "function"
}
],
"file": "pvlib/modelchain.py"
},
{
"compone... | [
"pvlib/tests/test_modelchain.py::test_ModelChain_invalid_inverter_params_arrays[adr]",
"pvlib/tests/test_modelchain.py::test_run_model_arrays_weather[tuple-pvwatts]",
"pvlib/tests/test_modelchain.py::test_run_model_arrays_weather[list-pvwatts]",
"pvlib/tests/test_modelchain.py::test_ac_models[pvwatts_multi]",... | [
"pvlib/tests/test_modelchain.py::test_ModelChain_creation",
"pvlib/tests/test_modelchain.py::test_with_sapm",
"pvlib/tests/test_modelchain.py::test_with_pvwatts",
"pvlib/tests/test_modelchain.py::test_orientation_strategy[None-expected0]",
"pvlib/tests/test_modelchain.py::test_orientation_strategy[None-expe... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Implement pvwatts_multi methods for PVSystem and ModelChain
- [x] Closes #1117
- [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)
- [x] Tests added
- [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.
- [x] Adds description and name entries in the appropriate "what's new" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).
- [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.
- [x] Pull request is nearly complete and ready for detailed review.
- [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.
Also converts two functions defining pvwatts systems for PVSystem method tests to fixtures.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in pvlib/modelchain.py]
(definition of ModelChain.pvwatts_multi_inverter:)
def pvwatts_multi_inverter(self):
[end of new definitions in pvlib/modelchain.py]
[start of new definitions in pvlib/pvsystem.py]
(definition of PVSystem.pvwatts_multi:)
def pvwatts_multi(self, p_dc):
"""Uses :py:func:`pvlib.inverter.pvwatts_multi` to calculate AC power
based on ``self.inverter_parameters`` and the input voltage and power.
The parameter `p_dc` must be a tuple with length equal to
``self.num_arrays`` if the system has more than one array.
See :py:func:`pvlib.inverter.pvwatts_multi` for details."""
[end of new definitions in pvlib/pvsystem.py]
</definitions>
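The dispatch that `PVSystem.pvwatts_multi` describes above can be sketched standalone. This is not pvlib's actual implementation: the efficiency curve follows the published NREL PVWatts inverter model, and the parameter names (`pdc0`, `eta_inv_nom`, `eta_inv_ref`) are illustrative. It shows the core idea of summing the DC power of all MPPT inputs before applying the single-input model:

```python
def pvwatts_ac(pdc, pdc0, eta_inv_nom=0.96, eta_inv_ref=0.9637):
    """Single-input PVWatts inverter model (scalar sketch)."""
    if pdc == 0:
        return 0.0
    pac0 = eta_inv_nom * pdc0          # nameplate AC rating
    zeta = pdc / pdc0                  # DC loading fraction
    eta = eta_inv_nom / eta_inv_ref * (
        -0.0162 * zeta - 0.0059 / zeta + 0.9858)
    return min(max(eta * pdc, 0.0), pac0)  # never negative, clipped at pac0


def pvwatts_multi(p_dc, pdc0, **kwargs):
    # One inverter, several MPPT inputs: sum the per-array DC power,
    # then reuse the single-input model.
    return pvwatts_ac(sum(p_dc), pdc0, **kwargs)


print(pvwatts_multi((50.0, 50.0), pdc0=100.0))  # ~96.0 at full DC loading
```

The length-checking of the `p_dc` tuple against the number of arrays (the "Length mismatch for per-array parameter" errors exercised in the tests above) is handled by `PVSystem`, not by this inverter-level function.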
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Integrate inverter.pvwatts_multi into PVSystem and ModelChain
#1106 added a PVWatts variation that supports multiple DC power inputs. After #1076 is merged we should add this function as an option for the AC power model in `ModelChain`. Requires adding it to `PVSystem` and plumbing it in to `ModelChain`.
----------
--------------------
</issues> | 311781d2380997044da0e484dc90aa146a74ca44 |
joblib__joblib-1149 | 1,149 | joblib/joblib | null | 9f7edc2f67d1d27757be88168ffe533a916e1032 | 2021-01-11T23:51:23Z | diff --git a/CHANGES.rst b/CHANGES.rst
index b6c361035..25d3f2756 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -32,6 +32,10 @@ In development
previous versions of Joblib.
https://github.com/joblib/joblib/pull/1374
+- Add ``cache_validation_callback`` in :meth:`joblib.Memory.cache`, to allow
+ custom cache invalidation based on the metadata of the function call.
+ https://github.com/joblib/joblib/pull/1149
+
- Add a ``return_generator`` parameter for ``Parallel``, that allows
to consume results asynchronously.
https://github.com/joblib/joblib/pull/1393
diff --git a/doc/memory.rst b/doc/memory.rst
index 7b6b176d9..0a6abd8cb 100644
--- a/doc/memory.rst
+++ b/doc/memory.rst
@@ -375,9 +375,9 @@ Gotchas
``self.method = memory.cache(self.method, ignore=['self'])``.
* **joblib cache entries may be invalidated after environment updates**.
- Values returned by ``joblib.hash`` are not guaranteed to stay
+ Values returned by :func:`joblib.hash` are not guaranteed to stay
constant across ``joblib`` versions. This means that **all** entries of a
- ``joblib.Memory`` cache can get invalidated when upgrading ``joblib``.
+ :class:`Memory` cache can get invalidated when upgrading ``joblib``.
Invalidation can also happen when upgrading a third party library (such as
``numpy``): in such a case, only the cached function calls with parameters
that are constructs (or contain references to constructs) defined in the
@@ -388,7 +388,8 @@ Ignoring some arguments
-----------------------
It may be useful not to recalculate a function when certain arguments
-change, for instance a debug flag. `Memory` provides the `ignore` list::
+change, for instance a debug flag. :class:`Memory` provides the ``ignore``
+list::
>>> @memory.cache(ignore=['debug'])
... def my_func(x, debug=True):
@@ -400,6 +401,62 @@ change, for instance a debug flag. `Memory` provides the `ignore` list::
>>> # my_func was not reevaluated
+Custom cache validation
+-----------------------
+
+In some cases, external factors can invalidate the cached results and
+one wants to have more control on whether to reuse a result or not.
+
+This is for instance the case if the results depends on database records
+that change over time: a small delay in the updates might be tolerable
+but after a while, the results might be invalid.
+
+One can have a finer control on the cache validity specifying a function
+via ``cache_validation_callback`` in :meth:`~joblib.Memory.cache`. For
+instance, one can only cache results that take more than 1s to be computed.
+
+ >>> import time
+ >>> def cache_validation_cb(metadata):
+ ... # Only retrieve cached results for calls that take more than 1s
+ ... return metadata['duration'] > 1
+
+ >>> @memory.cache(cache_validation_callback=cache_validation_cb)
+ ... def my_func(delay=0):
+ ... time.sleep(delay)
+ ... print(f'Called with {delay}s delay')
+
+ >>> my_func()
+ Called with 0s delay
+ >>> my_func(1.1)
+ Called with 1.1s delay
+ >>> my_func(1.1) # This result is retrieved from cache
+ >>> my_func() # This one is not and the call is repeated
+ Called with 0s delay
+
+``cache_validation_cb`` will be called with a single argument containing
+the metadata of the cached call as a dictionary containing the following
+keys:
+
+ - ``duration``: the duration of the function call,
+ - ``time``: the timestamp when the cached call has been recorded
+ - ``input_args``: a dictionary of keywords arguments for the cached function call.
+
+Note a validity duration for cached results can be defined via
+:func:`joblib.expires_after` by providing it with arguments similar to the
+ones of a ``datetime.timedelta``:
+
+ >>> from joblib import expires_after
+ >>> @memory.cache(cache_validation_callback=expires_after(seconds=0.5))
+ ... def my_func():
+ ... print(f'Function run')
+ >>> my_func()
+ Function run
+ >>> my_func()
+ >>> time.sleep(0.5)
+ >>> my_func()
+ Function run
+
+
.. _memory_reference:
Reference documentation of the :class:`~joblib.Memory` class
@@ -448,3 +505,9 @@ without actually needing to call the function itself::
... shutil.rmtree(cachedir2)
... except OSError:
... pass # this can sometimes fail under Windows
+
+
+Helper Reference
+~~~~~~~~~~~~~~~~
+
+.. autofunction:: joblib.expires_after
diff --git a/examples/serialization_and_wrappers.py b/examples/serialization_and_wrappers.py
index d46324275..0b2156486 100644
--- a/examples/serialization_and_wrappers.py
+++ b/examples/serialization_and_wrappers.py
@@ -118,7 +118,7 @@ def func_async(i, *args):
###############################################################################
# To have both fast pickling, safe process creation and serialization of
# interactive functions, ``loky`` provides a wrapper function
-# :func:`wrap_non_picklable_objects` to wrap the non-picklable function and
+# ``wrap_non_picklable_objects`` to wrap the non-picklable function and
# indicate to the serialization process that this specific function should be
# serialized using ``cloudpickle``. This changes the serialization behavior
# only for this function and keeps using ``pickle`` for all other objects. The
diff --git a/joblib/__init__.py b/joblib/__init__.py
index bc34d81d5..b5be5fe50 100644
--- a/joblib/__init__.py
+++ b/joblib/__init__.py
@@ -110,13 +110,22 @@
import os
-from .memory import Memory, MemorizedResult, register_store_backend
+
+from .memory import Memory
+from .memory import MemorizedResult
+from .memory import register_store_backend
+from .memory import expires_after
+
from .logger import PrintTime
from .logger import Logger
+
from .hashing import hash
+
from .numpy_pickle import dump
from .numpy_pickle import load
+
from .compressor import register_compressor
+
from .parallel import Parallel
from .parallel import delayed
from .parallel import cpu_count
@@ -129,7 +138,7 @@
__all__ = ['Memory', 'MemorizedResult', 'PrintTime', 'Logger', 'hash', 'dump',
'load', 'Parallel', 'delayed', 'cpu_count', 'effective_n_jobs',
- 'register_parallel_backend', 'parallel_backend',
+ 'register_parallel_backend', 'parallel_backend', 'expires_after',
'register_store_backend', 'register_compressor',
'wrap_non_picklable_objects', 'parallel_config']
diff --git a/joblib/memory.py b/joblib/memory.py
index cc832c844..8171d24cb 100644
--- a/joblib/memory.py
+++ b/joblib/memory.py
@@ -22,6 +22,7 @@
import warnings
import inspect
import weakref
+from datetime import timedelta
from tokenize import open as open_py_source
@@ -408,17 +409,26 @@ class MemorizedFunc(Logger):
verbose: int, optional
The verbosity flag, controls messages that are issued as
the function is evaluated.
+
+ cache_validation_callback: callable, optional
+ Callable to check if a result in cache is valid or is to be recomputed.
+ When the function is called with arguments for which a cache exists,
+ the callback is called with the cache entry's metadata as its sole
+ argument. If it returns True, the cached result is returned, else the
+ cache for these arguments is cleared and the result is recomputed.
"""
# ------------------------------------------------------------------------
# Public interface
# ------------------------------------------------------------------------
def __init__(self, func, location, backend='local', ignore=None,
- mmap_mode=None, compress=False, verbose=1, timestamp=None):
+ mmap_mode=None, compress=False, verbose=1, timestamp=None,
+ cache_validation_callback=None):
Logger.__init__(self)
self.mmap_mode = mmap_mode
self.compress = compress
self.func = func
+ self.cache_validation_callback = cache_validation_callback
if ignore is None:
ignore = []
@@ -434,15 +444,16 @@ def __init__(self, func, location, backend='local', ignore=None,
)
if self.store_backend is not None:
# Create func directory on demand.
- self.store_backend.\
- store_cached_func_code([_build_func_identifier(self.func)])
+ self.store_backend.store_cached_func_code([
+ _build_func_identifier(self.func)
+ ])
if timestamp is None:
timestamp = time.time()
self.timestamp = timestamp
try:
functools.update_wrapper(self, func)
- except: # noqa: E722
+ except Exception:
" Objects like ufunc don't like that "
if inspect.isfunction(func):
doc = pydoc.TextDoc().document(func)
@@ -458,6 +469,34 @@ def __init__(self, func, location, backend='local', ignore=None,
self._func_code_info = None
self._func_code_id = None
+ def _is_in_cache_and_valid(self, path):
+ """Check if the function call is cached and valid for given arguments.
+
+ - Compare the function code with the one from the cached function,
+ asserting if it has changed.
+ - Check if the function call is present in the cache.
+ - Call `cache_validation_callback` for user define cache validation.
+
+ Returns True if the function call is in cache and can be used, and
+ returns False otherwise.
+ """
+ # Check if the code of the function has changed
+ if not self._check_previous_func_code(stacklevel=4):
+ return False
+
+ # Check if this specific call is in the cache
+ if not self.store_backend.contains_item(path):
+ return False
+
+ # Call the user defined cache validation callback
+ metadata = self.store_backend.get_metadata(path)
+ if (self.cache_validation_callback is not None and
+ not self.cache_validation_callback(metadata)):
+ self.store_backend.clear_item(path)
+ return False
+
+ return True
+
def _cached_call(self, args, kwargs, shelving=False):
"""Call wrapped function and cache result, or read cache if available.
@@ -513,20 +552,10 @@ def _cached_call(self, args, kwargs, shelving=False):
)
)
- # FIXME: The statements below should be try/excepted
# Compare the function code with the previous to see if the
- # function code has changed
- if not (self._check_previous_func_code(stacklevel=4) and
- self.store_backend.contains_item([func_id, args_id])):
- if self._verbose > 10:
- _, name = get_func_name(self.func)
- self.warn('Computing func {0}, argument hash {1} '
- 'in location {2}'
- .format(name, args_id,
- self.store_backend.
- get_cached_func_info([func_id])['location']))
- must_call = True
- else:
+ # function code has changed and check if the results are present in
+ # the cache.
+ if self._is_in_cache_and_valid([func_id, args_id]):
try:
t0 = time.time()
if self._verbose:
@@ -555,6 +584,15 @@ def _cached_call(self, args, kwargs, shelving=False):
'{}\n {}'.format(signature, traceback.format_exc()))
must_call = True
+ else:
+ if self._verbose > 10:
+ _, name = get_func_name(self.func)
+ self.warn('Computing func {0}, argument hash {1} '
+ 'in location {2}'
+ .format(name, args_id,
+ self.store_backend.
+ get_cached_func_info([func_id])['location']))
+ must_call = True
if must_call:
out, metadata = self.call(*args, **kwargs)
@@ -852,7 +890,9 @@ def _persist_input(self, duration, args, kwargs, this_duration_limit=0.5):
input_repr = dict((k, repr(v)) for k, v in argument_dict.items())
# This can fail due to race-conditions with multiple
# concurrent joblibs removing the file or the directory
- metadata = {"duration": duration, "input_args": input_repr}
+ metadata = {
+ "duration": duration, "input_args": input_repr, "time": start_time,
+ }
func_id, args_id = self._get_output_identifiers(*args, **kwargs)
self.store_backend.store_metadata([func_id, args_id], metadata)
@@ -982,7 +1022,8 @@ def __init__(self, location=None, backend='local',
backend_options=dict(compress=compress, mmap_mode=mmap_mode,
**backend_options))
- def cache(self, func=None, ignore=None, verbose=None, mmap_mode=False):
+ def cache(self, func=None, ignore=None, verbose=None, mmap_mode=False,
+ cache_validation_callback=None):
""" Decorates the given function func to only compute its return
value for input arguments not cached on disk.
@@ -999,6 +1040,13 @@ def cache(self, func=None, ignore=None, verbose=None, mmap_mode=False):
The memmapping mode used when loading from cache
numpy arrays. See numpy.load for the meaning of the
arguments. By default that of the memory object is used.
+ cache_validation_callback: callable, optional
+ Callable to validate whether or not the cache is valid. When
+ the cached function is called with arguments for which a cache
+ exists, this callable is called with the metadata of the cached
+ result as its sole argument. If it returns True, then the
+ cached result is returned, else the cache for these arguments
+ is cleared and recomputed.
Returns
-------
@@ -1008,11 +1056,21 @@ def cache(self, func=None, ignore=None, verbose=None, mmap_mode=False):
methods for cache lookup and management. See the
documentation for :class:`joblib.memory.MemorizedFunc`.
"""
+ if (cache_validation_callback is not None and
+ not callable(cache_validation_callback)):
+ raise ValueError(
+ "cache_validation_callback needs to be callable. "
+ f"Got {cache_validation_callback}."
+ )
if func is None:
# Partial application, to be able to specify extra keyword
# arguments in decorators
- return functools.partial(self.cache, ignore=ignore,
- verbose=verbose, mmap_mode=mmap_mode)
+ return functools.partial(
+ self.cache, ignore=ignore,
+ mmap_mode=mmap_mode,
+ verbose=verbose,
+ cache_validation_callback=cache_validation_callback
+ )
if self.store_backend is None:
return NotMemorizedFunc(func)
if verbose is None:
@@ -1021,11 +1079,12 @@ def cache(self, func=None, ignore=None, verbose=None, mmap_mode=False):
mmap_mode = self.mmap_mode
if isinstance(func, MemorizedFunc):
func = func.func
- return MemorizedFunc(func, location=self.store_backend,
- backend=self.backend,
- ignore=ignore, mmap_mode=mmap_mode,
- compress=self.compress,
- verbose=verbose, timestamp=self.timestamp)
+ return MemorizedFunc(
+ func, location=self.store_backend, backend=self.backend,
+ ignore=ignore, mmap_mode=mmap_mode, compress=self.compress,
+ verbose=verbose, timestamp=self.timestamp,
+ cache_validation_callback=cache_validation_callback
+ )
def clear(self, warn=True):
""" Erase the complete cache directory.
@@ -1113,3 +1172,28 @@ def __getstate__(self):
state = self.__dict__.copy()
state['timestamp'] = None
return state
+
+
+###############################################################################
+# cache_validation_callback helpers
+###############################################################################
+
+def expires_after(days=0, seconds=0, microseconds=0, milliseconds=0, minutes=0,
+ hours=0, weeks=0):
+ """Helper cache_validation_callback to force recompute after a duration.
+
+ Parameters
+ ----------
+ days, seconds, microseconds, milliseconds, minutes, hours, weeks: numbers
+ argument passed to a timedelta.
+ """
+ delta = timedelta(
+ days=days, seconds=seconds, microseconds=microseconds,
+ milliseconds=milliseconds, minutes=minutes, hours=hours, weeks=weeks
+ )
+
+ def cache_validation_callback(metadata):
+ computation_age = time.time() - metadata['time']
+ return computation_age < delta.total_seconds()
+
+ return cache_validation_callback
| diff --git a/joblib/test/test_memory.py b/joblib/test/test_memory.py
index 98cfce681..ff018fb8a 100644
--- a/joblib/test/test_memory.py
+++ b/joblib/test/test_memory.py
@@ -22,6 +22,7 @@
import pytest
from joblib.memory import Memory
+from joblib.memory import expires_after
from joblib.memory import MemorizedFunc, NotMemorizedFunc
from joblib.memory import MemorizedResult, NotMemorizedResult
from joblib.memory import _FUNCTION_HASHES
@@ -87,10 +88,10 @@ def test_memory_integration(tmpdir):
""" Simple test of memory lazy evaluation.
"""
accumulator = list()
+
# Rmk: this function has the same name than a module-level function,
# thus it serves as a test to see that both are identified
# as different.
-
def f(arg):
accumulator.append(1)
return arg
@@ -1401,3 +1402,87 @@ def test_deprecated_bytes_limit(tmpdir):
)
with pytest.warns(DeprecationWarning, match="bytes_limit"):
_ = Memory(location=tmpdir.strpath, bytes_limit='1K')
+
+
+class TestCacheValidationCallback:
+ "Tests on parameter `cache_validation_callback`"
+
+ @pytest.fixture()
+ def memory(self, tmp_path):
+ mem = Memory(location=tmp_path)
+ yield mem
+ mem.clear()
+
+ def foo(self, x, d, delay=None):
+ d["run"] = True
+ if delay is not None:
+ time.sleep(delay)
+ return x * 2
+
+ def test_invalid_cache_validation_callback(self, memory):
+ "Test invalid values for `cache_validation_callback"
+ match = "cache_validation_callback needs to be callable. Got True."
+ with pytest.raises(ValueError, match=match):
+ memory.cache(cache_validation_callback=True)
+
+ @pytest.mark.parametrize("consider_cache_valid", [True, False])
+ def test_constant_cache_validation_callback(
+ self, memory, consider_cache_valid
+ ):
+ "Test expiry of old results"
+ f = memory.cache(
+ self.foo, cache_validation_callback=lambda _: consider_cache_valid,
+ ignore=["d"]
+ )
+
+ d1, d2 = {"run": False}, {"run": False}
+ assert f(2, d1) == 4
+ assert f(2, d2) == 4
+
+ assert d1["run"]
+ assert d2["run"] != consider_cache_valid
+
+ def test_memory_only_cache_long_run(self, memory):
+ "Test cache validity based on run duration."
+
+ def cache_validation_callback(metadata):
+ duration = metadata['duration']
+ if duration > 0.1:
+ return True
+
+ f = memory.cache(
+ self.foo, cache_validation_callback=cache_validation_callback,
+ ignore=["d"]
+ )
+
+ # Short run are not cached
+ d1, d2 = {"run": False}, {"run": False}
+ assert f(2, d1, delay=0) == 4
+ assert f(2, d2, delay=0) == 4
+ assert d1["run"]
+ assert d2["run"]
+
+ # Longer run are cached
+ d1, d2 = {"run": False}, {"run": False}
+ assert f(2, d1, delay=0.2) == 4
+ assert f(2, d2, delay=0.2) == 4
+ assert d1["run"]
+ assert not d2["run"]
+
+ def test_memory_expires_after(self, memory):
+ "Test expiry of old cached results"
+
+ f = memory.cache(
+ self.foo, cache_validation_callback=expires_after(seconds=.3),
+ ignore=["d"]
+ )
+
+ d1, d2, d3 = {"run": False}, {"run": False}, {"run": False}
+ assert f(2, d1) == 4
+ assert f(2, d2) == 4
+ time.sleep(.5)
+ assert f(2, d3) == 4
+
+ assert d1["run"]
+ assert not d2["run"]
+ assert d3["run"]
| diff --git a/CHANGES.rst b/CHANGES.rst
index b6c361035..25d3f2756 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -32,6 +32,10 @@ In development
previous versions of Joblib.
https://github.com/joblib/joblib/pull/1374
+- Add ``cache_validation_callback`` in :meth:`joblib.Memory.cache`, to allow
+ custom cache invalidation based on the metadata of the function call.
+ https://github.com/joblib/joblib/pull/1149
+
- Add a ``return_generator`` parameter for ``Parallel``, that allows
to consume results asynchronously.
https://github.com/joblib/joblib/pull/1393
diff --git a/doc/memory.rst b/doc/memory.rst
index 7b6b176d9..0a6abd8cb 100644
--- a/doc/memory.rst
+++ b/doc/memory.rst
@@ -375,9 +375,9 @@ Gotchas
``self.method = memory.cache(self.method, ignore=['self'])``.
* **joblib cache entries may be invalidated after environment updates**.
- Values returned by ``joblib.hash`` are not guaranteed to stay
+ Values returned by :func:`joblib.hash` are not guaranteed to stay
constant across ``joblib`` versions. This means that **all** entries of a
- ``joblib.Memory`` cache can get invalidated when upgrading ``joblib``.
+ :class:`Memory` cache can get invalidated when upgrading ``joblib``.
Invalidation can also happen when upgrading a third party library (such as
``numpy``): in such a case, only the cached function calls with parameters
that are constructs (or contain references to constructs) defined in the
@@ -388,7 +388,8 @@ Ignoring some arguments
-----------------------
It may be useful not to recalculate a function when certain arguments
-change, for instance a debug flag. `Memory` provides the `ignore` list::
+change, for instance a debug flag. :class:`Memory` provides the ``ignore``
+list::
>>> @memory.cache(ignore=['debug'])
... def my_func(x, debug=True):
@@ -400,6 +401,62 @@ change, for instance a debug flag. `Memory` provides the `ignore` list::
>>> # my_func was not reevaluated
+Custom cache validation
+-----------------------
+
+In some cases, external factors can invalidate the cached results and
+one wants to have more control on whether to reuse a result or not.
+
+This is for instance the case if the results depends on database records
+that change over time: a small delay in the updates might be tolerable
+but after a while, the results might be invalid.
+
+One can have a finer control on the cache validity specifying a function
+via ``cache_validation_callback`` in :meth:`~joblib.Memory.cache`. For
+instance, one can only cache results that take more than 1s to be computed.
+
+ >>> import time
+ >>> def cache_validation_cb(metadata):
+ ... # Only retrieve cached results for calls that take more than 1s
+ ... return metadata['duration'] > 1
+
+ >>> @memory.cache(cache_validation_callback=cache_validation_cb)
+ ... def my_func(delay=0):
+ ... time.sleep(delay)
+ ... print(f'Called with {delay}s delay')
+
+ >>> my_func()
+ Called with 0s delay
+ >>> my_func(1.1)
+ Called with 1.1s delay
+ >>> my_func(1.1) # This result is retrieved from cache
+ >>> my_func() # This one is not and the call is repeated
+ Called with 0s delay
+
+``cache_validation_cb`` will be called with a single argument containing
+the metadata of the cached call as a dictionary containing the following
+keys:
+
+ - ``duration``: the duration of the function call,
+ - ``time``: the timestamp when the cached call has been recorded
+ - ``input_args``: a dictionary of keywords arguments for the cached function call.
+
+Note a validity duration for cached results can be defined via
+:func:`joblib.expires_after` by providing it with arguments similar to the
+ones of a ``datetime.timedelta``:
+
+ >>> from joblib import expires_after
+ >>> @memory.cache(cache_validation_callback=expires_after(seconds=0.5))
+ ... def my_func():
+ ... print(f'Function run')
+ >>> my_func()
+ Function run
+ >>> my_func()
+ >>> time.sleep(0.5)
+ >>> my_func()
+ Function run
+
+
.. _memory_reference:
Reference documentation of the :class:`~joblib.Memory` class
@@ -448,3 +505,9 @@ without actually needing to call the function itself::
... shutil.rmtree(cachedir2)
... except OSError:
... pass # this can sometimes fail under Windows
+
+
+Helper Reference
+~~~~~~~~~~~~~~~~
+
+.. autofunction:: joblib.expires_after
| [
{
"components": [
{
"doc": "Check if the function call is cached and valid for given arguments.\n\n- Compare the function code with the one from the cached function,\nasserting if it has changed.\n- Check if the function call is present in the cache.\n- Call `cache_validation_callback` for user de... | [
"joblib/test/test_memory.py::test_memory_integration",
"joblib/test/test_memory.py::test_parallel_call_cached_function_defined_in_jupyter[True]",
"joblib/test/test_memory.py::test_parallel_call_cached_function_defined_in_jupyter[False]",
"joblib/test/test_memory.py::test_no_memory",
"joblib/test/test_memory... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
ENH add cache_validation_callback in Memory
Tentative to implement a way to validate the cache results with user define rules.
- Add an argument `validate_cache` that takes a callable `validate_cache(metadata) -> bool`.
- Add `time` in metadata to ease the robust checking of computation time in all backends.
- Add an helper `joblib.memory.expires_after` to check the age of a cached results.
- Add some small tests
- Clean some leftover pep8 :)
Todo:
- [x] Add some doc if the API is validated.
- [x] Add some entry in changelog.
Fixes #313 .
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in joblib/memory.py]
(definition of MemorizedFunc._is_in_cache_and_valid:)
def _is_in_cache_and_valid(self, path):
"""Check if the function call is cached and valid for given arguments.
- Compare the function code with the one from the cached function,
asserting if it has changed.
- Check if the function call is present in the cache.
- Call `cache_validation_callback` for user define cache validation.
Returns True if the function call is in cache and can be used, and
returns False otherwise."""
(definition of expires_after:)
def expires_after(days=0, seconds=0, microseconds=0, milliseconds=0, minutes=0, hours=0, weeks=0):
"""Helper cache_validation_callback to force recompute after a duration.
Parameters
----------
days, seconds, microseconds, milliseconds, minutes, hours, weeks: numbers
argument passed to a timedelta."""
(definition of expires_after.cache_validation_callback:)
def cache_validation_callback(metadata):
[end of new definitions in joblib/memory.py]
</definitions>
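The closure pattern behind `expires_after` can be illustrated standalone: the returned callback receives the cached call's metadata dict (with `duration`, `time` and `input_args` keys, as persisted by the patch above) and returns True while the entry is still fresh. A minimal sketch, independent of joblib itself (the name `expires_after_sketch` is illustrative):

```python
import time


def expires_after_sketch(seconds):
    """Build a cache_validation_callback rejecting entries older than `seconds`."""
    def cache_validation_callback(metadata):
        # metadata['time'] is the timestamp recorded when the result was cached
        return (time.time() - metadata["time"]) < seconds
    return cache_validation_callback


cb = expires_after_sketch(seconds=3600)
fresh = {"time": time.time(), "duration": 0.2, "input_args": {}}
stale = {"time": time.time() - 7200, "duration": 0.2, "input_args": {}}
print(cb(fresh), cb(stale))  # True False
```

In the real API this callback is passed to `Memory.cache(cache_validation_callback=...)` and invoked by `_is_in_cache_and_valid` before a cached result is reused.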
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Expiry of old results
I've added a small extra feature that recalculates if a result is older than a certain age.
```
@memory.cache(expires_after=60)
def f(x):
...
```
Will recompute if cached result is older than 60 seconds. Code is in this branch:
https://github.com/fredludlow/joblib/tree/expires-after
Is this of interest? And if so, is there anything else that needs doing before sending a pull request? (I've added test coverage in test_memory.py)
----------
Do you have a particular use case in mind?
We try to keep joblib caching as unsurprising as possible, so my first reaction would be against this idea.
Totally agree with the philosophy of keeping it as unsurprising as possible - this wouldn't break any existing code and the default would be to behave exactly as before.
My use case is a web app which aggregates data from a bunch of components (SQL queries and remote web APIs) and returns a bundle of data to the client. A non-cached request for this project takes maybe 10 seconds. If you use an HTTP cache (requests_cache) that reduces to 5 seconds. If I put joblib caching around each component (there are about 10) then that reduces to milliseconds for second and subsequent calls.
I'm happy with serving up results that are, at most, a week old (the quickest-changing data sources I'm using update on that timescale) but not happy with caching "forever" (the current joblib model). Without this modification my options are:
1) Set up a cron job to clear caches every week - this breaks portability of the app (adds another installation/uninstallation step).
2) Build the cache maintenance into the request cycle either in a very broad-brushed way (delete the whole lot if root cache_dir > max_age) or per result (what this change does).
Joblib has been great for this during development because small changes to one component don't cause a complete recalculation of the whole bundle, so the "Page Broken" --> "Make a change" --> "Hit F5" --> "Wait..." cycle is minimized.
Of course it's your decision, if it's not right for joblib then close this issue. Also if you think I'm going about this wrong and there's a better way of doing it I'd be very happy to hear it - I couldn't see any other similar library that provides this functionality, though maybe that's because it's a silly way to solve the problem!)
> We try to keep joblib caching as unsurprising as possible so my first
> reaction would be against this idea.
+1. It seems out of scope.
However, it might be good to make sure that it is possible to build such
a cache removal mechanism outside of joblib, using joblib.
> However, it might be good to make sure that it is possible to build such a cache removal mechanism outside of joblib, using joblib.
You mean adding an extra function for explicitly clearing a cache?
Something like:
``` python
joblib.clear_cache(mem, min_days=30, max_days=60, min_size=10, max_size=30)
```
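For completeness, the kind of external, time-based cleanup suggested here can be built with nothing but the standard library. This is a hedged sketch, not a joblib API — `clear_cache_by_age` and the one-subdirectory-per-cached-result layout are assumptions for illustration:

```python
import os
import shutil
import time


def clear_cache_by_age(cache_dir, max_days):
    """Delete cached-result subdirectories older than ``max_days`` days.

    Assumes each cached call lives in its own subdirectory of
    ``cache_dir`` whose modification time reflects when the result
    was last written (true for simple on-disk stores).
    """
    cutoff = time.time() - max_days * 86400
    for entry in os.scandir(cache_dir):
        if not entry.is_dir():
            continue
        try:
            if entry.stat().st_mtime < cutoff:
                shutil.rmtree(entry.path, ignore_errors=True)
        except OSError:
            pass  # entry vanished concurrently; ignore it
```

Run on a schedule (or opportunistically at startup) this keeps the cleanup outside the request cycle, at the cost of the O(size_of_cache) scan discussed below.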
> You mean adding an extra function for explicitly clearing a cache?
> Something like:
>
> `joblib.clear_cache(mem, min_days=30, max_days=60, min_size=10, max_size=30)`
I think this could certainly be useful, but the reason I didn't go down this route is you still have to choose where a call to this function sits. It's either in the request cycle (and its complexity is O(size_of_cache) so potentially a large overhead per-request) or in a separate scheduled maintenance process/thread/whatever (see previous comments about portability, as well as robustness - if your maintenance process/thread dies or stops, there's no way for the request to know if the cache is in date).
Adding it as a kwarg to the cache decorator makes it O(1) (it only checks the single entry we care about for that call).
> You mean adding an extra function for explicitly clearing a cache?
>
> joblib.clear_cache(mem, min_days=30, max_days=60, min_size=10, max_size=30)
Actually such a function would be really useful in a more general
context. And there is a bit of work to add it, so I think it would be a
good first step.
What do @lesteve and @ogrisel think of this feature?
> I think this could certainly be useful, but the reason I didn't go down this
> route is you still have to choose where a call to this function sits. It's
> either in the request cycle (and its complexity is O(size_of_cache) so
> potentially a large overhead per-request) or in a separate scheduled
> maintenance process/thread/whatever (see previous comments about portability,
> as well as robustness - if your maintenance process/thread dies or stops,
> there's no way for the request to know if the cache is in date).
Point taken.
> Adding it as a kwarg to the cache decorator makes it O(1) (it only checks the
> single entry we care about for that call).
Rather than adding something to the cache decorator, which would overload
the signature and risk clutter, I would prefer adding something to a
method of the decorated function (which is an object: MemorizedFunc).
That function is actually not in the documentation, maybe it should be
added.
> Rather than adding something to the cache decorator, which would overload
> the signature and risk clutter, I would prefer adding something to a
> method of the decorated function (which is an object: MemorizedFunc).
> That function is actually not in the documentation, maybe it should be
> added.
Fair enough, my current implementation ends up passing that kwarg through a couple of levels, which is getting quite verbose, do you mean something like:
```
@memory.cache
def f(a, b=None):
    ...

f.set_max_age(max_age)  # Since the commit I uploaded last week I'm thinking 'max_age'
                        # is a nicer variable/method name than 'expires_after'
```
That would keep the advantage of setting and enforcing the cache age in one place and preserves the separation of concerns in a larger app (i.e. the calling function/framework doesn't have to care or even know about the existence of joblib).
:+1: for having some way to clean up a cache or at least support extending memory cleanly to support this
:+1: as well for having a way to clean up the cache with time based parameters
+1 to time to live. I have the same use case as Fred and I bet a lot of other devs do too. All major Python caching libraries have `ttl` but none of them is as good as joblib overall. A much-needed feature indeed.
This functionality is slowly coming in joblib. In the development version, joblib.Memory has an optional "bytes_limit" argument at creation, and a "reduce_size()" method.
Please have a look at this functionality, and comment on whether it is going in the right direction.
Cc @lesteve @aabadie
@GaelVaroquaux it looks good. I guess this is not exactly a time-based expiry but having support for a fixed-size cache means that there is a way to handle cache expiration and therefore eventually plug in alternative strategies.
As @pombredanne says, this looks helpful, especially if you can add your own strategy. My own requirement is an O(1) test and potential expiry of the result for a single input (but I'm aware this doesn't necessarily represent a normal use case.)
+1 for the TTL use case (ignore current cached result if older than X). We use it to cache slowly-changing data instead of reading it from a DB on each initialization, but a TTL mechanism is a must for this use case (so we can re-read it from the DB, say once a day).
The bytes_limit argument doesn't really address this use case.
+1 here. Very useful for web or database-interfacing applications which is a huge use case for caching in general
Here is my hack for expiring the cache every day:
```
class daily_memory:
    """
    daily memory, expires every day

    usage:

    @daily_memory.cache
    def myfunction()
    """

    @staticmethod
    def cache(func):
        return daily_memory(func)

    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kargs):
        return Memory(cachedir=os.path.join(__storage_dir__, str(pacific_today())), verbose=0).cache(self.func)(*args, **kargs)
```
@chengguangnan, wouldn't this clutter the disk with cached results from all the previous days?
I have the same use case (TTL-based cache expiration in a web application interacting with the database) as @fredludlow.
The idea from the issue creator `@memory.cache(expires_after=60)` is absolutely awesome! I need this for a web API application which caches data between server-side table loads with pagination (call count -> creates cache, call data uses this cache) ... but the cache should be invalidated quite soon because data gets updated ... just to make the second request faster. Can you merge this in immediately please?
I think this feature is indeed a nice one, and would prove very useful in `joblib.Memory`, in particular when interacting with SQL databases or changing environments.
A classical hack to implement this feature directly with the function is to pass an argument that only changes when the condition changes, like passing the day as a dead argument. This has 2 major disadvantages:
- The user needs to change the way they call the function or add a custom decorator.
- The cache will grow very large as it is not cleaned automatically.
My main concern with the proposed implementation is that there can be many other reasons why one would want to invalidate the cache based on some external change to the function arguments. With time, this would lead to an ever-growing list of arguments for `Memory.cache` that might become impractical.
To me, a good compromise would be to add an argument `valid_cache` which provides a callable that is called if the result is in cache. If the callable returns `False`, the cached result is considered invalid and re-run. We might as well provide some callable to check the time to make it easier for such use cases.
WDYT @GaelVaroquaux @lesteve? In particular, I am thinking of the APHP, this feature would have simplified the caching code a lot.
> • The cache will grow very large
If our cache replacement policy works, this shouldn't be a problem (but
I'm not convinced that our cache replacement policy works).
> My main concern with the proposed implementation is that there can be many
> other reasons why one would want to invalidate the cache based on some external
> change to the function arguments. With time, this would lead to an ever
> growing list of arguments that might become impractical.
Indeed. Is the present proposal absolutely central and more than others?
It might be.
> To me, a good compromise would be to add an argument valid_cache which provides
> a callable which is called if the result is in cache.
I like that!
--------------------
</issues> | 41b70ff10c293bd292465456434620e406d90d88 |
Textualize__rich-904 | 904 | Textualize/rich | null | 0c3db4e6411c6c6c9d9fb68bc91dd81e3f74172c | 2021-01-10T22:01:32Z | diff --git a/CHANGELOG.md b/CHANGELOG.md
index a0f1d036c6..ff35333440 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,8 +5,13 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
-## [9.8.0]
+## [9.8.0] - Unreleased
+### Added
+
+- Added **rich_measure** for tree
+- Added rich.align.VerticalCenter
+- Added height to ConsoleOptions
- Allow passing markdown over STDIN when using `python -m rich.markdown`
## [9.7.0] - 2021-01-09
diff --git a/pyproject.toml b/pyproject.toml
index f5ede46fb6..6789c60a2a 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -2,7 +2,7 @@
name = "rich"
homepage = "https://github.com/willmcgugan/rich"
documentation = "https://rich.readthedocs.io/en/latest/"
-version = "9.7.0"
+version = "9.8.0"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
authors = ["Will McGugan <willmcgugan@gmail.com>"]
license = "MIT"
diff --git a/rich/align.py b/rich/align.py
index ad891c668d..f6a4497c21 100644
--- a/rich/align.py
+++ b/rich/align.py
@@ -48,6 +48,9 @@ def __init__(
self.pad = pad
self.width = width
+ def __repr__(self) -> str:
+ return f"Align({self.renderable!r}, {self.align!r})"
+
@classmethod
def left(
cls,
@@ -89,7 +92,15 @@ def __rich_console__(
) -> "RenderResult":
align = self.align
- rendered = console.render(Constrain(self.renderable, width=self.width), options)
+
+ width = Measurement.get(console, self.renderable).maximum
+
+ rendered = console.render(
+ Constrain(
+ self.renderable, width if self.width is None else min(width, self.width)
+ ),
+ options,
+ )
lines = list(Segment.split_lines(rendered))
width, height = Segment.get_shape(lines)
lines = Segment.set_shape(lines, width, height)
@@ -144,10 +155,58 @@ def __rich_measure__(self, console: "Console", max_width: int) -> Measurement:
return measurement
+class VerticalCenter(JupyterMixin):
+ """Vertically aligns a renderable.
+
+ Args:
+ renderable (RenderableType): A renderable object.
+ """
+
+ def __init__(self, renderable: "RenderableType") -> None:
+ self.renderable = renderable
+
+ def __repr__(self) -> str:
+ return f"VerticalCenter({self.renderable!r})"
+
+ def __rich_console__(
+ self, console: "Console", options: "ConsoleOptions"
+ ) -> "RenderResult":
+ lines = console.render_lines(self.renderable, options, pad=False)
+ new_line = Segment.line()
+ height = console.size.height
+ top_space = (height - len(lines)) // 2
+ bottom_space = height - top_space - len(lines) - 1
+ if top_space > 0:
+ yield Segment("\n" * top_space)
+ for line in lines:
+ yield from line
+ yield new_line
+ if bottom_space > 0:
+ yield Segment("\n" * bottom_space)
+
+ def __rich_measure__(self, console: "Console", max_width: int) -> Measurement:
+ measurement = Measurement.get(console, self.renderable, max_width)
+ return measurement
+
+
if __name__ == "__main__": # pragma: no cover
- from rich.console import Console
+ from rich.console import Console, RenderGroup
+ from rich.panel import Panel
console = Console()
- for align in ["left", "center", "right"]:
- console.print(Align("Hello\nWorld!\nWorld!!!", align)) # type: ignore
+ panel = Align.center(
+ Panel(
+ RenderGroup(
+ Align("Left", "left"),
+ Align("Center", "right"),
+ Align("Right", "center"),
+ ),
+ width=60,
+ )
+ )
+
+ console.print(VerticalCenter(panel), end="")
+ import time
+
+ time.sleep(20)
diff --git a/rich/console.py b/rich/console.py
index d955ecd1f4..3be4da2cc2 100644
--- a/rich/console.py
+++ b/rich/console.py
@@ -1202,7 +1202,7 @@ def print(
for hook in self._render_hooks:
renderables = hook.process_renderables(renderables)
render_options = self.options.update(
- justify=justify,
+ justify="default",
overflow=overflow,
width=min(width, self.width) if width else None,
no_wrap=no_wrap,
diff --git a/rich/tree.py b/rich/tree.py
index 614783a3f9..864797e91f 100644
--- a/rich/tree.py
+++ b/rich/tree.py
@@ -3,6 +3,7 @@
from ._loop import loop_first, loop_last
from .console import Console, ConsoleOptions, RenderableType, RenderResult
from .jupyter import JupyterMixin
+from .measure import Measurement
from .segment import Segment
from .style import Style, StyleStack, StyleType
from .styled import Styled
@@ -151,6 +152,31 @@ def make_guide(index: int, style: Style) -> Segment:
guide_style_stack.push(get_style(node.guide_style))
push(iter(loop_last(node.children)))
+ def __rich_measure__(self, console: "Console", max_width: int) -> "Measurement":
+ stack: List[Iterator[Tree]] = [iter([self])]
+ pop = stack.pop
+ push = stack.append
+ minimum = 0
+ maximum = 0
+ measure = Measurement.get
+ level = 0
+ while stack:
+ iter_tree = pop()
+ try:
+ tree = next(iter_tree)
+ except StopIteration:
+ level -= 1
+ continue
+ push(iter_tree)
+ min_measure, max_measure = measure(console, tree.label, max_width)
+ indent = level * 4
+ minimum = max(min_measure + indent, minimum)
+ maximum = max(max_measure + indent, maximum)
+ if tree.expanded and tree.children:
+ push(iter(tree.children))
+ level += 1
+ return Measurement(minimum, maximum)
+
if __name__ == "__main__": # pragma: no cover
| diff --git a/tests/test_align.py b/tests/test_align.py
index 3faec95e3a..47ba47f440 100644
--- a/tests/test_align.py
+++ b/tests/test_align.py
@@ -3,7 +3,7 @@
import pytest
from rich.console import Console
-from rich.align import Align
+from rich.align import Align, VerticalCenter
from rich.measure import Measurement
@@ -25,6 +25,12 @@ def test_bad_align_legal():
Align("foo", "LEFT")
+def test_repr():
+ repr(Align("foo", "left"))
+ repr(Align("foo", "center"))
+ repr(Align("foo", "right"))
+
+
def test_align_left():
console = Console(file=io.StringIO(), width=10)
console.print(Align("foo", "left"))
@@ -87,3 +93,15 @@ def test_shortcuts():
assert Align.right("foo").renderable == "foo"
assert Align.center("foo").align == "center"
assert Align.center("foo").renderable == "foo"
+
+
+def test_vertical_center():
+ console = Console(color_system=None, height=6)
+ console.begin_capture()
+ vertical_center = VerticalCenter("foo")
+ repr(vertical_center)
+ console.print(vertical_center)
+ result = console.end_capture()
+ expected = "\n\nfoo\n\n\n"
+ assert result == expected
+ assert Measurement.get(console, vertical_center) == Measurement(3, 3)
diff --git a/tests/test_tree.py b/tests/test_tree.py
index fc5db87770..af5f45690d 100644
--- a/tests/test_tree.py
+++ b/tests/test_tree.py
@@ -1,4 +1,5 @@
from rich.console import Console
+from rich.measure import Measurement
from rich.tree import Tree
@@ -68,3 +69,12 @@ def test_render():
print(repr(result))
expected = "foo \nโโโ \x1b[3mbar\x1b[0m\x1b[3m \x1b[0m\n\x1b[44mโโโ \x1b[0m\x1b[44mbaz\x1b[0m\x1b[44m \x1b[0m\n\x1b[44mโ \x1b[0m\x1b[1;31;44mโฃโโ \x1b[0m\x1b[44m1\x1b[0m\x1b[44m \x1b[0m\n\x1b[44mโ \x1b[0m\x1b[1;31;44mโโโ \x1b[0m\x1b[44m2\x1b[0m\x1b[44m \x1b[0m\nโโโ egg \n"
assert result == expected
+
+
+def test_tree_measure():
+ tree = Tree("foo")
+ tree.add("bar")
+ tree.add("musroom risotto")
+ console = Console()
+ measurement = Measurement.get(console, tree)
+ assert measurement == Measurement(11, 19)
| diff --git a/CHANGELOG.md b/CHANGELOG.md
index a0f1d036c6..ff35333440 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,8 +5,13 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
-## [9.8.0]
+## [9.8.0] - Unreleased
+### Added
+
+- Added **rich_measure** for tree
+- Added rich.align.VerticalCenter
+- Added height to ConsoleOptions
- Allow passing markdown over STDIN when using `python -m rich.markdown`
## [9.7.0] - 2021-01-09
diff --git a/pyproject.toml b/pyproject.toml
index f5ede46fb6..6789c60a2a 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -2,7 +2,7 @@
name = "rich"
homepage = "https://github.com/willmcgugan/rich"
documentation = "https://rich.readthedocs.io/en/latest/"
-version = "9.7.0"
+version = "9.8.0"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
authors = ["Will McGugan <willmcgugan@gmail.com>"]
license = "MIT"
| [
{
"components": [
{
"doc": "",
"lines": [
51,
52
],
"name": "Align.__repr__",
"signature": "def __repr__(self) -> str:",
"type": "function"
},
{
"doc": "Vertically aligns a renderable.\n\nArgs:\n renderable (Rendera... | [
"tests/test_align.py::test_bad_align_legal",
"tests/test_align.py::test_repr",
"tests/test_align.py::test_align_left",
"tests/test_align.py::test_align_center",
"tests/test_align.py::test_align_right",
"tests/test_align.py::test_align_fit",
"tests/test_align.py::test_align_right_style",
"tests/test_al... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add rich_measure to tree and VerticalCenter
## Type of changes
- [ ] Bug fix
- [ ] New feature
- [ ] Documentation / docstrings
- [ ] Tests
- [ ] Other
## Checklist
- [ ] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [ ] I accept that @willmcgugan may be pedantic in the code review.
## Description
Please describe your changes here. If this fixes a bug, please link to the issue, if possible.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in rich/align.py]
(definition of Align.__repr__:)
def __repr__(self) -> str:
(definition of VerticalCenter:)
class VerticalCenter(JupyterMixin):
"""Vertically aligns a renderable.
Args:
renderable (RenderableType): A renderable object."""
(definition of VerticalCenter.__init__:)
def __init__(self, renderable: "RenderableType") -> None:
(definition of VerticalCenter.__repr__:)
def __repr__(self) -> str:
(definition of VerticalCenter.__rich_console__:)
def __rich_console__( self, console: "Console", options: "ConsoleOptions" ) -> "RenderResult":
(definition of VerticalCenter.__rich_measure__:)
def __rich_measure__(self, console: "Console", max_width: int) -> Measurement:
[end of new definitions in rich/align.py]
[start of new definitions in rich/tree.py]
(definition of Tree.__rich_measure__:)
def __rich_measure__(self, console: "Console", max_width: int) -> "Measurement":
[end of new definitions in rich/tree.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
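As a cross-check of the measurement semantics, the test patch above expects `Measurement(11, 19)` for a tree `foo` with children `bar` and `musroom risotto`. A toy, self-contained re-implementation of the same width rule (4 cells of guide indent per nesting level; minimum driven by the longest word, maximum by the full label) reproduces those numbers — note this is an illustrative model for plain-string labels, not rich's actual code:

```python
def tree_measure(label, children=(), level=0):
    """Return (minimum, maximum) cell widths for a toy tree node.

    Mirrors the idea in Tree.__rich_measure__: each nesting level adds
    a 4-cell guide indent, and the result is the max over all labels.
    For plain strings, the minimum is the longest word and the maximum
    is the full label length (a simplification of rich's Measurement).
    """
    indent = level * 4
    words = label.split() or [""]
    minimum = max(len(w) for w in words) + indent
    maximum = len(label) + indent
    for child_label, grand_children in children:
        c_min, c_max = tree_measure(child_label, grand_children, level + 1)
        minimum = max(minimum, c_min)
        maximum = max(maximum, c_max)
    return minimum, maximum
```

Here "musroom risotto" (15 cells, longest word 7) at one level of nesting yields a minimum of 7 + 4 = 11 and a maximum of 15 + 4 = 19, matching the expected `Measurement(11, 19)`.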
<<END>> | b0661de34bab35af9b4b1d3ba8e28b186b225e84 | |
Textualize__rich-901 | 901 | Textualize/rich | null | a9c0f917aed8d0bba232b7584742962f03a9a293 | 2021-01-09T16:12:28Z | diff --git a/CHANGELOG.md b/CHANGELOG.md
index d8378b729a..acb53b1308 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,6 +10,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Added
- Added rich.tree
+- Added no_color argument to Console
## [9.6.2] - 2021-01-07
diff --git a/rich/console.py b/rich/console.py
index 2ef5b3d75b..d955ecd1f4 100644
--- a/rich/console.py
+++ b/rich/console.py
@@ -402,6 +402,7 @@ class Console:
width (int, optional): The width of the terminal. Leave as default to auto-detect width.
height (int, optional): The height of the terminal. Leave as default to auto-detect height.
style (StyleType, optional): Style to apply to all output, or None for no style. Defaults to None.
+ no_color (Optional[bool], optional): Enabled no color mode, or None to auto detect. Defaults to None.
record (bool, optional): Boolean to enable recording of terminal output,
required to call :meth:`export_html` and :meth:`export_text`. Defaults to False.
markup (bool, optional): Boolean to enable :ref:`console_markup`. Defaults to True.
@@ -433,6 +434,7 @@ def __init__(
width: int = None,
height: int = None,
style: StyleType = None,
+ no_color: bool = None,
tab_size: int = 8,
record: bool = False,
markup: bool = True,
@@ -492,6 +494,9 @@ def __init__(
self.get_datetime = get_datetime or datetime.now
self.get_time = get_time or monotonic
self.style = style
+ self.no_color = (
+ no_color if no_color is not None else "NO_COLOR" in self._environ
+ )
self._record_buffer_lock = threading.RLock()
self._thread_locals = ConsoleThreadLocals(
@@ -538,7 +543,7 @@ def _detect_color_system(self) -> Optional[ColorSystem]:
"""Detect color system from env vars."""
if self.is_jupyter:
return ColorSystem.TRUECOLOR
- if not self.is_terminal or "NO_COLOR" in self._environ or self.is_dumb_terminal:
+ if not self.is_terminal or self.is_dumb_terminal:
return None
if WINDOWS: # pragma: no cover
if self.legacy_windows: # pragma: no cover
@@ -1374,6 +1379,8 @@ def _render_buffer(self, buffer: Iterable[Segment]) -> str:
with self._record_buffer_lock:
self._record_buffer.extend(buffer)
not_terminal = not self.is_terminal
+ if self.no_color and color_system:
+ buffer = Segment.remove_color(buffer)
for text, style, is_control in buffer:
if style:
append(
diff --git a/rich/segment.py b/rich/segment.py
index db17b0fbb6..34ae7ab0d0 100644
--- a/rich/segment.py
+++ b/rich/segment.py
@@ -1,4 +1,4 @@
-from typing import NamedTuple, Optional
+from typing import Dict, NamedTuple, Optional
from .cells import cell_len, set_cell_size
from .style import Style
@@ -331,12 +331,37 @@ def strip_links(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
def strip_styles(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
"""Remove all styles from an iterable of segments.
+ Args:
+ segments (Iterable[Segment]): An iterable segments.
+
Yields:
Segment: Segments with styles replace with None
"""
for text, _style, is_control in segments:
yield cls(text, None, is_control)
+ @classmethod
+ def remove_color(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
+ """Remove all color from an iterable of segments.
+
+ Args:
+ segments (Iterable[Segment]): An iterable segments.
+
+ Yields:
+ Segment: Segments with colorless style.
+ """
+
+ cache: Dict[Style, Style] = {}
+ for text, style, is_control in segments:
+ if style:
+ colorless_style = cache.get(style)
+ if colorless_style is None:
+ colorless_style = style.without_color
+ cache[style] = colorless_style
+ yield cls(text, colorless_style, is_control)
+ else:
+ yield cls(text, None, is_control)
+
if __name__ == "__main__": # pragma: no cover
lines = [[Segment("Hello")]]
diff --git a/rich/style.py b/rich/style.py
index 5a802d0762..cda8380638 100644
--- a/rich/style.py
+++ b/rich/style.py
@@ -383,6 +383,24 @@ def background_style(self) -> "Style":
"""A Style with background only."""
return Style(bgcolor=self.bgcolor)
+ @property
+ def without_color(self) -> "Style":
+ """Get a copy of the style with color removed."""
+ if self._null:
+ return NULL_STYLE
+ style = self.__new__(Style)
+ style._ansi = None
+ style._style_definition = None
+ style._color = None
+ style._bgcolor = None
+ style._attributes = self._attributes
+ style._set_attributes = self._set_attributes
+ style._link = self._link
+ style._link_id = f"{time()}-{randint(0, 999999)}" if self._link else ""
+ style._hash = self._hash
+ style._null = False
+ return style
+
@classmethod
@lru_cache(maxsize=4096)
def parse(cls, style_definition: str) -> "Style":
| diff --git a/tests/test_console.py b/tests/test_console.py
index b82c765911..82bc5b48d4 100644
--- a/tests/test_console.py
+++ b/tests/test_console.py
@@ -467,3 +467,14 @@ def test_console_style() -> None:
expected = "\x1b[31mfoo\x1b[0m\n"
result = console.file.getvalue()
assert result == expected
+
+
+def test_no_color():
+ console = Console(
+ file=io.StringIO(), color_system="truecolor", force_terminal=True, no_color=True
+ )
+ console.print("[bold magenta on red]FOO")
+ expected = "\x1b[1mFOO\x1b[0m\n"
+ result = console.file.getvalue()
+ print(repr(result))
+ assert result == expected
diff --git a/tests/test_segment.py b/tests/test_segment.py
index 2c3344dd1b..42abdcbacf 100644
--- a/tests/test_segment.py
+++ b/tests/test_segment.py
@@ -96,3 +96,14 @@ def test_strip_styles():
def test_strip_links():
segments = [Segment("foo", Style(bold=True, link="https://www.example.org"))]
assert list(Segment.strip_links(segments)) == [Segment("foo", Style(bold=True))]
+
+
+def test_remove_color():
+ segments = [
+ Segment("foo", Style(bold=True, color="red")),
+ Segment("bar", None),
+ ]
+ assert list(Segment.remove_color(segments)) == [
+ Segment("foo", Style(bold=True)),
+ Segment("bar", None),
+ ]
diff --git a/tests/test_style.py b/tests/test_style.py
index a5075e80fb..8282331f30 100644
--- a/tests/test_style.py
+++ b/tests/test_style.py
@@ -196,3 +196,13 @@ def test_background_style():
assert Style(bold=True, color="yellow", bgcolor="red").background_style == Style(
bgcolor="red"
)
+
+
+def test_without_color():
+ style = Style(bold=True, color="red", bgcolor="blue")
+ colorless_style = style.without_color
+ assert colorless_style.color == None
+ assert colorless_style.bgcolor == None
+ assert colorless_style.bold == True
+ null_style = Style.null()
+ assert null_style.without_color == null_style
| diff --git a/CHANGELOG.md b/CHANGELOG.md
index d8378b729a..acb53b1308 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,6 +10,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Added
- Added rich.tree
+- Added no_color argument to Console
## [9.6.2] - 2021-01-07
| [
{
"components": [
{
"doc": "Remove all color from an iterable of segments.\n\nArgs:\n segments (Iterable[Segment]): An iterable segments.\n\nYields:\n Segment: Segments with colorless style.",
"lines": [
344,
363
],
"name": "Segment.remove_color",
... | [
"tests/test_console.py::test_no_color",
"tests/test_segment.py::test_remove_color",
"tests/test_style.py::test_without_color"
] | [
"tests/test_console.py::test_dumb_terminal",
"tests/test_console.py::test_soft_wrap",
"tests/test_console.py::test_16color_terminal",
"tests/test_console.py::test_truecolor_terminal",
"tests/test_console.py::test_console_options_update",
"tests/test_console.py::test_init",
"tests/test_console.py::test_s... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
No color
Fixes https://github.com/willmcgugan/rich/issues/882
Adds a no_color mode which strips color before render.
## Type of changes
- [ ] Bug fix
- [ ] New feature
- [ ] Documentation / docstrings
- [ ] Tests
- [ ] Other
## Checklist
- [ ] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [ ] I accept that @willmcgugan may be pedantic in the code review.
## Description
Please describe your changes here. If this fixes a bug, please link to the issue, if possible.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in rich/segment.py]
(definition of Segment.remove_color:)
def remove_color(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
"""Remove all color from an iterable of segments.
Args:
segments (Iterable[Segment]): An iterable segments.
Yields:
Segment: Segments with colorless style."""
[end of new definitions in rich/segment.py]
[start of new definitions in rich/style.py]
(definition of Style.without_color:)
def without_color(self) -> "Style":
"""Get a copy of the style with color removed."""
[end of new definitions in rich/style.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
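A standalone sketch of the color-stripping idea from the patch — keep attributes such as bold while dropping foreground/background color, and cache the stripped style per distinct input style as the patch's `Segment.remove_color` does. Plain dicts stand in for rich's `Style` here; the names and shapes are illustrative assumptions, not rich's API:

```python
def without_color(style):
    """Return a copy of a style dict with color information removed,
    preserving attributes like bold/italic/link."""
    stripped = dict(style)
    stripped["color"] = None
    stripped["bgcolor"] = None
    return stripped


def remove_color(segments):
    """Yield (text, style) segments with colorless styles, caching the
    stripped style per distinct input style (dicts are unhashable, so
    this toy keys the cache on id(style))."""
    cache = {}
    for text, style in segments:
        if style is None:
            yield text, None
            continue
        key = id(style)
        if key not in cache:
            cache[key] = without_color(style)
        yield text, cache[key]
```

This mirrors how `Console._render_buffer` can honor `no_color=True`: styles keep their non-color attributes, so bold text stays bold while all color escapes disappear.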
<<END>> | b0661de34bab35af9b4b1d3ba8e28b186b225e84 | |
sympy__sympy-20761 | 20,761 | sympy/sympy | 1.8 | 07a6388bc237a2c43e65dc3cf932373e4d06d91b | 2021-01-08T08:14:30Z | diff --git a/sympy/matrices/decompositions.py b/sympy/matrices/decompositions.py
index 1233c5981e19..c34cb3e42ee9 100644
--- a/sympy/matrices/decompositions.py
+++ b/sympy/matrices/decompositions.py
@@ -1123,6 +1123,222 @@ def _LUdecompositionFF(M):
return P, L, DD, U
+def _singular_value_decomposition(A):
+ r"""Returns a Condensed Singular Value decomposition.
+
+ Explanation
+ ===========
+
+ A Singular Value decomposition is a decomposition in the form $A = U \Sigma V$
+ where
+
+ - $U, V$ are column orthogonal matrix.
+ - $\Sigma$ is a diagonal matrix, where the main diagonal contains singular
+ values of matrix A.
+
+ A column orthogonal matrix satisfies
+ $\mathbb{I} = U^H U$ while a full orthogonal matrix satisfies
+ relation $\mathbb{I} = U U^H = U^H U$ where $\mathbb{I}$ is an identity
+ matrix with matching dimensions.
+
+ For matrices which are not square or are rank-deficient, it is
+ sufficient to return a column orthogonal matrix because augmenting
+ them may introduce redundant computations.
+ In condensed Singular Value Decomposition we only return column orthognal
+ matrices because of this reason
+
+ If you want to augment the results to return a full orthogonal
+ decomposition, you should use the following procedures.
+
+ - Augment the $U , V$ matrices with columns that are orthogonal to every
+      other column and make them square.
+    - Augment the $\Sigma$ matrix with zero rows to make it have the same
+ shape as the original matrix.
+
+ The procedure will be illustrated in the examples section.
+
+ Examples
+ ========
+
+    We take a full-rank matrix first:
+
+ >>> from sympy import Matrix
+ >>> A = Matrix([[1, 2],[2,1]])
+ >>> U, S, V = A.singular_value_decomposition()
+ >>> U
+ Matrix([
+ [ sqrt(2)/2, sqrt(2)/2],
+ [-sqrt(2)/2, sqrt(2)/2]])
+ >>> S
+ Matrix([
+ [1, 0],
+ [0, 3]])
+ >>> V
+ Matrix([
+ [-sqrt(2)/2, sqrt(2)/2],
+ [ sqrt(2)/2, sqrt(2)/2]])
+
+    If a matrix is square and full rank, both U and V
+    are orthogonal in both directions.
+
+ >>> U * U.H
+ Matrix([
+ [1, 0],
+ [0, 1]])
+ >>> U.H * U
+ Matrix([
+ [1, 0],
+ [0, 1]])
+
+ >>> V * V.H
+ Matrix([
+ [1, 0],
+ [0, 1]])
+ >>> V.H * V
+ Matrix([
+ [1, 0],
+ [0, 1]])
+ >>> A == U * S * V.H
+ True
+
+ >>> C = Matrix([
+ ... [1, 0, 0, 0, 2],
+ ... [0, 0, 3, 0, 0],
+ ... [0, 0, 0, 0, 0],
+ ... [0, 2, 0, 0, 0],
+ ... ])
+ >>> U, S, V = C.singular_value_decomposition()
+
+ >>> V.H * V
+ Matrix([
+ [1, 0, 0],
+ [0, 1, 0],
+ [0, 0, 1]])
+ >>> V * V.H
+ Matrix([
+ [1/5, 0, 0, 0, 2/5],
+ [ 0, 1, 0, 0, 0],
+ [ 0, 0, 1, 0, 0],
+ [ 0, 0, 0, 0, 0],
+ [2/5, 0, 0, 0, 4/5]])
+
+ If you want to augment the results to be a full orthogonal
+    decomposition, you should augment $V$ with another orthogonal
+ column.
+
+    You can append arbitrary standard basis vectors that are linearly
+    independent of the existing columns, then run the Gram-Schmidt
+    process to orthogonalize the augmented basis.
+
+ >>> V_aug = V.row_join(Matrix([[0,0,0,0,1],
+ ... [0,0,0,1,0]]).H)
+ >>> V_aug = V_aug.QRdecomposition()[0]
+ >>> V_aug
+ Matrix([
+ [0, sqrt(5)/5, 0, -2*sqrt(5)/5, 0],
+ [1, 0, 0, 0, 0],
+ [0, 0, 1, 0, 0],
+ [0, 0, 0, 0, 1],
+ [0, 2*sqrt(5)/5, 0, sqrt(5)/5, 0]])
+ >>> V_aug.H * V_aug
+ Matrix([
+ [1, 0, 0, 0, 0],
+ [0, 1, 0, 0, 0],
+ [0, 0, 1, 0, 0],
+ [0, 0, 0, 1, 0],
+ [0, 0, 0, 0, 1]])
+ >>> V_aug * V_aug.H
+ Matrix([
+ [1, 0, 0, 0, 0],
+ [0, 1, 0, 0, 0],
+ [0, 0, 1, 0, 0],
+ [0, 0, 0, 1, 0],
+ [0, 0, 0, 0, 1]])
+
+ Similarly we augment U
+
+ >>> U_aug = U.row_join(Matrix([0,0,1,0]))
+ >>> U_aug = U_aug.QRdecomposition()[0]
+ >>> U_aug
+ Matrix([
+ [0, 1, 0, 0],
+ [0, 0, 1, 0],
+ [0, 0, 0, 1],
+ [1, 0, 0, 0]])
+
+ >>> U_aug.H * U_aug
+ Matrix([
+ [1, 0, 0, 0],
+ [0, 1, 0, 0],
+ [0, 0, 1, 0],
+ [0, 0, 0, 1]])
+ >>> U_aug * U_aug.H
+ Matrix([
+ [1, 0, 0, 0],
+ [0, 1, 0, 0],
+ [0, 0, 1, 0],
+ [0, 0, 0, 1]])
+
+ We add 2 zero columns and one row to S
+
+ >>> S_aug = S.col_join(Matrix([[0,0,0]]))
+ >>> S_aug = S_aug.row_join(Matrix([[0,0,0,0],
+ ... [0,0,0,0]]).H)
+ >>> S_aug
+ Matrix([
+ [2, 0, 0, 0, 0],
+ [0, sqrt(5), 0, 0, 0],
+ [0, 0, 3, 0, 0],
+ [0, 0, 0, 0, 0]])
+
+
+
+ >>> U_aug * S_aug * V_aug.H == C
+ True
+
+ """
+
+ AH = A.H
+ m, n = A.shape
+ if m >= n:
+ V, S = (AH * A).diagonalize()
+
+ ranked = []
+ for i, x in enumerate(S.diagonal()):
+ if not x.is_zero:
+ ranked.append(i)
+
+ V = V[:, ranked]
+
+ Singular_vals = [sqrt(S[i, i]) for i in range(S.rows) if i in ranked]
+
+ S = S.zeros(len(Singular_vals))
+
+ for i in range(len(Singular_vals)):
+ S[i, i] = Singular_vals[i]
+
+ V, _ = V.QRdecomposition()
+ U = A * V * S.inv()
+ else:
+ U, S = (A * AH).diagonalize()
+
+ ranked = []
+ for i, x in enumerate(S.diagonal()):
+ if not x.is_zero:
+ ranked.append(i)
+
+ U = U[:, ranked]
+ Singular_vals = [sqrt(S[i, i]) for i in range(S.rows) if i in ranked]
+
+ S = S.zeros(len(Singular_vals))
+
+ for i in range(len(Singular_vals)):
+ S[i, i] = Singular_vals[i]
+
+ U, _ = U.QRdecomposition()
+ V = AH * U * S.inv()
+
+ return U, S, V
def _QRdecomposition_optional(M, normalize=True):
def dot(u, v):
diff --git a/sympy/matrices/matrices.py b/sympy/matrices/matrices.py
index 17925a8a7c31..114ed549be65 100644
--- a/sympy/matrices/matrices.py
+++ b/sympy/matrices/matrices.py
@@ -49,7 +49,7 @@
from .decompositions import (
_rank_decomposition, _cholesky, _LDLdecomposition,
_LUdecomposition, _LUdecomposition_Simple, _LUdecompositionFF,
- _QRdecomposition, _upper_hessenberg_decomposition)
+ _singular_value_decomposition, _QRdecomposition, _upper_hessenberg_decomposition)
from .graph import _connected_components, _connected_components_decomposition
@@ -2159,6 +2159,9 @@ def LUdecomposition_Simple(self, iszerofunc=_iszero, simpfunc=None,
def LUdecompositionFF(self):
return _LUdecompositionFF(self)
+ def singular_value_decomposition(self):
+ return _singular_value_decomposition(self)
+
def QRdecomposition(self):
return _QRdecomposition(self)
@@ -2241,6 +2244,7 @@ def connected_components_decomposition(self):
LUdecomposition.__doc__ = _LUdecomposition.__doc__
LUdecomposition_Simple.__doc__ = _LUdecomposition_Simple.__doc__
LUdecompositionFF.__doc__ = _LUdecompositionFF.__doc__
+ singular_value_decomposition.__doc__ = _singular_value_decomposition.__doc__
QRdecomposition.__doc__ = _QRdecomposition.__doc__
upper_hessenberg_decomposition.__doc__ = _upper_hessenberg_decomposition.__doc__
| diff --git a/sympy/matrices/tests/test_decompositions.py b/sympy/matrices/tests/test_decompositions.py
index a16901824eb3..82ef110ae69d 100644
--- a/sympy/matrices/tests/test_decompositions.py
+++ b/sympy/matrices/tests/test_decompositions.py
@@ -91,6 +91,41 @@ def test_LUdecomp():
)
raises(ValueError, lambda : M.LUdecomposition_Simple(rankcheck=True))
+def test_singular_value_decompositionD():
+ A = Matrix([[1, 2], [2, 1]])
+ U, S, V = A.singular_value_decomposition()
+ assert U * S * V.T == A
+ assert U.T * U == eye(U.cols)
+ assert V.T * V == eye(V.cols)
+
+ B = Matrix([[1, 2]])
+ U, S, V = B.singular_value_decomposition()
+
+ assert U * S * V.T == B
+ assert U.T * U == eye(U.cols)
+ assert V.T * V == eye(V.cols)
+
+ C = Matrix([
+ [1, 0, 0, 0, 2],
+ [0, 0, 3, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 2, 0, 0, 0],
+ ])
+
+ U, S, V = C.singular_value_decomposition()
+
+ assert U * S * V.T == C
+ assert U.T * U == eye(U.cols)
+ assert V.T * V == eye(V.cols)
+
+ D = Matrix([[Rational(1, 3), sqrt(2)], [0, Rational(1, 4)]])
+ U, S, V = D.singular_value_decomposition()
+ assert simplify(U.T * U) == eye(U.cols)
+ assert simplify(V.T * V) == eye(V.cols)
+ assert simplify(U * S * V.T) == D
+
+
+
def test_QR():
A = Matrix([[1, 2], [2, 3]])
Q, S = A.QRdecomposition()
| [
{
"components": [
{
"doc": "Returns a Condensed Singular Value decomposition.\n\nExplanation\n===========\n\nA Singular Value decomposition is a decomposition in the form $A = U \\Sigma V$\nwhere\n\n- $U, V$ are column orthogonal matrix.\n- $\\Sigma$ is a diagonal matrix, where the main diagonal c... | [
"test_singular_value_decompositionD"
] | [
"test_LUdecomp",
"test_QR",
"test_QR_non_square",
"test_QR_trivial",
"test_QR_float",
"test_LUdecomposition_Simple_iszerofunc",
"test_LUdecomposition_iszerofunc",
"test_LDLdecomposition",
"test_pinv_succeeds_with_rank_decomposition_method",
"test_rank_decomposition"
] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Implemented Singular Value Decomposition
#### References to other Issues or PRs
Implemented code in #20029
#### Brief description of what is fixed or changed
Implemented Condensed Singular Value decomposition
#### Release Notes
<!-- BEGIN RELEASE NOTES -->
* matrices
* Implemented Singular Value decomposition for matrices
<!-- END RELEASE NOTES -->
----------
</request>
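The decomposition requested above can be sanity-checked without sympy: for the 2x2 doc example `[[1, 2], [2, 1]]`, the singular values are the square roots of the eigenvalues of A^T A. A minimal stdlib sketch (the helper name is ours, not part of the PR):

```python
import math

def singular_values_2x2(a, b, c, d):
    # Entries of M = A^T A for A = [[a, b], [c, d]]
    m11 = a * a + c * c
    m12 = a * b + c * d
    m22 = b * b + d * d
    # Eigenvalues of the symmetric 2x2 matrix M via the quadratic
    # formula; their square roots are the singular values of A.
    tr = m11 + m22
    det = m11 * m22 - m12 * m12
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    lam_hi = (tr + disc) / 2.0
    lam_lo = max((tr - disc) / 2.0, 0.0)
    return math.sqrt(lam_hi), math.sqrt(lam_lo)
```

For `A = [[1, 2], [2, 1]]` this gives singular values 3 and 1, matching the `S` matrix in the doctests.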
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/matrices/decompositions.py]
(definition of _singular_value_decomposition:)
def _singular_value_decomposition(A):
"""Returns a Condensed Singular Value decomposition.
Explanation
===========
A Singular Value decomposition is a decomposition in the form $A = U \Sigma V^H$
where
- $U, V$ are column orthogonal matrices.
- $\Sigma$ is a diagonal matrix, where the main diagonal contains singular
values of matrix A.
A column orthogonal matrix satisfies
$\mathbb{I} = U^H U$ while a full orthogonal matrix satisfies
the relation $\mathbb{I} = U U^H = U^H U$, where $\mathbb{I}$ is an identity
matrix with matching dimensions.
For matrices which are not square or are rank-deficient, it is
sufficient to return a column orthogonal matrix because augmenting
them may introduce redundant computations.
In condensed Singular Value Decomposition we only return column orthogonal
matrices for this reason.
If you want to augment the results to return a full orthogonal
decomposition, you should use the following procedures.
- Augment the $U , V$ matrices with columns that are orthogonal to every
other column and make them square.
- Augment the $\Sigma$ matrix with zero rows to make it have the same
shape as the original matrix.
The procedure will be illustrated in the examples section.
Examples
========
We take a full-rank matrix first:
>>> from sympy import Matrix
>>> A = Matrix([[1, 2],[2,1]])
>>> U, S, V = A.singular_value_decomposition()
>>> U
Matrix([
[ sqrt(2)/2, sqrt(2)/2],
[-sqrt(2)/2, sqrt(2)/2]])
>>> S
Matrix([
[1, 0],
[0, 3]])
>>> V
Matrix([
[-sqrt(2)/2, sqrt(2)/2],
[ sqrt(2)/2, sqrt(2)/2]])
If a matrix is square and full rank, both U and V
are orthogonal in both directions.
>>> U * U.H
Matrix([
[1, 0],
[0, 1]])
>>> U.H * U
Matrix([
[1, 0],
[0, 1]])
>>> V * V.H
Matrix([
[1, 0],
[0, 1]])
>>> V.H * V
Matrix([
[1, 0],
[0, 1]])
>>> A == U * S * V.H
True
>>> C = Matrix([
... [1, 0, 0, 0, 2],
... [0, 0, 3, 0, 0],
... [0, 0, 0, 0, 0],
... [0, 2, 0, 0, 0],
... ])
>>> U, S, V = C.singular_value_decomposition()
>>> V.H * V
Matrix([
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
>>> V * V.H
Matrix([
[1/5, 0, 0, 0, 2/5],
[ 0, 1, 0, 0, 0],
[ 0, 0, 1, 0, 0],
[ 0, 0, 0, 0, 0],
[2/5, 0, 0, 0, 4/5]])
If you want to augment the results to be a full orthogonal
decomposition, you should augment $V$ with another orthogonal
column.
You can append arbitrary standard basis vectors that are linearly
independent of the existing columns, then run the Gram-Schmidt
process to orthogonalize the augmented basis.
>>> V_aug = V.row_join(Matrix([[0,0,0,0,1],
... [0,0,0,1,0]]).H)
>>> V_aug = V_aug.QRdecomposition()[0]
>>> V_aug
Matrix([
[0, sqrt(5)/5, 0, -2*sqrt(5)/5, 0],
[1, 0, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 1],
[0, 2*sqrt(5)/5, 0, sqrt(5)/5, 0]])
>>> V_aug.H * V_aug
Matrix([
[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 0, 1]])
>>> V_aug * V_aug.H
Matrix([
[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 0, 1]])
Similarly we augment U
>>> U_aug = U.row_join(Matrix([0,0,1,0]))
>>> U_aug = U_aug.QRdecomposition()[0]
>>> U_aug
Matrix([
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[1, 0, 0, 0]])
>>> U_aug.H * U_aug
Matrix([
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]])
>>> U_aug * U_aug.H
Matrix([
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]])
We add 2 zero columns and one row to S
>>> S_aug = S.col_join(Matrix([[0,0,0]]))
>>> S_aug = S_aug.row_join(Matrix([[0,0,0,0],
... [0,0,0,0]]).H)
>>> S_aug
Matrix([
[2, 0, 0, 0, 0],
[0, sqrt(5), 0, 0, 0],
[0, 0, 3, 0, 0],
[0, 0, 0, 0, 0]])
>>> U_aug * S_aug * V_aug.H == C
True"""
[end of new definitions in sympy/matrices/decompositions.py]
[start of new definitions in sympy/matrices/matrices.py]
(definition of MatrixBase.singular_value_decomposition:)
def singular_value_decomposition(self):
[end of new definitions in sympy/matrices/matrices.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 284db64e44124856accce4abff75d2d8990c301d | ||
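The strategy `_singular_value_decomposition` follows above — eigendecompose A^H A, keep the nonzero eigenvalues, take square roots, then recover U = A V S^(-1) — translates directly to a numerical sketch. This assumes numpy is available, handles real matrices only, and the helper is ours, not sympy's code:

```python
import numpy as np

def condensed_svd(A, tol=1e-10):
    """Condensed SVD following the patch's approach: eigendecompose
    A^T A, discard (near-)zero eigenvalues, then U = A V S^{-1}."""
    A = np.asarray(A, dtype=float)
    evals, evecs = np.linalg.eigh(A.T @ A)  # A^T A is symmetric
    keep = evals > tol                      # numeric rank filter
    svals = np.sqrt(evals[keep])
    V = evecs[:, keep]                      # column-orthogonal V
    U = A @ V / svals                       # scale each column by 1/sigma
    return U, np.diag(svals), V
```

On the rank-3 doc example `C` (4x5), this returns column-orthogonal `U` (4x3) and `V` (5x3) with `U @ S @ V.T` reconstructing `C`, mirroring the condensed shapes in the docstring.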
openvinotoolkit__datumaro-70 | 70 | openvinotoolkit/datumaro | null | 6f1f49421cf407958cd9aa4fe635c2edbbbe0332 | 2021-01-04T13:34:56Z | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 21c110c5c7..7040309353 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -23,7 +23,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
-
### Fixed
--
+- Allow plugins inherited from `Extractor` (instead of only `SourceExtractor`) (<https://github.com/openvinotoolkit/datumaro/pull/70>)
### Security
-
diff --git a/datumaro/components/environment.py b/datumaro/components/environment.py
new file mode 100644
index 0000000000..430a32c98a
--- /dev/null
+++ b/datumaro/components/environment.py
@@ -0,0 +1,291 @@
+# Copyright (C) 2020-2021 Intel Corporation
+#
+# SPDX-License-Identifier: MIT
+
+from functools import partial
+from glob import glob
+import git
+import inspect
+import logging as log
+import os
+import os.path as osp
+
+from datumaro.components.config import Config
+from datumaro.components.config_model import Model, Source
+from datumaro.util.os_util import import_foreign_module
+
+
+class Registry:
+ def __init__(self, config=None, item_type=None):
+ self.item_type = item_type
+
+ self.items = {}
+
+ if config is not None:
+ self.load(config)
+
+ def load(self, config):
+ pass
+
+ def register(self, name, value):
+ if self.item_type:
+ value = self.item_type(value)
+ self.items[name] = value
+ return value
+
+ def unregister(self, name):
+ return self.items.pop(name, None)
+
+ def get(self, key):
+ """Returns a class or a factory function"""
+ return self.items[key]
+
+ def __getitem__(self, key):
+ return self.get(key)
+
+ def __contains__(self, key):
+ return key in self.items
+
+
+class ModelRegistry(Registry):
+ def __init__(self, config=None):
+ super().__init__(config, item_type=Model)
+
+ def load(self, config):
+ # TODO: list default dir, insert values
+ if 'models' in config:
+ for name, model in config.models.items():
+ self.register(name, model)
+
+
+class SourceRegistry(Registry):
+ def __init__(self, config=None):
+ super().__init__(config, item_type=Source)
+
+ def load(self, config):
+ # TODO: list default dir, insert values
+ if 'sources' in config:
+ for name, source in config.sources.items():
+ self.register(name, source)
+
+
+class PluginRegistry(Registry):
+ def __init__(self, config=None, builtin=None, local=None):
+ super().__init__(config)
+
+ from datumaro.components.cli_plugin import CliPlugin
+
+ if builtin is not None:
+ for v in builtin:
+ k = CliPlugin._get_name(v)
+ self.register(k, v)
+ if local is not None:
+ for v in local:
+ k = CliPlugin._get_name(v)
+ self.register(k, v)
+
+
+class GitWrapper:
+ def __init__(self, config=None):
+ self.repo = None
+
+ if config is not None and config.project_dir:
+ self.init(config.project_dir)
+
+ @staticmethod
+ def _git_dir(base_path):
+ return osp.join(base_path, '.git')
+
+ @classmethod
+ def spawn(cls, path):
+ spawn = not osp.isdir(cls._git_dir(path))
+ repo = git.Repo.init(path=path)
+ if spawn:
+ repo.config_writer().set_value("user", "name", "User") \
+ .set_value("user", "email", "user@nowhere.com") \
+ .release()
+ # gitpython does not support init, use git directly
+ repo.git.init()
+ repo.git.commit('-m', 'Initial commit', '--allow-empty')
+ return repo
+
+ def init(self, path):
+ self.repo = self.spawn(path)
+ return self.repo
+
+ def is_initialized(self):
+ return self.repo is not None
+
+ def create_submodule(self, name, dst_dir, **kwargs):
+ self.repo.create_submodule(name, dst_dir, **kwargs)
+
+ def has_submodule(self, name):
+ return name in [submodule.name for submodule in self.repo.submodules]
+
+ def remove_submodule(self, name, **kwargs):
+ return self.repo.submodule(name).remove(**kwargs)
+
+
+class Environment:
+ _builtin_plugins = None
+ PROJECT_EXTRACTOR_NAME = 'datumaro_project'
+
+ def __init__(self, config=None):
+ from datumaro.components.project import (
+ PROJECT_DEFAULT_CONFIG, PROJECT_SCHEMA, load_project_as_dataset)
+ config = Config(config,
+ fallback=PROJECT_DEFAULT_CONFIG, schema=PROJECT_SCHEMA)
+
+ self.models = ModelRegistry(config)
+ self.sources = SourceRegistry(config)
+
+ self.git = GitWrapper(config)
+
+ env_dir = osp.join(config.project_dir, config.env_dir)
+ builtin = self._load_builtin_plugins()
+ custom = self._load_plugins2(osp.join(env_dir, config.plugins_dir))
+ select = lambda seq, t: [e for e in seq if issubclass(e, t)]
+ from datumaro.components.converter import Converter
+ from datumaro.components.extractor import (Importer, SourceExtractor,
+ Transform)
+ from datumaro.components.launcher import Launcher
+ self.extractors = PluginRegistry(
+ builtin=select(builtin, SourceExtractor),
+ local=select(custom, SourceExtractor)
+ )
+ self.extractors.register(self.PROJECT_EXTRACTOR_NAME,
+ load_project_as_dataset)
+
+ self.importers = PluginRegistry(
+ builtin=select(builtin, Importer),
+ local=select(custom, Importer)
+ )
+ self.launchers = PluginRegistry(
+ builtin=select(builtin, Launcher),
+ local=select(custom, Launcher)
+ )
+ self.converters = PluginRegistry(
+ builtin=select(builtin, Converter),
+ local=select(custom, Converter)
+ )
+ self.transforms = PluginRegistry(
+ builtin=select(builtin, Transform),
+ local=select(custom, Transform)
+ )
+
+ @staticmethod
+ def _find_plugins(plugins_dir):
+ plugins = []
+ if not osp.exists(plugins_dir):
+ return plugins
+
+ for plugin_name in os.listdir(plugins_dir):
+ p = osp.join(plugins_dir, plugin_name)
+ if osp.isfile(p) and p.endswith('.py'):
+ plugins.append((plugins_dir, plugin_name, None))
+ elif osp.isdir(p):
+ plugins += [(plugins_dir,
+ osp.splitext(plugin_name)[0] + '.' + osp.basename(p),
+ osp.splitext(plugin_name)[0]
+ )
+ for p in glob(osp.join(p, '*.py'))]
+ return plugins
+
+ @classmethod
+ def _import_module(cls, module_dir, module_name, types, package=None):
+ module = import_foreign_module(osp.splitext(module_name)[0], module_dir,
+ package=package)
+
+ exports = []
+ if hasattr(module, 'exports'):
+ exports = module.exports
+ else:
+ for symbol in dir(module):
+ if symbol.startswith('_'):
+ continue
+ exports.append(getattr(module, symbol))
+
+ exports = [s for s in exports
+ if inspect.isclass(s) and issubclass(s, types) and not s in types]
+
+ return exports
+
+ @classmethod
+ def _load_plugins(cls, plugins_dir, types):
+ types = tuple(types)
+
+ plugins = cls._find_plugins(plugins_dir)
+
+ all_exports = []
+ for module_dir, module_name, package in plugins:
+ try:
+ exports = cls._import_module(module_dir, module_name, types,
+ package)
+ except Exception as e:
+ module_search_error = ImportError
+ try:
+ module_search_error = ModuleNotFoundError # python 3.6+
+ except NameError:
+ pass
+
+ message = ["Failed to import module '%s': %s", module_name, e]
+ if isinstance(e, module_search_error):
+ log.debug(*message)
+ else:
+ log.warning(*message)
+ continue
+
+ log.debug("Imported the following symbols from %s: %s" % \
+ (
+ module_name,
+ ', '.join(s.__name__ for s in exports)
+ )
+ )
+ all_exports.extend(exports)
+
+ return all_exports
+
+ @classmethod
+ def _load_builtin_plugins(cls):
+ if not cls._builtin_plugins:
+ plugins_dir = osp.join(
+ __file__[: __file__.rfind(osp.join('datumaro', 'components'))],
+ osp.join('datumaro', 'plugins')
+ )
+ assert osp.isdir(plugins_dir), plugins_dir
+ cls._builtin_plugins = cls._load_plugins2(plugins_dir)
+ return cls._builtin_plugins
+
+ @classmethod
+ def _load_plugins2(cls, plugins_dir):
+ from datumaro.components.converter import Converter
+ from datumaro.components.extractor import (Extractor, Importer,
+ Transform)
+ from datumaro.components.launcher import Launcher
+ types = [Extractor, Converter, Importer, Launcher, Transform]
+
+ return cls._load_plugins(plugins_dir, types)
+
+ def make_extractor(self, name, *args, **kwargs):
+ return self.extractors.get(name)(*args, **kwargs)
+
+ def make_importer(self, name, *args, **kwargs):
+ return self.importers.get(name)(*args, **kwargs)
+
+ def make_launcher(self, name, *args, **kwargs):
+ return self.launchers.get(name)(*args, **kwargs)
+
+ def make_converter(self, name, *args, **kwargs):
+ result = self.converters.get(name)
+ if inspect.isclass(result):
+ result = result.convert
+ return partial(result, *args, **kwargs)
+
+ def make_transform(self, name, *args, **kwargs):
+ return partial(self.transforms.get(name), *args, **kwargs)
+
+ def register_model(self, name, model):
+ self.models.register(name, model)
+
+ def unregister_model(self, name):
+ self.models.unregister(name)
diff --git a/datumaro/components/project.py b/datumaro/components/project.py
index 094dd22ffe..fb92bde649 100644
--- a/datumaro/components/project.py
+++ b/datumaro/components/project.py
@@ -3,301 +3,19 @@
# SPDX-License-Identifier: MIT
from collections import defaultdict
-from glob import glob
-import git
-import importlib
-import inspect
import logging as log
import os
import os.path as osp
import shutil
-import sys
from datumaro.components.config import Config, DEFAULT_FORMAT
from datumaro.components.config_model import (Model, Source,
PROJECT_DEFAULT_CONFIG, PROJECT_SCHEMA)
+from datumaro.components.environment import Environment
from datumaro.components.launcher import ModelTransform
from datumaro.components.dataset import Dataset
-def import_foreign_module(name, path, package=None):
- module = None
- default_path = sys.path.copy()
- try:
- sys.path = [ osp.abspath(path), ] + default_path
- sys.modules.pop(name, None) # remove from cache
- module = importlib.import_module(name, package=package)
- sys.modules.pop(name) # remove from cache
- except Exception:
- raise
- finally:
- sys.path = default_path
- return module
-
-
-class Registry:
- def __init__(self, config=None, item_type=None):
- self.item_type = item_type
-
- self.items = {}
-
- if config is not None:
- self.load(config)
-
- def load(self, config):
- pass
-
- def register(self, name, value):
- if self.item_type:
- value = self.item_type(value)
- self.items[name] = value
- return value
-
- def unregister(self, name):
- return self.items.pop(name, None)
-
- def get(self, key):
- return self.items[key] # returns a class / ctor
-
-
-class ModelRegistry(Registry):
- def __init__(self, config=None):
- super().__init__(config, item_type=Model)
-
- def load(self, config):
- # TODO: list default dir, insert values
- if 'models' in config:
- for name, model in config.models.items():
- self.register(name, model)
-
-
-class SourceRegistry(Registry):
- def __init__(self, config=None):
- super().__init__(config, item_type=Source)
-
- def load(self, config):
- # TODO: list default dir, insert values
- if 'sources' in config:
- for name, source in config.sources.items():
- self.register(name, source)
-
-class PluginRegistry(Registry):
- def __init__(self, config=None, builtin=None, local=None):
- super().__init__(config)
-
- from datumaro.components.cli_plugin import CliPlugin
-
- if builtin is not None:
- for v in builtin:
- k = CliPlugin._get_name(v)
- self.register(k, v)
- if local is not None:
- for v in local:
- k = CliPlugin._get_name(v)
- self.register(k, v)
-
-class GitWrapper:
- def __init__(self, config=None):
- self.repo = None
-
- if config is not None and config.project_dir:
- self.init(config.project_dir)
-
- @staticmethod
- def _git_dir(base_path):
- return osp.join(base_path, '.git')
-
- @classmethod
- def spawn(cls, path):
- spawn = not osp.isdir(cls._git_dir(path))
- repo = git.Repo.init(path=path)
- if spawn:
- repo.config_writer().set_value("user", "name", "User") \
- .set_value("user", "email", "user@nowhere.com") \
- .release()
- # gitpython does not support init, use git directly
- repo.git.init()
- repo.git.commit('-m', 'Initial commit', '--allow-empty')
- return repo
-
- def init(self, path):
- self.repo = self.spawn(path)
- return self.repo
-
- def is_initialized(self):
- return self.repo is not None
-
- def create_submodule(self, name, dst_dir, **kwargs):
- self.repo.create_submodule(name, dst_dir, **kwargs)
-
- def has_submodule(self, name):
- return name in [submodule.name for submodule in self.repo.submodules]
-
- def remove_submodule(self, name, **kwargs):
- return self.repo.submodule(name).remove(**kwargs)
-
-def load_project_as_dataset(url):
- # symbol forward declaration
- raise NotImplementedError()
-
-class Environment:
- _builtin_plugins = None
- PROJECT_EXTRACTOR_NAME = 'datumaro_project'
-
- def __init__(self, config=None):
- config = Config(config,
- fallback=PROJECT_DEFAULT_CONFIG, schema=PROJECT_SCHEMA)
-
- self.models = ModelRegistry(config)
- self.sources = SourceRegistry(config)
-
- self.git = GitWrapper(config)
-
- env_dir = osp.join(config.project_dir, config.env_dir)
- builtin = self._load_builtin_plugins()
- custom = self._load_plugins2(osp.join(env_dir, config.plugins_dir))
- select = lambda seq, t: [e for e in seq if issubclass(e, t)]
- from datumaro.components.extractor import Transform
- from datumaro.components.extractor import SourceExtractor
- from datumaro.components.extractor import Importer
- from datumaro.components.converter import Converter
- from datumaro.components.launcher import Launcher
- self.extractors = PluginRegistry(
- builtin=select(builtin, SourceExtractor),
- local=select(custom, SourceExtractor)
- )
- self.extractors.register(self.PROJECT_EXTRACTOR_NAME,
- load_project_as_dataset)
-
- self.importers = PluginRegistry(
- builtin=select(builtin, Importer),
- local=select(custom, Importer)
- )
- self.launchers = PluginRegistry(
- builtin=select(builtin, Launcher),
- local=select(custom, Launcher)
- )
- self.converters = PluginRegistry(
- builtin=select(builtin, Converter),
- local=select(custom, Converter)
- )
- self.transforms = PluginRegistry(
- builtin=select(builtin, Transform),
- local=select(custom, Transform)
- )
-
- @staticmethod
- def _find_plugins(plugins_dir):
- plugins = []
- if not osp.exists(plugins_dir):
- return plugins
-
- for plugin_name in os.listdir(plugins_dir):
- p = osp.join(plugins_dir, plugin_name)
- if osp.isfile(p) and p.endswith('.py'):
- plugins.append((plugins_dir, plugin_name, None))
- elif osp.isdir(p):
- plugins += [(plugins_dir,
- osp.splitext(plugin_name)[0] + '.' + osp.basename(p),
- osp.splitext(plugin_name)[0]
- )
- for p in glob(osp.join(p, '*.py'))]
- return plugins
-
- @classmethod
- def _import_module(cls, module_dir, module_name, types, package=None):
- module = import_foreign_module(osp.splitext(module_name)[0], module_dir,
- package=package)
-
- exports = []
- if hasattr(module, 'exports'):
- exports = module.exports
- else:
- for symbol in dir(module):
- if symbol.startswith('_'):
- continue
- exports.append(getattr(module, symbol))
-
- exports = [s for s in exports
- if inspect.isclass(s) and issubclass(s, types) and not s in types]
-
- return exports
-
- @classmethod
- def _load_plugins(cls, plugins_dir, types):
- types = tuple(types)
-
- plugins = cls._find_plugins(plugins_dir)
-
- all_exports = []
- for module_dir, module_name, package in plugins:
- try:
- exports = cls._import_module(module_dir, module_name, types,
- package)
- except Exception as e:
- module_search_error = ImportError
- try:
- module_search_error = ModuleNotFoundError # python 3.6+
- except NameError:
- pass
-
- message = ["Failed to import module '%s': %s", module_name, e]
- if isinstance(e, module_search_error):
- log.debug(*message)
- else:
- log.warning(*message)
- continue
-
- log.debug("Imported the following symbols from %s: %s" % \
- (
- module_name,
- ', '.join(s.__name__ for s in exports)
- )
- )
- all_exports.extend(exports)
-
- return all_exports
-
- @classmethod
- def _load_builtin_plugins(cls):
- if not cls._builtin_plugins:
- plugins_dir = osp.join(
- __file__[: __file__.rfind(osp.join('datumaro', 'components'))],
- osp.join('datumaro', 'plugins')
- )
- assert osp.isdir(plugins_dir), plugins_dir
- cls._builtin_plugins = cls._load_plugins2(plugins_dir)
- return cls._builtin_plugins
-
- @classmethod
- def _load_plugins2(cls, plugins_dir):
- from datumaro.components.extractor import Transform
- from datumaro.components.extractor import SourceExtractor
- from datumaro.components.extractor import Importer
- from datumaro.components.converter import Converter
- from datumaro.components.launcher import Launcher
- types = [SourceExtractor, Converter, Importer, Launcher, Transform]
-
- return cls._load_plugins(plugins_dir, types)
-
- def make_extractor(self, name, *args, **kwargs):
- return self.extractors.get(name)(*args, **kwargs)
-
- def make_importer(self, name, *args, **kwargs):
- return self.importers.get(name)(*args, **kwargs)
-
- def make_launcher(self, name, *args, **kwargs):
- return self.launchers.get(name)(*args, **kwargs)
-
- def make_converter(self, name, *args, **kwargs):
- return self.converters.get(name)(*args, **kwargs)
-
- def register_model(self, name, model):
- self.models.register(name, model)
-
- def unregister_model(self, name):
- self.models.unregister(name)
-
class ProjectDataset(Dataset):
def __init__(self, project):
super().__init__()
@@ -662,8 +380,5 @@ def local_model_dir(self, model_name):
def local_source_dir(self, source_name):
return osp.join(self.config.sources_dir, source_name)
-# pylint: disable=function-redefined
def load_project_as_dataset(url):
- # implement the function declared above
return Project.load(url).make_dataset()
-# pylint: enable=function-redefined
diff --git a/datumaro/util/__init__.py b/datumaro/util/__init__.py
index 0a75756bd2..b7e56890ae 100644
--- a/datumaro/util/__init__.py
+++ b/datumaro/util/__init__.py
@@ -3,8 +3,11 @@
#
# SPDX-License-Identifier: MIT
+import attr
import os
import os.path as osp
+from contextlib import ExitStack
+from functools import partial, wraps
from itertools import islice
@@ -90,4 +93,105 @@ def str_to_bool(s):
raise ValueError("Can't convert value '%s' to bool" % s)
def filter_dict(d, exclude_keys):
- return { k: v for k, v in d.items() if k not in exclude_keys }
\ No newline at end of file
+ return { k: v for k, v in d.items() if k not in exclude_keys }
+
+def optional_arg_decorator(fn):
+ @wraps(fn)
+ def wrapped_decorator(*args, **kwargs):
+ if len(args) == 1 and callable(args[0]) and not kwargs:
+ return fn(args[0], **kwargs)
+
+ else:
+ def real_decorator(decoratee):
+ return fn(decoratee, *args, **kwargs)
+
+ return real_decorator
+
+ return wrapped_decorator
+
+class Rollback:
+ @attr.attrs
+ class Handler:
+ callback = attr.attrib()
+ enabled = attr.attrib(default=True)
+ ignore_errors = attr.attrib(default=False)
+
+ def __call__(self):
+ if self.enabled:
+ try:
+ self.callback()
+ except: # pylint: disable=bare-except
+ if not self.ignore_errors:
+ raise
+
+ def __init__(self):
+ self._handlers = {}
+ self._stack = ExitStack()
+ self.enabled = True
+
+ def add(self, callback, *args,
+ name=None, enabled=True, ignore_errors=False,
+ fwd_kwargs=None, **kwargs):
+ if args or kwargs or fwd_kwargs:
+ if fwd_kwargs:
+ kwargs.update(fwd_kwargs)
+ callback = partial(callback, *args, **kwargs)
+ name = name or hash(callback)
+ assert name not in self._handlers
+ handler = self.Handler(callback,
+ enabled=enabled, ignore_errors=ignore_errors)
+ self._handlers[name] = handler
+ self._stack.callback(handler)
+ return name
+
+ do = add # readability alias
+
+ def enable(self, name=None):
+ if name:
+ self._handlers[name].enabled = True
+ else:
+ self.enabled = True
+
+ def disable(self, name=None):
+ if name:
+ self._handlers[name].enabled = False
+ else:
+ self.enabled = False
+
+ def clean(self):
+ self.__exit__(None, None, None)
+
+ def __enter__(self):
+ return self
+
+ # pylint: disable=redefined-builtin
+ def __exit__(self, type=None, value=None, traceback=None):
+ if type is None:
+ return
+ if not self.enabled:
+ return
+ self._stack.__exit__(type, value, traceback)
+ # pylint: enable=redefined-builtin
+
+@optional_arg_decorator
+def error_rollback(func, arg_name='on_error', implicit=False):
+ @wraps(func)
+ def wrapped_func(*args, **kwargs):
+ with Rollback() as manager:
+ if implicit:
+ fglobals = func.__globals__
+
+ has_arg = arg_name in fglobals
+ old_val = fglobals.get(arg_name)
+ fglobals[arg_name] = manager
+ try:
+ func(*args, **kwargs)
+ finally:
+ if has_arg:
+ func.__globals__[arg_name] = old_val
+ else:
+ func.__globals__.pop(arg_name)
+ else:
+ kwargs[arg_name] = manager
+ func(*args, **kwargs)
+ return wrapped_func
diff --git a/datumaro/util/os_util.py b/datumaro/util/os_util.py
index b4d05e376d..4c60c41fcc 100644
--- a/datumaro/util/os_util.py
+++ b/datumaro/util/os_util.py
@@ -1,9 +1,12 @@
-# Copyright (C) 2020 Intel Corporation
+# Copyright (C) 2020 - 2021 Intel Corporation
#
# SPDX-License-Identifier: MIT
+import importlib
+import os.path as osp
import subprocess
+import sys
def check_instruction_set(instruction):
@@ -14,4 +17,18 @@ def check_instruction_set(instruction):
subprocess.check_output(
'lscpu | grep -o "%s" | head -1' % instruction,
shell=True).decode('utf-8') # nosec
- )
\ No newline at end of file
+ )
+
+def import_foreign_module(name, path, package=None):
+ module = None
+ default_path = sys.path.copy()
+ try:
+ sys.path = [ osp.abspath(path), ] + default_path
+ sys.modules.pop(name, None) # remove from cache
+ module = importlib.import_module(name, package=package)
+ sys.modules.pop(name) # remove from cache
+ except Exception:
+ raise
+ finally:
+ sys.path = default_path
+ return module
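The `import_foreign_module` helper added above temporarily rewires `sys.path` and bypasses the module cache; a minimal self-contained sketch of the same technique (simplified — the helper name matches the patch, but this is not the datumaro code, and the `plugin_mod` module is invented for the demo):

```python
import importlib
import os.path as osp
import sys
import tempfile

def import_foreign_module(name, path):
    # Prepend the foreign directory, skip the module cache, and always
    # restore sys.path afterwards (even if the import fails).
    default_path = sys.path.copy()
    try:
        sys.path = [osp.abspath(path)] + default_path
        sys.modules.pop(name, None)  # force a fresh import
        module = importlib.import_module(name)
        sys.modules.pop(name, None)  # do not leave it cached
    finally:
        sys.path = default_path
    return module

with tempfile.TemporaryDirectory() as tmp:
    with open(osp.join(tmp, "plugin_mod.py"), "w") as f:
        f.write("ANSWER = 42\n")
    mod = import_foreign_module("plugin_mod", tmp)

print(mod.ANSWER)  # 42
```

Restoring `sys.path` in a `finally` block rather than after the import keeps interpreter state clean on failure, which is why the patch wraps its body the same way.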
| diff --git a/tests/test_util.py b/tests/test_util.py
new file mode 100644
index 0000000000..de5470092e
--- /dev/null
+++ b/tests/test_util.py
@@ -0,0 +1,106 @@
+from unittest import TestCase
+
+from datumaro.util import Rollback, error_rollback
+
+
+class TestRollback(TestCase):
+ def test_does_not_call_on_no_error(self):
+ success = True
+ def cb():
+ nonlocal success
+ success = False
+
+ with Rollback() as on_error:
+ on_error.do(cb)
+
+ self.assertTrue(success)
+
+ def test_calls_on_error(self):
+ success = False
+ def cb():
+ nonlocal success
+ success = True
+
+ try:
+ with Rollback() as on_error:
+ on_error.do(cb)
+ raise Exception('err')
+ except Exception:
+ pass
+ finally:
+ self.assertTrue(success)
+
+ def test_decorator_calls_on_error(self):
+ success = False
+ def cb():
+ nonlocal success
+ success = True
+
+ @error_rollback('on_error')
+ def foo(on_error=None):
+ on_error.do(cb)
+ raise Exception('err')
+
+ try:
+ foo()
+ except Exception:
+ pass
+ finally:
+ self.assertTrue(success)
+
+ def test_decorator_does_not_call_on_no_error(self):
+ success = True
+ def cb():
+ nonlocal success
+ success = False
+
+ @error_rollback('on_error')
+ def foo(on_error=None):
+ on_error.do(cb)
+
+ foo()
+
+ self.assertTrue(success)
+
+ def test_decorator_supports_implicit_arg(self):
+ success = False
+ def cb():
+ nonlocal success
+ success = True
+
+ @error_rollback('on_error', implicit=True)
+ def foo():
+ on_error.do(cb) # noqa: F821
+ raise Exception('err')
+
+ try:
+ foo()
+ except Exception:
+ pass
+ finally:
+ self.assertTrue(success)
+
+ def test_can_fowrard_args(self):
+ success1 = False
+ def cb1(a1, a2=None, ignore_errors=None):
+ nonlocal success1
+ if a1 == 5 and a2 == 2 and ignore_errors == None:
+ success1 = True
+
+ success2 = False
+ def cb2(a1, a2=None, ignore_errors=None):
+ nonlocal success2
+ if a1 == 5 and a2 == 2 and ignore_errors == 4:
+ success2 = True
+
+ try:
+ with Rollback() as on_error:
+ on_error.do(cb1, 5, a2=2, ignore_errors=True)
+ on_error.do(cb2, 5, a2=2, ignore_errors=True,
+ fwd_kwargs={'ignore_errors': 4})
+ raise Exception('err')
+ except Exception:
+ pass
+ finally:
+ self.assertTrue(success1)
+ self.assertTrue(success2)
| diff --git a/CHANGELOG.md b/CHANGELOG.md
index 21c110c5c7..7040309353 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -23,7 +23,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
-
### Fixed
--
+- Allow plugins inherited from `Extractor` (instead of only `SourceExtractor`) (<https://github.com/openvinotoolkit/datumaro/pull/70>)
### Security
-
| [
{
"components": [
{
"doc": "",
"lines": [
18,
47
],
"name": "Registry",
"signature": "class Registry:",
"type": "class"
},
{
"doc": "",
"lines": [
19,
25
],
"name": "Registry... | [
"tests/test_util.py::TestRollback::test_calls_on_error",
"tests/test_util.py::TestRollback::test_can_fowrard_args",
"tests/test_util.py::TestRollback::test_decorator_calls_on_error",
"tests/test_util.py::TestRollback::test_decorator_does_not_call_on_no_error",
"tests/test_util.py::TestRollback::test_decorat... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Refactor Environment class
<!-- Contributing guide: https://github.com/openvinotoolkit/datumaro/blob/develop/CONTRIBUTING.md -->
### Summary
<!--
Resolves #111 and #222.
Depends on #1000 (for series of dependent commits).
This PR introduces this capability to make the project better in this and that.
- Added this feature
- Removed that feature
- Fixed the problem #1234
-->
- Added error `Rollback` function decorator and context manager
- Allow `Extractor` plugins (instead of only `SourceExtractor`)
- `make_converter` returns a callable directly instead of a class. Use `.get` and `[]` to access classes.
- add `make_transform` similar to `make_converter`
### How to test
<!-- Describe the testing procedure for reviewers, if changes are
not fully covered by unit tests or manual testing can be complicated. -->
### Checklist
<!-- Put an 'x' in all the boxes that apply -->
- [x] I submit my changes into the `develop` branch
- [ ] I have added description of my changes into [CHANGELOG](https://github.com/openvinotoolkit/datumaro/blob/develop/CHANGELOG.md)
- [ ] I have updated the [documentation](
https://github.com/openvinotoolkit/datumaro/tree/develop/docs) accordingly
- [x] I have added tests to cover my changes
- [ ] I have [linked related issues](
https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword))
### License
- [x] I submit _my code changes_ under the same [MIT License](
https://github.com/opencv/cvat/blob/develop/LICENSE) that covers the project.
Feel free to contact the maintainers if that's a concern.
- [x] I have updated the license header for each file (see an example below)
```python
# Copyright (C) 2020 Intel Corporation
#
# SPDX-License-Identifier: MIT
```
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in datumaro/components/environment.py]
(definition of Registry:)
class Registry:
(definition of Registry.__init__:)
def __init__(self, config=None, item_type=None):
(definition of Registry.load:)
def load(self, config):
(definition of Registry.register:)
def register(self, name, value):
(definition of Registry.unregister:)
def unregister(self, name):
(definition of Registry.get:)
def get(self, key):
"""Returns a class or a factory function"""
(definition of Registry.__getitem__:)
def __getitem__(self, key):
(definition of Registry.__contains__:)
def __contains__(self, key):
(definition of ModelRegistry:)
class ModelRegistry(Registry):
(definition of ModelRegistry.__init__:)
def __init__(self, config=None):
(definition of ModelRegistry.load:)
def load(self, config):
(definition of SourceRegistry:)
class SourceRegistry(Registry):
(definition of SourceRegistry.__init__:)
def __init__(self, config=None):
(definition of SourceRegistry.load:)
def load(self, config):
(definition of PluginRegistry:)
class PluginRegistry(Registry):
(definition of PluginRegistry.__init__:)
def __init__(self, config=None, builtin=None, local=None):
(definition of GitWrapper:)
class GitWrapper:
(definition of GitWrapper.__init__:)
def __init__(self, config=None):
(definition of GitWrapper._git_dir:)
def _git_dir(base_path):
(definition of GitWrapper.spawn:)
def spawn(cls, path):
(definition of GitWrapper.init:)
def init(self, path):
(definition of GitWrapper.is_initialized:)
def is_initialized(self):
(definition of GitWrapper.create_submodule:)
def create_submodule(self, name, dst_dir, **kwargs):
(definition of GitWrapper.has_submodule:)
def has_submodule(self, name):
(definition of GitWrapper.remove_submodule:)
def remove_submodule(self, name, **kwargs):
(definition of Environment:)
class Environment:
(definition of Environment.__init__:)
def __init__(self, config=None):
(definition of Environment._find_plugins:)
def _find_plugins(plugins_dir):
(definition of Environment._import_module:)
def _import_module(cls, module_dir, module_name, types, package=None):
(definition of Environment._load_plugins:)
def _load_plugins(cls, plugins_dir, types):
(definition of Environment._load_builtin_plugins:)
def _load_builtin_plugins(cls):
(definition of Environment._load_plugins2:)
def _load_plugins2(cls, plugins_dir):
(definition of Environment.make_extractor:)
def make_extractor(self, name, *args, **kwargs):
(definition of Environment.make_importer:)
def make_importer(self, name, *args, **kwargs):
(definition of Environment.make_launcher:)
def make_launcher(self, name, *args, **kwargs):
(definition of Environment.make_converter:)
def make_converter(self, name, *args, **kwargs):
(definition of Environment.make_transform:)
def make_transform(self, name, *args, **kwargs):
(definition of Environment.register_model:)
def register_model(self, name, model):
(definition of Environment.unregister_model:)
def unregister_model(self, name):
[end of new definitions in datumaro/components/environment.py]
[start of new definitions in datumaro/util/__init__.py]
(definition of optional_arg_decorator:)
def optional_arg_decorator(fn): @wraps(fn)
(definition of optional_arg_decorator.wrapped_decorator:)
def wrapped_decorator(*args, **kwargs):
(definition of optional_arg_decorator.wrapped_decorator.real_decorator:)
def real_decorator(decoratee):
(definition of Rollback:)
class Rollback: @attr.attrs
(definition of Rollback.Handler:)
class Handler:
(definition of Rollback.Handler.__call__:)
def __call__(self):
(definition of Rollback.__init__:)
def __init__(self):
(definition of Rollback.add:)
def add(self, callback, *args, name=None, enabled=True, ignore_errors=False, fwd_kwargs=None, **kwargs):
(definition of Rollback.enable:)
def enable(self, name=None):
(definition of Rollback.disable:)
def disable(self, name=None):
(definition of Rollback.clean:)
def clean(self):
(definition of Rollback.__enter__:)
def __enter__(self):
(definition of Rollback.__exit__:)
def __exit__(self, type=None, value=None, traceback=None):
(definition of error_rollback:)
def error_rollback(func, arg_name='on_error', implicit=False): @wraps(func)
(definition of error_rollback.wrapped_func:)
def wrapped_func(*args, **kwargs):
[end of new definitions in datumaro/util/__init__.py]
[start of new definitions in datumaro/util/os_util.py]
(definition of import_foreign_module:)
def import_foreign_module(name, path, package=None):
[end of new definitions in datumaro/util/os_util.py]
</definitions>
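To make the `Rollback` contract in the definitions above concrete, here is a minimal self-contained sketch (the names follow the definitions, but this is a simplified stand-in, not the requested datumaro implementation): handlers registered with `do`/`add` run only when the `with` block exits with an error, in LIFO order.

```python
class Rollback:
    def __init__(self):
        self._callbacks = []

    def add(self, callback, *args, **kwargs):
        # Bind arguments now; run the callback later only if an error occurs
        self._callbacks.append(lambda: callback(*args, **kwargs))

    do = add  # readability alias, as in the requested API

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is not None:
            # Undo in reverse (LIFO) order, mirroring contextlib.ExitStack
            for cb in reversed(self._callbacks):
                cb()


calls = []
try:
    with Rollback() as on_error:
        on_error.do(calls.append, "undo step 1")
        on_error.do(calls.append, "undo step 2")
        raise RuntimeError("boom")
except RuntimeError:
    pass
print(calls)  # ['undo step 2', 'undo step 1'] — handlers ran, last first
```

The real implementation layers handler naming, per-handler enable/disable and `ignore_errors` on top of `contextlib.ExitStack`; the error-only unwinding shown here is the part the new tests exercise.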
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 9478b9a2fe38229cf679defa9cc9c7c4a5682f7e | |
DLR-RM__stable-baselines3-277 | 277 | DLR-RM/stable-baselines3 | null | 06498e8be71b9c8aee38226176dbd28443afbb4f | 2020-12-28T16:54:04Z | diff --git a/docs/guide/tensorboard.rst b/docs/guide/tensorboard.rst
index cf5c4cf21..788ccc158 100644
--- a/docs/guide/tensorboard.rst
+++ b/docs/guide/tensorboard.rst
@@ -81,6 +81,78 @@ Here is a simple example on how to log both additional tensor or arbitrary scala
model.learn(50000, callback=TensorboardCallback())
+Logging Images
+--------------
+
+TensorBoard supports periodic logging of image data, which helps evaluating agents at various stages during training.
+
+.. warning::
+ To support image logging `pillow <https://github.com/python-pillow/Pillow>`_ must be installed otherwise, TensorBoard ignores the image and logs a warning.
+
+Here is an example of how to render an image to TensorBoard at regular intervals:
+
+.. code-block:: python
+
+ from stable_baselines3 import SAC
+ from stable_baselines3.common.callbacks import BaseCallback
+ from stable_baselines3.common.logger import Image
+
+ model = SAC("MlpPolicy", "Pendulum-v0", tensorboard_log="/tmp/sac/", verbose=1)
+
+
+ class ImageRecorderCallback(BaseCallback):
+ def __init__(self, verbose=0):
+ super(ImageRecorderCallback, self).__init__(verbose)
+
+ def _on_step(self):
+ image = self.training_env.render(mode="rgb_array")
+ # "HWC" specify the dataformat of the image, here channel last
+ # (H for height, W for width, C for channel)
+ # See https://pytorch.org/docs/stable/tensorboard.html
+ # for supported formats
+ self.logger.record("trajectory/image", Image(image, "HWC"), exclude=("stdout", "log", "json", "csv"))
+ return True
+
+
+ model.learn(50000, callback=ImageRecorderCallback())
+
+Logging Figures/Plots
+---------------------
+TensorBoard supports periodic logging of figures/plots created with matplotlib, which helps evaluating agents at various stages during training.
+
+.. warning::
+ To support figure logging `matplotlib <https://matplotlib.org/>`_ must be installed otherwise, TensorBoard ignores the figure and logs a warning.
+
+Here is an example of how to store a plot in TensorBoard at regular intervals:
+
+.. code-block:: python
+
+ import numpy as np
+ import matplotlib.pyplot as plt
+
+ from stable_baselines3 import SAC
+ from stable_baselines3.common.callbacks import BaseCallback
+ from stable_baselines3.common.logger import Figure
+
+ model = SAC("MlpPolicy", "Pendulum-v0", tensorboard_log="/tmp/sac/", verbose=1)
+
+
+ class FigureRecorderCallback(BaseCallback):
+ def __init__(self, verbose=0):
+ super(FigureRecorderCallback, self).__init__(verbose)
+
+ def _on_step(self):
+ # Plot values (here a random variable)
+ figure = plt.figure()
+ figure.add_subplot().plot(np.random.random(3))
+ # Close the figure after logging it
+ self.logger.record("trajectory/figure", Figure(figure, close=True), exclude=("stdout", "log", "json", "csv"))
+ plt.close()
+ return True
+
+
+ model.learn(50000, callback=FigureRecorderCallback())
+
Logging Videos
--------------
diff --git a/docs/misc/changelog.rst b/docs/misc/changelog.rst
index 726c9ad3d..401f41469 100644
--- a/docs/misc/changelog.rst
+++ b/docs/misc/changelog.rst
@@ -3,7 +3,7 @@
Changelog
==========
-Pre-Release 0.11.0a4 (WIP)
+Pre-Release 0.11.0a5 (WIP)
-------------------------------
Breaking Changes:
@@ -26,6 +26,7 @@ New Features:
- Added ``monitor_kwargs`` parameter to ``make_vec_env`` and ``make_atari_env``
- Wrap the environments automatically with a ``Monitor`` wrapper when possible.
- ``EvalCallback`` now logs the success rate when available (``is_success`` must be present in the info dict)
+- Added new wrappers to log images and matplotlib figures to tensorboard. (@zampanteymedio)
Bug Fixes:
^^^^^^^^^^
@@ -534,4 +535,4 @@ And all the contributors:
@flodorner @KuKuXia @NeoExtended @PartiallyTyped @mmcenta @richardwu @kinalmehta @rolandgvc @tkelestemur @mloo3
@tirafesi @blurLake @koulakis @joeljosephjin @shwang @rk37 @andyshih12 @RaphaelWag @xicocaio
@diditforlulz273 @liorcohen5 @ManifoldFR @mloo3 @SwamyDev @wmmc88 @megan-klaiber @thisray
-@tfederico @hn2 @LucasAlegre @AptX395
+@tfederico @hn2 @LucasAlegre @AptX395 @zampanteymedio
diff --git a/stable_baselines3/common/logger.py b/stable_baselines3/common/logger.py
index 391c62e77..10283ec41 100644
--- a/stable_baselines3/common/logger.py
+++ b/stable_baselines3/common/logger.py
@@ -10,6 +10,7 @@
import numpy as np
import pandas
import torch as th
+from matplotlib import pyplot as plt
try:
from torch.utils.tensorboard import SummaryWriter
@@ -26,6 +27,9 @@
class Video(object):
"""
Video data class storing the video frames and the frame per seconds
+
+ :param frames: frames to create the video from
+ :param fps: frames per second
"""
def __init__(self, frames: th.Tensor, fps: Union[float, int]):
@@ -33,6 +37,34 @@ def __init__(self, frames: th.Tensor, fps: Union[float, int]):
self.fps = fps
+class Figure(object):
+ """
+ Figure data class storing a matplotlib figure and whether to close the figure after logging it
+
+ :param figure: figure to log
+ :param close: if true, close the figure after logging it
+ """
+
+ def __init__(self, figure: plt.figure, close: bool):
+ self.figure = figure
+ self.close = close
+
+
+class Image(object):
+ """
+ Image data class storing an image and data format
+
+ :param image: image to log
+ :param dataformats: Image data format specification of the form NCHW, NHWC, CHW, HWC, HW, WH, etc.
+ More info in add_image method doc at https://pytorch.org/docs/stable/tensorboard.html
+ Gym envs normally use 'HWC' (channel last)
+ """
+
+ def __init__(self, image: Union[th.Tensor, np.ndarray, str], dataformats: str):
+ self.image = image
+ self.dataformats = dataformats
+
+
class FormatUnsupportedError(NotImplementedError):
def __init__(self, unsupported_formats: Sequence[str], value_description: str):
if len(unsupported_formats) > 1:
@@ -108,6 +140,12 @@ def write(self, key_values: Dict, key_excluded: Dict, step: int = 0) -> None:
if isinstance(value, Video):
raise FormatUnsupportedError(["stdout", "log"], "video")
+ if isinstance(value, Figure):
+ raise FormatUnsupportedError(["stdout", "log"], "figure")
+
+ if isinstance(value, Image):
+ raise FormatUnsupportedError(["stdout", "log"], "image")
+
if isinstance(value, float):
# Align left
value_str = f"{value:<8.3g}"
@@ -196,6 +234,10 @@ def write(self, key_values: Dict[str, Any], key_excluded: Dict[str, Union[str, T
def cast_to_json_serializable(value: Any):
if isinstance(value, Video):
raise FormatUnsupportedError(["json"], "video")
+ if isinstance(value, Figure):
+ raise FormatUnsupportedError(["json"], "figure")
+ if isinstance(value, Image):
+ raise FormatUnsupportedError(["json"], "image")
if hasattr(value, "dtype"):
if value.shape == () or len(value) == 1:
# if value is a dimensionless numpy array or of length 1, serialize as a float
@@ -258,6 +300,12 @@ def write(self, key_values: Dict[str, Any], key_excluded: Dict[str, Union[str, T
if isinstance(value, Video):
raise FormatUnsupportedError(["csv"], "video")
+ if isinstance(value, Figure):
+ raise FormatUnsupportedError(["csv"], "figure")
+
+ if isinstance(value, Image):
+ raise FormatUnsupportedError(["csv"], "image")
+
if value is not None:
self.file.write(str(value))
self.file.write("\n")
@@ -296,6 +344,12 @@ def write(self, key_values: Dict[str, Any], key_excluded: Dict[str, Union[str, T
if isinstance(value, Video):
self.writer.add_video(key, value.frames, step, value.fps)
+ if isinstance(value, Figure):
+ self.writer.add_figure(key, value.figure, step, close=value.close)
+
+ if isinstance(value, Image):
+ self.writer.add_image(key, value.image, step, dataformats=value.dataformats)
+
# Flush the output to the file
self.writer.flush()
diff --git a/stable_baselines3/version.txt b/stable_baselines3/version.txt
index 1b742ef03..f23fba9af 100644
--- a/stable_baselines3/version.txt
+++ b/stable_baselines3/version.txt
@@ -1,1 +1,1 @@
-0.11.0a4
+0.11.0a5
| diff --git a/tests/test_logger.py b/tests/test_logger.py
index 98f832d8f..c1cce8559 100644
--- a/tests/test_logger.py
+++ b/tests/test_logger.py
@@ -3,11 +3,14 @@
import numpy as np
import pytest
import torch as th
+from matplotlib import pyplot as plt
from pandas.errors import EmptyDataError
from stable_baselines3.common.logger import (
DEBUG,
+ Figure,
FormatUnsupportedError,
+ Image,
ScopedConfigure,
Video,
configure,
@@ -198,3 +201,51 @@ def test_report_video_to_unsupported_format_raises_error(tmp_path, unsupported_f
writer.write({"video": video}, key_excluded={"video": ()})
assert unsupported_format in str(exec_info.value)
writer.close()
+
+
+def test_report_image_to_tensorboard(tmp_path, read_log):
+ pytest.importorskip("tensorboard")
+
+ image = Image(image=th.rand(16, 16, 3), dataformats="HWC")
+ writer = make_output_format("tensorboard", tmp_path)
+ writer.write({"image": image}, key_excluded={"image": ()})
+
+ assert not read_log("tensorboard").empty
+ writer.close()
+
+
+@pytest.mark.parametrize("unsupported_format", ["stdout", "log", "json", "csv"])
+def test_report_image_to_unsupported_format_raises_error(tmp_path, unsupported_format):
+ writer = make_output_format(unsupported_format, tmp_path)
+
+ with pytest.raises(FormatUnsupportedError) as exec_info:
+ image = Image(image=th.rand(16, 16, 3), dataformats="HWC")
+ writer.write({"image": image}, key_excluded={"image": ()})
+ assert unsupported_format in str(exec_info.value)
+ writer.close()
+
+
+def test_report_figure_to_tensorboard(tmp_path, read_log):
+ pytest.importorskip("tensorboard")
+
+ fig = plt.figure()
+ fig.add_subplot().plot(np.random.random(3))
+ figure = Figure(figure=fig, close=True)
+ writer = make_output_format("tensorboard", tmp_path)
+ writer.write({"figure": figure}, key_excluded={"figure": ()})
+
+ assert not read_log("tensorboard").empty
+ writer.close()
+
+
+@pytest.mark.parametrize("unsupported_format", ["stdout", "log", "json", "csv"])
+def test_report_figure_to_unsupported_format_raises_error(tmp_path, unsupported_format):
+ writer = make_output_format(unsupported_format, tmp_path)
+
+ with pytest.raises(FormatUnsupportedError) as exec_info:
+ fig = plt.figure()
+ fig.add_subplot().plot(np.random.random(3))
+ figure = Figure(figure=fig, close=True)
+ writer.write({"figure": figure}, key_excluded={"figure": ()})
+ assert unsupported_format in str(exec_info.value)
+ writer.close()
| diff --git a/docs/guide/tensorboard.rst b/docs/guide/tensorboard.rst
index cf5c4cf21..788ccc158 100644
--- a/docs/guide/tensorboard.rst
+++ b/docs/guide/tensorboard.rst
@@ -81,6 +81,78 @@ Here is a simple example on how to log both additional tensor or arbitrary scala
model.learn(50000, callback=TensorboardCallback())
+Logging Images
+--------------
+
+TensorBoard supports periodic logging of image data, which helps evaluating agents at various stages during training.
+
+.. warning::
+ To support image logging `pillow <https://github.com/python-pillow/Pillow>`_ must be installed otherwise, TensorBoard ignores the image and logs a warning.
+
+Here is an example of how to render an image to TensorBoard at regular intervals:
+
+.. code-block:: python
+
+ from stable_baselines3 import SAC
+ from stable_baselines3.common.callbacks import BaseCallback
+ from stable_baselines3.common.logger import Image
+
+ model = SAC("MlpPolicy", "Pendulum-v0", tensorboard_log="/tmp/sac/", verbose=1)
+
+
+ class ImageRecorderCallback(BaseCallback):
+ def __init__(self, verbose=0):
+ super(ImageRecorderCallback, self).__init__(verbose)
+
+ def _on_step(self):
+ image = self.training_env.render(mode="rgb_array")
+ # "HWC" specify the dataformat of the image, here channel last
+ # (H for height, W for width, C for channel)
+ # See https://pytorch.org/docs/stable/tensorboard.html
+ # for supported formats
+ self.logger.record("trajectory/image", Image(image, "HWC"), exclude=("stdout", "log", "json", "csv"))
+ return True
+
+
+ model.learn(50000, callback=ImageRecorderCallback())
+
+Logging Figures/Plots
+---------------------
+TensorBoard supports periodic logging of figures/plots created with matplotlib, which helps evaluating agents at various stages during training.
+
+.. warning::
+ To support figure logging `matplotlib <https://matplotlib.org/>`_ must be installed otherwise, TensorBoard ignores the figure and logs a warning.
+
+Here is an example of how to store a plot in TensorBoard at regular intervals:
+
+.. code-block:: python
+
+ import numpy as np
+ import matplotlib.pyplot as plt
+
+ from stable_baselines3 import SAC
+ from stable_baselines3.common.callbacks import BaseCallback
+ from stable_baselines3.common.logger import Figure
+
+ model = SAC("MlpPolicy", "Pendulum-v0", tensorboard_log="/tmp/sac/", verbose=1)
+
+
+ class FigureRecorderCallback(BaseCallback):
+ def __init__(self, verbose=0):
+ super(FigureRecorderCallback, self).__init__(verbose)
+
+ def _on_step(self):
+ # Plot values (here a random variable)
+ figure = plt.figure()
+ figure.add_subplot().plot(np.random.random(3))
+ # Close the figure after logging it
+ self.logger.record("trajectory/figure", Figure(figure, close=True), exclude=("stdout", "log", "json", "csv"))
+ plt.close()
+ return True
+
+
+ model.learn(50000, callback=FigureRecorderCallback())
+
Logging Videos
--------------
diff --git a/docs/misc/changelog.rst b/docs/misc/changelog.rst
index 726c9ad3d..401f41469 100644
--- a/docs/misc/changelog.rst
+++ b/docs/misc/changelog.rst
@@ -3,7 +3,7 @@
Changelog
==========
-Pre-Release 0.11.0a4 (WIP)
+Pre-Release 0.11.0a5 (WIP)
-------------------------------
Breaking Changes:
@@ -26,6 +26,7 @@ New Features:
- Added ``monitor_kwargs`` parameter to ``make_vec_env`` and ``make_atari_env``
- Wrap the environments automatically with a ``Monitor`` wrapper when possible.
- ``EvalCallback`` now logs the success rate when available (``is_success`` must be present in the info dict)
+- Added new wrappers to log images and matplotlib figures to tensorboard. (@zampanteymedio)
Bug Fixes:
^^^^^^^^^^
@@ -534,4 +535,4 @@ And all the contributors:
@flodorner @KuKuXia @NeoExtended @PartiallyTyped @mmcenta @richardwu @kinalmehta @rolandgvc @tkelestemur @mloo3
@tirafesi @blurLake @koulakis @joeljosephjin @shwang @rk37 @andyshih12 @RaphaelWag @xicocaio
@diditforlulz273 @liorcohen5 @ManifoldFR @mloo3 @SwamyDev @wmmc88 @megan-klaiber @thisray
-@tfederico @hn2 @LucasAlegre @AptX395
+@tfederico @hn2 @LucasAlegre @AptX395 @zampanteymedio
diff --git a/stable_baselines3/version.txt b/stable_baselines3/version.txt
index 1b742ef03..f23fba9af 100644
--- a/stable_baselines3/version.txt
+++ b/stable_baselines3/version.txt
@@ -1,1 +1,1 @@
-0.11.0a4
+0.11.0a5
| [
{
"components": [
{
"doc": "Figure data class storing a matplotlib figure and whether to close the figure after logging it\n\n:param figure: figure to log\n:param close: if true, close the figure after logging it",
"lines": [
40,
50
],
"name": "Figure",
... | [
"tests/test_logger.py::test_main",
"tests/test_logger.py::test_make_output[stdout]",
"tests/test_logger.py::test_make_output[log]",
"tests/test_logger.py::test_make_output[json]",
"tests/test_logger.py::test_make_output[csv]",
"tests/test_logger.py::test_make_output_fail",
"tests/test_logger.py::test_ex... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add image and figure to tensorboard logger
## Description
Addition of Image and Figure as wrappers for images and maplotlib's figures respectively. These wrappers can be logged (only Tensorboard's implementation actually does anything with them, others raise Exceptions).
## Motivation and Context
I needed to plot the evolution of some parameters at the end of every episode, and logging them through Tensorboard seemed like the natural option.
closes #266
- [x] I have raised an issue to propose this change ([required](https://github.com/DLR-RM/stable-baselines3/blob/master/CONTRIBUTING.md) for new features and bug fixes)
https://github.com/DLR-RM/stable-baselines3/issues/266
## Types of changes
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [x] Documentation (update in the documentation)
## Checklist:
- [x] I've read the [CONTRIBUTION](https://github.com/DLR-RM/stable-baselines3/blob/master/CONTRIBUTING.md) guide (**required**)
- [x] I have updated the changelog accordingly (**required**).
- [x] My change requires a change to the documentation.
- [x] I have updated the tests accordingly (*required for a bug fix or a new feature*).
- [x] I have updated the documentation accordingly.
- [ ] I have reformatted the code using `make format` (**required**)
Due to security reasons, I can't install make on my laptop :-(
- [x] I have checked the codestyle using `make check-codestyle` and `make lint` (**required**)
Run by PR checks.
- [x] I have ensured `make pytest` and `make type` both pass. (**required**)
Run by PR checks.
- [x] I have checked that the documentation builds using `make doc` (**required**)
Run by PR checks.
Note: You can run most of the checks using `make commit-checks`.
Note: we are using a maximum length of 127 characters per line
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in stable_baselines3/common/logger.py]
(definition of Figure:)
class Figure(object):
"""Figure data class storing a matplotlib figure and whether to close the figure after logging it
:param figure: figure to log
:param close: if true, close the figure after logging it"""
(definition of Figure.__init__:)
def __init__(self, figure: plt.figure, close: bool):
(definition of Image:)
class Image(object):
"""Image data class storing an image and data format
:param image: image to log
:param dataformats: Image data format specification of the form NCHW, NHWC, CHW, HWC, HW, WH, etc.
More info in add_image method doc at https://pytorch.org/docs/stable/tensorboard.html
Gym envs normally use 'HWC' (channel last)"""
(definition of Image.__init__:)
def __init__(self, image: Union[th.Tensor, np.ndarray, str], dataformats: str):
[end of new definitions in stable_baselines3/common/logger.py]
</definitions>
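The definitions above only wrap their payloads; the dispatch that makes them useful lives in the output writers, where text-based formats reject rich media with `FormatUnsupportedError` while the TensorBoard writer forwards them. A minimal standalone sketch of that pattern (simplified stand-ins, not the stable-baselines3 classes):

```python
class Image:
    """Plain data holder: the payload plus its layout, e.g. "HWC" (channel last)."""
    def __init__(self, image, dataformats):
        self.image = image
        self.dataformats = dataformats


class FormatUnsupportedError(NotImplementedError):
    def __init__(self, unsupported_formats, value_description):
        super().__init__(
            f"{value_description} is not supported by the "
            f"{', '.join(unsupported_formats)} format(s)"
        )


def write_csv_value(value):
    # Text-based writers cannot serialize rich media, so they fail loudly
    # instead of silently writing repr() noise into the log file.
    if isinstance(value, Image):
        raise FormatUnsupportedError(["csv"], "image")
    return str(value)


print(write_csv_value(3.14))  # 3.14
```

Failing loudly on unsupported formats is what lets callers use `exclude=("stdout", "log", "json", "csv")` when recording, as the documentation examples in the patch do.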
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
[Feature Request] Add image to tensorboard
### 🚀 Feature
Add image logging support for logging to TensorBoard.
### Motivation
There is currently a way to log video into tensorBoard. Nevertheless, many environments need to log images, not videos.
### Pitch
Log images in TensorBoard,
### Alternatives
Creating and storing the images directly on my machine instead of tensorboard.
Creating videos of 1 frame.
### Additional context
Update logger module and implement images in TensorBoardOutputFormat class.
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/DLR-RM/stable-baselines3/issues) in the repo
----------
I'm happy to implement it myself and send a Pull Request if you consider that this feature is interesting for you.
Hello,
sounds like a reasonable feature, please go ahead with the PR ;)
--------------------
</issues> | 512eea923afad6f6da4bb53d72b6ea4c6d856e59 |
Textualize__rich-858 | 858 | Textualize/rich | null | 0511792c33e15b3e71f3bef089b0d1f387c6c511 | 2020-12-28T16:30:16Z | diff --git a/CHANGELOG.md b/CHANGELOG.md
index ad254e30de..7a195f113e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,16 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+## [9.6.0] - Unreleased
+
+### Changed
+
+- MarkupError exception raise from None to omit internal exception
+
+### Added
+
+- Added Progress.TimeElapsedColumn
+
## [9.5.1] - 2020-12-19
### Fixed
diff --git a/docs/source/style.rst b/docs/source/style.rst
index 877cedd8fc..db8f4cf156 100644
--- a/docs/source/style.rst
+++ b/docs/source/style.rst
@@ -16,7 +16,7 @@ To specify a foreground color use one of the 256 :ref:`appendix-colors`. For exa
console.print("Hello", style="magenta")
-You may also use the color's number (an integer between 0 and 255) with the syntax `"color(<number>)"`. The following will give the equivalent output::
+You may also use the color's number (an integer between 0 and 255) with the syntax ``"color(<number>)"``. The following will give the equivalent output::
console.print("Hello", style="color(5)")
diff --git a/pyproject.toml b/pyproject.toml
index 500cac3a2d..cc75ef12fd 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -2,7 +2,7 @@
name = "rich"
homepage = "https://github.com/willmcgugan/rich"
documentation = "https://rich.readthedocs.io/en/latest/"
-version = "9.5.1"
+version = "9.6.0"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
authors = ["Will McGugan <willmcgugan@gmail.com>"]
license = "MIT"
diff --git a/rich/color.py b/rich/color.py
index 4ee6314e87..59dc9f88b7 100644
--- a/rich/color.py
+++ b/rich/color.py
@@ -364,9 +364,9 @@ def from_rgb(cls, red: float, green: float, blue: float) -> "Color":
"""Create a truecolor from three color components in the range(0->255).
Args:
- red (float): Red component.
- green (float): Green component.
- blue (float): Blue component.
+ red (float): Red component in range 0-255.
+ green (float): Green component in range 0-255.
+ blue (float): Blue component in range 0-255.
Returns:
Color: A new color object.
@@ -471,10 +471,8 @@ def downgrade(self, system: ColorSystem) -> "Color":
# Convert to 8-bit color from truecolor color
if system == ColorSystem.EIGHT_BIT and self.system == ColorSystem.TRUECOLOR:
assert self.triplet is not None
-
red, green, blue = self.triplet.normalized
_h, l, s = rgb_to_hls(red, green, blue)
-
# If saturation is under 10% assume it is grayscale
if s < 0.1:
gray = round(l * 25.0)
@@ -566,7 +564,7 @@ def blend_rgb(
if color_number < 16:
table.add_row(color_cell, f"{color_number}", Text(f'"{name}"'))
else:
- color = EIGHT_BIT_PALETTE[color_number]
+ color = EIGHT_BIT_PALETTE[color_number] # type: ignore
table.add_row(
color_cell, str(color_number), Text(f'"{name}"'), color.hex, color.rgb
)
diff --git a/rich/console.py b/rich/console.py
index 6fa82dfda8..2ef5b3d75b 100644
--- a/rich/console.py
+++ b/rich/console.py
@@ -705,7 +705,7 @@ def size(self) -> ConsoleDimensions:
width: Optional[int] = None
height: Optional[int] = None
- if WINDOWS: # type: ignore
+ if WINDOWS: # pragma: no cover
width, height = shutil.get_terminal_size()
else:
try:
diff --git a/rich/default_styles.py b/rich/default_styles.py
index 044f1d71de..ced2ce5618 100644
--- a/rich/default_styles.py
+++ b/rich/default_styles.py
@@ -113,6 +113,7 @@
"progress.filesize": Style(color="green"),
"progress.filesize.total": Style(color="green"),
"progress.download": Style(color="green"),
+ "progress.elapsed": Style(color="yellow"),
"progress.percentage": Style(color="magenta"),
"progress.remaining": Style(color="cyan"),
"progress.data.speed": Style(color="red"),
diff --git a/rich/markup.py b/rich/markup.py
index 61fb103e19..a2aef136a7 100644
--- a/rich/markup.py
+++ b/rich/markup.py
@@ -104,6 +104,7 @@ def render(markup: str, style: Union[str, Style] = "", emoji: bool = True) -> Te
append_span = spans.append
_Span = Span
+ _Tag = Tag
def pop_style(style_name: str) -> Tuple[int, Tag]:
"""Pop tag matching given style name."""
@@ -125,18 +126,18 @@ def pop_style(style_name: str) -> Tuple[int, Tag]:
except KeyError:
raise MarkupError(
f"closing tag '{tag.markup}' at position {position} doesn't match any open tag"
- )
+ ) from None
else: # implicit close
try:
start, open_tag = pop()
except IndexError:
raise MarkupError(
f"closing tag '[/]' at position {position} has nothing to close"
- )
+ ) from None
append_span(_Span(start, len(text), str(open_tag)))
else: # Opening tag
- normalized_tag = Tag(normalize(tag.name), tag.parameters)
+ normalized_tag = _Tag(normalize(tag.name), tag.parameters)
style_stack.append((len(text), normalized_tag))
text_length = len(text)
diff --git a/rich/palette.py b/rich/palette.py
index 3faa23fa93..80dff5ed74 100644
--- a/rich/palette.py
+++ b/rich/palette.py
@@ -1,9 +1,12 @@
from math import sqrt
from functools import lru_cache
-from typing import Sequence, Tuple
+from typing import Sequence, Tuple, TYPE_CHECKING
from .color_triplet import ColorTriplet
+if TYPE_CHECKING:
+ from rich.table import Table
+
class Palette:
"""A palette of available colors."""
@@ -14,6 +17,29 @@ def __init__(self, colors: Sequence[Tuple[int, int, int]]):
def __getitem__(self, number: int) -> ColorTriplet:
return ColorTriplet(*self._colors[number])
+ def __rich__(self) -> "Table":
+ from rich.color import Color
+ from rich.style import Style
+ from rich.text import Text
+ from rich.table import Table
+
+ table = Table(
+ "index",
+ "RGB",
+ "Color",
+ title="Palette",
+ caption=f"{len(self._colors)} colors",
+ highlight=True,
+ caption_justify="right",
+ )
+ for index, color in enumerate(self._colors):
+ table.add_row(
+ str(index),
+ repr(color),
+ Text(" " * 16, style=Style(bgcolor=Color.from_rgb(*color))),
+ )
+ return table
+
# This is somewhat inefficient and needs caching
@lru_cache(maxsize=1024)
def match(self, color: Tuple[int, int, int]) -> int:
@@ -43,3 +69,31 @@ def get_color_distance(index: int) -> float:
min_index = min(range(len(self._colors)), key=get_color_distance)
return min_index
+
+
+if __name__ == "__main__": # pragma: no cover
+ import colorsys
+ from typing import Iterable
+ from rich.color import Color
+ from rich.console import Console, ConsoleOptions
+ from rich.segment import Segment
+ from rich.style import Style
+
+ class ColorBox:
+ def __rich_console__(
+ self, console: Console, options: ConsoleOptions
+ ) -> Iterable[Segment]:
+ height = console.size.height - 3
+ for y in range(0, height):
+ for x in range(options.max_width):
+ h = x / options.max_width
+ l = y / (height + 1)
+ r1, g1, b1 = colorsys.hls_to_rgb(h, l, 1.0)
+ r2, g2, b2 = colorsys.hls_to_rgb(h, l + (1 / height / 2), 1.0)
+ bgcolor = Color.from_rgb(r1 * 255, g1 * 255, b1 * 255)
+ color = Color.from_rgb(r2 * 255, g2 * 255, b2 * 255)
+ yield Segment("▄", Style(color=color, bgcolor=bgcolor))
+ yield Segment.line()
+
+ console = Console()
+ console.print(ColorBox())
diff --git a/rich/progress.py b/rich/progress.py
index b38e67aecc..56e5541ef4 100644
--- a/rich/progress.py
+++ b/rich/progress.py
@@ -322,6 +322,18 @@ def render(self, task: "Task") -> ProgressBar:
)
+class TimeElapsedColumn(ProgressColumn):
+ """Renders time elapsed."""
+
+ def render(self, task: "Task") -> Text:
+ """Show time remaining."""
+ elapsed = task.finished_time if task.finished else task.elapsed
+ if elapsed is None:
+ return Text("-:--:--", style="progress.elapsed")
+ delta = timedelta(seconds=int(elapsed))
+ return Text(str(delta), style="progress.elapsed")
+
+
class TimeRemainingColumn(ProgressColumn):
"""Renders estimated time remaining."""
@@ -434,6 +446,9 @@ class Task:
_get_time: GetTimeCallable
"""Callable to get the current time."""
+ finished_time: Optional[float] = None
+ """float: Time task was finished."""
+
visible: bool = True
"""bool: Indicates if this task is visible in the progress display."""
@@ -475,8 +490,8 @@ def elapsed(self) -> Optional[float]:
@property
def finished(self) -> bool:
- """bool: Check if the task has completed."""
- return self.completed >= self.total
+ """Check if the task has finished."""
+ return self.finished_time is not None
@property
def percentage(self) -> float:
@@ -518,6 +533,7 @@ def time_remaining(self) -> Optional[float]:
def _reset(self) -> None:
"""Reset progress."""
self._progress.clear()
+ self.finished_time = None
class _RefreshThread(Thread):
@@ -813,6 +829,8 @@ def update(
popleft()
if update_completed > 0:
_progress.append(ProgressSample(current_time, update_completed))
+ if task.completed >= task.total and task.finished_time is None:
+ task.finished_time = task.elapsed
def reset(
self,
@@ -848,6 +866,7 @@ def reset(
task.fields = fields
if description is not None:
task.description = description
+ task.finished_time = None
self.refresh()
def advance(self, task_id: TaskID, advance: float = 1) -> None:
@@ -872,6 +891,8 @@ def advance(self, task_id: TaskID, advance: float = 1) -> None:
while len(_progress) > 1000:
popleft()
_progress.append(ProgressSample(current_time, update_completed))
+ if task.completed >= task.total and task.finished_time is None:
+ task.finished_time = task.elapsed
def refresh(self) -> None:
"""Refresh (render) the progress information."""
@@ -1057,6 +1078,7 @@ def process_renderables(
BarColumn(),
TextColumn("[progress.percentage]{task.percentage:>3.0f}%"),
TimeRemainingColumn(),
+ TimeElapsedColumn(),
console=console,
transient=True,
) as progress:
@@ -1074,3 +1096,4 @@ def process_renderables(
except:
console.save_html("progress.html")
print("wrote progress.html")
+ raise
| diff --git a/tests/test_palette.py b/tests/test_palette.py
new file mode 100644
index 0000000000..8ddd7cc907
--- /dev/null
+++ b/tests/test_palette.py
@@ -0,0 +1,8 @@
+from rich._palettes import STANDARD_PALETTE
+from rich.table import Table
+
+
+def test_rich_cast():
+ table = STANDARD_PALETTE.__rich__()
+ assert isinstance(table, Table)
+ assert table.row_count == 16
diff --git a/tests/test_progress.py b/tests/test_progress.py
index 7d65b5f376..8f1ecb5335 100644
--- a/tests/test_progress.py
+++ b/tests/test_progress.py
@@ -17,6 +17,7 @@
Progress,
Task,
TextColumn,
+ TimeElapsedColumn,
TimeRemainingColumn,
track,
_TrackThread,
@@ -262,6 +263,7 @@ def test_columns() -> None:
TextColumn("{task.description}"),
BarColumn(bar_width=None),
TimeRemainingColumn(),
+ TimeElapsedColumn(),
FileSizeColumn(),
TotalFileSizeColumn(),
DownloadColumn(),
@@ -285,7 +287,7 @@ def test_columns() -> None:
result = replace_link_ids(console.file.getvalue())
print(repr(result))
- expected = "\x1b[?25ltest foo \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m10 bytes\x1b[0m \x1b[32m0/10 bytes\x1b[0m \x1b[31m?\x1b[0m\ntest bar \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m7 bytes \x1b[0m \x1b[32m0/7 bytes \x1b[0m \x1b[31m?\x1b[0m\r\x1b[2K\x1b[1A\x1b[2Kfoo\ntest foo \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m10 bytes\x1b[0m \x1b[32m0/10 bytes\x1b[0m \x1b[31m?\x1b[0m\ntest bar \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m7 bytes \x1b[0m \x1b[32m0/7 bytes \x1b[0m \x1b[31m?\x1b[0m\r\x1b[2K\x1b[1A\x1b[2K\x1b[2;36m[TIME]\x1b[0m\x1b[2;36m \x1b[0mhello \ntest foo \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m10 bytes\x1b[0m \x1b[32m0/10 bytes\x1b[0m \x1b[31m?\x1b[0m\ntest bar \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m7 bytes \x1b[0m \x1b[32m0/7 bytes \x1b[0m \x1b[31m?\x1b[0m\r\x1b[2K\x1b[1A\x1b[2Kworld\ntest foo \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m10 bytes\x1b[0m \x1b[32m0/10 bytes\x1b[0m \x1b[31m?\x1b[0m\ntest bar \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m7 bytes \x1b[0m \x1b[32m0/7 bytes \x1b[0m \x1b[31m?\x1b[0m\r\x1b[2K\x1b[1A\x1b[2Ktest foo \x1b[38;2;114;156;31mโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m0:00:00\x1b[0m \x1b[32m12 bytes\x1b[0m \x1b[32m10 bytes\x1b[0m \x1b[32m12/10 bytes\x1b[0m \x1b[31m1 byte/s \x1b[0m\ntest bar \x1b[38;2;114;156;31mโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m0:00:00\x1b[0m \x1b[32m16 bytes\x1b[0m \x1b[32m7 bytes \x1b[0m \x1b[32m16/7 bytes \x1b[0m \x1b[31m2 bytes/s\x1b[0m\r\x1b[2K\x1b[1A\x1b[2Ktest foo 
\x1b[38;2;114;156;31mโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m0:00:00\x1b[0m \x1b[32m12 bytes\x1b[0m \x1b[32m10 bytes\x1b[0m \x1b[32m12/10 bytes\x1b[0m \x1b[31m1 byte/s \x1b[0m\ntest bar \x1b[38;2;114;156;31mโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m0:00:00\x1b[0m \x1b[32m16 bytes\x1b[0m \x1b[32m7 bytes \x1b[0m \x1b[32m16/7 bytes \x1b[0m \x1b[31m2 bytes/s\x1b[0m\n\x1b[?25h\r\x1b[1A\x1b[2K\x1b[1A\x1b[2K"
+ expected = "\x1b[?25ltest foo \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[33m0:00:37\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m10 bytes\x1b[0m \x1b[32m0/10 bytes\x1b[0m \x1b[31m?\x1b[0m\ntest bar \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[33m0:00:36\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m7 bytes \x1b[0m \x1b[32m0/7 bytes \x1b[0m \x1b[31m?\x1b[0m\r\x1b[2K\x1b[1A\x1b[2Kfoo\ntest foo \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[33m0:00:37\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m10 bytes\x1b[0m \x1b[32m0/10 bytes\x1b[0m \x1b[31m?\x1b[0m\ntest bar \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[33m0:00:36\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m7 bytes \x1b[0m \x1b[32m0/7 bytes \x1b[0m \x1b[31m?\x1b[0m\r\x1b[2K\x1b[1A\x1b[2K\x1b[2;36m[TIME]\x1b[0m\x1b[2;36m \x1b[0mhello \ntest foo \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[33m0:00:37\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m10 bytes\x1b[0m \x1b[32m0/10 bytes\x1b[0m \x1b[31m?\x1b[0m\ntest bar \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[33m0:00:36\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m7 bytes \x1b[0m \x1b[32m0/7 bytes \x1b[0m \x1b[31m?\x1b[0m\r\x1b[2K\x1b[1A\x1b[2Kworld\ntest foo \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[33m0:00:37\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m10 bytes\x1b[0m \x1b[32m0/10 bytes\x1b[0m \x1b[31m?\x1b[0m\ntest bar \x1b[38;5;237mโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m-:--:--\x1b[0m \x1b[33m0:00:36\x1b[0m \x1b[32m0 bytes\x1b[0m \x1b[32m7 bytes \x1b[0m \x1b[32m0/7 bytes \x1b[0m \x1b[31m?\x1b[0m\r\x1b[2K\x1b[1A\x1b[2Ktest foo \x1b[38;2;114;156;31mโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m0:00:00\x1b[0m \x1b[33m0:01:00\x1b[0m \x1b[32m12 bytes\x1b[0m \x1b[32m10 bytes\x1b[0m \x1b[32m12/10 bytes\x1b[0m \x1b[31m1 byte/s\x1b[0m\ntest bar \x1b[38;2;114;156;31mโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m0:00:00\x1b[0m 
\x1b[33m0:00:45\x1b[0m \x1b[32m16 bytes\x1b[0m \x1b[32m7 bytes \x1b[0m \x1b[32m16/7 bytes \x1b[0m \x1b[31m1 byte/s\x1b[0m\r\x1b[2K\x1b[1A\x1b[2Ktest foo \x1b[38;2;114;156;31mโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m0:00:00\x1b[0m \x1b[33m0:01:00\x1b[0m \x1b[32m12 bytes\x1b[0m \x1b[32m10 bytes\x1b[0m \x1b[32m12/10 bytes\x1b[0m \x1b[31m1 byte/s\x1b[0m\ntest bar \x1b[38;2;114;156;31mโโโโโโโโโโโโโโโโ\x1b[0m \x1b[36m0:00:00\x1b[0m \x1b[33m0:00:45\x1b[0m \x1b[32m16 bytes\x1b[0m \x1b[32m7 bytes \x1b[0m \x1b[32m16/7 bytes \x1b[0m \x1b[31m1 byte/s\x1b[0m\n\x1b[?25h\r\x1b[1A\x1b[2K\x1b[1A\x1b[2K"
assert result == expected
| diff --git a/CHANGELOG.md b/CHANGELOG.md
index ad254e30de..7a195f113e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,16 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+## [9.6.0] - Unreleased
+
+### Changed
+
+- MarkupError exception raise from None to omit internal exception
+
+### Added
+
+- Added Progress.TimeElapsedColumn
+
## [9.5.1] - 2020-12-19
### Fixed
diff --git a/docs/source/style.rst b/docs/source/style.rst
index 877cedd8fc..db8f4cf156 100644
--- a/docs/source/style.rst
+++ b/docs/source/style.rst
@@ -16,7 +16,7 @@ To specify a foreground color use one of the 256 :ref:`appendix-colors`. For exa
console.print("Hello", style="magenta")
-You may also use the color's number (an integer between 0 and 255) with the syntax `"color(<number>)"`. The following will give the equivalent output::
+You may also use the color's number (an integer between 0 and 255) with the syntax ``"color(<number>)"``. The following will give the equivalent output::
console.print("Hello", style="color(5)")
diff --git a/pyproject.toml b/pyproject.toml
index 500cac3a2d..cc75ef12fd 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -2,7 +2,7 @@
name = "rich"
homepage = "https://github.com/willmcgugan/rich"
documentation = "https://rich.readthedocs.io/en/latest/"
-version = "9.5.1"
+version = "9.6.0"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
authors = ["Will McGugan <willmcgugan@gmail.com>"]
license = "MIT"
| [
{
"components": [
{
"doc": "",
"lines": [
20,
41
],
"name": "Palette.__rich__",
"signature": "def __rich__(self) -> \"Table\":",
"type": "function"
},
{
"doc": "",
"lines": [
82,
96
... | [
"tests/test_palette.py::test_rich_cast",
"tests/test_progress.py::test_bar_columns",
"tests/test_progress.py::test_text_column",
"tests/test_progress.py::test_time_remaining_column",
"tests/test_progress.py::test_download_progress_uses_decimal_units",
"tests/test_progress.py::test_download_progress_uses_b... | [] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
added time elapsed column
## Type of changes
- [ ] Bug fix
- [ ] New feature
- [ ] Documentation / docstrings
- [ ] Tests
- [ ] Other
## Checklist
- [ ] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [ ] I accept that @willmcgugan may be pedantic in the code review.
## Description
Please describe your changes here. If this fixes a bug, please link to the issue, if possible.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in rich/palette.py]
(definition of Palette.__rich__:)
def __rich__(self) -> "Table":
(definition of ColorBox:)
class ColorBox:
(definition of ColorBox.__rich_console__:)
def __rich_console__( self, console: Console, options: ConsoleOptions ) -> Iterable[Segment]:
[end of new definitions in rich/palette.py]
[start of new definitions in rich/progress.py]
(definition of TimeElapsedColumn:)
class TimeElapsedColumn(ProgressColumn):
"""Renders time elapsed."""
(definition of TimeElapsedColumn.render:)
def render(self, task: "Task") -> Text:
"""Show time remaining."""
[end of new definitions in rich/progress.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | b0661de34bab35af9b4b1d3ba8e28b186b225e84 | |
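The `TimeElapsedColumn.render` definition in the rich row above boils down to one formatting rule: a placeholder while no elapsed time is known, otherwise a `timedelta` string. A minimal standalone sketch of that rule (the `format_elapsed` name is ours; the real method wraps the result in a styled `Text` and prefers `finished_time` once the task is done):

```python
from datetime import timedelta
from typing import Optional

def format_elapsed(elapsed: Optional[float]) -> str:
    """Core formatting behind TimeElapsedColumn (styling omitted)."""
    if elapsed is None:
        return "-:--:--"  # placeholder before the task has started
    # truncate to whole seconds, then rely on timedelta's H:MM:SS string form
    return str(timedelta(seconds=int(elapsed)))
```

For example, 61.9 seconds renders as `0:01:01`, matching the `0:00:37`-style cells in the updated expected test output.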
joke2k__faker-1355 | 1,355 | joke2k/faker | null | 79df7b05f1785508c07f8ab290bc92c6b83019c1 | 2020-12-28T09:27:29Z | diff --git a/faker/providers/person/en_IN/__init__.py b/faker/providers/person/en_IN/__init__.py
new file mode 100644
index 0000000000..5d66568f71
--- /dev/null
+++ b/faker/providers/person/en_IN/__init__.py
@@ -0,0 +1,680 @@
+from .. import Provider as PersonProvider
+
+
+class Provider(PersonProvider):
+ formats = (
+ '{{first_name}} {{last_name}}',
+ )
+
+ # First names are from https://www.babycenter.in/a25010193/modern-indian-baby-names
+ # Last names are from https://www.familyeducation.com/baby-names/browse-origin/surname/indian
+
+ first_names = (
+ 'Aarav',
+ 'Aarush',
+ 'Aayush',
+ 'Abram',
+ 'Advik',
+ 'Akarsh',
+ 'Anay',
+ 'Aniruddh',
+ 'Arhaan',
+ 'Armaan',
+ 'Arnav',
+ 'Azad',
+ 'Badal',
+ 'Bhavin',
+ 'Chirag',
+ 'Darshit',
+ 'Devansh',
+ 'Dhanuk',
+ 'Dhruv',
+ 'Divij',
+ 'Divit',
+ 'Divyansh',
+ 'Ehsaan',
+ 'Emir',
+ 'Faiyaz',
+ 'Farhan',
+ 'Fateh',
+ 'Gatik',
+ 'Gokul',
+ 'Hansh',
+ 'Himmat',
+ 'Hiran',
+ 'Hridaan',
+ 'Hunar',
+ 'Indrajit',
+ 'Indranil',
+ 'Ishaan',
+ 'Ivan',
+ 'Jayant',
+ 'Jayesh',
+ 'Jivin',
+ 'Kabir',
+ 'Kanav',
+ 'Kartik',
+ 'Kiaan',
+ 'Krish',
+ 'Lagan',
+ 'Lakshay',
+ 'Lakshit',
+ 'Madhav',
+ 'Madhup',
+ 'Manikya',
+ 'Mehul',
+ 'Miraan',
+ 'Nakul',
+ 'Nirvaan',
+ 'Nishith',
+ 'Ojas',
+ 'Onkar',
+ 'Pranay',
+ 'Prerak',
+ 'Priyansh',
+ 'Purab',
+ 'Raghav',
+ 'Ranbir',
+ 'Raunak',
+ 'Reyansh',
+ 'Riaan',
+ 'Ritvik',
+ 'Rohan',
+ 'Romil',
+ 'Ryan',
+ 'Sahil',
+ 'Saksham',
+ 'Samar',
+ 'Samarth',
+ 'Shaan',
+ 'Shalv',
+ 'Shamik',
+ 'Shayak',
+ 'Shlok',
+ 'Shray',
+ 'Stuvan',
+ 'Sumer',
+ 'Taimur',
+ 'Taran',
+ 'Tejas',
+ 'Tushar',
+ 'Umang',
+ 'Uthkarsh',
+ 'Vaibhav',
+ 'Veer',
+ 'Vidur',
+ 'Vihaan',
+ 'Vivaan',
+ 'Yakshit',
+ 'Yuvaan',
+ 'Yuvraj ',
+ 'Zain',
+ 'Zeeshan',
+ 'Aaina',
+ 'Aarna',
+ 'Aaryahi',
+ 'Adah',
+ 'Adira',
+ 'Advika',
+ 'Ahana ',
+ 'Alia',
+ 'Alisha',
+ 'Amani',
+ 'Amira',
+ 'Anahi',
+ 'Anahita',
+ 'Anaya',
+ 'Anika',
+ 'Anvi',
+ 'Anya',
+ 'Aradhya',
+ 'Ayesha',
+ 'Bhamini',
+ 'Charvi',
+ 'Damini',
+ 'Dishani',
+ 'Diya',
+ 'Drishya',
+ 'Ela',
+ 'Elakshi',
+ 'Eshani',
+ 'Eva',
+ 'Hazel',
+ 'Heer',
+ 'Hrishita',
+ 'Inaaya ',
+ 'Ira',
+ 'Ishita',
+ 'Ivana',
+ 'Jhanvi',
+ 'Jivika',
+ 'Jiya',
+ 'Kaira',
+ 'Kashvi',
+ 'Kavya',
+ 'Keya',
+ 'Khushi',
+ 'Kiara',
+ 'Kimaya',
+ 'Kismat',
+ 'Lavanya',
+ 'Mahika',
+ 'Manjari',
+ 'Mannat',
+ 'Miraya',
+ 'Misha',
+ 'Mishti',
+ 'Myra',
+ 'Navya',
+ 'Nayantara',
+ 'Neelofar',
+ 'Nehmat',
+ 'Neysa',
+ 'Nirvi',
+ 'Nitara',
+ 'Nitya',
+ 'Oorja',
+ 'Pari',
+ 'Parinaaz',
+ 'Pihu',
+ 'Piya',
+ 'Prisha',
+ 'Rania',
+ 'Rasha',
+ 'Rati',
+ 'Renee',
+ 'Rhea',
+ 'Riya',
+ 'Saanvi',
+ 'Saira',
+ 'Samaira',
+ 'Samiha',
+ 'Sana',
+ 'Sara',
+ 'Seher',
+ 'Shanaya',
+ 'Siya',
+ 'Suhana',
+ 'Tanya',
+ 'Tara',
+ 'Tarini',
+ 'Tiya',
+ 'Trisha',
+ 'Urvi',
+ 'Vanya',
+ 'Vardaniya',
+ 'Vedika',
+ 'Vritika',
+ 'Yashvi',
+ 'Yasmin',
+ 'Zaina',
+ 'Zara',
+ 'Zoya',
+ )
+
+ last_names = (
+ 'Acharya',
+ 'Agarwal',
+ 'Agate',
+ 'Aggarwal',
+ 'Agrawal',
+ 'Ahluwalia',
+ 'Ahuja',
+ 'Amble',
+ 'Anand',
+ 'Andra',
+ 'Anne',
+ 'Apte',
+ 'Arora',
+ 'Arya',
+ 'Atwal',
+ 'Aurora',
+ 'Babu',
+ 'Badal',
+ 'Badami',
+ 'Bahl',
+ 'Bahri',
+ 'Bail',
+ 'Bains',
+ 'Bajaj',
+ 'Bajwa',
+ 'Bakshi',
+ 'Bal',
+ 'Bala',
+ 'Bala',
+ 'Balakrishnan',
+ 'Balan',
+ 'Balasubramanian',
+ 'Balay',
+ 'Bali',
+ 'Bandi',
+ 'Banerjee',
+ 'Banik',
+ 'Bansal',
+ 'Barad',
+ 'Barad',
+ 'Baral',
+ 'Baria',
+ 'Barman',
+ 'Basak',
+ 'Bassi',
+ 'Basu',
+ 'Bath',
+ 'Batra',
+ 'Batta',
+ 'Bava',
+ 'Bawa',
+ 'Bedi',
+ 'Behl',
+ 'Ben',
+ 'Bera',
+ 'Bhagat',
+ 'Bhakta',
+ 'Bhalla',
+ 'Bhandari',
+ 'Bhardwaj',
+ 'Bhargava',
+ 'Bhasin',
+ 'Bhat',
+ 'Bhatia',
+ 'Bhatnagar',
+ 'Bhatt',
+ 'Bhattacharyya',
+ 'Bhatti',
+ 'Bhavsar',
+ 'Bir',
+ 'Biswas',
+ 'Boase',
+ 'Bobal',
+ 'Bora',
+ 'Bora',
+ 'Borah',
+ 'Borde',
+ 'Borra',
+ 'Bose',
+ 'Brahmbhatt',
+ 'Brar',
+ 'Buch',
+ 'Buch',
+ 'Bumb',
+ 'Butala',
+ 'Chacko',
+ 'Chad',
+ 'Chada',
+ 'Chadha',
+ 'Chahal',
+ 'Chakrabarti',
+ 'Chakraborty',
+ 'Chana',
+ 'Chand',
+ 'Chanda',
+ 'Chander',
+ 'Chandra',
+ 'Chandran',
+ 'Char',
+ 'Chatterjee',
+ 'Chaudhari',
+ 'Chaudhary',
+ 'Chaudhry',
+ 'Chaudhuri',
+ 'Chaudry',
+ 'Chauhan',
+ 'Chawla',
+ 'Cheema',
+ 'Cherian',
+ 'Chhabra',
+ 'Chokshi',
+ 'Chopra',
+ 'Choudhary',
+ 'Choudhry',
+ 'Choudhury',
+ 'Chowdhury',
+ 'Comar',
+ 'Contractor',
+ 'D’Alia',
+ 'Dada',
+ 'Dalal',
+ 'Dani',
+ 'Dar',
+ 'Dara',
+ 'Dara',
+ 'Das',
+ 'Dasgupta',
+ 'Dash',
+ 'Dass',
+ 'Date',
+ 'Datta',
+ 'Dave',
+ 'Dayal',
+ 'De',
+ 'Deep',
+ 'Deo',
+ 'Deol',
+ 'Desai',
+ 'Deshmukh',
+ 'Deshpande',
+ 'Devan',
+ 'Devi',
+ 'Dewan',
+ 'Dey',
+ 'Dhaliwal',
+ 'Dhar',
+ 'Dhar',
+ 'Dhawan',
+ 'Dhillon',
+ 'Dhingra',
+ 'Din',
+ 'Divan',
+ 'Dixit',
+ 'Doctor',
+ 'Dora',
+ 'Doshi',
+ 'Dua',
+ 'Dube',
+ 'Dubey',
+ 'Dugal',
+ 'Dugar',
+ 'Dugar',
+ 'Dutt',
+ 'Dutta',
+ 'Dyal',
+ 'Edwin',
+ 'Gaba',
+ 'Gade',
+ 'Gala',
+ 'Gandhi',
+ 'Ganesan',
+ 'Ganesh',
+ 'Ganguly',
+ 'Gara',
+ 'Garde',
+ 'Garg',
+ 'Gera',
+ 'Ghose',
+ 'Ghosh',
+ 'Gill',
+ 'Goda',
+ 'Goel',
+ 'Gokhale',
+ 'Gola',
+ 'Gole',
+ 'Golla',
+ 'Gopal',
+ 'Goswami',
+ 'Gour',
+ 'Goyal',
+ 'Grewal',
+ 'Grover',
+ 'Guha',
+ 'Gulati',
+ 'Gupta',
+ 'Halder',
+ 'Handa',
+ 'Hans',
+ 'Hari',
+ 'Hayer',
+ 'Hayre',
+ 'Hegde',
+ 'Hora',
+ 'Issac',
+ 'Iyengar',
+ 'Iyer',
+ 'Jaggi',
+ 'Jain',
+ 'Jani',
+ 'Jayaraman',
+ 'Jha',
+ 'Jhaveri',
+ 'Johal',
+ 'Joshi',
+ 'Kadakia',
+ 'Kade',
+ 'Kakar',
+ 'Kala',
+ 'Kala',
+ 'Kala',
+ 'Kale',
+ 'Kalita',
+ 'Kalla',
+ 'Kamdar',
+ 'Kanda',
+ 'Kannan',
+ 'Kant',
+ 'Kapadia',
+ 'Kapoor',
+ 'Kapur',
+ 'Kar',
+ 'Kara',
+ 'Karan',
+ 'Kari',
+ 'Karnik',
+ 'Karpe',
+ 'Kashyap',
+ 'Kata',
+ 'Kaul',
+ 'Kaur',
+ 'Keer',
+ 'Keer',
+ 'Khalsa',
+ 'Khanna',
+ 'Khare',
+ 'Khatri',
+ 'Khosla',
+ 'Khurana',
+ 'Kibe',
+ 'Kohli',
+ 'Konda',
+ 'Korpal',
+ 'Koshy',
+ 'Kota',
+ 'Kothari',
+ 'Krish',
+ 'Krishna',
+ 'Krishnamurthy',
+ 'Krishnan',
+ 'Kulkarni',
+ 'Kumar',
+ 'Kumer',
+ 'Kunda',
+ 'Kurian',
+ 'Kuruvilla',
+ 'Lad',
+ 'Lad',
+ 'Lal',
+ 'Lala',
+ 'Lall',
+ 'Lalla',
+ 'Lanka',
+ 'Lata',
+ 'Loke',
+ 'Loyal',
+ 'Luthra',
+ 'Madan',
+ 'Madan',
+ 'Magar',
+ 'Mahajan',
+ 'Mahal',
+ 'Maharaj',
+ 'Majumdar',
+ 'Malhotra',
+ 'Mall',
+ 'Mallick',
+ 'Mammen',
+ 'Mand',
+ 'Manda',
+ 'Mandal',
+ 'Mander',
+ 'Mane',
+ 'Mangal',
+ 'Mangat',
+ 'Mani',
+ 'Mani',
+ 'Mann',
+ 'Mannan',
+ 'Manne',
+ 'Master',
+ 'Raj',
+ 'Raja',
+ 'Rajagopal',
+ 'Rajagopalan',
+ 'Rajan',
+ 'Raju',
+ 'Ram',
+ 'Rama',
+ 'Ramachandran',
+ 'Ramakrishnan',
+ 'Raman',
+ 'Ramanathan',
+ 'Ramaswamy',
+ 'Ramesh',
+ 'Rana',
+ 'Randhawa',
+ 'Ranganathan',
+ 'Rao',
+ 'Rastogi',
+ 'Ratta',
+ 'Rattan',
+ 'Ratti',
+ 'Rau',
+ 'Raval',
+ 'Ravel',
+ 'Ravi',
+ 'Ray',
+ 'Reddy',
+ 'Rege',
+ 'Rout',
+ 'Roy',
+ 'Sabharwal',
+ 'Sachar',
+ 'Sachdev',
+ 'Sachdeva',
+ 'Sagar',
+ 'Saha',
+ 'Sahni',
+ 'Sahota',
+ 'Saini',
+ 'Salvi',
+ 'Sama',
+ 'Sami',
+ 'Sampath',
+ 'Samra',
+ 'Sandal',
+ 'Sandhu',
+ 'Sane',
+ 'Sangha',
+ 'Sanghvi',
+ 'Sani',
+ 'Sankar',
+ 'Sankaran',
+ 'Sant',
+ 'Saraf',
+ 'Saran',
+ 'Sarin',
+ 'Sarkar',
+ 'Sarma',
+ 'Sarna',
+ 'Sarraf',
+ 'Sastry',
+ 'Sathe',
+ 'Savant',
+ 'Sawhney',
+ 'Saxena',
+ 'Sehgal',
+ 'Sekhon',
+ 'Sem',
+ 'Sen',
+ 'Sengupta',
+ 'Seshadri',
+ 'Seth',
+ 'Sethi',
+ 'Setty',
+ 'Sha',
+ 'Shah',
+ 'Shan',
+ 'Shankar',
+ 'Shanker',
+ 'Sharaf',
+ 'Sharma',
+ 'Shenoy',
+ 'Shere',
+ 'Sheth',
+ 'Shetty',
+ 'Shroff',
+ 'Shukla',
+ 'Sibal',
+ 'Sidhu',
+ 'Singh',
+ 'Singhal',
+ 'Sinha',
+ 'Sodhi',
+ 'Solanki',
+ 'Som',
+ 'Soman',
+ 'Soni',
+ 'Sood',
+ 'Sridhar',
+ 'Srinivas',
+ 'Srinivasan',
+ 'Srivastava',
+ 'Subramaniam',
+ 'Subramanian',
+ 'Sule',
+ 'Sundaram',
+ 'Sunder',
+ 'Sur',
+ 'Sura',
+ 'Suresh',
+ 'Suri',
+ 'Swaminathan',
+ 'Swamy',
+ 'Tailor',
+ 'Tak',
+ 'Talwar',
+ 'Tandon',
+ 'Taneja',
+ 'Tank',
+ 'Tara',
+ 'Tata',
+ 'Tella',
+ 'Thaker',
+ 'Thakkar',
+ 'Thakur',
+ 'Thaman',
+ 'Tiwari',
+ 'Toor',
+ 'Tripathi',
+ 'Trivedi',
+ 'Upadhyay',
+ 'Uppal',
+ 'Vaidya',
+ 'Vala',
+ 'Varghese',
+ 'Varkey',
+ 'Varma',
+ 'Varty',
+ 'Varughese',
+ 'Vasa',
+ 'Venkataraman',
+ 'Venkatesh',
+ 'Verma',
+ 'Vig',
+ 'Virk',
+ 'Viswanathan',
+ 'Vohra',
+ 'Vora',
+ 'Vyas',
+ 'Wable',
+ 'Wadhwa',
+ 'Wagle',
+ 'Wali',
+ 'Wali',
+ 'Walia',
+ 'Walla',
+ 'Warrior',
+ 'Wason',
+ 'Yadav',
+ 'Yogi',
+ 'Yohannan',
+ 'Zacharia',
+ 'Zachariah',
+ )
| diff --git a/tests/providers/test_person.py b/tests/providers/test_person.py
index ec7bb91015..d1f5e23e9e 100644
--- a/tests/providers/test_person.py
+++ b/tests/providers/test_person.py
@@ -8,6 +8,7 @@
from faker.providers.person.ar_AA import Provider as ArProvider
from faker.providers.person.cs_CZ import Provider as CsCZProvider
from faker.providers.person.en import Provider as EnProvider
+from faker.providers.person.en_IN import Provider as EnINProvider
from faker.providers.person.en_US import Provider as EnUSProvider
from faker.providers.person.es_ES import Provider as EsESProvider
from faker.providers.person.fi_FI import Provider as FiProvider
@@ -804,3 +805,19 @@ def test_last_names(self):
""" test the last name is generating from the provided tuple """
last_name = self.fake.last_name()
assert last_name in OrINProvider.last_names
+
+
+class TestEnIN(unittest.TestCase):
+ """ Tests person in the en_IN locale """
+
+ def setUp(self):
+ self.fake = Faker('en_IN')
+ Faker.seed(0)
+
+ def test_first_name(self):
+ first_name = self.fake.first_name()
+ assert first_name in EnINProvider.first_names
+
+ def test_last_name(self):
+ last_name = self.fake.last_name()
+ assert last_name in EnINProvider.last_names
| [
{
"components": [
{
"doc": "",
"lines": [
4,
679
],
"name": "Provider",
"signature": "class Provider(PersonProvider):",
"type": "class"
}
],
"file": "faker/providers/person/en_IN/__init__.py"
}
] | [
"tests/providers/test_person.py::TestAr::test_first_name",
"tests/providers/test_person.py::TestAr::test_last_name",
"tests/providers/test_person.py::TestJaJP::test_person",
"tests/providers/test_person.py::TestNeNP::test_names",
"tests/providers/test_person.py::TestFiFI::test_gender_first_names",
"tests/... | [] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
en_IN person provider
### What does this change
With this PR, I add a new provider for Indian Names in English script and the tests for it too.
### What was wrong
There wasn't a way to generate random Indian names.
### How this fixes it
Added Indian first names and last names along with the source. Also added test case for the same.
Fixes #...
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in faker/providers/person/en_IN/__init__.py]
(definition of Provider:)
class Provider(PersonProvider):
[end of new definitions in faker/providers/person/en_IN/__init__.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 6edfdbf6ae90b0153309e3bf066aa3b2d16494a7 | ||
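The `en_IN` provider above only supplies data tuples; the inherited `PersonProvider` machinery combines them. A rough standalone sketch of that combination, using a tiny excerpt of the tuples (the `fake_name` helper is illustrative, not Faker's actual implementation, which resolves `{{first_name}}` tokens through its generator):

```python
import random

FIRST_NAMES = ("Aarav", "Ishita", "Kabir")   # tiny excerpt of the en_IN first_names tuple
LAST_NAMES = ("Sharma", "Iyer", "Reddy")     # tiny excerpt of the en_IN last_names tuple
FORMATS = ("{first_name} {last_name}",)       # mirrors the provider's single format

def fake_name(rng: random.Random) -> str:
    """Mimic PersonProvider.name(): pick a format, fill each placeholder from the tuples."""
    fmt = rng.choice(FORMATS)
    return fmt.format(first_name=rng.choice(FIRST_NAMES),
                      last_name=rng.choice(LAST_NAMES))
```

This is also why the accompanying tests only assert membership — `first_name in EnINProvider.first_names` — rather than exact strings: any seeded draw must come from the declared tuples.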
conan-io__conan-8263 | 8,263 | conan-io/conan | null | da8de69618d75da5ac7d77e0eb7a38ee4d564eb9 | 2020-12-25T13:42:54Z | diff --git a/conans/client/build/autotools_environment.py b/conans/client/build/autotools_environment.py
index e1a3bff7dcc..bad6f2c746a 100644
--- a/conans/client/build/autotools_environment.py
+++ b/conans/client/build/autotools_environment.py
@@ -41,6 +41,7 @@ def __init__(self, conanfile, win_bash=False, include_rpath_flags=False):
self._deps_cpp_info = conanfile.deps_cpp_info
self._os = conanfile.settings.get_safe("os")
self._os_version = conanfile.settings.get_safe("os.version")
+ self._os_sdk = conanfile.settings.get_safe("os.sdk")
self._arch = conanfile.settings.get_safe("arch")
self._os_target, self._arch_target = get_target_os_arch(conanfile)
@@ -353,7 +354,9 @@ def append(*args):
concat += " " + os.environ.get("CXXFLAGS", None)
if self._os_version and "-version-min" not in concat and "-target" not in concat:
tmp_compilation_flags.append(tools.apple_deployment_target_flag(self._os,
- self._os_version))
+ self._os_version,
+ self._os_sdk,
+ self._arch))
cxx_flags = append(tmp_compilation_flags, self.cxx_flags, self.cppstd_flag)
c_flags = tmp_compilation_flags
diff --git a/conans/client/conf/__init__.py b/conans/client/conf/__init__.py
index f9331d125f5..e53cb979592 100644
--- a/conans/client/conf/__init__.py
+++ b/conans/client/conf/__init__.py
@@ -37,14 +37,18 @@
Linux:
Macos:
version: [None, "10.6", "10.7", "10.8", "10.9", "10.10", "10.11", "10.12", "10.13", "10.14", "10.15", "11.0"]
+ sdk: [None, "macosx"]
Android:
api_level: ANY
iOS:
version: ["7.0", "7.1", "8.0", "8.1", "8.2", "8.3", "9.0", "9.1", "9.2", "9.3", "10.0", "10.1", "10.2", "10.3", "11.0", "11.1", "11.2", "11.3", "11.4", "12.0", "12.1", "12.2", "12.3", "12.4", "13.0", "13.1", "13.2", "13.3", "13.4", "13.5", "13.6"]
+ sdk: [None, "iphoneos", "iphonesimulator"]
watchOS:
version: ["4.0", "4.1", "4.2", "4.3", "5.0", "5.1", "5.2", "5.3", "6.0", "6.1"]
+ sdk: [None, "watchos", "watchsimulator"]
tvOS:
version: ["11.0", "11.1", "11.2", "11.3", "11.4", "12.0", "12.1", "12.2", "12.3", "12.4", "13.0"]
+ sdk: [None, "appletvos", "appletvsimulator"]
FreeBSD:
SunOS:
AIX:
diff --git a/conans/client/migrations_settings.py b/conans/client/migrations_settings.py
index e3dac028be2..4ed52a920eb 100644
--- a/conans/client/migrations_settings.py
+++ b/conans/client/migrations_settings.py
@@ -1842,14 +1842,18 @@
Linux:
Macos:
version: [None, "10.6", "10.7", "10.8", "10.9", "10.10", "10.11", "10.12", "10.13", "10.14", "10.15", "11.0"]
+ sdk: [None, "macosx"]
Android:
api_level: ANY
iOS:
version: ["7.0", "7.1", "8.0", "8.1", "8.2", "8.3", "9.0", "9.1", "9.2", "9.3", "10.0", "10.1", "10.2", "10.3", "11.0", "11.1", "11.2", "11.3", "11.4", "12.0", "12.1", "12.2", "12.3", "12.4", "13.0", "13.1", "13.2", "13.3", "13.4", "13.5", "13.6"]
+ sdk: [None, "iphoneos", "iphonesimulator"]
watchOS:
version: ["4.0", "4.1", "4.2", "4.3", "5.0", "5.1", "5.2", "5.3", "6.0", "6.1"]
+ sdk: [None, "watchos", "watchsimulator"]
tvOS:
version: ["11.0", "11.1", "11.2", "11.3", "11.4", "12.0", "12.1", "12.2", "12.3", "12.4", "13.0"]
+ sdk: [None, "appletvos", "appletvsimulator"]
FreeBSD:
SunOS:
AIX:
diff --git a/conans/client/tools/apple.py b/conans/client/tools/apple.py
index 9e861e41179..7efe492fdc7 100644
--- a/conans/client/tools/apple.py
+++ b/conans/client/tools/apple.py
@@ -22,11 +22,7 @@ def to_apple_arch(arch):
'armv7k': 'armv7k'}.get(str(arch))
-def apple_sdk_name(settings):
- """returns proper SDK name suitable for OS and architecture
- we're building for (considering simulators)"""
- arch = settings.get_safe('arch')
- os_ = settings.get_safe('os')
+def _guess_apple_sdk_name(os_, arch):
if str(arch).startswith('x86'):
return {'Macos': 'macosx',
'iOS': 'iphonesimulator',
@@ -39,6 +35,15 @@ def apple_sdk_name(settings):
'tvOS': 'appletvos'}.get(str(os_), None)
+def apple_sdk_name(settings):
+ """returns proper SDK name suitable for OS and architecture
+ we're building for (considering simulators)"""
+ arch = settings.get_safe('arch')
+ os_ = settings.get_safe('os')
+ os_sdk = settings.get_safe('os.sdk')
+ return os_sdk or _guess_apple_sdk_name(os_, arch)
+
+
def apple_deployment_target_env(os_, os_version):
"""environment variable name which controls deployment target"""
env_name = {'Macos': 'MACOSX_DEPLOYMENT_TARGET',
@@ -50,12 +55,16 @@ def apple_deployment_target_env(os_, os_version):
return {env_name: os_version}
-def apple_deployment_target_flag(os_, os_version):
+def apple_deployment_target_flag(os_, os_version, os_sdk=None, arch=None):
"""compiler flag name which controls deployment target"""
- flag = {'Macos': '-mmacosx-version-min',
- 'iOS': '-mios-version-min',
- 'watchOS': '-mwatchos-version-min',
- 'tvOS': '-mappletvos-version-min'}.get(str(os_))
+ os_sdk = os_sdk if os_sdk else _guess_apple_sdk_name(os_, arch)
+ flag = {'macosx': '-mmacosx-version-min',
+ 'iphoneos': '-mios-version-min',
+ 'iphonesimulator': '-mios-simulator-version-min',
+ 'watchos': '-mwatchos-version-min',
+ 'watchsimulator': '-mwatchos-simulator-version-min',
+ 'appletvos': '-mtvos-version-min',
+ 'appletvsimulator': '-mtvos-simulator-version-min'}.get(str(os_sdk))
if not flag:
return ''
return "%s=%s" % (flag, os_version)
| diff --git a/conans/test/unittests/util/apple_test.py b/conans/test/unittests/util/apple_test.py
index e36990152b4..24abe18be13 100644
--- a/conans/test/unittests/util/apple_test.py
+++ b/conans/test/unittests/util/apple_test.py
@@ -11,15 +11,18 @@
class FakeSettings(object):
- def __init__(self, _os, _arch):
+ def __init__(self, _os, _arch, _os_sdk=None):
self._os = _os
self._arch = _arch
+ self._os_sdk = _os_sdk
def get_safe(self, name):
if name == 'os':
return self._os
elif name == 'arch':
return self._arch
+ elif name == 'os.sdk':
+ return self._os_sdk
class AppleTest(unittest.TestCase):
@@ -54,6 +57,12 @@ def test_apple_sdk_name(self):
self.assertEqual(tools.apple_sdk_name(FakeSettings('tvOS', 'armv8')), 'appletvos')
self.assertIsNone(tools.apple_sdk_name(FakeSettings('Windows', 'x86')))
+ self.assertEqual(tools.apple_sdk_name(FakeSettings('iOS', 'armv8')), 'iphoneos')
+ self.assertEqual(tools.apple_sdk_name(FakeSettings('iOS', 'armv8', 'iphoneos')),
+ 'iphoneos')
+ self.assertEqual(tools.apple_sdk_name(FakeSettings('iOS', 'armv8', 'iphonesimulator')),
+ 'iphonesimulator')
+
def test_apple_sdk_name_custom_settings(self):
self.assertEqual(tools.apple_sdk_name(FakeSettings('Macos', 'ios_fat')), 'macosx')
self.assertEqual(tools.apple_sdk_name(FakeSettings('iOS', 'ios_fat')), 'iphoneos')
@@ -75,12 +84,37 @@ def test_deployment_target_env_name(self):
def test_deployment_target_flag_name(self):
self.assertEqual(tools.apple_deployment_target_flag('Macos', "10.1"),
'-mmacosx-version-min=10.1')
+
+ self.assertEqual(tools.apple_deployment_target_flag('Macos', "10.1", 'macosx'),
+ '-mmacosx-version-min=10.1')
+
self.assertEqual(tools.apple_deployment_target_flag('iOS', "10.1"),
'-mios-version-min=10.1')
+
+ self.assertEqual(tools.apple_deployment_target_flag('iOS', "10.1", 'iphoneos'),
+ '-mios-version-min=10.1')
+
+ self.assertEqual(tools.apple_deployment_target_flag('iOS', "10.1", 'iphonesimulator'),
+ '-mios-simulator-version-min=10.1')
+
self.assertEqual(tools.apple_deployment_target_flag('watchOS', "10.1"),
'-mwatchos-version-min=10.1')
+
+ self.assertEqual(tools.apple_deployment_target_flag('watchOS', "10.1", 'watchos'),
+ '-mwatchos-version-min=10.1')
+
+ self.assertEqual(tools.apple_deployment_target_flag('watchOS', "10.1", 'watchsimulator'),
+ '-mwatchos-simulator-version-min=10.1')
+
self.assertEqual(tools.apple_deployment_target_flag('tvOS', "10.1"),
- '-mappletvos-version-min=10.1')
+ '-mtvos-version-min=10.1')
+
+ self.assertEqual(tools.apple_deployment_target_flag('tvOS', "10.1", 'appletvos'),
+ '-mtvos-version-min=10.1')
+
+ self.assertEqual(tools.apple_deployment_target_flag('tvOS', "10.1", 'appletvsimulator'),
+ '-mtvos-simulator-version-min=10.1')
+
self.assertEqual('', tools.apple_deployment_target_flag('Solaris', "10.1"))
@pytest.mark.skipif(platform.system() != "Darwin", reason="Requires OSX")
| [
{
"components": [
{
"doc": "",
"lines": [
25,
35
],
"name": "_guess_apple_sdk_name",
"signature": "def _guess_apple_sdk_name(os_, arch):",
"type": "function"
}
],
"file": "conans/client/tools/apple.py"
}
] | [
"conans/test/unittests/util/apple_test.py::AppleTest::test_apple_sdk_name",
"conans/test/unittests/util/apple_test.py::AppleTest::test_deployment_target_flag_name"
] | [
"conans/test/unittests/util/apple_test.py::AppleTest::test_apple_sdk_name_custom_settings",
"conans/test/unittests/util/apple_test.py::AppleTest::test_deployment_target_env_name",
"conans/test/unittests/util/apple_test.py::AppleTest::test_is_apple_os",
"conans/test/unittests/util/apple_test.py::AppleTest::tes... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Feature: add os.sdk sub-settings for Apple
related: #8113
- add `os.sdk` sub-setting for Apple platforms (iOS/watchOS/tvOS/macOS)
- select correct SDK in `apple_sdk_name` based on `os.sdk`
- return correct `apple_deployment_target_flag` for simulators
- as now we have to accept `os.sdk` in tools, they were changed to accept `ConanFile` as a first argument
- `conan_v2_behavior` issued for old syntax (individual settings values instead of `ConanFile`)
it makes it possible to distinguish ARMv8 iPhone and ARMv8 iPhone Simulator (M1) binaries (they are really different, incompatible binaries).
/cc @theodelrieu
Changelog: Feature: Add os.sdk sub-settings for Apple
Docs: https://github.com/conan-io/docs/pull/1981
- [x] Refer to the issue that supports this Pull Request.
- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've followed the PEP8 style guides for Python code.
- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in conans/client/tools/apple.py]
(definition of _guess_apple_sdk_name:)
def _guess_apple_sdk_name(os_, arch):
[end of new definitions in conans/client/tools/apple.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 4a5b19a75db9225316c8cb022a2dfb9705a2af34 | ||
sympy__sympy-20659 | 20,659 | sympy/sympy | 1.8 | dd9188277caada7161b64bfb56cf900b5480772b | 2020-12-25T08:14:58Z | diff --git a/sympy/core/__init__.py b/sympy/core/__init__.py
index 43d7e0525eb3..dfca31f902c4 100644
--- a/sympy/core/__init__.py
+++ b/sympy/core/__init__.py
@@ -28,6 +28,7 @@
from .containers import Tuple, Dict
from .exprtools import gcd_terms, factor_terms, factor_nc
from .parameters import evaluate
+from .kind import UndefinedKind, NumberKind, BooleanKind
# expose singletons
Catalan = S.Catalan
@@ -88,4 +89,6 @@
'EulerGamma',
'GoldenRatio',
'TribonacciConstant',
+
+ 'UndefinedKind', 'NumberKind', 'BooleanKind',
]
diff --git a/sympy/core/add.py b/sympy/core/add.py
index 6a3d10ac0e66..f340169d90d7 100644
--- a/sympy/core/add.py
+++ b/sympy/core/add.py
@@ -70,6 +70,85 @@ def _unevaluated_Add(*args):
class Add(Expr, AssocOp):
+ """
+ Expression representing addition operation for algebraic group.
+
+ Every argument of ``Add()`` must be ``Expr``. Infix operator ``+``
+ on most scalar objects in SymPy calls this class.
+
+ Another use of ``Add()`` is to represent the structure of abstract
+ addition so that its arguments can be substituted to return different
+ class. Refer to examples section for this.
+
+ ``Add()`` evaluates the argument unless ``evaluate=False`` is passed.
+ The evaluation logic includes:
+
+ 1. Flattening
+ ``Add(x, Add(y, z))`` -> ``Add(x, y, z)``
+
+ 2. Identity removing
+ ``Add(x, 0, y)`` -> ``Add(x, y)``
+
+ 3. Coefficient collecting by ``.as_coeff_Mul()``
+ ``Add(x, 2*x)`` -> ``Mul(3, x)``
+
+ 4. Term sorting
+ ``Add(y, x, 2)`` -> ``Add(2, x, y)``
+
+ If no argument is passed, identity element 0 is returned. If single
+ element is passed, that element is returned.
+
+ Note that ``Add(*args)`` is more efficient than ``sum(args)`` because
+ it flattens the arguments. ``sum(a, b, c, ...)`` recursively adds the
+ arguments as ``a + (b + (c + ...))``, which has quadratic complexity.
+ On the other hand, ``Add(a, b, c, d)`` does not assume nested
+ structure, making the complexity linear.
+
+ Since addition is group operation, every argument should have the
+ same :obj:`sympy.core.kind.Kind()`.
+
+ Examples
+ ========
+
+ >>> from sympy import Add, I
+ >>> from sympy.abc import x, y
+ >>> Add(x, 1)
+ x + 1
+ >>> Add(x, x)
+ 2*x
+ >>> 2*x**2 + 3*x + I*y + 2*y + 2*x/5 + 1.0*y + 1
+ 2*x**2 + 17*x/5 + 3.0*y + I*y + 1
+
+ If ``evaluate=False`` is passed, result is not evaluated.
+
+ >>> Add(1, 2, evaluate=False)
+ 1 + 2
+ >>> Add(x, x, evaluate=False)
+ x + x
+
+ ``Add()`` also represents the general structure of addition operation.
+
+ >>> from sympy import MatrixSymbol
+ >>> A,B = MatrixSymbol('A', 2,2), MatrixSymbol('B', 2,2)
+ >>> expr = Add(x,y).subs({x:A, y:B})
+ >>> expr
+ A + B
+ >>> type(expr)
+ <class 'sympy.matrices.expressions.matadd.MatAdd'>
+
+ Note that the printers don't display in args order.
+
+ >>> Add(x, 1)
+ x + 1
+ >>> Add(x, 1).args
+ (1, x)
+
+ See Also
+ ========
+
+ MatAdd
+
+ """
__slots__ = ()
diff --git a/sympy/core/kind.py b/sympy/core/kind.py
index a86898d62c13..f1935b541a81 100644
--- a/sympy/core/kind.py
+++ b/sympy/core/kind.py
@@ -25,6 +25,13 @@
See https://github.com/sympy/sympy/pull/20549.
"""
+from collections import defaultdict
+
+from sympy.core.cache import cacheit
+from sympy.multipledispatch.dispatcher import (Dispatcher,
+ ambiguity_warn, ambiguity_register_error_ignore_dup,
+ str_signature, RaiseNotImplementedError)
+
class KindMeta(type):
"""
@@ -74,7 +81,7 @@ def __new__(cls, *args):
return inst
-class UndefinedKind(Kind):
+class _UndefinedKind(Kind):
"""
Default kind for all SymPy object. If the kind is not defined for
the object, or if the object cannot infer the kind from its
@@ -93,10 +100,10 @@ def __new__(cls):
def __repr__(self):
return "UndefinedKind"
-UndefinedKind = UndefinedKind()
+UndefinedKind = _UndefinedKind()
-class NumberKind(Kind):
+class _NumberKind(Kind):
"""
Kind for all numeric object.
@@ -152,10 +159,10 @@ def __new__(cls):
def __repr__(self):
return "NumberKind"
-NumberKind = NumberKind()
+NumberKind = _NumberKind()
-class BooleanKind(Kind):
+class _BooleanKind(Kind):
"""
Kind for boolean objects.
@@ -177,4 +184,204 @@ def __new__(cls):
def __repr__(self):
return "BooleanKind"
-BooleanKind = BooleanKind()
+BooleanKind = _BooleanKind()
+
+
+class KindDispatcher:
+ """
+ Dispatcher to select a kind from multiple kinds by binary dispatching.
+
+ .. notes::
+ This approach is experimental, and can be replaced or deleted in
+ the future.
+
+ Explanation
+ ===========
+
+ SymPy object's :obj:`sympy.core.kind.Kind()` vaguely represents the
+ algebraic structure where the object belongs to. Therefore, with
+ given operation, we can always find a dominating kind among the
+ different kinds. This class selects the kind by recursive binary
+ dispatching. If the result cannot be determined, ``UndefinedKind``
+ is returned.
+
+ Examples
+ ========
+
+ Multiplication between numbers return number.
+
+ >>> from sympy import Mul
+ >>> from sympy.core import NumberKind
+ >>> Mul._kind_dispatcher(NumberKind, NumberKind)
+ NumberKind
+
+ Multiplication between number and unknown-kind object returns unknown kind.
+
+ >>> from sympy.core import UndefinedKind
+ >>> Mul._kind_dispatcher(NumberKind, UndefinedKind)
+ UndefinedKind
+
+ Any number and order of kinds is allowed.
+
+ >>> Mul._kind_dispatcher(UndefinedKind, NumberKind)
+ UndefinedKind
+ >>> Mul._kind_dispatcher(NumberKind, UndefinedKind, NumberKind)
+ UndefinedKind
+
+ Since matrix forms a vector space over scalar field, multiplication
+ between matrix with numeric element and number returns matrix with
+ numeric element.
+
+ >>> from sympy.matrices import MatrixKind
+ >>> Mul._kind_dispatcher(MatrixKind(NumberKind), NumberKind)
+ MatrixKind(NumberKind)
+
+ If a matrix with number element and another matrix with unknown-kind
+ element are multiplied, we know that the result is matrix but the
+ kind of its elements is unknown.
+
+ >>> Mul._kind_dispatcher(MatrixKind(NumberKind), MatrixKind(UndefinedKind))
+ MatrixKind(UndefinedKind)
+
+ Parameters
+ ==========
+
+ name : str
+
+ commutative : bool, optional
+ If True, binary dispatch will be automatically registered in
+ reversed order as well.
+
+ doc : str, optional
+
+ """
+ def __init__(self, name, commutative=False, doc=None):
+ self.name = name
+ self.doc = doc
+ self.commutative = commutative
+ self._dispatcher = Dispatcher(name)
+
+ def __repr__(self):
+ return "<dispatched %s>" % self.name
+
+ def register(self, *types, **kwargs):
+ """
+ Register the binary dispatcher for two kind classes.
+
+ If *self.commutative* is ``True``, signature in reversed order is
+ automatically registered as well.
+ """
+ on_ambiguity = kwargs.pop("on_ambiguity", None)
+ if not on_ambiguity:
+ if self.commutative:
+ on_ambiguity = ambiguity_register_error_ignore_dup
+ else:
+ on_ambiguity = ambiguity_warn
+ kwargs.update(on_ambiguity=on_ambiguity)
+
+ if not len(types) == 2:
+ raise RuntimeError(
+ "Only binary dispatch is supported, but got %s types: <%s>." % (
+ len(types), str_signature(types)
+ ))
+
+ def _(func):
+ self._dispatcher.add(types, func, **kwargs)
+ if self.commutative:
+ self._dispatcher.add(tuple(reversed(types)), func, **kwargs)
+ return _
+
+ def __call__(self, *args, **kwargs):
+ if self.commutative:
+ kinds = frozenset(args)
+ else:
+ kinds = []
+ prev = None
+ for a in args:
+ if prev is not a:
+ kinds.append(a)
+ prev = a
+ return self.dispatch_kinds(kinds, **kwargs)
+
+ @cacheit
+ def dispatch_kinds(self, kinds, **kwargs):
+ # Quick exit for the case where all kinds are same
+ if len(kinds) == 1:
+ result, = kinds
+ if not isinstance(result, Kind):
+ raise RuntimeError("%s is not a kind." % result)
+ return result
+
+ for i,kind in enumerate(kinds):
+ if not isinstance(kind, Kind):
+ raise RuntimeError("%s is not a kind." % kind)
+
+ if i == 0:
+ result = kind
+ else:
+ prev_kind = result
+
+ t1, t2 = type(prev_kind), type(kind)
+ func = self._dispatcher.dispatch(t1, t2)
+ if func is None and self.commutative:
+ # try reversed order
+ func = self._dispatcher.dispatch(t2, t1)
+ if func is None:
+ # unregistered kind relation
+ result = UndefinedKind
+ else:
+ result = func(prev_kind, kind)
+ if not isinstance(result, Kind):
+ raise RuntimeError(
+ "Dispatcher for {!r} and {!r} must return a Kind, but got {!r}".format(
+ prev_kind, kind, result
+ ))
+
+ return result
+
+ @property
+ def __doc__(self):
+ docs = [
+ "Kind dispatcher : %s" % self.name,
+ "Note that support for this is experimental. See the docs for :class:`KindDispatcher` for details"
+ ]
+
+ if self.doc:
+ docs.append(self.doc)
+
+ s = "Registered kind classes\n"
+ s += '=' * len(s)
+ docs.append(s)
+
+ amb_sigs = []
+
+ typ_sigs = defaultdict(list)
+ for sigs in self._dispatcher.ordering[::-1]:
+ key = self._dispatcher.funcs[sigs]
+ typ_sigs[key].append(sigs)
+
+ for func, sigs in typ_sigs.items():
+
+ sigs_str = ', '.join('<%s>' % str_signature(sig) for sig in sigs)
+
+ if isinstance(func, RaiseNotImplementedError):
+ amb_sigs.append(sigs_str)
+ continue
+
+ s = 'Inputs: %s\n' % sigs_str
+ s += '-' * len(s) + '\n'
+ if func.__doc__:
+ s += func.__doc__.strip()
+ else:
+ s += func.__name__
+ docs.append(s)
+
+ if amb_sigs:
+ s = "Ambiguous kind classes\n"
+ s += '=' * len(s)
+ docs.append(s)
+
+ s = '\n'.join(amb_sigs)
+ docs.append(s)
+
+ return '\n\n'.join(docs)
diff --git a/sympy/core/mul.py b/sympy/core/mul.py
index 531770aa7573..472fc377c0dd 100644
--- a/sympy/core/mul.py
+++ b/sympy/core/mul.py
@@ -10,7 +10,7 @@
from .logic import fuzzy_not, _fuzzy_group
from .expr import Expr
from .parameters import global_parameters
-
+from .kind import KindDispatcher
# internal marker to indicate:
@@ -86,12 +86,80 @@ def _unevaluated_Mul(*args):
class Mul(Expr, AssocOp):
+ """
+ Expression representing multiplication operation for algebraic field.
+
+ Every argument of ``Mul()`` must be ``Expr``. Infix operator ``*``
+ on most scalar objects in SymPy calls this class.
+
+ Another use of ``Mul()`` is to represent the structure of abstract
+ multiplication so that its arguments can be substituted to return
+ different class. Refer to examples section for this.
+
+ ``Mul()`` evaluates the argument unless ``evaluate=False`` is passed.
+ The evaluation logic includes:
+
+ 1. Flattening
+ ``Mul(x, Mul(y, z))`` -> ``Mul(x, y, z)``
+
+ 2. Identity removing
+ ``Mul(x, 1, y)`` -> ``Mul(x, y)``
+ 3. Exponent collecting by ``.as_base_exp()``
+ ``Mul(x, x**2)`` -> ``Pow(x, 3)``
+
+ 4. Term sorting
+ ``Mul(y, x, 2)`` -> ``Mul(2, x, y)``
+
+ Since multiplication can be vector space operation, arguments may
+ have the different :obj:`sympy.core.kind.Kind()`. Kind of the
+ resulting object is automatically inferred.
+
+ Examples
+ ========
+
+ >>> from sympy import Mul
+ >>> from sympy.abc import x, y
+ >>> Mul(x, 1)
+ x
+ >>> Mul(x, x)
+ x**2
+
+ If ``evaluate=False`` is passed, result is not evaluated.
+
+ >>> Mul(1, 2, evaluate=False)
+ 1*2
+ >>> Mul(x, x, evaluate=False)
+ x*x
+
+ ``Mul()`` also represents the general structure of multiplication
+ operation.
+
+ >>> from sympy import MatrixSymbol
+ >>> A = MatrixSymbol('A', 2,2)
+ >>> expr = Mul(x,y).subs({y:A})
+ >>> expr
+ x*A
+ >>> type(expr)
+ <class 'sympy.matrices.expressions.matmul.MatMul'>
+
+ See Also
+ ========
+
+ MatMul
+
+ """
__slots__ = ()
is_Mul = True
_args_type = Expr
+ _kind_dispatcher = KindDispatcher("Mul_kind_dispatcher", commutative=True)
+
+ @property
+ def kind(self):
+ arg_kinds = (a.kind for a in self.args)
+ return self._kind_dispatcher(*arg_kinds)
def __neg__(self):
c, args = self.as_coeff_mul()
@@ -1927,6 +1995,7 @@ def _sorted_args(self):
mul = AssocOpDispatcher('mul')
+
def prod(a, start=1):
"""Return product of elements of a. Start with int 1 so if only
ints are included then an int result is returned.
diff --git a/sympy/matrices/matrices.py b/sympy/matrices/matrices.py
index 1e6e0805c4bc..571c304249e2 100644
--- a/sympy/matrices/matrices.py
+++ b/sympy/matrices/matrices.py
@@ -6,6 +6,7 @@
Callable, NotIterable, as_int, is_sequence)
from sympy.core.decorators import deprecated
from sympy.core.expr import Expr
+from sympy.core.mul import Mul
from sympy.core.power import Pow
from sympy.core.singleton import S
from sympy.core.symbol import Dummy, Symbol, uniquely_named_symbol
@@ -18,7 +19,7 @@
from sympy.printing import sstr
from sympy.printing.defaults import Printable
from sympy.simplify import simplify as _simplify
-from sympy.core.kind import Kind, NumberKind
+from sympy.core.kind import Kind, NumberKind, _NumberKind
from sympy.utilities.exceptions import SymPyDeprecationWarning
from sympy.utilities.iterables import flatten
from sympy.utilities.misc import filldedent
@@ -783,6 +784,25 @@ def __repr__(self):
return "MatrixKind(%s)" % self.element_kind
+@Mul._kind_dispatcher.register(_NumberKind, MatrixKind)
+def num_mat_mul(k1, k2):
+ """
+ Return MatrixKind. The element kind is selected by recursive dispatching.
+ Do not need to dispatch in reversed order because KindDispatcher
+ searches for this automatically.
+ """
+ # Deal with Mul._kind_dispatcher's commutativity
+ elemk = Mul._kind_dispatcher(NumberKind, k2.element_kind)
+ return MatrixKind(elemk)
+
+@Mul._kind_dispatcher.register(MatrixKind, MatrixKind)
+def mat_mat_mul(k1, k2):
+ """
+ Return MatrixKind. The element kind is selected by recursive dispatching.
+ """
+ elemk = Mul._kind_dispatcher(k1.element_kind, k2.element_kind)
+ return MatrixKind(elemk)
+
class MatrixBase(MatrixDeprecated,
MatrixCalculus,
MatrixEigen,
| diff --git a/sympy/core/tests/test_kind.py b/sympy/core/tests/test_kind.py
index fd056f138ba9..eabf9ad0c40f 100644
--- a/sympy/core/tests/test_kind.py
+++ b/sympy/core/tests/test_kind.py
@@ -1,11 +1,12 @@
from sympy.core.add import Add
from sympy.core.kind import NumberKind, UndefinedKind
+from sympy.core.mul import Mul
from sympy.core.numbers import pi, zoo, I, AlgebraicNumber
from sympy.core.singleton import S
from sympy.core.symbol import Symbol
from sympy.integrals.integrals import Integral
from sympy.matrices import (Matrix, SparseMatrix, ImmutableMatrix,
- ImmutableSparseMatrix, MatrixSymbol, MatrixKind)
+ ImmutableSparseMatrix, MatrixSymbol, MatrixKind, MatMul)
comm_x = Symbol('x')
noncomm_x = Symbol('x', commutative=False)
@@ -23,6 +24,12 @@ def test_Add_kind():
assert Add(2,comm_x).kind is NumberKind
assert Add(2,noncomm_x).kind is UndefinedKind
+def test_mul_kind():
+ assert Mul(2,comm_x, evaluate=False).kind is NumberKind
+ assert Mul(2,3, evaluate=False).kind is NumberKind
+ assert Mul(noncomm_x,2, evaluate=False).kind is UndefinedKind
+ assert Mul(2,noncomm_x, evaluate=False).kind is UndefinedKind
+
def test_Symbol_kind():
assert comm_x.kind is NumberKind
assert noncomm_x.kind is UndefinedKind
@@ -37,3 +44,8 @@ def test_Matrix_kind():
for cls in classes:
m = cls.zeros(3, 2)
assert m.kind is MatrixKind(NumberKind)
+
+def test_MatMul_kind():
+ M = Matrix([[1,2],[3,4]])
+ assert MatMul(2, M).kind is MatrixKind(NumberKind)
+ assert MatMul(comm_x, M).kind is MatrixKind(NumberKind)
| [
{
"components": [
{
"doc": "Default kind for all SymPy object. If the kind is not defined for\nthe object, or if the object cannot infer the kind from its\narguments, this will be returned.\n\nExamples\n========\n\n>>> from sympy import Expr\n>>> Expr().kind\nUndefinedKind",
"lines": [
... | [
"test_mul_kind"
] | [
"test_NumberKind",
"test_Add_kind",
"test_Symbol_kind",
"test_Integral_kind",
"test_Matrix_kind"
] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Introduce KindDispatcher
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
#### Brief description of what is fixed or changed
`KindDispatcher` is introduced to infer the kind of an associative operation. This class selects a single kind among multiple kinds, making associative operators able to infer their kind from their arguments. The selection logic is based on multipledispatch, and new rules can be registered.
`Mul.kind` is introduced using this dispatcher.
Docstrings were added for `Add` and `Mul`.
#### Other comments
Please see docstring of `KindDispatcher` for the examples.
Since `kind` system is introduced, multiply dispatched `add`, `mul` and `pow` functions need to work based on this. It will be done in succeeding PRs.
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
NO ENTRY
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/core/kind.py]
(definition of _UndefinedKind:)
class _UndefinedKind(Kind):
"""Default kind for all SymPy object. If the kind is not defined for
the object, or if the object cannot infer the kind from its
arguments, this will be returned.
Examples
========
>>> from sympy import Expr
>>> Expr().kind
UndefinedKind"""
(definition of _UndefinedKind.__new__:)
def __new__(cls):
(definition of _UndefinedKind.__repr__:)
def __repr__(self):
(definition of _NumberKind:)
class _NumberKind(Kind):
"""Kind for all numeric object.
This kind represents every number, including complex numbers,
infinity and ``S.NaN``. Other objects such as quaternions do not
have this kind.
Most ``Expr`` are initially designed to represent the number, so
this will be the most common kind in SymPy core. For example
``Symbol()``, which represents a scalar, has this kind as long as it
is commutative.
Numbers form a field. Any operation between number-kind objects will
result this kind as well.
Examples
========
>>> from sympy import S, oo, Symbol
>>> S.One.kind
NumberKind
>>> (-oo).kind
NumberKind
>>> S.NaN.kind
NumberKind
Commutative symbol are treated as number.
>>> x = Symbol('x')
>>> x.kind
NumberKind
>>> Symbol('y', commutative=False).kind
UndefinedKind
Operation between numbers results number.
>>> (x+1).kind
NumberKind
See Also
========
sympy.core.expr.Expr.is_Number : check if the object is strictly
subclass of ``Number`` class.
sympy.core.expr.Expr.is_number : check if the object is number
without any free symbol."""
(definition of _NumberKind.__new__:)
def __new__(cls):
(definition of _NumberKind.__repr__:)
def __repr__(self):
(definition of _BooleanKind:)
class _BooleanKind(Kind):
"""Kind for boolean objects.
SymPy's ``S.true``, ``S.false``, and built-in ``True`` and ``False``
have this kind. Boolean numbers ``1`` and ``0`` are not relevant.
Examples
========
>>> from sympy import S, Q
>>> S.true.kind
BooleanKind
>>> Q.even(3).kind
BooleanKind"""
(definition of _BooleanKind.__new__:)
def __new__(cls):
(definition of _BooleanKind.__repr__:)
def __repr__(self):
(definition of KindDispatcher:)
class KindDispatcher:
"""Dispatcher to select a kind from multiple kinds by binary dispatching.
.. notes::
This approach is experimental, and can be replaced or deleted in
the future.
Explanation
===========
SymPy object's :obj:`sympy.core.kind.Kind()` vaguely represents the
algebraic structure where the object belongs to. Therefore, with
given operation, we can always find a dominating kind among the
different kinds. This class selects the kind by recursive binary
dispatching. If the result cannot be determined, ``UndefinedKind``
is returned.
Examples
========
Multiplication between numbers return number.
>>> from sympy import Mul
>>> from sympy.core import NumberKind
>>> Mul._kind_dispatcher(NumberKind, NumberKind)
NumberKind
Multiplication between number and unknown-kind object returns unknown kind.
>>> from sympy.core import UndefinedKind
>>> Mul._kind_dispatcher(NumberKind, UndefinedKind)
UndefinedKind
Any number and order of kinds is allowed.
>>> Mul._kind_dispatcher(UndefinedKind, NumberKind)
UndefinedKind
>>> Mul._kind_dispatcher(NumberKind, UndefinedKind, NumberKind)
UndefinedKind
Since matrix forms a vector space over scalar field, multiplication
between matrix with numeric element and number returns matrix with
numeric element.
>>> from sympy.matrices import MatrixKind
>>> Mul._kind_dispatcher(MatrixKind(NumberKind), NumberKind)
MatrixKind(NumberKind)
If a matrix with number element and another matrix with unknown-kind
element are multiplied, we know that the result is matrix but the
kind of its elements is unknown.
>>> Mul._kind_dispatcher(MatrixKind(NumberKind), MatrixKind(UndefinedKind))
MatrixKind(UndefinedKind)
Parameters
==========
name : str
commutative : bool, optional
If True, binary dispatch will be automatically registered in
reversed order as well.
doc : str, optional"""
(definition of KindDispatcher.__init__:)
def __init__(self, name, commutative=False, doc=None):
(definition of KindDispatcher.__repr__:)
def __repr__(self):
(definition of KindDispatcher.register:)
def register(self, *types, **kwargs):
"""Register the binary dispatcher for two kind classes.
If *self.commutative* is ``True``, signature in reversed order is
automatically registered as well."""
(definition of KindDispatcher.register._:)
def _(func):
(definition of KindDispatcher.__call__:)
def __call__(self, *args, **kwargs):
(definition of KindDispatcher.dispatch_kinds:)
def dispatch_kinds(self, kinds, **kwargs):
(definition of KindDispatcher.__doc__:)
def __doc__(self):
[end of new definitions in sympy/core/kind.py]
[start of new definitions in sympy/core/mul.py]
(definition of Mul.kind:)
def kind(self):
[end of new definitions in sympy/core/mul.py]
[start of new definitions in sympy/matrices/matrices.py]
(definition of num_mat_mul:)
def num_mat_mul(k1, k2):
"""Return MatrixKind. The element kind is selected by recursive dispatching.
Do not need to dispatch in reversed order because KindDispatcher
searches for this automatically."""
(definition of mat_mat_mul:)
def mat_mat_mul(k1, k2):
"""Return MatrixKind. The element kind is selected by recursive dispatching."""
[end of new definitions in sympy/matrices/matrices.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 284db64e44124856accce4abff75d2d8990c301d | ||
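The dispatching idea described in this request can be illustrated with a minimal standalone sketch. The class and rule names below are simplified stand-ins — the real implementation builds on sympy's `multipledispatch` machinery — but the reduction strategy (pairwise binary dispatch with a fall-back to an undefined kind for unregistered pairs) is the one the docstring above describes.

```python
class Kind:
    def __repr__(self):
        return type(self).__name__


class NumberKind(Kind):
    pass


class UndefinedKind(Kind):
    pass


NUMBER, UNDEFINED = NumberKind(), UndefinedKind()


class KindDispatcher:
    """Reduce a sequence of kinds pairwise via registered binary rules;
    any unregistered pair falls back to the undefined kind."""

    def __init__(self, commutative=False):
        self.commutative = commutative
        self._rules = {}

    def register(self, t1, t2):
        def decorator(func):
            self._rules[(t1, t2)] = func
            if self.commutative:
                # Register the reversed signature automatically.
                self._rules[(t2, t1)] = func
            return func
        return decorator

    def __call__(self, *kinds):
        result = kinds[0]
        for kind in kinds[1:]:
            rule = self._rules.get((type(result), type(kind)))
            result = rule(result, kind) if rule else UNDEFINED
        return result


mul_kind = KindDispatcher(commutative=True)


@mul_kind.register(NumberKind, NumberKind)
def _number_times_number(k1, k2):
    # Numbers form a field: number * number stays a number.
    return NUMBER
```

With this sketch, `mul_kind(NUMBER, NUMBER)` yields the number kind, while any combination involving `UNDEFINED` (in either order, thanks to the commutative registration) collapses to the undefined kind — mirroring the `Mul._kind_dispatcher` examples in the PR docstring.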
sympy__sympy-20656 | 20,656 | sympy/sympy | 1.8 | 5288eb336d12e4f3c3070bce67d75f38ab4d8376 | 2020-12-24T06:35:06Z | diff --git a/doc/src/modules/assumptions/handlers/index.rst b/doc/src/modules/assumptions/handlers/index.rst
index 48781760ed35..31f9f79030f2 100644
--- a/doc/src/modules/assumptions/handlers/index.rst
+++ b/doc/src/modules/assumptions/handlers/index.rst
@@ -10,8 +10,8 @@ Contents
.. toctree::
:maxdepth: 3
- calculus.rst
common.rst
+ calculus.rst
matrices.rst
ntheory.rst
order.rst
diff --git a/doc/src/modules/assumptions/index.rst b/doc/src/modules/assumptions/index.rst
index 022fab67a5ed..96abb75e529c 100644
--- a/doc/src/modules/assumptions/index.rst
+++ b/doc/src/modules/assumptions/index.rst
@@ -17,30 +17,42 @@ Contents
refine.rst
handlers/index.rst
-Queries are used to ask information about expressions. Main method for this
-is ask():
-.. autofunction:: sympy.assumptions.ask::ask
+Predicates
+==========
+
+
+.. autoclass:: sympy.assumptions.assume::Predicate
:noindex:
+.. autoclass:: sympy.assumptions.assume::AppliedPredicate
+ :noindex:
+
+
Querying
========
-ask's optional second argument should be a boolean expression involving
-assumptions about objects in expr. Valid values include:
+Queries are used to ask information about expressions. Main method for this
+is ``ask()``:
- * Q.integer(x)
- * Q.positive(x)
- * Q.integer(x) & Q.positive(x)
+.. autofunction:: sympy.assumptions.ask::ask
+ :noindex:
+
+``ask``'s optional second argument should be a boolean expression involving
+assumptions about objects in *expr*. Valid values include:
+
+ * ``Q.integer(x)``
+ * ``Q.positive(x)``
+ * ``Q.integer(x) & Q.positive(x)``
* etc.
-Q is an object holding known predicates.
+``Q`` is an object holding known predicates.
See documentation for the logic module for a complete list of valid boolean
expressions.
You can also define a context so you don't have to pass that argument
-each time to function ask(). This is done by using the assuming context manager
+each time to function ``ask()``. This is done by using the assuming context manager
from module sympy.assumptions. ::
>>> from sympy import *
@@ -52,68 +64,6 @@ from module sympy.assumptions. ::
True
-Design
-======
-
-Each time ask is called, the appropriate Handler for the current key is called. This is
-always a subclass of sympy.assumptions.AskHandler. Its classmethods have the names of the classes
-it supports. For example, a (simplified) AskHandler for the ask 'positive' would
-look like this::
-
- class AskPositiveHandler(CommonHandler):
-
- def Mul(self):
- # return True if all argument's in self.expr.args are positive
- ...
-
- def Add(self):
- for arg in self.expr.args:
- if not ask(arg, positive, self.assumptions):
- break
- else:
- # if all argument's are positive
- return True
- ...
-
-The .Mul() method is called when self.expr is an instance of Mul, the Add method
-would be called when self.expr is an instance of Add and so on.
-
-
-Extensibility
-=============
-
-You can define new queries or support new types by subclassing sympy.assumptions.AskHandler
- and registering that handler for a particular key by calling register_handler:
-
-.. autofunction:: sympy.assumptions.ask::register_handler
- :noindex:
-
-You can undo this operation by calling remove_handler.
-
-.. autofunction:: sympy.assumptions.ask::remove_handler
- :noindex:
-
-You can support new types [1]_ by adding a handler to an existing key. In the
-following example, we will create a new type MyType and extend the key 'prime'
-to accept this type (and return True)
-
-.. parsed-literal::
-
- >>> from sympy.core import Basic
- >>> from sympy.assumptions import register_handler
- >>> from sympy.assumptions.handlers import AskHandler
- >>> class MyType(Basic):
- ... pass
- >>> class MyAskHandler(AskHandler):
- ... @staticmethod
- ... def MyType(expr, assumptions):
- ... return True
- >>> a = MyType()
- >>> register_handler('prime', MyAskHandler)
- >>> ask(Q.prime(a))
- True
-
-
Performance improvements
========================
@@ -130,6 +80,3 @@ Misc
You can find more examples in the in the form of test under directory
sympy/assumptions/tests/
-
-.. [1] New type must inherit from Basic, otherwise an exception will be raised.
- This is a bug and should be fixed.
diff --git a/sympy/assumptions/__init__.py b/sympy/assumptions/__init__.py
index 087936b5243b..6289cee5d91a 100644
--- a/sympy/assumptions/__init__.py
+++ b/sympy/assumptions/__init__.py
@@ -1,9 +1,16 @@
-from .assume import AppliedPredicate, Predicate, AssumptionsContext, assuming
+"""
+A module to implement logical predicates and assumption system.
+"""
+
+from .assume import (
+ AppliedPredicate, Predicate, AssumptionsContext, assuming,
+ global_assumptions
+)
from .ask import Q, ask, register_handler, remove_handler
from .refine import refine
__all__ = [
'AppliedPredicate', 'Predicate', 'AssumptionsContext', 'assuming',
- 'Q', 'ask', 'register_handler', 'remove_handler',
+ 'global_assumptions', 'Q', 'ask', 'register_handler', 'remove_handler',
'refine',
]
diff --git a/sympy/assumptions/ask.py b/sympy/assumptions/ask.py
index f996989fe93e..1a2b22a0f021 100644
--- a/sympy/assumptions/ask.py
+++ b/sympy/assumptions/ask.py
@@ -5,8 +5,8 @@
from sympy.core import sympify
from sympy.core.cache import cacheit
from sympy.core.relational import Relational
-from sympy.logic.boolalg import (to_cnf, And, Not, Or, Implies, Equivalent,
- BooleanFunction, BooleanAtom)
+from sympy.core.kind import BooleanKind
+from sympy.logic.boolalg import (to_cnf, And, Not, Or, Implies, Equivalent)
from sympy.logic.inference import satisfiable
from sympy.utilities.decorator import memoize_property
from sympy.assumptions.cnf import CNF, EncodedCNF, Literal
@@ -19,10 +19,15 @@
class AssumptionKeys:
"""
- This class contains all the supported keys by ``ask``. It should be accessed via the instance ``sympy.Q``.
+ This class contains all the supported keys by ``ask``.
+ It should be accessed via the instance ``sympy.Q``.
"""
+ # DO NOT add methods or properties other than predicate keys.
+ # SAT solver checks the properties of Q and use them to compute the
+ # fact system. Non-predicate attributes will break this.
+
@memoize_property
def hermitian(self):
"""
@@ -1301,12 +1306,12 @@ def _extract_facts(expr, symbol, check_reversed_rel=True):
return expr.func(*args)
-def _extract_all_facts(expr, symbol):
+def _extract_all_facts(expr, symbols):
facts = set()
- if isinstance(symbol, Relational):
- symbols = (symbol, symbol.reversed)
- else:
- symbols = (symbol,)
+ if len(symbols) == 1 and isinstance(symbols[0], Relational):
+ rel = symbols[0]
+ symbols = (rel, rel.reversed)
+
for clause in expr.clauses:
args = []
for literal in clause:
@@ -1325,18 +1330,31 @@ def _extract_all_facts(expr, symbol):
def ask(proposition, assumptions=True, context=global_assumptions):
"""
- Method for inferring properties about objects.
-
- Explanation
- ===========
+ Function to evaluate the proposition with assumptions.
**Syntax**
* ask(proposition)
+ Evaluate the *proposition* in global assumption context.
* ask(proposition, assumptions)
+ Evaluate the *proposition* with respect to *assumptions* in
+ global assumption context.
+
+ Parameters
+ ==========
+
+ proposition : any boolean expression
+ Proposition which will be evaluated to boolean value. If this is
+ not ``AppliedPredicate``, it will be wrapped by ``Q.is_true``.
+
+ assumptions : any boolean expression, optional
+ Local assumptions to evaluate the *proposition*.
+
+ context : AssumptionsContext, optional
+ Default assumptions to evaluate the *proposition*. By default,
+ this is ``sympy.assumptions.global_assumptions`` variable.
- where ``proposition`` is any boolean expression
Examples
========
@@ -1350,7 +1368,13 @@ def ask(proposition, assumptions=True, context=global_assumptions):
>>> ask(Q.prime(4*x), Q.integer(x))
False
+ If the truth value cannot be determined, ``None`` will be returned.
+
+ >>> print(ask(Q.odd(3*x))) # cannot determine unless we know x
+ None
+
**Remarks**
+
Relations in assumptions are not implemented (yet), so the following
will not give a meaningful result.
@@ -1361,21 +1385,24 @@ def ask(proposition, assumptions=True, context=global_assumptions):
"""
from sympy.assumptions.satask import satask
- if not isinstance(proposition, (BooleanFunction, AppliedPredicate, bool, BooleanAtom)):
+ proposition = sympify(proposition)
+ assumptions = sympify(assumptions)
+
+ if isinstance(proposition, Predicate) or proposition.kind is not BooleanKind:
raise TypeError("proposition must be a valid logical expression")
- if not isinstance(assumptions, (BooleanFunction, AppliedPredicate, bool, BooleanAtom)):
+ if isinstance(assumptions, Predicate) or assumptions.kind is not BooleanKind:
raise TypeError("assumptions must be a valid logical expression")
if isinstance(proposition, AppliedPredicate):
- key, expr = proposition.func, sympify(proposition.arg)
+ key, args = proposition.function, proposition.arguments
else:
- key, expr = Q.is_true, sympify(proposition)
+ key, args = Q.is_true, (proposition,)
assump = CNF.from_prop(assumptions)
assump.extend(context)
- local_facts = _extract_all_facts(assump, expr)
+ local_facts = _extract_all_facts(assump, args)
known_facts_cnf = get_all_known_facts()
known_facts_dict = get_known_facts_dict()
@@ -1405,7 +1432,7 @@ def ask(proposition, assumptions=True, context=global_assumptions):
return False
# direct resolution method, no logic
- res = key(expr)._eval_ask(assumptions)
+ res = key(*args)._eval_ask(assumptions)
if res is not None:
return bool(res)
# using satask (still costly)
@@ -1443,8 +1470,9 @@ class inheriting from AskHandler::
True
"""
- if type(key) is Predicate:
- key = key.name
+ # Will be deprecated
+ if isinstance(key, Predicate):
+ key = key.name.name
Qkey = getattr(Q, key, None)
if Qkey is not None:
Qkey.add_handler(handler)
@@ -1454,8 +1482,9 @@ class inheriting from AskHandler::
def remove_handler(key, handler):
"""Removes a handler from the ask system. Same syntax as register_handler"""
- if type(key) is Predicate:
- key = key.name
+ # Will be deprecated
+ if isinstance(key, Predicate):
+ key = key.name.name
getattr(Q, key).remove_handler(handler)
@@ -1543,7 +1572,7 @@ def get_known_facts_dict():
_val_template = 'sympy.assumptions.handlers.%s'
_handlers = [
("antihermitian", "sets.AskAntiHermitianHandler"),
- ("finite", "calculus.AskFiniteHandler"),
+ ("finite", "calculus.AskFiniteHandler"),
("commutative", "AskCommutativeHandler"),
("complex", "sets.AskComplexHandler"),
("composite", "ntheory.AskCompositeHandler"),
diff --git a/sympy/assumptions/assume.py b/sympy/assumptions/assume.py
index 101466adbb6c..d1d08f826412 100644
--- a/sympy/assumptions/assume.py
+++ b/sympy/assumptions/assume.py
@@ -1,15 +1,21 @@
+"""A module which implements predicates and assumption context."""
+
+from contextlib import contextmanager
import inspect
-from sympy.core.cache import cacheit
-from sympy.core.singleton import S
+from sympy.core.assumptions import ManagedProperties
+from sympy.core.symbol import Str
from sympy.core.sympify import _sympify
from sympy.logic.boolalg import Boolean
+from sympy.multipledispatch.dispatcher import (
+ Dispatcher, MDNotImplementedError
+)
from sympy.utilities.source import get_class
-from contextlib import contextmanager
class AssumptionsContext(set):
"""
- Set representing assumptions.
+ Set containing default assumptions which are applied to the ``ask()``
+ function.
Explanation
===========
@@ -21,18 +27,41 @@ class to create your own local assumptions contexts. It is basically a thin
Examples
========
- >>> from sympy import Q
- >>> from sympy.assumptions.assume import global_assumptions
+ The default assumption context is ``global_assumptions``, which is initially empty:
+
+ >>> from sympy import ask, Q
+ >>> from sympy.assumptions import global_assumptions
>>> global_assumptions
AssumptionsContext()
+
+ You can add default assumptions:
+
>>> from sympy.abc import x
>>> global_assumptions.add(Q.real(x))
>>> global_assumptions
AssumptionsContext({Q.real(x)})
+ >>> ask(Q.real(x))
+ True
+
+ And remove them:
+
>>> global_assumptions.remove(Q.real(x))
+ >>> print(ask(Q.real(x)))
+ None
+
+ The ``clear()`` method removes every assumption:
+
+ >>> global_assumptions.add(Q.positive(x))
>>> global_assumptions
- AssumptionsContext()
+ AssumptionsContext({Q.positive(x)})
>>> global_assumptions.clear()
+ >>> global_assumptions
+ AssumptionsContext()
+
+ See Also
+ ========
+
+ assuming
"""
@@ -50,27 +79,44 @@ def _sympystr(self, printer):
class AppliedPredicate(Boolean):
- """The class of expressions resulting from applying a Predicate.
+ """
+ The class of expressions resulting from applying ``Predicate`` to
+ the arguments. ``AppliedPredicate`` merely wraps its argument and
+ remain unevaluated. To evaluate it, use the ``ask()`` function.
Examples
========
- >>> from sympy import Q, Symbol
- >>> x = Symbol('x')
- >>> Q.integer(x)
- Q.integer(x)
- >>> type(Q.integer(x))
+ >>> from sympy import Q, ask
+ >>> Q.integer(1)
+ Q.integer(1)
+
+ The ``function`` attribute returns the predicate, and the ``arguments``
+ attribute returns the tuple of arguments.
+
+ >>> type(Q.integer(1))
<class 'sympy.assumptions.assume.AppliedPredicate'>
+ >>> Q.integer(1).function
+ Q.integer
+ >>> Q.integer(1).arguments
+ (1,)
+
+ Applied predicates can be evaluated to a boolean value with ``ask``:
+
+ >>> ask(Q.integer(1))
+ True
"""
__slots__ = ()
- def __new__(cls, predicate, arg):
- arg = _sympify(arg)
- return Boolean.__new__(cls, predicate, arg)
-
is_Atom = True # do not attempt to decompose this
+ def __new__(cls, predicate, *args):
+ if not isinstance(predicate, Predicate):
+ raise TypeError("%s is not a Predicate." % predicate)
+ args = map(_sympify, args)
+ return super().__new__(cls, predicate, *args)
+
@property
def arg(self):
"""
@@ -86,80 +132,232 @@ def arg(self):
x + 1
"""
- return self._args[1]
+ # Will be deprecated
+ args = self._args
+ if len(args) == 2:
+ # backwards compatibility
+ return args[1]
+ raise TypeError("'arg' property is allowed only for unary predicates.")
@property
def args(self):
+ # Will be deprecated and return normal Basic.func
return self._args[1:]
@property
def func(self):
+ # Will be deprecated and return normal Basic.func
return self._args[0]
- @cacheit
- def sort_key(self, order=None):
- return (self.class_key(), (2, (self.func.name, self.arg.sort_key())),
- S.One.sort_key(), S.One)
-
- def __eq__(self, other):
- if type(other) is AppliedPredicate:
- return self._args == other._args
- return False
+ @property
+ def function(self):
+ """
+ Return the predicate.
+ """
+ # Will be changed to self.args[0] after args overridding is removed
+ return self._args[0]
- def __hash__(self):
- return super().__hash__()
+ @property
+ def arguments(self):
+ """
+ Return the arguments which are applied to the predicate.
+ """
+ # Will be changed to self.args[1:] after args overridding is removed
+ return self._args[1:]
def _eval_ask(self, assumptions):
- return self.func.eval(self.arg, assumptions)
+ return self.function.eval(self.arguments, assumptions)
@property
def binary_symbols(self):
from sympy.core.relational import Eq, Ne
- if self.func.name in ['is_true', 'is_false']:
- i = self.arg
+ from .ask import Q
+ if self.function == Q.is_true:
+ i = self.arguments[0]
if i.is_Boolean or i.is_Symbol or isinstance(i, (Eq, Ne)):
return i.binary_symbols
return set()
-class Predicate(Boolean):
+class PredicateMeta(ManagedProperties):
+ """
+ Metaclass for ``Predicate``
+
+ If class attribute ``handler`` is not defined, assigns empty Dispatcher
+ to it.
+ """
+ def __new__(cls, clsname, bases, dct):
+ if "handler" not in dct:
+ name = f"Ask{clsname.capitalize()}Handler"
+ handler = Dispatcher(name, doc="Handler for key %s" % name)
+ dct["handler"] = handler
+ return super().__new__(cls, clsname, bases, dct)
+
+
+class Predicate(Boolean, metaclass=PredicateMeta):
"""
- A predicate is a function that returns a boolean value.
+ Base class for mathematical predicates. It also serves as a
+ constructor for undefined predicate objects.
+
+ Explanation
+ ===========
+
+ Predicate is a function that returns a boolean value [1].
+
+ Predicate function is object, and it is instance of predicate class.
+ When a predicate is applied to arguments, ``AppliedPredicate``
+ instance is returned. This merely wraps the argument and remain
+ unevaluated. To obtain the truth value of applied predicate, use the
+ function ``ask``.
- Predicates merely wrap their argument and remain unevaluated:
+ Evaluation of predicate is done by multiple dispatching. You can
+ register new handler to the predicate to support new types.
- >>> from sympy import Q, ask
- >>> type(Q.prime)
- <class 'sympy.assumptions.assume.Predicate'>
- >>> Q.prime.name
- 'prime'
- >>> Q.prime(7)
- Q.prime(7)
- >>> _.func.name
- 'prime'
+ Every predicate in SymPy can be accessed via the property of ``Q``.
+ For example, ``Q.even`` returns the predicate which checks if the
+ argument is even number.
- To obtain the truth value of an expression containing predicates, use
- the function ``ask``:
+ To define a predicate which can be evaluated, you must subclass this
+ class, make an instance of it, and register it to ``Q``. After then,
+ dispatch the handler by argument types.
- >>> ask(Q.prime(7))
- True
+ If you directly construct predicate using this class, you will get
+ ``UndefinedPredicate`` which cannot be dispatched. This is useful
+ when you are building boolean expressions which do not need to be
+ evaluated.
+
+ Examples
+ ========
+
+ Applying and evaluating to boolean value:
+
+ >>> from sympy import Q, ask
+ >>> from sympy.abc import x
+ >>> ask(Q.prime(7))
+ True
+
+ You can define a new predicate by subclassing and dispatching. Here,
+ we define a predicate for sexy primes [2] as an example.
+
+ >>> from sympy import Predicate, Integer
+ >>> class SexyPrimePredicate(Predicate):
+ ... name = "sexyprime"
+ >>> Q.sexyprime = SexyPrimePredicate()
+ >>> @Q.sexyprime.register(Integer, Integer)
+ ... def _(int1, int2, assumptions):
+ ... args = sorted([int1, int2])
+ ... if not all(ask(Q.prime(a), assumptions) for a in args):
+ ... return False
+ ... return args[1] - args[0] == 6
+ >>> ask(Q.sexyprime(5, 11))
+ True
+
+ Direct constructing returns ``UndefinedPredicate``, which can be
+ applied but cannot be dispatched.
+
+ >>> from sympy import Predicate, Integer
+ >>> Q.P = Predicate("P")
+ >>> type(Q.P)
+ <class 'sympy.assumptions.assume.UndefinedPredicate'>
+ >>> Q.P(1)
+ Q.P(1)
+ >>> Q.P.register(Integer)(lambda expr, assump: True)
+ Traceback (most recent call last):
+ ...
+ TypeError: <class 'sympy.assumptions.assume.UndefinedPredicate'> cannot be dispatched.
The tautological predicate ``Q.is_true`` can be used to wrap other objects:
+ >>> Q.is_true(x > 1)
+ Q.is_true(x > 1)
+
+ References
+ ==========
- >>> from sympy.abc import x
- >>> Q.is_true(x > 1)
- Q.is_true(x > 1)
+ .. [1] https://en.wikipedia.org/wiki/Predicate_(mathematical_logic)
+ .. [2] https://en.wikipedia.org/wiki/Sexy_prime
"""
is_Atom = True
+ def __new__(cls, *args, **kwargs):
+ if cls is Predicate:
+ return UndefinedPredicate(*args, **kwargs)
+ obj = super().__new__(cls, *args)
+ return obj
+
+ @property
+ def name(self):
+ # May be overridden
+ return type(self).__name__
+
+ def register(self, *types, **kwargs):
+ if self.handler is None:
+ # condition for UndefinedPredicate
+ raise TypeError("%s cannot be dispatched." % type(self))
+ return self.handler.register(*types, **kwargs)
+
+ def __call__(self, *args):
+ return AppliedPredicate(self, *args)
+
+ def eval(self, args, assumptions=True):
+ """
+ Evaluate ``self(*args)`` under the given assumptions.
+
+ This uses only direct resolution methods, not logical inference.
+ """
+ types = tuple(type(a) for a in args)
+ result = None
+ for func in self.handler.dispatch_iter(*types):
+ try:
+ result = func(*args, assumptions)
+ except MDNotImplementedError:
+ continue
+ else:
+ if result is not None:
+ return result
+ return result
+
+
+class UndefinedPredicate(Predicate):
+ """
+ Predicate without handler.
+
+ Explanation
+ ===========
+
+ This predicate is generated by using ``Predicate`` directly for
+ construction. It does not have a handler, and evaluating this with
+ arguments is done by SAT solver.
+
+ Examples
+ ========
+
+ >>> from sympy import Predicate, Q
+ >>> Q.P = Predicate('P')
+ >>> Q.P.func
+ <class 'sympy.assumptions.assume.UndefinedPredicate'>
+ >>> Q.P.name
+ Str('P')
+
+ """
+
def __new__(cls, name, handlers=None):
- obj = Boolean.__new__(cls)
- obj.name = name
+ # "handlers" parameter supports old design
+ if not isinstance(name, Str):
+ name = Str(name)
+ obj = super(Boolean, cls).__new__(cls, name)
obj.handlers = handlers or []
return obj
+ @property
+ def name(self):
+ return self.args[0]
+
+ @property
+ def handler(self):
+ return None
+
def _hashable_content(self):
return (self.name,)
@@ -170,21 +368,17 @@ def __call__(self, expr):
return AppliedPredicate(self, expr)
def add_handler(self, handler):
+ # Will be deprecated
self.handlers.append(handler)
def remove_handler(self, handler):
+ # Will be deprecated
self.handlers.remove(handler)
- @cacheit
- def sort_key(self, order=None):
- return self.class_key(), (1, (self.name,)), S.One.sort_key(), S.One
-
- def eval(self, expr, assumptions=True):
- """
- Evaluate self(expr) under the given assumptions.
-
- This uses only direct resolution methods, not logical inference.
- """
+ def eval(self, args, assumptions=True):
+ # Support for deprecated design
+ # When old design is removed, this will always return None
+ expr, = args
res, _res = None, None
mro = inspect.getmro(type(expr))
for handler in self.handlers:
@@ -221,10 +415,8 @@ def assuming(*assumptions):
>>> from sympy.assumptions import assuming, Q, ask
>>> from sympy.abc import x, y
-
>>> print(ask(Q.integer(x + y)))
None
-
>>> with assuming(Q.integer(x), Q.integer(y)):
... print(ask(Q.integer(x + y)))
True
diff --git a/sympy/assumptions/handlers/__init__.py b/sympy/assumptions/handlers/__init__.py
index 5139921a8a0c..8ac5898179a3 100644
--- a/sympy/assumptions/handlers/__init__.py
+++ b/sympy/assumptions/handlers/__init__.py
@@ -1,5 +1,11 @@
-from .common import (AskHandler, CommonHandler, AskCommutativeHandler,
- TautologicalHandler, test_closed_group)
+"""
+Multipledispatch handlers for ``Predicate`` are implemented here.
+Handlers in this module are not directly imported to other modules in
+order to avoid circular import problem.
+"""
+
+from .common import (AskHandler, CommonHandler,
+ AskCommutativeHandler, TautologicalHandler, test_closed_group)
__all__ = [
'AskHandler', 'CommonHandler', 'AskCommutativeHandler',
diff --git a/sympy/assumptions/handlers/common.py b/sympy/assumptions/handlers/common.py
index 356fa860d542..3b8dcac903c2 100644
--- a/sympy/assumptions/handlers/common.py
+++ b/sympy/assumptions/handlers/common.py
@@ -1,6 +1,11 @@
+"""
+This module defines base class for handlers and some core handlers:
+``Q.commutative`` and ``Q.is_true``.
+"""
+
+from sympy.assumptions import Q, ask
from sympy.core.logic import _fuzzy_group
from sympy.logic.boolalg import conjuncts
-from sympy.assumptions import Q, ask
class AskHandler:
diff --git a/sympy/assumptions/refine.py b/sympy/assumptions/refine.py
index 90f8314d8947..7b226a42f3bf 100644
--- a/sympy/assumptions/refine.py
+++ b/sympy/assumptions/refine.py
@@ -19,12 +19,12 @@ def refine(expr, assumptions=True):
Examples
========
- >>> from sympy import refine, sqrt, Q
- >>> from sympy.abc import x
- >>> refine(sqrt(x**2), Q.real(x))
- Abs(x)
- >>> refine(sqrt(x**2), Q.positive(x))
- x
+ >>> from sympy import refine, sqrt, Q
+ >>> from sympy.abc import x
+ >>> refine(sqrt(x**2), Q.real(x))
+ Abs(x)
+ >>> refine(sqrt(x**2), Q.positive(x))
+ x
"""
if not isinstance(expr, Basic):
diff --git a/sympy/printing/repr.py b/sympy/printing/repr.py
index 5930bff0a980..135d49b8b211 100644
--- a/sympy/printing/repr.py
+++ b/sympy/printing/repr.py
@@ -241,10 +241,12 @@ def _print_CoordinateSymbol(self, expr):
)
def _print_Predicate(self, expr):
- return "%s(%s)" % (expr.__class__.__name__, self._print(expr.name))
+ return "Q.%s" % expr.name
def _print_AppliedPredicate(self, expr):
- return "%s(%s, %s)" % (expr.__class__.__name__, expr.func, expr.arg)
+ # will be changed to just expr.args when args overriding is removed
+ args = expr._args
+ return "%s(%s)" % (expr.__class__.__name__, self.reprify(args, ", "))
def _print_str(self, expr):
return repr(expr)
diff --git a/sympy/printing/str.py b/sympy/printing/str.py
index 82abe861e9be..8a20c7c3257a 100644
--- a/sympy/printing/str.py
+++ b/sympy/printing/str.py
@@ -85,7 +85,8 @@ def _print_Xor(self, expr):
return self.stringify(expr.args, " ^ ", PRECEDENCE["BitwiseXor"])
def _print_AppliedPredicate(self, expr):
- return '%s(%s)' % (self._print(expr.func), self._print(expr.arg))
+ return '%s(%s)' % (
+ self._print(expr.function), self.stringify(expr.arguments, ", "))
def _print_Basic(self, expr):
l = [self._print(o) for o in expr.args]
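The patch above replaces the class-based AskHandler design with predicates whose evaluation is dispatched on the types of their arguments, with `AppliedPredicate` staying unevaluated until `ask` resolves it. Below is a minimal stand-in sketch of that dispatch pattern using a plain dictionary instead of SymPy's multipledispatch — all names are illustrative, not SymPy's API:

```python
class Predicate:
    """Stand-in predicate whose evaluation is dispatched on argument types.

    Each subclass lazily gets its own handler table, loosely mirroring
    the PredicateMeta design in the patch above.
    """
    handlers = None

    def __init__(self):
        if self.handlers is None:
            # give this subclass its own handler table
            type(self).handlers = {}

    def register(self, *types):
        def decorator(func):
            self.handlers[types] = func
            return func
        return decorator

    def __call__(self, *args):
        # applying a predicate merely wraps the arguments
        return AppliedPredicate(self, args)


class AppliedPredicate:
    """Unevaluated application; ``ask`` resolves it via the handler table."""
    def __init__(self, function, arguments):
        self.function = function
        self.arguments = arguments


def ask(prop):
    """Dispatch on the argument types; None when no handler matches."""
    types = tuple(type(a) for a in prop.arguments)
    handler = prop.function.handlers.get(types)
    return handler(*prop.arguments) if handler else None


def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))


class SexyPrimePredicate(Predicate):
    pass

sexyprime = SexyPrimePredicate()

@sexyprime.register(int, int)
def _(a, b):
    # sexy primes: both prime and differing by 6
    a, b = sorted((a, b))
    return is_prime(a) and is_prime(b) and b - a == 6
```

This mirrors the polyadic usage in the patch's docstring example: the predicate accepts two arguments, and an application with unregistered argument types simply evaluates to `None`.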
| diff --git a/sympy/assumptions/tests/test_query.py b/sympy/assumptions/tests/test_query.py
index a6228a4827ea..5226275ee256 100644
--- a/sympy/assumptions/tests/test_query.py
+++ b/sympy/assumptions/tests/test_query.py
@@ -1,14 +1,14 @@
from sympy.abc import t, w, x, y, z, n, k, m, p, i
from sympy.assumptions import (ask, AssumptionsContext, Q, register_handler,
remove_handler)
-from sympy.assumptions.assume import global_assumptions
+from sympy.assumptions.assume import global_assumptions, Predicate
from sympy.assumptions.ask import compute_known_facts, single_fact_lookup
from sympy.assumptions.handlers import AskHandler
from sympy.core.add import Add
from sympy.core.numbers import (I, Integer, Rational, oo, pi)
from sympy.core.singleton import S
from sympy.core.power import Pow
-from sympy.core.symbol import symbols
+from sympy.core.symbol import symbols, Symbol
from sympy.functions.combinatorial.factorials import factorial
from sympy.functions.elementary.complexes import (Abs, im, re, sign)
from sympy.functions.elementary.exponential import (exp, log)
@@ -2040,39 +2040,54 @@ def test_key_extensibility():
# make sure the key is not defined
raises(AttributeError, lambda: ask(Q.my_key(x)))
+ # Old handler system
class MyAskHandler(AskHandler):
@staticmethod
def Symbol(expr, assumptions):
return True
- register_handler('my_key', MyAskHandler)
- assert ask(Q.my_key(x)) is True
- assert ask(Q.my_key(x + 1)) is None
- remove_handler('my_key', MyAskHandler)
- del Q.my_key
+ try:
+ register_handler('my_key', MyAskHandler)
+ assert ask(Q.my_key(x)) is True
+ assert ask(Q.my_key(x + 1)) is None
+ finally:
+ remove_handler('my_key', MyAskHandler)
+ del Q.my_key
+ raises(AttributeError, lambda: ask(Q.my_key(x)))
+
+ # New handler system
+ class MyPredicate(Predicate):
+ pass
+ try:
+ Q.my_key = MyPredicate()
+ @Q.my_key.register(Symbol)
+ def _(expr, assumptions):
+ return True
+ assert ask(Q.my_key(x)) is True
+ assert ask(Q.my_key(x+1)) is None
+ finally:
+ del Q.my_key
raises(AttributeError, lambda: ask(Q.my_key(x)))
def test_type_extensibility():
"""test that new types can be added to the ask system at runtime
- We create a custom type MyType, and override ask Q.prime=True with handler
- MyAskHandler for this type
-
- TODO: test incompatible resolutors
"""
from sympy.core import Basic
class MyType(Basic):
pass
+ # Old handler system
class MyAskHandler(AskHandler):
@staticmethod
def MyType(expr, assumptions):
return True
-
a = MyType()
register_handler(Q.prime, MyAskHandler)
assert ask(Q.prime(a)) is True
+ #TODO: add test for new handler system after predicates are migrated
+
def test_single_fact_lookup():
known_facts = And(Implies(Q.integer, Q.rational),
@@ -2253,11 +2268,9 @@ def test_autosimp_used_to_fail():
def test_custom_AskHandler():
- from sympy.assumptions import register_handler, ask, Q
- from sympy.assumptions.handlers import AskHandler
from sympy.logic.boolalg import conjuncts
- from sympy import Symbol
+ # Old handler system
class MersenneHandler(AskHandler):
@staticmethod
def Integer(expr, assumptions):
@@ -2268,7 +2281,69 @@ def Integer(expr, assumptions):
def Symbol(expr, assumptions):
if expr in conjuncts(assumptions):
return True
- register_handler('mersenne', MersenneHandler)
+ try:
+ register_handler('mersenne', MersenneHandler)
+ n = Symbol('n', integer=True)
+ assert ask(Q.mersenne(7))
+ assert ask(Q.mersenne(n), Q.mersenne(n))
+ finally:
+ del Q.mersenne
+
+ # New handler system
+ class MersennePredicate(Predicate):
+ pass
+ try:
+ Q.mersenne = MersennePredicate()
+ @Q.mersenne.register(Integer)
+ def _(expr, assumptions):
+ from sympy import log
+ if ask(Q.integer(log(expr + 1, 2))):
+ return True
+ @Q.mersenne.register(Symbol)
+ def _(expr, assumptions):
+ if expr in conjuncts(assumptions):
+ return True
+ assert ask(Q.mersenne(7))
+ assert ask(Q.mersenne(n), Q.mersenne(n))
+ finally:
+ del Q.mersenne
+
+
+def test_polyadic_predicate():
+
+ class SexyPredicate(Predicate):
+ pass
+ try:
+ Q.sexyprime = SexyPredicate()
+
+ @Q.sexyprime.register(Integer, Integer)
+ def _(int1, int2, assumptions):
+ args = sorted([int1, int2])
+ if not all(ask(Q.prime(a), assumptions) for a in args):
+ return False
+ return args[1] - args[0] == 6
- n = Symbol('n', integer=True)
- assert ask(Q.mersenne(n), Q.mersenne(n))
+ @Q.sexyprime.register(Integer, Integer, Integer)
+ def _(int1, int2, int3, assumptions):
+ args = sorted([int1, int2, int3])
+ if not all(ask(Q.prime(a), assumptions) for a in args):
+ return False
+ return args[2] - args[1] == 6 and args[1] - args[0] == 6
+
+ assert ask(Q.sexyprime(5, 11))
+ assert ask(Q.sexyprime(7, 13, 19))
+ finally:
+ del Q.sexyprime
+
+
+def test_Predicate_handler_is_unique():
+
+ # Undefined predicate does not have a handler
+ assert Predicate('mypredicate').handler is None
+
+ # Handler of defined predicate is unique to the class
+ class MyPredicate(Predicate):
+ pass
+ mp1 = MyPredicate('mp1')
+ mp2 = MyPredicate('mp2')
+ assert mp1.handler is mp2.handler
diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index 8685c635faff..b1d5f86253e9 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -91,7 +91,11 @@ def test_sympy__assumptions__assume__AppliedPredicate():
assert _test_args(AppliedPredicate(Predicate("test"), 2))
assert _test_args(Q.is_true(True))
+@SKIP("abstract class")
def test_sympy__assumptions__assume__Predicate():
+ pass
+
+def test_sympy__assumptions__assume__UndefinedPredicate():
from sympy.assumptions.assume import Predicate
assert _test_args(Predicate("test"))
diff --git a/sympy/printing/tests/test_repr.py b/sympy/printing/tests/test_repr.py
index 41b86f1a7846..897e6ac2a9fb 100644
--- a/sympy/printing/tests/test_repr.py
+++ b/sympy/printing/tests/test_repr.py
@@ -3,7 +3,7 @@
from sympy.testing.pytest import raises
from sympy import (symbols, sympify, Function, Integer, Matrix, Abs,
Rational, Float, S, WildFunction, ImmutableDenseMatrix, sin, true, false, ones,
- sqrt, root, AlgebraicNumber, Symbol, Dummy, Wild, MatrixSymbol)
+ sqrt, root, AlgebraicNumber, Symbol, Dummy, Wild, MatrixSymbol, Q)
from sympy.combinatorics import Cycle, Permutation
from sympy.core.symbol import Str
from sympy.geometry import Point, Ellipse
@@ -334,3 +334,9 @@ def test_set():
assert srepr(s) == "set()"
s = {x, y}
assert srepr(s) in ("{Symbol('x'), Symbol('y')}", "{Symbol('y'), Symbol('x')}")
+
+def test_Predicate():
+ sT(Q.even, "Q.even")
+
+def test_AppliedPredicate():
+ sT(Q.even(Symbol('z')), "AppliedPredicate(Q.even, Symbol('z'))")
diff --git a/sympy/printing/tests/test_str.py b/sympy/printing/tests/test_str.py
index 8be321f55f08..cb00abd867e3 100644
--- a/sympy/printing/tests/test_str.py
+++ b/sympy/printing/tests/test_str.py
@@ -4,7 +4,8 @@
Rational, Float, Rel, S, sin, SparseMatrix, sqrt, summation, Sum, Symbol,
symbols, Wild, WildFunction, zeta, zoo, Dummy, Dict, Tuple, FiniteSet, factor,
subfactorial, true, false, Equivalent, Xor, Complement, SymmetricDifference,
- AccumBounds, UnevaluatedExpr, Eq, Ne, Quaternion, Subs, MatrixSymbol, MatrixSlice)
+ AccumBounds, UnevaluatedExpr, Eq, Ne, Quaternion, Subs, MatrixSymbol, MatrixSlice,
+ Q)
from sympy.core import Expr, Mul
from sympy.external import import_module
from sympy.physics.control.lti import TransferFunction, Series, Parallel, Feedback
@@ -1021,3 +1022,9 @@ def test_NDimArray():
assert sstr(NDimArray(1.0), full_prec=False) == '1.0'
assert sstr(NDimArray([1.0, 2.0]), full_prec=True) == '[1.00000000000000, 2.00000000000000]'
assert sstr(NDimArray([1.0, 2.0]), full_prec=False) == '[1.0, 2.0]'
+
+def test_Predicate():
+ assert sstr(Q.even) == 'Q.even'
+
+def test_AppliedPredicate():
+ assert sstr(Q.even(x)) == 'Q.even(x)'
| diff --git a/doc/src/modules/assumptions/handlers/index.rst b/doc/src/modules/assumptions/handlers/index.rst
index 48781760ed35..31f9f79030f2 100644
--- a/doc/src/modules/assumptions/handlers/index.rst
+++ b/doc/src/modules/assumptions/handlers/index.rst
@@ -10,8 +10,8 @@ Contents
.. toctree::
:maxdepth: 3
- calculus.rst
common.rst
+ calculus.rst
matrices.rst
ntheory.rst
order.rst
diff --git a/doc/src/modules/assumptions/index.rst b/doc/src/modules/assumptions/index.rst
index 022fab67a5ed..96abb75e529c 100644
--- a/doc/src/modules/assumptions/index.rst
+++ b/doc/src/modules/assumptions/index.rst
@@ -17,30 +17,42 @@ Contents
refine.rst
handlers/index.rst
-Queries are used to ask information about expressions. Main method for this
-is ask():
-.. autofunction:: sympy.assumptions.ask::ask
+Predicates
+==========
+
+
+.. autoclass:: sympy.assumptions.assume::Predicate
:noindex:
+.. autoclass:: sympy.assumptions.assume::AppliedPredicate
+ :noindex:
+
+
Querying
========
-ask's optional second argument should be a boolean expression involving
-assumptions about objects in expr. Valid values include:
+Queries are used to ask information about expressions. Main method for this
+is ``ask()``:
- * Q.integer(x)
- * Q.positive(x)
- * Q.integer(x) & Q.positive(x)
+.. autofunction:: sympy.assumptions.ask::ask
+ :noindex:
+
+``ask``'s optional second argument should be a boolean expression involving
+assumptions about objects in *expr*. Valid values include:
+
+ * ``Q.integer(x)``
+ * ``Q.positive(x)``
+ * ``Q.integer(x) & Q.positive(x)``
* etc.
-Q is an object holding known predicates.
+``Q`` is an object holding known predicates.
See documentation for the logic module for a complete list of valid boolean
expressions.
You can also define a context so you don't have to pass that argument
-each time to function ask(). This is done by using the assuming context manager
+each time to function ``ask()``. This is done by using the assuming context manager
from module sympy.assumptions. ::
>>> from sympy import *
@@ -52,68 +64,6 @@ from module sympy.assumptions. ::
True
-Design
-======
-
-Each time ask is called, the appropriate Handler for the current key is called. This is
-always a subclass of sympy.assumptions.AskHandler. Its classmethods have the names of the classes
-it supports. For example, a (simplified) AskHandler for the ask 'positive' would
-look like this::
-
- class AskPositiveHandler(CommonHandler):
-
- def Mul(self):
- # return True if all argument's in self.expr.args are positive
- ...
-
- def Add(self):
- for arg in self.expr.args:
- if not ask(arg, positive, self.assumptions):
- break
- else:
- # if all argument's are positive
- return True
- ...
-
-The .Mul() method is called when self.expr is an instance of Mul, the Add method
-would be called when self.expr is an instance of Add and so on.
-
-
-Extensibility
-=============
-
-You can define new queries or support new types by subclassing sympy.assumptions.AskHandler
- and registering that handler for a particular key by calling register_handler:
-
-.. autofunction:: sympy.assumptions.ask::register_handler
- :noindex:
-
-You can undo this operation by calling remove_handler.
-
-.. autofunction:: sympy.assumptions.ask::remove_handler
- :noindex:
-
-You can support new types [1]_ by adding a handler to an existing key. In the
-following example, we will create a new type MyType and extend the key 'prime'
-to accept this type (and return True)
-
-.. parsed-literal::
-
- >>> from sympy.core import Basic
- >>> from sympy.assumptions import register_handler
- >>> from sympy.assumptions.handlers import AskHandler
- >>> class MyType(Basic):
- ... pass
- >>> class MyAskHandler(AskHandler):
- ... @staticmethod
- ... def MyType(expr, assumptions):
- ... return True
- >>> a = MyType()
- >>> register_handler('prime', MyAskHandler)
- >>> ask(Q.prime(a))
- True
-
-
Performance improvements
========================
@@ -130,6 +80,3 @@ Misc
You can find more examples in the in the form of test under directory
sympy/assumptions/tests/
-
-.. [1] New type must inherit from Basic, otherwise an exception will be raised.
- This is a bug and should be fixed.
| [
{
"components": [
{
"doc": "Return the predicate.",
"lines": [
153,
158
],
"name": "AppliedPredicate.function",
"signature": "def function(self):",
"type": "function"
},
{
"doc": "Return the arguments which are applied... | [
"test_key_extensibility",
"test_custom_AskHandler",
"test_polyadic_predicate"
] | [
"test_int_1",
"test_int_11",
"test_int_12",
"test_float_1",
"test_zero_0",
"test_negativeone",
"test_infinity",
"test_neg_infinity",
"test_nan",
"test_Rational_number",
"test_sqrt_2",
"test_pi",
"test_E",
"test_GoldenRatio",
"test_TribonacciConstant",
"test_I",
"test_bounded",
"tes... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Introduce polyadic predicate based on multipledispatch
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
Closes #20209
#### Brief description of what is fixed or changed
This PR improves the predicate and handler design in the `assumptions` module.
1. Assumption handlers are now multipledispatch instances. `Predicate` now uses a single handler instead of a list of handlers.
2. `Predicate("...")` now returns `UndefinedPredicate`, which cannot be evaluated. To make predicate that can be evaluated, subclass of `Predicate` must be defined.
3. `Predicate` can now take multiple arguments.
This PR does not break backward compatibility. Existing handlers are not yet migrated to this new design; that will be done in succeeding PRs.
Also, printing of predicates is improved and tests are added.
#### Other comments
The previous handler design was very unstable: it made `Predicate` mutable, it filtered arguments by their class "name", and it could not take multiple arguments. This new multipledispatch design fixes all of these problems.
Just as `Function('f')` returns an undefined function that cannot be evaluated, `Predicate('...')` returns an `UndefinedPredicate` in this PR. This instance does not have a handler and cannot be evaluated on its own, but it can be used to construct expressions such as `Not`, which are then evaluated by the SAT solver.
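To see how a handler-less predicate can still participate in boolean reasoning, here is a minimal, stdlib-only truth-table satisfiability check. It is only an illustrative stand-in for sympy's SAT solver; none of these names are part of sympy's API.

```python
from itertools import product

def satisfiable(clauses, atoms):
    """Brute-force CNF satisfiability check.

    ``clauses`` is a list of clauses; each clause is a list of
    (atom, polarity) literals.  Returns a satisfying assignment
    dict, or None if the formula is unsatisfiable.
    """
    for values in product([False, True], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(any(assignment[atom] == polarity for atom, polarity in clause)
               for clause in clauses):
            return assignment
    return None

# Treat the applied predicate "P(x)" as an opaque atom, the way a SAT
# solver treats an application of a predicate without a handler.
atom = "P(x)"

# P(x) & ~P(x) is unsatisfiable, while P(x) alone is satisfiable.
print(satisfiable([[(atom, True)], [(atom, False)]], [atom]))  # None
print(satisfiable([[(atom, True)]], [atom]))                   # {'P(x)': True}
```

A real solver avoids the exponential enumeration (e.g. via DPLL), but the interface is the same: the predicate application is just an atom to assign a truth value to.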
To define a new predicate with a handler, a new predicate class must be defined and functions must be dispatched to its handler. For example:
```python
class SexyPredicate(Predicate):
"""https://en.wikipedia.org/wiki/Sexy_prime"""
name = "sexyprime"
Q.sexyprime = SexyPredicate()
@Q.sexyprime.register(Integer, Integer)
def _(int1, int2, assumptions):
args = sorted([int1, int2])
if not all(ask(Q.prime(a), assumptions) for a in args):
return False
return args[1] - args[0] == 6
@Q.sexyprime.register(Integer, Integer, Integer)
def _(int1, int2, int3, assumptions):
args = sorted([int1, int2, int3])
if not all(ask(Q.prime(a), assumptions) for a in args):
return False
return args[2] - args[1] == 6 and args[1] - args[0] == 6
```
Evaluating this to boolean is done by `ask` as usual.
```python
>>> ask(Q.sexyprime(5, 11))
True
>>> ask(Q.sexyprime(7, 13, 19))
True
```
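The type-based dispatch that replaces class-name filtering can be sketched without sympy. The following stdlib-only toy registers handlers keyed by the tuple of argument types; all names here are hypothetical, and real multipledispatch also resolves superclasses via the MRO, which this exact-type lookup deliberately omits.

```python
from fractions import Fraction

class ToyPredicate:
    """Toy predicate dispatching on the exact types of its arguments."""

    def __init__(self, name):
        self.name = name
        self._handlers = {}  # maps a tuple of types to a handler function

    def register(self, *types):
        """Decorator registering a handler for the given argument types."""
        def decorator(func):
            self._handlers[types] = func
            return func
        return decorator

    def __call__(self, *args):
        key = tuple(type(arg) for arg in args)
        handler = self._handlers.get(key)
        if handler is None:
            raise NotImplementedError(f"no handler for types {key}")
        return handler(*args)

is_half = ToyPredicate("is_half")

@is_half.register(int, int)
def _(p, q):
    # Dyadic form: is p/q equal to one half?
    return q == 2 * p

@is_half.register(Fraction)
def _(x):
    # Monadic form over exact rationals.
    return x == Fraction(1, 2)

print(is_half(1, 2))            # True
print(is_half(Fraction(3, 6)))  # True  (normalizes to 1/2)
print(is_half(2, 3))            # False
```

The same predicate object thus supports both monadic and polyadic forms, with each arity/type combination handled by its own registered function.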
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
- assumptions
- `Predicate` now uses a single handler which is multipledispatch instance.
- `Predicate` can now take multiple arguments.
- `Predicate("...")` now returns `UndefinedPredicate` instance. To define a predicate, you must make a subclass of `Predicate`.
<!-- END RELEASE NOTES -->
----------
I collected some previous PRs into this one. Pinging the reviewers who participated in older versions - @oscarbenjamin @asmeurer @eric-wieser. Can you take a look at this, please?
Ping again @oscarbenjamin , @asmeurer , @eric-wieser
I left some comments but broadly I think this looks good.
Should the docs recommend using `UndefinedPredicate` rather than `Predicate` as a way of creating undefined predicates?
@asmeurer @eric-wieser do you want to review this?
> Should the docs recommend using UndefinedPredicate rather than Predicate as a way of creating undefined predicates?
I designed `Predicate` to mimic `Function`. Since we encourage using `Function('f')` to generate an undefined function, I think we should encourage using `Predicate('P')`.
@oscarbenjamin I introduced a `function` property to return the predicate and an `arguments` property to return the arguments. I will remove the usage of the overridden `args` and `func` in a succeeding PR.
It would be better if we could avoid using a metaclass but otherwise I think this looks good.
@asmeurer @eric-wieser do you want to review?
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/assumptions/assume.py]
(definition of AppliedPredicate.function:)
def function(self):
"""Return the predicate."""
(definition of AppliedPredicate.arguments:)
def arguments(self):
"""Return the arguments which are applied to the predicate."""
(definition of PredicateMeta:)
class PredicateMeta(ManagedProperties):
"""Metaclass for ``Predicate``
If class attribute ``handler`` is not defined, assigns empty Dispatcher
to it."""
(definition of PredicateMeta.__new__:)
def __new__(cls, clsname, bases, dct):
(definition of Predicate.name:)
def name(self):
(definition of Predicate.register:)
def register(self, *types, **kwargs):
(definition of UndefinedPredicate:)
class UndefinedPredicate(Predicate):
"""Predicate without handler.
Explanation
===========
This predicate is generated by using ``Predicate`` directly for
construction. It does not have a handler, and evaluating this with
arguments is done by SAT solver.
Examples
========
>>> from sympy import Predicate, Q
>>> Q.P = Predicate('P')
>>> Q.P.func
<class 'sympy.assumptions.assume.UndefinedPredicate'>
>>> Q.P.name
Str('P')"""
(definition of UndefinedPredicate.__new__:)
def __new__(cls, name, handlers=None):
(definition of UndefinedPredicate.name:)
def name(self):
(definition of UndefinedPredicate.handler:)
def handler(self):
(definition of UndefinedPredicate._hashable_content:)
def _hashable_content(self):
(definition of UndefinedPredicate.__getnewargs__:)
def __getnewargs__(self):
(definition of UndefinedPredicate.__call__:)
def __call__(self, expr):
(definition of UndefinedPredicate.add_handler:)
def add_handler(self, handler):
(definition of UndefinedPredicate.remove_handler:)
def remove_handler(self, handler):
(definition of UndefinedPredicate.eval:)
def eval(self, args, assumptions=True):
[end of new definitions in sympy/assumptions/assume.py]
</definitions>
Please note that, in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature works properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Using multipledispatch for handlers in assumptions module
Related to #20153 and #20148
As implemented in #20153 for polyadic predicates, we can use multipledispatch instead of handler classes for monadic predicates as well.
Benefits:
1. Handlers will no longer be dispatched by class "name", which is treacherous.
2. Instead of having multiple handlers registered in the `Predicate.handlers` attribute, we can have only one handler and dispatch everything to it.
Please refer to #20153 to see how handlers will be implemented.
----------
Sorry for the meager explanation. This is still a draft in my head, and I will update after some trials.
This also offers a simpler path for users to register their own custom handlers.
--------------------
</issues> | 284db64e44124856accce4abff75d2d8990c301d |
sympy__sympy-20631 | 20,631 | sympy/sympy | 1.8 | 1e76752f545a2ed44a8af008c76c9ca7dd77e41d | 2020-12-19T18:56:17Z | diff --git a/doc/src/modules/polys/agca.rst b/doc/src/modules/polys/agca.rst
index 032416d26405..1d851f728dc8 100644
--- a/doc/src/modules/polys/agca.rst
+++ b/doc/src/modules/polys/agca.rst
@@ -334,3 +334,6 @@ as it is generated by a single element $t$.
.. autoclass:: MonogenicFiniteExtension
:members:
+
+.. autoclass:: ExtensionElement
+ :members:
diff --git a/sympy/polys/agca/extensions.py b/sympy/polys/agca/extensions.py
index 0864da18f7ba..41bc5dc393ab 100644
--- a/sympy/polys/agca/extensions.py
+++ b/sympy/polys/agca/extensions.py
@@ -1,11 +1,14 @@
"""Finite extensions of ring domains."""
-from sympy.polys.polyerrors import CoercionFailed, NotInvertible
+from sympy.polys.domains.domain import Domain
+from sympy.polys.domains.domainelement import DomainElement
+from sympy.polys.polyerrors import (CoercionFailed, NotInvertible,
+ GeneratorsError)
from sympy.polys.polytools import Poly
from sympy.printing.defaults import DefaultPrinting
-class ExtensionElement(DefaultPrinting):
+class ExtensionElement(DomainElement, DefaultPrinting):
"""
Element of a finite extension.
@@ -22,6 +25,9 @@ def __init__(self, rep, ext):
self.rep = rep
self.ext = ext
+ def parent(f):
+ return f.ext
+
def __bool__(f):
return bool(f.rep)
@@ -67,55 +73,94 @@ def __rsub__(f, g):
def __mul__(f, g):
rep = f._get_rep(g)
if rep is not None:
- return ExtElem((f.rep*rep) % f.ext.mod, f.ext)
+ return ExtElem((f.rep * rep) % f.ext.mod, f.ext)
else:
return NotImplemented
__rmul__ = __mul__
+ def _divcheck(f):
+ """Raise if division is not implemented for this divisor"""
+ if not f:
+ raise NotInvertible('Zero divisor')
+ elif f.ext.is_Field:
+ return True
+ elif f.rep.is_ground and f.ext.domain.is_unit(f.rep.rep[0]):
+ return True
+ else:
+ # Some cases like (2*x + 2)/2 over ZZ will fail here. It is
+ # unclear how to implement division in general if the ground
+ # domain is not a field so for now it was decided to restrict the
+ # implementation to division by invertible constants.
+ msg = (f"Can not invert {f} in {f.ext}. "
+ "Only division by invertible constants is implemented.")
+ raise NotImplementedError(msg)
+
def inverse(f):
"""Multiplicative inverse.
Raises
======
+
NotInvertible
If the element is a zero divisor.
"""
- if not f.ext.domain.is_Field:
- raise NotImplementedError("base field expected")
- return ExtElem(f.rep.invert(f.ext.mod), f.ext)
+ f._divcheck()
- def _invrep(f, g):
- rep = f._get_rep(g)
- if rep is not None:
- return rep.invert(f.ext.mod)
+ if f.ext.is_Field:
+ invrep = f.rep.invert(f.ext.mod)
else:
- return None
+ R = f.ext.ring
+ invrep = R.exquo(R.one, f.rep)
+
+ return ExtElem(invrep, f.ext)
def __truediv__(f, g):
- if not f.ext.domain.is_Field:
+ rep = f._get_rep(g)
+ if rep is None:
return NotImplemented
+ g = ExtElem(rep, f.ext)
+
try:
- rep = f._invrep(g)
+ ginv = g.inverse()
except NotInvertible:
- raise ZeroDivisionError
+ raise ZeroDivisionError(f"{f} / {g}")
- if rep is not None:
- return f*ExtElem(rep, f.ext)
- else:
- return NotImplemented
+ return f * ginv
__floordiv__ = __truediv__
def __rtruediv__(f, g):
try:
- return f.ext.convert(g)/f
+ g = f.ext.convert(g)
except CoercionFailed:
return NotImplemented
+ return g / f
__rfloordiv__ = __rtruediv__
+ def __mod__(f, g):
+ rep = f._get_rep(g)
+ if rep is None:
+ return NotImplemented
+ g = ExtElem(rep, f.ext)
+
+ try:
+ g._divcheck()
+ except NotInvertible:
+ raise ZeroDivisionError(f"{f} % {g}")
+
+ # Division where defined is always exact so there is no remainder
+ return f.ext.zero
+
+ def __rmod__(f, g):
+ try:
+ g = f.ext.convert(g)
+ except CoercionFailed:
+ return NotImplemented
+ return g % f
+
def __pow__(f, n):
if not isinstance(n, int):
raise TypeError("exponent of type 'int' expected")
@@ -154,10 +199,18 @@ def __str__(f):
__repr__ = __str__
+ @property
+ def is_ground(f):
+ return f.rep.is_ground
+
+ def to_ground(f):
+ [c] = f.rep.to_list()
+ return c
+
ExtElem = ExtensionElement
-class MonogenicFiniteExtension:
+class MonogenicFiniteExtension(Domain):
r"""
Finite extension generated by an integral element.
@@ -197,21 +250,20 @@ class MonogenicFiniteExtension:
>>> FiniteExtension(Poly(t**2 - x**3 - x + 1, t, field=True))
ZZ(x)[t]/(t**2 - x**3 - x + 1)
- Notes
- =====
+ """
+ is_FiniteExtension = True
- ``FiniteExtension`` is not a subclass of :class:`~.Domain`. Consequently,
- a ``FiniteExtension`` can't currently be used as ``domain`` for the
- :class:`~.Poly` class.
+ dtype = ExtensionElement
- """
def __init__(self, mod):
if not (isinstance(mod, Poly) and mod.is_univariate):
raise TypeError("modulus must be a univariate Poly")
- mod, rem = mod.div(mod.LC())
- if not rem.is_zero:
- raise ValueError("modulus could not be made monic")
+ # Using auto=True (default) potentially changes the ground domain to a
+ # field whereas auto=False raises if division is not exact. We'll let
+ # the caller decide whether or not they want to put the ground domain
+ # over a field. In most uses mod is already monic.
+ mod = mod.monic(auto=False)
self.rank = mod.degree()
self.modulus = mod
@@ -224,22 +276,69 @@ def __init__(self, mod):
self.one = self.convert(self.ring.one)
gen = self.ring.gens[0]
+ self.symbol = self.ring.symbols[0]
self.generator = self.convert(gen)
- self.basis = tuple(self.convert(gen**i)
- for i in range(self.rank))
+ self.basis = tuple(self.convert(gen**i) for i in range(self.rank))
+
+ # XXX: It might be necessary to check mod.is_irreducible here
+ self.is_Field = self.domain.is_Field
- def convert(self, f):
- rep = self.ring.convert(f)
+ def new(self, arg):
+ rep = self.ring.convert(arg)
return ExtElem(rep % self.mod, self)
- def to_sympy(self, f):
- return self.ring.to_sympy(f.rep)
+ def __eq__(self, other):
+ if not isinstance(other, FiniteExtension):
+ return False
+ return self.modulus == other.modulus
- __call__ = convert
+ def __hash__(self):
+ return hash((self.__class__.__name__, self.modulus))
def __str__(self):
return "%s/(%s)" % (self.ring, self.modulus.as_expr())
__repr__ = __str__
+ def convert(self, f, base=None):
+ rep = self.ring.convert(f, base)
+ return ExtElem(rep % self.mod, self)
+
+ def convert_from(self, f, base):
+ rep = self.ring.convert(f, base)
+ return ExtElem(rep % self.mod, self)
+
+ def to_sympy(self, f):
+ return self.ring.to_sympy(f.rep)
+
+ def from_sympy(self, f):
+ return self.convert(f)
+
+ def set_domain(self, K):
+ mod = self.modulus.set_domain(K)
+ return self.__class__(mod)
+
+ def drop(self, *symbols):
+ if self.symbol in symbols:
+ raise GeneratorsError('Can not drop generator from FiniteExtension')
+ K = self.domain.drop(*symbols)
+ return self.set_domain(K)
+
+ def quo(self, f, g):
+ rep = self.domain.exquo(f.rep, g.rep)
+ return ExtElem(rep % self.mod, self)
+
+ def exquo(self, f, g):
+ rep = self.domain.exquo(f.rep, g.rep)
+ return ExtElem(rep % self.mod, self)
+
+ def is_negative(self, a):
+ return False
+
+ def is_unit(self, a):
+ if self.is_Field:
+ return bool(a)
+ elif a.is_ground:
+ return self.domain.is_unit(a.to_ground())
+
FiniteExtension = MonogenicFiniteExtension
diff --git a/sympy/polys/domains/compositedomain.py b/sympy/polys/domains/compositedomain.py
index 3b5189ce1d16..560720a02469 100644
--- a/sympy/polys/domains/compositedomain.py
+++ b/sympy/polys/domains/compositedomain.py
@@ -15,8 +15,18 @@ class CompositeDomain(Domain):
gens, ngens, symbols, domain = [None]*4
def inject(self, *symbols):
- """Inject generators into this domain. """
+ """Inject generators into this domain. """
if not (set(self.symbols) & set(symbols)):
return self.__class__(self.domain, self.symbols + symbols, self.order)
else:
raise GeneratorsError("common generators in %s and %s" % (self.symbols, symbols))
+
+ def drop(self, *symbols):
+ """Drop generators from this domain. """
+ symset = set(symbols)
+ newsyms = tuple(s for s in self.symbols if s not in symset)
+ domain = self.domain.drop(*symbols)
+ if not newsyms:
+ return domain
+ else:
+ return self.__class__(domain, newsyms, self.order)
diff --git a/sympy/polys/domains/domain.py b/sympy/polys/domains/domain.py
index b763454ce5fa..40acad0a5023 100644
--- a/sympy/polys/domains/domain.py
+++ b/sympy/polys/domains/domain.py
@@ -4,7 +4,7 @@
from typing import Any, Optional, Type
from sympy.core import Basic, sympify
-from sympy.core.compatibility import HAS_GMPY, is_sequence
+from sympy.core.compatibility import HAS_GMPY, is_sequence, ordered
from sympy.core.decorators import deprecated
from sympy.polys.domains.domainelement import DomainElement
from sympy.polys.orderings import lex
@@ -37,6 +37,7 @@ class Domain:
is_PolynomialRing = is_Poly = False
is_FractionField = is_Frac = False
is_SymbolicDomain = is_EX = False
+ is_FiniteExtension = False
is_Exact = True
is_Numerical = False
@@ -227,6 +228,10 @@ def from_FractionField(K1, a, K0):
"""Convert a rational function to ``dtype``. """
return None
+ def from_MonogenicFiniteExtension(K1, a, K0):
+ """Convert an ``ExtensionElement`` to ``dtype``. """
+ return K1.convert_from(a.rep, K0.ring)
+
def from_ExpressionDomain(K1, a, K0):
"""Convert a ``EX`` object to ``dtype``. """
return K1.from_sympy(a.ex)
@@ -273,6 +278,21 @@ def unify(K0, K1, symbols=None):
if K1.is_EX:
return K1
+ if K0.is_FiniteExtension or K1.is_FiniteExtension:
+ if K1.is_FiniteExtension:
+ K0, K1 = K1, K0
+ if K1.is_FiniteExtension:
+ # Unifying two extensions.
+ # Try to ensure that K0.unify(K1) == K1.unify(K0)
+ if list(ordered([K0.modulus, K1.modulus]))[1] == K0.modulus:
+ K0, K1 = K1, K0
+ return K1.set_domain(K0)
+ else:
+ # Drop the generator from other and unify with the base domain
+ K1 = K1.drop(K0.symbol)
+ K1 = K0.domain.unify(K1)
+ return K0.set_domain(K1)
+
if K0.is_Composite or K1.is_Composite:
K0_ground = K0.dom if K0.is_Composite else K0
K1_ground = K1.dom if K1.is_Composite else K1
@@ -433,6 +453,12 @@ def inject(self, *symbols):
"""Inject generators into this domain. """
raise NotImplementedError
+ def drop(self, *symbols):
+ """Drop generators from this domain. """
+ if self.is_Simple:
+ return self
+ raise NotImplementedError # pragma: no cover
+
def is_zero(self, a):
"""Returns True if ``a`` is zero. """
return not a
diff --git a/sympy/polys/domains/domainelement.py b/sympy/polys/domains/domainelement.py
index 920882608be0..6c8f90d61e05 100644
--- a/sympy/polys/domains/domainelement.py
+++ b/sympy/polys/domains/domainelement.py
@@ -8,10 +8,10 @@ class DomainElement:
"""
Represents an element of a domain.
- Mix in this trait into a class which instances should be recognized as
+ Mix in this trait into a class whose instances should be recognized as
elements of a domain. Method ``parent()`` gives that domain.
-
"""
def parent(self):
+ """Get the domain associated with ``self`` """
raise NotImplementedError("abstract method")
diff --git a/sympy/polys/domains/field.py b/sympy/polys/domains/field.py
index 4158a5fdd893..33c1314dee45 100644
--- a/sympy/polys/domains/field.py
+++ b/sympy/polys/domains/field.py
@@ -98,3 +98,7 @@ def revert(self, a):
return 1/a
else:
raise NotReversible('zero is not reversible')
+
+ def is_unit(self, a):
+ """Return true if ``a`` is a invertible"""
+ return bool(a)
diff --git a/sympy/polys/domains/old_polynomialring.py b/sympy/polys/domains/old_polynomialring.py
index 143148af379e..cf5094e8a439 100644
--- a/sympy/polys/domains/old_polynomialring.py
+++ b/sympy/polys/domains/old_polynomialring.py
@@ -91,6 +91,22 @@ def from_AlgebraicField(K1, a, K0):
if K1.dom == K0:
return K1(a)
+ def from_PolynomialRing(K1, a, K0):
+ """Convert a `PolyElement` object to `dtype`. """
+ if K1.gens == K0.symbols:
+ if K1.dom == K0.dom:
+ return K1(dict(a)) # set the correct ring
+ else:
+ convert_dom = lambda c: K1.dom.convert_from(c, K0.dom)
+ return K1({m: convert_dom(c) for m, c in a.items()})
+ else:
+ monoms, coeffs = _dict_reorder(a.to_dict(), K0.symbols, K1.gens)
+
+ if K1.dom != K0.dom:
+ coeffs = [ K1.dom.convert(c, K0.dom) for c in coeffs ]
+
+ return K1(dict(zip(monoms, coeffs)))
+
def from_GlobalPolynomialRing(K1, a, K0):
"""Convert a `DMP` object to `dtype`. """
if K1.gens == K0.gens:
diff --git a/sympy/polys/domains/polynomialring.py b/sympy/polys/domains/polynomialring.py
index a26b00d4bbfa..b595abad2e70 100644
--- a/sympy/polys/domains/polynomialring.py
+++ b/sympy/polys/domains/polynomialring.py
@@ -67,6 +67,13 @@ def __eq__(self, other):
(self.dtype.ring, self.domain, self.symbols) == \
(other.dtype.ring, other.domain, other.symbols)
+ def is_unit(self, a):
+ """Returns ``True`` if ``a`` is a unit of ``self``"""
+ if not a.is_ground:
+ return False
+ K = self.domain
+ return K.is_unit(K.convert_from(a, self))
+
def to_sympy(self, a):
"""Convert `a` to a SymPy object. """
return a.as_expr()
@@ -124,6 +131,16 @@ def from_FractionField(K1, a, K0):
else:
return None
+ def from_GlobalPolynomialRing(K1, a, K0):
+ """Convert from old poly ring to ``dtype``. """
+ if K1.symbols == K0.gens:
+ ad = a.to_dict()
+ if K1.domain != K0.domain:
+ ad = {m: K1.domain.convert(c) for m, c in ad.items()}
+ return K1(ad)
+ elif a.is_ground and K0.domain == K1:
+ return K1.convert_from(a.to_list()[0], K0.domain)
+
def get_field(self):
"""Returns a field associated with `self`. """
return self.ring.to_field().to_domain()
diff --git a/sympy/polys/domains/ring.py b/sympy/polys/domains/ring.py
index a50afe2bebad..c69e6944d8f5 100644
--- a/sympy/polys/domains/ring.py
+++ b/sympy/polys/domains/ring.py
@@ -46,10 +46,10 @@ def invert(self, a, b):
def revert(self, a):
"""Returns ``a**(-1)`` if possible. """
- if self.is_one(a):
+ if self.is_one(a) or self.is_one(-a):
return a
else:
- raise NotReversible('only unity is reversible in a ring')
+ raise NotReversible('only units are reversible in a ring')
def is_unit(self, a):
try:
diff --git a/sympy/polys/polytools.py b/sympy/polys/polytools.py
index 5e8a7fdbc56e..39670bf874a6 100644
--- a/sympy/polys/polytools.py
+++ b/sympy/polys/polytools.py
@@ -2582,10 +2582,10 @@ def revert(f, n):
>>> Poly(1 + x, x).revert(1)
Poly(1, x, domain='ZZ')
- >>> Poly(x**2 - 1, x).revert(1)
+ >>> Poly(x**2 - 2, x).revert(2)
Traceback (most recent call last):
...
- NotReversible: only unity is reversible in a ring
+ NotReversible: only units are reversible in a ring
>>> Poly(1/x, x).revert(1)
Traceback (most recent call last):
diff --git a/sympy/polys/rings.py b/sympy/polys/rings.py
index 5b57b7280882..fa5970caa86e 100644
--- a/sympy/polys/rings.py
+++ b/sympy/polys/rings.py
@@ -383,7 +383,7 @@ def _rebuild(expr):
elif expr.is_Pow and expr.exp.is_Integer and expr.exp >= 0:
return _rebuild(expr.base)**int(expr.exp)
else:
- return domain.convert(expr)
+ return self.ground_new(domain.convert(expr))
return _rebuild(sympify(expr))
| diff --git a/sympy/polys/agca/tests/test_extensions.py b/sympy/polys/agca/tests/test_extensions.py
index 432687e2cf20..f7bb9b897a6e 100644
--- a/sympy/polys/agca/tests/test_extensions.py
+++ b/sympy/polys/agca/tests/test_extensions.py
@@ -1,8 +1,14 @@
+from sympy import symbols, sin, cos
+from sympy.polys import QQ, ZZ
from sympy.polys.polytools import Poly
from sympy.polys.polyerrors import NotInvertible
from sympy.polys.agca.extensions import FiniteExtension
+from sympy.polys.domainmatrix import DomainMatrix
+
from sympy.testing.pytest import raises
-from sympy.abc import x, t
+
+from sympy.abc import x, y, t
+
def test_FiniteExtension():
# Gaussian integers
@@ -10,13 +16,19 @@ def test_FiniteExtension():
assert A.rank == 2
assert str(A) == 'ZZ[x]/(x**2 + 1)'
i = A.generator
+ assert i.parent() is A
+
+ assert i*i == A(-1)
+ raises(TypeError, lambda: i*())
+
assert A.basis == (A.one, i)
assert A(1) == A.one
assert i**2 == A(-1)
assert i**2 != -1 # no coercion
assert (2 + i)*(1 - i) == 3 - i
assert (1 + i)**8 == A(16)
- raises(NotImplementedError, lambda: A(1).inverse())
+ assert A(1).inverse() == A(1)
+ raises(NotImplementedError, lambda: A(2).inverse())
# Finite field of order 27
F = FiniteExtension(Poly(x**3 - x + 1, x, modulus=3))
@@ -46,3 +58,138 @@ def test_FiniteExtension():
c = 1/(x**3 - x**2 + x - 1)
assert ((y + x)*(y - x)).inverse() == K(c)
assert (y + x)*(y - x)*c == K(1) # explicit inverse of y + x
+
+
+def test_FiniteExtension_eq_hash():
+ # Test eq and hash
+ p1 = Poly(x**2 - 2, x, domain=ZZ)
+ p2 = Poly(x**2 - 2, x, domain=QQ)
+ K1 = FiniteExtension(p1)
+ K2 = FiniteExtension(p2)
+ assert K1 == FiniteExtension(Poly(x**2 - 2))
+ assert K2 != FiniteExtension(Poly(x**2 - 2))
+ assert len({K1, K2, FiniteExtension(p1)}) == 2
+
+
+def test_FiniteExtension_mod():
+ # Test mod
+ K = FiniteExtension(Poly(x**3 + 1, x, domain=QQ))
+ xf = K(x)
+ assert (xf**2 - 1) % 1 == K.zero
+ assert 1 % (xf**2 - 1) == K.zero
+ assert (xf**2 - 1) / (xf - 1) == xf + 1
+ assert (xf**2 - 1) // (xf - 1) == xf + 1
+ assert (xf**2 - 1) % (xf - 1) == K.zero
+ raises(ZeroDivisionError, lambda: (xf**2 - 1) % 0)
+ raises(TypeError, lambda: xf % [])
+ raises(TypeError, lambda: [] % xf)
+
+ # Test mod over ring
+ K = FiniteExtension(Poly(x**3 + 1, x, domain=ZZ))
+ xf = K(x)
+ assert (xf**2 - 1) % 1 == K.zero
+ raises(NotImplementedError, lambda: (xf**2 - 1) % (xf - 1))
+
+
+def test_FiniteExtension_from_sympy():
+ # Test to_sympy/from_sympy
+ K = FiniteExtension(Poly(x**3 + 1, x, domain=ZZ))
+ xf = K(x)
+ assert K.from_sympy(x) == xf
+ assert K.to_sympy(xf) == x
+
+
+def test_FiniteExtension_set_domain():
+ KZ = FiniteExtension(Poly(x**2 + 1, x, domain='ZZ'))
+ KQ = FiniteExtension(Poly(x**2 + 1, x, domain='QQ'))
+ assert KZ.set_domain(QQ) == KQ
+
+
+def test_FiniteExtension_exquo():
+ # Test exquo
+ K = FiniteExtension(Poly(x**4 + 1))
+ xf = K(x)
+ assert K.exquo(xf**2 - 1, xf - 1) == xf + 1
+
+
+def test_FiniteExtension_convert():
+ # Test from_MonogenicFiniteExtension
+ K1 = FiniteExtension(Poly(x**2 + 1))
+ K2 = QQ[x]
+ x1, x2 = K1(x), K2(x)
+ assert K1.convert(x2) == x1
+ assert K2.convert(x1) == x2
+
+ K = FiniteExtension(Poly(x**2 - 1, domain=QQ))
+ assert K.convert_from(QQ(1, 2), QQ) == K.one/2
+
+
+def test_FiniteExtension_division_ring():
+ # Test division in FiniteExtension over a ring
+ KQ = FiniteExtension(Poly(x**2 - 1, x, domain=QQ))
+ KZ = FiniteExtension(Poly(x**2 - 1, x, domain=ZZ))
+ KQt = FiniteExtension(Poly(x**2 - 1, x, domain=QQ[t]))
+ KQtf = FiniteExtension(Poly(x**2 - 1, x, domain=QQ.frac_field(t)))
+ assert KQ.is_Field is True
+ assert KZ.is_Field is False
+ assert KQt.is_Field is False
+ assert KQtf.is_Field is True
+ for K in KQ, KZ, KQt, KQtf:
+ xK = K.convert(x)
+ assert xK / K.one == xK
+ assert xK // K.one == xK
+ assert xK % K.one == K.zero
+ raises(ZeroDivisionError, lambda: xK / K.zero)
+ raises(ZeroDivisionError, lambda: xK // K.zero)
+ raises(ZeroDivisionError, lambda: xK % K.zero)
+ if K.is_Field:
+ assert xK / xK == K.one
+ assert xK // xK == K.one
+ assert xK % xK == K.zero
+ else:
+ raises(NotImplementedError, lambda: xK / xK)
+ raises(NotImplementedError, lambda: xK // xK)
+ raises(NotImplementedError, lambda: xK % xK)
+
+
+def test_FiniteExtension_Poly():
+ K = FiniteExtension(Poly(x**2 - 2))
+ p = Poly(x, y, domain=K)
+ assert p.domain == K
+ assert p.as_expr() == x
+ assert (p**2).as_expr() == 2
+
+ K = FiniteExtension(Poly(x**2 - 2, x, domain=QQ))
+ K2 = FiniteExtension(Poly(t**2 - 2, t, domain=K))
+ assert str(K2) == 'QQ[x]/(x**2 - 2)[t]/(t**2 - 2)'
+
+ eK = K2.convert(x + t)
+ assert K2.to_sympy(eK) == x + t
+ assert K2.to_sympy(eK ** 2) == 4 + 2*x*t
+ p = Poly(x + t, y, domain=K2)
+ assert p**2 == Poly(4 + 2*x*t, y, domain=K2)
+
+
+def test_FiniteExtension_sincos_jacobian():
+ # Use FiniteExtension to compute the Jacobian of a matrix involving sin
+ # and cos of different symbols.
+ r, p, t = symbols('rho, phi, theta')
+ elements = [
+ [sin(p)*cos(t), r*cos(p)*cos(t), -r*sin(p)*sin(t)],
+ [sin(p)*sin(t), r*cos(p)*sin(t), r*sin(p)*cos(t)],
+ [ cos(p), -r*sin(p), 0],
+ ]
+
+ def make_extension(K):
+ K = FiniteExtension(Poly(sin(p)**2+cos(p)**2-1, sin(p), domain=K[cos(p)]))
+ K = FiniteExtension(Poly(sin(t)**2+cos(t)**2-1, sin(t), domain=K[cos(t)]))
+ return K
+
+ Ksc1 = make_extension(ZZ[r])
+ Ksc2 = make_extension(ZZ)[r]
+
+ for K in [Ksc1, Ksc2]:
+ elements_K = [[K.convert(e) for e in row] for row in elements]
+ J = DomainMatrix(elements_K, (3, 3), K)
+ det = J.charpoly()[-1] * (-K.one)**3
+ assert det == K.convert(r**2*sin(p))
diff --git a/sympy/polys/domains/tests/test_domains.py b/sympy/polys/domains/tests/test_domains.py
index 255e51be37ce..6edfef8e7f12 100644
--- a/sympy/polys/domains/tests/test_domains.py
+++ b/sympy/polys/domains/tests/test_domains.py
@@ -15,6 +15,8 @@
from sympy.polys.rings import ring
from sympy.polys.fields import field
+from sympy.polys.agca.extensions import FiniteExtension
+
from sympy.polys.polyerrors import (
UnificationFailed,
GeneratorsError,
@@ -284,6 +286,57 @@ def test_Domain_unify_algebraic():
assert sqrt5.unify(sqrt7.frac_field(x, y)) == sqrt57.frac_field(x, y)
assert sqrt5.frac_field(x, y).unify(sqrt7) == sqrt57.frac_field(x, y)
+def test_Domain_unify_FiniteExtension():
+ KxZZ = FiniteExtension(Poly(x**2 - 2, x, domain=ZZ))
+ KxQQ = FiniteExtension(Poly(x**2 - 2, x, domain=QQ))
+ KxZZy = FiniteExtension(Poly(x**2 - 2, x, domain=ZZ[y]))
+ KxQQy = FiniteExtension(Poly(x**2 - 2, x, domain=QQ[y]))
+
+ assert KxZZ.unify(KxZZ) == KxZZ
+ assert KxQQ.unify(KxQQ) == KxQQ
+ assert KxZZy.unify(KxZZy) == KxZZy
+ assert KxQQy.unify(KxQQy) == KxQQy
+
+ assert KxZZ.unify(ZZ) == KxZZ
+ assert KxZZ.unify(QQ) == KxQQ
+ assert KxQQ.unify(ZZ) == KxQQ
+ assert KxQQ.unify(QQ) == KxQQ
+
+ assert KxZZ.unify(ZZ[y]) == KxZZy
+ assert KxZZ.unify(QQ[y]) == KxQQy
+ assert KxQQ.unify(ZZ[y]) == KxQQy
+ assert KxQQ.unify(QQ[y]) == KxQQy
+
+ assert KxZZy.unify(ZZ) == KxZZy
+ assert KxZZy.unify(QQ) == KxQQy
+ assert KxQQy.unify(ZZ) == KxQQy
+ assert KxQQy.unify(QQ) == KxQQy
+
+ assert KxZZy.unify(ZZ[y]) == KxZZy
+ assert KxZZy.unify(QQ[y]) == KxQQy
+ assert KxQQy.unify(ZZ[y]) == KxQQy
+ assert KxQQy.unify(QQ[y]) == KxQQy
+
+ K = FiniteExtension(Poly(x**2 - 2, x, domain=ZZ[y]))
+ assert K.unify(ZZ) == K
+ assert K.unify(ZZ[x]) == K
+ assert K.unify(ZZ[y]) == K
+ assert K.unify(ZZ[x, y]) == K
+
+ Kz = FiniteExtension(Poly(x**2 - 2, x, domain=ZZ[y, z]))
+ assert K.unify(ZZ[z]) == Kz
+ assert K.unify(ZZ[x, z]) == Kz
+ assert K.unify(ZZ[y, z]) == Kz
+ assert K.unify(ZZ[x, y, z]) == Kz
+
+ Kx = FiniteExtension(Poly(x**2 - 2, x, domain=ZZ))
+ Ky = FiniteExtension(Poly(y**2 - 2, y, domain=ZZ))
+ Kxy = FiniteExtension(Poly(y**2 - 2, y, domain=Kx))
+ assert Kx.unify(Kx) == Kx
+ assert Ky.unify(Ky) == Ky
+ assert Kx.unify(Ky) == Kxy
+ assert Ky.unify(Kx) == Kxy
+
def test_Domain_unify_with_symbols():
raises(UnificationFailed, lambda: ZZ[x, y].unify_with_symbols(ZZ, (y, z)))
raises(UnificationFailed, lambda: ZZ.unify_with_symbols(ZZ[x, y], (y, z)))
@@ -500,6 +553,21 @@ def test_Domain_get_exact():
assert QQ.frac_field(x, y).get_exact() == QQ.frac_field(x, y)
+def test_Domain_is_unit():
+ nums = [-2, -1, 0, 1, 2]
+ invring = [False, True, False, True, False]
+ invfield = [True, True, False, True, True]
+ ZZx, QQx, QQxf = ZZ[x], QQ[x], QQ.frac_field(x)
+ assert [ZZ.is_unit(ZZ(n)) for n in nums] == invring
+ assert [QQ.is_unit(QQ(n)) for n in nums] == invfield
+ assert [ZZx.is_unit(ZZx(n)) for n in nums] == invring
+ assert [QQx.is_unit(QQx(n)) for n in nums] == invfield
+ assert [QQxf.is_unit(QQxf(n)) for n in nums] == invfield
+ assert ZZx.is_unit(ZZx(x)) is False
+ assert QQx.is_unit(QQx(x)) is False
+ assert QQxf.is_unit(QQxf(x)) is True
+
+
def test_Domain_convert():
assert QQ.convert(10e-52) == QQ(1684996666696915, 1684996666696914987166688442938726917102321526408785780068975640576)
@@ -508,6 +576,23 @@ def test_Domain_convert():
assert ZZ.convert(x - x, R.to_domain()) == 0
+def test_GlobalPolynomialRing_convert():
+ K1 = QQ.old_poly_ring(x)
+ K2 = QQ[x]
+ assert K1.convert(x) == K1.convert(K2.convert(x), K2)
+ assert K2.convert(x) == K2.convert(K1.convert(x), K1)
+
+ K1 = QQ.old_poly_ring(x, y)
+ K2 = QQ[x]
+ assert K1.convert(x) == K1.convert(K2.convert(x), K2)
+ #assert K2.convert(x) == K2.convert(K1.convert(x), K1)
+
+ K1 = ZZ.old_poly_ring(x, y)
+ K2 = QQ[x]
+ assert K1.convert(x) == K1.convert(K2.convert(x), K2)
+ #assert K2.convert(x) == K2.convert(K1.convert(x), K1)
+
+
def test_PolynomialRing__init():
R, = ring("", ZZ)
assert ZZ.poly_ring() == R.to_domain()
@@ -525,6 +610,22 @@ def test_inject():
raises(GeneratorsError, lambda: ZZ[x].inject(x))
+def test_drop():
+ assert ZZ.drop(x) == ZZ
+ assert ZZ[x].drop(x) == ZZ
+ assert ZZ[x, y].drop(x) == ZZ[y]
+ assert ZZ.frac_field(x).drop(x) == ZZ
+ assert ZZ.frac_field(x, y).drop(x) == ZZ.frac_field(y)
+ assert ZZ[x][y].drop(y) == ZZ[x]
+ assert ZZ[x][y].drop(x) == ZZ[y]
+ assert ZZ.frac_field(x)[y].drop(x) == ZZ[y]
+ assert ZZ.frac_field(x)[y].drop(y) == ZZ.frac_field(x)
+ Ky = FiniteExtension(Poly(x**2-1, x, domain=ZZ[y]))
+ K = FiniteExtension(Poly(x**2-1, x, domain=ZZ))
+ assert Ky.drop(y) == K
+ raises(GeneratorsError, lambda: Ky.drop(x))
+
+
def test_Domain_map():
seq = ZZ.map([1, 2, 3, 4])
| diff --git a/doc/src/modules/polys/agca.rst b/doc/src/modules/polys/agca.rst
index 032416d26405..1d851f728dc8 100644
--- a/doc/src/modules/polys/agca.rst
+++ b/doc/src/modules/polys/agca.rst
@@ -334,3 +334,6 @@ as it is generated by a single element $t$.
.. autoclass:: MonogenicFiniteExtension
:members:
+
+.. autoclass:: ExtensionElement
+ :members:
| [
{
"components": [
{
"doc": "",
"lines": [
28,
29
],
"name": "ExtensionElement.parent",
"signature": "def parent(f):",
"type": "function"
},
{
"doc": "Raise if division is not implemented for this divisor",
"lin... | [
"test_FiniteExtension",
"test_FiniteExtension_eq_hash",
"test_FiniteExtension_mod",
"test_FiniteExtension_from_sympy",
"test_FiniteExtension_set_domain",
"test_FiniteExtension_exquo",
"test_FiniteExtension_convert",
"test_FiniteExtension_division_ring",
"test_FiniteExtension_Poly",
"test_Domain_un... | [
"test_Domain_unify",
"test_Domain_unify_composite",
"test_Domain_unify_algebraic",
"test_Domain_unify_with_symbols",
"test_Domain__contains__",
"test_Domain_get_ring",
"test_Domain_get_field",
"test_Domain_get_exact",
"test_Domain_convert",
"test_PolynomialRing__init",
"test_FractionField__init"... | This is a feature request which requires a new feature to be added to the code repository.
<<NEW FEATURE REQUEST>>
<request>
feat(polys): make FiniteExtension a proper Domain
#### References to other Issues or PRs
See discussion in #20608.
#### Brief description of what is fixed or changed
Makes MonogenicFiniteExtension into a proper domain that can be used with Poly etc:
```python
In [1]: from sympy.polys.agca.extensions import FiniteExtension
In [2]: K = FiniteExtension(Poly(x**2 - 2))
In [3]: K
Out[3]: ZZ[x]/(x**2 - 2)
In [4]: p = Poly(x + y, y, domain=K)
In [5]: p
Out[5]: Poly(y + x, y, domain='ZZ[x]/(x**2 - 2)')
In [6]: p**2
Out[6]: Poly(y**2 + 2*x*y + 2, y, domain='ZZ[x]/(x**2 - 2)')
```
This also makes it possible to build a chain of extensions:
```python
In [7]: K2 = FiniteExtension(Poly(y**2 - 3, domain=K))
In [8]: K2
Out[8]: ZZ[x]/(x**2 - 2)[y]/(y**2 - 3)
In [9]: xf = K2.convert(x)
In [10]: yf = K2.convert(y)
In [11]: xf + yf
Out[11]: x + y
In [12]: _ ** 2
Out[12]: 2*x*y + 5
```
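The arithmetic behind these examples is reduction modulo the defining polynomial: in K = ZZ[x]/(x**2 - 2), every occurrence of x**2 is replaced by 2. A stdlib-only sketch of that reduction (pairs (c0, c1) standing for c0 + c1*x; illustrative, not SymPy's implementation):

```python
# Elements of ZZ[x]/(x**2 - 2) as pairs (c0, c1) meaning c0 + c1*x.
# Multiplication expands the product and reduces with x**2 -> 2.
def ext_mul(u, v):
    a0, a1 = u
    b0, b1 = v
    # (a0 + a1*x)(b0 + b1*x) = a0*b0 + (a0*b1 + a1*b0)*x + a1*b1*x**2
    return (a0 * b0 + 2 * a1 * b1, a0 * b1 + a1 * b0)

x = (0, 1)
print(ext_mul(x, x))            # x**2 reduces to the constant 2
print(ext_mul((1, 1), (1, 1)))  # (1 + x)**2 == 3 + 2*x
```

The same reduction, applied coefficient-wise, is what makes `Poly(x + y, y, domain=K)**2` come out as `y**2 + 2*x*y + 2` above.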
I've added a `Domain.drop` method to drop a generator:
```python
In [13]: QQ[x, y].drop(x)
Out[13]: QQ[y]
In [14]: QQ[x, y][z].drop(x)
Out[14]: QQ[y][z]
In [15]: QQ[x, y][z].get_field()
Out[15]: QQ[x,y](z)
In [16]: QQ[x, y][z].get_field().drop(x)
Out[16]: QQ[y](z)
In [17]: QQ[x, y].get_field()[z]
Out[17]: QQ(x,y)[z]
In [18]: QQ[x, y].get_field()[z].drop(x)
Out[18]: QQ(y)[z]
In [19]: QQ[x, y].get_field()[z].drop(x, y)
Out[19]: QQ[z]
```
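A toy model of the generator-dropping behaviour shown above, representing a tower like QQ[x,y][z] as a base name plus tuples of generators (the tuple encoding is an illustrative assumption, not SymPy's internal representation):

```python
# drop() removes the named generators from every level of the tower and
# prunes levels left empty, mirroring QQ[x,y][z].drop(x, y) == QQ[z].
def drop(tower, *symbols):
    base, *levels = tower
    kept = [tuple(g for g in level if g not in symbols) for level in levels]
    return (base,) + tuple(level for level in kept if level)

assert drop(('QQ', ('x', 'y')), 'x') == ('QQ', ('y',))
assert drop(('QQ', ('x', 'y'), ('z',)), 'x') == ('QQ', ('y',), ('z',))
assert drop(('QQ', ('x', 'y'), ('z',)), 'x', 'y') == ('QQ', ('z',))
```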
#### Other comments
It seems that FiniteExtension uses `old_poly_ring`. I'm not sure if that's something that should be changed. I guess this should really be called `dense_poly_ring` and really it does make sense to use a dense representation here since the polys are limited in size by the modulus. There are however some bugs in conversion between old and new poly ring and also between different poly rings.
#### Release Notes
<!-- BEGIN RELEASE NOTES -->
* polys
* A new polys FiniteExtension domain is added.
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/polys/agca/extensions.py]
(definition of ExtensionElement.parent:)
def parent(f):
(definition of ExtensionElement._divcheck:)
def _divcheck(f):
"""Raise if division is not implemented for this divisor"""
(definition of ExtensionElement.__mod__:)
def __mod__(f, g):
(definition of ExtensionElement.__rmod__:)
def __rmod__(f, g):
(definition of ExtensionElement.is_ground:)
def is_ground(f):
(definition of ExtensionElement.to_ground:)
def to_ground(f):
(definition of MonogenicFiniteExtension.new:)
def new(self, arg):
(definition of MonogenicFiniteExtension.__eq__:)
def __eq__(self, other):
(definition of MonogenicFiniteExtension.__hash__:)
def __hash__(self):
(definition of MonogenicFiniteExtension.convert_from:)
def convert_from(self, f, base):
(definition of MonogenicFiniteExtension.from_sympy:)
def from_sympy(self, f):
(definition of MonogenicFiniteExtension.set_domain:)
def set_domain(self, K):
(definition of MonogenicFiniteExtension.drop:)
def drop(self, *symbols):
(definition of MonogenicFiniteExtension.quo:)
def quo(self, f, g):
(definition of MonogenicFiniteExtension.exquo:)
def exquo(self, f, g):
(definition of MonogenicFiniteExtension.is_negative:)
def is_negative(self, a):
(definition of MonogenicFiniteExtension.is_unit:)
def is_unit(self, a):
[end of new definitions in sympy/polys/agca/extensions.py]
[start of new definitions in sympy/polys/domains/compositedomain.py]
(definition of CompositeDomain.drop:)
def drop(self, *symbols):
"""Drop generators from this domain. """
[end of new definitions in sympy/polys/domains/compositedomain.py]
[start of new definitions in sympy/polys/domains/domain.py]
(definition of Domain.from_MonogenicFiniteExtension:)
def from_MonogenicFiniteExtension(K1, a, K0):
"""Convert an ``ExtensionElement`` to ``dtype``. """
(definition of Domain.drop:)
def drop(self, *symbols):
"""Drop generators from this domain. """
[end of new definitions in sympy/polys/domains/domain.py]
[start of new definitions in sympy/polys/domains/field.py]
(definition of Field.is_unit:)
def is_unit(self, a):
"""Return true if ``a`` is invertible"""
[end of new definitions in sympy/polys/domains/field.py]
[start of new definitions in sympy/polys/domains/old_polynomialring.py]
(definition of PolynomialRingBase.from_PolynomialRing:)
def from_PolynomialRing(K1, a, K0):
"""Convert a `PolyElement` object to `dtype`. """
[end of new definitions in sympy/polys/domains/old_polynomialring.py]
[start of new definitions in sympy/polys/domains/polynomialring.py]
(definition of PolynomialRing.is_unit:)
def is_unit(self, a):
"""Returns ``True`` if ``a`` is a unit of ``self``"""
(definition of PolynomialRing.from_GlobalPolynomialRing:)
def from_GlobalPolynomialRing(K1, a, K0):
"""Convert from old poly ring to ``dtype``. """
[end of new definitions in sympy/polys/domains/polynomialring.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 284db64e44124856accce4abff75d2d8990c301d | |
sympy__sympy-20619 | 20,619 | sympy/sympy | 1.8 | 989300eda95d4a7661479d64bb7dac44240c036d | 2020-12-17T10:28:20Z | diff --git a/sympy/stats/__init__.py b/sympy/stats/__init__.py
index ed4ee9734088..e72ed56c7a4d 100644
--- a/sympy/stats/__init__.py
+++ b/sympy/stats/__init__.py
@@ -122,7 +122,7 @@
'Trapezoidal', 'Triangular', 'Uniform', 'UniformSum', 'VonMises', 'Wald',
'Weibull', 'WignerSemicircle', 'ContinuousDistributionHandmade',
- 'Geometric','Hermite', 'Logarithmic', 'NegativeBinomial', 'Poisson', 'Skellam',
+ 'FlorySchulz', 'Geometric','Hermite', 'Logarithmic', 'NegativeBinomial', 'Poisson', 'Skellam',
'YuleSimon', 'Zeta', 'DiscreteRV', 'DiscreteDistributionHandmade',
'JointRV', 'Dirichlet', 'GeneralizedMultivariateLogGamma',
@@ -170,7 +170,7 @@
PowerFunction, ShiftedGompertz, Trapezoidal, Triangular, Uniform, UniformSum,
VonMises, Wald, Weibull, WignerSemicircle, ContinuousDistributionHandmade)
-from .drv_types import (Geometric, Hermite, Logarithmic, NegativeBinomial, Poisson,
+from .drv_types import (FlorySchulz, Geometric, Hermite, Logarithmic, NegativeBinomial, Poisson,
Skellam, YuleSimon, Zeta, DiscreteRV, DiscreteDistributionHandmade)
from .joint_rv_types import (JointRV, Dirichlet,
diff --git a/sympy/stats/drv_types.py b/sympy/stats/drv_types.py
index d085df31406c..f3d3e8bf2c72 100644
--- a/sympy/stats/drv_types.py
+++ b/sympy/stats/drv_types.py
@@ -2,6 +2,7 @@
Contains
========
+FlorySchulz
Geometric
Hermite
Logarithmic
@@ -21,7 +22,8 @@
from sympy.stats.rv import _value_check, is_random
-__all__ = ['Geometric',
+__all__ = ['FlorySchulz',
+'Geometric',
'Hermite',
'Logarithmic',
'NegativeBinomial',
@@ -106,6 +108,78 @@ def DiscreteRV(symbol, density, set=S.Integers, **kwargs):
return rv(symbol.name, DiscreteDistributionHandmade, pdf, set, **kwargs)
+#-------------------------------------------------------------------------------
+# Flory-Schulz distribution ------------------------------------------------------------
+
+class FlorySchulzDistribution(SingleDiscreteDistribution):
+ _argnames = ('a',)
+ set = S.Naturals
+
+ @staticmethod
+ def check(a):
+ _value_check((0 < a, a < 1), "a must be between 0 and 1")
+
+ def pdf(self, k):
+ a = self.a
+ return (a**2 * k * (1 - a)**(k - 1))
+
+ def _characteristic_function(self, t):
+ a = self.a
+ return a**2*exp(I*t)/((1 + (a - 1)*exp(I*t))**2)
+
+ def _moment_generating_function(self, t):
+ a = self.a
+ return a**2*exp(t)/((1 + (a - 1)*exp(t))**2)
+
+
+def FlorySchulz(name, a):
+ r"""
+ Create a discrete random variable with a FlorySchulz distribution.
+
+ The density of the FlorySchulz distribution is given by
+
+ .. math::
+ f(k) := (a^2) k (1 - a)^{k-1}
+
+ Parameters
+ ==========
+
+ a
+ A real number between 0 and 1
+
+ Returns
+ =======
+
+ RandomSymbol
+
+ Examples
+ ========
+
+ >>> from sympy.stats import density, E, variance, FlorySchulz
+ >>> from sympy import Symbol, S
+
+ >>> a = S.One / 5
+ >>> z = Symbol("z")
+
+ >>> X = FlorySchulz("x", a)
+
+ >>> density(X)(z)
+ (4/5)**(z - 1)*z/25
+
+ >>> E(X)
+ 9
+
+ >>> variance(X)
+ 40
+
+ References
+ ==========
+
+ https://en.wikipedia.org/wiki/Flory%E2%80%93Schulz_distribution
+ """
+ return rv(name, FlorySchulzDistribution, a)
+
+
#-------------------------------------------------------------------------------
# Geometric distribution ------------------------------------------------------------
| diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index d33b2617503c..c3f38c3596a8 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -1595,6 +1595,9 @@ def test_sympy__stats__drv_types__NegativeBinomialDistribution():
from sympy.stats.drv_types import NegativeBinomialDistribution
assert _test_args(NegativeBinomialDistribution(.5, .5))
+def test_sympy__stats__drv_types__FlorySchulzDistribution():
+ from sympy.stats.drv_types import FlorySchulzDistribution
+ assert _test_args(FlorySchulzDistribution(.5))
def test_sympy__stats__drv_types__PoissonDistribution():
from sympy.stats.drv_types import PoissonDistribution
diff --git a/sympy/stats/tests/test_discrete_rv.py b/sympy/stats/tests/test_discrete_rv.py
index 3a40ef7a5ba2..48e028ef80b5 100644
--- a/sympy/stats/tests/test_discrete_rv.py
+++ b/sympy/stats/tests/test_discrete_rv.py
@@ -9,7 +9,7 @@
where, moment_generating_function, skewness, cdf,
kurtosis, coskewness)
from sympy.stats.drv_types import (PoissonDistribution, GeometricDistribution,
- Poisson, Geometric, Hermite, Logarithmic,
+ FlorySchulz, Poisson, Geometric, Hermite, Logarithmic,
NegativeBinomial, Skellam, YuleSimon, Zeta,
DiscreteRV)
from sympy.stats.rv import sample
@@ -41,6 +41,15 @@ def test_Poisson():
# issue 8248
assert x.pspace.compute_expectation(1) == 1
+def test_FlorySchulz():
+ a = Symbol("a")
+ z = Symbol("z")
+ x = FlorySchulz('x' , a)
+ assert E(x) == (2 - a)/a
+ assert (variance(x) - 2*(1 - a)/a**2).simplify() == S(0)
+ assert density(x)(z) == a**2*z*(1 - a)**(z - 1)
+
+
@slow
def test_GeometricDistribution():
p = S.One / 5
| [
{
"components": [
{
"doc": "",
"lines": [
114,
132
],
"name": "FlorySchulzDistribution",
"signature": "class FlorySchulzDistribution(SingleDiscreteDistribution):",
"type": "class"
},
{
"doc": "",
"lines": [
... | [
"test_sympy__stats__drv_types__FlorySchulzDistribution",
"test_PoissonDistribution",
"test_Poisson",
"test_FlorySchulz",
"test_Hermite",
"test_Logarithmic",
"test_negative_binomial",
"test_skellam",
"test_yule_simon",
"test_zeta",
"test_discrete_probability",
"test_DiscreteRV",
"test_precomp... | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_sympy__assumptions__assume__Predicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers__AnyArgs",
"test_sympy__assu... | This is a feature request which requires a new feature to be added to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Added FlorySchulz Distribution
<!-- BEGIN RELEASE NOTES -->
* stats
* Implemented FlorySchulz Distribution
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/stats/drv_types.py]
(definition of FlorySchulzDistribution:)
class FlorySchulzDistribution(SingleDiscreteDistribution):
(definition of FlorySchulzDistribution.check:)
def check(a):
(definition of FlorySchulzDistribution.pdf:)
def pdf(self, k):
(definition of FlorySchulzDistribution._characteristic_function:)
def _characteristic_function(self, t):
(definition of FlorySchulzDistribution._moment_generating_function:)
def _moment_generating_function(self, t):
(definition of FlorySchulz:)
def FlorySchulz(name, a):
"""Create a discrete random variable with a FlorySchulz distribution.
The density of the FlorySchulz distribution is given by
.. math::
f(k) := (a^2) k (1 - a)^{k-1}
Parameters
==========
a
A real number between 0 and 1
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import density, E, variance, FlorySchulz
>>> from sympy import Symbol, S
>>> a = S.One / 5
>>> z = Symbol("z")
>>> X = FlorySchulz("x", a)
>>> density(X)(z)
(4/5)**(z - 1)*z/25
>>> E(X)
9
>>> variance(X)
40
References
==========
https://en.wikipedia.org/wiki/Flory%E2%80%93Schulz_distribution"""
[end of new definitions in sympy/stats/drv_types.py]
</definitions>
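As a quick numerical cross-check of the moments quoted in the docstring above (E(X) == 9 and variance(X) == 40 for a = 1/5), the pmf can be summed directly with the stdlib; the truncation point is an arbitrary choice that leaves a negligible tail:

```python
# Flory-Schulz pmf: f(k) = a**2 * k * (1 - a)**(k - 1), k = 1, 2, ...
# Closed forms quoted above: E[X] = (2 - a)/a, Var[X] = 2*(1 - a)/a**2.
a = 1 / 5
pmf = [(k, a * a * k * (1 - a) ** (k - 1)) for k in range(1, 2000)]
total = sum(p for _, p in pmf)
mean = sum(k * p for k, p in pmf)
var = sum(k * k * p for k, p in pmf) - mean ** 2
assert abs(total - 1) < 1e-9                    # pmf sums to 1
assert abs(mean - (2 - a) / a) < 1e-6           # == 9 for a = 1/5
assert abs(var - 2 * (1 - a) / a ** 2) < 1e-6   # == 40 for a = 1/5
```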
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 284db64e44124856accce4abff75d2d8990c301d | ||
joke2k__faker-1350 | 1,350 | joke2k/faker | null | d6117b28b36dd4f3ead76026654000ba9f410c3c | 2020-12-17T05:00:02Z | diff --git a/faker/providers/misc/__init__.py b/faker/providers/misc/__init__.py
index cb129ce5b8..68ca213d37 100644
--- a/faker/providers/misc/__init__.py
+++ b/faker/providers/misc/__init__.py
@@ -2,6 +2,7 @@
import hashlib
import io
import json
+import re
import string
import tarfile
import uuid
@@ -399,12 +400,12 @@ def json(self,
data structures it is recommended to use the dictionary format.
Data Column Dictionary format:
- {'key name': 'definition'}}
+ {'key name': 'definition'}
- The definition can simply be the 'name:argument_group' of a provider
- method, or can also be string {{ tokens }} that are passed to python
- provider pystr_format() method for complex string generation.
- Argument Groups are used to pass arguments to the provider methods.
+ The definition can be 'provider', 'provider:argument_group', tokenized
+ 'string {{ provider:argument_group }}' that is passed to the python
+ provider method pystr_format() for generation, or a fixed '@word'.
+ Lists, Tuples, and Dicts can be used as a definition for structure.
Example:
fake.set_arguments('top_half', {'min_value': 50, 'max_value': 100})
@@ -426,8 +427,8 @@ def json(self,
:return: Serialized JSON data
:rtype: str
- :sample: data_columns={'ID': 'pyint', 'Details': {'Name': 'name',
- 'Address': 'address'}}, num_rows=1
+ :sample: data_columns={'Spec': '@1.0.1', 'ID': 'pyint',
+ 'Details': {'Name': 'name', 'Address': 'address'}}, num_rows=2
:sample: data_columns={'Candidates': ['name', 'name', 'name']},
num_rows=1
:sample: data_columns=[('Name', 'name'), ('Points', 'pyint',
@@ -449,7 +450,7 @@ def process_list_structure(data: list) -> dict:
raise TypeError('Invalid arguments type. Must be a dictionary')
if name is None:
- return self._format_selection(definition, **kwargs)
+ return self._value_format_selection(definition, **kwargs)
if isinstance(definition, tuple):
entry[name] = process_list_structure(definition)
@@ -457,28 +458,27 @@ def process_list_structure(data: list) -> dict:
entry[name] = [process_list_structure([item])
for item in definition]
else:
- entry[name] = self._format_selection(definition, **kwargs)
+ entry[name] = self._value_format_selection(definition, **kwargs)
return entry
def process_dict_structure(data: dict) -> dict:
entry = {}
if isinstance(data, str):
- return self._format_selection(data)
+ return self._value_format_selection(data)
- if isinstance(data, (float, int)):
- return data
+ if isinstance(data, dict):
+ for name, definition in data.items():
+ if isinstance(definition, (tuple, list, set)):
+ entry[name] = [process_dict_structure(item)
+ for item in definition]
+ elif isinstance(definition, (dict, int, float, bool)):
+ entry[name] = process_dict_structure(definition)
+ else:
+ entry[name] = self._value_format_selection(definition)
+ return entry
- for name, definition in data.items():
- if isinstance(definition, (tuple, list)):
- entry[name] = [process_dict_structure(item)
- for item in definition]
- elif isinstance(definition, (dict, int, float)):
- entry[name] = process_dict_structure(definition)
- else:
- entry[name] = self._format_selection(definition)
-
- return entry
+ return data
def create_json_structure(data_columns) -> dict:
if isinstance(data_columns, dict):
@@ -510,9 +510,11 @@ def fixed_width(self,
Data Column List format
[('field width', 'definition', {'arguments'})]
- The definition can simply be the 'name:argument_group' of a provider
- method, or can also be string tokens that are passed to python
- provider method pystr_format() for data generation.
+ The definition can be 'provider', 'provider:argument_group', tokenized
+ 'string {{ provider:argument_group }}' that is passed to the python
+ provider method pystr_format() for generation, or a fixed '@word'.
+ Lists, Tuples, and Dicts can be used as a definition for structure.
+
Argument Groups can be used to pass arguments to the provider methods,
but will override the arguments supplied in the tuple record.
@@ -530,7 +532,7 @@ def fixed_width(self,
:rtype: str
:sample: data_columns=[(20, 'name'), (3, 'pyint', {'min_value': 50,
- 'max_value': 100})], align='right', num_rows=1
+ 'max_value': 100})], align='right', num_rows=2
"""
default_data_columns = [
(20, 'name'),
@@ -553,24 +555,39 @@ def fixed_width(self,
if not isinstance(kwargs, dict):
raise TypeError('Invalid arguments type. Must be a dictionary')
- result = self._format_selection(definition, **kwargs)
+ result = self._value_format_selection(definition, **kwargs)
field = "{0:%s%s}" % (align_map.get(align, '<'), width)
row.append(field.format(result)[:width])
data.append(''.join(row))
return '\n'.join(data)
- def _format_selection(self, definition, **kwargs):
+ def _value_format_selection(self, definition, **kwargs):
"""
- Formats the string with PyStr Format if special characters are found.
+ Formats the string in different ways depending on its contents.
+
+ The return can be the '@word' itself, a '{{ token }}' passed to PyStr,
+ or a 'provider:argument_group' format field that returns potentially
+ a non-string type.
+
+ This ensures that Number and Boolean types are generated in the
+ JSON structures in their proper type, and not just as strings.
"""
- if '{{' in definition and '}}' in definition:
+
+ # Check for PyStr first as complex strings may start with @
+ if re.match(r'.*\{\{.*\}\}.*', definition):
return self.generator.pystr_format(definition)
- if definition.count(':') == 1:
+ # Check for fixed @words that won't be generated
+ if re.match(r'^@.*', definition):
+ return definition.lstrip('@')
+
+ # Check if an argument group has been supplied
+ if re.match(r'^[a-zA-Z0-9_-]*:\w', definition):
definition, argument_group = definition.split(':')
arguments = self.generator.get_arguments(argument_group.strip())
return self.generator.format(definition.strip(), **arguments)
+ # Assume the string is referring to a provider
return self.generator.format(definition, **kwargs)
| diff --git a/tests/providers/test_misc.py b/tests/providers/test_misc.py
index d51d80c3f5..d54dfa2b0b 100644
--- a/tests/providers/test_misc.py
+++ b/tests/providers/test_misc.py
@@ -477,11 +477,13 @@ def test_json_multiple_rows(self, faker_with_foobar):
assert isinstance(json_data, list) and len(json_data) == 2
- def test_json_passthrough_int_float(self, faker_with_foobar):
+ def test_json_passthrough_values(self, faker_with_foobar):
kwargs = {
'data_columns': {
'item1': 1,
'item2': 1.0,
+ 'item3': True,
+ 'item4': '@fixed',
},
'num_rows': 1,
}
@@ -489,6 +491,8 @@ def test_json_passthrough_int_float(self, faker_with_foobar):
assert json_data['item1'] == 1
assert json_data['item2'] == 1.0
+ assert json_data['item3'] is True
+ assert json_data['item4'] == 'fixed'
def test_json_type_integrity_int(self, faker_with_foobar):
kwargs = {
| [
{
"components": [
{
"doc": "Formats the string in different ways depending on it's contents.\n\nThe return can be the '@word' itself, a '{{ token }}' passed to PyStr,\nor a 'provider:argument_group' format field that returns potentially\na non-string type.\n\nThis ensures that Numbers, Boolean typ... | [
"tests/providers/test_misc.py::TestMiscProvider::test_json_passthrough_values"
] | [
"tests/providers/test_misc.py::TestMiscProvider::test_uuid4_str",
"tests/providers/test_misc.py::TestMiscProvider::test_uuid4_int",
"tests/providers/test_misc.py::TestMiscProvider::test_uuid4_uuid_object",
"tests/providers/test_misc.py::TestMiscProvider::test_uuid4_seedability",
"tests/providers/test_misc.p... | This is a feature request which requires a new feature to be added to the code repository.
<<NEW FEATURE REQUEST>>
<request>
FW and JSON Improvements
### What does this change
The Format Value Selector was a bit clunky and needed some updating. It now allows strings to be fixed or pinned to a static value by prefixing them with '@'.
**Example:**
```python
fake.json({"Spec": "@2.1b", "Source": "@host.example.com", "ID": "pyint:min-max"})
```
This will fix the values of Spec and Source so they are not generated, but the ID will be generated using the parameter group 'min-max'.
### What was wrong
Because values in JSON can be non-string types, like integer, float, and boolean, it's important that PyStr is selected only when needed. This is the role of the value format selector.
**Example:**
```python
fake.json({"ID": "{{ pyint }}"}) # This returns a string value with quotes in the JSON
fake.json({"ID": "pyint"}) # This returns an integer value without quotes in the JSON.
```
The downfall of this, though, is when you need data with fixed values in some fields, for things like event specification values sent to a data stream. The JSON provider lacked an easy and obvious way to do this.
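Under these rules the selector checks, in order, for '{{ }}' tokens, an '@' prefix, and a ':' argument group. A stdlib-only sketch of that dispatch, mirroring the regexes from the patch (generator calls are replaced with labels so only the routing is visible; illustrative, not faker's actual provider):

```python
import re

def route(definition):
    # Same ordering as _value_format_selection: PyStr tokens first, since a
    # complex string may itself start with '@'.
    if re.match(r'.*\{\{.*\}\}.*', definition):
        return ('pystr_format', definition)
    if re.match(r'^@.*', definition):
        return ('fixed', definition.lstrip('@'))
    if re.match(r'^[a-zA-Z0-9_-]*:\w', definition):
        provider, group = definition.split(':')
        return ('provider+args', provider.strip(), group.strip())
    return ('provider', definition)

assert route('@2.1b') == ('fixed', '2.1b')
assert route('pyint:min-max') == ('provider+args', 'pyint', 'min-max')
assert route('{{ pyint }}') == ('pystr_format', '{{ pyint }}')
assert route('pyint') == ('provider', 'pyint')
```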
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in faker/providers/misc/__init__.py]
(definition of Provider._value_format_selection:)
def _value_format_selection(self, definition, **kwargs):
"""Formats the string in different ways depending on its contents.
The return can be the '@word' itself, a '{{ token }}' passed to PyStr,
or a 'provider:argument_group' format field that returns potentially
a non-string type.
This ensures that Number and Boolean types are generated in the
JSON structures in their proper type, and not just as strings."""
[end of new definitions in faker/providers/misc/__init__.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 6edfdbf6ae90b0153309e3bf066aa3b2d16494a7 | ||
sympy__sympy-20612 | 20,612 | sympy/sympy | 1.8 | deb3bb44be784538526ef4a03af030518426f715 | 2020-12-15T23:09:38Z | diff --git a/sympy/utilities/matchpy_connector.py b/sympy/utilities/matchpy_connector.py
index fcad28e3399f..7c523a647674 100644
--- a/sympy/utilities/matchpy_connector.py
+++ b/sympy/utilities/matchpy_connector.py
@@ -1,9 +1,14 @@
+"""
+The objects in this module allow the usage of the MatchPy pattern matching
+library on SymPy expressions.
+"""
from sympy.external import import_module
from sympy.functions import (log, sin, cos, tan, cot, csc, sec, erf, gamma, uppergamma)
from sympy.functions.elementary.hyperbolic import acosh, asinh, atanh, acoth, acsch, asech, cosh, sinh, tanh, coth, sech, csch
from sympy.functions.elementary.trigonometric import atan, acsc, asin, acot, acos, asec
from sympy.functions.special.error_functions import fresnelc, fresnels, erfc, erfi, Ei
-from sympy import (Basic, Mul, Add, Pow, Integral, exp)
+from sympy import (Basic, Mul, Add, Pow, Integral, exp, Symbol)
+from sympy.utilities.decorator import doctest_depends_on
matchpy = import_module("matchpy")
@@ -79,3 +84,61 @@ def _(operation):
@create_operation_expression.register(Basic)
def sympy_op_factory(old_operation, new_operands, variable_name=True):
return type(old_operation)(*new_operands)
+
+
+if matchpy:
+ from matchpy import Wildcard
+else:
+ class Wildcard:
+ def __init__(self, min_length, fixed_size, variable_name, optional):
+ pass
+
+
+@doctest_depends_on(modules=('matchpy',))
+class _WildAbstract(Wildcard, Symbol):
+ min_length = None # abstract field required in subclasses
+ fixed_size = None # abstract field required in subclasses
+
+ def __init__(self, variable_name=None, optional=None, **assumptions):
+ min_length = self.min_length
+ fixed_size = self.fixed_size
+ Wildcard.__init__(self, min_length, fixed_size, str(variable_name), optional)
+
+ def __new__(cls, variable_name=None, optional=None, **assumptions):
+ cls._sanitize(assumptions, cls)
+ return _WildAbstract.__xnew__(cls, variable_name, optional, **assumptions)
+
+ def __getnewargs__(self):
+ return self.min_count, self.fixed_size, self.variable_name, self.optional
+
+ @staticmethod
+ def __xnew__(cls, variable_name=None, optional=None, **assumptions):
+ obj = Symbol.__xnew__(cls, variable_name, **assumptions)
+ return obj
+
+ def _hashable_content(self):
+ if self.optional:
+ return super()._hashable_content() + (self.min_count, self.fixed_size, self.variable_name, self.optional)
+ else:
+ return super()._hashable_content() + (self.min_count, self.fixed_size, self.variable_name)
+
+ def __copy__(self) -> '_WildAbstract':
+ return type(self)(variable_name=self.variable_name, optional=self.optional)
+
+
+@doctest_depends_on(modules=('matchpy',))
+class WildDot(_WildAbstract):
+ min_length = 1
+ fixed_size = True
+
+
+@doctest_depends_on(modules=('matchpy',))
+class WildPlus(_WildAbstract):
+ min_length = 1
+ fixed_size = False
+
+
+@doctest_depends_on(modules=('matchpy',))
+class WildStar(_WildAbstract):
+ min_length = 0
+ fixed_size = False
| diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index d33b2617503c..88b6b106ce71 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -1830,6 +1830,21 @@ def test_sympy__stats__matrix_distributions__MatrixNormalDistribution():
assert _test_args(MatrixNormalDistribution(L, S1, S2))
+def test_sympy__utilities__matchpy_connector__WildDot():
+ from sympy.utilities.matchpy_connector import WildDot
+ assert _test_args(WildDot("w_"))
+
+
+def test_sympy__utilities__matchpy_connector__WildPlus():
+ from sympy.utilities.matchpy_connector import WildPlus
+ assert _test_args(WildPlus("w__"))
+
+
+def test_sympy__utilities__matchpy_connector__WildStar():
+ from sympy.utilities.matchpy_connector import WildStar
+ assert _test_args(WildStar("w___"))
+
+
def test_sympy__core__symbol__Str():
from sympy.core.symbol import Str
assert _test_args(Str('t'))
diff --git a/sympy/utilities/tests/test_matchpy_connector.py b/sympy/utilities/tests/test_matchpy_connector.py
new file mode 100644
index 000000000000..11e601abe031
--- /dev/null
+++ b/sympy/utilities/tests/test_matchpy_connector.py
@@ -0,0 +1,45 @@
+from sympy import symbols
+from sympy.external import import_module
+from sympy.utilities.matchpy_connector import WildDot, WildPlus, WildStar
+
+matchpy = import_module("matchpy")
+
+x, y, z = symbols("x y z")
+
+
+def _get_first_match(expr, pattern):
+ from matchpy import ManyToOneMatcher, Pattern
+
+ matcher = ManyToOneMatcher()
+ matcher.add(Pattern(pattern))
+ return next(iter(matcher.match(expr)))
+
+
+def test_matchpy_connector():
+ if matchpy is None:
+ return
+
+ from multiset import Multiset
+ from matchpy import Pattern, Substitution
+
+ w_ = WildDot("w_")
+ w__ = WildPlus("w__")
+ w___ = WildStar("w___")
+
+ expr = x + y
+ pattern = x + w_
+ p, subst = _get_first_match(expr, pattern)
+ assert p == Pattern(pattern)
+ assert subst == Substitution({'w_': y})
+
+ expr = x + y + z
+ pattern = x + w__
+ p, subst = _get_first_match(expr, pattern)
+ assert p == Pattern(pattern)
+ assert subst == Substitution({'w__': Multiset([y, z])})
+
+ expr = x + y + z
+ pattern = x + y + z + w___
+ p, subst = _get_first_match(expr, pattern)
+ assert p == Pattern(pattern)
+ assert subst == Substitution({'w___': Multiset()})
| [
{
"components": [
{
"doc": "",
"lines": [
92,
94
],
"name": "Wildcard",
"signature": "class Wildcard:",
"type": "class"
},
{
"doc": "",
"lines": [
93,
94
],
"name": "Wildcard... | [
"test_sympy__utilities__matchpy_connector__WildDot",
"test_sympy__utilities__matchpy_connector__WildPlus",
"test_sympy__utilities__matchpy_connector__WildStar"
] | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_sympy__assumptions__assume__Predicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers__AnyArgs",
"test_sympy__assu... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Added WildDot, WildPlus and WildStar classes to MatchPy connector
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
#### Brief description of what is fixed or changed
#### Other comments
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* utilities
  * Added WildDot, WildPlus and WildStar classes to MatchPy connector. These classes correspond to the dot, dot-plus and dot-star expressions in regular expressions, but operate on SymPy expression trees and are aware of associative and commutative properties, features supported through the MatchPy library.
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in sympy/utilities/matchpy_connector.py]
(definition of Wildcard:)
class Wildcard:
(definition of Wildcard.__init__:)
def __init__(self, min_length, fixed_size, variable_name, optional):
(definition of _WildAbstract:)
class _WildAbstract(Wildcard, Symbol):
(definition of _WildAbstract.__init__:)
def __init__(self, variable_name=None, optional=None, **assumptions):
(definition of _WildAbstract.__new__:)
def __new__(cls, variable_name=None, optional=None, **assumptions):
(definition of _WildAbstract.__getnewargs__:)
def __getnewargs__(self):
(definition of _WildAbstract.__xnew__:)
def __xnew__(cls, variable_name=None, optional=None, **assumptions):
(definition of _WildAbstract._hashable_content:)
def _hashable_content(self):
(definition of _WildAbstract.__copy__:)
def __copy__(self) -> '_WildAbstract':
(definition of WildDot:)
class WildDot(_WildAbstract):
(definition of WildPlus:)
class WildPlus(_WildAbstract):
(definition of WildStar:)
class WildStar(_WildAbstract):
[end of new definitions in sympy/utilities/matchpy_connector.py]
</definitions>
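The three concrete classes above differ only in the `min_length` and `fixed_size` fields they set on `_WildAbstract` (WildDot: 1/True, WildPlus: 1/False, WildStar: 0/False). The following standard-library-only sketch — not using matchpy or SymPy, with illustrative names — shows how those two fields govern how many operands a wildcard may absorb:

```python
# Stdlib-only sketch of the length semantics behind WildDot / WildPlus /
# WildStar: min_length is the fewest operands a wildcard may bind, and
# fixed_size=True pins the binding to exactly min_length operands.
WILDCARD_KINDS = {
    "dot":  (1, True),   # WildDot: exactly one operand
    "plus": (1, False),  # WildPlus: one or more operands
    "star": (0, False),  # WildStar: zero or more operands
}

def can_bind(kind, operands):
    """Return True if a wildcard of this kind may absorb `operands`."""
    min_length, fixed_size = WILDCARD_KINDS[kind]
    if fixed_size:
        return len(operands) == min_length
    return len(operands) >= min_length

# Mirrors the shapes exercised in the new tests: matching x + <wild>
# against x + y, x + y + z, and x + y + z + <star> binds the wildcard
# to ('y',), ('y', 'z') and () respectively.
print(can_bind("dot", ("y",)))       # True
print(can_bind("plus", ("y", "z")))  # True
print(can_bind("star", ()))          # True
print(can_bind("dot", ("y", "z")))   # False: fixed size of one
```

In the real implementation these semantics are enforced by matchpy's `Wildcard` machinery rather than by hand-written length checks.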
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 284db64e44124856accce4abff75d2d8990c301d | ||
sympy__sympy-20569 | 20,569 | sympy/sympy | 1.8 | 68c32660057d2309b796b355e1b3be2e8c1e7cbe | 2020-12-10T14:00:15Z | diff --git a/sympy/__init__.py b/sympy/__init__.py
index 2e882f1e22e4..fa6bcd33124c 100644
--- a/sympy/__init__.py
+++ b/sympy/__init__.py
@@ -128,7 +128,7 @@ def __sympy_debug():
chebyshevu, chebyshevu_root, chebyshevt_root, laguerre,
assoc_laguerre, gegenbauer, jacobi, jacobi_normalized, Ynm, Ynm_c,
Znm, elliptic_k, elliptic_f, elliptic_e, elliptic_pi, beta, mathieus,
- mathieuc, mathieusprime, mathieucprime)
+ mathieuc, mathieusprime, mathieucprime, betainc, betainc_regularized)
from .ntheory import (nextprime, prevprime, prime, primepi, primerange,
randprime, Sieve, sieve, primorial, cycle_length, composite,
@@ -352,7 +352,8 @@ def __sympy_debug():
'chebyshevu_root', 'chebyshevt_root', 'laguerre', 'assoc_laguerre',
'gegenbauer', 'jacobi', 'jacobi_normalized', 'Ynm', 'Ynm_c', 'Znm',
'elliptic_k', 'elliptic_f', 'elliptic_e', 'elliptic_pi', 'beta',
- 'mathieus', 'mathieuc', 'mathieusprime', 'mathieucprime',
+ 'mathieus', 'mathieuc', 'mathieusprime', 'mathieucprime', 'betainc',
+ 'betainc_regularized',
# sympy.ntheory
'nextprime', 'prevprime', 'prime', 'primepi', 'primerange', 'randprime',
diff --git a/sympy/core/function.py b/sympy/core/function.py
index b59ace05e1db..595fdcf3146c 100644
--- a/sympy/core/function.py
+++ b/sympy/core/function.py
@@ -548,7 +548,12 @@ def _get_mpmath_func(fname):
return None
return getattr(mpmath, fname)
- func = _get_mpmath_func(self.func.__name__)
+ _eval_mpmath = getattr(self, '_eval_mpmath', None)
+ if _eval_mpmath is None:
+ func = _get_mpmath_func(self.func.__name__)
+ args = self.args
+ else:
+ func, args = _eval_mpmath()
# Fall-back evaluation
if func is None:
@@ -566,7 +571,7 @@ def _get_mpmath_func(fname):
# XXX + 5 is a guess, it is similar to what is used in evalf.py. Should
# we be more intelligent about it?
try:
- args = [arg._to_mpmath(prec + 5) for arg in self.args]
+ args = [arg._to_mpmath(prec + 5) for arg in args]
def bad(m):
from mpmath import mpf, mpc
# the precision of an mpf value is the last element
diff --git a/sympy/functions/__init__.py b/sympy/functions/__init__.py
index f3c8f04a7d7a..4d9aef1123ef 100644
--- a/sympy/functions/__init__.py
+++ b/sympy/functions/__init__.py
@@ -43,7 +43,7 @@
from sympy.functions.special.spherical_harmonics import Ynm, Ynm_c, Znm
from sympy.functions.special.elliptic_integrals import (elliptic_k,
elliptic_f, elliptic_e, elliptic_pi)
-from sympy.functions.special.beta_functions import beta
+from sympy.functions.special.beta_functions import beta, betainc, betainc_regularized
from sympy.functions.special.mathieu_functions import (mathieus, mathieuc,
mathieusprime, mathieucprime)
ln = log
@@ -104,7 +104,7 @@
'elliptic_k', 'elliptic_f', 'elliptic_e', 'elliptic_pi',
- 'beta',
+ 'beta', 'betainc', 'betainc_regularized',
'mathieus', 'mathieuc', 'mathieusprime', 'mathieucprime',
]
diff --git a/sympy/functions/special/beta_functions.py b/sympy/functions/special/beta_functions.py
index 0faa212db279..7a63ea0b9e5c 100644
--- a/sympy/functions/special/beta_functions.py
+++ b/sympy/functions/special/beta_functions.py
@@ -1,6 +1,16 @@
from sympy.core import S
from sympy.core.function import Function, ArgumentIndexError
+from sympy.core.symbol import Dummy
from sympy.functions.special.gamma_functions import gamma, digamma
+from sympy.functions.elementary.complexes import conjugate
+
+# See mpmath #569 and SymPy #20569
+def betainc_mpmath_fix(a, b, x1, x2, reg=0):
+ from mpmath import betainc, mpf
+ if x1 == x2:
+ return mpf(0)
+ else:
+ return betainc(a, b, x1, x2, reg)
###############################################################################
############################ COMPLETE BETA FUNCTION ##########################
@@ -12,7 +22,7 @@ class beta(Function):
Legendre:
.. math::
- \mathrm{B}(x,y) := \int^{1}_{0} t^{x-1} (1-t)^{y-1} \mathrm{d}t.
+ \mathrm{B}(x,y) \int^{1}_{0} t^{x-1} (1-t)^{y-1} \mathrm{d}t.
Explanation
===========
@@ -31,6 +41,15 @@ class beta(Function):
.. math::
\mathrm{B} = \frac{(a-1)! (b-1)!}{(a+b-1)!}
+ A special case of the Beta function when `x = y` is the
+ Central Beta function. It satisfies properties like:
+
+ .. math::
+ \mathrm{B}(x) = 2^{1 - 2x}\mathrm{B}(x, \frac{1}{2})
+ \mathrm{B}(x) = 2^{1 - 2x} cos(\pi x) \mathrm{B}(\frac{1}{2} - x, x)
+ \mathrm{B}(x) = \int_{0}^{1} \frac{t^x}{(1 + t)^{2x}} dt
+ \mathrm{B}(x) = \frac{2}{x} \prod_{n = 1}^{\infty} \frac{n(n + 2x)}{(n + x)^2}
+
Examples
========
@@ -39,31 +58,30 @@ class beta(Function):
The Beta function obeys the mirror symmetry:
- >>> from sympy import beta
- >>> from sympy import conjugate
+ >>> from sympy import beta, conjugate
>>> conjugate(beta(x, y))
beta(conjugate(x), conjugate(y))
Differentiation with respect to both $x$ and $y$ is supported:
- >>> from sympy import beta
- >>> from sympy import diff
+ >>> from sympy import beta, diff
>>> diff(beta(x, y), x)
(polygamma(0, x) - polygamma(0, x + y))*beta(x, y)
- >>> from sympy import beta
- >>> from sympy import diff
>>> diff(beta(x, y), y)
(polygamma(0, y) - polygamma(0, x + y))*beta(x, y)
- We can numerically evaluate the gamma function to arbitrary precision
- on the whole complex plane:
+ >>> diff(beta(x), x)
+ 2*(polygamma(0, x) - polygamma(0, 2*x))*beta(x, x)
+
+ We can numerically evaluate the Beta function to
+ arbitrary precision for any complex numbers x and y:
>>> from sympy import beta
- >>> beta(pi, pi).evalf(40)
+ >>> beta(pi).evalf(40)
0.02671848900111377452242355235388489324562
- >>> beta(1 + I, 1 + I).evalf(20)
+ >>> beta(1 + I).evalf(20)
-0.2112723729365330143 - 0.7655283165378005676*I
See Also
@@ -85,7 +103,6 @@ class beta(Function):
.. [3] http://dlmf.nist.gov/5.12
"""
- nargs = 2
unbranched = True
def fdiff(self, argindex):
@@ -100,7 +117,9 @@ def fdiff(self, argindex):
raise ArgumentIndexError(self, argindex)
@classmethod
- def eval(cls, x, y):
+ def eval(cls, x, y=None):
+ if y is None:
+ return beta(x, x)
if y is S.One:
return 1/x
if x is S.One:
@@ -118,3 +137,230 @@ def _eval_conjugate(self):
def _eval_rewrite_as_gamma(self, x, y, piecewise=True, **kwargs):
return self._eval_expand_func(**kwargs)
+
+ def _eval_rewrite_as_Integral(self, x, y, **kwargs):
+ from sympy.integrals.integrals import Integral
+ t = Dummy('t')
+ return Integral(t**(x - 1)*(1 - t)**(y - 1), (t, 0, 1))
+
+###############################################################################
+########################## INCOMPLETE BETA FUNCTION ###########################
+###############################################################################
+
+class betainc(Function):
+ r"""
+ The Generalized Incomplete Beta function is defined as
+
+ .. math::
+ \mathrm{B}_{(x_1, x_2)}(a, b) = \int_{x_1}^{x_2} t^{a - 1} (1 - t)^{b - 1} dt
+
+ The Incomplete Beta function is a special case
+ of the Generalized Incomplete Beta function :
+
+ .. math:: \mathrm{B}_z (a, b) = \mathrm{B}_{(0, z)}(a, b)
+
+ The Incomplete Beta function satisfies :
+
+ .. math:: \mathrm{B}_z (a, b) = (-1)^a \mathrm{B}_{\frac{z}{z - 1}} (a, 1 - a - b)
+
+ The Beta function is a special case of the Incomplete Beta function :
+
+ .. math:: \mathrm{B}(a, b) = \mathrm{B}_{1}(a, b)
+
+ Examples
+ ========
+
+ >>> from sympy import betainc, symbols, conjugate
+ >>> a, b, x, x1, x2 = symbols('a b x x1 x2')
+
+ The Generalized Incomplete Beta function is given by:
+
+ >>> betainc(a, b, x1, x2)
+ betainc(a, b, x1, x2)
+
+ The Incomplete Beta function can be obtained as follows:
+
+ >>> betainc(a, b, 0, x)
+ betainc(a, b, 0, x)
+
+ The Incomplete Beta function obeys the mirror symmetry:
+
+ >>> conjugate(betainc(a, b, x1, x2))
+ betainc(conjugate(a), conjugate(b), conjugate(x1), conjugate(x2))
+
+ We can numerically evaluate the Incomplete Beta function to
+ arbitrary precision for any complex numbers a, b, x1 and x2:
+
+ >>> from sympy import betainc, I
+ >>> betainc(2, 3, 4, 5).evalf(10)
+ 56.08333333
+ >>> betainc(0.75, 1 - 4*I, 0, 2 + 3*I).evalf(25)
+ 0.2241657956955709603655887 + 0.3619619242700451992411724*I
+
+ The Generalized Incomplete Beta function can be expressed
+ in terms of the Generalized Hypergeometric function.
+
+ >>> from sympy import hyper
+ >>> betainc(a, b, x1, x2).rewrite(hyper)
+ (-x1**a*hyper((a, 1 - b), (a + 1,), x1) + x2**a*hyper((a, 1 - b), (a + 1,), x2))/a
+
+ See Also
+ ========
+
+ beta: Beta function
+ hyper: Generalized Hypergeometric function
+
+ References
+ ==========
+
+ .. [1] https://en.wikipedia.org/wiki/Beta_function#Incomplete_beta_function
+ .. [2] https://dlmf.nist.gov/8.17
+ .. [3] https://functions.wolfram.com/GammaBetaErf/Beta4/
+ .. [4] https://functions.wolfram.com/GammaBetaErf/BetaRegularized4/02/
+
+ """
+ nargs = 4
+ unbranched = True
+
+ def fdiff(self, argindex):
+ a, b, x1, x2 = self.args
+ if argindex == 3:
+ # Diff wrt x1
+ return -(1 - x1)**(b - 1)*x1**(a - 1)
+ elif argindex == 4:
+ # Diff wrt x2
+ return (1 - x2)**(b - 1)*x2**(a - 1)
+ else:
+ raise ArgumentIndexError(self, argindex)
+
+ def _eval_mpmath(self):
+ return betainc_mpmath_fix, self.args
+
+ def _eval_is_real(self):
+ if all(arg.is_real for arg in self.args):
+ return True
+
+ def _eval_conjugate(self):
+ return self.func(*map(conjugate, self.args))
+
+ def _eval_rewrite_as_Integral(self, a, b, x1, x2, **kwargs):
+ from sympy.integrals.integrals import Integral
+ t = Dummy('t')
+ return Integral(t**(a - 1)*(1 - t)**(b - 1), (t, x1, x2))
+
+ def _eval_rewrite_as_hyper(self, a, b, x1, x2, **kwargs):
+ from sympy.functions.special.hyper import hyper
+ return (x2**a * hyper((a, 1 - b), (a + 1,), x2) - x1**a * hyper((a, 1 - b), (a + 1,), x1)) / a
+
+###############################################################################
+#################### REGULARIZED INCOMPLETE BETA FUNCTION #####################
+###############################################################################
+
+class betainc_regularized(Function):
+ r"""
+ The Generalized Regularized Incomplete Beta function is given by
+
+ .. math::
+ \mathrm{I}_{(x_1, x_2)}(a, b) = \frac{\mathrm{B}_{(x_1, x_2)}(a, b)}{\mathrm{B}(a, b)}
+
+ The Regularized Incomplete Beta function is a special case
+ of the Generalized Regularized Incomplete Beta function :
+
+ .. math:: \mathrm{I}_z (a, b) = \mathrm{I}_{(0, z)}(a, b)
+
+ The Regularized Incomplete Beta function is the cumulative distribution
+ function of the beta distribution.
+
+ Examples
+ ========
+
+ >>> from sympy import betainc_regularized, symbols, conjugate
+ >>> a, b, x, x1, x2 = symbols('a b x x1 x2')
+
+ The Generalized Regularized Incomplete Beta
+ function is given by:
+
+ >>> betainc_regularized(a, b, x1, x2)
+ betainc_regularized(a, b, x1, x2)
+
+ The Regularized Incomplete Beta function
+ can be obtained as follows:
+
+ >>> betainc_regularized(a, b, 0, x)
+ betainc_regularized(a, b, 0, x)
+
+ The Regularized Incomplete Beta function
+ obeys the mirror symmetry:
+
+ >>> conjugate(betainc_regularized(a, b, x1, x2))
+ betainc_regularized(conjugate(a), conjugate(b), conjugate(x1), conjugate(x2))
+
+ We can numerically evaluate the Regularized Incomplete Beta function
+ to arbitrary precision for any complex numbers a, b, x1 and x2:
+
+ >>> from sympy import betainc_regularized, pi, E
+ >>> betainc_regularized(1, 2, 0, 0.25).evalf(10)
+ 0.4375000000
+ >>> betainc_regularized(pi, E, 0, 1).evalf(5)
+ 1.00000
+
+ The Generalized Regularized Incomplete Beta function can be
+ expressed in terms of the Generalized Hypergeometric function.
+
+ >>> from sympy import hyper
+ >>> betainc_regularized(a, b, x1, x2).rewrite(hyper)
+ (-x1**a*hyper((a, 1 - b), (a + 1,), x1) + x2**a*hyper((a, 1 - b), (a + 1,), x2))/(a*beta(a, b))
+
+ See Also
+ ========
+
+ beta: Beta function
+ hyper: Generalized Hypergeometric function
+
+ References
+ ==========
+
+ .. [1] https://en.wikipedia.org/wiki/Beta_function#Incomplete_beta_function
+ .. [2] https://dlmf.nist.gov/8.17
+ .. [3] https://functions.wolfram.com/GammaBetaErf/Beta4/
+ .. [4] https://functions.wolfram.com/GammaBetaErf/BetaRegularized4/02/
+
+ """
+ nargs = 4
+ unbranched = True
+
+ def __new__(cls, a, b, x1, x2):
+ return Function.__new__(cls, a, b, x1, x2)
+
+ def _eval_mpmath(self):
+ return betainc_mpmath_fix, (*self.args, S(1))
+
+ def fdiff(self, argindex):
+ a, b, x1, x2 = self.args
+ if argindex == 3:
+ # Diff wrt x1
+ return -(1 - x1)**(b - 1)*x1**(a - 1) / beta(a, b)
+ elif argindex == 4:
+ # Diff wrt x2
+ return (1 - x2)**(b - 1)*x2**(a - 1) / beta(a, b)
+ else:
+ raise ArgumentIndexError(self, argindex)
+
+ def _eval_is_real(self):
+ if all(arg.is_real for arg in self.args):
+ return True
+
+ def _eval_conjugate(self):
+ return self.func(*map(conjugate, self.args))
+
+ def _eval_rewrite_as_Integral(self, a, b, x1, x2, **kwargs):
+ from sympy.integrals.integrals import Integral
+ t = Dummy('t')
+ integrand = t**(a - 1)*(1 - t)**(b - 1)
+ expr = Integral(integrand, (t, x1, x2))
+ return expr / Integral(integrand, (t, 0, 1))
+
+ def _eval_rewrite_as_hyper(self, a, b, x1, x2, **kwargs):
+ from sympy.functions.special.hyper import hyper
+ expr = (x2**a * hyper((a, 1 - b), (a + 1,), x2) - x1**a * hyper((a, 1 - b), (a + 1,), x1)) / a
+ return expr / beta(a, b)
diff --git a/sympy/printing/latex.py b/sympy/printing/latex.py
index 78316165a6a0..721d585e19e2 100644
--- a/sympy/printing/latex.py
+++ b/sympy/printing/latex.py
@@ -1157,6 +1157,18 @@ def _print_beta(self, expr, exp=None):
else:
return r"\operatorname{B}%s" % tex
+ def _print_betainc(self, expr, exp=None, operator='B'):
+ largs = [self._print(arg) for arg in expr.args]
+ tex = r"\left(%s, %s\right)" % (largs[0], largs[1])
+
+ if exp is not None:
+ return r"\operatorname{%s}_{(%s, %s)}^{%s}%s" % (operator, largs[2], largs[3], exp, tex)
+ else:
+ return r"\operatorname{%s}_{(%s, %s)}%s" % (operator, largs[2], largs[3], tex)
+
+ def _print_betainc_regularized(self, expr, exp=None):
+ return self._print_betainc(expr, exp, operator='I')
+
def _print_uppergamma(self, expr, exp=None):
tex = r"\left(%s, %s\right)" % (self._print(expr.args[0]),
self._print(expr.args[1]))
diff --git a/sympy/printing/pretty/pretty.py b/sympy/printing/pretty/pretty.py
index df7452ee873f..c9f9e1c8a3d0 100644
--- a/sympy/printing/pretty/pretty.py
+++ b/sympy/printing/pretty/pretty.py
@@ -1642,6 +1642,14 @@ def _print_beta(self, e):
func_name = greek_unicode['Beta'] if self._use_unicode else 'B'
return self._print_Function(e, func_name=func_name)
+ def _print_betainc(self, e):
+ func_name = "B'"
+ return self._print_Function(e, func_name=func_name)
+
+ def _print_betainc_regularized(self, e):
+ func_name = 'I'
+ return self._print_Function(e, func_name=func_name)
+
def _print_gamma(self, e):
func_name = greek_unicode['Gamma'] if self._use_unicode else 'Gamma'
return self._print_Function(e, func_name=func_name)
diff --git a/sympy/printing/pycode.py b/sympy/printing/pycode.py
index b7315a4c559c..92a58c5ff495 100644
--- a/sympy/printing/pycode.py
+++ b/sympy/printing/pycode.py
@@ -995,6 +995,21 @@ def _print_uppergamma(self, expr):
self._print(expr.args[0]),
self._print(expr.args[1]))
+ def _print_betainc(self, expr):
+ betainc = self._module_format('scipy.special.betainc')
+ beta = self._module_format('scipy.special.beta')
+ args = [self._print(arg) for arg in expr.args]
+ return f"({betainc}({args[0]}, {args[1]}, {args[3]}) - {betainc}({args[0]}, {args[1]}, {args[2]})) \
+ * {beta}({args[0]}, {args[1]})"
+
+ def _print_betainc_regularized(self, expr):
+ return "{0}({1}, {2}, {4}) - {0}({1}, {2}, {3})".format(
+ self._module_format('scipy.special.betainc'),
+ self._print(expr.args[0]),
+ self._print(expr.args[1]),
+ self._print(expr.args[2]),
+ self._print(expr.args[3]))
+
def _print_fresnels(self, expr):
return "{}({})[0]".format(
self._module_format("scipy.special.fresnel"),
diff --git a/sympy/utilities/lambdify.py b/sympy/utilities/lambdify.py
index 19456020c95f..59882602f7f0 100644
--- a/sympy/utilities/lambdify.py
+++ b/sympy/utilities/lambdify.py
@@ -77,6 +77,7 @@
"Ci": "ci",
"RisingFactorial": "rf",
"FallingFactorial": "ff",
+ "betainc_regularized": "betainc",
}
NUMPY_TRANSLATIONS = {} # type: Dict[str, str]
| diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index d33b2617503c..724df1993d92 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -2492,8 +2492,17 @@ def test_sympy__functions__special__gamma_functions__multigamma():
def test_sympy__functions__special__beta_functions__beta():
from sympy.functions.special.beta_functions import beta
+ assert _test_args(beta(x))
assert _test_args(beta(x, x))
+def test_sympy__functions__special__beta_functions__betainc():
+ from sympy.functions.special.beta_functions import betainc
+ assert _test_args(betainc(a, b, x, y))
+
+def test_sympy__functions__special__beta_functions__betainc_regularized():
+ from sympy.functions.special.beta_functions import betainc_regularized
+ assert _test_args(betainc_regularized(a, b, x, y))
+
def test_sympy__functions__special__mathieu_functions__MathieuBase():
pass
diff --git a/sympy/functions/special/tests/test_beta_functions.py b/sympy/functions/special/tests/test_beta_functions.py
index 2e4a8c6dab43..153ee068f873 100644
--- a/sympy/functions/special/tests/test_beta_functions.py
+++ b/sympy/functions/special/tests/test_beta_functions.py
@@ -1,13 +1,20 @@
-from sympy import (Symbol, gamma, expand_func, beta, diff, conjugate)
+from sympy import (symbols, gamma, expand_func, beta, betainc,
+ hyper, diff, conjugate, Integral, Dummy, I,
+ betainc_regularized)
from sympy.functions.special.gamma_functions import polygamma
from sympy.core.function import ArgumentIndexError
+from sympy.core.expr import unchanged
from sympy.testing.pytest import raises
def test_beta():
- x, y = Symbol('x'), Symbol('y')
+ x, y = symbols('x y')
+ t = Dummy('t')
- assert isinstance(beta(x, y), beta)
+ assert unchanged(beta, x, y)
+
+ assert beta(5, -3).is_real == True
+ assert beta(3, y).is_real is None
assert expand_func(beta(x, y)) == gamma(x)*gamma(y)/gamma(x + y)
assert expand_func(beta(x, y) - beta(y, x)) == 0 # Symmetric
@@ -21,3 +28,35 @@ def test_beta():
raises(ArgumentIndexError, lambda: beta(x, y).fdiff(3))
assert beta(x, y).rewrite(gamma) == gamma(x)*gamma(y)/gamma(x + y)
+ assert beta(x).rewrite(gamma) == gamma(x)**2/gamma(2*x)
+ assert beta(x, y).rewrite(Integral).dummy_eq(Integral(t**(x - 1) * (1 - t)**(y - 1), (t, 0, 1)))
+
+def test_betainc():
+ a, b, x1, x2 = symbols('a b x1 x2')
+
+ assert unchanged(betainc, a, b, x1, x2)
+ assert unchanged(betainc, a, b, 0, x1)
+
+ assert betainc(1, 2, 0, -5).is_real == True
+ assert betainc(1, 2, 0, x2).is_real is None
+ assert conjugate(betainc(I, 2, 3 - I, 1 + 4*I)) == betainc(-I, 2, 3 + I, 1 - 4*I)
+
+ assert betainc(a, b, 0, 1).rewrite(Integral).dummy_eq(beta(a, b).rewrite(Integral))
+ assert betainc(1, 2, 0, x2).rewrite(hyper) == x2*hyper((1, -1), (2,), x2)
+
+ assert betainc(1, 2, 3, 3).evalf() == 0
+
+def test_betainc_regularized():
+ a, b, x1, x2 = symbols('a b x1 x2')
+
+ assert unchanged(betainc_regularized, a, b, x1, x2)
+ assert unchanged(betainc_regularized, a, b, 0, x1)
+
+ assert betainc_regularized(3, 5, 0, -1).is_real == True
+ assert betainc_regularized(3, 5, 0, x2).is_real is None
+ assert conjugate(betainc_regularized(3*I, 1, 2 + I, 1 + 2*I)) == betainc_regularized(-3*I, 1, 2 - I, 1 - 2*I)
+
+ assert betainc_regularized(a, b, 0, 1).rewrite(Integral) == 1
+ assert betainc_regularized(1, 2, x1, x2).rewrite(hyper) == 2*x2*hyper((1, -1), (2,), x2) - 2*x1*hyper((1, -1), (2,), x1)
+
+ assert betainc_regularized(4, 1, 5, 5).evalf() == 0
diff --git a/sympy/utilities/tests/test_lambdify.py b/sympy/utilities/tests/test_lambdify.py
index 26eb22356d97..951c4055f100 100644
--- a/sympy/utilities/tests/test_lambdify.py
+++ b/sympy/utilities/tests/test_lambdify.py
@@ -10,7 +10,7 @@
true, false, And, Or, Not, ITE, Min, Max, floor, diff, IndexedBase, Sum,
DotProduct, Eq, Dummy, sinc, erf, erfc, factorial, gamma, loggamma,
digamma, RisingFactorial, besselj, bessely, besseli, besselk, S, beta,
- fresnelc, fresnels)
+ betainc, betainc_regularized, fresnelc, fresnels)
from sympy.codegen.cfunctions import expm1, log1p, exp2, log2, log10, hypot
from sympy.codegen.numpy_nodes import logaddexp, logaddexp2
from sympy.codegen.scipy_nodes import cosm1
@@ -1303,6 +1303,24 @@ def test_beta_math():
assert abs(beta(1.3, 2.3) - F(1.3, 2.3)) <= 1e-10
+def test_betainc_scipy():
+ if not scipy:
+ skip("scipy not installed")
+
+ f = betainc(w, x, y, z)
+ F = lambdify((w, x, y, z), f, modules='scipy')
+
+ assert abs(betainc(1.4, 3.1, 0.1, 0.5) - F(1.4, 3.1, 0.1, 0.5)) <= 1e-10
+
+def test_betainc_regularized_scipy():
+ if not scipy:
+ skip("scipy not installed")
+
+ f = betainc_regularized(w, x, y, z)
+ F = lambdify((w, x, y, z), f, modules='scipy')
+
+ assert abs(betainc_regularized(0.2, 3.5, 0.1, 1) - F(0.2, 3.5, 0.1, 1)) <= 1e-10
+
def test_numpy_special_math():
if not numpy:
| [
{
"components": [
{
"doc": "",
"lines": [
8,
13
],
"name": "betainc_mpmath_fix",
"signature": "def betainc_mpmath_fix(a, b, x1, x2, reg=0):",
"type": "function"
},
{
"doc": "",
"lines": [
141,
... | [
"test_sympy__functions__special__beta_functions__beta",
"test_sympy__functions__special__beta_functions__betainc",
"test_sympy__functions__special__beta_functions__betainc_regularized",
"test_beta",
"test_betainc",
"test_no_args",
"test_single_arg",
"test_list_args",
"test_nested_args",
"test_str_... | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_predicates",
"test_sympy__assumptions__assume__UndefinedPredicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers_... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Added central, generalized incomplete and regularized beta functions
I have added a `betainc` class through which generalized incomplete and generalized regularized incomplete beta functions can be computed. It uses `betainc` from mpmath for calculations in the backend.
#### References to other Issues or PRs
Fixes #20098, Stalled #18727
#### Brief description of what is fixed or changed
#### Other comments
Once this is merged, I plan to work on adding the PERT distribution #16970
#### Release Notes
<!-- BEGIN RELEASE NOTES -->
* functions
* Added Generalized Incomplete, Generalized Regularized and Central Beta functions
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in sympy/functions/special/beta_functions.py]
(definition of betainc_mpmath_fix:)
def betainc_mpmath_fix(a, b, x1, x2, reg=0):
(definition of beta._eval_rewrite_as_Integral:)
def _eval_rewrite_as_Integral(self, x, y, **kwargs):
(definition of betainc:)
class betainc(Function):
"""The Generalized Incomplete Beta function is defined as
.. math::
\mathrm{B}_{(x_1, x_2)}(a, b) = \int_{x_1}^{x_2} t^{a - 1} (1 - t)^{b - 1} dt
The Incomplete Beta function is a special case
of the Generalized Incomplete Beta function :
.. math:: \mathrm{B}_z (a, b) = \mathrm{B}_{(0, z)}(a, b)
The Incomplete Beta function satisfies :
.. math:: \mathrm{B}_z (a, b) = (-1)^a \mathrm{B}_{\frac{z}{z - 1}} (a, 1 - a - b)
The Beta function is a special case of the Incomplete Beta function :
.. math:: \mathrm{B}(a, b) = \mathrm{B}_{1}(a, b)
Examples
========
>>> from sympy import betainc, symbols, conjugate
>>> a, b, x, x1, x2 = symbols('a b x x1 x2')
The Generalized Incomplete Beta function is given by:
>>> betainc(a, b, x1, x2)
betainc(a, b, x1, x2)
The Incomplete Beta function can be obtained as follows:
>>> betainc(a, b, 0, x)
betainc(a, b, 0, x)
The Incomplete Beta function obeys the mirror symmetry:
>>> conjugate(betainc(a, b, x1, x2))
betainc(conjugate(a), conjugate(b), conjugate(x1), conjugate(x2))
We can numerically evaluate the Incomplete Beta function to
arbitrary precision for any complex numbers a, b, x1 and x2:
>>> from sympy import betainc, I
>>> betainc(2, 3, 4, 5).evalf(10)
56.08333333
>>> betainc(0.75, 1 - 4*I, 0, 2 + 3*I).evalf(25)
0.2241657956955709603655887 + 0.3619619242700451992411724*I
The Generalized Incomplete Beta function can be expressed
in terms of the Generalized Hypergeometric function.
>>> from sympy import hyper
>>> betainc(a, b, x1, x2).rewrite(hyper)
(-x1**a*hyper((a, 1 - b), (a + 1,), x1) + x2**a*hyper((a, 1 - b), (a + 1,), x2))/a
See Also
========
beta: Beta function
hyper: Generalized Hypergeometric function
References
==========
.. [1] https://en.wikipedia.org/wiki/Beta_function#Incomplete_beta_function
.. [2] https://dlmf.nist.gov/8.17
.. [3] https://functions.wolfram.com/GammaBetaErf/Beta4/
.. [4] https://functions.wolfram.com/GammaBetaErf/BetaRegularized4/02/"""
(definition of betainc.fdiff:)
def fdiff(self, argindex):
(definition of betainc._eval_mpmath:)
def _eval_mpmath(self):
(definition of betainc._eval_is_real:)
def _eval_is_real(self):
(definition of betainc._eval_conjugate:)
def _eval_conjugate(self):
(definition of betainc._eval_rewrite_as_Integral:)
def _eval_rewrite_as_Integral(self, a, b, x1, x2, **kwargs):
(definition of betainc._eval_rewrite_as_hyper:)
def _eval_rewrite_as_hyper(self, a, b, x1, x2, **kwargs):
(definition of betainc_regularized:)
class betainc_regularized(Function):
"""The Generalized Regularized Incomplete Beta function is given by
.. math::
\mathrm{I}_{(x_1, x_2)}(a, b) = \frac{\mathrm{B}_{(x_1, x_2)}(a, b)}{\mathrm{B}(a, b)}
The Regularized Incomplete Beta function is a special case
of the Generalized Regularized Incomplete Beta function :
.. math:: \mathrm{I}_z (a, b) = \mathrm{I}_{(0, z)}(a, b)
The Regularized Incomplete Beta function is the cumulative distribution
function of the beta distribution.
Examples
========
>>> from sympy import betainc_regularized, symbols, conjugate
>>> a, b, x, x1, x2 = symbols('a b x x1 x2')
The Generalized Regularized Incomplete Beta
function is given by:
>>> betainc_regularized(a, b, x1, x2)
betainc_regularized(a, b, x1, x2)
The Regularized Incomplete Beta function
can be obtained as follows:
>>> betainc_regularized(a, b, 0, x)
betainc_regularized(a, b, 0, x)
The Regularized Incomplete Beta function
obeys the mirror symmetry:
>>> conjugate(betainc_regularized(a, b, x1, x2))
betainc_regularized(conjugate(a), conjugate(b), conjugate(x1), conjugate(x2))
We can numerically evaluate the Regularized Incomplete Beta function
to arbitrary precision for any complex numbers a, b, x1 and x2:
>>> from sympy import betainc_regularized, pi, E
>>> betainc_regularized(1, 2, 0, 0.25).evalf(10)
0.4375000000
>>> betainc_regularized(pi, E, 0, 1).evalf(5)
1.00000
The Generalized Regularized Incomplete Beta function can be
expressed in terms of the Generalized Hypergeometric function.
>>> from sympy import hyper
>>> betainc_regularized(a, b, x1, x2).rewrite(hyper)
(-x1**a*hyper((a, 1 - b), (a + 1,), x1) + x2**a*hyper((a, 1 - b), (a + 1,), x2))/(a*beta(a, b))
See Also
========
beta: Beta function
hyper: Generalized Hypergeometric function
References
==========
.. [1] https://en.wikipedia.org/wiki/Beta_function#Incomplete_beta_function
.. [2] https://dlmf.nist.gov/8.17
.. [3] https://functions.wolfram.com/GammaBetaErf/Beta4/
.. [4] https://functions.wolfram.com/GammaBetaErf/BetaRegularized4/02/"""
(definition of betainc_regularized.__new__:)
def __new__(cls, a, b, x1, x2):
(definition of betainc_regularized._eval_mpmath:)
def _eval_mpmath(self):
(definition of betainc_regularized.fdiff:)
def fdiff(self, argindex):
(definition of betainc_regularized._eval_is_real:)
def _eval_is_real(self):
(definition of betainc_regularized._eval_conjugate:)
def _eval_conjugate(self):
(definition of betainc_regularized._eval_rewrite_as_Integral:)
def _eval_rewrite_as_Integral(self, a, b, x1, x2, **kwargs):
(definition of betainc_regularized._eval_rewrite_as_hyper:)
def _eval_rewrite_as_hyper(self, a, b, x1, x2, **kwargs):
[end of new definitions in sympy/functions/special/beta_functions.py]
[start of new definitions in sympy/printing/latex.py]
(definition of LatexPrinter._print_betainc:)
def _print_betainc(self, expr, exp=None, operator='B'):
(definition of LatexPrinter._print_betainc_regularized:)
def _print_betainc_regularized(self, expr, exp=None):
[end of new definitions in sympy/printing/latex.py]
[start of new definitions in sympy/printing/pretty/pretty.py]
(definition of PrettyPrinter._print_betainc:)
def _print_betainc(self, e):
(definition of PrettyPrinter._print_betainc_regularized:)
def _print_betainc_regularized(self, e):
[end of new definitions in sympy/printing/pretty/pretty.py]
[start of new definitions in sympy/printing/pycode.py]
(definition of SciPyPrinter._print_betainc:)
def _print_betainc(self, expr):
(definition of SciPyPrinter._print_betainc_regularized:)
def _print_betainc_regularized(self, expr):
[end of new definitions in sympy/printing/pycode.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
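As a side note, the hypergeometric rewrite quoted in the `betainc` docstring above can be checked numerically without SymPy. The sketch below is illustrative only (the helper names are not part of the patch); it relies on the 2F1 series terminating because the second parameter `1 - b` is a nonpositive integer in the chosen check, so it stays valid even for arguments larger than 1:

```python
def hyp2f1_series(a, b, c, x, nterms=200):
    # Power series for 2F1(a, b; c; x); terminates exactly when b is a
    # nonpositive integer (here b = 1 - 3 = -2), so it is valid even for x > 1.
    total, term = 0.0, 1.0
    for k in range(nterms):
        total += term
        term *= (a + k) * (b + k) * x / ((c + k) * (k + 1))
        if term == 0.0:
            break
    return total

def betainc_via_hyper(a, b, x1, x2):
    # B_(x1, x2)(a, b) via the rewrite from the docstring:
    # (x2**a * 2F1(a, 1-b; a+1; x2) - x1**a * 2F1(a, 1-b; a+1; x1)) / a
    f = lambda x: x**a * hyp2f1_series(a, 1 - b, a + 1, x)
    return (f(x2) - f(x1)) / a

def betainc_by_quadrature(a, b, x1, x2, n=100_000):
    # Brute-force midpoint rule for the defining integral.
    h = (x2 - x1) / n
    return h * sum((x1 + (i + 0.5) * h) ** (a - 1)
                   * (1 - (x1 + (i + 0.5) * h)) ** (b - 1) for i in range(n))

print(betainc_via_hyper(2, 3, 4, 5))      # ~56.083333, as in the docstring
print(betainc_by_quadrature(2, 3, 4, 5))  # same value from the integral itself
```

Both routes agree with the `betainc(2, 3, 4, 5).evalf(10)` value shown in the docstring.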
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Add incomplete and regularized beta functions and regularized gamma functions
I found that these functions are missing in SymPy. They are described in MathWorld at:
https://mathworld.wolfram.com/IncompleteBetaFunction.html
https://mathworld.wolfram.com/RegularizedBetaFunction.html
https://mathworld.wolfram.com/RegularizedGammaFunction.html
For what I seek to do, the regularized beta function is important because it is the beta distribution's cumulative distribution function.
----------
Hello, I am new to open source and want to work on this issue. It would be very helpful if someone could guide me on where to start.
There is a PR (#18727) which started implementing Incomplete Beta, but it has stalled. I went through Wolfram Functions and we can broadly classify the functions into 3 types for implementation purpose.
[nine images from Wolfram Functions, showing the beta-function variants referred to above, omitted]
Right now, we have only the regular `beta` function implemented. IMO, we can just implement 2 classes - one for incomplete beta functions and another for regularized beta functions. Central beta can be added to existing `beta` class. I was thinking of the following API
``` python
>>> from sympy import beta, incomplete_beta, regularized_beta
>>> beta(a, b) #Existing
>>> beta(a) #Central beta
>>> incomplete_beta([z1, z2], a, b) #Generalized Incomplete beta
>>> incomplete_beta(z, a, b) #Incomplete beta
>>> regularized_beta([z1, z2], a, b) #Generalized Regularized beta
>>> regularized_beta(z1, a, b) #Regularized beta
```
Does this look good @oscarbenjamin ?
> ```python
> >>> beta(a, b) #Existing
> >>> beta(a) #Central beta
> ```
It's better not to have a class that doesn't know how many args it has. If `beta(a)` is automatically converted to `beta(a, a)` and always has 2 args then that's fine. Otherwise I would make a common base class and have `beta` and `centralbeta` as subclasses.
> > ```python
> > >>> beta(a, b) #Existing
> > >>> beta(a) #Central beta
> > ```
>
> It's better not to have a class that doesn't know how many args it has. If `beta(a)` is automatically converted to `beta(a, a)` and always has 2 args then that's fine. Otherwise I would make a common base class and have `beta` and `centralbeta` as subclasses.
I'd go with the option of converting `beta(a)` to `beta(a, a)` since this is the only difference between beta and central beta. It would only duplicate code if I add a class for central beta. Are the others fine?
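For reference, the `beta(a)` to `beta(a, a)` auto-conversion discussed here is just the central beta function B(a, a). A minimal standalone sketch using the gamma-function definition (the names are illustrative, not the proposed SymPy API):

```python
import math

def beta_fn(a, b=None):
    # A one-argument call is treated as central beta B(a, a),
    # mirroring the auto-conversion proposed above.
    if b is None:
        b = a
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

print(beta_fn(3))     # B(3, 3) = 1/30
print(beta_fn(2, 3))  # B(2, 3) = 1/12
```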
There is a closed form for the Generalized Incomplete Beta Function in terms of the Meijer G function. In fact:
``` python
>>> from sympy import *
>>> t, a, b, z1, z2 = symbols('t a b z1 z2')
>>> Integral(t**(a-1)*(1-t)**b, (t, z1, z2)).doit(meijerg=True)
-z1**a*gamma(a)*hyper((a, 1 - b), (a + 1,), z1*exp_polar(2*I*pi))/gamma(a + 1) + z2**a*gamma(a)*hyper((a, 1 - b), (a + 1,), z2*exp_polar(2*I*pi))/gamma(a + 1)
```
@oscarbenjamin Should I use this definition instead of the integral definition?
We should probably base our definitions on something like DLMF in general:
https://dlmf.nist.gov/8.17
Or perhaps fungrim:
http://fungrim.org/topic/Beta_function/
I see that these definitions don't include the generalized beta function, but substituting `z1 = 0` and simplifying the expression I mentioned above
> ``` python
> -z1**a*gamma(a)*hyper((a, 1 - b), (a + 1,), z1*exp_polar(2*I*pi))/gamma(a + 1) +
> z2**a*gamma(a)*hyper((a, 1 - b), (a + 1,), z2*exp_polar(2*I*pi))/gamma(a + 1)
> ```
gives the same formula that is mentioned in these sites. So, I'll go ahead and add them.
--------------------
</issues> | 284db64e44124856accce4abff75d2d8990c301d | |
pvlib__pvlib-python-1104 | 1,104 | pvlib/pvlib-python | 0.7 | 6e5148f59c5050e8f7a0084b7ae39e93b80f72e6 | 2020-12-08T21:06:18Z | diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 9d79aab177..1306ed79d1 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -181,7 +181,7 @@ DNI estimation models
irradiance.dirint
irradiance.dirindex
irradiance.erbs
- irradiance.liujordan
+ irradiance.campbell_norman
irradiance.gti_dirint
Clearness index models
@@ -520,7 +520,7 @@ Processing data
forecast.ForecastModel.cloud_cover_to_ghi_linear
forecast.ForecastModel.cloud_cover_to_irradiance_clearsky_scaling
forecast.ForecastModel.cloud_cover_to_transmittance_linear
- forecast.ForecastModel.cloud_cover_to_irradiance_liujordan
+ forecast.ForecastModel.cloud_cover_to_irradiance_campbell_norman
forecast.ForecastModel.cloud_cover_to_irradiance
forecast.ForecastModel.kelvin_to_celsius
forecast.ForecastModel.isobaric_to_ambient_temperature
diff --git a/docs/sphinx/source/whatsnew/v0.8.1.rst b/docs/sphinx/source/whatsnew/v0.8.1.rst
index f7b4ec1782..8b2bec94b5 100644
--- a/docs/sphinx/source/whatsnew/v0.8.1.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.1.rst
@@ -9,7 +9,7 @@ Breaking changes
Deprecations
~~~~~~~~~~~~
-
+* ``pvlib.irradiance.liujordan`` is deprecated.
Enhancements
~~~~~~~~~~~~
@@ -21,6 +21,12 @@ Enhancements
multiple MPPTs (:issue:`457`, :pull:`1085`)
* Added optional ``attributes`` parameter to :py:func:`pvlib.iotools.get_psm3`
and added the option of fetching 5- and 15-minute PSM3 data. (:pull:`1086`)
+* Added :py:func:`pvlib.irradiance.campbell_norman` for estimating DNI, DHI and GHI
+ from extraterrestrial irradiance. This function replaces ``pvlib.irradiance.liujordan``;
+ users of ``pvlib.irradiance.liujordan`` should note that :py:func:`pvlib.irradiance.campbell_norman`
+ expects different parameters.
+* :py:meth:`pvlib.forecast.Forecast.cloud_cover_to_irradiance_campbell_norman`
+ replaces ``pvlib.forecast.Forecast.cloud_cover_to_irradiance_liujordan``.
Bug fixes
~~~~~~~~~
diff --git a/pvlib/forecast.py b/pvlib/forecast.py
index 24e54e1015..4d4500c560 100644
--- a/pvlib/forecast.py
+++ b/pvlib/forecast.py
@@ -9,11 +9,14 @@
from xml.etree.ElementTree import ParseError
from pvlib.location import Location
-from pvlib.irradiance import liujordan, get_extra_radiation, disc
+from pvlib.irradiance import campbell_norman, get_extra_radiation, disc
+from pvlib.irradiance import _liujordan
from siphon.catalog import TDSCatalog
from siphon.ncss import NCSS
import warnings
+from pvlib._deprecation import deprecated
+
warnings.warn(
'The forecast module algorithms and features are highly experimental. '
@@ -526,8 +529,48 @@ def cloud_cover_to_transmittance_linear(self, cloud_cover, offset=0.75,
return transmittance
+ def cloud_cover_to_irradiance_campbell_norman(self, cloud_cover, **kwargs):
+ """
+ Estimates irradiance from cloud cover in the following steps:
+
+ 1. Determine transmittance using a function of cloud cover e.g.
+ :py:meth:`~ForecastModel.cloud_cover_to_transmittance_linear`
+ 2. Calculate GHI, DNI, DHI using the
+ :py:func:`pvlib.irradiance.campbell_norman` model
+
+ Parameters
+ ----------
+ cloud_cover : Series
+
+ Returns
+ -------
+ irradiance : DataFrame
+ Columns include ghi, dni, dhi
+ """
+ # in principle, get_solarposition could use the forecast
+ # pressure, temp, etc., but the cloud cover forecast is not
+ # accurate enough to justify using these minor corrections
+ solar_position = self.location.get_solarposition(cloud_cover.index)
+ dni_extra = get_extra_radiation(cloud_cover.index)
+
+ transmittance = self.cloud_cover_to_transmittance_linear(cloud_cover,
+ **kwargs)
+
+ irrads = campbell_norman(solar_position['apparent_zenith'],
+ transmittance, dni_extra=dni_extra)
+ irrads = irrads.fillna(0)
+
+ return irrads
+
+ @deprecated(
+ '0.8',
+ alternative='Forecast.cloud_cover_to_irradiance_campbell_norman',
+ name='Forecast.cloud_cover_to_irradiance_liujordan',
+ removal='0.9')
def cloud_cover_to_irradiance_liujordan(self, cloud_cover, **kwargs):
"""
+ Deprecated. Use cloud_cover_to_irradiance_campbell_norman instead.
+
Estimates irradiance from cloud cover in the following steps:
1. Determine transmittance using a function of cloud cover e.g.
@@ -554,9 +597,9 @@ def cloud_cover_to_irradiance_liujordan(self, cloud_cover, **kwargs):
transmittance = self.cloud_cover_to_transmittance_linear(cloud_cover,
**kwargs)
- irrads = liujordan(solar_position['apparent_zenith'],
- transmittance, airmass['airmass_absolute'],
- dni_extra=dni_extra)
+ irrads = _liujordan(solar_position['apparent_zenith'],
+ transmittance, airmass['airmass_absolute'],
+ dni_extra=dni_extra)
irrads = irrads.fillna(0)
return irrads
@@ -571,7 +614,8 @@ def cloud_cover_to_irradiance(self, cloud_cover, how='clearsky_scaling',
cloud_cover : Series
how : str, default 'clearsky_scaling'
Selects the method for conversion. Can be one of
- clearsky_scaling or liujordan.
+ clearsky_scaling or campbell_norman. Method liujordan is
+ deprecated.
**kwargs
Passed to the selected method.
@@ -585,6 +629,9 @@ def cloud_cover_to_irradiance(self, cloud_cover, how='clearsky_scaling',
if how == 'clearsky_scaling':
irrads = self.cloud_cover_to_irradiance_clearsky_scaling(
cloud_cover, **kwargs)
+ elif how == 'campbell_norman':
+ irrads = self.cloud_cover_to_irradiance_campbell_norman(
+ cloud_cover, **kwargs)
elif how == 'liujordan':
irrads = self.cloud_cover_to_irradiance_liujordan(
cloud_cover, **kwargs)
diff --git a/pvlib/irradiance.py b/pvlib/irradiance.py
index e1d07699b2..67a327a090 100644
--- a/pvlib/irradiance.py
+++ b/pvlib/irradiance.py
@@ -13,6 +13,9 @@
from pvlib import atmosphere, solarposition, tools
+from pvlib._deprecation import deprecated
+
+
# see References section of grounddiffuse function
SURFACE_ALBEDOS = {'urban': 0.18,
'grass': 0.20,
@@ -2184,7 +2187,61 @@ def erbs(ghi, zenith, datetime_or_doy, min_cos_zenith=0.065, max_zenith=87):
return data
-def liujordan(zenith, transmittance, airmass, dni_extra=1367.0):
+def campbell_norman(zenith, transmittance, pressure=101325.0,
+ dni_extra=1367.0):
+ '''
+ Determine DNI, DHI, GHI from extraterrestrial flux, transmittance,
+ and atmospheric pressure.
+
+ Parameters
+ ----------
+ zenith: pd.Series
+ True (not refraction-corrected) zenith angles in decimal
+ degrees. If Z is a vector it must be of the same size as all
+ other vector inputs. Z must be >=0 and <=180.
+
+ transmittance: float
+ Atmospheric transmittance between 0 and 1.
+
+ pressure: float, default 101325.0
+ Air pressure
+
+ dni_extra: float, default 1367.0
+ Direct irradiance incident at the top of the atmosphere.
+
+ Returns
+ -------
+ irradiance: DataFrame
+ Modeled direct normal irradiance, direct horizontal irradiance,
+ and global horizontal irradiance in W/m^2
+
+ References
+ ----------
+ .. [1] Campbell, G. S., J. M. Norman (1998) An Introduction to
+ Environmental Biophysics. 2nd Ed. New York: Springer.
+ '''
+
+ tau = transmittance
+
+ airmass = atmosphere.get_relative_airmass(zenith, model='simple')
+ airmass = atmosphere.get_absolute_airmass(airmass, pressure=pressure)
+ dni = dni_extra*tau**airmass
+ cos_zen = tools.cosd(zenith)
+ dhi = 0.3 * (1.0 - tau**airmass) * dni_extra * cos_zen
+ ghi = dhi + dni * cos_zen
+
+ irrads = OrderedDict()
+ irrads['ghi'] = ghi
+ irrads['dni'] = dni
+ irrads['dhi'] = dhi
+
+ if isinstance(ghi, pd.Series):
+ irrads = pd.DataFrame(irrads)
+
+ return irrads
+
+
+def _liujordan(zenith, transmittance, airmass, dni_extra=1367.0):
'''
Determine DNI, DHI, GHI from extraterrestrial flux, transmittance,
and optical air mass number.
@@ -2242,6 +2299,10 @@ def liujordan(zenith, transmittance, airmass, dni_extra=1367.0):
return irrads
+liujordan = deprecated('0.8', alternative='campbell_norman',
+ name='liujordan', removal='0.9')(_liujordan)
+
+
def _get_perez_coefficients(perezmodel):
'''
Find coefficients for the Perez model
| diff --git a/pvlib/tests/test_forecast.py b/pvlib/tests/test_forecast.py
index 75b3badb74..ff14de7629 100644
--- a/pvlib/tests/test_forecast.py
+++ b/pvlib/tests/test_forecast.py
@@ -10,7 +10,7 @@
requires_siphon,
has_siphon,
skip_windows,
- requires_recent_cftime,
+ requires_recent_cftime
)
from conftest import RERUNS, RERUNS_DELAY
@@ -69,7 +69,7 @@ def model(request):
@pytest.mark.remote_data
@pytest.mark.flaky(reruns=RERUNS, reruns_delay=RERUNS_DELAY)
def test_process_data(model):
- for how in ['liujordan', 'clearsky_scaling']:
+ for how in ['campbell_norman', 'clearsky_scaling']:
if model.raw_data.empty:
warnings.warn('Could not test {} process_data with how={} '
'because raw_data was empty'.format(model, how))
diff --git a/pvlib/tests/test_irradiance.py b/pvlib/tests/test_irradiance.py
index 0aeec3b95e..995952648d 100644
--- a/pvlib/tests/test_irradiance.py
+++ b/pvlib/tests/test_irradiance.py
@@ -13,7 +13,8 @@
from pvlib import irradiance
-from conftest import requires_ephem, requires_numba
+from conftest import requires_ephem, requires_numba, fail_on_pvlib_version
+from pvlib._deprecation import pvlibDeprecationWarning
# fixtures create realistic test input data
@@ -285,13 +286,27 @@ def test_get_sky_diffuse_invalid():
model='invalid')
+@fail_on_pvlib_version('0.9')
def test_liujordan():
expected = pd.DataFrame(np.array(
[[863.859736967, 653.123094076, 220.65905025]]),
columns=['ghi', 'dni', 'dhi'],
index=[0])
- out = irradiance.liujordan(
- pd.Series([10]), pd.Series([0.5]), pd.Series([1.1]), dni_extra=1400)
+ with pytest.warns(pvlibDeprecationWarning):
+ out = irradiance.liujordan(
+ pd.Series([10]), pd.Series([0.5]), pd.Series([1.1]),
+ dni_extra=1400)
+ assert_frame_equal(out, expected)
+
+
+def test_campbell_norman():
+ expected = pd.DataFrame(np.array(
+ [[863.859736967, 653.123094076, 220.65905025]]),
+ columns=['ghi', 'dni', 'dhi'],
+ index=[0])
+ out = irradiance.campbell_norman(
+ pd.Series([10]), pd.Series([0.5]), pd.Series([109764.21013135818]),
+ dni_extra=1400)
assert_frame_equal(out, expected)
| diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 9d79aab177..1306ed79d1 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -181,7 +181,7 @@ DNI estimation models
irradiance.dirint
irradiance.dirindex
irradiance.erbs
- irradiance.liujordan
+ irradiance.campbell_norman
irradiance.gti_dirint
Clearness index models
@@ -520,7 +520,7 @@ Processing data
forecast.ForecastModel.cloud_cover_to_ghi_linear
forecast.ForecastModel.cloud_cover_to_irradiance_clearsky_scaling
forecast.ForecastModel.cloud_cover_to_transmittance_linear
- forecast.ForecastModel.cloud_cover_to_irradiance_liujordan
+ forecast.ForecastModel.cloud_cover_to_irradiance_campbell_norman
forecast.ForecastModel.cloud_cover_to_irradiance
forecast.ForecastModel.kelvin_to_celsius
forecast.ForecastModel.isobaric_to_ambient_temperature
diff --git a/docs/sphinx/source/whatsnew/v0.8.1.rst b/docs/sphinx/source/whatsnew/v0.8.1.rst
index f7b4ec1782..8b2bec94b5 100644
--- a/docs/sphinx/source/whatsnew/v0.8.1.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.1.rst
@@ -9,7 +9,7 @@ Breaking changes
Deprecations
~~~~~~~~~~~~
-
+* ``pvlib.irradiance.liujordan`` is deprecated.
Enhancements
~~~~~~~~~~~~
@@ -21,6 +21,12 @@ Enhancements
multiple MPPTs (:issue:`457`, :pull:`1085`)
* Added optional ``attributes`` parameter to :py:func:`pvlib.iotools.get_psm3`
and added the option of fetching 5- and 15-minute PSM3 data. (:pull:`1086`)
+* Added :py:func:`pvlib.irradiance.campbell_norman` for estimating DNI, DHI and GHI
+ from extraterrestrial irradiance. This function replaces ``pvlib.irradiance.liujordan``;
+ users of ``pvlib.irradiance.liujordan`` should note that :py:func:`pvlib.irradiance.campbell_norman`
+ expects different parameters.
+* :py:meth:`pvlib.forecast.Forecast.cloud_cover_to_irradiance_campbell_norman`
+ replaces ``pvlib.forecast.Forecast.cloud_cover_to_irradiance_liujordan``.
Bug fixes
~~~~~~~~~
| [
{
"components": [
{
"doc": "Estimates irradiance from cloud cover in the following steps:\n\n1. Determine transmittance using a function of cloud cover e.g.\n :py:meth:`~ForecastModel.cloud_cover_to_transmittance_linear`\n2. Calculate GHI, DNI, DHI using the\n :py:func:`pvlib.irradiance.campbe... | [
"pvlib/tests/test_irradiance.py::test_liujordan",
"pvlib/tests/test_irradiance.py::test_campbell_norman"
] | [
"pvlib/tests/test_forecast.py::test_latest",
"pvlib/tests/test_forecast.py::test_full",
"pvlib/tests/test_forecast.py::test_temp_convert",
"pvlib/tests/test_forecast.py::test_set_location",
"pvlib/tests/test_forecast.py::test_set_query_time_range_tzfail",
"pvlib/tests/test_forecast.py::test_cloud_cover_to... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Convert irradiance.liujordan to irradiance.campbell_norman
- [x] Closes #1100
- [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)
- [x] Tests added
- [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.
- [x] Adds description and name entries in the appropriate "what's new" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).
- [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.
- [x] Pull request is nearly complete and ready for detailed review.
- [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.
Replaces `irradiance.liujordan` with `irradiance.campbellnorman`. `liujordan` is deprecated.
----------
</request>
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in pvlib/forecast.py]
(definition of ForecastModel.cloud_cover_to_irradiance_campbell_norman:)
def cloud_cover_to_irradiance_campbell_norman(self, cloud_cover, **kwargs):
"""Estimates irradiance from cloud cover in the following steps:
1. Determine transmittance using a function of cloud cover e.g.
:py:meth:`~ForecastModel.cloud_cover_to_transmittance_linear`
2. Calculate GHI, DNI, DHI using the
:py:func:`pvlib.irradiance.campbell_norman` model
Parameters
----------
cloud_cover : Series
Returns
-------
irradiance : DataFrame
Columns include ghi, dni, dhi"""
[end of new definitions in pvlib/forecast.py]
[start of new definitions in pvlib/irradiance.py]
(definition of campbell_norman:)
def campbell_norman(zenith, transmittance, pressure=101325.0, dni_extra=1367.0):
"""Determine DNI, DHI, GHI from extraterrestrial flux, transmittance,
and atmospheric pressure.
Parameters
----------
zenith: pd.Series
True (not refraction-corrected) zenith angles in decimal
degrees. If Z is a vector it must be of the same size as all
other vector inputs. Z must be >=0 and <=180.
transmittance: float
Atmospheric transmittance between 0 and 1.
pressure: float, default 101325.0
Air pressure
dni_extra: float, default 1367.0
Direct irradiance incident at the top of the atmosphere.
Returns
-------
irradiance: DataFrame
Modeled direct normal irradiance, direct horizontal irradiance,
and global horizontal irradiance in W/m^2
References
----------
.. [1] Campbell, G. S., J. M. Norman (1998) An Introduction to
Environmental Biophysics. 2nd Ed. New York: Springer."""
(definition of _liujordan:)
def _liujordan(zenith, transmittance, airmass, dni_extra=1367.0):
"""Determine DNI, DHI, GHI from extraterrestrial flux, transmittance,
and optical air mass number.
Liu and Jordan, 1960, developed a simplified direct radiation model.
DHI is from an empirical equation for diffuse radiation from Liu and
Jordan, 1960.
Parameters
----------
zenith: pd.Series
True (not refraction-corrected) zenith angles in decimal
degrees. If Z is a vector it must be of the same size as all
other vector inputs. Z must be >=0 and <=180.
transmittance: float
Atmospheric transmittance between 0 and 1.
pressure: float, default 101325.0
Air pressure
dni_extra: float, default 1367.0
Direct irradiance incident at the top of the atmosphere.
Returns
-------
irradiance: DataFrame
Modeled direct normal irradiance, direct horizontal irradiance,
and global horizontal irradiance in W/m^2
References
----------
.. [1] Campbell, G. S., J. M. Norman (1998) An Introduction to
Environmental Biophysics. 2nd Ed. New York: Springer.
.. [2] Liu, B. Y., R. C. Jordan, (1960). "The interrelationship and
characteristic distribution of direct, diffuse, and total solar
radiation". Solar Energy 4:1-19"""
[end of new definitions in pvlib/irradiance.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
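The relations that `campbell_norman` encodes can be reproduced outside pvlib in a few lines. The sketch below is illustrative (not the pvlib implementation); it uses the 'simple' relative air mass 1/cos(z) scaled by pressure, which is how the patch obtains absolute air mass:

```python
import math

def campbell_norman_sketch(zenith_deg, transmittance,
                           pressure=101325.0, dni_extra=1367.0):
    cos_zen = math.cos(math.radians(zenith_deg))
    # 'simple' relative air mass, pressure-corrected to absolute air mass
    airmass = (1.0 / cos_zen) * (pressure / 101325.0)
    tau_m = transmittance ** airmass
    dni = dni_extra * tau_m
    dhi = 0.3 * (1.0 - tau_m) * dni_extra * cos_zen  # eq. 11.13 in the book excerpt
    ghi = dhi + dni * cos_zen
    return {'ghi': ghi, 'dni': dni, 'dhi': dhi}

# The new test picks a pressure that makes the absolute air mass exactly 1.1,
# matching the inputs of the old liujordan test:
irr = campbell_norman_sketch(10, 0.5, pressure=109764.21013135818, dni_extra=1400)
print(irr)  # ghi ~863.86, dni ~653.12, dhi ~220.66
```

The printed values match the expected frame in `test_campbell_norman` above.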
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Docstring for irradiance.liujordan isn't clear
Docstring lists a parameter `pressure` but the parameter should be `airmass`. Appears to be an oversight in #386.
I examined the 1960 Liu/Jordan paper and don't recognize the equation coded https://github.com/pvlib/pvlib-python/blob/e4852c5563bc861f0d736f5ce294684ee6dc683e/pvlib/irradiance.py#L2231. Liu and Jordan (1960) don't mention airmass at all; is the model described in the Campbell and Norman book? Anyone have access to a copy to verify the model?
@wholmgren this function was added in #180
----------
The original author of this code provided a [pdf](https://github.com/pvlib/pvlib-python/files/5645236/liujordan_radiation.pdf) for the relevant section of the book. This expression is equation 11.13. The preceding line states "...using an empirical equation adapted from Liu and Jordan (1960)". So it's fair to question whether this function should really be called `liujordan`.
Thanks @wholmgren. I propose to:
- rename this function from `liujordan` to `campbellnorman`. The Liu-Jordan model is usually understood to be an empirical relationship between transmission coefficients for direct and diffuse irradiance, which can be used to separate DHI and DNI from clear-sky GHI. Instead, this function is a different sort than any other in pvlib currently, getting DNI and DHI from extraterrestrial irradiance and atmospheric transmittance. There is room in pvlib for the Liu-Jordan model as a decomposition function akin to Erbs, if that becomes of interest.
- revise the function to embed the calculation of airmass. The model described in the Campbell and Norman book assumes absolute air mass computed by the 'simple' air mass model (1/cos(zenith)). It's not clear that their correlation allows the air mass model to be exchanged for e.g. 'kastenyoung1989'. The `campbellnorman` function would expect pressure (defaulting to atmospheric) as input rather than airmass.
Good plan. Thanks for digging into this.
--------------------
</issues> | aa1635bcb40dc83f82e9fd72158670c235bfe99b |
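The deprecation mechanics used throughout the patch above (wrap the old callable, warn on call, point at the replacement) can be sketched generically. This is an illustrative stand-in, not pvlib's actual `_deprecation.deprecated` implementation:

```python
import functools
import warnings

def deprecated(since, alternative, name=None, removal=None):
    # Minimal sketch of a deprecation decorator in the style used above.
    def decorator(func):
        label = name or func.__name__
        msg = (f"{label} is deprecated since {since} and will be removed "
               f"in {removal}; use {alternative} instead")
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated('0.8', alternative='campbell_norman', removal='0.9')
def liujordan():
    return 'irradiance'

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    result = liujordan()
print(result, caught[0].category.__name__)  # irradiance DeprecationWarning
```

The old name keeps working but emits a warning naming the replacement, which is what the new `test_liujordan` asserts with `pytest.warns`.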
sympy__sympy-20549 | 20,549 | sympy/sympy | 1.8 | 1e76752f545a2ed44a8af008c76c9ca7dd77e41d | 2020-12-05T19:36:55Z | diff --git a/doc/src/modules/core.rst b/doc/src/modules/core.rst
index 0323aed5cfd0..fd6d3c88e96d 100644
--- a/doc/src/modules/core.rst
+++ b/doc/src/modules/core.rst
@@ -549,3 +549,22 @@ ordered
^^^^^^^
.. autofunction:: sympy.core.compatibility.ordered
+
+kind
+----
+.. module:: sympy.core.kind
+
+Kind
+^^^^
+.. autoclass:: Kind
+ :members:
+
+NumberKind
+^^^^^^^^^^
+.. autoclass:: NumberKind
+ :members:
+
+BooleanKind
+^^^^^^^^^^^
+.. autoclass:: BooleanKind
+ :members:
diff --git a/doc/src/special_topics/classification.rst b/doc/src/special_topics/classification.rst
new file mode 100644
index 000000000000..a2917075eb49
--- /dev/null
+++ b/doc/src/special_topics/classification.rst
@@ -0,0 +1,100 @@
+===============================
+Classification of SymPy objects
+===============================
+
+There are several ways in which a SymPy object can be classified.
+
+class
+=====
+
+Like any other object in Python, a SymPy expression is an instance of a class. You can
+get the class of an object with the built-in `type()` function, and check it with
+the `isinstance()` function.
+
+ >>> from sympy import Add
+ >>> from sympy.abc import x,y
+ >>> type(x + y)
+ <class 'sympy.core.add.Add'>
+ >>> isinstance(x + y, Add)
+ True
+
+Classes represent only the programmatic structure of the objects, and do not
+distinguish the mathematical differences between them. For example, the integral
+of a number and the integral of a matrix both have the class `Integral`, although the
+former is a number and the latter is a matrix.
+
+ >>> from sympy import MatrixSymbol, Integral
+ >>> A = MatrixSymbol('A', 2, 2)
+ >>> type(Integral(1, x))
+ <class 'sympy.integrals.integrals.Integral'>
+ >>> type(Integral(A, x))
+ <class 'sympy.integrals.integrals.Integral'>
+
+kind
+====
+
+Kind indicates what mathematical object the expression represents.
+You can retrieve the kind of an expression with the `.kind` property.
+
+ >>> Integral(1, x).kind
+ NumberKind
+ >>> Integral(A, x).kind
+ MatrixKind(NumberKind)
+
+This result shows that `Integral(1, x)` is a number, and `Integral(A, x)` is a matrix with number elements.
+
+Since the class alone cannot capture this difference, the kind of the object is very important.
+For example, if you are building a function or class that is designed to work only for
+numbers, you should consider filtering the arguments with `NumberKind` so that the user
+does not naively pass unsupported objects such as `Integral(A, x)`.
+
+For performance reasons, set theory is not implemented in the kind system. For example,
+
+ `NumberKind` does not distinguish between real and complex numbers.
+
+ >>> from sympy import pi, I
+ >>> pi.kind
+ NumberKind
+ >>> I.kind
+ NumberKind
+
+ SymPy's `Set` and kind are not compatible.
+
+ >>> from sympy import S
+ >>> from sympy.core.kind import NumberKind
+ >>> S.Reals.is_subset(S.Complexes)
+ True
+ >>> S.Reals.is_subset(NumberKind)
+ Traceback (most recent call last):
+ ...
+ ValueError: Unknown argument 'NumberKind'
+
+sets and assumptions
+====================
+
+If you want to classify an object in a strictly mathematical way, you may need
+SymPy's sets and assumptions.
+
+ >>> from sympy import ask, Q
+ >>> S.One in S.Reals
+ True
+ >>> ask(Q.even(2*x), Q.odd(x))
+ True
+
+See `assumptions` module and `sets` module for more information.
+
+func
+====
+
+`func` is the head of the object, and it is used to recurse over the expression tree.
+
+ >>> Add(x + y).func
+ <class 'sympy.core.add.Add'>
+ >>> Add(x + x).func
+ <class 'sympy.core.mul.Mul'>
+ >>> Q.even(x).func
+ Q.even
+
+As you can see, the resulting head may be a class or another SymPy object.
+Keep this in mind when you classify the object with this attribute.
+See :ref:`tutorial-manipulation` for detailed information.
diff --git a/doc/src/special_topics/index.rst b/doc/src/special_topics/index.rst
index 3b68f271de60..32e8bc713d04 100644
--- a/doc/src/special_topics/index.rst
+++ b/doc/src/special_topics/index.rst
@@ -9,3 +9,4 @@
intro.rst
finite_diff_derivatives.rst
+ classification.rst
diff --git a/sympy/concrete/expr_with_limits.py b/sympy/concrete/expr_with_limits.py
index 86cf319365b5..d3fa2a931ac5 100644
--- a/sympy/concrete/expr_with_limits.py
+++ b/sympy/concrete/expr_with_limits.py
@@ -200,6 +200,10 @@ def function(self):
"""
return self._args[0]
+ @property
+ def kind(self):
+ return self.function.kind
+
@property
def limits(self):
"""Return the limits of expression.
diff --git a/sympy/core/add.py b/sympy/core/add.py
index 772def16040d..6a3d10ac0e66 100644
--- a/sympy/core/add.py
+++ b/sympy/core/add.py
@@ -1,5 +1,6 @@
from collections import defaultdict
from functools import cmp_to_key, reduce
+from operator import attrgetter
from .basic import Basic
from .compatibility import is_sequence
from .parameters import global_parameters
@@ -9,6 +10,7 @@
from .cache import cacheit
from .numbers import ilcm, igcd
from .expr import Expr
+from .kind import UndefinedKind
# Key for sorting commutative args in canonical order
_args_sortkey = cmp_to_key(Basic.compare)
@@ -66,6 +68,7 @@ def _unevaluated_Add(*args):
newargs.insert(0, co)
return Add._from_args(newargs)
+
class Add(Expr, AssocOp):
__slots__ = ()
@@ -292,6 +295,19 @@ def class_key(cls):
"""Nice order of classes"""
return 3, 1, cls.__name__
+ @property
+ def kind(self):
+ k = attrgetter('kind')
+ kinds = map(k, self.args)
+ kinds = frozenset(kinds)
+ if len(kinds) != 1:
+            # Since addition is a group operation, all kinds must be the same.
+            # A mixture of kinds is unexpected, so return UndefinedKind.
+ result = UndefinedKind
+ else:
+ result, = kinds
+ return result
+
def as_coefficients_dict(a):
"""Return a dictionary mapping terms to their Rational coefficient.
Since the dictionary is a defaultdict, inquiries about terms which
diff --git a/sympy/core/basic.py b/sympy/core/basic.py
index 6318c38f5047..8e82778c7d60 100644
--- a/sympy/core/basic.py
+++ b/sympy/core/basic.py
@@ -8,6 +8,7 @@
from .sympify import _sympify, sympify, SympifyError
from .compatibility import iterable, ordered
from .singleton import S
+from .kind import UndefinedKind
from ._print_helpers import Printable
from inspect import getmro
@@ -107,6 +108,8 @@ class Basic(Printable, metaclass=ManagedProperties):
is_MatAdd = False
is_MatMul = False
+ kind = UndefinedKind
+
def __new__(cls, *args):
obj = object.__new__(cls)
obj._assumptions = cls.default_assumptions
diff --git a/sympy/core/kind.py b/sympy/core/kind.py
new file mode 100644
index 000000000000..a86898d62c13
--- /dev/null
+++ b/sympy/core/kind.py
@@ -0,0 +1,180 @@
+"""
+Module to efficiently partition SymPy objects.
+
+This system is introduced because the class of a SymPy object does not
+always represent the mathematical classification of the entity. For
+example, ``Integral(1, x)`` and ``Integral(Matrix([1,2]), x)`` are both
+instances of the ``Integral`` class. However, the former is a number and
+the latter is a matrix.
+
+One way to resolve this is to define a subclass for each mathematical
+type, such as ``MatAdd`` for addition between matrices. Basic algebraic
+operations such as addition and multiplication take this approach, but
+defining a class for every mathematical object is not scalable.
+
+Therefore, we define the "kind" of the object and let the expression
+infer its own kind from its arguments. Functions and classes can
+filter their arguments by kind, and behave differently according to
+their own kind.
+
+This module defines basic kinds for core objects. Other kinds such as
+``ArrayKind`` or ``MatrixKind`` can be found in corresponding modules.
+
+.. note::
+    This approach is experimental, and may be replaced or deleted in the future.
+ See https://github.com/sympy/sympy/pull/20549.
+"""
+
+
+class KindMeta(type):
+ """
+ Metaclass for ``Kind``.
+
+ Assigns empty ``dict`` as class attribute ``_inst`` for every class,
+ in order to endow singleton-like behavior.
+ """
+ def __new__(cls, clsname, bases, dct):
+ dct['_inst'] = {}
+ return super().__new__(cls, clsname, bases, dct)
+
+
+class Kind(object, metaclass=KindMeta):
+ """
+ Base class for kinds.
+
+    The kind of an object represents the mathematical classification
+    that the entity falls into. It is expected that functions and
+    classes recognize and filter arguments by their kind.
+
+    The kind of every object must be carefully selected so that it
+    shows the intention of the design. Expressions may have different
+    kinds according to the kinds of their arguments. For example, the
+    arguments of ``Add`` must have a common kind since addition is a
+    group operation, and the resulting ``Add()`` has the same kind.
+
+    For performance, each kind is as broad as possible and is not
+    based on set theory. For example, ``NumberKind`` includes not only
+    complex numbers but expressions containing ``S.Infinity`` or ``S.NaN``,
+    which are not strictly numbers.
+
+    A kind may take arguments as parameters. For example, ``MatrixKind()``
+    may be constructed with one argument which represents the kind of
+    its elements.
+
+    ``Kind`` behaves in a singleton-like fashion. The same signature
+    will return the same object.
+
+ """
+ def __new__(cls, *args):
+ if args in cls._inst:
+ inst = cls._inst[args]
+ else:
+ inst = super().__new__(cls)
+ cls._inst[args] = inst
+ return inst
+
+
+class UndefinedKind(Kind):
+ """
+    Default kind for all SymPy objects. If the kind is not defined for
+    the object, or if the object cannot infer its kind from its
+    arguments, this will be returned.
+
+ Examples
+ ========
+
+ >>> from sympy import Expr
+ >>> Expr().kind
+ UndefinedKind
+ """
+ def __new__(cls):
+ return super().__new__(cls)
+
+ def __repr__(self):
+ return "UndefinedKind"
+
+UndefinedKind = UndefinedKind()
+
+
+class NumberKind(Kind):
+ """
+    Kind for all numeric objects.
+
+ This kind represents every number, including complex numbers,
+ infinity and ``S.NaN``. Other objects such as quaternions do not
+ have this kind.
+
+    Most ``Expr`` classes are designed to represent numbers, so this
+    will be the most common kind in the SymPy core. For example,
+    ``Symbol()``, which represents a scalar, has this kind as long as it
+    is commutative.
+
+    Numbers form a field. Any operation between number-kind objects
+    will result in this kind as well.
+
+ Examples
+ ========
+
+ >>> from sympy import S, oo, Symbol
+ >>> S.One.kind
+ NumberKind
+ >>> (-oo).kind
+ NumberKind
+ >>> S.NaN.kind
+ NumberKind
+
+    Commutative symbols are treated as numbers.
+
+ >>> x = Symbol('x')
+ >>> x.kind
+ NumberKind
+ >>> Symbol('y', commutative=False).kind
+ UndefinedKind
+
+    Operations between numbers result in numbers.
+
+ >>> (x+1).kind
+ NumberKind
+
+ See Also
+ ========
+
+ sympy.core.expr.Expr.is_Number : check if the object is strictly
+ subclass of ``Number`` class.
+
+ sympy.core.expr.Expr.is_number : check if the object is number
+ without any free symbol.
+
+ """
+ def __new__(cls):
+ return super().__new__(cls)
+
+ def __repr__(self):
+ return "NumberKind"
+
+NumberKind = NumberKind()
+
+
+class BooleanKind(Kind):
+ """
+ Kind for boolean objects.
+
+ SymPy's ``S.true``, ``S.false``, and built-in ``True`` and ``False``
+    have this kind. The numbers ``1`` and ``0`` are not relevant.
+
+ Examples
+ ========
+
+ >>> from sympy import S, Q
+ >>> S.true.kind
+ BooleanKind
+ >>> Q.even(3).kind
+ BooleanKind
+ """
+ def __new__(cls):
+ return super().__new__(cls)
+
+ def __repr__(self):
+ return "BooleanKind"
+
+BooleanKind = BooleanKind()
diff --git a/sympy/core/mod.py b/sympy/core/mod.py
index 2ae3828a42e7..b4f3de25234b 100644
--- a/sympy/core/mod.py
+++ b/sympy/core/mod.py
@@ -1,5 +1,6 @@
from sympy.core.numbers import nan
from .function import Function
+from .kind import NumberKind
class Mod(Function):
@@ -31,6 +32,8 @@ class Mod(Function):
"""
+ kind = NumberKind
+
@classmethod
def eval(cls, p, q):
from sympy.core.add import Add
diff --git a/sympy/core/numbers.py b/sympy/core/numbers.py
index d4ef010b7ed2..79f6a7d165cb 100644
--- a/sympy/core/numbers.py
+++ b/sympy/core/numbers.py
@@ -17,6 +17,7 @@
from sympy.core.compatibility import (as_int, HAS_GMPY, SYMPY_INTS,
gmpy)
from sympy.core.cache import lru_cache
+from .kind import NumberKind
from sympy.multipledispatch import dispatch
import mpmath
import mpmath.libmp as mlib
@@ -587,6 +588,8 @@ class Number(AtomicExpr):
# Used to make max(x._prec, y._prec) return x._prec when only x is a float
_prec = -1
+ kind = NumberKind
+
def __new__(cls, *obj):
if len(obj) == 1:
obj = obj[0]
@@ -2433,6 +2436,9 @@ class AlgebraicNumber(Expr):
is_algebraic = True
is_number = True
+
+ kind = NumberKind
+
# Optional alias symbol is not free.
# Actually, alias should be a Str, but some methods
# expect that it be an instance of Expr.
@@ -3310,6 +3316,8 @@ class ComplexInfinity(AtomicExpr, metaclass=Singleton):
is_complex = False
is_extended_real = False
+ kind = NumberKind
+
__slots__ = ()
def __new__(cls):
@@ -3363,6 +3371,8 @@ class NumberSymbol(AtomicExpr):
is_NumberSymbol = True
+ kind = NumberKind
+
def __new__(cls):
return AtomicExpr.__new__(cls)
@@ -3848,6 +3858,8 @@ class ImaginaryUnit(AtomicExpr, metaclass=Singleton):
is_algebraic = True
is_transcendental = False
+ kind = NumberKind
+
__slots__ = ()
def _latex(self, printer):
diff --git a/sympy/core/symbol.py b/sympy/core/symbol.py
index 085633d39703..68a5d000f137 100644
--- a/sympy/core/symbol.py
+++ b/sympy/core/symbol.py
@@ -6,6 +6,7 @@
from .expr import Expr, AtomicExpr
from .cache import cacheit
from .function import FunctionClass
+from .kind import NumberKind, UndefinedKind
from sympy.core.logic import fuzzy_bool
from sympy.logic.boolalg import Boolean
from sympy.utilities.iterables import cartes, sift
@@ -205,6 +206,12 @@ class Symbol(AtomicExpr, Boolean):
is_Symbol = True
is_symbol = True
+ @property
+ def kind(self):
+ if self.is_commutative:
+ return NumberKind
+ return UndefinedKind
+
@property
def _diff_wrt(self):
"""Allow derivatives wrt Symbols.
diff --git a/sympy/core/trace.py b/sympy/core/trace.py
index 9eef9839893f..4a2cb9c2b5b7 100644
--- a/sympy/core/trace.py
+++ b/sympy/core/trace.py
@@ -154,6 +154,12 @@ def __new__(cls, *args):
return Expr.__new__(cls, expr, indices)
+ @property
+ def kind(self):
+ expr = self.args[0]
+ expr_kind = expr.kind
+ return expr_kind.element_kind
+
def doit(self, **kwargs):
""" Perform the trace operation.
diff --git a/sympy/logic/boolalg.py b/sympy/logic/boolalg.py
index c01628fe58cc..fbcba1f8d4ca 100644
--- a/sympy/logic/boolalg.py
+++ b/sympy/logic/boolalg.py
@@ -14,6 +14,7 @@
from sympy.core.operations import LatticeOp
from sympy.core.singleton import Singleton, S
from sympy.core.sympify import converter, _sympify, sympify
+from sympy.core.kind import BooleanKind
from sympy.utilities.iterables import sift, ibin
from sympy.utilities.misc import filldedent
@@ -65,6 +66,8 @@ class Boolean(Basic):
__slots__ = ()
+ kind = BooleanKind
+
@sympify_return([('other', 'Boolean')], NotImplemented)
def __and__(self, other):
return And(self, other)
diff --git a/sympy/matrices/__init__.py b/sympy/matrices/__init__.py
index 0b717488478f..a026b254c555 100644
--- a/sympy/matrices/__init__.py
+++ b/sympy/matrices/__init__.py
@@ -10,7 +10,7 @@
randMatrix, rot_axis1, rot_axis2, rot_axis3, symarray, wronskian,
zeros)
from .dense import MutableDenseMatrix
-from .matrices import DeferredVector, MatrixBase
+from .matrices import DeferredVector, MatrixBase, MatrixKind
Matrix = MutableMatrix = MutableDenseMatrix
@@ -42,7 +42,7 @@
'MutableDenseMatrix',
- 'DeferredVector', 'MatrixBase',
+ 'DeferredVector', 'MatrixBase', 'MatrixKind',
'Matrix', 'MutableMatrix',
diff --git a/sympy/matrices/expressions/matexpr.py b/sympy/matrices/expressions/matexpr.py
index 51db3c3395ae..e78d0607d31e 100644
--- a/sympy/matrices/expressions/matexpr.py
+++ b/sympy/matrices/expressions/matexpr.py
@@ -14,6 +14,7 @@
from sympy.functions.special.tensor_functions import KroneckerDelta
from sympy.matrices.common import NonSquareMatrixError
from sympy.simplify import simplify
+from sympy.matrices.matrices import MatrixKind
from sympy.utilities.misc import filldedent
from sympy.multipledispatch import dispatch
@@ -75,6 +76,8 @@ class MatrixExpr(Expr):
is_symbol = False
is_scalar = False
+ kind = MatrixKind()
+
def __new__(cls, *args, **kwargs):
args = map(_sympify, args)
return Basic.__new__(cls, *args, **kwargs)
diff --git a/sympy/matrices/matrices.py b/sympy/matrices/matrices.py
index 75c048ff92dd..1e6e0805c4bc 100644
--- a/sympy/matrices/matrices.py
+++ b/sympy/matrices/matrices.py
@@ -18,6 +18,7 @@
from sympy.printing import sstr
from sympy.printing.defaults import Printable
from sympy.simplify import simplify as _simplify
+from sympy.core.kind import Kind, NumberKind
from sympy.utilities.exceptions import SymPyDeprecationWarning
from sympy.utilities.iterables import flatten
from sympy.utilities.misc import filldedent
@@ -742,6 +743,46 @@ def permuteFwd(self, perm):
return self.permute_rows(perm, direction='forward')
+class MatrixKind(Kind):
+ """
+ Kind for all matrices in SymPy.
+
+    The basic classes for this kind are ``MatrixBase`` and ``MatrixExpr``,
+    but any expression representing a matrix can have this kind.
+
+ Parameters
+ ==========
+
+ element_kind : Kind
+ Kind of the element. Default is ``NumberKind``, which means that
+ the matrix contains only numbers.
+
+ Examples
+ ========
+
+ >>> from sympy import MatrixSymbol, Integral
+ >>> from sympy.abc import x
+ >>> A = MatrixSymbol('A', 2,2)
+ >>> A.kind
+ MatrixKind(NumberKind)
+ >>> Integral(A,x).kind
+ MatrixKind(NumberKind)
+
+ See Also
+ ========
+
+ sympy.tensor.ArrayKind : Kind for N-dimensional arrays.
+
+ """
+ def __new__(cls, element_kind=NumberKind):
+ obj = super().__new__(cls, element_kind)
+ obj.element_kind = element_kind
+ return obj
+
+ def __repr__(self):
+ return "MatrixKind(%s)" % self.element_kind
+
+
class MatrixBase(MatrixDeprecated,
MatrixCalculus,
MatrixEigen,
@@ -757,6 +798,8 @@ class MatrixBase(MatrixDeprecated,
zero = S.Zero
one = S.One
+ kind = MatrixKind()
+
def __array__(self, dtype=object):
from .dense import matrix2numpy
return matrix2numpy(self, dtype=dtype)
diff --git a/sympy/tensor/array/__init__.py b/sympy/tensor/array/__init__.py
index 91f5bb4731ea..2518ceb2160b 100644
--- a/sympy/tensor/array/__init__.py
+++ b/sympy/tensor/array/__init__.py
@@ -227,7 +227,7 @@
from .dense_ndim_array import MutableDenseNDimArray, ImmutableDenseNDimArray, DenseNDimArray
from .sparse_ndim_array import MutableSparseNDimArray, ImmutableSparseNDimArray, SparseNDimArray
-from .ndim_array import NDimArray
+from .ndim_array import NDimArray, ArrayKind
from .arrayop import tensorproduct, tensorcontraction, tensordiagonal, derive_by_array, permutedims
from .array_comprehension import ArrayComprehension, ArrayComprehensionMap
@@ -238,7 +238,7 @@
'MutableSparseNDimArray', 'ImmutableSparseNDimArray', 'SparseNDimArray',
- 'NDimArray',
+ 'NDimArray', 'ArrayKind',
'tensorproduct', 'tensorcontraction', 'tensordiagonal', 'derive_by_array',
diff --git a/sympy/tensor/array/ndim_array.py b/sympy/tensor/array/ndim_array.py
index 9be5c7a661bd..aed0cbc97718 100644
--- a/sympy/tensor/array/ndim_array.py
+++ b/sympy/tensor/array/ndim_array.py
@@ -3,6 +3,7 @@
from sympy.core.expr import Expr
from sympy.core.numbers import Integer
from sympy.core.sympify import sympify
+from sympy.core.kind import Kind, NumberKind
from sympy.core.compatibility import SYMPY_INTS
from sympy.printing.defaults import Printable
@@ -10,6 +11,43 @@
from collections.abc import Iterable
+class ArrayKind(Kind):
+ """
+ Kind for N-dimensional array in SymPy.
+
+    This kind represents a multidimensional array on which algebraic
+    operations are defined. The basic class for this kind is ``NDimArray``,
+    but any expression representing an array can have this kind.
+
+ Parameters
+ ==========
+
+ element_kind : Kind
+ Kind of the element. Default is ``NumberKind``, which means that
+ the array contains only numbers.
+
+ Examples
+ ========
+
+ >>> from sympy import NDimArray
+ >>> NDimArray([1,2,3]).kind
+ ArrayKind(NumberKind)
+
+ See Also
+ ========
+
+ sympy.matrices.MatrixKind : Kind for matrices.
+
+ """
+ def __new__(cls, element_kind=NumberKind):
+ obj = super().__new__(cls, element_kind)
+ obj.element_kind = element_kind
+ return obj
+
+ def __repr__(self):
+ return "ArrayKind(%s)" % self.element_kind
+
+
class NDimArray(Printable):
"""
@@ -70,6 +108,10 @@ def __new__(cls, iterable, shape=None, **kwargs):
from sympy.tensor.array import ImmutableDenseNDimArray
return ImmutableDenseNDimArray(iterable, shape, **kwargs)
+ @property
+ def kind(self):
+ return ArrayKind()
+
def _parse_index(self, index):
if isinstance(index, (SYMPY_INTS, Integer)):
raise ValueError("Only a tuple index is accepted")
| diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index 88b6b106ce71..39a1341f9fc0 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -52,6 +52,9 @@ def is_Basic(name):
cls = getattr(mod, name)
if hasattr(cls, '_sympy_deprecated_func'):
cls = cls._sympy_deprecated_func
+ if not isinstance(cls, type):
+ # check instance of singleton class with same name
+ cls = type(cls)
return issubclass(cls, Basic)
names = list(filter(is_Basic, names))
diff --git a/sympy/core/tests/test_kind.py b/sympy/core/tests/test_kind.py
new file mode 100644
index 000000000000..fd056f138ba9
--- /dev/null
+++ b/sympy/core/tests/test_kind.py
@@ -0,0 +1,39 @@
+from sympy.core.add import Add
+from sympy.core.kind import NumberKind, UndefinedKind
+from sympy.core.numbers import pi, zoo, I, AlgebraicNumber
+from sympy.core.singleton import S
+from sympy.core.symbol import Symbol
+from sympy.integrals.integrals import Integral
+from sympy.matrices import (Matrix, SparseMatrix, ImmutableMatrix,
+ ImmutableSparseMatrix, MatrixSymbol, MatrixKind)
+
+comm_x = Symbol('x')
+noncomm_x = Symbol('x', commutative=False)
+
+def test_NumberKind():
+ assert S.One.kind is NumberKind
+ assert pi.kind is NumberKind
+ assert S.NaN.kind is NumberKind
+ assert zoo.kind is NumberKind
+ assert I.kind is NumberKind
+ assert AlgebraicNumber(1).kind is NumberKind
+
+def test_Add_kind():
+ assert Add(2, 3, evaluate=False).kind is NumberKind
+ assert Add(2,comm_x).kind is NumberKind
+ assert Add(2,noncomm_x).kind is UndefinedKind
+
+def test_Symbol_kind():
+ assert comm_x.kind is NumberKind
+ assert noncomm_x.kind is UndefinedKind
+
+def test_Integral_kind():
+ A = MatrixSymbol('A', 2,2)
+ assert Integral(comm_x, comm_x).kind is NumberKind
+ assert Integral(A, comm_x).kind is MatrixKind(NumberKind)
+
+def test_Matrix_kind():
+ classes = (Matrix, SparseMatrix, ImmutableMatrix, ImmutableSparseMatrix)
+ for cls in classes:
+ m = cls.zeros(3, 2)
+ assert m.kind is MatrixKind(NumberKind)
| diff --git a/doc/src/modules/core.rst b/doc/src/modules/core.rst
index 0323aed5cfd0..fd6d3c88e96d 100644
--- a/doc/src/modules/core.rst
+++ b/doc/src/modules/core.rst
@@ -549,3 +549,22 @@ ordered
^^^^^^^
.. autofunction:: sympy.core.compatibility.ordered
+
+kind
+----
+.. module:: sympy.core.kind
+
+Kind
+^^^^
+.. autoclass:: Kind
+ :members:
+
+NumberKind
+^^^^^^^^^^
+.. autoclass:: NumberKind
+ :members:
+
+BooleanKind
+^^^^^^^^^^^
+.. autoclass:: BooleanKind
+ :members:
diff --git a/doc/src/special_topics/classification.rst b/doc/src/special_topics/classification.rst
new file mode 100644
index 000000000000..a2917075eb49
--- /dev/null
+++ b/doc/src/special_topics/classification.rst
@@ -0,0 +1,100 @@
+===============================
+Classification of SymPy objects
+===============================
+
+There are several ways in which a SymPy object can be classified.
+
+class
+=====
+
+Like any other object in Python, a SymPy expression is an instance of a class. You
+can get the class of an object with the built-in `type()` function, and check it
+with the `isinstance()` function.
+
+ >>> from sympy import Add
+ >>> from sympy.abc import x,y
+ >>> type(x + y)
+ <class 'sympy.core.add.Add'>
+ >>> isinstance(x + y, Add)
+ True
+
+Classes represent only the programmatic structure of the objects, and do not
+distinguish the mathematical differences between them. For example, the integral
+of a number and the integral of a matrix both have the class `Integral`, although
+the former is a number and the latter is a matrix.
+
+ >>> from sympy import MatrixSymbol, Integral
+ >>> A = MatrixSymbol('A', 2, 2)
+ >>> type(Integral(1, x))
+ <class 'sympy.integrals.integrals.Integral'>
+ >>> type(Integral(A, x))
+ <class 'sympy.integrals.integrals.Integral'>
+
+kind
+====
+
+Kind indicates what mathematical object the expression represents.
+You can retrieve the kind of an expression with the `.kind` property.
+
+ >>> Integral(1, x).kind
+ NumberKind
+ >>> Integral(A, x).kind
+ MatrixKind(NumberKind)
+
+This result shows that `Integral(1, x)` is a number, and `Integral(A, x)` is a matrix with number elements.
+
+Since the class cannot be relied on to catch this difference, the kind of the object is very important.
+For example, if you are building a function or class that is designed to work only for
+numbers, you should consider filtering the arguments with `NumberKind` so that the user
+does not naively pass unsupported objects such as `Integral(A, x)`.
+
+For performance, set theory is not implemented in the kind system. For example,
+
+ `NumberKind` does not distinguish real numbers from complex numbers.
+
+ >>> from sympy import pi, I
+ >>> pi.kind
+ NumberKind
+ >>> I.kind
+ NumberKind
+
+ SymPy's `Set` and kind are not compatible.
+
+ >>> from sympy import S
+ >>> from sympy.core.kind import NumberKind
+ >>> S.Reals.is_subset(S.Complexes)
+ True
+ >>> S.Reals.is_subset(NumberKind)
+ Traceback (most recent call last):
+ ...
+ ValueError: Unknown argument 'NumberKind'
+
+sets and assumptions
+====================
+
+If you want to classify an object in a strictly mathematical way, you may need
+SymPy's sets and assumptions.
+
+ >>> from sympy import ask, Q
+ >>> S.One in S.Reals
+ True
+ >>> ask(Q.even(2*x), Q.odd(x))
+ True
+
+See `assumptions` module and `sets` module for more information.
+
+func
+====
+
+`func` is the head of the object, and it is used to recurse over the expression tree.
+
+ >>> Add(x + y).func
+ <class 'sympy.core.add.Add'>
+ >>> Add(x + x).func
+ <class 'sympy.core.mul.Mul'>
+ >>> Q.even(x).func
+ Q.even
+
+As you can see, the resulting head may be a class or another SymPy object.
+Keep this in mind when you classify objects with this attribute.
+See :ref:`tutorial-manipulation` for detailed information.
diff --git a/doc/src/special_topics/index.rst b/doc/src/special_topics/index.rst
index 3b68f271de60..32e8bc713d04 100644
--- a/doc/src/special_topics/index.rst
+++ b/doc/src/special_topics/index.rst
@@ -9,3 +9,4 @@
intro.rst
finite_diff_derivatives.rst
+ classification.rst
| [
{
"components": [
{
"doc": "",
"lines": [
204,
205
],
"name": "ExprWithLimits.kind",
"signature": "def kind(self):",
"type": "function"
}
],
"file": "sympy/concrete/expr_with_limits.py"
},
{
"components": [
{
... | [
"test_NumberKind",
"test_Add_kind",
"test_Symbol_kind",
"test_Integral_kind"
] | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_sympy__assumptions__assume__Predicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers__AnyArgs",
"test_sympy__assu... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Introduce kind
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
Related to #20158
#### Brief description of what is fixed or changed
1. `core/kind` module is introduced.
2. `Basic` has `UndefinedKind` as default kind.
3. Every classes in `core/numbers` module, `int`, `float` have `NumberKind.
4. NDimArrays have parametrisable `ArrayKind`. Matrices have parametrisable `MatrixKind`.
5. Boolean expressions have `BooleanKind`.
6. Following classes support kind inference:
- `Add`
- `Symbol`
- `Integral`
- `Tr`
7. Documentation is made in `docs/src/special_topics`.
#### Other comments
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
- core
- Kind classification of objects is introduced. This feature is experimental, and can be replaced or deleted in the future.
<!-- END RELEASE NOTES -->
----------
@oscarbenjamin
I made a rudimentary implementation of your idea. Currently, `NumberType` and `MatrixType` are introduced, and `Add` and `Mul` support typeclass inference.
```python
>>> from sympy.core.core import typeclass
>>> from sympy import MatrixSymbol, Add
>>> typeclass(2)
NumberSymbol
>>> typeclass(Add(2, 3, evaluate=False))
NumberSymbol
>>> typeclass(MatrixSymbol('A',2,3)*MatrixSymbol('B',3,2))
MatrixSymbol
>>> typeclass(2*MatrixSymbol('A',2,3))
MatrixSymbol
```
This will need many trials and reviews. If you have any suggestions, please let me know and I will gladly follow up.
What is the advantage of using single dispatch here rather than a method/attribute?
I was expecting something like:
```python
def typeclass(obj):
return obj._typeclass
class Number:
_typeclass = NumberType
class MatrixExpr:
_typeclass = MatrixType
class Set:
_typeclass = SetType
class Add:
def __new__(cls, args):
if len(set(map(typeclass, args))) != 1:
raise TypeError
...
obj._typeclass = typeclass(args[0])
return obj
```
This way the `typeclass` function is mostly just retrieving a class/instance attribute which is fast. We could also have a property for accessing this defined on Basic:
```python
class Basic:
_typeclass = UnknownType
@property
def typeclass(self):
return self._typeclass
```
Usually the advantage of single dispatch is that you want to be able to overload a function for a particular class without modifying the class itself. That's useful in some situations but in the case of the typeclass it always needs to be defined so we would have to add the dispatch immediately after the class definition anyway.
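The trade-off described above can be checked with a small, self-contained sketch (plain Python, not SymPy code; `NumberType`, `Number`, `typeclass_attr`, and `typeclass_dispatch` are all illustrative names). `functools.singledispatch` caches dispatch per class after the first call, so both approaches are quick, but the attribute lookup has less machinery per call:

```python
# Illustrative sketch only -- these names are hypothetical, not part of
# SymPy's API.
from functools import singledispatch
import timeit

class NumberType:
    pass

class Number:
    _typeclass = NumberType  # attribute-based: a plain class attribute

def typeclass_attr(obj):
    return obj._typeclass

@singledispatch
def typeclass_dispatch(obj):
    raise TypeError("no typeclass registered for %r" % type(obj))

@typeclass_dispatch.register(Number)
def _(obj):
    return NumberType

n = Number()
assert typeclass_attr(n) is typeclass_dispatch(n)

# singledispatch walks the MRO once and caches the result per class, so
# both are fast; the attribute form is simply a direct lookup.
t_attr = timeit.timeit(lambda: typeclass_attr(n), number=100_000)
t_disp = timeit.timeit(lambda: typeclass_dispatch(n), number=100_000)
print("attribute: %.4fs, singledispatch: %.4fs" % (t_attr, t_disp))
```

Either way the call returns quickly; the argument for the attribute is mostly simplicity, since the "dispatch" must be defined right next to the class anyway.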
@oscarbenjamin
I changed the design as your suggestion. BTW how shall we do with `Add`, `Mul`, `Pow` etc? I still think that we can get rid of `MatAdd`, `MatMul` and unify everything to `Add` and `Mul` with careful redesigning of the classes...
> BTW how shall we do with `Add`, `Mul`, `Pow` etc? I still think that we can get rid of `MatAdd`, `MatMul` and unify everything to `Add` and `Mul` with careful redesigning of the classes...
I think that this is the tricky part. For now I would say let's not try to get rid of MatAdd etc as that would add a lot of churn and it's not clear what the benefit would be now that we have those classes. If we can use this to help eliminate the post-processors then that's good.
It's important not to get too carried away with something like this. For now we should keep the changes as small as possible but make sure that the basic structure is something that can work. In particular we need to ensure:
1. `typeclass` always returns quickly and it reliably handles the difference between e.g. number and matrix (including in nested expressions) for meaningful expressions.
2. We have a clear way to get e.g. the shape of any MatrixType expression.
3. There is clear documentation explaining what the definitions of the typeclasses are in a way that can be referenced from other parts of the docs.
4. Any changes are backwards compatible for downstream code that hasn't defined any typeclasses or that is mixing them in ways we might consider to be inconsistent but are currently allowed.
Potentially for `Add` it makes sense to require all terms to have the same typeclass but that wouldn't make sense for `Mul` and `Pow`. When I say "require" we can't make this an error because that would not be backwards compatible. Rather we need to document what should be allowed in a way that is clear to users, contributors, etc. Existing code should continue to work and it is premature even to give a deprecation warning at this stage. We would need the typeclasses to exist in a few sympy releases at least before deprecating anything.
We could say for e.g. `Mul` that there can be a mixture of `NumberType` and one other typeclass say `XType` which means that the typeclass of the `Mul` itself is `XType`. Potentially the logic here is the same as for the `add`, `mul` dispatched functions.
For `Pow` the typeclass should be the same as the base if the exponent is `NumberType`. I don't know if there are sensible examples of `Pow` with a non-`NumberType` exponent currently in sympy (or dependent libraries). In principle you can have e.g. `M**N` where `M` and `N` are both matrices and the result is a matrix. I don't know of any meaningful examples where the typeclass of a `Pow` could differ from the typeclass of the base though.
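The `Add` and `Mul` rules sketched in this comment can be written out as small standalone functions. This is a hypothetical illustration, not SymPy code: for `Mul`, `NumberType` factors may mix with at most one other typeclass, which then becomes the typeclass of the product; anything else falls back to an unknown typeclass rather than raising, for backwards compatibility.

```python
# Hypothetical sketch of the Add/Mul typeclass rules described above;
# the names NumberType, MatrixType, UnknownType are illustrative only.
NumberType, MatrixType, UnknownType = "NumberType", "MatrixType", "UnknownType"

def mul_typeclass(typeclasses):
    """NumberType factors may mix with at most one other typeclass,
    which then becomes the typeclass of the whole product."""
    others = {t for t in typeclasses if t != NumberType}
    if not others:
        return NumberType      # a product of scalars is a scalar
    if len(others) == 1:
        return others.pop()    # e.g. 2*A for a matrix A is a matrix
    return UnknownType         # inconsistent mixture: don't raise

def add_typeclass(typeclasses):
    """Addition is a group operation: all terms must share one typeclass."""
    kinds = set(typeclasses)
    return kinds.pop() if len(kinds) == 1 else UnknownType

print(mul_typeclass([NumberType, MatrixType]))   # MatrixType
print(add_typeclass([MatrixType, MatrixType]))   # MatrixType
print(add_typeclass([NumberType, MatrixType]))   # UnknownType
```

Returning an "unknown" sentinel instead of raising matches the backwards-compatibility requirement above: existing expressions that mix kinds keep working, they just don't get a useful typeclass.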
Agreed. I am aiming to restrict the scope of this PR to a rudimentary implementation.
> If we can use this to help eliminate the post-processors then that's good.
Exactly. If we no longer need `MatAdd`, `VectorAdd`, etc, then we won't need postprocessors and dispatched `add`, `mul` functions as well.
> Potentially for Add it makes sense to require all terms to have the same typeclass
Sounds true, since addition is a group operator. I changed the design that way in the latest commit.
> Potentially the logic here is the same as for the add, mul dispatched functions.
Yes. I implemented `TypeclassDispatcher` in `core/operator` module for this purpose.
> I don't know of any meaningful examples where the typeclass of a Pow could differ from the typeclass of the base though.
Mathematically, we have [matrix exponential](https://en.wikipedia.org/wiki/Matrix_exponential) so probably we should make `Pow(2, M)` (where `M` is matrix) have `MatrixType`.
> Mathematically, we have [matrix exponential](https://en.wikipedia.org/wiki/Matrix_exponential) so probably we should make `Pow(2, M)` (where `M` is matrix) have `MatrixType`.
Oh, that's true. We do already have the matrix exponential as well:
```python
In [12]: exp(Matrix([[0, -1], [1, 0]]))
Out[12]:
⎡cos(1)  -sin(1)⎤
⎢               ⎥
⎣sin(1)  cos(1) ⎦
```
@oscarbenjamin
I equipped every class in `core` with a typeclass. I think we can now focus on refining the design instead of adding anything more in this PR.
Plus, I found inconsistent behavior related to matrix and array differentiation.
```python
>>> v = Matrix([x,y])
>>> type(v.diff(x))
sympy.matrices.dense.MutableDenseMatrix
>>> type(x.diff(v))
sympy.matrices.immutable.ImmutableDenseMatrix
>>> type(v.diff(v))
sympy.tensor.array.dense_ndim_array.MutableDenseNDimArray
```
Since the derivative of a matrix by a matrix returns `NDimArray`, I had to give `ArrayType` to unevaluated derivatives involving any matrix.
In my opinion, this is a class design failure rather than a typeclass problem. A matrix is a 2-dimensional array, which is a special case of an N-dimensional array. It is important enough to deserve its own `matrices` module, but conceptually we should make `Matrix` a subclass of `NDimArray` and guarantee full compatibility between the two. And this means that `MatrixType` will have to be replaced by `ArrayType` in the future.
Fortunately, we do not need to worry about this now because unevaluated matrix derivatives haven't been properly supported anyway.
```python
>>> expr = x.diff(v, evaluate=False)
>>> expr.func # Not Matrix instance
sympy.core.function.Derivative
>>> expr + v # Cannot be added to matrix with same shape
Traceback (most recent call last)
...
TypeError: ...
```
I agree that "kind" is the best name here. "Type" would actually be the best, but there's too much potential for confusion with the Python meaning of "type" (same with "class", which has the further disadvantage that it can't be used as a variable name).
> I agree that "kind" is the best name here.
Excellent. It is well known that naming things is hard. Hopefully we've got this name nailed!
> Since the derivation of matrix by matrix returns `NDimArray`, I had to give `ArrayType` to unevaluated derivatives involving any matrix.
>
> In my opinion, this is the class design failure rather than typeclass's problem.
I tend to agree. I have found that I don't quite understand how to formalise the way that matrix/array differentiation are mixed in a way that seems rigorous. I was recently discussing this with @Upabjojr in https://github.com/sympy/sympy/pull/20072#issuecomment-691951014
@oscarbenjamin
From Upapjojr's comment that you linked:
> * classes inheriting MatrixCommon: explicit-valued matrices (all elements of the matrix are expressed),
> * classes inheriting MatrixExpr but not MatrixCommon: implicit-valued matrices (i.e. the matrix is just a symbol, no elements are shown),
> * MatrixElement class: I'd call it implicit-valued (as elements are not shown), explicit-indexed (as indices are shown, unlike MatrixExpr),
> * classes inheriting NDimArray: like explicit matrices, but with N-axes (matrices always have 2 axes),
> * IndexedBase and Indexed objects: generic indexed objects, implicit-valued explicit-index,
> * classes in sympy.codegen.array_utils: implicit-valued implicit-indexed arrays, still experimental.
> * sympy.tensor.tensor: abstract index notation for tensorial expressions. This is reserved for applications in theoretical physics.
Current design seems very complicated and unintuitive. I think the class design should be like:
1. We should have classes for explicit-valued and implicit-valued n-dimensional arrays, all subclassing `NDimArray`. For example, something like `ArraySymbol` should exist.
2. Explicit-valued matrix should subclass explicit-valued n-dimensional array. Similar for implicit-valued matrix.
3. `MatrixElement` should subclass `ArrayElement`, and perhaps this should subclass `Indexed`.
> 5\. `Tuple`, `tuple` and `list` have `SequenceKind`.
I hadn't considered the idea that there could be a kind for non-Basic types. I'm not sure if that's a good idea.
For tuple I guess it makes sense because tuple sympifies to Tuple. We don't have a Basic equivalent of list though. I had imagined that `kind` was a function that applied only to Basic and would be used only after sympification. There are also some other non-Basic types like Matrix that would probably need a kind. Matrix sympifies to ImmutableMatrix though, which perhaps makes sense like tuple/Tuple.
I think that the broad idea here looks good.
I wonder if we also need an `UnknownExprKind` to distinguish from arbitrary `UnknownKind`.
There needs to be clear documentation that defines precisely what objects each kind represents. A good starting point for that would be in the docstring of the `Kind` classes. That way you can do `help(kind(obj))`. The docs should define exhaustively what objects have a given kind and should also explain what operations (e.g. `shape`, `+` etc) are possible with them.
What does anyone else think about this?
@asmeurer @sylee957 @eric-wieser
Is there any better way to centralize the tests about `Kind` than spreading them all over?
I think that logic similar to `test_args` can be used to check whether all the sympy classes handle kinds correctly, or at least have tests for it.
I don't think it's easy to verify that `kind` works correctly in this implementation, because it still gets loosely inherited through class inheritance. If someone makes a new class that inherits `Matrix` or `Add`, turns it into something nonsensical, and doesn't define `kind` on its own, the `kind` function will still fall back to the superclass.
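A `test_args`-style audit could catch classes that silently inherit `kind`. Below is a minimal sketch with toy classes (`Basic`, `Matrix`, `BadMatrix` and `classes_missing_kind` are hypothetical stand-ins, not SymPy code): it walks the subclass tree and reports any class that does not define `kind` in its own `__dict__`.

```python
# Toy stand-ins for the real hierarchy; all names here are hypothetical.
class Basic:
    kind = "UndefinedKind"

class Matrix(Basic):
    kind = "MatrixKind"

class BadMatrix(Matrix):
    # Defines no `kind` of its own, so it silently inherits MatrixKind
    # even if that no longer makes sense for what it represents.
    pass

def classes_missing_kind(root):
    """Return names of subclasses that do not define `kind` themselves."""
    missing = []
    stack = [root]
    while stack:
        cls = stack.pop()
        if "kind" not in cls.__dict__:
            missing.append(cls.__name__)
        stack.extend(cls.__subclasses__())
    return missing

print(classes_missing_kind(Basic))  # ['BadMatrix']
```

A real audit would start from SymPy's `Basic` and would also need to decide when an inherited `kind` is acceptable rather than flagging every inheritor.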
Thanks for the name change to "kind", that seems like a good name and is sort of consistent with `np.dtype.kind`.
---
Having said that, it's not very clear to me whether this is trying to solve the right problem.
To take an example, this makes it so that you can check if `ix = Integral(x*x, x)` is a matrix when `x` is.
The problem is, `isinstance(ix, MatrixBase)` is still false, so I can't do things like `ix.shape` or `ix.T`.
It's not clear to me what value asking for the kind of the object has if you still can't actually perform the usual operations for that kind.
My feeling is that this would be better solved by a `MatrixInterval` type, or perhaps a runtime-generated `Interval[MatrixExpr]` type, which `Interval` returns if its arguments are matrices (ideally via the type of dispatch you've explored elsewhere). I'd be much more comfortable building the behavior we need into the Python class hierarchy rather than our own parallel "kind" system.
> It's not clear to me what value asking for the kind of the object has if you still can't actually perform the usual operations for that kind.
Very often in sympy the operations just consist of building an expression tree and nothing actually needs to be done e.g.:
```python
In [2]: M = Matrix([[1, 2], [3, 4]])
In [3]: sin(M)
Out[3]:
   ⎛⎡1  2⎤⎞
sin⎜⎢    ⎥⎟
   ⎝⎣3  4⎦⎠
```
If we know that the object mathematically represents a matrix then we know where it can be used in the expression tree. We don't necessarily need to do anything else. Knowing that it is a matrix makes it a valid argument for symbolic functions that accept matrices e.g.:
```python
In [5]: Integral(M, t)
Out[5]:
⌠
⎮  ⎡1  2⎤
⎮  ⎢    ⎥ dt
⎮  ⎣3  4⎦
⌡
In [6]: Trace(Integral(M, t))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-316cfa42acf5> in <module>
----> 1 Trace(Integral(M, t))
~/current/sympy/sympy/sympy/matrices/expressions/trace.py in __new__(cls, mat)
31
32 if not mat.is_Matrix:
---> 33 raise TypeError("input to Trace, %s, is not a matrix" % str(mat))
34
35 if not mat.is_square:
TypeError: input to Trace, Integral(Matrix([
[1, 2],
[3, 4]]), t), is not a matrix
```
As I explained the issue https://github.com/sympy/sympy/issues/20158#issuecomment-735230658 the most common uses of kind would be checking for invalid operations.
Of course we will also need to be able to get the shape of expressions that are of matrix kind and we can use a dispatch system to make a `shape` function that can do that. Other operations like `obj.T` would be `Transpose(obj)` here because we are working symbolically.
> My feeling is that this would be better solved by a `MatrixInterval` type
I presume you mean `MatrixIntegral`. The problem is that we have a Cartesian product of possibilities of symbolic functions that can contain different kinds of objects and also the different kinds of objects themselves e.g.:
```
{Number
Matrix
Vector
Quaternion
Tensor
...}
x
{
Mul
Add
Pow
Integral
Sum
Piecewise
Equality
...}
```
The current approach is to add classes like `MatAdd`, `MatPow` etc but we can't reasonably scale that up to everything because the number of classes needed is *quadratic*. We could have runtime generated classes but then that wouldn't actually solve the problem of how to get the shape of a matrix so it's not really any better.
I think that this PR is really codifying something that already happens in sympy: expressions of any given Python-type represent different kinds of objects depending on their args. The difference is that here we can actually check what that kind of object is. The other problem we have is about documentation and definitions. It's too late to give definitions for things like `Expr` but it's not too late to give a precise definition of `NumberKind`.
Are we expecting that all instances of certain classes are of the same "kind", like all `Number`s of `NumberKind`? If that is so, then could we make `Number` a subclass of an abstract `NumberKind` superclass. It could be tested by `isinstance(obj, NumberKind)`.
> Are we expecting that all instances of certain classes are of the same "kind", like all `Number`s of `NumberKind`?
Yes, although there would also be classes with instances that are not all of the same kind.
> If that is so, then could we make `Number` a subclass of an abstract `NumberKind` superclass. It could be tested by `isinstance(obj, NumberKind)`.
That test would fail for Add, Mul, etc though because they couldn't inherit `NumberKind` (unless we restrict their args to `NumberKind`).
I guess an abstract superclass might help in some cases with defining or implementing the features expected for an object of a given kind though.
How do matrices with quaternion elements fit into this design?
> How do matrices with quaternion elements fit into this design?
We'd need to have parametrised kinds like `MatrixKind[NumberKind]`.
Are matrices with non-numeric elements used in SymPy?
> Are matrices with non-numeric elements used in SymPy?
There is PolyMatrix which has Poly as elements.
Probably in quantum there are matrices of quantum operators etc. Possibly also in vector or somewhere else. Somewhere there are matrices with tuples as elements (although that should really be disallowed).
> There is PolyMatrix which has Poly as elements.
To be clear I think that Poly should be PolyKind and should not be considered the same as NumberKind.
Currently all Poly have NumberKind as coefficients and generators but in principle it is possible to have e.g. polynomials with matrix coefficients or with matrix generators. If we wanted to support those we could make PolyKind parametrised like `PolyKind[NumberKind, NumberKind]` meaning a polynomial with number coefficients and generators. I think that's just over-engineering things right now though and maybe a single `PolyKind` is fine.
We also have polynomials over finite fields and I don't think that elements of a finite field should be considered as NumberKind.
Also for `MatrixKind` we could make it so that a plain `MatrixKind` implicitly means `MatrixKind[NumberKind]`. That way we can use `MatrixKind` right now and extend it later for matrices with elements of other kinds. The important thing right now is just to be clear that a matrix whose elements are not numbers does not have the same kind as a matrix whose elements are numbers. Also we need to be able to determine the kind of something like `Trace(M)` which should be the kind of the elements of the matrix `M`.
This has a conflict now
How should we implement a parametrised `MatrixKind`? Will it be an instance of the `MatrixKind` class, or do we need some sort of `ParametrisedKind` class?
Perhaps it should be an instance of the `MatrixKind` class so that the kinds can be dispatched. I will make a few more commits before merge...
Maybe it isn't necessary to have parametrised kinds and we can just have a way to ask a matrix what the kind of its elements is. A method for doing that would need to be able to recurse through `Integral` etc in the same way as `kind(obj)` though. I'm not sure what's simpler or better.
I would rather choose parametrised kind instead of having another property like `Integral.element_kind`. Plus, `MatrixKind(NumberKind)` is good since it resembles the mathematical way of representing a matrix, e.g. `S.Reals ** (n,m)`.
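For illustration, a parametrised `MatrixKind(NumberKind)` could be implemented as a flyweight so that equal parametrisations are the same instance and can be dispatched on. This is only a sketch of the idea, not SymPy's actual implementation; the class and attribute names are assumptions:

```python
class Kind:
    """Base class for kinds in this sketch."""

class _NumberKind(Kind):
    def __repr__(self):
        return "NumberKind"

NumberKind = _NumberKind()

class MatrixKind(Kind):
    # Flyweight cache: the same element kind always yields the same
    # MatrixKind instance, so `is` comparison and dispatch both work.
    _cache = {}

    def __new__(cls, element_kind=NumberKind):
        if element_kind not in cls._cache:
            obj = super().__new__(cls)
            obj.element_kind = element_kind
            cls._cache[element_kind] = obj
        return cls._cache[element_kind]

    def __repr__(self):
        return "MatrixKind(%r)" % (self.element_kind,)

assert MatrixKind() is MatrixKind(NumberKind)
```

With something like this, a plain `MatrixKind()` can implicitly mean `MatrixKind(NumberKind)` as discussed above, and the kind of something like `Trace(M)` could be read off as `kind(M).element_kind`.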
@oscarbenjamin
I redesigned the `Kind` class so that it can be parametrised. While doing this, I found out that kind for `Add`, `Mul` and `Pow` needs some more complex dispatching. I have an idea for this, but for now I just removed it to make this PR simple. It will be implemented in succeeding PR.
OK, this is ready to be merged now.
Can there just be singletons for e.g. `NumberKind` and `UndefinedKind`?
There needs to be some specific documentation for `kind`. At the moment it is just in the gotchas section. Each kind should have a clear explanation of what things are and are not included with examples of different sympy classes and expressions.
Oh, okay.
I enhanced the docstrings for kinds
We need a lot more documentation than this. This is a new concept and is being introduced to unmuddy the waters of what sympy objects actually represent but that won't work if it isn't well documented.
I think there should be a specific page in the rst docs that explains the idea of `kind` and also lists the example kinds.
Each kind should have a definition rather than just examples. For example `NumberKind` can be defined as the union of:
```
complex numbers (including real, integer, rational etc)
oo
-oo
zoo
a*oo + b*oo + ... (where a and b are complex numbers)
nan
```
This should also clarify that `Symbol('x')` is `NumberKind` but that `Symbol('x', commutative=False)` is not, because `NumberKind` is always commutative. We should give other examples of things that are not NumberKind but might confuse users, like quaternions (some might expect these to count as numbers). We need to contrast `NumberKind` with the sympy class `Number` and also with `Expr` so it is clear what the differences are. This should also reference what `is_Number` and `is_number` mean, because those each mean something different.
This is an area where there is wide potential for confusion. Part of the benefit of introducing kinds as that we can then attach documented definitions. This is especially useful for `NumberKind` which doesn't have any definition elsewhere and is the most common kind. We need the documentation for the kinds themselves to be something that we can reference from other places in the docs so that a function can say that its argument should be NumberKind with a clickable link that leads to a discussion of kinds and a definition of NumberKind.
We need to have examples of what valid operations are for objects of different kinds and so on. It should be worth pointing to the core assumptions as well because the predicates defined there are only really valid for `NumberKind`.
In which docs section do you think the `kind.rst` file should be located? Module reference, or `SymPy Special Topics`?
> Current design seems very complicated and unintuitive. I think the class design should be like:
>
> 1. We should have classes for explicit-valued and implicit-valued n-dimensional arrays, all subclassing `NDimArray`. For example, something like `ArraySymbol` should exist.
> 2. Explicit-valued matrix should subclass explicit-valued n-dimensional array. Similar for implicit-valued matrix.
> 3. `MatrixElement` should subclass `ArrayElement`, and perhaps this should subclass `Indexed`.
Please do not change the class structure. The class structure may be complicated and unintuitive, but that's OK: it's internal to SymPy and SymPy users should not deal with it.
Point 2 is very dangerous to apply... the modules are different and you would risk breaking a lot of code. The ways operator overloading and methods work are different. The same reasoning applies to point 3.
I think both. We can have a page in special topics that introduces the idea of kinds and discusses them at length. Then we can have an autodoc section in modules/core.rst that lists the contents of the module itself and the docstrings (this should link to the more extended discussion in special topics). The docstrings for the kinds themselves should give concise definitions that are clear and complete along with a few examples and counter examples. It should be made clear in each docstring though that there is more information elsewhere (and the html docs should have clickable links to that information).
I also don't think that MatrixKind and ArrayKind should be mixed. If there is to be a unification of those then when that happens the kinds can be changed but for now the kind system should reflect the existing state of the codebase: Matrix and Array are very different things and are not really interchangeable anywhere.
OK, that's good. BTW I found that the module reference page is extremely long, untrimmed and complicated. I think I shall make a short PR to break things down before writing a detailed doc for kind here...
Sorry to jump into this _kind_-related discussion too late.
My first impression from looking at the code... doesn't it remind you of [generics](https://docs.python.org/3/library/typing.html?highlight=generic#typing.Generic)? I mean... since Python 3.5 (with many fixes in Python 3.6), there has been the `typing` module in the standard Python library.
Generics work similarly to C++ templates. What you are trying to do with _kind_ really sounds like generics to me.
What about redefining `Basic` like this:
```python
from typing import TypeVar, Generic
TKind = TypeVar("TKind", covariant=True)
class Basic(Generic[TKind]):
...
```
At this point you can really specify the kind in **valid Python code**:
```python
m = Matrix([[1, 2], [3, 4]])
matrix_integral1 = Integral(m, x)
matrix_integral2 = Integral[MatrixKind](m, x)
```
where `matrix_integral1` and `matrix_integral2` are equivalent.
Sorry to jump in like this, but the `typing` module in Python deserves to be analyzed... some nice features are being added to the typesystem of Python.
I think that generics are still tied to the Python type system, so unless I misunderstand, it isn't possible for different instances of the same class to correspond to different types in different contexts.
> At this point you can really specify the kind in **valid Python code**:
>
> ```python
> m = Matrix([[1, 2], [3, 4]])
> matrix_integral1 = Integral(m, x)
> matrix_integral2 = Integral[MatrixKind](m, x)
> ```
And then if I do:
```
matrix_integral3 = Integral(matrix_integral1, x)
```
then the "type" of `matrix_integral3` is `Integral[Integral[MatrixKind]]`? In this context we want it to have the same kind as `matrix_integral1` because it still mathematically represents a matrix.
The point here is that we want the kind of `matrix_integral1` to be `MatrixKind` or possibly `MatrixKind[NumberKind]`. The idea of the kind system is that the fact that it is an instance of the Python class `Integral` is not really relevant if what we want to know is whether it is a matrix or a number so that we can determine if something like `Trace(M)` or `M1 + M2` actually makes sense:
```python
In [1]: M = MatrixSymbol('M', 2, 2)
In [2]: Integral(M, x) + M
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
...
~/current/sympy/sympy/sympy/matrices/expressions/matadd.py in validate(*args)
91 def validate(*args):
92 if not all(arg.is_Matrix for arg in args):
---> 93 raise TypeError("Mix of Matrix and Scalar symbols")
94
95 A = args[0]
TypeError: Mix of Matrix and Scalar symbols
In [4]: Trace(Integral(M, x))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-8a9205ab72e5> in <module>
----> 1 Trace(Integral(M, x))
~/current/sympy/sympy/sympy/matrices/expressions/trace.py in __new__(cls, mat)
31
32 if not mat.is_Matrix:
---> 33 raise TypeError("input to Trace, %s, is not a matrix" % str(mat))
34
35 if not mat.is_square:
TypeError: input to Trace, Integral(M, x), is not a matrix
```
It can't do anything at runtime, but it could be used for mypy typing with a signature like:
```python
T = TypeVar("T")
class Integral(Expr[T]):
    def __new__(cls, expr: Expr[T], var: Expr[T]) -> Integral[T]: ...
```
You'd need to tack on some extra tricks to use this at runtime, but it would be cool if we could use the same syntax for both.
> ```
> matrix_integral3 = Integral(matrix_integral1, x)
> ```
>
>
> then the "type" of `matrix_integral3` is `Integral[Integral[MatrixKind]]`? In this context we want it to have the same kind as `matrix_integral1` because it still mathematically represents a matrix.
Generics don't have to follow the expression tree structure. You can define them as you want.
Think about this (imaginary) C++ implementation of `Integral`:
```c++
template <typename Kind>
class Integral {
public:
Integral(Kind expr, Symbol x) {
// ...
}
Integral(Integral<Kind> nested, Symbol x) {
// ...
}
...
}
```
Calling the constructor `matrix_integral3 = Integral(matrix_integral1, x)` would resolve to the `typename Kind` of `matrix_integral1`... returning an instance of `Integral<Kind>` identical to `matrix_integral1`.
How would you use generics to determine or specify the type of a Mul like this:
```
x, y = symbols('x, y')
M = MatrixSymb('M', 2, 2)
m = x * y * M
```
> How would you use generics to determine or specify the type of a Mul like this:
>
> ```
> x, y = symbols('x, y')
> M = MatrixSymb('M', 2, 2)
> m = x * y * M
> ```
You mean by abolishing `MatMul` and replacing it with something like `Mul[MatrixKind]`?
You don't even need to abolish `MatMul`, you just make it subclass `Mul[MatrixKind]` instead of the `Mul` it subclasses today (or perhaps, `Mul[MatrixExpr[T]]` if we want to do away with kinds entirely)
> You mean by abolishing `MatMul` and replacing it with something like `Mul[MatrixKind]`?
My point is how do I have a variadic type like `Mul`, `Add` etc and allow different args to have different generic types?
In the case of `Add` it's not unreasonable to require all args to have the same "type" but for `Mul` I need to allow a mixture.
Does the type `T` in any concrete instantiation have to correspond to an actual type that you can check with `isinstance` or can it just be an arbitrary identifier?
A quick demo of what `typing.Generic` can already do:
```pycon
In [24]: from typing import Generic, TypeVar
In [25]: T = TypeVar('T')
In [26]: class Foo(Generic[T]):
...: pass
...:
In [28]: class NumberKind: pass # this does have to be a `type`, but it doesn't have to mean anything
In [29]: FN = Foo[NumberKind]
In [31]: FN.__args__
Out[31]: (__main__.NumberKind,)
In [32]: fn = FN()
In [34]: type(fn)
Out[34]: __main__.Foo
In [36]: fn.__orig_class__ is Foo[NumberKind]
Out[36]: True
```
> My point is how do I have a variadic type like `Mul`, `Add` etc and allow different args to have different generic types?
>
> In the case of `Add` it's not unreasonable to require all args to have the same "type" but for `Mul` I need to allow a mixture.
Well, for what I was thinking, `Mul[MatrixKind]` does not mean that the arguments of `Mul` are matrices. It simply means that `Mul[MatrixKind]` behaves as a matrix. Its args can still be of type `Tuple[Expr, ...]`.
If you want to keep the generic parameter to correspond to the types of `.args`, then maybe you can use `Mul[Union[Expr, MatrixExpr]]`.
Otherwise you can use two generic parameters: one for the kind and one for the type of the `args`: `Mul[MatrixKind, Expr]`.
I've just been playing with it but how do you infer the kind at runtime:
```python
from typing import TypeVar, Generic, get_args
T = TypeVar('T')
class MatrixKind:
pass
class NumberKind:
pass
class Basic(Generic[T]):
def __init__(self, *args):
self.args = args
def __repr__(self):
args_str = ', '.join(map(str, self.args))
return '%s(%s)' % (type(self).__name__, args_str)
class Expr(Basic[T]):
pass
class Matrix(Expr[MatrixKind]):
pass
class Integral(Expr[T]):
def __init__(self, integrand: Expr[T], sym: Expr[NumberKind]):
super().__init__(integrand, sym)
class Add(Expr[T]):
def __init__(self, *args: Expr[T]):
super().__init__(args)
def kind(obj):
(k,) = get_args(type(obj))
return k
M = Matrix('M', 2, 2)
N = Add(M, M)
print(kind(M)) # doesn't work
```
@oscarbenjamin, my code sample above shows how to do that
It doesn't seem to work with a subclass though (in Python 3.8):
```python
In [7]: Expr[MatrixKind]().__orig_class__
Out[7]: __main__.Expr[__main__.MatrixKind]
In [8]: class Matrix(Expr[MatrixKind]): pass
In [9]: Matrix().__orig_class__
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-9-21b2c4f3664e> in <module>
----> 1 Matrix().__orig_class__
AttributeError: 'Matrix' object has no attribute '__orig_class__'
```
I had an alternative idea:
if we had redesigned `MatMul` to always take matrix arguments, and mixed in a binary `MatScalarMul` wherever scalar-matrix multiplication is needed, it could be more compatible with typing modules or multipledispatch, which don't understand complicated types like symmetric and heterogeneous n-ary arguments.
It may be theoretically possible to convert all the n-ary operations between heterogeneous arguments into a combination of homogeneous n-ary operations and heterogeneous finitary operations, and if we delimit all the heterogeneous arguments to be finitary, it is always compatible with any typing framework.
But I'm not sure why sympy used flattened forms of `MatMul` instead of introducing `MatMul` and `MatScalarMul` in the first place, or whether changing this would be disadvantageous.
> It doesn't seem to work with a subclass though (in Python 3.8):
>
> ```python
> In [7]: Expr[MatrixKind]().__orig_class__
> Out[7]: __main__.Expr[__main__.MatrixKind]
>
> In [8]: class Matrix(Expr[MatrixKind]): pass
>
> In [9]: Matrix().__orig_class__
> ---------------------------------------------------------------------------
> AttributeError Traceback (most recent call last)
> <ipython-input-9-21b2c4f3664e> in <module>
> ----> 1 Matrix().__orig_class__
>
> AttributeError: 'Matrix' object has no attribute '__orig_class__'
> ```
Use `__orig_bases__` I think?
> Use `__orig_bases__` I think?
Okay, that works for M:
```python
def kind(obj):
(b,) = obj.__orig_bases__
(k,) = get_args(b)
return k
M = Matrix('M', 2, 2)
N = Add(M, M)
print(kind(M)) # MatrixKind
print(kind(N)) # ~T
```
I didn't want the `kind` of `N` to be `~T` though. I think the typechecker understands that it is `Add[MatrixKind]`, but at runtime the type is just `Add`. It would be possible at runtime to check the kinds of the args to `Add` and then generate an instance like `Add[kind(args)](*args)`. That kind of thing would get quite fiddly if we needed it in every `__new__` though.
I think it would be possible to add type hints using generic types to indicate the kinds of the objects. In combination with the Kind system here that would make type hints more meaningful in sympy rather than just the generic `obj: Expr` that we would otherwise have to hint most parameters and returns with.
We would still need the runtime Kind inference implemented in this PR because it isn't possible to access the generic types at runtime even if they are theoretically resolvable by a type checker.
The implication of doing this would be that each `Kind` object would need to be a class rather than an instance. Also in keeping with typing syntax it should probably be square brackets for parametrisation like `MatrixKind[NumberKind]`.
Yeah, you need a combination of both approaches (class and bases) if you want it to work for both generic and unparameterized types.
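For what it's worth, the class-and-bases combination could look like the following sketch (toy classes, not SymPy code): try the instance's `__orig_class__` first, then fall back to the generic bases recorded on the class.

```python
from typing import Generic, TypeVar, get_args

T = TypeVar("T")

class MatrixKind:
    pass

class Expr(Generic[T]):
    pass

class Matrix(Expr[MatrixKind]):  # parametrised via the class statement
    pass

def kind(obj):
    # Runtime-parametrised instances, e.g. Expr[MatrixKind](), carry
    # the alias on __orig_class__; subclasses of a parametrised base
    # record it on the class's __orig_bases__ instead.
    alias = getattr(obj, "__orig_class__", None)
    if alias is not None:
        return get_args(alias)[0]
    for base in getattr(type(obj), "__orig_bases__", ()):
        args = get_args(base)
        if args:
            return args[0]
    return None

assert kind(Expr[MatrixKind]()) is MatrixKind  # from __orig_class__
assert kind(Matrix()) is MatrixKind            # from __orig_bases__
```

Note that for an unparametrised generic like a plain `Add(M, M)` this still returns the bare `TypeVar`, so runtime kind inference over args would be needed anyway.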
> The implication of doing this would be that each `Kind` object would need to be a class rather than an instance. Also in keeping with typing syntax it should probably be square brackets for parametrisation like `MatrixKind[NumberKind]`.
The other implication would be that every `Kind` should have a clear Python class that all its instances belong to e.g. `Expr[NumberKind]` so that how to type-hint any given kind is unambiguous. That would mean that e.g. we shouldn't have any non-Expr objects of kind `NumberKind`.
By the way for anyone interested in using type-hints in sympy I should say that the main obstacle is sympification. Any function that accepts e.g. `Expr` but actually uses `sympify` will in reality accept a huge range of different types: int, float, Rational, complex, mpz, Decimal, str, and so on. I tried to type-hint sympify but actually the main version of the function recursively deconstructs lists:
```
In [90]: sympify([[1, 2], 3])
Out[90]: [[1, 2], 3]
```
There's no way to express that at all with current typing primitives: `Union[T, List[Union[T, List[Union[T, ...`
@oscarbenjamin
I enhanced docstrings some more.
> We need to have examples of what valid operations are for objects of different kinds and so on.
Rather than enumerating every available operation in the docstring of every kind, it will be better to specify the kinds of the parameters in the docstrings of operator functions and classes. That should be done in succeeding PRs, where we redefine the available kinds for each operation. (This PR merely introduces the kind system, and does not redesign `Add`, `Mul`, `Integral`, etc.)
Plus, I think information about kind itself can be entirely included in docstrings (and thus, in `modules` docs).
The best way to write dedicated docs for `kind` is to make guide documents for (A) defining a new operator class which takes pre-defined kinds of objects as arguments and (B) defining completely new objects with a new kind. This has to be done in another PR as well, since we don't have any working example of a kind-based operator yet.
> We would still need the runtime Kind inference implemented in this PR because it isn't possible to access the generic types at runtime even if they are theoretically resolvable by a type checker.
Maybe generics are better dealt in a future PR. This PR is already quite long.
> The implication of doing this would be that each `Kind` object would need to be a class rather than an instance. Also in keeping with typing syntax it should probably be square brackets for parametrisation like `MatrixKind[NumberKind]`.
Not necessarily. Generics can also be objects; the syntax `MatrixKind["hello world!"]` is still valid. Of course, it depends on how you are going to design the class structure. Usually types are used in generics because you want type-inference to work.
What is the purpose of kind being both a function and an attribute? The only benefit I can see from reading the code is that `kind()` the function can work on non-SymPy types. But that doesn't seem like a very strong benefit. Everything inside an expression tree will be Basic, and functions should sympify their arguments anyway.
(Just now catching up on the discussions here). I don't have much experience with typing or languages that have them, so I don't have any thoughts on generics.
> Currently all Poly have NumberKind as coefficients and generators but in principle it is possible to have e.g. polynomials with matrix coefficients or with matrix generators.
In the case of Poly, we really do want the coefficients to be defined precisely mathematically (perhaps even more so than they currently are). The coefficients of a polynomial should come from a ring, and you should know the properties of that ring (its identity, zero, whether it is a field, etc.). I suppose if you had a subset of matrices that defined a commutative ring you could build a coefficient domain out of them (currently Poly doesn't support noncommutative expressions). But this would need to be defined as a ring object that provided all the necessary mathematical properties.
It's not clear to me how rigorous we want to be with Kind. We apparently don't want Kinds to be sets, since those can be built up symbolically and we want things to be explicit. Do we want them to be rigorous algebraic structures? This is somewhat problematic too, since something like Expr is only vaguely a Ring, since it includes things like infinities and noncommutatives in the same bag of possible objects. Formalizing this mathematically is likely possible, but not simple, and it's not clear what the benefits would be.
> Formalizing this mathematically is likely possible, but not simple, and it's not clear what the benefits would be.
I don't think that the properties of kinds need to be formalised *entirely* but I do think that we should give precise definitions. For example `NumberKind` is a strange sort of union of the extended reals and the extended complex numbers as well as things like `1 + I*oo` and `nan`. There isn't really a comprehensive *mathematical* concept that this corresponds to, but it is what much of sympy currently works with. It can be defined precisely in the sense of what it includes and what it does not. It is also clearly distinct from a set, boolean, matrix etc. We need explicit definitions so we can clearly exclude the more ambiguous boundary cases like quaternions, where someone might think that a quaternion counts as a "number" but including it would break a lot of implicit assumptions about the properties of a number.
The emphasis should be on how objects of a given kind can be used in sympy rather than what makes a completely defined mathematical system. Ideally we should aim for those two things to coincide where possible but we have a codebase that works in a certain way and the kind system should reflect that. The kind system gives us a language for describing what *already* happens in a way that is on the one hand documented and on the other computable.
> What is the purpose of kind being both a function and an attribute? The only benefit I can see from reading the code is that the `kind()` function can work on non-SymPy types. But that doesn't seem like a very strong benefit. Everything inside of an expression tree will be Basic, and functions should sympify their arguments anyway.
I won't mind if we keep only the attribute. @oscarbenjamin , do you agree?
We can remove the kind function. I don't feel that strongly about it but in general I prefer functions to methods and attributes and Basic definitely has way too many methods and attributes. This one is actually worth having though unlike a lot of the others.
We do need more docs than this although potentially that doesn't have to be part of this PR. The explanation of kind in general is very short and there is not much in the way of definitions or examples for each of the particular kinds.
Otherwise I think that this looks good. Does anyone else want to take a look?
Are there plans to add a dispatcher of methods over the kinds?
For example, we currently call `m.shape` to get the shape of a matrix, if `m` is a matrix. At this point it would be nice to support `Integral(m, x).shape` as well (i.e. `Integral` is of `MatrixKind` if its argument is a matrix).
The question now is:
1. how do we assure that `Integral(m, x).shape` returns the shape of `m`?
2. how do we make sure that calling `Integral(x, x).shape` raises an exception if `x` is neither a matrix nor an array?
@Upabjojr
We will need singledispatch `shape()` function, which will be implemented in succeeding PR.
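As a rough sketch of what such a dispatcher could look like (the toy classes and names here are hypothetical stand-ins for illustration, not the actual SymPy API or the implementation of the succeeding PR):

```python
from functools import singledispatch

# Toy stand-ins for SymPy classes (hypothetical names).
class ToyMatrix:
    def __init__(self, shape):
        self.shape_ = shape

class ToyIntegral:
    def __init__(self, arg):
        self.arg = arg

@singledispatch
def shape(expr):
    # Default: the object has no well-defined shape, e.g. Integral(x, x).
    raise TypeError(f"no shape defined for {type(expr).__name__}")

@shape.register
def _(expr: ToyMatrix):
    return expr.shape_

@shape.register
def _(expr: ToyIntegral):
    # Integral(m, x) has the shape of its integrand.
    return shape(expr.arg)

print(shape(ToyIntegral(ToyMatrix((2, 2)))))  # (2, 2)
```

An unregistered type falls through to the default and raises, which answers the second question above: `shape` on a non-matrix argument fails loudly rather than returning something meaningless.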
@oscarbenjamin I believe that this PR can be merged now. Shall I squash the commits?
> We will need singledispatch `shape()` function, which will be implemented in succeeding PR.
What's the plan to dispatch functions taking more elements having different kind?
For example, `Derivative(m1, m2)` is supposed to be a 4-dim array if both `m1` and `m2` are of `MatrixKind` but it can be interpreted as `MatrixKind` if both `m1` and `m2` are of shape `(k, 1)` (by squashing trivial dimensions you get back to a 2-dim array).
> For example, `Derivative(m1, m2)` is supposed to be a 4-dim array if both `m1` and `m2` are of `MatrixKind` but it can be interpreted as `MatrixKind` if both `m1` and `m2` are of shape `(k, 1)` (by squashing trivial dimensions you get back to a 2-dim array).
I think that mixing array and matrix like this is a mistake. There should be an explicit function for squashing trivial dimensions to get a matrix from an array.
In any case it would be possible in some way to compute the shape. It's just potentially messy if matrix and array are mixed like this.
Well, you can derive a matrix by a matrix. The resulting object has 4-dimensions, so it's not a matrix. Some support for handling interactions between `ArrayKind` and `MatrixKind` will be needed. I agree that matrices and arrays should be kept separate.
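A plain-Python sketch of the shape bookkeeping being discussed (the helper names are hypothetical, and the axis ordering shown is just one possible convention, not necessarily the one SymPy uses):

```python
# Shape of d(m1)/d(m2): one convention stacks the shape of the
# differentiation variable in front of the shape of the expression,
# so two matrices give a 4-dimensional array.
def derivative_shape(shape_expr, shape_var):
    return tuple(shape_var) + tuple(shape_expr)

# Dropping length-1 axes is what lets a (k,1)-by-(k,1) derivative
# be reinterpreted as a k-by-k matrix.
def squeeze_trivial(shape):
    return tuple(n for n in shape if n != 1)

print(derivative_shape((3, 1), (3, 1)))                   # (3, 1, 3, 1)
print(squeeze_trivial(derivative_shape((3, 1), (3, 1))))  # (3, 3)
```

The point of keeping an explicit squeezing step, rather than mixing `ArrayKind` and `MatrixKind` implicitly, is that the 4-dimensional result stays an array unless the caller deliberately asks for the matrix reinterpretation.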
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/concrete/expr_with_limits.py]
(definition of ExprWithLimits.kind:)
def kind(self):
[end of new definitions in sympy/concrete/expr_with_limits.py]
[start of new definitions in sympy/core/add.py]
(definition of Add.kind:)
def kind(self):
[end of new definitions in sympy/core/add.py]
[start of new definitions in sympy/core/kind.py]
(definition of KindMeta:)
class KindMeta(type):
"""Metaclass for ``Kind``.
Assigns empty ``dict`` as class attribute ``_inst`` for every class,
in order to endow singleton-like behavior."""
(definition of KindMeta.__new__:)
def __new__(cls, clsname, bases, dct):
(definition of Kind:)
class Kind(object, metaclass=KindMeta):
"""Base class for kinds.
Kind of the object represents the mathematical classification that
the entity falls into. It is expected that functions and classes
recognize and filter the argument by its kind.
Kind of every object must be carefully selected so that it shows the
intention of design. An expression may have a different kind according
to the kind of its arguments. For example, the arguments of ``Add``
must have a common kind since addition is a group operator, and the
resulting ``Add()`` has the same kind.
For performance, each kind is as broad as possible and is not
based on set theory. For example, ``NumberKind`` includes not only
complex numbers but also expressions containing ``S.Infinity`` or
``S.NaN``, which are not strictly numbers.
Kind may have arguments as parameter. For example, ``MatrixKind()``
may be constructed with one element which represents the kind of its
elements.
``Kind`` behaves in a singleton-like fashion. The same signature will
return the same object."""
(definition of Kind.__new__:)
def __new__(cls, *args):
(definition of UndefinedKind:)
class UndefinedKind(Kind):
"""Default kind for all SymPy objects. If the kind is not defined for
the object, or if the object cannot infer the kind from its
arguments, this will be returned.
Examples
========
>>> from sympy import Expr
>>> Expr().kind
UndefinedKind"""
(definition of UndefinedKind.__new__:)
def __new__(cls):
(definition of UndefinedKind.__repr__:)
def __repr__(self):
(definition of NumberKind:)
class NumberKind(Kind):
"""Kind for all numeric objects.
This kind represents every number, including complex numbers,
infinity and ``S.NaN``. Other objects such as quaternions do not
have this kind.
Most ``Expr`` classes are designed to represent numbers, so this will
be the most common kind in SymPy core. For example, ``Symbol()``,
which represents a scalar, has this kind as long as it is commutative.
Numbers form a field. Any operation between number-kind objects will
result in this kind as well.
Examples
========
>>> from sympy import S, oo, Symbol
>>> S.One.kind
NumberKind
>>> (-oo).kind
NumberKind
>>> S.NaN.kind
NumberKind
Commutative symbols are treated as numbers.
>>> x = Symbol('x')
>>> x.kind
NumberKind
>>> Symbol('y', commutative=False).kind
UndefinedKind
An operation between numbers results in a number.
>>> (x+1).kind
NumberKind
See Also
========
sympy.core.expr.Expr.is_Number : check if the object is a strict
subclass of the ``Number`` class.
sympy.core.expr.Expr.is_number : check if the object is a number
without any free symbols."""
(definition of NumberKind.__new__:)
def __new__(cls):
(definition of NumberKind.__repr__:)
def __repr__(self):
(definition of BooleanKind:)
class BooleanKind(Kind):
"""Kind for boolean objects.
SymPy's ``S.true``, ``S.false``, and built-in ``True`` and ``False``
have this kind. The boolean numbers ``1`` and ``0`` are not relevant.
Examples
========
>>> from sympy import S, Q
>>> S.true.kind
BooleanKind
>>> Q.even(3).kind
BooleanKind"""
(definition of BooleanKind.__new__:)
def __new__(cls):
(definition of BooleanKind.__repr__:)
def __repr__(self):
[end of new definitions in sympy/core/kind.py]
[start of new definitions in sympy/core/symbol.py]
(definition of Symbol.kind:)
def kind(self):
[end of new definitions in sympy/core/symbol.py]
[start of new definitions in sympy/core/trace.py]
(definition of Tr.kind:)
def kind(self):
[end of new definitions in sympy/core/trace.py]
[start of new definitions in sympy/matrices/matrices.py]
(definition of MatrixKind:)
class MatrixKind(Kind):
"""Kind for all matrices in SymPy.
The basic classes for this kind are ``MatrixBase`` and ``MatrixExpr``,
but any expression representing a matrix can have this kind.
Parameters
==========
element_kind : Kind
Kind of the element. Default is ``NumberKind``, which means that
the matrix contains only numbers.
Examples
========
>>> from sympy import MatrixSymbol, Integral
>>> from sympy.abc import x
>>> A = MatrixSymbol('A', 2,2)
>>> A.kind
MatrixKind(NumberKind)
>>> Integral(A,x).kind
MatrixKind(NumberKind)
See Also
========
sympy.tensor.ArrayKind : Kind for N-dimensional arrays."""
(definition of MatrixKind.__new__:)
def __new__(cls, element_kind=NumberKind):
(definition of MatrixKind.__repr__:)
def __repr__(self):
[end of new definitions in sympy/matrices/matrices.py]
[start of new definitions in sympy/tensor/array/ndim_array.py]
(definition of ArrayKind:)
class ArrayKind(Kind):
"""Kind for N-dimensional arrays in SymPy.
This kind represents multidimensional arrays on which algebraic
operations are defined. The basic class for this kind is ``NDimArray``,
but any expression representing an array can have this kind.
Parameters
==========
element_kind : Kind
Kind of the element. Default is ``NumberKind``, which means that
the array contains only numbers.
Examples
========
>>> from sympy import NDimArray
>>> NDimArray([1,2,3]).kind
ArrayKind(NumberKind)
See Also
========
sympy.matrices.MatrixKind : Kind for matrices."""
(definition of ArrayKind.__new__:)
def __new__(cls, element_kind=NumberKind):
(definition of ArrayKind.__repr__:)
def __repr__(self):
(definition of NDimArray.kind:)
def kind(self):
[end of new definitions in sympy/tensor/array/ndim_array.py]
</definitions>
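The singleton-like behavior described for ``Kind`` above can be sketched in plain Python. This is a guess at how the ``_inst`` cache set up by ``KindMeta`` might be used, not the actual SymPy implementation:

```python
# Each Kind subclass gets its own empty cache dict from the metaclass,
# so constructing a kind with the same arguments returns the same object.
class KindMeta(type):
    def __new__(cls, clsname, bases, dct):
        dct["_inst"] = {}
        return super().__new__(cls, clsname, bases, dct)

class Kind(metaclass=KindMeta):
    def __new__(cls, *args):
        if args not in cls._inst:
            cls._inst[args] = super().__new__(cls)
        return cls._inst[args]

class MatrixKind(Kind):
    def __new__(cls, element_kind="NumberKind"):
        obj = super().__new__(cls, element_kind)
        obj.element_kind = element_kind
        return obj

print(MatrixKind() is MatrixKind())  # True
```

Caching per signature means kind comparisons can be done with ``is``, which is what makes dispatching on kinds cheap.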
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 284db64e44124856accce4abff75d2d8990c301d | |
sympy__sympy-20465 | 20,465 | sympy/sympy | 1.8 | fb3f1fe09a89f5c6d6233ec10903469cf3a284ba | 2020-11-20T23:22:52Z | diff --git a/doc/src/modules/tensor/array.rst b/doc/src/modules/tensor/array.rst
index 4b9498294556..cc8d70a29315 100644
--- a/doc/src/modules/tensor/array.rst
+++ b/doc/src/modules/tensor/array.rst
@@ -32,3 +32,5 @@ Functions
.. autofunction:: tensorcontraction
.. autofunction:: tensorproduct
+
+.. autofunction:: tensordiagonal
diff --git a/sympy/__init__.py b/sympy/__init__.py
index e242b55c5ca0..2e882f1e22e4 100644
--- a/sympy/__init__.py
+++ b/sympy/__init__.py
@@ -218,8 +218,8 @@ def __sympy_debug():
from .tensor import (IndexedBase, Idx, Indexed, get_contraction_structure,
get_indices, MutableDenseNDimArray, ImmutableDenseNDimArray,
MutableSparseNDimArray, ImmutableSparseNDimArray, NDimArray,
- tensorproduct, tensorcontraction, derive_by_array, permutedims, Array,
- DenseNDimArray, SparseNDimArray)
+ tensorproduct, tensorcontraction, tensordiagonal, derive_by_array,
+ permutedims, Array, DenseNDimArray, SparseNDimArray)
from .parsing import parse_expr
@@ -458,8 +458,8 @@ def __sympy_debug():
'IndexedBase', 'Idx', 'Indexed', 'get_contraction_structure',
'get_indices', 'MutableDenseNDimArray', 'ImmutableDenseNDimArray',
'MutableSparseNDimArray', 'ImmutableSparseNDimArray', 'NDimArray',
- 'tensorproduct', 'tensorcontraction', 'derive_by_array', 'permutedims',
- 'Array', 'DenseNDimArray', 'SparseNDimArray',
+ 'tensorproduct', 'tensorcontraction', 'tensordiagonal', 'derive_by_array',
+ 'permutedims', 'Array', 'DenseNDimArray', 'SparseNDimArray',
# sympy.parsing
'parse_expr',
diff --git a/sympy/tensor/__init__.py b/sympy/tensor/__init__.py
index 8369b275d8b6..bb266e79da69 100644
--- a/sympy/tensor/__init__.py
+++ b/sympy/tensor/__init__.py
@@ -5,8 +5,8 @@
from .index_methods import get_contraction_structure, get_indices
from .array import (MutableDenseNDimArray, ImmutableDenseNDimArray,
MutableSparseNDimArray, ImmutableSparseNDimArray, NDimArray, tensorproduct,
- tensorcontraction, derive_by_array, permutedims, Array, DenseNDimArray,
- SparseNDimArray,)
+ tensorcontraction, tensordiagonal, derive_by_array, permutedims, Array,
+ DenseNDimArray, SparseNDimArray,)
__all__ = [
'IndexedBase', 'Idx', 'Indexed',
@@ -15,6 +15,6 @@
'MutableDenseNDimArray', 'ImmutableDenseNDimArray',
'MutableSparseNDimArray', 'ImmutableSparseNDimArray', 'NDimArray',
- 'tensorproduct', 'tensorcontraction', 'derive_by_array', 'permutedims',
+ 'tensorproduct', 'tensorcontraction', 'tensordiagonal', 'derive_by_array', 'permutedims',
'Array', 'DenseNDimArray', 'SparseNDimArray',
]
diff --git a/sympy/tensor/array/__init__.py b/sympy/tensor/array/__init__.py
index 9b2e56f12197..91f5bb4731ea 100644
--- a/sympy/tensor/array/__init__.py
+++ b/sympy/tensor/array/__init__.py
@@ -156,6 +156,31 @@
[2*x, x - y],
[2*z, -t + z]])
+Diagonal operator
+-----------------
+
+The ``tensordiagonal`` function acts in a similar manner as ``tensorcontraction``,
+but the joined indices are not summed over, for example diagonalizing
+positions `a` and `b` means
+
+`A_{i_1,\ldots,i_a,\ldots,i_b,\ldots,i_n} \implies A_{i_1,\ldots,k,\ldots,k,\ldots,i_n}
+\implies \tilde{A}_{i_1,\ldots,i_{a-1},i_{a+1},\ldots,i_{b-1},i_{b+1},\ldots,i_n,k}`
+
+where `\tilde{A}` is the array equivalent to the diagonal of `A` at positions
+`a` and `b` moved to the last index slot.
+
+Compare the difference between contraction and diagonal operators:
+
+>>> from sympy import tensordiagonal
+>>> from sympy.abc import a, b, c, d
+>>> m = Matrix([[a, b], [c, d]])
+>>> tensorcontraction(m, [0, 1])
+a + d
+>>> tensordiagonal(m, [0, 1])
+[a, d]
+
+In short, no summation occurs with ``tensordiagonal``.
+
Derivatives by array
--------------------
@@ -203,7 +228,7 @@
from .dense_ndim_array import MutableDenseNDimArray, ImmutableDenseNDimArray, DenseNDimArray
from .sparse_ndim_array import MutableSparseNDimArray, ImmutableSparseNDimArray, SparseNDimArray
from .ndim_array import NDimArray
-from .arrayop import tensorproduct, tensorcontraction, derive_by_array, permutedims
+from .arrayop import tensorproduct, tensorcontraction, tensordiagonal, derive_by_array, permutedims
from .array_comprehension import ArrayComprehension, ArrayComprehensionMap
Array = ImmutableDenseNDimArray
@@ -215,9 +240,9 @@
'NDimArray',
- 'tensorproduct', 'tensorcontraction', 'derive_by_array', 'permutedims',
+ 'tensorproduct', 'tensorcontraction', 'tensordiagonal', 'derive_by_array',
- 'ArrayComprehension', 'ArrayComprehensionMap',
+ 'permutedims', 'ArrayComprehension', 'ArrayComprehensionMap',
'Array',
]
diff --git a/sympy/tensor/array/arrayop.py b/sympy/tensor/array/arrayop.py
index 5a275587c8c9..d50051e36a2e 100644
--- a/sympy/tensor/array/arrayop.py
+++ b/sympy/tensor/array/arrayop.py
@@ -68,47 +68,14 @@ def tensorproduct(*args):
return ImmutableDenseNDimArray(product_list, a.shape + b.shape)
-def tensorcontraction(array, *contraction_axes):
- """
- Contraction of an array-like object on the specified axes.
-
- Examples
- ========
-
- >>> from sympy import Array, tensorcontraction
- >>> from sympy import Matrix, eye
- >>> tensorcontraction(eye(3), (0, 1))
- 3
- >>> A = Array(range(18), (3, 2, 3))
- >>> A
- [[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]], [[12, 13, 14], [15, 16, 17]]]
- >>> tensorcontraction(A, (0, 2))
- [21, 30]
-
- Matrix multiplication may be emulated with a proper combination of
- ``tensorcontraction`` and ``tensorproduct``
-
- >>> from sympy import tensorproduct
- >>> from sympy.abc import a,b,c,d,e,f,g,h
- >>> m1 = Matrix([[a, b], [c, d]])
- >>> m2 = Matrix([[e, f], [g, h]])
- >>> p = tensorproduct(m1, m2)
- >>> p
- [[[[a*e, a*f], [a*g, a*h]], [[b*e, b*f], [b*g, b*h]]], [[[c*e, c*f], [c*g, c*h]], [[d*e, d*f], [d*g, d*h]]]]
- >>> tensorcontraction(p, (1, 2))
- [[a*e + b*g, a*f + b*h], [c*e + d*g, c*f + d*h]]
- >>> m1*m2
- Matrix([
- [a*e + b*g, a*f + b*h],
- [c*e + d*g, c*f + d*h]])
- """
+def _util_contraction_diagonal(array, *contraction_or_diagonal_axes):
array = _arrayfy(array)
# Verify contraction_axes:
taken_dims = set()
- for axes_group in contraction_axes:
+ for axes_group in contraction_or_diagonal_axes:
if not isinstance(axes_group, Iterable):
- raise ValueError("collections of contraction axes expected")
+ raise ValueError("collections of contraction/diagonal axes expected")
dim = array.shape[axes_group[0]]
@@ -116,7 +83,7 @@ def tensorcontraction(array, *contraction_axes):
if d in taken_dims:
raise ValueError("dimension specified more than once")
if dim != array.shape[d]:
- raise ValueError("cannot contract between axes of different dimension")
+ raise ValueError("cannot contract or diagonalize between axes of different dimension")
taken_dims.add(d)
rank = array.rank()
@@ -140,19 +107,58 @@ def tensorcontraction(array, *contraction_axes):
# Determine absolute positions of the contracted indices:
summed_deltas = []
- for axes_group in contraction_axes:
+ for axes_group in contraction_or_diagonal_axes:
lidx = []
for js in range(array.shape[axes_group[0]]):
lidx.append(sum([cum_shape[ig] * js for ig in axes_group]))
summed_deltas.append(lidx)
+ return array, remaining_indices, remaining_shape, summed_deltas
+
+
+def tensorcontraction(array, *contraction_axes):
+ """
+ Contraction of an array-like object on the specified axes.
+
+ Examples
+ ========
+
+ >>> from sympy import Array, tensorcontraction
+ >>> from sympy import Matrix, eye
+ >>> tensorcontraction(eye(3), (0, 1))
+ 3
+ >>> A = Array(range(18), (3, 2, 3))
+ >>> A
+ [[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]], [[12, 13, 14], [15, 16, 17]]]
+ >>> tensorcontraction(A, (0, 2))
+ [21, 30]
+
+ Matrix multiplication may be emulated with a proper combination of
+ ``tensorcontraction`` and ``tensorproduct``
+
+ >>> from sympy import tensorproduct
+ >>> from sympy.abc import a,b,c,d,e,f,g,h
+ >>> m1 = Matrix([[a, b], [c, d]])
+ >>> m2 = Matrix([[e, f], [g, h]])
+ >>> p = tensorproduct(m1, m2)
+ >>> p
+ [[[[a*e, a*f], [a*g, a*h]], [[b*e, b*f], [b*g, b*h]]], [[[c*e, c*f], [c*g, c*h]], [[d*e, d*f], [d*g, d*h]]]]
+ >>> tensorcontraction(p, (1, 2))
+ [[a*e + b*g, a*f + b*h], [c*e + d*g, c*f + d*h]]
+ >>> m1*m2
+ Matrix([
+ [a*e + b*g, a*f + b*h],
+ [c*e + d*g, c*f + d*h]])
+ """
+ array, remaining_indices, remaining_shape, summed_deltas = _util_contraction_diagonal(array, *contraction_axes)
+
# Compute the contracted array:
#
# 1. external for loops on all uncontracted indices.
# Uncontracted indices are determined by the combinatorial product of
# the absolute positions of the remaining indices.
# 2. internal loop on all contracted indices.
- # It sum the values of the absolute contracted index and the absolute
+ # It sums the values of the absolute contracted index and the absolute
# uncontracted index for the external loop.
contracted_array = []
for icontrib in itertools.product(*remaining_indices):
@@ -171,6 +177,69 @@ def tensorcontraction(array, *contraction_axes):
return type(array)(contracted_array, remaining_shape)
+def tensordiagonal(array, *diagonal_axes):
+ """
+ Diagonalization of an array-like object on the specified axes.
+
+ This is equivalent to multiplying the expression by Kronecker deltas
+ uniting the axes.
+
+ The diagonal indices are put at the end of the axes.
+
+ Examples
+ ========
+
+ ``tensordiagonal`` acting on a 2-dimensional array by axes 0 and 1 is
+ equivalent to the diagonal of the matrix:
+
+ >>> from sympy import Array, tensordiagonal
+ >>> from sympy import Matrix, eye
+ >>> tensordiagonal(eye(3), (0, 1))
+ [1, 1, 1]
+
+ >>> from sympy.abc import a,b,c,d
+ >>> m1 = Matrix([[a, b], [c, d]])
+ >>> tensordiagonal(m1, [0, 1])
+ [a, d]
+
+    In case of higher dimensional arrays, the diagonalized-out dimensions
+    are removed and appended as a single dimension at the end:
+
+ >>> A = Array(range(18), (3, 2, 3))
+ >>> A
+ [[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]], [[12, 13, 14], [15, 16, 17]]]
+ >>> tensordiagonal(A, (0, 2))
+ [[0, 7, 14], [3, 10, 17]]
+ >>> from sympy import permutedims
+ >>> tensordiagonal(A, (0, 2)) == permutedims(Array([A[0, :, 0], A[1, :, 1], A[2, :, 2]]), [1, 0])
+ True
+
+ """
+ array, remaining_indices, remaining_shape, diagonal_deltas = _util_contraction_diagonal(array, *diagonal_axes)
+
+ # Compute the diagonalized array:
+ #
+ # 1. external for loops on all undiagonalized indices.
+ # Undiagonalized indices are determined by the combinatorial product of
+ # the absolute positions of the remaining indices.
+ # 2. internal loop on all diagonal indices.
+ # It appends the values of the absolute diagonalized index and the absolute
+ # undiagonalized index for the external loop.
+ diagonalized_array = []
+ diagonal_shape = [len(i) for i in diagonal_deltas]
+ for icontrib in itertools.product(*remaining_indices):
+ index_base_position = sum(icontrib)
+ isum = []
+ for sum_to_index in itertools.product(*diagonal_deltas):
+ idx = array._get_tuple_index(index_base_position + sum(sum_to_index))
+ isum.append(array[idx])
+
+ isum = type(array)(isum).reshape(*diagonal_shape)
+ diagonalized_array.append(isum)
+
+ return type(array)(diagonalized_array, remaining_shape + diagonal_shape)
+
+
def derive_by_array(expr, dx):
r"""
Derivative by arrays. Supports both arrays and scalars.
| diff --git a/sympy/tensor/array/tests/test_arrayop.py b/sympy/tensor/array/tests/test_arrayop.py
index 065984563145..0bc3738ba517 100644
--- a/sympy/tensor/array/tests/test_arrayop.py
+++ b/sympy/tensor/array/tests/test_arrayop.py
@@ -7,7 +7,8 @@
from sympy import symbols, sin, exp, log, cos, transpose, adjoint, conjugate, diff
from sympy.tensor.array import Array, ImmutableDenseNDimArray, ImmutableSparseNDimArray, MutableSparseNDimArray
-from sympy.tensor.array.arrayop import tensorproduct, tensorcontraction, derive_by_array, permutedims, Flatten
+from sympy.tensor.array.arrayop import tensorproduct, tensorcontraction, derive_by_array, permutedims, Flatten, \
+ tensordiagonal
def test_import_NDimArray():
@@ -290,6 +291,7 @@ def test_array_permutedims():
B = SparseArrayType({1:1, 20000:2}, (10000, 20000))
assert B.transpose() == SparseArrayType({10000: 1, 1: 2}, (20000, 10000))
+
def test_flatten():
from sympy import Matrix
for ArrayType in [ImmutableDenseNDimArray, ImmutableSparseNDimArray, Matrix]:
@@ -298,3 +300,32 @@ def test_flatten():
for i, v in enumerate(Flatten(A)):
i == v
+
+
+def test_tensordiagonal():
+ from sympy import eye
+ expr = Array(range(9)).reshape(3, 3)
+ assert tensordiagonal(expr, [0], [1]) == expr
+ assert tensordiagonal(eye(3), [0, 1]) == Array([1, 1, 1])
+ assert tensordiagonal(expr, [0, 1]) == Array([0, 4, 8])
+ x, y, z = symbols("x y z")
+ expr2 = tensorproduct([x, y, z], expr)
+ assert tensordiagonal(expr2, [1, 2]) == Array([[0, 4*x, 8*x], [0, 4*y, 8*y], [0, 4*z, 8*z]])
+ assert tensordiagonal(expr2, [0, 1]) == Array([[0, 3*y, 6*z], [x, 4*y, 7*z], [2*x, 5*y, 8*z]])
+ assert tensordiagonal(expr2, [0, 1, 2]) == Array([0, 4*y, 8*z])
+ assert tensordiagonal(expr2, [0]) == permutedims(expr2, [1, 2, 0])
+ assert tensordiagonal(expr2, [1]) == permutedims(expr2, [0, 2, 1])
+ assert tensordiagonal(expr2, [2]) == expr2
+ assert tensordiagonal(expr2, [1], [2]) == expr2
+ assert tensordiagonal(expr2, [0], [1]) == permutedims(expr2, [2, 0, 1])
+
+ a, b, c, X, Y, Z = symbols("a b c X Y Z")
+ expr3 = tensorproduct([x, y, z], [1, 2, 3], [a, b, c], [X, Y, Z])
+ assert tensordiagonal(expr3, [0, 1, 2, 3]) == Array([x*a*X, 2*y*b*Y, 3*z*c*Z])
+ assert tensordiagonal(expr3, [0, 1], [2, 3]) == tensorproduct([x, 2*y, 3*z], [a*X, b*Y, c*Z])
+
+ assert tensordiagonal(expr3, [0], [1, 2], [3]) == tensorproduct([x, y, z], [a, 2*b, 3*c], [X, Y, Z])
+ assert tensordiagonal(tensordiagonal(expr3, [2, 3]), [0, 1]) == tensorproduct([a*X, b*Y, c*Z], [x, 2*y, 3*z])
+
+ raises(ValueError, lambda: tensordiagonal([[1, 2, 3], [4, 5, 6]], [0, 1]))
+ raises(ValueError, lambda: tensordiagonal(expr3.reshape(3, 3, 9), [1, 2]))
| diff --git a/doc/src/modules/tensor/array.rst b/doc/src/modules/tensor/array.rst
index 4b9498294556..cc8d70a29315 100644
--- a/doc/src/modules/tensor/array.rst
+++ b/doc/src/modules/tensor/array.rst
@@ -32,3 +32,5 @@ Functions
.. autofunction:: tensorcontraction
.. autofunction:: tensorproduct
+
+.. autofunction:: tensordiagonal
| [
{
"components": [
{
"doc": "",
"lines": [
71,
116
],
"name": "_util_contraction_diagonal",
"signature": "def _util_contraction_diagonal(array, *contraction_or_diagonal_axes):",
"type": "function"
},
{
"doc": "Diagonali... | [
"test_import_NDimArray",
"test_tensorproduct",
"test_tensorcontraction",
"test_derivative_by_array",
"test_issue_emerged_while_discussing_10972",
"test_array_permutedims",
"test_flatten"
] | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
added tensordiagonal( ) function to perform diagonalization of arrays
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
#### Brief description of what is fixed or changed
#### Other comments
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* tensor
* Added tensordiagonal( ) function to perform diagonalization of array expressions.
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/tensor/array/arrayop.py]
(definition of _util_contraction_diagonal:)
def _util_contraction_diagonal(array, *contraction_or_diagonal_axes):
(definition of tensordiagonal:)
def tensordiagonal(array, *diagonal_axes):
"""Diagonalization of an array-like object on the specified axes.
This is equivalent to multiplying the expression by Kronecker deltas
uniting the axes.
The diagonal indices are put at the end of the axes.
Examples
========
``tensordiagonal`` acting on a 2-dimensional array by axes 0 and 1 is
equivalent to the diagonal of the matrix:
>>> from sympy import Array, tensordiagonal
>>> from sympy import Matrix, eye
>>> tensordiagonal(eye(3), (0, 1))
[1, 1, 1]
>>> from sympy.abc import a,b,c,d
>>> m1 = Matrix([[a, b], [c, d]])
>>> tensordiagonal(m1, [0, 1])
[a, d]
In case of higher-dimensional arrays, the diagonalized-out dimensions
are removed and appended as a single dimension at the end:
>>> A = Array(range(18), (3, 2, 3))
>>> A
[[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]], [[12, 13, 14], [15, 16, 17]]]
>>> tensordiagonal(A, (0, 2))
[[0, 7, 14], [3, 10, 17]]
>>> from sympy import permutedims
>>> tensordiagonal(A, (0, 2)) == permutedims(Array([A[0, :, 0], A[1, :, 1], A[2, :, 2]]), [1, 0])
True"""
[end of new definitions in sympy/tensor/array/arrayop.py]
</definitions>
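The contraction-versus-diagonal distinction in the docstring above can be reproduced with a short pure-Python sketch (no SymPy required; these toy functions only handle the specific axis patterns used in the docstring examples):

```python
# Diagonal of a square matrix: join axes 0 and 1 without summing.
def tensordiagonal_2d(m):
    return [m[i][i] for i in range(len(m))]

# Trace: the same joined axes, but summed over.
def tensorcontraction_2d(m):
    return sum(m[i][i] for i in range(len(m)))

# Diagonalize axes 0 and 2 of an (n, m, n) nested list; the joined
# axis is moved to the last slot, giving shape (m, n).
def tensordiagonal_02(a):
    n, m = len(a), len(a[0])
    return [[a[k][j][k] for k in range(n)] for j in range(m)]

M = [[1, 2], [3, 4]]
print(tensordiagonal_2d(M))      # [1, 4]
print(tensorcontraction_2d(M))   # 5

# Same data as Array(range(18), (3, 2, 3)) in the docstring.
A = [[[6 * i + 3 * j + k for k in range(3)] for j in range(2)] for i in range(3)]
print(tensordiagonal_02(A))      # [[0, 7, 14], [3, 10, 17]]
```

The last line matches the ``tensordiagonal(A, (0, 2))`` example in the docstring, showing that diagonalization keeps the joined index as a trailing dimension where contraction would have summed it away.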
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 284db64e44124856accce4abff75d2d8990c301d | |
pydata__xarray-4577 | 4,577 | pydata/xarray | 0.12 | 8ac3d862197204e6212a9882051808eb4b1cf3ff | 2020-11-12T15:53:00Z | diff --git a/setup.cfg b/setup.cfg
index 2a7762fb9f5..0a82f80ebd6 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -81,6 +81,12 @@ setup_requires =
setuptools >= 38.4
setuptools_scm
+[options.entry_points]
+xarray.backends =
+ zarr = xarray.backends.zarr:zarr_backend
+ h5netcdf = xarray.backends.h5netcdf_:h5netcdf_backend
+ cfgrib = xarray.backends.cfgrib_:cfgrib_backend
+
[options.extras_require]
io =
netCDF4
diff --git a/xarray/backends/__init__.py b/xarray/backends/__init__.py
index 2a769b1335e..1500ea5061f 100644
--- a/xarray/backends/__init__.py
+++ b/xarray/backends/__init__.py
@@ -9,6 +9,7 @@
from .h5netcdf_ import H5NetCDFStore
from .memory import InMemoryDataStore
from .netCDF4_ import NetCDF4DataStore
+from .plugins import list_engines
from .pseudonetcdf_ import PseudoNetCDFDataStore
from .pydap_ import PydapDataStore
from .pynio_ import NioDataStore
@@ -29,4 +30,5 @@
"H5NetCDFStore",
"ZarrStore",
"PseudoNetCDFDataStore",
+ "list_engines",
]
diff --git a/xarray/backends/api.py b/xarray/backends/api.py
index f1d58813958..5ffd50c9ca4 100644
--- a/xarray/backends/api.py
+++ b/xarray/backends/api.py
@@ -437,7 +437,7 @@ def open_dataset(
kwargs = locals().copy()
from . import apiv2, plugins
- if engine in plugins.ENGINES:
+ if engine in plugins.list_engines():
return apiv2.open_dataset(**kwargs)
if autoclose is not None:
diff --git a/xarray/backends/apiv2.py b/xarray/backends/apiv2.py
index bdeb2f2057d..b3f5343df7c 100644
--- a/xarray/backends/apiv2.py
+++ b/xarray/backends/apiv2.py
@@ -103,13 +103,13 @@ def _dataset_from_backend_dataset(
return ds
-def _resolve_decoders_kwargs(decode_cf, engine, **decoders):
- signature = plugins.ENGINES[engine]["signature"]
- if decode_cf is False:
- for d in decoders:
- if d in signature:
- decoders[d] = False
- return {k: v for k, v in decoders.items() if v is not None}
+def _resolve_decoders_kwargs(decode_cf, open_backend_dataset_parameters, **decoders):
+ for d in list(decoders):
+ if decode_cf is False and d in open_backend_dataset_parameters:
+ decoders[d] = False
+ if decoders[d] is None:
+ decoders.pop(d)
+ return decoders
def open_dataset(
@@ -250,9 +250,12 @@ def open_dataset(
if engine is None:
engine = _autodetect_engine(filename_or_obj)
+ engines = plugins.list_engines()
+ backend = _get_backend_cls(engine, engines=engines)
+
decoders = _resolve_decoders_kwargs(
decode_cf,
- engine=engine,
+ open_backend_dataset_parameters=backend.open_dataset_parameters,
mask_and_scale=mask_and_scale,
decode_times=decode_times,
decode_timedelta=decode_timedelta,
@@ -263,11 +266,7 @@ def open_dataset(
backend_kwargs = backend_kwargs.copy()
overwrite_encoded_chunks = backend_kwargs.pop("overwrite_encoded_chunks", None)
-
- open_backend_dataset = _get_backend_cls(engine, engines=plugins.ENGINES)[
- "open_dataset"
- ]
- backend_ds = open_backend_dataset(
+ backend_ds = backend.open_dataset(
filename_or_obj,
drop_variables=drop_variables,
**decoders,
diff --git a/xarray/backends/cfgrib_.py b/xarray/backends/cfgrib_.py
index 80b3183f197..cf506d67e96 100644
--- a/xarray/backends/cfgrib_.py
+++ b/xarray/backends/cfgrib_.py
@@ -7,6 +7,7 @@
from ..core.variable import Variable
from .common import AbstractDataStore, BackendArray
from .locks import SerializableLock, ensure_lock
+from .plugins import BackendEntrypoint
# FIXME: Add a dedicated lock, even if ecCodes is supposed to be thread-safe
# in most circumstances. See:
@@ -126,3 +127,6 @@ def open_backend_dataset_cfgrib(
ds.encoding = encoding
return ds
+
+
+cfgrib_backend = BackendEntrypoint(open_dataset=open_backend_dataset_cfgrib)
diff --git a/xarray/backends/h5netcdf_.py b/xarray/backends/h5netcdf_.py
index 9d013e3e46e..e3539a05fb1 100644
--- a/xarray/backends/h5netcdf_.py
+++ b/xarray/backends/h5netcdf_.py
@@ -18,6 +18,7 @@
_get_datatype,
_nc4_require_group,
)
+from .plugins import BackendEntrypoint
class H5NetCDFArrayWrapper(BaseNetCDF4Array):
@@ -325,7 +326,7 @@ def close(self, **kwargs):
self._manager.close(**kwargs)
-def open_backend_dataset_h5necdf(
+def open_backend_dataset_h5netcdf(
filename_or_obj,
*,
mask_and_scale=True,
@@ -374,3 +375,6 @@ def open_backend_dataset_h5necdf(
ds.encoding = encoding
return ds
+
+
+h5netcdf_backend = BackendEntrypoint(open_dataset=open_backend_dataset_h5netcdf)
diff --git a/xarray/backends/plugins.py b/xarray/backends/plugins.py
index a0b1a825600..9e78a3b71e8 100644
--- a/xarray/backends/plugins.py
+++ b/xarray/backends/plugins.py
@@ -1,31 +1,78 @@
import inspect
-import typing as T
-
-from . import cfgrib_, h5netcdf_, zarr
-
-ENGINES: T.Dict[str, T.Dict[str, T.Any]] = {
- "h5netcdf": {
- "open_dataset": h5netcdf_.open_backend_dataset_h5necdf,
- },
- "zarr": {
- "open_dataset": zarr.open_backend_dataset_zarr,
- },
- "cfgrib": {
- "open_dataset": cfgrib_.open_backend_dataset_cfgrib,
- },
-}
-
-
-for engine in ENGINES.values():
- if "signature" not in engine:
- parameters = inspect.signature(engine["open_dataset"]).parameters
- for name, param in parameters.items():
- if param.kind in (
- inspect.Parameter.VAR_KEYWORD,
- inspect.Parameter.VAR_POSITIONAL,
- ):
- raise TypeError(
- f'All the parameters in {engine["open_dataset"]!r} signature should be explicit. '
- "*args and **kwargs is not supported"
- )
- engine["signature"] = set(parameters)
+import itertools
+import warnings
+from functools import lru_cache
+
+import pkg_resources
+
+
+class BackendEntrypoint:
+ __slots__ = ("open_dataset", "open_dataset_parameters")
+
+ def __init__(self, open_dataset, open_dataset_parameters=None):
+ self.open_dataset = open_dataset
+ self.open_dataset_parameters = open_dataset_parameters
+
+
+def remove_duplicates(backend_entrypoints):
+
+ # sort and group entrypoints by name
+ backend_entrypoints = sorted(backend_entrypoints, key=lambda ep: ep.name)
+ backend_entrypoints_grouped = itertools.groupby(
+ backend_entrypoints, key=lambda ep: ep.name
+ )
+ # check if there are multiple entrypoints for the same name
+ unique_backend_entrypoints = []
+ for name, matches in backend_entrypoints_grouped:
+ matches = list(matches)
+ unique_backend_entrypoints.append(matches[0])
+ matches_len = len(matches)
+ if matches_len > 1:
+ selected_module_name = matches[0].module_name
+ all_module_names = [e.module_name for e in matches]
+ warnings.warn(
+ f"\nFound {matches_len} entrypoints for the engine name {name}:"
+ f"\n {all_module_names}.\n It will be used: {selected_module_name}.",
+ RuntimeWarning,
+ )
+ return unique_backend_entrypoints
+
+
+def detect_parameters(open_dataset):
+ signature = inspect.signature(open_dataset)
+ parameters = signature.parameters
+ for name, param in parameters.items():
+ if param.kind in (
+ inspect.Parameter.VAR_KEYWORD,
+ inspect.Parameter.VAR_POSITIONAL,
+ ):
+ raise TypeError(
+ f"All the parameters in {open_dataset!r} signature should be explicit. "
+ "*args and **kwargs is not supported"
+ )
+ return tuple(parameters)
+
+
+def create_engines_dict(backend_entrypoints):
+ engines = {}
+ for backend_ep in backend_entrypoints:
+ name = backend_ep.name
+ backend = backend_ep.load()
+ engines[name] = backend
+ return engines
+
+
+def set_missing_parameters(engines):
+ for name, backend in engines.items():
+ if backend.open_dataset_parameters is None:
+ open_dataset = backend.open_dataset
+ backend.open_dataset_parameters = detect_parameters(open_dataset)
+
+
+@lru_cache(maxsize=1)
+def list_engines():
+ entrypoints = pkg_resources.iter_entry_points("xarray.backends")
+ backend_entrypoints = remove_duplicates(entrypoints)
+ engines = create_engines_dict(backend_entrypoints)
+ set_missing_parameters(engines)
+ return engines
diff --git a/xarray/backends/zarr.py b/xarray/backends/zarr.py
index f3c92d52303..6d261651263 100644
--- a/xarray/backends/zarr.py
+++ b/xarray/backends/zarr.py
@@ -7,6 +7,7 @@
from ..core.utils import FrozenDict, HiddenKeyDict, close_on_error
from ..core.variable import Variable
from .common import AbstractWritableDataStore, BackendArray, _encode_variable_name
+from .plugins import BackendEntrypoint
# need some special secret attributes to tell us the dimensions
DIMENSION_KEY = "_ARRAY_DIMENSIONS"
@@ -692,3 +693,6 @@ def open_backend_dataset_zarr(
ds.encoding = encoding
return ds
+
+
+zarr_backend = BackendEntrypoint(open_dataset=open_backend_dataset_zarr)
| diff --git a/xarray/tests/test_plugins.py b/xarray/tests/test_plugins.py
new file mode 100644
index 00000000000..7e9bb58f140
--- /dev/null
+++ b/xarray/tests/test_plugins.py
@@ -0,0 +1,93 @@
+from unittest import mock
+
+import pkg_resources
+import pytest
+
+from xarray.backends import plugins
+
+
+def dummy_open_dataset_args(filename_or_obj, *args):
+ pass
+
+
+def dummy_open_dataset_kwargs(filename_or_obj, **kwargs):
+ pass
+
+
+def dummy_open_dataset(filename_or_obj, *, decoder):
+ pass
+
+
+@pytest.fixture
+def dummy_duplicated_entrypoints():
+ specs = [
+ "engine1 = xarray.tests.test_plugins:backend_1",
+ "engine1 = xarray.tests.test_plugins:backend_2",
+ "engine2 = xarray.tests.test_plugins:backend_1",
+ "engine2 = xarray.tests.test_plugins:backend_2",
+ ]
+ eps = [pkg_resources.EntryPoint.parse(spec) for spec in specs]
+ return eps
+
+
+def test_remove_duplicates(dummy_duplicated_entrypoints):
+ entrypoints = plugins.remove_duplicates(dummy_duplicated_entrypoints)
+ assert len(entrypoints) == 2
+
+
+def test_remove_duplicates_warnings(dummy_duplicated_entrypoints):
+
+ with pytest.warns(RuntimeWarning) as record:
+ _ = plugins.remove_duplicates(dummy_duplicated_entrypoints)
+
+ assert len(record) == 2
+ message0 = str(record[0].message)
+ message1 = str(record[1].message)
+ assert "entrypoints" in message0
+ assert "entrypoints" in message1
+
+
+@mock.patch("pkg_resources.EntryPoint.load", mock.MagicMock(return_value=None))
+def test_create_engines_dict():
+ specs = [
+ "engine1 = xarray.tests.test_plugins:backend_1",
+ "engine2 = xarray.tests.test_plugins:backend_2",
+ ]
+ entrypoints = [pkg_resources.EntryPoint.parse(spec) for spec in specs]
+ engines = plugins.create_engines_dict(entrypoints)
+ assert len(engines) == 2
+ assert engines.keys() == set(("engine1", "engine2"))
+
+
+def test_set_missing_parameters():
+ backend_1 = plugins.BackendEntrypoint(dummy_open_dataset)
+ backend_2 = plugins.BackendEntrypoint(dummy_open_dataset, ("filename_or_obj",))
+ engines = {"engine_1": backend_1, "engine_2": backend_2}
+ plugins.set_missing_parameters(engines)
+
+ assert len(engines) == 2
+ engine_1 = engines["engine_1"]
+ assert engine_1.open_dataset_parameters == ("filename_or_obj", "decoder")
+ engine_2 = engines["engine_2"]
+ assert engine_2.open_dataset_parameters == ("filename_or_obj",)
+
+
+def test_set_missing_parameters_raise_error():
+
+ backend = plugins.BackendEntrypoint(dummy_open_dataset_args)
+ with pytest.raises(TypeError):
+ plugins.set_missing_parameters({"engine": backend})
+
+ backend = plugins.BackendEntrypoint(
+ dummy_open_dataset_args, ("filename_or_obj", "decoder")
+ )
+ plugins.set_missing_parameters({"engine": backend})
+
+ backend = plugins.BackendEntrypoint(dummy_open_dataset_kwargs)
+ with pytest.raises(TypeError):
+ plugins.set_missing_parameters({"engine": backend})
+
+ backend = plugins.BackendEntrypoint(
+ dummy_open_dataset_kwargs, ("filename_or_obj", "decoder")
+ )
+ plugins.set_missing_parameters({"engine": backend})
| diff --git a/setup.cfg b/setup.cfg
index 2a7762fb9f5..0a82f80ebd6 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -81,6 +81,12 @@ setup_requires =
setuptools >= 38.4
setuptools_scm
+[options.entry_points]
+xarray.backends =
+ zarr = xarray.backends.zarr:zarr_backend
+ h5netcdf = xarray.backends.h5netcdf_:h5netcdf_backend
+ cfgrib = xarray.backends.cfgrib_:cfgrib_backend
+
[options.extras_require]
io =
netCDF4
| [
{
"components": [
{
"doc": "",
"lines": [
329,
377
],
"name": "open_backend_dataset_h5netcdf",
"signature": "def open_backend_dataset_h5netcdf( filename_or_obj, *, mask_and_scale=True, decode_times=None, concat_characters=None, decode_coords=None... | [
"xarray/tests/test_plugins.py::test_remove_duplicates",
"xarray/tests/test_plugins.py::test_remove_duplicates_warnings",
"xarray/tests/test_plugins.py::test_create_engines_dict",
"xarray/tests/test_plugins.py::test_set_missing_parameters",
"xarray/tests/test_plugins.py::test_set_missing_parameters_raise_err... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Backends entrypoints
- It's an update of @jhamman's pull request https://github.com/pydata/xarray/pull/3166
- It uses the `entrypoints` module to detect the installed engines. The detection is done at the `open_dataset` function call and is cached. It raises a warning in case of conflicts.
- Add a class for the backend interface, `BackendEntrypoint`, instead of a function.
Modified files:
- add plugins.py containing the `detect_engines` function and `BackendEntrypoint`.
- dependencies file to add `entrypoints`.
- backend.__init__ to add `detect_engines`
- apiv2.py and api.py to use `detect_engines`
- zarr.py, h5netcdf_.py, cfgrib.py to instantiate the `BackendEntrypoint`.
- [x] Related to #3166
- [x] Tests added
- [x] Passes `isort . && black . && mypy . && flake8`
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in xarray/backends/h5netcdf_.py]
(definition of open_backend_dataset_h5netcdf:)
def open_backend_dataset_h5netcdf( filename_or_obj, *, mask_and_scale=True, decode_times=None, concat_characters=None, decode_coords=None, drop_variables=None, use_cftime=None, decode_timedelta=None, format=None, group=None, lock=None, invalid_netcdf=None, phony_dims=None, ):
[end of new definitions in xarray/backends/h5netcdf_.py]
[start of new definitions in xarray/backends/plugins.py]
(definition of BackendEntrypoint:)
class BackendEntrypoint:
(definition of BackendEntrypoint.__init__:)
def __init__(self, open_dataset, open_dataset_parameters=None):
(definition of remove_duplicates:)
def remove_duplicates(backend_entrypoints):
(definition of detect_parameters:)
def detect_parameters(open_dataset):
(definition of create_engines_dict:)
def create_engines_dict(backend_entrypoints):
(definition of set_missing_parameters:)
def set_missing_parameters(engines):
(definition of list_engines:)
def list_engines():
[end of new definitions in xarray/backends/plugins.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1c198a191127c601d091213c4b3292a8bb3054e1 | |
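For illustration only (not part of the dataset record): the duplicate-handling that the new `plugins.remove_duplicates` performs can be sketched with plain named tuples standing in for `pkg_resources` entry points. The entrypoint names and module paths below are hypothetical; only the `name` and `module_name` fields the function relies on are modeled.

```python
import itertools
from collections import namedtuple

# Stand-in for pkg_resources entry points; values are hypothetical.
EntryPoint = namedtuple("EntryPoint", ["name", "module_name"])
eps = [
    EntryPoint("engine1", "pkg_a.backend"),
    EntryPoint("engine1", "pkg_b.backend"),   # duplicate name -> dropped with a warning
    EntryPoint("engine2", "pkg_a.backend"),
]

# Sort and group by name, keeping the first match per name,
# mirroring the grouping logic in plugins.remove_duplicates.
eps = sorted(eps, key=lambda ep: ep.name)
unique = [next(group) for _, group in itertools.groupby(eps, key=lambda ep: ep.name)]

print([(ep.name, ep.module_name) for ep in unique])
# [('engine1', 'pkg_a.backend'), ('engine2', 'pkg_a.backend')]
```

Because Python's sort is stable, the first registered entrypoint for each name wins, which matches the "It will be used: ..." warning emitted by the real function.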
sympy__sympy-20406 | 20,406 | sympy/sympy | 1.8 | 4f95487032a53342bdef5b76b566f59b381fb6e3 | 2020-11-10T08:13:05Z | diff --git a/sympy/matrices/decompositions.py b/sympy/matrices/decompositions.py
index 0d59313c3b94..4084d88877a3 100644
--- a/sympy/matrices/decompositions.py
+++ b/sympy/matrices/decompositions.py
@@ -244,7 +244,7 @@ def _cholesky(M, hermitian=True):
========
sympy.matrices.dense.DenseMatrix.LDLdecomposition
- LUdecomposition
+ sympy.matrices.matrices.MatrixBase.LUdecomposition
QRdecomposition
"""
@@ -335,7 +335,7 @@ def _cholesky_sparse(M, hermitian=True):
========
sympy.matrices.sparse.SparseMatrix.LDLdecomposition
- LUdecomposition
+ sympy.matrices.matrices.MatrixBase.LUdecomposition
QRdecomposition
"""
@@ -446,7 +446,7 @@ def _LDLdecomposition(M, hermitian=True):
========
sympy.matrices.dense.DenseMatrix.cholesky
- LUdecomposition
+ sympy.matrices.matrices.MatrixBase.LUdecomposition
QRdecomposition
"""
@@ -927,7 +927,7 @@ def _LUdecomposition_Simple(M, iszerofunc=_iszero, simpfunc=None,
See Also
========
- LUdecomposition
+ sympy.matrices.matrices.MatrixBase.LUdecomposition
LUdecompositionFF
LUsolve
"""
@@ -1072,7 +1072,7 @@ def _LUdecompositionFF(M):
See Also
========
- LUdecomposition
+ sympy.matrices.matrices.MatrixBase.LUdecomposition
LUdecomposition_Simple
LUsolve
@@ -1300,7 +1300,7 @@ def _QRdecomposition(M):
sympy.matrices.dense.DenseMatrix.cholesky
sympy.matrices.dense.DenseMatrix.LDLdecomposition
- LUdecomposition
+ sympy.matrices.matrices.MatrixBase.LUdecomposition
QRsolve
"""
diff --git a/sympy/matrices/expressions/blockmatrix.py b/sympy/matrices/expressions/blockmatrix.py
index 44ced38a38f9..34b8469b57f9 100644
--- a/sympy/matrices/expressions/blockmatrix.py
+++ b/sympy/matrices/expressions/blockmatrix.py
@@ -236,6 +236,275 @@ def transpose(self):
"""
return self._eval_transpose()
+ def schur(self, mat = 'A', generalized = False):
+ """Return the Schur Complement of the 2x2 BlockMatrix
+
+ Parameters
+ ==========
+
+ mat : String, optional
+ The matrix with respect to which the
+ Schur Complement is calculated. 'A' is
+ used by default
+
+ generalized : bool, optional
+ If True, returns the generalized Schur
+ Component which uses Moore-Penrose Inverse
+
+ Examples
+ ========
+
+ >>> from sympy import symbols, MatrixSymbol, BlockMatrix
+ >>> m, n = symbols('m n')
+ >>> A = MatrixSymbol('A', n, n)
+ >>> B = MatrixSymbol('B', n, m)
+ >>> C = MatrixSymbol('C', m, n)
+ >>> D = MatrixSymbol('D', m, m)
+ >>> X = BlockMatrix([[A, B], [C, D]])
+
+ The default Schur Complement is evaluated with "A"
+
+ >>> X.schur()
+ -C*A**(-1)*B + D
+ >>> X.schur('D')
+ A - B*D**(-1)*C
+
+ Schur complement with non-invertible matrices is not
+ defined. Instead, the generalized Schur complement can
+ be calculated which uses the Moore-Penrose Inverse. To
+ achieve this, `generalized` must be set to `True`
+
+ >>> X.schur('B', generalized=True)
+ C - D*(B.T*B)**(-1)*B.T*A
+ >>> X.schur('C', generalized=True)
+ -A*(C.T*C)**(-1)*C.T*D + B
+
+ Returns
+ =======
+
+ M : Matrix
+ The Schur Complement Matrix
+
+ Raises
+ ======
+
+ ShapeError
+ If the block matrix is not a 2x2 matrix
+
+ NonInvertibleMatrixError
+ If given matrix is non-invertible
+
+ References
+ ==========
+
+ .. [1] Wikipedia Article on Schur Component : https://en.wikipedia.org/wiki/Schur_complement
+
+ See Also
+ ========
+
+ sympy.matrices.matrices.MatrixBase.pinv
+ """
+
+ if self.blockshape == (2, 2):
+ [[A, B],
+ [C, D]] = self.blocks.tolist()
+ d={'A' : A, 'B' : B, 'C' : C, 'D' : D}
+ try:
+ inv = (d[mat].T*d[mat]).inv()*d[mat].T if generalized else d[mat].inv()
+ if mat == 'A':
+ return D - C * inv * B
+ elif mat == 'B':
+ return C - D * inv * A
+ elif mat == 'C':
+ return B - A * inv * D
+ elif mat == 'D':
+ return A - B * inv * C
+ #For matrices where no sub-matrix is square
+ return self
+ except NonInvertibleMatrixError:
+ raise NonInvertibleMatrixError('The given matrix is not invertible. Please set generalized=True \
+ to compute the generalized Schur Complement which uses Moore-Penrose Inverse')
+ else:
+ raise ShapeError('Schur Complement can only be calculated for 2x2 block matrices')
+
+ def LDUdecomposition(self):
+ """Returns the Block LDU decomposition of
+ a 2x2 Block Matrix
+
+ Returns
+ =======
+
+ (L, D, U) : Matrices
+ L : Lower Diagonal Matrix
+ D : Diagonal Matrix
+ U : Upper Diagonal Matrix
+
+ Examples
+ ========
+
+ >>> from sympy import symbols, MatrixSymbol, BlockMatrix, block_collapse
+ >>> m, n = symbols('m n')
+ >>> A = MatrixSymbol('A', n, n)
+ >>> B = MatrixSymbol('B', n, m)
+ >>> C = MatrixSymbol('C', m, n)
+ >>> D = MatrixSymbol('D', m, m)
+ >>> X = BlockMatrix([[A, B], [C, D]])
+ >>> L, D, U = X.LDUdecomposition()
+ >>> block_collapse(L*D*U)
+ Matrix([
+ [A, B],
+ [C, D]])
+
+ Raises
+ ======
+
+ ShapeError
+ If the block matrix is not a 2x2 matrix
+
+ NonInvertibleMatrixError
+ If the matrix "A" is non-invertible
+
+ See Also
+ ========
+ sympy.matrices.expressions.blockmatrix.BlockMatrix.UDLdecomposition
+ sympy.matrices.expressions.blockmatrix.BlockMatrix.LUdecomposition
+ """
+ if self.blockshape == (2,2):
+ [[A, B],
+ [C, D]] = self.blocks.tolist()
+ try:
+ AI = A.I
+ except NonInvertibleMatrixError:
+ raise NonInvertibleMatrixError('Block LDU decomposition cannot be calculated when\
+ "A" is singular')
+ Ip = Identity(B.shape[0])
+ Iq = Identity(B.shape[1])
+ Z = ZeroMatrix(*B.shape)
+ L = BlockMatrix([[Ip, Z], [C*AI, Iq]])
+ D = BlockDiagMatrix(A, self.schur())
+ U = BlockMatrix([[Ip, AI*B],[Z.T, Iq]])
+ return L, D, U
+ else:
+ raise ShapeError("Block LDU decomposition is supported only for 2x2 block matrices")
+
+ def UDLdecomposition(self):
+ """Returns the Block UDL decomposition of
+ a 2x2 Block Matrix
+
+ Returns
+ =======
+
+ (U, D, L) : Matrices
+ U : Upper Diagonal Matrix
+ D : Diagonal Matrix
+ L : Lower Diagonal Matrix
+
+ Examples
+ ========
+
+ >>> from sympy import symbols, MatrixSymbol, BlockMatrix, block_collapse
+ >>> m, n = symbols('m n')
+ >>> A = MatrixSymbol('A', n, n)
+ >>> B = MatrixSymbol('B', n, m)
+ >>> C = MatrixSymbol('C', m, n)
+ >>> D = MatrixSymbol('D', m, m)
+ >>> X = BlockMatrix([[A, B], [C, D]])
+ >>> U, D, L = X.UDLdecomposition()
+ >>> block_collapse(U*D*L)
+ Matrix([
+ [A, B],
+ [C, D]])
+
+ Raises
+ ======
+
+ ShapeError
+ If the block matrix is not a 2x2 matrix
+
+ NonInvertibleMatrixError
+ If the matrix "D" is non-invertible
+
+ See Also
+ ========
+ sympy.matrices.expressions.blockmatrix.BlockMatrix.LDUdecomposition
+ sympy.matrices.expressions.blockmatrix.BlockMatrix.LUdecomposition
+ """
+ if self.blockshape == (2,2):
+ [[A, B],
+ [C, D]] = self.blocks.tolist()
+ try:
+ DI = D.I
+ except NonInvertibleMatrixError:
+ raise NonInvertibleMatrixError('Block UDL decomposition cannot be calculated when\
+ "D" is singular')
+ Ip = Identity(A.shape[0])
+ Iq = Identity(B.shape[1])
+ Z = ZeroMatrix(*B.shape)
+ U = BlockMatrix([[Ip, B*DI], [Z.T, Iq]])
+ D = BlockDiagMatrix(self.schur('D'), D)
+ L = BlockMatrix([[Ip, Z],[DI*C, Iq]])
+ return U, D, L
+ else:
+ raise ShapeError("Block UDL decomposition is supported only for 2x2 block matrices")
+
+ def LUdecomposition(self):
+ """Returns the Block LU decomposition of
+ a 2x2 Block Matrix
+
+ Returns
+ =======
+
+ (L, U) : Matrices
+ L : Lower Diagonal Matrix
+ U : Upper Diagonal Matrix
+
+ Examples
+ ========
+
+ >>> from sympy import symbols, MatrixSymbol, BlockMatrix, block_collapse
+ >>> m, n = symbols('m n')
+ >>> A = MatrixSymbol('A', n, n)
+ >>> B = MatrixSymbol('B', n, m)
+ >>> C = MatrixSymbol('C', m, n)
+ >>> D = MatrixSymbol('D', m, m)
+ >>> X = BlockMatrix([[A, B], [C, D]])
+ >>> L, U = X.LUdecomposition()
+ >>> block_collapse(L*U)
+ Matrix([
+ [A, B],
+ [C, D]])
+
+ Raises
+ ======
+
+ ShapeError
+ If the block matrix is not a 2x2 matrix
+
+ NonInvertibleMatrixError
+ If the matrix "A" is non-invertible
+
+ See Also
+ ========
+ sympy.matrices.expressions.blockmatrix.BlockMatrix.UDLdecomposition
+ sympy.matrices.expressions.blockmatrix.BlockMatrix.LDUdecomposition
+ """
+ if self.blockshape == (2,2):
+ [[A, B],
+ [C, D]] = self.blocks.tolist()
+ try:
+ A = A**0.5
+ AI = A.I
+ except NonInvertibleMatrixError:
+ raise NonInvertibleMatrixError('Block LU decomposition cannot be calculated when\
+ "A" is singular')
+ Z = ZeroMatrix(*B.shape)
+ Q = self.schur()**0.5
+ L = BlockMatrix([[A, Z], [C*AI, Q]])
+ U = BlockMatrix([[A, AI*B],[Z.T, Q]])
+ return L, U
+ else:
+ raise ShapeError("Block LU decomposition is supported only for 2x2 block matrices")
+
def _entry(self, i, j, **kwargs):
# Find row entry
orig_i, orig_j = i, j
@@ -565,22 +834,19 @@ def blockinverse_2x2(expr):
[C, D]] = expr.arg.blocks.tolist()
formula = _choose_2x2_inversion_formula(A, B, C, D)
-
+ if formula != None:
+ MI = expr.arg.schur(formula).I
if formula == 'A':
AI = A.I
- MI = (D - C * AI * B).I
return BlockMatrix([[AI + AI * B * MI * C * AI, -AI * B * MI], [-MI * C * AI, MI]])
if formula == 'B':
BI = B.I
- MI = (C - D * BI * A).I
return BlockMatrix([[-MI * D * BI, MI], [BI + BI * A * MI * D * BI, -BI * A * MI]])
if formula == 'C':
CI = C.I
- MI = (B - A * CI * D).I
return BlockMatrix([[-CI * D * MI, CI + CI * D * MI * A * CI], [MI, -MI * A * CI]])
if formula == 'D':
DI = D.I
- MI = (A - B * DI * C).I
return BlockMatrix([[MI, -MI * B * DI], [-DI * C * MI, DI + DI * C * MI * B * DI]])
return expr
| diff --git a/sympy/matrices/expressions/tests/test_blockmatrix.py b/sympy/matrices/expressions/tests/test_blockmatrix.py
index aaf90ccfb10b..35a6d79805c4 100644
--- a/sympy/matrices/expressions/tests/test_blockmatrix.py
+++ b/sympy/matrices/expressions/tests/test_blockmatrix.py
@@ -129,7 +129,7 @@ def test_BlockMatrix_Determinant():
X = BlockMatrix([[A, B], [C, D]])
from sympy import assuming, Q
with assuming(Q.invertible(A)):
- assert det(X) == det(A) * det(D - C*A.I*B)
+ assert det(X) == det(A) * det(X.schur('A'))
assert isinstance(det(X), Expr)
assert det(BlockMatrix([A])) == det(A)
@@ -178,8 +178,8 @@ def test_BlockMatrix_2x2_inverse_symbolic():
D = ZeroMatrix(m, m)
X = BlockMatrix([[A, B], [C, D]])
assert block_collapse(X.inverse()) == BlockMatrix([
- [A.I + A.I * B * (D - C * A.I * B).I * C * A.I, -A.I * B * (D - C * A.I * B).I],
- [-(D - C * A.I * B).I * C * A.I, (D - C * A.I * B).I],
+ [A.I + A.I * B * X.schur('A').I * C * A.I, -A.I * B * X.schur('A').I],
+ [-X.schur('A').I * C * A.I, X.schur('A').I],
])
# test code path where only B is invertible
@@ -189,8 +189,8 @@ def test_BlockMatrix_2x2_inverse_symbolic():
D = MatrixSymbol('D', m, n)
X = BlockMatrix([[A, B], [C, D]])
assert block_collapse(X.inverse()) == BlockMatrix([
- [-(C - D * B.I * A).I * D * B.I, (C - D * B.I * A).I],
- [B.I + B.I * A * (C - D * B.I * A).I * D * B.I, -B.I * A * (C - D * B.I * A).I],
+ [-X.schur('B').I * D * B.I, X.schur('B').I],
+ [B.I + B.I * A * X.schur('B').I * D * B.I, -B.I * A * X.schur('B').I],
])
# test code path where only C is invertible
@@ -200,8 +200,8 @@ def test_BlockMatrix_2x2_inverse_symbolic():
D = MatrixSymbol('D', m, n)
X = BlockMatrix([[A, B], [C, D]])
assert block_collapse(X.inverse()) == BlockMatrix([
- [-C.I * D * (B - A * C.I * D).I, C.I + C.I * D * (B - A * C.I * D).I * A * C.I],
- [(B - A * C.I * D).I, -(B - A * C.I * D).I * A * C.I],
+ [-C.I * D * X.schur('C').I, C.I + C.I * D * X.schur('C').I * A * C.I],
+ [X.schur('C').I, -X.schur('C').I * A * C.I],
])
# test code path where only D is invertible
@@ -211,8 +211,8 @@ def test_BlockMatrix_2x2_inverse_symbolic():
D = MatrixSymbol('D', m, m)
X = BlockMatrix([[A, B], [C, D]])
assert block_collapse(X.inverse()) == BlockMatrix([
- [(A - B * D.I * C).I, -(A - B * D.I * C).I * B * D.I],
- [-D.I * C * (A - B * D.I * C).I, D.I + D.I * C * (A - B * D.I * C).I * B * D.I],
+ [X.schur('D').I, -X.schur('D').I * B * D.I],
+ [-D.I * C * X.schur('D').I, D.I + D.I * C * X.schur('D').I * B * D.I],
])
@@ -395,3 +395,22 @@ def test_invalid_block_matrix():
[ZeroMatrix(n - 1, n), ZeroMatrix(n, n)],
[ZeroMatrix(n + 1, n), ZeroMatrix(n, n)],
]))
+
+def test_block_lu_decomposition():
+ A = MatrixSymbol('A', n, n)
+ B = MatrixSymbol('B', n, m)
+ C = MatrixSymbol('C', m, n)
+ D = MatrixSymbol('D', m, m)
+ X = BlockMatrix([[A, B], [C, D]])
+
+ #LDU decomposition
+ L, D, U = X.LDUdecomposition()
+ assert block_collapse(L*D*U) == X
+
+ #UDL decomposition
+ U, D, L = X.UDLdecomposition()
+ assert block_collapse(U*D*L) == X
+
+ #LU decomposition
+ L, U = X.LUdecomposition()
+ assert block_collapse(L*U) == X
| [
{
"components": [
{
"doc": "Return the Schur Complement of the 2x2 BlockMatrix\n\nParameters\n==========\n\nmat : String, optional\n The matrix with respect to which the\n Schur Complement is calculated. 'A' is\n used by default\n\ngeneralized : bool, optional\n If True, returns the ge... | [
"test_BlockMatrix_Determinant",
"test_BlockMatrix_2x2_inverse_symbolic"
] | [
"test_bc_matmul",
"test_bc_matadd",
"test_bc_transpose",
"test_bc_dist_diag",
"test_block_plus_ident",
"test_BlockMatrix",
"test_block_collapse_explicit_matrices",
"test_issue_17624",
"test_issue_18618",
"test_BlockMatrix_trace",
"test_squareBlockMatrix",
"test_BlockMatrix_2x2_inverse_numeric"... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Added schur complement and block decompositions
Added Schur Complement, Generalized Schur Complement and LDU, UDL and LU Block Decompositions
#### References to other Issues or PRs
Fixes #20401
#### Release Notes
<!-- BEGIN RELEASE NOTES -->
* matrices
* Added function to calculate Generalized Schur Complement for Block Matrices
* Added functions to compute LDU, UDL and LU decompositions for Block Matrices
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/matrices/expressions/blockmatrix.py]
(definition of BlockMatrix.schur:)
def schur(self, mat = 'A', generalized = False):
"""Return the Schur Complement of the 2x2 BlockMatrix
Parameters
==========
mat : String, optional
The matrix with respect to which the
Schur Complement is calculated. 'A' is
used by default
generalized : bool, optional
If True, returns the generalized Schur
Component which uses Moore-Penrose Inverse
Examples
========
>>> from sympy import symbols, MatrixSymbol, BlockMatrix
>>> m, n = symbols('m n')
>>> A = MatrixSymbol('A', n, n)
>>> B = MatrixSymbol('B', n, m)
>>> C = MatrixSymbol('C', m, n)
>>> D = MatrixSymbol('D', m, m)
>>> X = BlockMatrix([[A, B], [C, D]])
The default Schur Complement is evaluated with "A"
>>> X.schur()
-C*A**(-1)*B + D
>>> X.schur('D')
A - B*D**(-1)*C
Schur complement with non-invertible matrices is not
defined. Instead, the generalized Schur complement can
be calculated which uses the Moore-Penrose Inverse. To
achieve this, `generalized` must be set to `True`
>>> X.schur('B', generalized=True)
C - D*(B.T*B)**(-1)*B.T*A
>>> X.schur('C', generalized=True)
-A*(C.T*C)**(-1)*C.T*D + B
Returns
=======
M : Matrix
The Schur Complement Matrix
Raises
======
ShapeError
If the block matrix is not a 2x2 matrix
NonInvertibleMatrixError
If given matrix is non-invertible
References
==========
.. [1] Wikipedia Article on Schur Component : https://en.wikipedia.org/wiki/Schur_complement
See Also
========
sympy.matrices.matrices.MatrixBase.pinv"""
(definition of BlockMatrix.LDUdecomposition:)
def LDUdecomposition(self):
"""Returns the Block LDU decomposition of
a 2x2 Block Matrix
Returns
=======
(L, D, U) : Matrices
L : Lower Diagonal Matrix
D : Diagonal Matrix
U : Upper Diagonal Matrix
Examples
========
>>> from sympy import symbols, MatrixSymbol, BlockMatrix, block_collapse
>>> m, n = symbols('m n')
>>> A = MatrixSymbol('A', n, n)
>>> B = MatrixSymbol('B', n, m)
>>> C = MatrixSymbol('C', m, n)
>>> D = MatrixSymbol('D', m, m)
>>> X = BlockMatrix([[A, B], [C, D]])
>>> L, D, U = X.LDUdecomposition()
>>> block_collapse(L*D*U)
Matrix([
[A, B],
[C, D]])
Raises
======
ShapeError
If the block matrix is not a 2x2 matrix
NonInvertibleMatrixError
If the matrix "A" is non-invertible
See Also
========
sympy.matrices.expressions.blockmatrix.BlockMatrix.UDLdecomposition
sympy.matrices.expressions.blockmatrix.BlockMatrix.LUdecomposition"""
(definition of BlockMatrix.UDLdecomposition:)
def UDLdecomposition(self):
"""Returns the Block UDL decomposition of
a 2x2 Block Matrix
Returns
=======
(U, D, L) : Matrices
U : Upper Diagonal Matrix
D : Diagonal Matrix
L : Lower Diagonal Matrix
Examples
========
>>> from sympy import symbols, MatrixSymbol, BlockMatrix, block_collapse
>>> m, n = symbols('m n')
>>> A = MatrixSymbol('A', n, n)
>>> B = MatrixSymbol('B', n, m)
>>> C = MatrixSymbol('C', m, n)
>>> D = MatrixSymbol('D', m, m)
>>> X = BlockMatrix([[A, B], [C, D]])
>>> U, D, L = X.UDLdecomposition()
>>> block_collapse(U*D*L)
Matrix([
[A, B],
[C, D]])
Raises
======
ShapeError
If the block matrix is not a 2x2 matrix
NonInvertibleMatrixError
If the matrix "D" is non-invertible
See Also
========
sympy.matrices.expressions.blockmatrix.BlockMatrix.LDUdecomposition
sympy.matrices.expressions.blockmatrix.BlockMatrix.LUdecomposition"""
(definition of BlockMatrix.LUdecomposition:)
def LUdecomposition(self):
"""Returns the Block LU decomposition of
a 2x2 Block Matrix
Returns
=======
(L, U) : Matrices
L : Lower Diagonal Matrix
U : Upper Diagonal Matrix
Examples
========
>>> from sympy import symbols, MatrixSymbol, BlockMatrix, block_collapse
>>> m, n = symbols('m n')
>>> A = MatrixSymbol('A', n, n)
>>> B = MatrixSymbol('B', n, m)
>>> C = MatrixSymbol('C', m, n)
>>> D = MatrixSymbol('D', m, m)
>>> X = BlockMatrix([[A, B], [C, D]])
>>> L, U = X.LUdecomposition()
>>> block_collapse(L*U)
Matrix([
[A, B],
[C, D]])
Raises
======
ShapeError
If the block matrix is not a 2x2 matrix
NonInvertibleMatrixError
If the matrix "A" is non-invertible
See Also
========
sympy.matrices.expressions.blockmatrix.BlockMatrix.UDLdecomposition
sympy.matrices.expressions.blockmatrix.BlockMatrix.LDUdecomposition"""
[end of new definitions in sympy/matrices/expressions/blockmatrix.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
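For illustration only (not part of the dataset record): the `L*D*U == X` identity that `LDUdecomposition` is tested against can be checked numerically. A minimal sketch, treating each block of a 2x2 matrix as a 1x1 scalar so that the block inverse `A**(-1)` reduces to ordinary division (assumes the `A` block is nonzero; the values are arbitrary):

```python
# Multiply two 2x2 matrices given as nested lists.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, c, d = 4.0, 3.0, 2.0, 5.0          # 1x1 blocks A, B, C, D
schur_a = d - c * (1 / a) * b            # Schur complement of A

L = [[1.0, 0.0], [c / a, 1.0]]           # lower block triangular
D = [[a, 0.0], [0.0, schur_a]]           # block diagonal: A and schur(A)
U = [[1.0, b / a], [0.0, 1.0]]           # upper block triangular

M = matmul(matmul(L, D), U)              # reassembles the original matrix
print(M)  # [[4.0, 3.0], [2.0, 5.0]]
```

The reassembled `M` equals `[[a, b], [c, d]]`, which is exactly what `block_collapse(L*D*U) == X` verifies symbolically in the new test.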
<issues>
Schur complement and Block LU decomposition
The Schur complement can be implemented for block matrices. Using it, the inverse of block matrices and the block LU, LDU, and UDL decompositions can be computed as expressions.
[Schur complement](https://en.wikipedia.org/wiki/Schur_complement)
[Block LU decomposition](https://en.wikipedia.org/wiki/Block_LU_decomposition)
----------
--------------------
</issues> | 284db64e44124856accce4abff75d2d8990c301d | |
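For illustration only (not part of the dataset record): the determinant identity behind the Schur complement, det(M) = det(A) * det(M/A), which the updated `test_BlockMatrix_Determinant` asserts symbolically, can be sketched with 1x1 blocks. The values below are arbitrary, with `a` nonzero so the complement is defined:

```python
a, b, c, d = 4.0, 2.0, 6.0, 5.0

schur_a = d - c * (1 / a) * b      # M/A, the Schur complement of block A
det_m = a * d - b * c              # determinant of the full 2x2 matrix

print(a * schur_a, det_m)          # 8.0 8.0
```

With 1x1 blocks, det(A) is just `a`, so `a * schur_a` reproduces the full determinant, matching `det(X) == det(A) * det(X.schur('A'))` in the test patch.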
sympy__sympy-20377 | 20,377 | sympy/sympy | 1.8 | 4674883018b333ea2049cf88ec04b291c01439a0 | 2020-11-03T16:28:28Z | diff --git a/sympy/__init__.py b/sympy/__init__.py
index 3d5cab161a99..e242b55c5ca0 100644
--- a/sympy/__init__.py
+++ b/sympy/__init__.py
@@ -190,7 +190,7 @@ def __sympy_debug():
Adjoint, hadamard_product, HadamardProduct, HadamardPower,
Determinant, det, diagonalize_vector, DiagMatrix, DiagonalMatrix,
DiagonalOf, trace, DotProduct, kronecker_product, KroneckerProduct,
- PermutationMatrix, MatrixPermute)
+ PermutationMatrix, MatrixPermute, Permanent, per)
from .geometry import (Point, Point2D, Point3D, Line, Ray, Segment, Line2D,
Segment2D, Ray2D, Line3D, Segment3D, Ray3D, Plane, Ellipse, Circle,
@@ -424,7 +424,7 @@ def __sympy_debug():
'HadamardPower', 'Determinant', 'det', 'diagonalize_vector', 'DiagMatrix',
'DiagonalMatrix', 'DiagonalOf', 'trace', 'DotProduct',
'kronecker_product', 'KroneckerProduct', 'PermutationMatrix',
- 'MatrixPermute',
+ 'MatrixPermute', 'Permanent', 'per',
# sympy.geometry
'Point', 'Point2D', 'Point3D', 'Line', 'Ray', 'Segment', 'Line2D',
diff --git a/sympy/matrices/__init__.py b/sympy/matrices/__init__.py
index aab95e10f5db..0b717488478f 100644
--- a/sympy/matrices/__init__.py
+++ b/sympy/matrices/__init__.py
@@ -28,7 +28,7 @@
hadamard_product, HadamardProduct, HadamardPower, Determinant, det,
diagonalize_vector, DiagMatrix, DiagonalMatrix, DiagonalOf, trace,
DotProduct, kronecker_product, KroneckerProduct,
- PermutationMatrix, MatrixPermute, MatrixSet)
+ PermutationMatrix, MatrixPermute, MatrixSet, Permanent, per)
from .utilities import dotprodsimp
@@ -62,6 +62,7 @@
'det', 'diagonalize_vector', 'DiagMatrix', 'DiagonalMatrix',
'DiagonalOf', 'trace', 'DotProduct', 'kronecker_product',
'KroneckerProduct', 'PermutationMatrix', 'MatrixPermute', 'MatrixSet',
+ 'Permanent', 'per',
'dotprodsimp',
]
diff --git a/sympy/matrices/determinant.py b/sympy/matrices/determinant.py
index ff838c0836a8..08c0160151f6 100644
--- a/sympy/matrices/determinant.py
+++ b/sympy/matrices/determinant.py
@@ -6,6 +6,8 @@
from sympy.polys import PurePoly, cancel
from sympy.simplify.simplify import (simplify as _simplify,
dotprodsimp as _dotprodsimp)
+from sympy import sympify
+from sympy.functions.combinatorial.numbers import nC
from .common import MatrixError, NonSquareMatrixError
from .utilities import (
@@ -479,6 +481,62 @@ def _cofactor_matrix(M, method="berkowitz"):
return M._new(M.rows, M.cols,
lambda i, j: M.cofactor(i, j, method))
+def _per(M):
+ """Returns the permanent of a matrix. Unlike determinant,
+ permanent is defined for both square and non-square matrices.
+
+ For an m x n matrix, with m less than or equal to n,
+ it is given as the sum over the permutations s of size
+ less than or equal to m on [1, 2, . . . n] of the product
+ from i = 1 to m of M[i, s[i]]. Taking the transpose will
+ not affect the value of the permanent.
+
+ In the case of a square matrix, this is the same as the permutation
+ definition of the determinant, but it does not take the sign of the
+ permutation into account. Computing the permanent with this definition
+ is quite inefficient, so here the Ryser formula is used.
+
+ Examples
+ ========
+
+ >>> from sympy import Matrix
+ >>> M = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
+ >>> M.per()
+ 450
+ >>> M = Matrix([1, 5, 7])
+ >>> M.per()
+ 13
+
+ References
+ ==========
+
+ .. [1] Prof. Frank Ben's notes: https://math.berkeley.edu/~bernd/ban275.pdf
+ .. [2] Wikipedia article on Permanent: https://en.wikipedia.org/wiki/Permanent_(mathematics)
+ .. [3] https://reference.wolfram.com/language/ref/Permanent.html
+ .. [4] Permanent of a rectangular matrix : https://arxiv.org/pdf/0904.3251.pdf
+ """
+ import itertools
+
+ m, n = M.shape
+ if m > n:
+ M = M.T
+ m, n = n, m
+ s = list(range(n))
+
+ subsets = []
+ for i in range(1, m + 1):
+ subsets += list(map(list, itertools.combinations(s, i)))
+
+ perm = 0
+ for subset in subsets:
+ prod = 1
+ sub_len = len(subset)
+ for i in range(m):
+ prod *= sum([M[i, j] for j in subset])
+ perm += prod * (-1)**sub_len * nC(n - sub_len, m - sub_len)
+ perm *= (-1)**m
+ perm = sympify(perm)
+ return perm.simplify()
# This functions is a candidate for caching if it gets implemented for matrices.
def _det(M, method="bareiss", iszerofunc=None):
diff --git a/sympy/matrices/expressions/__init__.py b/sympy/matrices/expressions/__init__.py
index d67fc53d9a85..5f4ab203ab74 100644
--- a/sympy/matrices/expressions/__init__.py
+++ b/sympy/matrices/expressions/__init__.py
@@ -10,7 +10,7 @@
from .matmul import MatMul
from .matpow import MatPow
from .trace import Trace, trace
-from .determinant import Determinant, det
+from .determinant import Determinant, det, Permanent, per
from .transpose import Transpose
from .adjoint import Adjoint
from .hadamard import hadamard_product, HadamardProduct, hadamard_power, HadamardPower
@@ -57,4 +57,6 @@
'kronecker_product', 'KroneckerProduct', 'combine_kronecker',
'PermutationMatrix', 'MatrixPermute',
+
+ 'Permanent', 'per'
]
diff --git a/sympy/matrices/expressions/determinant.py b/sympy/matrices/expressions/determinant.py
index 0c181106e878..061b45c1329d 100644
--- a/sympy/matrices/expressions/determinant.py
+++ b/sympy/matrices/expressions/determinant.py
@@ -55,6 +55,57 @@ def det(matexpr):
return Determinant(matexpr).doit()
+class Permanent(Expr):
+ """Matrix Permanent
+
+ Represents the permanent of a matrix expression.
+
+ Examples
+ ========
+
+ >>> from sympy import MatrixSymbol, Permanent, ones
+ >>> A = MatrixSymbol('A', 3, 3)
+ >>> Permanent(A)
+ Permanent(A)
+ >>> Permanent(ones(3, 3)).doit()
+ 6
+ """
+
+ def __new__(cls, mat):
+ mat = sympify(mat)
+ if not mat.is_Matrix:
+ raise TypeError("Input to Permanent, %s, not a matrix" % str(mat))
+
+ return Basic.__new__(cls, mat)
+
+ @property
+ def arg(self):
+ return self.args[0]
+
+ def doit(self, expand=False):
+ try:
+ return self.arg.per()
+ except (AttributeError, NotImplementedError):
+ return self
+
+def per(matexpr):
+ """ Matrix Permanent
+
+ Examples
+ ========
+
+ >>> from sympy import MatrixSymbol, Matrix, per, ones
+ >>> A = MatrixSymbol('A', 3, 3)
+ >>> per(A)
+ Permanent(A)
+ >>> per(ones(5, 5))
+ 120
+ >>> M = Matrix([1, 2, 5])
+ >>> per(M)
+ 8
+ """
+
+ return Permanent(matexpr).doit()
from sympy.assumptions.ask import ask, Q
from sympy.assumptions.refine import handlers_dict
diff --git a/sympy/matrices/matrices.py b/sympy/matrices/matrices.py
index 4a6ea356e4f8..75c048ff92dd 100644
--- a/sympy/matrices/matrices.py
+++ b/sympy/matrices/matrices.py
@@ -30,7 +30,7 @@
from .determinant import (
_find_reasonable_pivot, _find_reasonable_pivot_naive,
- _adjugate, _charpoly, _cofactor, _cofactor_matrix,
+ _adjugate, _charpoly, _cofactor, _cofactor_matrix, _per,
_det, _det_bareiss, _det_berkowitz, _det_LU, _minor, _minor_submatrix)
from .reductions import _is_echelon, _echelon_form, _rank, _rref
@@ -123,6 +123,9 @@ def cofactor_matrix(self, method="berkowitz"):
def det(self, method="bareiss", iszerofunc=None):
return _det(self, method=method, iszerofunc=iszerofunc)
+ def per(self):
+ return _per(self)
+
def minor(self, i, j, method="berkowitz"):
return _minor(self, i, j, method=method)
@@ -140,6 +143,7 @@ def minor_submatrix(self, i, j):
cofactor.__doc__ = _cofactor.__doc__
cofactor_matrix.__doc__ = _cofactor_matrix.__doc__
det.__doc__ = _det.__doc__
+ per.__doc__ = _per.__doc__
minor.__doc__ = _minor.__doc__
minor_submatrix.__doc__ = _minor_submatrix.__doc__
| diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index ea71ddbeefef..b63816de7907 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -3137,6 +3137,10 @@ def test_sympy__matrices__expressions__determinant__Determinant():
from sympy.matrices.expressions import MatrixSymbol
assert _test_args(Determinant(MatrixSymbol('A', 3, 3)))
+def test_sympy__matrices__expressions__determinant__Permanent():
+ from sympy.matrices.expressions.determinant import Permanent
+ from sympy.matrices.expressions import MatrixSymbol
+ assert _test_args(Permanent(MatrixSymbol('A', 3, 4)))
def test_sympy__matrices__expressions__funcmatrix__FunctionMatrix():
from sympy.matrices.expressions.funcmatrix import FunctionMatrix
diff --git a/sympy/matrices/tests/test_determinant.py b/sympy/matrices/tests/test_determinant.py
index c06712a98e09..58be7e86c198 100644
--- a/sympy/matrices/tests/test_determinant.py
+++ b/sympy/matrices/tests/test_determinant.py
@@ -6,6 +6,7 @@
from sympy.testing.pytest import raises
from sympy.matrices.matrices import MatrixDeterminant
from sympy.matrices.common import NonSquareMatrixError, _MinimalMatrix, _CastableMatrix
+from sympy.functions.combinatorial.factorials import factorial, subfactorial
class DeterminantOnlyMatrix(_MinimalMatrix, _CastableMatrix, MatrixDeterminant):
pass
@@ -277,6 +278,17 @@ def test_det():
assert h.det() == 1
raises(ValueError, lambda: e.det(iszerofunc="test"))
+def test_permanent():
+ M = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
+ assert M.per() == 450
+ for i in range(1, 12):
+ assert ones(i, i).per() == ones(i, i).T.per() == factorial(i)
+ assert (ones(i, i)-eye(i)).per() == (ones(i, i)-eye(i)).T.per() == subfactorial(i)
+
+ a1, a2, a3, a4, a5 = symbols('a_1 a_2 a_3 a_4 a_5')
+ M = Matrix([a1, a2, a3, a4, a5])
+ assert M.per() == M.T.per() == a1 + a2 + a3 + a4 + a5
+
def test_adjugate():
x = Symbol('x')
e = DeterminantOnlyMatrix(4, 4,
| [
{
"components": [
{
"doc": "Returns the permanent of a matrix. Unlike determinant,\npermanent is defined for both square and non-square matrices.\n\nFor an m x n matrix, with m less than or equal to n,\nit is given as the sum over the permutations s of size\nless than or equal to m on [1, 2, . . .... | [
"test_sympy__matrices__expressions__determinant__Permanent",
"test_permanent"
] | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_sympy__assumptions__assume__Predicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers__AnyArgs",
"test_sympy__assu... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Permanent of matrix
Added a method to compute permanent of a matrix
#### References to other Issues or PRs
#19041
#### Brief description of what is fixed or changed
The previous PR is obsolete and was only for square matrices. With this PR, the permanent of square and rectangular matrices can be calculated. I have implemented Ryser's algorithm which is the fastest known algorithm presently.
#### Other comments
I have added tests in the determinant file itself. If required, we can create a new file for permanent.
#### Release Notes
<!-- BEGIN RELEASE NOTES -->
* matrices
* Added a function to compute the permanent of a matrix
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/matrices/determinant.py]
(definition of _per:)
def _per(M):
"""Returns the permanent of a matrix. Unlike determinant,
permanent is defined for both square and non-square matrices.
For an m x n matrix, with m less than or equal to n,
it is given as the sum over the permutations s of size
less than or equal to m on [1, 2, . . . n] of the product
from i = 1 to m of M[i, s[i]]. Taking the transpose will
not affect the value of the permanent.
In the case of a square matrix, this is the same as the permutation
definition of the determinant, but it does not take the sign of the
permutation into account. Computing the permanent with this definition
is quite inefficient, so here the Ryser formula is used.
Examples
========
>>> from sympy import Matrix
>>> M = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> M.per()
450
>>> M = Matrix([1, 5, 7])
>>> M.per()
13
References
==========
.. [1] Prof. Frank Ben's notes: https://math.berkeley.edu/~bernd/ban275.pdf
.. [2] Wikipedia article on Permanent: https://en.wikipedia.org/wiki/Permanent_(mathematics)
.. [3] https://reference.wolfram.com/language/ref/Permanent.html
.. [4] Permanent of a rectangular matrix : https://arxiv.org/pdf/0904.3251.pdf"""
[end of new definitions in sympy/matrices/determinant.py]
[start of new definitions in sympy/matrices/expressions/determinant.py]
(definition of Permanent:)
class Permanent(Expr):
"""Matrix Permanent
Represents the permanent of a matrix expression.
Examples
========
>>> from sympy import MatrixSymbol, Permanent, ones
>>> A = MatrixSymbol('A', 3, 3)
>>> Permanent(A)
Permanent(A)
>>> Permanent(ones(3, 3)).doit()
6"""
(definition of Permanent.__new__:)
def __new__(cls, mat):
(definition of Permanent.arg:)
def arg(self):
(definition of Permanent.doit:)
def doit(self, expand=False):
(definition of per:)
def per(matexpr):
"""Matrix Permanent
Examples
========
>>> from sympy import MatrixSymbol, Matrix, per, ones
>>> A = MatrixSymbol('A', 3, 3)
>>> per(A)
Permanent(A)
>>> per(ones(5, 5))
120
>>> M = Matrix([1, 2, 5])
>>> per(M)
8"""
[end of new definitions in sympy/matrices/expressions/determinant.py]
[start of new definitions in sympy/matrices/matrices.py]
(definition of MatrixDeterminant.per:)
def per(self):
[end of new definitions in sympy/matrices/matrices.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 284db64e44124856accce4abff75d2d8990c301d | ||
scikit-learn__scikit-learn-18736 | 18,736 | scikit-learn/scikit-learn | 1.0 | bf0886bae0ccbc8c5d285b6e2affe7e40474f970 | 2020-11-02T20:40:00Z | diff --git a/doc/whats_new/v1.0.rst b/doc/whats_new/v1.0.rst
index 0cd1d6a89d158..6e3c063a45dcb 100644
--- a/doc/whats_new/v1.0.rst
+++ b/doc/whats_new/v1.0.rst
@@ -76,6 +76,11 @@ Changelog
- For :class:`tree.ExtraTreeRegressor`, `criterion="mse"` is deprecated,
use `"squared_error"` instead which is now the default.
+:mod:`sklearn.base`
+...................
+
+- |Fix| :func:`config_context` is now threadsafe. :pr:`18736` by `Thomas Fan`_.
+
:mod:`sklearn.calibration`
..........................
diff --git a/sklearn/_config.py b/sklearn/_config.py
index feb5e86287c38..e81d50849db05 100644
--- a/sklearn/_config.py
+++ b/sklearn/_config.py
@@ -2,6 +2,7 @@
"""
import os
from contextlib import contextmanager as contextmanager
+import threading
_global_config = {
'assume_finite': bool(os.environ.get('SKLEARN_ASSUME_FINITE', False)),
@@ -9,6 +10,15 @@
'print_changed_only': True,
'display': 'text',
}
+_threadlocal = threading.local()
+
+
+def _get_threadlocal_config():
+ """Get a threadlocal **mutable** configuration. If the configuration
+ does not exist, copy the default global configuration."""
+ if not hasattr(_threadlocal, 'global_config'):
+ _threadlocal.global_config = _global_config.copy()
+ return _threadlocal.global_config
def get_config():
@@ -24,7 +34,9 @@ def get_config():
config_context : Context manager for global scikit-learn configuration.
set_config : Set global scikit-learn configuration.
"""
- return _global_config.copy()
+ # Return a copy of the threadlocal configuration so that users will
+ # not be able to modify the configuration with the returned dict.
+ return _get_threadlocal_config().copy()
def set_config(assume_finite=None, working_memory=None,
@@ -72,14 +84,16 @@ def set_config(assume_finite=None, working_memory=None,
config_context : Context manager for global scikit-learn configuration.
get_config : Retrieve current values of the global configuration.
"""
+ local_config = _get_threadlocal_config()
+
if assume_finite is not None:
- _global_config['assume_finite'] = assume_finite
+ local_config['assume_finite'] = assume_finite
if working_memory is not None:
- _global_config['working_memory'] = working_memory
+ local_config['working_memory'] = working_memory
if print_changed_only is not None:
- _global_config['print_changed_only'] = print_changed_only
+ local_config['print_changed_only'] = print_changed_only
if display is not None:
- _global_config['display'] = display
+ local_config['display'] = display
@contextmanager
@@ -120,8 +134,7 @@ def config_context(**new_config):
Notes
-----
All settings, not just those presently modified, will be returned to
- their previous values when the context manager is exited. This is not
- thread-safe.
+ their previous values when the context manager is exited.
Examples
--------
@@ -141,7 +154,7 @@ def config_context(**new_config):
set_config : Set global scikit-learn configuration.
get_config : Retrieve current values of the global configuration.
"""
- old_config = get_config().copy()
+ old_config = get_config()
set_config(**new_config)
try:
| diff --git a/sklearn/tests/test_config.py b/sklearn/tests/test_config.py
index 22ec862ef24a3..6d458088a37a8 100644
--- a/sklearn/tests/test_config.py
+++ b/sklearn/tests/test_config.py
@@ -1,5 +1,13 @@
+import time
+from concurrent.futures import ThreadPoolExecutor
+
+from joblib import Parallel
+import joblib
import pytest
+
from sklearn import get_config, set_config, config_context
+from sklearn.utils.fixes import delayed
+from sklearn.utils.fixes import parse_version
def test_config_context():
@@ -76,3 +84,50 @@ def test_set_config():
# No unknown arguments
with pytest.raises(TypeError):
set_config(do_something_else=True)
+
+
+def set_assume_finite(assume_finite, sleep_duration):
+ """Return the value of assume_finite after waiting `sleep_duration`."""
+ with config_context(assume_finite=assume_finite):
+ time.sleep(sleep_duration)
+ return get_config()['assume_finite']
+
+
+@pytest.mark.parametrize("backend",
+ ["loky", "multiprocessing", "threading"])
+def test_config_threadsafe_joblib(backend):
+ """Test that the global config is threadsafe with all joblib backends.
+ Two jobs are spawned and sets assume_finite to two different values.
+ When the job with a duration 0.1s completes, the assume_finite value
+ should be the same as the value passed to the function. In other words,
+ it is not influenced by the other job setting assume_finite to True.
+ """
+
+ if (parse_version(joblib.__version__) < parse_version('0.12')
+ and backend == 'loky'):
+ pytest.skip('loky backend does not exist in joblib <0.12') # noqa
+
+ assume_finites = [False, True]
+ sleep_durations = [0.1, 0.2]
+
+ items = Parallel(backend=backend, n_jobs=2)(
+ delayed(set_assume_finite)(assume_finite, sleep_dur)
+ for assume_finite, sleep_dur
+ in zip(assume_finites, sleep_durations))
+
+ assert items == [False, True]
+
+
+def test_config_threadsafe():
+ """Uses threads directly to test that the global config does not change
+ between threads. Same test as `test_config_threadsafe_joblib` but with
+ `ThreadPoolExecutor`."""
+
+ assume_finites = [False, True]
+ sleep_durations = [0.1, 0.2]
+
+ with ThreadPoolExecutor(max_workers=2) as e:
+ items = [output for output in
+ e.map(set_assume_finite, assume_finites, sleep_durations)]
+
+ assert items == [False, True]
| diff --git a/doc/whats_new/v1.0.rst b/doc/whats_new/v1.0.rst
index 0cd1d6a89d158..6e3c063a45dcb 100644
--- a/doc/whats_new/v1.0.rst
+++ b/doc/whats_new/v1.0.rst
@@ -76,6 +76,11 @@ Changelog
- For :class:`tree.ExtraTreeRegressor`, `criterion="mse"` is deprecated,
use `"squared_error"` instead which is now the default.
+:mod:`sklearn.base`
+...................
+
+- |Fix| :func:`config_context` is now threadsafe. :pr:`18736` by `Thomas Fan`_.
+
:mod:`sklearn.calibration`
..........................
| [
{
"components": [
{
"doc": "Get a threadlocal **mutable** configuration. If the configuration\ndoes not exist, copy the default global configuration.",
"lines": [
16,
21
],
"name": "_get_threadlocal_config",
"signature": "def _get_threadlocal_con... | [
"sklearn/tests/test_config.py::test_config_threadsafe_joblib[threading]",
"sklearn/tests/test_config.py::test_config_threadsafe"
] | [
"sklearn/tests/test_config.py::test_config_context",
"sklearn/tests/test_config.py::test_config_context_exception",
"sklearn/tests/test_config.py::test_set_config",
"sklearn/tests/test_config.py::test_config_threadsafe_joblib[loky]",
"sklearn/tests/test_config.py::test_config_threadsafe_joblib[multiprocessi... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
ENH Makes global configuration thread local
This PR makes it so that the global config is thread local. This means that changing the global config in one thread does not change the global config on another thread.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sklearn/_config.py]
(definition of _get_threadlocal_config:)
def _get_threadlocal_config():
"""Get a threadlocal **mutable** configuration. If the configuration
does not exist, copy the default global configuration."""
[end of new definitions in sklearn/_config.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 3c732b9f6a77e95dfa6beb154ca2e1e7848b74f9 | |
falconry__falcon-1781 | 1,781 | falconry/falcon | null | 7009096bac1cfbd542a9e6c4d05b625607da81f2 | 2020-10-30T20:07:59Z | diff --git a/docs/_newsfragments/1583.breakingchange.rst b/docs/_newsfragments/1583.breakingchange.rst
new file mode 100644
index 000000000..fb1b07cfb
--- /dev/null
+++ b/docs/_newsfragments/1583.breakingchange.rst
@@ -0,0 +1,3 @@
+Referencing the class :class:`falcon.stream.BoundedStream` through the
+``falcon.request_helpers`` module is deprecated. It is now accessible from
+the module ``falcon.stream``.
diff --git a/docs/api/request_and_response_wsgi.rst b/docs/api/request_and_response_wsgi.rst
index ccca0d0dd..540f26f42 100644
--- a/docs/api/request_and_response_wsgi.rst
+++ b/docs/api/request_and_response_wsgi.rst
@@ -40,6 +40,9 @@ Request
.. autoclass:: falcon.Forwarded
:members:
+.. autoclass:: falcon.stream.BoundedStream
+ :members:
+
Response
--------
diff --git a/falcon/__init__.py b/falcon/__init__.py
index 05c559cb2..c90d70818 100644
--- a/falcon/__init__.py
+++ b/falcon/__init__.py
@@ -36,6 +36,7 @@
from falcon.redirects import * # NOQA
from falcon.http_error import HTTPError # NOQA
from falcon.http_status import HTTPStatus # NOQA
+from falcon.stream import BoundedStream # NOQA
from falcon.middleware import CORSMiddleware # NOQA
# NOTE(kgriffs): Ensure that "from falcon import uri" will import
diff --git a/falcon/media/multipart.py b/falcon/media/multipart.py
index 10b24db36..8097d50c8 100644
--- a/falcon/media/multipart.py
+++ b/falcon/media/multipart.py
@@ -19,12 +19,13 @@
from urllib.parse import unquote_to_bytes
from falcon import errors
-from falcon import request_helpers
from falcon.media.base import BaseHandler
+from falcon.stream import BoundedStream
from falcon.util import BufferedReader
from falcon.util import misc
from falcon.util.deprecation import deprecated_args
+
# TODO(vytas):
# * Better support for form-wide charset setting
# * Clean up, simplify, and optimize BufferedReader
@@ -381,7 +382,7 @@ def __init__(self, stream, boundary, content_length, parse_options):
# This approach makes testing both the Cythonized and pure-Python
# streams easier within the same test/benchmark suite.
if not hasattr(stream, 'read_until'):
- if isinstance(stream, request_helpers.BoundedStream):
+ if isinstance(stream, BoundedStream):
stream = BufferedReader(stream.stream.read, content_length)
else:
stream = BufferedReader(stream.read, content_length)
diff --git a/falcon/request.py b/falcon/request.py
index a9f3387ff..23557631d 100644
--- a/falcon/request.py
+++ b/falcon/request.py
@@ -25,6 +25,7 @@
from falcon.forwarded import Forwarded # NOQA
from falcon.media import Handlers
from falcon.media.json import _DEFAULT_JSON_HANDLER
+from falcon.stream import BoundedStream
from falcon.util import structures
from falcon.util.misc import isascii
from falcon.util.uri import parse_host, parse_query_string
@@ -1817,7 +1818,7 @@ def _get_wrapped_wsgi_input(self):
# but it had an invalid value. Assume no content.
content_length = 0
- return helpers.BoundedStream(self.env['wsgi.input'], content_length)
+ return BoundedStream(self.env['wsgi.input'], content_length)
def _parse_form_urlencoded(self):
content_length = self.content_length
diff --git a/falcon/request_helpers.py b/falcon/request_helpers.py
index f0e3eeea7..b273feb9e 100644
--- a/falcon/request_helpers.py
+++ b/falcon/request_helpers.py
@@ -15,9 +15,11 @@
"""Utilities for the Request class."""
from http import cookies as http_cookies
-import io
import re
+# TODO: BoundedStream import here is for backwards-compatibility
+# and it should be removed in Falcon 4.0
+from falcon.stream import BoundedStream, Body # NOQA
from falcon.util import ETag
# https://tools.ietf.org/html/rfc6265#section-4.1.1
@@ -169,153 +171,3 @@ def _parse_etags(etag_str):
# are all set to nothing, and so therefore basically should be
# treated as not having been set in the first place.
return etags or None
-
-
-class BoundedStream(io.IOBase):
- """Wrap *wsgi.input* streams to make them more robust.
-
- ``socket._fileobject`` and ``io.BufferedReader`` are sometimes used
- to implement *wsgi.input*. However, app developers are often burned
- by the fact that the `read()` method for these objects block
- indefinitely if either no size is passed, or a size greater than
- the request's content length is passed to the method.
-
- This class normalizes *wsgi.input* behavior between WSGI servers
- by implementing non-blocking behavior for the cases mentioned
- above. The caller is not allowed to read more than the number of
- bytes specified by the Content-Length header in the request.
-
- Args:
- stream: Instance of ``socket._fileobject`` from
- ``environ['wsgi.input']``
- stream_len: Expected content length of the stream.
-
- Attributes:
- eof (bool): ``True`` if there is no more data to read from
- the stream, otherwise ``False``.
- is_exhausted (bool): Deprecated alias for `eof`.
-
- """
-
- def __init__(self, stream, stream_len):
- self.stream = stream
- self.stream_len = stream_len
-
- self._bytes_remaining = self.stream_len
-
- def __iter__(self):
- return self
-
- def __next__(self):
- return next(self.stream)
-
- next = __next__
-
- def _read(self, size, target):
- """Proxy reads to the underlying stream.
-
- Args:
- size (int): Maximum number of bytes to read. Will be
- coerced, if None or -1, to the number of remaining bytes
- in the stream. Will likewise be coerced if greater than
- the number of remaining bytes, to avoid making a
- blocking call to the wrapped stream.
- target (callable): Once `size` has been fixed up, this function
- will be called to actually do the work.
-
- Returns:
- bytes: Data read from the stream, as returned by `target`.
-
- """
-
- # NOTE(kgriffs): Default to reading all remaining bytes if the
- # size is not specified or is out of bounds. This behaves
- # similarly to the IO streams passed in by non-wsgiref servers.
- if (size is None or size == -1 or size > self._bytes_remaining):
- size = self._bytes_remaining
-
- self._bytes_remaining -= size
- return target(size)
-
- def readable(self):
- """Return ``True`` always."""
- return True
-
- def seekable(self):
- """Return ``False`` always."""
- return False
-
- def writable(self):
- """Return ``False`` always."""
- return False
-
- def read(self, size=None):
- """Read from the stream.
-
- Args:
- size (int): Maximum number of bytes/characters to read.
- Defaults to reading until EOF.
-
- Returns:
- bytes: Data read from the stream.
-
- """
-
- return self._read(size, self.stream.read)
-
- def readline(self, limit=None):
- """Read a line from the stream.
-
- Args:
- limit (int): Maximum number of bytes/characters to read.
- Defaults to reading until EOF.
-
- Returns:
- bytes: Data read from the stream.
-
- """
-
- return self._read(limit, self.stream.readline)
-
- def readlines(self, hint=None):
- """Read lines from the stream.
-
- Args:
- hint (int): Maximum number of bytes/characters to read.
- Defaults to reading until EOF.
-
- Returns:
- bytes: Data read from the stream.
-
- """
-
- return self._read(hint, self.stream.readlines)
-
- def write(self, data):
- """Raise IOError always; writing is not supported."""
-
- raise IOError('Stream is not writeable')
-
- def exhaust(self, chunk_size=64 * 1024):
- """Exhaust the stream.
-
- This consumes all the data left until the limit is reached.
-
- Args:
- chunk_size (int): The size for a chunk (default: 64 KB).
- It will read the chunk until the stream is exhausted.
- """
- while True:
- chunk = self.read(chunk_size)
- if not chunk:
- break
-
- @property
- def eof(self):
- return self._bytes_remaining <= 0
-
- is_exhausted = eof
-
-
-# NOTE(kgriffs): Alias for backwards-compat
-Body = BoundedStream
diff --git a/falcon/stream.py b/falcon/stream.py
new file mode 100644
index 000000000..97b2c8911
--- /dev/null
+++ b/falcon/stream.py
@@ -0,0 +1,169 @@
+# Copyright 2013 by Rackspace Hosting, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""WSGI BoundedStream class."""
+
+import io
+
+__all__ = ['BoundedStream']
+
+
+class BoundedStream(io.IOBase):
+ """Wrap *wsgi.input* streams to make them more robust.
+
+ ``socket._fileobject`` and ``io.BufferedReader`` are sometimes used
+ to implement *wsgi.input*. However, app developers are often burned
+ by the fact that the `read()` method for these objects block
+ indefinitely if either no size is passed, or a size greater than
+ the request's content length is passed to the method.
+
+ This class normalizes *wsgi.input* behavior between WSGI servers
+ by implementing non-blocking behavior for the cases mentioned
+ above. The caller is not allowed to read more than the number of
+ bytes specified by the Content-Length header in the request.
+
+ Args:
+ stream: Instance of ``socket._fileobject`` from
+ ``environ['wsgi.input']``
+ stream_len: Expected content length of the stream.
+
+ Attributes:
+ eof (bool): ``True`` if there is no more data to read from
+ the stream, otherwise ``False``.
+ is_exhausted (bool): Deprecated alias for `eof`.
+
+ """
+
+ def __init__(self, stream, stream_len):
+ self.stream = stream
+ self.stream_len = stream_len
+
+ self._bytes_remaining = self.stream_len
+
+ def __iter__(self):
+ return self
+
+ def __next__(self):
+ return next(self.stream)
+
+ next = __next__
+
+ def _read(self, size, target):
+ """Proxy reads to the underlying stream.
+
+ Args:
+ size (int): Maximum number of bytes to read. Will be
+ coerced, if None or -1, to the number of remaining bytes
+ in the stream. Will likewise be coerced if greater than
+ the number of remaining bytes, to avoid making a
+ blocking call to the wrapped stream.
+ target (callable): Once `size` has been fixed up, this function
+ will be called to actually do the work.
+
+ Returns:
+ bytes: Data read from the stream, as returned by `target`.
+
+ """
+
+ # NOTE(kgriffs): Default to reading all remaining bytes if the
+ # size is not specified or is out of bounds. This behaves
+ # similarly to the IO streams passed in by non-wsgiref servers.
+ if (size is None or size == -1 or size > self._bytes_remaining):
+ size = self._bytes_remaining
+
+ self._bytes_remaining -= size
+ return target(size)
+
+ def readable(self):
+ """Return ``True`` always."""
+ return True
+
+ def seekable(self):
+ """Return ``False`` always."""
+ return False
+
+ def writable(self):
+ """Return ``False`` always."""
+ return False
+
+ def read(self, size=None):
+ """Read from the stream.
+
+ Args:
+ size (int): Maximum number of bytes/characters to read.
+ Defaults to reading until EOF.
+
+ Returns:
+ bytes: Data read from the stream.
+
+ """
+
+ return self._read(size, self.stream.read)
+
+ def readline(self, limit=None):
+ """Read a line from the stream.
+
+ Args:
+ limit (int): Maximum number of bytes/characters to read.
+ Defaults to reading until EOF.
+
+ Returns:
+ bytes: Data read from the stream.
+
+ """
+
+ return self._read(limit, self.stream.readline)
+
+ def readlines(self, hint=None):
+ """Read lines from the stream.
+
+ Args:
+ hint (int): Maximum number of bytes/characters to read.
+ Defaults to reading until EOF.
+
+ Returns:
+ bytes: Data read from the stream.
+
+ """
+
+ return self._read(hint, self.stream.readlines)
+
+ def write(self, data):
+ """Raise IOError always; writing is not supported."""
+
+ raise IOError('Stream is not writeable')
+
+ def exhaust(self, chunk_size=64 * 1024):
+ """Exhaust the stream.
+
+ This consumes all the data left until the limit is reached.
+
+ Args:
+ chunk_size (int): The size for a chunk (default: 64 KB).
+ It will read the chunk until the stream is exhausted.
+ """
+ while True:
+ chunk = self.read(chunk_size)
+ if not chunk:
+ break
+
+ @property
+ def eof(self):
+ return self._bytes_remaining <= 0
+
+ is_exhausted = eof
+
+
+# NOTE(kgriffs): Alias for backwards-compat
+Body = BoundedStream
| diff --git a/tests/test_boundedstream.py b/tests/test_boundedstream.py
index f2a8cb7b9..3c9ec2091 100644
--- a/tests/test_boundedstream.py
+++ b/tests/test_boundedstream.py
@@ -2,7 +2,7 @@
import pytest
-from falcon.request_helpers import BoundedStream
+from falcon.stream import BoundedStream
@pytest.fixture
diff --git a/tests/test_deprecations.py b/tests/test_deprecations.py
new file mode 100644
index 000000000..09570a3b0
--- /dev/null
+++ b/tests/test_deprecations.py
@@ -0,0 +1,6 @@
+from falcon import request_helpers, stream
+
+
+def test_bounded_stream():
+ assert request_helpers.Body is stream.Body
+ assert request_helpers.BoundedStream is stream.BoundedStream
diff --git a/tests/test_request_body.py b/tests/test_request_body.py
index eb33f1be4..43ecab12b 100644
--- a/tests/test_request_body.py
+++ b/tests/test_request_body.py
@@ -6,6 +6,7 @@
import falcon
from falcon import request_helpers
import falcon.request
+from falcon.stream import BoundedStream
import falcon.testing as testing
SIZE_1_KB = 1024
@@ -26,7 +27,7 @@ class TestRequestBody:
def _get_wrapped_stream(self, req):
# Getting wrapped wsgi.input:
stream = req.stream
- if isinstance(stream, request_helpers.BoundedStream):
+ if isinstance(stream, BoundedStream):
stream = stream.stream
if isinstance(stream, InputWrapper):
stream = stream.input
| diff --git a/docs/_newsfragments/1583.breakingchange.rst b/docs/_newsfragments/1583.breakingchange.rst
new file mode 100644
index 000000000..fb1b07cfb
--- /dev/null
+++ b/docs/_newsfragments/1583.breakingchange.rst
@@ -0,0 +1,3 @@
+Referencing the class :class:`falcon.stream.BoundedStream` through the
+``falcon.request_helpers`` module is deprecated. It is now accessible from
+the module ``falcon.stream``.
diff --git a/docs/api/request_and_response_wsgi.rst b/docs/api/request_and_response_wsgi.rst
index ccca0d0dd..540f26f42 100644
--- a/docs/api/request_and_response_wsgi.rst
+++ b/docs/api/request_and_response_wsgi.rst
@@ -40,6 +40,9 @@ Request
.. autoclass:: falcon.Forwarded
:members:
+.. autoclass:: falcon.stream.BoundedStream
+ :members:
+
Response
--------
| [
{
"components": [
{
"doc": "Wrap *wsgi.input* streams to make them more robust.\n\n``socket._fileobject`` and ``io.BufferedReader`` are sometimes used\nto implement *wsgi.input*. However, app developers are often burned\nby the fact that the `read()` method for these objects block\nindefinitely if... | [
"tests/test_boundedstream.py::test_not_writable",
"tests/test_deprecations.py::test_bounded_stream",
"tests/test_request_body.py::TestRequestBody::test_empty_body",
"tests/test_request_body.py::TestRequestBody::test_tiny_body",
"tests/test_request_body.py::TestRequestBody::test_tiny_body_overflow",
"tests... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
refactor(request_helpers): move BoundedStream to stream module (#1583) v2
Continuing from https://github.com/falconry/falcon/pull/1732
# Summary of Changes
Closes #1583
This PR moves the WSGI `BoundedStream` to its own module, `stream.py`. The code has been properly refactored and all tests are passing.
- Now you can import the `BoundedStream` class as `falcon.stream.BoundedStream`
# Pull Request Checklist
- [x] Applied changes to both WSGI and ASGI code paths and interfaces (where applicable).
- [x] Added **tests** for changed code.
- [x] Prefixed code comments with GitHub nick and an appropriate prefix.
- [x] Coding style is consistent with the rest of the framework.
- [x] Updated **documentation** for changed code.
- [x] Added docstrings for any new classes, functions, or modules.
- [x] Updated docstrings for any modifications to existing code.
- [x] Updated both WSGI and ASGI docs (where applicable).
- [x] Added references to new classes, functions, or modules to the relevant RST file under `docs/`.
- [x] Updated all relevant supporting documentation files under `docs/`.
- [x] A copyright notice is included at the top of any new modules (using your own name or the name of your organization).
- [x] Changed/added classes/methods/functions have appropriate `versionadded`, `versionchanged`, or `deprecated` [directives](http://www.sphinx-doc.org/en/stable/usage/restructuredtext/directives.html?highlight=versionadded#directive-versionadded).
- [x] Changes (and possible deprecations) have [towncrier](https://towncrier.readthedocs.io/en/actual-freaking-docs/index.html) news fragments under `docs/_newsfragments/`, with the file name format `{issue_number}.{fragment_type}.rst`. (Run `towncrier --draft` to ensure it renders correctly.)
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in falcon/stream.py]
(definition of BoundedStream:)
class BoundedStream(io.IOBase):
"""Wrap *wsgi.input* streams to make them more robust.
``socket._fileobject`` and ``io.BufferedReader`` are sometimes used
to implement *wsgi.input*. However, app developers are often burned
by the fact that the `read()` method for these objects block
indefinitely if either no size is passed, or a size greater than
the request's content length is passed to the method.
This class normalizes *wsgi.input* behavior between WSGI servers
by implementing non-blocking behavior for the cases mentioned
above. The caller is not allowed to read more than the number of
bytes specified by the Content-Length header in the request.
Args:
stream: Instance of ``socket._fileobject`` from
``environ['wsgi.input']``
stream_len: Expected content length of the stream.
Attributes:
eof (bool): ``True`` if there is no more data to read from
the stream, otherwise ``False``.
is_exhausted (bool): Deprecated alias for `eof`."""
(definition of BoundedStream.__init__:)
def __init__(self, stream, stream_len):
(definition of BoundedStream.__iter__:)
def __iter__(self):
(definition of BoundedStream.__next__:)
def __next__(self):
(definition of BoundedStream._read:)
def _read(self, size, target):
"""Proxy reads to the underlying stream.
Args:
size (int): Maximum number of bytes to read. Will be
coerced, if None or -1, to the number of remaining bytes
in the stream. Will likewise be coerced if greater than
the number of remaining bytes, to avoid making a
blocking call to the wrapped stream.
target (callable): Once `size` has been fixed up, this function
will be called to actually do the work.
Returns:
bytes: Data read from the stream, as returned by `target`."""
(definition of BoundedStream.readable:)
def readable(self):
"""Return ``True`` always."""
(definition of BoundedStream.seekable:)
def seekable(self):
"""Return ``False`` always."""
(definition of BoundedStream.writable:)
def writable(self):
"""Return ``False`` always."""
(definition of BoundedStream.read:)
def read(self, size=None):
"""Read from the stream.
Args:
size (int): Maximum number of bytes/characters to read.
Defaults to reading until EOF.
Returns:
bytes: Data read from the stream."""
(definition of BoundedStream.readline:)
def readline(self, limit=None):
"""Read a line from the stream.
Args:
limit (int): Maximum number of bytes/characters to read.
Defaults to reading until EOF.
Returns:
bytes: Data read from the stream."""
(definition of BoundedStream.readlines:)
def readlines(self, hint=None):
"""Read lines from the stream.
Args:
hint (int): Maximum number of bytes/characters to read.
Defaults to reading until EOF.
Returns:
bytes: Data read from the stream."""
(definition of BoundedStream.write:)
def write(self, data):
"""Raise IOError always; writing is not supported."""
(definition of BoundedStream.exhaust:)
def exhaust(self, chunk_size=64 * 1024):
"""Exhaust the stream.
This consumes all the data left until the limit is reached.
Args:
chunk_size (int): The size for a chunk (default: 64 KB).
It will read the chunk until the stream is exhausted."""
(definition of BoundedStream.eof:)
def eof(self):
[end of new definitions in falcon/stream.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Move falcon.request_helpers.BoundedStream to its own module
The ASGI version is in its own module, so let's do the same thing on the WSGI side. It should make it a bit more on-topic to have it in a separate module, and make it easier for contributors to track it down. This will also let us "privatize" request_helpers in the future without affecting BoundedStream (which is obviously part of the public interface).
This is a breaking change, but shouldn't cause much trouble because apps generally just refer to this class via the req.bounded_stream instance.
Be sure to also update any relevant docs and include a news fragment noting the breaking change.
`falcon.request_helpers.BoundedStream ==> falcon.stream.BoundedStream`
----------
NB we'll also need to unify approaches and naming with the `BufferedStream` which was introduced in the multipart form parser proposal.
@vytas7 @kgriffs I can work on this.
--------------------
</issues> | 77d5e6394a88ead151c9469494749f95f06b24bf |
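For context on the class this instance relocates: a bounded stream's job is to clamp every read to the request's declared Content-Length, so that a `read()` with no size argument never blocks waiting on the socket. A rough stdlib-only sketch of that clamping behavior (an illustration of the idea, not Falcon's actual implementation):

```python
import io


class TinyBoundedStream:
    """Illustrative bounded reader: never reads past stream_len."""

    def __init__(self, stream, stream_len):
        self.stream = stream
        self._bytes_remaining = stream_len

    def read(self, size=None):
        # Coerce None/-1 (read-to-EOF) to the bytes still allowed, and
        # clamp oversized requests so we never block on the wrapped stream.
        if size is None or size == -1 or size > self._bytes_remaining:
            size = self._bytes_remaining
        data = self.stream.read(size)
        self._bytes_remaining -= len(data)
        return data

    @property
    def eof(self):
        return self._bytes_remaining <= 0


body = TinyBoundedStream(io.BytesIO(b"hello world, extra junk"), stream_len=11)
print(body.read(5))   # b'hello'
print(body.read())    # b' world' -- stops at the declared length
print(body.eof)       # True
```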
matplotlib__matplotlib-18829 | 18,829 | matplotlib/matplotlib | 3.3 | c1e021178a409c532b23cd2caa322c4d2e6c8763 | 2020-10-28T04:24:11Z | diff --git a/doc/users/next_whats_new/range_slider.rst b/doc/users/next_whats_new/range_slider.rst
new file mode 100644
index 000000000000..07ddae8e0626
--- /dev/null
+++ b/doc/users/next_whats_new/range_slider.rst
@@ -0,0 +1,4 @@
+New RangeSlider widget
+----------------------
+`.widgets.RangeSlider` allows for creating a slider that defines
+a range rather than a single value.
diff --git a/examples/widgets/range_slider.py b/examples/widgets/range_slider.py
new file mode 100644
index 000000000000..da17e4c1d314
--- /dev/null
+++ b/examples/widgets/range_slider.py
@@ -0,0 +1,68 @@
+"""
+======================================
+Thresholding an Image with RangeSlider
+======================================
+
+Using the RangeSlider widget to control the thresholding of an image.
+
+The RangeSlider widget can be used similarly to the `.widgets.Slider`
+widget. The major difference is that RangeSlider's ``val`` attribute
+is a tuple of floats ``(lower val, upper val)`` rather than a single float.
+"""
+
+import numpy as np
+import matplotlib.pyplot as plt
+from matplotlib.widgets import RangeSlider
+
+# generate a fake image
+np.random.seed(19680801)
+N = 128
+img = np.random.randn(N, N)
+
+fig, axs = plt.subplots(1, 2, figsize=(10, 5))
+plt.subplots_adjust(bottom=0.25)
+
+im = axs[0].imshow(img)
+axs[1].hist(img.flatten(), bins='auto')
+axs[1].set_title('Histogram of pixel intensities')
+
+# Create the RangeSlider
+slider_ax = plt.axes([0.20, 0.1, 0.60, 0.03])
+slider = RangeSlider(slider_ax, "Threshold", img.min(), img.max())
+
+# Create the Vertical lines on the histogram
+lower_limit_line = axs[1].axvline(slider.val[0], color='k')
+upper_limit_line = axs[1].axvline(slider.val[1], color='k')
+
+
+def update(val):
+ # The val passed to a callback by the RangeSlider will
+ # be a tuple of (min, max)
+
+ # Update the image's colormap
+ im.norm.vmin = val[0]
+ im.norm.vmax = val[1]
+
+ # Update the position of the vertical lines
+ lower_limit_line.set_xdata([val[0], val[0]])
+ upper_limit_line.set_xdata([val[1], val[1]])
+
+ # Redraw the figure to ensure it updates
+ fig.canvas.draw_idle()
+
+
+slider.on_changed(update)
+plt.show()
+
+#############################################################################
+#
+# ------------
+#
+# References
+# """"""""""
+#
+# The use of the following functions, methods, classes and modules is shown
+# in this example:
+
+import matplotlib
+matplotlib.widgets.RangeSlider
diff --git a/lib/matplotlib/widgets.py b/lib/matplotlib/widgets.py
index 1a983610b0a0..e97a5c1d6efc 100644
--- a/lib/matplotlib/widgets.py
+++ b/lib/matplotlib/widgets.py
@@ -229,7 +229,84 @@ def disconnect(self, cid):
self._observers.disconnect(cid)
-class Slider(AxesWidget):
+class SliderBase(AxesWidget):
+ """
+ The base class for constructing Slider widgets. Not intended for direct
+ usage.
+
+ For the slider to remain responsive you must maintain a reference to it.
+ """
+ def __init__(self, ax, orientation, closedmin, closedmax,
+ valmin, valmax, valfmt, dragging, valstep):
+ if ax.name == '3d':
+ raise ValueError('Sliders cannot be added to 3D Axes')
+
+ super().__init__(ax)
+ _api.check_in_list(['horizontal', 'vertical'], orientation=orientation)
+
+ self.orientation = orientation
+ self.closedmin = closedmin
+ self.closedmax = closedmax
+ self.valmin = valmin
+ self.valmax = valmax
+ self.valstep = valstep
+ self.drag_active = False
+ self.valfmt = valfmt
+
+ if orientation == "vertical":
+ ax.set_ylim((valmin, valmax))
+ axis = ax.yaxis
+ else:
+ ax.set_xlim((valmin, valmax))
+ axis = ax.xaxis
+
+ self._fmt = axis.get_major_formatter()
+ if not isinstance(self._fmt, ticker.ScalarFormatter):
+ self._fmt = ticker.ScalarFormatter()
+ self._fmt.set_axis(axis)
+ self._fmt.set_useOffset(False) # No additive offset.
+ self._fmt.set_useMathText(True) # x sign before multiplicative offset.
+
+ ax.set_xticks([])
+ ax.set_yticks([])
+ ax.set_navigate(False)
+ self.connect_event("button_press_event", self._update)
+ self.connect_event("button_release_event", self._update)
+ if dragging:
+ self.connect_event("motion_notify_event", self._update)
+ self._observers = cbook.CallbackRegistry()
+
+ def _stepped_value(self, val):
+ if isinstance(self.valstep, Number):
+ val = (self.valmin
+ + round((val - self.valmin) / self.valstep) * self.valstep)
+ elif self.valstep is not None:
+ valstep = np.asanyarray(self.valstep)
+ if valstep.ndim != 1:
+ raise ValueError(
+ f"valstep must have 1 dimension but has {valstep.ndim}"
+ )
+ val = valstep[np.argmin(np.abs(valstep - val))]
+ return val
+
+ def disconnect(self, cid):
+ """
+ Remove the observer with connection id *cid*
+
+ Parameters
+ ----------
+ cid : int
+ Connection id of the observer to be removed
+ """
+ self._observers.disconnect(cid)
+
+ def reset(self):
+ """Reset the slider to the initial value"""
+ if self.val != self.valinit:
+ self.set_val(self.valinit)
+
+
+class Slider(SliderBase):
"""
A slider representing a floating point range.
@@ -309,10 +386,8 @@ def __init__(self, ax, label, valmin, valmax, valinit=0.5, valfmt=None,
`.Rectangle` documentation for valid property names (``facecolor``,
``edgecolor``, ``alpha``, etc.).
"""
- if ax.name == '3d':
- raise ValueError('Sliders cannot be added to 3D Axes')
-
- super().__init__(ax)
+ super().__init__(ax, orientation, closedmin, closedmax,
+ valmin, valmax, valfmt, dragging, valstep)
if slidermin is not None and not hasattr(slidermin, 'val'):
raise ValueError(
@@ -320,17 +395,8 @@ def __init__(self, ax, label, valmin, valmax, valinit=0.5, valfmt=None,
if slidermax is not None and not hasattr(slidermax, 'val'):
raise ValueError(
f"Argument slidermax ({type(slidermax)}) has no 'val'")
- _api.check_in_list(['horizontal', 'vertical'], orientation=orientation)
-
- self.orientation = orientation
- self.closedmin = closedmin
- self.closedmax = closedmax
self.slidermin = slidermin
self.slidermax = slidermax
- self.drag_active = False
- self.valmin = valmin
- self.valmax = valmax
- self.valstep = valstep
valinit = self._value_in_bounds(valinit)
if valinit is None:
valinit = valmin
@@ -343,29 +409,6 @@ def __init__(self, ax, label, valmin, valmax, valinit=0.5, valfmt=None,
self.poly = ax.axvspan(valmin, valinit, 0, 1, **kwargs)
self.vline = ax.axvline(valinit, 0, 1, color=initcolor, lw=1)
- if orientation == 'vertical':
- ax.set_ylim((valmin, valmax))
- axis = ax.yaxis
- else:
- ax.set_xlim((valmin, valmax))
- axis = ax.xaxis
-
- self.valfmt = valfmt
- self._fmt = axis.get_major_formatter()
- if not isinstance(self._fmt, ticker.ScalarFormatter):
- self._fmt = ticker.ScalarFormatter()
- self._fmt.set_axis(axis)
- self._fmt.set_useOffset(False) # No additive offset.
- self._fmt.set_useMathText(True) # x sign before multiplicative offset.
-
- ax.set_xticks([])
- ax.set_yticks([])
- ax.set_navigate(False)
-
- self.connect_event('button_press_event', self._update)
- self.connect_event('button_release_event', self._update)
- if dragging:
- self.connect_event('motion_notify_event', self._update)
if orientation == 'vertical':
self.label = ax.text(0.5, 1.02, label, transform=ax.transAxes,
verticalalignment='bottom',
@@ -385,22 +428,11 @@ def __init__(self, ax, label, valmin, valmax, valinit=0.5, valfmt=None,
verticalalignment='center',
horizontalalignment='left')
- self._observers = cbook.CallbackRegistry()
-
self.set_val(valinit)
def _value_in_bounds(self, val):
"""Makes sure *val* is with given bounds."""
- if isinstance(self.valstep, Number):
- val = (self.valmin
- + round((val - self.valmin) / self.valstep) * self.valstep)
- elif self.valstep is not None:
- valstep = np.asanyarray(self.valstep)
- if valstep.ndim != 1:
- raise ValueError(
- f"valstep must have 1 dimension but has {valstep.ndim}"
- )
- val = valstep[np.argmin(np.abs(valstep - val))]
+ val = self._stepped_value(val)
if val <= self.valmin:
if not self.closedmin:
@@ -497,21 +529,290 @@ def on_changed(self, func):
"""
return self._observers.connect('changed', lambda val: func(val))
- def disconnect(self, cid):
+
+class RangeSlider(SliderBase):
+ """
+ A slider representing a range of floating point values. Defines the min and
+ max of the range via the *val* attribute as a tuple of (min, max).
+
+ Create a slider that defines a range contained within [*valmin*, *valmax*]
+ in axes *ax*. For the slider to remain responsive you must maintain a
+ reference to it. Call :meth:`on_changed` to connect to the slider event.
+
+ Attributes
+ ----------
+ val : tuple of float
+ Slider value.
+ """
+
+ def __init__(
+ self,
+ ax,
+ label,
+ valmin,
+ valmax,
+ valinit=None,
+ valfmt=None,
+ closedmin=True,
+ closedmax=True,
+ dragging=True,
+ valstep=None,
+ orientation="horizontal",
+ **kwargs,
+ ):
"""
- Remove the observer with connection id *cid*
+ Parameters
+ ----------
+ ax : Axes
+ The Axes to put the slider in.
+
+ label : str
+ Slider label.
+
+ valmin : float
+ The minimum value of the slider.
+
+ valmax : float
+ The maximum value of the slider.
+
+ valinit : tuple of float or None, default: None
+ The initial positions of the slider. If None the initial positions
+ will be at the 25th and 75th percentiles of the range.
+
+ valfmt : str, default: None
+ %-format string used to format the slider values. If None, a
+ `.ScalarFormatter` is used instead.
+
+ closedmin : bool, default: True
+ Whether the slider interval is closed on the bottom.
+
+ closedmax : bool, default: True
+ Whether the slider interval is closed on the top.
+
+ dragging : bool, default: True
+ If True the slider can be dragged by the mouse.
+
+ valstep : float, default: None
+ If given, the slider will snap to multiples of *valstep*.
+
+ orientation : {'horizontal', 'vertical'}, default: 'horizontal'
+ The orientation of the slider.
+
+ Notes
+ -----
+ Additional kwargs are passed on to ``self.poly`` which is the
+ `~matplotlib.patches.Rectangle` that draws the slider knob. See the
+ `.Rectangle` documentation for valid property names (``facecolor``,
+ ``edgecolor``, ``alpha``, etc.).
+ """
+ super().__init__(ax, orientation, closedmin, closedmax,
+ valmin, valmax, valfmt, dragging, valstep)
+
+ if valinit is None:
+ # Place at the 25th and 75th percentiles
+ extent = valmax - valmin
+ valinit = np.array(
+ [valmin + extent * 0.25, valmin + extent * 0.75]
+ )
+ else:
+ valinit = self._value_in_bounds(valinit)
+ self.val = valinit
+ self.valinit = valinit
+ if orientation == "vertical":
+ self.poly = ax.axhspan(valinit[0], valinit[1], 0, 1, **kwargs)
+ else:
+ self.poly = ax.axvspan(valinit[0], valinit[1], 0, 1, **kwargs)
+
+ if orientation == "vertical":
+ self.label = ax.text(
+ 0.5,
+ 1.02,
+ label,
+ transform=ax.transAxes,
+ verticalalignment="bottom",
+ horizontalalignment="center",
+ )
+
+ self.valtext = ax.text(
+ 0.5,
+ -0.02,
+ self._format(valinit),
+ transform=ax.transAxes,
+ verticalalignment="top",
+ horizontalalignment="center",
+ )
+ else:
+ self.label = ax.text(
+ -0.02,
+ 0.5,
+ label,
+ transform=ax.transAxes,
+ verticalalignment="center",
+ horizontalalignment="right",
+ )
+
+ self.valtext = ax.text(
+ 1.02,
+ 0.5,
+ self._format(valinit),
+ transform=ax.transAxes,
+ verticalalignment="center",
+ horizontalalignment="left",
+ )
+
+ self.set_val(valinit)
+
+ def _min_in_bounds(self, min):
+ """
+ Ensure the new min value is between valmin and self.val[1]
+ """
+ if min <= self.valmin:
+ if not self.closedmin:
+ return self.val[0]
+ min = self.valmin
+
+ if min > self.val[1]:
+ min = self.val[1]
+ return self._stepped_value(min)
+
+ def _max_in_bounds(self, max):
+ """
+ Ensure the new max value is between valmax and self.val[0]
+ """
+ if max >= self.valmax:
+ if not self.closedmax:
+ return self.val[1]
+ max = self.valmax
+
+ if max <= self.val[0]:
+ max = self.val[0]
+ return self._stepped_value(max)
+
+ def _value_in_bounds(self, val):
+ return (self._min_in_bounds(val[0]), self._max_in_bounds(val[1]))
+
+ def _update_val_from_pos(self, pos):
+ """
+ Given a position update the *val*
+ """
+ idx = np.argmin(np.abs(self.val - pos))
+ if idx == 0:
+ val = self._min_in_bounds(pos)
+ self.set_min(val)
+ else:
+ val = self._max_in_bounds(pos)
+ self.set_max(val)
+
+ def _update(self, event):
+ """Update the slider position."""
+ if self.ignore(event) or event.button != 1:
+ return
+
+ if event.name == "button_press_event" and event.inaxes == self.ax:
+ self.drag_active = True
+ event.canvas.grab_mouse(self.ax)
+
+ if not self.drag_active:
+ return
+
+ elif (event.name == "button_release_event") or (
+ event.name == "button_press_event" and event.inaxes != self.ax
+ ):
+ self.drag_active = False
+ event.canvas.release_mouse(self.ax)
+ return
+ if self.orientation == "vertical":
+ self._update_val_from_pos(event.ydata)
+ else:
+ self._update_val_from_pos(event.xdata)
+
+ def _format(self, val):
+ """Pretty-print *val*."""
+ if self.valfmt is not None:
+ return f"({self.valfmt % val[0]}, {self.valfmt % val[1]})"
+ else:
+ _, s1, s2, _ = self._fmt.format_ticks(
+ [self.valmin, *val, self.valmax]
+ )
+ # fmt.get_offset is actually the multiplicative factor, if any.
+ s1 += self._fmt.get_offset()
+ s2 += self._fmt.get_offset()
+ # Use f string to avoid issues with backslashes when cast to a str
+ return f"({s1}, {s2})"
+
+ def set_min(self, min):
+ """
+ Set the lower value of the slider to *min*
Parameters
----------
- cid : int
- Connection id of the observer to be removed
+ min : float
"""
- self._observers.disconnect(cid)
+ self.set_val((min, self.val[1]))
- def reset(self):
- """Reset the slider to the initial value"""
- if self.val != self.valinit:
- self.set_val(self.valinit)
+ def set_max(self, max):
+ """
+ Set the upper value of the slider to *max*
+
+ Parameters
+ ----------
+ max : float
+ """
+ self.set_val((self.val[0], max))
+
+ def set_val(self, val):
+ """
+ Set slider value to *val*
+
+ Parameters
+ ----------
+ val : tuple or arraylike of float
+ """
+ val = np.sort(np.asanyarray(val))
+ if val.shape != (2,):
+ raise ValueError(
+ f"val must have shape (2,) but has shape {val.shape}"
+ )
+ val[0] = self._min_in_bounds(val[0])
+ val[1] = self._max_in_bounds(val[1])
+ xy = self.poly.xy
+ if self.orientation == "vertical":
+ xy[0] = 0, val[0]
+ xy[1] = 0, val[1]
+ xy[2] = 1, val[1]
+ xy[3] = 1, val[0]
+ xy[4] = 0, val[0]
+ else:
+ xy[0] = val[0], 0
+ xy[1] = val[0], 1
+ xy[2] = val[1], 1
+ xy[3] = val[1], 0
+ xy[4] = val[0], 0
+ self.poly.xy = xy
+ self.valtext.set_text(self._format(val))
+ if self.drawon:
+ self.ax.figure.canvas.draw_idle()
+ self.val = val
+ if self.eventson:
+ self._observers.process("changed", val)
+
+ def on_changed(self, func):
+ """
+ When the slider value is changed call *func* with the new
+ slider value
+
+ Parameters
+ ----------
+ func : callable
+ Function to call when slider is changed. The function
+ must accept a numpy array with shape (2,) as its argument.
+
+ Returns
+ -------
+ int
+ Connection id (which can be used to disconnect *func*)
+ """
+ return self._observers.connect('changed', lambda val: func(val))
class CheckButtons(AxesWidget):
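One detail of the refactor worth calling out: `SliderBase._stepped_value` (hoisted out of `Slider` in the patch above) accepts either a scalar `valstep`, snapping to multiples measured from `valmin`, or a 1-D sequence of allowed values, snapping to the nearest one. A standalone sketch of that snapping logic (the real method uses NumPy; this version sticks to the standard library):

```python
from numbers import Number


def stepped_value(val, valmin, valstep):
    """Snap val roughly as SliderBase._stepped_value does (simplified)."""
    if isinstance(valstep, Number):
        # Snap to the nearest multiple of valstep, measured from valmin.
        return valmin + round((val - valmin) / valstep) * valstep
    elif valstep is not None:
        # A sequence of allowed values: snap to the nearest one.
        return min(valstep, key=lambda allowed: abs(allowed - val))
    return val


print(stepped_value(0.37, valmin=0.0, valstep=0.1))           # 0.4
print(stepped_value(0.37, valmin=0.0, valstep=[0, 0.25, 1]))  # 0.25
```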
| diff --git a/lib/matplotlib/tests/test_widgets.py b/lib/matplotlib/tests/test_widgets.py
index 0da6e6a52112..a90dc55d1e8e 100644
--- a/lib/matplotlib/tests/test_widgets.py
+++ b/lib/matplotlib/tests/test_widgets.py
@@ -298,6 +298,33 @@ def test_slider_horizontal_vertical():
assert_allclose(box.bounds, [0, 0, 1, 10/24])
+@pytest.mark.parametrize("orientation", ["horizontal", "vertical"])
+def test_range_slider(orientation):
+ if orientation == "vertical":
+ idx = [1, 0, 3, 2]
+ else:
+ idx = [0, 1, 2, 3]
+
+ fig, ax = plt.subplots()
+
+ slider = widgets.RangeSlider(
+ ax=ax, label="", valmin=0.0, valmax=1.0, orientation=orientation
+ )
+ box = slider.poly.get_extents().transformed(ax.transAxes.inverted())
+ assert_allclose(box.get_points().flatten()[idx], [0.25, 0, 0.75, 1])
+
+ slider.set_val((0.2, 0.6))
+ assert_allclose(slider.val, (0.2, 0.6))
+ box = slider.poly.get_extents().transformed(ax.transAxes.inverted())
+ assert_allclose(box.get_points().flatten()[idx], [0.2, 0, 0.6, 1])
+
+ slider.set_val((0.2, 0.1))
+ assert_allclose(slider.val, (0.1, 0.2))
+
+ slider.set_val((-1, 10))
+ assert_allclose(slider.val, (0, 1))
+
+
def check_polygon_selector(event_sequence, expected_result, selections_count):
"""
Helper function to test Polygon Selector.
| diff --git a/doc/users/next_whats_new/range_slider.rst b/doc/users/next_whats_new/range_slider.rst
new file mode 100644
index 000000000000..07ddae8e0626
--- /dev/null
+++ b/doc/users/next_whats_new/range_slider.rst
@@ -0,0 +1,4 @@
+New RangeSlider widget
+----------------------
+`.widgets.RangeSlider` allows for creating a slider that defines
+a range rather than a single value.
| [
{
"components": [
{
"doc": "",
"lines": [
38,
51
],
"name": "update",
"signature": "def update(val):",
"type": "function"
}
],
"file": "examples/widgets/range_slider.py"
},
{
"components": [
{
"doc": "T... | [
"lib/matplotlib/tests/test_widgets.py::test_range_slider[horizontal]",
"lib/matplotlib/tests/test_widgets.py::test_range_slider[vertical]"
] | [
"lib/matplotlib/tests/test_widgets.py::test_rectangle_selector",
"lib/matplotlib/tests/test_widgets.py::test_ellipse",
"lib/matplotlib/tests/test_widgets.py::test_rectangle_handles",
"lib/matplotlib/tests/test_widgets.py::test_span_selector",
"lib/matplotlib/tests/test_widgets.py::test_lasso_selector",
"l... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Create a RangeSlider widget
## PR Summary
Closes: https://github.com/matplotlib/matplotlib/issues/18563
Creates a slider widget that has a movable min and a movable max. This enables doing things like this:

The `RangeSlider` is analogous to `Slider`, with the major difference being that `val` is an array rather than a float. To accomplish this I made a new class `_SliderBase` from which both sliders inherit their callbacks and other shared functionality.
## PR Checklist
<!-- Please mark any checkboxes that do not apply to this PR as [N/A]. -->
- [x] Has pytest style unit tests (and `pytest` passes).
- Is [Flake 8](https://flake8.pycqa.org/en/latest/) compliant (run `flake8` on changed files to check).
- [x] code and tests
- [x] example - keeps complaining about the import for the references
- [x] New features are documented, with examples if plot related.
- [x] Documentation is sphinx and numpydoc compliant (the docs should [build](https://matplotlib.org/devel/documenting_mpl.html#building-the-docs) without error).
- [x] Conforms to Matplotlib style conventions (install `flake8-docstrings` and run `flake8 --docstring-convention=all`).
- [x] New features have an entry in `doc/users/next_whats_new/` (follow instructions in README.rst there).
- [N/A] API changes documented in `doc/api/next_api_changes/` (follow instructions in README.rst there).
<!--
Thank you so much for your PR! To help us review your contribution, please
consider the following points:
- A development guide is available at https://matplotlib.org/devdocs/devel/index.html.
- Help with git and github is available at
https://matplotlib.org/devel/gitwash/development_workflow.html.
- Do not create the PR out of master, but out of a separate branch.
- The PR title should summarize the changes, for example "Raise ValueError on
non-numeric input to set_xlim". Avoid non-descriptive titles such as
"Addresses issue #8576".
- The summary should provide at least 1-2 sentences describing the pull request
in detail (Why is this change required? What problem does it solve?) and
link to any relevant issues.
- If you are contributing fixes to docstrings, please pay attention to
http://matplotlib.org/devel/documenting_mpl.html#formatting. In particular,
note the difference between using single backquotes, double backquotes, and
asterisks in the markup.
We understand that PRs can sometimes be overwhelming, especially as the
reviews start coming in. Please let us know if the reviews are unclear or
the recommended next step seems overly demanding, if you would like help in
addressing a reviewer's comments, or if you have been waiting too long to hear
back on your PR.
-->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in examples/widgets/range_slider.py]
(definition of update:)
def update(val):
[end of new definitions in examples/widgets/range_slider.py]
[start of new definitions in lib/matplotlib/widgets.py]
(definition of SliderBase:)
class SliderBase(AxesWidget):
"""The base class for constructing Slider widgets. Not intended for direct
usage.
For the slider to remain responsive you must maintain a reference to it."""
(definition of SliderBase.__init__:)
def __init__(self, ax, orientation, closedmin, closedmax, valmin, valmax, valfmt, dragging, valstep):
(definition of SliderBase._stepped_value:)
def _stepped_value(self, val):
(definition of SliderBase.disconnect:)
def disconnect(self, cid):
"""Remove the observer with connection id *cid*
Parameters
----------
cid : int
Connection id of the observer to be removed"""
(definition of SliderBase.reset:)
def reset(self):
"""Reset the slider to the initial value"""
(definition of RangeSlider:)
class RangeSlider(SliderBase):
"""A slider representing a range of floating point values. Defines the min and
max of the range via the *val* attribute as a tuple of (min, max).
Create a slider that defines a range contained within [*valmin*, *valmax*]
in axes *ax*. For the slider to remain responsive you must maintain a
reference to it. Call :meth:`on_changed` to connect to the slider event.
Attributes
----------
val : tuple of float
Slider value."""
(definition of RangeSlider.__init__:)
def __init__( self, ax, label, valmin, valmax, valinit=None, valfmt=None, closedmin=True, closedmax=True, dragging=True, valstep=None, orientation="horizontal", **kwargs, ):
"""Parameters
----------
ax : Axes
The Axes to put the slider in.
label : str
Slider label.
valmin : float
The minimum value of the slider.
valmax : float
The maximum value of the slider.
valinit : tuple of float or None, default: None
The initial positions of the slider. If None the initial positions
will be at the 25th and 75th percentiles of the range.
valfmt : str, default: None
%-format string used to format the slider values. If None, a
`.ScalarFormatter` is used instead.
closedmin : bool, default: True
Whether the slider interval is closed on the bottom.
closedmax : bool, default: True
Whether the slider interval is closed on the top.
dragging : bool, default: True
If True the slider can be dragged by the mouse.
valstep : float, default: None
If given, the slider will snap to multiples of *valstep*.
orientation : {'horizontal', 'vertical'}, default: 'horizontal'
The orientation of the slider.
Notes
-----
Additional kwargs are passed on to ``self.poly`` which is the
`~matplotlib.patches.Rectangle` that draws the slider knob. See the
`.Rectangle` documentation for valid property names (``facecolor``,
``edgecolor``, ``alpha``, etc.)."""
(definition of RangeSlider._min_in_bounds:)
def _min_in_bounds(self, min):
"""Ensure the new min value is between valmin and self.val[1]"""
(definition of RangeSlider._max_in_bounds:)
def _max_in_bounds(self, max):
"""Ensure the new max value is between valmax and self.val[0]"""
(definition of RangeSlider._value_in_bounds:)
def _value_in_bounds(self, val):
(definition of RangeSlider._update_val_from_pos:)
def _update_val_from_pos(self, pos):
"""Given a position update the *val*"""
(definition of RangeSlider._update:)
def _update(self, event):
"""Update the slider position."""
(definition of RangeSlider._format:)
def _format(self, val):
"""Pretty-print *val*."""
(definition of RangeSlider.set_min:)
def set_min(self, min):
"""Set the lower value of the slider to *min*
Parameters
----------
min : float"""
(definition of RangeSlider.set_max:)
def set_max(self, max):
"""Set the lower value of the slider to *max*
Parameters
----------
max : float"""
(definition of RangeSlider.set_val:)
def set_val(self, val):
"""Set slider value to *val*
Parameters
----------
val : tuple or arraylike of float"""
(definition of RangeSlider.on_changed:)
def on_changed(self, func):
"""When the slider value is changed call *func* with the new
slider value
Parameters
----------
func : callable
Function to call when slider is changed. The function
must accept a numpy array with shape (2,) as its argument.
Returns
-------
int
Connection id (which can be used to disconnect *func*)"""
[end of new definitions in lib/matplotlib/widgets.py]
</definitions>
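The `_min_in_bounds` / `_max_in_bounds` helpers listed above amount to clamping each handle between the slider limit and the opposite handle. A minimal sketch of that logic follows; the function names are illustrative, not the widget's API, and the real widget additionally handles open/closed endpoints and `valstep` snapping:

```python
def clamp(value, lo, hi):
    """Constrain value to the closed interval [lo, hi]."""
    return max(lo, min(value, hi))

def min_in_bounds(new_min, valmin, current_max):
    # The lower handle may not drop below valmin or pass the upper handle.
    return clamp(new_min, valmin, current_max)

def max_in_bounds(new_max, valmax, current_min):
    # The upper handle may not exceed valmax or pass the lower handle.
    return clamp(new_max, current_min, valmax)
```

`set_min` / `set_max` would then feed the clamped value back into the stored (min, max) pair before redrawing.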
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 59b32afde60e46407b60c766b878c840a9bfa490 | |
dpkp__kafka-python-2151 | 2,151 | dpkp/kafka-python | null | 53dc740bce8ef19c32fad2881021d1f6bb055f7a | 2020-10-27T19:20:00Z | diff --git a/kafka/protocol/__init__.py b/kafka/protocol/__init__.py
index e739b5cb1..025447f99 100644
--- a/kafka/protocol/__init__.py
+++ b/kafka/protocol/__init__.py
@@ -43,5 +43,7 @@
40: 'ExpireDelegationToken',
41: 'DescribeDelegationToken',
42: 'DeleteGroups',
+ 45: 'AlterPartitionReassignments',
+ 46: 'ListPartitionReassignments',
48: 'DescribeClientQuotas',
}
diff --git a/kafka/protocol/admin.py b/kafka/protocol/admin.py
index 63a3327a6..f9d61e5cd 100644
--- a/kafka/protocol/admin.py
+++ b/kafka/protocol/admin.py
@@ -1,7 +1,7 @@
from __future__ import absolute_import
from kafka.protocol.api import Request, Response
-from kafka.protocol.types import Array, Boolean, Bytes, Int8, Int16, Int32, Int64, Schema, String, Float64
+from kafka.protocol.types import Array, Boolean, Bytes, Int8, Int16, Int32, Int64, Schema, String, Float64, CompactString, CompactArray, TaggedFields
class ApiVersionResponse_v0(Response):
@@ -963,3 +963,92 @@ class DescribeClientQuotasRequest_v0(Request):
DescribeClientQuotasResponse = [
DescribeClientQuotasResponse_v0,
]
+
+
+class AlterPartitionReassignmentsResponse_v0(Response):
+ API_KEY = 45
+ API_VERSION = 0
+ SCHEMA = Schema(
+ ("throttle_time_ms", Int32),
+ ("error_code", Int16),
+ ("error_message", CompactString("utf-8")),
+ ("responses", CompactArray(
+ ("name", CompactString("utf-8")),
+ ("partitions", CompactArray(
+ ("partition_index", Int32),
+ ("error_code", Int16),
+ ("error_message", CompactString("utf-8")),
+ ("tags", TaggedFields)
+ )),
+ ("tags", TaggedFields)
+ )),
+ ("tags", TaggedFields)
+ )
+
+
+class AlterPartitionReassignmentsRequest_v0(Request):
+ FLEXIBLE_VERSION = True
+ API_KEY = 45
+ API_VERSION = 0
+ RESPONSE_TYPE = AlterPartitionReassignmentsResponse_v0
+ SCHEMA = Schema(
+ ("timeout_ms", Int32),
+ ("topics", CompactArray(
+ ("name", CompactString("utf-8")),
+ ("partitions", CompactArray(
+ ("partition_index", Int32),
+ ("replicas", CompactArray(Int32)),
+ ("tags", TaggedFields)
+ )),
+ ("tags", TaggedFields)
+ )),
+ ("tags", TaggedFields)
+ )
+
+
+AlterPartitionReassignmentsRequest = [AlterPartitionReassignmentsRequest_v0]
+
+AlterPartitionReassignmentsResponse = [AlterPartitionReassignmentsResponse_v0]
+
+
+class ListPartitionReassignmentsResponse_v0(Response):
+ API_KEY = 46
+ API_VERSION = 0
+ SCHEMA = Schema(
+ ("throttle_time_ms", Int32),
+ ("error_code", Int16),
+ ("error_message", CompactString("utf-8")),
+ ("topics", CompactArray(
+ ("name", CompactString("utf-8")),
+ ("partitions", CompactArray(
+ ("partition_index", Int32),
+ ("replicas", CompactArray(Int32)),
+ ("adding_replicas", CompactArray(Int32)),
+ ("removing_replicas", CompactArray(Int32)),
+ ("tags", TaggedFields)
+ )),
+ ("tags", TaggedFields)
+ )),
+ ("tags", TaggedFields)
+ )
+
+
+class ListPartitionReassignmentsRequest_v0(Request):
+ FLEXIBLE_VERSION = True
+ API_KEY = 46
+ API_VERSION = 0
+ RESPONSE_TYPE = ListPartitionReassignmentsResponse_v0
+ SCHEMA = Schema(
+ ("timeout_ms", Int32),
+ ("topics", CompactArray(
+ ("name", CompactString("utf-8")),
+ ("partition_index", CompactArray(Int32)),
+ ("tags", TaggedFields)
+ )),
+ ("tags", TaggedFields)
+ )
+
+
+ListPartitionReassignmentsRequest = [ListPartitionReassignmentsRequest_v0]
+
+ListPartitionReassignmentsResponse = [ListPartitionReassignmentsResponse_v0]
diff --git a/kafka/protocol/api.py b/kafka/protocol/api.py
index 64276fc17..f12cb972b 100644
--- a/kafka/protocol/api.py
+++ b/kafka/protocol/api.py
@@ -3,7 +3,7 @@
import abc
from kafka.protocol.struct import Struct
-from kafka.protocol.types import Int16, Int32, String, Schema, Array
+from kafka.protocol.types import Int16, Int32, String, Schema, Array, TaggedFields
class RequestHeader(Struct):
@@ -20,9 +20,40 @@ def __init__(self, request, correlation_id=0, client_id='kafka-python'):
)
+class RequestHeaderV2(Struct):
+ # Flexible response / request headers end in field buffer
+ SCHEMA = Schema(
+ ('api_key', Int16),
+ ('api_version', Int16),
+ ('correlation_id', Int32),
+ ('client_id', String('utf-8')),
+ ('tags', TaggedFields),
+ )
+
+ def __init__(self, request, correlation_id=0, client_id='kafka-python', tags=None):
+ super(RequestHeaderV2, self).__init__(
+ request.API_KEY, request.API_VERSION, correlation_id, client_id, tags or {}
+ )
+
+
+class ResponseHeader(Struct):
+ SCHEMA = Schema(
+ ('correlation_id', Int32),
+ )
+
+
+class ResponseHeaderV2(Struct):
+ SCHEMA = Schema(
+ ('correlation_id', Int32),
+ ('tags', TaggedFields),
+ )
+
+
class Request(Struct):
__metaclass__ = abc.ABCMeta
+ FLEXIBLE_VERSION = False
+
@abc.abstractproperty
def API_KEY(self):
"""Integer identifier for api request"""
@@ -50,6 +81,16 @@ def expect_response(self):
def to_object(self):
return _to_object(self.SCHEMA, self)
+ def build_request_header(self, correlation_id, client_id):
+ if self.FLEXIBLE_VERSION:
+ return RequestHeaderV2(self, correlation_id=correlation_id, client_id=client_id)
+ return RequestHeader(self, correlation_id=correlation_id, client_id=client_id)
+
+ def parse_response_header(self, read_buffer):
+ if self.FLEXIBLE_VERSION:
+ return ResponseHeaderV2.decode(read_buffer)
+ return ResponseHeader.decode(read_buffer)
+
class Response(Struct):
__metaclass__ = abc.ABCMeta
diff --git a/kafka/protocol/parser.py b/kafka/protocol/parser.py
index cfee0466d..a9e767220 100644
--- a/kafka/protocol/parser.py
+++ b/kafka/protocol/parser.py
@@ -4,10 +4,9 @@
import logging
import kafka.errors as Errors
-from kafka.protocol.api import RequestHeader
from kafka.protocol.commit import GroupCoordinatorResponse
from kafka.protocol.frame import KafkaBytes
-from kafka.protocol.types import Int32
+from kafka.protocol.types import Int32, TaggedFields
from kafka.version import __version__
log = logging.getLogger(__name__)
@@ -59,9 +58,8 @@ def send_request(self, request, correlation_id=None):
log.debug('Sending request %s', request)
if correlation_id is None:
correlation_id = self._next_correlation_id()
- header = RequestHeader(request,
- correlation_id=correlation_id,
- client_id=self._client_id)
+
+ header = request.build_request_header(correlation_id=correlation_id, client_id=self._client_id)
message = b''.join([header.encode(), request.encode()])
size = Int32.encode(len(message))
data = size + message
@@ -135,17 +133,12 @@ def receive_bytes(self, data):
return responses
def _process_response(self, read_buffer):
- recv_correlation_id = Int32.decode(read_buffer)
- log.debug('Received correlation id: %d', recv_correlation_id)
-
if not self.in_flight_requests:
- raise Errors.CorrelationIdError(
- 'No in-flight-request found for server response'
- ' with correlation ID %d'
- % (recv_correlation_id,))
-
+ raise Errors.CorrelationIdError('No in-flight-request found for server response')
(correlation_id, request) = self.in_flight_requests.popleft()
-
+ response_header = request.parse_response_header(read_buffer)
+ recv_correlation_id = response_header.correlation_id
+ log.debug('Received correlation id: %d', recv_correlation_id)
# 0.8.2 quirk
if (recv_correlation_id == 0 and
correlation_id != 0 and
diff --git a/kafka/protocol/types.py b/kafka/protocol/types.py
index 2fde24fcc..0e3685d73 100644
--- a/kafka/protocol/types.py
+++ b/kafka/protocol/types.py
@@ -210,3 +210,156 @@ def repr(self, list_of_items):
if list_of_items is None:
return 'NULL'
return '[' + ', '.join([self.array_of.repr(item) for item in list_of_items]) + ']'
+
+
+class UnsignedVarInt32(AbstractType):
+ @classmethod
+ def decode(cls, data):
+ value, i = 0, 0
+ while True:
+ b, = struct.unpack('B', data.read(1))
+ if not (b & 0x80):
+ break
+ value |= (b & 0x7f) << i
+ i += 7
+ if i > 28:
+ raise ValueError('Invalid value {}'.format(value))
+ value |= b << i
+ return value
+
+ @classmethod
+ def encode(cls, value):
+ value &= 0xffffffff
+ ret = b''
+ while (value & 0xffffff80) != 0:
+ b = (value & 0x7f) | 0x80
+ ret += struct.pack('B', b)
+ value >>= 7
+ ret += struct.pack('B', value)
+ return ret
+
+
+class VarInt32(AbstractType):
+ @classmethod
+ def decode(cls, data):
+ value = UnsignedVarInt32.decode(data)
+ return (value >> 1) ^ -(value & 1)
+
+ @classmethod
+ def encode(cls, value):
+ # bring it in line with the java binary repr
+ value &= 0xffffffff
+ return UnsignedVarInt32.encode((value << 1) ^ (value >> 31))
+
+
+class VarInt64(AbstractType):
+ @classmethod
+ def decode(cls, data):
+ value, i = 0, 0
+ while True:
+ b = data.read(1)
+ if not (b & 0x80):
+ break
+ value |= (b & 0x7f) << i
+ i += 7
+ if i > 63:
+ raise ValueError('Invalid value {}'.format(value))
+ value |= b << i
+ return (value >> 1) ^ -(value & 1)
+
+ @classmethod
+ def encode(cls, value):
+ # bring it in line with the java binary repr
+ value &= 0xffffffffffffffff
+ v = (value << 1) ^ (value >> 63)
+ ret = b''
+ while (v & 0xffffffffffffff80) != 0:
+ b = (value & 0x7f) | 0x80
+ ret += struct.pack('B', b)
+ v >>= 7
+ ret += struct.pack('B', v)
+ return ret
+
+
+class CompactString(String):
+ def decode(self, data):
+ length = UnsignedVarInt32.decode(data) - 1
+ if length < 0:
+ return None
+ value = data.read(length)
+ if len(value) != length:
+ raise ValueError('Buffer underrun decoding string')
+ return value.decode(self.encoding)
+
+ def encode(self, value):
+ if value is None:
+ return UnsignedVarInt32.encode(0)
+ value = str(value).encode(self.encoding)
+ return UnsignedVarInt32.encode(len(value) + 1) + value
+
+
+class TaggedFields(AbstractType):
+ @classmethod
+ def decode(cls, data):
+ num_fields = UnsignedVarInt32.decode(data)
+ ret = {}
+ if not num_fields:
+ return ret
+ prev_tag = -1
+ for i in range(num_fields):
+ tag = UnsignedVarInt32.decode(data)
+ if tag <= prev_tag:
+ raise ValueError('Invalid or out-of-order tag {}'.format(tag))
+ prev_tag = tag
+ size = UnsignedVarInt32.decode(data)
+ val = data.read(size)
+ ret[tag] = val
+ return ret
+
+ @classmethod
+ def encode(cls, value):
+ ret = UnsignedVarInt32.encode(len(value))
+ for k, v in value.items():
+ # do we allow for other data types ?? It could get complicated really fast
+ assert isinstance(v, bytes), 'Value {} is not a byte array'.format(v)
+ assert isinstance(k, int) and k > 0, 'Key {} is not a positive integer'.format(k)
+ ret += UnsignedVarInt32.encode(k)
+ ret += v
+ return ret
+
+
+class CompactBytes(AbstractType):
+ @classmethod
+ def decode(cls, data):
+ length = UnsignedVarInt32.decode(data) - 1
+ if length < 0:
+ return None
+ value = data.read(length)
+ if len(value) != length:
+ raise ValueError('Buffer underrun decoding Bytes')
+ return value
+
+ @classmethod
+ def encode(cls, value):
+ if value is None:
+ return UnsignedVarInt32.encode(0)
+ else:
+ return UnsignedVarInt32.encode(len(value) + 1) + value
+
+
+class CompactArray(Array):
+
+ def encode(self, items):
+ if items is None:
+ return UnsignedVarInt32.encode(0)
+ return b''.join(
+ [UnsignedVarInt32.encode(len(items) + 1)] +
+ [self.array_of.encode(item) for item in items]
+ )
+
+ def decode(self, data):
+ length = UnsignedVarInt32.decode(data) - 1
+ if length == -1:
+ return None
+ return [self.array_of.decode(data) for _ in range(length)]
+
| diff --git a/test/test_protocol.py b/test/test_protocol.py
index e295174d4..6a77e19d6 100644
--- a/test/test_protocol.py
+++ b/test/test_protocol.py
@@ -9,7 +9,7 @@
from kafka.protocol.fetch import FetchRequest, FetchResponse
from kafka.protocol.message import Message, MessageSet, PartialMessage
from kafka.protocol.metadata import MetadataRequest
-from kafka.protocol.types import Int16, Int32, Int64, String
+from kafka.protocol.types import Int16, Int32, Int64, String, UnsignedVarInt32, CompactString, CompactArray, CompactBytes
def test_create_message():
@@ -282,3 +282,55 @@ def test_struct_unrecognized_kwargs():
def test_struct_missing_kwargs():
fr = FetchRequest[0](max_wait_time=100)
assert fr.min_bytes is None
+
+
+def test_unsigned_varint_serde():
+ pairs = {
+ 0: [0],
+ -1: [0xff, 0xff, 0xff, 0xff, 0x0f],
+ 1: [1],
+ 63: [0x3f],
+ -64: [0xc0, 0xff, 0xff, 0xff, 0x0f],
+ 64: [0x40],
+ 8191: [0xff, 0x3f],
+ -8192: [0x80, 0xc0, 0xff, 0xff, 0x0f],
+ 8192: [0x80, 0x40],
+ -8193: [0xff, 0xbf, 0xff, 0xff, 0x0f],
+ 1048575: [0xff, 0xff, 0x3f],
+
+ }
+ for value, expected_encoded in pairs.items():
+ value &= 0xffffffff
+ encoded = UnsignedVarInt32.encode(value)
+ assert encoded == b''.join(struct.pack('>B', x) for x in expected_encoded)
+ assert value == UnsignedVarInt32.decode(io.BytesIO(encoded))
+
+
+def test_compact_data_structs():
+ cs = CompactString()
+ encoded = cs.encode(None)
+ assert encoded == struct.pack('B', 0)
+ decoded = cs.decode(io.BytesIO(encoded))
+ assert decoded is None
+ assert b'\x01' == cs.encode('')
+ assert '' == cs.decode(io.BytesIO(b'\x01'))
+ encoded = cs.encode("foobarbaz")
+ assert cs.decode(io.BytesIO(encoded)) == "foobarbaz"
+
+ arr = CompactArray(CompactString())
+ assert arr.encode(None) == b'\x00'
+ assert arr.decode(io.BytesIO(b'\x00')) is None
+ enc = arr.encode([])
+ assert enc == b'\x01'
+ assert [] == arr.decode(io.BytesIO(enc))
+ encoded = arr.encode(["foo", "bar", "baz", "quux"])
+ assert arr.decode(io.BytesIO(encoded)) == ["foo", "bar", "baz", "quux"]
+
+ enc = CompactBytes.encode(None)
+ assert enc == b'\x00'
+ assert CompactBytes.decode(io.BytesIO(b'\x00')) is None
+ enc = CompactBytes.encode(b'')
+ assert enc == b'\x01'
+ assert CompactBytes.decode(io.BytesIO(b'\x01')) is b''
+ enc = CompactBytes.encode(b'foo')
+ assert CompactBytes.decode(io.BytesIO(enc)) == b'foo'
| [
{
"components": [
{
"doc": "",
"lines": [
968,
985
],
"name": "AlterPartitionReassignmentsResponse_v0",
"signature": "class AlterPartitionReassignmentsResponse_v0(Response):",
"type": "class"
},
{
"doc": "",
"l... | [
"test/test_protocol.py::test_create_message",
"test/test_protocol.py::test_encode_message_v0",
"test/test_protocol.py::test_encode_message_v1",
"test/test_protocol.py::test_decode_message",
"test/test_protocol.py::test_decode_message_validate_crc",
"test/test_protocol.py::test_encode_message_set",
"test... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
KIP-482: add support for flexible versions / tagged fields
- Add support for varints
- Add support for compact collections (byte array, string, array)
- Add support for new request and response headers, supporting flexible
versions
- Add list of supported flexible versions, plus extraction script
- Add List / Alter partition reassignments apis
----------
</request>
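The varint support the request asks for follows the protobuf-style layout used by the Java Kafka client: seven payload bits per byte, little-endian, with the high bit set on every byte except the last. A minimal sketch of an unsigned 32-bit codec under that assumption (function names are illustrative):

```python
import io
import struct

def encode_uvarint32(value: int) -> bytes:
    """Little-endian base-128: 7 bits per byte, MSB flags continuation."""
    value &= 0xFFFFFFFF
    out = b""
    while value & ~0x7F:                  # more than 7 bits remain
        out += struct.pack("B", (value & 0x7F) | 0x80)
        value >>= 7
    return out + struct.pack("B", value)  # final byte has MSB clear

def decode_uvarint32(data: io.BytesIO) -> int:
    value, shift = 0, 0
    while True:
        (byte,) = struct.unpack("B", data.read(1))
        if not byte & 0x80:               # last byte: no continuation bit
            return value | (byte << shift)
        value |= (byte & 0x7F) << shift
        shift += 7
        if shift > 28:                    # a u32 never needs more than 5 bytes
            raise ValueError("malformed varint32")
```

Compact strings and arrays then length-prefix their payload with this encoding, using N + 1 so that 0 can represent null.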
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in kafka/protocol/admin.py]
(definition of AlterPartitionReassignmentsResponse_v0:)
class AlterPartitionReassignmentsResponse_v0(Response):
(definition of AlterPartitionReassignmentsRequest_v0:)
class AlterPartitionReassignmentsRequest_v0(Request):
(definition of ListPartitionReassignmentsResponse_v0:)
class ListPartitionReassignmentsResponse_v0(Response):
(definition of ListPartitionReassignmentsRequest_v0:)
class ListPartitionReassignmentsRequest_v0(Request):
[end of new definitions in kafka/protocol/admin.py]
[start of new definitions in kafka/protocol/api.py]
(definition of RequestHeaderV2:)
class RequestHeaderV2(Struct):
(definition of RequestHeaderV2.__init__:)
def __init__(self, request, correlation_id=0, client_id='kafka-python', tags=None):
(definition of ResponseHeader:)
class ResponseHeader(Struct):
(definition of ResponseHeaderV2:)
class ResponseHeaderV2(Struct):
(definition of Request.build_request_header:)
def build_request_header(self, correlation_id, client_id):
(definition of Request.parse_response_header:)
def parse_response_header(self, read_buffer):
[end of new definitions in kafka/protocol/api.py]
[start of new definitions in kafka/protocol/types.py]
(definition of UnsignedVarInt32:)
class UnsignedVarInt32(AbstractType): @classmethod
(definition of UnsignedVarInt32.decode:)
def decode(cls, data):
(definition of UnsignedVarInt32.encode:)
def encode(cls, value):
(definition of VarInt32:)
class VarInt32(AbstractType): @classmethod
(definition of VarInt32.decode:)
def decode(cls, data):
(definition of VarInt32.encode:)
def encode(cls, value):
(definition of VarInt64:)
class VarInt64(AbstractType): @classmethod
(definition of VarInt64.decode:)
def decode(cls, data):
(definition of VarInt64.encode:)
def encode(cls, value):
(definition of CompactString:)
class CompactString(String):
(definition of CompactString.decode:)
def decode(self, data):
(definition of CompactString.encode:)
def encode(self, value):
(definition of TaggedFields:)
class TaggedFields(AbstractType): @classmethod
(definition of TaggedFields.decode:)
def decode(cls, data):
(definition of TaggedFields.encode:)
def encode(cls, value):
(definition of CompactBytes:)
class CompactBytes(AbstractType): @classmethod
(definition of CompactBytes.decode:)
def decode(cls, data):
(definition of CompactBytes.encode:)
def encode(cls, value):
(definition of CompactArray:)
class CompactArray(Array):
(definition of CompactArray.encode:)
def encode(self, items):
(definition of CompactArray.decode:)
def decode(self, data):
[end of new definitions in kafka/protocol/types.py]
</definitions>
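For the signed `VarInt32`/`VarInt64` types in the definitions, the interesting part is the zigzag mapping that keeps small magnitudes short on the wire (0, -1, 1, -2 map to 0, 1, 2, 3). A standalone sketch of the 32-bit version, assuming the same Java-compatible behaviour as the patch:

```python
def zigzag32_encode(n: int) -> int:
    """Fold the sign into the low bit so magnitude drives the byte count."""
    # In Python, n >> 31 on a negative int arithmetic-shifts to -1,
    # which is exactly what the XOR trick relies on here.
    return ((n << 1) ^ (n >> 31)) & 0xFFFFFFFF

def zigzag32_decode(z: int) -> int:
    """Inverse mapping back to a signed value."""
    return (z >> 1) ^ -(z & 1)
```

The zigzagged value is then run through the unsigned varint encoder; decoding reverses the two steps.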
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 53dc740bce8ef19c32fad2881021d1f6bb055f7a | ||
Project-MONAI__MONAI-1138 | 1,138 | Project-MONAI/MONAI | null | 614659093c891ef479695aec16ca048b8fe69e6b | 2020-10-25T19:21:52Z | diff --git a/docs/source/data.rst b/docs/source/data.rst
index c79ed15e75..c45f9a4a73 100644
--- a/docs/source/data.rst
+++ b/docs/source/data.rst
@@ -130,3 +130,8 @@ Decathlon Datalist
DataLoader
~~~~~~~~~~
.. autofunction:: monai.data.DataLoader
+
+
+ThreadBuffer
+~~~~~~~~~~~~
+.. autofunction:: monai.data.ThreadBuffer
diff --git a/monai/data/__init__.py b/monai/data/__init__.py
index ac6e6cfa12..ed43fa023c 100644
--- a/monai/data/__init__.py
+++ b/monai/data/__init__.py
@@ -21,4 +21,5 @@
from .png_saver import PNGSaver
from .png_writer import write_png
from .synthetic import *
+from .thread_buffer import ThreadBuffer
from .utils import *
diff --git a/monai/data/thread_buffer.py b/monai/data/thread_buffer.py
new file mode 100644
index 0000000000..b76e8c7444
--- /dev/null
+++ b/monai/data/thread_buffer.py
@@ -0,0 +1,75 @@
+# Copyright 2020 MONAI Consortium
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from queue import Empty, Full, Queue
+from threading import Thread
+
+
+class ThreadBuffer:
+ """
+ Iterates over values from self.src in a separate thread but yielding them in the current thread. This allows values
+ to be queued up asynchronously. The internal thread will continue running so long as the source has values or until
+ the stop() method is called.
+
+ One issue raised by using a thread in this way is that during the lifetime of the thread the source object is being
+ iterated over, so if the thread hasn't finished another attempt to iterate over it will raise an exception or yield
+    unexpected results. To ensure the thread releases the iteration and proper cleanup is done the stop() method must
+ be called which will join with the thread.
+
+ Args:
+ src: Source data iterable
+ buffer_size: Number of items to buffer from the source
+ timeout: Time to wait for an item from the buffer, or to wait while the buffer is full when adding items
+ """
+
+ def __init__(self, src, buffer_size=1, timeout=0.01):
+ self.src = src
+ self.buffer_size = buffer_size
+ self.timeout = timeout
+ self.buffer = Queue(self.buffer_size)
+ self.gen_thread = None
+ self.is_running = False
+
+ def enqueue_values(self):
+ for src_val in self.src:
+ while self.is_running:
+ try:
+ self.buffer.put(src_val, timeout=self.timeout)
+ except Full:
+ pass # try to add the item again
+ else:
+ break # successfully added the item, quit trying
+ else: # quit the thread cleanly when requested to stop
+ break
+
+ def stop(self):
+ self.is_running = False # signal the thread to exit
+
+ if self.gen_thread is not None:
+ self.gen_thread.join()
+
+ self.gen_thread = None
+
+ def __iter__(self):
+
+ self.is_running = True
+ self.gen_thread = Thread(target=self.enqueue_values, daemon=True)
+ self.gen_thread.start()
+
+ try:
+ while self.is_running and (self.gen_thread.is_alive() or not self.buffer.empty()):
+ try:
+ yield self.buffer.get(timeout=self.timeout)
+ except Empty:
+ pass # queue was empty this time, try again
+ finally:
+ self.stop() # ensure thread completion
diff --git a/monai/utils/decorators.py b/monai/utils/decorators.py
index eca7a3af72..0d14eb0dd3 100644
--- a/monai/utils/decorators.py
+++ b/monai/utils/decorators.py
@@ -22,9 +22,9 @@ def timing(func):
@wraps(func)
def timingwrap(*args, **kwargs):
print(func.__name__, flush=True)
- start = time.time()
+ start = time.perf_counter()
res = func(*args, **kwargs)
- end = time.time()
+ end = time.perf_counter()
print(func.__name__, "dT (s) =", (end - start), flush=True)
return res
diff --git a/monai/utils/misc.py b/monai/utils/misc.py
index 488825bbb1..27b2c90ebe 100644
--- a/monai/utils/misc.py
+++ b/monai/utils/misc.py
@@ -12,6 +12,7 @@
import collections.abc
import itertools
import random
+import time
from ast import literal_eval
from distutils.util import strtobool
from typing import Any, Callable, Optional, Sequence, Tuple, Union
@@ -260,3 +261,23 @@ def _parse_var(s):
except ValueError:
d[key] = value
return d
+
+
+class PerfContext:
+ """
+ Context manager for tracking how much time is spent within context blocks. This uses `time.perf_counter` to
+ accumulate the total amount of time in seconds in the attribute `total_time` over however many context blocks
+ the object is used in.
+ """
+
+ def __init__(self):
+ self.total_time = 0
+ self.start_time = None
+
+ def __enter__(self):
+ self.start_time = time.perf_counter()
+ return self
+
+ def __exit__(self, exc_type, exc_value, exc_traceback):
+ self.total_time += time.perf_counter() - self.start_time
+ self.start_time = None
| diff --git a/tests/test_thread_buffer.py b/tests/test_thread_buffer.py
new file mode 100644
index 0000000000..2688f1ae40
--- /dev/null
+++ b/tests/test_thread_buffer.py
@@ -0,0 +1,57 @@
+import time
+import unittest
+
+from monai.data import DataLoader, Dataset, ThreadBuffer
+from monai.transforms import Compose, SimulateDelayd
+from monai.utils import PerfContext
+
+
+class TestDataLoader(unittest.TestCase):
+ def setUp(self):
+ super().setUp()
+
+ self.datalist = [
+ {"image": "spleen_19.nii.gz", "label": "spleen_label_19.nii.gz"},
+ {"image": "spleen_31.nii.gz", "label": "spleen_label_31.nii.gz"},
+ ]
+
+ self.transform = Compose([SimulateDelayd(keys=["image", "label"], delay_time=0.1)])
+
+ def test_values(self):
+ dataset = Dataset(data=self.datalist, transform=self.transform)
+ dataloader = DataLoader(dataset=dataset, batch_size=2, num_workers=0)
+
+ tbuffer = ThreadBuffer(dataloader)
+
+ for d in tbuffer:
+ self.assertEqual(d["image"][0], "spleen_19.nii.gz")
+ self.assertEqual(d["image"][1], "spleen_31.nii.gz")
+ self.assertEqual(d["label"][0], "spleen_label_19.nii.gz")
+ self.assertEqual(d["label"][1], "spleen_label_31.nii.gz")
+
+ def test_time(self):
+ dataset = Dataset(data=self.datalist * 2, transform=self.transform) # contains data for 2 batches
+ dataloader = DataLoader(dataset=dataset, batch_size=2, num_workers=0)
+
+ tbuffer = ThreadBuffer(dataloader)
+
+ with PerfContext() as pc:
+ for _ in dataloader:
+ time.sleep(0.5) # each batch takes 0.8 s to generate on top of this time
+
+ unbuffered_time = pc.total_time
+
+ with PerfContext() as pc:
+ for _ in tbuffer:
+ time.sleep(0.5) # while "computation" is happening the next batch is being generated, saving 0.4 s
+
+ buffered_time = pc.total_time
+
+ self.assertTrue(
+ buffered_time < unbuffered_time,
+ f"Buffered time {buffered_time} should be less than unbuffered time {unbuffered_time}",
+ )
+
+
+if __name__ == "__main__":
+ unittest.main()
| diff --git a/docs/source/data.rst b/docs/source/data.rst
index c79ed15e75..c45f9a4a73 100644
--- a/docs/source/data.rst
+++ b/docs/source/data.rst
@@ -130,3 +130,8 @@ Decathlon Datalist
DataLoader
~~~~~~~~~~
.. autofunction:: monai.data.DataLoader
+
+
+ThreadBuffer
+~~~~~~~~~~~~
+.. autofunction:: monai.data.ThreadBuffer
| [
{
"components": [
{
"doc": "Iterates over values from self.src in a separate thread but yielding them in the current thread. This allows values\nto be queued up asynchronously. The internal thread will continue running so long as the source has values or until\nthe stop() method is called.\n\nOne ... | [
"tests/test_thread_buffer.py::TestDataLoader::test_time",
"tests/test_thread_buffer.py::TestDataLoader::test_values"
] | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
ThreadBuffer
### Description
Adds `ThreadBuffer` for generating batches in a separate thread from the main thread. This allows a batch to be generated while the GPU is busy with training, allowing some overlap of execution, which should speed up the training process. An example notebook is to come in the tutorials repo.
### Status
**Ready**
### Types of changes
<!--- Put an `x` in all the boxes that apply, and remove the not applicable items -->
- [x] Non-breaking change (fix or new feature that would not break existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing functionality to change).
- [x] New tests added to cover the changes.
- [x] Integration tests passed locally by running `./runtests.sh --codeformat --coverage`.
- [x] In-line docstrings updated.
- [x] Documentation updated, tested `make html` command in the `docs/` folder.
----------
</request>
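The producer/consumer pattern described above (a daemon thread filling a bounded queue while the training loop consumes from it) can be sketched with the standard library alone. This is an illustrative reduction, not MONAI's API:

```python
from queue import Empty, Full, Queue
from threading import Thread

def buffered(src, buffer_size=1, timeout=0.01):
    """Yield items from `src`, producing them in a background thread."""
    queue = Queue(buffer_size)
    running = True

    def fill():
        for item in src:
            while running:
                try:
                    queue.put(item, timeout=timeout)
                    break              # enqueued; move on to the next item
                except Full:
                    pass               # consumer is behind; keep retrying
            else:
                return                 # stop was requested mid-item

    worker = Thread(target=fill, daemon=True)
    worker.start()
    try:
        while worker.is_alive() or not queue.empty():
            try:
                yield queue.get(timeout=timeout)
            except Empty:
                pass                   # producer is behind; poll again
    finally:
        running = False                # unblock the producer and join it
        worker.join()
```

Because the source is iterated on another thread, the join in the `finally` matters: without it, a second pass over the same source could overlap with the first, which is the cleanup issue the PR's `stop()` method addresses.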
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in monai/data/thread_buffer.py]
(definition of ThreadBuffer:)
class ThreadBuffer:
"""Iterates over values from self.src in a separate thread but yielding them in the current thread. This allows values
to be queued up asynchronously. The internal thread will continue running so long as the source has values or until
the stop() method is called.
One issue raised by using a thread in this way is that during the lifetime of the thread the source object is being
iterated over, so if the thread hasn't finished another attempt to iterate over it will raise an exception or yield
unexpected results. To ensure the thread releases the iteration and proper cleanup is done the stop() method must
be called which will join with the thread.
Args:
src: Source data iterable
buffer_size: Number of items to buffer from the source
timeout: Time to wait for an item from the buffer, or to wait while the buffer is full when adding items"""
(definition of ThreadBuffer.__init__:)
def __init__(self, src, buffer_size=1, timeout=0.01):
(definition of ThreadBuffer.enqueue_values:)
def enqueue_values(self):
(definition of ThreadBuffer.stop:)
def stop(self):
(definition of ThreadBuffer.__iter__:)
def __iter__(self):
[end of new definitions in monai/data/thread_buffer.py]
[start of new definitions in monai/utils/misc.py]
(definition of PerfContext:)
class PerfContext:
"""Context manager for tracking how much time is spent within context blocks. This uses `time.perf_counter` to
accumulate the total amount of time in seconds in the attribute `total_time` over however many context blocks
the object is used in."""
(definition of PerfContext.__init__:)
def __init__(self):
(definition of PerfContext.__enter__:)
def __enter__(self):
(definition of PerfContext.__exit__:)
def __exit__(self, exc_type, exc_value, exc_traceback):
[end of new definitions in monai/utils/misc.py]
</definitions>
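The `PerfContext` defined above boils down to a `time.perf_counter` accumulator; the sketch below reproduces that contract so the buffered-vs-unbuffered timing comparison in the new test can be read standalone:

```python
import time

class PerfContext:
    """Accumulate wall-clock seconds spent inside any number of `with` blocks."""

    def __init__(self):
        self.total_time = 0.0
        self.start_time = None

    def __enter__(self):
        self.start_time = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc_value, exc_traceback):
        # Add this block's elapsed time to the running total.
        self.total_time += time.perf_counter() - self.start_time
        self.start_time = None
```

Reusing one instance across several blocks sums their durations, which is what lets the test assert that buffered iteration takes less total time than unbuffered iteration.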
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | e73257caa79309dcce1e93abf1632f4bfd75b11f | |
matplotlib__matplotlib-18807 | 18,807 | matplotlib/matplotlib | 3.3 | 6d629a1f9a6a09a90fd6b7e20f0fa8ed97945055 | 2020-10-24T23:01:01Z | diff --git a/doc/users/next_whats_new/animatable_FancyArrow.rst b/doc/users/next_whats_new/animatable_FancyArrow.rst
new file mode 100644
index 000000000000..55513fba6db0
--- /dev/null
+++ b/doc/users/next_whats_new/animatable_FancyArrow.rst
@@ -0,0 +1,5 @@
+``set_data`` method for ``FancyArrow`` patch
+--------------------------------------------
+
+`.FancyArrow`, the patch returned by ``ax.arrow``, now has a ``set_data``
+method that allows for animating the arrow.
diff --git a/lib/matplotlib/axes/_axes.py b/lib/matplotlib/axes/_axes.py
index 1b0c881245af..2d4ee16cb668 100644
--- a/lib/matplotlib/axes/_axes.py
+++ b/lib/matplotlib/axes/_axes.py
@@ -5035,12 +5035,6 @@ def arrow(self, x, y, dx, dy, **kwargs):
Parameters
----------
- x, y : float
- The x and y coordinates of the arrow base.
-
- dx, dy : float
- The length of the arrow along x and y direction.
-
%(FancyArrow)s
Returns
diff --git a/lib/matplotlib/patches.py b/lib/matplotlib/patches.py
index 29b76f7b74ab..70ba885ff343 100644
--- a/lib/matplotlib/patches.py
+++ b/lib/matplotlib/patches.py
@@ -1305,6 +1305,12 @@ def __init__(self, x, y, dx, dy, width=0.001, length_includes_head=False,
"""
Parameters
----------
+ x, y : float
+ The x and y coordinates of the arrow base.
+
+ dx, dy : float
+ The length of the arrow along x and y direction.
+
width : float, default: 0.001
Width of full arrow tail.
@@ -1333,22 +1339,82 @@ def __init__(self, x, y, dx, dy, width=0.001, length_includes_head=False,
%(Patch_kwdoc)s
"""
- if head_width is None:
- head_width = 3 * width
- if head_length is None:
+ self._x = x
+ self._y = y
+ self._dx = dx
+ self._dy = dy
+ self._width = width
+ self._length_includes_head = length_includes_head
+ self._head_width = head_width
+ self._head_length = head_length
+ self._shape = shape
+ self._overhang = overhang
+ self._head_starts_at_zero = head_starts_at_zero
+ self._make_verts()
+ super().__init__(self.verts, closed=True, **kwargs)
+
+ def set_data(self, *, x=None, y=None, dx=None, dy=None, width=None,
+ head_width=None, head_length=None):
+ """
+ Set `.FancyArrow` x, y, dx, dy, width, head_with, and head_length.
+ Values left as None will not be updated.
+
+ Parameters
+ ----------
+ x, y : float or None, default: None
+ The x and y coordinates of the arrow base.
+
+ dx, dy : float or None, default: None
+ The length of the arrow along x and y direction.
+
+ width: float or None, default: None
+ Width of full arrow tail.
+
+ head_width: float or None, default: None
+ Total width of the full arrow head.
+
+ head_length: float or None, default: None
+ Length of arrow head.
+ """
+ if x is not None:
+ self._x = x
+ if y is not None:
+ self._y = y
+ if dx is not None:
+ self._dx = dx
+ if dy is not None:
+ self._dy = dy
+ if width is not None:
+ self._width = width
+ if head_width is not None:
+ self._head_width = head_width
+ if head_length is not None:
+ self._head_length = head_length
+ self._make_verts()
+ self.set_xy(self.verts)
+
+ def _make_verts(self):
+ if self._head_width is None:
+ head_width = 3 * self._width
+ else:
+ head_width = self._head_width
+ if self._head_length is None:
head_length = 1.5 * head_width
+ else:
+ head_length = self._head_length
- distance = np.hypot(dx, dy)
+ distance = np.hypot(self._dx, self._dy)
- if length_includes_head:
+ if self._length_includes_head:
length = distance
else:
length = distance + head_length
if not length:
- verts = np.empty([0, 2]) # display nothing if empty
+ self.verts = np.empty([0, 2]) # display nothing if empty
else:
# start by drawing horizontal arrow, point at (0, 0)
- hw, hl, hs, lw = head_width, head_length, overhang, width
+ hw, hl = head_width, head_length
+ hs, lw = self._overhang, self._width
left_half_arrow = np.array([
[0.0, 0.0], # tip
[-hl, -hw / 2], # leftmost
@@ -1357,36 +1423,37 @@ def __init__(self, x, y, dx, dy, width=0.001, length_includes_head=False,
[-length, 0],
])
# if we're not including the head, shift up by head length
- if not length_includes_head:
+ if not self._length_includes_head:
left_half_arrow += [head_length, 0]
# if the head starts at 0, shift up by another head length
- if head_starts_at_zero:
+ if self._head_starts_at_zero:
left_half_arrow += [head_length / 2, 0]
# figure out the shape, and complete accordingly
- if shape == 'left':
+ if self._shape == 'left':
coords = left_half_arrow
else:
right_half_arrow = left_half_arrow * [1, -1]
- if shape == 'right':
+ if self._shape == 'right':
coords = right_half_arrow
- elif shape == 'full':
+ elif self._shape == 'full':
# The half-arrows contain the midpoint of the stem,
# which we can omit from the full arrow. Including it
# twice caused a problem with xpdf.
coords = np.concatenate([left_half_arrow[:-1],
right_half_arrow[-2::-1]])
else:
- raise ValueError("Got unknown shape: %s" % shape)
+ raise ValueError("Got unknown shape: %s" % self.shape)
if distance != 0:
- cx = dx / distance
- sx = dy / distance
+ cx = self._dx / distance
+ sx = self._dy / distance
else:
# Account for division by zero
cx, sx = 0, 1
M = [[cx, sx], [-sx, cx]]
- verts = np.dot(coords, M) + (x + dx, y + dy)
-
- super().__init__(verts, closed=True, **kwargs)
+ self.verts = np.dot(coords, M) + [
+ self._x + self._dx,
+ self._y + self._dy,
+ ]
docstring.interpd.update(
| diff --git a/lib/matplotlib/tests/test_patches.py b/lib/matplotlib/tests/test_patches.py
index 2a908098364e..33b86dfcdad5 100644
--- a/lib/matplotlib/tests/test_patches.py
+++ b/lib/matplotlib/tests/test_patches.py
@@ -522,7 +522,37 @@ def test_fancyarrow_units():
dtime = datetime(2000, 1, 1)
fig, ax = plt.subplots()
arrow = FancyArrowPatch((0, dtime), (0.01, dtime))
- ax.add_patch(arrow)
+
+
+def test_fancyarrow_setdata():
+ fig, ax = plt.subplots()
+ arrow = ax.arrow(0, 0, 10, 10, head_length=5, head_width=1, width=.5)
+ expected1 = np.array(
+ [[13.54, 13.54],
+ [10.35, 9.65],
+ [10.18, 9.82],
+ [0.18, -0.18],
+ [-0.18, 0.18],
+ [9.82, 10.18],
+ [9.65, 10.35],
+ [13.54, 13.54]]
+ )
+ assert np.allclose(expected1, np.round(arrow.verts, 2))
+
+ expected2 = np.array(
+ [[16.71, 16.71],
+ [16.71, 15.29],
+ [16.71, 15.29],
+ [1.71, 0.29],
+ [0.29, 1.71],
+ [15.29, 16.71],
+ [15.29, 16.71],
+ [16.71, 16.71]]
+ )
+ arrow.set_data(
+ x=1, y=1, dx=15, dy=15, width=2, head_width=2, head_length=1
+ )
+ assert np.allclose(expected2, np.round(arrow.verts, 2))
@image_comparison(["large_arc.svg"], style="mpl20")
| diff --git a/doc/users/next_whats_new/animatable_FancyArrow.rst b/doc/users/next_whats_new/animatable_FancyArrow.rst
new file mode 100644
index 000000000000..55513fba6db0
--- /dev/null
+++ b/doc/users/next_whats_new/animatable_FancyArrow.rst
@@ -0,0 +1,5 @@
+``set_data`` method for ``FancyArrow`` patch
+--------------------------------------------
+
+`.FancyArrow`, the patch returned by ``ax.arrow``, now has a ``set_data``
+method that allows for animating the arrow.
| [
{
"components": [
{
"doc": "Set `.FancyArrow` x, y, dx, dy, width, head_with, and head_length.\nValues left as None will not be updated.\n\nParameters\n----------\nx, y : float or None, default: None\n The x and y coordinates of the arrow base.\n\ndx, dy : float or None, default: None\n The ... | [
"lib/matplotlib/tests/test_patches.py::test_fancyarrow_setdata"
] | [
"lib/matplotlib/tests/test_patches.py::test_Polygon_close",
"lib/matplotlib/tests/test_patches.py::test_rotate_rect",
"lib/matplotlib/tests/test_patches.py::test_negative_rect",
"lib/matplotlib/tests/test_patches.py::test_clip_to_bbox[png]",
"lib/matplotlib/tests/test_patches.py::test_patch_alpha_coloring[p... | This is a feature request which requires a new feature to be added to the code repository.
<<NEW FEATURE REQUEST>>
<request>
make FancyArrow animatable
## PR Summary
Addresses point 7 of https://github.com/matplotlib/matplotlib/issues/5699
Makes the `FancyArrow` Patch animatable. While `FancyArrowPatch` is somewhat animatable with `set_positions`, there is no easy way to animate the Polygon that is returned from `plt.arrow`. This PR addresses that gap, allowing things like this:
```python
%matplotlib ipympl
import ipywidgets as widgets
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
arrow = ax.arrow(0, 0, 0.5, 0.5)
ax.set_xlim([0, 1])
ax.set_ylim([0, 1])
width_slider = widgets.FloatSlider(min=0, max=0.5, step=0.02, description="width")
x_slider = widgets.FloatSlider(min=0, max=0.5, step=0.02, description="x")
def f(change):
arrow.set_data(x=x_slider.value, width=width_slider.value)
width_slider.observe(f, names="value")
x_slider.observe(f, names="value")
display(widgets.VBox([x_slider, width_slider]))
```

## PR Checklist
<!-- Please mark any checkboxes that do not apply to this PR as [N/A]. -->
- [x] Has pytest style unit tests (and `pytest` passes).
- [x] Is [Flake 8](https://flake8.pycqa.org/en/latest/) compliant (run `flake8` on changed files to check).
- [ ] New features are documented, with examples if plot related.
- [ ] Documentation is sphinx and numpydoc compliant (the docs should [build](https://matplotlib.org/devel/documenting_mpl.html#building-the-docs) without error).
- [x] Conforms to Matplotlib style conventions (install `flake8-docstrings` and run `flake8 --docstring-convention=all`).
- [x] New features have an entry in `doc/users/next_whats_new/` (follow instructions in README.rst there).
- [ ] API changes documented in `doc/api/next_api_changes/` (follow instructions in README.rst there).
<!--
Thank you so much for your PR! To help us review your contribution, please
consider the following points:
- A development guide is available at https://matplotlib.org/devdocs/devel/index.html.
- Help with git and github is available at
https://matplotlib.org/devel/gitwash/development_workflow.html.
- Do not create the PR out of master, but out of a separate branch.
- The PR title should summarize the changes, for example "Raise ValueError on
non-numeric input to set_xlim". Avoid non-descriptive titles such as
"Addresses issue #8576".
- The summary should provide at least 1-2 sentences describing the pull request
in detail (Why is this change required? What problem does it solve?) and
link to any relevant issues.
- If you are contributing fixes to docstrings, please pay attention to
http://matplotlib.org/devel/documenting_mpl.html#formatting. In particular,
note the difference between using single backquotes, double backquotes, and
asterisks in the markup.
We understand that PRs can sometimes be overwhelming, especially as the
reviews start coming in. Please let us know if the reviews are unclear or
the recommended next step seems overly demanding, if you would like help in
addressing a reviewer's comments, or if you have been waiting too long to hear
back on your PR.
-->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in lib/matplotlib/patches.py]
(definition of FancyArrow.set_data:)
def set_data(self, *, x=None, y=None, dx=None, dy=None, width=None, head_width=None, head_length=None):
"""Set `.FancyArrow` x, y, dx, dy, width, head_with, and head_length.
Values left as None will not be updated.
Parameters
----------
x, y : float or None, default: None
The x and y coordinates of the arrow base.
dx, dy : float or None, default: None
The length of the arrow along x and y direction.
width: float or None, default: None
Width of full arrow tail.
head_width: float or None, default: None
Total width of the full arrow head.
head_length: float or None, default: None
Length of arrow head."""
(definition of FancyArrow._make_verts:)
def _make_verts(self):
[end of new definitions in lib/matplotlib/patches.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 59b32afde60e46407b60c766b878c840a9bfa490 | |
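The `set_data` method introduced by the patch above overwrites only the fields passed as non-`None` and then rebuilds the vertex array. As an illustration of that keyword-guard update pattern outside matplotlib, here is a minimal sketch with a hypothetical `Arrow` class (toy two-point geometry, not the real `FancyArrow` polygon):

```python
class Arrow:
    """Hypothetical stand-in mimicking FancyArrow's set_data update pattern."""

    def __init__(self, x, y, dx, dy, width=0.001):
        self._x, self._y, self._dx, self._dy = x, y, dx, dy
        self._width = width
        self._make_verts()

    def set_data(self, *, x=None, y=None, dx=None, dy=None, width=None):
        # Only overwrite the fields the caller actually supplied.
        if x is not None:
            self._x = x
        if y is not None:
            self._y = y
        if dx is not None:
            self._dx = dx
        if dy is not None:
            self._dy = dy
        if width is not None:
            self._width = width
        self._make_verts()  # recompute geometry from the updated state

    def _make_verts(self):
        # Toy geometry: just the base and the tip of the arrow.
        self.verts = [(self._x, self._y),
                      (self._x + self._dx, self._y + self._dy)]


arrow = Arrow(0, 0, 10, 10)
arrow.set_data(dx=5)          # x, y, dy, width are left untouched
print(arrow.verts)            # [(0, 0), (5, 10)]
```

Because every parameter defaults to `None`, a caller animating the arrow can update just one property per frame, which is exactly what the ipywidgets example in the PR description does with `x` and `width`.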
pydata__xarray-4534 | 4,534 | pydata/xarray | 0.12 | 764ffc95af41602b7ba216bad4be438330c3dc45 | 2020-10-24T22:13:29Z | diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst
index b10271b13f5..5d9a0263c35 100644
--- a/doc/api-hidden.rst
+++ b/doc/api-hidden.rst
@@ -298,6 +298,7 @@
core.accessor_dt.DatetimeAccessor.is_quarter_start
core.accessor_dt.DatetimeAccessor.is_year_end
core.accessor_dt.DatetimeAccessor.is_year_start
+ core.accessor_dt.DatetimeAccessor.isocalendar
core.accessor_dt.DatetimeAccessor.microsecond
core.accessor_dt.DatetimeAccessor.minute
core.accessor_dt.DatetimeAccessor.month
diff --git a/doc/api.rst b/doc/api.rst
index 5e8a2be0ed4..ceab7dcc976 100644
--- a/doc/api.rst
+++ b/doc/api.rst
@@ -475,8 +475,6 @@ Datetimelike properties
DataArray.dt.second
DataArray.dt.microsecond
DataArray.dt.nanosecond
- DataArray.dt.weekofyear
- DataArray.dt.week
DataArray.dt.dayofweek
DataArray.dt.weekday
DataArray.dt.weekday_name
@@ -500,6 +498,7 @@ Datetimelike properties
DataArray.dt.floor
DataArray.dt.ceil
+ DataArray.dt.isocalendar
DataArray.dt.round
DataArray.dt.strftime
@@ -896,3 +895,10 @@ Deprecated / Pending Deprecation
Dataset.apply
core.groupby.DataArrayGroupBy.apply
core.groupby.DatasetGroupBy.apply
+
+.. autosummary::
+ :toctree: generated/
+ :template: autosummary/accessor_attribute.rst
+
+ DataArray.dt.weekofyear
+ DataArray.dt.week
diff --git a/doc/whats-new.rst b/doc/whats-new.rst
index 9460fc08478..308aa400132 100644
--- a/doc/whats-new.rst
+++ b/doc/whats-new.rst
@@ -23,6 +23,10 @@ v0.16.2 (unreleased)
Breaking changes
~~~~~~~~~~~~~~~~
+- :py:attr:`~core.accessor_dt.DatetimeAccessor.weekofyear` and :py:attr:`~core.accessor_dt.DatetimeAccessor.week`
+ have been deprecated. Use ``DataArray.dt.isocalendar().week``
+ instead (:pull:`4534`). By `Mathias Hauser <https://github.com/mathause>`_,
+ `Maximilian Roos <https://github.com/max-sixty>`_, and `Spencer Clark <https://github.com/spencerkclark>`_.
- :py:attr:`DataArray.rolling` and :py:attr:`Dataset.rolling` no longer support passing ``keep_attrs``
via its constructor. Pass ``keep_attrs`` via the applied function, i.e. use
``ds.rolling(...).mean(keep_attrs=False)`` instead of ``ds.rolling(..., keep_attrs=False).mean()``
@@ -37,6 +41,10 @@ New Features
By `Miguel Jimenez <https://github.com/Mikejmnez>`_ and `Wei Ji Leong <https://github.com/weiji14>`_.
- Unary & binary operations follow the ``keep_attrs`` flag (:issue:`3490`, :issue:`4065`, :issue:`3433`, :issue:`3595`, :pull:`4195`).
By `Deepak Cherian <https://github.com/dcherian>`_.
+- Added :py:meth:`~core.accessor_dt.DatetimeAccessor.isocalendar()` that returns a Dataset
+ with year, week, and weekday calculated according to the ISO 8601 calendar. Requires
+ pandas version 1.1.0 or greater (:pull:`4534`). By `Mathias Hauser <https://github.com/mathause>`_,
+ `Maximilian Roos <https://github.com/max-sixty>`_, and `Spencer Clark <https://github.com/spencerkclark>`_.
- :py:meth:`Dataset.to_zarr` now supports a ``region`` keyword for writing to
limited regions of existing Zarr stores (:pull:`4035`).
See :ref:`io.zarr.appending` for full details.
diff --git a/xarray/core/accessor_dt.py b/xarray/core/accessor_dt.py
index 96ef0f3b5de..3fc682f8c32 100644
--- a/xarray/core/accessor_dt.py
+++ b/xarray/core/accessor_dt.py
@@ -1,3 +1,6 @@
+import warnings
+from distutils.version import LooseVersion
+
import numpy as np
import pandas as pd
@@ -40,6 +43,10 @@ def _access_through_series(values, name):
if name == "season":
months = values_as_series.dt.month.values
field_values = _season_from_months(months)
+ elif name == "isocalendar":
+ # isocalendar returns iso- year, week, and weekday -> reshape
+ field_values = np.array(values_as_series.dt.isocalendar(), dtype=np.int64)
+ return field_values.T.reshape(3, *values.shape)
else:
field_values = getattr(values_as_series.dt, name).values
return field_values.reshape(values.shape)
@@ -72,7 +79,15 @@ def _get_date_field(values, name, dtype):
if is_duck_dask_array(values):
from dask.array import map_blocks
- return map_blocks(access_method, values, name, dtype=dtype)
+ new_axis = chunks = None
+ # isocalendar adds adds an axis
+ if name == "isocalendar":
+ chunks = (3,) + values.chunksize
+ new_axis = 0
+
+ return map_blocks(
+ access_method, values, name, dtype=dtype, new_axis=new_axis, chunks=chunks
+ )
else:
return access_method(values, name)
@@ -304,6 +319,33 @@ def strftime(self, date_format):
result, name="strftime", coords=self._obj.coords, dims=self._obj.dims
)
+ def isocalendar(self):
+ """Dataset containing ISO year, week number, and weekday.
+
+ Note
+ ----
+ The iso year and weekday differ from the nominal year and weekday.
+ """
+
+ from .dataset import Dataset
+
+ if not is_np_datetime_like(self._obj.data.dtype):
+ raise AttributeError("'CFTimeIndex' object has no attribute 'isocalendar'")
+
+ if LooseVersion(pd.__version__) < "1.1.0":
+ raise AttributeError("'isocalendar' not available in pandas < 1.1.0")
+
+ values = _get_date_field(self._obj.data, "isocalendar", np.int64)
+
+ obj_type = type(self._obj)
+ data_vars = {}
+ for i, name in enumerate(["year", "week", "weekday"]):
+ data_vars[name] = obj_type(
+ values[i], name=name, coords=self._obj.coords, dims=self._obj.dims
+ )
+
+ return Dataset(data_vars)
+
year = Properties._tslib_field_accessor(
"year", "The year of the datetime", np.int64
)
@@ -326,9 +368,26 @@ def strftime(self, date_format):
nanosecond = Properties._tslib_field_accessor(
"nanosecond", "The nanoseconds of the datetime", np.int64
)
- weekofyear = Properties._tslib_field_accessor(
- "weekofyear", "The week ordinal of the year", np.int64
- )
+
+ @property
+ def weekofyear(self):
+ "The week ordinal of the year"
+
+ warnings.warn(
+ "dt.weekofyear and dt.week have been deprecated. Please use "
+ "dt.isocalendar().week instead.",
+ FutureWarning,
+ )
+
+ if LooseVersion(pd.__version__) < "1.1.0":
+ weekofyear = Properties._tslib_field_accessor(
+ "weekofyear", "The week ordinal of the year", np.int64
+ ).fget(self)
+ else:
+ weekofyear = self.isocalendar().week
+
+ return weekofyear
+
week = weekofyear
dayofweek = Properties._tslib_field_accessor(
"dayofweek", "The day of the week with Monday=0, Sunday=6", np.int64
| diff --git a/xarray/tests/test_accessor_dt.py b/xarray/tests/test_accessor_dt.py
index ae5b0b6d7cf..984bfc763bc 100644
--- a/xarray/tests/test_accessor_dt.py
+++ b/xarray/tests/test_accessor_dt.py
@@ -1,3 +1,5 @@
+from distutils.version import LooseVersion
+
import numpy as np
import pandas as pd
import pytest
@@ -67,10 +69,48 @@ def setup(self):
],
)
def test_field_access(self, field):
+
+ if LooseVersion(pd.__version__) >= "1.1.0" and field in ["week", "weekofyear"]:
+ data = self.times.isocalendar()["week"]
+ else:
+ data = getattr(self.times, field)
+
+ expected = xr.DataArray(data, name=field, coords=[self.times], dims=["time"])
+
+ if field in ["week", "weekofyear"]:
+ with pytest.warns(
+ FutureWarning, match="dt.weekofyear and dt.week have been deprecated"
+ ):
+ actual = getattr(self.data.time.dt, field)
+ else:
+ actual = getattr(self.data.time.dt, field)
+
+ assert_equal(expected, actual)
+
+ @pytest.mark.parametrize(
+ "field, pandas_field",
+ [
+ ("year", "year"),
+ ("week", "week"),
+ ("weekday", "day"),
+ ],
+ )
+ def test_isocalendar(self, field, pandas_field):
+
+ if LooseVersion(pd.__version__) < "1.1.0":
+ with raises_regex(
+ AttributeError, "'isocalendar' not available in pandas < 1.1.0"
+ ):
+ self.data.time.dt.isocalendar()[field]
+ return
+
+ # pandas isocalendar has dtypy UInt32Dtype, convert to Int64
+ expected = pd.Int64Index(getattr(self.times.isocalendar(), pandas_field))
expected = xr.DataArray(
- getattr(self.times, field), name=field, coords=[self.times], dims=["time"]
+ expected, name=field, coords=[self.times], dims=["time"]
)
- actual = getattr(self.data.time.dt, field)
+
+ actual = self.data.time.dt.isocalendar()[field]
assert_equal(expected, actual)
def test_strftime(self):
@@ -85,6 +125,7 @@ def test_not_datetime_type(self):
with raises_regex(TypeError, "dt"):
nontime_data.time.dt
+ @pytest.mark.filterwarnings("ignore:dt.weekofyear and dt.week have been deprecated")
@requires_dask
@pytest.mark.parametrize(
"field",
@@ -129,6 +170,39 @@ def test_dask_field_access(self, field):
assert_chunks_equal(actual, dask_times_2d)
assert_equal(actual.compute(), expected.compute())
+ @requires_dask
+ @pytest.mark.parametrize(
+ "field",
+ [
+ "year",
+ "week",
+ "weekday",
+ ],
+ )
+ def test_isocalendar_dask(self, field):
+ import dask.array as da
+
+ if LooseVersion(pd.__version__) < "1.1.0":
+ with raises_regex(
+ AttributeError, "'isocalendar' not available in pandas < 1.1.0"
+ ):
+ self.data.time.dt.isocalendar()[field]
+ return
+
+ expected = getattr(self.times_data.dt.isocalendar(), field)
+
+ dask_times_arr = da.from_array(self.times_arr, chunks=(5, 5, 50))
+ dask_times_2d = xr.DataArray(
+ dask_times_arr, coords=self.data.coords, dims=self.data.dims, name="data"
+ )
+
+ with raise_if_dask_computes():
+ actual = dask_times_2d.dt.isocalendar()[field]
+
+ assert isinstance(actual.data, da.Array)
+ assert_chunks_equal(actual, dask_times_2d)
+ assert_equal(actual.compute(), expected.compute())
+
@requires_dask
@pytest.mark.parametrize(
"method, parameters",
@@ -347,6 +421,15 @@ def test_field_access(data, field):
assert_equal(result, expected)
+@requires_cftime
+def test_isocalendar_cftime(data):
+
+ with raises_regex(
+ AttributeError, "'CFTimeIndex' object has no attribute 'isocalendar'"
+ ):
+ data.time.dt.isocalendar()
+
+
@requires_cftime
@pytest.mark.filterwarnings("ignore::RuntimeWarning")
def test_cftime_strftime_access(data):
| diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst
index b10271b13f5..5d9a0263c35 100644
--- a/doc/api-hidden.rst
+++ b/doc/api-hidden.rst
@@ -298,6 +298,7 @@
core.accessor_dt.DatetimeAccessor.is_quarter_start
core.accessor_dt.DatetimeAccessor.is_year_end
core.accessor_dt.DatetimeAccessor.is_year_start
+ core.accessor_dt.DatetimeAccessor.isocalendar
core.accessor_dt.DatetimeAccessor.microsecond
core.accessor_dt.DatetimeAccessor.minute
core.accessor_dt.DatetimeAccessor.month
diff --git a/doc/api.rst b/doc/api.rst
index 5e8a2be0ed4..ceab7dcc976 100644
--- a/doc/api.rst
+++ b/doc/api.rst
@@ -475,8 +475,6 @@ Datetimelike properties
DataArray.dt.second
DataArray.dt.microsecond
DataArray.dt.nanosecond
- DataArray.dt.weekofyear
- DataArray.dt.week
DataArray.dt.dayofweek
DataArray.dt.weekday
DataArray.dt.weekday_name
@@ -500,6 +498,7 @@ Datetimelike properties
DataArray.dt.floor
DataArray.dt.ceil
+ DataArray.dt.isocalendar
DataArray.dt.round
DataArray.dt.strftime
@@ -896,3 +895,10 @@ Deprecated / Pending Deprecation
Dataset.apply
core.groupby.DataArrayGroupBy.apply
core.groupby.DatasetGroupBy.apply
+
+.. autosummary::
+ :toctree: generated/
+ :template: autosummary/accessor_attribute.rst
+
+ DataArray.dt.weekofyear
+ DataArray.dt.week
diff --git a/doc/whats-new.rst b/doc/whats-new.rst
index 9460fc08478..308aa400132 100644
--- a/doc/whats-new.rst
+++ b/doc/whats-new.rst
@@ -23,6 +23,10 @@ v0.16.2 (unreleased)
Breaking changes
~~~~~~~~~~~~~~~~
+- :py:attr:`~core.accessor_dt.DatetimeAccessor.weekofyear` and :py:attr:`~core.accessor_dt.DatetimeAccessor.week`
+ have been deprecated. Use ``DataArray.dt.isocalendar().week``
+ instead (:pull:`4534`). By `Mathias Hauser <https://github.com/mathause>`_,
+ `Maximilian Roos <https://github.com/max-sixty>`_, and `Spencer Clark <https://github.com/spencerkclark>`_.
- :py:attr:`DataArray.rolling` and :py:attr:`Dataset.rolling` no longer support passing ``keep_attrs``
via its constructor. Pass ``keep_attrs`` via the applied function, i.e. use
``ds.rolling(...).mean(keep_attrs=False)`` instead of ``ds.rolling(..., keep_attrs=False).mean()``
@@ -37,6 +41,10 @@ New Features
By `Miguel Jimenez <https://github.com/Mikejmnez>`_ and `Wei Ji Leong <https://github.com/weiji14>`_.
- Unary & binary operations follow the ``keep_attrs`` flag (:issue:`3490`, :issue:`4065`, :issue:`3433`, :issue:`3595`, :pull:`4195`).
By `Deepak Cherian <https://github.com/dcherian>`_.
+- Added :py:meth:`~core.accessor_dt.DatetimeAccessor.isocalendar()` that returns a Dataset
+ with year, week, and weekday calculated according to the ISO 8601 calendar. Requires
+ pandas version 1.1.0 or greater (:pull:`4534`). By `Mathias Hauser <https://github.com/mathause>`_,
+ `Maximilian Roos <https://github.com/max-sixty>`_, and `Spencer Clark <https://github.com/spencerkclark>`_.
- :py:meth:`Dataset.to_zarr` now supports a ``region`` keyword for writing to
limited regions of existing Zarr stores (:pull:`4035`).
See :ref:`io.zarr.appending` for full details.
| [
{
"components": [
{
"doc": "Dataset containing ISO year, week number, and weekday.\n\nNote\n----\nThe iso year and weekday differ from the nominal year and weekday.",
"lines": [
322,
347
],
"name": "DatetimeAccessor.isocalendar",
"signature": "de... | [
"xarray/tests/test_accessor_dt.py::TestDatetimeAccessor::test_field_access[week]",
"xarray/tests/test_accessor_dt.py::TestDatetimeAccessor::test_field_access[weekofyear]",
"xarray/tests/test_accessor_dt.py::TestDatetimeAccessor::test_isocalendar[year-year]",
"xarray/tests/test_accessor_dt.py::TestDatetimeAcce... | [
"xarray/tests/test_accessor_dt.py::TestDatetimeAccessor::test_field_access[year]",
"xarray/tests/test_accessor_dt.py::TestDatetimeAccessor::test_field_access[month]",
"xarray/tests/test_accessor_dt.py::TestDatetimeAccessor::test_field_access[day]",
"xarray/tests/test_accessor_dt.py::TestDatetimeAccessor::test... | This is a feature request which requires a new feature to be added to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add isocalendar to dt fields
<!-- Feel free to remove check-list items aren't relevant to your change -->
- [x] Tests added
- [x] Passes `isort . && black . && mypy . && flake8`
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [x] New functions/methods are listed in `api.rst`
This currently fails tests - IIUC because our infra for this expects scalars rather than tuples, and `isocalendar` returns a tuple.
The existing `week` & `weekofyear` are going to stop working at some point, so if we can move off those that would be ideal:
```
xarray/tests/test_accessor_dt.py::TestDatetimeAccessor::test_field_access[weekofyear]
xarray/tests/test_accessor_dt.py::TestDatetimeAccessor::test_field_access[week]
/Users/maximilian/workspace/xarray/xarray/tests/test_accessor_dt.py:72: FutureWarning: weekofyear and week have been deprecated, please use DatetimeIndex.isocalendar().week instead, which returns a Series. To exactly reproduce the behavior of week and weekofyear and return an Index, you may call pd.Int64Index(idx.isocalendar().week)
getattr(self.times, field), name=field, coords=[self.times], dims=["time"]
xarray/tests/test_accessor_dt.py::TestDatetimeAccessor::test_field_access[weekofyear]
xarray/tests/test_accessor_dt.py::TestDatetimeAccessor::test_field_access[week]
/Users/maximilian/workspace/xarray/xarray/core/accessor_dt.py:44: FutureWarning: Series.dt.weekofyear and Series.dt.week have been deprecated. Please use Series.dt.isocalendar().week instead.
field_values = getattr(values_as_series.dt, name).values
-- Docs: https://docs.pytest.org/en/stable/warnings.html
```
I'd be very happy for someone else to take this on...
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in xarray/core/accessor_dt.py]
(definition of DatetimeAccessor.isocalendar:)
def isocalendar(self):
"""Dataset containing ISO year, week number, and weekday.
Note
----
The iso year and weekday differ from the nominal year and weekday."""
(definition of DatetimeAccessor.weekofyear:)
def weekofyear(self):
"""The week ordinal of the year"""
[end of new definitions in xarray/core/accessor_dt.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1c198a191127c601d091213c4b3292a8bb3054e1 | |
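The accessor above delegates to pandas' `Series.dt.isocalendar()`; the ISO 8601 values involved are the same ones the standard library exposes via `datetime.date.isocalendar()`, which makes the year/week/weekday distinction in the docstring ("The iso year and weekday differ from the nominal year and weekday") easy to verify without pandas:

```python
from datetime import date

# ISO fields can differ from the nominal calendar fields:
# 2021-01-01 (a Friday) still belongs to week 53 of ISO year 2020.
iso_year, iso_week, iso_weekday = date(2021, 1, 1).isocalendar()
print(iso_year, iso_week, iso_weekday)      # 2020 53 5

# The first ISO week of 2021 starts on Monday, 2021-01-04.
print(tuple(date(2021, 1, 4).isocalendar()))  # (2021, 1, 1)
```

The patch reshapes the pandas result with `field_values.T.reshape(3, *values.shape)` precisely because each timestamp maps to three such integers (year, week, weekday) rather than one scalar per field.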
slackapi__python-slack-sdk-860 | 860 | slackapi/python-slack-sdk | null | 141da1fd220fdcb0749d514ec8c6c6a729f1424b | 2020-10-24T03:34:46Z | diff --git a/slack_sdk/oauth/installation_store/async_cacheable_installation_store.py b/slack_sdk/oauth/installation_store/async_cacheable_installation_store.py
new file mode 100644
index 000000000..c0c889ebf
--- /dev/null
+++ b/slack_sdk/oauth/installation_store/async_cacheable_installation_store.py
@@ -0,0 +1,34 @@
+from logging import Logger
+from typing import Optional, Dict
+
+from slack_sdk.oauth.installation_store import Bot, Installation
+from slack_sdk.oauth.installation_store.async_installation_store import AsyncInstallationStore
+
+
+class AsyncCacheableInstallationStore(AsyncInstallationStore):
+ underlying: AsyncInstallationStore
+ cached_bots: Dict[str, Bot]
+
+ def __init__(self, installation_store: AsyncInstallationStore):
+ """A simple memory cache wrapper for any installation stores.
+
+ :param installation_store: the installation store to wrap
+ """
+ self.underlying = installation_store
+ self.cached_bots = {}
+
+ @property
+ def logger(self) -> Logger:
+ return self.underlying.logger
+
+ async def async_save(self, installation: Installation):
+ return await self.underlying.async_save(installation)
+
+ async def async_find_bot(self, *, enterprise_id: Optional[str], team_id: Optional[str]) -> Optional[Bot]:
+ key = f"{enterprise_id}-{team_id}"
+ if key in self.cached_bots:
+ return self.cached_bots[key]
+ bot = await self.underlying.async_find_bot(enterprise_id=enterprise_id, team_id=team_id)
+ if bot:
+ self.cached_bots[key] = bot
+ return bot
diff --git a/slack_sdk/oauth/installation_store/cacheable_installation_store.py b/slack_sdk/oauth/installation_store/cacheable_installation_store.py
new file mode 100644
index 000000000..069062d61
--- /dev/null
+++ b/slack_sdk/oauth/installation_store/cacheable_installation_store.py
@@ -0,0 +1,36 @@
+from logging import Logger
+from typing import Optional, Dict
+
+from slack_sdk.oauth import InstallationStore
+from slack_sdk.oauth.installation_store import Bot, Installation
+
+
+class CacheableInstallationStore(InstallationStore):
+ underlying: InstallationStore
+ cached_bots: Dict[str, Bot]
+
+ def __init__(self, installation_store: InstallationStore):
+ """A simple memory cache wrapper for any installation stores.
+
+ :param installation_store: the installation store to wrap
+ """
+ self.underlying = installation_store
+ self.cached_bots = {}
+
+ @property
+ def logger(self) -> Logger:
+ return self.underlying.logger
+
+ def save(self, installation: Installation):
+ return self.underlying.save(installation)
+
+ def find_bot(
+ self, *, enterprise_id: Optional[str], team_id: Optional[str]
+ ) -> Optional[Bot]:
+ key = f"{enterprise_id}-{team_id}"
+ if key in self.cached_bots:
+ return self.cached_bots[key]
+ bot = self.underlying.find_bot(enterprise_id=enterprise_id, team_id=team_id)
+ if bot:
+ self.cached_bots[key] = bot
+ return bot
| diff --git a/tests/slack_sdk/oauth/installation_store/test_simple_cache.py b/tests/slack_sdk/oauth/installation_store/test_simple_cache.py
new file mode 100644
index 000000000..c57c84adc
--- /dev/null
+++ b/tests/slack_sdk/oauth/installation_store/test_simple_cache.py
@@ -0,0 +1,39 @@
+import os
+import unittest
+
+from slack_sdk.oauth.installation_store import Installation
+from slack_sdk.oauth.installation_store.cacheable_installation_store import CacheableInstallationStore
+from slack_sdk.oauth.installation_store.sqlite3 import SQLite3InstallationStore
+
+
+class TestCacheable(unittest.TestCase):
+
+ def test_save_and_find(self):
+ sqlite3_store = SQLite3InstallationStore(
+ database="logs/cacheable.db",
+ client_id="111.222"
+ )
+ sqlite3_store.init()
+ store = CacheableInstallationStore(sqlite3_store)
+
+ installation = Installation(
+ app_id="A111",
+ enterprise_id="E111",
+ team_id="T111",
+ user_id="U111",
+ bot_id="B111",
+ bot_token="xoxb-111",
+ bot_scopes=["chat:write"],
+ bot_user_id="U222"
+ )
+ store.save(installation)
+
+ bot = store.find_bot(enterprise_id="E111", team_id="T111")
+ self.assertIsNotNone(bot)
+
+ os.remove("logs/cacheable.db")
+
+ bot = sqlite3_store.find_bot(enterprise_id="E111", team_id="T111")
+ self.assertIsNone(bot)
+ bot = store.find_bot(enterprise_id="E111", team_id="T111")
+ self.assertIsNotNone(bot)
diff --git a/tests/slack_sdk_async/oauth/__init__.py b/tests/slack_sdk_async/oauth/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/tests/slack_sdk_async/oauth/installation_store/__init__.py b/tests/slack_sdk_async/oauth/installation_store/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/tests/slack_sdk_async/oauth/installation_store/test_simple_cache.py b/tests/slack_sdk_async/oauth/installation_store/test_simple_cache.py
new file mode 100644
index 000000000..b92b3d90b
--- /dev/null
+++ b/tests/slack_sdk_async/oauth/installation_store/test_simple_cache.py
@@ -0,0 +1,41 @@
+import os
+import unittest
+
+from slack_sdk.oauth.installation_store import Installation
+from slack_sdk.oauth.installation_store.async_cacheable_installation_store import AsyncCacheableInstallationStore
+from slack_sdk.oauth.installation_store.sqlite3 import SQLite3InstallationStore
+from tests.helpers import async_test
+
+
+class TestCacheable(unittest.TestCase):
+
+ @async_test
+ async def test_save_and_find(self):
+ sqlite3_store = SQLite3InstallationStore(
+ database="logs/cacheable.db",
+ client_id="111.222"
+ )
+ sqlite3_store.init()
+ store = AsyncCacheableInstallationStore(sqlite3_store)
+
+ installation = Installation(
+ app_id="A111",
+ enterprise_id="E111",
+ team_id="T111",
+ user_id="U111",
+ bot_id="B111",
+ bot_token="xoxb-111",
+ bot_scopes=["chat:write"],
+ bot_user_id="U222"
+ )
+ await store.async_save(installation)
+
+ bot = await store.async_find_bot(enterprise_id="E111", team_id="T111")
+ self.assertIsNotNone(bot)
+
+ os.remove("logs/cacheable.db")
+
+ bot = await sqlite3_store.async_find_bot(enterprise_id="E111", team_id="T111")
+ self.assertIsNone(bot)
+ bot = await store.async_find_bot(enterprise_id="E111", team_id="T111")
+ self.assertIsNotNone(bot)
| [
{
"components": [
{
"doc": "",
"lines": [
8,
34
],
"name": "AsyncCacheableInstallationStore",
"signature": "class AsyncCacheableInstallationStore(AsyncInstallationStore):",
"type": "class"
},
{
"doc": "A simple memory ... | [
"tests/slack_sdk/oauth/installation_store/test_simple_cache.py::TestCacheable::test_save_and_find",
"tests/slack_sdk_async/oauth/installation_store/test_simple_cache.py::TestCacheable::test_save_and_find"
] | [] | This is a feature request which requires a new feature to be added to the code repository.
<<NEW FEATURE REQUEST>>
<request>
[v3] Add memory-cache wrapper for InstallationStore/AsyncInstallationStore
## Summary
This pull request adds optional classes that are useful for improving Slack apps' performance. Using the wrapper classes (in the decorator design pattern) enables Slack apps to reduce queries to the datastore.
This PR doesn't change any default behaviors and it just adds optional modules.
### Category (place an `x` in each of the `[ ]`)
- [x] **slack_sdk.oauth** (OAuth)
## Requirements (place an `x` in each `[ ]`)
- [x] I've read and understood the [Contributing Guidelines](https://github.com/slackapi/python-slackclient/blob/main/.github/contributing.md) and have done my best effort to follow them.
- [x] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
- [x] I've run `python setup.py validate` after making the changes.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in slack_sdk/oauth/installation_store/async_cacheable_installation_store.py]
(definition of AsyncCacheableInstallationStore:)
class AsyncCacheableInstallationStore(AsyncInstallationStore):
(definition of AsyncCacheableInstallationStore.__init__:)
def __init__(self, installation_store: AsyncInstallationStore):
"""A simple memory cache wrapper for any installation stores.
:param installation_store: the installation store to wrap"""
(definition of AsyncCacheableInstallationStore.logger:)
def logger(self) -> Logger:
[end of new definitions in slack_sdk/oauth/installation_store/async_cacheable_installation_store.py]
[start of new definitions in slack_sdk/oauth/installation_store/cacheable_installation_store.py]
(definition of CacheableInstallationStore:)
class CacheableInstallationStore(InstallationStore):
(definition of CacheableInstallationStore.__init__:)
def __init__(self, installation_store: InstallationStore):
"""A simple memory cache wrapper for any installation stores.
:param installation_store: the installation store to wrap"""
(definition of CacheableInstallationStore.logger:)
def logger(self) -> Logger:
(definition of CacheableInstallationStore.save:)
def save(self, installation: Installation):
(definition of CacheableInstallationStore.find_bot:)
def find_bot( self, *, enterprise_id: Optional[str], team_id: Optional[str] ) -> Optional[Bot]:
[end of new definitions in slack_sdk/oauth/installation_store/cacheable_installation_store.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
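For illustration, here is a minimal, self-contained sketch of the decorator pattern the definitions above describe: a wrapper that serves repeated `find_bot` lookups from an in-memory dict and falls back to the wrapped store on a miss. The `Bot`, `BaseStore`, and `CacheableStoreSketch` names are simplified stand-ins, not the real `slack_sdk` classes (the real `CacheableInstallationStore` also wraps `save` and the other store methods).

```python
from typing import Dict, Optional, Tuple


class Bot:
    """Illustrative stand-in for slack_sdk's Bot model."""

    def __init__(self, bot_token: str):
        self.bot_token = bot_token


class BaseStore:
    """Stand-in for the InstallationStore interface being wrapped."""

    def find_bot(self, *, enterprise_id: Optional[str],
                 team_id: Optional[str]) -> Optional[Bot]:
        raise NotImplementedError


class CacheableStoreSketch(BaseStore):
    """Decorator-pattern wrapper: answer repeated find_bot calls from
    memory instead of querying the underlying datastore every time."""

    def __init__(self, underlying: BaseStore):
        self.underlying = underlying
        self.cached_bots: Dict[Tuple[Optional[str], Optional[str]], Bot] = {}

    def find_bot(self, *, enterprise_id: Optional[str],
                 team_id: Optional[str]) -> Optional[Bot]:
        key = (enterprise_id, team_id)
        if key not in self.cached_bots:
            bot = self.underlying.find_bot(enterprise_id=enterprise_id,
                                           team_id=team_id)
            if bot is None:
                return None  # don't cache misses; the bot may appear later
            self.cached_bots[key] = bot
        return self.cached_bots[key]
```

This mirrors the behavior exercised by the tests above: after the first successful lookup, removing the backing database no longer affects results served from the cache.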
<<END>> | 2997a1786c4fd969b00ce69af888ebae8e8ebed0 | ||
DLR-RM__stable-baselines3-196 | 196 | DLR-RM/stable-baselines3 | null | 19c1a89a3add1bef4eb45ce79625fc3b3e41eee5 | 2020-10-20T19:06:32Z | diff --git a/docs/guide/tensorboard.rst b/docs/guide/tensorboard.rst
index 7ae2ffa60..cf5c4cf21 100644
--- a/docs/guide/tensorboard.rst
+++ b/docs/guide/tensorboard.rst
@@ -80,3 +80,76 @@ Here is a simple example on how to log both additional tensor or arbitrary scala
model.learn(50000, callback=TensorboardCallback())
+
+Logging Videos
+--------------
+
+TensorBoard supports periodic logging of video data, which helps in evaluating agents at various stages during training.
+
+.. warning::
+ To support video logging, `moviepy <https://zulko.github.io/moviepy/>`_ must be installed; otherwise, TensorBoard ignores the video and logs a warning.
+
+Here is an example of how to render an episode and log the resulting video to TensorBoard at regular intervals:
+
+.. code-block:: python
+
+ from typing import Any, Dict
+
+ import gym
+ import torch as th
+
+ from stable_baselines3 import A2C
+ from stable_baselines3.common.callbacks import BaseCallback
+ from stable_baselines3.common.evaluation import evaluate_policy
+ from stable_baselines3.common.logger import Video
+
+
+ class VideoRecorderCallback(BaseCallback):
+ def __init__(self, eval_env: gym.Env, render_freq: int, n_eval_episodes: int = 1, deterministic: bool = True):
+ """
+ Records a video of an agent's trajectory traversing ``eval_env`` and logs it to TensorBoard
+
+ :param eval_env: A gym environment from which the trajectory is recorded
+ :param render_freq: Render the agent's trajectory every render_freq call of the callback.
+ :param n_eval_episodes: Number of episodes to render
+ :param deterministic: Whether to use deterministic or stochastic policy
+ """
+ super().__init__()
+ self._eval_env = eval_env
+ self._render_freq = render_freq
+ self._n_eval_episodes = n_eval_episodes
+ self._deterministic = deterministic
+
+ def _on_step(self) -> bool:
+ if self.n_calls % self._render_freq == 0:
+ screens = []
+
+ def grab_screens(_locals: Dict[str, Any], _globals: Dict[str, Any]) -> None:
+ """
+ Renders the environment in its current state, recording the screen in the captured `screens` list
+
+ :param _locals: A dictionary containing all local variables of the callback's scope
+ :param _globals: A dictionary containing all global variables of the callback's scope
+ """
+ screen = self._eval_env.render(mode="rgb_array")
+ # PyTorch uses CxHxW vs HxWxC gym (and tensorflow) image convention
+ screens.append(screen.transpose(2, 0, 1))
+
+ evaluate_policy(
+ self.model,
+ self._eval_env,
+ callback=grab_screens,
+ n_eval_episodes=self._n_eval_episodes,
+ deterministic=self._deterministic,
+ )
+ self.logger.record(
+ "trajectory/video",
+ Video(th.ByteTensor([screens]), fps=40),
+ exclude=("stdout", "log", "json", "csv"),
+ )
+ return True
+
+
+ model = A2C("MlpPolicy", "CartPole-v1", tensorboard_log="runs/", verbose=1)
+ video_recorder = VideoRecorderCallback(gym.make("CartPole-v1"), render_freq=5000)
+ model.learn(total_timesteps=int(5e4), callback=video_recorder)
diff --git a/docs/misc/changelog.rst b/docs/misc/changelog.rst
index f63571218..e896723d1 100644
--- a/docs/misc/changelog.rst
+++ b/docs/misc/changelog.rst
@@ -14,6 +14,7 @@ Breaking Changes:
New Features:
^^^^^^^^^^^^^
- Allow custom actor/critic network architectures using ``net_arch=dict(qf=[400, 300], pi=[64, 64])`` for off-policy algorithms (SAC, TD3, DDPG)
+- Support logging videos to Tensorboard (@SwamyDev)
Bug Fixes:
^^^^^^^^^^
diff --git a/stable_baselines3/common/logger.py b/stable_baselines3/common/logger.py
index f97b69a1d..391c62e77 100644
--- a/stable_baselines3/common/logger.py
+++ b/stable_baselines3/common/logger.py
@@ -5,7 +5,7 @@
import tempfile
import warnings
from collections import defaultdict
-from typing import Any, Dict, List, Optional, TextIO, Tuple, Union
+from typing import Any, Dict, List, Optional, Sequence, TextIO, Tuple, Union
import numpy as np
import pandas
@@ -23,6 +23,28 @@
DISABLED = 50
+class Video(object):
+ """
+ Video data class storing the video frames and the frame per seconds
+ """
+
+ def __init__(self, frames: th.Tensor, fps: Union[float, int]):
+ self.frames = frames
+ self.fps = fps
+
+
+class FormatUnsupportedError(NotImplementedError):
+ def __init__(self, unsupported_formats: Sequence[str], value_description: str):
+ if len(unsupported_formats) > 1:
+ format_str = f"formats {', '.join(unsupported_formats)} are"
+ else:
+ format_str = f"format {unsupported_formats[0]} is"
+ super(FormatUnsupportedError, self).__init__(
+ f"The {format_str} not supported for the {value_description} value logged.\n"
+ f"You can exclude formats via the `exclude` parameter of the logger's `record` function."
+ )
+
+
class KVWriter(object):
"""
Key Value writer
@@ -83,6 +105,9 @@ def write(self, key_values: Dict, key_excluded: Dict, step: int = 0) -> None:
if excluded is not None and ("stdout" in excluded or "log" in excluded):
continue
+ if isinstance(value, Video):
+ raise FormatUnsupportedError(["stdout", "log"], "video")
+
if isinstance(value, float):
# Align left
value_str = f"{value:<8.3g}"
@@ -169,6 +194,8 @@ def __init__(self, filename: str):
def write(self, key_values: Dict[str, Any], key_excluded: Dict[str, Union[str, Tuple[str, ...]]], step: int = 0) -> None:
def cast_to_json_serializable(value: Any):
+ if isinstance(value, Video):
+ raise FormatUnsupportedError(["json"], "video")
if hasattr(value, "dtype"):
if value.shape == () or len(value) == 1:
# if value is a dimensionless numpy array or of length 1, serialize as a float
@@ -227,6 +254,10 @@ def write(self, key_values: Dict[str, Any], key_excluded: Dict[str, Union[str, T
if i > 0:
self.file.write(",")
value = key_values.get(key)
+
+ if isinstance(value, Video):
+ raise FormatUnsupportedError(["csv"], "video")
+
if value is not None:
self.file.write(str(value))
self.file.write("\n")
@@ -262,6 +293,9 @@ def write(self, key_values: Dict[str, Any], key_excluded: Dict[str, Union[str, T
if isinstance(value, th.Tensor):
self.writer.add_histogram(key, value, step)
+ if isinstance(value, Video):
+ self.writer.add_video(key, value.frames, step, value.fps)
+
# Flush the output to the file
self.writer.flush()
| diff --git a/tests/test_logger.py b/tests/test_logger.py
index 5ab540f4a..98f832d8f 100644
--- a/tests/test_logger.py
+++ b/tests/test_logger.py
@@ -2,11 +2,14 @@
import numpy as np
import pytest
+import torch as th
from pandas.errors import EmptyDataError
from stable_baselines3.common.logger import (
DEBUG,
+ FormatUnsupportedError,
ScopedConfigure,
+ Video,
configure,
debug,
dump,
@@ -162,3 +165,36 @@ def test_exclude_keys(tmp_path, read_log, _format):
writer.write(dict(some_tag=42), key_excluded=dict(some_tag=(_format)))
writer.close()
assert read_log(_format).empty
+
+
+def test_report_video_to_tensorboard(tmp_path, read_log, capsys):
+ pytest.importorskip("tensorboard")
+
+ video = Video(frames=th.rand(1, 20, 3, 16, 16), fps=20)
+ writer = make_output_format("tensorboard", tmp_path)
+ writer.write({"video": video}, key_excluded={"video": ()})
+
+ if is_moviepy_installed():
+ assert not read_log("tensorboard").empty
+ else:
+ assert "moviepy" in capsys.readouterr().out
+ writer.close()
+
+
+def is_moviepy_installed():
+ try:
+ import moviepy # noqa: F401
+ except ModuleNotFoundError:
+ return False
+ return True
+
+
+@pytest.mark.parametrize("unsupported_format", ["stdout", "log", "json", "csv"])
+def test_report_video_to_unsupported_format_raises_error(tmp_path, unsupported_format):
+ writer = make_output_format(unsupported_format, tmp_path)
+
+ with pytest.raises(FormatUnsupportedError) as exec_info:
+ video = Video(frames=th.rand(1, 20, 3, 16, 16), fps=20)
+ writer.write({"video": video}, key_excluded={"video": ()})
+ assert unsupported_format in str(exec_info.value)
+ writer.close()
| diff --git a/docs/guide/tensorboard.rst b/docs/guide/tensorboard.rst
index 7ae2ffa60..cf5c4cf21 100644
--- a/docs/guide/tensorboard.rst
+++ b/docs/guide/tensorboard.rst
@@ -80,3 +80,76 @@ Here is a simple example on how to log both additional tensor or arbitrary scala
model.learn(50000, callback=TensorboardCallback())
+
+Logging Videos
+--------------
+
+TensorBoard supports periodic logging of video data, which helps in evaluating agents at various stages during training.
+
+.. warning::
+ To support video logging, `moviepy <https://zulko.github.io/moviepy/>`_ must be installed; otherwise, TensorBoard ignores the video and logs a warning.
+
+Here is an example of how to render an episode and log the resulting video to TensorBoard at regular intervals:
+
+.. code-block:: python
+
+ from typing import Any, Dict
+
+ import gym
+ import torch as th
+
+ from stable_baselines3 import A2C
+ from stable_baselines3.common.callbacks import BaseCallback
+ from stable_baselines3.common.evaluation import evaluate_policy
+ from stable_baselines3.common.logger import Video
+
+
+ class VideoRecorderCallback(BaseCallback):
+ def __init__(self, eval_env: gym.Env, render_freq: int, n_eval_episodes: int = 1, deterministic: bool = True):
+ """
+ Records a video of an agent's trajectory traversing ``eval_env`` and logs it to TensorBoard
+
+ :param eval_env: A gym environment from which the trajectory is recorded
+ :param render_freq: Render the agent's trajectory every render_freq call of the callback.
+ :param n_eval_episodes: Number of episodes to render
+ :param deterministic: Whether to use deterministic or stochastic policy
+ """
+ super().__init__()
+ self._eval_env = eval_env
+ self._render_freq = render_freq
+ self._n_eval_episodes = n_eval_episodes
+ self._deterministic = deterministic
+
+ def _on_step(self) -> bool:
+ if self.n_calls % self._render_freq == 0:
+ screens = []
+
+ def grab_screens(_locals: Dict[str, Any], _globals: Dict[str, Any]) -> None:
+ """
+ Renders the environment in its current state, recording the screen in the captured `screens` list
+
+ :param _locals: A dictionary containing all local variables of the callback's scope
+ :param _globals: A dictionary containing all global variables of the callback's scope
+ """
+ screen = self._eval_env.render(mode="rgb_array")
+ # PyTorch uses CxHxW vs HxWxC gym (and tensorflow) image convention
+ screens.append(screen.transpose(2, 0, 1))
+
+ evaluate_policy(
+ self.model,
+ self._eval_env,
+ callback=grab_screens,
+ n_eval_episodes=self._n_eval_episodes,
+ deterministic=self._deterministic,
+ )
+ self.logger.record(
+ "trajectory/video",
+ Video(th.ByteTensor([screens]), fps=40),
+ exclude=("stdout", "log", "json", "csv"),
+ )
+ return True
+
+
+ model = A2C("MlpPolicy", "CartPole-v1", tensorboard_log="runs/", verbose=1)
+ video_recorder = VideoRecorderCallback(gym.make("CartPole-v1"), render_freq=5000)
+ model.learn(total_timesteps=int(5e4), callback=video_recorder)
diff --git a/docs/misc/changelog.rst b/docs/misc/changelog.rst
index f63571218..e896723d1 100644
--- a/docs/misc/changelog.rst
+++ b/docs/misc/changelog.rst
@@ -14,6 +14,7 @@ Breaking Changes:
New Features:
^^^^^^^^^^^^^
- Allow custom actor/critic network architectures using ``net_arch=dict(qf=[400, 300], pi=[64, 64])`` for off-policy algorithms (SAC, TD3, DDPG)
+- Support logging videos to Tensorboard (@SwamyDev)
Bug Fixes:
^^^^^^^^^^
| [
{
"components": [
{
"doc": "Video data class storing the video frames and the frame per seconds",
"lines": [
26,
33
],
"name": "Video",
"signature": "class Video(object):",
"type": "class"
},
{
"doc": "",
"line... | [
"tests/test_logger.py::test_main",
"tests/test_logger.py::test_make_output[stdout]",
"tests/test_logger.py::test_make_output[log]",
"tests/test_logger.py::test_make_output[json]",
"tests/test_logger.py::test_make_output[csv]",
"tests/test_logger.py::test_make_output_fail",
"tests/test_logger.py::test_ex... | [] | This is a feature request which requires a new feature to be added to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add support to log videos via tensorboard
## Description
I've introduced a new Video type which simply consists of the frames and the requested frames per second of the video. This type can then be checked against in the logger's format-writer classes and treated accordingly. Currently only the `TensorBoardOutputFormat` supports writing Videos as it is easy to implement and Tensorboard already supports handling video data. All other formats raise a custom NotSupportedError reporting the unsupported format and suggesting how to exclude them from the `record` call. Tests have been added to check that behavior. To allow video support, the `moviepy` package has been added to the `extra` dependencies in the setup.py. Tensorboard needs this package to render videos; otherwise it will produce a warning and just ignore the video data.
However, there are some options to consider:
- Removing the `moviepy` dependency from the `extra` dependencies and reporting a warning or raising an error if a video is logged to Tensorboard and `moviepy` is not installed. This way users can decide whether they want that feature or not
- Instead of capturing the rendered screens via callback, extending `evaluate_policy` to also return the rendered screens if the `render` parameter is True or set to "rgb_array", or a similar mechanism
What do you think? For this first implementation I went with the simplest and least intrusive options.
## Motivation and Context
Logging an agent's trajectory as a video helps evaluate the agent. One can inspect directly how the agent is behaving during training and see if it is making progress. When tuning parameters or comparing different implementations, it's of great help to directly see if the agent's performance is actually improving in addition to the usual reward and loss metrics. In summary, it provides a more complete picture of an agent's learning progress.
closes #186
- [x] I have raised an issue to propose this change ([required](https://github.com/DLR-RM/stable-baselines3/blob/master/CONTRIBUTING.md) for new features and bug fixes)
## Types of changes
<!--- What types of changes does your code introduce? Put an `x` in all the boxes that apply: -->
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [x] Documentation (update in the documentation)
## Checklist:
<!--- Go over all the following points, and put an `x` in all the boxes that apply. -->
<!--- If you're unsure about any of these, don't hesitate to ask. We're here to help! -->
- [x] I've read the [CONTRIBUTION](https://github.com/DLR-RM/stable-baselines3/blob/master/CONTRIBUTING.md) guide (**required**)
- [x] I have updated the changelog accordingly (**required**).
- [x] My change requires a change to the documentation.
- [x] I have updated the tests accordingly (*required for a bug fix or a new feature*).
- [x] I have updated the documentation accordingly.
- [x] I have reformatted the code using `make format` (**required**)
- [x] I have checked the codestyle using `make check-codestyle` and `make lint` (**required**)
- [x] I have ensured `make pytest` and `make type` both pass. (**required**)
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in stable_baselines3/common/logger.py]
(definition of Video:)
class Video(object):
"""Video data class storing the video frames and the frame per seconds"""
(definition of Video.__init__:)
def __init__(self, frames: th.Tensor, fps: Union[float, int]):
(definition of FormatUnsupportedError:)
class FormatUnsupportedError(NotImplementedError):
(definition of FormatUnsupportedError.__init__:)
def __init__(self, unsupported_formats: Sequence[str], value_description: str):
[end of new definitions in stable_baselines3/common/logger.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
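As a rough, torch-free sketch of the two definitions above, the type-dispatch idea looks like this. The `Video` and `FormatUnsupportedError` bodies follow the patch shown earlier (the real `frames` is a `th.Tensor`); the `format_csv_value` helper is an illustrative stand-in for the writer-side check, not the actual `CSVOutputFormat.write` method.

```python
from typing import Sequence, Union


class Video:
    """Video frames plus frames-per-second; format writers dispatch on
    this type (the real class stores a torch.Tensor of frames)."""

    def __init__(self, frames, fps: Union[float, int]):
        self.frames = frames
        self.fps = fps


class FormatUnsupportedError(NotImplementedError):
    """Raised by text-based writers that cannot serialize a Video."""

    def __init__(self, unsupported_formats: Sequence[str], value_description: str):
        if len(unsupported_formats) > 1:
            format_str = f"formats {', '.join(unsupported_formats)} are"
        else:
            format_str = f"format {unsupported_formats[0]} is"
        super().__init__(
            f"The {format_str} not supported for the {value_description} value logged.\n"
            f"You can exclude formats via the `exclude` parameter of the logger's `record` function."
        )


def format_csv_value(value) -> str:
    """Illustrative writer-side check: reject Video values explicitly
    instead of silently serializing them into a CSV cell."""
    if isinstance(value, Video):
        raise FormatUnsupportedError(["csv"], "video")
    return str(value)
```

A caller that records a video without excluding the text formats gets a clear error naming the offending format rather than garbage in the log file.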
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
[feature request] Support Tensorboard video logging (convenience)
When investigating RL algorithms I've found it quite useful to report videos of a trajectory to Tensorboard, to directly relate it to other performance measures and look at the agent at different stages.
The implementation would be quite simple, I think, and if you decide this is a useful feature, I'd be happy to implement it. As a matter of fact, I've already done so in my own fork of this repo. Here is how I envisioned its usage:
```python
def render_trajectory(env, model):
    obs = env.reset()
    rewards = []
    done = False
    screens = []
    while not done:
        actions, _ = model.predict(obs, deterministic=True)
        obs, _, done, _ = env.step(actions)
        screen = env.render(mode='rgb_array')
        screens.append(screen.transpose(2, 0, 1))
    return Video(th.ByteTensor([screens]), fps=40)


class ReportTrajectoryCallback(BaseCallback):
    def __init__(self, eval_env, check_freq):
        super().__init__()
        self._eval_env = eval_env
        self._check_freq = check_freq

    def _on_step(self):
        if self.n_calls % self._check_freq == 0:
            video = render_trajectory(self._eval_env, self.model)
            self.logger.record("trajectory/video", video)
        return True


env = gym.make('CartPole-v1')
model = A2C('MlpPolicy', env)
model.learn(total_timesteps=int(1e4), callback=ReportTrajectoryCallback(env, 1000))
```
I'd suggest introducing a `Video` object and a simple extension of the logger to record the video. The `TensorBoardOutputFormat` would simply check for a video object and forward it to the `SummaryWriter`. All other `KVWriter` classes would just ignore it. As mentioned, I've done this already myself. Here is the relevant commit:
https://github.com/SwamyDev/stable-baselines3/commit/d6b15416000011a5aeddd4af1bfb9f9a2d79d22f
Of course I'm open to any different approaches. For instance, checking the type directly in the Formatter classes is not good object-oriented style, and maybe an explicit video-logging function would be better. However, it is relatively simple and gets the job done.
What do you think? Would this is be a small but useful contribution?
----------
Hey, I agree this kind of thing is useful. Gym offers the video recorder wrapper but that only works with files, and indeed I see the usefulness in the tensorboard videos. This would be a perfect contribution to the sb3-contrib repository we are cooking (should be out soon™) which is designed for all kinds of useful utilities like this :). I will poke here once the repo is out.
In fact, part of it should be included inside the SB3 repo, because we need to change the tensorboard writer.
But I agree that the callback and the helper should go in the contrib folder.
And there is a simpler way to use it: `record("trajectory/video", video, exclude=("json", "stdout", "log", "csv"))`
Hi there, glad you think the idea is useful :).
I agree using the exclude explicitly would simplify the implementation. The reason I opted for extending all `*Format` classes to handle the `Video` object is that I wanted to take the burden of deciding which format is invalid off the user. I was worried that it might result in hard-to-understand error messages or strange log outputs when one tries to log a video to a format where it doesn't make sense. It's a trade-off between complexity and usability.
On another note, while investigating the simpler way you've suggested, I've noticed that the json exclude doesn't seem to work, as it is still trying to log the key to the json file. When looking at the code of `JSONOutputFormat`, it seems like it's just not converting array-like objects to lists when json is part of the exclude list, but otherwise dumping the whole `key_values` dictionary to json. Also `CSVOutputFormat` seems to completely ignore the `key_excluded` parameter.
If this is indeed unintended behavior I'd be happy to file a bug, and also provide a fix with regression tests :)
>On another note, while investigating the simpler way you've suggested I've noticed that the json exclude doesn't seem to work, as it is still trying to log the key to the json file
>If this is indeed unintended behavior I'd be happy to file a bug, and also provide a fix with regression tests :)
oh thanks, looks like a bug, please do ;)
Last thing, you can use `evaluate_policy` and the `callback` attribute of it instead of `render_trajectory` ;)
Ah thanks for the suggestion that makes my code a lot simpler :)
Ok, I'll file a proper bug report and everything for the other issue
With that issue out of the way, the video implementation becomes trivial. Now it's just the introduction of the `Video` class and an extra type check in `TensorBoardOutputFormat::write`.
However, I'd suggest handling the Video class in the other `*Format` classes as well and raising an error when `record("trajectory/video", video, ...)` is called without the proper excludes, explaining that videos are not supported for the selected logging format. What do you think?
Also, should I create a PR for this as well or do you want to wait for the `sb3-contrib` repository?
>Also, should I create a PR for this as well
yes, please do, as it needs to be done anyway.
we are currently beta-testing the contrib repo ;) (should be out soon)
--------------------
</issues> | 512eea923afad6f6da4bb53d72b6ea4c6d856e59 |
scikit-learn__scikit-learn-18649 | 18,649 | scikit-learn/scikit-learn | 1.0 | fe897c0ba0f00171333dcbdb483ca0d0346fed95 | 2020-10-19T20:41:58Z | diff --git a/doc/modules/classes.rst b/doc/modules/classes.rst
index c658bc6b12452..d019af3cfb1ff 100644
--- a/doc/modules/classes.rst
+++ b/doc/modules/classes.rst
@@ -1176,6 +1176,7 @@ Splitter Classes
model_selection.ShuffleSplit
model_selection.StratifiedKFold
model_selection.StratifiedShuffleSplit
+ model_selection.StratifiedGroupKFold
model_selection.TimeSeriesSplit
Splitter Functions
diff --git a/doc/modules/cross_validation.rst b/doc/modules/cross_validation.rst
index ae3d38f168f3f..0b090fd7385b6 100644
--- a/doc/modules/cross_validation.rst
+++ b/doc/modules/cross_validation.rst
@@ -353,7 +353,7 @@ Example of 2-fold cross-validation on a dataset with 4 samples::
Here is a visualization of the cross-validation behavior. Note that
:class:`KFold` is not affected by classes or groups.
-.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_004.png
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_006.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
:scale: 75%
@@ -509,7 +509,7 @@ Here is a usage example::
Here is a visualization of the cross-validation behavior. Note that
:class:`ShuffleSplit` is not affected by classes or groups.
-.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_006.png
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_008.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
:scale: 75%
@@ -566,7 +566,7 @@ We can see that :class:`StratifiedKFold` preserves the class ratios
Here is a visualization of the cross-validation behavior.
-.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_007.png
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_009.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
:scale: 75%
@@ -585,7 +585,7 @@ percentage for each target class as in the complete set.
Here is a visualization of the cross-validation behavior.
-.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_009.png
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_012.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
:scale: 75%
@@ -645,6 +645,58 @@ size due to the imbalance in the data.
Here is a visualization of the cross-validation behavior.
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_007.png
+ :target: ../auto_examples/model_selection/plot_cv_indices.html
+ :align: center
+ :scale: 75%
+
+.. _stratified_group_k_fold:
+
+StratifiedGroupKFold
+^^^^^^^^^^^^^^^^^^^^
+
+:class:`StratifiedGroupKFold` is a cross-validation scheme that combines both
+:class:`StratifiedKFold` and :class:`GroupKFold`. The idea is to try to
+preserve the distribution of classes in each split while keeping each group
+within a single split. That might be useful when you have an unbalanced
+dataset so that using just :class:`GroupKFold` might produce skewed splits.
+
+Example::
+
+ >>> from sklearn.model_selection import StratifiedGroupKFold
+ >>> X = list(range(18))
+ >>> y = [1] * 6 + [0] * 12
+ >>> groups = [1, 2, 3, 3, 4, 4, 1, 1, 2, 2, 3, 4, 5, 5, 5, 6, 6, 6]
+ >>> sgkf = StratifiedGroupKFold(n_splits=3)
+ >>> for train, test in sgkf.split(X, y, groups=groups):
+ ... print("%s %s" % (train, test))
+ [ 0 2 3 4 5 6 7 10 11 15 16 17] [ 1 8 9 12 13 14]
+ [ 0 1 4 5 6 7 8 9 11 12 13 14] [ 2 3 10 15 16 17]
+ [ 1 2 3 8 9 10 12 13 14 15 16 17] [ 0 4 5 6 7 11]
+
+Implementation notes:
+
+- With the current implementation full shuffle is not possible in most
+ scenarios. When shuffle=True, the following happens:
+
+ 1. All groups are shuffled.
+ 2. Groups are sorted by standard deviation of classes using stable sort.
+ 3. Sorted groups are iterated over and assigned to folds.
+
+ That means that only groups with the same standard deviation of class
+ distribution will be shuffled, which might be useful when each group has only
+ a single class.
+- The algorithm greedily assigns each group to one of n_splits test sets,
+ choosing the test set that minimises the variance in class distribution
+ across test sets. Group assignment proceeds from groups with highest to
+ lowest variance in class frequency, i.e. large groups peaked on one or few
+ classes are assigned first.
+- This split is suboptimal in a sense that it might produce imbalanced splits
+ even if perfect stratification is possible. If you have relatively close
+ distribution of classes in each group, using :class:`GroupKFold` is better.
+
+Here is a visualization of cross-validation behavior for uneven groups:
+
.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_005.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
@@ -733,7 +785,7 @@ Here is a usage example::
Here is a visualization of the cross-validation behavior.
-.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_008.png
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_011.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
:scale: 75%
@@ -835,7 +887,7 @@ Example of 3-split time series cross-validation on a dataset with 6 samples::
Here is a visualization of the cross-validation behavior.
-.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_010.png
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_013.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
:scale: 75%
diff --git a/doc/whats_new/v1.0.rst b/doc/whats_new/v1.0.rst
index 34f39ca48f20a..985fe57164824 100644
--- a/doc/whats_new/v1.0.rst
+++ b/doc/whats_new/v1.0.rst
@@ -183,6 +183,16 @@ Changelog
are integral.
:pr:`9843` by :user:`Jon Crall <Erotemic>`.
+:mod:`sklearn.model_selection`
+..............................
+
+- |Feature| added :class:`model_selection.StratifiedGroupKFold`, which combines
+ :class:`model_selection.StratifiedKFold` and :class:`model_selection.GroupKFold`,
+ providing the ability to split data while preserving the distribution of
+ classes in each split and keeping each group within a single split.
+ :pr:`18649` by :user:`Leandro Hermida <hermidalc>` and
+ :user:`Rodion Martynov <marrodion>`.
+
:mod:`sklearn.naive_bayes`
..........................
diff --git a/examples/model_selection/plot_cv_indices.py b/examples/model_selection/plot_cv_indices.py
index 91f71b0451cb2..f07fa1595e860 100644
--- a/examples/model_selection/plot_cv_indices.py
+++ b/examples/model_selection/plot_cv_indices.py
@@ -13,7 +13,8 @@
from sklearn.model_selection import (TimeSeriesSplit, KFold, ShuffleSplit,
StratifiedKFold, GroupShuffleSplit,
- GroupKFold, StratifiedShuffleSplit)
+ GroupKFold, StratifiedShuffleSplit,
+ StratifiedGroupKFold)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Patch
@@ -113,16 +114,32 @@ def plot_cv_indices(cv, X, y, group, ax, n_splits, lw=10):
# %%
# As you can see, by default the KFold cross-validation iterator does not
# take either datapoint class or group into consideration. We can change this
-# by using the ``StratifiedKFold`` like so.
+# by using either:
+#
+# - ``StratifiedKFold`` to preserve the percentage of samples for each class.
+# - ``GroupKFold`` to ensure that the same group will not appear in two
+# different folds.
+# - ``StratifiedGroupKFold`` to keep the constraint of ``GroupKFold`` while
+# attempting to return stratified folds.
-fig, ax = plt.subplots()
-cv = StratifiedKFold(n_splits)
-plot_cv_indices(cv, X, y, groups, ax, n_splits)
+# To better demonstrate the difference, we will assign samples to groups
+# unevenly:
+
+uneven_groups = np.sort(np.random.randint(0, 10, n_points))
+
+cvs = [StratifiedKFold, GroupKFold, StratifiedGroupKFold]
+
+for cv in cvs:
+ fig, ax = plt.subplots(figsize=(6, 3))
+ plot_cv_indices(cv(n_splits), X, y, uneven_groups, ax, n_splits)
+ ax.legend([Patch(color=cmap_cv(.8)), Patch(color=cmap_cv(.02))],
+ ['Testing set', 'Training set'], loc=(1.02, .8))
+ # Make the legend fit
+ plt.tight_layout()
+ fig.subplots_adjust(right=.7)
# %%
-# In this case, the cross-validation retained the same ratio of classes across
-# each CV split. Next we'll visualize this behavior for a number of CV
-# iterators.
+# Next we'll visualize this behavior for a number of CV iterators.
#
# Visualize cross-validation indices for many CV objects
# ------------------------------------------------------
@@ -133,7 +150,7 @@ def plot_cv_indices(cv, X, y, group, ax, n_splits, lw=10):
#
# Note how some use the group/class information while others do not.
-cvs = [KFold, GroupKFold, ShuffleSplit, StratifiedKFold,
+cvs = [KFold, GroupKFold, ShuffleSplit, StratifiedKFold, StratifiedGroupKFold,
GroupShuffleSplit, StratifiedShuffleSplit, TimeSeriesSplit]
diff --git a/sklearn/model_selection/__init__.py b/sklearn/model_selection/__init__.py
index 897183414b5a6..f79db2a5acc17 100644
--- a/sklearn/model_selection/__init__.py
+++ b/sklearn/model_selection/__init__.py
@@ -14,6 +14,7 @@
from ._split import ShuffleSplit
from ._split import GroupShuffleSplit
from ._split import StratifiedShuffleSplit
+from ._split import StratifiedGroupKFold
from ._split import PredefinedSplit
from ._split import train_test_split
from ._split import check_cv
@@ -57,6 +58,7 @@
'RandomizedSearchCV',
'ShuffleSplit',
'StratifiedKFold',
+ 'StratifiedGroupKFold',
'StratifiedShuffleSplit',
'check_cv',
'cross_val_predict',
diff --git a/sklearn/model_selection/_split.py b/sklearn/model_selection/_split.py
index 244b2b63af449..13edbeef071f5 100644
--- a/sklearn/model_selection/_split.py
+++ b/sklearn/model_selection/_split.py
@@ -3,13 +3,16 @@
functions to split the data based on a preset strategy.
"""
-# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>,
-# Gael Varoquaux <gael.varoquaux@normalesup.org>,
+# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
+# Gael Varoquaux <gael.varoquaux@normalesup.org>
# Olivier Grisel <olivier.grisel@ensta.org>
# Raghav RV <rvraghav93@gmail.com>
+# Leandro Hermida <hermidal@cs.umd.edu>
+# Rodion Martynov <marrodion@gmail.com>
# License: BSD 3 clause
from collections.abc import Iterable
+from collections import defaultdict
import warnings
from itertools import chain, combinations
from math import ceil, floor
@@ -40,6 +43,7 @@
'ShuffleSplit',
'GroupShuffleSplit',
'StratifiedKFold',
+ 'StratifiedGroupKFold',
'StratifiedShuffleSplit',
'PredefinedSplit',
'train_test_split',
@@ -732,6 +736,190 @@ def split(self, X, y, groups=None):
return super().split(X, y, groups)
+class StratifiedGroupKFold(_BaseKFold):
+ """Stratified K-Folds iterator variant with non-overlapping groups.
+
+ This cross-validation object is a variation of StratifiedKFold that
+ attempts to return stratified folds with non-overlapping groups. The
+ folds are made by preserving the percentage of samples for each class.
+
+ The same group will not appear in two different folds (the number of
+ distinct groups has to be at least equal to the number of folds).
+
+ The difference between GroupKFold and StratifiedGroupKFold is that
+ the former attempts to create balanced folds such that the number of
+ distinct groups is approximately the same in each fold, whereas
+ StratifiedGroupKFold attempts to create folds which preserve the
+ percentage of samples for each class as much as possible given the
+ constraint of non-overlapping groups between splits.
+
+ Read more in the :ref:`User Guide <cross_validation>`.
+
+ Parameters
+ ----------
+ n_splits : int, default=5
+ Number of folds. Must be at least 2.
+
+ shuffle : bool, default=False
+ Whether to shuffle each class's samples before splitting into batches.
+ Note that the samples within each split will not be shuffled.
+ This implementation can only shuffle groups that have approximately the
+ same y distribution; no global shuffle will be performed.
+
+ random_state : int or RandomState instance, default=None
+ When `shuffle` is True, `random_state` affects the ordering of the
+ indices, which controls the randomness of each fold for each class.
+ Otherwise, leave `random_state` as `None`.
+ Pass an int for reproducible output across multiple function calls.
+ See :term:`Glossary <random_state>`.
+
+ Examples
+ --------
+ >>> import numpy as np
+ >>> from sklearn.model_selection import StratifiedGroupKFold
+ >>> X = np.ones((17, 2))
+ >>> y = np.array([0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
+ >>> groups = np.array([1, 1, 2, 2, 3, 3, 3, 4, 5, 5, 5, 5, 6, 6, 7, 8, 8])
+ >>> cv = StratifiedGroupKFold(n_splits=3)
+ >>> for train_idxs, test_idxs in cv.split(X, y, groups):
+ ... print("TRAIN:", groups[train_idxs])
+ ... print(" ", y[train_idxs])
+ ... print(" TEST:", groups[test_idxs])
+ ... print(" ", y[test_idxs])
+ TRAIN: [1 1 2 2 4 5 5 5 5 8 8]
+ [0 0 1 1 1 0 0 0 0 0 0]
+ TEST: [3 3 3 6 6 7]
+ [1 1 1 0 0 0]
+ TRAIN: [3 3 3 4 5 5 5 5 6 6 7]
+ [1 1 1 1 0 0 0 0 0 0 0]
+ TEST: [1 1 2 2 8 8]
+ [0 0 1 1 0 0]
+ TRAIN: [1 1 2 2 3 3 3 6 6 7 8 8]
+ [0 0 1 1 1 1 1 0 0 0 0 0]
+ TEST: [4 5 5 5 5]
+ [1 0 0 0 0]
+
+ Notes
+ -----
+ The implementation is designed to:
+
+ * Mimic the behavior of StratifiedKFold as much as possible for trivial
+ groups (e.g. when each group contains only one sample).
+ * Be invariant to class label: relabelling ``y = ["Happy", "Sad"]`` to
+ ``y = [1, 0]`` should not change the indices generated.
+ * Stratify based on samples as much as possible while keeping
+ non-overlapping groups constraint. That means that in some cases when
+ there is a small number of groups containing a large number of samples
+ the stratification will not be possible and the behavior will be close
+ to GroupKFold.
+
+ See also
+ --------
+ StratifiedKFold: Takes class information into account to build folds which
+ retain class distributions (for binary or multiclass classification
+ tasks).
+
+ GroupKFold: K-fold iterator variant with non-overlapping groups.
+ """
+
+ def __init__(self, n_splits=5, shuffle=False, random_state=None):
+ super().__init__(n_splits=n_splits, shuffle=shuffle,
+ random_state=random_state)
+
+ def _iter_test_indices(self, X, y, groups):
+ # Implementation is based on this kaggle kernel:
+ # https://www.kaggle.com/jakubwasikowski/stratified-group-k-fold-cross-validation
+ # and is subject to the Apache 2.0 License. You may obtain a copy of the
+ # License at http://www.apache.org/licenses/LICENSE-2.0
+ # Changelist:
+ # - Refactored function to a class following scikit-learn KFold
+ # interface.
+ # - Added heuristic for assigning group to the least populated fold in
+ # cases when all other criteria are equal
+ # - Switch from using python ``Counter`` to ``np.unique`` to get class
+ # distribution
+ # - Added scikit-learn checks for input: checking that target is binary
+ # or multiclass, checking passed random state, checking that number
+ # of splits is less than number of members in each class, checking
+ # that least populated class has more members than there are splits.
+ rng = check_random_state(self.random_state)
+ y = np.asarray(y)
+ type_of_target_y = type_of_target(y)
+ allowed_target_types = ('binary', 'multiclass')
+ if type_of_target_y not in allowed_target_types:
+ raise ValueError(
+ 'Supported target types are: {}. Got {!r} instead.'.format(
+ allowed_target_types, type_of_target_y))
+
+ y = column_or_1d(y)
+ _, y_inv, y_cnt = np.unique(y, return_inverse=True, return_counts=True)
+ if np.all(self.n_splits > y_cnt):
+ raise ValueError("n_splits=%d cannot be greater than the"
+ " number of members in each class."
+ % (self.n_splits))
+ n_smallest_class = np.min(y_cnt)
+ if self.n_splits > n_smallest_class:
+ warnings.warn(("The least populated class in y has only %d"
+ " members, which is less than n_splits=%d."
+ % (n_smallest_class, self.n_splits)), UserWarning)
+ n_classes = len(y_cnt)
+
+ _, groups_inv, groups_cnt = np.unique(
+ groups, return_inverse=True, return_counts=True)
+ y_counts_per_group = np.zeros((len(groups_cnt), n_classes))
+ for class_idx, group_idx in zip(y_inv, groups_inv):
+ y_counts_per_group[group_idx, class_idx] += 1
+
+ y_counts_per_fold = np.zeros((self.n_splits, n_classes))
+ groups_per_fold = defaultdict(set)
+
+ if self.shuffle:
+ rng.shuffle(y_counts_per_group)
+
+ # Stable sort to keep shuffled order for groups with the same
+ # class distribution variance
+ sorted_groups_idx = np.argsort(-np.std(y_counts_per_group, axis=1),
+ kind='mergesort')
+
+ for group_idx in sorted_groups_idx:
+ group_y_counts = y_counts_per_group[group_idx]
+ best_fold = self._find_best_fold(
+ y_counts_per_fold=y_counts_per_fold, y_cnt=y_cnt,
+ group_y_counts=group_y_counts)
+ y_counts_per_fold[best_fold] += group_y_counts
+ groups_per_fold[best_fold].add(group_idx)
+
+ for i in range(self.n_splits):
+ test_indices = [idx for idx, group_idx in enumerate(groups_inv)
+ if group_idx in groups_per_fold[i]]
+ yield test_indices
+
+ def _find_best_fold(
+ self, y_counts_per_fold, y_cnt, group_y_counts):
+ best_fold = None
+ min_eval = np.inf
+ min_samples_in_fold = np.inf
+ for i in range(self.n_splits):
+ y_counts_per_fold[i] += group_y_counts
+ # Summarise the distribution over classes in each proposed fold
+ std_per_class = np.std(
+ y_counts_per_fold / y_cnt.reshape(1, -1),
+ axis=0)
+ y_counts_per_fold[i] -= group_y_counts
+ fold_eval = np.mean(std_per_class)
+ samples_in_fold = np.sum(y_counts_per_fold[i])
+ is_current_fold_better = (
+ fold_eval < min_eval or
+ np.isclose(fold_eval, min_eval)
+ and samples_in_fold < min_samples_in_fold
+ )
+ if is_current_fold_better:
+ min_eval = fold_eval
+ min_samples_in_fold = samples_in_fold
+ best_fold = i
+ return best_fold
+
+
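The selection criterion in ``_find_best_fold`` above can be illustrated without NumPy. This is a simplified pure-Python sketch of the same idea, with invented function and argument names: tentatively add the group's counts to each fold, score the spread of the normalised per-class counts across folds, and break near-ties by preferring the least populated fold.

```python
import math
from statistics import pstdev

def find_best_fold(fold_class_counts, total_class_counts, group_class_counts):
    n_splits = len(fold_class_counts)
    n_classes = len(total_class_counts)
    best_fold, min_eval, min_samples = None, math.inf, math.inf
    for i in range(n_splits):
        # tentatively add the group's class counts to fold i
        trial = [row[:] for row in fold_class_counts]
        trial[i] = [a + b for a, b in zip(trial[i], group_class_counts)]
        # mean over classes of the std (across folds) of normalised counts
        fold_eval = sum(
            pstdev(trial[f][c] / total_class_counts[c]
                   for f in range(n_splits))
            for c in range(n_classes)) / n_classes
        samples_in_fold = sum(fold_class_counts[i])
        if (fold_eval < min_eval
                or (math.isclose(fold_eval, min_eval)
                    and samples_in_fold < min_samples)):
            min_eval, min_samples, best_fold = fold_eval, samples_in_fold, i
    return best_fold
```

For example, when one fold already holds samples and two folds are empty, adding the next group to an empty fold lowers the spread, so an empty fold wins.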
class TimeSeriesSplit(_BaseKFold):
"""Time Series cross-validator
| diff --git a/sklearn/model_selection/tests/test_split.py b/sklearn/model_selection/tests/test_split.py
index 80c19c7f2e08c..c66d8e1836ac9 100644
--- a/sklearn/model_selection/tests/test_split.py
+++ b/sklearn/model_selection/tests/test_split.py
@@ -35,6 +35,7 @@
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import RepeatedStratifiedKFold
+from sklearn.model_selection import StratifiedGroupKFold
from sklearn.linear_model import Ridge
@@ -80,6 +81,7 @@ def test_cross_validator_with_default_params():
lopo = LeavePGroupsOut(p)
ss = ShuffleSplit(random_state=0)
ps = PredefinedSplit([1, 1, 2, 2]) # n_splits = np of unique folds = 2
+ sgkf = StratifiedGroupKFold(n_splits)
loo_repr = "LeaveOneOut()"
lpo_repr = "LeavePOut(p=2)"
@@ -90,15 +92,17 @@ def test_cross_validator_with_default_params():
ss_repr = ("ShuffleSplit(n_splits=10, random_state=0, "
"test_size=None, train_size=None)")
ps_repr = "PredefinedSplit(test_fold=array([1, 1, 2, 2]))"
+ sgkf_repr = ("StratifiedGroupKFold(n_splits=2, random_state=None, "
+ "shuffle=False)")
n_splits_expected = [n_samples, comb(n_samples, p), n_splits, n_splits,
n_unique_groups, comb(n_unique_groups, p),
- n_shuffle_splits, 2]
+ n_shuffle_splits, 2, n_splits]
for i, (cv, cv_repr) in enumerate(zip(
- [loo, lpo, kf, skf, lolo, lopo, ss, ps],
+ [loo, lpo, kf, skf, lolo, lopo, ss, ps, sgkf],
[loo_repr, lpo_repr, kf_repr, skf_repr, lolo_repr, lopo_repr,
- ss_repr, ps_repr])):
+ ss_repr, ps_repr, sgkf_repr])):
# Test if get_n_splits works correctly
assert n_splits_expected[i] == cv.get_n_splits(X, y, groups)
@@ -133,10 +137,11 @@ def test_2d_y():
groups = rng.randint(0, 3, size=(n_samples,))
splitters = [LeaveOneOut(), LeavePOut(p=2), KFold(), StratifiedKFold(),
RepeatedKFold(), RepeatedStratifiedKFold(),
- ShuffleSplit(), StratifiedShuffleSplit(test_size=.5),
- GroupShuffleSplit(), LeaveOneGroupOut(),
- LeavePGroupsOut(n_groups=2), GroupKFold(n_splits=3),
- TimeSeriesSplit(), PredefinedSplit(test_fold=groups)]
+ StratifiedGroupKFold(), ShuffleSplit(),
+ StratifiedShuffleSplit(test_size=.5), GroupShuffleSplit(),
+ LeaveOneGroupOut(), LeavePGroupsOut(n_groups=2),
+ GroupKFold(n_splits=3), TimeSeriesSplit(),
+ PredefinedSplit(test_fold=groups)]
for splitter in splitters:
list(splitter.split(X, y, groups))
list(splitter.split(X, y_2d, groups))
@@ -193,6 +198,11 @@ def test_kfold_valueerrors():
with pytest.warns(Warning, match="The least populated class"):
next(skf_3.split(X2, y))
+ sgkf_3 = StratifiedGroupKFold(3)
+ naive_groups = np.arange(len(y))
+ with pytest.warns(Warning, match="The least populated class"):
+ next(sgkf_3.split(X2, y, naive_groups))
+
# Check that despite the warning the folds are still computed even
# though all the classes are not necessarily represented at on each
# side of the split at each split
@@ -200,12 +210,20 @@ def test_kfold_valueerrors():
warnings.simplefilter("ignore")
check_cv_coverage(skf_3, X2, y, groups=None, expected_n_splits=3)
+ with warnings.catch_warnings():
+ warnings.simplefilter("ignore")
+ check_cv_coverage(
+ sgkf_3, X2, y, groups=naive_groups, expected_n_splits=3
+ )
+
# Check that errors are raised if all n_groups for individual
# classes are less than n_splits.
y = np.array([3, 3, -1, -1, 2])
with pytest.raises(ValueError):
next(skf_3.split(X2, y))
+ with pytest.raises(ValueError):
+ next(sgkf_3.split(X2, y))
# Error when number of folds is <= 1
with pytest.raises(ValueError):
@@ -218,6 +236,10 @@ def test_kfold_valueerrors():
StratifiedKFold(0)
with pytest.raises(ValueError, match=error_string):
StratifiedKFold(1)
+ with pytest.raises(ValueError, match=error_string):
+ StratifiedGroupKFold(0)
+ with pytest.raises(ValueError, match=error_string):
+ StratifiedGroupKFold(1)
# When n_splits is not integer:
with pytest.raises(ValueError):
@@ -228,6 +250,10 @@ def test_kfold_valueerrors():
StratifiedKFold(1.5)
with pytest.raises(ValueError):
StratifiedKFold(2.0)
+ with pytest.raises(ValueError):
+ StratifiedGroupKFold(1.5)
+ with pytest.raises(ValueError):
+ StratifiedGroupKFold(2.0)
# When shuffle is not a bool:
with pytest.raises(TypeError):
@@ -318,7 +344,8 @@ def test_stratified_kfold_no_shuffle():
@pytest.mark.parametrize('shuffle', [False, True])
@pytest.mark.parametrize('k', [4, 5, 6, 7, 8, 9, 10])
-def test_stratified_kfold_ratios(k, shuffle):
+@pytest.mark.parametrize('kfold', [StratifiedKFold, StratifiedGroupKFold])
+def test_stratified_kfold_ratios(k, shuffle, kfold):
# Check that stratified kfold preserves class ratios in individual splits
# Repeat with shuffling turned off and on
n_samples = 1000
@@ -326,12 +353,14 @@ def test_stratified_kfold_ratios(k, shuffle):
y = np.array([4] * int(0.10 * n_samples) +
[0] * int(0.89 * n_samples) +
[1] * int(0.01 * n_samples))
+ # ensure perfect stratification with StratifiedGroupKFold
+ groups = np.arange(len(y))
distr = np.bincount(y) / len(y)
test_sizes = []
random_state = None if not shuffle else 0
- skf = StratifiedKFold(k, random_state=random_state, shuffle=shuffle)
- for train, test in skf.split(X, y):
+ skf = kfold(k, random_state=random_state, shuffle=shuffle)
+ for train, test in skf.split(X, y, groups=groups):
assert_allclose(np.bincount(y[train]) / len(train), distr, atol=0.02)
assert_allclose(np.bincount(y[test]) / len(test), distr, atol=0.02)
test_sizes.append(len(test))
@@ -340,20 +369,23 @@ def test_stratified_kfold_ratios(k, shuffle):
@pytest.mark.parametrize('shuffle', [False, True])
@pytest.mark.parametrize('k', [4, 6, 7])
-def test_stratified_kfold_label_invariance(k, shuffle):
+@pytest.mark.parametrize('kfold', [StratifiedKFold, StratifiedGroupKFold])
+def test_stratified_kfold_label_invariance(k, shuffle, kfold):
# Check that stratified kfold gives the same indices regardless of labels
n_samples = 100
y = np.array([2] * int(0.10 * n_samples) +
[0] * int(0.89 * n_samples) +
[1] * int(0.01 * n_samples))
X = np.ones(len(y))
+ # ensure perfect stratification with StratifiedGroupKFold
+ groups = np.arange(len(y))
def get_splits(y):
random_state = None if not shuffle else 0
return [(list(train), list(test))
for train, test
- in StratifiedKFold(k, random_state=random_state,
- shuffle=shuffle).split(X, y)]
+ in kfold(k, random_state=random_state,
+ shuffle=shuffle).split(X, y, groups=groups)]
splits_base = get_splits(y)
for perm in permutations([0, 1, 2]):
@@ -372,17 +404,20 @@ def test_kfold_balance():
assert np.sum(sizes) == i
-def test_stratifiedkfold_balance():
+@pytest.mark.parametrize('kfold', [StratifiedKFold, StratifiedGroupKFold])
+def test_stratifiedkfold_balance(kfold):
# Check that KFold returns folds with balanced sizes (only when
# stratification is possible)
# Repeat with shuffling turned off and on
X = np.ones(17)
y = [0] * 3 + [1] * 14
+ # ensure perfect stratification with StratifiedGroupKFold
+ groups = np.arange(len(y))
for shuffle in (True, False):
- cv = StratifiedKFold(3, shuffle=shuffle)
+ cv = kfold(3, shuffle=shuffle)
for i in range(11, 17):
- skf = cv.split(X[:i], y[:i])
+ skf = cv.split(X[:i], y[:i], groups[:i])
sizes = [len(test) for _, test in skf]
assert (np.max(sizes) - np.min(sizes)) <= 1
@@ -411,39 +446,39 @@ def test_shuffle_kfold():
assert sum(all_folds) == 300
-def test_shuffle_kfold_stratifiedkfold_reproducibility():
+@pytest.mark.parametrize("kfold",
+ [KFold, StratifiedKFold, StratifiedGroupKFold])
+def test_shuffle_kfold_stratifiedkfold_reproducibility(kfold):
X = np.ones(15) # Divisible by 3
y = [0] * 7 + [1] * 8
+ groups_1 = np.arange(len(y))
X2 = np.ones(16) # Not divisible by 3
y2 = [0] * 8 + [1] * 8
+ groups_2 = np.arange(len(y2))
# Check that when the shuffle is True, multiple split calls produce the
# same split when random_state is int
- kf = KFold(3, shuffle=True, random_state=0)
- skf = StratifiedKFold(3, shuffle=True, random_state=0)
+ kf = kfold(3, shuffle=True, random_state=0)
- for cv in (kf, skf):
- np.testing.assert_equal(list(cv.split(X, y)), list(cv.split(X, y)))
- np.testing.assert_equal(list(cv.split(X2, y2)), list(cv.split(X2, y2)))
+ np.testing.assert_equal(
+ list(kf.split(X, y, groups_1)),
+ list(kf.split(X, y, groups_1))
+ )
# Check that when the shuffle is True, multiple split calls often
# (not always) produce different splits when random_state is
# RandomState instance or None
- kf = KFold(3, shuffle=True, random_state=np.random.RandomState(0))
- skf = StratifiedKFold(3, shuffle=True,
- random_state=np.random.RandomState(0))
-
- for cv in (kf, skf):
- for data in zip((X, X2), (y, y2)):
- # Test if the two splits are different cv
- for (_, test_a), (_, test_b) in zip(cv.split(*data),
- cv.split(*data)):
- # cv.split(...) returns an array of tuples, each tuple
- # consisting of an array with train indices and test indices
- # Ensure that the splits for data are not same
- # when random state is not set
- with pytest.raises(AssertionError):
- np.testing.assert_array_equal(test_a, test_b)
+ kf = kfold(3, shuffle=True, random_state=np.random.RandomState(0))
+ for data in zip((X, X2), (y, y2), (groups_1, groups_2)):
+ # Test if the two splits are different cv
+ for (_, test_a), (_, test_b) in zip(kf.split(*data),
+ kf.split(*data)):
+ # cv.split(...) returns an array of tuples, each tuple
+ # consisting of an array with train indices and test indices
+ # Ensure that the splits for data are not same
+ # when random state is not set
+ with pytest.raises(AssertionError):
+ np.testing.assert_array_equal(test_a, test_b)
def test_shuffle_stratifiedkfold():
@@ -514,6 +549,96 @@ def test_kfold_can_detect_dependent_samples_on_digits(): # see #2372
assert mean_score > 0.80
+def test_stratified_group_kfold_trivial():
+ sgkf = StratifiedGroupKFold(n_splits=3)
+ # Trivial example - groups with the same distribution
+ y = np.array([1] * 6 + [0] * 12)
+ X = np.ones_like(y).reshape(-1, 1)
+ groups = np.asarray((1, 2, 3, 4, 5, 6, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6))
+ distr = np.bincount(y) / len(y)
+ test_sizes = []
+ for train, test in sgkf.split(X, y, groups):
+ # check group constraint
+ assert np.intersect1d(groups[train], groups[test]).size == 0
+ # check y distribution
+ assert_allclose(np.bincount(y[train]) / len(train), distr, atol=0.02)
+ assert_allclose(np.bincount(y[test]) / len(test), distr, atol=0.02)
+ test_sizes.append(len(test))
+ assert np.ptp(test_sizes) <= 1
+
+
+def test_stratified_group_kfold_approximate():
+ # Not perfect stratification (even though it is possible) because of
+ # iteration over groups
+ sgkf = StratifiedGroupKFold(n_splits=3)
+ y = np.array([1] * 6 + [0] * 12)
+ X = np.ones_like(y).reshape(-1, 1)
+ groups = np.array([1, 2, 3, 3, 4, 4, 1, 1, 2, 2, 3, 4, 5, 5, 5, 6, 6, 6])
+ expected = np.asarray([[0.833, 0.166], [0.666, 0.333], [0.5, 0.5]])
+ test_sizes = []
+ for (train, test), expect_dist in zip(sgkf.split(X, y, groups), expected):
+ # check group constraint
+ assert np.intersect1d(groups[train], groups[test]).size == 0
+ split_dist = np.bincount(y[test]) / len(test)
+ assert_allclose(split_dist, expect_dist, atol=0.001)
+ test_sizes.append(len(test))
+ assert np.ptp(test_sizes) <= 1
+
+
+@pytest.mark.parametrize('y, groups, expected',
+ [(np.array([0] * 6 + [1] * 6),
+ np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6]),
+ np.asarray([[.5, .5],
+ [.5, .5],
+ [.5, .5]])),
+ (np.array([0] * 9 + [1] * 3),
+ np.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 5, 6]),
+ np.asarray([[.75, .25],
+ [.75, .25],
+ [.75, .25]]))])
+def test_stratified_group_kfold_homogeneous_groups(y, groups, expected):
+ sgkf = StratifiedGroupKFold(n_splits=3)
+ X = np.ones_like(y).reshape(-1, 1)
+ for (train, test), expect_dist in zip(sgkf.split(X, y, groups), expected):
+ # check group constraint
+ assert np.intersect1d(groups[train], groups[test]).size == 0
+ split_dist = np.bincount(y[test]) / len(test)
+ assert_allclose(split_dist, expect_dist, atol=0.001)
+
+
+@pytest.mark.parametrize('cls_distr',
+ [(0.4, 0.6),
+ (0.3, 0.7),
+ (0.2, 0.8),
+ (0.8, 0.2)])
+@pytest.mark.parametrize('n_groups', [5, 30, 70])
+def test_stratified_group_kfold_against_group_kfold(cls_distr, n_groups):
+ # Check that given sufficient amount of samples StratifiedGroupKFold
+ # produces better stratified folds than regular GroupKFold
+ n_splits = 5
+ sgkf = StratifiedGroupKFold(n_splits=n_splits)
+ gkf = GroupKFold(n_splits=n_splits)
+ rng = np.random.RandomState(0)
+ n_points = 1000
+ y = rng.choice(2, size=n_points, p=cls_distr)
+ X = np.ones_like(y).reshape(-1, 1)
+ g = rng.choice(n_groups, n_points)
+ sgkf_folds = sgkf.split(X, y, groups=g)
+ gkf_folds = gkf.split(X, y, groups=g)
+ sgkf_entr = 0
+ gkf_entr = 0
+ for (sgkf_train, sgkf_test), (_, gkf_test) in zip(sgkf_folds, gkf_folds):
+ # check group constraint
+ assert np.intersect1d(g[sgkf_train], g[sgkf_test]).size == 0
+ sgkf_distr = np.bincount(y[sgkf_test]) / len(sgkf_test)
+ gkf_distr = np.bincount(y[gkf_test]) / len(gkf_test)
+ sgkf_entr += stats.entropy(sgkf_distr, qk=cls_distr)
+ gkf_entr += stats.entropy(gkf_distr, qk=cls_distr)
+ sgkf_entr /= n_splits
+ gkf_entr /= n_splits
+ assert sgkf_entr <= gkf_entr
+
+
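The stratification-quality measure in ``test_stratified_group_kfold_against_group_kfold`` above is ``scipy.stats.entropy`` with ``qk`` set, i.e. the KL divergence between a fold's class distribution and the overall one. The same quantity can be sketched by hand (a hypothetical helper for illustration, not part of the test suite):

```python
import math

def kl_divergence(p, q):
    """D(p || q) in nats; 0 when the fold distribution matches q exactly."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

overall = (0.3, 0.7)
# a perfectly stratified fold matches the overall class distribution
perfect = kl_divergence(overall, overall)
# more skewed test folds diverge more from the overall distribution
mild = kl_divergence((0.35, 0.65), overall)
heavy = kl_divergence((0.5, 0.5), overall)
```

Lower average divergence across folds indicates better stratification, which is what the test asserts for ``StratifiedGroupKFold`` relative to ``GroupKFold``.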
def test_shuffle_split():
ss1 = ShuffleSplit(test_size=0.2, random_state=0).split(X)
ss2 = ShuffleSplit(test_size=2, random_state=0).split(X)
@@ -1310,7 +1435,8 @@ def test_cv_iterable_wrapper():
"successive calls to split should yield different results")
-def test_group_kfold():
+@pytest.mark.parametrize('kfold', [GroupKFold, StratifiedGroupKFold])
+def test_group_kfold(kfold):
rng = np.random.RandomState(0)
# Parameters of the test
@@ -1329,7 +1455,7 @@ def test_group_kfold():
len(np.unique(groups))
# Get the test fold indices from the test set indices of each fold
folds = np.zeros(n_samples)
- lkf = GroupKFold(n_splits=n_splits)
+ lkf = kfold(n_splits=n_splits)
for i, (_, test) in enumerate(lkf.split(X, y, groups)):
folds[test] = i
@@ -1569,7 +1695,7 @@ def test_nested_cv():
groups = rng.randint(0, 5, 15)
cvs = [LeaveOneGroupOut(), LeaveOneOut(), GroupKFold(n_splits=3),
- StratifiedKFold(),
+ StratifiedKFold(), StratifiedGroupKFold(),
StratifiedShuffleSplit(n_splits=3, random_state=0)]
for inner_cv, outer_cv in combinations_with_replacement(cvs, 2):
@@ -1640,7 +1766,8 @@ def test_leave_p_out_empty_trainset():
next(cv.split(X, y, groups=[1, 2]))
-@pytest.mark.parametrize('Klass', (KFold, StratifiedKFold))
+@pytest.mark.parametrize('Klass',
+ (KFold, StratifiedKFold, StratifiedGroupKFold))
def test_random_state_shuffle_false(Klass):
# passing a non-default random_state when shuffle=False makes no sense
with pytest.raises(ValueError,
@@ -1653,6 +1780,8 @@ def test_random_state_shuffle_false(Klass):
(KFold(shuffle=True, random_state=123), True),
(StratifiedKFold(), True),
(StratifiedKFold(shuffle=True, random_state=123), True),
+ (StratifiedGroupKFold(shuffle=True, random_state=123), True),
+ (StratifiedGroupKFold(), True),
(RepeatedKFold(random_state=123), True),
(RepeatedStratifiedKFold(random_state=123), True),
(ShuffleSplit(random_state=123), True),
@@ -1664,7 +1793,6 @@ def test_random_state_shuffle_false(Klass):
(LeaveOneGroupOut(), True),
(LeavePGroupsOut(n_groups=2), True),
(LeavePOut(p=2), True),
-
(KFold(shuffle=True, random_state=None), False),
(KFold(shuffle=True, random_state=None), False),
(StratifiedKFold(shuffle=True, random_state=np.random.RandomState(0)),
| diff --git a/doc/modules/classes.rst b/doc/modules/classes.rst
index c658bc6b12452..d019af3cfb1ff 100644
--- a/doc/modules/classes.rst
+++ b/doc/modules/classes.rst
@@ -1176,6 +1176,7 @@ Splitter Classes
model_selection.ShuffleSplit
model_selection.StratifiedKFold
model_selection.StratifiedShuffleSplit
+ model_selection.StratifiedGroupKFold
model_selection.TimeSeriesSplit
Splitter Functions
diff --git a/doc/modules/cross_validation.rst b/doc/modules/cross_validation.rst
index ae3d38f168f3f..0b090fd7385b6 100644
--- a/doc/modules/cross_validation.rst
+++ b/doc/modules/cross_validation.rst
@@ -353,7 +353,7 @@ Example of 2-fold cross-validation on a dataset with 4 samples::
Here is a visualization of the cross-validation behavior. Note that
:class:`KFold` is not affected by classes or groups.
-.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_004.png
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_006.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
:scale: 75%
@@ -509,7 +509,7 @@ Here is a usage example::
Here is a visualization of the cross-validation behavior. Note that
:class:`ShuffleSplit` is not affected by classes or groups.
-.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_006.png
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_008.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
:scale: 75%
@@ -566,7 +566,7 @@ We can see that :class:`StratifiedKFold` preserves the class ratios
Here is a visualization of the cross-validation behavior.
-.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_007.png
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_009.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
:scale: 75%
@@ -585,7 +585,7 @@ percentage for each target class as in the complete set.
Here is a visualization of the cross-validation behavior.
-.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_009.png
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_012.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
:scale: 75%
@@ -645,6 +645,58 @@ size due to the imbalance in the data.
Here is a visualization of the cross-validation behavior.
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_007.png
+ :target: ../auto_examples/model_selection/plot_cv_indices.html
+ :align: center
+ :scale: 75%
+
+.. _stratified_group_k_fold:
+
+StratifiedGroupKFold
+^^^^^^^^^^^^^^^^^^^^
+
+:class:`StratifiedGroupKFold` is a cross-validation scheme that combines both
+:class:`StratifiedKFold` and :class:`GroupKFold`. The idea is to try to
+preserve the distribution of classes in each split while keeping each group
+within a single split. This can be useful when you have an unbalanced
+dataset, where using just :class:`GroupKFold` might produce skewed splits.
+
+Example::
+
+ >>> from sklearn.model_selection import StratifiedGroupKFold
+ >>> X = list(range(18))
+ >>> y = [1] * 6 + [0] * 12
+ >>> groups = [1, 2, 3, 3, 4, 4, 1, 1, 2, 2, 3, 4, 5, 5, 5, 6, 6, 6]
+ >>> sgkf = StratifiedGroupKFold(n_splits=3)
+ >>> for train, test in sgkf.split(X, y, groups=groups):
+ ... print("%s %s" % (train, test))
+ [ 0 2 3 4 5 6 7 10 11 15 16 17] [ 1 8 9 12 13 14]
+ [ 0 1 4 5 6 7 8 9 11 12 13 14] [ 2 3 10 15 16 17]
+ [ 1 2 3 8 9 10 12 13 14 15 16 17] [ 0 4 5 6 7 11]
+
+Implementation notes:
+
+- With the current implementation, a full shuffle is not possible in most
+ scenarios. When shuffle=True, the following happens:
+
+ 1. All groups are shuffled.
+ 2. Groups are sorted by standard deviation of classes using stable sort.
+ 3. Sorted groups are iterated over and assigned to folds.
+
+ That means that only groups with the same standard deviation of class
+ distribution will be shuffled, which might be useful when each group has only
+ a single class.
+- The algorithm greedily assigns each group to one of n_splits test sets,
+ choosing the test set that minimises the variance in class distribution
+ across test sets. Group assignment proceeds from groups with highest to
+ lowest variance in class frequency, i.e. large groups peaked on one or few
+ classes are assigned first.
+- This split is suboptimal in the sense that it might produce imbalanced
+ splits even if perfect stratification is possible. If the class
+ distribution is relatively similar across groups, using
+ :class:`GroupKFold` is better.
+
+Here is a visualization of cross-validation behavior for uneven groups:
+
.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_005.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
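The shuffle behaviour documented above (shuffle, then a stable sort by class-distribution spread) means only groups with equal spread can swap positions. A small self-contained sketch with invented group data:

```python
from itertools import permutations
from statistics import pstdev

# two single-class groups with equal spread, one mixed group with none
group_counts = {"a": [4, 0], "b": [0, 4], "c": [2, 2]}

def processing_order(shuffled_names):
    # the stable sort keeps the shuffled order within equal-spread ties
    return tuple(sorted(shuffled_names,
                        key=lambda g: -pstdev(group_counts[g])))

# whatever order the shuffle produced, only "a" and "b" can swap;
# the mixed group "c" is always processed last
orders = {processing_order(p) for p in permutations(group_counts)}
```

This is why shuffling mainly matters when many groups contain a single class: their class-count spreads coincide, so the shuffle actually changes the assignment order.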
@@ -733,7 +785,7 @@ Here is a usage example::
Here is a visualization of the cross-validation behavior.
-.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_008.png
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_011.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
:scale: 75%
@@ -835,7 +887,7 @@ Example of 3-split time series cross-validation on a dataset with 6 samples::
Here is a visualization of the cross-validation behavior.
-.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_010.png
+.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_013.png
:target: ../auto_examples/model_selection/plot_cv_indices.html
:align: center
:scale: 75%
diff --git a/doc/whats_new/v1.0.rst b/doc/whats_new/v1.0.rst
index 34f39ca48f20a..985fe57164824 100644
--- a/doc/whats_new/v1.0.rst
+++ b/doc/whats_new/v1.0.rst
@@ -183,6 +183,16 @@ Changelog
are integral.
:pr:`9843` by :user:`Jon Crall <Erotemic>`.
+:mod:`sklearn.model_selection`
+..............................
+
+- |Feature| added :class:`model_selection.StratifiedGroupKFold`, that combines
+  :class:`model_selection.StratifiedKFold` and :class:`model_selection.GroupKFold`,
+ providing an ability to split data preserving the distribution of classes in
+ each split while keeping each group within a single split.
+  :pr:`18649` by :user:`Leandro Hermida <hermidalc>` and
+  :user:`Rodion Martynov <marrodion>`.
+
:mod:`sklearn.naive_bayes`
..........................
| [
{
"components": [
{
"doc": "Stratified K-Folds iterator variant with non-overlapping groups.\n\nThis cross-validation object is a variation of StratifiedKFold attempts to\nreturn stratified folds with non-overlapping groups. The folds are made by\npreserving the percentage of samples for each clas... | [
"sklearn/model_selection/tests/test_split.py::test_cross_validator_with_default_params",
"sklearn/model_selection/tests/test_split.py::test_2d_y",
"sklearn/model_selection/tests/test_split.py::test_kfold_valueerrors",
"sklearn/model_selection/tests/test_split.py::test_kfold_indices",
sklearn/model_selectio... | [] | This is a feature request which requires a new feature to be added to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Stratified Group KFold implementation
#### Reference Issues/PRs
Fixes #13621.
#### What does this implement/fix? Explain your changes.
Implementation of StratifiedGroupKFold based on the PR https://github.com/scikit-learn/scikit-learn/pull/15239 and [kaggle kernel](https://www.kaggle.com/jakubwasikowski/stratified-group-k-fold-cross-validation).
The implementation considers the distribution of labels within the groups, without the restriction that each group has only one class.
For trivial cases:
- If the number of groups equals the number of samples, the implementation follows `StratifiedKFold` as much as possible
- If each group has a roughly equal distribution of labels, the behavior is almost identical to `GroupKFold`
Issues with the current implementation:
- Due to the sorting of groups when iterating, shuffling happens only for groups with the same label distribution. Not sure if the option to shuffle should be kept at all given that.
- Because the algorithm looks at groups in sequence, it may produce suboptimal splits in terms of stratification (see `.sklearn/model_selection/tests/test_split.py:559` - `test_startified_group_kfold_approximate`)
#### Any other comments?
Given the outlined restrictions, I am hesitant about whether this should be included in scikit-learn. Its usefulness seems limited. However, there seems to be constant interest in this feature, and I have failed to design a better solution that is not brute-force. I would like to hear any thoughts/ideas on which algorithm might produce better results.
<!--
Please be aware that we are a loose team of volunteers so patience is
necessary; assistance handling other issues is very welcome. We value
all user contributions, no matter how minor they are. If we are slow to
review, either the pull request needs some benchmarking, tinkering,
convincing, etc. or more likely the reviewers are simply busy. In either
case, we ask for your understanding during the review process.
For more information, see our FAQ on this topic:
http://scikit-learn.org/dev/faq.html#why-is-my-pull-request-not-getting-any-attention.
Thanks for contributing!
-->
----------
</request>
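The shuffling limitation described in the PR (shuffling happens only among groups with the same class-distribution spread) can be sketched in plain Python. The group names, class counts, and the use of `statistics.pstdev` below are illustrative assumptions, not the actual scikit-learn internals:

```python
import random
import statistics

# Hypothetical per-group class-count vectors (counts of class 0 and class 1).
group_class_counts = {
    "g1": [2, 0],
    "g2": [0, 2],
    "g3": [1, 1],
    "g4": [1, 1],
}

rng = random.Random(0)
names = list(group_class_counts)
rng.shuffle(names)  # step 1: shuffle all groups
# step 2: stable sort by (descending) std of each group's class distribution
names.sort(key=lambda g: -statistics.pstdev(group_class_counts[g]))

# Because the sort is stable, only groups tied on the spread measure keep a
# shuffled relative order: g1/g2 (std 1.0) always precede g3/g4 (std 0.0).
print(names)
```

This is why, as noted above, the `shuffle` option only matters when several groups share the same class-distribution spread, e.g. when every group contains a single class.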
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sklearn/model_selection/_split.py]
(definition of StratifiedGroupKFold:)
class StratifiedGroupKFold(_BaseKFold):
"""Stratified K-Folds iterator variant with non-overlapping groups.
This cross-validation object is a variation of StratifiedKFold that attempts
to return stratified folds with non-overlapping groups. The folds are made by
preserving the percentage of samples for each class.
The same group will not appear in two different folds (the number of
distinct groups has to be at least equal to the number of folds).
The difference between GroupKFold and StratifiedGroupKFold is that
the former attempts to create balanced folds such that the number of
distinct groups is approximately the same in each fold, whereas
StratifiedGroupKFold attempts to create folds which preserve the
percentage of samples for each class as much as possible given the
constraint of non-overlapping groups between splits.
Read more in the :ref:`User Guide <cross_validation>`.
Parameters
----------
n_splits : int, default=5
Number of folds. Must be at least 2.
shuffle : bool, default=False
Whether to shuffle each class's samples before splitting into batches.
Note that the samples within each split will not be shuffled.
This implementation can only shuffle groups that have approximately the
same y distribution; no global shuffle will be performed.
random_state : int or RandomState instance, default=None
When `shuffle` is True, `random_state` affects the ordering of the
indices, which controls the randomness of each fold for each class.
Otherwise, leave `random_state` as `None`.
Pass an int for reproducible output across multiple function calls.
See :term:`Glossary <random_state>`.
Examples
--------
>>> import numpy as np
>>> from sklearn.model_selection import StratifiedGroupKFold
>>> X = np.ones((17, 2))
>>> y = np.array([0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
>>> groups = np.array([1, 1, 2, 2, 3, 3, 3, 4, 5, 5, 5, 5, 6, 6, 7, 8, 8])
>>> cv = StratifiedGroupKFold(n_splits=3)
>>> for train_idxs, test_idxs in cv.split(X, y, groups):
... print("TRAIN:", groups[train_idxs])
... print(" ", y[train_idxs])
... print(" TEST:", groups[test_idxs])
... print(" ", y[test_idxs])
TRAIN: [1 1 2 2 4 5 5 5 5 8 8]
[0 0 1 1 1 0 0 0 0 0 0]
TEST: [3 3 3 6 6 7]
[1 1 1 0 0 0]
TRAIN: [3 3 3 4 5 5 5 5 6 6 7]
[1 1 1 1 0 0 0 0 0 0 0]
TEST: [1 1 2 2 8 8]
[0 0 1 1 0 0]
TRAIN: [1 1 2 2 3 3 3 6 6 7 8 8]
[0 0 1 1 1 1 1 0 0 0 0 0]
TEST: [4 5 5 5 5]
[1 0 0 0 0]
Notes
-----
The implementation is designed to:
* Mimic the behavior of StratifiedKFold as much as possible for trivial
groups (e.g. when each group contains only one sample).
* Be invariant to class label: relabelling ``y = ["Happy", "Sad"]`` to
``y = [1, 0]`` should not change the indices generated.
* Stratify based on samples as much as possible while keeping
non-overlapping groups constraint. That means that in some cases when
there is a small number of groups containing a large number of samples
the stratification will not be possible and the behavior will be close
to GroupKFold.
See also
--------
StratifiedKFold: Takes class information into account to build folds which
retain class distributions (for binary or multiclass classification
tasks).
GroupKFold: K-fold iterator variant with non-overlapping groups."""
(definition of StratifiedGroupKFold.__init__:)
def __init__(self, n_splits=5, shuffle=False, random_state=None):
(definition of StratifiedGroupKFold._iter_test_indices:)
def _iter_test_indices(self, X, y, groups):
(definition of StratifiedGroupKFold._find_best_fold:)
def _find_best_fold( self, y_counts_per_fold, y_cnt, group_y_counts):
[end of new definitions in sklearn/model_selection/_split.py]
</definitions>
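As context for `_find_best_fold`, the greedy criterion described in the implementation notes (assign each group to the fold that minimises the variance in class counts across folds, visiting the most class-imbalanced groups first) can be sketched in plain Python. The function name and data below are illustrative, not the actual scikit-learn internals:

```python
from collections import Counter

def greedy_assign(group_y, n_splits):
    """Greedily assign each group to the test fold that keeps per-fold
    class counts as even as possible (a sketch of the described algorithm)."""
    classes = sorted({c for ys in group_y.values() for c in ys})
    fold_counts = [dict.fromkeys(classes, 0) for _ in range(n_splits)]
    assignment = {}

    def spread(ys):
        # sum of squared deviations of a group's class counts ("peakedness")
        counts = Counter(ys)
        vals = [counts[c] for c in classes]
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)

    # visit groups from most to least class-imbalanced (stable order on ties)
    for group in sorted(group_y, key=lambda g: -spread(group_y[g])):
        best_fold, best_cost = None, None
        for i in range(n_splits):
            trial = [dict(fc) for fc in fold_counts]
            for y in group_y[group]:
                trial[i][y] += 1
            # sum over classes of the spread of that class's count across folds
            cost = 0.0
            for c in classes:
                col = [fc[c] for fc in trial]
                mean = sum(col) / n_splits
                cost += sum((v - mean) ** 2 for v in col)
            if best_cost is None or cost < best_cost:
                best_fold, best_cost = i, cost
        assignment[group] = best_fold
        for y in group_y[group]:
            fold_counts[best_fold][y] += 1
    return assignment

# Two pure-class-0 groups and two pure-class-1 groups end up split evenly
# over 2 folds, so each test fold sees both classes.
assignment = greedy_assign({"a": [0, 0], "b": [1, 1], "c": [0, 0], "d": [1, 1]}, 2)
print(assignment)
```

The greedy choice is what makes the split potentially suboptimal: each group is placed once and never revisited, so an early assignment can block a perfect stratification that a brute-force search would find.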
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Stratified GroupKFold
<!--
If your issue is a usage question, submit it here instead:
- StackOverflow with the scikit-learn tag: https://stackoverflow.com/questions/tagged/scikit-learn
- Mailing List: https://mail.python.org/mailman/listinfo/scikit-learn
For more information, see User Questions: http://scikit-learn.org/stable/support.html#user-questions
-->
<!-- Instructions For Filing a Bug: https://github.com/scikit-learn/scikit-learn/blob/master/CONTRIBUTING.md#filing-bugs -->
#### Description
Currently sklearn does not have a stratified group kfold feature. Either we can use stratification or we can use group kfold. However, it would be good to have both.
I would like to implement it, if we decide to have it.
----------
@TomDLT @NicolasHug What do you think?
Might be interesting in theory, but I'm not sure how useful it'd be in practice. We can certainly keep the issue open and see how many people request this feature
Do you assume that each group is in a single class?
See also #9413
@jnothman Yes, I had a similar thing in mind. However, I see that the pull request is still open. I meant that a group will not be repeated across folds. If we have IDs as groups, then the same ID will not occur across multiple folds.
I understand this is relevant to use of RFECV.
Currently this defaults to using a StratifiedKFold cv. Its fit() also takes groups=
However: it appears that groups is not respected when executing fit(). No warning (might be considered a bug).
Grouping AND stratification are useful for quite imbalanced datasets with inter-record dependency
(in my case, the same individual has multiple records, but there are still a large number of groups=people relative to the number of splits; I imagine there would be practical problems as the number of unique groups in the minority class gets anywhere near the number of splits).
So: +1!
This would definitely be useful. For instance, working with highly imbalanced time-series medical data, keeping patients separate but (approximately) balance the imbalanced class in each fold.
I have also found that StratifiedKFold takes groups as a parameter but doesn't group according to them; this should probably be flagged up.
Another good use of this feature would be financial data, which is usually very imbalanced. In my case, I have a highly imbalanced dataset with several records for the same entity (just different points in time). We want to do a `GroupKFold` to avoid leakage, but also stratify since due to the high imbalance, we could end up with groups with very few or none positives.
also see #14524 I think?
Another use case for Stratified GroupShuffleSplit and GroupKFold is biological "repeated measures" designs, where you have multiple samples per subject or other parent biological unit. Also in many real world datasets in biology there is class imbalance. Each group of samples has the same class. So it's important to stratify and keep groups together.
--------------------
</issues> | 3c732b9f6a77e95dfa6beb154ca2e1e7848b74f9 |
Textualize__rich-389 | 389 | Textualize/rich | null | e1cb4796bad9cbf06b4ee15fa349b84215001b4f | 2020-10-16T11:16:36Z | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 9b2992626e..3c0259a54d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -22,16 +22,14 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Added Console.out
- Added Progress.reset
- Added Style.background_style property
-
-### Added
-
-- Added solid block bar class
+- Added Bar renderable https://github.com/willmcgugan/rich/pull/361
### Changed
- Dropped box.get_safe_box function in favor of Box.substitute
- Changed default padding in Panel from 0 to (0, 1) https://github.com/willmcgugan/rich/issues/385
- Table with row_styles will extend background color between cells if the box has no vertical divider https://github.com/willmcgugan/rich/issues/383
+- Renamed rich.bar to rich.progress_bar, and the Bar class to ProgressBar; rich.bar is now the new solid bar class
### Fixed
diff --git a/docs/source/reference.rst b/docs/source/reference.rst
index 156793a2e7..2fd7bec140 100644
--- a/docs/source/reference.rst
+++ b/docs/source/reference.rst
@@ -19,6 +19,7 @@ Reference
reference/padding.rst
reference/panel.rst
reference/progress.rst
+ reference/progress_bar.rst
reference/prompt.rst
reference/rule.rst
reference/segment.rst
diff --git a/docs/source/reference/progress_bar.rst b/docs/source/reference/progress_bar.rst
new file mode 100644
index 0000000000..6e5a201f4a
--- /dev/null
+++ b/docs/source/reference/progress_bar.rst
@@ -0,0 +1,7 @@
+rich.progress_bar
+=================
+
+.. automodule:: rich.progress_bar
+ :members:
+
+
diff --git a/examples/bars.py b/examples/bars.py
new file mode 100644
index 0000000000..005e8c4760
--- /dev/null
+++ b/examples/bars.py
@@ -0,0 +1,20 @@
+"""
+
+Use Bar to render a sort-of circle.
+
+"""
+import math
+
+from rich.bar import Bar
+from rich.color import Color
+from rich import print
+
+
+SIZE = 40
+
+for row in range(SIZE):
+ y = (row / SIZE) * 2 - 1
+ x = math.sqrt(1 - y * y)
+ color = Color.from_rgb((y + 1) * 127, 0, 0)
+ bar = Bar(1, width=SIZE * 2, begin=1 - x, end=x, color=color)
+ print(bar)
diff --git a/examples/colorbars.py b/examples/colorbars.py
deleted file mode 100644
index 7036774173..0000000000
--- a/examples/colorbars.py
+++ /dev/null
@@ -1,19 +0,0 @@
-"""
-
-This example demonstrates how to make colorful bars.
-
-"""
-
-from rich.block_bar import BlockBar
-from rich.console import Console
-from rich.table import Table
-
-table = Table()
-table.add_column("Score")
-
-table.add_row(BlockBar(size=100, begin=0, end=5, width=30, color="bright_red"))
-table.add_row(BlockBar(size=100, begin=0, end=35, width=30, color="bright_yellow"))
-table.add_row(BlockBar(size=100, begin=0, end=87, width=30, color="bright_green"))
-
-console = Console()
-console.print(table, justify="center")
diff --git a/rich/bar.py b/rich/bar.py
index c9eb72d0e9..637b75cc18 100644
--- a/rich/bar.py
+++ b/rich/bar.py
@@ -1,193 +1,85 @@
-import math
-from functools import lru_cache
-from time import monotonic
-from typing import Iterable, List, Optional
+from typing import Optional, Union
-from .color import Color, blend_rgb
-from .color_triplet import ColorTriplet
+from .color import Color
from .console import Console, ConsoleOptions, RenderResult
from .jupyter import JupyterMixin
from .measure import Measurement
from .segment import Segment
-from .style import Style, StyleType
+from .style import Style
-# Number of characters before 'pulse' animation repeats
-PULSE_SIZE = 20
+# There are left-aligned characters for 1/8 to 7/8, but
+# the right-aligned characters exist only for 1/8 and 4/8.
+BEGIN_BLOCK_ELEMENTS = ["โ", "โ", "โ", "โ", "โ", "โ", "โ", "โ"]
+END_BLOCK_ELEMENTS = [" ", "โ", "โ", "โ", "โ", "โ", "โ", "โ"]
+FULL_BLOCK = "โ"
class Bar(JupyterMixin):
- """Renders a (progress) bar.
+ """Renders a solid block bar.
Args:
- total (float, optional): Number of steps in the bar. Defaults to 100.
- completed (float, optional): Number of steps completed. Defaults to 0.
+ size (float): Value for the end of the bar.
+ begin (float): Begin point (between 0 and size, inclusive).
+ end (float): End point (between 0 and size, inclusive).
width (int, optional): Width of the bar, or ``None`` for maximum width. Defaults to None.
- pulse (bool, optional): Enable pulse effect. Defaults to False.
- style (StyleType, optional): Style for the bar background. Defaults to "bar.back".
- complete_style (StyleType, optional): Style for the completed bar. Defaults to "bar.complete".
- finished_style (StyleType, optional): Style for a finished bar. Defaults to "bar.done".
- pulse_style (StyleType, optional): Style for pulsing bars. Defaults to "bar.pulse".
- animation_time (Optional[float], optional): Time in seconds to use for animation, or None to use system time.
+ color (Union[Color, str], optional): Color of the bar. Defaults to "default".
+ bgcolor (Union[Color, str], optional): Color of bar background. Defaults to "default".
"""
def __init__(
self,
- total: float = 100,
- completed: float = 0,
+ size: float,
+ begin: float,
+ end: float,
+ *,
width: int = None,
- pulse: bool = False,
- style: StyleType = "bar.back",
- complete_style: StyleType = "bar.complete",
- finished_style: StyleType = "bar.finished",
- pulse_style: StyleType = "bar.pulse",
- animation_time: float = None,
+ color: Union[Color, str] = "default",
+ bgcolor: Union[Color, str] = "default",
):
- self.total = total
- self.completed = completed
+ self.size = size
+ self.begin = max(begin, 0)
+ self.end = min(end, size)
self.width = width
- self.pulse = pulse
- self.style = style
- self.complete_style = complete_style
- self.finished_style = finished_style
- self.pulse_style = pulse_style
- self.animation_time = animation_time
-
- self._pulse_segments: Optional[List[Segment]] = None
+ self.style = Style(color=color, bgcolor=bgcolor)
def __repr__(self) -> str:
- return f"<Bar {self.completed!r} of {self.total!r}>"
-
- @property
- def percentage_completed(self) -> float:
- """Calculate percentage complete."""
- completed = (self.completed / self.total) * 100.0
- completed = min(100, max(0.0, completed))
- return completed
-
- @lru_cache(maxsize=16)
- def _get_pulse_segments(
- self,
- fore_style: Style,
- back_style: Style,
- color_system: str,
- ascii: bool = False,
- ) -> List[Segment]:
- """Get a list of segments to render a pulse animation.
-
- Returns:
- List[Segment]: A list of segments, one segment per character.
- """
- bar = "-" if ascii else "โ"
- segments: List[Segment] = []
-
- if color_system != "truecolor":
- segments += [Segment(bar, fore_style)] * (PULSE_SIZE // 2)
- segments += [Segment(bar if color_system else " ", back_style)] * (
- PULSE_SIZE - (PULSE_SIZE // 2)
- )
- return segments
-
- append = segments.append
- fore_color = (
- fore_style.color.get_truecolor()
- if fore_style.color
- else ColorTriplet(255, 0, 255)
- )
- back_color = (
- back_style.color.get_truecolor()
- if back_style.color
- else ColorTriplet(0, 0, 0)
- )
- cos = math.cos
- pi = math.pi
- _Segment = Segment
- _Style = Style
- from_triplet = Color.from_triplet
-
- for index in range(PULSE_SIZE):
- position = index / PULSE_SIZE
- fade = 0.5 + cos((position * pi * 2)) / 2.0
- color = blend_rgb(fore_color, back_color, cross_fade=fade)
- append(_Segment(bar, _Style(color=from_triplet(color))))
- return segments
-
- def update(self, completed: float, total: float = None) -> None:
- """Update progress with new values.
-
- Args:
- completed (float): Number of steps completed.
- total (float, optional): Total number of steps, or ``None`` to not change. Defaults to None.
- """
- self.completed = completed
- self.total = total if total is not None else self.total
-
- def _render_pulse(
- self, console: Console, width: int, ascii: bool = False
- ) -> Iterable[Segment]:
- """Renders the pulse animation.
-
- Args:
- console (Console): Console instance.
- width (int): Width in characters of pulse animation.
-
- Returns:
- RenderResult: [description]
-
- Yields:
- Iterator[Segment]: Segments to render pulse
- """
- fore_style = console.get_style(self.pulse_style, default="white")
- back_style = console.get_style(self.style, default="black")
-
- pulse_segments = self._get_pulse_segments(
- fore_style, back_style, console.color_system, ascii=ascii
- )
- segment_count = len(pulse_segments)
- current_time = (
- monotonic() if self.animation_time is None else self.animation_time
- )
- segments = pulse_segments * (int(width / segment_count) + 2)
- offset = int(-current_time * 15) % segment_count
- segments = segments[offset : offset + width]
- yield from segments
+ return f"Bar({self.size}, {self.begin}, {self.end})"
def __rich_console__(
self, console: Console, options: ConsoleOptions
) -> RenderResult:
width = min(self.width or options.max_width, options.max_width)
- ascii = options.legacy_windows or options.ascii_only
- if self.pulse:
- yield from self._render_pulse(console, width, ascii=ascii)
+
+ if self.begin >= self.end:
+ yield Segment(" " * width, self.style)
+ yield Segment.line()
return
- completed = min(self.total, max(0, self.completed))
+ prefix_complete_eights = int(width * 8 * self.begin / self.size)
+ prefix_bar_count = prefix_complete_eights // 8
+ prefix_eights_count = prefix_complete_eights % 8
- bar = "-" if ascii else "โ"
- half_bar_right = " " if ascii else "โธ"
- half_bar_left = " " if ascii else "โบ"
- complete_halves = (
- int(width * 2 * completed / self.total) if self.total else width * 2
- )
- bar_count = complete_halves // 2
- half_bar_count = complete_halves % 2
- style = console.get_style(self.style)
- complete_style = console.get_style(
- self.complete_style if self.completed < self.total else self.finished_style
- )
- _Segment = Segment
- if bar_count:
- yield _Segment(bar * bar_count, complete_style)
- if half_bar_count:
- yield _Segment(half_bar_right * half_bar_count, complete_style)
-
- remaining_bars = width - bar_count - half_bar_count
- if remaining_bars and console.color_system is not None:
- if not half_bar_count and bar_count:
- yield _Segment(half_bar_left, style)
- remaining_bars -= 1
- if remaining_bars:
- yield _Segment(bar * remaining_bars, style)
+ body_complete_eights = int(width * 8 * self.end / self.size)
+ body_bar_count = body_complete_eights // 8
+ body_eights_count = body_complete_eights % 8
+
+ # When start and end fall into the same cell, we ideally should render
+ # a symbol that's "center-aligned", but there is no good symbol in Unicode.
+ # In this case, we fall back to right-aligned block symbol for simplicity.
+
+ prefix = " " * prefix_bar_count
+ if prefix_eights_count:
+ prefix += BEGIN_BLOCK_ELEMENTS[prefix_eights_count]
+
+ body = FULL_BLOCK * body_bar_count
+ if body_eights_count:
+ body += END_BLOCK_ELEMENTS[body_eights_count]
+
+ suffix = " " * (width - len(body))
+
+ yield Segment(prefix + body[len(prefix) :] + suffix, self.style)
+ yield Segment.line()
def __rich_measure__(self, console: Console, max_width: int) -> Measurement:
return (
@@ -195,19 +87,3 @@ def __rich_measure__(self, console: Console, max_width: int) -> Measurement:
if self.width is not None
else Measurement(4, max_width)
)
-
-
-if __name__ == "__main__": # pragma: no cover
- console = Console()
- bar = Bar(width=50, total=100)
-
- import time
-
- console.show_cursor(False)
- for n in range(0, 101, 1):
- bar.update(n)
- console.print(bar)
- console.file.write("\r")
- time.sleep(0.05)
- console.show_cursor(True)
- console.print()
diff --git a/rich/block_bar.py b/rich/block_bar.py
deleted file mode 100644
index d86e6c7baa..0000000000
--- a/rich/block_bar.py
+++ /dev/null
@@ -1,113 +0,0 @@
-from typing import Optional, Union
-
-from .color import Color
-from .console import Console, ConsoleOptions, RenderResult
-from .jupyter import JupyterMixin
-from .measure import Measurement
-from .segment import Segment
-from .style import Style
-
-# There are left-aligned characters for 1/8 to 7/8, but
-# the right-aligned characters exist only for 1/8 and 4/8.
-BEGIN_BLOCK_ELEMS = ["โ", "โ", "โ", "โ", "โ", "โ", "โ", "โ"]
-END_BLOCK_ELEMS = [" ", "โ", "โ", "โ", "โ", "โ", "โ", "โ"]
-FULL_BLOCK = "โ"
-
-
-class BlockBar(JupyterMixin):
- """Renders a solid block bar.
-
- Args:
- size (float): Value for the end of the bar.
- begin (float): Begin point (between 0 and size, inclusive).
- end (float): End point (between 0 and size, inclusive).
- width (int, optional): Width of the bar, or ``None`` for maximum width. Defaults to None.
- color (Union[Color, str], optional): Color of the bar. Defaults to "default".
- bgcolor (Union[Color, str], optional): Color of bar background. Defaults to "default".
- """
-
- def __init__(
- self,
- size: float,
- begin: float,
- end: float,
- width: int = None,
- color: Union[Color, str] = "default",
- bgcolor: Union[Color, str] = "default",
- ):
- self.size = size
- self.begin = max(begin, 0)
- self.end = min(end, size)
- self.width = width
- self.style = Style(color=color, bgcolor=bgcolor)
-
- def __repr__(self) -> str:
- return f"<BlockBar {self.begin!r}..{self.end!r} of {self.size!r}>"
-
- def __rich_console__(
- self, console: Console, options: ConsoleOptions
- ) -> RenderResult:
-
- width = min(self.width or options.max_width, options.max_width)
-
- if self.begin >= self.end:
- yield Segment(" " * width, self.style)
- return
-
- prefix_complete_eights = int(width * 8 * self.begin / self.size)
- prefix_bar_count = prefix_complete_eights // 8
- prefix_eights_count = prefix_complete_eights % 8
-
- body_complete_eights = int(width * 8 * self.end / self.size)
- body_bar_count = body_complete_eights // 8
- body_eights_count = body_complete_eights % 8
-
- # When start and end fall into the same cell, we ideally should render
- # a symbol that's "center-aligned", but there is no good symbol in Unicode.
- # In this case, we fall back to right-aligned block symbol for simplicity.
-
- prefix = " " * prefix_bar_count
- if prefix_eights_count:
- prefix += BEGIN_BLOCK_ELEMS[prefix_eights_count]
-
- body = FULL_BLOCK * body_bar_count
- if body_eights_count:
- body += END_BLOCK_ELEMS[body_eights_count]
-
- suffix = " " * (width - len(body))
-
- yield Segment(prefix + body[len(prefix) :] + suffix, self.style)
-
- def __rich_measure__(self, console: Console, max_width: int) -> Measurement:
- return (
- Measurement(self.width, self.width)
- if self.width is not None
- else Measurement(4, max_width)
- )
-
-
-if __name__ == "__main__": # pragma: no cover
- console = Console()
-
- import time
-
- console.show_cursor(False)
- for n in range(0, 101):
- block_bar = BlockBar(size=100, begin=0, end=n, width=10)
- console.print(block_bar)
- console.file.write("\r")
- time.sleep(0.05)
- console.print()
- for n in range(0, 101):
- block_bar = BlockBar(size=100, begin=n, end=100, width=10)
- console.print(block_bar)
- console.file.write("\r")
- time.sleep(0.05)
- console.print()
- for n in range(0, 51):
- block_bar = BlockBar(size=100, begin=50 - n, end=50 + n, width=10)
- console.print(block_bar)
- console.file.write("\r")
- time.sleep(0.05)
- console.show_cursor(True)
- console.print()
diff --git a/rich/color.py b/rich/color.py
index 402e797915..223a5c40cb 100644
--- a/rich/color.py
+++ b/rich/color.py
@@ -339,6 +339,20 @@ def from_triplet(cls, triplet: "ColorTriplet") -> "Color":
"""
return cls(name=triplet.hex, type=ColorType.TRUECOLOR, triplet=triplet)
+ @classmethod
+ def from_rgb(cls, red: float, green: float, blue: float) -> "Color":
+ """Create a truecolor from three color components in the range(0->255).
+
+ Args:
+ red (float): Red component.
+ green (float): Green component.
+ blue (float): Blue component.
+
+ Returns:
+ Color: A new color object.
+ """
+ return cls.from_triplet(ColorTriplet(int(red), int(green), int(blue)))
+
@classmethod
def default(cls) -> "Color":
"""Get a Color instance representing the default color.
diff --git a/rich/progress.py b/rich/progress.py
index 9541ea40e9..bcb38ea453 100644
--- a/rich/progress.py
+++ b/rich/progress.py
@@ -27,7 +27,7 @@
)
from . import filesize, get_console
-from .bar import Bar
+from .progress_bar import ProgressBar
from .console import (
Console,
ConsoleRenderable,
@@ -244,9 +244,9 @@ def __init__(
self.pulse_style = pulse_style
super().__init__()
- def render(self, task: "Task") -> Bar:
+ def render(self, task: "Task") -> ProgressBar:
"""Gets a progress bar widget for a task."""
- return Bar(
+ return ProgressBar(
total=max(0, task.total),
completed=max(0, task.completed),
width=None if self.bar_width is None else max(1, self.bar_width),
diff --git a/rich/progress_bar.py b/rich/progress_bar.py
new file mode 100644
index 0000000000..c1e3be6e9e
--- /dev/null
+++ b/rich/progress_bar.py
@@ -0,0 +1,213 @@
+import math
+from functools import lru_cache
+from time import monotonic
+from typing import Iterable, List, Optional
+
+from .color import Color, blend_rgb
+from .color_triplet import ColorTriplet
+from .console import Console, ConsoleOptions, RenderResult
+from .jupyter import JupyterMixin
+from .measure import Measurement
+from .segment import Segment
+from .style import Style, StyleType
+
+# Number of characters before 'pulse' animation repeats
+PULSE_SIZE = 20
+
+
+class ProgressBar(JupyterMixin):
+ """Renders a (progress) bar. Used by rich.progress.
+
+ Args:
+ total (float, optional): Number of steps in the bar. Defaults to 100.
+ completed (float, optional): Number of steps completed. Defaults to 0.
+ width (int, optional): Width of the bar, or ``None`` for maximum width. Defaults to None.
+ pulse (bool, optional): Enable pulse effect. Defaults to False.
+ style (StyleType, optional): Style for the bar background. Defaults to "bar.back".
+ complete_style (StyleType, optional): Style for the completed bar. Defaults to "bar.complete".
+ finished_style (StyleType, optional): Style for a finished bar. Defaults to "bar.done".
+ pulse_style (StyleType, optional): Style for pulsing bars. Defaults to "bar.pulse".
+ animation_time (Optional[float], optional): Time in seconds to use for animation, or None to use system time.
+ """
+
+ def __init__(
+ self,
+ total: float = 100,
+ completed: float = 0,
+ width: int = None,
+ pulse: bool = False,
+ style: StyleType = "bar.back",
+ complete_style: StyleType = "bar.complete",
+ finished_style: StyleType = "bar.finished",
+ pulse_style: StyleType = "bar.pulse",
+ animation_time: float = None,
+ ):
+ self.total = total
+ self.completed = completed
+ self.width = width
+ self.pulse = pulse
+ self.style = style
+ self.complete_style = complete_style
+ self.finished_style = finished_style
+ self.pulse_style = pulse_style
+ self.animation_time = animation_time
+
+ self._pulse_segments: Optional[List[Segment]] = None
+
+ def __repr__(self) -> str:
+ return f"<Bar {self.completed!r} of {self.total!r}>"
+
+ @property
+ def percentage_completed(self) -> float:
+ """Calculate percentage complete."""
+ completed = (self.completed / self.total) * 100.0
+ completed = min(100, max(0.0, completed))
+ return completed
+
+ @lru_cache(maxsize=16)
+ def _get_pulse_segments(
+ self,
+ fore_style: Style,
+ back_style: Style,
+ color_system: str,
+ ascii: bool = False,
+ ) -> List[Segment]:
+ """Get a list of segments to render a pulse animation.
+
+ Returns:
+ List[Segment]: A list of segments, one segment per character.
+ """
+ bar = "-" if ascii else "โ"
+ segments: List[Segment] = []
+
+ if color_system != "truecolor":
+ segments += [Segment(bar, fore_style)] * (PULSE_SIZE // 2)
+ segments += [Segment(bar if color_system else " ", back_style)] * (
+ PULSE_SIZE - (PULSE_SIZE // 2)
+ )
+ return segments
+
+ append = segments.append
+ fore_color = (
+ fore_style.color.get_truecolor()
+ if fore_style.color
+ else ColorTriplet(255, 0, 255)
+ )
+ back_color = (
+ back_style.color.get_truecolor()
+ if back_style.color
+ else ColorTriplet(0, 0, 0)
+ )
+ cos = math.cos
+ pi = math.pi
+ _Segment = Segment
+ _Style = Style
+ from_triplet = Color.from_triplet
+
+ for index in range(PULSE_SIZE):
+ position = index / PULSE_SIZE
+ fade = 0.5 + cos((position * pi * 2)) / 2.0
+ color = blend_rgb(fore_color, back_color, cross_fade=fade)
+ append(_Segment(bar, _Style(color=from_triplet(color))))
+ return segments
+
+ def update(self, completed: float, total: float = None) -> None:
+ """Update progress with new values.
+
+ Args:
+ completed (float): Number of steps completed.
+ total (float, optional): Total number of steps, or ``None`` to not change. Defaults to None.
+ """
+ self.completed = completed
+ self.total = total if total is not None else self.total
+
+ def _render_pulse(
+ self, console: Console, width: int, ascii: bool = False
+ ) -> Iterable[Segment]:
+ """Renders the pulse animation.
+
+ Args:
+ console (Console): Console instance.
+ width (int): Width in characters of pulse animation.
+
+ Returns:
+            RenderResult: Segments that make up the pulse animation.
+
+ Yields:
+ Iterator[Segment]: Segments to render pulse
+ """
+ fore_style = console.get_style(self.pulse_style, default="white")
+ back_style = console.get_style(self.style, default="black")
+
+ pulse_segments = self._get_pulse_segments(
+ fore_style, back_style, console.color_system, ascii=ascii
+ )
+ segment_count = len(pulse_segments)
+ current_time = (
+ monotonic() if self.animation_time is None else self.animation_time
+ )
+ segments = pulse_segments * (int(width / segment_count) + 2)
+ offset = int(-current_time * 15) % segment_count
+ segments = segments[offset : offset + width]
+ yield from segments
+
+ def __rich_console__(
+ self, console: Console, options: ConsoleOptions
+ ) -> RenderResult:
+
+ width = min(self.width or options.max_width, options.max_width)
+ ascii = options.legacy_windows or options.ascii_only
+ if self.pulse:
+ yield from self._render_pulse(console, width, ascii=ascii)
+ return
+
+ completed = min(self.total, max(0, self.completed))
+
+        bar = "-" if ascii else "━"
+        half_bar_right = " " if ascii else "╸"
+        half_bar_left = " " if ascii else "╺"
+ complete_halves = (
+ int(width * 2 * completed / self.total) if self.total else width * 2
+ )
+ bar_count = complete_halves // 2
+ half_bar_count = complete_halves % 2
+ style = console.get_style(self.style)
+ complete_style = console.get_style(
+ self.complete_style if self.completed < self.total else self.finished_style
+ )
+ _Segment = Segment
+ if bar_count:
+ yield _Segment(bar * bar_count, complete_style)
+ if half_bar_count:
+ yield _Segment(half_bar_right * half_bar_count, complete_style)
+
+ remaining_bars = width - bar_count - half_bar_count
+ if remaining_bars and console.color_system is not None:
+ if not half_bar_count and bar_count:
+ yield _Segment(half_bar_left, style)
+ remaining_bars -= 1
+ if remaining_bars:
+ yield _Segment(bar * remaining_bars, style)
+
+ def __rich_measure__(self, console: Console, max_width: int) -> Measurement:
+ return (
+ Measurement(self.width, self.width)
+ if self.width is not None
+ else Measurement(4, max_width)
+ )
+
+
+if __name__ == "__main__": # pragma: no cover
+ console = Console()
+ bar = ProgressBar(width=50, total=100)
+
+ import time
+
+ console.show_cursor(False)
+ for n in range(0, 101, 1):
+ bar.update(n)
+ console.print(bar)
+ console.file.write("\r")
+ time.sleep(0.05)
+ console.show_cursor(True)
+ console.print()
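The cosine cross-fade used by `_get_pulse_segments` above can be sketched standalone. This is a minimal illustration, not Rich's internals: `blend_rgb` here is a hypothetical stand-in for Rich's color-blending helper, linearly mixing two RGB triplets by a `cross_fade` factor.

```python
import math

def blend_rgb(fore, back, cross_fade):
    """Linearly blend two (r, g, b) triplets; cross_fade=0 gives fore, 1 gives back."""
    return tuple(int(f + (b - f) * cross_fade) for f, b in zip(fore, back))

PULSE_SIZE = 20
fore, back = (249, 38, 114), (0, 0, 0)

colors = []
for index in range(PULSE_SIZE):
    position = index / PULSE_SIZE
    # One full cosine period across the pulse: fade oscillates smoothly in [0, 1].
    fade = 0.5 + math.cos(position * math.pi * 2) / 2.0
    colors.append(blend_rgb(fore, back, cross_fade=fade))

# fade is 1.0 at index 0 (fully background) and 0.0 at the midpoint (fully foreground),
# so tiling this list and shifting the offset each frame produces the moving pulse.
```

Because the list starts and ends at the same fade value, repeating it end to end (as `_render_pulse` does with `pulse_segments * ...` and a time-based `offset`) yields a seamless animation.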
| diff --git a/tests/_card_render.py b/tests/_card_render.py
index 805bab4753..8d71f7c255 100644
--- a/tests/_card_render.py
+++ b/tests/_card_render.py
@@ -1,1 +1,1 @@
-expected='\x1b[3m Rich features \x1b[0m\n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Colors\x1b[0m\x1b[1;31m \x1b[0m\x1b[1;33m256\x1b[0m colors or \x1b[1;32m16.7 million\x1b[0m colors \x1b[34m(if supported by your terminal)\x1b[0m. \n \n \x1b[48;5;16m \x1b[0m\x1b[48;5;17m \x1b[0m\x1b[48;5;18m \x1b[0m\x1b[48;5;19m \x1b[0m\x1b[48;5;20m \x1b[0m\x1b[48;5;21m \x1b[0m\x1b[48;5;52m \x1b[0m\x1b[48;5;53m \x1b[0m\x1b[48;5;54m \x1b[0m\x1b[48;5;55m \x1b[0m\x1b[48;5;56m \x1b[0m\x1b[48;5;57m \x1b[0m\x1b[48;5;88m \x1b[0m\x1b[48;5;89m \x1b[0m\x1b[48;5;90m \x1b[0m\x1b[48;5;91m \x1b[0m\x1b[48;5;92m \x1b[0m\x1b[48;5;93m \x1b[0m\x1b[48;5;124m \x1b[0m\x1b[48;5;125m \x1b[0m\x1b[48;5;126m \x1b[0m\x1b[48;5;127m \x1b[0m\x1b[48;5;128m \x1b[0m\x1b[48;5;129m \x1b[0m\x1b[48;5;160m \x1b[0m\x1b[48;5;161m \x1b[0m\x1b[48;5;162m \x1b[0m\x1b[48;5;163m \x1b[0m\x1b[48;5;164m \x1b[0m\x1b[48;5;165m \x1b[0m\x1b[48;5;196m \x1b[0m\x1b[48;5;197m \x1b[0m\x1b[48;5;198m \x1b[0m\x1b[48;5;199m \x1b[0m\x1b[48;5;200m \x1b[0m\x1b[48;5;201m \x1b[0m \n \x1b[48;5;22m \x1b[0m\x1b[48;5;23m \x1b[0m\x1b[48;5;24m \x1b[0m\x1b[48;5;25m \x1b[0m\x1b[48;5;26m \x1b[0m\x1b[48;5;27m \x1b[0m\x1b[48;5;58m \x1b[0m\x1b[48;5;59m \x1b[0m\x1b[48;5;60m \x1b[0m\x1b[48;5;61m \x1b[0m\x1b[48;5;62m \x1b[0m\x1b[48;5;63m \x1b[0m\x1b[48;5;94m \x1b[0m\x1b[48;5;95m \x1b[0m\x1b[48;5;96m \x1b[0m\x1b[48;5;97m \x1b[0m\x1b[48;5;98m \x1b[0m\x1b[48;5;99m \x1b[0m\x1b[48;5;130m \x1b[0m\x1b[48;5;131m \x1b[0m\x1b[48;5;132m \x1b[0m\x1b[48;5;133m \x1b[0m\x1b[48;5;134m \x1b[0m\x1b[48;5;135m \x1b[0m\x1b[48;5;166m \x1b[0m\x1b[48;5;167m \x1b[0m\x1b[48;5;168m \x1b[0m\x1b[48;5;169m \x1b[0m\x1b[48;5;170m \x1b[0m\x1b[48;5;171m \x1b[0m\x1b[48;5;202m \x1b[0m\x1b[48;5;203m \x1b[0m\x1b[48;5;204m \x1b[0m\x1b[48;5;205m \x1b[0m\x1b[48;5;206m \x1b[0m\x1b[48;5;207m \x1b[0m \n \x1b[48;5;28m \x1b[0m\x1b[48;5;29m \x1b[0m\x1b[48;5;30m \x1b[0m\x1b[48;5;31m \x1b[0m\x1b[48;5;32m \x1b[0m\x1b[48;5;33m \x1b[0m\x1b[48;5;64m \x1b[0m\x1b[48;5;65m \x1b[0m\x1b[48;5;66m 
\x1b[0m\x1b[48;5;67m \x1b[0m\x1b[48;5;68m \x1b[0m\x1b[48;5;69m \x1b[0m\x1b[48;5;100m \x1b[0m\x1b[48;5;101m \x1b[0m\x1b[48;5;102m \x1b[0m\x1b[48;5;103m \x1b[0m\x1b[48;5;104m \x1b[0m\x1b[48;5;105m \x1b[0m\x1b[48;5;136m \x1b[0m\x1b[48;5;137m \x1b[0m\x1b[48;5;138m \x1b[0m\x1b[48;5;139m \x1b[0m\x1b[48;5;140m \x1b[0m\x1b[48;5;141m \x1b[0m\x1b[48;5;172m \x1b[0m\x1b[48;5;173m \x1b[0m\x1b[48;5;174m \x1b[0m\x1b[48;5;175m \x1b[0m\x1b[48;5;176m \x1b[0m\x1b[48;5;177m \x1b[0m\x1b[48;5;208m \x1b[0m\x1b[48;5;209m \x1b[0m\x1b[48;5;210m \x1b[0m\x1b[48;5;211m \x1b[0m\x1b[48;5;212m \x1b[0m\x1b[48;5;213m \x1b[0m \n \x1b[48;5;34m \x1b[0m\x1b[48;5;35m \x1b[0m\x1b[48;5;36m \x1b[0m\x1b[48;5;37m \x1b[0m\x1b[48;5;38m \x1b[0m\x1b[48;5;39m \x1b[0m\x1b[48;5;70m \x1b[0m\x1b[48;5;71m \x1b[0m\x1b[48;5;72m \x1b[0m\x1b[48;5;73m \x1b[0m\x1b[48;5;74m \x1b[0m\x1b[48;5;75m \x1b[0m\x1b[48;5;106m \x1b[0m\x1b[48;5;107m \x1b[0m\x1b[48;5;108m \x1b[0m\x1b[48;5;109m \x1b[0m\x1b[48;5;110m \x1b[0m\x1b[48;5;111m \x1b[0m\x1b[48;5;142m \x1b[0m\x1b[48;5;143m \x1b[0m\x1b[48;5;144m \x1b[0m\x1b[48;5;145m \x1b[0m\x1b[48;5;146m \x1b[0m\x1b[48;5;147m \x1b[0m\x1b[48;5;178m \x1b[0m\x1b[48;5;179m \x1b[0m\x1b[48;5;180m \x1b[0m\x1b[48;5;181m \x1b[0m\x1b[48;5;182m \x1b[0m\x1b[48;5;183m \x1b[0m\x1b[48;5;214m \x1b[0m\x1b[48;5;215m \x1b[0m\x1b[48;5;216m \x1b[0m\x1b[48;5;217m \x1b[0m\x1b[48;5;218m \x1b[0m\x1b[48;5;219m \x1b[0m \n \x1b[48;5;40m \x1b[0m\x1b[48;5;41m \x1b[0m\x1b[48;5;42m \x1b[0m\x1b[48;5;43m \x1b[0m\x1b[48;5;44m \x1b[0m\x1b[48;5;45m \x1b[0m\x1b[48;5;76m \x1b[0m\x1b[48;5;77m \x1b[0m\x1b[48;5;78m \x1b[0m\x1b[48;5;79m \x1b[0m\x1b[48;5;80m \x1b[0m\x1b[48;5;81m \x1b[0m\x1b[48;5;112m \x1b[0m\x1b[48;5;113m \x1b[0m\x1b[48;5;114m \x1b[0m\x1b[48;5;115m \x1b[0m\x1b[48;5;116m \x1b[0m\x1b[48;5;117m \x1b[0m\x1b[48;5;148m \x1b[0m\x1b[48;5;149m \x1b[0m\x1b[48;5;150m \x1b[0m\x1b[48;5;151m \x1b[0m\x1b[48;5;152m \x1b[0m\x1b[48;5;153m \x1b[0m\x1b[48;5;184m \x1b[0m\x1b[48;5;185m \x1b[0m\x1b[48;5;186m \x1b[0m\x1b[48;5;187m 
\x1b[0m\x1b[48;5;188m \x1b[0m\x1b[48;5;189m \x1b[0m\x1b[48;5;220m \x1b[0m\x1b[48;5;221m \x1b[0m\x1b[48;5;222m \x1b[0m\x1b[48;5;223m \x1b[0m\x1b[48;5;224m \x1b[0m\x1b[48;5;225m \x1b[0m \n \x1b[48;5;46m \x1b[0m\x1b[48;5;47m \x1b[0m\x1b[48;5;48m \x1b[0m\x1b[48;5;49m \x1b[0m\x1b[48;5;50m \x1b[0m\x1b[48;5;51m \x1b[0m\x1b[48;5;82m \x1b[0m\x1b[48;5;83m \x1b[0m\x1b[48;5;84m \x1b[0m\x1b[48;5;85m \x1b[0m\x1b[48;5;86m \x1b[0m\x1b[48;5;87m \x1b[0m\x1b[48;5;118m \x1b[0m\x1b[48;5;119m \x1b[0m\x1b[48;5;120m \x1b[0m\x1b[48;5;121m \x1b[0m\x1b[48;5;122m \x1b[0m\x1b[48;5;123m \x1b[0m\x1b[48;5;154m \x1b[0m\x1b[48;5;155m \x1b[0m\x1b[48;5;156m \x1b[0m\x1b[48;5;157m \x1b[0m\x1b[48;5;158m \x1b[0m\x1b[48;5;159m \x1b[0m\x1b[48;5;190m \x1b[0m\x1b[48;5;191m \x1b[0m\x1b[48;5;192m \x1b[0m\x1b[48;5;193m \x1b[0m\x1b[48;5;194m \x1b[0m\x1b[48;5;195m \x1b[0m\x1b[48;5;226m \x1b[0m\x1b[48;5;227m \x1b[0m\x1b[48;5;228m \x1b[0m\x1b[48;5;229m \x1b[0m\x1b[48;5;230m \x1b[0m\x1b[48;5;231m \x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Styles\x1b[0m\x1b[1;31m \x1b[0mAll ansi styles: \x1b[1mbold\x1b[0m, \x1b[2mdim\x1b[0m, \x1b[3mitalic\x1b[0m, \x1b[4munderline\x1b[0m, \x1b[9mstrikethrough\x1b[0m, \x1b[7mreverse\x1b[0m, and \n even \x1b[5mblink\x1b[0m. \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Text\x1b[0m\x1b[1;31m \x1b[0mWord wrap text. Justify \x1b[32mleft\x1b[0m, \x1b[33mcenter\x1b[0m, \x1b[34mright\x1b[0m or \x1b[31mfull\x1b[0m. \n \n \x1b[32mLorem ipsum dolor \x1b[0m \x1b[33mLorem ipsum dolor \x1b[0m \x1b[34m Lorem ipsum dolor\x1b[0m \x1b[31mLorem\x1b[0m\x1b[31m \x1b[0m\x1b[31mipsum\x1b[0m\x1b[31m \x1b[0m\x1b[31mdolor\x1b[0m \n \x1b[32msit amet, \x1b[0m \x1b[33m sit amet, \x1b[0m \x1b[34m sit amet,\x1b[0m \x1b[31msit\x1b[0m\x1b[31m \x1b[0m\x1b[31mamet,\x1b[0m \n \x1b[32mconsectetur \x1b[0m \x1b[33m consectetur \x1b[0m \x1b[34m consectetur\x1b[0m \x1b[31mconsectetur\x1b[0m \n \x1b[32madipiscing elit. \x1b[0m \x1b[33m adipiscing elit. 
\x1b[0m \x1b[34m adipiscing elit.\x1b[0m \x1b[31madipiscing\x1b[0m\x1b[31m \x1b[0m\x1b[31melit.\x1b[0m \n \x1b[32mQuisque in metus \x1b[0m \x1b[33m Quisque in metus \x1b[0m \x1b[34m Quisque in metus\x1b[0m \x1b[31mQuisque\x1b[0m\x1b[31m \x1b[0m\x1b[31min\x1b[0m\x1b[31m \x1b[0m\x1b[31mmetus\x1b[0m \n \x1b[32msed sapien \x1b[0m \x1b[33m sed sapien \x1b[0m \x1b[34m sed sapien\x1b[0m \x1b[31msed\x1b[0m\x1b[31m \x1b[0m\x1b[31msapien\x1b[0m \n \x1b[32multricies pretium a\x1b[0m \x1b[33multricies pretium \x1b[0m \x1b[34multricies pretium a\x1b[0m \x1b[31multricies\x1b[0m\x1b[31m \x1b[0m\x1b[31mpretium\x1b[0m\x1b[31m \x1b[0m\x1b[31ma\x1b[0m \n \x1b[32mat justo. Maecenas \x1b[0m \x1b[33m a at justo. \x1b[0m \x1b[34m at justo. Maecenas\x1b[0m \x1b[31mat\x1b[0m\x1b[31m \x1b[0m\x1b[31mjusto.\x1b[0m\x1b[31m \x1b[0m\x1b[31mMaecenas\x1b[0m \n \x1b[32mluctus velit et \x1b[0m \x1b[33m Maecenas luctus \x1b[0m \x1b[34m luctus velit et\x1b[0m \x1b[31mluctus\x1b[0m\x1b[31m \x1b[0m\x1b[31mvelit\x1b[0m\x1b[31m \x1b[0m\x1b[31met\x1b[0m \n \x1b[32mauctor maximus. \x1b[0m \x1b[33m velit et auctor \x1b[0m \x1b[34m auctor maximus.\x1b[0m \x1b[31mauctor\x1b[0m\x1b[31m \x1b[0m\x1b[31mmaximus.\x1b[0m \n \x1b[32mDonec faucibus vel \x1b[0m \x1b[33m maximus. Donec \x1b[0m \x1b[34m Donec faucibus vel\x1b[0m \x1b[31mDonec\x1b[0m\x1b[31m \x1b[0m\x1b[31mfaucibus\x1b[0m\x1b[31m \x1b[0m\x1b[31mvel\x1b[0m \n \x1b[32marcu id pretium. \x1b[0m \x1b[33mfaucibus vel arcu \x1b[0m \x1b[34m arcu id pretium.\x1b[0m \x1b[31marcu id pretium.\x1b[0m \n \x1b[33m id pretium. 
\x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m CJK support\x1b[0m\x1b[1;31m \x1b[0m\x1b[31mโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \n \x1b[31mโ\x1b[0m ่ฏฅๅบๆฏๆไธญๆ๏ผๆฅๆๅ้ฉๆๆๆฌ๏ผ \x1b[31mโ\x1b[0m \n \x1b[31mโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Emoji\x1b[0m\x1b[1;31m \x1b[0mRender emoji code: :+1: :apple: :ant: Render emoji code: ๐ ๐ ๐ ๐ป ๐ฅ ๐ \n :bear: :baguette_bread: :bus: \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Console markup\x1b[0m\x1b[1;31m \x1b[0m[bold magenta]Rich[/] supports a \x1b[1;35mRich\x1b[0m supports a simple \x1b[3mbbcode\x1b[0m like \n simple [i]bbcode[/i] like \x1b[1mmarkup\x1b[0m, you can use to insert \x1b[33mcolor\x1b[0m \n [b]markup[/b], you can use to insert and \x1b[4mstyle\x1b[0m. \n [yellow]color[/] and \n [underline]style[/]. \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Tables\x1b[0m\x1b[1;31m \x1b[0m\x1b[3m Star Wars box office \x1b[0m \n โโโโโโโโโโโโโโโโณโโโโโโโโโโโโโโโโโโโโโโโโโณโโโโโโโโโโโโโโโโโโโโณโโโโโโโโโโโโโโโโโ \n โ\x1b[1;35m \x1b[0m\x1b[1;35mDate \x1b[0m\x1b[1;35m \x1b[0mโ\x1b[1;35m \x1b[0m\x1b[1;35mTitle \x1b[0m\x1b[1;35m \x1b[0mโ\x1b[1;35m \x1b[0m\x1b[1;35mProduction Budget\x1b[0m\x1b[1;35m \x1b[0mโ\x1b[1;35m \x1b[0m\x1b[1;35m Box Office\x1b[0m\x1b[1;35m \x1b[0mโ \n โกโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฉ \n โ\x1b[2m \x1b[0m\x1b[2mDec 20, 2019\x1b[0m\x1b[2m \x1b[0mโ Star Wars: The Rise of โ $275,000,000 โ $375,126,118 โ \n โ โ Skywalker โ โ โ \n โ\x1b[2m \x1b[0m\x1b[2mMay 25, 2018\x1b[0m\x1b[2m \x1b[0mโ \x1b[31mSolo\x1b[0m: A Star Wars โ $275,000,000 โ $393,151,347 โ \n โ โ Story โ โ โ \n โ\x1b[2m \x1b[0m\x1b[2mDec 15, 2017\x1b[0m\x1b[2m \x1b[0mโ Star Wars Ep. 
VIII: โ $262,000,000 โ \x1b[1m$1,332,539,889\x1b[0m โ \n โ โ The Last Jedi โ โ โ \n โโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโ \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31mSyntax highlighting\x1b[0m\x1b[1;31m \x1b[0m\x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 1 \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mdef\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;166;226;46;48;2;39;40;34miter_last\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m(\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mvalues\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mIterable\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m[\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mT\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m]\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m)\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m-\x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m>\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mIterable\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m[\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mTuple\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m[\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mbool\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m,\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mT\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m]\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m]\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 2 \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;230;219;116;48;2;39;40;34m"""Iterate and generate a tuple with a flag for last value."""\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 3 
\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34miter_values\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m=\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34miter\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m(\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mvalues\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m)\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 4 \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mtry\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 5 \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mprevious_value\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m=\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mnext\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m(\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34miter_values\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m)\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 6 \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mexcept\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;166;226;46;48;2;39;40;34mStopIteration\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 7 \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mreturn\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 8 \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m 
\x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mfor\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mvalue\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34min\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34miter_values\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 9 \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34myield\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mFalse\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m,\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mprevious_value\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m10 \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mprevious_value\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m=\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mvalue\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m11 \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34myield\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mTrue\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m,\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mprevious_value\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Markdown\x1b[0m\x1b[1;31m \x1b[0m# Markdown โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ \n โ \x1b[1mMarkdown\x1b[0m โ \n Supports much of the *markdown*, โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ \n __syntax__! 
\n Supports much of the \x1b[3mmarkdown\x1b[0m, \x1b[1msyntax\x1b[0m! \n - Headers \n - Basic formatting: **bold**, \x1b[1;33m โข \x1b[0mHeaders \n *italic*, `code` \x1b[1;33m โข \x1b[0mBasic formatting: \x1b[1mbold\x1b[0m, \x1b[3mitalic\x1b[0m, \n - Block quotes \x1b[1;33m \x1b[0m\x1b[97;40mcode\x1b[0m \n - Lists, and more... \x1b[1;33m โข \x1b[0mBlock quotes \n \x1b[1;33m โข \x1b[0mLists, and more... \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m And more\x1b[0m\x1b[1;31m \x1b[0mProgress bars, styled logging handler, tracebacks, etc... \n\x1b[1;31m \x1b[0m \n'
\ No newline at end of file
+expected = '\x1b[3m Rich features \x1b[0m\n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Colors\x1b[0m\x1b[1;31m \x1b[0m\x1b[1;33m256\x1b[0m colors or \x1b[1;32m16.7 million\x1b[0m colors \x1b[34m(if supported by your terminal)\x1b[0m. \n \n \x1b[48;5;16m \x1b[0m\x1b[48;5;17m \x1b[0m\x1b[48;5;18m \x1b[0m\x1b[48;5;19m \x1b[0m\x1b[48;5;20m \x1b[0m\x1b[48;5;21m \x1b[0m\x1b[48;5;52m \x1b[0m\x1b[48;5;53m \x1b[0m\x1b[48;5;54m \x1b[0m\x1b[48;5;55m \x1b[0m\x1b[48;5;56m \x1b[0m\x1b[48;5;57m \x1b[0m\x1b[48;5;88m \x1b[0m\x1b[48;5;89m \x1b[0m\x1b[48;5;90m \x1b[0m\x1b[48;5;91m \x1b[0m\x1b[48;5;92m \x1b[0m\x1b[48;5;93m \x1b[0m\x1b[48;5;124m \x1b[0m\x1b[48;5;125m \x1b[0m\x1b[48;5;126m \x1b[0m\x1b[48;5;127m \x1b[0m\x1b[48;5;128m \x1b[0m\x1b[48;5;129m \x1b[0m\x1b[48;5;160m \x1b[0m\x1b[48;5;161m \x1b[0m\x1b[48;5;162m \x1b[0m\x1b[48;5;163m \x1b[0m\x1b[48;5;164m \x1b[0m\x1b[48;5;165m \x1b[0m\x1b[48;5;196m \x1b[0m\x1b[48;5;197m \x1b[0m\x1b[48;5;198m \x1b[0m\x1b[48;5;199m \x1b[0m\x1b[48;5;200m \x1b[0m\x1b[48;5;201m \x1b[0m \n \x1b[48;5;22m \x1b[0m\x1b[48;5;23m \x1b[0m\x1b[48;5;24m \x1b[0m\x1b[48;5;25m \x1b[0m\x1b[48;5;26m \x1b[0m\x1b[48;5;27m \x1b[0m\x1b[48;5;58m \x1b[0m\x1b[48;5;59m \x1b[0m\x1b[48;5;60m \x1b[0m\x1b[48;5;61m \x1b[0m\x1b[48;5;62m \x1b[0m\x1b[48;5;63m \x1b[0m\x1b[48;5;94m \x1b[0m\x1b[48;5;95m \x1b[0m\x1b[48;5;96m \x1b[0m\x1b[48;5;97m \x1b[0m\x1b[48;5;98m \x1b[0m\x1b[48;5;99m \x1b[0m\x1b[48;5;130m \x1b[0m\x1b[48;5;131m \x1b[0m\x1b[48;5;132m \x1b[0m\x1b[48;5;133m \x1b[0m\x1b[48;5;134m \x1b[0m\x1b[48;5;135m \x1b[0m\x1b[48;5;166m \x1b[0m\x1b[48;5;167m \x1b[0m\x1b[48;5;168m \x1b[0m\x1b[48;5;169m \x1b[0m\x1b[48;5;170m \x1b[0m\x1b[48;5;171m \x1b[0m\x1b[48;5;202m \x1b[0m\x1b[48;5;203m \x1b[0m\x1b[48;5;204m \x1b[0m\x1b[48;5;205m \x1b[0m\x1b[48;5;206m \x1b[0m\x1b[48;5;207m \x1b[0m \n \x1b[48;5;28m \x1b[0m\x1b[48;5;29m \x1b[0m\x1b[48;5;30m \x1b[0m\x1b[48;5;31m \x1b[0m\x1b[48;5;32m \x1b[0m\x1b[48;5;33m \x1b[0m\x1b[48;5;64m \x1b[0m\x1b[48;5;65m \x1b[0m\x1b[48;5;66m 
\x1b[0m\x1b[48;5;67m \x1b[0m\x1b[48;5;68m \x1b[0m\x1b[48;5;69m \x1b[0m\x1b[48;5;100m \x1b[0m\x1b[48;5;101m \x1b[0m\x1b[48;5;102m \x1b[0m\x1b[48;5;103m \x1b[0m\x1b[48;5;104m \x1b[0m\x1b[48;5;105m \x1b[0m\x1b[48;5;136m \x1b[0m\x1b[48;5;137m \x1b[0m\x1b[48;5;138m \x1b[0m\x1b[48;5;139m \x1b[0m\x1b[48;5;140m \x1b[0m\x1b[48;5;141m \x1b[0m\x1b[48;5;172m \x1b[0m\x1b[48;5;173m \x1b[0m\x1b[48;5;174m \x1b[0m\x1b[48;5;175m \x1b[0m\x1b[48;5;176m \x1b[0m\x1b[48;5;177m \x1b[0m\x1b[48;5;208m \x1b[0m\x1b[48;5;209m \x1b[0m\x1b[48;5;210m \x1b[0m\x1b[48;5;211m \x1b[0m\x1b[48;5;212m \x1b[0m\x1b[48;5;213m \x1b[0m \n \x1b[48;5;34m \x1b[0m\x1b[48;5;35m \x1b[0m\x1b[48;5;36m \x1b[0m\x1b[48;5;37m \x1b[0m\x1b[48;5;38m \x1b[0m\x1b[48;5;39m \x1b[0m\x1b[48;5;70m \x1b[0m\x1b[48;5;71m \x1b[0m\x1b[48;5;72m \x1b[0m\x1b[48;5;73m \x1b[0m\x1b[48;5;74m \x1b[0m\x1b[48;5;75m \x1b[0m\x1b[48;5;106m \x1b[0m\x1b[48;5;107m \x1b[0m\x1b[48;5;108m \x1b[0m\x1b[48;5;109m \x1b[0m\x1b[48;5;110m \x1b[0m\x1b[48;5;111m \x1b[0m\x1b[48;5;142m \x1b[0m\x1b[48;5;143m \x1b[0m\x1b[48;5;144m \x1b[0m\x1b[48;5;145m \x1b[0m\x1b[48;5;146m \x1b[0m\x1b[48;5;147m \x1b[0m\x1b[48;5;178m \x1b[0m\x1b[48;5;179m \x1b[0m\x1b[48;5;180m \x1b[0m\x1b[48;5;181m \x1b[0m\x1b[48;5;182m \x1b[0m\x1b[48;5;183m \x1b[0m\x1b[48;5;214m \x1b[0m\x1b[48;5;215m \x1b[0m\x1b[48;5;216m \x1b[0m\x1b[48;5;217m \x1b[0m\x1b[48;5;218m \x1b[0m\x1b[48;5;219m \x1b[0m \n \x1b[48;5;40m \x1b[0m\x1b[48;5;41m \x1b[0m\x1b[48;5;42m \x1b[0m\x1b[48;5;43m \x1b[0m\x1b[48;5;44m \x1b[0m\x1b[48;5;45m \x1b[0m\x1b[48;5;76m \x1b[0m\x1b[48;5;77m \x1b[0m\x1b[48;5;78m \x1b[0m\x1b[48;5;79m \x1b[0m\x1b[48;5;80m \x1b[0m\x1b[48;5;81m \x1b[0m\x1b[48;5;112m \x1b[0m\x1b[48;5;113m \x1b[0m\x1b[48;5;114m \x1b[0m\x1b[48;5;115m \x1b[0m\x1b[48;5;116m \x1b[0m\x1b[48;5;117m \x1b[0m\x1b[48;5;148m \x1b[0m\x1b[48;5;149m \x1b[0m\x1b[48;5;150m \x1b[0m\x1b[48;5;151m \x1b[0m\x1b[48;5;152m \x1b[0m\x1b[48;5;153m \x1b[0m\x1b[48;5;184m \x1b[0m\x1b[48;5;185m \x1b[0m\x1b[48;5;186m \x1b[0m\x1b[48;5;187m 
\x1b[0m\x1b[48;5;188m \x1b[0m\x1b[48;5;189m \x1b[0m\x1b[48;5;220m \x1b[0m\x1b[48;5;221m \x1b[0m\x1b[48;5;222m \x1b[0m\x1b[48;5;223m \x1b[0m\x1b[48;5;224m \x1b[0m\x1b[48;5;225m \x1b[0m \n \x1b[48;5;46m \x1b[0m\x1b[48;5;47m \x1b[0m\x1b[48;5;48m \x1b[0m\x1b[48;5;49m \x1b[0m\x1b[48;5;50m \x1b[0m\x1b[48;5;51m \x1b[0m\x1b[48;5;82m \x1b[0m\x1b[48;5;83m \x1b[0m\x1b[48;5;84m \x1b[0m\x1b[48;5;85m \x1b[0m\x1b[48;5;86m \x1b[0m\x1b[48;5;87m \x1b[0m\x1b[48;5;118m \x1b[0m\x1b[48;5;119m \x1b[0m\x1b[48;5;120m \x1b[0m\x1b[48;5;121m \x1b[0m\x1b[48;5;122m \x1b[0m\x1b[48;5;123m \x1b[0m\x1b[48;5;154m \x1b[0m\x1b[48;5;155m \x1b[0m\x1b[48;5;156m \x1b[0m\x1b[48;5;157m \x1b[0m\x1b[48;5;158m \x1b[0m\x1b[48;5;159m \x1b[0m\x1b[48;5;190m \x1b[0m\x1b[48;5;191m \x1b[0m\x1b[48;5;192m \x1b[0m\x1b[48;5;193m \x1b[0m\x1b[48;5;194m \x1b[0m\x1b[48;5;195m \x1b[0m\x1b[48;5;226m \x1b[0m\x1b[48;5;227m \x1b[0m\x1b[48;5;228m \x1b[0m\x1b[48;5;229m \x1b[0m\x1b[48;5;230m \x1b[0m\x1b[48;5;231m \x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Styles\x1b[0m\x1b[1;31m \x1b[0mAll ansi styles: \x1b[1mbold\x1b[0m, \x1b[2mdim\x1b[0m, \x1b[3mitalic\x1b[0m, \x1b[4munderline\x1b[0m, \x1b[9mstrikethrough\x1b[0m, \x1b[7mreverse\x1b[0m, and \n even \x1b[5mblink\x1b[0m. \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Text\x1b[0m\x1b[1;31m \x1b[0mWord wrap text. Justify \x1b[32mleft\x1b[0m, \x1b[33mcenter\x1b[0m, \x1b[34mright\x1b[0m or \x1b[31mfull\x1b[0m. \n \n \x1b[32mLorem ipsum dolor \x1b[0m \x1b[33mLorem ipsum dolor \x1b[0m \x1b[34m Lorem ipsum dolor\x1b[0m \x1b[31mLorem\x1b[0m\x1b[31m \x1b[0m\x1b[31mipsum\x1b[0m\x1b[31m \x1b[0m\x1b[31mdolor\x1b[0m \n \x1b[32msit amet, \x1b[0m \x1b[33m sit amet, \x1b[0m \x1b[34m sit amet,\x1b[0m \x1b[31msit\x1b[0m\x1b[31m \x1b[0m\x1b[31mamet,\x1b[0m \n \x1b[32mconsectetur \x1b[0m \x1b[33m consectetur \x1b[0m \x1b[34m consectetur\x1b[0m \x1b[31mconsectetur\x1b[0m \n \x1b[32madipiscing elit. \x1b[0m \x1b[33m adipiscing elit. 
\x1b[0m \x1b[34m adipiscing elit.\x1b[0m \x1b[31madipiscing\x1b[0m\x1b[31m \x1b[0m\x1b[31melit.\x1b[0m \n \x1b[32mQuisque in metus \x1b[0m \x1b[33m Quisque in metus \x1b[0m \x1b[34m Quisque in metus\x1b[0m \x1b[31mQuisque\x1b[0m\x1b[31m \x1b[0m\x1b[31min\x1b[0m\x1b[31m \x1b[0m\x1b[31mmetus\x1b[0m \n \x1b[32msed sapien \x1b[0m \x1b[33m sed sapien \x1b[0m \x1b[34m sed sapien\x1b[0m \x1b[31msed\x1b[0m\x1b[31m \x1b[0m\x1b[31msapien\x1b[0m \n \x1b[32multricies pretium a\x1b[0m \x1b[33multricies pretium \x1b[0m \x1b[34multricies pretium a\x1b[0m \x1b[31multricies\x1b[0m\x1b[31m \x1b[0m\x1b[31mpretium\x1b[0m\x1b[31m \x1b[0m\x1b[31ma\x1b[0m \n \x1b[32mat justo. Maecenas \x1b[0m \x1b[33m a at justo. \x1b[0m \x1b[34m at justo. Maecenas\x1b[0m \x1b[31mat\x1b[0m\x1b[31m \x1b[0m\x1b[31mjusto.\x1b[0m\x1b[31m \x1b[0m\x1b[31mMaecenas\x1b[0m \n \x1b[32mluctus velit et \x1b[0m \x1b[33m Maecenas luctus \x1b[0m \x1b[34m luctus velit et\x1b[0m \x1b[31mluctus\x1b[0m\x1b[31m \x1b[0m\x1b[31mvelit\x1b[0m\x1b[31m \x1b[0m\x1b[31met\x1b[0m \n \x1b[32mauctor maximus. \x1b[0m \x1b[33m velit et auctor \x1b[0m \x1b[34m auctor maximus.\x1b[0m \x1b[31mauctor\x1b[0m\x1b[31m \x1b[0m\x1b[31mmaximus.\x1b[0m \n \x1b[32mDonec faucibus vel \x1b[0m \x1b[33m maximus. Donec \x1b[0m \x1b[34m Donec faucibus vel\x1b[0m \x1b[31mDonec\x1b[0m\x1b[31m \x1b[0m\x1b[31mfaucibus\x1b[0m\x1b[31m \x1b[0m\x1b[31mvel\x1b[0m \n \x1b[32marcu id pretium. \x1b[0m \x1b[33mfaucibus vel arcu \x1b[0m \x1b[34m arcu id pretium.\x1b[0m \x1b[31marcu id pretium.\x1b[0m \n \x1b[33m id pretium. 
\x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m CJK support\x1b[0m\x1b[1;31m \x1b[0m\x1b[31mโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \n \x1b[31mโ\x1b[0m ่ฏฅๅบๆฏๆไธญๆ๏ผๆฅๆๅ้ฉๆๆๆฌ๏ผ \x1b[31mโ\x1b[0m \n \x1b[31mโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ\x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Emoji\x1b[0m\x1b[1;31m \x1b[0mRender emoji code: :+1: :apple: :ant: Render emoji code: ๐ ๐ ๐ ๐ป ๐ฅ ๐ \n :bear: :baguette_bread: :bus: \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Console markup\x1b[0m\x1b[1;31m \x1b[0m[bold magenta]Rich[/] supports a \x1b[1;35mRich\x1b[0m supports a simple \x1b[3mbbcode\x1b[0m like \n simple [i]bbcode[/i] like \x1b[1mmarkup\x1b[0m, you can use to insert \x1b[33mcolor\x1b[0m \n [b]markup[/b], you can use to insert and \x1b[4mstyle\x1b[0m. \n [yellow]color[/] and \n [underline]style[/]. \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Tables\x1b[0m\x1b[1;31m \x1b[0m\x1b[3m Star Wars box office \x1b[0m \n โโโโโโโโโโโโโโโโณโโโโโโโโโโโโโโโโโโโโโโโโโณโโโโโโโโโโโโโโโโโโโโณโโโโโโโโโโโโโโโโโ \n โ\x1b[1;35m \x1b[0m\x1b[1;35mDate \x1b[0m\x1b[1;35m \x1b[0mโ\x1b[1;35m \x1b[0m\x1b[1;35mTitle \x1b[0m\x1b[1;35m \x1b[0mโ\x1b[1;35m \x1b[0m\x1b[1;35mProduction Budget\x1b[0m\x1b[1;35m \x1b[0mโ\x1b[1;35m \x1b[0m\x1b[1;35m Box Office\x1b[0m\x1b[1;35m \x1b[0mโ \n โกโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฉ \n โ\x1b[2m \x1b[0m\x1b[2mDec 20, 2019\x1b[0m\x1b[2m \x1b[0mโ Star Wars: The Rise of โ $275,000,000 โ $375,126,118 โ \n โ โ Skywalker โ โ โ \n โ\x1b[2m \x1b[0m\x1b[2mMay 25, 2018\x1b[0m\x1b[2m \x1b[0mโ \x1b[31mSolo\x1b[0m: A Star Wars โ $275,000,000 โ $393,151,347 โ \n โ โ Story โ โ โ \n โ\x1b[2m \x1b[0m\x1b[2mDec 15, 2017\x1b[0m\x1b[2m \x1b[0mโ Star Wars Ep. 
VIII: โ $262,000,000 โ \x1b[1m$1,332,539,889\x1b[0m โ \n โ โ The Last Jedi โ โ โ \n โโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโ \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31mSyntax highlighting\x1b[0m\x1b[1;31m \x1b[0m\x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 1 \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mdef\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;166;226;46;48;2;39;40;34miter_last\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m(\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mvalues\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mIterable\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m[\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mT\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m]\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m)\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m-\x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m>\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mIterable\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m[\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mTuple\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m[\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mbool\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m,\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mT\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m]\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m]\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 2 \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;230;219;116;48;2;39;40;34m"""Iterate and generate a tuple with a flag for last value."""\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 3 
[ANSI-escaped expected terminal output omitted: a syntax-highlighted listing of an `iter_values` generator loop, a "Markdown" panel demo ("Supports much of the markdown syntax!" — headers, basic formatting, block quotes, lists), and an "And more" line (progress bars, styled logging handler, tracebacks, etc.)]'
diff --git a/tests/test_bar.py b/tests/test_bar.py
index 3571c872c6..976a10b083 100644
--- a/tests/test_bar.py
+++ b/tests/test_bar.py
@@ -1,4 +1,4 @@
-from rich.bar import Bar
+from rich.progress_bar import ProgressBar
from rich.segment import Segment
from rich.style import Style
@@ -6,13 +6,13 @@
def test_init():
- bar = Bar(completed=50)
+ bar = ProgressBar(completed=50)
repr(bar)
assert bar.percentage_completed == 50.0
def test_update():
- bar = Bar()
+ bar = ProgressBar()
assert bar.completed == 0
assert bar.total == 100
bar.update(10, 20)
@@ -30,7 +30,7 @@ def test_update():
def test_render():
- bar = Bar(completed=11, width=50)
+ bar = ProgressBar(completed=11, width=50)
bar_render = render(bar)
assert bar_render == expected[0]
bar.update(completed=12)
@@ -39,7 +39,7 @@ def test_render():
def test_measure():
- bar = Bar()
+ bar = ProgressBar()
measurement = bar.__rich_measure__(None, 120)
assert measurement.minimum == 4
assert measurement.maximum == 120
@@ -47,12 +47,12 @@ def test_measure():
def test_zero_total():
# Shouldn't throw zero division error
- bar = Bar(total=0)
+ bar = ProgressBar(total=0)
render(bar)
def test_pulse():
- bar = Bar(pulse=True, animation_time=10)
+ bar = ProgressBar(pulse=True, animation_time=10)
bar_render = render(bar)
print(repr(bar_render))
expected = "\x1b[38;2;249;38;114mโ\x1b[0m\x1b[38;2;244;38;112mโ\x1b[0m\x1b[38;2;230;39;108mโ\x1b[0m\x1b[38;2;209;42;102mโ\x1b[0m\x1b[38;2;183;44;94mโ\x1b[0m\x1b[38;2;153;48;86mโ\x1b[0m\x1b[38;2;123;51;77mโ\x1b[0m\x1b[38;2;97;53;69mโ\x1b[0m\x1b[38;2;76;56;63mโ\x1b[0m\x1b[38;2;62;57;59mโ\x1b[0m\x1b[38;2;58;58;58mโ\x1b[0m\x1b[38;2;62;57;59mโ\x1b[0m\x1b[38;2;76;56;63mโ\x1b[0m\x1b[38;2;97;53;69mโ\x1b[0m\x1b[38;2;123;51;77mโ\x1b[0m\x1b[38;2;153;48;86mโ\x1b[0m\x1b[38;2;183;44;94mโ\x1b[0m\x1b[38;2;209;42;102mโ\x1b[0m\x1b[38;2;230;39;108mโ\x1b[0m\x1b[38;2;244;38;112mโ\x1b[0m\x1b[38;2;249;38;114mโ\x1b[0m\x1b[38;2;244;38;112mโ\x1b[0m\x1b[38;2;230;39;108mโ\x1b[0m\x1b[38;2;209;42;102mโ\x1b[0m\x1b[38;2;183;44;94mโ\x1b[0m\x1b[38;2;153;48;86mโ\x1b[0m\x1b[38;2;123;51;77mโ\x1b[0m\x1b[38;2;97;53;69mโ\x1b[0m\x1b[38;2;76;56;63mโ\x1b[0m\x1b[38;2;62;57;59mโ\x1b[0m\x1b[38;2;58;58;58mโ\x1b[0m\x1b[38;2;62;57;59mโ\x1b[0m\x1b[38;2;76;56;63mโ\x1b[0m\x1b[38;2;97;53;69mโ\x1b[0m\x1b[38;2;123;51;77mโ\x1b[0m\x1b[38;2;153;48;86mโ\x1b[0m\x1b[38;2;183;44;94mโ\x1b[0m\x1b[38;2;209;42;102mโ\x1b[0m\x1b[38;2;230;39;108mโ\x1b[0m\x1b[38;2;244;38;112mโ\x1b[0m\x1b[38;2;249;38;114mโ\x1b[0m\x1b[38;2;244;38;112mโ\x1b[0m\x1b[38;2;230;39;108mโ\x1b[0m\x1b[38;2;209;42;102mโ\x1b[0m\x1b[38;2;183;44;94mโ\x1b[0m\x1b[38;2;153;48;86mโ\x1b[0m\x1b[38;2;123;51;77mโ\x1b[0m\x1b[38;2;97;53;69mโ\x1b[0m\x1b[38;2;76;56;63mโ\x1b[0m\x1b[38;2;62;57;59mโ\x1b[0m\x1b[38;2;58;58;58mโ\x1b[0m\x1b[38;2;62;57;59mโ\x1b[0m\x1b[38;2;76;56;63mโ\x1b[0m\x1b[38;2;97;53;69mโ\x1b[0m\x1b[38;2;123;51;77mโ\x1b[0m\x1b[38;2;153;48;86mโ\x1b[0m\x1b[38;2;183;44;94mโ\x1b[0m\x1b[38;2;209;42;102mโ\x1b[0m\x1b[38;2;230;39;108mโ\x1b[0m\x1b[38;2;244;38;112mโ\x1b[0m\x1b[38;2;249;38;114mโ\x1b[0m\x1b[38;2;244;38;112mโ\x1b[0m\x1b[38;2;230;39;108mโ\x1b[0m\x1b[38;2;209;42;102mโ\x1b[0m\x1b[38;2;183;44;94mโ\x1b[0m\x1b[38;2;153;48;86mโ\x1b[0m\x1b[38;2;123;51;77mโ\x1b[0m\x1b[38;2;97;53;69mโ\x1b[0m\x1b[38;2;76;56;63mโ\x1b[0m\x1b[38;2;62;57;59mโ\x1b[0m\x1b[38;2;58;58;58mโ\x1b[0m
\x1b[38;2;62;57;59mโ\x1b[0m\x1b[38;2;76;56;63mโ\x1b[0m\x1b[38;2;97;53;69mโ\x1b[0m\x1b[38;2;123;51;77mโ\x1b[0m\x1b[38;2;153;48;86mโ\x1b[0m\x1b[38;2;183;44;94mโ\x1b[0m\x1b[38;2;209;42;102mโ\x1b[0m\x1b[38;2;230;39;108mโ\x1b[0m\x1b[38;2;244;38;112mโ\x1b[0m\x1b[38;2;249;38;114mโ\x1b[0m\x1b[38;2;244;38;112mโ\x1b[0m\x1b[38;2;230;39;108mโ\x1b[0m\x1b[38;2;209;42;102mโ\x1b[0m\x1b[38;2;183;44;94mโ\x1b[0m\x1b[38;2;153;48;86mโ\x1b[0m\x1b[38;2;123;51;77mโ\x1b[0m\x1b[38;2;97;53;69mโ\x1b[0m\x1b[38;2;76;56;63mโ\x1b[0m\x1b[38;2;62;57;59mโ\x1b[0m\x1b[38;2;58;58;58mโ\x1b[0m\x1b[38;2;62;57;59mโ\x1b[0m\x1b[38;2;76;56;63mโ\x1b[0m\x1b[38;2;97;53;69mโ\x1b[0m\x1b[38;2;123;51;77mโ\x1b[0m\x1b[38;2;153;48;86mโ\x1b[0m\x1b[38;2;183;44;94mโ\x1b[0m\x1b[38;2;209;42;102mโ\x1b[0m\x1b[38;2;230;39;108mโ\x1b[0m\x1b[38;2;244;38;112mโ\x1b[0m"
@@ -60,7 +60,7 @@ def test_pulse():
def test_get_pulse_segments():
- bar = Bar()
+ bar = ProgressBar()
segments = bar._get_pulse_segments(
Style.parse("red"), Style.parse("yellow"), "standard", False
)
@@ -91,7 +91,7 @@ def test_get_pulse_segments():
if __name__ == "__main__":
- bar = Bar(completed=11, width=50)
+ bar = ProgressBar(completed=11, width=50)
bar_render = render(bar)
print(repr(bar_render))
bar.update(completed=12)
diff --git a/tests/test_block_bar.py b/tests/test_block_bar.py
index 7210d79d4a..973b9e85e9 100644
--- a/tests/test_block_bar.py
+++ b/tests/test_block_bar.py
@@ -1,35 +1,35 @@
-from rich.block_bar import BlockBar
+from rich.bar import Bar
from .render import render
expected = [
- "\x1b[39;49m โโโโโโโโโโโโโโโโโโโโโโโโโโ \x1b[0m",
- "\x1b[39;49m โโโโโโโโโโโโโโโโโโโโโโโ \x1b[0m",
- "\x1b[39;49m \x1b[0m",
+ "\x1b[39;49m โโโโโโโโโโโโโโโโโโโโโโโโโโ \x1b[0m\n",
+ "\x1b[39;49m โโโโโโโโโโโโโโโโโโโโโโโ \x1b[0m\n",
+ "\x1b[39;49m \x1b[0m\n",
]
def test_repr():
- bar = BlockBar(size=100, begin=11, end=62, width=50)
- assert repr(bar) == "<BlockBar 11..62 of 100>"
+ bar = Bar(size=100, begin=11, end=62, width=50)
+ assert repr(bar) == "Bar(100, 11, 62)"
def test_render():
- bar = BlockBar(size=100, begin=11, end=62, width=50)
+ bar = Bar(size=100, begin=11, end=62, width=50)
bar_render = render(bar)
assert bar_render == expected[0]
- bar = BlockBar(size=100, begin=12, end=57, width=50)
+ bar = Bar(size=100, begin=12, end=57, width=50)
bar_render = render(bar)
assert bar_render == expected[1]
# begin after end
- bar = BlockBar(size=100, begin=60, end=40, width=50)
+ bar = Bar(size=100, begin=60, end=40, width=50)
bar_render = render(bar)
assert bar_render == expected[2]
def test_measure():
- bar = BlockBar(size=100, begin=11, end=62)
+ bar = Bar(size=100, begin=11, end=62)
measurement = bar.__rich_measure__(None, 120)
assert measurement.minimum == 4
assert measurement.maximum == 120
@@ -37,17 +37,17 @@ def test_measure():
def test_zero_total():
# Shouldn't throw zero division error
- bar = BlockBar(size=0, begin=0, end=0)
+ bar = Bar(size=0, begin=0, end=0)
render(bar)
if __name__ == "__main__":
- bar = BlockBar(size=100, begin=11, end=62, width=50)
+ bar = Bar(size=100, begin=11, end=62, width=50)
bar_render = render(bar)
print(repr(bar_render))
- bar = BlockBar(size=100, begin=12, end=57, width=50)
+ bar = Bar(size=100, begin=12, end=57, width=50)
bar_render = render(bar)
print(repr(bar_render))
- bar = BlockBar(size=100, begin=60, end=40, width=50)
+ bar = Bar(size=100, begin=60, end=40, width=50)
bar_render = render(bar)
print(repr(bar_render))
diff --git a/tests/test_color.py b/tests/test_color.py
index c3607b24ea..d5ecffd57a 100644
--- a/tests/test_color.py
+++ b/tests/test_color.py
@@ -67,6 +67,12 @@ def test_from_triplet() -> None:
)
+def test_from_rgb() -> None:
+ assert Color.from_rgb(0x10, 0x20, 0x30) == Color(
+ "#102030", ColorType.TRUECOLOR, None, ColorTriplet(0x10, 0x20, 0x30)
+ )
+
+
def test_default() -> None:
assert Color.default() == Color("default", ColorType.DEFAULT, None, None)
diff --git a/tests/test_progress.py b/tests/test_progress.py
index 9d4bb6ab0f..3d862d5740 100644
--- a/tests/test_progress.py
+++ b/tests/test_progress.py
@@ -5,7 +5,7 @@
import pytest
-from rich.bar import Bar
+from rich.progress_bar import ProgressBar
from rich.console import Console
from rich.highlighter import NullHighlighter
from rich.progress import (
@@ -49,7 +49,7 @@ def test_bar_columns():
assert bar_column.bar_width == 100
task = Task(1, "test", 100, 20, _get_time=lambda: 1.0)
bar = bar_column(task)
- assert isinstance(bar, Bar)
+ assert isinstance(bar, ProgressBar)
assert bar.completed == 20
assert bar.total == 100
| diff --git a/CHANGELOG.md b/CHANGELOG.md
index 9b2992626e..3c0259a54d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -22,16 +22,14 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Added Console.out
- Added Progress.reset
- Added Style.background_style property
-
-### Added
-
-- Added solid block bar class
+- Added Bar renderable https://github.com/willmcgugan/rich/pull/361
### Changed
- Dropped box.get_safe_box function in favor of Box.substitute
- Changed default padding in Panel from 0 to (0, 1) https://github.com/willmcgugan/rich/issues/385
- Table with row_styles will extend background color between cells if the box has no vertical divider https://github.com/willmcgugan/rich/issues/383
+- Renamed rich.bar to rich.progress_bar, and Bar class to ProgressBar, rich.bar is now the new solid bar class
### Fixed
diff --git a/docs/source/reference.rst b/docs/source/reference.rst
index 156793a2e7..2fd7bec140 100644
--- a/docs/source/reference.rst
+++ b/docs/source/reference.rst
@@ -19,6 +19,7 @@ Reference
reference/padding.rst
reference/panel.rst
reference/progress.rst
+ reference/progress_bar.rst
reference/prompt.rst
reference/rule.rst
reference/segment.rst
diff --git a/docs/source/reference/progress_bar.rst b/docs/source/reference/progress_bar.rst
new file mode 100644
index 0000000000..6e5a201f4a
--- /dev/null
+++ b/docs/source/reference/progress_bar.rst
@@ -0,0 +1,7 @@
+rich.progress_bar
+=================
+
+.. automodule:: rich.progress_bar
+ :members:
+
+
| [
{
"components": [
{
"doc": "Create a truecolor from three color components in the range(0->255).\n\nArgs:\n red (float): Red component.\n green (float): Green component.\n blue (float): Blue component.\n\nReturns:\n Color: A new color object.",
"lines": [
343,
... | [
"tests/test_bar.py::test_init",
"tests/test_bar.py::test_update",
"tests/test_bar.py::test_render",
"tests/test_bar.py::test_measure",
"tests/test_bar.py::test_zero_total",
"tests/test_bar.py::test_pulse",
"tests/test_bar.py::test_get_pulse_segments",
"tests/test_block_bar.py::test_repr",
"tests/tes... | [] | This is a feature request which requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Bar
A refactor of bar.py.
rich.bar.Bar is now rich.progress_bar.ProgressBar to better reflect its use.
There is a new rich.bar.Bar which renders a simple solid bar.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in rich/color.py]
(definition of Color.from_rgb:)
def from_rgb(cls, red: float, green: float, blue: float) -> "Color":
"""Create a truecolor from three color components in the range(0->255).
Args:
red (float): Red component.
green (float): Green component.
blue (float): Blue component.
Returns:
Color: A new color object."""
[end of new definitions in rich/color.py]
[start of new definitions in rich/progress_bar.py]
(definition of ProgressBar:)
class ProgressBar(JupyterMixin):
"""Renders a (progress) bar. Used by rich.progress.
Args:
total (float, optional): Number of steps in the bar. Defaults to 100.
completed (float, optional): Number of steps completed. Defaults to 0.
width (int, optional): Width of the bar, or ``None`` for maximum width. Defaults to None.
pulse (bool, optional): Enable pulse effect. Defaults to False.
style (StyleType, optional): Style for the bar background. Defaults to "bar.back".
complete_style (StyleType, optional): Style for the completed bar. Defaults to "bar.complete".
finished_style (StyleType, optional): Style for a finished bar. Defaults to "bar.done".
pulse_style (StyleType, optional): Style for pulsing bars. Defaults to "bar.pulse".
animation_time (Optional[float], optional): Time in seconds to use for animation, or None to use system time."""
(definition of ProgressBar.__init__:)
def __init__( self, total: float = 100, completed: float = 0, width: int = None, pulse: bool = False, style: StyleType = "bar.back", complete_style: StyleType = "bar.complete", finished_style: StyleType = "bar.finished", pulse_style: StyleType = "bar.pulse", animation_time: float = None, ):
(definition of ProgressBar.__repr__:)
def __repr__(self) -> str:
(definition of ProgressBar.percentage_completed:)
def percentage_completed(self) -> float:
"""Calculate percentage complete."""
(definition of ProgressBar._get_pulse_segments:)
def _get_pulse_segments( self, fore_style: Style, back_style: Style, color_system: str, ascii: bool = False, ) -> List[Segment]:
"""Get a list of segments to render a pulse animation.
Returns:
List[Segment]: A list of segments, one segment per character."""
(definition of ProgressBar.update:)
def update(self, completed: float, total: float = None) -> None:
"""Update progress with new values.
Args:
completed (float): Number of steps completed.
total (float, optional): Total number of steps, or ``None`` to not change. Defaults to None."""
(definition of ProgressBar._render_pulse:)
def _render_pulse( self, console: Console, width: int, ascii: bool = False ) -> Iterable[Segment]:
"""Renders the pulse animation.
Args:
console (Console): Console instance.
width (int): Width in characters of pulse animation.
Returns:
RenderResult: [description]
Yields:
Iterator[Segment]: Segments to render pulse"""
(definition of ProgressBar.__rich_console__:)
def __rich_console__( self, console: Console, options: ConsoleOptions ) -> RenderResult:
(definition of ProgressBar.__rich_measure__:)
def __rich_measure__(self, console: Console, max_width: int) -> Measurement:
[end of new definitions in rich/progress_bar.py]
</definitions>
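The `ProgressBar` definitions above leave the completed/total arithmetic implicit. A minimal standalone sketch of just that bookkeeping — all rendering and pulse logic is omitted, and the zero-total return value is an assumption (the tests only require that `test_zero_total` not raise a division error):

```python
# Standalone sketch of ProgressBar's completed/total bookkeeping.
class ProgressBarSketch:
    def __init__(self, total: float = 100, completed: float = 0) -> None:
        self.total = total
        self.completed = completed

    @property
    def percentage_completed(self) -> float:
        """Calculate percentage complete, guarding against total == 0."""
        if self.total == 0:
            return 100.0  # assumption: an empty bar counts as finished
        return min(100.0, max(0.0, self.completed / self.total * 100.0))

    def update(self, completed: float, total: float = None) -> None:
        """Update progress with new values (total unchanged if None)."""
        self.completed = completed
        if total is not None:
            self.total = total


bar = ProgressBarSketch(completed=50)
print(bar.percentage_completed)  # -> 50.0
```

The `update(10, 20)` call pattern mirrors `tests/test_bar.py::test_update` in the test patch above.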
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
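`Color.from_rgb` is essentially hex formatting over a colour triplet. A minimal standalone sketch — the namedtuple stand-in and the `(name, triplet)` return shape are simplifications; rich's real method returns a `Color` object tagged `ColorType.TRUECOLOR`:

```python
from collections import namedtuple

# Simplified stand-in for rich.color_triplet.ColorTriplet (assumption).
ColorTriplet = namedtuple("ColorTriplet", ["red", "green", "blue"])


def from_rgb(red: float, green: float, blue: float) -> tuple:
    """Create a truecolor descriptor from components in range(0->255)."""
    triplet = ColorTriplet(int(red), int(green), int(blue))
    # Hex name matches the test expectation Color("#102030", ...).
    name = f"#{triplet.red:02x}{triplet.green:02x}{triplet.blue:02x}"
    return name, triplet


print(from_rgb(0x10, 0x20, 0x30)[0])  # -> #102030
```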
<<END>> | b0661de34bab35af9b4b1d3ba8e28b186b225e84 | |
pre-commit__pre-commit-1643 | 1,643 | pre-commit/pre-commit | null | 6ba50f3aa7adfc215d00862bbc74613534b2026b | 2020-10-14T17:05:22Z | diff --git a/pre_commit/languages/pygrep.py b/pre_commit/languages/pygrep.py
index 40adba0f7..c80d6794b 100644
--- a/pre_commit/languages/pygrep.py
+++ b/pre_commit/languages/pygrep.py
@@ -1,6 +1,7 @@
import argparse
import re
import sys
+from typing import NamedTuple
from typing import Optional
from typing import Pattern
from typing import Sequence
@@ -45,6 +46,46 @@ def _process_filename_at_once(pattern: Pattern[bytes], filename: str) -> int:
return retv
+def _process_filename_by_line_negated(
+ pattern: Pattern[bytes],
+ filename: str,
+) -> int:
+ with open(filename, 'rb') as f:
+ for line in f:
+ if pattern.search(line):
+ return 0
+ else:
+ output.write_line(filename)
+ return 1
+
+
+def _process_filename_at_once_negated(
+ pattern: Pattern[bytes],
+ filename: str,
+) -> int:
+ with open(filename, 'rb') as f:
+ contents = f.read()
+ match = pattern.search(contents)
+ if match:
+ return 0
+ else:
+ output.write_line(filename)
+ return 1
+
+
+class Choice(NamedTuple):
+ multiline: bool
+ negate: bool
+
+
+FNS = {
+ Choice(multiline=True, negate=True): _process_filename_at_once_negated,
+ Choice(multiline=True, negate=False): _process_filename_at_once,
+ Choice(multiline=False, negate=True): _process_filename_by_line_negated,
+ Choice(multiline=False, negate=False): _process_filename_by_line,
+}
+
+
def run_hook(
hook: Hook,
file_args: Sequence[str],
@@ -64,6 +105,7 @@ def main(argv: Optional[Sequence[str]] = None) -> int:
)
parser.add_argument('-i', '--ignore-case', action='store_true')
parser.add_argument('--multiline', action='store_true')
+ parser.add_argument('--negate', action='store_true')
parser.add_argument('pattern', help='python regex pattern.')
parser.add_argument('filenames', nargs='*')
args = parser.parse_args(argv)
@@ -75,11 +117,9 @@ def main(argv: Optional[Sequence[str]] = None) -> int:
pattern = re.compile(args.pattern.encode(), flags)
retv = 0
+ process_fn = FNS[Choice(multiline=args.multiline, negate=args.negate)]
for filename in args.filenames:
- if args.multiline:
- retv |= _process_filename_at_once(pattern, filename)
- else:
- retv |= _process_filename_by_line(pattern, filename)
+ retv |= process_fn(pattern, filename)
return retv
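The `FNS` table in this patch swaps an if/else chain for a lookup keyed on a frozen `(multiline, negate)` pair. The same pattern in isolation — the strategy bodies here are placeholders, not pre-commit's real scanners:

```python
from typing import Callable, Dict, NamedTuple


class Choice(NamedTuple):
    multiline: bool
    negate: bool


def by_line(filename: str) -> str:
    return f"scan {filename} line by line"


def at_once(filename: str) -> str:
    return f"scan {filename} as one blob"


# NamedTuple instances hash and compare by value, so they work as dict keys.
FNS: Dict[Choice, Callable[[str], str]] = {
    Choice(multiline=False, negate=False): by_line,
    Choice(multiline=True, negate=False): at_once,
}

process_fn = FNS[Choice(multiline=True, negate=False)]
print(process_fn("f1"))  # -> scan f1 as one blob
```

Adding another boolean axis later only grows the key, not the branching.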
| diff --git a/tests/languages/pygrep_test.py b/tests/languages/pygrep_test.py
index 6eef56b7a..d8bacc484 100644
--- a/tests/languages/pygrep_test.py
+++ b/tests/languages/pygrep_test.py
@@ -8,6 +8,9 @@ def some_files(tmpdir):
tmpdir.join('f1').write_binary(b'foo\nbar\n')
tmpdir.join('f2').write_binary(b'[INFO] hi\n')
tmpdir.join('f3').write_binary(b"with'quotes\n")
+ tmpdir.join('f4').write_binary(b'foo\npattern\nbar\n')
+ tmpdir.join('f5').write_binary(b'[INFO] hi\npattern\nbar')
+ tmpdir.join('f6').write_binary(b"pattern\nbarwith'foo\n")
with tmpdir.as_cwd():
yield
@@ -30,6 +33,58 @@ def test_main(cap_out, pattern, expected_retcode, expected_out):
assert out == expected_out
+@pytest.mark.usefixtures('some_files')
+def test_negate_by_line_no_match(cap_out):
+ ret = pygrep.main(('pattern\nbar', 'f4', 'f5', 'f6', '--negate'))
+ out = cap_out.get()
+ assert ret == 1
+ assert out == 'f4\nf5\nf6\n'
+
+
+@pytest.mark.usefixtures('some_files')
+def test_negate_by_line_two_match(cap_out):
+ ret = pygrep.main(('foo', 'f4', 'f5', 'f6', '--negate'))
+ out = cap_out.get()
+ assert ret == 1
+ assert out == 'f5\n'
+
+
+@pytest.mark.usefixtures('some_files')
+def test_negate_by_line_all_match(cap_out):
+ ret = pygrep.main(('pattern', 'f4', 'f5', 'f6', '--negate'))
+ out = cap_out.get()
+ assert ret == 0
+ assert out == ''
+
+
+@pytest.mark.usefixtures('some_files')
+def test_negate_by_file_no_match(cap_out):
+ ret = pygrep.main(('baz', 'f4', 'f5', 'f6', '--negate', '--multiline'))
+ out = cap_out.get()
+ assert ret == 1
+ assert out == 'f4\nf5\nf6\n'
+
+
+@pytest.mark.usefixtures('some_files')
+def test_negate_by_file_one_match(cap_out):
+ ret = pygrep.main(
+ ('foo\npattern', 'f4', 'f5', 'f6', '--negate', '--multiline'),
+ )
+ out = cap_out.get()
+ assert ret == 1
+ assert out == 'f5\nf6\n'
+
+
+@pytest.mark.usefixtures('some_files')
+def test_negate_by_file_all_match(cap_out):
+ ret = pygrep.main(
+ ('pattern\nbar', 'f4', 'f5', 'f6', '--negate', '--multiline'),
+ )
+ out = cap_out.get()
+ assert ret == 0
+ assert out == ''
+
+
@pytest.mark.usefixtures('some_files')
def test_ignore_case(cap_out):
ret = pygrep.main(('--ignore-case', 'info', 'f1', 'f2', 'f3'))
| [
{
"components": [
{
"doc": "",
"lines": [
49,
59
],
"name": "_process_filename_by_line_negated",
"signature": "def _process_filename_by_line_negated( pattern: Pattern[bytes], filename: str, ) -> int:",
"type": "function"
},
{
... | [
"tests/languages/pygrep_test.py::test_negate_by_line_no_match",
"tests/languages/pygrep_test.py::test_negate_by_line_two_match",
"tests/languages/pygrep_test.py::test_negate_by_line_all_match",
"tests/languages/pygrep_test.py::test_negate_by_file_no_match",
"tests/languages/pygrep_test.py::test_negate_by_fi... | [
"tests/languages/pygrep_test.py::test_main[baz-0-]",
"tests/languages/pygrep_test.py::test_main[foo-1-f1:1:foo\\n]",
"tests/languages/pygrep_test.py::test_main[bar-1-f1:2:bar\\n]",
"tests/languages/pygrep_test.py::test_main[(?i)\\\\[info\\\\]-1-f2:1:[INFO]",
"tests/languages/pygrep_test.py::test_main[h'q-1-... | This is a feature request which requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
ENH add --negate flag to pygrep
xref this comment on StackOverflow: https://stackoverflow.com/a/64344519/9988333
> A PR may be considered if you can demonstrate a real use case for it
A use-case comes from `PyMC3`, where there is a pre-commit hook to check that all notebooks have a watermark. Currently, this is done by defining
```
- repo: local
hooks:
- id: watermark
name: Check notebooks have watermark
types: [jupyter]
entry: python scripts/check_watermark.py
language: python
```
in https://github.com/pymc-devs/pymc3/blob/master/.pre-commit-config.yaml, and then
```python
"""
Check that given Jupyter notebooks all contain a final watermark cell to facilitate reproducibility.
This is intended to be used as a pre-commit hook, see `.pre-commit-config.yaml`.
You can run it manually with `pre-commit run watermark --all`.
"""
import argparse
from pathlib import Path
import re
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("filenames", nargs="*")
args = parser.parse_args()
for file_ in args.filenames:
assert (
re.search(
r"%load_ext watermark.*%watermark -n -u -v -iv -w",
Path(file_).read_text(),
flags=re.DOTALL,
)
is not None
), (
f"Watermark not found in {file_} - please see the PyMC3 Jupyter Notebook Style guide:\n"
"https://github.com/pymc-devs/pymc3/wiki/PyMC's-Jupyter-Notebook-Style"
)
```
in https://github.com/pymc-devs/pymc3/blob/master/scripts/check_watermark.py .
----
I think the output should be a file which does _not_ contain the pattern...I'll work on this tomorrow
----------
Marking as ready for review as I'd like to think this is enough to tell whether it's the right direction / whether the feature would be welcome. Thanks for the 'don't put logic in tests' link!
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in pre_commit/languages/pygrep.py]
(definition of _process_filename_by_line_negated:)
def _process_filename_by_line_negated( pattern: Pattern[bytes], filename: str, ) -> int:
(definition of _process_filename_at_once_negated:)
def _process_filename_at_once_negated( pattern: Pattern[bytes], filename: str, ) -> int:
(definition of Choice:)
class Choice(NamedTuple):
[end of new definitions in pre_commit/languages/pygrep.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
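The negated strategies defined above can be sketched independently of pre-commit's plumbing — `print()` stands in for its `output.write_line` helper, and the watermark pattern below is just the use case from the request:

```python
import re
import tempfile


def process_filename_by_line_negated(pattern, filename):
    """Return 0 if any line matches; otherwise report the file and fail."""
    with open(filename, "rb") as f:
        for line in f:
            if pattern.search(line):
                return 0
    print(filename)
    return 1


def process_filename_at_once_negated(pattern, filename):
    """Return 0 if the pattern matches anywhere in the file; otherwise fail."""
    with open(filename, "rb") as f:
        contents = f.read()
    if pattern.search(contents):
        return 0
    print(filename)
    return 1


# Watermark-style check: this file lacks the marker, so the negated grep
# reports it (prints the temp file's name, then returns 1).
pattern = re.compile(rb"%load_ext watermark")
with tempfile.NamedTemporaryFile("wb", delete=False) as tmp:
    tmp.write(b"some cells\nbut no watermark\n")
print(process_filename_at_once_negated(pattern, tmp.name))
```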
<<END>> | ceb429b25380dacee34ba6997e5dd3d5f09b811a | ||
Textualize__rich-382 | 382 | Textualize/rich | null | f98cd319650f4dd6263a757029dfc88244f30f64 | 2020-10-13T15:57:18Z | diff --git a/.gitignore b/.gitignore
index 233804adff..bc0dfdb715 100644
--- a/.gitignore
+++ b/.gitignore
@@ -6,6 +6,7 @@ mypy_report
docs/build
docs/source/_build
tools/*.txt
+playground/
# Byte-compiled / optimized / DLL files
__pycache__/
diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index de322cbf7e..57530923cb 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -7,3 +7,4 @@ The following people have contributed to the development of Rich:
- [Oleksis Fraga](https://github.com/oleksis)
- [Hedy Li](https://github.com/hedythedev)
- [Will McGugan](https://github.com/willmcgugan)
+- [Nathan Page](https://github.com/nathanrpage97)
diff --git a/docs/source/index.rst b/docs/source/index.rst
index d7cf613426..f40541a2f3 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -19,16 +19,17 @@ Welcome to Rich's documentation!
logging.rst
traceback.rst
prompt.rst
-
+
tables.rst
padding.rst
- panel.rst
- group.rst
+ panel.rst
+ group.rst
columns.rst
+ live.rst
progress.rst
markdown.rst
- syntax.rst
-
+ syntax.rst
+
protocol.rst
reference.rst
diff --git a/docs/source/live.rst b/docs/source/live.rst
new file mode 100644
index 0000000000..f1d6bb4aa7
--- /dev/null
+++ b/docs/source/live.rst
@@ -0,0 +1,172 @@
+.. _live:
+
+Live Display
+============
+
+Rich can display continuously updated information for any renderable.
+
+To see some live display examples, try this from the command line::
+
+ python -m rich.live
+
+.. note::
+
+ If you see an ellipsis ("..."), this indicates that the terminal is not tall enough to show the full table.
+
+Basic Usage
+-----------
+
+The basic usage can be split into two use cases.
+
+1. Same Renderable
+~~~~~~~~~~~~~~~~~~
+
+When keeping the same renderable, you simply pass the :class:`~rich.console.RenderableType` you would like to see updating and provide
+a ``refresh_per_second`` parameter. The :class:`~rich.live.Live` instance will automatically update the console at the provided refresh rate.
+
+
+**Example**::
+
+ import time
+
+ from rich.live import Live
+ from rich.table import Table
+
+ table = Table()
+ table.add_column("Row ID")
+ table.add_column("Description")
+ table.add_column("Level")
+
+ with Live(table, refresh_per_second=4): # update 4 times a second to feel fluid
+ for row in range(12):
+ time.sleep(0.4) # arbitrary delay
+ # update the renderable internally
+ table.add_row(f"{row}", f"description {row}", "[red]ERROR")
+
+
+2. New Renderable
+~~~~~~~~~~~~~~~~~
+
+You can also repeatedly provide a new renderable to :class:`~rich.live.Live` using the :meth:`~rich.live.Live.update` method. This allows you to
+completely change what is rendered live.
+
+**Example**::
+
+ import random
+ import time
+
+ from rich.live import Live
+ from rich.table import Table
+
+
+ def generate_table() -> Table:
+
+ table = Table()
+ table.add_column("ID")
+ table.add_column("Value")
+ table.add_column("Status")
+
+ for row in range(random.randint(2, 6)):
+ value = random.random() * 100
+ table.add_row(
+ f"{row}", f"{value:3.2f}", "[red]ERROR" if value < 50 else "[green]SUCCESS"
+ )
+ return table
+
+
+ with Live(refresh_per_second=4) as live:
+ for _ in range(40):
+ time.sleep(0.4)
+ live.update(generate_table())
+
+Advanced Usage
+--------------
+
+Transient Display
+~~~~~~~~~~~~~~~~~
+
+Normally, when you exit the live context manager (or call :meth:`~rich.live.Live.stop`) the last refreshed item remains in the terminal with the cursor on the following line.
+You can also make the live display disappear on exit by setting ``transient=True`` on the Live constructor. Here's an example::
+
+ with Live(transient=True) as live:
+ ...
+
+Auto refresh
+~~~~~~~~~~~~
+
+By default, the live display will refresh 4 times a second. You can set the refresh rate with the ``refresh_per_second`` argument on the :class:`~rich.live.Live` constructor.
+You should set this to something lower than 4 if you know your updates will not be that frequent or higher for a smoother feeling.
+
+You might want to disable auto-refresh entirely if your updates are not very frequent, which you can do by setting ``auto_refresh=False`` on the constructor.
+If you disable auto-refresh you will need to call :meth:`~rich.live.Live.refresh` manually or :meth:`~rich.live.Live.update` with ``refresh=True``.
+
+Vertical Overflow
+~~~~~~~~~~~~~~~~~
+
+By default, the live display will show an ellipsis if the renderable is too large for the terminal. You can adjust this by setting the
+``vertical_overflow`` argument on the :class:`~rich.live.Live` constructor.
+
+- crop: Show renderable up to the terminal height. The rest is hidden.
+- ellipsis: Similar to crop except last line of the terminal is replaced with "...". This is the default behavior.
+- visible: Will allow the whole renderable to be shown. Note that the display cannot be properly cleared in this mode.
+
+.. note::
+
+ Once the live display stops on a non-transient renderable, the last frame will render as **visible** since it doesn't have to be cleared.
+
+Complex Renders
+~~~~~~~~~~~~~~~
+
+Refer to :ref:`Render Groups` for how to combine multiple :class:`RenderableType` objects, so that they may be passed into the :class:`~rich.live.Live` constructor
+or :meth:`~rich.live.Live.update` method.
+
+For more powerful structuring it is also possible to use nested tables.
+
+
+Print / log
+~~~~~~~~~~~
+
+The Live class will create an internal Console object which you can access via ``live.console``. If you print or log to this console, the output will be displayed *above* the live display. Here's an example::
+
+ import time
+
+ from rich.live import Live
+ from rich.table import Table
+
+ table = Table()
+ table.add_column("Row ID")
+ table.add_column("Description")
+ table.add_column("Level")
+
+ with Live(table, refresh_per_second=4) as live: # update 4 times a second to feel fluid
+ for row in range(12):
+ live.console.print(f"Working on row #{row}")
+ time.sleep(0.4)
+ table.add_row(f"{row}", f"description {row}", "[red]ERROR")
+
+
+If you have another Console object you want to use, pass it in to the :class:`~rich.live.Live` constructor. Here's an example::
+
+ from my_project import my_console
+
+ with Live(console=my_console) as live:
+ my_console.print("[bold blue]Starting work!")
+ ...
+
+.. note::
+
+ If you are passing in a file console, the live display will only show the last item once the live context is exited.
+
+Redirecting stdout / stderr
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To avoid breaking the live display visuals, Rich will redirect ``stdout`` and ``stderr`` so that you can use the builtin ``print`` statement.
+This feature is enabled by default, but you can disable it by setting ``redirect_stdout`` or ``redirect_stderr`` to ``False``.
+
+
+Examples
+--------
+
+See `table_movie.py <https://github.com/willmcgugan/rich/blob/master/examples/table_movie.py>`_ and
+`top_lite_simulator.py <https://github.com/willmcgugan/rich/blob/master/examples/top_lite_simulator.py>`_
+for deeper examples of live displaying.
diff --git a/docs/source/reference.rst b/docs/source/reference.rst
index fe87ad5c84..a4a324555e 100644
--- a/docs/source/reference.rst
+++ b/docs/source/reference.rst
@@ -3,15 +3,16 @@ Reference
.. toctree::
:maxdepth: 3
-
+
reference/align.rst
reference/bar.rst
reference/color.rst
reference/columns.rst
reference/console.rst
reference/emoji.rst
- reference/highlighter.rst
+ reference/highlighter.rst
reference/init.rst
+ reference/live.rst
reference/logging.rst
reference/markdown.rst
reference/markup.rst
diff --git a/docs/source/reference/live.rst b/docs/source/reference/live.rst
new file mode 100644
index 0000000000..516ed8f24e
--- /dev/null
+++ b/docs/source/reference/live.rst
@@ -0,0 +1,5 @@
+rich.live
+=========
+
+.. automodule:: rich.live
+ :members:
\ No newline at end of file
diff --git a/examples/table_movie.py b/examples/table_movie.py
index fe9c753bbd..627408e464 100644
--- a/examples/table_movie.py
+++ b/examples/table_movie.py
@@ -1,12 +1,16 @@
+"""Same as the table_movie.py but uses Live to update"""
from contextlib import contextmanager
import time
from rich.console import Console
+from rich.columns import Columns
from rich.table import Table
from rich.measure import Measurement
from rich import box
from rich.text import Text
+from rich.live import Live
+
TABLE_DATA = [
[
"May 25, 1977",
@@ -60,136 +64,110 @@
@contextmanager
def beat(length: int = 1) -> None:
with console:
- console.clear()
yield
time.sleep(length * BEAT_TIME)
table = Table(show_footer=False)
+table_centered = Columns((table,), align="center", expand=True)
console.clear()
-console.show_cursor(False)
-try:
- table.add_column("Release Date", no_wrap=True)
+with Live(
+ table_centered, console=console, refresh_per_second=10, vertical_overflow="ellipsis"
+):
+
with beat(10):
- console.print(table, justify="center")
+ table.add_column("Release Date", no_wrap=True)
- table.add_column("Title", Text.from_markup("[b]Total", justify="right"))
with beat(10):
- console.print(table, justify="center")
+ table.add_column("Title", Text.from_markup("[b]Total", justify="right"))
- table.add_column("Budget", "[u]$412,000,000", no_wrap=True)
with beat(10):
- console.print(table, justify="center")
+ table.add_column("Budget", "[u]$412,000,000", no_wrap=True)
- table.add_column("Opening Weekend", "[u]$577,703,455", no_wrap=True)
with beat(10):
- console.print(table, justify="center")
+ table.add_column("Opening Weekend", "[u]$577,703,455", no_wrap=True)
- table.add_column("Box Office", "[u]$4,331,212,357", no_wrap=True)
with beat(10):
- console.print(table, justify="center")
+ table.add_column("Box Office", "[u]$4,331,212,357", no_wrap=True)
- table.title = "Star Wars Box Office"
with beat(10):
- console.print(table, justify="center")
+ table.title = "Star Wars Box Office"
- table.title = (
- "[not italic]:popcorn:[/] Star Wars Box Office [not italic]:popcorn:[/]"
- )
with beat(10):
- console.print(table, justify="center")
+ table.title = (
+ "[not italic]:popcorn:[/] Star Wars Box Office [not italic]:popcorn:[/]"
+ )
- table.caption = "Made with Rich"
with beat(10):
- console.print(table, justify="center")
+ table.caption = "Made with Rich"
- table.caption = "Made with [b]Rich[/b]"
with beat(10):
- console.print(table, justify="center")
+ table.caption = "Made with [b]Rich[/b]"
- table.caption = "Made with [b magenta not dim]Rich[/]"
with beat(10):
- console.print(table, justify="center")
+ table.caption = "Made with [b magenta not dim]Rich[/]"
for row in TABLE_DATA:
- table.add_row(*row)
with beat(10):
- console.print(table, justify="center")
+ table.add_row(*row)
- table.show_footer = True
with beat(10):
- console.print(table, justify="center")
+ table.show_footer = True
table_width = Measurement.get(console, table, console.width).maximum
- table.columns[2].justify = "right"
with beat(10):
- console.print(table, justify="center")
+ table.columns[2].justify = "right"
- table.columns[3].justify = "right"
with beat(10):
- console.print(table, justify="center")
+ table.columns[3].justify = "right"
- table.columns[4].justify = "right"
with beat(10):
- console.print(table, justify="center")
+ table.columns[4].justify = "right"
- table.columns[2].header_style = "bold red"
with beat(10):
- console.print(table, justify="center")
+ table.columns[2].header_style = "bold red"
- table.columns[3].header_style = "bold green"
with beat(10):
- console.print(table, justify="center")
+ table.columns[3].header_style = "bold green"
- table.columns[4].header_style = "bold blue"
with beat(10):
- console.print(table, justify="center")
+ table.columns[4].header_style = "bold blue"
- table.columns[2].style = "red"
with beat(10):
- console.print(table, justify="center")
+ table.columns[2].style = "red"
- table.columns[3].style = "green"
with beat(10):
- console.print(table, justify="center")
+ table.columns[3].style = "green"
- table.columns[4].style = "blue"
with beat(10):
- console.print(table, justify="center")
+ table.columns[4].style = "blue"
- table.columns[0].style = "cyan"
- table.columns[0].header_style = "bold cyan"
with beat(10):
- console.print(table, justify="center")
+ table.columns[0].style = "cyan"
+ table.columns[0].header_style = "bold cyan"
- table.columns[1].style = "magenta"
- table.columns[1].header_style = "bold magenta"
with beat(10):
- console.print(table, justify="center")
+ table.columns[1].style = "magenta"
+ table.columns[1].header_style = "bold magenta"
- table.columns[2].footer_style = "bright_red"
with beat(10):
- console.print(table, justify="center")
+ table.columns[2].footer_style = "bright_red"
- table.columns[3].footer_style = "bright_green"
with beat(10):
- console.print(table, justify="center")
+ table.columns[3].footer_style = "bright_green"
- table.columns[4].footer_style = "bright_blue"
with beat(10):
- console.print(table, justify="center")
+ table.columns[4].footer_style = "bright_blue"
- table.row_styles = ["none", "dim"]
with beat(10):
- console.print(table, justify="center")
+ table.row_styles = ["none", "dim"]
- table.border_style = "bright_yellow"
with beat(10):
- console.print(table, justify="center")
+ table.border_style = "bright_yellow"
for box in [
box.SQUARE,
@@ -197,39 +175,29 @@ def beat(length: int = 1) -> None:
box.SIMPLE,
box.SIMPLE_HEAD,
]:
- table.box = box
with beat(10):
- console.print(table, justify="center")
+ table.box = box
- table.pad_edge = False
with beat(10):
- console.print(table, justify="center")
+ table.pad_edge = False
original_width = Measurement.get(console, table).maximum
for width in range(original_width, console.width, 2):
- table.width = width
with beat(2):
- console.print(table, justify="center")
+ table.width = width
for width in range(console.width, original_width, -2):
- table.width = width
with beat(2):
- console.print(table, justify="center")
+ table.width = width
for width in range(original_width, 90, -2):
- table.width = width
with beat(2):
- console.print(table, justify="center")
+ table.width = width
for width in range(90, original_width + 1, 2):
- table.width = width
with beat(2):
- console.print(table, justify="center")
+ table.width = width
- table.width = None
with beat(2):
- console.print(table, justify="center")
-
-finally:
- console.show_cursor(True)
+ table.width = None
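The rewritten `table_movie.py` above works because `Live` refreshes on a background thread: each `with beat(...)` block mutates the shared table, then sleeps long enough for the next refresh to pick the change up. A minimal stdlib sketch of that pacing pattern, with no rich dependency (the `BEAT_TIME` constant and step names are illustrative assumptions):

```python
import time
from contextlib import contextmanager

BEAT_TIME = 0.01  # assumed pacing constant, as in the example script


@contextmanager
def beat(length: int = 1):
    """Run the wrapped mutation, then pause so a live display can catch up."""
    yield
    time.sleep(length * BEAT_TIME)


steps = []
with beat(2):
    steps.append("add column")  # mutate the shared renderable here
with beat(1):
    steps.append("add row")
```

The key design point is that the mutation happens *inside* the block and the sleep happens *after* it, so every change is on screen for at least one beat before the next one lands.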
diff --git a/examples/top_lite_simulator.py b/examples/top_lite_simulator.py
new file mode 100644
index 0000000000..df605ee474
--- /dev/null
+++ b/examples/top_lite_simulator.py
@@ -0,0 +1,81 @@
+"""Lite simulation of the top linux command."""
+
+import datetime
+import random
+import time
+from dataclasses import dataclass
+
+from rich import box
+from rich.console import Console
+from rich.live import Live
+from rich.table import Table
+from typing_extensions import Literal
+
+
+@dataclass
+class Process:
+ pid: int
+ command: str
+ cpu_percent: float
+ memory: int
+ start_time: datetime.datetime
+ thread_count: int
+ state: Literal["running", "sleeping"]
+
+ @property
+ def memory_str(self) -> str:
+ if self.memory > 1e6:
+ return f"{int(self.memory/1e6)}M"
+ if self.memory > 1e3:
+ return f"{int(self.memory/1e3)}K"
+ return str(self.memory)
+
+ @property
+ def time_str(self) -> str:
+ return str(datetime.datetime.now() - self.start_time)
+
+
+def generate_process(pid: int) -> Process:
+ return Process(
+ pid=pid,
+ command=f"Process {pid}",
+ cpu_percent=random.random() * 20,
+ memory=random.randint(10, 200) ** 3,
+ start_time=datetime.datetime.now()
+ - datetime.timedelta(seconds=random.randint(0, 500) ** 2),
+ thread_count=random.randint(1, 32),
+ state="running" if random.randint(0, 10) < 8 else "sleeping",
+ )
+
+
+def create_process_table(height: int) -> Table:
+
+ processes = sorted(
+ [generate_process(pid) for pid in range(height)],
+ key=lambda p: p.cpu_percent,
+ reverse=True,
+ )
+ table = Table(
+ "PID", "Command", "CPU %", "Memory", "Time", "Thread #", "State", box=box.SIMPLE
+ )
+
+ for process in processes:
+ table.add_row(
+ str(process.pid),
+ process.command,
+ f"{process.cpu_percent:.1f}",
+ process.memory_str,
+ process.time_str,
+ str(process.thread_count),
+ process.state,
+ )
+
+ return table
+
+
+console = Console()
+
+with Live(console=console, transient=True, refresh_per_second=10) as live:
+ while True:
+ live.update(create_process_table(console.size.height - 4))
+ time.sleep(1.5)
diff --git a/rich/live.py b/rich/live.py
new file mode 100644
index 0000000000..17c69b81f0
--- /dev/null
+++ b/rich/live.py
@@ -0,0 +1,376 @@
+import sys
+from threading import Event, RLock, Thread
+from typing import IO, Any, List, Optional, Tuple
+
+from typing_extensions import Literal
+
+from .__init__ import get_console
+from ._loop import loop_last
+from .console import (
+ Console,
+ ConsoleOptions,
+ ConsoleRenderable,
+ RenderableType,
+ RenderHook,
+ RenderResult,
+)
+from .control import Control
+from .jupyter import JupyterMixin
+from .live_render import LiveRender
+from .progress import _FileProxy
+from .segment import Segment
+from .style import Style
+from .text import Text
+
+VerticalOverflowMethod = Literal["crop", "ellipsis", "visible"]
+
+
+class _RefreshThread(Thread):
+ """A thread that calls refresh() at regular intervals."""
+
+ def __init__(self, live: "Live", refresh_per_second: float) -> None:
+ self.live = live
+ self.refresh_per_second = refresh_per_second
+ self.done = Event()
+ super().__init__()
+
+ def stop(self) -> None:
+ self.done.set()
+
+ def run(self) -> None:
+ while not self.done.wait(1 / self.refresh_per_second):
+ with self.live._lock:
+ self.live.refresh()
+
+
+class _LiveRender(LiveRender):
+ def __init__(self, live: "Live", renderable: RenderableType) -> None:
+ self._live = live
+ self.renderable = renderable
+ self._shape: Optional[Tuple[int, int]] = None
+
+ def __rich_console__(
+ self, console: Console, options: ConsoleOptions
+ ) -> RenderResult:
+ with self._live._lock:
+ lines = console.render_lines(self.renderable, options, pad=False)
+
+ shape = Segment.get_shape(lines)
+ _, height = shape
+ if height > console.size.height:
+ if self._live.vertical_overflow == "crop":
+ lines = lines[: console.size.height]
+ shape = Segment.get_shape(lines)
+ elif self._live.vertical_overflow == "ellipsis":
+ lines = lines[: (console.size.height - 1)]
+ lines.append(
+ list(
+ console.render(
+ Text(
+ "...",
+ overflow="crop",
+ justify="center",
+ end="",
+ style=Style(bold=True),
+ )
+ )
+ )
+ )
+ shape = Segment.get_shape(lines)
+
+ self._shape = shape
+
+ for last, line in loop_last(lines):
+ yield from line
+ if not last:
+ yield Segment.line()
+
+
+class Live(JupyterMixin, RenderHook):
+ """Renders an auto-updating live display of any given renderable.
+
+ Args:
+        renderable (RenderableType, optional): The renderable to live display. Defaults to displaying nothing.
+        console (Console, optional): Optional Console instance. Defaults to an internal Console instance writing to stdout.
+        auto_refresh (bool, optional): Enable auto refresh. If disabled, you will need to call `refresh()` or `update()` with the refresh flag. Defaults to True.
+        refresh_per_second (float, optional): Number of times per second to refresh the live display. Defaults to 4.
+ transient (bool, optional): Clear the renderable on exit. Defaults to False.
+ redirect_stdout (bool, optional): Enable redirection of stdout, so ``print`` may be used. Defaults to True.
+ redirect_stderr (bool, optional): Enable redirection of stderr. Defaults to True.
+ vertical_overflow (VerticalOverflowMethod, optional): How to handle renderable when it is too tall for the console. Defaults to "ellipsis".
+ """
+
+ def __init__(
+ self,
+ renderable: RenderableType = "",
+ *,
+ console: Console = None,
+ auto_refresh: bool = True,
+ refresh_per_second: float = 4,
+ transient: bool = False,
+ redirect_stdout: bool = True,
+ redirect_stderr: bool = True,
+ vertical_overflow: VerticalOverflowMethod = "ellipsis",
+ ) -> None:
+ assert refresh_per_second > 0, "refresh_per_second must be > 0"
+ self.console = console if console is not None else get_console()
+ self._live_render = _LiveRender(self, renderable)
+
+ self._redirect_stdout = redirect_stdout
+ self._redirect_stderr = redirect_stderr
+ self._restore_stdout: Optional[IO[str]] = None
+ self._restore_stderr: Optional[IO[str]] = None
+
+ self._lock = RLock()
+ self.ipy_widget: Optional[Any] = None
+ self.auto_refresh = auto_refresh
+ self._started: bool = False
+ self.transient = transient
+
+ self._refresh_thread: Optional[_RefreshThread] = None
+ self.refresh_per_second = refresh_per_second
+
+ self.vertical_overflow = vertical_overflow
+        # can't store just clear_control as the live_render shape is lazily computed on render
+
+ def start(self) -> None:
+ """Start live rendering display."""
+ with self._lock:
+ if self._started:
+ return
+
+ self.console.show_cursor(False)
+ self._enable_redirect_io()
+ self.console.push_render_hook(self)
+ self._started = True
+
+ if self.auto_refresh:
+ self._refresh_thread = _RefreshThread(self, self.refresh_per_second)
+ self._refresh_thread.start()
+
+ def stop(self) -> None:
+ """Stop live rendering display."""
+ with self._lock:
+ if not self._started:
+ return
+ self._started = False
+ try:
+ if self.auto_refresh and self._refresh_thread is not None:
+ self._refresh_thread.stop()
+                # allow the final refresh to render fully, even if it overflows
+ self.vertical_overflow = "visible"
+ self.refresh()
+ if self.console.is_terminal:
+ self.console.line()
+ finally:
+ self._disable_redirect_io()
+ self.console.pop_render_hook()
+ self.console.show_cursor(True)
+
+ if self.transient:
+ self.console.control(self._live_render.restore_cursor())
+ if self.ipy_widget is not None and self.transient: # pragma: no cover
+ self.ipy_widget.clear_output()
+ self.ipy_widget.close()
+
+ def __enter__(self) -> "Live":
+ self.start()
+ return self
+
+ def __exit__(self, exc_type, exc_val, exc_tb) -> None:
+ self.stop()
+
+ def _enable_redirect_io(self):
+ """Enable redirecting of stdout / stderr."""
+ if self.console.is_terminal:
+ if self._redirect_stdout:
+ self._restore_stdout = sys.stdout
+ sys.stdout = _FileProxy(self.console, sys.stdout)
+ if self._redirect_stderr:
+ self._restore_stderr = sys.stderr
+                sys.stderr = _FileProxy(self.console, sys.stderr)
+
+ @property
+ def renderable(self) -> RenderableType:
+ """Get the renderable that is being displayed
+
+ Returns:
+ RenderableType: Displayed renderable.
+ """
+ with self._lock:
+ return self._live_render.renderable
+
+ def update(self, renderable: RenderableType, *, refresh: bool = False) -> None:
+ """Update the renderable that is being displayed
+
+ Args:
+ renderable (RenderableType): New renderable to use.
+ refresh (bool, optional): Refresh the display. Defaults to False.
+ """
+ with self._lock:
+ self._live_render.set_renderable(renderable)
+ if refresh:
+ self.refresh()
+
+ def refresh(self) -> None:
+ """Update the display of the Live Render."""
+ if self.console.is_jupyter: # pragma: no cover
+ try:
+ from IPython.display import display
+ from ipywidgets import Output
+ except ImportError:
+ import warnings
+
+ warnings.warn('install "ipywidgets" for Jupyter support')
+ else:
+ with self._lock:
+ if self.ipy_widget is None:
+ self.ipy_widget = Output()
+ display(self.ipy_widget)
+
+ with self.ipy_widget:
+ self.ipy_widget.clear_output(wait=True)
+ self.console.print(self._live_render.renderable)
+ elif self.console.is_terminal and not self.console.is_dumb_terminal:
+ with self._lock, self.console:
+ self.console.print(Control(""))
+ elif (
+ not self._started and not self.transient
+ ): # if it is finished allow files or dumb-terminals to see final result
+ with self.console:
+ self.console.print(Control(""))
+
+ def _disable_redirect_io(self):
+ """Disable redirecting of stdout / stderr."""
+ if self._restore_stdout:
+ sys.stdout = self._restore_stdout
+ self._restore_stdout = None
+ if self._restore_stderr:
+ sys.stderr = self._restore_stderr
+ self._restore_stderr = None
+
+ def process_renderables(
+ self, renderables: List[ConsoleRenderable]
+ ) -> List[ConsoleRenderable]:
+ """Process renderables to restore cursor and display progress."""
+ if self.console.is_terminal:
+            # the lock must be acquired, as the user can modify the live_render renderable at any time (unlike in Progress)
+ with self._lock:
+ # determine the control command needed to clear previous rendering
+ renderables = [
+ self._live_render.position_cursor(),
+ *renderables,
+ self._live_render,
+ ]
+ elif (
+ not self._started and not self.transient
+ ): # if it is finished render the final output for files or dumb_terminals
+ renderables = [*renderables, self._live_render]
+
+ return renderables
+
+
+if __name__ == "__main__": # pragma: no cover
+ import random
+ import time
+ from itertools import cycle
+ from typing import Dict, List, Tuple
+
+ from .console import Console
+ from .live import Live
+ from .panel import Panel
+ from .rule import Rule
+ from .syntax import Syntax
+ from .table import Table
+
+ console = Console()
+
+ syntax = Syntax(
+ '''def loop_last(values: Iterable[T]) -> Iterable[Tuple[bool, T]]:
+ """Iterate and generate a tuple with a flag for last value."""
+ iter_values = iter(values)
+ try:
+ previous_value = next(iter_values)
+ except StopIteration:
+ return
+ for value in iter_values:
+ yield False, previous_value
+ previous_value = value
+ yield True, previous_value''',
+ "python",
+ line_numbers=True,
+ )
+
+ table = Table("foo", "bar", "baz")
+ table.add_row("1", "2", "3")
+
+ progress_renderables = [
+        "You can make the terminal shorter and taller to see the live table hide.",
+ "Text may be printed while the progress bars are rendering.",
+ Panel("In fact, [i]any[/i] renderable will work"),
+ "Such as [magenta]tables[/]...",
+ table,
+ "Pretty printed structures...",
+ {"type": "example", "text": "Pretty printed"},
+ "Syntax...",
+ syntax,
+ Rule("Give it a try!"),
+ ]
+
+ examples = cycle(progress_renderables)
+
+ exchanges = [
+ "SGD",
+ "MYR",
+ "EUR",
+ "USD",
+ "AUD",
+ "JPY",
+ "CNH",
+ "HKD",
+ "CAD",
+ "INR",
+ "DKK",
+ "GBP",
+ "RUB",
+ "NZD",
+ "MXN",
+ "IDR",
+ "TWD",
+ "THB",
+ "VND",
+ ]
+ with Live(console=console) as live_table:
+ exchange_rate_dict: Dict[Tuple[str, str], float] = {}
+
+ for index in range(100):
+ select_exchange = exchanges[index % len(exchanges)]
+
+ for exchange in exchanges:
+ if exchange == select_exchange:
+ continue
+ time.sleep(0.4)
+ if random.randint(0, 10) < 1:
+ console.log(next(examples))
+ exchange_rate_dict[(select_exchange, exchange)] = 200 / (
+ (random.random() * 320) + 1
+ )
+ if len(exchange_rate_dict) > len(exchanges) - 1:
+ exchange_rate_dict.pop(list(exchange_rate_dict.keys())[0])
+ table = Table(title="Exchange Rates")
+
+ table.add_column("Source Currency")
+ table.add_column("Destination Currency")
+ table.add_column("Exchange Rate")
+
+        for ((source, dest), exchange_rate) in exchange_rate_dict.items():
+            table.add_row(
+                source,
+ dest,
+ Text(
+ f"{exchange_rate:.4f}",
+ style="red" if exchange_rate < 1.0 else "green",
+ ),
+ )
+
+ live_table.update(table)
diff --git a/tests/_card_render.py b/tests/_card_render.py
index 17783d030b..8f7f6279d1 100644
--- a/tests/_card_render.py
+++ b/tests/_card_render.py
@@ -1,1 +1,1 @@
-expected='\x1b[3m Rich features \x1b[0m\n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Colors \x1b[0m\x1b[1;31m \x1b[0mโ \x1b[1;32m4-bit color\x1b[0m \x1b[38;2;86;0;0;48;2;51;0;0mโ\x1b[0m\x1b[38;2;86;9;0;48;2;51;5;0mโ\x1b[0m\x1b[38;2;86;18;0;48;2;51;11;0mโ\x1b[0m\x1b[38;2;86;28;0;48;2;51;16;0mโ\x1b[0m\x1b[38;2;86;37;0;48;2;51;22;0mโ\x1b[0m\x1b[38;2;86;47;0;48;2;51;27;0mโ\x1b[0m\x1b[38;2;86;56;0;48;2;51;33;0mโ\x1b[0m\x1b[38;2;86;66;0;48;2;51;38;0mโ\x1b[0m\x1b[38;2;86;75;0;48;2;51;44;0mโ\x1b[0m\x1b[38;2;86;85;0;48;2;51;50;0mโ\x1b[0m\x1b[38;2;78;86;0;48;2;46;51;0mโ\x1b[0m\x1b[38;2;69;86;0;48;2;40;51;0mโ\x1b[0m\x1b[38;2;59;86;0;48;2;35;51;0mโ\x1b[0m\x1b[38;2;50;86;0;48;2;29;51;0mโ\x1b[0m\x1b[38;2;40;86;0;48;2;24;51;0mโ\x1b[0m\x1b[38;2;31;86;0;48;2;18;51;0mโ\x1b[0m\x1b[38;2;22;86;0;48;2;12;51;0mโ\x1b[0m\x1b[38;2;12;86;0;48;2;7;51;0mโ\x1b[0m\x1b[38;2;3;86;0;48;2;1;51;0mโ\x1b[0m\x1b[38;2;0;86;6;48;2;0;51;3mโ\x1b[0m\x1b[38;2;0;86;15;48;2;0;51;9mโ\x1b[0m\x1b[38;2;0;86;25;48;2;0;51;14mโ\x1b[0m\x1b[38;2;0;86;34;48;2;0;51;20mโ\x1b[0m\x1b[38;2;0;86;44;48;2;0;51;25mโ\x1b[0m\x1b[38;2;0;86;53;48;2;0;51;31mโ\x1b[0m\x1b[38;2;0;86;63;48;2;0;51;37mโ\x1b[0m\x1b[38;2;0;86;72;48;2;0;51;42mโ\x1b[0m\x1b[38;2;0;86;81;48;2;0;51;48mโ\x1b[0m\x1b[38;2;0;81;86;48;2;0;48;51mโ\x1b[0m\x1b[38;2;0;72;86;48;2;0;42;51mโ\x1b[0m\x1b[38;2;0;63;86;48;2;0;37;51mโ\x1b[0m\x1b[38;2;0;53;86;48;2;0;31;51mโ\x1b[0m\x1b[38;2;0;44;86;48;2;0;25;51mโ\x1b[0m\x1b[38;2;0;34;86;48;2;0;20;51mโ\x1b[0m\x1b[38;2;0;25;86;48;2;0;14;51mโ\x1b[0m\x1b[38;2;0;15;86;48;2;0;9;51mโ\x1b[0m\x1b[38;2;0;6;86;48;2;0;3;51mโ\x1b[0m\x1b[38;2;3;0;86;48;2;1;0;51mโ\x1b[0m\x1b[38;2;12;0;86;48;2;7;0;51mโ\x1b[0m\x1b[38;2;22;0;86;48;2;12;0;51mโ\x1b[0m\x1b[38;2;31;0;86;48;2;18;0;51mโ\x1b[0m\x1b[38;2;40;0;86;48;2;24;0;51mโ\x1b[0m\x1b[38;2;50;0;86;48;2;29;0;51mโ\x1b[0m\x1b[38;2;59;0;86;48;2;35;0;51mโ\x1b[0m\x1b[38;2;69;0;86;48;2;40;0;51mโ\x1b[0m\x1b[38;2;78;0;86;48;2;46;0;51mโ\x1b[0m\x1b[38;2;86;0;85;48;2;51;0;50mโ\x1b[0m\x1b[38;2;86;0;75;48;2;51;0;
44mโ\x1b[0m\x1b[38;2;86;0;66;48;2;51;0;38mโ\x1b[0m\x1b[38;2;86;0;56;48;2;51;0;33mโ\x1b[0m\x1b[38;2;86;0;47;48;2;51;0;27mโ\x1b[0m\x1b[38;2;86;0;37;48;2;51;0;22mโ\x1b[0m\x1b[38;2;86;0;28;48;2;51;0;16mโ\x1b[0m\x1b[38;2;86;0;18;48;2;51;0;11mโ\x1b[0m\x1b[38;2;86;0;9;48;2;51;0;5mโ\x1b[0m \n โ \x1b[1;34m8-bit color\x1b[0m \x1b[38;2;158;0;0;48;2;122;0;0mโ\x1b[0m\x1b[38;2;158;17;0;48;2;122;13;0mโ\x1b[0m\x1b[38;2;158;34;0;48;2;122;26;0mโ\x1b[0m\x1b[38;2;158;51;0;48;2;122;40;0mโ\x1b[0m\x1b[38;2;158;68;0;48;2;122;53;0mโ\x1b[0m\x1b[38;2;158;86;0;48;2;122;66;0mโ\x1b[0m\x1b[38;2;158;103;0;48;2;122;80;0mโ\x1b[0m\x1b[38;2;158;120;0;48;2;122;93;0mโ\x1b[0m\x1b[38;2;158;137;0;48;2;122;106;0mโ\x1b[0m\x1b[38;2;158;155;0;48;2;122;120;0mโ\x1b[0m\x1b[38;2;143;158;0;48;2;111;122;0mโ\x1b[0m\x1b[38;2;126;158;0;48;2;97;122;0mโ\x1b[0m\x1b[38;2;109;158;0;48;2;84;122;0mโ\x1b[0m\x1b[38;2;91;158;0;48;2;71;122;0mโ\x1b[0m\x1b[38;2;74;158;0;48;2;57;122;0mโ\x1b[0m\x1b[38;2;57;158;0;48;2;44;122;0mโ\x1b[0m\x1b[38;2;40;158;0;48;2;31;122;0mโ\x1b[0m\x1b[38;2;22;158;0;48;2;17;122;0mโ\x1b[0m\x1b[38;2;5;158;0;48;2;4;122;0mโ\x1b[0m\x1b[38;2;0;158;11;48;2;0;122;8mโ\x1b[0m\x1b[38;2;0;158;28;48;2;0;122;22mโ\x1b[0m\x1b[38;2;0;158;45;48;2;0;122;35mโ\x1b[0m\x1b[38;2;0;158;63;48;2;0;122;48mโ\x1b[0m\x1b[38;2;0;158;80;48;2;0;122;62mโ\x1b[0m\x1b[38;2;0;158;97;48;2;0;122;75mโ\x1b[0m\x1b[38;2;0;158;114;48;2;0;122;89mโ\x1b[0m\x1b[38;2;0;158;132;48;2;0;122;102mโ\x1b[0m\x1b[38;2;0;158;149;48;2;0;122;115mโ\x1b[0m\x1b[38;2;0;149;158;48;2;0;115;122mโ\x1b[0m\x1b[38;2;0;132;158;48;2;0;102;122mโ\x1b[0m\x1b[38;2;0;114;158;48;2;0;89;122mโ\x1b[0m\x1b[38;2;0;97;158;48;2;0;75;122mโ\x1b[0m\x1b[38;2;0;80;158;48;2;0;62;122mโ\x1b[0m\x1b[38;2;0;63;158;48;2;0;48;122mโ\x1b[0m\x1b[38;2;0;45;158;48;2;0;35;122mโ\x1b[0m\x1b[38;2;0;28;158;48;2;0;22;122mโ\x1b[0m\x1b[38;2;0;11;158;48;2;0;8;122mโ\x1b[0m\x1b[38;2;5;0;158;48;2;4;0;122mโ\x1b[0m\x1b[38;2;22;0;158;48;2;17;0;122mโ\x1b[0m\x1b[38;2;40;0;158;48;2;31;0;122mโ\x1b[0m\x1b[38;2;57;0;158;48;2;44;0;12
2mโ\x1b[0m\x1b[38;2;74;0;158;48;2;57;0;122mโ\x1b[0m\x1b[38;2;91;0;158;48;2;71;0;122mโ\x1b[0m\x1b[38;2;109;0;158;48;2;84;0;122mโ\x1b[0m\x1b[38;2;126;0;158;48;2;97;0;122mโ\x1b[0m\x1b[38;2;143;0;158;48;2;111;0;122mโ\x1b[0m\x1b[38;2;158;0;155;48;2;122;0;120mโ\x1b[0m\x1b[38;2;158;0;137;48;2;122;0;106mโ\x1b[0m\x1b[38;2;158;0;120;48;2;122;0;93mโ\x1b[0m\x1b[38;2;158;0;103;48;2;122;0;80mโ\x1b[0m\x1b[38;2;158;0;86;48;2;122;0;66mโ\x1b[0m\x1b[38;2;158;0;68;48;2;122;0;53mโ\x1b[0m\x1b[38;2;158;0;51;48;2;122;0;40mโ\x1b[0m\x1b[38;2;158;0;34;48;2;122;0;26mโ\x1b[0m\x1b[38;2;158;0;17;48;2;122;0;13mโ\x1b[0m \n โ \x1b[1;35mTruecolor (16.7 million)\x1b[0m \x1b[38;2;229;0;0;48;2;193;0;0mโ\x1b[0m\x1b[38;2;229;25;0;48;2;193;21;0mโ\x1b[0m\x1b[38;2;229;50;0;48;2;193;42;0mโ\x1b[0m\x1b[38;2;229;75;0;48;2;193;63;0mโ\x1b[0m\x1b[38;2;229;100;0;48;2;193;84;0mโ\x1b[0m\x1b[38;2;229;125;0;48;2;193;105;0mโ\x1b[0m\x1b[38;2;229;150;0;48;2;193;126;0mโ\x1b[0m\x1b[38;2;229;175;0;48;2;193;147;0mโ\x1b[0m\x1b[38;2;229;200;0;48;2;193;169;0mโ\x1b[0m\x1b[38;2;229;225;0;48;2;193;190;0mโ\x1b[0m\x1b[38;2;208;229;0;48;2;176;193;0mโ\x1b[0m\x1b[38;2;183;229;0;48;2;155;193;0mโ\x1b[0m\x1b[38;2;158;229;0;48;2;133;193;0mโ\x1b[0m\x1b[38;2;133;229;0;48;2;112;193;0mโ\x1b[0m\x1b[38;2;108;229;0;48;2;91;193;0mโ\x1b[0m\x1b[38;2;83;229;0;48;2;70;193;0mโ\x1b[0m\x1b[38;2;58;229;0;48;2;49;193;0mโ\x1b[0m\x1b[38;2;33;229;0;48;2;28;193;0mโ\x1b[0m\x1b[38;2;8;229;0;48;2;7;193;0mโ\x1b[0m\x1b[38;2;0;229;16;48;2;0;193;14mโ\x1b[0m\x1b[38;2;0;229;41;48;2;0;193;35mโ\x1b[0m\x1b[38;2;0;229;66;48;2;0;193;56mโ\x1b[0m\x1b[38;2;0;229;91;48;2;0;193;77mโ\x1b[0m\x1b[38;2;0;229;116;48;2;0;193;98mโ\x1b[0m\x1b[38;2;0;229;141;48;2;0;193;119mโ\x1b[0m\x1b[38;2;0;229;166;48;2;0;193;140mโ\x1b[0m\x1b[38;2;0;229;191;48;2;0;193;162mโ\x1b[0m\x1b[38;2;0;229;216;48;2;0;193;183mโ\x1b[0m\x1b[38;2;0;216;229;48;2;0;183;193mโ\x1b[0m\x1b[38;2;0;191;229;48;2;0;162;193mโ\x1b[0m\x1b[38;2;0;166;229;48;2;0;140;193mโ\x1b[0m\x1b[38;2;0;141;229;48;2;0;119;193mโ\x1b[0m\x1b[38;2;0;1
16;229;48;2;0;98;193mโ\x1b[0m\x1b[38;2;0;91;229;48;2;0;77;193mโ\x1b[0m\x1b[38;2;0;66;229;48;2;0;56;193mโ\x1b[0m\x1b[38;2;0;41;229;48;2;0;35;193mโ\x1b[0m\x1b[38;2;0;16;229;48;2;0;14;193mโ\x1b[0m\x1b[38;2;8;0;229;48;2;7;0;193mโ\x1b[0m\x1b[38;2;33;0;229;48;2;28;0;193mโ\x1b[0m\x1b[38;2;58;0;229;48;2;49;0;193mโ\x1b[0m\x1b[38;2;83;0;229;48;2;70;0;193mโ\x1b[0m\x1b[38;2;108;0;229;48;2;91;0;193mโ\x1b[0m\x1b[38;2;133;0;229;48;2;112;0;193mโ\x1b[0m\x1b[38;2;158;0;229;48;2;133;0;193mโ\x1b[0m\x1b[38;2;183;0;229;48;2;155;0;193mโ\x1b[0m\x1b[38;2;208;0;229;48;2;176;0;193mโ\x1b[0m\x1b[38;2;229;0;225;48;2;193;0;190mโ\x1b[0m\x1b[38;2;229;0;200;48;2;193;0;169mโ\x1b[0m\x1b[38;2;229;0;175;48;2;193;0;147mโ\x1b[0m\x1b[38;2;229;0;150;48;2;193;0;126mโ\x1b[0m\x1b[38;2;229;0;125;48;2;193;0;105mโ\x1b[0m\x1b[38;2;229;0;100;48;2;193;0;84mโ\x1b[0m\x1b[38;2;229;0;75;48;2;193;0;63mโ\x1b[0m\x1b[38;2;229;0;50;48;2;193;0;42mโ\x1b[0m\x1b[38;2;229;0;25;48;2;193;0;21mโ\x1b[0m \n โ \x1b[1;33mDumb terminals\x1b[0m \x1b[38;2;254;45;45;48;2;255;10;10mโ\x1b[0m\x1b[38;2;254;68;45;48;2;255;36;10mโ\x1b[0m\x1b[38;2;254;91;45;48;2;255;63;10mโ\x1b[0m\x1b[38;2;254;114;45;48;2;255;90;10mโ\x1b[0m\x1b[38;2;254;137;45;48;2;255;117;10mโ\x1b[0m\x1b[38;2;254;159;45;48;2;255;143;10mโ\x1b[0m\x1b[38;2;254;182;45;48;2;255;170;10mโ\x1b[0m\x1b[38;2;254;205;45;48;2;255;197;10mโ\x1b[0m\x1b[38;2;254;228;45;48;2;255;223;10mโ\x1b[0m\x1b[38;2;254;251;45;48;2;255;250;10mโ\x1b[0m\x1b[38;2;235;254;45;48;2;232;255;10mโ\x1b[0m\x1b[38;2;213;254;45;48;2;206;255;10mโ\x1b[0m\x1b[38;2;190;254;45;48;2;179;255;10mโ\x1b[0m\x1b[38;2;167;254;45;48;2;152;255;10mโ\x1b[0m\x1b[38;2;144;254;45;48;2;125;255;10mโ\x1b[0m\x1b[38;2;121;254;45;48;2;99;255;10mโ\x1b[0m\x1b[38;2;99;254;45;48;2;72;255;10mโ\x1b[0m\x1b[38;2;76;254;45;48;2;45;255;10mโ\x1b[0m\x1b[38;2;53;254;45;48;2;19;255;10mโ\x1b[0m\x1b[38;2;45;254;61;48;2;10;255;28mโ\x1b[0m\x1b[38;2;45;254;83;48;2;10;255;54mโ\x1b[0m\x1b[38;2;45;254;106;48;2;10;255;81mโ\x1b[0m\x1b[38;2;45;254;129;48;2;10;255;108mโ\x1b
[0m\x1b[38;2;45;254;152;48;2;10;255;134mโ\x1b[0m\x1b[38;2;45;254;175;48;2;10;255;161mโ\x1b[0m\x1b[38;2;45;254;197;48;2;10;255;188mโ\x1b[0m\x1b[38;2;45;254;220;48;2;10;255;214mโ\x1b[0m\x1b[38;2;45;254;243;48;2;10;255;241mโ\x1b[0m\x1b[38;2;45;243;254;48;2;10;241;255mโ\x1b[0m\x1b[38;2;45;220;254;48;2;10;214;255mโ\x1b[0m\x1b[38;2;45;197;254;48;2;10;188;255mโ\x1b[0m\x1b[38;2;45;175;254;48;2;10;161;255mโ\x1b[0m\x1b[38;2;45;152;254;48;2;10;134;255mโ\x1b[0m\x1b[38;2;45;129;254;48;2;10;108;255mโ\x1b[0m\x1b[38;2;45;106;254;48;2;10;81;255mโ\x1b[0m\x1b[38;2;45;83;254;48;2;10;54;255mโ\x1b[0m\x1b[38;2;45;61;254;48;2;10;28;255mโ\x1b[0m\x1b[38;2;53;45;254;48;2;19;10;255mโ\x1b[0m\x1b[38;2;76;45;254;48;2;45;10;255mโ\x1b[0m\x1b[38;2;99;45;254;48;2;72;10;255mโ\x1b[0m\x1b[38;2;121;45;254;48;2;99;10;255mโ\x1b[0m\x1b[38;2;144;45;254;48;2;125;10;255mโ\x1b[0m\x1b[38;2;167;45;254;48;2;152;10;255mโ\x1b[0m\x1b[38;2;190;45;254;48;2;179;10;255mโ\x1b[0m\x1b[38;2;213;45;254;48;2;206;10;255mโ\x1b[0m\x1b[38;2;235;45;254;48;2;232;10;255mโ\x1b[0m\x1b[38;2;254;45;251;48;2;255;10;250mโ\x1b[0m\x1b[38;2;254;45;228;48;2;255;10;223mโ\x1b[0m\x1b[38;2;254;45;205;48;2;255;10;197mโ\x1b[0m\x1b[38;2;254;45;182;48;2;255;10;170mโ\x1b[0m\x1b[38;2;254;45;159;48;2;255;10;143mโ\x1b[0m\x1b[38;2;254;45;137;48;2;255;10;117mโ\x1b[0m\x1b[38;2;254;45;114;48;2;255;10;90mโ\x1b[0m\x1b[38;2;254;45;91;48;2;255;10;63mโ\x1b[0m\x1b[38;2;254;45;68;48;2;255;10;36mโ\x1b[0m \n โ \x1b[1;36mAutomatic color conversion\x1b[0m 
\x1b[38;2;255;117;117;48;2;255;81;81mโ\x1b[0m\x1b[38;2;255;132;117;48;2;255;100;81mโ\x1b[0m\x1b[38;2;255;147;117;48;2;255;119;81mโ\x1b[0m\x1b[38;2;255;162;117;48;2;255;138;81mโ\x1b[0m\x1b[38;2;255;177;117;48;2;255;157;81mโ\x1b[0m\x1b[38;2;255;192;117;48;2;255;176;81mโ\x1b[0m\x1b[38;2;255;207;117;48;2;255;195;81mโ\x1b[0m\x1b[38;2;255;222;117;48;2;255;214;81mโ\x1b[0m\x1b[38;2;255;237;117;48;2;255;232;81mโ\x1b[0m\x1b[38;2;255;252;117;48;2;255;251;81mโ\x1b[0m\x1b[38;2;242;255;117;48;2;239;255;81mโ\x1b[0m\x1b[38;2;227;255;117;48;2;220;255;81mโ\x1b[0m\x1b[38;2;212;255;117;48;2;201;255;81mโ\x1b[0m\x1b[38;2;197;255;117;48;2;182;255;81mโ\x1b[0m\x1b[38;2;182;255;117;48;2;163;255;81mโ\x1b[0m\x1b[38;2;167;255;117;48;2;144;255;81mโ\x1b[0m\x1b[38;2;152;255;117;48;2;125;255;81mโ\x1b[0m\x1b[38;2;137;255;117;48;2;106;255;81mโ\x1b[0m\x1b[38;2;122;255;117;48;2;87;255;81mโ\x1b[0m\x1b[38;2;117;255;127;48;2;81;255;94mโ\x1b[0m\x1b[38;2;117;255;142;48;2;81;255;113mโ\x1b[0m\x1b[38;2;117;255;157;48;2;81;255;132mโ\x1b[0m\x1b[38;2;117;255;172;48;2;81;255;150mโ\x1b[0m\x1b[38;2;117;255;187;48;2;81;255;169mโ\x1b[0m\x1b[38;2;117;255;202;48;2;81;255;188mโ\x1b[0m\x1b[38;2;117;255;217;48;2;81;255;207mโ\x1b[0m\x1b[38;2;117;255;232;48;2;81;255;226mโ\x1b[0m\x1b[38;2;117;255;247;48;2;81;255;245mโ\x1b[0m\x1b[38;2;117;247;255;48;2;81;245;255mโ\x1b[0m\x1b[38;2;117;232;255;48;2;81;226;255mโ\x1b[0m\x1b[38;2;117;217;255;48;2;81;207;255mโ\x1b[0m\x1b[38;2;117;202;255;48;2;81;188;255mโ\x1b[0m\x1b[38;2;117;187;255;48;2;81;169;255mโ\x1b[0m\x1b[38;2;117;172;255;48;2;81;150;255mโ\x1b[0m\x1b[38;2;117;157;255;48;2;81;132;255mโ\x1b[0m\x1b[38;2;117;142;255;48;2;81;113;255mโ\x1b[0m\x1b[38;2;117;127;255;48;2;81;94;255mโ\x1b[0m\x1b[38;2;122;117;255;48;2;87;81;255mโ\x1b[0m\x1b[38;2;137;117;255;48;2;106;81;255mโ\x1b[0m\x1b[38;2;152;117;255;48;2;125;81;255mโ\x1b[0m\x1b[38;2;167;117;255;48;2;144;81;255mโ\x1b[0m\x1b[38;2;182;117;255;48;2;163;81;255mโ\x1b[0m\x1b[38;2;197;117;255;48;2;182;81;255mโ\x1b[0m\x1b[38;2;212;117;255;48;2;
201;81;255mโ\x1b[0m\x1b[38;2;227;117;255;48;2;220;81;255mโ\x1b[0m\x1b[38;2;242;117;255;48;2;239;81;255mโ\x1b[0m\x1b[38;2;255;117;252;48;2;255;81;251mโ\x1b[0m\x1b[38;2;255;117;237;48;2;255;81;232mโ\x1b[0m\x1b[38;2;255;117;222;48;2;255;81;214mโ\x1b[0m\x1b[38;2;255;117;207;48;2;255;81;195mโ\x1b[0m\x1b[38;2;255;117;192;48;2;255;81;176mโ\x1b[0m\x1b[38;2;255;117;177;48;2;255;81;157mโ\x1b[0m\x1b[38;2;255;117;162;48;2;255;81;138mโ\x1b[0m\x1b[38;2;255;117;147;48;2;255;81;119mโ\x1b[0m\x1b[38;2;255;117;132;48;2;255;81;100mโ\x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Styles \x1b[0m\x1b[1;31m \x1b[0mAll ansi styles: \x1b[1mbold\x1b[0m, \x1b[2mdim\x1b[0m, \x1b[3mitalic\x1b[0m, \x1b[4munderline\x1b[0m, \x1b[9mstrikethrough\x1b[0m, \x1b[7mreverse\x1b[0m, and even \n \x1b[5mblink\x1b[0m. \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Text \x1b[0m\x1b[1;31m \x1b[0mWord wrap text. Justify \x1b[32mleft\x1b[0m, \x1b[33mcenter\x1b[0m, \x1b[34mright\x1b[0m or \x1b[31mfull\x1b[0m. \n \n \x1b[32mLorem ipsum dolor \x1b[0m \x1b[33m Lorem ipsum dolor \x1b[0m \x1b[34m Lorem ipsum dolor\x1b[0m \x1b[31mLorem\x1b[0m\x1b[31m \x1b[0m\x1b[31mipsum\x1b[0m\x1b[31m \x1b[0m\x1b[31mdolor\x1b[0m\x1b[31m \x1b[0m\x1b[31msit\x1b[0m \n \x1b[32msit amet, \x1b[0m \x1b[33m sit amet, \x1b[0m \x1b[34m sit amet,\x1b[0m \x1b[31mamet,\x1b[0m\x1b[31m \x1b[0m\x1b[31mconsectetur\x1b[0m \n \x1b[32mconsectetur \x1b[0m \x1b[33m consectetur \x1b[0m \x1b[34m consectetur\x1b[0m \x1b[31madipiscing\x1b[0m\x1b[31m \x1b[0m\x1b[31melit.\x1b[0m \n \x1b[32madipiscing elit. \x1b[0m \x1b[33m adipiscing elit. 
\x1b[0m \x1b[34m adipiscing elit.\x1b[0m \x1b[31mQuisque\x1b[0m\x1b[31m \x1b[0m\x1b[31min\x1b[0m\x1b[31m \x1b[0m\x1b[31mmetus\x1b[0m\x1b[31m \x1b[0m\x1b[31msed\x1b[0m \n \x1b[32mQuisque in metus sed\x1b[0m \x1b[33mQuisque in metus sed\x1b[0m \x1b[34mQuisque in metus sed\x1b[0m \x1b[31msapien\x1b[0m\x1b[31m \x1b[0m\x1b[31multricies\x1b[0m \n \x1b[32msapien ultricies \x1b[0m \x1b[33m sapien ultricies \x1b[0m \x1b[34m sapien ultricies\x1b[0m \x1b[31mpretium\x1b[0m\x1b[31m \x1b[0m\x1b[31ma\x1b[0m\x1b[31m \x1b[0m\x1b[31mat\x1b[0m\x1b[31m \x1b[0m\x1b[31mjusto.\x1b[0m \n \x1b[32mpretium a at justo. \x1b[0m \x1b[33mpretium a at justo. \x1b[0m \x1b[34m pretium a at justo.\x1b[0m \x1b[31mMaecenas\x1b[0m\x1b[31m \x1b[0m\x1b[31mluctus\x1b[0m\x1b[31m \x1b[0m\x1b[31mvelit\x1b[0m \n \x1b[32mMaecenas luctus \x1b[0m \x1b[33m Maecenas luctus \x1b[0m \x1b[34m Maecenas luctus\x1b[0m \x1b[31met auctor maximus.\x1b[0m \n \x1b[32mvelit et auctor \x1b[0m \x1b[33m velit et auctor \x1b[0m \x1b[34m velit et auctor\x1b[0m \n \x1b[32mmaximus. \x1b[0m \x1b[33m maximus. \x1b[0m \x1b[34m maximus.\x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Asian \x1b[0m\x1b[1;31m \x1b[0m๐จ๐ณ ่ฏฅๅบๆฏๆไธญๆ๏ผๆฅๆๅ้ฉๆๆๆฌ๏ผ \n\x1b[1;31m \x1b[0m\x1b[1;31m language \x1b[0m\x1b[1;31m \x1b[0m๐ฏ๐ต ใฉใคใใฉใชใฏไธญๅฝ่ชใๆฅๆฌ่ชใ้ๅฝ่ชใฎใใญในใใใตใใผใใใฆใใพใ \n\x1b[1;31m \x1b[0m\x1b[1;31m support \x1b[0m\x1b[1;31m \x1b[0m๐ฐ๐ท ์ด ๋ผ์ด๋ธ๋ฌ๋ฆฌ๋ ์ค๊ตญ์ด, ์ผ๋ณธ์ด ๋ฐ ํ๊ตญ์ด ํ
์คํธ๋ฅผ ์ง์ํฉ๋๋ค \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Markup \x1b[0m\x1b[1;31m \x1b[0m\x1b[1;35mRich\x1b[0m supports a simple \x1b[3mbbcode\x1b[0m like \x1b[1mmarkup\x1b[0m for \x1b[33mcolor\x1b[0m, \x1b[4mstyle\x1b[0m, and emoji! ๐ ๐ ๐ ๐ป โฆ \n ๐ \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Tables \x1b[0m\x1b[1;31m \x1b[0m\x1b[1m \x1b[0m\x1b[1;32mDate\x1b[0m\x1b[1m \x1b[0m\x1b[1m \x1b[0m \x1b[1m \x1b[0m\x1b[1;34mTitle\x1b[0m\x1b[1m \x1b[0m\x1b[1m \x1b[0m \x1b[1m \x1b[0m\x1b[1;36mProduction Budget\x1b[0m\x1b[1m \x1b[0m \x1b[1m \x1b[0m\x1b[1m \x1b[0m\x1b[1;35mBox Office\x1b[0m\x1b[1m \x1b[0m \n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ \n \x1b[32m \x1b[0m\x1b[32mDec 20, 2019\x1b[0m\x1b[32m \x1b[0m \x1b[34m \x1b[0m\x1b[34mStar Wars: The Rise of \x1b[0m\x1b[34m \x1b[0m \x1b[36m \x1b[0m\x1b[36m $275,000,000\x1b[0m\x1b[36m \x1b[0m \x1b[35m \x1b[0m\x1b[35m $375,126,118\x1b[0m\x1b[35m \x1b[0m \n \x1b[34m \x1b[0m\x1b[34mSkywalker \x1b[0m\x1b[34m \x1b[0m \n \x1b[2;32m \x1b[0m\x1b[2;32mMay 25, 2018\x1b[0m\x1b[2;32m \x1b[0m \x1b[2;34m \x1b[0m\x1b[1;2;34mSolo\x1b[0m\x1b[2;34m: A Star Wars Story \x1b[0m\x1b[2;34m \x1b[0m \x1b[2;36m \x1b[0m\x1b[2;36m $275,000,000\x1b[0m\x1b[2;36m \x1b[0m \x1b[2;35m \x1b[0m\x1b[2;35m $393,151,347\x1b[0m\x1b[2;35m \x1b[0m \n \x1b[32m \x1b[0m\x1b[32mDec 15, 2017\x1b[0m\x1b[32m \x1b[0m \x1b[34m \x1b[0m\x1b[34mStar Wars Ep. VIII: The Last \x1b[0m\x1b[34m \x1b[0m \x1b[36m \x1b[0m\x1b[36m $262,000,000\x1b[0m\x1b[36m \x1b[0m \x1b[35m \x1b[0m\x1b[1;35m$1,332,539,889\x1b[0m\x1b[35m \x1b[0m \n \x1b[34m \x1b[0m\x1b[34mJedi \x1b[0m\x1b[34m \x1b[0m \n \x1b[2;32m \x1b[0m\x1b[2;32mMay 19, 1999\x1b[0m\x1b[2;32m \x1b[0m \x1b[2;34m \x1b[0m\x1b[2;34mStar Wars Ep. 
\x1b[0m\x1b[1;2;34mI\x1b[0m\x1b[2;34m: \x1b[0m\x1b[2;3;34mThe phantom \x1b[0m\x1b[2;34m \x1b[0m\x1b[2;34m \x1b[0m \x1b[2;36m \x1b[0m\x1b[2;36m $115,000,000\x1b[0m\x1b[2;36m \x1b[0m \x1b[2;35m \x1b[0m\x1b[2;35m$1,027,044,677\x1b[0m\x1b[2;35m \x1b[0m \n \x1b[2;34m \x1b[0m\x1b[2;3;34mMenace\x1b[0m\x1b[2;34m \x1b[0m\x1b[2;34m \x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Syntax \x1b[0m\x1b[1;31m \x1b[0m\x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 1 \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mdef\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;166;226;46;48;2;39;40;34miter_last\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m(\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mvalues\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mIterable\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m[\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mT\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m]\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m)\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m-\x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m>\x1b[0m \x1b[1m{\x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31mhighlighting\x1b[0m\x1b[1;31m \x1b[0m\x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 2 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ \x1b[0m\x1b[38;2;230;219;116;48;2;39;40;34m"""Iterate and generate a tuple w\x1b[0m \x1b[2;32mโ \x1b[0m\x1b[32m\'foo\'\x1b[0m: \x1b[1m[\x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m & \x1b[0m\x1b[1;31m \x1b[0m\x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 3 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34miter_values\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m=\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m 
\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34miter\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m(\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mvalues\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m)\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ โ \x1b[0m\x1b[1;34m3.1427\x1b[0m, \n\x1b[1;31m \x1b[0m\x1b[1;31m pretty \x1b[0m\x1b[1;31m \x1b[0m\x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 4 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mtry\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ โ \x1b[0m\x1b[1m(\x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m printing \x1b[0m\x1b[1;31m \x1b[0m\x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 5 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ โ \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mprevious_value\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m=\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mnext\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m(\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34miter_va\x1b[0m \x1b[2;32mโ โ โ \x1b[0m\x1b[32m\'Paul Atriedies\'\x1b[0m, \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 6 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mexcept\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;166;226;46;48;2;39;40;34mStopIteration\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ โ โ \x1b[0m\x1b[32m\'Vladimir Harkonnen\'\x1b[0m, \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 7 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ โ \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mreturn\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ โ โ \x1b[0m\x1b[32m\'Thufir Haway\'\x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 8 
\x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mfor\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mvalue\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34min\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34miter_values\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ โ \x1b[0m\x1b[1m)\x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 9 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ โ \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34myield\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mFalse\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m,\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mprevious_value\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ \x1b[0m\x1b[1m]\x1b[0m, \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m10 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ โ \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mprevious_value\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m=\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mvalue\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ \x1b[0m\x1b[32m\'atomic\'\x1b[0m: \x1b[1m(\x1b[0m\x1b[3;91mFalse\x1b[0m, \x1b[3;92mTrue\x1b[0m, \x1b[3;35mNone\x1b[0m\x1b[1m)\x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m11 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34myield\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mTrue\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m,\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mprevious_value\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[1m}\x1b[0m 
\n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Markdown \x1b[0m\x1b[1;31m \x1b[0m\x1b[36m# Markdown\x1b[0m โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ \n โ \x1b[1mMarkdown\x1b[0m โ \n \x1b[36mSupports much of the *markdown*, \x1b[0m โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ \n \x1b[36m__syntax__!\x1b[0m \n Supports much of the \x1b[3mmarkdown\x1b[0m, \x1b[1msyntax\x1b[0m! \n \x1b[36m- Headers\x1b[0m \n \x1b[36m- Basic formatting: **bold**, *italic*, \x1b[0m \x1b[1;33m โข \x1b[0mHeaders \n \x1b[36m`code`\x1b[0m \x1b[1;33m โข \x1b[0mBasic formatting: \x1b[1mbold\x1b[0m, \x1b[3mitalic\x1b[0m, \x1b[97;40mcode\x1b[0m \n \x1b[36m- Block quotes\x1b[0m \x1b[1;33m โข \x1b[0mBlock quotes \n \x1b[36m- Lists, and more...\x1b[0m \x1b[1;33m โข \x1b[0mLists, and more... \n \x1b[36m \x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m +more! \x1b[0m\x1b[1;31m \x1b[0mProgress bars, columns, styled logging handler, tracebacks, etc... \n\x1b[1;31m \x1b[0m \n'
\ No newline at end of file
+expected = "\x1b[3m Rich features \x1b[0m\n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Colors \x1b[0m\x1b[1;31m \x1b[0mโ \x1b[1;32m4-bit color\x1b[0m \x1b[38;2;86;0;0;48;2;51;0;0mโ\x1b[0m\x1b[38;2;86;9;0;48;2;51;5;0mโ\x1b[0m\x1b[38;2;86;18;0;48;2;51;11;0mโ\x1b[0m\x1b[38;2;86;28;0;48;2;51;16;0mโ\x1b[0m\x1b[38;2;86;37;0;48;2;51;22;0mโ\x1b[0m\x1b[38;2;86;47;0;48;2;51;27;0mโ\x1b[0m\x1b[38;2;86;56;0;48;2;51;33;0mโ\x1b[0m\x1b[38;2;86;66;0;48;2;51;38;0mโ\x1b[0m\x1b[38;2;86;75;0;48;2;51;44;0mโ\x1b[0m\x1b[38;2;86;85;0;48;2;51;50;0mโ\x1b[0m\x1b[38;2;78;86;0;48;2;46;51;0mโ\x1b[0m\x1b[38;2;69;86;0;48;2;40;51;0mโ\x1b[0m\x1b[38;2;59;86;0;48;2;35;51;0mโ\x1b[0m\x1b[38;2;50;86;0;48;2;29;51;0mโ\x1b[0m\x1b[38;2;40;86;0;48;2;24;51;0mโ\x1b[0m\x1b[38;2;31;86;0;48;2;18;51;0mโ\x1b[0m\x1b[38;2;22;86;0;48;2;12;51;0mโ\x1b[0m\x1b[38;2;12;86;0;48;2;7;51;0mโ\x1b[0m\x1b[38;2;3;86;0;48;2;1;51;0mโ\x1b[0m\x1b[38;2;0;86;6;48;2;0;51;3mโ\x1b[0m\x1b[38;2;0;86;15;48;2;0;51;9mโ\x1b[0m\x1b[38;2;0;86;25;48;2;0;51;14mโ\x1b[0m\x1b[38;2;0;86;34;48;2;0;51;20mโ\x1b[0m\x1b[38;2;0;86;44;48;2;0;51;25mโ\x1b[0m\x1b[38;2;0;86;53;48;2;0;51;31mโ\x1b[0m\x1b[38;2;0;86;63;48;2;0;51;37mโ\x1b[0m\x1b[38;2;0;86;72;48;2;0;51;42mโ\x1b[0m\x1b[38;2;0;86;81;48;2;0;51;48mโ\x1b[0m\x1b[38;2;0;81;86;48;2;0;48;51mโ\x1b[0m\x1b[38;2;0;72;86;48;2;0;42;51mโ\x1b[0m\x1b[38;2;0;63;86;48;2;0;37;51mโ\x1b[0m\x1b[38;2;0;53;86;48;2;0;31;51mโ\x1b[0m\x1b[38;2;0;44;86;48;2;0;25;51mโ\x1b[0m\x1b[38;2;0;34;86;48;2;0;20;51mโ\x1b[0m\x1b[38;2;0;25;86;48;2;0;14;51mโ\x1b[0m\x1b[38;2;0;15;86;48;2;0;9;51mโ\x1b[0m\x1b[38;2;0;6;86;48;2;0;3;51mโ\x1b[0m\x1b[38;2;3;0;86;48;2;1;0;51mโ\x1b[0m\x1b[38;2;12;0;86;48;2;7;0;51mโ\x1b[0m\x1b[38;2;22;0;86;48;2;12;0;51mโ\x1b[0m\x1b[38;2;31;0;86;48;2;18;0;51mโ\x1b[0m\x1b[38;2;40;0;86;48;2;24;0;51mโ\x1b[0m\x1b[38;2;50;0;86;48;2;29;0;51mโ\x1b[0m\x1b[38;2;59;0;86;48;2;35;0;51mโ\x1b[0m\x1b[38;2;69;0;86;48;2;40;0;51mโ\x1b[0m\x1b[38;2;78;0;86;48;2;46;0;51mโ\x1b[0m\x1b[38;2;86;0;85;48;2;51;0;50mโ\x1b[0m\x1b[38;2;86;0;75;48;2;51;
0;44mโ\x1b[0m\x1b[38;2;86;0;66;48;2;51;0;38mโ\x1b[0m\x1b[38;2;86;0;56;48;2;51;0;33mโ\x1b[0m\x1b[38;2;86;0;47;48;2;51;0;27mโ\x1b[0m\x1b[38;2;86;0;37;48;2;51;0;22mโ\x1b[0m\x1b[38;2;86;0;28;48;2;51;0;16mโ\x1b[0m\x1b[38;2;86;0;18;48;2;51;0;11mโ\x1b[0m\x1b[38;2;86;0;9;48;2;51;0;5mโ\x1b[0m \n โ \x1b[1;34m8-bit color\x1b[0m \x1b[38;2;158;0;0;48;2;122;0;0mโ\x1b[0m\x1b[38;2;158;17;0;48;2;122;13;0mโ\x1b[0m\x1b[38;2;158;34;0;48;2;122;26;0mโ\x1b[0m\x1b[38;2;158;51;0;48;2;122;40;0mโ\x1b[0m\x1b[38;2;158;68;0;48;2;122;53;0mโ\x1b[0m\x1b[38;2;158;86;0;48;2;122;66;0mโ\x1b[0m\x1b[38;2;158;103;0;48;2;122;80;0mโ\x1b[0m\x1b[38;2;158;120;0;48;2;122;93;0mโ\x1b[0m\x1b[38;2;158;137;0;48;2;122;106;0mโ\x1b[0m\x1b[38;2;158;155;0;48;2;122;120;0mโ\x1b[0m\x1b[38;2;143;158;0;48;2;111;122;0mโ\x1b[0m\x1b[38;2;126;158;0;48;2;97;122;0mโ\x1b[0m\x1b[38;2;109;158;0;48;2;84;122;0mโ\x1b[0m\x1b[38;2;91;158;0;48;2;71;122;0mโ\x1b[0m\x1b[38;2;74;158;0;48;2;57;122;0mโ\x1b[0m\x1b[38;2;57;158;0;48;2;44;122;0mโ\x1b[0m\x1b[38;2;40;158;0;48;2;31;122;0mโ\x1b[0m\x1b[38;2;22;158;0;48;2;17;122;0mโ\x1b[0m\x1b[38;2;5;158;0;48;2;4;122;0mโ\x1b[0m\x1b[38;2;0;158;11;48;2;0;122;8mโ\x1b[0m\x1b[38;2;0;158;28;48;2;0;122;22mโ\x1b[0m\x1b[38;2;0;158;45;48;2;0;122;35mโ\x1b[0m\x1b[38;2;0;158;63;48;2;0;122;48mโ\x1b[0m\x1b[38;2;0;158;80;48;2;0;122;62mโ\x1b[0m\x1b[38;2;0;158;97;48;2;0;122;75mโ\x1b[0m\x1b[38;2;0;158;114;48;2;0;122;89mโ\x1b[0m\x1b[38;2;0;158;132;48;2;0;122;102mโ\x1b[0m\x1b[38;2;0;158;149;48;2;0;122;115mโ\x1b[0m\x1b[38;2;0;149;158;48;2;0;115;122mโ\x1b[0m\x1b[38;2;0;132;158;48;2;0;102;122mโ\x1b[0m\x1b[38;2;0;114;158;48;2;0;89;122mโ\x1b[0m\x1b[38;2;0;97;158;48;2;0;75;122mโ\x1b[0m\x1b[38;2;0;80;158;48;2;0;62;122mโ\x1b[0m\x1b[38;2;0;63;158;48;2;0;48;122mโ\x1b[0m\x1b[38;2;0;45;158;48;2;0;35;122mโ\x1b[0m\x1b[38;2;0;28;158;48;2;0;22;122mโ\x1b[0m\x1b[38;2;0;11;158;48;2;0;8;122mโ\x1b[0m\x1b[38;2;5;0;158;48;2;4;0;122mโ\x1b[0m\x1b[38;2;22;0;158;48;2;17;0;122mโ\x1b[0m\x1b[38;2;40;0;158;48;2;31;0;122mโ\x1b[0m\x1b[38;2;57;0;158;48;2;44;0;
122mโ\x1b[0m\x1b[38;2;74;0;158;48;2;57;0;122mโ\x1b[0m\x1b[38;2;91;0;158;48;2;71;0;122mโ\x1b[0m\x1b[38;2;109;0;158;48;2;84;0;122mโ\x1b[0m\x1b[38;2;126;0;158;48;2;97;0;122mโ\x1b[0m\x1b[38;2;143;0;158;48;2;111;0;122mโ\x1b[0m\x1b[38;2;158;0;155;48;2;122;0;120mโ\x1b[0m\x1b[38;2;158;0;137;48;2;122;0;106mโ\x1b[0m\x1b[38;2;158;0;120;48;2;122;0;93mโ\x1b[0m\x1b[38;2;158;0;103;48;2;122;0;80mโ\x1b[0m\x1b[38;2;158;0;86;48;2;122;0;66mโ\x1b[0m\x1b[38;2;158;0;68;48;2;122;0;53mโ\x1b[0m\x1b[38;2;158;0;51;48;2;122;0;40mโ\x1b[0m\x1b[38;2;158;0;34;48;2;122;0;26mโ\x1b[0m\x1b[38;2;158;0;17;48;2;122;0;13mโ\x1b[0m \n โ \x1b[1;35mTruecolor (16.7 million)\x1b[0m \x1b[38;2;229;0;0;48;2;193;0;0mโ\x1b[0m\x1b[38;2;229;25;0;48;2;193;21;0mโ\x1b[0m\x1b[38;2;229;50;0;48;2;193;42;0mโ\x1b[0m\x1b[38;2;229;75;0;48;2;193;63;0mโ\x1b[0m\x1b[38;2;229;100;0;48;2;193;84;0mโ\x1b[0m\x1b[38;2;229;125;0;48;2;193;105;0mโ\x1b[0m\x1b[38;2;229;150;0;48;2;193;126;0mโ\x1b[0m\x1b[38;2;229;175;0;48;2;193;147;0mโ\x1b[0m\x1b[38;2;229;200;0;48;2;193;169;0mโ\x1b[0m\x1b[38;2;229;225;0;48;2;193;190;0mโ\x1b[0m\x1b[38;2;208;229;0;48;2;176;193;0mโ\x1b[0m\x1b[38;2;183;229;0;48;2;155;193;0mโ\x1b[0m\x1b[38;2;158;229;0;48;2;133;193;0mโ\x1b[0m\x1b[38;2;133;229;0;48;2;112;193;0mโ\x1b[0m\x1b[38;2;108;229;0;48;2;91;193;0mโ\x1b[0m\x1b[38;2;83;229;0;48;2;70;193;0mโ\x1b[0m\x1b[38;2;58;229;0;48;2;49;193;0mโ\x1b[0m\x1b[38;2;33;229;0;48;2;28;193;0mโ\x1b[0m\x1b[38;2;8;229;0;48;2;7;193;0mโ\x1b[0m\x1b[38;2;0;229;16;48;2;0;193;14mโ\x1b[0m\x1b[38;2;0;229;41;48;2;0;193;35mโ\x1b[0m\x1b[38;2;0;229;66;48;2;0;193;56mโ\x1b[0m\x1b[38;2;0;229;91;48;2;0;193;77mโ\x1b[0m\x1b[38;2;0;229;116;48;2;0;193;98mโ\x1b[0m\x1b[38;2;0;229;141;48;2;0;193;119mโ\x1b[0m\x1b[38;2;0;229;166;48;2;0;193;140mโ\x1b[0m\x1b[38;2;0;229;191;48;2;0;193;162mโ\x1b[0m\x1b[38;2;0;229;216;48;2;0;193;183mโ\x1b[0m\x1b[38;2;0;216;229;48;2;0;183;193mโ\x1b[0m\x1b[38;2;0;191;229;48;2;0;162;193mโ\x1b[0m\x1b[38;2;0;166;229;48;2;0;140;193mโ\x1b[0m\x1b[38;2;0;141;229;48;2;0;119;193mโ\x1b[0m\x1b[38;2;0
;116;229;48;2;0;98;193mโ\x1b[0m\x1b[38;2;0;91;229;48;2;0;77;193mโ\x1b[0m\x1b[38;2;0;66;229;48;2;0;56;193mโ\x1b[0m\x1b[38;2;0;41;229;48;2;0;35;193mโ\x1b[0m\x1b[38;2;0;16;229;48;2;0;14;193mโ\x1b[0m\x1b[38;2;8;0;229;48;2;7;0;193mโ\x1b[0m\x1b[38;2;33;0;229;48;2;28;0;193mโ\x1b[0m\x1b[38;2;58;0;229;48;2;49;0;193mโ\x1b[0m\x1b[38;2;83;0;229;48;2;70;0;193mโ\x1b[0m\x1b[38;2;108;0;229;48;2;91;0;193mโ\x1b[0m\x1b[38;2;133;0;229;48;2;112;0;193mโ\x1b[0m\x1b[38;2;158;0;229;48;2;133;0;193mโ\x1b[0m\x1b[38;2;183;0;229;48;2;155;0;193mโ\x1b[0m\x1b[38;2;208;0;229;48;2;176;0;193mโ\x1b[0m\x1b[38;2;229;0;225;48;2;193;0;190mโ\x1b[0m\x1b[38;2;229;0;200;48;2;193;0;169mโ\x1b[0m\x1b[38;2;229;0;175;48;2;193;0;147mโ\x1b[0m\x1b[38;2;229;0;150;48;2;193;0;126mโ\x1b[0m\x1b[38;2;229;0;125;48;2;193;0;105mโ\x1b[0m\x1b[38;2;229;0;100;48;2;193;0;84mโ\x1b[0m\x1b[38;2;229;0;75;48;2;193;0;63mโ\x1b[0m\x1b[38;2;229;0;50;48;2;193;0;42mโ\x1b[0m\x1b[38;2;229;0;25;48;2;193;0;21mโ\x1b[0m \n โ \x1b[1;33mDumb terminals\x1b[0m \x1b[38;2;254;45;45;48;2;255;10;10mโ\x1b[0m\x1b[38;2;254;68;45;48;2;255;36;10mโ\x1b[0m\x1b[38;2;254;91;45;48;2;255;63;10mโ\x1b[0m\x1b[38;2;254;114;45;48;2;255;90;10mโ\x1b[0m\x1b[38;2;254;137;45;48;2;255;117;10mโ\x1b[0m\x1b[38;2;254;159;45;48;2;255;143;10mโ\x1b[0m\x1b[38;2;254;182;45;48;2;255;170;10mโ\x1b[0m\x1b[38;2;254;205;45;48;2;255;197;10mโ\x1b[0m\x1b[38;2;254;228;45;48;2;255;223;10mโ\x1b[0m\x1b[38;2;254;251;45;48;2;255;250;10mโ\x1b[0m\x1b[38;2;235;254;45;48;2;232;255;10mโ\x1b[0m\x1b[38;2;213;254;45;48;2;206;255;10mโ\x1b[0m\x1b[38;2;190;254;45;48;2;179;255;10mโ\x1b[0m\x1b[38;2;167;254;45;48;2;152;255;10mโ\x1b[0m\x1b[38;2;144;254;45;48;2;125;255;10mโ\x1b[0m\x1b[38;2;121;254;45;48;2;99;255;10mโ\x1b[0m\x1b[38;2;99;254;45;48;2;72;255;10mโ\x1b[0m\x1b[38;2;76;254;45;48;2;45;255;10mโ\x1b[0m\x1b[38;2;53;254;45;48;2;19;255;10mโ\x1b[0m\x1b[38;2;45;254;61;48;2;10;255;28mโ\x1b[0m\x1b[38;2;45;254;83;48;2;10;255;54mโ\x1b[0m\x1b[38;2;45;254;106;48;2;10;255;81mโ\x1b[0m\x1b[38;2;45;254;129;48;2;10;255;108mโ\x
1b[0m\x1b[38;2;45;254;152;48;2;10;255;134mโ\x1b[0m\x1b[38;2;45;254;175;48;2;10;255;161mโ\x1b[0m\x1b[38;2;45;254;197;48;2;10;255;188mโ\x1b[0m\x1b[38;2;45;254;220;48;2;10;255;214mโ\x1b[0m\x1b[38;2;45;254;243;48;2;10;255;241mโ\x1b[0m\x1b[38;2;45;243;254;48;2;10;241;255mโ\x1b[0m\x1b[38;2;45;220;254;48;2;10;214;255mโ\x1b[0m\x1b[38;2;45;197;254;48;2;10;188;255mโ\x1b[0m\x1b[38;2;45;175;254;48;2;10;161;255mโ\x1b[0m\x1b[38;2;45;152;254;48;2;10;134;255mโ\x1b[0m\x1b[38;2;45;129;254;48;2;10;108;255mโ\x1b[0m\x1b[38;2;45;106;254;48;2;10;81;255mโ\x1b[0m\x1b[38;2;45;83;254;48;2;10;54;255mโ\x1b[0m\x1b[38;2;45;61;254;48;2;10;28;255mโ\x1b[0m\x1b[38;2;53;45;254;48;2;19;10;255mโ\x1b[0m\x1b[38;2;76;45;254;48;2;45;10;255mโ\x1b[0m\x1b[38;2;99;45;254;48;2;72;10;255mโ\x1b[0m\x1b[38;2;121;45;254;48;2;99;10;255mโ\x1b[0m\x1b[38;2;144;45;254;48;2;125;10;255mโ\x1b[0m\x1b[38;2;167;45;254;48;2;152;10;255mโ\x1b[0m\x1b[38;2;190;45;254;48;2;179;10;255mโ\x1b[0m\x1b[38;2;213;45;254;48;2;206;10;255mโ\x1b[0m\x1b[38;2;235;45;254;48;2;232;10;255mโ\x1b[0m\x1b[38;2;254;45;251;48;2;255;10;250mโ\x1b[0m\x1b[38;2;254;45;228;48;2;255;10;223mโ\x1b[0m\x1b[38;2;254;45;205;48;2;255;10;197mโ\x1b[0m\x1b[38;2;254;45;182;48;2;255;10;170mโ\x1b[0m\x1b[38;2;254;45;159;48;2;255;10;143mโ\x1b[0m\x1b[38;2;254;45;137;48;2;255;10;117mโ\x1b[0m\x1b[38;2;254;45;114;48;2;255;10;90mโ\x1b[0m\x1b[38;2;254;45;91;48;2;255;10;63mโ\x1b[0m\x1b[38;2;254;45;68;48;2;255;10;36mโ\x1b[0m \n โ \x1b[1;36mAutomatic color conversion\x1b[0m 
\x1b[38;2;255;117;117;48;2;255;81;81mโ\x1b[0m\x1b[38;2;255;132;117;48;2;255;100;81mโ\x1b[0m\x1b[38;2;255;147;117;48;2;255;119;81mโ\x1b[0m\x1b[38;2;255;162;117;48;2;255;138;81mโ\x1b[0m\x1b[38;2;255;177;117;48;2;255;157;81mโ\x1b[0m\x1b[38;2;255;192;117;48;2;255;176;81mโ\x1b[0m\x1b[38;2;255;207;117;48;2;255;195;81mโ\x1b[0m\x1b[38;2;255;222;117;48;2;255;214;81mโ\x1b[0m\x1b[38;2;255;237;117;48;2;255;232;81mโ\x1b[0m\x1b[38;2;255;252;117;48;2;255;251;81mโ\x1b[0m\x1b[38;2;242;255;117;48;2;239;255;81mโ\x1b[0m\x1b[38;2;227;255;117;48;2;220;255;81mโ\x1b[0m\x1b[38;2;212;255;117;48;2;201;255;81mโ\x1b[0m\x1b[38;2;197;255;117;48;2;182;255;81mโ\x1b[0m\x1b[38;2;182;255;117;48;2;163;255;81mโ\x1b[0m\x1b[38;2;167;255;117;48;2;144;255;81mโ\x1b[0m\x1b[38;2;152;255;117;48;2;125;255;81mโ\x1b[0m\x1b[38;2;137;255;117;48;2;106;255;81mโ\x1b[0m\x1b[38;2;122;255;117;48;2;87;255;81mโ\x1b[0m\x1b[38;2;117;255;127;48;2;81;255;94mโ\x1b[0m\x1b[38;2;117;255;142;48;2;81;255;113mโ\x1b[0m\x1b[38;2;117;255;157;48;2;81;255;132mโ\x1b[0m\x1b[38;2;117;255;172;48;2;81;255;150mโ\x1b[0m\x1b[38;2;117;255;187;48;2;81;255;169mโ\x1b[0m\x1b[38;2;117;255;202;48;2;81;255;188mโ\x1b[0m\x1b[38;2;117;255;217;48;2;81;255;207mโ\x1b[0m\x1b[38;2;117;255;232;48;2;81;255;226mโ\x1b[0m\x1b[38;2;117;255;247;48;2;81;255;245mโ\x1b[0m\x1b[38;2;117;247;255;48;2;81;245;255mโ\x1b[0m\x1b[38;2;117;232;255;48;2;81;226;255mโ\x1b[0m\x1b[38;2;117;217;255;48;2;81;207;255mโ\x1b[0m\x1b[38;2;117;202;255;48;2;81;188;255mโ\x1b[0m\x1b[38;2;117;187;255;48;2;81;169;255mโ\x1b[0m\x1b[38;2;117;172;255;48;2;81;150;255mโ\x1b[0m\x1b[38;2;117;157;255;48;2;81;132;255mโ\x1b[0m\x1b[38;2;117;142;255;48;2;81;113;255mโ\x1b[0m\x1b[38;2;117;127;255;48;2;81;94;255mโ\x1b[0m\x1b[38;2;122;117;255;48;2;87;81;255mโ\x1b[0m\x1b[38;2;137;117;255;48;2;106;81;255mโ\x1b[0m\x1b[38;2;152;117;255;48;2;125;81;255mโ\x1b[0m\x1b[38;2;167;117;255;48;2;144;81;255mโ\x1b[0m\x1b[38;2;182;117;255;48;2;163;81;255mโ\x1b[0m\x1b[38;2;197;117;255;48;2;182;81;255mโ\x1b[0m\x1b[38;2;212;117;255;48;2;
201;81;255mโ\x1b[0m\x1b[38;2;227;117;255;48;2;220;81;255mโ\x1b[0m\x1b[38;2;242;117;255;48;2;239;81;255mโ\x1b[0m\x1b[38;2;255;117;252;48;2;255;81;251mโ\x1b[0m\x1b[38;2;255;117;237;48;2;255;81;232mโ\x1b[0m\x1b[38;2;255;117;222;48;2;255;81;214mโ\x1b[0m\x1b[38;2;255;117;207;48;2;255;81;195mโ\x1b[0m\x1b[38;2;255;117;192;48;2;255;81;176mโ\x1b[0m\x1b[38;2;255;117;177;48;2;255;81;157mโ\x1b[0m\x1b[38;2;255;117;162;48;2;255;81;138mโ\x1b[0m\x1b[38;2;255;117;147;48;2;255;81;119mโ\x1b[0m\x1b[38;2;255;117;132;48;2;255;81;100mโ\x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Styles \x1b[0m\x1b[1;31m \x1b[0mAll ansi styles: \x1b[1mbold\x1b[0m, \x1b[2mdim\x1b[0m, \x1b[3mitalic\x1b[0m, \x1b[4munderline\x1b[0m, \x1b[9mstrikethrough\x1b[0m, \x1b[7mreverse\x1b[0m, and even \n \x1b[5mblink\x1b[0m. \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Text \x1b[0m\x1b[1;31m \x1b[0mWord wrap text. Justify \x1b[32mleft\x1b[0m, \x1b[33mcenter\x1b[0m, \x1b[34mright\x1b[0m or \x1b[31mfull\x1b[0m. \n \n \x1b[32mLorem ipsum dolor \x1b[0m \x1b[33m Lorem ipsum dolor \x1b[0m \x1b[34m Lorem ipsum dolor\x1b[0m \x1b[31mLorem\x1b[0m\x1b[31m \x1b[0m\x1b[31mipsum\x1b[0m\x1b[31m \x1b[0m\x1b[31mdolor\x1b[0m\x1b[31m \x1b[0m\x1b[31msit\x1b[0m \n \x1b[32msit amet, \x1b[0m \x1b[33m sit amet, \x1b[0m \x1b[34m sit amet,\x1b[0m \x1b[31mamet,\x1b[0m\x1b[31m \x1b[0m\x1b[31mconsectetur\x1b[0m \n \x1b[32mconsectetur \x1b[0m \x1b[33m consectetur \x1b[0m \x1b[34m consectetur\x1b[0m \x1b[31madipiscing\x1b[0m\x1b[31m \x1b[0m\x1b[31melit.\x1b[0m \n \x1b[32madipiscing elit. \x1b[0m \x1b[33m adipiscing elit. 
\x1b[0m \x1b[34m adipiscing elit.\x1b[0m \x1b[31mQuisque\x1b[0m\x1b[31m \x1b[0m\x1b[31min\x1b[0m\x1b[31m \x1b[0m\x1b[31mmetus\x1b[0m\x1b[31m \x1b[0m\x1b[31msed\x1b[0m \n \x1b[32mQuisque in metus sed\x1b[0m \x1b[33mQuisque in metus sed\x1b[0m \x1b[34mQuisque in metus sed\x1b[0m \x1b[31msapien\x1b[0m\x1b[31m \x1b[0m\x1b[31multricies\x1b[0m \n \x1b[32msapien ultricies \x1b[0m \x1b[33m sapien ultricies \x1b[0m \x1b[34m sapien ultricies\x1b[0m \x1b[31mpretium\x1b[0m\x1b[31m \x1b[0m\x1b[31ma\x1b[0m\x1b[31m \x1b[0m\x1b[31mat\x1b[0m\x1b[31m \x1b[0m\x1b[31mjusto.\x1b[0m \n \x1b[32mpretium a at justo. \x1b[0m \x1b[33mpretium a at justo. \x1b[0m \x1b[34m pretium a at justo.\x1b[0m \x1b[31mMaecenas\x1b[0m\x1b[31m \x1b[0m\x1b[31mluctus\x1b[0m\x1b[31m \x1b[0m\x1b[31mvelit\x1b[0m \n \x1b[32mMaecenas luctus \x1b[0m \x1b[33m Maecenas luctus \x1b[0m \x1b[34m Maecenas luctus\x1b[0m \x1b[31met auctor maximus.\x1b[0m \n \x1b[32mvelit et auctor \x1b[0m \x1b[33m velit et auctor \x1b[0m \x1b[34m velit et auctor\x1b[0m \n \x1b[32mmaximus. \x1b[0m \x1b[33m maximus. \x1b[0m \x1b[34m maximus.\x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Asian \x1b[0m\x1b[1;31m \x1b[0m๐จ๐ณ ่ฏฅๅบๆฏๆไธญๆ๏ผๆฅๆๅ้ฉๆๆๆฌ๏ผ \n\x1b[1;31m \x1b[0m\x1b[1;31m language \x1b[0m\x1b[1;31m \x1b[0m๐ฏ๐ต ใฉใคใใฉใชใฏไธญๅฝ่ชใๆฅๆฌ่ชใ้ๅฝ่ชใฎใใญในใใใตใใผใใใฆใใพใ \n\x1b[1;31m \x1b[0m\x1b[1;31m support \x1b[0m\x1b[1;31m \x1b[0m๐ฐ๐ท ์ด ๋ผ์ด๋ธ๋ฌ๋ฆฌ๋ ์ค๊ตญ์ด, ์ผ๋ณธ์ด ๋ฐ ํ๊ตญ์ด ํ
์คํธ๋ฅผ ์ง์ํฉ๋๋ค \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Markup \x1b[0m\x1b[1;31m \x1b[0m\x1b[1;35mRich\x1b[0m supports a simple \x1b[3mbbcode\x1b[0m like \x1b[1mmarkup\x1b[0m for \x1b[33mcolor\x1b[0m, \x1b[4mstyle\x1b[0m, and emoji! ๐ ๐ ๐ ๐ป โฆ \n ๐ \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Tables \x1b[0m\x1b[1;31m \x1b[0m\x1b[1m \x1b[0m\x1b[1;32mDate\x1b[0m\x1b[1m \x1b[0m\x1b[1m \x1b[0m \x1b[1m \x1b[0m\x1b[1;34mTitle\x1b[0m\x1b[1m \x1b[0m\x1b[1m \x1b[0m \x1b[1m \x1b[0m\x1b[1;36mProduction Budget\x1b[0m\x1b[1m \x1b[0m \x1b[1m \x1b[0m\x1b[1m \x1b[0m\x1b[1;35mBox Office\x1b[0m\x1b[1m \x1b[0m \n โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ \n \x1b[32m \x1b[0m\x1b[32mDec 20, 2019\x1b[0m\x1b[32m \x1b[0m \x1b[34m \x1b[0m\x1b[34mStar Wars: The Rise of \x1b[0m\x1b[34m \x1b[0m \x1b[36m \x1b[0m\x1b[36m $275,000,000\x1b[0m\x1b[36m \x1b[0m \x1b[35m \x1b[0m\x1b[35m $375,126,118\x1b[0m\x1b[35m \x1b[0m \n \x1b[34m \x1b[0m\x1b[34mSkywalker \x1b[0m\x1b[34m \x1b[0m \n \x1b[2;32m \x1b[0m\x1b[2;32mMay 25, 2018\x1b[0m\x1b[2;32m \x1b[0m \x1b[2;34m \x1b[0m\x1b[1;2;34mSolo\x1b[0m\x1b[2;34m: A Star Wars Story \x1b[0m\x1b[2;34m \x1b[0m \x1b[2;36m \x1b[0m\x1b[2;36m $275,000,000\x1b[0m\x1b[2;36m \x1b[0m \x1b[2;35m \x1b[0m\x1b[2;35m $393,151,347\x1b[0m\x1b[2;35m \x1b[0m \n \x1b[32m \x1b[0m\x1b[32mDec 15, 2017\x1b[0m\x1b[32m \x1b[0m \x1b[34m \x1b[0m\x1b[34mStar Wars Ep. VIII: The Last \x1b[0m\x1b[34m \x1b[0m \x1b[36m \x1b[0m\x1b[36m $262,000,000\x1b[0m\x1b[36m \x1b[0m \x1b[35m \x1b[0m\x1b[1;35m$1,332,539,889\x1b[0m\x1b[35m \x1b[0m \n \x1b[34m \x1b[0m\x1b[34mJedi \x1b[0m\x1b[34m \x1b[0m \n \x1b[2;32m \x1b[0m\x1b[2;32mMay 19, 1999\x1b[0m\x1b[2;32m \x1b[0m \x1b[2;34m \x1b[0m\x1b[2;34mStar Wars Ep. 
\x1b[0m\x1b[1;2;34mI\x1b[0m\x1b[2;34m: \x1b[0m\x1b[2;3;34mThe phantom \x1b[0m\x1b[2;34m \x1b[0m\x1b[2;34m \x1b[0m \x1b[2;36m \x1b[0m\x1b[2;36m $115,000,000\x1b[0m\x1b[2;36m \x1b[0m \x1b[2;35m \x1b[0m\x1b[2;35m$1,027,044,677\x1b[0m\x1b[2;35m \x1b[0m \n \x1b[2;34m \x1b[0m\x1b[2;3;34mMenace\x1b[0m\x1b[2;34m \x1b[0m\x1b[2;34m \x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Syntax \x1b[0m\x1b[1;31m \x1b[0m\x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 1 \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mdef\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;166;226;46;48;2;39;40;34miter_last\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m(\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mvalues\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mIterable\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m[\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mT\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m]\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m)\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m-\x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m>\x1b[0m \x1b[1m{\x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31mhighlighting\x1b[0m\x1b[1;31m \x1b[0m\x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 2 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ \x1b[0m\x1b[38;2;230;219;116;48;2;39;40;34m\"\"\"Iterate and generate a tuple w\x1b[0m \x1b[2;32mโ \x1b[0m\x1b[32m'foo'\x1b[0m: \x1b[1m[\x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m & \x1b[0m\x1b[1;31m \x1b[0m\x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 3 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34miter_values\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m=\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m 
\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34miter\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m(\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mvalues\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m)\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ โ \x1b[0m\x1b[1;34m3.1427\x1b[0m, \n\x1b[1;31m \x1b[0m\x1b[1;31m pretty \x1b[0m\x1b[1;31m \x1b[0m\x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 4 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mtry\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ โ \x1b[0m\x1b[1m(\x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m printing \x1b[0m\x1b[1;31m \x1b[0m\x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 5 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ โ \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mprevious_value\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m=\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mnext\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m(\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34miter_va\x1b[0m \x1b[2;32mโ โ โ \x1b[0m\x1b[32m'Paul Atriedies'\x1b[0m, \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 6 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mexcept\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;166;226;46;48;2;39;40;34mStopIteration\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ โ โ \x1b[0m\x1b[32m'Vladimir Harkonnen'\x1b[0m, \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 7 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ โ \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mreturn\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ โ โ \x1b[0m\x1b[32m'Thufir Haway'\x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 8 
\x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mfor\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mvalue\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34min\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34miter_values\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m:\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ โ \x1b[0m\x1b[1m)\x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m 9 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ โ \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34myield\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mFalse\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m,\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mprevious_value\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ \x1b[0m\x1b[1m]\x1b[0m, \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m10 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ โ \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mprevious_value\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;249;38;114;48;2;39;40;34m=\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mvalue\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[2;32mโ \x1b[0m\x1b[32m'atomic'\x1b[0m: \x1b[1m(\x1b[0m\x1b[3;91mFalse\x1b[0m, \x1b[3;92mTrue\x1b[0m, \x1b[3;35mNone\x1b[0m\x1b[1m)\x1b[0m \n \x1b[1;38;2;227;227;221;48;2;39;40;34m \x1b[0m\x1b[38;2;101;102;96;48;2;39;40;34m11 \x1b[0m\x1b[2;38;2;117;113;94;48;2;39;40;34mโ \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34myield\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;102;217;239;48;2;39;40;34mTrue\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m,\x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34m \x1b[0m\x1b[38;2;248;248;242;48;2;39;40;34mprevious_value\x1b[0m\x1b[48;2;39;40;34m \x1b[0m \x1b[1m}\x1b[0m 
\n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m Markdown \x1b[0m\x1b[1;31m \x1b[0m\x1b[36m# Markdown\x1b[0m โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ \n โ \x1b[1mMarkdown\x1b[0m โ \n \x1b[36mSupports much of the *markdown*, \x1b[0m โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ \n \x1b[36m__syntax__!\x1b[0m \n Supports much of the \x1b[3mmarkdown\x1b[0m, \x1b[1msyntax\x1b[0m! \n \x1b[36m- Headers\x1b[0m \n \x1b[36m- Basic formatting: **bold**, *italic*, \x1b[0m \x1b[1;33m โข \x1b[0mHeaders \n \x1b[36m`code`\x1b[0m \x1b[1;33m โข \x1b[0mBasic formatting: \x1b[1mbold\x1b[0m, \x1b[3mitalic\x1b[0m, \x1b[97;40mcode\x1b[0m \n \x1b[36m- Block quotes\x1b[0m \x1b[1;33m โข \x1b[0mBlock quotes \n \x1b[36m- Lists, and more...\x1b[0m \x1b[1;33m โข \x1b[0mLists, and more... \n \x1b[36m \x1b[0m \n\x1b[1;31m \x1b[0m \n\x1b[1;31m \x1b[0m\x1b[1;31m +more! \x1b[0m\x1b[1;31m \x1b[0mProgress bars, columns, styled logging handler, tracebacks, etc... \n\x1b[1;31m \x1b[0m \n"
diff --git a/tests/test_live.py b/tests/test_live.py
new file mode 100644
index 0000000000..8bf55bed3e
--- /dev/null
+++ b/tests/test_live.py
@@ -0,0 +1,160 @@
+# encoding=utf-8
+import io
+import time
+from typing import Optional
+
+# import pytest
+from rich.console import Console
+from rich.live import Live
+
+
+def create_capture_console(
+ *, width: int = 60, height: int = 80, force_terminal: Optional[bool] = True
+) -> Console:
+ return Console(
+ width=width,
+ height=height,
+ file=io.StringIO(),
+ force_terminal=force_terminal,
+ legacy_windows=False,
+ color_system=None, # use no color system to reduce complexity of output
+ )
+
+
+def test_live_state() -> None:
+
+ with Live("") as live:
+ assert live._started
+ live.start()
+
+ assert live.renderable == ""
+
+ assert live._started
+ live.stop()
+ assert not live._started
+
+ assert not live._started
+
+
+def test_growing_display() -> None:
+ console = create_capture_console()
+ console.begin_capture()
+ with Live(console=console, auto_refresh=False) as live:
+ display = ""
+ for step in range(10):
+ display += f"Step {step}\n"
+ live.update(display, refresh=True)
+ output = console.end_capture()
+ assert (
+ output
+ == "\x1b[?25lStep 0\n\r\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\nStep 9\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\nStep 9\n\n\x1b[?25h"
+ )
+
+
+def test_growing_display_transient() -> None:
+ console = create_capture_console()
+ console.begin_capture()
+ with Live(console=console, auto_refresh=False, transient=True) as live:
+ display = ""
+ for step in range(10):
+ display += f"Step {step}\n"
+ live.update(display, refresh=True)
+ output = console.end_capture()
+ assert (
+ output
+ == "\x1b[?25lStep 0\n\r\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\nStep 9\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\nStep 9\n\n\x1b[?25h\r\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K"
+ )
+
+
+def test_growing_display_overflow_ellipsis() -> None:
+ console = create_capture_console(height=5)
+ console.begin_capture()
+ with Live(
+ console=console, auto_refresh=False, vertical_overflow="ellipsis"
+ ) as live:
+ display = ""
+ for step in range(10):
+ display += f"Step {step}\n"
+ live.update(display, refresh=True)
+ output = console.end_capture()
+ assert (
+ output
+ == "\x1b[?25lStep 0\n\r\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\n ... \r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\n ... \r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\n ... \r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\n ... \r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\n ... \r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\n ... \r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\nStep 9\n\n\x1b[?25h"
+ )
+
+
+def test_growing_display_overflow_crop() -> None:
+ console = create_capture_console(height=5)
+ console.begin_capture()
+ with Live(console=console, auto_refresh=False, vertical_overflow="crop") as live:
+ display = ""
+ for step in range(10):
+ display += f"Step {step}\n"
+ live.update(display, refresh=True)
+ output = console.end_capture()
+ assert (
+ output
+ == "\x1b[?25lStep 0\n\r\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\nStep 9\n\n\x1b[?25h"
+ )
+
+
+def test_growing_display_overflow_visible() -> None:
+ console = create_capture_console(height=5)
+ console.begin_capture()
+ with Live(console=console, auto_refresh=False, vertical_overflow="visible") as live:
+ display = ""
+ for step in range(10):
+ display += f"Step {step}\n"
+ live.update(display, refresh=True)
+ output = console.end_capture()
+ assert (
+ output
+ == "\x1b[?25lStep 0\n\r\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\nStep 9\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\nStep 9\n\n\x1b[?25h"
+ )
+
+
+def test_growing_display_autorefresh() -> None:
+ """Test generating a table but using auto-refresh from threading"""
+ console = create_capture_console()
+
+ console = create_capture_console(height=5)
+ console.begin_capture()
+ with Live(console=console, auto_refresh=True, vertical_overflow="visible") as live:
+ display = ""
+ for step in range(10):
+ display += f"Step {step}\n"
+ live.update(display)
+ time.sleep(0.2)
+
+ # no way to truly test w/ multithreading, just make sure it doesn't crash
+
+
+def test_growing_display_console_redirect() -> None:
+ console = create_capture_console()
+ console.begin_capture()
+ with Live(console=console, auto_refresh=False) as live:
+ display = ""
+ for step in range(10):
+ console.print(f"Running step {step}")
+ display += f"Step {step}\n"
+ live.update(display, refresh=True)
+ output = console.end_capture()
+ assert (
+ output
+ == "\x1b[?25lRunning step 0\n\r\x1b[2KStep 0\n\r\x1b[2K\x1b[1A\x1b[2KRunning step 1\nStep 0\n\r\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KRunning step 2\nStep 0\nStep 1\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KRunning step 3\nStep 0\nStep 1\nStep 2\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KRunning step 4\nStep 0\nStep 1\nStep 2\nStep 3\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KRunning step 5\nStep 0\nStep 1\nStep 2\nStep 3\nStep 4\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KRunning step 6\nStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KRunning step 7\nStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KRunning step 8\nStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 
8\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KRunning step 9\nStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\nStep 9\n\r\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2K\x1b[1A\x1b[2KStep 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\nStep 9\n\n\x1b[?25h"
+ )
+
+
+def test_growing_display_file_console() -> None:
+ console = create_capture_console(force_terminal=False)
+ console.begin_capture()
+ with Live(console=console, auto_refresh=False) as live:
+ display = ""
+ for step in range(10):
+ display += f"Step {step}\n"
+ live.update(display, refresh=True)
+ output = console.end_capture()
+ assert (
+ output
+ == "Step 0\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5\nStep 6\nStep 7\nStep 8\nStep 9\n"
+ )
| diff --git a/.gitignore b/.gitignore
index 233804adff..bc0dfdb715 100644
--- a/.gitignore
+++ b/.gitignore
@@ -6,6 +6,7 @@ mypy_report
docs/build
docs/source/_build
tools/*.txt
+playground/
# Byte-compiled / optimized / DLL files
__pycache__/
diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index de322cbf7e..57530923cb 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -7,3 +7,4 @@ The following people have contributed to the development of Rich:
- [Oleksis Fraga](https://github.com/oleksis)
- [Hedy Li](https://github.com/hedythedev)
- [Will McGugan](https://github.com/willmcgugan)
+- [Nathan Page](https://github.com/nathanrpage97)
diff --git a/docs/source/index.rst b/docs/source/index.rst
index d7cf613426..f40541a2f3 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -19,16 +19,17 @@ Welcome to Rich's documentation!
logging.rst
traceback.rst
prompt.rst
-
+
tables.rst
padding.rst
- panel.rst
- group.rst
+ panel.rst
+ group.rst
columns.rst
+ live.rst
progress.rst
markdown.rst
- syntax.rst
-
+ syntax.rst
+
protocol.rst
reference.rst
diff --git a/docs/source/live.rst b/docs/source/live.rst
new file mode 100644
index 0000000000..f1d6bb4aa7
--- /dev/null
+++ b/docs/source/live.rst
@@ -0,0 +1,172 @@
+.. _live:
+
+Live Display
+============
+
+Rich can continuously display updated information for any renderable.
+
+To see some live display examples, try this from the command line::
+
+ python -m rich.live
+
+.. note::
+
+   If you see an ellipsis ("..."), this indicates that the terminal is not tall enough to show the full table.
+
+Basic Usage
+-----------
+
+The basic usage can be split into two use cases.
+
+1. Same Renderable
+~~~~~~~~~~~~~~~~~~
+
+When keeping the same renderable, you simply pass the :class:`~rich.console.RenderableType` you would like to see updating and provide
+a ``refresh_per_second`` parameter. The :class:`~rich.live.Live` instance will automatically update the console at the provided refresh rate.
+
+
+**Example**::
+
+ import time
+
+ from rich.live import Live
+ from rich.table import Table
+
+ table = Table()
+ table.add_column("Row ID")
+ table.add_column("Description")
+ table.add_column("Level")
+
+ with Live(table, refresh_per_second=4): # update 4 times a second to feel fluid
+ for row in range(12):
+ time.sleep(0.4) # arbitrary delay
+ # update the renderable internally
+ table.add_row(f"{row}", f"description {row}", "[red]ERROR")
+
+
+2. New Renderable
+~~~~~~~~~~~~~~~~~
+
+You can also continually provide new renderables to :class:`~rich.live.Live` using the :meth:`~rich.live.Live.update` method. This allows you to
+completely change what is rendered live.
+
+**Example**::
+
+ import random
+ import time
+
+ from rich.live import Live
+ from rich.table import Table
+
+
+ def generate_table() -> Table:
+
+ table = Table()
+ table.add_column("ID")
+ table.add_column("Value")
+ table.add_column("Status")
+
+ for row in range(random.randint(2, 6)):
+ value = random.random() * 100
+ table.add_row(
+ f"{row}", f"{value:3.2f}", "[red]ERROR" if value < 50 else "[green]SUCCESS"
+ )
+ return table
+
+
+ with Live(refresh_per_second=4) as live:
+ for _ in range(40):
+ time.sleep(0.4)
+ live.update(generate_table())
+
+Advanced Usage
+--------------
+
+Transient Display
+~~~~~~~~~~~~~~~~~
+
+Normally, when you exit the live context manager (or call :meth:`~rich.live.Live.stop`) the last refreshed item remains in the terminal with the cursor on the following line.
+You can also make the live display disappear on exit by setting ``transient=True`` on the Live constructor. Here's an example::
+
+ with Live(transient=True) as live:
+ ...
+
+Auto refresh
+~~~~~~~~~~~~
+
+By default, the live display will refresh 4 times a second. You can set the refresh rate with the ``refresh_per_second`` argument on the :class:`~rich.live.Live` constructor.
+You should set this to something lower than 4 if you know your updates will not be that frequent, or higher for a smoother feel.
+
+You might want to disable auto-refresh entirely if your updates are not very frequent, which you can do by setting ``auto_refresh=False`` on the constructor.
+If you disable auto-refresh you will need to call :meth:`~rich.live.Live.refresh` manually or :meth:`~rich.live.Live.update` with ``refresh=True``.
+
+Vertical Overflow
+~~~~~~~~~~~~~~~~~
+
+By default, the live display will show an ellipsis if the renderable is too large for the terminal. You can adjust this by setting the
+``vertical_overflow`` argument on the :class:`~rich.live.Live` constructor.
+
+- crop: Show renderable up to the terminal height. The rest is hidden.
+- ellipsis: Similar to crop except last line of the terminal is replaced with "...". This is the default behavior.
+- visible: Will allow the whole renderable to be shown. Note that the display cannot be properly cleared in this mode.
+
+.. note::
+
+ Once the live display stops on a non-transient renderable, the last frame will render as **visible** since it doesn't have to be cleared.
+
+Complex Renders
+~~~~~~~~~~~~~~~
+
+Refer to the :ref:`Render Groups` documentation for details on combining multiple renderables so that they may be passed into the :class:`~rich.live.Live` constructor
+or :meth:`~rich.live.Live.update` method.
+
+For more powerful structuring it is also possible to use nested tables.
+
+
+Print / log
+~~~~~~~~~~~
+
+The Live class will create an internal Console object which you can access via ``live.console``. If you print or log to this console, the output will be displayed *above* the live display. Here's an example::
+
+ import time
+
+ from rich.live import Live
+ from rich.table import Table
+
+ table = Table()
+ table.add_column("Row ID")
+ table.add_column("Description")
+ table.add_column("Level")
+
+    with Live(table, refresh_per_second=4) as live:  # update 4 times a second to feel fluid
+ for row in range(12):
+            live.console.print(f"Working on row #{row}")
+ time.sleep(0.4)
+ table.add_row(f"{row}", f"description {row}", "[red]ERROR")
+
+
+If you have another Console object you want to use, pass it in to the :class:`~rich.live.Live` constructor. Here's an example::
+
+ from my_project import my_console
+
+ with Live(console=my_console) as live:
+ my_console.print("[bold blue]Starting work!")
+ ...
+
+.. note::
+
+   If you are passing in a file console, the live display will only show the last item once the live context is exited.
+
+Redirecting stdout / stderr
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To avoid breaking the live display visuals, Rich will redirect ``stdout`` and ``stderr`` so that you can use the builtin ``print`` statement.
+This feature is enabled by default, but you can disable it by setting ``redirect_stdout`` or ``redirect_stderr`` to ``False``.
+
+
+Examples
+--------
+
+See `table_movie.py <https://github.com/willmcgugan/rich/blob/master/examples/table_movie.py>`_ and
+`top_lite_simulator.py <https://github.com/willmcgugan/rich/blob/master/examples/top_lite_simulator.py>`_
+for deeper examples of live displaying.
diff --git a/docs/source/reference.rst b/docs/source/reference.rst
index fe87ad5c84..a4a324555e 100644
--- a/docs/source/reference.rst
+++ b/docs/source/reference.rst
@@ -3,15 +3,16 @@ Reference
.. toctree::
:maxdepth: 3
-
+
reference/align.rst
reference/bar.rst
reference/color.rst
reference/columns.rst
reference/console.rst
reference/emoji.rst
- reference/highlighter.rst
+ reference/highlighter.rst
reference/init.rst
+ reference/live.rst
reference/logging.rst
reference/markdown.rst
reference/markup.rst
diff --git a/docs/source/reference/live.rst b/docs/source/reference/live.rst
new file mode 100644
index 0000000000..516ed8f24e
--- /dev/null
+++ b/docs/source/reference/live.rst
@@ -0,0 +1,5 @@
+rich.live
+=========
+
+.. automodule:: rich.live
+ :members:
\ No newline at end of file
| [
{
"components": [
{
"doc": "",
"lines": [
16,
35
],
"name": "Process",
"signature": "class Process:",
"type": "class"
},
{
"doc": "",
"lines": [
26,
31
],
"name": "Process.me... | [
"tests/test_live.py::test_live_state",
"tests/test_live.py::test_growing_display",
"tests/test_live.py::test_growing_display_transient",
"tests/test_live.py::test_growing_display_overflow_ellipsis",
"tests/test_live.py::test_growing_display_overflow_crop",
"tests/test_live.py::test_growing_display_overflo... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add live updating for any render
## Type of changes
- [ ] Bug fix
- [x] New feature
- [x] Documentation / docstrings
- [x] Tests
- [ ] Other
## Checklist
- [x] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [x] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [x] I've added tests for new code.
- [x] I accept that @willmcgugan may be pedantic in the code review.
## Description
I've been looking at ways to have a live updating data table from `rich`. Whenever I used `LiveRender`, it always ran into issues if something else was being logged.
So with that in mind I took what was there already from the `Progress` component and made a generalized Live Updating view. It allows for the component to be updated by the user with an `update()` function. It currently isn't threaded like Progress is, but could be expanded maybe with a LiveThreaded?
### Limitations
If the rendered component is too large, it will fail to fully clear any of the renderable above the console view. I have messed around with creating a `safe` flag that will have it print "the terminal is too small" instead when this is true. I didn't want to add it here yet before getting feedback.
### Showcase

[Some gist examples](https://gist.github.com/nathanrpage97/6eb6b92cc55a7536ad344709bfabe4e9)
----------
</request>
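The lifecycle the request describes — enter/start, update the renderable, exit/stop — can be sketched with a minimal stand-in. This uses hypothetical names and is not Rich's actual implementation; it only mirrors the state that `test_live_state` in the test patch checks (`_started`, `renderable`):

```python
class MiniLive:
    """Toy stand-in for the proposed Live: tracks started state and the
    current renderable, with a context-manager lifecycle."""

    def __init__(self, renderable=""):
        self.renderable = renderable
        self._started = False

    def start(self):
        self._started = True  # calling start() twice is harmless

    def stop(self):
        self._started = False

    def update(self, renderable, *, refresh=False):
        # Swap what is "displayed"; a real Live would also repaint here
        # when refresh=True.
        self.renderable = renderable

    def __enter__(self):
        self.start()
        return self

    def __exit__(self, *exc):
        self.stop()


with MiniLive("") as live:
    assert live._started and live.renderable == ""
    live.update("Step 0")
assert not live._started and live.renderable == "Step 0"
```

The real class additionally hooks into the console's render pipeline and optionally spawns a refresh thread, but the state machine above is the core of `start`/`stop`/`update`.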
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in examples/top_lite_simulator.py]
(definition of Process:)
class Process:
(definition of Process.memory_str:)
def memory_str(self) -> str:
(definition of Process.time_str:)
def time_str(self) -> str:
(definition of generate_process:)
def generate_process(pid: int) -> Process:
(definition of create_process_table:)
def create_process_table(height: int) -> Table:
[end of new definitions in examples/top_lite_simulator.py]
[start of new definitions in rich/live.py]
(definition of _RefreshThread:)
class _RefreshThread(Thread):
"""A thread that calls refresh() at regular intervals."""
(definition of _RefreshThread.__init__:)
def __init__(self, live: "Live", refresh_per_second: float) -> None:
(definition of _RefreshThread.stop:)
def stop(self) -> None:
(definition of _RefreshThread.run:)
def run(self) -> None:
(definition of _LiveRender:)
class _LiveRender(LiveRender):
(definition of _LiveRender.__init__:)
def __init__(self, live: "Live", renderable: RenderableType) -> None:
(definition of _LiveRender.__rich_console__:)
def __rich_console__( self, console: Console, options: ConsoleOptions ) -> RenderResult:
(definition of Live:)
class Live(JupyterMixin, RenderHook):
"""Renders an auto-updating live display of any given renderable.
Args:
    renderable (RenderableType, optional): The renderable to live display. Defaults to displaying nothing.
    console (Console, optional): Optional Console instance. Defaults to an internal Console instance writing to stdout.
auto_refresh (bool, optional): Enable auto refresh. If disabled, you will need to call `refresh()` or `update()` with refresh flag. Defaults to True
refresh_per_second (float, optional): Number of times per second to refresh the live display. Defaults to 1.
transient (bool, optional): Clear the renderable on exit. Defaults to False.
redirect_stdout (bool, optional): Enable redirection of stdout, so ``print`` may be used. Defaults to True.
redirect_stderr (bool, optional): Enable redirection of stderr. Defaults to True.
vertical_overflow (VerticalOverflowMethod, optional): How to handle renderable when it is too tall for the console. Defaults to "ellipsis"."""
(definition of Live.__init__:)
def __init__( self, renderable: RenderableType = "", *, console: Console = None, auto_refresh: bool = True, refresh_per_second: float = 4, transient: bool = False, redirect_stdout: bool = True, redirect_stderr: bool = True, vertical_overflow: VerticalOverflowMethod = "ellipsis", ) -> None:
(definition of Live.start:)
def start(self) -> None:
"""Start live rendering display."""
(definition of Live.stop:)
def stop(self) -> None:
"""Stop live rendering display."""
(definition of Live.__enter__:)
def __enter__(self) -> "Live":
(definition of Live.__exit__:)
def __exit__(self, exc_type, exc_val, exc_tb) -> None:
(definition of Live._enable_redirect_io:)
def _enable_redirect_io(self):
"""Enable redirecting of stdout / stderr."""
(definition of Live.renderable:)
def renderable(self) -> RenderableType:
"""Get the renderable that is being displayed
Returns:
RenderableType: Displayed renderable."""
(definition of Live.update:)
def update(self, renderable: RenderableType, *, refresh: bool = False) -> None:
"""Update the renderable that is being displayed
Args:
renderable (RenderableType): New renderable to use.
refresh (bool, optional): Refresh the display. Defaults to False."""
(definition of Live.refresh:)
def refresh(self) -> None:
"""Update the display of the Live Render."""
(definition of Live._disable_redirect_io:)
def _disable_redirect_io(self):
"""Disable redirecting of stdout / stderr."""
(definition of Live.process_renderables:)
def process_renderables( self, renderables: List[ConsoleRenderable] ) -> List[ConsoleRenderable]:
"""Process renderables to restore cursor and display progress."""
[end of new definitions in rich/live.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | b0661de34bab35af9b4b1d3ba8e28b186b225e84 | |
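The `_RefreshThread` defined above can be approximated with the stdlib `threading` module alone. This sketch reflects the assumed behavior (call a refresh callback at a fixed interval until stopped); it is not Rich's actual code:

```python
import threading
import time


class RefreshThread(threading.Thread):
    """Call `refresh` roughly `refresh_per_second` times per second
    until stop() is requested."""

    def __init__(self, refresh, refresh_per_second: float = 4.0):
        super().__init__(daemon=True)
        self.refresh = refresh
        self.interval = 1.0 / refresh_per_second
        self.done = threading.Event()

    def stop(self) -> None:
        self.done.set()

    def run(self) -> None:
        # Event.wait doubles as an interruptible sleep, so stop() takes
        # effect within one interval instead of blocking a full sleep().
        while not self.done.wait(self.interval):
            self.refresh()


ticks = []
thread = RefreshThread(lambda: ticks.append(None), refresh_per_second=100)
thread.start()
time.sleep(0.1)
thread.stop()
thread.join()
```

Using an `Event` rather than `time.sleep` is the key design choice: it lets `stop()` interrupt the wait promptly, which matters when tearing down a live display.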
Textualize__rich-376 | 376 | Textualize/rich | null | a83ee864e67d97be926894c7b5d3cf470194d6c1 | 2020-10-11T16:35:48Z | diff --git a/CHANGELOG.md b/CHANGELOG.md
index f813ffeb0a..132fafbbc2 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -18,6 +18,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Added box.SQUARE_DOUBLE_HEAD
- Added highlighting of EUI-48 and EUI-64 (MAC addresses)
- Added Console.pager
+- Added Console.out
### Changed
diff --git a/docs/source/console.rst b/docs/source/console.rst
index a3c690c794..f5565ee6b3 100644
--- a/docs/source/console.rst
+++ b/docs/source/console.rst
@@ -69,6 +69,15 @@ The :meth:`~rich.console.Console.log` methods offers the same capabilities as pr
To help with debugging, the log() method has a ``log_locals`` parameter. If you set this to ``True``, Rich will display a table of local variables where the method was called.
+Low level output
+----------------
+
+In addition to :meth:`~rich.console.Console.print` and :meth:`~rich.console.Console.log`, Rich has a :meth:`~rich.console.Console.out` method which provides a lower-level way of writing to the terminal. The out() method converts all of the positional arguments to strings and won't pretty print, word wrap, or apply markup to the output, but it can apply a basic style and will optionally do highlighting.
+
+Here's an example::
+
+ >>> console.out("Locals", locals())
+
Justify / Alignment
-------------------
diff --git a/rich/console.py b/rich/console.py
index 64d0109200..2e3afe8573 100644
--- a/rich/console.py
+++ b/rich/console.py
@@ -1012,6 +1012,37 @@ def control(self, control_codes: Union["Control", str]) -> None:
self._buffer.append(Segment.control(str(control_codes)))
self._check_buffer()
+ def out(
+ self,
+ *objects: Any,
+ sep=" ",
+ end="\n",
+ style: Union[str, Style] = None,
+ highlight: bool = True,
+ ) -> None:
+        """Output to the terminal. This is a low-level way of writing to the terminal which, unlike
+        :meth:`~rich.console.Console.print`, doesn't pretty print, wrap text, or apply markup, but will
+        apply highlighting and a basic style.
+
+ Args:
+ sep (str, optional): String to write between print data. Defaults to " ".
+ end (str, optional): String to write at end of print data. Defaults to "\\n".
+ style (Union[str, Style], optional): A style to apply to output. Defaults to None.
+            highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use the console default. Defaults to ``True``.
+ """
+ raw_output: str = sep.join(str(_object) for _object in objects)
+ self.print(
+ raw_output,
+ style=style,
+ highlight=highlight,
+ emoji=False,
+ markup=False,
+ no_wrap=True,
+ overflow="ignore",
+ crop=False,
+ end=end,
+ )
+
def print(
self,
*objects: Any,
| diff --git a/tests/test_console.py b/tests/test_console.py
index eca300095a..8dbe06933a 100644
--- a/tests/test_console.py
+++ b/tests/test_console.py
@@ -370,3 +370,10 @@ def mock_pager(content: str) -> None:
console.print("[bold link https:/example.org]Hello World")
assert pager_content == "Hello World\n"
+
+
+def test_out() -> None:
+ console = Console(width=10)
+ console.begin_capture()
+ console.out(*(["foo bar"] * 5), sep=".", end="X")
+ assert console.end_capture() == "foo bar.foo bar.foo bar.foo bar.foo barX"
diff --git a/tests/test_padding.py b/tests/test_padding.py
index fa86b5b176..419f0503dc 100644
--- a/tests/test_padding.py
+++ b/tests/test_padding.py
@@ -32,7 +32,11 @@ def test_rich_console():
renderable = "test renderable"
style = Style(color="red")
options = ConsoleOptions(
- min_width=10, max_width=20, is_terminal=False, encoding="utf-8"
+ legacy_windows=False,
+ min_width=10,
+ max_width=20,
+ is_terminal=False,
+ encoding="utf-8",
)
expected_outputs = [
| diff --git a/CHANGELOG.md b/CHANGELOG.md
index f813ffeb0a..132fafbbc2 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -18,6 +18,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Added box.SQUARE_DOUBLE_HEAD
- Added highlighting of EUI-48 and EUI-64 (MAC addresses)
- Added Console.pager
+- Added Console.out
### Changed
diff --git a/docs/source/console.rst b/docs/source/console.rst
index a3c690c794..f5565ee6b3 100644
--- a/docs/source/console.rst
+++ b/docs/source/console.rst
@@ -69,6 +69,15 @@ The :meth:`~rich.console.Console.log` methods offers the same capabilities as pr
To help with debugging, the log() method has a ``log_locals`` parameter. If you set this to ``True``, Rich will display a table of local variables where the method was called.
+Low level output
+----------------
+
+In addition to :meth:`~rich.console.Console.print` and :meth:`~rich.console.Console.log`, Rich has a :meth:`~rich.console.Console.out` method which provides a lower-level way of writing to the terminal. The out() method converts all of the positional arguments to strings and won't pretty print, word wrap, or apply markup to the output, but it can apply a basic style and will optionally do highlighting.
+
+Here's an example::
+
+ >>> console.out("Locals", locals())
+
Justify / Alignment
-------------------
| [
{
"components": [
{
"doc": "Output to the terminal. This is a low-level way of writing to the terminal which unlike\n:meth:`~rich.console.Console.print` doesn't pretty print, wrap text, nor markup, but will highlighting\nand apply basic style.\n\nArgs:\n sep (str, optional): String to write bet... | [
"tests/test_console.py::test_out"
] | [
"tests/test_console.py::test_dumb_terminal",
"tests/test_console.py::test_16color_terminal",
"tests/test_console.py::test_truecolor_terminal",
"tests/test_console.py::test_console_options_update",
"tests/test_console.py::test_init",
"tests/test_console.py::test_size",
"tests/test_console.py::test_repr",... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
console out
Adds a Console.out method, which is like a low-level print that does simple styling only.
## Type of changes
- [ ] Bug fix
- [x] New feature
- [x] Documentation / docstrings
- [ ] Tests
- [ ] Other
## Checklist
- [x] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [x] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [x] I've added tests for new code.
- [x] I accept that @willmcgugan may be pedantic in the code review.
## Description
Please describe your changes here. If this fixes a bug, please link to the issue, if possible.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in rich/console.py]
(definition of Console.out:)
def out( self, *objects: Any, sep=" ", end="\n", style: Union[str, Style] = None, highlight: bool = True, ) -> None:
"""Output to the terminal. This is a low-level way of writing to the terminal which unlike
:meth:`~rich.console.Console.print` doesn't pretty print, wrap text, nor markup, but will highlighting
and apply basic style.
Args:
sep (str, optional): String to write between print data. Defaults to " ".
end (str, optional): String to write at end of print data. Defaults to "\n".
style (Union[str, Style], optional): A style to apply to output. Defaults to None.
highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default. Defaults to ``None``."""
[end of new definitions in rich/console.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | b0661de34bab35af9b4b1d3ba8e28b186b225e84 | |
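As a sanity check of the contract exercised by `test_out` above, here is a minimal, self-contained sketch of the string-joining behaviour that `Console.out` is expected to have. This is not rich's actual implementation (which writes through the console's render buffer and can apply a style and highlighting); it only models the "no wrapping, no markup, no pretty printing" part of the contract.

```python
# Hypothetical sketch: mirrors only the behaviour asserted by test_out.
# NOT rich's implementation -- just the low-level print contract.
def out(*objects, sep=" ", end="\n"):
    """Convert all positional arguments to strings, join them with
    `sep`, and terminate with `end`."""
    return sep.join(str(obj) for obj in objects) + end

# The test in the patch above exercises exactly this call shape:
result = out(*(["foo bar"] * 5), sep=".", end="X")
print(result)  # -> foo bar.foo bar.foo bar.foo bar.foo barX
```

The `width=10` console in the test matters: a regular `Console.print` would word-wrap `"foo bar"` at that width, so the test only passes if `out()` really bypasses wrapping.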
joke2k__faker-1303 | 1,303 | joke2k/faker | null | 9e360fa90eb6bfe3fde4c320ac32f033cc803e00 | 2020-10-11T10:23:02Z | diff --git a/faker/providers/bank/de_CH/__init__.py b/faker/providers/bank/de_CH/__init__.py
new file mode 100644
index 0000000000..6aabca48b3
--- /dev/null
+++ b/faker/providers/bank/de_CH/__init__.py
@@ -0,0 +1,8 @@
+from .. import Provider as BankProvider
+
+
+class Provider(BankProvider):
+ """Implement bank provider for ``de_CH`` locale."""
+
+ bban_format = '#################'
+ country_code = 'CH'
diff --git a/faker/providers/bank/fr_CH/__init__.py b/faker/providers/bank/fr_CH/__init__.py
new file mode 100644
index 0000000000..735a406827
--- /dev/null
+++ b/faker/providers/bank/fr_CH/__init__.py
@@ -0,0 +1,9 @@
+from ..de_CH import Provider as DeChBankProvider
+
+
+class Provider(DeChBankProvider):
+ """Implement bank provider for ``fr_CH`` locale.
+
+ There is no difference from the ``de_CH`` implementation.
+ """
+ pass
diff --git a/faker/providers/bank/it_CH/__init__.py b/faker/providers/bank/it_CH/__init__.py
new file mode 100644
index 0000000000..b5d2dfd23d
--- /dev/null
+++ b/faker/providers/bank/it_CH/__init__.py
@@ -0,0 +1,9 @@
+from ..de_CH import Provider as DeChBankProvider
+
+
+class Provider(DeChBankProvider):
+ """Implement bank provider for ``it_CH`` locale.
+
+ There is no difference from the ``de_CH`` implementation.
+ """
+ pass
| diff --git a/tests/providers/test_bank.py b/tests/providers/test_bank.py
index 2ce36aff55..57b8acabe2 100644
--- a/tests/providers/test_bank.py
+++ b/tests/providers/test_bank.py
@@ -3,6 +3,7 @@
import pytest
from faker.providers.bank import Provider as BankProvider
+from faker.providers.bank.de_CH import Provider as DeChBankProvider
from faker.providers.bank.en_GB import Provider as EnGbBankProvider
from faker.providers.bank.en_PH import Provider as EnPhBankProvider
from faker.providers.bank.es_ES import Provider as EsEsBankProvider
@@ -207,3 +208,28 @@ def test_iban(self, faker, num_samples):
assert is_valid_iban(iban)
assert iban[:2] == TrTrBankProvider.country_code
assert re.fullmatch(r"\d{2}\d{22}", iban[2:])
+
+
+class TestDeCh:
+ """Test de_CH bank provider"""
+
+ def test_bban(self, faker, num_samples):
+ for _ in range(num_samples):
+ assert re.fullmatch(r"\d{17}", faker.bban())
+
+ def test_iban(self, faker, num_samples):
+ for _ in range(num_samples):
+ iban = faker.iban()
+ assert is_valid_iban(iban)
+ assert iban[:2] == DeChBankProvider.country_code
+ assert re.fullmatch(r"\d{19}", iban[2:])
+
+
+class TestFrCh(TestDeCh):
+ """Test fr_CH bank provider"""
+ pass
+
+
+class TestItCh(TestDeCh):
+ """Test it_CH bank provider"""
+ pass
| [
{
"components": [
{
"doc": "Implement bank provider for ``de_CH`` locale.",
"lines": [
4,
8
],
"name": "Provider",
"signature": "class Provider(BankProvider):",
"type": "class"
}
],
"file": "faker/providers/bank/de_CH/__init... | [
"tests/providers/test_bank.py::TestNoNo::test_bban",
"tests/providers/test_bank.py::TestNoNo::test_iban",
"tests/providers/test_bank.py::TestFiFi::test_bban",
"tests/providers/test_bank.py::TestFiFi::test_iban",
"tests/providers/test_bank.py::TestPlPl::test_bban",
"tests/providers/test_bank.py::TestPlPl::... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add Swiss bank provider locales
### What does this changes
Add Swiss `de_CH`, `fr_CH` and `it_CH` bank provider locales.
Closes #1196
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in faker/providers/bank/de_CH/__init__.py]
(definition of Provider:)
class Provider(BankProvider):
"""Implement bank provider for ``de_CH`` locale."""
[end of new definitions in faker/providers/bank/de_CH/__init__.py]
[start of new definitions in faker/providers/bank/fr_CH/__init__.py]
(definition of Provider:)
class Provider(DeChBankProvider):
"""Implement bank provider for ``fr_CH`` locale.
There is no difference from the ``de_CH`` implementation."""
[end of new definitions in faker/providers/bank/fr_CH/__init__.py]
[start of new definitions in faker/providers/bank/it_CH/__init__.py]
(definition of Provider:)
class Provider(DeChBankProvider):
"""Implement bank provider for ``it_CH`` locale.
There is no difference from the ``de_CH`` implementation."""
[end of new definitions in faker/providers/bank/it_CH/__init__.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Fake IBAN for Switzerland should be of 21 characters in length
* Faker version: 4.1.0
Fake IBAN for Swiss locale should be 21 characters long, is 22 characters long now.
### Steps to reproduce
>>> swiss_iban = Faker('de_CH').iban()
>>> swiss_iban
'GB80WQBB93456553336766'
>>> len(swiss_iban)
22
### Expected behavior
>>> swiss_iban = Faker('de_CH').iban()
>>> swiss_iban
'CH4489144168326729379'
>>> len(swiss_iban)
21
### Actual behavior
>>> len(swiss_iban)
22
----------
I made a custom provider to workaround it.
class CHBankProvider(Provider):
country_code = 'CH'
bban_format = '#################'
faker = Faker('CH')
faker.add_provider(CHBankProvider())
>>> swiss_iban = faker.iban()
>>> swiss_iban
'CH4489144168326729379'
>>> len(swiss_iban)
21
Thank you for the report @m-aciek ! Feel free to submit a Pull Request! :)
--------------------
</issues> | 6edfdbf6ae90b0153309e3bf066aa3b2d16494a7 | |
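The `de_CH` provider above only declares a 17-digit `bban_format` and `country_code = 'CH'`; the base `BankProvider` handles numerification and check digits. To illustrate why that yields the 21-character IBAN requested in the issue, here is a self-contained sketch. The `numerify` helper and check-digit step are modelled after faker's base provider and the standard ISO 7064 mod-97-10 rule, not copied from faker's code.

```python
import random
import string

def numerify(fmt: str) -> str:
    """Replace each '#' with a random digit, as faker's numerify does."""
    return "".join(random.choice(string.digits) if c == "#" else c for c in fmt)

def iban(country_code: str = "CH", bban_format: str = "#" * 17) -> str:
    bban = numerify(bban_format)
    # ISO 7064 mod-97-10: append country code + '00', map letters to
    # numbers (A=10..Z=35), then check = 98 - (value mod 97).
    tail = bban + country_code + "00"
    digits = "".join(str(int(ch, 36)) for ch in tail)
    check = 98 - int(digits) % 97
    return country_code + f"{check:02d}" + bban

sample = iban()
print(sample)  # always 21 characters: 'CH' + 2 check digits + 17-digit BBAN
```

This also shows why the default `bban_format` produced 22-character IBANs before the fix: the base provider's BBAN is 18 characters, one more than the Swiss standard.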
Textualize__rich-373 | 373 | Textualize/rich | null | 9c5d790488062909877dc03ace9bf4a4cfc4aa57 | 2020-10-10T14:07:04Z | diff --git a/CHANGELOG.md b/CHANGELOG.md
index d7854b3ea3..f813ffeb0a 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -17,6 +17,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Added ascii_only to ConsoleOptions
- Addded box.SQUARE_DOUBLE_HEAD
- Added highlighting of EUI-48 and EUI-64 (MAC addresses)
+- Added Console.pager
### Changed
diff --git a/docs/source/console.rst b/docs/source/console.rst
index 5d8d824f23..a3c690c794 100644
--- a/docs/source/console.rst
+++ b/docs/source/console.rst
@@ -210,6 +210,25 @@ An alternative way of capturing output is to set the Console file to a :py:class
console.print("[bold red]Hello[/] World")
str_output = console.file.getvalue()
+Paging
+------
+
+If you have some long output to present to the user you can use a *pager* to display it. A pager is typically an application on by your operating system which will at least support pressing a key to scroll, but will often support scrolling up and down through the text and other features.
+
+You can page output from a Console by calling :meth:`~rich.console.Console.pager` which returns a context manger. When the pager exits, anything that was printed will be sent to the pager. Here's an example::
+
+ from rich.__main__ import make_test_card
+ from rich.console import Console
+
+ console = Console()
+ with console.pager():
+ console.print(make_test_card())
+
+Since the default pager on most platforms don't support color, Rich will strip color from the output. If you know that your pager supports color, you can set ``style=True`` when calling the :meth:`~rich.console.Console.pager` method.
+
+.. note::
+ Rich will use the ``PAGER`` environment variable to get the pager command. On Linux and macOS you can set this to ``less -r`` to enable paging with ANSI styles.
+
Terminal detection
------------------
diff --git a/rich/console.py b/rich/console.py
index f9f7ba8085..64d0109200 100644
--- a/rich/console.py
+++ b/rich/console.py
@@ -35,6 +35,7 @@
from .highlighter import NullHighlighter, ReprHighlighter
from .markup import render as render_markup
from .measure import Measurement, measure_renderables
+from .pager import Pager, SystemPager
from .pretty import Pretty
from .scope import render_scope
from .segment import Segment
@@ -205,6 +206,40 @@ def __exit__(self, exc_type, exc_val, exc_tb) -> None:
self.console.pop_theme()
+class PagerContext:
+ """A context manager that 'pages' content. See :meth:`~rich.console.Console.pager` for usage."""
+
+ def __init__(
+ self,
+ console: "Console",
+ pager: Pager = None,
+ styles: bool = False,
+ links: bool = False,
+ ) -> None:
+ self._console = console
+ self.pager = SystemPager() if pager is None else pager
+ self.styles = styles
+ self.links = links
+
+ def __enter__(self) -> "PagerContext":
+ self._console._enter_buffer()
+ return self
+
+ def __exit__(self, exc_type, exc_val, exc_tb) -> None:
+ if exc_type is None:
+ with self._console._lock:
+ buffer: List[Segment] = self._console._buffer[:]
+ del self._console._buffer[:]
+ segments: Iterable[Segment] = buffer
+ if not self.styles:
+ segments = Segment.strip_styles(segments)
+ elif not self.links:
+ segments = Segment.strip_links(segments)
+ content = self._console._render_buffer(segments)
+ self.pager.show(content)
+ self._console._exit_buffer()
+
+
class RenderGroup:
"""Takes a group of renderables and returns a renderable object that renders the group.
@@ -523,7 +558,7 @@ def end_capture(self) -> str:
Returns:
str: Console output.
"""
- render_result = self._render_buffer()
+ render_result = self._render_buffer(self._buffer)
del self._buffer[:]
self._exit_buffer()
return render_result
@@ -667,6 +702,29 @@ def capture(self) -> Capture:
capture = Capture(self)
return capture
+ def pager(
+ self, pager: Pager = None, styles: bool = False, links: bool = False
+ ) -> PagerContext:
+ """A context manager to display anything printed within a "pager". The pager used
+ is defined by the system and will typically support at less pressing a key to scroll.
+
+ Args:
+ pager (Pager, optional): A pager object, or None to use :class:~rich.pager.SystemPager`. Defaults to None.
+ styles (bool, optional): Show styles in pager. Defaults to False.
+ links (bool, optional): Show links in pager. Defaults to False.
+
+ Example:
+ >>> from rich.console import Console
+ >>> from rich.__main__ import make_test_card
+ >>> console = Console()
+ >>> with console.pager():
+ console.print(make_test_card())
+
+ Returns:
+ PagerContext: A context manager.
+ """
+ return PagerContext(self, pager=pager, styles=styles, links=links)
+
def line(self, count: int = 1) -> None:
"""Write new line(s).
@@ -1154,7 +1212,8 @@ def _check_buffer(self) -> None:
display(self._buffer)
del self._buffer[:]
else:
- text = self._render_buffer()
+ text = self._render_buffer(self._buffer[:])
+ del self._buffer[:]
if text:
try:
if WINDOWS: # pragma: no cover
@@ -1169,17 +1228,15 @@ def _check_buffer(self) -> None:
error.reason = f"{error.reason}\n*** You may need to add PYTHONIOENCODING=utf-8 to your environment ***"
raise
- def _render_buffer(self) -> str:
+ def _render_buffer(self, buffer: Iterable[Segment]) -> str:
"""Render buffered output, and clear buffer."""
output: List[str] = []
append = output.append
color_system = self._color_system
legacy_windows = self.legacy_windows
- buffer = self._buffer[:]
if self.record:
with self._record_buffer_lock:
self._record_buffer.extend(buffer)
- del self._buffer[:]
not_terminal = not self.is_terminal
for text, style, is_control in buffer:
if style and not is_control:
diff --git a/rich/pager.py b/rich/pager.py
new file mode 100644
index 0000000000..52e77ef6da
--- /dev/null
+++ b/rich/pager.py
@@ -0,0 +1,33 @@
+from abc import ABC, abstractmethod
+import pydoc
+
+
+class Pager(ABC):
+ """Base class for a pager."""
+
+ @abstractmethod
+ def show(self, content: str) -> None:
+ """Show content in pager.
+
+ Args:
+ content (str): Content to be displayed.
+ """
+
+
+class SystemPager(Pager):
+ """Uses the pager installed on the system."""
+
+ _pager = lambda self, content: pydoc.pager(content)
+
+ def show(self, content: str) -> None:
+ """Use the same pager used by pydoc."""
+ self._pager(content)
+
+
+if __name__ == "__main__": # pragma: no cover
+ from .__main__ import make_test_card
+ from .console import Console
+
+ console = Console()
+ with console.pager(styles=True):
+ console.print(make_test_card())
diff --git a/rich/segment.py b/rich/segment.py
index 21c91321ce..3222f3d2be 100644
--- a/rich/segment.py
+++ b/rich/segment.py
@@ -300,6 +300,30 @@ def simplify(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
last_segment = segment
yield last_segment
+ @classmethod
+ def strip_links(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
+ """Remove all links from an iterable of styles.
+
+ Yields:
+ Segment: Segments with link removed.
+ """
+ for segment in segments:
+ if segment.is_control or segment.style is None:
+ yield segment
+ else:
+ text, style, _is_control = segment
+ yield cls(text, style.update_link(None) if style else None)
+
+ @classmethod
+ def strip_styles(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
+ """Remove all styles from an iterable of segments.
+
+ Yields:
+ Segment: Segments with styles replace with None
+ """
+ for text, _style, is_control in segments:
+ yield cls(text, None, is_control)
+
if __name__ == "__main__": # pragma: no cover
lines = [[Segment("Hello")]]
diff --git a/rich/style.py b/rich/style.py
index a67e2c67f7..4eca636181 100644
--- a/rich/style.py
+++ b/rich/style.py
@@ -522,6 +522,28 @@ def copy(self) -> "Style":
style._null = False
return style
+ def update_link(self, link: str = None) -> "Style":
+ """Get a copy with a different value for link.
+
+ Args:
+ link (str, optional): New value for link. Defaults to None.
+
+ Returns:
+ Style: A new Style instance.
+ """
+ style = self.__new__(Style)
+ style._ansi = self._ansi
+ style._style_definition = self._style_definition
+ style._color = self._color
+ style._bgcolor = self._bgcolor
+ style._attributes = self._attributes
+ style._set_attributes = self._set_attributes
+ style._link = link
+ style._link_id = f"{time()}-{randint(0, 999999)}" if link else ""
+ style._hash = self._hash
+ style._null = False
+ return style
+
def render(
self,
text: str = "",
diff --git a/rich/table.py b/rich/table.py
index 0ff4c49d6c..1745d70048 100644
--- a/rich/table.py
+++ b/rich/table.py
@@ -171,7 +171,7 @@ def __init__(
self.footer_style = footer_style
self.border_style = border_style
self.title_style = title_style
- self.caption_style = title_style
+ self.caption_style = caption_style
self.title_justify = title_justify
self.caption_justify = caption_justify
self._row_count = 0
@@ -241,12 +241,6 @@ def get_row_style(self, index: int) -> StyleType:
def __rich_measure__(self, console: "Console", max_width: int) -> Measurement:
if self.width is not None:
max_width = self.width
-
- # if self.box:
- # max_width -= len(self.columns) - 1
- # if self.show_edge:
- # max_width -= 2
-
if max_width < 0:
return Measurement(0, 0)
| diff --git a/tests/test_console.py b/tests/test_console.py
index 3df8e7cef5..eca300095a 100644
--- a/tests/test_console.py
+++ b/tests/test_console.py
@@ -2,12 +2,14 @@
import os
import sys
import tempfile
+from typing import Optional
import pytest
+from rich import errors
from rich.color import ColorSystem
from rich.console import CaptureError, Console, ConsoleOptions
-from rich import errors
+from rich.pager import SystemPager
from rich.panel import Panel
from rich.style import Style
@@ -346,3 +348,25 @@ def test_bell() -> None:
console.begin_capture()
console.bell()
assert console.end_capture() == "\x07"
+
+
+def test_pager() -> None:
+ console = Console()
+
+ pager_content: Optional[str] = None
+
+ def mock_pager(content: str) -> None:
+ nonlocal pager_content
+ pager_content = content
+
+ pager = SystemPager()
+ pager._pager = mock_pager
+
+ with console.pager(pager):
+ console.print("[bold]Hello World")
+ assert pager_content == "Hello World\n"
+
+ with console.pager(pager, styles=True, links=False):
+ console.print("[bold link https:/example.org]Hello World")
+
+ assert pager_content == "Hello World\n"
diff --git a/tests/test_segment.py b/tests/test_segment.py
index c5e24995af..2c3344dd1b 100644
--- a/tests/test_segment.py
+++ b/tests/test_segment.py
@@ -86,3 +86,13 @@ def test_filter_control():
assert list(Segment.filter_control(segments, is_control=True)) == [
Segment("bar", is_control=True)
]
+
+
+def test_strip_styles():
+ segments = [Segment("foo", Style(bold=True))]
+ assert list(Segment.strip_styles(segments)) == [Segment("foo", None)]
+
+
+def test_strip_links():
+ segments = [Segment("foo", Style(bold=True, link="https://www.example.org"))]
+ assert list(Segment.strip_links(segments)) == [Segment("foo", Style(bold=True))]
| diff --git a/CHANGELOG.md b/CHANGELOG.md
index d7854b3ea3..f813ffeb0a 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -17,6 +17,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Added ascii_only to ConsoleOptions
- Addded box.SQUARE_DOUBLE_HEAD
- Added highlighting of EUI-48 and EUI-64 (MAC addresses)
+- Added Console.pager
### Changed
diff --git a/docs/source/console.rst b/docs/source/console.rst
index 5d8d824f23..a3c690c794 100644
--- a/docs/source/console.rst
+++ b/docs/source/console.rst
@@ -210,6 +210,25 @@ An alternative way of capturing output is to set the Console file to a :py:class
console.print("[bold red]Hello[/] World")
str_output = console.file.getvalue()
+Paging
+------
+
+If you have some long output to present to the user you can use a *pager* to display it. A pager is typically an application on by your operating system which will at least support pressing a key to scroll, but will often support scrolling up and down through the text and other features.
+
+You can page output from a Console by calling :meth:`~rich.console.Console.pager` which returns a context manger. When the pager exits, anything that was printed will be sent to the pager. Here's an example::
+
+ from rich.__main__ import make_test_card
+ from rich.console import Console
+
+ console = Console()
+ with console.pager():
+ console.print(make_test_card())
+
+Since the default pager on most platforms don't support color, Rich will strip color from the output. If you know that your pager supports color, you can set ``style=True`` when calling the :meth:`~rich.console.Console.pager` method.
+
+.. note::
+ Rich will use the ``PAGER`` environment variable to get the pager command. On Linux and macOS you can set this to ``less -r`` to enable paging with ANSI styles.
+
Terminal detection
------------------
| [
{
"components": [
{
"doc": "A context manager that 'pages' content. See :meth:`~rich.console.Console.pager` for usage.",
"lines": [
209,
240
],
"name": "PagerContext",
"signature": "class PagerContext:",
"type": "class"
},
{
... | [
"tests/test_console.py::test_dumb_terminal",
"tests/test_console.py::test_16color_terminal",
"tests/test_console.py::test_truecolor_terminal",
"tests/test_console.py::test_console_options_update",
"tests/test_console.py::test_init",
"tests/test_console.py::test_size",
"tests/test_console.py::test_repr",... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
pager functionality
Adds pager functionality, which uses pydoc to launch a pager and provides hooks for custom pagers.
## Type of changes
- [ ] Bug fix
- [x] New feature
- [ ] Documentation / docstrings
- [ ] Tests
- [ ] Other
## Checklist
- [x] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [x] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [x] I've added tests for new code.
- [x] I accept that @willmcgugan may be pedantic in the code review.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in rich/console.py]
(definition of PagerContext:)
class PagerContext:
"""A context manager that 'pages' content. See :meth:`~rich.console.Console.pager` for usage."""
(definition of PagerContext.__init__:)
def __init__( self, console: "Console", pager: Pager = None, styles: bool = False, links: bool = False, ) -> None:
(definition of PagerContext.__enter__:)
def __enter__(self) -> "PagerContext":
(definition of PagerContext.__exit__:)
def __exit__(self, exc_type, exc_val, exc_tb) -> None:
(definition of Console.pager:)
def pager( self, pager: Pager = None, styles: bool = False, links: bool = False ) -> PagerContext:
"""A context manager to display anything printed within a "pager". The pager used
is defined by the system and will typically support at less pressing a key to scroll.
Args:
pager (Pager, optional): A pager object, or None to use :class:~rich.pager.SystemPager`. Defaults to None.
styles (bool, optional): Show styles in pager. Defaults to False.
links (bool, optional): Show links in pager. Defaults to False.
Example:
>>> from rich.console import Console
>>> from rich.__main__ import make_test_card
>>> console = Console()
>>> with console.pager():
console.print(make_test_card())
Returns:
PagerContext: A context manager."""
[end of new definitions in rich/console.py]
[start of new definitions in rich/pager.py]
(definition of Pager:)
class Pager(ABC):
"""Base class for a pager."""
(definition of Pager.show:)
def show(self, content: str) -> None:
"""Show content in pager.
Args:
content (str): Content to be displayed."""
(definition of SystemPager:)
class SystemPager(Pager):
"""Uses the pager installed on the system."""
(definition of SystemPager.show:)
def show(self, content: str) -> None:
"""Use the same pager used by pydoc."""
[end of new definitions in rich/pager.py]
[start of new definitions in rich/segment.py]
(definition of Segment.strip_links:)
def strip_links(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
"""Remove all links from an iterable of styles.
Yields:
Segment: Segments with link removed."""
(definition of Segment.strip_styles:)
def strip_styles(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
"""Remove all styles from an iterable of segments.
Yields:
Segment: Segments with styles replace with None"""
[end of new definitions in rich/segment.py]
[start of new definitions in rich/style.py]
(definition of Style.update_link:)
def update_link(self, link: str = None) -> "Style":
"""Get a copy with a different value for link.
Args:
link (str, optional): New value for link. Defaults to None.
Returns:
Style: A new Style instance."""
[end of new definitions in rich/style.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | b0661de34bab35af9b4b1d3ba8e28b186b225e84 | |
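The `PagerContext` contract defined above (buffer everything printed inside the `with` block, optionally strip styles, hand the result to a pager on exit) can be modelled with a small self-contained toy. This is not rich's implementation -- rich buffers `Segment` objects and uses `pydoc.pager` via `SystemPager` -- but it shows the same enter/buffer/strip/exit flow, with ANSI SGR stripping standing in for `Segment.strip_styles`.

```python
import re

# Matches ANSI SGR styling sequences such as "\x1b[1m" (bold) and "\x1b[0m".
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

class ToyPagerContext:
    """Toy analogue of PagerContext: collect writes, flush to a pager on exit."""

    def __init__(self, pager, styles=False):
        self.pager = pager      # callable that receives the final text
        self.styles = styles    # keep ANSI styling if True
        self._buffer = []

    def write(self, text):
        self._buffer.append(text)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Like PagerContext.__exit__: only page the content on a clean exit.
        if exc_type is None:
            content = "".join(self._buffer)
            if not self.styles:
                content = ANSI_RE.sub("", content)  # strip_styles analogue
            self.pager(content)

shown = []
with ToyPagerContext(shown.append) as ctx:
    ctx.write("\x1b[1mHello World\x1b[0m\n")  # "bold" Hello World
# The pager received the text with styling removed: "Hello World\n"
```

The `styles=False` default mirrors the rationale in the docs change above: most system pagers do not render ANSI color, so styles are stripped unless the caller opts in.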
sympy__sympy-20238 | 20,238 | sympy/sympy | 1.8 | c094f1bb9b9047eaa4cf98790c1df997f4f489f8 | 2020-10-10T07:57:39Z | diff --git a/sympy/stats/stochastic_process_types.py b/sympy/stats/stochastic_process_types.py
index e0afad26dc1b..446d417c1887 100644
--- a/sympy/stats/stochastic_process_types.py
+++ b/sympy/stats/stochastic_process_types.py
@@ -10,7 +10,7 @@
Union, Expr, Function, exp, cacheit, sqrt, pi, gamma,
Ge, Piecewise, Symbol, NonSquareMatrixError, EmptySet,
ceiling, MatrixBase, ConditionSet, ones, zeros, Identity,
- Rational, Lt, Gt, Ne)
+ Rational, Lt, Gt, Ne, BlockMatrix)
from sympy.core.relational import Relational
from sympy.logic.boolalg import Boolean
from sympy.utilities.exceptions import SymPyDeprecationWarning
@@ -901,7 +901,7 @@ def communication_classes(self) -> tList[tTuple[tList[Basic], Boolean, Integer]]
Returns
=======
- classes : List of Tuple of List, Boolean, Intger
+ classes
The ``classes`` are a list of tuples. Each
tuple represents a single communication class
with its properties. The first element in the
@@ -952,6 +952,9 @@ def communication_classes(self) -> tList[tTuple[tList[Basic], Boolean, Integer]]
n = self.number_of_states
T = self.transition_probabilities
+ if isinstance(T, MatrixSymbol):
+ raise NotImplementedError("Cannot perform the operation with a symbolic matrix.")
+
# begin Tarjan's algorithm
V = Range(n)
# don't use state names. Rather use state
@@ -1162,6 +1165,205 @@ def limiting_distribution(self):
"""
return self.fixed_row_vector()
+ def decompose(self) -> tTuple[tList[Basic], ImmutableMatrix, ImmutableMatrix, ImmutableMatrix]:
+ """
+ Decomposes the transition matrix into submatrices with
+ special properties.
+
+ The transition matrix can be decomposed into 4 submatrices:
+ - A - the submatrix from recurrent states to recurrent states.
+ - B - the submatrix from transient to recurrent states.
+ - C - the submatrix from transient to transient states.
+ - O - the submatrix of zeros for recurrent to transient states.
+
+ Returns
+ =======
+
+ states, A, B, C
+ ``states`` - a list of state names with the first being
+ the recurrent states and the last being
+ the transient states in the order
+ of the row names of A and then the row names of C.
+ ``A`` - the submatrix from recurrent states to recurrent states.
+ ``B`` - the submatrix from transient to recurrent states.
+ ``C`` - the submatrix from transient to transient states.
+
+ Examples
+ ========
+
+ >>> from sympy.stats import DiscreteMarkovChain
+ >>> from sympy import Matrix, S
+
+ One can decompose this chain for example:
+
+ >>> T = Matrix([[S(1)/2, S(1)/2, 0, 0, 0],
+ ... [S(2)/5, S(1)/5, S(2)/5, 0, 0],
+ ... [0, 0, 1, 0, 0],
+ ... [0, 0, S(1)/2, S(1)/2, 0],
+ ... [S(1)/2, 0, 0, 0, S(1)/2]])
+ >>> X = DiscreteMarkovChain('X', trans_probs=T)
+ >>> states, A, B, C = X.decompose()
+ >>> states
+ [2, 0, 1, 3, 4]
+
+ >>> A # recurrent to recurrent
+ Matrix([[1]])
+
+ >>> B # transient to recurrent
+ Matrix([
+ [ 0],
+ [2/5],
+ [1/2],
+ [ 0]])
+
+ >>> C # transient to transient
+ Matrix([
+ [1/2, 1/2, 0, 0],
+ [2/5, 1/5, 0, 0],
+ [ 0, 0, 1/2, 0],
+ [1/2, 0, 0, 1/2]])
+
+ This means that state 2 is the only absorbing state
+ (since A is a 1x1 matrix). B is a 4x1 matrix since
+ the 4 remaining transient states all merge into reccurent
+ state 2. And C is the 4x4 matrix that shows how the
+ transient states 0, 1, 3, 4 all interact.
+
+ See Also
+ ========
+
+ sympy.stats.stochastic_process_types.DiscreteMarkovChain.communication_classes
+ sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form
+
+ References
+ ==========
+
+ .. [1] https://en.wikipedia.org/wiki/Absorbing_Markov_chain
+ .. [2] http://people.brandeis.edu/~igusa/Math56aS08/Math56a_S08_notes015.pdf
+ """
+ trans_probs = self.transition_probabilities
+
+ classes = self.communication_classes()
+ r_states = []
+ t_states = []
+
+ for states, recurrent, period in classes:
+ if recurrent:
+ r_states += states
+ else:
+ t_states += states
+
+ states = r_states + t_states
+ indexes = [self.index_of[state] for state in states]
+
+ A = Matrix(len(r_states), len(r_states),
+ lambda i, j: trans_probs[indexes[i], indexes[j]])
+
+ B = Matrix(len(t_states), len(r_states),
+ lambda i, j: trans_probs[indexes[len(r_states) + i], indexes[j]])
+
+ C = Matrix(len(t_states), len(t_states),
+ lambda i, j: trans_probs[indexes[len(r_states) + i], indexes[len(r_states) + j]])
+
+ return states, A.as_immutable(), B.as_immutable(), C.as_immutable()
+
+ def canonical_form(self) -> tTuple[tList[Basic], ImmutableMatrix]:
+ """
+ Reorders the one-step transition matrix
+ so that recurrent states appear first and transient
+ states appear last. Other representations include inserting
+ transient states first and recurrent states last.
+
+ Returns
+ =======
+
+ states, P_new
+ ``states`` is the list that describes the order of the
+ new states in the matrix
+ so that the ith element in ``states`` is the state of the
+ ith row of A.
+ ``P_new`` is the new transition matrix in canonical form.
+
+ Examples
+ ========
+
+ >>> from sympy.stats import DiscreteMarkovChain
+ >>> from sympy import Matrix, S
+
+ You can convert your chain into canonical form:
+
+ >>> T = Matrix([[S(1)/2, S(1)/2, 0, 0, 0],
+ ... [S(2)/5, S(1)/5, S(2)/5, 0, 0],
+ ... [0, 0, 1, 0, 0],
+ ... [0, 0, S(1)/2, S(1)/2, 0],
+ ... [S(1)/2, 0, 0, 0, S(1)/2]])
+ >>> X = DiscreteMarkovChain('X', list(range(1, 6)), trans_probs=T)
+ >>> states, new_matrix = X.canonical_form()
+ >>> states
+ [3, 1, 2, 4, 5]
+
+ >>> new_matrix
+ Matrix([
+ [ 1, 0, 0, 0, 0],
+ [ 0, 1/2, 1/2, 0, 0],
+ [2/5, 2/5, 1/5, 0, 0],
+ [1/2, 0, 0, 1/2, 0],
+ [ 0, 1/2, 0, 0, 1/2]])
+
+ The new states are [3, 1, 2, 4, 5] and you can
+ create a new chain with this and its canonical
+ form will remain the same (since it is already
+ in canonical form).
+
+ >>> X = DiscreteMarkovChain('X', states, new_matrix)
+ >>> states, new_matrix = X.canonical_form()
+ >>> states
+ [3, 1, 2, 4, 5]
+
+ >>> new_matrix
+ Matrix([
+ [ 1, 0, 0, 0, 0],
+ [ 0, 1/2, 1/2, 0, 0],
+ [2/5, 2/5, 1/5, 0, 0],
+ [1/2, 0, 0, 1/2, 0],
+ [ 0, 1/2, 0, 0, 1/2]])
+
+ This is not limited to absorbing chains:
+
+ >>> T = Matrix([[0, 5, 5, 0, 0],
+ ... [0, 0, 0, 10, 0],
+ ... [5, 0, 5, 0, 0],
+ ... [0, 10, 0, 0, 0],
+ ... [0, 3, 0, 3, 4]])/10
+ >>> X = DiscreteMarkovChain('X', trans_probs=T)
+ >>> states, new_matrix = X.canonical_form()
+ >>> states
+ [1, 3, 0, 2, 4]
+
+ >>> new_matrix
+ Matrix([
+ [ 0, 1, 0, 0, 0],
+ [ 1, 0, 0, 0, 0],
+ [ 1/2, 0, 0, 1/2, 0],
+ [ 0, 0, 1/2, 1/2, 0],
+ [3/10, 3/10, 0, 0, 2/5]])
+
+ See Also
+ ========
+
+ sympy.stats.stochastic_process_types.DiscreteMarkovChain.communication_classes
+ sympy.stats.stochastic_process_types.DiscreteMarkovChain.decompose
+
+ References
+ ==========
+
+ .. [1] https://onlinelibrary.wiley.com/doi/pdf/10.1002/9780470316887.app1
+ .. [2] http://www.columbia.edu/~ww2040/6711F12/lect1023big.pdf
+ """
+ states, A, B, C = self.decompose()
+ O = zeros(A.shape[0], C.shape[1])
+ return states, BlockMatrix([[A, O], [B, C]]).as_explicit()
+
def sample(self):
"""
Returns
| diff --git a/sympy/stats/tests/test_stochastic_process.py b/sympy/stats/tests/test_stochastic_process.py
index bfdde46fe9d6..67acba4a0741 100644
--- a/sympy/stats/tests/test_stochastic_process.py
+++ b/sympy/stats/tests/test_stochastic_process.py
@@ -30,6 +30,9 @@ def test_DiscreteMarkovChain():
assert E(X[0]) == Expectation(X[0])
raises(TypeError, lambda: DiscreteMarkovChain(1))
raises(NotImplementedError, lambda: X(t))
+ raises(NotImplementedError, lambda: X.communication_classes())
+ raises(NotImplementedError, lambda: X.canonical_form())
+ raises(NotImplementedError, lambda: X.decompose())
nz = Symbol('n', integer=True)
TZ = MatrixSymbol('M', nz, nz)
@@ -123,6 +126,10 @@ def test_DiscreteMarkovChain():
raises (ValueError, lambda: Y3.fundamental_matrix())
assert Y2.is_absorbing_chain() == True
assert Y3.is_absorbing_chain() == False
+ assert Y2.canonical_form() == ([0, 1, 2], TO2)
+ assert Y3.canonical_form() == ([0, 1, 2], TO3)
+ assert Y2.decompose() == ([0, 1, 2], TO2[0:1, 0:1], TO2[1:3, 0:1], TO2[1:3, 1:3])
+ assert Y3.decompose() == ([0, 1, 2], TO3, Matrix(0, 3, []), Matrix(0, 0, []))
TO4 = Matrix([[Rational(1, 5), Rational(2, 5), Rational(2, 5)], [Rational(1, 10), S.Half, Rational(2, 5)], [Rational(3, 5), Rational(3, 10), Rational(1, 10)]])
Y4 = DiscreteMarkovChain('Y', trans_probs=TO4)
w = ImmutableMatrix([[Rational(11, 39), Rational(16, 39), Rational(4, 13)]])
@@ -143,6 +150,10 @@ def test_DiscreteMarkovChain():
X = DiscreteMarkovChain('X', trans_probs=Matrix([[]]))
assert X.number_of_states == 0
assert X.stationary_distribution() == Matrix([[]])
+ assert X.communication_classes() == []
+ assert X.canonical_form() == ([], Matrix([[]]))
+ assert X.decompose() == ([], Matrix([[]]), Matrix([[]]), Matrix([[]]))
+
# test communication_class
# see https://drive.google.com/drive/folders/1HbxLlwwn2b3U8Lj7eb_ASIUb5vYaNIjg?usp=sharing
# tutorial 2.pdf
@@ -188,6 +199,28 @@ def test_DiscreteMarkovChain():
assert recurrence == (True, True, False, True, False)
assert periods == (1, 1, 1, 1, 1)
+ # test canonical form
+ # see https://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter11.pdf
+ # example 11.13
+ T = Matrix([[1, 0, 0, 0, 0],
+ [S(1) / 2, 0, S(1) / 2, 0, 0],
+ [0, S(1) / 2, 0, S(1) / 2, 0],
+ [0, 0, S(1) / 2, 0, S(1) / 2],
+ [0, 0, 0, 0, S(1)]])
+ DW = DiscreteMarkovChain('DW', [0, 1, 2, 3, 4], T)
+ states, A, B, C = DW.decompose()
+ assert states == [0, 4, 1, 2, 3]
+ assert A == Matrix([[1, 0], [0, 1]])
+ assert B == Matrix([[S(1)/2, 0], [0, 0], [0, S(1)/2]])
+ assert C == Matrix([[0, S(1)/2, 0], [S(1)/2, 0, S(1)/2], [0, S(1)/2, 0]])
+ states, new_matrix = DW.canonical_form()
+ assert states == [0, 4, 1, 2, 3]
+ assert new_matrix == Matrix([[1, 0, 0, 0, 0],
+ [0, 1, 0, 0, 0],
+ [S(1)/2, 0, 0, S(1)/2, 0],
+ [0, 0, S(1)/2, 0, S(1)/2],
+ [0, S(1)/2, 0, S(1)/2, 0]])
+
# test custom state space
Y10 = DiscreteMarkovChain('Y', [1, 2, 3], TO2)
tuples = Y10.communication_classes()
@@ -195,6 +228,8 @@ def test_DiscreteMarkovChain():
assert classes == ([1], [2, 3])
assert recurrence == (True, False)
assert periods == (1, 1)
+ assert Y10.canonical_form() == ([1, 2, 3], TO2)
+ assert Y10.decompose() == ([1, 2, 3], TO2[0:1, 0:1], TO2[1:3, 0:1], TO2[1:3, 1:3])
# testing miscellaneous queries
T = Matrix([[S.Half, Rational(1, 4), Rational(1, 4)],
| [
{
"components": [
{
"doc": "Decomposes the transition matrix into submatrices with\nspecial properties.\n\nThe transition matrix can be decomposed into 4 submatrices:\n- A - the submatrix from recurrent states to recurrent states.\n- B - the submatrix from transient to recurrent states.\n- C - the... | [
"test_DiscreteMarkovChain"
] | [
"test_ContinuousMarkovChain",
"test_BernoulliProcess",
"test_PoissonProcess",
"test_WienerProcess",
"test_GammaProcess_symbolic"
] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Added canonical form functions for Discrete Markov Chains
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
#### Brief description of what is fixed or changed
Added two functions to `DiscreteMarkovChain`. These functions are both used to attain a canonical form for the Markov chain. See the docstring for more details.
#### Other comments
These functions will also be useful for `ContinuousMarkovChain`. But they use a function called `communication_classes` which is only valid for `DiscreteMarkovChain` since a continuous Markov chain does not have a periodicity. In a future PR, I will move all of the functions that I can up to `MarkovChain`.
I removed the type hint in the docstrings since there is already a type annotation. This docstring hint caused my IDE to bug out (and presumably does the same for other users of my IDE).
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
NO ENTRY
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/stats/stochastic_process_types.py]
(definition of DiscreteMarkovChain.decompose:)
def decompose(self) -> tTuple[tList[Basic], ImmutableMatrix, ImmutableMatrix, ImmutableMatrix]:
"""Decomposes the transition matrix into submatrices with
special properties.
The transition matrix can be decomposed into 4 submatrices:
- A - the submatrix from recurrent states to recurrent states.
- B - the submatrix from transient to recurrent states.
- C - the submatrix from transient to transient states.
- O - the submatrix of zeros for recurrent to transient states.
Returns
=======
states, A, B, C
``states`` - a list of state names with the first being
the recurrent states and the last being
the transient states in the order
of the row names of A and then the row names of C.
``A`` - the submatrix from recurrent states to recurrent states.
``B`` - the submatrix from transient to recurrent states.
``C`` - the submatrix from transient to transient states.
Examples
========
>>> from sympy.stats import DiscreteMarkovChain
>>> from sympy import Matrix, S
One can decompose this chain for example:
>>> T = Matrix([[S(1)/2, S(1)/2, 0, 0, 0],
... [S(2)/5, S(1)/5, S(2)/5, 0, 0],
... [0, 0, 1, 0, 0],
... [0, 0, S(1)/2, S(1)/2, 0],
... [S(1)/2, 0, 0, 0, S(1)/2]])
>>> X = DiscreteMarkovChain('X', trans_probs=T)
>>> states, A, B, C = X.decompose()
>>> states
[2, 0, 1, 3, 4]
>>> A # recurrent to recurrent
Matrix([[1]])
>>> B # transient to recurrent
Matrix([
[ 0],
[2/5],
[1/2],
[ 0]])
>>> C # transient to transient
Matrix([
[1/2, 1/2, 0, 0],
[2/5, 1/5, 0, 0],
[ 0, 0, 1/2, 0],
[1/2, 0, 0, 1/2]])
This means that state 2 is the only absorbing state
(since A is a 1x1 matrix). B is a 4x1 matrix since
the 4 remaining transient states all merge into recurrent
state 2. And C is the 4x4 matrix that shows how the
transient states 0, 1, 3, 4 all interact.
See Also
========
sympy.stats.stochastic_process_types.DiscreteMarkovChain.communication_classes
sympy.stats.stochastic_process_types.DiscreteMarkovChain.canonical_form
References
==========
.. [1] https://en.wikipedia.org/wiki/Absorbing_Markov_chain
.. [2] http://people.brandeis.edu/~igusa/Math56aS08/Math56a_S08_notes015.pdf"""
(definition of DiscreteMarkovChain.canonical_form:)
def canonical_form(self) -> tTuple[tList[Basic], ImmutableMatrix]:
"""Reorders the one-step transition matrix
so that recurrent states appear first and transient
states appear last. Other representations include inserting
transient states first and recurrent states last.
Returns
=======
states, P_new
``states`` is the list that describes the order of the
new states in the matrix
so that the ith element in ``states`` is the state of the
ith row of A.
``P_new`` is the new transition matrix in canonical form.
Examples
========
>>> from sympy.stats import DiscreteMarkovChain
>>> from sympy import Matrix, S
You can convert your chain into canonical form:
>>> T = Matrix([[S(1)/2, S(1)/2, 0, 0, 0],
... [S(2)/5, S(1)/5, S(2)/5, 0, 0],
... [0, 0, 1, 0, 0],
... [0, 0, S(1)/2, S(1)/2, 0],
... [S(1)/2, 0, 0, 0, S(1)/2]])
>>> X = DiscreteMarkovChain('X', list(range(1, 6)), trans_probs=T)
>>> states, new_matrix = X.canonical_form()
>>> states
[3, 1, 2, 4, 5]
>>> new_matrix
Matrix([
[ 1, 0, 0, 0, 0],
[ 0, 1/2, 1/2, 0, 0],
[2/5, 2/5, 1/5, 0, 0],
[1/2, 0, 0, 1/2, 0],
[ 0, 1/2, 0, 0, 1/2]])
The new states are [3, 1, 2, 4, 5] and you can
create a new chain with this and its canonical
form will remain the same (since it is already
in canonical form).
>>> X = DiscreteMarkovChain('X', states, new_matrix)
>>> states, new_matrix = X.canonical_form()
>>> states
[3, 1, 2, 4, 5]
>>> new_matrix
Matrix([
[ 1, 0, 0, 0, 0],
[ 0, 1/2, 1/2, 0, 0],
[2/5, 2/5, 1/5, 0, 0],
[1/2, 0, 0, 1/2, 0],
[ 0, 1/2, 0, 0, 1/2]])
This is not limited to absorbing chains:
>>> T = Matrix([[0, 5, 5, 0, 0],
... [0, 0, 0, 10, 0],
... [5, 0, 5, 0, 0],
... [0, 10, 0, 0, 0],
... [0, 3, 0, 3, 4]])/10
>>> X = DiscreteMarkovChain('X', trans_probs=T)
>>> states, new_matrix = X.canonical_form()
>>> states
[1, 3, 0, 2, 4]
>>> new_matrix
Matrix([
[ 0, 1, 0, 0, 0],
[ 1, 0, 0, 0, 0],
[ 1/2, 0, 0, 1/2, 0],
[ 0, 0, 1/2, 1/2, 0],
[3/10, 3/10, 0, 0, 2/5]])
See Also
========
sympy.stats.stochastic_process_types.DiscreteMarkovChain.communication_classes
sympy.stats.stochastic_process_types.DiscreteMarkovChain.decompose
References
==========
.. [1] https://onlinelibrary.wiley.com/doi/pdf/10.1002/9780470316887.app1
.. [2] http://www.columbia.edu/~ww2040/6711F12/lect1023big.pdf"""
[end of new definitions in sympy/stats/stochastic_process_types.py]
</definitions>
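The state reordering that `decompose` and `canonical_form` perform can be sketched independently of the sympy implementation. Assuming the recurrent/transient split is already known, applying the same index permutation to the rows and columns of the transition matrix reproduces the canonical matrix from the docstring example above (states `[3, 1, 2, 4, 5]` there are the 0-indexed `[2, 0, 1, 3, 4]` here):

```python
from fractions import Fraction as F

def reorder(P, order):
    # Apply the same permutation to rows and columns, so that
    # row/column i of the result is row/column order[i] of P.
    return [[P[r][c] for c in order] for r in order]

P = [
    [F(1, 2), F(1, 2), 0, 0, 0],
    [F(2, 5), F(1, 5), F(2, 5), 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, F(1, 2), F(1, 2), 0],
    [F(1, 2), 0, 0, 0, F(1, 2)],
]
# State 2 is the only recurrent (absorbing) state, so it comes first;
# the transient states 0, 1, 3, 4 follow in their original order.
order = [2, 0, 1, 3, 4]
P_new = reorder(P, order)
# First row is the recurrent-to-recurrent block A padded with zeros:
print(P_new[0])  # [1, 0, 0, 0, 0]
```

The result has the block structure `[[A, O], [B, C]]` that `canonical_form` assembles with `BlockMatrix`.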
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 284db64e44124856accce4abff75d2d8990c301d | ||
slackapi__python-slack-sdk-844 | 844 | slackapi/python-slack-sdk | null | c959e4534baa13d0cbc11a10a449a8d93356ddcf | 2020-10-09T10:38:43Z | diff --git a/slack/web/classes/blocks.py b/slack/web/classes/blocks.py
index 1f412d83e..6bba87021 100644
--- a/slack/web/classes/blocks.py
+++ b/slack/web/classes/blocks.py
@@ -280,7 +280,9 @@ class InputBlock(Block):
@property
def attributes(self) -> Set[str]:
- return super().attributes.union({"label", "hint", "element", "optional"})
+ return super().attributes.union(
+ {"label", "hint", "element", "optional", "dispatch_action"}
+ )
def __init__(
self,
@@ -290,6 +292,7 @@ def __init__(
block_id: Optional[str] = None,
hint: Optional[Union[str, dict, PlainTextObject]] = None,
optional: Optional[bool] = None,
+ dispatch_action: Optional[bool] = None,
**others: dict,
):
"""A block that collects information from users - it can hold a plain-text input element,
@@ -304,6 +307,7 @@ def __init__(
self.element = BlockElement.parse(element)
self.hint = TextObject.parse(hint, default_type=PlainTextObject.type)
self.optional = optional
+ self.dispatch_action = dispatch_action
@JsonValidator(f"label attribute cannot exceed {label_max_length} characters")
def _validate_label_length(self):
diff --git a/slack/web/classes/elements.py b/slack/web/classes/elements.py
index 1271b3899..129900db8 100644
--- a/slack/web/classes/elements.py
+++ b/slack/web/classes/elements.py
@@ -9,6 +9,7 @@
from .objects import (
ButtonStyles,
ConfirmObject,
+ DispatchActionConfig,
Option,
OptionGroup,
TextObject,
@@ -956,7 +957,13 @@ class PlainTextInputElement(InputInteractiveElement):
@property
def attributes(self) -> Set[str]:
return super().attributes.union(
- {"initial_value", "multiline", "min_length", "max_length"}
+ {
+ "initial_value",
+ "multiline",
+ "min_length",
+ "max_length",
+ "dispatch_action_config",
+ }
)
def __init__(
@@ -969,6 +976,7 @@ def __init__(
multiline: Optional[bool] = None,
min_length: Optional[int] = None,
max_length: Optional[int] = None,
+ dispatch_action_config: Optional[Union[dict, DispatchActionConfig]] = None,
**others: dict,
):
"""
@@ -988,6 +996,7 @@ def __init__(
self.multiline = multiline
self.min_length = min_length
self.max_length = max_length
+ self.dispatch_action_config = dispatch_action_config
# -------------------------------------------------
diff --git a/slack/web/classes/objects.py b/slack/web/classes/objects.py
index 50bb9f741..214deb885 100644
--- a/slack/web/classes/objects.py
+++ b/slack/web/classes/objects.py
@@ -540,3 +540,35 @@ def to_dict(self, option_type: str = "block") -> dict: # skipcq: PYL-W0221
"label": dict_label,
"options": dict_options,
}
+
+
+class DispatchActionConfig(JsonObject):
+ attributes = {"trigger_actions_on"}
+
+ @classmethod
+ def parse(cls, config: Union["DispatchActionConfig", dict]):
+ if config:
+ if isinstance(config, DispatchActionConfig): # skipcq: PYL-R1705
+ return config
+ elif isinstance(config, dict):
+ return DispatchActionConfig(**config)
+ else:
+ # Not yet implemented: show some warning here
+ return None
+ return None
+
+ def __init__(
+ self, *, trigger_actions_on: Optional[list] = None,
+ ):
+ """
+ Determines when a plain-text input element will return a block_actions interaction payload.
+ https://api.slack.com/reference/block-kit/composition-objects#dispatch_action_config
+ """
+ self._trigger_actions_on = trigger_actions_on or []
+
+ def to_dict(self) -> dict: # skipcq: PYL-W0221
+ self.validate_json()
+ json = {}
+ if self._trigger_actions_on:
+ json["trigger_actions_on"] = self._trigger_actions_on
+ return json
| diff --git a/tests/web/classes/test_blocks.py b/tests/web/classes/test_blocks.py
index 547ac7e97..8f69b649f 100644
--- a/tests/web/classes/test_blocks.py
+++ b/tests/web/classes/test_blocks.py
@@ -607,6 +607,19 @@ def test_document(self):
"emoji": True,
},
},
+ {
+ "dispatch_action": True,
+ "type": "input",
+ "element": {
+ "type": "plain_text_input",
+ "action_id": "plain_text_input-action"
+ },
+ "label": {
+ "type": "plain_text",
+ "text": "Label",
+ "emoji": True
+ }
+ }
]
for input in blocks:
self.assertDictEqual(input, InputBlock(**input).to_dict())
diff --git a/tests/web/classes/test_elements.py b/tests/web/classes/test_elements.py
index 012191d16..ff2fdca3a 100644
--- a/tests/web/classes/test_elements.py
+++ b/tests/web/classes/test_elements.py
@@ -803,6 +803,16 @@ def test_document_2(self):
}
self.assertDictEqual(input, PlainTextInputElement(**input).to_dict())
+ def test_document_3(self):
+ input = {
+ "type": "plain_text_input",
+ "multiline": True,
+ "dispatch_action_config": {
+ "trigger_actions_on": ["on_character_entered"]
+ }
+ }
+ self.assertDictEqual(input, PlainTextInputElement(**input).to_dict())
+
# -------------------------------------------------
# Radio Buttons
| [
{
"components": [
{
"doc": "",
"lines": [
545,
574
],
"name": "DispatchActionConfig",
"signature": "class DispatchActionConfig(JsonObject):",
"type": "class"
},
{
"doc": "",
"lines": [
549,
... | [
"tests/web/classes/test_blocks.py::InputBlockTests::test_document",
"tests/web/classes/test_elements.py::PlainTextInputElementTests::test_document_3"
] | [
"tests/web/classes/test_blocks.py::BlockTests::test_parse",
"tests/web/classes/test_blocks.py::SectionBlockTests::test_document_1",
"tests/web/classes/test_blocks.py::SectionBlockTests::test_document_2",
"tests/web/classes/test_blocks.py::SectionBlockTests::test_document_3",
 "tests/web/classes/test_blocks.p... | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add Dispatch Action in Block Kit #841
## Summary
This pull request adds the feature described in #841 to 2.9 series.
### Category (place an `x` in each of the `[ ]`)
- [ ] **slack.web.WebClient** (Web API client)
- [ ] **slack.webhook.WebhookClient** (Incoming Webhook, response_url sender)
- [x] **slack.web.classes** (UI component builders)
- [ ] **slack.rtm.RTMClient** (RTM client)
- [ ] Documents
- [ ] Others
## Requirements (place an `x` in each `[ ]`)
- [x] I've read and understood the [Contributing Guidelines](https://github.com/slackapi/python-slackclient/blob/main/.github/contributing.md) and have done my best effort to follow them.
- [x] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
- [x] I've run `python setup.py validate` after making the changes.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in slack/web/classes/objects.py]
(definition of DispatchActionConfig:)
class DispatchActionConfig(JsonObject):
(definition of DispatchActionConfig.parse:)
def parse(cls, config: Union["DispatchActionConfig", dict]):
(definition of DispatchActionConfig.__init__:)
def __init__( self, *, trigger_actions_on: Optional[list] = None, ):
"""Determines when a plain-text input element will return a block_actions interaction payload.
https://api.slack.com/reference/block-kit/composition-objects#dispatch_action_config"""
(definition of DispatchActionConfig.to_dict:)
def to_dict(self) -> dict:
[end of new definitions in slack/web/classes/objects.py]
</definitions>
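The serialization contract in the definitions above is small enough to sketch standalone. This minimal class mirrors the `to_dict` behaviour from the patch: an empty `trigger_actions_on` list is omitted from the output entirely.

```python
from typing import Optional


class DispatchActionConfig:
    """Minimal standalone sketch of the composition object above."""

    def __init__(self, *, trigger_actions_on: Optional[list] = None):
        self._trigger_actions_on = trigger_actions_on or []

    def to_dict(self) -> dict:
        # Only emit the key when there is something to trigger on.
        json = {}
        if self._trigger_actions_on:
            json["trigger_actions_on"] = self._trigger_actions_on
        return json


print(DispatchActionConfig(trigger_actions_on=["on_character_entered"]).to_dict())
# {'trigger_actions_on': ['on_character_entered']}
print(DispatchActionConfig().to_dict())  # {}
```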
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 2997a1786c4fd969b00ce69af888ebae8e8ebed0 | ||
slackapi__python-slack-sdk-843 | 843 | slackapi/python-slack-sdk | null | 5f4d92a8048814fc4938753594e74d7cfc74c27a | 2020-10-09T10:31:07Z | diff --git a/slack_sdk/models/blocks/basic_components.py b/slack_sdk/models/blocks/basic_components.py
index f2e18cee3..adc575192 100644
--- a/slack_sdk/models/blocks/basic_components.py
+++ b/slack_sdk/models/blocks/basic_components.py
@@ -465,3 +465,35 @@ def to_dict(self, option_type: str = "block") -> dict: # skipcq: PYL-W0221
if self._style:
json["style"] = self._style
return json
+
+
+class DispatchActionConfig(JsonObject):
+ attributes = {"trigger_actions_on"}
+
+ @classmethod
+ def parse(cls, config: Union["DispatchActionConfig", dict]):
+ if config:
+ if isinstance(config, DispatchActionConfig): # skipcq: PYL-R1705
+ return config
+ elif isinstance(config, dict):
+ return DispatchActionConfig(**config)
+ else:
+ # Not yet implemented: show some warning here
+ return None
+ return None
+
+ def __init__(
+ self, *, trigger_actions_on: Optional[list] = None,
+ ):
+ """
+ Determines when a plain-text input element will return a block_actions interaction payload.
+ https://api.slack.com/reference/block-kit/composition-objects#dispatch_action_config
+ """
+ self._trigger_actions_on = trigger_actions_on or []
+
+ def to_dict(self) -> dict: # skipcq: PYL-W0221
+ self.validate_json()
+ json = {}
+ if self._trigger_actions_on:
+ json["trigger_actions_on"] = self._trigger_actions_on
+ return json
diff --git a/slack_sdk/models/blocks/block_elements.py b/slack_sdk/models/blocks/block_elements.py
index c7a070994..f7daa8d5e 100644
--- a/slack_sdk/models/blocks/block_elements.py
+++ b/slack_sdk/models/blocks/block_elements.py
@@ -13,6 +13,7 @@
)
from .basic_components import ButtonStyles
from .basic_components import ConfirmObject
+from .basic_components import DispatchActionConfig
from .basic_components import MarkdownTextObject
from .basic_components import Option
from .basic_components import OptionGroup
@@ -957,7 +958,13 @@ class PlainTextInputElement(InputInteractiveElement):
@property
def attributes(self) -> Set[str]:
return super().attributes.union(
- {"initial_value", "multiline", "min_length", "max_length"}
+ {
+ "initial_value",
+ "multiline",
+ "min_length",
+ "max_length",
+ "dispatch_action_config",
+ }
)
def __init__(
@@ -970,6 +977,7 @@ def __init__(
multiline: Optional[bool] = None,
min_length: Optional[int] = None,
max_length: Optional[int] = None,
+ dispatch_action_config: Optional[Union[dict, DispatchActionConfig]] = None,
**others: dict,
):
"""
@@ -989,6 +997,7 @@ def __init__(
self.multiline = multiline
self.min_length = min_length
self.max_length = max_length
+ self.dispatch_action_config = dispatch_action_config
# -------------------------------------------------
diff --git a/slack_sdk/models/blocks/blocks.py b/slack_sdk/models/blocks/blocks.py
index 0d14b0b43..b699dac1d 100644
--- a/slack_sdk/models/blocks/blocks.py
+++ b/slack_sdk/models/blocks/blocks.py
@@ -283,7 +283,9 @@ class InputBlock(Block):
@property
def attributes(self) -> Set[str]:
- return super().attributes.union({"label", "hint", "element", "optional"})
+ return super().attributes.union(
+ {"label", "hint", "element", "optional", "dispatch_action"}
+ )
def __init__(
self,
@@ -292,6 +294,7 @@ def __init__(
element: Union[str, dict, InputInteractiveElement],
block_id: Optional[str] = None,
hint: Optional[Union[str, dict, PlainTextObject]] = None,
+ dispatch_action: Optional[bool] = None,
optional: Optional[bool] = None,
**others: dict,
):
@@ -306,6 +309,7 @@ def __init__(
self.label = TextObject.parse(label, default_type=PlainTextObject.type)
self.element = BlockElement.parse(element)
self.hint = TextObject.parse(hint, default_type=PlainTextObject.type)
+ self.dispatch_action = dispatch_action
self.optional = optional
@JsonValidator(f"label attribute cannot exceed {label_max_length} characters")
| diff --git a/tests/slack_sdk/web/classes/test_blocks.py b/tests/slack_sdk/web/classes/test_blocks.py
index c630d8f39..f020947c1 100644
--- a/tests/slack_sdk/web/classes/test_blocks.py
+++ b/tests/slack_sdk/web/classes/test_blocks.py
@@ -545,6 +545,19 @@ def test_document(self):
"label": {"type": "plain_text", "text": "Label", "emoji": True,},
"hint": {"type": "plain_text", "text": "some hint", "emoji": True,},
},
+ {
+ "dispatch_action": True,
+ "type": "input",
+ "element": {
+ "type": "plain_text_input",
+ "action_id": "plain_text_input-action"
+ },
+ "label": {
+ "type": "plain_text",
+ "text": "Label",
+ "emoji": True
+ }
+ }
]
for input in blocks:
self.assertDictEqual(input, InputBlock(**input).to_dict())
diff --git a/tests/slack_sdk/web/classes/test_elements.py b/tests/slack_sdk/web/classes/test_elements.py
index 37737c25f..148b3a01b 100644
--- a/tests/slack_sdk/web/classes/test_elements.py
+++ b/tests/slack_sdk/web/classes/test_elements.py
@@ -685,6 +685,15 @@ def test_document_2(self):
}
self.assertDictEqual(input, PlainTextInputElement(**input).to_dict())
+ def test_document_3(self):
+ input = {
+ "type": "plain_text_input",
+ "multiline": True,
+ "dispatch_action_config": {
+ "trigger_actions_on": ["on_character_entered"]
+ }
+ }
+ self.assertDictEqual(input, PlainTextInputElement(**input).to_dict())
# -------------------------------------------------
# Radio Buttons
| [
{
"components": [
{
"doc": "",
"lines": [
470,
499
],
"name": "DispatchActionConfig",
"signature": "class DispatchActionConfig(JsonObject):",
"type": "class"
},
{
"doc": "",
"lines": [
474,
... | [
"tests/slack_sdk/web/classes/test_blocks.py::InputBlockTests::test_document",
"tests/slack_sdk/web/classes/test_elements.py::PlainTextInputElementTests::test_document_3"
] | [
"tests/slack_sdk/web/classes/test_blocks.py::BlockTests::test_parse",
"tests/slack_sdk/web/classes/test_blocks.py::SectionBlockTests::test_document_1",
"tests/slack_sdk/web/classes/test_blocks.py::SectionBlockTests::test_document_2",
 "tests/slack_sdk/web/classes/test_blocks.py::SectionBlockTests::test_documen... | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
[v3] Add Dispatch Action in Block Kit #841
## Summary
This pull request adds the new features described in #841 to the v3 branch.
### Category (place an `x` in each of the `[ ]`)
- [ ] **slack.web.WebClient** (Web API client)
- [ ] **slack.webhook.WebhookClient** (Incoming Webhook, response_url sender)
- [x] **slack.web.classes** (UI component builders)
- [ ] **slack.rtm.RTMClient** (RTM client)
- [ ] Documents
- [ ] Others
## Requirements (place an `x` in each `[ ]`)
- [x] I've read and understood the [Contributing Guidelines](https://github.com/slackapi/python-slackclient/blob/main/.github/contributing.md) and have done my best effort to follow them.
- [x] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
- [x] I've run `python setup.py validate` after making the changes.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in slack_sdk/models/blocks/basic_components.py]
(definition of DispatchActionConfig:)
class DispatchActionConfig(JsonObject):
(definition of DispatchActionConfig.parse:)
def parse(cls, config: Union["DispatchActionConfig", dict]):
(definition of DispatchActionConfig.__init__:)
def __init__( self, *, trigger_actions_on: Optional[list] = None, ):
"""Determines when a plain-text input element will return a block_actions interaction payload.
https://api.slack.com/reference/block-kit/composition-objects#dispatch_action_config"""
(definition of DispatchActionConfig.to_dict:)
def to_dict(self) -> dict:
[end of new definitions in slack_sdk/models/blocks/basic_components.py]
</definitions>
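Putting the two new fields together — `dispatch_action` on the input block and `dispatch_action_config` on the element — the Block Kit JSON exercised by the tests looks like this. This is a plain-dict sketch, not the SDK classes; the `action_id` value is simply the one from the tests:

```python
import json

input_block = {
    "type": "input",
    "dispatch_action": True,  # block-level flag added by this PR
    "element": {
        "type": "plain_text_input",
        "action_id": "plain_text_input-action",
        "dispatch_action_config": {  # element-level config added by this PR
            "trigger_actions_on": ["on_character_entered"],
        },
    },
    "label": {"type": "plain_text", "text": "Label", "emoji": True},
}
print(json.dumps(input_block, indent=2))
```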
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 2997a1786c4fd969b00ce69af888ebae8e8ebed0 | ||
Textualize__rich-361 | 361 | Textualize/rich | null | e75649a870745b934927040d36fed8a8321dd238 | 2020-10-08T03:40:29Z | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 3a05e68116..97150dffd0 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -7,6 +7,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [8.0.1] - unreleased
+### Added
+
+- Added solid block bar class
+
### Changed
- Change the render prefix to correspond to the decimal units in progress
diff --git a/examples/colorbars.py b/examples/colorbars.py
new file mode 100644
index 0000000000..7036774173
--- /dev/null
+++ b/examples/colorbars.py
@@ -0,0 +1,19 @@
+"""
+
+This example demonstrates how to make colorful bars.
+
+"""
+
+from rich.block_bar import BlockBar
+from rich.console import Console
+from rich.table import Table
+
+table = Table()
+table.add_column("Score")
+
+table.add_row(BlockBar(size=100, begin=0, end=5, width=30, color="bright_red"))
+table.add_row(BlockBar(size=100, begin=0, end=35, width=30, color="bright_yellow"))
+table.add_row(BlockBar(size=100, begin=0, end=87, width=30, color="bright_green"))
+
+console = Console()
+console.print(table, justify="center")
diff --git a/rich/block_bar.py b/rich/block_bar.py
new file mode 100644
index 0000000000..d86e6c7baa
--- /dev/null
+++ b/rich/block_bar.py
@@ -0,0 +1,113 @@
+from typing import Optional, Union
+
+from .color import Color
+from .console import Console, ConsoleOptions, RenderResult
+from .jupyter import JupyterMixin
+from .measure import Measurement
+from .segment import Segment
+from .style import Style
+
+# There are left-aligned characters for 1/8 to 7/8, but
+# the right-aligned characters exist only for 1/8 and 4/8.
+BEGIN_BLOCK_ELEMS = ["█", "█", "█", "▐", "▐", "▐", "▕", "▕"]
+END_BLOCK_ELEMS = [" ", "▏", "▎", "▍", "▌", "▋", "▊", "▉"]
+FULL_BLOCK = "█"
+
+
+class BlockBar(JupyterMixin):
+ """Renders a solid block bar.
+
+ Args:
+ size (float): Value for the end of the bar.
+ begin (float): Begin point (between 0 and size, inclusive).
+ end (float): End point (between 0 and size, inclusive).
+ width (int, optional): Width of the bar, or ``None`` for maximum width. Defaults to None.
+ color (Union[Color, str], optional): Color of the bar. Defaults to "default".
+ bgcolor (Union[Color, str], optional): Color of bar background. Defaults to "default".
+ """
+
+ def __init__(
+ self,
+ size: float,
+ begin: float,
+ end: float,
+ width: int = None,
+ color: Union[Color, str] = "default",
+ bgcolor: Union[Color, str] = "default",
+ ):
+ self.size = size
+ self.begin = max(begin, 0)
+ self.end = min(end, size)
+ self.width = width
+ self.style = Style(color=color, bgcolor=bgcolor)
+
+ def __repr__(self) -> str:
+ return f"<BlockBar {self.begin!r}..{self.end!r} of {self.size!r}>"
+
+ def __rich_console__(
+ self, console: Console, options: ConsoleOptions
+ ) -> RenderResult:
+
+ width = min(self.width or options.max_width, options.max_width)
+
+ if self.begin >= self.end:
+ yield Segment(" " * width, self.style)
+ return
+
+ prefix_complete_eights = int(width * 8 * self.begin / self.size)
+ prefix_bar_count = prefix_complete_eights // 8
+ prefix_eights_count = prefix_complete_eights % 8
+
+ body_complete_eights = int(width * 8 * self.end / self.size)
+ body_bar_count = body_complete_eights // 8
+ body_eights_count = body_complete_eights % 8
+
+ # When start and end fall into the same cell, we ideally should render
+ # a symbol that's "center-aligned", but there is no good symbol in Unicode.
+ # In this case, we fall back to right-aligned block symbol for simplicity.
+
+ prefix = " " * prefix_bar_count
+ if prefix_eights_count:
+ prefix += BEGIN_BLOCK_ELEMS[prefix_eights_count]
+
+ body = FULL_BLOCK * body_bar_count
+ if body_eights_count:
+ body += END_BLOCK_ELEMS[body_eights_count]
+
+ suffix = " " * (width - len(body))
+
+ yield Segment(prefix + body[len(prefix) :] + suffix, self.style)
+
+ def __rich_measure__(self, console: Console, max_width: int) -> Measurement:
+ return (
+ Measurement(self.width, self.width)
+ if self.width is not None
+ else Measurement(4, max_width)
+ )
+
+
+if __name__ == "__main__": # pragma: no cover
+ console = Console()
+
+ import time
+
+ console.show_cursor(False)
+ for n in range(0, 101):
+ block_bar = BlockBar(size=100, begin=0, end=n, width=10)
+ console.print(block_bar)
+ console.file.write("\r")
+ time.sleep(0.05)
+ console.print()
+ for n in range(0, 101):
+ block_bar = BlockBar(size=100, begin=n, end=100, width=10)
+ console.print(block_bar)
+ console.file.write("\r")
+ time.sleep(0.05)
+ console.print()
+ for n in range(0, 51):
+ block_bar = BlockBar(size=100, begin=50 - n, end=50 + n, width=10)
+ console.print(block_bar)
+ console.file.write("\r")
+ time.sleep(0.05)
+ console.show_cursor(True)
+ console.print()
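The eighth-block arithmetic in `__rich_console__` can be exercised without a `Console`. This standalone sketch mirrors the prefix/body computation on plain strings, assuming the standard Unicode eighth-block characters (U+2580 block elements) for the element tables:

```python
BEGIN_BLOCK_ELEMS = ["█", "█", "█", "▐", "▐", "▐", "▕", "▕"]
END_BLOCK_ELEMS = [" ", "▏", "▎", "▍", "▌", "▋", "▊", "▉"]
FULL_BLOCK = "█"


def render_bar(size: float, begin: float, end: float, width: int) -> str:
    if begin >= end:
        return " " * width
    # Work in eighths of a cell so a partial block character can be
    # chosen for the cells at either edge of the bar.
    prefix_eights = int(width * 8 * begin / size)
    body_eights = int(width * 8 * end / size)
    prefix = " " * (prefix_eights // 8)
    if prefix_eights % 8:
        prefix += BEGIN_BLOCK_ELEMS[prefix_eights % 8]
    body = FULL_BLOCK * (body_eights // 8)
    if body_eights % 8:
        body += END_BLOCK_ELEMS[body_eights % 8]
    return prefix + body[len(prefix):] + " " * (width - len(body))


print(repr(render_bar(100, 11, 62, 50)))
```

For `size=100, begin=11, end=62, width=50` the begin point lands at 44 eighths (5 full cells plus a half-cell, rendered right-aligned as "▐") and the end point at 248 eighths (31 full cells), matching the first expected string in the tests.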
| diff --git a/tests/test_block_bar.py b/tests/test_block_bar.py
new file mode 100644
index 0000000000..7210d79d4a
--- /dev/null
+++ b/tests/test_block_bar.py
@@ -0,0 +1,53 @@
+from rich.block_bar import BlockBar
+
+from .render import render
+
+
+expected = [
+    "\x1b[39;49m     ▐█████████████████████████                   \x1b[0m",
+    "\x1b[39;49m      ██████████████████████▌                     \x1b[0m",
+    "\x1b[39;49m                                                            \x1b[0m",
+]
+
+
+def test_repr():
+ bar = BlockBar(size=100, begin=11, end=62, width=50)
+ assert repr(bar) == "<BlockBar 11..62 of 100>"
+
+
+def test_render():
+ bar = BlockBar(size=100, begin=11, end=62, width=50)
+ bar_render = render(bar)
+ assert bar_render == expected[0]
+ bar = BlockBar(size=100, begin=12, end=57, width=50)
+ bar_render = render(bar)
+ assert bar_render == expected[1]
+ # begin after end
+ bar = BlockBar(size=100, begin=60, end=40, width=50)
+ bar_render = render(bar)
+ assert bar_render == expected[2]
+
+
+def test_measure():
+ bar = BlockBar(size=100, begin=11, end=62)
+ measurement = bar.__rich_measure__(None, 120)
+ assert measurement.minimum == 4
+ assert measurement.maximum == 120
+
+
+def test_zero_total():
+ # Shouldn't throw zero division error
+ bar = BlockBar(size=0, begin=0, end=0)
+ render(bar)
+
+
+if __name__ == "__main__":
+ bar = BlockBar(size=100, begin=11, end=62, width=50)
+ bar_render = render(bar)
+ print(repr(bar_render))
+ bar = BlockBar(size=100, begin=12, end=57, width=50)
+ bar_render = render(bar)
+ print(repr(bar_render))
+ bar = BlockBar(size=100, begin=60, end=40, width=50)
+ bar_render = render(bar)
+ print(repr(bar_render))
| diff --git a/CHANGELOG.md b/CHANGELOG.md
index 3a05e68116..97150dffd0 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -7,6 +7,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [8.0.1] - unreleased
+### Added
+
+- Added solid block bar class
+
### Changed
- Change the render prefix to correspond to the decimal units in progress
| [
{
"components": [
{
"doc": "Renders a solid block bar.\n\nArgs:\n size (float): Value for the end of the bar.\n begin (float): Begin point (between 0 and size, inclusive).\n end (float): End point (between 0 and size, inclusive).\n width (int, optional): Width of the bar, or ``None`` f... | [
"tests/test_block_bar.py::test_repr",
"tests/test_block_bar.py::test_render",
"tests/test_block_bar.py::test_measure",
"tests/test_block_bar.py::test_zero_total"
] | [] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add block bar
## Type of changes
- [ ] Bug fix
- [x] New feature
- [ ] Documentation / docstrings
- [ ] Tests
- [ ] Other
## Checklist
- [x] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [x] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [x] I've added tests for new code.
- [x] I accept that @willmcgugan may be pedantic in the code review.
## Description
This implements a solid block bar, as requested in https://github.com/willmcgugan/rich/issues/293.
Try with: `python -m rich.block_bar`
**Implemented:**
- Block elements support
- left-aligned bars with 1/8-of-a-cell granularity on the right side
- right-aligned bars with 1/2-of-a-cell granularity on the left side
- Render nothing when value is min
**Not implemented:**
- Render nothing when value is max
- Color gradient blends between colors
**Note:** `BlockBar` might not be the best name. Let me know!
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in rich/block_bar.py]
(definition of BlockBar:)
class BlockBar(JupyterMixin):
"""Renders a solid block bar.
Args:
size (float): Value for the end of the bar.
begin (float): Begin point (between 0 and size, inclusive).
end (float): End point (between 0 and size, inclusive).
width (int, optional): Width of the bar, or ``None`` for maximum width. Defaults to None.
color (Union[Color, str], optional): Color of the bar. Defaults to "default".
bgcolor (Union[Color, str], optional): Color of bar background. Defaults to "default"."""
(definition of BlockBar.__init__:)
def __init__( self, size: float, begin: float, end: float, width: int = None, color: Union[Color, str] = "default", bgcolor: Union[Color, str] = "default", ):
(definition of BlockBar.__repr__:)
def __repr__(self) -> str:
(definition of BlockBar.__rich_console__:)
def __rich_console__( self, console: Console, options: ConsoleOptions ) -> RenderResult:
(definition of BlockBar.__rich_measure__:)
def __rich_measure__(self, console: Console, max_width: int) -> Measurement:
[end of new definitions in rich/block_bar.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | b0661de34bab35af9b4b1d3ba8e28b186b225e84 | |
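The eighth-block arithmetic in the patch above (complete eighths split into full cells plus one partial block element) can be sketched in isolation. This is an illustrative reimplementation, not rich's actual code: the names `FULL_BLOCK` and `END_BLOCK_ELEMS` mirror identifiers in the diff, but the contents of the block-element table are an assumption based on the Unicode left-partial blocks the PR describes.

```python
FULL_BLOCK = "█"
# Left-aligned partial cells: index k is a cell filled k/8 from the left edge
# (assumed table; the PR only shows the names, not the values).
END_BLOCK_ELEMS = ["", "▏", "▎", "▍", "▌", "▋", "▊", "▉"]

def render_bar(value, size, width):
    """Left-aligned bar of `width` cells representing value/size."""
    if size <= 0:
        return " " * width  # guard mirrors what test_zero_total exercises
    complete_eighths = int(width * 8 * value / size)
    bar_count, eighths = divmod(complete_eighths, 8)
    bar = FULL_BLOCK * bar_count
    if eighths:
        bar += END_BLOCK_ELEMS[eighths]
    return bar + " " * (width - len(bar))

print(repr(render_bar(62, 100, 50)))  # 31 full blocks padded to 50 cells
```

Each cell carries 8 sub-positions, so a bar of width `w` has `8 * w` addressable stops; the division by 8 recovers the whole cells and the remainder picks the partial glyph.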
lark-parser__lark-711 | 711 | lark-parser/lark | null | 131012b893dfc253d2454d39a722fd03044e6fc7 | 2020-10-05T21:38:36Z | diff --git a/lark/lark.py b/lark/lark.py
index 770b82136..776469a72 100644
--- a/lark/lark.py
+++ b/lark/lark.py
@@ -95,7 +95,16 @@ class LarkOptions(Serialize):
"""
if __doc__:
__doc__ += OPTIONS_DOC
-
+
+
+ # Adding a new option needs to be done in multiple places:
+ # - In the dictionary below. This is the primary truth which options `Lark.__init__` takes
+ # - In the doc string above. It is used both for the docstring of `LarkOptions` and `Lark`
+ # - In `lark-stubs/lark.pyi`:
+ # - As attribute to `LarkOptions`
+ # - As parameter to `Lark.__init__`
+ # - potentially in `_LOAD_ALLOWED_OPTIONS` below this class, when the option doesn't change how the grammar is loaded
+ # - potentially in `lark.tools.__init__`, when it can easily be passed as a cmd argument and makes sense
_defaults = {
'debug': False,
'keep_all_tokens': False,
@@ -163,8 +172,9 @@ def deserialize(cls, data, memo):
return cls(data)
-_LOAD_ALLOWED_OPTIONS = {'postlex', 'transformer', 'use_bytes', 'debug', 'g_regex_flags',
- 'regex', 'propagate_positions', 'keep_all_tokens', 'tree_class'}
+# Options that can be passed to the Lark parser, even when it was loaded from cache/standalone.
+# These option are only used outside of `load_grammar`.
+_LOAD_ALLOWED_OPTIONS = {'postlex', 'transformer', 'use_bytes', 'debug', 'g_regex_flags', 'regex', 'propagate_positions', 'tree_class'}
class Lark(Serialize):
diff --git a/lark/tools/__init__.py b/lark/tools/__init__.py
index e69de29bb..4ecf13d41 100644
--- a/lark/tools/__init__.py
+++ b/lark/tools/__init__.py
@@ -0,0 +1,65 @@
+import sys
+from argparse import ArgumentParser, FileType
+try:
+ from textwrap import indent
+except ImportError:
+ def indent(text, prefix):
+ return ''.join(prefix + line for line in text.splitlines(True))
+from logging import DEBUG, INFO, WARN, ERROR
+import warnings
+
+from lark import Lark, logger
+
+lalr_argparser = ArgumentParser(add_help=False, epilog='Look at the Lark documentation for more info on the options')
+
+flags = [
+ ('d', 'debug'),
+ 'keep_all_tokens',
+ 'regex',
+ 'propagate_positions',
+ 'maybe_placeholders',
+ 'use_bytes'
+]
+
+options = ['start', 'lexer']
+
+lalr_argparser.add_argument('-v', '--verbose', action='count', default=0, help="Increase Logger output level, up to three times")
+lalr_argparser.add_argument('-s', '--start', action='append', default=[])
+lalr_argparser.add_argument('-l', '--lexer', default='contextual', choices=('standard', 'contextual'))
+k = {'encoding': 'utf-8'} if sys.version_info > (3, 4) else {}
+lalr_argparser.add_argument('-o', '--out', type=FileType('w', **k), default=sys.stdout, help='the output file (default=stdout)')
+lalr_argparser.add_argument('grammar_file', type=FileType('r', **k), help='A valid .lark file')
+
+for f in flags:
+ if isinstance(f, tuple):
+ options.append(f[1])
+ lalr_argparser.add_argument('-' + f[0], '--' + f[1], action='store_true')
+ else:
+ options.append(f)
+ lalr_argparser.add_argument('--' + f, action='store_true')
+
+
+def build_lalr(namespace):
+ logger.setLevel((ERROR, WARN, INFO, DEBUG)[min(namespace.verbose, 3)])
+ if len(namespace.start) == 0:
+ namespace.start.append('start')
+ kwargs = {n: getattr(namespace, n) for n in options}
+ return Lark(namespace.grammar_file, parser='lalr', **kwargs), namespace.out
+
+
+def showwarning_as_comment(message, category, filename, lineno, file=None, line=None):
+ # Based on warnings._showwarnmsg_impl
+ text = warnings.formatwarning(message, category, filename, lineno, line)
+ text = indent(text, '# ')
+ if file is None:
+ file = sys.stderr
+ if file is None:
+ return
+ try:
+ file.write(text)
+ except OSError:
+ pass
+
+
+def make_warnings_comments():
+ warnings.showwarning = showwarning_as_comment
diff --git a/lark/tools/serialize.py b/lark/tools/serialize.py
index fb69d35ac..10884eb85 100644
--- a/lark/tools/serialize.py
+++ b/lark/tools/serialize.py
@@ -5,20 +5,16 @@
from lark import Lark
from lark.grammar import RuleOptions, Rule
from lark.lexer import TerminalDef
+from lark.tools import lalr_argparser, build_lalr
import argparse
-argparser = argparse.ArgumentParser(prog='python -m lark.tools.serialize') #description='''Lark Serialization Tool -- Stores Lark's internal state & LALR analysis as a convenient JSON file''')
+argparser = argparse.ArgumentParser(prog='python -m lark.tools.serialize', parents=[lalr_argparser],
+ description="Lark Serialization Tool - Stores Lark's internal state & LALR analysis as a JSON file",
+ epilog='Look at the Lark documentation for more info on the options')
-argparser.add_argument('grammar_file', type=argparse.FileType('r'), help='A valid .lark file')
-argparser.add_argument('-o', '--out', type=argparse.FileType('w'), default=sys.stdout, help='json file path to create (default=stdout)')
-argparser.add_argument('-s', '--start', default='start', help='start symbol (default="start")', nargs='+')
-argparser.add_argument('-l', '--lexer', default='standard', choices=['standard', 'contextual'], help='lexer type (default="standard")')
-
-
-def serialize(infile, outfile, lexer, start):
- lark_inst = Lark(infile, parser="lalr", lexer=lexer, start=start) # TODO contextual
+def serialize(lark_inst, outfile):
data, memo = lark_inst.memo_serialize([TerminalDef, Rule])
outfile.write('{\n')
outfile.write(' "data": %s,\n' % json.dumps(data))
@@ -27,13 +23,9 @@ def serialize(infile, outfile, lexer, start):
def main():
- if len(sys.argv) == 1 or '-h' in sys.argv or '--help' in sys.argv:
- print("Lark Serialization Tool - Stores Lark's internal state & LALR analysis as a JSON file")
- print("")
- argparser.print_help()
- else:
- args = argparser.parse_args()
- serialize(args.grammar_file, args.out, args.lexer, args.start)
+ ns = argparser.parse_args()
+ serialize(*build_lalr(ns))
+
if __name__ == '__main__':
- main()
\ No newline at end of file
+ main()
diff --git a/lark/tools/standalone.py b/lark/tools/standalone.py
index f2af015c4..4ca57f649 100644
--- a/lark/tools/standalone.py
+++ b/lark/tools/standalone.py
@@ -26,22 +26,21 @@
#
#
-import os
from io import open
###}
-import codecs
import sys
import token, tokenize
import os
-from pprint import pprint
from os import path
from collections import defaultdict
from functools import partial
+from argparse import ArgumentParser, SUPPRESS
+from warnings import warn
import lark
from lark import Lark
-from lark.parsers.lalr_analysis import Reduce
+from lark.tools import lalr_argparser, build_lalr, make_warnings_comments
from lark.grammar import RuleOptions, Rule
@@ -120,48 +119,53 @@ def strip_docstrings(line_gen):
def main(fobj, start, print=print):
+ warn('`lark.tools.standalone.main` is being redesigned. Use `gen_standalone`', DeprecationWarning)
lark_inst = Lark(fobj, parser="lalr", lexer="contextual", start=start)
+ gen_standalone(lark_inst, print)
+
+def gen_standalone(lark_inst, output=None, out=sys.stdout):
+ if output is None:
+ output = partial(print, file=out)
- print('# The file was automatically generated by Lark v%s' % lark.__version__)
- print('__version__ = "%s"' % lark.__version__)
- print()
+ output('# The file was automatically generated by Lark v%s' % lark.__version__)
+ output('__version__ = "%s"' % lark.__version__)
+ output()
for i, pyfile in enumerate(EXTRACT_STANDALONE_FILES):
with open(os.path.join(_larkdir, pyfile)) as f:
code = extract_sections(f)['standalone']
if i: # if not this file
code = strip_docstrings(partial(next, iter(code.splitlines(True))))
- print(code)
+ output(code)
data, m = lark_inst.memo_serialize([TerminalDef, Rule])
- print( 'DATA = (' )
+ output('DATA = (')
# pprint(data, width=160)
- print(data)
- print(')')
- print( 'MEMO = (')
- print(m)
- print(')')
+ output(data)
+ output(')')
+ output('MEMO = (')
+ output(m)
+ output(')')
+
+ output('Shift = 0')
+ output('Reduce = 1')
+ output("def Lark_StandAlone(**kwargs):")
+ output(" return Lark._load_from_dict(DATA, MEMO, **kwargs)")
- print('Shift = 0')
- print('Reduce = 1')
- print("def Lark_StandAlone(**kwargs):")
- print(" return Lark._load_from_dict(DATA, MEMO, **kwargs)")
if __name__ == '__main__':
- if len(sys.argv) < 2:
- print("Lark Stand-alone Generator Tool")
- print("Usage: python -m lark.tools.standalone <grammar-file> [<start>]")
- sys.exit(1)
-
- if len(sys.argv) == 3:
- fn, start = sys.argv[1:]
- elif len(sys.argv) == 2:
- fn, start = sys.argv[1], 'start'
- else:
- assert False, sys.argv
-
- with codecs.open(fn, encoding='utf8') as f:
- main(f, start)
+ make_warnings_comments()
+ parser = ArgumentParser(prog="prog='python -m lark.tools.standalone'", description="Lark Stand-alone Generator Tool",
+ parents=[lalr_argparser], epilog='Look at the Lark documentation for more info on the options')
+ parser.add_argument("old_start", nargs='?', help=SUPPRESS)
+ ns = parser.parse_args()
+ if ns.old_start is not None:
+ warn('The syntax `python -m lark.tools.standalone <grammar-file> <start>` is deprecated. Use the -s option')
+ ns.start.append(ns.old_start)
+
+ lark_inst, out = build_lalr(ns)
+ gen_standalone(lark_inst, out=out)
+
| diff --git a/tests/test_tools.py b/tests/test_tools.py
index ce995d818..0972f8fb3 100644
--- a/tests/test_tools.py
+++ b/tests/test_tools.py
@@ -3,7 +3,7 @@
import sys
from unittest import TestCase, main
-from functools import partial
+from lark import Lark
from lark.tree import Tree
from lark.tools import standalone
@@ -21,8 +21,7 @@ def setUp(self):
def _create_standalone(self, grammar):
code_buf = StringIO()
- pr = partial(print, file=code_buf)
- standalone.main(StringIO(grammar), 'start', print=pr)
+ standalone.gen_standalone(Lark(grammar, parser='lalr'), out=code_buf)
code = code_buf.getvalue()
context = {'__doc__': None}
| [
{
"components": [
{
"doc": "",
"lines": [
6,
7
],
"name": "indent",
"signature": "def indent(text, prefix):",
"type": "function"
},
{
"doc": "",
"lines": [
42,
47
],
"name": ... | [
"tests/test_tools.py::TestStandalone::test_contextual",
"tests/test_tools.py::TestStandalone::test_postlex",
"tests/test_tools.py::TestStandalone::test_simple",
"tests/test_tools.py::TestStandalone::test_transformer"
] | [] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Allows the standalone parser to take more options at the command line
This adds most of the options that can be passed to a normal Lark instantiation to the standalone generator and the serializer.
Based on #710 and #707.
Note: breaks backwards compatibility by deprecating `python -m lark.tools.standalone grammar-file start` in favor of `python -m lark.tools.standalone -s start grammar-file`
Also adds an overview where new options to `Lark` need to (potentially) be added.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in lark/tools/__init__.py]
(definition of indent:)
def indent(text, prefix):
(definition of build_lalr:)
def build_lalr(namespace):
(definition of showwarning_as_comment:)
def showwarning_as_comment(message, category, filename, lineno, file=None, line=None):
(definition of make_warnings_comments:)
def make_warnings_comments():
[end of new definitions in lark/tools/__init__.py]
[start of new definitions in lark/tools/standalone.py]
(definition of gen_standalone:)
def gen_standalone(lark_inst, output=None, out=sys.stdout):
[end of new definitions in lark/tools/standalone.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 5faea9223cc54d1dbd0985cf830d05a10a7729ec | ||
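The shared-option design this PR introduces — a base `lalr_argparser` in `lark.tools` that each tool extends via argparse's `parents=` mechanism, replacing the deprecated positional `<grammar-file> <start>` form with a `-s` option — can be sketched with the standard library alone. The option names below mirror the diff; the parser objects themselves are illustrative, not lark's own.

```python
from argparse import ArgumentParser

# Base parser holding the common LALR options (add_help=False so the
# child parser contributes the single -h/--help).
common = ArgumentParser(add_help=False)
common.add_argument('-s', '--start', action='append', default=[])
common.add_argument('-l', '--lexer', default='contextual',
                    choices=('standard', 'contextual'))
common.add_argument('grammar_file')

# Each tool composes the shared options via parents=.
tool = ArgumentParser(prog='python -m lark.tools.standalone',
                      parents=[common])
ns = tool.parse_args(['-s', 'start', 'my_grammar.lark'])
print(ns.start, ns.lexer, ns.grammar_file)
```

Because `--start` uses `action='append'`, it can be passed several times to declare multiple start symbols, which is why `build_lalr` in the diff falls back to `['start']` only when the list is empty.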
sympy__sympy-20195 | 20,195 | sympy/sympy | 1.7 | 1b4529a95ef641c2fc15889091b281644069d20e | 2020-10-04T06:32:11Z | diff --git a/sympy/stats/stochastic_process_types.py b/sympy/stats/stochastic_process_types.py
index 7bc012c532b3..a0da210c2aa8 100644
--- a/sympy/stats/stochastic_process_types.py
+++ b/sympy/stats/stochastic_process_types.py
@@ -1,10 +1,10 @@
import random
import itertools
-from typing import Sequence as tSequence, Union as tUnion
+from typing import Sequence as tSequence, Union as tUnion, List as tList, Tuple as tTuple
from sympy import (Matrix, MatrixSymbol, S, Indexed, Basic, Tuple, Range,
- Set, And, Eq, FiniteSet, ImmutableMatrix, Integer,
+ Set, And, Eq, FiniteSet, ImmutableMatrix, Integer, igcd,
Lambda, Mul, Dummy, IndexedBase, Add, Interval, oo,
linsolve, eye, Or, Not, Intersection, factorial, Contains,
Union, Expr, Function, exp, cacheit, sqrt, pi, gamma,
@@ -13,6 +13,7 @@
from sympy.core.relational import Relational
from sympy.logic.boolalg import Boolean
from sympy.utilities.exceptions import SymPyDeprecationWarning
+from sympy.utilities.iterables import strongly_connected_components
from sympy.stats.joint_rv import JointDistribution
from sympy.stats.joint_rv_types import JointDistributionHandmade
from sympy.stats.rv import (RandomIndexedSymbol, random_symbols, RandomSymbol,
@@ -835,6 +836,134 @@ def _transient2absorbing(self):
return ImmutableMatrix(t2a)
+ def communication_classes(self) -> tList[tTuple[tList[Basic], Boolean, Integer]]:
+ """
+ Returns the list of communication classes that partition
+ the states of the markov chain.
+
+ A communication class is defined to be a set of states
+ such that every state in that set is reachable from
+ every other state in that set. Due to its properties
+ this forms a class in the mathematical sense.
+ Communication classes are also known as recurrence
+ classes.
+
+ Returns
+ =======
+
+ classes : List of Tuple of List, Boolean, Intger
+ The ``classes`` are a list of tuples. Each
+ tuple represents a single communication class
+ with its properties. The first element in the
+ tuple is the list of states in the class, the
+ second element is whether the class is recurrent
+ and the third element is the period of the
+ communication class.
+
+ Examples
+ ========
+
+ >>> from sympy.stats import DiscreteMarkovChain
+ >>> from sympy import Matrix
+ >>> T = Matrix([[0, 1, 0],
+ ... [1, 0, 0],
+ ... [1, 0, 0]])
+ >>> X = DiscreteMarkovChain('X', [1, 2, 3], T)
+ >>> classes = X.communication_classes()
+ >>> for states, is_recurrent, period in classes:
+ ... states, is_recurrent, period
+ ([1, 2], True, 2)
+ ([3], False, 1)
+
+ From this we can see that states ``1`` and ``2``
+ communicate, are recurrent and have a period
+ of 2. We can also see state ``3`` is transient
+ with a period of 1.
+
+ Notes
+ =====
+
+ The algorithm used is of order ``O(n**2)`` where
+ ``n`` is the number of states in the markov chain.
+ It uses Tarjan's algorithm to find the classes
+ themselves and then it uses a breadth-first search
+ algorithm to find each class's periodicity.
+ Most of the algorithm's components approach ``O(n)``
+ as the matrix becomes more and more sparse.
+
+ References
+ ==========
+
+ .. [1] http://www.columbia.edu/~ww2040/4701Sum07/4701-06-Notes-MCII.pdf
+ .. [2] http://cecas.clemson.edu/~shierd/Shier/markov.pdf
+ .. [3] https://ujcontent.uj.ac.za/vital/access/services/Download/uj:7506/CONTENT1
+ .. [4] https://www.mathworks.com/help/econ/dtmc.classify.html
+ """
+ n = self.number_of_states
+ T = self.transition_probabilities
+
+ # begin Tarjan's algorithm
+ V = Range(n)
+ # don't use state names. Rather use state
+ # indexes since we use them for matrix
+ # indexing here and later onward
+ E = [(i, j) for i in V for j in V if T[i, j] != 0]
+ classes = strongly_connected_components((V, E))
+ # end Tarjan's algorithm
+
+ recurrence = []
+ periods = []
+ for class_ in classes:
+ # begin recurrent check (similar to self._check_trans_probs())
+ submatrix = T[class_, class_] # get the submatrix with those states
+ is_recurrent = S.true
+ rows = submatrix.tolist()
+ for row in rows:
+ if (sum(row) - 1) != 0:
+ is_recurrent = S.false
+ break
+ recurrence.append(is_recurrent)
+ # end recurrent check
+
+ # begin breadth-first search
+ non_tree_edge_values = set()
+ visited = {class_[0]}
+ newly_visited = {class_[0]}
+ level = {class_[0]: 0}
+ current_level = 0
+ done = False # imitate a do-while loop
+ while not done: # runs at most len(class_) times
+ done = len(visited) == len(class_)
+ current_level += 1
+
+ # this loop and the while loop above run a combined len(class_) number of times.
+ # so this triple nested loop runs through each of the n states once.
+ for i in newly_visited:
+
+ # the loop below runs len(class_) number of times
+ # complexity is around about O(n * avg(len(class_)))
+ newly_visited = {j for j in class_ if T[i, j] != 0}
+
+ new_tree_edges = newly_visited.difference(visited)
+ for j in new_tree_edges:
+ level[j] = current_level
+
+ new_non_tree_edges = newly_visited.intersection(visited)
+ new_non_tree_edge_values = {level[i]-level[j]+1 for j in new_non_tree_edges}
+
+ non_tree_edge_values = non_tree_edge_values.union(new_non_tree_edge_values)
+ visited = visited.union(new_tree_edges)
+
+ # igcd needs at least 2 arguments
+ g = igcd(len(class_), len(class_), *{val_e for val_e in non_tree_edge_values if val_e > 0})
+ periods.append(g)
+ # end breadth-first search
+
+ # convert back to the user's state names
+ classes = [[self._state_index[i] for i in class_] for class_ in classes]
+
+ return sympify(list(zip(classes, recurrence, periods)))
+
def fundamental_matrix(self):
Q = self._transient2transient()
if Q == None:
| diff --git a/sympy/stats/tests/test_stochastic_process.py b/sympy/stats/tests/test_stochastic_process.py
index 74fd6172b2f2..3abd19a46e6e 100644
--- a/sympy/stats/tests/test_stochastic_process.py
+++ b/sympy/stats/tests/test_stochastic_process.py
@@ -138,6 +138,59 @@ def test_DiscreteMarkovChain():
assert Y6.fundamental_matrix() == ImmutableMatrix([[Rational(3, 2), S.One, S.Half], [S.One, S(2), S.One], [S.Half, S.One, Rational(3, 2)]])
assert Y6.absorbing_probabilities() == ImmutableMatrix([[Rational(3, 4), Rational(1, 4)], [S.Half, S.Half], [Rational(1, 4), Rational(3, 4)]])
+ # test communication_class
+ # see https://drive.google.com/drive/folders/1HbxLlwwn2b3U8Lj7eb_ASIUb5vYaNIjg?usp=sharing
+ # tutorial 2.pdf
+ TO7 = Matrix([[0, 5, 5, 0, 0],
+ [0, 0, 0, 10, 0],
+ [5, 0, 5, 0, 0],
+ [0, 10, 0, 0, 0],
+ [0, 3, 0, 3, 4]])/10
+ Y7 = DiscreteMarkovChain('Y', trans_probs=TO7)
+ tuples = Y7.communication_classes()
+ classes, recurrence, periods = list(zip(*tuples))
+ assert classes == ([1, 3], [0, 2], [4])
+ assert recurrence == (True, False, False)
+ assert periods == (2, 1, 1)
+
+ TO8 = Matrix([[0, 0, 0, 10, 0, 0],
+ [5, 0, 5, 0, 0, 0],
+ [0, 4, 0, 0, 0, 6],
+ [10, 0, 0, 0, 0, 0],
+ [0, 10, 0, 0, 0, 0],
+ [0, 0, 0, 5, 5, 0]])/10
+ Y8 = DiscreteMarkovChain('Y', trans_probs=TO8)
+ tuples = Y8.communication_classes()
+ classes, recurrence, periods = list(zip(*tuples))
+ assert classes == ([0, 3], [1, 2, 5, 4])
+ assert recurrence == (True, False)
+ assert periods == (2, 2)
+
+ TO9 = Matrix([[2, 0, 0, 3, 0, 0, 3, 2, 0, 0],
+ [0, 10, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 2, 2, 0, 0, 0, 0, 0, 3, 3],
+ [0, 0, 0, 3, 0, 0, 6, 1, 0, 0],
+ [0, 0, 0, 0, 5, 5, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 10, 0, 0, 0, 0],
+ [4, 0, 0, 5, 0, 0, 1, 0, 0, 0],
+ [2, 0, 0, 4, 0, 0, 2, 2, 0, 0],
+ [3, 0, 1, 0, 0, 0, 0, 0, 4, 2],
+ [0, 0, 4, 0, 0, 0, 0, 0, 3, 3]])/10
+ Y9 = DiscreteMarkovChain('Y', trans_probs=TO9)
+ tuples = Y9.communication_classes()
+ classes, recurrence, periods = list(zip(*tuples))
+ assert classes == ([0, 3, 6, 7], [1], [2, 8, 9], [5], [4])
+ assert recurrence == (True, True, False, True, False)
+ assert periods == (1, 1, 1, 1, 1)
+
+ # test custom state space
+ Y10 = DiscreteMarkovChain('Y', [1, 2, 3], TO2)
+ tuples = Y10.communication_classes()
+ classes, recurrence, periods = list(zip(*tuples))
+ assert classes == ([1], [2, 3])
+ assert recurrence == (True, False)
+ assert periods == (1, 1)
+
# testing miscellaneous queries
T = Matrix([[S.Half, Rational(1, 4), Rational(1, 4)],
[Rational(1, 3), 0, Rational(2, 3)],
| [
{
"components": [
{
"doc": "Returns the list of communication classes that partition\nthe states of the markov chain.\n\nA communication class is defined to be a set of states\nsuch that every state in that set is reachable from\nevery other state in that set. Due to its properties\nthis forms a c... | [
"test_DiscreteMarkovChain"
] | [
"test_ContinuousMarkovChain",
"test_BernoulliProcess",
"test_PoissonProcess",
"test_WienerProcess",
"test_GammaProcess_symbolic"
] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Added Communication Classes to Discrete Markov Chains
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
#### Brief description of what is fixed or changed
Added the ability to identify the communication classes in a Markov chain, whether they are recurrent or transient and what periodicity they have.
#### Other comments
The algorithms used are as suggested from #20026. The overall algorithm complexity is technically O(n^2) where n is the number states in the chain. This is because of the definition of `E` in the source code. However, the major parts of the algorithm have complexity roughly O(n*m) where m is the average number of states in each recurrence class. This means, as the transition matrix approaches the identity matrix (or becomes more sparse), the algorithm approaches O(n).
There will be more functions added that directly and indirectly follow from this one.
The matrices used in the tests are from [this tutorial](https://drive.google.com/file/d/1mE9IgOdO-Nu9-xAaei74n28v5jVD7yvW/view?usp=sharing).
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
NO ENTRY
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in sympy/stats/stochastic_process_types.py]
(definition of DiscreteMarkovChain.communication_classes:)
def communication_classes(self) -> tList[tTuple[tList[Basic], Boolean, Integer]]:
"""Returns the list of communication classes that partition
the states of the markov chain.
A communication class is defined to be a set of states
such that every state in that set is reachable from
every other state in that set. Due to its properties
this forms a class in the mathematical sense.
Communication classes are also known as recurrence
classes.
Returns
=======
classes : List of Tuple of List, Boolean, Intger
The ``classes`` are a list of tuples. Each
tuple represents a single communication class
with its properties. The first element in the
tuple is the list of states in the class, the
second element is whether the class is recurrent
and the third element is the period of the
communication class.
Examples
========
>>> from sympy.stats import DiscreteMarkovChain
>>> from sympy import Matrix
>>> T = Matrix([[0, 1, 0],
... [1, 0, 0],
... [1, 0, 0]])
>>> X = DiscreteMarkovChain('X', [1, 2, 3], T)
>>> classes = X.communication_classes()
>>> for states, is_recurrent, period in classes:
... states, is_recurrent, period
([1, 2], True, 2)
([3], False, 1)
From this we can see that states ``1`` and ``2``
communicate, are recurrent and have a period
of 2. We can also see state ``3`` is transient
with a period of 1.
Notes
=====
The algorithm used is of order ``O(n**2)`` where
``n`` is the number of states in the markov chain.
It uses Tarjan's algorithm to find the classes
themselves and then it uses a breadth-first search
algorithm to find each class's periodicity.
Most of the algorithm's components approach ``O(n)``
as the matrix becomes more and more sparse.
References
==========
.. [1] http://www.columbia.edu/~ww2040/4701Sum07/4701-06-Notes-MCII.pdf
.. [2] http://cecas.clemson.edu/~shierd/Shier/markov.pdf
.. [3] https://ujcontent.uj.ac.za/vital/access/services/Download/uj:7506/CONTENT1
.. [4] https://www.mathworks.com/help/econ/dtmc.classify.html"""
[end of new definitions in sympy/stats/stochastic_process_types.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1b4529a95ef641c2fc15889091b281644069d20e | ||
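The Tarjan-plus-BFS approach described above can be illustrated with a simpler mutual-reachability computation on the doctest's 3-state chain. This sketch is not sympy's implementation (it uses 0-indexed states rather than the doctest's 1-based state names, and it does not compute periods); it only reproduces the class/recurrence split — `[1, 2]` recurrent, `[3]` transient.

```python
# Transition matrix from the doctest: 0 <-> 1 cycle, 2 leaks into 0.
T = [[0, 1, 0],
     [1, 0, 0],
     [1, 0, 0]]
n = len(T)

def reachable(i):
    """All states reachable from i (including i itself)."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v in range(n):
            if T[u][v] and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

reach = [reachable(i) for i in range(n)]

# Two states communicate iff each is reachable from the other.
classes = []
for i in range(n):
    cls = sorted(j for j in range(n) if i in reach[j] and j in reach[i])
    if cls not in classes:
        classes.append(cls)

# A class is recurrent iff no transition leaves it.
recurrent = [all(not T[i][j] for i in cls for j in range(n) if j not in cls)
             for cls in classes]
print(classes, recurrent)  # [[0, 1], [2]] [True, False]
```

The all-pairs reachability here is quadratic per state, which is why the PR prefers Tarjan's single-pass algorithm for the classes and a breadth-first search with gcd of cycle-length differences for the periods.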
joke2k__faker-1294 | 1,294 | joke2k/faker | null | 565d3aa016cb986031ad00746c4d3066e9743710 | 2020-10-03T12:43:26Z | diff --git a/faker/providers/currency/sv_SE/__init__.py b/faker/providers/currency/sv_SE/__init__.py
new file mode 100644
index 0000000000..69919686e4
--- /dev/null
+++ b/faker/providers/currency/sv_SE/__init__.py
@@ -0,0 +1,178 @@
+from .. import Provider as CurrencyProvider
+
+# Names taken from https://www.iban.se/currency-codes
+
+
+class Provider(CurrencyProvider):
+ # Format: (code, name)
+ currencies = (
+ ("AED", "UAE Dirham"),
+ ("AFN", "Afghani"),
+ ("ALL", "Lek"),
+ ("AMD", "Armenisk Dram"),
+ ("ANG", "Gulden frรฅn Nederlรคndska Antillerna"),
+ ("AOA", "Kwanza"),
+ ("ARS", "Argentinsk Peso"),
+ ("AUD", "Australisk Dollar"),
+ ("AWG", "Arubisk Florin"),
+ ("AZN", "Azerbajdzjansk Manat"),
+ ("BAM", "Konvertibel Mark"),
+ ("BBD", "Barbadosdollar"),
+ ("BDT", "Taka"),
+ ("BGN", "Bulgarisk Lev"),
+ ("BHD", "Bahraini Dinar"),
+ ("BIF", "Burundi-franc"),
+ ("BMD", "Bermuda-dollar"),
+ ("BND", "Brunei-dollar"),
+ ("BOB", "Boliviano"),
+ ("BOV", "Mvdol"),
+ ("BRL", "Brasilisk Real"),
+ ("BSD", "Bahamasdollar"),
+ ("BTN", "Ngultrum"),
+ ("BWP", "Pula"),
+ ("BYR", "Vitrysk Rubel"),
+ ("BZD", "Belize-dollar"),
+ ("CAD", "Kanadensisk Dollar"),
+ ("CDF", "Kongolesisk Franc"),
+ ("CHE", "WIR Euro"),
+ ("CHF", "Schweizerfranc"),
+ ("CHW", "WIR Franc"),
+ ("CLF", "Unidad de Fomento"),
+ ("CLP", "Chilensk Peso"),
+ ("CNY", "Yuan Renminbi"),
+ ("COP", "Colombiansk Peso"),
+ ("COU", "Unidad de Valor Real"),
+ ("CRC", "Costa Ricansk Colon"),
+ ("CUC", "Peso Convertible"),
+ ("CUP", "Kubansk Peso"),
+ ("CVE", "Kap Verde Escudo"),
+ ("CZK", "Tjeckisk Koruna"),
+ ("DJF", "Djibouti-franc"),
+ ("DKK", "Dansk Krone"),
+ ("DOP", "Dominicansk Peso"),
+ ("DZD", "Algerisk Dinar"),
+ ("EGP", "Egyptiskt pund"),
+ ("ERN", "Nakfa"),
+ ("ETB", "Etiopisk Birr"),
+ ("EUR", "Euro"),
+ ("FJD", "Fiji Dollar"),
+ ("FKP", "Pund frรฅn Falklandรถarna"),
+ ("GBP", "Pund Sterling"),
+ ("GEL", "Lari"),
+ ("GHS", "Ghana Cedi"),
+ ("GIP", "Gibraltar-pund"),
+ ("GMD", "Dalasi"),
+ ("GNF", "Guinea-franc"),
+ ("GTQ", "Quetzal"),
+ ("GYD", "Guyana-dollar"),
+ ("HKD", "Hong Kong-dollar"),
+ ("HNL", "Lempira"),
+ ("HRK", "Kuna"),
+ ("HTG", "Gourde"),
+ ("HUF", "Forint"),
+ ("IDR", "Rupiah"),
+ ("ILS", "Ny Israelisk Shekel"),
+ ("INR", "Indisk Rupie"),
+ ("IQD", "Irakisk Dinar"),
+ ("IRR", "Iransk Rial"),
+ ("ISK", "Islรคndsk Krona"),
+ ("JMD", "Jamaica-dollar"),
+ ("JOD", "Jordanisk Dinar"),
+ ("JPY", "Yen"),
+ ("KES", "Kenyansk Shilling"),
+ ("KGS", "Som"),
+ ("KHR", "Riel"),
+ ("KMF", "Comoros-franc"),
+ ("KPW", "Nordkoreansk Won"),
+ ("KRW", "Won"),
+ ("KWD", "Kuwaiti Dinar"),
+ ("KYD", "Caymanรถar-dollar"),
+ ("KZT", "Tenge"),
+ ("LAK", "Kip"),
+ ("LBP", "Libanesiskt pund"),
+ ("LKR", "Sri Lanka Rupie"),
+ ("LRD", "Liberiansk Dollar"),
+ ("LSL", "Loti"),
+ ("LYD", "Libysk Dinar"),
+ ("MAD", "Marockansk Dirham"),
+ ("MDL", "Moldavisk Leu"),
+ ("MGA", "Malagasy Ariary"),
+ ("MKD", "Denar"),
+ ("MMK", "Kyat"),
+ ("MNT", "Tugrik"),
+ ("MOP", "Pataca"),
+ ("MRO", "Ouguiya"),
+ ("MUR", "Mauritius Rupie"),
+ ("MVR", "Rufiyaa"),
+ ("MWK", "Kwacha"),
+ ("MXN", "Mexikansk Peso"),
+ ("MXV", "Mexikansk Unidad de Inversion (UDI)"),
+ ("MYR", "Malaysisk Ringgit"),
+ ("MZN", "Mozambique Metical"),
+ ("NAD", "Namibia Dollar"),
+ ("NGN", "Naira"),
+ ("NIO", "Cordoba Oro"),
+ ("NOK", "Norsk Krone"),
+ ("NOK", "Norwegian Krone"),
+ ("NPR", "Nepalesisk Rupie"),
+ ("NZD", "Nya Zealand-dollar"),
+ ("OMR", "Rial Omani"),
+ ("PAB", "Balboa"),
+ ("PEN", "Nuevo Sol"),
+ ("PGK", "Kina"),
+ ("PHP", "Filippinsk Peso"),
+ ("PKR", "Pakistansk Rupie"),
+ ("PLN", "Zloty"),
+ ("PYG", "Guarani"),
+ ("QAR", "Qatari Rial"),
+ ("RON", "Rumรคnsk Leu"),
+ ("RSD", "Serbisk Dinar"),
+ ("RUB", "Rysk Rubel"),
+ ("RWF", "Rwanda Franc"),
+ ("SAR", "Saudi Riyal"),
+ ("SBD", "Dollar frรฅn Salomonรถarna"),
+ ("SCR", "Seychell-rupie"),
+ ("SDG", "Sudanesiskt pund"),
+ ("SEK", "Svensk Krona"),
+ ("SGD", "Singapore Dollar"),
+ ("SHP", "Saint Helena pund"),
+ ("SLL", "Leone"),
+ ("SOS", "Somalisk Shilling"),
+ ("SRD", "Surinam Dollar"),
+ ("SSP", "Sydsudanesiskt pund"),
+ ("STD", "Dobra"),
+ ("SVC", "El Salvador Colon"),
+ ("SYP", "Syriskt pund"),
+ ("SZL", "Lilangeni"),
+ ("THB", "Baht"),
+ ("TJS", "Somoni"),
+ ("TMT", "Turkmenistansk Ny Manat"),
+ ("TND", "Tunisisk Dinar"),
+ ("TOP", "Paโanga"),
+ ("TRY", "Turkisk Lira"),
+ ("TTD", "Trinidad och Tobago Dollar"),
+ ("TWD", "Ny Taiwanesisk Dollar"),
+ ("TZS", "Tanzanisk Shilling"),
+ ("UAH", "Hryvnia"),
+ ("UGX", "Uganda Shilling"),
+ ("USD", "US Dollar"),
+ ("USN", "US Dollar (Nรคsta dag)"),
+ ("UYI", "Uruguay Peso en Unidades Indexadas (URUIURUI)"),
+ ("UYU", "Peso Uruguayo"),
+ ("UZS", "Uzbekistansk Sum"),
+ ("VEF", "Bolivar"),
+ ("VND", "Dong"),
+ ("VUV", "Vatu"),
+ ("WST", "Tala"),
+ ("XAF", "CFA Franc BEAC"),
+ ("XCD", "East Caribbean Dollar"),
+ ("XDR", "SDR (Sรคrskild dragningsrรคtt)"),
+ ("XOF", "CFA Franc BCEAO"),
+ ("XPF", "CFP Franc"),
+ ("XSU", "Sucre"),
+ ("XUA", "ADB Berรคkningsenhet"),
+ ("YER", "Yemeni Rial"),
+ ("ZAR", "Rand"),
+ ("ZMW", "Zambian Kwacha"),
+ ("ZWL", "Zimbabwe Dollar"),
+ )
| diff --git a/tests/providers/test_currency.py b/tests/providers/test_currency.py
index d88231693c..81bd2220c7 100644
--- a/tests/providers/test_currency.py
+++ b/tests/providers/test_currency.py
@@ -110,3 +110,25 @@ def test_currency_name(self, faker, num_samples):
for _ in range(num_samples):
name = faker.currency_name()
assert name in self.currency_names
+
+
+class TestSvSe:
+ """Test sv_SE currency provider"""
+ num_samples = 100
+
+ @classmethod
+ def setup_class(cls):
+ from faker.providers.currency.sv_SE import Provider as SvSeCurrencyProvider
+ cls.provider = SvSeCurrencyProvider
+ cls.currencies = cls.provider.currencies
+ cls.currency_codes, cls.currency_names = tuple(zip(*cls.currencies))
+
+ def test_currency(self, faker, num_samples):
+ for _ in range(num_samples):
+ cur = faker.currency()
+ assert cur in self.currencies
+
+ def test_currency_name(self, faker, num_samples):
+ for _ in range(num_samples):
+ name = faker.currency_name()
+ assert name in self.currency_names
| [
{
"components": [
{
"doc": "",
"lines": [
6,
177
],
"name": "Provider",
"signature": "class Provider(CurrencyProvider):",
"type": "class"
}
],
"file": "faker/providers/currency/sv_SE/__init__.py"
}
] | [
"tests/providers/test_currency.py::TestSvSe::test_currency",
"tests/providers/test_currency.py::TestSvSe::test_currency_name"
] | [
"tests/providers/test_currency.py::TestCurrencyProvider::test_currency",
"tests/providers/test_currency.py::TestCurrencyProvider::test_currency_code",
"tests/providers/test_currency.py::TestCurrencyProvider::test_currency_name",
"tests/providers/test_currency.py::TestCurrencyProvider::test_currency_symbol_no_... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Added currency provider for sv_SE
### What does this change
Added currency provider for sv_SE
### What was wrong
There was no provider.
### How this fixes it
Implemented a provider and added the same sort of tests as es_ES and ru_RU.
Fixes #1293
EDIT: [Hacktoberfest ](https://hacktoberfest.digitalocean.com/hacktoberfest-update) changed their rules, so if you want to attend you need to opt-in by adding the topic hacktoberfest.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in faker/providers/currency/sv_SE/__init__.py]
(definition of Provider:)
class Provider(CurrencyProvider):
[end of new definitions in faker/providers/currency/sv_SE/__init__.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
sv_SE currency provider missing
There is currently no provider for sv_SE handling currencies.
----------
--------------------
</issues> | 6edfdbf6ae90b0153309e3bf066aa3b2d16494a7 | |
joke2k__faker-1290 | 1,290 | joke2k/faker | null | 565d3aa016cb986031ad00746c4d3066e9743710 | 2020-10-02T07:49:14Z | diff --git a/faker/providers/job/sk_SK/__init__.py b/faker/providers/job/sk_SK/__init__.py
new file mode 100644
index 0000000000..4fb402b5dd
--- /dev/null
+++ b/faker/providers/job/sk_SK/__init__.py
@@ -0,0 +1,513 @@
+from .. import Provider as JobProvider
+
+
+class Provider(JobProvider):
+ """Translated from Super class"""
+
+ jobs = (
+ 'Administrátor, umenie',
+ 'Administrátor, štátna služba',
+ 'Advokát',
+ 'Advokát pre ochranné známky',
+ 'Akademický knihovník',
+ 'Akupunkturista',
+ 'Analytický chemik',
+ 'Analytik finančného rizika',
+ 'Angličtina ako lektorka cudzieho jazyka',
+ 'Angličtina ako učiteľka druhého jazyka',
+ 'Animátor',
+ 'Arborista',
+ 'Archeológ',
+ 'Architekt',
+ 'Architektonický technológ',
+ 'Archivár',
+ 'Arteterapeut',
+ 'Asistent politika',
+ 'Astronóm',
+ 'Audiologický vedec',
+ 'Automobilový inžinier',
+ 'Autorizovaný likvidátor strát',
+ 'Autorizovaný účtovník',
+ 'Autorizovaný účtovník v oblasti verejných financií',
+ 'Bankár',
+ 'Banský inžinier',
+ 'Barista',
+ 'Biochemik, klinický',
+ 'Biomedicínsky inžinier',
+ 'Biomedicínsky vedec',
+ 'Bylinkár',
+ 'Bytový manažér / referent',
+ 'Charitatívny úradník',
+ 'Chemický inžinier',
+ 'Chemik, analytický',
+ 'Chiropraktik',
+ 'Chirurg',
+ 'Copywriter, reklama',
+ 'Cytogenetik',
+ 'Daňový poradca',
+ 'Detská sestra',
+ 'Detský psychoterapeut',
+ 'Diagnostický rádiograf',
+ 'Dietológ',
+ 'Dizajnér, fúkané sklo / vitráž',
+ 'Dizajnér, grafik',
+ 'Dizajnér, interiér / priestor',
+ 'Dizajnér, keramika / hrnčiarstvo',
+ 'Dizajnér, multimédiá',
+ 'Dizajnér, móda / oblečenie',
+ 'Dizajnér, nábytok',
+ 'Dizajnér, priemyselný / produkt',
+ 'Dizajnér, televízia / film',
+ 'Dizajnér, textil',
+ 'Dizajnér, výstava / výstava',
+ 'Dizajnér, šperky',
+ 'Docent',
+ 'Dodávateľ',
+ 'Dospelý poradenský pracovník',
+ 'Dozorca',
+ 'Dramatický terapeut',
+ 'Dôstojník obchodného námorníctva',
+ 'Dôstojník pre ochranu prírody',
+ 'Dôstojník pre výcvik a vzdelávanie ozbrojených síl',
+ 'Editor funkcie časopisu',
+ 'Ekológ',
+ 'Ekonóm',
+ 'Elektroinžinier',
+ 'Embryológ, klinický',
+ 'Energetický inžinier',
+ 'Energetický manažér',
+ 'Environmentálny manažér',
+ 'Ergonóm',
+ 'Farebný technológ',
+ 'Farmaceut Spoločenstva',
+ 'Farmakológ',
+ 'Filmový / video editor',
+ 'Financny kontrolor',
+ 'Finančný manažér',
+ 'Finančný obchodník',
+ 'Finančný plánovač',
+ 'Finančný poradca',
+ 'Finančný riaditeľ',
+ 'Firemná sekretárka',
+ 'Fotograf',
+ 'Fytoterapeut',
+ 'Fyzik zdravia',
+ 'Fyzik, lekár',
+ 'Fyziologický vedec',
+ 'Fyziológ cvičenia',
+ 'Fyzioterapeut',
+ 'Fúkač skla / dizajnér',
+ 'Genetik, molekulárny',
+ 'Geochemik',
+ 'Geodet minerálov',
+ 'Geodet poistného rizika',
+ 'Geofyzik / terénny seizmológ',
+ 'Geológ, strojárstvo',
+ 'Geológ',
+ 'Geovedec',
+ 'Grafický dizajnér',
+ 'Grafik',
+ 'Hasič',
+ 'Hematológ',
+ 'Herec',
+ 'Herpetológ',
+ 'Hlavný marketingový riaditeľ',
+ 'Homeopat',
+ 'Hotelový manažér',
+ 'Hudobník',
+ 'Hudobný lektor',
+ 'Hudobný terapeut',
+ 'Hutník',
+ 'Hydrogeológ',
+ 'Hydrografický geodet',
+ 'Hydrológ',
+ 'Hygienik práce',
+ 'IT konzultant',
+ 'Ilustrátor',
+ 'Imunológ',
+ 'Informačný úradník',
+ 'Investičný analytik',
+ 'Investičný bankár, funkčný',
+ 'Investičný bankár, podnikový',
+ 'Inšpektor / hodnotiteľ reklamácií',
+ 'Inšpektor historických budov / referent pamiatkovej starostlivosti',
+ 'Inšpektor plánovania a rozvoja',
+ 'Inšpektor zdravia a bezpečnosti',
+ 'Inžinier budov',
+ 'Inžinier elektroniky',
+ 'Inžinier kontroly a prístrojového vybavenia',
+ 'Inžinier poľnohospodárstva',
+ 'Inžinier pre automobilový priemysel',
+ 'Inžinier výrobných systémov',
+ 'Inžinier, baníctvo',
+ 'Inžinier, biomedicínsky',
+ 'Inžinier, chemický',
+ 'Inžinier, elektronika',
+ 'Inžinier, elektrotechnik',
+ 'Inžinier, energia',
+ 'Inžinier, komunikácia',
+ 'Inžinier, letecký',
+ 'Inžinier, materiály',
+ 'Inžinier, pozemok',
+ 'Inžinier, poľnohospodár',
+ 'Inžinier, riadenie a prístrojové vybavenie',
+ 'Inžinier, ropa',
+ 'Inžinier, statik',
+ 'Inžinier, stavebné služby',
+ 'Inžinier, stavebný (zmluvný)',
+ 'Inžinier, stavebný inžinier (poradenstvo)',
+ 'Inžinier, technický predaj',
+ 'Inžinier, voda',
+ 'Inžinier, vysielanie (prevádzka)',
+ 'Inžinier, výroba',
+ 'Inžinier, výroba',
+ 'Inžinier, výrobné systémy',
+ 'Inžinier, vŕtanie',
+ 'Inžinier, web',
+ 'Inžinier, údržba',
+ 'Inžinier, údržba (IT)',
+ 'Inžiniersky geológ',
+ 'Kameraman',
+ 'Kariérny informačný úradník',
+ 'Kariérny poradca',
+ 'Kariérny poradca pre vysokoškolské vzdelávanie',
+ 'Kartograf',
+ 'Klinický biochemik',
+ 'Klinický cytogenetik',
+ 'Klinický embryológ',
+ 'Klinický molekulárny genetik',
+ 'Klinický psychológ',
+ 'Klinický vedec, histokompatibilita a imunogenetika',
+ 'Knihovník',
+ 'Knihovník, verejný',
+ 'Kníhkupec',
+ 'Komerčný / rezidenčný geodet',
+ 'Komerčný záhradník',
+ 'Komunikačný inžinier',
+ 'Komunitný umelecký pracovník',
+ 'Konateľ spoločnosti',
+ 'Kontrolór',
+ 'Konzervátor / reštaurátor nábytku',
+ 'Konzervátor múzea / galérie',
+ 'Konzervátor, múzeum / galéria',
+ 'Konzervátor, nábytok',
+ 'Konzultant pre dôchodky',
+ 'Konzultácia so stavebným inžinierom',
+ 'Koordinátor dobrovoľníctva',
+ 'Kupujúci, maloobchod',
+ 'Kurátor',
+ 'Kurátor múzea / galérie',
+ 'Kópia',
+ 'Lektor ďalšieho vzdelávania',
+ 'Lektor, vysokoškolské vzdelanie',
+ 'Lektor, ďalšie vzdelávanie',
+ 'Lekár všeobecného lekára',
+ 'Lekár, nemocnica',
+ 'Lekár, všeobecná prax',
+ 'Lekárnik, komunita',
+ 'Lekárnik, nemocnica',
+ 'Lekársky fyzik',
+ 'Lekársky ilustrátor',
+ 'Lekársky obchodný zástupca',
+ 'Lekársky sekretár',
+ 'Lekársky technický pracovník',
+ 'Letecký dispečer',
+ 'Letecký inžinier',
+ 'Letecký sprostredkovateľ',
+ 'Lexikograf',
+ 'Licencovaný dopravca',
+ 'Lobista',
+ 'Logistika / podpora / administratívny dôstojník ozbrojených síl',
+ 'Manažér call centra',
+ 'Manažér cestovnej kancelárie',
+ 'Manažér divadelnej scény',
+ 'Manažér farmy',
+ 'Manažér fitnescentra',
+ 'Manažér informačných systémov',
+ 'Manažér komerčnej umeleckej galérie',
+ 'Manažér logistiky a distribúcie',
+ 'Manažér stravovania',
+ 'Manažér umeleckej galérie',
+ 'Manažér zariadení',
+ 'Manažér zábavného parku',
+ 'Manžérsky konzultant',
+ 'Marketingový manažér',
+ 'Materiálový inžinier',
+ 'Mediálny plánovač',
+ 'Meteorológ',
+ 'Mikrobiológ',
+ 'Moderátor, vysielanie',
+ 'Morský vedec',
+ 'Multimediálne programy',
+ 'Módny návrhár',
+ 'Najlepší chlapec',
+ 'Nemocničný lekár',
+ 'Nemocničný lekárnik',
+ 'Neurochirurg',
+ 'Novinár novín',
+ 'Novinár časopisu',
+ 'Novinár, noviny',
+ 'Novinár, vysielanie',
+ 'Novinár, časopis',
+ 'Nákupca médií',
+ 'Nákupca, priemyselný',
+ 'Námorný architekt',
+ 'Návrhár interiérov a priestorov',
+ 'Návrhár nábytku',
+ 'Návrhár výstavy',
+ 'Návrhár šperkov',
+ 'Návrhárka keramiky',
+ 'Obchodník s akciami',
+ 'Obchodník s dlhopismi',
+ 'Obchodník s futures',
+ 'Oceánograf',
+ 'Ochranár, historické budovy',
+ 'Odborník na životné prostredie',
+ 'Odevný / textilný technológ',
+ 'Odevný / textilný technológ',
+ 'Onkológ',
+ 'Operatívny výskumník',
+ 'Operačný dôstojník diplomatických služieb',
+ 'Operačný dôstojník ozbrojených síl',
+ 'Optik, výdaj',
+ 'Optometristu',
+ 'Organizácia podujatia',
+ 'Ortoptista',
+ 'Osobný asistent',
+ 'Osteopat',
+ 'Očný lekár',
+ 'Palubní sprievodcovia',
+ 'Patent attorney',
+ 'Patológ',
+ 'Pedagogický psychológ',
+ 'Pedikér',
+ 'Personalista',
+ 'Pilot leteckej spoločnosti',
+ 'Plánovač dopravy',
+ 'Plánovač reklamného účtu',
+ 'Plánovač tlače',
+ 'Podnikový investičný bankár',
+ 'Podnikový pokladník',
+ 'Poistný matematik',
+ 'Poisťovací maklér',
+ 'Poisťovateľ',
+ 'Police officer',
+ 'Poradca pre zdravie a bezpečnosť',
+ 'Poradca pre životné prostredie',
+ 'Poradenská pracovníčka',
+ 'Poradenský psychológ',
+ 'Posádka',
+ 'Potravinársky technológ',
+ 'Poľnohospodársky konzultant',
+ 'Pracovník medzinárodnej pomoci / rozvoja',
+ 'Pracovník pomoci',
+ 'Pracovník rozvoja komunity',
+ 'Pracovník s mládežou',
+ 'Pracovný psychológ',
+ 'Pracovný terapeut',
+ 'Predajca',
+ 'Prekladateľ',
+ 'Prevádzkovateľ televíznej kamery',
+ 'Prevádzkový geológ',
+ 'Prevádzkový investičný bankár',
+ 'Prevádzkový riaditeľ',
+ 'Priemyselný / produktový dizajnér',
+ 'Priemyselný kupujúci',
+ 'Prieskumník trhu',
+ 'Prieskumový pracovník',
+ 'Probačný úradník',
+ 'Producent, rádio',
+ 'Producent, televízia / film / video',
+ 'Production assistant, radio',
+ 'Production assistant, television',
+ 'Production designer, theatre/television/film',
+ 'Production engineer',
+ 'Production manager',
+ 'Produktový dizajnér',
+ 'Produktový manažér',
+ 'Professor Emeritus',
+ 'Programme researcher, broadcasting/film/video',
+ 'Programmer, applications',
+ 'Programmer, multimedia',
+ 'Programmer, systems',
+ 'Proofreader',
+ 'Právnik',
+ 'Právny tajomník',
+ 'Prázdninový zástupca',
+ 'Psychiatric nurse',
+ 'Psychiatrist',
+ 'Psychologist, clinical',
+ 'Psychologist, counselling',
+ 'Psychologist, educational',
+ 'Psychologist, forensic',
+ 'Psychologist, occupational',
+ 'Psychologist, prison and probation services',
+ 'Psychologist, sport and exercise',
+ 'Psychoterapeut tanečného pohybu',
+ 'Psychoterapeut',
+ 'Pôda',
+ 'Pôrodná asistentka',
+ 'Manažér kvality',
+ 'Radca',
+ 'Realitný maklér',
+ 'Redaktor, uvedenie do prevádzky',
+ 'Redakčný asistent',
+ 'Referent cestovného ruchu',
+ 'Referent environmentálnej výchovy',
+ 'Referent geografických informačných systémov',
+ 'Referent komunitného vzdelávania',
+ 'Referent múzejného vzdelávania',
+ 'Referent obchodných noriem',
+ 'Referent ochrany prírody',
+ 'Referent odbornej prípravy a rozvoja',
+ 'Referent odborového výskumu',
+ 'Referent poľných pokusov',
+ 'Referent pre núdzové plánovanie / riadenie',
+ 'Referent pre rovnosť a rozmanitosť',
+ 'Referent pre výstavy v múzeách / galériách',
+ 'Referent rozvoja umenia',
+ 'Referent technickej podpory IT',
+ 'Referent výstavy, múzeum / galéria',
+ 'Referent ľudských zdrojov',
+ 'Referent školstva pre životné prostredie',
+ 'Referent školstva, komunita',
+ 'Referent školstva, múzeum',
+ 'Regulátor strát, objednaný',
+ 'Reklamný textár',
+ 'Reklamný umelecký riaditeľ',
+ 'Riaditeľ pre stratégiu',
+ 'Ropný inžinier',
+ 'Rozvojový pracovník, komunita',
+ 'Rozvojový pracovník, medzinárodná pomoc',
+ 'Rýchly streamer pre štátnu službu',
+ 'Sanitka',
+ 'Sestra pre dospelých',
+ 'Sestra pre duševné zdravie',
+ 'Sestra s poruchami učenia',
+ 'Sestra, detská',
+ 'Sestra, dospelý',
+ 'Sestra, porucha učenia',
+ 'Sieťový inžinier',
+ 'Spisovateľ',
+ 'Spolupracovník pre klinický výskum',
+ 'Spracovateľ geofyzikálnych údajov',
+ 'Spravodajský analytik',
+ 'Správca',
+ 'Správca databázy',
+ 'Správca dedičstva',
+ 'Správca dôchodkového systému',
+ 'Správca lesov a lesov',
+ 'Správca nehnuteľnosti / pozemkový agent',
+ 'Správca poistného účtu',
+ 'Správca polohy',
+ 'Správca spracovania údajov',
+ 'Správca umenia',
+ 'Správca zákazníckeho centra',
+ 'Správca školstva',
+ 'Správca štátnej služby',
+ 'Správca, charitatívne / dobrovoľnícke organizácie',
+ 'Správca, miestna samospráva',
+ 'Správca, vzdelávanie',
+ 'Správca, šport',
+ 'Stavebný geodet',
+ 'Stavebný geodet',
+ 'Stavebný inžinier, poradenstvo',
+ 'Stavebný inžinier, uzatváranie zmlúv',
+ 'Strihač, film / video',
+ 'Strojný inžinier',
+ 'Strážca / strážca',
+ 'Svetelný technik, vysielanie / film / video',
+ 'Súdny psychológ',
+ 'Súdny vedec',
+ 'Súkromný učiteľ hudby',
+ 'Tanečnica',
+ 'Technický dôstojník ozbrojených síl',
+ 'Technik údržby',
+ 'Technológ pre zvieratá',
+ 'Technológ varenia piva',
+ 'Terapeut, dráma',
+ 'Terapeut, hudba',
+ 'Terapeut, záhradnícky',
+ 'Terapeut, šport',
+ 'Terénny seizmológ',
+ 'Tlačový fotograf',
+ 'Tlmočník',
+ 'Toxikológ',
+ 'Umelec',
+ 'Urobiť',
+ 'Uvádzací redaktor',
+ 'Učiaci sa mentor',
+ 'Učiteľ v ranom detstve',
+ 'Učiteľ, angličtina ako cudzí jazyk',
+ 'Učiteľ, hudba',
+ 'Učiteľ, prvé roky / pred',
+ 'Učiteľ, vzdelávanie dospelých',
+ 'Učiteľ, základná škola',
+ 'Učiteľka na základnej škole',
+ 'Vedec dát',
+ 'Vedec pre kvalitu vody',
+ 'Vedec vývoja produktov / procesov',
+ 'Vedecký pracovník lekárskeho laboratória',
+ 'Vedúci kancelárie',
+ 'Vedúci konferenčného centra',
+ 'Vedúci osobnej dopravy',
+ 'Vedúci outdoorových aktivít / vzdelávania',
+ 'Vedúci reklamného účtu',
+ 'Vedúci reštaurácie rýchleho občerstvenia',
+ 'Vedúci rybej farmy',
+ 'Vedúci skladu',
+ 'Vedúci strediska voľného času',
+ 'Vedúci turistického informačného centra',
+ 'Vedúci ubytovania',
+ 'Vedúci zdravotníckej služby',
+ 'Vedúci úseku',
+ 'Veterinárny chirurg',
+ 'Video editor',
+ 'Vizuálny obchodník',
+ 'Vládny úradník pre sociálny výskum',
+ 'Vodný inžinier',
+ 'Vrtný inžinier',
+ 'Vybavenie záhradník',
+ 'Vybavovač poistných udalostí',
+ 'Vysielaný novinár',
+ 'Vysokoškolský lektor',
+ 'Výdajný optik',
+ 'Výkonný riaditeľ',
+ 'Výkonný technický riaditeľ',
+ 'Výrobný inžinier',
+ 'Výtvarný umelec',
+ 'Vývojár aplikácií',
+ 'Vývojár hier',
+ 'Vývojár počítačových hier',
+ 'Vývojár systémov',
+ 'Výživový poradca pre zvieratá',
+ 'Výživový terapeut',
+ 'Web dizajnér',
+ 'Wellsite geológ',
+ 'Zamestnanec imigračného úradu',
+ 'Zdravotná sestra, duševné zdravie',
+ 'Zdravotný návštevník',
+ 'Zememerač / geomatik',
+ 'Zmluvný stavebný inžinier',
+ 'Zubár',
+ 'Záchranár',
+ 'Záhradnícky konzultant',
+ 'Záhradnícky terapeut',
+ 'Záhradník, komerčný',
+ 'Záhradník, vybavenosť',
+ 'Záhradný architekt',
+ 'Záznamník bahna',
+ 'Úradník miestnej samosprávy',
+ 'Úradník pre rybolov',
+ 'Účtovník, autorizované verejné financie',
+ 'Účtovník, autorizovaný',
+ 'Účtovník, autorizovaný certifikovaný',
+ 'Účtovník, autorizovaný manažment',
+ 'Účtovný technik',
+ 'Špecialista na multimédiá',
+ 'Špecialista na podporu zdravia',
+ 'Špeditér',
+ 'Šľachtiteľ rastlín / genetik')
+
+ def job(self):
+ return self.random_element(self.jobs)
| diff --git a/tests/providers/test_job.py b/tests/providers/test_job.py
index 33edb985d7..5bca64ead0 100644
--- a/tests/providers/test_job.py
+++ b/tests/providers/test_job.py
@@ -8,6 +8,7 @@
from faker.providers.job.ko_KR import Provider as KoKrJobProvider
from faker.providers.job.pt_BR import Provider as PtBrJobProvider
from faker.providers.job.pt_PT import Provider as PtPtJobProvider
+from faker.providers.job.sk_SK import Provider as SkSkJobProvider
from faker.providers.job.tr_TR import Provider as TrTrJobProvider
@@ -84,13 +85,23 @@ def test_job(self, faker, num_samples):
class TestPtBr:
- """Test de_DE job provider"""
+ """Test pt_BR job provider"""
def test_job(self, faker, num_samples):
for _ in range(num_samples):
assert faker.job() in PtBrJobProvider.jobs
+class TestSkSk:
+ """Test sk_SK job provider"""
+
+ def test_job(self, faker, num_samples):
+ for _ in range(num_samples):
+ job = faker.job()
+ assert isinstance(job, str)
+ assert job in SkSkJobProvider.jobs
+
+
class TestTrTr:
"""Test tr_TR job provider"""
| [
{
"components": [
{
"doc": "Translated from Super class",
"lines": [
4,
513
],
"name": "Provider",
"signature": "class Provider(JobProvider):",
"type": "class"
},
{
"doc": "",
"lines": [
512,
... | [
"tests/providers/test_job.py::TestJobProvider::test_job",
"tests/providers/test_job.py::TestJaJp::test_job",
"tests/providers/test_job.py::TestKoKr::test_job",
"tests/providers/test_job.py::TestHuHu::test_job",
"tests/providers/test_job.py::TestHyAm::test_job",
"tests/providers/test_job.py::TestDeDe::test... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add sk_SK Job provider
### What does this changes
This PR adds sk_SK Job provider
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in faker/providers/job/sk_SK/__init__.py]
(definition of Provider:)
class Provider(JobProvider):
"""Translated from Super class"""
(definition of Provider.job:)
def job(self):
[end of new definitions in faker/providers/job/sk_SK/__init__.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 6edfdbf6ae90b0153309e3bf066aa3b2d16494a7 | ||
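The provider above reduces to one call: `job()` returns `self.random_element(self.jobs)`, a uniform draw from the locale's tuple of job titles. The sketch below reproduces that contract with a hypothetical three-title excerpt and a plain `random.Random` in place of faker's generator; names and structure outside the patch are illustrative assumptions.

```python
import random

# Hypothetical excerpt of the sk_SK `jobs` tuple (the real patch holds ~450 titles).
JOBS = ("Advokát", "Architekt", "Zubár")


class SkSkJobProvider:
    """Minimal stand-in for the patched Provider: job() draws uniformly from `jobs`."""

    jobs = JOBS

    def __init__(self, rng: random.Random):
        self._rng = rng

    def job(self) -> str:
        # Equivalent to `self.random_element(self.jobs)` in the faker provider.
        return self._rng.choice(self.jobs)


provider = SkSkJobProvider(random.Random(42))
assert provider.job() in JOBS
```

Membership in the `jobs` tuple is all the new `TestSkSk::test_job` needs to verify, which is why the test simply asserts `job in SkSkJobProvider.jobs`.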
joke2k__faker-1288 | 1,288 | joke2k/faker | null | b86c2f827fe09b65e3bd3494f385291ef96e0556 | 2020-10-01T18:44:54Z | diff --git a/faker/providers/automotive/tr_TR/__init__.py b/faker/providers/automotive/tr_TR/__init__.py
new file mode 100644
index 0000000000..e5498b23bb
--- /dev/null
+++ b/faker/providers/automotive/tr_TR/__init__.py
@@ -0,0 +1,31 @@
+import re
+
+from .. import Provider as AutomotiveProvider
+
+
+class Provider(AutomotiveProvider):
+ """Implement automotive provider for ``tr_TR`` locale.
+
+ Sources:
+
+ - https://en.wikipedia.org/wiki/Vehicle_registration_plates_of_Turkey
+ """
+
+ license_formats = (
+ '## ? ####',
+ '## ? #####',
+ '## ?? ###',
+ '## ?? ####',
+ '## ??? ##',
+ '## ??? ###',
+ )
+ ascii_uppercase_turkish = 'ABCDEFGHIJKLMNOPRSTUVYZ'
+
+ def license_plate(self) -> str:
+ """Generate a license plate."""
+ temp = re.sub(r'\?',
+ lambda x: self.random_element(self.ascii_uppercase_turkish),
+ self.random_element(self.license_formats))
+ temp = temp.replace('##', '{:02d}', 1)
+ temp = temp.format(self.random_element(range(1, 82)))
+ return self.numerify(temp)
| diff --git a/tests/providers/test_automotive.py b/tests/providers/test_automotive.py
index f79289c6e1..6842b66b18 100644
--- a/tests/providers/test_automotive.py
+++ b/tests/providers/test_automotive.py
@@ -3,6 +3,7 @@
from faker.providers.automotive.de_DE import Provider as DeDeAutomotiveProvider
from faker.providers.automotive.es_ES import Provider as EsEsAutomotiveProvider
from faker.providers.automotive.ru_RU import Provider as RuRuAutomotiveProvider
+from faker.providers.automotive.tr_TR import Provider as TrTrAutomotiveProvider
class _SimpleAutomotiveTestMixin:
@@ -158,3 +159,20 @@ def test_plate_format(self, faker):
plate = faker.license_plate()
assert isinstance(plate, str)
assert self.new_format_pattern.match(plate) or self.old_format_pattern.match(plate)
+
+
+class TestTrTr(_SimpleAutomotiveTestMixin):
+ """Test tr_TR automotive provider methods"""
+ license_plate_pattern = re.compile(
+ r'\d{2} [A-Z] \d{4}|'
+ r'\d{2} [A-Z] \d{5}|'
+ r'\d{2} [A-Z]{2} \d{3}|'
+ r'\d{2} [A-Z]{2} \d{4}|'
+ r'\d{2} [A-Z]{3} \d{2}|'
+ r'\d{2} [A-Z]{3} \d{3}',
+ )
+
+ def perform_extra_checks(self, license_plate, match):
+ [city_code, letters, _] = license_plate.split(' ')
+ assert int(city_code) in range(1, 82)
+ assert all(letter in TrTrAutomotiveProvider.ascii_uppercase_turkish for letter in letters)
| [
{
"components": [
{
"doc": "Implement automotive provider for ``tr_TR`` locale.\n\nSources:\n\n- https://en.wikipedia.org/wiki/Vehicle_registration_plates_of_Turkey",
"lines": [
6,
31
],
"name": "Provider",
"signature": "class Provider(Automotive... | [
"tests/providers/test_automotive.py::TestPtBr::test_license_plate",
"tests/providers/test_automotive.py::TestPtPt::test_license_plate",
"tests/providers/test_automotive.py::TestHuHu::test_license_plate",
"tests/providers/test_automotive.py::TestDeDe::test_license_plate",
"tests/providers/test_automotive.py:... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add Turkish translation to automotive provider
### What does this change
Add custom automotive provider for tr_TR locale
### What was wrong
The automotive provider of tr_TR was the default automotive provider.
### How this fixes it
The added automotive provider will return a license plate in Turkish.
Fixes #1286
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in faker/providers/automotive/tr_TR/__init__.py]
(definition of Provider:)
class Provider(AutomotiveProvider):
"""Implement automotive provider for ``tr_TR`` locale.
Sources:
- https://en.wikipedia.org/wiki/Vehicle_registration_plates_of_Turkey"""
(definition of Provider.license_plate:)
def license_plate(self) -> str:
"""Generate a license plate."""
[end of new definitions in faker/providers/automotive/tr_TR/__init__.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Turkish translation for Automotive provider
* Faker version:
* OS:
The automotive provider does not contain a Turkish translation. It is using the base automotive provider (`license_plate`).
### Steps to reproduce
import faker
f = faker.Faker('tr_TR')
lp = f.license_plate()
### Expected behavior
`lp` needs to hold a license plate sample in Turkish (e.g. `25 MOT 93`)
### Actual behavior
`lp` is holding a license plate in English (e.g. `552 YFC`)
----------
--------------------
</issues> | 6edfdbf6ae90b0153309e3bf066aa3b2d16494a7 | |
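The tr_TR patch's `license_plate()` composes three substitutions: each `?` becomes a letter from the Turkish plate alphabet (no Q, W, X, or most diacritics), the leading `##` becomes a zero-padded province code 01-81, and the remaining `#` digits are filled in by `numerify`. The self-contained sketch below mirrors that logic with `random.Random` standing in for faker's generator and a plain `re.sub` in place of `numerify`; it is a sketch of the same technique, not the provider itself.

```python
import random
import re

ASCII_UPPERCASE_TURKISH = 'ABCDEFGHIJKLMNOPRSTUVYZ'
LICENSE_FORMATS = ('## ? ####', '## ?? ###', '## ??? ##')


def license_plate(rng: random.Random) -> str:
    fmt = rng.choice(LICENSE_FORMATS)
    # '?' -> a letter allowed on Turkish plates.
    fmt = re.sub(r'\?', lambda m: rng.choice(ASCII_UPPERCASE_TURKISH), fmt)
    # First '##' -> zero-padded province code in 01..81.
    fmt = fmt.replace('##', '{:02d}', 1).format(rng.randrange(1, 82))
    # Remaining '#' -> random digits (what faker's numerify does).
    return re.sub(r'#', lambda m: str(rng.randrange(10)), fmt)


plate = license_plate(random.Random(7))
city_code = int(plate.split(' ')[0])
assert 1 <= city_code <= 81
```

Note the `replace('##', '{:02d}', 1)` count argument: only the first `##` is the province placeholder, so the trailing digit groups (which also contain `##` pairs) are left for the digit pass.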
joke2k__faker-1287 | 1,287 | joke2k/faker | null | b86c2f827fe09b65e3bd3494f385291ef96e0556 | 2020-10-01T18:40:29Z | diff --git a/faker/providers/credit_card/pt_PT/__init__.py b/faker/providers/credit_card/pt_PT/__init__.py
new file mode 100644
index 0000000000..57f2e74f38
--- /dev/null
+++ b/faker/providers/credit_card/pt_PT/__init__.py
@@ -0,0 +1,68 @@
+from collections import OrderedDict
+
+from .. import CreditCard
+from .. import Provider as CreditCardProvider
+
+
+class Provider(CreditCardProvider):
+ """Implementation of ``pt_PT`` locale credit card
+
+ For all methods that take ``card_type`` as an argument a random card type
+ will be used if the supplied value is ``None``. The list of valid card types
+ includes ``'visa'``, ``'mastercard'`` and ``'maestro'``.
+
+ Source: https://bincheck.org/portugal
+ """
+
+ prefix_visa = ['400131', '400190', '400817', '402192', '402947', '402956', '403005', '403006', '403007',
+ '403008', '403271', '404520', '404530', '405758', '406170', '406475', '407548', '407549',
+ '407575', '408237', '408239', '409842', '409843', '410000', '410344', '410345', '410553',
+ '410557', '411635', '411700', '411701', '411869', '412487', '412488', '412489', '412657',
+ '412782', '412990', '413014', '413793', '413871', '415158', '415159', '415170', '415171',
+ '415174', '415175', '415194', '415195', '415238', '415272', '415273', '415403', '415404',
+ '415405', '415440', '415441', '415569', '415920', '415961', '416952', '416963', '416970',
+ '417005', '417091', '417092', '417337', '418847', '419022', '419682', '419683', '419684',
+ '421149', '421510', '422080', '422240', '422241', '422414', '422417', '422597', '422869',
+ '423392', '423393', '424118', '424184', '424208', '424661', '425509', '425510', '425906',
+ '426150', '426360', '426370', '427256', '427304', '427729', '427770', '427867', '428139',
+ '428184', '428185', '428186', '428187', '429711', '430240', '430241', '431926', '433390',
+ '433391', '433511', '433512', '433513', '433599', '433618', '433622', '433966', '437886',
+ '438257', '439070', '440637', '440644', '440645', '442664', '443977', '443978', '444224',
+ '444227', '445961', '445962', '446140', '446144', '449389', '450915', '451156', '451166',
+ '454755', '455250', '455290', '455292', '455658', '456811', '456812', '457031', '458058',
+ '458059', '459432', '459433', '459449', '460340', '460341', '460342', '461247', '461248',
+ '461249', '462731', '462732', '464406', '465964', '476066', '476067', '476068', '476069',
+ '476070', '476071', '476329', '477920', '477921', '477922', '477947', '477989', '478062',
+ '478063', '479702', '479736', '483088', '485672', '486449', '486457', '489434', '489485',
+ '490772', '490830', '490831', '490832', '490841', '490863', '491213', '491546', '491547',
+ '491613', '492194', '493402', '493480', '493800', '493801', '493830', '498800', '499968',
+ '499969', '499986', '422239', '422041', '464409', '464408']
+
+ prefix_mastercard = ['510122', '510123', '512556', '518772', '519744', '519774', '520342', '524552',
+ '524878', '525625', '525808', '526819', '527014', '528024', '529119', '530267',
+ '530770', '532355', '536468', '541171', '541557', '542081', '542098', '542858',
+ '543099', '543116', '543123', '544051', '544052', '544233', '547260', '547459',
+ '548168', '548169', '552727', '552755', '553057', '554506', '554517', '554518',
+ '556660', '557836', '557882', '557883', '557888']
+
+ prefix_maestro = ['501654', '501659', '670530', '670811', '670812', '676938', '676938', '677393',
+ '677707', '670835', '670817']
+
+ credit_card_types = OrderedDict((
+ ('maestro', CreditCard('Maestro', prefix_maestro, 16, security_code='CVV2')),
+ ('mastercard', CreditCard('Mastercard', prefix_mastercard, 16, security_code='CVV2')),
+ ('visa', CreditCard('Visa', prefix_visa, 16, security_code='CVV2')),
+ ))
+
+ def credit_card_expire(self, start='now', end='+4y', date_format='%m/%y'):
+ """Generate a credit card expiry date.
+
+ This method uses |date_time_between| under the hood to generate the
+ expiry date, so the ``start`` and ``end`` arguments work in the same way
+ here as it would in that method. For the actual formatting of the expiry
+ date, |strftime| is used and ``date_format`` is simply passed
+ to that method.
+ """
+
+ expire_date = self.generator.date_time_between(start, end)
+ return expire_date.strftime(date_format)
| diff --git a/tests/providers/test_credit_card.py b/tests/providers/test_credit_card.py
index 11b3df8309..6c2561eb91 100644
--- a/tests/providers/test_credit_card.py
+++ b/tests/providers/test_credit_card.py
@@ -125,3 +125,26 @@ def test_credit_card_full(self, faker, num_samples):
card_data = faker.credit_card_full().split('\n')
assert re.match('[A-Za-z]+', card_data[1])
assert card_data[4] in RuRuBankProvider.banks
+
+
+class TestPtPt:
+ """Test pt_PT credit card provider methods"""
+
+ visa_pattern = re.compile(r'4[0-9]{15}')
+ mastercard_pattern = re.compile(r'5[1-5][0-9]{14}')
+ maestro_pattern = re.compile(r'(50|67)[0-9]{14}')
+
+ def test_visa(self, faker, num_samples):
+ for _ in range(num_samples):
+ number = faker.credit_card_number('visa')
+ assert self.visa_pattern.fullmatch(number)
+
+ def test_mastercard(self, faker, num_samples):
+ for _ in range(num_samples):
+ number = faker.credit_card_number('mastercard')
+ assert self.mastercard_pattern.fullmatch(number)
+
+ def test_maestro(self, faker, num_samples):
+ for _ in range(num_samples):
+ number = faker.credit_card_number('maestro')
+ assert self.maestro_pattern.fullmatch(number)
| [
{
"components": [
{
"doc": "Implementation of ``pt_PT`` locale credit card\n\nFor all methods that take ``card_type`` as an argument a random card type\nwill be used if the supplied value is ``None``. The list of valid card types\nincludes ``'visa'``, ``'mastercard'`` and ``'maestro'``.\n\nSource:... | [
"tests/providers/test_credit_card.py::TestPtPt::test_mastercard",
"tests/providers/test_credit_card.py::TestPtPt::test_maestro"
] | [
"tests/providers/test_credit_card.py::TestCreditCardProvider::test_mastercard",
"tests/providers/test_credit_card.py::TestCreditCardProvider::test_visa13",
"tests/providers/test_credit_card.py::TestCreditCardProvider::test_visa16",
"tests/providers/test_credit_card.py::TestCreditCardProvider::test_visa19",
... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add pt_PT credit card provider
### Added pt_PT credit card provider
Added Visa, Mastercard and Maestro BINs of Portugal.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in faker/providers/credit_card/pt_PT/__init__.py]
(definition of Provider:)
class Provider(CreditCardProvider):
"""Implementation of ``pt_PT`` locale credit card
For all methods that take ``card_type`` as an argument a random card type
will be used if the supplied value is ``None``. The list of valid card types
includes ``'visa'``, ``'mastercard'`` and ``'maestro'``.
Source: https://bincheck.org/portugal"""
(definition of Provider.credit_card_expire:)
def credit_card_expire(self, start='now', end='+4y', date_format='%m/%y'):
"""Generate a credit card expiry date.
This method uses |date_time_between| under the hood to generate the
expiry date, so the ``start`` and ``end`` arguments work in the same way
here as it would in that method. For the actual formatting of the expiry
date, |strftime| is used and ``date_format`` is simply passed
to that method."""
[end of new definitions in faker/providers/credit_card/pt_PT/__init__.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 6edfdbf6ae90b0153309e3bf066aa3b2d16494a7 | ||
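The pt_PT diff above lists BIN prefixes per card type, and its test patch checks generated numbers against regex patterns such as `4[0-9]{15}`. Locale credit-card providers of this kind typically pad a BIN prefix with random digits and close with a Luhn check digit. The sketch below illustrates that technique with the standard library only; the function names are illustrative and are not Faker's actual API.

```python
import random
import re

def luhn_check_digit(partial):
    # double every second digit from the right of the partial number,
    # subtracting 9 from any result above 9 (standard Luhn rule)
    digits = [int(d) for d in partial]
    for i in range(len(digits) - 1, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return str((10 - sum(digits) % 10) % 10)

def luhn_valid(number):
    # validate a full number: the check digit sits at the rightmost position
    digits = [int(d) for d in number]
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

def card_number(rng, prefix, length=16):
    # pad the BIN prefix with random digits, then close with the check digit
    body = prefix + ''.join(rng.choice('0123456789')
                            for _ in range(length - len(prefix) - 1))
    return body + luhn_check_digit(body)

rng = random.Random(0)
number = card_number(rng, '422239')  # one of the Visa BINs listed above
print(number)
```

A number built this way starts with the chosen BIN, matches the corresponding test pattern, and passes the Luhn check.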
joke2k__faker-1285 | 1,285 | joke2k/faker | null | c5fc99edf684e13914cf807fe16a6c37cb86511a | 2020-10-01T17:48:14Z | diff --git a/faker/providers/job/tr_TR/__init__.py b/faker/providers/job/tr_TR/__init__.py
new file mode 100644
index 0000000000..64330999ab
--- /dev/null
+++ b/faker/providers/job/tr_TR/__init__.py
@@ -0,0 +1,737 @@
+from .. import Provider as BaseProvider
+
+
+class Provider(BaseProvider):
+ """
+ Source: https://www.turkcebilgi.com/meslekler_listesi
+ """
+ jobs = [
+ "Acentacı",
+ "Acil durum yönetmeni",
+ "Adli tabip",
+ "Agronomist",
+ "Ağ yöneticisi",
+ "Aşçı",
+ "Aşçıbaşı",
+ "Ahşap tekne yapımcısı",
+ "Aile hekimi",
+ "Aktar",
+ "Akortçu",
+ "Aktör",
+ "Aktüer",
+ "Aktris",
+ "Akustikçi",
+ "Albay",
+ "Ambalajcı",
+ "Ambarcı",
+ "Ambulans şoförü",
+ "Amiral",
+ "Anahtarcı",
+ "Anestezi uzmanı",
+ "Anestezi teknikeri",
+ "Animatör",
+ "Antika satıcısı",
+ "Antropolog",
+ "Apartman yöneticisi",
+ "Araba satıcısı",
+ "Araba yıkayıcısı",
+ "Arabacı",
+ "Arabulucu",
+ "Araştırmacı",
+ "Arıcı",
+ "Arkeolog",
+ "Armatör",
+ "Arpist",
+ "Arşivci",
+ "Artist",
+ "Asansörcü",
+ "Asistan",
+ "Asker",
+ "Astrofizikçi",
+ "Astrolog",
+ "Astronom",
+ "Astronot",
+ "Astsubay",
+ "Atlet",
+ "Av bekçisi",
+ "Avcı",
+ "Avizeci",
+ "Avukat",
+ "Ayakçı (otogar, lokanta)",
+ "Ayakkabı boyacısı",
+ "Ayakkabı tamircisi",
+ "Ayakkabıcı",
+ "Ayı oynatıcısı",
+ "Araba tamircisi",
+ "Bacacı",
+ "Badanacı",
+ "Baharatçı",
+ "Bahçe bitkileri uzmanı",
+ "Bahçıvan",
+ "Bakan",
+ "Bakıcı",
+ "Bakırcı",
+ "Bakkal",
+ "Bakteriyolog",
+ "Balıkçı",
+ "Balerin",
+ "Balon pilotu",
+ "Bankacı",
+ "Banker",
+ "Barmen",
+ "Barmeyd",
+ "Basketbolcu",
+ "Başbakan",
+ "Başçavuş",
+ "Başdümenci",
+ "Başhemşire",
+ "Başkan",
+ "Başkomiser",
+ "Başpiskopos",
+ "Başrahip",
+ "Belediye başkanı",
+ "Belediye meclisi üyesi",
+ "Benzinci",
+ "Berber",
+ "Besteci",
+ "Biletçi",
+ "Bilgisayar mühendisi",
+ "Bilgisayar programcısı",
+ "Bilgisayar tamircisi",
+ "Bilim insanı",
+ "Bilirkişi",
+ "Binicilik",
+ "Biracı",
+ "Bisikletçi",
+ "Biyografi yazarı",
+ "Biyolog",
+ "Biyomedikal Mühendisi",
+ "Bobinajcı",
+ "Bombacı",
+ "Bomba imhacı",
+ "Borsacı",
+ "Borucu",
+ "Botanikçi",
+ "Boyacı",
+ "Bozacı",
+ "Böcekbilimci",
+ "Börekçi",
+ "Bulaşıkçı",
+ "Buldozer operatörü",
+ "Bütçe uzmanı",
+ "Büyükelçi",
+ "Besicilik",
+ "Bilgi İşlemci",
+ "Camcı",
+ "Cerrah",
+ "Celep",
+ "Cellat",
+ "Cost Control",
+ "Cillopçu",
+ "Cumhurbaşkanı",
+ "Çamaşırcı",
+ "Çantacı",
+ "Çarkçı",
+ "Çatıcı",
+ "Çaycı",
+ "Çevirmen",
+ "Çevrebilimci",
+ "Çevre mühendisi",
+ "Çeyizci",
+ "Çıkıkçı",
+ "Çıkrıkçı",
+ "Çiçekçi",
+ "Çiftçi",
+ "Çiftlik işletici",
+ "Çikolatacı",
+ "Çilingir",
+ "Çinici",
+ "Çitçi",
+ "Çoban",
+ "Çocuk doktoru",
+ "Çorapçı",
+ "Çöp işçisi",
+ "Çöpçü",
+ "Çırak",
+ "Çevik Kuvvet",
+ "Dadı",
+ "Daktilograf",
+ "Dalgıç",
+ "Damıtıcı",
+ "Danışman",
+ "Dansöz",
+ "Davulcu",
+ "Debbağ",
+ "Dedektif",
+ "Değirmen işçisi",
+ "Değirmenci",
+ "Demirci",
+ "Demiryolu işçisi",
+ "Denetçi",
+ "Denetleyici",
+ "Denizci",
+ "Depocu",
+ "Derici",
+ "Desinatör",
+ "Devlet memuru",
+ "Dilci",
+ "Dilenci",
+ "Diplomat",
+ "Diş hekimi",
+ "Diyetisyen",
+ "Dizgici",
+ "Doğalgazcı",
+ "Doğramacı",
+ "Doğum uzmanı",
+ "Dok işçisi",
+ "Dokumacı",
+ "Doktor",
+ "Dondurmacı",
+ "Dökümcü",
+ "Döşemeci",
+ "Dövizci",
+ "Dublajcı",
+ "Duvarcı",
+ "Dümenci",
+ "Diş teknisyeni",
+ "Ebe",
+ "Eczacı",
+ "Eczacı kalfası",
+ "Editör",
+ "Eğitimci",
+ "Eğitmen",
+ "Ekonomist",
+ "Elektrik mühendisi",
+ "Elektronik mühendisi",
+ "Elektrik-Elektronik mühendisi",
+ "Elektronik ve Haberleşme mühendisi",
+ "Elektrikçi",
+ "Eleştirmen",
+ "Embriyolog",
+ "Emlakçı",
+ "Emniyet amiri",
+ "Emniyet genel müdürü",
+ "Endüstri mühendisi",
+ "Endüstri sistemleri mühendisi",
+ "Enstrüman imalatçısı",
+ "Ergonomist",
+ "Eskici",
+ "Esnaf",
+ "Estetisyen",
+ "Etolojist",
+ "Etimolog",
+ "Etnolog",
+ "Ev hanımı",
+ "Fabrika işçisi",
+ "Fahişe",
+ "Falcı",
+ "Fermantasyon işçisi",
+ "Fıçıcı",
+ "Fırıncı",
+ "Figüran",
+ "Film yapımcısı",
+ "Film yönetmeni",
+ "Filozof",
+ "Finansör",
+ "Fizikçi",
+ "Fizyonomist",
+ "Fizyoterapist",
+ "Acil tıp teknisyeni",
+ "Fon yöneticisi",
+ "Forklift operatörü",
+ "Fotoğrafçı",
+ "Futbolcu",
+ "Gardiyan",
+ "Galerici",
+ "Garson",
+ "Gazete dağıtıcısı",
+ "Gazete satıcısı",
+ "Gazeteci",
+ "Gelir uzmanı",
+ "Gelir uzman yardımcısı",
+ "Gemici",
+ "General",
+ "Genetik mühendisi",
+ "Geyşa",
+ "Gezgin",
+ "Gezici vaiz",
+ "Gıda mühendisi",
+ "Gitarist",
+ "Gondolcu",
+ "Gökbilimci",
+ "Göz doktoru",
+ "Gözetmen",
+ "Gözlükçü",
+ "Grafiker",
+ "Gramer uzmanı",
+ "Greyder operatörü",
+ "Guru",
+ "Güfteci",
+ "Gümrük memuru",
+ "Gümrük müşaviri",
+ "Gümrük müşavir yardımcısı",
+ "Gümrük uzmanı",
+ "Gündelikçi",
+ "Güzellik uzmanı",
+ "Haberci",
+ "Haddeci",
+ "Haham",
+ "Hakem",
+ "Halıcı",
+ "Halkbilimci",
+ "Hamal",
+ "Hamamcı",
+ "Hamurkâr",
+ "Hareket memuru",
+ "Haritacı",
+ "Harita mühendisi",
+ "Hastabakıcı",
+ "Hattat",
+ "Hava trafikçisi",
+ "Havacı",
+ "Haydut",
+ "Hayvan bakıcısı",
+ "Hayvan terbiyecisi",
+ "Hemşire",
+ "Hesap uzmanı",
+ "Heykeltıraş",
+ "Hırdavatçı",
+ "Hırsız",
+ "Hidrolikçi",
+ "Hizmetçi",
+ "Hokkabaz",
+ "Host",
+ "Hostes",
+ "Hukukçu",
+ "Hurdacı",
+ "İcra memuru",
+ "İç mimar",
+ "İğneci",
+ "İhracatçı",
+ "İktisatçı",
+ "İlahiyatçı",
+ "İllüzyonist",
+ "İmam",
+ "İnsan kaynakları uzmanı",
+ "İnşaat mühendisi",
+ "İnşaatçı",
+ "İpçi",
+ "İplikçi",
+ "İstatistikçi",
+ "İstihkâmcı",
+ "İşaretçi",
+ "İşçi",
+ "İşletmeci",
+ "İşletme mühendisi",
+ "İşportacı",
+ "İş ve Uğraşı Terapisti",
+ "İtfaiyeci",
+ "İthalatçı",
+ "Jeofizik mühendisi",
+ "Jeoloji mühendisi",
+ "Jeolog",
+ "Jeomorfolog",
+ "Jinekolog",
+ "Jimnastikçi",
+ "Jokey",
+ "Kabin görevlisi",
+ "Kabuk soyucusu",
+ "Kadın berberi",
+ "Kadın terzisi",
+ "Kâğıtçı",
+ "Kahveci",
+ "Kâhya",
+ "Kalaycı",
+ "Kalıpçı",
+ "Kaloriferci",
+ "Kamarot",
+ "Kameraman",
+ "Kamyoncu",
+ "Kapı satıcısı",
+ "Kapıcı",
+ "Kaplamacı",
+ "Kaportacı",
+ "Kaptan",
+ "Kardinal",
+ "Kardiyolog",
+ "Karikatürist",
+ "Karoserci",
+ "Karpuzcu",
+ "Kasap",
+ "Kasiyer",
+ "Kat görevlisi",
+ "Kâtip",
+ "Kayıkçı",
+ "Kaymakam",
+ "Kaynakçı",
+ "Kazıcı",
+ "Kebapçı",
+ "Kemancı",
+ "Kesimci",
+ "Keskin Nişancı",
+ "Kırtasiyeci",
+ "Kimyager",
+ "Kimya mühendisi",
+ "Kitapçı",
+ "Klarnetçi",
+ "Koleksiyoncu",
+ "Komedyen",
+ "Komisyoncu",
+ "Komiser",
+ "Konserveci",
+ "Konsolos",
+ "Konsomatris",
+ "Kontrolör",
+ "Konveyör operatörü",
+ "Kopyalayıcı",
+ "Koreograf",
+ "Korgeneral",
+ "Koramiral",
+ "Korsan",
+ "Koruma görevlisi",
+ "Komiser",
+ "Komiser yardımcısı",
+ "Kozmolog",
+ "Köfteci",
+ "Kömürcü",
+ "Köpek eğiticisi",
+ "Köşe yazarı",
+ "Kuaför",
+ "Kuşçu",
+ "Kumarbaz",
+ "Kumaşçı",
+ "Kumcu",
+ "Kuru temizlemeci",
+ "Kuruyemişçi",
+ "Kurye",
+ "Kuşbilimci",
+ "Kuyumcu",
+ "Kürkçü",
+ "Kütüphaneci",
+ "Krupiye",
+ "Laborant",
+ "Laboratuvar işçisi",
+ "Lahmacuncu",
+ "Lehimci",
+ "Levazımcı",
+ "Lobici",
+ "Lokantacı",
+ "Lokomotifçi",
+ "Lostromo",
+ "Lostracı",
+ "Lokman",
+ "Madenci",
+ "Makasçı",
+ "Makastar",
+ "Maketçi",
+ "Makinist",
+ "Makine mühendisi",
+ "Makine zabiti",
+ "Makyajcı",
+ "Mali hizmetler uzmanı",
+ "Manastır baş rahibesi",
+ "Manav",
+ "Manifaturacı",
+ "Manikürcü",
+ "Manken",
+ "Marangoz",
+ "Masör",
+ "Masöz",
+ "Matador",
+ "Matbaacı",
+ "Matematikçi",
+ "Matkapçı",
+ "Medya Planlama Uzmanı",
+ "Memur",
+ "Menajer",
+ "Mermerci",
+ "Metalurji mühendisi",
+ "Meteoroloji uzmanı",
+ "Metin yazarı",
+ "Mevsimlik işçi",
+ "Meydancı",
+ "Meyhaneci",
+ "Mezarcı",
+ "Midyeci",
+ "Mikrobiyolog",
+ "Milletvekili",
+ "Mimar",
+ "Misyoner",
+ "Mobilyacı",
+ "Modacı",
+ "Model",
+ "Modelci",
+ "Modelist",
+ "Montajcı",
+ "Montör",
+ "Motor tamircisi",
+ "Motorcu",
+ "Muhabbet tellalı",
+ "Muhabir",
+ "Muhafız",
+ "Muhasebeci",
+ "Muhtar",
+ "Mumyalayıcı",
+ "Muzcu",
+ "Mübaşir",
+ "Müdür",
+ "Müezzin",
+ "Müfettiş",
+ "Müşavir",
+ "Mühendis",
+ "Müneccim",
+ "Mürebbiye",
+ "Müsteşar",
+ "Müteahhit",
+ "Mütercim",
+ "Müze müdürü",
+ "Müzik yönetmeni",
+ "Müzisyen",
+ "Nalıncı",
+ "Nakışçı",
+ "Nakliyeci",
+ "Nalbant",
+ "Nalbur",
+ "Noter",
+ "Obuacı",
+ "Ocakçı",
+ "Odacı",
+ "Oduncu",
+ "Okçu",
+ "Okul müdürü",
+ "Okutman",
+ "Operatör",
+ "Opera sanatçısı",
+ "Orgcu",
+ "Orgeneral",
+ "Orman mühendisi",
+ "Ornitolog",
+ "Otelci",
+ "Oto elektrikçisi",
+ "Oto lastik tamircisi",
+ "Oto tamircisi",
+ "Oto yedek parçacı",
+ "Overlokçu",
+ "Oymacı",
+ "Oyuncu",
+ "Oyun hostesi",
+ "Oyun yazarı",
+ "Oyuncakçı",
+ "Öğretmen",
+ "Öğretim elemanı",
+ "Öğretim görevlisi",
+ "Öğretim üyesi",
+ "Örmeci",
+ "Ön muhasebeci",
+ "Ön muhasebe sorumlusu",
+ "Ön muhasebe yardımcı elemanı",
+ "Ön büro elemanı",
+ "Özel şoför",
+ "Paketleyici",
+ "Palyaço",
+ "Pandomimci",
+ "Pansiyoncu",
+ "Pansumancı",
+ "Papa",
+ "Papaz",
+ "Paralı asker",
+ "Park bekçisi",
+ "Pastörizör",
+ "Patolog",
+ "Peçeteci",
+ "Pencereci",
+ "Perukçu",
+ "Peyzaj mimarı",
+ "Peyzaj teknikeri",
+ "Pideci",
+ "Pilavcı",
+ "Pilot",
+ "Piskopos",
+ "Piyade",
+ "Piyango satıcısı",
+ "Piyanist",
+ "Polis memuru",
+ "Polis şefi",
+ "Polisajcı",
+ "Pompacı",
+ "Postacı",
+ "Profesör",
+ "Proktolog",
+ "Protokol görevlisi",
+ "Psikiyatr",
+ "Psikolog",
+ "Psikolojik danışmanlık ve rehberlik",
+ "Paramedik",
+ "Radyolog",
+ "Redaktör",
+ "Rehber",
+ "Rejisör",
+ "Reklamcı",
+ "Rektör",
+ "Rektör yardımcısı",
+ "Remayözcü",
+ "Ressam",
+ "Resepsiyon memuru",
+ "Rot balansçı",
+ "Radyoloji teknisyeni/teknikeri",
+ "Saat tamircisi",
+ "Saatçi",
+ "Sağlık teknisyeni",
+ "Sahil koruma",
+ "Saksofoncu",
+ "Salepçi",
+ "Sanat yönetmeni",
+ "Sanayici",
+ "Sansürcü",
+ "Santral memuru",
+ "Saraç",
+ "Sarraf",
+ "Satış elemanı",
+ "Savcı",
+ "Saz şairi",
+ "Sekreter",
+ "Senarist",
+ "Sepetçi",
+ "Serbest muhasebeci mali müşavir",
+ "Ses teknisyeni",
+ "Seyis",
+ "Sınırlı baş makinist",
+ "Sicil memuru",
+ "Sigortacı",
+ "Sihirbaz",
+ "Silahçı",
+ "Silindir operatörü",
+ "Simitçi",
+ "Simyacı",
+ "Sistem mühendisi",
+ "Sistem yöneticisi",
+ "Siyasetçi",
+ "Soğuk demirci",
+ "Sokak çalgıcısı",
+ "Sokak satıcısı",
+ "Son ütücü",
+ "Sorgu hâkimi",
+ "Sosyal hizmet uzmanı",
+ "Sosyolog",
+ "Spiker",
+ "Stenograf",
+ "Stilist",
+ "Striptizci",
+ "Su tesisatçısı",
+ "Subay",
+ "Sucu",
+ "Suflör",
+ "Sulh hâkimi",
+ "Sunucu",
+ "Susuz araç yıkama",
+ "Sünnetçi",
+ "Sürveyan",
+ "Sütanne",
+ "Sütçü",
+ "Şahinci",
+ "Şair",
+ "Şapel papazı",
+ "Şapkacı",
+ "Şarap üreticisi",
+ "Şarkıcı",
+ "Şarkı sözü yazarı",
+ "Şarküter",
+ "Şekerci",
+ "Şemsiyeci",
+ "Şifre çözümleyici",
+ "Şimşirci",
+ "Şoför",
+ "Tabakçı",
+ "Tabelacı",
+ "Tahsildar",
+ "Taksici",
+ "Tarım işçisi",
+ "Tarihçi",
+ "Tasarımcı",
+ "Taşçı",
+ "Taşlayıcı",
+ "Tatlıcı",
+ "Tavukçu",
+ "Tayfa",
+ "Tefeci",
+ "Teğmen",
+ "Tekniker",
+ "Teknisyen",
+ "Teknoloji uzmani",
+ "Telefon operatörü",
+ "Telekız",
+ "Televizyon tamircisi",
+ "Tellal",
+ "Temizlikçi",
+ "Temsilci",
+ "Terapist",
+ "Tercüman",
+ "Terzi",
+ "Tesgahtar",
+ "Tesisatçı",
+ "Tesviyeci",
+ "Test mühendisi",
+ "Test pilotu",
+ "Teşrifatçı",
+ "Tiyatro yönetmeni",
+ "Tombalacı",
+ "Topçu",
+ "Tornacı",
+ "Turizmci",
+ "Tuğgeneral",
+ "Tuhafiyeci",
+ "Turşucu",
+ "Tuzcu",
+ "Tümamiral",
+ "Tümgeneral",
+ "Uçuş teknisyeni",
+ "Ulaşım sorumlusu",
+ "Ustabaşı",
+ "Uydu antenci",
+ "Uzay mühendisi",
+ "Uzay bilimcisi",
+ "Uzman Jandarma",
+ "Uzman Çavuş",
+ "Üretici",
+ "Ürolog",
+ "Ütücü",
+ "Vaiz",
+ "Vali",
+ "Vergi denetmeni",
+ "Vergi müfettişi",
+ "Vergi tahakkuk memuru",
+ "Veritabanı yöneticisi",
+ "Veri hazırlama ve kontrol işletmeni",
+ "Vestiyerci",
+ "Veteriner hekim",
+ "Veteriner sağlık teknikeri",
+ "Veteriner sağlık teknisyeni",
+ "Veznedar",
+ "Video editörü",
+ "Vinç operatörü",
+ "Vitrinci",
+ "Viyolonselci",
+ "Yarbay",
+ "Yardımcı hakem",
+ "Yardımcı hizmetli",
+ "Yardımcı pilot",
+ "Yargıç",
+ "Yatırım uzmanı",
+ "Yayıncı",
+ "Yazar",
+ "Yazı işleri müdürü",
+ "Yazılım mühendisi",
+ "Yelkenci",
+ "Yeminli mali müşavir",
+ "Yeminli tercüman",
+ "Yer gösterici",
+ "Yer teknisyeni",
+ "Yerölçmeci",
+ "Yoğurtçu",
+ "Yol bekçisi",
+ "Yorgancı",
+ "Yorumcu",
+ "Yönetici",
+ "Yüzücü",
+ "Yönetmen",
+ ]
| diff --git a/tests/providers/test_job.py b/tests/providers/test_job.py
index 4bd522746d..33edb985d7 100644
--- a/tests/providers/test_job.py
+++ b/tests/providers/test_job.py
@@ -8,6 +8,7 @@
from faker.providers.job.ko_KR import Provider as KoKrJobProvider
from faker.providers.job.pt_BR import Provider as PtBrJobProvider
from faker.providers.job.pt_PT import Provider as PtPtJobProvider
+from faker.providers.job.tr_TR import Provider as TrTrJobProvider
class TestJobProvider:
@@ -88,3 +89,10 @@ class TestPtBr:
def test_job(self, faker, num_samples):
for _ in range(num_samples):
assert faker.job() in PtBrJobProvider.jobs
+
+
+class TestTrTr:
+ """Test tr_TR job provider"""
+
+ def test_job(self, faker, num_samples):
+ assert faker.job() in TrTrJobProvider.jobs
| [
{
"components": [
{
"doc": "Source: https://www.turkcebilgi.com/meslekler_listesi",
"lines": [
4,
736
],
"name": "Provider",
"signature": "class Provider(BaseProvider):",
"type": "class"
}
],
"file": "faker/providers/job/tr_... | [
"tests/providers/test_job.py::TestJobProvider::test_job",
"tests/providers/test_job.py::TestJaJp::test_job",
"tests/providers/test_job.py::TestKoKr::test_job",
"tests/providers/test_job.py::TestHuHu::test_job",
"tests/providers/test_job.py::TestHyAm::test_job",
"tests/providers/test_job.py::TestDeDe::test... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add job provider for tr_TR
### What does this change
Add a custom job provider for the tr_TR locale
### What was wrong
The job provider of tr_TR was the default job provider.
### How this fixes it
The added job provider will return a job title in Turkish.
Fixes #1280
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in faker/providers/job/tr_TR/__init__.py]
(definition of Provider:)
class Provider(BaseProvider):
"""Source: https://www.turkcebilgi.com/meslekler_listesi"""
[end of new definitions in faker/providers/job/tr_TR/__init__.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Turkish translation for job provider
* Faker version:
* OS:
The job provider does not contain a Turkish translation; it falls back to the base job provider.
### Steps to reproduce
```python
import faker
f = faker.Faker('tr_TR')
j = f.job()
```
### Expected behavior
`j` needs to hold a job title in Turkish (e.g. Opera sanatçısı)
### Actual behavior
`j` will hold a job title, but it will be in English (e.g. Investment banker, corporate)
----------
--------------------
</issues> | 6edfdbf6ae90b0153309e3bf066aa3b2d16494a7 | |
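The tr_TR job provider above is just a locale-specific `jobs` list, and the test patch only asserts `faker.job() in TrTrJobProvider.jobs`. In outline, Faker's locale providers work by subclassing a base provider and overriding one data attribute, with the drawing method inherited unchanged. The sketch below is a simplified stand-in for that mechanism; the class names and constructor are illustrative, not Faker's actual API.

```python
import random

class BaseJobProvider:
    """Simplified stand-in for a base job provider with English defaults."""
    jobs = ("Investment banker, corporate", "Surveyor")

    def __init__(self, rng):
        self.rng = rng

    def job(self):
        # locale subclasses only need to override the ``jobs`` data attribute;
        # the drawing logic is inherited from the base class
        return self.rng.choice(self.jobs)

class TrTrJobProvider(BaseJobProvider):
    # a few entries from the tr_TR list added by the diff above
    jobs = ("Opera sanatçısı", "Yazılım mühendisi", "Öğretmen")

provider = TrTrJobProvider(random.Random(42))
print(provider.job())
```

Because only the data changes, the same membership test used in the test patch works for every locale.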
joke2k__faker-1284 | 1,284 | joke2k/faker | null | f58e64eda0af1bd42c04192cee17ab8c98a620e9 | 2020-10-01T12:08:08Z | diff --git a/faker/providers/company/tr_TR/__init__.py b/faker/providers/company/tr_TR/__init__.py
new file mode 100644
index 0000000000..e6866f45ad
--- /dev/null
+++ b/faker/providers/company/tr_TR/__init__.py
@@ -0,0 +1,25 @@
+from .. import Provider as CompanyProvider
+
+
+class Provider(CompanyProvider):
+ formats = (
+ '{{last_name}} {{company_suffix}}',
+ '{{last_name}} {{last_name}} {{company_suffix}}',
+ '{{large_company}}',
+ )
+ # Source: https://www.capital.com.tr/listeler/capital-500
+ large_companies = (
+ 'Tüpraş', 'Türk Hava Yolları', 'Petrol Ofisi', 'Opet', 'BİM', 'Ford Otosan', 'Arçelik', 'Vestel',
+ 'Toyota Otomotiv', 'Oyak Renault', 'A101 Yeni Mağazacılık', 'Turkcell', 'Türk Telekom', 'Anadolu Efes', 'Migros',
+ 'LC Waikiki', 'Peak Games', 'Masomo', 'EnerjiSA', 'Tofaş', 'Şişecam', 'Selçuk Ecza', 'ŞOK Marketler', 'Petkim',
+ 'Limak İnşaat', 'Aselsan', 'Havelsan', 'Roketsan', 'Şişecam',
+ )
+ company_suffixes = (
+ 'A.Ş.', 'Ltd.', 'Şti.',
+ )
+
+ def large_company(self):
+ """
+ :example: 'Peak Games'
+ """
+ return self.random_element(self.large_companies)
| diff --git a/tests/providers/test_company.py b/tests/providers/test_company.py
index 0ba51c0670..5731e6c8e0 100644
--- a/tests/providers/test_company.py
+++ b/tests/providers/test_company.py
@@ -17,6 +17,7 @@
from faker.providers.company.pt_BR import company_id_checksum
from faker.providers.company.ru_RU import Provider as RuRuCompanyProvider
from faker.providers.company.ru_RU import calculate_checksum
+from faker.providers.company.tr_TR import Provider as TrTrCompanyProvider
class TestFiFi:
@@ -353,3 +354,19 @@ def test_company_vat_special_cases(self, faker, value, expected):
company_vat = fake.company_vat()
assert self.vat_regex.match(company_vat)
assert company_vat[9:12] == expected
+
+
+class TestTrTr:
+ """Test tr_TR company provider methods"""
+
+ def test_company_suffix(self, faker, num_samples):
+ for _ in range(num_samples):
+ suffix = faker.company_suffix()
+ assert isinstance(suffix, str)
+ assert suffix in TrTrCompanyProvider.company_suffixes
+
+ def test_large_companies(self, faker, num_samples):
+ for _ in range(num_samples):
+ company = faker.large_company()
+ assert isinstance(company, str)
+ assert company in TrTrCompanyProvider.large_companies
| [
{
"components": [
{
"doc": "",
"lines": [
4,
25
],
"name": "Provider",
"signature": "class Provider(CompanyProvider):",
"type": "class"
},
{
"doc": ":example: 'Peak Games'",
"lines": [
21,
2... | [
"tests/providers/test_company.py::TestFiFi::test_company_business_id",
"tests/providers/test_company.py::TestHyAm::test_bs",
"tests/providers/test_company.py::TestHyAm::test_catch_phrase",
"tests/providers/test_company.py::TestHyAm::test_company",
"tests/providers/test_company.py::TestHyAm::test_company_suf... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Added Company Provider for tr_TR locale.
Added Company Provider for tr_TR locale.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in faker/providers/company/tr_TR/__init__.py]
(definition of Provider:)
class Provider(CompanyProvider):
(definition of Provider.large_company:)
def large_company(self):
""":example: 'Peak Games'"""
[end of new definitions in faker/providers/company/tr_TR/__init__.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 6edfdbf6ae90b0153309e3bf066aa3b2d16494a7 | ||
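The company provider above mixes literal large-company names with template formats like `'{{last_name}} {{company_suffix}}'`, where each `{{token}}` is resolved by calling another generator. The sketch below shows that template-expansion technique with a plain regex substitution; the generator table and sample names are illustrative stand-ins, not Faker's internal parser.

```python
import random
import re

rng = random.Random(0)

# each {{token}} in a format string maps to a callable producing one value
generators = {
    'last_name': lambda: rng.choice(['Yılmaz', 'Kaya', 'Demir']),
    'company_suffix': lambda: rng.choice(['A.Ş.', 'Ltd.', 'Şti.']),
    'large_company': lambda: rng.choice(['Peak Games', 'Aselsan']),
}

def render(fmt):
    # replace every {{token}} with the output of the matching generator
    return re.sub(r'\{\{(\w+)\}\}', lambda m: generators[m.group(1)](), fmt)

formats = ('{{last_name}} {{company_suffix}}',
           '{{last_name}} {{last_name}} {{company_suffix}}',
           '{{large_company}}')
name = render(rng.choice(formats))
print(name)
```

Repeated tokens (as in the second format) are resolved independently, so each call to `render` can yield a different combination.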
joke2k__faker-1283 | 1,283 | joke2k/faker | null | e940094cc1f8998f867346b24983ab08c8f51301 | 2020-10-01T09:46:47Z | diff --git a/faker/providers/color/sk_SK/__init__.py b/faker/providers/color/sk_SK/__init__.py
new file mode 100644
index 0000000000..64cf79d248
--- /dev/null
+++ b/faker/providers/color/sk_SK/__init__.py
@@ -0,0 +1,11 @@
+from .. import Provider as ColorProvider
+
+
+class Provider(ColorProvider):
+ """Implement color provider for ``sk_SK`` locale."""
+
+ safe_colors = (
+ 'čierna', 'gaštanová', 'zelená', 'námornícka', 'olivová',
+ 'fialová', 'zelenomodrá', 'limetková', 'modrá', 'strieborná',
+ 'sivá', 'žltá', 'fuchsiová', 'aquamarinová', 'biela',
+ )
| diff --git a/tests/providers/test_color.py b/tests/providers/test_color.py
index fd65d6a2b4..0d1e8c4aa8 100644
--- a/tests/providers/test_color.py
+++ b/tests/providers/test_color.py
@@ -8,6 +8,7 @@
from faker.providers.color.es_ES import Provider as EsEsColorProvider
from faker.providers.color.fa_IR import Provider as FaIrColorProvider
from faker.providers.color.hy_AM import Provider as HyAmColorProvider
+from faker.providers.color.sk_SK import Provider as SkSkColorProvider
class TestColorProvider:
@@ -290,3 +291,13 @@ def test_safe_color_name(self, faker, num_samples):
safe_color_name = faker.safe_color_name()
assert isinstance(safe_color_name, str)
assert safe_color_name in EsEsColorProvider.safe_colors
+
+
+class TestSkSk:
+ """Test sk_SK color provider methods"""
+
+ def test_safe_color_name(self, faker, num_samples):
+ for _ in range(num_samples):
+ safe_color_name = faker.safe_color_name()
+ assert isinstance(safe_color_name, str)
+ assert safe_color_name in SkSkColorProvider.safe_colors
| [
{
"components": [
{
"doc": "Implement color provider for ``sk_SK`` locale.",
"lines": [
4,
10
],
"name": "Provider",
"signature": "class Provider(ColorProvider):",
"type": "class"
}
],
"file": "faker/providers/color/sk_SK/__... | [
"tests/providers/test_color.py::TestColorProvider::test_safe_hex_color",
"tests/providers/test_color.py::TestColorProvider::test_hex_color",
"tests/providers/test_color.py::TestColorProvider::test_rgb_color",
"tests/providers/test_color.py::TestColorProvider::test_rgb_css_color",
"tests/providers/test_color... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Implement color provider for ``sk_SK`` locale
### What does this change
This PR implements a color provider for the ``sk_SK`` locale
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in faker/providers/color/sk_SK/__init__.py]
(definition of Provider:)
class Provider(ColorProvider):
"""Implement color provider for ``sk_SK`` locale."""
[end of new definitions in faker/providers/color/sk_SK/__init__.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 6edfdbf6ae90b0153309e3bf066aa3b2d16494a7 | ||
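The sk_SK `safe_colors` tuple above has 15 entries, which matches the usual 15-color web-safe palette that the base color provider's `safe_colors` follows, so the translation is positional. The pairing below is inferred from tuple order, not stated in the PR, and the English palette here is an assumption about the base provider's data.

```python
# assumed order of the base provider's 15 web-safe color names
css_safe = ('black', 'maroon', 'green', 'navy', 'olive',
            'purple', 'teal', 'lime', 'blue', 'silver',
            'gray', 'yellow', 'fuchsia', 'aqua', 'white')

# the Slovak translations from the diff, in the same order
sk_safe = ('čierna', 'gaštanová', 'zelená', 'námornícka', 'olivová',
           'fialová', 'zelenomodrá', 'limetková', 'modrá', 'strieborná',
           'sivá', 'žltá', 'fuchsiová', 'aquamarinová', 'biela')

# zip the two tuples into an English -> Slovak lookup table
pairs = dict(zip(css_safe, sk_safe))
print(pairs['green'])
```

Keeping the two tuples the same length is what lets a locale override the base data without touching any method.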
pvlib__pvlib-python-1073 | 1,073 | pvlib/pvlib-python | 0.7 | b105021f7c1a47f888363af5585083fc27aefd4c | 2020-10-01T01:40:51Z | diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 2e882e9a46..845ad393fa 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -627,6 +627,7 @@ ModelChain model definitions.
modelchain.ModelChain.sapm_temp
modelchain.ModelChain.pvsyst_temp
modelchain.ModelChain.faiman_temp
+ modelchain.ModelChain.fuentes_temp
modelchain.ModelChain.pvwatts_losses
modelchain.ModelChain.no_extra_losses
diff --git a/docs/sphinx/source/whatsnew/v0.8.1.rst b/docs/sphinx/source/whatsnew/v0.8.1.rst
index 9b6e2a3800..27e5498596 100644
--- a/docs/sphinx/source/whatsnew/v0.8.1.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.1.rst
@@ -13,6 +13,8 @@ Deprecations
Enhancements
~~~~~~~~~~~~
+* Create :py:func:`~pvlib.pvsystem.PVSystem.fuentes_celltemp` and add ``temperature_model='fuentes'``
+ option to :py:class:`~pvlib.modelchain.ModelChain`. (:pull:`1042`) (:issue:`1073`)
* Added :py:func:`pvlib.temperature.ross` for cell temperature modeling using
only NOCT. (:pull:`1045`)
diff --git a/pvlib/modelchain.py b/pvlib/modelchain.py
index 7077d6c920..e5becc4a1a 100644
--- a/pvlib/modelchain.py
+++ b/pvlib/modelchain.py
@@ -323,9 +323,9 @@ class ModelChain:
as the first argument to a user-defined function.
temperature_model: None, str or function, default None
- Valid strings are 'sapm', 'pvsyst', and 'faiman'. The ModelChain
- instance will be passed as the first argument to a user-defined
- function.
+ Valid strings are 'sapm', 'pvsyst', 'faiman', and 'fuentes'.
+ The ModelChain instance will be passed as the first argument to a
+ user-defined function.
losses_model: str or function, default 'no_loss'
Valid strings are 'pvwatts', 'no_loss'. The ModelChain instance
@@ -866,6 +866,8 @@ def temperature_model(self, model):
self._temperature_model = self.pvsyst_temp
elif model == 'faiman':
self._temperature_model = self.faiman_temp
+ elif model == 'fuentes':
+ self._temperature_model = self.fuentes_temp
else:
raise ValueError(model + ' is not a valid temperature model')
# check system.temperature_model_parameters for consistency
@@ -891,6 +893,8 @@ def infer_temperature_model(self):
return self.pvsyst_temp
elif {'u0', 'u1'} <= params:
return self.faiman_temp
+ elif {'noct_installed'} <= params:
+ return self.fuentes_temp
else:
raise ValueError('could not infer temperature model from '
'system.temperature_module_parameters {}.'
@@ -914,6 +918,12 @@ def faiman_temp(self):
self.weather['wind_speed'])
return self
+ def fuentes_temp(self):
+ self.cell_temperature = self.system.fuentes_celltemp(
+ self.total_irrad['poa_global'], self.weather['temp_air'],
+ self.weather['wind_speed'])
+ return self
+
@property
def losses_model(self):
return self._losses_model
diff --git a/pvlib/pvsystem.py b/pvlib/pvsystem.py
index fecc93075e..c4a38cf65d 100644
--- a/pvlib/pvsystem.py
+++ b/pvlib/pvsystem.py
@@ -609,6 +609,46 @@ def faiman_celltemp(self, poa_global, temp_air, wind_speed=1.0):
return temperature.faiman(poa_global, temp_air, wind_speed,
**kwargs)
+ def fuentes_celltemp(self, poa_global, temp_air, wind_speed):
+ """
+ Use :py:func:`temperature.fuentes` to calculate cell temperature.
+
+ Parameters
+ ----------
+ poa_global : pandas Series
+ Total incident irradiance [W/m^2]
+
+ temp_air : pandas Series
+ Ambient dry bulb temperature [C]
+
+ wind_speed : pandas Series
+ Wind speed [m/s]
+
+ Returns
+ -------
+ temperature_cell : pandas Series
+ The modeled cell temperature [C]
+
+ Notes
+ -----
+ The Fuentes thermal model uses the module surface tilt for convection
+ modeling. The SAM implementation of PVWatts hardcodes the surface tilt
+ value at 30 degrees, ignoring whatever value is used for irradiance
+ transposition. This method defaults to using ``self.surface_tilt``, but
+ if you want to match the PVWatts behavior, you can override it by
+ including a ``surface_tilt`` value in ``temperature_model_parameters``.
+ """
+ # default to using the PVSystem attribute, but allow user to
+ # override with a custom surface_tilt value
+ kwargs = {'surface_tilt': self.surface_tilt}
+ temp_model_kwargs = _build_kwargs([
+ 'noct_installed', 'module_height', 'wind_height', 'emissivity',
+ 'absorption', 'surface_tilt', 'module_width', 'module_length'],
+ self.temperature_model_parameters)
+ kwargs.update(temp_model_kwargs)
+ return temperature.fuentes(poa_global, temp_air, wind_speed,
+ **kwargs)
+
def first_solar_spectral_loss(self, pw, airmass_absolute):
"""
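The kwargs handling in `fuentes_celltemp` above (seed a dict with the `PVSystem` attribute, then let `temperature_model_parameters` override it) can be sketched in isolation. `_build_kwargs` below is a simplified stand-in for pvlib's private helper of the same name, not the real implementation:

```python
def _build_kwargs(keys, input_dict):
    # keep only the recognized keys (simplified stand-in for pvlib's helper)
    return {key: input_dict[key] for key in keys if key in input_dict}

def fuentes_kwargs(system_surface_tilt, temperature_model_parameters):
    # default to the PVSystem attribute, but allow an explicit override
    kwargs = {'surface_tilt': system_surface_tilt}
    kwargs.update(_build_kwargs(
        ['noct_installed', 'module_height', 'surface_tilt'],
        temperature_model_parameters))
    return kwargs

defaulted = fuentes_kwargs(20, {'noct_installed': 45})
overridden = fuentes_kwargs(20, {'noct_installed': 45, 'surface_tilt': 30})
print(defaulted['surface_tilt'], overridden['surface_tilt'])  # 20 30
```

This is exactly the behavior the tests below verify: the attribute wins when `temperature_model_parameters` is silent, and the explicit entry wins otherwise.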
| diff --git a/pvlib/tests/test_modelchain.py b/pvlib/tests/test_modelchain.py
index 4393c5091d..d02611bbf3 100644
--- a/pvlib/tests/test_modelchain.py
+++ b/pvlib/tests/test_modelchain.py
@@ -138,6 +138,18 @@ def pvwatts_dc_pvwatts_ac_pvsyst_temp_system():
return system
+@pytest.fixture(scope="function")
+def pvwatts_dc_pvwatts_ac_fuentes_temp_system():
+ module_parameters = {'pdc0': 220, 'gamma_pdc': -0.003}
+ temp_model_params = {'noct_installed': 45}
+ inverter_parameters = {'pdc0': 220, 'eta_inv_nom': 0.95}
+ system = PVSystem(surface_tilt=32.2, surface_azimuth=180,
+ module_parameters=module_parameters,
+ temperature_model_parameters=temp_model_params,
+ inverter_parameters=inverter_parameters)
+ return system
+
+
@pytest.fixture(scope="function")
def system_no_aoi(cec_module_cs5p_220m, sapm_temperature_cs5p_220m,
cec_inverter_parameters):
@@ -317,6 +329,23 @@ def test_run_model_with_weather_faiman_temp(sapm_dc_snl_ac_system, location,
assert not mc.ac.empty
+def test_run_model_with_weather_fuentes_temp(sapm_dc_snl_ac_system, location,
+ weather, mocker):
+ weather['wind_speed'] = 5
+ weather['temp_air'] = 10
+ sapm_dc_snl_ac_system.temperature_model_parameters = {
+ 'noct_installed': 45
+ }
+ mc = ModelChain(sapm_dc_snl_ac_system, location)
+ mc.temperature_model = 'fuentes'
+ m_fuentes = mocker.spy(sapm_dc_snl_ac_system, 'fuentes_celltemp')
+ mc.run_model(weather)
+ assert m_fuentes.call_count == 1
+ assert_series_equal(m_fuentes.call_args[0][1], weather['temp_air'])
+ assert_series_equal(m_fuentes.call_args[0][2], weather['wind_speed'])
+ assert not mc.ac.empty
+
+
def test_run_model_tracker(sapm_dc_snl_ac_system, location, weather, mocker):
system = SingleAxisTracker(
module_parameters=sapm_dc_snl_ac_system.module_parameters,
@@ -479,14 +508,16 @@ def test_infer_spectral_model(location, sapm_dc_snl_ac_system,
@pytest.mark.parametrize('temp_model', [
- 'sapm_temp', 'faiman_temp', 'pvsyst_temp'])
+ 'sapm_temp', 'faiman_temp', 'pvsyst_temp', 'fuentes_temp'])
def test_infer_temp_model(location, sapm_dc_snl_ac_system,
pvwatts_dc_pvwatts_ac_pvsyst_temp_system,
pvwatts_dc_pvwatts_ac_faiman_temp_system,
+ pvwatts_dc_pvwatts_ac_fuentes_temp_system,
temp_model):
dc_systems = {'sapm_temp': sapm_dc_snl_ac_system,
'pvsyst_temp': pvwatts_dc_pvwatts_ac_pvsyst_temp_system,
- 'faiman_temp': pvwatts_dc_pvwatts_ac_faiman_temp_system}
+ 'faiman_temp': pvwatts_dc_pvwatts_ac_faiman_temp_system,
+ 'fuentes_temp': pvwatts_dc_pvwatts_ac_fuentes_temp_system}
system = dc_systems[temp_model]
mc = ModelChain(system, location,
orientation_strategy='None', aoi_model='physical',
diff --git a/pvlib/tests/test_pvsystem.py b/pvlib/tests/test_pvsystem.py
index b9edf54fff..5675f8c3f7 100644
--- a/pvlib/tests/test_pvsystem.py
+++ b/pvlib/tests/test_pvsystem.py
@@ -360,6 +360,50 @@ def test_PVSystem_faiman_celltemp(mocker):
assert_allclose(out, 56.4, atol=1)
+def test_PVSystem_fuentes_celltemp(mocker):
+ noct_installed = 45
+ temp_model_params = {'noct_installed': noct_installed}
+ system = pvsystem.PVSystem(temperature_model_parameters=temp_model_params)
+ spy = mocker.spy(temperature, 'fuentes')
+ index = pd.date_range('2019-01-01 11:00', freq='h', periods=3)
+ temps = pd.Series(25, index)
+ irrads = pd.Series(1000, index)
+ winds = pd.Series(1, index)
+ out = system.fuentes_celltemp(irrads, temps, winds)
+ assert_series_equal(spy.call_args[0][0], irrads)
+ assert_series_equal(spy.call_args[0][1], temps)
+ assert_series_equal(spy.call_args[0][2], winds)
+ assert spy.call_args[1]['noct_installed'] == noct_installed
+ assert_series_equal(out, pd.Series([52.85, 55.85, 55.85], index,
+ name='tmod'))
+
+
+def test_PVSystem_fuentes_celltemp_override(mocker):
+ # test that the surface_tilt value in the cell temp calculation can be
+ # overridden but defaults to the surface_tilt attribute of the PVSystem
+ spy = mocker.spy(temperature, 'fuentes')
+
+ noct_installed = 45
+ index = pd.date_range('2019-01-01 11:00', freq='h', periods=3)
+ temps = pd.Series(25, index)
+ irrads = pd.Series(1000, index)
+ winds = pd.Series(1, index)
+
+ # uses default value
+ temp_model_params = {'noct_installed': noct_installed}
+ system = pvsystem.PVSystem(temperature_model_parameters=temp_model_params,
+ surface_tilt=20)
+ system.fuentes_celltemp(irrads, temps, winds)
+ assert spy.call_args[1]['surface_tilt'] == 20
+
+ # can be overridden
+ temp_model_params = {'noct_installed': noct_installed, 'surface_tilt': 30}
+ system = pvsystem.PVSystem(temperature_model_parameters=temp_model_params,
+ surface_tilt=20)
+ system.fuentes_celltemp(irrads, temps, winds)
+ assert spy.call_args[1]['surface_tilt'] == 30
+
+
def test__infer_temperature_model_params():
system = pvsystem.PVSystem(module_parameters={},
racking_model='open_rack',
| diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 2e882e9a46..845ad393fa 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -627,6 +627,7 @@ ModelChain model definitions.
modelchain.ModelChain.sapm_temp
modelchain.ModelChain.pvsyst_temp
modelchain.ModelChain.faiman_temp
+ modelchain.ModelChain.fuentes_temp
modelchain.ModelChain.pvwatts_losses
modelchain.ModelChain.no_extra_losses
diff --git a/docs/sphinx/source/whatsnew/v0.8.1.rst b/docs/sphinx/source/whatsnew/v0.8.1.rst
index 9b6e2a3800..27e5498596 100644
--- a/docs/sphinx/source/whatsnew/v0.8.1.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.1.rst
@@ -13,6 +13,8 @@ Deprecations
Enhancements
~~~~~~~~~~~~
+* Create :py:func:`~pvlib.pvsystem.PVSystem.fuentes_celltemp` and add ``temperature_model='fuentes'``
+ option to :py:class:`~pvlib.modelchain.ModelChain`. (:pull:`1042`) (:issue:`1073`)
* Added :py:func:`pvlib.temperature.ross` for cell temperature modeling using
only NOCT. (:pull:`1045`)
| [
{
"components": [
{
"doc": "",
"lines": [
921,
925
],
"name": "ModelChain.fuentes_temp",
"signature": "def fuentes_temp(self):",
"type": "function"
}
],
"file": "pvlib/modelchain.py"
},
{
"components": [
{
... | [
"pvlib/tests/test_modelchain.py::test_run_model_with_weather_fuentes_temp",
"pvlib/tests/test_modelchain.py::test_infer_temp_model[fuentes_temp]",
"pvlib/tests/test_pvsystem.py::test_PVSystem_fuentes_celltemp",
"pvlib/tests/test_pvsystem.py::test_PVSystem_fuentes_celltemp_override"
] | [
"pvlib/tests/test_modelchain.py::test_ModelChain_creation",
"pvlib/tests/test_modelchain.py::test_with_sapm",
"pvlib/tests/test_modelchain.py::test_with_pvwatts",
"pvlib/tests/test_modelchain.py::test_orientation_strategy[None-expected0]",
"pvlib/tests/test_modelchain.py::test_orientation_strategy[None-expe... | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Expose temperature.fuentes in PVSystem and ModelChain
- [x] Closes #1042
- [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)
- [x] Tests added
- [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.
- [x] Adds description and name entries in the appropriate "what's new" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).
- [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.
- [x] Pull request is nearly complete and ready for detailed review.
- [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.
Basically just copied #897. Happy to make changes if needed. CI is probably going to complain about this until #1071 is fixed.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in pvlib/modelchain.py]
(definition of ModelChain.fuentes_temp:)
def fuentes_temp(self):
[end of new definitions in pvlib/modelchain.py]
[start of new definitions in pvlib/pvsystem.py]
(definition of PVSystem.fuentes_celltemp:)
def fuentes_celltemp(self, poa_global, temp_air, wind_speed):
"""Use :py:func:`temperature.fuentes` to calculate cell temperature.
Parameters
----------
poa_global : pandas Series
Total incident irradiance [W/m^2]
temp_air : pandas Series
Ambient dry bulb temperature [C]
wind_speed : pandas Series
Wind speed [m/s]
Returns
-------
temperature_cell : pandas Series
The modeled cell temperature [C]
Notes
-----
The Fuentes thermal model uses the module surface tilt for convection
modeling. The SAM implementation of PVWatts hardcodes the surface tilt
value at 30 degrees, ignoring whatever value is used for irradiance
transposition. This method defaults to using ``self.surface_tilt``, but
if you want to match the PVWatts behavior, you can override it by
including a ``surface_tilt`` value in ``temperature_model_parameters``."""
[end of new definitions in pvlib/pvsystem.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
expose pvlib.temperature.fuentes in PVSystem and ModelChain
Follow up to #1032 and #1037
----------
--------------------
</issues> | aa1635bcb40dc83f82e9fd72158670c235bfe99b |
minio__minio-py-982 | 982 | minio/minio-py | null | 1009b2c84c1736712485582caef390f0a749c6c9 | 2020-09-30T14:20:17Z | diff --git a/docs/API.md b/docs/API.md
index 5a6da598e..e0a427fd2 100644
--- a/docs/API.md
+++ b/docs/API.md
@@ -41,10 +41,13 @@ s3Client = Minio(
| [`set_bucket_versioning`](#set_bucket_versioning) | [`fput_object`](#fput_object) | | [`listen_bucket_notification`](#listen_bucket_notification) |
| [`delete_bucket_replication`](#delete_bucket_replication) | [`fget_object`](#fget_object) | | [`get_bucket_encryption`](#get_bucket_encryption) |
| [`get_bucket_replication`](#get_bucket_replication) | [`select_object_content`](#select_object_content) | | [`remove_all_bucket_notification`](#remove_all_bucket_notification) |
-| [`set_bucket_replication`](#set_bucket_replication) | | | [`put_bucket_encryption`](#put_bucket_encryption) |
-| [`delete_bucket_lifecycle`](#delete_bucket_lifecycle) | | | [`delete_bucket_encryption`](#delete_bucket_encryption) |
-| [`get_bucket_lifecycle`](#get_bucket_lifecycle) | | | |
+| [`set_bucket_replication`](#set_bucket_replication) | [`delete_object_tags`](#delete_object_tags) | | [`put_bucket_encryption`](#put_bucket_encryption) |
+| [`delete_bucket_lifecycle`](#delete_bucket_lifecycle) | [`get_object_tags`](#get_object_tags) | | [`delete_bucket_encryption`](#delete_bucket_encryption) |
+| [`get_bucket_lifecycle`](#get_bucket_lifecycle) | [`set_object_tags`](#set_object_tags) | | |
| [`set_bucket_lifecycle`](#set_bucket_lifecycle) | | | |
+| [`delete_bucket_tags`](#delete_bucket_tags) | | | |
+| [`get_bucket_tags`](#get_bucket_tags) | | | |
+| [`set_bucket_tags`](#set_bucket_tags) | | | |
## 1. Constructor
@@ -776,6 +779,68 @@ config = LifecycleConfig(
minio.set_bucket_lifecycle("my-bucketname", config)
```
+<a name="delete_bucket_tags"></a>
+
+### delete_bucket_tags(bucket_name)
+
+Delete tags configuration of a bucket.
+
+__Parameters__
+
+| Param | Type | Description |
+|:----------------|:------|:--------------------|
+| ``bucket_name`` | _str_ | Name of the bucket. |
+
+__Example__
+
+```py
+minio.delete_bucket_tags("my-bucketname")
+```
+
+<a name="get_bucket_tags"></a>
+
+### get_bucket_tags(bucket_name)
+
+Get tags configuration of a bucket.
+
+__Parameters__
+
+| Param | Type | Description |
+|:----------------|:------|:--------------------|
+| ``bucket_name`` | _str_ | Name of the bucket. |
+
+| Return |
+|:---------------|
+| _Tags_ object. |
+
+__Example__
+
+```py
+tags = minio.get_bucket_tags("my-bucketname")
+```
+
+<a name="set_bucket_tags"></a>
+
+### set_bucket_tags(bucket_name, tags)
+
+Set tags configuration to a bucket.
+
+__Parameters__
+
+| Param | Type | Description |
+|:----------------|:-------|:--------------------|
+| ``bucket_name`` | _str_ | Name of the bucket. |
+| ``tags`` | _Tags_ | Tags configuration. |
+
+__Example__
+
+```py
+tags = Tags.new_bucket_tags()
+tags["Project"] = "Project One"
+tags["User"] = "jsmith"
+client.set_bucket_tags("my-bucketname", tags)
+```
+
## 3. Object operations
<a name="get_object"></a>
@@ -1112,6 +1177,74 @@ minio.remove_objects(
)
```
+<a name="delete_object_tags"></a>
+
+### delete_object_tags(bucket_name, object_name, version_id=None)
+
+Delete tags configuration of an object.
+
+__Parameters__
+
+| Param | Type | Description |
+|:----------------|:------|:---------------------------|
+| ``bucket_name`` | _str_ | Name of the bucket. |
+| ``object_name`` | _str_ | Object name in the bucket. |
+| ``version_id`` | _str_ | Version ID of the object. |
+
+__Example__
+
+```py
+minio.delete_object_tags("my-bucketname", "my-objectname")
+```
+
+<a name="get_object_tags"></a>
+
+### get_object_tags(bucket_name, object_name, version_id=None)
+
+Get tags configuration of an object.
+
+__Parameters__
+
+| Param | Type | Description |
+|:----------------|:------|:---------------------------|
+| ``bucket_name`` | _str_ | Name of the bucket. |
+| ``object_name`` | _str_ | Object name in the bucket. |
+| ``version_id`` | _str_ | Version ID of the object. |
+
+| Return |
+|:---------------|
+| _Tags_ object. |
+
+__Example__
+
+```py
+tags = minio.get_object_tags("my-bucketname", "my-objectname")
+```
+
+<a name="set_object_tags"></a>
+
+### set_object_tags(bucket_name, object_name, tags, version_id=None)
+
+Set tags configuration to an object.
+
+__Parameters__
+
+| Param | Type | Description |
+|:----------------|:-------|:---------------------------|
+| ``bucket_name`` | _str_ | Name of the bucket. |
+| ``object_name`` | _str_ | Object name in the bucket. |
+| ``tags`` | _Tags_ | Tags configuration. |
+| ``version_id`` | _str_ | Version ID of the object. |
+
+__Example__
+
+```py
+tags = Tags.new_object_tags()
+tags["Project"] = "Project One"
+tags["User"] = "jsmith"
+client.set_object_tags("my-bucketname", "my-objectname", tags)
+```
+
## 4. Presigned operations
<a name="presigned_get_object"></a>
diff --git a/examples/delete_bucket_tags.py b/examples/delete_bucket_tags.py
new file mode 100644
index 000000000..f77672628
--- /dev/null
+++ b/examples/delete_bucket_tags.py
@@ -0,0 +1,28 @@
+# -*- coding: utf-8 -*-
+# MinIO Python Library for Amazon S3 Compatible Cloud Storage.
+# Copyright (C) 2020 MinIO, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
+# dummy values, please replace them with original values.
+
+from minio import Minio
+
+client = Minio(
+ "play.min.io",
+ access_key="Q3AM3UQ867SPQQA43P2F",
+ secret_key="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG",
+)
+
+client.delete_bucket_tags("my-bucketname")
diff --git a/examples/delete_object_tags.py b/examples/delete_object_tags.py
new file mode 100644
index 000000000..79cf31e59
--- /dev/null
+++ b/examples/delete_object_tags.py
@@ -0,0 +1,28 @@
+# -*- coding: utf-8 -*-
+# MinIO Python Library for Amazon S3 Compatible Cloud Storage.
+# Copyright (C) 2020 MinIO, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
+# dummy values, please replace them with original values.
+
+from minio import Minio
+
+client = Minio(
+ "play.min.io",
+ access_key="Q3AM3UQ867SPQQA43P2F",
+ secret_key="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG",
+)
+
+client.delete_object_tags("my-bucketname", "my-objectname")
diff --git a/examples/get_bucket_tags.py b/examples/get_bucket_tags.py
new file mode 100644
index 000000000..7c2b11b43
--- /dev/null
+++ b/examples/get_bucket_tags.py
@@ -0,0 +1,28 @@
+# -*- coding: utf-8 -*-
+# MinIO Python Library for Amazon S3 Compatible Cloud Storage.
+# Copyright (C) 2020 MinIO, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
+# dummy values, please replace them with original values.
+
+from minio import Minio
+
+client = Minio(
+ "play.min.io",
+ access_key="Q3AM3UQ867SPQQA43P2F",
+ secret_key="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG",
+)
+
+tags = client.get_bucket_tags("my-bucketname")
diff --git a/examples/get_object_tags.py b/examples/get_object_tags.py
new file mode 100644
index 000000000..f1e40c733
--- /dev/null
+++ b/examples/get_object_tags.py
@@ -0,0 +1,28 @@
+# -*- coding: utf-8 -*-
+# MinIO Python Library for Amazon S3 Compatible Cloud Storage.
+# Copyright (C) 2020 MinIO, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
+# dummy values, please replace them with original values.
+
+from minio import Minio
+
+client = Minio(
+ "play.min.io",
+ access_key="Q3AM3UQ867SPQQA43P2F",
+ secret_key="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG",
+)
+
+tags = client.get_object_tags("my-bucketname", "my-objectname")
diff --git a/examples/set_bucket_tags.py b/examples/set_bucket_tags.py
new file mode 100644
index 000000000..9277fb8fb
--- /dev/null
+++ b/examples/set_bucket_tags.py
@@ -0,0 +1,32 @@
+# -*- coding: utf-8 -*-
+# MinIO Python Library for Amazon S3 Compatible Cloud Storage.
+# Copyright (C) 2020 MinIO, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
+# dummy values, please replace them with original values.
+
+from minio import Minio
+from minio.commonconfig import Tags
+
+client = Minio(
+ "play.min.io",
+ access_key="Q3AM3UQ867SPQQA43P2F",
+ secret_key="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG",
+)
+
+tags = Tags.new_bucket_tags()
+tags["Project"] = "Project One"
+tags["User"] = "jsmith"
+client.set_bucket_tags("my-bucketname", tags)
diff --git a/examples/set_object_tags.py b/examples/set_object_tags.py
new file mode 100644
index 000000000..c7683e0cb
--- /dev/null
+++ b/examples/set_object_tags.py
@@ -0,0 +1,32 @@
+# -*- coding: utf-8 -*-
+# MinIO Python Library for Amazon S3 Compatible Cloud Storage.
+# Copyright (C) 2020 MinIO, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
+# dummy values, please replace them with original values.
+
+from minio import Minio
+from minio.commonconfig import Tags
+
+client = Minio(
+ "play.min.io",
+ access_key="Q3AM3UQ867SPQQA43P2F",
+ secret_key="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG",
+)
+
+tags = Tags.new_object_tags()
+tags["Project"] = "Project One"
+tags["User"] = "jsmith"
+client.set_object_tags("my-bucketname", "my-objectname", tags)
diff --git a/minio/api.py b/minio/api.py
index c01cfbf45..35318202e 100644
--- a/minio/api.py
+++ b/minio/api.py
@@ -43,6 +43,7 @@
import urllib3
from . import __title__, __version__
+from .commonconfig import Tags
from .credentials import StaticProvider
from .definitions import BaseURL, Object, ObjectWriteResult, Part
from .error import InvalidResponseError, S3Error, ServerError
@@ -66,6 +67,7 @@
from .signer import (AMZ_DATE_FORMAT, SIGN_V4_ALGORITHM, get_credential_string,
post_presign_v4, presign_v4, sign_v4_s3)
from .sse import SseCustomerKey
+from .tagging import Tagging
from .thread_pool import ThreadPool
from .versioningconfig import VersioningConfig
from .xml import Element, SubElement, findtext, marshal, unmarshal
@@ -1924,6 +1926,148 @@ def set_bucket_lifecycle(self, bucket_name, config):
query_params={"lifecycle": ""},
)
+ def delete_bucket_tags(self, bucket_name):
+ """
+ Delete tags configuration of a bucket.
+
+ :param bucket_name: Name of the bucket.
+
+ Example::
+ minio.delete_bucket_tags("my-bucketname")
+ """
+ check_bucket_name(bucket_name)
+ self._execute("DELETE", bucket_name, query_params={"tagging": ""})
+
+ def get_bucket_tags(self, bucket_name):
+ """
+ Get tags configuration of a bucket.
+
+ :param bucket_name: Name of the bucket.
+ :return: :class:`Tags <Tags>` object.
+
+ Example::
+ tags = minio.get_bucket_tags("my-bucketname")
+ """
+ check_bucket_name(bucket_name)
+ try:
+ response = self._execute(
+ "GET", bucket_name, query_params={"tagging": ""},
+ )
+ tagging = unmarshal(Tagging, response.data.decode())
+ return tagging.tags
+ except S3Error as exc:
+ if exc.code != "NoSuchTagSet":
+ raise
+ return None
+
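The error handling in `get_bucket_tags` above follows a common S3 idiom: swallow only the `NoSuchTagSet` error code (a bucket or object simply has no tags) and re-raise everything else. A minimal stand-alone sketch, where `S3Error` is a stand-in class rather than the real `minio.error.S3Error`:

```python
class S3Error(Exception):
    """Stand-in for minio.error.S3Error: carries the S3 error code."""
    def __init__(self, code):
        super().__init__(code)
        self.code = code

def tags_or_none(fetch):
    # a bucket/object without tags answers with NoSuchTagSet; map that to None
    try:
        return fetch()
    except S3Error as exc:
        if exc.code != "NoSuchTagSet":
            raise
        return None

def no_tags():
    raise S3Error("NoSuchTagSet")

print(tags_or_none(no_tags))                     # None
print(tags_or_none(lambda: {"Project": "One"}))  # {'Project': 'One'}
```

Any other code, such as `AccessDenied`, still propagates to the caller.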
+ def set_bucket_tags(self, bucket_name, tags):
+ """
+ Set tags configuration to a bucket.
+
+ :param bucket_name: Name of the bucket.
+ :param tags: :class:`Tags <Tags>` object.
+
+ Example::
+ tags = Tags.new_bucket_tags()
+ tags["Project"] = "Project One"
+ tags["User"] = "jsmith"
+ minio.set_bucket_tags("my-bucketname", tags)
+ """
+ check_bucket_name(bucket_name)
+ if not isinstance(tags, Tags):
+ raise ValueError("tags must be Tags type")
+ body = marshal(Tagging(tags))
+ self._execute(
+ "PUT",
+ bucket_name,
+ body=body,
+ headers={"Content-MD5": md5sum_hash(body)},
+ query_params={"tagging": ""},
+ )
+
+ def delete_object_tags(self, bucket_name, object_name, version_id=None):
+ """
+ Delete tags configuration of an object.
+
+ :param bucket_name: Name of the bucket.
+ :param object_name: Object name in the bucket.
+ :param version_id: Version ID of the Object.
+
+ Example::
+ minio.delete_object_tags("my-bucketname", "my-objectname")
+ """
+ check_bucket_name(bucket_name)
+ check_non_empty_string(object_name)
+ query_params = {"versionId": version_id} if version_id else {}
+ query_params["tagging"] = ""
+ self._execute(
+ "DELETE",
+ bucket_name,
+ object_name=object_name,
+ query_params=query_params,
+ )
+
+ def get_object_tags(self, bucket_name, object_name, version_id=None):
+ """
+ Get tags configuration of an object.
+
+ :param bucket_name: Name of the bucket.
+ :param object_name: Object name in the bucket.
+ :param version_id: Version ID of the Object.
+ :return: :class:`Tags <Tags>` object.
+
+ Example::
+ tags = minio.get_object_tags("my-bucketname", "my-objectname")
+ """
+ check_bucket_name(bucket_name)
+ check_non_empty_string(object_name)
+ query_params = {"versionId": version_id} if version_id else {}
+ query_params["tagging"] = ""
+ try:
+ response = self._execute(
+ "GET",
+ bucket_name,
+ object_name=object_name,
+ query_params=query_params,
+ )
+ tagging = unmarshal(Tagging, response.data.decode())
+ return tagging.tags
+ except S3Error as exc:
+ if exc.code != "NoSuchTagSet":
+ raise
+ return None
+
+ def set_object_tags(self, bucket_name, object_name, tags, version_id=None):
+ """
+ Set tags configuration to an object.
+
+ :param bucket_name: Name of the bucket.
+ :param object_name: Object name in the bucket.
+ :param version_id: Version ID of the Object.
+ :param tags: :class:`Tags <Tags>` object.
+
+ Example::
+ tags = Tags.new_object_tags()
+ tags["Project"] = "Project One"
+ tags["User"] = "jsmith"
+ minio.set_object_tags("my-bucketname", "my-objectname", tags)
+ """
+ check_bucket_name(bucket_name)
+ check_non_empty_string(object_name)
+ if not isinstance(tags, Tags):
+ raise ValueError("tags must be Tags type")
+ body = marshal(Tagging(tags))
+ query_params = {"versionId": version_id} if version_id else {}
+ query_params["tagging"] = ""
+ self._execute(
+ "PUT",
+ bucket_name,
+ object_name=object_name,
+ body=body,
+ headers={"Content-MD5": md5sum_hash(body)},
+ query_params=query_params,
+ )
+
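Both tagging PUTs above send a `Content-MD5` header. Per the S3 API this is the base64-encoded binary MD5 digest of the request body (not the hex digest), which is presumably what the `md5sum_hash` helper computes:

```python
import base64
import hashlib

def content_md5(body):
    # S3 wants base64(md5(body)), not the hexadecimal digest string
    if isinstance(body, str):
        body = body.encode()
    return base64.b64encode(hashlib.md5(body).digest()).decode()

# well-known MD5 of the empty body, base64-encoded
print(content_md5(b""))  # 1B2M2Y8AsgTpgAmY7PhCfg==
```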
def _list_objects( # pylint: disable=too-many-arguments,too-many-branches
self,
bucket_name,
diff --git a/minio/commonconfig.py b/minio/commonconfig.py
index c59179851..85bf6d7d7 100644
--- a/minio/commonconfig.py
+++ b/minio/commonconfig.py
@@ -52,6 +52,16 @@ def __setitem__(self, key, value):
raise ValueError("invalid tag value '{0}'".format(value))
super().__setitem__(key, value)
+ @classmethod
+ def new_bucket_tags(cls):
+ """Create new bucket tags."""
+ return cls()
+
+ @classmethod
+ def new_object_tags(cls):
+ """Create new object tags."""
+ return cls(True)
+
@classmethod
def fromxml(cls, element):
"""Create new object with values from XML element."""
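The `new_bucket_tags()` / `new_object_tags()` constructors above differ only in a flag passed to `Tags`. A rough, self-contained sketch of such a validating dict subclass; the length limits (128-character keys, 256-character values) come from the S3 tagging restrictions, and the exact checks in `minio/commonconfig.py` are only partially visible in this hunk:

```python
class Tags(dict):
    """Rough sketch of a validating tag mapping (not the real minio class)."""

    def __init__(self, for_object=False):
        self._for_object = for_object  # object tags vs bucket tags
        super().__init__()

    def __setitem__(self, key, value):
        if not key or len(key) > 128:
            raise ValueError("invalid tag key '{0}'".format(key))
        if value is None or len(value) > 256:
            raise ValueError("invalid tag value '{0}'".format(value))
        super().__setitem__(key, value)

    @classmethod
    def new_bucket_tags(cls):
        return cls()

    @classmethod
    def new_object_tags(cls):
        return cls(True)

tags = Tags.new_object_tags()
tags["Project"] = "Project One"
tags["User"] = "jsmith"
print(dict(tags))  # {'Project': 'Project One', 'User': 'jsmith'}
```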
diff --git a/minio/tagging.py b/minio/tagging.py
new file mode 100644
index 000000000..ac186ab45
--- /dev/null
+++ b/minio/tagging.py
@@ -0,0 +1,52 @@
+# -*- coding: utf-8 -*-
+# MinIO Python Library for Amazon S3 Compatible Cloud Storage, (C)
+# 2020 MinIO, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Tagging for bucket and object."""
+
+from __future__ import absolute_import
+
+from .commonconfig import Tags
+from .xml import Element, SubElement, find
+
+
+class Tagging:
+ """Tagging for buckets and objects."""
+
+ def __init__(self, tags):
+ self._tags = tags
+
+ @property
+ def tags(self):
+ """Get tags."""
+ return self._tags
+
+ @classmethod
+ def fromxml(cls, element):
+ """Create new object with values from XML element."""
+ element = find(element, "TagSet")
+ tags = (
+ None if find(element, "Tag") is None
+ else Tags.fromxml(element)
+ )
+ return cls(tags)
+
+ def toxml(self, element):
+ """Convert to XML."""
+ element = Element("Tagging")
+ if self._tags:
+ element = SubElement(element, "TagSet")
+ self._tags.toxml(element)
+ return element
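The `Tagging` document above serializes to the standard S3 shape `<Tagging><TagSet><Tag><Key>…</Key><Value>…</Value></Tag>…</TagSet></Tagging>`. A round-trip sketch using only the standard library (no `minio.xml` helpers, and ignoring the S3 XML namespace):

```python
import xml.etree.ElementTree as ET

def tags_to_xml(tags):
    # build <Tagging><TagSet><Tag><Key>..</Key><Value>..</Value></Tag>...</TagSet></Tagging>
    root = ET.Element("Tagging")
    tag_set = ET.SubElement(root, "TagSet")
    for key, value in tags.items():
        tag = ET.SubElement(tag_set, "Tag")
        ET.SubElement(tag, "Key").text = key
        ET.SubElement(tag, "Value").text = value
    return ET.tostring(root, encoding="unicode")

def tags_from_xml(data):
    root = ET.fromstring(data)
    return {tag.findtext("Key"): tag.findtext("Value")
            for tag in root.find("TagSet").findall("Tag")}

doc = tags_to_xml({"Project": "Project One", "User": "jsmith"})
assert tags_from_xml(doc) == {"Project": "Project One", "User": "jsmith"}
```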
| diff --git a/tests/unit/tagging.py b/tests/unit/tagging.py
new file mode 100644
index 000000000..b3ff77295
--- /dev/null
+++ b/tests/unit/tagging.py
@@ -0,0 +1,47 @@
+# -*- coding: utf-8 -*-
+# MinIO Python Library for Amazon S3 Compatible Cloud Storage,
+# (C) 2020 MinIO, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from unittest import TestCase
+
+from minio import xml
+from minio.commonconfig import Tags
+from minio.tagging import Tagging
+
+
+class TaggingTest(TestCase):
+ def test_tagging(self):
+ tags = Tags()
+ tags["Project"] = "Project One"
+ tags["User"] = "jsmith"
+ tagging = Tagging(tags)
+ xml.marshal(tagging)
+
+ config = xml.unmarshal(
+ Tagging,
+ """<Tagging xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+ <TagSet>
+ <Tag>
+ <Key>key1</Key>
+ <Value>value1</Value>
+ </Tag>
+ <Tag>
+ <Key>key2</Key>
+ <Value>value2</Value>
+ </Tag>
+ </TagSet>
+</Tagging>""",
+ )
+ xml.marshal(config)
|
+<a name="get_object_tags"></a>
+
+### get_object_tags(bucket_name, object_name, version_id=None)
+
+Get tags configuration of an object.
+
+__Parameters__
+
+| Param | Type | Description |
+|:----------------|:------|:---------------------------|
+| ``bucket_name`` | _str_ | Name of the bucket. |
+| ``object_name`` | _str_ | Object name in the bucket. |
+| ``version_id`` | _str_ | Version ID of the object. |
+
+| Return |
+|:---------------|
+| _Tags_ object. |
+
+__Example__
+
+```py
+tags = minio.get_object_tags("my-bucketname", "my-objectname")
+```
+
+<a name="set_object_tags"></a>
+
+### set_object_tags(bucket_name, object_name, tags, version_id=None)
+
+Set tags configuration to an object.
+
+__Parameters__
+
+| Param | Type | Description |
+|:----------------|:-------|:---------------------------|
+| ``bucket_name`` | _str_ | Name of the bucket. |
+| ``object_name`` | _str_ | Object name in the bucket. |
+| ``tags`` | _Tags_ | Tags configuration. |
+| ``version_id`` | _str_ | Version ID of the object. |
+
+__Example__
+
+```py
+tags = Tags.new_object_tags()
+tags["Project"] = "Project One"
+tags["User"] = "jsmith"
+client.set_object_tags("my-bucketname", "my-objectname", tags)
+```
+
## 4. Presigned operations
<a name="presigned_get_object"></a>
| [
{
"components": [
{
"doc": "Delete tags configuration of a bucket.\n\n:param bucket_name: Name of the bucket.\n\nExample::\n minio.delete_bucket_tags(\"my-bucketname\")",
"lines": [
1929,
1939
],
"name": "Minio.delete_bucket_tags",
"signature"... | [
"tests/unit/tagging.py::TaggingTest::test_tagging"
] | [] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add {set,get,delete}_{bucket,object}_tags APIs
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in minio/api.py]
(definition of Minio.delete_bucket_tags:)
def delete_bucket_tags(self, bucket_name):
"""Delete tags configuration of a bucket.
:param bucket_name: Name of the bucket.
Example::
minio.delete_bucket_tags("my-bucketname")"""
(definition of Minio.get_bucket_tags:)
def get_bucket_tags(self, bucket_name):
"""Get tags configuration of a bucket.
:param bucket_name: Name of the bucket.
:return: :class:`Tags <Tags>` object.
Example::
tags = minio.get_bucket_tags("my-bucketname")"""
(definition of Minio.set_bucket_tags:)
def set_bucket_tags(self, bucket_name, tags):
"""Set tags configuration to a bucket.
:param bucket_name: Name of the bucket.
:param tags: :class:`Tags <Tags>` object.
Example::
tags = Tags.new_bucket_tags()
tags["Project"] = "Project One"
tags["User"] = "jsmith"
minio.set_bucket_tags("my-bucketname", tags)"""
(definition of Minio.delete_object_tags:)
def delete_object_tags(self, bucket_name, object_name, version_id=None):
"""Delete tags configuration of an object.
:param bucket_name: Name of the bucket.
:param object_name: Object name in the bucket.
:param version_id: Version ID of the Object.
Example::
minio.delete_object_tags("my-bucketname", "my-objectname")"""
(definition of Minio.get_object_tags:)
def get_object_tags(self, bucket_name, object_name, version_id=None):
"""Get tags configuration of a object.
:param bucket_name: Name of the bucket.
:param object_name: Object name in the bucket.
:param version_id: Version ID of the Object.
:return: :class:`Tags <Tags>` object.
Example::
tags = minio.get_object_tags("my-bucketname", "my-objectname")"""
(definition of Minio.set_object_tags:)
def set_object_tags(self, bucket_name, object_name, tags, version_id=None):
"""Set tags configuration to an object.
:param bucket_name: Name of the bucket.
:param object_name: Object name in the bucket.
:param version_id: Version ID of the Object.
:param tags: :class:`Tags <Tags>` object.
Example::
tags = Tags.new_object_tags()
tags["Project"] = "Project One"
tags["User"] = "jsmith"
minio.set_object_tags("my-bucketname", "my-objectname", tags)"""
[end of new definitions in minio/api.py]
[start of new definitions in minio/commonconfig.py]
(definition of Tags.new_bucket_tags:)
def new_bucket_tags(cls):
"""Create new bucket tags."""
(definition of Tags.new_object_tags:)
def new_object_tags(cls):
"""Create new object tags."""
[end of new definitions in minio/commonconfig.py]
[start of new definitions in minio/tagging.py]
(definition of Tagging:)
class Tagging:
"""Tagging for buckets and objects."""
(definition of Tagging.__init__:)
def __init__(self, tags):
(definition of Tagging.tags:)
def tags(self):
"""Get tags."""
(definition of Tagging.fromxml:)
def fromxml(cls, element):
"""Create new object with values from XML element."""
(definition of Tagging.toxml:)
def toxml(self, element):
"""Convert to XML."""
[end of new definitions in minio/tagging.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | d4cc44095a98edb62ecbc3c5ed09ed86be45edcc | |
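The Tagging definitions in the row above (`Tagging.toxml`/`Tagging.fromxml`) round-trip tags through the S3 Tagging XML shape exercised by `tests/unit/tagging.py` (`<Tagging><TagSet><Tag><Key>…</Key><Value>…</Value></Tag>…`). A self-contained sketch of that marshal/unmarshal shape, using only the standard library — function names here are illustrative, not minio's actual `xml` helpers:

```python
import xml.etree.ElementTree as ET


def tags_to_xml(tags):
    """Serialize a dict of tags into the S3 Tagging XML shape
    (<Tagging><TagSet><Tag><Key>..</Key><Value>..</Value>...)."""
    root = ET.Element("Tagging")
    tag_set = ET.SubElement(root, "TagSet")
    for key, value in tags.items():
        tag = ET.SubElement(tag_set, "Tag")
        ET.SubElement(tag, "Key").text = key
        ET.SubElement(tag, "Value").text = value
    return ET.tostring(root, encoding="unicode")


def tags_from_xml(data):
    """Parse the same XML shape back into a dict, tolerating the
    default S3 namespace that ElementTree folds into tag names."""
    def local(tag_name):
        # "{http://...}Tag" -> "Tag"
        return tag_name.rsplit("}", 1)[-1]

    tags = {}
    for element in ET.fromstring(data).iter():
        if local(element.tag) == "Tag":
            entry = {local(child.tag): child.text for child in element}
            tags[entry["Key"]] = entry["Value"]
    return tags
```

The namespace handling matters because the unit test feeds in a document with `xmlns="http://s3.amazonaws.com/doc/2006-03-01/"`, which ElementTree reports as `{...}Tag` rather than plain `Tag`.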
pydicom__pydicom-1196 | 1,196 | pydicom/pydicom | 2.0 | 147641145c09cf38d6c796ed47427aaffb475872 | 2020-09-27T00:45:17Z | diff --git a/doc/faq/index.rst b/doc/faq/index.rst
index 2915b3de30..eceea57c00 100644
--- a/doc/faq/index.rst
+++ b/doc/faq/index.rst
@@ -45,7 +45,7 @@ Required
Optional
~~~~~~~~
-When manipulating *Pixel Data* it's recommended that you install
+When manipulating *Pixel Data* it's recommended you install
`NumPy <https://numpy.org/>`_. When dealing with JPEG
compressed *Pixel Data* see :ref:`this table<guide_compressed>` for which
libraries are required.
@@ -55,28 +55,31 @@ libraries are required.
What version of Python can I use?
---------------------------------
-+-----------------+------------------+-------------------------+
-| pydicom version | Release date | Python versions |
-+=================+==================+=========================+
-| 1.0 | March 2018 | 2.7, 3.4, 3.5, 3.6 |
-+-----------------+------------------+-------------------------+
-| 1.1 | June 2018 | 2.7, 3.4, 3.5, 3.6 |
-+-----------------+------------------+-------------------------+
-| 1.2 | October 2018 | 2.7, 3.4, 3.5, 3.6 |
-+-----------------+------------------+-------------------------+
-| 1.3 | July 2019 | 2.7, 3.4, 3.5, 3.6 |
-+-----------------+------------------+-------------------------+
-| 1.4 | January 2020 | 2.7, 3.5, 3.6, 3.7, 3.8 |
-+-----------------+------------------+-------------------------+
-| 2.0 | ~April 2020 | 3.5, 3.6, 3.7, 3.8 |
-+-----------------+------------------+-------------------------+
-| 2.1 | ~November 2020 | 3.6, 3.7, 3.8, 3.9 |
-+-----------------+------------------+-------------------------+
++-----------------+------------------+--------------------------+
+| pydicom version | Release date | Python versions |
++=================+==================+==========================+
+| 1.0 | March 2018 | 2.7, 3.4, 3.5, 3.6 |
++-----------------+------------------+--------------------------+
+| 1.1 | June 2018 | 2.7, 3.4, 3.5, 3.6 |
++-----------------+------------------+--------------------------+
+| 1.2 | October 2018 | 2.7, 3.4, 3.5, 3.6 |
++-----------------+------------------+--------------------------+
+| 1.3 | July 2019 | 2.7, 3.4, 3.5, 3.6 |
++-----------------+------------------+--------------------------+
+| 1.4 | January 2020 | 2.7, 3.5, 3.6, 3.7, 3.8 |
++-----------------+------------------+--------------------------+
+| 2.0 | May 2020 | 3.5, 3.6, 3.7, 3.8 |
++-----------------+------------------+--------------------------+
+| 2.1 | ~November 2020 | 3.6, 3.7, 3.8, 3.9 |
++-----------------+------------------+--------------------------+
+| 2.2 | ~May 2021 | 3.6, 3.7, 3.8, 3.9 |
++-----------------+------------------+--------------------------+
+| 2.3 | ~November 2021 | 3.6, 3.7, 3.8, 3.9, 3.10 |
++-----------------+------------------+--------------------------+
What about support for Python 2.7?
----------------------------------
Python 2.7 reached `end of life <https://www.python.org/doc/sunset-python-2/>`_
-on 1st January, 2020 and will no longer be supported by *pydicom* starting
-from v2.0 (expected release date is April 2020). More information is available
-:doc:`here</old/python2_support>`.
+on 1st January, 2020 and is no longer supported by *pydicom*. More information
+is available :doc:`here</old/python2_support>`.
diff --git a/doc/tutorials/installation.rst b/doc/tutorials/installation.rst
index 680eab0a43..bceda35711 100644
--- a/doc/tutorials/installation.rst
+++ b/doc/tutorials/installation.rst
@@ -28,17 +28,17 @@ Install using pip
official third-party Python software repository. The simplest way to install
from PyPi is using `pip <https://pip.pypa.io/>`_ with the command::
- $ pip install pydicom
+ pip install pydicom
You may need to use this instead, depending on your operating system::
- $ python -m pip install pydicom
+ python -m pip install pydicom
You can also perform an offline installation by
`downloading <https://github.com/pydicom/pydicom/releases>`_ and installing
-one of the release ``*.whl`` files. For example, with the v1.3 release::
+one of the release ``*.whl`` files. For example, with the v2.0 release::
- $ pip install pydicom-1.3.0-py2.py3-none-any.whl
+ pip install pydicom-2.0.0-py3-none-any.whl
Install using conda
@@ -47,23 +47,26 @@ Install using conda
*pydicom* is also available for `conda <https://docs.conda.io/>`_ at
`conda-forge <https://anaconda.org/conda-forge/pydicom>`_::
- $ conda install -c conda-forge pydicom
-
-
-.. _tut_install_libs:
+ conda install -c conda-forge pydicom
Downloading example/test DICOM files
------------------------------------
-A range of the test data files are not distributed within the pydicom package
-so as to keep the package download size small. These files are instead housed
-on Zenodo at <https://zenodo.org/communities/pydicom>. To download all of the
-pydicom test files and get all of the respective paths of where this data is
-housed run the following::
+To keep the package size small, a number of the larger DICOM files are not
+distributed with *pydicom* and are instead kept in the
+`pydicom-data <https://github.com/pydicom/pydicom-data>`_
+repository. To get the complete set of testing and example files you can either
+install the *pydicom-data* repository::
+
+ pip install git+https://github.com/pydicom/pydicom-data
- pydicom.data.get_testdata_files()
+Or download the missing files to the local cache (after installing *pydicom*)::
+ python -c "import pydicom; pydicom.data.fetch_data_files()"
+
+
+.. _tut_install_libs:
Install the optional libraries
==============================
@@ -73,11 +76,11 @@ If you're going to be manipulating pixel data then
Using pip::
- $ pip install numpy
+ pip install numpy
Through conda::
- $ conda install numpy
+ conda install numpy
To decode JPEG compressed pixel data one or more additional libraries will
need to be installed. See :ref:`this page <guide_compressed>` for a list of
@@ -96,12 +99,12 @@ Using pip; you may need to make sure that the
`openjpeg <http://www.openjpeg.org/>`_ (for JPEG 2000) libraries are installed
beforehand::
- $ pip install pillow
+ pip install pillow
Through conda::
- $ conda install -c conda-forge openjpeg jpeg
- $ conda install pillow
+ conda install -c conda-forge openjpeg jpeg
+ conda install pillow
Installing CharPyLS
@@ -113,13 +116,13 @@ decompress JPEG-LS images.
Using pip::
- $ pip install cython
- $ pip install git+https://github.com/Who8MyLunch/CharPyLS
+ pip install cython
+ pip install git+https://github.com/Who8MyLunch/CharPyLS
Through conda::
- $ conda install cython
- $ pip install git+https://github.com/Who8MyLunch/CharPyLS
+ conda install cython
+ pip install git+https://github.com/Who8MyLunch/CharPyLS
Installing GDCM
@@ -136,7 +139,7 @@ has instructions for installing in a virtual environment in Ubuntu
Through conda::
- $ conda install gdcm -c conda-forge
+ conda install gdcm -c conda-forge
Installing pylibjpeg
@@ -148,7 +151,7 @@ is installed.
Using pip::
- $ pip install pylibjpeg pylibjpeg-libjpeg pylibjpeg-openjpeg
+ pip install pylibjpeg pylibjpeg-libjpeg pylibjpeg-openjpeg
.. _tut_install_dev:
@@ -159,7 +162,7 @@ Install the development version
To install a snapshot of the latest code (the ``master`` branch) from
`GitHub <https://github.com/pydicom/pydicom>`_::
- $ pip install git+https://github.com/pydicom/pydicom.git
+ pip install git+https://github.com/pydicom/pydicom.git
The ``master`` branch is under active development and while it is usually
stable, it may have undocumented changes or bugs.
@@ -168,11 +171,11 @@ If you want to keep up-to-date with the latest code, make sure you have
`Git <https://git-scm.com/>`_ installed and then clone the ``master``
branch (this will create a ``pydicom`` directory in your current directory)::
- $ git clone --depth=1 https://github.com/pydicom/pydicom.git
+ git clone --depth=1 https://github.com/pydicom/pydicom.git
Then install using pip in editable (``-e``) mode::
- $ pip install -e pydicom/
+ pip install -e pydicom/
When you want to update your copy of the source code, run ``git pull`` from
within the ``pydicom`` directory and Git will download and apply any changes.
diff --git a/pydicom/data/__init__.py b/pydicom/data/__init__.py
index 325ecdf27c..42a9caeaeb 100644
--- a/pydicom/data/__init__.py
+++ b/pydicom/data/__init__.py
@@ -3,10 +3,11 @@
from .data_manager import (
get_charset_files, get_testdata_file, get_testdata_files,
- get_palette_files, DATA_ROOT, EXTERNAL_DATA_SOURCES
+ get_palette_files, DATA_ROOT, EXTERNAL_DATA_SOURCES, fetch_data_files
)
__all__ = [
+ 'fetch_data_files',
'get_charset_files',
'get_palette_files'
'get_testdata_files',
diff --git a/pydicom/data/data_manager.py b/pydicom/data/data_manager.py
index 37461a5b07..f2f659f3c9 100644
--- a/pydicom/data/data_manager.py
+++ b/pydicom/data/data_manager.py
@@ -55,7 +55,7 @@
from pydicom.data.download import (
data_path_with_download, calculate_file_hash, get_cached_filehash,
- get_url_map
+ get_url_map, get_data_dir
)
@@ -141,6 +141,26 @@ def online_test_file_dummy_paths() -> Dict[str, str]:
return dummy_path_map
+def fetch_data_files():
+ """Download missing test files to the local cache."""
+ cache = get_data_dir()
+ paths = {cache / fname: fname for fname in list(get_url_map().keys())}
+
+ error = []
+ for p in paths:
+ # Download missing files or files that don't match the hash
+ try:
+ data_path_with_download(p.name)
+ except Exception as exc:
+ error.append(p.name)
+
+ if error:
+ raise RuntimeError(
+ "An error occurred downloading the following files: "
+ f"{', '.join(error)}"
+ )
+
+
def get_files(
base: Union[str, os.PathLike],
pattern: str = "**/*",
diff --git a/pydicom/data/download.py b/pydicom/data/download.py
index f86e70144e..81982fb5d8 100644
--- a/pydicom/data/download.py
+++ b/pydicom/data/download.py
@@ -14,8 +14,6 @@
import urllib.error
import warnings
-from . import retry
-
try:
import tqdm
@@ -25,29 +23,23 @@ def update_to(self, b=1, bsize=1, tsize=None):
self.total = tsize
self.update(b * bsize - self.n)
+ USE_PROGRESS_BAR = True
except ImportError:
- @contextlib.contextmanager
- def DownloadProgressBar(*args, **kwargs):
- try:
- class dummy:
- def update_to(*args, **kwargs): # pylint: disable = no-method-argument
- pass
+ USE_PROGRESS_BAR = False
- yield dummy
- finally:
- pass
+from . import retry
HERE = pathlib.Path(__file__).resolve().parent
_SIMULATE_NETWORK_OUTAGE = False # For testing network outages
-def calculate_file_hash(fpath: pathlib.Path) -> str:
+def calculate_file_hash(fpath: os.PathLike) -> str:
"""Return the SHA256 checksum for the file at `fpath`.
Parameters
----------
- fpath : pathlib.Path
+ fpath : os.PathLike
The absolute path to the file that is to be checksummed.
Returns
@@ -66,40 +58,43 @@ def calculate_file_hash(fpath: pathlib.Path) -> str:
return hasher.hexdigest()
-def get_config_dir() -> str:
+def get_config_dir() -> os.PathLike:
"""Return the path to the pydicom config directory, creating it if required
The config directory will be named ``.pydicom`` and will be created in the
local user's home directory.
"""
- config_dir = pathlib.Path.home().joinpath(".pydicom")
+ config_dir = pathlib.Path.home() / ".pydicom"
config_dir.mkdir(exist_ok=True)
return config_dir
@retry.retry(urllib.error.HTTPError)
-def download_with_progress(url: str, fpath: pathlib.Path) -> None:
+def download_with_progress(url: str, fpath: os.PathLike) -> None:
"""Download the file at `url` to `fpath` with a progress bar.
Parameters
----------
url : str
The URL to download the file from.
- fpath : pathlib.Path
+ fpath : os.PathLike
The absolute path where the file will be written to.
"""
- with DownloadProgressBar(
- unit="B", unit_scale=True, miniters=1, desc=url.split("/")[-1]
- ) as t:
- urllib.request.urlretrieve(
- url, os.fspath(fpath), reporthook=t.update_to
- )
+ if USE_PROGRESS_BAR:
+ with DownloadProgressBar(
+ unit="B", unit_scale=True, miniters=1, desc=url.split("/")[-1]
+ ) as t:
+ urllib.request.urlretrieve(
+ url, os.fspath(fpath), reporthook=t.update_to
+ )
+ else:
+ urllib.request.urlretrieve(url, os.fspath(fpath))
-def get_data_dir() -> str:
+def get_data_dir() -> os.PathLike:
"""Return the path to the cache directory, creating it if required."""
- data_dir = get_config_dir().joinpath("data")
+ data_dir = get_config_dir() / "data"
data_dir.mkdir(exist_ok=True)
return data_dir
@@ -147,8 +142,9 @@ def data_path_with_download(
filename: str,
check_hash: bool = True,
redownload_on_hash_mismatch: bool = True,
- url: Optional[str] = None
-) -> pathlib.Path:
+ url: Optional[str] = None,
+ quiet: bool = True
+) -> os.PathLike:
"""Return the absolute path to the cached file with `filename`.
If the file isn't available in the cache then it will be downloaded.
@@ -167,7 +163,7 @@ def data_path_with_download(
Returns
-------
- pathlib.Path
+ os.PathLike
The absolute path to the file.
"""
if _SIMULATE_NETWORK_OUTAGE:
| diff --git a/pydicom/tests/test_data_manager.py b/pydicom/tests/test_data_manager.py
index fd9d6777be..2f17878292 100644
--- a/pydicom/tests/test_data_manager.py
+++ b/pydicom/tests/test_data_manager.py
@@ -10,7 +10,7 @@
import pytest
from pydicom.data import (
- get_charset_files, get_testdata_files, get_palette_files
+ get_charset_files, get_testdata_files, get_palette_files, fetch_data_files
)
from pydicom.data.data_manager import (
DATA_ROOT, get_testdata_file, EXTERNAL_DATA_SOURCES
@@ -26,6 +26,14 @@
EXT_PYDICOM = True
+@pytest.fixture
+def download_failure():
+ """Simulate a download failure."""
+ download._SIMULATE_NETWORK_OUTAGE = True
+ yield
+ download._SIMULATE_NETWORK_OUTAGE = False
+
+
class TestGetData:
def test_get_dataset(self):
"""Test the different functions to get lists of data files."""
@@ -124,8 +132,6 @@ def teardown(self):
if 'mylib' in EXTERNAL_DATA_SOURCES:
del EXTERNAL_DATA_SOURCES['mylib']
- download._SIMULATE_NETWORK_OUTAGE = False
-
def as_posix(self, path):
"""Return `path` as a posix path"""
return Path(path).as_posix()
@@ -247,12 +253,8 @@ def test_get_testdata_files_external_ignore_hash(self):
@pytest.mark.skipif(EXT_PYDICOM, reason="pydicom-data installed")
class TestDownload:
"""Tests for the download module."""
- def teardown(self):
- download._SIMULATE_NETWORK_OUTAGE = False
-
- def test_get_testdata_file_network_outage(self):
+ def test_get_testdata_file_network_outage(self, download_failure):
"""Test a network outage when using get_testdata_file."""
- download._SIMULATE_NETWORK_OUTAGE = True
fname = "693_UNCI.dcm"
msg = (
r"A download failure occurred while attempting to "
@@ -261,9 +263,8 @@ def test_get_testdata_file_network_outage(self):
with pytest.warns(UserWarning, match=msg):
assert get_testdata_file(fname) is None
- def test_get_testdata_files_network_outage(self):
+ def test_get_testdata_files_network_outage(self, download_failure):
"""Test a network outage when using get_testdata_files."""
- download._SIMULATE_NETWORK_OUTAGE = True
msg = (
r"One or more download failures occurred, the list of matching "
r"file paths may be incomplete"
@@ -272,6 +273,26 @@ def test_get_testdata_files_network_outage(self):
assert [] == get_testdata_files("693_UN*")
+def test_fetch_data_files():
+ """Test fetch_data_files()."""
+ # Remove a single file from the cache
+ cache = get_data_dir()
+ path = cache / "693_J2KR.dcm"
+ if path.exists():
+ path.unlink()
+
+ assert not path.exists()
+ fetch_data_files()
+ assert path.exists()
+
+
+def test_fetch_data_files_download_failure(download_failure):
+ """Test fetch_data_files() with download failures."""
+ msg = r"An error occurred downloading the following files:"
+ with pytest.raises(RuntimeError, match=msg):
+ fetch_data_files()
+
+
def test_hashes():
"""Test for duplicates in hashes.json."""
# We can't have case mixes because windows filenames are case insensitive
| diff --git a/doc/faq/index.rst b/doc/faq/index.rst
index 2915b3de30..eceea57c00 100644
--- a/doc/faq/index.rst
+++ b/doc/faq/index.rst
@@ -45,7 +45,7 @@ Required
Optional
~~~~~~~~
-When manipulating *Pixel Data* it's recommended that you install
+When manipulating *Pixel Data* it's recommended you install
`NumPy <https://numpy.org/>`_. When dealing with JPEG
compressed *Pixel Data* see :ref:`this table<guide_compressed>` for which
libraries are required.
@@ -55,28 +55,31 @@ libraries are required.
What version of Python can I use?
---------------------------------
-+-----------------+------------------+-------------------------+
-| pydicom version | Release date | Python versions |
-+=================+==================+=========================+
-| 1.0 | March 2018 | 2.7, 3.4, 3.5, 3.6 |
-+-----------------+------------------+-------------------------+
-| 1.1 | June 2018 | 2.7, 3.4, 3.5, 3.6 |
-+-----------------+------------------+-------------------------+
-| 1.2 | October 2018 | 2.7, 3.4, 3.5, 3.6 |
-+-----------------+------------------+-------------------------+
-| 1.3 | July 2019 | 2.7, 3.4, 3.5, 3.6 |
-+-----------------+------------------+-------------------------+
-| 1.4 | January 2020 | 2.7, 3.5, 3.6, 3.7, 3.8 |
-+-----------------+------------------+-------------------------+
-| 2.0 | ~April 2020 | 3.5, 3.6, 3.7, 3.8 |
-+-----------------+------------------+-------------------------+
-| 2.1 | ~November 2020 | 3.6, 3.7, 3.8, 3.9 |
-+-----------------+------------------+-------------------------+
++-----------------+------------------+--------------------------+
+| pydicom version | Release date | Python versions |
++=================+==================+==========================+
+| 1.0 | March 2018 | 2.7, 3.4, 3.5, 3.6 |
++-----------------+------------------+--------------------------+
+| 1.1 | June 2018 | 2.7, 3.4, 3.5, 3.6 |
++-----------------+------------------+--------------------------+
+| 1.2 | October 2018 | 2.7, 3.4, 3.5, 3.6 |
++-----------------+------------------+--------------------------+
+| 1.3 | July 2019 | 2.7, 3.4, 3.5, 3.6 |
++-----------------+------------------+--------------------------+
+| 1.4 | January 2020 | 2.7, 3.5, 3.6, 3.7, 3.8 |
++-----------------+------------------+--------------------------+
+| 2.0 | May 2020 | 3.5, 3.6, 3.7, 3.8 |
++-----------------+------------------+--------------------------+
+| 2.1 | ~November 2020 | 3.6, 3.7, 3.8, 3.9 |
++-----------------+------------------+--------------------------+
+| 2.2 | ~May 2021 | 3.6, 3.7, 3.8, 3.9 |
++-----------------+------------------+--------------------------+
+| 2.3 | ~November 2021 | 3.6, 3.7, 3.8, 3.9, 3.10 |
++-----------------+------------------+--------------------------+
What about support for Python 2.7?
----------------------------------
Python 2.7 reached `end of life <https://www.python.org/doc/sunset-python-2/>`_
-on 1st January, 2020 and will no longer be supported by *pydicom* starting
-from v2.0 (expected release date is April 2020). More information is available
-:doc:`here</old/python2_support>`.
+on 1st January, 2020 and is no longer supported by *pydicom*. More information
+is available :doc:`here</old/python2_support>`.
diff --git a/doc/tutorials/installation.rst b/doc/tutorials/installation.rst
index 680eab0a43..bceda35711 100644
--- a/doc/tutorials/installation.rst
+++ b/doc/tutorials/installation.rst
@@ -28,17 +28,17 @@ Install using pip
official third-party Python software repository. The simplest way to install
from PyPi is using `pip <https://pip.pypa.io/>`_ with the command::
- $ pip install pydicom
+ pip install pydicom
You may need to use this instead, depending on your operating system::
- $ python -m pip install pydicom
+ python -m pip install pydicom
You can also perform an offline installation by
`downloading <https://github.com/pydicom/pydicom/releases>`_ and installing
-one of the release ``*.whl`` files. For example, with the v1.3 release::
+one of the release ``*.whl`` files. For example, with the v2.0 release::
- $ pip install pydicom-1.3.0-py2.py3-none-any.whl
+ pip install pydicom-2.0.0-py3-none-any.whl
Install using conda
@@ -47,23 +47,26 @@ Install using conda
*pydicom* is also available for `conda <https://docs.conda.io/>`_ at
`conda-forge <https://anaconda.org/conda-forge/pydicom>`_::
- $ conda install -c conda-forge pydicom
-
-
-.. _tut_install_libs:
+ conda install -c conda-forge pydicom
Downloading example/test DICOM files
------------------------------------
-A range of the test data files are not distributed within the pydicom package
-so as to keep the package download size small. These files are instead housed
-on Zenodo at <https://zenodo.org/communities/pydicom>. To download all of the
-pydicom test files and get all of the respective paths of where this data is
-housed run the following::
+To keep the package size small, a number of the larger DICOM files are not
+distributed with *pydicom* and are instead kept in the
+`pydicom-data <https://github.com/pydicom/pydicom-data>`_
+repository. To get the complete set of testing and example files you can either
+install the *pydicom-data* repository::
+
+ pip install git+https://github.com/pydicom/pydicom-data
- pydicom.data.get_testdata_files()
+Or download the missing files to the local cache (after installing *pydicom*)::
+ python -c "import pydicom; pydicom.data.fetch_data_files()"
+
+
+.. _tut_install_libs:
Install the optional libraries
==============================
@@ -73,11 +76,11 @@ If you're going to be manipulating pixel data then
Using pip::
- $ pip install numpy
+ pip install numpy
Through conda::
- $ conda install numpy
+ conda install numpy
To decode JPEG compressed pixel data one or more additional libraries will
need to be installed. See :ref:`this page <guide_compressed>` for a list of
@@ -96,12 +99,12 @@ Using pip; you may need to make sure that the
`openjpeg <http://www.openjpeg.org/>`_ (for JPEG 2000) libraries are installed
beforehand::
- $ pip install pillow
+ pip install pillow
Through conda::
- $ conda install -c conda-forge openjpeg jpeg
- $ conda install pillow
+ conda install -c conda-forge openjpeg jpeg
+ conda install pillow
Installing CharPyLS
@@ -113,13 +116,13 @@ decompress JPEG-LS images.
Using pip::
- $ pip install cython
- $ pip install git+https://github.com/Who8MyLunch/CharPyLS
+ pip install cython
+ pip install git+https://github.com/Who8MyLunch/CharPyLS
Through conda::
- $ conda install cython
- $ pip install git+https://github.com/Who8MyLunch/CharPyLS
+ conda install cython
+ pip install git+https://github.com/Who8MyLunch/CharPyLS
Installing GDCM
@@ -136,7 +139,7 @@ has instructions for installing in a virtual environment in Ubuntu
Through conda::
- $ conda install gdcm -c conda-forge
+ conda install gdcm -c conda-forge
Installing pylibjpeg
@@ -148,7 +151,7 @@ is installed.
Using pip::
- $ pip install pylibjpeg pylibjpeg-libjpeg pylibjpeg-openjpeg
+ pip install pylibjpeg pylibjpeg-libjpeg pylibjpeg-openjpeg
.. _tut_install_dev:
@@ -159,7 +162,7 @@ Install the development version
To install a snapshot of the latest code (the ``master`` branch) from
`GitHub <https://github.com/pydicom/pydicom>`_::
- $ pip install git+https://github.com/pydicom/pydicom.git
+ pip install git+https://github.com/pydicom/pydicom.git
The ``master`` branch is under active development and while it is usually
stable, it may have undocumented changes or bugs.
@@ -168,11 +171,11 @@ If you want to keep up-to-date with the latest code, make sure you have
`Git <https://git-scm.com/>`_ installed and then clone the ``master``
branch (this will create a ``pydicom`` directory in your current directory)::
- $ git clone --depth=1 https://github.com/pydicom/pydicom.git
+ git clone --depth=1 https://github.com/pydicom/pydicom.git
Then install using pip in editable (``-e``) mode::
- $ pip install -e pydicom/
+ pip install -e pydicom/
When you want to update your copy of the source code, run ``git pull`` from
within the ``pydicom`` directory and Git will download and apply any changes.
| [
{
"components": [
{
"doc": "Download missing test files to the local cache.",
"lines": [
144,
160
],
"name": "fetch_data_files",
"signature": "def fetch_data_files():",
"type": "function"
}
],
"file": "pydicom/data/data_mana... | [
"pydicom/tests/test_data_manager.py::TestGetData::test_get_dataset",
"pydicom/tests/test_data_manager.py::TestGetData::test_get_dataset_pattern",
"pydicom/tests/test_data_manager.py::TestGetData::test_get_testdata_file",
"pydicom/tests/test_data_manager.py::TestGetData::test_get_palette_files",
"pydicom/tes... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
[MRG] Update installation documentation for cache data
#### Describe the changes
* Update installation documentation
* Add `fetch_data_files()` for downloading cache data
#### Tasks
- [x] Unit tests added that reproduce the issue or prove feature is working
- [x] Fix or feature added
- [x] Documentation updated (if relevant)
- [x] [Preview link](https://2365-14006067-gh.circle-artifacts.com/0/doc/_build/html/index.html)
- [x] Unit tests passing and overall coverage the same or better
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in pydicom/data/data_manager.py]
(definition of fetch_data_files:)
def fetch_data_files():
"""Download missing test files to the local cache."""
[end of new definitions in pydicom/data/data_manager.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | c2c6145d679adc97924d6c8a761a50b8e2819e3f | |
joke2k__faker-1276 | 1,276 | joke2k/faker | null | 135738584b799f9b0fabb938095929455f8449a0 | 2020-09-26T17:42:54Z | diff --git a/faker/providers/bank/tr_TR/__init__.py b/faker/providers/bank/tr_TR/__init__.py
new file mode 100644
index 0000000000..e89db54f42
--- /dev/null
+++ b/faker/providers/bank/tr_TR/__init__.py
@@ -0,0 +1,8 @@
+from .. import Provider as BankProvider
+
+
+class Provider(BankProvider):
+ """Implement bank provider for ``tr_TR`` locale."""
+
+ bban_format = '######################'
+ country_code = 'TR'
| diff --git a/tests/providers/test_bank.py b/tests/providers/test_bank.py
index 46f710ecf1..2ce36aff55 100644
--- a/tests/providers/test_bank.py
+++ b/tests/providers/test_bank.py
@@ -11,6 +11,7 @@
from faker.providers.bank.no_NO import Provider as NoNoBankProvider
from faker.providers.bank.pl_PL import Provider as PlPlBankProvider
from faker.providers.bank.pt_PT import Provider as PtPtBankProvider
+from faker.providers.bank.tr_TR import Provider as TrTrBankProvider
def is_valid_iban(iban):
@@ -191,3 +192,18 @@ class TestFilPh(TestEnPh):
class TestTlPh(TestEnPh):
"""Test tl_PH bank provider"""
pass
+
+
+class TestTrTr:
+ """Test tr_TR bank provider"""
+
+ def test_bban(self, faker, num_samples):
+ for _ in range(num_samples):
+ assert re.fullmatch(r"\d{22}", faker.bban())
+
+ def test_iban(self, faker, num_samples):
+ for _ in range(num_samples):
+ iban = faker.iban()
+ assert is_valid_iban(iban)
+ assert iban[:2] == TrTrBankProvider.country_code
+ assert re.fullmatch(r"\d{2}\d{22}", iban[2:])
| [
{
"components": [
{
"doc": "Implement bank provider for ``tr_TR`` locale.",
"lines": [
4,
8
],
"name": "Provider",
"signature": "class Provider(BankProvider):",
"type": "class"
}
],
"file": "faker/providers/bank/tr_TR/__init... | [
"tests/providers/test_bank.py::TestNoNo::test_bban",
"tests/providers/test_bank.py::TestNoNo::test_iban",
"tests/providers/test_bank.py::TestFiFi::test_bban",
"tests/providers/test_bank.py::TestFiFi::test_iban",
"tests/providers/test_bank.py::TestPlPl::test_bban",
"tests/providers/test_bank.py::TestPlPl::... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Added bank provider for tr_TR locale.
Added bank provider for tr_TR locale. #1200
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in faker/providers/bank/tr_TR/__init__.py]
(definition of Provider:)
class Provider(BankProvider):
"""Implement bank provider for ``tr_TR`` locale."""
[end of new definitions in faker/providers/bank/tr_TR/__init__.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 6edfdbf6ae90b0153309e3bf066aa3b2d16494a7 | ||
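The `is_valid_iban` helper that the test patch relies on applies the standard ISO 13616 mod-97 check. A minimal sketch of that algorithm (the standard check itself, not necessarily the test suite's exact implementation):

```python
def is_valid_iban(iban):
    """ISO 13616 mod-97 check: move the first four characters to the
    end, map letters to two-digit numbers (A=10 ... Z=35), and the
    resulting integer must leave remainder 1 modulo 97."""
    iban = iban.replace(" ", "").upper()
    rearranged = iban[4:] + iban[:4]
    # int(ch, 36) maps '0'-'9' to 0-9 and 'A'-'Z' to 10-35
    digits = "".join(str(int(ch, 36)) for ch in rearranged)
    return int(digits) % 97 == 1
```

For a `tr_TR` IBAN the same check runs over `TR`, two check digits, and the 22-digit BBAN produced by the new provider's `bban_format`.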
scikit-learn__scikit-learn-18447 | 18,447 | scikit-learn/scikit-learn | 0.24 | 54ce4222694819ad52d544ce5cba5da274c34ab7 | 2020-09-23T20:43:26Z | diff --git a/sklearn/utils/extmath.py b/sklearn/utils/extmath.py
index b45ffebea2d6b..0384dc0706334 100644
--- a/sklearn/utils/extmath.py
+++ b/sklearn/utils/extmath.py
@@ -690,6 +690,105 @@ def _safe_accumulator_op(op, x, *args, **kwargs):
return result
+def _incremental_weighted_mean_and_var(X, sample_weight,
+ last_mean,
+ last_variance,
+ last_weight_sum):
+ """Calculate weighted mean and weighted variance incremental update.
+
+ .. versionadded:: 0.24
+
+ Parameters
+ ----------
+ X : array-like of shape (n_samples, n_features)
+ Data to use for mean and variance update.
+
+ sample_weight : array-like of shape (n_samples,) or None
+ Sample weights. If None, then samples are equally weighted.
+
+ last_mean : array-like of shape (n_features,)
+ Mean before the incremental update.
+
+ last_variance : array-like of shape (n_features,) or None
+ Variance before the incremental update.
+ If None, variance update is not computed (in case scaling is not
+ required).
+
+ last_weight_sum : array-like of shape (n_features,)
+ Sum of weights before the incremental update.
+
+ Returns
+ -------
+ updated_mean : array of shape (n_features,)
+
+ updated_variance : array of shape (n_features,) or None
+ If None, only mean is computed.
+
+ updated_weight_sum : array of shape (n_features,)
+
+ Notes
+ -----
+ NaNs in `X` are ignored.
+
+ `last_mean` and `last_variance` are statistics computed at the last step
+ by the function. Both must be initialized to 0.0.
+ The mean is always required (`last_mean`) and returned (`updated_mean`),
+ whereas the variance can be None (`last_variance` and `updated_variance`).
+
+ For further details on the algorithm to perform the computation in a
+ numerically stable way, see [Finch2009]_, Sections 4 and 5.
+
+ References
+ ----------
+ .. [Finch2009] `Tony Finch,
+ "Incremental calculation of weighted mean and variance",
+ University of Cambridge Computing Service, February 2009.
+ <https://fanf2.user.srcf.net/hermes/doc/antiforgery/stats.pdf>`_
+
+ """
+ # last = stats before the increment
+ # new = the current increment
+ # updated = the aggregated stats
+ if sample_weight is None:
+ return _incremental_mean_and_var(X, last_mean, last_variance,
+ last_weight_sum)
+ nan_mask = np.isnan(X)
+ sample_weight_T = np.reshape(sample_weight, (1, -1))
+ # new_weight_sum with shape (n_features,)
+ new_weight_sum = \
+ _safe_accumulator_op(np.dot, sample_weight_T, ~nan_mask).ravel()
+ total_weight_sum = _safe_accumulator_op(np.sum, sample_weight, axis=0)
+
+ X_0 = np.where(nan_mask, 0, X)
+ new_mean = \
+ _safe_accumulator_op(np.average, X_0, weights=sample_weight, axis=0)
+ new_mean *= total_weight_sum / new_weight_sum
+ updated_weight_sum = last_weight_sum + new_weight_sum
+ updated_mean = (
+ (last_weight_sum * last_mean + new_weight_sum * new_mean)
+ / updated_weight_sum)
+
+ if last_variance is None:
+ updated_variance = None
+ else:
+ X_0 = np.where(nan_mask, 0, (X-new_mean)**2)
+ new_variance =\
+ _safe_accumulator_op(
+ np.average, X_0, weights=sample_weight, axis=0)
+ new_variance *= total_weight_sum / new_weight_sum
+ new_term = (
+ new_weight_sum *
+ (new_variance +
+ (new_mean - updated_mean) ** 2))
+ last_term = (
+ last_weight_sum *
+ (last_variance +
+ (last_mean - updated_mean) ** 2))
+ updated_variance = (new_term + last_term) / updated_weight_sum
+
+ return updated_mean, updated_variance, updated_weight_sum
+
+
def _incremental_mean_and_var(X, last_mean, last_variance, last_sample_count):
"""Calculate mean update and a Youngs and Cramer variance update.
| diff --git a/sklearn/utils/tests/test_extmath.py b/sklearn/utils/tests/test_extmath.py
index fba6ef7353375..451acd8d0307c 100644
--- a/sklearn/utils/tests/test_extmath.py
+++ b/sklearn/utils/tests/test_extmath.py
@@ -11,6 +11,7 @@
from scipy.special import expit
import pytest
+from sklearn.utils import gen_batches
from sklearn.utils._testing import assert_almost_equal
from sklearn.utils._testing import assert_allclose
@@ -21,7 +22,7 @@
from sklearn.utils._testing import assert_warns_message
from sklearn.utils._testing import skip_if_32bit
-from sklearn.utils.extmath import density
+from sklearn.utils.extmath import density, _safe_accumulator_op
from sklearn.utils.extmath import randomized_svd
from sklearn.utils.extmath import row_norms
from sklearn.utils.extmath import weighted_mode
@@ -29,6 +30,7 @@
from sklearn.utils.extmath import log_logistic
from sklearn.utils.extmath import svd_flip
from sklearn.utils.extmath import _incremental_mean_and_var
+from sklearn.utils.extmath import _incremental_weighted_mean_and_var
from sklearn.utils.extmath import _deterministic_vector_sign_flip
from sklearn.utils.extmath import softmax
from sklearn.utils.extmath import stable_cumsum
@@ -452,6 +454,99 @@ def naive_log_logistic(x):
assert_array_almost_equal(log_logistic(extreme_x), [-100, 0])
+@pytest.fixture()
+def rng():
+ return np.random.RandomState(42)
+
+
+def test_incremental_weighted_mean_and_variance_simple(rng):
+ mult = 10
+ X = rng.rand(1000, 20)*mult
+ sample_weight = rng.rand(X.shape[0]) * mult
+ mean, var, _ = _incremental_weighted_mean_and_var(X, sample_weight,
+ 0, 0, 0)
+
+ expected_mean = np.average(X, weights=sample_weight, axis=0)
+ expected_var = np.average(X**2, weights=sample_weight, axis=0) - \
+ expected_mean**2
+ assert_almost_equal(mean, expected_mean)
+ assert_almost_equal(var, expected_var)
+
+
+@pytest.mark.parametrize("mean", [0, 1e7, -1e7])
+@pytest.mark.parametrize("var", [1, 1e-8, 1e5])
+@pytest.mark.parametrize("weight_loc, weight_scale", [
+ (0, 1), (0, 1e-8), (1, 1e-8), (10, 1), (1e7, 1)])
+def test_incremental_weighted_mean_and_variance(mean, var, weight_loc,
+ weight_scale, rng):
+
+ # Testing of correctness and numerical stability
+ def _assert(X, sample_weight, expected_mean, expected_var):
+ n = X.shape[0]
+ for chunk_size in [1, n//10 + 1, n//4 + 1, n//2 + 1, n]:
+ last_mean, last_weight_sum, last_var = 0, 0, 0
+ for batch in gen_batches(n, chunk_size):
+ last_mean, last_var, last_weight_sum = \
+ _incremental_weighted_mean_and_var(X[batch],
+ sample_weight[batch],
+ last_mean,
+ last_var,
+ last_weight_sum)
+ assert_allclose(last_mean, expected_mean)
+ assert_allclose(last_var, expected_var, atol=1e-6)
+
+ size = (100, 20)
+ weight = rng.normal(loc=weight_loc, scale=weight_scale, size=size[0])
+
+ # Compare to weighted average: np.average
+ X = rng.normal(loc=mean, scale=var, size=size)
+ expected_mean = _safe_accumulator_op(np.average, X, weights=weight, axis=0)
+ expected_var = _safe_accumulator_op(
+ np.average, (X - expected_mean) ** 2, weights=weight, axis=0)
+ _assert(X, weight, expected_mean, expected_var)
+
+ # Compare to unweighted mean: np.mean
+ X = rng.normal(loc=mean, scale=var, size=size)
+ ones_weight = np.ones(size[0])
+ expected_mean = _safe_accumulator_op(np.mean, X, axis=0)
+ expected_var = _safe_accumulator_op(np.var, X, axis=0)
+ _assert(X, ones_weight, expected_mean, expected_var)
+
+
+def test_incremental_weighted_mean_and_variance_ignore_nan():
+ old_means = np.array([535., 535., 535., 535.])
+ old_variances = np.array([4225., 4225., 4225., 4225.])
+ old_weight_sum = np.array([2, 2, 2, 2], dtype=np.int32)
+ sample_weights_X = np.ones(3)
+ sample_weights_X_nan = np.ones(4)
+
+ X = np.array([[170, 170, 170, 170],
+ [430, 430, 430, 430],
+ [300, 300, 300, 300]])
+
+ X_nan = np.array([[170, np.nan, 170, 170],
+ [np.nan, 170, 430, 430],
+ [430, 430, np.nan, 300],
+ [300, 300, 300, np.nan]])
+
+ X_means, X_variances, X_count = \
+ _incremental_weighted_mean_and_var(X,
+ sample_weights_X,
+ old_means,
+ old_variances,
+ old_weight_sum)
+ X_nan_means, X_nan_variances, X_nan_count = \
+ _incremental_weighted_mean_and_var(X_nan,
+ sample_weights_X_nan,
+ old_means,
+ old_variances,
+ old_weight_sum)
+
+ assert_allclose(X_nan_means, X_means)
+ assert_allclose(X_nan_variances, X_variances)
+ assert_allclose(X_nan_count, X_count)
+
+
def test_incremental_variance_update_formulas():
# Test Youngs and Cramer incremental variance formulas.
# Doggie data from https://www.mathsisfun.com/data/standard-deviation.html
| [
{
"components": [
{
"doc": "Calculate weighted mean and weighted variance incremental update.\n\n.. versionadded:: 0.24\n\nParameters\n----------\nX : array-like of shape (n_samples, n_features)\n Data to use for mean and variance update.\n\nsample_weight : array-like of shape (n_samples,) or N... | [
"sklearn/utils/tests/test_extmath.py::test_density",
"sklearn/utils/tests/test_extmath.py::test_uniform_weights",
"sklearn/utils/tests/test_extmath.py::test_random_weights",
"sklearn/utils/tests/test_extmath.py::test_randomized_svd_low_rank_all_dtypes[int32]",
"sklearn/utils/tests/test_extmath.py::test_rand... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
ENH Incremental weighted mean and var
Partially address #15601.
Supersede and close #16066.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in sklearn/utils/extmath.py]
(definition of _incremental_weighted_mean_and_var:)
def _incremental_weighted_mean_and_var(X, sample_weight, last_mean, last_variance, last_weight_sum):
"""Calculate weighted mean and weighted variance incremental update.
.. versionadded:: 0.24
Parameters
----------
X : array-like of shape (n_samples, n_features)
Data to use for mean and variance update.
sample_weight : array-like of shape (n_samples,) or None
Sample weights. If None, then samples are equally weighted.
last_mean : array-like of shape (n_features,)
Mean before the incremental update.
last_variance : array-like of shape (n_features,) or None
Variance before the incremental update.
If None, variance update is not computed (in case scaling is not
required).
last_weight_sum : array-like of shape (n_features,)
Sum of weights before the incremental update.
Returns
-------
updated_mean : array of shape (n_features,)
updated_variance : array of shape (n_features,) or None
If None, only mean is computed.
updated_weight_sum : array of shape (n_features,)
Notes
-----
NaNs in `X` are ignored.
`last_mean` and `last_variance` are statistics computed at the last step
by the function. Both must be initialized to 0.0.
The mean is always required (`last_mean`) and returned (`updated_mean`),
whereas the variance can be None (`last_variance` and `updated_variance`).
For further details on the algorithm to perform the computation in a
numerically stable way, see [Finch2009]_, Sections 4 and 5.
References
----------
.. [Finch2009] `Tony Finch,
"Incremental calculation of weighted mean and variance",
University of Cambridge Computing Service, February 2009.
<https://fanf2.user.srcf.net/hermes/doc/antiforgery/stats.pdf>`_"""
[end of new definitions in sklearn/utils/extmath.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Incremental weighted mean and var
#### Reference Issues/PRs
Partially addresses: #15601
#### What does this implement/fix? Explain your changes.
Method partial_fit in StandardScaler can be used multiple times to incrementally update mean and variance, but there was no proper method to do it with sample_weights. This PR introduces _incremental_weighted_mean_and_var, which does exactly what we want.
#### Any other comments?
1. Basic equations I used can be found [here](https://fanf2.user.srcf.net/hermes/doc/antiforgery/stats.pdf), but I haven't found any (free) papers with a derivation leading to the batch version of them. It's basic math, but the code may seem confusing without that sort of explanation. If it's really necessary I can provide a brief paper.
2. NaN handling is somewhat inelegant, but I haven't found any 'weighted' method similar to np.nanvar/np.nanmean.
----------
--------------------
</issues> | 54ce4222694819ad52d544ce5cba5da274c34ab7 | |
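The update described in the definitions and discussion above follows Finch (2009). For a single feature, processed one weighted sample at a time, the core recurrence can be sketched as follows (a simplified scalar stand-in for the batched NumPy implementation, not scikit-learn's actual code):

```python
def incremental_weighted_mean_var(xs, ws):
    """Single-pass weighted mean/variance (Finch 2009, Secs. 4-5).

    Assumes at least one sample with nonzero total weight.
    Returns (weighted mean, weighted population variance, weight sum).
    """
    mean = 0.0        # running weighted mean
    s = 0.0           # running sum of w * (x - old_mean) * (x - new_mean)
    weight_sum = 0.0  # running sum of weights
    for x, w in zip(xs, ws):
        weight_sum += w
        delta = x - mean
        mean += (w / weight_sum) * delta      # shift mean toward x
        s += w * delta * (x - mean)           # note: uses the *updated* mean
    return mean, s / weight_sum, weight_sum
```

The point of the `s` accumulator is numerical stability: deviations are measured from the running mean rather than summing raw `x**2`, which is what lets the batched version match `np.average((X - mean) ** 2, weights=w)` even for large `mean` and tiny variance.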
Textualize__rich-305 | 305 | Textualize/rich | null | e651e850161a9b84758214e2a24791340aaeb254 | 2020-09-23T15:12:51Z | diff --git a/CHANGELOG.md b/CHANGELOG.md
index a0a2563d71..e411978ebd 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,7 +5,11 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
-## [7.0.1] - unreleased
+## [7.1.0] - unreleased
+
+### Added
+
+- Added Console.capture method
### Changed
diff --git a/README.md b/README.md
index 7a0fbf6b3a..a4c6c4341b 100644
--- a/README.md
+++ b/README.md
@@ -54,6 +54,15 @@ Rich can be installed in the Python REPL, so that any data structures will be pr

+## Rich Inspect
+
+Rich has an [inspect](https://rich.readthedocs.io/en/latest/reference/init.html?highlight=inspect#rich.inspect) function which can produce a report on any Python object, such as class, instance, or builtin.
+
+```python
+>>> from rich import inspect
+>>> inspect(str, methods=True)
+```
+
## Using the Console
For more control over rich terminal content, import and construct a [Console](https://rich.readthedocs.io/en/latest/reference/console.html#rich.console.Console) object.
diff --git a/docs/source/console.rst b/docs/source/console.rst
index ca92670985..3373ac7111 100644
--- a/docs/source/console.rst
+++ b/docs/source/console.rst
@@ -42,48 +42,6 @@ You can set ``color_system`` to one of the following values:
Be careful when setting a color system, if you set a higher color system than your terminal supports, your text may be unreadable.
-File output
------------
-
-The Console object will write to standard output (i.e. the terminal). You can also tell the Console object to write to another file by setting the ``file`` argument on the constructor -- which should be a file-like object opened for writing text. One use of this capability is to create a Console for writing to standard error by setting file to ``sys.stderr``. Here's an example::
-
- import sys
- from rich.console import Console
- error_console = Console(file=sys.stderr)
- error_console.print("[bold red]This is an error!")
-
-
-Capturing output
-----------------
-
-There may be situations where you want to capture the output from a Console rather than writing it directly to the terminal. You can do this by setting the ``file`` argument to a :py:class:`io.StringIO` instance. Here's an example::
-
- from io import StringIO
- from rich.console import Console
- console = Console(file=StringIO())
- console.print("[bold red]Hello[/] World")
- str_output = console.file.getvalue()
-
-You may also want to set ``force_terminal=True`` on the Console constructor if you want control codes for colour and style in the resulting string.
-
-Terminal detection
-------------------
-
-If Rich detects that it is not writing to a terminal it will strip control codes from the output. If you want to write control codes to a regular file then set ``force_terminal=True`` on the constructor.
-
-Letting Rich auto-detect terminals is useful as it will write plain text when you pipe output to a file or other application.
-
-
-Environment variables
----------------------
-
-Rich respects some standard environment variables.
-
-Setting the environment variable ``TERM`` to ``"dumb"`` or ``"unknown"`` will disable color/style and some features that require moving the cursor, such as progress bars.
-
-If the environment variable ``NO_COLOR`` is set, Rich will disable all color in the output.
-
-
Printing
--------
@@ -220,3 +178,52 @@ The Console class can export anything written to it as either text or html. To e
After you have written content, you can call :meth:`~rich.console.Console.export_text` or :meth:`~rich.console.Console.export_html` to get the console output as a string. You can also call :meth:`~rich.console.Console.save_text` or :meth:`~rich.console.Console.save_html` to write the contents directly to disk.
For examples of the html output generated by Rich Console, see :ref:`appendix-colors`.
+
+
+File output
+-----------
+
+The Console object will write to standard output (i.e. the terminal). You can also tell the Console object to write to another file by setting the ``file`` argument on the constructor -- which should be a file-like object opened for writing text. One use of this capability is to create a Console for writing to standard error by setting file to ``sys.stderr``. Here's an example::
+
+ import sys
+ from rich.console import Console
+ error_console = Console(file=sys.stderr)
+ error_console.print("[bold red]This is an error!")
+
+
+Capturing output
+----------------
+
+There may be situations where you want to *capture* the output from a Console rather than writing it directly to the terminal. You can do this with the :meth:`~rich.console.Console.capture` method which returns a context manager. On exit from this context manager, call :meth:`~rich.console.Capture.get` to return the string that would have been written to the terminal. Here's an example::
+
+ from rich.console import Console
+ console = Console()
+ with console.capture() as capture:
+ console.print("[bold red]Hello[/] World")
+ str_output = capture.get()
+
+An alternative way of capturing output is to set the Console file to a :py:class:`io.StringIO`. This is the recommended method if you are testing console output in unit tests. Here's an example::
+
+ from io import StringIO
+ from rich.console import Console
+ console = Console(file=StringIO())
+ console.print("[bold red]Hello[/] World")
+ str_output = console.file.getvalue()
+
+
+Terminal detection
+------------------
+
+If Rich detects that it is not writing to a terminal it will strip control codes from the output. If you want to write control codes to a regular file then set ``force_terminal=True`` on the constructor.
+
+Letting Rich auto-detect terminals is useful as it will write plain text when you pipe output to a file or other application.
+
+
+Environment variables
+---------------------
+
+Rich respects some standard environment variables.
+
+Setting the environment variable ``TERM`` to ``"dumb"`` or ``"unknown"`` will disable color/style and some features that require moving the cursor, such as progress bars.
+
+If the environment variable ``NO_COLOR`` is set, Rich will disable all color in the output.
diff --git a/rich/color.py b/rich/color.py
index 4dffe6e2d7..4d6398edf4 100644
--- a/rich/color.py
+++ b/rich/color.py
@@ -508,7 +508,7 @@ def blend_rgb(
if __name__ == "__main__": # pragma: no cover
from .console import Console
- from .table import Column, Table
+ from .table import Table
from .text import Text
from . import box
diff --git a/rich/color_triplet.py b/rich/color_triplet.py
index d4da6fdb2c..75c03d2aca 100644
--- a/rich/color_triplet.py
+++ b/rich/color_triplet.py
@@ -22,7 +22,7 @@ def rgb(self) -> str:
"""The color in RGB format.
Returns:
- str: An rgb color, e.g. ``"rgb(100, 23, 255)"``.
+ str: An rgb color, e.g. ``"rgb(100,23,255)"``.
"""
red, green, blue = self
return f"rgb({red},{green},{blue})"
diff --git a/rich/console.py b/rich/console.py
index bb22310c7d..5985e8715d 100644
--- a/rich/console.py
+++ b/rich/console.py
@@ -150,6 +150,38 @@ def __rich_console__(
_null_highlighter = NullHighlighter()
+class CaptureError(Exception):
+ """An error in the Capture context manager."""
+
+
+class Capture:
+ """Context manager to capture the result of printing to the console.
+ See :meth:`~rich.console.Console.capture` for how to use.
+
+ Args:
+ console (Console): A console instance to capture output.
+ """
+
+ def __init__(self, console: "Console") -> None:
+ self._console = console
+ self._result: Optional[str] = None
+
+ def __enter__(self) -> "Capture":
+ self._console.begin_capture()
+ return self
+
+ def __exit__(self, exc_type, exc_val, exc_tb) -> None:
+ self._result = self._console.end_capture()
+
+ def get(self) -> str:
+ """Get the result of the capture."""
+ if self._result is None:
+ raise CaptureError(
+ "Capture result is not available until context manager exits."
+ )
+ return self._result
+
+
class RenderGroup:
"""Takes a group of renderables and returns a renderable object that renders the group.
@@ -451,6 +483,21 @@ def __exit__(self, exc_type, exc_value, traceback) -> None:
"""Exit buffer context."""
self._exit_buffer()
+ def begin_capture(self) -> None:
+ """Begin capturing console output. Call :meth:`end_capture` to exit capture mode and return output."""
+ self._enter_buffer()
+
+ def end_capture(self) -> str:
+ """End capture mode and return captured string.
+
+ Returns:
+ str: Console output.
+ """
+ render_result = self._render_buffer()
+ del self._buffer[:]
+ self._exit_buffer()
+ return render_result
+
@property
def color_system(self) -> Optional[str]:
"""Get color system string.
@@ -541,6 +588,23 @@ def width(self) -> int:
width, _ = self.size
return width
+ def capture(self) -> Capture:
+ """A context manager to *capture* the result of print() or log() in a string,
+ rather than writing it to the console.
+
+ Example:
+ >>> from rich.console import Console
+ >>> console = Console()
+ >>> with console.capture() as capture:
+ ... console.print("[bold magenta]Hello World[/])
+ >>> print(capture.get())
+
+ Returns:
+ Capture: Context manager which will contain the attribute `result` on exit.
+ """
+ capture = Capture(self)
+ return capture
+
def line(self, count: int = 1) -> None:
"""Write new line(s).
@@ -1085,13 +1149,19 @@ def input(
Returns:
str: Text read from stdin.
"""
+ prompt_str = ""
if prompt:
- self.print(prompt, markup=markup, emoji=emoji, end="")
- result = (
- getpass("", stream=stream)
- if password
- else (stream.readline() if stream else input())
- )
+ with self.capture() as capture:
+ self.print(prompt, markup=markup, emoji=emoji, end="")
+ prompt_str = capture.get()
+ if password:
+ result = getpass(prompt_str, stream=stream)
+ else:
+ if stream:
+ self.file.write(prompt_str)
+ result = stream.readline()
+ else:
+ result = input(prompt_str)
return result
def export_text(self, *, clear: bool = True, styles: bool = False) -> str:
diff --git a/rich/containers.py b/rich/containers.py
index 17c24ceb50..8329889385 100644
--- a/rich/containers.py
+++ b/rich/containers.py
@@ -156,7 +156,7 @@ def justify(
tokens.append(word)
if index < len(spaces):
if next_word is None:
- space_style = Style()
+ space_style = Style.empty()
else:
style = word.get_style_at_offset(console, -1)
next_style = next_word.get_style_at_offset(console, 0)
diff --git a/rich/markup.py b/rich/markup.py
index 841541211c..61fb103e19 100644
--- a/rich/markup.py
+++ b/rich/markup.py
@@ -31,6 +31,7 @@ def __str__(self) -> str:
@property
def markup(self) -> str:
+ """Get the string representation of this tag."""
return (
f"[{self.name}]"
if self.parameters is None
diff --git a/rich/style.py b/rich/style.py
index 5ec1824ae3..f0a9562252 100644
--- a/rich/style.py
+++ b/rich/style.py
@@ -162,6 +162,21 @@ def _make_color(color: Union[Color, str]) -> Color:
)
)
+ @classmethod
+ def empty(cls) -> "Style":
+ """Create an 'empty' style, equivalent to Style(), but more performant."""
+ style = cls.__new__(Style)
+ style._ansi = None
+ style._style_definition = None
+ style._color = None
+ style._bgcolor = None
+ style._attributes = 0
+ style._set_attributes = 0
+ style._link = None
+ style._link_id = None
+ style._hash = hash((None, None, 0, 0, None))
+ return style
+
bold = _Bit(0)
dim = _Bit(1)
italic = _Bit(2)
@@ -532,7 +547,6 @@ def render(
rendered = (
f"\x1b]8;id={self._link_id};{self._link}\x1b\\{rendered}\x1b]8;;\x1b\\"
)
-
return rendered
def test(self, text: Optional[str] = None) -> None:
diff --git a/rich/syntax.py b/rich/syntax.py
index 88d1a54b87..5e14642ce0 100644
--- a/rich/syntax.py
+++ b/rich/syntax.py
@@ -136,7 +136,7 @@ def get_style_for_token(self, token_type: TokenType) -> Style:
try:
pygments_style = self._pygments_style_class.style_for_token(token_type)
except KeyError:
- style = Style()
+ style = Style.empty()
else:
color = pygments_style["color"]
bgcolor = pygments_style["bgcolor"]
@@ -159,8 +159,8 @@ class ANSISyntaxTheme(SyntaxTheme):
def __init__(self, style_map: Dict[TokenType, Style]) -> None:
self.style_map = style_map
- self._missing_style = Style()
- self._background_style = Style()
+ self._missing_style = Style.empty()
+ self._background_style = Style.empty()
self._style_cache: Dict[TokenType, Style] = {}
def get_style_for_token(self, token_type: TokenType) -> Style:
@@ -397,7 +397,7 @@ def _get_number_styles(self, console: Console) -> Tuple[Style, Style, Style]:
"""Get background, number, and highlight styles for line numbers."""
background_style = self._get_base_style()
if background_style.transaprent_background:
- return Style(), Style(dim=True), Style()
+ return Style.empty(), Style(dim=True), Style.empty()
if console.color_system in ("256", "truecolor"):
number_style = Style.chain(
background_style,
diff --git a/rich/table.py b/rich/table.py
index b5458af242..cf344dad83 100644
--- a/rich/table.py
+++ b/rich/table.py
@@ -229,7 +229,7 @@ def get_row_style(self, index: int) -> StyleType:
"""Get the current row style."""
if self.row_styles:
return self.row_styles[index % len(self.row_styles)]
- return Style()
+ return Style.empty()
def __rich_measure__(self, console: "Console", max_width: int) -> Measurement:
if self.width is not None:
@@ -643,7 +643,7 @@ def _render(
max_height = 1
cells: List[List[List[Segment]]] = []
if header_row or footer_row:
- row_style = Style()
+ row_style = Style.empty()
else:
row_style = get_style(
get_row_style(index - 1 if show_header else index)
diff --git a/rich/text.py b/rich/text.py
index 93f7a21631..3098ad5da7 100644
--- a/rich/text.py
+++ b/rich/text.py
@@ -522,7 +522,7 @@ def render(self, console: "Console", end: str = "") -> Iterable["Segment"]:
"""
text = self.plain
- null_style = Style()
+ null_style = Style.empty()
enumerated_spans = list(enumerate(self._spans, 1))
get_style = partial(console.get_style, default=null_style)
style_map = {index: get_style(span.style) for index, span in enumerated_spans}
| diff --git a/tests/test_console.py b/tests/test_console.py
index e16b2a056c..1a07321c6b 100644
--- a/tests/test_console.py
+++ b/tests/test_console.py
@@ -5,13 +5,11 @@
import pytest
-from rich.color import Color, ColorSystem
-from rich.console import Console, ConsoleOptions
+from rich.color import ColorSystem
+from rich.console import CaptureError, Console, ConsoleOptions
from rich import errors
from rich.panel import Panel
-from rich.segment import Segment
from rich.style import Style
-from rich.theme import Theme
def test_dumb_terminal():
@@ -160,14 +158,41 @@ def test_control():
assert console.file.getvalue() == "FOOBAR\n"
+def test_capture():
+ console = Console()
+ with console.capture() as capture:
+ with pytest.raises(CaptureError):
+ capture.get()
+ console.print("Hello")
+ assert capture.get() == "Hello\n"
+
+
def test_input(monkeypatch, capsys):
- monkeypatch.setattr("builtins.input", lambda: "bar")
+ def fake_input(prompt):
+ console.file.write(prompt)
+ return "bar"
+
+ monkeypatch.setattr("builtins.input", fake_input)
console = Console()
user_input = console.input(prompt="foo:")
assert capsys.readouterr().out == "foo:"
assert user_input == "bar"
+def test_input_password(monkeypatch, capsys):
+ def fake_input(prompt, stream=None):
+ console.file.write(prompt)
+ return "bar"
+
+ import rich.console
+
+ monkeypatch.setattr(rich.console, "getpass", fake_input)
+ console = Console()
+ user_input = console.input(prompt="foo:", password=True)
+ assert capsys.readouterr().out == "foo:"
+ assert user_input == "bar"
+
+
def test_justify_none():
console = Console(file=io.StringIO(), force_terminal=True, width=20)
console.print("FOO", justify=None)
diff --git a/tests/test_style.py b/tests/test_style.py
index 4195f707ec..7e7b2f0e86 100644
--- a/tests/test_style.py
+++ b/tests/test_style.py
@@ -71,6 +71,10 @@ def test_hash():
assert isinstance(hash(Style()), int)
+def test_empty():
+ assert Style.empty() == Style()
+
+
def test_bool():
assert bool(Style()) is False
assert bool(Style(bold=True)) is True
| diff --git a/CHANGELOG.md b/CHANGELOG.md
index a0a2563d71..e411978ebd 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,7 +5,11 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
-## [7.0.1] - unreleased
+## [7.1.0] - unreleased
+
+### Added
+
+- Added Console.capture method
### Changed
diff --git a/README.md b/README.md
index 7a0fbf6b3a..a4c6c4341b 100644
--- a/README.md
+++ b/README.md
@@ -54,6 +54,15 @@ Rich can be installed in the Python REPL, so that any data structures will be pr

+## Rich Inspect
+
+Rich has an [inspect](https://rich.readthedocs.io/en/latest/reference/init.html?highlight=inspect#rich.inspect) function which can produce a report on any Python object, such as class, instance, or builtin.
+
+```python
+>>> from rich import inspect
+>>> inspect(str, methods=True)
+```
+
## Using the Console
For more control over rich terminal content, import and construct a [Console](https://rich.readthedocs.io/en/latest/reference/console.html#rich.console.Console) object.
diff --git a/docs/source/console.rst b/docs/source/console.rst
index ca92670985..3373ac7111 100644
--- a/docs/source/console.rst
+++ b/docs/source/console.rst
@@ -42,48 +42,6 @@ You can set ``color_system`` to one of the following values:
Be careful when setting a color system, if you set a higher color system than your terminal supports, your text may be unreadable.
-File output
------------
-
-The Console object will write to standard output (i.e. the terminal). You can also tell the Console object to write to another file by setting the ``file`` argument on the constructor -- which should be a file-like object opened for writing text. One use of this capability is to create a Console for writing to standard error by setting file to ``sys.stderr``. Here's an example::
-
- import sys
- from rich.console import Console
- error_console = Console(file=sys.stderr)
- error_console.print("[bold red]This is an error!")
-
-
-Capturing output
-----------------
-
-There may be situations where you want to capture the output from a Console rather than writing it directly to the terminal. You can do this by setting the ``file`` argument to a :py:class:`io.StringIO` instance. Here's an example::
-
- from io import StringIO
- from rich.console import Console
- console = Console(file=StringIO())
- console.print("[bold red]Hello[/] World")
- str_output = console.file.getvalue()
-
-You may also want to set ``force_terminal=True`` on the Console constructor if you want control codes for colour and style in the resulting string.
-
-Terminal detection
-------------------
-
-If Rich detects that it is not writing to a terminal it will strip control codes from the output. If you want to write control codes to a regular file then set ``force_terminal=True`` on the constructor.
-
-Letting Rich auto-detect terminals is useful as it will write plain text when you pipe output to a file or other application.
-
-
-Environment variables
----------------------
-
-Rich respects some standard environment variables.
-
-Setting the environment variable ``TERM`` to ``"dumb"`` or ``"unknown"`` will disable color/style and some features that require moving the cursor, such as progress bars.
-
-If the environment variable ``NO_COLOR`` is set, Rich will disable all color in the output.
-
-
Printing
--------
@@ -220,3 +178,52 @@ The Console class can export anything written to it as either text or html. To e
After you have written content, you can call :meth:`~rich.console.Console.export_text` or :meth:`~rich.console.Console.export_html` to get the console output as a string. You can also call :meth:`~rich.console.Console.save_text` or :meth:`~rich.console.Console.save_html` to write the contents directly to disk.
For examples of the html output generated by Rich Console, see :ref:`appendix-colors`.
+
+
+File output
+-----------
+
+The Console object will write to standard output (i.e. the terminal). You can also tell the Console object to write to another file by setting the ``file`` argument on the constructor -- which should be a file-like object opened for writing text. One use of this capability is to create a Console for writing to standard error by setting file to ``sys.stderr``. Here's an example::
+
+ import sys
+ from rich.console import Console
+ error_console = Console(file=sys.stderr)
+ error_console.print("[bold red]This is an error!")
+
+
+Capturing output
+----------------
+
+There may be situations where you want to *capture* the output from a Console rather than writing it directly to the terminal. You can do this with the :meth:`~rich.console.Console.capture` method which returns a context manager. On exit from this context manager, call :meth:`~rich.console.Capture.get` to return the string that would have been written to the terminal. Here's an example::
+
+ from rich.console import Console
+ console = Console()
+ with console.capture() as capture:
+ console.print("[bold red]Hello[/] World")
+ str_output = capture.get()
+
+An alternative way of capturing output is to set the Console file to a :py:class:`io.StringIO`. This is the recommended method if you are testing console output in unit tests. Here's an example::
+
+ from io import StringIO
+ from rich.console import Console
+ console = Console(file=StringIO())
+ console.print("[bold red]Hello[/] World")
+ str_output = console.file.getvalue()
+
+
+Terminal detection
+------------------
+
+If Rich detects that it is not writing to a terminal it will strip control codes from the output. If you want to write control codes to a regular file then set ``force_terminal=True`` on the constructor.
+
+Letting Rich auto-detect terminals is useful as it will write plain text when you pipe output to a file or other application.
+
+
+Environment variables
+---------------------
+
+Rich respects some standard environment variables.
+
+Setting the environment variable ``TERM`` to ``"dumb"`` or ``"unknown"`` will disable color/style and some features that require moving the cursor, such as progress bars.
+
+If the environment variable ``NO_COLOR`` is set, Rich will disable all color in the output.
| [
{
"components": [
{
"doc": "An error in the Capture context manager.",
"lines": [
153,
154
],
"name": "CaptureError",
"signature": "class CaptureError(Exception):",
"type": "class"
},
{
"doc": "Context manager to captu... | [
"tests/test_console.py::test_dumb_terminal",
"tests/test_console.py::test_16color_terminal",
"tests/test_console.py::test_truecolor_terminal",
"tests/test_console.py::test_console_options_update",
"tests/test_console.py::test_init",
"tests/test_console.py::test_size",
"tests/test_console.py::test_repr",... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Capture
## Type of changes
- [ ] Bug fix
- [x] New feature
- [ ] Documentation / docstrings
- [ ] Tests
- [ ] Other
## Checklist
- [ ] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [ ] I accept that @willmcgugan may be pedantic in the code review.
## Description
Implements a simplified way of capturing output rather than printing.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in rich/console.py]
(definition of CaptureError:)
class CaptureError(Exception):
"""An error in the Capture context manager."""
(definition of Capture:)
class Capture:
"""Context manager to capture the result of printing to the console.
See :meth:`~rich.console.Console.capture` for how to use.
Args:
console (Console): A console instance to capture output."""
(definition of Capture.__init__:)
def __init__(self, console: "Console") -> None:
(definition of Capture.__enter__:)
def __enter__(self) -> "Capture":
(definition of Capture.__exit__:)
def __exit__(self, exc_type, exc_val, exc_tb) -> None:
(definition of Capture.get:)
def get(self) -> str:
"""Get the result of the capture."""
(definition of Console.begin_capture:)
def begin_capture(self) -> None:
"""Begin capturing console output. Call :meth:`end_capture` to exit capture mode and return output."""
(definition of Console.end_capture:)
def end_capture(self) -> str:
"""End capture mode and return captured string.
Returns:
str: Console output."""
(definition of Console.capture:)
def capture(self) -> Capture:
"""A context manager to *capture* the result of print() or log() in a string,
rather than writing it to the console.
Example:
>>> from rich.console import Console
>>> console = Console()
>>> with console.capture() as capture:
... console.print("[bold magenta]Hello World[/]")
>>> print(capture.get())
Returns:
Capture: Context manager which will contain the attribute `result` on exit."""
[end of new definitions in rich/console.py]
[start of new definitions in rich/style.py]
(definition of Style.empty:)
def empty(cls) -> "Style":
"""Create an 'empty' style, equivalent to Style(), but more performant."""
[end of new definitions in rich/style.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | b0661de34bab35af9b4b1d3ba8e28b186b225e84 | |
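The `Capture`/`CaptureError` definitions quoted in this record can be illustrated with a minimal, self-contained sketch. Note the hedge: `ToyConsole` below is a stand-in invented for this example, not Rich's actual `Console`; only the shape of `Capture`, `CaptureError`, `begin_capture`, and `end_capture` follows the definitions above.

```python
import io


class CaptureError(Exception):
    """Raised when the captured result is requested before capture ends."""


class Capture:
    """Sketch of the capture context manager from the definitions above.

    Assumes the console exposes begin_capture()/end_capture().
    """

    def __init__(self, console):
        self._console = console
        self._result = None

    def __enter__(self):
        self._console.begin_capture()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._result = self._console.end_capture()

    def get(self):
        # The result only exists after the context manager has exited.
        if self._result is None:
            raise CaptureError("capture result is not available until exit")
        return self._result


class ToyConsole:
    """Stand-in console (hypothetical, not Rich's Console)."""

    def __init__(self):
        self.file = io.StringIO()
        self._capture_buffer = None

    def begin_capture(self):
        self._capture_buffer = io.StringIO()

    def end_capture(self):
        text = self._capture_buffer.getvalue()
        self._capture_buffer = None
        return text

    def print(self, text):
        # While capturing, writes are diverted into the capture buffer.
        target = self._capture_buffer if self._capture_buffer is not None else self.file
        target.write(text + "\n")

    def capture(self):
        return Capture(self)


console = ToyConsole()
with console.capture() as capture:
    console.print("Hello")
assert capture.get() == "Hello\n"
```

Calling `get()` before the `with` block exits raises `CaptureError`, matching the behaviour exercised by `test_capture` in the test patch above.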
scikit-learn__scikit-learn-18393 | 18,393 | scikit-learn/scikit-learn | 1.0 | 114616d9f6ce9eba7c1aacd3d4a254f868010e25 | 2020-09-13T16:00:48Z | diff --git a/doc/whats_new/v1.0.rst b/doc/whats_new/v1.0.rst
index be894774f5a27..a54abb78730a4 100644
--- a/doc/whats_new/v1.0.rst
+++ b/doc/whats_new/v1.0.rst
@@ -102,6 +102,13 @@ Changelog
- |Enhancement| :func:`datasets.fetch_kddcup99` raises a better message
when the cached file is invalid. :pr:`19669` `Thomas Fan`_.
+:mod:`sklearn.compose`
+......................
+
+- |Enhancement| :class:`compose.ColumnTransformer` now records the output
+ of each transformer in `output_indices_`. :pr:`18393` by
+ :user:`Luca Bittarello <lbittarello>`.
+
:mod:`sklearn.decomposition`
............................
diff --git a/sklearn/compose/_column_transformer.py b/sklearn/compose/_column_transformer.py
index c0444fe2d6cda..da4a2dd93507c 100644
--- a/sklearn/compose/_column_transformer.py
+++ b/sklearn/compose/_column_transformer.py
@@ -134,6 +134,12 @@ class ColumnTransformer(TransformerMixin, _BaseComposition):
sparse matrix or a dense numpy array, which depends on the output
of the individual transformers and the `sparse_threshold` keyword.
+ output_indices_ : dict
+ A dictionary from each transformer name to a slice, where the slice
+ corresponds to indices in the transformed output. This is useful to
+ inspect which transformer is responsible for which transformed
+ feature(s).
+
Notes
-----
The order of the columns in the transformed feature matrix follows the
@@ -408,6 +414,28 @@ def _validate_output(self, result):
"The output of the '{0}' transformer should be 2D (scipy "
"matrix, array, or pandas DataFrame).".format(name))
+ def _record_output_indices(self, Xs):
+ """
+ Record which transformer produced which column.
+ """
+ idx = 0
+ self.output_indices_ = {}
+
+ for transformer_idx, (name, _, _, _) in enumerate(
+ self._iter(fitted=True, replace_strings=True)
+ ):
+ n_columns = Xs[transformer_idx].shape[1]
+ self.output_indices_[name] = slice(idx, idx + n_columns)
+ idx += n_columns
+
+ # `_iter` only generates transformers that have a non empty
+ # selection. Here we set empty slices for transformers that
+ # generate no output, which are safe for indexing
+ all_names = [t[0] for t in self.transformers] + ['remainder']
+ for name in all_names:
+ if name not in self.output_indices_:
+ self.output_indices_[name] = slice(0, 0)
+
def _log_message(self, name, idx, total):
if not self.verbose:
return None
@@ -518,6 +546,7 @@ def fit_transform(self, X, y=None):
self._update_fitted_transformers(transformers)
self._validate_output(Xs)
+ self._record_output_indices(Xs)
return self._hstack(list(Xs))
| diff --git a/sklearn/compose/tests/test_column_transformer.py b/sklearn/compose/tests/test_column_transformer.py
index ae2e25b68210f..f7c1874d4a1b7 100644
--- a/sklearn/compose/tests/test_column_transformer.py
+++ b/sklearn/compose/tests/test_column_transformer.py
@@ -225,7 +225,7 @@ def test_column_transformer_dataframe():
assert len(both.transformers_) == 1
assert both.transformers_[-1][0] != 'remainder'
- # ensure pandas object is passes through
+ # ensure pandas object is passed through
class TransAssert(BaseEstimator):
@@ -310,6 +310,92 @@ def test_column_transformer_empty_columns(pandas, column_selection,
assert isinstance(ct.transformers_[0][1], TransRaise)
+def test_column_transformer_output_indices():
+ # Checks for the output_indices_ attribute
+ X_array = np.arange(6).reshape(3, 2)
+
+ ct = ColumnTransformer([('trans1', Trans(), [0]),
+ ('trans2', Trans(), [1])])
+ X_trans = ct.fit_transform(X_array)
+ assert ct.output_indices_ == {'trans1': slice(0, 1),
+ 'trans2': slice(1, 2),
+ 'remainder': slice(0, 0)}
+ assert_array_equal(X_trans[:, [0]],
+ X_trans[:, ct.output_indices_['trans1']])
+ assert_array_equal(X_trans[:, [1]],
+ X_trans[:, ct.output_indices_['trans2']])
+
+ # test with transformer_weights and multiple columns
+ ct = ColumnTransformer([('trans', Trans(), [0, 1])],
+ transformer_weights={'trans': .1})
+ X_trans = ct.fit_transform(X_array)
+ assert ct.output_indices_ == {'trans': slice(0, 2),
+ 'remainder': slice(0, 0)}
+ assert_array_equal(X_trans[:, [0, 1]],
+ X_trans[:, ct.output_indices_['trans']])
+ assert_array_equal(X_trans[:, []],
+ X_trans[:, ct.output_indices_['remainder']])
+
+ # test case that ensures that the attribute does also work when
+ # a given transformer doesn't have any columns to work on
+ ct = ColumnTransformer([('trans1', Trans(), [0, 1]),
+ ('trans2', TransRaise(), [])])
+ X_trans = ct.fit_transform(X_array)
+ assert ct.output_indices_ == {'trans1': slice(0, 2),
+ 'trans2': slice(0, 0),
+ 'remainder': slice(0, 0)}
+ assert_array_equal(X_trans[:, [0, 1]],
+ X_trans[:, ct.output_indices_['trans1']])
+ assert_array_equal(X_trans[:, []],
+ X_trans[:, ct.output_indices_['trans2']])
+ assert_array_equal(X_trans[:, []],
+ X_trans[:, ct.output_indices_['remainder']])
+
+ ct = ColumnTransformer([('trans', TransRaise(), [])],
+ remainder='passthrough')
+ X_trans = ct.fit_transform(X_array)
+ assert ct.output_indices_ == {'trans': slice(0, 0),
+ 'remainder': slice(0, 2)}
+ assert_array_equal(X_trans[:, []],
+ X_trans[:, ct.output_indices_['trans']])
+ assert_array_equal(X_trans[:, [0, 1]],
+ X_trans[:, ct.output_indices_['remainder']])
+
+
+def test_column_transformer_output_indices_df():
+ # Checks for the output_indices_ attribute with data frames
+ pd = pytest.importorskip('pandas')
+
+ X_df = pd.DataFrame(np.arange(6).reshape(3, 2),
+ columns=['first', 'second'])
+
+ ct = ColumnTransformer([('trans1', Trans(), ['first']),
+ ('trans2', Trans(), ['second'])])
+ X_trans = ct.fit_transform(X_df)
+ assert ct.output_indices_ == {'trans1': slice(0, 1),
+ 'trans2': slice(1, 2),
+ 'remainder': slice(0, 0)}
+ assert_array_equal(X_trans[:, [0]],
+ X_trans[:, ct.output_indices_['trans1']])
+ assert_array_equal(X_trans[:, [1]],
+ X_trans[:, ct.output_indices_['trans2']])
+ assert_array_equal(X_trans[:, []],
+ X_trans[:, ct.output_indices_['remainder']])
+
+ ct = ColumnTransformer([('trans1', Trans(), [0]),
+ ('trans2', Trans(), [1])])
+ X_trans = ct.fit_transform(X_df)
+ assert ct.output_indices_ == {'trans1': slice(0, 1),
+ 'trans2': slice(1, 2),
+ 'remainder': slice(0, 0)}
+ assert_array_equal(X_trans[:, [0]],
+ X_trans[:, ct.output_indices_['trans1']])
+ assert_array_equal(X_trans[:, [1]],
+ X_trans[:, ct.output_indices_['trans2']])
+ assert_array_equal(X_trans[:, []],
+ X_trans[:, ct.output_indices_['remainder']])
+
+
def test_column_transformer_sparse_array():
X_sparse = sparse.eye(3, 2).tocsr()
| diff --git a/doc/whats_new/v1.0.rst b/doc/whats_new/v1.0.rst
index be894774f5a27..a54abb78730a4 100644
--- a/doc/whats_new/v1.0.rst
+++ b/doc/whats_new/v1.0.rst
@@ -102,6 +102,13 @@ Changelog
- |Enhancement| :func:`datasets.fetch_kddcup99` raises a better message
when the cached file is invalid. :pr:`19669` `Thomas Fan`_.
+:mod:`sklearn.compose`
+......................
+
+- |Enhancement| :class:`compose.ColumnTransformer` now records the output
+ of each transformer in `output_indices_`. :pr:`18393` by
+ :user:`Luca Bittarello <lbittarello>`.
+
:mod:`sklearn.decomposition`
............................
| [
{
"components": [
{
"doc": "Record which transformer produced which column.",
"lines": [
417,
437
],
"name": "ColumnTransformer._record_output_indices",
"signature": "def _record_output_indices(self, Xs):",
"type": "function"
}
... | [
"sklearn/compose/tests/test_column_transformer.py::test_column_transformer_output_indices",
"sklearn/compose/tests/test_column_transformer.py::test_column_transformer_output_indices_df"
] | [
"sklearn/compose/tests/test_column_transformer.py::test_column_transformer",
"sklearn/compose/tests/test_column_transformer.py::test_column_transformer_dataframe",
"sklearn/compose/tests/test_column_transformer.py::test_column_transformer_empty_columns[False-list-pandas]",
"sklearn/compose/tests/test_column_t... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
ENH Record output of transformers in ColumnTransformer
This PR adds a fitted attribute to the `ColumnTransformer`, tentatively named `transformers_output_`. It records the columns produced by each transformer (if any). This is useful for debugging as well as modelling (e.g., to speed up partial dependencies). I have extended the existing tests to inspect the new attribute.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in sklearn/compose/_column_transformer.py]
(definition of ColumnTransformer._record_output_indices:)
def _record_output_indices(self, Xs):
"""Record which transformer produced which column."""
[end of new definitions in sklearn/compose/_column_transformer.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 3c732b9f6a77e95dfa6beb154ca2e1e7848b74f9 | |
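The `_record_output_indices` behaviour described in this record reduces to a running-offset computation over per-transformer column counts. The sketch below is plain Python with no scikit-learn dependency; `record_output_indices` and its argument shapes are names invented for this example, but the resulting mapping mirrors the `output_indices_` values asserted in the test patch, including empty slices for transformers that produced no columns.

```python
def record_output_indices(transformer_outputs, all_names):
    """Map each transformer name to the slice of output columns it produced.

    transformer_outputs: list of (name, n_columns) pairs, in output order,
        covering only transformers that actually produced columns.
    all_names: every transformer name (including 'remainder'); names absent
        from transformer_outputs get an empty slice(0, 0), safe for indexing.
    """
    output_indices = {}
    idx = 0
    for name, n_columns in transformer_outputs:
        output_indices[name] = slice(idx, idx + n_columns)
        idx += n_columns
    for name in all_names:
        output_indices.setdefault(name, slice(0, 0))
    return output_indices


indices = record_output_indices(
    [("trans1", 1), ("trans2", 1)],
    ["trans1", "trans2", "remainder"],
)
assert indices == {
    "trans1": slice(0, 1),
    "trans2": slice(1, 2),
    "remainder": slice(0, 0),
}
```

An empty slice such as `slice(0, 0)` still indexes a NumPy array without error, which is why the patch records one for transformers with no selected columns.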
sympy__sympy-20072 | 20,072 | sympy/sympy | 1.7 | b6c328e2f057ba81513d19a1451ace358ecc8233 | 2020-09-11T23:54:49Z | diff --git a/sympy/core/basic.py b/sympy/core/basic.py
index bc45798bd939..f66847f623d6 100644
--- a/sympy/core/basic.py
+++ b/sympy/core/basic.py
@@ -1690,21 +1690,6 @@ def _eval_rewrite(self, pattern, rule, **hints):
return self.func(*args) if hints.get('evaluate', True) else self
- def _accept_eval_derivative(self, s):
- # This method needs to be overridden by array-like objects
- return s._visit_eval_derivative_scalar(self)
-
- def _visit_eval_derivative_scalar(self, base):
- # Base is a scalar
- # Types are (base: scalar, self: scalar)
- return base._eval_derivative(self)
-
- def _visit_eval_derivative_array(self, base):
- # Types are (base: array/matrix, self: scalar)
- # Base is some kind of array/matrix,
- # it should have `.applyfunc(lambda x: x.diff(self)` implemented:
- return base._eval_derivative_array(self)
-
def _eval_derivative_n_times(self, s, n):
# This is the default evaluator for derivatives (as called by `diff`
# and `Derivative`), it will attempt a loop to derive the expression
@@ -1716,7 +1701,7 @@ def _eval_derivative_n_times(self, s, n):
if isinstance(n, (int, Integer)):
obj = self
for i in range(n):
- obj2 = obj._accept_eval_derivative(s)
+ obj2 = obj._eval_derivative(s)
if obj == obj2 or obj2 is None:
break
obj = obj2
diff --git a/sympy/core/expr.py b/sympy/core/expr.py
index 1e5d6ca9e357..73b9c9120040 100644
--- a/sympy/core/expr.py
+++ b/sympy/core/expr.py
@@ -3443,7 +3443,7 @@ def fourier_series(self, limits=None):
def diff(self, *symbols, **assumptions):
assumptions.setdefault("evaluate", True)
- return Derivative(self, *symbols, **assumptions)
+ return _derivative_dispatch(self, *symbols, **assumptions)
###########################################################################
###################### EXPRESSION EXPANSION METHODS #######################
@@ -4016,7 +4016,7 @@ def search_element(self, elem):
from .mul import Mul
from .add import Add
from .power import Pow
-from .function import Derivative, Function
+from .function import Function, _derivative_dispatch
from .mod import Mod
from .exprtools import factor_terms
from .numbers import Integer, Rational
diff --git a/sympy/core/function.py b/sympy/core/function.py
index 96c57a5a2697..1d2184206649 100644
--- a/sympy/core/function.py
+++ b/sympy/core/function.py
@@ -768,14 +768,14 @@ def fdiff(self, argindex=1):
A = self.args[ix]
if A._diff_wrt:
if len(self.args) == 1 or not A.is_Symbol:
- return Derivative(self, A)
+ return _derivative_dispatch(self, A)
for i, v in enumerate(self.args):
if i != ix and A in v.free_symbols:
# it can't be in any other argument's free symbols
# issue 8510
break
else:
- return Derivative(self, A)
+ return _derivative_dispatch(self, A)
# See issue 4624 and issue 4719, 5600 and 8510
D = Dummy('xi_%i' % argindex, dummy_index=hash(A))
@@ -1400,13 +1400,7 @@ def __new__(cls, expr, *variables, **kwargs):
zero = True
break
if zero:
- if isinstance(expr, (MatrixCommon, NDimArray)):
- return expr.zeros(*expr.shape)
- elif isinstance(expr, MatrixExpr):
- from sympy import ZeroMatrix
- return ZeroMatrix(*expr.shape)
- elif expr.is_scalar:
- return S.Zero
+ return cls._get_zero_with_shape_like(expr)
# make the order of symbols canonical
#TODO: check if assumption of discontinuous derivatives exist
@@ -1416,7 +1410,7 @@ def __new__(cls, expr, *variables, **kwargs):
if isinstance(expr, Derivative):
variable_count = list(expr.variable_count) + variable_count
expr = expr.expr
- return Derivative(expr, *variable_count, **kwargs)
+ return _derivative_dispatch(expr, *variable_count, **kwargs)
# we return here if evaluate is False or if there is no
# _eval_derivative method
@@ -1459,11 +1453,7 @@ def __new__(cls, expr, *variables, **kwargs):
# _eval_derivative defined
expr *= old_v.diff(old_v)
- # Evaluate the derivative `n` times. If
- # `_eval_derivative_n_times` is not overridden by the current
- # object, the default in `Basic` will call a loop over
- # `_eval_derivative`:
- obj = expr._eval_derivative_n_times(v, count)
+ obj = cls._dispatch_eval_derivative_n_times(expr, v, count)
if obj is not None and obj.is_zero:
return obj
@@ -1734,7 +1724,7 @@ def _subset(a, b):
old_vars = Counter(dict(reversed(old.variable_count)))
self_vars = Counter(dict(reversed(self.variable_count)))
if _subset(old_vars, self_vars):
- return Derivative(new, *(self_vars - old_vars).items()).canonical
+ return _derivative_dispatch(new, *(self_vars - old_vars).items()).canonical
args = list(self.args)
newargs = list(x._subs(old, new) for x in args)
@@ -1742,7 +1732,7 @@ def _subset(a, b):
# complete replacement of self.expr
# we already checked that the new is valid so we know
# it won't be a problem should it appear in variables
- return Derivative(*newargs)
+ return _derivative_dispatch(*newargs)
if newargs[0] != args[0]:
# case (1) can't change expr by introducing something that is in
@@ -1797,7 +1787,7 @@ def _subset(a, b):
return Subs(Derivative(newe, *vc), *zip(*subs))
# everything was ok
- return Derivative(*newargs)
+ return _derivative_dispatch(*newargs)
def _eval_lseries(self, x, logx, cdir=0):
dx = self.variables
@@ -1912,6 +1902,29 @@ def as_finite_difference(self, points=1, x0=None, wrt=None):
from ..calculus.finite_diff import _as_finite_diff
return _as_finite_diff(self, points, x0, wrt)
+ @classmethod
+ def _get_zero_with_shape_like(cls, expr):
+ return S.Zero
+
+ @classmethod
+ def _dispatch_eval_derivative_n_times(cls, expr, v, count):
+ # Evaluate the derivative `n` times. If
+ # `_eval_derivative_n_times` is not overridden by the current
+ # object, the default in `Basic` will call a loop over
+ # `_eval_derivative`:
+ return expr._eval_derivative_n_times(v, count)
+
+
+def _derivative_dispatch(expr, *variables, **kwargs):
+ from sympy.matrices.common import MatrixCommon
+ from sympy import MatrixExpr
+ from sympy import NDimArray
+ array_types = (MatrixCommon, MatrixExpr, NDimArray, list, tuple, Tuple)
+ if isinstance(expr, array_types) or any(isinstance(i[0], array_types) if isinstance(i, (tuple, list, Tuple)) else isinstance(i, array_types) for i in variables):
+ from sympy.tensor.array.array_derivatives import ArrayDerivative
+ return ArrayDerivative(expr, *variables, **kwargs)
+ return Derivative(expr, *variables, **kwargs)
+
class Lambda(Expr):
"""
@@ -2444,7 +2457,7 @@ def diff(f, *symbols, **kwargs):
if hasattr(f, 'diff'):
return f.diff(*symbols, **kwargs)
kwargs.setdefault('evaluate', True)
- return Derivative(f, *symbols, **kwargs)
+ return _derivative_dispatch(f, *symbols, **kwargs)
def expand(e, deep=True, modulus=None, power_base=True, power_exp=True,
diff --git a/sympy/matrices/expressions/matexpr.py b/sympy/matrices/expressions/matexpr.py
index 83596decde7d..dfb129bf7afd 100644
--- a/sympy/matrices/expressions/matexpr.py
+++ b/sympy/matrices/expressions/matexpr.py
@@ -195,36 +195,17 @@ def _eval_adjoint(self):
from sympy.matrices.expressions.adjoint import Adjoint
return Adjoint(self)
- def _eval_derivative_array(self, x):
- if isinstance(x, MatrixExpr):
- return _matrix_derivative(self, x)
- else:
- return self._eval_derivative(x)
-
def _eval_derivative_n_times(self, x, n):
return Basic._eval_derivative_n_times(self, x, n)
- def _visit_eval_derivative_scalar(self, x):
+ def _eval_derivative(self, x):
# `x` is a scalar:
- if x.has(self):
- return _matrix_derivative(x, self)
+ if self.has(x):
+ # See if there are other methods using it:
+ return super(MatrixExpr, self)._eval_derivative(x)
else:
return ZeroMatrix(*self.shape)
- def _visit_eval_derivative_array(self, x):
- if x.has(self):
- return _matrix_derivative(x, self)
- else:
- from sympy import Derivative
- return Derivative(x, self)
-
- def _accept_eval_derivative(self, s):
- from sympy import MatrixBase, NDimArray
- if isinstance(s, (MatrixBase, NDimArray, MatrixExpr)):
- return s._visit_eval_derivative_array(self)
- else:
- return s._visit_eval_derivative_scalar(self)
-
@classmethod
def _check_dim(cls, dim):
"""Helper function to check invalid matrix dimensions"""
@@ -650,7 +631,7 @@ def _postprocessor(expr):
def _matrix_derivative(expr, x):
- from sympy import Derivative
+ from sympy.tensor.array.array_derivatives import ArrayDerivative
lines = expr._eval_derivative_matrix_lines(x)
parts = [i.build() for i in lines]
@@ -693,7 +674,7 @@ def contract_one_dims(parts):
if rank <= 2:
return Add.fromiter([contract_one_dims(i) for i in parts])
- return Derivative(expr, x)
+ return ArrayDerivative(expr, x)
class MatrixElement(Expr):
diff --git a/sympy/matrices/matrices.py b/sympy/matrices/matrices.py
index f3c5e0fe671e..09005f8acdbc 100644
--- a/sympy/matrices/matrices.py
+++ b/sympy/matrices/matrices.py
@@ -454,9 +454,9 @@ def diff(self, *args, **kwargs):
limit
"""
# XXX this should be handled here rather than in Derivative
- from sympy import Derivative
+ from sympy.tensor.array.array_derivatives import ArrayDerivative
kwargs.setdefault('evaluate', True)
- deriv = Derivative(self, *args, evaluate=True)
+ deriv = ArrayDerivative(self, *args, evaluate=True)
if not isinstance(self, Basic):
return deriv.as_mutable()
else:
@@ -465,18 +465,6 @@ def diff(self, *args, **kwargs):
def _eval_derivative(self, arg):
return self.applyfunc(lambda x: x.diff(arg))
- def _accept_eval_derivative(self, s):
- return s._visit_eval_derivative_array(self)
-
- def _visit_eval_derivative_scalar(self, base):
- # Types are (base: scalar, self: matrix)
- return self.applyfunc(lambda x: base.diff(x))
-
- def _visit_eval_derivative_array(self, base):
- # Types are (base: array/matrix, self: matrix)
- from sympy import derive_by_array
- return derive_by_array(base, self)
-
def integrate(self, *args, **kwargs):
"""Integrate each element of the matrix. ``args`` will
be passed to the ``integrate`` function.
diff --git a/sympy/tensor/array/array_derivatives.py b/sympy/tensor/array/array_derivatives.py
new file mode 100644
index 000000000000..9584c2e3e3ad
--- /dev/null
+++ b/sympy/tensor/array/array_derivatives.py
@@ -0,0 +1,125 @@
+from sympy import Derivative, Integer, Expr
+from sympy.matrices.common import MatrixCommon
+from .ndim_array import NDimArray
+from .arrayop import derive_by_array
+from sympy import MatrixExpr
+from sympy import ZeroMatrix
+from sympy.matrices.expressions.matexpr import _matrix_derivative
+
+
+class ArrayDerivative(Derivative):
+
+ is_scalar = False
+
+ def __new__(cls, expr, *variables, **kwargs):
+ obj = super(ArrayDerivative, cls).__new__(cls, expr, *variables, **kwargs)
+ if isinstance(obj, ArrayDerivative):
+ obj._shape = obj._get_shape()
+ return obj
+
+ def _get_shape(self):
+ shape = ()
+ for v, count in self.variable_count:
+ if hasattr(v, "shape"):
+ for i in range(count):
+ shape += v.shape
+ if hasattr(self.expr, "shape"):
+ shape += self.expr.shape
+ return shape
+
+ @property
+ def shape(self):
+ return self._shape
+
+ @classmethod
+ def _get_zero_with_shape_like(cls, expr):
+ if isinstance(expr, (MatrixCommon, NDimArray)):
+ return expr.zeros(*expr.shape)
+ elif isinstance(expr, MatrixExpr):
+ return ZeroMatrix(*expr.shape)
+ else:
+ raise RuntimeError("Unable to determine shape of array-derivative.")
+
+ @staticmethod
+ def _call_derive_scalar_by_matrix(expr, v): # type: (Expr, MatrixCommon) -> Expr
+ return v.applyfunc(lambda x: expr.diff(x))
+
+ @staticmethod
+ def _call_derive_scalar_by_matexpr(expr, v): # type: (Expr, MatrixExpr) -> Expr
+ if expr.has(v):
+ return _matrix_derivative(expr, v)
+ else:
+ return ZeroMatrix(*v.shape)
+
+ @staticmethod
+ def _call_derive_scalar_by_array(expr, v): # type: (Expr, NDimArray) -> Expr
+ return v.applyfunc(lambda x: expr.diff(x))
+
+ @staticmethod
+ def _call_derive_matrix_by_scalar(expr, v): # type: (MatrixCommon, Expr) -> Expr
+ return _matrix_derivative(expr, v)
+
+ @staticmethod
+ def _call_derive_matexpr_by_scalar(expr, v): # type: (MatrixExpr, Expr) -> Expr
+ return expr._eval_derivative(v)
+
+ @staticmethod
+ def _call_derive_array_by_scalar(expr, v): # type: (NDimArray, Expr) -> Expr
+ return expr.applyfunc(lambda x: x.diff(v))
+
+ @staticmethod
+ def _call_derive_default(expr, v): # type: (Expr, Expr) -> Expr
+ if expr.has(v):
+ return _matrix_derivative(expr, v)
+ else:
+ return None
+
+ @classmethod
+ def _dispatch_eval_derivative_n_times(cls, expr, v, count):
+ # Evaluate the derivative `n` times. If
+ # `_eval_derivative_n_times` is not overridden by the current
+ # object, the default in `Basic` will call a loop over
+ # `_eval_derivative`:
+
+ if not isinstance(count, (int, Integer)) or ((count <= 0) == True):
+ return None
+
+ # TODO: this could be done with multiple-dispatching:
+ if expr.is_scalar:
+ if isinstance(v, MatrixCommon):
+ result = cls._call_derive_scalar_by_matrix(expr, v)
+ elif isinstance(v, MatrixExpr):
+ result = cls._call_derive_scalar_by_matexpr(expr, v)
+ elif isinstance(v, NDimArray):
+ result = cls._call_derive_scalar_by_array(expr, v)
+ elif v.is_scalar:
+ # scalar by scalar has a special
+ return super(ArrayDerivative, cls)._dispatch_eval_derivative_n_times(expr, v, count)
+ else:
+ return None
+ elif v.is_scalar:
+ if isinstance(expr, MatrixCommon):
+ result = cls._call_derive_matrix_by_scalar(expr, v)
+ elif isinstance(expr, MatrixExpr):
+ result = cls._call_derive_matexpr_by_scalar(expr, v)
+ elif isinstance(expr, NDimArray):
+ result = cls._call_derive_array_by_scalar(expr, v)
+ else:
+ return None
+ else:
+ # Both `expr` and `v` are some array/matrix type:
+            if isinstance(expr, MatrixCommon) or isinstance(v, MatrixCommon):
+ result = derive_by_array(expr, v)
+ elif isinstance(expr, MatrixExpr) and isinstance(v, MatrixExpr):
+ result = cls._call_derive_default(expr, v)
+ elif isinstance(expr, MatrixExpr) or isinstance(v, MatrixExpr):
+ # if one expression is a symbolic matrix expression while the other isn't, don't evaluate:
+ return None
+ else:
+ result = derive_by_array(expr, v)
+ if result is None:
+ return None
+ if count == 1:
+ return result
+ else:
+ return cls._dispatch_eval_derivative_n_times(result, v, count - 1)
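The `_dispatch_eval_derivative_n_times` classmethod above replaces the old visitor pattern with a chain of `isinstance` checks on `(expr, v)`. As a minimal illustration of that double-dispatch idea (a hypothetical toy, not SymPy code: numbers stand in for scalars, lists for arrays, and strings name the unevaluated derivatives):

```python
# Hypothetical miniature of the double-dispatch scheme: one function inspects
# the types of (expr, v) and routes to the matching rule, instead of spreading
# the decision across a visitor protocol on several classes.
def derive(expr, v):
    if isinstance(expr, (int, float)):           # "scalar" expression
        if isinstance(v, list):                  # ... by "array" variable
            return ["d(%s)/d(%s)" % (expr, x) for x in v]
        return "d(%s)/d(%s)" % (expr, v)         # ... by "scalar" variable
    if isinstance(expr, list) and not isinstance(v, list):
        return ["d(%s)/d(%s)" % (x, v) for x in expr]  # "array" by "scalar"
    return None                                  # unsupported: stay unevaluated
```

As in the real method, returning `None` signals that the derivative should be left unevaluated.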
diff --git a/sympy/tensor/array/arrayop.py b/sympy/tensor/array/arrayop.py
index 273897b5f4bc..91725b28896c 100644
--- a/sympy/tensor/array/arrayop.py
+++ b/sympy/tensor/array/arrayop.py
@@ -3,9 +3,8 @@
from sympy import S, Tuple, diff, Basic
from sympy.core.compatibility import Iterable
-from sympy.tensor.array import ImmutableDenseNDimArray
from sympy.tensor.array.ndim_array import NDimArray
-from sympy.tensor.array.dense_ndim_array import DenseNDimArray
+from sympy.tensor.array.dense_ndim_array import DenseNDimArray, ImmutableDenseNDimArray
from sympy.tensor.array.sparse_ndim_array import SparseNDimArray
diff --git a/sympy/tensor/array/ndim_array.py b/sympy/tensor/array/ndim_array.py
index 81056949b02f..edf6f522a0ed 100644
--- a/sympy/tensor/array/ndim_array.py
+++ b/sympy/tensor/array/ndim_array.py
@@ -10,6 +10,7 @@
import itertools
+
class NDimArray(Printable):
"""
@@ -64,6 +65,7 @@ class NDimArray(Printable):
"""
_diff_wrt = True
+ is_scalar = False
def __new__(cls, iterable, shape=None, **kwargs):
from sympy.tensor.array import ImmutableDenseNDimArray
@@ -239,38 +241,17 @@ def diff(self, *args, **kwargs):
[[1, 0], [0, y]]
"""
- from sympy import Derivative
+ from sympy.tensor.array.array_derivatives import ArrayDerivative
kwargs.setdefault('evaluate', True)
- return Derivative(self.as_immutable(), *args, **kwargs)
-
- def _accept_eval_derivative(self, s):
- return s._visit_eval_derivative_array(self)
+ return ArrayDerivative(self.as_immutable(), *args, **kwargs)
- def _visit_eval_derivative_scalar(self, base):
+ def _eval_derivative(self, base):
# Types are (base: scalar, self: array)
return self.applyfunc(lambda x: base.diff(x))
- def _visit_eval_derivative_array(self, base):
- # Types are (base: array/matrix, self: array)
- from sympy import derive_by_array
- return derive_by_array(base, self)
-
def _eval_derivative_n_times(self, s, n):
return Basic._eval_derivative_n_times(self, s, n)
- def _eval_derivative(self, arg):
- return self.applyfunc(lambda x: x.diff(arg))
-
- def _eval_derivative_array(self, arg):
- from sympy import derive_by_array
- from sympy import Tuple
- from sympy.matrices.common import MatrixCommon
-
- if isinstance(arg, (Iterable, Tuple, MatrixCommon, NDimArray)):
- return derive_by_array(self, arg)
- else:
- return self.applyfunc(lambda x: x.diff(arg))
-
def applyfunc(self, f):
"""Apply a function to each element of the N-dim array.
| diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index 14e330a05ddc..f799fdf7b1f2 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -4285,6 +4285,13 @@ def test_sympy__tensor__array__arrayop__Flatten():
assert _test_args(fla)
+def test_sympy__tensor__array__array_derivatives__ArrayDerivative():
+ from sympy.tensor.array.array_derivatives import ArrayDerivative
+ A = MatrixSymbol("A", 2, 2)
+ arrder = ArrayDerivative(A, A, evaluate=False)
+ assert _test_args(arrder)
+
+
def test_sympy__tensor__functions__TensorProduct():
from sympy.tensor.functions import TensorProduct
A = MatrixSymbol('A', 3, 3)
diff --git a/sympy/matrices/expressions/tests/test_derivatives.py b/sympy/matrices/expressions/tests/test_derivatives.py
index c0089f082173..dd44a9b32eab 100644
--- a/sympy/matrices/expressions/tests/test_derivatives.py
+++ b/sympy/matrices/expressions/tests/test_derivatives.py
@@ -4,11 +4,12 @@
http://www.math.uwaterloo.ca/~hwolkowi//matrixcookbook.pdf
"""
from sympy import (MatrixSymbol, Inverse, symbols, Determinant, Trace,
- Derivative, sin, exp, cos, tan, log, S, sqrt,
+ sin, exp, cos, tan, log, S, sqrt,
hadamard_product, DiagMatrix, OneMatrix,
HadamardProduct, HadamardPower, KroneckerDelta, Sum,
Rational)
from sympy import MatAdd, Identity, MatMul, ZeroMatrix
+from sympy.tensor.array.array_derivatives import ArrayDerivative
from sympy.matrices.expressions import hadamard_power
k = symbols("k")
@@ -72,17 +73,17 @@ def test_matrix_derivative_by_scalar():
def test_matrix_derivative_non_matrix_result():
# This is a 4-dimensional array:
- assert A.diff(A) == Derivative(A, A)
- assert A.T.diff(A) == Derivative(A.T, A)
- assert (2*A).diff(A) == Derivative(2*A, A)
- assert MatAdd(A, A).diff(A) == Derivative(MatAdd(A, A), A)
- assert (A + B).diff(A) == Derivative(A + B, A) # TODO: `B` can be removed.
+ assert A.diff(A) == ArrayDerivative(A, A)
+ assert A.T.diff(A) == ArrayDerivative(A.T, A)
+ assert (2*A).diff(A) == ArrayDerivative(2*A, A)
+ assert MatAdd(A, A).diff(A) == ArrayDerivative(MatAdd(A, A), A)
+ assert (A + B).diff(A) == ArrayDerivative(A + B, A) # TODO: `B` can be removed.
def test_matrix_derivative_trivial_cases():
# Cookbook example 33:
# TODO: find a way to represent a four-dimensional zero-array:
- assert X.diff(A) == Derivative(X, A)
+ assert X.diff(A) == ArrayDerivative(X, A)
def test_matrix_derivative_with_inverse():
@@ -159,7 +160,7 @@ def test_matrix_derivative_vectors_and_scalars():
def test_matrix_derivatives_of_traces():
expr = Trace(A)*A
- assert expr.diff(A) == Derivative(Trace(A)*A, A)
+ assert expr.diff(A) == ArrayDerivative(Trace(A)*A, A)
assert expr[i, j].diff(A[m, n]).doit() == (
KDelta(i, m)*KDelta(j, n)*Trace(A) +
KDelta(m, n)*A[i, j]
@@ -323,7 +324,7 @@ def test_mixed_deriv_mixed_expressions():
expr = Trace(A)*A
# TODO: this is not yet supported:
- assert expr.diff(A) == Derivative(expr, A)
+ assert expr.diff(A) == ArrayDerivative(expr, A)
expr = Trace(Trace(A)*A)
assert expr.diff(A) == (2*Trace(A))*Identity(k)
diff --git a/sympy/tensor/array/tests/test_array_derivatives.py b/sympy/tensor/array/tests/test_array_derivatives.py
new file mode 100644
index 000000000000..e48eb59b9366
--- /dev/null
+++ b/sympy/tensor/array/tests/test_array_derivatives.py
@@ -0,0 +1,49 @@
+from sympy import Matrix, symbols, MatrixSymbol, NDimArray
+from sympy.matrices.common import MatrixCommon
+from sympy.tensor.array.array_derivatives import ArrayDerivative
+
+x, y, z, t = symbols("x y z t")
+
+m = Matrix([[x, y], [z, t]])
+
+M = MatrixSymbol("M", 3, 2)
+N = MatrixSymbol("N", 4, 3)
+
+
+def test_array_derivative_construction():
+
+ d = ArrayDerivative(x, m, evaluate=False)
+ assert d.shape == (2, 2)
+ expr = d.doit()
+ assert isinstance(expr, MatrixCommon)
+ assert expr.shape == (2, 2)
+
+ d = ArrayDerivative(m, m, evaluate=False)
+ assert d.shape == (2, 2, 2, 2)
+ expr = d.doit()
+ assert isinstance(expr, NDimArray)
+ assert expr.shape == (2, 2, 2, 2)
+
+ d = ArrayDerivative(m, x, evaluate=False)
+ assert d.shape == (2, 2)
+ expr = d.doit()
+ assert isinstance(expr, MatrixCommon)
+ assert expr.shape == (2, 2)
+
+ d = ArrayDerivative(M, N, evaluate=False)
+ assert d.shape == (4, 3, 3, 2)
+ expr = d.doit()
+ assert isinstance(expr, ArrayDerivative)
+ assert expr.shape == (4, 3, 3, 2)
+
+ d = ArrayDerivative(M, (N, 2), evaluate=False)
+ assert d.shape == (4, 3, 4, 3, 3, 2)
+ expr = d.doit()
+ assert isinstance(expr, ArrayDerivative)
+ assert expr.shape == (4, 3, 4, 3, 3, 2)
+
+ d = ArrayDerivative(M.as_explicit(), (N.as_explicit(), 2), evaluate=False)
+ assert d.doit().shape == (4, 3, 4, 3, 3, 2)
+ expr = d.doit()
+ assert isinstance(expr, ArrayDerivative)
+ assert expr.shape == (4, 3, 4, 3, 3, 2)
| [
{
"components": [
{
"doc": "",
"lines": [
1906,
1907
],
"name": "Derivative._get_zero_with_shape_like",
"signature": "def _get_zero_with_shape_like(cls, expr):",
"type": "function"
},
{
"doc": "",
"lines": [
... | [
"test_sympy__tensor__array__array_derivatives__ArrayDerivative",
"test_matrix_derivative_by_scalar",
"test_matrix_derivative_non_matrix_result",
"test_matrix_derivative_trivial_cases",
"test_matrix_derivative_with_inverse",
"test_matrix_derivative_vectors_and_scalars",
"test_matrix_derivatives_of_traces... | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_sympy__assumptions__assume__Predicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers__AnyArgs",
	"test_sympy__assu... | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Adding ArrayDerivative class as subclass of Derivative, will handle d…
Adding ArrayDerivative class as a subclass of Derivative; it will handle derivatives involving non-scalar expressions.
Getting rid of the visitor pattern in order to perform double-dispatch and decide how to handle derivatives.
This PR does not change the logic of SymPy; it's just code clean-up.
#### References to other Issues or PRs
#### Brief description of what is fixed or changed
#### Other comments
#### Release Notes
<!-- BEGIN RELEASE NOTES -->
* tensor
* Adding ArrayDerivative class as subclass of Derivative. This new class handles derivatives involving non-scalar expressions.
<!-- END RELEASE NOTES -->
----------
</request>
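The `_get_shape` helper in the patch above encodes the shape rule for these array derivatives: each differentiation variable contributes its shape `count` times, followed by the expression's shape. A hedged pure-Python mirror of that bookkeeping (`array_derivative_shape` is an illustrative name, not SymPy API):

```python
def array_derivative_shape(expr_shape, variable_counts):
    # variable_counts: sequence of (variable_shape, count) pairs.
    # The variable axes come first, then the expression axes.
    shape = ()
    for var_shape, count in variable_counts:
        shape += var_shape * count  # tuple repetition, once per derivative order
    return shape + expr_shape
```

This reproduces the expectations in the test patch: for `M` of shape (3, 2) and `N` of shape (4, 3), `ArrayDerivative(M, N)` has shape (4, 3, 3, 2) and `ArrayDerivative(M, (N, 2))` has shape (4, 3, 4, 3, 3, 2).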
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/core/function.py]
(definition of Derivative._get_zero_with_shape_like:)
def _get_zero_with_shape_like(cls, expr):
(definition of Derivative._dispatch_eval_derivative_n_times:)
def _dispatch_eval_derivative_n_times(cls, expr, v, count):
(definition of _derivative_dispatch:)
def _derivative_dispatch(expr, *variables, **kwargs):
[end of new definitions in sympy/core/function.py]
[start of new definitions in sympy/matrices/expressions/matexpr.py]
(definition of MatrixExpr._eval_derivative:)
def _eval_derivative(self, x):
[end of new definitions in sympy/matrices/expressions/matexpr.py]
[start of new definitions in sympy/tensor/array/array_derivatives.py]
(definition of ArrayDerivative:)
class ArrayDerivative(Derivative):
(definition of ArrayDerivative.__new__:)
def __new__(cls, expr, *variables, **kwargs):
(definition of ArrayDerivative._get_shape:)
def _get_shape(self):
(definition of ArrayDerivative.shape:)
def shape(self):
(definition of ArrayDerivative._get_zero_with_shape_like:)
def _get_zero_with_shape_like(cls, expr):
(definition of ArrayDerivative._call_derive_scalar_by_matrix:)
def _call_derive_scalar_by_matrix(expr, v):
(definition of ArrayDerivative._call_derive_scalar_by_matexpr:)
def _call_derive_scalar_by_matexpr(expr, v):
(definition of ArrayDerivative._call_derive_scalar_by_array:)
def _call_derive_scalar_by_array(expr, v):
(definition of ArrayDerivative._call_derive_matrix_by_scalar:)
def _call_derive_matrix_by_scalar(expr, v):
(definition of ArrayDerivative._call_derive_matexpr_by_scalar:)
def _call_derive_matexpr_by_scalar(expr, v):
(definition of ArrayDerivative._call_derive_array_by_scalar:)
def _call_derive_array_by_scalar(expr, v):
(definition of ArrayDerivative._call_derive_default:)
def _call_derive_default(expr, v):
(definition of ArrayDerivative._dispatch_eval_derivative_n_times:)
def _dispatch_eval_derivative_n_times(cls, expr, v, count):
[end of new definitions in sympy/tensor/array/array_derivatives.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1b4529a95ef641c2fc15889091b281644069d20e | ||
pvlib__pvlib-python-1045 | 1,045 | pvlib/pvlib-python | 0.7 | f8b9c04c13228ae74fa3be1cfb7e03ed4cf4eaa5 | 2020-09-04T16:52:57Z | diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 077a5e121d..2e882e9a46 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -236,7 +236,10 @@ PV temperature models
temperature.pvsyst_cell
temperature.faiman
temperature.fuentes
+ temperature.ross
pvsystem.PVSystem.sapm_celltemp
+ pvsystem.PVSystem.pvsyst_celltemp
+ pvsystem.PVSystem.faiman_celltemp
Temperature Model Parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/sphinx/source/whatsnew/v0.8.1.rst b/docs/sphinx/source/whatsnew/v0.8.1.rst
index 61eb026205..9b6e2a3800 100644
--- a/docs/sphinx/source/whatsnew/v0.8.1.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.1.rst
@@ -13,6 +13,8 @@ Deprecations
Enhancements
~~~~~~~~~~~~
+* Added :py:func:`pvlib.temperature.ross` for cell temperature modeling using
+ only NOCT. (:pull:`1045`)
Bug fixes
@@ -36,3 +38,5 @@ Requirements
Contributors
~~~~~~~~~~~~
* Kevin Anderson (:ghuser:`kanderso-nrel`)
+* Will Holmgren (:ghuser:`wholmgren`)
+* Cliff Hansen (:ghuser:`cwhanse`)
diff --git a/pvlib/temperature.py b/pvlib/temperature.py
index 1dd32fecf8..1f27180a9a 100644
--- a/pvlib/temperature.py
+++ b/pvlib/temperature.py
@@ -377,9 +377,10 @@ def pvsyst_cell(poa_global, temp_air, wind_speed=1.0, u_c=29.0, u_v=0.0,
def faiman(poa_global, temp_air, wind_speed=1.0, u0=25.0, u1=6.84):
r'''
- Calculate cell or module temperature using the Faiman model. The Faiman
- model uses an empirical heat loss factor model [1]_ and is adopted in the
- IEC 61853 standards [2]_ and [3]_.
+ Calculate cell or module temperature using the Faiman model.
+
+ The Faiman model uses an empirical heat loss factor model [1]_ and is
+ adopted in the IEC 61853 standards [2]_ and [3]_.
Usage of this model in the IEC 61853 standard does not distinguish
between cell and module temperature.
@@ -443,6 +444,53 @@ def faiman(poa_global, temp_air, wind_speed=1.0, u0=25.0, u1=6.84):
return temp_air + temp_difference
+def ross(poa_global, temp_air, noct):
+ r'''
+ Calculate cell temperature using the Ross model.
+
+ The Ross model [1]_ assumes the difference between cell temperature
+ and ambient temperature is proportional to the plane of array irradiance,
+ and assumes wind speed of 1 m/s. The model implicitly assumes steady or
+ slowly changing irradiance conditions.
+
+ Parameters
+ ----------
+ poa_global : numeric
+ Total incident irradiance. [W/m^2]
+
+ temp_air : numeric
+ Ambient dry bulb temperature. [C]
+
+ noct : numeric
+ Nominal operating cell temperature [C], determined at conditions of
+ 800 W/m^2 irradiance, 20 C ambient air temperature and 1 m/s wind.
+
+ Returns
+ -------
+ cell_temperature : numeric
+ Cell temperature. [C]
+
+ Notes
+ -----
+ The Ross model for cell temperature :math:`T_{C}` is given in [1]_ as
+
+ .. math::
+
+ T_{C} = T_{a} + \frac{NOCT - 20}{80} S
+
+ where :math:`S` is the plane of array irradiance in :math:`mW/{cm}^2`.
+ This function expects irradiance in :math:`W/m^2`.
+
+ References
+ ----------
+ .. [1] Ross, R. G. Jr., (1981). "Design Techniques for Flat-Plate
+ Photovoltaic Arrays". 15th IEEE Photovoltaic Specialist Conference,
+ Orlando, FL.
+ '''
+ # factor of 0.1 converts irradiance from W/m2 to mW/cm2
+ return temp_air + (noct - 20.) / 80. * poa_global * 0.1
+
+
def _fuentes_hconv(tave, windmod, tinoct, temp_delta, xlen, tilt,
check_reynold):
# Calculate the convective coefficient as in Fuentes 1987 -- a mixture of
| diff --git a/pvlib/tests/test_temperature.py b/pvlib/tests/test_temperature.py
index 411adcfcca..f8ea3a8bc1 100644
--- a/pvlib/tests/test_temperature.py
+++ b/pvlib/tests/test_temperature.py
@@ -124,6 +124,14 @@ def test_faiman_ndarray():
assert_allclose(expected, result, 3)
+def test_ross():
+ result = temperature.ross(np.array([1000., 600., 1000.]),
+ np.array([20., 40., 60.]),
+ np.array([40., 100., 20.]))
+ expected = np.array([45., 100., 60.])
+ assert_allclose(expected, result)
+
+
def test_faiman_series():
times = pd.date_range(start="2015-01-01", end="2015-01-02", freq="12H")
temps = pd.Series([0, 10, 5], index=times)
| diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 077a5e121d..2e882e9a46 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -236,7 +236,10 @@ PV temperature models
temperature.pvsyst_cell
temperature.faiman
temperature.fuentes
+ temperature.ross
pvsystem.PVSystem.sapm_celltemp
+ pvsystem.PVSystem.pvsyst_celltemp
+ pvsystem.PVSystem.faiman_celltemp
Temperature Model Parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/sphinx/source/whatsnew/v0.8.1.rst b/docs/sphinx/source/whatsnew/v0.8.1.rst
index 61eb026205..9b6e2a3800 100644
--- a/docs/sphinx/source/whatsnew/v0.8.1.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.1.rst
@@ -13,6 +13,8 @@ Deprecations
Enhancements
~~~~~~~~~~~~
+* Added :py:func:`pvlib.temperature.ross` for cell temperature modeling using
+ only NOCT. (:pull:`1045`)
Bug fixes
@@ -36,3 +38,5 @@ Requirements
Contributors
~~~~~~~~~~~~
* Kevin Anderson (:ghuser:`kanderso-nrel`)
+* Will Holmgren (:ghuser:`wholmgren`)
+* Cliff Hansen (:ghuser:`cwhanse`)
| [
{
"components": [
{
"doc": "Calculate cell temperature using the Ross model.\n\nThe Ross model [1]_ assumes the difference between cell temperature\nand ambient temperature is proportional to the plane of array irradiance,\nand assumes wind speed of 1 m/s. The model implicitly assumes steady or\ns... | [
"pvlib/tests/test_temperature.py::test_ross"
] | [
"pvlib/tests/test_temperature.py::test_sapm_cell",
"pvlib/tests/test_temperature.py::test_sapm_module",
"pvlib/tests/test_temperature.py::test_sapm_cell_from_module",
"pvlib/tests/test_temperature.py::test_sapm_ndarray",
"pvlib/tests/test_temperature.py::test_sapm_series",
	"pvlib/tests/test_temperature.py... | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
add Ross temperature model
- [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)
- [x] Tests added
- [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.
- [x] Adds description and name entries in the appropriate "what's new" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).
- [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.
- [x] Pull request is nearly complete and ready for detailed review.
- [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.
A simple cell temperature model using only NOCT as a parameter.
----------
</request>
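The Ross relation proposed here is simple enough to sanity-check standalone. A minimal sketch, independent of pvlib (the 0.1 factor converts irradiance from W/m^2 to the mW/cm^2 units used in Ross's 1981 formulation):

```python
def ross(poa_global, temp_air, noct):
    # T_cell = T_air + (NOCT - 20) / 80 * S, with S in mW/cm^2
    return temp_air + (noct - 20.0) / 80.0 * poa_global * 0.1
```

At the NOCT reference conditions (800 W/m^2 irradiance, 20 C air temperature) the model returns NOCT itself, e.g. `ross(800.0, 20.0, 45.0)` gives 45.0.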
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in pvlib/temperature.py]
(definition of ross:)
def ross(poa_global, temp_air, noct):
"""Calculate cell temperature using the Ross model.
The Ross model [1]_ assumes the difference between cell temperature
and ambient temperature is proportional to the plane of array irradiance,
and assumes wind speed of 1 m/s. The model implicitly assumes steady or
slowly changing irradiance conditions.
Parameters
----------
poa_global : numeric
Total incident irradiance. [W/m^2]
temp_air : numeric
Ambient dry bulb temperature. [C]
noct : numeric
Nominal operating cell temperature [C], determined at conditions of
800 W/m^2 irradiance, 20 C ambient air temperature and 1 m/s wind.
Returns
-------
cell_temperature : numeric
Cell temperature. [C]
Notes
-----
The Ross model for cell temperature :math:`T_{C}` is given in [1]_ as
.. math::
T_{C} = T_{a} + \frac{NOCT - 20}{80} S
where :math:`S` is the plane of array irradiance in :math:`mW/{cm}^2`.
This function expects irradiance in :math:`W/m^2`.
References
----------
.. [1] Ross, R. G. Jr., (1981). "Design Techniques for Flat-Plate
Photovoltaic Arrays". 15th IEEE Photovoltaic Specialist Conference,
Orlando, FL."""
[end of new definitions in pvlib/temperature.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | aa1635bcb40dc83f82e9fd72158670c235bfe99b | |
lark-parser__lark-681 | 681 | lark-parser/lark | null | 4e37b754f56cceccbbc40d1813ba4c2940ea8337 | 2020-08-28T22:40:56Z | diff --git a/lark/lark.py b/lark/lark.py
index d82e723ac..27ec9b05d 100644
--- a/lark/lark.py
+++ b/lark/lark.py
@@ -75,6 +75,7 @@ class LarkOptions(Serialize):
- "resolve" - The parser will automatically choose the simplest derivation
(it chooses consistently: greedy for tokens, non-greedy for rules)
- "explicit": The parser will return all derivations wrapped in "_ambig" tree nodes (i.e. a forest).
+ - "forest": The parser will return the root of the shared packed parse forest.
**=== Misc. / Domain Specific Options ===**
@@ -262,7 +263,7 @@ def __init__(self, grammar, **options):
assert self.options.priority in ('auto', None, 'normal', 'invert'), 'invalid priority option specified: {}. options are auto, none, normal, invert.'.format(self.options.priority)
assert self.options.ambiguity not in ('resolve__antiscore_sum', ), 'resolve__antiscore_sum has been replaced with the option priority="invert"'
- assert self.options.ambiguity in ('resolve', 'explicit', 'auto', )
+ assert self.options.ambiguity in ('resolve', 'explicit', 'forest', 'auto', )
# Parse the grammar file and compose the grammars (TODO)
self.grammar = load_grammar(grammar, self.source, re_module)
@@ -317,8 +318,11 @@ def _build_lexer(self):
def _prepare_callbacks(self):
self.parser_class = get_frontend(self.options.parser, self.options.lexer)
- self._parse_tree_builder = ParseTreeBuilder(self.rules, self.options.tree_class or Tree, self.options.propagate_positions, self.options.keep_all_tokens, self.options.parser!='lalr' and self.options.ambiguity=='explicit', self.options.maybe_placeholders)
- self._callbacks = self._parse_tree_builder.create_callback(self.options.transformer)
+ self._callbacks = None
+ # we don't need these callbacks if we aren't building a tree
+ if self.options.ambiguity != 'forest':
+ self._parse_tree_builder = ParseTreeBuilder(self.rules, self.options.tree_class or Tree, self.options.propagate_positions, self.options.keep_all_tokens, self.options.parser!='lalr' and self.options.ambiguity=='explicit', self.options.maybe_placeholders)
+ self._callbacks = self._parse_tree_builder.create_callback(self.options.transformer)
def _build_parser(self):
self._prepare_callbacks()
diff --git a/lark/parser_frontends.py b/lark/parser_frontends.py
index b993b9f31..eb2b61531 100644
--- a/lark/parser_frontends.py
+++ b/lark/parser_frontends.py
@@ -165,7 +165,8 @@ def __init__(self, lexer_conf, parser_conf, options=None):
resolve_ambiguity = options.ambiguity == 'resolve'
debug = options.debug if options else False
- self.parser = earley.Parser(parser_conf, self.match, resolve_ambiguity=resolve_ambiguity, debug=debug)
+        tree_class = (options.tree_class or Tree) if options.ambiguity != 'forest' else None
+ self.parser = earley.Parser(parser_conf, self.match, resolve_ambiguity=resolve_ambiguity, debug=debug, tree_class=tree_class)
def match(self, term, token):
return term.name == token.type
@@ -179,11 +180,13 @@ def __init__(self, lexer_conf, parser_conf, options=None, **kw):
self._prepare_match(lexer_conf)
resolve_ambiguity = options.ambiguity == 'resolve'
debug = options.debug if options else False
+        tree_class = (options.tree_class or Tree) if options.ambiguity != 'forest' else None
self.parser = xearley.Parser(parser_conf,
self.match,
ignore=lexer_conf.ignore,
resolve_ambiguity=resolve_ambiguity,
debug=debug,
+ tree_class=tree_class,
**kw
)
diff --git a/lark/parsers/earley.py b/lark/parsers/earley.py
index bcb568f85..42542c2f9 100644
--- a/lark/parsers/earley.py
+++ b/lark/parsers/earley.py
@@ -12,20 +12,22 @@
from collections import deque
+from ..tree import Tree
from ..visitors import Transformer_InPlace, v_args
from ..exceptions import UnexpectedEOF, UnexpectedToken
from ..utils import logger
from .grammar_analysis import GrammarAnalyzer
from ..grammar import NonTerminal
from .earley_common import Item, TransitiveItem
-from .earley_forest import ForestToTreeVisitor, ForestSumVisitor, SymbolNode, CompleteForestToAmbiguousTreeVisitor
+from .earley_forest import ForestSumVisitor, SymbolNode, ForestToParseTree
class Parser:
- def __init__(self, parser_conf, term_matcher, resolve_ambiguity=True, debug=False):
+ def __init__(self, parser_conf, term_matcher, resolve_ambiguity=True, debug=False, tree_class=Tree):
analysis = GrammarAnalyzer(parser_conf)
self.parser_conf = parser_conf
self.resolve_ambiguity = resolve_ambiguity
self.debug = debug
+ self.tree_class = tree_class
self.FIRST = analysis.FIRST
self.NULLABLE = analysis.NULLABLE
@@ -312,12 +314,13 @@ def parse(self, stream, start):
elif len(solutions) > 1:
assert False, 'Earley should not generate multiple start symbol items!'
- # Perform our SPPF -> AST conversion using the right ForestVisitor.
- forest_tree_visitor_cls = ForestToTreeVisitor if self.resolve_ambiguity else CompleteForestToAmbiguousTreeVisitor
- forest_tree_visitor = forest_tree_visitor_cls(self.callbacks, self.forest_sum_visitor and self.forest_sum_visitor())
-
- return forest_tree_visitor.visit(solutions[0])
+ if self.tree_class is not None:
+ # Perform our SPPF -> AST conversion
+ transformer = ForestToParseTree(self.tree_class, self.callbacks, self.forest_sum_visitor and self.forest_sum_visitor(), self.resolve_ambiguity)
+ return transformer.transform(solutions[0])
+ # return the root of the SPPF
+ return solutions[0]
class ApplyCallbacks(Transformer_InPlace):
def __init__(self, postprocess):
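The `earley_forest.py` changes that follow replace the hard `ParseError` on grammar recursion with an `on_cycle` callback inside a non-recursive walk. The underlying pattern, an explicit-stack depth-first traversal that tracks entered-but-not-exited nodes, can be sketched generically (a simplified sketch: nodes here are hashable and tracked directly, while the real `ForestVisitor` tracks `id(node)` and dispatches per node type):

```python
def walk(root, children, on_cycle):
    """Post-order walk of a possibly cyclic graph, without recursion."""
    order = []                   # nodes in post-order (children before parents)
    visiting = {root}            # entered but not yet exited
    path = [root]                # current root-to-node path
    stack = [(root, iter(children(root)))]
    while stack:
        node, it = stack[-1]
        child = next(it, None)
        if child is None:        # all children handled: exit the node
            order.append(node)
            visiting.discard(node)
            path.pop()
            stack.pop()
        elif child in visiting:  # re-entered an ancestor: report the cycle
            on_cycle(child, list(path))
        else:
            visiting.add(child)
            path.append(child)
            stack.append((child, iter(children(child))))
    return order
```

On the graph `{'a': ['b'], 'b': ['a', 'c'], 'c': []}` this yields post-order `['c', 'b', 'a']` and reports one cycle at `'a'` with the path `['a', 'b']`, which is the slice that `get_cycle_in_path` would extract.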
diff --git a/lark/parsers/earley_forest.py b/lark/parsers/earley_forest.py
index b39d02e02..c1b2b82f8 100644
--- a/lark/parsers/earley_forest.py
+++ b/lark/parsers/earley_forest.py
@@ -12,10 +12,13 @@
from collections import deque
from operator import attrgetter
from importlib import import_module
+from functools import partial
+from ..parse_tree_builder import AmbiguousIntermediateExpander
+from ..visitors import Discard
+from ..lexer import Token
from ..utils import logger
from ..tree import Tree
-from ..exceptions import ParseError
class ForestNode(object):
pass
@@ -125,6 +128,10 @@ def sort_key(self):
"""
return self.is_empty, -self.priority, self.rule.order
+ @property
+ def children(self):
+ return [x for x in [self.left, self.right] if x is not None]
+
def __iter__(self):
return iter([self.left, self.right])
@@ -153,7 +160,12 @@ class ForestVisitor(object):
Use this as a base when you need to walk the forest.
"""
- __slots__ = ['result']
+
+ def get_cycle_in_path(self, node, path):
+ index = len(path) - 1
+ while id(path[index]) != id(node):
+ index -= 1
+ return path[index:]
def visit_token_node(self, node): pass
def visit_symbol_node_in(self, node): pass
@@ -161,8 +173,18 @@ def visit_symbol_node_out(self, node): pass
def visit_packed_node_in(self, node): pass
def visit_packed_node_out(self, node): pass
+ def on_cycle(self, node, path):
+ """Called when a cycle is encountered. `node` is the node that causes
+ the cycle. `path` the list of nodes being visited: nodes that have been
+        the cycle. `path` is the list of nodes being visited: nodes that have been
+ visit, and the last element is the node visited most recently.
+ `path` should be treated as read-only. The utility function
+ `get_cycle_in_path` may be used to obtain a slice of `path` that only
+ contains the nodes that make up the cycle."""
+ pass
+
def visit(self, root):
- self.result = None
+
# Visiting is a list of IDs of all symbol/intermediate nodes currently in
# the stack. It serves two purposes: to detect when we 'recurse' in and out
# of a symbol/intermediate so that we can process both up and down. Also,
@@ -170,6 +192,10 @@ def visit(self, root):
# to recurse into a node that's already on the stack (infinite recursion).
visiting = set()
+ # a list of nodes that are currently being visited
+ # used for the `on_cycle` callback
+ path = list()
+
# We do not use recursion here to walk the Forest due to the limited
# stack size in python. Therefore input_stack is essentially our stack.
input_stack = deque([root])
@@ -180,7 +206,11 @@ def visit(self, root):
vpni = getattr(self, 'visit_packed_node_in')
vsno = getattr(self, 'visit_symbol_node_out')
vsni = getattr(self, 'visit_symbol_node_in')
+ vino = getattr(self, 'visit_intermediate_node_out', vsno)
+ vini = getattr(self, 'visit_intermediate_node_in', vsni)
vtn = getattr(self, 'visit_token_node')
+ oc = getattr(self, 'on_cycle')
+
while input_stack:
current = next(reversed(input_stack))
try:
@@ -196,7 +226,8 @@ def visit(self, root):
continue
if id(next_node) in visiting:
- raise ParseError("Infinite recursion in grammar, in rule '%s'!" % next_node.s.name)
+ oc(next_node, path)
+ continue
input_stack.append(next_node)
continue
@@ -208,25 +239,134 @@ def visit(self, root):
current_id = id(current)
if current_id in visiting:
- if isinstance(current, PackedNode): vpno(current)
- else: vsno(current)
+ if isinstance(current, PackedNode):
+ vpno(current)
+ elif current.is_intermediate:
+ vino(current)
+ else:
+ vsno(current)
input_stack.pop()
+ path.pop()
visiting.remove(current_id)
continue
else:
visiting.add(current_id)
- if isinstance(current, PackedNode): next_node = vpni(current)
- else: next_node = vsni(current)
+ path.append(current)
+ if isinstance(current, PackedNode):
+ next_node = vpni(current)
+ elif current.is_intermediate:
+ next_node = vini(current)
+ else:
+ next_node = vsni(current)
if next_node is None:
continue
- if id(next_node) in visiting:
- raise ParseError("Infinite recursion in grammar!")
+ if not isinstance(next_node, ForestNode) and \
+ not isinstance(next_node, Token):
+ next_node = iter(next_node)
+ elif id(next_node) in visiting:
+ oc(next_node, path)
+ continue
input_stack.append(next_node)
continue
- return self.result
+class ForestTransformer(ForestVisitor):
+ """The base class for a bottom-up forest transformation.
+ Transformations are applied via inheritance and overriding of the
+ following methods:
+
+ transform_symbol_node
+ transform_intermediate_node
+ transform_packed_node
+ transform_token_node
+
+ `transform_token_node` receives a Token as an argument.
+ All other methods receive the node that is being transformed and
+ a list of the results of the transformations of that node's children.
+ The return value of these methods are the resulting transformations.
+
+ If `Discard` is raised in a transformation, no data from that node
+ will be passed to its parent's transformation.
+ """
+
+ def __init__(self):
+ # results of transformations
+ self.data = dict()
+ # used to track parent nodes
+ self.node_stack = deque()
+
+ def transform(self, root):
+ """Perform a transformation on a Forest."""
+ self.node_stack.append('result')
+ self.data['result'] = []
+ self.visit(root)
+ assert len(self.data['result']) <= 1
+ if self.data['result']:
+ return self.data['result'][0]
+
+ def transform_symbol_node(self, node, data):
+ return node
+
+ def transform_intermediate_node(self, node, data):
+ return node
+
+ def transform_packed_node(self, node, data):
+ return node
+
+ def transform_token_node(self, node):
+ return node
+
+ def visit_symbol_node_in(self, node):
+ self.node_stack.append(id(node))
+ self.data[id(node)] = []
+ return node.children
+
+ def visit_packed_node_in(self, node):
+ self.node_stack.append(id(node))
+ self.data[id(node)] = []
+ return node.children
+
+ def visit_token_node(self, node):
+ try:
+ transformed = self.transform_token_node(node)
+ except Discard:
+ pass
+ else:
+ self.data[self.node_stack[-1]].append(transformed)
+
+ def visit_symbol_node_out(self, node):
+ self.node_stack.pop()
+ try:
+ transformed = self.transform_symbol_node(node, self.data[id(node)])
+ except Discard:
+ pass
+ else:
+ self.data[self.node_stack[-1]].append(transformed)
+ finally:
+ del self.data[id(node)]
+
+ def visit_intermediate_node_out(self, node):
+ self.node_stack.pop()
+ try:
+ transformed = self.transform_intermediate_node(node, self.data[id(node)])
+ except Discard:
+ pass
+ else:
+ self.data[self.node_stack[-1]].append(transformed)
+ finally:
+ del self.data[id(node)]
+
+ def visit_packed_node_out(self, node):
+ self.node_stack.pop()
+ try:
+ transformed = self.transform_packed_node(node, self.data[id(node)])
+ except Discard:
+ pass
+ else:
+ self.data[self.node_stack[-1]].append(transformed)
+ finally:
+ del self.data[id(node)]
class ForestSumVisitor(ForestVisitor):
"""
@@ -259,178 +399,203 @@ def visit_packed_node_out(self, node):
def visit_symbol_node_out(self, node):
node.priority = max(child.priority for child in node.children)
-class ForestToTreeVisitor(ForestVisitor):
+class PackedData():
+    """Used in transformations of packed nodes to distinguish the data
+ that comes from the left child and the right child.
"""
- A Forest visitor which converts an SPPF forest to an unambiguous AST.
-
- The implementation in this visitor walks only the first ambiguous child
- of each symbol node. When it finds an ambiguous symbol node it first
- calls the forest_sum_visitor implementation to sort the children
- into preference order using the algorithms defined there; so the first
- child should always be the highest preference. The forest_sum_visitor
- implementation should be another ForestVisitor which sorts the children
- according to some priority mechanism.
+
+ def __init__(self, node, data):
+ self.left = None
+ self.right = None
+ if data:
+ if node.left:
+ self.left = data[0]
+ if len(data) > 1 and node.right:
+ self.right = data[1]
+ elif node.right:
+ self.right = data[0]
+
+class ForestToParseTree(ForestTransformer):
+ """Used by the earley parser when ambiguity equals 'resolve' or
+ 'explicit'. Transforms an SPPF into an (ambiguous) parse tree.
+
+ tree_class: The Tree class to use for construction
+ callbacks: A dictionary of rules to functions that output a tree
+ prioritizer: A ForestVisitor that manipulates the priorities of
+ ForestNodes
+ resolve_ambiguity: If True, ambiguities will be resolved based on
+ priorities. Otherwise, `_ambig` nodes will be in the resulting
+ tree.
"""
- __slots__ = ['forest_sum_visitor', 'callbacks', 'output_stack']
- def __init__(self, callbacks, forest_sum_visitor = None):
- assert callbacks
- self.forest_sum_visitor = forest_sum_visitor
+
+ def __init__(self, tree_class=Tree, callbacks=dict(), prioritizer=ForestSumVisitor(), resolve_ambiguity=True):
+ super(ForestToParseTree, self).__init__()
+ self.tree_class = tree_class
self.callbacks = callbacks
+ self.prioritizer = prioritizer
+ self.resolve_ambiguity = resolve_ambiguity
+ self._on_cycle_retreat = False
+
+ def on_cycle(self, node, path):
+ logger.warning("Cycle encountered in the SPPF at node: %s. "
+ "As infinite ambiguities cannot be represented in a tree, "
+ "this family of derivations will be discarded.", node)
+ if self.resolve_ambiguity:
+ # TODO: choose a different path if cycle is encountered
+ logger.warning("At this time, using ambiguity resolution for SPPFs "
+ "with cycles may result in None being returned.")
+ self._on_cycle_retreat = True
+
+ def _check_cycle(self, node):
+ if self._on_cycle_retreat:
+ raise Discard
- def visit(self, root):
- self.output_stack = deque()
- return super(ForestToTreeVisitor, self).visit(root)
+ def _collapse_ambig(self, children):
+ new_children = []
+ for child in children:
+ if hasattr(child, 'data') and child.data == '_ambig':
+ new_children += child.children
+ else:
+ new_children.append(child)
+ return new_children
- def visit_token_node(self, node):
- self.output_stack[-1].append(node)
+ def _call_rule_func(self, node, data):
+ # called when transforming children of symbol nodes
+ # data is a list of trees or tokens that correspond to the
+ # symbol's rule expansion
+ return self.callbacks[node.rule](data)
+
+ def _call_ambig_func(self, node, data):
+ # called when transforming a symbol node
+ # data is a list of trees where each tree's data is
+ # equal to the name of the symbol or one of its aliases.
+ if len(data) > 1:
+ return self.tree_class('_ambig', data)
+ elif data:
+ return data[0]
+ raise Discard
+
+ def transform_symbol_node(self, node, data):
+ self._check_cycle(node)
+ data = self._collapse_ambig(data)
+ return self._call_ambig_func(node, data)
+
+ def transform_intermediate_node(self, node, data):
+ self._check_cycle(node)
+ if len(data) > 1:
+ children = [self.tree_class('_inter', c) for c in data]
+ return self.tree_class('_iambig', children)
+ return data[0]
+
+ def transform_packed_node(self, node, data):
+ self._check_cycle(node)
+ children = list()
+ assert len(data) <= 2
+ data = PackedData(node, data)
+ if data.left is not None:
+ if node.left.is_intermediate and isinstance(data.left, list):
+ children += data.left
+ else:
+ children.append(data.left)
+ if data.right is not None:
+ children.append(data.right)
+ if node.parent.is_intermediate:
+ return children
+ return self._call_rule_func(node, children)
def visit_symbol_node_in(self, node):
- if self.forest_sum_visitor and node.is_ambiguous and isinf(node.priority):
- self.forest_sum_visitor.visit(node)
- return next(iter(node.children))
+ self._on_cycle_retreat = False
+ super(ForestToParseTree, self).visit_symbol_node_in(node)
+ if self.prioritizer and node.is_ambiguous and isinf(node.priority):
+ self.prioritizer.visit(node)
+ if self.resolve_ambiguity:
+ return node.children[0]
+ return node.children
def visit_packed_node_in(self, node):
- if not node.parent.is_intermediate:
- self.output_stack.append([])
- return iter([node.left, node.right])
-
- def visit_packed_node_out(self, node):
- if not node.parent.is_intermediate:
- result = self.callbacks[node.rule](self.output_stack.pop())
- if self.output_stack:
- self.output_stack[-1].append(result)
- else:
- self.result = result
-
-class ForestToAmbiguousTreeVisitor(ForestToTreeVisitor):
- """
- A Forest visitor which converts an SPPF forest to an ambiguous AST.
-
- Because of the fundamental disparity between what can be stored in
- an SPPF and what can be stored in a Tree; this implementation is not
- complete. It correctly deals with ambiguities that occur on symbol nodes only,
- and cannot deal with ambiguities that occur on intermediate nodes.
-
- Usually, most parsers can be rewritten to avoid intermediate node
- ambiguities. Also, this implementation could be fixed, however
- the code to handle intermediate node ambiguities is messy and
- would not be performant. It is much better not to use this and
- instead to correctly disambiguate the forest and only store unambiguous
- parses in Trees. It is here just to provide some parity with the
- old ambiguity='explicit'.
-
- This is mainly used by the test framework, to make it simpler to write
- tests ensuring the SPPF contains the right results.
- """
- def __init__(self, callbacks, forest_sum_visitor = ForestSumVisitor):
- super(ForestToAmbiguousTreeVisitor, self).__init__(callbacks, forest_sum_visitor)
+ self._on_cycle_retreat = False
+ return super(ForestToParseTree, self).visit_packed_node_in(node)
def visit_token_node(self, node):
- self.output_stack[-1].children.append(node)
-
- def visit_symbol_node_in(self, node):
- if node.is_ambiguous:
- if self.forest_sum_visitor and isinf(node.priority):
- self.forest_sum_visitor.visit(node)
- if node.is_intermediate:
- # TODO Support ambiguous intermediate nodes!
- logger.warning("Ambiguous intermediate node in the SPPF: %s. "
- "Lark does not currently process these ambiguities; resolving with the first derivation.", node)
- return next(iter(node.children))
- else:
- self.output_stack.append(Tree('_ambig', []))
+ self._on_cycle_retreat = False
+ return super(ForestToParseTree, self).visit_token_node(node)
- return iter(node.children)
+def handles_ambiguity(func):
+ """Decorator for methods of subclasses of TreeForestTransformer.
+ Denotes that the method should receive a list of transformed derivations."""
+ func.handles_ambiguity = True
+ return func
- def visit_symbol_node_out(self, node):
- if not node.is_intermediate and node.is_ambiguous:
- result = self.output_stack.pop()
- if self.output_stack:
- self.output_stack[-1].children.append(result)
- else:
- self.result = result
+class TreeForestTransformer(ForestToParseTree):
+ """A ForestTransformer with a tree-Transformer-like interface.
+ By default, it will construct a tree.
- def visit_packed_node_in(self, node):
- if not node.parent.is_intermediate:
- self.output_stack.append(Tree('drv', []))
- return iter([node.left, node.right])
+ Methods provided via inheritance are called based on the rule/symbol
+ names of nodes in the forest.
- def visit_packed_node_out(self, node):
- if not node.parent.is_intermediate:
- result = self.callbacks[node.rule](self.output_stack.pop().children)
- if self.output_stack:
- self.output_stack[-1].children.append(result)
- else:
- self.result = result
-
-class CompleteForestToAmbiguousTreeVisitor(ForestToTreeVisitor):
- """
- An augmented version of ForestToAmbiguousTreeVisitor that is designed to
- handle ambiguous intermediate nodes as well as ambiguous symbol nodes.
+ Methods that act on rules will receive a list of the results of the
+ transformations of the rule's children. By default, trees and tokens.
- On the way down:
+ Methods that act on tokens will receive a Token.
- - When an ambiguous intermediate node is encountered, an '_iambig' node
- is inserted into the tree.
- - Each possible derivation of an ambiguous intermediate node is represented
- by an '_inter' node added as a child of the corresponding '_iambig' node.
+ Alternatively, methods that act on rules may be annotated with
+ `handles_ambiguity`. In this case, the function will receive a list
+ of all the transformations of all the derivations of the rule.
+ By default, a list of trees where each tree.data is equal to the
+ rule name or one of its aliases.
- On the way up, these nodes are propagated up the tree and collapsed
- into a single '_ambig' node for the nearest symbol node ancestor.
- This is achieved by the AmbiguousIntermediateExpander contained in
- the callbacks.
+ Non-tree transformations are made possible by override of
+ `__default__`, `__default_token__`, and `__default_ambig__`.
"""
- def _collapse_ambig(self, children):
- new_children = []
- for child in children:
- if child.data == '_ambig':
- new_children += child.children
- else:
- new_children.append(child)
- return new_children
+ def __init__(self, tree_class=Tree, prioritizer=ForestSumVisitor(), resolve_ambiguity=True):
+ super(TreeForestTransformer, self).__init__(tree_class, dict(), prioritizer, resolve_ambiguity)
- def visit_token_node(self, node):
- self.output_stack[-1].children.append(node)
+ def __default__(self, name, data):
+ """Default operation on tree (for override).
- def visit_symbol_node_in(self, node):
- if node.is_ambiguous:
- if self.forest_sum_visitor and isinf(node.priority):
- self.forest_sum_visitor.visit(node)
- if node.is_intermediate:
- self.output_stack.append(Tree('_iambig', []))
- else:
- self.output_stack.append(Tree('_ambig', []))
- return iter(node.children)
+ Returns a tree with name with data as children.
+ """
+ return self.tree_class(name, data)
- def visit_symbol_node_out(self, node):
- if node.is_ambiguous:
- result = self.output_stack.pop()
- if not node.is_intermediate:
- result = Tree('_ambig', self._collapse_ambig(result.children))
- if self.output_stack:
- self.output_stack[-1].children.append(result)
- else:
- self.result = result
+ def __default_ambig__(self, name, data):
+ """Default operation on ambiguous rule (for override).
- def visit_packed_node_in(self, node):
- if not node.parent.is_intermediate:
- self.output_stack.append(Tree('drv', []))
- elif node.parent.is_ambiguous:
- self.output_stack.append(Tree('_inter', []))
- return iter([node.left, node.right])
+        Wraps data in an '_ambig' node if it contains more than
+        one element.
+ """
+ if len(data) > 1:
+ return self.tree_class('_ambig', data)
+ elif data:
+ return data[0]
+ raise Discard
- def visit_packed_node_out(self, node):
- if not node.parent.is_intermediate:
- result = self.callbacks[node.rule](self.output_stack.pop().children)
- elif node.parent.is_ambiguous:
- result = self.output_stack.pop()
- else:
- return
- if self.output_stack:
- self.output_stack[-1].children.append(result)
- else:
- self.result = result
+ def __default_token__(self, node):
+ """Default operation on Token (for override).
+
+ Returns node
+ """
+ return node
+
+ def transform_token_node(self, node):
+ return getattr(self, node.type, self.__default_token__)(node)
+
+ def _call_rule_func(self, node, data):
+ name = node.rule.alias or node.rule.options.template_source or node.rule.origin.name
+ user_func = getattr(self, name, self.__default__)
+ if user_func == self.__default__ or hasattr(user_func, 'handles_ambiguity'):
+ user_func = partial(self.__default__, name)
+ if not self.resolve_ambiguity:
+ wrapper = partial(AmbiguousIntermediateExpander, self.tree_class)
+ user_func = wrapper(user_func)
+ return user_func(data)
+
+ def _call_ambig_func(self, node, data):
+ name = node.s.name
+ user_func = getattr(self, name, self.__default_ambig__)
+ if user_func == self.__default_ambig__ or not hasattr(user_func, 'handles_ambiguity'):
+ user_func = partial(self.__default_ambig__, name)
+ return user_func(data)
class ForestToPyDotVisitor(ForestVisitor):
"""
diff --git a/lark/parsers/xearley.py b/lark/parsers/xearley.py
index 855625a96..256fc2c47 100644
--- a/lark/parsers/xearley.py
+++ b/lark/parsers/xearley.py
@@ -16,6 +16,7 @@
from collections import defaultdict
+from ..tree import Tree
from ..exceptions import UnexpectedCharacters
from ..lexer import Token
from ..grammar import Terminal
@@ -24,8 +25,8 @@
class Parser(BaseParser):
- def __init__(self, parser_conf, term_matcher, resolve_ambiguity=True, ignore = (), complete_lex = False, debug=False):
- BaseParser.__init__(self, parser_conf, term_matcher, resolve_ambiguity, debug)
+ def __init__(self, parser_conf, term_matcher, resolve_ambiguity=True, ignore = (), complete_lex = False, debug=False, tree_class=Tree):
+ BaseParser.__init__(self, parser_conf, term_matcher, resolve_ambiguity, debug, tree_class)
self.ignore = [Terminal(t) for t in ignore]
self.complete_lex = complete_lex
@@ -148,4 +149,4 @@ def scan(i, to_scan):
## Column is now the final column in the parse.
assert i == len(columns)-1
- return to_scan
\ No newline at end of file
+ return to_scan
| diff --git a/tests/test_tree_forest_transformer.py b/tests/test_tree_forest_transformer.py
new file mode 100644
index 000000000..e7ca56ba9
--- /dev/null
+++ b/tests/test_tree_forest_transformer.py
@@ -0,0 +1,247 @@
+from __future__ import absolute_import
+
+import unittest
+
+from lark import Lark
+from lark.lexer import Token
+from lark.tree import Tree
+from lark.visitors import Visitor, Transformer, Discard
+from lark.parsers.earley_forest import TreeForestTransformer, handles_ambiguity
+
+class TestTreeForestTransformer(unittest.TestCase):
+
+ grammar = """
+ start: ab bc cd
+ !ab: "A" "B"?
+ !bc: "B"? "C"?
+ !cd: "C"? "D"
+ """
+
+ parser = Lark(grammar, parser='earley', ambiguity='forest')
+ forest = parser.parse("ABCD")
+
+ def test_identity_resolve_ambiguity(self):
+ l = Lark(self.grammar, parser='earley', ambiguity='resolve')
+ tree1 = l.parse("ABCD")
+ tree2 = TreeForestTransformer(resolve_ambiguity=True).transform(self.forest)
+ self.assertEqual(tree1, tree2)
+
+ def test_identity_explicit_ambiguity(self):
+ l = Lark(self.grammar, parser='earley', ambiguity='explicit')
+ tree1 = l.parse("ABCD")
+ tree2 = TreeForestTransformer(resolve_ambiguity=False).transform(self.forest)
+ self.assertEqual(tree1, tree2)
+
+ def test_tree_class(self):
+
+ class CustomTree(Tree):
+ pass
+
+ class TreeChecker(Visitor):
+ def __default__(self, tree):
+ assert isinstance(tree, CustomTree)
+
+ tree = TreeForestTransformer(resolve_ambiguity=False, tree_class=CustomTree).transform(self.forest)
+ TreeChecker().visit(tree)
+
+ def test_token_calls(self):
+
+ visited_A = False
+ visited_B = False
+ visited_C = False
+ visited_D = False
+
+ class CustomTransformer(TreeForestTransformer):
+ def A(self, node):
+ assert node.type == 'A'
+ nonlocal visited_A
+ visited_A = True
+ def B(self, node):
+ assert node.type == 'B'
+ nonlocal visited_B
+ visited_B = True
+ def C(self, node):
+ assert node.type == 'C'
+ nonlocal visited_C
+ visited_C = True
+ def D(self, node):
+ assert node.type == 'D'
+ nonlocal visited_D
+ visited_D = True
+
+ tree = CustomTransformer(resolve_ambiguity=False).transform(self.forest)
+ self.assertTrue(visited_A)
+ self.assertTrue(visited_B)
+ self.assertTrue(visited_C)
+ self.assertTrue(visited_D)
+
+ def test_default_token(self):
+
+ token_count = 0
+
+ class CustomTransformer(TreeForestTransformer):
+ def __default_token__(self, node):
+ nonlocal token_count
+ token_count += 1
+ assert isinstance(node, Token)
+
+ tree = CustomTransformer(resolve_ambiguity=True).transform(self.forest)
+ self.assertEqual(token_count, 4)
+
+ def test_rule_calls(self):
+
+ visited_start = False
+ visited_ab = False
+ visited_bc = False
+ visited_cd = False
+
+ class CustomTransformer(TreeForestTransformer):
+ def start(self, data):
+ nonlocal visited_start
+ visited_start = True
+ def ab(self, data):
+ nonlocal visited_ab
+ visited_ab = True
+ def bc(self, data):
+ nonlocal visited_bc
+ visited_bc = True
+ def cd(self, data):
+ nonlocal visited_cd
+ visited_cd = True
+
+ tree = CustomTransformer(resolve_ambiguity=False).transform(self.forest)
+ self.assertTrue(visited_start)
+ self.assertTrue(visited_ab)
+ self.assertTrue(visited_bc)
+ self.assertTrue(visited_cd)
+
+ def test_default_rule(self):
+
+ rule_count = 0
+
+ class CustomTransformer(TreeForestTransformer):
+ def __default__(self, name, data):
+ nonlocal rule_count
+ rule_count += 1
+
+ tree = CustomTransformer(resolve_ambiguity=True).transform(self.forest)
+ self.assertEqual(rule_count, 4)
+
+ def test_default_ambig(self):
+
+ ambig_count = 0
+
+ class CustomTransformer(TreeForestTransformer):
+ def __default_ambig__(self, name, data):
+ nonlocal ambig_count
+ if len(data) > 1:
+ ambig_count += 1
+
+ tree = CustomTransformer(resolve_ambiguity=False).transform(self.forest)
+ self.assertEqual(ambig_count, 1)
+
+ def test_handles_ambiguity(self):
+
+ class CustomTransformer(TreeForestTransformer):
+ @handles_ambiguity
+ def start(self, data):
+ assert isinstance(data, list)
+ assert len(data) == 4
+ for tree in data:
+ assert tree.data == 'start'
+ return 'handled'
+
+ @handles_ambiguity
+ def ab(self, data):
+ assert isinstance(data, list)
+ assert len(data) == 1
+ assert data[0].data == 'ab'
+
+ tree = CustomTransformer(resolve_ambiguity=False).transform(self.forest)
+ self.assertEqual(tree, 'handled')
+
+ def test_discard(self):
+
+ class CustomTransformer(TreeForestTransformer):
+ def bc(self, data):
+ raise Discard
+
+ def D(self, node):
+ raise Discard
+
+ class TreeChecker(Transformer):
+ def bc(self, children):
+ assert False
+
+ def D(self, token):
+ assert False
+
+ tree = CustomTransformer(resolve_ambiguity=False).transform(self.forest)
+ TreeChecker(visit_tokens=True).transform(tree)
+
+ def test_aliases(self):
+
+ visited_ambiguous = False
+ visited_full = False
+
+ class CustomTransformer(TreeForestTransformer):
+ @handles_ambiguity
+ def start(self, data):
+ for tree in data:
+ assert tree.data == 'ambiguous' or tree.data == 'full'
+
+ def ambiguous(self, data):
+ nonlocal visited_ambiguous
+ visited_ambiguous = True
+ assert len(data) == 3
+ assert data[0].data == 'ab'
+ assert data[1].data == 'bc'
+ assert data[2].data == 'cd'
+ return self.tree_class('ambiguous', data)
+
+ def full(self, data):
+ nonlocal visited_full
+ visited_full = True
+ assert len(data) == 1
+ assert data[0].data == 'abcd'
+ return self.tree_class('full', data)
+
+ grammar = """
+ start: ab bc cd -> ambiguous
+ | abcd -> full
+ !ab: "A" "B"?
+ !bc: "B"? "C"?
+ !cd: "C"? "D"
+ !abcd: "ABCD"
+ """
+
+ l = Lark(grammar, parser='earley', ambiguity='forest')
+ forest = l.parse('ABCD')
+ tree = CustomTransformer(resolve_ambiguity=False).transform(forest)
+ self.assertTrue(visited_ambiguous)
+ self.assertTrue(visited_full)
+
+ def test_transformation(self):
+
+ class CustomTransformer(TreeForestTransformer):
+ def __default__(self, name, data):
+ result = []
+ for item in data:
+ if isinstance(item, list):
+ result += item
+ else:
+ result.append(item)
+ return result
+
+ def __default_token__(self, node):
+ return node.lower()
+
+ def __default_ambig__(self, name, data):
+ return data[0]
+
+ result = CustomTransformer(resolve_ambiguity=False).transform(self.forest)
+ expected = ['a', 'b', 'c', 'd']
+ self.assertEqual(result, expected)
+
+if __name__ == '__main__':
+ unittest.main()
| [
{
"components": [
{
"doc": "",
"lines": [
132,
133
],
"name": "PackedNode.children",
"signature": "def children(self):",
"type": "function"
},
{
"doc": "",
"lines": [
164,
168
],
... | [
"tests/test_tree_forest_transformer.py::TestTreeForestTransformer::test_aliases",
"tests/test_tree_forest_transformer.py::TestTreeForestTransformer::test_default_ambig",
"tests/test_tree_forest_transformer.py::TestTreeForestTransformer::test_default_rule",
"tests/test_tree_forest_transformer.py::TestTreeFores... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
New forest utilities (2)
A better version of #675 ready for review.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in lark/parsers/earley_forest.py]
(definition of PackedNode.children:)
def children(self):
(definition of ForestVisitor.get_cycle_in_path:)
def get_cycle_in_path(self, node, path):
(definition of ForestVisitor.on_cycle:)
def on_cycle(self, node, path):
"""Called when a cycle is encountered. `node` is the node that causes
the cycle. `path` the list of nodes being visited: nodes that have been
entered but not exited. The first element is the root in a forest
visit, and the last element is the node visited most recently.
`path` should be treated as read-only. The utility function
`get_cycle_in_path` may be used to obtain a slice of `path` that only
contains the nodes that make up the cycle."""
(definition of ForestTransformer:)
class ForestTransformer(ForestVisitor):
"""The base class for a bottom-up forest transformation.
Transformations are applied via inheritance and overriding of the
following methods:
transform_symbol_node
transform_intermediate_node
transform_packed_node
transform_token_node
`transform_token_node` receives a Token as an argument.
All other methods receive the node that is being transformed and
a list of the results of the transformations of that node's children.
The return value of these methods are the resulting transformations.
If `Discard` is raised in a transformation, no data from that node
will be passed to its parent's transformation."""
(definition of ForestTransformer.__init__:)
def __init__(self):
(definition of ForestTransformer.transform:)
def transform(self, root):
"""Perform a transformation on a Forest."""
(definition of ForestTransformer.transform_symbol_node:)
def transform_symbol_node(self, node, data):
(definition of ForestTransformer.transform_intermediate_node:)
def transform_intermediate_node(self, node, data):
(definition of ForestTransformer.transform_packed_node:)
def transform_packed_node(self, node, data):
(definition of ForestTransformer.transform_token_node:)
def transform_token_node(self, node):
(definition of ForestTransformer.visit_symbol_node_in:)
def visit_symbol_node_in(self, node):
(definition of ForestTransformer.visit_packed_node_in:)
def visit_packed_node_in(self, node):
(definition of ForestTransformer.visit_token_node:)
def visit_token_node(self, node):
(definition of ForestTransformer.visit_symbol_node_out:)
def visit_symbol_node_out(self, node):
(definition of ForestTransformer.visit_intermediate_node_out:)
def visit_intermediate_node_out(self, node):
(definition of ForestTransformer.visit_packed_node_out:)
def visit_packed_node_out(self, node):
(definition of PackedData:)
class PackedData():
"""Used in transformations of packed nodes to distinguish the data
that comes from the left child and the right child."""
(definition of PackedData.__init__:)
def __init__(self, node, data):
(definition of ForestToParseTree:)
class ForestToParseTree(ForestTransformer):
"""Used by the earley parser when ambiguity equals 'resolve' or
'explicit'. Transforms an SPPF into an (ambiguous) parse tree.
tree_class: The Tree class to use for construction
callbacks: A dictionary of rules to functions that output a tree
prioritizer: A ForestVisitor that manipulates the priorities of
ForestNodes
resolve_ambiguity: If True, ambiguities will be resolved based on
priorities. Otherwise, `_ambig` nodes will be in the resulting
tree."""
(definition of ForestToParseTree.__init__:)
def __init__(self, tree_class=Tree, callbacks=dict(), prioritizer=ForestSumVisitor(), resolve_ambiguity=True):
(definition of ForestToParseTree.on_cycle:)
def on_cycle(self, node, path):
(definition of ForestToParseTree._check_cycle:)
def _check_cycle(self, node):
(definition of ForestToParseTree._collapse_ambig:)
def _collapse_ambig(self, children):
(definition of ForestToParseTree._call_rule_func:)
def _call_rule_func(self, node, data):
(definition of ForestToParseTree._call_ambig_func:)
def _call_ambig_func(self, node, data):
(definition of ForestToParseTree.transform_symbol_node:)
def transform_symbol_node(self, node, data):
(definition of ForestToParseTree.transform_intermediate_node:)
def transform_intermediate_node(self, node, data):
(definition of ForestToParseTree.transform_packed_node:)
def transform_packed_node(self, node, data):
(definition of ForestToParseTree.visit_symbol_node_in:)
def visit_symbol_node_in(self, node):
(definition of ForestToParseTree.visit_packed_node_in:)
def visit_packed_node_in(self, node):
(definition of ForestToParseTree.visit_token_node:)
def visit_token_node(self, node):
(definition of handles_ambiguity:)
def handles_ambiguity(func):
"""Decorator for methods of subclasses of TreeForestTransformer.
Denotes that the method should receive a list of transformed derivations."""
(definition of TreeForestTransformer:)
class TreeForestTransformer(ForestToParseTree):
"""A ForestTransformer with a tree-Transformer-like interface.
By default, it will construct a tree.
Methods provided via inheritance are called based on the rule/symbol
names of nodes in the forest.
Methods that act on rules will receive a list of the results of the
transformations of the rule's children. By default, trees and tokens.
Methods that act on tokens will receive a Token.
Alternatively, methods that act on rules may be annotated with
`handles_ambiguity`. In this case, the function will receive a list
of all the transformations of all the derivations of the rule.
By default, a list of trees where each tree.data is equal to the
rule name or one of its aliases.
Non-tree transformations are made possible by override of
`__default__`, `__default_token__`, and `__default_ambig__`."""
(definition of TreeForestTransformer.__init__:)
def __init__(self, tree_class=Tree, prioritizer=ForestSumVisitor(), resolve_ambiguity=True):
(definition of TreeForestTransformer.__default__:)
def __default__(self, name, data):
"""Default operation on tree (for override).
Returns a tree with name with data as children."""
(definition of TreeForestTransformer.__default_ambig__:)
def __default_ambig__(self, name, data):
"""Default operation on ambiguous rule (for override).
Wraps data in an '_ambig' node if it contains more than
one element."""
(definition of TreeForestTransformer.__default_token__:)
def __default_token__(self, node):
"""Default operation on Token (for override).
Returns node"""
(definition of TreeForestTransformer.transform_token_node:)
def transform_token_node(self, node):
(definition of TreeForestTransformer._call_rule_func:)
def _call_rule_func(self, node, data):
(definition of TreeForestTransformer._call_ambig_func:)
def _call_ambig_func(self, node, data):
[end of new definitions in lark/parsers/earley_forest.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
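The definitions above describe how `TreeForestTransformer` dispatches ambiguous rules: `_call_ambig_func` looks up a method by symbol name, and only methods marked with `handles_ambiguity` receive the full list of derivations; otherwise the call falls back to `__default_ambig__`. The sketch below illustrates that dispatch mechanism in isolation with plain Python. `MiniTransformer` and `call_ambig` are hypothetical stand-ins, not lark's actual classes; only the `handles_ambiguity` decorator body and the `getattr`/`partial` fallback pattern are taken from the patch.

```python
from functools import partial

def handles_ambiguity(func):
    # Mirrors the decorator from the patch: mark a method as wanting
    # the full list of ambiguous derivations.
    func.handles_ambiguity = True
    return func

class MiniTransformer:
    # Hypothetical minimal stand-in for TreeForestTransformer's
    # ambiguity dispatch (not the real lark class).
    def __default_ambig__(self, name, data):
        # Single derivation passes through; multiple derivations are
        # wrapped, analogous to building an '_ambig' node.
        return data[0] if len(data) == 1 else ('_ambig', data)

    @handles_ambiguity
    def start(self, data):
        # User-defined handler that receives all derivations of 'start'.
        return ('picked', data)

    def call_ambig(self, name, data):
        # Same pattern as _call_ambig_func in the patch: dispatch by
        # name, fall back to the default unless marked handles_ambiguity.
        user_func = getattr(self, name, self.__default_ambig__)
        if user_func == self.__default_ambig__ or not hasattr(user_func, 'handles_ambiguity'):
            user_func = partial(self.__default_ambig__, name)
        return user_func(data)

t = MiniTransformer()
print(t.call_ambig('start', ['a', 'b']))  # ('picked', ['a', 'b'])
print(t.call_ambig('other', ['x']))       # 'x'
```

A method named after a rule and decorated with `handles_ambiguity` sees every derivation; any other rule name collapses through the default, which is the behavior the `test_handles_ambiguity` and `test_default_ambig` tests above exercise.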
<<END>> | 5faea9223cc54d1dbd0985cf830d05a10a7729ec | ||
Project-MONAI__MONAI-964 | 964 | Project-MONAI/MONAI | null | 5fff91e72cf0c19e24156265794242b3b7e0957a | 2020-08-27T15:01:41Z | diff --git a/docs/source/data.rst b/docs/source/data.rst
index ef05c4c0e4..50d44ee709 100644
--- a/docs/source/data.rst
+++ b/docs/source/data.rst
@@ -68,6 +68,11 @@ NibabelReader
.. autoclass:: NibabelReader
:members:
+NumpyReader
+~~~~~~~~~~~
+.. autoclass:: NumpyReader
+ :members:
+
Nifti format handling
---------------------
diff --git a/monai/data/image_reader.py b/monai/data/image_reader.py
index d2bdede4bd..ef8fb17585 100644
--- a/monai/data/image_reader.py
+++ b/monai/data/image_reader.py
@@ -14,6 +14,7 @@
import numpy as np
+from monai.config import KeysCollection
from monai.data.utils import correct_nifti_header_if_necessary
from monai.utils import ensure_tuple, optional_import
@@ -348,3 +349,94 @@ def _get_array_data(self, img: Nifti1Image) -> np.ndarray:
"""
return np.asarray(img.dataobj)
+
+
+class NumpyReader(ImageReader):
+ """
+ Load NPY or NPZ format data based on Numpy library, they can be arrays or pickled objects.
+ A typical usage is to load the `mask` data for classification task.
+ It can load part of the npz file with specified `npz_keys`.
+
+ Args:
+        npz_keys: if loading an npz file, only load the specified keys; if None, load all the items
+            and stack the loaded items together to construct a new first dimension.
+
+ """
+
+ def __init__(self, npz_keys: Optional[KeysCollection] = None):
+ super().__init__()
+        self._img: Optional[List[np.ndarray]] = None
+ if npz_keys is not None:
+ npz_keys = ensure_tuple(npz_keys)
+ self.npz_keys = npz_keys
+
+ def verify_suffix(self, filename: Union[Sequence[str], str]) -> bool:
+ """
+        Verify whether the format of the specified file or files is supported by the Numpy reader.
+
+ Args:
+ filename: file name or a list of file names to read.
+                if a list of files, verify all the suffixes.
+
+ """
+ suffixes: Sequence[str] = ["npz", "npy"]
+ return is_supported_format(filename, suffixes)
+
+ def read(self, data: Union[Sequence[str], str, np.ndarray], **kwargs):
+ """
+ Read image data from specified file or files, or set a Numpy array.
+ Note that the returned object is Numpy array or list of Numpy arrays.
+        `self._img` is always a list, even if it only contains 1 image.
+
+ Args:
+ data: file name or a list of file names to read.
+ kwargs: additional args for `numpy.load` API except `allow_pickle`. more details about available args:
+ https://numpy.org/doc/stable/reference/generated/numpy.load.html
+
+ """
+ self._img = list()
+ if isinstance(data, np.ndarray):
+ self._img.append(data)
+ return data
+
+ filenames: Sequence[str] = ensure_tuple(data)
+ for name in filenames:
+ img = np.load(name, allow_pickle=True, **kwargs)
+ if name.endswith(".npz"):
+ # load expected items from NPZ file
+ npz_keys = [f"arr_{i}" for i in range(len(img))] if self.npz_keys is None else self.npz_keys
+ for k in npz_keys:
+ self._img.append(img[k])
+ else:
+ self._img.append(img)
+
+ return self._img if len(filenames) > 1 else self._img[0]
+
+ def get_data(self):
+ """
+ Extract data array and meta data from loaded data and return them.
+ This function returns 2 objects, first is numpy array of image data, second is dict of meta data.
+ It constructs `spatial_shape=data.shape` and stores in meta dict if the data is numpy array.
+ If loading a list of files, stack them together and add a new dimension as first dimension,
+ and use the meta data of the first image to represent the stacked result.
+
+ """
+ img_array: List[np.ndarray] = list()
+        compatible_meta: Optional[Dict] = None
+ if self._img is None:
+ raise RuntimeError("please call read() first then use get_data().")
+
+ for img in self._img:
+ header = dict()
+ if isinstance(img, np.ndarray):
+ header["spatial_shape"] = img.shape
+ img_array.append(img)
+
+ if compatible_meta is None:
+ compatible_meta = header
+ else:
+ if not np.allclose(header["spatial_shape"], compatible_meta["spatial_shape"]):
+ raise RuntimeError("spatial_shape of all images should be same.")
+
+ img_array_ = np.stack(img_array, axis=0) if len(img_array) > 1 else img_array[0]
+ return img_array_, compatible_meta
| diff --git a/tests/test_numpy_reader.py b/tests/test_numpy_reader.py
new file mode 100644
index 0000000000..9c40ded9d1
--- /dev/null
+++ b/tests/test_numpy_reader.py
@@ -0,0 +1,90 @@
+# Copyright 2020 MONAI Consortium
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import tempfile
+import unittest
+
+import numpy as np
+
+from monai.data import NumpyReader
+
+
+class TestNumpyReader(unittest.TestCase):
+ def test_npy(self):
+ test_data = np.random.randint(0, 256, size=[3, 4, 4])
+ with tempfile.TemporaryDirectory() as tempdir:
+ filepath = os.path.join(tempdir, "test_data.npy")
+ np.save(filepath, test_data)
+
+ reader = NumpyReader()
+ reader.read(filepath)
+ result = reader.get_data()
+ self.assertTupleEqual(result[1]["spatial_shape"], test_data.shape)
+ self.assertTupleEqual(result[0].shape, test_data.shape)
+ np.testing.assert_allclose(result[0], test_data)
+
+ def test_npz1(self):
+ test_data1 = np.random.randint(0, 256, size=[3, 4, 4])
+ with tempfile.TemporaryDirectory() as tempdir:
+ filepath = os.path.join(tempdir, "test_data.npy")
+ np.save(filepath, test_data1)
+
+ reader = NumpyReader()
+ reader.read(filepath)
+ result = reader.get_data()
+ self.assertTupleEqual(result[1]["spatial_shape"], test_data1.shape)
+ self.assertTupleEqual(result[0].shape, test_data1.shape)
+ np.testing.assert_allclose(result[0], test_data1)
+
+ def test_npz2(self):
+ test_data1 = np.random.randint(0, 256, size=[3, 4, 4])
+ test_data2 = np.random.randint(0, 256, size=[3, 4, 4])
+ with tempfile.TemporaryDirectory() as tempdir:
+ filepath = os.path.join(tempdir, "test_data.npz")
+ np.savez(filepath, test_data1, test_data2)
+
+ reader = NumpyReader()
+ reader.read(filepath)
+ result = reader.get_data()
+ self.assertTupleEqual(result[1]["spatial_shape"], test_data1.shape)
+ self.assertTupleEqual(result[0].shape, (2, 3, 4, 4))
+ np.testing.assert_allclose(result[0], np.stack([test_data1, test_data2]))
+
+ def test_npz3(self):
+ test_data1 = np.random.randint(0, 256, size=[3, 4, 4])
+ test_data2 = np.random.randint(0, 256, size=[3, 4, 4])
+ with tempfile.TemporaryDirectory() as tempdir:
+ filepath = os.path.join(tempdir, "test_data.npz")
+ np.savez(filepath, test1=test_data1, test2=test_data2)
+
+ reader = NumpyReader(npz_keys=["test1", "test2"])
+ reader.read(filepath)
+ result = reader.get_data()
+ self.assertTupleEqual(result[1]["spatial_shape"], test_data1.shape)
+ self.assertTupleEqual(result[0].shape, (2, 3, 4, 4))
+ np.testing.assert_allclose(result[0], np.stack([test_data1, test_data2]))
+
+ def test_npy_pickle(self):
+ test_data = {"test": np.random.randint(0, 256, size=[3, 4, 4])}
+ with tempfile.TemporaryDirectory() as tempdir:
+ filepath = os.path.join(tempdir, "test_data.npy")
+ np.save(filepath, test_data, allow_pickle=True)
+
+ reader = NumpyReader()
+ reader.read(filepath)
+ result = reader.get_data()[0].item()
+ self.assertTupleEqual(result["test"].shape, test_data["test"].shape)
+ np.testing.assert_allclose(result["test"], test_data["test"])
+
+
+if __name__ == "__main__":
+ unittest.main()
| diff --git a/docs/source/data.rst b/docs/source/data.rst
index ef05c4c0e4..50d44ee709 100644
--- a/docs/source/data.rst
+++ b/docs/source/data.rst
@@ -68,6 +68,11 @@ NibabelReader
.. autoclass:: NibabelReader
:members:
+NumpyReader
+~~~~~~~~~~~
+.. autoclass:: NumpyReader
+ :members:
+
Nifti format handling
---------------------
| [
{
"components": [
{
"doc": "Load NPY or NPZ format data based on Numpy library, they can be arrays or pickled objects.\nA typical usage is to load the `mask` data for classification task.\nIt can load part of the npz file with specified `npz_keys`.\n\nArgs:\n npz_keys: if loading npz file, only... | [
"tests/test_numpy_reader.py::TestNumpyReader::test_npy",
"tests/test_numpy_reader.py::TestNumpyReader::test_npy_pickle",
"tests/test_numpy_reader.py::TestNumpyReader::test_npz1",
"tests/test_numpy_reader.py::TestNumpyReader::test_npz2",
"tests/test_numpy_reader.py::TestNumpyReader::test_npz3"
] | [] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
909 Add NumpyReader for IO factory
Part of #909
### Description
This PR implemented `NumpyReader` which is part of the IO factory task.
### Status
**Ready**
### Types of changes
<!--- Put an `x` in all the boxes that apply, and remove the not applicable items -->
- [x] Non-breaking change (fix or new feature that would not break existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing functionality to change).
- [x] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh --codeformat --coverage`.
- [x] In-line docstrings updated.
- [x] Documentation updated, tested `make html` command in the `docs/` folder.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in monai/data/image_reader.py]
(definition of NumpyReader:)
class NumpyReader(ImageReader):
    """Load NPY or NPZ format data based on the Numpy library; the data can be arrays or pickled objects.
A typical usage is to load the `mask` data for classification task.
It can load part of the npz file with specified `npz_keys`.
Args:
    npz_keys: if loading an npz file, only load the specified keys; if None, load all the items
    and stack the loaded items together to construct a new first dimension."""
(definition of NumpyReader.__init__:)
def __init__(self, npz_keys: Optional[KeysCollection] = None):
(definition of NumpyReader.verify_suffix:)
def verify_suffix(self, filename: Union[Sequence[str], str]) -> bool:
        """Verify whether the format of the specified file or files is supported by the Numpy reader.
Args:
filename: file name or a list of file names to read.
        if a list of files, verify all the suffixes."""
(definition of NumpyReader.read:)
def read(self, data: Union[Sequence[str], str, np.ndarray], **kwargs):
"""Read image data from specified file or files, or set a Numpy array.
Note that the returned object is Numpy array or list of Numpy arrays.
        `self._img` is always a list, even if it only contains 1 image.
Args:
data: file name or a list of file names to read.
kwargs: additional args for `numpy.load` API except `allow_pickle`. more details about available args:
https://numpy.org/doc/stable/reference/generated/numpy.load.html"""
(definition of NumpyReader.get_data:)
def get_data(self):
"""Extract data array and meta data from loaded data and return them.
This function returns 2 objects, first is numpy array of image data, second is dict of meta data.
It constructs `spatial_shape=data.shape` and stores in meta dict if the data is numpy array.
If loading a list of files, stack them together and add a new dimension as first dimension,
and use the meta data of the first image to represent the stacked result."""
[end of new definitions in monai/data/image_reader.py]
</definitions>
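The read/get_data contract described in the definitions above can be sketched with a small standalone helper. This is a hypothetical `load_numpy` function, not the MONAI implementation itself: it reproduces the npz key selection, the stacking of multiple arrays into a new first dimension, and the `spatial_shape` meta dict.

```python
import os
import tempfile
import numpy as np

def load_numpy(filenames, npz_keys=None):
    # Collect every array from the given .npy/.npz files.
    imgs = []
    for name in filenames:
        data = np.load(name, allow_pickle=True)
        if name.endswith(".npz"):
            # Default to numpy's positional keys when npz_keys is not given.
            keys = ["arr_{}".format(i) for i in range(len(data.files))] if npz_keys is None else npz_keys
            imgs.extend(data[k] for k in keys)
        else:
            imgs.append(data)
    # Meta data comes from the first image; stacking adds a new first dimension.
    meta = {"spatial_shape": imgs[0].shape}
    array = np.stack(imgs, axis=0) if len(imgs) > 1 else imgs[0]
    return array, meta

with tempfile.TemporaryDirectory() as tempdir:
    path = os.path.join(tempdir, "mask.npz")
    np.savez(path, a=np.zeros((3, 4, 4)), b=np.ones((3, 4, 4)))
    arr, meta = load_numpy([path], npz_keys=["a", "b"])

print(arr.shape)              # (2, 3, 4, 4)
print(meta["spatial_shape"])  # (3, 4, 4)
```

This mirrors what the test patch exercises: two (3, 4, 4) arrays in one npz file come back stacked as (2, 3, 4, 4), while the meta dict keeps the per-image shape.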
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | e73257caa79309dcce1e93abf1632f4bfd75b11f | |
atlassian-api__atlassian-python-api-559 | 559 | atlassian-api/atlassian-python-api | null | eb86bf1b088f235fba9932ea7b06ca8e0d018c52 | 2020-08-26T10:06:22Z | diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst
index 70824f1aa..93eaa4e14 100644
--- a/CONTRIBUTING.rst
+++ b/CONTRIBUTING.rst
@@ -77,6 +77,8 @@ All methods based on docs from: https://developer.atlassian.com/docs/
- https://developer.atlassian.com/platform/marketplace/rest
* Crowd:
- https://developer.atlassian.com/server/crowd/crowd-rest-apis/
+* Xray:
+ - https://docs.getxray.app/display/XRAY/REST+API
* Others:
- https://developer.atlassian.com/server/jira/platform/oauth/
- https://confluence.atlassian.com/cloud/api-tokens-938839638.html
diff --git a/README.rst b/README.rst
index fe02f5483..c85b53500 100644
--- a/README.rst
+++ b/README.rst
@@ -81,6 +81,20 @@ Example to get your requests:
data = sd.get_my_customer_requests()
print(data)
+Using Xray (Test Management tool for Jira):
+
+.. code-block:: python
+
+ from atlassian import Xray
+
+ xr = Xray(
+ url='http://localhost:7990',
+ username='admin',
+ password='admin')
+
    data = xr.get_tests(['TEST-001'])
+ print(data)
+
If you want to see the response in pretty print format JSON. Feel free for use construction like:
.. code-block:: python
diff --git a/atlassian/__init__.py b/atlassian/__init__.py
index 4901edb85..048449eea 100644
--- a/atlassian/__init__.py
+++ b/atlassian/__init__.py
@@ -7,6 +7,7 @@
from .marketplace import MarketPlace
from .portfolio import Portfolio
from .service_desk import ServiceDesk
+from .xray import Xray
__all__ = [
'Confluence',
@@ -17,5 +18,6 @@
'Stash',
'Crowd',
'ServiceDesk',
- 'MarketPlace'
+ 'MarketPlace',
+ 'Xray'
]
diff --git a/atlassian/xray.py b/atlassian/xray.py
new file mode 100755
index 000000000..21417b94f
--- /dev/null
+++ b/atlassian/xray.py
@@ -0,0 +1,383 @@
+# coding=utf-8
+import logging
+import re
+
+from .rest_client import AtlassianRestAPI
+
+log = logging.getLogger(__name__)
+
+
+class Xray(AtlassianRestAPI):
+
+ def __init__(self, *args, **kwargs):
+ super(Xray, self).__init__(*args, **kwargs)
+
+ # Tests API
+ def get_tests(self, test_keys):
+ """
+ Retrieve information about the provided tests.
+ :param test_keys: list of tests (eg. `['TEST-001', 'TEST-002']`) to retrieve.
+ :return: Returns the retrieved tests.
+ """
+ url = 'rest/raven/1.0/api/test/?keys={0}'.format(';'.join(test_keys))
+ return self.get(url)
+
+ def get_test_statuses(self):
+ """
+ Retrieve a list of all Test Statuses available in Xray sorted by rank.
+ :return: Returns the test statuses.
+ """
+ url = 'rest/raven/1.0/api/settings/teststatuses'
+ return self.get(url)
+
+ def get_test_runs(self, test_key):
+ """
+ Retrieve test runs of a test.
+ :param test_key: Test key (eg. 'TEST-001').
+ :return: Returns the exported test runs.
+ """
+ url = 'rest/raven/1.0/api/test/{0}/testruns'.format(test_key)
+ return self.get(url)
+
+ def get_test_runs_with_environment(self, test_key, test_environments):
+ # TODO
+ """
+ Retrieve test runs of a test filtered by tests environments.
+ :param test_key: Test key (eg. 'TEST-001').
+ :param test_environments: Test execution environments separated by ','.
+ :return: Returns the exported test runs.
+ """
+ env = '?testEnvironments={0}'.format(','.join([re.escape(env) for env in test_environments]))
+ url = 'rest/raven/1.0/api/test/{0}/testruns{1}'.format(test_key, env)
+ return self.get(url)
+
+ def get_test_preconditions(self, test_key):
+ """
+ Retrieve pre-conditions of a test.
+ :param test_key: Test key (eg. 'TEST-001').
+ :return: Returns the test pre-conditions of a given test.
+ """
+ url = 'rest/raven/1.0/api/test/{0}/preconditions'.format(test_key)
+ return self.get(url)
+
+ def get_test_sets(self, test_key):
+ """
+ Retrieve test sets associated with a test.
+ :param test_key: Test key (eg. 'TEST-001').
+ :return: Returns the exported test sets.
+ """
+ url = 'rest/raven/1.0/api/test/{0}/testsets'.format(test_key)
+ return self.get(url)
+
+ def get_test_executions(self, test_key):
+ """
+ Retrieve test executions of a test.
+ :param test_key: Test key (eg. 'TEST-001').
+ :return: Returns the exported test executions.
+ """
+ url = 'rest/raven/1.0/api/test/{0}/testexecutions'.format(test_key)
+ return self.get(url)
+
+ def get_test_plans(self, test_key):
+ """
+ Retrieve test plans associated with a test.
+ :param test_key: Test key (eg. 'TEST-001').
+ :return: Returns the exported test plans.
+ """
+ url = 'rest/raven/1.0/api/test/{0}/testplans'.format(test_key)
+ return self.get(url)
+
+ # Test Steps API
+ def get_test_step_statuses(self):
+ """
+ Retrieve the test step statuses available in Xray sorted by rank.
+ :return: Returns the test step statuses available in Xray sorted by rank.
+ """
+ url = 'rest/raven/1.0/api/settings/teststepstatuses'
+ return self.get(url)
+
+ def get_test_step(self, test_key, test_step_id):
+ """
+ Retrieve the specified test step of a given test.
+ :param test_key: Test key (eg. 'TEST-001').
+ :param test_step_id: ID of the test step.
+ :return: Return the test step with the given id.
+ """
+ url = 'rest/raven/1.0/api/test/{0}/step/{1}'.format(test_key, test_step_id)
+ return self.get(url)
+
+ def get_test_steps(self, test_key):
+ """
+ Retrieve the test steps of a given test.
+ :param test_key: Test key (eg. 'TEST-001').
+ :return: Return the test steps of a given test.
+ """
+ url = 'rest/raven/1.0/api/test/{0}/step'.format(test_key)
+ return self.get(url)
+
+ def create_test_step(self, test_key, step, data, result):
+ """
+        Create a new test step for a given test.
+ NOTE: attachments are currently not supported!
+ :param test_key: Test key (eg. 'TEST-001').
+ :param step: Test Step name (eg. 'Example step').
+ :param data: Test Step data (eg. 'Example data').
+        :param result: Test Step result (eg. 'Example result').
+ :return:
+ """
+ create = {'step': step, 'data': data, 'result': result, 'attachments': []}
+ url = 'rest/raven/1.0/api/test/{0}/step'.format(test_key)
+ return self.put(url, create)
+
+ def update_test_step(self, test_key, test_step_id, step, data, result):
+ """
+        Update the specified test step of a given test.
+ NOTE: attachments are currently not supported!
+ :param test_key: Test key (eg. 'TEST-001').
+ :param test_step_id: ID of the test step.
+ :param step: Test Step name (eg. 'Example step').
+ :param data: Test Step data (eg. 'Example data').
+        :param result: Test Step result (eg. 'Example result').
+ :return:
+ """
+ update = {'step': step, 'data': data, 'result': result, 'attachments': {'add': [], 'remove': []}}
+ url = 'rest/raven/1.0/api/test/{0}/step/{1}'.format(test_key, test_step_id)
+ return self.post(url, update)
+
+ def delete_test_step(self, test_key, test_step_id):
+ """
+        Remove the specified test step from a given test.
+ :param test_key: Test key (eg. 'TEST-001').
+ :param test_step_id: ID of the test step.
+ :return:
+ """
+ url = 'rest/raven/1.0/api/test/{0}/step/{1}'.format(test_key, test_step_id)
+ return self.delete(url)
+
+ # Pre-Conditions API
+ def get_tests_with_precondition(self, precondition_key):
+ """
+ Retrieve the tests associated with the given pre-condition.
+ :param precondition_key: Precondition key (eg. 'TEST-001').
+ :return: Return a list of the test associated with the pre-condition.
+ """
+ url = 'rest/raven/1.0/api/precondition/{0}/test'.format(precondition_key)
+ return self.get(url)
+
+ def update_precondition(self, precondition_key, add=[], remove=[]):
+ """
+ Associate tests with the given pre-condition.
+ :param precondition_key: Precondition key (eg. 'TEST-001').
+ :param add: OPTIONAL List of Test Keys to associate with the pre-condition (eg. ['TEST-002', 'TEST-003'])
+        :param remove: OPTIONAL List of Test Keys to no longer associate with the pre-condition (eg. ['TEST-004', 'TEST-005'])
+ :return:
+ """
+ update = {'add': add, 'remove': remove}
+ url = 'rest/raven/1.0/api/precondition/{0}/test'.format(precondition_key)
+ return self.post(url, update)
+
+ def delete_test_from_precondition(self, precondition_key, test_key):
+ """
+ Remove association of the specified tests from the given pre-condition.
+ :param precondition_key: Precondition key (eg. 'TEST-001').
+ :param test_key: Test Key which should no longer be associate with the pre-condition (eg. 'TEST-100')
+ :return:
+ """
+ url = 'rest/raven/1.0/api/precondition/{0}/test/{1}'.format(precondition_key, test_key)
+ return self.delete(url)
+
+ # Test Set API
+ def get_tests_with_test_set(self, test_set_key):
+ """
+ Retrieve the tests associated with the given test set.
+ :param test_set_key: Test set key (eg. 'SET-001').
+ :return: Return a list of the test associated with the test set.
+ """
+ url = 'rest/raven/1.0/api/testset/{0}/test'.format(test_set_key)
+ return self.get(url)
+
+ def update_test_set(self, test_set_key, add=[], remove=[]):
+ """
+ Associate tests with the given test set.
+ :param test_set_key: Test set key (eg. 'SET-001').
+ :param add: OPTIONAL List of Test Keys to associate with the test set (eg. ['TEST-002', 'TEST-003'])
+        :param remove: OPTIONAL List of Test Keys to no longer associate with the test set (eg. ['TEST-004', 'TEST-005'])
+ :return:
+ """
+ update = {'add': add, 'remove': remove}
+ url = 'rest/raven/1.0/api/testset/{0}/test'.format(test_set_key)
+ return self.post(url, update)
+
+ def delete_test_from_test_set(self, test_set_key, test_key):
+ """
+ Remove association of the specified tests from the given test set.
+ :param test_set_key: Test set key (eg. 'SET-001').
+ :param test_key: Test Key which should no longer be associate with the test set (eg. 'TEST-100')
+ :return:
+ """
+ url = 'rest/raven/1.0/api/testset/{0}/test/{1}'.format(test_set_key, test_key)
+ return self.delete(url)
+
+ # Test Plans API
+ def get_tests_with_test_plan(self, test_plan_key):
+ """
+ Retrieve the tests associated with the given test plan.
+ :param test_plan_key: Test set key (eg. 'PLAN-001').
+ :return: Return a list of the test associated with the test plan.
+ """
+ url = 'rest/raven/1.0/api/testplan/{0}/test'.format(test_plan_key)
+ return self.get(url)
+
+ def update_test_plan(self, test_plan_key, add=[], remove=[]):
+ """
+ Associate tests with the given test plan.
+ :param test_plan_key: Test plan key (eg. 'PLAN-001').
+ :param add: OPTIONAL List of Test Keys to associate with the test plan (eg. ['TEST-002', 'TEST-003'])
+        :param remove: OPTIONAL List of Test Keys to no longer associate with the test plan (eg. ['TEST-004', 'TEST-005'])
+ :return:
+ """
+ update = {'add': add, 'remove': remove}
+ url = 'rest/raven/1.0/api/testplan/{0}/test'.format(test_plan_key)
+ return self.post(url, update)
+
+ def delete_test_from_test_plan(self, test_plan_key, test_key):
+ """
+ Remove association of the specified tests from the given test plan.
+ :param test_plan_key: Test plan key (eg. 'PLAN-001').
+ :param test_key: Test Key which should no longer be associate with the test plan (eg. 'TEST-100')
+ :return:
+ """
+ url = 'rest/raven/1.0/api/testplan/{0}/test/{1}'.format(test_plan_key, test_key)
+ return self.delete(url)
+
+ # Test Executions API
+ def get_tests_with_test_execution(self, test_exec_key):
+ """
+ Retrieve the tests associated with the given test execution.
+ :param test_exec_key: Test execution key (eg. 'EXEC-001').
+ :return: Return a list of the test associated with the test execution.
+ """
+ url = 'rest/raven/1.0/api/testexec/{0}/test'.format(test_exec_key)
+ return self.get(url)
+
+ def update_test_execution(self, test_exec_key, add=[], remove=[]):
+ """
+ Associate tests with the given test execution.
+ :param test_exec_key: Test execution key (eg. 'EXEC-001').
+ :param add: OPTIONAL List of Test Keys to associate with the test execution (eg. ['TEST-002', 'TEST-003'])
+        :param remove: OPTIONAL List of Test Keys to no longer associate with the test execution (eg. ['TEST-004', 'TEST-005'])
+ :return:
+ """
+ update = {'add': add, 'remove': remove}
+ url = 'rest/raven/1.0/api/testexec/{0}/test'.format(test_exec_key)
+ return self.post(url, update)
+
+ def delete_test_from_test_execution(self, test_exec_key, test_key):
+ """
+ Remove association of the specified tests from the given test execution.
+ :param test_exec_key: Test execution key (eg. 'EXEC-001').
+ :param test_key: Test Key which should no longer be associate with the test execution (eg. 'TEST-100')
+ :return:
+ """
+ url = 'rest/raven/1.0/api/testexec/{0}/test/{1}'.format(test_exec_key, test_key)
+ return self.delete(url)
+
+ # Test Runs API
+ def get_test_run(self, test_run_id):
+ """
+ Retrieve detailed information about the given test run.
+ :param test_run_id: ID of the test run (eg. 100).
+ :return: Returns detailed information about the test run.
+ """
+ url = 'rest/raven/1.0/api/testrun/{0}'.format(test_run_id)
+ return self.get(url)
+
+ def get_test_run_assignee(self, test_run_id):
+ """
+ Retrieve the assignee for the given test run.
+ :param test_run_id: ID of the test run (eg. 100).
+ :return: Returns the assignee for the given test run
+ """
+ url = 'rest/raven/1.0/api/testrun/{0}/assignee'.format(test_run_id)
+ return self.get(url)
+
+ def update_test_run_assignee(self, test_run_id, assignee):
+ """
+ Update the assignee for the given test run.
+ :param test_run_id: ID of the test run (eg. 100).
+ :param assignee: Assignee id (eg. 'bob')
+ :return:
+ """
+ update = {'assignee': assignee}
+ url = 'rest/raven/1.0/api/testrun/{0}'.format(test_run_id)
+ return self.put(url, update)
+
+ def get_test_run_status(self, test_run_id):
+ """
+ Retrieve the status for the given test run.
+ :param test_run_id: ID of the test run (eg. 100).
+ :return: Returns the status for the given test run
+ """
+ url = 'rest/raven/1.0/api/testrun/{0}/status'.format(test_run_id)
+ return self.get(url)
+
+ def update_test_run_status(self, test_run_id, status):
+ """
+ Update the status for the given test run.
+ :param test_run_id: ID of the test run (eg. 100).
+        :param status: Status (eg. 'PASS')
+ :return:
+ """
+ update = {'status': status}
+ url = 'rest/raven/1.0/api/testrun/{0}'.format(test_run_id)
+ return self.put(url, update)
+
+ def get_test_run_defects(self, test_run_id):
+ """
+ Retrieve the defects for the given test run.
+ :param test_run_id: ID of the test run (eg. 100).
+ :return: Returns a list of defects for the given test run
+ """
+ url = 'rest/raven/1.0/api/testrun/{0}/defect'.format(test_run_id)
+ return self.get(url)
+
+ def update_test_run_defects(self, test_run_id, add=[], remove=[]):
+ """
+ Update the defects associated with the given test run.
+ :param test_run_id: ID of the test run (eg. 100).
+ :param add: OPTIONAL List of defects to associate to the test run (eg. ['BUG-001', 'BUG-002'])
+ :param remove: OPTIONAL List of defects which no longer need to be associated to the test run (eg. ['BUG-003'])
+ :return:
+ """
+ update = {'defects': {'add': add, 'remove': remove}}
+ url = 'rest/raven/1.0/api/testrun/{0}'.format(test_run_id)
+ return self.put(url, update)
+
+ def get_test_run_comment(self, test_run_id):
+ """
+ Retrieve the comment for the given test run.
+ :param test_run_id: ID of the test run (eg. 100).
+ :return: Returns the comment for the given test run
+ """
+ url = 'rest/raven/1.0/api/testrun/{0}/comment'.format(test_run_id)
+ return self.get(url)
+
+ def update_test_run_comment(self, test_run_id, comment):
+ """
+ Update the comment for the given test run.
+ :param test_run_id: ID of the test run (eg. 100).
+ :param comment: Comment (eg. 'Test needs to be reworked')
+ :return:
+ """
+ update = {'comment': comment}
+ url = 'rest/raven/1.0/api/testrun/{0}'.format(test_run_id)
+ return self.put(url, update)
+
+ def get_test_run_steps(self, test_run_id):
+ """
+ Retrieve the steps for the given test run.
+ :param test_run_id: ID of the test run (eg. 100).
+ :return: Returns the steps for the given test run
+ """
+ url = 'rest/raven/1.0/api/testrun/{0}/step'.format(test_run_id)
+ return self.get(url)
diff --git a/docs/index.rst b/docs/index.rst
index 514269c7a..cd82f7808 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -22,6 +22,7 @@ Add a connection:
from atlassian import Confluence
from atlassian import Bitbucket
from atlassian import ServiceDesk
+ from atlassian import Xray
jira = Jira(
url='http://localhost:8080',
@@ -42,6 +43,11 @@ Add a connection:
url='http://localhost:8080',
username='admin',
password='admin')
+
+ xray = Xray(
+ url='http://localhost:8080',
+ username='admin',
+ password='admin')
Key/Cert Based authentication
@@ -55,6 +61,7 @@ Add a connection using key/cert based authentication:
from atlassian import Confluence
from atlassian import Bitbucket
from atlassian import ServiceDesk
+ from atlassian import Xray
jira = Jira(
url='http://localhost:8080',
@@ -76,6 +83,11 @@ Add a connection using key/cert based authentication:
key='/path/to/key',
cert='/path/to/cert')
+ xray = Xray(
+ url='http://localhost:8080',
+ key='/path/to/key',
+ cert='/path/to/cert')
+
Alternatively OAuth can be used:
.. code-block:: python
@@ -102,6 +114,10 @@ Alternatively OAuth can be used:
url='http://localhost:8080',
oauth=oauth_dict)
+ xray = Xray(
+ url='http://localhost:8080',
+ oauth=oauth_dict)
+
Or Kerberos *(installation with kerberos extra necessary)*:
.. code-block:: python
@@ -124,6 +140,10 @@ Or Kerberos *(installation with kerberos extra necessary)*:
url='http://localhost:8080',
kerberos=kerberos_service)
+ xray = Xray(
+ url='http://localhost:8080',
+ kerberos=kerberos_service)
+
Or reuse cookie file:
.. code-block:: python
@@ -147,6 +167,10 @@ Or reuse cookie file:
url='http://localhost:8080',
cookies=cookie_dict)
+ xray = Xray(
+ url='http://localhost:8080',
+ cookies=cookie_dict)
+
To authenticate to the Atlassian Cloud APIs:
.. code-block:: python
@@ -177,8 +201,6 @@ To authenticate to the Atlassian Cloud APIs:
password=jira_api_token,
cloud=True)
-
-
.. toctree::
:maxdepth: 2
@@ -187,6 +209,7 @@ To authenticate to the Atlassian Cloud APIs:
bitbucket
bamboo
service_desk
+ xray
.. |Build Status| image:: https://travis-ci.org/atlassian-api/atlassian-python-api.svg?branch=master
:target: https://pypi.python.org/pypi/atlassian-python-api
diff --git a/docs/xray.rst b/docs/xray.rst
new file mode 100755
index 000000000..fd98a950a
--- /dev/null
+++ b/docs/xray.rst
@@ -0,0 +1,147 @@
+Xray module
+===========
+
+.. NOTE: The Xray module only supports the Server + Data Center edition of the Xray Jira plugin!
+
+Manage Test
+-----------
+
+.. code-block:: python
+
+ # Retrieve information about the provided tests
+ xray.get_tests(['TEST-001', 'TEST-002'])
+
+ # Retrieve a list of all Test Statuses available in Xray sorted by rank
+ xray.get_test_statuses()
+
+ # Retrieve test runs of a test
+ xray.get_test_runs('TEST-001')
+
+ # Retrieve test runs of a test filtered by tests environments
+    xray.get_test_runs_with_environment('TEST-001', ['Android', 'iOS'])
+
+ # Retrieve pre-conditions of a test
+ xray.get_test_preconditions('TEST-001')
+
+ # Retrieve test sets associated with a test
+ xray.get_test_sets('TEST-001')
+
+ # Retrieve test executions of a test
+ xray.get_test_executions('TEST-001')
+
+ # Retrieve test plans associated with a test
+ xray.get_test_plans('TEST-001')
+
+Manage Test Steps
+-----------------
+
+.. code-block:: python
+
+ # Retrieve the test step statuses available in Xray sorted by rank
+ xray.get_test_step_statuses()
+
+ # Retrieve the specified test step of a given test
+ xray.get_test_step('TEST-001', 'STEP-001')
+
+ # Retrieve the test steps of a given test
+ xray.get_test_steps('TEST-001')
+
+    # Create a new test step for a given test
+ xray.create_test_step('TEST-001', 'Example Test Step', 'Example Test Data', 'Example Test Result')
+
+    # Update the specified test step of a given test
+ xray.update_test_step('TEST-001', 100, 'Updated Test Step', 'Updated Test Data', 'Updated Test Result')
+
+    # Remove the specified test step from a given test
+ xray.delete_test_step('TEST-001', 100)
+
+Manage Pre-conditions
+---------------------
+
+.. code-block:: python
+
+ # Retrieve the tests associated with the given pre-condition
+ xray.get_tests_with_precondition('PREC-001')
+
+ # Associate tests with the given pre-condition
+ xray.update_precondition('PREC-001', add=['TEST-001','TEST-002'], remove=['TEST-003'])
+
+ # Remove association of the specified tests from the given pre-condition
+ xray.delete_test_from_precondition('PREC-001', 'TEST-003')
+
+Manage Test sets
+----------------
+
+.. code-block:: python
+
+ # Retrieve the tests associated with the given test set
+ xray.get_tests_with_test_set('SET-001')
+
+ # Associate tests with the given test set
+ xray.update_test_set('SET-001',add=['TEST-001','TEST-002'], remove=['TEST-003'])
+
+ # Remove association of the specified tests from the given test set
+ xray.delete_test_from_test_set('SET-001', 'TEST-003')
+
+Manage Test plans
+-----------------
+
+.. code-block:: python
+
+ # Retrieve the tests associated with the given test plan
+ xray.get_tests_with_test_plan('PLAN-001')
+
+ # Associate tests with the given test plan
+ xray.update_test_plan('PLAN-001', add=['TEST-001', 'TEST-002'], remove=['TEST-003'])
+
+ # Remove association of the specified tests from the given test plan
+    xray.delete_test_from_test_plan('PLAN-001', 'TEST-001')
+
+Manage Test Executions
+----------------------
+
+.. code-block:: python
+
+ # Retrieve the tests associated with the given test execution
+ xray.get_tests_with_test_execution('EXEC-001')
+
+ # Associate tests with the given test execution
+ xray.update_test_execution('EXEC-001', add=['TEST-001', 'TEST-002'], remove=['TEST-003'])
+
+ # Remove association of the specified tests from the given test execution
+ xray.delete_test_from_test_execution('EXEC-001', 'TEST-001')
+
+Manage Test Runs
+----------------
+
+.. code-block:: python
+
+ # Retrieve detailed information about the given test run
+ xray.get_test_run(100)
+
+ # Retrieve the assignee for the given test run.
+ xray.get_test_run_assignee(100)
+
+ # Update the assignee for the given test run
+ xray.update_test_run_assignee(100, 'bob')
+
+ # Retrieve the status for the given test run
+ xray.get_test_run_status(100)
+
+ # Update the status for the given test run
+ xray.update_test_run_status(100, 'PASS')
+
+ # Retrieve the defects for the given test run
+ xray.get_test_run_defects(100)
+
+ # Update the defects associated with the given test run
+ xray.update_test_run_defects(100, add=['BUG-001', 'BUG-002'], remove=['BUG-003'])
+
+ # Retrieve the comment for the given test run
+ xray.get_test_run_comment(100)
+
+ # Update the comment for the given test run
+ xray.update_test_run_comment(100, 'Test needs to be reworked')
+
+ # Retrieve the steps for the given test run
+ xray.get_test_run_steps(100)
| diff --git a/tests/test_base.py b/tests/test_base.py
index 9a9f8da5a..4825c9fd6 100644
--- a/tests/test_base.py
+++ b/tests/test_base.py
@@ -1,5 +1,5 @@
# coding: utf8
-from atlassian import Jira, Confluence, Bitbucket, Bamboo, Crowd, ServiceDesk
+from atlassian import Jira, Confluence, Bitbucket, Bamboo, Crowd, ServiceDesk, Xray
import os
BAMBOO_URL = os.environ.get('BAMBOO_URL', 'http://localhost:8085')
@@ -7,6 +7,7 @@
CONFLUENCE_URL = os.environ.get('BAMBOO_URL', 'http://localhost:8090')
STASH_URL = os.environ.get('BAMBOO_URL', 'http://localhost:7990')
SERVICE_DESK_URL = os.environ.get('SERVICE_DESK_URL', 'http://localhost:8080')
+XRAY_URL = os.environ.get('XRAY_URL', 'http://localhost:8080')
CROWD_URL = os.environ.get('CROWD_URL', 'http://localhost:8095/crowd')
CROWD_APPLICATION = os.environ.get('CROWD_APPLICATION', 'bamboo')
@@ -58,3 +59,10 @@ def test_init_service_desk(self):
username=ATLASSIAN_USER,
password=ATLASSIAN_PASSWORD
)
+
+ def test_init_xray(self):
+ xray = Xray(
+ url=XRAY_URL,
+ username=ATLASSIAN_USER,
+ password=ATLASSIAN_PASSWORD
+ )
| diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst
index 70824f1aa..93eaa4e14 100644
--- a/CONTRIBUTING.rst
+++ b/CONTRIBUTING.rst
@@ -77,6 +77,8 @@ All methods based on docs from: https://developer.atlassian.com/docs/
- https://developer.atlassian.com/platform/marketplace/rest
* Crowd:
- https://developer.atlassian.com/server/crowd/crowd-rest-apis/
+* Xray:
+ - https://docs.getxray.app/display/XRAY/REST+API
* Others:
- https://developer.atlassian.com/server/jira/platform/oauth/
- https://confluence.atlassian.com/cloud/api-tokens-938839638.html
diff --git a/README.rst b/README.rst
index fe02f5483..c85b53500 100644
--- a/README.rst
+++ b/README.rst
@@ -81,6 +81,20 @@ Example to get your requests:
data = sd.get_my_customer_requests()
print(data)
+Using Xray (Test Management tool for Jira):
+
+.. code-block:: python
+
+ from atlassian import Xray
+
+ xr = Xray(
+ url='http://localhost:7990',
+ username='admin',
+ password='admin')
+
+ data = xr.get_tests('TEST-001')
+ print(data)
+
If you want to see the response in pretty print format JSON. Feel free for use construction like:
.. code-block:: python
diff --git a/docs/index.rst b/docs/index.rst
index 514269c7a..cd82f7808 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -22,6 +22,7 @@ Add a connection:
from atlassian import Confluence
from atlassian import Bitbucket
from atlassian import ServiceDesk
+ from atlassian import Xray
jira = Jira(
url='http://localhost:8080',
@@ -42,6 +43,11 @@ Add a connection:
url='http://localhost:8080',
username='admin',
password='admin')
+
+ xray = Xray(
+ url='http://localhost:8080',
+ username='admin',
+ password='admin')
Key/Cert Based authentication
@@ -55,6 +61,7 @@ Add a connection using key/cert based authentication:
from atlassian import Confluence
from atlassian import Bitbucket
from atlassian import ServiceDesk
+ from atlassian import Xray
jira = Jira(
url='http://localhost:8080',
@@ -76,6 +83,11 @@ Add a connection using key/cert based authentication:
key='/path/to/key',
cert='/path/to/cert')
+ xray = Xray(
+ url='http://localhost:8080',
+ key='/path/to/key',
+ cert='/path/to/cert')
+
Alternatively OAuth can be used:
.. code-block:: python
@@ -102,6 +114,10 @@ Alternatively OAuth can be used:
url='http://localhost:8080',
oauth=oauth_dict)
+ xray = Xray(
+ url='http://localhost:8080',
+ oauth=oauth_dict)
+
Or Kerberos *(installation with kerberos extra necessary)*:
.. code-block:: python
@@ -124,6 +140,10 @@ Or Kerberos *(installation with kerberos extra necessary)*:
url='http://localhost:8080',
kerberos=kerberos_service)
+ xray = Xray(
+ url='http://localhost:8080',
+ kerberos=kerberos_service)
+
Or reuse cookie file:
.. code-block:: python
@@ -147,6 +167,10 @@ Or reuse cookie file:
url='http://localhost:8080',
cookies=cookie_dict)
+ xray = Xray(
+ url='http://localhost:8080',
+ cookies=cookie_dict)
+
To authenticate to the Atlassian Cloud APIs:
.. code-block:: python
@@ -177,8 +201,6 @@ To authenticate to the Atlassian Cloud APIs:
password=jira_api_token,
cloud=True)
-
-
.. toctree::
:maxdepth: 2
@@ -187,6 +209,7 @@ To authenticate to the Atlassian Cloud APIs:
bitbucket
bamboo
service_desk
+ xray
.. |Build Status| image:: https://travis-ci.org/atlassian-api/atlassian-python-api.svg?branch=master
:target: https://pypi.python.org/pypi/atlassian-python-api
diff --git a/docs/xray.rst b/docs/xray.rst
new file mode 100755
index 000000000..fd98a950a
--- /dev/null
+++ b/docs/xray.rst
@@ -0,0 +1,147 @@
+Xray module
+===========
+
+.. note:: The Xray module only supports the Server + Data Center edition of the Xray Jira plugin!
+
+Manage Tests
+------------
+
+.. code-block:: python
+
+ # Retrieve information about the provided tests
+ xray.get_tests(['TEST-001', 'TEST-002'])
+
+ # Retrieve a list of all Test Statuses available in Xray sorted by rank
+ xray.get_test_statuses()
+
+ # Retrieve test runs of a test
+ xray.get_test_runs('TEST-001')
+
+ # Retrieve test runs of a test filtered by tests environments
+ xray.get_test_runs_with_environment('TEST-001', 'Android,iOS')
+
+ # Retrieve pre-conditions of a test
+ xray.get_test_preconditions('TEST-001')
+
+ # Retrieve test sets associated with a test
+ xray.get_test_sets('TEST-001')
+
+ # Retrieve test executions of a test
+ xray.get_test_executions('TEST-001')
+
+ # Retrieve test plans associated with a test
+ xray.get_test_plans('TEST-001')
+
+Manage Test Steps
+-----------------
+
+.. code-block:: python
+
+ # Retrieve the test step statuses available in Xray sorted by rank
+ xray.get_test_step_statuses()
+
+ # Retrieve the specified test step of a given test
+ xray.get_test_step('TEST-001', 'STEP-001')
+
+ # Retrieve the test steps of a given test
+ xray.get_test_steps('TEST-001')
+
+    # Create a new test step for a given test
+ xray.create_test_step('TEST-001', 'Example Test Step', 'Example Test Data', 'Example Test Result')
+
+    # Update the specified test step for a given test
+ xray.update_test_step('TEST-001', 100, 'Updated Test Step', 'Updated Test Data', 'Updated Test Result')
+
+    # Remove the specified test step from a given test
+ xray.delete_test_step('TEST-001', 100)
+
+Manage Pre-conditions
+---------------------
+
+.. code-block:: python
+
+ # Retrieve the tests associated with the given pre-condition
+ xray.get_tests_with_precondition('PREC-001')
+
+ # Associate tests with the given pre-condition
+    xray.update_precondition('PREC-001', add=['TEST-001', 'TEST-002'], remove=['TEST-003'])
+
+ # Remove association of the specified tests from the given pre-condition
+ xray.delete_test_from_precondition('PREC-001', 'TEST-003')
+
+Manage Test sets
+----------------
+
+.. code-block:: python
+
+ # Retrieve the tests associated with the given test set
+ xray.get_tests_with_test_set('SET-001')
+
+ # Associate tests with the given test set
+    xray.update_test_set('SET-001', add=['TEST-001', 'TEST-002'], remove=['TEST-003'])
+
+ # Remove association of the specified tests from the given test set
+ xray.delete_test_from_test_set('SET-001', 'TEST-003')
+
+Manage Test plans
+-----------------
+
+.. code-block:: python
+
+ # Retrieve the tests associated with the given test plan
+ xray.get_tests_with_test_plan('PLAN-001')
+
+ # Associate tests with the given test plan
+ xray.update_test_plan('PLAN-001', add=['TEST-001', 'TEST-002'], remove=['TEST-003'])
+
+ # Remove association of the specified tests from the given test plan
+    xray.delete_test_from_test_plan('PLAN-001', 'TEST-001')
+
+Manage Test Executions
+----------------------
+
+.. code-block:: python
+
+ # Retrieve the tests associated with the given test execution
+ xray.get_tests_with_test_execution('EXEC-001')
+
+ # Associate tests with the given test execution
+ xray.update_test_execution('EXEC-001', add=['TEST-001', 'TEST-002'], remove=['TEST-003'])
+
+ # Remove association of the specified tests from the given test execution
+ xray.delete_test_from_test_execution('EXEC-001', 'TEST-001')
+
+Manage Test Runs
+----------------
+
+.. code-block:: python
+
+ # Retrieve detailed information about the given test run
+ xray.get_test_run(100)
+
+ # Retrieve the assignee for the given test run.
+ xray.get_test_run_assignee(100)
+
+ # Update the assignee for the given test run
+ xray.update_test_run_assignee(100, 'bob')
+
+ # Retrieve the status for the given test run
+ xray.get_test_run_status(100)
+
+ # Update the status for the given test run
+ xray.update_test_run_status(100, 'PASS')
+
+ # Retrieve the defects for the given test run
+ xray.get_test_run_defects(100)
+
+ # Update the defects associated with the given test run
+ xray.update_test_run_defects(100, add=['BUG-001', 'BUG-002'], remove=['BUG-003'])
+
+ # Retrieve the comment for the given test run
+ xray.get_test_run_comment(100)
+
+ # Update the comment for the given test run
+ xray.update_test_run_comment(100, 'Test needs to be reworked')
+
+ # Retrieve the steps for the given test run
+ xray.get_test_run_steps(100)
| [
{
"components": [
{
"doc": "",
"lines": [
10,
383
],
"name": "Xray",
"signature": "class Xray(AtlassianRestAPI):",
"type": "class"
},
{
"doc": "",
"lines": [
12,
13
],
"name"... | [
"tests/test_base.py::TestBasic::test_init_jira",
"tests/test_base.py::TestBasic::test_init_confluence",
"tests/test_base.py::TestBasic::test_init_bitbucket",
"tests/test_base.py::TestBasic::test_init_bamboo",
"tests/test_base.py::TestBasic::test_init_crowd",
"tests/test_base.py::TestBasic::test_init_servi... | [] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
feat: add xray rest api (server/data-center) support
Add support for v1.0 of the Xray REST api for Server and Data-Center.
Xray is a Jira plugin which provides Test Management support. I use the REST api for test automation (+feedback) from CI.
NOTE:
* As of now, v2.0 does not cover all functionality provided in the initial version, mainly related to test automation (eg. import of test executions).
* The REST api of Xray for Jira Cloud differs from Jira Server/DC.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in atlassian/xray.py]
(definition of Xray:)
class Xray(AtlassianRestAPI):
(definition of Xray.__init__:)
def __init__(self, *args, **kwargs):
(definition of Xray.get_tests:)
def get_tests(self, test_keys):
"""Retrieve information about the provided tests.
:param test_keys: list of tests (eg. `['TEST-001', 'TEST-002']`) to retrieve.
:return: Returns the retrieved tests."""
(definition of Xray.get_test_statuses:)
def get_test_statuses(self):
"""Retrieve a list of all Test Statuses available in Xray sorted by rank.
:return: Returns the test statuses."""
(definition of Xray.get_test_runs:)
def get_test_runs(self, test_key):
"""Retrieve test runs of a test.
:param test_key: Test key (eg. 'TEST-001').
:return: Returns the exported test runs."""
(definition of Xray.get_test_runs_with_environment:)
def get_test_runs_with_environment(self, test_key, test_environments):
"""Retrieve test runs of a test filtered by tests environments.
:param test_key: Test key (eg. 'TEST-001').
:param test_environments: Test execution environments separated by ','.
:return: Returns the exported test runs."""
(definition of Xray.get_test_preconditions:)
def get_test_preconditions(self, test_key):
"""Retrieve pre-conditions of a test.
:param test_key: Test key (eg. 'TEST-001').
:return: Returns the test pre-conditions of a given test."""
(definition of Xray.get_test_sets:)
def get_test_sets(self, test_key):
"""Retrieve test sets associated with a test.
:param test_key: Test key (eg. 'TEST-001').
:return: Returns the exported test sets."""
(definition of Xray.get_test_executions:)
def get_test_executions(self, test_key):
"""Retrieve test executions of a test.
:param test_key: Test key (eg. 'TEST-001').
:return: Returns the exported test executions."""
(definition of Xray.get_test_plans:)
def get_test_plans(self, test_key):
"""Retrieve test plans associated with a test.
:param test_key: Test key (eg. 'TEST-001').
:return: Returns the exported test plans."""
(definition of Xray.get_test_step_statuses:)
def get_test_step_statuses(self):
"""Retrieve the test step statuses available in Xray sorted by rank.
:return: Returns the test step statuses available in Xray sorted by rank."""
(definition of Xray.get_test_step:)
def get_test_step(self, test_key, test_step_id):
"""Retrieve the specified test step of a given test.
:param test_key: Test key (eg. 'TEST-001').
:param test_step_id: ID of the test step.
:return: Return the test step with the given id."""
(definition of Xray.get_test_steps:)
def get_test_steps(self, test_key):
"""Retrieve the test steps of a given test.
:param test_key: Test key (eg. 'TEST-001').
:return: Return the test steps of a given test."""
(definition of Xray.create_test_step:)
def create_test_step(self, test_key, step, data, result):
"""Create a new test steps for a given test.
NOTE: attachments are currently not supported!
:param test_key: Test key (eg. 'TEST-001').
:param step: Test Step name (eg. 'Example step').
:param data: Test Step data (eg. 'Example data').
:param result: Test Step result (eg. 'Example result').
:return: """
(definition of Xray.update_test_step:)
def update_test_step(self, test_key, test_step_id, step, data, result):
"""Update the specified test steps for a given test.
NOTE: attachments are currently not supported!
:param test_key: Test key (eg. 'TEST-001').
:param test_step_id: ID of the test step.
:param step: Test Step name (eg. 'Example step').
:param data: Test Step data (eg. 'Example data').
:param result: Test Step result (eg. 'Example result').
:return:"""
(definition of Xray.delete_test_step:)
def delete_test_step(self, test_key, test_step_id):
"""Remove the specified test steps from a given test.
:param test_key: Test key (eg. 'TEST-001').
:param test_step_id: ID of the test step.
:return:"""
(definition of Xray.get_tests_with_precondition:)
def get_tests_with_precondition(self, precondition_key):
"""Retrieve the tests associated with the given pre-condition.
:param precondition_key: Precondition key (eg. 'PREC-001').
:return: Return a list of the test associated with the pre-condition."""
(definition of Xray.update_precondition:)
def update_precondition(self, precondition_key, add=[], remove=[]):
"""Associate tests with the given pre-condition.
:param precondition_key: Precondition key (eg. 'PREC-001').
:param add: OPTIONAL List of Test Keys to associate with the pre-condition (eg. ['TEST-002', 'TEST-003'])
:param remove: OPTIONAL List of Test Keys to no longer associate with the pre-condition (eg. ['TEST-004', 'TEST-005'])
:return:"""
(definition of Xray.delete_test_from_precondition:)
def delete_test_from_precondition(self, precondition_key, test_key):
"""Remove association of the specified tests from the given pre-condition.
:param precondition_key: Precondition key (eg. 'PREC-001').
:param test_key: Test Key which should no longer be associated with the pre-condition (eg. 'TEST-100')
:return:"""
(definition of Xray.get_tests_with_test_set:)
def get_tests_with_test_set(self, test_set_key):
"""Retrieve the tests associated with the given test set.
:param test_set_key: Test set key (eg. 'SET-001').
:return: Return a list of the test associated with the test set."""
(definition of Xray.update_test_set:)
def update_test_set(self, test_set_key, add=[], remove=[]):
"""Associate tests with the given test set.
:param test_set_key: Test set key (eg. 'SET-001').
:param add: OPTIONAL List of Test Keys to associate with the test set (eg. ['TEST-002', 'TEST-003'])
:param remove: OPTIONAL List of Test Keys to no longer associate with the test set (eg. ['TEST-004', 'TEST-005'])
:return:"""
(definition of Xray.delete_test_from_test_set:)
def delete_test_from_test_set(self, test_set_key, test_key):
"""Remove association of the specified tests from the given test set.
:param test_set_key: Test set key (eg. 'SET-001').
:param test_key: Test Key which should no longer be associated with the test set (eg. 'TEST-100')
:return:"""
(definition of Xray.get_tests_with_test_plan:)
def get_tests_with_test_plan(self, test_plan_key):
"""Retrieve the tests associated with the given test plan.
:param test_plan_key: Test plan key (eg. 'PLAN-001').
:return: Return a list of the test associated with the test plan."""
(definition of Xray.update_test_plan:)
def update_test_plan(self, test_plan_key, add=[], remove=[]):
"""Associate tests with the given test plan.
:param test_plan_key: Test plan key (eg. 'PLAN-001').
:param add: OPTIONAL List of Test Keys to associate with the test plan (eg. ['TEST-002', 'TEST-003'])
:param remove: OPTIONAL List of Test Keys to no longer associate with the test plan (eg. ['TEST-004', 'TEST-005'])
:return:"""
(definition of Xray.delete_test_from_test_plan:)
def delete_test_from_test_plan(self, test_plan_key, test_key):
"""Remove association of the specified tests from the given test plan.
:param test_plan_key: Test plan key (eg. 'PLAN-001').
:param test_key: Test Key which should no longer be associated with the test plan (eg. 'TEST-100')
:return:"""
(definition of Xray.get_tests_with_test_execution:)
def get_tests_with_test_execution(self, test_exec_key):
"""Retrieve the tests associated with the given test execution.
:param test_exec_key: Test execution key (eg. 'EXEC-001').
:return: Return a list of the test associated with the test execution."""
(definition of Xray.update_test_execution:)
def update_test_execution(self, test_exec_key, add=[], remove=[]):
"""Associate tests with the given test execution.
:param test_exec_key: Test execution key (eg. 'EXEC-001').
:param add: OPTIONAL List of Test Keys to associate with the test execution (eg. ['TEST-002', 'TEST-003'])
:param remove: OPTIONAL List of Test Keys to no longer associate with the test execution (eg. ['TEST-004', 'TEST-005'])
:return:"""
(definition of Xray.delete_test_from_test_execution:)
def delete_test_from_test_execution(self, test_exec_key, test_key):
"""Remove association of the specified tests from the given test execution.
:param test_exec_key: Test execution key (eg. 'EXEC-001').
:param test_key: Test Key which should no longer be associated with the test execution (eg. 'TEST-100')
:return:"""
(definition of Xray.get_test_run:)
def get_test_run(self, test_run_id):
"""Retrieve detailed information about the given test run.
:param test_run_id: ID of the test run (eg. 100).
:return: Returns detailed information about the test run."""
(definition of Xray.get_test_run_assignee:)
def get_test_run_assignee(self, test_run_id):
"""Retrieve the assignee for the given test run.
:param test_run_id: ID of the test run (eg. 100).
:return: Returns the assignee for the given test run"""
(definition of Xray.update_test_run_assignee:)
def update_test_run_assignee(self, test_run_id, assignee):
"""Update the assignee for the given test run.
:param test_run_id: ID of the test run (eg. 100).
:param assignee: Assignee id (eg. 'bob')
:return: """
(definition of Xray.get_test_run_status:)
def get_test_run_status(self, test_run_id):
"""Retrieve the status for the given test run.
:param test_run_id: ID of the test run (eg. 100).
:return: Returns the status for the given test run"""
(definition of Xray.update_test_run_status:)
def update_test_run_status(self, test_run_id, status):
"""Update the status for the given test run.
:param test_run_id: ID of the test run (eg. 100).
:param status: Status (eg. 'PASS')
:return: """
(definition of Xray.get_test_run_defects:)
def get_test_run_defects(self, test_run_id):
"""Retrieve the defects for the given test run.
:param test_run_id: ID of the test run (eg. 100).
:return: Returns a list of defects for the given test run"""
(definition of Xray.update_test_run_defects:)
def update_test_run_defects(self, test_run_id, add=[], remove=[]):
"""Update the defects associated with the given test run.
:param test_run_id: ID of the test run (eg. 100).
:param add: OPTIONAL List of defects to associate to the test run (eg. ['BUG-001', 'BUG-002'])
:param remove: OPTIONAL List of defects which no longer need to be associated to the test run (eg. ['BUG-003'])
:return: """
(definition of Xray.get_test_run_comment:)
def get_test_run_comment(self, test_run_id):
"""Retrieve the comment for the given test run.
:param test_run_id: ID of the test run (eg. 100).
:return: Returns the comment for the given test run"""
(definition of Xray.update_test_run_comment:)
def update_test_run_comment(self, test_run_id, comment):
"""Update the comment for the given test run.
:param test_run_id: ID of the test run (eg. 100).
:param comment: Comment (eg. 'Test needs to be reworked')
:return: """
(definition of Xray.get_test_run_steps:)
def get_test_run_steps(self, test_run_id):
"""Retrieve the steps for the given test run.
:param test_run_id: ID of the test run (eg. 100).
:return: Returns the steps for the given test run"""
[end of new definitions in atlassian/xray.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
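The definitions above amount to a thin URL-building wrapper over the Xray Server/DC REST API v1.0 referenced in the request. The following is a minimal, illustrative sketch of how a couple of the listed methods might construct their endpoints; the `rest/raven/1.0/api/...` base path follows the Xray Server/DC REST docs, while the `XraySketch` class name, its `_resource` helper, and the `*_url` method names are hypothetical stand-ins (the real `Xray` class would inherit from `AtlassianRestAPI` and issue the HTTP calls itself).

```python
class XraySketch:
    """Hypothetical stand-in for the Xray wrapper; builds request URLs only."""

    API_ROOT = "rest/raven/1.0/api"  # Xray Server/DC REST API v1.0 base path

    def __init__(self, url):
        # Normalise the Jira base URL so joining never doubles a slash
        self.url = url.rstrip("/")

    def _resource(self, path):
        # Join the Jira base URL with an Xray API resource path
        return "{0}/{1}/{2}".format(self.url, self.API_ROOT, path)

    def get_tests_url(self, test_keys):
        # GET /rest/raven/1.0/api/test?keys=TEST-001;TEST-002
        return self._resource("test?keys={0}".format(";".join(test_keys)))

    def get_test_steps_url(self, test_key):
        # GET /rest/raven/1.0/api/test/{testKey}/step
        return self._resource("test/{0}/step".format(test_key))


xray = XraySketch("http://localhost:8080")
print(xray.get_tests_url(["TEST-001", "TEST-002"]))
# -> http://localhost:8080/rest/raven/1.0/api/test?keys=TEST-001;TEST-002
```

Each public method in the real class would then be a one-liner delegating to `self.get`/`self.post`/`self.put`/`self.delete` from the shared `AtlassianRestAPI` base, which is why the test suite only needs to verify construction of the `Xray` client.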
<<END>> | 3e1ef05e9b570b32985a2a0f14e6666ccde9bd65 | |
sympy__sympy-19998 | 19,998 | sympy/sympy | 1.7 | 8a64de4322db8b1195e930e86fabd925e85e19ea | 2020-08-24T09:37:58Z | diff --git a/sympy/solvers/ode/ode.py b/sympy/solvers/ode/ode.py
index 5bb0f6b422e5..3963872bdaac 100644
--- a/sympy/solvers/ode/ode.py
+++ b/sympy/solvers/ode/ode.py
@@ -580,7 +580,7 @@ def dsolve(eq, func=None, hint="default", simplify=True,
# changed to show the systems that haven't
# been solved.
try:
- sol = dsolve_system(eq, funcs=func, ics=ics)
+ sol = dsolve_system(eq, funcs=func, ics=ics, doit=True)
return sol[0] if len(sol) == 1 else sol
except NotImplementedError:
pass
diff --git a/sympy/solvers/ode/systems.py b/sympy/solvers/ode/systems.py
index c2bf7f35e3dd..9a5edf53e309 100644
--- a/sympy/solvers/ode/systems.py
+++ b/sympy/solvers/ode/systems.py
@@ -1,4 +1,4 @@
-from sympy.core import Add, Mul
+from sympy.core import Add, Mul, S
from sympy.core.containers import Tuple
from sympy.core.compatibility import iterable
from sympy.core.exprtools import factor_terms
@@ -11,8 +11,8 @@
piecewise_fold, sqrt, log)
from sympy.functions.combinatorial.factorials import factorial
from sympy.matrices import zeros, Matrix, NonSquareMatrixError, MatrixBase, eye
-from sympy.polys import Poly
-from sympy.simplify import collect
+from sympy.polys import Poly, together
+from sympy.simplify import collect, radsimp, signsimp
from sympy.simplify.powsimp import powdenest, powsimp
from sympy.simplify.ratsimp import ratsimp
from sympy.simplify.simplify import simplify
@@ -70,6 +70,137 @@ def _solsimp(e, t):
return no_t + has_t
+def simpsol(sol, wrt1, wrt2, doit=True):
+ """Simplify solutions from dsolve_system."""
+
+ # The parameter sol is the solution as returned by dsolve (list of Eq).
+ #
+ # The parameters wrt1 and wrt2 are lists of symbols to be collected for
+ # with those in wrt1 being collected for first. This allows for collecting
+ # on any factors involving the independent variable before collecting on
+ # the integration constants or vice versa using e.g.:
+ #
+ # sol = simpsol(sol, [t], [C1, C2]) # t first, constants after
+ # sol = simpsol(sol, [C1, C2], [t]) # constants first, t after
+ #
+ # If doit=True (default) then simpsol will begin by evaluating any
+ # unevaluated integrals. Since many integrals will appear multiple times
+ # in the solutions this is done intelligently by computing each integral
+ # only once.
+ #
+ # The strategy is to first perform simple cancellation with factor_terms
+ # and then multiply out all brackets with expand_mul. This gives an Add
+ # with many terms.
+ #
+ # We split each term into two multiplicative factors dep and coeff where
+ # all factors that involve wrt1 are in dep and any constant factors are in
+ # coeff e.g.
+ # sqrt(2)*C1*exp(t) -> ( exp(t) , sqrt(2)*C1 )
+ #
+ # The dep factors are simplified using powsimp to combine expanded
+ # exponential factors e.g.
+ # exp(a*t)*exp(b*t) -> exp(t*(a+b))
+ #
+ # We then collect coefficients for all terms having the same (simplified)
+ # dep. The coefficients are then simplified using together and ratsimp and
+ # lastly by recursively applying the same transformation to the
+ # coefficients to collect on wrt2.
+ #
+ # Finally the result is recombined into an Add and signsimp is used to
+ # normalise any minus signs.
+
+ def simprhs(rhs, rep, wrt1, wrt2):
+ """Simplify the rhs of an ODE solution"""
+ if rep:
+ rhs = rhs.subs(rep)
+ rhs = factor_terms(rhs)
+ rhs = simp_coeff_dep(rhs, wrt1, wrt2)
+ rhs = signsimp(rhs)
+ return rhs
+
+ def simp_coeff_dep(expr, wrt1, wrt2=None):
+ """Split rhs into terms, split terms into dep and coeff and collect on dep"""
+ add_dep_terms = lambda e: e.is_Add and e.has(*wrt1)
+ expandable = lambda e: e.is_Mul and any(map(add_dep_terms, e.args))
+ expand_func = lambda e: expand_mul(e, deep=False)
+ expand_mul_mod = lambda e: e.replace(expandable, expand_func)
+ terms = Add.make_args(expand_mul_mod(expr))
+ dc = {}
+ for term in terms:
+ coeff, dep = term.as_independent(*wrt1, as_Add=False)
+ # Collect together the coefficients for terms that have the same
+ # dependence on wrt1 (after dep is normalised using simpdep).
+ dep = simpdep(dep, wrt1)
+
+ # See if the dependence on t cancels out...
+ if dep is not S.One:
+ dep2 = factor_terms(dep)
+ if not dep2.has(*wrt1):
+ coeff *= dep2
+ dep = S.One
+
+ if dep not in dc:
+ dc[dep] = coeff
+ else:
+ dc[dep] += coeff
+ # Apply the method recursively to the coefficients but this time
+        # collecting on wrt2 rather than wrt1.
+ termpairs = ((simpcoeff(c, wrt2), d) for d, c in dc.items())
+ if wrt2 is not None:
+ termpairs = ((simp_coeff_dep(c, wrt2), d) for c, d in termpairs)
+ return Add(*(c * d for c, d in termpairs))
+
+ def simpdep(term, wrt1):
+ """Normalise factors involving t with powsimp and recombine exp"""
+ def canonicalise(a):
+ # Using factor_terms here isn't quite right because it leads to things
+ # like exp(t*(1+t)) that we don't want. We do want to cancel factors
+ # and pull out a common denominator but ideally the numerator would be
+ # expressed as a standard form polynomial in t so we expand_mul
+ # and collect afterwards.
+ a = factor_terms(a)
+ num, den = a.as_numer_denom()
+ num = expand_mul(num)
+ num = collect(num, wrt1)
+ return num / den
+
+ term = powsimp(term)
+ rep = {e: exp(canonicalise(e.args[0])) for e in term.atoms(exp)}
+ term = term.subs(rep)
+ return term
+
+ def simpcoeff(coeff, wrt2):
+ """Bring to a common fraction and cancel with ratsimp"""
+ coeff = together(coeff)
+ if coeff.is_polynomial():
+ # Calling ratsimp can be expensive. The main reason is to simplify
+ # sums of terms with irrational denominators so we limit ourselves
+ # to the case where the expression is polynomial in any symbols.
+ # Maybe there's a better approach...
+ coeff = ratsimp(radsimp(coeff))
+ # collect on secondary variables first and any remaining symbols after
+ if wrt2 is not None:
+ syms = list(wrt2) + list(ordered(coeff.free_symbols - set(wrt2)))
+ else:
+ syms = list(ordered(coeff.free_symbols))
+ coeff = collect(coeff, syms)
+ coeff = together(coeff)
+ return coeff
+
+ # There are often repeated integrals. Collect unique integrals and
+ # evaluate each once and then substitute into the final result to replace
+ # all occurrences in each of the solution equations.
+ if doit:
+ integrals = set().union(*(s.atoms(Integral) for s in sol))
+ rep = {i: factor_terms(i).doit() for i in integrals}
+ else:
+ rep = {}
+
+ sol = [Eq(s.lhs, simprhs(s.rhs, rep, wrt1, wrt2)) for s in sol]
+
+ return sol
+
+
def linodesolve_type(A, t, b=None):
r"""
Helper function that determines the type of the system of ODEs for solving with :obj:`sympy.solvers.ode.systems.linodesolve()`
@@ -1518,7 +1649,8 @@ def _higher_order_ode_solver(match):
else:
new_eqs, new_funcs = _higher_order_to_first_order(eqs, sysorder, t, funcs=funcs,
type=type, J=match.get('J', None),
- f_t=match.get('f(t)', None))
+ f_t=match.get('f(t)', None),
+ P=match.get('P', None), b=match.get('rhs', None))
if is_transformed:
t = match.get('t_', t)
@@ -1728,7 +1860,7 @@ def _second_order_to_first_order(eqs, funcs, t, type="auto", A1=None,
return _higher_order_to_first_order(eqs, sys_order, t, funcs=funcs)
-def _higher_order_type2_to_sub_systems(J, f_t, funcs, t, max_order):
+def _higher_order_type2_to_sub_systems(J, f_t, funcs, t, max_order, b=None, P=None):
# Note: To add a test for this ValueError
if J is None or f_t is None or not _matrix_is_constant(J, t):
@@ -1737,8 +1869,17 @@ def _higher_order_type2_to_sub_systems(J, f_t, funcs, t, max_order):
Type 2
'''))
+ if P is None and b is not None and not b.is_zero_matrix:
+ raise ValueError(filldedent('''
+ Provide the keyword 'P' for matrix P in A = P * J * P-1.
+ '''))
+
new_funcs = Matrix([Function(Dummy('{}__0'.format(f.func.__name__)))(t) for f in funcs])
new_eqs = new_funcs.diff(t, max_order) - f_t * J * new_funcs
+
+ if b is not None and not b.is_zero_matrix:
+ new_eqs -= P.inv() * b
+
new_eqs = canonical_odes(new_eqs, new_funcs, t)[0]
return new_eqs, new_funcs
@@ -1804,9 +1945,11 @@ def _get_coeffs_from_subs_expression(expr):
if type == "type2":
J = kwargs.get('J', None)
f_t = kwargs.get('f_t', None)
+ b = kwargs.get('b', None)
+ P = kwargs.get('P', None)
max_order = max(sys_order[func] for func in funcs)
- return _higher_order_type2_to_sub_systems(J, f_t, funcs, t, max_order)
+ return _higher_order_type2_to_sub_systems(J, f_t, funcs, t, max_order, P=P, b=b)
# Note: To be changed to this after doit option is disabled for default cases
# new_sysorder = _get_func_order(new_eqs, new_funcs)
@@ -1836,7 +1979,7 @@ def _get_coeffs_from_subs_expression(expr):
return eqs, new_funcs
-def dsolve_system(eqs, funcs=None, t=None, ics=None, doit=False):
+def dsolve_system(eqs, funcs=None, t=None, ics=None, doit=False, simplify=True):
r"""
Solves any(supported) system of Ordinary Differential Equations
@@ -1877,7 +2020,13 @@ def dsolve_system(eqs, funcs=None, t=None, ics=None, doit=False):
ics : Dict or None
Set of initial boundary/conditions for the system of ODEs
doit : Boolean
- Evaluate the solutions if True. Default value is False
+        Evaluate the solutions if True. Default value is False. Can be
+        left as False if the integral evaluation takes too much time and/or
+        isn't required.
+    simplify : Boolean
+ Simplify the solutions for the systems. Default value is True.
+ Can be set to false if simplification takes too much time and/or
+ isn't required.
Examples
========
@@ -1984,8 +2133,9 @@ def dsolve_system(eqs, funcs=None, t=None, ics=None, doit=False):
solved_constants = solve_ics(sol, funcs, constants, ics)
sol = [s.subs(solved_constants) for s in sol]
- if doit:
- sol = [s.doit() for s in sol]
+ if simplify:
+ constants = Tuple(*sol).free_symbols - variables
+ sol = simpsol(sol, [t], constants, doit=doit)
final_sols.append(sol)
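The patch above threads the new `simplify` keyword (and the existing `doit` keyword) through `dsolve_system`. A minimal usage sketch, assuming a sympy version that includes this change, so that `dsolve_system` accepts both keywords:

```python
from sympy import symbols, Function, Eq, Derivative
from sympy.solvers.ode.systems import dsolve_system
from sympy.solvers.ode.subscheck import checksysodesol

t = symbols('t')
x, y = symbols('x y', cls=Function)

# A simple coupled first-order linear system.
eqs = [Eq(Derivative(x(t), t), y(t)),
       Eq(Derivative(y(t), t), x(t))]

# simplify=True (the default) post-processes the solutions;
# doit=True additionally evaluates any remaining integrals.
sols = dsolve_system(eqs, simplify=True, doit=True)

# dsolve_system returns a list of solution sets; verify the first one
# by substituting it back into the system.
result = checksysodesol(eqs, sols[0])
```

Both keywords exist so that expensive post-processing can be switched off for large systems where simplification or integral evaluation dominates the runtime.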
diff --git a/sympy/solvers/ode/tests/test_systems.py b/sympy/solvers/ode/tests/test_systems.py
index 950d1c839641..72231bab96c3 100644
--- a/sympy/solvers/ode/tests/test_systems.py
+++ b/sympy/solvers/ode/tests/test_systems.py
@@ -1,13 +1,13 @@
from sympy import (symbols, Symbol, diff, Function, Derivative, Matrix, Rational, S,
- I, Eq, sqrt)
+ I, Eq, sqrt, Mul, pi)
from sympy.core.containers import Tuple
-from sympy.functions import exp, cos, sin, log
+from sympy.functions import exp, cos, sin, log, tan, Ci, Si, erf, erfi
from sympy.matrices import dotprodsimp, NonSquareMatrixError
from sympy.solvers.ode import dsolve
from sympy.solvers.ode.ode import constant_renumber
from sympy.solvers.ode.subscheck import checksysodesol
from sympy.solvers.ode.systems import (_classify_linear_system, linear_ode_to_matrix,
- ODEOrderError, ODENonlinearError, _simpsol, _solsimp,
+ ODEOrderError, ODENonlinearError, _simpsol,
_is_commutative_anti_derivative, linodesolve,
canonical_odes, dsolve_system, _component_division,
_eqs2dict, _dict2graph)
@@ -509,126 +509,171 @@ def test_sysode_linear_neq_order1_type1():
f, g, x, y, h = symbols('f g x y h', cls=Function)
a, b, c, t = symbols('a b c t')
- eq1 = [Eq(x(t).diff(t), x(t)), Eq(y(t).diff(t), y(t))]
- sol1 = [Eq(x(t), C1*exp(t)), Eq(y(t), C2*exp(t))]
- assert dsolve(eq1) == sol1
- assert checksysodesol(eq1, sol1) == (True, [0, 0])
+ eqs1 = [Eq(Derivative(x(t), t), x(t)),
+ Eq(Derivative(y(t), t), y(t))]
+ sol1 = [Eq(x(t), C1*exp(t)),
+ Eq(y(t), C2*exp(t))]
+ assert dsolve(eqs1) == sol1
+ assert checksysodesol(eqs1, sol1) == (True, [0, 0])
- eq2 = [Eq(x(t).diff(t), 2*x(t)), Eq(y(t).diff(t), 3*y(t))]
- #sol2 = [Eq(x(t), C1*exp(2*t)), Eq(y(t), C2*exp(3*t))]
- sol2 = [Eq(x(t), C1*exp(2*t)), Eq(y(t), C2*exp(3*t))]
- assert dsolve(eq2) == sol2
- assert checksysodesol(eq2, sol2) == (True, [0, 0])
+ eqs2 = [Eq(Derivative(x(t), t), 2*x(t)),
+ Eq(Derivative(y(t), t), 3*y(t))]
+ sol2 = [Eq(x(t), C1*exp(2*t)),
+ Eq(y(t), C2*exp(3*t))]
+ assert dsolve(eqs2) == sol2
+ assert checksysodesol(eqs2, sol2) == (True, [0, 0])
- eq3 = [Eq(x(t).diff(t), a*x(t)), Eq(y(t).diff(t), a*y(t))]
- sol3 = [Eq(x(t), C1*exp(a*t)), Eq(y(t), C2*exp(a*t))]
- assert dsolve(eq3) == sol3
- assert checksysodesol(eq3, sol3) == (True, [0, 0])
+ eqs3 = [Eq(Derivative(x(t), t), a*x(t)),
+ Eq(Derivative(y(t), t), a*y(t))]
+ sol3 = [Eq(x(t), C1*exp(a*t)),
+ Eq(y(t), C2*exp(a*t))]
+ assert dsolve(eqs3) == sol3
+ assert checksysodesol(eqs3, sol3) == (True, [0, 0])
# Regression test case for issue #15474
# https://github.com/sympy/sympy/issues/15474
- eq4 = [Eq(x(t).diff(t), a*x(t)), Eq(y(t).diff(t), b*y(t))]
- sol4 = [Eq(x(t), C1*exp(a*t)), Eq(y(t), C2*exp(b*t))]
- assert dsolve(eq4) == sol4
- assert checksysodesol(eq4, sol4) == (True, [0, 0])
+ eqs4 = [Eq(Derivative(x(t), t), a*x(t)),
+ Eq(Derivative(y(t), t), b*y(t))]
+ sol4 = [Eq(x(t), C1*exp(a*t)),
+ Eq(y(t), C2*exp(b*t))]
+ assert dsolve(eqs4) == sol4
+ assert checksysodesol(eqs4, sol4) == (True, [0, 0])
- eq5 = [Eq(x(t).diff(t), -y(t)), Eq(y(t).diff(t), x(t))]
- sol5 = [Eq(x(t), -C1*sin(t) - C2*cos(t)), Eq(y(t), C1*cos(t) - C2*sin(t))]
- assert dsolve(eq5) == sol5
- assert checksysodesol(eq5, sol5) == (True, [0, 0])
+ eqs5 = [Eq(Derivative(x(t), t), -y(t)),
+ Eq(Derivative(y(t), t), x(t))]
+ sol5 = [Eq(x(t), -C1*sin(t) - C2*cos(t)),
+ Eq(y(t), C1*cos(t) - C2*sin(t))]
+ assert dsolve(eqs5) == sol5
+ assert checksysodesol(eqs5, sol5) == (True, [0, 0])
- eq6 = [Eq(x(t).diff(t), -2*y(t)), Eq(y(t).diff(t), 2*x(t))]
- sol6 = [Eq(x(t), -C1*sin(2*t) - C2*cos(2*t)), Eq(y(t), C1*cos(2*t) - C2*sin(2*t))]
- assert dsolve(eq6) == sol6
- assert checksysodesol(eq6, sol6) == (True, [0, 0])
+ eqs6 = [Eq(Derivative(x(t), t), -2*y(t)),
+ Eq(Derivative(y(t), t), 2*x(t))]
+ sol6 = [Eq(x(t), -C1*sin(2*t) - C2*cos(2*t)),
+ Eq(y(t), C1*cos(2*t) - C2*sin(2*t))]
+ assert dsolve(eqs6) == sol6
+ assert checksysodesol(eqs6, sol6) == (True, [0, 0])
- eq7 = [Eq(x(t).diff(t), I*y(t)), Eq(y(t).diff(t), I*x(t))]
- sol7 = [Eq(x(t), -C1*exp(-I*t) + C2*exp(I*t)), Eq(y(t), C1*exp(-I*t) + C2*exp(I*t))]
- assert dsolve(eq7) == sol7
- assert checksysodesol(eq7, sol7) == (True, [0, 0])
+ eqs7 = [Eq(Derivative(x(t), t), I*y(t)),
+ Eq(Derivative(y(t), t), I*x(t))]
+ sol7 = [Eq(x(t), -C1*exp(-I*t) + C2*exp(I*t)),
+ Eq(y(t), C1*exp(-I*t) + C2*exp(I*t))]
+ assert dsolve(eqs7) == sol7
+ assert checksysodesol(eqs7, sol7) == (True, [0, 0])
- eq8 = [Eq(x(t).diff(t), -a*y(t)), Eq(y(t).diff(t), a*x(t))]
- sol8 = [Eq(x(t), -I*C1*exp(-I*a*t) + I*C2*exp(I*a*t)), Eq(y(t), C1*exp(-I*a*t) + C2*exp(I*a*t))]
- assert dsolve(eq8) == sol8
- assert checksysodesol(eq8, sol8) == (True, [0, 0])
+ eqs8 = [Eq(Derivative(x(t), t), -a*y(t)),
+ Eq(Derivative(y(t), t), a*x(t))]
+ sol8 = [Eq(x(t), -I*C1*exp(-I*a*t) + I*C2*exp(I*a*t)),
+ Eq(y(t), C1*exp(-I*a*t) + C2*exp(I*a*t))]
+ assert dsolve(eqs8) == sol8
+ assert checksysodesol(eqs8, sol8) == (True, [0, 0])
- eq9 = [Eq(x(t).diff(t), x(t) + y(t)), Eq(y(t).diff(t), x(t) - y(t))]
+ eqs9 = [Eq(Derivative(x(t), t), x(t) + y(t)),
+ Eq(Derivative(y(t), t), x(t) - y(t))]
sol9 = [Eq(x(t), C1*(1 - sqrt(2))*exp(-sqrt(2)*t) + C2*(1 + sqrt(2))*exp(sqrt(2)*t)),
- Eq(y(t), C1*exp(-sqrt(2)*t) + C2*exp(sqrt(2)*t))]
- assert dsolve(eq9) == sol9
- assert checksysodesol(eq9, sol9) == (True, [0, 0])
-
- eq10 = [Eq(x(t).diff(t), x(t) + y(t)), Eq(y(t).diff(t), x(t) + y(t))]
- sol10 = [Eq(x(t), -C1 + C2*exp(2*t)), Eq(y(t), C1 + C2*exp(2*t))]
- assert dsolve(eq10) == sol10
- assert checksysodesol(eq10, sol10) == (True, [0, 0])
-
- eq11 = [Eq(x(t).diff(t), 2*x(t) + y(t)), Eq(y(t).diff(t), -x(t) + 2*y(t))]
- sol11 = [Eq(x(t), (C1*sin(t) + C2*cos(t))*exp(2*t)),
- Eq(y(t), (C1*cos(t) - C2*sin(t))*exp(2*t))]
- assert dsolve(eq11) == sol11
- assert checksysodesol(eq11, sol11) == (True, [0, 0])
-
- eq12 = [Eq(x(t).diff(t), x(t) + 2*y(t)), Eq(y(t).diff(t), 2*x(t) + y(t))]
- sol12 = [Eq(x(t), -C1*exp(-t) + C2*exp(3*t)), Eq(y(t), C1*exp(-t) + C2*exp(3*t))]
- assert dsolve(eq12) == sol12
- assert checksysodesol(eq12, sol12) == (True, [0, 0])
-
- eq13 = [Eq(x(t).diff(t), 4*x(t) + y(t)), Eq(y(t).diff(t), -x(t) + 2*y(t))]
- sol13 = [Eq(x(t), (C1 + C2*t + C2)*exp(3*t)), Eq(y(t), (-C1 - C2*t)*exp(3*t))]
- assert dsolve(eq13) == sol13
- assert checksysodesol(eq13, sol13) == (True, [0, 0])
-
- eq14 = [Eq(x(t).diff(t), a*y(t)), Eq(y(t).diff(t), a*x(t))]
- sol14 = [Eq(x(t), -C1*exp(-a*t) + C2*exp(a*t)), Eq(y(t), C1*exp(-a*t) + C2*exp(a*t))]
- assert dsolve(eq14) == sol14
- assert checksysodesol(eq14, sol14) == (True, [0, 0])
-
- eq15 = [Eq(x(t).diff(t), a*y(t)), Eq(y(t).diff(t), b*x(t))]
- sol15 = [Eq(x(t), -C1*a*exp(-t*sqrt(a*b))/sqrt(a*b) + C2*a*exp(t*sqrt(a*b))/sqrt(a*b)),
- Eq(y(t), C1*exp(-t*sqrt(a*b)) + C2*exp(t*sqrt(a*b)))]
- assert dsolve(eq15) == sol15
- assert checksysodesol(eq15, sol15) == (True, [0, 0])
+ Eq(y(t), C1*exp(-sqrt(2)*t) + C2*exp(sqrt(2)*t))]
+ assert dsolve(eqs9) == sol9
+ assert checksysodesol(eqs9, sol9) == (True, [0, 0])
- eq16 = [Eq(x(t).diff(t), a*x(t) + b*y(t)), Eq(y(t).diff(t), c*x(t))]
- sol16 = [Eq(x(t), -2*C1*b*exp(t*(a/2 + sqrt(a**2 + 4*b*c)/2))/(a - sqrt(a**2 + 4*b*c)) - 2*C2*b*exp(t*(a/2 - sqrt(a**2 + 4*b*c)/2))/(a + sqrt(a**2 + 4*b*c))),
- Eq(y(t), C1*exp(t*(a/2 + sqrt(a**2 + 4*b*c)/2)) + C2*exp(t*(a/2 - sqrt(a**2 + 4*b*c)/2)))]
- assert dsolve(eq16) == sol16
- assert checksysodesol(eq16, sol16) == (True, [0, 0])
+ eqs10 = [Eq(Derivative(x(t), t), x(t) + y(t)),
+ Eq(Derivative(y(t), t), x(t) + y(t))]
+ sol10 = [Eq(x(t), -C1 + C2*exp(2*t)),
+ Eq(y(t), C1 + C2*exp(2*t))]
+ assert dsolve(eqs10) == sol10
+ assert checksysodesol(eqs10, sol10) == (True, [0, 0])
+
+ eqs11 = [Eq(Derivative(x(t), t), 2*x(t) + y(t)),
+ Eq(Derivative(y(t), t), -x(t) + 2*y(t))]
+ sol11 = [Eq(x(t), C1*exp(2*t)*sin(t) + C2*exp(2*t)*cos(t)),
+ Eq(y(t), C1*exp(2*t)*cos(t) - C2*exp(2*t)*sin(t))]
+ assert dsolve(eqs11) == sol11
+ assert checksysodesol(eqs11, sol11) == (True, [0, 0])
+
+ eqs12 = [Eq(Derivative(x(t), t), x(t) + 2*y(t)),
+ Eq(Derivative(y(t), t), 2*x(t) + y(t))]
+ sol12 = [Eq(x(t), -C1*exp(-t) + C2*exp(3*t)),
+ Eq(y(t), C1*exp(-t) + C2*exp(3*t))]
+ assert dsolve(eqs12) == sol12
+ assert checksysodesol(eqs12, sol12) == (True, [0, 0])
+
+ eqs13 = [Eq(Derivative(x(t), t), 4*x(t) + y(t)),
+ Eq(Derivative(y(t), t), -x(t) + 2*y(t))]
+ sol13 = [Eq(x(t), C2*t*exp(3*t) + (C1 + C2)*exp(3*t)),
+ Eq(y(t), -C1*exp(3*t) - C2*t*exp(3*t))]
+ assert dsolve(eqs13) == sol13
+ assert checksysodesol(eqs13, sol13) == (True, [0, 0])
+
+ eqs14 = [Eq(Derivative(x(t), t), a*y(t)),
+ Eq(Derivative(y(t), t), a*x(t))]
+ sol14 = [Eq(x(t), -C1*exp(-a*t) + C2*exp(a*t)),
+ Eq(y(t), C1*exp(-a*t) + C2*exp(a*t))]
+ assert dsolve(eqs14) == sol14
+ assert checksysodesol(eqs14, sol14) == (True, [0, 0])
+
+ eqs15 = [Eq(Derivative(x(t), t), a*y(t)),
+ Eq(Derivative(y(t), t), b*x(t))]
+ sol15 = [Eq(x(t), -C1*a*exp(-t*sqrt(a*b))/sqrt(a*b) + C2*a*exp(t*sqrt(a*b))/sqrt(a*b)),
+ Eq(y(t), C1*exp(-t*sqrt(a*b)) + C2*exp(t*sqrt(a*b)))]
+ assert dsolve(eqs15) == sol15
+ assert checksysodesol(eqs15, sol15) == (True, [0, 0])
+
+ eqs16 = [Eq(Derivative(x(t), t), a*x(t) + b*y(t)),
+ Eq(Derivative(y(t), t), c*x(t))]
+ sol16 = [Eq(x(t), -2*C1*b*exp(t*(a + sqrt(a**2 + 4*b*c))/2)/(a - sqrt(a**2 + 4*b*c)) - 2*C2*b*exp(t*(a -
+ sqrt(a**2 + 4*b*c))/2)/(a + sqrt(a**2 + 4*b*c))),
+ Eq(y(t), C1*exp(t*(a + sqrt(a**2 + 4*b*c))/2) + C2*exp(t*(a - sqrt(a**2 + 4*b*c))/2))]
+ assert dsolve(eqs16) == sol16
+ assert checksysodesol(eqs16, sol16) == (True, [0, 0])
# Regression test case for issue #18562
# https://github.com/sympy/sympy/issues/18562
- eq17 = [Eq(x(t).diff(t), x(t) + a*y(t)), Eq(y(t).diff(t), x(t)*a - y(t))]
- sol17 = [Eq(x(t), C1*a*exp(t*sqrt(a**2 + 1))/(sqrt(a**2 + 1) - 1) - C2*a*exp(-t*sqrt(a**2 + 1))/(sqrt(a**2 + 1) + 1)),
+ eqs17 = [Eq(Derivative(x(t), t), a*y(t) + x(t)),
+ Eq(Derivative(y(t), t), a*x(t) - y(t))]
+ sol17 = [Eq(x(t), C1*a*exp(t*sqrt(a**2 + 1))/(sqrt(a**2 + 1) - 1) - C2*a*exp(-t*sqrt(a**2 + 1))/(sqrt(a**2 +
+ 1) + 1)),
Eq(y(t), C1*exp(t*sqrt(a**2 + 1)) + C2*exp(-t*sqrt(a**2 + 1)))]
- assert dsolve(eq17) == sol17
- assert checksysodesol(eq17, sol17) == (True, [0, 0])
-
- eq18 = [Eq(x(t).diff(t), 0), Eq(y(t).diff(t), 0)]
- sol18 = [Eq(x(t), C1), Eq(y(t), C2)]
- assert dsolve(eq18) == sol18
- assert checksysodesol(eq18, sol18) == (True, [0, 0])
-
- eq19 = [Eq(x(t).diff(t), 2*x(t) - y(t)), Eq(y(t).diff(t), x(t))]
- sol19 = [Eq(x(t), (C1 + C2*t + C2)*exp(t)), Eq(y(t), (C1 + C2*t)*exp(t))]
- assert dsolve(eq19) == sol19
- assert checksysodesol(eq19, sol19) == (True, [0, 0])
-
- eq20 = [Eq(x(t).diff(t), x(t)), Eq(y(t).diff(t), x(t) + y(t))]
- sol20 = [Eq(x(t), C1*exp(t)), Eq(y(t), (C1*t + C2)*exp(t))]
- assert dsolve(eq20) == sol20
- assert checksysodesol(eq20, sol20) == (True, [0, 0])
-
- eq21 = [Eq(Derivative(x(t), t), 3*x(t)), Eq(Derivative(y(t), t), x(t) + y(t))]
- sol21 = [Eq(x(t), 2*C1*exp(3*t)), Eq(y(t), C1*exp(3*t) + C2*exp(t))]
- assert dsolve(eq21) == sol21
- assert checksysodesol(eq21, sol21) == (True, [0, 0])
+ assert dsolve(eqs17) == sol17
+ assert checksysodesol(eqs17, sol17) == (True, [0, 0])
+
+ eqs18 = [Eq(Derivative(x(t), t), 0),
+ Eq(Derivative(y(t), t), 0)]
+ sol18 = [Eq(x(t), C1),
+ Eq(y(t), C2)]
+ assert dsolve(eqs18) == sol18
+ assert checksysodesol(eqs18, sol18) == (True, [0, 0])
+
+ eqs19 = [Eq(Derivative(x(t), t), 2*x(t) - y(t)),
+ Eq(Derivative(y(t), t), x(t))]
+ sol19 = [Eq(x(t), C2*t*exp(t) + (C1 + C2)*exp(t)),
+ Eq(y(t), C1*exp(t) + C2*t*exp(t))]
+ assert dsolve(eqs19) == sol19
+ assert checksysodesol(eqs19, sol19) == (True, [0, 0])
+
+ eqs20 = [Eq(Derivative(x(t), t), x(t)),
+ Eq(Derivative(y(t), t), x(t) + y(t))]
+ sol20 = [Eq(x(t), C1*exp(t)),
+ Eq(y(t), C1*t*exp(t) + C2*exp(t))]
+ assert dsolve(eqs20) == sol20
+ assert checksysodesol(eqs20, sol20) == (True, [0, 0])
+
+ eqs21 = [Eq(Derivative(x(t), t), 3*x(t)),
+ Eq(Derivative(y(t), t), x(t) + y(t))]
+ sol21 = [Eq(x(t), 2*C1*exp(3*t)),
+ Eq(y(t), C1*exp(3*t) + C2*exp(t))]
+ assert dsolve(eqs21) == sol21
+ assert checksysodesol(eqs21, sol21) == (True, [0, 0])
+
+ eqs22 = [Eq(Derivative(x(t), t), 3*x(t)),
+ Eq(Derivative(y(t), t), y(t))]
+ sol22 = [Eq(x(t), C1*exp(3*t)),
+ Eq(y(t), C2*exp(t))]
+ assert dsolve(eqs22) == sol22
+ assert checksysodesol(eqs22, sol22) == (True, [0, 0])
- eq22 = [Eq(Derivative(x(t), t), 3*x(t)), Eq(Derivative(y(t), t), y(t))]
- sol22 = [Eq(x(t), C1*exp(3*t)), Eq(y(t), C2*exp(t))]
- assert dsolve(eq22) == sol22
- assert checksysodesol(eq22, sol22) == (True, [0, 0])
+@slow
+def test_sysode_linear_neq_order1_type1_slow():
+
+ t = Symbol('t')
Z0 = Function('Z0')
Z1 = Function('Z1')
Z2 = Function('Z2')
@@ -636,56 +681,54 @@ def test_sysode_linear_neq_order1_type1():
k01, k10, k20, k21, k23, k30 = symbols('k01 k10 k20 k21 k23 k30')
- eq1 = (Eq(Derivative(Z0(t), t), -k01*Z0(t) + k10*Z1(t) + k20*Z2(t) + k30*Z3(t)),
- Eq(Derivative(Z1(t), t), k01*Z0(t) - k10*Z1(t) + k21*Z2(t)),
- Eq(Derivative(Z2(t), t), -(k20 + k21 + k23)*Z2(t)), Eq(Derivative(Z3(t), t), k23*Z2(t) - k30*Z3(t)))
- sol1 = [Eq(Z0(t), C1*k10/k01 + C2*(-k10 + k30)*exp(-k30*t)/(k01 + k10 - k30) + C3*(k10*k20 + k10*k21 - k10*k30 -
- k20**2 - k20*k21 - k20*k23 + k20*k30 + k23*k30)*exp(t*(-k20 - k21 - k23))/(k23*(k01 + k10 - k20 - k21 -
- k23)) - C4*exp(t*(-k01 - k10))),
- Eq(Z1(t), C1 - C2*k01*exp(-k30*t)/(k01 + k10 - k30) + C3*(k01*k20 + k01*k21 - k01*k30 - k20*k21 - k21**2 -
- k21*k23 + k21*k30)*exp(t*(-k20 - k21 - k23))/(k23*(k01 + k10 - k20 - k21 - k23)) + C4*exp(t*(-k01 - k10))),
- Eq(Z2(t), C3*(-k20 - k21 - k23 + k30)*exp(t*(-k20 - k21 - k23))/k23),
- Eq(Z3(t), C2*exp(-k30*t) + C3*exp(t*(-k20 - k21 - k23)))]
-
- assert dsolve(eq1, simplify=False) == sol1
- assert checksysodesol(eq1, sol1) == (True, [0, 0, 0, 0])
+ eqs1 = [Eq(Derivative(Z0(t), t), -k01*Z0(t) + k10*Z1(t) + k20*Z2(t) + k30*Z3(t)),
+ Eq(Derivative(Z1(t), t), k01*Z0(t) - k10*Z1(t) + k21*Z2(t)),
+ Eq(Derivative(Z2(t), t), (-k20 - k21 - k23)*Z2(t)),
+ Eq(Derivative(Z3(t), t), k23*Z2(t) - k30*Z3(t))]
+ sol1 = [Eq(Z0(t), C1*k10/k01 - C2*(k10 - k30)*exp(-k30*t)/(k01 + k10 - k30) - C3*(k10*(k20 + k21 - k30) -
+ k20**2 - k20*(k21 + k23 - k30) + k23*k30)*exp(-t*(k20 + k21 + k23))/(k23*(-k01 - k10 + k20 + k21 +
+ k23)) - C4*exp(-t*(k01 + k10))),
+ Eq(Z1(t), C1 - C2*k01*exp(-k30*t)/(k01 + k10 - k30) + C3*(-k01*(k20 + k21 - k30) + k20*k21 + k21**2
+ + k21*(k23 - k30))*exp(-t*(k20 + k21 + k23))/(k23*(-k01 - k10 + k20 + k21 + k23)) + C4*exp(-t*(k01 +
+ k10))),
+ Eq(Z2(t), -C3*(k20 + k21 + k23 - k30)*exp(-t*(k20 + k21 + k23))/k23),
+ Eq(Z3(t), C2*exp(-k30*t) + C3*exp(-t*(k20 + k21 + k23)))]
+ assert dsolve(eqs1) == sol1
+ assert checksysodesol(eqs1, sol1) == (True, [0, 0, 0, 0])
x, y, z, u, v, w = symbols('x y z u v w', cls=Function)
k2, k3 = symbols('k2 k3')
a_b, a_c = symbols('a_b a_c', real=True)
- eq2 = (
- Eq(Derivative(z(t), t), k2*y(t)),
- Eq(Derivative(x(t), t), k3*y(t)),
- Eq(Derivative(y(t), t), (-k2 - k3)*y(t))
- )
- sol2 = [Eq(z(t), C1 - C2*k2*exp(t*(-k2 - k3))/(k2 + k3)),
- Eq(x(t), -C2*k3*exp(t*(-k2 - k3))/(k2 + k3) + C3),
- Eq(y(t), C2*exp(t*(-k2 - k3)))]
- assert dsolve(eq2) == sol2
- assert checksysodesol(eq2, sol2) == (True, [0, 0, 0])
-
- eq3 = [4*u(t) - v(t) - 2*w(t) + Derivative(u(t), t),
- 2*u(t) + v(t) - 2*w(t) + Derivative(v(t), t),
- 5*u(t) + v(t) - 3*w(t) + Derivative(w(t), t)]
- sol3 = [Eq(u(t), C1*cos(sqrt(3)*t)/2 - C2*sin(sqrt(3)*t)/2 + C3*exp(-2*t) + sqrt(3)*(C1*sin(sqrt(3)*t) + C2*cos(sqrt(3)*t))/6),
- Eq(v(t), C1*cos(sqrt(3)*t)/2 - C2*sin(sqrt(3)*t)/2 + sqrt(3)*(C1*sin(sqrt(3)*t) + C2*cos(sqrt(3)*t))/6),
- Eq(w(t), C1*cos(sqrt(3)*t) - C2*sin(sqrt(3)*t) + C3*exp(-2*t))]
- assert dsolve(eq3) == sol3
- assert checksysodesol(eq3, sol3) == (True, [0, 0, 0])
- tw = Rational(2, 9)
- eq4 = [Eq(x(t).diff(t), 2*x(t) + y(t) - tw*4*z(t) - tw*w(t)),
- Eq(y(t).diff(t), 2*y(t) + 8*tw*z(t) + 2*tw*w(t)),
- Eq(z(t).diff(t), Rational(37, 9)*z(t) - tw*w(t)),
- Eq(w(t).diff(t), 22*tw*w(t) - 2*tw*z(t))]
+ eqs2 = [Eq(Derivative(z(t), t), k2*y(t)),
+ Eq(Derivative(x(t), t), k3*y(t)),
+ Eq(Derivative(y(t), t), (-k2 - k3)*y(t))]
+ sol2 = [Eq(z(t), C1 - C2*k2*exp(-t*(k2 + k3))/(k2 + k3)),
+ Eq(x(t), -C2*k3*exp(-t*(k2 + k3))/(k2 + k3) + C3),
+ Eq(y(t), C2*exp(-t*(k2 + k3)))]
+ assert dsolve(eqs2) == sol2
+ assert checksysodesol(eqs2, sol2) == (True, [0, 0, 0])
+
+ eqs3 = [4*u(t) - v(t) - 2*w(t) + Derivative(u(t), t),
+ 2*u(t) + v(t) - 2*w(t) + Derivative(v(t), t),
+ 5*u(t) + v(t) - 3*w(t) + Derivative(w(t), t)]
+ sol3 = [Eq(u(t), C3*exp(-2*t) + (C1/2 + sqrt(3)*C2/6)*cos(sqrt(3)*t) + sin(sqrt(3)*t)*(sqrt(3)*C1/6 +
+ C2*Rational(-1, 2))),
+ Eq(v(t), (C1/2 + sqrt(3)*C2/6)*cos(sqrt(3)*t) + sin(sqrt(3)*t)*(sqrt(3)*C1/6 + C2*Rational(-1, 2))),
+ Eq(w(t), C1*cos(sqrt(3)*t) - C2*sin(sqrt(3)*t) + C3*exp(-2*t))]
+ assert dsolve(eqs3) == sol3
+ assert checksysodesol(eqs3, sol3) == (True, [0, 0, 0])
- sol4 = [Eq(x(t), (C1 + C2*t)*exp(2*t)),
+ eqs4 = [Eq(Derivative(x(t), t), w(t)*Rational(-2, 9) + 2*x(t) + y(t) + z(t)*Rational(-8, 9)),
+ Eq(Derivative(y(t), t), w(t)*Rational(4, 9) + 2*y(t) + z(t)*Rational(16, 9)),
+ Eq(Derivative(z(t), t), w(t)*Rational(-2, 9) + z(t)*Rational(37, 9)),
+ Eq(Derivative(w(t), t), w(t)*Rational(44, 9) + z(t)*Rational(-4, 9))]
+ sol4 = [Eq(x(t), C1*exp(2*t) + C2*t*exp(2*t)),
Eq(y(t), C2*exp(2*t) + 2*C3*exp(4*t)),
- Eq(z(t), 2*C3*exp(4*t) - C4*exp(5*t)/4),
+ Eq(z(t), 2*C3*exp(4*t) + C4*exp(5*t)*Rational(-1, 4)),
Eq(w(t), C3*exp(4*t) + C4*exp(5*t))]
-
- assert dsolve(eq4) == sol4
- assert checksysodesol(eq4, sol4) == (True, [0, 0, 0, 0])
+ assert dsolve(eqs4) == sol4
+ assert checksysodesol(eqs4, sol4) == (True, [0, 0, 0, 0])
# Regression test case for issue #15574
# https://github.com/sympy/sympy/issues/15574
@@ -694,16 +737,17 @@ def test_sysode_linear_neq_order1_type1():
assert dsolve(eq5) == sol5
assert checksysodesol(eq5, sol5) == (True, [0, 0, 0, 0])
- eq6 = [Eq(Derivative(x(t), t), x(t) + y(t)),
- Eq(Derivative(y(t), t), y(t) + z(t)),
- Eq(Derivative(z(t), t), -w(t)/8 + z(t)),
- Eq(Derivative(w(t), t), w(t)/2 + z(t)/2)]
- sol6 = [Eq(x(t), (C1 + C2*t)*exp(t) + (4*C3 + 4*C4*t + 48*C4)*exp(3*t/4)),
- Eq(y(t), C2*exp(t) + (-C3 - C4*t - 8*C4)*exp(3*t/4)),
- Eq(z(t), (C3/4 + C4*t/4 + C4)*exp(3*t/4)),
- Eq(w(t), (C3/2 + C4*t/2)*exp(3*t/4))]
- assert dsolve(eq6) == sol6
- assert checksysodesol(eq6, sol6) == (True, [0, 0, 0, 0])
+ eqs6 = [Eq(Derivative(x(t), t), x(t) + y(t)),
+ Eq(Derivative(y(t), t), y(t) + z(t)),
+ Eq(Derivative(z(t), t), w(t)*Rational(-1, 8) + z(t)),
+ Eq(Derivative(w(t), t), w(t)/2 + z(t)/2)]
+ sol6 = [Eq(x(t), C1*exp(t) + C2*t*exp(t) + 4*C4*t*exp(t*Rational(3, 4)) + (4*C3 + 48*C4)*exp(t*Rational(3,
+ 4))),
+ Eq(y(t), C2*exp(t) - C4*t*exp(t*Rational(3, 4)) - (C3 + 8*C4)*exp(t*Rational(3, 4))),
+ Eq(z(t), C4*t*exp(t*Rational(3, 4))/4 + (C3/4 + C4)*exp(t*Rational(3, 4))),
+ Eq(w(t), C3*exp(t*Rational(3, 4))/2 + C4*t*exp(t*Rational(3, 4))/2)]
+ assert dsolve(eqs6) == sol6
+ assert checksysodesol(eqs6, sol6) == (True, [0, 0, 0, 0])
# Regression test case for issue #15574
# https://github.com/sympy/sympy/issues/15574
@@ -714,14 +758,18 @@ def test_sysode_linear_neq_order1_type1():
assert dsolve(eq7) == sol7
assert checksysodesol(eq7, sol7) == (True, [0, 0, 0, 0, 0])
- eq8 = [Eq(Derivative(x(t), t), 2*x(t) + y(t)), Eq(Derivative(y(t), t), 2*y(t)),
- Eq(Derivative(z(t), t), 4*z(t)),
- Eq(Derivative(w(t), t), u(t) + 5*w(t)), Eq(Derivative(u(t), t), 5*u(t))]
- sol8 = [Eq(x(t), (C1 + C2*t)*exp(2*t)), Eq(y(t), C2*exp(2*t)), Eq(z(t), C3*exp(4*t)),
- Eq(w(t), (C4 + C5*t)*exp(5*t)),
+ eqs8 = [Eq(Derivative(x(t), t), 2*x(t) + y(t)),
+ Eq(Derivative(y(t), t), 2*y(t)),
+ Eq(Derivative(z(t), t), 4*z(t)),
+ Eq(Derivative(w(t), t), u(t) + 5*w(t)),
+ Eq(Derivative(u(t), t), 5*u(t))]
+ sol8 = [Eq(x(t), C1*exp(2*t) + C2*t*exp(2*t)),
+ Eq(y(t), C2*exp(2*t)),
+ Eq(z(t), C3*exp(4*t)),
+ Eq(w(t), C4*exp(5*t) + C5*t*exp(5*t)),
Eq(u(t), C5*exp(5*t))]
- assert dsolve(eq8) == sol8
- assert checksysodesol(eq8, sol8) == (True, [0, 0, 0, 0, 0])
+ assert dsolve(eqs8) == sol8
+ assert checksysodesol(eqs8, sol8) == (True, [0, 0, 0, 0, 0])
# Regression test case for issue #15574
# https://github.com/sympy/sympy/issues/15574
@@ -732,64 +780,71 @@ def test_sysode_linear_neq_order1_type1():
# Regression test case for issue #15407
# https://github.com/sympy/sympy/issues/15407
- eq10 = [Eq(Derivative(x(t), t), (-a_b - a_c)*x(t)), Eq(Derivative(y(t), t), a_b*y(t)), Eq(Derivative(z(t), t),
- a_c*x(t))]
- sol10 = [Eq(x(t), -C1*(a_b + a_c)*exp(t*(-a_b - a_c))/a_c),
+ eqs10 = [Eq(Derivative(x(t), t), (-a_b - a_c)*x(t)),
+ Eq(Derivative(y(t), t), a_b*y(t)),
+ Eq(Derivative(z(t), t), a_c*x(t))]
+ sol10 = [Eq(x(t), -C1*(a_b + a_c)*exp(-t*(a_b + a_c))/a_c),
Eq(y(t), C2*exp(a_b*t)),
- Eq(z(t), C1*exp(t*(-a_b - a_c)) + C3)]
- assert dsolve(eq10) == sol10
- assert checksysodesol(eq10, sol10) == (True, [0, 0, 0])
+ Eq(z(t), C1*exp(-t*(a_b + a_c)) + C3)]
+ assert dsolve(eqs10) == sol10
+ assert checksysodesol(eqs10, sol10) == (True, [0, 0, 0])
# Regression test case for issue #14312
# https://github.com/sympy/sympy/issues/14312
- eq11 = (Eq(Derivative(x(t), t), k3*y(t)), Eq(Derivative(y(t), t), (-k2 - k3)*y(t)), Eq(Derivative(z(t), t), k2*y(t)))
- sol11 = [Eq(x(t), C1 + C2*k3*exp(t*(-k2 - k3))/k2),
- Eq(y(t), -C2*(k2 + k3)*exp(t*(-k2 - k3))/k2),
- Eq(z(t), C2*exp(t*(-k2 - k3)) + C3)]
- assert dsolve(eq11) == sol11
- assert checksysodesol(eq11, sol11) == (True, [0, 0, 0])
+ eqs11 = [Eq(Derivative(x(t), t), k3*y(t)),
+ Eq(Derivative(y(t), t), (-k2 - k3)*y(t)),
+ Eq(Derivative(z(t), t), k2*y(t))]
+ sol11 = [Eq(x(t), C1 + C2*k3*exp(-t*(k2 + k3))/k2),
+ Eq(y(t), -C2*(k2 + k3)*exp(-t*(k2 + k3))/k2),
+ Eq(z(t), C2*exp(-t*(k2 + k3)) + C3)]
+ assert dsolve(eqs11) == sol11
+ assert checksysodesol(eqs11, sol11) == (True, [0, 0, 0])
# Regression test case for issue #14312
# https://github.com/sympy/sympy/issues/14312
- eq12 = (Eq(Derivative(z(t), t), k2*y(t)), Eq(Derivative(x(t), t), k3*y(t)), Eq(Derivative(y(t), t), (-k2 - k3)*y(t)))
- sol12 = [Eq(z(t), C1 - C2*k2*exp(t*(-k2 - k3))/(k2 + k3)),
- Eq(x(t), -C2*k3*exp(t*(-k2 - k3))/(k2 + k3) + C3),
- Eq(y(t), C2*exp(t*(-k2 - k3)))]
- assert dsolve(eq12) == sol12
- assert checksysodesol(eq12, sol12) == (True, [0, 0, 0])
+ eqs12 = [Eq(Derivative(z(t), t), k2*y(t)),
+ Eq(Derivative(x(t), t), k3*y(t)),
+ Eq(Derivative(y(t), t), (-k2 - k3)*y(t))]
+ sol12 = [Eq(z(t), C1 - C2*k2*exp(-t*(k2 + k3))/(k2 + k3)),
+ Eq(x(t), -C2*k3*exp(-t*(k2 + k3))/(k2 + k3) + C3),
+ Eq(y(t), C2*exp(-t*(k2 + k3)))]
+ assert dsolve(eqs12) == sol12
+ assert checksysodesol(eqs12, sol12) == (True, [0, 0, 0])
+
+ f, g, h = symbols('f, g, h', cls=Function)
+ a, b, c = symbols('a, b, c')
# Regression test case for issue #15474
# https://github.com/sympy/sympy/issues/15474
- eq13 = [Eq(Derivative(f(t), t), 2*f(t) + g(t)), Eq(Derivative(g(t), t), a*f(t))]
- sol13 = [Eq(f(t), C1*exp(t*(sqrt(a + 1) + 1))/(sqrt(a + 1) - 1) - C2*exp(t*(1 - sqrt(a + 1)))/(sqrt(a + 1) + 1)),
- Eq(g(t), C1*exp(t*(sqrt(a + 1) + 1)) + C2*exp(t*(1 - sqrt(a + 1))))]
- assert dsolve(eq13) == sol13
- assert checksysodesol(eq13, sol13) == (True, [0, 0])
-
- eq14 = [Eq(f(t).diff(t), 2*g(t) - 3*h(t)),
- Eq(g(t).diff(t), 4*h(t) - 2*f(t)),
- Eq(h(t).diff(t), 3*f(t) - 4*g(t))]
- sol14 = [Eq(f(t), 2*C1 - 8*C2*cos(sqrt(29)*t)/25 + 8*C3*sin(sqrt(29)*t)/25 - 3*sqrt(29)*(C2*sin(sqrt(29)*t)
- + C3*cos(sqrt(29)*t))/25),
- Eq(g(t), 3*C1/2 - 6*C2*cos(sqrt(29)*t)/25 + 6*C3*sin(sqrt(29)*t)/25
- + 4*sqrt(29)*(C2*sin(sqrt(29)*t) + C3*cos(sqrt(29)*t))/25),
- Eq(h(t), C1 + C2*cos(sqrt(29)*t)
- - C3*sin(sqrt(29)*t))]
-
- assert dsolve(eq14) == sol14
- assert checksysodesol(eq14, sol14) == (True, [0, 0, 0])
-
- eq15 = [Eq(2*f(t).diff(t), 3*4*(g(t) - h(t))),
- Eq(3*g(t).diff(t), 2*4*(h(t) - f(t))),
- Eq(4*h(t).diff(t), 2*3*(f(t) - g(t)))]
- sol15 = [Eq(f(t), C1 - 16*C2*cos(sqrt(29)*t)/13 + 16*C3*sin(sqrt(29)*t)/13 - 6*sqrt(29)*(
- C2*sin(sqrt(29)*t) + C3*cos(sqrt(29)*t))/13),
- Eq(g(t), C1 - 16*C2*cos(sqrt(29)*t)/13 + 16*C3*sin(sqrt(29)*t)/13 + 8*sqrt(29)*(C2*sin(sqrt(29)*t) +
- C3*cos(sqrt(29)*t))/39),
- Eq(h(t), C1 + C2*cos(sqrt(29)*t) - C3*sin(sqrt(29)*t))]
-
- assert dsolve(eq15) == sol15
- assert checksysodesol(eq15, sol15) == (True, [0, 0, 0])
+ eqs13 = [Eq(Derivative(f(t), t), 2*f(t) + g(t)),
+ Eq(Derivative(g(t), t), a*f(t))]
+ sol13 = [Eq(f(t), C1*exp(t*(sqrt(a + 1) + 1))/(sqrt(a + 1) - 1) - C2*exp(-t*(sqrt(a + 1) - 1))/(sqrt(a + 1) +
+ 1)),
+ Eq(g(t), C1*exp(t*(sqrt(a + 1) + 1)) + C2*exp(-t*(sqrt(a + 1) - 1)))]
+ assert dsolve(eqs13) == sol13
+ assert checksysodesol(eqs13, sol13) == (True, [0, 0])
+
+ eqs14 = [Eq(Derivative(f(t), t), 2*g(t) - 3*h(t)),
+ Eq(Derivative(g(t), t), -2*f(t) + 4*h(t)),
+ Eq(Derivative(h(t), t), 3*f(t) - 4*g(t))]
+ sol14 = [Eq(f(t), 2*C1 - sin(sqrt(29)*t)*(sqrt(29)*C2*Rational(3, 25) + C3*Rational(-8, 25)) -
+ cos(sqrt(29)*t)*(C2*Rational(8, 25) + sqrt(29)*C3*Rational(3, 25))),
+ Eq(g(t), C1*Rational(3, 2) + sin(sqrt(29)*t)*(sqrt(29)*C2*Rational(4, 25) + C3*Rational(6, 25)) -
+ cos(sqrt(29)*t)*(C2*Rational(6, 25) + sqrt(29)*C3*Rational(-4, 25))),
+ Eq(h(t), C1 + C2*cos(sqrt(29)*t) - C3*sin(sqrt(29)*t))]
+ assert dsolve(eqs14) == sol14
+ assert checksysodesol(eqs14, sol14) == (True, [0, 0, 0])
+
+ eqs15 = [Eq(2*Derivative(f(t), t), 12*g(t) - 12*h(t)),
+ Eq(3*Derivative(g(t), t), -8*f(t) + 8*h(t)),
+ Eq(4*Derivative(h(t), t), 6*f(t) - 6*g(t))]
+ sol15 = [Eq(f(t), C1 - sin(sqrt(29)*t)*(sqrt(29)*C2*Rational(6, 13) + C3*Rational(-16, 13)) -
+ cos(sqrt(29)*t)*(C2*Rational(16, 13) + sqrt(29)*C3*Rational(6, 13))),
+ Eq(g(t), C1 + sin(sqrt(29)*t)*(sqrt(29)*C2*Rational(8, 39) + C3*Rational(16, 13)) -
+ cos(sqrt(29)*t)*(C2*Rational(16, 13) + sqrt(29)*C3*Rational(-8, 39))),
+ Eq(h(t), C1 + C2*cos(sqrt(29)*t) - C3*sin(sqrt(29)*t))]
+ assert dsolve(eqs15) == sol15
+ assert checksysodesol(eqs15, sol15) == (True, [0, 0, 0])
eq16 = (Eq(diff(x(t), t), 21*x(t)), Eq(diff(y(t), t), 17*x(t) + 3*y(t)),
Eq(diff(z(t), t), 5*x(t) + 7*y(t) + 9*z(t)))
@@ -799,39 +854,45 @@ def test_sysode_linear_neq_order1_type1():
assert dsolve(eq16) == sol16
assert checksysodesol(eq16, sol16) == (True, [0, 0, 0])
- eq17 = (Eq(diff(x(t), t), 3*y(t) - 11*z(t)), Eq(diff(y(t), t), 7*z(t) - 3*x(t)), Eq(diff(z(t), t), 11*x(t) - 7*y(t)))
- sol17 = [Eq(x(t), 7*C1/3 - 21*C2*cos(sqrt(179)*t)/170 + 21*C3*sin(sqrt(179)*t)/170 - 11*sqrt(179)*(
- C2*sin(sqrt(179)*t) + C3*cos(sqrt(179)*t))/170),
- Eq(y(t), 11*C1/3 - 33*C2*cos(sqrt(179)*t)/170 + 33*C3*sin(sqrt(179)*t)/170 + 7*sqrt(179)*(
- C2*sin(sqrt(179)*t) + C3*cos(sqrt(179)*t))/170),
+ eqs17 = [Eq(Derivative(x(t), t), 3*y(t) - 11*z(t)),
+ Eq(Derivative(y(t), t), -3*x(t) + 7*z(t)),
+ Eq(Derivative(z(t), t), 11*x(t) - 7*y(t))]
+ sol17 = [Eq(x(t), C1*Rational(7, 3) - sin(sqrt(179)*t)*(sqrt(179)*C2*Rational(11, 170) + C3*Rational(-21,
+ 170)) - cos(sqrt(179)*t)*(C2*Rational(21, 170) + sqrt(179)*C3*Rational(11, 170))),
+ Eq(y(t), C1*Rational(11, 3) + sin(sqrt(179)*t)*(sqrt(179)*C2*Rational(7, 170) + C3*Rational(33,
+ 170)) - cos(sqrt(179)*t)*(C2*Rational(33, 170) + sqrt(179)*C3*Rational(-7, 170))),
Eq(z(t), C1 + C2*cos(sqrt(179)*t) - C3*sin(sqrt(179)*t))]
-
- assert dsolve(eq17) == sol17
- assert checksysodesol(eq17, sol17) == (True, [0, 0, 0])
-
- eq18 = (Eq(3*diff(x(t), t), 4*5*(y(t) - z(t))), Eq(4*diff(y(t), t), 3*5*(z(t) - x(t))),
- Eq(5*diff(z(t), t), 3*4*(x(t) - y(t))))
- sol18 = [Eq(x(t), C1 - C2*cos(5*sqrt(2)*t) + C3*sin(5*sqrt(2)*t) - 4*sqrt(2)*(C2*sin(5*sqrt(2)*t) + C3*cos(5*sqrt(2)*t))/3),
- Eq(y(t), C1 - C2*cos(5*sqrt(2)*t) + C3*sin(5*sqrt(2)*t) + 3*sqrt(2)*(C2*sin(5*sqrt(2)*t) + C3*cos(5*sqrt(2)*t))/4),
+ assert dsolve(eqs17) == sol17
+ assert checksysodesol(eqs17, sol17) == (True, [0, 0, 0])
+
+ eqs18 = [Eq(3*Derivative(x(t), t), 20*y(t) - 20*z(t)),
+ Eq(4*Derivative(y(t), t), -15*x(t) + 15*z(t)),
+ Eq(5*Derivative(z(t), t), 12*x(t) - 12*y(t))]
+ sol18 = [Eq(x(t), C1 - sin(5*sqrt(2)*t)*(sqrt(2)*C2*Rational(4, 3) - C3) - cos(5*sqrt(2)*t)*(C2 +
+ sqrt(2)*C3*Rational(4, 3))),
+ Eq(y(t), C1 + sin(5*sqrt(2)*t)*(sqrt(2)*C2*Rational(3, 4) + C3) - cos(5*sqrt(2)*t)*(C2 +
+ sqrt(2)*C3*Rational(-3, 4))),
Eq(z(t), C1 + C2*cos(5*sqrt(2)*t) - C3*sin(5*sqrt(2)*t))]
-
- assert dsolve(eq18) == sol18
- assert checksysodesol(eq18, sol18) == (True, [0, 0, 0])
-
- eq19 = (Eq(diff(x(t), t), 4*x(t) - z(t)), Eq(diff(y(t), t), 2*x(t) + 2*y(t) - z(t)), Eq(diff(z(t), t), 3*x(t) + y(t)))
- sol19 = [Eq(x(t), (C1 + C2*t**2/2 + 2*C2*t + C2 + C3*t + 2*C3)*exp(2*t)),
- Eq(y(t), (C1 + C2*t**2/2 + 2*C2*t + C3*t + 2*C3)*exp(2*t)),
- Eq(z(t), (2*C1 + C2*t**2 + 3*C2*t + 2*C3*t + 3*C3)*exp(2*t))]
- assert dsolve(eq19) == sol19
- assert checksysodesol(eq19, sol19) == (True, [0, 0, 0])
-
- eq20 = (Eq(diff(x(t), t), 4*x(t) - y(t) - 2*z(t)), Eq(diff(y(t), t), 2*x(t) + y(t) - 2*z(t)),
- Eq(diff(z(t), t), 5*x(t) - 3*z(t)))
- sol20 = [Eq(x(t), C1*exp(2*t) - 3*C2*sin(t)/5 - C2*cos(t)/5 - C3*sin(t)/5 + 3*C3*cos(t)/5),
- Eq(y(t), -3*C2*sin(t)/5 - C2*cos(t)/5 - C3*sin(t)/5 + 3*C3*cos(t)/5),
+ assert dsolve(eqs18) == sol18
+ assert checksysodesol(eqs18, sol18) == (True, [0, 0, 0])
+
+ eqs19 = [Eq(Derivative(x(t), t), 4*x(t) - z(t)),
+ Eq(Derivative(y(t), t), 2*x(t) + 2*y(t) - z(t)),
+ Eq(Derivative(z(t), t), 3*x(t) + y(t))]
+ sol19 = [Eq(x(t), C2*t**2*exp(2*t)/2 + t*(2*C2 + C3)*exp(2*t) + (C1 + C2 + 2*C3)*exp(2*t)),
+ Eq(y(t), C2*t**2*exp(2*t)/2 + t*(2*C2 + C3)*exp(2*t) + (C1 + 2*C3)*exp(2*t)),
+ Eq(z(t), C2*t**2*exp(2*t) + t*(3*C2 + 2*C3)*exp(2*t) + (2*C1 + 3*C3)*exp(2*t))]
+ assert dsolve(eqs19) == sol19
+ assert checksysodesol(eqs19, sol19) == (True, [0, 0, 0])
+
+ eqs20 = [Eq(Derivative(x(t), t), 4*x(t) - y(t) - 2*z(t)),
+ Eq(Derivative(y(t), t), 2*x(t) + y(t) - 2*z(t)),
+ Eq(Derivative(z(t), t), 5*x(t) - 3*z(t))]
+ sol20 = [Eq(x(t), C1*exp(2*t) - sin(t)*(C2*Rational(3, 5) + C3/5) - cos(t)*(C2/5 + C3*Rational(-3, 5))),
+ Eq(y(t), -sin(t)*(C2*Rational(3, 5) + C3/5) - cos(t)*(C2/5 + C3*Rational(-3, 5))),
Eq(z(t), C1*exp(2*t) - C2*sin(t) + C3*cos(t))]
- assert dsolve(eq20) == sol20
- assert checksysodesol(eq20, sol20) == (True, [0, 0, 0])
+ assert dsolve(eqs20) == sol20
+ assert checksysodesol(eqs20, sol20) == (True, [0, 0, 0])
eq21 = (Eq(diff(x(t), t), 9*y(t)), Eq(diff(y(t), t), 12*x(t)))
sol21 = [Eq(x(t), -sqrt(3)*C1*exp(-6*sqrt(3)*t)/2 + sqrt(3)*C2*exp(6*sqrt(3)*t)/2),
@@ -840,21 +901,21 @@ def test_sysode_linear_neq_order1_type1():
assert dsolve(eq21) == sol21
assert checksysodesol(eq21, sol21) == (True, [0, 0])
- eq22 = (Eq(Derivative(x(t), t), 2*x(t) + 4*y(t)), Eq(Derivative(y(t), t), 12*x(t) + 41*y(t)))
- sol22 = [Eq(x(t), C1*(-Rational(13, 8) + sqrt(1713)/24)*exp(t*(sqrt(1713)/2 + Rational(43, 2))) + C2*(
- - sqrt(1713)/24 - Rational(13, 8))*exp(t*(Rational(43, 2) - sqrt(1713)/2))),
- Eq(y(t), C1*exp(t*(sqrt(1713)/2 + Rational(43, 2))) + C2*exp(t*(Rational(43, 2) - sqrt(1713)/2)))]
- assert dsolve(eq22) == sol22
- assert checksysodesol(eq22, sol22) == (True, [0, 0])
-
- eq23 = (Eq(Derivative(x(t), t), x(t) + y(t)), Eq(Derivative(y(t), t), -2*x(t) + 2*y(t)))
- sol23 = [
- Eq(x(t), (C1*cos(sqrt(7)*t/2)/4 - C2*sin(sqrt(7)*t/2)/4 + sqrt(7)*(C1*sin(sqrt(7)*t/2) + C2*cos(
- sqrt(7)*t/2))/4)*exp(3*t/2)),
- Eq(y(t), (C1*cos(sqrt(7)*t/2) - C2*sin(sqrt(7)*t/2))*exp(3*t/2))
- ]
- assert dsolve(eq23) == sol23
- assert checksysodesol(eq23, sol23) == (True, [0, 0])
+ eqs22 = [Eq(Derivative(x(t), t), 2*x(t) + 4*y(t)),
+ Eq(Derivative(y(t), t), 12*x(t) + 41*y(t))]
+ sol22 = [Eq(x(t), C1*(39 - sqrt(1713))*exp(t*(sqrt(1713) + 43)/2)*Rational(-1, 24) + C2*(39 +
+ sqrt(1713))*exp(t*(43 - sqrt(1713))/2)*Rational(-1, 24)),
+ Eq(y(t), C1*exp(t*(sqrt(1713) + 43)/2) + C2*exp(t*(43 - sqrt(1713))/2))]
+ assert dsolve(eqs22) == sol22
+ assert checksysodesol(eqs22, sol22) == (True, [0, 0])
+
+ eqs23 = [Eq(Derivative(x(t), t), x(t) + y(t)),
+ Eq(Derivative(y(t), t), -2*x(t) + 2*y(t))]
+ sol23 = [Eq(x(t), (C1/4 + sqrt(7)*C2/4)*cos(sqrt(7)*t/2)*exp(t*Rational(3, 2)) +
+ sin(sqrt(7)*t/2)*(sqrt(7)*C1/4 + C2*Rational(-1, 4))*exp(t*Rational(3, 2))),
+ Eq(y(t), C1*cos(sqrt(7)*t/2)*exp(t*Rational(3, 2)) - C2*sin(sqrt(7)*t/2)*exp(t*Rational(3, 2)))]
+ assert dsolve(eqs23) == sol23
+ assert checksysodesol(eqs23, sol23) == (True, [0, 0])
# Regression test case for issue #15474
# https://github.com/sympy/sympy/issues/15474
@@ -866,22 +927,20 @@ def test_sysode_linear_neq_order1_type1():
# Regression test case for issue #19150
# https://github.com/sympy/sympy/issues/19150
- eq25 = [Eq(Derivative(f(t), t), 0),
- Eq(Derivative(g(t), t), 1/(c*b)*(-2*g(t) + x(t) + f(t))),
- Eq(Derivative(x(t), t), 1/(c*b)*(-2*x(t) + g(t) + y(t))),
- Eq(Derivative(y(t), t), 1/(c*b)*(-2*y(t) + x(t) + h(t))),
- Eq(Derivative(h(t), t), 0)]
-
+ eqs25 = [Eq(Derivative(f(t), t), 0),
+ Eq(Derivative(g(t), t), (f(t) - 2*g(t) + x(t))/(b*c)),
+ Eq(Derivative(x(t), t), (g(t) - 2*x(t) + y(t))/(b*c)),
+ Eq(Derivative(y(t), t), (h(t) + x(t) - 2*y(t))/(b*c)),
+ Eq(Derivative(h(t), t), 0)]
sol25 = [Eq(f(t), -3*C1 + 4*C2),
- Eq(g(t), -2*C1 + 3*C2 - C3*exp(-2*t/(b*c)) + C4*exp(t*(-2 - sqrt(2))/(b*c)) +
- C5*exp(t*(-2 + sqrt(2))/(b*c))),
- Eq(x(t), -C1 + 2*C2 - sqrt(2)*C4*exp(t*(-2 - sqrt(2))/(b*c)) + sqrt(2)*C5*exp(t*(-2 + sqrt(2))/(b*c))),
- Eq(y(t), C2 + C3*exp(-2*t/(b*c)) + C4*exp(t*(-2 - sqrt(2))/(b*c)) + C5*exp(
- t*(-2 + sqrt(2))/(b*c))),
+ Eq(g(t), -2*C1 + 3*C2 - C3*exp(-2*t/(b*c)) + C4*exp(-t*(sqrt(2) + 2)/(b*c)) + C5*exp(-t*(2 -
+ sqrt(2))/(b*c))),
+ Eq(x(t), -C1 + 2*C2 - sqrt(2)*C4*exp(-t*(sqrt(2) + 2)/(b*c)) + sqrt(2)*C5*exp(-t*(2 -
+ sqrt(2))/(b*c))),
+ Eq(y(t), C2 + C3*exp(-2*t/(b*c)) + C4*exp(-t*(sqrt(2) + 2)/(b*c)) + C5*exp(-t*(2 - sqrt(2))/(b*c))),
Eq(h(t), C1)]
-
- assert dsolve(eq25) == sol25
- assert checksysodesol(eq25, sol25) == (True, [0, 0, 0, 0, 0])
+ assert dsolve(eqs25) == sol25
+ assert checksysodesol(eqs25, sol25) == (True, [0, 0, 0, 0, 0])
eq26 = [Eq(Derivative(f(t), t), 2*f(t)), Eq(Derivative(g(t), t), 3*f(t) + 7*g(t))]
sol26 = [Eq(f(t), -5*C1*exp(2*t)/3), Eq(g(t), C1*exp(2*t) + C2*exp(7*t))]
@@ -934,7 +993,7 @@ def test_sysode_linear_neq_order1_type1():
assert checksysodesol(eq35, sol35) == (True, [0, 0])
eq36 = [Eq(Derivative(f(t), t), I*g(t)), Eq(Derivative(g(t), t), 0)]
- sol36 = [Eq(f(t), I*(C1 + C2*t)), Eq(g(t), C2)]
+ sol36 = [Eq(f(t), I*C1 + I*C2*t), Eq(g(t), C2)]
assert dsolve(eq36) == sol36
assert checksysodesol(eq36, sol36) == (True, [0, 0])
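The hunks above all follow the same two-step pattern: solve the system with `dsolve` and substitute the result back with `checksysodesol`, which returns `(True, [0, ...])` when every residual vanishes. A minimal runnable sketch of that pattern, using the `eq26` system that appears verbatim in this diff:

```python
# Minimal sketch of the dsolve/checksysodesol pattern used throughout
# these tests; the system is eq26 from the hunk above.
from sympy import Eq, Derivative, Function, symbols, dsolve
from sympy.solvers.ode.subscheck import checksysodesol

t = symbols('t')
f, g = symbols('f g', cls=Function)

eqs = [Eq(Derivative(f(t), t), 2*f(t)),
       Eq(Derivative(g(t), t), 3*f(t) + 7*g(t))]

# dsolve returns a list of Eq objects, one per unknown function.
sol = dsolve(eqs)

# checksysodesol substitutes sol into eqs; zero residuals confirm it.
assert checksysodesol(eqs, sol) == (True, [0, 0])
```

The tests pin the exact closed form of `sol` as well, but since the choice of integration constants is a solver convention, `checksysodesol` is the part that verifies correctness independently of that convention.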
@@ -959,151 +1018,102 @@ def test_sysode_linear_neq_order1_type2():
f, g, h, k = symbols('f g h k', cls=Function)
x, t, a, b, c, d, y = symbols('x t a b c d y')
- eq1 = [Eq(diff(f(x), x), f(x) + g(x) + 5),
- Eq(diff(g(x), x), -f(x) - g(x) + 7)]
- sol1 = [Eq(f(x), C1 + C2*x + C2 + x*Integral(12, x) + Integral(12, x) + Integral(-12*x - 7, x)),
- Eq(g(x), -C1 - C2*x - x*Integral(12, x) - Integral(-12*x - 7, x))]
- assert dsolve(eq1) == sol1
- assert checksysodesol(eq1, sol1) == (True, [0, 0])
-
- eq2 = [Eq(diff(f(x), x), f(x) + g(x) + 5),
- Eq(diff(g(x), x), f(x) + g(x) + 7)]
- sol2 = [Eq(f(x), -C1 + C2*exp(2*x) + exp(2*x)*Integral(6*exp(-2*x), x) - Integral(1, x)),
- Eq(g(x), C1 + C2*exp(2*x) + exp(2*x)*Integral(6*exp(-2*x), x) + Integral(1, x))]
- assert dsolve(eq2) == sol2
- assert checksysodesol(eq2, sol2) == (True, [0, 0])
-
- eq3 = [Eq(diff(f(x), x), f(x) + 5), Eq(diff(g(x), x), f(x) + 7)]
- sol3 = [Eq(f(x), C1*exp(x) + exp(x)*Integral(5*exp(-x), x)),
- Eq(g(x), C1*exp(x) + C2 + exp(x)*Integral(5*exp(-x), x) + Integral(2, x))]
- assert dsolve(eq3) == sol3
- assert checksysodesol(eq3, sol3) == (True, [0, 0])
-
- eq4 = [Eq(diff(f(x), x), f(x) + exp(x)), Eq(diff(g(x), x), f(x) + g(x) + x*exp(x))]
- sol4 = [Eq(f(x), (C1 + Integral(1, x))*exp(x)), Eq(g(x), (C1*x + C2 + x*Integral(1, x) + Integral(0, x))*exp(x))]
- assert dsolve(eq4) == sol4
- assert checksysodesol(eq4, sol4) == (True, [0, 0])
-
- eq5 = [Eq(diff(f(x), x), f(x) + g(x) + 5*x), Eq(diff(g(x), x), f(x) - g(x))]
- sol5 = [Eq(f(x), C1*exp(sqrt(2)*x) + sqrt(2)*C1*exp(sqrt(2)*x) + (-sqrt(2)*C2 + C2 - sqrt(2)*Integral(
- -5*sqrt(2)*x*exp(sqrt(2)*x)/4, x) + Integral(-5*sqrt(2)*x*exp(sqrt(2)*x)/4, x))*exp(-sqrt(2)*x) +
- exp(sqrt(2)*x)*Integral(5*sqrt(2)*x*exp(-sqrt(2)*x)/4, x) + sqrt(2)*exp(sqrt(2)*x)*Integral(
- 5*sqrt(2)*x*exp(-sqrt(2)*x)/4, x)),
- Eq(g(x), C1*exp(sqrt(2)*x) + (C2 + Integral(-5*sqrt(2)*x*exp(sqrt(2)*x)/4, x))*exp(-sqrt(2)*x) + exp(
- sqrt(2)*x)*Integral(5*sqrt(2)*x*exp(-sqrt(2)*x)/4, x))]
- assert dsolve(eq5) == sol5
- assert checksysodesol(eq5, sol5) == (True, [0, 0])
+ eqs1 = [Eq(Derivative(f(x), x), f(x) + g(x) + 5),
+ Eq(Derivative(g(x), x), -f(x) - g(x) + 7)]
+ sol1 = [Eq(f(x), C1 + C2 + 6*x**2 + x*(C2 + 5)),
+ Eq(g(x), -C1 - 6*x**2 - x*(C2 - 7))]
+ assert dsolve(eqs1) == sol1
+ assert checksysodesol(eqs1, sol1) == (True, [0, 0])
+
+ eqs2 = [Eq(Derivative(f(x), x), f(x) + g(x) + 5),
+ Eq(Derivative(g(x), x), f(x) + g(x) + 7)]
+ sol2 = [Eq(f(x), -C1 + C2*exp(2*x) - x - 3),
+ Eq(g(x), C1 + C2*exp(2*x) + x - 3)]
+ assert dsolve(eqs2) == sol2
+ assert checksysodesol(eqs2, sol2) == (True, [0, 0])
+
+ eqs3 = [Eq(Derivative(f(x), x), f(x) + 5),
+ Eq(Derivative(g(x), x), f(x) + 7)]
+ sol3 = [Eq(f(x), C1*exp(x) - 5),
+ Eq(g(x), C1*exp(x) + C2 + 2*x - 5)]
+ assert dsolve(eqs3) == sol3
+ assert checksysodesol(eqs3, sol3) == (True, [0, 0])
+
+ eqs4 = [Eq(Derivative(f(x), x), f(x) + exp(x)),
+ Eq(Derivative(g(x), x), x*exp(x) + f(x) + g(x))]
+ sol4 = [Eq(f(x), C1*exp(x) + x*exp(x)),
+ Eq(g(x), C1*x*exp(x) + C2*exp(x) + x**2*exp(x))]
+ assert dsolve(eqs4) == sol4
+ assert checksysodesol(eqs4, sol4) == (True, [0, 0])
- eq6 = [Eq(diff(f(x), x), -9*f(x) - 4*g(x)),
- Eq(diff(g(x), x), -4*g(x)),
- Eq(diff(h(x), x), h(x) + exp(x))]
- sol6 = [Eq(f(x), (-4*C1/5 - 4*Integral(0, x)/5)*exp(-4*x) + (C2 + Integral(0, x))*exp(-9*x)),
- Eq(g(x), (C1 + Integral(0, x))*exp(-4*x)), Eq(h(x), (C3 + Integral(1, x))*exp(x))]
- assert dsolve(eq6) == sol6
- assert checksysodesol(eq6, sol6) == (True, [0, 0, 0])
+ eqs5 = [Eq(Derivative(f(x), x), 5*x + f(x) + g(x)),
+ Eq(Derivative(g(x), x), f(x) - g(x))]
+ sol5 = [Eq(f(x), C1*(1 + sqrt(2))*exp(sqrt(2)*x) + C2*(1 - sqrt(2))*exp(-sqrt(2)*x) + x*Rational(-5, 2) +
+ Rational(-5, 2)),
+ Eq(g(x), C1*exp(sqrt(2)*x) + C2*exp(-sqrt(2)*x) + x*Rational(-5, 2))]
+ assert dsolve(eqs5) == sol5
+ assert checksysodesol(eqs5, sol5) == (True, [0, 0])
+
+ eqs6 = [Eq(Derivative(f(x), x), -9*f(x) - 4*g(x)),
+ Eq(Derivative(g(x), x), -4*g(x)),
+ Eq(Derivative(h(x), x), h(x) + exp(x))]
+ sol6 = [Eq(f(x), C1*exp(-4*x)*Rational(-4, 5) + C2*exp(-9*x)),
+ Eq(g(x), C1*exp(-4*x)),
+ Eq(h(x), C3*exp(x) + x*exp(x))]
+ assert dsolve(eqs6) == sol6
+ assert checksysodesol(eqs6, sol6) == (True, [0, 0, 0])
# Regression test case for issue #8859
# https://github.com/sympy/sympy/issues/8859
- eq7 = [Eq(diff(f(t), t), f(t) + 3*t), Eq(diff(g(t), t), g(t))]
- sol7 = [Eq(f(t), C1*exp(t) + exp(t)*Integral(3*t*exp(-t), t)), Eq(g(t), C2*exp(t) + exp(t)*Integral(0, t))]
- assert dsolve(eq7) == sol7
- assert checksysodesol(eq7, sol7) == (True, [0, 0])
+ eqs7 = [Eq(Derivative(f(t), t), 3*t + f(t)),
+ Eq(Derivative(g(t), t), g(t))]
+ sol7 = [Eq(f(t), C1*exp(t) - 3*t - 3),
+ Eq(g(t), C2*exp(t))]
+ assert dsolve(eqs7) == sol7
+ assert checksysodesol(eqs7, sol7) == (True, [0, 0])
# Regression test case for issue #8567
# https://github.com/sympy/sympy/issues/8567
- eq8 = [Eq(f(t).diff(t), f(t) + 2*g(t)), Eq(g(t).diff(t), -2*f(t) + g(t) + 2*exp(t))]
- sol8 = [Eq(f(t), (C1*sin(2*t) + C2*cos(2*t) + sin(2*t)*Integral(-2*sin(2*t)**2/cos(2*t) + 2/cos(2*t), t) +
- cos(2*t)*Integral(-2*sin(2*t), t))*exp(t)),
- Eq(g(t), (C1*cos(2*t) - C2*sin(2*t) - sin(2*t)*Integral(-2*sin(2*t), t) +
- cos(2*t)*Integral(-2*sin(2*t)**2/cos(2*t) + 2/cos(2*t), t))*exp(t))]
- assert dsolve(eq8) == sol8
- assert checksysodesol(eq8, sol8) == (True, [0, 0])
+ eqs8 = [Eq(Derivative(f(t), t), f(t) + 2*g(t)),
+ Eq(Derivative(g(t), t), -2*f(t) + g(t) + 2*exp(t))]
+ sol8 = [Eq(f(t), C1*exp(t)*sin(2*t) + C2*exp(t)*cos(2*t) + exp(t)*cos(2*t)**2 +
+ 2*exp(t)*sin(2*t)*tan(t)/(tan(t)**2 + 1)),
+ Eq(g(t), C1*exp(t)*cos(2*t) - C2*exp(t)*sin(2*t) - exp(t)*sin(2*t)*cos(2*t) +
+ 2*exp(t)*cos(2*t)*tan(t)/(tan(t)**2 + 1))]
+ assert dsolve(eqs8) == sol8
+ assert checksysodesol(eqs8, sol8) == (True, [0, 0])
# Regression test case for issue #19150
# https://github.com/sympy/sympy/issues/19150
- eq9 = [Eq(Derivative(f(t), t), 1/(a*b)*(-2*f(t) + g(t) + c)),
- Eq(Derivative(g(t), t), 1/(a*b)*(-2*g(t) + f(t) + h(t))),
- Eq(Derivative(h(t), t), 1/(a*b)*(-2*h(t) + g(t) + d))]
- sol9 = [Eq(f(t), (-C1 + C2*exp(-sqrt(2)*t/(a*b)) + C3*exp(sqrt(2)*t/(a*b)) + exp(sqrt(2)*t/(a*b))*Integral(
- c*exp(-sqrt(2)*t/(a*b) + 2*t/(a*b))/(4*a*b) + d*exp(-sqrt(2)*t/(a*b) + 2*t/(a*b))/(4*a*b), t) - Integral(
- -c*exp(2*t/(a*b))/(2*a*b) + d*exp(2*t/(a*b))/(2*a*b), t) + exp(-sqrt(2)*t/(a*b))*Integral(
- c*exp(sqrt(2)*t/(a*b) + 2*t/(a*b))/(4*a*b) + d*exp(sqrt(2)*t/(a*b) + 2*t/(a*b))/(4*a*b), t))*exp(
- -2*t/(a*b))),
- Eq(g(t), (-sqrt(2)*C2*exp(-sqrt(2)*t/(a*b)) + sqrt(2)*C3*exp(sqrt(2)*t/(a*b)) +
- sqrt(2)*exp(sqrt(2)*t/(a*b))*Integral(c*exp(-sqrt(2)*t/(a*b) + 2*t/(a*b))/(4*a*b) + d*exp(-sqrt(2)*t/(a*b) +
- 2*t/(a*b))/(4*a*b), t) - sqrt(2)*exp(-sqrt(2)*t/(a*b))*Integral(c*exp(sqrt(2)*t/(a*b) + 2*t/(a*b))/(4*a*b) +
- d*exp(sqrt(2)*t/(a*b) + 2*t/(a*b))/(4*a*b), t))*exp(-2*t/(a*b))),
- Eq(h(t), (C1 + C2*exp(-sqrt(2)*t/(a*b)) +
- C3*exp(sqrt(2)*t/(a*b)) + exp(sqrt(2)*t/(a*b))*Integral(c*exp(-sqrt(2)*t/(a*b) + 2*t/(a*b))/(4*a*b) +
- d*exp(-sqrt(2)*t/(a*b) + 2*t/(a*b))/(4*a*b), t) + Integral(-c*exp(2*t/(a*b))/(2*a*b) + d*exp(2*t/(a*b))/(
- 2*a*b), t) + exp(-sqrt(2)*t/(a*b))*Integral(c*exp(sqrt(2)*t/(a*b) + 2*t/(a*b))/(4*a*b) + d*exp(
- sqrt(2)*t/(a*b) + 2*t/(a*b))/(4*a*b), t))*exp(-2*t/(a*b)))]
- assert dsolve(eq9) == sol9
- assert checksysodesol(eq9, sol9) == (True, [0, 0, 0])
-
- # Simpsol and Solsimp testing
- # Note: To remove these dependencies
- _x1 = sqrt(2)
- _x2 = exp(2*_x1*t/(a*b))
- _x3 = 4*C3*_x2*a*b
- _x4 = exp(2*t/(a*b))
- _x5 = exp(_x1*t/(a*b))
- _x6 = Integral(_x4*_x5, t)
- _x7 = exp(-2*t/(a*b))
- _x8 = exp(-_x1*t/(a*b))
- _x9 = Integral(_x4*_x8, t)
- _x10 = _x2*_x9*d
- _x11 = 4*C2*a*b
- sol9_simpsol = [
- Eq(f(t), _x7*_x8*(-4*C1*_x5*a*b + _x10 + _x11 + _x2*_x9*c + _x3 + 2*_x5*c*Integral(_x4,
- t) - 2*_x5*d*Integral(
- _x4, t) + _x6*c + _x6*d)/(4*a*b)),
- Eq(g(t), _x7*_x8*(
- -4*C2*_x1*a*b + 4*C3*_x1*_x2*a*b + _x1*_x2*_x9*c + _x1*_x2*_x9*d - _x1*_x6*c - _x1*_x6*d)/(
- 4*a*b)),
- Eq(h(t), _x7*_x8*(4*C1*_x5*a*b + _x10 + _x11 + _x2*_x9*c + _x3 - 2*_x5*c*Integral(_x4,
- t) + 2*_x5*d*Integral(
- _x4, t) + _x6*c + _x6*d)/(4*a*b)),
- ]
- assert [_simpsol(s) for s in sol9] == sol9_simpsol
-
- _x1 = sqrt(2)
- _x2 = 1/b
- _x3 = exp(_x2*t*(2 - _x1)/a)
- _x4 = exp(_x2*t*(_x1 + 2)/a)
- _x5 = exp(-2*_x2*t/a)
- _x6 = exp(2*_x2*t/a)
- _x7 = Integral(-_x2*_x6*c/(2*a) + _x2*_x6*d/(2*a), t)
- _x8 = exp(_x2*t*(_x1 - 2)/a)
- _x9 = exp(-_x2*t*(_x1 + 2)/a)
- _x10 = Integral(_x2*_x3*c/(4*a) + _x2*_x3*d/(4*a), t)
- _x11 = Integral(_x2*_x4*c/(4*a) + _x2*_x4*d/(4*a), t)
- sol9_solsimp = [
- Eq(f(t), -C1*_x5 + C2*_x9 + C3*_x8 + _x10*_x8 + _x11*_x9 - _x5*_x7),
- Eq(g(t), -C2*_x1*_x9 + C3*_x1*_x8 + _x1*_x10*_x8 - _x1*_x11*_x9),
- Eq(h(t), C1*_x5 + C2*_x9 + C3*_x8 + _x10*_x8 + _x11*_x9 + _x5*_x7),
- ]
- assert [Eq(s.lhs, _solsimp(s.rhs, t)) for s in sol9] == sol9_solsimp
+ eqs9 = [Eq(Derivative(f(t), t), (c - 2*f(t) + g(t))/(a*b)),
+ Eq(Derivative(g(t), t), (f(t) - 2*g(t) + h(t))/(a*b)),
+ Eq(Derivative(h(t), t), (d + g(t) - 2*h(t))/(a*b))]
+ sol9 = [Eq(f(t), -C1*exp(-2*t/(a*b)) + C2*exp(-t*(sqrt(2) + 2)/(a*b)) + C3*exp(-t*(2 - sqrt(2))/(a*b)) +
+ Mul(Rational(1, 4), 3*c + d, evaluate=False)),
+ Eq(g(t), -sqrt(2)*C2*exp(-t*(sqrt(2) + 2)/(a*b)) + sqrt(2)*C3*exp(-t*(2 - sqrt(2))/(a*b)) +
+ Mul(Rational(1, 2), c + d, evaluate=False)),
+ Eq(h(t), C1*exp(-2*t/(a*b)) + C2*exp(-t*(sqrt(2) + 2)/(a*b)) + C3*exp(-t*(2 - sqrt(2))/(a*b)) +
+ Mul(Rational(1, 4), c + 3*d, evaluate=False))]
+ assert dsolve(eqs9) == sol9
+ assert checksysodesol(eqs9, sol9) == (True, [0, 0, 0])
# Regression test case for issue #16635
# https://github.com/sympy/sympy/issues/16635
- eq10 = [Eq(f(t).diff(t), f(t) - g(t) + 15*t - 10), Eq(g(t).diff(t), f(t) - g(t) - 15*t - 5)]
- sol10 = [Eq(f(t), C1 + C2*t + C2 + t*Integral(30*t - 5, t) + Integral(30*t - 5, t) + Integral(-30*t**2 - 10*t - 5, t)),
- Eq(g(t), C1 + C2*t + t*Integral(30*t - 5, t) + Integral(-30*t**2 - 10*t - 5, t))]
- assert dsolve(eq10) == sol10
- assert checksysodesol(eq10, sol10) == (True, [0, 0])
-
- # Multiple equations
- eq1 = [Eq(Derivative(f(t), t)**2 - 2*Derivative(f(t), t) + 1, 4),
- Eq(-y*f(t) + Derivative(g(t), t), 0)]
- sol1 = [[Eq(f(t), C1 + Integral(-1, t)),
- Eq(g(t), C1*t*y + C2*y + t*y*Integral(-1, t) + y*Integral(t, t))],
- [Eq(f(t), C1 + Integral(3, t)),
- Eq(g(t), C1*t*y + C2*y + t*y*Integral(3, t) + y*Integral(-3*t, t))]]
- assert dsolve(eq1) == sol1
- for sol in sol1:
- assert checksysodesol(eq1, sol) == (True, [0, 0])
-
+ eqs10 = [Eq(Derivative(f(t), t), 15*t + f(t) - g(t) - 10),
+ Eq(Derivative(g(t), t), -15*t + f(t) - g(t) - 5)]
+ sol10 = [Eq(f(t), C1 + C2 + 5*t**3 + 5*t**2 + t*(C2 - 10)),
+ Eq(g(t), C1 + 5*t**3 - 10*t**2 + t*(C2 - 5))]
+ assert dsolve(eqs10) == sol10
+ assert checksysodesol(eqs10, sol10) == (True, [0, 0])
+
+ # Multiple solutions
+ eqs11 = [Eq(Derivative(f(t), t)**2 - 2*Derivative(f(t), t) + 1, 4),
+ Eq(-y*f(t) + Derivative(g(t), t), 0)]
+ sol11 = [[Eq(f(t), C1 - t), Eq(g(t), C1*t*y + C2*y + t**2*y*Rational(-1, 2))],
+ [Eq(f(t), C1 + 3*t), Eq(g(t), C1*t*y + C2*y + t**2*y*Rational(3, 2))]]
+ assert dsolve(eqs11) == sol11
+ for s11 in sol11:
+ assert checksysodesol(eqs11, s11) == (True, [0, 0])
# test case for issue #19831
# https://github.com/sympy/sympy/issues/19831
@@ -1117,26 +1127,25 @@ def test_sysode_linear_neq_order1_type2():
y = Function('y')
T = symbols('T')
- eq8 = (Eq(Derivative(y(t), t), x(t)),
- Eq(Derivative(x(t), t), n*( y(t) + 1)))
- sol = [
- Eq(y(t),
- (-((-T*sqrt(n)*exp(sqrt(n)*t_0)/2 - n*Integral(exp(sqrt(n)*t_0), t_0)/2 +
- x_0*exp(sqrt(n)*t_0)/2)/sqrt(n) + Integral(n*exp(sqrt(n)*t)/2, t)/sqrt(n)))
- *exp(-sqrt(n)*t) + (T*sqrt(n)*exp(-sqrt(n)*t_0)/2 - n*Integral(exp(-sqrt(n)*t_0)
- , t_0)/2 + x_0*exp(-sqrt(n)*t_0)/2)*exp(sqrt(n)*t)/sqrt(n)
- + exp(sqrt(n)*t)*Integral(n*exp(-sqrt(n)*t)/2, t)/sqrt(n)
- ),
- Eq(x(t),
- (T*sqrt(n)*exp(-sqrt(n)*t_0)/2 - n*Integral(exp(-sqrt(n)*t_0), t_0)/2 +
- x_0*exp(-sqrt(n)*t_0)/2)*exp(sqrt(n)*t) + (-T*sqrt(n)*exp(sqrt(n)*t_0)/2 -
- n*Integral(exp(sqrt(n)*t_0), t_0)/2 + x_0*exp(sqrt(n)*t_0)/2 +
- Integral(n*exp(sqrt(n)*t)/2, t))*exp(-sqrt(n)*t)+ exp(sqrt(n)*t)
- *Integral(n*exp(-sqrt(n)*t)/2, t)
- ),
- ]
- assert dsolve(eq8, ics={y(t0): T, x(t0): x0}) == sol
- assert checksysodesol(eq8, sol) == (True, [0, 0])
+ eqs12 = [Eq(Derivative(y(t), t), x(t)),
+ Eq(Derivative(x(t), t), n*(y(t) + 1))]
+ sol12 = [Eq(y(t), C1*exp(sqrt(n)*t)*n**Rational(-1, 2) - C2*exp(-sqrt(n)*t)*n**Rational(-1, 2) - 1),
+ Eq(x(t), C1*exp(sqrt(n)*t) + C2*exp(-sqrt(n)*t))]
+ assert dsolve(eqs12) == sol12
+ assert checksysodesol(eqs12, sol12) == (True, [0, 0])
+
+ sol12b = [
+ Eq(y(t), (T*exp(-sqrt(n)*t_0)/2 + exp(-sqrt(n)*t_0)/2 +
+ x_0*exp(-sqrt(n)*t_0)/(2*sqrt(n)))*exp(sqrt(n)*t) +
+ (T*exp(sqrt(n)*t_0)/2 + exp(sqrt(n)*t_0)/2 -
+ x_0*exp(sqrt(n)*t_0)/(2*sqrt(n)))*exp(-sqrt(n)*t) - 1),
+ Eq(x(t), (T*sqrt(n)*exp(-sqrt(n)*t_0)/2 + sqrt(n)*exp(-sqrt(n)*t_0)/2
+ + x_0*exp(-sqrt(n)*t_0)/2)*exp(sqrt(n)*t)
+ - (T*sqrt(n)*exp(sqrt(n)*t_0)/2 + sqrt(n)*exp(sqrt(n)*t_0)/2 -
+ x_0*exp(sqrt(n)*t_0)/2)*exp(-sqrt(n)*t))
+ ]
+ assert dsolve(eqs12, ics={y(t0): T, x(t0): x0}) == sol12b
+ assert checksysodesol(eqs12, sol12b) == (True, [0, 0])
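The `eqs12`/`sol12b` case above exercises `dsolve` with the `ics=` keyword, which pins the integration constants from initial conditions. A small sketch of that interface on a simpler coupled system (the system and the initial values here are illustrative, not taken from the diff):

```python
# Sketch of dsolve's ics= keyword on a system, as exercised by eqs12
# above. The system f' = g, g' = f with f(0) = g(0) = 1 has the
# particular solution f = g = exp(t).
from sympy import Eq, Derivative, Function, symbols, dsolve
from sympy.solvers.ode.subscheck import checksysodesol

t = symbols('t')
f, g = symbols('f g', cls=Function)

eqs = [Eq(Derivative(f(t), t), g(t)),
       Eq(Derivative(g(t), t), f(t))]

# ics maps function values at a point to their required values;
# the returned solution has the constants already eliminated.
sol = dsolve(eqs, ics={f(0): 1, g(0): 1})

assert checksysodesol(eqs, sol) == (True, [0, 0])
assert sol[0].rhs.subs(t, 0) == 1
assert sol[1].rhs.subs(t, 0) == 1
```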
def test_sysode_linear_neq_order1_type3():
@@ -1144,25 +1153,26 @@ def test_sysode_linear_neq_order1_type3():
f, g, h, k = symbols('f g h k', cls=Function)
x, t, a = symbols('x t a')
r = symbols('r', real=True)
- eqs1 = [Eq(diff(f(r), r), f(r) + r*g(r)),
- Eq(diff(g(r), r),-r*f(r) + g(r))]
- sol1 = [Eq(f(r), (C1*sin(r**2/2) + C2*cos(r**2/2))*exp(r)), Eq(g(r), (C1*cos(r**2/2) - C2*sin(r**2/2))*exp(r))]
+ eqs1 = [Eq(Derivative(f(r), r), r*g(r) + f(r)),
+ Eq(Derivative(g(r), r), -r*f(r) + g(r))]
+ sol1 = [Eq(f(r), C1*exp(r)*sin(r**2/2) + C2*exp(r)*cos(r**2/2)),
+ Eq(g(r), C1*exp(r)*cos(r**2/2) - C2*exp(r)*sin(r**2/2))]
assert dsolve(eqs1) == sol1
assert checksysodesol(eqs1, sol1) == (True, [0, 0])
- eqs2 = [Eq(diff(f(x), x), x*f(x) + x**2*g(x)),
- Eq(diff(g(x), x), 2*x**2*f(x) + (x + 3*x**2)*g(x))]
- sol2 = [Eq(f(x), (3*C1/17 + 4*C2/17 + (8*C1 - 12*C2)/(51 + 17*sqrt(17)))*exp(x**2*(3*x + sqrt(17)*x + 3)/6) + (13*C1/51 +
- 2*C2/17 + (16*C1 - 24*C2)/(-663 + 153*sqrt(17)))*exp(x**2*(-sqrt(17)*x + 3*x + 3)/6)),
- Eq(g(x), (4*C1/17 - 6*C2/17 + (12*C1 + 16*C2)/(-51 + 17*sqrt(17)))*exp(x**2*(3*x + sqrt(17)*x + 3)/6) + (
- 13*C1/17 + 6*C2/17 + (-12*C1 - 16*C2)/(-51 + 17*sqrt(17)))*exp(x**2*(-sqrt(17)*x + 3*x + 3)/6))]
-
- assert [_simpsol(s) for s in dsolve(eqs2)] == sol2
+ eqs2 = [Eq(Derivative(f(x), x), x**2*g(x) + x*f(x)),
+ Eq(Derivative(g(x), x), 2*x**2*f(x) + (3*x**2 + x)*g(x))]
+ sol2 = [Eq(f(x), (sqrt(17)*C1/17 + C2*(17 - 3*sqrt(17))/34)*exp(x**3*(3 + sqrt(17))/6 + x**2/2) -
+ exp(x**3*(3 - sqrt(17))/6 + x**2/2)*(sqrt(17)*C1/17 + C2*(3*sqrt(17) + 17)*Rational(-1, 34))),
+ Eq(g(x), exp(x**3*(3 - sqrt(17))/6 + x**2/2)*(C1*(17 - 3*sqrt(17))/34 + sqrt(17)*C2*Rational(-2,
+ 17)) + exp(x**3*(3 + sqrt(17))/6 + x**2/2)*(C1*(3*sqrt(17) + 17)/34 + sqrt(17)*C2*Rational(2, 17)))]
+ assert dsolve(eqs2) == sol2
assert checksysodesol(eqs2, sol2) == (True, [0, 0])
- eqs3 = [Eq(f(x).diff(x), x*f(x) + g(x)), Eq(g(x).diff(x), -f(x) + x*g(x))]
- sol3 = [Eq(f(x), (C1/2 - I*C2/2)*exp(x**2/2 + I*x) + (C1/2 + I*C2/2)*exp(x**2/2 - I*x)),
- Eq(g(x), (-I*C1/2 + C2/2)*exp(x**2/2 - I*x) + (I*C1/2 + C2/2)*exp(x**2/2 + I*x))]
+ eqs3 = [Eq(f(x).diff(x), x*f(x) + g(x)),
+ Eq(g(x).diff(x), -f(x) + x*g(x))]
+ sol3 = [Eq(f(x), (C1/2 + I*C2/2)*exp(x**2/2 - I*x) + exp(x**2/2 + I*x)*(C1/2 + I*C2*Rational(-1, 2))),
+ Eq(g(x), (I*C1/2 + C2/2)*exp(x**2/2 + I*x) - exp(x**2/2 - I*x)*(I*C1/2 + C2*Rational(-1, 2)))]
assert dsolve(eqs3) == sol3
assert checksysodesol(eqs3, sol3) == (True, [0, 0])
@@ -1195,12 +1205,15 @@ def test_sysode_linear_neq_order1_type3():
y = symbols("y", real=True)
- eqs7 = [Eq(Derivative(f(y), y), y*f(y) + g(y)), Eq(Derivative(g(y), y), y*g(y) - f(y))]
- sol7 = [Eq(f(y), (C1*sin(y) + C2*cos(y))*exp(y**2/2)), Eq(g(y), (C1*cos(y) - C2*sin(y))*exp(y**2/2))]
+ eqs7 = [Eq(Derivative(f(y), y), y*f(y) + g(y)),
+ Eq(Derivative(g(y), y), y*g(y) - f(y))]
+ sol7 = [Eq(f(y), C1*exp(y**2/2)*sin(y) + C2*exp(y**2/2)*cos(y)),
+ Eq(g(y), C1*exp(y**2/2)*cos(y) - C2*exp(y**2/2)*sin(y))]
assert dsolve(eqs7) == sol7
assert checksysodesol(eqs7, sol7) == (True, [0, 0])
+@slow
def test_sysode_linear_neq_order1_type4():
f, g, h, k = symbols('f g h k', cls=Function)
@@ -1222,88 +1235,100 @@ def test_sysode_linear_neq_order1_type4():
Eq(g(r), C1*exp(r)*cos(r**2/2) - C2*exp(r)*sin(r**2/2) - exp(r)*sin(r**2/2)*Integral(r*exp(-r)*cos(r**2/2) -
exp(-r)*log(r)*sin(r**2/2), r) + exp(r)*cos(r**2/2)*Integral(r*exp(-r)*sin(r**2/2) + exp(-r)*log(r)*cos(
r**2/2), r))]
- assert dsolve(eqs2) == sol2
+ # XXX: dsolve hangs during integration for this system
+ assert dsolve_system(eqs2, simplify=False, doit=False) == [sol2]
assert checksysodesol(eqs2, sol2) == (True, [0, 0])
- eqs3 = [Eq(f(x).diff(x), x*(f(x) + g(x) + h(x)) + x), Eq(g(x).diff(x), x*(f(x) + g(x) + h(x)) + x),
- Eq(h(x).diff(x), x*(f(x) + g(x) + h(x)) + 1)]
- _x1 = exp(-3*x**2/2)
- _x2 = exp(3*x**2/2)
- _x3 = Integral(2*_x1*x/3 + _x1/3 + x/3 - Rational(1, 3), x)
- _x4 = Integral(2*_x1*x/3 + _x1/3 - 2*x/3 + Rational(2, 3), x)
- sol3 = [
- Eq(f(x),
- C1*_x2/3 - C1/3 + C2*_x2/3 - C2/3 + C3*_x2/3 + 2*C3/3 + 2*_x2*_x3/3 + _x2*_x4/3 + _x3/3 - _x4/3),
- Eq(g(x),
- C1*_x2/3 + 2*C1/3 + C2*_x2/3 - C2/3 + C3*_x2/3 - C3/3 + 2*_x2*_x3/3 + _x2*_x4/3 + _x3/3 - _x4/3),
- Eq(h(x),
- C1*_x2/3 - C1/3 + C2*_x2/3 + 2*C2/3 + C3*_x2/3 - C3/3 + 2*_x2*_x3/3 + _x2*_x4/3 - 2*_x3/3 + 2*_x4/3),
- ]
+ eqs3 = [Eq(Derivative(f(x), x), x*(f(x) + g(x) + h(x)) + x),
+ Eq(Derivative(g(x), x), x*(f(x) + g(x) + h(x)) + x),
+ Eq(Derivative(h(x), x), x*(f(x) + g(x) + h(x)) + 1)]
+ sol3 = [Eq(f(x), C1*Rational(-1, 3) + C2*Rational(-1, 3) + C3*Rational(2, 3) + x**2/6 + x*Rational(-1, 3) +
+ (C1/3 + C2/3 + C3/3)*exp(x**2*Rational(3, 2)) +
+ sqrt(6)*sqrt(pi)*erf(sqrt(6)*x/2)*exp(x**2*Rational(3, 2))/18 + Rational(-2, 9)),
+ Eq(g(x), C1*Rational(2, 3) + C2*Rational(-1, 3) + C3*Rational(-1, 3) + x**2/6 + x*Rational(-1, 3) +
+ (C1/3 + C2/3 + C3/3)*exp(x**2*Rational(3, 2)) +
+ sqrt(6)*sqrt(pi)*erf(sqrt(6)*x/2)*exp(x**2*Rational(3, 2))/18 + Rational(-2, 9)),
+ Eq(h(x), C1*Rational(-1, 3) + C2*Rational(2, 3) + C3*Rational(-1, 3) + x**2*Rational(-1, 3) +
+ x*Rational(2, 3) + (C1/3 + C2/3 + C3/3)*exp(x**2*Rational(3, 2)) +
+ sqrt(6)*sqrt(pi)*erf(sqrt(6)*x/2)*exp(x**2*Rational(3, 2))/18 + Rational(-2, 9))]
assert dsolve(eqs3) == sol3
assert checksysodesol(eqs3, sol3) == (True, [0, 0, 0])
- eqs4 = [Eq(f(x).diff(x), x*(f(x) + g(x) + h(x)) + sin(x)), Eq(g(x).diff(x), x*(f(x) + g(x) + h(x)) + sin(x)),
- Eq(h(x).diff(x), x*(f(x) + g(x) + h(x)) + sin(x))]
- sol4 = [Eq(f(x), C1*exp(3*x**2/2)/3 - C1/3 + C2*exp(3*x**2/2)/3 - C2/3 + C3*exp(3*x**2/2)/3 + 2*C3/3 +
- exp(3*x**2/2)*Integral(exp(-3*x**2/2)*sin(x), x)),
- Eq(g(x), C1*exp(3*x**2/2)/3 + 2*C1/3 + C2*exp(3*x**2/2)/3 - C2/3 +
- C3*exp(3*x**2/2)/3 - C3/3 + exp(3*x**2/2)*Integral(exp(-3*x**2/2)*sin(x), x)),
- Eq(h(x), C1*exp(3*x**2/2)/3 - C1/3 + C2*exp(3*x**2/2)/3 + 2*C2/3 + C3*exp(3*x**2/2)/3 - C3/3 + exp(
- 3*x**2/2)*Integral(exp(-3*x**2/2)*sin(x), x))]
+ eqs4 = [Eq(Derivative(f(x), x), x*(f(x) + g(x) + h(x)) + sin(x)),
+ Eq(Derivative(g(x), x), x*(f(x) + g(x) + h(x)) + sin(x)),
+ Eq(Derivative(h(x), x), x*(f(x) + g(x) + h(x)) + sin(x))]
+ sol4 = [Eq(f(x), C1*Rational(-1, 3) + C2*Rational(-1, 3) + C3*Rational(2, 3) + (C1/3 + C2/3 +
+ C3/3)*exp(x**2*Rational(3, 2)) + Integral(sin(x)*exp(x**2*Rational(-3, 2)), x)*exp(x**2*Rational(3,
+ 2))),
+ Eq(g(x), C1*Rational(2, 3) + C2*Rational(-1, 3) + C3*Rational(-1, 3) + (C1/3 + C2/3 +
+ C3/3)*exp(x**2*Rational(3, 2)) + Integral(sin(x)*exp(x**2*Rational(-3, 2)), x)*exp(x**2*Rational(3,
+ 2))),
+ Eq(h(x), C1*Rational(-1, 3) + C2*Rational(2, 3) + C3*Rational(-1, 3) + (C1/3 + C2/3 +
+ C3/3)*exp(x**2*Rational(3, 2)) + Integral(sin(x)*exp(x**2*Rational(-3, 2)), x)*exp(x**2*Rational(3,
+ 2)))]
assert dsolve(eqs4) == sol4
assert checksysodesol(eqs4, sol4) == (True, [0, 0, 0])
- eqs5 = [Eq(Derivative(f(x), x), x*(f(x) + g(x) + h(x) + k(x) + 1)), Eq(Derivative(g(x), x), x*(f(x) + g(x)
- + h(x) + k(x) + 1)), Eq(Derivative(h(x), x), x*(f(x) + g(x) + h(x) + k(x) + 1)), Eq(Derivative(k(x),
- x), x*(f(x) + g(x) + h(x) + k(x) + 1))]
- sol5 = [Eq(f(x), C1*exp(2*x**2)/4 - C1/4 + C2*exp(2*x**2)/4 - C2/4 + C3*exp(2*x**2)/4 - C3/4 + C4*exp(2*x**2)/4 + 3*C4/4 +
- exp(2*x**2)*Integral(x*exp(-2*x**2), x)),
- Eq(g(x), C1*exp(2*x**2)/4 + 3*C1/4 + C2*exp(2*x**2)/4 - C2/4 +
- C3*exp(2*x**2)/4 - C3/4 + C4*exp(2*x**2)/4 - C4/4 + exp(2*x**2)*Integral(x*exp(-2*x**2), x)),
- Eq(h(x), C1*exp(2*x**2)/4 - C1/4 + C2*exp(2*x**2)/4 + 3*C2/4 + C3*exp(2*x**2)/4 - C3/4 + C4*exp(2*x**2)/4 - C4/4 +
- exp(2*x**2)*Integral(x*exp(-2*x**2), x)),
- Eq(k(x), C1*exp(2*x**2)/4 - C1/4 + C2*exp(2*x**2)/4 - C2/4 + C3*exp(2*x**2)/4
- + 3*C3/4 + C4*exp(2*x**2)/4 - C4/4 + exp(2*x**2)*Integral(x*exp(-2*x**2), x))]
+ eqs5 = [Eq(Derivative(f(x), x), x*(f(x) + g(x) + h(x) + k(x) + 1)),
+ Eq(Derivative(g(x), x), x*(f(x) + g(x) + h(x) + k(x) + 1)),
+ Eq(Derivative(h(x), x), x*(f(x) + g(x) + h(x) + k(x) + 1)),
+ Eq(Derivative(k(x), x), x*(f(x) + g(x) + h(x) + k(x) + 1))]
+ sol5 = [Eq(f(x), C1*Rational(-1, 4) + C2*Rational(-1, 4) + C3*Rational(-1, 4) + C4*Rational(3, 4) + (C1/4 +
+ C2/4 + C3/4 + C4/4)*exp(2*x**2) + Rational(-1, 4)),
+ Eq(g(x), C1*Rational(3, 4) + C2*Rational(-1, 4) + C3*Rational(-1, 4) + C4*Rational(-1, 4) + (C1/4 +
+ C2/4 + C3/4 + C4/4)*exp(2*x**2) + Rational(-1, 4)),
+ Eq(h(x), C1*Rational(-1, 4) + C2*Rational(3, 4) + C3*Rational(-1, 4) + C4*Rational(-1, 4) + (C1/4 +
+ C2/4 + C3/4 + C4/4)*exp(2*x**2) + Rational(-1, 4)),
+ Eq(k(x), C1*Rational(-1, 4) + C2*Rational(-1, 4) + C3*Rational(3, 4) + C4*Rational(-1, 4) + (C1/4 +
+ C2/4 + C3/4 + C4/4)*exp(2*x**2) + Rational(-1, 4))]
assert dsolve(eqs5) == sol5
assert checksysodesol(eqs5, sol5) == (True, [0, 0, 0, 0])
- eqs6 = [Eq(Derivative(f(x), x), x**2*(f(x) + g(x) + h(x) + k(x) + 1)), Eq(Derivative(g(x), x), x**2*(f(x) +
- g(x) + h(x) + k(x) + 1)), Eq(Derivative(h(x), x), x**2*(f(x) + g(x) + h(x) + k(x) + 1)),
+ eqs6 = [Eq(Derivative(f(x), x), x**2*(f(x) + g(x) + h(x) + k(x) + 1)),
+ Eq(Derivative(g(x), x), x**2*(f(x) + g(x) + h(x) + k(x) + 1)),
+ Eq(Derivative(h(x), x), x**2*(f(x) + g(x) + h(x) + k(x) + 1)),
Eq(Derivative(k(x), x), x**2*(f(x) + g(x) + h(x) + k(x) + 1))]
- sol6 = [Eq(f(x), C1*exp(4*x**3/3)/4 - C1/4 + C2*exp(4*x**3/3)/4 - C2/4 + C3*exp(4*x**3/3)/4 - C3/4 + C4*exp(4*x**3/3)/4 +
- 3*C4/4 + exp(4*x**3/3)*Integral(x**2*exp(-4*x**3/3), x)),
- Eq(g(x), C1*exp(4*x**3/3)/4 + 3*C1/4 + C2*exp(4*x**3/3)/4 - C2/4 + C3*exp(4*x**3/3)/4 - C3/4 + C4*exp(
- 4*x**3/3)/4 - C4/4 + exp(4*x**3/3)*Integral(x**2*exp(-4*x**3/3), x)),
- Eq(h(x), C1*exp(4*x**3/3)/4 - C1/4 + C2*exp(4*x**3/3)/4 + 3*C2/4 + C3*exp(4*x**3/3)/4 - C3/4 + C4*exp(
- 4*x**3/3)/4 - C4/4 + exp(4*x**3/3)*Integral(x**2*exp(-4*x**3/3), x)),
- Eq(k(x), C1*exp(4*x**3/3)/4 - C1/4 + C2*exp(4*x**3/3)/4 - C2/4 +
- C3*exp(4*x**3/3)/4 + 3*C3/4 + C4*exp(4*x**3/3)/4 - C4/4 + exp(4*x**3/3)*Integral(x**2*exp(-4*x**3/3), x))]
+ sol6 = [Eq(f(x), C1*Rational(-1, 4) + C2*Rational(-1, 4) + C3*Rational(-1, 4) + C4*Rational(3, 4) + (C1/4 +
+ C2/4 + C3/4 + C4/4)*exp(x**3*Rational(4, 3)) + Rational(-1, 4)),
+ Eq(g(x), C1*Rational(3, 4) + C2*Rational(-1, 4) + C3*Rational(-1, 4) + C4*Rational(-1, 4) + (C1/4 +
+ C2/4 + C3/4 + C4/4)*exp(x**3*Rational(4, 3)) + Rational(-1, 4)),
+ Eq(h(x), C1*Rational(-1, 4) + C2*Rational(3, 4) + C3*Rational(-1, 4) + C4*Rational(-1, 4) + (C1/4 +
+ C2/4 + C3/4 + C4/4)*exp(x**3*Rational(4, 3)) + Rational(-1, 4)),
+ Eq(k(x), C1*Rational(-1, 4) + C2*Rational(-1, 4) + C3*Rational(3, 4) + C4*Rational(-1, 4) + (C1/4 +
+ C2/4 + C3/4 + C4/4)*exp(x**3*Rational(4, 3)) + Rational(-1, 4))]
assert dsolve(eqs6) == sol6
assert checksysodesol(eqs6, sol6) == (True, [0, 0, 0, 0])
eqs7 = [Eq(Derivative(f(x), x), (f(x) + g(x) + h(x))*log(x) + sin(x)), Eq(Derivative(g(x), x), (f(x) + g(x)
+ h(x))*log(x) + sin(x)), Eq(Derivative(h(x), x), (f(x) + g(x) + h(x))*log(x) + sin(x))]
- sol7 = [Eq(f(x), -C1/3 - C2/3 + 2*C3/3 + (C1/3 + C2/3 + C3/3 + Integral(exp(-3*x*log(x) + 3*x)*sin(x), x))*exp(3*x*log(x) -
- 3*x)),
- Eq(g(x), 2*C1/3 - C2/3 - C3/3 + (C1/3 + C2/3 + C3/3 + Integral(exp(-3*x*log(x) + 3*x)*sin(x), x))*exp(3*x*log(x)
- - 3*x)),
- Eq(h(x), -C1/3 + 2*C2/3 - C3/3 + (C1/3 + C2/3 + C3/3 + Integral(exp(-3*x*log(x) + 3*x)*sin(x),
- x))*exp(3*x*log(x) - 3*x))]
+ sol7 = [Eq(f(x), -C1/3 - C2/3 + 2*C3/3 + (C1/3 + C2/3 +
+ C3/3)*exp(x*(3*log(x) - 3)) + exp(x*(3*log(x) -
+ 3))*Integral(exp(3*x)*exp(-3*x*log(x))*sin(x), x)),
+ Eq(g(x), 2*C1/3 - C2/3 - C3/3 + (C1/3 + C2/3 +
+ C3/3)*exp(x*(3*log(x) - 3)) + exp(x*(3*log(x) -
+ 3))*Integral(exp(3*x)*exp(-3*x*log(x))*sin(x), x)),
+ Eq(h(x), -C1/3 + 2*C2/3 - C3/3 + (C1/3 + C2/3 +
+ C3/3)*exp(x*(3*log(x) - 3)) + exp(x*(3*log(x) -
+ 3))*Integral(exp(3*x)*exp(-3*x*log(x))*sin(x), x))]
with dotprodsimp(True):
- assert dsolve(eqs7) == sol7
+ assert dsolve(eqs7, simplify=False, doit=False) == sol7
assert checksysodesol(eqs7, sol7) == (True, [0, 0, 0])
eqs8 = [Eq(Derivative(f(x), x), (f(x) + g(x) + h(x) + k(x))*log(x) + sin(x)), Eq(Derivative(g(x), x), (f(x)
+ g(x) + h(x) + k(x))*log(x) + sin(x)), Eq(Derivative(h(x), x), (f(x) + g(x) + h(x) + k(x))*log(x) +
sin(x)), Eq(Derivative(k(x), x), (f(x) + g(x) + h(x) + k(x))*log(x) + sin(x))]
- sol8 = [Eq(f(x), -C1/4 - C2/4 - C3/4 + 3*C4/4 + (C1/4 + C2/4 + C3/4 + C4/4 + Integral(exp(-4*x*log(x) + 4*x)*sin(x),
- x))*exp(4*x*log(x) - 4*x)),
- Eq(g(x), 3*C1/4 - C2/4 - C3/4 - C4/4 + (C1/4 + C2/4 + C3/4 + C4/4 + Integral(exp(-4*x*log(x)
- + 4*x)*sin(x), x))*exp(4*x*log(x) - 4*x)),
- Eq(h(x), -C1/4 + 3*C2/4 - C3/4 - C4/4 + (C1/4 + C2/4 + C3/4 + C4/4 +
- Integral(exp(-4*x*log(x) + 4*x)*sin(x), x))*exp(4*x*log(x) - 4*x)),
- Eq(k(x), -C1/4 - C2/4 + 3*C3/4 - C4/4 + (C1/4 + C2/4
- + C3/4 + C4/4 + Integral(exp(-4*x*log(x) + 4*x)*sin(x), x))*exp(4*x*log(x) - 4*x))]
+ sol8 = [Eq(f(x), -C1/4 - C2/4 - C3/4 + 3*C4/4 + (C1/4 + C2/4 + C3/4 +
+ C4/4)*exp(x*(4*log(x) - 4)) + exp(x*(4*log(x) -
+ 4))*Integral(exp(4*x)*exp(-4*x*log(x))*sin(x), x)),
+ Eq(g(x), 3*C1/4 - C2/4 - C3/4 - C4/4 + (C1/4 + C2/4 + C3/4 +
+ C4/4)*exp(x*(4*log(x) - 4)) + exp(x*(4*log(x) -
+ 4))*Integral(exp(4*x)*exp(-4*x*log(x))*sin(x), x)),
+ Eq(h(x), -C1/4 + 3*C2/4 - C3/4 - C4/4 + (C1/4 + C2/4 + C3/4 +
+ C4/4)*exp(x*(4*log(x) - 4)) + exp(x*(4*log(x) -
+ 4))*Integral(exp(4*x)*exp(-4*x*log(x))*sin(x), x)),
+ Eq(k(x), -C1/4 - C2/4 + 3*C3/4 - C4/4 + (C1/4 + C2/4 + C3/4 +
+ C4/4)*exp(x*(4*log(x) - 4)) + exp(x*(4*log(x) -
+ 4))*Integral(exp(4*x)*exp(-4*x*log(x))*sin(x), x))]
with dotprodsimp(True):
assert dsolve(eqs8) == sol8
assert checksysodesol(eqs8, sol8) == (True, [0, 0, 0, 0])
@@ -1320,24 +1345,22 @@ def test_sysode_linear_neq_order1_type5_type6():
assert checksysodesol(eqs1, sol1) == (True, [0, 0])
# Type 6
- eqs1 = [Eq(Derivative(f(x), x), (2*f(x) + g(x) + 1)/x), Eq(Derivative(g(x), x), (x + f(x) + 2*g(x))/x)]
-
- ot = Rational(1, 2)
- sol1 = [Eq(f(x), -C1*x + C2*x**3 + x**3*Integral(exp(-2*x_)/2 + exp(-3*x_)/2, (x_, log(x))) -
- x*Integral(ot - exp(-x_)/2, (x_, log(x)))),
- Eq(g(x), C1*x + C2*x**3 + x**3*Integral(exp(-2*x_)/2 + exp(-3*x_)/2, (x_, log(x))) + x*Integral(ot -
- exp(-x_)/2, (x_, log(x))))]
- assert dsolve(eqs1) == sol1
- assert checksysodesol(eqs1, sol1) == (True, [0, 0])
+ eqs2 = [Eq(Derivative(f(x), x), (2*f(x) + g(x) + 1)/x),
+ Eq(Derivative(g(x), x), (x + f(x) + 2*g(x))/x)]
+ sol2 = [Eq(f(x), C2*x**3 - x*(C1 + Rational(1, 4)) + x*log(x)*Rational(-1, 2) + Rational(-2, 3)),
+ Eq(g(x), C2*x**3 + x*log(x)/2 + x*(C1 + Rational(-1, 4)) + Rational(1, 3))]
+ assert dsolve(eqs2) == sol2
+ assert checksysodesol(eqs2, sol2) == (True, [0, 0])
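Several hunks above switch from `dsolve` to `dsolve_system(..., simplify=False, doit=False)` when post-processing is too slow (e.g. the `eqs2` case marked as hanging in integration). A sketch of that lower-level entry point on a simple illustrative system (the system itself is not from the diff):

```python
# Sketch of the dsolve_system entry point used above when dsolve's
# post-processing is too expensive. simplify=False and doit=False skip
# simplification and leave any Integral objects unevaluated.
from sympy import Eq, Derivative, Function, symbols
from sympy.solvers.ode.subscheck import checksysodesol
from sympy.solvers.ode.systems import dsolve_system

x = symbols('x')
f, g = symbols('f g', cls=Function)

eqs = [Eq(Derivative(f(x), x), g(x)),
       Eq(Derivative(g(x), x), -f(x))]

# Unlike dsolve, dsolve_system returns a list of solution lists,
# one inner list per solution family -- hence the [sol2] comparison
# in the eqs2 assertion above.
sols = dsolve_system(eqs, simplify=False, doit=False)

assert len(sols) == 1
assert checksysodesol(eqs, sols[0]) == (True, [0, 0])
```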
def test_higher_order_to_first_order():
f, g = symbols('f g', cls=Function)
x = symbols('x')
- eqs1 = [Eq(f(x).diff(x, 2), 2*f(x) + g(x)), Eq(g(x).diff(x, 2), -f(x))]
- sol1 = [Eq(f(x), C3*x*exp(x) + C3*exp(x) + C4*exp(x) + (-C1 - C2*x + C2)*exp(-x)),
- Eq(g(x), -C3*x*exp(x) + C3*exp(x) - C4*exp(x) + (C1 + C2*x + C2)*exp(-x))]
+ eqs1 = [Eq(Derivative(f(x), (x, 2)), 2*f(x) + g(x)),
+ Eq(Derivative(g(x), (x, 2)), -f(x))]
+ sol1 = [Eq(f(x), -C2*x*exp(-x) + C3*x*exp(x) - (C1 - C2)*exp(-x) + (C3 + C4)*exp(x)),
+ Eq(g(x), C2*x*exp(-x) - C3*x*exp(x) + (C1 + C2)*exp(-x) + (C3 - C4)*exp(x))]
assert dsolve(eqs1) == sol1
assert checksysodesol(eqs1, sol1) == (True, [0, 0])
@@ -1346,17 +1369,19 @@ def test_higher_order_to_first_order():
assert dsolve(eqs2) == sol2
assert checksysodesol(eqs2, sol2) == (True, [0, 0])
- eqs3 = [Eq(f(x).diff(x, 2), 2*f(x)), Eq(g(x).diff(x, 2), -f(x) + 2*g(x))]
+ eqs3 = [Eq(Derivative(f(x), (x, 2)), 2*f(x)),
+ Eq(Derivative(g(x), (x, 2)), -f(x) + 2*g(x))]
sol3 = [Eq(f(x), 4*C1*exp(-sqrt(2)*x) + 4*C2*exp(sqrt(2)*x)),
- Eq(g(x), C2*exp(sqrt(2)*x) + (C1 + sqrt(2)*(C1*x + C4))*exp(-sqrt(2)*x) - sqrt(2)*(C2*x*exp(sqrt(2)*x) +
- C3*exp(sqrt(2)*x)))]
+ Eq(g(x), sqrt(2)*C1*x*exp(-sqrt(2)*x) - sqrt(2)*C2*x*exp(sqrt(2)*x) + (C1 +
+ sqrt(2)*C4)*exp(-sqrt(2)*x) + (C2 - sqrt(2)*C3)*exp(sqrt(2)*x))]
assert dsolve(eqs3) == sol3
assert checksysodesol(eqs3, sol3) == (True, [0, 0])
- eqs4 = [Eq(f(x).diff(x, 2), 2*f(x) + g(x)), Eq(g(x).diff(x, 2), + 2*g(x))]
- sol4 = [Eq(f(x), C1*x*exp(sqrt(2)*x)/4 - sqrt(2)*C1*exp(sqrt(2)*x)/8 + C4*exp(sqrt(2)*x)/4 + (C2/4 + C3*x/4 +
- sqrt(2)*C3/8)*exp(-sqrt(2)*x)),
- Eq(g(x), sqrt(2)*C1*exp(sqrt(2)*x)/2 - sqrt(2)*C3*exp(-sqrt(2)*x)/2)]
+ eqs4 = [Eq(Derivative(f(x), (x, 2)), 2*f(x) + g(x)),
+ Eq(Derivative(g(x), (x, 2)), 2*g(x))]
+ sol4 = [Eq(f(x), C1*x*exp(sqrt(2)*x)/4 + C3*x*exp(-sqrt(2)*x)/4 + (C2/4 + sqrt(2)*C3/8)*exp(-sqrt(2)*x) -
+ exp(sqrt(2)*x)*(sqrt(2)*C1/8 + C4*Rational(-1, 4))),
+ Eq(g(x), sqrt(2)*C1*exp(sqrt(2)*x)/2 + sqrt(2)*C3*exp(-sqrt(2)*x)*Rational(-1, 2))]
assert dsolve(eqs4) == sol4
assert checksysodesol(eqs4, sol4) == (True, [0, 0])
@@ -1365,117 +1390,138 @@ def test_higher_order_to_first_order():
assert dsolve(eqs5) == sol5
assert checksysodesol(eqs5, sol5) == (True, [0, 0])
- eqs6 = [Eq(f(x).diff(x, 2), f(x) + g(x)), Eq(g(x).diff(x, 2), -f(x) - g(x))]
- sol6 = [Eq(f(x), C1 + C2*x**2/2 + C2 + C3*x + C4*x**3/6 + C4*x), Eq(g(x), -C1 - C2*x**2/2 - C3*x - C4*x**3/6)]
+ eqs6 = [Eq(Derivative(f(x), (x, 2)), f(x) + g(x)),
+ Eq(Derivative(g(x), (x, 2)), -f(x) - g(x))]
+ sol6 = [Eq(f(x), C1 + C2*x**2/2 + C2 + C4*x**3/6 + x*(C3 + C4)),
+ Eq(g(x), -C1 + C2*x**2*Rational(-1, 2) - C3*x + C4*x**3*Rational(-1, 6))]
assert dsolve(eqs6) == sol6
assert checksysodesol(eqs6, sol6) == (True, [0, 0])
- eqs7 = [Eq(f(x).diff(x, 2), f(x) + g(x) + 1), Eq(g(x).diff(x, 2), f(x) + g(x) + 1)]
- sol7 = [Eq(f(x), -C1 - C2*x + sqrt(2)*C3*exp(sqrt(2)*x)/2 - x*Integral(0, x) + (-sqrt(2)*C4/2 -
- sqrt(2)*Integral(exp(sqrt(2)*x)/2, x)/2)*exp(-sqrt(2)*x) + sqrt(2)*exp(sqrt(2)*x)*
- Integral(exp(-sqrt(2)*x)/2, x)/2 - Integral(0, x)),
- Eq(g(x), C1 + C2*x + sqrt(2)*C3*exp(sqrt(2)*x)/2 + x*Integral(0, x) + (-sqrt(2)*C4/2 - sqrt(2)*
- Integral(exp(sqrt(2)*x)/2, x)/2)*exp(-sqrt(2)*x) + sqrt(2)*exp(sqrt(2)*x)*Integral(
- exp(-sqrt(2)*x)/2, x)/2 + Integral(0, x))]
+ eqs7 = [Eq(Derivative(f(x), (x, 2)), f(x) + g(x) + 1),
+ Eq(Derivative(g(x), (x, 2)), f(x) + g(x) + 1)]
+ sol7 = [Eq(f(x), -C1 - C2*x + sqrt(2)*C3*exp(sqrt(2)*x)/2 + sqrt(2)*C4*exp(-sqrt(2)*x)*Rational(-1, 2) +
+ Rational(-1, 2)),
+ Eq(g(x), C1 + C2*x + sqrt(2)*C3*exp(sqrt(2)*x)/2 + sqrt(2)*C4*exp(-sqrt(2)*x)*Rational(-1, 2) +
+ Rational(-1, 2))]
assert dsolve(eqs7) == sol7
assert checksysodesol(eqs7, sol7) == (True, [0, 0])
- eqs8 = [Eq(f(x).diff(x, 2), f(x) + g(x) + 1), Eq(g(x).diff(x, 2), -f(x) - g(x) + 1)]
- sol8 = [Eq(f(x), C1 + C2*x**2/2 + C2 + C3*x + C4*x**3/6 + C4*x + x**3*Integral(2, x)/6 +
- x**2*Integral(-2*x, x)/2 + x*Integral(2, x) + x*Integral(x**2 - 1, x) + Integral(-2*x, x) +
- Integral(-x**3/3 + x, x)),
- Eq(g(x), -C1 - C2*x**2/2 - C3*x - C4*x**3/6 - x**3*Integral(2, x)/6 -
- x**2*Integral(-2*x, x)/2 - x*Integral(x**2 - 1, x) - Integral(-x**3/3 + x, x))]
+ eqs8 = [Eq(Derivative(f(x), (x, 2)), f(x) + g(x) + 1),
+ Eq(Derivative(g(x), (x, 2)), -f(x) - g(x) + 1)]
+ sol8 = [Eq(f(x), C1 + C2 + C4*x**3/6 + x**4/12 + x**2*(C2/2 + Rational(1, 2)) + x*(C3 + C4)),
+ Eq(g(x), -C1 - C3*x + C4*x**3*Rational(-1, 6) + x**4*Rational(-1, 12) - x**2*(C2/2 + Rational(-1,
+ 2)))]
assert dsolve(eqs8) == sol8
assert checksysodesol(eqs8, sol8) == (True, [0, 0])
- eqs9 = [f(x).diff(x, 2) + 2*f(x).diff(x) + f(x) + g(x) - 2*exp(I*x),
- g(x).diff(x, 2) + 2*g(x).diff(x) + f(x) + g(x) - 2*exp(I*x)]
- sol9 = [Eq(f(x), -C1 + (C2/2 + Integral(0, x)/2)*exp(-2*x) + (C3*sin(x)/2 - C3*cos(x)/2 + C4*sin(x)/2 + C4*cos(x)/2 +
- sin(x)*Integral(-2*exp(x)*exp(I*x)*sin(x) + 2*exp(x)*exp(I*x)*cos(x), x)/2 + sin(x)*Integral(-2*exp(x)*
- exp(I*x)*sin(x)**2/cos(x) + 2*exp(x)*exp(I*x)*sin(x) + 2*exp(x)*exp(I*x)/cos(x), x)/2 + cos(x)*
- Integral(-2*exp(x)*exp(I*x)*sin(x) + 2*exp(x)*exp(I*x)*cos(x), x)/2 - cos(x)*Integral(-2*exp(x)*exp(I*x)*
- sin(x)**2/cos(x) + 2*exp(x)*exp(I*x)*sin(x) + 2*exp(x)*exp(I*x)/cos(x), x)/2)*exp(-x) - Integral(0, x)),
- Eq(g(x), C1 + (-C2/2 - Integral(0, x)/2)*exp(-2*x) + (C3*sin(x)/2 - C3*cos(x)/2 + C4*sin(x)/2 + C4*cos(x)/2 +
- sin(x)*Integral(-2*exp(x)*exp(I*x)*sin(x) + 2*exp(x)*exp(I*x)*cos(x), x)/2 + sin(x)*Integral(-2*exp(x)*
- exp(I*x)*sin(x)**2/cos(x) + 2*exp(x)*exp(I*x)*sin(x) + 2*exp(x)*exp(I*x)/cos(x), x)/2 + cos(x)*
- Integral(-2*exp(x)*exp(I*x)*sin(x) + 2*exp(x)*exp(I*x)*cos(x), x)/2 - cos(x)*Integral(-2*exp(x)*exp(I*x)*
- sin(x)**2/cos(x) + 2*exp(x)*exp(I*x)*sin(x) + 2*exp(x)*exp(I*x)/cos(x), x)/2)*exp(-x) + Integral(0, x))]
- assert dsolve(eqs9) == sol9
- assert checksysodesol(eqs9, sol9) == (True, [0, 0])
-
x, y = symbols('x, y', cls=Function)
t, l = symbols('t, l')
- eq1 = (Eq(diff(x(t),t,t), 5*x(t) + 43*y(t)), Eq(diff(y(t),t,t), x(t) + 9*y(t)))
- sol1 = [Eq(x(t), 43*C1*exp(-t*sqrt(7 - sqrt(47)))/((-2 + sqrt(47))*sqrt(7 - sqrt(47))) + 43*C2*exp(t*sqrt(7
- - sqrt(47)))/((2 - sqrt(47))*sqrt(7 - sqrt(47))) - 43*C3*exp(-t*sqrt(sqrt(47) + 7))/((2 +
- sqrt(47))*sqrt(sqrt(47) + 7)) + 43*C4*exp(t*sqrt(sqrt(47) + 7))/((2 + sqrt(47))*sqrt(sqrt(47) + 7))),
- Eq(y(t), -C1*exp(-t*sqrt(7 - sqrt(47)))/sqrt(7 - sqrt(47)) + C2*exp(t*sqrt(7 - sqrt(47)))/sqrt(7 -
- sqrt(47)) - C3*exp(-t*sqrt(sqrt(47) + 7))/sqrt(sqrt(47) + 7) + C4*exp(t*sqrt(sqrt(47) +
- 7))/sqrt(sqrt(47) + 7))]
- assert dsolve(eq1) == sol1
- assert checksysodesol(eq1, sol1) == (True, [0, 0])
-
- eq2 = (Eq(diff(x(t),t,t) - 9*diff(y(t),t) + 7*x(t),0), Eq(diff(y(t),t,t) + 9*diff(x(t),t) + 7*y(t),0))
- sol2 = [Eq(y(t), (-Rational(9, 14) + sqrt(109)/14)*(-C1*sin(t*sqrt(9*sqrt(109)/2 + Rational(95, 2))) + C2*cos(t*sqrt(9*sqrt(109)/2 +
- Rational(95, 2)))) + (-sqrt(109)/14 - Rational(9, 14))*(-C3*sin(t*sqrt(Rational(95, 2) - 9*sqrt(109)/2)) +
- C4*cos(t*sqrt(Rational(95, 2) - 9*sqrt(109)/2)))),
- Eq(x(t), sqrt(2)*(C1*cos(t*sqrt(9*sqrt(109)/2 + Rational(95, 2))) + C2*sin(t*sqrt(9*sqrt(109)/2 + Rational(95, 2))))/
- sqrt(9*sqrt(109) + 95) + sqrt(2)*(C3*cos(t*sqrt(Rational(95, 2) - 9*sqrt(109)/2)) + C4*sin(t*sqrt(
- Rational(95, 2) - 9*sqrt(109)/2)))/sqrt(95 - 9*sqrt(109)))]
- assert dsolve(eq2) == sol2
- assert checksysodesol(eq2, sol2) == (True, [0, 0])
-
- eqs4 = [Eq(4*x(t) + Derivative(x(t), (t, 2)) + 8*Derivative(y(t), t), 0), Eq(4*y(t) - 8*Derivative(x(t), t) +
- Derivative(y(t), (t, 2)), 0)]
- sol4 = [Eq(y(t), (1 - sqrt(5)/2)*(-C1*sin(2*t*sqrt(4*sqrt(5) + 9)) + C2*cos(2*t*sqrt(4*sqrt(5) + 9))) + (1
- + sqrt(5)/2)*(-C3*sin(2*t*sqrt(9 - 4*sqrt(5))) + C4*cos(2*t*sqrt(9 - 4*sqrt(5))))),
- Eq(x(t), (C1*cos(2*t*sqrt(4*sqrt(5) + 9)) + C2*sin(2*t*sqrt(4*sqrt(5) + 9)))/(2*sqrt(4*sqrt(5) + 9)) +
- (C3*cos(2*t*sqrt(9 - 4*sqrt(5))) + C4*sin(2*t*sqrt(9 - 4*sqrt(5))))/(2*sqrt(9 - 4*sqrt(5))))]
- assert dsolve(eqs4) == sol4
- assert checksysodesol(eqs4, sol4) == (True, [0, 0])
+ eqs10 = [Eq(Derivative(x(t), (t, 2)), 5*x(t) + 43*y(t)),
+ Eq(Derivative(y(t), (t, 2)), x(t) + 9*y(t))]
+ sol10 = [Eq(x(t), C1*sqrt(7 - sqrt(47))*(61 + 9*sqrt(47))*exp(-t*sqrt(7 - sqrt(47)))/2 + C2*sqrt(7 -
+ sqrt(47))*(61 + 9*sqrt(47))*exp(t*sqrt(7 - sqrt(47)))*Rational(-1, 2) + C3*(61 -
+ 9*sqrt(47))*sqrt(sqrt(47) + 7)*exp(-t*sqrt(sqrt(47) + 7))/2 + C4*(61 - 9*sqrt(47))*sqrt(sqrt(47) +
+ 7)*exp(t*sqrt(sqrt(47) + 7))*Rational(-1, 2)),
+ Eq(y(t), C1*sqrt(7 - sqrt(47))*(sqrt(47) + 7)*exp(-t*sqrt(7 - sqrt(47)))*Rational(-1, 2) + C2*sqrt(7
+ - sqrt(47))*(sqrt(47) + 7)*exp(t*sqrt(7 - sqrt(47)))/2 + C3*(7 - sqrt(47))*sqrt(sqrt(47) +
+ 7)*exp(-t*sqrt(sqrt(47) + 7))*Rational(-1, 2) + C4*(7 - sqrt(47))*sqrt(sqrt(47) +
+ 7)*exp(t*sqrt(sqrt(47) + 7))/2)]
+ assert dsolve(eqs10) == sol10
+ assert checksysodesol(eqs10, sol10) == (True, [0, 0])
+
+ eqs11 = [Eq(7*x(t) + Derivative(x(t), (t, 2)) - 9*Derivative(y(t), t), 0),
+ Eq(7*y(t) + 9*Derivative(x(t), t) + Derivative(y(t), (t, 2)), 0)]
+ sol11 = [Eq(y(t), C1*(9 - sqrt(109))*sin(sqrt(2)*t*sqrt(9*sqrt(109) + 95)/2)/14 + C2*(9 -
+ sqrt(109))*cos(sqrt(2)*t*sqrt(9*sqrt(109) + 95)/2)*Rational(-1, 14) + C3*(9 +
+ sqrt(109))*sin(sqrt(2)*t*sqrt(95 - 9*sqrt(109))/2)/14 + C4*(9 + sqrt(109))*cos(sqrt(2)*t*sqrt(95 -
+ 9*sqrt(109))/2)*Rational(-1, 14)),
+ Eq(x(t), C1*(9 - sqrt(109))*cos(sqrt(2)*t*sqrt(9*sqrt(109) + 95)/2)*Rational(-1, 14) + C2*(9 -
+ sqrt(109))*sin(sqrt(2)*t*sqrt(9*sqrt(109) + 95)/2)*Rational(-1, 14) + C3*(9 +
+ sqrt(109))*cos(sqrt(2)*t*sqrt(95 - 9*sqrt(109))/2)/14 + C4*(9 + sqrt(109))*sin(sqrt(2)*t*sqrt(95 -
+ 9*sqrt(109))/2)/14)]
+ assert dsolve(eqs11) == sol11
+ assert checksysodesol(eqs11, sol11) == (True, [0, 0])
# Euler Systems
# Note: To add examples of euler systems solver with non-homogeneous term.
- ot = Rational(1, 2)
- eqs1 = [Eq(Derivative(f(t), (t, 2)), Derivative(f(t), t)/t + f(t)/t**2 + g(t)/t**2),
- Eq(Derivative(g(t), (t, 2)), g(t)/t**2)]
- sol1 = [Eq(f(t), C1*t**(ot - sqrt(5)/2)*(-3*ot - sqrt(5)/2) + C2*t**(ot + sqrt(5)/2)*(-3*ot + sqrt(5)/2) +
- C3*t**(1 - sqrt(2))*(-sqrt(2) - 1) + C4*t**(1 + sqrt(2))*(-1 + sqrt(2))),
- Eq(g(t), C1*t**(ot - sqrt(5)/2)*(-sqrt(5)/2 - ot) + C2*t**(ot +
- sqrt(5)/2)*(-ot + sqrt(5)/2))]
- assert dsolve(eqs1) == sol1
- assert checksysodesol(eqs1, sol1) == (True, [0, 0])
+ eqs13 = [Eq(Derivative(f(t), (t, 2)), Derivative(f(t), t)/t + f(t)/t**2 + g(t)/t**2),
+ Eq(Derivative(g(t), (t, 2)), g(t)/t**2)]
+ sol13 = [Eq(f(t), C1*(sqrt(5) + 3)*Rational(-1, 2)*t**(Rational(1, 2) +
+ sqrt(5)*Rational(-1, 2)) + C2*t**(Rational(1, 2) +
+ sqrt(5)/2)*(3 - sqrt(5))*Rational(-1, 2) - C3*t**(1 -
+ sqrt(2))*(1 + sqrt(2)) - C4*t**(1 + sqrt(2))*(1 - sqrt(2))),
+ Eq(g(t), C1*(1 + sqrt(5))*Rational(-1, 2)*t**(Rational(1, 2) +
+ sqrt(5)*Rational(-1, 2)) + C2*t**(Rational(1, 2) +
+ sqrt(5)/2)*(1 - sqrt(5))*Rational(-1, 2))]
+ assert dsolve(eqs13) == sol13
+ assert checksysodesol(eqs13, sol13) == (True, [0, 0])
# Solving systems using dsolve separately
- eqs1 = [Eq(Derivative(f(t), (t, 2)), t*f(t)), Eq(Derivative(g(t), (t, 2)), t*g(t))]
- sol1 = [Eq(f(t), C1*airyai(t) + C2*airybi(t)), Eq(g(t), C3*airyai(t) + C4*airybi(t))]
- assert dsolve(eqs1) == sol1
- assert checksysodesol(eqs1, sol1) == (True, [0, 0])
+ eqs14 = [Eq(Derivative(f(t), (t, 2)), t*f(t)),
+ Eq(Derivative(g(t), (t, 2)), t*g(t))]
+ sol14 = [Eq(f(t), C1*airyai(t) + C2*airybi(t)),
+ Eq(g(t), C3*airyai(t) + C4*airybi(t))]
+ assert dsolve(eqs14) == sol14
+ assert checksysodesol(eqs14, sol14) == (True, [0, 0])
- eqs2 = (Eq(Derivative(x(t), (t, 2)), t*(4*Derivative(x(t), t) + 8*Derivative(y(t), t))),
- Eq(Derivative(y(t), (t, 2)), t*(12*Derivative(x(t), t) - 6*Derivative(y(t), t))))
- sol2 = [Eq(x(t), C1 + Integral(4*C2*exp(5*t**2)/11 - 4*C2*exp(-6*t**2)/11 + 8*C3*exp(5*t**2)/11 +
- 3*C3*exp(-6*t**2)/11, t)),
- Eq(y(t), C4 + Integral(3*C2*exp(5*t**2)/11 + 8*C2*exp(-6*t**2)/11 + 6*C3*exp(5*t**2)/11 -
- 6*C3*exp(-6*t**2)/11, t))]
- assert dsolve(eqs2) == sol2
- assert checksysodesol(eqs2, sol2) == (True, [0, 0])
+ eqs15 = [Eq(Derivative(x(t), (t, 2)), t*(4*Derivative(x(t), t) + 8*Derivative(y(t), t))),
+ Eq(Derivative(y(t), (t, 2)), t*(12*Derivative(x(t), t) - 6*Derivative(y(t), t)))]
+ sol15 = [Eq(x(t), C1 - erf(sqrt(6)*t)*(sqrt(6)*sqrt(pi)*C2/33 + sqrt(6)*sqrt(pi)*C3*Rational(-1, 44)) +
+ erfi(sqrt(5)*t)*(sqrt(5)*sqrt(pi)*C2*Rational(2, 55) + sqrt(5)*sqrt(pi)*C3*Rational(4, 55))),
+ Eq(y(t), C4 + erf(sqrt(6)*t)*(sqrt(6)*sqrt(pi)*C2*Rational(2, 33) + sqrt(6)*sqrt(pi)*C3*Rational(-1,
+ 22)) + erfi(sqrt(5)*t)*(sqrt(5)*sqrt(pi)*C2*Rational(3, 110) + sqrt(5)*sqrt(pi)*C3*Rational(3, 55)))]
+ assert dsolve(eqs15) == sol15
+ assert checksysodesol(eqs15, sol15) == (True, [0, 0])
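The `checksysodesol` calls throughout these tests substitute each candidate solution back into the system and confirm every residual simplifies to zero. A stdlib-only sketch of that idea for the scalar ODE f'(t) = 2·f(t) — the constant `C`, sample points, and tolerance are illustrative choices, not part of SymPy's implementation:

```python
import math

# Candidate solution f(t) = C*exp(2*t) for the ODE f'(t) = 2*f(t).
C = 0.7

def f(t):
    return C * math.exp(2 * t)

def fprime(t, h=1e-6):
    # symmetric difference quotient approximating f'(t)
    return (f(t + h) - f(t - h)) / (2 * h)

# Residual of the ODE at several sample points; near-zero residuals
# mean the candidate satisfies the equation, analogous to
# checksysodesol returning (True, [0, 0]).
residuals = [abs(fprime(t) - 2 * f(t)) for t in (0.0, 0.5, 1.0)]
print(all(r < 1e-6 for r in residuals))
```

The symbolic check in the test suite is exact; this numeric variant only demonstrates the substitute-and-check structure.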
-def test_second_order_to_first_order():
- f, g = symbols("f g", cls=Function)
- x, t, x_, t_, d, a, m = symbols("x t x_ t_ d a m")
- # Type 1
+@slow
+def test_higher_order_to_first_order_9():
+ f, g = symbols('f g', cls=Function)
+ x = symbols('x')
- eqs1 = [Eq(f(x).diff(x, 2), 2/x *(x*g(x).diff(x) - g(x))),
- Eq(g(x).diff(x, 2),-2/x *(x*f(x).diff(x) - f(x)))]
- sol1 = [Eq(f(x), C1*x + x*Integral(C2*exp(-x_)*sin(2*exp(x_)) + C3*exp(-x_)*cos(2*exp(x_)), (x_, log(x)))),
- Eq(g(x), C4*x + x*Integral(C2*exp(-x_)*cos(2*exp(x_)) - C3*exp(-x_)*sin(2*exp(x_)), (x_, log(x))))]
- assert dsolve(eqs1) == sol1
- assert checksysodesol(eqs1, sol1) == (True, [0, 0])
+ eqs9 = [f(x) + g(x) - 2*exp(I*x) + 2*Derivative(f(x), x) + Derivative(f(x), (x, 2)),
+ f(x) + g(x) - 2*exp(I*x) + 2*Derivative(g(x), x) + Derivative(g(x), (x, 2))]
+ sol9 = [Eq(f(x), -C1 + C2*exp(-2*x)/2 + (C3/2 + C4/2)*exp(-x)*sin(x) + (2 +
+ I)*exp(I*x)*sin(x)**2*Rational(-1, 5) + (1 - 2*I)*exp(I*x)*sin(x)*cos(x)*Rational(2, 5) + (4 -
+ 3*I)*exp(I*x)*cos(x)**2/5 + exp(-x)*sin(x)*Integral(-exp(x)*exp(I*x)*sin(x)**2/cos(x) +
+ exp(x)*exp(I*x)*sin(x) + exp(x)*exp(I*x)/cos(x), x) -
+ exp(-x)*cos(x)*Integral(-exp(x)*exp(I*x)*sin(x)**2/cos(x) + exp(x)*exp(I*x)*sin(x) +
+ exp(x)*exp(I*x)/cos(x), x) - exp(-x)*cos(x)*(C3/2 + C4*Rational(-1, 2))),
+ Eq(g(x), C1 + C2*exp(-2*x)*Rational(-1, 2) + (C3/2 + C4/2)*exp(-x)*sin(x) + (2 +
+ I)*exp(I*x)*sin(x)**2*Rational(-1, 5) + (1 - 2*I)*exp(I*x)*sin(x)*cos(x)*Rational(2, 5) + (4 -
+ 3*I)*exp(I*x)*cos(x)**2/5 + exp(-x)*sin(x)*Integral(-exp(x)*exp(I*x)*sin(x)**2/cos(x) +
+ exp(x)*exp(I*x)*sin(x) + exp(x)*exp(I*x)/cos(x), x) -
+ exp(-x)*cos(x)*Integral(-exp(x)*exp(I*x)*sin(x)**2/cos(x) + exp(x)*exp(I*x)*sin(x) +
+ exp(x)*exp(I*x)/cos(x), x) - exp(-x)*cos(x)*(C3/2 + C4*Rational(-1, 2)))]
+ assert dsolve(eqs9) == sol9
+ assert checksysodesol(eqs9, sol9) == (True, [0, 0])
+
+
+@slow
+def test_higher_order_to_first_order_12():
+ f, g = symbols('f g', cls=Function)
+ x = symbols('x')
+
+ x, y = symbols('x, y', cls=Function)
+ t, l = symbols('t, l')
+
+ eqs12 = [Eq(4*x(t) + Derivative(x(t), (t, 2)) + 8*Derivative(y(t), t), 0),
+ Eq(4*y(t) - 8*Derivative(x(t), t) + Derivative(y(t), (t, 2)), 0)]
+ sol12 = [Eq(y(t), C1*(2 - sqrt(5))*sin(2*t*sqrt(4*sqrt(5) + 9))*Rational(-1, 2) + C2*(2 -
+ sqrt(5))*cos(2*t*sqrt(4*sqrt(5) + 9))/2 + C3*(2 + sqrt(5))*sin(2*t*sqrt(9 - 4*sqrt(5)))*Rational(-1,
+ 2) + C4*(2 + sqrt(5))*cos(2*t*sqrt(9 - 4*sqrt(5)))/2),
+ Eq(x(t), C1*(2 - sqrt(5))*cos(2*t*sqrt(4*sqrt(5) + 9))*Rational(-1, 2) + C2*(2 -
+ sqrt(5))*sin(2*t*sqrt(4*sqrt(5) + 9))*Rational(-1, 2) + C3*(2 + sqrt(5))*cos(2*t*sqrt(9 -
+ 4*sqrt(5)))/2 + C4*(2 + sqrt(5))*sin(2*t*sqrt(9 - 4*sqrt(5)))/2)]
+ assert dsolve(eqs12) == sol12
+ assert checksysodesol(eqs12, sol12) == (True, [0, 0])
+
+
+def test_second_order_to_first_order_2():
+ f, g = symbols("f g", cls=Function)
+ x, t, x_, t_, d, a, m = symbols("x t x_ t_ d a m")
eqs2 = [Eq(f(x).diff(x, 2), 2*(x*g(x).diff(x) - g(x))),
Eq(g(x).diff(x, 2),-2*(x*f(x).diff(x) - f(x)))]
@@ -1483,7 +1529,8 @@ def test_second_order_to_first_order():
I*C3*exp(-x_)*exp(I*exp(2*x_))/2 + I*C3*exp(-x_)*exp(-I*exp(2*x_))/2, (x_, log(x)))),
Eq(g(x), C4*x + x*Integral(I*C2*exp(-x_)*exp(I*exp(2*x_))/2 - I*C2*exp(-x_)*exp(-I*exp(2*x_))/2 +
C3*exp(-x_)*exp(I*exp(2*x_))/2 + C3*exp(-x_)*exp(-I*exp(2*x_))/2, (x_, log(x))))]
- assert dsolve(eqs2) == sol2
+ # XXX: dsolve hangs for this in integration
+ assert dsolve_system(eqs2, simplify=False, doit=False) == [sol2]
assert checksysodesol(eqs2, sol2) == (True, [0, 0])
eqs3 = (Eq(diff(f(t),t,t), 9*t*diff(g(t),t)-9*g(t)), Eq(diff(g(t),t,t),7*t*diff(f(t),t)-7*f(t)))
@@ -1493,39 +1540,65 @@ def test_second_order_to_first_order():
Eq(g(t), C4*t + t*Integral(sqrt(7)*C2*exp(-t_)*exp(3*sqrt(7)*exp(2*t_)/2)/6 - sqrt(7)*C2*exp(-t_)*
exp(-3*sqrt(7)*exp(2*t_)/2)/6 + C3*exp(-t_)*exp(3*sqrt(7)*exp(2*t_)/2)/2 + C3*exp(-t_)*exp(-3*sqrt(7)*
exp(2*t_)/2)/2, (t_, log(t))))]
- assert dsolve(eqs3) == sol3
+ # XXX: dsolve hangs for this in integration
+ assert dsolve_system(eqs3, simplify=False, doit=False) == [sol3]
assert checksysodesol(eqs3, sol3) == (True, [0, 0])
- eqs4 = [Eq(Derivative(f(t), (t, 2)), t*sin(t)*Derivative(g(t), t) - g(t)*sin(t)),
- Eq(Derivative(g(t), (t, 2)), t*sin(t)*Derivative(f(t), t) - f(t)*sin(t))]
- sol4 = [Eq(f(t), C1*t + t*Integral(C2*exp(-t_)*exp(exp(t_)*cos(exp(t_)))*exp(-sin(exp(t_)))/2 +
- C2*exp(-t_)*exp(-exp(t_)*cos(exp(t_)))*exp(sin(exp(t_)))/2 - C3*exp(-t_)*exp(exp(t_)*cos(exp(t_)))*
- exp(-sin(exp(t_)))/2 +
- C3*exp(-t_)*exp(-exp(t_)*cos(exp(t_)))*exp(sin(exp(t_)))/2, (t_, log(t)))),
- Eq(g(t), C4*t + t*Integral(-C2*exp(-t_)*exp(exp(t_)*cos(exp(t_)))*exp(-sin(exp(t_)))/2 +
- C2*exp(-t_)*exp(-exp(t_)*cos(exp(t_)))*exp(sin(exp(t_)))/2 + C3*exp(-t_)*exp(exp(t_)*cos(exp(t_)))*
- exp(-sin(exp(t_)))/2 + C3*exp(-t_)*exp(-exp(t_)*cos(exp(t_)))*exp(sin(exp(t_)))/2, (t_, log(t))))]
- assert dsolve(eqs4) == sol4
- assert checksysodesol(eqs4, sol4) == (True, [0, 0])
-
# Regression Test case for sympy#19238
# https://github.com/sympy/sympy/issues/19238
# Note: When the doit method is removed, these particular types of systems
# can be divided first so that we have lesser number of big matrices.
- eqs5 = [Eq(Derivative(g(t), (t, 2)), a*m), Eq(Derivative(f(t), (t, 2)), 0)]
- sol5 = [Eq(g(t), C1 + C2*t + t*Integral(a*m, t) + Integral(-a*m*t, t)),
- Eq(f(t), C3 + C4*t + t*Integral(0, t) + Integral(0, t))]
+ eqs5 = [Eq(Derivative(g(t), (t, 2)), a*m),
+ Eq(Derivative(f(t), (t, 2)), 0)]
+ sol5 = [Eq(g(t), C1 + C2*t + a*m*t**2/2),
+ Eq(f(t), C3 + C4*t)]
assert dsolve(eqs5) == sol5
assert checksysodesol(eqs5, sol5) == (True, [0, 0])
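The regression case for sympy#19238 above expects the plain polynomial answer g(t) = C1 + C2·t + a·m·t²/2 rather than unevaluated integrals. A stdlib-only sanity check that this closed form really satisfies g'' = a·m, using a central finite difference (the values of C1, C2, a, m are arbitrary illustrative choices):

```python
# Illustrative constants; any values work since the check is symbolic in spirit.
C1, C2, a, m = 1.0, -2.0, 3.0, 0.5

def g(t):
    return C1 + C2 * t + a * m * t**2 / 2

def second_derivative(func, t, h=1e-4):
    # central difference approximation of func''(t); exact (up to rounding)
    # for quadratics
    return (func(t + h) - 2 * func(t) + func(t - h)) / h**2

# g'' - a*m should vanish at every sample point
residual = max(abs(second_derivative(g, t) - a * m) for t in (0.0, 1.0, 2.5))
print(residual < 1e-5)
```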
# Type 2
- eqs1 = [Eq(Derivative(f(t), (t, 2)), f(t)/t**4), Eq(Derivative(g(t), (t, 2)), d*g(t)/t**4)]
- sol1 = [Eq(f(t), (C1*exp(-1/t) - C2*exp(1/t))*sqrt(t**2)),
- Eq(g(t), (C3*exp(-sqrt(d)/t)/sqrt(d) - C4*exp(sqrt(d)/t)/sqrt(d))*sqrt(t**2))]
+ eqs6 = [Eq(Derivative(f(t), (t, 2)), f(t)/t**4),
+ Eq(Derivative(g(t), (t, 2)), d*g(t)/t**4)]
+ sol6 = [Eq(f(t), C1*sqrt(t**2)*exp(-1/t) - C2*sqrt(t**2)*exp(1/t)),
+ Eq(g(t), C3*sqrt(t**2)*exp(-sqrt(d)/t)*d**Rational(-1, 2) -
+ C4*sqrt(t**2)*exp(sqrt(d)/t)*d**Rational(-1, 2))]
+ assert dsolve(eqs6) == sol6
+ assert checksysodesol(eqs6, sol6) == (True, [0, 0])
+
+
+@slow
+def test_second_order_to_first_order_slow1():
+ f, g = symbols("f g", cls=Function)
+ x, t, x_, t_, d, a, m = symbols("x t x_ t_ d a m")
+
+ # Type 1
+
+ eqs1 = [Eq(f(x).diff(x, 2), 2/x *(x*g(x).diff(x) - g(x))),
+ Eq(g(x).diff(x, 2),-2/x *(x*f(x).diff(x) - f(x)))]
+ sol1 = [Eq(f(x), C1*x + 2*C2*x*Ci(2*x) - C2*sin(2*x) - 2*C3*x*Si(2*x) - C3*cos(2*x)),
+ Eq(g(x), -2*C2*x*Si(2*x) - C2*cos(2*x) - 2*C3*x*Ci(2*x) + C3*sin(2*x) + C4*x)]
assert dsolve(eqs1) == sol1
assert checksysodesol(eqs1, sol1) == (True, [0, 0])
+@slow
+def test_second_order_to_first_order_slow4():
+ f, g = symbols("f g", cls=Function)
+ x, t, x_, t_, d, a, m = symbols("x t x_ t_ d a m")
+
+ eqs4 = [Eq(Derivative(f(t), (t, 2)), t*sin(t)*Derivative(g(t), t) - g(t)*sin(t)),
+ Eq(Derivative(g(t), (t, 2)), t*sin(t)*Derivative(f(t), t) - f(t)*sin(t))]
+ sol4 = [Eq(f(t), C1*t + t*Integral(C2*exp(-t_)*exp(exp(t_)*cos(exp(t_)))*exp(-sin(exp(t_)))/2 +
+ C2*exp(-t_)*exp(-exp(t_)*cos(exp(t_)))*exp(sin(exp(t_)))/2 - C3*exp(-t_)*exp(exp(t_)*cos(exp(t_)))*
+ exp(-sin(exp(t_)))/2 +
+ C3*exp(-t_)*exp(-exp(t_)*cos(exp(t_)))*exp(sin(exp(t_)))/2, (t_, log(t)))),
+ Eq(g(t), C4*t + t*Integral(-C2*exp(-t_)*exp(exp(t_)*cos(exp(t_)))*exp(-sin(exp(t_)))/2 +
+ C2*exp(-t_)*exp(-exp(t_)*cos(exp(t_)))*exp(sin(exp(t_)))/2 + C3*exp(-t_)*exp(exp(t_)*cos(exp(t_)))*
+ exp(-sin(exp(t_)))/2 + C3*exp(-t_)*exp(-exp(t_)*cos(exp(t_)))*exp(sin(exp(t_)))/2, (t_, log(t))))]
+ # XXX: dsolve hangs for this in integration
+ assert dsolve_system(eqs4, simplify=False, doit=False) == [sol4]
+ assert checksysodesol(eqs4, sol4) == (True, [0, 0])
+
+
def test_component_division():
f, g, h, k = symbols('f g h k', cls=Function)
x = symbols("x")
@@ -1538,7 +1611,10 @@ def test_component_division():
sol1 = [Eq(f(x), 2*C1*exp(2*x)),
Eq(g(x), C1*exp(2*x) + C2),
Eq(h(x), C3*exp(x)),
- Eq(k(x), (C4 + Integral(C3**4*exp(3*x), x))*exp(x))]
+ Eq(k(x), C3**4*exp(4*x)/3 + C4*exp(x))]
+ assert dsolve(eqs1) == sol1
+ assert checksysodesol(eqs1, sol1) == (True, [0, 0, 0, 0])
+
components1 = {((Eq(Derivative(f(x), x), 2*f(x)),), (Eq(Derivative(g(x), x), f(x)),)),
((Eq(Derivative(h(x), x), h(x)),), (Eq(Derivative(k(x), x), h(x)**4 + k(x)),))}
eqsdict1 = ({f(x): set(), g(x): {f(x)}, h(x): set(), k(x): {h(x)}},
@@ -1550,17 +1626,18 @@ def test_component_division():
assert {tuple(tuple(scc) for scc in wcc) for wcc in _component_division(eqs1, funcs, x)} == components1
assert _eqs2dict(eqs1, funcs) == eqsdict1
assert [set(element) for element in _dict2graph(eqsdict1[0])] == graph1
- assert dsolve(eqs1) == sol1
- assert checksysodesol(eqs1, sol1) == (True, [0, 0, 0, 0])
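The `_component_division` assertions above split the system f' = 2f, g' = f, h' = h, k' = h⁴ + k into two independent weakly connected components, {f, g} and {h, k}, so each can be solved separately. A minimal stdlib sketch of the weakly-connected-component step, using string names in place of SymPy functions (the helper name and data layout here are illustrative, not SymPy's internals):

```python
# Dependency map mirroring eqsdict1 in the test: each function maps to the
# set of functions its right-hand side depends on.
deps = {"f": set(), "g": {"f"}, "h": set(), "k": {"h"}}

def weakly_connected(deps):
    # Treat dependencies as undirected edges and flood-fill components.
    adj = {v: set() for v in deps}
    for v, ds in deps.items():
        for d in ds:
            adj[v].add(d)
            adj[d].add(v)
    seen, comps = set(), []
    for v in deps:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

comps = weakly_connected(deps)
print(sorted(sorted(c) for c in comps))  # → [['f', 'g'], ['h', 'k']]
```

SymPy additionally orders the strongly connected components within each weak component so upstream functions are solved first; this sketch covers only the coarse split.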
eqs2 = [Eq(Derivative(f(x), x), 2*f(x)),
Eq(Derivative(g(x), x), f(x)),
Eq(Derivative(h(x), x), h(x)),
Eq(Derivative(k(x), x), f(x)**4 + k(x))]
sol2 = [Eq(f(x), C1*exp(2*x)),
- Eq(g(x), C2 + Integral(C1*exp(2*x), x)),
+ Eq(g(x), C1*exp(2*x)/2 + C2),
Eq(h(x), C3*exp(x)),
- Eq(k(x), (C4 + Integral(C1**4*exp(7*x), x))*exp(x))]
+ Eq(k(x), C1**4*exp(8*x)/7 + C4*exp(x))]
+ assert dsolve(eqs2) == sol2
+ assert checksysodesol(eqs2, sol2) == (True, [0, 0, 0, 0])
+
components2 = {frozenset([(Eq(Derivative(f(x), x), 2*f(x)),),
(Eq(Derivative(g(x), x), f(x)),),
(Eq(Derivative(k(x), x), f(x)**4 + k(x)),)]),
@@ -1574,17 +1651,18 @@ def test_component_division():
assert {frozenset(tuple(scc) for scc in wcc) for wcc in _component_division(eqs2, funcs, x)} == components2
assert _eqs2dict(eqs2, funcs) == eqsdict2
assert [set(element) for element in _dict2graph(eqsdict2[0])] == graph2
- assert dsolve(eqs2) == sol2
- assert checksysodesol(eqs2, sol2) == (True, [0, 0, 0, 0])
eqs3 = [Eq(Derivative(f(x), x), 2*f(x)),
- Eq(Derivative(g(x), x), f(x) + x),
+ Eq(Derivative(g(x), x), x + f(x)),
Eq(Derivative(h(x), x), h(x)),
Eq(Derivative(k(x), x), f(x)**4 + k(x))]
sol3 = [Eq(f(x), C1*exp(2*x)),
- Eq(g(x), C2 + Integral(C1*exp(2*x) + x, x)),
+ Eq(g(x), C1*exp(2*x)/2 + C2 + x**2/2),
Eq(h(x), C3*exp(x)),
- Eq(k(x), (C4 + Integral(C1**4*exp(7*x), x))*exp(x))]
+ Eq(k(x), C1**4*exp(8*x)/7 + C4*exp(x))]
+ assert dsolve(eqs3) == sol3
+ assert checksysodesol(eqs3, sol3) == (True, [0, 0, 0, 0])
+
components3 = {frozenset([(Eq(Derivative(f(x), x), 2*f(x)),),
(Eq(Derivative(g(x), x), x + f(x)),),
(Eq(Derivative(k(x), x), f(x)**4 + k(x)),)]),
@@ -1598,8 +1676,6 @@ def test_component_division():
assert {frozenset(tuple(scc) for scc in wcc) for wcc in _component_division(eqs3, funcs, x)} == components3
assert _eqs2dict(eqs3, funcs) == eqsdict3
assert [set(l) for l in _dict2graph(eqsdict3[0])] == graph3
- assert dsolve(eqs3) == sol3
- assert checksysodesol(eqs3, sol3) == (True, [0, 0, 0, 0])
# Note: To be uncommented when the default option to call dsolve first for
# single ODE system can be rearranged. This can be done after the doit
@@ -1640,7 +1716,8 @@ def test_component_division():
assert {tuple(frozenset(scc) for scc in wcc) for wcc in _component_division(eqs4, funcs, x)} == components4
assert _eqs2dict(eqs4, funcs) == eqsdict4
assert [set(element) for element in _dict2graph(eqsdict4[0])] == graph4
- assert dsolve(eqs4) == sol4
+ # XXX: dsolve hangs in integration here:
+ assert dsolve_system(eqs4, simplify=False, doit=False) == [sol4]
assert checksysodesol(eqs4, sol4) == (True, [0, 0, 0, 0])
eqs5 = [Eq(Derivative(f(x), x), x*f(x) + 2*g(x)),
@@ -1665,7 +1742,8 @@ def test_component_division():
assert {tuple(frozenset(scc) for scc in wcc) for wcc in _component_division(eqs5, funcs, x)} == components5
assert _eqs2dict(eqs5, funcs) == eqsdict5
assert [set(element) for element in _dict2graph(eqsdict5[0])] == graph5
- assert dsolve(eqs5) == sol5
+ # XXX: dsolve hangs in integration here:
+ assert dsolve_system(eqs5, simplify=False, doit=False) == [sol5]
assert checksysodesol(eqs5, sol5) == (True, [0, 0, 0, 0])
@@ -1862,29 +1940,18 @@ def test_linear_neq_order1_type2_slow1():
eq2 = r2*c1*Derivative(x1(t), t) + r2*c2*Derivative(x2(t), t) + x2(t) - r2*i
eq = [eq1, eq2]
- _x1 = sqrt(
- c1**2*r1**2 + 2*c1**2*r1*r2 + c1**2*r2**2 - 2*c1*c2*r1*r2 + 2*c1*c2*r2**2 + c2**2*r2**2)
- _x2 = -_x1*t/(2*c1*c2*r1*r2)
- _x3 = 1/(_x1 - c1*r1 - c1*r2 + c2*r2)
- _x4 = 1/(_x1 + c1*r1 + c1*r2 - c2*r2)
- _x5 = 1/(2*_x3*c1*c2*r2 + 2*_x4*c1*c2*r2)
- _x6 = Integral(_x5*i*exp(_x2 + t/(2*c2*r2) + t/(2*c2*r1) + t/(2*c1*r1)), t)
- _x7 = Integral(
- -_x5*i*exp(_x1*t/(2*c1*c2*r1*r2) + t/(2*c2*r2) + t/(2*c2*r1) + t/(2*c1*r1)),
- t)
- _x8 = exp(_x1*t/(2*c1*c2*r1*r2) - t/(2*c2*r2) - t/(2*c2*r1) - t/(2*c1*r1))
- _x9 = exp(_x2 - t/(2*c2*r2) - t/(2*c2*r1) - t/(2*c1*r1))
- sol = [
- Eq(x1(t),
- 2*C1*_x3*_x8*c2*r2 - 2*C2*_x4*_x9*c2*r2 + 2*_x3*_x6*_x8*c2*r2 - 2*_x4*_x7*_x9*c2*r2),
- Eq(x2(t), C1*_x8 + C2*_x9 + _x6*_x8 + _x7*_x9),
- ]
-
- assert dsolve(eq) == sol
+ # XXX: Solution is too complicated
+ [sol] = dsolve_system(eq, simplify=False, doit=False)
assert checksysodesol(eq, sol) == (True, [0, 0])
-def _de_lorentz_solution():
+# Regression test case for issue #9204
+# https://github.com/sympy/sympy/issues/9204
+@slow
+def test_linear_new_order1_type2_de_lorentz_slow_check():
+ if ON_TRAVIS:
+ skip("Too slow for travis.")
+
m = Symbol("m", real=True)
q = Symbol("q", real=True)
t = Symbol("t", real=True)
@@ -1898,108 +1965,19 @@ def _de_lorentz_solution():
-e2*q + m*Derivative(v2(t), t) - q*(b1*v3(t) - b3*v1(t)),
-e3*q + m*Derivative(v3(t), t) - q*(-b1*v2(t) + b2*v1(t))
]
-
- # The code for the solution here is made using
- # printsol from https://github.com/sympy/sympy/issues/19574
- _x1 = 1/m
- _x2 = b1**2
- _x3 = b2**2
- _x4 = b3**2
- _x5 = sqrt(-_x2 - _x3 - _x4)
- _x6 = exp(2*_x1*_x5*q*t)
- _x7 = exp(_x1*_x5*q*t)
- _x8 = 1/(-_x2*_x7*b3 - _x3*_x7*b3)
- _x9 = sqrt(_x2 + _x3 + _x4)
- _x10 = 1/(_x2*m + _x3*m + _x4*m)
- _x11 = Integral(_x10*_x4*e3*q + _x10*b1*b3*e1*q + _x10*b2*b3*e2*q, t)
- _x12 = b1**3
- _x13 = b2**3
- _x14 = 1/(-2*I*_x2*_x7*_x9*m - 2*I*_x3*_x7*_x9*m - 2*I*_x4*_x7*_x9*m)
- _x15 = Integral(
- _x12*_x14*e2*q - _x13*_x14*e1*q - I*_x14*_x2*_x9*e3*q - _x14*_x2*b2*e1*q - I*_x14*_x3*_x9*e3*q + _x14*_x3*b1*e2*q + _x14*_x4*b1*e2*q - _x14*_x4*b2*e1*q + I*_x14*_x9*b1*b3*e1*q + I*_x14*_x9*b2*b3*e2*q,
- t)
- _x16 = 1/(
- -2*_x12*b3*m - 2*I*_x13*_x9*m - 2*I*_x2*_x9*b2*m - 2*_x3*b1*b3*m - 2*I*_x4*_x9*b2*m - 2*b1*b3**3*m)
- _x17 = Integral(
- -_x12*_x16*_x7*b2*e2*q - _x12*_x16*_x7*b3*e3*q - I*_x13*_x16*_x7*_x9*e3*q - _x13*_x16*_x7*b1*e2*q + _x16*_x2*_x3*_x7*e1*q + _x16*_x2*_x4*_x7*e1*q - I*_x16*_x2*_x7*_x9*b2*e3*q + I*_x16*_x2*_x7*_x9*b3*e2*q + _x16*_x3*_x4*_x7*e1*q + I*_x16*_x3*_x7*_x9*b3*e2*q - _x16*_x3*_x7*b1*b3*e3*q + _x16*_x7*b2**4*e1*q,
- t)
- _x18 = 1/(_x2*_x7*b3 + _x3*_x7*b3)
- sol = [
- Eq(v1(t),
- -C1*_x18*_x4*b1 - I*C1*_x18*_x9*b2*b3 + C2*_x12*_x18*_x7 + C2*_x18*_x3*_x7*b1 - C3*_x18*_x4*_x6*b1 + I*C3*_x18*_x6*_x9*b2*b3 + _x11*_x12*_x18*_x7 + _x11*_x18*_x3*_x7*b1 - _x15*_x18*_x4*_x6*b1 + I*_x15*_x18*_x6*_x9*b2*b3 - _x17*_x18*_x4*b1 - I*_x17*_x18*_x9*b2*b3),
- Eq(v2(t),
- C1*_x4*_x8*b2 - I*C1*_x8*_x9*b1*b3 - C2*_x13*_x7*_x8 - C2*_x2*_x7*_x8*b2 + C3*_x4*_x6*_x8*b2 + I*C3*_x6*_x8*_x9*b1*b3 - _x11*_x13*_x7*_x8 - _x11*_x2*_x7*_x8*b2 + _x15*_x4*_x6*_x8*b2 + I*_x15*_x6*_x8*_x9*b1*b3 + _x17*_x4*_x8*b2 - I*_x17*_x8*_x9*b1*b3),
- Eq(v3(t), C2 + C3*_x7 + _x11 + _x15*_x7 + (C1 + _x17)*exp(-_x1*_x5*q*t)),
- ]
-
- return eqs, sol
-
-# Regression test case for issue #9204
-# https://github.com/sympy/sympy/issues/9204
-
-# A very big solution is obtained for this
-# test case. To be simplified in future.
-@slow
-def test_linear_new_order1_type2_de_lorentz():
- if ON_TRAVIS:
- skip("Too slow for travis.")
-
- eqs, sol = _de_lorentz_solution()
-
- with dotprodsimp(True):
- assert dsolve(eqs) == sol
-
-
-@slow
-def test_linear_new_order1_type2_de_lorentz_slow_check():
- if ON_TRAVIS:
- skip("Too slow for travis.")
-
- eqs, sol = _de_lorentz_solution()
+ sol = dsolve(eqs)
assert checksysodesol(eqs, sol) == (True, [0, 0, 0])
-def _neq_order1_type2_slow():
- RC, t, C, Vs, L, R1, V0, I0 = symbols("RC t C Vs L R1 V0 I0")
- V = Function("V")
- I = Function("I")
- system = [Eq(V(t).diff(t), -1/RC*V(t) + I(t)/C), Eq(I(t).diff(t), -R1/L*I(t) - 1/L*V(t) + Vs/L)]
-
- z1 = sqrt(C**2*L**2 - 2*C**2*L*R1*RC + C**2*R1**2*RC**2 - 4*C*L*RC**2)
- z2 = 1/(C*L - C*R1*RC - z1)
- z3 = 1/(C*L - C*R1*RC + z1)
- z4 = exp(-t/(2*RC) - R1*t/(2*L) + t*z1/(2*C*L*RC))
- z5 = exp(-t/(2*RC) - R1*t/(2*L) - t*z1/(2*C*L*RC))
- z6 = Integral(2*RC*Vs*exp(t/(2*RC) + R1*t/(2*L) +
- t*z1/(2*C*L*RC))/(-2*C*L**2*RC*z2 + 2*C*L**2*RC*z3 + 2*C*L*R1*RC**2*z2
- - 2*C*L*R1*RC**2*z3 - 2*L*RC*z1*z2 + 2*L*RC*z1*z3), t)
- z7 = Integral(-2*RC*Vs*exp(t/(2*RC) + R1*t/(2*L) -
- t*z1/(2*C*L*RC))/(-2*C*L**2*RC*z2 + 2*C*L**2*RC*z3 + 2*C*L*R1*RC**2*z2
- - 2*C*L*R1*RC**2*z3 + 2*L*RC*z1*z2 - 2*L*RC*z1*z3), t)
- sol = [
- Eq(V(t), 2*C1*L*RC*z2*z5 + 2*C2*L*RC*z3*z4 + 2*L*RC*z2*z5*z6 + 2*L*RC*z3*z4*z7),
- Eq(I(t), C1*z5 + C2*z4 + z4*z7 + z5*z6),
- ]
-
- return system, sol
-
-
-# A very big solution is obtained for this
-# test case. To be simplified in future.
-def test_linear_neq_order1_type2_slow():
- system, sol = _neq_order1_type2_slow()
-
- assert dsolve(system) == sol
-
# Regression test case for issue #14001
# https://github.com/sympy/sympy/issues/14001
-
@slow
def test_linear_neq_order1_type2_slow_check():
- if ON_TRAVIS:
- skip("Too slow for travis.")
-
- system, sol = _neq_order1_type2_slow()
+ RC, t, C, Vs, L, R1, V0, I0 = symbols("RC t C Vs L R1 V0 I0")
+ V = Function("V")
+ I = Function("I")
+ system = [Eq(V(t).diff(t), -1/RC*V(t) + I(t)/C), Eq(I(t).diff(t), -R1/L*I(t) - 1/L*V(t) + Vs/L)]
+ [sol] = dsolve_system(system, simplify=False, doit=False)
assert checksysodesol(system, sol) == (True, [0, 0])
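The RC/RL system checked above, V' = -V/RC + I/C and I' = -R1·I/L - V/L + Vs/L, has a closed form too large to assert on directly, so the test only verifies residuals. Its qualitative behavior is easy to confirm numerically: for the illustrative parameter choice RC = C = R1 = L = 1, Vs = 2 the steady state is V = I = 1. A stdlib-only forward-Euler sketch (step size and horizon are arbitrary):

```python
# Illustrative parameters; note RC is a single symbol in the test, kept as one
# variable here.  Cc avoids shadowing the capacitance name C used for constants.
RC, Cc, R1, L, Vs = 1.0, 1.0, 1.0, 1.0, 2.0
V, I = 0.0, 0.0          # start from rest
h = 1e-3
for _ in range(20000):   # integrate to t = 20; transients decay like exp(-t)
    dV = -V / RC + I / Cc
    dI = -R1 * I / L - V / L + Vs / L
    V, I = V + h * dV, I + h * dI
print(abs(V - 1.0) < 1e-3 and abs(I - 1.0) < 1e-3)
```

The equilibrium (1, 1) is an exact fixed point of the Euler map here, so the iteration converges to it rather than to an O(h)-biased value.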
@@ -2037,91 +2015,29 @@ def _linear_3eq_order1_type4_long():
assert checksysodesol(eq1, dsolve_sol1) == (True, [0, 0, 0])
-def _neq_order1_type4_slow1():
- f, g = symbols("f, g", cls=Function)
- x = Symbol("x")
-
- eqs = [Eq(diff(f(x), x), x*f(x) + x**2*g(x) + x),
- Eq(diff(g(x), x), 2*x**2*f(x) + (x + 3*x**2)*g(x) + 1)]
- _x1 = sqrt(17)
- _x2 = 1/(45*_x1 + 187)
- _x3 = 1/(161*_x1 + 663)
- _x4 = 1/(51*_x1 - 221)
- _x5 = 1/(161*_x1 - 663)
- _x6 = -x**3/2
- _x7 = _x1*x**3/6
- _x8 = exp(_x6 + _x7 - x**2/2)
- _x9 = -_x1*x**3/6
- _x10 = exp(_x6 + _x9 - x**2/2)
- _x11 = Integral(
- 22*_x1*_x10*_x3*x + 39*_x1*_x10*_x3 + 139*_x1*_x3*_x8*x - 39*_x1*_x3*_x8 + 90*_x10*_x3*x + 161*_x10*_x3 + 573*_x3*_x8*x - 161*_x3*_x8,
- x)
- _x12 = exp(_x9 + x**3/2 + x**2/2)
- _x13 = exp(_x7 + x**3/2 + x**2/2)
- _x14 = Integral(
- 22*_x1*_x10*_x2*x + 39*_x1*_x10*_x2 - 22*_x1*_x2*_x8*x + 6*_x1*_x2*_x8 + 90*_x10*_x2*x + 161*_x10*_x2 - 90*_x2*_x8*x + 26*_x2*_x8,
- x)
- sol = [
- Eq(f(x), _x12*(
- 39*C1*_x1*_x5 - 161*C1*_x5 + 22*C2*_x1*_x5 - 90*C2*_x5 + 22*_x1*_x11*_x5 + 39*_x1*_x14*_x5 - 90*_x11*_x5 - 161*_x14*_x5) + _x13*(
- -39*C1*_x1*_x5 + 161*C1*_x5 + 139*C2*_x1*_x5 - 573*C2*_x5 + 139*_x1*_x11*_x5 - 39*_x1*_x14*_x5 - 573*_x11*_x5 + 161*_x14*_x5)),
- Eq(g(x), _x12*(
- 45*C1*_x1*_x4 - 187*C1*_x4 + 26*C2*_x1*_x4 - 102*C2*_x4 + 26*_x1*_x11*_x4 + 45*_x1*_x14*_x4 - 102*_x11*_x4 - 187*_x14*_x4) + _x13*(
- 6*C1*_x1*_x4 - 34*C1*_x4 - 26*C2*_x1*_x4 + 102*C2*_x4 - 26*_x1*_x11*_x4 + 6*_x1*_x14*_x4 + 102*_x11*_x4 - 34*_x14*_x4)),
- ]
-
- return eqs, sol
-
-
-def test_neq_order1_type4_slow1():
- eqs, sol = _neq_order1_type4_slow1()
- with dotprodsimp(True):
- assert dsolve(eqs) == sol
-
-
@slow
def test_neq_order1_type4_slow_check1():
- if ON_TRAVIS:
- skip("Too slow for travis.")
+ f, g = symbols("f g", cls=Function)
+ x = symbols("x")
- eqs, sol = _neq_order1_type4_slow1()
+ eqs = [Eq(diff(f(x), x), x*f(x) + x**2*g(x) + x),
+ Eq(diff(g(x), x), 2*x**2*f(x) + (x + 3*x**2)*g(x) + 1)]
+ sol = dsolve(eqs)
assert checksysodesol(eqs, sol) == (True, [0, 0])
-def _neq_order1_type4_slow2():
+@slow
+def test_neq_order1_type4_slow_check2():
f, g, h = symbols("f, g, h", cls=Function)
x = Symbol("x")
- eqs = [Eq(Derivative(f(x), x), x*h(x) + f(x) + g(x) + 1), Eq(Derivative(g(x), x), x*g(x) + f(x) + h(x) +
- 10), Eq(Derivative(h(x), x), x*f(x) + x + g(x) + h(x))]
- _x1 = -x**2/2
- _x2 = Integral(x*exp(_x1 - 2*x)/3 - x*exp(_x1 + x)/3 + 11*exp(_x1 - 2*x)/3 + 19*exp(_x1 + x)/3,
- x)
- _x3 = Integral(x*exp(_x1 - 2*x)/3 + x*exp(_x1 + x)/6 - x*exp(x**2/2 - x)/2 + 11*exp(
- _x1 - 2*x)/3 - 19*exp(_x1 + x)/6 + exp(x**2/2 - x)/2, x)
- _x4 = Integral(x*exp(_x1 - 2*x)/3 + x*exp(_x1 + x)/6 + x*exp(x**2/2 - x)/2 + 11*exp(
- _x1 - 2*x)/3 - 19*exp(_x1 + x)/6 - exp(x**2/2 - x)/2, x)
- _x5 = (C1/3 + C2/3 + C3/3 + _x2/3 + _x3/3 + _x4/3)*exp(x**2/2 + 2*x)
- sol = [
- Eq(f(x), _x5 + (-C1/2 + C2/2 + _x3/2 - _x4/2)*exp(_x1 + x) + (
- C1/6 + C2/6 - C3/3 - _x2/3 + _x3/6 + _x4/6)*exp(x**2/2 - x)),
- Eq(g(x), _x5 + (-C1/3 - C2/3 + 2*C3/3 + 2*_x2/3 - _x3/3 - _x4/3)*exp(x**2/2 - x)),
- Eq(h(x), _x5 + (C1/2 - C2/2 - _x3/2 + _x4/2)*exp(_x1 + x) + (
- C1/6 + C2/6 - C3/3 - _x2/3 + _x3/6 + _x4/6)*exp(x**2/2 - x)),
+ eqs = [
+ Eq(Derivative(f(x), x), x*h(x) + f(x) + g(x) + 1),
+ Eq(Derivative(g(x), x), x*g(x) + f(x) + h(x) + 10),
+ Eq(Derivative(h(x), x), x*f(x) + x + g(x) + h(x))
]
-
- return eqs, sol
-
-
-def test_neq_order1_type4_slow2():
- eqs, sol = _neq_order1_type4_slow2()
with dotprodsimp(True):
- assert dsolve(eqs) == sol
-
-
-@slow
-def test_neq_order1_type4_slow_check2():
- eqs, sol = _neq_order1_type4_slow2()
+ sol = dsolve(eqs)
assert checksysodesol(eqs, sol) == (True, [0, 0, 0])
@@ -2129,19 +2045,33 @@ def _neq_order1_type4_slow3():
f, g = symbols("f g", cls=Function)
x = symbols("x")
- eqs = [Eq(Derivative(f(x), x), x*f(x) + g(x) + sin(x)), Eq(Derivative(g(x), x), x**2 + x*g(x) - f(x))]
- _x1 = exp(x**2/2 + I*x)
- _x2 = exp(-x**2/2 + I*x)
- _x3 = exp(-x**2/2 - I*x)
- _x4 = Integral(_x2*x**2/2 - I*_x2*sin(x)/2 + _x3*x**2/2 + I*_x3*sin(x)/2, x)
- _x5 = Integral(I*_x2*x**2/2 + _x2*sin(x)/2 - I*_x3*x**2/2 + _x3*sin(x)/2, x)
+ eqs = [
+ Eq(Derivative(f(x), x), x*f(x) + g(x) + sin(x)),
+ Eq(Derivative(g(x), x), x**2 + x*g(x) - f(x))
+ ]
sol = [
- Eq(f(x),
- _x1*(C1/2 - I*C2/2 - I*_x4/2 + _x5/2) + (C1/2 + I*C2/2 + I*_x4/2 + _x5/2)*exp(
- x**2/2 - I*x)),
- Eq(g(x),
- _x1*(I*C1/2 + C2/2 + _x4/2 + I*_x5/2) + (-I*C1/2 + C2/2 + _x4/2 - I*_x5/2)*exp(
- x**2/2 - I*x)),
+ Eq(f(x), (C1/2 - I*C2/2 - I*Integral(x**2*exp(-x**2/2 - I*x)/2 +
+ x**2*exp(-x**2/2 + I*x)/2 + I*exp(-x**2/2 - I*x)*sin(x)/2 -
+ I*exp(-x**2/2 + I*x)*sin(x)/2, x)/2 + Integral(-I*x**2*exp(-x**2/2
+ - I*x)/2 + I*x**2*exp(-x**2/2 + I*x)/2 + exp(-x**2/2 -
+ I*x)*sin(x)/2 + exp(-x**2/2 + I*x)*sin(x)/2, x)/2)*exp(x**2/2 +
+ I*x) + (C1/2 + I*C2/2 + I*Integral(x**2*exp(-x**2/2 - I*x)/2 +
+ x**2*exp(-x**2/2 + I*x)/2 + I*exp(-x**2/2 - I*x)*sin(x)/2 -
+ I*exp(-x**2/2 + I*x)*sin(x)/2, x)/2 + Integral(-I*x**2*exp(-x**2/2
+ - I*x)/2 + I*x**2*exp(-x**2/2 + I*x)/2 + exp(-x**2/2 -
+ I*x)*sin(x)/2 + exp(-x**2/2 + I*x)*sin(x)/2, x)/2)*exp(x**2/2 -
+ I*x)),
+ Eq(g(x), (-I*C1/2 + C2/2 + Integral(x**2*exp(-x**2/2 - I*x)/2 +
+ x**2*exp(-x**2/2 + I*x)/2 + I*exp(-x**2/2 - I*x)*sin(x)/2 -
+ I*exp(-x**2/2 + I*x)*sin(x)/2, x)/2 -
+ I*Integral(-I*x**2*exp(-x**2/2 - I*x)/2 + I*x**2*exp(-x**2/2 +
+ I*x)/2 + exp(-x**2/2 - I*x)*sin(x)/2 + exp(-x**2/2 +
+ I*x)*sin(x)/2, x)/2)*exp(x**2/2 - I*x) + (I*C1/2 + C2/2 +
+ Integral(x**2*exp(-x**2/2 - I*x)/2 + x**2*exp(-x**2/2 + I*x)/2 +
+ I*exp(-x**2/2 - I*x)*sin(x)/2 - I*exp(-x**2/2 + I*x)*sin(x)/2,
+ x)/2 + I*Integral(-I*x**2*exp(-x**2/2 - I*x)/2 +
+ I*x**2*exp(-x**2/2 + I*x)/2 + exp(-x**2/2 - I*x)*sin(x)/2 +
+ exp(-x**2/2 + I*x)*sin(x)/2, x)/2)*exp(x**2/2 + I*x))
]
return eqs, sol
@@ -2149,7 +2079,10 @@ def _neq_order1_type4_slow3():
def test_neq_order1_type4_slow3():
eqs, sol = _neq_order1_type4_slow3()
- assert dsolve(eqs) == sol
+ assert dsolve_system(eqs, simplify=False, doit=False) == [sol]
+ # XXX: dsolve gives an error in integration:
+ # assert dsolve(eqs) == sol
+ # https://github.com/sympy/sympy/issues/20155
@slow
@@ -2283,7 +2216,7 @@ def _higher_order_slow1():
def test_higher_order_slow1():
eq, sol = _higher_order_slow1()
- assert dsolve(eq) == sol
+ assert dsolve_system(eq, simplify=False, doit=False) == [sol]
@slow
@@ -2300,13 +2233,11 @@ def test_second_order_type2_slow1():
x, y, z = symbols('x, y, z', cls=Function)
t, l = symbols('t, l')
- ot = Rational(1, 3)
- eqs = (Eq(Derivative(x(t), (t, 2)), t*(2*x(t) + y(t))), Eq(Derivative(y(t), (t, 2)), t*(-x(t) + 2*y(t))))
- sol = [Eq(x(t), I*(C1*airyai(t*(2 - I)**(ot)) + C2*airybi(t*(2 - I)**(ot))) - I*(C3*airyai(t*(2 +
- I)**(ot)) + C4*airybi(t*(2 + I)**(ot)))),
- Eq(y(t), C1*airyai(t*(2 - I)**(ot)) + C3*airyai(t*(2 + I)**(ot)) + C2*airybi(t*(2 - I)**(ot)) +
- C4*airybi(t*(2 + I)**(ot)))]
-
- # Note: dsolve call in dsolve_system makes solving this system slow.
- assert dsolve(eqs) == sol
- assert checksysodesol(eqs, sol) == (True, [0, 0])
+ eqs1 = [Eq(Derivative(x(t), (t, 2)), t*(2*x(t) + y(t))),
+ Eq(Derivative(y(t), (t, 2)), t*(-x(t) + 2*y(t)))]
+ sol1 = [Eq(x(t), I*C1*airyai(t*(2 - I)**(S(1)/3)) + I*C2*airybi(t*(2 - I)**(S(1)/3)) - I*C3*airyai(t*(2 +
+ I)**(S(1)/3)) - I*C4*airybi(t*(2 + I)**(S(1)/3))),
+ Eq(y(t), C1*airyai(t*(2 - I)**(S(1)/3)) + C2*airybi(t*(2 - I)**(S(1)/3)) + C3*airyai(t*(2 + I)**(S(1)/3)) +
+ C4*airybi(t*(2 + I)**(S(1)/3)))]
+ assert dsolve(eqs1) == sol1
+ assert checksysodesol(eqs1, sol1) == (True, [0, 0])
| [
{
"components": [
{
"doc": "Simplify solutions from dsolve_system.",
"lines": [
73,
201
],
"name": "simpsol",
"signature": "def simpsol(sol, wrt1, wrt2, doit=True):",
"type": "function"
},
{
"doc": "Simplify the rhs of... | [
"test_sysode_linear_neq_order1_type1",
"test_sysode_linear_neq_order1_type2",
"test_sysode_linear_neq_order1_type3",
"test_sysode_linear_neq_order1_type5_type6",
"test_higher_order_to_first_order",
"test_second_order_to_first_order_2",
"test_component_division",
"test_neq_order1_type4_slow3"
] | [
"test_linear_ode_to_matrix",
"test__classify_linear_system",
"test_matrix_exp",
"test_canonical_odes",
"test_linodesolve",
"test_dsolve_system",
"test_dsolve"
] | This is a feature request which requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
[GSoC] Simplification for the solutions of systems of ODEs
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
#### Brief description of what is fixed or changed
#### Other comments
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* solvers
  * Added simplification strategies for the solutions of systems of ODEs
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/solvers/ode/systems.py]
(definition of simpsol:)
def simpsol(sol, wrt1, wrt2, doit=True):
"""Simplify solutions from dsolve_system."""
(definition of simpsol.simprhs:)
def simprhs(rhs, rep, wrt1, wrt2):
"""Simplify the rhs of an ODE solution"""
(definition of simpsol.simp_coeff_dep:)
def simp_coeff_dep(expr, wrt1, wrt2=None):
"""Split rhs into terms, split terms into dep and coeff and collect on dep"""
(definition of simpsol.simpdep:)
def simpdep(term, wrt1):
"""Normalise factors involving t with powsimp and recombine exp"""
(definition of simpsol.simpdep.canonicalise:)
def canonicalise(a):
(definition of simpsol.simpcoeff:)
def simpcoeff(coeff, wrt2):
"""Bring to a common fraction and cancel with ratsimp"""
[end of new definitions in sympy/solvers/ode/systems.py]
</definitions>
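To make the collect-on-dependence step described by `simp_coeff_dep` concrete, here is a minimal pure-Python sketch. The `(coeff, dep)` term representation and the string factors are assumptions for illustration only, not SymPy's actual data structures:

```python
from collections import defaultdict

def collect_on_dep(terms):
    """Group additive terms by their x-dependent factor, summing the constant
    coefficients -- a toy version of splitting each term into (coeff, dep)
    and collecting on dep."""
    grouped = defaultdict(int)
    for coeff, dep in terms:
        grouped[dep] += coeff
    # Drop factors whose collected coefficient cancels to zero.
    return {dep: c for dep, c in grouped.items() if c != 0}

# Terms of e.g. 3*exp(2*x) + 5*exp(2*x) - 2*exp(-x), written as (coeff, dep):
terms = [(3, "exp(2*x)"), (5, "exp(2*x)"), (-2, "exp(-x)")]
print(collect_on_dep(terms))  # {'exp(2*x)': 8, 'exp(-x)': -2}
```

In the real simplification pass, the dependent factors would be SymPy exponentials normalised by `powsimp` (per the `simpdep` docstring) and the collected coefficients would then be brought to a common fraction and cancelled with `ratsimp` (per `simpcoeff`).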
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1b4529a95ef641c2fc15889091b281644069d20e | ||
joke2k__faker-1247 | 1,247 | joke2k/faker | null | a310d09e627fc1e1ddd6e88a3d65cab03fe22e4e | 2020-08-19T07:17:49Z | diff --git a/faker/generator.py b/faker/generator.py
index 46f176d100..2c7be5e733 100644
--- a/faker/generator.py
+++ b/faker/generator.py
@@ -99,7 +99,9 @@ def set_formatter(self, name, method):
def set_arguments(self, group, argument, value=None):
"""
Creates an argument group, with an individual argument or a dictionary
- of arguments. Used with the Generator.parse method.
+ of arguments. The argument groups is used to apply arguments to tokens,
+ when using the generator.parse() method. To further manage argument
+ groups, use get_arguments() and del_arguments() methods.
generator.set_arguments('small', 'max_value', 10)
generator.set_arguments('small', {'min_value': 5, 'max_value': 10})
@@ -117,7 +119,8 @@ def set_arguments(self, group, argument, value=None):
def get_arguments(self, group, argument=None):
"""
Get the value of an argument configured within a argument group, or
- the entire group as a dictionary.
+ the entire group as a dictionary. Used in conjunction with the
+ set_arguments() method.
generator.get_arguments('small', 'max_value')
generator.get_arguments('small')
@@ -131,8 +134,8 @@ def get_arguments(self, group, argument=None):
def del_arguments(self, group, argument=None):
"""
- Delete an argument from an argument group or the entire
- argument group.
+ Delete an argument from an argument group or the entire argument group.
+ Used in conjunction with the set_arguments() method.
generator.del_arguments('small')
generator.del_arguments('small', 'max_value')
@@ -149,9 +152,10 @@ def del_arguments(self, group, argument=None):
def parse(self, text):
"""
- Replaces tokens (like '{{ tokenName }}' or '{{tokenName}}')
- with the result from the token method call. Arguments can be
- parsed by using an argument group. '{{ tokenName:group }}'
+ Replaces tokens like '{{ tokenName }}' or '{{tokenName}}' in a string with
+ the result from the token method call. Arguments can be parsed by using an
+ argument group. For more information on the use of argument groups, please
+ refer to the set_arguments() method.
Example:
diff --git a/faker/providers/misc/__init__.py b/faker/providers/misc/__init__.py
index fe6e24bd0e..cb129ce5b8 100644
--- a/faker/providers/misc/__init__.py
+++ b/faker/providers/misc/__init__.py
@@ -7,8 +7,6 @@
import uuid
import zipfile
-from collections import OrderedDict
-
from .. import BaseProvider
localized = True
@@ -298,7 +296,12 @@ def dsv(self, dialect='faker-csv', header=None,
The ``header`` argument expects a list or a tuple of strings that will serve as the header row
if supplied. The ``data_columns`` argument expects a list or a tuple of string tokens, and these
string tokens will be passed to :meth:`pystr_format() <faker.providers.python.Provider.pystr_format>`
- for data generation. Both ``header`` and ``data_columns`` myst be of the same length.
+ for data generation. Argument Groups are used to pass arguments to the provider methods.
+ Both ``header`` and ``data_columns`` must be of the same length.
+
+ Example:
+ fake.set_arguments('top_half', {'min_value': 50, 'max_value': 100})
+ fake.dsv(data_columns=('{{ name }}', '{{ pyint:top_half }}'))
The ``num_rows`` argument controls how many rows of data to generate, and the ``include_row_ids``
argument may be set to ``True`` to include a sequential row ID column.
@@ -385,24 +388,37 @@ def psv(self, header=None, data_columns=('{{name}}', '{{address}}'), num_rows=10
)
def json(self,
- data_columns: list = [('name', 'name'), ('residency', 'address')],
+ data_columns: list = None,
num_rows: int = 10,
indent: int = None) -> str:
"""
- Generate random JSON structured key/values
+ Generate random JSON structure values.
- Using a list of records that is passed as ``data_columns``, you define the structure that
- will be generated. Parameters are provider specific, and should be a dictionary that will
- be passed to the provider method.
+ Using a dictionary or list of records that is passed as ``data_columns``,
+ define the structure that is used to build JSON structures. For complex
+ data structures it is recommended to use the dictionary format.
- Data Columns format
- [('field_name', 'provider_name', {'parameters'})]
+ Data Column Dictionary format:
+ {'key name': 'definition'}}
- The provider_name can also be a list of records, to create a list within the JSON data.
- For value only entries within the list, set the 'field_name' to None.
+ The definition can simply be the 'name:argument_group' of a provider
+ method, or can also be string {{ tokens }} that are passed to python
+ provider pystr_format() method for complex string generation.
+ Argument Groups are used to pass arguments to the provider methods.
- :param spec: specification for the data structure
- :type data_columns: list
+ Example:
+ fake.set_arguments('top_half', {'min_value': 50, 'max_value': 100})
+ fake.json(data_columns={'Name': 'name', 'Score': 'pyint:top_half'})
+
+ Data Column List format:
+ [('key name', 'definition', {'arguments'})]
+
+ With the list format the definition can be a list of records, to create
+ a list within the structure data. For literal entries within the list,
+ set the 'field_name' to None.
+
+ :param data_columns: specification for the data structure
+ :type data_columns: dict
:param num_rows: number of rows the returned
:type num_rows: int
:param indent: number of spaces to indent the fields
@@ -410,50 +426,99 @@ def json(self,
:return: Serialized JSON data
:rtype: str
- :sample: data_columns=[('id', 'pyint', {'max_value': 20})], num_rows=3
- :sample: data_columns=[('id', 'pyint'), ('details', (('name', 'name'),))], num_rows=1
- :sample: data_columns=[('id', 'pyint'), ('details', [(None, 'name'), (None, 'name')])], num_rows=1
- :sample: data_columns=[('id', 'pyint'), ('details', [('name', 'name'), ('name', 'name')])], num_rows=1
+ :sample: data_columns={'ID': 'pyint', 'Details': {'Name': 'name',
+ 'Address': 'address'}}, num_rows=1
+ :sample: data_columns={'Candidates': ['name', 'name', 'name']},
+ num_rows=1
+ :sample: data_columns=[('Name', 'name'), ('Points', 'pyint',
+ {'min_value': 50, 'max_value': 100})], num_rows=1
"""
+ default_data_columns = {
+ 'name': '{{name}}',
+ 'residency': '{{address}}',
+ }
+ data_columns = data_columns if data_columns else default_data_columns
+
+ def process_list_structure(data: list) -> dict:
+ entry = {}
+
+ for name, definition, *arguments in data:
+ kwargs = arguments[0] if arguments else {}
- def create_json_entry(data_columns: list) -> OrderedDict:
- entry = OrderedDict()
- for field_name, provider_name, *parameters in data_columns:
- kwargs = parameters[0] if parameters else {}
if not isinstance(kwargs, dict):
- raise TypeError("Parameters must be a dictionary")
+ raise TypeError('Invalid arguments type. Must be a dictionary')
+
+ if name is None:
+ return self._format_selection(definition, **kwargs)
+
+ if isinstance(definition, tuple):
+ entry[name] = process_list_structure(definition)
+ elif isinstance(definition, (list, set)):
+ entry[name] = [process_list_structure([item])
+ for item in definition]
+ else:
+ entry[name] = self._format_selection(definition, **kwargs)
+ return entry
+
+ def process_dict_structure(data: dict) -> dict:
+ entry = {}
- if field_name is None:
- return self.generator.format(provider_name, **kwargs)
+ if isinstance(data, str):
+ return self._format_selection(data)
- if isinstance(provider_name, tuple):
- entry[field_name] = create_json_entry(provider_name)
- elif isinstance(provider_name, list):
- entry[field_name] = [create_json_entry([item])
- for item in provider_name]
+ if isinstance(data, (float, int)):
+ return data
+
+ for name, definition in data.items():
+ if isinstance(definition, (tuple, list)):
+ entry[name] = [process_dict_structure(item)
+ for item in definition]
+ elif isinstance(definition, (dict, int, float)):
+ entry[name] = process_dict_structure(definition)
else:
- entry[field_name] = self.generator.format(provider_name, **kwargs)
+ entry[name] = self._format_selection(definition)
+
return entry
+ def create_json_structure(data_columns) -> dict:
+ if isinstance(data_columns, dict):
+ return process_dict_structure(data_columns)
+
+ if isinstance(data_columns, list):
+ return process_list_structure(data_columns)
+
+ raise TypeError('Invalid data_columns type. Must be a dictionary or list')
+
if num_rows == 1:
- return json.dumps(create_json_entry(data_columns), indent=indent)
+ return json.dumps(create_json_structure(data_columns), indent=indent)
- data = [create_json_entry(data_columns) for _ in range(num_rows)]
+ data = [create_json_structure(data_columns) for _ in range(num_rows)]
return json.dumps(data, indent=indent)
def fixed_width(self,
- data_columns: list = [(20, 'name'), (3, 'pyint', {'max_value': 20})],
+ data_columns: list = None,
num_rows: int = 10,
align: str = 'left') -> str:
"""
Generate random fixed width values.
- Using a list of records that is passed as ``data_columns``, you define the structure that
- will be generated. ``parameters`` are provider specific, and should be a dictionary that will
- be passed to the provider method.
+ Using a list of tuple records that is passed as ``data_columns``, that
+ defines the structure that will be generated. Arguments within the
+ record are provider specific, and should be a dictionary that will be
+ passed to the provider method.
+
+ Data Column List format
+ [('field width', 'definition', {'arguments'})]
+
+ The definition can simply be the 'name:argument_group' of a provider
+ method, or can also be string tokens that are passed to python
+ provider method pystr_format() for data generation.
+ Argument Groups can be used to pass arguments to the provider methods,
+ but will override the arguments supplied in the tuple record.
- Data Columns format
- [('field_width', 'provider_name', {'parameters'})]
+ Example:
+ fake.set_arguments('top_half', {'min_value': 50, 'max_value': 100})
+ fake.fixed_width(data_columns=[(20, 'name'), (3, 'pyint:top_half')])
:param data_columns: specification for the data structure
:type data_columns: list
@@ -464,25 +529,48 @@ def fixed_width(self,
:return: Serialized Fixed Width data
:rtype: str
- :sample: align='right', data_columns=[(20, 'name'), (3, 'pyint', {'max_value': 20})], num_rows=3
+ :sample: data_columns=[(20, 'name'), (3, 'pyint', {'min_value': 50,
+ 'max_value': 100})], align='right', num_rows=1
"""
+ default_data_columns = [
+ (20, 'name'),
+ (3, 'pyint', {'max_value': 20}),
+ ]
+ data_columns = data_columns if data_columns else default_data_columns
align_map = {
'left': '<',
'middle': '^',
'right': '>',
}
-
data = []
+
for _ in range(num_rows):
row = []
- for field_width, provider_name, *parameters in data_columns:
- kwargs = parameters[0] if parameters else {}
+
+ for width, definition, *arguments in data_columns:
+ kwargs = arguments[0] if arguments else {}
+
if not isinstance(kwargs, dict):
- raise TypeError("Parameters must be a dictionary")
+ raise TypeError('Invalid arguments type. Must be a dictionary')
- result = self.generator.format(provider_name, **kwargs)
- field = "{0:%s%s}" % (align_map.get(align, '<'), field_width)
- row.append(field.format(result)[:field_width])
- data.append(''.join(row))
+ result = self._format_selection(definition, **kwargs)
+ field = "{0:%s%s}" % (align_map.get(align, '<'), width)
+ row.append(field.format(result)[:width])
+ data.append(''.join(row))
return '\n'.join(data)
+
+ def _format_selection(self, definition, **kwargs):
+ """
+ Formats the string with PyStr Format if special characters are found.
+ """
+ if '{{' in definition and '}}' in definition:
+ return self.generator.pystr_format(definition)
+
+ if definition.count(':') == 1:
+ definition, argument_group = definition.split(':')
+ arguments = self.generator.get_arguments(argument_group.strip())
+
+ return self.generator.format(definition.strip(), **arguments)
+
+ return self.generator.format(definition, **kwargs)
| diff --git a/tests/providers/test_misc.py b/tests/providers/test_misc.py
index 3c5814e3b7..f81e7136e6 100644
--- a/tests/providers/test_misc.py
+++ b/tests/providers/test_misc.py
@@ -12,10 +12,25 @@
from faker import Faker
+class _FooBarProvider:
+
+ def foo_bar(self, param: str = None) -> str:
+ return 'FooBar' + str(param) if param else 'FooBar'
+
+ def test_integer(self, multi=1) -> int:
+ return 1 * multi
+
+ def test_float(self, multi=1) -> float:
+ return 1.1 * multi
+
+
class TestMisc(unittest.TestCase):
"""Tests miscellaneous generators"""
def setUp(self):
self.fake = Faker()
+ self.fake.add_provider(_FooBarProvider())
+ self.fake.set_arguments('argument_group', 'param', 'Baz')
+ self.fake.set_arguments('double', 'multi', 2)
Faker.seed(0)
def test_uuid4_str(self):
@@ -426,76 +441,199 @@ def test_psv_helper_method(self):
kwargs['delimiter'] = '|'
mock_dsv.assert_called_once_with(**kwargs)
- def test_json_single_entry(self):
+ def test_json_with_arguments(self):
kwargs = {
- 'data_columns': [('word', 'name'), ('number', 'pyint', {'max_value': 20})],
+ 'data_columns': [
+ ('item1', '{{ foo_bar:argument_group }}'),
+ ('item2', 'foo_bar', {'param': 'BAZ'}),
+ ],
'num_rows': 1,
}
- json_string = self.fake.json(**kwargs)
- json_data = json.loads(json_string)
+ json_data = json.loads(self.fake.json(**kwargs))
- assert isinstance(json_data, dict)
- assert 'word' in json_data
- assert 'number' in json_data
+ assert json_data.get('item1') == 'FooBarBaz'
+ assert json_data.get('item2') == 'FooBarBAZ'
- def test_json_multiple_entries(self):
+ def test_json_multiple_rows(self):
kwargs = {
- 'data_columns': [('word', 'name'), ('number', 'pyint', {'max_value': 20})],
+ 'data_columns': {'item': 'foo_bar'},
'num_rows': 2,
}
- json_string = self.fake.json(**kwargs)
- json_data = json.loads(json_string)
+ json_data = json.loads(self.fake.json(**kwargs))
+
+ assert isinstance(json_data, list) and len(json_data) == 2
- assert isinstance(json_data, list)
- for entry in json_data:
- assert 'word' in entry
- assert 'number' in entry
+ def test_json_passthrough_int_float(self):
+ kwargs = {
+ 'data_columns': {
+ 'item1': 1,
+ 'item2': 1.0,
+ },
+ 'num_rows': 1,
+ }
+ json_data = json.loads(self.fake.json(**kwargs))
+
+ assert json_data['item1'] == 1
+ assert json_data['item2'] == 1.0
+
+ def test_json_type_integrity_int(self):
+ kwargs = {
+ 'data_columns': {
+ 'item1': 'test_integer',
+ 'item2': 'test_integer:double',
+ },
+ 'num_rows': 1,
+ }
+ json_data = json.loads(self.fake.json(**kwargs))
+ assert isinstance(json_data['item1'], int)
+ assert json_data['item2'] == 2
+
+ def test_json_type_integrity_float(self):
+ kwargs = {
+ 'data_columns': {
+ 'item1': 'test_float',
+ 'item2': 'test_float:double',
+ },
+ 'num_rows': 1,
+ }
+ json_data = json.loads(self.fake.json(**kwargs))
+ assert isinstance(json_data['item1'], float)
+ assert json_data['item2'] == 2.2
+
+ def test_json_invalid_data_columns(self):
+ kwargs = {
+ 'data_columns': (('item', 'foo_bar'),),
+ 'num_rows': 1,
+ }
+ with self.assertRaises(TypeError) as excinfo:
+ json.loads(self.fake.json(**kwargs))
+ assert str(excinfo.exception) == 'Invalid data_columns type. Must be a dictionary or list'
+
+ def test_json_list_format_invalid_arguments_type(self):
+ kwargs = {
+ 'data_columns': [('item', 'foo_bar', ['wrong'])],
+ 'num_rows': 1,
+ }
+ with self.assertRaises(TypeError) as excinfo:
+ self.fake.json(**kwargs)
+ assert str(excinfo.exception) == 'Invalid arguments type. Must be a dictionary'
- def test_json_depth_structure(self):
+ def test_json_list_format_nested_list_of_values(self):
kwargs = {
'data_columns': [
- ('list1', [(None, 'pyint'), (None, 'pyint')]),
- ('list2', [('number', 'pyint'), ('number', 'pyint')]),
- ('dict', (('number', 'pyint'), ('number', 'pyint'))),
+ (
+ 'list', [
+ (None, '{{ foo_bar }}s'),
+ (None, 'foo_bar'),
+ ],
+ ),
],
'num_rows': 1,
}
+ json_data = json.loads(self.fake.json(**kwargs))
- json_string = self.fake.json(**kwargs)
- json_data = json.loads(json_string)
+ assert json_data['list'][0] == 'FooBars'
+ assert json_data['list'][1] == 'FooBar'
- assert isinstance(json_data['list1'], list)
- assert isinstance(json_data['list2'], list)
- assert isinstance(json_data['dict'], dict)
+ def test_json_list_format_nested_list_of_objects(self):
+ kwargs = {
+ 'data_columns': [
+ (
+ 'list', [
+ ('item', '{{ foo_bar }}s'),
+ ('item', 'foo_bar'),
+ ],
+ ),
+ ],
+ 'num_rows': 1,
+ }
+ json_data = json.loads(self.fake.json(**kwargs))
+
+ assert json_data['list'][0]['item'] == 'FooBars'
+ assert json_data['list'][1]['item'] == 'FooBar'
+
+ def test_json_list_format_nested_objects(self):
+ kwargs = {
+ 'data_columns': [
+ (
+ 'dict', (
+ ('item1', '{{ foo_bar }}s'),
+ ('item2', 'foo_bar'),
+ ),
+ ),
+ ],
+ 'num_rows': 1,
+ }
+ json_data = json.loads(self.fake.json(**kwargs))
- # Check the Lists have values, and key/values
- for item in json_data['list1']:
- assert isinstance(item, int)
- for item in json_data['list2']:
- assert isinstance(item, dict)
+ assert json_data['dict']['item1'] == 'FooBars'
+ assert json_data['dict']['item2'] == 'FooBar'
- def test_json_invalid_parameter_type(self):
+ def test_json_dict_format_nested_list_of_values(self):
kwargs = {
- 'data_columns': [('word', 'name', ['wrong'])],
+ 'data_columns': {
+ 'list': [
+ '{{ foo_bar }}s',
+ 'foo_bar',
+ ],
+ },
'num_rows': 1,
}
- with self.assertRaises(TypeError):
- self.fake.json(**kwargs)
+ json_data = json.loads(self.fake.json(**kwargs))
- def test_fixed_width_row_lengths(self):
+ assert json_data['list'][0] == 'FooBars'
+ assert json_data['list'][1] == 'FooBar'
+
+ def test_json_dict_format_nested_list_of_objects(self):
+ kwargs = {
+ 'data_columns': {
+ 'list': [
+ {'item': '{{ foo_bar }}s'},
+ {'item': 'foo_bar'},
+ ],
+ },
+ 'num_rows': 1,
+ }
+ json_data = json.loads(self.fake.json(**kwargs))
+
+ assert json_data['list'][0]['item'] == 'FooBars'
+ assert json_data['list'][1]['item'] == 'FooBar'
+
+ def test_json_dict_format_nested_objects(self):
+ kwargs = {
+ 'data_columns': {
+ 'dict': {
+ 'item1': '{{ foo_bar }}s',
+ 'item2': 'foo_bar',
+ },
+ },
+ 'num_rows': 1,
+ }
+ json_data = json.loads(self.fake.json(**kwargs))
+
+ assert json_data['dict']['item1'] == 'FooBars'
+ assert json_data['dict']['item2'] == 'FooBar'
+
+ def test_fixed_width_with_arguments(self):
kwargs = {
- 'data_columns': [(20, 'name'), (3, 'pyint', {'max_value': 20})],
+ 'data_columns': [
+ (9, '{{ foo_bar:argument_group }}'),
+ (9, 'foo_bar', {'param': 'BAR'}),
+ ],
'num_rows': 2,
}
fixed_width_string = self.fake.fixed_width(**kwargs)
for row in fixed_width_string.split('\n'):
- assert len(row) == 23
+ assert len(row) == 18
+ assert row[0:9].strip() == 'FooBarBaz'
+ assert row[9:18].strip() == 'FooBarBAR'
- def test_fixed_width_invalid_parameter_type(self):
+ def test_fixed_width_invalid_arguments_type(self):
kwargs = {
- 'data_columns': [(20, 'name'), (3, 'pyint', ['error'])],
+ 'data_columns': [(9, 'foo_bar', ['wrong'])],
'num_rows': 1,
}
- with self.assertRaises(TypeError):
+ with self.assertRaises(TypeError) as excinfo:
self.fake.fixed_width(**kwargs)
+ assert str(excinfo.exception) == 'Invalid arguments type. Must be a dictionary'
| [
{
"components": [
{
"doc": "",
"lines": [
442,
461
],
"name": "Provider.json.process_list_structure",
"signature": "def process_list_structure(data: list) -> dict:",
"type": "function"
},
{
"doc": "",
"lines": ... | [
"tests/providers/test_misc.py::TestMisc::test_fixed_width_invalid_arguments_type",
"tests/providers/test_misc.py::TestMisc::test_fixed_width_with_arguments",
"tests/providers/test_misc.py::TestMisc::test_json_dict_format_nested_list_of_objects",
"tests/providers/test_misc.py::TestMisc::test_json_dict_format_n... | [
"tests/providers/test_misc.py::TestMisc::test_csv_helper_method",
"tests/providers/test_misc.py::TestMisc::test_dsv_csvwriter_kwargs",
"tests/providers/test_misc.py::TestMisc::test_dsv_data_columns",
"tests/providers/test_misc.py::TestMisc::test_dsv_no_header",
"tests/providers/test_misc.py::TestMisc::test_... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
JSON and Fixed Width argument group and parser support.
### What does this change
The new Argument Groups feature is a much cleaner way to handle passing arguments to provider methods when building data structures like DSV, JSON and Fixed Width. It also allows the JSON provider to accept a dictionary that utilizes the Argument Groups, which is a much simpler way to build complex data structures.
```Python
fake.set_arguments('top_half', {'min_value': 50,'max_value': 100})
fake.json(data_columns={'Name': 'name', 'Score': 'pyint:top_half'})
fake.fixed_width(data_columns=[(20, 'name'), (3, 'pyint:top_half')])
```
If string tokens are used, then complex strings can be created using pystr_format under the hood.
```Python
fake.set_arguments('red_rgb', {'hue': 'red', 'color_format': 'rgb'})
fake.json(data_columns={'Address': 'address', 'Feature': '### - {{ color:red_rgb }}'})
```
All previous support for lists with tuple records is still intact and will not be broken. More tests have been added to ensure absolute safety for anyone who begins to use JSON with list format or dict format, and Fixed Width generation.
----------
</request>
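The examples above rely on a `name:argument_group` dispatch rule. Here is a hedged pure-Python sketch of that selection logic; the registry, the stub provider, and the placeholder return value for token strings are hypothetical stand-ins for Faker's generator, used only to show the three branches:

```python
# Hypothetical stand-ins for Faker's generator state:
ARGUMENT_GROUPS = {"top_half": {"min_value": 50, "max_value": 100}}
PROVIDERS = {"pyint": lambda min_value=0, max_value=9999: max_value}  # stub provider

def format_selection(definition):
    """Toy version of the selection rule: '{{ }}' tokens, 'name:group', or a plain name."""
    if "{{" in definition and "}}" in definition:
        # Would delegate to pystr_format() for complex string generation.
        return "<pystr_format: %s>" % definition
    if definition.count(":") == 1:
        name, group = definition.split(":")
        kwargs = ARGUMENT_GROUPS[group.strip()]
        return PROVIDERS[name.strip()](**kwargs)
    return PROVIDERS[definition]()

print(format_selection("pyint:top_half"))  # 100 (uses the argument group)
print(format_selection("pyint"))           # 9999 (provider defaults)
```

This mirrors how an argument group set once with `set_arguments()` can be reused across `dsv`, `json`, and `fixed_width` column definitions.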
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in faker/providers/misc/__init__.py]
(definition of Provider.json.process_list_structure:)
def process_list_structure(data: list) -> dict:
(definition of Provider.json.process_dict_structure:)
def process_dict_structure(data: dict) -> dict:
(definition of Provider.json.create_json_structure:)
def create_json_structure(data_columns) -> dict:
(definition of Provider._format_selection:)
def _format_selection(self, definition, **kwargs):
"""Formats the string with PyStr Format if special characters are found."""
[end of new definitions in faker/providers/misc/__init__.py]
</definitions>
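As a rough sketch of how `process_dict_structure` can walk nested dictionaries and lists, here is a simplified standalone version; the `resolve` callback (upper-casing here) is a purely illustrative stand-in for the real provider lookup:

```python
def process_dict_structure(data, resolve):
    """Walk a data_columns-style structure, applying `resolve` at string
    leaves and passing ints/floats through unchanged."""
    if isinstance(data, str):
        return resolve(data)
    if isinstance(data, (int, float)):
        return data
    if isinstance(data, (list, tuple)):
        return [process_dict_structure(item, resolve) for item in data]
    return {key: process_dict_structure(value, resolve)
            for key, value in data.items()}

columns = {"id": 7, "details": {"names": ["name", "name"]}}
result = process_dict_structure(columns, resolve=str.upper)
print(result)  # {'id': 7, 'details': {'names': ['NAME', 'NAME']}}
```

The recursion is what lets a single dictionary describe arbitrarily nested JSON: dicts become objects, lists become arrays, and strings become provider calls at the leaves.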
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Update python_requires in setup.py
https://github.com/joke2k/faker/blob/146f205b942d15c95160df35d3e431624697d079/setup.py#L65
----------
--------------------Finnish IBAN should be 18 characters long
* Faker version: 4.1.1
Finnish IBAN should be 18 characters long. Currently the returned Finnish IBAN has 20 characters.
### Steps to reproduce
```
from faker import Faker
>>> fake = Faker('fi_FI')
>>> fin_iban = fake.iban()
>>> fin_iban
'FI807370583252728936'
>>> len(fin_iban)
20
```
### Expected behavior
```
>>> len(fin_iban)
18
```
### Actual behavior
```
>>> len(fin_iban)
20
```
----------
--------------------
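For context on the 18-character expectation in the report above: an IBAN is a 2-letter country code plus 2 check digits plus a country-specific BBAN, and Finland's BBAN is 14 digits. The arithmetic is simply:

```python
# Finnish IBAN layout: 'FI' (2) + check digits (2) + BBAN (14) = 18 characters.
country_code, check_digits, bban = 2, 2, 14
expected_length = country_code + check_digits + bban
print(expected_length)               # 18
print(len("FI807370583252728936"))  # 20 -> two characters too long, as reported
```

So the generator in the report is emitting a BBAN two digits longer than Finland's format allows.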
</issues> | 6edfdbf6ae90b0153309e3bf066aa3b2d16494a7 | |
scikit-learn__scikit-learn-18176 | 18,176 | scikit-learn/scikit-learn | 0.24 | bf4714f40113ac4e6045d34d89905f146c5274b3 | 2020-08-17T09:07:16Z | diff --git a/doc/modules/classes.rst b/doc/modules/classes.rst
index f5a0e71e07d1c..2ec617df85cc0 100644
--- a/doc/modules/classes.rst
+++ b/doc/modules/classes.rst
@@ -947,7 +947,7 @@ details.
metrics.cohen_kappa_score
metrics.confusion_matrix
metrics.dcg_score
- metrics.detection_error_tradeoff_curve
+ metrics.det_curve
metrics.f1_score
metrics.fbeta_score
metrics.hamming_loss
@@ -1100,6 +1100,7 @@ See the :ref:`visualizations` section of the user guide for further details.
:template: function.rst
metrics.plot_confusion_matrix
+ metrics.plot_det_curve
metrics.plot_precision_recall_curve
metrics.plot_roc_curve
@@ -1108,6 +1109,7 @@ See the :ref:`visualizations` section of the user guide for further details.
:template: class.rst
metrics.ConfusionMatrixDisplay
+ metrics.DetCurveDisplay
metrics.PrecisionRecallDisplay
metrics.RocCurveDisplay
diff --git a/doc/modules/model_evaluation.rst b/doc/modules/model_evaluation.rst
index e64aee2075e06..58c30d3091830 100644
--- a/doc/modules/model_evaluation.rst
+++ b/doc/modules/model_evaluation.rst
@@ -306,7 +306,7 @@ Some of these are restricted to the binary classification case:
precision_recall_curve
roc_curve
- detection_error_tradeoff_curve
+ det_curve
Others also work in the multiclass case:
@@ -1443,7 +1443,7 @@ to the given limit.
Detection error tradeoff (DET)
------------------------------
-The function :func:`detection_error_tradeoff_curve` computes the
+The function :func:`det_curve` computes the
detection error tradeoff curve (DET) curve [WikipediaDET2017]_.
Quoting Wikipedia:
diff --git a/doc/visualizations.rst b/doc/visualizations.rst
index ad316205b3c90..a2d40408b403f 100644
--- a/doc/visualizations.rst
+++ b/doc/visualizations.rst
@@ -78,6 +78,7 @@ Functions
inspection.plot_partial_dependence
metrics.plot_confusion_matrix
+ metrics.plot_det_curve
metrics.plot_precision_recall_curve
metrics.plot_roc_curve
@@ -91,5 +92,6 @@ Display Objects
inspection.PartialDependenceDisplay
metrics.ConfusionMatrixDisplay
+ metrics.DetCurveDisplay
metrics.PrecisionRecallDisplay
metrics.RocCurveDisplay
diff --git a/doc/whats_new/v0.24.rst b/doc/whats_new/v0.24.rst
index aaf86a2f0576d..1da9670307b75 100644
--- a/doc/whats_new/v0.24.rst
+++ b/doc/whats_new/v0.24.rst
@@ -280,11 +280,15 @@ Changelog
:mod:`sklearn.metrics`
......................
-- |Feature| Added :func:`metrics.detection_error_tradeoff_curve` to compute
- Detection Error Tradeoff curve classification metric.
+- |Feature| Added :func:`metrics.det_curve` to compute Detection Error Tradeoff
+ curve classification metric.
:pr:`10591` by :user:`Jeremy Karnowski <jkarnows>` and
:user:`Daniel Mohns <dmohns>`.
+- |Feature| Added :func:`metrics.plot_det_curve` and :class:`DetCurveDisplay`
+ to ease the plot of DET curves.
+ :pr:`18176` by :user:`Guillaume Lemaitre <glemaitre>`.
+
- |Feature| Added :func:`metrics.mean_absolute_percentage_error` metric and
the associated scorer for regression problems. :issue:`10708` fixed with the
PR :pr:`15007` by :user:`Ashutosh Hathidara <ashutosh1919>`. The scorer and
diff --git a/examples/model_selection/plot_det.py b/examples/model_selection/plot_det.py
index f4b1b96f947a2..2e27dd07ee684 100644
--- a/examples/model_selection/plot_det.py
+++ b/examples/model_selection/plot_det.py
@@ -8,9 +8,9 @@
for the same classification task.
DET curves are commonly plotted in normal deviate scale.
-To achieve this we transform the error rates as returned by the
-:func:`~sklearn.metrics.detection_error_tradeoff_curve` function and the axis
-scale using :func:`scipy.stats.norm`.
+To achieve this `plot_det_curve` transforms the error rates as returned by the
+:func:`~sklearn.metrics.det_curve` and the axis scale using
+:func:`scipy.stats.norm`.
The point of this example is to demonstrate two properties of DET curves,
namely:
@@ -39,8 +39,8 @@
- See :func:`sklearn.metrics.roc_curve` for further information about ROC
curves.
- - See :func:`sklearn.metrics.detection_error_tradeoff_curve` for further
- information about DET curves.
+ - See :func:`sklearn.metrics.det_curve` for further information about
+ DET curves.
- This example is loosely based on
:ref:`sphx_glr_auto_examples_classification_plot_classifier_comparison.py`
@@ -51,15 +51,13 @@
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
-from sklearn.metrics import detection_error_tradeoff_curve
+from sklearn.metrics import plot_det_curve
from sklearn.metrics import plot_roc_curve
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
-from scipy.stats import norm
-
N_SAMPLES = 1000
classifiers = {
@@ -79,43 +77,17 @@
# prepare plots
fig, [ax_roc, ax_det] = plt.subplots(1, 2, figsize=(11, 5))
-# first prepare the ROC curve
-ax_roc.set_title('Receiver Operating Characteristic (ROC) curves')
-ax_roc.grid(linestyle='--')
-
-# second prepare the DET curve
-ax_det.set_title('Detection Error Tradeoff (DET) curves')
-ax_det.set_xlabel('False Positive Rate')
-ax_det.set_ylabel('False Negative Rate')
-ax_det.set_xlim(-3, 3)
-ax_det.set_ylim(-3, 3)
-ax_det.grid(linestyle='--')
-
-# customized ticks for DET curve plot to represent normal deviate scale
-ticks = [0.001, 0.01, 0.05, 0.20, 0.5, 0.80, 0.95, 0.99, 0.999]
-tick_locs = norm.ppf(ticks)
-tick_lbls = [
- '{:.0%}'.format(s) if (100*s).is_integer() else '{:.1%}'.format(s)
- for s in ticks
-]
-plt.sca(ax_det)
-plt.xticks(tick_locs, tick_lbls)
-plt.yticks(tick_locs, tick_lbls)
-
-# iterate over classifiers
for name, clf in classifiers.items():
clf.fit(X_train, y_train)
- if hasattr(clf, "decision_function"):
- y_score = clf.decision_function(X_test)
- else:
- y_score = clf.predict_proba(X_test)[:, 1]
-
plot_roc_curve(clf, X_test, y_test, ax=ax_roc, name=name)
- det_fpr, det_fnr, _ = detection_error_tradeoff_curve(y_test, y_score)
+ plot_det_curve(clf, X_test, y_test, ax=ax_det, name=name)
+
+ax_roc.set_title('Receiver Operating Characteristic (ROC) curves')
+ax_det.set_title('Detection Error Tradeoff (DET) curves')
- # transform errors into normal deviate scale
- ax_det.plot(norm.ppf(det_fpr), norm.ppf(det_fnr), label=name)
+ax_roc.grid(linestyle='--')
+ax_det.grid(linestyle='--')
plt.legend()
plt.show()
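As the example above notes, DET curves use a normal deviate scale: error rates are passed through the inverse CDF of the standard normal distribution (`scipy.stats.norm.ppf`). A quick stdlib-only sketch of that transform, using `statistics.NormalDist`, which computes the same quantile function:

```python
from statistics import NormalDist

def normal_deviate(p: float) -> float:
    # Inverse CDF (quantile) of the standard normal,
    # equivalent to scipy.stats.norm.ppf(p).
    return NormalDist().inv_cdf(p)

# 50% maps to 0, and the scale stretches the tails symmetrically,
# which is why DET curves of well-behaved classifiers look near-linear.
print(round(normal_deviate(0.5), 6))    # 0.0
print(round(normal_deviate(0.975), 3))  # 1.96
print(round(normal_deviate(0.025), 3))  # -1.96
```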
diff --git a/sklearn/metrics/__init__.py b/sklearn/metrics/__init__.py
index a69d5c618c20f..a8beea4f8c2f9 100644
--- a/sklearn/metrics/__init__.py
+++ b/sklearn/metrics/__init__.py
@@ -7,7 +7,7 @@
from ._ranking import auc
from ._ranking import average_precision_score
from ._ranking import coverage_error
-from ._ranking import detection_error_tradeoff_curve
+from ._ranking import det_curve
from ._ranking import dcg_score
from ._ranking import label_ranking_average_precision_score
from ._ranking import label_ranking_loss
@@ -77,6 +77,8 @@
from ._scorer import SCORERS
from ._scorer import get_scorer
+from ._plot.det_curve import plot_det_curve
+from ._plot.det_curve import DetCurveDisplay
from ._plot.roc_curve import plot_roc_curve
from ._plot.roc_curve import RocCurveDisplay
from ._plot.precision_recall_curve import plot_precision_recall_curve
@@ -105,7 +107,8 @@
'coverage_error',
'dcg_score',
'davies_bouldin_score',
- 'detection_error_tradeoff_curve',
+ 'DetCurveDisplay',
+ 'det_curve',
'euclidean_distances',
'explained_variance_score',
'f1_score',
@@ -142,6 +145,7 @@
'pairwise_distances_chunked',
'pairwise_kernels',
'plot_confusion_matrix',
+ 'plot_det_curve',
'plot_precision_recall_curve',
'plot_roc_curve',
'PrecisionRecallDisplay',
diff --git a/sklearn/metrics/_plot/det_curve.py b/sklearn/metrics/_plot/det_curve.py
new file mode 100644
index 0000000000000..00ae0226ea6d3
--- /dev/null
+++ b/sklearn/metrics/_plot/det_curve.py
@@ -0,0 +1,224 @@
+import scipy as sp
+
+from .base import _get_response
+
+from .. import det_curve
+
+from ...utils import check_matplotlib_support
+
+
+class DetCurveDisplay:
+ """DET curve visualization.
+
+ It is recommended to use :func:`~sklearn.metrics.plot_det_curve` to create a
+ visualizer. All parameters are stored as attributes.
+
+ Read more in the :ref:`User Guide <visualizations>`.
+
+ .. versionadded:: 0.24
+
+ Parameters
+ ----------
+ fpr : ndarray
+ False positive rate.
+
+ fnr : ndarray
+ False negative rate.
+
+ estimator_name : str, default=None
+ Name of estimator. If None, the estimator name is not shown.
+
+ pos_label : str or int, default=None
+ The label of the positive class.
+
+ Attributes
+ ----------
+ line_ : matplotlib Artist
+ DET Curve.
+
+ ax_ : matplotlib Axes
+ Axes with DET Curve.
+
+ figure_ : matplotlib Figure
+ Figure containing the curve.
+
+ Examples
+ --------
+ >>> import matplotlib.pyplot as plt # doctest: +SKIP
+ >>> import numpy as np
+ >>> from sklearn import metrics
+ >>> y = np.array([0, 0, 1, 1])
+ >>> pred = np.array([0.1, 0.4, 0.35, 0.8])
+ >>> fpr, fnr, thresholds = metrics.det_curve(y, pred)
+ >>> display = metrics.DetCurveDisplay(
+ ... fpr=fpr, fnr=fnr, estimator_name='example estimator'
+ ... )
+ >>> display.plot() # doctest: +SKIP
+ >>> plt.show() # doctest: +SKIP
+ """
+ def __init__(self, *, fpr, fnr, estimator_name=None, pos_label=None):
+ self.fpr = fpr
+ self.fnr = fnr
+ self.estimator_name = estimator_name
+ self.pos_label = pos_label
+
+ def plot(self, ax=None, *, name=None, **kwargs):
+ """Plot visualization.
+
+ Parameters
+ ----------
+ ax : matplotlib axes, default=None
+ Axes object to plot on. If `None`, a new figure and axes is
+ created.
+
+ name : str, default=None
+ Name of DET curve for labeling. If `None`, use the name of the
+ estimator.
+
+ Returns
+ -------
+ display : :class:`~sklearn.metrics.plot.DetCurveDisplay`
+ Object that stores computed values.
+ """
+ check_matplotlib_support('DetCurveDisplay.plot')
+
+ name = self.estimator_name if name is None else name
+ line_kwargs = {} if name is None else {"label": name}
+ line_kwargs.update(**kwargs)
+
+ import matplotlib.pyplot as plt
+
+ if ax is None:
+ _, ax = plt.subplots()
+
+ self.line_, = ax.plot(
+ sp.stats.norm.ppf(self.fpr),
+ sp.stats.norm.ppf(self.fnr),
+ **line_kwargs,
+ )
+ info_pos_label = (f" (Positive label: {self.pos_label})"
+ if self.pos_label is not None else "")
+
+ xlabel = "False Positive Rate" + info_pos_label
+ ylabel = "False Negative Rate" + info_pos_label
+ ax.set(xlabel=xlabel, ylabel=ylabel)
+
+ if "label" in line_kwargs:
+ ax.legend(loc="lower right")
+
+ ticks = [0.001, 0.01, 0.05, 0.20, 0.5, 0.80, 0.95, 0.99, 0.999]
+ tick_locations = sp.stats.norm.ppf(ticks)
+ tick_labels = [
+ '{:.0%}'.format(s) if (100*s).is_integer() else '{:.1%}'.format(s)
+ for s in ticks
+ ]
+ ax.set_xticks(tick_locations)
+ ax.set_xticklabels(tick_labels)
+ ax.set_xlim(-3, 3)
+ ax.set_yticks(tick_locations)
+ ax.set_yticklabels(tick_labels)
+ ax.set_ylim(-3, 3)
+
+ self.ax_ = ax
+ self.figure_ = ax.figure
+ return self
+
+
+def plot_det_curve(
+ estimator,
+ X,
+ y,
+ *,
+ sample_weight=None,
+ response_method="auto",
+ name=None,
+ ax=None,
+ pos_label=None,
+ **kwargs
+):
+ """Plot detection error tradeoff (DET) curve.
+
+ Extra keyword arguments will be passed to matplotlib's `plot`.
+
+ Read more in the :ref:`User Guide <visualizations>`.
+
+ .. versionadded:: 0.24
+
+ Parameters
+ ----------
+ estimator : estimator instance
+ Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
+ in which the last estimator is a classifier.
+
+ X : {array-like, sparse matrix} of shape (n_samples, n_features)
+ Input values.
+
+ y : array-like of shape (n_samples,)
+ Target values.
+
+ sample_weight : array-like of shape (n_samples,), default=None
+ Sample weights.
+
+ response_method : {'predict_proba', 'decision_function', 'auto'} \
+ default='auto'
+ Specifies whether to use :term:`predict_proba` or
+ :term:`decision_function` as the predicted target response. If set to
+ 'auto', :term:`predict_proba` is tried first and if it does not exist
+ :term:`decision_function` is tried next.
+
+ name : str, default=None
+ Name of DET curve for labeling. If `None`, use the name of the
+ estimator.
+
+ ax : matplotlib axes, default=None
+ Axes object to plot on. If `None`, a new figure and axes is created.
+
+ pos_label : str or int, default=None
+ The label of the positive class.
+ When `pos_label=None`, if `y_true` is in {-1, 1} or {0, 1},
+ `pos_label` is set to 1, otherwise an error will be raised.
+
+ Returns
+ -------
+ display : :class:`~sklearn.metrics.DetCurveDisplay`
+ Object that stores computed values.
+
+ See Also
+ --------
+ det_curve : Compute error rates for different probability thresholds
+
+ plot_roc_curve : Plot Receiver operating characteristic (ROC) curve
+
+ Examples
+ --------
+ >>> import matplotlib.pyplot as plt # doctest: +SKIP
+ >>> from sklearn import datasets, metrics, model_selection, svm
+ >>> X, y = datasets.make_classification(random_state=0)
+ >>> X_train, X_test, y_train, y_test = model_selection.train_test_split(
+ ... X, y, random_state=0)
+ >>> clf = svm.SVC(random_state=0)
+ >>> clf.fit(X_train, y_train)
+ SVC(random_state=0)
+ >>> metrics.plot_det_curve(clf, X_test, y_test) # doctest: +SKIP
+ >>> plt.show() # doctest: +SKIP
+ """
+ check_matplotlib_support('plot_det_curve')
+
+ y_pred, pos_label = _get_response(
+ X, estimator, response_method, pos_label=pos_label
+ )
+
+ fpr, fnr, _ = det_curve(
+ y, y_pred, pos_label=pos_label, sample_weight=sample_weight,
+ )
+
+ name = estimator.__class__.__name__ if name is None else name
+
+ viz = DetCurveDisplay(
+ fpr=fpr,
+ fnr=fnr,
+ estimator_name=name,
+ pos_label=pos_label
+ )
+
+ return viz.plot(ax=ax, name=name, **kwargs)
diff --git a/sklearn/metrics/_ranking.py b/sklearn/metrics/_ranking.py
index 0d44600d272e3..8e6a2280eebd5 100644
--- a/sklearn/metrics/_ranking.py
+++ b/sklearn/metrics/_ranking.py
@@ -218,8 +218,7 @@ def _binary_uninterpolated_average_precision(
average, sample_weight=sample_weight)
-def detection_error_tradeoff_curve(y_true, y_score, pos_label=None,
- sample_weight=None):
+def det_curve(y_true, y_score, pos_label=None, sample_weight=None):
"""Compute error rates for different probability thresholds.
.. note::
@@ -273,10 +272,10 @@ def detection_error_tradeoff_curve(y_true, y_score, pos_label=None,
Examples
--------
>>> import numpy as np
- >>> from sklearn.metrics import detection_error_tradeoff_curve
+ >>> from sklearn.metrics import det_curve
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
- >>> fpr, fnr, thresholds = detection_error_tradeoff_curve(y_true, y_scores)
+ >>> fpr, fnr, thresholds = det_curve(y_true, y_scores)
>>> fpr
array([0.5, 0.5, 0. ])
>>> fnr
@@ -747,14 +746,13 @@ def precision_recall_curve(y_true, probas_pred, *, pos_label=None,
thresholds : ndarray of shape (n_thresholds,)
Increasing thresholds on the decision function used to compute
- precision and recall. n_thresgolds <= len(np.unique(probas_pred)).
+ precision and recall. n_thresholds <= len(np.unique(probas_pred)).
See also
--------
average_precision_score : Compute average precision from prediction scores
- detection_error_tradeoff_curve: Compute error rates for different \
- probability thresholds
+ det_curve: Compute error rates for different probability thresholds
roc_curve : Compute Receiver operating characteristic (ROC) curve
@@ -846,8 +844,7 @@ def roc_curve(y_true, y_score, *, pos_label=None, sample_weight=None,
See Also
--------
- detection_error_tradeoff_curve: Compute error rates for different \
- probability thresholds
+ det_curve: Compute error rates for different probability thresholds
roc_auc_score : Compute the area under the ROC curve
| diff --git a/sklearn/metrics/_plot/tests/test_plot_curve_common.py b/sklearn/metrics/_plot/tests/test_plot_curve_common.py
new file mode 100644
index 0000000000000..c3b56f1724372
--- /dev/null
+++ b/sklearn/metrics/_plot/tests/test_plot_curve_common.py
@@ -0,0 +1,103 @@
+import pytest
+
+from sklearn.base import ClassifierMixin
+from sklearn.base import clone
+from sklearn.compose import make_column_transformer
+from sklearn.datasets import load_iris
+from sklearn.exceptions import NotFittedError
+from sklearn.linear_model import LogisticRegression
+from sklearn.pipeline import make_pipeline
+from sklearn.preprocessing import StandardScaler
+from sklearn.tree import DecisionTreeClassifier
+
+from sklearn.metrics import plot_det_curve
+from sklearn.metrics import plot_roc_curve
+
+
+@pytest.fixture(scope="module")
+def data():
+ return load_iris(return_X_y=True)
+
+
+@pytest.fixture(scope="module")
+def data_binary(data):
+ X, y = data
+ return X[y < 2], y[y < 2]
+
+
+@pytest.mark.parametrize("plot_func", [plot_det_curve, plot_roc_curve])
+def test_plot_curve_error_non_binary(pyplot, data, plot_func):
+ X, y = data
+ clf = DecisionTreeClassifier()
+ clf.fit(X, y)
+
+ msg = "DecisionTreeClassifier should be a binary classifier"
+ with pytest.raises(ValueError, match=msg):
+ plot_func(clf, X, y)
+
+
+@pytest.mark.parametrize(
+ "response_method, msg",
+ [("predict_proba", "response method predict_proba is not defined in "
+ "MyClassifier"),
+ ("decision_function", "response method decision_function is not defined "
+ "in MyClassifier"),
+ ("auto", "response method decision_function or predict_proba is not "
+ "defined in MyClassifier"),
+ ("bad_method", "response_method must be 'predict_proba', "
+ "'decision_function' or 'auto'")]
+)
+@pytest.mark.parametrize("plot_func", [plot_det_curve, plot_roc_curve])
+def test_plot_curve_error_no_response(
+ pyplot, data_binary, response_method, msg, plot_func,
+):
+ X, y = data_binary
+
+ class MyClassifier(ClassifierMixin):
+ def fit(self, X, y):
+ self.classes_ = [0, 1]
+ return self
+
+ clf = MyClassifier().fit(X, y)
+
+ with pytest.raises(ValueError, match=msg):
+ plot_func(clf, X, y, response_method=response_method)
+
+
+@pytest.mark.parametrize("plot_func", [plot_det_curve, plot_roc_curve])
+def test_plot_curve_estimator_name_multiple_calls(
+ pyplot, data_binary, plot_func
+):
+ # non-regression test checking that the `name` used when calling
+ # `plot_func` is used as well when calling `disp.plot()`
+ X, y = data_binary
+ clf_name = "my hand-crafted name"
+ clf = LogisticRegression().fit(X, y)
+ disp = plot_func(clf, X, y, name=clf_name)
+ assert disp.estimator_name == clf_name
+ pyplot.close("all")
+ disp.plot()
+ assert clf_name in disp.line_.get_label()
+ pyplot.close("all")
+ clf_name = "another_name"
+ disp.plot(name=clf_name)
+ assert clf_name in disp.line_.get_label()
+
+
+@pytest.mark.parametrize(
+ "clf", [LogisticRegression(),
+ make_pipeline(StandardScaler(), LogisticRegression()),
+ make_pipeline(make_column_transformer((StandardScaler(), [0, 1])),
+ LogisticRegression())])
+@pytest.mark.parametrize("plot_func", [plot_det_curve, plot_roc_curve])
+def test_plot_det_curve_not_fitted_errors(pyplot, data_binary, clf, plot_func):
+ X, y = data_binary
+ # clone since we parametrize the test and the classifier will be fitted
+ # when testing the second and subsequent plotting function
+ model = clone(clf)
+ with pytest.raises(NotFittedError):
+ plot_func(model, X, y)
+ model.fit(X, y)
+ disp = plot_func(model, X, y)
+ assert model.__class__.__name__ in disp.line_.get_label()
+ assert disp.estimator_name == model.__class__.__name__
diff --git a/sklearn/metrics/_plot/tests/test_plot_det_curve.py b/sklearn/metrics/_plot/tests/test_plot_det_curve.py
new file mode 100644
index 0000000000000..9ef10237af879
--- /dev/null
+++ b/sklearn/metrics/_plot/tests/test_plot_det_curve.py
@@ -0,0 +1,84 @@
+import pytest
+import numpy as np
+from numpy.testing import assert_allclose
+
+from sklearn.datasets import load_iris
+from sklearn.linear_model import LogisticRegression
+
+from sklearn.metrics import det_curve
+from sklearn.metrics import plot_det_curve
+
+
+@pytest.fixture(scope="module")
+def data():
+ return load_iris(return_X_y=True)
+
+
+@pytest.fixture(scope="module")
+def data_binary(data):
+ X, y = data
+ return X[y < 2], y[y < 2]
+
+
+@pytest.mark.parametrize(
+ "response_method", ["predict_proba", "decision_function"]
+)
+@pytest.mark.parametrize("with_sample_weight", [True, False])
+@pytest.mark.parametrize("with_strings", [True, False])
+def test_plot_det_curve(
+ pyplot,
+ response_method,
+ data_binary,
+ with_sample_weight,
+ with_strings
+):
+ X, y = data_binary
+
+ pos_label = None
+ if with_strings:
+ y = np.array(["c", "b"])[y]
+ pos_label = "c"
+
+ if with_sample_weight:
+ rng = np.random.RandomState(42)
+ sample_weight = rng.randint(1, 4, size=(X.shape[0]))
+ else:
+ sample_weight = None
+
+ lr = LogisticRegression()
+ lr.fit(X, y)
+
+ viz = plot_det_curve(
+ lr, X, y, alpha=0.8, sample_weight=sample_weight,
+ )
+
+ y_pred = getattr(lr, response_method)(X)
+ if y_pred.ndim == 2:
+ y_pred = y_pred[:, 1]
+
+ fpr, fnr, _ = det_curve(
+ y, y_pred, sample_weight=sample_weight, pos_label=pos_label,
+ )
+
+ assert_allclose(viz.fpr, fpr)
+ assert_allclose(viz.fnr, fnr)
+
+ assert viz.estimator_name == "LogisticRegression"
+
+ # cannot fail thanks to pyplot fixture
+ import matplotlib as mpl # noqa
+ assert isinstance(viz.line_, mpl.lines.Line2D)
+ assert viz.line_.get_alpha() == 0.8
+ assert isinstance(viz.ax_, mpl.axes.Axes)
+ assert isinstance(viz.figure_, mpl.figure.Figure)
+ assert viz.line_.get_label() == "LogisticRegression"
+
+ expected_pos_label = 1 if pos_label is None else pos_label
+ expected_ylabel = (
+ f"False Negative Rate (Positive label: {expected_pos_label})"
+ )
+ expected_xlabel = (
+ f"False Positive Rate (Positive label: {expected_pos_label})"
+ )
+ assert viz.ax_.get_ylabel() == expected_ylabel
+ assert viz.ax_.get_xlabel() == expected_xlabel
diff --git a/sklearn/metrics/_plot/tests/test_plot_roc_curve.py b/sklearn/metrics/_plot/tests/test_plot_roc_curve.py
index 76b7024f0dc7c..de5a23d81af19 100644
--- a/sklearn/metrics/_plot/tests/test_plot_roc_curve.py
+++ b/sklearn/metrics/_plot/tests/test_plot_roc_curve.py
@@ -2,7 +2,6 @@
import numpy as np
from numpy.testing import assert_allclose
-from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import plot_roc_curve
from sklearn.metrics import RocCurveDisplay
from sklearn.metrics import roc_curve
@@ -11,7 +10,6 @@
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
-from sklearn.base import ClassifierMixin
from sklearn.exceptions import NotFittedError
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
@@ -34,42 +32,6 @@ def data_binary(data):
X, y = data
return X[y < 2], y[y < 2]
-
-def test_plot_roc_curve_error_non_binary(pyplot, data):
- X, y = data
- clf = DecisionTreeClassifier()
- clf.fit(X, y)
-
- msg = "DecisionTreeClassifier should be a binary classifier"
- with pytest.raises(ValueError, match=msg):
- plot_roc_curve(clf, X, y)
-
-
-@pytest.mark.parametrize(
- "response_method, msg",
- [("predict_proba", "response method predict_proba is not defined in "
- "MyClassifier"),
- ("decision_function", "response method decision_function is not defined "
- "in MyClassifier"),
- ("auto", "response method decision_function or predict_proba is not "
- "defined in MyClassifier"),
- ("bad_method", "response_method must be 'predict_proba', "
- "'decision_function' or 'auto'")])
-def test_plot_roc_curve_error_no_response(pyplot, data_binary, response_method,
- msg):
- X, y = data_binary
-
- class MyClassifier(ClassifierMixin):
- def fit(self, X, y):
- self.classes_ = [0, 1]
- return self
-
- clf = MyClassifier().fit(X, y)
-
- with pytest.raises(ValueError, match=msg):
- plot_roc_curve(clf, X, y, response_method=response_method)
-
-
@pytest.mark.parametrize("response_method",
["predict_proba", "decision_function"])
@pytest.mark.parametrize("with_sample_weight", [True, False])
@@ -146,23 +108,6 @@ def test_roc_curve_not_fitted_errors(pyplot, data_binary, clf):
assert disp.estimator_name == clf.__class__.__name__
-def test_plot_roc_curve_estimator_name_multiple_calls(pyplot, data_binary):
- # non-regression test checking that the `name` used when calling
- # `plot_roc_curve` is used as well when calling `disp.plot()`
- X, y = data_binary
- clf_name = "my hand-crafted name"
- clf = LogisticRegression().fit(X, y)
- disp = plot_roc_curve(clf, X, y, name=clf_name)
- assert disp.estimator_name == clf_name
- pyplot.close("all")
- disp.plot()
- assert clf_name in disp.line_.get_label()
- pyplot.close("all")
- clf_name = "another_name"
- disp.plot(name=clf_name)
- assert clf_name in disp.line_.get_label()
-
-
@pytest.mark.parametrize(
"roc_auc, estimator_name, expected_label",
[
diff --git a/sklearn/metrics/tests/test_common.py b/sklearn/metrics/tests/test_common.py
index 24f01d46610a7..5a8e4b3f69d8c 100644
--- a/sklearn/metrics/tests/test_common.py
+++ b/sklearn/metrics/tests/test_common.py
@@ -29,7 +29,7 @@
from sklearn.metrics import cohen_kappa_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import coverage_error
-from sklearn.metrics import detection_error_tradeoff_curve
+from sklearn.metrics import det_curve
from sklearn.metrics import explained_variance_score
from sklearn.metrics import f1_score
from sklearn.metrics import fbeta_score
@@ -206,7 +206,7 @@ def precision_recall_curve_padded_thresholds(*args, **kwargs):
CURVE_METRICS = {
"roc_curve": roc_curve,
"precision_recall_curve": precision_recall_curve_padded_thresholds,
- "detection_error_tradeoff_curve": detection_error_tradeoff_curve,
+ "det_curve": det_curve,
}
THRESHOLDED_METRICS = {
@@ -303,7 +303,7 @@ def precision_recall_curve_padded_thresholds(*args, **kwargs):
# curves
"roc_curve",
"precision_recall_curve",
- "detection_error_tradeoff_curve",
+ "det_curve",
}
# Metric undefined with "binary" or "multiclass" input
@@ -325,7 +325,7 @@ def precision_recall_curve_padded_thresholds(*args, **kwargs):
METRICS_WITH_POS_LABEL = {
"roc_curve",
"precision_recall_curve",
- "detection_error_tradeoff_curve",
+ "det_curve",
"brier_score_loss",
@@ -356,7 +356,7 @@ def precision_recall_curve_padded_thresholds(*args, **kwargs):
"normalized_confusion_matrix",
"roc_curve",
"precision_recall_curve",
- "detection_error_tradeoff_curve",
+ "det_curve",
"precision_score", "recall_score", "f1_score", "f2_score", "f0.5_score",
"jaccard_score",
@@ -469,7 +469,7 @@ def precision_recall_curve_padded_thresholds(*args, **kwargs):
"normalized_confusion_matrix",
"roc_curve",
"precision_recall_curve",
- "detection_error_tradeoff_curve",
+ "det_curve",
"precision_score", "recall_score", "f2_score", "f0.5_score",
diff --git a/sklearn/metrics/tests/test_ranking.py b/sklearn/metrics/tests/test_ranking.py
index 02d19aea1236c..166ff775e2690 100644
--- a/sklearn/metrics/tests/test_ranking.py
+++ b/sklearn/metrics/tests/test_ranking.py
@@ -22,7 +22,7 @@
from sklearn.metrics import auc
from sklearn.metrics import average_precision_score
from sklearn.metrics import coverage_error
-from sklearn.metrics import detection_error_tradeoff_curve
+from sklearn.metrics import det_curve
from sklearn.metrics import label_ranking_average_precision_score
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import label_ranking_loss
@@ -949,10 +949,10 @@ def test_score_scale_invariance():
([1, 0, 1], [0.5, 0.75, 1], [1, 1, 0], [0, 0.5, 0.5]),
([1, 0, 1], [0.25, 0.5, 0.75], [1, 1, 0], [0, 0.5, 0.5]),
])
-def test_detection_error_tradeoff_curve_toydata(y_true, y_score,
+def test_det_curve_toydata(y_true, y_score,
expected_fpr, expected_fnr):
# Check on a batch of small examples.
- fpr, fnr, _ = detection_error_tradeoff_curve(y_true, y_score)
+ fpr, fnr, _ = det_curve(y_true, y_score)
assert_allclose(fpr, expected_fpr)
assert_allclose(fnr, expected_fnr)
@@ -968,20 +968,20 @@ def test_detection_error_tradeoff_curve_toydata(y_true, y_score,
([1, 0, 1], [0.25, 0.5, 0.5], [1], [0]),
([1, 1, 0], [0.25, 0.5, 0.5], [1], [0]),
])
-def test_detection_error_tradeoff_curve_tie_handling(y_true, y_score,
+def test_det_curve_tie_handling(y_true, y_score,
expected_fpr,
expected_fnr):
- fpr, fnr, _ = detection_error_tradeoff_curve(y_true, y_score)
+ fpr, fnr, _ = det_curve(y_true, y_score)
assert_allclose(fpr, expected_fpr)
assert_allclose(fnr, expected_fnr)
-def test_detection_error_tradeoff_curve_sanity_check():
+def test_det_curve_sanity_check():
# Exactly duplicated inputs yield the same result.
assert_allclose(
- detection_error_tradeoff_curve([0, 0, 1], [0, 0.5, 1]),
- detection_error_tradeoff_curve(
+ det_curve([0, 0, 1], [0, 0.5, 1]),
+ det_curve(
[0, 0, 0, 0, 1, 1], [0, 0, 0.5, 0.5, 1, 1])
)
@@ -989,8 +989,8 @@ def test_detection_error_tradeoff_curve_sanity_check():
@pytest.mark.parametrize("y_score", [
(0), (0.25), (0.5), (0.75), (1)
])
-def test_detection_error_tradeoff_curve_constant_scores(y_score):
- fpr, fnr, threshold = detection_error_tradeoff_curve(
+def test_det_curve_constant_scores(y_score):
+ fpr, fnr, threshold = det_curve(
y_true=[0, 1, 0, 1, 0, 1],
y_score=np.full(6, y_score)
)
@@ -1007,8 +1007,8 @@ def test_detection_error_tradeoff_curve_constant_scores(y_score):
([0, 0, 1, 1, 1, 1]),
([0, 1, 1, 1, 1, 1]),
])
-def test_detection_error_tradeoff_curve_perfect_scores(y_true):
- fpr, fnr, _ = detection_error_tradeoff_curve(
+def test_det_curve_perfect_scores(y_true):
+ fpr, fnr, _ = det_curve(
y_true=y_true,
y_score=y_true
)
@@ -1031,13 +1031,13 @@ def test_detection_error_tradeoff_curve_perfect_scores(y_true):
),
],
)
-def test_detection_error_tradeoff_curve_bad_input(y_true, y_pred, err_msg):
+def test_det_curve_bad_input(y_true, y_pred, err_msg):
# input variables with inconsistent numbers of samples
with pytest.raises(ValueError, match=err_msg):
- detection_error_tradeoff_curve(y_true, y_pred)
+ det_curve(y_true, y_pred)
-def test_detection_error_tradeoff_curve_pos_label():
+def test_det_curve_pos_label():
y_true = ["cancer"] * 3 + ["not cancer"] * 7
y_pred_pos_not_cancer = np.array(
[0.1, 0.4, 0.6, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.9]
@@ -1045,11 +1045,11 @@ def test_detection_error_tradeoff_curve_pos_label():
y_pred_pos_cancer = 1 - y_pred_pos_not_cancer
fpr_pos_cancer, fnr_pos_cancer, th_pos_cancer = \
- detection_error_tradeoff_curve(
+ det_curve(
y_true, y_pred_pos_cancer, pos_label="cancer",
)
fpr_pos_not_cancer, fnr_pos_not_cancer, th_pos_not_cancer = \
- detection_error_tradeoff_curve(
+ det_curve(
y_true, y_pred_pos_not_cancer, pos_label="not cancer",
)
| diff --git a/doc/modules/classes.rst b/doc/modules/classes.rst
index f5a0e71e07d1c..2ec617df85cc0 100644
--- a/doc/modules/classes.rst
+++ b/doc/modules/classes.rst
@@ -947,7 +947,7 @@ details.
metrics.cohen_kappa_score
metrics.confusion_matrix
metrics.dcg_score
- metrics.detection_error_tradeoff_curve
+ metrics.det_curve
metrics.f1_score
metrics.fbeta_score
metrics.hamming_loss
@@ -1100,6 +1100,7 @@ See the :ref:`visualizations` section of the user guide for further details.
:template: function.rst
metrics.plot_confusion_matrix
+ metrics.plot_det_curve
metrics.plot_precision_recall_curve
metrics.plot_roc_curve
@@ -1108,6 +1109,7 @@ See the :ref:`visualizations` section of the user guide for further details.
:template: class.rst
metrics.ConfusionMatrixDisplay
+ metrics.DetCurveDisplay
metrics.PrecisionRecallDisplay
metrics.RocCurveDisplay
diff --git a/doc/modules/model_evaluation.rst b/doc/modules/model_evaluation.rst
index e64aee2075e06..58c30d3091830 100644
--- a/doc/modules/model_evaluation.rst
+++ b/doc/modules/model_evaluation.rst
@@ -306,7 +306,7 @@ Some of these are restricted to the binary classification case:
precision_recall_curve
roc_curve
- detection_error_tradeoff_curve
+ det_curve
Others also work in the multiclass case:
@@ -1443,7 +1443,7 @@ to the given limit.
Detection error tradeoff (DET)
------------------------------
-The function :func:`detection_error_tradeoff_curve` computes the
+The function :func:`det_curve` computes the
detection error tradeoff curve (DET) curve [WikipediaDET2017]_.
Quoting Wikipedia:
diff --git a/doc/visualizations.rst b/doc/visualizations.rst
index ad316205b3c90..a2d40408b403f 100644
--- a/doc/visualizations.rst
+++ b/doc/visualizations.rst
@@ -78,6 +78,7 @@ Functions
inspection.plot_partial_dependence
metrics.plot_confusion_matrix
+ metrics.plot_det_curve
metrics.plot_precision_recall_curve
metrics.plot_roc_curve
@@ -91,5 +92,6 @@ Display Objects
inspection.PartialDependenceDisplay
metrics.ConfusionMatrixDisplay
+ metrics.DetCurveDisplay
metrics.PrecisionRecallDisplay
metrics.RocCurveDisplay
diff --git a/doc/whats_new/v0.24.rst b/doc/whats_new/v0.24.rst
index aaf86a2f0576d..1da9670307b75 100644
--- a/doc/whats_new/v0.24.rst
+++ b/doc/whats_new/v0.24.rst
@@ -280,11 +280,15 @@ Changelog
:mod:`sklearn.metrics`
......................
-- |Feature| Added :func:`metrics.detection_error_tradeoff_curve` to compute
- Detection Error Tradeoff curve classification metric.
+- |Feature| Added :func:`metrics.det_curve` to compute Detection Error Tradeoff
+ curve classification metric.
:pr:`10591` by :user:`Jeremy Karnowski <jkarnows>` and
:user:`Daniel Mohns <dmohns>`.
+- |Feature| Added :func:`metrics.plot_det_curve` and :class:`DetCurveDisplay`
+ to ease the plot of DET curves.
+ :pr:`18176` by :user:`Guillaume Lemaitre <glemaitre>`.
+
- |Feature| Added :func:`metrics.mean_absolute_percentage_error` metric and
the associated scorer for regression problems. :issue:`10708` fixed with the
PR :pr:`15007` by :user:`Ashutosh Hathidara <ashutosh1919>`. The scorer and
| [
{
"components": [
{
"doc": "DET curve visualization.\n\nIt is recommend to use :func:`~sklearn.metrics.plot_det_curve` to create a\nvisualizer. All parameters are stored as attributes.\n\nRead more in the :ref:`User Guide <visualizations>`.\n\n.. versionadded:: 0.24\n\nParameters\n----------\nfpr ... | [
"sklearn/metrics/_plot/tests/test_plot_curve_common.py::test_plot_curve_error_non_binary[plot_det_curve]",
"sklearn/metrics/_plot/tests/test_plot_curve_common.py::test_plot_curve_error_non_binary[plot_roc_curve]",
"sklearn/metrics/_plot/tests/test_plot_curve_common.py::test_plot_curve_error_no_response[plot_det... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
ENH Adds plot_det_curve and associated display
Add the plotting helpers for the DET curve
Follow-up to #18169
Closes #18181
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sklearn/metrics/_plot/det_curve.py]
(definition of DetCurveDisplay:)
class DetCurveDisplay:
"""DET curve visualization.
    It is recommended to use :func:`~sklearn.metrics.plot_det_curve` to create a
visualizer. All parameters are stored as attributes.
Read more in the :ref:`User Guide <visualizations>`.
.. versionadded:: 0.24
Parameters
----------
fpr : ndarray
False positive rate.
tpr : ndarray
True positive rate.
estimator_name : str, default=None
Name of estimator. If None, the estimator name is not shown.
pos_label : str or int, default=None
The label of the positive class.
Attributes
----------
line_ : matplotlib Artist
DET Curve.
ax_ : matplotlib Axes
Axes with DET Curve.
figure_ : matplotlib Figure
Figure containing the curve.
Examples
--------
>>> import matplotlib.pyplot as plt # doctest: +SKIP
>>> import numpy as np
>>> from sklearn import metrics
>>> y = np.array([0, 0, 1, 1])
>>> pred = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, fnr, thresholds = metrics.det_curve(y, pred)
>>> display = metrics.DetCurveDisplay(
... fpr=fpr, fnr=fnr, estimator_name='example estimator'
... )
>>> display.plot() # doctest: +SKIP
>>> plt.show() # doctest: +SKIP"""
(definition of DetCurveDisplay.__init__:)
def __init__(self, *, fpr, fnr, estimator_name=None, pos_label=None):
(definition of DetCurveDisplay.plot:)
def plot(self, ax=None, *, name=None, **kwargs):
"""Plot visualization.
Parameters
----------
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
name : str, default=None
Name of DET curve for labeling. If `None`, use the name of the
estimator.
Returns
-------
display : :class:`~sklearn.metrics.plot.DetCurveDisplay`
Object that stores computed values."""
(definition of plot_det_curve:)
def plot_det_curve( estimator, X, y, *, sample_weight=None, response_method="auto", name=None, ax=None, pos_label=None, **kwargs ):
"""Plot detection error tradeoff (DET) curve.
Extra keyword arguments will be passed to matplotlib's `plot`.
Read more in the :ref:`User Guide <visualizations>`.
.. versionadded:: 0.24
Parameters
----------
estimator : estimator instance
Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
in which the last estimator is a classifier.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
response_method : {'predict_proba', 'decision_function', 'auto'} default='auto'
Specifies whether to use :term:`predict_proba` or
:term:`decision_function` as the predicted target response. If set to
'auto', :term:`predict_proba` is tried first and if it does not exist
:term:`decision_function` is tried next.
name : str, default=None
Name of DET curve for labeling. If `None`, use the name of the
estimator.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is created.
pos_label : str or int, default=None
The label of the positive class.
When `pos_label=None`, if `y_true` is in {-1, 1} or {0, 1},
`pos_label` is set to 1, otherwise an error will be raised.
Returns
-------
display : :class:`~sklearn.metrics.DetCurveDisplay`
Object that stores computed values.
See Also
--------
det_curve : Compute error rates for different probability thresholds
plot_roc_curve : Plot Receiver operating characteristic (ROC) curve
Examples
--------
>>> import matplotlib.pyplot as plt # doctest: +SKIP
>>> from sklearn import datasets, metrics, model_selection, svm
>>> X, y = datasets.make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = model_selection.train_test_split(
... X, y, random_state=0)
>>> clf = svm.SVC(random_state=0)
>>> clf.fit(X_train, y_train)
SVC(random_state=0)
>>> metrics.plot_det_curve(clf, X_test, y_test) # doctest: +SKIP
>>> plt.show() # doctest: +SKIP"""
[end of new definitions in sklearn/metrics/_plot/det_curve.py]
[start of new definitions in sklearn/metrics/_ranking.py]
(definition of det_curve:)
def det_curve(y_true, y_score, pos_label=None, sample_weight=None):
"""Compute error rates for different probability thresholds.
.. note::
This metric is used for evaluation of ranking and error tradeoffs of
a binary classification task.
Read more in the :ref:`User Guide <det_curve>`.
.. versionadded:: 0.24
Parameters
----------
y_true : ndarray of shape (n_samples,)
True binary labels. If labels are not either {-1, 1} or {0, 1}, then
pos_label should be explicitly given.
y_score : ndarray of shape of (n_samples,)
Target scores, can either be probability estimates of the positive
class, confidence values, or non-thresholded measure of decisions
(as returned by "decision_function" on some classifiers).
pos_label : int or str, default=None
The label of the positive class.
When ``pos_label=None``, if `y_true` is in {-1, 1} or {0, 1},
``pos_label`` is set to 1, otherwise an error will be raised.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
Returns
-------
fpr : ndarray of shape (n_thresholds,)
False positive rate (FPR) such that element i is the false positive
rate of predictions with score >= thresholds[i]. This is occasionally
    referred to as false acceptance probability or fall-out.
fnr : ndarray of shape (n_thresholds,)
False negative rate (FNR) such that element i is the false negative
rate of predictions with score >= thresholds[i]. This is occasionally
referred to as false rejection or miss rate.
thresholds : ndarray of shape (n_thresholds,)
Decreasing score values.
See Also
--------
roc_curve : Compute Receiver operating characteristic (ROC) curve
precision_recall_curve : Compute precision-recall curve
Examples
--------
>>> import numpy as np
>>> from sklearn.metrics import det_curve
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, fnr, thresholds = det_curve(y_true, y_scores)
>>> fpr
array([0.5, 0.5, 0. ])
>>> fnr
array([0. , 0.5, 0.5])
>>> thresholds
array([0.35, 0.4 , 0.8 ])"""
[end of new definitions in sklearn/metrics/_ranking.py]
</definitions>
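The `det_curve` definition above reduces to counting false positives and false negatives at each distinct score threshold. The following is a minimal pure-Python sketch, not scikit-learn's actual implementation (which trims the degenerate region via a different slicing scheme), but it reproduces the doctest values from the definition; it assumes both classes are present in `y_true`.

```python
def det_curve_sketch(y_true, y_score):
    """Sketch of a DET curve: FPR/FNR at each distinct threshold."""
    thresholds = sorted(set(y_score))  # candidate cut-offs, ascending
    pos = sum(1 for y in y_true if y == 1)
    neg = len(y_true) - pos
    fpr, fnr = [], []
    for t in thresholds:
        # predict positive when score >= t
        fp = sum(1 for y, s in zip(y_true, y_score) if y == 0 and s >= t)
        fn = sum(1 for y, s in zip(y_true, y_score) if y == 1 and s < t)
        fpr.append(fp / neg)
        fnr.append(fn / pos)
    # drop leading thresholds where every sample is predicted positive
    # (fpr == 1), mirroring the region the doctest output omits
    while fpr and fpr[0] == 1.0:
        fpr.pop(0)
        fnr.pop(0)
        thresholds.pop(0)
    return fpr, fnr, thresholds
```

With the doctest inputs (`y_true = [0, 0, 1, 1]`, `y_score = [0.1, 0.4, 0.35, 0.8]`) this yields `fpr = [0.5, 0.5, 0.0]`, `fnr = [0.0, 0.5, 0.5]`, `thresholds = [0.35, 0.4, 0.8]`, matching the expected output in the definition above.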
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
RFC rename detection_error_tradeoff_curve det_curve
I am wondering if we should rename `detection_error_tradeoff_curve` to `det_curve`.
It will be consistent with `roc_curve` and the DET curve seems to be quite common in the literature when related to detection error trade-off
----------
--------------------
</issues> | 54ce4222694819ad52d544ce5cba5da274c34ab7 |
sphinx-doc__sphinx-8125 | 8,125 | sphinx-doc/sphinx | 3.4 | a163bbe870dc5bc7f3863ead37cd391be81fb0cc | 2020-08-14T15:50:26Z | diff --git a/CHANGES b/CHANGES
index 55c0a833e52..53567756f91 100644
--- a/CHANGES
+++ b/CHANGES
@@ -18,6 +18,9 @@ Deprecated
Features added
--------------
+* #8119: autodoc: Allow to determine whether a member not included in
+ ``__all__`` attribute of the module should be documented or not via
+ :event:`autodoc-skip-member` event
* #6914: Add a new event :event:`warn-missing-reference` to custom warning
messages when failed to resolve a cross-reference
* #6914: Emit a detailed warning when failed to resolve a ``:ref:`` reference
diff --git a/sphinx/ext/autodoc/__init__.py b/sphinx/ext/autodoc/__init__.py
index 92b3ad3e710..a0c5cf61f1d 100644
--- a/sphinx/ext/autodoc/__init__.py
+++ b/sphinx/ext/autodoc/__init__.py
@@ -258,6 +258,32 @@ def __getattr__(self, name: str) -> Any:
return None
+class ObjectMember(tuple):
+ """A member of object.
+
+ This is used for the result of `Documenter.get_object_members()` to
+ represent each member of the object.
+
+ .. Note::
+
+ An instance of this class behaves as a tuple of (name, object)
+ for compatibility to old Sphinx. The behavior will be dropped
+ in the future. Therefore extensions should not use the tuple
+ interface.
+ """
+
+ def __new__(cls, name: str, obj: Any, **kwargs: Any) -> Any:
+ return super().__new__(cls, (name, obj)) # type: ignore
+
+ def __init__(self, name: str, obj: Any, skipped: bool = False) -> None:
+ self.__name__ = name
+ self.object = obj
+ self.skipped = skipped
+
+
+ObjectMembers = Union[List[ObjectMember], List[Tuple[str, Any]]]
+
+
class Documenter:
"""
A Documenter knows how to autodocument a single object type. When
@@ -589,7 +615,7 @@ def add_content(self, more_content: Any, no_docstring: bool = False) -> None:
for line, src in zip(more_content.data, more_content.items):
self.add_line(line, src[0], src[1])
- def get_object_members(self, want_all: bool) -> Tuple[bool, List[Tuple[str, Any]]]:
+ def get_object_members(self, want_all: bool) -> Tuple[bool, ObjectMembers]:
"""Return `(members_check_module, members)` where `members` is a
list of `(membername, member)` pairs of the members of *self.object*.
@@ -599,10 +625,10 @@ def get_object_members(self, want_all: bool) -> Tuple[bool, List[Tuple[str, Any]
members = get_object_members(self.object, self.objpath, self.get_attr, self.analyzer)
if not want_all:
if not self.options.members:
- return False, []
+ return False, [] # type: ignore
# specific members given
selected = []
- for name in self.options.members:
+ for name in self.options.members: # type: str
if name in members:
selected.append((name, members[name].value))
else:
@@ -615,7 +641,7 @@ def get_object_members(self, want_all: bool) -> Tuple[bool, List[Tuple[str, Any]
return False, [(m.name, m.value) for m in members.values()
if m.directly_defined]
- def filter_members(self, members: List[Tuple[str, Any]], want_all: bool
+ def filter_members(self, members: ObjectMembers, want_all: bool
) -> List[Tuple[str, Any, bool]]:
"""Filter the given member list.
@@ -654,7 +680,8 @@ def is_filtered_inherited_member(name: str) -> bool:
attr_docs = {}
# process members and determine which to skip
- for (membername, member) in members:
+ for obj in members:
+ membername, member = obj
# if isattr is True, the member is documented as an attribute
if member is INSTANCEATTR:
isattr = True
@@ -731,6 +758,10 @@ def is_filtered_inherited_member(name: str) -> bool:
# ignore undocumented members if :undoc-members: is not given
keep = has_doc or self.options.undoc_members
+ if isinstance(obj, ObjectMember) and obj.skipped:
+ # forcedly skipped member (ex. a module attribute not defined in __all__)
+ keep = False
+
# give the user a chance to decide whether this member
# should be skipped
if self.env.app:
@@ -992,28 +1023,35 @@ def add_directive_header(self, sig: str) -> None:
if self.options.deprecated:
self.add_line(' :deprecated:', sourcename)
- def get_object_members(self, want_all: bool) -> Tuple[bool, List[Tuple[str, Any]]]:
+ def get_object_members(self, want_all: bool) -> Tuple[bool, ObjectMembers]:
if want_all:
- if self.__all__:
- memberlist = self.__all__
- else:
+ members = get_module_members(self.object)
+ if not self.__all__:
# for implicit module members, check __module__ to avoid
# documenting imported objects
- return True, get_module_members(self.object)
+ return True, members
+ else:
+ ret = []
+ for name, value in members:
+ if name in self.__all__:
+ ret.append(ObjectMember(name, value))
+ else:
+ ret.append(ObjectMember(name, value, skipped=True))
+
+ return False, ret
else:
memberlist = self.options.members or []
- ret = []
- for mname in memberlist:
- try:
- ret.append((mname, safe_getattr(self.object, mname)))
- except AttributeError:
- logger.warning(
- __('missing attribute mentioned in :members: or __all__: '
- 'module %s, attribute %s') %
- (safe_getattr(self.object, '__name__', '???'), mname),
- type='autodoc'
- )
- return False, ret
+ ret = []
+ for name in memberlist:
+ try:
+ value = safe_getattr(self.object, name)
+ ret.append(ObjectMember(name, value))
+ except AttributeError:
+ logger.warning(__('missing attribute mentioned in :members: option: '
+ 'module %s, attribute %s') %
+ (safe_getattr(self.object, '__name__', '???'), name),
+ type='autodoc')
+ return False, ret
def sort_members(self, documenters: List[Tuple["Documenter", bool]],
order: str) -> List[Tuple["Documenter", bool]]:
| diff --git a/tests/test_ext_autodoc_events.py b/tests/test_ext_autodoc_events.py
index 7ddc952ab38..798f593dcee 100644
--- a/tests/test_ext_autodoc_events.py
+++ b/tests/test_ext_autodoc_events.py
@@ -80,3 +80,28 @@ def test_between_exclude(app):
' third line',
'',
]
+
+
+@pytest.mark.sphinx('html', testroot='ext-autodoc')
+def test_skip_module_member(app):
+ def autodoc_skip_member(app, what, name, obj, skip, options):
+ if name == "Class":
+ return True # Skip "Class" class in __all__
+ elif name == "raises":
+ return False # Show "raises()" function (not in __all__)
+
+ app.connect('autodoc-skip-member', autodoc_skip_member)
+
+ options = {"members": None}
+ actual = do_autodoc(app, 'module', 'target', options)
+ assert list(actual) == [
+ '',
+ '.. py:module:: target',
+ '',
+ '',
+ '.. py:function:: raises(exc, func, *args, **kwds)',
+ ' :module: target',
+ '',
+ ' Raise AssertionError if ``func(*args, **kwds)`` does not raise *exc*.',
+ '',
+ ]
| diff --git a/CHANGES b/CHANGES
index 55c0a833e52..53567756f91 100644
--- a/CHANGES
+++ b/CHANGES
@@ -18,6 +18,9 @@ Deprecated
Features added
--------------
+* #8119: autodoc: Allow to determine whether a member not included in
+ ``__all__`` attribute of the module should be documented or not via
+ :event:`autodoc-skip-member` event
* #6914: Add a new event :event:`warn-missing-reference` to custom warning
messages when failed to resolve a cross-reference
* #6914: Emit a detailed warning when failed to resolve a ``:ref:`` reference
| [
{
"components": [
{
"doc": "A member of object.\n\nThis is used for the result of `Documenter.get_object_members()` to\nrepresent each member of the object.\n\n.. Note::\n\n An instance of this class behaves as a tuple of (name, object)\n for compatibility to old Sphinx. The behavior will be ... | [
"tests/test_ext_autodoc_events.py::test_skip_module_member"
] | [
"tests/test_ext_autodoc_events.py::test_process_docstring",
"tests/test_ext_autodoc_events.py::test_cut_lines",
"tests/test_ext_autodoc_events.py::test_between",
"tests/test_ext_autodoc_events.py::test_between_exclude"
] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Close #8119: autodoc: Control visibility of module member not in __all__
### Feature or Bugfix
- Feature
### Purpose
- refs: #8119
- This allows `autodoc-skip-member` handlers to determine whether a member
not included in `__all__` attribute of the module should be documented or
not.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sphinx/ext/autodoc/__init__.py]
(definition of ObjectMember:)
class ObjectMember(tuple):
"""A member of object.
This is used for the result of `Documenter.get_object_members()` to
represent each member of the object.
.. Note::
An instance of this class behaves as a tuple of (name, object)
for compatibility to old Sphinx. The behavior will be dropped
in the future. Therefore extensions should not use the tuple
interface."""
(definition of ObjectMember.__new__:)
def __new__(cls, name: str, obj: Any, **kwargs: Any) -> Any:
(definition of ObjectMember.__init__:)
def __init__(self, name: str, obj: Any, skipped: bool = False) -> None:
[end of new definitions in sphinx/ext/autodoc/__init__.py]
</definitions>
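The `ObjectMember` definition above relies on the tuple-subclass idiom: because tuples are immutable, the `(name, object)` pair must be fixed in `__new__`, while the extra attributes (like the `skipped` flag) are attached in `__init__`. A standalone sketch of the same idiom, using an illustrative class name rather than Sphinx's:

```python
from typing import Any


class Member(tuple):
    """Behaves as a (name, obj) pair while carrying a `skipped` flag."""

    def __new__(cls, name: str, obj: Any, **kwargs: Any) -> "Member":
        # tuple contents are immutable, so the pair is fixed here;
        # extra keyword arguments are left for __init__ to consume
        return super().__new__(cls, (name, obj))

    def __init__(self, name: str, obj: Any, skipped: bool = False) -> None:
        self.__name__ = name
        self.object = obj
        self.skipped = skipped
```

Old callers can still unpack `name, obj = member` or compare against a plain tuple, while new code checks `member.skipped` - which is exactly the backward-compatibility story the definition's note describes.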
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
All identifiers should be passed to autodoc-skip-member, even if not in __all__
**Is your feature request related to a problem? Please describe.**
I have a Python project for which I've generated docs. It did well. Almost everything was doc'ed. I knew about the default of excluding anything starting with '_' so I created an autodoc-skip-member function that would include those. It works. I see a bunch of private methods and functions included.
But...I have a module, let's call it XYZ that has both private and public functions. Sphinx is only doc'ing the public functions of XYZ, but not its private functions. I have other modules that have private functions, and it's doc'ing those. I added a print() to the autodoc-skip-member function and it doesn't even print out the `name` of those private functions so it seems it's not even seeing it.
I figured it out. The module had certain identifiers listed in `__all__`, and Sphinx was not passing anything else in that module to autodoc-skip-member.
**Describe the solution you'd like**
Sphinx should pass through *all* members, not just the ones in `__all__`. Maybe add something to the `options` object that is passed to autodoc-skip-members that indicates it would have been excluded by the `__all__` designator, or some such.
**Describe alternatives you've considered**
I simply removed the `__all__` from the file.
----------
--------------------
</issues> | c941b9cb14aefc132d6e3691085ce9704c1356e6 |
pgmpy__pgmpy-1316 | 1,316 | pgmpy/pgmpy | null | b8fc138bfcf384f8e87b928ed19b5cc53ea1c79b | 2020-08-13T12:11:39Z | diff --git a/pgmpy/base/DAG.py b/pgmpy/base/DAG.py
index cdaa3375e..f12951dc0 100644
--- a/pgmpy/base/DAG.py
+++ b/pgmpy/base/DAG.py
@@ -647,14 +647,16 @@ def active_trail_nodes(self, variables, observed=None):
active_trails[start] = active_nodes
return active_trails
- def _get_ancestors_of(self, obs_nodes_list):
+ def _get_ancestors_of(self, nodes):
"""
Returns a dictionary of all ancestors of all the observed nodes including the
node itself.
+
Parameters
----------
- obs_nodes_list: string, list-type
+ nodes: string, list-type
name of all the observed nodes
+
Examples
--------
>>> from pgmpy.base import DAG
@@ -665,15 +667,15 @@ def _get_ancestors_of(self, obs_nodes_list):
>>> model._get_ancestors_of(['G', 'I'])
{'D', 'G', 'I'}
"""
- if not isinstance(obs_nodes_list, (list, tuple)):
- obs_nodes_list = [obs_nodes_list]
+ if not isinstance(nodes, (list, tuple)):
+ nodes = [nodes]
- for node in obs_nodes_list:
+ for node in nodes:
if node not in self.nodes():
                raise ValueError(f"Node {node} not in graph")
ancestors_list = set()
- nodes_list = set(obs_nodes_list)
+ nodes_list = set(nodes)
while nodes_list:
node = nodes_list.pop()
if node not in ancestors_list:
@@ -735,6 +737,31 @@ def do(self, node):
dag_do_x.remove_edge(parent, node)
return dag_do_x
+ def get_ancestral_graph(self, nodes):
+ """
+ Returns the ancestral graph of the given `nodes`. The ancestral graph only
+ contains the nodes which are ancestors of atleast one of the variables in
+ node.
+
+ Parameters
+ ----------
+ node: iterable
+ List of nodes whose ancestral graph needs to be computed.
+
+ Returns
+ -------
+ pgmpy.base.DAG instance: The ancestral graph.
+
+ Examples
+ --------
+ >>> from pgmpy.base import DAG
+ >>> dag = DAG([('A', 'C'), ('B', 'C'), ('D', 'A'), ('D', 'B')])
+ >>> anc_dag = dag.get_ancestral_graph(nodes=['A', 'B'])
+ >>> anc_dag.edges()
+ [('D', 'A'), ('D', 'B')]
+ """
+ return self.subgraph(nodes=self._get_ancestors_of(nodes=nodes))
+
class PDAG(nx.DiGraph):
"""
| diff --git a/pgmpy/tests/test_base/test_DAG.py b/pgmpy/tests/test_base/test_DAG.py
index 7cf96fd45..f680bf343 100644
--- a/pgmpy/tests/test_base/test_DAG.py
+++ b/pgmpy/tests/test_base/test_DAG.py
@@ -132,6 +132,12 @@ def test_init_with_cycle(self):
self.assertRaises(ValueError, DAG, [("a", "b"), ("b", "a")])
self.assertRaises(ValueError, DAG, [("a", "b"), ("b", "c"), ("c", "a")])
+ def test_get_ancestral_graph(self):
+ dag = DAG([("A", "C"), ("B", "C"), ("D", "A"), ("D", "B")])
+ anc_dag = dag.get_ancestral_graph(["A", "B"])
+ self.assertEqual(set(anc_dag.edges()), set([("D", "A"), ("D", "B")]))
+ self.assertRaises(ValueError, dag.get_ancestral_graph, ["A", "gibber"])
+
def tearDown(self):
del self.graph
| [
{
"components": [
{
"doc": "Returns the ancestral graph of the given `nodes`. The ancestral graph only\ncontains the nodes which are ancestors of atleast one of the variables in \nnode.\n\nParameters\n----------\nnode: iterable\n List of nodes whose ancestral graph needs to be computed.\n\nRetu... | [
"pgmpy/tests/test_base/test_DAG.py::TestDAGCreation::test_get_ancestral_graph"
] | [
"pgmpy/tests/test_base/test_DAG.py::TestDAGCreation::test_add_edge_nonstring",
"pgmpy/tests/test_base/test_DAG.py::TestDAGCreation::test_add_edge_string",
"pgmpy/tests/test_base/test_DAG.py::TestDAGCreation::test_add_edge_weight",
"pgmpy/tests/test_base/test_DAG.py::TestDAGCreation::test_add_edges_from_nonstr... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Adds DAG.get_ancestral_graph method to get the ancestral graph of theโฆ
โฆ given nodes
### Your checklist for this pull request
Please review the [guidelines for contributing](CONTRIBUTING.md) to this repository.
- [ ] Make sure you are requesting to **pull a topic/feature/bugfix branch** (right side). Don't request your master!
- [ ] Make sure you are making a pull request against the **dev branch** (left side). Also you should start *your branch* off *our dev*.
- [ ] Check the commit's or even all commits' message styles matches our requested structure.
### Issue number(s) that this pull request fixes
- Fixes #
### List of changes to the codebase in this pull request
-
-
-
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in pgmpy/base/DAG.py]
(definition of DAG.get_ancestral_graph:)
def get_ancestral_graph(self, nodes):
"""Returns the ancestral graph of the given `nodes`. The ancestral graph only
contains the nodes which are ancestors of at least one of the variables in
node.
Parameters
----------
node: iterable
List of nodes whose ancestral graph needs to be computed.
Returns
-------
pgmpy.base.DAG instance: The ancestral graph.
Examples
--------
>>> from pgmpy.base import DAG
>>> dag = DAG([('A', 'C'), ('B', 'C'), ('D', 'A'), ('D', 'B')])
>>> anc_dag = dag.get_ancestral_graph(nodes=['A', 'B'])
>>> anc_dag.edges()
[('D', 'A'), ('D', 'B')]"""
[end of new definitions in pgmpy/base/DAG.py]
</definitions>
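`get_ancestral_graph` as defined above is a thin wrapper over the ancestor walk in `_get_ancestors_of`: collect every ancestor of the query nodes by following parent edges upward, then restrict the graph to the surviving nodes. A dependency-free sketch of that logic (the real method delegates the final step to networkx's `subgraph`):

```python
def ancestral_subgraph(edges, query):
    """Edges of the DAG restricted to ancestors of `query` (inclusive)."""
    # parents[child] -> set of parents, built from the edge list
    parents = {}
    for u, v in edges:
        parents.setdefault(v, set()).add(u)
    # upward traversal from the query nodes collects every ancestor
    ancestors, frontier = set(), list(query)
    while frontier:
        node = frontier.pop()
        if node not in ancestors:
            ancestors.add(node)
            frontier.extend(parents.get(node, ()))
    # an edge survives only if both endpoints are ancestors
    return [(u, v) for u, v in edges if u in ancestors and v in ancestors]
```

For the docstring example, `ancestral_subgraph([('A', 'C'), ('B', 'C'), ('D', 'A'), ('D', 'B')], ['A', 'B'])` returns `[('D', 'A'), ('D', 'B')]`, matching the `anc_dag.edges()` output shown in the definition.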
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | cf8d0f12e2e5be62b01ff8fded85f3f64eab1e84 | ||
pvlib__pvlib-python-1022 | 1,022 | pvlib/pvlib-python | 0.7 | 87d1bf21ed7f9c654ef55864eb1035c18fbb1cf4 | 2020-08-11T03:12:32Z | diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index a7087edad0..91c7662a0a 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -495,6 +495,8 @@ Creating a ModelChain object.
:toctree: generated/
modelchain.ModelChain
+ modelchain.ModelChain.with_pvwatts
+ modelchain.ModelChain.with_sapm
Running
-------
diff --git a/docs/sphinx/source/whatsnew/v0.8.0.rst b/docs/sphinx/source/whatsnew/v0.8.0.rst
index ec273f051f..b87f79067b 100644
--- a/docs/sphinx/source/whatsnew/v0.8.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.0.rst
@@ -44,6 +44,11 @@ Enhancements
(:pull:`1017`)
* Add :py:func:`pvlib.inverter.fit_sandia` that fits the Sandia inverter model
to a set of inverter efficiency curves. (:pull:`1011`)
+* Add factory methods :py:meth:`~pvlib.modelchain.ModelChain.with_pvwatts`
+ :py:meth:`~pvlib.modelchain.ModelChain.with_sapm` to create ``ModelChain``
+ objects configured for the respective modeling paradigms. The
+ configurations are defined in ``modelchain.PVWATTS_CONFIG``, and
+ ``modelchain.SAPM_CONFIG``. (:issue:`1013`, :pull:`1022`)
* Added *racking_model*, *module_type*, and *temperature_model_parameters* to
PVSystem, LocalizedPVSystem, SingleAxisTracker, and
LocalizedSingleAxisTracker repr methods. (:issue:`1027`)
diff --git a/pvlib/modelchain.py b/pvlib/modelchain.py
index 0d8e816879..2479f16118 100644
--- a/pvlib/modelchain.py
+++ b/pvlib/modelchain.py
@@ -18,6 +18,30 @@
from pvlib._deprecation import pvlibDeprecationWarning
from pvlib.tools import _build_kwargs
+
+# these dictionaries contain the default configuration for following
+# established modeling sequences. They can be used in combination with
+# basic_chain and ModelChain. They are used by the ModelChain methods
+# ModelChain.with_pvwatts, ModelChain.with_sapm, etc.
+
+# pvwatts documentation states that it uses the following reference for
+# a temperature model: Fuentes, M. K. (1987). A Simplified Thermal Model
+# for Flat-Plate Photovoltaic Arrays. SAND85-0330. Albuquerque, NM:
+# Sandia National Laboratories. Accessed September 3, 2013:
+# http://prod.sandia.gov/techlib/access-control.cgi/1985/850330.pdf
+# pvlib python does not implement that model, so use the SAPM instead.
+PVWATTS_CONFIG = dict(
+ dc_model='pvwatts', ac_model='pvwatts', losses_model='pvwatts',
+ transposition_model='perez', aoi_model='physical',
+ spectral_model='no_loss', temperature_model='sapm'
+)
+
+SAPM_CONFIG = dict(
+ dc_model='sapm', ac_model='sandia', losses_model='no_loss',
+ aoi_model='sapm', spectral_model='sapm', temperature_model='sapm'
+)
+
+
def basic_chain(times, latitude, longitude,
module_parameters, temperature_model_parameters,
inverter_parameters,
@@ -349,6 +373,171 @@ def __init__(self, system, location,
self.times = None
self.solar_position = None
+ @classmethod
+ def with_pvwatts(cls, system, location,
+ orientation_strategy=None,
+ clearsky_model='ineichen',
+ airmass_model='kastenyoung1989',
+ name=None,
+ **kwargs):
+ """
+ ModelChain that follows the PVWatts methods.
+
+ Parameters
+ ----------
+ system : PVSystem
+ A :py:class:`~pvlib.pvsystem.PVSystem` object that represents
+ the connected set of modules, inverters, etc.
+
+ location : Location
+ A :py:class:`~pvlib.location.Location` object that represents
+ the physical location at which to evaluate the model.
+
+ orientation_strategy : None or str, default None
+ The strategy for aligning the modules. If not None, sets the
+ ``surface_azimuth`` and ``surface_tilt`` properties of the
+ ``system``. Allowed strategies include 'flat',
+ 'south_at_latitude_tilt'. Ignored for SingleAxisTracker systems.
+
+ clearsky_model : str, default 'ineichen'
+ Passed to location.get_clearsky.
+
+ airmass_model : str, default 'kastenyoung1989'
+ Passed to location.get_airmass.
+
+ name: None or str, default None
+ Name of ModelChain instance.
+
+ **kwargs
+ Parameters supplied here are passed to the ModelChain
+ constructor and take precedence over the default
+ configuration.
+
+ Examples
+ --------
+ >>> module_parameters = dict(gamma_pdc=-0.003, pdc0=4500)
+ >>> inverter_parameters = dict(pac0=4000)
+ >>> tparams = TEMPERATURE_MODEL_PARAMETERS['sapm']['open_rack_glass_glass']
+ >>> system = PVSystem(surface_tilt=30, surface_azimuth=180,
+ ... module_parameters=module_parameters,
+ ... inverter_parameters=inverter_parameters,
+ ... temperature_model_parameters=tparams)
+ >>> location = Location(32.2, -110.9)
+ >>> ModelChain.with_pvwatts(system, location)
+ ModelChain:
+ name: None
+ orientation_strategy: None
+ clearsky_model: ineichen
+ transposition_model: perez
+ solar_position_method: nrel_numpy
+ airmass_model: kastenyoung1989
+ dc_model: pvwatts_dc
+ ac_model: pvwatts_inverter
+ aoi_model: physical_aoi_loss
+ spectral_model: no_spectral_loss
+ temperature_model: sapm_temp
+ losses_model: pvwatts_losses
+ """ # noqa: E501
+ config = PVWATTS_CONFIG.copy()
+ config.update(kwargs)
+ return ModelChain(
+ system, location,
+ orientation_strategy=orientation_strategy,
+ clearsky_model=clearsky_model,
+ airmass_model=airmass_model,
+ name=name,
+ **config
+ )
+
+ @classmethod
+ def with_sapm(cls, system, location,
+ orientation_strategy=None,
+ clearsky_model='ineichen',
+ transposition_model='haydavies',
+ solar_position_method='nrel_numpy',
+ airmass_model='kastenyoung1989',
+ name=None,
+ **kwargs):
+ """
+ ModelChain that follows the Sandia Array Performance Model
+ (SAPM) methods.
+
+ Parameters
+ ----------
+ system : PVSystem
+ A :py:class:`~pvlib.pvsystem.PVSystem` object that represents
+ the connected set of modules, inverters, etc.
+
+ location : Location
+ A :py:class:`~pvlib.location.Location` object that represents
+ the physical location at which to evaluate the model.
+
+ orientation_strategy : None or str, default None
+ The strategy for aligning the modules. If not None, sets the
+ ``surface_azimuth`` and ``surface_tilt`` properties of the
+ ``system``. Allowed strategies include 'flat',
+ 'south_at_latitude_tilt'. Ignored for SingleAxisTracker systems.
+
+ clearsky_model : str, default 'ineichen'
+ Passed to location.get_clearsky.
+
+ transposition_model : str, default 'haydavies'
+ Passed to system.get_irradiance.
+
+ solar_position_method : str, default 'nrel_numpy'
+ Passed to location.get_solarposition.
+
+ airmass_model : str, default 'kastenyoung1989'
+ Passed to location.get_airmass.
+
+ name: None or str, default None
+ Name of ModelChain instance.
+
+ **kwargs
+ Parameters supplied here are passed to the ModelChain
+ constructor and take precedence over the default
+ configuration.
+
+ Examples
+ --------
+ >>> mods = pvlib.pvsystem.retrieve_sam('sandiamod')
+ >>> invs = pvlib.pvsystem.retrieve_sam('cecinverter')
+ >>> module_parameters = mods['Canadian_Solar_CS5P_220M___2009_']
+ >>> inverter_parameters = invs['ABB__MICRO_0_25_I_OUTD_US_240__240V_']
+ >>> tparams = TEMPERATURE_MODEL_PARAMETERS['sapm']['open_rack_glass_glass']
+ >>> system = PVSystem(surface_tilt=30, surface_azimuth=180,
+ ... module_parameters=module_parameters,
+ ... inverter_parameters=inverter_parameters,
+ ... temperature_model_parameters=tparams)
+ >>> location = Location(32.2, -110.9)
+ >>> ModelChain.with_sapm(system, location)
+ ModelChain:
+ name: None
+ orientation_strategy: None
+ clearsky_model: ineichen
+ transposition_model: haydavies
+ solar_position_method: nrel_numpy
+ airmass_model: kastenyoung1989
+ dc_model: sapm
+ ac_model: snlinverter
+ aoi_model: sapm_aoi_loss
+ spectral_model: sapm_spectral_loss
+ temperature_model: sapm_temp
+ losses_model: no_extra_losses
+ """ # noqa: E501
+ config = SAPM_CONFIG.copy()
+ config.update(kwargs)
+ return ModelChain(
+ system, location,
+ orientation_strategy=orientation_strategy,
+ clearsky_model=clearsky_model,
+ transposition_model=transposition_model,
+ solar_position_method=solar_position_method,
+ airmass_model=airmass_model,
+ name=name,
+ **config
+ )
+
def __repr__(self):
attrs = [
'name', 'orientation_strategy', 'clearsky_model',
| diff --git a/pvlib/tests/test_modelchain.py b/pvlib/tests/test_modelchain.py
index bba8a711a0..e6d8f70083 100644
--- a/pvlib/tests/test_modelchain.py
+++ b/pvlib/tests/test_modelchain.py
@@ -170,6 +170,19 @@ def test_ModelChain_creation(sapm_dc_snl_ac_system, location):
ModelChain(sapm_dc_snl_ac_system, location)
+def test_with_sapm(sapm_dc_snl_ac_system, location, weather):
+ mc = ModelChain.with_sapm(sapm_dc_snl_ac_system, location)
+ assert mc.dc_model == mc.sapm
+ mc.run_model(weather)
+
+
+def test_with_pvwatts(pvwatts_dc_pvwatts_ac_system, location, weather):
+ mc = ModelChain.with_pvwatts(pvwatts_dc_pvwatts_ac_system, location)
+ assert mc.dc_model == mc.pvwatts_dc
+ assert mc.temperature_model == mc.sapm_temp
+ mc.run_model(weather)
+
+
@pytest.mark.parametrize('strategy, expected', [
(None, (32.2, 180)), ('None', (32.2, 180)), ('flat', (0, 180)),
('south_at_latitude_tilt', (32.2, 180))
| diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index a7087edad0..91c7662a0a 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -495,6 +495,8 @@ Creating a ModelChain object.
:toctree: generated/
modelchain.ModelChain
+ modelchain.ModelChain.with_pvwatts
+ modelchain.ModelChain.with_sapm
Running
-------
diff --git a/docs/sphinx/source/whatsnew/v0.8.0.rst b/docs/sphinx/source/whatsnew/v0.8.0.rst
index ec273f051f..b87f79067b 100644
--- a/docs/sphinx/source/whatsnew/v0.8.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.0.rst
@@ -44,6 +44,11 @@ Enhancements
(:pull:`1017`)
* Add :py:func:`pvlib.inverter.fit_sandia` that fits the Sandia inverter model
to a set of inverter efficiency curves. (:pull:`1011`)
+* Add factory methods :py:meth:`~pvlib.modelchain.ModelChain.with_pvwatts`
+ :py:meth:`~pvlib.modelchain.ModelChain.with_sapm` to create ``ModelChain``
+ objects configured for the respective modeling paradigms. The
+ configurations are defined in ``modelchain.PVWATTS_CONFIG``, and
+ ``modelchain.SAPM_CONFIG``. (:issue:`1013`, :pull:`1022`)
* Added *racking_model*, *module_type*, and *temperature_model_parameters* to
PVSystem, LocalizedPVSystem, SingleAxisTracker, and
LocalizedSingleAxisTracker repr methods. (:issue:`1027`)
| [
{
"components": [
{
"doc": "ModelChain that follows the PVWatts methods.\n\nParameters\n----------\nsystem : PVSystem\n A :py:class:`~pvlib.pvsystem.PVSystem` object that represents\n the connected set of modules, inverters, etc.\n\nlocation : Location\n A :py:class:`~pvlib.location.Locat... | [
"pvlib/tests/test_modelchain.py::test_with_sapm",
"pvlib/tests/test_modelchain.py::test_with_pvwatts"
] | [
"pvlib/tests/test_modelchain.py::test_ModelChain_creation",
"pvlib/tests/test_modelchain.py::test_orientation_strategy[None-expected0]",
"pvlib/tests/test_modelchain.py::test_orientation_strategy[None-expected1]",
"pvlib/tests/test_modelchain.py::test_orientation_strategy[flat-expected2]",
"pvlib/tests/test... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
preconfigured ModelChain for pvwatts, sapm
- [x] Closes #1013
- [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)
- [x] Tests added
- [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.
- [x] Adds description and name entries in the appropriate "what's new" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).
- [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.
- [x] Pull request is nearly complete and ready for detailed review.
- [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in pvlib/modelchain.py]
(definition of ModelChain.with_pvwatts:)
def with_pvwatts(cls, system, location, orientation_strategy=None, clearsky_model='ineichen', airmass_model='kastenyoung1989', name=None, **kwargs):
"""ModelChain that follows the PVWatts methods.
Parameters
----------
system : PVSystem
A :py:class:`~pvlib.pvsystem.PVSystem` object that represents
the connected set of modules, inverters, etc.
location : Location
A :py:class:`~pvlib.location.Location` object that represents
the physical location at which to evaluate the model.
orientation_strategy : None or str, default None
The strategy for aligning the modules. If not None, sets the
``surface_azimuth`` and ``surface_tilt`` properties of the
``system``. Allowed strategies include 'flat',
'south_at_latitude_tilt'. Ignored for SingleAxisTracker systems.
clearsky_model : str, default 'ineichen'
Passed to location.get_clearsky.
airmass_model : str, default 'kastenyoung1989'
Passed to location.get_airmass.
    name : None or str, default None
Name of ModelChain instance.
**kwargs
Parameters supplied here are passed to the ModelChain
constructor and take precedence over the default
configuration.
Examples
--------
>>> module_parameters = dict(gamma_pdc=-0.003, pdc0=4500)
>>> inverter_parameters = dict(pac0=4000)
>>> tparams = TEMPERATURE_MODEL_PARAMETERS['sapm']['open_rack_glass_glass']
>>> system = PVSystem(surface_tilt=30, surface_azimuth=180,
... module_parameters=module_parameters,
... inverter_parameters=inverter_parameters,
... temperature_model_parameters=tparams)
>>> location = Location(32.2, -110.9)
>>> ModelChain.with_pvwatts(system, location)
ModelChain:
name: None
orientation_strategy: None
clearsky_model: ineichen
transposition_model: perez
solar_position_method: nrel_numpy
airmass_model: kastenyoung1989
dc_model: pvwatts_dc
ac_model: pvwatts_inverter
aoi_model: physical_aoi_loss
spectral_model: no_spectral_loss
temperature_model: sapm_temp
losses_model: pvwatts_losses"""
(definition of ModelChain.with_sapm:)
def with_sapm(cls, system, location, orientation_strategy=None, clearsky_model='ineichen', transposition_model='haydavies', solar_position_method='nrel_numpy', airmass_model='kastenyoung1989', name=None, **kwargs):
"""ModelChain that follows the Sandia Array Performance Model
(SAPM) methods.
Parameters
----------
system : PVSystem
A :py:class:`~pvlib.pvsystem.PVSystem` object that represents
the connected set of modules, inverters, etc.
location : Location
A :py:class:`~pvlib.location.Location` object that represents
the physical location at which to evaluate the model.
orientation_strategy : None or str, default None
The strategy for aligning the modules. If not None, sets the
``surface_azimuth`` and ``surface_tilt`` properties of the
``system``. Allowed strategies include 'flat',
'south_at_latitude_tilt'. Ignored for SingleAxisTracker systems.
clearsky_model : str, default 'ineichen'
Passed to location.get_clearsky.
transposition_model : str, default 'haydavies'
Passed to system.get_irradiance.
solar_position_method : str, default 'nrel_numpy'
Passed to location.get_solarposition.
airmass_model : str, default 'kastenyoung1989'
Passed to location.get_airmass.
    name : None or str, default None
Name of ModelChain instance.
**kwargs
Parameters supplied here are passed to the ModelChain
constructor and take precedence over the default
configuration.
Examples
--------
>>> mods = pvlib.pvsystem.retrieve_sam('sandiamod')
>>> invs = pvlib.pvsystem.retrieve_sam('cecinverter')
>>> module_parameters = mods['Canadian_Solar_CS5P_220M___2009_']
>>> inverter_parameters = invs['ABB__MICRO_0_25_I_OUTD_US_240__240V_']
>>> tparams = TEMPERATURE_MODEL_PARAMETERS['sapm']['open_rack_glass_glass']
>>> system = PVSystem(surface_tilt=30, surface_azimuth=180,
... module_parameters=module_parameters,
... inverter_parameters=inverter_parameters,
... temperature_model_parameters=tparams)
>>> location = Location(32.2, -110.9)
>>> ModelChain.with_sapm(system, location)
ModelChain:
name: None
orientation_strategy: None
clearsky_model: ineichen
transposition_model: haydavies
solar_position_method: nrel_numpy
airmass_model: kastenyoung1989
dc_model: sapm
ac_model: snlinverter
aoi_model: sapm_aoi_loss
spectral_model: sapm_spectral_loss
temperature_model: sapm_temp
losses_model: no_extra_losses"""
[end of new definitions in pvlib/modelchain.py]
</definitions>
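The factory methods defined above boil down to merging a default configuration dict with caller-supplied overrides and forwarding the result to the constructor. A minimal self-contained sketch of that pattern follows; `DemoChain` and `SAPM_DEMO_CONFIG` are illustrative stand-ins, not pvlib's actual class or config values:

```python
# DemoChain stands in for ModelChain; the config values are illustrative,
# not pvlib's actual SAPM_CONFIG.
SAPM_DEMO_CONFIG = dict(dc_model='sapm', ac_model='snlinverter',
                        aoi_model='sapm', spectral_model='sapm',
                        temperature_model='sapm', losses_model='no_loss')

class DemoChain:
    def __init__(self, system, location, **models):
        self.system = system
        self.location = location
        self.models = models

    @classmethod
    def with_sapm(cls, system, location, **kwargs):
        # Caller-supplied kwargs take precedence over the default config.
        config = SAPM_DEMO_CONFIG.copy()
        config.update(kwargs)
        return cls(system, location, **config)

mc = DemoChain.with_sapm('my_system', 'my_location', losses_model='pvwatts')
assert mc.models['dc_model'] == 'sapm'          # default from the config
assert mc.models['losses_model'] == 'pvwatts'   # caller override wins
```

The `copy()` before `update()` matters: mutating the shared config dict in place would silently change the defaults for every later call.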
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
pre-configured ModelChains
**Is your feature request related to a problem? Please describe.**
`ModelChain` has a lot of options and can be challenging to configure. This also makes it difficult to implement reference implementations of workflows.
**Describe the solution you'd like**
Create wrappers for pre-configured ModelChains. For example, a `PVWatts` version of `ModelChain` might look like:
```python
# modelchain.py
PVWatts = functools.partial(
ModelChain,
dc_model='pvwatts', ac_model='pvwatts', losses_model='pvwatts',
transposition_model='perez', aoi_model='physical'
)
# user code
my_model_chain = PVWatts(my_location, my_system)
```
This [SO](https://stackoverflow.com/questions/38911146/python-equivalent-of-functools-partial-for-a-class-constructor) post discusses some subtleties with using `partial` with classes, so the actual implementation might require the helper function in that post.
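The subtlety can be seen directly. In this sketch (the names are illustrative stand-ins, not pvlib's API), a `functools.partial` over a class behaves like a preconfigured constructor, but the partial itself is not a class:

```python
import functools

class ModelChainDemo:
    """Stand-in for ModelChain, purely to illustrate the partial-class subtlety."""
    def __init__(self, system, location, dc_model='default', ac_model='default'):
        self.system = system
        self.location = location
        self.dc_model = dc_model
        self.ac_model = ac_model

# A "partial class": calling it works like a preconfigured constructor.
PVWattsDemo = functools.partial(ModelChainDemo, dc_model='pvwatts', ac_model='pvwatts')

mc = PVWattsDemo('my_system', 'my_location')
assert isinstance(mc, ModelChainDemo)  # instances are ordinary ModelChainDemo objects
assert mc.dc_model == 'pvwatts'

# But the partial object is not a class: it cannot be subclassed and
# carries no __name__ or docstring of its own.
assert not isinstance(PVWattsDemo, type)
```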
**Describe alternatives you've considered**
We could subclass ModelChain but that seems like overkill for this. Inheritance leads to brittle code, in my experience, so I prefer to avoid it if possible.
We could supply dicts of parameters e.g.
```python
# modelchain.py
pvwatts_config = dict(
dc_model='pvwatts', ac_model='pvwatts', losses_model='pvwatts',
transposition_model='perez', aoi_model='physical'
)
# user code
my_model_chain = ModelChain(my_location, my_system, **pvwatts_config)
```
Any other ideas?
----------
Replacing the multiple model name attributes with a single dict has some appeal...
I guess the two ideas could also be combined:
```python
# modelchain.py
pvwatts_config = dict(
dc_model='pvwatts', ac_model='pvwatts', losses_model='pvwatts',
transposition_model='perez', aoi_model='physical'
)
PVWatts = functools.partial(ModelChain, **pvwatts_config)
# user code
my_model_chain = ModelChain(my_location, my_system, **pvwatts_config)
# or
my_model_chain = PVWatts(my_location, my_system)
```
Any comments on preferred approaches from folks that use `ModelChain` and would like some additional standardization? @kanderso-nrel @mikofski do you use `ModelChain` and/or have opinions on this? Would something like this be useful for rdtools?
I haven't used `ModelChain` in any significant way, but I mildly prefer offering a "partial class" over exposing the `pvwatts_config` dict for users to double-splat into their `ModelChain(...)` calls. Users not familiar with splatting might not find that interface any more appealing than the main ModelChain interface.
> Would something like this be useful for rdtools?
I've wondered about this too. https://github.com/NREL/rdtools/pull/173 added the ability to normalize against any user-supplied expected energy timeseries, which I think fits into a broader sentiment of "rdtools isn't in the PV modeling business". I don't see much of a benefit to rdtools interacting with a ModelChain instance and calling its methods instead of having the user do it and pass in the result instead. Maybe @mdeceglie has thoughts?
To me the pre-configured dict alone is sufficient and a simple solution from a maintainer standpoint, but how would the dict be offered to users for use in their code? The simplest entrypoint for the user may very well be the PVWatts partial class.
Though I worry the standalone PVWatts is redundant and just creates extra entries into the chain to maintain, especially given the complications described in the SO.
There are some possible benefits to using the pre-configured dict with ModelChain, such as maintaining compatibility where it is already being used and setting a precedent for other pre-configurations to follow the same format without needing other partial classes for each one.
Would there be interest in adding a parameter to ModelChain like 'model_strategy' in the vein of orientation_strategy, that when set configures the pvwatts_config dict on the backend? So the user would call ModelChain(my_location, my_system, model_strategy='pvwatts')?
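A `model_strategy` parameter along those lines might dispatch on a table of configurations. A hypothetical sketch only: the names below are invented for illustration, and this interface is not what the PR ultimately added (classmethod factories were):

```python
# Hypothetical: pvlib's ModelChain does not take a 'model_strategy' argument.
STRATEGY_CONFIGS = {
    'pvwatts': dict(dc_model='pvwatts', ac_model='pvwatts', losses_model='pvwatts'),
    'sapm': dict(dc_model='sapm', ac_model='snlinverter'),
}

class ChainDemo:
    def __init__(self, location, system, model_strategy=None, **kwargs):
        config = dict(STRATEGY_CONFIGS.get(model_strategy, {}))
        config.update(kwargs)  # explicit keyword arguments beat strategy defaults
        self.location = location
        self.system = system
        self.config = config

mc = ChainDemo('my_location', 'my_system', model_strategy='pvwatts', ac_model='sandia')
assert mc.config['dc_model'] == 'pvwatts'
assert mc.config['ac_model'] == 'sandia'
```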
--------------------
</issues> | aa1635bcb40dc83f82e9fd72158670c235bfe99b |
pyocd__pyOCD-929 | 929 | pyocd/pyOCD | null | 11c27362c4cb107fe6834c496ba6c03779dac85d | 2020-08-10T20:53:30Z | diff --git a/pyocd/coresight/component_ids.py b/pyocd/coresight/component_ids.py
index d52a679af..b6b838479 100644
--- a/pyocd/coresight/component_ids.py
+++ b/pyocd/coresight/component_ids.py
@@ -24,6 +24,7 @@
from .itm import ITM
from .tpiu import TPIU
from .gpr import GPR
+from .sdc600 import SDC600
# Component classes.
ROM_TABLE_CLASS = 0x1
@@ -96,7 +97,7 @@
(ARM_ID, CORESIGHT_CLASS, 0x9ec, 0x22, 0) : CmpInfo('CS-600 ATB Replicator', None ),
(ARM_ID, CORESIGHT_CLASS, 0x9ed, 0x14, 0x1a14) : CmpInfo('CS-600 CTI', None ),
(ARM_ID, CORESIGHT_CLASS, 0x9ee, 0x00, 0) : CmpInfo('CS-600 CATU', None ),
- (ARM_ID, CORESIGHT_CLASS, 0x9ef, 0x00, 0x0a57) : CmpInfo('CS-600 SDC-600', None ),
+ (ARM_ID, CORESIGHT_CLASS, 0x9ef, 0x00, 0x0a57) : CmpInfo('CS-600 SDC-600', SDC600.factory ),
(ARM_ID, CORESIGHT_CLASS, 0xc05, 0x15, 0) : CmpInfo('CPU-A5', None ),
(ARM_ID, CORESIGHT_CLASS, 0xc07, 0x15, 0) : CmpInfo('CPU-A7', None ),
(ARM_ID, CORESIGHT_CLASS, 0xc08, 0x15, 0) : CmpInfo('CPU-A8', None ),
diff --git a/pyocd/coresight/sdc600.py b/pyocd/coresight/sdc600.py
new file mode 100644
index 000000000..3b7d44110
--- /dev/null
+++ b/pyocd/coresight/sdc600.py
@@ -0,0 +1,479 @@
+# pyOCD debugger
+# Copyright (c) 2020 Arm Limited
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+from time import sleep
+from enum import Enum
+
+from .component import CoreSightComponent
+from ..core import exceptions
+from ..utility.timeout import Timeout
+from ..utility.hex import dump_hex_data
+
+LOG = logging.getLogger(__name__)
+
+class ComPortError(exceptions.Error):
+ """! @brief Base class for SDC-600 exceptions."""
+ pass
+
+class UnexpectedFlagError(ComPortError):
+ """! @brief Received an unexpected or out of order flag byte."""
+ pass
+
+class LinkError(ComPortError):
+ """! @brief Received a link error flag (LERR)."""
+ pass
+
+class LinkClosedException(ComPortError):
+ """! @brief Received an unexpected or out of order flag byte."""
+ def __init__(self, phase):
+ self._phase = phase
+
+ @property
+ def phase(self):
+ """! @brief The link phase that was closed from the other side."""
+ return self._phase
+
+class SDC600(CoreSightComponent):
+ """! @brief SDC-600 component.
+ """
+
+ ## Default timeout for an operation or packet transfer.
+ TRANSFER_TIMEOUT = 30.0
+
+ class LinkPhase(Enum):
+ """! @brief COM Port link phases."""
+ ## Hardware-defined link phase.
+ PHASE1 = 1
+        ## Software-defined link phase.
+ PHASE2 = 2
+
+ class Register:
+ """! @brief Namespace for SDC-600 register offset constants."""
+ # Register offsets.
+ VIDR = 0xD00
+ FIDTXR = 0xD08
+ FIDRXR = 0xD0C
+ ICSR = 0xD10
+ DR = 0xD20
+ SR = 0xD2C
+ DBR = 0xD30
+ SR_ALIAS = 0xD3C
+
+ # FIDTXR and FIDRXR bit definitions.
+ FIDxXR_xXI_MASK = (0x00000001)
+ FIDxXR_xXI_SHIFT = (0)
+ FIDxXR_xXINT_MASK = (0x00000002)
+ FIDxXR_xXINT_SHIFT = (1)
+ FIDxXR_xXW_MASK = (0x000000f0)
+ FIDxXR_xXW_SHIFT = (4)
+ FIDxXR_xXSZ8_MASK = (0x00000100)
+ FIDxXR_xXSZ8_SHIFT = (8)
+ FIDxXR_xXSZ16_MASK = (0x00000200)
+ FIDxXR_xXSZ16_SHIFT = (9)
+ FIDxXR_xXSZ32_MASK = (0x00000400)
+ FIDxXR_xXSZ32_SHIFT = (10)
+ FIDxXR_xXFD_MASK = (0x000f0000)
+ FIDxXR_xXFD_SHIFT = (16)
+
+ # SR bit definitions.
+ SR_TXS_MASK = (0x000000ff)
+ SR_TXS_SHIFT = (0)
+ SR_RRDIS_MASK = (0x00001000)
+ SR_RRDIS_SHIFT = (12)
+ SR_TXOE_MASK = (0x00002000)
+ SR_TXOE_SHIFT = (13)
+ SR_TXLE_MASK = (0x00004000)
+ SR_TXLE_SHIFT = (14)
+ SR_TRINPROG_MASK = (0x00008000)
+ SR_TRINPROG_SHIFT = (18)
+ SR_RXF_MASK = (0x00ff0000)
+ SR_RXF_SHIFT = (16)
+ SR_RXLE_MASK = (0x40000000)
+ SR_RXLE_SHIFT = (30)
+ SR_PEN_MASK = (0x80000000)
+ SR_PEN_SHIFT = (31)
+
+ class Flag:
+ """! @brief Namespace with SDC-600 flag byte constants."""
+ IDR = 0xA0
+ IDA = 0xA1
+ LPH1RA = 0xA6
+ LPH1RL = 0xA7
+ LPH2RA = 0xA8
+ LPH2RL = 0xA9
+ LPH2RR = 0xAA
+ LERR = 0xAB
+ START = 0xAC
+ END = 0xAD
+ ESC = 0xAE
+ NULL = 0xAF
+
+ # All bytes with 0b101 in bits [7:5] are flag bytes.
+ MASK = 0xE0
+ IDENTIFIER = 0b10100000
+
+ ## Map from flag value to name.
+ NAME = {
+ IDR : "IDR",
+ IDA : "IDA",
+ LPH1RA : "LPH1RA",
+ LPH1RL : "LPH1RL",
+ LPH2RA : "LPH2RA",
+ LPH2RL : "LPH2RL",
+ LPH2RR : "LPH2RR",
+ LERR : "LERR",
+ START : "START",
+ END : "END",
+ ESC : "ESC",
+ NULL : "NULL",
+ }
+
+ ## NULL bytes must be written to the upper bytes, and will be present in the upper bytes
+ # when read.
+ NULL_FILL = 0xAFAFAF00
+
+ def __init__(self, ap, cmpid=None, addr=None):
+ super(SDC600, self).__init__(ap, cmpid, addr)
+ self._tx_width = 0
+ self._rx_width = 0
+ self._current_link_phase = None
+
+ def init(self):
+ """! @brief Inits the component.
+
+ Reads the RX and TX widths and whether the SDC-600 is enabled. All error flags are cleared.
+ """
+ fidtx = self.ap.read32(self.Register.FIDTXR)
+ LOG.debug("fidtx=0x%08x", fidtx)
+ fidrx = self.ap.read32(self.Register.FIDRXR)
+ LOG.debug("fidrx=0x%08x", fidrx)
+
+ self._tx_width = (fidtx & self.Register.FIDxXR_xXW_MASK) >> self.Register.FIDxXR_xXW_SHIFT
+
+ self._rx_width = (fidrx & self.Register.FIDxXR_xXW_MASK) >> self.Register.FIDxXR_xXW_SHIFT
+
+ status = self.ap.read32(self.Register.SR)
+ LOG.debug("status=0x%08x", status)
+ self._is_enabled = (status & self.Register.SR_PEN_MASK) != 0
+
+ # Clear any error flags.
+ error_flags = status & (self.Register.SR_TXOE_MASK | self.Register.SR_TXLE_MASK)
+ if error_flags:
+ self.ap.write32(self.Register.SR, error_flags)
+
+ @property
+ def is_enabled(self):
+ """! @brief Whether the SDC-600 is enabled."""
+ return self._is_enabled
+
+ @property
+ def is_reboot_request_enabled(self):
+ """! @brief Whether the Reboot Request feature is enabled in the SDC-600."""
+ return (self.ap.read32(self.Register.SR) & self.Register.SR_RRDIS_MASK) == 0
+
+ @property
+ def current_link_phase(self):
+ """! @brief Currently established link phase.
+ @return Either None or one of the SDC600.LinkPhase enums.
+ """
+ return self._current_link_phase
+
+ def _read1(self, to_):
+ """! @brief Read a single byte.
+
+ If a NULL byte is received, it is ignored and another byte is read. No other flag bytes
+ are processed.
+
+ @exception TimeoutError
+ """
+ while True:
+ # Wait until a byte is ready in the receive FIFO.
+ while to_.check():
+ if (self.ap.read32(self.Register.SR) & self.Register.SR_RXF_MASK) != 0:
+ break
+ else:
+ raise exceptions.TimeoutError("timeout while reading from SDC-600")
+
+ # Read the data register and strip off NULL bytes in high bytes.
+ value = self.ap.read32(self.Register.DR) & 0xFF
+
+ # Ignore NULL flag bytes.
+ if value == self.Flag.NULL:
+ continue
+
+ return value
+
+ def _write1(self, value, to_):
+ """! @brief Write one or more bytes.
+ @exception TimeoutError
+ """
+ # Wait until room is available in the transmit FIFO.
+ while to_.check():
+ if (self.ap.read32(self.Register.SR) & self.Register.SR_TXS_MASK) != 0:
+ break
+ else:
+ raise exceptions.TimeoutError("timeout while writing to SDC-600")
+
+ # Write this byte to the transmit FIFO.
+ dbr_value = self.NULL_FILL | (value & 0xFF)
+ self.ap.write32(self.Register.DR, dbr_value)
+
+ def _check_flags(self, value, to_):
+ """! @brief Handle link and error related flag bytes.
+ @param self
+ @param value Integer byte value to check.
+ @param to_ Timeout object.
+ @exception UnexpectedFlagError
+ @exception LinkClosedException
+ @exception LinkError
+ @exception TimeoutError
+ """
+ if value == self.Flag.LPH1RL:
+ LOG.debug("got LPH1RL!")
+ self._current_link_phase = None
+ raise LinkClosedException(self.LinkPhase.PHASE1)
+ elif value == self.Flag.LPH2RL:
+ LOG.debug("got LPH2RL!")
+ # Target killed the phase 2 connection. Send required reply.
+ self._current_link_phase = self.LinkPhase.PHASE1
+ self._write1(self.Flag.LPH2RL, to_)
+ raise LinkClosedException(self.LinkPhase.PHASE2)
+ elif value == self.Flag.LERR:
+ LOG.debug("got LERR!")
+ raise LinkError()
+ # Catch reserved flags.
+ elif (0xA2 <= value <= 0xA5) or (0xB0 <= value <= 0xBF):
+            raise UnexpectedFlagError("received reserved flag value ({:#04x})".format(value))
+
+ def _expect_flag(self, flag, to_):
+ """! @brief Read a byte and compare to expected value.
+ @param self
+ @param flag Integer flag byte value to match.
+ @param to_ Timeout object.
+ @exception UnexpectedFlagError
+ @exception LinkClosedException
+ @exception TimeoutError
+ """
+ value = self._read1(to_)
+ if value != flag:
+ # Check certain flags we have to handle. This will raise if a flag is handled.
+ self._check_flags(value, to_)
+            # _check_flags() did not raise, so report the unexpected flag.
+ raise UnexpectedFlagError("got {:#04x} instead of expected {} ({:#04x})".format(
+ value, self.Flag.NAME[flag], flag))
+ else:
+ LOG.debug("got expected %s", self.Flag.NAME[value])
+
+ def _stuff(self, data):
+ """! @brief Perform COM Encapsulation byte stuffing.
+ @param self
+ @param data List of integers of the original data.
+ @return List of integers for the escaped version of _data_.
+ """
+ result = []
+ for value in data:
+ # Values matching flag bytes just get copied to output.
+ if (value & self.Flag.MASK) == self.Flag.IDENTIFIER:
+ # Insert escape flag.
+ result.append(self.Flag.ESC)
+
+ # Invert high bit.
+ value ^= 0x80
+
+ result.append(value)
+ return result
+
+ def _destuff(self, data):
+ """! @brief Remove COM Encapsulation byte stuffing.
+ @param self
+ @param data List of integers. The only acceptable flag byte is ESC.
+ @return List of integers properly de-stuffed.
+ """
+ result = []
+ i = 0
+ while i < len(data):
+ value = data[i]
+
+ # Check for escaped bytes.
+ if value == self.Flag.ESC:
+ # Skip over escape.
+ i += 1
+
+ # Get escaped byte and invert high bit to destuff it.
+ value = data[i] ^ 0x80
+
+ result.append(value)
+
+ i += 1
+ return result
+
+ def _read_packet_data_to_end(self, to_):
+ """! @brief Read an escaped packet from the first message byte to the end.
+ @exception UnexpectedFlagError
+ @exception LinkClosedException
+ @exception TimeoutError
+ """
+ result = []
+ while to_.check():
+ value = self._read1(to_)
+
+ # Check for the packet end marker flag.
+ if value == self.Flag.END:
+ break
+ # Handle other flag bytes. This will raise on any detected flags.
+ elif (value & self.Flag.MASK) == self.Flag.IDENTIFIER:
+ self._check_flags(value, to_)
+
+ # Append data bytes.
+ result.append(value)
+ else:
+ raise exceptions.TimeoutError("timeout while reading from SDC-600")
+
+ return self._destuff(result)
+
+ def receive_packet(self, timeout=TRANSFER_TIMEOUT):
+ """! @brief Read a data packet.
+
+ Reads a packet (PDU) from the target and removes byte stuffing. The timeout for reading the
+ entire packet can be set via the _timeout_ parameter.
+
+ As data is read from the target, special flags for link errors or to close either phase of
+ the link are handled and an appropriate exception is raised.
+
+ The connection must be in link phase 2.
+
+ @param self
+ @param timeout Optional timeout for reading the entire packet. If reading times out, a
+ TimeoutError exception is raised.
+ @return List of integer byte values of the de-escaped packet contents.
+
+ @exception UnexpectedFlagError
+ @exception LinkClosedException
+ @exception TimeoutError
+ """
+ assert self._current_link_phase == self.LinkPhase.PHASE2
+ with Timeout(timeout) as to_:
+ self._expect_flag(self.Flag.START, to_)
+ return self._read_packet_data_to_end(to_)
+
+ def send_packet(self, data, timeout=TRANSFER_TIMEOUT):
+ """! @brief Send a data packet.
+
+ Sends the provided data to the target as a single packet (PDU), escaping bytes as necessary.
+ No data is read while the packet is sent, so if the target closes the connection it will
+ not be detected.
+
+ The connection must be in link phase 2.
+
+ @param self
+ @param data List of integer byte values to send. Must not be pre-escaped.
+ @param timeout Optional timeout for reading the entire packet. If reading times out, a
+ TimeoutError exception is raised.
+
+ @exception UnexpectedFlagError
+ @exception TimeoutError
+ """
+ assert self._current_link_phase == self.LinkPhase.PHASE2
+ with Timeout(timeout) as to_:
+ self._write1(self.Flag.START, to_)
+ for value in self._stuff(data):
+ self._write1(value, to_)
+ self._write1(self.Flag.END, to_)
+
+ def open_link(self, phase, timeout=TRANSFER_TIMEOUT):
+ """! @brief Send the LPH1RA or LPH2RA flag.
+ @exception UnexpectedFlagError
+ @exception LinkClosedException
+ @exception TimeoutError
+ """
+ with Timeout(timeout) as to_:
+ if phase == self.LinkPhase.PHASE1:
+                assert self._current_link_phase is None
+
+ # Close link phase 1 first, to put it in a known state.
+ self.close_link(self.LinkPhase.PHASE1)
+
+ LOG.debug("sending LPH1RA")
+ self._write1(self.Flag.LPH1RA, to_)
+ self._expect_flag(self.Flag.LPH1RA, to_)
+
+ self._current_link_phase = self.LinkPhase.PHASE1
+ elif phase == self.LinkPhase.PHASE2:
+ assert self._current_link_phase == self.LinkPhase.PHASE1
+
+ LOG.debug("sending LPH2RA")
+ self._write1(self.Flag.LPH2RA, to_)
+ self._expect_flag(self.Flag.LPH2RA, to_)
+
+ self._current_link_phase = self.LinkPhase.PHASE2
+ else:
+ raise ValueError("unrecognized phase value")
+
+ def close_link(self, phase, timeout=TRANSFER_TIMEOUT):
+ """! @brief Send the LPH1RL or LPH2RL flag.
+
+ Link phase 1 can be closed from any state. Link phase 2 can only be closed when the
+ connection is already in that phase.
+
+ @exception UnexpectedFlagError
+ @exception TimeoutError
+ """
+ with Timeout(timeout) as to_:
+ if phase == self.LinkPhase.PHASE1:
+ # Link phase 1 can be closed from any state, so we don't assert here.
+ LOG.debug("sending LPH1RL")
+ self._write1(self.Flag.LPH1RL, to_)
+ self._expect_flag(self.Flag.LPH1RL, to_)
+
+ self._current_link_phase = None
+ elif phase == self.LinkPhase.PHASE2:
+ assert self._current_link_phase == self.LinkPhase.PHASE2
+
+ LOG.debug("sending LPH2RL")
+ self._write1(self.Flag.LPH2RL, to_)
+ self._expect_flag(self.Flag.LPH2RL, to_)
+
+ self._current_link_phase = self.LinkPhase.PHASE1
+ else:
+ raise ValueError("unrecognized phase value")
+
+ def _log_status(self):
+ status = self.ap.read32(self.Register.SR)
+ LOG.info("status=0x%08x phase=%s", status, self._current_link_phase)
+
+ def read_protocol_id(self, timeout=TRANSFER_TIMEOUT):
+ """! @brief Read and return the 6-byte protocol ID.
+ @exception UnexpectedFlagError
+ @exception LinkClosedException
+ @exception TimeoutError
+ """
+ with Timeout(timeout) as to_:
+ self._write1(self.Flag.IDR, to_)
+ self._expect_flag(self.Flag.IDA, to_)
+ return self._read_packet_data_to_end(to_)
+
+ def send_reboot_request(self, timeout=TRANSFER_TIMEOUT):
+ """! @brief Send remote reboot request."""
+ with Timeout(timeout) as to_:
+ self._write1(self.Flag.LPH2RR, to_)
+
+ def __repr__(self):
+ return "<SDC-600@{:x}: en={} txw={} rxw={} phase={}>".format(id(self),
+ self._is_enabled, self._tx_width, self._rx_width, self._current_link_phase)
+
+
+
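The COM Encapsulation byte stuffing implemented by `_stuff`/`_destuff` above — any byte matching the flag pattern `0b101xxxxx` (0xA0–0xBF) is prefixed with ESC and has its high bit inverted — can be exercised standalone. A minimal sketch reproducing that logic, independent of pyOCD:

```python
ESC = 0xAE                     # escape flag byte
FLAG_MASK = 0xE0
FLAG_IDENTIFIER = 0b10100000   # bytes 0xA0-0xBF are flag bytes

def stuff(data):
    """Escape any byte that would collide with a flag value."""
    out = []
    for value in data:
        if (value & FLAG_MASK) == FLAG_IDENTIFIER:
            out.append(ESC)
            value ^= 0x80      # invert the high bit
        out.append(value)
    return out

def destuff(data):
    """Undo the escaping performed by stuff()."""
    out = []
    it = iter(data)
    for value in it:
        if value == ESC:
            value = next(it) ^ 0x80
        out.append(value)
    return out

assert stuff([0x41]) == [0x41]         # non-flag bytes pass through untouched
assert stuff([0xAC]) == [ESC, 0x2C]    # the START flag value gets escaped
for i in range(256):
    assert destuff(stuff([i])) == [i]  # lossless round trip for every byte
```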
diff --git a/pyocd/utility/timeout.py b/pyocd/utility/timeout.py
index 39c97c874..0f099a0e4 100644
--- a/pyocd/utility/timeout.py
+++ b/pyocd/utility/timeout.py
@@ -1,5 +1,5 @@
# pyOCD debugger
-# Copyright (c) 2017-2018 Arm Limited
+# Copyright (c) 2017-2020 Arm Limited
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from time import time
+from time import (time, sleep)
class Timeout(object):
"""! @brief Timeout helper context manager.
@@ -24,12 +24,11 @@ class Timeout(object):
to exit in the successful case.
@code
- with Timeout(5) as t_o:
+ with Timeout(5, sleeptime=0.1) as t_o:
while t_o.check(): # or "while not t_o.did_time_out"
# Perform some operation, check, etc.
if foobar:
break
- sleep(0.1)
else:
print("Timed out!")
@endcode
@@ -49,12 +48,24 @@ class Timeout(object):
You may also combine the call to check() in the while loop with other boolean expressions
related to the operation being performed.
+
+ If you pass a non-zero value for _sleeptime_ to the constructor, the check() method will
+ automatically sleep by default starting with the second call. You can disable auto-sleep
+ by passing `autosleep=False` to check().
"""
- def __init__(self, timeout):
+ def __init__(self, timeout, sleeptime=0):
+ """! @brief Constructor.
+ @param self
+ @param timeout The timeout in seconds.
+ @param sleeptime Time in seconds to sleep during calls to check(). Defaults to 0, thus
+ check() will not sleep unless you pass a different value.
+ """
+ self._sleeptime = sleeptime
self._timeout = timeout
self._timed_out = False
self._start = -1
+ self._is_first_check = True
def __enter__(self):
self._start = time()
@@ -63,13 +74,31 @@ def __enter__(self):
def __exit__(self, exc_type, exc_val, exc_tb):
pass
- def check(self):
+ def check(self, autosleep=True):
+ """! @brief Check for timeout and possibly sleep.
+
+ Starting with the second call to this method, it will automatically sleep before returning
+ if:
+ - The timeout has not yet occurred.
+ - A non-zero _sleeptime_ was passed to the constructor.
+ - The _autosleep_ parameter is True.
+
+ @param self
+ @param autosleep Whether to sleep if not timed out yet. The sleeptime passed to the
+ constructor must have been non-zero.
+ """
+ # Check for a timeout.
if (time() - self._start) > self._timeout:
self._timed_out = True
+ # Sleep if appropriate.
+ elif (not self._is_first_check) and autosleep and self._sleeptime:
+ sleep(self._sleeptime)
+ self._is_first_check = False
return not self._timed_out
@property
def did_time_out(self):
- self.check()
+ """! @brief Whether the timeout has occurred as of the time when this property is accessed."""
+ self.check(autosleep=False)
return self._timed_out
| diff --git a/test/unit/test_sdc600.py b/test/unit/test_sdc600.py
new file mode 100644
index 000000000..12dbb8beb
--- /dev/null
+++ b/test/unit/test_sdc600.py
@@ -0,0 +1,94 @@
+# pyOCD debugger
+# Copyright (c) 2020 Arm Limited
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+import six
+
+# unittest.mock is available from Python 3.3.
+try:
+ from unittest import mock
+except ImportError:
+ import mock
+
+from .test_rom_table import (MockCoreSightComponent, MockCoreSight, MockCSSOC600Components)
+
+from pyocd.coresight.ap import AccessPort
+from pyocd.coresight.sdc600 import SDC600
+from pyocd.coresight.rom_table import CoreSightComponentID
+
+MockAP = mock.Mock(spec=AccessPort)
+
+@pytest.fixture(scope='function')
+def sdc():
+ cs = MockCoreSight([MockCSSOC600Components.SDC600])
+ # Add SDC-600 register values.
+ cs.write_memory_block32(0xd00, [
+ 0x00000000, # VIDR = 0xD00
+ 0, # 0xD04
+ 0x00000411, # FIDTXR = 0xD08
+ 0x00000401, # FIDRXR = 0xD0C
+ 0x00000000, # ICSR = 0xD10
+ 0, # 0xD14
+ 0, # 0xD18
+ 0, # 0xD1C
+ 0x00000000, # DR = 0xD20
+ 0, # 0xD24
+ 0, # 0xD28
+ 0x80011001, # SR = 0xD2C
+ 0x00000000, # DBR = 0xD30
+ 0, # 0xD34
+ 0, # 0xD38
+ 0x80011001, # SR_ALIAS = 0xD3C
+ ])
+ cmpid = CoreSightComponentID(None, cs, MockCSSOC600Components.SDC600_BASE)
+ sdc600 = SDC600(cs, cmpid, 0x1000)
+ sdc600.init()
+ return sdc600
+
+# Flag bytes
+FLAGS = [i for i in range(0xa0, 0xc0)]
+
+class TestSDC600:
+ # Verify non-flag bytes are not escaped.
+ def test_stuff_nonflag(self, sdc):
+ print(FLAGS)
+ for i in range(256):
+ # Skip flag bytes.
+ if i in FLAGS:
+ continue
+ assert sdc._stuff([i]) == [i]
+
+ # Verify non-flag bytes are not de-escaped.
+ def test_destuff_nonflag(self, sdc):
+ for i in range(256):
+ # Skip flag bytes.
+ if i in FLAGS:
+ continue
+ assert sdc._destuff([i]) == [i]
+
+ # Test stuffing a single byte.
+ def test_stuff_flag(self, sdc):
+ for i in FLAGS:
+ assert sdc._stuff([i]) == [SDC600.Flag.ESC, i ^ 0x80]
+ assert sdc._stuff([i]) == [SDC600.Flag.ESC, i & ~0x80]
+
+ # Test destuffing a single escaped byte.
+ def test_destuff_flag(self, sdc):
+ for i in FLAGS:
+ assert sdc._destuff([SDC600.Flag.ESC, i ^ 0x80]) == [i]
+ assert sdc._destuff([SDC600.Flag.ESC, i & ~0x80]) == [i]
+
+
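+
+The stuffing behavior these tests pin down can be reproduced in a few lines. This sketch is derived only from the assertions above, not from the SDC-600 specification: bytes in 0xA0–0xBF are flags, and a stuffed flag byte becomes `ESC` followed by the byte with bit 7 cleared. The concrete `ESC` value of 0xA0 is an illustrative assumption here; the real constant is `SDC600.Flag.ESC`.
+
+```python
+# Hypothetical ESC value for illustration; the real one is SDC600.Flag.ESC.
+ESC = 0xA0
+FLAGS = set(range(0xA0, 0xC0))  # flag byte range used by the tests above
+
+def stuff(data):
+    """Escape flag bytes: each becomes ESC followed by the byte with bit 7 cleared."""
+    out = []
+    for b in data:
+        if b in FLAGS:
+            out += [ESC, b ^ 0x80]
+        else:
+            out.append(b)
+    return out
+
+def destuff(data):
+    """Reverse the escaping; assumes ESC is the only flag byte present."""
+    out = []
+    it = iter(data)
+    for b in it:
+        if b == ESC:
+            out.append(next(it) ^ 0x80)
+        else:
+            out.append(b)
+    return out
+
+assert stuff([0x41, 0xA5]) == [0x41, ESC, 0x25]
+assert destuff(stuff(list(range(256)))) == list(range(256))
+```
+
+Because `b ^ 0x80` and `b & ~0x80` coincide for bytes with bit 7 set, both forms checked in `test_stuff_flag` above describe the same escape byte.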
| [
{
"components": [
{
"doc": "! @brief Base class for SDC-600 exceptions.",
"lines": [
28,
30
],
"name": "ComPortError",
"signature": "class ComPortError(exceptions.Error):",
"type": "class"
},
{
"doc": "! @brief Receive... | [
"test/unit/test_sdc600.py::TestSDC600::test_stuff_nonflag",
"test/unit/test_sdc600.py::TestSDC600::test_destuff_nonflag",
"test/unit/test_sdc600.py::TestSDC600::test_stuff_flag",
"test/unit/test_sdc600.py::TestSDC600::test_destuff_flag"
] | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
SDC-600 component
Adds a component class for the Arm SDC-600 debug communications channel, typically used for secure debug. No currently shipping MCUs use SDC-600, so this is mostly useful with designs in development running on an FPGA.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in pyocd/coresight/sdc600.py]
(definition of ComPortError:)
class ComPortError(exceptions.Error):
"""! @brief Base class for SDC-600 exceptions."""
(definition of UnexpectedFlagError:)
class UnexpectedFlagError(ComPortError):
"""! @brief Received an unexpected or out of order flag byte."""
(definition of LinkError:)
class LinkError(ComPortError):
"""! @brief Received a link error flag (LERR)."""
(definition of LinkClosedException:)
class LinkClosedException(ComPortError):
"""! @brief Received an unexpected or out of order flag byte."""
(definition of LinkClosedException.__init__:)
def __init__(self, phase):
(definition of LinkClosedException.phase:)
def phase(self):
"""! @brief The link phase that was closed from the other side."""
(definition of SDC600:)
class SDC600(CoreSightComponent):
"""! @brief SDC-600 component.
"""
(definition of SDC600.LinkPhase:)
class LinkPhase(Enum):
"""! @brief COM Port link phases."""
(definition of SDC600.Register:)
class Register:
"""! @brief Namespace for SDC-600 register offset constants."""
(definition of SDC600.Flag:)
class Flag:
"""! @brief Namespace with SDC-600 flag byte constants."""
(definition of SDC600.__init__:)
def __init__(self, ap, cmpid=None, addr=None):
(definition of SDC600.init:)
def init(self):
"""! @brief Inits the component.
Reads the RX and TX widths and whether the SDC-600 is enabled. All error flags are cleared."""
(definition of SDC600.is_enabled:)
def is_enabled(self):
"""! @brief Whether the SDC-600 is enabled."""
(definition of SDC600.is_reboot_request_enabled:)
def is_reboot_request_enabled(self):
"""! @brief Whether the Reboot Request feature is enabled in the SDC-600."""
(definition of SDC600.current_link_phase:)
def current_link_phase(self):
"""! @brief Currently established link phase.
@return Either None or one of the SDC600.LinkPhase enums."""
(definition of SDC600._read1:)
def _read1(self, to_):
"""! @brief Read a single byte.
If a NULL byte is received, it is ignored and another byte is read. No other flag bytes
are processed.
@exception TimeoutError"""
(definition of SDC600._write1:)
def _write1(self, value, to_):
"""! @brief Write one or more bytes.
@exception TimeoutError"""
(definition of SDC600._check_flags:)
def _check_flags(self, value, to_):
"""! @brief Handle link and error related flag bytes.
@param self
@param value Integer byte value to check.
@param to_ Timeout object.
@exception UnexpectedFlagError
@exception LinkClosedException
@exception LinkError
@exception TimeoutError"""
(definition of SDC600._expect_flag:)
def _expect_flag(self, flag, to_):
"""! @brief Read a byte and compare to expected value.
@param self
@param flag Integer flag byte value to match.
@param to_ Timeout object.
@exception UnexpectedFlagError
@exception LinkClosedException
@exception TimeoutError"""
(definition of SDC600._stuff:)
def _stuff(self, data):
"""! @brief Perform COM Encapsulation byte stuffing.
@param self
@param data List of integers of the original data.
@return List of integers for the escaped version of _data_."""
(definition of SDC600._destuff:)
def _destuff(self, data):
"""! @brief Remove COM Encapsulation byte stuffing.
@param self
@param data List of integers. The only acceptable flag byte is ESC.
@return List of integers properly de-stuffed."""
(definition of SDC600._read_packet_data_to_end:)
def _read_packet_data_to_end(self, to_):
"""! @brief Read an escaped packet from the first message byte to the end.
@exception UnexpectedFlagError
@exception LinkClosedException
@exception TimeoutError"""
(definition of SDC600.receive_packet:)
def receive_packet(self, timeout=TRANSFER_TIMEOUT):
"""! @brief Read a data packet.
Reads a packet (PDU) from the target and removes byte stuffing. The timeout for reading the
entire packet can be set via the _timeout_ parameter.
As data is read from the target, special flags for link errors or to close either phase of
the link are handled and an appropriate exception is raised.
The connection must be in link phase 2.
@param self
@param timeout Optional timeout for reading the entire packet. If reading times out, a
TimeoutError exception is raised.
@return List of integer byte values of the de-escaped packet contents.
@exception UnexpectedFlagError
@exception LinkClosedException
@exception TimeoutError"""
(definition of SDC600.send_packet:)
def send_packet(self, data, timeout=TRANSFER_TIMEOUT):
"""! @brief Send a data packet.
Sends the provided data to the target as a single packet (PDU), escaping bytes as necessary.
No data is read while the packet is sent, so if the target closes the connection it will
not be detected.
The connection must be in link phase 2.
@param self
@param data List of integer byte values to send. Must not be pre-escaped.
@param timeout Optional timeout for reading the entire packet. If reading times out, a
TimeoutError exception is raised.
@exception UnexpectedFlagError
@exception TimeoutError"""
(definition of SDC600.open_link:)
def open_link(self, phase, timeout=TRANSFER_TIMEOUT):
"""! @brief Send the LPH1RA or LPH2RA flag.
@exception UnexpectedFlagError
@exception LinkClosedException
@exception TimeoutError"""
(definition of SDC600.close_link:)
def close_link(self, phase, timeout=TRANSFER_TIMEOUT):
"""! @brief Send the LPH1RL or LPH2RL flag.
Link phase 1 can be closed from any state. Link phase 2 can only be closed when the
connection is already in that phase.
@exception UnexpectedFlagError
@exception TimeoutError"""
(definition of SDC600._log_status:)
def _log_status(self):
(definition of SDC600.read_protocol_id:)
def read_protocol_id(self, timeout=TRANSFER_TIMEOUT):
"""! @brief Read and return the 6-byte protocol ID.
@exception UnexpectedFlagError
@exception LinkClosedException
@exception TimeoutError"""
(definition of SDC600.send_reboot_request:)
def send_reboot_request(self, timeout=TRANSFER_TIMEOUT):
"""! @brief Send remote reboot request."""
(definition of SDC600.__repr__:)
def __repr__(self):
[end of new definitions in pyocd/coresight/sdc600.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | fe7d737424be818daa1d55ecbd59be1a0dffdf5b | ||
sphinx-doc__sphinx-8075 | 8,075 | sphinx-doc/sphinx | 3.4 | 487b8436c6e8dc596db4b8d4d06e9145105a2ac2 | 2020-08-08T11:20:59Z | diff --git a/CHANGES b/CHANGES
index b59dc7cd680..99e5cdaf9be 100644
--- a/CHANGES
+++ b/CHANGES
@@ -13,6 +13,10 @@ Deprecated
Features added
--------------
+* #6914: Add a new event :event:`warn-missing-reference` to custom warning
+ messages when failed to resolve a cross-reference
+* #6914: Emit a detailed warning when failed to resolve a ``:ref:`` reference
+
Bugs fixed
----------
diff --git a/doc/extdev/appapi.rst b/doc/extdev/appapi.rst
index df3eb3d6750..9f2c10676d4 100644
--- a/doc/extdev/appapi.rst
+++ b/doc/extdev/appapi.rst
@@ -186,6 +186,7 @@ type for that event::
13. apply post-transforms (by priority): docutils.document -> docutils.document
14. event.doctree-resolved(app, doctree, docname)
- (for any reference node that fails to resolve) event.missing-reference(env, node, contnode)
+ - (for any reference node that fails to resolve) event.warn-missing-reference(domain, node)
15. Generate output files
16. event.build-finished(app, exception)
@@ -284,6 +285,14 @@ Here is a more detailed list of these events.
.. versionadded:: 0.5
+.. event:: warn-missing-reference (app, domain, node)
+
+ Emitted when a cross-reference to an object cannot be resolved even after
+ :event:`missing-reference`. If the event handler can emit warnings for
+ the missing reference, it should return ``True``.
+
+ .. versionadded:: 3.4
+
.. event:: doctree-resolved (app, doctree, docname)
Emitted when a doctree has been "resolved" by the environment, that is, all
diff --git a/sphinx/domains/std.py b/sphinx/domains/std.py
index 39f67b54eca..7ea468404a3 100644
--- a/sphinx/domains/std.py
+++ b/sphinx/domains/std.py
@@ -610,8 +610,6 @@ class StandardDomain(Domain):
dangling_warnings = {
'term': 'term not in glossary: %(target)s',
- 'ref': 'undefined label: %(target)s (if the link has no caption '
- 'the label must precede a section header)',
'numref': 'undefined label: %(target)s',
'keyword': 'unknown keyword: %(target)s',
'doc': 'unknown document: %(target)s',
@@ -1107,8 +1105,23 @@ def note_labels(self, env: "BuildEnvironment", docname: str, document: nodes.doc
RemovedInSphinx40Warning, stacklevel=2)
+def warn_missing_reference(app: "Sphinx", domain: Domain, node: pending_xref) -> bool:
+ if domain.name != 'std' or node['reftype'] != 'ref':
+ return None
+ else:
+ target = node['reftarget']
+ if target not in domain.anonlabels: # type: ignore
+ msg = __('undefined label: %s')
+ else:
+ msg = __('Failed to create a cross reference. A title or caption not found: %s')
+
+ logger.warning(msg % target, location=node, type='ref', subtype=node['reftype'])
+ return True
+
+
def setup(app: "Sphinx") -> Dict[str, Any]:
app.add_domain(StandardDomain)
+ app.connect('warn-missing-reference', warn_missing_reference)
return {
'version': 'builtin',
diff --git a/sphinx/events.py b/sphinx/events.py
index 82a52d7622f..214654706f2 100644
--- a/sphinx/events.py
+++ b/sphinx/events.py
@@ -46,6 +46,7 @@
'doctree-read': 'the doctree before being pickled',
'env-merge-info': 'env, read docnames, other env instance',
'missing-reference': 'env, node, contnode',
+ 'warn-missing-reference': 'domain, node',
'doctree-resolved': 'doctree, docname',
'env-updated': 'env',
'html-collect-pages': 'builder',
diff --git a/sphinx/transforms/post_transforms/__init__.py b/sphinx/transforms/post_transforms/__init__.py
index 7dc14af52b0..6633d6434c1 100644
--- a/sphinx/transforms/post_transforms/__init__.py
+++ b/sphinx/transforms/post_transforms/__init__.py
@@ -166,7 +166,10 @@ def warn_missing_reference(self, refdoc: str, typ: str, target: str,
warn = False
if not warn:
return
- if domain and typ in domain.dangling_warnings:
+
+ if self.app.emit_firstresult('warn-missing-reference', domain, node):
+ return
+ elif domain and typ in domain.dangling_warnings:
msg = domain.dangling_warnings[typ]
elif node.get('refdomain', 'std') not in ('', 'std'):
msg = (__('%s:%s reference target not found: %%(target)s') %
| diff --git a/tests/roots/test-domain-py-xref-warning/conf.py b/tests/roots/test-domain-py-xref-warning/conf.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tests/roots/test-domain-py-xref-warning/index.rst b/tests/roots/test-domain-py-xref-warning/index.rst
new file mode 100644
index 00000000000..6f2cab795ac
--- /dev/null
+++ b/tests/roots/test-domain-py-xref-warning/index.rst
@@ -0,0 +1,7 @@
+test-domain-py-xref-warning
+===========================
+
+.. _existing-label:
+
+:ref:`no-label`
+:ref:`existing-label`
diff --git a/tests/test_domain_py.py b/tests/test_domain_py.py
index ceea515082c..d81b406c225 100644
--- a/tests/test_domain_py.py
+++ b/tests/test_domain_py.py
@@ -859,3 +859,11 @@ def test_noindexentry(app):
assert_node(doctree, (addnodes.index, desc, addnodes.index, desc))
assert_node(doctree[0], addnodes.index, entries=[('single', 'f (built-in class)', 'f', '', None)])
assert_node(doctree[2], addnodes.index, entries=[])
+
+
+@pytest.mark.sphinx('dummy', testroot='domain-py-xref-warning')
+def test_warn_missing_reference(app, status, warning):
+ app.build()
+ assert 'index.rst:6: WARNING: undefined label: no-label' in warning.getvalue()
+ assert ('index.rst:6: WARNING: Failed to create a cross reference. A title or caption not found: existing-label'
+ in warning.getvalue())
| diff --git a/CHANGES b/CHANGES
index b59dc7cd680..99e5cdaf9be 100644
--- a/CHANGES
+++ b/CHANGES
@@ -13,6 +13,10 @@ Deprecated
Features added
--------------
+* #6914: Add a new event :event:`warn-missing-reference` to custom warning
+ messages when failed to resolve a cross-reference
+* #6914: Emit a detailed warning when failed to resolve a ``:ref:`` reference
+
Bugs fixed
----------
diff --git a/doc/extdev/appapi.rst b/doc/extdev/appapi.rst
index df3eb3d6750..9f2c10676d4 100644
--- a/doc/extdev/appapi.rst
+++ b/doc/extdev/appapi.rst
@@ -186,6 +186,7 @@ type for that event::
13. apply post-transforms (by priority): docutils.document -> docutils.document
14. event.doctree-resolved(app, doctree, docname)
- (for any reference node that fails to resolve) event.missing-reference(env, node, contnode)
+ - (for any reference node that fails to resolve) event.warn-missing-reference(domain, node)
15. Generate output files
16. event.build-finished(app, exception)
@@ -284,6 +285,14 @@ Here is a more detailed list of these events.
.. versionadded:: 0.5
+.. event:: warn-missing-reference (app, domain, node)
+
+ Emitted when a cross-reference to an object cannot be resolved even after
+ :event:`missing-reference`. If the event handler can emit warnings for
+ the missing reference, it should return ``True``.
+
+ .. versionadded:: 3.4
+
.. event:: doctree-resolved (app, doctree, docname)
Emitted when a doctree has been "resolved" by the environment, that is, all
| [
{
"components": [
{
"doc": "",
"lines": [
1108,
1119
],
"name": "warn_missing_reference",
"signature": "def warn_missing_reference(app: \"Sphinx\", domain: Domain, node: pending_xref) -> bool:",
"type": "function"
}
],
"file... | [
"tests/test_domain_py.py::test_warn_missing_reference"
] | [
"tests/test_domain_py.py::test_function_signatures",
"tests/test_domain_py.py::test_domain_py_xrefs",
"tests/test_domain_py.py::test_domain_py_objects",
"tests/test_domain_py.py::test_resolve_xref_for_properties",
"tests/test_domain_py.py::test_domain_py_find_obj",
"tests/test_domain_py.py::test_get_full_... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Fix #6914: Emit a detailed warning when failed to resolve :ref:
### Feature or Bugfix
- Feature
### Purpose
- refs: #6914
- To be clear the ambiguous warning for missing-reference :ref:,
this separates the warning to missing-label and missing-caption.
- To emit a warning dynamically, this also adds a new event:
`warn-missing-reference` to customize warning messages via event
handlers.
----------
</request>
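The dispatch rule described above — handlers are tried in order and a `True` return suppresses the generic warning — can be sketched with a minimal stand-in for Sphinx's `emit_firstresult`. Everything here (`std_ref_handler`, the dict-shaped `node`) is illustrative; only the `(domain, node)` handler shape and the first-non-None-result rule come from the proposal.

```python
def emit_firstresult(handlers, *args):
    """Return the first non-None handler result, mirroring Sphinx's event dispatch."""
    for handler in handlers:
        result = handler(*args)
        if result is not None:
            return result
    return None

warnings = []

def std_ref_handler(domain, node):
    # Mirrors the std-domain handler in the diff: only claims std :ref: nodes.
    if domain != 'std' or node['reftype'] != 'ref':
        return None
    warnings.append('undefined label: %s' % node['reftarget'])
    return True

node = {'reftype': 'ref', 'reftarget': 'no-label'}
handled = emit_firstresult([std_ref_handler], 'std', node)
# handled is True, so the caller skips its generic "target not found" message.
```

When every handler returns `None`, the post-transform falls back to the existing `dangling_warnings` lookup — exactly the control flow added to `warn_missing_reference` in the diff.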
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in sphinx/domains/std.py]
(definition of warn_missing_reference:)
def warn_missing_reference(app: "Sphinx", domain: Domain, node: pending_xref) -> bool:
[end of new definitions in sphinx/domains/std.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
References to figures without captions: errors in both HTML and LaTeX
**Describe the bug**
Using figures without captions causes errors in both HTML (though these are properly reported when source is processed) and in LaTeX (they are not reported until LaTeX says there were undefined references).
This was the test document, compiled with sphinx 2.2.2 from pypi; `numfig=True` was added to conf.py, the project was otherwise generated with sphinx-build with no other changes. It is attached here: [sphinx-captions.zip](https://github.com/sphinx-doc/sphinx/files/3947135/sphinx-captions.zip)
```
Welcome to foo's documentation!
===============================
References:
* figure without caption
* plain reference :ref:`fig-sample-nocaption` (error: HTML, LaTeX)
* named reference :ref:`figure without caption <fig-sample-nocaption>` (error: LaTeX)
* numbered reference :numref:`fig-sample-nocaption` (error: LaTeX)
* figure with caption
* plain reference :ref:`fig-sample-caption`
* named reference :ref:`figure without caption <fig-sample-caption>`
* numbered reference :numref:`fig-sample-caption`
.. _fig-sample-nocaption:
.. figure:: sample.png
.. _fig-sample-caption:
.. figure:: sample.png
This is some caption.
```
and these are the results:
1. misleading warning: **index.rst:8: WARNING: undefined label: fig-sample-nocaption (if the link has no caption the label must precede a section header)**
2. this is HTML output (the error highlighted corresponds to the warning mentioned above):

3. this is LaTeX (pdflatex) output:
```
LaTeX Warning: Hyper reference `index:fig-sample-nocaption' on page 1 undefined
on input line 99.
LaTeX Warning: Hyper reference `index:fig-sample-nocaption' on page 1 undefined
on input line 102.
```

**Expected behavior**
I expect
1. sphinx to produce valid LaTeX input without undefined references;
2. uncaptioned figures to be referencable in LaTeX (this could be an optional setting perhaps causing uncaptioned figured to produce only "Figure 4.1." caption);
3. warning about figure not being captioned to be more meaningful -- I understand that non-numbered figure cannot be referenced via :ref:`label` (as the label will not resolve to any text) but the warning is not pointing to how to fix the issue.
**Environment info**
- OS: Ubuntu 18.04 LTS
- Python version: 3.6.8
- Sphinx version: 2.2.2
- Sphinx extensions: none
- Extra tools: pdflatex TeXLive
----------
--------------------
</issues> | c941b9cb14aefc132d6e3691085ce9704c1356e6 |
sympy__sympy-19896 | 19,896 | sympy/sympy | 1.7 | d20fcb2086345d9bef7c7651f6b36d7e49b4a804 | 2020-08-06T01:42:12Z | diff --git a/sympy/physics/control/lti.py b/sympy/physics/control/lti.py
index 46fd2ea928f6..72589784c223 100644
--- a/sympy/physics/control/lti.py
+++ b/sympy/physics/control/lti.py
@@ -1,11 +1,23 @@
-from sympy import Basic, Mul, degree, Symbol, expand, cancel, Expr, exp
+from sympy import Basic, Mul, Pow, degree, Symbol, expand, cancel, Expr, exp, roots
from sympy.core.evalf import EvalfMixin
+from sympy.core.logic import fuzzy_and
from sympy.core.numbers import Integer
from sympy.core.sympify import sympify, _sympify
+from sympy.polys import Poly, rootof
+from sympy.series import limit
__all__ = ['TransferFunction', 'Series', 'Parallel', 'Feedback']
+def _roots(poly, var):
+ """ like roots, but works on higher-order polynomials. """
+ r = roots(poly, var, multiple=True)
+ n = degree(poly)
+ if len(r) != n:
+ r = [rootof(poly, var, k) for k in range(n)]
+ return r
+
+
class TransferFunction(Basic, EvalfMixin):
"""
A class for representing LTI (Linear, time-invariant) systems that can be strictly described
@@ -249,6 +261,109 @@ def expand(self):
"""
return TransferFunction(expand(self.num), expand(self.den), self.var)
+ def dc_gain(self):
+ """
+ Computes the gain of the response as the frequency approaches zero.
+
+ The DC gain is infinite for systems with pure integrators.
+
+ Examples
+ ========
+
+ >>> from sympy.abc import s, p, a, b
+ >>> from sympy.physics.control.lti import TransferFunction
+ >>> tf1 = TransferFunction(s + 3, s**2 - 9, s)
+ >>> tf1.dc_gain()
+ -1/3
+ >>> tf2 = TransferFunction(p**2, p - 3 + p**3, p)
+ >>> tf2.dc_gain()
+ 0
+ >>> tf3 = TransferFunction(a*p**2 - b, s + b, s)
+ >>> tf3.dc_gain()
+ (a*p**2 - b)/b
+ >>> tf4 = TransferFunction(1, s, s)
+ >>> tf4.dc_gain()
+ oo
+
+ """
+ m = Mul(self.num, Pow(self.den, -1, evaluate=False), evaluate=False)
+ return limit(m, self.var, 0)
+
+ def poles(self):
+ """
+ Returns the poles of a transfer function.
+
+ Examples
+ ========
+
+ >>> from sympy.abc import s, p, a
+ >>> from sympy.physics.control.lti import TransferFunction
+ >>> tf1 = TransferFunction((p + 3)*(p - 1), (p - 1)*(p + 5), p)
+ >>> tf1.poles()
+ [-5, 1]
+ >>> tf2 = TransferFunction((1 - s)**2, (s**2 + 1)**2, s)
+ >>> tf2.poles()
+ [I, I, -I, -I]
+ >>> tf3 = TransferFunction(s**2, a*s + p, s)
+ >>> tf3.poles()
+ [-p/a]
+
+ """
+ return _roots(Poly(self.den, self.var), self.var)
+
+ def zeros(self):
+ """
+ Returns the zeros of a transfer function.
+
+ Examples
+ ========
+
+ >>> from sympy.abc import s, p, a
+ >>> from sympy.physics.control.lti import TransferFunction
+ >>> tf1 = TransferFunction((p + 3)*(p - 1), (p - 1)*(p + 5), p)
+ >>> tf1.zeros()
+ [-3, 1]
+ >>> tf2 = TransferFunction((1 - s)**2, (s**2 + 1)**2, s)
+ >>> tf2.zeros()
+ [1, 1]
+ >>> tf3 = TransferFunction(s**2, a*s + p, s)
+ >>> tf3.zeros()
+ [0, 0]
+
+ """
+ return _roots(Poly(self.num, self.var), self.var)
+
+ def is_stable(self):
+ """
+ Returns True if the transfer function is asymptotically stable; else False.
+
+ This would not check the marginal or conditional stability of the system.
+
+ Examples
+ ========
+
+ >>> from sympy.abc import s, p, a
+ >>> from sympy.core.symbol import symbols
+ >>> q, r = symbols('q, r', negative=True)
+ >>> from sympy.physics.control.lti import TransferFunction
+ >>> tf1 = TransferFunction((1 - s)**2, (s + 1)**2, s)
+ >>> tf1.is_stable()
+ True
+ >>> tf2 = TransferFunction((1 - p)**2, (s**2 + 1)**2, s)
+ >>> tf2.is_stable()
+ False
+ >>> tf3 = TransferFunction(4, q*s - r, s)
+ >>> tf3.is_stable()
+ False
+
+ # not enough info about the symbols to determine stability
+ >>> tf4 = TransferFunction(p + 1, a*p - s**2, p)
+ >>> tf4.is_stable() is None
+ True
+
+ """
+ return fuzzy_and(pole.as_real_imag()[0].is_negative for pole in self.poles())
+
def __add__(self, other):
if isinstance(other, (TransferFunction, Series)):
if not self.var == other.var:
| diff --git a/sympy/physics/control/tests/test_lti.py b/sympy/physics/control/tests/test_lti.py
index 310231955f1a..209c79ddbe53 100644
--- a/sympy/physics/control/tests/test_lti.py
+++ b/sympy/physics/control/tests/test_lti.py
@@ -1,8 +1,9 @@
-from sympy import symbols, Matrix, factor, Function, simplify, exp, pi
+from sympy import symbols, Matrix, factor, Function, simplify, exp, pi, oo, I, \
+ Rational, sqrt, CRootOf
from sympy.physics.control.lti import TransferFunction, Series, Parallel, Feedback
from sympy.testing.pytest import raises
-a, b, s, g, d, p, k, a0, a1, a2, b0, b1, b2 = symbols('a, b, s, g, d, p, k, a0:3, b0:3')
+a, x, b, s, g, d, p, k, a0, a1, a2, b0, b1, b2 = symbols('a, x, b, s, g, d, p, k, a0:3, b0:3')
def test_TransferFunction_construction():
@@ -163,6 +164,68 @@ def test_TransferFunction_functions():
assert SP4.subs({a0: -1, a1: -7}).evalf() == expect4
assert expect4_.evalf() == expect4
+ # Low-frequency (or DC) gain.
+ assert tf0.dc_gain() == 1
+ assert tf1.dc_gain() == Rational(3, 5)
+ assert SP2.dc_gain() == 0
+ assert expect4.dc_gain() == -1
+ assert expect2_.dc_gain() == 0
+ assert TransferFunction(1, s, s).dc_gain() == oo
+
+ # Poles of a transfer function.
+ tf_ = TransferFunction(x**3 - k, k, x)
+ _tf = TransferFunction(k, x**4 - k, x)
+ TF_ = TransferFunction(x**2, x**10 + x + x**2, x)
+ _TF = TransferFunction(x**10 + x + x**2, x**2, x)
+ assert G1.poles() == [I, I, -I, -I]
+ assert G2.poles() == []
+ assert tf1.poles() == [-5, 1]
+ assert expect4_.poles() == [s]
+ assert SP4.poles() == [-a0*s]
+ assert expect3.poles() == [-0.25*p]
+ assert str(expect2.poles()) == str([0.729001428685125, -0.564500714342563 - 0.710198984796332*I, -0.564500714342563 + 0.710198984796332*I])
+ assert str(expect1.poles()) == str([-0.4 - 0.66332495807108*I, -0.4 + 0.66332495807108*I])
+ assert _tf.poles() == [k**(Rational(1, 4)), -k**(Rational(1, 4)), I*k**(Rational(1, 4)), -I*k**(Rational(1, 4))]
+ assert TF_.poles() == [CRootOf(x**9 + x + 1, 0), 0, CRootOf(x**9 + x + 1, 1), CRootOf(x**9 + x + 1, 2),
+ CRootOf(x**9 + x + 1, 3), CRootOf(x**9 + x + 1, 4), CRootOf(x**9 + x + 1, 5), CRootOf(x**9 + x + 1, 6),
+ CRootOf(x**9 + x + 1, 7), CRootOf(x**9 + x + 1, 8)]
+ raises(NotImplementedError, lambda: TransferFunction(x**2, a0*x**10 + x + x**2, x).poles())
+
+ # Stability of a transfer function.
+ q, r = symbols('q, r', negative=True)
+ t = symbols('t', positive=True)
+ TF_ = TransferFunction(s**2 + a0 - a1*p, q*s - r, s)
+ stable_tf = TransferFunction(s**2 + a0 - a1*p, q*s - 1, s)
+ stable_tf_ = TransferFunction(s**2 + a0 - a1*p, q*s - t, s)
+
+ assert G1.is_stable() is False
+ assert G2.is_stable() is True
+ assert tf1.is_stable() is False # as one pole is +ve, and the other is -ve.
+ assert expect2.is_stable() is False
+ assert expect1.is_stable() is True
+ assert stable_tf.is_stable() is True
+ assert stable_tf_.is_stable() is True
+ assert TF_.is_stable() is False
+ assert expect4_.is_stable() is None # no assumption provided for the only pole 's'.
+ assert SP4.is_stable() is None
+
+ # Zeros of a transfer function.
+ assert G1.zeros() == [1, 1]
+ assert G2.zeros() == []
+ assert tf1.zeros() == [-3, 1]
+ assert expect4_.zeros() == [7**(Rational(2, 3))*(-s)**(Rational(1, 3))/7, -7**(Rational(2, 3))*(-s)**(Rational(1, 3))/14 -
+ sqrt(3)*7**(Rational(2, 3))*I*(-s)**(Rational(1, 3))/14, -7**(Rational(2, 3))*(-s)**(Rational(1, 3))/14 + sqrt(3)*7**(Rational(2, 3))*I*(-s)**(Rational(1, 3))/14]
+ assert SP4.zeros() == [(s/a1)**(Rational(1, 3)), -(s/a1)**(Rational(1, 3))/2 - sqrt(3)*I*(s/a1)**(Rational(1, 3))/2,
+ -(s/a1)**(Rational(1, 3))/2 + sqrt(3)*I*(s/a1)**(Rational(1, 3))/2]
+ assert str(expect3.zeros()) == str([0.125 - 1.11102430216445*sqrt(-0.405063291139241*p**3 - 1.0),
+ 1.11102430216445*sqrt(-0.405063291139241*p**3 - 1.0) + 0.125])
+ assert tf_.zeros() == [k**(Rational(1, 3)), -k**(Rational(1, 3))/2 - sqrt(3)*I*k**(Rational(1, 3))/2,
+ -k**(Rational(1, 3))/2 + sqrt(3)*I*k**(Rational(1, 3))/2]
+ assert _TF.zeros() == [CRootOf(x**9 + x + 1, 0), 0, CRootOf(x**9 + x + 1, 1), CRootOf(x**9 + x + 1, 2),
+ CRootOf(x**9 + x + 1, 3), CRootOf(x**9 + x + 1, 4), CRootOf(x**9 + x + 1, 5), CRootOf(x**9 + x + 1, 6),
+ CRootOf(x**9 + x + 1, 7), CRootOf(x**9 + x + 1, 8)]
+ raises(NotImplementedError, lambda: TransferFunction(a0*x**10 + x + x**2, x**2, x).zeros())
+
# negation of TF.
tf2 = TransferFunction(s + 3, s**2 - s**3 + 9, s)
tf3 = TransferFunction(-3*p + 3, 1 - p, p)
| [
{
"components": [
{
"doc": "like roots, but works on higher-order polynomials. ",
"lines": [
12,
18
],
"name": "_roots",
"signature": "def _roots(poly, var):",
"type": "function"
},
{
"doc": "Computes the gain of the r... | [
"test_TransferFunction_functions"
] | [
"test_TransferFunction_construction",
"test_TransferFunction_addition_and_subtraction",
"test_TransferFunction_multiplication_and_division",
"test_TransferFunction_is_proper",
"test_TransferFunction_is_strictly_proper",
"test_TransferFunction_is_biproper",
"test_Series_construction",
"test_Series_func... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
[GSoC] Add other useful methods in TransferFunction class
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
https://github.com/sympy/sympy/issues/19352#
#### Brief description of what is fixed or changed
#### Other comments
TODO (along with docs and tests):
- [x] Add zero-frequency (or DC) gain.
- [x] Add method for finding poles.
- [x] Add method for finding zeros.
- [x] Add method for stability.
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* physics.control
* Added `dc_gain`, `poles`, `zeros`, and `is_stable` in `TransferFunction` class.
<!-- END RELEASE NOTES -->
----------
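The intended semantics of the four methods on the TODO list can be illustrated numerically for the common second-order case. This toy class is not the sympy implementation (which works symbolically through `roots`/`rootof` and `limit` and handles arbitrary degree); the `TF2` name, coefficient-list interface, and quadratic-only restriction are illustrative.

```python
import cmath

def quadratic_roots(a, b, c):
    d = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + d) / (2 * a), (-b - d) / (2 * a)]

class TF2:
    def __init__(self, num, den):
        # num, den: coefficient lists, highest power first; den is quadratic.
        self.num, self.den = num, den

    def dc_gain(self):
        return self.num[-1] / self.den[-1]   # evaluate num/den at s = 0

    def poles(self):
        return quadratic_roots(*self.den)

    def zeros(self):
        # Only first-degree numerators in this toy version.
        if len(self.num) == 2:
            return [-self.num[1] / self.num[0]]
        return []

    def is_stable(self):
        # Asymptotically stable iff every pole has a negative real part.
        return all(p.real < 0 for p in self.poles())

# tf = (s + 3)/(s**2 + 3*s + 2): dc gain 1.5, zero at -3, poles at -1 and -2.
tf = TF2([1, 3], [1, 3, 2])
```

The symbolic version in the patch replaces `quadratic_roots` with `roots`/`rootof` so that higher degrees and symbolic coefficients work, and replaces the `s = 0` evaluation with a `limit` so that pure integrators correctly yield an infinite DC gain.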
@moorepants, so I was trying to define a method to get the poles of a transfer function... we could use the `roots` function of sympy directly to calculate the roots of the denominator but since we can also have multivariate polynomials, it gives a traceback.
``` python
>>> G1 = TransferFunction(s + 1, s**2 + s + 1, s)
>>> poles = roots(G1.den)
>>> for pole in poles:
... print(pole.evalf())
...
-0.5 - 0.866025403784439*I
-0.5 + 0.866025403784439*I
>>>
>>> G2 = TransferFunction(s - 1, s**2 - b, s)
>>> poles = roots(G2.den)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/namannimmo/sympy/sympy/polys/polyroots.py", line 910, in roots
raise PolynomialError('multivariate polynomials are not supported')
sympy.polys.polyerrors.PolynomialError: multivariate polynomials are not supported
```
Now, I think it is easier to write a caveat in the docstring telling users to substitute the other symbols with numbers to get the poles.
SymPy can calculate the roots symbolically up to 4th order. If 5th or higher order and any coefficient is symbolic you can raise a value error. If there are numeric coefficients then you can return numeric results of any order.
SymPy can *sometimes* compute roots of a polynomial with symbolic coefficients of higher than degree 4. The difficulty is just that it's not always possible to express such roots in radicals (using sympy or anything else):
```julia
In [18]: expr = x**5 - 2*x**4*y - 3*x**4*z + x**3*y**2 + 6*x**3*y*z + 3*x**3*z**2 - 3*x**2*y**2*z - 6*x**2*y*z**2 - x**2*z**3 + 3*x*y**2*z**2 +
...: 2*x*y*z**3 - y**2*z**3
In [19]: expr
Out[19]: x**5 - 2*x**4*y - 3*x**4*z + x**3*y**2 + 6*x**3*y*z + 3*x**3*z**2 - 3*x**2*y**2*z - 6*x**2*y*z**2 - x**2*z**3 + 3*x*y**2*z**2 + 2*x*y*z**3 - y**2*z**3
In [20]: roots(expr, x)
Out[20]: {y: 2, z: 3}
```
```
I'm fine if we only support symbolics for 4th order and lower. Realistically I think a control practitioner will only use up to 3rd order symbolically, if that. I assume detecting a 5th order that SymPy can solve would require some heuristics?
I would just ask to compute the roots and see what happens.
There are lots of cases of higher-order polynomials that can be solved. The limitation of Abel-Ruffini is that there exist roots of quintics that can not be expressed in radicals. Therefore there can be no general formula in radicals for the roots of a quintic. A fully symbolic quintic with coefficients `a, b, c, ...` therefore would not work but many examples of quintics or higher-order polynomials can have analytic roots with or without symbols e.g.:
```
In [3]: roots(x**10 - z**10, x)
Out[3]: {-z: 1, z: 1, z*(-sqrt(5)/4 - 1/4 - I*sqrt(5/8 - sqrt(5)/8)): 1, z*(-sqrt(5)/4 - 1/4 + I*sqrt(5/8 - sqrt(5)/8)): 1, z*(-sqrt(5)/4 + 1/4 - I*sqrt(sqrt(5)/8 + 5/8)): 1, z*(-sqrt(5)/4 + 1/4 + I*sqrt(sqrt(5)/8 + 5/8)): 1, z*(-1/4 + sqrt(5)/4 - I*sqrt(sqrt(5)/8 + 5/8)): 1, z*(-1/4 + sqrt(5)/4 + I*sqrt(sqrt(5)/8 + 5/8)): 1, z*(1/4 + sqrt(5)/4 - I*sqrt(5/8 - sqrt(5)/8)): 1, z*(1/4 + sqrt(5)/4 + I*sqrt(5/8 - sqrt(5)/8)): 1}
```
That's fine, it just didn't seem that `roots()` worked from @namannimmo10 example above. I guess roots will return some forms of errors if it can't compute and he should just catch those errors with a helpful error message for the user.
Oh, I see. The example where roots failed above is because you need to provide the argument specifying which symbol you want the roots for.
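Concretely, the earlier failing call works once the generator symbol is named; a minimal sketch using the same `s**2 - b` denominator (standalone symbols here rather than `G2.den`):

```python
from sympy import roots, sqrt, symbols

s, b = symbols("s b")

# roots(s**2 - b) alone raises PolynomialError ("multivariate polynomials
# are not supported"); naming the generator s makes b an ordinary coefficient.
pole_dict = roots(s**2 - b, s)

# The two poles are +/- sqrt(b), each with multiplicity 1.
assert set(pole_dict) == {sqrt(b), -sqrt(b)}
```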
Sometimes roots returns only a subset of the roots so the length/multiplicity needs to be checked.
You can see how this is handled in the ode module here:
https://github.com/sympy/sympy/blob/e5729343e2072940f43f6b9d758d65979fce6e83/sympy/solvers/ode/ode.py#L4746-L4751
The first part would succeed for any quartic or less and might succeed for higher-order polynomials. The last part using rootof will only succeed if there are no symbols in the coefficients but otherwise should be able to handle any univariate polynomial of any order.
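That fallback can be sketched as a small helper (the name `all_poles` and the error message are illustrative, not part of any sympy API):

```python
from sympy import Poly, roots, symbols
from sympy.polys.polyerrors import PolynomialError

x = symbols("x")


def all_poles(expr, var):
    """Radical roots where possible, falling back to CRootOf."""
    p = Poly(expr, var)
    found = roots(p, multiple=True)
    if len(found) < p.degree():
        # roots() returned only a subset (or none) of the roots.
        if p.free_symbols - {var}:
            raise PolynomialError(
                "could not find all roots of %s symbolically" % expr)
        # Numeric coefficients: CRootOf handles any univariate polynomial.
        found = p.all_roots()
    return found


assert sorted(all_poles(x**2 - 1, x)) == [-1, 1]
assert len(all_poles(x**5 - x + 1, x)) == 5  # five CRootOf roots
```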
> Oh, I see. The example where roots failed above is because you need to provide the argument specifying which symbol you want the roots for.
Ninja'd.
Also, I think we can raise an error whenever we get no roots of a polynomial, that is, an empty dict received.
> Also, I think we can raise an error whenever we get no roots of a polynomial, that is, an empty dict received.
Note carefully what the code from ODE module does. It checks the number of roots returned because `roots` can also return a subset of the roots in some cases:
```
In [7]: roots(x**10+x+x**2, x)
Out[7]: {0: 1}
In [8]: roots(x**10+x+x**2, x, multiple=True)
Out[8]: [0]
```
This can happen any time you have a polynomial that factorises into factors some of which are solvable:
```julia
In [11]: p1 = x**5 - x + 1
In [12]: p2 = x**2 - 1
In [13]: roots(p1)
Out[13]: {}
In [14]: roots(p2)
Out[14]: {-1: 1, 1: 1}
In [15]: roots(expand(p1*p2))
Out[15]: {-1: 1, 1: 1}
```
Symbolically you can compute the eigenvalues of the companion matrix (assuming it is supported) which is actually how the poles of TF are computed numerically. It's going to be terribly ugly of course but at least tractable.
@ilayn, thanks!! I see it works.
``` julia
>>> poly = s**2 - b
>>> Poly(poly, s)
Poly(s**2 - b, s, domain='ZZ[b]')
>>> Matrix.companion(_)
Matrix([
[0, b],
[1, 0]])
>>> m = _
>>> m.eigenvals()
{-sqrt(b): 1, sqrt(b): 1}
>>>
>>>
>>> poly = x**10 - z**10
>>> p = Poly(poly, x)
>>> M = Matrix.companion(p)
>>> M
Matrix([
[0, 0, 0, 0, 0, 0, 0, 0, 0, z**10],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0]])
>>> M.eigenvals()
{-z: 1, z: 1, z*(-sqrt(5)/4 - 1/4 - I*sqrt(5/8 - sqrt(5)/8)): 1, z*(-sqrt(5)/4 - 1/4 + I*sqrt(5/8 - sqrt(5)/8)): 1, z*(-sqrt(5)/4 + 1/4 - I*sqrt(sqrt(5)/8 + 5/8)): 1, z*(-sqrt(5)/4 + 1/4 + I*sqrt(sqrt(5)/8 + 5/8)): 1, z*(-1/4 + sqrt(5)/4 - I*sqrt(sqrt(5)/8 + 5/8)): 1, z*(-1/4 + sqrt(5)/4 + I*sqrt(sqrt(5)/8 + 5/8)): 1, z*(1/4 + sqrt(5)/4 - I*sqrt(5/8 - sqrt(5)/8)): 1, z*(1/4 + sqrt(5)/4 + I*sqrt(5/8 - sqrt(5)/8)): 1}
```
Although in some cases, we do get `MatrixError` when eigenvalues cannot be computed, we can always catch those with a helpful error message for the users.
``` julia
>>> p1 = x**5 - x + 1
>>> p = Poly(p1, x)
>>> M = Matrix.companion(p)
>>> M
Matrix([
[0, 0, 0, 0, -1],
[1, 0, 0, 0, 1],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 1, 0]])
>>> M.eigenvals()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/namannimmo/sympy/sympy/matrices/matrices.py", line 367, in eigenvals
return _eigenvals(self, error_when_incomplete=error_when_incomplete, **flags)
File "/home/namannimmo/sympy/sympy/matrices/eigen.py", line 177, in _eigenvals
**flags)
File "/home/namannimmo/sympy/sympy/matrices/eigen.py", line 227, in _eigenvals_dict
.format(block))
sympy.matrices.common.MatrixError: Could not compute eigenvalues...
```
And I see it is much more tractable.
Your last example seems to come out wrong though (as a companion matrix eigvals are probably same).
```python
[[ 0, 0, 0, 1, -1],
[ 1, 0, 0, 0, 0],
[ 0, 1, 0, 0, 0],
[ 0, 0, 1, 0, 0],
[ 0, 0, 0, 1, 0]]
```
should be the result.
There are different definitions of the companion matrix. The example there matches the one at wikipedia:
https://en.wikipedia.org/wiki/Companion_matrix
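A quick way to confirm the convention is to check that the characteristic polynomial of the companion matrix round-trips to the original polynomial (a sketch):

```python
from sympy import Matrix, Poly, symbols

x = symbols("x")
p = Poly(x**5 - x + 1, x)
C = Matrix.companion(p)  # requires a monic Poly

# For the Wikipedia-convention companion matrix, det(x*I - C) is p itself.
assert C.charpoly(x).as_expr() == x**5 - x + 1
```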
Ah sorry for the noise, from 10x10 example I thought it's the matlab version but that's just the spacing illusion.
A transfer function will be stable if the real parts of all the poles are less than 0. But how do I check this for symbolic values?
``` julia
>>> tf1 = TransferFunction((p + 3)*(p - 1), (p - 1)*(p + 5), p)
>>> tf1.pole()
[1, -5]
>>> all(0 > p for p in tf1.pole()) # checks for stability!!
False
>>>
>>> tf2 = TransferFunction(7*p**3 + s, p - s, p)
>>> tf2.pole()
[s]
```
This is where sympy assumptions come in. If the variables have assumptions then combinations of them may be negative/positive, etc. You'll need to learn about assumptions in SymPy.
In some cases it will not be possible to say whether or not the real part is positive or negative (or zero) so you should consider the indeterminate case when designing any API around this.
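For example, whether `re(pole).is_negative` resolves depends entirely on the declared assumptions (three-valued logic: `True`, `False`, or `None` for indeterminate):

```python
from sympy import re, symbols

q = symbols("q", negative=True)  # assumed real and strictly negative
a = symbols("a")                 # no sign information at all

assert re(q).is_negative is True   # a pole at q is provably stable
assert re(a).is_negative is None   # indeterminate: stability unknown
```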
Thanks for your suggestions!
@moorepants, Can we merge this?
I think we can merge this PR! Documentation for the control package will be added in Transfer function matrix PR.
If there are no further comments to address, can anyone merge this? I want to include this as a merged PR in my GSoC final report.
We can merge this within 24 hours if no further comments are to be addressed. Thanks for the suggestions!
You have responded to all the comments and all the tests are passing. Except for the pypy tests which have some build failures, I'll restart the build.
So, unless anyone else has any more comments, please squash your commits @namannimmo10
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/physics/control/lti.py]
(definition of _roots:)
def _roots(poly, var):
"""like roots, but works on higher-order polynomials. """
(definition of TransferFunction.dc_gain:)
def dc_gain(self):
"""Computes the gain of the response as the frequency approaches zero.
The DC gain is infinite for systems with pure integrators.
Examples
========
>>> from sympy.abc import s, p, a, b
>>> from sympy.physics.control.lti import TransferFunction
>>> tf1 = TransferFunction(s + 3, s**2 - 9, s)
>>> tf1.dc_gain()
-1/3
>>> tf2 = TransferFunction(p**2, p - 3 + p**3, p)
>>> tf2.dc_gain()
0
>>> tf3 = TransferFunction(a*p**2 - b, s + b, s)
>>> tf3.dc_gain()
(a*p**2 - b)/b
>>> tf4 = TransferFunction(1, s, s)
>>> tf4.dc_gain()
oo"""
(definition of TransferFunction.poles:)
def poles(self):
"""Returns the poles of a transfer function.
Examples
========
>>> from sympy.abc import s, p, a
>>> from sympy.physics.control.lti import TransferFunction
>>> tf1 = TransferFunction((p + 3)*(p - 1), (p - 1)*(p + 5), p)
>>> tf1.poles()
[-5, 1]
>>> tf2 = TransferFunction((1 - s)**2, (s**2 + 1)**2, s)
>>> tf2.poles()
[I, I, -I, -I]
>>> tf3 = TransferFunction(s**2, a*s + p, s)
>>> tf3.poles()
[-p/a]"""
(definition of TransferFunction.zeros:)
def zeros(self):
"""Returns the zeros of a transfer function.
Examples
========
>>> from sympy.abc import s, p, a
>>> from sympy.physics.control.lti import TransferFunction
>>> tf1 = TransferFunction((p + 3)*(p - 1), (p - 1)*(p + 5), p)
>>> tf1.zeros()
[-3, 1]
>>> tf2 = TransferFunction((1 - s)**2, (s**2 + 1)**2, s)
>>> tf2.zeros()
[1, 1]
>>> tf3 = TransferFunction(s**2, a*s + p, s)
>>> tf3.zeros()
[0, 0]"""
(definition of TransferFunction.is_stable:)
def is_stable(self):
"""Returns True if the transfer function is asymptotically stable; else False.
This would not check the marginal or conditional stability of the system.
Examples
========
>>> from sympy.abc import s, p, a
>>> from sympy.core.symbol import symbols
>>> q, r = symbols('q, r', negative=True)
>>> from sympy.physics.control.lti import TransferFunction
>>> tf1 = TransferFunction((1 - s)**2, (s + 1)**2, s)
>>> tf1.is_stable()
True
>>> tf2 = TransferFunction((1 - p)**2, (s**2 + 1)**2, s)
>>> tf2.is_stable()
False
>>> tf3 = TransferFunction(4, q*s - r, s)
>>> tf3.is_stable()
False
# not enough info about the symbols to determine stability
>>> tf4 = TransferFunction(p + 1, a*p - s**2, p)
>>> tf4.is_stable() is None
True"""
[end of new definitions in sympy/physics/control/lti.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1b4529a95ef641c2fc15889091b281644069d20e | ||
pvlib__pvlib-python-1017 | 1,017 | pvlib/pvlib-python | 0.7 | 49da0318256d8b46f90d4b29a7023de680f8410b | 2020-08-04T03:27:48Z | diff --git a/docs/examples/plot_passias_diffuse_shading.py b/docs/examples/plot_passias_diffuse_shading.py
new file mode 100644
index 0000000000..989e977fdb
--- /dev/null
+++ b/docs/examples/plot_passias_diffuse_shading.py
@@ -0,0 +1,84 @@
+"""
+Diffuse Self-Shading
+====================
+
+Modeling the reduction in diffuse irradiance caused by row-to-row diffuse
+shading.
+"""
+
+# %%
+# The term "self-shading" usually refers to adjacent rows blocking direct
+# irradiance and casting shadows on each other. However, the concept also
+# applies to diffuse irradiance because rows block a portion of the sky
+# dome even when the sun is high in the sky. The irradiance loss fraction
+# depends on how tightly the rows are packed and where on the module the
+# loss is evaluated -- a point near the top of edge of a module will see
+# more of the sky than a point near the bottom edge.
+#
+# This example uses the approach presented by Passias and Källbäck in [1]_
+# and recreates two figures from that paper using
+# :py:func:`pvlib.shading.masking_angle_passias` and
+# :py:func:`pvlib.shading.sky_diffuse_passias`.
+#
+# References
+# ----------
+# .. [1] D. Passias and B. Källbäck, "Shading effects in rows of solar cell
+# panels", Solar Cells, Volume 11, Pages 281-291. 1984.
+# DOI: 10.1016/0379-6787(84)90017-6
+
+from pvlib import shading, irradiance
+import matplotlib.pyplot as plt
+import numpy as np
+
+# %%
+# First we'll recreate Figure 4, showing how the average masking angle varies
+# with array tilt and array packing. The masking angle of a given point on a
+# module is the angle from horizontal to the next row's top edge and represents
+# the portion of the sky dome blocked by the next row. Because it changes
+# from the bottom to the top of a module, the average across the module is
+# calculated. In [1]_, ``k`` refers to the ratio of row pitch to row slant
+# height (i.e. 1 / GCR).
+
+surface_tilt = np.arange(0, 90, 0.5)
+
+plt.figure()
+for k in [1, 1.5, 2, 2.5, 3, 4, 5, 7, 10]:
+ gcr = 1/k
+ psi = shading.masking_angle_passias(surface_tilt, gcr)
+ plt.plot(surface_tilt, psi, label='k={}'.format(k))
+
+plt.xlabel('Inclination angle [degrees]')
+plt.ylabel('Average masking angle [degrees]')
+plt.legend()
+plt.show()
+
+# %%
+# So as the array is packed tighter (decreasing ``k``), the average masking
+# angle increases.
+#
+# Next we'll recreate Figure 5. Note that the y-axis here is the ratio of
+# diffuse plane of array irradiance (after accounting for shading) to diffuse
+# horizontal irradiance. This means that the deviation from 100% is due to the
+# combination of self-shading and the fact that being at a tilt blocks off
+# the portion of the sky behind the row. The first effect is modeled with
+# :py:func:`pvlib.shading.sky_diffuse_passias` and the second with
+# :py:func:`pvlib.irradiance.isotropic`.
+
+plt.figure()
+for k in [1, 1.5, 2, 10]:
+ gcr = 1/k
+ psi = shading.masking_angle_passias(surface_tilt, gcr)
+ shading_loss = shading.sky_diffuse_passias(psi)
+ transposition_ratio = irradiance.isotropic(surface_tilt, dhi=1.0)
+ relative_diffuse = transposition_ratio * (1-shading_loss) * 100 # %
+ plt.plot(surface_tilt, relative_diffuse, label='k={}'.format(k))
+
+plt.xlabel('Inclination angle [degrees]')
+plt.ylabel('Relative diffuse irradiance [%]')
+plt.ylim(0, 105)
+plt.legend()
+plt.show()
+
+# %%
+# As ``k`` decreases, GCR increases, so self-shading loss increases and
+# collected diffuse irradiance decreases.
diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index dd7c618646..a7087edad0 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -354,6 +354,12 @@ Effects on PV System Output
soiling.hsu
soiling.kimber
+.. autosummary::
+ :toctree: generated/
+
+ shading.masking_angle
+ shading.masking_angle_passias
+ shading.sky_diffuse_passias
Tracking
diff --git a/docs/sphinx/source/whatsnew/v0.8.0.rst b/docs/sphinx/source/whatsnew/v0.8.0.rst
index 2aa1ebfb67..1fd41f232f 100644
--- a/docs/sphinx/source/whatsnew/v0.8.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.0.rst
@@ -38,6 +38,10 @@ Enhancements
* Add :py:func:`pvlib.iam.marion_diffuse` and
:py:func:`pvlib.iam.marion_integrate` to calculate IAM values for
diffuse irradiance. (:pull:`984`)
+* Add :py:func:`pvlib.shading.sky_diffuse_passias`,
+ :py:func:`pvlib.shading.masking_angle_passias`, and
+ :py:func:`pvlib.shading.masking_angle` to model diffuse shading loss.
+ (:pull:`1017`)
* Add :py:func:`pvlib.inverter.fit_sandia` that fits the Sandia inverter model
to a set of inverter efficiency curves. (:pull:`1011`)
@@ -75,6 +79,7 @@ Documentation
* Add a transposition gain example to the gallery. (:pull:`979`)
* Add a gallery example of calculating diffuse IAM using
:py:func:`pvlib.iam.marion_diffuse`. (:pull:`984`)
+* Add a gallery example of modeling diffuse shading loss. (:pull:`1017`)
* Add minigalleries to API reference pages. (:pull:`991`)
Requirements
diff --git a/pvlib/shading.py b/pvlib/shading.py
new file mode 100644
index 0000000000..9479eb1739
--- /dev/null
+++ b/pvlib/shading.py
@@ -0,0 +1,193 @@
+"""
+The ``shading`` module contains functions that model module shading and the
+associated effects on PV module output
+"""
+
+import numpy as np
+import pandas as pd
+from pvlib.tools import sind, cosd
+
+
+def masking_angle(surface_tilt, gcr, slant_height):
+ """
+ The elevation angle below which diffuse irradiance is blocked.
+
+ The ``height`` parameter determines how far up the module's surface to
+ evaluate the masking angle. The lower the point, the steeper the masking
+ angle [1]_. SAM uses a "worst-case" approach where the masking angle
+ is calculated for the bottom of the array (i.e. ``slant_height=0``) [2]_.
+
+ Parameters
+ ----------
+ surface_tilt : numeric
+ Panel tilt from horizontal [degrees].
+
+ gcr : float
+ The ground coverage ratio of the array [unitless].
+
+ slant_height : numeric
+ The distance up the module's slant height to evaluate the masking
+ angle, as a fraction [0-1] of the module slant height [unitless].
+
+ Returns
+ -------
+ mask_angle : numeric
+ Angle from horizontal where diffuse light is blocked by the
+ preceding row [degrees].
+
+ See Also
+ --------
+ masking_angle_passias
+ sky_diffuse_passias
+
+ References
+ ----------
+ .. [1] D. Passias and B. Källbäck, "Shading effects in rows of solar cell
+ panels", Solar Cells, Volume 11, Pages 281-291. 1984.
+ DOI: 10.1016/0379-6787(84)90017-6
+ .. [2] Gilman, P. et al., (2018). "SAM Photovoltaic Model Technical
+ Reference Update", NREL Technical Report NREL/TP-6A20-67399.
+ Available at https://www.nrel.gov/docs/fy18osti/67399.pdf
+ """
+ # The original equation (8 in [1]) requires pitch and collector width,
+ # but it's easy to non-dimensionalize it to make it a function of GCR
+ # by factoring out B from the argument to arctan.
+ numerator = (1 - slant_height) * sind(surface_tilt)
+ denominator = 1/gcr - (1 - slant_height) * cosd(surface_tilt)
+ phi = np.arctan(numerator / denominator)
+ return np.degrees(phi)
+
+
+def masking_angle_passias(surface_tilt, gcr):
+ r"""
+ The average masking angle over the slant height of a row.
+
+ The masking angle is the angle from horizontal where the sky dome is
+ blocked by the row in front. The masking angle is larger near the lower
+ edge of a row than near the upper edge. This function calculates the
+ average masking angle as described in [1]_.
+
+ Parameters
+ ----------
+ surface_tilt : numeric
+ Panel tilt from horizontal [degrees].
+
+ gcr : float
+ The ground coverage ratio of the array [unitless].
+
+ Returns
+ ----------
+ mask_angle : numeric
+ Average angle from horizontal where diffuse light is blocked by the
+ preceding row [degrees].
+
+ See Also
+ --------
+ masking_angle
+ sky_diffuse_passias
+
+ Notes
+ -----
+ The pvlib-python authors believe that Eqn. 9 in [1]_ is incorrect.
+ Here we use an independent equation. First, Eqn. 8 is non-dimensionalized
+ (recasting in terms of GCR):
+
+ .. math::
+
+ \psi(z') = \arctan \left [
+ \frac{(1 - z') \sin \beta}
+ {\mathrm{GCR}^{-1} + (z' - 1) \cos \beta}
+ \right ]
+
+ Where :math:`GCR = B/C` and :math:`z' = z/B`. The average masking angle
+ :math:`\overline{\psi} = \int_0^1 \psi(z') \mathrm{d}z'` is then
+ evaluated symbolically using Maxima (using :math:`X = 1/\mathrm{GCR}`):
+
+ .. code-block:: none
+
+ load(scifac) /* for the gcfac function */
+ assume(X>0, cos(beta)>0, cos(beta)-X<0); /* X is 1/GCR */
+ gcfac(integrate(atan((1-z)*sin(beta)/(X+(z-1)*cos(beta))), z, 0, 1))
+
+ This yields the equation implemented by this function:
+
+ .. math::
+
+ \overline{\psi} = \
+ &-\frac{X}{2} \sin\beta \log | 2 X \cos\beta - (X^2 + 1)| \\
+ &+ (X \cos\beta - 1) \arctan \frac{X \cos\beta - 1}{X \sin\beta} \\
+ &+ (1 - X \cos\beta) \arctan \frac{\cos\beta}{\sin\beta} \\
+ &+ X \log X \sin\beta
+
+ The pvlib-python authors have validated this equation against numerical
+ integration of :math:`\overline{\psi} = \int_0^1 \psi(z') \mathrm{d}z'`.
+
+ References
+ ----------
+ .. [1] D. Passias and B. Källbäck, "Shading effects in rows of solar cell
+ panels", Solar Cells, Volume 11, Pages 281-291. 1984.
+ DOI: 10.1016/0379-6787(84)90017-6
+ """
+ # wrap it in an array so that division by zero is handled well
+ beta = np.radians(np.array(surface_tilt))
+ sin_b = np.sin(beta)
+ cos_b = np.cos(beta)
+ X = 1/gcr
+
+ with np.errstate(divide='ignore', invalid='ignore'): # ignore beta=0
+ term1 = -X * sin_b * np.log(np.abs(2 * X * cos_b - (X**2 + 1))) / 2
+ term2 = (X * cos_b - 1) * np.arctan((X * cos_b - 1) / (X * sin_b))
+ term3 = (1 - X * cos_b) * np.arctan(cos_b / sin_b)
+ term4 = X * np.log(X) * sin_b
+
+ psi_avg = term1 + term2 + term3 + term4
+ # when beta=0, divide by zero makes psi_avg NaN. replace with 0:
+ psi_avg = np.where(np.isfinite(psi_avg), psi_avg, 0)
+
+ if isinstance(surface_tilt, pd.Series):
+ psi_avg = pd.Series(psi_avg, index=surface_tilt.index)
+
+ return np.degrees(psi_avg)
+
+
+def sky_diffuse_passias(masking_angle):
+ r"""
+ The diffuse irradiance loss caused by row-to-row sky diffuse shading.
+
+ Even when the sun is high in the sky, a row's view of the sky dome will
+ be partially blocked by the row in front. This causes a reduction in the
+ diffuse irradiance incident on the module. The reduction depends on the
+ masking angle, the elevation angle from a point on the shaded module to
+ the top of the shading row. In [1]_ the masking angle is calculated as
+ the average across the module height. SAM assumes the "worst-case" loss
+ where the masking angle is calculated for the bottom of the array [2]_.
+
+ This function, as in [1]_, makes the assumption that sky diffuse
+ irradiance is isotropic.
+
+ Parameters
+ ----------
+ masking_angle : numeric
+ The elevation angle below which diffuse irradiance is blocked
+ [degrees].
+
+ Returns
+ -------
+ derate : numeric
+ The fraction [0-1] of blocked sky diffuse irradiance.
+
+ See Also
+ --------
+ masking_angle
+ masking_angle_passias
+
+ References
+ ----------
+ .. [1] D. Passias and B. Källbäck, "Shading effects in rows of solar cell
+ panels", Solar Cells, Volume 11, Pages 281-291. 1984.
+ DOI: 10.1016/0379-6787(84)90017-6
+ .. [2] Gilman, P. et al., (2018). "SAM Photovoltaic Model Technical
+ Reference Update", NREL Technical Report NREL/TP-6A20-67399.
+ Available at https://www.nrel.gov/docs/fy18osti/67399.pdf
+ """
+ return 1 - cosd(masking_angle/2)**2
| diff --git a/pvlib/tests/test_shading.py b/pvlib/tests/test_shading.py
new file mode 100644
index 0000000000..8a9fd46a69
--- /dev/null
+++ b/pvlib/tests/test_shading.py
@@ -0,0 +1,71 @@
+import numpy as np
+import pandas as pd
+
+from pandas.testing import assert_series_equal
+import pytest
+
+from pvlib import shading
+
+
+@pytest.fixture
+def surface_tilt():
+ idx = pd.date_range('2019-01-01', freq='h', periods=3)
+ return pd.Series([0, 20, 90], index=idx)
+
+
+@pytest.fixture
+def masking_angle(surface_tilt):
+ # masking angles for the surface_tilt fixture,
+ # assuming GCR=0.5 and height=0.25
+ return pd.Series([0.0, 11.20223712, 20.55604522], index=surface_tilt.index)
+
+
+@pytest.fixture
+def average_masking_angle(surface_tilt):
+ # average masking angles for the surface_tilt fixture, assuming GCR=0.5
+ return pd.Series([0.0, 7.20980655, 13.779867461], index=surface_tilt.index)
+
+
+@pytest.fixture
+def shading_loss(surface_tilt):
+ # diffuse shading loss values for the average_masking_angle fixture
+ return pd.Series([0, 0.00395338, 0.01439098], index=surface_tilt.index)
+
+
+def test_masking_angle_series(surface_tilt, masking_angle):
+ # series inputs and outputs
+ masking_angle_actual = shading.masking_angle(surface_tilt, 0.5, 0.25)
+ assert_series_equal(masking_angle_actual, masking_angle)
+
+
+def test_masking_angle_scalar(surface_tilt, masking_angle):
+ # scalar inputs and outputs, including zero
+ for tilt, angle in zip(surface_tilt, masking_angle):
+ masking_angle_actual = shading.masking_angle(tilt, 0.5, 0.25)
+ assert np.isclose(masking_angle_actual, angle)
+
+
+def test_masking_angle_passias_series(surface_tilt, average_masking_angle):
+ # pandas series inputs and outputs
+ masking_angle_actual = shading.masking_angle_passias(surface_tilt, 0.5)
+ assert_series_equal(masking_angle_actual, average_masking_angle)
+
+
+def test_masking_angle_passias_scalar(surface_tilt, average_masking_angle):
+ # scalar inputs and outputs, including zero
+ for tilt, angle in zip(surface_tilt, average_masking_angle):
+ masking_angle_actual = shading.masking_angle_passias(tilt, 0.5)
+ assert np.isclose(masking_angle_actual, angle)
+
+
+def test_sky_diffuse_passias_series(average_masking_angle, shading_loss):
+ # pandas series inputs and outputs
+ actual_loss = shading.sky_diffuse_passias(average_masking_angle)
+ assert_series_equal(shading_loss, actual_loss)
+
+
+def test_sky_diffuse_passias_scalar(average_masking_angle, shading_loss):
+ # scalar inputs and outputs
+ for angle, loss in zip(average_masking_angle, shading_loss):
+ actual_loss = shading.sky_diffuse_passias(angle)
+ assert np.isclose(loss, actual_loss)
| diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index dd7c618646..a7087edad0 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -354,6 +354,12 @@ Effects on PV System Output
soiling.hsu
soiling.kimber
+.. autosummary::
+ :toctree: generated/
+
+ shading.masking_angle
+ shading.masking_angle_passias
+ shading.sky_diffuse_passias
Tracking
diff --git a/docs/sphinx/source/whatsnew/v0.8.0.rst b/docs/sphinx/source/whatsnew/v0.8.0.rst
index 2aa1ebfb67..1fd41f232f 100644
--- a/docs/sphinx/source/whatsnew/v0.8.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.0.rst
@@ -38,6 +38,10 @@ Enhancements
* Add :py:func:`pvlib.iam.marion_diffuse` and
:py:func:`pvlib.iam.marion_integrate` to calculate IAM values for
diffuse irradiance. (:pull:`984`)
+* Add :py:func:`pvlib.shading.sky_diffuse_passias`,
+ :py:func:`pvlib.shading.masking_angle_passias`, and
+ :py:func:`pvlib.shading.masking_angle` to model diffuse shading loss.
+ (:pull:`1017`)
* Add :py:func:`pvlib.inverter.fit_sandia` that fits the Sandia inverter model
to a set of inverter efficiency curves. (:pull:`1011`)
@@ -75,6 +79,7 @@ Documentation
* Add a transposition gain example to the gallery. (:pull:`979`)
* Add a gallery example of calculating diffuse IAM using
:py:func:`pvlib.iam.marion_diffuse`. (:pull:`984`)
+* Add a gallery example of modeling diffuse shading loss. (:pull:`1017`)
* Add minigalleries to API reference pages. (:pull:`991`)
Requirements
| [
{
"components": [
{
"doc": "The elevation angle below which diffuse irradiance is blocked.\n\nThe ``height`` parameter determines how far up the module's surface to\nevaluate the masking angle. The lower the point, the steeper the masking\nangle [1]_. SAM uses a \"worst-case\" approach where the... | [
"pvlib/tests/test_shading.py::test_masking_angle_series",
"pvlib/tests/test_shading.py::test_masking_angle_scalar",
"pvlib/tests/test_shading.py::test_masking_angle_passias_series",
"pvlib/tests/test_shading.py::test_masking_angle_passias_scalar",
"pvlib/tests/test_shading.py::test_sky_diffuse_passias_serie... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add diffuse self-shading functions
- ~Closes #xxxx~
- [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)
- [x] Tests added
- [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.
- [x] Adds description and name entries in the appropriate "what's new" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).
- [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.
- [x] Pull request is nearly complete and ready for detailed review.
- [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.
This implements a modified version of the diffuse self-shading model in Passias and Källbäck (1984) "SHADING EFFECTS IN ROWS OF SOLAR CELL PANELS", which I believe has an error in the equation for the average masking angle (Eq 9). Here is a gist comparing the paper's equation with the one I derived, along with a numerical integration for comparison: https://gist.github.com/kanderso-nrel/2c6c3a1853338cdef5b4bbc67092ccc8. Note that I implemented two versions of the paper's equation to account for an ambiguous minus sign.
If anyone has a Mathematica license and can produce a cleaner version of that equation, I'd be happy to replace it -- that quick and dirty Maxima script in the docstring is about as much as I can do with symbolic solvers.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in pvlib/shading.py]
(definition of masking_angle:)
def masking_angle(surface_tilt, gcr, slant_height):
"""The elevation angle below which diffuse irradiance is blocked.
The ``height`` parameter determines how far up the module's surface to
evaluate the masking angle. The lower the point, the steeper the masking
angle [1]_. SAM uses a "worst-case" approach where the masking angle
is calculated for the bottom of the array (i.e. ``slant_height=0``) [2]_.
Parameters
----------
surface_tilt : numeric
Panel tilt from horizontal [degrees].
gcr : float
The ground coverage ratio of the array [unitless].
slant_height : numeric
The distance up the module's slant height to evaluate the masking
angle, as a fraction [0-1] of the module slant height [unitless].
Returns
-------
mask_angle : numeric
Angle from horizontal where diffuse light is blocked by the
preceding row [degrees].
See Also
--------
masking_angle_passias
sky_diffuse_passias
References
----------
.. [1] D. Passias and B. Källbäck, "Shading effects in rows of solar cell
panels", Solar Cells, Volume 11, Pages 281-291. 1984.
DOI: 10.1016/0379-6787(84)90017-6
.. [2] Gilman, P. et al., (2018). "SAM Photovoltaic Model Technical
Reference Update", NREL Technical Report NREL/TP-6A20-67399.
Available at https://www.nrel.gov/docs/fy18osti/67399.pdf"""
(definition of masking_angle_passias:)
def masking_angle_passias(surface_tilt, gcr):
"""The average masking angle over the slant height of a row.
The masking angle is the angle from horizontal where the sky dome is
blocked by the row in front. The masking angle is larger near the lower
edge of a row than near the upper edge. This function calculates the
average masking angle as described in [1]_.
Parameters
----------
surface_tilt : numeric
Panel tilt from horizontal [degrees].
gcr : float
The ground coverage ratio of the array [unitless].
Returns
-------
mask_angle : numeric
Average angle from horizontal where diffuse light is blocked by the
preceding row [degrees].
See Also
--------
masking_angle
sky_diffuse_passias
Notes
-----
The pvlib-python authors believe that Eqn. 9 in [1]_ is incorrect.
Here we use an independent equation. First, Eqn. 8 is non-dimensionalized
(recasting in terms of GCR):
.. math::
\psi(z') = \arctan \left [
\frac{(1 - z') \sin \beta}
{\mathrm{GCR}^{-1} + (z' - 1) \cos \beta}
\right ]
Where :math:`GCR = B/C` and :math:`z' = z/B`. The average masking angle
:math:`\overline{\psi} = \int_0^1 \psi(z') \mathrm{d}z'` is then
evaluated symbolically using Maxima (using :math:`X = 1/\mathrm{GCR}`):
.. code-block:: none
load(scifac) /* for the gcfac function */
assume(X>0, cos(beta)>0, cos(beta)-X<0); /* X is 1/GCR */
gcfac(integrate(atan((1-z)*sin(beta)/(X+(z-1)*cos(beta))), z, 0, 1))
This yields the equation implemented by this function:
.. math::
\overline{\psi} = \
&-\frac{X}{2} \sin\beta \log | 2 X \cos\beta - (X^2 + 1)| \\
&+ (X \cos\beta - 1) \arctan \frac{X \cos\beta - 1}{X \sin\beta} \\
&+ (1 - X \cos\beta) \arctan \frac{\cos\beta}{\sin\beta} \\
&+ X \log X \sin\beta
The pvlib-python authors have validated this equation against numerical
integration of :math:`\overline{\psi} = \int_0^1 \psi(z') \mathrm{d}z'`.
References
----------
.. [1] D. Passias and B. Källbäck, "Shading effects in rows of solar cell
panels", Solar Cells, Volume 11, Pages 281-291. 1984.
DOI: 10.1016/0379-6787(84)90017-6"""
(definition of sky_diffuse_passias:)
def sky_diffuse_passias(masking_angle):
"""The diffuse irradiance loss caused by row-to-row sky diffuse shading.
Even when the sun is high in the sky, a row's view of the sky dome will
be partially blocked by the row in front. This causes a reduction in the
diffuse irradiance incident on the module. The reduction depends on the
masking angle, the elevation angle from a point on the shaded module to
the top of the shading row. In [1]_ the masking angle is calculated as
the average across the module height. SAM assumes the "worst-case" loss
where the masking angle is calculated for the bottom of the array [2]_.
This function, as in [1]_, makes the assumption that sky diffuse
irradiance is isotropic.
Parameters
----------
masking_angle : numeric
The elevation angle below which diffuse irradiance is blocked
[degrees].
Returns
-------
derate : numeric
The fraction [0-1] of blocked sky diffuse irradiance.
See Also
--------
masking_angle
masking_angle_passias
References
----------
.. [1] D. Passias and B. Källbäck, "Shading effects in rows of solar cell
panels", Solar Cells, Volume 11, Pages 281-291. 1984.
DOI: 10.1016/0379-6787(84)90017-6
.. [2] Gilman, P. et al., (2018). "SAM Photovoltaic Model Technical
Reference Update", NREL Technical Report NREL/TP-6A20-67399.
Available at https://www.nrel.gov/docs/fy18osti/67399.pdf"""
[end of new definitions in pvlib/shading.py]
</definitions>
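The three definitions above translate almost directly into code. Below is a minimal, illustrative sketch, not pvlib's actual implementation: the `_sketch` names are invented here, the per-point angle follows the non-dimensionalized Eqn. 8 quoted in the masking_angle_passias Notes, the average uses the closed-form integral quoted there, and the diffuse derate assumes the standard isotropic view-factor result (1 - cos psi) / 2 (algebraically the same as 1 - cos^2(psi/2)).

```python
import numpy as np

def masking_angle_sketch(surface_tilt, gcr, slant_height):
    # psi(z'): non-dimensionalized Eqn. 8 from the masking_angle_passias Notes,
    # evaluated at a single point z' = slant_height up the module surface
    beta = np.radians(surface_tilt)
    opposite = (1 - slant_height) * np.sin(beta)
    adjacent = 1.0 / gcr - (1 - slant_height) * np.cos(beta)
    return np.degrees(np.arctan2(opposite, adjacent))

def masking_angle_passias_sketch(surface_tilt, gcr):
    # closed-form average masking angle from the Notes, with X = 1/GCR;
    # requires 0 < surface_tilt since sin(beta) appears in a denominator
    beta = np.radians(surface_tilt)
    x = 1.0 / gcr
    t1 = -x * np.sin(beta) * np.log(np.abs(2 * x * np.cos(beta) - (x**2 + 1))) / 2
    t2 = (x * np.cos(beta) - 1) * np.arctan((x * np.cos(beta) - 1) / (x * np.sin(beta)))
    t3 = (1 - x * np.cos(beta)) * np.arctan(np.cos(beta) / np.sin(beta))
    t4 = x * np.log(x) * np.sin(beta)
    return np.degrees(t1 + t2 + t3 + t4)

def sky_diffuse_passias_sketch(masking_angle):
    # assumed isotropic view-factor derate: fraction of sky diffuse blocked
    return (1 - np.cos(np.radians(masking_angle))) / 2
```

As a sanity check, the closed-form average agrees with a numerical average of the per-point angle over z' in [0, 1], and the angle vanishes at the top of the module (slant_height = 1), growing toward the bottom edge — the "worst-case" SAM convention mentioned in the docstrings.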
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | aa1635bcb40dc83f82e9fd72158670c235bfe99b | |
sympy__sympy-19882 | 19,882 | sympy/sympy | 1.7 | 9547fa20eae7f8c9bf475b393e9191b36a9b748c | 2020-08-02T23:52:40Z | diff --git a/sympy/polys/domainmatrix.py b/sympy/polys/domainmatrix.py
new file mode 100644
index 000000000000..c06f25572785
--- /dev/null
+++ b/sympy/polys/domainmatrix.py
@@ -0,0 +1,653 @@
+from __future__ import print_function
+
+from operator import mul
+
+from sympy.core.sympify import _sympify
+from sympy.matrices.common import (NonInvertibleMatrixError,
+ NonSquareMatrixError, ShapeError)
+from sympy.polys.constructor import construct_domain
+
+
+class DDMError(Exception):
+ """Base class for errors raised by DDM"""
+ pass
+
+
+class DDMBadInputError(DDMError):
+ """list of lists is inconsistent with shape"""
+ pass
+
+
+class DDMDomainError(DDMError):
+ """domains do not match"""
+ pass
+
+
+class DDMShapeError(DDMError):
+ """shapes are inconsistent"""
+ pass
+
+
+class DDM(list):
+ """Dense matrix based on polys domain elements
+
+ This is a list subclass and is a wrapper for a list of lists that supports
+ basic matrix arithmetic +, -, *, **.
+ """
+ def __init__(self, rowslist, shape, domain):
+ super().__init__(rowslist)
+ self.shape = self.rows, self.cols = m, n = shape
+ self.domain = domain
+
+ if not (len(self) == m and all(len(row) == n for row in self)):
+ raise DDMBadInputError("Inconsistent row-list/shape")
+
+ def __str__(self):
+ cls = type(self).__name__
+ rows = list.__str__(self)
+ return '%s(%s, %s, %s)' % (cls, rows, self.shape, self.domain)
+
+ def __eq__(self, other):
+ if not isinstance(other, DDM):
+ return False
+ return (super().__eq__(other) and self.domain == other.domain)
+
+ def __ne__(self, other):
+ return not self.__eq__(other)
+
+ @classmethod
+ def zeros(cls, shape, domain):
+ z = domain.zero
+ m, n = shape
+ rowslist = ([z] * n for _ in range(m))
+ return DDM(rowslist, shape, domain)
+
+ @classmethod
+ def eye(cls, size, domain):
+ one = domain.one
+ ddm = cls.zeros((size, size), domain)
+ for i in range(size):
+ ddm[i][i] = one
+ return ddm
+
+ def copy(self):
+ copyrows = (row[:] for row in self)
+ return DDM(copyrows, self.shape, self.domain)
+
+ def __add__(a, b):
+ if not isinstance(b, DDM):
+ return NotImplemented
+ return a.add(b)
+
+ def __sub__(a, b):
+ if not isinstance(b, DDM):
+ return NotImplemented
+ return a.sub(b)
+
+ def __neg__(a):
+ return a.neg()
+
+ def __mul__(a, b):
+ if b in a.domain:
+ return a.mul(b)
+ else:
+ return NotImplemented
+
+ def __matmul__(a, b):
+ if isinstance(b, DDM):
+ return a.matmul(b)
+ else:
+ return NotImplemented
+
+ @classmethod
+ def _check(cls, a, op, b, ashape, bshape):
+ if a.domain != b.domain:
+ msg = "Domain mismatch: %s %s %s" % (a.domain, op, b.domain)
+ raise DDMDomainError(msg)
+ if ashape != bshape:
+ msg = "Shape mismatch: %s %s %s" % (a.shape, op, b.shape)
+ raise DDMShapeError(msg)
+
+ def add(a, b):
+ """a + b"""
+ a._check(a, '+', b, a.shape, b.shape)
+ c = a.copy()
+ ddm_iadd(c, b)
+ return c
+
+ def sub(a, b):
+ """a - b"""
+ a._check(a, '-', b, a.shape, b.shape)
+ c = a.copy()
+ ddm_isub(c, b)
+ return c
+
+ def neg(a):
+ """-a"""
+ b = a.copy()
+ ddm_ineg(b)
+ return b
+
+ def matmul(a, b):
+ """a @ b (matrix product)"""
+ m, o = a.shape
+ o2, n = b.shape
+ a._check(a, '*', b, o, o2)
+ c = a.zeros((m, n), a.domain)
+ ddm_imatmul(c, a, b)
+ return c
+
+ def rref(a):
+ """Reduced-row echelon form of a and list of pivots"""
+ b = a.copy()
+ pivots = ddm_irref(b)
+ return b, pivots
+
+ def det(a):
+ """Determinant of a"""
+ m, n = a.shape
+ if m != n:
+ raise DDMShapeError("Determinant of non-square matrix")
+ b = a.copy()
+ K = b.domain
+ deta = ddm_idet(b, K)
+ return deta
+
+ def inv(a):
+ """Inverse of a"""
+ m, n = a.shape
+ if m != n:
+            raise DDMShapeError("Inverse of non-square matrix")
+ ainv = a.copy()
+ K = a.domain
+ ddm_iinv(ainv, a, K)
+ return ainv
+
+ def lu(a):
+ """L, U decomposition of a"""
+ m, n = a.shape
+ K = a.domain
+
+ U = a.copy()
+ L = a.eye(m, K)
+ swaps = ddm_ilu_split(L, U, K)
+
+ return L, U, swaps
+
+ def lu_solve(a, b):
+ """x where a*x = b"""
+ m, n = a.shape
+ m2, o = b.shape
+ a._check(a, 'lu_solve', b, m, m2)
+
+ L, U, swaps = a.lu()
+ x = a.zeros((n, o), a.domain)
+ ddm_ilu_solve(x, L, U, swaps, b)
+ return x
+
+ def charpoly(a):
+ """Coefficients of characteristic polynomial of a"""
+ K = a.domain
+ m, n = a.shape
+ if m != n:
+ raise DDMShapeError("Charpoly of non-square matrix")
+ vec = ddm_berk(a, K)
+ coeffs = [vec[i][0] for i in range(n+1)]
+ return coeffs
+
+
+def ddm_iadd(a, b):
+ """a += b"""
+ for ai, bi in zip(a, b):
+ for j, bij in enumerate(bi):
+ ai[j] += bij
+
+
+def ddm_isub(a, b):
+ """a -= b"""
+ for ai, bi in zip(a, b):
+ for j, bij in enumerate(bi):
+ ai[j] -= bij
+
+
+def ddm_ineg(a):
+ """a <-- -a"""
+ for ai in a:
+ for j, aij in enumerate(ai):
+ ai[j] = -aij
+
+
+def ddm_imatmul(a, b, c):
+ """a += b @ c"""
+ cT = list(zip(*c))
+
+ for bi, ai in zip(b, a):
+ for j, cTj in enumerate(cT):
+ ai[j] = sum(map(mul, bi, cTj), ai[j])
+
+
+def ddm_irref(a):
+ """a <-- rref(a)"""
+ # a is (m x n)
+ m = len(a)
+ if not m:
+ return []
+ n = len(a[0])
+
+ i = 0
+ pivots = []
+
+ for j in range(n):
+ # pivot
+ aij = a[i][j]
+
+ # zero-pivot
+ if not aij:
+ for ip in range(i+1, m):
+ aij = a[ip][j]
+ # row-swap
+ if aij:
+ a[i], a[ip] = a[ip], a[i]
+ break
+ else:
+ # next column
+ continue
+
+ # normalise row
+ ai = a[i]
+ for l in range(j, n):
+ ai[l] /= aij # ai[j] = one
+
+ # eliminate above and below to the right
+ for k, ak in enumerate(a):
+ if k == i or not ak[j]:
+ continue
+ akj = ak[j]
+ ak[j] -= akj # ak[j] = zero
+ for l in range(j+1, n):
+ ak[l] -= akj * ai[l]
+
+ # next row
+ pivots.append(j)
+ i += 1
+
+ # no more rows?
+ if i >= m:
+ break
+
+ return pivots
+
+
+def ddm_idet(a, K):
+ """a <-- echelon(a); return det"""
+ # Fraction-free Gaussian elimination
+ # https://www.math.usm.edu/perry/Research/Thesis_DRL.pdf
+
+ # a is (m x n)
+ m = len(a)
+ if not m:
+ return K.one
+ n = len(a[0])
+
+ is_field = K.is_Field
+ # uf keeps track of the effect of row swaps and multiplies
+ uf = K.one
+ for j in range(n-1):
+ # if zero on the diagonal need to swap
+ if not a[j][j]:
+ for l in range(j+1, n):
+ if a[l][j]:
+ a[j], a[l] = a[l], a[j]
+ uf = -uf
+ break
+ else:
+ # unable to swap: det = 0
+ return K.zero
+ for i in range(j+1, n):
+ if a[i][j]:
+ if not is_field:
+ d = K.gcd(a[j][j], a[i][j])
+ b = a[j][j] // d
+ c = a[i][j] // d
+ else:
+ b = a[j][j]
+ c = a[i][j]
+ # account for multiplying row i by b
+ uf = b * uf
+ for k in range(j+1, n):
+ a[i][k] = b*a[i][k] - c*a[j][k]
+
+ # triangular det is product of diagonal
+ prod = K.one
+ for i in range(n):
+ prod = prod * a[i][i]
+ # incorporate swaps and multiplies
+ if not is_field:
+ D = prod // uf
+ else:
+ D = prod / uf
+ return D
+
+
+def ddm_iinv(ainv, a, K):
+ if not K.is_Field:
+ raise ValueError('Not a field')
+
+ # a is (m x n)
+ m = len(a)
+ if not m:
+ return
+ n = len(a[0])
+ if m != n:
+ raise NonSquareMatrixError
+
+ eye = [[K.one if i==j else K.zero for j in range(n)] for i in range(n)]
+ Aaug = [row + eyerow for row, eyerow in zip(a, eye)]
+ pivots = ddm_irref(Aaug)
+ if pivots != list(range(n)):
+ raise NonInvertibleMatrixError('Matrix det == 0; not invertible.')
+ ainv[:] = [row[n:] for row in Aaug]
+
+
+def ddm_ilu_split(L, U, K):
+ """L, U <-- LU(U)"""
+ m = len(U)
+ if not m:
+ return []
+ n = len(U[0])
+
+ swaps = ddm_ilu(U)
+
+ zeros = [K.zero] * min(m, n)
+ for i in range(1, m):
+ j = min(i, n)
+ L[i][:j] = U[i][:j]
+ U[i][:j] = zeros[:j]
+
+ return swaps
+
+
+def ddm_ilu(a):
+ """a <-- LU(a)"""
+ m = len(a)
+ if not m:
+ return []
+ n = len(a[0])
+
+ swaps = []
+
+ for i in range(min(m, n)):
+ if not a[i][i]:
+ for ip in range(i+1, m):
+ if a[ip][i]:
+ swaps.append((i, ip))
+ a[i], a[ip] = a[ip], a[i]
+ break
+ else:
+ # M = Matrix([[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 2]])
+ continue
+ for j in range(i+1, m):
+ l_ji = a[j][i] / a[i][i]
+ a[j][i] = l_ji
+ for k in range(i+1, n):
+ a[j][k] -= l_ji * a[i][k]
+
+ return swaps
+
+
+def ddm_ilu_solve(x, L, U, swaps, b):
+ """x <-- solve(L*U*x = swaps(b))"""
+ m = len(U)
+ if not m:
+ return
+ n = len(U[0])
+
+ m2 = len(b)
+ if not m2:
+        raise DDMShapeError("Shape mismatch")
+ o = len(b[0])
+
+ if m != m2:
+        raise DDMShapeError("Shape mismatch")
+ if m < n:
+ raise NotImplementedError("Underdetermined")
+
+ if swaps:
+ b = [row[:] for row in b]
+ for i1, i2 in swaps:
+ b[i1], b[i2] = b[i2], b[i1]
+
+ # solve Ly = b
+ y = [[None] * o for _ in range(m)]
+ for k in range(o):
+ for i in range(m):
+ rhs = b[i][k]
+ for j in range(i):
+ rhs -= L[i][j] * y[j][k]
+ y[i][k] = rhs
+
+ if m > n:
+ for i in range(n, m):
+ for j in range(o):
+ if y[i][j]:
+ raise NonInvertibleMatrixError
+
+ # Solve Ux = y
+ for k in range(o):
+ for i in reversed(range(n)):
+ if not U[i][i]:
+ raise NonInvertibleMatrixError
+ rhs = y[i][k]
+ for j in range(i+1, n):
+ rhs -= U[i][j] * x[j][k]
+ x[i][k] = rhs / U[i][i]
+
+
+def ddm_berk(M, K):
+ m = len(M)
+ if not m:
+ return [[K.one]]
+ n = len(M[0])
+
+ if m != n:
+ raise DDMShapeError("Not square")
+
+ if n == 1:
+ return [[K.one], [-M[0][0]]]
+
+ a = M[0][0]
+ R = [M[0][1:]]
+ C = [[row[0]] for row in M[1:]]
+ A = [row[1:] for row in M[1:]]
+
+ q = ddm_berk(A, K)
+
+ T = [[K.zero] * n for _ in range(n+1)]
+ for i in range(n):
+ T[i][i] = K.one
+ T[i+1][i] = -a
+ for i in range(2, n+1):
+ if i == 2:
+ AnC = C
+ else:
+ C = AnC
+ AnC = [[K.zero] for row in C]
+ ddm_imatmul(AnC, A, C)
+ RAnC = [[K.zero]]
+ ddm_imatmul(RAnC, R, AnC)
+ for j in range(0, n+1-i):
+ T[i+j][j] = -RAnC[0][0]
+
+ qout = [[K.zero] for _ in range(n+1)]
+ ddm_imatmul(qout, T, q)
+ return qout
+
+
+class DomainMatrix:
+
+ def __init__(self, rows, shape, domain):
+ self.rep = DDM(rows, shape, domain)
+ self.shape = shape
+ self.domain = domain
+
+ @classmethod
+ def from_ddm(cls, ddm):
+ return cls(ddm, ddm.shape, ddm.domain)
+
+ @classmethod
+ def from_list_sympy(cls, nrows, ncols, rows):
+ assert len(rows) == nrows
+ assert all(len(row) == ncols for row in rows)
+
+ items_sympy = [_sympify(item) for row in rows for item in row]
+
+ domain, items_domain = cls.get_domain(items_sympy)
+
+ domain_rows = [[items_domain[ncols*r + c] for c in range(ncols)] for r in range(nrows)]
+
+ return DomainMatrix(domain_rows, (nrows, ncols), domain)
+
+ @classmethod
+ def get_domain(cls, items_sympy, **kwargs):
+ K, items_K = construct_domain(items_sympy, **kwargs)
+ return K, items_K
+
+ def convert_to(self, K):
+ Kold = self.domain
+ new_rows = [[K.convert_from(e, Kold) for e in row] for row in self.rep]
+ return DomainMatrix(new_rows, self.shape, K)
+
+ def to_field(self):
+ K = self.domain.get_field()
+ return self.convert_to(K)
+
+ def unify(self, other):
+ K1 = self.domain
+ K2 = other.domain
+ if K1 == K2:
+ return self, other
+ K = K1.unify(K2)
+ if K1 != K:
+ self = self.convert_to(K)
+ if K2 != K:
+ other = other.convert_to(K)
+ return self, other
+
+ def to_Matrix(self):
+ from sympy.matrices.dense import MutableDenseMatrix
+ rows_sympy = [[self.domain.to_sympy(e) for e in row] for row in self.rep]
+ return MutableDenseMatrix(rows_sympy)
+
+ def __repr__(self):
+ rows_str = ['[%s]' % (', '.join(map(str, row))) for row in self.rep]
+ rowstr = '[%s]' % ', '.join(rows_str)
+ return 'DomainMatrix(%s, %r, %r)' % (rowstr, self.shape, self.domain)
+
+ def __add__(A, B):
+ if not isinstance(B, DomainMatrix):
+ return NotImplemented
+ return A.add(B)
+
+ def __sub__(A, B):
+ if not isinstance(B, DomainMatrix):
+ return NotImplemented
+ return A.sub(B)
+
+ def __neg__(A):
+ return A.neg()
+
+ def __mul__(A, B):
+ """A * B"""
+ if not isinstance(B, DomainMatrix):
+ return NotImplemented
+ return A.matmul(B)
+
+ def __pow__(A, n):
+ """A ** n"""
+ if not isinstance(n, int):
+ return NotImplemented
+ return A.pow(n)
+
+ def add(A, B):
+ if A.shape != B.shape:
+ raise ShapeError("shape")
+ if A.domain != B.domain:
+ raise ValueError("domain")
+ return A.from_ddm(A.rep.add(B.rep))
+
+ def sub(A, B):
+ if A.shape != B.shape:
+ raise ShapeError("shape")
+ if A.domain != B.domain:
+ raise ValueError("domain")
+ return A.from_ddm(A.rep.sub(B.rep))
+
+ def neg(A):
+ return A.from_ddm(A.rep.neg())
+
+ def matmul(A, B):
+ return A.from_ddm(A.rep.matmul(B.rep))
+
+ def pow(A, n):
+ if n < 0:
+ raise NotImplementedError('Negative powers')
+ elif n == 0:
+ m, n = A.shape
+ rows = [[A.domain.zero] * m for _ in range(m)]
+ for i in range(m):
+ rows[i][i] = A.domain.one
+ return type(A)(rows, A.shape, A.domain)
+ elif n == 1:
+ return A
+ elif n % 2 == 1:
+ return A * A**(n - 1)
+ else:
+ sqrtAn = A ** (n // 2)
+ return sqrtAn * sqrtAn
+
+ def rref(self):
+ if not self.domain.is_Field:
+ raise ValueError('Not a field')
+ rref_ddm, pivots = self.rep.rref()
+ return self.from_ddm(rref_ddm), tuple(pivots)
+
+ def inv(self):
+ if not self.domain.is_Field:
+ raise ValueError('Not a field')
+ m, n = self.shape
+ if m != n:
+ raise NonSquareMatrixError
+ inv = self.rep.inv()
+ return self.from_ddm(inv)
+
+ def det(self):
+ m, n = self.shape
+ if m != n:
+ raise NonSquareMatrixError
+ return self.rep.det()
+
+ def lu(self):
+ if not self.domain.is_Field:
+ raise ValueError('Not a field')
+ L, U, swaps = self.rep.lu()
+ return self.from_ddm(L), self.from_ddm(U), swaps
+
+ def lu_solve(self, rhs):
+ if self.shape[0] != rhs.shape[0]:
+ raise ShapeError("Shape")
+ if not self.domain.is_Field:
+ raise ValueError('Not a field')
+ sol = self.rep.lu_solve(rhs.rep)
+ return self.from_ddm(sol)
+
+ def charpoly(self):
+ m, n = self.shape
+ if m != n:
+ raise NonSquareMatrixError("not square")
+ return self.rep.charpoly()
+
+ def __eq__(A, B):
+ """A == B"""
+ if not isinstance(B, DomainMatrix):
+ return NotImplemented
+ return A.rep == B.rep
diff --git a/sympy/polys/polymatrix.py b/sympy/polys/polymatrix.py
index 731e6d24eb7a..5f2efe1fbf7b 100644
--- a/sympy/polys/polymatrix.py
+++ b/sympy/polys/polymatrix.py
@@ -1,10 +1,7 @@
from __future__ import print_function
-from sympy.core.sympify import _sympify
-
from sympy.matrices.dense import MutableDenseMatrix
-from sympy.polys.fields import sfield
from sympy.polys.polytools import Poly
from sympy.polys.domains import EX
@@ -90,128 +87,3 @@ def _eval_scalar_rmul(self, other):
MutablePolyMatrix = PolyMatrix = MutablePolyDenseMatrix
-
-
-class DomainMatrix:
-
- def __init__(self, rows, shape, domain):
- self.shape = shape
- self.rows = [[item for item in row] for row in rows]
- self.domain = domain
-
- @classmethod
- def from_list_sympy(cls, rows):
- nrows = len(rows)
- ncols = len(rows[0])
- assert len(rows) == nrows
- assert all(len(row) == ncols for row in rows)
-
- items_sympy = [_sympify(item) for row in rows for item in row]
-
- domain, items_domain = cls.get_domain(items_sympy)
-
- domain_rows = [[items_domain[ncols*r + c] for c in range(ncols)] for r in range(nrows)]
-
- return DomainMatrix(domain_rows, (nrows, ncols), domain)
-
- @classmethod
- def get_domain(cls, items_sympy):
- K, items_K = sfield(items_sympy, field=True, extension=True)
-
- if K.gens:
- domain = K.to_domain()
- else:
- domain = K.domain
-
- def convert(item):
- if not item:
- return domain.zero
- else:
- return item.numer[()] / item.denom[()]
-
- items_K = [convert(item) for item in items_K]
-
- return domain, items_K
-
- def to_Matrix(self):
- rows_sympy = [[self.domain.to_sympy(e) for e in row] for row in self.rows]
- return MutableDenseMatrix(rows_sympy)
-
- def __repr__(self):
- rows_str = ['[%s]' % (', '.join(map(str, row))) for row in self.rows]
- rowstr = '[%s]' % ', '.join(rows_str)
- return 'DomainMatrix(%s, %r, %r)' % (rowstr, self.shape, self.domain)
-
- def __mul__(A, B):
- """A * B"""
- if not isinstance(B, DomainMatrix):
- return NotImplemented
- rows, shape = matrix_mul(A.rows, A.shape, B.rows, B.shape)
- domain = A.domain.unify(B.domain)
- return type(A)(rows, shape, domain)
-
- def __pow__(A, n):
- """A ** n"""
- if n == 1:
- return A
- elif n % 2 == 1:
- return A * A**(n - 1)
- else:
- sqrtAn = A ** (n // 2)
- return sqrtAn * sqrtAn
-
- def rref(self):
- rref_rows, pivots = rref(self.rows, self.shape)
- rref_matrix = type(self)(rref_rows, self.shape, self.domain)
- pivots = tuple(pivots)
- return rref_matrix, pivots
-
- def __eq__(A, B):
- """A == B"""
- if not isinstance(B, DomainMatrix):
- return NotImplemented
- return A.rows == B.rows
-
-
-def matrix_mul(items1, shape1, items2, shape2):
- m, n1 = shape1
- n2, o = shape2
- assert n1 == n2
- n = n1
- shape3 = (m, o)
- items3 = [[None] * o for _ in range(m)]
- for i in range(m):
- for j in range(o):
- items3[i][j] = sum(items1[i][k] * items2[k][j] for k in range(n))
- return items3, shape3
-
-
-def rref(rows, shape):
- nrows, ncols = shape
- rows = [[item for item in row] for row in rows]
- pivots = []
- ri = 0
- for ci in range(ncols):
- for rj in range(ri, nrows):
- if rows[rj][ci]:
- # Row swap for pivot
- if rj != ri:
- rows[rj], rows[ri] = rows[ri], rows[rj]
- # Record pivot
- pivots.append(ci)
- break
- else:
- # No pivot
- continue
- # Normalise row
- pivoti = rows[ri][ci]
- for ck in range(ci, ncols):
- rows[ri][ck] = rows[ri][ck] / pivoti
- # Eliminate above and below from col to the right
- for rk in range(nrows):
- pivotk = rows[rk][ci]
- if rk != ri and pivotk:
- for ck in range(ci, ncols):
- rows[rk][ck] = rows[rk][ck] - pivotk * rows[ri][ck]
- ri += 1
- return rows, pivots
diff --git a/sympy/polys/solvers.py b/sympy/polys/solvers.py
index fd406c8d4119..d04ddc1493d6 100644
--- a/sympy/polys/solvers.py
+++ b/sympy/polys/solvers.py
@@ -8,7 +8,7 @@
from sympy.polys.domains import EX
from sympy.polys.rings import sring
from sympy.polys.polyerrors import NotInvertible
-from sympy.polys.polymatrix import DomainMatrix
+from sympy.polys.domainmatrix import DomainMatrix
class PolyNonlinearError(Exception):
@@ -72,8 +72,8 @@ def eqs_to_matrix(eqs_coeffs, eqs_rhs, gens, domain):
rows = [[domain.zero] * ncols for _ in range(nrows)]
for row, eq_coeff, eq_rhs in zip(rows, eqs_coeffs, eqs_rhs):
for sym, coeff in eq_coeff.items():
- row[sym2index[sym]] = coeff
- row[-1] = -eq_rhs
+ row[sym2index[sym]] = domain.convert(coeff)
+ row[-1] = -domain.convert(eq_rhs)
return DomainMatrix(rows, (nrows, ncols), domain)
@@ -346,6 +346,10 @@ def _solve_lin_sys_component(eqs_coeffs, eqs_rhs, ring):
# transform from equations to matrix form
matrix = eqs_to_matrix(eqs_coeffs, eqs_rhs, ring.gens, ring.domain)
+ # convert to a field for rref
+ if not matrix.domain.is_Field:
+ matrix = matrix.to_field()
+
# solve by row-reduction
echelon, pivots = matrix.rref()
@@ -357,14 +361,14 @@ def _solve_lin_sys_component(eqs_coeffs, eqs_rhs, ring):
if len(pivots) == len(keys):
sol = []
- for s in [row[-1] for row in echelon.rows]:
+ for s in [row[-1] for row in echelon.rep]:
a = s
sol.append(a)
sols = dict(zip(keys, sol))
else:
sols = {}
g = ring.gens
- echelon = echelon.rows
+ echelon = echelon.rep
for i, p in enumerate(pivots):
v = echelon[i][-1] - sum(echelon[i][j]*g[j] for j in range(p+1, len(g)))
sols[keys[p]] = v
| diff --git a/sympy/polys/tests/test_domainmatrix.py b/sympy/polys/tests/test_domainmatrix.py
new file mode 100644
index 000000000000..1b20741aafeb
--- /dev/null
+++ b/sympy/polys/tests/test_domainmatrix.py
@@ -0,0 +1,1021 @@
+from sympy.core.compatibility import HAS_GMPY
+from sympy.core.numbers import Rational
+from sympy.matrices.common import (NonInvertibleMatrixError,
+ NonSquareMatrixError, ShapeError)
+from sympy.matrices.dense import Matrix
+from sympy.polys import ZZ, QQ
+from sympy.polys.domainmatrix import (
+ DDM,
+ DDMBadInputError, DDMDomainError, DDMShapeError,
+ DomainMatrix,
+ ddm_iadd, ddm_isub, ddm_ineg, ddm_imatmul,
+ ddm_irref, ddm_idet, ddm_iinv, ddm_ilu, ddm_ilu_split, ddm_ilu_solve,
+ ddm_berk,
+ )
+
+from sympy.testing.pytest import raises
+
+
+def test_DDM_init():
+ items = [[ZZ(0), ZZ(1), ZZ(2)], [ZZ(3), ZZ(4), ZZ(5)]]
+ shape = (2, 3)
+ ddm = DDM(items, shape, ZZ)
+ assert ddm.shape == shape
+ assert ddm.rows == 2
+ assert ddm.cols == 3
+ assert ddm.domain == ZZ
+
+ raises(DDMBadInputError, lambda: DDM([[ZZ(2), ZZ(3)]], (2, 2), ZZ))
+ raises(DDMBadInputError, lambda: DDM([[ZZ(1)], [ZZ(2), ZZ(3)]], (2, 2), ZZ))
+
+
+def test_DDM_getsetitem():
+ ddm = DDM([[ZZ(2), ZZ(3)], [ZZ(4), ZZ(5)]], (2, 2), ZZ)
+
+ assert ddm[0][0] == ZZ(2)
+ assert ddm[0][1] == ZZ(3)
+ assert ddm[1][0] == ZZ(4)
+ assert ddm[1][1] == ZZ(5)
+
+ raises(IndexError, lambda: ddm[2][0])
+ raises(IndexError, lambda: ddm[0][2])
+
+ ddm[0][0] = ZZ(-1)
+ assert ddm[0][0] == ZZ(-1)
+
+
+def test_DDM_str():
+ ddm = DDM([[ZZ(0), ZZ(1)], [ZZ(2), ZZ(3)]], (2, 2), ZZ)
+ if HAS_GMPY:
+ assert str(ddm) == 'DDM([[mpz(0), mpz(1)], [mpz(2), mpz(3)]], (2, 2), ZZ)'
+ else:
+ assert str(ddm) == 'DDM([[0, 1], [2, 3]], (2, 2), ZZ)'
+
+
+def test_DDM_eq():
+ items = [[ZZ(0), ZZ(1)], [ZZ(2), ZZ(3)]]
+ ddm1 = DDM(items, (2, 2), ZZ)
+ ddm2 = DDM(items, (2, 2), ZZ)
+
+ assert (ddm1 == ddm1) is True
+ assert (ddm1 == items) is False
+ assert (items == ddm1) is False
+ assert (ddm1 == ddm2) is True
+ assert (ddm2 == ddm1) is True
+
+ assert (ddm1 != ddm1) is False
+ assert (ddm1 != items) is True
+ assert (items != ddm1) is True
+ assert (ddm1 != ddm2) is False
+ assert (ddm2 != ddm1) is False
+
+ ddm3 = DDM([[ZZ(0), ZZ(1)], [ZZ(3), ZZ(3)]], (2, 2), ZZ)
+ ddm3 = DDM(items, (2, 2), QQ)
+
+ assert (ddm1 == ddm3) is False
+ assert (ddm3 == ddm1) is False
+ assert (ddm1 != ddm3) is True
+ assert (ddm3 != ddm1) is True
+
+
+def test_DDM_zeros():
+ ddmz = DDM.zeros((3, 4), QQ)
+ assert list(ddmz) == [[QQ(0)] * 4] * 3
+ assert ddmz.shape == (3, 4)
+ assert ddmz.domain == QQ
+
+
+def test_DDM_eye():
+ ddmz = DDM.eye(3, QQ)
+ f = lambda i, j: QQ(1) if i == j else QQ(0)
+ assert list(ddmz) == [[f(i, j) for i in range(3)] for j in range(3)]
+ assert ddmz.shape == (3, 3)
+ assert ddmz.domain == QQ
+
+
+def test_DDM_copy():
+ ddm1 = DDM([[QQ(1)], [QQ(2)]], (2, 1), QQ)
+ ddm2 = ddm1.copy()
+ assert (ddm1 == ddm2) is True
+ ddm1[0][0] = QQ(-1)
+ assert (ddm1 == ddm2) is False
+ ddm2[0][0] = QQ(-1)
+ assert (ddm1 == ddm2) is True
+
+
+def test_DDM_add():
+ A = DDM([[ZZ(1)], [ZZ(2)]], (2, 1), ZZ)
+ B = DDM([[ZZ(3)], [ZZ(4)]], (2, 1), ZZ)
+ C = DDM([[ZZ(4)], [ZZ(6)]], (2, 1), ZZ)
+ AQ = DDM([[QQ(1)], [QQ(2)]], (2, 1), QQ)
+ assert A + B == A.add(B) == C
+
+ raises(DDMShapeError, lambda: A + DDM([[ZZ(5)]], (1, 1), ZZ))
+ raises(TypeError, lambda: A + ZZ(1))
+ raises(TypeError, lambda: ZZ(1) + A)
+ raises(DDMDomainError, lambda: A + AQ)
+ raises(DDMDomainError, lambda: AQ + A)
+
+
+def test_DDM_sub():
+ A = DDM([[ZZ(1)], [ZZ(2)]], (2, 1), ZZ)
+ B = DDM([[ZZ(3)], [ZZ(4)]], (2, 1), ZZ)
+ C = DDM([[ZZ(-2)], [ZZ(-2)]], (2, 1), ZZ)
+ AQ = DDM([[QQ(1)], [QQ(2)]], (2, 1), QQ)
+ D = DDM([[ZZ(5)]], (1, 1), ZZ)
+ assert A - B == A.sub(B) == C
+
+ raises(TypeError, lambda: A - ZZ(1))
+ raises(TypeError, lambda: ZZ(1) - A)
+ raises(DDMShapeError, lambda: A - D)
+ raises(DDMShapeError, lambda: D - A)
+ raises(DDMShapeError, lambda: A.sub(D))
+ raises(DDMShapeError, lambda: D.sub(A))
+ raises(DDMDomainError, lambda: A - AQ)
+ raises(DDMDomainError, lambda: AQ - A)
+ raises(DDMDomainError, lambda: A.sub(AQ))
+ raises(DDMDomainError, lambda: AQ.sub(A))
+
+
+def test_DDM_neg():
+ A = DDM([[ZZ(1)], [ZZ(2)]], (2, 1), ZZ)
+ An = DDM([[ZZ(-1)], [ZZ(-2)]], (2, 1), ZZ)
+ assert -A == A.neg() == An
+ assert -An == An.neg() == A
+
+
+def test_DDM_mul():
+ A = DDM([[ZZ(1)]], (1, 1), ZZ)
+ raises(TypeError, lambda: [[1]] * A)
+ raises(TypeError, lambda: A * [[1]])
+
+
+def test_DDM_matmul():
+ A = DDM([[ZZ(1)], [ZZ(2)]], (2, 1), ZZ)
+ B = DDM([[ZZ(3), ZZ(4)]], (1, 2), ZZ)
+ AB = DDM([[ZZ(3), ZZ(4)], [ZZ(6), ZZ(8)]], (2, 2), ZZ)
+ BA = DDM([[ZZ(11)]], (1, 1), ZZ)
+
+ assert A @ B == A.matmul(B) == AB
+ assert B @ A == B.matmul(A) == BA
+
+ raises(TypeError, lambda: A @ 1)
+ raises(TypeError, lambda: A @ [[3, 4]])
+
+ Bq = DDM([[QQ(3), QQ(4)]], (1, 2), QQ)
+
+ raises(DDMDomainError, lambda: A @ Bq)
+ raises(DDMDomainError, lambda: Bq @ A)
+
+ C = DDM([[ZZ(1)]], (1, 1), ZZ)
+
+ assert A @ C == A.matmul(C) == A
+
+ raises(DDMShapeError, lambda: C @ A)
+ raises(DDMShapeError, lambda: C.matmul(A))
+
+ Z04 = DDM([], (0, 4), ZZ)
+ Z40 = DDM([[]]*4, (4, 0), ZZ)
+ Z50 = DDM([[]]*5, (5, 0), ZZ)
+ Z05 = DDM([], (0, 5), ZZ)
+ Z45 = DDM([[0] * 5] * 4, (4, 5), ZZ)
+ Z54 = DDM([[0] * 4] * 5, (5, 4), ZZ)
+ Z00 = DDM([], (0, 0), ZZ)
+
+ assert Z04 @ Z45 == Z04.matmul(Z45) == Z05
+ assert Z45 @ Z50 == Z45.matmul(Z50) == Z40
+ assert Z00 @ Z04 == Z00.matmul(Z04) == Z04
+ assert Z50 @ Z00 == Z50.matmul(Z00) == Z50
+ assert Z00 @ Z00 == Z00.matmul(Z00) == Z00
+ assert Z50 @ Z04 == Z50.matmul(Z04) == Z54
+
+ raises(DDMShapeError, lambda: Z05 @ Z40)
+ raises(DDMShapeError, lambda: Z05.matmul(Z40))
+
+
+def test_DDM_rref():
+
+ A = DDM([], (0, 4), QQ)
+ assert A.rref() == (A, [])
+
+ A = DDM([[QQ(0), QQ(1)], [QQ(1), QQ(1)]], (2, 2), QQ)
+ Ar = DDM([[QQ(1), QQ(0)], [QQ(0), QQ(1)]], (2, 2), QQ)
+ pivots = [0, 1]
+ assert A.rref() == (Ar, pivots)
+
+ A = DDM([[QQ(1), QQ(2), QQ(1)], [QQ(3), QQ(4), QQ(1)]], (2, 3), QQ)
+ Ar = DDM([[QQ(1), QQ(0), QQ(-1)], [QQ(0), QQ(1), QQ(1)]], (2, 3), QQ)
+ pivots = [0, 1]
+ assert A.rref() == (Ar, pivots)
+
+ A = DDM([[QQ(3), QQ(4), QQ(1)], [QQ(1), QQ(2), QQ(1)]], (2, 3), QQ)
+ Ar = DDM([[QQ(1), QQ(0), QQ(-1)], [QQ(0), QQ(1), QQ(1)]], (2, 3), QQ)
+ pivots = [0, 1]
+ assert A.rref() == (Ar, pivots)
+
+ A = DDM([[QQ(1), QQ(0)], [QQ(1), QQ(3)], [QQ(0), QQ(1)]], (3, 2), QQ)
+ Ar = DDM([[QQ(1), QQ(0)], [QQ(0), QQ(1)], [QQ(0), QQ(0)]], (3, 2), QQ)
+ pivots = [0, 1]
+ assert A.rref() == (Ar, pivots)
+
+ A = DDM([[QQ(1), QQ(0), QQ(1)], [QQ(3), QQ(0), QQ(1)]], (2, 3), QQ)
+ Ar = DDM([[QQ(1), QQ(0), QQ(0)], [QQ(0), QQ(0), QQ(1)]], (2, 3), QQ)
+ pivots = [0, 2]
+ assert A.rref() == (Ar, pivots)
+
+
+def test_DDM_det():
+ # 0x0 case
+ A = DDM([], (0, 0), ZZ)
+ assert A.det() == ZZ(1)
+
+ # 1x1 case
+ A = DDM([[ZZ(2)]], (1, 1), ZZ)
+ assert A.det() == ZZ(2)
+
+ # 2x2 case
+ A = DDM([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ assert A.det() == ZZ(-2)
+
+ # 3x3 with swap
+ A = DDM([[ZZ(1), ZZ(2), ZZ(3)], [ZZ(1), ZZ(2), ZZ(4)], [ZZ(1), ZZ(2), ZZ(5)]], (3, 3), ZZ)
+ assert A.det() == ZZ(0)
+
+ # 2x2 QQ case
+ A = DDM([[QQ(1, 2), QQ(1, 2)], [QQ(1, 3), QQ(1, 4)]], (2, 2), QQ)
+ assert A.det() == QQ(-1, 24)
+
+ # Nonsquare error
+ A = DDM([[ZZ(1)], [ZZ(2)]], (2, 1), ZZ)
+ raises(DDMShapeError, lambda: A.det())
+
+ # Nonsquare error with empty matrix
+ A = DDM([], (0, 1), ZZ)
+ raises(DDMShapeError, lambda: A.det())
+
+
+def test_DDM_inv():
+ A = DDM([[QQ(1, 1), QQ(2, 1)], [QQ(3, 1), QQ(4, 1)]], (2, 2), QQ)
+ Ainv = DDM([[QQ(-2, 1), QQ(1, 1)], [QQ(3, 2), QQ(-1, 2)]], (2, 2), QQ)
+ assert A.inv() == Ainv
+
+ A = DDM([[QQ(1), QQ(2)]], (1, 2), QQ)
+ raises(DDMShapeError, lambda: A.inv())
+
+ A = DDM([[ZZ(2)]], (1, 1), ZZ)
+ raises(ValueError, lambda: A.inv())
+
+ A = DDM([], (0, 0), QQ)
+ assert A.inv() == A
+
+ A = DDM([[QQ(1), QQ(2)], [QQ(2), QQ(4)]], (2, 2), QQ)
+ raises(NonInvertibleMatrixError, lambda: A.inv())
+
+
+def test_DDM_lu():
+ A = DDM([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ L, U, swaps = A.lu()
+ assert L == DDM([[QQ(1), QQ(0)], [QQ(3), QQ(1)]], (2, 2), QQ)
+ assert U == DDM([[QQ(1), QQ(2)], [QQ(0), QQ(-2)]], (2, 2), QQ)
+ assert swaps == []
+
+ A = [[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 2]]
+ Lexp = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, 1]]
+ Uexp = [[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 0, 1]]
+ to_dom = lambda rows, dom: [[dom(e) for e in row] for row in rows]
+ A = DDM(to_dom(A, QQ), (4, 4), QQ)
+ Lexp = DDM(to_dom(Lexp, QQ), (4, 4), QQ)
+ Uexp = DDM(to_dom(Uexp, QQ), (4, 4), QQ)
+ L, U, swaps = A.lu()
+ assert L == Lexp
+ assert U == Uexp
+ assert swaps == []
+
+
+def test_DDM_lu_solve():
+ # Basic example
+ A = DDM([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ b = DDM([[QQ(1)], [QQ(2)]], (2, 1), QQ)
+ x = DDM([[QQ(0)], [QQ(1, 2)]], (2, 1), QQ)
+ assert A.lu_solve(b) == x
+
+ # Example with swaps
+ A = DDM([[QQ(0), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ assert A.lu_solve(b) == x
+
+ # Overdetermined, consistent
+ A = DDM([[QQ(1), QQ(2)], [QQ(3), QQ(4)], [QQ(5), QQ(6)]], (3, 2), QQ)
+ b = DDM([[QQ(1)], [QQ(2)], [QQ(3)]], (3, 1), QQ)
+ assert A.lu_solve(b) == x
+
+ # Overdetermined, inconsistent
+ b = DDM([[QQ(1)], [QQ(2)], [QQ(4)]], (3, 1), QQ)
+ raises(NonInvertibleMatrixError, lambda: A.lu_solve(b))
+
+ # Square, noninvertible
+ A = DDM([[QQ(1), QQ(2)], [QQ(1), QQ(2)]], (2, 2), QQ)
+ b = DDM([[QQ(1)], [QQ(2)]], (2, 1), QQ)
+ raises(NonInvertibleMatrixError, lambda: A.lu_solve(b))
+
+ # Underdetermined
+ A = DDM([[QQ(1), QQ(2)]], (1, 2), QQ)
+ b = DDM([[QQ(3)]], (1, 1), QQ)
+ raises(NotImplementedError, lambda: A.lu_solve(b))
+
+ # Domain mismatch
+ bz = DDM([[ZZ(1)], [ZZ(2)]], (2, 1), ZZ)
+ raises(DDMDomainError, lambda: A.lu_solve(bz))
+
+ # Shape mismatch
+ b3 = DDM([[QQ(1)], [QQ(2)], [QQ(3)]], (3, 1), QQ)
+ raises(DDMShapeError, lambda: A.lu_solve(b3))
+
+
+def test_DDM_charpoly():
+ A = DDM([], (0, 0), ZZ)
+ assert A.charpoly() == [ZZ(1)]
+
+ A = DDM([
+ [ZZ(1), ZZ(2), ZZ(3)],
+ [ZZ(4), ZZ(5), ZZ(6)],
+ [ZZ(7), ZZ(8), ZZ(9)]], (3, 3), ZZ)
+ Avec = [ZZ(1), ZZ(-15), ZZ(-18), ZZ(0)]
+ assert A.charpoly() == Avec
+
+ A = DDM([[ZZ(1), ZZ(2)]], (1, 2), ZZ)
+ raises(DDMShapeError, lambda: A.charpoly())
+
+
+def test_ddm_iadd():
+ a = [[1, 2], [3, 4]]
+ b = [[5, 6], [7, 8]]
+ ddm_iadd(a, b)
+ assert a == [[6, 8], [10, 12]]
+
+
+def test_ddm_isub():
+ a = [[1, 2], [3, 4]]
+ b = [[5, 6], [7, 8]]
+ ddm_isub(a, b)
+ assert a == [[-4, -4], [-4, -4]]
+
+
+def test_ddm_ineg():
+ a = [[1, 2], [3, 4]]
+ ddm_ineg(a)
+ assert a == [[-1, -2], [-3, -4]]
+
+
+def test_ddm_imatmul():
+ a = [[1, 2, 3], [4, 5, 6]]
+ b = [[1, 2], [3, 4], [5, 6]]
+
+ c1 = [[0, 0], [0, 0]]
+ ddm_imatmul(c1, a, b)
+ assert c1 == [[22, 28], [49, 64]]
+
+ c2 = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
+ ddm_imatmul(c2, b, a)
+ assert c2 == [[9, 12, 15], [19, 26, 33], [29, 40, 51]]
+
+ b3 = [[1], [2], [3]]
+ c3 = [[0], [0]]
+ ddm_imatmul(c3, a, b3)
+ assert c3 == [[14], [32]]
+
+
+def test_ddm_irref():
+ # Empty matrix
+ A = []
+ Ar = []
+ pivots = []
+ assert ddm_irref(A) == pivots
+ assert A == Ar
+
+ # Standard square case
+ A = [[QQ(0), QQ(1)], [QQ(1), QQ(1)]]
+ Ar = [[QQ(1), QQ(0)], [QQ(0), QQ(1)]]
+ pivots = [0, 1]
+ assert ddm_irref(A) == pivots
+ assert A == Ar
+
+ # m < n case
+ A = [[QQ(1), QQ(2), QQ(1)], [QQ(3), QQ(4), QQ(1)]]
+ Ar = [[QQ(1), QQ(0), QQ(-1)], [QQ(0), QQ(1), QQ(1)]]
+ pivots = [0, 1]
+ assert ddm_irref(A) == pivots
+ assert A == Ar
+
+ # same m < n but reversed
+ A = [[QQ(3), QQ(4), QQ(1)], [QQ(1), QQ(2), QQ(1)]]
+ Ar = [[QQ(1), QQ(0), QQ(-1)], [QQ(0), QQ(1), QQ(1)]]
+ pivots = [0, 1]
+ assert ddm_irref(A) == pivots
+ assert A == Ar
+
+ # m > n case
+ A = [[QQ(1), QQ(0)], [QQ(1), QQ(3)], [QQ(0), QQ(1)]]
+ Ar = [[QQ(1), QQ(0)], [QQ(0), QQ(1)], [QQ(0), QQ(0)]]
+ pivots = [0, 1]
+ assert ddm_irref(A) == pivots
+ assert A == Ar
+
+ # Example with missing pivot
+ A = [[QQ(1), QQ(0), QQ(1)], [QQ(3), QQ(0), QQ(1)]]
+ Ar = [[QQ(1), QQ(0), QQ(0)], [QQ(0), QQ(0), QQ(1)]]
+ pivots = [0, 2]
+ assert ddm_irref(A) == pivots
+ assert A == Ar
+
+ # Example with missing pivot and no replacement
+ A = [[QQ(0), QQ(1)], [QQ(0), QQ(2)], [QQ(1), QQ(0)]]
+ Ar = [[QQ(1), QQ(0)], [QQ(0), QQ(1)], [QQ(0), QQ(0)]]
+ pivots = [0, 1]
+ assert ddm_irref(A) == pivots
+ assert A == Ar
+
+
+def test_ddm_idet():
+ A = []
+ assert ddm_idet(A, ZZ) == ZZ(1)
+
+ A = [[ZZ(2)]]
+ assert ddm_idet(A, ZZ) == ZZ(2)
+
+ A = [[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]]
+ assert ddm_idet(A, ZZ) == ZZ(-2)
+
+ A = [[ZZ(1), ZZ(2), ZZ(3)], [ZZ(1), ZZ(2), ZZ(4)], [ZZ(1), ZZ(3), ZZ(5)]]
+ assert ddm_idet(A, ZZ) == ZZ(-1)
+
+ A = [[ZZ(1), ZZ(2), ZZ(3)], [ZZ(1), ZZ(2), ZZ(4)], [ZZ(1), ZZ(2), ZZ(5)]]
+ assert ddm_idet(A, ZZ) == ZZ(0)
+
+ A = [[QQ(1, 2), QQ(1, 2)], [QQ(1, 3), QQ(1, 4)]]
+ assert ddm_idet(A, QQ) == QQ(-1, 24)
+
+
+def test_ddm_inv():
+ A = []
+ Ainv = []
+ ddm_iinv(Ainv, A, QQ)
+ assert Ainv == A
+
+ A = []
+ Ainv = []
+ raises(ValueError, lambda: ddm_iinv(Ainv, A, ZZ))
+
+ A = [[QQ(1), QQ(2)]]
+ Ainv = [[QQ(0), QQ(0)]]
+ raises(NonSquareMatrixError, lambda: ddm_iinv(Ainv, A, QQ))
+
+ A = [[QQ(1, 1), QQ(2, 1)], [QQ(3, 1), QQ(4, 1)]]
+ Ainv = [[QQ(0), QQ(0)], [QQ(0), QQ(0)]]
+ Ainv_expected = [[QQ(-2, 1), QQ(1, 1)], [QQ(3, 2), QQ(-1, 2)]]
+ ddm_iinv(Ainv, A, QQ)
+ assert Ainv == Ainv_expected
+
+ A = [[QQ(1, 1), QQ(2, 1)], [QQ(2, 1), QQ(4, 1)]]
+ Ainv = [[QQ(0), QQ(0)], [QQ(0), QQ(0)]]
+ raises(NonInvertibleMatrixError, lambda: ddm_iinv(Ainv, A, QQ))
+
+
+def test_ddm_ilu():
+ A = []
+ Alu = []
+ swaps = ddm_ilu(A)
+ assert A == Alu
+ assert swaps == []
+
+ A = [[]]
+ Alu = [[]]
+ swaps = ddm_ilu(A)
+ assert A == Alu
+ assert swaps == []
+
+ A = [[QQ(1), QQ(2)], [QQ(3), QQ(4)]]
+ Alu = [[QQ(1), QQ(2)], [QQ(3), QQ(-2)]]
+ swaps = ddm_ilu(A)
+ assert A == Alu
+ assert swaps == []
+
+ A = [[QQ(0), QQ(2)], [QQ(3), QQ(4)]]
+ Alu = [[QQ(3), QQ(4)], [QQ(0), QQ(2)]]
+ swaps = ddm_ilu(A)
+ assert A == Alu
+ assert swaps == [(0, 1)]
+
+ A = [[QQ(1), QQ(2), QQ(3)], [QQ(4), QQ(5), QQ(6)], [QQ(7), QQ(8), QQ(9)]]
+ Alu = [[QQ(1), QQ(2), QQ(3)], [QQ(4), QQ(-3), QQ(-6)], [QQ(7), QQ(2), QQ(0)]]
+ swaps = ddm_ilu(A)
+ assert A == Alu
+ assert swaps == []
+
+ A = [[QQ(0), QQ(1), QQ(2)], [QQ(0), QQ(1), QQ(3)], [QQ(1), QQ(1), QQ(2)]]
+ Alu = [[QQ(1), QQ(1), QQ(2)], [QQ(0), QQ(1), QQ(3)], [QQ(0), QQ(1), QQ(-1)]]
+ swaps = ddm_ilu(A)
+ assert A == Alu
+ assert swaps == [(0, 2)]
+
+ A = [[QQ(1), QQ(2), QQ(3)], [QQ(4), QQ(5), QQ(6)]]
+ Alu = [[QQ(1), QQ(2), QQ(3)], [QQ(4), QQ(-3), QQ(-6)]]
+ swaps = ddm_ilu(A)
+ assert A == Alu
+ assert swaps == []
+
+ A = [[QQ(1), QQ(2)], [QQ(3), QQ(4)], [QQ(5), QQ(6)]]
+ Alu = [[QQ(1), QQ(2)], [QQ(3), QQ(-2)], [QQ(5), QQ(2)]]
+ swaps = ddm_ilu(A)
+ assert A == Alu
+ assert swaps == []
+
+
+def test_ddm_ilu_split():
+ U = []
+ L = []
+ Uexp = []
+ Lexp = []
+ swaps = ddm_ilu_split(L, U, QQ)
+ assert U == Uexp
+ assert L == Lexp
+ assert swaps == []
+
+ U = [[]]
+ L = [[QQ(1)]]
+ Uexp = [[]]
+ Lexp = [[QQ(1)]]
+ swaps = ddm_ilu_split(L, U, QQ)
+ assert U == Uexp
+ assert L == Lexp
+ assert swaps == []
+
+ U = [[QQ(1), QQ(2)], [QQ(3), QQ(4)]]
+ L = [[QQ(1), QQ(0)], [QQ(0), QQ(1)]]
+ Uexp = [[QQ(1), QQ(2)], [QQ(0), QQ(-2)]]
+ Lexp = [[QQ(1), QQ(0)], [QQ(3), QQ(1)]]
+ swaps = ddm_ilu_split(L, U, QQ)
+ assert U == Uexp
+ assert L == Lexp
+ assert swaps == []
+
+ U = [[QQ(1), QQ(2), QQ(3)], [QQ(4), QQ(5), QQ(6)]]
+ L = [[QQ(1), QQ(0)], [QQ(0), QQ(1)]]
+ Uexp = [[QQ(1), QQ(2), QQ(3)], [QQ(0), QQ(-3), QQ(-6)]]
+ Lexp = [[QQ(1), QQ(0)], [QQ(4), QQ(1)]]
+ swaps = ddm_ilu_split(L, U, QQ)
+ assert U == Uexp
+ assert L == Lexp
+ assert swaps == []
+
+ U = [[QQ(1), QQ(2)], [QQ(3), QQ(4)], [QQ(5), QQ(6)]]
+ L = [[QQ(1), QQ(0), QQ(0)], [QQ(0), QQ(1), QQ(0)], [QQ(0), QQ(0), QQ(1)]]
+ Uexp = [[QQ(1), QQ(2)], [QQ(0), QQ(-2)], [QQ(0), QQ(0)]]
+ Lexp = [[QQ(1), QQ(0), QQ(0)], [QQ(3), QQ(1), QQ(0)], [QQ(5), QQ(2), QQ(1)]]
+ swaps = ddm_ilu_split(L, U, QQ)
+ assert U == Uexp
+ assert L == Lexp
+ assert swaps == []
+
+
+def test_ddm_ilu_solve():
+ # Basic example
+ # A = [[QQ(1), QQ(2)], [QQ(3), QQ(4)]]
+ U = [[QQ(1), QQ(2)], [QQ(0), QQ(-2)]]
+ L = [[QQ(1), QQ(0)], [QQ(3), QQ(1)]]
+ swaps = []
+ b = DDM([[QQ(1)], [QQ(2)]], (2, 1), QQ)
+ x = DDM([[QQ(0)], [QQ(0)]], (2, 1), QQ)
+ xexp = DDM([[QQ(0)], [QQ(1, 2)]], (2, 1), QQ)
+ ddm_ilu_solve(x, L, U, swaps, b)
+ assert x == xexp
+
+ # Example with swaps
+ # A = [[QQ(0), QQ(2)], [QQ(3), QQ(4)]]
+ U = [[QQ(3), QQ(4)], [QQ(0), QQ(2)]]
+ L = [[QQ(1), QQ(0)], [QQ(0), QQ(1)]]
+ swaps = [(0, 1)]
+ b = DDM([[QQ(1)], [QQ(2)]], (2, 1), QQ)
+ x = DDM([[QQ(0)], [QQ(0)]], (2, 1), QQ)
+ xexp = DDM([[QQ(0)], [QQ(1, 2)]], (2, 1), QQ)
+ ddm_ilu_solve(x, L, U, swaps, b)
+ assert x == xexp
+
+ # Overdetermined, consistent
+ # A = DDM([[QQ(1), QQ(2)], [QQ(3), QQ(4)], [QQ(5), QQ(6)]], (3, 2), QQ)
+ U = [[QQ(1), QQ(2)], [QQ(0), QQ(-2)], [QQ(0), QQ(0)]]
+ L = [[QQ(1), QQ(0), QQ(0)], [QQ(3), QQ(1), QQ(0)], [QQ(5), QQ(2), QQ(1)]]
+ swaps = []
+ b = DDM([[QQ(1)], [QQ(2)], [QQ(3)]], (3, 1), QQ)
+ x = DDM([[QQ(0)], [QQ(0)]], (2, 1), QQ)
+ xexp = DDM([[QQ(0)], [QQ(1, 2)]], (2, 1), QQ)
+ ddm_ilu_solve(x, L, U, swaps, b)
+ assert x == xexp
+
+ # Overdetermined, inconsistent
+ b = DDM([[QQ(1)], [QQ(2)], [QQ(4)]], (3, 1), QQ)
+ raises(NonInvertibleMatrixError, lambda: ddm_ilu_solve(x, L, U, swaps, b))
+
+ # Square, noninvertible
+ # A = DDM([[QQ(1), QQ(2)], [QQ(1), QQ(2)]], (2, 2), QQ)
+ U = [[QQ(1), QQ(2)], [QQ(0), QQ(0)]]
+ L = [[QQ(1), QQ(0)], [QQ(1), QQ(1)]]
+ swaps = []
+ b = DDM([[QQ(1)], [QQ(2)]], (2, 1), QQ)
+ raises(NonInvertibleMatrixError, lambda: ddm_ilu_solve(x, L, U, swaps, b))
+
+ # Underdetermined
+ # A = DDM([[QQ(1), QQ(2)]], (1, 2), QQ)
+ U = [[QQ(1), QQ(2)]]
+ L = [[QQ(1)]]
+ swaps = []
+ b = DDM([[QQ(3)]], (1, 1), QQ)
+ raises(NotImplementedError, lambda: ddm_ilu_solve(x, L, U, swaps, b))
+
+ # Shape mismatch
+ b3 = DDM([[QQ(1)], [QQ(2)], [QQ(3)]], (3, 1), QQ)
+ raises(DDMShapeError, lambda: ddm_ilu_solve(x, L, U, swaps, b3))
+
+ # Empty shape mismatch
+ U = [[QQ(1)]]
+ L = [[QQ(1)]]
+ swaps = []
+ x = [[QQ(1)]]
+ b = []
+ raises(DDMShapeError, lambda: ddm_ilu_solve(x, L, U, swaps, b))
+
+ # Empty system
+ U = []
+ L = []
+ swaps = []
+ b = []
+ x = []
+ ddm_ilu_solve(x, L, U, swaps, b)
+ assert x == []
+
+
+def test_ddm_charpoly():
+ A = []
+ assert ddm_berk(A, ZZ) == [[ZZ(1)]]
+
+ A = [[ZZ(1), ZZ(2), ZZ(3)], [ZZ(4), ZZ(5), ZZ(6)], [ZZ(7), ZZ(8), ZZ(9)]]
+ Avec = [[ZZ(1)], [ZZ(-15)], [ZZ(-18)], [ZZ(0)]]
+ assert ddm_berk(A, ZZ) == Avec
+
+ A = DDM([[ZZ(1), ZZ(2)]], (1, 2), ZZ)
+ raises(DDMShapeError, lambda: ddm_berk(A, ZZ))
+
+
+def test_DomainMatrix_init():
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ assert A.rep == DDM([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ assert A.shape == (2, 2)
+ assert A.domain == ZZ
+
+ raises(DDMBadInputError, lambda: DomainMatrix([[ZZ(1), ZZ(2)]], (2, 2), ZZ))
+
+
+def test_DomainMatrix_from_ddm():
+ ddm = DDM([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ A = DomainMatrix.from_ddm(ddm)
+ assert A.rep == ddm
+ assert A.shape == (2, 2)
+ assert A.domain == ZZ
+
+
+def test_DomainMatrix_from_list_sympy():
+ ddm = DDM([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ A = DomainMatrix.from_list_sympy(2, 2, [[1, 2], [3, 4]])
+ assert A.rep == ddm
+ assert A.shape == (2, 2)
+ assert A.domain == ZZ
+
+
+def test_DomainMatrix_eq():
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ assert A == A
+ B = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(1)]], (2, 2), ZZ)
+ assert A != B
+ C = [[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]]
+ assert A != C
+
+
+def test_DomainMatrix_get_domain():
+ K, items = DomainMatrix.get_domain([1, 2, 3, 4])
+ assert items == [ZZ(1), ZZ(2), ZZ(3), ZZ(4)]
+ assert K == ZZ
+
+ K, items = DomainMatrix.get_domain([1, 2, 3, Rational(1, 2)])
+ assert items == [QQ(1), QQ(2), QQ(3), QQ(1, 2)]
+ assert K == QQ
+
+
+def test_DomainMatrix_convert_to():
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ Aq = A.convert_to(QQ)
+ assert Aq == DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+
+
+def test_DomainMatrix_to_field():
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ Aq = A.to_field()
+ assert Aq == DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+
+
+def test_DomainMatrix_unify():
+ Az = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ Aq = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ assert Az.unify(Az) == (Az, Az)
+ assert Az.unify(Aq) == (Aq, Aq)
+ assert Aq.unify(Az) == (Aq, Aq)
+ assert Aq.unify(Aq) == (Aq, Aq)
+
+
+def test_DomainMatrix_to_Matrix():
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ assert A.to_Matrix() == Matrix([[1, 2], [3, 4]])
+
+
+def test_DomainMatrix_repr():
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ assert repr(A) == 'DomainMatrix([[1, 2], [3, 4]], (2, 2), ZZ)'
+
+
+def test_DomainMatrix_add():
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ B = DomainMatrix([[ZZ(2), ZZ(4)], [ZZ(6), ZZ(8)]], (2, 2), ZZ)
+ assert A + A == A.add(A) == B
+
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ L = [[2, 3], [3, 4]]
+ raises(TypeError, lambda: A + L)
+ raises(TypeError, lambda: L + A)
+
+ A1 = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ A2 = DomainMatrix([[ZZ(1), ZZ(2)]], (1, 2), ZZ)
+ raises(ShapeError, lambda: A1 + A2)
+ raises(ShapeError, lambda: A2 + A1)
+ raises(ShapeError, lambda: A1.add(A2))
+ raises(ShapeError, lambda: A2.add(A1))
+
+ Az = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ Aq = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ raises(ValueError, lambda: Az + Aq)
+ raises(ValueError, lambda: Aq + Az)
+ raises(ValueError, lambda: Az.add(Aq))
+ raises(ValueError, lambda: Aq.add(Az))
+
+
+def test_DomainMatrix_sub():
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ B = DomainMatrix([[ZZ(0), ZZ(0)], [ZZ(0), ZZ(0)]], (2, 2), ZZ)
+ assert A - A == A.sub(A) == B
+
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ L = [[2, 3], [3, 4]]
+ raises(TypeError, lambda: A - L)
+ raises(TypeError, lambda: L - A)
+
+ A1 = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ A2 = DomainMatrix([[ZZ(1), ZZ(2)]], (1, 2), ZZ)
+ raises(ShapeError, lambda: A1 - A2)
+ raises(ShapeError, lambda: A2 - A1)
+ raises(ShapeError, lambda: A1.sub(A2))
+ raises(ShapeError, lambda: A2.sub(A1))
+
+ Az = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ Aq = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ raises(ValueError, lambda: Az - Aq)
+ raises(ValueError, lambda: Aq - Az)
+ raises(ValueError, lambda: Az.sub(Aq))
+ raises(ValueError, lambda: Aq.sub(Az))
+
+
+def test_DomainMatrix_neg():
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ Aneg = DomainMatrix([[ZZ(-1), ZZ(-2)], [ZZ(-3), ZZ(-4)]], (2, 2), ZZ)
+ assert -A == A.neg() == Aneg
+
+
+def test_DomainMatrix_mul():
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ A2 = DomainMatrix([[ZZ(7), ZZ(10)], [ZZ(15), ZZ(22)]], (2, 2), ZZ)
+ assert A*A == A.matmul(A) == A2
+
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ L = [[1, 2], [3, 4]]
+ raises(TypeError, lambda: A * L)
+ raises(TypeError, lambda: L * A)
+
+ Az = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ Aq = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ raises(DDMDomainError, lambda: Az * Aq)
+ raises(DDMDomainError, lambda: Aq * Az)
+ raises(DDMDomainError, lambda: Az.matmul(Aq))
+ raises(DDMDomainError, lambda: Aq.matmul(Az))
+
+
+def test_DomainMatrix_pow():
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ A2 = DomainMatrix([[ZZ(7), ZZ(10)], [ZZ(15), ZZ(22)]], (2, 2), ZZ)
+ A3 = DomainMatrix([[ZZ(37), ZZ(54)], [ZZ(81), ZZ(118)]], (2, 2), ZZ)
+ eye = DomainMatrix([[ZZ(1), ZZ(0)], [ZZ(0), ZZ(1)]], (2, 2), ZZ)
+ assert A**0 == A.pow(0) == eye
+ assert A**1 == A.pow(1) == A
+ assert A**2 == A.pow(2) == A2
+ assert A**3 == A.pow(3) == A3
+
+ raises(TypeError, lambda: A ** Rational(1, 2))
+ raises(NotImplementedError, lambda: A ** -1)
+ raises(NotImplementedError, lambda: A.pow(-1))
+
+
+def test_DomainMatrix_rref():
+ A = DomainMatrix([], (0, 1), QQ)
+ assert A.rref() == (A, ())
+
+ A = DomainMatrix([[QQ(1)]], (1, 1), QQ)
+ assert A.rref() == (A, (0,))
+
+ A = DomainMatrix([[QQ(0)]], (1, 1), QQ)
+ assert A.rref() == (A, ())
+
+ A = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ Ar, pivots = A.rref()
+ assert Ar == DomainMatrix([[QQ(1), QQ(0)], [QQ(0), QQ(1)]], (2, 2), QQ)
+ assert pivots == (0, 1)
+
+ A = DomainMatrix([[QQ(0), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ Ar, pivots = A.rref()
+ assert Ar == DomainMatrix([[QQ(1), QQ(0)], [QQ(0), QQ(1)]], (2, 2), QQ)
+ assert pivots == (0, 1)
+
+ A = DomainMatrix([[QQ(0), QQ(2)], [QQ(0), QQ(4)]], (2, 2), QQ)
+ Ar, pivots = A.rref()
+ assert Ar == DomainMatrix([[QQ(0), QQ(1)], [QQ(0), QQ(0)]], (2, 2), QQ)
+ assert pivots == (1,)
+
+ Az = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ raises(ValueError, lambda: Az.rref())
+
+
+def test_DomainMatrix_inv():
+ A = DomainMatrix([], (0, 0), QQ)
+ assert A.inv() == A
+
+ A = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ Ainv = DomainMatrix([[QQ(-2), QQ(1)], [QQ(3, 2), QQ(-1, 2)]], (2, 2), QQ)
+ assert A.inv() == Ainv
+
+ Az = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ raises(ValueError, lambda: Az.inv())
+
+ Ans = DomainMatrix([[QQ(1), QQ(2)]], (1, 2), QQ)
+ raises(NonSquareMatrixError, lambda: Ans.inv())
+
+ Aninv = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(6)]], (2, 2), QQ)
+ raises(NonInvertibleMatrixError, lambda: Aninv.inv())
+
+
+def test_DomainMatrix_det():
+ A = DomainMatrix([], (0, 0), ZZ)
+ assert A.det() == 1
+
+ A = DomainMatrix([[1]], (1, 1), ZZ)
+ assert A.det() == 1
+
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ assert A.det() == ZZ(-2)
+
+ A = DomainMatrix([[ZZ(1), ZZ(2), ZZ(3)], [ZZ(1), ZZ(2), ZZ(4)], [ZZ(1), ZZ(3), ZZ(5)]], (3, 3), ZZ)
+ assert A.det() == ZZ(-1)
+
+ A = DomainMatrix([[ZZ(1), ZZ(2), ZZ(3)], [ZZ(1), ZZ(2), ZZ(4)], [ZZ(1), ZZ(2), ZZ(5)]], (3, 3), ZZ)
+ assert A.det() == ZZ(0)
+
+ Ans = DomainMatrix([[QQ(1), QQ(2)]], (1, 2), QQ)
+ raises(NonSquareMatrixError, lambda: Ans.det())
+
+ A = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ assert A.det() == QQ(-2)
+
+
+def test_DomainMatrix_lu():
+ A = DomainMatrix([], (0, 0), QQ)
+ assert A.lu() == (A, A, [])
+
+ A = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ L = DomainMatrix([[QQ(1), QQ(0)], [QQ(3), QQ(1)]], (2, 2), QQ)
+ U = DomainMatrix([[QQ(1), QQ(2)], [QQ(0), QQ(-2)]], (2, 2), QQ)
+ swaps = []
+ assert A.lu() == (L, U, swaps)
+
+ A = DomainMatrix([[QQ(0), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ L = DomainMatrix([[QQ(1), QQ(0)], [QQ(0), QQ(1)]], (2, 2), QQ)
+ U = DomainMatrix([[QQ(3), QQ(4)], [QQ(0), QQ(2)]], (2, 2), QQ)
+ swaps = [(0, 1)]
+ assert A.lu() == (L, U, swaps)
+
+ A = DomainMatrix([[QQ(1), QQ(2)], [QQ(2), QQ(4)]], (2, 2), QQ)
+ L = DomainMatrix([[QQ(1), QQ(0)], [QQ(2), QQ(1)]], (2, 2), QQ)
+ U = DomainMatrix([[QQ(1), QQ(2)], [QQ(0), QQ(0)]], (2, 2), QQ)
+ swaps = []
+ assert A.lu() == (L, U, swaps)
+
+ A = DomainMatrix([[QQ(0), QQ(2)], [QQ(0), QQ(4)]], (2, 2), QQ)
+ L = DomainMatrix([[QQ(1), QQ(0)], [QQ(0), QQ(1)]], (2, 2), QQ)
+ U = DomainMatrix([[QQ(0), QQ(2)], [QQ(0), QQ(4)]], (2, 2), QQ)
+ swaps = []
+ assert A.lu() == (L, U, swaps)
+
+ A = DomainMatrix([[QQ(1), QQ(2), QQ(3)], [QQ(4), QQ(5), QQ(6)]], (2, 3), QQ)
+ L = DomainMatrix([[QQ(1), QQ(0)], [QQ(4), QQ(1)]], (2, 2), QQ)
+ U = DomainMatrix([[QQ(1), QQ(2), QQ(3)], [QQ(0), QQ(-3), QQ(-6)]], (2, 3), QQ)
+ swaps = []
+ assert A.lu() == (L, U, swaps)
+
+ A = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)], [QQ(5), QQ(6)]], (3, 2), QQ)
+ L = DomainMatrix([
+ [QQ(1), QQ(0), QQ(0)],
+ [QQ(3), QQ(1), QQ(0)],
+ [QQ(5), QQ(2), QQ(1)]], (3, 3), QQ)
+ U = DomainMatrix([[QQ(1), QQ(2)], [QQ(0), QQ(-2)], [QQ(0), QQ(0)]], (3, 2), QQ)
+ swaps = []
+ assert A.lu() == (L, U, swaps)
+
+ A = [[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 2]]
+ L = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, 1]]
+ U = [[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 0, 1]]
+ to_dom = lambda rows, dom: [[dom(e) for e in row] for row in rows]
+ A = DomainMatrix(to_dom(A, QQ), (4, 4), QQ)
+ L = DomainMatrix(to_dom(L, QQ), (4, 4), QQ)
+ U = DomainMatrix(to_dom(U, QQ), (4, 4), QQ)
+ assert A.lu() == (L, U, [])
+
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ raises(ValueError, lambda: A.lu())
+
+
+def test_DomainMatrix_lu_solve():
+ # Base case
+ A = b = x = DomainMatrix([], (0, 0), QQ)
+ assert A.lu_solve(b) == x
+
+ # Basic example
+ A = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ b = DomainMatrix([[QQ(1)], [QQ(2)]], (2, 1), QQ)
+ x = DomainMatrix([[QQ(0)], [QQ(1, 2)]], (2, 1), QQ)
+ assert A.lu_solve(b) == x
+
+ # Example with swaps
+ A = DomainMatrix([[QQ(0), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ b = DomainMatrix([[QQ(1)], [QQ(2)]], (2, 1), QQ)
+ x = DomainMatrix([[QQ(0)], [QQ(1, 2)]], (2, 1), QQ)
+ assert A.lu_solve(b) == x
+
+ # Non-invertible
+ A = DomainMatrix([[QQ(1), QQ(2)], [QQ(2), QQ(4)]], (2, 2), QQ)
+ b = DomainMatrix([[QQ(1)], [QQ(2)]], (2, 1), QQ)
+ raises(NonInvertibleMatrixError, lambda: A.lu_solve(b))
+
+ # Overdetermined, consistent
+ A = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)], [QQ(5), QQ(6)]], (3, 2), QQ)
+ b = DomainMatrix([[QQ(1)], [QQ(2)], [QQ(3)]], (3, 1), QQ)
+ x = DomainMatrix([[QQ(0)], [QQ(1, 2)]], (2, 1), QQ)
+ assert A.lu_solve(b) == x
+
+ # Overdetermined, inconsistent
+ A = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)], [QQ(5), QQ(6)]], (3, 2), QQ)
+ b = DomainMatrix([[QQ(1)], [QQ(2)], [QQ(4)]], (3, 1), QQ)
+ raises(NonInvertibleMatrixError, lambda: A.lu_solve(b))
+
+ # Underdetermined
+ A = DomainMatrix([[QQ(1), QQ(2)]], (1, 2), QQ)
+ b = DomainMatrix([[QQ(1)]], (1, 1), QQ)
+ raises(NotImplementedError, lambda: A.lu_solve(b))
+
+ # Non-field
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ b = DomainMatrix([[ZZ(1)], [ZZ(2)]], (2, 1), ZZ)
+ raises(ValueError, lambda: A.lu_solve(b))
+
+ # Shape mismatch
+ A = DomainMatrix([[QQ(1), QQ(2)], [QQ(3), QQ(4)]], (2, 2), QQ)
+ b = DomainMatrix([[QQ(1), QQ(2)]], (1, 2), QQ)
+ raises(ShapeError, lambda: A.lu_solve(b))
+
+
+def test_DomainMatrix_charpoly():
+ A = DomainMatrix([], (0, 0), ZZ)
+ assert A.charpoly() == [ZZ(1)]
+
+ A = DomainMatrix([[1]], (1, 1), ZZ)
+ assert A.charpoly() == [ZZ(1), ZZ(-1)]
+
+ A = DomainMatrix([[ZZ(1), ZZ(2)], [ZZ(3), ZZ(4)]], (2, 2), ZZ)
+ assert A.charpoly() == [ZZ(1), ZZ(-5), ZZ(-2)]
+
+ A = DomainMatrix([[ZZ(1), ZZ(2), ZZ(3)], [ZZ(4), ZZ(5), ZZ(6)], [ZZ(7), ZZ(8), ZZ(9)]], (3, 3), ZZ)
+ assert A.charpoly() == [ZZ(1), ZZ(-15), ZZ(-18), ZZ(0)]
+
+ Ans = DomainMatrix([[QQ(1), QQ(2)]], (1, 2), QQ)
+ raises(NonSquareMatrixError, lambda: Ans.charpoly())
| [
{
"components": [
{
"doc": "Base class for errors raised by DDM",
"lines": [
11,
13
],
"name": "DDMError",
"signature": "class DDMError(Exception):",
"type": "class"
},
{
"doc": "list of lists is inconsistent with shap... | [
"test_DDM_init",
"test_DDM_getsetitem",
"test_DDM_str",
"test_DDM_eq",
"test_DDM_zeros",
"test_DDM_eye",
"test_DDM_copy",
"test_DDM_add",
"test_DDM_sub",
"test_DDM_neg",
"test_DDM_mul",
"test_DDM_matmul",
"test_DDM_rref",
"test_DDM_det",
"test_DDM_inv",
"test_DDM_lu",
"test_DDM_lu_so... | [] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
feat(matrices): Extend and test DomainMatrix
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
Main work so far from #19686 split out as a separate PR.
Extends the DomainMatrix class added in #18844
#### Brief description of what is fixed or changed
Extend the DomainMatrix class by adding a bunch of methods, supporting functions and tests.
Add a lower-level class DDM that is a list subclass that represents a matrix as a list of lists of domain elements.
#### Other comments
This is still a work in progress for me but it has already reached a useful state and I'm not sure how much time I can spend working on it so I'm putting it here to be either merged or built upon.
The future work I want to do to complete this is:
1. Make better use of fraction-free algorithms
2. Make an immutable version of DDM based on tuple rather than list.
3. Add a sparse version SDM based on dict of dicts
4. Refactor the various algorithms to make use of each other.
5. Make the various functions have a consistent naming scheme and explain the scheme.
6. Add docs, comments etc.
The DomainMatrix class here can beat the performance of Matrix on many different benchmarks. Once this is usable we can begin to use this internally to speed up operations with matrices of pure Integer/Rational at least since in those cases there is no behaviour difference. Before that it would be good to use this to eliminate RawMatrix, NewMatrix, PolyMatrix.
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
NO ENTRY
<!-- END RELEASE NOTES -->
----------
</request>
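The request describes DDM as a list subclass wrapping a list of lists of domain elements, with in-place arithmetic helpers exercised by tests such as test_ddm_iadd and test_ddm_imatmul above. As a rough illustration only (a pure-Python sketch mirroring those test contracts, not the actual sympy implementation), two of the in-place helpers could look like:

```python
def ddm_iadd(a, b):
    # a += b: element-wise addition, mutating the rows of a in place
    for ai, bi in zip(a, b):
        for j, bij in enumerate(bi):
            ai[j] += bij


def ddm_imatmul(a, b, c):
    # a += b @ c: accumulate the matrix product of b and c into a
    for i, bi in enumerate(b):
        for j in range(len(c[0])):
            a[i][j] += sum(bik * c[k][j] for k, bik in enumerate(bi))


a = [[1, 2], [3, 4]]
ddm_iadd(a, [[5, 6], [7, 8]])
print(a)  # [[6, 8], [10, 12]], as in test_ddm_iadd

c1 = [[0, 0], [0, 0]]
ddm_imatmul(c1, [[1, 2, 3], [4, 5, 6]], [[1, 2], [3, 4], [5, 6]])
print(c1)  # [[22, 28], [49, 64]], as in test_ddm_imatmul
```

Working on plain lists this way is what lets the `ddm_*` routines stay domain-agnostic: the elements only need `+` and `*`.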
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/polys/domainmatrix.py]
(definition of DDMError:)
class DDMError(Exception):
"""Base class for errors raised by DDM"""
(definition of DDMBadInputError:)
class DDMBadInputError(DDMError):
"""list of lists is inconsistent with shape"""
(definition of DDMDomainError:)
class DDMDomainError(DDMError):
"""domains do not match"""
(definition of DDMShapeError:)
class DDMShapeError(DDMError):
"""shapes are inconsistent"""
(definition of DDM:)
class DDM(list):
"""Dense matrix based on polys domain elements
This is a list subclass and is a wrapper for a list of lists that supports
basic matrix arithmetic +, -, *, **."""
(definition of DDM.__init__:)
def __init__(self, rowslist, shape, domain):
(definition of DDM.__str__:)
def __str__(self):
(definition of DDM.__eq__:)
def __eq__(self, other):
(definition of DDM.__ne__:)
def __ne__(self, other):
(definition of DDM.zeros:)
def zeros(cls, shape, domain):
(definition of DDM.eye:)
def eye(cls, size, domain):
(definition of DDM.copy:)
def copy(self):
(definition of DDM.__add__:)
def __add__(a, b):
(definition of DDM.__sub__:)
def __sub__(a, b):
(definition of DDM.__neg__:)
def __neg__(a):
(definition of DDM.__mul__:)
def __mul__(a, b):
(definition of DDM.__matmul__:)
def __matmul__(a, b):
(definition of DDM._check:)
def _check(cls, a, op, b, ashape, bshape):
(definition of DDM.add:)
def add(a, b):
"""a + b"""
(definition of DDM.sub:)
def sub(a, b):
"""a - b"""
(definition of DDM.neg:)
def neg(a):
"""-a"""
(definition of DDM.matmul:)
def matmul(a, b):
"""a @ b (matrix product)"""
(definition of DDM.rref:)
def rref(a):
"""Reduced-row echelon form of a and list of pivots"""
(definition of DDM.det:)
def det(a):
"""Determinant of a"""
(definition of DDM.inv:)
def inv(a):
"""Inverse of a"""
(definition of DDM.lu:)
def lu(a):
"""L, U decomposition of a"""
(definition of DDM.lu_solve:)
def lu_solve(a, b):
"""x where a*x = b"""
(definition of DDM.charpoly:)
def charpoly(a):
"""Coefficients of characteristic polynomial of a"""
(definition of ddm_iadd:)
def ddm_iadd(a, b):
"""a += b"""
(definition of ddm_isub:)
def ddm_isub(a, b):
"""a -= b"""
(definition of ddm_ineg:)
def ddm_ineg(a):
"""a <-- -a"""
(definition of ddm_imatmul:)
def ddm_imatmul(a, b, c):
"""a += b @ c"""
(definition of ddm_irref:)
def ddm_irref(a):
"""a <-- rref(a)"""
(definition of ddm_idet:)
def ddm_idet(a, K):
"""a <-- echelon(a); return det"""
(definition of ddm_iinv:)
def ddm_iinv(ainv, a, K):
(definition of ddm_ilu_split:)
def ddm_ilu_split(L, U, K):
"""L, U <-- LU(U)"""
(definition of ddm_ilu:)
def ddm_ilu(a):
"""a <-- LU(a)"""
(definition of ddm_ilu_solve:)
def ddm_ilu_solve(x, L, U, swaps, b):
"""x <-- solve(L*U*x = swaps(b))"""
(definition of ddm_berk:)
def ddm_berk(M, K):
(definition of DomainMatrix:)
class DomainMatrix:
(definition of DomainMatrix.__init__:)
def __init__(self, rows, shape, domain):
(definition of DomainMatrix.from_ddm:)
def from_ddm(cls, ddm):
(definition of DomainMatrix.from_list_sympy:)
def from_list_sympy(cls, nrows, ncols, rows):
(definition of DomainMatrix.get_domain:)
def get_domain(cls, items_sympy, **kwargs):
(definition of DomainMatrix.convert_to:)
def convert_to(self, K):
(definition of DomainMatrix.to_field:)
def to_field(self):
(definition of DomainMatrix.unify:)
def unify(self, other):
(definition of DomainMatrix.to_Matrix:)
def to_Matrix(self):
(definition of DomainMatrix.__repr__:)
def __repr__(self):
(definition of DomainMatrix.__add__:)
def __add__(A, B):
(definition of DomainMatrix.__sub__:)
def __sub__(A, B):
(definition of DomainMatrix.__neg__:)
def __neg__(A):
(definition of DomainMatrix.__mul__:)
def __mul__(A, B):
"""A * B"""
(definition of DomainMatrix.__pow__:)
def __pow__(A, n):
"""A ** n"""
(definition of DomainMatrix.add:)
def add(A, B):
(definition of DomainMatrix.sub:)
def sub(A, B):
(definition of DomainMatrix.neg:)
def neg(A):
(definition of DomainMatrix.matmul:)
def matmul(A, B):
(definition of DomainMatrix.pow:)
def pow(A, n):
(definition of DomainMatrix.rref:)
def rref(self):
(definition of DomainMatrix.inv:)
def inv(self):
(definition of DomainMatrix.det:)
def det(self):
(definition of DomainMatrix.lu:)
def lu(self):
(definition of DomainMatrix.lu_solve:)
def lu_solve(self, rhs):
(definition of DomainMatrix.charpoly:)
def charpoly(self):
(definition of DomainMatrix.__eq__:)
def __eq__(A, B):
"""A == B"""
[end of new definitions in sympy/polys/domainmatrix.py]
</definitions>
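To illustrate the `ddm_irref` contract defined above ("a <-- rref(a)", returning the pivot columns), here is a minimal field-only sketch using `fractions.Fraction` to stand in for QQ. This is an illustrative Gauss-Jordan elimination matching the expected outputs in test_ddm_irref, not the actual sympy code:

```python
from fractions import Fraction


def ddm_irref(a):
    """In-place reduced row echelon form; returns the list of pivot columns."""
    pivots = []
    m = len(a)
    n = len(a[0]) if a else 0
    row = 0
    for col in range(n):
        if row >= m:
            break
        # find a row at or below `row` with a nonzero entry in this column
        piv = next((i for i in range(row, m) if a[i][col]), None)
        if piv is None:
            continue  # no pivot in this column
        a[row], a[piv] = a[piv], a[row]
        inv = 1 / a[row][col]  # field division, hence Fraction entries
        a[row] = [e * inv for e in a[row]]
        # eliminate the pivot column from every other row
        for i in range(m):
            if i != row and a[i][col]:
                f = a[i][col]
                a[i] = [e - f * p for e, p in zip(a[i], a[row])]
        pivots.append(col)
        row += 1
    return pivots


A = [[Fraction(1), Fraction(2), Fraction(1)],
     [Fraction(3), Fraction(4), Fraction(1)]]
print(ddm_irref(A))  # [0, 1]
print(A)             # [[1, 0, -1], [0, 1, 1]], as in test_ddm_irref
```

The "missing pivot" cases in test_ddm_irref correspond to the `piv is None` branch, where the column is skipped without advancing `row`.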
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1b4529a95ef641c2fc15889091b281644069d20e | ||
sympy__sympy-19880 | 19,880 | sympy/sympy | 1.7 | 9547fa20eae7f8c9bf475b393e9191b36a9b748c | 2020-08-02T19:57:09Z | diff --git a/sympy/codegen/cfunctions.py b/sympy/codegen/cfunctions.py
index 5b935efe08c3..b55d4b2136dd 100644
--- a/sympy/codegen/cfunctions.py
+++ b/sympy/codegen/cfunctions.py
@@ -256,6 +256,9 @@ def eval(cls, arg):
elif arg.is_Pow and arg.base == _Two:
return arg.exp
+ def _eval_evalf(self, *args, **kwargs):
+ return self.rewrite(log).evalf(*args, **kwargs)
+
def _eval_expand_func(self, **hints):
return _log2(*self.args)
diff --git a/sympy/codegen/numpy_nodes.py b/sympy/codegen/numpy_nodes.py
index 6b629e1b7b59..3d7476f3f5be 100644
--- a/sympy/codegen/numpy_nodes.py
+++ b/sympy/codegen/numpy_nodes.py
@@ -1,4 +1,5 @@
from sympy.core.function import Add, ArgumentIndexError, Function
+from sympy.core.power import Pow
from sympy.core.singleton import S
from sympy.functions.elementary.exponential import exp, log
from sympy.utilities import default_sort_key
@@ -8,11 +9,28 @@ def _logaddexp(x1, x2, *, evaluate=True):
return log(Add(exp(x1, evaluate=evaluate), exp(x2, evaluate=evaluate), evaluate=evaluate))
+_two = S.One*2
+_ln2 = log(_two)
+
+
+def _lb(x, *, evaluate=True):
+ return log(x, evaluate=evaluate)/_ln2
+
+
+def _exp2(x, *, evaluate=True):
+ return Pow(_two, x, evaluate=evaluate)
+
+
+def _logaddexp2(x1, x2, *, evaluate=True):
+ return _lb(Add(_exp2(x1, evaluate=evaluate),
+ _exp2(x2, evaluate=evaluate), evaluate=evaluate))
+
+
class logaddexp(Function):
""" Logarithm of the sum of exponentiations of the inputs.
Helper class for use with e.g. numpy.logaddexp
- See: https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.logaddexp.html
+ See: https://numpy.org/doc/stable/reference/generated/numpy.logaddexp.html
"""
nargs = 2
@@ -34,6 +52,9 @@ def fdiff(self, argindex=1):
def _eval_rewrite_as_log(self, x1, x2, **kwargs):
return _logaddexp(x1, x2)
+ def _eval_evalf(self, *args, **kwargs):
+ return self.rewrite(log).evalf(*args, **kwargs)
+
def _eval_simplify(self, *args, **kwargs):
a, b = map(lambda x: x.simplify(**kwargs), self.args)
candidate = _logaddexp(a, b)
@@ -41,3 +62,41 @@ def _eval_simplify(self, *args, **kwargs):
return candidate
else:
return logaddexp(a, b)
+
+
+class logaddexp2(Function):
+ """ Logarithm of the sum of exponentiations of the inputs in base-2.
+
+ Helper class for use with e.g. numpy.logaddexp2
+ See: https://numpy.org/doc/stable/reference/generated/numpy.logaddexp2.html
+ """
+ nargs = 2
+
+ def __new__(cls, *args):
+ return Function.__new__(cls, *sorted(args, key=default_sort_key))
+
+ def fdiff(self, argindex=1):
+ """
+ Returns the first derivative of this function.
+ """
+ if argindex == 1:
+ wrt, other = self.args
+ elif argindex == 2:
+ other, wrt = self.args
+ else:
+ raise ArgumentIndexError(self, argindex)
+ return S.One/(S.One + _exp2(other-wrt))
+
+ def _eval_rewrite_as_log(self, x1, x2, **kwargs):
+ return _logaddexp2(x1, x2)
+
+ def _eval_evalf(self, *args, **kwargs):
+ return self.rewrite(log).evalf(*args, **kwargs)
+
+ def _eval_simplify(self, *args, **kwargs):
+ a, b = map(lambda x: x.simplify(**kwargs).factor(), self.args)
+ candidate = _logaddexp2(a, b)
+ if candidate != _logaddexp2(a, b, evaluate=False):
+ return candidate
+ else:
+ return logaddexp2(a, b)
diff --git a/sympy/codegen/rewriting.py b/sympy/codegen/rewriting.py
index a20e1e04117c..ccbe4bc8de9d 100644
--- a/sympy/codegen/rewriting.py
+++ b/sympy/codegen/rewriting.py
@@ -36,7 +36,8 @@
from sympy.codegen.cfunctions import log1p, log2, exp2, expm1
from sympy.codegen.matrix_nodes import MatrixSolve
from sympy.core.expr import UnevaluatedExpr
-from sympy.codegen.numpy_nodes import logaddexp
+from sympy.core.power import Pow
+from sympy.codegen.numpy_nodes import logaddexp, logaddexp2
from sympy.core.mul import Mul
from sympy.matrices.expressions.matexpr import MatrixSymbol
from sympy.utilities.iterables import sift
@@ -252,8 +253,9 @@ def _matinv_transform(expr):
logaddexp_opt = ReplaceOptim(log(exp(_v)+exp(_w)), logaddexp(_v, _w))
+logaddexp2_opt = ReplaceOptim(log(Pow(2, _v)+Pow(2, _w)), logaddexp2(_v, _w)*log(2))
# Collections of optimizations:
optims_c99 = (expm1_opt, log1p_opt, exp2_opt, log2_opt, log2const_opt)
-optims_numpy = (logaddexp_opt,)
+optims_numpy = (logaddexp_opt, logaddexp2_opt)
diff --git a/sympy/printing/pycode.py b/sympy/printing/pycode.py
index a6e809168eda..5862c8fb514e 100644
--- a/sympy/printing/pycode.py
+++ b/sympy/printing/pycode.py
@@ -594,6 +594,7 @@ def _print_Pow(self, expr, rational=False):
'exp2': 'exp2',
'sign': 'sign',
'logaddexp': 'logaddexp',
+ 'logaddexp2': 'logaddexp2',
})
_known_constants_numpy = {
'Exp1': 'e',
| diff --git a/sympy/codegen/tests/test_numpy_nodes.py b/sympy/codegen/tests/test_numpy_nodes.py
index 46217b65901a..6b4a9ceff43b 100644
--- a/sympy/codegen/tests/test_numpy_nodes.py
+++ b/sympy/codegen/tests/test_numpy_nodes.py
@@ -1,6 +1,6 @@
from itertools import product
-from sympy import symbols, exp, log
-from sympy.codegen.numpy_nodes import logaddexp
+from sympy import symbols, exp, log, S
+from sympy.codegen.numpy_nodes import logaddexp, logaddexp2
x, y, z = symbols('x y z')
@@ -21,3 +21,25 @@ def test_logaddexp():
assert lae_sum_to_e.rewrite(log) == 1
assert lae_sum_to_e.simplify() == 1
assert logaddexp(2, 3).simplify() == logaddexp(2, 3) # cannot simplify with 2, 3
+
+
+def test_logaddexp2():
+ lae2_xy = logaddexp2(x, y)
+ ref2_xy = log(2**x + 2**y)/log(2)
+ for wrt, deriv_order in product([x, y, z], range(0, 3)):
+ assert (
+ lae2_xy.diff(wrt, deriv_order) -
+ ref2_xy.diff(wrt, deriv_order)
+ ).rewrite(log).simplify() == 0
+
+ def lb(x):
+ return log(x)/log(2)
+
+ two_thirds = S.One*2/3
+ four_thirds = 2*two_thirds
+ lbTwoThirds = lb(two_thirds)
+ lbFourThirds = lb(four_thirds)
+ lae2_sum_to_2 = logaddexp2(lbTwoThirds, lbFourThirds)
+ assert lae2_sum_to_2.rewrite(log) == 1
+ assert lae2_sum_to_2.simplify() == 1
+ assert logaddexp2(x, y).simplify() == logaddexp2(x, y) # cannot simplify with x, y
diff --git a/sympy/codegen/tests/test_rewriting.py b/sympy/codegen/tests/test_rewriting.py
index 0ab628604d85..057f9509c899 100644
--- a/sympy/codegen/tests/test_rewriting.py
+++ b/sympy/codegen/tests/test_rewriting.py
@@ -3,10 +3,10 @@
from sympy.printing.ccode import ccode
from sympy.codegen.matrix_nodes import MatrixSolve
from sympy.codegen.cfunctions import log2, exp2, expm1, log1p
-from sympy.codegen.numpy_nodes import logaddexp
+from sympy.codegen.numpy_nodes import logaddexp, logaddexp2
from sympy.codegen.rewriting import (
optimize, log2_opt, exp2_opt, expm1_opt, log1p_opt, optims_c99,
- create_expand_pow_optimization, matinv_opt, logaddexp_opt
+ create_expand_pow_optimization, matinv_opt, logaddexp_opt, logaddexp2_opt
)
from sympy.testing.pytest import XFAIL
@@ -192,3 +192,12 @@ def test_logaddexp_opt():
assert logaddexp(x, y) - opt1 == 0
assert logaddexp(y, x) - opt1 == 0
assert opt1.rewrite(log) == expr1
+
+
+def test_logaddexp2_opt():
+ x, y = map(Symbol, 'x y'.split())
+ expr1 = log(2**x + 2**y)/log(2)
+ opt1 = optimize(expr1, [logaddexp2_opt])
+ assert logaddexp2(x, y) - opt1 == 0
+ assert logaddexp2(y, x) - opt1 == 0
+ assert opt1.rewrite(log) == expr1
diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index 0b6029d8d547..d940f5e88978 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -457,6 +457,11 @@ def test_sympy__codegen__numpy_nodes__logaddexp():
assert _test_args(logaddexp(x, y))
+def test_sympy__codegen__numpy_nodes__logaddexp2():
+ from sympy.codegen.numpy_nodes import logaddexp2
+ assert _test_args(logaddexp2(x, y))
+
+
@XFAIL
def test_sympy__combinatorics__graycode__GrayCode():
from sympy.combinatorics.graycode import GrayCode
diff --git a/sympy/printing/tests/test_numpy.py b/sympy/printing/tests/test_numpy.py
index 69cfbe0cf480..515729f60844 100644
--- a/sympy/printing/tests/test_numpy.py
+++ b/sympy/printing/tests/test_numpy.py
@@ -6,7 +6,7 @@
from sympy.abc import x, i, j, a, b, c, d
from sympy.core import Pow
from sympy.codegen.matrix_nodes import MatrixSolve
-from sympy.codegen.numpy_nodes import logaddexp
+from sympy.codegen.numpy_nodes import logaddexp, logaddexp2
from sympy.codegen.cfunctions import log1p, expm1, hypot, log10, exp2, log2, Sqrt
from sympy.codegen.array_utils import (CodegenArrayTensorProduct, CodegenArrayDiagonal,
CodegenArrayPermuteDims, CodegenArrayElementwiseAdd, parse_matrix_expression)
@@ -34,6 +34,8 @@ def test_numpy_piecewise_regression():
def test_numpy_logaddexp():
lae = logaddexp(a, b)
assert NumPyPrinter().doprint(lae) == 'numpy.logaddexp(a, b)'
+ lae2 = logaddexp2(a, b)
+ assert NumPyPrinter().doprint(lae2) == 'numpy.logaddexp2(a, b)'
def test_sum():
diff --git a/sympy/utilities/tests/test_lambdify.py b/sympy/utilities/tests/test_lambdify.py
index 708816385349..4b61384becd1 100644
--- a/sympy/utilities/tests/test_lambdify.py
+++ b/sympy/utilities/tests/test_lambdify.py
@@ -11,6 +11,8 @@
DotProduct, Eq, Dummy, sinc, erf, erfc, factorial, gamma, loggamma,
digamma, RisingFactorial, besselj, bessely, besseli, besselk, S, beta,
fresnelc, fresnels)
+from sympy.codegen.cfunctions import expm1, log1p, exp2, log2, log10, hypot
+from sympy.codegen.numpy_nodes import logaddexp, logaddexp2
from sympy.functions.elementary.complexes import re, im, arg
from sympy.functions.special.polynomials import \
chebyshevt, chebyshevu, legendre, hermite, laguerre, gegenbauer, \
@@ -959,10 +961,10 @@ def test_Indexed():
def test_issue_12173():
#test for issue 12173
- exp1 = lambdify((x, y), uppergamma(x, y),"mpmath")(1, 2)
- exp2 = lambdify((x, y), lowergamma(x, y),"mpmath")(1, 2)
- assert exp1 == uppergamma(1, 2).evalf()
- assert exp2 == lowergamma(1, 2).evalf()
+ expr1 = lambdify((x, y), uppergamma(x, y),"mpmath")(1, 2)
+ expr2 = lambdify((x, y), lowergamma(x, y),"mpmath")(1, 2)
+ assert expr1 == uppergamma(1, 2).evalf()
+ assert expr2 == lowergamma(1, 2).evalf()
def test_issue_13642():
@@ -1275,3 +1277,28 @@ def test_beta_math():
F = lambdify((x, y), f, modules='math')
assert abs(beta(1.3, 2.3) - F(1.3, 2.3)) <= 1e-10
+
+
+def test_numpy_special_math():
+ if not numpy:
+ skip("numpy not installed")
+
+ funcs = [expm1, log1p, exp2, log2, log10, hypot, logaddexp, logaddexp2]
+ for func in funcs:
+ if 2 in func.nargs:
+ expr = func(x, y)
+ args = (x, y)
+ num_args = (0.3, 0.4)
+ elif 1 in func.nargs:
+ expr = func(x)
+ args = (x,)
+ num_args = (0.3,)
+ else:
+ raise NotImplementedError("Need to handle other than unary & binary functions in test")
+ f = lambdify(args, expr)
+ result = f(*num_args)
+ reference = expr.subs(dict(zip(args, num_args))).evalf()
+ assert numpy.allclose(result, float(reference))
+
+ lae2 = lambdify((x, y), logaddexp2(log2(x), log2(y)))
+ assert abs(2.0**lae2(1e-50, 2.5e-50) - 3.5e-50) < 1e-62 # from NumPy's docstring
| [
{
"components": [
{
"doc": "",
"lines": [
259,
260
],
"name": "log2._eval_evalf",
"signature": "def _eval_evalf(self, *args, **kwargs):",
"type": "function"
}
],
"file": "sympy/codegen/cfunctions.py"
},
{
"components... | [
"test_logaddexp",
"test_log2_opt",
"test_exp2_opt",
"test_expm1_opt",
"test_log1p_opt",
"test_optims_c99",
"test_create_expand_pow_optimization",
"test_matsolve",
"test_logaddexp_opt",
"test_sympy__codegen__numpy_nodes__logaddexp2",
"test_numpy_piecewise_regression",
"test_numpy_logaddexp",
... | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_sympy__assumptions__assume__Predicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers__AnyArgs",
"test_sympy__assu... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add to codegen: numpy.logaddexp2
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
This is a continuation of gh-19742
#### Brief description of what is fixed or changed
Added support for generating calls to `numpy.logaddexp2`
#### Other comments
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* codegen
* Added support for generating calls to `numpy.logaddexp` & `numpy.logaddexp2`.
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/codegen/cfunctions.py]
(definition of log2._eval_evalf:)
def _eval_evalf(self, *args, **kwargs):
[end of new definitions in sympy/codegen/cfunctions.py]
[start of new definitions in sympy/codegen/numpy_nodes.py]
(definition of _lb:)
def _lb(x, *, evaluate=True):
(definition of _exp2:)
def _exp2(x, *, evaluate=True):
(definition of _logaddexp2:)
def _logaddexp2(x1, x2, *, evaluate=True):
(definition of logaddexp._eval_evalf:)
def _eval_evalf(self, *args, **kwargs):
(definition of logaddexp2:)
class logaddexp2(Function):
"""Logarithm of the sum of exponentiations of the inputs in base-2.
Helper class for use with e.g. numpy.logaddexp2
See: https://numpy.org/doc/stable/reference/generated/numpy.logaddexp2.html"""
(definition of logaddexp2.__new__:)
def __new__(cls, *args):
(definition of logaddexp2.fdiff:)
def fdiff(self, argindex=1):
"""Returns the first derivative of this function."""
(definition of logaddexp2._eval_rewrite_as_log:)
def _eval_rewrite_as_log(self, x1, x2, **kwargs):
(definition of logaddexp2._eval_evalf:)
def _eval_evalf(self, *args, **kwargs):
(definition of logaddexp2._eval_simplify:)
def _eval_simplify(self, *args, **kwargs):
[end of new definitions in sympy/codegen/numpy_nodes.py]
</definitions>
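The definitions above center on the base-2 log-sum-exp identity `logaddexp2(x1, x2) = log2(2**x1 + 2**x2)`. As a minimal pure-Python sketch (not SymPy's symbolic implementation), the numerically stable form factors out the larger exponent so the intermediate power stays in `[0, 1]`:

```python
import math

def logaddexp2(x1, x2):
    """Numerically stable log2(2**x1 + 2**x2).

    Factoring out the larger exponent keeps 2**(smaller - larger)
    in [0, 1], so nothing underflows or overflows.
    """
    m, n = max(x1, x2), min(x1, x2)
    return m + math.log2(1.0 + 2.0 ** (n - m))

# Agrees with the naive formula where both work:
print(logaddexp2(1, 2))          # log2(2 + 4) = log2(6) ~= 2.585
# Naive log2(2**-2000 + 2**-2001) underflows to log2(0), but the
# stable version stays finite:
print(logaddexp2(-2000, -2001))  # -2000 + log2(1.5) ~= -1999.415
```

This is the same precision motivation behind the `lae2(1e-50, 2.5e-50)` check in the test patch, which comes from NumPy's own `logaddexp2` docstring.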
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1b4529a95ef641c2fc15889091b281644069d20e | ||
sphinx-doc__sphinx-8035 | 8,035 | sphinx-doc/sphinx | 3.2 | 5e6da19f0e44a0ae83944fb6ce18f18f781e1a6e | 2020-08-01T16:28:05Z | diff --git a/CHANGES b/CHANGES
index eadace6dcd4..79efd2c8df0 100644
--- a/CHANGES
+++ b/CHANGES
@@ -10,6 +10,7 @@ Incompatible changes
Deprecated
----------
+* ``sphinx.ext.autodoc.merge_special_members_option()``
* ``sphinx.writers.texinfo.TexinfoWriter.desc``
* C, parsing of pre-v3 style type directives and roles, along with the options
:confval:`c_allow_pre_v3` and :confval:`c_warn_on_allowed_pre_v3`.
@@ -18,6 +19,8 @@ Features added
--------------
* #2076: autodoc: Allow overriding of exclude-members in skip-member function
+* #8034: autodoc: ``:private-member:`` can take an explicit list of member names
+ to be documented
* #2024: autosummary: Add :confval:`autosummary_filename_map` to avoid conflict
of filenames between two object with different case
* #7849: html: Add :confval:`html_codeblock_linenos_style` to change the style
diff --git a/doc/extdev/deprecated.rst b/doc/extdev/deprecated.rst
index ceb88d45f74..d6e87b45923 100644
--- a/doc/extdev/deprecated.rst
+++ b/doc/extdev/deprecated.rst
@@ -26,6 +26,11 @@ The following is a list of deprecated interfaces.
- (will be) Removed
- Alternatives
+ * - ``sphinx.ext.autodoc.merge_special_members_option()``
+ - 3.2
+ - 5.0
+ - ``sphinx.ext.autodoc.merge_members_option()``
+
* - ``sphinx.writers.texinfo.TexinfoWriter.desc``
- 3.2
- 5.0
diff --git a/doc/usage/extensions/autodoc.rst b/doc/usage/extensions/autodoc.rst
index 150b62c4d93..71f49c24044 100644
--- a/doc/usage/extensions/autodoc.rst
+++ b/doc/usage/extensions/autodoc.rst
@@ -136,9 +136,22 @@ inserting them into the page source under a suitable :rst:dir:`py:module`,
:undoc-members:
* "Private" members (that is, those named like ``_private`` or ``__private``)
- will be included if the ``private-members`` flag option is given.
+ will be included if the ``private-members`` flag option is given::
+
+ .. automodule:: noodle
+ :members:
+ :private-members:
+
+ It can also take an explicit list of member names to be documented as
+ arguments::
+
+ .. automodule:: noodle
+ :members:
+ :private-members: _spicy, _garlickly
.. versionadded:: 1.1
+ .. versionchanged:: 3.2
+ The option can now take arguments.
* autodoc considers a member private if its docstring contains
``:meta private:`` in its :ref:`info-field-lists`.
diff --git a/sphinx/ext/autodoc/__init__.py b/sphinx/ext/autodoc/__init__.py
index 8c3509c3bbb..9f8820fdc06 100644
--- a/sphinx/ext/autodoc/__init__.py
+++ b/sphinx/ext/autodoc/__init__.py
@@ -125,6 +125,8 @@ def bool_option(arg: Any) -> bool:
def merge_special_members_option(options: Dict) -> None:
"""Merge :special-members: option to :members: option."""
+ warnings.warn("merge_special_members_option() is deprecated.",
+ RemovedInSphinx50Warning, stacklevel=2)
if 'special-members' in options and options['special-members'] is not ALL:
if options.get('members') is ALL:
pass
@@ -136,6 +138,20 @@ def merge_special_members_option(options: Dict) -> None:
options['members'] = options['special-members']
+def merge_members_option(options: Dict) -> None:
+ """Merge :*-members: option to the :members: option."""
+ if options.get('members') is ALL:
+ # merging is not needed when members: ALL
+ return
+
+ members = options.setdefault('members', [])
+ for key in {'private-members', 'special-members'}:
+ if key in options and options[key] is not ALL:
+ for member in options[key]:
+ if member not in members:
+ members.append(member)
+
+
# Some useful event listener factories for autodoc-process-docstring.
def cut_lines(pre: int, post: int = 0, what: str = None) -> Callable:
@@ -648,16 +664,28 @@ def is_filtered_inherited_member(name: str) -> bool:
keep = has_doc or self.options.undoc_members
elif (namespace, membername) in attr_docs:
if want_all and isprivate:
- # ignore members whose name starts with _ by default
- keep = self.options.private_members
+ if self.options.private_members is None:
+ keep = False
+ elif self.options.private_members is ALL:
+ keep = True
+ else:
+ keep = membername in self.options.private_members
else:
# keep documented attributes
keep = True
isattr = True
elif want_all and isprivate:
- # ignore members whose name starts with _ by default
- keep = self.options.private_members and \
- (has_doc or self.options.undoc_members)
+ if has_doc or self.options.undoc_members:
+ if self.options.private_members is None:
+ keep = False
+ elif self.options.private_members is ALL:
+ keep = True
+ elif is_filtered_inherited_member(membername):
+ keep = False
+ else:
+ keep = membername in self.options.private_members
+ else:
+ keep = False
else:
if self.options.members is ALL and is_filtered_inherited_member(membername):
keep = False
@@ -859,13 +887,13 @@ class ModuleDocumenter(Documenter):
'show-inheritance': bool_option, 'synopsis': identity,
'platform': identity, 'deprecated': bool_option,
'member-order': member_order_option, 'exclude-members': members_set_option,
- 'private-members': bool_option, 'special-members': members_option,
+ 'private-members': members_option, 'special-members': members_option,
'imported-members': bool_option, 'ignore-module-all': bool_option
} # type: Dict[str, Callable]
def __init__(self, *args: Any) -> None:
super().__init__(*args)
- merge_special_members_option(self.options)
+ merge_members_option(self.options)
self.__all__ = None
@classmethod
@@ -1279,7 +1307,7 @@ class ClassDocumenter(DocstringSignatureMixin, ModuleLevelDocumenter): # type:
'noindex': bool_option, 'inherited-members': inherited_members_option,
'show-inheritance': bool_option, 'member-order': member_order_option,
'exclude-members': members_set_option,
- 'private-members': bool_option, 'special-members': members_option,
+ 'private-members': members_option, 'special-members': members_option,
} # type: Dict[str, Callable]
_signature_class = None # type: Any
@@ -1287,7 +1315,7 @@ class ClassDocumenter(DocstringSignatureMixin, ModuleLevelDocumenter): # type:
def __init__(self, *args: Any) -> None:
super().__init__(*args)
- merge_special_members_option(self.options)
+ merge_members_option(self.options)
@classmethod
def can_document_member(cls, member: Any, membername: str, isattr: bool, parent: Any
| diff --git a/tests/test_ext_autodoc_private_members.py b/tests/test_ext_autodoc_private_members.py
index f4cadd3a543..9d7ff487a03 100644
--- a/tests/test_ext_autodoc_private_members.py
+++ b/tests/test_ext_autodoc_private_members.py
@@ -60,3 +60,24 @@ def test_private_field_and_private_members(app):
' :meta private:',
'',
]
+
+
+@pytest.mark.sphinx('html', testroot='ext-autodoc')
+def test_private_members(app):
+ app.config.autoclass_content = 'class'
+ options = {"members": None,
+ "private-members": "_public_function"}
+ actual = do_autodoc(app, 'module', 'target.private', options)
+ assert list(actual) == [
+ '',
+ '.. py:module:: target.private',
+ '',
+ '',
+ '.. py:function:: _public_function(name)',
+ ' :module: target.private',
+ '',
+ ' public_function is a docstring().',
+ '',
+ ' :meta public:',
+ '',
+ ]
| diff --git a/CHANGES b/CHANGES
index eadace6dcd4..79efd2c8df0 100644
--- a/CHANGES
+++ b/CHANGES
@@ -10,6 +10,7 @@ Incompatible changes
Deprecated
----------
+* ``sphinx.ext.autodoc.merge_special_members_option()``
* ``sphinx.writers.texinfo.TexinfoWriter.desc``
* C, parsing of pre-v3 style type directives and roles, along with the options
:confval:`c_allow_pre_v3` and :confval:`c_warn_on_allowed_pre_v3`.
@@ -18,6 +19,8 @@ Features added
--------------
* #2076: autodoc: Allow overriding of exclude-members in skip-member function
+* #8034: autodoc: ``:private-member:`` can take an explicit list of member names
+ to be documented
* #2024: autosummary: Add :confval:`autosummary_filename_map` to avoid conflict
of filenames between two object with different case
* #7849: html: Add :confval:`html_codeblock_linenos_style` to change the style
diff --git a/doc/extdev/deprecated.rst b/doc/extdev/deprecated.rst
index ceb88d45f74..d6e87b45923 100644
--- a/doc/extdev/deprecated.rst
+++ b/doc/extdev/deprecated.rst
@@ -26,6 +26,11 @@ The following is a list of deprecated interfaces.
- (will be) Removed
- Alternatives
+ * - ``sphinx.ext.autodoc.merge_special_members_option()``
+ - 3.2
+ - 5.0
+ - ``sphinx.ext.autodoc.merge_members_option()``
+
* - ``sphinx.writers.texinfo.TexinfoWriter.desc``
- 3.2
- 5.0
diff --git a/doc/usage/extensions/autodoc.rst b/doc/usage/extensions/autodoc.rst
index 150b62c4d93..71f49c24044 100644
--- a/doc/usage/extensions/autodoc.rst
+++ b/doc/usage/extensions/autodoc.rst
@@ -136,9 +136,22 @@ inserting them into the page source under a suitable :rst:dir:`py:module`,
:undoc-members:
* "Private" members (that is, those named like ``_private`` or ``__private``)
- will be included if the ``private-members`` flag option is given.
+ will be included if the ``private-members`` flag option is given::
+
+ .. automodule:: noodle
+ :members:
+ :private-members:
+
+ It can also take an explicit list of member names to be documented as
+ arguments::
+
+ .. automodule:: noodle
+ :members:
+ :private-members: _spicy, _garlickly
.. versionadded:: 1.1
+ .. versionchanged:: 3.2
+ The option can now take arguments.
* autodoc considers a member private if its docstring contains
``:meta private:`` in its :ref:`info-field-lists`.
| [
{
"components": [
{
"doc": "Merge :*-members: option to the :members: option.",
"lines": [
141,
152
],
"name": "merge_members_option",
"signature": "def merge_members_option(options: Dict) -> None:",
"type": "function"
}
],
... | [
"tests/test_ext_autodoc_private_members.py::test_private_members"
] | [
"tests/test_ext_autodoc_private_members.py::test_private_field",
"tests/test_ext_autodoc_private_members.py::test_private_field_and_private_members"
] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Close #8034: autodoc: :private-member: can take an list of member names
### Feature or Bugfix
- Feature
### Purpose
- refs: #8034
- Now the :private-members: option can take an explicit list of member names to
be documented as its arguments (like :special-members: does).
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sphinx/ext/autodoc/__init__.py]
(definition of merge_members_option:)
def merge_members_option(options: Dict) -> None:
"""Merge :*-members: option to the :members: option."""
[end of new definitions in sphinx/ext/autodoc/__init__.py]
</definitions>
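The behaviour of `merge_members_option` defined above can be sketched in plain Python. Here `ALL` is a stand-in sentinel for Sphinx's special `sphinx.ext.autodoc.ALL` marker, and the option keys are iterated as a tuple for deterministic order; otherwise the logic mirrors the patch:

```python
# Stand-in for sphinx.ext.autodoc.ALL (a special marker object in Sphinx).
ALL = object()

def merge_members_option(options):
    """Merge :private-members: / :special-members: args into :members:.

    Explicit name lists are folded into the members list; when members
    is ALL, every member is documented already and no merge is needed.
    """
    if options.get('members') is ALL:
        return
    members = options.setdefault('members', [])
    for key in ('private-members', 'special-members'):
        if key in options and options[key] is not ALL:
            for member in options[key]:
                if member not in members:
                    members.append(member)

opts = {'members': [], 'private-members': ['_spicy', '_garlickly']}
merge_members_option(opts)
print(opts['members'])  # ['_spicy', '_garlickly']
```

This is why `:private-members: _public_function` in the test patch ends up documenting exactly that one private name.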
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Support defining specific `:private-members:` for autodoc
**Is your feature request related to a problem? Please describe.**
Currently, if I'm using autodoc, the `:private-members:` option does not allow specification of which private members to document. The current behavior is to document all private members, but what if I would only like to document 1 or 2?
**Describe the solution you'd like**
For `:private-members:` to take arguments, similarly to how `:members:` currently works
**Describe alternatives you've considered**
The current best way to do this is to explicitly list each class in a module and use `:autoattribute:`
- Some prior discussion: https://github.com/sphinx-doc/sphinx/issues/8009
----------
--------------------
</issues> | 5e6da19f0e44a0ae83944fb6ce18f18f781e1a6e |
sympy__sympy-19857 | 19,857 | sympy/sympy | 1.7 | 5a9542f27417151ed93242a5e9df6d7911f3c34c | 2020-07-30T07:50:29Z | diff --git a/sympy/stats/matrix_distributions.py b/sympy/stats/matrix_distributions.py
index 4c3a0162bb35..6534b7195814 100644
--- a/sympy/stats/matrix_distributions.py
+++ b/sympy/stats/matrix_distributions.py
@@ -1,11 +1,20 @@
from sympy import S, Basic, exp, multigamma, pi
from sympy.core.sympify import sympify, _sympify
from sympy.matrices import (ImmutableMatrix, Inverse, Trace, Determinant,
- MatrixSymbol, MatrixBase, Transpose, MatrixSet)
+ MatrixSymbol, MatrixBase, Transpose, MatrixSet,
+ matrix2numpy)
from sympy.stats.rv import (_value_check, RandomMatrixSymbol, NamedArgsMixin, PSpace,
- _symbol_converter)
+ _symbol_converter, MatrixDomain)
+from sympy.external import import_module
+scipy = import_module('scipy')
+numpy = import_module('numpy')
+pymc3 = import_module('pymc3')
+
+################################################################################
+#------------------------Matrix Probability Space------------------------------#
+################################################################################
class MatrixPSpace(PSpace):
"""
Represents probability space for
@@ -21,10 +30,18 @@ def __new__(cls, sym, distribution, dim_n, dim_m):
distribution = property(lambda self: self.args[1])
symbol = property(lambda self: self.args[0])
+ @property
+ def domain(self):
+ return MatrixDomain(self.symbol, self.distribution.set)
+
@property
def value(self):
return RandomMatrixSymbol(self.symbol, self.args[2], self.args[3], self)
+ @property
+ def values(self):
+ return {self.value}
+
def compute_density(self, expr, *args):
rms = expr.atoms(RandomMatrixSymbol)
if len(rms) > 1 or (not isinstance(expr, RandomMatrixSymbol)):
@@ -33,6 +50,14 @@ def compute_density(self, expr, *args):
"multiple matrix distributions.")
return self.distribution.pdf(expr)
+ def sample(self, size=(), library='scipy'):
+ """
+ Internal sample method
+
+ Returns dictionary mapping RandomMatrixSymbol to realization value.
+ """
+ return {self.value: self.distribution.sample(size, library=library)}
+
def rv(symbol, cls, args):
args = list(map(sympify, args))
@@ -43,6 +68,93 @@ def rv(symbol, cls, args):
return pspace.value
+class SampleMatrixScipy:
+ """Returns the sample from scipy of the given distribution"""
+ def __new__(cls, dist, size):
+ return cls._sample_scipy(dist, size)
+
+ scipy_rv_map = {
+ 'WishartDistribution': lambda dist, size: scipy.stats.wishart.rvs(
+ df=int(dist.n), scale=matrix2numpy(dist.scale_matrix, float), size=size),
+ 'MatrixNormalDistribution': lambda dist, size: scipy.stats.matrix_normal.rvs(
+ mean=matrix2numpy(dist.location_matrix, float),
+ rowcov=matrix2numpy(dist.scale_matrix_1, float),
+ colcov=matrix2numpy(dist.scale_matrix_2, float), size=size)
+ }
+
+ @classmethod
+ def _sample_scipy(cls, dist, size):
+ """Sample from SciPy."""
+
+ dist_list = cls.scipy_rv_map.keys()
+
+ if dist.__class__.__name__ not in dist_list:
+ return None
+
+ return cls.scipy_rv_map[dist.__class__.__name__](dist, size)
+
+
+class SampleMatrixNumpy:
+ """Returns the sample from numpy of the given distribution"""
+
+ ### TODO: Add tests after adding matrix distributions in numpy_rv_map
+ def __new__(cls, dist, size):
+ return cls._sample_numpy(dist, size)
+
+ numpy_rv_map = {
+ }
+
+ @classmethod
+ def _sample_numpy(cls, dist, size):
+ """Sample from NumPy."""
+
+ dist_list = cls.numpy_rv_map.keys()
+
+ if dist.__class__.__name__ not in dist_list:
+ return None
+
+ return cls.numpy_rv_map[dist.__class__.__name__](dist, size)
+
+
+class SampleMatrixPymc:
+ """Returns the sample from pymc3 of the given distribution"""
+
+ def __new__(cls, dist, size):
+ return cls._sample_pymc3(dist, size)
+
+ pymc3_rv_map = {
+ 'MatrixNormalDistribution': lambda dist: pymc3.MatrixNormal('X',
+ mu=matrix2numpy(dist.location_matrix, float),
+ rowcov=matrix2numpy(dist.scale_matrix_1, float),
+ colcov=matrix2numpy(dist.scale_matrix_2, float),
+ shape=dist.location_matrix.shape),
+ 'WishartDistribution': lambda dist: pymc3.WishartBartlett('X',
+ nu=int(dist.n), S=matrix2numpy(dist.scale_matrix, float))
+ }
+
+ @classmethod
+ def _sample_pymc3(cls, dist, size):
+ """Sample from PyMC3."""
+
+ dist_list = cls.pymc3_rv_map.keys()
+
+ if dist.__class__.__name__ not in dist_list:
+ return None
+
+ with pymc3.Model():
+ cls.pymc3_rv_map[dist.__class__.__name__](dist)
+ return pymc3.sample(size, chains=1, progressbar=False)[:]['X']
+
+_get_sample_class_matrixrv = {
+ 'scipy': SampleMatrixScipy,
+ 'pymc3': SampleMatrixPymc,
+ 'numpy': SampleMatrixNumpy
+}
+
+################################################################################
+#-------------------------Matrix Distribution----------------------------------#
+################################################################################
+
class MatrixDistribution(Basic, NamedArgsMixin):
"""
Abstract class for Matrix Distribution
@@ -60,10 +172,32 @@ def __call__(self, expr):
expr = ImmutableMatrix(expr)
return self.pdf(expr)
+ def sample(self, size=(), library='scipy'):
+ """
+ Internal sample method
+
+ Returns dictionary mapping RandomSymbol to realization value.
+ """
+
+ libraries = ['scipy', 'numpy', 'pymc3']
+ if library not in libraries:
+ raise NotImplementedError("Sampling from %s is not supported yet."
+ % str(library))
+ if not import_module(library):
+ raise ValueError("Failed to import %s" % library)
+
+ samps = _get_sample_class_matrixrv[library](self, size)
+
+ if samps is not None:
+ return samps
+ raise NotImplementedError(
+ "Sampling for %s is not currently implemented from %s"
+ % (self.__class__.__name__, library)
+ )
-########################################
-#--------Matrix Distributions----------#
-########################################
+################################################################################
+#------------------------Matrix Distribution Types-----------------------------#
+################################################################################
#-------------------------------------------------------------------------------
# Matrix Gamma distribution ----------------------------------------------------
diff --git a/sympy/stats/rv.py b/sympy/stats/rv.py
index c1bc92676621..ad6d351e8c48 100644
--- a/sympy/stats/rv.py
+++ b/sympy/stats/rv.py
@@ -105,6 +105,24 @@ def __contains__(self, other):
return self.symbol == sym and val in self.set
+class MatrixDomain(RandomDomain):
+ """
+ A Random Matrix variable and its domain
+
+ """
+ def __new__(cls, symbol, set):
+ symbol, set = _symbol_converter(symbol), _sympify(set)
+ return Basic.__new__(cls, symbol, set)
+
+ @property
+ def symbol(self):
+ return self.args[0]
+
+ @property
+ def symbols(self):
+ return FiniteSet(self.symbol)
+
+
class ConditionalDomain(RandomDomain):
"""
A RandomDomain with an attached condition
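The sampling machinery in the patch above routes each call through a per-library registry (`_get_sample_class_matrixrv`) and raises `NotImplementedError` when a backend cannot handle a distribution. A library-free sketch of that dispatch pattern, with stub samplers and a fixed return value that are illustrative only:

```python
class SampleStubScipy:
    """Stub backend: maps distribution class names to sampler callables."""
    rv_map = {'WishartDistribution': lambda dist, size: [[1.0]] * size}

    def __new__(cls, dist, size):
        fn = cls.rv_map.get(type(dist).__name__)
        return None if fn is None else fn(dist, size)

_get_sample_class = {'scipy': SampleStubScipy}

class WishartDistribution:  # stand-in for the SymPy distribution class
    def sample(self, size=1, library='scipy'):
        if library not in _get_sample_class:
            raise NotImplementedError(
                "Sampling from %s is not supported yet." % library)
        samps = _get_sample_class[library](self, size)
        if samps is not None:
            return samps
        raise NotImplementedError(
            "Sampling for %s is not implemented from %s"
            % (type(self).__name__, library))

print(WishartDistribution().sample(size=2))
```

In the real patch the map entries call `scipy.stats.wishart.rvs` / `scipy.stats.matrix_normal.rvs` (and their PyMC3 counterparts), which is why `MatrixGamma` sampling raises `NotImplementedError` in the tests: no backend registers it.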
| diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index 19640af56275..eeb2ab52accb 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -1135,6 +1135,11 @@ def test_sympy__stats__rv__ConditionalDomain():
D = RandomDomain(FiniteSet(x), FiniteSet(1, 2))
assert _test_args(ConditionalDomain(D, x > 1))
+def test_sympy__stats__rv__MatrixDomain():
+ from sympy.stats.rv import MatrixDomain
+ from sympy.matrices import MatrixSet
+ from sympy import S
+ assert _test_args(MatrixDomain(x, MatrixSet(2, 2, S.Reals)))
def test_sympy__stats__rv__PSpace():
from sympy.stats.rv import PSpace, RandomDomain
diff --git a/sympy/stats/tests/test_matrix_distributions.py b/sympy/stats/tests/test_matrix_distributions.py
index 0002d46f7f46..68da9475ad8f 100644
--- a/sympy/stats/tests/test_matrix_distributions.py
+++ b/sympy/stats/tests/test_matrix_distributions.py
@@ -1,9 +1,10 @@
from sympy import exp, S, sqrt, pi, symbols, Product, gamma, Dummy
from sympy.matrices import Determinant, Matrix, Trace, MatrixSymbol, MatrixSet
-from sympy.stats import density
+from sympy.stats import density, sample
from sympy.stats.matrix_distributions import (MatrixGammaDistribution,
MatrixGamma, MatrixPSpace, Wishart, MatrixNormal)
-from sympy.testing.pytest import raises
+from sympy.testing.pytest import raises, skip, ignore_warnings
+from sympy.external import import_module
def test_MatrixPSpace():
@@ -91,3 +92,40 @@ def test_MatrixNormal():
raises(ValueError, lambda: MatrixNormal('M', [1, 2], [[1, 0], [2, 1]], [[1, 0], [0]]))
raises(ValueError, lambda: MatrixNormal('M', [[1, 2]], [[1, 0], [0, 1]], [[1, 0]]))
raises(ValueError, lambda: MatrixNormal('M', [[1, 2]], [1], [[1, 0]]))
+
+def test_sample_scipy():
+ distribs_scipy = [
+ MatrixNormal('M', [[5, 6]], [4], [[2, 1], [1, 2]]),
+ Wishart('W', 5, [[1, 0], [0, 1]])
+ ]
+
+ size = 5
+ scipy = import_module('scipy')
+ if not scipy:
+ skip('Scipy not installed. Abort tests for _sample_scipy.')
+ else:
+ with ignore_warnings(UserWarning): ### TODO: Restore tests once warnings are removed
+ for X in distribs_scipy:
+ samps = next(sample(X, size=size))
+ for sam in samps:
+ assert Matrix(sam) in X.pspace.distribution.set
+ M = MatrixGamma('M', 1, 2, [[1, 0], [0, 1]])
+ raises(NotImplementedError, lambda: next(sample(M, size=3)))
+
+def test_sample_pymc3():
+ distribs_pymc3 = [
+ MatrixNormal('M', [[5, 6], [3, 4]], [[1, 0], [0, 1]], [[2, 1], [1, 2]]),
+ Wishart('W', 7, [[2, 1], [1, 2]])
+ ]
+ size = 3
+ pymc3 = import_module('pymc3')
+ if not pymc3:
+ skip('PyMC3 is not installed. Abort tests for _sample_pymc3.')
+ else:
+ with ignore_warnings(UserWarning): ### TODO: Restore tests once warnings are removed
+ for X in distribs_pymc3:
+ samps = next(sample(X, size=size, library='pymc3'))
+ for sam in samps:
+ assert Matrix(sam) in X.pspace.distribution.set
+ M = MatrixGamma('M', 1, 2, [[1, 0], [0, 1]])
+ raises(NotImplementedError, lambda: next(sample(M, size=3)))
| [
{
"components": [
{
"doc": "",
"lines": [
34,
35
],
"name": "MatrixPSpace.domain",
"signature": "def domain(self):",
"type": "function"
},
{
"doc": "",
"lines": [
42,
43
],
"... | [
"test_sympy__stats__rv__MatrixDomain"
] | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_sympy__assumptions__assume__Predicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers__AnyArgs",
"test_sympy__assu... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
[GSoC] Added sampling from Matrix Distributions
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
#### Brief description of what is fixed or changed
#### Other comments
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* stats
* Added sampling for Matrix Distributions
<!-- END RELEASE NOTES -->
ping @czgdp1807 @Upabjojr @jmig5776
----------
</request>
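The sampling machinery described in this request dispatches on a `library` argument to per-backend sampler classes. Below is a minimal standalone sketch of that dispatch pattern — illustrative names and stub bodies only, not sympy's actual implementation (which wraps scipy/numpy/pymc3 samplers):

```python
# Illustrative sketch of per-library sampler dispatch; stub bodies only.
class SampleMatrixScipy:
    """Returns the sample from scipy of the given distribution (stub)."""

    def __new__(cls, dist, size):
        return cls._sample_scipy(dist, size)

    @classmethod
    def _sample_scipy(cls, dist, size):
        # The real method maps the sympy distribution onto a scipy.stats
        # distribution and draws `size` realizations from it.
        return ("scipy", dist, size)


_samplers = {'scipy': SampleMatrixScipy}


def sample_dist(dist, size=(), library='scipy'):
    """Dispatch sampling to the requested backend, as the PR's sample() does."""
    if library not in _samplers:
        raise NotImplementedError("Sampling from %s is not supported yet" % library)
    return _samplers[library](dist, size)
```

Requesting a backend with no registered sampler raises `NotImplementedError`, mirroring the unsupported-distribution paths exercised in the tests above.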
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/stats/matrix_distributions.py]
(definition of MatrixPSpace.domain:)
def domain(self):
(definition of MatrixPSpace.values:)
def values(self):
(definition of MatrixPSpace.sample:)
def sample(self, size=(), library='scipy'):
"""Internal sample method
Returns dictionary mapping RandomMatrixSymbol to realization value."""
(definition of SampleMatrixScipy:)
class SampleMatrixScipy:
"""Returns the sample from scipy of the given distribution"""
(definition of SampleMatrixScipy.__new__:)
def __new__(cls, dist, size):
(definition of SampleMatrixScipy._sample_scipy:)
def _sample_scipy(cls, dist, size):
"""Sample from SciPy."""
(definition of SampleMatrixNumpy:)
class SampleMatrixNumpy:
"""Returns the sample from numpy of the given distribution"""
(definition of SampleMatrixNumpy.__new__:)
def __new__(cls, dist, size):
(definition of SampleMatrixNumpy._sample_numpy:)
def _sample_numpy(cls, dist, size):
"""Sample from NumPy."""
(definition of SampleMatrixPymc:)
class SampleMatrixPymc:
"""Returns the sample from pymc3 of the given distribution"""
(definition of SampleMatrixPymc.__new__:)
def __new__(cls, dist, size):
(definition of SampleMatrixPymc._sample_pymc3:)
def _sample_pymc3(cls, dist, size):
"""Sample from PyMC3."""
(definition of MatrixDistribution.sample:)
def sample(self, size=(), library='scipy'):
"""Internal sample method
Returns dictionary mapping RandomSymbol to realization value."""
[end of new definitions in sympy/stats/matrix_distributions.py]
[start of new definitions in sympy/stats/rv.py]
(definition of MatrixDomain:)
class MatrixDomain(RandomDomain):
"""A Random Matrix variable and its domain"""
(definition of MatrixDomain.__new__:)
def __new__(cls, symbol, set):
(definition of MatrixDomain.symbol:)
def symbol(self):
(definition of MatrixDomain.symbols:)
def symbols(self):
[end of new definitions in sympy/stats/rv.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1b4529a95ef641c2fc15889091b281644069d20e | ||
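For reference, the `MatrixDomain` definitions above amount to a thin immutable wrapper around a (symbol, set) pair. The following is a plain-Python stand-in — the real class subclasses sympy's `RandomDomain` (a `Basic`), and `symbols` returns a `FiniteSet`; a `frozenset` stands in for it here:

```python
# Plain-Python stand-in for the MatrixDomain definitions above.
class MatrixDomain:
    """A Random Matrix variable and its domain."""

    def __new__(cls, symbol, set_):
        obj = super().__new__(cls)
        obj._args = (symbol, set_)   # mirrors Basic.__new__(cls, symbol, set)
        return obj

    @property
    def symbol(self):
        return self._args[0]

    @property
    def set(self):
        return self._args[1]

    @property
    def symbols(self):
        # sympy returns FiniteSet(self.symbol)
        return frozenset((self.symbol,))


d = MatrixDomain("M", "MatrixSet(2, 2, Reals)")
assert d.symbol == "M" and d.symbols == frozenset({"M"})
```

This matches how `test_sympy__stats__rv__MatrixDomain` constructs the object: a symbol plus a `MatrixSet` domain, exposed through `symbol`/`symbols` properties.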
joke2k__faker-1226 | 1,226 | joke2k/faker | null | f58e64eda0af1bd42c04192cee17ab8c98a620e9 | 2020-07-28T16:37:49Z | diff --git a/README.rst b/README.rst
index 23b7b57d6c..5c1a5df0ef 100644
--- a/README.rst
+++ b/README.rst
@@ -358,6 +358,38 @@ By default all generators share the same instance of ``random.Random``, which
can be accessed with ``from faker.generator import random``. Using this may
be useful for plugins that want to affect all faker instances.
+Unique values
+-------------
+
+Through use of the ``.unique`` property on the generator, you can guarantee
+that any generated values are unique for this specific instance.
+
+.. code:: python
+
+ from faker import Faker
+ fake = Faker()
+ names = [fake.unique.first_name() for i in range(500)]
+ assert len(set(names)) == len(names)
+
+Calling ``fake.unique.clear()`` clears the already seen values.
+Note, to avoid infinite loops, after a number of attempts to find a unique
+value, Faker will throw a ``UniquenessException``. Beware of the `birthday
+paradox <https://en.wikipedia.org/wiki/Birthday_problem>`_, collisions
+are more likely than you'd think.
+
+
+.. code:: python
+
+ from faker import Faker
+
+ fake = Faker()
+ for i in range(3):
+ # Raises a UniquenessException
+ fake.unique.boolean()
+
+In addition, only hashable arguments and return values can be used
+with ``.unique``.
+
Seeding the Generator
---------------------
diff --git a/docs/fakerclass.rst b/docs/fakerclass.rst
index c7a13304d1..656b4c8b22 100644
--- a/docs/fakerclass.rst
+++ b/docs/fakerclass.rst
@@ -287,3 +287,72 @@ a cheatsheet for new ``Faker`` in multiple locale mode.
fake['ja_JP'].luzon_province()
.. _the relevant PR: https://github.com/joke2k/faker/pull/1052#issuecomment-557170225
+
+
+Unique Values
+-------------
+
+New in version ``v4.2.0`` is the ``.unique`` attribute on the
+``Faker`` proxy.
+
+Accessing provider methods through this attribute guarantees that
+the returned values are unique for the lifetime of the ``Faker`` instance.
+
+
+.. code:: python
+
+ import faker
+
+ fake = faker.Faker()
+
+ numbers = set(fake.unique.random_int() for i in range(1000))
+ assert len(numbers) == 1000
+
+To clear already seen values, simply call ``fake.unique.clear()``, which will
+allow previous values generated to be returned again.
+
+Different argument signatures for provider methods do not share
+a uniqueness pool.
+
+
+.. code:: python
+
+ import faker
+
+ fake = faker.Faker()
+
+ numbers = set(fake.unique.random_int(min=1, max=10) for i in range(10))
+ other_numbers = set(fake.unique.random_int(min=1, max=5) for i in range(5))
+
+ assert other_numbers.issubset(numbers)
+
+If the range of possible values generated by a provider function is small
+and the ``.unique`` attribute has been used, it's possible that no
+suitable unique value will be found after a certain number of iterations.
+
+To avoid infinite loops, at that point, a ``UniquenessException`` will
+be raised.
+
+
+.. code:: python
+
+ import faker
+
+ fake = faker.Faker()
+
+ for i in range(3):
+ fake.unique.boolean() # UniquenessException!
+
+
+As a final caveat, only hashable arguments and return values can be used
+with the ``.unique`` attribute, as it is backed internally by a set for
+fast membership testing.
+
+
+.. code:: python
+
+ import faker
+
+ fake = faker.Faker()
+
+ fake.unique.profile() # TypeError: unhashable type: 'dict'
diff --git a/faker/exceptions.py b/faker/exceptions.py
new file mode 100644
index 0000000000..bb744b9a2f
--- /dev/null
+++ b/faker/exceptions.py
@@ -0,0 +1,8 @@
+class BaseFakerException(Exception):
+ """The base exception for all Faker exceptions."""
+
+
+class UniquenessException(BaseFakerException):
+ """To avoid infinite loops, after a certain number of attempts,
+ the "unique" attribute of the Proxy will throw this exception.
+ """
diff --git a/faker/proxy.py b/faker/proxy.py
index 67f261efe5..d391056298 100644
--- a/faker/proxy.py
+++ b/faker/proxy.py
@@ -1,13 +1,17 @@
+import functools
import random
import re
from collections import OrderedDict
from faker.config import DEFAULT_LOCALE
+from faker.exceptions import UniquenessException
from faker.factory import Factory
from faker.generator import Generator
from faker.utils.distribution import choices_distribution
+_UNIQUE_ATTEMPTS = 1000
+
class Faker:
"""Proxy class capable of supporting multiple locales"""
@@ -23,6 +27,7 @@ def __init__(self, locale=None, providers=None,
generator=None, includes=None, **config):
self._factory_map = OrderedDict()
self._weights = None
+ self._unique_proxy = UniqueProxy(self)
if isinstance(locale, str):
locales = [locale.replace('-', '_')]
@@ -105,6 +110,10 @@ def __getattr__(self, attr):
factory = self._select_factory(attr)
return getattr(factory, attr)
+ @property
+ def unique(self):
+ return self._unique_proxy
+
def _select_factory(self, method_name):
"""
Returns a random factory that supports the provider method
@@ -236,3 +245,44 @@ def factories(self):
def items(self):
return self._factory_map.items()
+
+
+class UniqueProxy:
+ def __init__(self, proxy):
+ self._proxy = proxy
+ self._seen = {}
+ self._sentinel = object()
+
+ def clear(self):
+ self._seen = {}
+
+ def __getattr__(self, name: str):
+ obj = getattr(self._proxy, name)
+ if callable(obj):
+ return self._wrap(name, obj)
+ else:
+ raise TypeError("Accessing non-functions through .unique is not supported.")
+
+ def _wrap(self, name, function):
+ @functools.wraps(function)
+ def wrapper(*args, **kwargs):
+ key = (name, args, tuple(sorted(kwargs.items())))
+
+ generated = self._seen.setdefault(key, {self._sentinel})
+
+ # With use of a sentinel value rather than None, we leave
+ # None open as a valid return value.
+ retval = self._sentinel
+
+ for i in range(_UNIQUE_ATTEMPTS):
+ if retval not in generated:
+ break
+ retval = function(*args, **kwargs)
+ else:
+ raise UniquenessException("Got duplicated values after {0:,} iterations.".format(_UNIQUE_ATTEMPTS))
+
+ generated.add(retval)
+
+ return retval
+
+ return wrapper
| diff --git a/docs/pytest-fixtures.rst b/docs/pytest-fixtures.rst
index ef9e02334a..7c44a983a3 100644
--- a/docs/pytest-fixtures.rst
+++ b/docs/pytest-fixtures.rst
@@ -9,8 +9,9 @@ Pytest Fixtures
assert isinstance(faker.name(), str)
Out of the box, the ``faker`` fixture returns a session-scoped ``Faker`` instance to be used across
-all tests in your test suite. This instance defaults to the ``en-US`` locale, and it is reseeded
-using a seed value of ``0`` prior to each test.
+all tests in your test suite. This instance defaults to the ``en-US`` locale, it is reseeded
+using a seed value of ``0`` prior to each test, and the ``.unique`` remembered generated values
+are cleared.
To change the default locale, you can define a session-scoped autouse ``faker_session_locale``
fixture in your top level ``conftest.py``. To change the default seed value, you define a
diff --git a/faker/contrib/pytest/plugin.py b/faker/contrib/pytest/plugin.py
index 48c0eea917..2d866db820 100644
--- a/faker/contrib/pytest/plugin.py
+++ b/faker/contrib/pytest/plugin.py
@@ -33,5 +33,6 @@ def faker(request):
if 'faker_seed' in request.fixturenames:
seed = request.getfixturevalue('faker_seed')
fake.seed_instance(seed=seed)
+ fake.unique.clear()
return fake
diff --git a/tests/conftest.py b/tests/conftest.py
new file mode 100644
index 0000000000..c6481d5f74
--- /dev/null
+++ b/tests/conftest.py
@@ -0,0 +1,1 @@
+pytest_plugins = ["pytester"]
diff --git a/tests/pytest/test_unique_clear.py b/tests/pytest/test_unique_clear.py
new file mode 100644
index 0000000000..4bf382c9cb
--- /dev/null
+++ b/tests/pytest/test_unique_clear.py
@@ -0,0 +1,37 @@
+def test_unique_clears(testdir):
+ """Successive uses of the `faker` pytest fixture have the
+ generated unique values cleared between functions."""
+
+ testdir.makepyfile(
+ """
+ import pytest
+ from faker.exceptions import UniquenessException
+
+ NUM_SAMPLES = 100
+
+ def test_fully_exhaust_unique_booleans(faker):
+ _dummy = [faker.boolean() for _ in range(NUM_SAMPLES)]
+
+ faker.unique.boolean()
+ faker.unique.boolean()
+ with pytest.raises(UniquenessException):
+ faker.unique.boolean()
+ _dummy = [faker.boolean() for _ in range(NUM_SAMPLES)]
+
+ def test_do_not_exhaust_booleans(faker):
+ faker.unique.boolean()
+
+ def test_fully_exhaust_unique_booleans_again(faker):
+ _dummy = [faker.boolean() for _ in range(NUM_SAMPLES)]
+
+ faker.unique.boolean()
+ faker.unique.boolean()
+ with pytest.raises(UniquenessException):
+ faker.unique.boolean()
+ _dummy = [faker.boolean() for _ in range(NUM_SAMPLES)]
+ """,
+ )
+
+ result = testdir.runpytest()
+
+ result.assert_outcomes(passed=3)
diff --git a/tests/test_proxy.py b/tests/test_proxy.py
index 2b48d2bfdf..44eb73d5fe 100644
--- a/tests/test_proxy.py
+++ b/tests/test_proxy.py
@@ -322,6 +322,7 @@ def test_dir_include_all_providers_attribute_in_list(self):
fake = Faker(['en_US', 'en_PH'])
expected = set(dir(Faker) + [
'_factories', '_locales', '_factory_map', '_weights',
+ '_unique_proxy',
])
for factory in fake.factories:
expected |= {
diff --git a/tests/test_unique.py b/tests/test_unique.py
new file mode 100644
index 0000000000..f939a6ab59
--- /dev/null
+++ b/tests/test_unique.py
@@ -0,0 +1,62 @@
+import pytest
+
+from faker import Faker
+from faker.exceptions import UniquenessException
+
+
+class TestUniquenessClass:
+ def test_uniqueness(self):
+ fake = Faker("en_US")
+
+ names = set()
+ # There are (at time of writing 690) first names in the
+ # US identity provider. Birthday paradox puts the chances of
+ # no duplicates in 250 selections as low enough to be impossible
+ for i in range(250):
+ first_name = fake.unique.first_name()
+ assert first_name not in names
+ names.add(first_name)
+
+ def test_sanity_escape(self):
+ fake = Faker()
+
+ # Those of you who are especially astute may realise
+ # there are only 2 booleans, so the third boolean cannot
+ # be unique.
+ with pytest.raises(UniquenessException, match=r"Got duplicated values after [\d,]+ iterations."):
+ for i in range(3):
+ _ = fake.unique.boolean()
+
+ def test_uniqueness_clear(self):
+ fake = Faker()
+
+ for i in range(2):
+ fake.unique.boolean()
+
+ fake.unique.clear()
+
+ # Because we cleared the generated values, this will not
+ # throw an exception
+ fake.unique.boolean()
+
+ def test_exclusive_arguments(self):
+ """Calls through the "unique" portal will only affect
+ calls with that specific function signature.
+ """
+ fake = Faker()
+
+ for i in range(10):
+ fake.unique.random_int(min=1, max=10)
+
+ # Different signature, so new pool. If they shared a pool
+ # this would throw a sanity exception
+ fake.unique.random_int(min=2, max=10)
+
+ def test_functions_only(self):
+ """Accessing non-functions through the `.unique` attribute
+ will throw a TypeError."""
+
+ fake = Faker()
+
+ with pytest.raises(TypeError, match="Accessing non-functions through .unique is not supported."):
+ fake.unique.locales
| diff --git a/README.rst b/README.rst
index 23b7b57d6c..5c1a5df0ef 100644
--- a/README.rst
+++ b/README.rst
@@ -358,6 +358,38 @@ By default all generators share the same instance of ``random.Random``, which
can be accessed with ``from faker.generator import random``. Using this may
be useful for plugins that want to affect all faker instances.
+Unique values
+-------------
+
+Through use of the ``.unique`` property on the generator, you can guarantee
+that any generated values are unique for this specific instance.
+
+.. code:: python
+
+ from faker import Faker
+ fake = Faker()
+ names = [fake.unique.first_name() for i in range(500)]
+ assert len(set(names)) == len(names)
+
+Calling ``fake.unique.clear()`` clears the already seen values.
+Note, to avoid infinite loops, after a number of attempts to find a unique
+value, Faker will throw a ``UniquenessException``. Beware of the `birthday
+paradox <https://en.wikipedia.org/wiki/Birthday_problem>`_, collisions
+are more likely than you'd think.
+
+
+.. code:: python
+
+ from faker import Faker
+
+ fake = Faker()
+ for i in range(3):
+ # Raises a UniquenessException
+ fake.unique.boolean()
+
+In addition, only hashable arguments and return values can be used
+with ``.unique``.
+
Seeding the Generator
---------------------
diff --git a/docs/fakerclass.rst b/docs/fakerclass.rst
index c7a13304d1..656b4c8b22 100644
--- a/docs/fakerclass.rst
+++ b/docs/fakerclass.rst
@@ -287,3 +287,72 @@ a cheatsheet for new ``Faker`` in multiple locale mode.
fake['ja_JP'].luzon_province()
.. _the relevant PR: https://github.com/joke2k/faker/pull/1052#issuecomment-557170225
+
+
+Unique Values
+-------------
+
+New in version ``v4.2.0`` is the ``.unique`` attribute on the
+``Faker`` proxy.
+
+Accessing provider methods through this attribute guarantees that
+the returned values are unique for the lifetime of the ``Faker`` instance.
+
+
+.. code:: python
+
+ import faker
+
+ fake = faker.Faker()
+
+ numbers = set(fake.unique.random_int() for i in range(1000))
+ assert len(numbers) == 1000
+
+To clear already seen values, simply call ``fake.unique.clear()``, which will
+allow previous values generated to be returned again.
+
+Different argument signatures for provider methods do not share
+a uniqueness pool.
+
+
+.. code:: python
+
+ import faker
+
+ fake = faker.Faker()
+
+ numbers = set(fake.unique.random_int(min=1, max=10) for i in range(10))
+ other_numbers = set(fake.unique.random_int(min=1, max=5) for i in range(5))
+
+ assert other_numbers.issubset(numbers)
+
+If the range of possible values generated by a provider function is small
+and the ``.unique`` attribute has been used, it's possible that no
+suitable unique value will be found after a certain number of iterations.
+
+To avoid infinite loops, at that point, a ``UniquenessException`` will
+be raised.
+
+
+.. code:: python
+
+ import faker
+
+ fake = faker.Faker()
+
+ for i in range(3):
+ fake.unique.boolean() # UniquenessException!
+
+
+As a final caveat, only hashable arguments and return values can be used
+with the ``.unique`` attribute, as it is backed internally by a set for
+fast membership testing.
+
+
+.. code:: python
+
+ import faker
+
+ fake = faker.Faker()
+
+ fake.unique.profile() # TypeError: unhashable type: 'dict'
| [
{
"components": [
{
"doc": "The base exception for all Faker exceptions.",
"lines": [
1,
2
],
"name": "BaseFakerException",
"signature": "class BaseFakerException(Exception):",
"type": "class"
},
{
"doc": "To avoid inf... | [
"tests/pytest/test_unique_clear.py::test_unique_clears",
"tests/test_proxy.py::TestFakerProxyClass::test_unspecified_locale",
"tests/test_proxy.py::TestFakerProxyClass::test_locale_as_string",
"tests/test_proxy.py::TestFakerProxyClass::test_locale_as_list",
"tests/test_proxy.py::TestFakerProxyClass::test_lo... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add unique values support to Faker
Adds support to Faker for guaranteed unique values.
A new `.unique` property has been added to the Proxy class, which
returns an instance of a `UniqueProxy`. Any attributes looked up on the
unique proxy goes to the common proxy. If the attribute is a function,
then it's wrapped with a wrapper that only allows unique values.
Obviously, it's not always possible to get a unique value, especially
with a small set of values. If the wrapper can't find a unique value
after a certain number of attempts (currently pegged at an arbitrary
1000), it throws a UniquenessSanityException.
Each "remembered" set of values is tied to the name of the function, and
the arguments passed to it, so, for example, different `.random_int()`
calls with different arguments will not share a "bucket".
Since it uses a set internally for remembering previous values, and a
dictionary for determining what bucket to use, it can't be used with any
arguments or return values that are unhashable.
---
Resolves #1125
Resolves #971
- ~~Still needs full documentation added to sphinx, didn't want to spend lots of time documenting something that I didn't know if people were gonna accept or not. 100% happy to write some up if reaction is positive.~~ Done!
- Does the sanity value need to be changeable?
- Is this generally the right solution?
Had a lot of fun puzzling this one out, lemmie know if I'm on the right track!
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in faker/exceptions.py]
(definition of BaseFakerException:)
class BaseFakerException(Exception):
"""The base exception for all Faker exceptions."""
(definition of UniquenessException:)
class UniquenessException(BaseFakerException):
"""To avoid infinite loops, after a certain number of attempts,
the "unique" attribute of the Proxy will throw this exception."""
[end of new definitions in faker/exceptions.py]
[start of new definitions in faker/proxy.py]
(definition of Faker.unique:)
def unique(self):
(definition of UniqueProxy:)
class UniqueProxy:
(definition of UniqueProxy.__init__:)
def __init__(self, proxy):
(definition of UniqueProxy.clear:)
def clear(self):
(definition of UniqueProxy.__getattr__:)
def __getattr__(self, name: str):
(definition of UniqueProxy._wrap:)
def _wrap(self, name, function): @functools.wraps(function)
(definition of UniqueProxy._wrap.wrapper:)
def wrapper(*args, **kwargs):
[end of new definitions in faker/proxy.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
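Pulling the definitions above together, the proxy pattern can be exercised end-to-end with a toy provider. This is a self-contained sketch mirroring `faker/proxy.py` from the patch; the `ToyProvider` class is invented for illustration (the real proxy delegates to Faker's locale factories):

```python
import functools
import random

_UNIQUE_ATTEMPTS = 1000


class UniquenessException(Exception):
    """Raised after repeated failures to find an unseen value."""


class UniqueProxy:
    """Simplified stand-in for faker.proxy.UniqueProxy, per the definitions above."""

    def __init__(self, proxy):
        self._proxy = proxy
        self._seen = {}
        self._sentinel = object()

    def clear(self):
        self._seen = {}

    def __getattr__(self, name):
        obj = getattr(self._proxy, name)
        if callable(obj):
            return self._wrap(name, obj)
        raise TypeError("Accessing non-functions through .unique is not supported.")

    def _wrap(self, name, function):
        @functools.wraps(function)
        def wrapper(*args, **kwargs):
            # One uniqueness pool per (method name, argument signature).
            key = (name, args, tuple(sorted(kwargs.items())))
            generated = self._seen.setdefault(key, {self._sentinel})
            retval = self._sentinel
            for _ in range(_UNIQUE_ATTEMPTS):
                if retval not in generated:
                    break
                retval = function(*args, **kwargs)
            else:
                raise UniquenessException(
                    "Got duplicated values after {0:,} iterations.".format(_UNIQUE_ATTEMPTS))
            generated.add(retval)
            return retval
        return wrapper


class ToyProvider:
    """Invented two-value provider, standing in for a Faker factory."""
    def boolean(self):
        return random.choice([True, False])


unique = UniqueProxy(ToyProvider())
assert {unique.boolean(), unique.boolean()} == {True, False}  # pool exhausted
try:
    unique.boolean()          # a third distinct boolean cannot exist
except UniquenessException:
    unique.clear()            # clearing makes previous values available again
assert unique.boolean() in (True, False)
```

With only two possible booleans, the third unique call must fail — the same exhaustion behavior the README example and `test_sanity_escape` rely on.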
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Return list of n unique companies (n_unique keyword for all providers?)
* Faker version: 1.0.7
* OS: Ubuntu 16.04
I was wondering if it is possible to generate a list of 'n' unique company names? I saw some PRs which gave a unique keyword for 'words' but it doesn't seem to extend to other providers? I understand I could just keep regenerating and dropping duplicates until I got a unique set of length n, but it would be nice to just have a keyword for that (plus this may be suboptimal depending on how Faker is implemented under the hood, which I don't know enough about, I confess)
----------
Has this issue been tackled by anyone already?
--------------------
</issues> | 6edfdbf6ae90b0153309e3bf066aa3b2d16494a7 |
pvlib__pvlib-python-1011 | 1,011 | pvlib/pvlib-python | 0.7 | 9f5666053cf1082886feae9bc4d4836d6a733e98 | 2020-07-24T14:58:25Z | diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 0e9cfa19e0..dd7c618646 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -265,6 +265,15 @@ Low-level functions for solving the single diode equation.
singlediode.bishop88_v_from_i
singlediode.bishop88_mpp
+Functions for fitting diode models
+
+.. autosummary::
+ :toctree: generated/
+
+ ivtools.fit_sde_sandia
+ ivtools.fit_sdm_cec_sam
+ ivtools.fit_sdm_desoto
+
Inverter models (DC to AC conversion)
-------------------------------------
@@ -275,6 +284,14 @@ Inverter models (DC to AC conversion)
inverter.adr
inverter.pvwatts
+Functions for fitting inverter models
+
+.. autosummary::
+ :toctree: generated/
+
+ inverter.fit_sandia
+
+
PV System Models
----------------
@@ -311,16 +328,6 @@ PVWatts model
inverter.pvwatts
pvsystem.pvwatts_losses
-Functions for fitting diode models
-----------------------------------
-
-.. autosummary::
- :toctree: generated/
-
- ivtools.fit_sde_sandia
- ivtools.fit_sdm_cec_sam
- ivtools.fit_sdm_desoto
-
Other
-----
diff --git a/docs/sphinx/source/whatsnew/v0.8.0.rst b/docs/sphinx/source/whatsnew/v0.8.0.rst
index 2bda4b0db8..2aa1ebfb67 100644
--- a/docs/sphinx/source/whatsnew/v0.8.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.0.rst
@@ -38,6 +38,8 @@ Enhancements
* Add :py:func:`pvlib.iam.marion_diffuse` and
:py:func:`pvlib.iam.marion_integrate` to calculate IAM values for
diffuse irradiance. (:pull:`984`)
+* Add :py:func:`pvlib.inverter.fit_sandia` that fits the Sandia inverter model
+ to a set of inverter efficiency curves. (:pull:`1011`)
Bug fixes
~~~~~~~~~
diff --git a/pvlib/data/inverter_fit_snl_meas.csv b/pvlib/data/inverter_fit_snl_meas.csv
new file mode 100644
index 0000000000..dcbc71201d
--- /dev/null
+++ b/pvlib/data/inverter_fit_snl_meas.csv
@@ -0,0 +1,127 @@
+fraction_of_rated_power,dc_voltage_level,ac_power,dc_voltage,efficiency
+0.1,Vmin,32800,660.5,0.95814
+0.2,Vmin,73000,660.9,0.9755
+0.3,Vmin,107500,660.73,0.97787
+0.5,Vmin,168100,660.1,0.97998
+0.75,Vmin,235467,660.27,0.97785
+1,Vmin,318067,660.03,0.97258
+0.1,Vnom,32800,740.1,0.95441
+0.2,Vnom,72900,740.2,0.96985
+0.3,Vnom,107600,740.13,0.97611
+0.5,Vnom,167500,740.57,0.97554
+0.75,Vnom,234967,741.87,0.97429
+1,Vnom,317267,737.7,0.97261
+0.1,Vmax,32800,959.07,0.94165
+0.2,Vmax,71600,959.43,0.95979
+0.3,Vmax,107300,959.1,0.96551
+0.5,Vmax,166700,959.5,0.96787
+0.75,Vmax,234767,958.8,0.96612
+1,Vmax,317467,957,0.96358
+0.1,Vmin,32800,660.77,0.95721
+0.2,Vmin,73000,660.77,0.97247
+0.3,Vmin,107500,660.47,0.97668
+0.5,Vmin,168100,660.23,0.98018
+0.75,Vmin,235333.3333,660.3,0.97716
+1,Vmin,317466.6667,659.8,0.97184
+0.1,Vnom,32800,740.27,0.95534
+0.2,Vnom,72900,740.27,0.97071
+0.3,Vnom,107600,740.2,0.97523
+0.5,Vnom,167500,740.8,0.97592
+0.75,Vnom,234966.6667,741.67,0.97429
+1,Vnom,317300,737.97,0.97252
+0.1,Vmax,32800,959.23,0.93718
+0.2,Vmax,71600,959.4,0.96107
+0.3,Vmax,107300,959.27,0.96638
+0.5,Vmax,166700,959.57,0.96825
+0.75,Vmax,234733.3333,959.17,0.96731
+1,Vmax,317466.6667,957.07,0.96241
+0.1,Vmin,32800,660.57,0.95814
+0.2,Vmin,73000,660.67,0.97333
+0.3,Vmin,107500,660.5,0.97609
+0.5,Vmin,168100,660.1,0.97884
+0.75,Vmin,235066.6667,660.3,0.97781
+1,Vmin,316900,659.27,0.97209
+0.1,Vnom,32800,740.17,0.95441
+0.2,Vnom,72900,740.27,0.97028
+0.3,Vnom,107600,740.23,0.97464
+0.5,Vnom,167500,740.3,0.97573
+0.75,Vnom,235133.3333,742.13,0.97417
+1,Vnom,317300,737.9,0.97252
+0.1,Vmax,32800,959.2,0.93626
+0.2,Vmax,71600,959.43,0.95979
+0.3,Vmax,107300,959.2,0.96493
+0.5,Vmax,166700,959.5,0.96806
+0.75,Vmax,234833.3333,958.97,0.96573
+1,Vmax,317400,956.87,0.96279
+0.1,Vmin,32800,660.63,0.95627
+0.2,Vmin,73000,660.9,0.97377
+0.3,Vmin,107500,661.07,0.97846
+0.5,Vmin,168100,660.13,0.97827
+0.75,Vmin,235200,660.43,0.97701
+1,Vmin,316933.3333,660.07,0.97308
+0.1,Vnom,32800,740.27,0.95441
+0.2,Vnom,72900,740.37,0.96985
+0.3,Vnom,107600,740.27,0.97464
+0.5,Vnom,167500,740.53,0.97592
+0.75,Vnom,234800,742.13,0.97374
+1,Vnom,317300,737.73,0.97202
+0.1,Vmax,32800,959.2,0.93271
+0.2,Vmax,71600,959.27,0.95594
+0.3,Vmax,107300,959.2,0.96783
+0.5,Vmax,166700,959.47,0.96806
+0.75,Vmax,234700,958.67,0.96505
+1,Vmax,317433.3333,956.8,0.96299
+0.1,Vmin,32800,660.67,0.95534
+0.2,Vmin,73000,660.8,0.9755
+0.3,Vmin,107500,661.23,0.97905
+0.5,Vmin,168100,660.33,0.97941
+0.75,Vmin,236566.6667,660.43,0.97741
+1,Vmin,317866.6667,659.53,0.97366
+0.1,Vnom,32800,740.13,0.95627
+0.2,Vnom,72900,740.37,0.97071
+0.3,Vnom,107600,740.4,0.97523
+0.5,Vnom,167500,740.57,0.97649
+0.75,Vnom,234733.3333,741.83,0.97413
+1,Vnom,317333.3333,737.77,0.97222
+0.1,Vmax,32800,959.03,0.9336
+0.2,Vmax,71600,959.33,0.96108
+0.3,Vmax,107300,959.2,0.96464
+0.5,Vmax,166700,959.57,0.96975
+0.75,Vmax,234700,958.83,0.96584
+1,Vmax,317400,956.7,0.96338
+0.1,Vmin,32800,660.43,0.95349
+0.2,Vmin,73000,660.83,0.97247
+0.3,Vmin,107500,660.47,0.97668
+0.5,Vmin,168100,660.27,0.97941
+0.75,Vmin,236167,660.57,0.97657
+1,Vmin,317833,660.47,0.97177
+0.1,Vnom,32800,740.2,0.95534
+0.2,Vnom,72900,740.3,0.96985
+0.3,Vnom,107600,740.33,0.97434
+0.5,Vnom,167500,740.53,0.9763
+0.75,Vnom,234833,741.93,0.97468
+1,Vnom,317333,737.73,0.97242
+0.1,Vmax,32800,959.03,0.93626
+0.2,Vmax,71600,959.37,0.95936
+0.3,Vmax,107300,959.23,0.96464
+0.5,Vmax,166700,959.5,0.96731
+0.75,Vmax,235267,958.67,0.96592
+1,Vmax,317400,957.07,0.96269
+0.1,Vmin,32800,660.73,0.95627
+0.2,Vmin,73000,660.57,0.97204
+0.3,Vmin,107500,660.97,0.97787
+0.5,Vmin,168100,660,0.97865
+0.75,Vmin,236200,659.77,0.97778
+1,Vmin,317200,659.2,0.97221
+0.1,Vnom,32800,740.2,0.95257
+0.2,Vnom,72900,740.33,0.97071
+0.3,Vnom,107600,740.43,0.97464
+0.5,Vnom,167500,740.63,0.97592
+0.75,Vnom,235100,741.87,0.97458
+1,Vnom,317333.3333,737.83,0.97232
+0.1,Vmax,32800,959,0.93182
+0.2,Vmax,71600,959.4,0.9585
+0.3,Vmax,107300,959.07,0.96783
+0.5,Vmax,166700,959.7,0.96806
+0.75,Vmax,235466.6667,958.77,0.96569
+1,Vmax,317400,956.6,0.96308
diff --git a/pvlib/data/inverter_fit_snl_sim.csv b/pvlib/data/inverter_fit_snl_sim.csv
new file mode 100644
index 0000000000..15b1062079
--- /dev/null
+++ b/pvlib/data/inverter_fit_snl_sim.csv
@@ -0,0 +1,19 @@
+fraction_of_rated_power,efficiency,dc_voltage_level,dc_voltage,dc_power,ac_power,efficiency
+0.1,0.892146066,Vmin,220,112.0892685,100,0.892146067
+0.1,0.876414009,Vnom,240,114.1013254,100,0.876414009
+0.1,0.861227164,Vmax,260,116.1133835,100,0.861227164
+0.2,0.925255801,Vmin,220,216.15644,200,0.925255801
+0.2,0.916673906,Vnom,240,218.1800951,200,0.916673906
+0.2,0.908249736,Vmax,260,220.2037524,200,0.908249736
+0.3,0.936909957,Vmin,220,320.2015283,300,0.936909957
+0.3,0.93099374,Vnom,240,322.2363236,300,0.93099374
+0.3,0.925151763,Vmax,260,324.2711217,300,0.925151763
+0.5,0.946565413,Vmin,220,528.2255121,500,0.946565413
+0.5,0.94289593,Vnom,240,530.2812159,500,0.94289593
+0.5,0.939254781,Vmax,260,532.336923,500,0.939254781
+0.75,0.951617818,Vmin,220,788.1315225,750,0.951617818
+0.75,0.949113828,Vnom,240,790.2108027,750,0.949113828
+0.75,0.946622992,Vmax,260,792.2900736,750,0.946622992
+1,0.954289529,Vmin,220,1047.900002,1000,0.95428953
+1,0.952380952,Vnom,240,1050,1000,0.952380952
+1,0.950479992,Vmax,260,1052.1,1000,0.950479992
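As a quick consistency check on the simulated curve data above: each row's efficiency column is just `ac_power / dc_power`. For example, taking the `Vnom` full-rated-power row:

```python
# Row "1,0.952380952,Vnom,240,1050,1000,0.952380952" from the table above:
dc_power, ac_power, efficiency = 1050.0, 1000.0, 0.952380952
assert abs(ac_power / dc_power - efficiency) < 1e-9
```

The same relation holds for the measured data file, which is why the fitting routine can work from either (dc_power, ac_power) pairs or efficiency curves.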
diff --git a/pvlib/inverter.py b/pvlib/inverter.py
index 514be06e88..a742e1a567 100644
--- a/pvlib/inverter.py
+++ b/pvlib/inverter.py
@@ -1,12 +1,21 @@
# -*- coding: utf-8 -*-
"""
-This module contains functions for inverter modeling, primarily conversion of
-DC to AC power.
+This module contains functions for inverter modeling and for fitting inverter
+models to data.
+
+Inverter models calculate AC power output from DC input. Model parameters
+should be passed as a single dict.
+
+Functions for estimating parameters for inverter models should follow the
+naming pattern 'fit_<model name>', e.g., fit_sandia.
+
"""
import numpy as np
import pandas as pd
+from numpy.polynomial.polynomial import polyfit # different than np.polyfit
+
def sandia(v_dc, p_dc, inverter):
r'''
@@ -176,7 +185,7 @@ def adr(v_dc, p_dc, inverter, vtol=0.10):
References
----------
- .. [1] Driesse, A. "Beyond the Curves: Modeling the Electrical Efficiency
+ .. [1] A. Driesse, "Beyond the Curves: Modeling the Electrical Efficiency
of Photovoltaic Inverters", 33rd IEEE Photovoltaic Specialist
Conference (PVSC), June 2008
@@ -285,8 +294,7 @@ def pvwatts(pdc, pdc0, eta_inv_nom=0.96, eta_inv_ref=0.9637):
References
----------
.. [1] A. P. Dobos, "PVWatts Version 5 Manual,"
- http://pvwatts.nrel.gov/downloads/pvwattsv5.pdf
- (2014).
+ http://pvwatts.nrel.gov/downloads/pvwattsv5.pdf (2014).
"""
pac0 = eta_inv_nom * pdc0
@@ -306,3 +314,110 @@ def pvwatts(pdc, pdc0, eta_inv_nom=0.96, eta_inv_ref=0.9637):
power_ac = np.maximum(0, power_ac) # GH 541
return power_ac
+
+
+def fit_sandia(ac_power, dc_power, dc_voltage, dc_voltage_level, p_ac_0, p_nt):
+ r'''
+ Determine parameters for the Sandia inverter model.
+
+ Parameters
+ ----------
+ ac_power : array_like
+ AC power output at each data point [W].
+ dc_power : array_like
+ DC power input at each data point [W].
+ dc_voltage : array_like
+ DC input voltage at each data point [V].
+ dc_voltage_level : array_like
+ DC input voltage level at each data point. Values must be 'Vmin',
+ 'Vnom' or 'Vmax'.
+ p_ac_0 : float
+ Rated AC power of the inverter [W].
+ p_nt : float
+ Night tare, i.e., power consumed while inverter is not delivering
+ AC power. [W]
+
+ Returns
+ -------
+ dict
+ A set of parameters for the Sandia inverter model [1]_. See
+ :py:func:`pvlib.inverter.sandia` for a description of keys and values.
+
+ See Also
+ --------
+ pvlib.inverter.sandia
+
+ Notes
+ -----
+ The fitting procedure to estimate parameters is described at [2]_.
+ A data point is a pair of values (dc_power, ac_power). Typically, inverter
+ performance is measured or described at three DC input voltage levels,
+ denoted 'Vmin', 'Vnom' and 'Vmax' and at each level, inverter efficiency
+ is determined at various output power levels. For example,
+ the CEC inverter test protocol [3]_ specifies measurement of input DC
+ power that delivers AC output power of 0.1, 0.2, 0.3, 0.5, 0.75 and 1.0 of
+ the inverter's AC power rating.
+
+ References
+ ----------
+ .. [1] D. King, S. Gonzalez, G. Galbraith, W. Boyson, "Performance Model
+ for Grid-Connected Photovoltaic Inverters", SAND2007-5036, Sandia
+ National Laboratories.
+ .. [2] Sandia Inverter Model page, PV Performance Modeling Collaborative
+ https://pvpmc.sandia.gov/modeling-steps/dc-to-ac-conversion/sandia-inverter-model/
+ .. [3] W. Bower, et al., "Performance Test Protocol for Evaluating
+ Inverters Used in Grid-Connected Photovoltaic Systems", available at
+ https://www.energy.ca.gov/sites/default/files/2020-06/2004-11-22_Sandia_Test_Protocol_ada.pdf
+ ''' # noqa: E501
+
+ voltage_levels = ['Vmin', 'Vnom', 'Vmax']
+
+ # average dc input voltage at each voltage level
+ v_d = np.array(
+ [dc_voltage[dc_voltage_level == 'Vmin'].mean(),
+ dc_voltage[dc_voltage_level == 'Vnom'].mean(),
+ dc_voltage[dc_voltage_level == 'Vmax'].mean()])
+ v_nom = v_d[1] # model parameter
+ # independent variable for regressions, x_d
+ x_d = v_d - v_nom
+
+ # empty dataframe to contain intermediate variables
+ coeffs = pd.DataFrame(index=voltage_levels,
+ columns=['a', 'b', 'c', 'p_dc', 'p_s0'], data=np.nan)
+
+ def solve_quad(a, b, c):
+ return (-b + (b**2 - 4 * a * c)**.5) / (2 * a)
+
+ # [2] STEP 3E, fit a line to (DC voltage, model_coefficient)
+ def extract_c(x_d, add):
+ beta0, beta1 = polyfit(x_d, add, 1)
+ c = beta1 / beta0
+ return beta0, beta1, c
+
+ for d in voltage_levels:
+ x = dc_power[dc_voltage_level == d]
+ y = ac_power[dc_voltage_level == d]
+ # [2] STEP 3B
+ # fit a quadratic to (DC power, AC power)
+ c, b, a = polyfit(x, y, 2)
+
+ # [2] STEP 3D, solve for p_dc and p_s0
+ p_dc = solve_quad(a, b, (c - p_ac_0))
+ p_s0 = solve_quad(a, b, c)
+
+ # Add values to dataframe at index d
+ coeffs['a'][d] = a
+ coeffs['p_dc'][d] = p_dc
+ coeffs['p_s0'][d] = p_s0
+
+ b_dc0, b_dc1, c1 = extract_c(x_d, coeffs['p_dc'])
+ b_s0, b_s1, c2 = extract_c(x_d, coeffs['p_s0'])
+ b_c0, b_c1, c3 = extract_c(x_d, coeffs['a'])
+
+ p_dc0 = b_dc0
+ p_s0 = b_s0
+ c0 = b_c0
+
+ # prepare dict and return
+ return {'Paco': p_ac_0, 'Pdco': p_dc0, 'Vdco': v_nom, 'Pso': p_s0,
+ 'C0': c0, 'C1': c1, 'C2': c2, 'C3': c3, 'Pnt': p_nt}
| diff --git a/pvlib/tests/test_inverter.py b/pvlib/tests/test_inverter.py
index 79d4cc2fb9..de018e95c5 100644
--- a/pvlib/tests/test_inverter.py
+++ b/pvlib/tests/test_inverter.py
@@ -6,8 +6,10 @@
from conftest import assert_series_equal
from numpy.testing import assert_allclose
+from conftest import needs_numpy_1_10, DATA_DIR
+import pytest
+
from pvlib import inverter
-from conftest import needs_numpy_1_10
def test_adr(adr_inverter_parameters):
@@ -131,3 +133,26 @@ def test_pvwatts_series():
expected = pd.Series(np.array([np.nan, 0., 47.608436, 95.]))
out = inverter.pvwatts(pdc, pdc0, 0.95)
assert_series_equal(expected, out)
+
+
+INVERTER_TEST_MEAS = DATA_DIR / 'inverter_fit_snl_meas.csv'
+INVERTER_TEST_SIM = DATA_DIR / 'inverter_fit_snl_sim.csv'
+
+
+@pytest.mark.parametrize('infilen, expected', [
+ (INVERTER_TEST_MEAS, {'Paco': 333000., 'Pdco': 343251., 'Vdco': 740.,
+ 'Pso': 1427.746, 'C0': -5.768e-08, 'C1': 3.596e-05,
+ 'C2': 1.038e-03, 'C3': 2.978e-05, 'Pnt': 1.}),
+ (INVERTER_TEST_SIM, {'Paco': 1000., 'Pdco': 1050., 'Vdco': 240.,
+ 'Pso': 10., 'C0': 1e-6, 'C1': 1e-4, 'C2': 1e-2,
+ 'C3': 1e-3, 'Pnt': 1.}),
+])
+def test_fit_sandia(infilen, expected):
+ curves = pd.read_csv(infilen)
+ dc_power = curves['ac_power'] / curves['efficiency']
+ result = inverter.fit_sandia(ac_power=curves['ac_power'],
+ dc_power=dc_power,
+ dc_voltage=curves['dc_voltage'],
+ dc_voltage_level=curves['dc_voltage_level'],
+ p_ac_0=expected['Paco'], p_nt=expected['Pnt'])
+ assert expected == pytest.approx(result, rel=1e-3)
| diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index 0e9cfa19e0..dd7c618646 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -265,6 +265,15 @@ Low-level functions for solving the single diode equation.
singlediode.bishop88_v_from_i
singlediode.bishop88_mpp
+Functions for fitting diode models
+
+.. autosummary::
+ :toctree: generated/
+
+ ivtools.fit_sde_sandia
+ ivtools.fit_sdm_cec_sam
+ ivtools.fit_sdm_desoto
+
Inverter models (DC to AC conversion)
-------------------------------------
@@ -275,6 +284,14 @@ Inverter models (DC to AC conversion)
inverter.adr
inverter.pvwatts
+Functions for fitting inverter models
+
+.. autosummary::
+ :toctree: generated/
+
+ inverter.fit_sandia
+
+
PV System Models
----------------
@@ -311,16 +328,6 @@ PVWatts model
inverter.pvwatts
pvsystem.pvwatts_losses
-Functions for fitting diode models
-----------------------------------
-
-.. autosummary::
- :toctree: generated/
-
- ivtools.fit_sde_sandia
- ivtools.fit_sdm_cec_sam
- ivtools.fit_sdm_desoto
-
Other
-----
diff --git a/docs/sphinx/source/whatsnew/v0.8.0.rst b/docs/sphinx/source/whatsnew/v0.8.0.rst
index 2bda4b0db8..2aa1ebfb67 100644
--- a/docs/sphinx/source/whatsnew/v0.8.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.0.rst
@@ -38,6 +38,8 @@ Enhancements
* Add :py:func:`pvlib.iam.marion_diffuse` and
:py:func:`pvlib.iam.marion_integrate` to calculate IAM values for
diffuse irradiance. (:pull:`984`)
+* Add :py:func:`pvlib.inverter.fit_sandia` that fits the Sandia inverter model
+ to a set of inverter efficiency curves. (:pull:`1011`)
Bug fixes
~~~~~~~~~
diff --git a/pvlib/data/inverter_fit_snl_meas.csv b/pvlib/data/inverter_fit_snl_meas.csv
new file mode 100644
index 0000000000..dcbc71201d
--- /dev/null
+++ b/pvlib/data/inverter_fit_snl_meas.csv
@@ -0,0 +1,127 @@
+fraction_of_rated_power,dc_voltage_level,ac_power,dc_voltage,efficiency
+0.1,Vmin,32800,660.5,0.95814
+0.2,Vmin,73000,660.9,0.9755
+0.3,Vmin,107500,660.73,0.97787
+0.5,Vmin,168100,660.1,0.97998
+0.75,Vmin,235467,660.27,0.97785
+1,Vmin,318067,660.03,0.97258
+0.1,Vnom,32800,740.1,0.95441
+0.2,Vnom,72900,740.2,0.96985
+0.3,Vnom,107600,740.13,0.97611
+0.5,Vnom,167500,740.57,0.97554
+0.75,Vnom,234967,741.87,0.97429
+1,Vnom,317267,737.7,0.97261
+0.1,Vmax,32800,959.07,0.94165
+0.2,Vmax,71600,959.43,0.95979
+0.3,Vmax,107300,959.1,0.96551
+0.5,Vmax,166700,959.5,0.96787
+0.75,Vmax,234767,958.8,0.96612
+1,Vmax,317467,957,0.96358
+0.1,Vmin,32800,660.77,0.95721
+0.2,Vmin,73000,660.77,0.97247
+0.3,Vmin,107500,660.47,0.97668
+0.5,Vmin,168100,660.23,0.98018
+0.75,Vmin,235333.3333,660.3,0.97716
+1,Vmin,317466.6667,659.8,0.97184
+0.1,Vnom,32800,740.27,0.95534
+0.2,Vnom,72900,740.27,0.97071
+0.3,Vnom,107600,740.2,0.97523
+0.5,Vnom,167500,740.8,0.97592
+0.75,Vnom,234966.6667,741.67,0.97429
+1,Vnom,317300,737.97,0.97252
+0.1,Vmax,32800,959.23,0.93718
+0.2,Vmax,71600,959.4,0.96107
+0.3,Vmax,107300,959.27,0.96638
+0.5,Vmax,166700,959.57,0.96825
+0.75,Vmax,234733.3333,959.17,0.96731
+1,Vmax,317466.6667,957.07,0.96241
+0.1,Vmin,32800,660.57,0.95814
+0.2,Vmin,73000,660.67,0.97333
+0.3,Vmin,107500,660.5,0.97609
+0.5,Vmin,168100,660.1,0.97884
+0.75,Vmin,235066.6667,660.3,0.97781
+1,Vmin,316900,659.27,0.97209
+0.1,Vnom,32800,740.17,0.95441
+0.2,Vnom,72900,740.27,0.97028
+0.3,Vnom,107600,740.23,0.97464
+0.5,Vnom,167500,740.3,0.97573
+0.75,Vnom,235133.3333,742.13,0.97417
+1,Vnom,317300,737.9,0.97252
+0.1,Vmax,32800,959.2,0.93626
+0.2,Vmax,71600,959.43,0.95979
+0.3,Vmax,107300,959.2,0.96493
+0.5,Vmax,166700,959.5,0.96806
+0.75,Vmax,234833.3333,958.97,0.96573
+1,Vmax,317400,956.87,0.96279
+0.1,Vmin,32800,660.63,0.95627
+0.2,Vmin,73000,660.9,0.97377
+0.3,Vmin,107500,661.07,0.97846
+0.5,Vmin,168100,660.13,0.97827
+0.75,Vmin,235200,660.43,0.97701
+1,Vmin,316933.3333,660.07,0.97308
+0.1,Vnom,32800,740.27,0.95441
+0.2,Vnom,72900,740.37,0.96985
+0.3,Vnom,107600,740.27,0.97464
+0.5,Vnom,167500,740.53,0.97592
+0.75,Vnom,234800,742.13,0.97374
+1,Vnom,317300,737.73,0.97202
+0.1,Vmax,32800,959.2,0.93271
+0.2,Vmax,71600,959.27,0.95594
+0.3,Vmax,107300,959.2,0.96783
+0.5,Vmax,166700,959.47,0.96806
+0.75,Vmax,234700,958.67,0.96505
+1,Vmax,317433.3333,956.8,0.96299
+0.1,Vmin,32800,660.67,0.95534
+0.2,Vmin,73000,660.8,0.9755
+0.3,Vmin,107500,661.23,0.97905
+0.5,Vmin,168100,660.33,0.97941
+0.75,Vmin,236566.6667,660.43,0.97741
+1,Vmin,317866.6667,659.53,0.97366
+0.1,Vnom,32800,740.13,0.95627
+0.2,Vnom,72900,740.37,0.97071
+0.3,Vnom,107600,740.4,0.97523
+0.5,Vnom,167500,740.57,0.97649
+0.75,Vnom,234733.3333,741.83,0.97413
+1,Vnom,317333.3333,737.77,0.97222
+0.1,Vmax,32800,959.03,0.9336
+0.2,Vmax,71600,959.33,0.96108
+0.3,Vmax,107300,959.2,0.96464
+0.5,Vmax,166700,959.57,0.96975
+0.75,Vmax,234700,958.83,0.96584
+1,Vmax,317400,956.7,0.96338
+0.1,Vmin,32800,660.43,0.95349
+0.2,Vmin,73000,660.83,0.97247
+0.3,Vmin,107500,660.47,0.97668
+0.5,Vmin,168100,660.27,0.97941
+0.75,Vmin,236167,660.57,0.97657
+1,Vmin,317833,660.47,0.97177
+0.1,Vnom,32800,740.2,0.95534
+0.2,Vnom,72900,740.3,0.96985
+0.3,Vnom,107600,740.33,0.97434
+0.5,Vnom,167500,740.53,0.9763
+0.75,Vnom,234833,741.93,0.97468
+1,Vnom,317333,737.73,0.97242
+0.1,Vmax,32800,959.03,0.93626
+0.2,Vmax,71600,959.37,0.95936
+0.3,Vmax,107300,959.23,0.96464
+0.5,Vmax,166700,959.5,0.96731
+0.75,Vmax,235267,958.67,0.96592
+1,Vmax,317400,957.07,0.96269
+0.1,Vmin,32800,660.73,0.95627
+0.2,Vmin,73000,660.57,0.97204
+0.3,Vmin,107500,660.97,0.97787
+0.5,Vmin,168100,660,0.97865
+0.75,Vmin,236200,659.77,0.97778
+1,Vmin,317200,659.2,0.97221
+0.1,Vnom,32800,740.2,0.95257
+0.2,Vnom,72900,740.33,0.97071
+0.3,Vnom,107600,740.43,0.97464
+0.5,Vnom,167500,740.63,0.97592
+0.75,Vnom,235100,741.87,0.97458
+1,Vnom,317333.3333,737.83,0.97232
+0.1,Vmax,32800,959,0.93182
+0.2,Vmax,71600,959.4,0.9585
+0.3,Vmax,107300,959.07,0.96783
+0.5,Vmax,166700,959.7,0.96806
+0.75,Vmax,235466.6667,958.77,0.96569
+1,Vmax,317400,956.6,0.96308
diff --git a/pvlib/data/inverter_fit_snl_sim.csv b/pvlib/data/inverter_fit_snl_sim.csv
new file mode 100644
index 0000000000..15b1062079
--- /dev/null
+++ b/pvlib/data/inverter_fit_snl_sim.csv
@@ -0,0 +1,19 @@
+fraction_of_rated_power,efficiency,dc_voltage_level,dc_voltage,dc_power,ac_power,efficiency
+0.1,0.892146066,Vmin,220,112.0892685,100,0.892146067
+0.1,0.876414009,Vnom,240,114.1013254,100,0.876414009
+0.1,0.861227164,Vmax,260,116.1133835,100,0.861227164
+0.2,0.925255801,Vmin,220,216.15644,200,0.925255801
+0.2,0.916673906,Vnom,240,218.1800951,200,0.916673906
+0.2,0.908249736,Vmax,260,220.2037524,200,0.908249736
+0.3,0.936909957,Vmin,220,320.2015283,300,0.936909957
+0.3,0.93099374,Vnom,240,322.2363236,300,0.93099374
+0.3,0.925151763,Vmax,260,324.2711217,300,0.925151763
+0.5,0.946565413,Vmin,220,528.2255121,500,0.946565413
+0.5,0.94289593,Vnom,240,530.2812159,500,0.94289593
+0.5,0.939254781,Vmax,260,532.336923,500,0.939254781
+0.75,0.951617818,Vmin,220,788.1315225,750,0.951617818
+0.75,0.949113828,Vnom,240,790.2108027,750,0.949113828
+0.75,0.946622992,Vmax,260,792.2900736,750,0.946622992
+1,0.954289529,Vmin,220,1047.900002,1000,0.95428953
+1,0.952380952,Vnom,240,1050,1000,0.952380952
+1,0.950479992,Vmax,260,1052.1,1000,0.950479992
| [
{
"components": [
{
"doc": "Determine parameters for the Sandia inverter model.\n\nParameters\n----------\nac_power : array_like\n AC power output at each data point [W].\ndc_power : array_like\n DC power input at each data point [W].\ndc_voltage : array_like\n DC input voltage at each da... | [
"pvlib/tests/test_inverter.py::test_fit_sandia[infilen0-expected0]",
"pvlib/tests/test_inverter.py::test_fit_sandia[infilen1-expected1]"
] | [
"pvlib/tests/test_inverter.py::test_adr",
"pvlib/tests/test_inverter.py::test_adr_vtol",
"pvlib/tests/test_inverter.py::test_adr_float",
"pvlib/tests/test_inverter.py::test_adr_invalid_and_night",
"pvlib/tests/test_inverter.py::test_sandia",
"pvlib/tests/test_inverter.py::test_sandia_float",
"pvlib/test... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add function to fit Sandia inverter model
- [x] Closes #975
- [x] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)
- [x] Tests added
- [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.
- [x] Adds description and name entries in the appropriate "what's new" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).
- [x] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.
- [x] Pull request is nearly complete and ready for detailed review.
- [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.
Inputs are: efficiency curves with DC voltage and AC power, rated AC power and night tare. Returns dict ready for the Sandia inverter model function. The algorithm reference is on the [PVPMC](https://pvpmc.sandia.gov/modeling-steps/dc-to-ac-conversion/sandia-inverter-model/) site.
----------
</request>
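To make the fitting steps concrete, here is a minimal standalone sketch of STEPS 3B and 3D of the referenced PVPMC procedure for a single DC voltage level: fit a quadratic of AC power vs. DC power, then take the positive quadratic root to find the DC power at rated AC output (p_dc) and the start-up power (p_s0). This is an illustrative reimplementation under those assumptions, not the pvlib API, and any numbers used with it are made up.

```python
import numpy as np
from numpy.polynomial.polynomial import polyfit  # ascending-order coefficients, unlike np.polyfit


def solve_quad(a, b, c):
    # Positive branch of the quadratic formula for a*x**2 + b*x + c = 0
    return (-b + (b**2 - 4 * a * c) ** 0.5) / (2 * a)


def fit_one_voltage_level(dc_power, ac_power, p_ac_0):
    # STEP 3B: quadratic fit ac_power = c + b*dc_power + a*dc_power**2
    c, b, a = polyfit(dc_power, ac_power, 2)
    # STEP 3D: DC power that produces rated AC power p_ac_0, and the
    # self-consumption (start-up) DC power where AC output crosses zero
    p_dc = solve_quad(a, b, c - p_ac_0)
    p_s0 = solve_quad(a, b, c)
    return a, p_dc, p_s0
```

Repeating this at the 'Vmin', 'Vnom' and 'Vmax' voltage levels yields the per-level coefficients that the full procedure then regresses against DC voltage.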
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in pvlib/inverter.py]
(definition of fit_sandia:)
def fit_sandia(ac_power, dc_power, dc_voltage, dc_voltage_level, p_ac_0, p_nt):
"""Determine parameters for the Sandia inverter model.
Parameters
----------
ac_power : array_like
AC power output at each data point [W].
dc_power : array_like
DC power input at each data point [W].
dc_voltage : array_like
DC input voltage at each data point [V].
dc_voltage_level : array_like
DC input voltage level at each data point. Values must be 'Vmin',
'Vnom' or 'Vmax'.
p_ac_0 : float
Rated AC power of the inverter [W].
p_nt : float
Night tare, i.e., power consumed while inverter is not delivering
AC power. [W]
Returns
-------
dict
A set of parameters for the Sandia inverter model [1]_. See
:py:func:`pvlib.inverter.sandia` for a description of keys and values.
See Also
--------
pvlib.inverter.sandia
Notes
-----
The fitting procedure to estimate parameters is described at [2]_.
A data point is a pair of values (dc_power, ac_power). Typically, inverter
performance is measured or described at three DC input voltage levels,
denoted 'Vmin', 'Vnom' and 'Vmax' and at each level, inverter efficiency
is determined at various output power levels. For example,
the CEC inverter test protocol [3]_ specifies measurement of input DC
power that delivers AC output power of 0.1, 0.2, 0.3, 0.5, 0.75 and 1.0 of
the inverter's AC power rating.
References
----------
.. [1] D. King, S. Gonzalez, G. Galbraith, W. Boyson, "Performance Model
for Grid-Connected Photovoltaic Inverters", SAND2007-5036, Sandia
National Laboratories.
.. [2] Sandia Inverter Model page, PV Performance Modeling Collaborative
https://pvpmc.sandia.gov/modeling-steps/dc-to-ac-conversion/sandia-inverter-model/
.. [3] W. Bower, et al., "Performance Test Protocol for Evaluating
Inverters Used in Grid-Connected Photovoltaic Systems", available at
https://www.energy.ca.gov/sites/default/files/2020-06/2004-11-22_Sandia_Test_Protocol_ada.pdf"""
(definition of fit_sandia.solve_quad:)
def solve_quad(a, b, c):
(definition of fit_sandia.extract_c:)
def extract_c(x_d, add):
[end of new definitions in pvlib/inverter.py]
</definitions>
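Among the helpers defined above, `extract_c` (STEP 3E of the PVPMC procedure) fits a line to (DC-voltage offset, coefficient) pairs and returns the slope normalized by the intercept. A hedged standalone sketch of that step, with made-up numbers for any usage:

```python
import numpy as np
from numpy.polynomial.polynomial import polyfit  # returns [intercept, slope] for degree 1


def extract_c(x_d, values):
    # Fit values ~ beta0 + beta1 * x_d; the Sandia C-coefficient is the
    # slope relative to the intercept, c = beta1 / beta0.
    beta0, beta1 = polyfit(x_d, values, 1)
    return beta0, beta1, beta1 / beta0
```

With x_d = Vdc - Vnom, beta0 is the fitted quantity's value at nominal voltage (e.g. Pdco) and c is its fractional sensitivity to voltage (e.g. C1).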
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion from the issues linked to this pull request.
<issues>
Implement a function to calculate inverter model parameters
From measured efficiency curves and datasheets.
See the [previous discussion](https://groups.google.com/forum/?#!topic/pvlib-python/R7cs-O9oG6o) for more context.
----------
**Progress report**: I've spent some time on the datasheet function and it's not looking great. There's a [technique](https://pvpmc.sandia.gov/modeling-steps/dc-to-ac-conversion/sandia-inverter-model/) to get to Sandia inverter model parameters from measured efficiency, following the [CEC test protocol](https://pvpmc.sandia.gov/modeling-steps/dc-to-ac-conversion/cec-inverter-test-protocol/). I've coded this technique. It uses several regressions that assume replicated measurements of efficiency.
Datasheets for inverters typically provide efficiency at a sequence of output AC power points, at different DC voltage input levels. Feeding these data into the parameter method for measured data (adjusted for the lack of replicated measurements) yields parameter values which, when put into the inverter model, don't precisely reproduce the input efficiencies. I think this is a problem. I've tried various techniques to get to parameters from some test efficiency curves that are calculated from assumed parameters, and can't (yet) reliably recover the known parameters.
**An alternate idea**: implement a new inverter efficiency model that basically interpolates on the values found in a datasheet. The interpolation needs to be a bit more sophisticated than simply linear between input power levels, to preserve the downward curving shape of the input DC power vs. efficiency curve. I'm thinking that the convexity-preserving quadratic spline `schumaker_qspline` in #708 would be useful here.
Looking for feedback on the value of and interest in a datasheet-based inverter efficiency model.
@cwhanse Thanks for the update and for the pointers. I have read Sandia's [Performance Model for Grid-Connected Photovoltaic Inverters](https://energy.sandia.gov/wp-content//gallery/uploads/Performance-Model-for-Grid-Connected-Photovoltaic-Inverters.pdf) and that helped me understand the model. :blush:
> I've tried various techniques to get to parameters from some test efficiency curves that are calculated from assumed parameters, and can't (yet) reliably recover the known parameters
I agree this should be the first step in order to validate the code implementation. Have you uploaded your code somewhere? If so, I would love to have a look at it.
> (adjusted for the lack of replicated measurements)
What do you mean by "adjusted"? I am guessing with the datasheet points you just have fewer data to fit the parameters. Or do you adjust/transform them somehow?
> An alternate idea: implement a new inverter efficiency model that basically interpolates on the values found in a datasheet
This could be complementary to the first idea. I think a function to calculate CEC parameters could still be useful. At least for the "minimum" model: just calculating the quadratic function to account for the curvature (C_0) and ignoring V_DC changes.
I was surprised when I found the V_DC linearity assumptions of the CEC model, as in some inverters that does not seem to be the case:

Or am I misunderstanding something?
By the way, the efficiency curve I just posted above ([SMA's Sunny Tripower 6.0](https://files.sma.de/downloads/STP3-6-3AV-40-DS-en-20.pdf)) looks like a buck-boost converter efficiency curve.
For example, this is the curve of the [LM5175 buck-boost converter](http://www.ti.com/lit/ds/symlink/lm5175.pdf):

Where the highest efficiency is achieved at the closest-to-output V_in. Higher V_in means lower efficiency, and lower V_in means lower efficiency too.
Does it mean that some inverters have a DC-DC converter stage and then a DC-AC converter stage? That could make sense for a multi-MPPT inverter. For example, for a 2-MPPT inverter:

Perhaps the DC-AC stage operates at a constant V_in, and this setup allows each MPPT to operate at a different V_in.
Do you know if that could be the case for a multi-MPPT inverter? Is it always the case that there are 2 separate stages and the DC-AC stage operates at constant V_in? (i.e.: there is always a DC-DC converter, even for single-MPPT inverters)
I am trying to understand how an inverter works internally, but maybe there is a better place to ask/discuss this? :innocent:
@cwhanse Friendly ping with respect to:
> Have you uploaded your code somewhere? If so, I would love to have a look at it.
If you would rather not share it, I may code it myself, but I would rather not duplicate work. :innocent:
--------------------
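The "interpolate the datasheet" alternative floated in this thread can be sketched as a baseline using plain linear interpolation; the function name here is an illustrative assumption, and the thread's suggestion is to swap in a convexity-preserving quadratic spline (schumaker_qspline, GH #708) to keep the downward-curving shape of the efficiency curve.

```python
import numpy as np


def datasheet_efficiency(p_dc, p_dc_points, eta_points):
    # Baseline sketch: piecewise-linear interpolation of efficiency vs.
    # DC power at one voltage level. A shape-preserving spline (e.g.
    # schumaker_qspline) would replace np.interp in a refined version.
    return np.interp(p_dc, p_dc_points, eta_points)
```

A full model would interpolate a curve like this at each tabulated DC voltage level and then interpolate across voltage.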
</issues> | aa1635bcb40dc83f82e9fd72158670c235bfe99b |
huggingface__datasets-430 | 430 | huggingface/datasets | null | c70d0c022e5ef2adca580e4fccfc319bdcae88a9 | 2020-07-23T15:43:49Z | diff --git a/docs/source/package_reference/main_classes.rst b/docs/source/package_reference/main_classes.rst
index 5fc244089df..8a89ca6c008 100644
--- a/docs/source/package_reference/main_classes.rst
+++ b/docs/source/package_reference/main_classes.rst
@@ -26,6 +26,17 @@ The base class :class:`nlp.Dataset` implements a Dataset backed by an Apache Arr
description, download_checksums, download_size, features, homepage,
license, size_in_bytes, supervised_keys, version
+
+``DatasetDict``
+~~~~~~~~~~~~~~~~~~~~~
+
+Dictionary with split names as keys ('train', 'test' for example), and :obj:`nlp.Dataset` objects as values.
+It also has dataset transform methods like map or filter, to process all the splits at once.
+
+.. autoclass:: nlp.DatasetDict
+ :members: map, filter, sort, shuffle
+
+
``Features``
~~~~~~~~~~~~~~~~~~~~~
diff --git a/docs/source/processing.rst b/docs/source/processing.rst
index 5a68f35c83a..94d021aa8ce 100644
--- a/docs/source/processing.rst
+++ b/docs/source/processing.rst
@@ -420,6 +420,30 @@ Here we have now multiply the size of our dataset by ``4`` by adding three alter
Obviously this is a very simple example for data augmentation and it could be improved in several ways, the most interesting take-aways is probably how this can be written in roughtly ten lines of code without any loss in flexibility.
+Processing several splits at once
+-----------------------------------
+
+When you load a dataset that has various splits, :func:`nlp.load_dataset` returns a :obj:`nlp.DatasetDict` that is a dictionary with split names as keys ('train', 'test' for example), and :obj:`nlp.Dataset` objects as values.
+You can directly call map, filter, shuffle, and sort directly on a :obj:`nlp.DatasetDict` object:
+
+.. code-block::
+
+ >>> from nlp import load_dataset
+ >>>
+ >>> dataset = load_dataset('glue', 'mrpc') # load all the splits
+ >>> dataset.keys()
+ dict_keys(['train', 'validation', 'test'])
+ >>> encoded_dataset = dataset.map(lambda examples: tokenizer(examples['sentence1']), batched=True)
+ >>> encoded_dataset["train"][0]
+ {'sentence1': 'Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .',
+ 'sentence2': 'Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .',
+ 'label': 1,
+ 'idx': 0,
+ 'input_ids': [ 101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102],
+ 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
+ }
+
This concludes our chapter on data processing with ๐คnlp (and ๐คtransformers).
Controling the cache behavior
diff --git a/docs/source/quicktour.rst b/docs/source/quicktour.rst
index 19a0a152d3c..2d645e8b8cc 100644
--- a/docs/source/quicktour.rst
+++ b/docs/source/quicktour.rst
@@ -54,7 +54,7 @@ If you want to create a :class:`nlp.Dataset` from local CSV, JSON, text or panda
.. note::
- If you don't provide a :obj:`split` argument to :func:`nlp.load_dataset`, this method will return a dictionnary containing a datasets for each split in the dataset.
+ If you don't provide a :obj:`split` argument to :func:`nlp.load_dataset`, this method will return a dictionary containing a datasets for each split in the dataset. This dictionary is a :obj:`nlp.DatasetDict` object that lets you process all the splits at once using :func:`nlp.DatasetDict.map`, :func:`nlp.DatasetDict.filter`, etc.
Now let's have a look at our newly created :class:`nlp.Dataset` object. It basically behaves like a normal python container. You can query its length, get a single row but also get multiple rows and even index along columns (see all the details in :doc:`exploring </exploring>`):
diff --git a/src/nlp/__init__.py b/src/nlp/__init__.py
index e6d56fbcda3..7afce7943db 100644
--- a/src/nlp/__init__.py
+++ b/src/nlp/__init__.py
@@ -27,6 +27,7 @@
from .arrow_dataset import Dataset
from .arrow_reader import ReadInstruction
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
+from .dataset_dict import DatasetDict
from .features import ClassLabel, Features, Sequence, Tensor, Translation, TranslationVariableLanguages, Value
from .info import DatasetInfo, MetricInfo
from .inspect import inspect_dataset, inspect_metric, list_datasets, list_metrics
diff --git a/src/nlp/arrow_dataset.py b/src/nlp/arrow_dataset.py
index 35ce92852c6..5f5634066e2 100644
--- a/src/nlp/arrow_dataset.py
+++ b/src/nlp/arrow_dataset.py
@@ -24,7 +24,7 @@
from collections.abc import Iterable, Mapping
from functools import partial
from math import ceil, floor
-from typing import Any, Dict, List, Optional, Union
+from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
import numpy as np
import pandas as pd
@@ -41,6 +41,9 @@
from .utils import map_nested
+if TYPE_CHECKING:
+ from .dataset_dict import DatasetDict
+
logger = logging.getLogger(__name__)
@@ -680,7 +683,7 @@ def map(
features: Optional[Features] = None,
disable_nullable: bool = True,
verbose: bool = True,
- ):
+ ) -> "Dataset":
""" Apply a function to all the elements in the table (individually or in batches)
and update the table (if function does updated examples).
@@ -849,7 +852,18 @@ def apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples=F
else:
return self
- def filter(self, function, with_indices=False, **kwargs):
+ def filter(
+ self,
+ function,
+ with_indices=False,
+ batch_size: Optional[int] = 1000,
+ remove_columns: Optional[List[str]] = None,
+ keep_in_memory: bool = False,
+ load_from_cache_file: bool = True,
+ cache_file_name: Optional[str] = None,
+ writer_batch_size: Optional[int] = 1000,
+ verbose: bool = True,
+ ) -> "Dataset":
""" Apply a filter function to all the elements in the table in batches
and update the table so that the dataset only includes examples according to the filter function.
@@ -870,7 +884,6 @@ def filter(self, function, with_indices=False, **kwargs):
results of the computation instead of the automatically generated cache file name.
`writer_batch_size` (`int`, default: `1000`): Number of rows per write operation for the cache file writer.
Higher value gives smaller cache files, lower value consume less temporary memory while running `.map()`.
- `disable_nullable` (`bool`, default: `True`): Allow null values in the table.
`verbose` (`bool`, default: `True`): Set to `False` to deactivate the tqdm progress bar and informations.
"""
if len(self.list_indexes()) > 0:
@@ -909,7 +922,19 @@ def map_function(batch, *args):
return result
# return map function
- return self.map(map_function, batched=True, with_indices=with_indices, features=self.features, **kwargs)
+ return self.map(
+ map_function,
+ batched=True,
+ with_indices=with_indices,
+ features=self.features,
+ batch_size=batch_size,
+ remove_columns=remove_columns,
+ keep_in_memory=keep_in_memory,
+ load_from_cache_file=load_from_cache_file,
+ cache_file_name=cache_file_name,
+ writer_batch_size=writer_batch_size,
+ verbose=verbose,
+ )
def select(
self,
@@ -999,7 +1024,7 @@ def sort(
cache_file_name: Optional[str] = None,
writer_batch_size: Optional[int] = 1000,
verbose: bool = True,
- ):
+ ) -> "Dataset":
""" Create a new dataset sorted according to a column.
Currently sorting according to a column name uses numpy sorting algorithm under the hood.
@@ -1081,8 +1106,8 @@ def shuffle(
cache_file_name: Optional[str] = None,
writer_batch_size: Optional[int] = 1000,
verbose: bool = True,
- ):
- """ Create a new Dataset where rows the rows are shuffled.
+ ) -> "Dataset":
+ """ Create a new Dataset where the rows are shuffled.
Currently shuffling uses numpy random generators.
You can either supply a NumPy BitGenerator to use, or a seed to initiate NumPy's default random generator (PCG64).
@@ -1241,8 +1266,8 @@ def train_test_split(
test_cache_file_name: Optional[str] = None,
writer_batch_size: Optional[int] = 1000,
verbose: bool = True,
- ):
- """ Return a dictionary with two random train and test subsets (`train` and `test` ``Dataset`` splits).
+ ) -> "DatasetDict":
+ """ Return a dictionary (:obj:`nlp.DatsetDict`) with two random train and test subsets (`train` and `test` ``Dataset`` splits).
Splits are created from the dataset according to `test_size`, `train_size` and `shuffle`.
This method is similar to scikit-learn `train_test_split` with the omission of the stratified options.
@@ -1274,13 +1299,15 @@ def train_test_split(
Higher value gives smaller cache files, lower value consume less temporary memory while running `.map()`.
`verbose` (`bool`, default: `True`): Set to `False` to deactivate the tqdm progress bar and informations.
"""
+ from .dataset_dict import DatasetDict # import here because of circular dependency
+
if len(self.list_indexes()) > 0:
raise DatasetTransformationNotAllowedError(
"Using `.train_test_split` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it."
)
# If the array is empty we do nothing
if len(self) == 0:
- return self
+ return DatasetDict({"train": self, "test": self})
if test_size is None and train_size is None:
test_size = 0.25
@@ -1383,10 +1410,12 @@ def train_test_split(
logger.info(
"Loading cached split dataset at %s and %s", train_cache_file_name, test_cache_file_name
)
- return {
- "train": Dataset.from_file(train_cache_file_name, info=self.info, split=self.split),
- "test": Dataset.from_file(test_cache_file_name, info=self.info, split=self.split),
- }
+ return DatasetDict(
+ {
+ "train": Dataset.from_file(train_cache_file_name, info=self.info, split=self.split),
+ "test": Dataset.from_file(test_cache_file_name, info=self.info, split=self.split),
+ }
+ )
if not shuffle:
train_indices = np.arange(n_train)
@@ -1417,7 +1446,7 @@ def train_test_split(
verbose=verbose,
)
- return {"train": train_split, "test": test_split}
+ return DatasetDict({"train": train_split, "test": test_split})
def shard(
self,
@@ -1429,7 +1458,7 @@ def shard(
cache_file_name: Optional[str] = None,
writer_batch_size: Optional[int] = 1000,
verbose: bool = True,
- ):
+ ) -> "Dataset":
""" Return the `index`-nth shard from dataset split into `num_shards` pieces.
This shards deterministically. dset.shard(n, i) will contain all elements of dset whose
diff --git a/src/nlp/builder.py b/src/nlp/builder.py
index 72fe7b55622..5a766b5a0e4 100644
--- a/src/nlp/builder.py
+++ b/src/nlp/builder.py
@@ -32,6 +32,7 @@
from .arrow_dataset import Dataset
from .arrow_reader import HF_GCP_BASE_URL, ArrowReader, DatasetNotOnHfGcs, MissingFilesOnHfGcs
from .arrow_writer import ArrowWriter, BeamWriter
+from .dataset_dict import DatasetDict
from .features import Features, Value
from .info import DATASET_INFO_FILENAME, DATASET_INFOS_DICT_FILE_NAME, LICENSE_FILENAME, DatasetInfo, DatasetInfosDict
from .naming import camelcase_to_snakecase, filename_prefix_for_split
@@ -527,7 +528,7 @@ def _make_split_generators_kwargs(self, prepare_split_kwargs):
del prepare_split_kwargs
return {}
- def as_dataset(self, split: Optional[Split] = None, run_post_process=True):
+ def as_dataset(self, split: Optional[Split] = None, run_post_process=True) -> Union[Dataset, DatasetDict]:
""" Return a Dataset for the specified split.
"""
logger.info("Constructing Dataset for split %s, from %s", split, self._cache_dir)
@@ -549,6 +550,8 @@ def as_dataset(self, split: Optional[Split] = None, run_post_process=True):
datasets = utils.map_nested(
partial(self._build_single_dataset, run_post_process=run_post_process), split, map_tuple=True
)
+ if isinstance(datasets, dict):
+ datasets = DatasetDict(datasets)
return datasets
def _build_single_dataset(self, split, run_post_process):
diff --git a/src/nlp/dataset_dict.py b/src/nlp/dataset_dict.py
new file mode 100644
index 00000000000..0c782947761
--- /dev/null
+++ b/src/nlp/dataset_dict.py
@@ -0,0 +1,250 @@
+from typing import Dict, List, Optional
+
+import numpy as np
+
+from .arrow_dataset import Dataset
+from .features import Features
+
+
+class DatasetDict(dict):
+ """A dictionary (dict of str: nlp.Dataset) with dataset transforms methods (map, filter, etc.)"""
+
+ def _check_values_type(self):
+ for dataset in self.values():
+ if not isinstance(dataset, Dataset):
+ raise TypeError(
+ "Values in `DatasetDict` should be of type `Dataset` but got type '{}'".format(type(dataset))
+ )
+
+ def map(
+ self,
+ function,
+ with_indices: bool = False,
+ batched: bool = False,
+ batch_size: Optional[int] = 1000,
+ remove_columns: Optional[List[str]] = None,
+ keep_in_memory: bool = False,
+ load_from_cache_file: bool = True,
+ cache_file_names: Optional[Dict[str, str]] = None,
+ writer_batch_size: Optional[int] = 1000,
+ features: Optional[Features] = None,
+ disable_nullable: bool = True,
+ verbose: bool = True,
+ ) -> "DatasetDict":
+ """ Apply a function to all the elements in the table (individually or in batches)
+ and update the table (if the function does update examples).
+ The transformation is applied to all the datasets of the dataset dictionary.
+
+ Args:
+ `function` (`callable`): with one of the following signature:
+ - `function(example: Dict) -> Union[Dict, Any]` if `batched=False` and `with_indices=False`
+ - `function(example: Dict, indices: int) -> Union[Dict, Any]` if `batched=False` and `with_indices=True`
+ - `function(batch: Dict[List]) -> Union[Dict, Any]` if `batched=True` and `with_indices=False`
+ - `function(batch: Dict[List], indices: List[int]) -> Union[Dict, Any]` if `batched=True` and `with_indices=True`
+ `with_indices` (`bool`, default: `False`): Provide example indices to `function`. Note that in this case the signature of `function` should be `def function(example, idx): ...`.
+ `batched` (`bool`, default: `False`): Provide batch of examples to `function`
+ `batch_size` (`Optional[int]`, default: `1000`): Number of examples per batch provided to `function` if `batched=True`
+ `batch_size <= 0` or `batch_size == None`: Provide the full dataset as a single batch to `function`
+ `remove_columns` (`Optional[List[str]]`, default: `None`): Remove a selection of columns while doing the mapping.
+ Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding
+ columns with names in `remove_columns`, these columns will be kept.
+ `keep_in_memory` (`bool`, default: `False`): Keep the dataset in memory instead of writing it to a cache file.
+ `load_from_cache_file` (`bool`, default: `True`): If a cache file storing the current computation from `function`
+ can be identified, use it instead of recomputing.
+ `cache_file_names` (`Optional[Dict[str, str]]`, default: `None`): Provide the name of a cache file to use to store the
+ results of the computation instead of the automatically generated cache file name.
+ You have to provide one :obj:`cache_file_name` per dataset in the dataset dictionary.
+ `writer_batch_size` (`int`, default: `1000`): Number of rows per write operation for the cache file writer.
+ Higher value gives smaller cache files, lower value consume less temporary memory while running `.map()`.
+ `features` (`Optional[nlp.Features]`, default: `None`): Use a specific Features to store the cache file
+ instead of the automatically generated one.
+ `disable_nullable` (`bool`, default: `True`): Allow null values in the table.
+ `verbose` (`bool`, default: `True`): Set to `False` to deactivate the tqdm progress bar and information.
+ """
+ self._check_values_type()
+ if cache_file_names is None:
+ cache_file_names = {k: None for k in self}
+ return DatasetDict(
+ {
+ k: dataset.map(
+ function=function,
+ with_indices=with_indices,
+ batched=batched,
+ batch_size=batch_size,
+ remove_columns=remove_columns,
+ keep_in_memory=keep_in_memory,
+ load_from_cache_file=load_from_cache_file,
+ cache_file_name=cache_file_names[k],
+ writer_batch_size=writer_batch_size,
+ features=features,
+ disable_nullable=disable_nullable,
+ verbose=verbose,
+ )
+ for k, dataset in self.items()
+ }
+ )
+
+ def filter(
+ self,
+ function,
+ with_indices=False,
+ batch_size: Optional[int] = 1000,
+ remove_columns: Optional[List[str]] = None,
+ keep_in_memory: bool = False,
+ load_from_cache_file: bool = True,
+ cache_file_names: Optional[Dict[str, str]] = None,
+ writer_batch_size: Optional[int] = 1000,
+ verbose: bool = True,
+ ) -> "DatasetDict":
+ """ Apply a filter function to all the elements in the table in batches
+ and update the table so that the dataset only includes examples according to the filter function.
+ The transformation is applied to all the datasets of the dataset dictionary.
+
+ Args:
+ `function` (`callable`): with one of the following signature:
+ - `function(example: Dict) -> bool` if `with_indices=False`
+ - `function(example: Dict, indices: int) -> bool` if `with_indices=True`
+ `with_indices` (`bool`, default: `False`): Provide example indices to `function`. Note that in this case the signature of `function` should be `def function(example, idx): ...`.
+ `batch_size` (`Optional[int]`, default: `1000`): Number of examples per batch provided to `function` if `batched=True`
+ `batch_size <= 0` or `batch_size == None`: Provide the full dataset as a single batch to `function`
+ `remove_columns` (`Optional[List[str]]`, default: `None`): Remove a selection of columns while doing the mapping.
+ Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding
+ columns with names in `remove_columns`, these columns will be kept.
+ `keep_in_memory` (`bool`, default: `False`): Keep the dataset in memory instead of writing it to a cache file.
+ `load_from_cache_file` (`bool`, default: `True`): If a cache file storing the current computation from `function`
+ can be identified, use it instead of recomputing.
+ `cache_file_names` (`Optional[Dict[str, str]]`, default: `None`): Provide the name of a cache file to use to store the
+ results of the computation instead of the automatically generated cache file name.
+ You have to provide one :obj:`cache_file_name` per dataset in the dataset dictionary.
+ `writer_batch_size` (`int`, default: `1000`): Number of rows per write operation for the cache file writer.
+ Higher value gives smaller cache files, lower value consume less temporary memory while running `.map()`.
+ `verbose` (`bool`, default: `True`): Set to `False` to deactivate the tqdm progress bar and information.
+ """
+ self._check_values_type()
+ if cache_file_names is None:
+ cache_file_names = {k: None for k in self}
+ return DatasetDict(
+ {
+ k: dataset.filter(
+ function=function,
+ with_indices=with_indices,
+ batch_size=batch_size,
+ remove_columns=remove_columns,
+ keep_in_memory=keep_in_memory,
+ load_from_cache_file=load_from_cache_file,
+ cache_file_name=cache_file_names[k],
+ writer_batch_size=writer_batch_size,
+ verbose=verbose,
+ )
+ for k, dataset in self.items()
+ }
+ )
+
+ def sort(
+ self,
+ column: str,
+ reverse: bool = False,
+ kind: str = None,
+ keep_in_memory: bool = False,
+ load_from_cache_file: bool = True,
+ cache_file_names: Optional[Dict[str, str]] = None,
+ writer_batch_size: Optional[int] = 1000,
+ verbose: bool = True,
+ ) -> "DatasetDict":
+ """ Create a new dataset sorted according to a column.
+ The transformation is applied to all the datasets of the dataset dictionary.
+
+ Currently sorting according to a column name uses numpy sorting algorithm under the hood.
+ The column should thus be a numpy compatible type (in particular not a nested type).
+ This also means that the column used for sorting is fully loaded in memory (which should be fine in most cases).
+
+ Args:
+ `column` (`str`): column name to sort by.
+ `reverse`: (`bool`, default: `False`): If True, sort by descending order rather than ascending.
+ `kind` (Optional `str`): Numpy algorithm for sorting selected in {'quicksort', 'mergesort', 'heapsort', 'stable'}.
+ The default is 'quicksort'. Note that both 'stable' and 'mergesort' use timsort under the covers and, in general,
+ the actual implementation will vary with data type. The 'mergesort' option is retained for backwards compatibility.
+ `keep_in_memory` (`bool`, default: `False`): Keep the dataset in memory instead of writing it to a cache file.
+ `load_from_cache_file` (`bool`, default: `True`): If a cache file storing the current computation from `function`
+ can be identified, use it instead of recomputing.
+ `cache_file_names` (`Optional[Dict[str, str]]`, default: `None`): Provide the name of a cache file to use to store the
+ results of the computation instead of the automatically generated cache file name.
+ You have to provide one :obj:`cache_file_name` per dataset in the dataset dictionary.
+ `writer_batch_size` (`int`, default: `1000`): Number of rows per write operation for the cache file writer.
+ Higher value gives smaller cache files, lower value consume less temporary memory while running `.map()`.
+ `verbose` (`bool`, default: `True`): Set to `False` to deactivate the tqdm progress bar and information.
+ """
+ self._check_values_type()
+ if cache_file_names is None:
+ cache_file_names = {k: None for k in self}
+ return DatasetDict(
+ {
+ k: dataset.sort(
+ column=column,
+ reverse=reverse,
+ kind=kind,
+ keep_in_memory=keep_in_memory,
+ load_from_cache_file=load_from_cache_file,
+ cache_file_name=cache_file_names[k],
+ writer_batch_size=writer_batch_size,
+ verbose=verbose,
+ )
+ for k, dataset in self.items()
+ }
+ )
+
+ def shuffle(
+ self,
+ seeds: Optional[Dict[str, int]] = None,
+ generators: Optional[Dict[str, np.random.Generator]] = None,
+ keep_in_memory: bool = False,
+ load_from_cache_file: bool = True,
+ cache_file_names: Optional[Dict[str, str]] = None,
+ writer_batch_size: Optional[int] = 1000,
+ verbose: bool = True,
+ ):
+ """ Create a new Dataset where the rows are shuffled.
+ The transformation is applied to all the datasets of the dataset dictionary.
+
+ Currently shuffling uses numpy random generators.
+ You can either supply a NumPy BitGenerator to use, or a seed to initiate NumPy's default random generator (PCG64).
+
+ Args:
+ `seeds` (Optional `Dict[str, int]`): A seed to initialize the default BitGenerator if ``generator=None``.
+ If None, then fresh, unpredictable entropy will be pulled from the OS.
+ If an int or array_like[ints] is passed, then it will be passed to SeedSequence to derive the initial BitGenerator state.
+ You have to provide one :obj:`seed` per dataset in the dataset dictionary.
+ `generators` (Optional `Dict[str, np.random.Generator]`): Numpy random Generator to use to compute the permutation of the dataset rows.
+ If ``generator=None`` (default), uses np.random.default_rng (the default BitGenerator (PCG64) of NumPy).
+ You have to provide one :obj:`generator` per dataset in the dataset dictionary.
+ `keep_in_memory` (`bool`, default: `False`): Keep the dataset in memory instead of writing it to a cache file.
+ `load_from_cache_file` (`bool`, default: `True`): If a cache file storing the current computation from `function`
+ can be identified, use it instead of recomputing.
+ `cache_file_names` (`Optional[Dict[str, str]]`, default: `None`): Provide the name of a cache file to use to store the
+ results of the computation instead of the automatically generated cache file name.
+ You have to provide one :obj:`cache_file_name` per dataset in the dataset dictionary.
+ `writer_batch_size` (`int`, default: `1000`): Number of rows per write operation for the cache file writer.
+ Higher value gives smaller cache files, lower value consume less temporary memory while running `.map()`.
+ `verbose` (`bool`, default: `True`): Set to `False` to deactivate the tqdm progress bar and information.
+ """
+ self._check_values_type()
+ if seeds is None:
+ seeds = {k: None for k in self}
+ if generators is None:
+ generators = {k: None for k in self}
+ if cache_file_names is None:
+ cache_file_names = {k: None for k in self}
+ return DatasetDict(
+ {
+ k: dataset.shuffle(
+ seed=seeds[k],
+ generator=generators[k],
+ keep_in_memory=keep_in_memory,
+ load_from_cache_file=load_from_cache_file,
+ cache_file_name=cache_file_names[k],
+ writer_batch_size=writer_batch_size,
+ verbose=verbose,
+ )
+ for k, dataset in self.items()
+ }
+ )
diff --git a/src/nlp/load.py b/src/nlp/load.py
index 8d89d8affaf..9ab2ff6cc13 100644
--- a/src/nlp/load.py
+++ b/src/nlp/load.py
@@ -34,6 +34,7 @@
from .arrow_dataset import Dataset
from .builder import DatasetBuilder
+from .dataset_dict import DatasetDict
from .info import DATASET_INFOS_DICT_FILE_NAME, DatasetInfo
from .metric import Metric
from .splits import Split
@@ -459,7 +460,7 @@ def load_dataset(
ignore_verifications: bool = False,
save_infos: bool = False,
**config_kwargs,
-) -> Union[Dict[Split, Dataset], Dataset]:
+) -> Union[DatasetDict, Dataset]:
r"""Load a dataset
This method does the following under the hood:
@@ -506,9 +507,9 @@ def load_dataset(
**config_kwargs (Optional ``dict``): keyword arguments to be passed to the ``nlp.BuilderConfig`` and used in the ``nlp.DatasetBuilder``.
Returns:
- ``nlp.Dataset`` or ``Dict[nlp.Split, nlp.Dataset]``
+ ``nlp.Dataset`` or ``nlp.DatasetDict``
if `split` is not None: the dataset requested,
- if `split` is None, a `dict<key: nlp.Split, value: nlp.Dataset>` with each split.
+ if `split` is None, a ``nlp.DatasetDict`` with each split.
"""
# Download/copy dataset processing script
| diff --git a/tests/test_builder.py b/tests/test_builder.py
new file mode 100644
index 00000000000..e439663e2aa
--- /dev/null
+++ b/tests/test_builder.py
@@ -0,0 +1,44 @@
+import os
+import tempfile
+from unittest import TestCase
+
+from nlp.arrow_dataset import Dataset
+from nlp.arrow_writer import ArrowWriter
+from nlp.builder import DatasetBuilder
+from nlp.dataset_dict import DatasetDict
+from nlp.features import Features, Value
+from nlp.info import DatasetInfo
+from nlp.splits import SplitDict, SplitInfo
+
+
+class DummyBuilder(DatasetBuilder):
+ def _info(self):
+ return DatasetInfo()
+
+
+class BuilderTest(TestCase):
+ def test_as_dataset(self):
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ dummy_builder = DummyBuilder(cache_dir=tmp_dir, name="dummy")
+ os.makedirs(dummy_builder.cache_dir)
+
+ dummy_builder.info.splits = SplitDict()
+ dummy_builder.info.splits.add(SplitInfo("train"))
+ dummy_builder.info.splits.add(SplitInfo("test"))
+
+ for split in dummy_builder.info.splits:
+ writer = ArrowWriter(
+ path=os.path.join(dummy_builder.cache_dir, f"dummy_builder-{split}.arrow"),
+ features=Features({"text": Value("string")}),
+ )
+ writer.write_batch({"text": ["foo"] * 10})
+ writer.finalize()
+
+ dsets = dummy_builder.as_dataset()
+ self.assertIsInstance(dsets, DatasetDict)
+
+ dset = dummy_builder.as_dataset("train")
+ self.assertIsInstance(dset, Dataset)
+
+ dset = dummy_builder.as_dataset("train+test[:30%]")
+ self.assertIsInstance(dset, Dataset)
diff --git a/tests/test_dataset_dict.py b/tests/test_dataset_dict.py
new file mode 100644
index 00000000000..0f47a04cd2e
--- /dev/null
+++ b/tests/test_dataset_dict.py
@@ -0,0 +1,90 @@
+import os
+import tempfile
+from typing import Dict
+from unittest import TestCase
+
+import numpy as np
+import pyarrow as pa
+
+from nlp.arrow_dataset import Dataset
+from nlp.dataset_dict import DatasetDict
+
+
+class DatasetDictTest(TestCase):
+ def _create_dummy_dataset(self) -> Dataset:
+ dset = Dataset(
+ pa.Table.from_pydict({"filename": ["my_name-train" + "_" + str(x) for x in np.arange(30).tolist()]})
+ )
+ return dset
+
+ def _create_dummy_dataset_dict(self) -> DatasetDict:
+ return DatasetDict({"train": self._create_dummy_dataset(), "test": self._create_dummy_dataset()})
+
+ def test_map(self):
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ dsets = self._create_dummy_dataset_dict()
+
+ mapped_dsets_1: Dict[str, Dataset] = dsets.map(
+ lambda ex: {"foo": ["bar"] * len(ex["filename"])}, batched=True
+ )
+ self.assertListEqual(list(dsets.keys()), list(mapped_dsets_1.keys()))
+ self.assertListEqual(mapped_dsets_1["train"].column_names, ["filename", "foo"])
+
+ cache_file_names = {
+ "train": os.path.join(tmp_dir, "train.arrow"),
+ "test": os.path.join(tmp_dir, "test.arrow"),
+ }
+ mapped_dsets_2: Dict[str, Dataset] = mapped_dsets_1.map(
+ lambda ex: {"bar": ["foo"] * len(ex["filename"])}, batched=True, cache_file_names=cache_file_names
+ )
+ self.assertListEqual(list(dsets.keys()), list(mapped_dsets_2.keys()))
+ self.assertListEqual(mapped_dsets_2["train"].column_names, ["filename", "foo", "bar"])
+
+ def test_filter(self):
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ dsets = self._create_dummy_dataset_dict()
+
+ filtered_dsets_1: Dict[str, Dataset] = dsets.filter(lambda ex: int(ex["filename"].split("_")[-1]) < 10)
+ self.assertListEqual(list(dsets.keys()), list(filtered_dsets_1.keys()))
+ self.assertEqual(len(filtered_dsets_1["train"]), 10)
+
+ cache_file_names = {
+ "train": os.path.join(tmp_dir, "train.arrow"),
+ "test": os.path.join(tmp_dir, "test.arrow"),
+ }
+ filtered_dsets_2: Dict[str, Dataset] = filtered_dsets_1.filter(
+ lambda ex: int(ex["filename"].split("_")[-1]) < 5, cache_file_names=cache_file_names
+ )
+ self.assertListEqual(list(dsets.keys()), list(filtered_dsets_2.keys()))
+ self.assertEqual(len(filtered_dsets_2["train"]), 5)
+
+ def test_sort(self):
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ dsets = self._create_dummy_dataset_dict()
+
+ sorted_dsets_1: Dict[str, Dataset] = dsets.sort("filename")
+ self.assertListEqual(list(dsets.keys()), list(sorted_dsets_1.keys()))
+ self.assertListEqual(
+ [f.split("_")[-1] for f in sorted_dsets_1["train"]["filename"]], sorted(str(x) for x in range(30))
+ )
+
+ cache_file_names = {
+ "train": os.path.join(tmp_dir, "train.arrow"),
+ "test": os.path.join(tmp_dir, "test.arrow"),
+ }
+ sorted_dsets_2: Dict[str, Dataset] = sorted_dsets_1.sort(
+ "filename", cache_file_names=cache_file_names, reverse=True
+ )
+ self.assertListEqual(list(dsets.keys()), list(sorted_dsets_2.keys()))
+ self.assertListEqual(
+ [f.split("_")[-1] for f in sorted_dsets_2["train"]["filename"]],
+ sorted((str(x) for x in range(30)), reverse=True),
+ )
+
+ def test_check_values_type(self):
+ dsets = self._create_dummy_dataset_dict()
+ dsets["bad_split"] = None
+ self.assertRaises(TypeError, dsets.map, lambda x: x)
+ self.assertRaises(TypeError, dsets.filter, lambda x: True)
+ self.assertRaises(TypeError, dsets.shuffle)
+ self.assertRaises(TypeError, dsets.sort, "filename")
| diff --git a/docs/source/package_reference/main_classes.rst b/docs/source/package_reference/main_classes.rst
index 5fc244089df..8a89ca6c008 100644
--- a/docs/source/package_reference/main_classes.rst
+++ b/docs/source/package_reference/main_classes.rst
@@ -26,6 +26,17 @@ The base class :class:`nlp.Dataset` implements a Dataset backed by an Apache Arr
description, download_checksums, download_size, features, homepage,
license, size_in_bytes, supervised_keys, version
+
+``DatasetDict``
+~~~~~~~~~~~~~~~~~~~~~
+
+Dictionary with split names as keys ('train', 'test' for example), and :obj:`nlp.Dataset` objects as values.
+It also has dataset transform methods like map or filter, to process all the splits at once.
+
+.. autoclass:: nlp.DatasetDict
+ :members: map, filter, sort, shuffle
+
+
``Features``
~~~~~~~~~~~~~~~~~~~~~
diff --git a/docs/source/processing.rst b/docs/source/processing.rst
index 5a68f35c83a..94d021aa8ce 100644
--- a/docs/source/processing.rst
+++ b/docs/source/processing.rst
@@ -420,6 +420,30 @@ Here we have now multiply the size of our dataset by ``4`` by adding three alter
Obviously this is a very simple example of data augmentation and it could be improved in several ways; the most interesting take-away is probably how this can be written in roughly ten lines of code without any loss in flexibility.
+Processing several splits at once
+-----------------------------------
+
+When you load a dataset that has various splits, :func:`nlp.load_dataset` returns a :obj:`nlp.DatasetDict` that is a dictionary with split names as keys ('train', 'test' for example), and :obj:`nlp.Dataset` objects as values.
+You can call map, filter, shuffle, and sort directly on a :obj:`nlp.DatasetDict` object:
+
+.. code-block::
+
+ >>> from nlp import load_dataset
+ >>>
+ >>> dataset = load_dataset('glue', 'mrpc') # load all the splits
+ >>> dataset.keys()
+ dict_keys(['train', 'validation', 'test'])
+ >>> encoded_dataset = dataset.map(lambda examples: tokenizer(examples['sentence1']), batched=True)
+ >>> encoded_dataset["train"][0]
+ {'sentence1': 'Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .',
+ 'sentence2': 'Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .',
+ 'label': 1,
+ 'idx': 0,
+ 'input_ids': [ 101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102],
+ 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
+ }
+
This concludes our chapter on data processing with 🤗nlp (and 🤗transformers).
Controlling the cache behavior
diff --git a/docs/source/quicktour.rst b/docs/source/quicktour.rst
index 19a0a152d3c..2d645e8b8cc 100644
--- a/docs/source/quicktour.rst
+++ b/docs/source/quicktour.rst
@@ -54,7 +54,7 @@ If you want to create a :class:`nlp.Dataset` from local CSV, JSON, text or panda
.. note::
- If you don't provide a :obj:`split` argument to :func:`nlp.load_dataset`, this method will return a dictionnary containing a datasets for each split in the dataset.
+ If you don't provide a :obj:`split` argument to :func:`nlp.load_dataset`, this method will return a dictionary containing a dataset for each split in the dataset. This dictionary is a :obj:`nlp.DatasetDict` object that lets you process all the splits at once using :func:`nlp.DatasetDict.map`, :func:`nlp.DatasetDict.filter`, etc.
Now let's have a look at our newly created :class:`nlp.Dataset` object. It basically behaves like a normal python container. You can query its length, get a single row but also get multiple rows and even index along columns (see all the details in :doc:`exploring </exploring>`):
| [
{
"components": [
{
"doc": "A dictionary (dict of str: nlp.Dataset) with dataset transforms methods (map, filter, etc.)",
"lines": [
9,
248
],
"name": "DatasetDict",
"signature": "class DatasetDict(dict):",
"type": "class"
},
... | [
"tests/test_builder.py::BuilderTest::test_as_dataset",
"tests/test_dataset_dict.py::DatasetDictTest::test_check_values_type",
"tests/test_dataset_dict.py::DatasetDictTest::test_filter",
"tests/test_dataset_dict.py::DatasetDictTest::test_map",
"tests/test_dataset_dict.py::DatasetDictTest::test_sort"
] | [] | This is a feature request which requires a new feature to be added to the code repository.
<<NEW FEATURE REQUEST>>
<request>
add DatasetDict
## Add DatasetDict
### Overview
When you call `load_dataset` it can return a dictionary of datasets if there are several splits (train/test for example).
If you wanted to apply dataset transforms you had to iterate over each split and apply the transform.
Instead of returning a dict, it now returns a `nlp.DatasetDict` object which inherits from dict and contains the same data as before, except that now users can call dataset transforms directly from the output, and they'll be applied on each split.
Before:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = {
split_name: dataset.map(my_func) for split_name, dataset in squad.items()
}
print(squad.keys())
# dict_keys(['train', 'validation'])
```
Now:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = squad.map(my_func)
print(squad.keys())
# dict_keys(['train', 'validation'])
```
### Dataset transforms
`nlp.DatasetDict` implements the following dataset transforms:
- map
- filter
- sort
- shuffle
### Arguments
The arguments of the methods are the same except for split-specific arguments like `cache_file_name`.
For such arguments, the expected input is a dictionary `{split_name: argument_value}`
It concerns:
- `cache_file_name` in map, filter, sort, shuffle
- `seed` and `generator` in shuffle
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in src/nlp/dataset_dict.py]
(definition of DatasetDict:)
class DatasetDict(dict):
"""A dictionary (dict of str: nlp.Dataset) with dataset transforms methods (map, filter, etc.)"""
(definition of DatasetDict._check_values_type:)
def _check_values_type(self):
(definition of DatasetDict.map:)
def map( self, function, with_indices: bool = False, batched: bool = False, batch_size: Optional[int] = 1000, remove_columns: Optional[List[str]] = None, keep_in_memory: bool = False, load_from_cache_file: bool = True, cache_file_names: Optional[Dict[str, str]] = None, writer_batch_size: Optional[int] = 1000, features: Optional[Features] = None, disable_nullable: bool = True, verbose: bool = True, ) -> "DatasetDict":
"""Apply a function to all the elements in the table (individually or in batches)
and update the table (if the function does update examples).
The transformation is applied to all the datasets of the dataset dictionary.
Args:
`function` (`callable`): with one of the following signature:
- `function(example: Dict) -> Union[Dict, Any]` if `batched=False` and `with_indices=False`
- `function(example: Dict, indices: int) -> Union[Dict, Any]` if `batched=False` and `with_indices=True`
- `function(batch: Dict[List]) -> Union[Dict, Any]` if `batched=True` and `with_indices=False`
- `function(batch: Dict[List], indices: List[int]) -> Union[Dict, Any]` if `batched=True` and `with_indices=True`
`with_indices` (`bool`, default: `False`): Provide example indices to `function`. Note that in this case the signature of `function` should be `def function(example, idx): ...`.
`batched` (`bool`, default: `False`): Provide batch of examples to `function`
`batch_size` (`Optional[int]`, default: `1000`): Number of examples per batch provided to `function` if `batched=True`
`batch_size <= 0` or `batch_size == None`: Provide the full dataset as a single batch to `function`
`remove_columns` (`Optional[List[str]]`, default: `None`): Remove a selection of columns while doing the mapping.
Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding
columns with names in `remove_columns`, these columns will be kept.
`keep_in_memory` (`bool`, default: `False`): Keep the dataset in memory instead of writing it to a cache file.
`load_from_cache_file` (`bool`, default: `True`): If a cache file storing the current computation from `function`
can be identified, use it instead of recomputing.
`cache_file_names` (`Optional[Dict[str, str]]`, default: `None`): Provide the name of a cache file to use to store the
results of the computation instead of the automatically generated cache file name.
You have to provide one :obj:`cache_file_name` per dataset in the dataset dictionary.
`writer_batch_size` (`int`, default: `1000`): Number of rows per write operation for the cache file writer.
Higher value gives smaller cache files, lower value consume less temporary memory while running `.map()`.
`features` (`Optional[nlp.Features]`, default: `None`): Use a specific Features to store the cache file
instead of the automatically generated one.
`disable_nullable` (`bool`, default: `True`): Allow null values in the table.
`verbose` (`bool`, default: `True`): Set to `False` to deactivate the tqdm progress bar and information."""
(definition of DatasetDict.filter:)
def filter( self, function, with_indices=False, batch_size: Optional[int] = 1000, remove_columns: Optional[List[str]] = None, keep_in_memory: bool = False, load_from_cache_file: bool = True, cache_file_names: Optional[Dict[str, str]] = None, writer_batch_size: Optional[int] = 1000, verbose: bool = True, ) -> "DatasetDict":
"""Apply a filter function to all the elements in the table in batches
and update the table so that the dataset only includes examples according to the filter function.
The transformation is applied to all the datasets of the dataset dictionary.
Args:
`function` (`callable`): with one of the following signature:
- `function(example: Dict) -> bool` if `with_indices=False`
- `function(example: Dict, indices: int) -> bool` if `with_indices=True`
`with_indices` (`bool`, default: `False`): Provide example indices to `function`. Note that in this case the signature of `function` should be `def function(example, idx): ...`.
`batch_size` (`Optional[int]`, default: `1000`): Number of examples per batch provided to `function` if `batched=True`
`batch_size <= 0` or `batch_size == None`: Provide the full dataset as a single batch to `function`
`remove_columns` (`Optional[List[str]]`, default: `None`): Remove a selection of columns while doing the mapping.
Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding
columns with names in `remove_columns`, these columns will be kept.
`keep_in_memory` (`bool`, default: `False`): Keep the dataset in memory instead of writing it to a cache file.
`load_from_cache_file` (`bool`, default: `True`): If a cache file storing the current computation from `function`
can be identified, use it instead of recomputing.
`cache_file_names` (`Optional[Dict[str, str]]`, default: `None`): Provide the name of a cache file to use to store the
results of the computation instead of the automatically generated cache file name.
You have to provide one :obj:`cache_file_name` per dataset in the dataset dictionary.
`writer_batch_size` (`int`, default: `1000`): Number of rows per write operation for the cache file writer.
Higher value gives smaller cache files, lower value consume less temporary memory while running `.map()`.
`verbose` (`bool`, default: `True`): Set to `False` to deactivate the tqdm progress bar and informations."""
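Independently of the library internals (caching, Arrow tables, batching), the observable per-split contract of `DatasetDict.filter` can be modeled in plain Python — a minimal sketch, not the real implementation; `filter_dict` and the sample data are illustrative names:

```python
def filter_dict(dataset_dict, function, with_indices=False):
    # keep, per split, only the examples for which the predicate holds
    out = {}
    for split, examples in dataset_dict.items():
        if with_indices:
            out[split] = [ex for i, ex in enumerate(examples) if function(ex, i)]
        else:
            out[split] = [ex for ex in examples if function(ex)]
    return out

dd = {"train": [{"n": 1}, {"n": 2}, {"n": 3}], "test": [{"n": 4}]}
odd_only = filter_dict(dd, lambda ex: ex["n"] % 2 == 1)
# -> {"train": [{"n": 1}, {"n": 3}], "test": []}
```

Note that the predicate is applied to every split of the dictionary, so a split can legitimately end up empty.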
(definition of DatasetDict.sort:)
def sort( self, column: str, reverse: bool = False, kind: str = None, keep_in_memory: bool = False, load_from_cache_file: bool = True, cache_file_names: Optional[Dict[str, str]] = None, writer_batch_size: Optional[int] = 1000, verbose: bool = True, ) -> "DatasetDict":
"""Create a new dataset sorted according to a column.
The transformation is applied to all the datasets of the dataset dictionary.
Currently sorting according to a column name uses numpy sorting algorithm under the hood.
The column should thus be a numpy compatible type (in particular not a nested type).
This also means that the column used for sorting is fully loaded in memory (which should be fine in most cases).
Args:
`column` (`str`): column name to sort by.
`reverse` (`bool`, default: `False`): If True, sort in descending order rather than ascending.
`kind` (Optional `str`): Numpy algorithm for sorting, selected from {'quicksort', 'mergesort', 'heapsort', 'stable'}.
The default is 'quicksort'. Note that both 'stable' and 'mergesort' use timsort under the covers and, in general,
the actual implementation will vary with data type. The 'mergesort' option is retained for backwards compatibility.
`keep_in_memory` (`bool`, default: `False`): Keep the dataset in memory instead of writing it to a cache file.
`load_from_cache_file` (`bool`, default: `True`): If a cache file storing the current computation from `function`
can be identified, use it instead of recomputing.
`cache_file_names` (`Optional[Dict[str, str]]`, default: `None`): Provide the name of a cache file to use to store the
results of the computation instead of the automatically generated cache file name.
You have to provide one :obj:`cache_file_name` per dataset in the dataset dictionary.
`writer_batch_size` (`int`, default: `1000`): Number of rows per write operation for the cache file writer.
A higher value gives smaller cache files; a lower value consumes less temporary memory while running `.map()`.
`verbose` (`bool`, default: `True`): Set to `False` to deactivate the tqdm progress bar and informational messages."""
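Setting caching and the numpy machinery aside, the observable behaviour — reorder each split's rows by one column, optionally descending — can be sketched as follows (illustrative model, not the real implementation):

```python
def sort_dict(dataset_dict, column, reverse=False):
    # reorder the rows of every split by the values in `column`
    return {
        split: sorted(examples, key=lambda ex: ex[column], reverse=reverse)
        for split, examples in dataset_dict.items()
    }

dd = {"train": [{"x": 3}, {"x": 1}, {"x": 2}]}
asc = sort_dict(dd, "x")                 # ascending order
desc = sort_dict(dd, "x", reverse=True)  # descending order
```

Python's `sorted` is stable, which loosely mirrors the 'stable'/'mergesort' options described above; the real method additionally requires the column to be a numpy-compatible (non-nested) type.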
(definition of DatasetDict.shuffle:)
def shuffle( self, seeds: Optional[Dict[str, int]] = None, generators: Optional[Dict[str, np.random.Generator]] = None, keep_in_memory: bool = False, load_from_cache_file: bool = True, cache_file_names: Optional[Dict[str, str]] = None, writer_batch_size: Optional[int] = 1000, verbose: bool = True, ):
"""Create a new Dataset where the rows are shuffled.
The transformation is applied to all the datasets of the dataset dictionary.
Currently shuffling uses numpy random generators.
You can either supply a NumPy BitGenerator to use, or a seed to initiate NumPy's default random generator (PCG64).
Args:
`seeds` (Optional `Dict[str, int]`): A seed to initialize the default BitGenerator if ``generator=None``.
If None, then fresh, unpredictable entropy will be pulled from the OS.
If an int or array_like[ints] is passed, then it will be passed to SeedSequence to derive the initial BitGenerator state.
You have to provide one :obj:`seed` per dataset in the dataset dictionary.
`generators` (Optional `Dict[str, np.random.Generator]`): Numpy random Generator to use to compute the permutation of the dataset rows.
If ``generator=None`` (default), uses np.random.default_rng (the default BitGenerator (PCG64) of NumPy).
You have to provide one :obj:`generator` per dataset in the dataset dictionary.
`keep_in_memory` (`bool`, default: `False`): Keep the dataset in memory instead of writing it to a cache file.
`load_from_cache_file` (`bool`, default: `True`): If a cache file storing the current computation from `function`
can be identified, use it instead of recomputing.
`cache_file_names` (`Optional[Dict[str, str]]`, default: `None`): Provide the name of a cache file to use to store the
results of the computation instead of the automatically generated cache file name.
You have to provide one :obj:`cache_file_name` per dataset in the dataset dictionary.
`writer_batch_size` (`int`, default: `1000`): Number of rows per write operation for the cache file writer.
A higher value gives smaller cache files; a lower value consumes less temporary memory while running `.map()`.
`verbose` (`bool`, default: `True`): Set to `False` to deactivate the tqdm progress bar and informational messages."""
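`shuffle` takes one seed (or generator) per split. A plain-Python sketch of that contract, using the stdlib `random` module in place of NumPy's PCG64 generators (names are illustrative):

```python
import random

def shuffle_dict(dataset_dict, seeds=None):
    # one independently seeded generator per split, as with the `seeds` dict;
    # a missing/None seed falls back to fresh OS entropy
    out = {}
    for split, examples in dataset_dict.items():
        rng = random.Random(None if seeds is None else seeds.get(split))
        rows = list(examples)
        rng.shuffle(rows)
        out[split] = rows
    return out

dd = {"train": list(range(10))}
a = shuffle_dict(dd, seeds={"train": 42})
b = shuffle_dict(dd, seeds={"train": 42})
# identical seeds give identical permutations of the same rows
```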
[end of new definitions in src/nlp/dataset_dict.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 5142a8cf61d8a4495eda3d91dc4283a6df01ea14 | |
sympy__sympy-19826 | 19,826 | sympy/sympy | 1.7 | 3d6b5a44dff651642cb480797b0e265c0bf28357 | 2020-07-23T05:02:27Z | diff --git a/sympy/matrices/__init__.py b/sympy/matrices/__init__.py
index 82d5161c48d9..aab95e10f5db 100644
--- a/sympy/matrices/__init__.py
+++ b/sympy/matrices/__init__.py
@@ -28,7 +28,7 @@
hadamard_product, HadamardProduct, HadamardPower, Determinant, det,
diagonalize_vector, DiagMatrix, DiagonalMatrix, DiagonalOf, trace,
DotProduct, kronecker_product, KroneckerProduct,
- PermutationMatrix, MatrixPermute)
+ PermutationMatrix, MatrixPermute, MatrixSet)
from .utilities import dotprodsimp
@@ -61,7 +61,7 @@
'hadamard_product', 'HadamardProduct', 'HadamardPower', 'Determinant',
'det', 'diagonalize_vector', 'DiagMatrix', 'DiagonalMatrix',
'DiagonalOf', 'trace', 'DotProduct', 'kronecker_product',
- 'KroneckerProduct', 'PermutationMatrix', 'MatrixPermute',
+ 'KroneckerProduct', 'PermutationMatrix', 'MatrixPermute', 'MatrixSet',
'dotprodsimp',
]
diff --git a/sympy/matrices/expressions/__init__.py b/sympy/matrices/expressions/__init__.py
index 05ddc9cb1c12..8f1e997e76bb 100644
--- a/sympy/matrices/expressions/__init__.py
+++ b/sympy/matrices/expressions/__init__.py
@@ -7,7 +7,7 @@
from .inverse import Inverse
from .matadd import MatAdd
from .matexpr import (Identity, MatrixExpr, MatrixSymbol, ZeroMatrix, OneMatrix,
- matrix_symbols)
+ matrix_symbols, MatrixSet)
from .matmul import MatMul
from .matpow import MatPow
from .trace import Trace, trace
@@ -33,7 +33,7 @@
'MatAdd',
'Identity', 'MatrixExpr', 'MatrixSymbol', 'ZeroMatrix', 'OneMatrix',
- 'matrix_symbols',
+ 'matrix_symbols', 'MatrixSet',
'MatMul',
diff --git a/sympy/matrices/expressions/matexpr.py b/sympy/matrices/expressions/matexpr.py
index 38c5f4d4d2bf..b7a52ca4de56 100644
--- a/sympy/matrices/expressions/matexpr.py
+++ b/sympy/matrices/expressions/matexpr.py
@@ -1,5 +1,5 @@
from typing import Any, Callable
-from sympy.core.logic import FuzzyBool
+from sympy.core.logic import FuzzyBool, fuzzy_and
from functools import wraps, reduce
import collections
@@ -15,6 +15,7 @@
from sympy.utilities.misc import filldedent
from sympy.assumptions.ask import ask, Q
from sympy.multipledispatch import dispatch
+from sympy.sets.sets import Set
def _sympifyit(arg, retval=None):
@@ -1269,6 +1270,63 @@ def _make_matrix(x):
return ImmutableDenseMatrix([[x]])
+class MatrixSet(Set):
+ """
+ MatrixSet represents the set of matrices with ``shape = (n, m)`` over the
+ given set.
+
+ Examples
+ ========
+
+ >>> from sympy.matrices import MatrixSet, Matrix
+ >>> from sympy import S, I
+ >>> M = MatrixSet(2, 2, set=S.Reals)
+ >>> X = Matrix([[1, 2], [3, 4]])
+ >>> X in M
+ True
+ >>> X = Matrix([[1, 2], [I, 4]])
+ >>> X in M
+ False
+
+ """
+ is_empty = False
+
+ def __new__(cls, n, m, set):
+ n, m, set = _sympify(n), _sympify(m), _sympify(set)
+ cls._check_dim(n)
+ cls._check_dim(m)
+ if not isinstance(set, Set):
+ raise TypeError("{} should be an instance of Set.".format(set))
+ return Set.__new__(cls, n, m, set)
+
+ @property
+ def shape(self):
+ return self.args[:2]
+
+ @property
+ def set(self):
+ return self.args[2]
+
+ def _contains(self, other):
+ if not isinstance(other, MatrixExpr):
+ raise TypeError("{} should be an instance of MatrixExpr.".format(other))
+ if other.shape != self.shape:
+ are_symbolic = any(_sympify(x).is_Symbol for x in other.shape + self.shape)
+ if are_symbolic:
+ return None
+ return False
+ return fuzzy_and(self.set.contains(x) for x in other)
+
+ @classmethod
+ def _check_dim(cls, dim):
+ """Helper function to check invalid matrix dimensions"""
+ from sympy.core.assumptions import check_assumptions
+ ok = check_assumptions(dim, integer=True, nonnegative=True)
+ if ok is False:
+ raise ValueError(
+ "The dimension specification {} should be "
+ "a nonnegative integer.".format(dim))
+
from .matmul import MatMul
from .matadd import MatAdd
from .matpow import MatPow
| diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index c12553a78909..381ea0ec2642 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -3010,6 +3010,11 @@ def test_sympy__matrices__expressions__matexpr__GenericZeroMatrix():
from sympy.matrices.expressions.matexpr import GenericZeroMatrix
assert _test_args(GenericZeroMatrix())
+def test_sympy__matrices__expressions__matexpr__MatrixSet():
+ from sympy.matrices.expressions.matexpr import MatrixSet
+ from sympy import S
+ assert _test_args(MatrixSet(2, 2, S.Reals))
+
def test_sympy__matrices__expressions__matmul__MatMul():
from sympy.matrices.expressions.matmul import MatMul
from sympy.matrices.expressions import MatrixSymbol
diff --git a/sympy/matrices/expressions/tests/test_matexpr.py b/sympy/matrices/expressions/tests/test_matexpr.py
index 6f58ab32c326..2a2b2acbc4c1 100644
--- a/sympy/matrices/expressions/tests/test_matexpr.py
+++ b/sympy/matrices/expressions/tests/test_matexpr.py
@@ -7,7 +7,7 @@
from sympy.simplify import simplify
from sympy.matrices import (Identity, ImmutableMatrix, Inverse, MatAdd, MatMul,
MatPow, Matrix, MatrixExpr, MatrixSymbol, ShapeError, ZeroMatrix,
- SparseMatrix, Transpose, Adjoint, NonSquareMatrixError)
+ SparseMatrix, Transpose, Adjoint, NonSquareMatrixError, MatrixSet)
from sympy.matrices.expressions.matexpr import (MatrixElement,
GenericZeroMatrix, GenericIdentity, OneMatrix)
from sympy.testing.pytest import raises, XFAIL
@@ -669,3 +669,26 @@ def test_as_explicit():
[Z[1, 0], Z[1, 1], Z[1, 2]],
])
raises(ValueError, lambda: A.as_explicit())
+
+def test_MatrixSet():
+ M = MatrixSet(2, 2, set=S.Reals)
+ assert M.shape == (2, 2)
+ assert M.set == S.Reals
+ X = Matrix([[1, 2], [3, 4]])
+ assert X in M
+ X = ZeroMatrix(2, 2)
+ assert X in M
+ raises(TypeError, lambda: A in M)
+ raises(TypeError, lambda: 1 in M)
+ M = MatrixSet(n, m, set=S.Reals)
+ assert A in M
+ raises(TypeError, lambda: C in M)
+ raises(TypeError, lambda: X in M)
+ M = MatrixSet(2, 2, set={1, 2, 3})
+ X = Matrix([[1, 2], [3, 4]])
+ Y = Matrix([[1, 2]])
+ assert (X in M) == S.false
+ assert (Y in M) == S.false
+ raises(ValueError, lambda: MatrixSet(2, -2, S.Reals))
+ raises(ValueError, lambda: MatrixSet(2.4, -1, S.Reals))
+ raises(TypeError, lambda: MatrixSet(2, 2, (1, 2, 3)))
| [
{
"components": [
{
"doc": "MatrixSet represents the set of matrices with ``shape = (n, m)`` over the\ngiven set.\n\nExamples\n========\n\n>>> from sympy.matrices import MatrixSet, Matrix\n>>> from sympy import S, I\n>>> M = MatrixSet(2, 2, set=S.Reals)\n>>> X = Matrix([[1, 2], [3, 4]])\n>>> X in ... | [
"test_sympy__matrices__expressions__matexpr__MatrixSet",
"test_matrix_symbol_creation",
"test_zero_matrix_creation",
"test_one_matrix_creation",
"test_identity_matrix_creation",
"test_shape",
"test_matexpr",
"test_subs",
"test_ZeroMatrix",
"test_ZeroMatrix_doit",
"test_OneMatrix",
"test_OneMat... | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_sympy__assumptions__assume__Predicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers__AnyArgs",
"test_sympy__assu... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Added `MatrixSet` to represent set of Matrices
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
Related comment https://github.com/sympy/sympy/pull/19795#discussion_r457919332
#### Brief description of what is fixed or changed
MatrixSet represents the set of matrices of given shape over the given set.
Examples:
```py
>>> from sympy.matrices import MatrixSet, Matrix
>>> from sympy import S, I
>>> M = MatrixSet(2, 2, set=S.Reals)
>>> X = Matrix([[1, 2], [3, 4]])
>>> X in M
True
>>> X = Matrix([[1, 2], [I, 4]])
>>> X in M
False
```
#### Other comments
- [x] Add docs and class `MatrixSet`
- [x] Add tests
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* matrices
* Added class `MatrixSet` to represent the set of matrices
<!-- END RELEASE NOTES -->
ping @oscarbenjamin
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in sympy/matrices/expressions/matexpr.py]
(definition of MatrixSet:)
class MatrixSet(Set):
"""MatrixSet represents the set of matrices with ``shape = (n, m)`` over the
given set.
Examples
========
>>> from sympy.matrices import MatrixSet, Matrix
>>> from sympy import S, I
>>> M = MatrixSet(2, 2, set=S.Reals)
>>> X = Matrix([[1, 2], [3, 4]])
>>> X in M
True
>>> X = Matrix([[1, 2], [I, 4]])
>>> X in M
False"""
(definition of MatrixSet.__new__:)
def __new__(cls, n, m, set):
(definition of MatrixSet.shape:)
def shape(self):
(definition of MatrixSet.set:)
def set(self):
(definition of MatrixSet._contains:)
def _contains(self, other):
(definition of MatrixSet._check_dim:)
def _check_dim(cls, dim):
"""Helper function to check invalid matrix dimensions"""
[end of new definitions in sympy/matrices/expressions/matexpr.py]
</definitions>
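The membership semantics specified above — the candidate's shape must match and every entry must belong to the underlying set — can be modeled independently of SymPy in plain Python. This is an illustrative sketch of the semantics only (it ignores symbolic shapes, which the real `_contains` handles by returning an indeterminate result); `matrix_in_set` and `is_real` are hypothetical names:

```python
def matrix_in_set(matrix, shape, entry_in_set):
    """Model of MatrixSet membership: shape check, then entrywise set check."""
    rows = len(matrix)
    cols = len(matrix[0]) if rows else 0
    if (rows, cols) != shape:
        return False
    return all(entry_in_set(x) for row in matrix for x in row)

# a stand-in for "x in S.Reals" on concrete numbers
is_real = lambda x: isinstance(x, (int, float)) and not isinstance(x, bool)

matrix_in_set([[1, 2], [3, 4]], (2, 2), is_real)    # in the set
matrix_in_set([[1, 2], [3j, 4]], (2, 2), is_real)   # 3j is not real
matrix_in_set([[1, 2]], (2, 2), is_real)            # wrong shape
```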
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1b4529a95ef641c2fc15889091b281644069d20e | ||
sympy__sympy-19808 | 19,808 | sympy/sympy | 1.7 | 947756d0a3e64c354827a8b2db2bb3fcd0acc416 | 2020-07-21T05:51:56Z | diff --git a/sympy/stats/compound_rv.py b/sympy/stats/compound_rv.py
index e0dba16c2e98..12e491d10d46 100644
--- a/sympy/stats/compound_rv.py
+++ b/sympy/stats/compound_rv.py
@@ -1,6 +1,6 @@
-from sympy import Basic, integrate, Sum, Dummy, Lambda
+from sympy import Basic, Sum, Dummy, Lambda, Integral
from sympy.stats.rv import (NamedArgsMixin, random_symbols, _symbol_converter,
- PSpace, RandomSymbol)
+ PSpace, RandomSymbol, is_random)
from sympy.stats.crv import ContinuousDistribution, SingleContinuousPSpace
from sympy.stats.drv import DiscreteDistribution, SingleDiscretePSpace
from sympy.stats.frv import SingleFiniteDistribution, SingleFinitePSpace
@@ -37,6 +37,18 @@ def value(self):
def symbol(self):
return self.args[0]
+ @property
+ def is_Continuous(self):
+ return self.distribution.is_Continuous
+
+ @property
+ def is_Finite(self):
+ return self.distribution.is_Finite
+
+ @property
+ def is_Discrete(self):
+ return self.distribution.is_Discrete
+
@property
def distribution(self):
return self.args[1]
@@ -53,11 +65,11 @@ def set(self):
def domain(self):
return self._get_newpspace().domain
- def _get_newpspace(self):
+ def _get_newpspace(self, evaluate=False):
x = Dummy('x')
parent_dist = self.distribution.args[0]
- new_pspace = self._transform_pspace(self.symbol, parent_dist,
- Lambda(x, self.distribution.pdf(x)))
+ func = Lambda(x, self.distribution.pdf(x, evaluate))
+ new_pspace = self._transform_pspace(self.symbol, parent_dist, func)
if new_pspace is not None:
return new_pspace
message = ("Compound Distribution for %s is not implemeted yet" % str(parent_dist))
@@ -79,17 +91,17 @@ def _transform_pspace(self, sym, dist, pdf):
return SingleFinitePSpace(sym, FiniteDistributionHandmade(dens))
def compute_density(self, expr, **kwargs):
- new_pspace = self._get_newpspace()
+ new_pspace = self._get_newpspace(kwargs.pop('compound_evaluate', True))
expr = expr.subs({self.value: new_pspace.value})
return new_pspace.compute_density(expr, **kwargs)
def compute_cdf(self, expr, **kwargs):
- new_pspace = self._get_newpspace()
+ new_pspace = self._get_newpspace(kwargs.pop('compound_evaluate', True))
expr = expr.subs({self.value: new_pspace.value})
return new_pspace.compute_cdf(expr, **kwargs)
def compute_expectation(self, expr, rvs=None, evaluate=False, **kwargs):
- new_pspace = self._get_newpspace()
+ new_pspace = self._get_newpspace(evaluate)
expr = expr.subs({self.value: new_pspace.value})
if rvs:
rvs = rvs.subs({self.value: new_pspace.value})
@@ -98,12 +110,12 @@ def compute_expectation(self, expr, rvs=None, evaluate=False, **kwargs):
return new_pspace.compute_expectation(expr, rvs, evaluate, **kwargs)
def probability(self, condition, **kwargs):
- new_pspace = self._get_newpspace()
+ new_pspace = self._get_newpspace(kwargs.pop('compound_evaluate', True))
condition = condition.subs({self.value: new_pspace.value})
return new_pspace.probability(condition)
def conditional_space(self, condition, **kwargs):
- new_pspace = self._get_newpspace()
+ new_pspace = self._get_newpspace(kwargs.pop('compound_evaluate', True))
condition = condition.subs({self.value: new_pspace.value})
return new_pspace.conditional_space(condition)
@@ -130,7 +142,7 @@ class CompoundDistribution(Basic, NamedArgsMixin):
>>> C = CompoundDistribution(N)
>>> C.set
Interval(-oo, oo)
- >>> C.pdf(x).simplify()
+ >>> C.pdf(x, evaluate=True).simplify()
exp(-x**2/64 + x/16 - 1/16)/(8*sqrt(pi))
References
@@ -140,18 +152,9 @@ class CompoundDistribution(Basic, NamedArgsMixin):
"""
- is_Finite = None
- is_Continuous = None
- is_Discrete = None
-
def __new__(cls, dist):
- if isinstance(dist, ContinuousDistribution):
- cls.is_Continuous = True
- elif isinstance(dist, DiscreteDistribution):
- cls.is_Discrete = True
- elif isinstance(dist, SingleFiniteDistribution):
- cls.is_Finite = True
- else:
+ if not isinstance(dist, (ContinuousDistribution,
+ SingleFiniteDistribution, DiscreteDistribution)):
message = "Compound Distribution for %s is not implemeted yet" % str(dist)
raise NotImplementedError(message)
if not cls._compound_check(dist):
@@ -162,33 +165,46 @@ def __new__(cls, dist):
def set(self):
return self.args[0].set
- def pdf(self, x):
+ @property
+ def is_Continuous(self):
+ return isinstance(self.args[0], ContinuousDistribution)
+
+ @property
+ def is_Finite(self):
+ return isinstance(self.args[0], SingleFiniteDistribution)
+
+ @property
+ def is_Discrete(self):
+ return isinstance(self.args[0], DiscreteDistribution)
+
+ def pdf(self, x, evaluate=False):
dist = self.args[0]
- randoms = []
- for arg in dist.args:
- randoms.extend(random_symbols(arg))
- if len(randoms) > 1:
- raise NotImplementedError("Compound Distributions for more than"
- " one random argument is not implemeted yet.")
- rand_sym = randoms[0]
+ randoms = [rv for rv in dist.args if is_random(rv)]
if isinstance(dist, SingleFiniteDistribution):
y = Dummy('y', integer=True, negative=False)
- _pdf = dist.pmf(y)
+ expr = dist.pmf(y)
else:
y = Dummy('y')
- _pdf = dist.pdf(y)
- if isinstance(rand_sym.pspace.distribution, SingleFiniteDistribution):
- rand_dens = rand_sym.pspace.distribution.pmf(rand_sym)
+ expr = dist.pdf(y)
+ for rv in randoms:
+ expr = self._marginalise(expr, rv, evaluate)
+ return Lambda(y, expr)(x)
+
+ def _marginalise(self, expr, rv, evaluate):
+ if isinstance(rv.pspace.distribution, SingleFiniteDistribution):
+ rv_dens = rv.pspace.distribution.pmf(rv)
else:
- rand_dens = rand_sym.pspace.distribution.pdf(rand_sym)
- rand_sym_dom = rand_sym.pspace.domain.set
- if rand_sym.pspace.is_Discrete or rand_sym.pspace.is_Finite:
- _pdf = Sum(_pdf*rand_dens, (rand_sym, rand_sym_dom._inf,
- rand_sym_dom._sup)).doit()
+ rv_dens = rv.pspace.distribution.pdf(rv)
+ rv_dom = rv.pspace.domain.set
+ if rv.pspace.is_Discrete or rv.pspace.is_Finite:
+ expr = Sum(expr*rv_dens, (rv, rv_dom._inf,
+ rv_dom._sup))
else:
- _pdf = integrate(_pdf*rand_dens, (rand_sym, rand_sym_dom._inf,
- rand_sym_dom._sup))
- return Lambda(y, _pdf)(x)
+ expr = Integral(expr*rv_dens, (rv, rv_dom._inf,
+ rv_dom._sup))
+ if evaluate:
+ return expr.doit()
+ return expr
@classmethod
def _compound_check(self, dist):
diff --git a/sympy/stats/rv.py b/sympy/stats/rv.py
index c1bc92676621..e0f958a28835 100644
--- a/sympy/stats/rv.py
+++ b/sympy/stats/rv.py
@@ -823,6 +823,7 @@ def doit(self, evaluate=True, **kwargs):
from sympy.stats.random_matrix import RandomMatrixPSpace
from sympy.stats.joint_rv import JointPSpace
from sympy.stats.matrix_distributions import MatrixPSpace
+ from sympy.stats.compound_rv import CompoundPSpace
from sympy.stats.frv import SingleFiniteDistribution
expr, condition = self.expr, self.condition
@@ -839,6 +840,8 @@ def doit(self, evaluate=True, **kwargs):
return expr.pspace.distribution
elif isinstance(expr.pspace, RandomMatrixPSpace):
return expr.pspace.model
+ if isinstance(pspace(expr), CompoundPSpace):
+ kwargs['compound_evaluate'] = evaluate
result = pspace(expr).compute_density(expr, **kwargs)
if evaluate and hasattr(result, 'doit'):
| diff --git a/sympy/stats/tests/test_compound_rv.py b/sympy/stats/tests/test_compound_rv.py
index 40b661659468..8a8a1564e6c2 100644
--- a/sympy/stats/tests/test_compound_rv.py
+++ b/sympy/stats/tests/test_compound_rv.py
@@ -1,5 +1,5 @@
from sympy import (symbols, S, erf, sqrt, pi, exp, gamma, Interval, oo, beta,
- Eq, Piecewise, Integral, Abs, arg, Dummy)
+ Eq, Piecewise, Integral, Abs, arg, Dummy, Sum, factorial)
from sympy.stats import (Normal, P, E, density, Gamma, Poisson, Rayleigh,
variance, Bernoulli, Beta, Uniform, cdf)
from sympy.stats.compound_rv import CompoundDistribution, CompoundPSpace
@@ -63,12 +63,25 @@ def test_unevaluated_CompoundDist():
(_k, 0, oo))/(32*sqrt(pi)), True))
assert (density(X)(x).simplify()).dummy_eq(exprd.simplify())
- expre = Integral(Piecewise((_k*exp(S(3)/4 - _k/4)/8, 2*Abs(arg(_k - 3)) <= pi/2),
- (sqrt(2)*_k*Integral(exp(-(_k**4 + 16*(_k - 3)**2)/(32*_k**2)),
- (_k, 0, oo))/(32*sqrt(pi)), True)), (_k, -oo, oo))
+ expre = Integral(_k*Integral(sqrt(2)*exp(-_k**2/32)*exp(-(_k - 3)**2/(2*_k**2)
+ )/(32*sqrt(pi)), (_k, 0, oo)), (_k, -oo, oo))
with ignore_warnings(UserWarning): ### TODO: Restore tests once warnings are removed
- assert (E(X, evaluate=False).rewrite(Integral).simplify()).dummy_eq(expre.simplify())
-
+ assert E(X, evaluate=False).rewrite(Integral).dummy_eq(expre)
+
+ X = Poisson('X', 1)
+ Y = Poisson('Y', X)
+ Z = Poisson('Z', Y)
+ exprd = exp(-1)*Sum(exp(-Y)*Y**x*Sum(exp(-X)*X**Y/(factorial(X)*factorial(Y)
+ ), (X, 0, oo)), (Y, 0, oo))/factorial(x)
+ assert density(Z)(x).simplify() == exprd
+
+ N = Normal('N', 1, 2)
+ M = Normal('M', 3, 4)
+ D = Normal('D', M, N)
+ exprd = Integral(sqrt(2)*exp(-(_k - 1)**2/8)*Integral(exp(-(-_k + x
+ )**2/(2*_k**2))*exp(-(_k - 3)**2/32)/(8*pi*_k)
+ , (_k, -oo, oo))/(4*sqrt(pi)), (_k, -oo, oo))
+ assert density(D, evaluate=False)(x).dummy_eq(exprd)
def test_Compound_Distribution():
X = Normal('X', 2, 4)
@@ -76,7 +89,7 @@ def test_Compound_Distribution():
C = CompoundDistribution(N)
assert C.is_Continuous
assert C.set == Interval(-oo, oo)
- assert C.pdf(x).simplify() == exp(-x**2/64 + x/16 - S(1)/16)/(8*sqrt(pi))
+ assert C.pdf(x, evaluate=True).simplify() == exp(-x**2/64 + x/16 - S(1)/16)/(8*sqrt(pi))
assert not isinstance(CompoundDistribution(NormalDistribution(2, 3)),
CompoundDistribution)
@@ -89,7 +102,7 @@ def test_Compound_Distribution():
assert C.is_Finite
assert C.set == {0, 1}
y = symbols('y', negative=False, integer=True)
- assert C.pdf(y) == Piecewise((S(1)/(30*beta(2, 4)), Eq(y, 0)),
+ assert C.pdf(y, evaluate=True) == Piecewise((S(1)/(30*beta(2, 4)), Eq(y, 0)),
(S(1)/(60*beta(2, 4)), Eq(y, 1)), (0, True))
k, t, z = symbols('k t z', positive=True, real=True)
@@ -98,7 +111,8 @@ def test_Compound_Distribution():
C = CompoundDistribution(X)
assert C.is_Discrete
assert C.set == S.Naturals0
- assert C.pdf(z).simplify() == t**z*(t + 1)**(-k - z)*gamma(k + z)/(gamma(k)*gamma(z + 1))
+ assert C.pdf(z, evaluate=True).simplify() == t**z*(t + 1)**(-k - z)*gamma(k \
+ + z)/(gamma(k)*gamma(z + 1))
def test_compound_pspace():
| [
{
"components": [
{
"doc": "",
"lines": [
41,
42
],
"name": "CompoundPSpace.is_Continuous",
"signature": "def is_Continuous(self):",
"type": "function"
},
{
"doc": "",
"lines": [
45,
46
... | [
"test_unevaluated_CompoundDist",
"test_Compound_Distribution"
] | [
"test_normal_CompoundDist",
"test_poisson_CompoundDist",
"test_bernoulli_CompoundDist"
] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
[GSoC] Allow more than one RV in compound distribution
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
Related to discussions in #19728
#### Brief description of what is fixed or changed
Previously:
```py
>>> from sympy.stats import *
>>> N = Normal('N', 0, 1)
>>> M = Normal('M', N, 1)
>>> D = Normal('D', 0, M)
>>> density(D)(x)
Traceback (most recent call last):
File "/usr/lib/python3.6/code.py", line 91, in runcode
exec(code, self.locals)
File "<console>", line 1, in <module>
File "/home/czgdp1807ssd/sympy_project/sympy/sympy/stats/rv.py", line 882, in density
return Density(expr, condition).doit(evaluate=evaluate, **kwargs)
File "/home/czgdp1807ssd/sympy_project/sympy/sympy/stats/rv.py", line 832, in doit
result = pspace(expr).compute_density(expr, **kwargs)
File "/home/czgdp1807ssd/sympy_project/sympy/sympy/stats/compound_rv.py", line 82, in compute_density
new_pspace = self._get_newpspace()
File "/home/czgdp1807ssd/sympy_project/sympy/sympy/stats/compound_rv.py", line 60, in _get_newpspace
Lambda(x, self.distribution.pdf(x)))
File "/home/czgdp1807ssd/sympy_project/sympy/sympy/stats/compound_rv.py", line 171, in pdf
raise NotImplementedError("Compound Distributions for more than"
NotImplementedError: Compound Distributions for more than one random argument is not implemented yet.
```
After changing the marginalization algorithm:
```py
In [1]: from sympy.stats import *
In [2]: N = Normal('N', 0, 1)
In [3]: M = Normal('M', N, 1)
In [4]: D = Normal('D', 0, M)
In [5]: density(D)(x)
Out[5]:
Piecewise((0, 2*Abs(arg(x)) <= pi/2),
          (Integral(<erfc(-M/sqrt(2)) and erfc(M/sqrt(2)) terms times
                     exp(-x**2/(2*M**2)), over sqrt(pi) and powers of M>,
                    (M, -oo, oo)), True))
#### Other comments
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* stats
* Allowing `CompoundDistribution` to handle more than one random variables
<!-- END RELEASE NOTES -->
ping @czgdp1807 @Upabjojr @jmig5776
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in sympy/stats/compound_rv.py]
(definition of CompoundPSpace.is_Continuous:)
def is_Continuous(self):
(definition of CompoundPSpace.is_Finite:)
def is_Finite(self):
(definition of CompoundPSpace.is_Discrete:)
def is_Discrete(self):
(definition of CompoundDistribution.is_Continuous:)
def is_Continuous(self):
(definition of CompoundDistribution.is_Finite:)
def is_Finite(self):
(definition of CompoundDistribution.is_Discrete:)
def is_Discrete(self):
(definition of CompoundDistribution._marginalise:)
def _marginalise(self, expr, rv, evaluate):
[end of new definitions in sympy/stats/compound_rv.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1b4529a95ef641c2fc15889091b281644069d20e | ||
sympy__sympy-19807 | 19,807 | sympy/sympy | 1.7 | 5a9542f27417151ed93242a5e9df6d7911f3c34c | 2020-07-20T19:14:30Z | diff --git a/sympy/vector/implicitregion.py b/sympy/vector/implicitregion.py
index 47bb208d91f5..2760f5b9a562 100644
--- a/sympy/vector/implicitregion.py
+++ b/sympy/vector/implicitregion.py
@@ -1,9 +1,11 @@
-from sympy import S
-from sympy.core import Basic, Tuple, diff, expand, Eq
+from sympy import S, Rational, gcd, sqrt, sign, symbols, Complement
+from sympy.core import Basic, Tuple, diff, expand, Eq, Integer
+from sympy.core.compatibility import ordered
from sympy.core.symbol import _symbol
-from sympy.solvers import solveset, nonlinsolve
+from sympy.solvers import solveset, nonlinsolve, diophantine
from sympy.polys import total_degree
from sympy.geometry import Point
+from sympy.ntheory.factor_ import core
class ImplicitRegion(Basic):
@@ -74,27 +76,42 @@ def regular_point(self):
Examples
========
- >>> from sympy.abc import x, y
+ >>> from sympy.abc import x, y, z
>>> from sympy.vector import ImplicitRegion
>>> circle = ImplicitRegion((x, y), (x + 2)**2 + (y - 3)**2 - 16)
>>> circle.regular_point()
- (-6, 3)
+ (-2, -1)
+ >>> parabola = ImplicitRegion((x, y), x**2 - 4*y)
+ >>> parabola.regular_point()
+ (0, 0)
+ >>> r = ImplicitRegion((x, y, z), (x + y + z)**4)
+ >>> r.regular_point()
+ (-10, -10, 20)
+
+ References
+ =========
- """
+ - Erik Hillgarter, "Rational Points on Conics", Diploma Thesis, RISC-Linz,
+ J. Kepler Universitat Linz, 1996. Availaible:
+ https://www3.risc.jku.at/publications/download/risc_1355/Rational%20Points%20on%20Conics.pdf
- # TODO: implement the algorithm to find regular point on a Conic.
- # Now it iterates over a range and tries to find a point on the region.
+ """
equation = self.equation
if len(self.variables) == 1:
return (list(solveset(equation, self.variables[0], domain=S.Reals))[0],)
elif len(self.variables) == 2:
- x, y = self.variables
- for x_reg in range(-100, 100):
- if not solveset(equation.subs(x, x_reg), self.variables[1], domain=S.Reals).is_empty:
- return (x_reg, list(solveset(equation.subs(x, x_reg), domain=S.Reals))[0])
- elif len(self.variables) == 3:
+ if self.degree == 2:
+ coeffs = a, b, c, d, e, f = conic_coeff(self.variables, equation)
+
+ if b**2 == 4*a*c:
+ x_reg, y_reg = self._regular_point_parabola(*coeffs)
+ else:
+ x_reg, y_reg = self._regular_point_ellipse(*coeffs)
+ return x_reg, y_reg
+
+ if len(self.variables) == 3:
x, y, z = self.variables
for x_reg in range(-10, 10):
@@ -107,6 +124,156 @@ def regular_point(self):
raise NotImplementedError()
+ def _regular_point_parabola(self, a, b, c, d, e, f):
+ ok = (a, d) != (0, 0) and (c, e) != (0, 0) and b**2 == 4*a*c and (a, c) != (0, 0)
+
+ if not ok:
+ raise ValueError("Rational Point on the conic does not exist")
+
+ if a != 0:
+ d_dash, f_dash = (4*a*e - 2*b*d, 4*a*f - d**2)
+ if d_dash != 0:
+ y_reg = -f_dash/d_dash
+ x_reg = -(d + b*y_reg)/(2*a)
+ else:
+ ok = False
+ elif c != 0:
+ d_dash, f_dash = (4*c*d - 2*b*e, 4*c*f - e**2)
+ if d_dash != 0:
+ x_reg = -f_dash/d_dash
+ y_reg = -(e + b*x_reg)/(2*c)
+ else:
+ ok = False
+
+ if ok:
+ return x_reg, y_reg
+ else:
+ raise ValueError("Rational Point on the conic does not exist")
+
+ def _regular_point_ellipse(self, a, b, c, d, e, f):
+ D = 4*a*c - b**2
+ ok = D
+
+ if not ok:
+ raise ValueError("Rational Point on the conic does not exist")
+
+ if a == 0 and c == 0:
+ K = -1
+ L = 4*(d*e - b*f)
+ elif c != 0:
+ K = D
+ L = 4*c**2*d**2 - 4*b*c*d*e + 4*a*c*e**2 + 4*b**2*c*f - 16*a*c**2*f
+ else:
+ K = D
+ L = 4*a**2*e**2 - 4*b*a*d*e + 4*b**2*a*f
+
+ ok = L != 0 and not(K > 0 and L < 0)
+ if not ok:
+ raise ValueError("Rational Point on the conic does not exist")
+
+ K = Rational(K).limit_denominator(10**12)
+ L = Rational(L).limit_denominator(10**12)
+
+ k1, k2 = K.p, K.q
+ l1, l2 = L.p, L.q
+ g = gcd(k2, l2)
+
+ a1 = (l2*k2)/g
+ b1 = (k1*l2)/g
+ c1 = -(l1*k2)/g
+ a2 = sign(a1)*core(abs(a1), 2)
+ r1 = sqrt(a1/a2)
+ b2 = sign(b1)*core(abs(b1), 2)
+ r2 = sqrt(b1/b2)
+ c2 = sign(c1)*core(abs(c1), 2)
+ r3 = sqrt(c1/c2)
+
+ g = gcd(gcd(a2, b2), c2)
+ a2 = a2/g
+ b2 = b2/g
+ c2 = c2/g
+
+ g1 = gcd(a2, b2)
+ a2 = a2/g1
+ b2 = b2/g1
+ c2 = c2*g1
+
+ g2 = gcd(a2,c2)
+ a2 = a2/g2
+ b2 = b2*g2
+ c2 = c2/g2
+
+ g3 = gcd(b2, c2)
+ a2 = a2*g3
+ b2 = b2/g3
+ c2 = c2/g3
+
+ x, y, z = symbols("x y z")
+ eq = a2*x**2 + b2*y**2 + c2*z**2
+
+ solutions = diophantine(eq)
+
+ if len(solutions) == 0:
+ raise ValueError("Rational Point on the conic does not exist")
+
+ flag = False
+ for sol in solutions:
+ syms = Tuple(*sol).free_symbols
+ rep = {s: 3 for s in syms}
+ sol_z = sol[2]
+
+ if sol_z == 0:
+ flag = True
+ continue
+
+ if not (isinstance(sol_z, Integer) or isinstance(sol_z, int)):
+ syms_z = sol_z.free_symbols
+
+ if len(syms_z) == 1:
+ p = next(iter(syms_z))
+ p_values = Complement(S.Integers, solveset(Eq(sol_z, 0), p, S.Integers))
+ rep[p] = next(iter(p_values))
+
+ if len(syms_z) == 2:
+ p, q = list(ordered(syms_z))
+
+ for i in S.Integers:
+ subs_sol_z = sol_z.subs(p, i)
+ q_values = Complement(S.Integers, solveset(Eq(subs_sol_z, 0), q, S.Integers))
+
+ if not q_values.is_empty:
+ rep[p] = i
+ rep[q] = next(iter(q_values))
+ break
+
+ if len(syms) != 0:
+ x, y, z = tuple(s.subs(rep) for s in sol)
+ else:
+ x, y, z = sol
+ flag = False
+ break
+
+ if flag:
+ raise ValueError("Rational Point on the conic does not exist")
+
+ x = (x*g3)/r1
+ y = (y*g2)/r2
+ z = (z*g1)/r3
+ x = x/z
+ y = y/z
+
+ if a == 0 and c == 0:
+ x_reg = (x + y - 2*e)/(2*b)
+ y_reg = (x - y - 2*d)/(2*b)
+ elif c != 0:
+ x_reg = (x - 2*d*c + b*e)/K
+ y_reg = (y - b*x_reg - e)/(2*c)
+ else:
+ y_reg = (x - 2*e*a + b*d)/K
+ x_reg = (y - b*y_reg - d)/(2*a)
+
+ return x_reg, y_reg
+
def singular_points(self):
"""
Returns a set of singular points of the region.
@@ -184,7 +351,7 @@ def rational_parametrization(self, parameters=('t', 's'), reg_point=None):
>>> circle = ImplicitRegion((x, y), Eq(x**2 + y**2, 4))
>>> circle.rational_parametrization()
- (-2 + 4/(t**2 + 1), 4*t/(t**2 + 1))
+ (4*t/(t**2 + 1), 4*t**2/(t**2 + 1) - 2)
>>> I = ImplicitRegion((x, y), x**3 + x**2 - y**2)
>>> I.rational_parametrization()
@@ -206,11 +373,12 @@ def rational_parametrization(self, parameters=('t', 's'), reg_point=None):
>>> c.rational_parametrization(reg_point=(3/4, 0))
(0.75 - 0.5/(t**2 + 1), -0.5*t/(t**2 + 1))
- Refrences
+ References
=========
- - Christoph M. Hoffmann, Conversion Methods between Parametric and
- Implicit Curves and Surfaces.
+ - Christoph M. Hoffmann, "Conversion Methods between Parametric and
+ Implicit Curves and Surfaces", Purdue e-Pubs, 1990. Available:
+ https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1827&context=cstech
"""
equation = self.equation
@@ -245,6 +413,12 @@ def rational_parametrization(self, parameters=('t', 's'), reg_point=None):
if len(self.singular_points()) != 0:
singular_points = self.singular_points()
for spoint in singular_points:
+ syms = Tuple(*spoint).free_symbols
+ rep = {s: 2 for s in syms}
+
+ if len(syms) != 0:
+ spoint = tuple(s.subs(rep) for s in spoint)
+
if self.multiplicity(spoint) == degree - 1:
point = spoint
break
@@ -311,3 +485,18 @@ def rational_parametrization(self, parameters=('t', 's'), reg_point=None):
return x_par, y_par, z_par
raise NotImplementedError()
+
+def conic_coeff(variables, equation):
+ if total_degree(equation) != 2:
+ raise ValueError()
+ x = variables[0]
+ y = variables[1]
+
+ equation = expand(equation)
+ a = equation.coeff(x**2)
+ b = equation.coeff(x*y)
+ c = equation.coeff(y**2)
+ d = equation.coeff(x, 1).coeff(y, 0)
+ e = equation.coeff(y, 1).coeff(x, 0)
+ f = equation.coeff(x, 0).coeff(y, 0)
+ return a, b, c, d, e, f
| diff --git a/sympy/vector/tests/test_implicitregion.py b/sympy/vector/tests/test_implicitregion.py
index c02166c52191..bd518b937313 100644
--- a/sympy/vector/tests/test_implicitregion.py
+++ b/sympy/vector/tests/test_implicitregion.py
@@ -20,10 +20,16 @@ def test_ImplicitRegion():
def test_regular_point():
r1 = ImplicitRegion((x,), x**2 - 16)
r1.regular_point() == (-4,)
- c = ImplicitRegion((x, y), x**2 + y**2 - 4)
- c.regular_point() == (-2, 0)
- r2 = ImplicitRegion((x, y), (x - S(5)/2)**2 + y**2 - (S(1)/4)**2)
- raises(NotImplementedError, lambda: r2.regular_point())
+ c1 = ImplicitRegion((x, y), x**2 + y**2 - 4)
+ c1.regular_point() == (2, 0)
+ c2 = ImplicitRegion((x, y), (x - S(5)/2)**2 + y**2 - (S(1)/4)**2)
+ c2.regular_point() == (11/4, 0)
+ c3 = ImplicitRegion((x, y), (y - 5)**2 - 16*(x - 5))
+ c3.regular_point() == (5, 5)
+ r2 = ImplicitRegion((x, y), x**2 - 4*x*y - 3*y**2 + 4*x + 8*y - 5)
+ r2.regular_point == (4/7, 13/21)
+ r3 = ImplicitRegion((x, y), x**2 - 2*x*y + 3*y**2 - 2*x - 5*y + 3/2)
+ raises(ValueError, lambda: r3.regular_point())
def test_singular_points_and_multiplicty():
@@ -51,34 +57,35 @@ def test_rational_parametrization():
line = ImplicitRegion((x, y), Eq(y, 3*x + 2))
assert line.rational_parametrization() == (x, 3*x + 2)
- circle1 = ImplicitRegion((x, y), ((x-2)**2 + (y+3)**2 - 4))
- assert circle1.rational_parametrization(parameters=t) == (4/(t**2 + 1), 4*t/(t**2 + 1) - 3)
- circle2 = ImplicitRegion((x, y), (x - 1/2)**2 + y**2 - (1/4)**2 )
- raises(NotImplementedError, lambda: circle2.rational_parametrization())
- circle2.rational_parametrization(t, reg_point=Point(0.75, 0)) == (3/4 - 0.5/(t**2 + 1), -0.5*t/(t**2 + 1))
+ circle1 = ImplicitRegion((x, y), (x-2)**2 + (y+3)**2 - 4)
+ assert circle1.rational_parametrization(parameters=t) == (4*t/(t**2 + 1) + 2, 4*t**2/(t**2 + 1) - 5)
+ circle2 = ImplicitRegion((x, y), (x - S.Half)**2 + y**2 - (S(1)/2)**2)
+ print(circle2.regular_point())
+ print(circle2.rational_parametrization())
+ assert circle2.rational_parametrization(parameters=t) == (t/(t**2 + 1) + S(1)/2, t**2/(t**2 + 1) - S(1)/2)
circle3 = ImplicitRegion((x, y), Eq(x**2 + y**2, 2*x))
- assert circle3.rational_parametrization(parameters=(t,)) == (2/(t**2 + 1), 2*t/(t**2 + 1))
+ assert circle3.rational_parametrization(parameters=(t,)) == (2*t/(t**2 + 1) + 1, 2*t**2/(t**2 + 1) - 1)
parabola = ImplicitRegion((x, y), (y - 3)**2 - 4*(x + 6))
assert parabola.rational_parametrization(t) == (-6 + 4/t**2, 3 + 4/t)
rect_hyperbola = ImplicitRegion((x, y), x*y - 1)
- assert rect_hyperbola.rational_parametrization(t) == (-100 + (100*t + S(1)/100)/t, 100*t)
+ assert rect_hyperbola.rational_parametrization(t) == (-1 + (t + 1)/t, t)
cubic_curve = ImplicitRegion((x, y), x**3 + x**2 - y**2)
assert cubic_curve.rational_parametrization(parameters=(t)) == (t**2 - 1, t*(t**2 - 1))
cuspidal = ImplicitRegion((x, y), (x**3 - y**2))
assert cuspidal.rational_parametrization(t) == (t**2, t**3)
- I= ImplicitRegion((x, y), x**3 + x**2 - y**2)
+ I = ImplicitRegion((x, y), x**3 + x**2 - y**2)
assert I.rational_parametrization(t) == (t**2 - 1, t*(t**2 - 1))
sphere = ImplicitRegion((x, y, z), Eq(x**2 + y**2 + z**2, 2*x))
+ print(sphere.rational_parametrization(parameters=(s, t)))
assert sphere.rational_parametrization(parameters=(s, t)) == (2/(s**2 + t**2 + 1), 2*t/(s**2 + t**2 + 1), 2*s/(s**2 + t**2 + 1))
conic = ImplicitRegion((x, y), Eq(x**2 + 4*x*y + 3*y**2 + x - y + 10, 0))
- conic.rational_parametrization(t) == (-100 - 205/(3*(3*t**2 - sqrt(41881)*t + 4*t - 2*sqrt(41881)/3 + 1)),\
- -205*t/(3*(3*t**2 - sqrt(41881)*t + 4*t - 2*sqrt(41881)/3 + 1)) - sqrt(41881)/6 + 401/6)
+ conic.rational_parametrization(t) == (17/2 + 4/(3*t**2 + 4*t + 1), 4*t/(3*t**2 + 4*t + 1) - 11/2)
r1 = ImplicitRegion((x, y), y**2 - x**3 + x)
raises(NotImplementedError, lambda: r1.rational_parametrization())
| [
{
"components": [
{
"doc": "",
"lines": [
127,
151
],
"name": "ImplicitRegion._regular_point_parabola",
"signature": "def _regular_point_parabola(self, a, b, c, d, e, f):",
"type": "function"
},
{
"doc": "",
"l... | [
"test_regular_point"
] | [
"test_ImplicitRegion",
"test_singular_points_and_multiplicty"
] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Implemented an algorithm to find a rational point on a conic
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
#19320
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
#### Brief description of what is fixed or changed
Implemented an algorithm to find a rational point on a conic. To parametrize a monoid of degree d, we need to find a point of multiplicity d - 1. For curves of degree 2, this means determining a rational point on them. While determining a point of multiplicity >= 2 is easy using sympy's `nonlinsolve`, a separate algorithm needs to be implemented for points of multiplicity 1, i.e. regular points.
Previously, `regular_point` iterated over a fixed range of candidate points and checked whether they lie on the conic. This PR replaces that approach with a proper algorithm for conics. I have not yet implemented the algorithm for [quadrics](https://en.wikipedia.org/wiki/Quadric).
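The parabolic branch of the new algorithm (`b**2 == 4*a*c`) admits a closed form, visible in `_regular_point_parabola` in the diff above. As a rough illustration, here is a standalone plain-Python sketch of that closed form; `regular_point_parabola` is an ad-hoc helper written for this note, not part of the sympy API, and it omits some of the degenerate-case checks the patch performs.

```python
from fractions import Fraction


def regular_point_parabola(a, b, c, d, e, f):
    """Rational point on a*x**2 + b*x*y + c*y**2 + d*x + e*y + f = 0
    for the parabolic case b**2 == 4*a*c (mirrors the closed form of
    the patched _regular_point_parabola)."""
    if b * b != 4 * a * c or (a, c) == (0, 0):
        raise ValueError("not a parabola")
    if a != 0:
        d2, f2 = 4 * a * e - 2 * b * d, 4 * a * f - d * d
        if d2 == 0:
            raise ValueError("rational point does not exist")
        y = Fraction(-f2, d2)
        x = -(d + b * y) / Fraction(2 * a)
    else:
        d2, f2 = 4 * c * d - 2 * b * e, 4 * c * f - e * e
        if d2 == 0:
            raise ValueError("rational point does not exist")
        x = Fraction(-f2, d2)
        y = -(e + b * x) / Fraction(2 * c)
    return x, y


# Conic from the new test suite: (y - 5)**2 - 16*(x - 5) expands to
# y**2 - 16*x - 10*y + 105, i.e. (a, b, c, d, e, f) = (0, 0, 1, -16, -10, 105).
assert regular_point_parabola(0, 0, 1, -16, -10, 105) == (5, 5)
assert regular_point_parabola(1, 0, 0, 0, -4, 0) == (0, 0)  # x**2 - 4*y
```

Using `Fraction` keeps the point exact, so it can be substituted back into the conic without floating-point tolerance.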
#### Other comments
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* vector
* Added a function to find a rational point on conic
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/vector/implicitregion.py]
(definition of ImplicitRegion._regular_point_parabola:)
def _regular_point_parabola(self, a, b, c, d, e, f):
(definition of ImplicitRegion._regular_point_ellipse:)
def _regular_point_ellipse(self, a, b, c, d, e, f):
(definition of conic_coeff:)
def conic_coeff(variables, equation):
[end of new definitions in sympy/vector/implicitregion.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1b4529a95ef641c2fc15889091b281644069d20e | ||
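The updated circle doctest in the diff above gives `(4*t/(t**2 + 1), 4*t**2/(t**2 + 1) - 2)` as a rational parametrization of `x**2 + y**2 == 4`; that identity can be verified exactly with a short standalone snippet (the helper name here is illustrative, not sympy API):

```python
from fractions import Fraction


def circle_param(t):
    """Rational parametrization of x**2 + y**2 == 4 from the updated
    ImplicitRegion doctest: (4*t/(t**2 + 1), 4*t**2/(t**2 + 1) - 2)."""
    t = Fraction(t)
    den = t * t + 1
    return 4 * t / den, 4 * t * t / den - 2


# Exact check with rational arithmetic at several parameter values.
for t in (-3, -1, 0, Fraction(1, 2), 2, 7):
    x, y = circle_param(t)
    assert x * x + y * y == 4
```

As with any rational parametrization built by projecting lines through a base point, one point of the circle, here (0, 2), is only reached in the limit t → ∞.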
sympy__sympy-19795 | 19,795 | sympy/sympy | 1.7 | ce85528e6ca95be5ad5822843116d9cbc8913fbe | 2020-07-18T07:29:53Z | diff --git a/sympy/stats/__init__.py b/sympy/stats/__init__.py
index cab740072483..be9d57f38070 100644
--- a/sympy/stats/__init__.py
+++ b/sympy/stats/__init__.py
@@ -142,7 +142,7 @@
'joint_eigen_distribution', 'JointEigenDistribution',
'level_spacing_distribution',
- 'MatrixGamma',
+ 'MatrixGamma', 'Wishart', 'MatrixNormal',
'Probability', 'Expectation', 'Variance', 'Covariance', 'Moment',
'CentralMoment',
@@ -191,7 +191,7 @@
GaussianSymplecticEnsemble, joint_eigen_distribution,
JointEigenDistribution, level_spacing_distribution)
-from .matrix_distributions import MatrixGamma
+from .matrix_distributions import MatrixGamma, Wishart, MatrixNormal
from .symbolic_probability import (Probability, Expectation, Variance,
Covariance, Moment, CentralMoment)
diff --git a/sympy/stats/matrix_distributions.py b/sympy/stats/matrix_distributions.py
index 8e94b55f78e8..4c3a0162bb35 100644
--- a/sympy/stats/matrix_distributions.py
+++ b/sympy/stats/matrix_distributions.py
@@ -1,7 +1,7 @@
-from sympy import S, Basic, exp, multigamma
+from sympy import S, Basic, exp, multigamma, pi
from sympy.core.sympify import sympify, _sympify
from sympy.matrices import (ImmutableMatrix, Inverse, Trace, Determinant,
- MatrixSymbol, MatrixBase)
+ MatrixSymbol, MatrixBase, Transpose, MatrixSet)
from sympy.stats.rv import (_value_check, RandomMatrixSymbol, NamedArgsMixin, PSpace,
_symbol_converter)
@@ -77,7 +77,7 @@ def check(alpha, beta, scale_matrix):
if not isinstance(scale_matrix , MatrixSymbol):
_value_check(scale_matrix.is_positive_definite, "The shape "
"matrix must be positive definite.")
- _value_check(scale_matrix.shape[0] == scale_matrix.shape[1], "Should "
+ _value_check(scale_matrix.is_square, "Should "
"be square matrix")
_value_check(alpha.is_positive, "Shape parameter should be positive.")
_value_check(beta.is_positive, "Scale parameter should be positive.")
@@ -85,7 +85,7 @@ def check(alpha, beta, scale_matrix):
@property
def set(self):
k = self.scale_matrix.shape[0]
- return S.Reals ** k
+ return MatrixSet(k, k, S.Reals)
@property
def dimension(self):
@@ -151,3 +151,184 @@ def MatrixGamma(symbol, alpha, beta, scale_matrix):
if isinstance(scale_matrix, list):
scale_matrix = ImmutableMatrix(scale_matrix)
return rv(symbol, MatrixGammaDistribution, (alpha, beta, scale_matrix))
+
+#-------------------------------------------------------------------------------
+# Wishart Distribution ---------------------------------------------------------
+
+class WishartDistribution(MatrixDistribution):
+
+ _argnames = ('n', 'scale_matrix')
+
+ @staticmethod
+ def check(n, scale_matrix):
+ if not isinstance(scale_matrix , MatrixSymbol):
+ _value_check(scale_matrix.is_positive_definite, "The shape "
+ "matrix must be positive definite.")
+ _value_check(scale_matrix.is_square, "Should "
+ "be square matrix")
+ _value_check(n.is_positive, "Shape parameter should be positive.")
+
+ @property
+ def set(self):
+ k = self.scale_matrix.shape[0]
+ return MatrixSet(k, k, S.Reals)
+
+ @property
+ def dimension(self):
+ return self.scale_matrix.shape
+
+ def pdf(self, x):
+ n, scale_matrix = self.n, self.scale_matrix
+ p = scale_matrix.shape[0]
+ if isinstance(x, list):
+ x = ImmutableMatrix(x)
+ if not isinstance(x, (MatrixBase, MatrixSymbol)):
+ raise ValueError("%s should be an isinstance of Matrix "
+ "or MatrixSymbol" % str(x))
+ sigma_inv_x = - Inverse(scale_matrix)*x / S(2)
+ term1 = exp(Trace(sigma_inv_x))/((2**(p*n/S(2))) * multigamma(n/S(2), p))
+ term2 = (Determinant(scale_matrix))**(-n/S(2))
+ term3 = (Determinant(x))**(S(n - p - 1)/2)
+ return term1 * term2 * term3
+
+def Wishart(symbol, n, scale_matrix):
+ """
+ Creates a random variable with Wishart Distribution.
+
+ The density of the said distribution can be found at [1].
+
+ Parameters
+ ==========
+
+ n: Positive Real number
+ Represents degrees of freedom
+ scale_matrix: Positive definite real square matrix
+ Scale Matrix
+
+ Returns
+ =======
+
+ RandomSymbol
+
+ Examples
+ ========
+
+ >>> from sympy.stats import density, Wishart
+ >>> from sympy import MatrixSymbol, symbols
+ >>> n = symbols('n', positive=True)
+ >>> W = Wishart('W', n, [[2, 1], [1, 2]])
+ >>> X = MatrixSymbol('X', 2, 2)
+ >>> density(W)(X).doit()
+ 2**(-n)*3**(-n/2)*exp(Trace(Matrix([
+ [-1/3, 1/6],
+ [ 1/6, -1/3]])*X))*Determinant(X)**(n/2 - 3/2)/(sqrt(pi)*gamma(n/2)*gamma(n/2 - 1/2))
+ >>> density(W)([[1, 0], [0, 1]]).doit()
+ 2**(-n)*3**(-n/2)*exp(-2/3)/(sqrt(pi)*gamma(n/2)*gamma(n/2 - 1/2))
+
+ References
+ ==========
+
+ .. [1] https://en.wikipedia.org/wiki/Wishart_distribution
+
+ """
+ if isinstance(scale_matrix, list):
+ scale_matrix = ImmutableMatrix(scale_matrix)
+ return rv(symbol, WishartDistribution, (n, scale_matrix))
+
+#-------------------------------------------------------------------------------
+# Matrix Normal distribution ---------------------------------------------------
+
+class MatrixNormalDistribution(MatrixDistribution):
+
+ _argnames = ('location_matrix', 'scale_matrix_1', 'scale_matrix_2')
+
+ @staticmethod
+ def check(location_matrix, scale_matrix_1, scale_matrix_2):
+ if not isinstance(scale_matrix_1 , MatrixSymbol):
+ _value_check(scale_matrix_1.is_positive_definite, "The shape "
+ "matrix must be positive definite.")
+ if not isinstance(scale_matrix_2 , MatrixSymbol):
+ _value_check(scale_matrix_2.is_positive_definite, "The shape "
+ "matrix must be positive definite.")
+ _value_check(scale_matrix_1.is_square, "Scale matrix 1 should be "
+ "be square matrix")
+ _value_check(scale_matrix_2.is_square, "Scale matrix 2 should be "
+ "be square matrix")
+ n = location_matrix.shape[0]
+ p = location_matrix.shape[1]
+ _value_check(scale_matrix_1.shape[0] == n, "Scale matrix 1 should be"
+ " of shape %s x %s"% (str(n), str(n)))
+ _value_check(scale_matrix_2.shape[0] == p, "Scale matrix 2 should be"
+ " of shape %s x %s"% (str(p), str(p)))
+
+ @property
+ def set(self):
+ n, p = self.location_matrix.shape
+ return MatrixSet(n, p, S.Reals)
+
+ @property
+ def dimension(self):
+ return self.location_matrix.shape
+
+ def pdf(self, x):
+ M , U , V = self.location_matrix, self.scale_matrix_1, self.scale_matrix_2
+ n, p = M.shape
+ if isinstance(x, list):
+ x = ImmutableMatrix(x)
+ if not isinstance(x, (MatrixBase, MatrixSymbol)):
+ raise ValueError("%s should be an isinstance of Matrix "
+ "or MatrixSymbol" % str(x))
+ term1 = Inverse(V)*Transpose(x - M)*Inverse(U)*(x - M)
+ num = exp(-Trace(term1)/S(2))
+ den = (2*pi)**(S(n*p)/2) * Determinant(U)**S(p)/2 * Determinant(V)**S(n)/2
+ return num/den
+
+def MatrixNormal(symbol, location_matrix, scale_matrix_1, scale_matrix_2):
+ """
+ Creates a random variable with Matrix Normal Distribution.
+
+ The density of the said distribution can be found at [1].
+
+ Parameters
+ ==========
+
+ location_matrix: Real ``n x p`` matrix
+ Represents degrees of freedom
+ scale_matrix_1: Positive definite matrix
+ Scale Matrix of shape ``n x n``
+ scale_matrix_2: Positive definite matrix
+ Scale Matrix of shape ``p x p``
+
+ Returns
+ =======
+
+ RandomSymbol
+
+ Examples
+ ========
+
+ >>> from sympy import MatrixSymbol
+ >>> from sympy.stats import density, MatrixNormal
+ >>> M = MatrixNormal('M', [[1, 2]], [1], [[1, 0], [0, 1]])
+ >>> X = MatrixSymbol('X', 1, 2)
+ >>> density(M)(X).doit()
+ 2*exp(-Trace((Matrix([
+ [-1],
+ [-2]]) + X.T)*(Matrix([[-1, -2]]) + X))/2)/pi
+ >>> density(M)([[3, 4]]).doit()
+ 2*exp(-4)/pi
+
+ References
+ ==========
+
+ .. [1] https://en.wikipedia.org/wiki/Matrix_normal_distribution
+
+ """
+ if isinstance(location_matrix, list):
+ location_matrix = ImmutableMatrix(location_matrix)
+ if isinstance(scale_matrix_1, list):
+ scale_matrix_1 = ImmutableMatrix(scale_matrix_1)
+ if isinstance(scale_matrix_2, list):
+ scale_matrix_2 = ImmutableMatrix(scale_matrix_2)
+ args = (location_matrix, scale_matrix_1, scale_matrix_2)
+ return rv(symbol, MatrixNormalDistribution, args)
| diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index 381ea0ec2642..598f243a4e6b 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -1788,6 +1788,19 @@ def test_sympy__stats__matrix_distributions__MatrixGammaDistribution():
from sympy import Matrix
assert _test_args(MatrixGammaDistribution(3, 4, Matrix([[1, 0], [0, 1]])))
+def test_sympy__stats__matrix_distributions__WishartDistribution():
+ from sympy.stats.matrix_distributions import WishartDistribution
+ from sympy import Matrix
+ assert _test_args(WishartDistribution(3, Matrix([[1, 0], [0, 1]])))
+
+def test_sympy__stats__matrix_distributions__MatrixNormalDistribution():
+ from sympy.stats.matrix_distributions import MatrixNormalDistribution
+ from sympy import MatrixSymbol
+ L = MatrixSymbol('L', 1, 2)
+ S1 = MatrixSymbol('S1', 1, 1)
+ S2 = MatrixSymbol('S2', 2, 2)
+ assert _test_args(MatrixNormalDistribution(L, S1, S2))
+
def test_sympy__core__symbol__Str():
from sympy.core.symbol import Str
diff --git a/sympy/stats/tests/test_matrix_distributions.py b/sympy/stats/tests/test_matrix_distributions.py
index 446e8c06c710..0002d46f7f46 100644
--- a/sympy/stats/tests/test_matrix_distributions.py
+++ b/sympy/stats/tests/test_matrix_distributions.py
@@ -1,9 +1,8 @@
-from sympy import (exp, S, ProductSet, sqrt, pi, symbols,
- Product, gamma, Dummy)
-from sympy.matrices import Determinant, Matrix, Trace, MatrixSymbol
+from sympy import exp, S, sqrt, pi, symbols, Product, gamma, Dummy
+from sympy.matrices import Determinant, Matrix, Trace, MatrixSymbol, MatrixSet
from sympy.stats import density
from sympy.stats.matrix_distributions import (MatrixGammaDistribution,
- MatrixGamma, MatrixPSpace)
+ MatrixGamma, MatrixPSpace, Wishart, MatrixNormal)
from sympy.testing.pytest import raises
@@ -15,7 +14,7 @@ def test_MatrixPSpace():
def test_MatrixGamma():
M = MatrixGamma('M', 1, 2, [[1, 0], [0, 1]])
- assert M.pspace.distribution.set == ProductSet(S.Reals, S.Reals)
+ assert M.pspace.distribution.set == MatrixSet(2, 2, S.Reals)
assert isinstance(density(M), MatrixGammaDistribution)
X = MatrixSymbol('X', 2, 2)
num = exp(Trace(Matrix([[-S(1)/2, 0], [0, -S(1)/2]])*X))
@@ -44,3 +43,51 @@ def test_MatrixGamma():
raises(ValueError, lambda: MatrixGamma('M', -1, -2, [[1, 0], [0, 1]]))
raises(ValueError, lambda: MatrixGamma('M', -1, 2, [[1, 0], [2, 1]]))
raises(ValueError, lambda: MatrixGamma('M', -1, 2, [[1, 0], [0]]))
+
+def test_Wishart():
+ W = Wishart('W', 5, [[1, 0], [0, 1]])
+ assert W.pspace.distribution.set == MatrixSet(2, 2, S.Reals)
+ X = MatrixSymbol('X', 2, 2)
+ term1 = exp(Trace(Matrix([[-S(1)/2, 0], [0, -S(1)/2]])*X))
+ assert density(W)(X).doit() == term1 * Determinant(X)/(24*pi)
+ assert density(W)([[2, 1], [1, 2]]).doit() == exp(-2)/(8*pi)
+ n = symbols('n', positive=True)
+ d = symbols('d', positive=True, integer=True)
+ Y = MatrixSymbol('Y', d, d)
+ SM = MatrixSymbol('SM', d, d)
+ W = Wishart('W', n, SM)
+ k = Dummy('k')
+ exprd = 2**(-d*n/2)*pi**(-d*(d - 1)/4)*exp(Trace(-(S(1)/2)*SM**(-1)*Y)
+ )*Determinant(SM)**(-n/2)*Determinant(Y)**(
+ -d/2 + n/2 - S(1)/2)/Product(gamma(-k/2 + n/2 + S(1)/2), (k, 1, d))
+ assert density(W)(Y).dummy_eq(exprd)
+ raises(ValueError, lambda: density(W)(1))
+ raises(ValueError, lambda: Wishart('W', -1, [[1, 0], [0, 1]]))
+ raises(ValueError, lambda: Wishart('W', -1, [[1, 0], [2, 1]]))
+ raises(ValueError, lambda: Wishart('W', 2, [[1, 0], [0]]))
+
+def test_MatrixNormal():
+ M = MatrixNormal('M', [[5, 6]], [4], [[2, 1], [1, 2]])
+ assert M.pspace.distribution.set == MatrixSet(1, 2, S.Reals)
+ X = MatrixSymbol('X', 1, 2)
+ term1 = exp(-Trace(Matrix([[ S(2)/3, -S(1)/3], [-S(1)/3, S(2)/3]])*(
+ Matrix([[-5], [-6]]) + X.T)*Matrix([[1/4]])*(Matrix([[-5, -6]]) + X))/2)
+ assert density(M)(X).doit() == term1/(24*pi)
+ assert density(M)([[7, 8]]).doit() == exp(-S(1)/3)/(24*pi)
+ d, n = symbols('d n', positive=True, integer=True)
+ SM2 = MatrixSymbol('SM2', d, d)
+ SM1 = MatrixSymbol('SM1', n, n)
+ LM = MatrixSymbol('LM', n, d)
+ Y = MatrixSymbol('Y', n, d)
+ M = MatrixNormal('M', LM, SM1, SM2)
+ exprd = 4*(2*pi)**(-d*n/2)*exp(-Trace(SM2**(-1)*(-LM.T + Y.T)*SM1**(-1)*(-LM + Y)
+ )/2)*Determinant(SM1)**(-d)*Determinant(SM2)**(-n)
+ assert density(M)(Y).doit() == exprd
+ raises(ValueError, lambda: density(M)(1))
+ raises(ValueError, lambda: MatrixNormal('M', [1, 2], [[1, 0], [0, 1]], [[1, 0], [2, 1]]))
+ raises(ValueError, lambda: MatrixNormal('M', [1, 2], [[1, 0], [2, 1]], [[1, 0], [0, 1]]))
+ raises(ValueError, lambda: MatrixNormal('M', [1, 2], [[1, 0], [0, 1]], [[1, 0], [0, 1]]))
+ raises(ValueError, lambda: MatrixNormal('M', [1, 2], [[1, 0], [2]], [[1, 0], [0, 1]]))
+ raises(ValueError, lambda: MatrixNormal('M', [1, 2], [[1, 0], [2, 1]], [[1, 0], [0]]))
+ raises(ValueError, lambda: MatrixNormal('M', [[1, 2]], [[1, 0], [0, 1]], [[1, 0]]))
+ raises(ValueError, lambda: MatrixNormal('M', [[1, 2]], [1], [[1, 0]]))
| [
{
"components": [
{
"doc": "",
"lines": [
158,
192
],
"name": "WishartDistribution",
"signature": "class WishartDistribution(MatrixDistribution):",
"type": "class"
},
{
"doc": "",
"lines": [
163,
... | [
"test_sympy__stats__matrix_distributions__WishartDistribution",
"test_sympy__stats__matrix_distributions__MatrixNormalDistribution",
"test_MatrixPSpace",
"test_MatrixGamma",
"test_Wishart"
] | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_sympy__assumptions__assume__Predicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers__AnyArgs",
"test_sympy__assu... | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
[GSoC] Added Matrix Normal and Wishart Distribution
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
FIxes #19723
Closes #17204
#### Brief description of what is fixed or changed
#### Other comments
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* stats
* Added `Wishart` and `MatrixNormal` Distributions
<!-- END RELEASE NOTES -->
ping @czgdp1807 @Upabjojr @jmig5776
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/stats/matrix_distributions.py]
(definition of WishartDistribution:)
class WishartDistribution(MatrixDistribution):
(definition of WishartDistribution.check:)
def check(n, scale_matrix):
(definition of WishartDistribution.set:)
def set(self):
(definition of WishartDistribution.dimension:)
def dimension(self):
(definition of WishartDistribution.pdf:)
def pdf(self, x):
(definition of Wishart:)
def Wishart(symbol, n, scale_matrix):
"""Creates a random variable with Wishart Distribution.
The density of the said distribution can be found at [1].
Parameters
==========
n: Positive Real number
Represents degrees of freedom
scale_matrix: Positive definite real square matrix
Scale Matrix
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import density, Wishart
>>> from sympy import MatrixSymbol, symbols
>>> n = symbols('n', positive=True)
>>> W = Wishart('W', n, [[2, 1], [1, 2]])
>>> X = MatrixSymbol('X', 2, 2)
>>> density(W)(X).doit()
2**(-n)*3**(-n/2)*exp(Trace(Matrix([
[-1/3, 1/6],
[ 1/6, -1/3]])*X))*Determinant(X)**(n/2 - 3/2)/(sqrt(pi)*gamma(n/2)*gamma(n/2 - 1/2))
>>> density(W)([[1, 0], [0, 1]]).doit()
2**(-n)*3**(-n/2)*exp(-2/3)/(sqrt(pi)*gamma(n/2)*gamma(n/2 - 1/2))
References
==========
.. [1] https://en.wikipedia.org/wiki/Wishart_distribution"""
(definition of MatrixNormalDistribution:)
class MatrixNormalDistribution(MatrixDistribution):
(definition of MatrixNormalDistribution.check:)
def check(location_matrix, scale_matrix_1, scale_matrix_2):
(definition of MatrixNormalDistribution.set:)
def set(self):
(definition of MatrixNormalDistribution.dimension:)
def dimension(self):
(definition of MatrixNormalDistribution.pdf:)
def pdf(self, x):
(definition of MatrixNormal:)
def MatrixNormal(symbol, location_matrix, scale_matrix_1, scale_matrix_2):
"""Creates a random variable with Matrix Normal Distribution.
The density of the said distribution can be found at [1].
Parameters
==========
location_matrix: Real ``n x p`` matrix
Represents degrees of freedom
scale_matrix_1: Positive definite matrix
Scale Matrix of shape ``n x n``
scale_matrix_2: Positive definite matrix
Scale Matrix of shape ``p x p``
Returns
=======
RandomSymbol
Examples
========
>>> from sympy import MatrixSymbol
>>> from sympy.stats import density, MatrixNormal
>>> M = MatrixNormal('M', [[1, 2]], [1], [[1, 0], [0, 1]])
>>> X = MatrixSymbol('X', 1, 2)
>>> density(M)(X).doit()
2*exp(-Trace((Matrix([
[-1],
[-2]]) + X.T)*(Matrix([[-1, -2]]) + X))/2)/pi
>>> density(M)([[3, 4]]).doit()
2*exp(-4)/pi
References
==========
.. [1] https://en.wikipedia.org/wiki/Matrix_normal_distribution"""
[end of new definitions in sympy/stats/matrix_distributions.py]
</definitions>
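As a sanity check on the `WishartDistribution.pdf` defined in the patch above, the value expected by the new test suite, `density(W)([[2, 1], [1, 2]]) == exp(-2)/(8*pi)` for `n = 5` and identity scale matrix, can be reproduced in plain Python. The helpers `multigamma2` and `wishart_pdf_2x2` below are ad-hoc functions written for this check, not sympy functions.

```python
import math


def multigamma2(a):
    # Bivariate (p = 2) multivariate gamma: pi**(1/2) * gamma(a) * gamma(a - 1/2)
    return math.pi ** 0.5 * math.gamma(a) * math.gamma(a - 0.5)


def wishart_pdf_2x2(x, n, sigma):
    """Wishart density for 2x2 matrices, following the pdf in the patch:
    exp(Tr(-Sigma^-1 x / 2)) / (2**(p*n/2) * Gamma_p(n/2))
        * det(Sigma)**(-n/2) * det(x)**((n - p - 1)/2)."""
    p = 2
    det = lambda m: m[0][0] * m[1][1] - m[0][1] * m[1][0]
    ds = det(sigma)
    inv = [[sigma[1][1] / ds, -sigma[0][1] / ds],
           [-sigma[1][0] / ds, sigma[0][0] / ds]]
    # trace(-Sigma^-1 * x / 2)
    tr = -(inv[0][0] * x[0][0] + inv[0][1] * x[1][0]
           + inv[1][0] * x[0][1] + inv[1][1] * x[1][1]) / 2
    norm = 2 ** (p * n / 2) * multigamma2(n / 2) * ds ** (n / 2)
    return math.exp(tr) * det(x) ** ((n - p - 1) / 2) / norm


val = wishart_pdf_2x2([[2, 1], [1, 2]], 5, [[1, 0], [0, 1]])
assert abs(val - math.exp(-2) / (8 * math.pi)) < 1e-12
```

The normalizer works out to 2**5 * Gamma_2(5/2) = 24*pi here, matching the `24*pi` that appears in the new `test_Wishart`.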
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
[GSoC][Discussion] Matrix Distributions
### Proposal
For adding new matrix distributions in `sympy.stats`, I think they can probably be constructed with `random_matrix_models.py` (to avoid creating a new random symbol). Some examples of matrix distributions are [Wishart](https://en.wikipedia.org/wiki/Wishart_distribution), [MatrixGamma](https://en.wikipedia.org/wiki/Matrix_gamma_distribution), [Normal-Wishart](https://en.wikipedia.org/wiki/Normal-Wishart_distribution), etc. This is because
1. The current `JointRandomSymbol` is not a subclass of `MatrixSymbol` and so doesn't have a `shape` as its attribute,
2. The `pdf` method of current Joint Distributions are defined like
https://github.com/sympy/sympy/blob/1f7cbeee19200aac259f4ffb8ec208b9b77fd5cf/sympy/stats/joint_rv_types.py#L146-L153
Suppose `X = MatrixGamma('X', 1, 2, [[1, 0], [0, 1]])`.
This makes it difficult to handle queries such as `density(X)([[1, 2], [2, 1]])` and `density(X)(T)` (where `T` is a `MatrixSymbol`), as the second one should also be supported IMO. If we handle it some way, it might raise errors in functions like `marginal_distribution`, `expectation`, etc.
So, it should be implemented under `random_matrix_models.py`
ping @czgdp1807 @Upabjojr
----------
Can you suggest an example problem from some mathematics documents which uses matrix distributions?
If you replace `ImmutableMatrix` with `.as_immutable()`... does that method exist in matrix expressions?
Ideally the code should be written once and should work with both implicit-element and explicit-element matrices.
> Can you suggest an example problem from some mathematics documents which uses matrix distributions?
I haven't got many resources for this, but by googling I found a few from [wolfram](https://reference.wolfram.com/language/guide/MatrixDistributions.html), which shows that they can be extended further in `random_matrix_models.py`, as a few of the examples there are already present in the file and we could add some of the remaining ones.
Also, [this](https://encyclopediaofmath.org/wiki/Matrix_variate_distribution) gives some basic examples and definitions of Matrix Distributions
> Ideally the code should be written once and should work with both implicit-element and explicit-element matrices.
I actually think that they should be a part of `random_matrix_models.py` and not Joint Distributions, as I noticed some differences between the pdf of the [MatrixNormal Distribution](https://encyclopediaofmath.org/wiki/Matrix_variate_distribution) (a matrix distribution) and the [Multivariate Normal Distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution) (a joint distribution).
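For reference, the matrix variate normal density being compared is (standard textbook form, as in the encyclopedia entries linked above — this is a sketch of the notation, not necessarily SymPy's internal parametrization): for $X \in \mathbb{R}^{n \times p}$ with mean $M$, among-row covariance $U$ ($n \times n$) and among-column covariance $V$ ($p \times p$),

```latex
p(X \mid M, U, V)
  = \frac{\exp\left( -\tfrac{1}{2}\operatorname{tr}\left[ V^{-1} (X - M)^{\mathsf{T}} U^{-1} (X - M) \right] \right)}
         {(2\pi)^{np/2}\, \lvert V\rvert^{n/2}\, \lvert U\rvert^{p/2}},
\qquad
\operatorname{vec}(X) \sim \mathcal{N}\left( \operatorname{vec}(M),\; V \otimes U \right).
```

The vec identity is what connects the matrix form back to the multivariate (joint) normal: the two densities agree only when the covariance has this Kronecker structure, which is the difference in pdfs the comment points to.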
--------------------
Add Wishart distribution
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234". See
https://github.com/blog/1506-closing-issues-via-pull-requests . Please also
write a comment on that issue linking back to this pull request once it is
open. -->
#### Brief description of what is fixed or changed
Added wishart distribution in `sympy/stats/joint_rv_types.py`
#### Other comments
Ping @Upabjojr @sidhantnagpal
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
- stats
- Added Wishart distribution in `sympy/stats/joint_rv_types.py`
<!-- END RELEASE NOTES -->
----------
:white_check_mark:
Hi, I am the [SymPy bot](https://github.com/sympy/sympy-bot) (v147). I'm here to help you write a release notes entry. Please read the [guide on how to write release notes](https://github.com/sympy/sympy/wiki/Writing-Release-Notes).
Your release notes are in good order.
Here is what the release notes will look like:
* stats
- Added Wishart distribution in `sympy/stats/joint_rv_types.py` ([#17204](https://github.com/sympy/sympy/pull/17204) by [@ritesh99rakesh](https://github.com/ritesh99rakesh))
This will be added to https://github.com/sympy/sympy/wiki/Release-Notes-for-1.5.
Note: This comment will be updated with the latest check if you edit the pull request. You need to reload the page to see it. <details><summary>Click here to see the pull request description that was parsed.</summary>
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234". See
https://github.com/blog/1506-closing-issues-via-pull-requests . Please also
write a comment on that issue linking back to this pull request once it is
open. -->
#### Brief description of what is fixed or changed
Added wishart distribution in `sympy/stats/joint_rv_types.py`
#### Other comments
Ping @Upabjojr @sidhantnagpal
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
- stats
- Added Wishart distribution in `sympy/stats/joint_rv_types.py`
<!-- END RELEASE NOTES -->
</details><p>
# [Codecov](https://codecov.io/gh/sympy/sympy/pull/17204?src=pr&el=h1) Report
> Merging [#17204](https://codecov.io/gh/sympy/sympy/pull/17204?src=pr&el=desc) into [master](https://codecov.io/gh/sympy/sympy/commit/e6a5b6cb700f9aee594d80bea058ca19a95551cc?src=pr&el=desc) will **decrease** coverage by `<.001%`.
> The diff coverage is `96%`.
```diff
@@ Coverage Diff @@
## master #17204 +/- ##
===========================================
- Coverage 74.55% 74.55% -0.001%
===========================================
Files 623 623
Lines 161609 161663 +54
Branches 37933 37932 -1
===========================================
+ Hits 120480 120520 +40
- Misses 35798 35811 +13
- Partials 5331 5332 +1
```
It would be really nice to see this distribution in `master`. I think we don't need any algorithmic change for supporting the pdf of this distribution, though I am not sure how `integrate(f, MatrixSymbolObject)` will behave. `f` doesn't contain any matrices though. We need to discuss how to implement it and what all is needed from other modules. Maybe we can add this distribution under `random_matrices`.
I have added this as a part of my GSoC project in stats. I would like to take this over and continue working on it, and IMO some changes are required in `joint_rv.py` to make some methods work more generally, as in `crv.py`, so that further additions of multivariate distributions follow a specific API.
IMO, all of you can propose your approaches in your proposals. We can compare them while evaluating them.
--------------------
</issues> | 1b4529a95ef641c2fc15889091b281644069d20e | |
matplotlib__matplotlib-17900 | 17900 | matplotlib/matplotlib | 3.2 | ff821ba3249401fe7f5fdb11cb7ac5d0564c2697 | 2020-07-12T18:20:00Z
diff --git a/doc/users/next_whats_new/colormap_get_under_over_bad.rst b/doc/users/next_whats_new/colormap_get_under_over_bad.rst
new file mode 100644
index 000000000000..d21e7b4349cc
--- /dev/null
+++ b/doc/users/next_whats_new/colormap_get_under_over_bad.rst
@@ -0,0 +1,7 @@
+Get under/over/bad colors of Colormap objects
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+`matplotlib.colors.Colormap` now has methods
+`~.colors.Colormap.get_under`, `~.colors.Colormap.get_over`,
+`~.colors.Colormap.get_bad` for the colors used for out-of-range and masked
+values.
diff --git a/lib/matplotlib/colors.py b/lib/matplotlib/colors.py
index 95151c7b9779..57293aaa664a 100644
--- a/lib/matplotlib/colors.py
+++ b/lib/matplotlib/colors.py
@@ -110,6 +110,8 @@ def __delitem__(self, key):
_colors_full_map.update(BASE_COLORS)
_colors_full_map = _ColorMapping(_colors_full_map)
+_REPR_PNG_SIZE = (512, 64)
+
def get_named_colors_mapping():
"""Return the global mapping of names to named colors."""
@@ -614,6 +616,12 @@ def __copy__(self):
cmapobject._global = False
return cmapobject
+ def get_bad(self):
+ """Get the color for masked values."""
+ if not self._isinit:
+ self._init()
+ return self._lut[self._i_bad]
+
def set_bad(self, color='k', alpha=None):
"""Set the color for masked values."""
_warn_if_global_cmap_modified(self)
@@ -621,19 +629,27 @@ def set_bad(self, color='k', alpha=None):
if self._isinit:
self._set_extremes()
+ def get_under(self):
+ """Get the color for low out-of-range values."""
+ if not self._isinit:
+ self._init()
+ return self._lut[self._i_under]
+
def set_under(self, color='k', alpha=None):
- """
- Set the color for low out-of-range values when ``norm.clip = False``.
- """
+ """Set the color for low out-of-range values."""
_warn_if_global_cmap_modified(self)
self._rgba_under = to_rgba(color, alpha)
if self._isinit:
self._set_extremes()
+ def get_over(self):
+ """Get the color for high out-of-range values."""
+ if not self._isinit:
+ self._init()
+ return self._lut[self._i_over]
+
def set_over(self, color='k', alpha=None):
- """
- Set the color for high out-of-range values when ``norm.clip = False``.
- """
+ """Set the color for high out-of-range values."""
_warn_if_global_cmap_modified(self)
self._rgba_over = to_rgba(color, alpha)
if self._isinit:
@@ -655,6 +671,7 @@ def _init(self):
raise NotImplementedError("Abstract class only")
def is_gray(self):
+ """Return whether the color map is grayscale."""
if not self._isinit:
self._init()
return (np.all(self._lut[:, 0] == self._lut[:, 1]) and
@@ -685,8 +702,8 @@ def reversed(self, name=None):
def _repr_png_(self):
"""Generate a PNG representation of the Colormap."""
- IMAGE_SIZE = (400, 50)
- X = np.tile(np.linspace(0, 1, IMAGE_SIZE[0]), (IMAGE_SIZE[1], 1))
+ X = np.tile(np.linspace(0, 1, _REPR_PNG_SIZE[0]),
+ (_REPR_PNG_SIZE[1], 1))
pixels = self(X, bytes=True)
png_bytes = io.BytesIO()
title = self.name + ' color map'
@@ -703,12 +720,36 @@ def _repr_html_(self):
"""Generate an HTML representation of the Colormap."""
png_bytes = self._repr_png_()
png_base64 = base64.b64encode(png_bytes).decode('ascii')
- return ('<strong>' + self.name + '</strong>' +
- '<img ' +
- 'alt="' + self.name + ' color map" ' +
- 'title="' + self.name + '"' +
- 'style="border: 1px solid #555;" ' +
- 'src="data:image/png;base64,' + png_base64 + '">')
+ def color_block(color):
+ hex_color = to_hex(color, keep_alpha=True)
+ return (f'<div title="{hex_color}" '
+ 'style="display: inline-block; '
+ 'width: 1em; height: 1em; '
+ 'margin: 0; '
+ 'vertical-align: middle; '
+ 'border: 1px solid #555; '
+ f'background-color: {hex_color};"></div>')
+
+ return ('<div style="vertical-align: middle;">'
+ f'<strong>{self.name}</strong> '
+ '</div>'
+ '<div class="cmap"><img '
+ f'alt="{self.name} color map" '
+ f'title="{self.name}" '
+ 'style="border: 1px solid #555;" '
+ f'src="data:image/png;base64,{png_base64}"></div>'
+ '<div style="vertical-align: middle; '
+ f'max-width: {_REPR_PNG_SIZE[0]+2}px; '
+ 'display: flex; justify-content: space-between;">'
+ '<div style="float: left;">'
+ f'{color_block(self.get_under())} under'
+ '</div>'
+ '<div style="margin: 0 auto; display: inline-block;">'
+ f'bad {color_block(self.get_bad())}'
+ '</div>'
+ '<div style="float: right;">'
+ f'over {color_block(self.get_over())}'
+ '</div>')
class LinearSegmentedColormap(Colormap):
diff --git a/lib/matplotlib/tests/test_colors.py b/lib/matplotlib/tests/test_colors.py
index 5e0b2044bcf8..6180e789ac95 100644
--- a/lib/matplotlib/tests/test_colors.py
+++ b/lib/matplotlib/tests/test_colors.py
@@ -5,6 +5,7 @@
import numpy as np
from PIL import Image
import pytest
+import base64
from numpy.testing import assert_array_equal, assert_array_almost_equal
@@ -303,6 +304,9 @@ def test_BoundaryNorm():
cmref.set_under('white')
cmshould = mcolors.ListedColormap(['white', 'blue', 'red', 'black'])
+ assert mcolors.same_color(cmref.get_over(), 'black')
+ assert mcolors.same_color(cmref.get_under(), 'white')
+
refnorm = mcolors.BoundaryNorm(bounds, cmref.N)
mynorm = mcolors.BoundaryNorm(bounds, cmshould.N, extend='both')
assert mynorm.vmin == refnorm.vmin
@@ -323,6 +327,8 @@ def test_BoundaryNorm():
cmref.set_under('white')
cmshould = mcolors.ListedColormap(['white', 'blue', 'red'])
+ assert mcolors.same_color(cmref.get_under(), 'white')
+
assert cmref.N == 2
assert cmshould.N == 3
refnorm = mcolors.BoundaryNorm(bounds, cmref.N)
@@ -339,6 +345,8 @@ def test_BoundaryNorm():
cmref.set_over('black')
cmshould = mcolors.ListedColormap(['blue', 'red', 'black'])
+ assert mcolors.same_color(cmref.get_over(), 'black')
+
assert cmref.N == 2
assert cmshould.N == 3
refnorm = mcolors.BoundaryNorm(bounds, cmref.N)
@@ -1160,4 +1168,15 @@ def test_repr_html():
cmap = plt.get_cmap('viridis')
html = cmap._repr_html_()
assert len(html) > 0
+ png = cmap._repr_png_()
+ assert base64.b64encode(png).decode('ascii') in html
assert cmap.name in html
+ assert html.startswith('<div')
+ assert html.endswith('</div>')
+
+
+def test_get_under_over_bad():
+ cmap = plt.get_cmap('viridis')
+ assert_array_equal(cmap.get_under(), cmap(-np.inf))
+ assert_array_equal(cmap.get_over(), cmap(np.inf))
+ assert_array_equal(cmap.get_bad(), cmap(np.nan))
diff --git a/doc/users/next_whats_new/colormap_get_under_over_bad.rst b/doc/users/next_whats_new/colormap_get_under_over_bad.rst
new file mode 100644
index 000000000000..d21e7b4349cc
--- /dev/null
+++ b/doc/users/next_whats_new/colormap_get_under_over_bad.rst
@@ -0,0 +1,7 @@
+Get under/over/bad colors of Colormap objects
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+`matplotlib.colors.Colormap` now has methods
+`~.colors.Colormap.get_under`, `~.colors.Colormap.get_over`,
+`~.colors.Colormap.get_bad` for the colors used for out-of-range and masked
+values.
[
{
"components": [
{
"doc": "Get the color for masked values.",
"lines": [
619,
623
],
"name": "Colormap.get_bad",
"signature": "def get_bad(self):",
"type": "function"
},
{
"doc": "Get the color for low out-of-range va...
[
"lib/matplotlib/tests/test_colors.py::test_BoundaryNorm",
"lib/matplotlib/tests/test_colors.py::test_repr_html",
"lib/matplotlib/tests/test_colors.py::test_get_under_over_bad"
] | [
"lib/matplotlib/tests/test_colors.py::test_create_lookup_table[5-result0]",
"lib/matplotlib/tests/test_colors.py::test_create_lookup_table[2-result1]",
"lib/matplotlib/tests/test_colors.py::test_create_lookup_table[1-result2]",
"lib/matplotlib/tests/test_colors.py::test_resample",
"lib/matplotlib/tests/test...
This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add getters and _repr_html_ for over/under/bad values of Colormap objects.
## PR Summary
This PR extends from #17888 to add over/under/bad values in the `_repr_html_` of `matplotlib.colors.Colormap` objects. This also adds public getters for those values, which previously had only setters.
## Demo

## PR Checklist
- [x] Has Pytest style unit tests
- [x] Code is [Flake 8](http://flake8.pycqa.org/en/latest/) compliant
- [x] New features are documented, with examples if plot related
- [x] Documentation is sphinx and numpydoc compliant
- [x] Added an entry to doc/users/next_whats_new/ if major new feature (follow instructions in README.rst there)
- [ ] Documented in doc/api/next_api_changes/* if API changed in a backward-incompatible way
----------
</request>
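As a sketch of the lookup-table mechanism the new getters read from (a minimal stand-in class, not Matplotlib's actual `Colormap`; the slot layout mirrors the `_lut` / `_i_under` / `_i_over` / `_i_bad` attributes the patch uses, and the color values below are made up for the example):

```python
class TinyColormap:
    """Minimal stand-in showing the LUT layout the getters read from:
    N regular entries followed by three extra slots for under/over/bad."""

    def __init__(self, colors, under, over, bad):
        self.N = len(colors)
        # Slots N, N+1, N+2 mirror Colormap's under/over/bad extremes.
        self._lut = list(colors) + [under, over, bad]
        self._i_under = self.N
        self._i_over = self.N + 1
        self._i_bad = self.N + 2

    def get_under(self):
        return self._lut[self._i_under]

    def get_over(self):
        return self._lut[self._i_over]

    def get_bad(self):
        return self._lut[self._i_bad]


cmap = TinyColormap(
    colors=[(0.0, 0.0, 0.0, 1.0), (1.0, 1.0, 1.0, 1.0)],
    under=(1.0, 1.0, 1.0, 1.0),
    over=(0.0, 0.0, 0.0, 1.0),
    bad=(0.5, 0.5, 0.5, 1.0),
)
print(cmap.get_under())  # (1.0, 1.0, 1.0, 1.0)
print(cmap.get_bad())    # (0.5, 0.5, 0.5, 1.0)
```

In the real class the getters additionally trigger the lazy `_init()` before reading the table, which this sketch omits.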
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in lib/matplotlib/colors.py]
(definition of Colormap.get_bad:)
def get_bad(self):
"""Get the color for masked values."""
(definition of Colormap.get_under:)
def get_under(self):
"""Get the color for low out-of-range values."""
(definition of Colormap.get_over:)
def get_over(self):
"""Get the color for high out-of-range values."""
(definition of Colormap._repr_html_.color_block:)
def color_block(color):
[end of new definitions in lib/matplotlib/colors.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>>
ff821ba3249401fe7f5fdb11cb7ac5d0564c2697
Project-MONAI__MONAI-745 | 745 | Project-MONAI/MONAI | null | 44fcf772760e7296787a63182e462e331ace31f8 | 2020-07-12T07:11:57Z
diff --git a/docs/source/transforms.rst b/docs/source/transforms.rst
index 692108f980..d5bb39bbb7 100644
--- a/docs/source/transforms.rst
+++ b/docs/source/transforms.rst
@@ -183,6 +183,12 @@ IO
:members:
:special-members: __call__
+`LoadNumpy`
+"""""""""""
+.. autoclass:: LoadNumpy
+ :members:
+ :special-members: __call__
+
Post-processing
^^^^^^^^^^^^^^^
@@ -570,6 +576,12 @@ IO (Dict)
:members:
:special-members: __call__
+`LoadNumpyd`
+""""""""""""
+.. autoclass:: LoadNumpyd
+ :members:
+ :special-members: __call__
+
Post-processing (Dict)
^^^^^^^^^^^^^^^^^^^^^^
diff --git a/monai/transforms/io/array.py b/monai/transforms/io/array.py
index bcaf91f208..6ba1acff32 100644
--- a/monai/transforms/io/array.py
+++ b/monai/transforms/io/array.py
@@ -17,7 +17,7 @@
import numpy as np
from torch.utils.data._utils.collate import np_str_obj_array_pattern
-
+from monai.config import KeysCollection
from monai.data.utils import correct_nifti_header_if_necessary
from monai.transforms.compose import Transform
from monai.utils import optional_import, ensure_tuple
@@ -111,6 +111,8 @@ def __call__(self, filename):
class LoadPNG(Transform):
"""
Load common 2D image format (PNG, JPG, etc. using PIL) file or files from provided path.
+ If loading a list of files, stack them together and add a new dimension as first dimension,
+ and use the meta data of the first image to represent the stacked result.
It's based on the Image module in PIL library:
https://pillow.readthedocs.io/en/stable/reference/Image.html
"""
@@ -138,6 +140,10 @@ def __call__(self, filename):
if self.dtype:
data = data.astype(self.dtype)
img_array.append(data)
+
+ if self.image_only:
+ continue
+
meta = dict()
meta["filename_or_obj"] = name
meta["spatial_shape"] = data.shape[:2]
@@ -146,10 +152,6 @@ def __call__(self, filename):
meta["width"] = img.width
meta["height"] = img.height
meta["info"] = img.info
-
- if self.image_only:
- continue
-
if not compatible_meta:
compatible_meta = meta
else:
@@ -159,3 +161,73 @@ def __call__(self, filename):
img_array = np.stack(img_array, axis=0) if len(img_array) > 1 else img_array[0]
return img_array if self.image_only else (img_array, compatible_meta)
+
+
+class LoadNumpy(Transform):
+ """
+ Load arrays or pickled objects from .npy, .npz or pickled files, file or files are from provided path.
+ A typical usage is to load the `mask` data for classification task.
+ If loading a list of files or loading an npz file, stack results together and add a new dimension as first dimension,
+ and use the meta data of the first file to represent the stacked result.
+ It can load part of the npz file with specified `npz_keys`.
+ It's based on the Numpy load/read API:
+ https://numpy.org/doc/stable/reference/generated/numpy.load.html
+
+ """
+
+ def __init__(
+ self, data_only: bool = False, dtype: Optional[np.dtype] = np.float32, npz_keys: Optional[KeysCollection] = None
+ ) -> None:
+ """
+ Args:
+ data_only: if True return only the data array, otherwise return data array and metadata.
+ dtype: if not None convert the loaded data to this data type.
+ npz_keys: if loading npz file, only load the specified keys, if None, load all the items.
+ stack the loaded items together to construct a new first dimension.
+
+ """
+ self.data_only = data_only
+ self.dtype = dtype
+ if npz_keys is not None:
+ npz_keys = ensure_tuple(npz_keys)
+ self.npz_keys = npz_keys
+
+ def __call__(self, filename):
+ """
+ Args:
+ filename (str, list, tuple, file): path file or file-like object or a list of files.
+ """
+ if isinstance(filename, (tuple, list)):
+ for name in filename:
+ if name.endswith(".npz"):
+ raise TypeError("can not load a list of npz file.")
+ filename = ensure_tuple(filename)
+ data_array = list()
+ compatible_meta = None
+
+ def _save_data_meta(data_array, name, data, compatible_meta):
+ data_array.append(data if self.dtype is None else data.astype(self.dtype))
+ if not self.data_only:
+ meta = dict()
+ meta["filename_or_obj"] = name
+ meta["spatial_shape"] = data.shape
+ if not compatible_meta:
+ compatible_meta = meta
+ else:
+ assert np.allclose(
+ meta["spatial_shape"], compatible_meta["spatial_shape"]
+ ), "all the data in the list should have same shape."
+ return compatible_meta
+
+ for name in filename:
+ data = np.load(name, allow_pickle=True)
+ if name.endswith(".npz"):
+ # load expected items from NPZ file
+ npz_keys = [f"arr_{i}" for i in range(len(data))] if self.npz_keys is None else self.npz_keys
+ for k in npz_keys:
+ compatible_meta = _save_data_meta(data_array, name, data[k], compatible_meta)
+ else:
+ compatible_meta = _save_data_meta(data_array, name, data, compatible_meta)
+
+ data_array = np.stack(data_array, axis=0) if len(data_array) > 1 else data_array[0]
+ return data_array if self.data_only else (data_array, compatible_meta)
diff --git a/monai/transforms/io/dictionary.py b/monai/transforms/io/dictionary.py
index faf4060af2..5a04b4406f 100644
--- a/monai/transforms/io/dictionary.py
+++ b/monai/transforms/io/dictionary.py
@@ -21,7 +21,7 @@
from monai.config import KeysCollection
from monai.transforms.compose import MapTransform
-from monai.transforms.io.array import LoadNifti, LoadPNG
+from monai.transforms.io.array import LoadNifti, LoadPNG, LoadNumpy
class LoadNiftid(MapTransform):
@@ -47,8 +47,8 @@ def __init__(
keys: keys of the corresponding items to be transformed.
See also: :py:class:`monai.transforms.compose.MapTransform`
as_closest_canonical: if True, load the image as closest to canonical axis format.
- dtype: if not None convert the loaded image to this data type.
- meta_key_postfix: use `key_{postfix}` to to store meta data of the nifti image,
+ dtype: if not None convert the loaded image data to this data type.
+ meta_key_postfix: use `key_{postfix}` to store the metadata of the nifti image,
default is `meta_dict`. The meta data is a dictionary object.
For example, load nifti file for `image`, store the metadata into `image_meta_dict`.
overwriting: whether allow to overwrite existing meta data of same key.
@@ -95,10 +95,10 @@ def __init__(
Args:
keys: keys of the corresponding items to be transformed.
See also: :py:class:`monai.transforms.compose.MapTransform`
- dtype: if not None convert the loaded image to this data type.
- meta_key_postfix: use `key_{postfix}` to to store meta data of the nifti image,
+ dtype: if not None convert the loaded image data to this data type.
+ meta_key_postfix: use `key_{postfix}` to store the metadata of the PNG image,
default is `meta_dict`. The meta data is a dictionary object.
- For example, load nifti file for `image`, store the metadata into `image_meta_dict`.
+ For example, load PNG file for `image`, store the metadata into `image_meta_dict`.
overwriting: whether allow to overwrite existing meta data of same key.
default is False, which will raise exception if encountering existing key.
@@ -127,5 +127,57 @@ def __call__(self, data):
return d
+class LoadNumpyd(MapTransform):
+ """
+ Dictionary-based wrapper of :py:class:`monai.transforms.LoadNumpy`.
+ """
+
+ def __init__(
+ self,
+ keys: KeysCollection,
+ dtype: Optional[np.dtype] = np.float32,
+ npz_keys: Optional[KeysCollection] = None,
+ meta_key_postfix: str = "meta_dict",
+ overwriting: bool = False,
+ ) -> None:
+ """
+ Args:
+ keys: keys of the corresponding items to be transformed.
+ See also: :py:class:`monai.transforms.compose.MapTransform`
+ dtype: if not None convert the loaded data to this data type.
+ npz_keys: if loading npz file, only load the specified keys, if None, load all the items.
+ stack the loaded items together to construct a new first dimension.
+ meta_key_postfix: use `key_{postfix}` to store the metadata of the Numpy data,
+ default is `meta_dict`. The meta data is a dictionary object.
+ For example, load Numpy file for `mask`, store the metadata into `mask_meta_dict`.
+ overwriting: whether allow to overwrite existing meta data of same key.
+ default is False, which will raise exception if encountering existing key.
+
+ Raises:
+ ValueError: meta_key_postfix must be a string.
+
+ """
+ super().__init__(keys)
+ self.loader = LoadNumpy(data_only=False, dtype=dtype, npz_keys=npz_keys)
+ if not isinstance(meta_key_postfix, str):
+ raise ValueError("meta_key_postfix must be a string.")
+ self.meta_key_postfix = meta_key_postfix
+ self.overwriting = overwriting
+
+ def __call__(self, data):
+ d = dict(data)
+ for key in self.keys:
+ data = self.loader(d[key])
+ assert isinstance(data, (tuple, list)), "loader must return a tuple or list."
+ d[key] = data[0]
+ assert isinstance(data[1], dict), "metadata must be a dict."
+ key_to_add = f"{key}_{self.meta_key_postfix}"
+ if key_to_add in d and not self.overwriting:
+ raise KeyError(f"meta data with key {key_to_add} already exists.")
+ d[key_to_add] = data[1]
+ return d
+
+
LoadNiftiD = LoadNiftiDict = LoadNiftid
LoadPNGD = LoadPNGDict = LoadPNGd
+LoadNumpyD = LoadNumpyDict = LoadNumpyd
diff --git a/tests/test_load_numpy.py b/tests/test_load_numpy.py
new file mode 100644
index 0000000000..6332e6241d
--- /dev/null
+++ b/tests/test_load_numpy.py
@@ -0,0 +1,89 @@
+# Copyright 2020 MONAI Consortium
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import os
+import shutil
+import numpy as np
+import tempfile
+from monai.transforms import LoadNumpy
+
+
+class TestLoadNumpy(unittest.TestCase):
+ def test_npy(self):
+ test_data = np.random.randint(0, 256, size=[3, 4, 4])
+ tempdir = tempfile.mkdtemp()
+ filepath = os.path.join(tempdir, "test_data.npy")
+ np.save(filepath, test_data)
+
+ result = LoadNumpy()(filepath)
+ self.assertTupleEqual(result[1]["spatial_shape"], test_data.shape)
+ self.assertTupleEqual(result[0].shape, test_data.shape)
+ np.testing.assert_allclose(result[0], test_data)
+
+ shutil.rmtree(tempdir)
+
+ def test_npz1(self):
+ test_data1 = np.random.randint(0, 256, size=[3, 4, 4])
+ tempdir = tempfile.mkdtemp()
+ filepath = os.path.join(tempdir, "test_data.npy")
+ np.save(filepath, test_data1)
+
+ result = LoadNumpy()(filepath)
+ self.assertTupleEqual(result[1]["spatial_shape"], test_data1.shape)
+ self.assertTupleEqual(result[0].shape, test_data1.shape)
+ np.testing.assert_allclose(result[0], test_data1)
+
+ shutil.rmtree(tempdir)
+
+ def test_npz2(self):
+ test_data1 = np.random.randint(0, 256, size=[3, 4, 4])
+ test_data2 = np.random.randint(0, 256, size=[3, 4, 4])
+ tempdir = tempfile.mkdtemp()
+ filepath = os.path.join(tempdir, "test_data.npz")
+ np.savez(filepath, test_data1, test_data2)
+
+ result = LoadNumpy()(filepath)
+ self.assertTupleEqual(result[1]["spatial_shape"], test_data1.shape)
+ self.assertTupleEqual(result[0].shape, (2, 3, 4, 4))
+ np.testing.assert_allclose(result[0], np.stack([test_data1, test_data2]))
+
+ shutil.rmtree(tempdir)
+
+ def test_npz3(self):
+ test_data1 = np.random.randint(0, 256, size=[3, 4, 4])
+ test_data2 = np.random.randint(0, 256, size=[3, 4, 4])
+ tempdir = tempfile.mkdtemp()
+ filepath = os.path.join(tempdir, "test_data.npz")
+ np.savez(filepath, test1=test_data1, test2=test_data2)
+
+ result = LoadNumpy(npz_keys=["test1", "test2"])(filepath)
+ self.assertTupleEqual(result[1]["spatial_shape"], test_data1.shape)
+ self.assertTupleEqual(result[0].shape, (2, 3, 4, 4))
+ np.testing.assert_allclose(result[0], np.stack([test_data1, test_data2]))
+
+ shutil.rmtree(tempdir)
+
+ def test_npy_pickle(self):
+ test_data = {"test": np.random.randint(0, 256, size=[3, 4, 4])}
+ tempdir = tempfile.mkdtemp()
+ filepath = os.path.join(tempdir, "test_data.npy")
+ np.save(filepath, test_data, allow_pickle=True)
+
+ result = LoadNumpy(data_only=True, dtype=None)(filepath).item()
+ self.assertTupleEqual(result["test"].shape, test_data["test"].shape)
+ np.testing.assert_allclose(result["test"], test_data["test"])
+
+ shutil.rmtree(tempdir)
+
+
+if __name__ == "__main__":
+ unittest.main()
diff --git a/tests/test_load_numpyd.py b/tests/test_load_numpyd.py
new file mode 100644
index 0000000000..4e68e1b3ed
--- /dev/null
+++ b/tests/test_load_numpyd.py
@@ -0,0 +1,89 @@
+# Copyright 2020 MONAI Consortium
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import os
+import shutil
+import numpy as np
+import tempfile
+from monai.transforms import LoadNumpyd
+
+
+class TestLoadNumpyd(unittest.TestCase):
+ def test_npy(self):
+ test_data = np.random.randint(0, 256, size=[3, 4, 4])
+ tempdir = tempfile.mkdtemp()
+ filepath = os.path.join(tempdir, "test_data.npy")
+ np.save(filepath, test_data)
+
+ result = LoadNumpyd(keys="mask")({"mask": filepath})
+ self.assertTupleEqual(result["mask_meta_dict"]["spatial_shape"], test_data.shape)
+ self.assertTupleEqual(result["mask"].shape, test_data.shape)
+ np.testing.assert_allclose(result["mask"], test_data)
+
+ shutil.rmtree(tempdir)
+
+ def test_npz1(self):
+ test_data1 = np.random.randint(0, 256, size=[3, 4, 4])
+ tempdir = tempfile.mkdtemp()
+ filepath = os.path.join(tempdir, "test_data.npy")
+ np.save(filepath, test_data1)
+
+ result = LoadNumpyd(keys="mask")({"mask": filepath})
+ self.assertTupleEqual(result["mask_meta_dict"]["spatial_shape"], test_data1.shape)
+ self.assertTupleEqual(result["mask"].shape, test_data1.shape)
+ np.testing.assert_allclose(result["mask"], test_data1)
+
+ shutil.rmtree(tempdir)
+
+ def test_npz2(self):
+ test_data1 = np.random.randint(0, 256, size=[3, 4, 4])
+ test_data2 = np.random.randint(0, 256, size=[3, 4, 4])
+ tempdir = tempfile.mkdtemp()
+ filepath = os.path.join(tempdir, "test_data.npz")
+ np.savez(filepath, test_data1, test_data2)
+
+ result = LoadNumpyd(keys="mask")({"mask": filepath})
+ self.assertTupleEqual(result["mask_meta_dict"]["spatial_shape"], test_data1.shape)
+ self.assertTupleEqual(result["mask"].shape, (2, 3, 4, 4))
+ np.testing.assert_allclose(result["mask"], np.stack([test_data1, test_data2]))
+
+ shutil.rmtree(tempdir)
+
+ def test_npz3(self):
+ test_data1 = np.random.randint(0, 256, size=[3, 4, 4])
+ test_data2 = np.random.randint(0, 256, size=[3, 4, 4])
+ tempdir = tempfile.mkdtemp()
+ filepath = os.path.join(tempdir, "test_data.npz")
+ np.savez(filepath, test1=test_data1, test2=test_data2)
+
+ result = LoadNumpyd(keys="mask", npz_keys=["test1", "test2"])({"mask": filepath})
+ self.assertTupleEqual(result["mask_meta_dict"]["spatial_shape"], test_data1.shape)
+ self.assertTupleEqual(result["mask"].shape, (2, 3, 4, 4))
+ np.testing.assert_allclose(result["mask"], np.stack([test_data1, test_data2]))
+
+ shutil.rmtree(tempdir)
+
+ def test_npy_pickle(self):
+ test_data = {"test": np.random.randint(0, 256, size=[3, 4, 4])}
+ tempdir = tempfile.mkdtemp()
+ filepath = os.path.join(tempdir, "test_data.npy")
+ np.save(filepath, test_data, allow_pickle=True)
+
+ result = LoadNumpyd(keys="mask", dtype=None)({"mask": filepath})["mask"].item()
+ self.assertTupleEqual(result["test"].shape, test_data["test"].shape)
+ np.testing.assert_allclose(result["test"], test_data["test"])
+
+ shutil.rmtree(tempdir)
+
+
+if __name__ == "__main__":
+ unittest.main()
diff --git a/docs/source/transforms.rst b/docs/source/transforms.rst
index 692108f980..d5bb39bbb7 100644
--- a/docs/source/transforms.rst
+++ b/docs/source/transforms.rst
@@ -183,6 +183,12 @@ IO
:members:
:special-members: __call__
+`LoadNumpy`
+"""""""""""
+.. autoclass:: LoadNumpy
+ :members:
+ :special-members: __call__
+
Post-processing
^^^^^^^^^^^^^^^
@@ -570,6 +576,12 @@ IO (Dict)
:members:
:special-members: __call__
+`LoadNumpyd`
+""""""""""""
+.. autoclass:: LoadNumpyd
+ :members:
+ :special-members: __call__
+
Post-processing (Dict)
^^^^^^^^^^^^^^^^^^^^^^
[
{
"components": [
{
"doc": "Load arrays or pickled objects from .npy, .npz or pickled files, file or files are from provided path.\nA typical usage is to load the `mask` data for classification task.\nIf loading a list of files or loading an npz file, stack results together and add a new dimension as...
[
"tests/test_load_numpy.py::TestLoadNumpy::test_npy",
"tests/test_load_numpy.py::TestLoadNumpy::test_npy_pickle",
"tests/test_load_numpy.py::TestLoadNumpy::test_npz1",
"tests/test_load_numpy.py::TestLoadNumpy::test_npz2",
"tests/test_load_numpy.py::TestLoadNumpy::test_npz3",
"tests/test_load_numpyd.py::Tes... | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
471 Add LoadNumpy transform
Fixes #471 .
### Description
This PR implemented `LoadNumpy` and `LoadNumpyd` transforms.
It's requested from user and can be useful to load extra data for training or evaluation.
### Status
**Ready**
### Types of changes
<!--- Put an `x` in all the boxes that apply, and remove the not applicable items -->
- [x] Non-breaking change (fix or new feature that would not break existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing functionality to change).
- [x] New tests added to cover the changes.
- [x] Integration tests passed locally by running `./runtests.sh --codeformat --coverage`.
- [x] In-line docstrings updated.
- [x] Documentation updated, tested `make html` command in the `docs/` folder.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in monai/transforms/io/array.py]
(definition of LoadNumpy:)
class LoadNumpy(Transform):
"""Load arrays or pickled objects from .npy, .npz or pickled files, file or files are from provided path.
A typical usage is to load the `mask` data for classification task.
If loading a list of files or loading npz file, stack results together and add a new dimension as first dimension,
and use the meta data of the first file to represent the stacked result.
It can load part of the npz file with specified `npz_keys`.
It's based on the Numpy load/read API:
https://numpy.org/doc/stable/reference/generated/numpy.load.html"""
(definition of LoadNumpy.__init__:)
def __init__( self, data_only: bool = False, dtype: Optional[np.dtype] = np.float32, npz_keys: Optional[KeysCollection] = None ) -> None:
"""Args:
data_only: if True return only the data array, otherwise return data array and metadata.
dtype: if not None convert the loaded data to this data type.
npz_keys: if loading npz file, only load the specified keys, if None, load all the items.
stack the loaded items together to construct a new first dimension."""
(definition of LoadNumpy.__call__:)
def __call__(self, filename):
"""Args:
filename (str, list, tuple, file): path file or file-like object or a list of files."""
(definition of LoadNumpy.__call__._save_data_meta:)
def _save_data_meta(data_array, name, data, compatible_meta):
[end of new definitions in monai/transforms/io/array.py]
[start of new definitions in monai/transforms/io/dictionary.py]
(definition of LoadNumpyd:)
class LoadNumpyd(MapTransform):
"""Dictionary-based wrapper of :py:class:`monai.transforms.LoadNumpy`."""
(definition of LoadNumpyd.__init__:)
def __init__( self, keys: KeysCollection, dtype: Optional[np.dtype] = np.float32, npz_keys: Optional[KeysCollection] = None, meta_key_postfix: str = "meta_dict", overwriting: bool = False, ) -> None:
"""Args:
keys: keys of the corresponding items to be transformed.
See also: :py:class:`monai.transforms.compose.MapTransform`
dtype: if not None convert the loaded data to this data type.
npz_keys: if loading npz file, only load the specified keys, if None, load all the items.
stack the loaded items together to construct a new first dimension.
meta_key_postfix: use `key_{postfix}` to store the metadata of the Numpy data,
default is `meta_dict`. The meta data is a dictionary object.
For example, load Numpy file for `mask`, store the metadata into `mask_meta_dict`.
overwriting: whether allow to overwrite existing meta data of same key.
default is False, which will raise exception if encountering existing key.
Raises:
ValueError: meta_key_postfix must be a string."""
(definition of LoadNumpyd.__call__:)
def __call__(self, data):
[end of new definitions in monai/transforms/io/dictionary.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
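The stacking behaviour described in the definitions above — a list of files, or several keys of one npz file, becomes a new first dimension whose metadata is taken from the first item — can be sketched without MONAI or NumPy. The helper names below are hypothetical illustrations, not part of the requested API:

```python
def shape(a):
    """Shape of a rectangular nested-list 'array'."""
    s = []
    while isinstance(a, list):
        s.append(len(a))
        a = a[0]
    return tuple(s)

def stack(arrays):
    """Stack equally-shaped arrays along a new first dimension, mirroring
    how LoadNumpy combines several files or several npz keys."""
    first = shape(arrays[0])
    if any(shape(a) != first for a in arrays):
        raise ValueError("all items must share one shape to be stacked")
    return list(arrays)  # new leading axis of length len(arrays)

test1 = [[1, 2], [3, 4]]
test2 = [[5, 6], [7, 8]]
mask = stack([test1, test2])
# shape(mask) == (2, 2, 2): two (2, 2) items gained a leading axis of size 2
```

This mirrors the expectation in the tests above, where two `(3, 4, 4)` npz items load as one `(2, 3, 4, 4)` array.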
<issues>
IO formats
Hello
How do I perform multi class semantic (instance) segmentation over my set of 2D PNG/TIF grayscale images and their corresponding masks file (NPY numpy array, TIF or any other) matrix representation with # channels corresponding to the classes?
Do I need to use NIFTI_interface? If so, how do I convert my PNG/TIF/NPY files to this format?
Thank you
----------
Hi @asmagen ,
Thanks for your interest here!
1. Basically, I think the `LoadPNG` or `LoadPNGd` transform can load `TIF` format image also.
So maybe you can use `LoadPNGd(keys=["image", "label"])` to have a try first?
2. How many classes in this task? If you do need to convert to Nifti format, you can use the `nibabel` lib to save a numpy array into Nifti file.
Thanks.
Thanks for your response @Nic-Ma
1. Is there a tutorial performing a similar segmentation task and using PNG/TIF images via LoadPNGd(keys=["image", "label"]) or similar that I can adapt? I'm new to image analysis so I was hoping to get a quick start somehow.
2. I have 7 classes including background (1st channel background, 2nd tumor, 3rd normal, 4 stroma etc). Some images don't have all of the tissue regions, e.g., some are all tumor, so the normal channel will be empty. I'm not sure if it's a problem. So do I need to convert the format to Nifti or is it not necessary? If I do need it, do I use `img.to_filename(os.path.join('build','test4d.nii.gz'))`?
Thanks
Hi @asmagen ,
1. You can refer to this issue https://github.com/Project-MONAI/MONAI/issues/415 for multi-labels segmentation task. It's easy to adapt from our spleen tutorial.
2. As you have 7 classes, I think maybe it's truly a good idea to convert to Nifti format before training and use "LoadNiftid" to load the images and labels for training. You can refer to:
https://stackoverflow.com/questions/55090463/how-to-convert-3d-numpy-array-to-nifti-image-in-nibabel
Thanks.
--------------------
</issues> | e73257caa79309dcce1e93abf1632f4bfd75b11f |
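The thread above describes masks with one channel per class, where classes absent from an image leave their channel empty. A minimal pure-Python sketch of that one-hot layout (the function name is hypothetical):

```python
def one_hot(labels, num_classes):
    """Convert an HxW integer label map into a CxHxW channel-per-class mask.
    Channels for absent classes (e.g. 'normal' in an all-tumor image) simply
    come out all-zero, which channel-wise losses handle fine."""
    return [[[1 if v == c else 0 for v in row] for row in labels]
            for c in range(num_classes)]

labels = [[0, 1],
          [1, 2]]  # tiny 2x2 label map using classes 0..2
mask = one_hot(labels, num_classes=7)
# mask[1] marks class-1 pixels; mask[3] .. mask[6] are all zeros here
```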
Project-MONAI__MONAI-744 | 744 | Project-MONAI/MONAI | null | bbc5e063268182b2d79b02ddb5156a38528cb62a | 2020-07-11T18:47:36Z | diff --git a/monai/losses/__init__.py b/monai/losses/__init__.py
index c9ab1099fd..ff9814898f 100644
--- a/monai/losses/__init__.py
+++ b/monai/losses/__init__.py
@@ -9,6 +9,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from .dice import Dice, DiceLoss, GeneralizedDiceLoss, MaskedDiceLoss, dice, generalized_dice
+from .dice import (
+ Dice,
+ DiceLoss,
+ GeneralizedDiceLoss,
+ dice,
+ generalized_dice,
+ MaskedDiceLoss,
+ GeneralizedWassersteinDiceLoss,
+)
+
from .focal_loss import FocalLoss
from .tversky import TverskyLoss
diff --git a/monai/losses/dice.py b/monai/losses/dice.py
index 86823f22ae..a2670a0a10 100644
--- a/monai/losses/dice.py
+++ b/monai/losses/dice.py
@@ -12,7 +12,9 @@
import warnings
from typing import Callable, Optional, Union
+import numpy as np
import torch
+import torch.nn.functional as F
from torch.nn.modules.loss import _Loss
from monai.networks import one_hot
@@ -297,5 +299,171 @@ def forward(self, input: torch.Tensor, target: torch.Tensor, smooth: float = 1e-
return f
+class GeneralizedWassersteinDiceLoss(_Loss):
+ """
+ Generalized Wasserstein Dice Loss [1] in PyTorch.
+ Compared to [1] we used a weighting method similar to the one
+ used in the generalized Dice Loss [2].
+
+ References:
+ ===========
+ [1] "Generalised Wasserstein Dice Score for Imbalanced Multi-class
+ Segmentation using Holistic Convolutional Networks",
+ Fidon L. et al. MICCAI BrainLes 2017.
+ [2] "Generalised dice overlap as a deep learning loss function
+ for highly unbalanced segmentations",
+ Sudre C., et al. MICCAI DLMIA 2017.
+
+ wasserstein_distance_map:
+ Compute the voxel-wise Wasserstein distance (eq. 6 in [1]) between the
+ flattened prediction and the flattened labels (ground_truth) with respect
+ to the distance matrix on the label space M.
+ References:
+ [1] "Generalised Wasserstein Dice Score for Imbalanced Multi-class
+ Segmentation using Holistic Convolutional Networks",
+ Fidon L. et al. MICCAI BrainLes 2017
+
+ compute_weights_generalized_true_positives:
+ Compute the weights \alpha_l of eq. 9 in [1] but using the weighting
+ method proposed in the generalized Dice Loss [2].
+ References:
+ [1] "Generalised Wasserstein Dice Score for Imbalanced Multi-class
+ Segmentation using Holistic Convolutional Networks",
+ Fidon L. et al. MICCAI BrainLes 2017
+ [2] "Generalised dice overlap as a deep learning loss function
+ for highly unbalanced segmentations." Sudre C., et al.
+ MICCAI DLMIA 2017.
+ """
+
+ def __init__(self, dist_matrix, reduction: Union[LossReduction, str] = LossReduction.MEAN):
+ """
+ Args:
+ param dist_matrix: 2d tensor or 2d numpy array; matrix of distances
+ between the classes. It must have dimension C x C where C is the
+ number of classes.
+ param reduction: str; reduction mode.
+
+ Raises:
+ ValueError: dist_matrix.shape[0] != dist_matrix.shape[1] is invalid.
+
+ """
+ super(GeneralizedWassersteinDiceLoss, self).__init__(reduction=LossReduction(reduction).value)
+
+ if dist_matrix.shape[0] != dist_matrix.shape[1]:
+ raise ValueError("Dist Matrix is invalid.")
+
+ self.m = dist_matrix
+ if isinstance(self.m, np.ndarray):
+ self.m = torch.from_numpy(self.m)
+ if torch.max(self.m) != 1:
+ self.m = self.m / torch.max(self.m)
+ self.num_classes = self.m.size(0)
+
+ def forward(self, input: torch.Tensor, target: torch.Tensor, smooth: float = 1e-5):
+ """
+ Args:
+ input: the shape should be BNH[WD].
+ target: the shape should be BNH[WD].
+ smooth: a small constant to avoid nan.
+
+ """
+ # Aggregate spatial dimensions
+ flat_input = input.view(input.size(0), input.size(1), -1)
+ flat_target = target.view(target.size(0), -1)
+
+ # Apply the softmax to the input scores map
+ probs = F.softmax(flat_input, dim=1)
+
+ # Compute the Wasserstein distance map
+ wass_dist_map = self.wasserstein_distance_map(probs, flat_target)
+
+ # Compute the generalised number of true positives
+ alpha = self.compute_weights_generalized_true_positives(flat_target)
+ true_pos = self.compute_generalized_true_positive(alpha, flat_target, wass_dist_map)
+ denom = self.compute_denominator(alpha, flat_target, wass_dist_map)
+
+ # Compute and return the final loss
+ wass_dice = (2.0 * true_pos + smooth) / (denom + smooth)
+ wass_dice_loss = 1.0 - wass_dice
+ return wass_dice_loss.mean()
+
+ def wasserstein_distance_map(self, flat_proba: torch.Tensor, flat_target: torch.Tensor):
+ """
+ Args:
+ flat_proba: the probabilities of input(predicted) tensor.
+ flat_target: the target tensor.
+ """
+ # Turn the distance matrix to a map of identical matrix
+ m = torch.clone(self.m).to(flat_proba.device)
+ m_extended = torch.unsqueeze(m, dim=0)
+ m_extended = torch.unsqueeze(m_extended, dim=3)
+ m_extended = m_extended.expand((flat_proba.size(0), m_extended.size(1), m_extended.size(2), flat_proba.size(2)))
+
+ # Expand the feature dimensions of the target
+ flat_target_extended = torch.unsqueeze(flat_target, dim=1)
+ flat_target_extended = flat_target_extended.expand(
+ (flat_target.size(0), m_extended.size(1), flat_target.size(1))
+ )
+ flat_target_extended = torch.unsqueeze(flat_target_extended, dim=1)
+
+ # Extract the vector of class distances for the ground-truth label at each voxel
+ m_extended = torch.gather(m_extended, dim=1, index=flat_target_extended)
+ m_extended = torch.squeeze(m_extended, dim=1)
+
+ # Compute the wasserstein distance map
+ wasserstein_map = m_extended * flat_proba
+
+ # Sum over the classes
+ wasserstein_map = torch.sum(wasserstein_map, dim=1)
+ return wasserstein_map
+
+ def compute_generalized_true_positive(
+ self, alpha: torch.Tensor, flat_target: torch.Tensor, wasserstein_distance_map
+ ):
+ """
+ Args:
+ alpha: generalised number of true positives of target class.
+ flat_target: the target tensor.
+ wasserstein_distance_map: the map obtained from the above function.
+ """
+ # Extend alpha to a map and select value at each voxel according to flat_target
+ alpha_extended = torch.unsqueeze(alpha, dim=2)
+ alpha_extended = alpha_extended.expand((flat_target.size(0), self.num_classes, flat_target.size(1)))
+ flat_target_extended = torch.unsqueeze(flat_target, dim=1)
+ alpha_extended = torch.gather(alpha_extended, index=flat_target_extended, dim=1)
+
+ # Compute the generalized true positive as in eq. 9
+ generalized_true_pos = torch.sum(alpha_extended * (1.0 - wasserstein_distance_map), dim=[1, 2],)
+ return generalized_true_pos
+
+ def compute_denominator(self, alpha: torch.Tensor, flat_target: torch.Tensor, wasserstein_distance_map):
+ """
+ Args:
+ alpha: generalised number of true positives of target class.
+ flat_target: the target tensor.
+ wasserstein_distance_map: the map obtained from the above function.
+ """
+ # Extend alpha to a map and select value at each voxel according to flat_target
+ alpha_extended = torch.unsqueeze(alpha, dim=2)
+ alpha_extended = alpha_extended.expand((flat_target.size(0), self.num_classes, flat_target.size(1)))
+ flat_target_extended = torch.unsqueeze(flat_target, dim=1)
+ alpha_extended = torch.gather(alpha_extended, index=flat_target_extended, dim=1)
+
+ # Compute the generalized true positive as in eq. 9
+ generalized_true_pos = torch.sum(alpha_extended * (2.0 - wasserstein_distance_map), dim=[1, 2],)
+ return generalized_true_pos
+
+ def compute_weights_generalized_true_positives(self, flat_target: torch.Tensor):
+ """
+ Args:
+ flat_target: the target tensor.
+ """
+ one_hot = F.one_hot(flat_target, num_classes=self.num_classes).permute(0, 2, 1).float()
+ volumes = torch.sum(one_hot, dim=2)
+ alpha = 1.0 / (volumes + 1.0)
+ return alpha
+
+
dice = Dice = DiceLoss
generalized_dice = GeneralizedDiceLoss
+generalized_wasserstein_dice = GeneralizedWassersteinDiceLoss
| diff --git a/tests/test_generalized_wasserstein_dice_loss.py b/tests/test_generalized_wasserstein_dice_loss.py
new file mode 100644
index 0000000000..77d62d5a86
--- /dev/null
+++ b/tests/test_generalized_wasserstein_dice_loss.py
@@ -0,0 +1,66 @@
+# Copyright 2020 MONAI Consortium
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+import torch
+import torch.nn.functional as F
+from monai.losses import GeneralizedWassersteinDiceLoss
+
+
+class TestGeneralizedWassersteinDiceLoss(unittest.TestCase):
+ def test_bin_seg_2d(self):
+ target = torch.tensor([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
+
+ # add another dimension corresponding to the batch (batch size = 1 here)
+ target = target.unsqueeze(0)
+ pred_very_good = 1000 * F.one_hot(target, num_classes=2).permute(0, 3, 1, 2).float()
+ pred_very_poor = 1000 * F.one_hot(1 - target, num_classes=2).permute(0, 3, 1, 2).float()
+
+ # initialize the loss
+ loss = GeneralizedWassersteinDiceLoss(dist_matrix=np.array([[0.0, 1.0], [1.0, 0.0]]))
+
+ # the loss for pred_very_good should be close to 0
+ loss_good = float(loss.forward(pred_very_good, target))
+ self.assertAlmostEqual(loss_good, 0.0, places=3)
+
+ # same test, but with target with a class dimension
+ target_4dim = target.unsqueeze(1)
+ loss_good = float(loss.forward(pred_very_good, target_4dim))
+ self.assertAlmostEqual(loss_good, 0.0, places=3)
+
+ # the loss for pred_very_poor should be close to 1
+ loss_poor = float(loss.forward(pred_very_poor, target))
+ self.assertAlmostEqual(loss_poor, 1.0, places=3)
+
+ def test_empty_class_2d(self):
+ num_classes = 2
+ target = torch.tensor([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
+
+ # add another dimension corresponding to the batch (batch size = 1 here)
+ target = target.unsqueeze(0)
+ pred_very_good = 1000 * F.one_hot(target, num_classes=num_classes).permute(0, 3, 1, 2).float()
+ pred_very_poor = 1000 * F.one_hot(1 - target, num_classes=num_classes).permute(0, 3, 1, 2).float()
+
+ # initialize the loss
+ loss = GeneralizedWassersteinDiceLoss(dist_matrix=np.array([[0.0, 1.0], [1.0, 0.0]]))
+
+ # loss for pred_very_good should be close to 0
+ loss_good = float(loss.forward(pred_very_good, target))
+ self.assertAlmostEqual(loss_good, 0.0, places=3)
+
+ # loss for pred_very_poor should be close to 1
+ loss_poor = float(loss.forward(pred_very_poor, target))
+ self.assertAlmostEqual(loss_poor, 1.0, places=3)
+
+
+if __name__ == "__main__":
+ unittest.main()
| [
{
"components": [
{
"doc": "Generalized Wasserstein Dice Loss [1] in PyTorch.\nCompared to [1] we used a weighting method similar to the one\nused in the generalized Dice Loss [2].\n\nReferences:\n===========\n[1] \"Generalised Wasserstein Dice Score for Imbalanced Multi-class\n Segmentation us... | [
"tests/test_generalized_wasserstein_dice_loss.py::TestGeneralizedWassersteinDiceLoss::test_bin_seg_2d",
"tests/test_generalized_wasserstein_dice_loss.py::TestGeneralizedWassersteinDiceLoss::test_empty_class_2d"
] | [] | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Worked on the implementation of Generalized Wasserstein Dice Loss
Worked on issue #601 . @wyli Please check.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in monai/losses/dice.py]
(definition of GeneralizedWassersteinDiceLoss:)
class GeneralizedWassersteinDiceLoss(_Loss):
"""Generalized Wasserstein Dice Loss [1] in PyTorch.
Compared to [1] we used a weighting method similar to the one
used in the generalized Dice Loss [2].
References:
===========
[1] "Generalised Wasserstein Dice Score for Imbalanced Multi-class
Segmentation using Holistic Convolutional Networks",
Fidon L. et al. MICCAI BrainLes 2017.
[2] "Generalised dice overlap as a deep learning loss function
for highly unbalanced segmentations",
Sudre C., et al. MICCAI DLMIA 2017.
wasserstein_distance_map:
Compute the voxel-wise Wasserstein distance (eq. 6 in [1]) between the
flattened prediction and the flattened labels (ground_truth) with respect
to the distance matrix on the label space M.
References:
[1] "Generalised Wasserstein Dice Score for Imbalanced Multi-class
Segmentation using Holistic Convolutional Networks",
Fidon L. et al. MICCAI BrainLes 2017
compute_weights_generalized_true_positives:
Compute the weights \alpha_l of eq. 9 in [1] but using the weighting
method proposed in the generalized Dice Loss [2].
References:
[1] "Generalised Wasserstein Dice Score for Imbalanced Multi-class
Segmentation using Holistic Convolutional Networks",
Fidon L. et al. MICCAI BrainLes 2017
[2] "Generalised dice overlap as a deep learning loss function
for highly unbalanced segmentations." Sudre C., et al.
MICCAI DLMIA 2017."""
(definition of GeneralizedWassersteinDiceLoss.__init__:)
def __init__(self, dist_matrix, reduction: Union[LossReduction, str] = LossReduction.MEAN):
"""Args:
param dist_matrix: 2d tensor or 2d numpy array; matrix of distances
between the classes. It must have dimension C x C where C is the
number of classes.
param reduction: str; reduction mode.
Raises:
ValueError: dist_matrix.shape[0] != dist_matrix.shape[1] is invalid."""
(definition of GeneralizedWassersteinDiceLoss.forward:)
def forward(self, input: torch.Tensor, target: torch.Tensor, smooth: float = 1e-5):
"""Args:
input: the shape should be BNH[WD].
target: the shape should be BNH[WD].
smooth: a small constant to avoid nan."""
(definition of GeneralizedWassersteinDiceLoss.wasserstein_distance_map:)
def wasserstein_distance_map(self, flat_proba: torch.Tensor, flat_target: torch.Tensor):
"""Args:
flat_proba: the probabilities of input(predicted) tensor.
flat_target: the target tensor."""
(definition of GeneralizedWassersteinDiceLoss.compute_generalized_true_positive:)
def compute_generalized_true_positive( self, alpha: torch.Tensor, flat_target: torch.Tensor, wasserstein_distance_map ):
"""Args:
alpha: generalised number of true positives of target class.
flat_target: the target tensor.
wasserstein_distance_map: the map obtained from the above function."""
(definition of GeneralizedWassersteinDiceLoss.compute_denominator:)
def compute_denominator(self, alpha: torch.Tensor, flat_target: torch.Tensor, wasserstein_distance_map):
"""Args:
alpha: generalised number of true positives of target class.
flat_target: the target tensor.
wasserstein_distance_map: the map obtained from the above function."""
(definition of GeneralizedWassersteinDiceLoss.compute_weights_generalized_true_positives:)
def compute_weights_generalized_true_positives(self, flat_target: torch.Tensor):
"""Args:
flat_target: the target tensor."""
[end of new definitions in monai/losses/dice.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | e73257caa79309dcce1e93abf1632f4bfd75b11f | ||
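The equations referenced in the definitions above can be checked on a toy binary case. The following is a plain-Python sketch of the same steps (softmax, the eq. 6 Wasserstein distance map, the generalized-Dice weighting of eq. 9), not the requested PyTorch implementation:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def gwd_loss(logits, target, M, smooth=1e-5):
    """Toy Generalized Wasserstein Dice loss over flat (1-D) data.

    logits : per-voxel lists of class scores
    target : per-voxel integer labels
    M      : CxC class-distance matrix, maximum entry normalised to 1
    """
    C = len(M)
    probs = [softmax(s) for s in logits]
    # eq. 6: per-voxel Wasserstein distance to the ground-truth label
    wass = [sum(M[t][l] * p[l] for l in range(C))
            for p, t in zip(probs, target)]
    # generalized-Dice-style weights: alpha_l = 1 / (class volume + 1)
    alpha = [1.0 / (target.count(l) + 1.0) for l in range(C)]
    true_pos = sum(alpha[t] * (1.0 - w) for t, w in zip(target, wass))
    denom = sum(alpha[t] * (2.0 - w) for t, w in zip(target, wass))
    return 1.0 - (2.0 * true_pos + smooth) / (denom + smooth)

M = [[0.0, 1.0], [1.0, 0.0]]
target = [0, 1, 1, 0]
good = [[1000.0, 0.0] if t == 0 else [0.0, 1000.0] for t in target]
bad = [[0.0, 1000.0] if t == 0 else [1000.0, 0.0] for t in target]
loss_good = gwd_loss(good, target, M)  # close to 0.0, as in test_bin_seg_2d
loss_bad = gwd_loss(bad, target, M)    # close to 1.0
```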
matplotlib__matplotlib-17888 | 17,888 | matplotlib/matplotlib | 3.2 | ec0132f66170dabc6352ba79d839b1dd9eb02157 | 2020-07-11T17:31:13Z | diff --git a/doc/users/next_whats_new/colormap_repr.rst b/doc/users/next_whats_new/colormap_repr.rst
new file mode 100644
index 000000000000..077f6fca12e9
--- /dev/null
+++ b/doc/users/next_whats_new/colormap_repr.rst
@@ -0,0 +1,6 @@
+IPython representations for Colormap objects
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The `matplotlib.colors.Colormap` object now has image representations for
+IPython / Jupyter backends. Cells returning a color map on the last line will
+display an image of the color map.
diff --git a/lib/matplotlib/colors.py b/lib/matplotlib/colors.py
index 2471c8a9698c..5cc0be9bc363 100644
--- a/lib/matplotlib/colors.py
+++ b/lib/matplotlib/colors.py
@@ -65,12 +65,17 @@
.. _xkcd color survey: https://xkcd.com/color/rgb/
"""
+import base64
from collections.abc import Sized
import functools
+import io
import itertools
from numbers import Number
import re
+from PIL import Image
+from PIL.PngImagePlugin import PngInfo
+import matplotlib as mpl
import numpy as np
import matplotlib.cbook as cbook
from matplotlib import docstring
@@ -691,6 +696,33 @@ def reversed(self, name=None):
"""
raise NotImplementedError()
+ def _repr_png_(self):
+ """Generate a PNG representation of the Colormap."""
+ IMAGE_SIZE = (400, 50)
+ X = np.tile(np.linspace(0, 1, IMAGE_SIZE[0]), (IMAGE_SIZE[1], 1))
+ pixels = self(X, bytes=True)
+ png_bytes = io.BytesIO()
+ title = self.name + ' color map'
+ author = f'Matplotlib v{mpl.__version__}, https://matplotlib.org'
+ pnginfo = PngInfo()
+ pnginfo.add_text('Title', title)
+ pnginfo.add_text('Description', title)
+ pnginfo.add_text('Author', author)
+ pnginfo.add_text('Software', author)
+ Image.fromarray(pixels).save(png_bytes, format='png', pnginfo=pnginfo)
+ return png_bytes.getvalue()
+
+ def _repr_html_(self):
+ """Generate an HTML representation of the Colormap."""
+ png_bytes = self._repr_png_()
+ png_base64 = base64.b64encode(png_bytes).decode('ascii')
+ return ('<strong>' + self.name + '</strong>' +
+ '<img ' +
+ 'alt="' + self.name + ' color map" ' +
+ 'title="' + self.name + '"' +
+ 'style="border: 1px solid #555;" ' +
+ 'src="data:image/png;base64,' + png_base64 + '">')
+
class LinearSegmentedColormap(Colormap):
"""
| diff --git a/lib/matplotlib/tests/test_colors.py b/lib/matplotlib/tests/test_colors.py
index ba1192d16ac3..00ed7fdfb907 100644
--- a/lib/matplotlib/tests/test_colors.py
+++ b/lib/matplotlib/tests/test_colors.py
@@ -1,7 +1,9 @@
import copy
import itertools
+from io import BytesIO
import numpy as np
+from PIL import Image
import pytest
from numpy.testing import assert_array_equal, assert_array_almost_equal
@@ -1135,3 +1137,23 @@ def test_hex_shorthand_notation():
def test_DivergingNorm_deprecated():
with pytest.warns(cbook.MatplotlibDeprecationWarning):
norm = mcolors.DivergingNorm(vcenter=0)
+
+
+def test_repr_png():
+ cmap = plt.get_cmap('viridis')
+ png = cmap._repr_png_()
+ assert len(png) > 0
+ img = Image.open(BytesIO(png))
+ assert img.width > 0
+ assert img.height > 0
+ assert 'Title' in img.text
+ assert 'Description' in img.text
+ assert 'Author' in img.text
+ assert 'Software' in img.text
+
+
+def test_repr_html():
+ cmap = plt.get_cmap('viridis')
+ html = cmap._repr_html_()
+ assert len(html) > 0
+ assert cmap.name in html
| diff --git a/doc/users/next_whats_new/colormap_repr.rst b/doc/users/next_whats_new/colormap_repr.rst
new file mode 100644
index 000000000000..077f6fca12e9
--- /dev/null
+++ b/doc/users/next_whats_new/colormap_repr.rst
@@ -0,0 +1,6 @@
+IPython representations for Colormap objects
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The `matplotlib.colors.Colormap` object now has image representations for
+IPython / Jupyter backends. Cells returning a color map on the last line will
+display an image of the color map.
| [
{
"components": [
{
"doc": "Generate a PNG representation of the Colormap.",
"lines": [
699,
713
],
"name": "Colormap._repr_png_",
"signature": "def _repr_png_(self):",
"type": "function"
},
{
"doc": "Generate an HTML ... | [
"lib/matplotlib/tests/test_colors.py::test_repr_png",
"lib/matplotlib/tests/test_colors.py::test_repr_html"
] | [
"lib/matplotlib/tests/test_colors.py::test_create_lookup_table[5-result0]",
"lib/matplotlib/tests/test_colors.py::test_create_lookup_table[2-result1]",
"lib/matplotlib/tests/test_colors.py::test_create_lookup_table[1-result2]",
"lib/matplotlib/tests/test_colors.py::test_resample",
"lib/matplotlib/tests/test... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Add _repr_png_ and _repr_html_ to Colormap objects.
## PR Summary
This PR adds `_repr_png_` and `_repr_html_` methods to the `matplotlib.colors.Colormap` class, enabling visual representations of `Colormap` objects in IPython / Jupyter settings.
## Demo

Resolves #15616.
## PR Checklist
- [x] Has Pytest style unit tests
- [x] Code is [Flake 8](http://flake8.pycqa.org/en/latest/) compliant
- [ ] New features are documented, with examples if plot related
- [ ] Documentation is sphinx and numpydoc compliant
- [x] Added an entry to doc/users/next_whats_new/ if major new feature (follow instructions in README.rst there)
- [ ] Documented in doc/api/next_api_changes/* if API changed in a backward-incompatible way
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in lib/matplotlib/colors.py]
(definition of Colormap._repr_png_:)
def _repr_png_(self):
"""Generate a PNG representation of the Colormap."""
(definition of Colormap._repr_html_:)
def _repr_html_(self):
"""Generate an HTML representation of the Colormap."""
[end of new definitions in lib/matplotlib/colors.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
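`_repr_png_` above ultimately just returns PNG bytes for a horizontal swatch. A standard-library-only sketch of that idea, using a grayscale ramp in place of a real colormap (helper names are hypothetical):

```python
import struct
import zlib

def _chunk(tag, data):
    """One PNG chunk: length, tag, data, CRC over tag+data."""
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def gradient_png(width=400, height=50):
    """Encode a left-to-right 8-bit grayscale ramp as PNG bytes."""
    ramp = bytes(int(255 * x / (width - 1)) for x in range(width))
    rows = b"".join(b"\x00" + ramp for _ in range(height))  # filter 0 per row
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)  # grayscale
    return (b"\x89PNG\r\n\x1a\n"
            + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(rows))
            + _chunk(b"IEND", b""))

png = gradient_png()
# IPython displays whatever bytes _repr_png_ returns as an inline image
```

The real `Colormap._repr_png_` additionally maps the ramp through `self(X, bytes=True)` to RGBA pixels and embeds title/author text via PIL's `PngInfo`, as the patch shows.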
<issues>
Colormaps should have a `_repr_html_` that is an image of the colormap
`colorcet` has a swatch function in the plotting module that creates an image of the colormap.
<img width="1048" alt="Screen Shot 2019-11-06 at 3 00 22 PM" src="https://user-images.githubusercontent.com/4806877/68333441-85f1aa00-00a6-11ea-99b0-d9ef9726c916.png">
I think that the colormaps should have a `_repr_html_` that does something similar.
----------
Would be cool. How does it get used?
In jupyter notebook you could type `cm.jet` and it'd print out a swatch
Whoops I guess I should have mentioned that part :)
That's a fancy feature and if somebody wants to implement that, go for it.
However, I suspect that it's more fancy than actually useful. The only purpose I see for drawing a standalone colormap is getting an impression of the colors. I would primarily do that for selecting a suitable colormap. But in such a scenario I would not necessarily know the name, and I would like to compare different maps. Both can be addressed better by the [colormap reference](https://matplotlib.org/devdocs/gallery/color/colormap_reference.html).
But maybe I'm overlooking a real-world use-case.
Well I definitely tend to think about such things from the perspective of a higher level package that would consume the `_repr_html_` (such as [panel](https://panel.pyviz.org)). If you think about it from that perspective you can imaging someone easily displaying a bunch or colormaps by having them render themselves using their `_repr_html_`. Definitely fancy though :)
It's always nice for users to be able to do things *without* referring to an external resource, and in this case *not* have to refer to our colormap reference example. Otherwise, the jupyterlab/notebook environment wouldn't have that handy "shift-tab bring up a tooltip" functionality.
I also think it's nice for users who are offline and don't have quick access to the colormap reference.
I would like to work on this for the SciPy 2020 sprint.
you likely want a `_repr_png_`, so that non-html frontend can display it.
--------------------
</issues> | ff821ba3249401fe7f5fdb11cb7ac5d0564c2697 |
sympy__sympy-19734 | 19,734 | sympy/sympy | 1.7 | 47670c187a2088fef92c540b6525594af7fed435 | 2020-07-10T05:51:25Z | diff --git a/sympy/stats/__init__.py b/sympy/stats/__init__.py
index c4b87459aeb4..cab740072483 100644
--- a/sympy/stats/__init__.py
+++ b/sympy/stats/__init__.py
@@ -142,7 +142,10 @@
'joint_eigen_distribution', 'JointEigenDistribution',
'level_spacing_distribution',
- 'Probability', 'Expectation', 'Variance', 'Covariance', 'Moment', 'CentralMoment',
+ 'MatrixGamma',
+
+ 'Probability', 'Expectation', 'Variance', 'Covariance', 'Moment',
+ 'CentralMoment',
'ExpectationMatrix', 'VarianceMatrix', 'CrossCovarianceMatrix'
@@ -188,6 +191,8 @@
GaussianSymplecticEnsemble, joint_eigen_distribution,
JointEigenDistribution, level_spacing_distribution)
+from .matrix_distributions import MatrixGamma
+
from .symbolic_probability import (Probability, Expectation, Variance,
Covariance, Moment, CentralMoment)
diff --git a/sympy/stats/matrix_distributions.py b/sympy/stats/matrix_distributions.py
new file mode 100644
index 000000000000..8e94b55f78e8
--- /dev/null
+++ b/sympy/stats/matrix_distributions.py
@@ -0,0 +1,153 @@
+from sympy import S, Basic, exp, multigamma
+from sympy.core.sympify import sympify, _sympify
+from sympy.matrices import (ImmutableMatrix, Inverse, Trace, Determinant,
+ MatrixSymbol, MatrixBase)
+from sympy.stats.rv import (_value_check, RandomMatrixSymbol, NamedArgsMixin, PSpace,
+ _symbol_converter)
+
+
+class MatrixPSpace(PSpace):
+ """
+ Represents probability space for
+ Matrix Distributions
+ """
+ def __new__(cls, sym, distribution, dim_n, dim_m):
+ sym = _symbol_converter(sym)
+ dim_n, dim_m = _sympify(dim_n), _sympify(dim_m)
+ if not (dim_n.is_integer and dim_m.is_integer):
+ raise ValueError("Dimensions should be integers")
+ return Basic.__new__(cls, sym, distribution, dim_n, dim_m)
+
+ distribution = property(lambda self: self.args[1])
+ symbol = property(lambda self: self.args[0])
+
+ @property
+ def value(self):
+ return RandomMatrixSymbol(self.symbol, self.args[2], self.args[3], self)
+
+ def compute_density(self, expr, *args):
+ rms = expr.atoms(RandomMatrixSymbol)
+ if len(rms) > 1 or (not isinstance(expr, RandomMatrixSymbol)):
+ raise NotImplementedError("Currently, no algorithm has been "
+ "implemented to handle general expressions containing "
+ "multiple matrix distributions.")
+ return self.distribution.pdf(expr)
+
+
+def rv(symbol, cls, args):
+ args = list(map(sympify, args))
+ dist = cls(*args)
+ dist.check(*args)
+ dim = dist.dimension
+ pspace = MatrixPSpace(symbol, dist, dim[0], dim[1])
+ return pspace.value
+
+
+class MatrixDistribution(Basic, NamedArgsMixin):
+ """
+ Abstract class for Matrix Distribution
+ """
+ def __new__(cls, *args):
+ args = list(map(sympify, args))
+ return Basic.__new__(cls, *args)
+
+ @staticmethod
+ def check(*args):
+ pass
+
+ def __call__(self, expr):
+ if isinstance(expr, list):
+ expr = ImmutableMatrix(expr)
+ return self.pdf(expr)
+
+
+########################################
+#--------Matrix Distributions----------#
+########################################
+
+#-------------------------------------------------------------------------------
+# Matrix Gamma distribution ----------------------------------------------------
+
+class MatrixGammaDistribution(MatrixDistribution):
+
+ _argnames = ('alpha', 'beta', 'scale_matrix')
+
+ @staticmethod
+ def check(alpha, beta, scale_matrix):
+ if not isinstance(scale_matrix , MatrixSymbol):
+ _value_check(scale_matrix.is_positive_definite, "The shape "
+ "matrix must be positive definite.")
+ _value_check(scale_matrix.shape[0] == scale_matrix.shape[1], "Should "
+ "be square matrix")
+ _value_check(alpha.is_positive, "Shape parameter should be positive.")
+ _value_check(beta.is_positive, "Scale parameter should be positive.")
+
+ @property
+ def set(self):
+ k = self.scale_matrix.shape[0]
+ return S.Reals ** k
+
+ @property
+ def dimension(self):
+ return self.scale_matrix.shape
+
+ def pdf(self, x):
+ alpha , beta , scale_matrix = self.alpha, self.beta, self.scale_matrix
+ p = scale_matrix.shape[0]
+ if isinstance(x, list):
+ x = ImmutableMatrix(x)
+ if not isinstance(x, (MatrixBase, MatrixSymbol)):
+ raise ValueError("%s should be an isinstance of Matrix "
+ "or MatrixSymbol" % str(x))
+ sigma_inv_x = - Inverse(scale_matrix)*x / beta
+ term1 = exp(Trace(sigma_inv_x))/((beta**(p*alpha)) * multigamma(alpha, p))
+ term2 = (Determinant(scale_matrix))**(-alpha)
+ term3 = (Determinant(x))**(alpha - S(p + 1)/2)
+ return term1 * term2 * term3
+
+def MatrixGamma(symbol, alpha, beta, scale_matrix):
+ """
+ Creates a random variable with Matrix Gamma Distribution.
+
+ The density of the said distribution can be found at [1].
+
+ Parameters
+ ==========
+
+ alpha: Positive Real number
+ Shape Parameter
+ beta: Positive Real number
+ Scale Parameter
+ scale_matrix: Positive definite real square matrix
+ Scale Matrix
+
+ Returns
+ =======
+
+ RandomSymbol
+
+ Examples
+ ========
+
+ >>> from sympy.stats import density, MatrixGamma
+ >>> from sympy import MatrixSymbol, symbols
+ >>> a, b = symbols('a b', positive=True)
+ >>> M = MatrixGamma('M', a, b, [[2, 1], [1, 2]])
+ >>> X = MatrixSymbol('X', 2, 2)
+ >>> density(M)(X).doit()
+ 3**(-a)*b**(-2*a)*exp(Trace(Matrix([
+ [-2/3, 1/3],
+ [ 1/3, -2/3]])*X)/b)*Determinant(X)**(a - 3/2)/(sqrt(pi)*gamma(a)*gamma(a - 1/2))
+ >>> density(M)([[1, 0], [0, 1]]).doit()
+ 3**(-a)*b**(-2*a)*exp(-4/(3*b))/(sqrt(pi)*gamma(a)*gamma(a - 1/2))
+
+
+ References
+ ==========
+
+ .. [1] https://en.wikipedia.org/wiki/Matrix_gamma_distribution
+
+ """
+ if isinstance(scale_matrix, list):
+ scale_matrix = ImmutableMatrix(scale_matrix)
+ return rv(symbol, MatrixGammaDistribution, (alpha, beta, scale_matrix))
diff --git a/sympy/stats/random_matrix_models.py b/sympy/stats/random_matrix_models.py
index a04e2f3bc104..ca86c8534fc5 100644
--- a/sympy/stats/random_matrix_models.py
+++ b/sympy/stats/random_matrix_models.py
@@ -49,6 +49,9 @@ def __new__(cls, sym, dim=None):
def density(self, expr):
return Density(expr)
+ def __call__(self, expr):
+ return self.density(expr)
+
class GaussianEnsemble(RandomMatrixEnsemble):
"""
Abstract class for Gaussian ensembles.
@@ -106,9 +109,11 @@ class GaussianUnitaryEnsemble(GaussianEnsemble):
========
>>> from sympy.stats import GaussianUnitaryEnsemble as GUE, density
+ >>> from sympy import MatrixSymbol
>>> G = GUE('U', 2)
- >>> density(G)
- Lambda(H, exp(-Trace(H**2))/(2*pi**2))
+ >>> X = MatrixSymbol('X', 2, 2)
+ >>> density(G)(X)
+ exp(-Trace(X**2))/(2*pi**2)
"""
@property
def normalization_constant(self):
@@ -119,7 +124,7 @@ def density(self, expr):
n, ZGUE = self.dimension, self.normalization_constant
h_pspace = RandomMatrixPSpace('P', model=self)
H = RandomMatrixSymbol('H', n, n, pspace=h_pspace)
- return Lambda(H, exp(-S(n)/2 * Trace(H**2))/ZGUE)
+ return Lambda(H, exp(-S(n)/2 * Trace(H**2))/ZGUE)(expr)
def joint_eigen_distribution(self):
return self._compute_joint_eigen_distribution(S(2))
@@ -137,9 +142,11 @@ class GaussianOrthogonalEnsemble(GaussianEnsemble):
========
>>> from sympy.stats import GaussianOrthogonalEnsemble as GOE, density
+ >>> from sympy import MatrixSymbol
>>> G = GOE('U', 2)
- >>> density(G)
- Lambda(H, exp(-Trace(H**2)/2)/Integral(exp(-Trace(_H**2)/2), _H))
+ >>> X = MatrixSymbol('X', 2, 2)
+ >>> density(G)(X)
+ exp(-Trace(X**2)/2)/Integral(exp(-Trace(_H**2)/2), _H)
"""
@property
def normalization_constant(self):
@@ -151,7 +158,7 @@ def density(self, expr):
n, ZGOE = self.dimension, self.normalization_constant
h_pspace = RandomMatrixPSpace('P', model=self)
H = RandomMatrixSymbol('H', n, n, pspace=h_pspace)
- return Lambda(H, exp(-S(n)/4 * Trace(H**2))/ZGOE)
+ return Lambda(H, exp(-S(n)/4 * Trace(H**2))/ZGOE)(expr)
def joint_eigen_distribution(self):
return self._compute_joint_eigen_distribution(S.One)
@@ -169,9 +176,11 @@ class GaussianSymplecticEnsemble(GaussianEnsemble):
========
>>> from sympy.stats import GaussianSymplecticEnsemble as GSE, density
+ >>> from sympy import MatrixSymbol
>>> G = GSE('U', 2)
- >>> density(G)
- Lambda(H, exp(-2*Trace(H**2))/Integral(exp(-2*Trace(_H**2)), _H))
+ >>> X = MatrixSymbol('X', 2, 2)
+ >>> density(G)(X)
+ exp(-2*Trace(X**2))/Integral(exp(-2*Trace(_H**2)), _H)
"""
@property
def normalization_constant(self):
@@ -183,7 +192,7 @@ def density(self, expr):
n, ZGSE = self.dimension, self.normalization_constant
h_pspace = RandomMatrixPSpace('P', model=self)
H = RandomMatrixSymbol('H', n, n, pspace=h_pspace)
- return Lambda(H, exp(-S(n) * Trace(H**2))/ZGSE)
+ return Lambda(H, exp(-S(n) * Trace(H**2))/ZGSE)(expr)
def joint_eigen_distribution(self):
return self._compute_joint_eigen_distribution(S(4))
diff --git a/sympy/stats/rv.py b/sympy/stats/rv.py
index 8d50e0dde893..2d2c6f67ada1 100644
--- a/sympy/stats/rv.py
+++ b/sympy/stats/rv.py
@@ -810,25 +810,25 @@ def condition(self):
return None
def doit(self, evaluate=True, **kwargs):
+ from sympy.stats.random_matrix import RandomMatrixPSpace
from sympy.stats.joint_rv import JointPSpace
+ from sympy.stats.matrix_distributions import MatrixPSpace
from sympy.stats.frv import SingleFiniteDistribution
expr, condition = self.expr, self.condition
- if _sympify(expr).has(RandomMatrixSymbol):
- return pspace(expr).compute_density(expr)
+
if isinstance(expr, SingleFiniteDistribution):
return expr.dict
if condition is not None:
# Recompute on new conditional expr
expr = given(expr, condition, **kwargs)
- if isinstance(expr, RandomSymbol) and \
- isinstance(expr.pspace, JointPSpace):
- return expr.pspace.distribution
if not random_symbols(expr):
return Lambda(x, DiracDelta(x - expr))
- if (isinstance(expr, RandomSymbol) and
- hasattr(expr.pspace, 'distribution') and
- isinstance(pspace(expr), (SinglePSpace))):
- return expr.pspace.distribution
+ if isinstance(expr, RandomSymbol):
+ if isinstance(expr.pspace, (SinglePSpace, JointPSpace, MatrixPSpace)) and \
+ hasattr(expr.pspace, 'distribution'):
+ return expr.pspace.distribution
+ elif isinstance(expr.pspace, RandomMatrixPSpace):
+ return expr.pspace.model
result = pspace(expr).compute_density(expr, **kwargs)
if evaluate and hasattr(result, 'doit'):
| diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index 498676cbed70..e1f891bbb8b0 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -1772,6 +1772,23 @@ def test_sympy__stats__symbolic_multivariate_probability__CrossCovarianceMatrix(
assert _test_args(CrossCovarianceMatrix(RandomMatrixSymbol('R', 3, 1),
RandomMatrixSymbol('X', 3, 1)))
+def test_sympy__stats__matrix_distributions__MatrixPSpace():
+ from sympy.stats.matrix_distributions import MatrixDistribution, MatrixPSpace
+ from sympy import Matrix
+ M = MatrixDistribution(1, Matrix([[1, 0], [0, 1]]))
+ assert _test_args(MatrixPSpace('M', M, 2, 2))
+
+def test_sympy__stats__matrix_distributions__MatrixDistribution():
+ from sympy.stats.matrix_distributions import MatrixDistribution
+ from sympy import Matrix
+ assert _test_args(MatrixDistribution(1, Matrix([[1, 0], [0, 1]])))
+
+def test_sympy__stats__matrix_distributions__MatrixGammaDistribution():
+ from sympy.stats.matrix_distributions import MatrixGammaDistribution
+ from sympy import Matrix
+ assert _test_args(MatrixGammaDistribution(3, 4, Matrix([[1, 0], [0, 1]])))
+
+
def test_sympy__core__symbol__Str():
from sympy.core.symbol import Str
assert _test_args(Str('t'))
diff --git a/sympy/stats/tests/test_matrix_distributions.py b/sympy/stats/tests/test_matrix_distributions.py
new file mode 100644
index 000000000000..446e8c06c710
--- /dev/null
+++ b/sympy/stats/tests/test_matrix_distributions.py
@@ -0,0 +1,46 @@
+from sympy import (exp, S, ProductSet, sqrt, pi, symbols,
+ Product, gamma, Dummy)
+from sympy.matrices import Determinant, Matrix, Trace, MatrixSymbol
+from sympy.stats import density
+from sympy.stats.matrix_distributions import (MatrixGammaDistribution,
+ MatrixGamma, MatrixPSpace)
+from sympy.testing.pytest import raises
+
+
+def test_MatrixPSpace():
+ M = MatrixGammaDistribution(1, 2, [[2, 1], [1, 2]])
+ MP = MatrixPSpace('M', M, 2, 2)
+ assert MP.distribution == M
+ raises(ValueError, lambda: MatrixPSpace('M', M, 1.2, 2))
+
+def test_MatrixGamma():
+ M = MatrixGamma('M', 1, 2, [[1, 0], [0, 1]])
+ assert M.pspace.distribution.set == ProductSet(S.Reals, S.Reals)
+ assert isinstance(density(M), MatrixGammaDistribution)
+ X = MatrixSymbol('X', 2, 2)
+ num = exp(Trace(Matrix([[-S(1)/2, 0], [0, -S(1)/2]])*X))
+ assert density(M)(X).doit() == num/(4*pi*sqrt(Determinant(X)))
+ assert density(M)([[2, 1], [1, 2]]).doit() == sqrt(3)*exp(-2)/(12*pi)
+ X = MatrixSymbol('X', 1, 2)
+ Y = MatrixSymbol('Y', 1, 2)
+ assert density(M)([X, Y]).doit() == exp(-X[0, 0]/2 - Y[0, 1]/2)/(4*pi*sqrt(
+ X[0, 0]*Y[0, 1] - X[0, 1]*Y[0, 0]))
+ # symbolic
+ a, b = symbols('a b', positive=True)
+ d = symbols('d', positive=True, integer=True)
+ Y = MatrixSymbol('Y', d, d)
+ Z = MatrixSymbol('Z', 2, 2)
+ SM = MatrixSymbol('SM', d, d)
+ M2 = MatrixGamma('M2', a, b, SM)
+ M3 = MatrixGamma('M3', 2, 3, [[2, 1], [1, 2]])
+ k = Dummy('k')
+ exprd = pi**(-d*(d - 1)/4)*b**(-a*d)*exp(Trace((-1/b)*SM**(-1)*Y)
+ )*Determinant(SM)**(-a)*Determinant(Y)**(a - d/2 - S(1)/2)/Product(
+ gamma(-k/2 + a + S(1)/2), (k, 1, d))
+ assert density(M2)(Y).dummy_eq(exprd)
+ raises(NotImplementedError, lambda: density(M3 + M)(Z))
+ raises(ValueError, lambda: density(M)(1))
+ raises(ValueError, lambda: MatrixGamma('M', -1, 2, [[1, 0], [0, 1]]))
+ raises(ValueError, lambda: MatrixGamma('M', -1, -2, [[1, 0], [0, 1]]))
+ raises(ValueError, lambda: MatrixGamma('M', -1, 2, [[1, 0], [2, 1]]))
+ raises(ValueError, lambda: MatrixGamma('M', -1, 2, [[1, 0], [0]]))
diff --git a/sympy/stats/tests/test_random_matrix.py b/sympy/stats/tests/test_random_matrix.py
index 1ce9aa888ac9..5f25bad3d957 100644
--- a/sympy/stats/tests/test_random_matrix.py
+++ b/sympy/stats/tests/test_random_matrix.py
@@ -11,13 +11,13 @@
level_spacing_distribution,
Normal, Beta)
from sympy.stats.joint_rv_types import JointDistributionHandmade
-from sympy.stats.rv import RandomMatrixSymbol, Density
+from sympy.stats.rv import RandomMatrixSymbol
from sympy.stats.random_matrix_models import GaussianEnsemble
from sympy.testing.pytest import raises
def test_GaussianEnsemble():
G = GaussianEnsemble('G', 3)
- assert density(G) == Density(G)
+ assert density(G) == G.pspace.model
raises(ValueError, lambda: GaussianEnsemble('G', 3.5))
def test_GaussianUnitaryEnsemble():
| [
{
"components": [
{
"doc": "Represents probability space for\nMatrix Distributions",
"lines": [
9,
34
],
"name": "MatrixPSpace",
"signature": "class MatrixPSpace(PSpace):",
"type": "class"
},
{
"doc": "",
"line... | [
"test_sympy__stats__matrix_distributions__MatrixPSpace",
"test_sympy__stats__matrix_distributions__MatrixDistribution",
"test_sympy__stats__matrix_distributions__MatrixGammaDistribution",
"test_MatrixPSpace",
"test_GaussianEnsemble"
] | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_sympy__assumptions__assume__Predicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers__AnyArgs",
"test_sympy__assu... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
[GSoC][WIP] Add Matrix Gamma distribution
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
Add Matrix Gamma Distribution
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
Discussions in #19723
## Major Changes
- [x] Corrected density API for random matrices and now works the same as other Random Variables
Example:
Previously:
```py
>>> from sympy.stats import GaussianUnitaryEnsemble as GUE, density
>>> G = GUE('U', 2)
>>> density(G)
Lambda(H, exp(-Trace(H**2))/(2*pi**2))
```
Now:
```py
>>> from sympy.stats import GaussianUnitaryEnsemble as GUE, density
>>> from sympy import MatrixSymbol
>>> X = MatrixSymbol('X', 2, 2)
>>> G = GUE('U', 2)
>>> density(G)
GaussianUnitaryEnsemble(U, 2)
>>> density(G)(X)
exp(-Trace(X**2))/(2*pi**2)
```
- [x] Added classes of `MatrixPSpace`, `MatrixDistribution` (General Framework of Matrix Distributions)
- [x] Add MatrixGammaDistribution
- [x] Add tests and docs
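The `density(G)(X)` form shown in the example above works because the patch adds a `__call__` on `RandomMatrixEnsemble` that delegates to `density`. Stripped to its essence (a sketch with illustrative names, not sympy's actual classes):

```python
class Ensemble:
    """Minimal sketch: density(G) returns the ensemble object itself,
    and calling that object delegates to the density computation."""

    def density(self, expr):
        # stand-in for the real symbolic pdf; returns a tagged string here
        return f"pdf({expr})"

    def __call__(self, expr):
        # makes density(G)(X) equivalent to G.density(X)
        return self.density(expr)


def density(rv):
    # sketch of the dispatch for ensembles: return the callable
    # object itself instead of an unevaluated Lambda
    return rv
```

This is what unifies the random-matrix API with the other random variables: `density(G)` now yields an object you evaluate at a point, rather than a `Lambda`.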
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* stats
* Added `MatrixGamma` Distribution
<!-- END RELEASE NOTES -->
ping @Upabjojr @czgdp1807 @jmig5776
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/stats/matrix_distributions.py]
(definition of MatrixPSpace:)
class MatrixPSpace(PSpace):
"""Represents probability space for
Matrix Distributions"""
(definition of MatrixPSpace.__new__:)
def __new__(cls, sym, distribution, dim_n, dim_m):
(definition of MatrixPSpace.value:)
def value(self):
(definition of MatrixPSpace.compute_density:)
def compute_density(self, expr, *args):
(definition of rv:)
def rv(symbol, cls, args):
(definition of MatrixDistribution:)
class MatrixDistribution(Basic, NamedArgsMixin):
"""Abstract class for Matrix Distribution"""
(definition of MatrixDistribution.__new__:)
def __new__(cls, *args):
(definition of MatrixDistribution.check:)
def check(*args):
(definition of MatrixDistribution.__call__:)
def __call__(self, expr):
(definition of MatrixGammaDistribution:)
class MatrixGammaDistribution(MatrixDistribution):
(definition of MatrixGammaDistribution.check:)
def check(alpha, beta, scale_matrix):
(definition of MatrixGammaDistribution.set:)
def set(self):
(definition of MatrixGammaDistribution.dimension:)
def dimension(self):
(definition of MatrixGammaDistribution.pdf:)
def pdf(self, x):
(definition of MatrixGamma:)
def MatrixGamma(symbol, alpha, beta, scale_matrix):
"""Creates a random variable with Matrix Gamma Distribution.
The density of the said distribution can be found at [1].
Parameters
==========
alpha: Positive Real number
Shape Parameter
beta: Positive Real number
Scale Parameter
scale_matrix: Positive definite real square matrix
Scale Matrix
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import density, MatrixGamma
>>> from sympy import MatrixSymbol, symbols
>>> a, b = symbols('a b', positive=True)
>>> M = MatrixGamma('M', a, b, [[2, 1], [1, 2]])
>>> X = MatrixSymbol('X', 2, 2)
>>> density(M)(X).doit()
3**(-a)*b**(-2*a)*exp(Trace(Matrix([
[-2/3, 1/3],
[ 1/3, -2/3]])*X)/b)*Determinant(X)**(a - 3/2)/(sqrt(pi)*gamma(a)*gamma(a - 1/2))
>>> density(M)([[1, 0], [0, 1]]).doit()
3**(-a)*b**(-2*a)*exp(-4/(3*b))/(sqrt(pi)*gamma(a)*gamma(a - 1/2))
References
==========
.. [1] https://en.wikipedia.org/wiki/Matrix_gamma_distribution"""
[end of new definitions in sympy/stats/matrix_distributions.py]
[start of new definitions in sympy/stats/random_matrix_models.py]
(definition of RandomMatrixEnsemble.__call__:)
def __call__(self, expr):
[end of new definitions in sympy/stats/random_matrix_models.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
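As a plain-Python sanity check on the symbolic pdf above (the helper names below are ad hoc and only the 2x2 case is handled), the density at alpha=1, beta=2, scale=I, X=[[2, 1], [1, 2]] should come out to sqrt(3)*exp(-2)/(12*pi), the value asserted in `test_MatrixGamma`:

```python
import math


def multigamma(a, p):
    # multivariate gamma function Gamma_p(a)
    return math.pi ** (p * (p - 1) / 4) * math.prod(
        math.gamma(a + (1 - j) / 2) for j in range(1, p + 1)
    )


def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]


def matrix_gamma_pdf(x, alpha, beta, scale):
    # numeric counterpart of MatrixGammaDistribution.pdf, 2x2 only
    p = 2
    d = det2(scale)
    inv = [[scale[1][1] / d, -scale[0][1] / d],
           [-scale[1][0] / d, scale[0][0] / d]]
    # trace of -scale^{-1} @ x / beta
    tr = -sum(inv[i][k] * x[k][i]
              for i in range(p) for k in range(p)) / beta
    term1 = math.exp(tr) / (beta ** (p * alpha) * multigamma(alpha, p))
    term2 = det2(scale) ** (-alpha)
    term3 = det2(x) ** (alpha - (p + 1) / 2)
    return term1 * term2 * term3
```

Here multigamma(1, 2) = pi, the trace term is -2, and det(X) = 3, so the value reduces to exp(-2)/(4*pi*sqrt(3)).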
<<END>> | 1b4529a95ef641c2fc15889091b281644069d20e | ||
pvlib__pvlib-python-1000 | 1,000 | pvlib/pvlib-python | 0.7 | b4bdfd3be8784aba8f194e729b72f5cb9d19c3f4 | 2020-07-08T15:48:21Z | diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index cd5b0ced40..6569202046 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -369,6 +369,13 @@ Other
Effects on PV System Output
===========================
+Loss models
+^^^^^^^^^^^
+.. autosummary::
+ :toctree: generated/
+
+ pvsystem.combine_loss_factors
+
.. autosummary::
:toctree: generated/
diff --git a/docs/sphinx/source/whatsnew/v0.8.0.rst b/docs/sphinx/source/whatsnew/v0.8.0.rst
index 75e223b045..b64c839a29 100644
--- a/docs/sphinx/source/whatsnew/v0.8.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.0.rst
@@ -92,6 +92,9 @@ Enhancements
PVSystem, LocalizedPVSystem, SingleAxisTracker, and
LocalizedSingleAxisTracker repr methods. (:issue:`1027`)
* Added ability for :py:func:`pvlib.soiling.hsu` to accept arbitrary time intervals. (:pull:`980`)
+* Add :py:func:`pvlib.pvsystem.combine_loss_factors` as general purpose
+ function to combine loss factors with a common index.
+ Partially addresses :issue:`988`. Contributed by Brock Taute :ghuser:`btaute`
Bug fixes
~~~~~~~~~
diff --git a/pvlib/pvsystem.py b/pvlib/pvsystem.py
index bf1b3bab06..2134801c8d 100644
--- a/pvlib/pvsystem.py
+++ b/pvlib/pvsystem.py
@@ -2310,6 +2310,48 @@ def pvwatts_losses(soiling=2, shading=3, snow=0, mismatch=2, wiring=2,
return losses
+def combine_loss_factors(index, *losses, fill_method='ffill'):
+ r"""
+ Combines Series loss fractions while setting a common index.
+
+ The separate losses are compounded using the following equation:
+
+ .. math::
+
+        L_{total} = 1 - \Pi_i ( 1 - L_i )
+
+ :math:`L_{total}` is the total loss returned
+ :math:`L_i` is each individual loss factor input
+
+ Note the losses must each be a series with a DatetimeIndex.
+ All losses will be resampled to match the index parameter using
+ the fill method specified (defaults to "fill forward").
+
+ Parameters
+ ----------
+ index : DatetimeIndex
+ The index of the returned loss factors
+
+ *losses : Series
+ One or more Series of fractions to be compounded
+
+ fill_method : {'ffill', 'bfill', 'nearest'}, default 'ffill'
+ Method to use for filling holes in reindexed DataFrame
+
+ Returns
+ -------
+ Series
+ Fractions resulting from the combination of each loss factor
+ """
+ combined_factor = 1
+
+ for loss in losses:
+ loss = loss.reindex(index, method=fill_method)
+ combined_factor *= (1 - loss)
+
+ return 1 - combined_factor
+
+
snlinverter = deprecated('0.8', alternative='inverter.sandia',
name='snlinverter', removal='0.9')(inverter.sandia)
| diff --git a/pvlib/tests/test_pvsystem.py b/pvlib/tests/test_pvsystem.py
index 4febb71f43..2ad26937b4 100644
--- a/pvlib/tests/test_pvsystem.py
+++ b/pvlib/tests/test_pvsystem.py
@@ -1299,6 +1299,18 @@ def test_PVSystem_pvwatts_ac_kwargs(mocker):
assert out < pdc
+def test_combine_loss_factors():
+ test_index = pd.date_range(start='1990/01/01T12:00', periods=365, freq='D')
+ loss_1 = pd.Series(.10, index=test_index)
+ loss_2 = pd.Series(.05, index=pd.date_range(start='1990/01/01T12:00',
+ periods=365*2, freq='D'))
+ loss_3 = pd.Series(.02, index=pd.date_range(start='1990/01/01',
+ periods=12, freq='MS'))
+ expected = pd.Series(.1621, index=test_index)
+ out = pvsystem.combine_loss_factors(test_index, loss_1, loss_2, loss_3)
+ assert_series_equal(expected, out)
+
+
@fail_on_pvlib_version('0.9')
def test_deprecated_09(cec_inverter_parameters, adr_inverter_parameters):
# deprecated function pvsystem.snlinverter
| diff --git a/docs/sphinx/source/api.rst b/docs/sphinx/source/api.rst
index cd5b0ced40..6569202046 100644
--- a/docs/sphinx/source/api.rst
+++ b/docs/sphinx/source/api.rst
@@ -369,6 +369,13 @@ Other
Effects on PV System Output
===========================
+Loss models
+^^^^^^^^^^^
+.. autosummary::
+ :toctree: generated/
+
+ pvsystem.combine_loss_factors
+
.. autosummary::
:toctree: generated/
diff --git a/docs/sphinx/source/whatsnew/v0.8.0.rst b/docs/sphinx/source/whatsnew/v0.8.0.rst
index 75e223b045..b64c839a29 100644
--- a/docs/sphinx/source/whatsnew/v0.8.0.rst
+++ b/docs/sphinx/source/whatsnew/v0.8.0.rst
@@ -92,6 +92,9 @@ Enhancements
PVSystem, LocalizedPVSystem, SingleAxisTracker, and
LocalizedSingleAxisTracker repr methods. (:issue:`1027`)
* Added ability for :py:func:`pvlib.soiling.hsu` to accept arbitrary time intervals. (:pull:`980`)
+* Add :py:func:`pvlib.pvsystem.combine_loss_factors` as general purpose
+ function to combine loss factors with a common index.
+ Partially addresses :issue:`988`. Contributed by Brock Taute :ghuser:`btaute`
Bug fixes
~~~~~~~~~
| [
{
"components": [
{
"doc": "Combines Series loss fractions while setting a common index.\n\nThe separate losses are compounded using the following equation:\n\n.. math::\n\n L_{total} = 1 - [ 1 - \\Pi_i ( 1 - L_i ) ]\n\n:math:`L_{total}` is the total loss returned\n:math:`L_i` is each individua... | [
"pvlib/tests/test_pvsystem.py::test_combine_loss_factors"
] | [
"pvlib/tests/test_pvsystem.py::test_PVSystem_get_iam[ashrae-model_params0]",
"pvlib/tests/test_pvsystem.py::test_PVSystem_get_iam[physical-model_params1]",
"pvlib/tests/test_pvsystem.py::test_PVSystem_get_iam[martin_ruiz-model_params2]",
"pvlib/tests/test_pvsystem.py::test_PVSystem_get_iam_sapm",
"pvlib/tes... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Added irradiance_loss_pvsyst function to pvsystem
Added irradiance_loss_pvsyst function to pvsystem module and method to PVSystem class.
Partially addresses Issue #988.
I still need to incorporate tests and update the "what's new" file. I'm hoping for some feedback on implementation before starting those.
Thank you for your review.
<!-- Thank you for your contribution! The following items must be addressed before the code can be merged. Please don't hesitate to ask for help if you're unsure of how to accomplish any of the items. Feel free to remove checklist items that are not relevant to your change. -->
- [x] Closes #988
- [X] I am familiar with the [contributing guidelines](https://pvlib-python.readthedocs.io/en/latest/contributing.html)
- [x] Tests added
- [x] Updates entries to [`docs/sphinx/source/api.rst`](https://github.com/pvlib/pvlib-python/blob/master/docs/sphinx/source/api.rst) for API changes.
- [x] Adds description and name entries in the appropriate "what's new" file in [`docs/sphinx/source/whatsnew`](https://github.com/pvlib/pvlib-python/tree/master/docs/sphinx/source/whatsnew) for all changes. Includes link to the GitHub Issue with `` :issue:`num` `` or this Pull Request with `` :pull:`num` ``. Includes contributor name and/or GitHub username (link with `` :ghuser:`user` ``).
- [X] New code is fully documented. Includes [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) compliant docstrings, examples, and comments where necessary.
- [x] Pull request is nearly complete and ready for detailed review.
- [x] Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.
<!-- Brief description of the problem and proposed solution (if not already fully described in the issue linked to above): -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in pvlib/pvsystem.py]
(definition of combine_loss_factors:)
def combine_loss_factors(index, *losses, fill_method='ffill'):
"""Combines Series loss fractions while setting a common index.
The separate losses are compounded using the following equation:
.. math::
L_{total} = 1 - \Pi_i ( 1 - L_i )
:math:`L_{total}` is the total loss returned
:math:`L_i` is each individual loss factor input
Note the losses must each be a series with a DatetimeIndex.
All losses will be resampled to match the index parameter using
the fill method specified (defaults to "fill forward").
Parameters
----------
index : DatetimeIndex
The index of the returned loss factors
*losses : Series
One or more Series of fractions to be compounded
fill_method : {'ffill', 'bfill', 'nearest'}, default 'ffill'
Method to use for filling holes in reindexed DataFrame
Returns
-------
Series
Fractions resulting from the combination of each loss factor"""
[end of new definitions in pvlib/pvsystem.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Adding Loss Model Categories
Following [this](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/pvlib-python/O5Dhrz5mlRc/-zZez4UnBgAJ) conversation on the Google Group, it would be helpful to create loss models divided into different categories. I think this could be done effectively with 4 categories:
1. Effective Irradiance
A. All losses already accounted for in the transposition models
B. Angle of Incidence losses
C. Soiling
D. Snow Coverage
E. Shading -> this one gets a bit more nuanced if you take into account cell stringing and blends with an electrical loss
F. Bifaciality -> Also a bit more nuanced as it incorporates module-specific parameters
2. Module Performance
A. Degradation, including LID, LeTID, and annual degradation
B. Module Quality, or the actual performance of a module installed compared to its rated performance
C. Module Mismatch? (This one might fit better in the DC losses bucket, but technically current mismatch within a string and voltage mismatch between parallel strings impact performance at a module level)
3. DC Electrical Losses (after dc_model)
A. Voltage drop due to cabling resistance (DC Ohmic Loss in PVSyst)
B. Clipping when DC output at MPPT is higher than the inverter rating (although I think this is already well accounted for in pvlib)
4. AC Losses (after ac_model)
A. Medium Voltage Transformer losses
B. AC Cabling resistance losses
C. High Voltage Transformer losses
D. Transmission line losses
E. Clipping when a plant is capable of producing more power than it is allowed to inject into its point of interconnection
F. Any additional auxiliary losses should be included (like parasitic trackers or a control house)
Often the Module Performance and DC Electrical Loss categories are combined, but I think being more explicit can offer value, as those losses would be applied before the single diode model instead of after.
Are there any thoughts on this implementation?
----------
Following up on this thread, I'd be happy to submit a pull request for one of the models (and eventually all of them). I believe the DC electrical losses would be the simplest to implement, so I would intend to start with that one. I'd like to get some feedback on the implementation prior to doing this, though. Please let me know how I can best assist this project.
Thanks!
Hi @btaute, it sounds like you're planning on adding these models as methods on the ModelChain class, is that true? One general piece of feedback is that pvlib is organized into two layers: (1) a backbone of low-level procedural functions to implement models and (2) high-level classes (PVSystem and ModelChain) that provide a more organized interface but still delegate the model execution to the low-level functions. So keeping with that structure, there are two general approaches you should consider:
- exposing the existing low-level functions in the ModelChain API (because not all of them are available, e.g. soiling)
- adding new low-level functions you're interested in and then integrating them into the high-level interfaces
So for example, in your example code in the google group thread you suggested methods like `ModelChain.dc_losses_model` and
`ModelChain.effective_irradiance_model` -- we would probably want to see the underlying models implemented as low-level functions before creating the corresponding ModelChain steps, assuming the models you want to include don't already exist in pvlib.
Anyway, your categorizations seem mostly reasonable to me, though I might quibble a bit over the small details. A good reference is the [PVPMC modeling website](https://pvpmc.sandia.gov/modeling-steps/). I would tend to kick the can down the road on what the API changes to ModelChain end up being and focus more on the models that need to be added. What might help make it more concrete would be to include the names/references of the models you're interested in implementing because many of the loss categories you listed don't currently have any models in pvlib.
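The two-layer organization described above can be sketched roughly as follows. The model, parameter values, and names here are illustrative stand-ins (a SAPM-style cell-temperature formula), not pvlib's actual API:

```python
import math


def sapm_cell_temperature(poa_global, temp_air, wind_speed, a, b):
    # layer 1: a low-level procedural function implementing one model
    # (illustrative formula and coefficients, not pvlib's signature)
    return poa_global * math.exp(a + b * wind_speed) + temp_air


class PVSystem:
    # layer 2: a high-level class that stores parameters and
    # delegates model execution to the low-level function
    def __init__(self, a=-3.47, b=-0.0594):
        self.a, self.b = a, b

    def cell_temperature(self, poa_global, temp_air, wind_speed):
        return sapm_cell_temperature(
            poa_global, temp_air, wind_speed, self.a, self.b)
```

New loss models would follow the same pattern: a standalone function first, then (optionally) a thin method on `PVSystem` or a step in `ModelChain` that calls it.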
Thank you for the feedback. I think it would help to focus on one loss category. If we start with an irradiance_loss_model that served the purpose of reducing the effective_irradiance that goes into the dc_model, I see value in two potential loss models (at the moment), which could be called by a ModelChain.effective_irradiance_loss_model.
```
def pvsyst_irradiance_loss_model(effective_irradiance, monthly_soiling_factors, timeseries_shading_factors):
"""
Reduce the effective irradiance on the solar plant by accounting for soiling and linear shading effects (electrical effects due to
shading are accounted for in the dc_losses_model)
Parameters
------------
effective_irradiance: Series.
The effective irradiance in the plane of the array of the solar racking
monthly_soiling_factors: float or Series with length 12.
The percent of effective irradiance on a module reduced by soiling. A constant loss for each month
timeseries_shading_factors: Series.
The percent of module surface area shaded
Returns
--------
new_effective_irradiance: Series.
Effective irradiance accounting for soiling and linear shading losses [W/m2]
"""
timeseries_soiling_factors = convert_monthly_to_timeseries(monthly_soiling_factors)
new_effective_irradiance = effective_irradiance * (1 - timeseries_soiling_factors) * (1 - timeseries_shading_factors)
return new_effective_irradiance
```
and
```
def timeseries_irradiance_loss_model(effective_irradiance, timeseries_soiling_factors, timeseries_shading_factors,
timeseries_snow_coverage_factors):
"""
Reduce the effective irradiance on the solar plant by accounting for soiling, snow coverage, and linear shading effects
(electrical effects due to shading and snow coverage are accounted for in the dc_losses_model)
Parameters
------------
effective_irradiance: Series.
The effective irradiance in the plane of the array of the solar racking
timeseries_soiling_factors: Series.
The percent of effective irradiance on a module reduced by soiling
timeseries_shading_factors: Series.
The percent of module surface area shaded
timeseries_snow_coverage_factors: Series.
The percent of module surface area covered by snow
Returns
--------
new_effective_irradiance: Series.
    Effective irradiance accounting for soiling, snow coverage, and linear shading losses [W/m2]
"""
new_effective_irradiance = effective_irradiance * (1 - timeseries_soiling_factors) * (1 - timeseries_shading_factors) * (1 -
timeseries_snow_coverage_factors)
return new_effective_irradiance
```
I believe pvlib has models in place for calculating the snow coverage and soiling timeseries factors. Monthly soiling factors will likely remain inputs. I think a shading model is still missing. I'm imagining the effective_irradiance would already be incorporating bifaciality (if applicable) before reaching this model.
(The code is meant to be indicative. I'm certainly not married to the name "new_effective_irradiance.")
Ok, thanks for the examples. And sorry for the continued delayed responses -- I think this is just a busy time of year, it's not a lack of interest. These functions are simple enough, basically just combining different losses together rather than what I was imagining (modeling the component losses themselves). I think hearing from @cwhanse would be useful because he has some planned improvements to ModelChain along these lines as well.
Thanks @kanderso-nrel, and certainly no worries on the response timing.
I thought that would be the appropriate level of abstraction as it would allow different methods of implementing the component loss models together. I see tremendous value in getting the modelchain object in such a form that you can run it to replicate PVSyst results and even more value when the flexibility exists to improve upon that model.
I envision my initial pull requests to implement models at the same level of abstraction as the examples above to apply losses to effective_irradiance, singlediode parameters, dc output (pdc), and ac output (pac), which leave room for more detailed component models to then be implemented. Subsequent pull requests would be to alter the modelchain workflow to infer and utilize these loss models. A PVSyst model replication would leave most of these component model losses as constant W or % decreases, but better component level models would be able to improve upon that.
Another aspect of this I'd be happy to work on, which is both part of this discussion and a discussion all in its own, is incorporating bifaciality. To date, I've done this by applying rear shading and mismatch losses to the backside irradiance, then multiplying it by the module's bifaciality factor and adding to the front side irradiance prior to entering the single diode equation. We can save that implementation until the end if we believe it will muddy the waters of this discussion.
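The front/rear combination described above reduces to a one-line formula; here is a minimal pure-Python sketch of it (the function name and the sample numbers are hypothetical, purely illustrative — not pvlib API):

```python
def total_effective_irradiance(front, rear, bifaciality,
                               rear_shading, rear_mismatch):
    # Derate the backside irradiance for rear shading and mismatch losses,
    # scale by the module's bifaciality factor, then add the front side.
    return front + rear * (1 - rear_shading) * (1 - rear_mismatch) * bifaciality

# Example: 1000 W/m2 front, 150 W/m2 rear, bifaciality 0.7,
# 10% rear shading, 5% rear mismatch loss.
g_eff = total_effective_irradiance(1000.0, 150.0, 0.7, 0.1, 0.05)
# g_eff is roughly 1089.8 W/m2
```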
Again, I welcome feedback from all members of this team on the implementation before I put together the first pull request. I'm happy to start coding instead of chatting, but this is obviously something that needs to be incorporated with the style that you all have in mind. Thanks!
I'd like to gently bump this issue. To further clarify my intentions, I'd like to submit a pull request that implements the timeseries_irradiance_loss_model above (with a better name) as a function in the pvsystem.py module and as a method in the PVSystem class (within the same module). Please let me know if that pull request would be an appropriate first step or if the maintaining team would like to see this implemented in a different manner. Thank you.
@btaute I apologize for the delay in responding. Time of year, and other work obligations, etc.
Thank you for this initiative. I think we will do well to take implementing loss functions in manageable steps. The outline you provide is excellent; let's use that as a guide.
As we implement loss models we should do so with an eye towards accommodating different loss models that may be applied at different points in the model chain. As you point out, shading loss can be applied (in a simple way) to reduce effective irradiance, or to reduce module DC current with a more complex shading effects model. Something to keep in mind as we develop naming conventions and code modules.
I agree with your assessment that it is valuable to add loss models in the PVsyst style. Lacking a better idea, let's add the system-level functions to pvsystem.py.
The two functions proposed above appear to have the same objective (implement losses in the PVsyst style) so I think these two are actually a single function `irradiance_loss_pvsyst`. The loss inputs (soiling, shading, snow) should be Series, and get resampled to the time index of the effective_irradiance input, and a dedicated monthly_to_xxx function shouldn't be necessary.
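A minimal, list-based sketch of that single combined `irradiance_loss_pvsyst` function (the name follows the comment above; broadcasting monthly values by calendar month and the plain-list inputs are simplifications for illustration — real pvlib code would use pandas Series with index alignment and resampling):

```python
import datetime

def irradiance_loss_pvsyst(effective_irradiance, times,
                           monthly_soiling, shading, snow):
    """Apply soiling, shading, and snow losses multiplicatively.

    monthly_soiling has 12 entries and is broadcast to each timestamp
    by calendar month; shading and snow are per-timestamp fractions.
    """
    out = []
    for e_poa, t, sh, sn in zip(effective_irradiance, times, shading, snow):
        soil = monthly_soiling[t.month - 1]  # monthly factor for this timestamp
        out.append(e_poa * (1 - soil) * (1 - sh) * (1 - sn))
    return out

times = [datetime.datetime(2021, 1, 1, 12), datetime.datetime(2021, 7, 1, 12)]
result = irradiance_loss_pvsyst(
    [1000.0, 800.0], times,
    monthly_soiling=[0.05] * 6 + [0.02] * 6,  # winter vs. summer soiling
    shading=[0.10, 0.0],
    snow=[0.0, 0.0])
# January noon: 1000 * 0.95 * 0.90, about 855.0 W/m2
# July noon: 800 * 0.98, about 784.0 W/m2
```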
--------------------
</issues> | aa1635bcb40dc83f82e9fd72158670c235bfe99b |
sympy__sympy-19724 | 19,724 | sympy/sympy | 1.7 | 1f7cbeee19200aac259f4ffb8ec208b9b77fd5cf | 2020-07-07T11:03:31Z | diff --git a/sympy/stats/__init__.py b/sympy/stats/__init__.py
index e2475f13bea6..c4b87459aeb4 100644
--- a/sympy/stats/__init__.py
+++ b/sympy/stats/__init__.py
@@ -142,7 +142,7 @@
'joint_eigen_distribution', 'JointEigenDistribution',
'level_spacing_distribution',
- 'Probability', 'Expectation', 'Variance', 'Covariance',
+ 'Probability', 'Expectation', 'Variance', 'Covariance', 'Moment', 'CentralMoment',
'ExpectationMatrix', 'VarianceMatrix', 'CrossCovarianceMatrix'
@@ -189,7 +189,7 @@
JointEigenDistribution, level_spacing_distribution)
from .symbolic_probability import (Probability, Expectation, Variance,
- Covariance)
+ Covariance, Moment, CentralMoment)
from .symbolic_multivariate_probability import (ExpectationMatrix, VarianceMatrix,
CrossCovarianceMatrix)
diff --git a/sympy/stats/rv_interface.py b/sympy/stats/rv_interface.py
index 24ccda833ca4..e48ddcf7a9dd 100644
--- a/sympy/stats/rv_interface.py
+++ b/sympy/stats/rv_interface.py
@@ -1,6 +1,7 @@
from __future__ import print_function, division
from sympy.sets import FiniteSet
-from sympy import sqrt, log, exp, FallingFactorial, Rational, Eq, Dummy, piecewise_fold, solveset
+from sympy import (sqrt, log, exp, FallingFactorial, Rational, Eq, Dummy,
+ piecewise_fold, solveset, Integral)
from .rv import (probability, expectation, density, where, given, pspace, cdf, PSpace,
characteristic_function, sample, sample_iter, random_symbols, independent, dependent,
sampling_density, moment_generating_function, quantile, is_random,
@@ -33,7 +34,10 @@ def moment(X, n, c=0, condition=None, **kwargs):
>>> moment(X, 1) == E(X)
True
"""
- return expectation((X - c)**n, condition, **kwargs)
+ from sympy.stats.symbolic_probability import Moment
+ if kwargs.pop('evaluate', True):
+ return Moment(X, n, c, condition).doit()
+ return Moment(X, n, c, condition).rewrite(Integral)
def variance(X, condition=None, **kwargs):
@@ -211,8 +215,10 @@ def cmoment(X, n, condition=None, **kwargs):
>>> cmoment(X, 2) == variance(X)
True
"""
- mu = expectation(X, condition, **kwargs)
- return moment(X, n, mu, condition, **kwargs)
+ from sympy.stats.symbolic_probability import CentralMoment
+ if kwargs.pop('evaluate', True):
+ return CentralMoment(X, n, condition).doit()
+ return CentralMoment(X, n, condition).rewrite(Integral)
def smoment(X, n, condition=None, **kwargs):
diff --git a/sympy/stats/symbolic_probability.py b/sympy/stats/symbolic_probability.py
index b351c612b090..1abfa8ebf8d4 100644
--- a/sympy/stats/symbolic_probability.py
+++ b/sympy/stats/symbolic_probability.py
@@ -561,3 +561,111 @@ def _eval_rewrite_as_Integral(self, arg1, arg2, condition=None, **kwargs):
def evaluate_integral(self):
return self.rewrite(Integral).doit()
+
+
+class Moment(Expr):
+ """
+ Symbolic class for Moment
+
+ Examples
+ ========
+
+ >>> from sympy import Symbol, Integral
+ >>> from sympy.stats import Normal, Expectation, Probability, Moment
+ >>> mu = Symbol('mu', real=True)
+ >>> sigma = Symbol('sigma', real=True, positive=True)
+ >>> X = Normal('X', mu, sigma)
+ >>> M = Moment(X, 3, 1)
+
+ To evaluate the result of Moment use `doit`:
+ >>> M.doit()
+ mu**3 - 3*mu**2 + 3*mu*sigma**2 + 3*mu - 3*sigma**2 - 1
+
+ Rewrite the Moment expression in terms of Expectation:
+ >>> M.rewrite(Expectation)
+ Expectation((X - 1)**3)
+
+ Rewrite the Moment expression in terms of Probability:
+ >>> M.rewrite(Probability)
+ Integral((x - 1)**3*Probability(Eq(X, x)), (x, -oo, oo))
+
+ Rewrite the Moment expression in terms of Integral:
+ >>> M.rewrite(Integral)
+ Integral(sqrt(2)*(X - 1)**3*exp(-(X - mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (X, -oo, oo))
+
+ """
+ def __new__(cls, X, n, c=0, condition=None, **kwargs):
+ X = _sympify(X)
+ n = _sympify(n)
+ c = _sympify(c)
+ if condition is not None:
+ condition = _sympify(condition)
+ return Expr.__new__(cls, X, n, c, condition)
+
+ def doit(self, **hints):
+ if not is_random(self.args[0]):
+ return self.args[0]
+ return self.rewrite(Expectation).doit(**hints)
+
+ def _eval_rewrite_as_Expectation(self, X, n, c=0, condition=None, **kwargs):
+ return Expectation((X - c)**n, condition)
+
+ def _eval_rewrite_as_Probability(self, X, n, c=0, condition=None, **kwargs):
+ return self.rewrite(Expectation).rewrite(Probability)
+
+ def _eval_rewrite_as_Integral(self, X, n, c=0, condition=None, **kwargs):
+ return self.rewrite(Expectation).rewrite(Integral)
+
+
+class CentralMoment(Expr):
+ """
+ Symbolic class Central Moment
+
+ Examples
+ ========
+
+ >>> from sympy import Symbol, Integral
+ >>> from sympy.stats import Normal, Expectation, Probability, CentralMoment
+ >>> mu = Symbol('mu', real=True)
+ >>> sigma = Symbol('sigma', real=True, positive=True)
+ >>> X = Normal('X', mu, sigma)
+ >>> CM = CentralMoment(X, 4)
+
+ To evaluate the result of CentralMoment use `doit`:
+ >>> CM.doit().simplify()
+ 3*sigma**4
+
+ Rewrite the CentralMoment expression in terms of Expectation:
+ >>> CM.rewrite(Expectation)
+ Expectation((X - Expectation(X))**4)
+
+ Rewrite the CentralMoment expression in terms of Probability:
+ >>> CM.rewrite(Probability)
+ Integral((x - Integral(x*Probability(True), (x, -oo, oo)))**4*Probability(Eq(X, x)), (x, -oo, oo))
+
+ Rewrite the CentralMoment expression in terms of Integral:
+ >>> CM.rewrite(Integral)
+ Integral(sqrt(2)*(X - Integral(sqrt(2)*X*exp(-(X - mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (X, -oo, oo)))**4*exp(-(X - mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (X, -oo, oo))
+
+ """
+ def __new__(cls, X, n, condition=None, **kwargs):
+ X = _sympify(X)
+ n = _sympify(n)
+ if condition is not None:
+ condition = _sympify(condition)
+ return Expr.__new__(cls, X, n, condition)
+
+ def doit(self, **hints):
+ if not is_random(self.args[0]):
+ return self.args[0]
+ return self.rewrite(Expectation).doit(**hints)
+
+ def _eval_rewrite_as_Expectation(self, X, n, condition=None, **kwargs):
+ mu = Expectation(X, condition, **kwargs)
+ return Moment(X, n, mu, condition, **kwargs).rewrite(Expectation)
+
+ def _eval_rewrite_as_Probability(self, X, n, condition=None, **kwargs):
+ return self.rewrite(Expectation).rewrite(Probability)
+
+ def _eval_rewrite_as_Integral(self, X, n, condition=None, **kwargs):
+ return self.rewrite(Expectation).rewrite(Integral)
| diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index 6bdc452c62c2..d2cd263deacb 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -1195,6 +1195,20 @@ def test_sympy__stats__symbolic_probability__Variance():
assert _test_args(Variance(X))
+def test_sympy__stats__symbolic_probability__Moment():
+ from sympy.stats.symbolic_probability import Moment
+ from sympy.stats import Normal
+ X = Normal('X', 0, 1)
+ assert _test_args(Moment(X, 3, 2, X > 3))
+
+
+def test_sympy__stats__symbolic_probability__CentralMoment():
+ from sympy.stats.symbolic_probability import CentralMoment
+ from sympy.stats import Normal
+ X = Normal('X', 0, 1)
+ assert _test_args(CentralMoment(X, 2, X > 1))
+
+
def test_sympy__stats__frv_types__DiscreteUniformDistribution():
from sympy.stats.frv_types import DiscreteUniformDistribution
from sympy.core.containers import Tuple
diff --git a/sympy/stats/tests/test_symbolic_probability.py b/sympy/stats/tests/test_symbolic_probability.py
index 53245a15e2da..1dc1d9cff0f3 100644
--- a/sympy/stats/tests/test_symbolic_probability.py
+++ b/sympy/stats/tests/test_symbolic_probability.py
@@ -1,7 +1,7 @@
-from sympy import symbols, Mul, sin, Integral, oo, Eq, Sum
+from sympy import symbols, Mul, sin, Integral, oo, Eq, Sum, sqrt, exp, pi, Dummy
from sympy.core.expr import unchanged
from sympy.stats import (Normal, Poisson, variance, Covariance, Variance,
- Probability, Expectation)
+ Probability, Expectation, Moment, CentralMoment)
from sympy.stats.rv import probability, expectation
@@ -127,3 +127,41 @@ def test_probability_rewrite():
assert Variance(X, condition=Y).rewrite(Probability) == Integral(x**2*Probability(Eq(X, x), Y), (x, -oo, oo)) - \
Integral(x*Probability(Eq(X, x), Y), (x, -oo, oo))**2
+
+
+def test_symbolic_Moment():
+ mu = symbols('mu', real=True)
+ sigma = symbols('sigma', real=True, positive=True)
+ x = symbols('x')
+ X = Normal('X', mu, sigma)
+ M = Moment(X, 4, 2)
+ assert M.rewrite(Expectation) == Expectation((X - 2)**4)
+ assert M.rewrite(Probability) == Integral((x - 2)**4*Probability(Eq(X, x)),
+ (x, -oo, oo))
+ k = Dummy('k')
+ expri = Integral(sqrt(2)*(k - 2)**4*exp(-(k - \
+ mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (k, -oo, oo))
+ assert M.rewrite(Integral).dummy_eq(expri)
+ assert M.doit() == (mu**4 - 8*mu**3 + 6*mu**2*sigma**2 + \
+ 24*mu**2 - 24*mu*sigma**2 - 32*mu + 3*sigma**4 + 24*sigma**2 + 16)
+ M = Moment(2, 5)
+ assert M.doit() == 2
+
+
+def test_symbolic_CentralMoment():
+ mu = symbols('mu', real=True)
+ sigma = symbols('sigma', real=True, positive=True)
+ x = symbols('x')
+ X = Normal('X', mu, sigma)
+ CM = CentralMoment(X, 6)
+ assert CM.rewrite(Expectation) == Expectation((X - Expectation(X))**6)
+ assert CM.rewrite(Probability) == Integral((x - Integral(x*Probability(True),
+ (x, -oo, oo)))**6*Probability(Eq(X, x)), (x, -oo, oo))
+ k = Dummy('k')
+ expri = Integral(sqrt(2)*(k - Integral(sqrt(2)*k*exp(-(k - \
+ mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (k, -oo, oo)))**6*exp(-(k - \
+ mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (k, -oo, oo))
+ assert CM.rewrite(Integral).dummy_eq(expri)
+ assert CM.doit().simplify() == 15*sigma**6
+ CM = Moment(5, 5)
+ assert CM.doit() == 5
| [
{
"components": [
{
"doc": "Symbolic class for Moment\n\nExamples\n========\n\n>>> from sympy import Symbol, Integral\n>>> from sympy.stats import Normal, Expectation, Probability, Moment\n>>> mu = Symbol('mu', real=True)\n>>> sigma = Symbol('sigma', real=True, positive=True)\n>>> X = Normal('X', ... | [
"test_sympy__stats__symbolic_probability__Moment",
"test_sympy__stats__symbolic_probability__CentralMoment",
"test_literal_probability",
"test_probability_rewrite",
"test_symbolic_Moment"
] | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_sympy__assumptions__assume__Predicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers__AnyArgs",
"test_sympy__assu... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
[GSoC] Added symbolic classes for Moment and Centralmoment
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
Fixes #6323
Related comment https://github.com/sympy/sympy/issues/6323#issuecomment-654743869
#### Brief description of what is fixed or changed
- [x] Add classes of Moment and CentralMoment and their docs
- [x] Tests
#### Other comments
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* stats
* Added symbolic classes of `Moment` and `CentralMoment`
<!-- END RELEASE NOTES -->
ping @Upabjojr @czgdp1807 @jmig5776
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/stats/symbolic_probability.py]
(definition of Moment:)
class Moment(Expr):
"""Symbolic class for Moment
Examples
========
>>> from sympy import Symbol, Integral
>>> from sympy.stats import Normal, Expectation, Probability, Moment
>>> mu = Symbol('mu', real=True)
>>> sigma = Symbol('sigma', real=True, positive=True)
>>> X = Normal('X', mu, sigma)
>>> M = Moment(X, 3, 1)
To evaluate the result of Moment use `doit`:
>>> M.doit()
mu**3 - 3*mu**2 + 3*mu*sigma**2 + 3*mu - 3*sigma**2 - 1
Rewrite the Moment expression in terms of Expectation:
>>> M.rewrite(Expectation)
Expectation((X - 1)**3)
Rewrite the Moment expression in terms of Probability:
>>> M.rewrite(Probability)
Integral((x - 1)**3*Probability(Eq(X, x)), (x, -oo, oo))
Rewrite the Moment expression in terms of Integral:
>>> M.rewrite(Integral)
Integral(sqrt(2)*(X - 1)**3*exp(-(X - mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (X, -oo, oo))"""
(definition of Moment.__new__:)
def __new__(cls, X, n, c=0, condition=None, **kwargs):
(definition of Moment.doit:)
def doit(self, **hints):
(definition of Moment._eval_rewrite_as_Expectation:)
def _eval_rewrite_as_Expectation(self, X, n, c=0, condition=None, **kwargs):
(definition of Moment._eval_rewrite_as_Probability:)
def _eval_rewrite_as_Probability(self, X, n, c=0, condition=None, **kwargs):
(definition of Moment._eval_rewrite_as_Integral:)
def _eval_rewrite_as_Integral(self, X, n, c=0, condition=None, **kwargs):
(definition of CentralMoment:)
class CentralMoment(Expr):
"""Symbolic class Central Moment
Examples
========
>>> from sympy import Symbol, Integral
>>> from sympy.stats import Normal, Expectation, Probability, CentralMoment
>>> mu = Symbol('mu', real=True)
>>> sigma = Symbol('sigma', real=True, positive=True)
>>> X = Normal('X', mu, sigma)
>>> CM = CentralMoment(X, 4)
To evaluate the result of CentralMoment use `doit`:
>>> CM.doit().simplify()
3*sigma**4
Rewrite the CentralMoment expression in terms of Expectation:
>>> CM.rewrite(Expectation)
Expectation((X - Expectation(X))**4)
Rewrite the CentralMoment expression in terms of Probability:
>>> CM.rewrite(Probability)
Integral((x - Integral(x*Probability(True), (x, -oo, oo)))**4*Probability(Eq(X, x)), (x, -oo, oo))
Rewrite the CentralMoment expression in terms of Integral:
>>> CM.rewrite(Integral)
Integral(sqrt(2)*(X - Integral(sqrt(2)*X*exp(-(X - mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (X, -oo, oo)))**4*exp(-(X - mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (X, -oo, oo))"""
(definition of CentralMoment.__new__:)
def __new__(cls, X, n, condition=None, **kwargs):
(definition of CentralMoment.doit:)
def doit(self, **hints):
(definition of CentralMoment._eval_rewrite_as_Expectation:)
def _eval_rewrite_as_Expectation(self, X, n, condition=None, **kwargs):
(definition of CentralMoment._eval_rewrite_as_Probability:)
def _eval_rewrite_as_Probability(self, X, n, condition=None, **kwargs):
(definition of CentralMoment._eval_rewrite_as_Integral:)
def _eval_rewrite_as_Integral(self, X, n, condition=None, **kwargs):
[end of new definitions in sympy/stats/symbolic_probability.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Generating Functions for sympy.stats
```
Is there any benefit to reasoning about generating functions (moment, cumulant, etc...) for random variables? Are there problems that we could solve more easily with this extra formalism? http://en.wikipedia.org/wiki/Cumulant_generating_function http://en.wikipedia.org/wiki/Moment_generating_function
```
Original issue for #6323: http://code.google.com/p/sympy/issues/detail?id=3224
Original author: https://code.google.com/u/109882876523836932473/
----------
```
http://en.wikipedia.org/wiki/Probability-generating_function http://en.wikipedia.org/wiki/Characteristic_function_(probability_theory)
```
Original comment: http://code.google.com/p/sympy/issues/detail?id=3224#c1
Original author: https://code.google.com/u/109882876523836932473/
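For the probability-generating function linked above, the core identities (G(1) = 1 and G'(1) = E[X]) are easy to check numerically; this standalone sketch uses a fair die (the helper name is ours, not sympy's):

```python
def pgf(pmf, s):
    # Probability-generating function G(s) = E[s**X]
    # for an integer-valued random variable given as {value: probability}.
    return sum(p * s ** k for k, p in pmf.items())

die = {k: 1 / 6 for k in range(1, 7)}  # fair six-sided die

# G(1) sums the probabilities, so it should equal 1.
total = pgf(die, 1)

# G'(1) = E[X]; estimate the derivative with a central difference.
h = 1e-6
mean = (pgf(die, 1 + h) - pgf(die, 1 - h)) / (2 * h)
# mean comes out close to 3.5, the expected value of a fair die roll
```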
```
@mrocklin : Now that we have functions to calculate moments after https://github.com/sympy/sympy/pull/1875 , do we need these functions ? If yes I would like to work on them.
Ankit.
```
Original comment: http://code.google.com/p/sympy/issues/detail?id=3224#c2
Original author: https://code.google.com/u/110118527174861881051/
```
Do we need these? No.
Would they be useful? Maybe, I don't know enough about them.
Would they be nice to have in any event? Sure.
If you want to work on them then that's great. If nothing else it'd be nice to have their formal definition in SymPy. As you're working on them it might be interesting to think about problems that these bits of formalism solve that our current system finds difficult.
```
Original comment: http://code.google.com/p/sympy/issues/detail?id=3224#c3
Original author: https://code.google.com/u/109882876523836932473/
```
Sure, I will think of the cases where these generating/characteristic
functions can be used, as they would serve as better examples in the docs too.
Also I have some submission deadlines so I will commit tomorrow. Thanks.
```
Original comment: http://code.google.com/p/sympy/issues/detail?id=3224#c4
Original author: https://code.google.com/u/110118527174861881051/
```
We have moved issues to GitHub https://github.com/sympy/sympy/issues .
**Labels:** Restrict-AddIssueComment-Commit
```
Original comment: http://code.google.com/p/sympy/issues/detail?id=3224#c5
Original author: https://code.google.com/u/asmeurer@gmail.com/
```
We have moved issues to GitHub https://github.com/sympy/sympy/issues .
```
Original comment: http://code.google.com/p/sympy/issues/detail?id=3224#c6
Original author: https://code.google.com/u/asmeurer@gmail.com/
Currently `sympy.stats` support, `moment`, `cmoment`, `smoment`, `factorial_moment`, `characteristic_function`, and `moment_generating_function`
https://github.com/sympy/sympy/blob/3b4c1a6ad7ef692e2e1b1fc5a306c33951da64e7/sympy/stats/rv_interface.py#L19-L22
https://github.com/sympy/sympy/blob/3b4c1a6ad7ef692e2e1b1fc5a306c33951da64e7/sympy/stats/rv_interface.py#L197-L201
https://github.com/sympy/sympy/blob/3b4c1a6ad7ef692e2e1b1fc5a306c33951da64e7/sympy/stats/rv_interface.py#L218-L222
https://github.com/sympy/sympy/blob/3b4c1a6ad7ef692e2e1b1fc5a306c33951da64e7/sympy/stats/rv_interface.py#L314-L319
https://github.com/sympy/sympy/blob/3b4c1a6ad7ef692e2e1b1fc5a306c33951da64e7/sympy/stats/rv.py#L969-L973
https://github.com/sympy/sympy/blob/3b4c1a6ad7ef692e2e1b1fc5a306c33951da64e7/sympy/stats/rv.py#L1002-L1006
IMHO, this can be safely closed. Should we close this @Upabjojr @czgdp1807 ?
> Currently `sympy.stats` support, `moment`, `cmoment`, `smoment`, `factorial_moment`, `characteristic_function`, and `moment_generating_function`
Do we have the symbolic class for all of them? It's important to have a class to express unevaluated functions, so we can leave the unevaluated `Moment(X, 1)` in case functions like `moment(X, 1)` are unable to evaluate.
For unevaluated results, we can use `evaluate=False`:
```py
In [1]: from sympy.stats import *
In [2]: X = Normal('X', 1, 3)
In [3]: moment(X, 4, evaluate=False)
Out[3]: Integral(sqrt(2)*X**4*exp(-(X - 1)**2/18)/(6*sqrt(pi)), (X, -oo, oo))
In [4]: moment(X, 4)
Out[4]: 298
In [5]: cmoment(X, 3, evaluate=False)
Out[5]: Integral(sqrt(2)*(X - Integral(sqrt(2)*X*exp(-(X - 1)**2/18)/(6*sqrt(pi)), (X, -oo, oo)))**3*exp(-(X - 1)**2/18)/(6*sqrt(pi)), (X, -oo, oo))
In [6]: cmoment(X, 3)
Out[6]: 0
In [7]: factorial_moment(X, 2)
Out[7]: 9
In [8]: factorial_moment(X, 2, evaluate=False)
Out[8]: Integral(sqrt(2)*X*(X - 1)*exp(-(X - 1)**2/18)/(6*sqrt(pi)), (X, -oo, oo))
```
Is there a `Moment` class rather than `Integral`?
> For unevaluated results, we can use `evaluate=False`
That returns the unevaluated integral, not the unevaluated `Moment`. We already have the classes `Probability`, `Expectation`, `Variance`, and `Covariance` for unevaluated expressions... but not for the other ones.
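To make the raw-moment/central-moment relationship in this thread concrete, here is a small numerical sketch in stdlib Python (the helper names mirror sympy.stats, but this is plain sample arithmetic, not the symbolic API):

```python
import statistics

def moment(data, n, c=0.0):
    # n-th raw moment about the point c: estimate of E[(X - c)**n] from a sample.
    return sum((x - c) ** n for x in data) / len(data)

def cmoment(data, n):
    # n-th central moment: the raw moment taken about the sample mean.
    return moment(data, n, c=statistics.mean(data))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
m1 = moment(data, 1)   # first raw moment is the mean: 5.0
v = cmoment(data, 2)   # second central moment is the population variance: 4.0
# cmoment(X, 2) agrees with the variance, as in sympy's cmoment(X, 2) == variance(X)
assert abs(v - statistics.pvariance(data)) < 1e-12
```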
Sure, I get it. They will be nice additions.
--------------------
</issues> | 1b4529a95ef641c2fc15889091b281644069d20e | |
sympy__sympy-19696 | 19,696 | sympy/sympy | 1.7 | b3a8f9a4bc4633a4d4649ef91d576068c90b3700 | 2020-07-03T06:56:25Z | diff --git a/sympy/stats/rv.py b/sympy/stats/rv.py
index 4e9fc1e5c03e..f911f5e53c53 100644
--- a/sympy/stats/rv.py
+++ b/sympy/stats/rv.py
@@ -19,11 +19,10 @@
from typing import Tuple as tTuple
from sympy import (Basic, S, Expr, Symbol, Tuple, And, Add, Eq, lambdify, Or,
- Equality, Lambda, sympify, Dummy, Ne, KroneckerDelta, Not,
+ Equality, Lambda, sympify, Dummy, Ne, KroneckerDelta,
DiracDelta, Mul, Indexed, MatrixSymbol, Function, Integral)
from sympy.core.relational import Relational
from sympy.core.sympify import _sympify
-from sympy.logic.boolalg import Boolean
from sympy.sets.sets import FiniteSet, ProductSet, Intersection
from sympy.solvers.solveset import solveset
from sympy.external import import_module
@@ -788,57 +787,11 @@ def probability(condition, given_condition=None, numsamples=None,
5/12
"""
- condition = sympify(condition)
- given_condition = sympify(given_condition)
-
- if isinstance(condition, Not):
- return S.One - probability(condition.args[0], given_condition, evaluate, **kwargs)
-
- if condition.has(RandomIndexedSymbol):
- return pspace(condition).probability(condition, given_condition, evaluate, **kwargs)
-
- if isinstance(given_condition, RandomSymbol):
- condrv = random_symbols(condition)
- if len(condrv) == 1 and condrv[0] == given_condition:
- from sympy.stats.frv_types import BernoulliDistribution
- return BernoulliDistribution(probability(condition), 0, 1)
- if any([dependent(rv, given_condition) for rv in condrv]):
- from sympy.stats.symbolic_probability import Probability
- return Probability(condition, given_condition)
- else:
- return probability(condition)
-
- if given_condition is not None and \
- not isinstance(given_condition, (Relational, Boolean)):
- raise ValueError("%s is not a relational or combination of relationals"
- % (given_condition))
- if given_condition == False:
- return S.Zero
- if not isinstance(condition, (Relational, Boolean)):
- raise ValueError("%s is not a relational or combination of relationals"
- % (condition))
- if condition is S.true:
- return S.One
- if condition is S.false:
- return S.Zero
-
- if numsamples:
- return sampling_P(condition, given_condition, numsamples=numsamples,
- **kwargs)
- if given_condition is not None: # If there is a condition
- # Recompute on new conditional expr
- return probability(given(condition, given_condition, **kwargs), **kwargs)
-
- # Otherwise pass work off to the ProbabilitySpace
- if pspace(condition) == PSpace():
- from sympy.stats.symbolic_probability import Probability
- return Probability(condition, given_condition)
-
- result = pspace(condition).probability(condition, **kwargs)
- if evaluate and hasattr(result, 'doit'):
- return result.doit()
- else:
- return result
+ kwargs['numsamples'] = numsamples
+ from sympy.stats.symbolic_probability import Probability
+ if evaluate:
+ return Probability(condition, given_condition).doit(**kwargs)
+ return Probability(condition, given_condition).rewrite(Integral) # will return Sum in case of discrete RV
class Density(Basic):
diff --git a/sympy/stats/symbolic_multivariate_probability.py b/sympy/stats/symbolic_multivariate_probability.py
index 1ef75a52e9b5..9ad8cc47d260 100644
--- a/sympy/stats/symbolic_multivariate_probability.py
+++ b/sympy/stats/symbolic_multivariate_probability.py
@@ -6,6 +6,7 @@
from sympy.core.sympify import _sympify
from sympy.stats.symbolic_probability import Variance, Covariance, Expectation
+
class ExpectationMatrix(Expectation, MatrixExpr):
"""
Expectation of a random matrix expression.
@@ -70,10 +71,15 @@ def expand(self, **hints):
return expr
if isinstance(expr, Add):
- return Add(*[Expectation(a, condition=condition).expand() for a in expr.args])
+ return Add.fromiter(Expectation(a, condition=condition).expand()
+ for a in expr.args)
+
+ expand_expr = _expand(expr)
+ if isinstance(expand_expr, Add):
+ return Add.fromiter(Expectation(a, condition=condition).expand()
+ for a in expand_expr.args)
+
elif isinstance(expr, (Mul, MatMul)):
- if isinstance(_expand(expr), Add):
- return Expectation(_expand(expr)).expand()
rv = []
nonrv = []
postnon = []
@@ -262,7 +268,7 @@ def expand(self, **hints):
addends = [a*CrossCovarianceMatrix(r1, r2, condition=condition)*b.transpose()
for (a, r1) in coeff_rv_list1 for (b, r2) in coeff_rv_list2]
- return Add(*addends)
+ return Add.fromiter(addends)
@classmethod
def _expand_single_argument(cls, expr):
@@ -292,4 +298,4 @@ def _get_mul_nonrv_rv_tuple(cls, m):
rv.append(a)
else:
nonrv.append(a)
- return (Mul(*nonrv), Mul(*rv))
+ return (Mul.fromiter(nonrv), Mul.fromiter(rv))
diff --git a/sympy/stats/symbolic_probability.py b/sympy/stats/symbolic_probability.py
index cf2a192d8d5d..b351c612b090 100644
--- a/sympy/stats/symbolic_probability.py
+++ b/sympy/stats/symbolic_probability.py
@@ -1,13 +1,16 @@
import itertools
-from sympy import Expr, Add, Mul, S, Integral, Eq, Sum, Symbol, expand as _expand
+from sympy import (Expr, Add, Mul, S, Integral, Eq, Sum, Symbol,
+ expand as _expand, Not)
from sympy.core.compatibility import default_sort_key
from sympy.core.parameters import global_parameters
from sympy.core.sympify import _sympify
+from sympy.core.relational import Relational
+from sympy.logic.boolalg import Boolean
from sympy.stats import variance, covariance
-from sympy.stats.rv import (RandomSymbol, probability, pspace,
+from sympy.stats.rv import (RandomSymbol, pspace, dependent,
given, sampling_E, RandomIndexedSymbol, is_random,
- PSpace)
+ PSpace, sampling_P, random_symbols)
__all__ = ['Probability', 'Expectation', 'Variance', 'Covariance']
@@ -58,8 +61,61 @@ def __new__(cls, prob, condition=None, **kwargs):
obj._condition = condition
return obj
+ def doit(self, **hints):
+ condition = self.args[0]
+ given_condition = self._condition
+ numsamples = hints.get('numsamples', False)
+ for_rewrite = not hints.get('for_rewrite', False)
+
+ if isinstance(condition, Not):
+ return S.One - self.func(condition.args[0], given_condition,
+ evaluate=for_rewrite).doit(**hints)
+
+ if condition.has(RandomIndexedSymbol):
+ return pspace(condition).probability(condition, given_condition,
+ evaluate=for_rewrite)
+
+ if isinstance(given_condition, RandomSymbol):
+ condrv = random_symbols(condition)
+ if len(condrv) == 1 and condrv[0] == given_condition:
+ from sympy.stats.frv_types import BernoulliDistribution
+ return BernoulliDistribution(self.func(condition).doit(**hints), 0, 1)
+ if any([dependent(rv, given_condition) for rv in condrv]):
+ return Probability(condition, given_condition)
+ else:
+ return Probability(condition).doit()
+
+ if given_condition is not None and \
+ not isinstance(given_condition, (Relational, Boolean)):
+ raise ValueError("%s is not a relational or combination of relationals"
+ % (given_condition))
+
+ if given_condition == False or condition is S.false:
+ return S.Zero
+ if not isinstance(condition, (Relational, Boolean)):
+ raise ValueError("%s is not a relational or combination of relationals"
+ % (condition))
+ if condition is S.true:
+ return S.One
+
+ if numsamples:
+ return sampling_P(condition, given_condition, numsamples=numsamples)
+ if given_condition is not None: # If there is a condition
+ # Recompute on new conditional expr
+ return Probability(given(condition, given_condition)).doit()
+
+ # Otherwise pass work off to the ProbabilitySpace
+ if pspace(condition) == PSpace():
+ return Probability(condition, given_condition)
+
+ result = pspace(condition).probability(condition)
+ if hasattr(result, 'doit') and for_rewrite:
+ return result.doit()
+ else:
+ return result
+
def _eval_rewrite_as_Integral(self, arg, condition=None, **kwargs):
- return probability(arg, condition, evaluate=False)
+ return self.func(arg, condition=condition).doit(for_rewrite=True)
_eval_rewrite_as_Sum = _eval_rewrite_as_Integral
@@ -160,11 +216,15 @@ def expand(self, **hints):
return expr
if isinstance(expr, Add):
- return Add(*[Expectation(a, condition=condition).expand()
- for a in expr.args])
+ return Add.fromiter(Expectation(a, condition=condition).expand()
+ for a in expr.args)
+
+ expand_expr = _expand(expr)
+ if isinstance(expand_expr, Add):
+ return Add.fromiter(Expectation(a, condition=condition).expand()
+ for a in expand_expr.args)
+
elif isinstance(expr, Mul):
- if isinstance(_expand(expr), Add):
- return Expectation(_expand(expr)).expand()
rv = []
nonrv = []
for a in expr.args:
@@ -172,7 +232,7 @@ def expand(self, **hints):
rv.append(a)
else:
nonrv.append(a)
- return Mul(*nonrv)*Expectation(Mul(*rv), condition=condition)
+ return Mul.fromiter(nonrv)*Expectation(Mul.fromiter(rv), condition=condition)
return self
@@ -345,7 +405,7 @@ def expand(self, **hints):
nonrv.append(a**2)
if len(rv) == 0:
return S.Zero
- return Mul(*nonrv)*Variance(Mul(*rv), condition)
+ return Mul.fromiter(nonrv)*Variance(Mul.fromiter(rv), condition)
# this expression contains a RandomSymbol somehow:
return self
@@ -454,7 +514,7 @@ def expand(self, **hints):
addends = [a*b*Covariance(*sorted([r1, r2], key=default_sort_key), condition=condition)
for (a, r1) in coeff_rv_list1 for (b, r2) in coeff_rv_list2]
- return Add(*addends)
+ return Add.fromiter(addends)
@classmethod
def _expand_single_argument(cls, expr):
@@ -484,7 +544,7 @@ def _get_mul_nonrv_rv_tuple(cls, m):
rv.append(a)
else:
nonrv.append(a)
- return (Mul(*nonrv), Mul(*rv))
+ return (Mul.fromiter(nonrv), Mul.fromiter(rv))
def _eval_rewrite_as_Expectation(self, arg1, arg2, condition=None, **kwargs):
e1 = Expectation(arg1*arg2, condition)
| diff --git a/sympy/stats/tests/test_symbolic_probability.py b/sympy/stats/tests/test_symbolic_probability.py
index d794becf85f0..53245a15e2da 100644
--- a/sympy/stats/tests/test_symbolic_probability.py
+++ b/sympy/stats/tests/test_symbolic_probability.py
@@ -40,6 +40,12 @@ def test_literal_probability():
assert Expectation(Expectation(2*X)).doit() == 4
assert Expectation(Expectation(2*X)).doit(deep=False) == Expectation(2*X)
assert Expectation(4*Expectation(2*X)).doit(deep=False) == 4*Expectation(2*X)
+ assert Expectation((X + Y)**3).expand() == 3*Expectation(X*Y**2) +\
+ 3*Expectation(X**2*Y) + Expectation(X**3) + Expectation(Y**3)
+ assert Expectation((X - Y)**3).expand() == 3*Expectation(X*Y**2) -\
+ 3*Expectation(X**2*Y) + Expectation(X**3) - Expectation(Y**3)
+ assert Expectation((X - Y)**2).expand() == -2*Expectation(X*Y) +\
+ Expectation(X**2) + Expectation(Y**2)
assert Variance(w).args == (w,)
assert Variance(w).expand() == 0
| [
{
"components": [
{
"doc": "",
"lines": [
64,
115
],
"name": "Probability.doit",
"signature": "def doit(self, **hints):",
"type": "function"
}
],
"file": "sympy/stats/symbolic_probability.py"
}
] | [
"test_literal_probability"
] | [] | This is a feature request that requires adding a new feature to the code repository.
<<NEW FEATURE REQUEST>>
<request>
[GSoC] Added doit method in symbolic Probability
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
Added doit method in symbolic Probability
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
#### Brief description of what is fixed or changed
`P=probability` now calls `Probability(...).doit()`
#### Other comments
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
* stats
* Added `doit` in class `Probability`
<!-- END RELEASE NOTES -->
ping @Upabjojr @czgdp1807 @jmig5776
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/stats/symbolic_probability.py]
(definition of Probability.doit:)
def doit(self, **hints):
[end of new definitions in sympy/stats/symbolic_probability.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | 1b4529a95ef641c2fc15889091b281644069d20e | ||
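The control flow the request describes for `Probability.doit` — trivial conditions evaluate immediately, a `Not` condition is complemented, and anything the class cannot resolve stays unevaluated — can be sketched with simplified stand-in classes. These are hypothetical toys, not the sympy classes; the real implementation in `sympy.stats.symbolic_probability` dispatches on many more cases (random indexed symbols, given conditions, sampling).

```python
# Minimal standalone sketch of the doit dispatch described in the request.
# Probability and Not here are hypothetical stand-ins for the sympy classes.

class Not:
    """Toy logical negation wrapper."""
    def __init__(self, arg):
        self.arg = arg

class Probability:
    """Toy unevaluated probability with a doit() dispatcher."""
    def __init__(self, condition):
        self.condition = condition

    def doit(self):
        # Trivial conditions evaluate immediately.
        if self.condition is True:
            return 1
        if self.condition is False:
            return 0
        # P(not A) = 1 - P(A), mirroring the Not branch in the patch.
        if isinstance(self.condition, Not):
            return 1 - Probability(self.condition.arg).doit()
        # Otherwise stay unevaluated, as the real class does for
        # conditions it cannot resolve.
        return self

assert Probability(True).doit() == 1
assert Probability(False).doit() == 0
assert Probability(Not(True)).doit() == 0
```

The real method additionally routes work to the probability space (`pspace(condition).probability(...)`) and supports sampling via `numsamples`, which this sketch omits.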
sympy__sympy-19694 | 19,694 | sympy/sympy | 1.7 | b3a8f9a4bc4633a4d4649ef91d576068c90b3700 | 2020-07-03T05:16:46Z | diff --git a/sympy/categories/baseclasses.py b/sympy/categories/baseclasses.py
index a29fa82e92eb..58b2dffc2689 100644
--- a/sympy/categories/baseclasses.py
+++ b/sympy/categories/baseclasses.py
@@ -1,5 +1,6 @@
from sympy.core import S, Basic, Dict, Symbol, Tuple, sympify
from sympy.core.compatibility import iterable
+from sympy.core.symbol import Str
from sympy.sets import Set, FiniteSet, EmptySet
@@ -196,8 +197,8 @@ def __new__(cls, domain, codomain, name):
if not name:
raise ValueError("Empty morphism names not allowed.")
- if not isinstance(name, Symbol):
- name = Symbol(name)
+ if not isinstance(name, Str):
+ name = Str(name)
return Basic.__new__(cls, domain, codomain, name)
@@ -449,17 +450,17 @@ class Category(Basic):
========
Diagram
"""
- def __new__(cls, symbol, objects=EmptySet, commutative_diagrams=EmptySet):
- if not symbol:
+ def __new__(cls, name, objects=EmptySet, commutative_diagrams=EmptySet):
+ if not name:
raise ValueError("A Category cannot have an empty name.")
- if not isinstance(symbol, Symbol):
- symbol = Symbol(symbol)
+ if not isinstance(name, Str):
+ name = Str(name)
if not isinstance(objects, Class):
objects = Class(objects)
- new_category = Basic.__new__(cls, symbol, objects,
+ new_category = Basic.__new__(cls, name, objects,
FiniteSet(*commutative_diagrams))
return new_category
diff --git a/sympy/core/symbol.py b/sympy/core/symbol.py
index d2eee90ea57c..2918324430d7 100644
--- a/sympy/core/symbol.py
+++ b/sympy/core/symbol.py
@@ -1,6 +1,6 @@
from sympy.core.assumptions import StdFactKB, _assume_defined
from sympy.core.compatibility import is_sequence, ordered
-from .basic import Basic
+from .basic import Basic, Atom
from .sympify import sympify
from .singleton import S
from .expr import Expr, AtomicExpr
@@ -15,6 +15,32 @@
import re as _re
import random
+class Str(Atom):
+ """
+ Represents string in SymPy.
+
+ Explanation
+ ===========
+
+ Previously, ``Symbol`` was used where string is needed in ``args`` of SymPy
+ objects, e.g. denoting the printing. However, since ``Symbol`` represents
+ mathematical scalar, this class should be used instead.
+
+ """
+ __slots__ = ('name',)
+
+ def __new__(cls, name, **kwargs):
+ if not isinstance(name, str):
+ raise TypeError("name should be a string, not %s" % repr(type(name)))
+ obj = Expr.__new__(cls, **kwargs)
+ obj.name = name
+ return obj
+
+ def __getnewargs__(self):
+ return (self.name,)
+
+ def _hashable_content(self):
+ return (self.name,)
def _filter_assumptions(kwargs):
"""Split the given dict into assumptions and non-assumptions.
@@ -27,7 +53,6 @@ def _filter_assumptions(kwargs):
Symbol._sanitize(assumptions)
return assumptions, nonassumptions
-
def _symbol(s, matching_symbol=None, **assumptions):
"""Return s if s is a Symbol, else if s is a string, return either
the matching_symbol if the names are the same or else a new symbol
diff --git a/sympy/printing/printer.py b/sympy/printing/printer.py
index 8098ef5ad74a..03bc9701b76a 100644
--- a/sympy/printing/printer.py
+++ b/sympy/printing/printer.py
@@ -340,3 +340,6 @@ def _as_ordered_terms(self, expr, order=None):
return list(expr.args)
else:
return expr.as_ordered_terms(order=order)
+
+ def _print_Str(self, s):
+ return self._print(s.name)
diff --git a/sympy/printing/repr.py b/sympy/printing/repr.py
index 04cfee5c3e61..53270d9f0a51 100644
--- a/sympy/printing/repr.py
+++ b/sympy/printing/repr.py
@@ -232,6 +232,9 @@ def _print_Sum2(self, expr):
return "Sum2(%s, (%s, %s, %s))" % (self._print(expr.f), self._print(expr.i),
self._print(expr.a), self._print(expr.b))
+ def _print_Str(self, s):
+ return "%s(%s)" % (s.__class__.__name__, self._print(s.name))
+
def _print_Symbol(self, expr):
d = expr._assumptions.generator
# print the dummy_index like it was an assumption
diff --git a/sympy/vector/coordsysrect.py b/sympy/vector/coordsysrect.py
index 80230acd09c5..7040110642a8 100644
--- a/sympy/vector/coordsysrect.py
+++ b/sympy/vector/coordsysrect.py
@@ -3,10 +3,11 @@
from sympy.core.compatibility import Callable
from sympy.core.cache import cacheit
from sympy.core import S, Dummy, Lambda
+from sympy.core.symbol import Str
from sympy import symbols, MatrixBase, ImmutableDenseMatrix
from sympy.solvers import solve
from sympy.vector.scalar import BaseScalar
-from sympy import eye, trigsimp, ImmutableMatrix as Matrix, Symbol, sin, cos,\
+from sympy import eye, trigsimp, ImmutableMatrix as Matrix, sin, cos,\
sqrt, diff, Tuple, acos, atan2, simplify
import sympy.vector
from sympy.vector.orienters import (Orienter, AxisOrienter, BodyOrienter,
@@ -87,8 +88,8 @@ def __new__(cls, name, transformation=None, parent=None, location=None,
transformation = Lambda((x1, x2, x3),
transformation(x1, x2, x3))
elif isinstance(transformation, str):
- transformation = Symbol(transformation)
- elif isinstance(transformation, (Symbol, Lambda)):
+ transformation = Str(transformation)
+ elif isinstance(transformation, (Str, Lambda)):
pass
else:
raise TypeError("transformation: "
@@ -141,7 +142,7 @@ def __new__(cls, name, transformation=None, parent=None, location=None,
lambda_lame = CoordSys3D._get_lame_coeff('cartesian')
lambda_inverse = lambda x, y, z: r.inv()*Matrix(
[x-l[0], y-l[1], z-l[2]])
- elif isinstance(transformation, Symbol):
+ elif isinstance(transformation, Str):
trname = transformation.name
lambda_transformation = CoordSys3D._get_transformation_lambdas(trname)
if parent is not None:
@@ -165,7 +166,7 @@ def __new__(cls, name, transformation=None, parent=None, location=None,
if variable_names is None:
if isinstance(transformation, Lambda):
variable_names = ["x1", "x2", "x3"]
- elif isinstance(transformation, Symbol):
+ elif isinstance(transformation, Str):
if transformation.name == 'spherical':
variable_names = ["r", "theta", "phi"]
elif transformation.name == 'cylindrical':
@@ -188,10 +189,10 @@ def __new__(cls, name, transformation=None, parent=None, location=None,
# they may actually be 'coincident' wrt the root system.
if parent is not None:
obj = super().__new__(
- cls, Symbol(name), transformation, parent)
+ cls, Str(name), transformation, parent)
else:
obj = super().__new__(
- cls, Symbol(name), transformation)
+ cls, Str(name), transformation)
obj._name = name
# Initialize the base vectors
diff --git a/sympy/vector/orienters.py b/sympy/vector/orienters.py
index 241f1fa40dfa..bcaab04a8eca 100644
--- a/sympy/vector/orienters.py
+++ b/sympy/vector/orienters.py
@@ -1,7 +1,8 @@
from sympy.core.basic import Basic
from sympy import (sympify, eye, sin, cos, rot_axis1, rot_axis2,
- rot_axis3, ImmutableMatrix as Matrix, Symbol)
+ rot_axis3, ImmutableMatrix as Matrix)
from sympy.core.cache import cacheit
+from sympy.core.symbol import Str
import sympy.vector
@@ -104,7 +105,7 @@ class ThreeAngleOrienter(Orienter):
"""
def __new__(cls, angle1, angle2, angle3, rot_order):
- if isinstance(rot_order, Symbol):
+ if isinstance(rot_order, Str):
rot_order = rot_order.name
approved_orders = ('123', '231', '312', '132', '213',
@@ -137,7 +138,7 @@ def __new__(cls, angle1, angle2, angle3, rot_order):
parent_orient = parent_orient.T
obj = super().__new__(
- cls, angle1, angle2, angle3, Symbol(rot_order))
+ cls, angle1, angle2, angle3, Str(rot_order))
obj._angle1 = angle1
obj._angle2 = angle2
obj._angle3 = angle3
diff --git a/sympy/vector/point.py b/sympy/vector/point.py
index 4528c9e6db85..e46bb22d9103 100644
--- a/sympy/vector/point.py
+++ b/sympy/vector/point.py
@@ -1,8 +1,8 @@
from sympy.core.basic import Basic
+from sympy.core.symbol import Str
from sympy.vector.vector import Vector
from sympy.vector.coordsysrect import CoordSys3D
from sympy.vector.functions import _path
-from sympy import Symbol
from sympy.core.cache import cacheit
@@ -25,9 +25,9 @@ def __new__(cls, name, position=Vector.zero, parent_point=None):
parent_point))
# Super class construction
if parent_point is None:
- obj = super().__new__(cls, Symbol(name), position)
+ obj = super().__new__(cls, Str(name), position)
else:
- obj = super().__new__(cls, Symbol(name), position, parent_point)
+ obj = super().__new__(cls, Str(name), position, parent_point)
# Decide the object parameters
obj._name = name
obj._pos = position
| diff --git a/sympy/core/tests/test_args.py b/sympy/core/tests/test_args.py
index c9cac546ef8b..6bdc452c62c2 100644
--- a/sympy/core/tests/test_args.py
+++ b/sympy/core/tests/test_args.py
@@ -1749,6 +1749,10 @@ def test_sympy__stats__symbolic_multivariate_probability__CrossCovarianceMatrix(
assert _test_args(CrossCovarianceMatrix(RandomMatrixSymbol('R', 3, 1),
RandomMatrixSymbol('X', 3, 1)))
+def test_sympy__core__symbol__Str():
+ from sympy.core.symbol import Str
+ assert _test_args(Str('t'))
+
def test_sympy__core__symbol__Dummy():
from sympy.core.symbol import Dummy
assert _test_args(Dummy('t'))
diff --git a/sympy/core/tests/test_symbol.py b/sympy/core/tests/test_symbol.py
index a58ff66fbb25..a07a1d2f3659 100644
--- a/sympy/core/tests/test_symbol.py
+++ b/sympy/core/tests/test_symbol.py
@@ -1,10 +1,16 @@
from sympy import (Symbol, Wild, GreaterThan, LessThan, StrictGreaterThan,
StrictLessThan, pi, I, Rational, sympify, symbols, Dummy)
-from sympy.core.symbol import uniquely_named_symbol, _symbol
+from sympy.core.symbol import uniquely_named_symbol, _symbol, Str
from sympy.testing.pytest import raises
from sympy.core.symbol import disambiguate
+def test_Str():
+ a1 = Str('a')
+ a2 = Str('a')
+ b = Str('b')
+ assert a1 == a2 != b
+ raises(TypeError, lambda: Str())
def test_Symbol():
a = Symbol("a")
diff --git a/sympy/utilities/tests/test_pickling.py b/sympy/utilities/tests/test_pickling.py
index 35fae20dcce1..c659e6ce8b86 100644
--- a/sympy/utilities/tests/test_pickling.py
+++ b/sympy/utilities/tests/test_pickling.py
@@ -9,7 +9,7 @@
from sympy.core.basic import Atom, Basic
from sympy.core.core import BasicMeta
from sympy.core.singleton import SingletonRegistry
-from sympy.core.symbol import Dummy, Symbol, Wild
+from sympy.core.symbol import Str, Dummy, Symbol, Wild
from sympy.core.numbers import (E, I, pi, oo, zoo, nan, Integer,
Rational, Float)
from sympy.core.relational import (Equality, GreaterThan, LessThan, Relational,
@@ -92,6 +92,8 @@ def test_core_basic():
SingletonRegistry, S):
check(c)
+def test_core_Str():
+ check(Str('x'))
def test_core_symbol():
# make the Symbol a unique name that doesn't class with any other
| [
{
"components": [
{
"doc": "Represents string in SymPy.\n\nExplanation\n===========\n\nPreviously, ``Symbol`` was used where string is needed in ``args`` of SymPy\nobjects, e.g. denoting the printing. However, since ``Symbol`` represents\nmathematical scalar, this class should be used instead.",
... | [
"test_sympy__core__symbol__Str",
"test_Str",
"test_Symbol",
"test_Dummy",
"test_Dummy_force_dummy_index",
"test_lt_gt",
"test_no_len",
"test_ineq_unequal",
"test_Wild_properties",
"test_symbols",
"test_symbols_become_functions_issue_3539",
"test_unicode",
"test_core_basic",
"test_core_Str"... | [
"test_all_classes_are_tested",
"test_sympy__assumptions__assume__AppliedPredicate",
"test_sympy__assumptions__assume__Predicate",
"test_sympy__assumptions__sathandlers__UnevaluatedOnFree",
"test_sympy__assumptions__sathandlers__AllArgs",
"test_sympy__assumptions__sathandlers__AnyArgs",
"test_sympy__assu... | This is a feature request which requires a new feature to add in the code repository.
<<NEW FEATURE REQUEST>>
<request>
Introduce Str class
<!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more information). Also, please
write a comment on that issue linking back to this pull request once it is
open. -->
Fixes #19677
#### Brief description of what is fixed or changed
`Str` class is introduced in `sympy.core.symbol` module. This class can be used to denote string-like object in `args`.
#### Other comments
Before this PR, `Symbol` was used for string-like arguments, such as the name of the object. However, `Symbol` represents mathematical scalar with assumptions and operations defined, so this is an abuse of usage.
Brief discussion was held in [google groups page](https://groups.google.com/g/sympy/c/tNm1LDcEXA4).
Misuse of `Symbol` in vector and categories module is fixed in this PR. More may be found in other modules, and they need to be handled properly as well.
#### Release Notes
<!-- Write the release notes for this release below. See
https://github.com/sympy/sympy/wiki/Writing-Release-Notes for more information
on how to write release notes. The bot will check your release notes
automatically to see if they are formatted correctly. -->
<!-- BEGIN RELEASE NOTES -->
NO ENTRY
<!-- END RELEASE NOTES -->
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sympy/core/symbol.py]
(definition of Str:)
class Str(Atom):
"""Represents string in SymPy.
Explanation
===========
Previously, ``Symbol`` was used where string is needed in ``args`` of SymPy
objects, e.g. denoting the printing. However, since ``Symbol`` represents
mathematical scalar, this class should be used instead."""
(definition of Str.__new__:)
def __new__(cls, name, **kwargs):
(definition of Str.__getnewargs__:)
def __getnewargs__(self):
(definition of Str._hashable_content:)
def _hashable_content(self):
[end of new definitions in sympy/core/symbol.py]
[start of new definitions in sympy/printing/printer.py]
(definition of Printer._print_Str:)
def _print_Str(self, s):
[end of new definitions in sympy/printing/printer.py]
[start of new definitions in sympy/printing/repr.py]
(definition of ReprPrinter._print_Str:)
def _print_Str(self, s):
[end of new definitions in sympy/printing/repr.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>> | Here is the discussion in the issues of the pull request.
<issues>
Introducing Str class to core
This topic is briefly discussed in [google groups page](https://groups.google.com/u/2/g/sympy/c/tNm1LDcEXA4).
Currently, many classes use `Symbol` for instance naming. However, since the purpose of `Symbol` is to represent the mathematical scalar whose assumptions are defined, it would be better to introduce `Str` class for this purpose.
Any opinion, please?
----------
I think that a string object that doesn't do anything apart from comparing equal/unequal and can be substituted seems reasonable to me.
Awkwardly sympify already accepts strings and parses them e.g. `sympify('1+1')` so it isn't possible to use `sympify` here without deprecating/removing parsing from there.
@oscarbenjamin
I think `sympify` doesn't have to convert `str` to `Str`, since `Str` is likely to be limited to internal use only.
That would create a special case that would end up needing to be handled in many places e.g. Tuple:
https://github.com/sympy/sympy/blob/e926fc095fd6402f4d01dc4bf119e63e9fb56a76/sympy/core/containers.py#L50
My hunch is that the `sympify('x') -> Symbol('x')` behavior does not need to be changed immediately. `Tuple('x')` can return `Tuple(Symbol('x'))` as before with no problem. Users will expect 'x' to be transformed to a scalar more than they expect it to be transformed to `Str`.
Okay so the string inputs would not be processed by `sympify/_sympify` but would be explicitly converted to Str when a string argument is expected and needs to be made Basic. Maybe that could work.
The other side to this is that we should deprecate using sympify to parse strings but that's not easy because it's widespread and widely used.
--------------------
</issues> | 1b4529a95ef641c2fc15889091b281644069d20e |