Dataset columns:
source: stringclasses (1 value)
version: stringclasses (1 value)
module: stringclasses (43 values)
function: stringclasses (307 values)
input: stringlengths (3 to 496)
expected: stringlengths (0 to 40.5k)
signature: stringclasses (0 values)
cpython
cfcd524
statistics
StatisticsError.mode
>>> mode([1, 1, 2, 3, 3, 3, 3, 4])
3 This also works with nominal (non-numeric) data:
null
cpython
cfcd524
statistics
StatisticsError.mode
>>> mode(["red", "blue", "blue", "red", "green", "red", "red"])
'red' If there are multiple modes with the same frequency, return the first one encountered:
null
cpython
cfcd524
statistics
StatisticsError.mode
>>> mode(['red', 'red', 'green', 'blue', 'blue'])
'red' If *data* is empty, ``mode`` raises StatisticsError.
null
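The empty-data behaviour noted above can be exercised directly; a minimal sketch using only standard-library calls:

```python
from statistics import mode, StatisticsError

# mode() raises StatisticsError when given no data points.
try:
    mode([])
except StatisticsError as exc:
    message = str(exc)
```
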
cpython
cfcd524
statistics
StatisticsError.multimode
>>> multimode('aabbbbbbbbcc')
['b']
null
cpython
cfcd524
statistics
StatisticsError.multimode
>>> multimode('aabbbbccddddeeffffgg')
['b', 'd', 'f']
null
cpython
cfcd524
statistics
StatisticsError.multimode
>>> multimode('')
[]
null
cpython
cfcd524
statistics
StatisticsError.variance
>>> data = [2.75, 1.75, 1.25, 0.25, 0.5, 1.25, 3.5]
null
cpython
cfcd524
statistics
StatisticsError.variance
>>> variance(data)
1.3720238095238095 If you have already calculated the mean of your data, you can pass it as the optional second argument ``xbar`` to avoid recalculating it:
null
cpython
cfcd524
statistics
StatisticsError.variance
>>> m = mean(data)
null
cpython
cfcd524
statistics
StatisticsError.variance
>>> variance(data, m)
1.3720238095238095 This function does not check that ``xbar`` is actually the mean of ``data``. Giving arbitrary values for ``xbar`` may lead to invalid or impossible results. Decimals and Fractions are supported:
null
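As a sanity check on the ``xbar`` shortcut described above: passing the true mean must reproduce the two-pass result. A minimal sketch reusing the examples' own data:

```python
from statistics import mean, variance

data = [2.75, 1.75, 1.25, 0.25, 0.5, 1.25, 3.5]

one_pass = variance(data, mean(data))  # reuse a precomputed mean
two_pass = variance(data)              # let variance() compute the mean itself
```
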
cpython
cfcd524
statistics
StatisticsError.variance
>>> from decimal import Decimal as D
null
cpython
cfcd524
statistics
StatisticsError.variance
>>> variance([D("27.5"), D("30.25"), D("30.25"), D("34.5"), D("41.75")])
Decimal('31.01875')
null
cpython
cfcd524
statistics
StatisticsError.variance
>>> from fractions import Fraction as F
null
cpython
cfcd524
statistics
StatisticsError.variance
>>> variance([F(1, 6), F(1, 2), F(5, 3)])
Fraction(67, 108)
null
cpython
cfcd524
statistics
StatisticsError.pvariance
>>> data = [0.0, 0.25, 0.25, 1.25, 1.5, 1.75, 2.75, 3.25]
null
cpython
cfcd524
statistics
StatisticsError.pvariance
>>> pvariance(data)
1.25 If you have already calculated the mean of the data, you can pass it as the optional second argument to avoid recalculating it:
null
cpython
cfcd524
statistics
StatisticsError.pvariance
>>> mu = mean(data)
null
cpython
cfcd524
statistics
StatisticsError.pvariance
>>> pvariance(data, mu)
1.25 Decimals and Fractions are supported:
null
cpython
cfcd524
statistics
StatisticsError.pvariance
>>> from decimal import Decimal as D
null
cpython
cfcd524
statistics
StatisticsError.pvariance
>>> pvariance([D("27.5"), D("30.25"), D("30.25"), D("34.5"), D("41.75")])
Decimal('24.815')
null
cpython
cfcd524
statistics
StatisticsError.pvariance
>>> from fractions import Fraction as F
null
cpython
cfcd524
statistics
StatisticsError.pvariance
>>> pvariance([F(1, 4), F(5, 4), F(1, 2)])
Fraction(13, 72)
null
cpython
cfcd524
statistics
StatisticsError.stdev
>>> stdev([1.5, 2.5, 2.5, 2.75, 3.25, 4.75])
1.0810874155219827
null
cpython
cfcd524
statistics
StatisticsError.pstdev
>>> pstdev([1.5, 2.5, 2.5, 2.75, 3.25, 4.75])
0.986893273527251
null
cpython
cfcd524
statistics
StatisticsError.covariance
>>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
null
cpython
cfcd524
statistics
StatisticsError.covariance
>>> y = [1, 2, 3, 1, 2, 3, 1, 2, 3]
null
cpython
cfcd524
statistics
StatisticsError.covariance
>>> covariance(x, y)
0.75
null
cpython
cfcd524
statistics
StatisticsError.covariance
>>> z = [9, 8, 7, 6, 5, 4, 3, 2, 1]
null
cpython
cfcd524
statistics
StatisticsError.covariance
>>> covariance(x, z)
-7.5
null
cpython
cfcd524
statistics
StatisticsError.covariance
>>> covariance(z, x)
-7.5
null
cpython
cfcd524
statistics
StatisticsError.correlation
>>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
null
cpython
cfcd524
statistics
StatisticsError.correlation
>>> y = [9, 8, 7, 6, 5, 4, 3, 2, 1]
null
cpython
cfcd524
statistics
StatisticsError.correlation
>>> correlation(x, x)
1.0
null
cpython
cfcd524
statistics
StatisticsError.correlation
>>> correlation(x, y)
-1.0

If *method* is "ranked", computes Spearman's rank correlation coefficient for two inputs. The data is replaced by ranks. Ties are averaged so that equal values receive the same rank. The resulting coefficient measures the strength of a monotonic relationship. Spearman's rank correlation coefficient is appropriate for ordinal data or for continuous data that doesn't meet the linear proportion requirement for Pearson's correlation coefficient.
null
cpython
cfcd524
statistics
StatisticsError.linear_regression
>>> x = [1, 2, 3, 4, 5]
null
cpython
cfcd524
statistics
StatisticsError.linear_regression
>>> noise = NormalDist().samples(5, seed=42)
null
cpython
cfcd524
statistics
StatisticsError.linear_regression
>>> y = [3 * x[i] + 2 + noise[i] for i in range(5)]
null
cpython
cfcd524
statistics
StatisticsError.linear_regression
>>> linear_regression(x, y) #doctest: +ELLIPSIS
LinearRegression(slope=3.17495..., intercept=1.00925...) If *proportional* is true, the independent variable *x* and the dependent variable *y* are assumed to be directly proportional. The data is fit to a line passing through the origin. Since the *intercept* will always be 0.0, the underlying linear function simplifies to: y = slope * x + noise
null
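For the proportional fit described above, the least-squares line through the origin has the closed form slope = Σxy / Σx². A minimal sketch with plain arithmetic and no noise term:

```python
# Least-squares line through the origin: minimize sum((y - slope*x)**2).
x = [1, 2, 3, 4, 5]
y = [2.0, 4.0, 6.0, 8.0, 10.0]  # exactly proportional: y = 2x

slope = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
```

On Python 3.11+, `linear_regression(x, y, proportional=True)` should recover the same slope with an intercept of 0.0.
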
cpython
cfcd524
statistics
StatisticsError.linear_regression
>>> y = [3 * x[i] + noise[i] for i in range(5)]
null
cpython
cfcd524
statistics
StatisticsError.linear_regression
>>> linear_regression(x, y, proportional=True) #doctest: +ELLIPSIS
LinearRegression(slope=2.90475..., intercept=0.0)
null
cpython
cfcd524
statistics
StatisticsError.kde
>>> sample = [-2.1, -1.3, -0.4, 1.9, 5.1, 6.2]
null
cpython
cfcd524
statistics
StatisticsError.kde
>>> f_hat = kde(sample, h=1.5)
Compute the area under the curve:
null
cpython
cfcd524
statistics
StatisticsError.kde
>>> area = sum(f_hat(x) for x in range(-20, 20))
null
cpython
cfcd524
statistics
StatisticsError.kde
>>> round(area, 4)
1.0 Plot the estimated probability density function at evenly spaced points from -6 to 10:
null
cpython
cfcd524
statistics
StatisticsError.kde
>>> for x in range(-6, 11):
...     density = f_hat(x)
...     plot = ' ' * int(density * 400) + 'x'
...     print(f'{x:2}: {density:.3f} {plot}')
...
-6: 0.002 x
-5: 0.009 x
-4: 0.031 x
-3: 0.070 x
-2: 0.111 x
-1: 0.125 x
 0: 0.110 x
 1: 0.086 x
 2: 0.068 x
 3: 0.059 x
 4: 0.066 x
 5: 0.082 x
 6: 0.082 x
 7: 0.058 x
 8: 0.028 x
 9: 0.009 x
10: 0.002 x

Estimate P(4.5 < X <= 7.5), the probability that a new sample value will be between 4.5 and 7.5:
null
cpython
cfcd524
statistics
StatisticsError.kde
>>> cdf = kde(sample, h=1.5, cumulative=True)
null
cpython
cfcd524
statistics
StatisticsError.kde
>>> round(cdf(7.5) - cdf(4.5), 2)
0.22

References
----------
Kernel density estimation and its application:
https://www.itm-conferences.org/articles/itmconf/pdf/2018/08/itmconf_sam2018_00037.pdf

Kernel functions in common use:
https://en.wikipedia.org/wiki/Kernel_(statistics)#kernel_functions_in_common_use

Interactive graphical demonstration and exploration:
https://demonstrations.wolfram.com/KernelDensityEstimation/

Kernel estimation of cumulative distribution function of a random variable with bounded support:
https://www.econstor.eu/bitstream/10419/207829/1/10.21307_stattrans-2016-037.pdf
null
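The estimate ``kde`` builds with its default normal kernel can be sketched by hand: the density at x is the average of one Gaussian bump of bandwidth *h* centred on each sample point. A hedged re-implementation (not the library code) reproducing the area check above:

```python
from math import exp, pi, sqrt

sample = [-2.1, -1.3, -0.4, 1.9, 5.1, 6.2]
h = 1.5

def f_hat(x, data=sample, h=h):
    # Average of standard-normal kernels, each rescaled by the bandwidth h.
    kernel = lambda t: exp(-t * t / 2) / sqrt(2 * pi)
    return sum(kernel((x - xi) / h) for xi in data) / (len(data) * h)

# Unit-spaced Riemann sum approximating the integral of the density.
area = sum(f_hat(x) for x in range(-20, 20))
```
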
cpython
cfcd524
statistics
StatisticsError.kde_random
>>> data = [-2.1, -1.3, -0.4, 1.9, 5.1, 6.2]
null
cpython
cfcd524
statistics
StatisticsError.kde_random
>>> rand = kde_random(data, h=1.5, seed=8675309)
null
cpython
cfcd524
statistics
StatisticsError.kde_random
>>> new_selections = [rand() for i in range(10)]
null
cpython
cfcd524
statistics
StatisticsError.kde_random
>>> [round(x, 1) for x in new_selections]
[0.7, 6.2, 1.2, 6.9, 7.0, 1.8, 2.5, -0.5, -1.8, 5.6]
null
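Sampling from a normal-kernel KDE can also be sketched without ``kde_random`` (which is new in Python 3.13): pick one of the data points uniformly at random, then add Gaussian noise with standard deviation *h*. A hedged sketch of that mixture-sampling idea, not the stdlib implementation:

```python
import random
from statistics import mean

data = [-2.1, -1.3, -0.4, 1.9, 5.1, 6.2]
h = 1.5

rng = random.Random(8675309)  # fixed seed for reproducibility

def draw():
    # Mixture sampling: choose a kernel centre, then sample from that kernel.
    return rng.choice(data) + rng.gauss(0.0, h)

selections = [draw() for _ in range(10_000)]
# The sample mean should land near the mean of the original data.
```
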
cpython
cfcd524
statistics
NormalDist.overlap
>>> N1 = NormalDist(2.4, 1.6)
null
cpython
cfcd524
statistics
NormalDist.overlap
>>> N2 = NormalDist(3.2, 2.0)
null
cpython
cfcd524
statistics
NormalDist.overlap
>>> N1.overlap(N2)
0.8035050657330205
null
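``overlap`` returns the shared area under the two probability density functions (a value between 0.0 and 1.0). A hedged numeric cross-check: integrate the pointwise minimum of the two densities on a fine grid and compare with the method's result:

```python
from statistics import NormalDist

N1 = NormalDist(2.4, 1.6)
N2 = NormalDist(3.2, 2.0)

# Midpoint-rule integral of min(pdf1, pdf2) over a range covering both tails.
step = 0.001
lo, hi = -15.0, 20.0
grid = (lo + step * (i + 0.5) for i in range(int((hi - lo) / step)))
estimate = sum(min(N1.pdf(x), N2.pdf(x)) for x in grid) * step
```
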
cpython
cfcd524
statistics
NormalDist._sum
>>> _sum([3, 2.25, 4.5, -0.5, 0.25])
(<class 'float'>, Fraction(19, 2), 5)

Some sources of round-off error will be avoided:

# Built-in sum returns zero.
null
cpython
cfcd524
statistics
NormalDist._sum
>>> _sum([1e50, 1, -1e50] * 1000)
(<class 'float'>, Fraction(1000, 1), 3000) Fractions and Decimals are also supported:
null
cpython
cfcd524
statistics
NormalDist._sum
>>> from fractions import Fraction as F
null
cpython
cfcd524
statistics
NormalDist._sum
>>> _sum([F(2, 3), F(7, 5), F(1, 4), F(5, 6)])
(<class 'fractions.Fraction'>, Fraction(63, 20), 4)
null
cpython
cfcd524
statistics
NormalDist._sum
>>> from decimal import Decimal as D
null
cpython
cfcd524
statistics
NormalDist._sum
>>> data = [D("0.1375"), D("0.2108"), D("0.3061"), D("0.0419")]
null
cpython
cfcd524
statistics
NormalDist._sum
>>> _sum(data)
(<class 'decimal.Decimal'>, Fraction(6963, 10000), 4) Mixed types are currently treated as an error, except that int is allowed.
null
cpython
cfcd524
statistics
NormalDist._exact_ratio
>>> _exact_ratio(0.25)
(1, 4) x is expected to be an int, Fraction, Decimal or float.
null
cpython
cfcd524
statistics
NormalDist._rank
>>> data = [31, 56, 31, 25, 75, 18]
null
cpython
cfcd524
statistics
NormalDist._rank
>>> _rank(data)
[3.5, 5.0, 3.5, 2.0, 6.0, 1.0] The operation is idempotent:
null
cpython
cfcd524
statistics
NormalDist._rank
>>> _rank([3.5, 5.0, 3.5, 2.0, 6.0, 1.0])
[3.5, 5.0, 3.5, 2.0, 6.0, 1.0] It is possible to rank the data in reverse order so that the highest value has rank 1. Also, a key-function can extract the field to be ranked:
null
cpython
cfcd524
statistics
NormalDist._rank
>>> goals = [('eagles', 45), ('bears', 48), ('lions', 44)]
null
cpython
cfcd524
statistics
NormalDist._rank
>>> _rank(goals, key=itemgetter(1), reverse=True)
[2.0, 1.0, 3.0] Ranks are conventionally numbered starting from one; however, setting *start* to zero allows the ranks to be used as array indices:
null
cpython
cfcd524
statistics
NormalDist._rank
>>> prize = ['Gold', 'Silver', 'Bronze', 'Certificate']
null
cpython
cfcd524
statistics
NormalDist._rank
>>> scores = [8.1, 7.3, 9.4, 8.3]
null
cpython
cfcd524
statistics
NormalDist._rank
>>> [prize[int(i)] for i in _rank(scores, start=0, reverse=True)]
['Bronze', 'Certificate', 'Gold', 'Silver']
null
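``_rank`` is a private helper, but the tie-averaging behaviour it demonstrates is easy to reproduce: equal values all receive the average of the positions they occupy in sorted order. A hedged standalone sketch (hypothetical ``rank`` function, not the stdlib helper; ``key``/``reverse`` handling omitted):

```python
def rank(data, start=1):
    """Rank values; tied values all receive the average of their positions."""
    order = sorted(range(len(data)), key=lambda i: data[i])
    ranks = [0.0] * len(data)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values equal to data[order[i]].
        while j + 1 < len(order) and data[order[j + 1]] == data[order[i]]:
            j += 1
        average = (i + j) / 2 + start  # mean of positions i..j, offset by start
        for k in range(i, j + 1):
            ranks[order[k]] = average
        i = j + 1
    return ranks

result = rank([31, 56, 31, 25, 75, 18])
```
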
cpython
cfcd524
doctest
_SpoofOut._ellipsis_match
>>> _ellipsis_match('aa...aa', 'aaa')
False
null
cpython
cfcd524
doctest
DocTestRunner
>>> save_colorize = _colorize.COLORIZE
null
cpython
cfcd524
doctest
DocTestRunner
>>> _colorize.COLORIZE = False
null
cpython
cfcd524
doctest
DocTestRunner
>>> tests = DocTestFinder().find(_TestClass)
null
cpython
cfcd524
doctest
DocTestRunner
>>> runner = DocTestRunner(verbose=False)
null
cpython
cfcd524
doctest
DocTestRunner
>>> tests.sort(key = lambda test: test.name)
null
cpython
cfcd524
doctest
DocTestRunner
>>> for test in tests:
...     print(test.name, '->', runner.run(test))
_TestClass -> TestResults(failed=0, attempted=2)
_TestClass.__init__ -> TestResults(failed=0, attempted=2)
_TestClass.get -> TestResults(failed=0, attempted=2)
_TestClass.square -> TestResults(failed=0, attempted=1)

The `summarize` method prints a summary of all the test cases that have been run by the runner, and returns an aggregated TestResults instance:
null
cpython
cfcd524
doctest
DocTestRunner
>>> runner.summarize(verbose=1)
4 items passed all tests:
   2 tests in _TestClass
   2 tests in _TestClass.__init__
   2 tests in _TestClass.get
   1 test in _TestClass.square
7 tests in 4 items.
7 passed.
Test passed.
TestResults(failed=0, attempted=7)

The aggregated number of tried examples and failed examples is also available via the `tries`, `failures` and `skips` attributes:
null
cpython
cfcd524
doctest
DocTestRunner
>>> runner.tries
7
null
cpython
cfcd524
doctest
DocTestRunner
>>> runner.failures
0
null
cpython
cfcd524
doctest
DocTestRunner
>>> runner.skips
0

The comparison between expected outputs and actual outputs is done by an `OutputChecker`. This comparison may be customized with a number of option flags; see the documentation for `testmod` for more information. If the option flags are insufficient, then the comparison may also be customized by passing a subclass of `OutputChecker` to the constructor.

The test runner's display output can be controlled in two ways. First, an output function (`out`) can be passed to `TestRunner.run`; this function will be called with strings that should be displayed. It defaults to `sys.stdout.write`. If capturing the output is not sufficient, then the display output can also be customized by subclassing DocTestRunner and overriding the methods `report_start`, `report_success`, `report_unexpected_exception`, and `report_failure`.
null
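The `out` hook described above can be exercised directly: any callable that accepts strings, such as a list's `append`, will receive the runner's report text instead of it going to `sys.stdout`. A minimal sketch with a hypothetical one-example doctest:

```python
import doctest

# A tiny doctest with one passing example.
test = doctest.DocTestParser().get_doctest('>>> 1 + 1\n2\n',
                                           {}, 'demo', 'demo.py', 0)

runner = doctest.DocTestRunner(verbose=False)
captured = []  # report strings land here instead of stdout
results = runner.run(test, out=captured.append)
# With verbose=False and no failures, nothing is reported at all.
```
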
cpython
cfcd524
doctest
DocTestRunner
>>> _colorize.COLORIZE = save_colorize
null
cpython
cfcd524
doctest
DebugRunner
>>> runner = DebugRunner(verbose=False)
null
cpython
cfcd524
doctest
DebugRunner
>>> test = DocTestParser().get_doctest('>>> raise KeyError\n42',
...                                    {}, 'foo', 'foo.py', 0)
null
cpython
cfcd524
doctest
DebugRunner
>>> try:
...     runner.run(test)
... except UnexpectedException as f:
...     failure = f
null
cpython
cfcd524
doctest
DebugRunner
>>> failure.test is test
True
null
cpython
cfcd524
doctest
DebugRunner
>>> failure.example.want
'42\n'
null
cpython
cfcd524
doctest
DebugRunner
>>> exc_info = failure.exc_info
null
cpython
cfcd524
doctest
DebugRunner
>>> raise exc_info[1] # Already has the traceback
Traceback (most recent call last):
...
KeyError

We wrap the original exception to give the calling application access to the test and example information. If the output doesn't match, then a DocTestFailure is raised:
null
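The wrapping behaviour just described can be reproduced end-to-end: DebugRunner re-raises the first unexpected exception as `UnexpectedException`, with the original exception available via `exc_info`. A minimal sketch (hypothetical `ValueError` example in place of the document's `KeyError`):

```python
import doctest

runner = doctest.DebugRunner(verbose=False)
test = doctest.DocTestParser().get_doctest(
    '>>> raise ValueError("boom")\n', {}, 'demo', 'demo.py', 0)

try:
    runner.run(test)
except doctest.UnexpectedException as wrapped:
    original_type = wrapped.exc_info[0]  # the exception the example raised
```
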
cpython
cfcd524
doctest
DebugRunner
>>> test = DocTestParser().get_doctest('''
...     >>> x = 1
...     >>> x
...     2
...     ''', {}, 'foo', 'foo.py', 0)
null
cpython
cfcd524
doctest
DebugRunner
>>> try:
...     runner.run(test)
... except DocTestFailure as f:
...     failure = f
DocTestFailure objects provide access to the test:
null
cpython
cfcd524
doctest
DebugRunner
>>> failure.test is test
True As well as to the example:
null
cpython
cfcd524
doctest
DebugRunner
>>> failure.example.want
'2\n' and the actual output:
null
cpython
cfcd524
doctest
DebugRunner
>>> failure.got
'1\n' If a failure or error occurs, the globals are left intact:
null
cpython
cfcd524
doctest
DebugRunner
>>> del test.globs['__builtins__']
null
cpython
cfcd524
doctest
DebugRunner
>>> test.globs
{'x': 1}
null
cpython
cfcd524
doctest
DebugRunner
>>> test = DocTestParser().get_doctest('''
...     >>> x = 2
...     >>> raise KeyError
...     ''', {}, 'foo', 'foo.py', 0)
null
cpython
cfcd524
doctest
DebugRunner
>>> runner.run(test)
Traceback (most recent call last):
...
doctest.UnexpectedException: <DocTest foo from foo.py:0 (2 examples)>
null
cpython
cfcd524
doctest
DebugRunner
>>> del test.globs['__builtins__']
null
cpython
cfcd524
doctest
DebugRunner
>>> test.globs
{'x': 2} But the globals are cleared if there is no error:
null