# File: colour/contrast/barten1999.py (repo: OmarWagih1/colour, license: BSD-3-Clause)
# -*- coding: utf-8 -*-
"""
Barten (1999) Contrast Sensitivity Function
===========================================
Defines *Barten (1999)* contrast sensitivity function:
- :func:`colour.contrast.contrast_sensitivity_function_Barten1999`
References
----------
- :cite:`Barten1999` : Barten, P. G. (1999). Contrast Sensitivity of the
Human Eye and Its Effects on Image Quality. SPIE. doi:10.1117/3.353254
- :cite:`Barten2003` : Barten, P. G. J. (2003). Formula for the contrast
sensitivity of the human eye. In Y. Miyake & D. R. Rasmussen (Eds.),
Proceedings of SPIE (Vol. 5294, pp. 231-238). doi:10.1117/12.537476
- :cite:`Cowan2004` : Cowan, M., Kennel, G., Maier, T., & Walker, B. (2004).
Contrast Sensitivity Experiment to Determine the Bit Depth for Digital
Cinema. SMPTE Motion Imaging Journal, 113(9), 281-292. doi:10.5594/j11549
- :cite:`InternationalTelecommunicationUnion2015` : International
Telecommunication Union. (2015). Report ITU-R BT.2246-4 - The present
state of ultra-high definition television BT Series Broadcasting service
(Vol. 5, pp. 1-92).
https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-BT.2246-4-2015-PDF-E.pdf
"""
from __future__ import division, unicode_literals
import numpy as np
from colour.utilities import as_float_array, as_float
__author__ = 'Colour Developers'
__copyright__ = 'Copyright (C) 2013-2020 - Colour Developers'
__license__ = 'New BSD License - https://opensource.org/licenses/BSD-3-Clause'
__maintainer__ = 'Colour Developers'
__email__ = 'colour-developers@colour-science.org'
__status__ = 'Production'
__all__ = [
'optical_MTF_Barten1999', 'pupil_diameter_Barten1999', 'sigma_Barten1999',
'retinal_illuminance_Barten1999', 'maximum_angular_size_Barten1999',
'contrast_sensitivity_function_Barten1999'
]
def optical_MTF_Barten1999(u, sigma=0.01):
"""
Returns the optical modulation transfer function (MTF) :math:`M_{opt}` of
the eye using *Barten (1999)* method.
Parameters
----------
u : numeric or array_like
        Spatial frequency :math:`u`, in cycles per degree.
sigma : numeric or array_like, optional
Standard deviation :math:`\\sigma` of the line-spread function
resulting from the convolution of the different elements of the
convolution process.
Returns
-------
numeric or array_like
Optical modulation transfer function (MTF) :math:`M_{opt}` of the eye.
References
----------
:cite:`Barten1999`, :cite:`Barten2003`, :cite:`Cowan2004`,
:cite:`InternationalTelecommunicationUnion2015`,
Examples
--------
>>> optical_MTF_Barten1999(4, 0.01) # doctest: +ELLIPSIS
0.9689107...
"""
u = as_float_array(u)
sigma = as_float_array(sigma)
return as_float(np.exp(-2 * np.pi ** 2 * sigma ** 2 * u ** 2))
def pupil_diameter_Barten1999(L, X_0=60, Y_0=None):
"""
Returns the pupil diameter for given luminance and object or stimulus
angular size using *Barten (1999)* method.
Parameters
----------
L : numeric or array_like
Average luminance :math:`L` in :math:`cd/m^2`.
X_0 : numeric or array_like, optional
Angular size of the object :math:`X_0` in degrees in the x direction.
Y_0 : numeric or array_like, optional
        Angular size of the object :math:`Y_0` in degrees in the y direction.

    Returns
    -------
    numeric or array_like
        Pupil diameter :math:`d` in millimeters.
References
----------
:cite:`Barten1999`, :cite:`Barten2003`, :cite:`Cowan2004`,
:cite:`InternationalTelecommunicationUnion2015`,
Examples
--------
>>> pupil_diameter_Barten1999(100, 60, 60) # doctest: +ELLIPSIS
2.0777571...
"""
L = as_float_array(L)
X_0 = as_float_array(X_0)
Y_0 = X_0 if Y_0 is None else as_float_array(Y_0)
return as_float(5 - 3 * np.tanh(0.4 * np.log(L * X_0 * Y_0 / 40 ** 2)))
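The pupil-diameter formula above, :math:`d = 5 - 3\tanh(0.4\ln(L X_0 Y_0 / 40^2))`, can be checked with a standalone scalar sketch that does not require the ``colour`` package (the function name here is purely illustrative):

```python
import math

def pupil_diameter(L, X_0=60, Y_0=None):
    # Barten (1999): d = 5 - 3 * tanh(0.4 * ln(L * X_0 * Y_0 / 40^2))
    Y_0 = X_0 if Y_0 is None else Y_0
    return 5 - 3 * math.tanh(0.4 * math.log(L * X_0 * Y_0 / 40 ** 2))

# Reproduces the doctest value above for L=100 cd/m^2 and a 60x60 degree field:
d = pupil_diameter(100, 60, 60)  # ~2.0778 mm
```

For larger luminances the ``tanh`` saturates, so the diameter approaches its 2 mm floor smoothly rather than shrinking without bound.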
def sigma_Barten1999(sigma_0=0.5 / 60, C_ab=0.08 / 60, d=2.1):
"""
Returns the standard deviation :math:`\\sigma` of the line-spread function
resulting from the convolution of the different elements of the convolution
process using *Barten (1999)* method.
The :math:`\\sigma` quantity depends on the pupil diameter :math:`d` of the
eye lens. For very small pupil diameters, :math:`\\sigma` increases
inversely proportionally with pupil size because of diffraction, and for
large pupil diameters, :math:`\\sigma` increases about linearly with pupil
    size because of chromatic aberration and other aberrations.
Parameters
----------
sigma_0 : numeric or array_like, optional
Constant :math:`\\sigma_{0}` in degrees.
C_ab : numeric or array_like, optional
Spherical aberration of the eye :math:`C_{ab}` in
:math:`degrees\\div mm`.
d : numeric or array_like, optional
Pupil diameter :math:`d` in millimeters.
Returns
-------
ndarray
Standard deviation :math:`\\sigma` of the line-spread function
resulting from the convolution of the different elements of the
convolution process.
Warnings
--------
This definition expects :math:`\\sigma_{0}` and :math:`C_{ab}` to be given
in degrees and :math:`degrees\\div mm` respectively. However, in the
literature, the values for :math:`\\sigma_{0}` and
:math:`C_{ab}` are usually given in :math:`arc min` and
:math:`arc min\\div mm` respectively, thus they need to be divided by 60.
References
----------
:cite:`Barten1999`, :cite:`Barten2003`, :cite:`Cowan2004`,
:cite:`InternationalTelecommunicationUnion2015`,
Examples
--------
>>> sigma_Barten1999(0.5 / 60, 0.08 / 60, 2.1) # doctest: +ELLIPSIS
0.0087911...
"""
sigma_0 = as_float_array(sigma_0)
C_ab = as_float_array(C_ab)
d = as_float_array(d)
return as_float(np.sqrt(sigma_0 ** 2 + (C_ab * d) ** 2))
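The relation :math:`\sigma = \sqrt{\sigma_0^2 + (C_{ab} d)^2}` is simple enough to reproduce without the library; a minimal scalar sketch (illustrative name, constants already divided by 60 as the Warnings section requires):

```python
import math

def sigma(sigma_0=0.5 / 60, C_ab=0.08 / 60, d=2.1):
    # sigma_0 and C_ab in degrees, i.e. the literature's arc min values / 60
    return math.sqrt(sigma_0 ** 2 + (C_ab * d) ** 2)

s_ref = sigma()          # default d = 2.1 mm, matches the doctest value
s_large = sigma(d=8.0)   # the aberration term C_ab * d dominates for large pupils
```

For large pupil diameters the second term grows linearly in ``d``, which is the near-linear regime described in the docstring above.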
def retinal_illuminance_Barten1999(
L, d=2.1, apply_stiles_crawford_effect_correction=True):
"""
Returns the retinal illuminance :math:`E` in Trolands for given average
luminance :math:`L` and pupil diameter :math:`d` using *Barten (1999)*
method.
Parameters
----------
L : numeric or array_like
Average luminance :math:`L` in :math:`cd/m^2`.
d : numeric or array_like, optional
Pupil diameter :math:`d` in millimeters.
apply_stiles_crawford_effect_correction : bool, optional
Whether to apply the correction for *Stiles-Crawford* effect.
Returns
-------
ndarray
Retinal illuminance :math:`E` in Trolands.
Notes
-----
- This definition is for use with photopic viewing conditions and thus
corrects for the Stiles-Crawford effect by default, i.e. directional
sensitivity of the cone cells with lower response of cone cells
receiving light from the edge of the pupil.
References
----------
:cite:`Barten1999`, :cite:`Barten2003`, :cite:`Cowan2004`,
:cite:`InternationalTelecommunicationUnion2015`,
Examples
--------
>>> retinal_illuminance_Barten1999(100, 2.1) # doctest: +ELLIPSIS
330.4115803...
>>> retinal_illuminance_Barten1999(100, 2.1, False) # doctest: +ELLIPSIS
346.3605900...
"""
d = as_float_array(d)
L = as_float_array(L)
E = (np.pi * d ** 2) / 4 * L
if apply_stiles_crawford_effect_correction:
E *= (1 - (d / 9.7) ** 2 + (d / 12.4) ** 4)
return E
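The retinal illuminance is :math:`E = \pi d^2 / 4 \cdot L` Trolands, optionally scaled by the *Stiles-Crawford* factor :math:`1 - (d/9.7)^2 + (d/12.4)^4`. A standalone sketch (illustrative name, no ``colour`` dependency) reproducing the two doctest values:

```python
import math

def retinal_illuminance(L, d=2.1, stiles_crawford=True):
    E = (math.pi * d ** 2) / 4 * L  # Trolands, before correction
    if stiles_crawford:
        # Directional sensitivity of the cones lowers the effective input
        E *= 1 - (d / 9.7) ** 2 + (d / 12.4) ** 4
    return E

E_corr = retinal_illuminance(100, 2.1)        # ~330.4 Td
E_raw = retinal_illuminance(100, 2.1, False)  # ~346.4 Td
```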
def maximum_angular_size_Barten1999(u, X_0=60, X_max=12, N_max=15):
"""
Returns the maximum angular size :math:`X` of the object considered using
*Barten (1999)* method.
Parameters
----------
u : numeric
        Spatial frequency :math:`u`, in cycles per degree.
X_0 : numeric or array_like, optional
Angular size :math:`X_0` in degrees of the object in the x direction.
X_max : numeric or array_like, optional
Maximum angular size :math:`X_{max}` in degrees of the integration
area in the x direction.
N_max : numeric or array_like, optional
Maximum number of cycles :math:`N_{max}` over which the eye can
integrate the information.
Returns
-------
numeric or ndarray
Maximum angular size :math:`X` of the object considered.
References
----------
:cite:`Barten1999`, :cite:`Barten2003`, :cite:`Cowan2004`,
:cite:`InternationalTelecommunicationUnion2015`,
Examples
--------
>>> maximum_angular_size_Barten1999(4) # doctest: +ELLIPSIS
3.5729480...
"""
u = as_float_array(u)
X_0 = as_float_array(X_0)
X_max = as_float_array(X_max)
N_max = as_float_array(N_max)
return (1 / X_0 ** 2 + 1 / X_max ** 2 + u ** 2 / N_max ** 2) ** -0.5
def contrast_sensitivity_function_Barten1999(u,
sigma=sigma_Barten1999(
0.5 / 60, 0.08 / 60, 2.1),
k=3.0,
T=0.1,
X_0=60,
Y_0=None,
X_max=12,
Y_max=None,
N_max=15,
n=0.03,
p=1.2274 * 10 ** 6,
E=retinal_illuminance_Barten1999(
20, 2.1),
phi_0=3 * 10 ** -8,
u_0=7):
"""
Returns the contrast sensitivity :math:`S` of the human eye according to
the contrast sensitivity function (CSF) described by *Barten (1999)*.
Contrast sensitivity is defined as the inverse of the modulation threshold
of a sinusoidal luminance pattern. The modulation threshold of this pattern
is generally defined by 50% probability of detection. The contrast
sensitivity function or CSF gives the contrast sensitivity as a function of
spatial frequency. In the CSF, the spatial frequency is expressed in
angular units with respect to the eye. It reaches a maximum between 1 and
10 cycles per degree with a fall off at higher and lower spatial
frequencies.
Parameters
----------
u : numeric
        Spatial frequency :math:`u`, in cycles per degree.
sigma : numeric or array_like, optional
Standard deviation :math:`\\sigma` of the line-spread function
resulting from the convolution of the different elements of the
convolution process.
k : numeric or array_like, optional
Signal-to-noise (SNR) ratio :math:`k`.
T : numeric or array_like, optional
Integration time :math:`T` in seconds of the eye.
X_0 : numeric or array_like, optional
Angular size :math:`X_0` in degrees of the object in the x direction.
Y_0 : numeric or array_like, optional
Angular size :math:`Y_0` in degrees of the object in the y direction.
X_max : numeric or array_like, optional
Maximum angular size :math:`X_{max}` in degrees of the integration
area in the x direction.
Y_max : numeric or array_like, optional
Maximum angular size :math:`Y_{max}` in degrees of the integration
area in the y direction.
N_max : numeric or array_like, optional
Maximum number of cycles :math:`N_{max}` over which the eye can
integrate the information.
n : numeric or array_like, optional
Quantum efficiency of the eye :math:`n`.
p : numeric or array_like, optional
Photon conversion factor :math:`p` in
:math:`photons\\div seconds\\div degrees^2\\div Trolands` that
depends on the light source.
E : numeric or array_like, optional
Retinal illuminance :math:`E` in Trolands.
phi_0 : numeric or array_like, optional
Spectral density :math:`\\phi_0` in :math:`seconds degrees^2` of the
neural noise.
u_0 : numeric or array_like, optional
Spatial frequency :math:`u_0` in :math:`cycles\\div degrees` above
which the lateral inhibition ceases.
Returns
-------
ndarray
Contrast sensitivity :math:`S`.
Warnings
--------
This definition expects :math:`\\sigma_{0}` and :math:`C_{ab}` used in the
computation of :math:`\\sigma` to be given in degrees and
:math:`degrees\\div mm` respectively. However, in the literature, the
values for :math:`\\sigma_{0}` and :math:`C_{ab}` are usually given in
:math:`arc min` and :math:`arc min\\div mm` respectively, thus they need to
be divided by 60.
Notes
-----
    -   The formula holds for binocular viewing and for equal dimensions of
the object in x and y direction. For monocular vision, the contrast
sensitivity is a factor :math:`\\sqrt{2}` smaller.
- *Barten (1999)* CSF default values for the :math:`k`,
:math:`\\sigma_{0}`, :math:`C_{ab}`, :math:`T`, :math:`X_{max}`,
:math:`N_{max}`, :math:`n`, :math:`\\phi_{0}` and :math:`u_0` constants
are valid for a standard observer with good vision and with an age
between 20 and 30 years.
- The other constants have been filled using reference data from
*Figure 31* in :cite:`InternationalTelecommunicationUnion2015` but
must be adapted to the current use case.
- The product of :math:`u`, the cycles per degree, and :math:`X_0`,
the number of degrees, gives the number of cycles :math:`P_c` in a
pattern. Therefore, :math:`X_0` can be made a variable dependent on
:math:`u` such as :math:`X_0 = P_c / u`.
References
----------
:cite:`Barten1999`, :cite:`Barten2003`, :cite:`Cowan2004`,
:cite:`InternationalTelecommunicationUnion2015`,
Examples
--------
>>> contrast_sensitivity_function_Barten1999(4) # doctest: +ELLIPSIS
360.8691122...
Reproducing *Figure 31* in \
:cite:`InternationalTelecommunicationUnion2015` illustrating the minimum
detectable contrast according to *Barten (1999)* model with the assumed
conditions for UHDTV applications. The minimum detectable contrast
:math:`MDC` is then defined as follows::
:math:`MDC = 1 / CSF * 2 * (1 / 1.27)`
where :math:`2` is used for the conversion from modulation to contrast and
:math:`1 / 1.27` is used for the conversion from sinusoidal to rectangular
waves.
>>> from scipy.optimize import fmin
>>> settings_BT2246 = {
... 'k': 3.0,
... 'T': 0.1,
... 'X_max': 12,
... 'N_max': 15,
... 'n': 0.03,
... 'p': 1.2274 * 10 ** 6,
... 'phi_0': 3 * 10 ** -8,
... 'u_0': 7,
... }
>>>
>>> def maximise_spatial_frequency(L):
... maximised_spatial_frequency = []
... for L_v in L:
... X_0 = 60
... d = pupil_diameter_Barten1999(L_v, X_0)
... sigma = sigma_Barten1999(0.5 / 60, 0.08 / 60, d)
... E = retinal_illuminance_Barten1999(L_v, d, True)
... maximised_spatial_frequency.append(
... fmin(lambda x: (
... -contrast_sensitivity_function_Barten1999(
... u=x,
... sigma=sigma,
... X_0=X_0,
... E=E,
... **settings_BT2246)
... ), 0, disp=False)[0])
... return as_float(np.array(maximised_spatial_frequency))
>>>
>>> L = np.logspace(np.log10(0.01), np.log10(100), 10)
>>> X_0 = Y_0 = 60
>>> d = pupil_diameter_Barten1999(L, X_0, Y_0)
>>> sigma = sigma_Barten1999(0.5 / 60, 0.08 / 60, d)
>>> E = retinal_illuminance_Barten1999(L, d)
>>> u = maximise_spatial_frequency(L)
>>> (1 / contrast_sensitivity_function_Barten1999(
... u=u, sigma=sigma, E=E, X_0=X_0, Y_0=Y_0, **settings_BT2246)
... * 2 * (1/ 1.27))
... # doctest: +ELLIPSIS
array([ 0.0207396..., 0.0134885..., 0.0096063..., 0.0077299..., \
0.0068983...,
0.0065057..., 0.0062712..., 0.0061198..., 0.0060365..., \
0.0059984...])
"""
u = as_float_array(u)
k = as_float_array(k)
T = as_float_array(T)
X_0 = as_float_array(X_0)
Y_0 = X_0 if Y_0 is None else as_float_array(Y_0)
X_max = as_float_array(X_max)
Y_max = X_max if Y_max is None else as_float_array(Y_max)
N_max = as_float_array(N_max)
n = as_float_array(n)
p = as_float_array(p)
E = as_float_array(E)
phi_0 = as_float_array(phi_0)
u_0 = as_float_array(u_0)
M_opt = optical_MTF_Barten1999(u, sigma)
M_as = 1 / (maximum_angular_size_Barten1999(u, X_0, X_max, N_max) *
maximum_angular_size_Barten1999(u, Y_0, Y_max, N_max))
S = (M_opt / k) / np.sqrt(2 / T * M_as * (1 / (n * p * E) + phi_0 /
(1 - np.exp(-(u / u_0) ** 2))))
return as_float(S)
# File: social/models.py (repo: 7workday/TT, license: Apache-2.0)
from django.db import models
from django.db.models import Q
from django.db import IntegrityError
from common import stat
class Swiped(models.Model):
    '''Swipe record'''
    STYPES = (
        ('like', 'swipe right'),
        ('superlike', 'swipe up'),
        ('dislike', 'swipe left'),
    )
    uid = models.IntegerField(verbose_name='user ID')
    sid = models.IntegerField(verbose_name='ID of the swiped user')
    stype = models.CharField(max_length=10, choices=STYPES, verbose_name='swipe type')
    stime = models.DateTimeField(auto_now_add=True, verbose_name='swipe time')

    class Meta:
        unique_together = ('uid', 'sid')  # uid and sid are jointly unique
    @classmethod
    def swiper(cls, uid, sid, stype):
        '''Perform one swipe'''
        try:
            return cls.objects.create(uid=uid, sid=sid, stype=stype)
        except IntegrityError:
            raise stat.RepeatSwipeErr
    @classmethod
    def has_liked(cls, uid, sid):
        '''Check whether uid has liked sid'''
        return cls.objects.filter(uid=uid, sid=sid, stype__in=['like', 'superlike']).exists()
class Friend(models.Model):
    '''Friendship table'''
    uid1 = models.IntegerField(verbose_name='user ID')
    uid2 = models.IntegerField(verbose_name='user ID')

    class Meta:
        unique_together = ('uid1', 'uid2')  # uid1 and uid2 are jointly unique

    @classmethod
    def make_friends(cls, uid1, uid2):
        '''Create a friendship'''
        # Normalise the order of uid1 and uid2: the smaller value comes first
        uid1, uid2 = (uid2, uid1) if uid1 > uid2 else (uid1, uid2)
        try:
            return cls.objects.create(uid1=uid1, uid2=uid2)
        except IntegrityError:
            raise stat.AreadyFriends

    @classmethod
    def break_off(cls, uid1, uid2):
        '''End a friendship'''
        # Normalise the order of uid1 and uid2: the smaller value comes first
        uid1, uid2 = (uid2, uid1) if uid1 > uid2 else (uid1, uid2)
        cls.objects.filter(uid1=uid1, uid2=uid2).delete()

    @classmethod
    def get_my_friends_id(cls, uid):
        '''Get the list of the user's friend IDs'''
        query_condition = Q(uid1=uid) | Q(uid2=uid)
        friendship = cls.objects.filter(query_condition)
        # Collect the UID of every friend
        friend_id_list = []
        for f_obj in friendship:
            if f_obj.uid1 == uid:
                friend_id_list.append(f_obj.uid2)
            else:
                friend_id_list.append(f_obj.uid1)
        return friend_id_list
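The ordering trick used by `make_friends` and `break_off` is worth isolating: a friendship is symmetric, so putting the smaller id first makes `(a, b)` and `(b, a)` map to the same database row and lets the `unique_together` constraint do its job. A tiny standalone sketch (hypothetical helper name):

```python
def canonical_pair(uid1, uid2):
    # Store the smaller id first so (a, b) and (b, a) hit the same unique row
    return (uid2, uid1) if uid1 > uid2 else (uid1, uid2)

pair = canonical_pair(7, 3)  # same result as canonical_pair(3, 7)
```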
# File: typed_python/compiler/tests/subclass_of_test.py (repo: APrioriInvestments/nativepython, license: Apache-2.0)
from typed_python import Entrypoint, SubclassOf, Class, Final, Function, ListOf
class A(Class):
pass
class B(A):
pass
class C(B, Final):
pass
def test_can_cast_subclass_of_correctly():
@Function
def f(c: SubclassOf(C)):
return "C"
@f.overload
def f(c: SubclassOf(B)):
return "B"
@f.overload
def f(c: SubclassOf(A)):
return "A"
def checkIt():
assert f(C) == "C", f(C)
assert f(B) == "B", f(B)
assert f(A) == "A", f(A)
checkIt()
Entrypoint(checkIt)()
@Entrypoint
def checkItList(x):
res = ListOf(str)()
for cls in x:
res.append(f(cls))
return res
assert checkItList(ListOf(SubclassOf(A))([A, B, C])) == ["A", "B", "C"]
assert checkItList(ListOf(SubclassOf(B))([B, C])) == ["B", "C"]
assert checkItList(ListOf(SubclassOf(C))([C])) == ["C"]
# File: pywick/models/segmentation/refinenet/refinenet.py (repo: achaiah/pywick, license: MIT)
# Source: https://github.com/thomasjpfan/pytorch_refinenet (License: MIT)
"""
Implementation of `RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation <https://arxiv.org/abs/1611.06612>`_.
"""
import torch.nn as nn
import torchvision.models as models
import torch.nn.functional as F
from .blocks import (RefineNetBlock, ResidualConvUnit,
RefineNetBlockImprovedPooling)
__all__ = ['RefineNet4Cascade', 'RefineNet4CascadePoolingImproved']
class BaseRefineNet4Cascade(nn.Module):
def __init__(self,
input_shape,
refinenet_block,
num_classes=1,
features=256,
resnet_factory=models.resnet101,
pretrained=True,
freeze_resnet=False,
**kwargs):
"""Multi-path 4-Cascaded RefineNet for image segmentation
Args:
input_shape ((int, int)): (channel, size) assumes input has
equal height and width
refinenet_block (block): RefineNet Block
num_classes (int, optional): number of classes
features (int, optional): number of features in refinenet
resnet_factory (func, optional): A Resnet model from torchvision.
Default: models.resnet101
pretrained (bool, optional): Use pretrained version of resnet
Default: True
            freeze_resnet (bool, optional): Freeze resnet model
                Default: False
Raises:
ValueError: size of input_shape not divisible by 32
"""
super().__init__()
input_channel, input_size = input_shape
if input_size % 32 != 0:
            raise ValueError("{} not divisible by 32".format(input_size))
resnet = resnet_factory(pretrained=pretrained)
self.layer1 = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu,
resnet.maxpool, resnet.layer1)
self.layer2 = resnet.layer2
self.layer3 = resnet.layer3
self.layer4 = resnet.layer4
if freeze_resnet:
layers = [self.layer1, self.layer2, self.layer3, self.layer4]
for layer in layers:
for param in layer.parameters():
param.requires_grad = False
self.layer1_rn = nn.Conv2d(
256, features, kernel_size=3, stride=1, padding=1, bias=False)
self.layer2_rn = nn.Conv2d(
512, features, kernel_size=3, stride=1, padding=1, bias=False)
self.layer3_rn = nn.Conv2d(
1024, features, kernel_size=3, stride=1, padding=1, bias=False)
self.layer4_rn = nn.Conv2d(
2048, 2 * features, kernel_size=3, stride=1, padding=1, bias=False)
self.refinenet4 = RefineNetBlock(2 * features,
(2 * features, input_size // 32))
self.refinenet3 = RefineNetBlock(features,
(2 * features, input_size // 32),
(features, input_size // 16))
self.refinenet2 = RefineNetBlock(features,
(features, input_size // 16),
(features, input_size // 8))
self.refinenet1 = RefineNetBlock(features, (features, input_size // 8),
(features, input_size // 4))
self.output_conv = nn.Sequential(
ResidualConvUnit(features), ResidualConvUnit(features),
nn.Conv2d(
features,
num_classes,
kernel_size=1,
stride=1,
padding=0,
bias=True))
def forward(self, x):
size = x.size()[2:]
layer_1 = self.layer1(x)
layer_2 = self.layer2(layer_1)
layer_3 = self.layer3(layer_2)
layer_4 = self.layer4(layer_3)
layer_1_rn = self.layer1_rn(layer_1)
layer_2_rn = self.layer2_rn(layer_2)
layer_3_rn = self.layer3_rn(layer_3)
layer_4_rn = self.layer4_rn(layer_4)
path_4 = self.refinenet4(layer_4_rn)
path_3 = self.refinenet3(path_4, layer_3_rn)
path_2 = self.refinenet2(path_3, layer_2_rn)
path_1 = self.refinenet1(path_2, layer_1_rn)
out_conv = self.output_conv(path_1)
out = F.interpolate(out_conv, size, mode='bilinear', align_corners=True)
return out
class RefineNet4CascadePoolingImproved(BaseRefineNet4Cascade):
def __init__(self,
num_classes=1,
pretrained=True,
input_shape=(1, 512),
features=256,
resnet_factory=models.resnet101,
freeze_resnet=False,
**kwargs):
"""Multi-path 4-Cascaded RefineNet for image segmentation with improved pooling
Args:
input_shape ((int, int)): (channel, size) assumes input has
equal height and width
refinenet_block (block): RefineNet Block
num_classes (int, optional): number of classes
features (int, optional): number of features in refinenet
resnet_factory (func, optional): A Resnet model from torchvision.
Default: models.resnet101
pretrained (bool, optional): Use pretrained version of resnet
Default: True
            freeze_resnet (bool, optional): Freeze resnet model
                Default: False
Raises:
ValueError: size of input_shape not divisible by 32
"""
super().__init__(
input_shape,
RefineNetBlockImprovedPooling,
num_classes=num_classes,
features=features,
resnet_factory=resnet_factory,
pretrained=pretrained,
freeze_resnet=freeze_resnet,
**kwargs)
class RefineNet4Cascade(BaseRefineNet4Cascade):
def __init__(self,
num_classes=1,
pretrained=True,
input_shape=(1, 512),
features=256,
resnet_factory=models.resnet101,
freeze_resnet=False,
**kwargs):
"""Multi-path 4-Cascaded RefineNet for image segmentation
Args:
input_shape ((int, int)): (channel, size) assumes input has
equal height and width
refinenet_block (block): RefineNet Block
num_classes (int, optional): number of classes
features (int, optional): number of features in refinenet
resnet_factory (func, optional): A Resnet model from torchvision.
Default: models.resnet101
pretrained (bool, optional): Use pretrained version of resnet
Default: True
            freeze_resnet (bool, optional): Freeze resnet model
                Default: False
Raises:
ValueError: size of input_shape not divisible by 32
"""
super().__init__(
input_shape,
RefineNetBlock,
num_classes=num_classes,
features=features,
resnet_factory=resnet_factory,
pretrained=pretrained,
freeze_resnet=freeze_resnet,
**kwargs)
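The divisibility check in `__init__` follows from the encoder geometry: the ResNet backbone halves the resolution five times (stem conv and maxpool in `layer1`, then `layer2` through `layer4`), so the four RefineNet paths operate at `input_size // 4` down to `input_size // 32`. A dependency-free sketch of that bookkeeping (hypothetical helper name):

```python
def refinenet_path_sizes(input_size):
    # Feature-map sizes seen by refinenet1..refinenet4; input must be a
    # multiple of 32 so the deepest path has an integer resolution.
    if input_size % 32 != 0:
        raise ValueError("{} not divisible by 32".format(input_size))
    return [input_size // s for s in (4, 8, 16, 32)]

sizes = refinenet_path_sizes(512)  # [128, 64, 32, 16]
```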
# File: launch/sample_laserscan_to_pointcloud_launch.py (repo: charlielito/pointcloud_to_laserscan, license: BSD-3-Clause)
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.actions import ExecuteProcess
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node
import yaml
def generate_launch_description():
return LaunchDescription([
DeclareLaunchArgument(
name='scanner', default_value='scanner',
description='Namespace for sample topics'
),
ExecuteProcess(
cmd=[
'ros2', 'topic', 'pub', '-r', '10',
'--qos-profile', 'sensor_data',
[LaunchConfiguration(variable_name='scanner'), '/scan'],
'sensor_msgs/msg/LaserScan', yaml.dump({
'header': {'frame_id': 'scan'}, 'angle_min': -1.0,
'angle_max': 1.0, 'angle_increment': 0.1, 'range_max': 10.0,
'ranges': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
})
],
name='scan_publisher'
),
Node(
package='tf2_ros',
executable='static_transform_publisher',
name='static_transform_publisher',
            arguments=['0', '0', '0', '0', '0', '0', '1', 'map', 'scan']
),
Node(
package='pointcloud_to_laserscan',
executable='laserscan_to_pointcloud_node',
name='laserscan_to_pointcloud',
remappings=[('scan_in', [LaunchConfiguration(variable_name='scanner'), '/scan']),
('cloud', [LaunchConfiguration(variable_name='scanner'), '/cloud'])],
parameters=[{'target_frame': 'scan', 'transform_tolerance': 0.01}]
),
])
# File: docs/conf.py (repo: seven-qi/ThreadedMessenger, license: MIT)
# -*- coding: utf-8 -*-
#
# Threaded Messenger documentation build configuration file, created by
# sphinx-quickstart on Fri Jan 22 23:36:13 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('..'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.napoleon'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Threaded Messenger'
copyright = u'2016, Jingwei Qi'
author = u'Jingwei Qi'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = u'0.0.1'
# The full version, including alpha/beta/rc tags.
release = u'0.0.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'ThreadedMessengerdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #'preamble': '',

    # Latex figure (float) alignment
    #'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'ThreadedMessenger.tex', u'Threaded Messenger Documentation',
     u'Jingwei Qi', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'threadedmessenger', u'Threaded Messenger Documentation',
     [author], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'ThreadedMessenger', u'Threaded Messenger Documentation',
     author, 'ThreadedMessenger', 'One line description of project.',
     'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False

# ---------------------------------------------------------------------------
# File: src/torchcontentarea/contentareainference.py
# Repo: charliebudd/torch-content-area (MIT)
# ---------------------------------------------------------------------------

import torch
from enum import IntEnum
from typing import Sequence, Optional

import __torchcontentareaext as __ext
from __torchcontentareaext import get_times


class InterpolationMode(IntEnum):
    """A tag to specify the type of interpolation when cropping"""
    NEAREST = 0
    BILINEAR = 1


class ContentAreaInference(__ext.ContentAreaInference):
    """Entry point for content area inference"""

    def __init__(self) -> None:
        super().__init__()

    def infer_mask(self, image: torch.Tensor) -> torch.Tensor:
        """Infers the content area for a given endoscopic image and returns a binary mask"""
        return self.__infer_mask(image)

    def infer_area(self, image: torch.Tensor) -> Optional[Sequence[int]]:
        """Infers the content area for a given endoscopic image and returns the parameters of the content area (None if no area found)"""
        return self.__infer_area(image)

    def draw_mask(self, image: torch.Tensor, area: Sequence[int]) -> torch.Tensor:
        """Returns a binary mask for the provided content area parameters"""
        return self.__draw_mask(image, area)

    def crop_area(self, image: torch.Tensor, area: Sequence[int], size: Sequence[int], interpolation_mode: InterpolationMode=InterpolationMode.BILINEAR) -> torch.Tensor:
        """Crops and resizes the image to within the provided content area"""
        return self.__crop_area(image, area, size, int(interpolation_mode))

    def get_debug(self, image: torch.Tensor) -> Sequence[Sequence[float]]:
        """Returns debug information"""
        return self.__get_debug(image)

    def get_times(self):
        return get_times()

# ---------------------------------------------------------------------------
# File: modules/jb_bootcamp/jb_bootcamp/bioinfo_dicts.py
# Repo: bjamh/bootcamp (CC-BY-4.0, MIT)
# ---------------------------------------------------------------------------

"""Useful dictionaries to have around for bioinformatics."""
aa = {'A': 'Ala',
      'R': 'Arg',
      'N': 'Asn',
      'D': 'Asp',
      'C': 'Cys',
      'Q': 'Gln',
      'E': 'Glu',
      'G': 'Gly',
      'H': 'His',
      'I': 'Ile',
      'L': 'Leu',
      'K': 'Lys',
      'M': 'Met',
      'F': 'Phe',
      'P': 'Pro',
      'S': 'Ser',
      'T': 'Thr',
      'W': 'Trp',
      'Y': 'Tyr',
      'V': 'Val'}

# The set of DNA bases
bases = ['T', 'C', 'A', 'G']

# Build list of codons
codon_list = []
for first_base in bases:
    for second_base in bases:
        for third_base in bases:
            codon_list += [first_base + second_base + third_base]

# The amino acids that are coded for (* = STOP codon)
amino_acids = 'FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG'

# Build dictionary from tuple of 2-tuples (technically an iterator, but it works)
codons = dict(zip(codon_list, amino_acids))
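A short usage sketch of the `codons` table follows. The `translate` helper is purely illustrative (it is not part of this module); the snippet rebuilds the same table so it is self-contained:

```python
# Rebuild the codon table exactly as in bioinfo_dicts.py.
bases = ['T', 'C', 'A', 'G']
codon_list = [a + b + c for a in bases for b in bases for c in bases]
amino_acids = 'FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG'
codons = dict(zip(codon_list, amino_acids))

def translate(seq):
    """Translate a DNA coding sequence, stopping at the first STOP codon.

    Illustrative helper, not part of bioinfo_dicts.py.
    """
    protein = ''
    # Walk the sequence three bases at a time, ignoring a trailing partial codon.
    for i in range(0, len(seq) - len(seq) % 3, 3):
        residue = codons[seq[i:i + 3]]
        if residue == '*':  # STOP codon
            break
        protein += residue
    return protein

print(translate('ATGTTTTGA'))  # MF  (ATG -> Met, TTT -> Phe, TGA -> STOP)
```

Because `codon_list` is built with `T, C, A, G` as the base order, the string of one-letter amino acids lines up with the standard codon-table ordering.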

# ---------------------------------------------------------------------------
# File: selfdrive/car/tesla/ACC_module.py
# Repo: Michaelc0/openpilot (MIT)
# ---------------------------------------------------------------------------

from selfdrive.services import service_list
from selfdrive.car.tesla.values import AH, CruiseButtons, CruiseState, CAR
from selfdrive.config import Conversions as CV
import selfdrive.messaging as messaging
import os
import subprocess
import time
import zmq


class ACCState(object):
  # Possible state of the ACC system, following the DI_cruiseState naming
  # scheme.
  OFF = 0         # Disabled by UI.
  STANDBY = 1     # Ready to be engaged.
  ENABLED = 2     # Engaged.
  NOT_READY = 9   # Not ready to be engaged due to the state of the car.


def _current_time_millis():
  return int(round(time.time() * 1000))
class ACCController(object):

  # Tesla cruise only functions above 18 MPH.
  MIN_CRUISE_SPEED_MS = 18 * CV.MPH_TO_MS

  def __init__(self, carcontroller):
    self.CC = carcontroller
    self.human_cruise_action_time = 0
    self.automated_cruise_action_time = 0
    self.last_angle = 0.
    context = zmq.Context()
    self.poller = zmq.Poller()
    self.live20 = messaging.sub_sock(context, service_list['live20'].port, conflate=True, poller=self.poller)
    self.lead_1 = None
    self.last_update_time = 0
    self.enable_adaptive_cruise = False
    # Whether to re-engage automatically after being paused due to low speed or
    # user-initiated deceleration.
    self.autoresume = False
    self.last_cruise_stalk_pull_time = 0
    self.prev_cruise_buttons = CruiseButtons.IDLE
    self.prev_pcm_acc_status = 0
    self.acc_speed_kph = 0.
  def update_stat(self, CS, enabled):
    # Check if the cruise stalk was double pulled, indicating that adaptive
    # cruise control should be enabled. Twice in .75 seconds counts as a double
    # pull.
    prev_enable_adaptive_cruise = self.enable_adaptive_cruise
    self.autoresume = CS.cstm_btns.get_button_label2("acc") == "AutoRes"
    curr_time_ms = _current_time_millis()
    speed_uom_kph = 1.
    if CS.imperial_speed_units:
      speed_uom_kph = CV.MPH_TO_KPH
    # Handle pressing the enable button.
    if (CS.cruise_buttons == CruiseButtons.MAIN and
        self.prev_cruise_buttons != CruiseButtons.MAIN):
      double_pull = curr_time_ms - self.last_cruise_stalk_pull_time < 750
      self.last_cruise_stalk_pull_time = curr_time_ms
      ready = (CS.cstm_btns.get_button_status("acc") > ACCState.OFF and
               enabled and
               CruiseState.is_enabled_or_standby(CS.pcm_acc_status))
      if ready and double_pull:
        # A double pull enables ACC, updating the max ACC speed if necessary.
        self.enable_adaptive_cruise = True
        # Increase ACC speed to match current, if applicable.
        self.acc_speed_kph = max(CS.v_ego_raw * CV.MS_TO_KPH, self.acc_speed_kph)
      else:
        # A single pull disables ACC (falling back to just steering).
        self.enable_adaptive_cruise = False
    # Handle pressing the cancel button.
    elif CS.cruise_buttons == CruiseButtons.CANCEL:
      self.enable_adaptive_cruise = False
      self.acc_speed_kph = 0.
      self.last_cruise_stalk_pull_time = 0
    # Handle pressing up and down buttons.
    elif (self.enable_adaptive_cruise and
          CS.cruise_buttons != self.prev_cruise_buttons):
      # Real stalk command while ACC is already enabled. Adjust the max ACC
      # speed if necessary. For example if max speed is 50 but you're currently
      # only going 30, the cruise speed can be increased without any change to
      # max ACC speed. If actual speed is already 50, the code also increases
      # the max cruise speed.
      if CS.cruise_buttons == CruiseButtons.RES_ACCEL:
        requested_speed_kph = CS.v_ego * CV.MS_TO_KPH + speed_uom_kph
        self.acc_speed_kph = max(self.acc_speed_kph, requested_speed_kph)
      elif CS.cruise_buttons == CruiseButtons.RES_ACCEL_2ND:
        requested_speed_kph = CS.v_ego * CV.MS_TO_KPH + 5 * speed_uom_kph
        self.acc_speed_kph = max(self.acc_speed_kph, requested_speed_kph)
      elif CS.cruise_buttons == CruiseButtons.DECEL_SET:
        self.acc_speed_kph -= speed_uom_kph
      elif CS.cruise_buttons == CruiseButtons.DECEL_2ND:
        self.acc_speed_kph -= 5 * speed_uom_kph
      # Clip ACC speed between 0 and 170 KPH.
      self.acc_speed_kph = min(self.acc_speed_kph, 170)
      self.acc_speed_kph = max(self.acc_speed_kph, 0)
    # If something disabled cruise control, disable ACC too.
    elif (self.prev_pcm_acc_status == 2 and
          CS.pcm_acc_status != 2 and
          not self.autoresume):
      self.enable_adaptive_cruise = False
    # Notify if ACC was toggled.
    if prev_enable_adaptive_cruise and not self.enable_adaptive_cruise:
      CS.UE.custom_alert_message(3, "ACC Disabled", 150, 4)
      CS.cstm_btns.set_button_status("acc", ACCState.STANDBY)
    elif self.enable_adaptive_cruise and not prev_enable_adaptive_cruise:
      CS.UE.custom_alert_message(2, "ACC Enabled", 150)
      CS.cstm_btns.set_button_status("acc", ACCState.ENABLED)
    # Update the UI to show whether the current car state allows ACC.
    if CS.cstm_btns.get_button_status("acc") in [ACCState.STANDBY, ACCState.NOT_READY]:
      if (enabled
          and CruiseState.is_enabled_or_standby(CS.pcm_acc_status)
          and CS.v_ego > self.MIN_CRUISE_SPEED_MS):
        CS.cstm_btns.set_button_status("acc", ACCState.STANDBY)
      else:
        CS.cstm_btns.set_button_status("acc", ACCState.NOT_READY)
    # Update prev state after all other actions.
    self.prev_cruise_buttons = CS.cruise_buttons
    self.prev_pcm_acc_status = CS.pcm_acc_status
  def update_acc(self, enabled, CS, frame, actuators, pcm_speed):
    # Adaptive cruise control.
    current_time_ms = _current_time_millis()
    if CruiseButtons.should_be_throttled(CS.cruise_buttons):
      self.human_cruise_action_time = current_time_ms
    button_to_press = None
    # The difference between OP's target speed and the current cruise
    # control speed, in KPH.
    speed_offset = (pcm_speed * CV.MS_TO_KPH - CS.v_cruise_actual)
    if (self.enable_adaptive_cruise
        # Only do ACC if OP is steering
        and enabled
        # And adjust infrequently, since sending repeated adjustments makes
        # the car think we're doing a 'long press' on the cruise stalk,
        # resulting in small, jerky speed adjustments.
        and current_time_ms > self.automated_cruise_action_time + 500):
      # Automatically engage traditional cruise if it is idle and we are
      # going fast enough and we are accelerating.
      if (CS.pcm_acc_status == 1
          and CS.v_ego > self.MIN_CRUISE_SPEED_MS
          and CS.a_ego >= 0.):
        button_to_press = CruiseButtons.DECEL_2ND
      # If traditional cruise is engaged, then control it.
      elif (CS.pcm_acc_status == 2
            # But don't make adjustments if a human has manually done so in
            # the last 3 seconds. Human intention should not be overridden.
            and current_time_ms > self.human_cruise_action_time + 3000):
        if CS.imperial_speed_units:
          # Imperial unit cars adjust cruise in units of 1 and 5 mph.
          half_press_kph = 1 * CV.MPH_TO_KPH
          full_press_kph = 5 * CV.MPH_TO_KPH
        else:
          # Metric cars adjust cruise in units of 1 and 5 kph.
          half_press_kph = 1
          full_press_kph = 5
        # Reduce cruise speed significantly if necessary. Multiply by a % to
        # make the car slightly more eager to slow down vs speed up.
        if speed_offset < -0.6 * full_press_kph and CS.v_cruise_actual > 0:
          # Send cruise stalk dn_2nd.
          button_to_press = CruiseButtons.DECEL_2ND
        # Reduce speed slightly if necessary.
        elif speed_offset < -0.9 * half_press_kph and CS.v_cruise_actual > 0:
          # Send cruise stalk dn_1st.
          button_to_press = CruiseButtons.DECEL_SET
        # Increase cruise speed if possible.
        elif CS.v_ego > self.MIN_CRUISE_SPEED_MS:
          # How much we can accelerate without exceeding max allowed speed.
          available_speed = self.acc_speed_kph - CS.v_cruise_actual
          if speed_offset > full_press_kph and full_press_kph < available_speed:
            # Send cruise stalk up_2nd.
            button_to_press = CruiseButtons.RES_ACCEL_2ND
          elif speed_offset > half_press_kph and half_press_kph < available_speed:
            # Send cruise stalk up_1st.
            button_to_press = CruiseButtons.RES_ACCEL
      if CS.cstm_btns.get_button_label2("acc") == "Mod JJ":
        # Alternative speed decision logic that uses the lead car's distance
        # and speed more directly.
        # Bring in the lead car distance from the Live20 feed.
        l20 = None
        if enabled:
          for socket, _ in self.poller.poll(0):
            if socket is self.live20:
              l20 = messaging.recv_one(socket)
              break
        if l20 is not None:
          self.lead_1 = l20.live20.leadOne
        button_to_press = self.calc_follow_button(CS)
    if button_to_press:
      self.automated_cruise_action_time = current_time_ms
      # If trying to slow below the min cruise speed, just cancel cruise.
      # This prevents a SCCM crash which is triggered by repeatedly pressing
      # stalk-down when already at min cruise speed.
      if (CruiseButtons.is_decel(button_to_press)
          and CS.v_cruise_actual - 1 < self.MIN_CRUISE_SPEED_MS * CV.MS_TO_KPH):
        button_to_press = CruiseButtons.CANCEL
    # Debug logging (disable in production to reduce latency of commands)
    #print "***ACC command: %s***" % button_to_press
    #elif (current_time_ms > self.last_update_time + 1000):
    #  self.last_update_time = current_time_ms
    #  print "Desired ACC speed change: %s" % (speed_offset)
    return button_to_press
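The stalk-press thresholds in `update_acc` can be isolated into a pure function. The sketch below is illustrative only (the names and return values are not part of the openpilot API); it mirrors the -0.6/-0.9 multipliers and the half/full press units used above:

```python
# Standalone sketch of the threshold logic in update_acc.
# All names here are illustrative, not part of the openpilot API.
def pick_button(speed_offset_kph, v_cruise_kph, available_kph,
                half_press_kph=1.0, full_press_kph=5.0):
    # Large negative offset: big decrease (stalk dn_2nd).
    if speed_offset_kph < -0.6 * full_press_kph and v_cruise_kph > 0:
        return "DECEL_2ND"
    # Small negative offset: small decrease (stalk dn_1st).
    if speed_offset_kph < -0.9 * half_press_kph and v_cruise_kph > 0:
        return "DECEL_SET"
    # Positive offsets increase speed only within the available headroom.
    if speed_offset_kph > full_press_kph and full_press_kph < available_kph:
        return "RES_ACCEL_2ND"
    if speed_offset_kph > half_press_kph and half_press_kph < available_kph:
        return "RES_ACCEL"
    return None

print(pick_button(-4.0, 50.0, 10.0))  # DECEL_2ND (offset below -3.0 kph)
print(pick_button(2.0, 50.0, 10.0))   # RES_ACCEL (small increase)
```

The asymmetric multipliers (-0.6 vs -0.9) are what make the controller slightly more eager to slow down than to speed up.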
  # Function to calculate the cruise button based on a safe follow distance.
  def calc_follow_button(self, CS):
    follow_time = 2.0  # in seconds
    current_time_ms = _current_time_millis()
    # Make sure we were able to populate lead_1.
    if self.lead_1 is None:
      return None
    # dRel is in meters.
    lead_dist = self.lead_1.dRel
    # Grab the relative speed.
    rel_speed = self.lead_1.vRel * CV.MS_TO_KPH
    # Current speed in kph.
    cur_speed = CS.v_ego * CV.MS_TO_KPH
    # v_ego is in m/s, so safe_dist_m is in meters.
    safe_dist_m = CS.v_ego * follow_time
    # How much we can accelerate without exceeding the max allowed speed.
    available_speed = self.acc_speed_kph - CS.v_cruise_actual
    # Metric cars adjust cruise in units of 1 and 5 kph.
    half_press_kph = 1
    full_press_kph = 5
    # Imperial unit cars adjust cruise in units of 1 and 5 mph.
    if CS.imperial_speed_units:
      half_press_kph = 1 * CV.MPH_TO_KPH
      full_press_kph = 5 * CV.MPH_TO_KPH
    # Button to issue.
    button = None
    # Debug msg.
    msg = None

    #print "dRel: ", self.lead_1.dRel, " yRel: ", self.lead_1.yRel, " vRel: ", self.lead_1.vRel, " aRel: ", self.lead_1.aRel, " vLead: ", self.lead_1.vLead, " vLeadK: ", self.lead_1.vLeadK, " aLeadK: ", self.lead_1.aLeadK

    ### Logic to determine best cruise speed ###

    # Automatically engage traditional cruise if it is idle and we are
    # going fast enough and accelerating.
    if (CS.pcm_acc_status == 1
        and self.enable_adaptive_cruise
        and CS.v_ego > self.MIN_CRUISE_SPEED_MS
        and CS.a_ego > 0.12):
      button = CruiseButtons.DECEL_SET
    # If traditional cruise is engaged, then control it.
    elif CS.pcm_acc_status == 2:
      # If cruise is set to faster than the max speed, slow down.
      if CS.v_cruise_actual > self.acc_speed_kph:
        msg = "Slow to max"
        button = CruiseButtons.DECEL_SET
      # If lead_dist is reported as 0, no one is detected in front of you so you
      # can speed up. Don't speed up when steer-angle > 2; vision radar often
      # loses the lead car in a turn.
      elif lead_dist == 0 and self.enable_adaptive_cruise and CS.angle_steers < 2.0:
        if full_press_kph < available_speed:
          msg = "5 MPH UP full: ", "{0:.1f}kph".format(full_press_kph), " avail: {0:.1f}kph".format(available_speed)
          button = CruiseButtons.RES_ACCEL_2ND
        elif half_press_kph < available_speed:
          msg = "1 MPH UP half: ", "{0:.1f}kph".format(half_press_kph), " avail: {0:.1f}kph".format(available_speed)
          button = CruiseButtons.RES_ACCEL
      # If we have a populated lead_distance
      elif (lead_dist > 0
            # and we only issue commands every 300ms
            and current_time_ms > self.automated_cruise_action_time + 300):
        ### Slowing down ###
        # Reduce speed significantly if lead_dist < 50% of safe dist, no matter
        # the rel_speed.
        if CS.v_cruise_actual > full_press_kph:
          if lead_dist < (safe_dist_m * 0.3) and rel_speed < 2:
            msg = "50pct down"
            button = CruiseButtons.DECEL_2ND
          # Reduce speed significantly if lead_dist < 60% of safe dist
          # and if the lead car isn't pulling away.
          elif lead_dist < (safe_dist_m * 0.5) and rel_speed < 0:
            msg = "70pct down"
            button = CruiseButtons.DECEL_SET
          # Reduce speed if rel_speed < -15kph so you don't rush up to the lead car.
          elif rel_speed < -15:
            msg = "relspd -15 down"
            button = CruiseButtons.DECEL_SET
          # We're close to the safe distance, so make slow adjustments.
          # Only adjust every 1 sec.
          elif (lead_dist < (safe_dist_m * 0.9) and rel_speed < 0
                and current_time_ms > self.automated_cruise_action_time + 1000):
            msg = "90pct down"
            button = CruiseButtons.DECEL_SET
          ### Speed up ###
          # Don't speed up again until you have more than a safe distance in
          # front; only adjust every 2 sec.
          elif ((lead_dist > (safe_dist_m * 0.8) or rel_speed > 5) and half_press_kph < available_speed
                and current_time_ms > self.automated_cruise_action_time + 100):
            msg = "120pct UP half: ", "{0:.1f}kph".format(half_press_kph), " avail: {0:.1f}kph".format(available_speed)
            button = CruiseButtons.RES_ACCEL
      # If we don't need to do any of the above, then we're at a pretty good
      # speed. Make sure if we're at this point that the set cruise speed isn't
      # set too low or high.
      if (cur_speed - CS.v_cruise_actual) > 5 and button == None:
        # Send cruise stalk up_1st if the set speed is too low to bring it up.
        msg = "cruise rectify"
        button = CruiseButtons.RES_ACCEL

    if (current_time_ms > self.last_update_time + 1000):
      ratio = 0
      if safe_dist_m > 0:
        ratio = (lead_dist / safe_dist_m) * 100
      print "Ratio: {0:.1f}%".format(ratio), " lead: ", "{0:.1f}m".format(lead_dist), " avail: ", "{0:.1f}kph".format(available_speed), " Rel Speed: ", "{0:.1f}kph".format(rel_speed), " Angle: {0:.1f}deg".format(CS.angle_steers)
      self.last_update_time = current_time_ms
    if msg != None:
      print msg
    return button

# ---------------------------------------------------------------------------
# File: Information search/IS - 1. Features (Sekitei)/extract_features.py
# Repo: Shemplo/Study-courses (Apache-2.0)
# ---------------------------------------------------------------------------

# coding: utf-8
import random
import sys
import re
import urlparse
import urllib

samples_size = 1000
treshold = 0.1

def read_file_and_get_subset (file, limit):
    lines = []
    desc = open (file, "r")
    for line in desc:
        lines.append (line [:-1])
    desc.close ()

    sample = range (len (lines))
    random.shuffle (sample)
    return (lines, sample [:limit])

def increment_counter (key, counters):
    if counters.get (key):
        counters [key] += 1
    else:
        counters [key] = 1

def handle_url (path, query, counters):
    templates = {
        "a": ["segment_name_", r".*", lambda s, m: s],
        "b": ["segment_[0-9]_", r"[0-9]+$|[0-9]+\.", lambda s, m: "1"],
        "c": ["segment_substr[0-9]_", r"[^\d]+\d+[^\d]+$", lambda s, m: "1"],
        "d": ["segment_ext_", r".+\.(.+)", lambda s, m: m.group (1)],
        "e": ["segment_ext_substr[0-9]_", r"[^\d]+\d+[^\d]+\.(\W+)",
              lambda s, m: m.group (1)],
        "f": ["segment_len_", r".*", lambda s, m: str (len (s))]
    }

    index = 0
    for seg in urllib.unquote (path).split ('/'):
        if seg == "":
            continue
        for k in templates:
            temp = templates [k]
            match = re.match (temp [1], seg)
            if match:
                increment_counter (temp [0] + str (index) + ":" + temp [2] (seg, match), counters)
        index += 1
    increment_counter ("segments:" + str (index), counters)

    for seg in urllib.unquote (query).split ('&'):
        if seg == "":
            continue
        match = re.match (r"(.+)=(.+)", seg)
        if match:
            increment_counter ("param_name:" + match.group (1), counters)
            increment_counter ("param:" + match.group (), counters)
        else:
            increment_counter ("param_name:" + seg, counters)

def write_results_file (file, counters):
    desc = open (file, "w")
    for k in counters.keys ():
        if counters [k] > treshold * samples_size:
            desc.write (k + '\t' + str (counters [k]) + '\n')
    desc.close ()

def extract_features (INPUT_FILE_1, INPUT_FILE_2, OUTPUT_FILE):
    counters = {}
    for f in [INPUT_FILE_1, INPUT_FILE_2]:
        subset = read_file_and_get_subset (f, samples_size)
        for i in subset [1]:
            url_parsed = urlparse.urlparse (subset [0][i])
            handle_url (url_parsed.path, url_parsed.query, counters)
    write_results_file (OUTPUT_FILE, counters)
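The segment templates in `handle_url` can be exercised in isolation. The sketch below is a Python 3 adaptation (the file above is Python 2) using a subset of the same regexes on a single URL path; it is illustrative, not part of the assignment's code:

```python
import re
from urllib.parse import unquote

# A subset of the feature templates from handle_url: (prefix, regex, extractor).
templates = [
    ("segment_name_", r".*", lambda s, m: s),
    ("segment_[0-9]_", r"[0-9]+$|[0-9]+\.", lambda s, m: "1"),
    ("segment_ext_", r".+\.(.+)", lambda s, m: m.group(1)),
    ("segment_len_", r".*", lambda s, m: str(len(s))),
]

counters = {}
index = 0
for seg in unquote("/news/2019/item.html").split('/'):
    if seg == "":
        continue
    for prefix, pattern, extract in templates:
        m = re.match(pattern, seg)
        if m:
            # Feature name encodes template, segment position, and value.
            key = prefix + str(index) + ":" + extract(seg, m)
            counters[key] = counters.get(key, 0) + 1
    index += 1
counters["segments:" + str(index)] = 1

print(sorted(counters))
```

For this URL the numeric template fires only on the `2019` segment and the extension template only on `item.html`, which is exactly the kind of positional feature Sekitei-style clustering relies on.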

# ---------------------------------------------------------------------------
# File: test/functional/test_framework/util.py
# Repo: minblock/Scoutcoin (MIT)
# ---------------------------------------------------------------------------

#!/usr/bin/env python3
# Copyright (c) 2014-2018 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Helpful routines for regression testing."""
from base64 import b64encode
from binascii import hexlify, unhexlify
from decimal import Decimal, ROUND_DOWN
import hashlib
import inspect
import json
import logging
import os
import random
import re
from subprocess import CalledProcessError
import time
from . import coverage
from .authproxy import AuthServiceProxy, JSONRPCException
logger = logging.getLogger("TestFramework.utils")
# Assert functions
##################
def assert_fee_amount(fee, tx_size, fee_per_kB):
    """Assert the fee was in range"""
    target_fee = round(tx_size * fee_per_kB / 1000, 8)
    if fee < target_fee:
        raise AssertionError("Fee of %s STC too low! (Should be %s STC)" % (str(fee), str(target_fee)))
    # allow the wallet's estimation to be at most 2 bytes off
    if fee > (tx_size + 2) * fee_per_kB / 1000:
        raise AssertionError("Fee of %s STC too high! (Should be %s STC)" % (str(fee), str(target_fee)))

def assert_equal(thing1, thing2, *args):
    if thing1 != thing2 or any(thing1 != arg for arg in args):
        raise AssertionError("not(%s)" % " == ".join(str(arg) for arg in (thing1, thing2) + args))

def assert_greater_than(thing1, thing2):
    if thing1 <= thing2:
        raise AssertionError("%s <= %s" % (str(thing1), str(thing2)))

def assert_greater_than_or_equal(thing1, thing2):
    if thing1 < thing2:
        raise AssertionError("%s < %s" % (str(thing1), str(thing2)))

def assert_raises(exc, fun, *args, **kwds):
    assert_raises_message(exc, None, fun, *args, **kwds)

def assert_raises_message(exc, message, fun, *args, **kwds):
    try:
        fun(*args, **kwds)
    except JSONRPCException:
        raise AssertionError("Use assert_raises_rpc_error() to test RPC failures")
    except exc as e:
        if message is not None and message not in e.error['message']:
            raise AssertionError("Expected substring not found:" + e.error['message'])
    except Exception as e:
        raise AssertionError("Unexpected exception raised: " + type(e).__name__)
    else:
        raise AssertionError("No exception raised")
def assert_raises_process_error(returncode, output, fun, *args, **kwds):
    """Execute a process and asserts the process return code and output.

    Calls function `fun` with arguments `args` and `kwds`. Catches a CalledProcessError
    and verifies that the return code and output are as expected. Throws AssertionError if
    no CalledProcessError was raised or if the return code and output are not as expected.

    Args:
        returncode (int): the process return code.
        output (string): [a substring of] the process output.
        fun (function): the function to call. This should execute a process.
        args*: positional arguments for the function.
        kwds**: named arguments for the function.
    """
    try:
        fun(*args, **kwds)
    except CalledProcessError as e:
        if returncode != e.returncode:
            raise AssertionError("Unexpected returncode %i" % e.returncode)
        if output not in e.output:
            raise AssertionError("Expected substring not found:" + e.output)
    else:
        raise AssertionError("No exception raised")

def assert_raises_rpc_error(code, message, fun, *args, **kwds):
    """Run an RPC and verify that a specific JSONRPC exception code and message is raised.

    Calls function `fun` with arguments `args` and `kwds`. Catches a JSONRPCException
    and verifies that the error code and message are as expected. Throws AssertionError if
    no JSONRPCException was raised or if the error code/message are not as expected.

    Args:
        code (int), optional: the error code returned by the RPC call (defined
            in src/rpc/protocol.h). Set to None if checking the error code is not required.
        message (string), optional: [a substring of] the error string returned by the
            RPC call. Set to None if checking the error string is not required.
        fun (function): the function to call. This should be the name of an RPC.
        args*: positional arguments for the function.
        kwds**: named arguments for the function.
    """
    assert try_rpc(code, message, fun, *args, **kwds), "No exception raised"

def try_rpc(code, message, fun, *args, **kwds):
    """Tries to run an rpc command.

    Test against error code and message if the rpc fails.
    Returns whether a JSONRPCException was raised."""
    try:
        fun(*args, **kwds)
    except JSONRPCException as e:
        # JSONRPCException was thrown as expected. Check the code and message values are correct.
        if (code is not None) and (code != e.error["code"]):
            raise AssertionError("Unexpected JSONRPC error code %i" % e.error["code"])
        if (message is not None) and (message not in e.error['message']):
            raise AssertionError("Expected substring not found:" + e.error['message'])
        return True
    except Exception as e:
        raise AssertionError("Unexpected exception raised: " + type(e).__name__)
    else:
        return False
def assert_is_hex_string(string):
    try:
        int(string, 16)
    except Exception as e:
        raise AssertionError(
            "Couldn't interpret %r as hexadecimal; raised: %s" % (string, e))

def assert_is_hash_string(string, length=64):
    if not isinstance(string, str):
        raise AssertionError("Expected a string, got type %r" % type(string))
    elif length and len(string) != length:
        raise AssertionError(
            "String of length %d expected; got %d" % (length, len(string)))
    elif not re.match('[abcdef0-9]+$', string):
        raise AssertionError(
            "String %r contains invalid characters for a hash." % string)

def assert_array_result(object_array, to_match, expected, should_not_find=False):
    """
    Pass in array of JSON objects, a dictionary with key/value pairs
    to match against, and another dictionary with expected key/value
    pairs.
    If the should_not_find flag is true, to_match should not be found
    in object_array
    """
    if should_not_find:
        assert_equal(expected, {})
    num_matched = 0
    for item in object_array:
        all_match = True
        for key, value in to_match.items():
            if item[key] != value:
                all_match = False
        if not all_match:
            continue
        elif should_not_find:
            num_matched = num_matched + 1
        for key, value in expected.items():
            if item[key] != value:
                raise AssertionError("%s : expected %s=%s" % (str(item), str(key), str(value)))
            num_matched = num_matched + 1
    if num_matched == 0 and not should_not_find:
        raise AssertionError("No objects matched %s" % (str(to_match)))
    if num_matched > 0 and should_not_find:
        raise AssertionError("Objects were found %s" % (str(to_match)))
# Utility functions
###################
def check_json_precision():
"""Make sure json library being used does not lose precision converting BTC values"""
n = Decimal("20000000.00000003")
satoshis = int(json.loads(json.dumps(float(n))) * 1.0e8)
if satoshis != 2000000000000003:
raise RuntimeError("JSON encode/decode loses precision")
def count_bytes(hex_string):
return len(bytearray.fromhex(hex_string))
def bytes_to_hex_str(byte_str):
return hexlify(byte_str).decode('ascii')
def hash256(byte_str):
sha256 = hashlib.sha256()
sha256.update(byte_str)
sha256d = hashlib.sha256()
sha256d.update(sha256.digest())
return sha256d.digest()[::-1]
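A standalone sanity check of the double-SHA256 helper above (redefined here so the snippet runs on its own; the byte reversal matches how Bitcoin-style tools conventionally display hashes):

```python
# Sketch of hash256: SHA-256 applied twice, result byte-reversed.
import hashlib

def hash256(byte_str):
    inner = hashlib.sha256(byte_str).digest()
    return hashlib.sha256(inner).digest()[::-1]

# Reversing the result recovers the plain double-SHA256 digest.
digest = hash256(b"hello")
assert len(digest) == 32
assert digest[::-1] == hashlib.sha256(hashlib.sha256(b"hello").digest()).digest()
```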
def hex_str_to_bytes(hex_str):
return unhexlify(hex_str.encode('ascii'))
def str_to_b64str(string):
return b64encode(string.encode('utf-8')).decode('ascii')
def satoshi_round(amount):
return Decimal(amount).quantize(Decimal('0.00000001'), rounding=ROUND_DOWN)
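A minimal restatement of `satoshi_round` with worked values: `ROUND_DOWN` truncates (never rounds up) to 8 decimal places, the satoshi precision:

```python
# Standalone sketch of satoshi_round: truncate to 8 decimal places.
from decimal import Decimal, ROUND_DOWN

def satoshi_round(amount):
    return Decimal(amount).quantize(Decimal('0.00000001'), rounding=ROUND_DOWN)

assert satoshi_round("0.123456789") == Decimal("0.12345678")
assert satoshi_round("1.000000019") == Decimal("1.00000001")
```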
def wait_until(predicate, *, attempts=float('inf'), timeout=float('inf'), lock=None):
if attempts == float('inf') and timeout == float('inf'):
timeout = 60
attempt = 0
time_end = time.time() + timeout
while attempt < attempts and time.time() < time_end:
if lock:
with lock:
if predicate():
return
else:
if predicate():
return
attempt += 1
time.sleep(0.05)
# Print the cause of the timeout
    predicate_source = "'''\n" + inspect.getsource(predicate) + "'''"
logger.error("wait_until() failed. Predicate: {}".format(predicate_source))
if attempt >= attempts:
raise AssertionError("Predicate {} not true after {} attempts".format(predicate_source, attempts))
elif time.time() >= time_end:
raise AssertionError("Predicate {} not true after {} seconds".format(predicate_source, timeout))
raise RuntimeError('Unreachable')
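The polling pattern `wait_until` implements can be sketched in isolation: poll a predicate until it holds or a deadline passes. The names and timeout values below are illustrative, not the framework's API:

```python
# Toy version of the poll-until-deadline loop used by wait_until.
import time

def poll_until(predicate, timeout=5.0, interval=0.05):
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    raise AssertionError("predicate not true after %.1f seconds" % timeout)

state = {"done": False}
state["done"] = True  # in a real test, a node or thread flips this flag
assert poll_until(lambda: state["done"]) is True
```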
# RPC/P2P connection constants and functions
############################################
# The maximum number of nodes a single test can spawn
MAX_NODES = 8
# Don't assign rpc or p2p ports lower than this
PORT_MIN = 11000
# The number of ports to "reserve" for p2p and rpc, each
PORT_RANGE = 5000
class PortSeed:
# Must be initialized with a unique integer for each process
n = None
def get_rpc_proxy(url, node_number, timeout=None, coveragedir=None):
"""
Args:
url (str): URL of the RPC server to call
node_number (int): the node number (or id) that this calls to
Kwargs:
timeout (int): HTTP timeout in seconds
Returns:
AuthServiceProxy. convenience object for making RPC calls.
"""
proxy_kwargs = {}
if timeout is not None:
proxy_kwargs['timeout'] = timeout
proxy = AuthServiceProxy(url, **proxy_kwargs)
proxy.url = url # store URL on proxy for info
coverage_logfile = coverage.get_filename(
coveragedir, node_number) if coveragedir else None
return coverage.AuthServiceProxyWrapper(proxy, coverage_logfile)
def p2p_port(n):
assert(n <= MAX_NODES)
return PORT_MIN + n + (MAX_NODES * PortSeed.n) % (PORT_RANGE - 1 - MAX_NODES)
def rpc_port(n):
return PORT_MIN + PORT_RANGE + n + (MAX_NODES * PortSeed.n) % (PORT_RANGE - 1 - MAX_NODES)
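A worked example of the port layout above, assuming `PortSeed.n` is 1: p2p ports start at `PORT_MIN`, rpc ports sit `PORT_RANGE` above them, and the seed term keeps parallel test processes from colliding:

```python
# Illustrative port arithmetic; `seed` stands in for PortSeed.n.
MAX_NODES = 8
PORT_MIN = 11000
PORT_RANGE = 5000
seed = 1

def example_p2p_port(n):
    return PORT_MIN + n + (MAX_NODES * seed) % (PORT_RANGE - 1 - MAX_NODES)

def example_rpc_port(n):
    return PORT_MIN + PORT_RANGE + n + (MAX_NODES * seed) % (PORT_RANGE - 1 - MAX_NODES)

assert example_p2p_port(0) == 11008
assert example_rpc_port(0) == 16008
assert example_rpc_port(0) - example_p2p_port(0) == PORT_RANGE
```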
def rpc_url(datadir, i, rpchost=None):
rpc_u, rpc_p = get_auth_cookie(datadir)
host = '127.0.0.1'
port = rpc_port(i)
if rpchost:
parts = rpchost.split(':')
if len(parts) == 2:
host, port = parts
else:
host = rpchost
return "http://%s:%s@%s:%d" % (rpc_u, rpc_p, host, int(port))
# Node functions
################
def initialize_datadir(dirname, n):
datadir = get_datadir_path(dirname, n)
if not os.path.isdir(datadir):
os.makedirs(datadir)
with open(os.path.join(datadir, "scoutcoin.conf"), 'w', encoding='utf8') as f:
f.write("regtest=1\n")
f.write("[regtest]\n")
f.write("port=" + str(p2p_port(n)) + "\n")
f.write("rpcport=" + str(rpc_port(n)) + "\n")
f.write("server=1\n")
f.write("keypool=1\n")
f.write("discover=0\n")
f.write("listenonion=0\n")
f.write("printtoconsole=0\n")
os.makedirs(os.path.join(datadir, 'stderr'), exist_ok=True)
os.makedirs(os.path.join(datadir, 'stdout'), exist_ok=True)
return datadir
def get_datadir_path(dirname, n):
return os.path.join(dirname, "node" + str(n))
def append_config(datadir, options):
with open(os.path.join(datadir, "scoutcoin.conf"), 'a', encoding='utf8') as f:
for option in options:
f.write(option + "\n")
def get_auth_cookie(datadir):
user = None
password = None
if os.path.isfile(os.path.join(datadir, "scoutcoin.conf")):
with open(os.path.join(datadir, "scoutcoin.conf"), 'r', encoding='utf8') as f:
for line in f:
if line.startswith("rpcuser="):
assert user is None # Ensure that there is only one rpcuser line
user = line.split("=")[1].strip("\n")
if line.startswith("rpcpassword="):
assert password is None # Ensure that there is only one rpcpassword line
password = line.split("=")[1].strip("\n")
if os.path.isfile(os.path.join(datadir, "regtest", ".cookie")) and os.access(os.path.join(datadir, "regtest", ".cookie"), os.R_OK):
with open(os.path.join(datadir, "regtest", ".cookie"), 'r', encoding="ascii") as f:
userpass = f.read()
split_userpass = userpass.split(':')
user = split_userpass[0]
password = split_userpass[1]
if user is None or password is None:
raise ValueError("No RPC credentials")
return user, password
# If a cookie file exists in the given datadir, delete it.
def delete_cookie_file(datadir):
if os.path.isfile(os.path.join(datadir, "regtest", ".cookie")):
logger.debug("Deleting leftover cookie file")
os.remove(os.path.join(datadir, "regtest", ".cookie"))
def get_bip9_status(node, key):
info = node.getblockchaininfo()
return info['bip9_softforks'][key]
def set_node_times(nodes, t):
for node in nodes:
node.setmocktime(t)
def disconnect_nodes(from_connection, node_num):
for peer_id in [peer['id'] for peer in from_connection.getpeerinfo() if "testnode%d" % node_num in peer['subver']]:
try:
from_connection.disconnectnode(nodeid=peer_id)
except JSONRPCException as e:
# If this node is disconnected between calculating the peer id
# and issuing the disconnect, don't worry about it.
# This avoids a race condition if we're mass-disconnecting peers.
if e.error['code'] != -29: # RPC_CLIENT_NODE_NOT_CONNECTED
raise
# wait to disconnect
wait_until(lambda: [peer['id'] for peer in from_connection.getpeerinfo() if "testnode%d" % node_num in peer['subver']] == [], timeout=5)
def connect_nodes(from_connection, node_num):
ip_port = "127.0.0.1:" + str(p2p_port(node_num))
from_connection.addnode(ip_port, "onetry")
# poll until version handshake complete to avoid race conditions
# with transaction relaying
wait_until(lambda: all(peer['version'] != 0 for peer in from_connection.getpeerinfo()))
def connect_nodes_bi(nodes, a, b):
connect_nodes(nodes[a], b)
connect_nodes(nodes[b], a)
def sync_blocks(rpc_connections, *, wait=1, timeout=60):
"""
Wait until everybody has the same tip.
sync_blocks needs to be called with an rpc_connections set that has least
one node already synced to the latest, stable tip, otherwise there's a
chance it might return before all nodes are stably synced.
"""
stop_time = time.time() + timeout
while time.time() <= stop_time:
best_hash = [x.getbestblockhash() for x in rpc_connections]
if best_hash.count(best_hash[0]) == len(rpc_connections):
return
time.sleep(wait)
raise AssertionError("Block sync timed out:{}".format("".join("\n {!r}".format(b) for b in best_hash)))
def sync_mempools(rpc_connections, *, wait=1, timeout=60, flush_scheduler=True):
"""
Wait until everybody has the same transactions in their memory
pools
"""
stop_time = time.time() + timeout
while time.time() <= stop_time:
pool = [set(r.getrawmempool()) for r in rpc_connections]
if pool.count(pool[0]) == len(rpc_connections):
if flush_scheduler:
for r in rpc_connections:
r.syncwithvalidationinterfacequeue()
return
time.sleep(wait)
raise AssertionError("Mempool sync timed out:{}".format("".join("\n {!r}".format(m) for m in pool)))
# Transaction/Block functions
#############################
def find_output(node, txid, amount, *, blockhash=None):
"""
Return index to output of txid with value amount
Raises exception if there is none.
"""
txdata = node.getrawtransaction(txid, 1, blockhash)
for i in range(len(txdata["vout"])):
if txdata["vout"][i]["value"] == amount:
return i
raise RuntimeError("find_output txid %s : %s not found" % (txid, str(amount)))
def gather_inputs(from_node, amount_needed, confirmations_required=1):
"""
Return a random set of unspent txouts that are enough to pay amount_needed
"""
assert(confirmations_required >= 0)
utxo = from_node.listunspent(confirmations_required)
random.shuffle(utxo)
inputs = []
total_in = Decimal("0.00000000")
while total_in < amount_needed and len(utxo) > 0:
t = utxo.pop()
total_in += t["amount"]
inputs.append({"txid": t["txid"], "vout": t["vout"], "address": t["address"]})
if total_in < amount_needed:
raise RuntimeError("Insufficient funds: need %d, have %d" % (amount_needed, total_in))
return (total_in, inputs)
def make_change(from_node, amount_in, amount_out, fee):
"""
Create change output(s), return them
"""
outputs = {}
amount = amount_out + fee
change = amount_in - amount
if change > amount * 2:
# Create an extra change output to break up big inputs
change_address = from_node.getnewaddress()
# Split change in two, being careful of rounding:
outputs[change_address] = Decimal(change / 2).quantize(Decimal('0.00000001'), rounding=ROUND_DOWN)
change = amount_in - amount - outputs[change_address]
if change > 0:
outputs[from_node.getnewaddress()] = change
return outputs
def random_transaction(nodes, amount, min_fee, fee_increment, fee_variants):
"""
Create a random transaction.
Returns (txid, hex-encoded-transaction-data, fee)
"""
from_node = random.choice(nodes)
to_node = random.choice(nodes)
fee = min_fee + fee_increment * random.randint(0, fee_variants)
(total_in, inputs) = gather_inputs(from_node, amount + fee)
outputs = make_change(from_node, total_in, amount, fee)
outputs[to_node.getnewaddress()] = float(amount)
rawtx = from_node.createrawtransaction(inputs, outputs)
signresult = from_node.signrawtransactionwithwallet(rawtx)
txid = from_node.sendrawtransaction(signresult["hex"], True)
return (txid, signresult["hex"], fee)
# Helper to create at least "count" utxos
# Pass in a fee that is sufficient for relay and mining new transactions.
def create_confirmed_utxos(fee, node, count):
to_generate = int(0.5 * count) + 101
while to_generate > 0:
node.generate(min(25, to_generate))
to_generate -= 25
utxos = node.listunspent()
iterations = count - len(utxos)
addr1 = node.getnewaddress()
addr2 = node.getnewaddress()
if iterations <= 0:
return utxos
for i in range(iterations):
t = utxos.pop()
inputs = []
inputs.append({"txid": t["txid"], "vout": t["vout"]})
outputs = {}
send_value = t['amount'] - fee
outputs[addr1] = satoshi_round(send_value / 2)
outputs[addr2] = satoshi_round(send_value / 2)
raw_tx = node.createrawtransaction(inputs, outputs)
signed_tx = node.signrawtransactionwithwallet(raw_tx)["hex"]
node.sendrawtransaction(signed_tx)
while (node.getmempoolinfo()['size'] > 0):
node.generate(1)
utxos = node.listunspent()
assert(len(utxos) >= count)
return utxos
# Create large OP_RETURN txouts that can be appended to a transaction
# to make it large (helper for constructing large transactions).
def gen_return_txouts():
# Some pre-processing to create a bunch of OP_RETURN txouts to insert into transactions we create
# So we have big transactions (and therefore can't fit very many into each block)
# create one script_pubkey
script_pubkey = "6a4d0200" # OP_RETURN OP_PUSH2 512 bytes
for i in range(512):
script_pubkey = script_pubkey + "01"
# concatenate 128 txouts of above script_pubkey which we'll insert before the txout for change
txouts = "81"
for k in range(128):
# add txout value
txouts = txouts + "0000000000000000"
# add length of script_pubkey
txouts = txouts + "fd0402"
# add script_pubkey
txouts = txouts + script_pubkey
return txouts
# Create a spend of each passed-in utxo, splicing in "txouts" to each raw
# transaction to make it large. See gen_return_txouts() above.
def create_lots_of_big_transactions(node, txouts, utxos, num, fee):
addr = node.getnewaddress()
txids = []
for _ in range(num):
t = utxos.pop()
inputs = [{"txid": t["txid"], "vout": t["vout"]}]
outputs = {}
change = t['amount'] - fee
outputs[addr] = satoshi_round(change)
rawtx = node.createrawtransaction(inputs, outputs)
newtx = rawtx[0:92]
newtx = newtx + txouts
newtx = newtx + rawtx[94:]
signresult = node.signrawtransactionwithwallet(newtx, None, "NONE")
txid = node.sendrawtransaction(signresult["hex"], True)
txids.append(txid)
return txids
def mine_large_block(node, utxos=None):
    # generate 14 transactions of roughly 66 KiB each, which together
    # come close to the 1 MB block limit
num = 14
txouts = gen_return_txouts()
utxos = utxos if utxos is not None else []
if len(utxos) < num:
utxos.clear()
utxos.extend(node.listunspent())
fee = 100 * node.getnetworkinfo()["relayfee"]
create_lots_of_big_transactions(node, txouts, utxos, num, fee=fee)
node.generate(1)
def find_vout_for_address(node, txid, addr):
"""
Locate the vout index of the given transaction sending to the
given address. Raises runtime error exception if not found.
"""
tx = node.getrawtransaction(txid, True)
for i in range(len(tx["vout"])):
if any([addr == a for a in tx["vout"][i]["scriptPubKey"]["addresses"]]):
return i
raise RuntimeError("Vout not found for address: txid=%s, addr=%s" % (txid, addr))
| 38.88986 | 140 | 0.652371 |
9a2e17155df14ae1c617cab7d9bfcd1821b766a5 | 3,128 | py | Python | elasticsearch/_async/client/dangling_indices.py | michaelbalber/elasticsearch-py | 598a76702509f7d2aac8ffbf64cfad9829ba6fe9 | ["Apache-2.0"] | 1 | 2021-09-02T14:50:53.000Z | 2021-09-02T14:50:53.000Z | elasticsearch/_async/client/dangling_indices.py | michaelbalber/elasticsearch-py | 598a76702509f7d2aac8ffbf64cfad9829ba6fe9 | ["Apache-2.0"] | null | null | null | elasticsearch/_async/client/dangling_indices.py | michaelbalber/elasticsearch-py | 598a76702509f7d2aac8ffbf64cfad9829ba6fe9 | ["Apache-2.0"] | null | null | null |
# Licensed to Elasticsearch B.V. under one or more contributor
# license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Elasticsearch B.V. licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from .utils import NamespacedClient, SKIP_IN_PATH, query_params, _make_path
class DanglingIndicesClient(NamespacedClient):
@query_params("accept_data_loss", "master_timeout", "timeout")
async def delete_dangling_index(self, index_uuid, params=None, headers=None):
"""
Deletes the specified dangling index
`<https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-gateway-dangling-indices.html>`_
:arg index_uuid: The UUID of the dangling index
:arg accept_data_loss: Must be set to true in order to delete
the dangling index
:arg master_timeout: Specify timeout for connection to master
:arg timeout: Explicit operation timeout
"""
if index_uuid in SKIP_IN_PATH:
raise ValueError("Empty value passed for a required argument 'index_uuid'.")
return await self.transport.perform_request(
"DELETE",
_make_path("_dangling", index_uuid),
params=params,
headers=headers,
)
@query_params("accept_data_loss", "master_timeout", "timeout")
async def import_dangling_index(self, index_uuid, params=None, headers=None):
"""
Imports the specified dangling index
`<https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-gateway-dangling-indices.html>`_
:arg index_uuid: The UUID of the dangling index
:arg accept_data_loss: Must be set to true in order to import
the dangling index
:arg master_timeout: Specify timeout for connection to master
:arg timeout: Explicit operation timeout
"""
if index_uuid in SKIP_IN_PATH:
raise ValueError("Empty value passed for a required argument 'index_uuid'.")
return await self.transport.perform_request(
"POST", _make_path("_dangling", index_uuid), params=params, headers=headers
)
@query_params()
async def list_dangling_indices(self, params=None, headers=None):
"""
Returns all dangling indices.
`<https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-gateway-dangling-indices.html>`_
"""
return await self.transport.perform_request(
"GET", "/_dangling", params=params, headers=headers
)
| 41.706667 | 113 | 0.695013 |
7dc9300db81bff6372c6bea9c06ef17fb2994310 | 143 | py | Python | learning_users/basic_app/admin.py | BAXTOR95/django_deployment | f9a33d15fb90b7cedc020a96574ae240d88e1bad | ["MIT"] | null | null | null | learning_users/basic_app/admin.py | BAXTOR95/django_deployment | f9a33d15fb90b7cedc020a96574ae240d88e1bad | ["MIT"] | null | null | null | learning_users/basic_app/admin.py | BAXTOR95/django_deployment | f9a33d15fb90b7cedc020a96574ae240d88e1bad | ["MIT"] | null | null | null |
from django.contrib import admin
from basic_app.models import UserProfileInfo
# Register your models here.
admin.site.register(UserProfileInfo)
| 35.75 | 44 | 0.853147 |
bc57f1a014282a63343ad3858c8b8682b83b2fa5 | 1,278 | py | Python | pyinstaller-1.5.1/hooks/shared_PIL_SpiderImagePlugin.py | rafidhoda/differentiator_utility | b30ef54abef5bde46fb2d3eac8cb43218a7a4c17 | ["Xnet", "X11"] | null | null | null | pyinstaller-1.5.1/hooks/shared_PIL_SpiderImagePlugin.py | rafidhoda/differentiator_utility | b30ef54abef5bde46fb2d3eac8cb43218a7a4c17 | ["Xnet", "X11"] | null | null | null | pyinstaller-1.5.1/hooks/shared_PIL_SpiderImagePlugin.py | rafidhoda/differentiator_utility | b30ef54abef5bde46fb2d3eac8cb43218a7a4c17 | ["Xnet", "X11"] | null | null | null |
# Copyright (C) 2006, Giovanni Bajo
# Based on previous work under copyright (c) 2001, 2002 McMillan Enterprises, Inc.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
# PIL's SpiderImagePlugin features a tkPhotoImage() method which imports
# ImageTk (and thus brings the whole Tcl/Tk library in).
# We cheat a little and remove the ImageTk import: I assume that if people
# are really using ImageTk in their application, they will also import it
# directly.
def hook(mod):
for i in range(len(mod.imports)):
if mod.imports[i][0] == "ImageTk":
del mod.imports[i]
break
return mod
| 44.068966 | 82 | 0.737872 |
65a0d827bb535bcde55488e5740bfc15ebaf3ee6 | 10,109 | py | Python | weibo/weibo.py | ComputerNetworksExp2016/weibo-exp | 883150bf226b607fd3f8e2f73e6d06a0e35ea214 | ["MIT"] | null | null | null | weibo/weibo.py | ComputerNetworksExp2016/weibo-exp | 883150bf226b607fd3f8e2f73e6d06a0e35ea214 | ["MIT"] | null | null | null | weibo/weibo.py | ComputerNetworksExp2016/weibo-exp | 883150bf226b607fd3f8e2f73e6d06a0e35ea214 | ["MIT"] | null | null | null |
from base64 import b64encode
from datetime import datetime, timedelta
from time import sleep
from getpass import getpass
import json
import logging
from pathlib import Path
import pickle
import sys
from bs4 import BeautifulSoup
from requests import Session
logging.basicConfig(level=logging.DEBUG)
def parse_num(s):
if s[-1] == '万':
return int(s[:-1]) * 10000
else:
return int(s)
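A standalone restatement of `parse_num` with a quick sanity check: '万' is the Chinese numeral for ten thousand, so weibo counts like '3万' mean 30000:

```python
# Sketch of parse_num: expand the '万' (ten-thousand) suffix.
def parse_num(s):
    if s[-1] == '万':
        return int(s[:-1]) * 10000
    return int(s)

assert parse_num('3万') == 30000
assert parse_num('271') == 271
```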
class Post(object):
"""docstring for Post"""
def __init__(self, mid, uid, created_at, length, repost_num, comment_num, like_num):
self.mid = mid
self.uid = uid
self.created_at = created_at
self.length = length
self.repost_num = repost_num
self.comment_num = comment_num
self.like_num = like_num
def __str__(self):
return 'Post({}, {}, {}, {}, {}, {}, {})'.format(
self.mid, self.uid, self.created_at, self.length, self.repost_num,
self.comment_num, self.like_num)
class User(object):
"""docstring for User"""
def __init__(self, uid, gender, age, location, post_num, following_num, follower_num, following_uids):
self.uid = uid
self.gender = gender
self.age = age
self.location = location
self.post_num = post_num
self.following_num = following_num
self.follower_num = follower_num
self.following_uids = following_uids
def __str__(self):
return 'User({}, {}, {}, {}, {}, {}, {}, {})'.format(
self.uid, self.gender, self.age, self.location, self.post_num,
self.following_num, self.follower_num, self.following_uids)
class Weibo(object):
"""Client for weibo.com."""
USER_AGENT = 'Mozilla/5.0 (iPhone; CPU iPhone OS 9_3 like Mac OS X) AppleWebKit/601.1.46 (KHTML, like Gecko) Version/9.0 Mobile/13E188a Safari/601.1'
PICKLE_FILE = 'weibo.pickle'
REQUEST_INTERVAL = 5
def __init__(self, username, password):
"""Create a new client."""
logging.debug('Logging in with the username %s', username)
self.session = Session()
payload = {
'username': username,
'password': password,
'savestate': 1,
'ec': 0,
            'pagerefer': 'https://passport.weibo.cn/signin/welcome?entry=mweibo&r=http%3A%2F%2Fm.weibo.cn%2F&wm=3349&vt=4',
'entry': 'mweibo'
}
headers = {
'Referer': 'https://passport.weibo.cn/signin/',
}
r = self.post('https://passport.weibo.cn/sso/login',
data=payload, headers=headers)
r.raise_for_status()
data = json.loads(r.content.decode())
if data['retcode'] != 20000000:
raise RuntimeError('Failed to login: ' + data['msg'])
self.uid = data['data']['uid']
logging.info('Logged in, uid: %s', self.uid)
# Get cross-domain cookies.
        self.get(data['data']['loginresulturl'])
self.users = {}
def save(self, file=None):
if file is None:
file = self.PICKLE_FILE
logging.debug('Saving to %s.', file)
pickle.dump(self, open(file, 'wb'))
@classmethod
def from_pickle(cls, file=None):
"""Load a client from a pickle file if possible."""
if file is None:
file = cls.PICKLE_FILE
if Path(file).exists():
logging.debug('Loading from %s.', file)
return pickle.load(open(file, 'rb'))
else:
logging.debug('Pickle file (%s) does not exist, creating new client.', file)
# username = input('Username: ')
# password = getpass('Password for {}: '.format(username))
username = sys.argv[1]
password = sys.argv[2]
return cls(username, password)
def get(self, url, **kw):
if 'headers' not in kw:
kw['headers'] = {}
kw['headers']['User-Agent'] = self.USER_AGENT
sleep(self.REQUEST_INTERVAL)
return self.session.get(url, **kw)
def post(self, url, **kw):
if 'headers' not in kw:
kw['headers'] = {}
kw['headers']['User-Agent'] = self.USER_AGENT
sleep(self.REQUEST_INTERVAL)
return self.session.post(url, **kw)
def topic_posts(self, containerid):
"""Return posts of a certain topic."""
next_cursor = None
for i in range(30, 31):
failure = 0
while failure < 3:
try:
r = self.get('http://m.weibo.cn/page/pageJson?containerid=%s&next_cursor=%s&page=%d' % (containerid,next_cursor,i))
r.raise_for_status()
data = json.loads(r.content.decode())
next_cursor = data['next_cursor']
l = data['cards'][0]['card_group']
for j in l:
mid = j['mblog']['id']
uid = j['mblog']['user']['id']
created_at = datetime.fromtimestamp(j['mblog']['created_timestamp'])
length = j['mblog']['textLength']
repost_num = j['mblog']['reposts_count']
comment_num = j['mblog']['comments_count']
like_num = j['mblog']['like_count']
yield Post(mid, uid, created_at, length, repost_num, comment_num, like_num)
break
except Exception as e:
logging.error(e)
failure += 1
else: # Failed to fetch more posts.
break
raise StopIteration()
def topic_followers(self, containerid):
"""Return followers of a certain topic."""
followers_id = []
# for i in range(2, 501):
for i in range(int(sys.argv[3]), int(sys.argv[3])+50):
failure = 0
while failure < 3:
try:
r = self.get('http://m.weibo.cn/page/pageJson?&containerid=230403_-_%s&page=%d' % (containerid, i))
r.raise_for_status()
data = json.loads(r.content.decode())
l = data['cards'][0]['card_group']
for j in l:
yield j['user']['id']
break
except Exception as e:
logging.error(e)
failure += 1
else: # Failed to fetch more posts.
break
raise StopIteration()
def user(self, uid):
"""Return a certain user"""
failure = 0
while failure < 3:
try:
r = self.get('http://m.weibo.cn/u/{}'.format(uid))
r.raise_for_status()
data = r.content.decode()
begin = data.find("""[{"mod_type":""")
end = data.find("""},'common':""")
body = data[begin:end]
body = json.loads(body)
post_num = int(body[1]['mblogNum'])
following_num = int(body[1]['attNum'])
follower_num = int(body[1]['fansNum'])
# print(r.content)
# soup = BeautifulSoup(r.content.decode(), "html.parser")
# infos = soup.select('.mct-a.txt-s')
# post_num = parse_num(infos[2].get_text())
# following_num = parse_num(infos[4].get_text())
# follower_num = parse_num(infos[8].get_text())
r = self.get('http://m.weibo.cn/users/{}'.format(uid))
r.raise_for_status()
soup = BeautifulSoup(r.content.decode(), "html.parser")
infos = {}
for name, value in zip(soup.select('.item-info-page span'),
soup.select('.item-info-page p')):
infos[name.get_text()] = value.get_text()
gender = infos.get('性别')
location = infos.get('所在地')
birthday = infos.get('生日')
if birthday is None:
age = None
else:
try:
birthday = datetime.strptime(birthday, '%Y-%m-%d')
if birthday == datetime(1970, 1, 1):
age = None
else:
age = (datetime.today() - birthday) / timedelta(365)
except ValueError:
age = None
following_uids = []
# for following_uid in self.followings(uid):
# following_uids.append(following_uid)
return User(uid, gender, age, location, post_num, following_num, follower_num, following_uids)
except Exception as e:
logging.error(e)
failure += 1
return None
def followings(self, uid):
"""Return following uids"""
followingsID = []
i = 1
while True:
failure = 0
while failure < 3:
try:
r = self.get('http://m.weibo.cn/page/json?containerid=100505%s_-_FOLLOWERS&page=%d' % (uid, i))
r.raise_for_status()
data = json.loads(r.content.decode())
l = data['cards'][0]['card_group']
for j in l:
yield j['user']['id']
break
except Exception as e:
logging.error(e)
failure += 1
else:
break
            i += 1
raise StopIteration()
if __name__ == '__main__':
client = Weibo.from_pickle()
# client.topic_posts('1008086edfd628a87d2ee80e5a4352f13de408')
# client.save()
# print(client.user('5324474591'))
# print(followingsID)
# for post in client.topic_posts('1008086edfd628a87d2ee80e5a4352f13de408'):
# print(post)
# for user in client.topic_followers('1008086edfd628a87d2ee80e5a4352f13de408'):
# print(user)
| 36.103571 | 153 | 0.513008 |
646f3671b147f42620a5646a87d96e14802224fd | 601 | py | Python | examples/bulk_export.py | madhulika-accolite/fusionexport-python-client | 70e9566288f615a4a38f1676e99ede1900da9163 | ["MIT"] | 1 | 2018-05-24T09:06:23.000Z | 2018-05-24T09:06:23.000Z | examples/bulk_export.py | madhulika-accolite/fusionexport-python-client | 70e9566288f615a4a38f1676e99ede1900da9163 | ["MIT"] | null | null | null | examples/bulk_export.py | madhulika-accolite/fusionexport-python-client | 70e9566288f615a4a38f1676e99ede1900da9163 | ["MIT"] | null | null | null |
#!/usr/bin/env python
from fusionexport import ExportManager, ExportConfig # Import sdk
# Instantiate the ExportConfig class and add the required configurations
export_config = ExportConfig()
export_config["chartConfig"] = "bulk.json"
# Provide port and host of FusionExport Service
export_server_host = "127.0.0.1"
export_server_port = 1337
# Instantiate the ExportManager class
em = ExportManager(export_server_host, export_server_port)
# Call the export() method with the export config and the output location
exported_files = em.export(export_config, "./exports", True)
print(exported_files)
| 33.388889 | 73 | 0.800333 |
5238c67e93dfd672d3f56eb645a8ee52c2275949 | 16,937 | py | Python | chatbot_env/Lib/site-packages/sklearn/manifold/_mds.py | rakmakan/Chatbot | d04bc1526b56961a16c25148d9ef18c4f157e9c4 | ["MIT"] | 6,989 | 2017-07-18T06:23:18.000Z | 2022-03-31T15:58:36.000Z | chatbot_env/Lib/site-packages/sklearn/manifold/_mds.py | rakmakan/Chatbot | d04bc1526b56961a16c25148d9ef18c4f157e9c4 | ["MIT"] | 1,978 | 2017-07-18T09:17:58.000Z | 2022-03-31T14:28:43.000Z | site-packages/sklearn/manifold/_mds.py | Wristlebane/Pyto | 901ac307b68486d8289105c159ca702318bea5b0 | ["MIT"] | 1,228 | 2017-07-18T09:03:13.000Z | 2022-03-29T05:57:40.000Z |
"""
Multi-dimensional Scaling (MDS)
"""
# author: Nelle Varoquaux <nelle.varoquaux@gmail.com>
# License: BSD
import numpy as np
from joblib import Parallel, delayed, effective_n_jobs
import warnings
from ..base import BaseEstimator
from ..metrics import euclidean_distances
from ..utils import check_random_state, check_array, check_symmetric
from ..isotonic import IsotonicRegression
def _smacof_single(dissimilarities, metric=True, n_components=2, init=None,
max_iter=300, verbose=0, eps=1e-3, random_state=None):
"""Computes multidimensional scaling using SMACOF algorithm
Parameters
----------
dissimilarities : ndarray, shape (n_samples, n_samples)
Pairwise dissimilarities between the points. Must be symmetric.
metric : boolean, optional, default: True
Compute metric or nonmetric SMACOF algorithm.
n_components : int, optional, default: 2
Number of dimensions in which to immerse the dissimilarities. If an
``init`` array is provided, this option is overridden and the shape of
``init`` is used to determine the dimensionality of the embedding
space.
init : ndarray, shape (n_samples, n_components), optional, default: None
Starting configuration of the embedding to initialize the algorithm. By
default, the algorithm is initialized with a randomly chosen array.
max_iter : int, optional, default: 300
Maximum number of iterations of the SMACOF algorithm for a single run.
verbose : int, optional, default: 0
Level of verbosity.
eps : float, optional, default: 1e-3
Relative tolerance with respect to stress at which to declare
convergence.
random_state : int, RandomState instance or None, optional, default: None
The generator used to initialize the centers. If int, random_state is
the seed used by the random number generator; If RandomState instance,
random_state is the random number generator; If None, the random number
generator is the RandomState instance used by `np.random`.
Returns
-------
X : ndarray, shape (n_samples, n_components)
Coordinates of the points in a ``n_components``-space.
stress : float
The final value of the stress (sum of squared distance of the
disparities and the distances for all constrained points).
n_iter : int
The number of iterations corresponding to the best stress.
"""
dissimilarities = check_symmetric(dissimilarities, raise_exception=True)
n_samples = dissimilarities.shape[0]
random_state = check_random_state(random_state)
sim_flat = ((1 - np.tri(n_samples)) * dissimilarities).ravel()
sim_flat_w = sim_flat[sim_flat != 0]
if init is None:
# Randomly choose initial configuration
X = random_state.rand(n_samples * n_components)
X = X.reshape((n_samples, n_components))
else:
# overrides the parameter p
n_components = init.shape[1]
if n_samples != init.shape[0]:
raise ValueError("init matrix should be of shape (%d, %d)" %
(n_samples, n_components))
X = init
old_stress = None
ir = IsotonicRegression()
for it in range(max_iter):
# Compute distance and monotonic regression
dis = euclidean_distances(X)
if metric:
disparities = dissimilarities
else:
dis_flat = dis.ravel()
# dissimilarities with 0 are considered as missing values
dis_flat_w = dis_flat[sim_flat != 0]
# Compute the disparities using a monotonic regression
disparities_flat = ir.fit_transform(sim_flat_w, dis_flat_w)
disparities = dis_flat.copy()
disparities[sim_flat != 0] = disparities_flat
disparities = disparities.reshape((n_samples, n_samples))
disparities *= np.sqrt((n_samples * (n_samples - 1) / 2) /
(disparities ** 2).sum())
# Compute stress
stress = ((dis.ravel() - disparities.ravel()) ** 2).sum() / 2
# Update X using the Guttman transform
dis[dis == 0] = 1e-5
ratio = disparities / dis
B = - ratio
B[np.arange(len(B)), np.arange(len(B))] += ratio.sum(axis=1)
X = 1. / n_samples * np.dot(B, X)
dis = np.sqrt((X ** 2).sum(axis=1)).sum()
if verbose >= 2:
print('it: %d, stress %s' % (it, stress))
if old_stress is not None:
if (old_stress - stress / dis) < eps:
if verbose:
print('breaking at iteration %d with stress %s' % (it,
stress))
break
old_stress = stress / dis
return X, stress, it + 1
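A minimal self-contained sketch (not sklearn itself) of the metric-SMACOF iteration implemented above: compute the stress, then update the configuration with the Guttman transform. The 3x3 dissimilarity matrix below is hypothetical (the pairwise distances of a right triangle), so the target is exactly embeddable in 2D.

```python
import numpy as np

# Hypothetical, exactly embeddable dissimilarities: a right triangle.
D = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, np.sqrt(2.0)],
              [1.0, np.sqrt(2.0), 0.0]])
n = D.shape[0]
rng = np.random.RandomState(0)
X = rng.rand(n, 2)  # random starting configuration

for _ in range(500):
    diff = X[:, None, :] - X[None, :, :]
    dis = np.sqrt((diff ** 2).sum(axis=-1))   # current pairwise distances
    # metric case: disparities are the dissimilarities themselves
    stress = ((dis.ravel() - D.ravel()) ** 2).sum() / 2
    dis[dis == 0] = 1e-5                      # avoid division by zero
    ratio = D / dis
    B = -ratio
    B[np.arange(n), np.arange(n)] += ratio.sum(axis=1)
    X = np.dot(B, X) / n                      # Guttman transform
```

Because the target distances are exactly realizable, the stress should shrink toward zero as the loop runs.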
def smacof(dissimilarities, metric=True, n_components=2, init=None, n_init=8,
n_jobs=None, max_iter=300, verbose=0, eps=1e-3, random_state=None,
return_n_iter=False):
"""Computes multidimensional scaling using the SMACOF algorithm.
The SMACOF (Scaling by MAjorizing a COmplicated Function) algorithm is a
multidimensional scaling algorithm which minimizes an objective function
(the *stress*) using a majorization technique. Stress majorization, also
known as the Guttman Transform, guarantees a monotone convergence of
stress, and is more powerful than traditional techniques such as gradient
descent.
The SMACOF algorithm for metric MDS can be summarized by the following steps:
1. Set an initial start configuration, randomly or not.
2. Compute the stress
3. Compute the Guttman Transform
4. Iterate 2 and 3 until convergence.
The nonmetric algorithm adds a monotonic regression step before computing
the stress.
Parameters
----------
dissimilarities : ndarray, shape (n_samples, n_samples)
Pairwise dissimilarities between the points. Must be symmetric.
metric : boolean, optional, default: True
Compute metric or nonmetric SMACOF algorithm.
n_components : int, optional, default: 2
Number of dimensions in which to immerse the dissimilarities. If an
``init`` array is provided, this option is overridden and the shape of
``init`` is used to determine the dimensionality of the embedding
space.
init : ndarray, shape (n_samples, n_components), optional, default: None
Starting configuration of the embedding to initialize the algorithm. By
default, the algorithm is initialized with a randomly chosen array.
n_init : int, optional, default: 8
Number of times the SMACOF algorithm will be run with different
initializations. The final results will be the best output of the runs,
determined by the run with the smallest final stress. If ``init`` is
provided, this option is overridden and a single run is performed.
n_jobs : int or None, optional (default=None)
The number of jobs to use for the computation. If multiple
initializations are used (``n_init``), each run of the algorithm is
computed in parallel.
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details.
max_iter : int, optional, default: 300
Maximum number of iterations of the SMACOF algorithm for a single run.
verbose : int, optional, default: 0
Level of verbosity.
eps : float, optional, default: 1e-3
Relative tolerance with respect to stress at which to declare
convergence.
random_state : int, RandomState instance or None, optional, default: None
The generator used to initialize the centers. If int, random_state is
the seed used by the random number generator; If RandomState instance,
random_state is the random number generator; If None, the random number
generator is the RandomState instance used by `np.random`.
return_n_iter : bool, optional, default: False
Whether or not to return the number of iterations.
Returns
-------
X : ndarray, shape (n_samples, n_components)
Coordinates of the points in a ``n_components``-space.
stress : float
The final value of the stress (sum of squared distance of the
disparities and the distances for all constrained points).
n_iter : int
The number of iterations corresponding to the best stress. Returned
only if ``return_n_iter`` is set to ``True``.
Notes
-----
"Modern Multidimensional Scaling - Theory and Applications" Borg, I.;
Groenen P. Springer Series in Statistics (1997)
"Nonmetric multidimensional scaling: a numerical method" Kruskal, J.
Psychometrika, 29 (1964)
"Multidimensional scaling by optimizing goodness of fit to a nonmetric
hypothesis" Kruskal, J. Psychometrika, 29, (1964)
"""
dissimilarities = check_array(dissimilarities)
random_state = check_random_state(random_state)
if hasattr(init, '__array__'):
init = np.asarray(init).copy()
if not n_init == 1:
warnings.warn(
'Explicit initial positions passed: '
'performing only one init of the MDS instead of %d'
% n_init)
n_init = 1
best_pos, best_stress = None, None
if effective_n_jobs(n_jobs) == 1:
for it in range(n_init):
pos, stress, n_iter_ = _smacof_single(
dissimilarities, metric=metric,
n_components=n_components, init=init,
max_iter=max_iter, verbose=verbose,
eps=eps, random_state=random_state)
if best_stress is None or stress < best_stress:
best_stress = stress
best_pos = pos.copy()
best_iter = n_iter_
else:
seeds = random_state.randint(np.iinfo(np.int32).max, size=n_init)
results = Parallel(n_jobs=n_jobs, verbose=max(verbose - 1, 0))(
delayed(_smacof_single)(
dissimilarities, metric=metric, n_components=n_components,
init=init, max_iter=max_iter, verbose=verbose, eps=eps,
random_state=seed)
for seed in seeds)
positions, stress, n_iters = zip(*results)
best = np.argmin(stress)
best_stress = stress[best]
best_pos = positions[best]
best_iter = n_iters[best]
if return_n_iter:
return best_pos, best_stress, best_iter
else:
return best_pos, best_stress
class MDS(BaseEstimator):
"""Multidimensional scaling
Read more in the :ref:`User Guide <multidimensional_scaling>`.
Parameters
----------
n_components : int, optional, default: 2
Number of dimensions in which to immerse the dissimilarities.
metric : boolean, optional, default: True
If ``True``, perform metric MDS; otherwise, perform nonmetric MDS.
n_init : int, optional, default: 4
Number of times the SMACOF algorithm will be run with different
initializations. The final results will be the best output of the runs,
determined by the run with the smallest final stress.
max_iter : int, optional, default: 300
Maximum number of iterations of the SMACOF algorithm for a single run.
verbose : int, optional, default: 0
Level of verbosity.
eps : float, optional, default: 1e-3
Relative tolerance with respect to stress at which to declare
convergence.
n_jobs : int or None, optional (default=None)
The number of jobs to use for the computation. If multiple
initializations are used (``n_init``), each run of the algorithm is
computed in parallel.
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details.
random_state : int, RandomState instance or None, optional, default: None
The generator used to initialize the centers. If int, random_state is
the seed used by the random number generator; If RandomState instance,
random_state is the random number generator; If None, the random number
generator is the RandomState instance used by `np.random`.
dissimilarity : 'euclidean' | 'precomputed', optional, default: 'euclidean'
Dissimilarity measure to use:
- 'euclidean':
Pairwise Euclidean distances between points in the dataset.
- 'precomputed':
Pre-computed dissimilarities are passed directly to ``fit`` and
``fit_transform``.
Attributes
----------
embedding_ : array-like, shape (n_samples, n_components)
Stores the position of the dataset in the embedding space.
stress_ : float
The final value of the stress (sum of squared distance of the
disparities and the distances for all constrained points).
Examples
--------
>>> from sklearn.datasets import load_digits
>>> from sklearn.manifold import MDS
>>> X, _ = load_digits(return_X_y=True)
>>> X.shape
(1797, 64)
>>> embedding = MDS(n_components=2)
>>> X_transformed = embedding.fit_transform(X[:100])
>>> X_transformed.shape
(100, 2)
References
----------
"Modern Multidimensional Scaling - Theory and Applications" Borg, I.;
Groenen P. Springer Series in Statistics (1997)
"Nonmetric multidimensional scaling: a numerical method" Kruskal, J.
Psychometrika, 29 (1964)
"Multidimensional scaling by optimizing goodness of fit to a nonmetric
hypothesis" Kruskal, J. Psychometrika, 29, (1964)
"""
def __init__(self, n_components=2, metric=True, n_init=4,
max_iter=300, verbose=0, eps=1e-3, n_jobs=None,
random_state=None, dissimilarity="euclidean"):
self.n_components = n_components
self.dissimilarity = dissimilarity
self.metric = metric
self.n_init = n_init
self.max_iter = max_iter
self.eps = eps
self.verbose = verbose
self.n_jobs = n_jobs
self.random_state = random_state
@property
def _pairwise(self):
return self.dissimilarity == "precomputed"
def fit(self, X, y=None, init=None):
"""
Computes the position of the points in the embedding space
Parameters
----------
X : array, shape (n_samples, n_features) or (n_samples, n_samples)
Input data. If ``dissimilarity=='precomputed'``, the input should
be the dissimilarity matrix.
y : Ignored
init : ndarray, shape (n_samples, n_components), optional, default: None
Starting configuration of the embedding to initialize the SMACOF
algorithm. By default, the algorithm is initialized with a randomly
chosen array.
"""
self.fit_transform(X, init=init)
return self
def fit_transform(self, X, y=None, init=None):
"""
Fit the data from X, and returns the embedded coordinates
Parameters
----------
X : array, shape (n_samples, n_features) or (n_samples, n_samples)
Input data. If ``dissimilarity=='precomputed'``, the input should
be the dissimilarity matrix.
y : Ignored
init : ndarray, shape (n_samples, n_components), optional, default: None
Starting configuration of the embedding to initialize the SMACOF
algorithm. By default, the algorithm is initialized with a randomly
chosen array.
"""
X = check_array(X)
if X.shape[0] == X.shape[1] and self.dissimilarity != "precomputed":
warnings.warn("The MDS API has changed. ``fit`` now constructs a"
" dissimilarity matrix from data. To use a custom "
"dissimilarity matrix, set "
"``dissimilarity='precomputed'``.")
if self.dissimilarity == "precomputed":
self.dissimilarity_matrix_ = X
elif self.dissimilarity == "euclidean":
self.dissimilarity_matrix_ = euclidean_distances(X)
else:
raise ValueError("Proximity must be 'precomputed' or 'euclidean'."
" Got %s instead" % str(self.dissimilarity))
self.embedding_, self.stress_, self.n_iter_ = smacof(
self.dissimilarity_matrix_, metric=self.metric,
n_components=self.n_components, init=init, n_init=self.n_init,
n_jobs=self.n_jobs, max_iter=self.max_iter, verbose=self.verbose,
eps=self.eps, random_state=self.random_state,
return_n_iter=True)
return self.embedding_
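A minimal usage sketch of the estimator defined above, assuming a scikit-learn installation; with ``dissimilarity='precomputed'`` the matrix passed to ``fit_transform`` is taken as the dissimilarities directly. The matrix below is hypothetical (four points on a line at 0, 1, 2, 3).

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical precomputed dissimilarity matrix (must be symmetric).
D = np.array([[0., 1., 2., 3.],
              [1., 0., 1., 2.],
              [2., 1., 0., 1.],
              [3., 2., 1., 0.]])

mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
X_t = mds.fit_transform(D)  # embedding coordinates, shape (4, 2)
```

The fitted ``stress_`` attribute then reports how well the embedded distances reproduce ``D``.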
"""Compatibility fixes for older version of python, numpy and scipy
If you add content to this file, please give the version of the package
at which the fix is no longer needed.
"""
# Authors: Emmanuelle Gouillart <emmanuelle.gouillart@normalesup.org>
# Gael Varoquaux <gael.varoquaux@normalesup.org>
# Fabian Pedregosa <fpedregosa@acm.org>
# Lars Buitinck
#
# License: BSD 3 clause
import warnings
import sys
import functools
import os
import errno
import numpy as np
import scipy.sparse as sp
import scipy
try:
from inspect import signature
except ImportError:
from ..externals.funcsigs import signature
def _parse_version(version_string):
version = []
for x in version_string.split('.'):
try:
version.append(int(x))
except ValueError:
# x may be of the form dev-1ea1592
version.append(x)
return tuple(version)
np_version = _parse_version(np.__version__)
sp_version = _parse_version(scipy.__version__)
try:
from scipy.special import expit # SciPy >= 0.10
with np.errstate(invalid='ignore', over='ignore'):
if np.isnan(expit(1000)): # SciPy < 0.14
raise ImportError("no stable expit in scipy.special")
except ImportError:
def expit(x, out=None):
"""Logistic sigmoid function, ``1 / (1 + exp(-x))``.
See sklearn.utils.extmath.log_logistic for the log of this function.
"""
if out is None:
out = np.empty(np.atleast_1d(x).shape, dtype=np.float64)
out[:] = x
# 1 / (1 + exp(-x)) = (1 + tanh(x / 2)) / 2
# This way of computing the logistic is both fast and stable.
out *= .5
np.tanh(out, out)
out += 1
out *= .5
return out.reshape(np.shape(x))
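The fallback above relies on the identity 1 / (1 + exp(-x)) = (1 + tanh(x / 2)) / 2, which is numerically stable because tanh saturates instead of overflowing for large |x|. A quick numpy check of that identity (my own sketch, not part of the backport):

```python
import numpy as np

# Compare the naive logistic with the tanh-based form used above.
x = np.linspace(-20.0, 20.0, 9)
naive = 1.0 / (1.0 + np.exp(-x))
stable = 0.5 * (1.0 + np.tanh(0.5 * x))

# The tanh form stays finite even where exp(-x) would overflow.
extreme = 0.5 * (1.0 + np.tanh(0.5 * -1000.0))
```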
# little dance to see if np.copy has an 'order' keyword argument
# Supported since numpy 1.7.0
if 'order' in signature(np.copy).parameters:
def safe_copy(X):
# Copy, but keep the order
return np.copy(X, order='K')
else:
# Before an 'order' argument was introduced, numpy wouldn't muck with
# the ordering
safe_copy = np.copy
try:
if (not np.allclose(np.divide(.4, 1, casting="unsafe"),
np.divide(.4, 1, casting="unsafe", dtype=np.float64))
or not np.allclose(np.divide(.4, 1), .4)):
raise TypeError('Divide not working with dtype: '
'https://github.com/numpy/numpy/issues/3484')
divide = np.divide
except TypeError:
# Compat for old versions of np.divide that do not provide support for
# the dtype args
def divide(x1, x2, out=None, dtype=None):
out_orig = out
if out is None:
out = np.asarray(x1, dtype=dtype)
if out is x1:
out = x1.copy()
else:
if out is not x1:
out[:] = x1
if dtype is not None and out.dtype != dtype:
out = out.astype(dtype)
out /= x2
if out_orig is None and np.isscalar(x1):
out = np.asscalar(out)
return out
try:
np.array(5).astype(float, copy=False)
except TypeError:
# Compat where astype accepted no copy argument (numpy < 1.7.0)
def astype(array, dtype, copy=True):
if not copy and array.dtype == dtype:
return array
return array.astype(dtype)
else:
astype = np.ndarray.astype
try:
with warnings.catch_warnings(record=True):
# Don't raise the numpy deprecation warnings that appear in
# 1.9, but avoid Python bug due to simplefilter('ignore')
warnings.simplefilter('always')
sp.csr_matrix([1.0, 2.0, 3.0]).max(axis=0)
except (TypeError, AttributeError):
# in scipy < 0.14.0, sparse matrix min/max doesn't accept an `axis` argument
# the following code is taken from the scipy 0.14 codebase
def _minor_reduce(X, ufunc):
major_index = np.flatnonzero(np.diff(X.indptr))
if X.data.size == 0 and major_index.size == 0:
# Numpy < 1.8.0 don't handle empty arrays in reduceat
value = np.zeros_like(X.data)
else:
value = ufunc.reduceat(X.data, X.indptr[major_index])
return major_index, value
def _min_or_max_axis(X, axis, min_or_max):
N = X.shape[axis]
if N == 0:
raise ValueError("zero-size array to reduction operation")
M = X.shape[1 - axis]
mat = X.tocsc() if axis == 0 else X.tocsr()
mat.sum_duplicates()
major_index, value = _minor_reduce(mat, min_or_max)
not_full = np.diff(mat.indptr)[major_index] < N
value[not_full] = min_or_max(value[not_full], 0)
mask = value != 0
major_index = np.compress(mask, major_index)
value = np.compress(mask, value)
from scipy.sparse import coo_matrix
if axis == 0:
res = coo_matrix((value, (np.zeros(len(value)), major_index)),
dtype=X.dtype, shape=(1, M))
else:
res = coo_matrix((value, (major_index, np.zeros(len(value)))),
dtype=X.dtype, shape=(M, 1))
return res.A.ravel()
def _sparse_min_or_max(X, axis, min_or_max):
if axis is None:
if 0 in X.shape:
raise ValueError("zero-size array to reduction operation")
zero = X.dtype.type(0)
if X.nnz == 0:
return zero
m = min_or_max.reduce(X.data.ravel())
if X.nnz != np.product(X.shape):
m = min_or_max(zero, m)
return m
if axis < 0:
axis += 2
if (axis == 0) or (axis == 1):
return _min_or_max_axis(X, axis, min_or_max)
else:
raise ValueError("invalid axis, use 0 for rows, or 1 for columns")
def sparse_min_max(X, axis):
return (_sparse_min_or_max(X, axis, np.minimum),
_sparse_min_or_max(X, axis, np.maximum))
else:
def sparse_min_max(X, axis):
return (X.min(axis=axis).toarray().ravel(),
X.max(axis=axis).toarray().ravel())
try:
from numpy import argpartition
except ImportError:
# numpy.argpartition was introduced in v 1.8.0
def argpartition(a, kth, axis=-1, kind='introselect', order=None):
return np.argsort(a, axis=axis, order=order)
try:
from numpy import partition
except ImportError:
warnings.warn('Using `sort` instead of partition. '
'Upgrade numpy to 1.8 for better performance on large '
'numbers of clusters.')
def partition(a, kth, axis=-1, kind='introselect', order=None):
return np.sort(a, axis=axis, order=order)
try:
from itertools import combinations_with_replacement
except ImportError:
# Backport of itertools.combinations_with_replacement for Python 2.6,
# from Python 3.4 documentation (http://tinyurl.com/comb-w-r), copyright
# Python Software Foundation (https://docs.python.org/3/license.html)
def combinations_with_replacement(iterable, r):
# combinations_with_replacement('ABC', 2) --> AA AB AC BB BC CC
pool = tuple(iterable)
n = len(pool)
if not n and r:
return
indices = [0] * r
yield tuple(pool[i] for i in indices)
while True:
for i in reversed(range(r)):
if indices[i] != n - 1:
break
else:
return
indices[i:] = [indices[i] + 1] * (r - i)
yield tuple(pool[i] for i in indices)
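For reference, a short usage sketch of the behaviour the backport reproduces (identical to `itertools.combinations_with_replacement` on Python >= 2.7): multisets of size r drawn from the pool, emitted in lexicographic order.

```python
from itertools import combinations_with_replacement

# combinations_with_replacement('ABC', 2) --> AA AB AC BB BC CC
pairs = list(combinations_with_replacement('ABC', 2))
```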
if np_version < (1, 7):
# Prior to 1.7.0, np.frombuffer wouldn't work for empty first arg.
def frombuffer_empty(buf, dtype):
if len(buf) == 0:
return np.empty(0, dtype=dtype)
else:
return np.frombuffer(buf, dtype=dtype)
else:
frombuffer_empty = np.frombuffer
if np_version < (1, 8):
def in1d(ar1, ar2, assume_unique=False, invert=False):
# Backport of numpy function in1d 1.8.1 to support numpy 1.6.2
# Ravel both arrays, behavior for the first array could be different
ar1 = np.asarray(ar1).ravel()
ar2 = np.asarray(ar2).ravel()
# This code is significantly faster when the condition is satisfied.
if len(ar2) < 10 * len(ar1) ** 0.145:
if invert:
mask = np.ones(len(ar1), dtype=np.bool)
for a in ar2:
mask &= (ar1 != a)
else:
mask = np.zeros(len(ar1), dtype=np.bool)
for a in ar2:
mask |= (ar1 == a)
return mask
# Otherwise use sorting
if not assume_unique:
ar1, rev_idx = np.unique(ar1, return_inverse=True)
ar2 = np.unique(ar2)
ar = np.concatenate((ar1, ar2))
# We need this to be a stable sort, so always use 'mergesort'
# here. The values from the first array should always come before
# the values from the second array.
order = ar.argsort(kind='mergesort')
sar = ar[order]
if invert:
bool_ar = (sar[1:] != sar[:-1])
else:
bool_ar = (sar[1:] == sar[:-1])
flag = np.concatenate((bool_ar, [invert]))
indx = order.argsort(kind='mergesort')[:len(ar1)]
if assume_unique:
return flag[indx]
else:
return flag[indx][rev_idx]
else:
from numpy import in1d
if sp_version < (0, 15):
# Backport fix for scikit-learn/scikit-learn#2986 / scipy/scipy#4142
from ._scipy_sparse_lsqr_backport import lsqr as sparse_lsqr
else:
from scipy.sparse.linalg import lsqr as sparse_lsqr
if sys.version_info < (2, 7, 0):
# partial cannot be pickled in Python 2.6
# http://bugs.python.org/issue1398
class partial(object):
def __init__(self, func, *args, **keywords):
functools.update_wrapper(self, func)
self.func = func
self.args = args
self.keywords = keywords
def __call__(self, *args, **keywords):
args = self.args + args
kwargs = self.keywords.copy()
kwargs.update(keywords)
return self.func(*args, **kwargs)
else:
from functools import partial
def parallel_helper(obj, methodname, *args, **kwargs):
"""Helper to workaround Python 2 limitations of pickling instance methods"""
return getattr(obj, methodname)(*args, **kwargs)
if np_version < (1, 6, 2):
# Allow bincount to accept empty arrays
# https://github.com/numpy/numpy/commit/40f0844846a9d7665616b142407a3d74cb65a040
def bincount(x, weights=None, minlength=None):
if len(x) > 0:
return np.bincount(x, weights, minlength)
else:
if minlength is None:
minlength = 0
minlength = np.asscalar(np.asarray(minlength, dtype=np.intp))
return np.zeros(minlength, dtype=np.intp)
else:
from numpy import bincount
if 'exist_ok' in signature(os.makedirs).parameters:
makedirs = os.makedirs
else:
def makedirs(name, mode=0o777, exist_ok=False):
"""makedirs(name [, mode=0o777][, exist_ok=False])
Super-mkdir; create a leaf directory and all intermediate ones. Works
like mkdir, except that any intermediate path segment (not just the
rightmost) will be created if it does not exist. If the target
directory already exists, raise an OSError if exist_ok is False.
Otherwise no exception is raised. This is recursive.
"""
try:
os.makedirs(name, mode=mode)
except OSError as e:
if (not exist_ok or e.errno != errno.EEXIST
or not os.path.isdir(name)):
raise
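A usage sketch of the `exist_ok` semantics the fallback above reproduces (`os.makedirs` has supported `exist_ok` natively since Python 3.2): calling it twice on the same nested path is a no-op rather than an `OSError`.

```python
import os
import tempfile

# Create a nested directory, then call makedirs again on the same path.
base = tempfile.mkdtemp()
target = os.path.join(base, "a", "b")
os.makedirs(target, exist_ok=True)
os.makedirs(target, exist_ok=True)  # second call is a no-op, no OSError
```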
if np_version < (1, 8, 1):
def array_equal(a1, a2):
# copy-paste from numpy 1.8.1
try:
a1, a2 = np.asarray(a1), np.asarray(a2)
except Exception:
return False
if a1.shape != a2.shape:
return False
return bool(np.asarray(a1 == a2).all())
else:
from numpy import array_equal
if sp_version < (0, 13, 0):
def rankdata(a, method='average'):
if method not in ('average', 'min', 'max', 'dense', 'ordinal'):
raise ValueError('unknown method "{0}"'.format(method))
arr = np.ravel(np.asarray(a))
algo = 'mergesort' if method == 'ordinal' else 'quicksort'
sorter = np.argsort(arr, kind=algo)
inv = np.empty(sorter.size, dtype=np.intp)
inv[sorter] = np.arange(sorter.size, dtype=np.intp)
if method == 'ordinal':
return inv + 1
arr = arr[sorter]
obs = np.r_[True, arr[1:] != arr[:-1]]
dense = obs.cumsum()[inv]
if method == 'dense':
return dense
# cumulative counts of each unique value
count = np.r_[np.nonzero(obs)[0], len(obs)]
if method == 'max':
return count[dense]
if method == 'min':
return count[dense - 1] + 1
# average method
return .5 * (count[dense] + count[dense - 1] + 1)
else:
from scipy.stats import rankdata
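A worked numpy check of the 'average' tie handling implemented by the backport above (my own sketch): tied values receive the mean of the ordinal ranks they would occupy, so the two 10s below share rank (1 + 2) / 2 = 1.5.

```python
import numpy as np

# Mirror the backport's 'average' branch on a small array with a tie.
a = np.asarray([40, 10, 30, 10])
sorter = np.argsort(a, kind='quicksort')
inv = np.empty(sorter.size, dtype=np.intp)
inv[sorter] = np.arange(sorter.size, dtype=np.intp)
arr = a[sorter]
obs = np.r_[True, arr[1:] != arr[:-1]]        # start of each run of ties
dense = obs.cumsum()[inv]                     # dense ranks
count = np.r_[np.nonzero(obs)[0], len(obs)]   # cumulative counts per value
ranks = .5 * (count[dense] + count[dense - 1] + 1)
```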
if np_version < (1, 12, 0):
class MaskedArray(np.ma.MaskedArray):
# Before numpy 1.12, np.ma.MaskedArray object is not picklable
# This fix is needed to make our model_selection.GridSearchCV
# picklable as the ``cv_results_`` param uses MaskedArray
def __getstate__(self):
"""Return the internal state of the masked array, for pickling
purposes.
"""
cf = 'CF'[self.flags.fnc]
data_state = super(np.ma.MaskedArray, self).__reduce__()[2]
return data_state + (np.ma.getmaskarray(self).tostring(cf),
self._fill_value)
else:
from numpy.ma import MaskedArray # noqa
import pytest
# in order to avoid warning .coverage
# see https://pytest-cov.readthedocs.io/en/latest/subprocess-support.html#if-you-use-multiprocessing-process
try:
from pytest_cov.embed import cleanup_on_sigterm
except ImportError:
pass
else:
cleanup_on_sigterm()
import azfs
from azure.core.exceptions import ResourceNotFoundError
from azfs.clients.blob_client import AzBlobClient
from azfs.clients.datalake_client import AzDataLakeClient
from azfs.clients.client_interface import ClientInterface
from azfs.error import (
AzfsInputError,
AzfsDecoratorFileFormatError,
AzfsDecoratorReturnTypeError,
AzfsDecoratorSizeNotMatchedError
)
import pandas as pd
class TestClientIntexrface:
def test_not_implemented_error(self, var_azc):
client_interface = ClientInterface(credential="")
# the file below is not exists
account_url = "https://testazfs.blob.core.windows.net/"
path = f"{account_url}test_caontainer/test.csv"
file_path = "test_caontainer"
with pytest.raises(NotImplementedError):
client_interface.get(path=path)
with pytest.raises(NotImplementedError):
client_interface.put(path=path, data={})
with pytest.raises(NotImplementedError):
client_interface.ls(path=path, file_path=file_path)
with pytest.raises(NotImplementedError):
client_interface.rm(path=path)
with pytest.raises(NotImplementedError):
client_interface.info(path=path)
with pytest.raises(NotImplementedError):
client_interface.get_container_client_from_path(path=path)
with pytest.raises(NotImplementedError):
client_interface.get_file_client_from_path(path=path)
with pytest.raises(NotImplementedError):
client_interface.get_service_client_from_url(account_url=account_url)
# when creating the client from a connection_string
client_interface = ClientInterface(credential=None, connection_string="")
with pytest.raises(NotImplementedError):
client_interface.get_service_client_from_url(account_url=account_url)
with pytest.raises(NotImplementedError):
# use with multiprocessing
var_azc.glob("https://testazfs.queue.core.windows.net/test_queue/test/*.msg")
def test_azfs_input_error(self, var_azc):
with pytest.raises(AzfsInputError):
var_azc.read().csv(path=0)
with pytest.raises(AzfsInputError):
var_azc.read().csv(path=None)
class TestReadCsv:
def test_blob_read_csv(self, mocker, _get_csv, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_csv)
# the file below is not exists
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.csv"
# read data from not-exist path
with var_azc:
df = pd.read_csv_az(path)
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
df = var_azc.read().csv(path=path)
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
df = var_azc.read(path=path).csv()
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
# with multiprocessing
df = var_azc.read(use_mp=True).csv(path=path)
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
df = var_azc.read(path=path, use_mp=True).csv()
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
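These tests lean on azfs's context-manager pattern, which temporarily attaches readers such as `pd.read_csv_az` to pandas for the duration of the `with` block. A hedged, dependency-free sketch of that pattern (not azfs's actual implementation; `fake_pd`, `PatchingClient`, and `read_csv_like` are made-up names):

```python
import types

# Stand-in for the pandas module namespace being patched.
fake_pd = types.SimpleNamespace()

class PatchingClient:
    def read_csv_like(self, path):
        return f"data from {path}"

    def __enter__(self):
        # attach the reader, like `with var_azc:` attaches pd.read_csv_az
        fake_pd.read_csv_az = self.read_csv_like
        return self

    def __exit__(self, exc_type, exc, tb):
        # detach on exit so the patch does not leak out of the block
        del fake_pd.read_csv_az
        return False

with PatchingClient():
    result = fake_pd.read_csv_az("https://example/test.csv")
```

Outside the `with` block the attribute is gone again, which is why the azfs tests always call `pd.read_csv_az` inside the context.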
def test_blob_read_glob_csv(self, mocker, _get_csv, var_azc, _ls_for_glob):
mocker.patch.object(AzBlobClient, "_get", _get_csv)
mocker.patch.object(AzBlobClient, "_ls", _ls_for_glob)
# the file below is not exists
path = "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/*.csv"
df = var_azc.read().csv(path=path)
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 4
df = var_azc.read(path=path).csv()
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 4
def test_blob_read_list_csv(self, mocker, _get_csv, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_csv)
# the file below is not exists
path_list = [
"https://testazfs.blob.core.windows.net/test_caontainer/root_folder/test1.csv",
"https://testazfs.blob.core.windows.net/test_caontainer/root_folder/test2.csv"
]
df = var_azc.read().csv(path=path_list)
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 4
df = var_azc.read(path=path_list).csv()
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 4
def test_blob_read_csv_gz(self, mocker, _get_csv_gz, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_csv_gz)
# the file below is not exists
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.csv.gz"
# read data from not-exist path
with var_azc:
df = pd.read_csv_az(path)
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
df = var_azc.read().csv(path=path)
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
def test_dfs_read_csv(self, mocker, _get_csv, var_azc):
mocker.patch.object(AzDataLakeClient, "_get", _get_csv)
# the file below is not exists
path = "https://testazfs.dfs.core.windows.net/test_caontainer/test.csv"
# read data from not-exist path
with var_azc:
df = pd.read_csv_az(path)
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
df = var_azc.read().csv(path=path)
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
class TestReadTable:
def test_blob_read_table(self, mocker, _get_table, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_table)
# the file below is not exists
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.tsv"
# read data from not-exist path
with var_azc:
df = pd.read_table_az(path)
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
class TestReadPickle:
def test_blob_read_pickle(self, mocker, _get_pickle, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_pickle)
# the file below is not exists
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.pkl"
# read data from not-exist path
with var_azc:
df = pd.read_pickle_az(path, compression=None)
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
def test_blob_read_pickle_pyspark_like(self, mocker, _get_pickle, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_pickle)
# the file below is not exists
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.pkl"
# read data from not-exist path
df = var_azc.read().pickle(path=path, compression=None)
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
def test_blob_read_pickle_gzip(self, mocker, _get_pickle_gzip, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_pickle_gzip)
# the file below is not exists
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.pkl"
# read data from not-exist path
with var_azc:
df = pd.read_pickle_az(path, compression="gzip")
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
def test_blob_read_pickle_gzip_pyspark_like(self, mocker, _get_pickle_gzip, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_pickle_gzip)
# the file below is not exists
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.pkl"
# read data from not-exist path
df = var_azc.read().pickle(path=path, compression="gzip")
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
def test_blob_read_pickle_bz2(self, mocker, _get_pickle_bz2, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_pickle_bz2)
# the file below is not exists
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.pkl"
# read data from not-exist path
with var_azc:
df = pd.read_pickle_az(path, compression="bz2")
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
def test_blob_read_pickle_bz2_pyspark_like(self, mocker, _get_pickle_bz2, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_pickle_bz2)
# the file below is not exists
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.pkl"
# read data from not-exist path
df = var_azc.read().pickle(path=path, compression="bz2")
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
def test_blob_read_pickle_xz(self, mocker, _get_pickle_xz, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_pickle_xz)
# the file below is not exists
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.pkl"
# read data from not-exist path
with var_azc:
df = pd.read_pickle_az(path, compression="xz")
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
def test_blob_read_pickle_xz_pyspark_like(self, mocker, _get_pickle_xz, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_pickle_xz)
        # the file below does not exist
        path = "https://testazfs.blob.core.windows.net/test_caontainer/test.pkl"
        # read data from a non-existent path
df = var_azc.read().pickle(path=path, compression="xz")
columns = df.columns
assert "name" in columns
assert "age" in columns
assert len(df.index) == 2
class TestReadJson:
def test_blob_read_json(self, mocker, _get_json, var_azc, var_json):
mocker.patch.object(AzBlobClient, "_get", _get_json)
        # the file below does not exist
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.json"
data = var_azc.read_json(path)
assert data == var_json
def test_dfs_read_json(self, mocker, _get_json, var_azc, var_json):
mocker.patch.object(AzDataLakeClient, "_get", _get_json)
        # the file below does not exist
path = "https://testazfs.dfs.core.windows.net/test_caontainer/test.json"
data = var_azc.read_json(path)
assert data == var_json
class TestReadLineIter:
def test_blob_read_line_iter(self, mocker, _get_csv, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_csv)
return_value = {"size": len(b'name,age\nalice,10\nbob,10\n')}
func_mock = mocker.MagicMock()
func_mock.return_value = return_value
mocker.patch.object(AzBlobClient, "_info", func_mock)
        # the file below does not exist
        path = "https://testazfs.blob.core.windows.net/test_caontainer/test.csv"
        # read data from a non-existent path
line_counter = 0
for _ in var_azc.read_line_iter(path=path):
line_counter += 1
assert line_counter == 3
def test_dfs_read_line_iter(self, mocker, _get_csv, var_azc):
mocker.patch.object(AzDataLakeClient, "_get", _get_csv)
return_value = {"size": len(b'name,age\nalice,10\nbob,10\n')}
func_mock = mocker.MagicMock()
func_mock.return_value = return_value
mocker.patch.object(AzDataLakeClient, "_info", func_mock)
        # the file below does not exist
        path = "https://testazfs.dfs.core.windows.net/test_caontainer/test.csv"
        # read data from a non-existent path
line_counter = 0
for _ in var_azc.read_line_iter(path=path):
line_counter += 1
assert line_counter == 3
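The expected count of 3 in both tests comes straight from the mocked payload (a header row plus two data rows); a minimal stdlib sketch of that count, assuming the `_get_csv` fixture returns the bytes used in the size mocks:

```python
# The bytes the _get_csv fixture is assumed to return.
data = b'name,age\nalice,10\nbob,10\n'
lines = data.decode().splitlines()
assert len(lines) == 3  # header + 2 data rows
```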
class TestReadCsvChunk:
def test_blob_read_csv_chunk(self, mocker, _get_csv, var_azc):
mocker.patch.object(AzBlobClient, "_get", _get_csv)
return_value = {"size": len(b'name,age\nalice,10\nbob,10\n')}
func_mock = mocker.MagicMock()
func_mock.return_value = return_value
mocker.patch.object(AzBlobClient, "_info", func_mock)
        # the file below does not exist
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.csv"
chunk_size = 2
chunk_counter = 0
with pytest.warns(FutureWarning):
for _ in var_azc.read_csv_chunk(path, chunk_size):
chunk_counter += 1
assert chunk_counter == 2
def test_dfs_read_csv_chunk(self, mocker, _get_csv, var_azc):
mocker.patch.object(AzDataLakeClient, "_get", _get_csv)
return_value = {"size": len(b'name,age\nalice,10\nbob,10\n')}
func_mock = mocker.MagicMock()
func_mock.return_value = return_value
mocker.patch.object(AzDataLakeClient, "_info", func_mock)
        # the file below does not exist
path = "https://testazfs.dfs.core.windows.net/test_caontainer/test.csv"
chunk_size = 2
chunk_counter = 0
with pytest.warns(FutureWarning):
for _ in var_azc.read_csv_chunk(path, chunk_size):
chunk_counter += 1
assert chunk_counter == 2
class TestToCsv:
def test_blob_to_csv(self, mocker, _put, var_azc, var_df):
mocker.patch.object(AzBlobClient, "_put", _put)
        # the file below does not exist
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.csv"
with var_azc:
result = var_df.to_csv_az(path)
assert result
def test_dfs_to_csv(self, mocker, _put, var_azc, var_df):
mocker.patch.object(AzDataLakeClient, "_put", _put)
        # the file below does not exist
path = "https://testazfs.dfs.core.windows.net/test_caontainer/test.csv"
with var_azc:
result = var_df.to_csv_az(path)
assert result
class TestToTsv:
def test_blob_to_table(self, mocker, _put, var_azc, var_df):
mocker.patch.object(AzBlobClient, "_put", _put)
        # the file below does not exist
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.tsv"
with var_azc:
result = var_df.to_table_az(path)
assert result
def test_dfs_to_table(self, mocker, _put, var_azc, var_df):
mocker.patch.object(AzDataLakeClient, "_put", _put)
        # the file below does not exist
path = "https://testazfs.dfs.core.windows.net/test_caontainer/test.tsv"
with var_azc:
result = var_df.to_table_az(path)
assert result
class TestToPickle:
def test_blob_to_pickle(self, mocker, _put, var_azc, var_df):
mocker.patch.object(AzBlobClient, "_put", _put)
        # the file below does not exist
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.pkl"
with var_azc:
result = var_df.to_pickle_az(path)
assert result
result = var_df.to_pickle_az(path, compression=None)
assert result
result = var_df.to_pickle_az(path, compression="gzip")
assert result
result = var_df.to_pickle_az(path, compression="bz2")
assert result
result = var_df.to_pickle_az(path, compression="xz")
assert result
def test_dfs_to_pickle(self, mocker, _put, var_azc, var_df):
mocker.patch.object(AzDataLakeClient, "_put", _put)
        # the file below does not exist
path = "https://testazfs.dfs.core.windows.net/test_caontainer/test.pkl"
with var_azc:
result = var_df.to_pickle_az(path)
assert result
result = var_df.to_pickle_az(path, compression=None)
assert result
result = var_df.to_pickle_az(path, compression="gzip")
assert result
result = var_df.to_pickle_az(path, compression="bz2")
assert result
result = var_df.to_pickle_az(path, compression="xz")
assert result
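The pickle tests above exercise `compression=None`, `"gzip"`, `"bz2"` and `"xz"`; as a minimal stdlib sketch (independent of azfs and Azure) of what a compressed pickle round-trip amounts to:

```python
import gzip
import pickle

# Serialize a small record, compress it as the "gzip" option would,
# then reverse both steps and verify nothing was lost.
record = {"name": "alice", "age": 10}
blob = gzip.compress(pickle.dumps(record))
restored = pickle.loads(gzip.decompress(blob))
assert restored == record
```

The bz2 and xz variants work the same way with `bz2.compress`/`lzma.compress`.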
class TestToJson:
    def test_blob_to_json(self, mocker, _put, var_azc):
mocker.patch.object(AzBlobClient, "_put", _put)
        # the file below does not exist
path = "https://testazfs.blob.core.windows.net/test_caontainer/test.json"
result = var_azc.write_json(path, data={"a": "b"})
assert result
    def test_dfs_to_json(self, mocker, _put, var_azc):
mocker.patch.object(AzDataLakeClient, "_put", _put)
        # the file below does not exist
path = "https://testazfs.dfs.core.windows.net/test_caontainer/test.json"
result = var_azc.write_json(path, data={"a": "b"})
assert result
class TestLs:
def test_blob_ls(self, mocker, _ls, var_azc):
mocker.patch.object(AzBlobClient, "_ls", _ls)
        # the file below does not exist
path = "https://testazfs.blob.core.windows.net/test_caontainer/"
file_list = var_azc.ls(path=path)
assert len(file_list) == 3
assert "test1.csv" in file_list
assert "test2.csv" in file_list
assert "dir/" in file_list
def test_blob_ls_full_path(self, mocker, _ls, var_azc):
mocker.patch.object(AzBlobClient, "_ls", _ls)
        # the file below does not exist
path = "https://testazfs.blob.core.windows.net/test_caontainer/"
file_list = var_azc.ls(path=path, attach_prefix=True)
assert len(file_list) == 3
assert "https://testazfs.blob.core.windows.net/test_caontainer/test1.csv" in file_list
assert "https://testazfs.blob.core.windows.net/test_caontainer/test2.csv" in file_list
assert "https://testazfs.blob.core.windows.net/test_caontainer/dir/" in file_list
def test_dfs_ls(self, mocker, _ls, var_azc):
mocker.patch.object(AzDataLakeClient, "_ls", _ls)
        # the file below does not exist
path = "https://testazfs.dfs.core.windows.net/test_caontainer/"
file_list = var_azc.ls(path=path)
assert len(file_list) == 3
assert "test1.csv" in file_list
assert "test2.csv" in file_list
assert "dir/" in file_list
def test_dfs_ls_full_path(self, mocker, _ls, var_azc):
mocker.patch.object(AzDataLakeClient, "_ls", _ls)
        # the file below does not exist
path = "https://testazfs.dfs.core.windows.net/test_caontainer/"
file_list = var_azc.ls(path=path, attach_prefix=True)
assert len(file_list) == 3
assert "https://testazfs.dfs.core.windows.net/test_caontainer/test1.csv" in file_list
assert "https://testazfs.dfs.core.windows.net/test_caontainer/test2.csv" in file_list
assert "https://testazfs.dfs.core.windows.net/test_caontainer/dir/" in file_list
class TestGlob:
def test_blob_glob_error(self, var_azc):
path = "https://testazfs.blob.core.windows.net/test_caontainer/test1.csv"
with pytest.raises(AzfsInputError):
var_azc.glob(path)
path = "https://testazfs.blob.core.windows.net/test_caontainer/*"
with pytest.raises(AzfsInputError):
var_azc.glob(path)
def test_blob_glob(self, mocker, _ls_for_glob, var_azc):
mocker.patch.object(AzBlobClient, "_ls", _ls_for_glob)
        # the file below does not exist
path = "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/*.csv"
file_list = var_azc.glob(pattern_path=path)
assert len(file_list) == 2
assert "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/test1.csv" in file_list
assert "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/test2.csv" in file_list
path = "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/*.json"
file_list = var_azc.glob(pattern_path=path)
assert len(file_list) == 1
assert "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/test1.json" in file_list
path = "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/*/*.csv"
file_list = var_azc.glob(pattern_path=path)
assert len(file_list) == 4
assert "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/dir1/test1.csv" in file_list
assert "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/dir1/test2.csv" in file_list
assert "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/dir2/test1.csv" in file_list
assert "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/dir2/test2.csv" in file_list
path = "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/dir1/*.csv"
file_list = var_azc.glob(pattern_path=path)
assert len(file_list) == 2
assert "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/dir1/test1.csv" in file_list
assert "https://testazfs.blob.core.windows.net/test_caontainer/root_folder/dir1/test2.csv" in file_list
def test_dfs_glob_error(self, var_azc):
path = "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/test1.csv"
with pytest.raises(AzfsInputError):
var_azc.glob(path)
path = "https://testazfs.dfs.core.windows.net/test_caontainer/*"
with pytest.raises(AzfsInputError):
var_azc.glob(path)
def test_dfs_glob(self, mocker, _ls_for_glob, var_azc):
mocker.patch.object(AzDataLakeClient, "_ls", _ls_for_glob)
        # the file below does not exist
path = "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/*.csv"
file_list = var_azc.glob(pattern_path=path)
assert len(file_list) == 2
assert "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/test1.csv" in file_list
assert "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/test2.csv" in file_list
path = "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/*.json"
file_list = var_azc.glob(pattern_path=path)
assert len(file_list) == 1
assert "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/test1.json" in file_list
path = "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/*/*.csv"
file_list = var_azc.glob(pattern_path=path)
assert len(file_list) == 4
assert "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/dir1/test1.csv" in file_list
assert "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/dir1/test2.csv" in file_list
assert "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/dir2/test1.csv" in file_list
assert "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/dir2/test2.csv" in file_list
path = "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/dir1/*.csv"
file_list = var_azc.glob(pattern_path=path)
assert len(file_list) == 2
assert "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/dir1/test1.csv" in file_list
assert "https://testazfs.dfs.core.windows.net/test_caontainer/root_folder/dir1/test2.csv" in file_list
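The assertions above imply a `*` that matches within a single path segment only (it never crosses `/`). The sketch below only illustrates that matching semantics with a plain regex; it is not azfs's actual implementation:

```python
import re

# "[^/]*" is the segment-local counterpart of the glob "*".
paths = [
    "root_folder/test1.csv",
    "root_folder/test1.json",
    "root_folder/dir1/test1.csv",
]
pattern = re.compile(r"^root_folder/[^/]*\.csv$")
matched = [p for p in paths if pattern.match(p)]
assert matched == ["root_folder/test1.csv"]  # the subdirectory file is excluded
```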
class TestRm:
def test_blob_rm(self, mocker, _rm, var_azc):
mocker.patch.object(AzBlobClient, "_rm", _rm)
        # the file below does not exist
path = "https://testazfs.blob.core.windows.net/test_caontainer/"
result = var_azc.rm(path=path)
assert result
def test_dfs_rm(self, mocker, _rm, var_azc):
mocker.patch.object(AzDataLakeClient, "_rm", _rm)
        # the file below does not exist
path = "https://testazfs.dfs.core.windows.net/test_caontainer/"
result = var_azc.rm(path=path)
assert result
class TestExists:
def test_blob_exists(self, mocker, var_azc):
return_value = {"size": len(b'name,age\nalice,10\nbob,10\n')}
func_mock = mocker.MagicMock()
func_mock.return_value = return_value
mocker.patch.object(AzBlobClient, "_info", func_mock)
        # the file below does not exist
path = "https://testazfs.blob.core.windows.net/test_caontainer/test1.csv"
result = var_azc.exists(path=path)
assert result
# set to raise exception
func_mock.side_effect = ResourceNotFoundError
mocker.patch.object(AzBlobClient, "_info", func_mock)
        # the file below does not exist
path = "https://testazfs.blob.core.windows.net/test_caontainer/test3.csv"
result = var_azc.exists(path=path)
assert not result
def test_dfs_exists(self, mocker, _get_csv, var_azc):
return_value = {"size": len(b'name,age\nalice,10\nbob,10\n')}
func_mock = mocker.MagicMock()
func_mock.return_value = return_value
mocker.patch.object(AzDataLakeClient, "_info", func_mock)
        # the file below does not exist
path = "https://testazfs.dfs.core.windows.net/test_caontainer/test1.csv"
result = var_azc.exists(path=path)
assert result
# set to raise exception
func_mock.side_effect = ResourceNotFoundError
mocker.patch.object(AzDataLakeClient, "_info", func_mock)
        # the file below does not exist
path = "https://testazfs.dfs.core.windows.net/test_caontainer/test3.csv"
result = var_azc.exists(path=path)
assert not result
class TestExportDecorator:
decorator = azfs.export_decorator
@staticmethod
@decorator.register()
def export_df_example_1(_input: str) -> str:
return _input
@staticmethod
@decorator.register()
def export_df_example_2(_input: str) -> pd.DataFrame:
return pd.DataFrame()
@staticmethod
@decorator.register()
def export_df_example_3(_input: str) -> pd.DataFrame:
"""
Args:
_input: some_name
"""
return pd.DataFrame()
@staticmethod
@decorator.register()
def export_df_example_multiple(_input: str) -> (pd.DataFrame, pd.DataFrame):
"""
Args:
_input: some_name
"""
        return pd.DataFrame(), pd.DataFrame()
@staticmethod
@decorator.register()
def export_str_example_multiple(_input: str) -> (str, str):
"""
Args:
_input: some_name
"""
return "a", "b"
azc = azfs.AzFileClient()
azc.import_decorator(decorator, keyword_list=["prod"])
azc_multiple = azfs.AzFileClient()
azc_multiple.import_decorator(
decorator,
keyword_list=["prod", "dev"],
output_parent_path={
"prod": "https://prodazfs.dfs.core.windows.net/test_caontainer",
"dev": None
}
)
azc_ignore_error = azfs.AzFileClient()
azc_ignore_error.import_decorator(
decorator,
keyword_list=["prod"],
ignore_error=True
)
def test_return_type_not_matched(self):
with pytest.raises(AzfsDecoratorReturnTypeError):
self.azc.export_df_example_1(
_input="error",
_prod_file_name_prefix="prefix",
_prod_file_name="the_file_name",
_prod_file_name_suffix="suffix"
)
with pytest.raises(AzfsDecoratorReturnTypeError):
self.azc.export_df_example_1(
_input="error",
_prod_output_parent_path="https://prodazfs.dfs.core.windows.net/test_caontainer",
_prod_key="test",
_prod_file_name_prefix="prefix",
_prod_file_name="the_file_name",
_prod_file_name_suffix="suffix"
)
def test_format_type_not_matched(self):
with pytest.raises(AzfsDecoratorFileFormatError):
self.azc.export_df_example_2(
_input="error",
_prod_output_parent_path="https://testazfs.dfs.core.windows.net/test_caontainer",
_prod_file_name_prefix="prefix",
_prod_file_name="the_file_name",
_prod_file_name_suffix="suffix",
_prod_format_type="parquet"
)
with pytest.raises(AzfsDecoratorFileFormatError):
self.azc.export_df_example_3(
_input="error",
_prod_storage_account="testazfs",
_prod_container="test_container",
_prod_key="some_folder",
_prod_file_name_prefix="prefix",
_prod_file_name="the_file_name",
_prod_file_name_suffix="suffix",
_prod_format_type="parquet"
)
with pytest.raises(AzfsDecoratorFileFormatError):
self.azc_multiple.export_df_example_multiple(
_input="error",
_prod_file_name_prefix="prefix",
_prod_file_name=["the_file_name_1", "the_file_name_2"],
_prod_file_name_suffix="suffix",
_prod_format_type="parquet",
_dev_storage_account="devazfs",
_dev_container="test_container",
_dev_key="some_folder",
_dev_file_name_prefix="prefix",
_dev_file_name=["the_file_name_1", "the_file_name_2"],
_dev_file_name_suffix="suffix",
_dev_format_type="parquet",
)
with pytest.raises(AzfsDecoratorSizeNotMatchedError):
self.azc_multiple.export_df_example_multiple(
_input="error",
_prod_file_name_prefix="prefix",
_prod_file_name="the_file_name_1",
_prod_file_name_suffix="suffix",
_prod_format_type="parquet",
_dev_storage_account="devazfs",
_dev_container="test_container",
_dev_file_name_prefix="prefix",
_dev_file_name=["the_file_name_1", "the_file_name_2"],
_dev_file_name_suffix="suffix",
_dev_format_type="parquet",
)
with pytest.raises(AzfsDecoratorReturnTypeError):
self.azc_multiple.export_str_example_multiple(
_input="error",
_prod_file_name_prefix="prefix",
_prod_file_name=["the_file_name_1", "the_file_name_2"],
_prod_file_name_suffix="suffix",
_prod_format_type="parquet",
_dev_storage_account="devazfs",
_dev_container="test_container",
_dev_key="some_folder",
_dev_file_name_prefix="prefix",
_dev_file_name=["the_file_name_1", "the_file_name_2"],
_dev_file_name_suffix="suffix",
_dev_format_type="parquet",
)
def test_ignore_error(self):
self.azc_ignore_error.export_df_example_1(
_input="error",
_prod_file_name_prefix="prefix",
_prod_file_name="the_file_name",
_prod_file_name_suffix="suffix"
        )
669d654ac2b0993095ba2a68175c410143be9802 | 230,498 | py | Python | tmqa3.py | RomainClaret/mse.thesis.code | 11837100c438d376a90392018ed69fff067c8ddf | ["MIT"] | 2 | 2020-12-07T23:35:49.000Z | 2021-05-02T14:57:50.000Z | tmqa3.py | RomainClaret/mse.thesis.code | 11837100c438d376a90392018ed69fff067c8ddf | ["MIT"] | null | null | null | tmqa3.py | RomainClaret/mse.thesis.code | 11837100c438d376a90392018ed69fff067c8ddf | ["MIT"] | null | null | null |
#!/usr/bin/env python
# coding: utf-8
# In[1]:
get_ipython().run_line_magic('autosave', '180')
get_ipython().run_line_magic('matplotlib', 'inline')
# In[2]:
import requests
import time
import itertools as it
import re
#import numpy
from copy import copy
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
#from pprint import pprint
import time
import json
import os
import networkx as nx
from math import sqrt
import spacy
from hdt import HDTDocument
import multiprocessing as mp
import torch
from transformers import BertTokenizer, BertModel, BertForMaskedLM
os.environ['CUDA_VISIBLE_DEVICES'] = "1,2"
from deepcorrect import DeepCorrect
#import deepcorrect
#print(deepcorrect.__file__)
corrector = DeepCorrect('data/deep_punct/deeppunct_params_en', 'data/deep_punct/deeppunct_checkpoint_wikipedia')
# In[3]:
# Load pre-trained model tokenizer (vocabulary)
modelpath = "bert-large-uncased"
tokenizer = BertTokenizer.from_pretrained(modelpath)
# In[4]:
hdt_wd = HDTDocument("data/kb/wikidata2018_09_11.hdt")
# In[5]:
#nlp = spacy.load("en_core_web_lg")
nlp = spacy.load("/data/users/romain.claret/tm/wiki-kb-linked-entities/nlp_custom_6")
#print(nlp.pipeline)
# In[6]:
# load settings
with open( "settings-graphqa.json", "r") as settings_data:
settings = json.load(settings_data)
use_cache = settings['use_cache']
save_cache = settings['save_cache']
cache_path = settings['cache_path']
#cache_path
# In[7]:
save_cache = True
def save_cache_data(save_cache=save_cache):
if save_cache:
with open(os.path.join(cache_path,'wd_local_statements_dict.json'), 'wb') as outfile:
outfile.write(json.dumps(wd_local_statements_dict, separators=(',',':')).encode('utf8'))
with open(os.path.join(cache_path,'wd_labels_dict.json'), 'wb') as outfile:
outfile.write(json.dumps(wd_labels_dict, separators=(',',':')).encode('utf8'))
with open(os.path.join(cache_path,'wd_local_word_ids_dict.json'), 'wb') as outfile:
outfile.write(json.dumps(wd_local_word_ids_dict, separators=(',',':')).encode('utf8'))
with open(os.path.join(cache_path,'wd_online_word_ids_dict.json'), 'wb') as outfile:
outfile.write(json.dumps(wd_online_word_ids_dict, separators=(',',':')).encode('utf8'))
with open(os.path.join(cache_path,'wd_local_predicate_ids_dict.json'), 'wb') as outfile:
outfile.write(json.dumps(wd_local_predicate_ids_dict, separators=(',',':')).encode('utf8'))
with open(os.path.join(cache_path,'wd_online_predicate_ids_dict.json'), 'wb') as outfile:
outfile.write(json.dumps(wd_online_predicate_ids_dict, separators=(',',':')).encode('utf8'))
with open(os.path.join(cache_path,'word_similarities_dict.json'), 'wb') as outfile:
outfile.write(json.dumps(word_similarities_dict, separators=(',',':')).encode('utf8'))
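A quick sanity check of the cache format written above: `separators=(',', ':')` produces compact JSON without any whitespace, encoded as UTF-8 bytes (the sample key/label pair below is illustrative only):

```python
import json

# Encode exactly as save_cache_data does, then decode it back.
sample = {"Q1": "universe"}
blob = json.dumps(sample, separators=(',', ':')).encode('utf8')
assert blob == b'{"Q1":"universe"}'  # no space after ',' or ':'
assert json.loads(blob.decode('utf8')) == sample
```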
# In[8]:
# Load statements cache
use_cache = True
def load_cache_data(use_cache=False):
if use_cache:
path_wd_local_statements_dict = "wd_local_statements_dict.json"
path_wd_labels_dict = 'wd_labels_dict.json'
path_wd_local_word_ids_dict = 'wd_local_word_ids_dict.json'
path_wd_online_word_ids_dict = 'wd_online_word_ids_dict.json'
path_wd_local_predicate_ids_dict = 'wd_local_predicate_ids_dict.json'
path_wd_online_predicate_ids_dict = 'wd_online_predicate_ids_dict.json'
path_word_similarities_dict = 'word_similarities_dict.json'
else:
path_wd_local_statements_dict = "wd_local_statements_dict_empty.json"
path_wd_labels_dict = 'wd_labels_dict_empty.json'
path_wd_local_word_ids_dict = 'wd_local_word_ids_dict_empty.json'
path_wd_online_word_ids_dict = 'wd_online_word_ids_dict_empty.json'
path_wd_local_predicate_ids_dict = 'wd_local_predicate_ids_dict_empty.json'
path_wd_online_predicate_ids_dict = 'wd_online_predicate_ids_dict_empty.json'
path_word_similarities_dict = 'word_similarities_dict_empty.json'
with open(os.path.join(cache_path,path_wd_local_statements_dict), "rb") as data:
wd_local_statements_dict = json.load(data)
with open(os.path.join(cache_path,path_wd_labels_dict), "rb") as data:
wd_labels_dict = json.load(data)
with open(os.path.join(cache_path,path_wd_local_word_ids_dict), "rb") as data:
wd_local_word_ids_dict = json.load(data)
with open(os.path.join(cache_path,path_wd_online_word_ids_dict), "rb") as data:
wd_online_word_ids_dict = json.load(data)
with open(os.path.join(cache_path,path_wd_local_predicate_ids_dict), "rb") as data:
wd_local_predicate_ids_dict = json.load(data)
with open(os.path.join(cache_path,path_wd_online_predicate_ids_dict), "rb") as data:
wd_online_predicate_ids_dict = json.load(data)
with open(os.path.join(cache_path,path_word_similarities_dict), "rb") as data:
word_similarities_dict = json.load(data)
return (wd_local_statements_dict, wd_labels_dict,
wd_local_word_ids_dict, wd_online_word_ids_dict,
wd_local_predicate_ids_dict, wd_online_predicate_ids_dict,
word_similarities_dict)
(wd_local_statements_dict, wd_labels_dict,
wd_local_word_ids_dict, wd_online_word_ids_dict,
wd_local_predicate_ids_dict, wd_online_predicate_ids_dict,
word_similarities_dict) = load_cache_data(use_cache=True)
#print("wd_local_statements_dict",len(wd_local_statements_dict))
#print("wd_labels_dict",len(wd_labels_dict))
#print("wd_local_word_ids_dict",len(wd_local_word_ids_dict))
#print("wd_online_word_ids_dict",len(wd_online_word_ids_dict))
#print("wd_local_predicate_ids_dict",len(wd_local_predicate_ids_dict))
#print("wd_online_predicate_ids_dict",len(wd_online_predicate_ids_dict))
#print("word_similarities_dict",len(word_similarities_dict))
# In[9]:
def get_kb_ents(text):
#doc = nlp_kb(text)
doc = nlp(text)
#for ent in doc.ents:
# print(" ".join(["ent", ent.text, ent.label_, ent.kb_id_]))
return doc.ents
#ent_text_test = (
# "In The Hitchhiker's Guide to the Galaxy, written by Douglas Adams, "
# "Douglas reminds us to always bring our towel, even in China or Brazil. "
# "The main character in Doug's novel is the man Arthur Dent, "
# "but Dougledydoug doesn't write about George Washington or Homer Simpson."
#)
#
#en_text_test_2 = ("Which actor voiced the Unicorn in The Last Unicorn?")
#
#print([ent.kb_id_ for ent in get_kb_ents(ent_text_test)])
#[ent.kb_id_ for ent in get_kb_ents(en_text_test_2)]
# In[102]:
def get_nlp(sentence, autocorrect=False, banning_str=False):
sentence = sentence.replace("’", "\'")
nlp_sentence = nlp(sentence)
nlp_sentence_list = list(nlp_sentence)
meaningful_punct = []
for i_t, t in enumerate(nlp_sentence_list):
#print(t,t.pos_, t.lemma_)
if t.lemma_ == "year":
nlp_sentence_list[i_t] = "date"
elif t.text == "\'s" or t.text == "s":
if t.lemma_ == "be" or t.lemma_ == "s":
nlp_sentence_list[i_t] = "is"
else: nlp_sentence_list[i_t] = ""
elif t.text == "\'ve" or t.text == "ve":
if t.lemma_ == "have":
nlp_sentence_list[i_t] = "have"
else: nlp_sentence_list[i_t] = ""
elif t.text == "\'re" or t.text == "re":
if t.lemma_ == "be":
nlp_sentence_list[i_t] = "are"
else: nlp_sentence_list[i_t] = ""
elif t.text == "\'ll" or t.text == "ll":
if t.lemma_ == "will":
nlp_sentence_list[i_t] = "will"
else: nlp_sentence_list[i_t] = ""
elif t.text == "\'d" or t.text == "d":
if t.lemma_ == "have":
nlp_sentence_list[i_t] = "had"
elif t.lemma_ == "would":
nlp_sentence_list[i_t] = "would"
else: nlp_sentence_list[i_t] = ""
elif t.is_space:
nlp_sentence_list[i_t] = ""
elif t.pos_ == "PUNCT":
if t.text.count(".") > 2:
meaningful_punct.append((i_t,"..."))
nlp_sentence_list[i_t] = "..."
else:
nlp_sentence_list[i_t] = ""
else: nlp_sentence_list[i_t] = nlp_sentence_list[i_t].text
nlp_sentence_list = [w for w in nlp_sentence_list if w]
#print("nlp_sentence_list",nlp_sentence_list)
if autocorrect:
nlp_sentence = " ".join(nlp_sentence_list)
nlp_sentence = (nlp_sentence.replace("’", "\'").replace("€", "euro").replace("ç", "c")
.replace("à", "a").replace("é","e").replace("ä","a").replace("ö","o")
.replace("ü","u").replace("è","e").replace("¨","").replace("ê","e")
.replace("â","a").replace("ô","o").replace("î","i").replace("û","u")
.replace("_"," ").replace("°","degree").replace("§","section")
.replace("š","s").replace("Š","S").replace("ć","c").replace("Ç", "C")
.replace("À", "A").replace("É","E").replace("Ä","A").replace("Ö","O")
.replace("Ü","U").replace("È","E").replace("Ê","E").replace("Ë","E")
.replace("Â","A").replace("Ô","O").replace("Î","I").replace("Û","U")
.replace("á","a").replace("Á","Á").replace("ó","o").replace("Ó","O")
.replace("ú","u").replace("Ú","U").replace("í","i").replace("Í","I")
.replace("–","-").replace("×","x").replace("“","\"").replace("ř","r")
.replace("ø","o").replace("ı","i").replace("ş","s").replace("Á","A")
.replace("Ō","O").replace("ã","a").replace("ū","u").replace("ō","o")
.replace("ñ","n").replace("Ł","L").replace("ł","l").replace("Ñ","N")
.replace("Ō","O").replace("Ā","A").replace("ē","e").replace("ǟ","a")
.replace("ȱ","o").replace("ō","o").replace("ȭ","o").replace("ī","i")
.replace("ū","u").replace("ȯ","o").replace("ä","a").replace("õ","o")
.replace("Ā","A").replace("ū","u").replace("ī","i").replace("ē","e")
.replace("ō","o").replace("Ā","A").replace("ā","a").replace("Ǟ","A")
.replace("ǟ","a").replace("Ḇ","B").replace("ḇ","b").replace("C̄","C")
.replace("c̄","c").replace("Ḏ","D").replace("ḏ","d").replace("ḕ","e")
.replace("Ē","E").replace("ē","e").replace("Ḕ","E").replace("Ḗ","E")
.replace("ḗ","e").replace("Ḡ","G").replace("ḡ","g").replace("ẖ","h")
.replace("Ī","ī").replace("Ḹ","L").replace("ḹ","l").replace("Ḻ","L")
.replace("ḻ","l").replace("Ṉ","N").replace("ṉ","n").replace("Ȫ","O")
.replace("ȫ","o").replace("Ṑ","O").replace("ṑ","o").replace("ß","ss")
.replace("Ṓ","O").replace("ṓ","o").replace("Ṝ","R").replace("ṝ","r")
.replace("Ṟ","R").replace("ṟ","r").replace("Ṯ","T").replace("ṯ","t")
.replace("Ū","U").replace("ū","u").replace("Ǘ","U").replace("ǘ","u")
.replace("Ǖ","U").replace("ǖ","u").replace("Ṻ","U").replace("ṻ","u")
.replace("Ȳ","Y").replace("ȳ","y").replace("ẕ","z").replace("Ẕ","Z")
.replace("Ǣ","AE").replace("ǣ","ae").replace("ė","e").replace("å","a")
.replace("æ","ae").replace("Æ","AE").replace("ą","a").replace("ț","t")
.replace("ï","i").replace("Ț","T").replace("İ","I").replace("ʻ","\'")
.replace("ń","n").replace("Ń","N").replace("Č","C").replace("ø","o")
.replace("č","c").replace("ž","z").replace("Ž","Z").replace("Ø","O")
.replace("ễ","e").replace("Ê","E").replace("ă","a").replace("Ă","A")
.replace("ệ","e").replace("Ş","S").replace("ş","s").replace("~"," ")
.replace("œ","oe").replace("Œ","OE").replace("ě","e").replace("Ě","E")
.replace("đ","d").replace("Đ","D").replace("Я","R").replace("я","r")
.replace("ý","y").replace("Ý","Y").replace("Ż","Z").replace("ż","z")
.replace("ș","s").replace("¡","i").replace("´","\'").replace("Ș","S")
.replace("ò","o").replace("Ò","O").replace("ë","e")
)
if banning_str:
for ban in banning_str:
nlp_sentence = nlp_sentence.replace(ban[0],ban[1])
nlp_sentence = corrector.correct(nlp_sentence)
nlp_sentence = nlp_sentence[0]["sequence"]
nlp_sentence = nlp(nlp_sentence)
nlp_sentence_list = list(nlp_sentence)
for i_t, t in enumerate(nlp_sentence_list):
if t.pos_ == "PUNCT":
if i_t in [mpunct[0] for mpunct in meaningful_punct]:
for mpunct in meaningful_punct:
if i_t == mpunct[0]:
nlp_sentence_list[mpunct[0]] = mpunct[1]
else: nlp_sentence_list[i_t] = ''
else:
nlp_sentence_list[i_t] = nlp_sentence_list[i_t].text
for mpunct in meaningful_punct:
if mpunct[0] < len(nlp_sentence_list):
if nlp_sentence_list[mpunct[0]] != mpunct[1]:
nlp_sentence_list.insert(mpunct[0], mpunct[1])
return nlp(" ".join(nlp_sentence_list).replace(" ", " ").replace(". &",".").replace(". /","."))
#get_nlp("Which genre of album is harder.....faster?", autocorrect=True)
#get_nlp("Which genre of album is Harder ... Faster", autocorrect=True)
#get_nlp("Which home is an example of italianate architecture?", autocorrect=True)
#get_nlp("Your mom's father, were nice in the Years.!?\'\":`’^!$£€\(\)ç*+%&/\\\{\};,àéäöüè¨êâôîû~-_<>°§...@.....", autocorrect=True)
#get_nlp("of what nationality is ken mcgoogan", autocorrect=True)
#get_nlp("you're fun", autocorrect=True)
#get_nlp("where's the fun", autocorrect=True)
#get_nlp("whats the name of the organization that was founded by frei otto", True)
#get_nlp("Hurry! We’re late!",True)
#get_nlp("Who was an influential figure for miško Šuvaković",True)
#get_nlp("what is the second level division of the division crixás do tocantins",True)
#get_nlp("what is the second level division of the division crixás do tocantins",True)
#get_nlp("2×4",autocorrect=True,banning_str=[["×","x"]])
# In[11]:
def is_wd_entity(to_check):
pattern = re.compile('^Q[0-9]*$')
if pattern.match(to_check.strip()): return True
else: return False
def is_wd_predicate(to_check):
pattern = re.compile('^P[0-9]*$')
if pattern.match(to_check.strip()): return True
else: return False
def is_valide_wd_id(to_check):
if is_wd_entity(to_check) or is_wd_predicate(to_check): return True
else: return False
#print(is_wd_entity("Q155"))
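The anchored patterns above can be sanity-checked directly; note that `^Q[0-9]*$` uses `*`, so a bare "Q" with no digits also passes (the sample ids are illustrative):

```python
import re

entity_re = re.compile(r'^Q[0-9]*$')     # same pattern as is_wd_entity
predicate_re = re.compile(r'^P[0-9]*$')  # same pattern as is_wd_predicate

assert entity_re.match("Q155") is not None
assert predicate_re.match("P31") is not None
assert entity_re.match("P31") is None    # wrong prefix
assert entity_re.match("Q") is not None  # '*' allows zero digits
```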
# In[12]:
# TODO redo the functions and optimize
# return True if the given string is a wikidata entity id or a plain literal
# (statement-qualifier hash ids are 32-char alphanumeric strings)
def is_entity_or_literal(to_check):
if is_wd_entity(to_check.strip()):
return True
pattern = re.compile('^[A-Za-z0-9]*$')
if len(to_check) == 32 and pattern.match(to_check.strip()):
return False
return True
# return if the given string is a literal or a date
def is_literal_or_date(to_check):
return not('www.wikidata.org' in to_check)
# return if the given string describes a year in the format YYYY
def is_year(year):
pattern = re.compile('^[0-9][0-9][0-9][0-9]$')
if not(pattern.match(year.strip())):
return False
else:
return True
# return if the given string is a date
def is_date(date):
pattern = re.compile('^[0-9]+ [A-z]+ [0-9][0-9][0-9][0-9]$')
if not(pattern.match(date.strip())):
return False
else:
return True
# return if the given string is a timestamp
def is_timestamp(timestamp):
pattern = re.compile('^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]T00:00:00Z')
if not(pattern.match(timestamp.strip())):
return False
else:
return True
# convert the given month to a number
def convert_month_to_number(month):
return{
"january" : "01",
"february" : "02",
"march" : "03",
"april" : "04",
"may" : "05",
"june" : "06",
"july" : "07",
"august" : "08",
"september" : "09",
"october" : "10",
"november" : "11",
"december" : "12"
}[month.lower()]
# convert a date from the wikidata frontendstyle to timestamp style
def convert_date_to_timestamp (date):
sdate = date.split(" ")
# add the leading zero
if (len(sdate[0]) < 2):
sdate[0] = "0" + sdate[0]
return sdate[2] + '-' + convert_month_to_number(sdate[1]) + '-' + sdate[0] + 'T00:00:00Z'
# convert a year to timestamp style
def convert_year_to_timestamp(year):
return year + '-01-01T00:00:00Z'
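A self-contained sketch of the same date normalisation as the two converters above, duplicated here so it runs standalone (the names `MONTHS`, `date_to_timestamp`, and `year_to_timestamp` are illustrative, not part of the notebook):

```python
# Pad the day, map the month name to its number, and emit the
# "YYYY-MM-DDT00:00:00Z" timestamp shape used by the Wikidata frontend.
MONTHS = ["january", "february", "march", "april", "may", "june",
          "july", "august", "september", "october", "november", "december"]

def date_to_timestamp(date):
    day, month, year = date.split(" ")
    return "%s-%02d-%02dT00:00:00Z" % (year, MONTHS.index(month.lower()) + 1, int(day))

def year_to_timestamp(year):
    return year + "-01-01T00:00:00Z"
```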
# get the wikidata id of a wikidata url
def wikidata_url_to_wikidata_id(url):
if not url:
return False
if "XMLSchema#dateTime" in url or "XMLSchema#decimal" in url:
date = url.split("\"", 2)[1]
date = date.replace("+", "")
return date
if(is_literal_or_date(url)):
if is_year(url):
return convert_year_to_timestamp(url)
if is_date(url):
return convert_date_to_timestamp(url)
else:
url = url.replace("\"", "")
return url
else:
url_array = url.split('/')
# the wikidata id is always in the last component of the id
return url_array[len(url_array)-1]
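For the common case of a plain entity or predicate URL, the extraction above reduces to taking the last path component; a minimal standalone check (the helper name `url_to_id` is illustrative):

```python
# The Wikidata id is always the last component of the URL path.
def url_to_id(url):
    return url.rsplit("/", 1)[-1]
```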
# fetch all statements where the given qualifier statement occurs as subject
def get_all_statements_with_qualifier_as_subject(qualifier):
statements = []
triples, cardinality = hdt_wd.search_triples(qualifier, "", "")
for triple in triples:
sub, pre, obj = triple
# only consider triples with a wikidata-predicate
if pre.startswith("http://www.wikidata.org/"):
statements.append({'entity': sub, 'predicate': pre, 'object': obj})
return statements
# fetch the statement where the given qualifier statement occurs as object
def get_statement_with_qualifier_as_object(qualifier):
triples, cardinality = hdt_wd.search_triples("", "", qualifier)
for triple in triples:
sub, pre, obj = triple
# only consider triples with a wikidata-predicate
if pre.startswith("http://www.wikidata.org/") and sub.startswith("http://www.wikidata.org/entity/Q"):
return (sub, pre, obj)
return False
# returns all statements that involve the given entity
def get_all_statements_of_entity(entity_id):
# check entity pattern
if not is_wd_entity(entity_id.strip()):
return False
if wd_local_statements_dict.get(entity_id) != None:
#print("saved statement")
return wd_local_statements_dict[entity_id]
entity = "http://www.wikidata.org/entity/"+entity_id
statements = []
# entity as subject
triples_sub, cardinality_sub = hdt_wd.search_triples(entity, "", "")
# entity as object
triples_obj, cardinality_obj = hdt_wd.search_triples("", "", entity)
if cardinality_sub + cardinality_obj > 5000:
wd_local_statements_dict[entity_id] = []
return []
# iterate through all triples in which the entity occurs as the subject
for triple in triples_sub:
sub, pre, obj = triple
# only consider triples with a wikidata-predicate or if it is an identifier predicate
if not pre.startswith("http://www.wikidata.org/"):# or (wikidata_url_to_wikidata_id(pre) in identifier_predicates):
continue
# object is statement
if obj.startswith("http://www.wikidata.org/entity/statement/"):
qualifier_statements = get_all_statements_with_qualifier_as_subject(obj)
qualifiers = []
for qualifier_statement in qualifier_statements:
if qualifier_statement['predicate'] == "http://www.wikidata.org/prop/statement/" + wikidata_url_to_wikidata_id(pre):
obj = qualifier_statement['object']
elif is_entity_or_literal(wikidata_url_to_wikidata_id(qualifier_statement['object'])):
qualifiers.append({
"qualifier_predicate":{
"id": wikidata_url_to_wikidata_id(qualifier_statement['predicate'])
},
"qualifier_object":{
"id": wikidata_url_to_wikidata_id(qualifier_statement['object'])
}})
statements.append({'entity': {'id': wikidata_url_to_wikidata_id(sub)}, 'predicate': {'id': wikidata_url_to_wikidata_id(pre)}, 'object': {'id': wikidata_url_to_wikidata_id(obj)}, 'qualifiers': qualifiers})
else:
statements.append({'entity': {'id': wikidata_url_to_wikidata_id(sub)}, 'predicate': {'id': wikidata_url_to_wikidata_id(pre)}, 'object': {'id': wikidata_url_to_wikidata_id(obj)}, 'qualifiers': []})
# iterate through all triples in which the entity occurs as the object
for triple in triples_obj:
sub, pre, obj = triple
# only consider triples with an entity as subject and a wikidata-predicate or if it is an identifier predicate
if not sub.startswith("http://www.wikidata.org/entity/Q"):# or not pre.startswith("http://www.wikidata.org/") or wikidata_url_to_wikidata_id(pre) in identifier_predicates:
continue
if sub.startswith("http://www.wikidata.org/entity/statement/"):
            statements_with_qualifier_as_object = get_statement_with_qualifier_as_object(sub)
# if no statement was found continue
if not statements_with_qualifier_as_object:
continue
main_sub, main_pred, main_obj = statements_with_qualifier_as_object
qualifier_statements = get_all_statements_with_qualifier_as_subject(sub)
qualifiers = []
for qualifier_statement in qualifier_statements:
if wikidata_url_to_wikidata_id(qualifier_statement['predicate']) == wikidata_url_to_wikidata_id(main_pred):
main_obj = qualifier_statement['object']
elif is_entity_or_literal(wikidata_url_to_wikidata_id(qualifier_statement['object'])):
qualifiers.append({
"qualifier_predicate":{"id": wikidata_url_to_wikidata_id(qualifier_statement['predicate'])},
"qualifier_object":{"id": wikidata_url_to_wikidata_id(qualifier_statement['object'])}
})
statements.append({
'entity': {'id': wikidata_url_to_wikidata_id(main_sub)},
'predicate': {'id': wikidata_url_to_wikidata_id(main_pred)},
'object': {'id': wikidata_url_to_wikidata_id(main_obj)},
'qualifiers': qualifiers
})
else:
statements.append({'entity': {'id': wikidata_url_to_wikidata_id(sub)}, 'predicate': {'id': wikidata_url_to_wikidata_id(pre)}, 'object': {'id': wikidata_url_to_wikidata_id(obj)}, 'qualifiers': []})
# cache the data
wd_local_statements_dict[entity_id] = statements
return statements
#print(len(get_all_statements_of_entity("Q267721")))
#for s in get_all_statements_of_entity("Q267721"):
# print(s)
#save_cache_data(save_cache=save_cache)
# In[13]:
def get_wd_ids_online(name, is_predicate=False, top_k=3, sim_threshold=0.5):
name = name.split('(')[0]
if is_predicate and wd_online_predicate_ids_dict.get(name) != None and use_cache and len(wd_online_predicate_ids_dict)>0:
res_ids = wd_online_predicate_ids_dict[name][:top_k]
to_return = []
for res_id in res_ids:
if get_nlp(get_wd_label(res_id)).similarity(get_nlp(name)) >= sim_threshold:
to_return.append(res_id)
return to_return #wd_online_predicate_ids_dict[name][:top_k]
elif not is_predicate and wd_online_word_ids_dict.get(name) != None and use_cache and len(wd_online_word_ids_dict)>0:
#print("saved word online")
res_ids = wd_online_word_ids_dict[name][:top_k]
to_return = []
for res_id in res_ids:
if get_nlp(get_wd_label(res_id)).similarity(get_nlp(name)) >= sim_threshold:
to_return.append(res_id)
return to_return #wd_online_word_ids_dict[name][:top_k]
request_successfull = False
entity_ids = ""
while not request_successfull:
try:
if is_predicate:
entity_ids = requests.get('https://www.wikidata.org/w/api.php?action=wbsearchentities&format=json&language=en&type=property&limit=' + str(top_k) + '&search='+name).json()
else:
entity_ids = requests.get('https://www.wikidata.org/w/api.php?action=wbsearchentities&format=json&language=en&limit=' + str(top_k) + '&search='+name).json()
request_successfull = True
        # only retry on network / JSON-decoding errors instead of a bare except
        except (requests.exceptions.RequestException, ValueError):
            time.sleep(5)
results = entity_ids.get("search")
if not results:
if is_predicate: wd_online_predicate_ids_dict[name] = []
else: wd_online_word_ids_dict[name] = []
return []
res = []
for result in results:
res_id = result['id']
#print(res_id,get_nlp(get_wd_label(res_id)).similarity(get_nlp(name)))
if get_nlp(get_wd_label(res_id)).similarity(get_nlp(name)) >= sim_threshold:
res.append(res_id)
if is_predicate: wd_online_predicate_ids_dict[name] = res
else: wd_online_word_ids_dict[name] = res
if res:
return res[:top_k]
else:
return []
#print(get_wd_ids_online("did", is_predicate=False, top_k=3))
#print(get_wd_ids_online("voiced", is_predicate=True, top_k=3))
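`get_wd_ids_online` retries the HTTP call in an unbounded loop; a hedged, self-contained sketch of the same retry idea with a capped attempt count (the name `fetch_with_retry` is illustrative, not part of the notebook):

```python
import time

def fetch_with_retry(fetch, attempts=3, delay=0):
    # Retry a flaky callable a bounded number of times,
    # re-raising on the final failure instead of looping forever.
    for i in range(attempts):
        try:
            return fetch()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)
```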
# In[14]:
# very computational
def get_most_similar(word, top_k=3):
print("behold: get_most_similar started with:", word)
word_text = str(word.lower())
    if word_similarities_dict.get(word_text) != None and use_cache and len(word_similarities_dict)>0:
        return word_similarities_dict[word_text][:top_k]
word = nlp.vocab[word_text]
queries = [w for w in word.vocab if w.is_lower == word.is_lower and w.prob >= -15]
by_similarity = sorted(queries, key=lambda w: word.similarity(w), reverse=True)
word_similarities = [(w.text.lower(),float(w.similarity(word))) for w in by_similarity[:10] if w.lower_ != word.lower_]
word_similarities_dict[word_text] = word_similarities
save_cache_data(save_cache=save_cache)
return word_similarities[:top_k]
#print(get_most_similar("voiced", top_k=3))
#save_cache_data(save_cache=save_cache)
# In[15]:
def get_wd_ids(word, is_predicate=False, top_k=3, limit=6, online=False, sim_threshold=0.5):
#if is_predicate and wd_local_predicate_ids_dict.get(word) != None and use_cache and len(wd_local_predicate_ids_dict)>0:
# #print("saved predicate local")
# res_ids = wd_local_predicate_ids_dict[word][:top_k]
# to_return = []
# for res_id in res_ids:
# if get_nlp(get_wd_label(res_id)).similarity(get_nlp(word)) >= sim_threshold:
# to_return.append(res_id)
# return to_return #wd_local_predicate_ids_dict[word][:top_k]
#
#elif not is_predicate and wd_local_word_ids_dict.get(word) != None and use_cache and len(wd_local_word_ids_dict)>0:
# #print("saved word local")
# res_ids = wd_local_word_ids_dict[word][:top_k]
# to_return = []
# for res_id in res_ids:
# if get_nlp(get_wd_label(res_id)).similarity(get_nlp(word)) >= sim_threshold:
# to_return.append(res_id)
# return to_return #wd_local_word_ids_dict[word][:top_k]
language = "en"
word_formated = str("\""+word+"\""+"@"+language)
to_remove = len("http://www.wikidata.org/entity/")
t_name, card_name = hdt_wd.search_triples("", "http://schema.org/name", word_formated, limit=top_k)
#print("names cardinality of \"" + word+"\": %i" % card_name)
t_alt, card_alt = hdt_wd.search_triples("", 'http://www.w3.org/2004/02/skos/core#altLabel', word_formated, limit=top_k)
#print("alternative names cardinality of \"" + word+"\": %i" % card_alt)
results = list(set(
[t[0][to_remove:] for t in t_name if is_valide_wd_id(t[0][to_remove:])] +
[t[0][to_remove:] for t in t_alt if is_valide_wd_id(t[0][to_remove:])]
))
res = []
for result in results:
#print(result,get_nlp(get_wd_label(result)).similarity(get_nlp(word)))
if get_nlp(get_wd_label(result)).similarity(get_nlp(word)) >= sim_threshold:
res.append(result)
if is_predicate: res = [r for r in res if is_wd_predicate(r)]
# cache the data
if is_predicate: wd_local_predicate_ids_dict[word] = res
else: wd_local_word_ids_dict[word] = res
return res[:top_k] if limit<=0 else res[:limit-1]
#print(get_wd_ids("did", is_predicate=False, top_k=1))
#get_wd_ids("The Last Unicorn", is_predicate=False,top_k=0, limit=10)
#print(get_wd_ids("wife", is_predicate=False , top_k=0, limit=0))
#print(get_wd_ids("voiced", is_predicate=True , top_k=0, limit=0))
# In[16]:
def get_wd_label(from_id, language="en"):
#print("from_id",from_id)
if is_valide_wd_id(from_id):
if wd_labels_dict.get(from_id) != None and use_cache and len(wd_labels_dict)>0:
#print("saved label local")
return wd_labels_dict[from_id]
id_url = "http://www.wikidata.org/entity/"+from_id
t_name, card_name = hdt_wd.search_triples(id_url, "http://schema.org/name", "")
name = [t[2].split('\"@'+language)[0].replace("\"", "") for t in t_name if "@"+language in t[2]]
#name = [t[2].split('@en')[0] for t in t_name if "@"+language in t[2]]
result = name[0] if name else ''
wd_labels_dict[from_id] = result #caching
return result
else:
return from_id
#print(get_wd_label("P725"))
#get_wd_label("Q20789322")
#get_wd_label("Q267721")
# In[17]:
# Building colors from graph
def get_color(node_type):
if node_type == "entity": return "violet"#"cornflowerblue"
elif node_type == "predicate": return "yellow"
else: return "red"
# Building labels for graph
def get_elements_from_graph(graph):
node_names = nx.get_node_attributes(graph,"name")
node_types = nx.get_node_attributes(graph,"type")
colors = [get_color(node_types[n]) for n in node_names]
return node_names, colors
# Plotting the graph
def plot_graph(graph, name, title="Graph"):
fig = plt.figure(figsize=(14,14))
ax = plt.subplot(111)
ax.set_title(str("answer: "+title), fontsize=10)
#pos = nx.spring_layout(graph)
labels, colors = get_elements_from_graph(graph)
nx.draw(graph, node_size=30, node_color=colors, font_size=10, font_weight='bold', with_labels=True, labels=labels)
plt.tight_layout()
plt.savefig("tmqa1_graphs_imgs/"+str(name)+".png", format="PNG", dpi = 300)
plt.show()
#plot_graph(graph, "file_name_graph", "Graph_title")
# In[18]:
def make_statements_graph_worker(graph, predicate_nodes, turn, indexing_predicates, BANNED_WD_IDS, BANNED_WD_PRED_IDS, BANNED_WD_KEYWORDS, BANNED_WD_PRED_KEYWORDS, in_mp_queue, out_mp_queue, predicate_nodes_lock, node_weight, qa):
#for statement in statements:
sentinel = None
for statement in iter(in_mp_queue.get, sentinel):
#print("statement",statement)
#if (statement['entity']['id'][0] != "Q"
# or statement['entity']['id'] in BANNED_WD_IDS
# or statement['predicate']['id'][0] != "P"
# or statement['predicate']['id'] in BANNED_WD_PRED_IDS
# or statement['object']['id'][0] != "Q"
# or statement['object']['id'] in BANNED_WD_IDS):
# continue
if (
statement['entity']['id'] in BANNED_WD_IDS
or statement['predicate']['id'][0] != "P"
or statement['predicate']['id'] in BANNED_WD_PRED_IDS
or statement['object']['id'] in BANNED_WD_IDS
):
continue
continue_flag = False
for key in BANNED_WD_PRED_KEYWORDS:
if (get_wd_label(statement['predicate']['id']).find(key) != -1): continue_flag = True
for key in BANNED_WD_KEYWORDS:
if (get_wd_label(statement['entity']['id']).find(key) != -1): continue_flag = True
if (get_wd_label(statement['object']['id']).find(key) != -1): continue_flag = True
if continue_flag: continue
#print(statement)
if not statement['entity']['id'] in graph:
graph.add_node(statement['entity']['id'], name=get_wd_label(statement['entity']['id']), type='entity', turn=turn, weight=node_weight, qa=qa)
if not statement['object']['id'] in graph:
graph.add_node(statement['object']['id'], name=get_wd_label(statement['object']['id']), type='entity', turn=turn, weight=node_weight, qa=qa)
with predicate_nodes_lock:
# increment index of predicate or set it at 0
if not statement['predicate']['id'] in predicate_nodes or not indexing_predicates:
predicate_nodes_index = 1
predicate_nodes[statement['predicate']['id']] = 1
else:
predicate_nodes[statement['predicate']['id']] += 1
predicate_nodes_index = predicate_nodes[statement['predicate']['id']]
# add the predicate node
predicate_node_id = (statement['predicate']['id'])
if indexing_predicates: predicate_node_id += "-" + str(predicate_nodes_index)
graph.add_node(predicate_node_id, name=get_wd_label(statement['predicate']['id']), type='predicate', turn=turn, weight=node_weight)
# add the two edges (entity->predicate->object)
#statement['entity']['id'] in BANNED_WD_IDS
#statement['object']['id'] in BANNED_WD_IDS
#statement['predicate']['id'] in BANNED_WD_PRED_IDS
#if (statement['predicate']['id'] in BANNED_WD_PRED_IDS): break
graph.add_edge(statement['entity']['id'], predicate_node_id)
graph.add_edge(predicate_node_id, statement['object']['id'])
out_mp_queue.put(graph)
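The worker above drains its input queue with `iter(in_mp_queue.get, sentinel)`; the same sentinel pattern in a single-process, standalone form (the name `drain` is illustrative):

```python
import queue

def drain(q, sentinel=None):
    # iter(q.get, sentinel) keeps calling q.get() until it returns the sentinel.
    return [item for item in iter(q.get, sentinel)]
```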
# In[19]:
# TODO: handle special literals? which one
def make_statements_graph(statements, indexing_predicates=True, cores=mp.cpu_count(), context_graph=False, node_weight=1, qa=False):
BANNED_WD_IDS = [
"Q4167410","Q66087861","Q65932995","Q21281405","Q17442446","Q41770487","Q29548341",
"Q29547399","Q25670","Q21286738"
]
BANNED_WD_PRED_IDS = [
"P1687","P7087","P1889","P646", "P227", "P1256", "P1257", "P1258", "P1260", "P301",
"P18","P1266","P487","P1970","P2529", "P4390", "P4342", "P4213", "P487", "P2624",
"P4953", "P2241", "P345","P703", "P2163", "P18", "P436", "P227", "P646", "P2581",
"P1006", "P244", "P214", "P1051", "P1296","P461", "P2959", "P1657", "P3834","P243",
"P3306","P6932","P356","P1630","P3303","P1921","P1793","P1628","P1184","P1662","P2704",
"P4793","P1921","P2302","P6562","P6127","P4342","P6145","P5786","P5099","P4947","P5032",
"P4933","P4632","P4529","P4277","P4282","P3135","P4276","P3593","P2638","P3804","P3145",
"P2509","P3212","P2704","P480","P3844","P3141","P3808","P3933","P2346","P3077","P3417",
"P2529","P3302","P3143","P2334","P3129","P3138","P3107","P2603","P2631","P2508","P2465",
"P2014", "P1874", "P2518", "P1265", "P1237","P1712", "P1970","P1804","P905","P1562",
"P1258","P646","P345"
]
BANNED_WD_KEYWORDS = ["_:"]
BANNED_WD_PRED_KEYWORDS = [
"ID", "ISBN","identifier", "IDENTIFIER", "isbn", "ISSN", "issn","id","Id","iD"
]
predicate_nodes = mp.Manager().dict()
predicate_nodes_lock = mp.Manager().Lock()
if context_graph:
        latest_turn = max(y["turn"] for x, y in context_graph.nodes(data=True))
        turn = latest_turn + 1
graph = context_graph.copy()
previous_predicates_ids = [x for x,y in context_graph.nodes(data=True) if y["type"]=="predicate"]
        if previous_predicates_ids and previous_predicates_ids[0].find("-") != -1:
for ppi in previous_predicates_ids:
ppi_id = ppi[:ppi.find("-")]
ppi_value = ppi[ppi.find("-")+1:]
if ppi_id in predicate_nodes:
if int(ppi_value) > predicate_nodes[ppi_id]:
predicate_nodes[ppi_id] = int(ppi_value)
else:
predicate_nodes[ppi_id] = int(ppi_value)
#print("predicate_nodes from context",predicate_nodes)
else:
turn=1
graph = nx.Graph()
if cores <= 0: cores = 1
out_mp_queue = mp.Queue()
in_mp_queue = mp.Queue()
sentinel = None
for statement in statements:
in_mp_queue.put(statement)
procs = [mp.Process(target = make_statements_graph_worker, args = (graph, predicate_nodes, turn, indexing_predicates, BANNED_WD_IDS, BANNED_WD_PRED_IDS, BANNED_WD_KEYWORDS, BANNED_WD_PRED_KEYWORDS, in_mp_queue, out_mp_queue, predicate_nodes_lock, node_weight, qa)) for i in range(cores)]
for proc in procs:
proc.daemon = True
proc.start()
for proc in procs:
in_mp_queue.put(sentinel)
for proc in procs:
local_g = out_mp_queue.get()
graph = nx.compose(graph,local_g)
for proc in procs:
proc.join()
if context_graph:
        if previous_predicates_ids and previous_predicates_ids[0].find("-") != -1:
previous_predicates_ids_only = [p[:p.find("-")] for p in previous_predicates_ids]
else:
previous_predicates_ids_only = previous_predicates_ids
spo_list = [[list(graph.neighbors(p))[0],p[:p.find("-")],list(graph.neighbors(p))[1]] for p in previous_predicates_ids]
spo_list_tagged = [[list(graph.neighbors(p))[0],p,list(graph.neighbors(p))[1]] for p in previous_predicates_ids]
for spo in spo_list_tagged:
#print("graph.nodes(data=True)",graph.nodes(data=True))
#print("before spo weights",spo,graph.nodes[spo[0]]['weight'],graph.nodes[spo[1]]['weight'],graph.nodes[spo[2]]['weight'])
graph.nodes[spo[0]]['weight'] += 1
graph.nodes[spo[1]]['weight'] += 1
graph.nodes[spo[2]]['weight'] += 1
#print("after spo weights",spo,graph.nodes[spo[0]]['weight'],graph.nodes[spo[1]]['weight'],graph.nodes[spo[2]]['weight'])
for p in [x for x,y in graph.nodes(data=True) if y["type"]=="predicate"]:
p_n = list(graph.neighbors(p))
if p.find("-") != -1:
p_id = p[:p.find("-")]
p_value = p[p.find("-")+1:]
else:
p_id = p
p_value = 0
spo_tuple = [p_n[0],p_id,p_n[1]]
if spo_tuple not in spo_list:
spo_list.append(spo_tuple)
else:
if p not in previous_predicates_ids:
graph.remove_node(p)
for p in [x for x,y in graph.nodes(data=True) if y["type"]=="entity"]:
p_n = list(graph.neighbors(p))
#print("p",get_wd_label(p),p,p_n)
#print("graph.nodes(data=True)",graph.nodes(data=True))
return graph, predicate_nodes
#test_graph = make_statements_graph(test_unduplicate_statements, indexing_predicates=False)
#print(test_graph[1])
#plot_graph(test_graph[0],"test")
#print("len(filtered_statements)",len(filtered_statements))
#start_time = time.time()
#graph, predicate_nodes = make_statements_graph(filtered_statements, indexing_predicates=True, cores=1)
#print(time.time()-start_time)
#print("--> ",len(graph), "nodes and", graph.size(), "edges")
#print(predicate_nodes)
#q0_test = "Who is the wife of Barack Obama?"
#q0_nlp_test = get_nlp(q0_test)
#q0_themes_test = get_themes(q0_nlp_test, q0_test, top_k=3)
#q0_themes_enhanced_test = get_enhanced_themes(q0_themes_test, top_k=3)
#q0_predicates_test = get_predicates_online(q0_nlp_test, top_k=3)
#q0_focused_parts_test = []
#q0_graph, q0_predicates_dict = build_graph(q0_nlp_test, q0_themes_test, q0_themes_enhanced_test, q0_predicates_test, deep_k=3)
#print(q0_predicates_dict)
#plot_graph(q0_graph, "file_name_graph", "Graph_title")
#
#q1_test = "Where did Barack Obama and Michelle Obama marry?"
#q1_nlp_test = get_nlp(q1_test)
#q1_themes_test = get_themes(q1_nlp_test, q1_test, top_k=2)
#print("q1_themes_test",q1_themes_test)
#q1_themes_enhanced_test = get_enhanced_themes(q1_themes_test, top_k=2)
#print("q1_themes_enhanced_test",q1_themes_enhanced_test)
#q1_predicates_test = get_predicates_online(q1_nlp_test, top_k=2)
#print("q1_predicates_test",q1_predicates_test)
#q1_focused_parts_test = []
#q1_graph, q1_predicates_dict = build_graph(q1_nlp_test, q1_themes_test, q1_themes_enhanced_test, q1_predicates_test, deep_k=3, context_graph=context_graph_1)
#print("q1_predicates_dict",q1_predicates_dict)
#plot_graph(q1_graph, "file_name_graph", "Graph_title")
#plot_graph(answer_1[-1], "file_name_graph", "Graph_title")
# In[20]:
def merge_lists(list_1, list_2):
if len(list_1) == len(list_2):
return [(list_1[i], list_2[i]) for i in range(0, len(list_1))]
else:
        return "Error: lists are not the same length"
#print(merge_lists(["author"],['P50']))
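On equal-length inputs `merge_lists` is equivalent to the built-in `zip`; a standalone check of that equivalence (the helper `merge_pairs` is re-declared here so the snippet runs on its own):

```python
def merge_pairs(list_1, list_2):
    # Same behaviour as merge_lists above for equal-length inputs.
    if len(list_1) != len(list_2):
        raise ValueError("lists are not the same length")
    return list(zip(list_1, list_2))
```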
# In[21]:
def get_themes_ids_from_chunks(noun_chunks, top_k=3, online=False):
if online:
theme_ids = [get_wd_ids_online(chunk.text, top_k=top_k)+get_wd_ids(chunk.text, top_k=top_k) for chunk in noun_chunks]
else:
theme_ids = [get_wd_ids(chunk.text, top_k=top_k) for chunk in noun_chunks]
return theme_ids
def get_themes(nlp_question, raw_question, top_k=3, online=False, max_title_size=5):
#nlp_raw_question = get_nlp(raw_question, autocorrect=False)
#nlp_raw_question_lower = get_nlp(raw_question.lower(), autocorrect=False)
#nlp_raw_question_captialize = get_nlp(" ".join([w.capitalize() for w in raw_question.split(" ")]), autocorrect=False)
#nlp_raw_question_list = list(nlp_raw_question)
#print([e for e in nlp_raw_question])
#print([e for e in nlp_raw_question_lower])
#print([e for e in nlp_raw_question_captialize])
special_words = [w for w in raw_question.lower().split(" ") if w not in nlp_question.text.lower().split()]
special_words_capitalize = [w.capitalize() for w in special_words]
#special_words_capitalize_per_2 = it.permutations(special_words_capitalize,2)
#print("special_words",special_words)
#print("special_words_capitalize",special_words_capitalize)
special_words_titles = []
for per_size in range(2,len(special_words_capitalize)+1):
if per_size > max_title_size:
break
special_words_titles += [" ".join(p) for p in it.permutations(special_words_capitalize,per_size)]
#print("special_words_titles",special_words_titles)
#print(nlp_raw_question_list)
# PART1: finding themes as the user typed it
filter_list = ["PART", "PRON", "NUM"]
nlp_list_src = list(nlp_question)
nlp_list = []
for w in nlp_question:
if w.pos_ not in filter_list:
nlp_list.append(w)
nlp_question = get_nlp(" ".join([e.text for e in nlp_list]))
themes = [(ent, [ent.kb_id_]) for ent in get_kb_ents(nlp_question.text) if ent.kb_id_ != "NIL"]
#print("1 themes",themes)
for w in special_words+special_words_capitalize+special_words_titles:
themes += [(ent, [ent.kb_id_]) for ent in get_kb_ents(w) if ent.kb_id_ != "NIL"]
#print("2 themes",themes)
theme_complements = []
noun_chunks = [chunk for chunk in nlp_question.noun_chunks]
#print("1 noun_chunks",noun_chunks)
for w in special_words+special_words_capitalize+special_words_titles:
noun_chunks += [chunk for chunk in get_nlp(w, autocorrect=False).noun_chunks]
#print("2 noun_chunks",noun_chunks)
#theme_ids = [get_wd_ids(chunk.text, top_k=top_k) for chunk in noun_chunks][:top_k]
    theme_ids = get_themes_ids_from_chunks(noun_chunks, top_k=top_k, online=online)
for i, chunk in enumerate(theme_ids):
if chunk: themes.append((noun_chunks[i], chunk))
else: theme_complements.append(noun_chunks[i])
# PART2: finding themes with the question capitalized
#print(nlp_question)
nlp_list_cap = []
nlp_list_low = []
nlp_list_lemma = []
nlp_list_no_det = []
w_filter = ["WDT","WP","WP$","WRB"]
for w in nlp_question:
if w.tag_ not in w_filter:
nlp_list_cap.append(w.text.capitalize())
nlp_list_low.append(w.text.lower())
nlp_list_lemma.append(w.lemma_)
if w.pos_ != "DET":
nlp_list_no_det.append(w.text)
nlp_question_cap = get_nlp(" ".join([e for e in nlp_list_cap]))
nlp_question_low = get_nlp(" ".join([e for e in nlp_list_low]))
nlp_question_lemma = get_nlp(" ".join([e for e in nlp_list_lemma]))
nlp_question_no_det = get_nlp(" ".join([e for e in nlp_list_no_det]))
themes += [(ent, [ent.kb_id_]) for ent in get_kb_ents(nlp_question_cap.text) if ent.kb_id_ != "NIL" and (ent, [ent.kb_id_]) not in themes]
themes += [(ent, [ent.kb_id_]) for ent in get_kb_ents(nlp_question_low.text) if ent.kb_id_ != "NIL" and (ent, [ent.kb_id_]) not in themes]
themes += [(ent, [ent.kb_id_]) for ent in get_kb_ents(nlp_question_lemma.text) if ent.kb_id_ != "NIL" and (ent, [ent.kb_id_]) not in themes]
themes += [(ent, [ent.kb_id_]) for ent in get_kb_ents(nlp_question_no_det.text) if ent.kb_id_ != "NIL" and (ent, [ent.kb_id_]) not in themes]
if online:
themes += [(ent, get_wd_ids_online(ent.text, is_predicate=False, top_k=top_k)) for ent in get_kb_ents(nlp_question_cap.text)]
themes += [(ent, get_wd_ids_online(ent.text, is_predicate=False, top_k=top_k)) for ent in get_kb_ents(nlp_question_low.text)]
themes += [(ent, get_wd_ids_online(ent.text, is_predicate=False, top_k=top_k)) for ent in get_kb_ents(nlp_question_lemma.text)]
themes += [(ent, get_wd_ids_online(ent.text, is_predicate=False, top_k=top_k)) for ent in get_kb_ents(nlp_question_no_det.text)]
noun_chunks = []
previous_title_position = 0
for i_t,t in enumerate(nlp_question):
tmp_row = []
if i_t > previous_title_position:
if t.is_title:
for i_p in range(previous_title_position,i_t+1):
tmp_row.append(nlp_question[i_p])
noun_chunks.append(get_nlp(" ".join([w.text for w in tmp_row])))
if t.is_title:
previous_title_position = i_t
noun_chunks += [chunk for chunk in nlp_question_cap.noun_chunks]
#theme_ids = [get_wd_ids(chunk.text, top_k=top_k) for chunk in noun_chunks][:top_k]
    theme_ids = get_themes_ids_from_chunks(noun_chunks, top_k=top_k, online=online)
for i, chunk in enumerate(theme_ids):
if chunk: themes.append((noun_chunks[i], chunk))
else: theme_complements.append(noun_chunks[i])
noun_chunks = [chunk for chunk in nlp_question_low.noun_chunks]
#theme_ids = [get_wd_ids(chunk.text, top_k=top_k) for chunk in noun_chunks][:top_k]
    theme_ids = get_themes_ids_from_chunks(noun_chunks, top_k=top_k, online=online)
for i, chunk in enumerate(theme_ids):
if chunk: themes.append((noun_chunks[i], chunk))
else: theme_complements.append(noun_chunks[i])
noun_chunks = [chunk for chunk in nlp_question_lemma.noun_chunks]
#theme_ids = [get_wd_ids(chunk.text, top_k=top_k) for chunk in noun_chunks][:top_k]
    theme_ids = get_themes_ids_from_chunks(noun_chunks, top_k=top_k, online=online)
for i, chunk in enumerate(theme_ids):
if chunk: themes.append((noun_chunks[i], chunk))
else: theme_complements.append(noun_chunks[i])
noun_chunks = [chunk for chunk in nlp_question_no_det.noun_chunks]
#theme_ids = [get_wd_ids(chunk.text, top_k=top_k) for chunk in noun_chunks][:top_k]
    theme_ids = get_themes_ids_from_chunks(noun_chunks, top_k=top_k, online=online)
for i, chunk in enumerate(theme_ids):
if chunk: themes.append((noun_chunks[i], chunk))
else: theme_complements.append(noun_chunks[i])
themes_filtered = []
for t in themes:
if t[0].text in [tf[0].text for tf in themes_filtered]:
index = [tf[0].text for tf in themes_filtered].index(t[0].text)
tmp = t[1]+[i for j in [tf[1] for index, tf in enumerate(themes_filtered) if tf[0].text == t[0].text] for i in j]
themes_filtered[index] = (t[0],tmp)
else:
themes_filtered.append(t)
# removing the same elments per rows and skipping already existing rows
unique_ids = []
themes_filtered_undupped = []
for tf in themes_filtered:
tmp_ids = []
for tfid in tf[1]:
if tfid not in unique_ids and tfid not in tmp_ids:
tfname = get_wd_label(tfid)
similarity = get_nlp(tfname).similarity(tf[0])
if similarity >= 0.95:
tmp_ids.append(tfid)
unique_ids.append(tfid)
if tmp_ids and tmp_ids not in [tfu[1] for tfu in themes_filtered_undupped]:
themes_filtered_undupped.append((tf[0],tmp_ids[:top_k]))
#for tf in themes_filtered:
# tmp_ids = []
# for tfid in tf[1]:
# if tfid not in tmp_ids:
# tmp_ids.append(tfid)
# if tmp_ids not in [tfu[1] for tfu in themes_filtered_undupped]:
# themes_filtered_undupped.append((tf[0],tmp_ids))
theme_complements_undupped = []
[theme_complements_undupped.append(tc) for tc in theme_complements if tc.text not in [tcu.text for tcu in theme_complements_undupped]]
#print(themes_filtered)
return themes_filtered_undupped, theme_complements_undupped
#q0_themes = get_themes(q0_nlp, top_k=3)
#q0_themes_test = get_themes(q0_nlp_test)
#q0_themes_test_2 = get_themes(q0_nlp_test_2)
#print(q0_themes)
#q_test_3 = get_nlp("the unicorn and the raccoons love obama barack's tacos")
#q_test_3_themes = get_themes(q_test_3, top_k=3)
#print(get_enhanced_themes(q_test_3_themes))
#print(q_test_3_themes)
#q_test_test = get_nlp("What is a tv action show?")
#q_test_test = get_nlp("Who voiced the Unicorn in The Last Unicorn")
#q_test_test = get_nlp("What is the name of the person who created Saved by the Bell?")
#q_test_test = get_nlp("When did the movie Grease come out?")
#q_test_question = "Who was an influential figure for miško Šuvaković"
#q_test_test = get_nlp(q_test_question,True)
#get_themes(q_test_test, q_test_question, top_k=3, online=True)
# In[22]:
BANNED_WORDS = ["..."]
def get_theme_tuples(theme_list, top_k=3, online=False):
tuples = [(t, get_wd_ids(t, top_k=top_k)) for t in theme_list if t not in BANNED_WORDS]
if online:
tuples += [(t, get_wd_ids_online(t, is_predicate=False, top_k=top_k)) for t in theme_list if t not in BANNED_WORDS]
return tuples
def get_theme_no_stopwords(theme_list):
return [s for s in theme_list if not s.is_stop]
def get_theme_lemmatized(theme_list):
return [s.lemma_ for s in theme_list]
def get_permutation_tuples(theme_list, start=2):
permutations = []
for i in range(start, len(theme_list)+1):
permutations += list(it.permutations(theme_list,i))
return permutations
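`get_permutation_tuples` collects `itertools.permutations` for every size from `start` up to the full list length; a quick standalone illustration (the name `permutations_from` is illustrative):

```python
import itertools as it

def permutations_from(items, start=2):
    # All orderings of every subset of size >= start, as in the helper above.
    out = []
    for size in range(start, len(items) + 1):
        out += list(it.permutations(items, size))
    return out
```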
def get_lemma_permutation_tuples(theme_list, start=2):
    return get_permutation_tuples(get_theme_lemmatized(theme_list), start=start)
def get_non_token_tuples(theme_list):
return [" ".join([e for e in list(l)]) for l in theme_list]
def get_non_token_lower_tuples(theme_list):
return [" ".join([e.lower() for e in list(l)]) for l in theme_list]
def get_non_token_capitalize_tuples(theme_list):
return [" ".join([c.capitalize() for c in [e for e in list(l)]]) for l in theme_list]
def get_text_tuples(theme_list):
return [" ".join([e.text for e in list(l)]) for l in theme_list]
def get_lower_tuples(theme_list):
return [" ".join([e.lower_ for e in list(l)]) for l in theme_list]
def get_capitalized_tuples(theme_list):
return [" ".join([c.capitalize() for c in [e.text for e in list(l)]]) for l in theme_list]
def get_enhanced_themes(themes, top_k=3, title_limit=5, aggressive=False, online=False):
enhanced_themes = []
# permute, capitalize, lowering of the words in the complements
for c in themes[1]:
if len(c) <= title_limit:
per_lemma = get_theme_tuples(get_non_token_tuples([n for n in get_permutation_tuples(get_theme_lemmatized(c))]),top_k, online=online)
[enhanced_themes.append(p) for p in per_lemma if p[1] and p not in enhanced_themes]
del per_lemma
per_nostop = get_theme_tuples(get_text_tuples(get_permutation_tuples(get_theme_no_stopwords(c),start=1)),top_k, online=online)
[enhanced_themes.append(p) for p in per_nostop if p[1] and p not in enhanced_themes]
del per_nostop
per_lemma_nostop = get_theme_tuples(get_non_token_tuples([get_theme_lemmatized(s) for s in get_permutation_tuples(get_theme_no_stopwords(c),start=1)]),top_k, online=online)
[enhanced_themes.append(p) for p in per_lemma_nostop if p[1] and p not in enhanced_themes]
del per_lemma_nostop
per_lemma_lower = get_theme_tuples(get_non_token_lower_tuples([n for n in get_permutation_tuples(get_theme_lemmatized(c))]),top_k, online=online)
[enhanced_themes.append(p) for p in per_lemma_lower if p[1] and p not in enhanced_themes]
del per_lemma_lower
            per_nostop_lower = get_theme_tuples(get_lower_tuples(get_permutation_tuples(get_theme_no_stopwords(c),start=1)),top_k, online=online)
[enhanced_themes.append(p) for p in per_nostop_lower if p[1] and p not in enhanced_themes]
del per_nostop_lower
per_lemma_nostop_lower = get_theme_tuples(get_non_token_lower_tuples([get_theme_lemmatized(s) for s in get_permutation_tuples(get_theme_no_stopwords(c),start=1)]),top_k, online=online)
[enhanced_themes.append(p) for p in per_lemma_nostop_lower if p[1] and p not in enhanced_themes]
del per_lemma_nostop_lower
per_lemma_capitalize = get_theme_tuples(get_non_token_capitalize_tuples([n for n in get_permutation_tuples(get_theme_lemmatized(c))]),top_k, online=online)
[enhanced_themes.append(p) for p in per_lemma_capitalize if p[1] and p not in enhanced_themes]
del per_lemma_capitalize
per_nostop_capitalize = get_theme_tuples(get_capitalized_tuples(get_permutation_tuples(get_theme_no_stopwords(c),start=1)),top_k, online=online)
[enhanced_themes.append(p) for p in per_nostop_capitalize if p[1] and p not in enhanced_themes]
del per_nostop_capitalize
per_lemma_nostop_capitalize = get_theme_tuples(get_non_token_capitalize_tuples([get_theme_lemmatized(s) for s in get_permutation_tuples(get_theme_no_stopwords(c),start=1)]),top_k, online=online)
[enhanced_themes.append(p) for p in per_lemma_nostop_capitalize if p[1] and p not in enhanced_themes]
per = get_theme_tuples(get_text_tuples(get_permutation_tuples(c)),top_k, online=online)
[enhanced_themes.append(p) for p in per if p[1] and p not in enhanced_themes]
del per
per_lower = get_theme_tuples(get_lower_tuples(get_permutation_tuples(c)),top_k, online=online)
[enhanced_themes.append(p) for p in per_lower if p[1] and p not in enhanced_themes]
del per_lower
per_capitalize = get_theme_tuples(get_capitalized_tuples(get_permutation_tuples(c)),top_k, online=online)
[enhanced_themes.append(p) for p in per_capitalize if p[1] and p not in enhanced_themes]
del per_capitalize
if aggressive:
predicates = []
[predicates.append(get_wd_label(pred)) for pred in sum([p[1] for p in themes[0]],[]) if get_wd_label(pred) not in predicates]
predicates_ids = [get_wd_ids_online(p, is_predicate=True, top_k=top_k) for p in predicates]
predicated_themes = merge_lists(predicates, predicates_ids)
predicated_themes = [pt for pt in predicated_themes if pt[1] != '']
if predicates: enhanced_themes += predicated_themes
#print("themes[0]",[t[0].text for t in themes[0]])
#print("themes[0].lower()",[t[0].text.lower() for t in themes[0]])
enhanced_themes_filtered = []
for et in enhanced_themes:
if not et[0] in [t[0].text for t in themes[0]]:
#print("et not in themes",et)
#print(len(themes[0]))
#print([t[0].text.find(et[0]) for t in themes[0]].count(-1))
if len([t for t in themes[0] if t[0].text.find(et[0]) == -1]) < len(themes[0]) or not et[1]:
continue
if et[0] in [e[0] for e in enhanced_themes_filtered]:
index_et = [e[0] for e in enhanced_themes_filtered].index(et[0])
if index_et != -1:
enhanced_themes_filtered[index_et] = (et[0], enhanced_themes_filtered[index_et][1]+et[1]) #.append((et[0],et[1][:top_k]))
else: enhanced_themes_filtered.append((et[0],et[1][:top_k]))
#elif et[0] not in :
# print("et unknown",et)
else:
enhanced_themes_filtered.append((et[0],et[1][:top_k]))
return enhanced_themes_filtered
#q_test_3 = get_nlp("Which genre of album is harder.....faster?",autocorrect=True)
#q_test_3 = get_nlp("the unicorn and the raccoons love obama barack's tacos")
#q_test_3 = get_nlp("what was the cause of death of yves klein")
#q_test_3 = get_nlp("Who is the author that wrote the book Moby Dick")
#q_test_3_themes = get_themes(q_test_3, top_k=3)
#print(q_test_3_themes[0])
#print(get_enhanced_themes(q_test_3_themes, aggressive=False))
#print(get_enhanced_themes(q_test_3_themes, aggressive=True))
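# The enhanced_themes_filtered loop above merges candidate id lists that share
# the same surface form; a hedged, self-contained sketch of that merge step
# (here each merged entry is also capped at top_k ids, a simplification):

```python
def merge_theme_candidates(candidates, top_k=3):
    """Merge (name, ids) tuples by name, keeping at most top_k ids per name."""
    merged = []
    for name, ids in candidates:
        names = [m[0] for m in merged]
        if name in names:
            i = names.index(name)
            merged[i] = (name, (merged[i][1] + ids)[:top_k])
        else:
            merged.append((name, ids[:top_k]))
    return merged

# Qxx ids are illustrative, not real Wikidata values.
print(merge_theme_candidates([("unicorn", ["Q1", "Q2"]), ("unicorn", ["Q3", "Q4"])], top_k=3))
```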
# In[23]:
def get_predicates_online(nlp_sentence, top_k=3, aggressive=False):
    PASSIVE_VERBS = ["be"]
    AGGRESSIVE_FILTER = ["VERB","AUX","NOUN","ADJ"]
    if aggressive: predicates = [p for p in nlp_sentence if p.pos_ in AGGRESSIVE_FILTER]
    else: predicates = [p for p in nlp_sentence if p.pos_ == "VERB" or p.pos_ == "AUX"]
    if len(predicates) == 1:
        if predicates[0].lemma_ in PASSIVE_VERBS:
            predicates += [p for p in nlp_sentence if p.pos_ in AGGRESSIVE_FILTER if p not in predicates]
predicates_filtered = []
for p in predicates:
if p.lemma_ in PASSIVE_VERBS:
p = get_nlp(p.lemma_)[0]
if len(predicates_filtered) == 0:
predicates_filtered.append(p)
if p.text not in [p.text for p in predicates_filtered]:
predicates_filtered.append(p)
predicates_ids = []
for i_p, p in enumerate(predicates_filtered):
if p.lemma_ == "be":
predicates_ids.append(get_wd_ids_online("is", is_predicate=True, top_k=top_k)[:1])
else:
p_id = get_wd_ids_online(p.text, is_predicate=True, top_k=top_k)
if not p_id:
p_id = get_wd_ids_online(p.lemma_, is_predicate=True, top_k=top_k)
if not p_id:
similar_words = [w[0] for w in get_most_similar(p.lemma_, top_k=top_k)]
for sw in similar_words:
if not p_id:
p_id = get_wd_ids_online(sw, is_predicate=True, top_k=top_k)
predicates_ids.append(p_id[:top_k])
return merge_lists(predicates_filtered, predicates_ids)
#q_test = get_nlp("Who voiced the Unicorn in The Last Unicorn")
#q_test = get_nlp("Of what nationality is Ken McGoogan")
#q_test = get_nlp("Which have the nation of Martha Mattox")
#q_test = get_nlp("what city was alex golfis born in")
#q_test = get_nlp("who's born in city was alex golfis born in")
#q_test = get_nlp("what's the name fo the wife of my dads")
#start_time = time.time()
#q_test = get_nlp("Where did roger marquis die")
#print(get_predicates_online(q_test, top_k=2, aggressive=False))
#print("it was:",time.time()-start_time)
#q0_predicates_test_2 = get_predicates_online(q0_nlp_test_2, top_k=3, aggressive=True)
# In[24]:
def get_predicates(nlp_sentence, themes=False, top_k=0):
PASSIVE_VERBS = ["be"]
predicates = [p for p in nlp_sentence if p.pos_ == "VERB" or p.pos_ == "AUX"]
#for i_p, p in enumerate(predicates):
# if p.text == "\'s":
# predicates[i_p] = get_nlp("is")[0]
# if p.text == "\'re":
# predicates[i_p] = get_nlp("are")[0]
if themes:
for t in themes[0]:
for e in t[1]:
if is_wd_predicate(e):
predicates.append(t[0])
predicates_filtered = []
for p in predicates:
if p.lemma_ in PASSIVE_VERBS:
p = get_nlp(p.lemma_)[0]
if len(predicates_filtered) == 0:
predicates_filtered.append(p)
if p.text not in [p.text for p in predicates_filtered]:
predicates_filtered.append(p)
predicates_ids = []
for i_p, p in enumerate(predicates_filtered):
if p.lemma_ in PASSIVE_VERBS:
predicates_ids.append(get_wd_ids(p.lemma_, is_predicate=True, top_k=top_k, limit=0)[:1])
else:
predicates_ids.append(get_wd_ids(p.text, is_predicate=True, top_k=top_k, limit=0)[:top_k])
#predicates_ids = [ for p in predicates_filtered]
return merge_lists(predicates_filtered, predicates_ids)
#q_test = get_nlp("Who voiced the Unicorn in The Last Unicorn")
#q_test = get_nlp("Of what nationality is Ken McGoogan")
#q_test = get_nlp("Where did roger marquis die")
#q_test = get_nlp("who's born in city was alex golfis born in")
#get_predicates(q_test)
#q_test_themes = get_themes(q_test)
#get_predicates(q_test, q_test_themes, top_k=3)
#q0_nlp_test_0 = get_nlp("Voiced")
#q0_predicates = get_predicates(q0_nlp, top_k=3)
#q0_predicates_test_2 = get_predicates(q0_nlp_test_2, top_k=3)
#print(q0_predicates)
# In[25]:
def extract_ids(to_extract):
return [i for i in it.chain.from_iterable([id[1] for id in to_extract])]
#extract_ids([('name', ['id'])]) #q0_themes[0] #q0_focused_parts #q0_predicates
#print(extract_ids([("The Last Unicorn", ['Q16614390']),("Second Theme", ['Q12345'])]))
#extract_ids(q0_focused_parts)
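# extract_ids flattens the id lists out of (name, ids) tuples with
# itertools.chain.from_iterable; a self-contained copy of the same idea:

```python
import itertools as it

def extract_ids_demo(pairs):
    """Flatten [('name', [ids...]), ...] into one id list, preserving order."""
    return list(it.chain.from_iterable(ids for _, ids in pairs))

print(extract_ids_demo([("The Last Unicorn", ["Q16614390"]), ("Second Theme", ["Q12345", "Q6"])]))
```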
# In[26]:
def get_similarity_by_words(nlp_word_from, nlp_word_to):
if not nlp_word_from or not nlp_word_to:
return 0
elif not nlp_word_from.vector_norm or not nlp_word_to.vector_norm:
return 0
else:
return nlp_word_from.similarity(nlp_word_to)
#print(get_similarity_by_words(get_nlp("character role"), get_nlp("voice actor")))
# In[27]:
def get_similarity_by_ids(word_id_from, word_id_to):
nlp_word_from = get_nlp(get_wd_label(word_id_from))
nlp_word_to = get_nlp(get_wd_label(word_id_to))
return get_similarity_by_words(nlp_word_from, nlp_word_to)
#print(get_similarity_by_ids("P453", "P725"))
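# get_similarity_by_words guards against a zero vector_norm before calling
# spaCy's cosine similarity; a pure-Python sketch of the same guard (no spaCy
# model assumed, vectors are plain lists):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity with the same zero-norm guard as get_similarity_by_words."""
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    if not norm_u or not norm_v:
        return 0  # out-of-vocabulary words have all-zero vectors
    return sum(a * b for a, b in zip(u, v)) / (norm_u * norm_v)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 0.0]))  # zero vector -> 0
```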
# In[28]:
def get_top_similar_statements(statements, from_token_id, similar_to_name, top_k=0, qualifier=False, statement_type="object", time_sentitive=False):
highest_matching_similarity = -1
top_statements = []
nlp_name = get_nlp(similar_to_name)
#print("get_top_similar_statements from_token_id",from_token_id)
if get_wd_label(from_token_id):
for statement in statements:
if top_k>0:
if qualifier:
                    if statement.get('qualifiers'):
                        for q in statement['qualifiers']:
                            if time_sentitive and is_timestamp(q[statement_type]['id']):
                                top_statements.append((1, statement))
                            else:
                                nlp_word_to = get_nlp(get_wd_label(q[statement_type]['id']))
                                matching_similarity = get_similarity_by_words(nlp_name, nlp_word_to)
                                top_statements.append((matching_similarity, statement))
else:
if time_sentitive and is_timestamp(statement[statement_type]['id']):
top_statements.append((1, statement))
else:
nlp_word_to = get_nlp(get_wd_label(statement[statement_type]['id']))
matching_similarity = get_similarity_by_words(nlp_name, nlp_word_to)
top_statements.append((matching_similarity, statement))
else:
if qualifier:
if statement.get('qualifiers'):
                        for q in statement['qualifiers']:
                            if time_sentitive and is_timestamp(q[statement_type]['id']):
                                top_statements.append((1, statement))
                            else:
                                nlp_word_to = get_nlp(get_wd_label(q[statement_type]['id']))
matching_similarity = get_similarity_by_words(nlp_name, nlp_word_to)
if highest_matching_similarity == -1 or matching_similarity >= highest_matching_similarity:
highest_matching_similarity = matching_similarity
best_statement = statement
top_statements.append((highest_matching_similarity, best_statement))
else:
if time_sentitive and is_timestamp(statement[statement_type]['id']):
top_statements.append((1, statement))
else:
nlp_word_to = get_nlp(get_wd_label(statement[statement_type]['id']))
matching_similarity = get_similarity_by_words(nlp_name, nlp_word_to)
if highest_matching_similarity == -1 or matching_similarity >= highest_matching_similarity:
highest_matching_similarity = matching_similarity
best_statement = statement
top_statements.append((highest_matching_similarity, best_statement))
if top_k > 0:
return sorted(top_statements, key=lambda x: x[0], reverse=True)[:top_k]
else:
return sorted(top_statements, key=lambda x: x[0], reverse=True)
#statements = get_all_statements_of_entity('Q503992')
#top_similar_statements = get_top_similar_statements(statements, 'Q267721', 'western')
#print(top_similar_statements)
# In[29]:
def get_best_similar_statements_by_word_worker(in_mp_queue, out_mp_queue, top_k, qualifier, statement_type, time_sentitive):
sentinel = None
best_statements = []
for token,similar_to_name in iter(in_mp_queue.get, sentinel):
# print("working on",token,similar_to_name)
statements = get_all_statements_of_entity(token)
if statements: best_statements += get_top_similar_statements(statements, token, similar_to_name, top_k=top_k, qualifier=qualifier, statement_type=statement_type, time_sentitive=time_sentitive)
# print("done with",token,similar_to_name)
out_mp_queue.put(best_statements)
# In[30]:
def get_best_similar_statements_by_word(from_token_ids, similar_to_name, top_k=3, qualifier=False, statement_type="object", time_sentitive=False, cores=1):
if not similar_to_name:
return []
best_statements = []
if cores > 1:
out_mp_queue = mp.Queue()
in_mp_queue = mp.Queue()
sentinel = None
for token in from_token_ids:
in_mp_queue.put((token,similar_to_name))
procs = [mp.Process(target = get_best_similar_statements_by_word_worker, args = (in_mp_queue, out_mp_queue, top_k, qualifier, statement_type, time_sentitive)) for i in range(cores)]
for proc in procs:
proc.daemon = True
proc.start()
for proc in procs:
in_mp_queue.put(sentinel)
for proc in procs:
best_statements += out_mp_queue.get()
for proc in procs:
proc.join()
else:
for token in from_token_ids:
statements = get_all_statements_of_entity(token)
if statements: best_statements += get_top_similar_statements(statements, token, similar_to_name, top_k=top_k, qualifier=qualifier, statement_type=statement_type, time_sentitive=time_sentitive)
#print("best_statements",best_statements)
return sorted(best_statements, key=lambda x: x[0], reverse=True)
#best_similar_statements = get_best_similar_statements_by_word(extract_ids(q0_themes[0]), 'voiced', top_k=3, qualifier=True, statement_type="qualifier_object")
#print(best_similar_statements[0])
#init_clusters = cluster_extend_by_words(theme_ids, [p[0].text for p in q_predicates+predicates_enhanced], top_k=deep_k, time_sentitive=time_sensitive,cores=2)
# In[31]:
def get_statements_subjects_labels(statements):
return [get_wd_label(t[1]['entity']['id']) for t in statements]
#print(get_statements_subjects_labels(best_similar_statements))
# In[32]:
def get_statements_predicates_labels(statements):
return [get_wd_label(t[1]['predicate']['id']) for t in statements]
#print(get_statements_predicates_labels(best_similar_statements))
# In[33]:
def get_statements_objects_labels(statements):
return [get_wd_label(t[1]['object']['id']) for t in statements]
#print(get_statements_objects_labels(best_similar_statements))
# In[34]:
def get_statements_qualifier_predicates_labels(statements):
return [get_wd_label(t[1]['qualifiers'][0]['qualifier_predicate']['id']) for t in statements]
#print(get_statements_qualifier_predicates_labels(best_similar_statements))
# In[35]:
def get_statements_qualifier_objects_labels(statements):
return [get_wd_label(t[1]['qualifiers'][0]['qualifier_object']['id']) for t in statements]
#print(get_statements_qualifier_objects_labels(best_similar_statements))
# In[36]:
def cluster_extend_by_words_worker(in_mp_queue, out_mp_queue, top_k, time_sentitive, cores):
sentinel = None
cluster = []
for cluster_root_ids,name in iter(in_mp_queue.get, sentinel):
cluster += get_best_similar_statements_by_word(cluster_root_ids, name, top_k=top_k, qualifier=True, statement_type="qualifier_predicate", time_sentitive=time_sentitive,cores=1)
cluster += get_best_similar_statements_by_word(cluster_root_ids, name, top_k=top_k, qualifier=True, statement_type="qualifier_object", time_sentitive=time_sentitive,cores=1)
cluster += get_best_similar_statements_by_word(cluster_root_ids, name, top_k=top_k, qualifier=False, statement_type="predicate", time_sentitive=time_sentitive,cores=1)
cluster += get_best_similar_statements_by_word(cluster_root_ids, name, top_k=top_k, qualifier=False, statement_type="object", time_sentitive=time_sentitive,cores=1)
out_mp_queue.put(cluster)
# In[37]:
def cluster_extend_by_words(cluster_root_ids, extending_words, top_k=3, time_sentitive=False,cores=mp.cpu_count()):
if not cluster_root_ids or not extending_words:
return []
cluster = []
if cores <= 0: cores = 1
out_mp_queue = mp.Queue()
in_mp_queue = mp.Queue()
sentinel = None
for name in extending_words:
in_mp_queue.put((cluster_root_ids,name))
procs = [mp.Process(target = cluster_extend_by_words_worker, args = (in_mp_queue, out_mp_queue, top_k, time_sentitive, cores)) for i in range(cores)]
for proc in procs:
proc.daemon = True
proc.start()
for proc in procs:
in_mp_queue.put(sentinel)
for proc in procs:
cluster += out_mp_queue.get()
for proc in procs:
proc.join()
return sorted(cluster, key=lambda x: x[0], reverse=True)
#start_time = time.time()
#for name in extending_words:
# #start_cluster_time = time.time()
# cluster += get_best_similar_statements_by_word(cluster_root_ids, name, top_k=top_k, qualifier=True, statement_type="qualifier_predicate", time_sentitive=time_sentitive,cores=cores)
# cluster += get_best_similar_statements_by_word(cluster_root_ids, name, top_k=top_k, qualifier=True, statement_type="qualifier_object", time_sentitive=time_sentitive,cores=cores)
# cluster += get_best_similar_statements_by_word(cluster_root_ids, name, top_k=top_k, qualifier=False, statement_type="predicate", time_sentitive=time_sentitive,cores=cores)
# cluster += get_best_similar_statements_by_word(cluster_root_ids, name, top_k=top_k, qualifier=False, statement_type="object", time_sentitive=time_sentitive,cores=cores)
# #end_time = time.time()
# #print("EXTENDING Cluster with:", name," ->\tRunning time is {}s".format(round(end_time-start_cluster_time,2)))
##end_time = time.time()
##print("EXTENDING Clusters ->\tRunning time is {}s".format(round(end_time-start_time,2)))
#test_cluster = cluster_extend_by_words(extract_ids(q0_themes[0]), ['voiced'], top_k=2)
#test_cluster_test_2 = cluster_extend_by_words(extract_ids(q0_themes_test_2[0]), ['birth'], top_k=2)
#print(test_cluster[0])
#start_time = time.time()
#init_clusters = cluster_extend_by_words(theme_ids, [p[0].text for p in q_predicates+predicates_enhanced], top_k=deep_k, time_sentitive=time_sensitive,cores=mp.cpu_count())
#print("timer",time.time()-start_time)
# In[38]:
# sorts statements by their similarity score (the first element of each tuple)
def sort_statements_by_similarity(statements):
    return sorted(statements, key=lambda x: x[0], reverse=True)
#test_sorted_statements = sort_statements_by_similarity(test_cluster)
#test_sorted_statements_test_2 = sort_statements_by_similarity(test_cluster_test_2)
#print(test_sorted_statements[0])
# In[39]:
# appends spo from qualifiers, removes qualifier tags, and removes similarity scores
def statements_flatter(statements):
best_statements_to_graph = []
for statement in statements:
tmp_statement = copy(statement)
if tmp_statement.get('qualifiers'):
#print("statement", statement)
for q in tmp_statement['qualifiers']:
qualifier_statement = {'entity': {'id': tmp_statement['entity']['id']}}
qualifier_statement['predicate'] = {'id': q['qualifier_predicate']['id']}
qualifier_statement['object'] = {'id': q['qualifier_object']['id']}
best_statements_to_graph.append(qualifier_statement)
del(tmp_statement['qualifiers'])
else:
#print("tmp_statement", tmp_statement)
if ('qualifiers' in tmp_statement): del(tmp_statement['qualifiers'])
if tmp_statement not in best_statements_to_graph:
#print("best_statements_to_graph", tmp_statement)
best_statements_to_graph.append(tmp_statement)
return best_statements_to_graph
#test_flatten_statements = statements_flatter([s[1] for s in test_sorted_statements])
#test_flatten_statements_test_2 = statements_flatter([s[1] for s in test_sorted_statements_test_2])
#print(test_flatten_statements[0])
#test_flatten_statements_test_2
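# statements_flatter turns each qualifier into its own subject-predicate-object
# triple; a self-contained run of that step on a hand-made statement dict
# (the ids are illustrative, not real Wikidata values):

```python
from copy import copy

def flatten_statement(statement):
    """Expand one statement's qualifiers into plain s-p-o dicts, as statements_flatter does."""
    flat = []
    tmp = copy(statement)
    if tmp.get('qualifiers'):
        for q in tmp['qualifiers']:
            flat.append({'entity': {'id': tmp['entity']['id']},
                         'predicate': {'id': q['qualifier_predicate']['id']},
                         'object': {'id': q['qualifier_object']['id']}})
        del tmp['qualifiers']
    else:
        tmp.pop('qualifiers', None)
        flat.append(tmp)
    return flat

s = {'entity': {'id': 'Q1'}, 'predicate': {'id': 'P1'}, 'object': {'id': 'Q2'},
     'qualifiers': [{'qualifier_predicate': {'id': 'P453'}, 'qualifier_object': {'id': 'Q3'}}]}
print(flatten_statement(s))
```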
# In[40]:
# remove duplicates from statements
def unduplicate_statements(statements):
    filtered_statements = []
    for s in statements:
        if s not in filtered_statements:
            filtered_statements.append(s)
    return filtered_statements
#test_unduplicate_statements = unduplicate_statements(test_flatten_statements)
#print(len(test_flatten_statements))
#print(len(test_unduplicate_statements))
#print(test_unduplicate_statements[0])
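# Statements are dicts, which are unhashable, so unduplicate_statements dedupes
# with membership tests rather than a set; a minimal self-contained version:

```python
def dedupe(statements):
    """Order-preserving de-duplication for unhashable items such as dicts."""
    seen = []
    for s in statements:
        if s not in seen:
            seen.append(s)
    return seen

print(dedupe([{'id': 'Q1'}, {'id': 'Q2'}, {'id': 'Q1'}]))
```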
# In[41]:
def get_statements_by_id(statements, from_token_id, to_id, qualifier=False, statement_type="predicate"):
id_statements = []
if not statements:
return id_statements
if get_wd_label(from_token_id):
for statement in statements:
if qualifier:
if statement.get('qualifiers'):
for s in statement['qualifiers']:
if to_id == s[statement_type]['id']:
id_statements.append(statement)
else:
if to_id == statement[statement_type]['id']:
id_statements.append(statement)
return id_statements
#statements_test = get_all_statements_of_entity('Q176198')
#id_statements_test = get_statements_by_id(statements_test, 'Q176198', 'P725')
#print(id_statements_test[0])
#get_statements_by_id(root_statements, cluster_root_id, predicate_id, qualifier=False, statement_type="predicate")
#statements_test = get_all_statements_of_entity('Q176198')
#id_statements_test = get_statements_by_id(statements_test, 'Q176198', 'P725')
#id_statements_test[0]
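# get_statements_by_id keeps only the statements whose predicate (or qualifier
# predicate) matches a target id; the qualifier=False path, run on toy
# statements with illustrative ids:

```python
def statements_with_predicate(statements, predicate_id):
    """Select statements whose top-level predicate id matches the target."""
    return [s for s in statements if s['predicate']['id'] == predicate_id]

toy = [
    {'predicate': {'id': 'P725'}, 'object': {'id': 'Q1'}},
    {'predicate': {'id': 'P19'}, 'object': {'id': 'Q2'}},
]
print(statements_with_predicate(toy, 'P725'))
```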
# In[43]:
def cluster_extend_by_predicates_ids_worker(in_mp_queue, out_mp_queue):
sentinel = None
cluster = []
for cluster_root_id, predicate_id in iter(in_mp_queue.get, sentinel):
root_statements = get_all_statements_of_entity(cluster_root_id)
if root_statements:
cluster += get_statements_by_id(root_statements, cluster_root_id, predicate_id, qualifier=True, statement_type="qualifier_predicate")
cluster += get_statements_by_id(root_statements, cluster_root_id, predicate_id, qualifier=False, statement_type="predicate")
out_mp_queue.put(cluster)
# In[44]:
# parameters
# cluster_root_ids: ['Qcode']
# predicates_ids: ['Pcode']
def cluster_extend_by_predicates_ids(cluster_root_ids, predicates_ids, cores=mp.cpu_count()):
if not cluster_root_ids or not predicates_ids:
return []
cluster = []
if cores <= 0: cores = 1
out_mp_queue = mp.Queue()
in_mp_queue = mp.Queue()
sentinel = None
for cluster_root_id, predicate_id in it.product(cluster_root_ids, predicates_ids):
#print((cluster_root_id, predicates_id))
in_mp_queue.put((cluster_root_id, predicate_id))
procs = [mp.Process(target = cluster_extend_by_predicates_ids_worker, args = (in_mp_queue, out_mp_queue)) for i in range(cores)]
for proc in procs:
proc.daemon = True
proc.start()
for proc in procs:
in_mp_queue.put(sentinel)
for proc in procs:
cluster += out_mp_queue.get()
for proc in procs:
proc.join()
#for cluster_root_id in cluster_root_ids:
# root_statements = get_all_statements_of_entity(cluster_root_id)
# #print("root_statements", root_statements)
# for predicate_id in predicates_ids:
# cluster += get_statements_by_id(root_statements, cluster_root_id, predicate_id, qualifier=True, statement_type="qualifier_predicate")
# cluster += get_statements_by_id(root_statements, cluster_root_id, predicate_id, qualifier=False, statement_type="predicate")
return cluster #sorted(cluster, key=lambda x: x[0], reverse=True)
#test_predicate_clusters = cluster_extend_by_predicates_ids(extract_ids(q0_themes[0]), extract_ids(q0_predicates))
#print(len(test_predicate_clusters))
#test_predicate_clusters[0]
#test_predicate_clusters_test_2 = cluster_extend_by_predicates_ids(extract_ids(q0_themes_test_2[0]), extract_ids(q0_predicates_test_2))
#print(len(test_predicate_clusters_test_2))
#print(test_predicate_clusters_test_2[-1])
#predicate_ids_clusters = cluster_extend_by_predicates_ids(theme_ids, predicates_ids+predicates_enhanced_ids)
#print(predicate_ids_clusters)
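# cluster_extend_by_predicates_ids queues every (root, predicate) pair via
# itertools.product; the pairing itself, self-contained (ids illustrative):

```python
import itertools as it

roots = ['Q176198', 'Q267721']
predicates = ['P725', 'P453']
pairs = list(it.product(roots, predicates))
print(pairs)  # every root paired with every predicate
```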
# In[45]:
def cluster_extractor_from_complements(complements):
for c in complements:
[print(t.pos_) for t in c]
return complements
#print(cluster_extractor_from_complements(q0_themes[1]))
# In[46]:
#TODO: add cache
#TODO: Check if extending with predicate_ids is useful
# parameter
# question: nlp_string
#limits=plt.axis('off')
def build_graph(nlp, themes, themes_enhanced, predicates, deep_k=3, time_sensitive = False, cores=mp.cpu_count(), context_graph=False, aggressive=False):
#print("time_sensitive",time_sensitive)
#start_time = time.time()
theme_ids = extract_ids(themes[0])
theme_enhanced_ids = extract_ids(themes_enhanced)
predicates_ids = extract_ids(predicates)
predicates_enhanced_ids = [p for p in theme_enhanced_ids if is_wd_predicate(p)]
predicates_enhanced = merge_lists([get_nlp(get_wd_label(p)) for p in predicates_enhanced_ids], predicates_enhanced_ids)
previous_predicates_ids = False
previous_entities_ids = False
if context_graph:
previous_predicates_ids = [x for x,y in context_graph.nodes(data=True) if y["type"]=="predicate"]
previous_entities_ids = [x for x,y in context_graph.nodes(data=True) if y["type"]=="entity"]
    theme_enhanced_ids = [tei for tei in theme_enhanced_ids if tei not in theme_ids]
if not aggressive and context_graph:
        print("build_graph - not aggressive and context graph present")
theme_ids = previous_entities_ids
init_clusters = cluster_extend_by_words(theme_ids, [p[0].text for p in predicates+predicates_enhanced], top_k=deep_k, time_sentitive=time_sensitive, cores=cores)
#print("init_clusters",len(init_clusters))
init_clusters_enhanced = cluster_extend_by_words(theme_enhanced_ids, [p[0].text for p in predicates+predicates_enhanced], top_k=deep_k, time_sentitive=time_sensitive, cores=cores)
#print("init_clusters_enhanced",len(init_clusters_enhanced))
init_sorted_statements = sort_statements_by_similarity(init_clusters + init_clusters_enhanced)
#print("init_sorted_statements",len(init_sorted_statements))
init_flatten_statements = statements_flatter([s[1] for s in init_sorted_statements])
#print("init_flatten_statements",len(init_flatten_statements))
predicate_ids_clusters = cluster_extend_by_predicates_ids(theme_ids, predicates_ids+predicates_enhanced_ids, cores=cores)
#print("predicate_ids_clusters",len(predicate_ids_clusters))
predicate_ids_enhanced_clusters = cluster_extend_by_predicates_ids(theme_enhanced_ids, predicates_ids+predicates_enhanced_ids, cores=cores)
#print("predicate_ids_enhanced_clusters",len(predicate_ids_enhanced_clusters))
predicate_ids_flatten_statements = statements_flatter(predicate_ids_clusters+predicate_ids_enhanced_clusters)
#print("predicate_ids_flatten_statements",len(predicate_ids_flatten_statements))
clusters = init_flatten_statements+predicate_ids_flatten_statements
filtered_statements = unduplicate_statements(clusters)
#print(predicate_ids_enhanced_clusters)
graph = make_statements_graph(filtered_statements, cores=cores, context_graph=context_graph)
#print([get_wd_label(e) for e in g.nodes] )
##print("clusters:", len(clusters))
##print("filtered_statements:", len(filtered_statements))
#end_time = time.time()
#print("->\tRunning time is {}s".format(round(end_time-start_time,2)))
return graph
#q0_test = questions[0]
#q0_test = "Which actor voiced the Unicorn in The Last Unicorn?"
#q0_test = "what was the cause of death of yves klein"
#q0_test = "Who is the wife of Barack Obama?"
#q0_test = "Who is the author of Le Petit Prince?"
#q0_test = "Who is the wife of Barack Obama?"
#q0_nlp_test = get_nlp(q0_test)
#q0_themes_test = get_themes(q0_nlp_test, q0_test, top_k=3)
#q0_themes_enhanced_test = get_enhanced_themes(q0_themes_test, top_k=3)
#q0_predicates_test = get_predicates_online(q0_nlp_test, top_k=3)
#q0_focused_parts_test = []
#graph, predicates_dict = build_graph(q0_nlp_test, q0_themes_test, q0_themes_enhanced_test, q0_predicates_test, deep_k=3)
#print(predicates_dict)
#plot_graph(graph, "file_name_graph", "Graph_title")
# In[47]:
def filter_graph_by_names(graph, filtering_names, entities=True, predicates=False):
# remove meaningless subgraphs
graph_copy = graph.copy()
    for g in (graph.subgraph(c) for c in nx.connected_components(graph)):
is_meaningful = False
for e in g:
if get_wd_label(e) in filtering_names:
is_meaningful = True
break
if not is_meaningful:
for e in g:
graph_copy.remove_node(e)
return graph_copy
#q_theme_names = [q[0].text for q in q_themes[0]]
#q_theme_enhanced_names = [q[0] for q in q_themes_enhanced]
#filter_graph_by_names(graph, q_theme_names+q_theme_enhanced_names, entities=True, predicates=False)
#plot_graph(graph, "subgraph_test", "subgraph_test")
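# filter_graph_by_names drops every connected component that contains no theme
# name. The same idea on a plain adjacency dict (no networkx assumed), keeping
# only the components that touch a kept name:

```python
def filter_components(adjacency, keep_names):
    """Return the nodes of every connected component containing a name in keep_names."""
    seen, kept = set(), set()
    for start in adjacency:
        if start in seen:
            continue
        # iterative walk of one component
        component, frontier = set(), [start]
        while frontier:
            node = frontier.pop()
            if node in component:
                continue
            component.add(node)
            frontier += [n for n in adjacency[node] if n not in component]
        seen |= component
        if component & set(keep_names):
            kept |= component
    return kept

adj = {'unicorn': ['voice'], 'voice': ['unicorn'], 'noise': []}
print(sorted(filter_components(adj, ['unicorn'])))  # the 'noise' component is dropped
```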
# In[48]:
#TODO
def filter_graph_by_ids(graph, filtering_ids, entities=True, predicates=False):
# remove meaningless subgraphs
graph_copy = graph.copy()
    for g in (graph.subgraph(c) for c in nx.connected_components(graph)):
is_meaningful = False
for e in g:
if e in filtering_ids:
is_meaningful = True
break
if not is_meaningful:
for e in g:
graph_copy.remove_node(e)
return graph_copy
#print("filtered_graph",filtered_graph)
#plot_graph(filtered_graph, "subgraph_test", "subgraph_test")
#plot_graph(graph, "subgraph_test", "subgraph_test")
#q_theme_ids = extract_ids(q_themes[0])
#q_theme_enhanced_ids = extract_ids(q_themes_enhanced)
#filter_graph_by_ids(graph, q_theme_ids+q_theme_enhanced_ids, entities=True, predicates=False)
# In[49]:
# check the graph for complements
# parameters
# name: string
def find_name_in_graph(graph, name):
return [x for x,y in graph.nodes(data=True) if y['name'].lower() == name.lower()]
#[find_name_in_graph(c.text) for c in q0_themes[1]]
#print(find_name_in_graph(graph, "the unicorn"))
# In[50]:
# check the graph for complements
# parameters
# name: string
def find_id_in_graph(graph, id_to_find):
return [x for x,y in graph.nodes(data=True) if x == id_to_find]
#[find_name_in_graph(c.text) for c in q0_themes[1]]
#print(find_name_in_graph(graph, "the unicorn"))
# In[51]:
# TODO: clean the complements by removing stopwords etc.
def find_theme_complement(graph, themes):
return [i for i in it.chain.from_iterable(
[id for id in [c for c in [find_name_in_graph(graph, t.text) for t in themes[1]] if c]])]
#print(find_theme_complement(graph, q0_themes_test))
#[i for i in it.chain.from_iterable([id for id in check_theme_complement(graph, q0_themes)])]
# In[52]:
def find_paths_in_graph(graph, node_start, node_end):
return [p for p in nx.all_simple_paths(graph, source=node_start, target=node_end)]
#test_paths = find_paths_in_graph(graph, "Q16205566", "Q7774795")
#print(test_paths)
# In[53]:
def is_id_in_graph(graph, node_id):
return graph.has_node(node_id)
#print(is_id_in_graph(graph, "Q24039104"))
# In[54]:
def is_name_in_graph(graph, node_name):
return find_name_in_graph(graph, node_name) != []
#print(is_name_in_graph(graph, "the Unicorn"))
# In[55]:
def find_paths_for_themes(graph, themes):
themes_ids = [t for t in extract_ids(themes[0])]
complements_ids = find_theme_complement(graph, themes)
paths = []
for t_id in themes_ids:
if is_id_in_graph(graph, t_id):
for c_id in complements_ids:
if is_id_in_graph(graph, c_id):
path = find_paths_in_graph(graph, t_id, c_id)
if path:
paths.append(path)
    return list(it.chain.from_iterable(paths))
#print(find_paths_for_themes(graph, q0_themes_test))
#print(find_paths_for_themes(graph, q0_themes))
# In[56]:
def get_node_predicates_from_path(paths):
    predicates = []
    for p in paths:
        for i in p:
            pred = i[:i.find("-")]  # strip the occurrence suffix, e.g. "P725-1" -> "P725"
            if is_wd_predicate(pred) and pred not in predicates:
                predicates.append(pred)
    return predicates
#test_node_predicates = get_node_predicates_from_path(test_paths)
#print(test_node_predicates)
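# Path node ids carry an occurrence suffix ("P725-1"). str.partition strips it
# safely even when no "-" is present, whereas str.find returns -1 in that case
# and silently chops the last character:

```python
def strip_occurrence_suffix(node_id):
    # "P725-1" -> "P725"; "P725" stays "P725"
    return node_id.partition("-")[0]

a = strip_occurrence_suffix("P725-1")
b = strip_occurrence_suffix("P725")
# note: "P725"[:"P725".find("-")] would give "P72" instead
```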
# In[57]:
def get_node_predicate_similarity_from_path(paths, predicates):
path_predicates = get_node_predicates_from_path(paths)
return sorted([(pp, get_similarity_by_ids(p2, pp)) for p in predicates for p2 in p[1] for pp in path_predicates], key=lambda x: x[-1], reverse=True)
#test_node_pedicate_similarities = get_node_predicate_similarity_from_path(test_paths, q0_predicates)
#print(test_node_pedicate_similarities)
# In[58]:
def get_focused_parts(nlp_sentence, themes, top_k=3, in_context=False):
W_FILTERS = ["WDT", "WP", "WP$", "WRB"]
V_FILTERS = ["VERB", "AUX"]
dummy_doc = get_nlp("dummy doc")
    focused_parts = [t.head for t in nlp_sentence if t.tag_ in W_FILTERS]
    for fp in focused_parts:
        # Token.children is a generator (always truthy), so materialize it
        for c in list(fp.children):
            if c.tag_ not in W_FILTERS and c.text not in [f.text for f in focused_parts]:
                focused_parts.append(c)
#print("focused_parts",focused_parts)
#print("themes[0]",themes[0])
for t in themes[0]:
for i_fp, fp in enumerate(focused_parts):
#print("fp",fp, type(fp))
for i_w, w in enumerate([w.lower_ for w in t[0]]):
#print("w",w, type(w))
if fp.lower_ == w:
#print("MATCHING")
if i_fp+1 < len(focused_parts):
#print("focused_parts[i_fp+1].lower_",focused_parts[i_fp+1].lower_)
#print("t[0][i_w-1].lower_",t[0][i_w-1].lower_)
if focused_parts[i_fp+1].lower_ == t[0][i_w-1].lower_:
#print(i_fp,fp, t[0][i_w-1], t[0])
#print("BEFORE focused_parts",focused_parts)
#print("t[0]",t[0])
if type(t[0]) == type(dummy_doc):
focused_parts[i_fp] = t[0][:]
else:
focused_parts[i_fp] = t[0]
del focused_parts[i_fp+1]
#print("AFTER focused_parts",focused_parts)
#print()
#for fp in focused_parts:
# print(type(fp))
#
# print(fp.as_doc())
#if isinstance() == 'spacy.tokens.span.Span':
# print("in")
#
#focused_parts = [type(fp) for fp in focused_parts]
#print("focused_parts",focused_parts)
if in_context:
focused_parts_ids = []
for p in focused_parts:
is_in = False
for t in themes[0]:
if p.text == t[0].text:
focused_parts_ids.append(t[1])
is_in = True
break
if not is_in:
focused_parts_ids.append(get_wd_ids(p.text, top_k=top_k))
#print("focused_parts",focused_parts)
#print("focused_parts_ids",focused_parts_ids)
else:
focused_parts_ids = [get_wd_ids(p.text, top_k=top_k) for p in focused_parts]
#print("focused_parts",focused_parts)
#print("focused_parts_ids",focused_parts_ids)
#focused_parts_ids = [get_wd_ids(p.text, top_k=top_k, online=True) for p in focused_parts]
#print("focused_parts_ids",focused_parts_ids)
merged_list = merge_lists(focused_parts, focused_parts_ids)
#print("merged_list",merged_list)
dummy_span = dummy_doc[:]
merged_list_filtered = []
for ml in merged_list:
if ml[1]:
if type(ml[0]) == type(dummy_span):
merged_list_filtered.append(ml)
elif ml[0].pos_ not in V_FILTERS and not ml[0].is_stop:
merged_list_filtered.append(ml)
return merged_list_filtered
#q_test_nlp = get_nlp("what's akbar tandjung's ethnicity")
#print(get_focused_parts(q0_nlp_test))
#q_test_nlp = get_nlp("Who voiced the Unicorn in The Last Unicorn?")
#print(get_focused_parts(q0_nlp_test))
#q_test_nlp = get_nlp("Who is the author that wrote the book Moby Dick")
#q_test_themes = get_themes(q_test_nlp, top_k=3)
#get_focused_parts(q_test_nlp,q_test_themes, top_k=3)
#q_test_nlp = get_nlp("Where was Shigeyasu Suzuki Place of Birth")
#q_test_nlp = get_nlp("Who is the author that wrote the book Moby Dick")
#q_test_nlp = get_nlp("Where was Shigeyasu Suzuki Place of Birth")
#q_test_themes = get_themes(q_test_nlp, top_k=3)
#get_focused_parts(q_test_nlp,q_test_themes, top_k=3)
#q_focused_parts: [(Unicorn, ['Q18356448', 'Q21070472', 'Q22043340', 'Q1565614', 'Q30060419']),
#(in, ['P642', 'Q29733109', 'P361', 'P131']),
#(the, ['Q1408543', 'Q2865743', 'Q29423', 'Q21121474']),
#(Unicorn, ['Q18356448', 'Q21070472', 'Q22043340', 'Q1565614', 'Q30060419']),
#(The, ['Q1067527', 'Q13423400', 'Q28457426', 'Q24406786', 'Q2430521', 'Q37199001']),
#(Last, ['Q16995904', 'Q20072822', 'Q24229340', 'Q20155285'])]
#[(author, ['P676', 'Q482980', 'Q3154968']),
# (book, ['Q571', 'Q4942925', 'Q997698']),
# (Dick, ['Q1471500', 'Q21510351', 'Q249606']),
# (Moby, ['Q1954726', 'Q6887412', 'Q14045'])]
# In[59]:
def is_in_list_by_similarity(word,list_of_words,similarity_threshold):
nlp_word = get_nlp(word)
for lw in list_of_words:
if nlp_word.similarity(get_nlp(lw)) > similarity_threshold:
return True
return False
#is_in_list_by_similarity("Moby Dick", ["moby-dick","star wars"],0.9)
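# The same threshold-dedup pattern, shown with a stdlib stand-in for the
# spaCy vector similarity used above (char_sim is only an illustration):

```python
from difflib import SequenceMatcher

def dedup_by_similarity(words, similar, threshold):
    # keep a word only if nothing already kept is too similar to it
    kept = []
    for w in words:
        if not any(similar(w, k) > threshold for k in kept):
            kept.append(w)
    return kept

def char_sim(a, b):
    # stand-in for spaCy's Doc.similarity
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

kept = dedup_by_similarity(["Moby Dick", "moby-dick", "Star Wars"], char_sim, 0.8)
```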
# In[60]:
def add_compound(nlp_list, themes):
compounded = []
#if not nlp_list[0]:
# return compounded
try:
for t in [e[0] for e in themes[0]] + themes[1]:
for l in [n[0] for n in nlp_list]:
if l.text.lower() in t.text.lower():
compounded.append(t.text)
return compounded
    except Exception:
        return compounded
# TODO: make the predicate search go further in the path list for the !i%2
def find_paths_keywords(graph, nlp, themes, themes_enhanced, predicates, focused_parts, keywords_len_limit=5, similarity_threshold=0.9):
WH_FILTER = ["WDT", "WP", "WP$", "WRB"]
VERB_FILTER = ["VERB", "AUX"]
NOUN_FILTER = ["NOUN","PROPN"]
POSITION_FILTER = ["ADP"]
#[(voiced, ['P725']), (Last, ['P5017'])]
predicates_ids = extract_ids(predicates)#[""]
#print("find_paths_keywords - predicates_ids",predicates_ids)
focused_parts_words = [t[0].text for t in focused_parts]
focused_parts_ids = [j for i in [t[1] for t in focused_parts] for j in i]
focused_parts_predicates_ids = [f for f in focused_parts_ids if is_wd_predicate(f)]
focused_parts_words_ids = [f for f in focused_parts_ids if is_wd_entity(f)]
focused_parts_words_ids_labeled = [get_wd_label(p) for p in focused_parts_words_ids]
#print(focused_parts_words_2)
question_anchors = [t for t in nlp if t.tag_ in WH_FILTER]
themes_enhanced_list = [t[0] for t in themes_enhanced]
focus_themes = [t[0].text for t in themes[0]]
focus_path_by_tails = [[c for c in t.head.children if c.pos_ in NOUN_FILTER] for t in nlp if t.pos_ == "PRON"]
focus_part_by_head = [t.head for t in question_anchors]
predicates_nlp = [t for t in nlp if t.pos_ in VERB_FILTER]
predicates_lemma = [t.lemma_ for t in predicates_nlp]
predicates_attention = [t for t in nlp if t.head in predicates_nlp]
predicates_attention_tails = [[c for c in t.children] for t in predicates_attention]
in_attention_heads = [t.head.text for t in nlp if t.pos_ in POSITION_FILTER]
in_attention_tails = add_compound([[c for c in t.children] for t in nlp if t.pos_ in POSITION_FILTER], themes)
focus_themes_enhanced = [t[0] for t in themes_enhanced
if t[0].lower() in [a.lower() for a in in_attention_tails]
or t[0].lower() in [a.lower() for a in in_attention_heads]]
theme_enhanced_ids = extract_ids(themes_enhanced)
predicates_enhanced_ids = predicates_ids+[(p) for p in theme_enhanced_ids if is_wd_predicate(p)]
[predicates_enhanced_ids.append(p) for p in focused_parts_predicates_ids if p not in predicates_enhanced_ids]
predicates_enhanced_ids_filtered = []
[predicates_enhanced_ids_filtered.append(p) for p in predicates_enhanced_ids if p not in predicates_enhanced_ids_filtered]
#print("predicates_enhanced_ids_filtered",predicates_enhanced_ids_filtered)
alterniative_words = {}
for t in themes_enhanced:
for e in predicates_enhanced_ids_filtered:
if e in t[1]:
alterniative_words[t[0]] = [get_nlp(get_wd_label(e)),[e]]
else:
alterniative_words[get_wd_label(e)] = [get_nlp(get_wd_label(e)),[e]]
#print("focused_parts_predicates_ids",focused_parts_predicates_ids)
#print("focused_parts_words_ids",focused_parts_words_ids)
#print("alterniative_words",alterniative_words)
#print("predicates_enhanced_ids",predicates_enhanced_ids)
##print("predicates_enhanced",predicates_enhanced)
#print("question_anchors",question_anchors)
#print("in_attention_heads",in_attention_heads)
#print("in_attention_tails",in_attention_tails)
#print("focus_themes",focus_themes)
#print("themes_enhanced_list",themes_enhanced_list)
#print("focus_themes_enhanced",focus_themes_enhanced)
#print("focus_path_by_tails",focus_path_by_tails)
#print("focus_part_by_head",focus_part_by_head)
#print("predicates_nlp",predicates_nlp)
#print("predicates_lemma",predicates_lemma)
#print("predicates_attention",predicates_attention)
#print("predicates_attention_tails",predicates_attention_tails)
#
#print("\n")
paths_keywords = []
[paths_keywords.append(e.lower()) for e in focused_parts_words + in_attention_heads + in_attention_tails + focus_themes + focus_themes_enhanced + focused_parts_words_ids_labeled if e.lower() not in paths_keywords]
#print(paths_keywords)
#paths_keywords = [p for p in it.permutations(paths_keywords)]
#print(paths_keywords)
paths_keywords = [p for p in paths_keywords if p and len(p.split(" ")) <= keywords_len_limit]
paths_keywords_filtered = []
#print("paths_keywords",paths_keywords)
#for k in paths_keywords:
# print("current k",k)
# #print("paths_keywords_filtered before",paths_keywords_filtered)
# is_in_list_by_similarity(k, paths_keywords_filtered,similarity_threshold)
[paths_keywords_filtered.append(k) for k in paths_keywords if not is_in_list_by_similarity(k, paths_keywords_filtered,similarity_threshold)]
#is_in_list_by_similarity("Moby Dick", ["moby-dick","star wars"],0.9)
return paths_keywords_filtered, alterniative_words, question_anchors
#initial_paths = find_paths_for_themes(graph, themes)
#predicate_id_similarities = get_node_predicate_similarity_from_path(initial_paths, predicates)
#best_path = [p for p in initial_paths if predicate_id_similarities[0][0] == p[1][:p[1].find("-")]]
#path_answer = get_wd_label(best_path[0][2]) if best_path else []
#return (path_answer, best_path[0][2]) if path_answer else (False, False)
#paths_keywords_2 = find_paths_keywords(graph_2, q_nlp_2, q_themes_2, q_themes_enhanced_2, q_predicates_2, q_focused_parts_2)
#paths_keywords_2
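# The recurring `[out.append(x) for x in xs if x not in out]` idiom above is an
# order-preserving dedup; for hashable items, dict.fromkeys does the same in
# one pass:

```python
items = ["unicorn", "voice", "unicorn", "last", "voice"]
deduped = list(dict.fromkeys(items))  # keeps first occurrence, preserves order
```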
# In[61]:
def get_all_simple_paths_worker(graph, in_mp_queue, out_mp_queue):
found_paths = []
sentinel = None
for source, target in iter(in_mp_queue.get, sentinel):
for path in nx.all_simple_paths(graph, source = source, target = target, cutoff = None):
found_paths.append(path)
out_mp_queue.put(found_paths)
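# The queue/sentinel pattern the workers use here (iter(q.get, sentinel), one
# sentinel per worker) is shown below with threads so the demo stays
# self-contained and deterministic:

```python
import queue
import threading

SENTINEL = None

def square_worker(in_q, out_q):
    # consume tasks until the sentinel arrives, then exit
    for n in iter(in_q.get, SENTINEL):
        out_q.put(n * n)

in_q, out_q = queue.Queue(), queue.Queue()
for n in [1, 2, 3]:
    in_q.put(n)
workers = [threading.Thread(target=square_worker, args=(in_q, out_q)) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    in_q.put(SENTINEL)  # one sentinel per worker
for w in workers:
    w.join()
results = sorted(out_q.get() for _ in range(3))
```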
# In[62]:
def get_keywords_nodes_worker(graph, threshold, in_mp_queue, out_mp_queue):
    keywords_nodes = []
    sentinel = None
    for name in iter(in_mp_queue.get, sentinel):
        nlp_lookup = get_nlp(name)
        # append one node list per keyword; overwriting here would drop every
        # keyword except the last one this worker happens to process
        keywords_nodes.append([x for x, y in graph.nodes(data=True)
                               if get_nlp(y['name']).similarity(nlp_lookup) >= threshold])
    out_mp_queue.put(keywords_nodes)
# In[63]:
def do_nothing(perm):
    return perm
def get_paths_keywords_nodes(graph, keywords, threshold=0.9, top_performance=50, cores=mp.cpu_count()):
    if cores <= 0: cores = 1
    sentinel = None
    out_mp_queue = mp.Queue()
    in_mp_queue = mp.Queue()
    for k in keywords:
        in_mp_queue.put(k)
    procs = [mp.Process(target = get_keywords_nodes_worker, args = (graph, threshold, in_mp_queue, out_mp_queue)) for i in range(cores)]
    keywords_nodes = []
    for proc in procs:
        proc.daemon = True
        proc.start()
    for proc in procs:
        in_mp_queue.put(sentinel)
    for proc in procs:
        # each worker returns a list of per-keyword node lists; extend keeps
        # the expected [[nodes for kw1], [nodes for kw2], ...] shape
        keywords_nodes.extend(out_mp_queue.get())
    for proc in procs:
        proc.join()
    keywords_nodes = [k for k in keywords_nodes if k]
#print("3 get_paths_keywords_nodes")
#print("1 get_paths_keywords_nodes")
#keywords_nodes = []
#print("len(keywords)",len(keywords))
#for k in keywords:
# nlp_lookup = get_nlp(k)
# keywords_nodes.append([x for x,y in graph.nodes(data=True) if get_nlp(y['name']).similarity(nlp_lookup) >= threshold])
# print("2 get_paths_keywords_nodes")
#keywords_nodes = [k for k in keywords_nodes if k]
#print("3 get_paths_keywords_nodes")
#keywords_nodes [['Q17521117', 'Q17521118', 'Q557214', 'Q421946', 'Q11282976', 'Q4677712', 'Q33999'], ['Q7246', 'Q1307944', 'Q21070472', 'Q18356448', 'Q1863113', 'Q20983877', 'Q226755', 'Q22043340'], ['Q176198', 'Q967268', 'Q17553756', 'Q30060419', 'Q17985004', 'Q16614390', 'Q18647334', 'Q15628943'], ['Q176198', 'Q967268', 'Q17553756', 'Q30060419', 'Q17985004', 'Q16614390', 'Q18647334', 'Q15628943'], []]
#keywords_nodes[0] ['Q17521117', 'Q17521118', 'Q557214', 'Q421946', 'Q11282976', 'Q4677712', 'Q33999']
#keywords_nodes[1] ['Q7246', 'Q1307944', 'Q21070472', 'Q18356448', 'Q1863113', 'Q20983877', 'Q226755', 'Q22043340']
keywords_nodes_per = []
if keywords_nodes:
#print("4 get_paths_keywords_nodes")
        if len(keywords_nodes) > 1:
            for kn_i, kn in enumerate(keywords_nodes):
                if kn_i + 1 < len(keywords_nodes):
                    if len(kn) * len(keywords_nodes[kn_i+1]) > top_performance:
                        if len(kn) <= int(sqrt(top_performance)):
                            keywords_nodes[kn_i+1] = keywords_nodes[kn_i+1][:int(top_performance/len(kn))]
                        elif len(kn) >= len(keywords_nodes[kn_i+1]):
                            # assign back into the list; rebinding the loop
                            # variable `kn` alone leaves the list untruncated
                            keywords_nodes[kn_i] = kn[:int(top_performance/len(keywords_nodes[kn_i+1]))]
                        else:
                            keywords_nodes[kn_i] = kn[:int(sqrt(top_performance))]
                            keywords_nodes[kn_i+1] = keywords_nodes[kn_i+1][:int(sqrt(top_performance))]
#print("7 get_paths_keywords_nodes")
#print("8 get_paths_keywords_nodes")
            # it.permutations already yields the pairs; a process pool adds
            # nothing here since do_nothing is the identity
            keywords_nodes_per = list(it.permutations(keywords_nodes, 2))
#print(">1 len(keywords_nodes_per",len(keywords_nodes_per),keywords_nodes_per[0])
else:
keywords_nodes_per = [(keywords_nodes+keywords_nodes)]
#print("<1 len(keywords_nodes_per)",len(keywords_nodes_per),keywords_nodes_per[0])
#print("keywords_nodes_per",keywords_nodes_per)
#return 0
paths_keyword_nodes = []
targets = []
sources = []
for pkn in keywords_nodes_per:
[sources.append(pkn0) for pkn0 in pkn[0] if pkn0 not in sources]
[targets.append(pkn1) for pkn1 in pkn[1] if pkn1 not in targets]# and pkn1 not in pkn[0]]
#print("len(targets)",len(targets))
#print("len(sources)",len(sources))
out_mp_queue = mp.Queue()
in_mp_queue = mp.Queue()
sentinel = None
for source, target in it.product(sources, targets):
in_mp_queue.put((source, target))
procs = [mp.Process(target = get_all_simple_paths_worker, args = (graph, in_mp_queue, out_mp_queue)) for i in range(cores)]
for proc in procs:
proc.daemon = True
proc.start()
for proc in procs:
in_mp_queue.put(sentinel)
for proc in procs:
paths_keyword_nodes.extend(out_mp_queue.get())
for proc in procs:
proc.join()
paths_keyword_nodes = [p for p in paths_keyword_nodes if p]
#paths_keyword_nodes_filtered = []
#[paths_keyword_nodes_filtered.append(p) for p in paths_keyword_nodes if p not in paths_keyword_nodes_filtered]
#print("len(paths_keyword_nodes)",len(paths_keyword_nodes))
return paths_keyword_nodes
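# How the pairing above works: permutations of the per-keyword node lists give
# ordered (source-group, target-group) pairs, and product expands them into
# concrete (source, target) endpoints for the path search. Toy ids only:

```python
import itertools as it

groups = [["Q1", "Q2"], ["Q3"], ["Q4"]]
# ordered pairs of keyword groups, as in it.permutations(keywords_nodes, 2)
group_pairs = list(it.permutations(groups, 2))
# concrete (source, target) endpoints, as with it.product(sources, targets)
endpoints = [(s, t) for g1, g2 in group_pairs for s, t in it.product(g1, g2)]
```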
def find_path_nodes_from_graph(nlp_question, graph, predicates_dict, keywords, threshold=0.9,special_pred_theshold=0.7, thres_inter=0.15, top_performance=50,min_paths=3000, cores=mp.cpu_count()):
#print("current threshold", str(round(threshold, 1)))
w_positions, w_names = w_converter(nlp_question)
w_names_only = [wn[1] for wn in w_names]
date_trigger = "date" in w_names_only
location_trigger = "location" in w_names_only
#location_trigger = "person" in w_names_only
all_predicates = list(predicates_dict.keys())
option_keywords = []
if date_trigger:
nlp_time = get_nlp("time")
nlp_date = get_nlp("date")
for p in all_predicates:
#print("current p", p)
p_label = get_wd_label(p)
nlp_p = get_nlp(p_label)
#print("nlp_p",nlp_p)
p_date = nlp_p.similarity(nlp_date)
p_time = nlp_p.similarity(nlp_time)
#print("p_date",p_date)
#print("p_time",p_time)
            if p_date > special_pred_theshold or p_time > special_pred_theshold:
                if p_label not in option_keywords:
                    option_keywords.append(p_label)
if location_trigger:
nlp_location = get_nlp("location")
nlp_place = get_nlp("place")
for p in all_predicates:
#print("current p", p)
p_label = get_wd_label(p)
nlp_p = get_nlp(p_label)
#print("nlp_p",nlp_p)
p_location = nlp_p.similarity(nlp_location)
p_place = nlp_p.similarity(nlp_place)
#print(p_label, "p_location",p_location)
#print(p_label, "p_place",p_place)
            if p_location > special_pred_theshold or p_place > special_pred_theshold:
                if p_label not in option_keywords:
                    option_keywords.append(p_label)
for k in keywords[0]:
nlp_k = get_nlp(get_wd_label(k))
for p in all_predicates:
#print("current p", p)
p_label = get_wd_label(p)
nlp_p = get_nlp(p_label)
#print("nlp_p",nlp_p)
p_k_sim = nlp_p.similarity(nlp_k)
            if p_k_sim > special_pred_theshold:
                if p_label not in option_keywords:
                    option_keywords.append(p_label)
#print("keywords[1]",keywords[1])
k1_predicates = keywords[1].values() #[[country of citizenship, ['P27']], [country of citizenship, ['P27']]]
#print("k1_predicates",k1_predicates)
k1_predicates = sum([[get_wd_label(p) for p in e[1]] for e in k1_predicates],[])
#print("k1_predicates",k1_predicates)
#print("option_keywords",option_keywords)
all_keywords = []
[all_keywords.append(k) for k in keywords[0] + option_keywords + k1_predicates if k and k not in all_keywords]
#print("all_keywords",all_keywords)
main_keyword_paths = get_paths_keywords_nodes(graph, all_keywords,threshold=threshold,top_performance=top_performance, cores=cores)
alternative_keyword_paths = []
#for k_1 in keywords[1]:
# for i, k_0 in enumerate(all_keywords):
# if k_1==k_0:
# tmp_keywords = all_keywords.copy()
# tmp_keywords[i] = keywords[1][k_1][0].text
# alternative_keyword_paths += get_paths_keywords_nodes(graph, all_keywords,threshold=threshold,top_performance=top_performance, cores=cores)
keyword_paths = main_keyword_paths#+alternative_keyword_paths
#print("BEFORE len(keyword_paths)",len(keyword_paths))
keyword_paths_filtered=[]
[keyword_paths_filtered.append(p) for p in keyword_paths if p not in keyword_paths_filtered]
keyword_paths = keyword_paths_filtered
#print("keyword_paths",len(keyword_paths))
#print("len(keyword_paths)",len(keyword_paths))
if len(keyword_paths) < min_paths:
if threshold == 0: return keyword_paths
threshold -= thres_inter
if threshold < 0: threshold = 0
keyword_paths = get_paths_keywords_nodes(graph, all_keywords,threshold=threshold,top_performance=top_performance, cores=cores)
keyword_paths_filtered=[]
[keyword_paths_filtered.append(p) for p in keyword_paths if p not in keyword_paths_filtered]
keyword_paths = keyword_paths_filtered
#keyword_paths_filtered = []
#print("AFTER len(keyword_paths)",len(keyword_paths))
#[keyword_paths_filtered.append(p) for p in keyword_paths if p not in keyword_paths_filtered]
return keyword_paths
#path_nodes_2 = find_path_nodes_from_graph(nlp_question,graph_2, predicates_dict,paths_keywords_2, threshold=0.9, special_pred_theshold=0.7, thres_inter=0.15, top_performance=50, min_paths=3000)
#end_time = time.time()
#start_time = time.time()
#path_nodes = find_path_nodes_from_graph(nlp_question,graph, predicates_dict,paths_keywords, threshold=0.8, thres_inter=0.1, top_performance=graph.size(),min_paths=3000,cores=2)
#print("--> len(path_nodes):",len(path_nodes))
#print("Finding path nodes ->\tRunning time is {}s".format(round(time.time()-start_time,2)))
# In[64]:
def is_sublist(a, b):
    # True when a occurs as a contiguous slice of b
    if not a: return True
    if not b: return False
    return b[:len(a)] == a or is_sublist(a, b[1:])
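# The recursive contiguous-sublist check above, exercised standalone
# (redefined here so the snippet runs on its own):

```python
def is_sublist(a, b):
    # True when a occurs as a contiguous slice of b
    if not a: return True
    if not b: return False
    return b[:len(a)] == a or is_sublist(a, b[1:])

r1 = is_sublist([2, 3], [1, 2, 3, 4])  # contiguous slice -> True
r2 = is_sublist([2, 4], [1, 2, 3, 4])  # gap between 2 and 4 -> False
```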
def paths_nodes_filter_is_sublist_worker(in_mp_queue, out_mp_queue, filtered_paths):
found_paths = []
sentinel = None
#print("HI I AM A PROCESSOR", filtered_paths)
for i, fp in iter(in_mp_queue.get, sentinel):
for fp_2 in filtered_paths:
#print("will process",i,fp,fp_2)
if (is_sublist(fp, fp_2) and fp!=fp_2):
#print("processed",i,fp,fp_2)
out_mp_queue.put(i)
break
out_mp_queue.put(False)
#print("I AM TERMINATED")
# In[65]:
#node_predicates_names_2 = get_node_predicates_from_path(path_nodes_2)
def paths_nodes_filter(path_nodes, graph, cores=mp.cpu_count(), with_sublists=True):
filtered_paths = []
for path in path_nodes:
filtered_row = []
for i,p in enumerate(path):
if is_wd_predicate(p[:p.find("-")]):
if i == 0:
#if p[:p.find("-")] == "P725":
# print(p)
neighbor = [k for k in graph[p].keys() if k != path[i+1]]
if neighbor:
filtered_row.append(neighbor[0])
filtered_row.append(p[:p.find("-")])
else:
continue
#print(filtered_row)
elif i > 0 and i < len(path)-1:
filtered_row.append(p[:p.find("-")])
else:
neighbor = [k for k in graph[p].keys() if k != path[i-1]]
if neighbor:
filtered_row.append(p[:p.find("-")])
filtered_row.append(neighbor[0])
else:
continue
else: filtered_row.append(p)
#print("filtered_paths",filtered_paths)
if len(filtered_row) > 1 and filtered_row not in filtered_paths:
filtered_paths.append(filtered_row)
if with_sublists:
if cores <= 0: cores = 1
out_mp_queue = mp.Queue()
in_mp_queue = mp.Queue()
sentinel = None
for i,fp in enumerate(filtered_paths):
in_mp_queue.put((i, fp))
procs = [mp.Process(target = paths_nodes_filter_is_sublist_worker, args = (in_mp_queue, out_mp_queue, filtered_paths)) for i in range(cores)]
to_remove_idexes = []
for proc in procs:
proc.daemon = True
proc.start()
for proc in procs:
in_mp_queue.put(sentinel)
for proc in procs:
to_remove_idexes.append(out_mp_queue.get())
for proc in procs:
proc.join(1)
#print("to_remove_idexes",to_remove_idexes)
    for tri in to_remove_idexes:
        if tri is not False:  # index 0 is a valid (falsy) list index
            filtered_paths[tri] = []
unique_paths = [p for p in filtered_paths if p]
unique_paths_with_reversed = []
for up in unique_paths:
reversed_up = list(reversed(up))
if up not in unique_paths_with_reversed:
unique_paths_with_reversed.append(up)
if reversed_up not in unique_paths_with_reversed:
unique_paths_with_reversed.append(reversed_up)
#print("unique_paths",len(unique_paths))
#for i, up in enumerate(unique_paths):
# for up_2 in unique_paths:
# if (list(reversed(up)) == up_2):
# unique_paths[i] = []
# break
#cleaned_paths = []
#unique_paths = [up for up in unique_paths if up]
#for up in unique_paths:
# for i,e in enumerate(up):
# if not is_wd_predicate(e):
# for j,r in enumerate(list(reversed(up))):
# if not is_wd_predicate(r):
# cleaned_paths.append(up[i:-j])
# break
#print("cleaned_paths",len(cleaned_paths))
#cleaned_paths = [c for c in cleaned_paths if len(c) > 2]
#unique_paths = cleaned_paths.copy()
#for i,fp in enumerate(cleaned_paths):
# for fp_2 in cleaned_paths:
# if (is_sublist(fp, fp_2) and fp!=fp_2):
# unique_paths[i] = []
# break
#unique_paths = [p for p in unique_paths if len(p) > 2]
#for i, up in enumerate(unique_paths):
# for up_2 in unique_paths:
# if (list(reversed(up)) == up_2):
# unique_paths[i] = []
# break
#print(up)
#[up for up in unique_paths if up and not is_wd_predicate(up[-1]) and not is_wd_predicate(up[0])]
#print()
#for up in unique_paths:
# print(up)
# break
# return []
else:
unique_paths_with_reversed = filtered_paths
    return [p for p in unique_paths_with_reversed if len(p) > 2]
#paths_nodes_filtered_2 = paths_nodes_filter(path_nodes_2, graph_2)
#print("unique_paths", len(paths_nodes_filtered_2))
#for p in paths_nodes_filtered_2:
# print(p)
# In[66]:
def w_converter(nlp):
    w_positions = []
    w_names = []
    W_TYPES = {"where": "location", "when": "date", "who": "person",
               "why": "cause", "which": "which", "what": "what"}
    for i_q, q in enumerate(nlp):
        if q.lemma_ in W_TYPES:
            w_positions.append(i_q)
            w_names.append((i_q, W_TYPES[q.lemma_]))
        elif i_q+1 < len(nlp) and q.lemma_ == "how" and nlp[i_q+1].lemma_ in ("much", "many"):
            w_positions.append(i_q)
            w_names.append((i_q, "quantity"))
    return w_positions, w_names
# In[67]:
def get_entity_similarity(word_id, entity_type, banned_labels=[], max_reward=2.0):
LOCATION_FILTER = ["GPE", "FAC", "LOC","PERSON"]
PERSON_FILTER = ["PERSON","NORP","ORG","PER"]
DATE_FILTER = ["DATE","TIME"]
CAUSE_FILTER = ["NORP","PRODUCT","EVENT","MISC"]
WHICH_FILTER = PERSON_FILTER+DATE_FILTER+["GPE","LOC","PRODUCT","EVENT",
"WORK_OF_ART","LAW","LANGUAGE","MISC"]
WHAT_FILTER = LOCATION_FILTER+DATE_FILTER+CAUSE_FILTER+PERSON_FILTER+["WORK_OF_ART","LAW","LANGUAGE"]
QUANTITY_FILTER = ["PERCENT", "MONEY", "QUANTITY", "ORDINAL", "CARDINAL"]
ALL_FILTER = LOCATION_FILTER + PERSON_FILTER + DATE_FILTER + CAUSE_FILTER + WHICH_FILTER + WHAT_FILTER + QUANTITY_FILTER
similarities = []
word_label = get_wd_label(word_id)
is_banned_label = word_label.lower() in banned_labels
if word_label == "" and not is_timestamp(word_id):
return similarities
word_ents = get_kb_ents(word_label)
#print(word_id,word_label,entity_type,[e.label_ for e in word_ents])
#if is_timestamp(word_id):
#print("is_timestamp")
#print("word_ents",word_ents)
if is_timestamp(word_id) and entity_type == "date" and not is_banned_label:
similarities.append(max_reward)
#print("in the condition", word_id, entity_type, similarities)
elif word_ents and not is_banned_label:
for ent in word_ents:
if (entity_type in ALL_FILTER and ent.label_ == entity_type):
similarities.append(max_reward)
elif ent.kb_id_ == word_id:
if entity_type == "location" and ent.label_ in LOCATION_FILTER:
similarities.append(max_reward)
elif entity_type == "person" and ent.label_ in PERSON_FILTER:
similarities.append(max_reward)
elif entity_type == "date" and (ent.label_ in DATE_FILTER):
similarities.append(max_reward)
elif entity_type == "cause" and ent.label_ in CAUSE_FILTER:
similarities.append(max_reward)
elif entity_type == "which" and ent.label_ in WHICH_FILTER:
similarities.append(max_reward)
elif entity_type == "what" and ent.label_ in WHAT_FILTER:
similarities.append(max_reward)
elif entity_type == "quantity" and ent.label_ in QUANTITY_FILTER:
similarities.append(max_reward)
else:
similarities.append(get_similarity_by_words(get_nlp(word_label),get_nlp(entity_type)))
else: similarities.append(get_similarity_by_words(get_nlp(word_label),get_nlp(entity_type)))
else:
similarities.append(get_similarity_by_words(get_nlp(word_label),get_nlp(entity_type)))
#print("get_entity_similarity:",word_label, entity_type, similarities)
return similarities
#get_entity_similarity("place of birth", "location", [], max_reward=2)
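# A hypothetical sketch of the reward scheme in get_entity_similarity: full
# reward when the NER label sits in the expected answer type's filter set,
# otherwise fall back to a word-similarity score. FILTERS and the fallback
# lambda are stand-ins, not the real spaCy/Wikidata machinery:

```python
def score_entity(ner_label, expected_type, filters, similarity, max_reward=2.0):
    # reward a direct NER-type match, else back off to similarity
    if ner_label in filters.get(expected_type, ()):
        return max_reward
    return similarity(ner_label, expected_type)

FILTERS = {"location": {"GPE", "FAC", "LOC", "PERSON"}}
fallback = lambda a, b: 0.3  # stand-in for get_similarity_by_words
hit = score_entity("GPE", "location", FILTERS, fallback)
miss = score_entity("ORG", "location", FILTERS, fallback)
```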
# In[68]:
def get_hypothesises(nlp, predicates_dict, predicates, themes, paths_keywords, filtered_paths, threshold=0.5, special_pred_theshold=0.7, max_reward=2.0):#, themes, themes_enhanced):
#for fp in filtered_paths:
# print(fp)
w_positions, w_names = w_converter(nlp)
w_names_only = [wn[1] for wn in w_names]
#print("w_positions",w_positions)
#print("w_names",w_names)
#print("w_names_only",w_names_only)
date_trigger = "date" in w_names_only
#person_trigger = "person" in w_names_only
#print("date_trigger",date_trigger)
#print("person_trigger",person_trigger)
BANNED_PREDICATATE_IDS = ["P31"]
complementary_predicates = paths_keywords[0]+[p[0] for p in list(paths_keywords[1].values())]
nlp_time = get_nlp("time")
nlp_date = get_nlp("date")
#locate positions
anchors_positions = []
anchors_focuses = []
#keywords_positions = []
#predicates_positions = []
theme_keywords = [t[0] for t in themes[0]]
predicate_ids = sum([p[1] for p in predicates if p[1]],[])
predicate_names = [get_nlp(p[0].text) for p in predicates]
#print("predicate_ids",predicate_ids)
#print("predicate_names",predicate_names)
[anchors_positions.append(i) for i, w in enumerate(nlp) if w in paths_keywords[2]]
#print("\nanchors_positions:",anchors_positions)
#anchors_childrens
for p in anchors_positions:
children = [c for c in nlp[p].children]
if children == []:
children = [c for c in nlp[p].head.children]
else:
if nlp[p].head:
children.append(nlp[p].head)
anchors_focuses += ([c for c in children
if c not in [nlp[a] for a in anchors_positions]
and c.pos_ != "PUNCT"])
if not anchors_focuses:
anchors_focuses = [nlp[p].head]
anchors_focuses += complementary_predicates
#print("\nanchors_focuses",anchors_focuses)
anchors_focuses_filtered = []
for af in anchors_focuses:
if isinstance(af, str):
anchors_focuses_filtered.append(af)
else:
anchors_focuses_filtered.append(af.text)
anchors_focuses = []
[anchors_focuses.append(af) for af in anchors_focuses_filtered if af not in anchors_focuses and af]
#print("\nanchors_focuses",anchors_focuses)
#find anchor position in paths
anchors_predicates = []
main_predicate_ids = []
main_predicate_names = []
[main_predicate_ids.append(p) for p in predicate_ids+sum([p[1] for p in list(paths_keywords[1].values())],[]) if p not in main_predicate_ids]
#print("paths_keywords[1]",paths_keywords[1])
#print("main_predicate_ids",main_predicate_ids)
#print("[p[0] for p in list(paths_keywords[1].values())]",[p[0].text for p in list(paths_keywords[1].values())])
[main_predicate_names.append(p) for p in predicate_names+[get_nlp(p[0].text) for p in list(paths_keywords[1].values())] if p not in main_predicate_names]
#print("paths_keywords[1]",paths_keywords[1])
#print("main_predicate_names",main_predicate_names)
#
#return 0
for p in filtered_paths:
p_len = len(p)
for i_e, e in enumerate(p):
if is_wd_predicate(e):
#print("predicate",e)
if main_predicate_ids:
if e in main_predicate_ids and e not in BANNED_PREDICATATE_IDS:
if e not in [ap[0] for ap in anchors_predicates]:
if date_trigger:
time_similarity = get_similarity_by_words(get_nlp(get_wd_label(e)),nlp_time)
date_similarity = get_similarity_by_words(get_nlp(get_wd_label(e)),nlp_date)
#print("main_predicate_ids", e, "time_similarity",time_similarity)
#print("main_predicate_ids", e, "date_similarity",date_similarity)
if time_similarity > date_similarity:
anchors_predicates.append((e, time_similarity))
else: anchors_predicates.append((e, date_similarity))
else:
anchors_predicates.append((e, max_reward))
elif e not in [ap[0] for ap in anchors_predicates]:
stat_count = 0
stat_current = 0
for pn in main_predicate_names:
stat_current += get_similarity_by_words(get_nlp(get_wd_label(e)),pn)
stat_count += 1
for pi in main_predicate_ids:
stat_current += get_similarity_by_words(get_nlp(get_wd_label(e)),get_nlp(get_wd_label(pi)))
stat_count += 1
if date_trigger:
time_similarity = get_similarity_by_words(get_nlp(get_wd_label(e)),nlp_time)
date_similarity = get_similarity_by_words(get_nlp(get_wd_label(e)),nlp_date)
#print("if main_pred -> date_trigger -> elif e not",e)
#print("time_similarity",time_similarity)
#print("date_similarity",date_similarity)
if time_similarity > special_pred_theshold or date_similarity > special_pred_theshold:
if stat_count > 1:
stat_count -= 1
else: stat_count += 1
if time_similarity > date_similarity:
anchors_predicates.append((e, time_similarity))
else: anchors_predicates.append((e, date_similarity))
anchors_predicates.append((e, stat_current/stat_count))
elif e not in [ap[0] for ap in anchors_predicates]:
stat_count = 0
stat_current = 0
for af in anchors_focuses:
stat_current += get_similarity_by_words(get_nlp(get_wd_label(e)),get_nlp(af))
stat_count += 1
if date_trigger:
time_similarity = get_similarity_by_words(get_nlp(get_wd_label(e)),nlp_time)
date_similarity = get_similarity_by_words(get_nlp(get_wd_label(e)),nlp_date)
#print("if not main_pred -> date_trigger -> elif e not",e)
#print("time_similarity",time_similarity)
#print("date_similarity",date_similarity)
if time_similarity > special_pred_theshold or date_similarity > special_pred_theshold:
if stat_count > 1:
stat_count -= 1
else: stat_count += 1
if time_similarity > date_similarity:
anchors_predicates.append((e, time_similarity))
                        else: anchors_predicates.append((e, date_similarity))
anchors_predicates.append((e, stat_current/stat_count))
#print("filtered_paths",filtered_paths)
#for p in filtered_paths:
# for af in anchors_focuses:
# #print(af, p)
# for e in p:
# #print(af,get_wd_label(e))
# if is_wd_predicate(e):# and e not in [ap[0] for ap in anchors_predicates]:
# #print(af,get_wd_label(e))
# anchors_predicates.append([e, get_similarity_by_words(get_nlp(get_wd_label(e)),get_nlp(af))])
#print("\nanchors_predicates",anchors_predicates)
anchors_predicates_filtered = []
[anchors_predicates_filtered.append(ap) for ap in anchors_predicates if ap not in anchors_predicates_filtered]
#print("\anchors_predicates_filtered",anchors_predicates_filtered)
anchors_predicates = [a for a in sorted(anchors_predicates_filtered, key=lambda x: x[-1], reverse=True) if a[1] > threshold]
for thres in [e/100 for e in reversed(range(10, int(threshold*100)+10, 10))]:
#print("anchors_predicates current thres",thres)
anchors_predicates = [a for a in sorted(anchors_predicates_filtered, key=lambda x: x[-1], reverse=True) if a[1] > thres]
if anchors_predicates:
break
#print("len(anchors_predicates sorted)",len(anchors_predicates))
#print("anchors_predicates sorted",anchors_predicates)
#anchors_predicates_filtered = []
#for ap in anchors_predicates:
# for af in anchors_focuses:
# anchors_predicates_filtered.append([ap[0],get_similarity_by_words(get_nlp(get_wd_label(ap[0])),get_nlp(af))])
#
#anchors_predicates_filtered = [a for a in sorted(anchors_predicates_filtered, key=lambda x: x[-1], reverse=True) if a[1] > 0]
#for thres in [e/100 for e in reversed(range(10, int(threshold*100)+10, 10))]:
# print("anchors_predicates_filtered current thres",thres)
# if not anchors_predicates_filtered:
# anchors_predicates_filtered = anchors_predicates
# break
# anchors_predicates_filtered = [a for a in sorted(anchors_predicates_filtered, key=lambda x: x[-1], reverse=True) if a[1] > thres]
# if len(anchors_predicates) > 10:
# break
#print("len(anchors_predicates_filtered)",len(anchors_predicates_filtered))
#print("anchors_predicates_filtered",anchors_predicates_filtered)
#
#anchors_predicates=[]
#[anchors_predicates.append(apf) for apf in anchors_predicates_filtered if apf not in anchors_predicates]
#print("len(anchors_predicates)",len(anchors_predicates))
#print("anchors_predicates",anchors_predicates)
tuples_unique_ids = []
tuples_unique_predicate_ids = []
hypothesises_tuples = []
for ap in anchors_predicates:
#print("ap",ap)
for fp in filtered_paths:
#if "Q4985" in fp:
# print("Q4985 in fp",fp, ap)
for i, e in enumerate(fp):
#print(e)
if e == ap[0] and i>1 and i<len(fp)-1:
#print(i, [fp[i-1], fp[i], fp[i+1]])
hypothesis_tuple = [fp[i-1], fp[i], fp[i+1]]
if hypothesis_tuple not in hypothesises_tuples:
hypothesises_tuples.append(hypothesis_tuple)
if hypothesis_tuple[0] not in tuples_unique_ids:
tuples_unique_ids.append(hypothesis_tuple[0])
if hypothesis_tuple[1] not in tuples_unique_predicate_ids:
tuples_unique_predicate_ids.append(hypothesis_tuple[1])
if hypothesis_tuple[2] not in tuples_unique_ids:
tuples_unique_ids.append(hypothesis_tuple[2])
#if "Q4985" in hypothesis_tuple:
# print("Q4985 hypothesis_tuple",hypothesis_tuple, ap,fp)
#print("tuples_unique_ids",tuples_unique_ids)
#print("tuples_unique_predicate_ids",tuples_unique_predicate_ids)
hypothesises_unique_ids = [t for t in tuples_unique_ids if get_wd_label(t).lower() not in anchors_focuses]
if len(hypothesises_unique_ids)>0 and len(tuples_unique_ids)>0:
max_reward *= len(hypothesises_unique_ids)/len(tuples_unique_ids)
#print("hypothesises_unique_ids",hypothesises_unique_ids)
#print("hypothesises_tuples",hypothesises_tuples)
#print("hypothesises_tuples",hypothesises_tuples)
#print([a[0] for a in anchors_predicates])
#keywords_ids = [i for j in [get_wd_ids(k) for k in anchors_focuses if get_wd_ids(k)] for i in j]
#print("anchors_focuses",keywords_ids)
#print(extract_ids(themes[0]))
#print(extract_ids(themes_enhanced))
#keywords_ids = []
#[keywords_ids.append(i) for i in extract_ids(themes[0]) + extract_ids(themes_enhanced) if i not in keywords_ids]
#print("keywords_ids",keywords_ids)
#print("anchors_predicates",anchors_predicates)
#print("-------START FILTERING-------")
hypothesises = []
hypothesises_all = []
hypothesises_tuples_len = len(hypothesises_tuples)
keywords_similarity_threshold = 0.9
tmp_to_find="" # for debugging, set the ID of the element to track in the log as print
for ht in hypothesises_tuples:
if tmp_to_find in ht: print("ht",ht)
if ht[1] in [a[0] for a in anchors_predicates]:
for i_af, af in enumerate(anchors_focuses):
hypo_sum = 0
nlp_af = get_nlp(af)
nlp_ht0 = get_nlp(get_wd_label(ht[0]))
nlp_ht2 = get_nlp(get_wd_label(ht[2]))
if not nlp_ht2:
break
af_lemma = ' '.join([e.lower_ for e in nlp_af if e.pos_ != "DET"])
ht0_lemma = ' '.join([e.lower_ for e in nlp_ht0 if e.pos_ != "DET"])
ht2_lemma = ' '.join([e.lower_ for e in nlp_ht2 if e.pos_ != "DET"])
#if get_wd_label(ht[0]).lower() not in anchors_focuses and get_wd_label(ht[2]).lower() not in anchors_focuses:
# for es in get_entity_similarity(ht[0], wn[1], anchors_focuses, max_reward=max_reward):
# hypo_sum += es
if (
nlp_af.text.lower() != nlp_ht2.text.lower()
and af_lemma != nlp_ht2[0].text.lower()
and nlp_af.text.lower() != ht2_lemma
and af_lemma != ht2_lemma
):
if date_trigger:
if is_timestamp(ht[0]):
for es in get_entity_similarity(ht[0], "date", anchors_focuses, max_reward=max_reward):
hypo_sum += es
#print("if date hypo_sum",ht[0], "date",ht[0], es, hypo_sum)
else: hypo_sum += get_similarity_by_words(nlp_ht2, nlp_af)
else: hypo_sum += get_similarity_by_words(nlp_ht2, nlp_af)
if i_af in w_positions:
for wn in w_names:
if i_af == wn[0]:
for es in get_entity_similarity(ht[0], wn[1], anchors_focuses, max_reward=max_reward):
hypo_sum += es
if tmp_to_find in ht: print("if i_af hypo_sum","ht[0], wn[1], es",ht[0], wn[1], es,hypo_sum)
ht0_sum = 0
ht2_sum = 0
if is_timestamp(ht[0]): ht0_label = ht[0]
else: ht0_label = get_wd_label(ht[0]).lower()
if is_timestamp(ht[2]): ht2_label = ht[2]
else: ht2_label = get_wd_label(ht[2]).lower()
for tk in theme_keywords:
if tmp_to_find in ht: print("tk",tk)
if tmp_to_find in ht: print("ht0_label",ht0_label)
if tmp_to_find in ht: print("ht2_label",ht2_label)
nlp_tk = get_nlp(tk.text.lower())
ht0_label_similarity = get_nlp(ht0_label).similarity(nlp_tk)
ht2_label_similarity = get_nlp(ht2_label).similarity(nlp_tk)
if tmp_to_find in ht: print("ht0_label_similarity",ht0_label_similarity)
if tmp_to_find in ht: print("ht2_label_similarity",ht2_label_similarity)
#
if ht0_label_similarity > keywords_similarity_threshold and ht[1] in main_predicate_ids:
if tmp_to_find in ht: print("ht0_label",ht0_label)
for wn in w_names_only:
for es in get_entity_similarity(ht[2], wn, anchors_focuses, max_reward=max_reward*3):
if tmp_to_find in ht: print("ht0_sum main_predicate_ids before",ht0_sum)
ht0_sum += es
if tmp_to_find in ht: print("theme_keywords ht0_sum ht[2], wn, es",ht[2], wn, es, ht0_sum)
if tmp_to_find in ht: print("ht0_label",ht2_label,es, ht0_sum, ht)
elif ht0_label_similarity > keywords_similarity_threshold:
if tmp_to_find in ht: print("ht0_label",ht0_label)
for wn in w_names_only:
for es in get_entity_similarity(ht[2], wn, anchors_focuses, max_reward=max_reward*2):
if tmp_to_find in ht: print("ht0_sum before",ht0_sum)
ht0_sum += es
if tmp_to_find in ht: print("theme_keywords not main_predicate_ids ht0_sum ht[2], wn, es",ht[2], wn, es, ht0_sum)
if tmp_to_find in ht: print("ht0_label",ht2_label,es, ht0_sum, ht)
#
if ht2_label_similarity > keywords_similarity_threshold and ht[1] in main_predicate_ids:
if tmp_to_find in ht: print("ht2_label",ht2_label)
for wn in w_names_only:
for es in get_entity_similarity(ht[0], wn, anchors_focuses, max_reward=max_reward*3):
if tmp_to_find in ht: print("ht2_sum before",ht0_sum)
ht2_sum += es
if tmp_to_find in ht: print("theme_keywords main_predicate_ids ht2_sum ht[0], wn, es",ht[0], wn, es, ht2_sum)
if tmp_to_find in ht: print("ht2_label",ht0_label,es, ht2_sum, ht)
elif ht2_label_similarity > keywords_similarity_threshold:
if tmp_to_find in ht: print("ht2_label",ht2_label)
for wn in w_names_only:
for es in get_entity_similarity(ht[0], wn, anchors_focuses, max_reward=max_reward*2):
if tmp_to_find in ht: print("ht2_sum before",ht0_sum)
ht2_sum += es
if tmp_to_find in ht: print("theme_keywords not main_predicate_ids ht2_sum ht[0], wn, es",ht[0], wn, es, ht2_sum)
if tmp_to_find in ht: print("ht2_label",ht0_label,es, ht2_sum, ht)
for ap in anchors_predicates:
if ap[0] == ht[1]:
for wn in w_names_only:
for es in get_entity_similarity(ht[0], wn, anchors_focuses, max_reward=max_reward*2):
ht0_sum += es
if tmp_to_find in ht: print("anchors_predicates w_names_only ht0_sum ht[0], wn, es",ht[0], wn, es, ht0_sum)
for es in get_entity_similarity(ht[2], wn, anchors_focuses, max_reward=max_reward*2):
ht2_sum += es
if tmp_to_find in ht: print("anchors_predicates w_names_only ht2_sum ht[2], wn, es",ht[2], wn, es, ht2_sum)
for tk in theme_keywords:
if tmp_to_find in ht: print("anchors_predicates tk",tk)
if tmp_to_find in ht: print("anchors_predicates ht0_label",ht0_label)
if tmp_to_find in ht: print("anchors_predicates ht2_label",ht2_label)
nlp_tk = get_nlp(tk.text.lower())
ht0_label_similarity = get_nlp(ht0_label).similarity(nlp_tk)
ht2_label_similarity = get_nlp(ht2_label).similarity(nlp_tk)
if tmp_to_find in ht: print("anchors_predicates ht0_label_similarity",ht0_label_similarity)
if tmp_to_find in ht: print("anchors_predicates ht2_label_similarity",ht2_label_similarity)
if ht0_label_similarity > keywords_similarity_threshold and ht[1] in main_predicate_ids:
if tmp_to_find in ht: print("anchors_predicates ht0_label",ht0_label)
for wn in w_names_only:
for es in get_entity_similarity(ht[2], wn, anchors_focuses, max_reward=max_reward*2):
if tmp_to_find in ht: print("anchors_predicates ht0_sum main_predicate_ids before",ht0_sum)
ht0_sum += es
if tmp_to_find in ht: print("anchors_predicates theme_keywords ht0_sum ht[2], wn, es",ht[2], wn, es, ht0_sum)
if tmp_to_find in ht: print("anchors_predicates ht0_label",ht2_label,es, ht0_sum, ht)
elif ht0_label_similarity > keywords_similarity_threshold:
if tmp_to_find in ht: print("anchors_predicates ht0_label",ht0_label)
for wn in w_names_only:
for es in get_entity_similarity(ht[2], wn, anchors_focuses, max_reward=max_reward):
if tmp_to_find in ht: print("anchors_predicates ht0_sum before",ht0_sum)
ht0_sum += es
if tmp_to_find in ht: print("anchors_predicates theme_keywords not main_predicate_ids ht0_sum ht[2], wn, es",ht[2], wn, es, ht0_sum)
if tmp_to_find in ht: print("anchors_predicates ht0_label",ht2_label,es, ht0_sum, ht)
if ht2_label_similarity > keywords_similarity_threshold and ht[1] in main_predicate_ids:
if tmp_to_find in ht: print("anchors_predicates ht2_label",ht2_label)
for wn in w_names_only:
for es in get_entity_similarity(ht[0], wn, anchors_focuses, max_reward=max_reward*2):
if tmp_to_find in ht: print("anchors_predicates ht2_sum before",ht0_sum)
ht2_sum += es
if tmp_to_find in ht: print("anchors_predicates theme_keywords main_predicate_ids ht2_sum ht[0], wn, es",ht[0], wn, es, ht2_sum)
if tmp_to_find in ht: print("anchors_predicates ht2_label",ht0_label,es, ht2_sum, ht)
elif ht2_label_similarity > keywords_similarity_threshold:
if tmp_to_find in ht: print("anchors_predicates ht2_label",ht2_label)
for wn in w_names_only:
for es in get_entity_similarity(ht[0], wn, anchors_focuses, max_reward=max_reward):
if tmp_to_find in ht: print("anchors_predicates ht2_sum before",ht0_sum)
ht2_sum += es
if tmp_to_find in ht: print("anchors_predicates theme_keywords not main_predicate_ids ht2_sum ht[0], wn, es",ht[0], wn, es, ht2_sum)
if tmp_to_find in ht: print("anchors_predicates ht2_label",ht0_label,es, ht2_sum, ht)
if date_trigger and is_timestamp(ht0_label) and ht2_label in anchors_focuses:
hypo_sum += ht0_sum
#print("is_timestamp(ht0_label) hypo_sum", hypo_sum)
elif date_trigger and is_timestamp(ht2_label) and ht0_label in anchors_focuses:
hypo_sum += ht2_sum
#print("is_timestamp(ht2_label) hypo_sum", hypo_sum)
elif ht2_label in anchors_focuses and ht0_label not in anchors_focuses:
hypo_sum += ht2_sum
if tmp_to_find in ht: print("ht2_label hypo_sum in anchors_focuses", hypo_sum)
elif ht0_label in anchors_focuses and ht2_label not in anchors_focuses:
hypo_sum += ht0_sum
if tmp_to_find in ht: print("ht0_label hypo_sum in anchors_focuses", hypo_sum)
else:
hypo_sum += ht0_sum
hypo_sum += ht2_sum
if tmp_to_find in ht: print("else in anchors_focuses hypo_sum", hypo_sum)
if tmp_to_find in ht: print("hypo_sum",hypo_sum)
if tmp_to_find in ht: print("ap[1]",ap[1])
hypo_sum *= ap[1]
if tmp_to_find in ht: print("ht[0], ht[2], hypo_sum",ht[0], ht[2], hypo_sum)
#if get_wd_label(ht[0]).lower() in anchors_focuses:
# if not i_af in w_positions:
# hypo_sum += abs(ap[1])
# else: hypo_sum -= abs(ap[1])
#if ht[0] == "Q202725": print("hypo_sum",hypo_sum)
#else: hypo_sum = ap[1]
#hypo_sum *= abs(ap[1])
#break
#print("ap",ap, "ht",ht, "hypo_sum",hypo_sum)
#print(ht)
#break
#hypo_sum = abs(hypo_sum)
#hypo_sum += abs(ap[1])
#hypo_sum += abs(ap[1])
#hypo_sum += ap[1]
#hypo_sum += abs(hypo_sum)
#hypo_sum *= abs(ap[1])
#hypo_sum = abs(hypo_sum)
#hypo_sum /= ap[1]
#hypo_sum -= ap[1]
#hypo_sum += hypo_sum/ap[1]
#print("ht[0]",ht[0])
#print("ht[2]",ht[2])
if (date_trigger and is_timestamp(ht[0])
and not is_timestamp(ht[2])
and is_in_list_by_similarity(get_wd_label(ht[2]).lower(), anchors_focuses, keywords_similarity_threshold)):
#print("is_timestamp(ht[0]")
hypo = ht[0]
elif (date_trigger and is_timestamp(ht[2])
and not is_timestamp(ht[0])
and is_in_list_by_similarity(get_wd_label(ht[0]).lower(), anchors_focuses, keywords_similarity_threshold)):
#print("is_timestamp(ht[2]")
hypo = ht[2]
elif date_trigger and is_timestamp(ht[0]) and is_timestamp(ht[2]): break
#is_in_list_by_similarity("Moby Dick", ["moby-dick","star wars"],0.9)
#elif get_wd_label(ht[0]).lower() in anchors_focuses:
# #print("get_wd_label(ht[0]).lower()",get_wd_label(ht[0]).lower())
# if not get_wd_label(ht[2]).lower() in anchors_focuses:
# hypo = ht[2]
# if get_wd_label(ht[2]).lower() in anchors_focuses:
# break
elif is_in_list_by_similarity(get_wd_label(ht[0]).lower(), anchors_focuses, keywords_similarity_threshold):
if not is_in_list_by_similarity(get_wd_label(ht[2]).lower(), anchors_focuses, keywords_similarity_threshold):
hypo = ht[2]
else: break
elif not is_in_list_by_similarity(get_wd_label(ht[0]).lower(), anchors_focuses, keywords_similarity_threshold):
if is_in_list_by_similarity(get_wd_label(ht[2]).lower(), anchors_focuses, keywords_similarity_threshold):
hypo = ht[0]
else:
hypothesises_all.append(ht[0])
if not hypothesises: hypothesises.append([ht[0], hypo_sum])
else:
if ht[0] in [h[0] for h in hypothesises]:
for i, h in enumerate(hypothesises):
if ht[0] == h[0]: hypothesises[i] = [ht[0], hypo_sum+hypothesises[i][1]]
else: hypothesises.append([ht[0], hypo_sum])
hypo = ht[2]
#elif not get_wd_label(ht[0]).lower() in anchors_focuses:
# if get_wd_label(ht[2]).lower() in anchors_focuses:
# hypo = ht[0]
# if not get_wd_label(ht[2]).lower() in anchors_focuses:
# hypothesises_all.append(ht[0])
# if not hypothesises: hypothesises.append([ht[0], hypo_sum])
# else:
# if ht[0] in [h[0] for h in hypothesises]:
# for i, h in enumerate(hypothesises):
# if ht[0] == h[0]: hypothesises[i] = [ht[0], hypo_sum+hypothesises[i][1]]
# else: hypothesises.append([ht[0], hypo_sum])
#
# #if "Q4985" in ht: print("Q4985 ALONE hypo and sum:", ht[0], hypo_sum)
# hypo = ht[2]
else:
#print("BREAK", ht)
break
#print("hypothesises",hypothesises)
#if "Q4985" in ht:
# print("Q4985 hypo and sum:", hypo, hypo_sum)
hypothesises_all.append(hypo)
if not hypothesises: hypothesises.append([hypo, hypo_sum])
else:
if hypo in [h[0] for h in hypothesises]:
for i, h in enumerate(hypothesises):
if hypo == h[0]: hypothesises[i] = [hypo, hypo_sum+hypothesises[i][1]]
else: hypothesises.append([hypo, hypo_sum])
#print("len(hypothesises_all)",len(hypothesises_all))
for i_h, h in enumerate(hypothesises):
h_sum = hypothesises_all.count(h[0])
#print("h_sum",h_sum)
#print("BEFORE: hypothesises[i_h][1]",hypothesises[i_h][1])
hypothesises[i_h][1] = hypothesises[i_h][1]/h_sum
#print("AFTER: hypothesises[i_h][1]",hypothesises[i_h][1])
#print("hypothesises_all",hypothesises_all)
hypothesises = sorted(hypothesises, key=lambda x: x[-1], reverse=True)
return hypothesises
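# A hedged, self-contained sketch (illustrative only, not part of the pipeline)
# of the final normalization step above: each hypothesis's accumulated score is
# averaged over the number of times the candidate was proposed, then candidates
# are ranked by that average. normalize_hypothesis_scores is a stand-in name.

```python
def normalize_hypothesis_scores(scored, all_proposals):
    # scored: [[candidate_id, accumulated_score], ...]
    # all_proposals: flat list of every candidate occurrence
    out = []
    for cand_id, score in scored:
        count = all_proposals.count(cand_id)
        out.append([cand_id, score / count if count else score])
    # highest averaged score first
    return sorted(out, key=lambda x: x[-1], reverse=True)

#print(normalize_hypothesis_scores([["Q1", 6.0], ["Q2", 4.0]],
#                                  ["Q1", "Q1", "Q1", "Q2"]))
# -> [["Q2", 4.0], ["Q1", 2.0]]
```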
#if verbose: print("-> Computing hypothesises...")
#hypothesises = get_hypothesises(q_nlp, q_predicates, q_themes, paths_keywords, paths_nodes_filtered, threshold=0.5, max_reward=2.0)
#if verbose: print("\n\n--> hypothesises:",hypothesises)
# In[69]:
def get_unique_hypo_paths(hypothesis, other_hypothesis, hypo_paths):
    # Recursively splits the candidate paths around the other hypothesises at
    # repeated predicates, keeping the sub-paths that still contain the given
    # hypothesis and do not repeat any element outside BANNED_IDS.
    BANNED_IDS = ["P31"]
filtered_hypo_paths = []
other_hypothesis = other_hypothesis[:]+[hypothesis]
for hp in hypo_paths:
#print("hp",hp)
path_is_used = False
len_hp = len(hp)
if len_hp >= 3:
#if "P31" in hp: print("hp",hp)
for e in hp:
if e in other_hypothesis:
e_index = hp.index(e)
for step_index in range(1, len_hp, 2): #len_hp-e_index
#print("step_index",step_index)
if e_index-step_index-2 >= 0:
if hp[e_index-step_index] == hp[e_index-step_index-2] and hp[e_index-step_index] not in BANNED_IDS:
#print(hp.index(e), hp)
#print("IN")
part_1 = hp[:e_index-step_index-0]
part_2 = hp[e_index-step_index-1:]
#print("hp[:",e_index-step_index-0,"]",part_1)
#print("hp[",e_index-step_index-1,":]",part_2)
#hp[e_index:]
sub_part_1 = None
sub_part_2 = None
if hypothesis in part_1:
sub_part_1 = get_unique_hypo_paths(hypothesis, other_hypothesis, [part_1,[]])
if hypothesis in part_2:
sub_part_2 = get_unique_hypo_paths(hypothesis, other_hypothesis, [part_2,[]])
if sub_part_1 != None:
if sub_part_1:
[filtered_hypo_paths.append(sp) for sp in sub_part_1]
                                    else:
                                        flag_too_much = False
                                        for e in hp:
                                            if hp.count(e) > 1 and e not in BANNED_IDS: flag_too_much=True
                                        if not flag_too_much: filtered_hypo_paths.append(part_1)
if sub_part_2 != None:
if sub_part_2:
[filtered_hypo_paths.append(sp) for sp in sub_part_2]
                                    else:
                                        flag_too_much = False
                                        for e in hp:
                                            if hp.count(e) > 1 and e not in BANNED_IDS: flag_too_much=True
                                        if not flag_too_much: filtered_hypo_paths.append(part_2)
path_is_used=True
else: break
for step_index in range(1, len_hp, 2):
#print("step_index",step_index)
if e_index+step_index+2 < len_hp:
if hp[e_index+step_index] == hp[e_index+step_index+2] and hp[e_index+step_index] not in BANNED_IDS:
#print(hp.index(e), hp)
part_1 = hp[:e_index+step_index+2]
part_2 = hp[e_index+step_index+1:]
#print("hp[:",e_index+step_index+2,"]",part_1)
#print("hp[",e_index+step_index+1,":]",part_2)
#print("part_1",part_1)
#print("part_2",part_2)
sub_part_1 = None
sub_part_2 = None
if hypothesis in part_1:
sub_part_1 = get_unique_hypo_paths(hypothesis, other_hypothesis, [part_1,[]])
if hypothesis in part_2:
sub_part_2 = get_unique_hypo_paths(hypothesis, other_hypothesis, [part_2,[]])
if sub_part_1 != None:
if sub_part_1:
[filtered_hypo_paths.append(sp) for sp in sub_part_1]
                                    else:
                                        flag_too_much = False
                                        for e in hp:
                                            if hp.count(e) > 1 and e not in BANNED_IDS: flag_too_much=True
                                        if not flag_too_much: filtered_hypo_paths.append(part_1)
if sub_part_2 != None:
if sub_part_2:
[filtered_hypo_paths.append(sp) for sp in sub_part_2]
else:
flag_too_much = False
for e in hp:
if hp.count(e) > 1 and e not in BANNED_IDS: flag_too_much=True
if not flag_too_much: filtered_hypo_paths.append(part_2)
path_is_used=True
else: break
if path_is_used == False:
flag_too_much = False
for e in hp:
if hp.count(e) > 1 and e not in BANNED_IDS: flag_too_much=True
if not flag_too_much: filtered_hypo_paths.append(hp)
#else:
# filtered_hypo_paths.append(hp)
return filtered_hypo_paths
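# A hedged sketch of the duplicate test applied repeatedly above: a path is
# kept only if no element outside BANNED_IDS occurs more than once.
# has_repeats is an illustrative helper, not part of the pipeline.

```python
def has_repeats(path, banned=("P31",)):
    # True if any non-banned element occurs more than once in the path
    return any(path.count(e) > 1 and e not in banned for e in path)

#print(has_repeats(["Q1", "P50", "Q2"]))               # False
#print(has_repeats(["Q1", "P50", "Q2", "P50", "Q3"]))  # True
#print(has_repeats(["Q1", "P31", "Q2", "P31", "Q3"]))  # False: P31 is banned
```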
# In[70]:
def match_hypothesises_worker(in_mp_queue, out_mp_queue):
    # Worker process: consumes candidate paths from in_mp_queue until the None
    # sentinel arrives, verifies each hop against the entity's Wikidata
    # statements and qualifiers, and emits the confirmed ("golden") paths.
    golden_paths = []
    sentinel = None
for mpu in iter(in_mp_queue.get, sentinel):
#for mp in mp_similarities_untagged:
#print("AFTER mpu",mpu)
for i_e, e in enumerate(mpu[1]):
if i_e <= 1 or i_e >= len(mpu[1])-2:
continue
if not is_wd_entity(e):
continue
mp_e_statements = get_all_statements_of_entity(e)
mp_predicate_tagging_index = mpu[1][i_e+1].find("-")
if mp_predicate_tagging_index != -1:
mp_predicate = mpu[1][i_e+1][:mp_predicate_tagging_index]
else:
mp_predicate = mpu[1][i_e+1]
extended_paths = get_statements_by_id(mp_e_statements, e, mp_predicate, qualifier=False, statement_type="predicate")
extended_paths_qualifier = get_statements_by_id(mp_e_statements, e, mp_predicate, qualifier=True, statement_type="qualifier_predicate")
ep_predicate_tagging_index_plus_1 = mpu[1][i_e+1].find("-")
if ep_predicate_tagging_index_plus_1 != -1:
ep_predicate_plus_1 = mpu[1][i_e+1][:ep_predicate_tagging_index_plus_1]
else:
ep_predicate_plus_1 = mpu[1][i_e+1]
ep_predicate_tagging_index_minus_1 = mpu[1][i_e-1].find("-")
if ep_predicate_tagging_index_minus_1 != -1:
ep_predicate_minus_1 = mpu[1][i_e-1][:ep_predicate_tagging_index_minus_1]
else:
ep_predicate_minus_1 = mpu[1][i_e-1]
for ep in extended_paths_qualifier:
if (ep['entity']['id'] == mpu[1][i_e] and
ep['predicate']['id'] == ep_predicate_minus_1 and
ep['object']['id'] == mpu[1][i_e-2] and
ep['qualifiers']):
for q in ep['qualifiers']:
if(q['qualifier_predicate']["id"] == ep_predicate_plus_1 and
q['qualifier_object']["id"] == mpu[1][i_e+2]):
if mpu[1] not in golden_paths:
golden_paths.append(mpu[1])
if (ep['entity']['id'] == mpu[1][i_e+2] and
ep['predicate']['id'] == ep_predicate_plus_1 and
ep['object']['id'] == mpu[1][i_e] and
ep['qualifiers']):
for q in ep['qualifiers']:
if(q['qualifier_predicate']["id"] == ep_predicate_minus_1 and
q['qualifier_object']["id"] == mpu[1][i_e-2]):
if mpu[1] not in golden_paths:
golden_paths.append(mpu[1])
for ep in extended_paths:
if (ep['entity']['id'] == mpu[1][i_e] and
ep['predicate']['id'] == ep_predicate_minus_1 and
ep['object']['id'] == mpu[1][i_e-2] and
ep['qualifiers']):
for q in ep['qualifiers']:
if(q['qualifier_predicate']["id"] == ep_predicate_plus_1 and
q['qualifier_object']["id"] == mpu[1][i_e+2]):
if mpu[1] not in golden_paths:
golden_paths.append(mpu[1])
if (ep['entity']['id'] == mpu[1][i_e+2] and
ep['predicate']['id'] == ep_predicate_plus_1 and
ep['object']['id'] == mpu[1][i_e] and
ep['qualifiers']):
for q in ep['qualifiers']:
if(q['qualifier_predicate']["id"] == ep_predicate_minus_1 and
q['qualifier_object']["id"] == mpu[1][i_e-2]):
if mpu[1] not in golden_paths:
golden_paths.append(mpu[1])
out_mp_queue.put(golden_paths)
# In[71]:
def count_hops(best_path, question):
    # Counts the distinct (non-predicate) entities of best_path whose label
    # appears in the question; used as the hop factor k for SPO paths (min. 1).
    hop_entities = []
    [hop_entities.append(e) for e in best_path if get_wd_label(e).lower() in question.lower() and not is_wd_predicate(e) and get_wd_label(e).lower() not in [get_wd_label(w).lower() for w in hop_entities]]
    return max(len(hop_entities), 1)
# In[72]:
# TODO / REDO
# is_wd_entity not taking care of timestamps
def list_by_n(l, i):
    # Returns all length-i sliding windows over l; the empty edge slices are
    # dropped by the truthiness check. (l[j-i:i+j-i] simplifies to l[j-i:j].)
    list_n = []
    for j in range(0, len(l) + 1):
        tmp = l[j - i:j]
        if tmp:
            list_n.append(tmp)
    return list_n
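# A hedged sketch: for positive i, list_by_n above is equivalent to taking all
# length-i sliding windows over l, since the slice l[j-i:i+j-i] reduces to
# l[j-i:j]. sliding_windows is an illustrative rewrite, not part of the
# pipeline.

```python
def sliding_windows(l, i):
    if i <= 0:
        return []
    # one window starting at each position that still fits a full window
    return [l[j:j + i] for j in range(len(l) - i + 1)]

#print(sliding_windows([1, 2, 3, 4], 3))  # [[1, 2, 3], [2, 3, 4]]
```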
def match_hypothesises(graph, question, themes, predicates, hypothesises, paths, threshold=0.8, max_reward=2.0, winner_threshold_diff=7, time_sensitive=False, cores=mp.cpu_count(), deep_match=4):
    # Ranks the hypothesis paths against the question: filters and scores the
    # candidate paths, verifies them in parallel against the knowledge base,
    # and returns the best ("golden") paths sorted by similarity.
    BANNED_IDS = ["P31"]
    LOCATION_FILTER = ["GPE", "FAC", "LOC", "PERSON"]
filtered_paths = []
#print("hypothesises",hypothesises)
if time_sensitive:
hypothesises_time = [h for h in hypothesises if is_timestamp(h[0])]
if hypothesises_time:
hypothesises = hypothesises_time
#if 'where' in [t.lower_ for t in question if t.tag_=="WRB"]:
# print("in where condition")
# hypothesises_location = []
# for h in hypothesises:
# word_label = get_wd_label(h[0])
# word_ents = get_kb_ents(word_label)
# entities = [e.label_ for e in word_ents]
# [hypothesises_location.append(e) for e in entities if e in LOCATION_FILTER]
# print(h[0],word_label,word_ents,entities)
#
# if hypothesises_location:
# hypothesises = hypothesises_location
#print(word_id,word_label,entity_type,)
#print("hypothesises",hypothesises)
for h in hypothesises:
other_hypothesis = [e[0] for e in hypothesises if e != h]
hypo_paths = [p for p in paths if h[0] in p]
filtered_hypo_paths = []
[filtered_hypo_paths.append(p) for p in get_unique_hypo_paths(h[0], other_hypothesis, hypo_paths) if p not in filtered_hypo_paths]
        for p in filtered_hypo_paths:
            # skip any path that repeats a (non-banned) element
            if any(p.count(e) > 1 and e not in BANNED_IDS for e in p):
                continue
if p[-1] == h[0]:
reversed_path = list(reversed(p))
if reversed_path not in filtered_paths:
filtered_paths.append(reversed_path)
else:
if p not in filtered_paths:
filtered_paths.append(p)
#print("filtered_paths",filtered_paths)
#print("1 hypothesises",hypothesises)
# check if first hypothesis is clear winner
    winner_threshold_diff = 2*max_reward  # hard-coded: overrides the winner_threshold_diff argument (TBD)
first_is_winner = False
if len(hypothesises) > 1:
hypo_diff = hypothesises[0][1]-hypothesises[1][1]
#print("hypo_diff",hypo_diff)
if hypo_diff > winner_threshold_diff:
first_is_winner = True
#print("first_is_winner",first_is_winner)
w_positions, w_names = w_converter(question)
sorted_golden_paths = []
if not sorted_golden_paths:
for p in filtered_paths:
if len(p)>1 and p[0] == hypothesises[0][0]:
if p not in sorted_golden_paths:
sorted_golden_paths.append(p)
if len(p)>1 and p[-1] == hypothesises[0][0]:
p = list(reversed(p))
if p not in sorted_golden_paths:
sorted_golden_paths.append(p)
if not sorted_golden_paths:
for p in filtered_paths:
if len(p)>1 and hypothesises[0][0] in p:
if p not in sorted_golden_paths:
sorted_golden_paths.append(p)
meaningful_paths = []
theme_ids = sum([t[1] for t in themes[0]],[])
for p in filtered_paths:
counter = 0
for ti in theme_ids:
if ti in p and p not in meaningful_paths:
counter += 1
for pred in [p[1] for p in predicates]:
for e in pred:
if e in p:
counter += 1
else:
counter = 0
for i_wp, wp in enumerate(w_positions):
if w_names[i_wp][1] and wp<len(p):
for es in get_entity_similarity(p[wp], w_names[i_wp][1], [], max_reward=max_reward):
counter += es
#print("p[wp], w_names[i_wp][1], es",p[wp], w_names[i_wp][1], es)
for hypo in hypothesises:
if hypo[0] in p:
counter += 1
if hypo[0] == p[0]:
counter += 1
if hypo[0] == p[-1]:
counter += 1
if counter > 0: meaningful_paths.append((counter, p))
#print("before len(meaningful_paths):",len(meaningful_paths))
meaningful_paths = [mp for mp in meaningful_paths if len(mp[1])>2]
#print("after len(meaningful_paths):",len(meaningful_paths))
#for mpc in meaningful_paths.copy():
# reversed_mp = list(reversed(mpc[1]))
# #print(mp[1],reversed_mp)
# if reversed_mp not in [p[1] for p in meaningful_paths]:
# if time_sensitive:
# if is_timestamp(reversed_mp[0]):
# #print("time_sensitive added")
# #print(mpc[1],reversed_mp)
# meaningful_paths.append([mpc[0],reversed_mp])
# else:
# #print("added")
# #print(mpc[1],reversed_mp)
# meaningful_paths.append([mpc[0],reversed_mp])
#print("meaningful_paths",meaningful_paths)
#print("after after len(meaningful_paths):",len(meaningful_paths))
meaningful_paths = sorted(meaningful_paths, key=lambda x: x[0], reverse=True)
#print("\n")
#looped_paths = []
#for hypo in hypothesises:
# for mp in meaningful_paths:
# if mp[1][0] == hypo[0] or mp[1][-1] == hypo[0]:
# if graph.has_node(mp[1][0]) and graph.has_node(mp[1][-1]):
# path_tmp = list(nx.all_simple_paths(graph, mp[1][0],mp[1][-1]))
# if len(path_tmp)>1:
# for p in path_tmp:
# if p not in [lp[1] for lp in looped_paths]:
# looped_paths.append((mp[0],p))
# #else:
# # if not graph.has_node(mp[1][0]):
# # print("MISSING NODE:", mp[1][0], get_wd_label(mp[1][0]))
# # if not graph.has_node(mp[1][-1]):
# # print("MISSING NODE:", mp[1][-1], get_wd_label(mp[1][-1]))
#
##print("len(looped_paths)", len(looped_paths))
#print("looped_paths",looped_paths)
#looped_paths_untagged = []
#for lp in looped_paths:
# row_tmp = []
# for w in lp[1]:
# if w.find("-") > 0:
# row_tmp.append(w[:w.find("-")])
# else:
# row_tmp.append(w)
# looped_paths_untagged.append((lp[0],row_tmp))
#
#
#
#print("looped_paths_untagged",looped_paths_untagged)
looped_paths_untagged = meaningful_paths
mp_similarities_untagged = []
#mp_similarities_tagged = []
mp_similarities_untagged_hypo = []
#mp_similarities_tagged_hypo = []
question_enhanced = []
for q in question:
if q.lemma_ == "where": question_enhanced.append("location")
elif q.lemma_ == "when": question_enhanced.append("date")
elif q.lemma_ == "who": question_enhanced.append("person")
elif q.lemma_ == "why": question_enhanced.append("cause")
else: question_enhanced.append(q.text)
question_enhanced = nlp(" ".join([q for q in question_enhanced]))
#print("question",question)
#print("question_enhanced",question_enhanced)
#print("[h[0] for h in hypothesises]",[h[0] for h in hypothesises])
for i_lp, lp in enumerate(looped_paths_untagged):
#print(lp)
sentence = get_nlp(" ".join([get_wd_label(w) for w in lp[1]]))
similarity = get_similarity_by_words(sentence, question)
similarity_enhanced = get_similarity_by_words(sentence, question_enhanced)
similarity_avg = (similarity+similarity_enhanced)/2*lp[0]
#print(sentence,question,question_enhanced)
#print("similarity", similarity)
#print("question_enhanced", similarity_enhanced)
#mp_similarities_untagged.append((similarity_enhanced,lp[1]))
#mp_similarities_tagged.append((similarity_enhanced,looped_paths[i_lp][1]))
if lp[1][0] in [h[0] for h in hypothesises]:
#print("lp[1][0]",lp[1][0])
mp_similarities_untagged_hypo.append((similarity_avg, lp[1]))
#mp_similarities_tagged_hypo.append((similarity_avg, looped_paths[i_lp][1]))
mp_similarities_untagged.append((similarity_avg, lp[1]))
#mp_similarities_tagged.append((similarity_avg, looped_paths[i_lp][1]))
#print("mp_similarities_untagged",len(mp_similarities_untagged))
#print("mp_similarities_untagged_hypo",len(mp_similarities_untagged_hypo))
#print("mp_similarities_untagged",mp_similarities_untagged)
#mp_similarities_tagged = sorted(mp_similarities_tagged, key=lambda x: x[0], reverse=True)
#mp_similarities_tagged = [mp for mp in mp_similarities_tagged if mpu[0] > threshold]
mp_similarities_untagged = sorted(mp_similarities_untagged, key=lambda x: x[0], reverse=True)
mp_similarities_untagged = [mpu for mpu in mp_similarities_untagged if mpu[0] > threshold]
#print("mp_similarities_untagged",len(mp_similarities_untagged))
#print("mp_similarities_untagged",mp_similarities_untagged)
    [mp_similarities_untagged.append(suh) for suh in mp_similarities_untagged_hypo if suh not in mp_similarities_untagged]
#[mp_similarities_tagged.append(sth) for sth in mp_similarities_tagged_hypo if not sth in mp_similarities_tagged]
#print("mp_similarities_untagged",len(mp_similarities_untagged))
#print("mp_similarities_tagged",len(mp_similarities_tagged))
#WH_FILTER = ["WDT", "WP", "WP$", "WRB"]
#wh_position = [w.i for w in question if w.tag_ in WH_FILTER][0]
#question_list = [w.lower_ for w in question if not w.is_punct]
#question_list_filtered = [w.lower_ for w in question if not w.is_punct and w.tag_ not in WH_FILTER]
if cores <= 0: cores = 1
sentinel = None
out_mp_queue = mp.Queue()
in_mp_queue = mp.Queue()
for mpu in mp_similarities_untagged:
#print("BEFORE mpu",mpu)
in_mp_queue.put(mpu)
procs = [mp.Process(target = match_hypothesises_worker, args = (in_mp_queue, out_mp_queue)) for i in range(cores)]
golden_paths = []
for proc in procs:
proc.daemon = True
proc.start()
for proc in procs:
in_mp_queue.put(sentinel)
for proc in procs:
golden_paths.extend(out_mp_queue.get())
for proc in procs:
proc.join()
golden_paths = [gp for gp in golden_paths if gp]
#print("golden_paths",golden_paths)
#print("len(golden_paths)",len(golden_paths))
sorted_golden_paths = []
for gp in golden_paths:
tmp_gp = []
#if gp[0] in [h[0] for h in hypothesises]:
for e in gp:
if is_wd_entity(e):
tmp_gp.append(get_wd_label(e))
            else:
                dash_index = e.find("-")  # strip the "-n" disambiguation tag if present
                tmp_gp.append(get_wd_label(e[:dash_index] if dash_index != -1 else e))
nlp_gp = get_nlp(" ".join(tmp_gp))
sorted_golden_paths.append((get_similarity_by_words(question,nlp_gp), gp))
#
sorted_golden_paths = sorted(sorted_golden_paths, key=lambda x: x[0], reverse=True)
#print("sorted_golden_paths",sorted_golden_paths)
#print("len(sorted_golden_paths) BEFORE",len(sorted_golden_paths))
#print("w_positions[0]",w_positions[0])
#print("w_names[0][1]",w_names[0][1])
if sorted_golden_paths:
best_spos = [sorted_golden_paths[0]]
#print("initial best_spos",best_spos)
#spo = spo_k*3-1
for sgp in sorted_golden_paths:
for dm in range(deep_match):
if dm==0: spo_k=3
else: spo_k = (dm+1)*3-1
#print("spo_k",spo_k)
#print("sgp[1][:",spo_k,"]",sgp[1][:spo_k])
sentence = get_nlp(" ".join([get_wd_label(w) for i_w, w in enumerate(sgp[1][:spo_k]) if i_w != w_positions[0]]))
question_no_w = get_nlp(" ".join([q.text for q in question if q.lower_!=w_names[0][1].lower()]))
#print("question_no_w -> sentence",question_no_w,"->",sentence)
similarity = get_similarity_by_words(sentence, question_no_w)
#print("similarity",similarity)
#print("best_spos[0]",best_spos[0])
if best_spos[0][0] < similarity:
#print("best_spos[0][0] < similarity",best_spos[0][0] < similarity)
best_spos.insert(0,[similarity,sgp[1]])
#print("best_spos",best_spos)
sorted_golden_paths = [sgp[1] for sgp in best_spos]
#print("after sorted_golden_paths",sorted_golden_paths)
else:
for hypo in hypothesises:
#print("hypo",hypo[0])
for lp in [lp[1] for lp in meaningful_paths]:
if len(lp)>1 and lp[0] == hypo[0]:
if lp not in sorted_golden_paths:
sorted_golden_paths.append(lp)
if len(lp)>1 and lp[-1] == hypo[0]:
lp = list(reversed(lp))
if lp not in sorted_golden_paths:
sorted_golden_paths.append(lp)
if len(sorted_golden_paths) >= 1: break
#print("len(sorted_golden_paths) AFTER",len(sorted_golden_paths))
if not sorted_golden_paths:
for hypo in hypothesises:
for p in filtered_paths:
if len(p)>1 and p[0] == hypo[0]:
#print(p)
if p not in sorted_golden_paths:
sorted_golden_paths.append(p)
if len(p)>1 and p[-1] == hypo[0]:
p = list(reversed(p))
if p not in sorted_golden_paths:
sorted_golden_paths.append(p)
if len(sorted_golden_paths) >= 1: break
#print("len(sorted_golden_paths) AFTER AFTER",len(sorted_golden_paths))
if not sorted_golden_paths:
for hypo in hypothesises:
for p in filtered_paths:
if len(p)>1 and hypo[0] in p:
if p not in sorted_golden_paths:
sorted_golden_paths.append(p)
if len(sorted_golden_paths) >= 1: break
#print("len(sorted_golden_paths) AFTER AFTER AFTER",len(sorted_golden_paths))
golden_paths_filtered = []
for gp in sorted_golden_paths:
tmp_path = []
for i_e, e in enumerate(gp):
if i_e < len(gp)-2 and not is_wd_entity(e):
if e == gp[i_e+2]:
golden_paths_filtered.append(gp[:gp.index(e)+2])
break
else:
tmp_path.append(e)
else:
tmp_path.append(e)
if tmp_path:
for i_e, e in enumerate(tmp_path):
if is_wd_entity(e):
if tmp_path.count(e) > 1:
pass
else:
if tmp_path not in golden_paths_filtered:
golden_paths_filtered.append(tmp_path)
question_no_w = get_nlp(" ".join([q.text for q in question if q.lower_!=w_names[0][1].lower()]))
#print("question_no_w",question_no_w)
golden_paths_enhanced = []
for gpf in golden_paths_filtered:
sentence = get_nlp(" ".join([get_wd_label(w) for i_w, w in enumerate(gpf) if i_w != w_positions[0]]))
similarity = get_similarity_by_words(sentence, question_no_w)
golden_paths_enhanced.append((similarity,gpf))
#print("question_no_w -> sentence",question_no_w,"->",sentence,"->",similarity)
for mlp in meaningful_paths:
sentence = get_nlp(" ".join([get_wd_label(w) for i_w, w in enumerate(mlp[1]) if i_w != w_positions[0]]))
reversed_mlp = list(reversed(mlp[1]))
reversed_sentence = get_nlp(" ".join([get_wd_label(w) for i_w, w in enumerate(reversed_mlp) if i_w != w_positions[0]]))
if golden_paths_filtered and mlp[1][0] == golden_paths_filtered[0][0] and mlp[1] not in golden_paths_enhanced:
similarity = get_similarity_by_words(sentence, question_no_w)
golden_paths_enhanced.append((similarity,mlp[1]))
#print("question_no_w -> sentence",question_no_w,"->",sentence,"->",similarity)
elif golden_paths_filtered and mlp[1][0] == golden_paths_filtered[0][-1] and reversed_mlp not in golden_paths_enhanced:
similarity = get_similarity_by_words(reversed_sentence, question_no_w)
golden_paths_enhanced.append((similarity,reversed_mlp))
#print("question_no_w -> reversed_sentence",question_no_w,"->",reversed_sentence,"->",similarity)
#golden_paths_enhanced
#print("golden_paths_enhanced",golden_paths_enhanced)
sorted_golden_paths_enhanced = sorted(golden_paths_enhanced, key=lambda x: x[0], reverse=True)
#print("sorted_golden_paths_enhanced",sorted_golden_paths_enhanced)
golden_paths_enhanced = [gpe[1] for gpe in golden_paths_enhanced]
#print("len(golden_paths_filtered)",len(golden_paths_filtered))
#print("golden_paths_filtered",golden_paths_filtered)
golden_unique_paths = golden_paths_enhanced.copy()
for i_sgp, sgp in enumerate(golden_paths_enhanced):
for sgp_2 in golden_paths_enhanced:
if (is_sublist(sgp, sgp_2) and sgp!=sgp_2):
golden_unique_paths[i_sgp] = []
break
golden_unique_paths = [gup for gup in golden_unique_paths if gup]
hypothesises_names = [h[0] for h in hypothesises]
#print("golden_unique_paths",golden_unique_paths)
#print("before hypothesises_names",hypothesises_names)
#print("golden_unique_paths[0][0]",golden_unique_paths[0][0])
#print("hypothesises_names",hypothesises_names)
#if is_valide_wd_id(hypothesises_names[0]):
if not first_is_winner:
if golden_unique_paths and hypothesises_names:
if golden_unique_paths[0] and hypothesises_names[0]:
if golden_unique_paths[0][0]:
#for hn in hypothesises_names:
if not time_sensitive:
if (not is_wd_entity(hypothesises_names[0])
and is_wd_entity(golden_unique_paths[0][0])
or hypothesises_names[0] != golden_unique_paths[0][0]):
if golden_unique_paths[0][0] in hypothesises_names:
hypothesises_names.insert(0,golden_unique_paths[0][0])
elif (time_sensitive and is_timestamp(hypothesises_names[0]) and is_timestamp(golden_unique_paths[0][0])):
if golden_unique_paths[0][0] in hypothesises_names:
hypothesises_names.insert(0,golden_unique_paths[0][0])
elif time_sensitive:
is_found = False
for gup in golden_unique_paths:
if is_timestamp(gup[0]):
hypothesises_names.insert(0,gup[0])
is_found=True
break
elif is_timestamp(gup[-1]):
hypothesises_names.insert(0,gup[-1])
is_found=True
break
if not is_found:
best_hypo_name = hypothesises_names[0]
best_hypo_count = 0
for hn in hypothesises_names:
current_hypo_count = 0
for gpu in golden_unique_paths:
current_hypo_count += gpu.count(hn)
if current_hypo_count > best_hypo_count:
best_hypo_name = hn
best_hypo_count = current_hypo_count
hypothesises_names.insert(0,best_hypo_name)
else:
best_hypo_name = hypothesises_names[0]
best_hypo_count = 0
for hn in hypothesises_names:
current_hypo_count = 0
for gpu in golden_unique_paths:
current_hypo_count += gpu.count(hn)
if current_hypo_count > best_hypo_count:
best_hypo_name = hn
best_hypo_count = current_hypo_count
hypothesises_names.insert(0,best_hypo_name)
#print("hypothesises_names",hypothesises_names)
#print([gup for gup in golden_unique_paths if gup[0]])
golden_unique_paths_0 = [gup for gup in golden_unique_paths if gup[0] == hypothesises_names[0]]
if not golden_unique_paths_0:
golden_unique_paths_1 = []
#print("not golden_unique_paths")
for gup in golden_unique_paths:
#print("gup",gup)
#print("gup[-1]",gup[-1])
if hypothesises_names[0] == gup[-1]:
reversed_gup = list(reversed(gup))
#print("i am in ", reversed_gup)
golden_unique_paths_1.append(reversed_gup)
golden_unique_paths = golden_unique_paths_1
else:
golden_unique_paths = golden_unique_paths_0
#print("golden_unique_paths",golden_unique_paths)
#golden_unique_paths = [list(reversed(gup)) for gup in golden_unique_paths if gup[-1] == hypothesises_names[0]]
#print("after hypothesises_names",hypothesises_names)
hypothesises_names_filtered = []
[hypothesises_names_filtered.append(h) for h in hypothesises_names if h not in hypothesises_names_filtered]
#elif hypothesises_names[0] != golden_unique_paths[0][0]
golden_unique_paths = [hypothesises_names_filtered]+golden_unique_paths
return golden_unique_paths
#if verbose: print("-> Matching hypothesises...")
#start_time = time.time()
#golden_paths = match_hypothesises(graph, q_nlp, q_themes, q_predicates, hypothesises, paths_nodes_filtered, threshold=0.8, max_reward=2.0)
#end_time = time.time()
#print("Golden paths ->\tRunning time is {}s".format(round(end_time-start_time,2)))
#print(golden_paths)
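The fan-out above follows the classic sentinel-terminated queue pattern: fill an input queue, start daemon workers, push one `None` sentinel per worker, and collect one result batch per worker. A self-contained sketch of the same pattern (shown with threads instead of `multiprocessing` for brevity; `_worker` and the doubling step are placeholders, not the notebook's `match_hypothesises_worker`):

```python
import queue
import threading

def _worker(in_q, out_q):
    # Drain tasks until the None sentinel arrives, then emit one result
    # batch, mirroring match_hypothesises_worker / out_mp_queue above.
    results = []
    while True:
        item = in_q.get()
        if item is None:
            break
        results.append(item * 2)  # placeholder for the real per-path work
    out_q.put(results)

def fan_out(items, n_workers=4):
    in_q, out_q = queue.Queue(), queue.Queue()
    for it in items:
        in_q.put(it)
    workers = [threading.Thread(target=_worker, args=(in_q, out_q), daemon=True)
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for _ in workers:
        in_q.put(None)                 # one sentinel per worker
    collected = []
    for _ in workers:
        collected.extend(out_q.get())  # one batch per worker
    for w in workers:
        w.join()
    return sorted(collected)
```

Pushing exactly one sentinel per worker is what lets every worker terminate cleanly, and collecting exactly one batch per worker makes the final `join()` non-blocking.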
# In[107]:
def get_context_graph(answer, graph, themes,question, is_next=False):
answer_graph = graph.copy()
meaningful_ids = [answer[0][0]]
[[meaningful_ids.append(e) for e in p if e not in meaningful_ids] for p in answer[1:]]
#print("meaningful_ids",meaningful_ids)
#print("answer_1[1]",answer_1[1])
if len(answer)>1:
spo_k = count_hops(answer[1],question)
if spo_k>1: last_element = spo_k*3-1
else: last_element = spo_k*3
meaningful_ids_answer = [answer[0][0]]
if len(answer)>1:
[meaningful_ids_answer.append(e) for e in answer[1][:last_element] if e not in meaningful_ids_answer]
#print("before meaningful_ids_answer",meaningful_ids_answer)
for t in themes[0]:
if not any(item in t[1] for item in meaningful_ids_answer) and any(item in t[1] for item in meaningful_ids):
#print("not any(item in t[1] for item in meaningful_ids_answer) and has any item of t[1] in meaningful_ids")
[meaningful_ids_answer.append(e) for e in t[1]]
#print("after meaningful_ids_answer",meaningful_ids_answer)
#print("meaningful_ids",meaningful_ids)
if is_next:
graph_filtered=answer_graph
context_centers = [(get_wd_label(c),[c]) for c in nx.center(graph) if is_wd_entity(c)]
for cc in context_centers:
if cc[1] not in meaningful_ids:
meaningful_ids+=cc[1]
else:
for n in graph.nodes():
n_tmp = n
if n.find("-") != -1 and not is_timestamp(n):
n_tmp = n[:n.find("-")]
n_n = list(graph.neighbors(n))
#print("n_n",n_n)
if n_tmp not in meaningful_ids and n in graph.nodes() and not any(item in n_n for item in meaningful_ids_answer):
#print("removing",n)
answer_graph.remove_node(n)
graph_filtered = filter_graph_by_ids(answer_graph, meaningful_ids)
for p in [x for x,y in graph_filtered.nodes(data=True) if y["type"]=="predicate"]:
p_n = list(graph_filtered.neighbors(p))
if len(p_n) < 2:
graph_filtered.remove_node(p)
continue
graph_filtered.nodes[p]['qa'] = True
if p in meaningful_ids_answer:
graph_filtered.nodes[p]['qa'] = 1
else:
graph_filtered.nodes[p]['qa'] = 2
#print("answer[0][0]",answer[0][0])
for p in [x for x,y in graph_filtered.nodes(data=True) if y["type"]=="entity"]:
p_n = list(graph_filtered.neighbors(p))
if len(p_n) == 0 and p != answer[0][0]:
graph_filtered.remove_node(p)
continue
graph_filtered.nodes[p]['qa'] = True
if p in meaningful_ids_answer:
graph_filtered.nodes[p]['qa'] = 1
else:
graph_filtered.nodes[p]['qa'] = 2
return graph_filtered
#context_graph_test = get_context_graph(answer_2,context_graph_2,question=question,is_next=True)
#plot_graph(context_graph_test, "file_name_graph", "Graph_title")
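`get_context_graph`'s cleanup drops predicate nodes left with fewer than two neighbours. The same rule on a plain adjacency dict, iterated to a fixpoint (a networkx-free sketch with made-up node names, not the notebook's actual code):

```python
def prune_predicates(adj, node_type):
    """adj: node -> set of neighbours; node_type: node -> 'entity' | 'predicate'."""
    changed = True
    while changed:
        changed = False
        for n in [n for n in adj if node_type[n] == "predicate"]:
            if len(adj[n]) < 2:
                for m in adj.pop(n):
                    adj[m].discard(n)  # drop the dangling predicate edge
                changed = True
    return adj

# A triple Q1 -P1-> Q2 plus a dangling predicate P2 attached only to Q1
adj = {"Q1": {"P1", "P2"}, "Q2": {"P1"}, "P1": {"Q1", "Q2"}, "P2": {"Q1"}}
types = {"Q1": "entity", "Q2": "entity", "P1": "predicate", "P2": "predicate"}
pruned = prune_predicates(adj, types)
```

A predicate connecting fewer than two nodes cannot express a subject-predicate-object statement, which is why it is safe to remove before scoring paths.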
# In[108]:
def convert_to_literal(to_literal):
    if is_timestamp(to_literal):
        datetimeObj = datetime.strptime(to_literal, '%Y-%m-%dT%H:%M:%SZ')
        if str(datetimeObj.time()) == "00:00:00" and str(datetimeObj.date())[5:] == "01-01":
            # Year-precision Wikidata values are serialised as January 1st at midnight
            to_literal = str(datetimeObj.year)
        elif str(datetimeObj.time()) == "00:00:00":
            to_literal = str(datetimeObj.strftime("%d %B %Y"))
        else:
            to_literal = str(datetimeObj.strftime("%d %B %Y") + " at " + datetimeObj.strftime("%H:%M"))
    return to_literal
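The date collapsing can be exercised standalone. `_is_timestamp` below is a minimal stand-in for the notebook's `is_timestamp` helper (an assumption, not its real implementation), and year precision is taken to mean January 1st at midnight, as Wikidata serialises it:

```python
import re
from datetime import datetime

def _is_timestamp(s):
    # Minimal stand-in for the notebook's is_timestamp helper.
    return bool(re.fullmatch(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z", s))

def to_literal(value):
    if not _is_timestamp(value):
        return value                          # entity ids pass through
    dt = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
    if str(dt.time()) == "00:00:00" and str(dt.date())[5:] == "01-01":
        return str(dt.year)                   # year precision
    if str(dt.time()) == "00:00:00":
        return dt.strftime("%d %B %Y")        # day precision
    return dt.strftime("%d %B %Y") + " at " + dt.strftime("%H:%M")
```

Note that `%B` produces English month names under the default C locale, which matches the English question/answer setting here.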
# In[110]:
## questions = ("what was the cause of death of yves klein",
# "Who is the wife of Barack Obama?",
# "Who is the president of the United States?",
# "When was produced the first Matrix movie?",
# "Who made the soundtrack of the The Last Unicorn movie?",
# "Who is the author of Le Petit Prince?",
# "Which actor voiced the Unicorn in The Last Unicorn?",
# "how is called the rabbit in Alice in Wonderland?"
# )
#def print_running_time(start_time, end_time=time.time()):
# print("->\tRunning time is {}s".format(round(end_time-start_time,2)))
def answer_question(question, verbose=False, aggressive=True, looped=False,
deep_k=3, deep_k_step=1, deep_k_max=20,
graph_size_min=100, graph_size_target=350, graph_size_max=900,
paths_filter_max=20000, paths_max=300000,
timer=False, g_paths=True, show_graph=False, cores=mp.cpu_count(),
banning_str=False, reload_cache=False, answer_sentence=False,
previous_answer=False, previous_graph=False, graph_size_target_context=100,
k_context=1, is_next=False):
PLURAL_PRONS = ["they"]
if previous_graph:
context_graph=previous_graph.copy()
else:
context_graph=previous_graph
(wd_local_statements_dict, wd_labels_dict,
wd_local_word_ids_dict, wd_online_word_ids_dict,
wd_local_predicate_ids_dict, wd_online_predicate_ids_dict,
word_similarities_dict) = load_cache_data(use_cache=reload_cache)
if verbose: start_time = time.time()
if timer: timer_time = time.time()
if verbose: print("User input:",question)
if verbose: print("--> Auto correcting question in progress...")
q_nlp = get_nlp(question, autocorrect=True, banning_str=banning_str)
if verbose: print("-> Auto corrected q_nlp:",q_nlp)
if context_graph:
if verbose: print("IN CONTEXT")
is_next = True
if not previous_answer:
last_golden_answer = ""
else:
last_golden_answer = previous_answer[0][0]
context_centers = [(get_wd_label(c),[c]) for c in nx.center(context_graph)+[last_golden_answer] if is_wd_entity(c)]
#print("last_golden_answer",last_golden_answer)
#print("graph_context",context_graph)
#print("context_centers",context_centers)
center_best = sorted([(x, len(list(context_graph.neighbors(x)))) for x,y in context_graph.nodes(data=True) if y["type"]=="entity"], key=lambda x: x[1], reverse=True)
center_best = (get_wd_label(center_best[0][0]),[center_best[0][0]])
if center_best not in context_centers:
context_centers.append(center_best)
new_sentence = []
for w in q_nlp:
#print("w",w.lower_,w.lemma_)
if w.lemma_ == "-PRON-" and w.lower_ in PLURAL_PRONS:
#print("plural")
for i_cc, cc in enumerate(context_centers):
if i_cc > 0:
new_sentence.append("and")
new_sentence.append(cc[0])
elif w.lemma_ == "-PRON-":
new_sentence.append(context_centers[0][0])
else:
new_sentence.append(w.text)
question = " ".join(new_sentence)
q_nlp = get_nlp(question)
if verbose: print("-> Replacing pronouns from context..")
if verbose: print("-> New q_nlp:",q_nlp)
time_sensitive = False
if 'when' in [t.lower_ for t in q_nlp if t.tag_=="WRB"]: time_sensitive = True
q_themes = get_themes(q_nlp, question, top_k=2, online=True)
q_theme_names = [q[0].text for q in q_themes[0]]
if verbose: print("-> q_themes:",q_themes)
q_themes_enhanced = get_enhanced_themes(q_themes, top_k=1, aggressive=True)
q_theme_enhanced_names = [q[0] for q in q_themes_enhanced]
if verbose: print("-> q_themes_enhanced:",q_themes_enhanced)
if verbose: print("--> Calculating predicates... (could be long; depends on uncached impure predicates)")
q_predicates_db = get_predicates(q_nlp, q_themes, top_k=0)
q_predicates_online = get_predicates_online(q_nlp, top_k=2, aggressive=aggressive)
q_predicates = []
q_predicates_db_ids = [p[1] for p in q_predicates_db]
q_predicates_db_names = [p[0] for p in q_predicates_db]
q_predicates_online_ids = [p[1] for p in q_predicates_online]
q_predicates_online_names = [p[0] for p in q_predicates_online]
for i_n,n in enumerate(q_predicates_db_names):
pn_online_text = [n.text for n in q_predicates_online_names]
tmp_ids = q_predicates_db_ids[i_n]
if n.text in pn_online_text:
for p_o in q_predicates_online_ids[pn_online_text.index(n.text)]:
if p_o not in tmp_ids:
tmp_ids.append(p_o)
q_predicates.append((n,tmp_ids))
for i_n_o,n_o in enumerate(q_predicates_online_names):
n_db_text = [n.text for n in q_predicates_db_names]
if n_o.text not in n_db_text:
q_predicates.append((n_o, q_predicates_online_ids[i_n_o]))
if verbose: print("-> q_predicates:",q_predicates)
if timer:
print("-> q_predicates \tRunning time is {}s".format(round(time.time()-timer_time,2)))
timer_time = time.time()
if context_graph:
if verbose: print("--> Targeting themes into context")
#print("context_centers",context_centers)
q_themes_new = []
if verbose: print([cc[0] for cc in context_centers])
for t in q_themes[0]:
replaced = False
for cc in context_centers:
if cc[0].lower() == t[0].lower_:
q_themes_new.append((t[0],cc[1]))
replaced = True
if not replaced:
q_themes_new.append((t[0],t[1]))
q_themes_1_new=[]
for t in q_themes[1]:
is_in = False
for cc in context_centers:
for w in cc[0].split(" "):
if w in t.text.split(" "):
is_in = True
break
if is_in:
break
if not is_in:
q_themes_1_new.append(t[0])
q_themes = (q_themes_new, q_themes_1_new)
if verbose: print("q_themes in context",q_themes)
q_focused_parts = get_focused_parts(q_nlp, q_themes, top_k=2, in_context=True)
if verbose: print("-> q_focused_parts:",q_focused_parts)
if verbose: print("-> Building the graph with k_deep",str(deep_k),"... (could be long)")
# Auto-scaling the graph size with deepness
if deep_k > deep_k_max and looped:
deep_k_max+=int(deep_k_max/2)
previous_graph_size = 0
previous_graph_len = 0
if deep_k<2:
deep_k = 2
graph, predicates_dict = build_graph(q_nlp, q_themes, q_themes_enhanced, q_predicates, deep_k=deep_k, time_sensitive=time_sensitive, cores=cores, context_graph=context_graph)
if verbose: print("--> ",len(graph), "nodes and", graph.size(), "edges")
if verbose: print("--> Removing meaningless subgraphs")
graph = filter_graph_by_names(graph, q_theme_names+q_theme_enhanced_names, entities=True, predicates=False)
if verbose: print("--> New graph of:",len(graph), "nodes and", graph.size(), "edges")
if timer:
print("->New graph\tRunning time is {}s".format(round(time.time()-timer_time,2)))
timer_time = time.time()
else:
if deep_k >= deep_k_max:
graph, predicates_dict = build_graph(q_nlp, q_themes, q_themes_enhanced, q_predicates, deep_k=deep_k, time_sensitive=time_sensitive, cores=cores, context_graph=context_graph)
if verbose: print("---> deep_k > deep_k_max, running graph as last trial with deep_k:",deep_k)
if timer:
print("->Rebuild new graph \tRunning time is {}s".format(round(time.time()-timer_time,2)))
timer_time = time.time()
if verbose: print("--> Removing meaningless subgraphs")
graph = filter_graph_by_names(graph, q_theme_names+q_theme_enhanced_names, entities=True, predicates=False)
if verbose: print("--> New graph of:",len(graph), "nodes and", graph.size(), "edges")
else:
for k in range(deep_k, deep_k_max, deep_k_step):
graph, predicates_dict = build_graph(q_nlp, q_themes, q_themes_enhanced, q_predicates, deep_k=deep_k, time_sensitive=time_sensitive, cores=cores, context_graph=context_graph)
if timer:
print("->New graph \tRunning time is {}s".format(round(time.time()-timer_time,2)))
timer_time = time.time()
if verbose: print("--> ",len(graph), "nodes and", graph.size(), "edges")
if verbose: print("--> Removing meaningless subgraphs")
graph = filter_graph_by_names(graph, q_theme_names+q_theme_enhanced_names, entities=True, predicates=False)
if verbose: print("--> New graph of:",len(graph), "nodes and", graph.size(), "edges")
if previous_graph_size == graph.size() and previous_graph_len == len(graph):
if verbose: print("---> Loop detected, returning the graph in the current state")
break
else:
previous_graph_size = graph.size()
previous_graph_len = len(graph)
if context_graph and (graph.size() > graph_size_target_context or len(graph) > graph_size_target_context) and deep_k >= 2:
deep_k -= deep_k_step
if verbose: print("---> Rebuilding the graph with k_deep",str(deep_k), "... Previously:",len(graph), "nodes or", graph.size(), "edges was above the limit of",graph_size_target_context)
graph, predicates_dict = build_graph(q_nlp, q_themes, q_themes_enhanced, q_predicates, deep_k=deep_k, time_sensitive=time_sensitive, cores=cores, context_graph=context_graph)
break
elif (graph.size() > graph_size_target or len(graph) > graph_size_target) and deep_k >= 2:
deep_k -= deep_k_step
if verbose: print("---> Rebuilding the graph with k_deep",str(deep_k), "... Previously:",len(graph), "nodes or", graph.size(), "edges was above the limit of",graph_size_target)
graph, predicates_dict = build_graph(q_nlp, q_themes, q_themes_enhanced, q_predicates, deep_k=deep_k, time_sensitive=time_sensitive, cores=cores, context_graph=context_graph)
break
elif graph.size() <= graph_size_min or len(graph) <= graph_size_min:
if context_graph:
break
elif graph.size() < graph_size_min/3 or len(graph) < graph_size_min/3:
deep_k += deep_k_step*3
elif graph.size() < graph_size_min/4*3 or len(graph) < graph_size_min/4*3:
deep_k += deep_k_step*2
else:
deep_k += deep_k_step
if verbose: print("---> Rebuilding the graph with k_deep",str(deep_k), "... Previously:",len(graph), "nodes or", graph.size(), "edges was below the limit of",graph_size_min)
else: break
if graph.size()>graph_size_max or len(graph)>graph_size_max:
if verbose: print("---> Too many nodes, statistically it's not worth the run. Cancelling question, it probably requires reasoning.\n")
return False,False
#if context_graph:
# for g in [g for g in (graph.subgraph(c) for c in nx.connected_components(graph))]:
# print("g",list(g.nodes()))
# print("context_centers",context_centers)
# context_centers_ids = []
# for cc in context_centers:
# context_centers_ids+=cc[1]
# if all(item in list(g.nodes()) for item in context_centers_ids):
# print("yes")
if show_graph:
if verbose: print("---> Plotting the full graph")
plot_graph(graph, "file_name_full_graph", "Full_Graph_title")
if verbose: print("-> predicates_dict:",predicates_dict)
paths_keywords = find_paths_keywords(graph, q_nlp, q_themes, q_themes_enhanced, q_predicates, q_focused_parts)
if verbose: print("-> paths_keywords:",paths_keywords)
if timer: timer_time = time.time()
if verbose: print("-> Computing possible paths... (could be long)")
path_nodes = find_path_nodes_from_graph(q_nlp, graph, predicates_dict, paths_keywords, threshold=0.8,special_pred_theshold=0.7, thres_inter=0.1, top_performance=len(graph), min_paths=100, cores=cores)
if verbose: print("--> len(path_nodes):",len(path_nodes))
if timer:
print("->Computing possible paths \tRunning time is {}s".format(round(time.time()-timer_time,2)))
timer_time = time.time()
if len(path_nodes) < paths_filter_max:
if verbose: print("-> Filtering paths... (could be long)")
paths_nodes_filtered = paths_nodes_filter(path_nodes, graph)
if verbose: print("--> len(paths_nodes_filtered):",len(paths_nodes_filtered))
#print("paths_nodes_filtered",paths_nodes_filtered)
if timer:
print("->\tRunning time is {}s".format(round(time.time()-timer_time,2)))
timer_time = time.time()
elif len(path_nodes) > paths_max:
if verbose: print("---> Too many paths, statistically it's not worth the run. Cancelling question, it probably requires reasoning.\n")
return False,False
else:
if verbose: print("--> Skipping paths filtering... (too many paths)")
paths_nodes_filtered = paths_nodes_filter(path_nodes, graph, with_sublists=False)
if verbose: print("-> Computing hypothesises...")
hypothesises = get_hypothesises(q_nlp, predicates_dict, q_predicates, q_themes, paths_keywords, paths_nodes_filtered, threshold=0.5, max_reward=2.0)
if verbose: print("--> hypothesises:",hypothesises)
if timer:
print("->Computing hypothesises \tRunning time is {}s".format(round(time.time()-timer_time,2)))
timer_time = time.time()
if g_paths:
if hypothesises:
if verbose: print("-> Computing golden paths...")
golden_paths = match_hypothesises(graph, q_nlp, q_themes, q_predicates, hypothesises, paths_nodes_filtered, threshold=0.8, max_reward=2.0,winner_threshold_diff=4.0, time_sensitive=time_sensitive)
if verbose: print("--> len(golden_paths):",len(golden_paths)-1)
if timer:
print("->\tRunning time is {}s".format(round(time.time()-timer_time,2)))
timer_time = time.time()
else:
if not looped:
if verbose: print("-> Looping on aggressive mode...\n")
golden_paths = answer_question(question, verbose=verbose, aggressive=True, looped=True,
deep_k=deep_k, deep_k_step=deep_k_step, deep_k_max=deep_k_max,
graph_size_min=graph_size_min, graph_size_target=graph_size_target, graph_size_max=graph_size_max,
paths_filter_max=paths_filter_max, paths_max=paths_max,
timer=timer, g_paths=g_paths, show_graph=show_graph, cores=cores,
banning_str=banning_str, reload_cache=reload_cache, answer_sentence=answer_sentence,
previous_answer=previous_answer, previous_graph=previous_graph, graph_size_target_context=graph_size_target_context,
k_context=1, is_next=False
)
else:
if verbose: print("--> End of loop")
golden_paths=[]
save_cache_data(save_cache=save_cache)
if g_paths:
if golden_paths == (False, False):
golden_paths = False
if golden_paths:
cleared_golden_paths = [golden_paths[0].copy()]
for p in golden_paths[1:]:
tmp_labeling = []
for e in p:
#tmp_labeling.append(get_wd_label(e))
tmp_labeling.append(e)
if tmp_labeling not in cleared_golden_paths:
cleared_golden_paths.append(tmp_labeling)
if verbose: print("--> len(cleared_golden_paths):",len(cleared_golden_paths)-1)
if len(cleared_golden_paths) > 1:
if verbose: print("---> First path:",cleared_golden_paths[1])
if timer: timer_time = time.time()
#
if verbose: print("->\tTotal Running time is {}s\n".format(round(time.time()-start_time,2)))
if g_paths:
if golden_paths:
answer = cleared_golden_paths
context_graph = get_context_graph(answer[:k_context+1], graph, q_themes,question=question,is_next=is_next)
if show_graph:
if verbose: print("---> Plotting the context graph")
plot_graph(context_graph, "file_name_context_graph", "Context_Graph_title")
return [answer, context_graph]#[answer, graph]#
else: return False,False
else:
if hypothesises:
answer = [[a[0] for a in hypothesises]] + [[hypothesises[0][0]]]
context_graph = get_context_graph(answer[:k_context+1], graph, q_themes,question=question,is_next=is_next)
if show_graph:
if verbose: print("---> Plotting the context graph")
plot_graph(context_graph, "file_name_context_graph", "Context_Graph_title")
return [answer, context_graph]
else:
return False,False
#answer,context_graph = answer_question("what film is by the writer phil hay?", verbose=True, timer=True) #444.36s
#answer,context_graph = answer_question("When was produced the first Matrix movie?", verbose=True, timer=True) #70.67s
#answer,context_graph = answer_question("Which actor voiced the Unicorn in The Last Unicorn?", verbose=True, timer=True, g_paths=True, show_graph=True) #works 312.12s
#answer,context_graph = answer_question("Who voiced the Unicorn in The Last Unicorn?", verbose=True, timer=True, show_graph=True) #works 323.52s
#answer,context_graph = answer_question("How many actors voiced the Unicorn in The Last Unicorn?", verbose=True, timer=True) #592.22s
#answer,context_graph = answer_question("Which is the nation of Martha Mattox", verbose=True, timer=True) #97.89s
#answer,context_graph = answer_question("Who made the soundtrack of the The Last Unicorn movie?", verbose=True, timer=True)
#answer,context_graph = answer_question("Who is the author of Le Petit Prince?", verbose=True, timer=True)
#answer,context_graph = answer_question("When was produced the first Matrix movie?", verbose=True, timer=True)
#answer,context_graph = answer_question("Who is the president of the United States?", verbose=True, timer=True) #node Q76 not in graph 324.88s
#answer,context_graph = answer_question("Who is the wife of Barack Obama?", verbose=True, timer=True) #works 275.94s
#answer,context_graph = answer_question("what was the cause of death of yves klein", verbose=True, timer=True) #309.06s
#answer,context_graph = answer_question("what city was alex golfis born in", verbose=True, timer=True)
#answer,context_graph = answer_question("which stadium do the wests tigers play in", verbose=True, timer=True) #462.47s
#answer,context_graph = answer_question("lol", verbose=True, timer=True)
#answer,context_graph = answer_question("what's akbar tandjung's ethnicity", verbose=True, timer=True)
#answer,context_graph = answer_question("Which equestrian was is in dublin ?", verbose=True, timer=True)
#answer,context_graph = answer_question("how does engelbert zaschka identify ", verbose=True, timer=True)
#answer,context_graph = answer_question("Who influenced michael mcdowell?", verbose=True, timer=True)
#answer,context_graph = answer_question("what does 2674 pandarus orbit", verbose=True, timer=True)
#answer,context_graph = answer_question("what production company was involved in smokin' aces 2: assasins' ball", verbose=True, timer=True)
#answer,context_graph = answer_question("who's a kung fu star from hong kong", verbose=True, timer=True)
#answer,context_graph = answer_question("Which genre of album is harder.....faster?", verbose=True, timer=True)
#answer,context_graph = answer_question("Which equestrian was born in dublin?", verbose=True, timer=True)
#answer,context_graph = answer_question("Who is the author that wrote the book Moby Dick", verbose=True, timer=True, show_graph=True) #314.04s works
#answer,context_graph = answer_question("Name a person who died from bleeding.", verbose=True, timer=True) # 117.35s
#answer,context_graph = answer_question("What is the name of the person who created Saved by the Bell?", verbose=True, timer=True)
#answer,context_graph = answer_question("of what nationality is ken mcgoogan", verbose=True, timer=True) #works 51.39s
#
#answer,context_graph = answer_question("What is a tv action show?", verbose=True, timer=True, g_paths=False)
#answer,context_graph = answer_question("who published neo contra", verbose=True, timer=True, g_paths=False)
#answer,context_graph = answer_question("When was the publication date of the movie Grease?", verbose=True, timer=True)
#answer,context_graph = answer_question("When did the movie Grease come out?", verbose=True, timer=True, show_graph=True)
#answer,context_graph = answer_question("whats the name of the organization that was founded by frei otto", verbose=True, timer=True, g_paths=False)
#answer,context_graph = answer_question("where was johannes messenius born", verbose=True, timer=True, g_paths=False)
#answer,context_graph = answer_question("What is a type of gameplay available to gamers playing custom robo v2", verbose=True, timer=True, g_paths=False)
#answer,context_graph = answer_question("Which genre of album is Harder ... Faster?", verbose=True, timer=True, show_graph=True)
#answer,context_graph = answer_question("Who is the author that wrote the book Moby Dick", verbose=True, timer=True, show_graph=True)
#answer,context_graph = answer_question("how does engelbert zaschka identify", verbose=True, timer=True, show_graph=True)
#answer,context_graph = answer_question("where was shigeyasu suzuki's place of birth", verbose=True, timer=True, show_graph=True)
#answer,context_graph = answer_question("What is the name of the writer of The Secret Garden?", verbose=True, timer=True, show_graph=True)
#answer,context_graph = answer_question("Who was an influential figure for miško Šuvaković", verbose=True, timer=True, show_graph=True)
#
#answer,context_graph = answer_question("When did the movie Grease come out?", verbose=True, timer=True, show_graph=True)
#answer,context_graph = answer_question("of what nationality is ken mcgoogan", verbose=True, timer=True) #works 51.39s
#answer,context_graph = answer_question("Where did roger marquis die", verbose=True, timer=True, show_graph=True) # works 64.56s
#answer,context_graph = answer_question("How many people were in The Beatles?", verbose=True, timer=True, show_graph=True)
#answer,context_graph = answer_question("which type of people does roberto benigni belong to", verbose=True, timer=True, show_graph=True)
#if answer:
# print("Answer:",convert_to_literal(get_wd_label(answer[0][0])), "("+str(answer[0][0])+")\n")
## #print("Paths:",[[get_wd_label(e) for e in row] for row in answer[1:]])
#answer,context_graph = answer_question("When did the movie Grease come out?", verbose=True, timer=True, g_paths=False, show_graph=True)
#answer,context_graph = answer_question("When was the publication date of the movie Grease?", verbose=True, timer=True, g_paths=False)
#answer,context_graph = answer_question("Which actor voiced the Unicorn in The Last Unicorn?", verbose=True, timer=True, show_graph=True)
#answer,context_graph = answer_question("whats the name of the organization that was founded by frei otto", verbose=True, timer=True, g_paths=False, show_graph=True)
#answer,context_graph = answer_question("What was Joseph Ferdinand Daniel Place of Death", verbose=True, timer=True, g_paths=False, show_graph=True)
#answer_1,context_graph_1 = answer_question("Who is the wife of Barack Obama?", verbose=True, timer=True, show_graph=True)
#answer_2,context_graph_2 = answer_question("When did they marry?", previous_answer=answer_1, previous_graph=context_graph_1, verbose=True, timer=True, show_graph=True)
#answer_3,context_graph_3 = answer_question("When did they marry?", previous_answer=answer_1, previous_graph=context_graph_1, verbose=True, timer=True, show_graph=True)
#answer_1,context_graph_1 = answer_question("Who is the wife of Barack Obama?", verbose=True, timer=True, show_graph=True)
#answer_2,context_graph_2 = answer_question("When did they marry?", previous_answer=answer_1, previous_graph=context_graph_1, verbose=True, timer=True, show_graph=True)
#answer_2,context_graph_2 = answer_question("When did they marry?", previous_answer=answer_1, previous_graph=context_graph_1, verbose=True, timer=True, show_graph=True)
# In[92]:
conversation_questions = [
"Which actor voice the Unicorn in The Last Unicron?",
"And Alan Arkin was behind..",
"Who did the score?",
"So who performed the songs?",
"Genre of this band's music?",
"By the way, who was the director?"
]
def answer_conversation(questions, answer=False, context_graph=False):
    for i_q, question in enumerate(questions):
        if i_q == 0:
            answer_convex, context_graph = answer_question(question, previous_answer=answer, previous_graph=context_graph, verbose=True, timer=True, show_graph=True)
        elif context_graph:
            print("Context Question:", question)
            answer_convex, context_graph = answer_question(question, previous_answer=answer, previous_graph=context_graph, verbose=True, timer=True, show_graph=True)
        else:
            print("NO CONTEXT ERROR")
            break
        if answer_convex: print("Answer:", convert_to_literal(get_wd_label(answer_convex[0][0])), "("+str(answer_convex[0][0])+")\n")
        # carry the latest answer forward so follow-up questions use it as context
        answer = answer_convex
#answer_conversation(conversation_questions,answer=answer,context_graph=context_graph)
# In[93]:
from datetime import datetime
def generate_answer_sentence(answer, question, verbose=False, k_spo=False):
if answer:
answer_paths = answer[1:]
answer_fact = answer[0][0]
if verbose: print("answer_paths",answer_paths)
if verbose: print("answer_fact",answer_fact)
if verbose: print("Answer:",get_wd_label(answer_fact), "("+str(answer_fact)+")")
if verbose: print("Paths:",[[get_wd_label(e) for e in row] for row in answer_paths])
if k_spo:
spo_k = k_spo
else:
spo_k = count_hops(answer_paths,question)
if verbose: print("spo_k",spo_k)
if spo_k>1:
last_element = spo_k*3-1
else:
last_element = spo_k*3
answer_best_path = [[get_wd_label(e) for e in row] for row in answer_paths][0][:last_element]
raw_list = []
sentence_spliters = []
for i_e,e in enumerate(answer_best_path):
e = convert_to_literal(e)
if i_e == 0:
raw_list.append('[CLS]')
if is_timestamp(e): raw_list.append('[MASK]')
raw_list.append(e)
raw_list.append('[MASK]')
elif i_e == len(answer_best_path)-1:
if is_timestamp(e): raw_list.append('[MASK]')
raw_list.append(e)
raw_list.append('[MASK]')
raw_list.append('[SEP]')
elif spo_k > 1 and i_e % 3 and i_e != 1:
sentence_spliters.append(len(raw_list)+2)
#raw_list.append('[MASK]')
if is_timestamp(e): raw_list.append('[MASK]')
raw_list.append(e)
raw_list.append('[MASK]')
#raw_list.append('[MASK]')
#raw_list.append('[SEP]')
else:
#print("e",e)
get_nlp(e)
if e.find("is") == -1:
raw_list.append('[MASK]')
raw_list.append(e)
raw_list.append('[MASK]')
#print("answer_best_path_to_complete",answer_best_path_to_complete)
raw_text = " ".join(raw_list)
if verbose: print("Best answer path selected for spo_k",raw_text)
#return "lol"
tokenized_text = tokenizer.tokenize(raw_text)
#print("tokenized_text",tokenized_text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
#print("indexed_tokens",indexed_tokens)
segments_ids = [0] * len(tokenized_text)
#print("segments_ids",segments_ids)
if spo_k > 2:
#print("sentence_spliters",sentence_spliters)
previous_position = 0
for i_ss, ss in enumerate(sentence_spliters):
for i_p in range(ss+1, len(segments_ids)):
#print(i_p)
segments_ids[i_p] = i_ss+1
#print(i_ss, ss)
#segments_ids[previous_position:ss] = [i_ss] * int(ss)-int(previous_position)
#previous_position = ss
#print("segments_ids",segments_ids)
else:
segments_ids = [0] * len(tokenized_text)
#print("sentence",segments_ids)
#return False
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
        model = BertForMaskedLM.from_pretrained(modelpath)
        model.eval()
        # fall back to CPU when CUDA is not available
        device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
        tokens_tensor = tokens_tensor.to(device)
        segments_tensors = segments_tensors.to(device)
        model.to(device)
masked_indices = [i for i, x in enumerate(tokenized_text) if x == "[MASK]"]# or x == "[CLS]"]
masked_indices_raw = [i for i, x in enumerate(raw_list) if x == "[MASK]"]# or x == "[CLS]"]
with torch.no_grad():
outputs = model(tokens_tensor, token_type_ids=segments_tensors)
predictions = outputs[0]
completed_text = raw_list.copy()
#print("source:", " ".join(completed_text))
for i_mi, masked_index in enumerate(masked_indices):
predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
completed_text[masked_indices_raw[i_mi]] = predicted_token
completed_text = " ".join(completed_text).replace("[CLS]","").replace("[SEP]","").replace("/","").replace(" .",". ")#[6:-5]
completed_text = re.sub(r'\s*,\s*', ', ', completed_text)
completed_text = re.sub(r'\s*, ,\s*', ', ', completed_text)
completed_text = re.sub(r'\s*\.\s*', '. ', completed_text)
completed_text = re.sub(r'\s*\(\s*', ' (', completed_text)
completed_text = re.sub(r'\s*\)\s*', ') ', completed_text)
completed_text = re.sub(r'\s* ; \s*', '; ', completed_text)
completed_text = re.sub(r'\s* : \s*', ': ', completed_text)
completed_text = re.sub(r'\s* - \s*', ', ', completed_text)
if completed_text[len(completed_text)-2:] == "; ": completed_text = completed_text[:-2]+"."
if completed_text[len(completed_text)-1:] == " ": completed_text = completed_text[:-1]
if completed_text[0] == " ": completed_text = completed_text[1:]
if verbose: print("Spot filled:",completed_text)
corrected_completed_text = corrector.correct(completed_text)
if verbose: print("\n\nBest Answer sentence for spo_k",spo_k,"and corrected:",corrected_completed_text[0]["sequence"])
if corrected_completed_text[0]["sequence"].find("/") != -1:
return [completed_text, corrected_completed_text[0]["sequence"]]
else:
return [corrected_completed_text[0]["sequence"], completed_text]
else:
return False
#print("--> Generating complete sentence from answer...")
#question = "who is the wife of barrack obama"
#question = "which actor voiced the unicorn in the last unicorn?"
#question = "when did the the first star wars released"
#question = "Where did Michelle Obama and Barrack Obama get married?"
#answer = answer_question(question, verbose=True, timer=True, show_graph=True)
#print(answer)
#generate_answer_sentence(answer, question, verbose=True, k_spo=False)[0]
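Independent of the BERT model, the template built inside `generate_answer_sentence` is just the labels of the best answer path interleaved with `[MASK]` slots for the masked language model to fill with connecting words. A minimal, simplified sketch of that step (one mask per gap; the real code above also special-cases timestamps and multi-hop sentence splits):

```python
def build_masked_template(labels):
    """Interleave answer-path labels with [MASK] slots, BERT-style."""
    tokens = ['[CLS]']
    for i, label in enumerate(labels):
        if i > 0:
            tokens.append('[MASK]')  # slot for a connecting word, e.g. "is" / "of"
        tokens.append(label)
    tokens.append('[MASK]')  # trailing slot, e.g. for punctuation
    tokens.append('[SEP]')
    return ' '.join(tokens)

print(build_masked_template(['Michelle Obama', 'spouse', 'Barack Obama']))
# [CLS] Michelle Obama [MASK] spouse [MASK] Barack Obama [MASK] [SEP]
```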
# In[95]:
questions = [
"Who voiced the Unicorn in The Last Unicorn?",
"Which actor voiced the Unicorn in The Last Unicorn?",
"When was the publication date of the movie Grease?",
"When was produced the first Matrix movie?",
"Which is the nation of Martha Mattox",
"Where did roger marquis die",
"Who is the author that wrote the book Moby Dick",
"Who is the wife of Barack Obama?",
"of what nationality is ken mcgoogan",
"When did the movie Grease come out?",
"which stadium do the wests tigers play in",
"What is the name of the writer of The Secret Garden?",
"whats the name of the organization that was founded by frei otto",
"Which genre of album is harder.....faster?",
"Which genre of album is Harder ... Faster?",
"Which equestrian was is in dublin ?",
"how does engelbert zaschka identify ",
"Who influenced michael mcdowell?",
"what does 2674 pandarus orbit"
]
#for i_q, question in enumerate(questions):
# if i_q >= 16:
# answer,context_graph = answer_question(question, verbose=True, timer=True, show_graph=True)
# if answer:
# print("Answer:",convert_to_literal(get_wd_label(answer[0][0])), "("+str(answer[0][0])+")\n")
# In[ ]:
def get_top_1_answer(answer_convex,question):
time_sensitive = False
if 'when' in [t.lower_ for t in get_nlp(question) if t.tag_=="WRB"]: time_sensitive = True
top_1_answer = answer_convex[0]["answer"]
if time_sensitive:
for answer in answer_convex:
if is_timestamp(answer["answer"]):
top_1_answer = answer["answer"]
break
return top_1_answer
#top_1_answer = get_top_1_answer(answer_convex, question)
#print("top_1_answer",top_1_answer)
# In[ ]:
def standardize_graph(graph):
this_graph = graph.copy()
for n in this_graph.nodes():
n_pos = n.find("-")
n_name = n
if n_pos != -1: n_name = n[:n_pos]
this_graph.nodes[n]["name"] = get_wd_label(n_name)
return this_graph
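`standardize_graph` assumes node ids of the form `Qxxx-<suffix>` (a Wikidata id plus a turn marker) and strips everything from the first dash before looking up the label. That normalisation can be sketched on its own (hypothetical helper mirroring the `find("-")` logic above):

```python
def strip_turn_suffix(node_id):
    """'Q76-0' -> 'Q76'; ids without a dash pass through unchanged."""
    pos = node_id.find("-")
    return node_id[:pos] if pos != -1 else node_id

print(strip_turn_suffix("Q76-0"))  # Q76
```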
# In[ ]:
### try with Convex
conversation_questions = [
"Which actor voice the Unicorn in The Last Unicron?",
"And Alan Arkin was behind..",
"Who did the score?",
"So who performed the songs?",
"Genre of this band's music?",
"By the way, who was the director?"
]
def try_conversation_with_convex(questions, answer=False, context_graph=False):
frontier_detection=[0.9, 0.6, 0.3] #random_access
answer_detection=[0.9, 0.1] #total_distance_qa_nodes, total_distance_frontiers
frontiers=3
    for i_q, question in enumerate(questions):
        turn = i_q + 1
        if i_q == 0:
            answer_convex, context_graph = answer_question(question, previous_answer=answer, previous_graph=context_graph, verbose=True, timer=True, show_graph=True)
            if answer_convex: print("Answer:", convert_to_literal(get_wd_label(answer_convex[0][0])), "("+str(answer_convex[0][0])+")\n")
        elif context_graph:
            print("Context Question:", question)
            answer_convex, context_graph = cx.answer_follow_up_question(question, turn, context_graph, frontier_detection+answer_detection, frontiers)
            context_graph = standardize_graph(context_graph)
            top_1_answer = get_top_1_answer(answer_convex[1:], question)
            plot_graph(context_graph, "file_name_context_graph", "Context_Graph_title")
            if top_1_answer: print("Answer:", convert_to_literal(get_wd_label(top_1_answer)), "("+str(top_1_answer)+")\n")
            for ac in answer_convex:
                print(ac)
            print("\n")
        else:
            print("NO CONTEXT ERROR")
            break
#import convex as cx
#try_conversation_with_convex(conversation_questions,answer=answer,context_graph=context_graph)
# In[ ]:
# TODO try decomposing the question into subquestions (multi-hop into single-hop SPO)
#test_nlp_1 = get_nlp("What is the name of Metallica's first album")
#test_nlp_2 = get_nlp("What year did the TV show Arrested Development first air?")
#test_nlp_3 = get_nlp("How many Back to the Future movie are there?")
#test_nlp_4 = get_nlp("what is the name of an episode of life on a stick")
#test_nlp_5 = get_nlp("Which album is a death metal album?")
#test_nlp_6 = get_nlp("Star wars: the first order, the movie.")
#test_nlp_7 = get_nlp("What is the first star wars movie?")
#
#def is_reasoning_question(nlp_question):
# print(nlp_question)
# for e in nlp_question:
# print(e,e.tag_,e.pos_,list(e.children))
# print("\n")
#
#is_reasoning_question(test_nlp_1)
#is_reasoning_question(test_nlp_2)
#is_reasoning_question(test_nlp_3)
#is_reasoning_question(test_nlp_4)
#is_reasoning_question(test_nlp_5)
#is_reasoning_question(test_nlp_7)
| 45.160266 | 411 | 0.615901 |
02320c7b3dcf898760df6a324b7e8bb01f925a15 | 137 | py | Python | tool/pycorrector_demo.py | KEVINYZY/python-tutorial | ae43536908eb8af56c34865f52a6e8644edc4fa3 | [
"Apache-2.0"
] | null | null | null | tool/pycorrector_demo.py | KEVINYZY/python-tutorial | ae43536908eb8af56c34865f52a6e8644edc4fa3 | [
"Apache-2.0"
] | null | null | null | tool/pycorrector_demo.py | KEVINYZY/python-tutorial | ae43536908eb8af56c34865f52a6e8644edc4fa3 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
@author:XuMing(xuming624@qq.com)
@description:
"""
import pycorrector
print(pycorrector.correct('给老人让坐'))
| 13.7 | 35 | 0.671533 |
0f3b1315566bec9d7840f75bd1179e4498d80169 | 10,158 | py | Python | superpoint/evaluations/descriptor_evaluation.py | sMamooler/SuperPoint | b66bb2c986ddbd7935858af948bc3508e32a5acc | [
"MIT"
] | null | null | null | superpoint/evaluations/descriptor_evaluation.py | sMamooler/SuperPoint | b66bb2c986ddbd7935858af948bc3508e32a5acc | [
"MIT"
] | null | null | null | superpoint/evaluations/descriptor_evaluation.py | sMamooler/SuperPoint | b66bb2c986ddbd7935858af948bc3508e32a5acc | [
"MIT"
] | null | null | null | import numpy as np
import cv2
from os import path as osp
from glob import glob
from superpoint.settings import EXPER_PATH
from superpoint.evaluations.evaluation_tool import warp_keypoints, keep_true_keypoints, select_k_best, filter_keypoints
def get_paths(exper_name):
"""
Return a list of paths to the outputs of the experiment.
"""
return glob(osp.join(EXPER_PATH, 'outputs/{}/*.npz'.format(exper_name)))
def keep_shared_points(data, keypoint_map, inv, keep_k_points=1000):
"""
Compute a list of keypoints from the map, filter the list of points by keeping
only the points that once mapped by H are still inside the shape of the map
and keep at most 'keep_k_points' keypoints in the image.
"""
keypoints = np.where(keypoint_map > 0)
prob = keypoint_map[keypoints[0], keypoints[1]]
keypoints = np.stack([keypoints[0], keypoints[1], prob], axis=-1)
keypoints = keep_true_keypoints(data, keypoints, inv)
keypoints = select_k_best(keypoints, keep_k_points)
return keypoints.astype(int)
def compute_homography(data, keep_k_points=1000, correctness_thresh=3, orb=False):
"""
Compute the homography between 2 sets of detections and descriptors inside data.
"""
shape = data['prob'].shape
real_H = data['homography']
# Keeps only the points shared between the two views
keypoints = keep_shared_points(data, data['prob'], False, keep_k_points)
warped_keypoints = keep_shared_points(data, data['warped_prob'], True, keep_k_points)
desc = data['desc'][keypoints[:, 0], keypoints[:, 1]]
warped_desc = data['warped_desc'][warped_keypoints[:, 0],
warped_keypoints[:, 1]]
# Match the keypoints with the warped_keypoints with nearest neighbor search
if orb:
desc = desc.astype(np.uint8)
warped_desc = warped_desc.astype(np.uint8)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
else:
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(desc, warped_desc)
matches_idx = np.array([m.queryIdx for m in matches])
if len(matches_idx) == 0: # No match found
return {'correctness': 0.,
'keypoints1': keypoints,
'keypoints2': warped_keypoints,
'matches': [],
'inliers': [],
'homography': None}
m_keypoints = keypoints[matches_idx, :]
matches_idx = np.array([m.trainIdx for m in matches])
m_warped_keypoints = warped_keypoints[matches_idx, :]
# Estimate the homography between the matches using RANSAC
H, inliers = cv2.findHomography(m_keypoints[:, [1, 0]],
m_warped_keypoints[:, [1, 0]],
cv2.RANSAC)
if H is None:
return {'correctness': 0.,
'keypoints1': keypoints,
'keypoints2': warped_keypoints,
'matches': matches,
'inliers': inliers,
'homography': H}
inliers = inliers.flatten()
# Compute correctness
corners = np.array([[0, 0, 1],
[shape[1] - 1, 0, 1],
[0, shape[0] - 1, 1],
[shape[1] - 1, shape[0] - 1, 1]])
real_warped_corners = np.dot(corners, np.transpose(real_H))
real_warped_corners = real_warped_corners[:, :2] / real_warped_corners[:, 2:]
warped_corners = np.dot(corners, np.transpose(H))
warped_corners = warped_corners[:, :2] / warped_corners[:, 2:]
mean_dist = np.mean(np.linalg.norm(real_warped_corners - warped_corners, axis=1))
correctness = float(mean_dist <= correctness_thresh)
return {'correctness': correctness,
'keypoints1': keypoints,
'keypoints2': warped_keypoints,
'matches': matches,
'inliers': inliers,
'homography': H}
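The correctness check at the end of `compute_homography` — warp the four image corners with both the true and the estimated homography and threshold the mean corner distance — can be isolated as a small helper. A sketch using the same corner convention as the code above:

```python
import numpy as np

def homography_correctness(H_est, H_true, shape, thresh=3.0):
    """1.0 if the mean distance between corners warped by H_est and by
    H_true is within 'thresh' pixels, else 0.0."""
    h, w = shape
    corners = np.array([[0, 0, 1],
                        [w - 1, 0, 1],
                        [0, h - 1, 1],
                        [w - 1, h - 1, 1]], dtype=float)
    true_warped = corners.dot(H_true.T)
    true_warped = true_warped[:, :2] / true_warped[:, 2:]
    est_warped = corners.dot(H_est.T)
    est_warped = est_warped[:, :2] / est_warped[:, 2:]
    mean_dist = np.mean(np.linalg.norm(true_warped - est_warped, axis=1))
    return float(mean_dist <= thresh)
```

With an identity estimate against an identity ground truth the distance is zero, while a 10-pixel translation fails the default 3-pixel threshold.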
def homography_estimation(exper_name, keep_k_points=1000,
correctness_thresh=3, orb=False):
"""
Estimates the homography between two images given the predictions.
The experiment must contain in its output the prediction on 2 images, an original
image and a warped version of it, plus the homography linking the 2 images.
Outputs the correctness score.
"""
paths = get_paths(exper_name)
correctness = []
for path in paths:
data = np.load(path)
estimates = compute_homography(data, keep_k_points, correctness_thresh, orb)
correctness.append(estimates['correctness'])
return np.mean(correctness)
def get_homography_matches(exper_name, keep_k_points=1000,
correctness_thresh=3, num_images=1, orb=False):
"""
Estimates the homography between two images given the predictions.
The experiment must contain in its output the prediction on 2 images, an original
image and a warped version of it, plus the homography linking the 2 images.
Outputs the keypoints shared between the two views,
a mask of inliers points in the first image, and a list of matches meaning that
keypoints1[i] is matched with keypoints2[matches[i]]
"""
paths = get_paths(exper_name)
outputs = []
for path in paths[:num_images]:
data = np.load(path)
output = compute_homography(data, keep_k_points, correctness_thresh, orb)
output['image1'] = data['image']
output['image2'] = data['warped_image']
outputs.append(output)
return outputs
def get_ground_truth(data, keypoints, warped_keypoints, shape, correctness_thresh, inv):
"""
Compute the ground truth keypoints matchings from image to image' where image' in the result
of warping image with H_matrix.
"""
#keypoints = np.stack([keypoints[0], keypoints[1]], axis=-1)
# Warp the original keypoints with the true homography
true_warped_keypoints = warp_keypoints(data, keypoints[:, [1, 0]], inv)
true_warped_keypoints = np.stack([true_warped_keypoints[:, 1],
true_warped_keypoints[:, 0]], axis=-1)
true_warped_keypoints = filter_keypoints(true_warped_keypoints, shape)
diff = np.expand_dims(warped_keypoints, axis=1) - np.expand_dims(true_warped_keypoints, axis=0)
dist = np.linalg.norm(diff, axis=-1)
matches = np.less_equal(dist, correctness_thresh)
return matches, len(true_warped_keypoints)
def compute_pr_rec(prob, gt, n_gt, total_n, remove_zero=1e-4, simplified=False):
    """
    Compute precision, recall and matching score for one image, given the
    boolean match matrix 'gt' (one row per predicted match).
    """
    matches = gt
    tp = 0
    fp = 0
    tp_points = []
    for m in matches:
        correct = np.any(m)
        if correct:
            gt_idx = np.argmax(m)
            # at most one true positive should be considered per ground truth point
            if gt_idx not in tp_points:
                tp_points.append(gt_idx)
                tp += 1
            else:
                fp += 1
        else:
            fp += 1
    # compute precision and recall
    matching_score = tp / total_n if total_n != 0 else 0
    prec = tp / (tp + fp) if (tp + fp) != 0 else 0
    recall = tp / n_gt if n_gt != 0 else 0
    return prec, recall, matching_score
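The per-threshold precision/recall pairs are later reduced to a single number in `mean_AP` via a discrete area-under-the-curve sum. That reduction, written as a standalone sketch:

```python
def average_precision(precisions, recalls):
    """Recall-weighted sum of precisions over a PR curve
    (the same sum used in mean_AP)."""
    ap = 0.0
    for i in range(1, len(precisions)):
        ap += precisions[i] * abs(recalls[i] - recalls[i - 1])
    return ap

print(average_precision([1.0, 1.0, 0.5], [0.0, 0.5, 1.0]))  # 0.75
```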
def get_mean_AP(data, correctness_threshs, keep_k_points=1000, orb=False):
prob = data['prob']
warped_prob = data['warped_prob']
shape = prob.shape
warped_shape = warped_prob.shape
# Keeps only the points shared between the two views
    keypoints = keep_shared_points(data, prob, False, keep_k_points)
    warped_keypoints = keep_shared_points(data, warped_prob, True, keep_k_points)
desc = data['desc'][keypoints[:, 0], keypoints[:, 1]]
warped_desc = data['warped_desc'][warped_keypoints[:, 0],
warped_keypoints[:, 1]]
# Match the keypoints with the warped_keypoints with nearest neighbor search
if orb:
desc = desc.astype(np.uint8)
warped_desc = warped_desc.astype(np.uint8)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
else:
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(desc, warped_desc)
matches_idx = np.array([m.queryIdx for m in matches]).astype(int)
m_keypoints = keypoints[matches_idx, :]
matches_idx = np.array([m.trainIdx for m in matches]).astype(int)
m_warped_keypoints = warped_keypoints[matches_idx, :]
precisions = []
recalls = []
ms = []
for t in correctness_threshs:
#find ground truth
true_keypoints, n_gt1 = get_ground_truth(data, m_warped_keypoints, m_keypoints, warped_shape, t, inv=True)
true_warped_keypoints, n_gt2 = get_ground_truth(data, m_keypoints, m_warped_keypoints, shape, t, inv=False)
        # calculate precision and recall
prec1, recall1, ms1 = compute_pr_rec(m_warped_keypoints, true_warped_keypoints, n_gt2, len(warped_keypoints))
prec2, recall2, ms2 = compute_pr_rec(m_keypoints, true_keypoints, n_gt1, len(keypoints))
        # average precision and recall over the two images
prec = (prec1 + prec2)/2
recall = (recall1 + recall2)/2
matching_score = (ms1 + ms2)/2
precisions.append(prec)
recalls.append(recall)
ms.append(matching_score)
return precisions, recalls, ms
def mean_AP(exper_name):
paths = get_paths(exper_name)
threshs = np.arange(1,31)
precisions = np.zeros([1,30])
recalls = np.zeros([1,30])
for path in paths:
data = np.load(path)
pr, rec, ms = get_mean_AP(data, threshs)
precisions = np.add(precisions, pr)
recalls = np.add(recalls, rec)
n = len(paths)
precisions = precisions / n
recalls = recalls / n
mean_AP = np.sum(precisions[0][1:] * np.abs((np.array(recalls[0][1:]) - np.array(recalls[0][:-1]))))
return mean_AP
def matching_score(exper_name):
paths = get_paths(exper_name)
threshs = np.arange(3,4)
matching_score = np.zeros([1,1])
for path in paths:
data = np.load(path)
pr, rec, ms = get_mean_AP(data, threshs)
matching_score = np.add(matching_score, ms)
n = len(paths)
matching_score = matching_score/n
return matching_score
| 34.787671 | 119 | 0.656921 |
856bd5e6096658ee85bd155087df3c9bb55b4975 | 8,845 | py | Python | Plugins/CharakterAssistent/WizardWrapper.py | dbuse/Sephrasto | a72a0e961b38f4ea60dedb2b896c681c0226595b | [
"MIT"
] | null | null | null | Plugins/CharakterAssistent/WizardWrapper.py | dbuse/Sephrasto | a72a0e961b38f4ea60dedb2b896c681c0226595b | [
"MIT"
] | null | null | null | Plugins/CharakterAssistent/WizardWrapper.py | dbuse/Sephrasto | a72a0e961b38f4ea60dedb2b896c681c0226595b | [
"MIT"
] | null | null | null | from PyQt5 import QtWidgets, QtCore
import os.path
import logging
from Wolke import Wolke
import lxml.etree as etree
from EinstellungenWrapper import EinstellungenWrapper
from EventBus import EventBus
from CharakterAssistent.CharakterMerger import CharakterMerger
from Hilfsmethoden import Hilfsmethoden
class Regeln(object):
def __init__(self):
self.spezies = {}
self.kulturen = {}
self.professionen = {}
self.archetypen = False
def hasArchetypen(self):
return self.archetypen
def setHasArchetypen(self):
self.archetypen = True
class WizardWrapper(object):
def __init__(self):
self.regelList = {}
rootdir = os.path.dirname(os.path.abspath(__file__))
datadirs = [os.path.join(rootdir, "Data"), os.path.join(Wolke.Settings['Pfad-Plugins'], "CharakterAssistent", "Data")]
for datadir in datadirs:
if not os.path.isdir(datadir):
continue
for regelnFolder in Hilfsmethoden.listdir(datadir):
if not os.path.isdir(os.path.join(datadir, regelnFolder)):
continue
regelnName = os.path.splitext(os.path.basename(regelnFolder))[0]
if not regelnName in self.regelList:
self.regelList[regelnName] = Regeln()
regeln = self.regelList[regelnName]
speziesFolder = os.path.join(datadir, regelnFolder, "Spezies")
regeln.spezies = {**regeln.spezies, **self.mapContainedFileNamesToPaths(speziesFolder)} #syntax = merge dict b into a
kulturenFolder = os.path.join(datadir, regelnFolder, "Kultur")
regeln.kulturen = {**regeln.kulturen, **self.mapContainedFileNamesToPaths(kulturenFolder)}
if regelnName.endswith("Archetypen"):
regeln.setHasArchetypen()
professionenFolder = os.path.join(datadir, regelnFolder)
else:
professionenFolder = os.path.join(datadir, regelnFolder, "Profession")
if os.path.isdir(professionenFolder):
for professionKategorieFolder in Hilfsmethoden.listdir(professionenFolder):
professionKategorieFolder = os.path.join(professionenFolder, professionKategorieFolder)
if os.path.isdir(professionKategorieFolder):
kategorie = os.path.basename(professionKategorieFolder)
if not kategorie in regeln.professionen:
regeln.professionen[kategorie] = {}
regeln.professionen[kategorie] = {**regeln.professionen[kategorie], **self.mapContainedFileNamesToPaths(professionKategorieFolder, regeln.hasArchetypen())}
def mapContainedFileNamesToPaths(self, folderPath, appendEP = False):
result = {}
if os.path.isdir(folderPath):
for path in Hilfsmethoden.listdir(folderPath):
path = os.path.join(folderPath, path)
if os.path.isfile(path):
fileNameSplit = os.path.splitext(os.path.basename(path))
if fileNameSplit[1] != ".xml":
continue
fileName = fileNameSplit[0]
index = 0
if fileName.endswith("_var"):
index = 1
fileName = fileName[:-4]
if logging.root.level == logging.DEBUG:
logging.debug("CharakterAssistent: Verifiziere " + path)
CharakterMerger.readChoices(path) # print log warnings for entire data folder on char creation
elif appendEP:
root = etree.parse(path).getroot()
fileName += " | " + root.find('Erfahrung/EPspent').text + " EP"
if fileName in result:
result[fileName].insert(index, path)
else:
result[fileName] = [path]
return result
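The `_var` convention in `mapContainedFileNamesToPaths` — `Foo.xml` and `Foo_var.xml` collapse into one dictionary entry with the base file kept first — can be sketched in isolation (hypothetical helper using the same insert-at-index logic as above):

```python
import os

def add_variant(mapping, path):
    """Map 'Foo.xml' and 'Foo_var.xml' to the same key, base file first."""
    name = os.path.splitext(os.path.basename(path))[0]
    index = 0
    if name.endswith('_var'):
        index = 1
        name = name[:-4]
    if name in mapping:
        mapping[name].insert(index, path)
    else:
        mapping[name] = [path]
    return mapping

m = {}
add_variant(m, 'Data/Foo_var.xml')
add_variant(m, 'Data/Foo.xml')
print(m)  # {'Foo': ['Data/Foo.xml', 'Data/Foo_var.xml']}
```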
def setupMainForm(self):
rl = list(self.regelList.keys())
self.ui.cbRegeln.addItems(rl)
if "CharakterAssistent_Regeln" in Wolke.Settings:
regeln = Wolke.Settings["CharakterAssistent_Regeln"]
if regeln in rl:
self.ui.cbRegeln.setCurrentIndex(rl.index(regeln))
self.ui.cbRegeln.currentIndexChanged.connect(self.regelnChanged)
self.ui.cbProfessionKategorie.currentIndexChanged.connect(self.professionKategorieChanged)
self.regelnChanged()
self.ui.btnAccept.clicked.connect(self.acceptClickedHandler)
def professionKategorieChanged(self):
regeln = self.regelList[self.ui.cbRegeln.currentText()]
self.ui.cbProfession.clear()
kategorie = self.ui.cbProfessionKategorie.currentText()
self.ui.cbProfession.setEnabled(kategorie != "Überspringen")
if kategorie != "Überspringen":
self.ui.cbProfession.addItem("Überspringen")
self.ui.cbProfession.addItems(regeln.professionen[kategorie].keys())
def regelnChanged(self):
Wolke.Settings["CharakterAssistent_Regeln"] = self.ui.cbRegeln.currentText()
EinstellungenWrapper.save()
if not self.ui.cbRegeln.currentText() in self.regelList:
return
regeln = self.regelList[self.ui.cbRegeln.currentText()]
self.ui.lblSpezies.setVisible(not regeln.hasArchetypen())
self.ui.cbSpezies.setVisible(not regeln.hasArchetypen())
self.ui.lblGeschlecht.setVisible(not regeln.hasArchetypen())
self.ui.btnMaennlich.setVisible(not regeln.hasArchetypen())
self.ui.btnWeiblich.setVisible(not regeln.hasArchetypen())
self.ui.lblKultur.setVisible(not regeln.hasArchetypen())
self.ui.cbKultur.setVisible(not regeln.hasArchetypen())
self.ui.lblProfessionKategorie.setText(regeln.hasArchetypen() and "Archetypkategorie" or "Professionskategorie")
self.ui.lblProfession.setText(regeln.hasArchetypen() and "Archetyp" or "Profession")
self.ui.cbSpezies.clear()
self.ui.cbKultur.clear()
self.ui.cbProfessionKategorie.blockSignals(True)
self.ui.cbProfessionKategorie.clear()
self.ui.cbProfessionKategorie.blockSignals(False)
self.ui.cbSpezies.addItem("Überspringen")
self.ui.cbSpezies.addItems(regeln.spezies.keys())
self.ui.cbKultur.addItem("Überspringen")
self.ui.cbKultur.addItems(regeln.kulturen.keys())
if not regeln.hasArchetypen():
self.ui.cbProfessionKategorie.addItem("Überspringen")
self.ui.cbProfessionKategorie.addItems(regeln.professionen.keys())
self.professionKategorieChanged()
def acceptClickedHandler(self):
if not self.ui.cbRegeln.currentText() in self.regelList:
self.formMain.hide()
return
regeln = self.regelList[self.ui.cbRegeln.currentText()]
geschlecht = ""
if self.ui.btnWeiblich.isChecked():
geschlecht = "weiblich"
else:
geschlecht = "männlich"
if not regeln.hasArchetypen():
Wolke.Char.kurzbeschreibung = "Geschlecht: " + geschlecht
EventBus.doAction("cbext_update", { 'name' : "geschlecht", 'value' : geschlecht })
if self.ui.cbSpezies.currentText() != "Überspringen":
spezies = regeln.spezies[self.ui.cbSpezies.currentText()]
CharakterMerger.xmlLesen(spezies[0], True, False)
CharakterMerger.handleChoices(spezies, self.ui.cbSpezies.currentText(), geschlecht, True, False, False)
if self.ui.cbKultur.currentText() != "Überspringen":
kultur = regeln.kulturen[self.ui.cbKultur.currentText()]
CharakterMerger.xmlLesen(kultur[0], False, True)
CharakterMerger.handleChoices(kultur, self.ui.cbKultur.currentText(), geschlecht, False, True, False)
if self.ui.cbProfessionKategorie.currentText() != "Überspringen":
professionKategorie = regeln.professionen[self.ui.cbProfessionKategorie.currentText()]
if self.ui.cbProfession.currentText() != "Überspringen":
profession = professionKategorie[self.ui.cbProfession.currentText()]
if regeln.hasArchetypen():
CharakterMerger.xmlLesen(profession[0], True, True)
else:
CharakterMerger.xmlLesen(profession[0], False, False)
CharakterMerger.handleChoices(profession, self.ui.cbProfession.currentText(), geschlecht, False, False, True)
Wolke.Char.aktualisieren()
self.formMain.hide()
EventBus.doAction("charaktereditor_reload") | 45.592784 | 183 | 0.629621 |
a3c6560f300d1147b68881c35e190d8a204f5e32 | 339 | py | Python | cfd/__init__.py | ko-matsu/cfd-python | 3f1be7b31ba68aeb565debc3c6b6d3555ef807ef | [
"MIT"
] | null | null | null | cfd/__init__.py | ko-matsu/cfd-python | 3f1be7b31ba68aeb565debc3c6b6d3555ef807ef | [
"MIT"
] | null | null | null | cfd/__init__.py | ko-matsu/cfd-python | 3f1be7b31ba68aeb565debc3c6b6d3555ef807ef | [
"MIT"
] | null | null | null | import os
import glob
__version__ = '0.1.0'
__all__ = [
os.path.split(os.path.splitext(file)[0])[1]
for file in glob.glob(os.path.join(
os.path.dirname(__file__), '[a-zA-Z0-9]*.py'))
]
"""
'address',
'confidential_address',
'confidential_transaction',
'descriptor',
'hdwallet',
'key',
'script',
'transaction',
'util'
"""
| 15.409091 | 54 | 0.646018 |
3df131dbadb3338fed7d4932ab31de9f905c4a63 | 184 | py | Python | docs/examples/record_sequence.py | TRex22/picamerax | 2e7b05c92331b315533835596862a643ba55b021 | [
"BSD-3-Clause"
] | 17 | 2021-03-23T09:23:13.000Z | 2022-01-27T16:23:48.000Z | docs/examples/record_sequence.py | TRex22/picamerax | 2e7b05c92331b315533835596862a643ba55b021 | [
"BSD-3-Clause"
] | 7 | 2021-05-26T13:40:46.000Z | 2022-03-12T00:47:49.000Z | docs/examples/record_sequence.py | TRex22/picamerax | 2e7b05c92331b315533835596862a643ba55b021 | [
"BSD-3-Clause"
] | 6 | 2021-04-04T15:43:02.000Z | 2021-05-18T10:34:00.000Z | import picamerax
camera = picamerax.PiCamera(resolution=(640, 480))
for filename in camera.record_sequence(
'%d.h264' % i for i in range(1, 11)):
camera.wait_recording(5)
| 26.285714 | 50 | 0.701087 |
6b6cde24b6b46266726294c6d3c83ae6123f1ed5 | 9,849 | py | Python | ClientGenerator/src/googleapis/codegen/java_generator_test.py | Ramkarthik/google-api-dotnet-client | d752f96e8a6de53922c22eedc73ea7077628b106 | [
"Apache-2.0"
] | 3 | 2017-06-11T10:55:49.000Z | 2022-01-07T18:49:47.000Z | ClientGenerator/src/googleapis/codegen/java_generator_test.py | Ramkarthik/google-api-dotnet-client | d752f96e8a6de53922c22eedc73ea7077628b106 | [
"Apache-2.0"
] | null | null | null | ClientGenerator/src/googleapis/codegen/java_generator_test.py | Ramkarthik/google-api-dotnet-client | d752f96e8a6de53922c22eedc73ea7077628b106 | [
"Apache-2.0"
] | 1 | 2018-11-23T21:47:57.000Z | 2018-11-23T21:47:57.000Z | #!/usr/bin/python2.7
# Copyright 2011 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for java_generator."""
__author__ = 'aiuto@google.com (Tony Aiuto)'
from google.apputils import basetest
from googleapis.codegen import data_types
from googleapis.codegen import data_value
from googleapis.codegen import java_generator
class JavaApiTest(basetest.TestCase):
def testToClassName(self):
"""Test creating safe class names from object names."""
# 'parameters': {}
api = java_generator.JavaApi({
'name': 'dummy',
'version': 'v1',
'resources': {
'foo': {
'methods': {
'bar': {
'id': 'bar',
}
}
}
}
})
self.assertEquals('Foo', api.ToClassName('foo', api))
self.assertEquals('DummyClass', api.ToClassName('class', None))
self.assertEquals('DummyDefault', api.ToClassName('default', None))
self.assertEquals('DummyImport', api.ToClassName('import', None))
self.assertEquals('DummyObject', api.ToClassName('object', None))
self.assertEquals('DummyString', api.ToClassName('string', None))
self.assertEquals('DummyTrue', api.ToClassName('true', None))
self.assertEquals('dummy', api.values['name'])
self.assertEquals('Dummy', api._class_name)
# Test the renaming of the object when it matches the API name.
self.assertEquals('Dummy', api.ToClassName('dummy', api))
foo = api._resources[0]
self.assertEquals('DummyOperations',
api.ToClassName('dummy', foo, element_type='resource'))
bar = foo._methods[0]
self.assertEquals('DummyOperation',
api.ToClassName('dummy', bar, element_type='method'))
def testToClassNameWithCanonical(self):
api = java_generator.JavaApi({
'name': 'dummy',
'canonicalName': 'Dummy Service',
'version': 'v1',
'resources': {}
})
self.assertEquals('dummy', api.values['name'])
self.assertEquals('DummyService', api._class_name)
self.assertEquals('DummyServiceClass', api.ToClassName('class', None))
def testGetCodeTypeFromDictionary(self):
"""Test mapping of JSON schema types to Java class names."""
language_model = java_generator.JavaLanguageModel()
test_cases = [
['String', {'type': 'string', 'format': 'byte'}],
['DateTime', {'type': 'string', 'format': 'date-time'}],
['Double', {'type': 'number', 'format': 'double'}],
['Float', {'type': 'number', 'format': 'float'}],
['Short', {'type': 'integer', 'format': 'int16'}],
['Integer', {'type': 'integer', 'format': 'int32'}],
['Long', {'type': 'string', 'format': 'int64'}],
['Object', {'type': 'any'}],
['Boolean', {'type': 'boolean'}],
['String', {'type': 'string'}],
['Long', {'type': 'integer', 'format': 'uint32'}],
['UnsignedLong', {'type': 'string', 'format': 'uint64'}],
]
for test_case in test_cases:
self.assertEquals(
test_case[0],
language_model.GetCodeTypeFromDictionary(test_case[1]))
class JavaGeneratorTest(basetest.TestCase):
def testImportsForArray(self):
"""Test if we get the right imports for an array.
    The goal is to verify that an array of a primitive type that requires an
    import really works.
"""
gen = java_generator.BaseJavaGenerator({
'name': 'dummy',
'version': 'v1',
'resources': {},
'schemas': {
'Bar': {
'id': 'Bar',
'type': 'object',
'properties': {
'p1': {
'type': 'array',
'items': {
'type': 'string',
'format': 'uint64'
}
},
'p2': {
'type': 'array',
'items': {
'type': 'array',
'items': {
'type': 'string',
'format': 'date-time'
}
}
}
}
}
}
})
gen.AnnotateApiForLanguage(gen.api)
found_big_integer = False
found_date_time = False
for schema in gen._api._schemas.values():
import_manager = schema.values.get('importManager')
for import_list in import_manager.ImportLists():
        for import_def in import_list:
          print import_def
          if 'BigInteger' in import_def:
            found_big_integer = True
          if 'DateTime' in import_def:
            found_date_time = True
self.assertTrue(found_big_integer)
self.assertTrue(found_date_time)
class JavaLanguageModelTest(basetest.TestCase):
"""Tests for features implemented in the language model."""
def testDefaultPath(self):
"""Test the package path generation."""
def MakeGen(host):
gen = java_generator.BaseJavaGenerator({
'name': 'fake',
'version': 'v1',
'rootUrl': 'https://%s/' % host,
'servicePath': 'fake/v1',
'ownerDomain': host,
})
gen.AnnotateApiForLanguage(gen.api)
return gen
gen = MakeGen('google.com')
self.assertEquals('com/google/api/services/fake', gen.api.module.path)
self.assertEquals('com/google/api/services/fake/model',
gen.api.model_module.path)
gen = MakeGen('not-google.com')
self.assertEquals('com/not_google/fake', gen.api.module.path)
self.assertEquals('com.not_google.fake', gen.api.module.name)
gen = MakeGen('my-custom_app.appspot.com')
self.assertEquals('com/appspot/my_custom_app/fake', gen.api.module.path)
def testDefaultPathWithPackagePathAndCanonicalName(self):
"""Test the package path generation."""
def MakeGen():
api_def = {
'name': 'fake',
'version': 'v1',
'rootUrl': 'https://www.googleapis.com/',
'servicePath': 'fake/v1',
'ownerDomain': 'google.com',
'packagePath': 'my/path',
'canonicalName': 'Canonical Name'
}
gen = java_generator.BaseJavaGenerator(
api_def, options={'google_api': True})
gen.AnnotateApiForLanguage(gen.api)
return gen
gen = MakeGen()
self.assertEquals('com/google/api/services/my/path/canonicalname',
gen.api.module.path)
self.assertEquals('com/google/api/services/my/path/canonicalname/model',
gen.api.model_module.path)
def testAllowedCharacters(self):
# make sure $ is allowed in a name and that @ is not
model = java_generator.JavaLanguageModel()
self.assertEquals('$ref',
model.TransformString(None, '$ref', model.member_policy))
self.assertEquals('set$ref',
model.TransformString(None, '$ref', model.setter_policy))
self.assertEquals('getId',
model.TransformString(None, '@id', model.getter_policy))
class JavaLanguageModelDataValueTest(basetest.TestCase):
"""Tests for DataValue integration."""
def setUp(self):
self.language_model = java_generator.JavaLanguageModel()
def _CreateDataValue(self, value, val_type):
def_dict = {
'className': 'Foo',
'type': val_type,
}
prototype = data_types.DataType(
def_dict, None, language_model=self.language_model)
dv = data_value.DataValue(value, prototype)
return dv
def testRenderBoolean(self):
dv = self._CreateDataValue(True, 'boolean')
render_method = self.language_model._SUPPORTED_TYPES['boolean']
self.assertEqual('true', render_method(dv))
dv.SetValue(False)
self.assertEqual('false', render_method(dv))
def testRenderInteger(self):
dv = self._CreateDataValue(42, 'integer')
render_method = self.language_model._SUPPORTED_TYPES['integer']
self.assertRaises(ValueError, render_method, dv)
dv.SetTemplateValue('codeType', 'Long')
self.assertEqual('42L', render_method(dv))
class Java14LanguageModelTest(basetest.TestCase):
def setUp(self):
self.language_model = java_generator.Java14LanguageModel()
def _CreateDataValue(self, value, val_type):
def_dict = {
'className': 'Foo',
'type': val_type,
}
prototype = data_types.DataType(
def_dict, None, language_model=self.language_model)
dv = data_value.DataValue(value, prototype)
return dv
def testRenderBoolean(self):
dv = self._CreateDataValue(True, 'boolean')
render_method = self.language_model._SUPPORTED_TYPES['boolean']
self.assertEqual('true', render_method(dv))
dv.SetValue(False)
self.assertEqual('false', render_method(dv))
def testRenderInteger(self):
dv = self._CreateDataValue(42, 'integer')
render_method = self.language_model._SUPPORTED_TYPES['integer']
self.assertRaises(ValueError, render_method, dv)
dv.SetTemplateValue('codeType', 'java.lang.Long')
self.assertEqual('42L', render_method(dv))
if __name__ == '__main__':
basetest.main()
| 34.925532 | 79 | 0.602396 |
009762491b36f0fb420f60630b52ab5f118d5a5e | 18,203 | py | Python | calico/felix/test/test_ipsets.py | robbrockbank/felix | 3429099d677bec0caa3dd9b8d69d1553304741ca | [
"Apache-2.0"
] | null | null | null | calico/felix/test/test_ipsets.py | robbrockbank/felix | 3429099d677bec0caa3dd9b8d69d1553304741ca | [
"Apache-2.0"
] | null | null | null | calico/felix/test/test_ipsets.py | robbrockbank/felix | 3429099d677bec0caa3dd9b8d69d1553304741ca | [
"Apache-2.0"
] | 1 | 2016-12-02T12:08:32.000Z | 2016-12-02T12:08:32.000Z | # -*- coding: utf-8 -*-
# Copyright 2015 Metaswitch Networks
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
felix.test.test_ipsets
~~~~~~~~~~~~~~~~~~~~~~
Unit tests for the IpsetManager.
"""
from collections import defaultdict
import logging
from mock import *
from calico.datamodel_v1 import EndpointId
from calico.felix.futils import IPV4, FailedSystemCall
from calico.felix.ipsets import (EndpointData, IpsetManager, TagIpset,
EMPTY_ENDPOINT_DATA, Ipset)
from calico.felix.refcount import CREATED
from calico.felix.test.base import BaseTestCase
# Logger
_log = logging.getLogger(__name__)
EP_ID_1_1 = EndpointId("host1", "orch", "wl1_1", "ep1_1")
EP_1_1 = {
"profile_ids": ["prof1", "prof2"],
"ipv4_nets": ["10.0.0.1/32"],
}
EP_DATA_1_1 = EndpointData(["prof1", "prof2"], ["10.0.0.1"])
EP_1_1_NEW_IP = {
"profile_ids": ["prof1", "prof2"],
"ipv4_nets": ["10.0.0.2/32", "10.0.0.3/32"],
}
EP_1_1_NEW_PROF_IP = {
"profile_ids": ["prof3"],
"ipv4_nets": ["10.0.0.3/32"],
}
EP_ID_1_2 = EndpointId("host1", "orch", "wl1_2", "ep1_2")
EP_ID_2_1 = EndpointId("host2", "orch", "wl2_1", "ep2_1")
EP_2_1 = {
"profile_ids": ["prof1"],
"ipv4_nets": ["10.0.0.1/32"],
}
EP_2_1_NO_NETS = {
"profile_ids": ["prof1"],
}
EP_2_1_IPV6 = {
"profile_ids": ["prof1"],
"ipv6_nets": ["dead:beef::/128"],
}
EP_DATA_2_1 = EndpointData(["prof1"], ["10.0.0.1"])
class TestIpsetManager(BaseTestCase):
def setUp(self):
super(TestIpsetManager, self).setUp()
self.reset()
def reset(self):
self.created_refs = defaultdict(list)
self.acquired_refs = {}
self.mgr = IpsetManager(IPV4)
self.m_create = Mock(spec=self.mgr._create,
side_effect = self.m_create)
self.mgr._create = self.m_create
def m_create(self, tag_id):
_log.info("Creating ipset %s", tag_id)
ipset = Mock(spec=TagIpset)
ipset._manager = None
ipset._id = None
ipset.ref_mgmt_state = CREATED
ipset.ref_count = 0
ipset.owned_ipset_names.return_value = ["felix-v4-" + tag_id,
"felix-v4-tmp-" + tag_id]
ipset.tag = tag_id
self.created_refs[tag_id].append(ipset)
return ipset
def test_tag_then_endpoint(self):
# Send in the messages.
self.mgr.on_tags_update("prof1", ["tag1"], async=True)
self.mgr.on_endpoint_update(EP_ID_1_1, EP_1_1, async=True)
# Let the actor process them.
self.step_mgr()
self.assert_one_ep_one_tag()
def test_endpoint_then_tag(self):
# Send in the messages.
self.mgr.on_endpoint_update(EP_ID_1_1, EP_1_1, async=True)
self.mgr.on_tags_update("prof1", ["tag1"], async=True)
# Let the actor process them.
self.step_mgr()
self.assert_one_ep_one_tag()
def test_endpoint_then_tag_idempotent(self):
for _ in xrange(3):
# Send in the messages.
self.mgr.on_endpoint_update(EP_ID_1_1, EP_1_1, async=True)
self.mgr.on_tags_update("prof1", ["tag1"], async=True)
# Let the actor process them.
self.step_mgr()
self.assert_one_ep_one_tag()
def assert_one_ep_one_tag(self):
self.assertEqual(self.mgr.endpoint_data_by_ep_id, {
EP_ID_1_1: EP_DATA_1_1,
})
self.assertEqual(self.mgr.ip_owners_by_tag, {
"tag1": {
"10.0.0.1": {
"prof1": set([
EP_ID_1_1
])
}
}
})
def test_change_ip(self):
# Initial set-up.
self.mgr.on_tags_update("prof1", ["tag1"], async=True)
self.mgr.on_endpoint_update(EP_ID_1_1, EP_1_1, async=True)
self.step_mgr()
# Update the endpoint's IPs:
self.mgr.on_endpoint_update(EP_ID_1_1, EP_1_1_NEW_IP, async=True)
self.step_mgr()
self.assertEqual(self.mgr.ip_owners_by_tag, {
"tag1": {
"10.0.0.2": {
"prof1": set([
EP_ID_1_1
])
},
"10.0.0.3": {
"prof1": set([
EP_ID_1_1
])
}
}
})
def test_tag_updates(self):
# Initial set-up.
self.mgr.on_endpoint_update(EP_ID_1_1, EP_1_1, async=True)
self.mgr.on_tags_update("prof1", ["tag1"], async=True)
self.step_mgr()
# Add a tag, keep a tag.
self.mgr.on_tags_update("prof1", ["tag1", "tag2"], async=True)
self.step_mgr()
self.assertEqual(self.mgr.ip_owners_by_tag, {
"tag1": {
"10.0.0.1": {
"prof1": set([
EP_ID_1_1
])
}
},
"tag2": {
"10.0.0.1": {
"prof1": set([
EP_ID_1_1
])
}
}
})
self.assertEqual(self.mgr.tags_by_prof_id, {"prof1": ["tag1", "tag2"]})
# Remove a tag.
self.mgr.on_tags_update("prof1", ["tag2"], async=True)
self.step_mgr()
self.assertEqual(self.mgr.ip_owners_by_tag, {
"tag2": {
"10.0.0.1": {
"prof1": set([
EP_ID_1_1
])
}
}
})
# Delete the tags:
self.mgr.on_tags_update("prof1", None, async=True)
self.step_mgr()
self.assertEqual(self.mgr.ip_owners_by_tag, {})
self.assertEqual(self.mgr.tags_by_prof_id, {})
def step_mgr(self):
self.step_actor(self.mgr)
self.assertEqual(self.mgr._dirty_tags, set())
def test_update_profile_and_ips(self):
# Initial set-up.
self.mgr.on_endpoint_update(EP_ID_1_1, EP_1_1, async=True)
self.mgr.on_tags_update("prof1", ["tag1"], async=True)
self.mgr.on_tags_update("prof3", ["tag3"], async=True)
self.step_mgr()
self.mgr.on_endpoint_update(EP_ID_1_1, EP_1_1_NEW_PROF_IP, async=True)
self.step_mgr()
self.assertEqual(self.mgr.ip_owners_by_tag, {
"tag3": {
"10.0.0.3": {
"prof3": set([
EP_ID_1_1
])
}
}
})
self.assertEqual(self.mgr.endpoint_ids_by_profile_id, {
"prof3": set([EP_ID_1_1])
})
def test_optimize_out_v6(self):
self.mgr.on_tags_update("prof1", ["tag1"], async=True)
self.mgr.on_endpoint_update(EP_ID_1_1, EP_1_1, async=True)
self.mgr.on_endpoint_update(EP_ID_2_1, EP_2_1_IPV6, async=True)
self.step_mgr()
# Index should contain only 1_1:
self.assertEqual(self.mgr.endpoint_data_by_ep_id, {
EP_ID_1_1: EP_DATA_1_1,
})
def test_optimize_out_no_nets(self):
self.mgr.on_tags_update("prof1", ["tag1"], async=True)
self.mgr.on_endpoint_update(EP_ID_1_1, EP_1_1, async=True)
self.mgr.on_endpoint_update(EP_ID_2_1, EP_2_1_NO_NETS, async=True)
self.step_mgr()
# Index should contain only 1_1:
self.assertEqual(self.mgr.endpoint_data_by_ep_id, {
EP_ID_1_1: EP_DATA_1_1,
})
# Should be happy to then add it in.
self.mgr.on_endpoint_update(EP_ID_2_1, EP_2_1, async=True)
self.step_mgr()
# Index should contain both:
self.assertEqual(self.mgr.endpoint_data_by_ep_id, {
EP_ID_1_1: EP_DATA_1_1,
EP_ID_2_1: EP_DATA_2_1,
})
def test_duplicate_ips(self):
# Add in two endpoints with the same IP.
self.mgr.on_tags_update("prof1", ["tag1"], async=True)
self.mgr.on_endpoint_update(EP_ID_1_1, EP_1_1, async=True)
self.mgr.on_endpoint_update(EP_ID_2_1, EP_2_1, async=True)
self.step_mgr()
# Index should contain both:
self.assertEqual(self.mgr.endpoint_data_by_ep_id, {
EP_ID_1_1: EP_DATA_1_1,
EP_ID_2_1: EP_DATA_2_1,
})
self.assertEqual(self.mgr.ip_owners_by_tag, {
"tag1": {
"10.0.0.1": {
"prof1": set([
EP_ID_1_1,
EP_ID_2_1,
])
}
}
})
# Second profile tags arrive:
self.mgr.on_tags_update("prof2", ["tag1", "tag2"], async=True)
self.step_mgr()
self.assertEqual(self.mgr.ip_owners_by_tag, {
"tag1": {
"10.0.0.1": {
"prof1": set([
EP_ID_1_1,
EP_ID_2_1,
]),
"prof2": set([
EP_ID_1_1,
])
}
},
"tag2": {
"10.0.0.1": {
"prof2": set([
EP_ID_1_1,
])
}
},
})
# Remove one, check the index gets updated.
self.mgr.on_endpoint_update(EP_ID_2_1, None, async=True)
self.step_mgr()
self.assertEqual(self.mgr.endpoint_data_by_ep_id, {
EP_ID_1_1: EP_DATA_1_1,
})
self.assertEqual(self.mgr.ip_owners_by_tag, {
"tag1": {
"10.0.0.1": {
"prof1": set([
EP_ID_1_1,
]),
"prof2": set([
EP_ID_1_1,
])
}
},
"tag2": {
"10.0.0.1": {
"prof2": set([
EP_ID_1_1,
])
}
},
})
# Remove the other, index should get completely cleaned up.
self.mgr.on_endpoint_update(EP_ID_1_1, None, async=True)
self.step_mgr()
self.assertEqual(self.mgr.endpoint_data_by_ep_id, {})
self.assertEqual(self.mgr.ip_owners_by_tag, {})
def on_ref_acquired(self, tag_id, ipset):
self.acquired_refs[tag_id] = ipset
@patch("calico.felix.ipsets.list_ipset_names", autospec=True)
@patch("calico.felix.futils.check_call", autospec=True)
def test_cleanup(self, m_check_call, m_list_ipsets):
# Start with a couple ipsets.
self.mgr.get_and_incref("foo", callback=self.on_ref_acquired,
async=True)
self.mgr.get_and_incref("bar", callback=self.on_ref_acquired,
async=True)
self.step_mgr()
self.assertEqual(set(self.created_refs.keys()),
set(["foo", "bar"]))
# Notify ready so that the ipsets are marked as started.
self._notify_ready(["foo", "bar"])
self.step_mgr()
# Then decref "bar" so that it gets marked as stopping.
self.mgr.decref("bar", async=True)
self.step_mgr()
self.assertEqual(
self.mgr.stopping_objects_by_id,
{"bar": set(self.created_refs["bar"])}
)
# Return mix of expected and unexpected ipsets.
m_list_ipsets.return_value = [
"not-felix-foo",
"felix-v6-foo",
"felix-v6-bazzle",
"felix-v4-foo",
"felix-v4-bar",
"felix-v4-baz",
"felix-v4-biff",
]
m_check_call.side_effect = iter([
# Exception on any individual call should be ignored.
FailedSystemCall("Dummy", [], None, None, None),
None,
])
self.mgr.cleanup(async=True)
self.step_mgr()
# Explicitly check that exactly the right delete calls were made.
# assert_has_calls would ignore extra calls.
self.assertEqual(sorted(m_check_call.mock_calls),
sorted([
call(["ipset", "destroy", "felix-v4-biff"]),
call(["ipset", "destroy", "felix-v4-baz"]),
]))
def test_apply_snapshot_mainline(self):
self.mgr.apply_snapshot(
{"prof1": ["tag1"], "prof2": ["B"], "prof3": ["B"]},
{EP_ID_1_1: EP_1_1,
EP_ID_2_1: EP_2_1},
async=True,
)
self.mgr.get_and_incref("tag1",
callback=self.on_ref_acquired,
async=True)
self.step_mgr()
self.mgr.on_object_startup_complete("tag1",
self.created_refs["tag1"][0],
async=True)
self.step_mgr()
self.mgr.apply_snapshot(
{"prof1": ["tag1", "tag2"]},
{EP_ID_1_1: EP_1_1},
async=True,
)
self.step_mgr()
self.assertEqual(self.mgr.tags_by_prof_id,
{"prof1": ["tag1", "tag2"]})
self.assertEqual(self.mgr.endpoint_data_by_ep_id,
{EP_ID_1_1: EP_DATA_1_1})
ipset = self.acquired_refs["tag1"]
self.assertEqual(
ipset.replace_members.mock_calls,
[
call(set(['10.0.0.1']), force_reprogram=True, async=True),
call(set(['10.0.0.1']), force_reprogram=True, async=True),
]
)
def test_apply_snapshot_forces_reprogram(self):
# Apply a snapshot but mock the finish call so that we can check that
# apply_snapshot set the flag...
self.mgr.apply_snapshot(
{"prof1": ["A"], "prof2": ["B"]},
{EP_ID_1_1: EP_1_1,
EP_ID_2_1: EP_2_1},
async=True,
)
# noinspection PyUnresolvedReferences
with patch.object(self.mgr, "_finish_msg_batch"):
self.step_actor(self.mgr)
self.assertTrue(self.mgr._force_reprogram)
def test_finish_msg_batch_clears_reprogram_flag(self):
# Apply a snapshot and step the actor for real, should clear the flag.
self.mgr.apply_snapshot(
{"prof1": ["A"]},
{EP_ID_1_1: EP_1_1},
async=True,
)
self.step_mgr()
self.assertFalse(self.mgr._force_reprogram)
def _notify_ready(self, tags):
for tag in tags:
self.mgr.on_object_startup_complete(tag, self.created_refs[tag][0],
async=True)
self.step_mgr()
class TestEndpointData(BaseTestCase):
def test_repr(self):
self.assertEqual(repr(EP_DATA_1_1),
"EndpointData(('prof1', 'prof2'),('10.0.0.1',))")
def test_equals(self):
self.assertEqual(EP_DATA_1_1, EP_DATA_1_1)
self.assertEqual(EndpointData(["prof2", "prof1"],
["10.0.0.2", "10.0.0.1"]),
EndpointData(["prof2", "prof1"],
["10.0.0.2", "10.0.0.1"]))
self.assertEqual(EndpointData(["prof2", "prof1"],
["10.0.0.2", "10.0.0.1"]),
EndpointData(["prof1", "prof2"],
["10.0.0.1", "10.0.0.2"]))
self.assertNotEquals(EP_DATA_1_1, None)
self.assertNotEquals(EP_DATA_1_1, EP_DATA_2_1)
self.assertNotEquals(EP_DATA_1_1, EMPTY_ENDPOINT_DATA)
self.assertFalse(EndpointData(["prof2", "prof1"],
["10.0.0.2", "10.0.0.1"]) !=
EndpointData(["prof2", "prof1"],
["10.0.0.2", "10.0.0.1"]))
def test_hash(self):
self.assertEqual(hash(EndpointData(["prof2", "prof1"],
["10.0.0.2", "10.0.0.1"])),
hash(EndpointData(["prof1", "prof2"],
["10.0.0.1", "10.0.0.2"])))
def test_really_a_struct(self):
self.assertFalse(hasattr(EP_DATA_1_1, "__dict__"))
class TestIpset(BaseTestCase):
def setUp(self):
super(TestIpset, self).setUp()
self.ipset = Ipset("foo", "foo-tmp", "inet")
@patch("calico.felix.futils.check_call", autospec=True)
def test_mainline(self, m_check_call):
self.ipset.replace_members(set(["10.0.0.1"]))
m_check_call.assert_called_once_with(
["ipset", "restore"],
input_str='create foo hash:ip family inet --exist\n'
'create foo-tmp hash:ip family inet --exist\n'
'flush foo-tmp\n'
'add foo-tmp 10.0.0.1\n'
'swap foo foo-tmp\n'
'destroy foo-tmp\n'
'COMMIT\n'
)
@patch("calico.felix.futils.check_call", autospec=True)
def test_ensure_exists(self, m_check_call):
self.ipset.ensure_exists()
m_check_call.assert_called_once_with(
["ipset", "restore"],
input_str='create foo hash:ip family inet --exist\n'
'COMMIT\n'
)
@patch("calico.felix.futils.call_silent", autospec=True)
def test_delete(self, m_call_silent):
self.ipset.delete()
self.assertEqual(
m_call_silent.mock_calls,
[
call(["ipset", "destroy", "foo"]),
call(["ipset", "destroy", "foo-tmp"]),
]
)
| 34.606464 | 79 | 0.521288 |
5814d1fe99fc291e9677f7f94bc37346c9900d80 | 5,055 | py | Python | yatube/yatube/settings.py | admin-nsk/yatube | 767434f0c3281034aa323b997d2ebdbe19b82d1c | [
"MIT"
] | null | null | null | yatube/yatube/settings.py | admin-nsk/yatube | 767434f0c3281034aa323b997d2ebdbe19b82d1c | [
"MIT"
] | null | null | null | yatube/yatube/settings.py | admin-nsk/yatube | 767434f0c3281034aa323b997d2ebdbe19b82d1c | [
"MIT"
] | null | null | null | """
Django settings for yatube project.
Generated by 'django-admin startproject' using Django 2.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '^le&r=3hvvymen6or)%%g!e9-t-j$a!1h1+svs!61ao6yzt(5c'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = [
"localhost",
"127.0.0.1",
"[::1]",
"testserver",
]
# Application definition
INSTALLED_APPS = [
'posts',
'group',
'Users',
'api',
'django_filters',
'django.contrib.sites',
'django.contrib.flatpages',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'sorl.thumbnail',
'debug_toolbar',
'rest_framework.authtoken',
'rest_framework',
'corsheaders',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'corsheaders.middleware.CorsMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'debug_toolbar.middleware.DebugToolbarMiddleware',
]
ROOT_URLCONF = 'yatube.urls'
TEMPLATES_DIR = os.path.join(BASE_DIR, "templates")
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [TEMPLATES_DIR],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
'yatube.context_processors.context.year',
],
},
},
]
WSGI_APPLICATION = 'yatube.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'ru-ru'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static")
# Login
LOGIN_URL = "/auth/login/"
LOGIN_REDIRECT_URL = "index"
# LOGOUT_REDIRECT_URL = "index"
EMAIL_BACKEND = "django.core.mail.backends.filebased.EmailBackend"
EMAIL_FILE_PATH = os.path.join(BASE_DIR, "sent_emails")
SITE_ID = 1
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
}
}
INTERNAL_IPS = [
"127.0.0.1",
]
REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': [
'rest_framework.permissions.IsAuthenticated',
],
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.TokenAuthentication',
        'rest_framework_simplejwt.authentication.JWTAuthentication',
    ],
'DEFAULT_THROTTLE_CLASSES': [
'rest_framework.throttling.UserRateThrottle',
'rest_framework.throttling.AnonRateThrottle',
],
'DEFAULT_THROTTLE_RATES': {
        'user': '10000/day',  # limit for UserRateThrottle
        'anon': '1000/day',  # limit for AnonRateThrottle
},
'DEFAULT_FILTER_BACKENDS': ['django_filters.rest_framework.DjangoFilterBackend']
}
CORS_ORIGIN_ALLOW_ALL = True
CORS_URLS_REGEX = r'^/api/.*$' | 25.659898 | 91 | 0.685658 |
736596463c90dbdac648c72a1f5fcbf1ff6eae7b | 15,962 | py | Python | uw_spotseeker/__init__.py | devights/uw-restclients-spotseeker | d692773b4b3b6057c94748f35ba7d602cd53d016 | [
"Apache-2.0"
] | 1 | 2018-05-03T00:45:07.000Z | 2018-05-03T00:45:07.000Z | uw_spotseeker/__init__.py | devights/uw-restclients-spotseeker | d692773b4b3b6057c94748f35ba7d602cd53d016 | [
"Apache-2.0"
] | 4 | 2019-12-16T19:49:21.000Z | 2022-01-19T01:59:44.000Z | uw_spotseeker/__init__.py | uw-it-aca/uw-restclients-spotseeker | 42b5dae637bcde875b27b32a37fe4dd9f867c537 | [
"Apache-2.0"
] | null | null | null | # Copyright 2022 UW-IT, University of Washington
# SPDX-License-Identifier: Apache-2.0
"""
This is the interface for interacting with the Spotseeker Server REST API
"""
from restclients_core.exceptions import DataFailureException
from uw_spotseeker.dao import Spotseeker_DAO
from uw_spotseeker.models import (Spot,
SpotType,
SpotImage,
ItemImage,
SpotItem,
SpotAvailableHours,
SpotExtendedInfo)
from uw_spotseeker.exceptions import InvalidSpotID
from commonconf import settings
from commonconf.exceptions import NotConfigured
import json
import dateutil.parser
import re
import six
import datetime
import requests
import mock
from requests_oauthlib import OAuth1
try:
from urllib import urlencode
except ImportError:
from urllib.parse import urlencode
class Spotseeker(object):
def post_image(self, spot_id, image):
url = "api/v1/spot/%s/image" % spot_id
implementation = Spotseeker_DAO().get_implementation()
if implementation.is_mock():
response = Spotseeker_DAO().putURL(url, {})
content = response.data
return content
else:
try:
headers = {"X-OAuth-User": settings.OAUTH_USER}
auth = OAuth1(settings.SPOTSEEKER_OAUTH_KEY,
settings.SPOTSEEKER_OAUTH_SECRET)
full_url = settings.RESTCLIENTS_SPOTSEEKER_HOST + "/" + url
files = {'image': ('image.jpg', image)}
response = requests.post(full_url,
files=files,
auth=auth,
headers=headers)
if response.status_code != 201:
raise DataFailureException(url,
response.status_code,
response.content)
except AttributeError:
raise NotConfigured("must set OAUTH_ keys in settings")
def delete_image(self, spot_id, image_id, etag):
url = "/api/v1/spot/%s/image/%s" % (spot_id, image_id)
implementation = Spotseeker_DAO().get_implementation()
if implementation.is_mock():
response = mock.Mock()
response.status = 200
else:
try:
headers = {"X-OAuth-User": settings.OAUTH_USER,
"If-Match": etag}
response = Spotseeker_DAO().deleteURL(url, headers)
content = response.data
except AttributeError:
raise NotConfigured("Must set OAUTH_USER in settings")
if response.status != 200:
raise DataFailureException(url, response.status, content)
def post_item_image(self, item_id, image):
url = "/api/v1/item/%s/image" % item_id
implementation = Spotseeker_DAO().get_implementation()
if implementation.is_mock():
response = Spotseeker_DAO().putURL(url, {})
content = response.data
return content
else:
try:
headers = {"X-OAuth-User": settings.OAUTH_USER}
auth = OAuth1(settings.SPOTSEEKER_OAUTH_KEY,
settings.SPOTSEEKER_OAUTH_SECRET)
full_url = settings.RESTCLIENTS_SPOTSEEKER_HOST + url
files = {'image': ('image.jpg', image)}
r = requests.post(full_url,
files=files,
auth=auth,
headers=headers)
if r.status_code != 201:
raise DataFailureException(url, r.status_code, r.content)
except AttributeError as ex:
raise NotConfigured("Must set OAUTH_ keys in settings")
def delete_item_image(self, item_id, image_id, etag):
url = "/api/v1/item/%s/image/%s" % (item_id, image_id)
implementation = Spotseeker_DAO().get_implementation()
if implementation.is_mock():
response = mock.Mock()
response.status = 200
else:
try:
headers = {"X-OAuth-User": settings.OAUTH_USER,
"If-Match": etag}
response = Spotseeker_DAO().deleteURL(url, headers)
content = response.data
except AttributeError:
raise NotConfigured("Must set OAUTH_USER in settings")
if response.status != 200:
raise DataFailureException(url, response.status, content)
def all_spots(self):
url = "/api/v1/spot/all"
response = Spotseeker_DAO().getURL(url)
if response.status != 200:
raise DataFailureException(url, response.status, response.data)
results = json.loads(response.data.decode('utf-8'))
spots = self._spots_from_data(results)
return spots
def search_spots(self, query_tuple):
"""
Returns a list of spots matching the passed parameters
"""
url = "/api/v1/spot?" + urlencode(query_tuple)
response = Spotseeker_DAO().getURL(url)
if response.status != 200:
raise DataFailureException(url, response.status, response.data)
results = json.loads(response.data.decode('utf-8'))
return self._spots_from_data(results)
def put_spot(self, spot_id, spot_json, etag):
url = "/api/v1/spot/%s" % spot_id
implementation = Spotseeker_DAO().get_implementation()
if implementation.is_mock():
response = Spotseeker_DAO().putURL(url, {})
content = response.data
else:
try:
headers = {"X-OAuth-User": settings.OAUTH_USER,
"If-Match": etag}
response = Spotseeker_DAO().putURL(url,
headers,
spot_json)
content = response.data
except AttributeError:
raise NotConfigured("Must set OAUTH_USER in settings")
if response.status != 200:
raise DataFailureException(url, response.status, content)
return response, content
def delete_spot(self, spot_id, etag):
url = "/api/v1/spot/%s" % spot_id
implementation = Spotseeker_DAO().get_implementation()
if implementation.is_mock():
response = Spotseeker_DAO().deleteURL(url)
content = response.data
else:
try:
headers = {"X-OAuth-User": settings.OAUTH_USER,
"If-Match": etag}
response = Spotseeker_DAO().deleteURL(url, headers)
content = response.data
except AttributeError:
raise NotConfigured("Must set OAUTH_USER in settings")
if response.status != 200:
raise DataFailureException(url, response.status, content)
return response, content
def post_spot(self, spot_json):
url = "/api/v1/spot"
implementation = Spotseeker_DAO().get_implementation()
if implementation.is_mock():
response = Spotseeker_DAO().postURL(url)
content = response.data
else:
try:
headers = {"X-OAuth-User": settings.OAUTH_USER,
"Content-Type": "application/json"}
response = Spotseeker_DAO().postURL(url,
headers,
spot_json)
content = response.data
except AttributeError:
raise NotConfigured("Must set OAUTH_USER in settings")
if response.status != 201:
raise DataFailureException(url, response.status, content)
return response
def get_spot_by_id(self, spot_id):
self._validate_spotid(spot_id)
url = "/api/v1/spot/%s" % spot_id
response = Spotseeker_DAO().getURL(url)
if response.status != 200:
raise DataFailureException(url, response.status, response.data)
return self._spot_from_data(json.loads(response.data.decode('utf-8')))
def get_building_list(self, campus, app_type=None):
url = "/api/v1/buildings?extended_info:campus=" + campus
if app_type:
url += "&extended_info:app_type=" + app_type
response = Spotseeker_DAO().getURL(url)
if response.status != 200:
raise DataFailureException(url, response.status, response.data)
return json.loads(response.data.decode('utf-8'))
def _spots_from_data(self, spots_data):
return [self._spot_from_data(spot_data) for spot_data in spots_data]
def _spot_from_data(self, spot_data):
spot = Spot()
spot.spot_id = spot_data["id"]
spot.name = spot_data["name"]
spot.uri = spot_data["uri"]
spot.latitude = spot_data["location"]["latitude"]
spot.longitude = spot_data["location"]["longitude"]
spot.height_from_sea_level = \
spot_data["location"]["height_from_sea_level"]
spot.building_name = spot_data["location"]["building_name"]
spot.building_description = spot_data["location"].get("description",
None)
spot.floor = spot_data["location"]["floor"]
spot.room_number = spot_data["location"]["room_number"]
spot.capacity = spot_data["capacity"]
spot.display_access_restrictions = \
spot_data["display_access_restrictions"]
spot.organization = spot_data["organization"]
spot.manager = spot_data["manager"]
spot.etag = spot_data["etag"]
spot.external_id = spot_data["external_id"]
spot.last_modified = dateutil.parser.parse(spot_data["last_modified"])
spot.spot_types = self._spot_types_from_data(spot_data["type"])
spot.spot_availability = \
self._spot_availability_from_data(spot_data["available_hours"])
spot.images = self._spot_images_from_data(spot_data["images"])
spot.extended_info = \
self._extended_info_from_data(spot_data["extended_info"])
spot.items = []
if "items" in spot_data and len(spot_data["items"]) > 0:
spot.items = self._items_from_data(spot_data["items"])
return spot
def _items_from_data(self, item_data):
spot_items = []
for item in item_data:
spot_item = SpotItem()
spot_item.item_id = item["id"]
spot_item.name = item["name"]
spot_item.category = item["category"]
spot_item.subcategory = item["subcategory"]
spot_item.images = []
if "images" in item and len(item["images"]) > 0:
spot_item.images = self._item_images_from_data(item["images"])
spot_item.extended_info = \
self._extended_info_from_data(item["extended_info"])
spot_items.append(spot_item)
return spot_items
def _item_images_from_data(self, image_data):
images = []
for image in image_data:
item_image = ItemImage()
item_image.image_id = image["id"]
item_image.url = image["url"]
item_image.description = image["description"]
item_image.display_index = image["display_index"]
item_image.content_type = image["content-type"]
item_image.width = image["width"]
item_image.height = image["height"]
item_image.creation_date = dateutil.parser.parse(
image["creation_date"])
item_image.upload_user = image["upload_user"]
item_image.upload_application = image["upload_application"]
item_image.thumbnail_root = image["thumbnail_root"]
images.append(item_image)
return images
def _spot_images_from_data(self, image_data):
images = []
for image in image_data:
spot_image = SpotImage()
spot_image.image_id = image["id"]
spot_image.url = image["url"]
spot_image.description = image["description"]
spot_image.display_index = image["display_index"]
spot_image.content_type = image["content-type"]
spot_image.width = image["width"]
spot_image.height = image["height"]
spot_image.creation_date = dateutil.parser.parse(
image["creation_date"])
spot_image.modification_date = \
dateutil.parser.parse(image["modification_date"])
spot_image.upload_user = image["upload_user"]
spot_image.upload_application = image["upload_application"]
spot_image.thumbnail_root = image["thumbnail_root"]
images.append(spot_image)
return images
    def _spot_availability_from_data(self, availability_data):
        availability = []
        for day in availability_data:
            for hours in availability_data[day]:
available_hours = SpotAvailableHours()
available_hours.day = day
available_hours.start_time = self._parse_time(hours[0])
available_hours.end_time = self._parse_time(hours[1])
availability.append(available_hours)
return availability
def _parse_time(self, value):
time_re = re.compile(
r'(?P<hour>\d{1,2}):(?P<minute>\d{1,2})'
r'(?::(?P<second>\d{1,2})(?:\.(?P<microsecond>\d{1,6})\d{0,6})?)?'
)
match = time_re.match(value)
if match:
kw = match.groupdict()
if kw['microsecond']:
kw['microsecond'] = kw['microsecond'].ljust(6, '0')
kw = {k: int(v) for k, v in six.iteritems(kw) if v is not None}
return datetime.time(**kw)
def _spot_types_from_data(self, type_data):
spot_types = []
for spot_type in type_data:
spot_types.append(SpotType(name=spot_type))
return spot_types
def _validate_spotid(self, spotid):
        if type(spotid) is not int:
raise InvalidSpotID
def _extended_info_from_data(self, info_data):
extended_info = []
for attribute in info_data:
spot_extended_info = SpotExtendedInfo(key=attribute,
value=info_data[attribute])
extended_info.append(spot_extended_info)
return extended_info
def _get_image(self, image_app_type, parent_id, image_id, width=None):
if width is not None:
url = "/api/v1/%s/%s/image/%s/thumb/constrain/width:%s" % (
image_app_type,
parent_id,
image_id,
width)
else:
url = "/api/v1/%s/%s/image/%s" % (image_app_type,
parent_id,
image_id)
        response = Spotseeker_DAO().getURL(url)
        content = response.data
return response, content
def get_item_image(self, parent_id, image_id, width=None):
return self._get_image("item", parent_id, image_id, width)
def get_spot_image(self, parent_id, image_id, width=None):
return self._get_image("spot", parent_id, image_id, width)
| 38.931707 | 78 | 0.570981 |
6d947618af25f611e59b1767a45cd472f2cd9513 | 18,316 | py | Python | impacket/smbconnection.py | rackerlabs/impacket | 327acaeab5289da6e99cd82fe0b88db329566e66 | [
"Apache-1.1"
] | null | null | null | impacket/smbconnection.py | rackerlabs/impacket | 327acaeab5289da6e99cd82fe0b88db329566e66 | [
"Apache-1.1"
] | null | null | null | impacket/smbconnection.py | rackerlabs/impacket | 327acaeab5289da6e99cd82fe0b88db329566e66 | [
"Apache-1.1"
] | 1 | 2021-07-25T23:46:43.000Z | 2021-07-25T23:46:43.000Z | # Copyright (c) 2003-2012 CORE Security Technologies
#
# This software is provided under a slightly modified version
# of the Apache Software License. See the accompanying LICENSE file
# for more information.
#
# $Id: smbconnection.py 738 2013-04-09 18:40:15Z bethus@gmail.com $
#
# Author: Alberto Solino (beto@coresecurity.com)
#
# Description:
#
# Wrapper class for SMB1/2/3 so it's transparent for the client.
# You can still play with the low level methods (version dependant)
# by calling getSMBServer()
#
import ntpath
import string
import socket
from impacket import smb, smb3, nmb
from smb3structs import *
# So the user doesn't need to import smb, the smb3 are already in here
SMB_DIALECT = smb.SMB_DIALECT
class SMBConnection():
"""
SMBConnection class - beto
    :param string remoteName: name of the remote host, can be its NETBIOS name, IP or *\*SMBSERVER*. If the latter, and port is 139, the library will try to get the target's server name.
:param string remoteHost: target server's remote address (IPv4, IPv6)
:param string/optional myName: client's NETBIOS name
:param integer/optional sess_port: target port to connect
:param integer/optional timeout: timeout in seconds when receiving packets
:param optional preferredDialect: the dialect desired to talk with the target server. If not specified the highest one available will be used
:return: a SMBConnection instance, if not raises a SessionError exception
"""
def __init__(self, remoteName='', remoteHost='', myName = None, sess_port = 445, timeout=10, preferredDialect = None, existingConnection = None):
self._SMBConnection = 0
self._dialect = ''
self._nmbSession = 0
hostType = nmb.TYPE_SERVER
if existingConnection is not None:
# Existing Connection must be a smb or smb3 instance
assert ( isinstance(existingConnection,smb.SMB) or isinstance(existingConnection, smb3.SMB3))
self._SMBConnection = existingConnection
return
if preferredDialect is None:
# If no preferredDialect sent, we try the highest available one.
packet = self._negotiateSession(myName, remoteName, remoteHost, sess_port, timeout)
if packet[0] == '\xfe':
# Answer is SMB2 packet
self._SMBConnection = smb3.SMB3(remoteName, remoteHost, myName, hostType, sess_port, timeout, session = self._nmbSession )
else:
# Answer is SMB packet, sticking to SMBv1
self._SMBConnection = smb.SMB(remoteName, remoteHost, myName, hostType, sess_port, timeout, session = self._nmbSession, negPacket = packet)
else:
if preferredDialect == smb.SMB_DIALECT:
self._SMBConnection = smb.SMB(remoteName, remoteHost, myName, hostType, sess_port, timeout)
elif preferredDialect in [SMB2_DIALECT_002, SMB2_DIALECT_21, SMB2_DIALECT_30]:
self._SMBConnection = smb3.SMB3(remoteName, remoteHost, myName, hostType, sess_port, timeout, preferredDialect = preferredDialect)
else:
                raise Exception("Unknown dialect %s" % preferredDialect)
def _negotiateSession(self, myName, remoteName, remoteHost, sess_port, timeout, extended_security = True):
# Here we follow [MS-SMB2] negotiation handshake trying to understand what dialects
        # (including SMB1) are supported on the other end.
if not myName:
myName = socket.gethostname()
i = string.find(myName, '.')
if i > -1:
myName = myName[:i]
# If port 445 and the name sent is *SMBSERVER we're setting the name to the IP. This is to help some old applications still believing
        # *SMBSERVER will work against modern OSes. If port is NETBIOS_SESSION_PORT the user had better know about *SMBSERVER's limitations
if sess_port == 445 and remoteName == '*SMBSERVER':
remoteName = remoteHost
self._nmbSession = nmb.NetBIOSTCPSession(myName, remoteName, remoteHost, nmb.TYPE_SERVER, sess_port, timeout)
smbp = smb.NewSMBPacket()
negSession = smb.SMBCommand(smb.SMB.SMB_COM_NEGOTIATE)
if extended_security == True:
smbp['Flags2']=smb.SMB.FLAGS2_EXTENDED_SECURITY
negSession['Data'] = '\x02NT LM 0.12\x00\x02SMB 2.002\x00\x02SMB 2.???\x00'
smbp.addCommand(negSession)
self._nmbSession.send_packet(str(smbp))
r = self._nmbSession.recv_packet(timeout)
return r.get_trailer()
def getSMBServer(self):
"""
returns the SMB/SMB3 instance being used. Useful for calling low level methods
"""
return self._SMBConnection
def getDialect(self):
return self._SMBConnection.getDialect()
def getServerName(self):
return self._SMBConnection.get_server_name()
def getRemoteHost(self):
return self._SMBConnection.get_remote_host()
def getServerDomain(self):
return self._SMBConnection.get_server_domain()
def getServerOS(self):
return self._SMBConnection.get_server_os()
def doesSupportNTLMv2(self):
return self._SMBConnection.doesSupportNTLMv2()
def isLoginRequired(self):
return self._SMBConnection.is_login_required()
def login(self, user, password, domain = '', lmhash = '', nthash = ''):
"""
logins into the target system
:param string user: username
:param string password: password for the user
:param string domain: domain where the account is valid for
:param string lmhash: LMHASH used to authenticate using hashes (password is not used)
:param string nthash: NTHASH used to authenticate using hashes (password is not used)
:return: None, raises a Session Error if error.
"""
return self._SMBConnection.login(user, password, domain, lmhash, nthash)
def isGuestSession(self):
return self._SMBConnection.isGuestSession()
def logoff(self):
return self._SMBConnection.logoff()
def connectTree(self,share):
if self.getDialect() == smb.SMB_DIALECT:
share = ntpath.basename(share)
share = '\\\\' + self.getRemoteHost() + '\\' + share
return self._SMBConnection.connect_tree(share)
def disconnectTree(self, treeId):
return self._SMBConnection.disconnect_tree(treeId)
def listShares(self):
# Get the shares through RPC
from impacket.dcerpc import transport, dcerpc, srvsvc
rpctransport = transport.SMBTransport(self.getRemoteHost(), self.getRemoteHost(), filename = r'\srvsvc', smb_connection = self)
dce = dcerpc.DCERPC_v5(rpctransport)
dce.connect()
dce.bind(srvsvc.MSRPC_UUID_SRVSVC)
srv_svc = srvsvc.DCERPCSrvSvc(dce)
resp = srv_svc.get_share_enum_1(rpctransport.get_dip())
return resp
def listPath(self, shareName, path, password = None):
return self._SMBConnection.list_path(shareName, path, password)
def createFile(self, treeId, pathName, desiredAccess = GENERIC_ALL, shareMode = FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE, creationOption = FILE_NON_DIRECTORY_FILE, creationDisposition = FILE_OVERWRITE_IF , fileAttributes = FILE_ATTRIBUTE_NORMAL, impersonationLevel = SMB2_IL_IMPERSONATION, securityFlags = 0, oplockLevel = SMB2_OPLOCK_LEVEL_NONE, createContexts = None):
"""
creates a remote file
:param HANDLE treeId: a valid handle for the share where the file is to be opened
:param string pathName: the path name to open
:return: a valid file descriptor, if not raises a SessionError exception.
"""
if self.getDialect() == smb.SMB_DIALECT:
pathName = string.replace(pathName, '/', '\\')
ntCreate = smb.SMBCommand(smb.SMB.SMB_COM_NT_CREATE_ANDX)
ntCreate['Parameters'] = smb.SMBNtCreateAndX_Parameters()
ntCreate['Data'] = smb.SMBNtCreateAndX_Data()
ntCreate['Parameters']['FileNameLength']= len(pathName)
ntCreate['Parameters']['AccessMask'] = desiredAccess
ntCreate['Parameters']['FileAttributes']= fileAttributes
ntCreate['Parameters']['ShareAccess'] = shareMode
ntCreate['Parameters']['Disposition'] = creationDisposition
ntCreate['Parameters']['CreateOptions'] = creationOption
ntCreate['Parameters']['Impersonation'] = impersonationLevel
ntCreate['Parameters']['SecurityFlags'] = securityFlags
ntCreate['Parameters']['CreateFlags'] = 0x16
ntCreate['Data']['FileName'] = pathName
if createContexts is not None:
print "CreateContexts not supported in SMB1"
return self._SMBConnection.nt_create_andx(treeId, pathName, cmd = ntCreate)
else:
return self._SMBConnection.create(treeId, pathName, desiredAccess, shareMode, creationOption, creationDisposition, fileAttributes, impersonationLevel, securityFlags, oplockLevel, createContexts)
def openFile(self, treeId, pathName, desiredAccess = FILE_READ_DATA | FILE_WRITE_DATA, shareMode = FILE_SHARE_READ, creationOption = FILE_NON_DIRECTORY_FILE, creationDisposition = FILE_OPEN, fileAttributes = FILE_ATTRIBUTE_NORMAL, impersonationLevel = SMB2_IL_IMPERSONATION, securityFlags = 0, oplockLevel = SMB2_OPLOCK_LEVEL_NONE, createContexts = None):
"""
opens a remote file
:param HANDLE treeId: a valid handle for the share where the file is to be opened
:param string pathName: the path name to open
:return: a valid file descriptor, if not raises a SessionError exception.
"""
if self.getDialect() == smb.SMB_DIALECT:
pathName = string.replace(pathName, '/', '\\')
ntCreate = smb.SMBCommand(smb.SMB.SMB_COM_NT_CREATE_ANDX)
ntCreate['Parameters'] = smb.SMBNtCreateAndX_Parameters()
ntCreate['Data'] = smb.SMBNtCreateAndX_Data()
ntCreate['Parameters']['FileNameLength']= len(pathName)
ntCreate['Parameters']['AccessMask'] = desiredAccess
ntCreate['Parameters']['FileAttributes']= fileAttributes
ntCreate['Parameters']['ShareAccess'] = shareMode
ntCreate['Parameters']['Disposition'] = creationDisposition
ntCreate['Parameters']['CreateOptions'] = creationOption
ntCreate['Parameters']['Impersonation'] = impersonationLevel
ntCreate['Parameters']['SecurityFlags'] = securityFlags
ntCreate['Parameters']['CreateFlags'] = 0x16
ntCreate['Data']['FileName'] = pathName
if createContexts is not None:
print "CreateContexts not supported in SMB1"
return self._SMBConnection.nt_create_andx(treeId, pathName, cmd = ntCreate)
else:
return self._SMBConnection.create(treeId, pathName, desiredAccess, shareMode, creationOption, creationDisposition, fileAttributes, impersonationLevel, securityFlags, oplockLevel, createContexts)
def writeFile(self, treeId, fileId, data, offset=0):
"""
writes data to a file
:param HANDLE treeId: a valid handle for the share where the file is to be opened
:param HANDLE fileId: a valid handle for the file/directory to be closed
:param string data: buffer with the data to write
:param integer offset: offset where to start writing the data
:return: amount of bytes written, if not raises a SessionError exception.
"""
return self._SMBConnection.writeFile(treeId, fileId, data, offset)
def readFile(self, treeId, fileId, offset = 0, bytesToRead = None):
"""
reads data from a file
:param HANDLE treeId: a valid handle for the share where the file is to be opened
        :param HANDLE fileId: a valid handle for the file to be read
        :param integer offset: offset where to start reading the data
:param integer bytesToRead: amount of bytes to read. If None, it will read Dialect['MaxBufferSize'] bytes.
:return: the data read, if not raises a SessionError exception.
"""
return self._SMBConnection.read_andx(treeId, fileId, offset, bytesToRead)
def closeFile(self, treeId, fileId):
"""
closes a file handle
:param HANDLE treeId: a valid handle for the share where the file is to be opened
:param HANDLE fileId: a valid handle for the file/directory to be closed
:return: None, raises a SessionError exception if error.
"""
return self._SMBConnection.close(treeId, fileId)
def deleteFile(self, shareName, pathName):
"""
removes a file
:param string shareName: a valid handle for the share where the file is to be opened
:param string pathName: the path name to remove
:return: None, raises a SessionError exception if error.
"""
return self._SMBConnection.remove(shareName, pathName)
def createDirectory(self, shareName, pathName ):
"""
creates a directory
:param string shareName: a valid handle for the share where the file is to be opened
        :param string pathName: the path name of the directory to create
:return: None, raises a SessionError exception if error.
"""
return self._SMBConnection.mkdir(shareName, pathName)
def deleteDirectory(self, shareName, pathName):
"""
deletes a directory
:param string shareName: a valid handle for the share where the file is to be opened
        :param string pathName: the path name of the directory to delete
:return: None, raises a SessionError exception if error.
"""
return self._SMBConnection.rmdir(shareName, pathName)
def waitNamedPipe(self, treeId, pipeName, timeout = 5):
"""
waits for a named pipe
:param HANDLE treeId: a valid handle for the share where the file is to be checked
:param string pipeName: the pipe name to check
:param integer timeout: time to wait for an answer
:return: None, raises a SessionError exception if error.
"""
return self._SMBConnection.waitNamedPipe(treeId, pipeName, timeout = timeout)
def transactNamedPipe(self, treeId, fileId, data, waitAnswer = True):
"""
writes to a named pipe using a transaction command
:param HANDLE treeId: a valid handle for the share where the file is to be checked
        :param HANDLE fileId: a valid handle for the pipe to be written
:param string data: buffer with the data to write
:param boolean waitAnswer: whether or not to wait for an answer
:return: None, raises a SessionError exception if error.
"""
return self._SMBConnection.TransactNamedPipe(treeId, fileId, data, waitAnswer = waitAnswer)
def transactNamedPipeRecv(self):
"""
reads from a named pipe using a transaction command
:return: data read, raises a SessionError exception if error.
"""
return self._SMBConnection.TransactNamedPipeRecv()
def writeNamedPipe(self, treeId, fileId, data, waitAnswer = True):
"""
writes to a named pipe
:param HANDLE treeId: a valid handle for the share where the file is to be checked
:param HANDLE fileId: a valid handle for the file/directory to be closed
:param string data: buffer with the data to write
:param boolean waitAnswer: whether or not to wait for an answer
:return: None, raises a SessionError exception if error.
"""
if self.getDialect() == smb.SMB_DIALECT:
return self._SMBConnection.write_andx(treeId, fileId, data, wait_answer = waitAnswer, write_pipe_mode = True)
else:
return self.writeFile(treeId, fileId, data, 0)
    def readNamedPipe(self, treeId, fileId, bytesToRead = None):
"""
read from a named pipe
:param HANDLE treeId: a valid handle for the share where the file is to be checked
        :param HANDLE fileId: a valid handle for the pipe to be read
        :param integer bytesToRead: amount of data to read
        :return: the data read, raises a SessionError exception if error.
"""
return self.readFile(treeId, fileId, bytesToRead = bytesToRead)
def getFile(self, shareName, pathName, callback):
"""
downloads a file
:param string shareName: a valid handle for the share where the file is to be opened
        :param string pathName: the path name of the file to download
:param callback callback:
:return: None, raises a SessionError exception if error.
"""
return self._SMBConnection.retr_file(shareName, pathName, callback)
def putFile(self, shareName, pathName, callback):
"""
uploads a file
:param string shareName: a valid handle for the share where the file is to be opened
        :param string pathName: the path name of the file to upload
:param callback callback:
:return: None, raises a SessionError exception if error.
"""
return self._SMBConnection.stor_file(shareName, pathName, callback)
def rename(self, shareName, oldPath, newPath):
"""
rename a file/directory
:param string shareName: a valid handle for the share where the file is to be opened
        :param string oldPath: the old path name of the directory/file to rename
        :param string newPath: the new path name of the directory/file to rename
:return: True, raises a SessionError exception if error.
"""
return self._SMBConnection.rename(shareName, oldPath, newPath)
def setTimeout(self, timeout):
return self._SMBConnection.set_timeout(timeout)
| 43.402844 | 388 | 0.666685 |
8433ec758728d76bc8a5cdb0e68ed2bd2e9d91e2 | 9,580 | py | Python | tests/resources/test_file_link.py | lbianchini84/citrine-python | 0a99fcd3cfd0a464f5612fceed1dc6d1066a4680 | [
"Apache-2.0"
] | null | null | null | tests/resources/test_file_link.py | lbianchini84/citrine-python | 0a99fcd3cfd0a464f5612fceed1dc6d1066a4680 | [
"Apache-2.0"
] | null | null | null | tests/resources/test_file_link.py | lbianchini84/citrine-python | 0a99fcd3cfd0a464f5612fceed1dc6d1066a4680 | [
"Apache-2.0"
] | null | null | null | from collections import namedtuple
import pytest
from uuid import uuid4
import requests_mock
from mock import patch, Mock, call
from botocore.exceptions import ClientError
from citrine.resources.file_link import FileCollection, FileLink, _Uploader
from tests.utils.session import FakeSession, FakeS3Client, FakeCall, FakeRequestResponse
from tests.utils.factories import FileLinkDataFactory, _UploaderFactory
@pytest.fixture
def session() -> FakeSession:
return FakeSession()
@pytest.fixture
def collection(session) -> FileCollection:
return FileCollection(
project_id=uuid4(),
dataset_id=uuid4(),
session=session
)
@pytest.fixture
def valid_data() -> dict:
return FileLinkDataFactory(url='www.citrine.io', filename='materials.txt')
def test_build_equivalence(collection, valid_data):
"""Test that build() works the same whether called from FileLink or FileCollection."""
assert collection.build(valid_data).dump() == FileLink.build(valid_data).dump()
def test_build_as_dict(collection, valid_data):
"""Test that build() works the same whether called from FileLink or FileCollection."""
assert collection.build(valid_data).dump() == FileLink.build(valid_data).as_dict()
def test_string_representation(valid_data):
"""Test the string representation."""
assert str(FileLink.build(valid_data)) == '<File link \'materials.txt\'>'
@pytest.fixture
def uploader() -> _Uploader:
"""An _Uploader object with all of its fields filled in."""
return _UploaderFactory()
@patch('citrine.resources.file_link.boto3_client')
@patch('citrine.resources.file_link.open')
@patch('citrine.resources.file_link.os.stat')
@patch('citrine.resources.file_link.os.path.isfile')
def test_upload(mock_isfile, mock_stat, mock_open, mock_boto3_client, collection, session):
"""Test signaling that an upload has completed and the creation of a FileLink object."""
StatStub = namedtuple('StatStub', ['st_size'])
mock_isfile.return_value = True
mock_stat.return_value = StatStub(st_size=22300)
mock_open.return_value.__enter__.return_value = 'Random file contents'
mock_boto3_client.return_value = FakeS3Client({'VersionId': '3'})
dest_name = 'foo.txt'
file_id = '12345'
version = '13'
# This is the dictionary structure we expect from the upload completion request
file_info_response = {
'file_info': {
'file_id': file_id,
'version': version
}
}
uploads_response = {
's3_region': 'us-east-1',
's3_bucket': 'temp-bucket',
'temporary_credentials': {
'access_key_id': '1234',
'secret_access_key': 'abbb8777',
'session_token': 'hefheuhuhhu83772333',
},
'uploads': [
{
's3_key': '66377378',
'upload_id': '111',
}
]
}
session.set_responses(uploads_response, file_info_response)
file_link = collection.upload(dest_name)
assert session.num_calls == 2
url = 'projects/{}/datasets/{}/files/{}/versions/{}'\
.format(collection.project_id, collection.dataset_id, file_id, version)
assert file_link.dump() == FileLink(dest_name, url=url).dump()
def test_upload_missing_file(collection):
with pytest.raises(ValueError):
collection.upload('this-file-does-not-exist.xls')
@patch('citrine.resources.file_link.os.stat')
def test_upload_request(mock_stat, collection, session, uploader):
"""Test that an upload request response contains all required fields."""
# Mock the method that gets the size of the file.
mock_stat_object = Mock()
mock_stat_object.st_size = 17
mock_stat.return_value = mock_stat_object
# This is the dictionary structure we expect from the upload request
upload_request_response = {
's3_region': uploader.region_name,
's3_bucket': uploader.bucket,
'temporary_credentials': {
'access_key_id': uploader.aws_access_key_id,
'secret_access_key': uploader.aws_secret_access_key,
'session_token': uploader.aws_session_token,
},
'uploads': [
{
's3_key': uploader.object_key,
'upload_id': uploader.upload_id
}
]
}
session.set_response(upload_request_response)
new_uploader = collection._make_upload_request('foo.txt', 'foo.txt')
assert session.num_calls == 1
assert new_uploader.bucket == uploader.bucket
assert new_uploader.object_key == uploader.object_key
assert new_uploader.upload_id == uploader.upload_id
assert new_uploader.region_name == uploader.region_name
assert new_uploader.aws_access_key_id == uploader.aws_access_key_id
assert new_uploader.aws_secret_access_key == uploader.aws_secret_access_key
assert new_uploader.aws_session_token == uploader.aws_session_token
assert new_uploader.object_key == uploader.object_key
# Using a request response that is missing a field throws a RuntimeError
del upload_request_response['s3_bucket']
with pytest.raises(RuntimeError):
collection._make_upload_request('foo.txt', 'foo.txt')
@patch('citrine.resources.file_link.open')
def test_upload_file(_, collection, uploader):
"""Test that uploading a file returns the version ID."""
# A successful file upload sets uploader.s3_version
new_version = '3'
with patch('citrine.resources.file_link.boto3_client',
return_value=FakeS3Client({'VersionId': new_version})):
new_uploader = collection._upload_file('foo.txt', uploader)
assert new_uploader.s3_version == new_version
# If the client throws a ClientError when attempting to upload, throw a RuntimeError
bad_client = Mock()
bad_client.put_object.side_effect = ClientError(error_response={}, operation_name='put')
with patch('citrine.resources.file_link.boto3_client',
return_value=bad_client):
with pytest.raises(RuntimeError):
collection._upload_file('foo.txt', uploader)
def test_upload_missing_version(collection, session, uploader):
dest_name = 'foo.txt'
file_id = '12345'
version = '14'
bad_complete_response = {
'file_info': {
'file_id': file_id
},
'version': version # 'version' is supposed to go inside 'file_info'
}
with pytest.raises(RuntimeError):
session.set_response(bad_complete_response)
collection._complete_upload(dest_name, uploader)
def test_list_file_links(collection, session, valid_data):
"""Test that all files in a dataset can be turned into FileLink and listed."""
file_id = str(uuid4())
version = str(uuid4())
filename = 'materials.txt'
# The actual response contains more fields, but these are the only ones we use.
# Crucial thing is that URL ends with "/files/file_id/versions/version"
returned_data = {
'filename': filename,
'versioned_url': "http://citrine.com/api/files/{}/versions/{}".format(file_id, version)
}
session.set_response({
'files': [returned_data]
})
files_iterator = collection.list(page=1, per_page=15)
files = [file for file in files_iterator]
assert session.num_calls == 1
expected_call = FakeCall(
method='GET',
path=collection._get_path(),
params={
'page': 1,
'per_page': 15
}
)
assert expected_call == session.last_call
assert len(files) == 1
expected_url = "projects/{}/datasets/{}/files/{}/versions/{}".format(
collection.project_id, collection.dataset_id, file_id, version
)
expected_file = FileLinkDataFactory(url=expected_url, filename=filename)
assert files[0].dump() == FileLink.build(expected_file).dump()
# A response that does not have a URL of the expected form throws ValueError
bad_returned_data = {
'filename': filename,
'versioned_url': "http://citrine.com/api/file_version/{}".format(version)
}
session.set_response({
'files': [bad_returned_data]
})
with pytest.raises(ValueError):
files_iterator = collection.list(page=1, per_page=15)
[file for file in files_iterator]
@patch("citrine.resources.file_link.write_file_locally")
def test_file_download(mock_write_file_locally, collection, session):
"""
Test that downloading a file works as expected.
It should make the full file path if only a directory is given, make the directory if
it does not exist, make a call to get the pre-signed URL, and another to download.
"""
# Given
filename = 'diagram.pdf'
url = "http://citrine.com/api/files/123/versions/456"
file = FileLink.build(FileLinkDataFactory(url=url, filename=filename))
pre_signed_url = "http://files.citrine.io/secret-codes/jiifema987pjfsda" # arbitrary
session.set_response({
'pre_signed_read_link': pre_signed_url,
})
local_path = 'Users/me/some/new/directory/'
with requests_mock.mock() as mock_get:
mock_get.get(pre_signed_url, text='0101001')
# When
collection.download(file, local_path)
    # Then
assert mock_get.call_count == 1
expected_call = FakeCall(
method='GET',
path=url + '/content-link'
)
assert expected_call == session.last_call
assert mock_write_file_locally.call_count == 1
assert mock_write_file_locally.call_args == call(b'0101001', local_path + file.filename)
| 35.88015 | 96 | 0.687578 |
1f7fac9f7cb195509d02d1287fc9c0e17b5d1b56 | 507 | py | Python | tests/test_pay_foreign.py | Rainist/iamport-rest-client-python | d671cc39e258ce2826b4ba06b2a25b24a80d612e | [
"MIT"
] | 1 | 2020-05-18T15:18:23.000Z | 2020-05-18T15:18:23.000Z | tests/test_pay_foreign.py | Rainist/iamport-rest-client-python | d671cc39e258ce2826b4ba06b2a25b24a80d612e | [
"MIT"
] | 2 | 2019-05-21T02:06:42.000Z | 2019-05-21T02:52:25.000Z | tests/test_pay_foreign.py | Rainist/iamport-rest-client-python | d671cc39e258ce2826b4ba06b2a25b24a80d612e | [
"MIT"
] | 1 | 2020-05-18T15:18:31.000Z | 2020-05-18T15:18:31.000Z | # -*- coding: utf-8 -*-
def test_pay_foreign(iamport, merchant_uid):
payload = {
'merchant_uid': merchant_uid,
'amount': 100,
'card_number': 'card-number',
}
try:
iamport.pay_foreign(**payload)
except KeyError as e:
assert "Essential parameter is missing!: expiry" in str(e)
payload.update({
'expiry': '2016-08',
})
try:
iamport.pay_foreign(**payload)
except iamport.ResponseError as e:
assert e.code == -1
# ============================================================
# File: tests/sentry/api/endpoints/test_user_social_identity_details.py
# Repo: AlexWayfer/sentry (BSD-3-Clause)
# ============================================================
from __future__ import absolute_import
from django.core.urlresolvers import reverse
from social_auth.models import UserSocialAuth
from sentry.testutils import APITestCase
class UserSocialIdentityDetailsEndpointTest(APITestCase):
def setUp(self):
self.login_as(self.user)
def test_can_disconnect(self):
auth = UserSocialAuth.create_social_auth(self.user, '1234', 'github')
url = reverse('sentry-api-0-user-social-identity-details', kwargs={
'user_id': self.user.id,
'identity_id': auth.id,
})
with self.settings(GITHUB_APP_ID='app-id', GITHUB_API_SECRET='secret'):
response = self.client.delete(url)
assert response.status_code == 204
assert not len(UserSocialAuth.objects.filter(user=self.user))
def test_disconnect_id_not_found(self):
url = reverse('sentry-api-0-user-social-identity-details', kwargs={
'user_id': self.user.id,
'identity_id': 999,
})
with self.settings(GITHUB_APP_ID='app-id', GITHUB_API_SECRET='secret'):
response = self.client.delete(url)
assert response.status_code == 404
assert not len(UserSocialAuth.objects.filter(user=self.user))
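Schematically, reverse() maps a route name plus kwargs to a concrete path; the pattern table below is an illustrative stand-in, not Sentry's real URL configuration:

```python
def fake_reverse(name, kwargs):
    # Hypothetical pattern table standing in for Django's URL resolver.
    patterns = {
        'sentry-api-0-user-social-identity-details':
            '/api/0/users/{user_id}/social-identities/{identity_id}/',
    }
    return patterns[name].format(**kwargs)

url = fake_reverse('sentry-api-0-user-social-identity-details',
                   {'user_id': 1, 'identity_id': 999})
assert url == '/api/0/users/1/social-identities/999/'
```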
# ============================================================
# File: djangular/tests/__init__.py
# Repo: mohamedmehdigara/djangular (Apache-2.0)
# ============================================================
import django
if django.VERSION < (1, 6):
from djangular.tests.test_base import *
from djangular.tests.test_finders import *
from djangular.tests.test_middleware import *
from djangular.tests.test_storage import *
from djangular.tests.test_utils import *
from djangular.tests.test_commands import *
from djangular.tests.test_urls import *
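The version guard works because Python compares version tuples lexicographically, element by element:

```python
# First elements tie (1 == 1), so 5 < 6 decides the comparison.
assert (1, 5, 2) < (1, 6)
# Django 1.6 and newer fail the guard, so the star imports are skipped.
assert not ((1, 6) < (1, 6))
assert not ((1, 7) < (1, 6))
```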
# ============================================================
# File: trio/_highlevel_generic.py
# Repo: zackw/trio (Apache-2.0 / MIT)
# ============================================================
import attr
import trio
from .abc import HalfCloseableStream
async def aclose_forcefully(resource):
"""Close an async resource or async generator immediately, without
blocking to do any graceful cleanup.
:class:`~trio.abc.AsyncResource` objects guarantee that if their
:meth:`~trio.abc.AsyncResource.aclose` method is cancelled, then they will
still close the resource (albeit in a potentially ungraceful
fashion). :func:`aclose_forcefully` is a convenience function that
exploits this behavior to let you force a resource to be closed without
blocking: it works by calling ``await resource.aclose()`` and then
cancelling it immediately.
Most users won't need this, but it may be useful on cleanup paths where
you can't afford to block, or if you want to close a resource and don't
care about handling it gracefully. For example, if
:class:`~trio.SSLStream` encounters an error and cannot perform its
own graceful close, then there's no point in waiting to gracefully shut
down the underlying transport either, so it calls ``await
aclose_forcefully(self.transport_stream)``.
Note that this function is async, and that it acts as a checkpoint, but
unlike most async functions it cannot block indefinitely (at least,
assuming the underlying resource object is correctly implemented).
"""
with trio.CancelScope() as cs:
cs.cancel()
await resource.aclose()
@attr.s(cmp=False, hash=False)
class StapledStream(HalfCloseableStream):
"""This class `staples <https://en.wikipedia.org/wiki/Staple_(fastener)>`__
    together two unidirectional streams to make a single bidirectional stream.
Args:
send_stream (~trio.abc.SendStream): The stream to use for sending.
receive_stream (~trio.abc.ReceiveStream): The stream to use for
receiving.
Example:
A silly way to make a stream that echoes back whatever you write to
it::
left, right = trio.testing.memory_stream_pair()
echo_stream = StapledStream(SocketStream(left), SocketStream(right))
await echo_stream.send_all(b"x")
assert await echo_stream.receive_some() == b"x"
:class:`StapledStream` objects implement the methods in the
:class:`~trio.abc.HalfCloseableStream` interface. They also have two
additional public attributes:
.. attribute:: send_stream
The underlying :class:`~trio.abc.SendStream`. :meth:`send_all` and
:meth:`wait_send_all_might_not_block` are delegated to this object.
.. attribute:: receive_stream
The underlying :class:`~trio.abc.ReceiveStream`. :meth:`receive_some`
is delegated to this object.
"""
send_stream = attr.ib()
receive_stream = attr.ib()
async def send_all(self, data):
"""Calls ``self.send_stream.send_all``.
"""
return await self.send_stream.send_all(data)
async def wait_send_all_might_not_block(self):
"""Calls ``self.send_stream.wait_send_all_might_not_block``.
"""
return await self.send_stream.wait_send_all_might_not_block()
async def send_eof(self):
"""Shuts down the send side of the stream.
If ``self.send_stream.send_eof`` exists, then calls it. Otherwise,
calls ``self.send_stream.aclose()``.
"""
if hasattr(self.send_stream, "send_eof"):
return await self.send_stream.send_eof()
else:
return await self.send_stream.aclose()
async def receive_some(self, max_bytes=None):
"""Calls ``self.receive_stream.receive_some``.
"""
return await self.receive_stream.receive_some(max_bytes)
async def aclose(self):
"""Calls ``aclose`` on both underlying streams.
"""
try:
await self.send_stream.aclose()
finally:
await self.receive_stream.aclose()
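StapledStream.aclose wraps the two closes in try/finally so the receive side is closed even if closing the send side raises. The same guarantee in a synchronous sketch (the stream class here is invented):

```python
closed = []

class FakeStream:
    """Toy stand-in for a one-directional stream."""
    def __init__(self, name, fail=False):
        self.name, self.fail = name, fail
    def close(self):
        closed.append(self.name)
        if self.fail:
            raise RuntimeError("close failed")

send, recv = FakeStream("send", fail=True), FakeStream("recv")
try:
    try:
        send.close()
    finally:
        recv.close()  # runs even though send.close() raised
except RuntimeError:
    pass
assert closed == ["send", "recv"]
```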
# ============================================================
# File: website/addons/forward/routes.py
# Repo: lbanner/osf.io (Apache-2.0)
# ============================================================
# -*- coding: utf-8 -*-
"""Forward addon routes."""
from framework.routing import Rule, json_renderer
from website.routes import OsfWebRenderer
from website.addons.forward import views
api_routes = {
'rules': [
Rule(
[
'/project/<pid>/forward/config/',
'/project/<pid>/node/<nid>/forward/config/'
],
'get',
views.config.forward_config_get,
json_renderer,
),
Rule(
[
'/project/<pid>/forward/config/',
'/project/<pid>/node/<nid>/forward/config/'
],
'put',
views.config.forward_config_put,
json_renderer,
),
Rule(
[
'/project/<pid>/forward/widget/',
'/project/<pid>/node/<nid>/forward/widget/',
],
'get',
views.widget.forward_widget,
OsfWebRenderer('../addons/forward/templates/forward_widget.mako'),
)
],
'prefix': '/api/v1',
}
# ============================================================
# File: keystone/common/sql/migrate_repo/versions/016_normalize_domain_ids.py
# Repo: sanket4373/keystone (Apache-2.0)
# ============================================================
# Copyright 2012 OpenStack Foundation
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Normalize for domain_id, i.e. ensure User and Project entities have the
domain_id as a first class attribute.
Both User and Project (as well as Group) entities are owned by a
domain, which is implemented as each having a domain_id foreign key
in their sql representation that points back to the respective
domain in the domain table. This domain_id attribute should also
be required (i.e. not nullable)
Adding a non_nullable foreign key attribute to a table with existing
data causes a few problems since not all DB engines support the
ability to either control the triggering of integrity constraints
or the ability to modify columns after they are created.
To get round the above inconsistencies, two versions of the
upgrade/downgrade functions are supplied, one for those engines
that support dropping columns, and one for those that don't. For
the latter we are forced to do table copy AND control the triggering
of integrity constraints.
"""
import sqlalchemy as sql
from sqlalchemy.orm import sessionmaker
from keystone import config
CONF = config.CONF
def _disable_foreign_constraints(session, migrate_engine):
if migrate_engine.name == 'mysql':
session.execute('SET foreign_key_checks = 0;')
def _enable_foreign_constraints(session, migrate_engine):
if migrate_engine.name == 'mysql':
session.execute('SET foreign_key_checks = 1;')
def upgrade_user_table_with_copy(meta, migrate_engine, session):
# We want to add the domain_id attribute to the user table. Since
# it is non nullable and the table may have data, easiest way is
# a table copy. Further, in order to keep foreign key constraints
# pointing at the right table, we need to be able and do a table
# DROP then CREATE, rather than ALTERing the name of the table.
# First make a copy of the user table
temp_user_table = sql.Table(
'temp_user',
meta,
sql.Column('id', sql.String(64), primary_key=True),
sql.Column('name', sql.String(64), unique=True, nullable=False),
sql.Column('extra', sql.Text()),
sql.Column('password', sql.String(128)),
sql.Column('enabled', sql.Boolean, default=True))
temp_user_table.create(migrate_engine, checkfirst=True)
user_table = sql.Table('user', meta, autoload=True)
for user in session.query(user_table):
session.execute('insert into temp_user (id, name, extra, '
'password, enabled) '
'values ( :id, :name, :extra, '
':password, :enabled);',
{'id': user.id,
'name': user.name,
'extra': user.extra,
'password': user.password,
'enabled': user.enabled})
# Now switch off constraints while we drop and then re-create the
# user table, with the additional domain_id column
_disable_foreign_constraints(session, migrate_engine)
session.execute('drop table user;')
# Need to create a new metadata stream since we are going to load a
# different version of the user table
meta2 = sql.MetaData()
meta2.bind = migrate_engine
sql.Table('domain', meta2, autoload=True)
user_table = sql.Table(
'user',
meta2,
sql.Column('id', sql.String(64), primary_key=True),
sql.Column('name', sql.String(64), nullable=False),
sql.Column('extra', sql.Text()),
sql.Column("password", sql.String(128)),
sql.Column("enabled", sql.Boolean, default=True),
sql.Column('domain_id', sql.String(64), sql.ForeignKey('domain.id'),
nullable=False),
sql.UniqueConstraint('domain_id', 'name'))
user_table.create(migrate_engine, checkfirst=True)
# Finally copy in the data from our temp table and then clean
# up by deleting our temp table
for user in session.query(temp_user_table):
session.execute('insert into user (id, name, extra, '
'password, enabled, domain_id) '
'values ( :id, :name, :extra, '
':password, :enabled, :domain_id);',
{'id': user.id,
'name': user.name,
'extra': user.extra,
'password': user.password,
'enabled': user.enabled,
'domain_id': CONF.identity.default_domain_id})
_enable_foreign_constraints(session, migrate_engine)
session.execute('drop table temp_user;')
def upgrade_project_table_with_copy(meta, migrate_engine, session):
# We want to add the domain_id attribute to the project table. Since
# it is non nullable and the table may have data, easiest way is
# a table copy. Further, in order to keep foreign key constraints
# pointing at the right table, we need to be able and do a table
# DROP then CREATE, rather than ALTERing the name of the table.
    # First make a copy of the project table
temp_project_table = sql.Table(
'temp_project',
meta,
sql.Column('id', sql.String(64), primary_key=True),
sql.Column('name', sql.String(64), unique=True, nullable=False),
sql.Column('extra', sql.Text()),
sql.Column('description', sql.Text()),
sql.Column('enabled', sql.Boolean, default=True))
temp_project_table.create(migrate_engine, checkfirst=True)
project_table = sql.Table('project', meta, autoload=True)
for project in session.query(project_table):
session.execute('insert into temp_project (id, name, extra, '
'description, enabled) '
'values ( :id, :name, :extra, '
':description, :enabled);',
{'id': project.id,
'name': project.name,
'extra': project.extra,
'description': project.description,
'enabled': project.enabled})
# Now switch off constraints while we drop and then re-create the
# project table, with the additional domain_id column
_disable_foreign_constraints(session, migrate_engine)
session.execute('drop table project;')
# Need to create a new metadata stream since we are going to load a
# different version of the project table
meta2 = sql.MetaData()
meta2.bind = migrate_engine
sql.Table('domain', meta2, autoload=True)
project_table = sql.Table(
'project',
meta2,
sql.Column('id', sql.String(64), primary_key=True),
sql.Column('name', sql.String(64), nullable=False),
sql.Column('extra', sql.Text()),
sql.Column('description', sql.Text()),
sql.Column('enabled', sql.Boolean, default=True),
sql.Column('domain_id', sql.String(64), sql.ForeignKey('domain.id'),
nullable=False),
sql.UniqueConstraint('domain_id', 'name'))
project_table.create(migrate_engine, checkfirst=True)
# Finally copy in the data from our temp table and then clean
# up by deleting our temp table
for project in session.query(temp_project_table):
session.execute('insert into project (id, name, extra, '
'description, enabled, domain_id) '
'values ( :id, :name, :extra, '
':description, :enabled, :domain_id);',
{'id': project.id,
'name': project.name,
'extra': project.extra,
'description': project.description,
'enabled': project.enabled,
'domain_id': CONF.identity.default_domain_id})
_enable_foreign_constraints(session, migrate_engine)
session.execute('drop table temp_project;')
def downgrade_user_table_with_copy(meta, migrate_engine, session):
# For engines that don't support dropping columns, we need to do this
# as a table copy. Further, in order to keep foreign key constraints
# pointing at the right table, we need to be able and do a table
# DROP then CREATE, rather than ALTERing the name of the table.
    # First make a copy of the user table
temp_user_table = sql.Table(
'temp_user',
meta,
sql.Column('id', sql.String(64), primary_key=True),
sql.Column('name', sql.String(64), unique=True, nullable=False),
sql.Column('password', sql.String(128)),
sql.Column('enabled', sql.Boolean, default=True),
sql.Column('extra', sql.Text()))
temp_user_table.create(migrate_engine, checkfirst=True)
user_table = sql.Table('user', meta, autoload=True)
for user in session.query(user_table):
session.execute('insert into temp_user (id, name, '
'password, enabled, extra) '
'values ( :id, :name, '
':password, :enabled, :extra);',
{'id': user.id,
'name': user.name,
'password': user.password,
'enabled': user.enabled,
'extra': user.extra})
# Now switch off constraints while we drop and then re-create the
# user table, less the columns we wanted to drop
_disable_foreign_constraints(session, migrate_engine)
session.execute('drop table user;')
# Need to create a new metadata stream since we are going to load a
# different version of the user table
meta2 = sql.MetaData()
meta2.bind = migrate_engine
user_table = sql.Table(
'user',
meta2,
sql.Column('id', sql.String(64), primary_key=True),
sql.Column('name', sql.String(64), unique=True, nullable=False),
sql.Column('extra', sql.Text()),
sql.Column('password', sql.String(128)),
sql.Column('enabled', sql.Boolean, default=True))
user_table.create(migrate_engine, checkfirst=True)
_enable_foreign_constraints(session, migrate_engine)
# Finally copy in the data from our temp table and then clean
# up by deleting our temp table
for user in session.query(temp_user_table):
session.execute('insert into user (id, name, extra, '
'password, enabled) '
'values ( :id, :name, :extra, '
':password, :enabled);',
{'id': user.id,
'name': user.name,
'extra': user.extra,
'password': user.password,
'enabled': user.enabled})
session.execute('drop table temp_user;')
def downgrade_project_table_with_copy(meta, migrate_engine, session):
# For engines that don't support dropping columns, we need to do this
# as a table copy. Further, in order to keep foreign key constraints
# pointing at the right table, we need to be able and do a table
# DROP then CREATE, rather than ALTERing the name of the table.
    # First make a copy of the project table
temp_project_table = sql.Table(
'temp_project',
meta,
sql.Column('id', sql.String(64), primary_key=True),
sql.Column('name', sql.String(64), unique=True, nullable=False),
sql.Column('description', sql.Text()),
sql.Column('enabled', sql.Boolean, default=True),
sql.Column('extra', sql.Text()))
temp_project_table.create(migrate_engine, checkfirst=True)
project_table = sql.Table('project', meta, autoload=True)
for project in session.query(project_table):
session.execute('insert into temp_project (id, name, '
'description, enabled, extra) '
'values ( :id, :name, '
':description, :enabled, :extra);',
{'id': project.id,
'name': project.name,
'description': project.description,
'enabled': project.enabled,
'extra': project.extra})
# Now switch off constraints while we drop and then re-create the
# project table, less the columns we wanted to drop
_disable_foreign_constraints(session, migrate_engine)
session.execute('drop table project;')
# Need to create a new metadata stream since we are going to load a
# different version of the project table
meta2 = sql.MetaData()
meta2.bind = migrate_engine
project_table = sql.Table(
'project',
meta2,
sql.Column('id', sql.String(64), primary_key=True),
sql.Column('name', sql.String(64), unique=True, nullable=False),
sql.Column('extra', sql.Text()),
sql.Column('description', sql.Text()),
sql.Column('enabled', sql.Boolean, default=True))
project_table.create(migrate_engine, checkfirst=True)
_enable_foreign_constraints(session, migrate_engine)
# Finally copy in the data from our temp table and then clean
# up by deleting our temp table
for project in session.query(temp_project_table):
session.execute('insert into project (id, name, extra, '
'description, enabled) '
'values ( :id, :name, :extra, '
':description, :enabled);',
{'id': project.id,
'name': project.name,
'extra': project.extra,
'description': project.description,
'enabled': project.enabled})
session.execute("drop table temp_project;")
def upgrade_user_table_with_col_create(meta, migrate_engine, session):
# Create the domain_id column. We want this to be not nullable
# but also a foreign key. We can't create this right off the
# bat since any existing rows would cause an Integrity Error.
# We therefore create it nullable, fill the column with the
# default data and then set it to non nullable.
sql.Table('domain', meta, autoload=True)
user_table = sql.Table('user', meta, autoload=True)
user_table.create_column(
sql.Column('domain_id', sql.String(64),
sql.ForeignKey('domain.id'), nullable=True))
for user in session.query(user_table).all():
values = {'domain_id': CONF.identity.default_domain_id}
update = user_table.update().\
where(user_table.c.id == user.id).\
values(values)
migrate_engine.execute(update)
# Need to commit this or setting nullable to False will fail
session.commit()
user_table.columns.domain_id.alter(nullable=False)
# Finally, change the uniqueness settings for the name attribute
session.execute('ALTER TABLE "user" DROP CONSTRAINT user_name_key;')
session.execute('ALTER TABLE "user" ADD CONSTRAINT user_dom_name_unique '
'UNIQUE (domain_id, name);')
session.commit()
def upgrade_project_table_with_col_create(meta, migrate_engine, session):
# Create the domain_id column. We want this to be not nullable
# but also a foreign key. We can't create this right off the
# bat since any existing rows would cause an Integrity Error.
# We therefore create it nullable, fill the column with the
# default data and then set it to non nullable.
sql.Table('domain', meta, autoload=True)
project_table = sql.Table('project', meta, autoload=True)
project_table.create_column(
sql.Column('domain_id', sql.String(64),
sql.ForeignKey('domain.id'), nullable=True))
for project in session.query(project_table).all():
values = {'domain_id': CONF.identity.default_domain_id}
update = project_table.update().\
where(project_table.c.id == project.id).\
values(values)
migrate_engine.execute(update)
# Need to commit this or setting nullable to False will fail
session.commit()
project_table.columns.domain_id.alter(nullable=False)
# Finally, change the uniqueness settings for the name attribute
session.execute('ALTER TABLE project DROP CONSTRAINT tenant_name_key;')
session.execute('ALTER TABLE project ADD CONSTRAINT proj_dom_name_unique '
'UNIQUE (domain_id, name);')
def downgrade_user_table_with_col_drop(meta, migrate_engine, session):
# Revert uniqueness settings for the name attribute
session.execute('ALTER TABLE "user" DROP CONSTRAINT '
'user_dom_name_unique;')
# specify the constraint name so it can be referenced later
session.execute('ALTER TABLE "user" ADD CONSTRAINT user_name_key '
'UNIQUE (name);')
session.commit()
    # And now go ahead and drop the domain_id column
sql.Table('domain', meta, autoload=True)
user_table = sql.Table('user', meta, autoload=True)
column = sql.Column('domain_id', sql.String(64),
sql.ForeignKey('domain.id'), nullable=False)
column.drop(user_table)
def downgrade_project_table_with_col_drop(meta, migrate_engine, session):
# Revert uniqueness settings for the name attribute
session.execute('ALTER TABLE project DROP CONSTRAINT '
'proj_dom_name_unique;')
session.execute('ALTER TABLE project ADD CONSTRAINT tenant_name_key '
'UNIQUE (name);')
session.commit()
    # And now go ahead and drop the domain_id column
sql.Table('domain', meta, autoload=True)
project_table = sql.Table('project', meta, autoload=True)
column = sql.Column('domain_id', sql.String(64),
sql.ForeignKey('domain.id'), nullable=False)
column.drop(project_table)
def upgrade(migrate_engine):
meta = sql.MetaData()
meta.bind = migrate_engine
session = sessionmaker(bind=migrate_engine)()
if migrate_engine.name in ['sqlite', 'mysql']:
upgrade_user_table_with_copy(meta, migrate_engine, session)
upgrade_project_table_with_copy(meta, migrate_engine, session)
else:
upgrade_user_table_with_col_create(meta, migrate_engine, session)
upgrade_project_table_with_col_create(meta, migrate_engine, session)
session.commit()
session.close()
def downgrade(migrate_engine):
meta = sql.MetaData()
meta.bind = migrate_engine
session = sessionmaker(bind=migrate_engine)()
if migrate_engine.name in ['sqlite', 'mysql']:
downgrade_user_table_with_copy(meta, migrate_engine, session)
downgrade_project_table_with_copy(meta, migrate_engine, session)
else:
# MySQL should in theory be able to use this path, but seems to
# have problems dropping columns which are foreign keys
downgrade_user_table_with_col_drop(meta, migrate_engine, session)
downgrade_project_table_with_col_drop(meta, migrate_engine, session)
session.commit()
session.close()
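The column-create path's core maneuver -- add the column nullable, backfill a default, then tighten to NOT NULL -- can be replayed on an in-memory SQLite database up to the backfill step (the final ALTER is exactly what SQLite lacks, which is why the table-copy path exists):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (id TEXT PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO user VALUES ('u1', 'alice')")

# Step 1: add the column as nullable so the existing row stays legal.
con.execute("ALTER TABLE user ADD COLUMN domain_id TEXT")
# Step 2: backfill the default domain for pre-existing rows.
con.execute("UPDATE user SET domain_id = 'default'")
# Step 3 (ALTER ... SET NOT NULL) is unsupported here -- hence the copy path.

rows = con.execute("SELECT id, domain_id FROM user").fetchall()
assert rows == [("u1", "default")]
```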
# ============================================================
# File: setup.py
# Repo: matsurih/pyjstage (MIT)
# ============================================================
import os
from setuptools import setup, find_packages
def read_requirements():
reqs_path = os.path.join('.', 'requirements.txt')
with open(reqs_path, 'r') as f:
requirements = [line.rstrip() for line in f]
return requirements
def read_description():
desc_path = os.path.join('.', 'README.md')
with open(desc_path, 'r') as f:
description = f.read()
return description
setup(
name='pyjstage',
version='v0.0.2',
description='J-STAGE API wrapper for Python',
long_description=read_description(),
long_description_content_type="text/markdown",
author='matsurih',
author_email='pipikapu@gmail.com',
url='https://github.com/matsurih/pyjstage',
license='MIT',
packages=find_packages(exclude=('tests', 'docs')),
install_requires=read_requirements(),
)
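read_requirements depends on rstrip dropping each line's trailing newline; the same transform on an in-memory file:

```python
import io

fake_file = io.StringIO("requests>=2.0\nclick\n")
requirements = [line.rstrip() for line in fake_file]
assert requirements == ["requests>=2.0", "click"]
```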
# ============================================================
# File: mix_real_MIT_data.py
# Repo: geigerf/utils (Apache-2.0)
# ============================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu Jul 30 14:13:12 2020
@author: fabian geiger
"""
import numpy as np
import scipy.io as sio
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import train_test_split
from imblearn import under_sampling
from pathlib import Path
def mix_real_MIT(kfold=3, seed=333, undersample=True, split='random'):
"""
Utility function to mix the MIT dataset with the self-acquired dataset
"""
#realData_path = Path('/home/fabian/Documents/Master_thesis/Data_Collection/3kOhm_FB/data_MT_FabianGeiger_5sess.mat')
realData_path = Path('../../Data_Collection/3kOhm_FB/data_MT_FabianGeiger_5sess.mat')
#MITdata_path = Path('/home/fabian/Documents/Master_thesis/Research/STAG_MIT/classification_lite/metadata.mat')
MITdata_path = Path('../../Research/STAG_MIT/classification_lite/metadata.mat')
# These two lists will contain valid data split into recording sessions
x = []
y = []
realData = sio.loadmat(realData_path, squeeze_me=True)
real_valid_mask = realData['valid_flag'] == 1
# Scale all data to the range [0, 1]
realPressure = realData['tactile_data'][real_valid_mask].astype(np.float32)
realPressure = np.clip((realPressure - 1510)/(3000 - 1510), 0.0, 1.0)
realObject_id = realData['object_id'][real_valid_mask]
realSession_id = realData['session_id'][real_valid_mask]
# Split into sessions
num_sessions = len(np.unique(realSession_id))
for i in range(num_sessions):
session_mask = realSession_id == i
x.append(realPressure[session_mask])
y.append(realObject_id[session_mask])
MITdata = sio.loadmat(MITdata_path, squeeze_me=True)
MIT_valid_mask = MITdata['hasValidLabel'] == 1
MITpressure = MITdata['pressure'].reshape((-1, 32*32)).astype(np.float32)
MITpressure = MITpressure[MIT_valid_mask]
MITpressure = np.clip((MITpressure - 500)/(650 - 500), 0.0, 1.0)
MITobject_id = MITdata['objectId'][MIT_valid_mask]
# Only use the same objects as in the real data set
MITobjects = list(MITdata['objects'])
used_objects = list(map(str.strip, realData['objects']))
# Each class has three recording IDs that correspond to the different
# experiment days. There are 81 recording IDs (3*27)
# 0 - 26 belong to the first recording
# 27 - 53 belong to the second recording
    # 54 - 80 belong to the third recording
MITrecording_id = MITdata['recordingId'][MIT_valid_mask]
for i in range(3):
# Find valid samples from the different recording days
recording_mask = np.logical_and(MITrecording_id >= i*27,
MITrecording_id < (i+1)*27)
used_pressure = []
used_object_id = []
for i, obj in enumerate(used_objects):
idx = MITobjects.index(obj)
used_mask = np.logical_and(MITobject_id == idx, recording_mask)
used_pressure.append(MITpressure[used_mask])
used_object_id.append(np.full(len(MITobject_id[used_mask]), i))
x.append(np.concatenate(used_pressure))
y.append(np.concatenate(used_object_id))
if kfold is not None:
# Decrease the test size if cross validation is used
test_size = 0.15
else:
kfold = 3
test_size = 0.33
if(split == 'random'):
pressure = np.concatenate(x)
object_id = np.concatenate(y)
if(undersample):
us = under_sampling.RandomUnderSampler(random_state=seed,
sampling_strategy='not minority')
us_pressure, us_object_id = us.fit_resample(pressure, object_id)
pressure, object_id = us_pressure, us_object_id
# Split the already balanced dataset in a stratified way -> training
# and test set will still be balanced
train_data, test_data,\
train_labels, test_labels = train_test_split(pressure, object_id,
test_size=test_size,
random_state=seed,
shuffle=True,
stratify=object_id)
#print(train_data.shape, train_labels.shape)
# This generates a k fold split in a stratified way.
# Easy way to do k fold cross validation
skf = StratifiedKFold(n_splits=kfold, shuffle=True,
random_state=seed)
# train_ind, val_ind = skf.split(train_data, train_labels)
# skf_gen = skf.split(train_data, train_labels)
return train_data, train_labels, test_data, test_labels, skf
elif(split == 'session'):
        return x, y
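Both datasets are normalized with the same clip((x - lo) / (hi - lo), 0, 1) transform; a pure-Python equivalent for a single sensor value (the bounds are the ones used for the real data above):

```python
def scale(x, lo=1510, hi=3000):
    """Map a raw sensor count into [0, 1], clipping out-of-range values."""
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

assert scale(1510) == 0.0   # lower calibration bound
assert scale(3000) == 1.0   # upper calibration bound
assert scale(4000) == 1.0   # clipped from above
assert scale(1000) == 0.0   # clipped from below
```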
# ============================================================
# File: app/__init__.py
# Repo: marvinkome/flask-tutorial (MIT)
# ============================================================
from flask import Flask
from flask_bootstrap import Bootstrap
from flask_moment import Moment
from flask_sqlalchemy import SQLAlchemy
from flask_mail import Mail, Message
from flask_login import LoginManager
from flask_assets import Environment
from flask_pagedown import PageDown
from config import config
from .assets import app_assets
bootstrap = Bootstrap()
moment = Moment()
db = SQLAlchemy()
mail = Mail()
asset = Environment()
pagedown = PageDown()
login_manager = LoginManager()
login_manager.session_protection = 'strong'
login_manager.login_view = 'auth.login'
def create_app(config_name):
app = Flask(__name__)
app.config.from_object(config[config_name])
config[config_name].init_app(app)
bootstrap.init_app(app)
moment.init_app(app)
db.init_app(app)
mail.init_app(app)
asset.init_app(app)
asset.register(app_assets)
pagedown.init_app(app)
login_manager.init_app(app)
from .main import main as main_blueprint
app.register_blueprint(main_blueprint)
from .auth import auth as auth_blueprint
app.register_blueprint(auth_blueprint, url_prefix='/auth')
from .api_1_0 import api as api_1_0_blueprint
app.register_blueprint(api_1_0_blueprint, url_prefix='/api/v1.0')
return app
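The application-factory pattern above -- module-level extension singletons that stay unbound until init_app is called inside create_app -- can be sketched without Flask (all names here are illustrative):

```python
class Extension:
    """Minimal stand-in for a Flask extension such as SQLAlchemy()."""
    def __init__(self):
        self.app = None            # created unbound at module import time
    def init_app(self, app):
        self.app = app             # bound later, inside the factory

db = Extension()                   # module-level singleton

def create_app(config_name):
    app = {"config": config_name}  # stand-in for Flask(__name__)
    db.init_app(app)
    return app

app = create_app("default")
assert db.app is app               # the singleton now points at this app
```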
# ============================================================
# File: src/scraper_app/admin.py
# Repo: banyh/news_scraper (MIT)
# ============================================================
from django.contrib import admin
from .models import Article, ArticleItem
admin.site.register(Article)

# File: selfdrive/car/honda/hondacan.py (repo: joshuafuller/openpilot, license: MIT)
from selfdrive.car.honda.values import HONDA_BOSCH
def get_pt_bus(car_fingerprint, has_relay):
return 1 if car_fingerprint in HONDA_BOSCH and has_relay else 0
def get_lkas_cmd_bus(car_fingerprint, has_relay):
return 2 if car_fingerprint in HONDA_BOSCH and not has_relay else 0
def create_brake_command(packer, apply_brake, pump_on, pcm_override, pcm_cancel_cmd, fcw, idx, car_fingerprint, has_relay, stock_brake):
# TODO: do we loose pressure if we keep pump off for long?
brakelights = apply_brake > 0
brake_rq = apply_brake > 0
pcm_fault_cmd = False
values = {
"COMPUTER_BRAKE": apply_brake,
"BRAKE_PUMP_REQUEST": pump_on,
"CRUISE_OVERRIDE": pcm_override,
"CRUISE_FAULT_CMD": pcm_fault_cmd,
"CRUISE_CANCEL_CMD": pcm_cancel_cmd,
"COMPUTER_BRAKE_REQUEST": brake_rq,
"SET_ME_1": 1,
"BRAKE_LIGHTS": brakelights,
"CHIME": stock_brake["CHIME"] if fcw else 0, # send the chime for stock fcw
"FCW": fcw << 1, # TODO: Why are there two bits for fcw?
"AEB_REQ_1": 0,
"AEB_REQ_2": 0,
"AEB_STATUS": 0,
}
bus = get_pt_bus(car_fingerprint, has_relay)
return packer.make_can_msg("BRAKE_COMMAND", bus, values, idx)
def create_steering_control(packer, apply_steer, lkas_active, car_fingerprint, idx, has_relay):
values = {
"STEER_TORQUE": apply_steer if lkas_active else 0,
"STEER_TORQUE_REQUEST": lkas_active,
}
bus = get_lkas_cmd_bus(car_fingerprint, has_relay)
return packer.make_can_msg("STEERING_CONTROL", bus, values, idx)
def create_bosch_supplemental_1(packer, car_fingerprint, idx, has_relay):
# non-active params
values = {
"SET_ME_X04": 0x04,
"SET_ME_X80": 0x80,
"SET_ME_X10": 0x10,
}
bus = get_lkas_cmd_bus(car_fingerprint, has_relay)
return packer.make_can_msg("BOSCH_SUPPLEMENTAL_1", bus, values, idx)
def create_ui_commands(packer, pcm_speed, hud, car_fingerprint, is_metric, idx, has_relay, stock_hud):
commands = []
bus_pt = get_pt_bus(car_fingerprint, has_relay)
bus_lkas = get_lkas_cmd_bus(car_fingerprint, has_relay)
if car_fingerprint not in HONDA_BOSCH:
acc_hud_values = {
'PCM_SPEED': pcm_speed * CV.MS_TO_KPH,
'PCM_GAS': hud.pcm_accel,
'CRUISE_SPEED': hud.v_cruise,
'ENABLE_MINI_CAR': 1,
'HUD_LEAD': hud.car,
'HUD_DISTANCE': 3, # max distance setting on display
'IMPERIAL_UNIT': int(not is_metric),
'SET_ME_X01_2': 1,
'SET_ME_X01': 1,
"FCM_OFF": stock_hud["FCM_OFF"],
"FCM_OFF_2": stock_hud["FCM_OFF_2"],
"FCM_PROBLEM": stock_hud["FCM_PROBLEM"],
"ICONS": stock_hud["ICONS"],
}
commands.append(packer.make_can_msg("ACC_HUD", bus_pt, acc_hud_values, idx))
lkas_hud_values = {
'SET_ME_X41': 0x41,
'SET_ME_X48': 0x48,
'STEERING_REQUIRED': hud.steer_required,
'SOLID_LANES': hud.lanes,
'BEEP': 0,
}
commands.append(packer.make_can_msg('LKAS_HUD', bus_lkas, lkas_hud_values, idx))
return commands
def spam_buttons_command(packer, button_val, idx, car_fingerprint, has_relay):
values = {
'CRUISE_BUTTONS': button_val,
'CRUISE_SETTING': 0,
}
bus = get_pt_bus(car_fingerprint, has_relay)
return packer.make_can_msg("SCM_BUTTONS", bus, values, idx)

# File: clumioapi/controllers/vmware_vcenter_categories_v1.py (repo: clumio-code/clumio-python-sdk, license: Apache-2.0)
# Copyright 2021. Clumio, Inc.
#
from clumioapi import api_helper
from clumioapi import configuration
from clumioapi.controllers import base_controller
from clumioapi.exceptions import clumio_exception
from clumioapi.models import list_tag_categories2_response
from clumioapi.models import read_tag_category2_response
import requests
class VmwareVcenterCategoriesV1Controller(base_controller.BaseController):
"""A Controller to access Endpoints for vmware-vcenter-categories resource."""
def __init__(self, config: configuration.Configuration) -> None:
super().__init__(config)
self.config = config
def list_vmware_vcenter_categories(
self, vcenter_id: str, limit: int = None, start: str = None, filter: str = None
) -> list_tag_categories2_response.ListTagCategories2Response:
"""Returns a list of tag categories in the specified vCenter server.
Args:
vcenter_id:
Performs the operation on the vCenter server with the specified ID.
limit:
Limits the size of the response on each page to the specified number of items.
start:
Sets the page number used to browse the collection.
Pages are indexed starting from 1 (i.e., `start=1`).
filter:
Narrows down the results to only the items that satisfy the filter criteria. The
following table lists
the supported filter fields for this resource and the filter conditions that can
be applied on those fields:
+-------+------------------+---------------------------------------------------+
| Field | Filter Condition | Description |
+=======+==================+===================================================+
| name | contains | The name of the category. For example, |
| | | filter={"name":{"$contains":"backup-category"}} |
+-------+------------------+---------------------------------------------------+
Returns:
ListTagCategories2Response: Response from the API.
Raises:
ClumioException: An error occurred while executing the API.
This exception includes the HTTP response code, an error
message, and the HTTP body that was received in the request.
"""
# Prepare query URL
_url_path = f'{self.config.base_path}/datasources/vmware/vcenters/{vcenter_id}/categories'
_url_path = api_helper.append_url_with_template_parameters(
_url_path, {'vcenter_id': vcenter_id}
)
_query_parameters = {}
_query_parameters = {'limit': limit, 'start': start, 'filter': filter}
# Prepare headers
_headers = {
'accept': 'application/vmware-vcenter-categories=v1+json',
}
# Execute request
try:
resp = self.client.get(_url_path, headers=_headers, params=_query_parameters)
except requests.exceptions.HTTPError as http_error:
errors = self.client.get_error_message(http_error.response)
raise clumio_exception.ClumioException(
'Error occurred while executing list_vmware_vcenter_categories.', errors
)
return list_tag_categories2_response.ListTagCategories2Response.from_dictionary(resp)
def read_vmware_vcenter_category(
self, vcenter_id: str, category_id: str
) -> read_tag_category2_response.ReadTagCategory2Response:
"""Returns a representation of the specified tag category.
Args:
vcenter_id:
Performs the operation on a tag category within the specified vCenter server.
category_id:
Performs the operation on the tag category with the specified ID.
Returns:
ReadTagCategory2Response: Response from the API.
Raises:
ClumioException: An error occurred while executing the API.
This exception includes the HTTP response code, an error
message, and the HTTP body that was received in the request.
"""
# Prepare query URL
_url_path = f'{self.config.base_path}/datasources/vmware/vcenters/{vcenter_id}/categories/{category_id}'
_url_path = api_helper.append_url_with_template_parameters(
_url_path, {'vcenter_id': vcenter_id, 'category_id': category_id}
)
_query_parameters = {}
# Prepare headers
_headers = {
'accept': 'application/vmware-vcenter-categories=v1+json',
}
# Execute request
try:
resp = self.client.get(_url_path, headers=_headers, params=_query_parameters)
except requests.exceptions.HTTPError as http_error:
errors = self.client.get_error_message(http_error.response)
raise clumio_exception.ClumioException(
'Error occurred while executing read_vmware_vcenter_category.', errors
)
return read_tag_category2_response.ReadTagCategory2Response.from_dictionary(resp)
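The `filter` query parameter documented above is a JSON-encoded object passed as a string. As a minimal sketch (the field name and value are taken straight from the docstring example), the standard library can build it rather than hand-writing the string:

```python
import json

# Encode the documented filter condition as the query-string value.
# Compact separators reproduce the form shown in the docstring.
filter_value = json.dumps({"name": {"$contains": "backup-category"}},
                          separators=(",", ":"))
print(filter_value)  # {"name":{"$contains":"backup-category"}}
```

The resulting string is what would be passed as the `filter` argument to `list_vmware_vcenter_categories`.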

# File: python/datasetLoader.py (repo: sylwow/Engineering-Thesis, license: MIT)
from torch.utils.data import Dataset, DataLoader
import torchvision.transforms as transforms
import glob
import re
import cv2
import numpy as np
import os
import random
class myDataset(Dataset):
"""hand symbols dataset."""
def __init__(self, dir, split, test=False, train=False, validation=False, transform=None):
"""
dir - directory to dataset
"""
self.dir = dir
self.transform = transform
splitter = ""
if test:
splitter = '**/*0.jpg'
elif train:
splitter = '**/*[2-9].jpg'
elif validation:
splitter = '**/*1.jpg'
else:
raise Exception("choose one of train/test/validation")
self.pathsList = glob.glob(self.dir + splitter, recursive=True)
def __len__(self):
return len(self.pathsList)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
path = self.pathsList[idx]
image = cv2.imread(path)  # 480 x 640 x 3
gesture, fileNr = os.path.basename(path).split('_')
nmb = int(gesture)
label = np.zeros(13)
label[nmb - 1] = 1
if self.transform:
image = cv2.flip(image, 1)
image = image[:, 160:640]
image = self.transform(image)
return (image, torch.tensor(label, dtype=torch.float))
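The test/train/validation split above hinges on the last digit before `.jpg` in each filename. A small sketch of how those glob suffix patterns partition a directory (the filenames here are invented for illustration):

```python
import fnmatch

files = ["1_00.jpg", "1_01.jpg", "1_02.jpg", "1_09.jpg"]

# Same suffix patterns as the splitter strings in __init__ above.
test_files = [f for f in files if fnmatch.fnmatch(f, "*0.jpg")]
val_files = [f for f in files if fnmatch.fnmatch(f, "*1.jpg")]
train_files = [f for f in files if fnmatch.fnmatch(f, "*[2-9].jpg")]

print(test_files)   # ['1_00.jpg']
print(val_files)    # ['1_01.jpg']
print(train_files)  # ['1_02.jpg', '1_09.jpg']
```

When file numbers are uniform over their last digit, this yields roughly a 10%/10%/80% split.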
class KinectDataset(Dataset):
"""hand symbols dataset."""
def __init__(self, dir, split, test=False, transform=None):
"""
dir - directory to dataset
"""
self.dir = dir
self.transform = transform
splitter = ""
if (not test):
splitter = '**/*[' + str(split) + '-9]_rgb.png'
else:
splitter = '**/*[0-' + str(split - 1) + ']_rgb.png'
self.pathsList = glob.glob(self.dir + splitter, recursive=True)
def __len__(self):
return len(self.pathsList)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
path = self.pathsList[idx]
image = cv2.imread(path)
label = int(re.findall(r'G\d', path)[0][1:])
label = np.eye(10)[label]
if self.transform:
image = self.transform(image)
return (image, torch.tensor(label, dtype=torch.float))
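The `np.eye(10)[label]` line above is a compact one-hot encoding: row `label` of the identity matrix. A minimal illustration:

```python
import numpy as np

# Row 3 of the 10x10 identity matrix is the one-hot vector for class 3.
label = 3
one_hot = np.eye(10)[label]
print(one_hot.tolist())  # [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```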
class LaRED(Dataset):
"""hand symbols dataset."""
def __init__(self, dir, split, test=False, transform=None):
"""
dir - directory to dataset
"""
self.dir = dir
self.transform = transform
splitter = ""
if (not test):
splitter = '**/O001/*[' + str(split + 1) + '-9].jpg'
else:
splitter = '**/O001/*[0-' + str(split) + '].jpg'
self.pathsList = glob.glob(self.dir + splitter, recursive=True)
def __len__(self):
return len(self.pathsList)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
path = self.pathsList[idx]
image = cv2.imread(path)
nmb = int(re.findall(r'G\d\d\d', path)[0][1:])
label = np.zeros(27)
label[nmb] = 1
if self.transform:
image = self.transform(image)
return (image, torch.tensor(label, dtype=torch.float))
if __name__ == "__main__":
# test
split = 2
import openCvTranforms.opencv_transforms.transforms as tf
heigh = 200
width = 200
def imshow(img):
img = img[0].permute(1, 2, 0).numpy() / 2 + 0.5
# img = img.view(heigh, width, 3).numpy() # unnormalize
print(img.shape)
cv2.imshow("preview", img)
transform = transforms.Compose([
tf.myRandomCrop(380, 480),
tf.Resize((heigh, width)),
tf.ColorJitter(brightness=0.5, contrast=0.5),
tf.ToTensor(),
tf.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
dir = "D:/DataSetNew/"
testDataset = myDataset(dir, split, test=True, transform=transform)
trainDataset = myDataset(dir, split, test=False, transform=transform)
lts = len(testDataset)
ltr = len(trainDataset)
print("test: {}, train {}".format(lts, ltr))
print("sum : {}, check {}".format(ltr + lts, (ltr + lts) / 13))
# for idx in range(len(dat)):
# print(dat.__getitem__(idx))
testLoader = torch.utils.data.DataLoader(testDataset,
batch_size=1, shuffle=False,
num_workers=4)
trainLoader = torch.utils.data.DataLoader(trainDataset,
batch_size=1, shuffle=True,
num_workers=4)
for data in trainLoader:
images, labels = data
imshow(images)
cv2.waitKey(0)

# File: Scripts/Example_Run.py (repo: CBIIT/NCI-DOE-Collab-Pilot1-Enhanced_COXEN, license: MIT)
import pandas as pd
from EnhancedCOXEN_Functions import coxen_multi_drug_gene_selection
res = pd.read_csv('../Data/Drug_Response_Data_Of_Set_1.txt', sep='\t', engine='c', na_values=['na', '-', ''],
header=0, index_col=None)
data1 = pd.read_csv('../Data/Gene_Expression_Data_Of_Set_1.txt', sep='\t', engine='c', na_values=['na', '-', ''],
header=0, index_col=0)
data2 = pd.read_csv('../Data/Gene_Expression_Data_Of_Set_2.txt', sep='\t', engine='c', na_values=['na', '-', ''],
header=0, index_col=0)
assert np.sum(data1.columns != data2.columns) == 0
# Use enhanced COXEN method to select genes. First, select 200 predictive genes to form the candidate pool, and then
# select 100 generalizable genes from the candidate pool. The absolute value of Pearson correlation coefficient is
# used as the measure of gene's prediction power.
id = coxen_multi_drug_gene_selection(source_data=data1, target_data=data2, drug_response_data=res,
drug_response_col='Efficacy', tumor_col='Cell_Line', drug_col='Drug',
prediction_power_measure='pearson', num_predictive_gene=200,
num_generalizable_gene=100)
print('Selected genes are:')
print(data1.columns[id])

# File: scipy/spatial/transform/rotation.py (repo: avivajpeyi/scipy, license: BSD-3-Clause)
import warnings
import numpy as np
import scipy.linalg
from scipy._lib._util import check_random_state
from ._rotation_groups import create_group
_AXIS_TO_IND = {'x': 0, 'y': 1, 'z': 2}
def _elementary_basis_vector(axis):
b = np.zeros(3)
b[_AXIS_TO_IND[axis]] = 1
return b
def _compute_euler_from_matrix(matrix, seq, extrinsic=False):
# The algorithm assumes intrinsic frame transformations. The algorithm
# in the paper is formulated for rotation matrices that are the
# transposes of the rotation matrices used within Rotation.
# Adapt the algorithm for our case by
# 1. Instead of transposing our representation, use the transpose of the
# O matrix as defined in the paper, and be careful to swap indices
# 2. Reversing both axis sequence and angles for extrinsic rotations
if extrinsic:
seq = seq[::-1]
if matrix.ndim == 2:
matrix = matrix[None, :, :]
num_rotations = matrix.shape[0]
# Step 0
# Algorithm assumes axes as column vectors, here we use 1D vectors
n1 = _elementary_basis_vector(seq[0])
n2 = _elementary_basis_vector(seq[1])
n3 = _elementary_basis_vector(seq[2])
# Step 2
sl = np.dot(np.cross(n1, n2), n3)
cl = np.dot(n1, n3)
# angle offset is lambda from the paper referenced in [2] from docstring of
# `as_euler` function
offset = np.arctan2(sl, cl)
c = np.vstack((n2, np.cross(n1, n2), n1))
# Step 3
rot = np.array([
[1, 0, 0],
[0, cl, sl],
[0, -sl, cl],
])
res = np.einsum('...ij,...jk->...ik', c, matrix)
matrix_transformed = np.einsum('...ij,...jk->...ik', res, c.T.dot(rot))
# Step 4
angles = np.empty((num_rotations, 3))
# Ensure less than unit norm
positive_unity = matrix_transformed[:, 2, 2] > 1
negative_unity = matrix_transformed[:, 2, 2] < -1
matrix_transformed[positive_unity, 2, 2] = 1
matrix_transformed[negative_unity, 2, 2] = -1
angles[:, 1] = np.arccos(matrix_transformed[:, 2, 2])
# Steps 5, 6
eps = 1e-7
safe1 = (np.abs(angles[:, 1]) >= eps)
safe2 = (np.abs(angles[:, 1] - np.pi) >= eps)
# Step 4 (Completion)
angles[:, 1] += offset
# 5b
safe_mask = np.logical_and(safe1, safe2)
angles[safe_mask, 0] = np.arctan2(matrix_transformed[safe_mask, 0, 2],
-matrix_transformed[safe_mask, 1, 2])
angles[safe_mask, 2] = np.arctan2(matrix_transformed[safe_mask, 2, 0],
matrix_transformed[safe_mask, 2, 1])
if extrinsic:
# For extrinsic, set first angle to zero so that after reversal we
# ensure that third angle is zero
# 6a
angles[~safe_mask, 0] = 0
# 6b
angles[~safe1, 2] = np.arctan2(matrix_transformed[~safe1, 1, 0]
- matrix_transformed[~safe1, 0, 1],
matrix_transformed[~safe1, 0, 0]
+ matrix_transformed[~safe1, 1, 1])
# 6c
angles[~safe2, 2] = -np.arctan2(matrix_transformed[~safe2, 1, 0]
+ matrix_transformed[~safe2, 0, 1],
matrix_transformed[~safe2, 0, 0]
- matrix_transformed[~safe2, 1, 1])
else:
# For intrinsic, set third angle to zero
# 6a
angles[~safe_mask, 2] = 0
# 6b
angles[~safe1, 0] = np.arctan2(matrix_transformed[~safe1, 1, 0]
- matrix_transformed[~safe1, 0, 1],
matrix_transformed[~safe1, 0, 0]
+ matrix_transformed[~safe1, 1, 1])
# 6c
angles[~safe2, 0] = np.arctan2(matrix_transformed[~safe2, 1, 0]
+ matrix_transformed[~safe2, 0, 1],
matrix_transformed[~safe2, 0, 0]
- matrix_transformed[~safe2, 1, 1])
# Step 7
if seq[0] == seq[2]:
# lambda = 0, so we can only ensure angle2 -> [0, pi]
adjust_mask = np.logical_or(angles[:, 1] < 0, angles[:, 1] > np.pi)
else:
# lambda = + or - pi/2, so we can ensure angle2 -> [-pi/2, pi/2]
adjust_mask = np.logical_or(angles[:, 1] < -np.pi / 2,
angles[:, 1] > np.pi / 2)
# Don't adjust gimbal-locked angle sequences
adjust_mask = np.logical_and(adjust_mask, safe_mask)
angles[adjust_mask, 0] += np.pi
angles[adjust_mask, 1] = 2 * offset - angles[adjust_mask, 1]
angles[adjust_mask, 2] -= np.pi
angles[angles < -np.pi] += 2 * np.pi
angles[angles > np.pi] -= 2 * np.pi
# Step 8
if not np.all(safe_mask):
warnings.warn("Gimbal lock detected. Setting third angle to zero since"
" it is not possible to uniquely determine all angles.")
# Reverse role of extrinsic and intrinsic rotations, but let third angle be
# zero for gimbal locked cases
if extrinsic:
angles = angles[:, ::-1]
return angles
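As a small check of Step 2 above: for the common 'zyx' sequence the offset lambda evaluates to -pi/2. A sketch with plain NumPy, using a local helper in place of `_elementary_basis_vector`:

```python
import numpy as np

def basis(axis):
    # Stand-in for _elementary_basis_vector.
    b = np.zeros(3)
    b["xyz".index(axis)] = 1.0
    return b

n1, n2, n3 = basis('z'), basis('y'), basis('x')
sl = np.dot(np.cross(n1, n2), n3)   # cross(z, y) . x == -1
cl = np.dot(n1, n3)                 # z . x == 0
offset = np.arctan2(sl, cl)
print(offset)  # -1.5707963267948966
```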
def _make_elementary_quat(axis, angles):
quat = np.zeros((angles.shape[0], 4))
quat[:, 3] = np.cos(angles / 2)
quat[:, _AXIS_TO_IND[axis]] = np.sin(angles / 2)
return quat
def _compose_quat(p, q):
product = np.empty((max(p.shape[0], q.shape[0]), 4))
product[:, 3] = p[:, 3] * q[:, 3] - np.sum(p[:, :3] * q[:, :3], axis=1)
product[:, :3] = (p[:, None, 3] * q[:, :3] + q[:, None, 3] * p[:, :3] +
np.cross(p[:, :3], q[:, :3]))
return product
def _elementary_quat_compose(seq, angles, intrinsic=False):
result = _make_elementary_quat(seq[0], angles[:, 0])
for idx, axis in enumerate(seq[1:], start=1):
if intrinsic:
result = _compose_quat(
result,
_make_elementary_quat(axis, angles[:, idx]))
else:
result = _compose_quat(
_make_elementary_quat(axis, angles[:, idx]),
result)
return result
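A quick sanity check of the scalar-last quaternion product implemented by `_compose_quat`: two successive 90-degree rotations about z compose to a 180-degree rotation about z. The sketch below re-states the same formula so it runs standalone:

```python
import numpy as np

def compose_quat(p, q):
    # Hamilton product for stacked scalar-last (x, y, z, w) quaternions,
    # mirroring the _compose_quat helper above.
    out = np.empty((max(p.shape[0], q.shape[0]), 4))
    out[:, 3] = p[:, 3] * q[:, 3] - np.sum(p[:, :3] * q[:, :3], axis=1)
    out[:, :3] = (p[:, None, 3] * q[:, :3] + q[:, None, 3] * p[:, :3]
                  + np.cross(p[:, :3], q[:, :3]))
    return out

s = np.sin(np.pi / 4)
qz90 = np.array([[0.0, 0.0, s, np.cos(np.pi / 4)]])
qz180 = compose_quat(qz90, qz90)
print(np.allclose(qz180, [[0.0, 0.0, 1.0, 0.0]]))  # True
```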
class Rotation(object):
"""Rotation in 3 dimensions.
This class provides an interface to initialize from and represent rotations
with:
- Quaternions
- Rotation Matrices
- Rotation Vectors
- Euler Angles
The following operations on rotations are supported:
- Application on vectors
- Rotation Composition
- Rotation Inversion
- Rotation Indexing
Indexing within a rotation is supported since multiple rotation transforms
can be stored within a single `Rotation` instance.
To create `Rotation` objects use ``from_...`` methods (see examples below).
``Rotation(...)`` is not supposed to be instantiated directly.
Methods
-------
__len__
from_quat
from_matrix
from_rotvec
from_euler
as_quat
as_matrix
as_rotvec
as_euler
apply
__mul__
inv
magnitude
mean
reduce
create_group
__getitem__
identity
random
align_vectors
See Also
--------
Slerp
Notes
-----
.. versionadded:: 1.2.0
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
A `Rotation` instance can be initialized in any of the above formats and
converted to any of the others. The underlying object is independent of the
representation used for initialization.
Consider a counter-clockwise rotation of 90 degrees about the z-axis. This
corresponds to the following quaternion (in scalar-last format):
>>> r = R.from_quat([0, 0, np.sin(np.pi/4), np.cos(np.pi/4)])
The rotation can be expressed in any of the other formats:
>>> r.as_matrix()
array([[ 2.22044605e-16, -1.00000000e+00, 0.00000000e+00],
[ 1.00000000e+00, 2.22044605e-16, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
>>> r.as_rotvec()
array([0. , 0. , 1.57079633])
>>> r.as_euler('zyx', degrees=True)
array([90., 0., 0.])
The same rotation can be initialized using a rotation matrix:
>>> r = R.from_matrix([[0, -1, 0],
... [1, 0, 0],
... [0, 0, 1]])
Representation in other formats:
>>> r.as_quat()
array([0. , 0. , 0.70710678, 0.70710678])
>>> r.as_rotvec()
array([0. , 0. , 1.57079633])
>>> r.as_euler('zyx', degrees=True)
array([90., 0., 0.])
The rotation vector corresponding to this rotation is given by:
>>> r = R.from_rotvec(np.pi/2 * np.array([0, 0, 1]))
Representation in other formats:
>>> r.as_quat()
array([0. , 0. , 0.70710678, 0.70710678])
>>> r.as_matrix()
array([[ 2.22044605e-16, -1.00000000e+00, 0.00000000e+00],
[ 1.00000000e+00, 2.22044605e-16, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
>>> r.as_euler('zyx', degrees=True)
array([90., 0., 0.])
The ``from_euler`` method is quite flexible in the range of input formats
it supports. Here we initialize a single rotation about a single axis:
>>> r = R.from_euler('z', 90, degrees=True)
Again, the object is representation independent and can be converted to any
other format:
>>> r.as_quat()
array([0. , 0. , 0.70710678, 0.70710678])
>>> r.as_matrix()
array([[ 2.22044605e-16, -1.00000000e+00, 0.00000000e+00],
[ 1.00000000e+00, 2.22044605e-16, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
>>> r.as_rotvec()
array([0. , 0. , 1.57079633])
It is also possible to initialize multiple rotations in a single instance
using any of the `from_...` functions. Here we initialize a stack of 3
rotations using the ``from_euler`` method:
>>> r = R.from_euler('zyx', [
... [90, 0, 0],
... [0, 45, 0],
... [45, 60, 30]], degrees=True)
The other representations also now return a stack of 3 rotations. For
example:
>>> r.as_quat()
array([[0. , 0. , 0.70710678, 0.70710678],
[0. , 0.38268343, 0. , 0.92387953],
[0.39190384, 0.36042341, 0.43967974, 0.72331741]])
Applying the above rotations onto a vector:
>>> v = [1, 2, 3]
>>> r.apply(v)
array([[-2. , 1. , 3. ],
[ 2.82842712, 2. , 1.41421356],
[ 2.24452282, 0.78093109, 2.89002836]])
A `Rotation` instance can be indexed and sliced as if it were a single
1D array or list:
>>> r.as_quat()
array([[0. , 0. , 0.70710678, 0.70710678],
[0. , 0.38268343, 0. , 0.92387953],
[0.39190384, 0.36042341, 0.43967974, 0.72331741]])
>>> p = r[0]
>>> p.as_matrix()
array([[ 2.22044605e-16, -1.00000000e+00, 0.00000000e+00],
[ 1.00000000e+00, 2.22044605e-16, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
>>> q = r[1:3]
>>> q.as_quat()
array([[0. , 0.38268343, 0. , 0.92387953],
[0.39190384, 0.36042341, 0.43967974, 0.72331741]])
Multiple rotations can be composed using the ``*`` operator:
>>> r1 = R.from_euler('z', 90, degrees=True)
>>> r2 = R.from_rotvec([np.pi/4, 0, 0])
>>> v = [1, 2, 3]
>>> r2.apply(r1.apply(v))
array([-2. , -1.41421356, 2.82842712])
>>> r3 = r2 * r1 # Note the order
>>> r3.apply(v)
array([-2. , -1.41421356, 2.82842712])
Finally, it is also possible to invert rotations:
>>> r1 = R.from_euler('z', [90, 45], degrees=True)
>>> r2 = r1.inv()
>>> r2.as_euler('zyx', degrees=True)
array([[-90., 0., 0.],
[-45., 0., 0.]])
These examples serve as an overview into the `Rotation` class and highlight
major functionalities. For more thorough examples of the range of input and
output formats supported, consult the individual method's examples.
"""
def __init__(self, quat, normalize=True, copy=True):
self._single = False
quat = np.asarray(quat, dtype=float)
if quat.ndim not in [1, 2] or quat.shape[-1] != 4:
raise ValueError("Expected `quat` to have shape (4,) or (N x 4), "
"got {}.".format(quat.shape))
# If a single quaternion is given, convert it to a 2D 1 x 4 matrix but
# set self._single to True so that we can return appropriate objects
# in the `to_...` methods
if quat.shape == (4,):
quat = quat[None, :]
self._single = True
if normalize:
self._quat = quat.copy()
norms = scipy.linalg.norm(quat, axis=1)
zero_norms = norms == 0
if zero_norms.any():
raise ValueError("Found zero norm quaternions in `quat`.")
# Ensure norm is broadcasted along each column.
self._quat[~zero_norms] /= norms[~zero_norms][:, None]
else:
self._quat = quat.copy() if copy else quat
def __len__(self):
"""Number of rotations contained in this object.
Multiple rotations can be stored in a single instance.
Returns
-------
length : int
Number of rotations stored in object.
"""
return self._quat.shape[0]
@classmethod
def from_quat(cls, quat, normalized=None):
"""Initialize from quaternions.
3D rotations can be represented using unit-norm quaternions [1]_.
Parameters
----------
quat : array_like, shape (N, 4) or (4,)
Each row is a (possibly non-unit norm) quaternion in scalar-last
(x, y, z, w) format. Each quaternion will be normalized to unit
norm.
normalized
Deprecated argument. Has no effect, input `quat` is always
normalized.
.. deprecated:: 1.4.0
Returns
-------
rotation : `Rotation` instance
Object containing the rotations represented by input quaternions.
References
----------
.. [1] https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
Initialize a single rotation:
>>> r = R.from_quat([1, 0, 0, 0])
>>> r.as_quat()
array([1., 0., 0., 0.])
>>> r.as_quat().shape
(4,)
Initialize multiple rotations in a single object:
>>> r = R.from_quat([
... [1, 0, 0, 0],
... [0, 0, 0, 1]
... ])
>>> r.as_quat()
array([[1., 0., 0., 0.],
[0., 0., 0., 1.]])
>>> r.as_quat().shape
(2, 4)
It is also possible to have a stack of a single rotation:
>>> r = R.from_quat([[0, 0, 0, 1]])
>>> r.as_quat()
array([[0., 0., 0., 1.]])
>>> r.as_quat().shape
(1, 4)
Quaternions are normalized before initialization.
>>> r = R.from_quat([0, 0, 1, 1])
>>> r.as_quat()
array([0. , 0. , 0.70710678, 0.70710678])
"""
if normalized is not None:
warnings.warn("`normalized` is deprecated in scipy 1.4.0 and "
"will be removed in scipy 1.6.0. The input `quat` "
"is always normalized.", DeprecationWarning)
return cls(quat, normalize=True)
@classmethod
def from_matrix(cls, matrix):
"""Initialize from rotation matrix.
Rotations in 3 dimensions can be represented with 3 x 3 proper
orthogonal matrices [1]_. If the input is not proper orthogonal,
an approximation is created using the method described in [2]_.
Parameters
----------
matrix : array_like, shape (N, 3, 3) or (3, 3)
A single matrix or a stack of matrices, where ``matrix[i]`` is
the i-th matrix.
Returns
-------
rotation : `Rotation` instance
Object containing the rotations represented by the rotation
matrices.
References
----------
.. [1] https://en.wikipedia.org/wiki/Rotation_matrix#In_three_dimensions
.. [2] F. Landis Markley, "Unit Quaternion from Rotation Matrix",
Journal of guidance, control, and dynamics vol. 31.2, pp.
440-442, 2008.
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
Initialize a single rotation:
>>> r = R.from_matrix([
... [0, -1, 0],
... [1, 0, 0],
... [0, 0, 1]])
>>> r.as_matrix().shape
(3, 3)
Initialize multiple rotations in a single object:
>>> r = R.from_matrix([
... [
... [0, -1, 0],
... [1, 0, 0],
... [0, 0, 1],
... ],
... [
... [1, 0, 0],
... [0, 0, -1],
... [0, 1, 0],
... ]])
>>> r.as_matrix().shape
(2, 3, 3)
If input matrices are not special orthogonal (orthogonal with
determinant equal to +1), then a special orthogonal estimate is stored:
>>> a = np.array([
... [0, -0.5, 0],
... [0.5, 0, 0],
... [0, 0, 0.5]])
>>> np.linalg.det(a)
0.12500000000000003
>>> r = R.from_matrix(a)
>>> matrix = r.as_matrix()
>>> matrix
array([[-0.38461538, -0.92307692, 0. ],
[ 0.92307692, -0.38461538, 0. ],
[ 0. , 0. , 1. ]])
>>> np.linalg.det(matrix)
1.0000000000000002
It is also possible to have a stack containing a single rotation:
>>> r = R.from_matrix([[
... [0, -1, 0],
... [1, 0, 0],
... [0, 0, 1]]])
>>> r.as_matrix()
array([[[ 0., -1., 0.],
[ 1., 0., 0.],
[ 0., 0., 1.]]])
>>> r.as_matrix().shape
(1, 3, 3)
Notes
-----
This function was called from_dcm before.
.. versionadded:: 1.4.0
"""
is_single = False
matrix = np.asarray(matrix, dtype=float)
if matrix.ndim not in [2, 3] or matrix.shape[-2:] != (3, 3):
raise ValueError("Expected `matrix` to have shape (3, 3) or "
"(N, 3, 3), got {}".format(matrix.shape))
# If a single matrix is given, convert it to 3D 1 x 3 x 3 matrix but
# set self._single to True so that we can return appropriate objects in
# the `to_...` methods
if matrix.shape == (3, 3):
matrix = matrix.reshape((1, 3, 3))
is_single = True
num_rotations = matrix.shape[0]
decision_matrix = np.empty((num_rotations, 4))
decision_matrix[:, :3] = matrix.diagonal(axis1=1, axis2=2)
decision_matrix[:, -1] = decision_matrix[:, :3].sum(axis=1)
choices = decision_matrix.argmax(axis=1)
quat = np.empty((num_rotations, 4))
ind = np.nonzero(choices != 3)[0]
i = choices[ind]
j = (i + 1) % 3
k = (j + 1) % 3
quat[ind, i] = 1 - decision_matrix[ind, -1] + 2 * matrix[ind, i, i]
quat[ind, j] = matrix[ind, j, i] + matrix[ind, i, j]
quat[ind, k] = matrix[ind, k, i] + matrix[ind, i, k]
quat[ind, 3] = matrix[ind, k, j] - matrix[ind, j, k]
ind = np.nonzero(choices == 3)[0]
quat[ind, 0] = matrix[ind, 2, 1] - matrix[ind, 1, 2]
quat[ind, 1] = matrix[ind, 0, 2] - matrix[ind, 2, 0]
quat[ind, 2] = matrix[ind, 1, 0] - matrix[ind, 0, 1]
quat[ind, 3] = 1 + decision_matrix[ind, -1]
quat /= np.linalg.norm(quat, axis=1)[:, None]
if is_single:
return cls(quat[0], normalize=False, copy=False)
else:
return cls(quat, normalize=False, copy=False)
@classmethod
@np.deprecate(message="from_dcm is renamed to from_matrix in scipy 1.4.0 "
"and will be removed in scipy 1.6.0")
def from_dcm(cls, dcm):
return cls.from_matrix(dcm)
@classmethod
def from_rotvec(cls, rotvec):
"""Initialize from rotation vectors.
A rotation vector is a 3 dimensional vector which is co-directional to
the axis of rotation and whose norm gives the angle of rotation (in
radians) [1]_.
Parameters
----------
rotvec : array_like, shape (N, 3) or (3,)
A single vector or a stack of vectors, where `rot_vec[i]` gives
the ith rotation vector.
Returns
-------
rotation : `Rotation` instance
Object containing the rotations represented by input rotation
vectors.
References
----------
.. [1] https://en.wikipedia.org/wiki/Axis%E2%80%93angle_representation#Rotation_vector
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
Initialize a single rotation:
>>> r = R.from_rotvec(np.pi/2 * np.array([0, 0, 1]))
>>> r.as_rotvec()
array([0. , 0. , 1.57079633])
>>> r.as_rotvec().shape
(3,)
Initialize multiple rotations in one object:
>>> r = R.from_rotvec([
... [0, 0, np.pi/2],
... [np.pi/2, 0, 0]])
>>> r.as_rotvec()
array([[0. , 0. , 1.57079633],
[1.57079633, 0. , 0. ]])
>>> r.as_rotvec().shape
(2, 3)
        It is also possible to have a stack of a single rotation:
>>> r = R.from_rotvec([[0, 0, np.pi/2]])
>>> r.as_rotvec().shape
(1, 3)
"""
is_single = False
rotvec = np.asarray(rotvec, dtype=float)
if rotvec.ndim not in [1, 2] or rotvec.shape[-1] != 3:
raise ValueError("Expected `rot_vec` to have shape (3,) "
"or (N, 3), got {}".format(rotvec.shape))
# If a single vector is given, convert it to a 2D 1 x 3 matrix but
# set self._single to True so that we can return appropriate objects
# in the `as_...` methods
if rotvec.shape == (3,):
rotvec = rotvec[None, :]
is_single = True
num_rotations = rotvec.shape[0]
norms = np.linalg.norm(rotvec, axis=1)
small_angle = (norms <= 1e-3)
large_angle = ~small_angle
scale = np.empty(num_rotations)
scale[small_angle] = (0.5 - norms[small_angle] ** 2 / 48 +
norms[small_angle] ** 4 / 3840)
scale[large_angle] = (np.sin(norms[large_angle] / 2) /
norms[large_angle])
quat = np.empty((num_rotations, 4))
quat[:, :3] = scale[:, None] * rotvec
quat[:, 3] = np.cos(norms / 2)
if is_single:
return cls(quat[0], normalize=False, copy=False)
else:
return cls(quat, normalize=False, copy=False)
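The small-angle branch above replaces `sin(angle/2)/angle` with its Taylor series to avoid 0/0 at the identity. A minimal single-vector sketch (the helper name `quat_from_rotvec` is illustrative, not part of this class):

```python
import numpy as np

def quat_from_rotvec(rv):
    # scalar-last (x, y, z, w) quaternion from a rotation vector
    rv = np.asarray(rv, dtype=float)
    angle = np.linalg.norm(rv)
    if angle <= 1e-3:
        # Taylor series of sin(angle/2)/angle; exact division would
        # be 0/0 at the identity rotation
        scale = 0.5 - angle**2 / 48 + angle**4 / 3840
    else:
        scale = np.sin(angle / 2) / angle
    return np.append(scale * rv, np.cos(angle / 2))
```

For `[0, 0, pi/2]` this gives `[0, 0, sin(pi/4), cos(pi/4)]`, and for tiny angles the series branch agrees with the exact formula to double precision.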
@classmethod
def from_euler(cls, seq, angles, degrees=False):
"""Initialize from Euler angles.
Rotations in 3-D can be represented by a sequence of 3
rotations around a sequence of axes. In theory, any three axes spanning
the 3-D Euclidean space are enough. In practice, the axes of rotation are
chosen to be the basis vectors.
The three rotations can either be in a global frame of reference
(extrinsic) or in a body centred frame of reference (intrinsic), which
is attached to, and moves with, the object under rotation [1]_.
Parameters
----------
seq : string
Specifies sequence of axes for rotations. Up to 3 characters
belonging to the set {'X', 'Y', 'Z'} for intrinsic rotations, or
{'x', 'y', 'z'} for extrinsic rotations. Extrinsic and intrinsic
rotations cannot be mixed in one function call.
angles : float or array_like, shape (N,) or (N, [1 or 2 or 3])
Euler angles specified in radians (`degrees` is False) or degrees
(`degrees` is True).
For a single character `seq`, `angles` can be:
- a single value
- array_like with shape (N,), where each `angle[i]`
corresponds to a single rotation
- array_like with shape (N, 1), where each `angle[i, 0]`
corresponds to a single rotation
For 2- and 3-character wide `seq`, `angles` can be:
- array_like with shape (W,) where `W` is the width of
`seq`, which corresponds to a single rotation with `W` axes
- array_like with shape (N, W) where each `angle[i]`
corresponds to a sequence of Euler angles describing a single
rotation
degrees : bool, optional
If True, then the given angles are assumed to be in degrees.
Default is False.
Returns
-------
rotation : `Rotation` instance
Object containing the rotation represented by the sequence of
rotations around given axes with given angles.
References
----------
.. [1] https://en.wikipedia.org/wiki/Euler_angles#Definition_by_intrinsic_rotations
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
Initialize a single rotation along a single axis:
>>> r = R.from_euler('x', 90, degrees=True)
>>> r.as_quat().shape
(4,)
Initialize a single rotation with a given axis sequence:
>>> r = R.from_euler('zyx', [90, 45, 30], degrees=True)
>>> r.as_quat().shape
(4,)
Initialize a stack with a single rotation around a single axis:
>>> r = R.from_euler('x', [90], degrees=True)
>>> r.as_quat().shape
(1, 4)
Initialize a stack with a single rotation with an axis sequence:
>>> r = R.from_euler('zyx', [[90, 45, 30]], degrees=True)
>>> r.as_quat().shape
(1, 4)
Initialize multiple elementary rotations in one object:
>>> r = R.from_euler('x', [90, 45, 30], degrees=True)
>>> r.as_quat().shape
(3, 4)
Initialize multiple rotations in one object:
>>> r = R.from_euler('zyx', [[90, 45, 30], [35, 45, 90]], degrees=True)
>>> r.as_quat().shape
(2, 4)
"""
num_axes = len(seq)
if num_axes < 1 or num_axes > 3:
            raise ValueError("Expected axis specification to be a non-empty "
                             "string of up to 3 characters, got {}".format(seq))
intrinsic = (re.match(r'^[XYZ]{1,3}$', seq) is not None)
extrinsic = (re.match(r'^[xyz]{1,3}$', seq) is not None)
if not (intrinsic or extrinsic):
raise ValueError("Expected axes from `seq` to be from ['x', 'y', "
"'z'] or ['X', 'Y', 'Z'], got {}".format(seq))
if any(seq[i] == seq[i+1] for i in range(num_axes - 1)):
raise ValueError("Expected consecutive axes to be different, "
"got {}".format(seq))
seq = seq.lower()
angles = np.asarray(angles, dtype=float)
if degrees:
angles = np.deg2rad(angles)
is_single = False
# Prepare angles to have shape (num_rot, num_axes)
if num_axes == 1:
if angles.ndim == 0:
# (1, 1)
angles = angles.reshape((1, 1))
is_single = True
elif angles.ndim == 1:
# (N, 1)
angles = angles[:, None]
elif angles.ndim == 2 and angles.shape[-1] != 1:
raise ValueError("Expected `angles` parameter to have shape "
"(N, 1), got {}.".format(angles.shape))
elif angles.ndim > 2:
raise ValueError("Expected float, 1D array, or 2D array for "
"parameter `angles` corresponding to `seq`, "
"got shape {}.".format(angles.shape))
else: # 2 or 3 axes
if angles.ndim not in [1, 2] or angles.shape[-1] != num_axes:
                raise ValueError("Expected `angles` to be at most "
                                 "2-dimensional with width equal to number "
                                 "of axes specified, got shape {}.".format(
                                     angles.shape))
if angles.ndim == 1:
# (1, num_axes)
angles = angles[None, :]
is_single = True
# By now angles should have shape (num_rot, num_axes)
# sanity check
if angles.ndim != 2 or angles.shape[-1] != num_axes:
raise ValueError("Expected angles to have shape (num_rotations, "
"num_axes), got {}.".format(angles.shape))
quat = _elementary_quat_compose(seq, angles, intrinsic)
return cls(quat[0] if is_single else quat, normalize=False, copy=False)
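`_elementary_quat_compose` builds the result from one quaternion per character of `seq`. A sketch of such an elementary quaternion (the standalone helper name `elementary_quat` is illustrative):

```python
import numpy as np

def elementary_quat(axis, angle):
    # scalar-last quaternion for a rotation by `angle` about one
    # coordinate axis; these are the building blocks composed per
    # character of `seq`
    q = np.zeros(4)
    q['xyz'.index(axis)] = np.sin(angle / 2)
    q[3] = np.cos(angle / 2)
    return q
```

A zero angle about any axis yields the identity quaternion `[0, 0, 0, 1]`.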
def as_quat(self):
"""Represent as quaternions.
Rotations in 3 dimensions can be represented using unit norm
quaternions [1]_. The mapping from quaternions to rotations is
two-to-one, i.e. quaternions ``q`` and ``-q``, where ``-q`` simply
reverses the sign of each component, represent the same spatial
rotation.
Returns
-------
quat : `numpy.ndarray`, shape (4,) or (N, 4)
Shape depends on shape of inputs used for initialization.
References
----------
.. [1] https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
Represent a single rotation:
>>> r = R.from_matrix([[0, -1, 0],
... [1, 0, 0],
... [0, 0, 1]])
>>> r.as_quat()
array([0. , 0. , 0.70710678, 0.70710678])
>>> r.as_quat().shape
(4,)
Represent a stack with a single rotation:
>>> r = R.from_quat([[0, 0, 0, 1]])
>>> r.as_quat().shape
(1, 4)
Represent multiple rotations in a single object:
>>> r = R.from_rotvec([[np.pi, 0, 0], [0, 0, np.pi/2]])
>>> r.as_quat().shape
(2, 4)
"""
if self._single:
return self._quat[0].copy()
else:
return self._quat.copy()
def as_matrix(self):
"""Represent as rotation matrix.
3D rotations can be represented using rotation matrices, which
are 3 x 3 real orthogonal matrices with determinant equal to +1 [1]_.
Returns
-------
matrix : ndarray, shape (3, 3) or (N, 3, 3)
Shape depends on shape of inputs used for initialization.
References
----------
.. [1] https://en.wikipedia.org/wiki/Rotation_matrix#In_three_dimensions
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
Represent a single rotation:
>>> r = R.from_rotvec([0, 0, np.pi/2])
>>> r.as_matrix()
array([[ 2.22044605e-16, -1.00000000e+00, 0.00000000e+00],
[ 1.00000000e+00, 2.22044605e-16, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
>>> r.as_matrix().shape
(3, 3)
Represent a stack with a single rotation:
>>> r = R.from_quat([[1, 1, 0, 0]])
>>> r.as_matrix()
array([[[ 0., 1., 0.],
[ 1., 0., 0.],
[ 0., 0., -1.]]])
>>> r.as_matrix().shape
(1, 3, 3)
Represent multiple rotations:
>>> r = R.from_rotvec([[np.pi/2, 0, 0], [0, 0, np.pi/2]])
>>> r.as_matrix()
array([[[ 1.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 2.22044605e-16, -1.00000000e+00],
[ 0.00000000e+00, 1.00000000e+00, 2.22044605e-16]],
[[ 2.22044605e-16, -1.00000000e+00, 0.00000000e+00],
[ 1.00000000e+00, 2.22044605e-16, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00]]])
>>> r.as_matrix().shape
(2, 3, 3)
Notes
-----
This function was called as_dcm before.
.. versionadded:: 1.4.0
"""
x = self._quat[:, 0]
y = self._quat[:, 1]
z = self._quat[:, 2]
w = self._quat[:, 3]
x2 = x * x
y2 = y * y
z2 = z * z
w2 = w * w
xy = x * y
zw = z * w
xz = x * z
yw = y * w
yz = y * z
xw = x * w
num_rotations = len(self)
matrix = np.empty((num_rotations, 3, 3))
matrix[:, 0, 0] = x2 - y2 - z2 + w2
matrix[:, 1, 0] = 2 * (xy + zw)
matrix[:, 2, 0] = 2 * (xz - yw)
matrix[:, 0, 1] = 2 * (xy - zw)
matrix[:, 1, 1] = - x2 + y2 - z2 + w2
matrix[:, 2, 1] = 2 * (yz + xw)
matrix[:, 0, 2] = 2 * (xz + yw)
matrix[:, 1, 2] = 2 * (yz - xw)
matrix[:, 2, 2] = - x2 - y2 + z2 + w2
if self._single:
return matrix[0]
else:
return matrix
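The componentwise formulas above can be checked in isolation: for any nonzero quaternion they produce a proper rotation matrix. A single-quaternion sketch (the helper name `matrix_from_quat` is illustrative, not part of this class):

```python
import numpy as np

def matrix_from_quat(q):
    # same entries as the vectorized code above, for one unit quaternion
    x, y, z, w = q / np.linalg.norm(q)
    return np.array([
        [x*x - y*y - z*z + w*w, 2*(x*y - z*w),          2*(x*z + y*w)],
        [2*(x*y + z*w),         -x*x + y*y - z*z + w*w, 2*(y*z - x*w)],
        [2*(x*z - y*w),         2*(y*z + x*w),          -x*x - y*y + z*z + w*w],
    ])

m = matrix_from_quat(np.array([1.0, 2.0, 3.0, 4.0]))
# m is orthogonal with determinant +1, i.e. a proper rotation
```

The quaternion `[0, 0, sqrt(2)/2, sqrt(2)/2]` maps to the 90-degree z-rotation matrix used in the doctest above.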
@np.deprecate(message="as_dcm is renamed to as_matrix in scipy 1.4.0 "
"and will be removed in scipy 1.6.0")
def as_dcm(self):
return self.as_matrix()
def as_rotvec(self):
"""Represent as rotation vectors.
A rotation vector is a 3 dimensional vector which is co-directional to
the axis of rotation and whose norm gives the angle of rotation (in
radians) [1]_.
Returns
-------
rotvec : ndarray, shape (3,) or (N, 3)
Shape depends on shape of inputs used for initialization.
References
----------
.. [1] https://en.wikipedia.org/wiki/Axis%E2%80%93angle_representation#Rotation_vector
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
Represent a single rotation:
>>> r = R.from_euler('z', 90, degrees=True)
>>> r.as_rotvec()
array([0. , 0. , 1.57079633])
>>> r.as_rotvec().shape
(3,)
Represent a stack with a single rotation:
>>> r = R.from_quat([[0, 0, 1, 1]])
>>> r.as_rotvec()
array([[0. , 0. , 1.57079633]])
>>> r.as_rotvec().shape
(1, 3)
Represent multiple rotations in a single object:
>>> r = R.from_quat([[0, 0, 1, 1], [1, 1, 0, 1]])
>>> r.as_rotvec()
array([[0. , 0. , 1.57079633],
[1.35102172, 1.35102172, 0. ]])
>>> r.as_rotvec().shape
(2, 3)
"""
quat = self._quat.copy()
# w > 0 to ensure 0 <= angle <= pi
quat[quat[:, 3] < 0] *= -1
angle = 2 * np.arctan2(np.linalg.norm(quat[:, :3], axis=1), quat[:, 3])
small_angle = (angle <= 1e-3)
large_angle = ~small_angle
num_rotations = len(self)
scale = np.empty(num_rotations)
scale[small_angle] = (2 + angle[small_angle] ** 2 / 12 +
7 * angle[small_angle] ** 4 / 2880)
scale[large_angle] = (angle[large_angle] /
np.sin(angle[large_angle] / 2))
rotvec = scale[:, None] * quat[:, :3]
if self._single:
return rotvec[0]
else:
return rotvec
def as_euler(self, seq, degrees=False):
"""Represent as Euler angles.
Any orientation can be expressed as a composition of 3 elementary
rotations. Once the axis sequence has been chosen, Euler angles define
the angle of rotation around each respective axis [1]_.
The algorithm from [2]_ has been used to calculate Euler angles for the
rotation about a given sequence of axes.
Euler angles suffer from the problem of gimbal lock [3]_, where the
representation loses a degree of freedom and it is not possible to
determine the first and third angles uniquely. In this case,
a warning is raised, and the third angle is set to zero. Note however
that the returned angles still represent the correct rotation.
Parameters
----------
seq : string, length 3
3 characters belonging to the set {'X', 'Y', 'Z'} for intrinsic
rotations, or {'x', 'y', 'z'} for extrinsic rotations [1]_.
Adjacent axes cannot be the same.
Extrinsic and intrinsic rotations cannot be mixed in one function
call.
degrees : boolean, optional
Returned angles are in degrees if this flag is True, else they are
in radians. Default is False.
Returns
-------
angles : ndarray, shape (3,) or (N, 3)
Shape depends on shape of inputs used to initialize object.
The returned angles are in the range:
- First angle belongs to [-180, 180] degrees (both inclusive)
- Third angle belongs to [-180, 180] degrees (both inclusive)
- Second angle belongs to:
- [-90, 90] degrees if all axes are different (like xyz)
- [0, 180] degrees if first and third axes are the same
(like zxz)
References
----------
.. [1] https://en.wikipedia.org/wiki/Euler_angles#Definition_by_intrinsic_rotations
        .. [2] Malcolm D. Shuster, F. Landis Markley, "General formula for
               extracting the Euler angles", Journal of Guidance, Control,
               and Dynamics, Vol. 29.1, pp. 215-221, 2006
.. [3] https://en.wikipedia.org/wiki/Gimbal_lock#In_applied_mathematics
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
Represent a single rotation:
>>> r = R.from_rotvec([0, 0, np.pi/2])
>>> r.as_euler('zxy', degrees=True)
array([90., 0., 0.])
>>> r.as_euler('zxy', degrees=True).shape
(3,)
        Represent a stack with a single rotation:
>>> r = R.from_rotvec([[0, 0, np.pi/2]])
>>> r.as_euler('zxy', degrees=True)
array([[90., 0., 0.]])
>>> r.as_euler('zxy', degrees=True).shape
(1, 3)
Represent multiple rotations in a single object:
>>> r = R.from_rotvec([
... [0, 0, np.pi/2],
... [0, -np.pi/3, 0],
... [np.pi/4, 0, 0]])
>>> r.as_euler('zxy', degrees=True)
array([[ 90., 0., 0.],
[ 0., 0., -60.],
[ 0., 45., 0.]])
>>> r.as_euler('zxy', degrees=True).shape
(3, 3)
"""
if len(seq) != 3:
raise ValueError("Expected 3 axes, got {}.".format(seq))
intrinsic = (re.match(r'^[XYZ]{1,3}$', seq) is not None)
extrinsic = (re.match(r'^[xyz]{1,3}$', seq) is not None)
if not (intrinsic or extrinsic):
raise ValueError("Expected axes from `seq` to be from "
"['x', 'y', 'z'] or ['X', 'Y', 'Z'], "
"got {}".format(seq))
if any(seq[i] == seq[i+1] for i in range(2)):
raise ValueError("Expected consecutive axes to be different, "
"got {}".format(seq))
seq = seq.lower()
angles = _compute_euler_from_matrix(self.as_matrix(), seq, extrinsic)
if degrees:
angles = np.rad2deg(angles)
return angles[0] if self._single else angles
def apply(self, vectors, inverse=False):
"""Apply this rotation to a set of vectors.
If the original frame rotates to the final frame by this rotation, then
its application to a vector can be seen in two ways:
- As a projection of vector components expressed in the final frame
to the original frame.
- As the physical rotation of a vector being glued to the original
frame as it rotates. In this case the vector components are
expressed in the original frame before and after the rotation.
        In terms of rotation matrices, this application is the same as
        ``self.as_matrix().dot(vectors)``.
Parameters
----------
vectors : array_like, shape (3,) or (N, 3)
Each `vectors[i]` represents a vector in 3D space. A single vector
can either be specified with shape `(3, )` or `(1, 3)`. The number
of rotations and number of vectors given must follow standard numpy
broadcasting rules: either one of them equals unity or they both
equal each other.
inverse : boolean, optional
If True then the inverse of the rotation(s) is applied to the input
vectors. Default is False.
Returns
-------
rotated_vectors : ndarray, shape (3,) or (N, 3)
Result of applying rotation on input vectors.
Shape depends on the following cases:
- If object contains a single rotation (as opposed to a stack
with a single rotation) and a single vector is specified with
shape ``(3,)``, then `rotated_vectors` has shape ``(3,)``.
- In all other cases, `rotated_vectors` has shape ``(N, 3)``,
where ``N`` is either the number of rotations or vectors.
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
Single rotation applied on a single vector:
>>> vector = np.array([1, 0, 0])
>>> r = R.from_rotvec([0, 0, np.pi/2])
>>> r.as_matrix()
array([[ 2.22044605e-16, -1.00000000e+00, 0.00000000e+00],
[ 1.00000000e+00, 2.22044605e-16, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
>>> r.apply(vector)
array([2.22044605e-16, 1.00000000e+00, 0.00000000e+00])
>>> r.apply(vector).shape
(3,)
Single rotation applied on multiple vectors:
>>> vectors = np.array([
... [1, 0, 0],
... [1, 2, 3]])
>>> r = R.from_rotvec([0, 0, np.pi/4])
>>> r.as_matrix()
array([[ 0.70710678, -0.70710678, 0. ],
[ 0.70710678, 0.70710678, 0. ],
[ 0. , 0. , 1. ]])
>>> r.apply(vectors)
array([[ 0.70710678, 0.70710678, 0. ],
[-0.70710678, 2.12132034, 3. ]])
>>> r.apply(vectors).shape
(2, 3)
Multiple rotations on a single vector:
>>> r = R.from_rotvec([[0, 0, np.pi/4], [np.pi/2, 0, 0]])
>>> vector = np.array([1,2,3])
>>> r.as_matrix()
array([[[ 7.07106781e-01, -7.07106781e-01, 0.00000000e+00],
[ 7.07106781e-01, 7.07106781e-01, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00]],
[[ 1.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 2.22044605e-16, -1.00000000e+00],
[ 0.00000000e+00, 1.00000000e+00, 2.22044605e-16]]])
>>> r.apply(vector)
array([[-0.70710678, 2.12132034, 3. ],
[ 1. , -3. , 2. ]])
>>> r.apply(vector).shape
(2, 3)
Multiple rotations on multiple vectors. Each rotation is applied on the
corresponding vector:
>>> r = R.from_euler('zxy', [
... [0, 0, 90],
... [45, 30, 60]], degrees=True)
>>> vectors = [
... [1, 2, 3],
... [1, 0, -1]]
>>> r.apply(vectors)
array([[ 3. , 2. , -1. ],
[-0.09026039, 1.11237244, -0.86860844]])
>>> r.apply(vectors).shape
(2, 3)
It is also possible to apply the inverse rotation:
>>> r = R.from_euler('zxy', [
... [0, 0, 90],
... [45, 30, 60]], degrees=True)
>>> vectors = [
... [1, 2, 3],
... [1, 0, -1]]
>>> r.apply(vectors, inverse=True)
array([[-3. , 2. , 1. ],
[ 1.09533535, -0.8365163 , 0.3169873 ]])
"""
vectors = np.asarray(vectors)
if vectors.ndim > 2 or vectors.shape[-1] != 3:
raise ValueError("Expected input of shape (3,) or (P, 3), "
"got {}.".format(vectors.shape))
single_vector = False
if vectors.shape == (3,):
single_vector = True
vectors = vectors[None, :]
matrix = self.as_matrix()
if self._single:
matrix = matrix[None, :, :]
n_vectors = vectors.shape[0]
n_rotations = len(self)
if n_vectors != 1 and n_rotations != 1 and n_vectors != n_rotations:
raise ValueError("Expected equal numbers of rotations and vectors "
", or a single rotation, or a single vector, got "
"{} rotations and {} vectors.".format(
n_rotations, n_vectors))
if inverse:
result = np.einsum('ikj,ik->ij', matrix, vectors)
else:
result = np.einsum('ijk,ik->ij', matrix, vectors)
if self._single and single_vector:
return result[0]
else:
return result
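The two `einsum` subscripts above differ only in the matrix index order: `'ijk,ik->ij'` applies each matrix to its vector, while `'ikj,ik->ij'` applies the transpose, which for a rotation matrix is its inverse. A quick sketch of this equivalence with arbitrary matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
matrices = rng.normal(size=(5, 3, 3))
vectors = rng.normal(size=(5, 3))
# forward: rotated[i] = matrices[i] @ vectors[i]
forward = np.einsum('ijk,ik->ij', matrices, vectors)
# swapping the last two matrix subscripts applies the transpose instead
backward = np.einsum('ikj,ik->ij', matrices, vectors)
```

Both results match the per-row matrix products computed explicitly.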
def __mul__(self, other):
"""Compose this rotation with the other.
If `p` and `q` are two rotations, then the composition of 'q followed
        by p' is equivalent to ``p * q``. In terms of rotation matrices,
the composition can be expressed as
``p.as_matrix().dot(q.as_matrix())``.
Parameters
----------
other : `Rotation` instance
Object containing the rotations to be composed with this one. Note
that rotation compositions are not commutative, so ``p * q`` is
different from ``q * p``.
Returns
-------
composition : `Rotation` instance
This function supports composition of multiple rotations at a time.
The following cases are possible:
- Either ``p`` or ``q`` contains a single rotation. In this case
`composition` contains the result of composing each rotation in
the other object with the single rotation.
- Both ``p`` and ``q`` contain ``N`` rotations. In this case each
rotation ``p[i]`` is composed with the corresponding rotation
``q[i]`` and `output` contains ``N`` rotations.
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
Composition of two single rotations:
>>> p = R.from_quat([0, 0, 1, 1])
>>> q = R.from_quat([1, 0, 0, 1])
>>> p.as_matrix()
array([[ 0., -1., 0.],
[ 1., 0., 0.],
[ 0., 0., 1.]])
>>> q.as_matrix()
array([[ 1., 0., 0.],
[ 0., 0., -1.],
[ 0., 1., 0.]])
>>> r = p * q
>>> r.as_matrix()
array([[0., 0., 1.],
[1., 0., 0.],
[0., 1., 0.]])
Composition of two objects containing equal number of rotations:
>>> p = R.from_quat([[0, 0, 1, 1], [1, 0, 0, 1]])
>>> q = R.from_rotvec([[np.pi/4, 0, 0], [-np.pi/4, 0, np.pi/4]])
>>> p.as_quat()
array([[0. , 0. , 0.70710678, 0.70710678],
[0.70710678, 0. , 0. , 0.70710678]])
>>> q.as_quat()
array([[ 0.38268343, 0. , 0. , 0.92387953],
[-0.37282173, 0. , 0.37282173, 0.84971049]])
>>> r = p * q
>>> r.as_quat()
array([[ 0.27059805, 0.27059805, 0.65328148, 0.65328148],
[ 0.33721128, -0.26362477, 0.26362477, 0.86446082]])
"""
if not(len(self) == 1 or len(other) == 1 or len(self) == len(other)):
raise ValueError("Expected equal number of rotations in both "
"or a single rotation in either object, "
"got {} rotations in first and {} rotations in "
"second object.".format(
len(self), len(other)))
result = _compose_quat(self._quat, other._quat)
if self._single and other._single:
result = result[0]
return self.__class__(result, normalize=True, copy=False)
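The composition delegated to `_compose_quat` above is the Hamilton product in scalar-last convention. A pure-NumPy sketch for two single quaternions (the standalone name `compose_quat` is illustrative):

```python
import numpy as np

def compose_quat(p, q):
    # Hamilton product in scalar-last (x, y, z, w) convention; the result
    # represents "q followed by p", matching matrix multiplication order
    px, py, pz, pw = p
    qx, qy, qz, qw = q
    return np.array([
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
        pw*qw - px*qx - py*qy - pz*qz,
    ])

# two 45-degree z rotations compose into a 90-degree z rotation
q45 = np.array([0.0, 0.0, np.sin(np.pi / 8), np.cos(np.pi / 8)])
q90 = compose_quat(q45, q45)
```

Composing with the identity quaternion `[0, 0, 0, 1]` leaves any quaternion unchanged.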
def inv(self):
"""Invert this rotation.
Composition of a rotation with its inverse results in an identity
transformation.
Returns
-------
inverse : `Rotation` instance
Object containing inverse of the rotations in the current instance.
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
Inverting a single rotation:
>>> p = R.from_euler('z', 45, degrees=True)
>>> q = p.inv()
>>> q.as_euler('zyx', degrees=True)
array([-45., 0., 0.])
Inverting multiple rotations:
>>> p = R.from_rotvec([[0, 0, np.pi/3], [-np.pi/4, 0, 0]])
>>> q = p.inv()
>>> q.as_rotvec()
array([[-0. , -0. , -1.04719755],
[ 0.78539816, -0. , -0. ]])
"""
quat = self._quat.copy()
quat[:, -1] *= -1
if self._single:
quat = quat[0]
return self.__class__(quat, normalize=False, copy=False)
def magnitude(self):
"""Get the magnitude(s) of the rotation(s).
Returns
-------
magnitude : ndarray or float
Angle(s) in radians, float if object contains a single rotation
and ndarray if object contains multiple rotations.
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
>>> r = R.from_quat(np.eye(4))
>>> r.magnitude()
array([3.14159265, 3.14159265, 3.14159265, 0. ])
Magnitude of a single rotation:
>>> r[0].magnitude()
3.141592653589793
"""
quat = self._quat.reshape((len(self), 4))
s = np.linalg.norm(quat[:, :3], axis=1)
c = np.abs(quat[:, 3])
angles = 2 * np.arctan2(s, c)
if self._single:
return angles[0]
else:
return angles
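The magnitude formula above reads the rotation angle directly off a scalar-last quaternion; taking the absolute value of the scalar part keeps the result in `[0, pi]`. A one-quaternion sketch:

```python
import numpy as np

# 45-degree rotation about z in scalar-last (x, y, z, w) form
quat = np.array([0.0, 0.0, np.sin(np.pi / 8), np.cos(np.pi / 8)])
# angle = 2 * atan2(||vector part||, |scalar part|)
angle = 2 * np.arctan2(np.linalg.norm(quat[:3]), abs(quat[3]))
```

Here `angle` recovers pi/4 radians, the 45-degree rotation encoded above.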
def mean(self, weights=None):
"""Get the mean of the rotations.
Parameters
----------
weights : array_like shape (N,), optional
Weights describing the relative importance of the rotations. If
None (default), then all values in `weights` are assumed to be
equal.
Returns
-------
mean : `Rotation` instance
Object containing the mean of the rotations in the current
instance.
Notes
-----
The mean used is the chordal L2 mean (also called the projected or
induced arithmetic mean). If ``p`` is a set of rotations with mean
``m``, then ``m`` is the rotation which minimizes
``(weights[:, None, None] * (p.as_matrix() - m.as_matrix())**2).sum()``.
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
>>> r = R.from_euler('zyx', [[0, 0, 0],
... [1, 0, 0],
... [0, 1, 0],
... [0, 0, 1]], degrees=True)
>>> r.mean().as_euler('zyx', degrees=True)
array([0.24945696, 0.25054542, 0.24945696])
"""
if weights is None:
weights = np.ones(len(self))
else:
weights = np.asarray(weights)
if weights.ndim != 1:
raise ValueError("Expected `weights` to be 1 dimensional, got "
"shape {}.".format(weights.shape))
if weights.shape[0] != len(self):
raise ValueError("Expected `weights` to have number of values "
"equal to number of rotations, got "
"{} values and {} rotations.".format(
weights.shape[0], len(self)))
if np.any(weights < 0):
raise ValueError("`weights` must be non-negative.")
K = np.dot(weights * self._quat.T, self._quat)
l, v = np.linalg.eigh(K)
return self.__class__(v[:, -1], normalize=False)
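The chordal L2 mean computed above is the dominant eigenvector of the weighted outer-product accumulation, which is insensitive to the sign ambiguity of unit quaternions. A standalone sketch (the helper name `quat_mean` is illustrative, not part of this class):

```python
import numpy as np

def quat_mean(quats, weights=None):
    # chordal L2 mean: the eigenvector of K = sum_i w_i q_i q_i^T with
    # the largest eigenvalue; since q_i enters quadratically, q and -q
    # contribute identically
    quats = np.asarray(quats, dtype=float)
    if weights is None:
        weights = np.ones(len(quats))
    K = np.dot(weights * quats.T, quats)
    _, v = np.linalg.eigh(K)     # eigenvalues in ascending order
    return v[:, -1]
```

Averaging a quaternion with its negation returns the same rotation, up to the overall sign of the result.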
def reduce(self, left=None, right=None, return_indices=False):
"""Reduce this rotation with the provided rotation groups.
Reduction of a rotation ``p`` is a transformation of the form
``q = l * p * r``, where ``l`` and ``r`` are chosen from `left` and
`right` respectively, such that rotation ``q`` has the smallest
magnitude.
If `left` and `right` are rotation groups representing symmetries of
two objects rotated by ``p``, then ``q`` is the rotation of the
smallest magnitude to align these objects considering their symmetries.
Parameters
----------
left : `Rotation` instance, optional
Object containing the left rotation(s). Default value (None)
corresponds to the identity rotation.
right : `Rotation` instance, optional
Object containing the right rotation(s). Default value (None)
corresponds to the identity rotation.
return_indices : bool, optional
Whether to return the indices of the rotations from `left` and
`right` used for reduction.
Returns
-------
reduced : `Rotation` instance
Object containing reduced rotations.
        left_best, right_best : integer ndarray
Indices of elements from `left` and `right` used for reduction.
"""
if left is None and right is None:
reduced = self.__class__(self._quat, normalize=False, copy=True)
if return_indices:
return reduced, None, None
else:
return reduced
elif right is None:
right = Rotation.identity()
elif left is None:
left = Rotation.identity()
# Levi-Civita tensor for triple product computations
e = np.zeros((3, 3, 3))
e[0, 1, 2] = e[1, 2, 0] = e[2, 0, 1] = 1
e[0, 2, 1] = e[2, 1, 0] = e[1, 0, 2] = -1
# We want to calculate the real components of q = l * p * r. It can
# be shown that:
# qs = ls * ps * rs - ls * dot(pv, rv) - ps * dot(lv, rv)
# - rs * dot(lv, pv) - dot(cross(lv, pv), rv)
# where ls and lv denote the scalar and vector components of l.
def split_rotation(R):
q = np.atleast_2d(R.as_quat())
return q[:, -1], q[:, :-1]
p = self
ps, pv = split_rotation(p)
ls, lv = split_rotation(left)
rs, rv = split_rotation(right)
qs = np.abs(np.einsum('i,j,k', ls, ps, rs) -
np.einsum('i,jx,kx', ls, pv, rv) -
np.einsum('ix,j,kx', lv, ps, rv) -
np.einsum('ix,jx,k', lv, pv, rs) -
np.einsum('xyz,ix,jy,kz', e, lv, pv, rv))
qs = np.reshape(np.rollaxis(qs, 1), (qs.shape[1], -1))
# Find best indices from scalar components
max_ind = np.argmax(np.reshape(qs, (len(qs), -1)), axis=1)
left_best = max_ind // len(right)
right_best = max_ind % len(right)
# Reduce the rotation using the best indices
reduced = left[left_best] * p * right[right_best]
if self._single:
reduced = reduced[0]
left_best = left_best[0]
right_best = right_best[0]
if return_indices:
if left is None:
left_best = None
if right is None:
right_best = None
return reduced, left_best, right_best
else:
return reduced
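The Levi-Civita tensor built in `reduce` turns the `cross(lv, pv) . rv` term into a single `einsum` contraction. A minimal sketch of that identity, contracting the tensor with three ordinary vectors:

```python
import numpy as np

# Levi-Civita tensor: fully contracting it with three vectors gives the
# scalar triple product dot(cross(a, b), c)
e = np.zeros((3, 3, 3))
e[0, 1, 2] = e[1, 2, 0] = e[2, 0, 1] = 1
e[0, 2, 1] = e[2, 1, 0] = e[1, 0, 2] = -1

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 10.0])
triple = np.einsum('xyz,x,y,z', e, a, b, c)
```

The contraction matches `np.dot(np.cross(a, b), c)` exactly.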
@classmethod
def create_group(cls, group, axis='Z'):
"""Create a 3D rotation group.
Parameters
----------
group : string
The name of the group. Must be one of 'I', 'O', 'T', 'Dn', 'Cn',
where `n` is a positive integer. The groups are:
* I: Icosahedral group
* O: Octahedral group
* T: Tetrahedral group
* D: Dicyclic group
* C: Cyclic group
        axis : string
The cyclic rotation axis. Must be one of ['X', 'Y', 'Z'] (or
lowercase). Default is 'Z'. Ignored for groups 'I', 'O', and 'T'.
Returns
-------
rotation : `Rotation` instance
Object containing the elements of the rotation group.
Notes
-----
This method generates rotation groups only. The full 3-dimensional
point groups [PointGroups]_ also contain reflections.
References
----------
.. [PointGroups] `Point groups
<https://en.wikipedia.org/wiki/Point_groups_in_three_dimensions>`_
on Wikipedia.
"""
return create_group(cls, group, axis=axis)
def __getitem__(self, indexer):
"""Extract rotation(s) at given index(es) from object.
Create a new `Rotation` instance containing a subset of rotations
stored in this object.
Parameters
----------
indexer : index, slice, or index array
Specifies which rotation(s) to extract. A single indexer must be
specified, i.e. as if indexing a 1 dimensional array or list.
Returns
-------
rotation : `Rotation` instance
Contains
- a single rotation, if `indexer` is a single index
            - a stack of rotation(s), if `indexer` is a slice, or an index
              array.
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
>>> r = R.from_quat([
... [1, 1, 0, 0],
... [0, 1, 0, 1],
... [1, 1, -1, 0]])
>>> r.as_quat()
array([[ 0.70710678, 0.70710678, 0. , 0. ],
[ 0. , 0.70710678, 0. , 0.70710678],
[ 0.57735027, 0.57735027, -0.57735027, 0. ]])
Indexing using a single index:
>>> p = r[0]
>>> p.as_quat()
array([0.70710678, 0.70710678, 0. , 0. ])
Array slicing:
>>> q = r[1:3]
>>> q.as_quat()
array([[ 0. , 0.70710678, 0. , 0.70710678],
[ 0.57735027, 0.57735027, -0.57735027, 0. ]])
"""
return self.__class__(self._quat[indexer], normalize=False)
@classmethod
def identity(cls, num=None):
"""Get identity rotation(s).
Composition with the identity rotation has no effect.
Parameters
----------
num : int or None, optional
Number of identity rotations to generate. If None (default), then a
single rotation is generated.
Returns
-------
        identity : `Rotation` instance
The identity rotation.
"""
if num is None:
q = [0, 0, 0, 1]
else:
q = np.zeros((num, 4))
q[:, 3] = 1
return cls(q, normalize=False)
@classmethod
def random(cls, num=None, random_state=None):
"""Generate uniformly distributed rotations.
Parameters
----------
num : int or None, optional
Number of random rotations to generate. If None (default), then a
single rotation is generated.
random_state : int, RandomState instance or None, optional
Accepts an integer as a seed for the random generator or a
RandomState object. If None (default), uses global `numpy.random`
random state.
Returns
-------
random_rotation : `Rotation` instance
Contains a single rotation if `num` is None. Otherwise contains a
stack of `num` rotations.
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
Sample a single rotation:
>>> R.random(random_state=1234).as_euler('zxy', degrees=True)
array([-110.5976185 , 55.32758512, 76.3289269 ])
Sample a stack of rotations:
>>> R.random(5, random_state=1234).as_euler('zxy', degrees=True)
array([[-110.5976185 , 55.32758512, 76.3289269 ],
[ -91.59132005, -14.3629884 , -93.91933182],
[ 25.23835501, 45.02035145, -121.67867086],
[ -51.51414184, -15.29022692, -172.46870023],
[ -81.63376847, -27.39521579, 2.60408416]])
"""
random_state = check_random_state(random_state)
if num is None:
sample = random_state.normal(size=4)
else:
sample = random_state.normal(size=(num, 4))
return cls(sample)
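Normalizing the 4-D Gaussian samples above (which the constructor does) yields points uniformly distributed on the unit 3-sphere, and hence uniformly distributed rotations, because an isotropic Gaussian has no preferred direction. A direct sketch of that construction:

```python
import numpy as np

rng = np.random.default_rng(1234)
sample = rng.normal(size=(1000, 4))
# isotropy of the Gaussian makes the normalized samples uniform on the
# unit 3-sphere of scalar-last quaternions
quats = sample / np.linalg.norm(sample, axis=1, keepdims=True)
```

Each row is a unit quaternion, and the sample mean of the components stays near zero, as expected for a uniform distribution on the sphere.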
@classmethod
@np.deprecate(message="match_vectors is deprecated in favor of "
"align_vectors in scipy 1.4.0 and will be removed "
"in scipy 1.6.0")
def match_vectors(cls, a, b, weights=None, normalized=False):
"""Deprecated in favor of `align_vectors`."""
a = np.asarray(a)
if a.ndim != 2 or a.shape[-1] != 3:
raise ValueError("Expected input `a` to have shape (N, 3), "
"got {}".format(a.shape))
b = np.asarray(b)
if b.ndim != 2 or b.shape[-1] != 3:
raise ValueError("Expected input `b` to have shape (N, 3), "
"got {}.".format(b.shape))
if a.shape != b.shape:
raise ValueError("Expected inputs `a` and `b` to have same shapes"
", got {} and {} respectively.".format(
a.shape, b.shape))
if b.shape[0] == 1:
raise ValueError("Rotation cannot be estimated using a single "
"vector.")
if weights is None:
weights = np.ones(b.shape[0])
else:
weights = np.asarray(weights)
if weights.ndim != 1:
raise ValueError("Expected `weights` to be 1 dimensional, got "
"shape {}.".format(weights.shape))
if weights.shape[0] != b.shape[0]:
raise ValueError("Expected `weights` to have number of values "
"equal to number of input vectors, got "
"{} values and {} vectors.".format(
weights.shape[0], b.shape[0]))
weights = weights / np.sum(weights)
if not normalized:
a = a / scipy.linalg.norm(a, axis=1)[:, None]
b = b / scipy.linalg.norm(b, axis=1)[:, None]
B = np.einsum('ji,jk->ik', weights[:, None] * a, b)
u, s, vh = np.linalg.svd(B)
# Correct improper rotation if necessary (as in Kabsch algorithm)
if np.linalg.det(u @ vh) < 0:
s[-1] = -s[-1]
u[:, -1] = -u[:, -1]
C = np.dot(u, vh)
zeta = (s[0]+s[1]) * (s[1]+s[2]) * (s[2]+s[0])
if np.abs(zeta) <= 1e-16:
raise ValueError("Three component error vector has infinite "
"covariance. It is impossible to determine the "
"rotation uniquely.")
kappa = s[0]*s[1] + s[1]*s[2] + s[2]*s[0]
sensitivity = ((kappa * np.eye(3) + np.dot(B, B.T)) /
(zeta * a.shape[0]))
return cls.from_matrix(C), sensitivity
@classmethod
def align_vectors(cls, a, b, weights=None, return_sensitivity=False):
"""Estimate a rotation to optimally align two sets of vectors.
Find a rotation between frames A and B which best aligns a set of
vectors `a` and `b` observed in these frames. The following loss
function is minimized to solve for the rotation matrix
:math:`C`:
.. math::
L(C) = \\frac{1}{2} \\sum_{i = 1}^{n} w_i \\lVert \\mathbf{a}_i -
C \\mathbf{b}_i \\rVert^2 ,
where :math:`w_i`'s are the `weights` corresponding to each vector.
The rotation is estimated with Kabsch algorithm [1]_.
Parameters
----------
a : array_like, shape (N, 3)
Vector components observed in initial frame A. Each row of `a`
denotes a vector.
b : array_like, shape (N, 3)
Vector components observed in another frame B. Each row of `b`
denotes a vector.
weights : array_like shape (N,), optional
Weights describing the relative importance of the vector
observations. If None (default), then all values in `weights` are
assumed to be 1.
return_sensitivity : bool, optional
Whether to return the sensitivity matrix. See Notes for details.
Default is False.
Returns
-------
estimated_rotation : `Rotation` instance
Best estimate of the rotation that transforms `b` to `a`.
rmsd : float
Root mean square distance (weighted) between the given set of
vectors after alignment. It is equal to ``sqrt(2 * minimum_loss)``,
where ``minimum_loss`` is the loss function evaluated for the
found optimal rotation.
sensitivity_matrix : ndarray, shape (3, 3)
Sensitivity matrix of the estimated rotation estimate as explained
in Notes. Returned only when `return_sensitivity` is True.
Notes
-----
This method can also compute the sensitivity of the estimated rotation
to small perturbations of the vector measurements. Specifically we
consider the rotation estimate error as a small rotation vector of
frame A. The sensitivity matrix is proportional to the covariance of
this rotation vector assuming that the vectors in `a` were measured with
errors significantly less than their lengths. To get the true
covariance matrix, the returned sensitivity matrix must be multiplied
by harmonic mean [3]_ of variance in each observation. Note that
`weights` are supposed to be inversely proportional to the observation
variances to get consistent results. For example, if all vectors are
measured with the same accuracy of 0.01 (`weights` must be all equal),
then you should multiply the sensitivity matrix by 0.01**2 to get the
covariance.
Refer to [2]_ for more rigorous discussion of the covariance
estimation.
References
----------
.. [1] https://en.wikipedia.org/wiki/Kabsch_algorithm
.. [2] F. Landis Markley,
"Attitude determination using vector observations: a fast
optimal matrix algorithm", Journal of Astronautical Sciences,
Vol. 41, No.2, 1993, pp. 261-280.
.. [3] https://en.wikipedia.org/wiki/Harmonic_mean
"""
a = np.asarray(a)
if a.ndim != 2 or a.shape[-1] != 3:
raise ValueError("Expected input `a` to have shape (N, 3), "
"got {}".format(a.shape))
b = np.asarray(b)
if b.ndim != 2 or b.shape[-1] != 3:
raise ValueError("Expected input `b` to have shape (N, 3), "
"got {}.".format(b.shape))
if a.shape != b.shape:
raise ValueError("Expected inputs `a` and `b` to have same shapes"
", got {} and {} respectively.".format(
a.shape, b.shape))
if weights is None:
weights = np.ones(len(b))
else:
weights = np.asarray(weights)
if weights.ndim != 1:
raise ValueError("Expected `weights` to be 1 dimensional, got "
"shape {}.".format(weights.shape))
if weights.shape[0] != b.shape[0]:
raise ValueError("Expected `weights` to have number of values "
"equal to number of input vectors, got "
"{} values and {} vectors.".format(
weights.shape[0], b.shape[0]))
B = np.einsum('ji,jk->ik', weights[:, None] * a, b)
u, s, vh = np.linalg.svd(B)
# Correct improper rotation if necessary (as in Kabsch algorithm)
if np.linalg.det(u @ vh) < 0:
s[-1] = -s[-1]
u[:, -1] = -u[:, -1]
C = np.dot(u, vh)
if s[1] + s[2] < 1e-16 * s[0]:
            warnings.warn("Optimal rotation is not uniquely defined or is "
                          "poorly defined for the given sets of vectors.")
rmsd = np.sqrt(max(
np.sum(weights * np.sum(b ** 2 + a ** 2, axis=1)) - 2 * np.sum(s),
0))
if return_sensitivity:
zeta = (s[0] + s[1]) * (s[1] + s[2]) * (s[2] + s[0])
kappa = s[0] * s[1] + s[1] * s[2] + s[2] * s[0]
with np.errstate(divide='ignore', invalid='ignore'):
sensitivity = np.mean(weights) / zeta * (
kappa * np.eye(3) + np.dot(B, B.T))
return cls.from_matrix(C), rmsd, sensitivity
else:
return cls.from_matrix(C), rmsd
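Stripped of input validation and the sensitivity computation, the estimator above is the weighted Kabsch algorithm: form the weighted cross-covariance matrix, take its SVD, and flip the last singular direction if the product would be a reflection. A self-contained NumPy sketch of that core step (illustrative only, not SciPy's implementation):

```python
import numpy as np

def kabsch_rotation(a, b, weights=None):
    """Return the matrix C minimizing sum_i w_i * ||a_i - C b_i||^2."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    w = np.ones(len(a)) if weights is None else np.asarray(weights, dtype=float)
    # Weighted cross-covariance B = sum_i w_i * a_i b_i^T, as in the code above.
    B = np.einsum('ji,jk->ik', w[:, None] * a, b)
    u, s, vh = np.linalg.svd(B)
    # If u @ vh would be a reflection (det = -1), flip the last column of u
    # so the result is a proper rotation.
    if np.linalg.det(u @ vh) < 0:
        u[:, -1] = -u[:, -1]
    return u @ vh
```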
class Slerp(object):
"""Spherical Linear Interpolation of Rotations.
The interpolation between consecutive rotations is performed as a rotation
around a fixed axis with a constant angular velocity [1]_. This ensures
that the interpolated rotations follow the shortest path between initial
and final orientations.
Parameters
----------
times : array_like, shape (N,)
Times of the known rotations. At least 2 times must be specified.
rotations : `Rotation` instance
Rotations to perform the interpolation between. Must contain N
rotations.
Methods
-------
__call__
See Also
--------
Rotation
Notes
-----
.. versionadded:: 1.2.0
References
----------
.. [1] https://en.wikipedia.org/wiki/Slerp#Quaternion_Slerp
Examples
--------
>>> from scipy.spatial.transform import Rotation as R
>>> from scipy.spatial.transform import Slerp
Setup the fixed keyframe rotations and times:
>>> key_rots = R.random(5, random_state=2342345)
>>> key_times = [0, 1, 2, 3, 4]
Create the interpolator object:
>>> slerp = Slerp(key_times, key_rots)
Interpolate the rotations at the given times:
>>> times = [0, 0.5, 0.25, 1, 1.5, 2, 2.75, 3, 3.25, 3.60, 4]
>>> interp_rots = slerp(times)
The keyframe rotations expressed as Euler angles:
>>> key_rots.as_euler('xyz', degrees=True)
array([[ 14.31443779, -27.50095894, -3.7275787 ],
[ -1.79924227, -24.69421529, 164.57701743],
[146.15020772, 43.22849451, -31.34891088],
[ 46.39959442, 11.62126073, -45.99719267],
[-88.94647804, -49.64400082, -65.80546984]])
The interpolated rotations expressed as Euler angles. These agree with the
keyframe rotations at both endpoints of the range of keyframe times.
>>> interp_rots.as_euler('xyz', degrees=True)
array([[ 14.31443779, -27.50095894, -3.7275787 ],
[ 4.74588574, -32.44683966, 81.25139984],
[ 10.71094749, -31.56690154, 38.06896408],
[ -1.79924227, -24.69421529, 164.57701743],
[ 11.72796022, 51.64207311, -171.7374683 ],
[ 146.15020772, 43.22849451, -31.34891088],
[ 68.10921869, 20.67625074, -48.74886034],
[ 46.39959442, 11.62126073, -45.99719267],
[ 12.35552615, 4.21525086, -64.89288124],
[ -30.08117143, -19.90769513, -78.98121326],
[ -88.94647804, -49.64400082, -65.80546984]])
"""
def __init__(self, times, rotations):
if len(rotations) == 1:
raise ValueError("`rotations` must contain at least 2 rotations.")
times = np.asarray(times)
if times.ndim != 1:
raise ValueError("Expected times to be specified in a 1 "
"dimensional array, got {} "
"dimensions.".format(times.ndim))
if times.shape[0] != len(rotations):
raise ValueError("Expected number of rotations to be equal to "
"number of timestamps given, got {} rotations "
"and {} timestamps.".format(
len(rotations), times.shape[0]))
self.times = times
self.timedelta = np.diff(times)
if np.any(self.timedelta <= 0):
raise ValueError("Times must be in strictly increasing order.")
self.rotations = rotations[:-1]
self.rotvecs = (self.rotations.inv() * rotations[1:]).as_rotvec()
def __call__(self, times):
"""Interpolate rotations.
Compute the interpolated rotations at the given `times`.
Parameters
----------
times : array_like
Times to compute the interpolations at. Can be a scalar or
1-dimensional.
Returns
-------
interpolated_rotation : `Rotation` instance
Object containing the rotations computed at given `times`.
"""
# Clearly differentiate from self.times property
compute_times = np.asarray(times)
if compute_times.ndim > 1:
raise ValueError("`times` must be at most 1-dimensional.")
single_time = compute_times.ndim == 0
compute_times = np.atleast_1d(compute_times)
# side = 'left' (default) excludes t_min.
ind = np.searchsorted(self.times, compute_times) - 1
# Include t_min. Without this step, index for t_min equals -1
ind[compute_times == self.times[0]] = 0
if np.any(np.logical_or(ind < 0, ind > len(self.rotations) - 1)):
raise ValueError("Interpolation times must be within the range "
"[{}, {}], both inclusive.".format(
self.times[0], self.times[-1]))
alpha = (compute_times - self.times[ind]) / self.timedelta[ind]
result = (self.rotations[ind] *
Rotation.from_rotvec(self.rotvecs[ind] * alpha[:, None]))
if single_time:
result = result[0]
return result
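`Slerp.__call__` above composes the left keyframe with a fraction `alpha` of the relative rotation vector, which is equivalent to the classic quaternion slerp formula. A self-contained NumPy sketch of that formula (scalar-last `[x, y, z, w]` quaternions assumed; illustrative only, not SciPy's implementation):

```python
import numpy as np

def slerp_quat(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    q0 = np.asarray(q0, dtype=float)
    q1 = np.asarray(q1, dtype=float)
    dot = np.dot(q0, q1)
    if dot < 0.0:
        # q and -q represent the same rotation; take the shorter arc.
        q1, dot = -q1, -dot
    theta = np.arccos(min(dot, 1.0))  # angle between the two quaternions
    if theta < 1e-9:
        # Nearly identical orientations: plain linear interpolation is fine.
        q = (1.0 - t) * q0 + t * q1
    else:
        q = (np.sin((1.0 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)
    return q / np.linalg.norm(q)
```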
| 35.56 | 94 | 0.524806 |
3048f5c04f154673688a298d5f86060c9f1cd295 | 4,202 | py | Python | official/transformer/v2/transformer_main_test.py | anestisdotpy/models | 3ae3deafb662ba5100f1fde15f8eaab8556f4bf8 | [
"Apache-2.0"
] | null | null | null | official/transformer/v2/transformer_main_test.py | anestisdotpy/models | 3ae3deafb662ba5100f1fde15f8eaab8556f4bf8 | [
"Apache-2.0"
] | null | null | null | official/transformer/v2/transformer_main_test.py | anestisdotpy/models | 3ae3deafb662ba5100f1fde15f8eaab8556f4bf8 | [
"Apache-2.0"
] | 1 | 2019-07-23T01:29:11.000Z | 2019-07-23T01:29:11.000Z | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Test Transformer model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import re
from absl import flags
import tensorflow as tf
from tensorflow.python.framework import test_util
from official.transformer.v2 import misc
from official.transformer.v2 import transformer_main as tm
FLAGS = flags.FLAGS
FIXED_TIMESTAMP = "my_time_stamp"
WEIGHT_PATTERN = re.compile(r"weights-epoch-.+\.hdf5")
def _generate_file(filepath, lines):
with open(filepath, "w") as f:
for l in lines:
f.write("{}\n".format(l))
class TransformerTaskTest(tf.test.TestCase):
def setUp(self):
temp_dir = self.get_temp_dir()
FLAGS.model_dir = temp_dir
FLAGS.init_logdir_timestamp = FIXED_TIMESTAMP
FLAGS.param_set = param_set = "tiny"
FLAGS.use_synthetic_data = True
FLAGS.steps_per_epoch = 1
FLAGS.validation_steps = 1
FLAGS.train_epochs = 1
FLAGS.batch_size = 8
FLAGS.init_weight_path = None
self.cur_log_dir = os.path.join(temp_dir, FIXED_TIMESTAMP)
self.vocab_file = os.path.join(self.cur_log_dir, "vocab")
self.vocab_size = misc.get_model_params(param_set, 0)["vocab_size"]
self.bleu_source = os.path.join(self.cur_log_dir, "bleu_source")
self.bleu_ref = os.path.join(self.cur_log_dir, "bleu_ref")
self.flags_file = os.path.join(self.cur_log_dir, "flags")
def _assert_exists(self, filepath):
self.assertTrue(os.path.exists(filepath))
def test_train(self):
t = tm.TransformerTask(FLAGS)
t.train()
# Test model dir.
self._assert_exists(self.cur_log_dir)
# Test saving models.
self._assert_exists(
os.path.join(self.cur_log_dir, "saves-model-weights.hdf5"))
self._assert_exists(os.path.join(self.cur_log_dir, "saves-model.hdf5"))
# Test callbacks:
# TensorBoard file.
self._assert_exists(os.path.join(self.cur_log_dir, "logs"))
# CSVLogger file.
self._assert_exists(os.path.join(self.cur_log_dir, "result.csv"))
# Checkpoint file.
filenames = os.listdir(self.cur_log_dir)
matched_weight_file = any([WEIGHT_PATTERN.match(f) for f in filenames])
self.assertTrue(matched_weight_file)
def _prepare_files_and_flags(self, *extra_flags):
# Make log dir.
if not os.path.exists(self.cur_log_dir):
os.makedirs(self.cur_log_dir)
# Fake vocab, bleu_source and bleu_ref.
tokens = [
"'<pad>'", "'<EOS>'", "'_'", "'a'", "'b'", "'c'", "'d'", "'a_'", "'b_'",
"'c_'", "'d_'"
]
tokens += ["'{}'".format(i) for i in range(self.vocab_size - len(tokens))]
_generate_file(self.vocab_file, tokens)
_generate_file(self.bleu_source, ["a b", "c d"])
_generate_file(self.bleu_ref, ["a b", "d c"])
# Update flags.
update_flags = [
"ignored_program_name",
"--vocab_file={}".format(self.vocab_file),
"--bleu_source={}".format(self.bleu_source),
"--bleu_ref={}".format(self.bleu_ref),
]
if extra_flags:
update_flags.extend(extra_flags)
FLAGS(update_flags)
@test_util.run_v1_only("V1 should work. Issue: V2 w/ graph transformed.")
def test_predict(self):
self._prepare_files_and_flags()
t = tm.TransformerTask(FLAGS)
t.predict()
@test_util.run_v1_only("V1 should work. Issue: V2 w/ graph transformed.")
def test_eval(self):
self._prepare_files_and_flags()
t = tm.TransformerTask(FLAGS)
t.eval()
if __name__ == "__main__":
misc.define_transformer_flags()
tf.test.main()
| 33.086614 | 80 | 0.687768 |
65f5dd7be3ba78fb9fe629c9cfdd43a0d847c4fd | 2,555 | py | Python | tests/extension/stream_/add/stream_add.py | akmaru/veriloggen | 74f998139e8cf613f7703fa4cffd571bbf069bbc | [
"Apache-2.0"
] | null | null | null | tests/extension/stream_/add/stream_add.py | akmaru/veriloggen | 74f998139e8cf613f7703fa4cffd571bbf069bbc | [
"Apache-2.0"
] | null | null | null | tests/extension/stream_/add/stream_add.py | akmaru/veriloggen | 74f998139e8cf613f7703fa4cffd571bbf069bbc | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
from __future__ import print_function
import sys
import os
# the next line can be removed after installation
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(
os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))))
from veriloggen import *
import veriloggen.stream as stream
def mkMain():
    # input variable
x = stream.Variable('xdata')
y = stream.Variable('ydata')
# stream definition
z = x + y
# set output attribute
z.output('zdata')
st = stream.Stream(z)
m = st.to_module('main')
return m
def mkTest(numports=8):
m = Module('test')
# target instance
main = mkMain()
params = m.copy_params(main)
ports = m.copy_sim_ports(main)
clk = ports['CLK']
rst = ports['RST']
xdata = ports['xdata']
ydata = ports['ydata']
zdata = ports['zdata']
uut = m.Instance(main, 'uut',
params=m.connect_params(main),
ports=m.connect_ports(main))
reset_done = m.Reg('reset_done', initval=0)
reset_stmt = []
reset_stmt.append(reset_done(0))
reset_stmt.append(xdata(0))
reset_stmt.append(ydata(0))
simulation.setup_waveform(m, uut)
simulation.setup_clock(m, clk, hperiod=5)
init = simulation.setup_reset(m, rst, reset_stmt, period=100)
nclk = simulation.next_clock
init.add(
Delay(1000),
reset_done(1),
nclk(clk),
Delay(10000),
Systask('finish'),
)
send_fsm = FSM(m, 'send_fsm', clk, rst)
send_count = m.Reg('send_count', 32, initval=0)
send_fsm.If(reset_done).goto_next()
send_fsm(
xdata(xdata + 1),
ydata(ydata + 2),
Display('xdata=%d', xdata),
Display('ydata=%d', ydata),
send_count.inc()
)
send_fsm.If(send_count == 20).goto_next()
recv_fsm = FSM(m, 'recv_fsm', clk, rst)
recv_count = m.Reg('recv_count', 32, initval=0)
recv_fsm.If(reset_done).goto_next()
recv_fsm(
Display('zdata=%d', zdata),
recv_count.inc()
)
recv_fsm.If(recv_count == 20 + 10).goto_next()
return m
if __name__ == '__main__':
test = mkTest()
verilog = test.to_verilog('tmp.v')
print(verilog)
# run simulator (Icarus Verilog)
sim = simulation.Simulator(test)
rslt = sim.run() # display=False
#rslt = sim.run(display=True)
print(rslt)
# launch waveform viewer (GTKwave)
# sim.view_waveform() # background=False
# sim.view_waveform(background=True)
| 23.440367 | 67 | 0.620352 |
99e0e042233b0ef9e716adad6fef05cdca480d1d | 7,151 | py | Python | neutron/agent/linux/utils.py | gampel/neutron | 51a6260266dc59c066072ca890ad9c40b1aad6cf | [
"Apache-2.0"
] | null | null | null | neutron/agent/linux/utils.py | gampel/neutron | 51a6260266dc59c066072ca890ad9c40b1aad6cf | [
"Apache-2.0"
] | null | null | null | neutron/agent/linux/utils.py | gampel/neutron | 51a6260266dc59c066072ca890ad9c40b1aad6cf | [
"Apache-2.0"
] | null | null | null | # Copyright 2012 Locaweb.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fcntl
import glob
import os
import shlex
import socket
import struct
import tempfile
from eventlet.green import subprocess
from eventlet import greenthread
from oslo.utils import excutils
from neutron.common import constants
from neutron.common import utils
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
def create_process(cmd, root_helper=None, addl_env=None):
"""Create a process object for the given command.
The return value will be a tuple of the process object and the
list of command arguments used to create it.
"""
if root_helper:
cmd = shlex.split(root_helper) + cmd
cmd = map(str, cmd)
LOG.debug("Running command: %s", cmd)
env = os.environ.copy()
if addl_env:
env.update(addl_env)
obj = utils.subprocess_popen(cmd, shell=False,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=env)
return obj, cmd
def execute(cmd, root_helper=None, process_input=None, addl_env=None,
check_exit_code=True, return_stderr=False, log_fail_as_error=True,
extra_ok_codes=None):
try:
obj, cmd = create_process(cmd, root_helper=root_helper,
addl_env=addl_env)
_stdout, _stderr = obj.communicate(process_input)
obj.stdin.close()
m = _("\nCommand: %(cmd)s\nExit code: %(code)s\nStdout: %(stdout)r\n"
"Stderr: %(stderr)r") % {'cmd': cmd, 'code': obj.returncode,
'stdout': _stdout, 'stderr': _stderr}
extra_ok_codes = extra_ok_codes or []
if obj.returncode and obj.returncode in extra_ok_codes:
obj.returncode = None
if obj.returncode and log_fail_as_error:
LOG.error(m)
else:
LOG.debug(m)
if obj.returncode and check_exit_code:
raise RuntimeError(m)
finally:
# NOTE(termie): this appears to be necessary to let the subprocess
# call clean something up in between calls, without
# it two execute calls in a row hangs the second one
greenthread.sleep(0)
return (_stdout, _stderr) if return_stderr else _stdout
def get_interface_mac(interface):
MAC_START = 18
MAC_END = 24
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
info = fcntl.ioctl(s.fileno(), 0x8927,
struct.pack('256s', interface[:constants.DEVICE_NAME_MAX_LEN]))
return ''.join(['%02x:' % ord(char)
for char in info[MAC_START:MAC_END]])[:-1]
def replace_file(file_name, data):
"""Replaces the contents of file_name with data in a safe manner.
First write to a temp file and then rename. Since POSIX renames are
atomic, the file is unlikely to be corrupted by competing writes.
We create the tempfile on the same device to ensure that it can be renamed.
"""
base_dir = os.path.dirname(os.path.abspath(file_name))
tmp_file = tempfile.NamedTemporaryFile('w+', dir=base_dir, delete=False)
tmp_file.write(data)
tmp_file.close()
os.chmod(tmp_file.name, 0o644)
os.rename(tmp_file.name, file_name)
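The write-temp-then-rename idiom used by `replace_file` above can be reproduced with only the standard library; on Python 3, `os.replace` performs the atomic overwrite that `os.rename` provides on POSIX (this module targets Python 2). A minimal sketch with illustrative names, not part of this module:

```python
import os
import tempfile

def atomic_write(file_name, data):
    """Write `data` to `file_name` via a same-directory temp file + rename."""
    base_dir = os.path.dirname(os.path.abspath(file_name))
    fd, tmp_name = tempfile.mkstemp(dir=base_dir)
    try:
        with os.fdopen(fd, 'w') as f:
            f.write(data)
        # mkstemp creates the file with mode 0600; chmod first if wider
        # permissions are needed. os.replace atomically renames over the
        # target when both paths are on the same filesystem.
        os.replace(tmp_name, file_name)
    except Exception:
        os.unlink(tmp_name)
        raise
```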
def find_child_pids(pid):
"""Retrieve a list of the pids of child processes of the given pid."""
try:
raw_pids = execute(['ps', '--ppid', pid, '-o', 'pid='],
log_fail_as_error=False)
except RuntimeError as e:
# Unexpected errors are the responsibility of the caller
with excutils.save_and_reraise_exception() as ctxt:
# Exception has already been logged by execute
no_children_found = 'Exit code: 1' in e.message
if no_children_found:
ctxt.reraise = False
return []
return [x.strip() for x in raw_pids.split('\n') if x.strip()]
def _get_conf_base(cfg_root, uuid, ensure_conf_dir):
conf_dir = os.path.abspath(os.path.normpath(cfg_root))
conf_base = os.path.join(conf_dir, uuid)
if ensure_conf_dir:
if not os.path.isdir(conf_dir):
os.makedirs(conf_dir, 0o755)
return conf_base
def get_conf_file_name(cfg_root, uuid, cfg_file, ensure_conf_dir=False):
"""Returns the file name for a given kind of config file."""
conf_base = _get_conf_base(cfg_root, uuid, ensure_conf_dir)
return "%s.%s" % (conf_base, cfg_file)
def get_value_from_conf_file(cfg_root, uuid, cfg_file, converter=None):
"""A helper function to read a value from one of a config file."""
file_name = get_conf_file_name(cfg_root, uuid, cfg_file)
msg = _('Error while reading %s')
try:
with open(file_name, 'r') as f:
try:
return converter(f.read()) if converter else f.read()
except ValueError:
msg = _('Unable to convert value in %s')
except IOError:
msg = _('Unable to access %s')
LOG.debug(msg, file_name)
return None
def remove_conf_files(cfg_root, uuid):
conf_base = _get_conf_base(cfg_root, uuid, False)
for file_path in glob.iglob("%s.*" % conf_base):
os.unlink(file_path)
def remove_conf_file(cfg_root, uuid, cfg_file):
"""Remove a config file."""
conf_file = get_conf_file_name(cfg_root, uuid, cfg_file)
if os.path.exists(conf_file):
os.unlink(conf_file)
def get_root_helper_child_pid(pid, root_helper=None):
"""
Get the lowest child pid in the process hierarchy
If root helper was used, two or more processes would be created:
- a root helper process (e.g. sudo myscript)
- possibly a rootwrap script (e.g. neutron-rootwrap)
- a child process (e.g. myscript)
Killing the root helper process will leave the child process
running, re-parented to init, so the only way to ensure that both
die is to target the child process directly.
"""
pid = str(pid)
if root_helper:
try:
pid = find_child_pids(pid)[0]
except IndexError:
# Process is already dead
return None
while True:
try:
# We shouldn't have more than one child per process
# so keep getting the children of the first one
pid = find_child_pids(pid)[0]
except IndexError:
# Last process in the tree, return it
break
return pid
| 33.731132 | 79 | 0.64047 |
f21eac68a378be811b8064e89bf32c96ba021397 | 2,025 | py | Python | src/day07_mb.py | dev-mbusch/adventofcode2020 | c888799ecdbc0c8d15921b3e54eb04440a289ab5 | [
"MIT"
] | null | null | null | src/day07_mb.py | dev-mbusch/adventofcode2020 | c888799ecdbc0c8d15921b3e54eb04440a289ab5 | [
"MIT"
] | null | null | null | src/day07_mb.py | dev-mbusch/adventofcode2020 | c888799ecdbc0c8d15921b3e54eb04440a289ab5 | [
"MIT"
] | null | null | null | """
https://adventofcode.com/2020/day/7
"""
from collections import namedtuple
from pprint import pprint as pp
with open(r'./input_files/day07_input_mb.txt', 'r') as fh:
raw_input = fh.read()
# Step 1 seperate lines to get the rules
raw_rules = raw_input.splitlines()
# Step 2 seperate outer layer from inner layer
split_outer_layer = [rule.split(' contain ') for rule in raw_rules if 'no other bags' not in rule]
# Step 3 build a dictionary with outer bags as keys and their content as a list of Bags
outer_bags = [' '.join(item[0].split(' ')[0:2]) for item in split_outer_layer]
Bags = namedtuple('Bags', ['amount', 'color'])
inner_bags = [item[1].split(', ') for item in split_outer_layer]
inner_bags = [ [item.split(' ') for item in sublist] for sublist in inner_bags ]
inner_bags = [ [ Bags(item[0], ' '.join(item[1:3]) ) for item in sublist] for sublist in inner_bags ]
dict_bags = {key: value for key, value in zip(outer_bags, inner_bags)}
candidates = {'shiny gold'}  # NB: set('shiny gold') would be a set of characters
# Iterate to a fixed point so bags that contain shiny gold only indirectly
# (inside other bags) are collected as well.
changed = True
while changed:
    changed = False
    for key, bags in dict_bags.items():
        if key in candidates:
            continue
        if any(bag.color in candidates for bag in bags):
            candidates.add(key)
            changed = True
# def search_bags(search_dict, candidates=set()):
# while len(search_dict):
# if not candidates:
# for key in search_dict:
# for bag in search_dict[key]:
# if 'shiny gold' in bag.color:
# candidates.add(key)
# search_dict.pop('shiny gold')
# search_bags(candidates, search_dict)
# else:
# for candidate in candidates:
# for key in search_dict:
# for bag in search_dict[key]:
# if candidate in bag.color:
# candidates.add(key)
# search_dict.pop(candidate)
# search_bags(candidates, search_dict)
# return candidates
| 32.66129 | 101 | 0.599506 |
c285a60cf4231cb4c44d983798c1e33cad4cb316 | 351 | py | Python | python/testData/inspections/PyArgumentListInspection/kwargsMapToNothing.py | Sajadrahimi/intellij-community | ab9ff612dde3ee94ecae33cbc0ea639fa51550d4 | [
"Apache-2.0"
] | null | null | null | python/testData/inspections/PyArgumentListInspection/kwargsMapToNothing.py | Sajadrahimi/intellij-community | ab9ff612dde3ee94ecae33cbc0ea639fa51550d4 | [
"Apache-2.0"
] | null | null | null | python/testData/inspections/PyArgumentListInspection/kwargsMapToNothing.py | Sajadrahimi/intellij-community | ab9ff612dde3ee94ecae33cbc0ea639fa51550d4 | [
"Apache-2.0"
] | 1 | 2022-01-02T19:58:08.000Z | 2022-01-02T19:58:08.000Z | def f5(a, b, c):
pass
arg = [1, 2, 3]
kwarg = {'c':3}
f5(*arg, **kwarg) # ok
f5(1,2, **kwarg) # ok
f5<warning descr="Unexpected argument(s)">(1, 2, 3, <warning descr="Unexpected argument">**kwarg</warning>)</warning> # fail
f5<warning descr="Unexpected argument(s)">(1, 2, 3, <warning descr="Unexpected argument">*arg</warning>)</warning> # fail
| 35.1 | 124 | 0.638177 |
8f14a70a529473a5471aa98434378b595e6623b1 | 1,605 | py | Python | config.py | Wilidon/mpetsbot | 14f3d7b81f0439fd3585a466fe68d327bdbfcd56 | [
"MIT"
] | null | null | null | config.py | Wilidon/mpetsbot | 14f3d7b81f0439fd3585a466fe68d327bdbfcd56 | [
"MIT"
] | null | null | null | config.py | Wilidon/mpetsbot | 14f3d7b81f0439fd3585a466fe68d327bdbfcd56 | [
"MIT"
] | null | null | null | import sys
from functools import lru_cache
import pickledb
from pydantic import BaseSettings
class Settings(BaseSettings):
token: str
group_id: int
tg_token: str
db_host: str
db_username: str
db_password: str
db_name: str
chat_id: str
pickle: str
bot1: str
bot2: str
bot3: str
bot4: str
bot5: str
bot6: str
bot_password: str
api_key: str
class Config:
env_file = ".env"
@lru_cache()
def get_settings():
return Settings()
def get_db():
db = pickledb.load(get_settings().pickle, True)
try:
db.lgetall("user_tasks")
except KeyError:
db.lcreate("user_tasks")
try:
db.lgetall("club_tasks")
except KeyError:
db.lcreate("club_tasks")
try:
db.get("boss_start")
except KeyError:
db.get("boss_start")
try:
db.get("boss_end")
except KeyError:
db.get("boss_end")
return db
logger_config = {
"handlers": [
{"sink": sys.stdout},
{"sink": "logs/main.log",
"format": "{time} | {level} | {module}:{line}-- {message} - "
"{extra[context]}",
"rotation": "1 MB",
"compression": "zip"},
],
"extra": {"context": "None"}
}
logger_config_for_core = {
"handlers": [
{"sink": sys.stdout},
{"sink": "logs/core.log",
"format": "{time} | {level} | {module}:{line}-- {message} - "
"{extra[context]}",
"rotation": "1 MB",
"compression": "zip"},
],
"extra": {"context": "None"}
} | 18.662791 | 70 | 0.535826 |
0c9cdfca5cad22f695b760168dc9ecac89f742b2 | 1,039 | py | Python | crypto_backend/test_load.py | IlinValery/hppl_final_project | 06ac468387c27deaedafb38eea54a0bf6bd82360 | [
"MIT"
] | null | null | null | crypto_backend/test_load.py | IlinValery/hppl_final_project | 06ac468387c27deaedafb38eea54a0bf6bd82360 | [
"MIT"
] | 20 | 2020-12-05T19:33:11.000Z | 2020-12-10T06:53:30.000Z | crypto_backend/test_load.py | IlinValery/hppl_final_project | 06ac468387c27deaedafb38eea54a0bf6bd82360 | [
"MIT"
] | 1 | 2020-12-10T07:57:11.000Z | 2020-12-10T07:57:11.000Z | import time
import asyncio
import aiohttp
from tqdm import tqdm
import argparse
from tests import requests as r
async def main(handler, iters, method='test'):
t0 = time.time()
tasks = []
async with aiohttp.ClientSession() as session:
for i in tqdm(range(iters)):
task = asyncio.create_task(handler(session))
tasks.append(task)
await asyncio.gather(*tasks)
print(f"Spent time for testing request: {time.time() - t0} secs")
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("-n", "--amount", type=int,
help="Count of requests", default=10)
parser.add_argument("-t", "--type", type=str,
help="Type of available function to call: test", default='test')
args = parser.parse_args()
amount = args.amount
req_type = args.type
handler = r.test_connection
if req_type == 'encode':
pass
elif req_type == 'decode':
pass
asyncio.run(main(handler, amount))
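The load test above fans out `iters` concurrent requests with `asyncio.create_task` and waits for all of them with `asyncio.gather`. The same pattern can be shown without aiohttp or a network, using `asyncio.sleep` as a stand-in for the HTTP call (illustrative only):

```python
import asyncio
import time

async def fake_request(i):
    # Stand-in for an HTTP request: wait a bit, then return a result.
    await asyncio.sleep(0.05)
    return i * 2

async def fan_out(n):
    # Schedule all coroutines first, then wait; gather preserves order.
    tasks = [asyncio.create_task(fake_request(i)) for i in range(n)]
    return await asyncio.gather(*tasks)

t0 = time.time()
results = asyncio.run(fan_out(20))
elapsed = time.time() - t0
# 20 concurrent 0.05 s waits finish in roughly 0.05 s, not 1 s sequentially.
```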
| 25.975 | 83 | 0.620789 |
6c641238c98de435a4dccbd84354b5a857bd1409 | 475 | py | Python | aws_lambda_powertools/tracing/extensions.py | nayaverdier/aws-lambda-powertools-python | cd15ee97746356a84c6f196dbd2d26a34ea50411 | [
"Apache-2.0",
"MIT-0"
] | 1,208 | 2020-05-20T19:06:29.000Z | 2022-03-30T14:17:47.000Z | aws_lambda_powertools/tracing/extensions.py | nayaverdier/aws-lambda-powertools-python | cd15ee97746356a84c6f196dbd2d26a34ea50411 | [
"Apache-2.0",
"MIT-0"
] | 859 | 2020-05-22T09:59:54.000Z | 2022-03-31T08:31:30.000Z | aws_lambda_powertools/tracing/extensions.py | nayaverdier/aws-lambda-powertools-python | cd15ee97746356a84c6f196dbd2d26a34ea50411 | [
"Apache-2.0",
"MIT-0"
] | 163 | 2020-05-18T21:08:25.000Z | 2022-03-28T12:03:37.000Z | def aiohttp_trace_config():
"""aiohttp extension for X-Ray (aws_xray_trace_config)
It expects you to have aiohttp as a dependency.
Returns
-------
TraceConfig
aiohttp trace config
"""
from aws_xray_sdk.ext.aiohttp.client import aws_xray_trace_config # pragma: no cover
aws_xray_trace_config.__doc__ = "aiohttp extension for X-Ray (aws_xray_trace_config)" # pragma: no cover
return aws_xray_trace_config() # pragma: no cover
| 29.6875 | 109 | 0.711579 |
093fdf0d02763f4dcfb292c1b20ddf99e3825a1e | 2,591 | py | Python | tests/test_filters.py | portfoliome/foil | b66d8cf4ab048a387d8c7a033b47e922ed6917d6 | [
"MIT"
] | 30 | 2016-10-03T04:04:26.000Z | 2021-03-16T17:40:11.000Z | tests/test_filters.py | portfoliome/foil | b66d8cf4ab048a387d8c7a033b47e922ed6917d6 | [
"MIT"
] | 11 | 2017-01-01T17:49:51.000Z | 2017-12-14T18:13:20.000Z | tests/test_filters.py | portfoliome/foil | b66d8cf4ab048a387d8c7a033b47e922ed6917d6 | [
"MIT"
] | null | null | null | import unittest
from collections import namedtuple
from foil.filters import AttributeFilter, create_key_filter, create_indexer
TeamRank = namedtuple('TeamRank', ['sport', 'team', 'rank'])
class TestAttributeFilter(unittest.TestCase):
def setUp(self):
self.teams = [TeamRank('baseball', 'cubs', 2),
TeamRank('basketball', 'bulls', 1),
TeamRank('baseball', 'mets', 1),
TeamRank('basketball', 'lakers', 2),
TeamRank('basketball', 'knicks', 3),
TeamRank('basketball', 'bulls', 2)]
def test_include_attributes(self):
keys = ('sport', 'team')
include = [('basketball', 'bulls'),
('basketball', 'knicks')]
expected = [TeamRank('basketball', 'bulls', 1),
TeamRank('basketball', 'knicks', 3),
TeamRank('basketball', 'bulls', 2)]
result = list(AttributeFilter(keys, predicates=include).including(self.teams))
self.assertEqual(expected, result)
def test_exclude_attributes(self):
keys = ('sport', 'team', 'rank')
remove = [('basketball', 'bulls', 2),
('baseball', 'mets', 1),
('basketball', 'lakers', 3)]
expected = [
TeamRank('baseball', 'cubs', 2),
TeamRank('basketball', 'bulls', 1),
TeamRank('basketball', 'lakers', 2),
TeamRank('basketball', 'knicks', 3)]
result = list(AttributeFilter(keys, predicates=remove).excluding(self.teams))
self.assertEqual(expected, result)
def test_create_key_filter(self):
properties = {'sports': ['baseball', 'basketball'],
'teams': ['bulls', 'knicks', 'lakers']}
expected = set([('sports', 'baseball'),
('sports', 'basketball'),
('teams', 'bulls'),
('teams', 'knicks'),
('teams', 'lakers')])
result = set(create_key_filter(properties))
self.assertSetEqual(result, expected)
class TestCreateIndexer(unittest.TestCase):
def setUp(self):
self.record = [0, 10, 20]
def test_single_indexer(self):
indexer = create_indexer([1])
expected = (10,)
result = indexer(self.record)
self.assertEqual(expected, result)
def test_multi_indexer(self):
indexer = create_indexer([2, 0])
expected = (20, 0)
result = indexer(self.record)
self.assertEqual(expected, result)
| 32.3875 | 86 | 0.545349 |
8eb0dfd4fa1db5228d073d6358f4593215aef5a6 | 2,658 | py | Python | airflow/api/common/experimental/delete_dag.py | CSharpCross/xflow | e9d8f9c25405ee05be0ae109a410ac99c27be031 | [
"Apache-2.0"
] | 2 | 2021-01-27T09:27:21.000Z | 2021-05-14T05:49:23.000Z | airflow/api/common/experimental/delete_dag.py | CSharpCross/xflow | e9d8f9c25405ee05be0ae109a410ac99c27be031 | [
"Apache-2.0"
] | 1 | 2019-02-28T06:23:06.000Z | 2019-02-28T06:23:06.000Z | airflow/api/common/experimental/delete_dag.py | CSharpCross/xflow | e9d8f9c25405ee05be0ae109a410ac99c27be031 | [
"Apache-2.0"
] | 3 | 2018-05-23T23:41:38.000Z | 2019-08-07T06:12:17.000Z | # -*- coding: utf-8 -*-
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Delete DAGs APIs."""
from sqlalchemy import or_
from airflow import models
from airflow.models import TaskFail, DagModel
from airflow.utils.db import provide_session
from airflow.exceptions import DagNotFound
@provide_session
def delete_dag(dag_id, keep_records_in_log=True, session=None):
"""
:param dag_id: the dag_id of the DAG to delete
:param keep_records_in_log: whether keep records of the given dag_id
in the Log table in the backend database (for reasons like auditing).
The default value is True.
:param session: session used
:return count of deleted dags
"""
dag = session.query(DagModel).filter(DagModel.dag_id == dag_id).first()
if dag is None:
raise DagNotFound("Dag id {} not found".format(dag_id))
count = 0
# noinspection PyUnresolvedReferences,PyProtectedMember
for model in models.base.Base._decl_class_registry.values(): # pylint: disable=protected-access
if hasattr(model, "dag_id"):
if keep_records_in_log and model.__name__ == 'Log':
continue
cond = or_(model.dag_id == dag_id, model.dag_id.like(dag_id + ".%"))
count += session.query(model).filter(cond).delete(synchronize_session='fetch')
if dag.is_subdag:
parent_dag_id, task_id = dag_id.rsplit(".", 1)
for model in models.DagRun, TaskFail, models.TaskInstance:
count += session.query(model).filter(model.dag_id == parent_dag_id,
model.task_id == task_id).delete()
# Delete entries in Import Errors table for a deleted DAG
# This handles the case when the dag_id is changed in the file
session.query(models.ImportError).filter(
models.ImportError.filename == dag.fileloc
).delete(synchronize_session='fetch')
return count
| 40.892308 | 100 | 0.704289 |
c1ef94bb97526ddfbc255494709fa0071047f6a3 | 140 | py | Python | run.py | florije1988/generateFile | fd7e21785b28897207866fb55e613f37398dfbc3 | [
"MIT"
] | null | null | null | run.py | florije1988/generateFile | fd7e21785b28897207866fb55e613f37398dfbc3 | [
"MIT"
] | null | null | null | run.py | florije1988/generateFile | fd7e21785b28897207866fb55e613f37398dfbc3 | [
"MIT"
] | null | null | null | #!flask/bin/python
# -*- coding: utf-8 -*-
from app import app, cfg
app.secret_key = cfg['FLASK_SECRET_KEY']
app.run(port=6621, debug=True)
| 23.333333 | 40 | 0.7 |
c5074cc1be6fc0ad1eefee4f4dc58fdfa70323d2 | 8,543 | py | Python | layint_scan_api/models/clair_vulnerability.py | LayeredInsight/layint_scan_api_python | bc258b2af5d2211b986e32fedea95fcfc7de80ff | [
"Apache-2.0"
] | 1 | 2018-03-26T23:55:00.000Z | 2018-03-26T23:55:00.000Z | layint_scan_api/models/clair_vulnerability.py | LayeredInsight/layint_scan_api_python | bc258b2af5d2211b986e32fedea95fcfc7de80ff | [
"Apache-2.0"
] | null | null | null | layint_scan_api/models/clair_vulnerability.py | LayeredInsight/layint_scan_api_python | bc258b2af5d2211b986e32fedea95fcfc7de80ff | [
"Apache-2.0"
] | 2 | 2020-11-04T02:56:33.000Z | 2020-11-05T08:12:01.000Z | # coding: utf-8
"""
Layered Insight Scan
Layered Insight Scan performs static vulnerability analysis, license and package compliance. You can find out more about Scan at http://layeredinsight.com.
OpenAPI spec version: 0.9.4
Contact: help@layeredinsight.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from pprint import pformat
from six import iteritems
import re
class ClairVulnerability(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'name': 'str',
'namespace_name': 'str',
'description': 'str',
'link': 'str',
'severity': 'str',
'fixed_by': 'str',
'fixed_in': 'str',
'metadata': 'list[str]'
}
attribute_map = {
'name': 'Name',
'namespace_name': 'NamespaceName',
'description': 'Description',
'link': 'Link',
'severity': 'Severity',
'fixed_by': 'FixedBy',
'fixed_in': 'FixedIn',
'metadata': 'Metadata'
}
def __init__(self, name=None, namespace_name=None, description=None, link=None, severity=None, fixed_by=None, fixed_in=None, metadata=None):
"""
ClairVulnerability - a model defined in Swagger
"""
self._name = None
self._namespace_name = None
self._description = None
self._link = None
self._severity = None
self._fixed_by = None
self._fixed_in = None
self._metadata = None
if name is not None:
self.name = name
if namespace_name is not None:
self.namespace_name = namespace_name
if description is not None:
self.description = description
if link is not None:
self.link = link
if severity is not None:
self.severity = severity
if fixed_by is not None:
self.fixed_by = fixed_by
if fixed_in is not None:
self.fixed_in = fixed_in
if metadata is not None:
self.metadata = metadata
@property
def name(self):
"""
Gets the name of this ClairVulnerability.
Name of vulnerability
:return: The name of this ClairVulnerability.
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""
Sets the name of this ClairVulnerability.
Name of vulnerability
:param name: The name of this ClairVulnerability.
:type: str
"""
self._name = name
@property
def namespace_name(self):
"""
Gets the namespace_name of this ClairVulnerability.
Name of namespace where this vulnerability was seen
:return: The namespace_name of this ClairVulnerability.
:rtype: str
"""
return self._namespace_name
@namespace_name.setter
def namespace_name(self, namespace_name):
"""
Sets the namespace_name of this ClairVulnerability.
Name of namespace where this vulnerability was seen
:param namespace_name: The namespace_name of this ClairVulnerability.
:type: str
"""
self._namespace_name = namespace_name
@property
def description(self):
"""
Gets the description of this ClairVulnerability.
Description of vulnerability
:return: The description of this ClairVulnerability.
:rtype: str
"""
return self._description
@description.setter
def description(self, description):
"""
Sets the description of this ClairVulnerability.
Description of vulnerability
:param description: The description of this ClairVulnerability.
:type: str
"""
self._description = description
@property
def link(self):
"""
Gets the link of this ClairVulnerability.
Link to vulnerability details
:return: The link of this ClairVulnerability.
:rtype: str
"""
return self._link
@link.setter
def link(self, link):
"""
Sets the link of this ClairVulnerability.
Link to vulnerability details
:param link: The link of this ClairVulnerability.
:type: str
"""
self._link = link
@property
def severity(self):
"""
Gets the severity of this ClairVulnerability.
Severity of vulnerability
:return: The severity of this ClairVulnerability.
:rtype: str
"""
return self._severity
@severity.setter
def severity(self, severity):
"""
Sets the severity of this ClairVulnerability.
Severity of vulnerability
:param severity: The severity of this ClairVulnerability.
:type: str
"""
self._severity = severity
@property
def fixed_by(self):
"""
Gets the fixed_by of this ClairVulnerability.
May contain details about who fixed the vulnerability
:return: The fixed_by of this ClairVulnerability.
:rtype: str
"""
return self._fixed_by
@fixed_by.setter
def fixed_by(self, fixed_by):
"""
Sets the fixed_by of this ClairVulnerability.
May contain details about who fixed the vulnerability
:param fixed_by: The fixed_by of this ClairVulnerability.
:type: str
"""
self._fixed_by = fixed_by
@property
def fixed_in(self):
"""
Gets the fixed_in of this ClairVulnerability.
May contain details about what release this was fixed in
:return: The fixed_in of this ClairVulnerability.
:rtype: str
"""
return self._fixed_in
@fixed_in.setter
def fixed_in(self, fixed_in):
"""
Sets the fixed_in of this ClairVulnerability.
May contain details about what release this was fixed in
:param fixed_in: The fixed_in of this ClairVulnerability.
:type: str
"""
self._fixed_in = fixed_in
@property
def metadata(self):
"""
Gets the metadata of this ClairVulnerability.
Array of metadata related to this vulnerability, such as CVSS vectors.
:return: The metadata of this ClairVulnerability.
:rtype: list[str]
"""
return self._metadata
@metadata.setter
def metadata(self, metadata):
"""
Sets the metadata of this ClairVulnerability.
Array of metadata related to this vulnerability, such as CVSS vectors.
:param metadata: The metadata of this ClairVulnerability.
:type: list[str]
"""
self._metadata = metadata
def to_dict(self):
"""
Returns the model properties as a dict
"""
result = {}
for attr, _ in iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""
Returns the string representation of the model
"""
return pformat(self.to_dict())
def __repr__(self):
"""
For `print` and `pprint`
"""
return self.to_str()
def __eq__(self, other):
"""
Returns true if both objects are equal
"""
if not isinstance(other, ClairVulnerability):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""
Returns true if both objects are not equal
"""
return not self == other
| 26.531056 | 160 | 0.581646 |
5fec663b3420db3ebb7f6fc380e01407eeaac135 | 498 | py | Python | user/forms.py | romsha28/hospital_python | 1bb86266223df5084321917169156aaec1c5e318 | [
"Apache-2.0"
] | null | null | null | user/forms.py | romsha28/hospital_python | 1bb86266223df5084321917169156aaec1c5e318 | [
"Apache-2.0"
] | 1 | 2021-10-18T08:56:11.000Z | 2021-10-18T08:56:11.000Z | user/forms.py | romsha28/hospital_python | 1bb86266223df5084321917169156aaec1c5e318 | [
"Apache-2.0"
] | null | null | null | from django import forms
from django.contrib.auth.models import User
from django.contrib.auth.forms import UserCreationForm
class UserRegisterForm(UserCreationForm):
email = forms.EmailField()
phone_no = forms.CharField(max_length = 20)
first_name = forms.CharField(max_length = 20)
last_name = forms.CharField(max_length = 20)
#photo = forms.ImageField(upload_to='user_images', blank=True)
class Meta:
model = User
fields = ['username', 'email', 'phone_no', 'password1', 'password2']
| 31.125 | 70 | 0.763052 |
654c294801ddf9228dcfa75a4aca9adf71f4a75e | 387 | py | Python | accelerator/tests/test_refund_code_redemption.py | masschallenge/django-accelerator | 8af898b574be3b8335edc8961924d1c6fa8b5fd5 | [
"MIT"
] | 6 | 2017-06-14T19:34:01.000Z | 2020-03-08T07:16:59.000Z | accelerator/tests/test_refund_code_redemption.py | masschallenge/django-accelerator | 8af898b574be3b8335edc8961924d1c6fa8b5fd5 | [
"MIT"
] | 160 | 2017-06-20T17:12:13.000Z | 2022-03-30T13:53:12.000Z | accelerator/tests/test_refund_code_redemption.py | masschallenge/django-accelerator | 8af898b574be3b8335edc8961924d1c6fa8b5fd5 | [
"MIT"
] | null | null | null | # MIT License
# Copyright (c) 2017 MassChallenge, Inc.
from __future__ import unicode_literals
from django.test import TestCase
from accelerator.tests.factories import RefundCodeRedemptionFactory
class TestRefundCodeRedemption(TestCase):
def test_str(self):
redemption = RefundCodeRedemptionFactory()
assert redemption.refund_code.unique_code in str(redemption)
| 25.8 | 68 | 0.795866 |
7a633bc18bf4eefc5881e884b12fab6be741a65a | 25,434 | py | Python | capsul/pipeline/xml.py | servoz/capsul | 2d72228c096f1c43ecfca7f3651b353dc35e209e | [
"CECILL-B"
] | 5 | 2018-10-31T10:16:53.000Z | 2022-03-11T08:11:32.000Z | capsul/pipeline/xml.py | servoz/capsul | 2d72228c096f1c43ecfca7f3651b353dc35e209e | [
"CECILL-B"
] | 78 | 2018-10-30T17:04:38.000Z | 2022-03-17T17:39:41.000Z | capsul/pipeline/xml.py | servoz/capsul | 2d72228c096f1c43ecfca7f3651b353dc35e209e | [
"CECILL-B"
] | 11 | 2019-01-21T10:05:08.000Z | 2022-02-22T17:02:22.000Z | # -*- coding: utf-8 -*-
'''
XML IO for pipelines
Functions
=========
:func:`create_xml_pipeline`
---------------------------
:func:`save_xml_pipeline`
-------------------------
'''
from __future__ import absolute_import
from __future__ import print_function
import os
import six
import sys
import xml.etree.cElementTree as ET
from soma.sorted_dictionary import OrderedDict
from capsul.process.xml import string_to_value
from capsul.pipeline.pipeline_construction import PipelineConstructor
from soma.controller import Controller
from traits.api import Undefined
def create_xml_pipeline(module, name, xml_file):
"""
Create a pipeline class given its Capsul XML 2.0 representation.
Parameters
----------
module: str (mandatory)
name of the module for the created Pipeline class (the Python module is
not modified).
name: str (mandatory)
name of the new pipeline class
xml_file: str (mandatory)
name of file containing the XML description or XML string.
"""
if os.path.exists(xml_file):
xml_pipeline = ET.parse(xml_file).getroot()
else:
xml_pipeline = ET.fromstring(xml_file)
version = xml_pipeline.get('capsul_xml')
if version and version != '2.0':
raise ValueError('Only Capsul XML 2.0 is supported, not %s' % version)
class_name = xml_pipeline.get('name')
if class_name:
if name is None:
name = class_name
elif name != class_name:
raise KeyError('pipeline name (%s) and requested object name '
'(%s) differ.' % (class_name, name))
elif name is None:
name = os.path.basename(xml_file).rsplit('.', 1)[0]
builder = PipelineConstructor(module, name)
exported_parameters = set()
for child in xml_pipeline:
if child.tag == 'doc':
if child.text is not None:
builder.set_documentation(child.text.strip())
elif child.tag == 'process':
process_name = child.get('name')
module = child.get('module')
args = (process_name, module)
kwargs = {}
nipype_usedefault = []
iterate = []
iteration = child.get('iteration')
if iteration:
iterate = [x.strip() for x in iteration.split(',')]
for process_child in child:
if process_child.tag == 'set':
name = process_child.get('name')
value = process_child.get('value')
value = string_to_value(value)
if value is not None:
kwargs[name] = value
kwargs.setdefault('make_optional', []).append(name)
elif process_child.tag == 'nipype':
name = process_child.get('name')
usedefault = process_child.get('usedefault')
if usedefault == 'true':
nipype_usedefault.append(name)
copyfile = process_child.get('copyfile')
if copyfile == 'true':
kwargs.setdefault('inputs_to_copy', []).append(name)
elif copyfile == 'discard':
kwargs.setdefault('inputs_to_copy', []).append(name)
kwargs.setdefault('inputs_to_clean', []).append(name)
else:
raise ValueError('Invalid tag in <process>: %s' %
process_child.tag)
if iterate:
kwargs['iterative_plugs'] = iterate
builder.add_iterative_process(*args, **kwargs)
else:
builder.add_process(*args, **kwargs)
for name in nipype_usedefault:
builder.call_process_method(process_name, 'set_usedefault',
name, True)
enabled = child.get('enabled')
if enabled == 'false':
builder.set_node_enabled(process_name, False)
elif child.tag == 'switch':
switch_name = child.get('name')
value = child.get('switch_value')
kwargs = {'export_switch': False}
if value:
kwargs['switch_value'] = value
inputs = []
outputs = []
for process_child in child:
if process_child.tag == 'input':
name = process_child.get('name')
inputs.append(name)
elif process_child.tag == 'output':
name = process_child.get('name')
outputs.append(name)
optional = process_child.get('optional')
if optional == 'true':
kwargs.setdefault('make_optional', []).append(name)
builder.add_switch(switch_name, inputs, outputs, **kwargs)
enabled = child.get('enabled')
if enabled == 'false':
builder.set_node_enabled(switch_name, False)
elif child.tag == 'optional_output_switch':
switch_name = child.get('name')
kwargs = {}
input = None
output = None
for process_child in child:
if process_child.tag == 'input':
if input is not None:
raise ValueError(
'Several inputs in optional_output_switch')
input = process_child.get('name')
elif process_child.tag == 'output':
if output is not None:
raise ValueError(
'Several outputs in optional_output_switch')
output = process_child.get('name')
if input is None:
raise ValueError('No input in optional_output_switch')
builder.add_optional_output_switch(switch_name, input, output)
enabled = child.get('enabled')
if enabled == 'false':
builder.set_node_enabled(switch_name, False)
elif child.tag == 'custom_node':
node_name = child.get('name')
module = child.get('module')
params = {}
kwargs = {}
make_optional = []
for process_child in child:
if process_child.tag == 'config':
for p, value in process_child.items():
params[p] = string_to_value(value)
elif process_child.tag == 'set':
name = process_child.get('name')
value = process_child.get('value')
value = string_to_value(value)
if value is not None:
kwargs[name] = value
make_optional.append(name)
else:
raise ValueError('Invalid tag in <process>: %s' %
process_child.tag)
# TODO optional plugs
builder.add_custom_node(node_name, module, params, make_optional,
**kwargs)
enabled = child.get('enabled')
if enabled == 'false':
builder.set_node_enabled(node_name, False)
elif child.tag == 'link':
source = child.get('source')
dest = child.get('dest')
weak_link = child.get('weak_link')
if weak_link == 'true':
weak_link = True
else:
weak_link = False
if '.' in source:
if '.' in dest:
builder.add_link('%s->%s' % (source, dest),
weak_link=weak_link)
elif dest in exported_parameters:
builder.add_link('%s->%s' % (source, dest),
weak_link=weak_link)
else:
node, plug = source.rsplit('.', 1)
builder.export_parameter(node, plug, dest,
weak_link=weak_link)
exported_parameters.add(dest)
elif source in exported_parameters:
builder.add_link('%s->%s' % (source, dest))
else:
node, plug = dest.rsplit('.', 1)
builder.export_parameter(node, plug, source,
weak_link=weak_link)
exported_parameters.add(source)
elif child.tag == 'processes_selection':
selection_parameter = child.get('name')
selection_groups = OrderedDict()
for select_child in child:
if select_child.tag == 'processes_group':
group_name = select_child.get('name')
group = selection_groups[group_name] = []
for group_child in select_child:
if group_child.tag == 'process':
group.append(group_child.get('name'))
else:
raise ValueError('Invalid tag in <processes_group>'
'<process>: %s' % group_child.tag)
else:
raise ValueError('Invalid tag in <processes_selection>: %s'
% select_child.tag)
builder.add_processes_selection(selection_parameter,
selection_groups)
elif child.tag == 'pipeline_steps':
for step_child in child:
step_name = step_child.get('name')
enabled = step_child.get('enabled')
if enabled == 'false':
enabled = False
else:
enabled = True
nodes = []
for step_node in step_child:
nodes.append(step_node.get('name'))
builder.add_pipeline_step(step_name, nodes, enabled)
elif child.tag == 'gui':
for gui_child in child:
if gui_child.tag == 'position':
name = gui_child.get('name')
x = float(gui_child.get('x'))
y = float(gui_child.get('y'))
builder.set_node_position(name, x, y)
elif gui_child.tag == 'zoom':
builder.set_scene_scale_factor(
float(gui_child.get('level')))
else:
raise ValueError('Invalid tag in <gui>: %s' %
gui_child.tag)
else:
raise ValueError('Invalid tag in <pipeline>: %s' % child.tag)
return builder.pipeline
def save_xml_pipeline(pipeline, xml_file):
'''
Save a pipeline in an XML file
Parameters
----------
pipeline: Pipeline instance
pipeline to save
xml_file: str
XML file to save the pipeline in
'''
# imports are done locally to avoid circular imports
from capsul.api import Process, Pipeline
from capsul.pipeline.pipeline_nodes import ProcessNode, Switch, \
OptionalOutputSwitch
from capsul.pipeline.process_iteration import ProcessIteration
from capsul.process.process import NipypeProcess
from capsul.study_config.process_instance import get_process_instance
def _write_process(process, parent, name, dont_write_plug_values=set(),
init_plug_values={}):
procnode = ET.SubElement(parent, 'process')
if isinstance(process, NipypeProcess):
mod = process._nipype_interface.__module__
classname = process._nipype_interface.__class__.__name__
else:
mod = process.__module__
# if process is a function with XML decorator, we need to
# retrieve the original function name.
func = getattr(process, '_function', None)
if func:
classname = func.__name__
else:
classname = process.__class__.__name__
if classname == 'Pipeline':
# don't accept the base Pipeline class
classname = name
if '.' in class_name:
classname = classname[:classname.index('.')]
classname = classname[0].upper() + class_name[1:]
procnode.set('module', "%s.%s" % (mod, classname))
procnode.set('name', name)
proc_copy = get_process_instance("%s.%s" % (mod, classname))
if isinstance(process, NipypeProcess):
# WARNING: not sure I'm doing the right things for nipype. To be
# fixed if needed.
for param in process.inputs_to_copy:
elem = ET.SubElement(procnode, 'nipype')
elem.set('name', param)
if param in process.inputs_to_clean:
elem.set('copyfile', 'discard')
else:
elem.set('copyfile', 'true')
np_input = getattr(process._nipype_interface.inputs, param)
if np_input:
use_default = getattr(np_input, 'usedefault', False) # is it that?
if use_default:
elem.set('use_default', 'true')
for param, np_input in \
six.iteritems(process._nipype_interface.inputs.__dict__):
use_default = getattr(np_input, 'usedefault', False) # is it that?
if use_default and param not in process.inputs_to_copy:
elem = ET.SubElement(procnode, 'nipype')
elem.set('name', param)
elem.set('use_default', 'true')
# set initial values
dont_write_plug_values = set(dont_write_plug_values)
dont_write_plug_values.update(('nodes_activation',
'selection_changed'))
for param_name, trait in six.iteritems(process.user_traits()):
if param_name not in dont_write_plug_values:
if param_name in init_plug_values:
value = init_plug_values[param_name]
else:
value = getattr(process, param_name)
if value not in (None, Undefined, '', []) \
or (trait.optional
and not proc_copy.trait(param_name).optional):
if isinstance(value, Controller):
value_repr = repr(dict(value.export_to_dict()))
else:
value_repr = repr(value)
try:
eval(value_repr)
except Exception:
print('warning, value of parameter %s cannot be saved'
% param_name)
continue
elem = ET.SubElement(procnode, 'set')
elem.set('name', param_name)
elem.set('value', value_repr)
return procnode
def _write_custom_node(node, parent, name):
etnode = ET.SubElement(parent, 'custom_node')
mod = node.__module__
classname = node.__class__.__name__
nodename = '.'.join((mod, classname))
etnode.set('module', "%s.%s" % (mod, classname))
etnode.set('name', name)
if hasattr(node, 'configured_controller'):
c = node.configured_controller()
if len(c.user_traits()) != 0:
et = ET.SubElement(etnode, 'config')
for param_name in c.user_traits():
value = getattr(c, param_name)
if isinstance(value, Controller):
value_repr = repr(dict(value.export_to_dict()))
else:
value_repr = repr(value)
try:
eval(value_repr)
except Exception:
print('warning, value of parameter %s cannot be saved'
% param_name)
continue
et.set(param_name, value_repr)
# set initial values
for param_name, plug in six.iteritems(node.plugs):
trait = node.trait(param_name)
value = getattr(node, param_name)
if value not in (None, Undefined, '', []) or trait.optional:
if isinstance(value, Controller):
value_repr = repr(dict(value.export_to_dict()))
else:
value_repr = repr(value)
try:
eval(value_repr)
except Exception:
print('warning, value of parameter %s cannot be saved'
% param_name)
continue
elem = ET.SubElement(etnode, 'set')
elem.set('name', param_name)
elem.set('value', value_repr)
def _write_iteration(process_iter, parent, name):
iter_values = dict([(p, getattr(process_iter, p))
for p in process_iter.iterative_parameters])
procnode = _write_process(
process_iter.process, parent, name, init_plug_values=iter_values)
iteration_params = ', '.join(process_iter.iterative_parameters)
procnode.set('iteration', iteration_params)
return procnode
def _write_switch(switch, parent, name):
swnode = ET.SubElement(parent, 'switch')
swnode.set('name', name)
inputs = set()
for plug_name, plug in six.iteritems(switch.plugs):
if plug.output:
elem = ET.SubElement(swnode, 'output')
elem.set('name', plug_name)
if plug.optional:
elem.set('optional', 'true')
else:
name_parts = plug_name.split("_switch_")
if len(name_parts) == 2 \
and name_parts[0] not in inputs:
inputs.add(name_parts[0])
elem = ET.SubElement(swnode, 'input')
elem.set('name', name_parts[0])
if plug.optional:
elem.set('optional', 'true')
swnode.set('switch_value', six.text_type(switch.switch))
return swnode
def _write_optional_output_switch(switch, parent, name):
swnode = ET.SubElement(parent, 'optional_output_switch')
swnode.set('name', name)
for plug_name, plug in six.iteritems(switch.plugs):
if plug.output:
elem = ET.SubElement(swnode, 'output')
elem.set('name', plug_name)
else:
name_parts = plug_name.split("_switch_")
if len(name_parts) == 2:
input = name_parts[0]
if input != '_none':
elem = ET.SubElement(swnode, 'input')
elem.set('name', name_parts[0])
if plug.optional:
elem.set('optional', 'true')
return swnode
def _write_processes(pipeline, root):
for node_name, node in six.iteritems(pipeline.nodes):
if node_name == "":
continue
if isinstance(node, OptionalOutputSwitch):
xmlnode = _write_optional_output_switch(node, root, node_name)
elif isinstance(node, Switch):
xmlnode = _write_switch(node, root, node_name)
elif isinstance(node, ProcessNode) \
and isinstance(node.process, ProcessIteration):
xmlnode = _write_iteration(node.process, root, node_name)
elif isinstance(node, ProcessNode):
xmlnode = _write_process(node.process, root, node_name)
else:
xmlnode = _write_custom_node(node, root, node_name)
if not node.enabled:
xmlnode.set('enabled', 'false')
def _write_processes_selections(pipeline, root):
selection_parameters = []
if hasattr(pipeline, 'processes_selection'):
for selector_name, groups \
in six.iteritems(pipeline.processes_selection):
selection_parameters.append(selector_name)
sel_node = ET.SubElement(root, 'processes_selection')
sel_node.set('name', selector_name)
for group_name, group in six.iteritems(groups):
grp_node = ET.SubElement(sel_node, 'processes_group')
grp_node.set('name', group_name)
for node in group:
proc_node = ET.SubElement(grp_node, 'process')
proc_node.set('name', node)
return selection_parameters
def _write_links(pipeline, root):
for node_name, node in six.iteritems(pipeline.nodes):
for plug_name, plug in six.iteritems(node.plugs):
if (node_name == "" and not plug.output) \
or (node_name != "" and plug.output):
links = plug.links_to
for link in links:
if node_name == "":
src = plug_name
else:
src = "%s.%s" % (node_name, plug_name)
if link[0] == "":
dst = link[1]
else:
dst = "%s.%s" % (link[0], link[1])
linkelem = ET.SubElement(root, 'link')
linkelem.set('source', src)
linkelem.set('dest', dst)
if link[-1]:
linkelem.set('weak_link', "true")
def _write_steps(pipeline, root):
steps = pipeline.trait('pipeline_steps')
steps_node = None
if steps and getattr(pipeline, 'pipeline_steps', None):
steps_node = ET.SubElement(root, 'pipeline_steps')
for step_name, step \
in six.iteritems(pipeline.pipeline_steps.user_traits()):
step_node = ET.SubElement(steps_node, 'step')
step_node.set('name', step_name)
enabled = getattr(pipeline.pipeline_steps, step_name)
if not enabled:
step_node.set('enabled', 'false')
nodes = step.nodes
for node in nodes:
node_item = ET.SubElement(step_node, 'node')
node_item.set('name', node)
return steps_node
def _write_nodes_positions(pipeline, root):
gui = None
if hasattr(pipeline, "node_position") and pipeline.node_position:
gui = ET.SubElement(root, 'gui')
for node_name, pos in six.iteritems(pipeline.node_position):
node_pos = ET.SubElement(gui, 'position')
node_pos.set('name', node_name)
node_pos.set('x', six.text_type(pos[0]))
node_pos.set('y', six.text_type(pos[1]))
return gui
def _write_doc(pipeline, root):
if hasattr(pipeline, "__doc__"):
docstr = pipeline.__doc__
if docstr == Pipeline.__doc__:
docstr = "" # don't use the builtin Pipeline help
else:
# remove automatically added doc
splitdoc = docstr.split('\n')
notepos = [i for i, x in enumerate(splitdoc[:-2])
if x.endswith('.. note::')]
autodocpos = None
if notepos:
for i in notepos:
if splitdoc[i+2].find(
"* Type '{0}.help()'".format(
pipeline.__class__.__name__)) != -1:
autodocpos = i
if autodocpos is not None:
# strip empty trailing lines
while autodocpos >= 1 \
and splitdoc[autodocpos - 1].strip() == '':
autodocpos -= 1
docstr = '\n'.join(splitdoc[:autodocpos]) + '\n'
else:
docstr = ''
if docstr.strip() == '':
docstr = ''
doc = ET.SubElement(root, 'doc')
doc.text = docstr
return doc
root = ET.Element('pipeline')
root.set('capsul_xml', '2.0')
class_name = pipeline.__class__.__name__
if pipeline.__class__ is Pipeline:
# if directly a Pipeline, then use a default new name
class_name = 'CustomPipeline'
root.set('name', class_name)
_write_doc(pipeline, root)
_write_processes(pipeline, root)
_write_links(pipeline, root)
_write_processes_selections(pipeline, root)
_write_steps(pipeline, root)
gui_node = _write_nodes_positions(pipeline, root)
if hasattr(pipeline, "scene_scale_factor"):
if gui_node is None:
gui_node = ET.SubElement(root, 'gui')
scale_node = ET.SubElement(gui_node, 'zoom')
scale_node.set('level', six.text_type(pipeline.scene_scale_factor))
tree = ET.ElementTree(root)
tree.write(xml_file)
| 43.55137 | 86 | 0.515727 |
61e47a96ca4ff66ccb91cd9adc0cf50bac0d3116 | 206 | py | Python | 14-desafio.py | SweydAbdul/EstudosPythonCeV | 5eb61d4e1d47b99d57de776c835aa9f3c2bcee3b | [
"MIT"
] | null | null | null | 14-desafio.py | SweydAbdul/EstudosPythonCeV | 5eb61d4e1d47b99d57de776c835aa9f3c2bcee3b | [
"MIT"
] | null | null | null | 14-desafio.py | SweydAbdul/EstudosPythonCeV | 5eb61d4e1d47b99d57de776c835aa9f3c2bcee3b | [
"MIT"
] | null | null | null | print('===== CHALLENGE 14 =====')
celsius = float(input('Enter the temperature in Celsius: '))
fahrenheit = 9 * celsius / 5 + 32
print('A temperature of {} Celsius corresponds to {:.1f} Fahrenheit.'.format(celsius, fahrenheit)) | 51.5 | 90 | 0.679612 |
cbee4f3e46c2028113ae98709d5411467d224df3 | 1,296 | py | Python | tests/test_uff.py | davidbradway/uff.py | 118001211018a4fc95d1dd7304ae6335bdf805f9 | [
"MIT"
] | null | null | null | tests/test_uff.py | davidbradway/uff.py | 118001211018a4fc95d1dd7304ae6335bdf805f9 | [
"MIT"
] | null | null | null | tests/test_uff.py | davidbradway/uff.py | 118001211018a4fc95d1dd7304ae6335bdf805f9 | [
"MIT"
] | null | null | null | import os
from pathlib import Path
from uff.uff import UFF
from uff.utils import verify_correctness, is_version_compatible, load_uff_dict, download_test_data
def test_uff_save_load():
ref_dir = './data'
ref_files = [
'fieldII_converging_wave_mlt_sector.uff',
'fieldII_converging_wave_grid.uff',
'fieldII_diverging_wave_grid.uff',
'fieldII_plane_wave_grid.uff',
'fieldII_single_element_transmit_grid.uff',
]
# check all files exist in data/
if not all([os.path.isfile(Path(ref_dir) / file) for file in ref_files]):
print("Downloading test files...")
# if they do not download them with utils.
base_url = 'http://ustb.no/datasets/uff/'
urls = [base_url + file for file in ref_files]
download_test_data(rel_path=ref_dir, file_urls=urls)
for ref_file in ref_files:
ref_uff_path = os.path.join(ref_dir, ref_file)
uff_dict = load_uff_dict(ref_uff_path)
version = uff_dict.pop('version')
assert is_version_compatible(version, (0, 3, 0))
print("good version")
uff_new = UFF.deserialize(uff_dict)
uff_new_save_path = 'new.uff'
uff_new.save(uff_new_save_path, version)
verify_correctness(uff_new_save_path, ref_uff_path)
| 32.4 | 98 | 0.684414 |
8bcbe395fc79200721c3d6f54d897b41cb8f7bed | 3,379 | py | Python | aliyun-python-sdk-cbn/aliyunsdkcbn/request/v20170912/DescribeCenBandwidthPackagesRequest.py | leafcoder/aliyun-openapi-python-sdk | 26b441ab37a5cda804de475fd5284bab699443f1 | [
"Apache-2.0"
] | 1,001 | 2015-07-24T01:32:41.000Z | 2022-03-25T01:28:18.000Z | aliyun-python-sdk-cbn/aliyunsdkcbn/request/v20170912/DescribeCenBandwidthPackagesRequest.py | leafcoder/aliyun-openapi-python-sdk | 26b441ab37a5cda804de475fd5284bab699443f1 | [
"Apache-2.0"
] | 363 | 2015-10-20T03:15:00.000Z | 2022-03-08T12:26:19.000Z | aliyun-python-sdk-cbn/aliyunsdkcbn/request/v20170912/DescribeCenBandwidthPackagesRequest.py | leafcoder/aliyun-openapi-python-sdk | 26b441ab37a5cda804de475fd5284bab699443f1 | [
"Apache-2.0"
] | 682 | 2015-09-22T07:19:02.000Z | 2022-03-22T09:51:46.000Z | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from aliyunsdkcore.request import RpcRequest
from aliyunsdkcbn.endpoint import endpoint_data
class DescribeCenBandwidthPackagesRequest(RpcRequest):
def __init__(self):
RpcRequest.__init__(self, 'Cbn', '2017-09-12', 'DescribeCenBandwidthPackages','cbn')
self.set_method('POST')
if hasattr(self, "endpoint_map"):
setattr(self, "endpoint_map", endpoint_data.getEndpointMap())
if hasattr(self, "endpoint_regional"):
setattr(self, "endpoint_regional", endpoint_data.getEndpointRegional())
def get_ResourceOwnerId(self):
return self.get_query_params().get('ResourceOwnerId')
def set_ResourceOwnerId(self,ResourceOwnerId):
self.add_query_param('ResourceOwnerId',ResourceOwnerId)
def get_IncludeReservationData(self):
return self.get_query_params().get('IncludeReservationData')
def set_IncludeReservationData(self,IncludeReservationData):
self.add_query_param('IncludeReservationData',IncludeReservationData)
def get_PageNumber(self):
return self.get_query_params().get('PageNumber')
def set_PageNumber(self,PageNumber):
self.add_query_param('PageNumber',PageNumber)
def get_IsOrKey(self):
return self.get_query_params().get('IsOrKey')
def set_IsOrKey(self,IsOrKey):
self.add_query_param('IsOrKey',IsOrKey)
def get_PageSize(self):
return self.get_query_params().get('PageSize')
def set_PageSize(self,PageSize):
self.add_query_param('PageSize',PageSize)
def get_ResourceOwnerAccount(self):
return self.get_query_params().get('ResourceOwnerAccount')
def set_ResourceOwnerAccount(self,ResourceOwnerAccount):
self.add_query_param('ResourceOwnerAccount',ResourceOwnerAccount)
def get_OwnerAccount(self):
return self.get_query_params().get('OwnerAccount')
def set_OwnerAccount(self,OwnerAccount):
self.add_query_param('OwnerAccount',OwnerAccount)
def get_OwnerId(self):
return self.get_query_params().get('OwnerId')
def set_OwnerId(self,OwnerId):
self.add_query_param('OwnerId',OwnerId)
def get_Filters(self):
return self.get_query_params().get('Filter')
def set_Filters(self, Filters):
for depth1 in range(len(Filters)):
if Filters[depth1].get('Value') is not None:
for depth2 in range(len(Filters[depth1].get('Value'))):
if Filters[depth1].get('Value')[depth2] is not None:
self.add_query_param('Filter.' + str(depth1 + 1) + '.Value.' + str(depth2 + 1) , Filters[depth1].get('Value')[depth2])
if Filters[depth1].get('Key') is not None:
self.add_query_param('Filter.' + str(depth1 + 1) + '.Key', Filters[depth1].get('Key')) | 36.728261 | 125 | 0.753477 |
0a63e2be6dc46a0f175d21d925be96c887a21c6c | 3,390 | py | Python | enron.py | jmategk0/enron_email_parser | 2ae567f515bd8528e48356e4b06205111ee82136 | [
"MIT"
] | null | null | null | enron.py | jmategk0/enron_email_parser | 2ae567f515bd8528e48356e4b06205111ee82136 | [
"MIT"
] | null | null | null | enron.py | jmategk0/enron_email_parser | 2ae567f515bd8528e48356e4b06205111ee82136 | [
"MIT"
] | null | null | null | import datetime
import csv
import json
import os
from pprint import pprint
sample_data_path = "maildir_sample"
full_data_path = "/home/james/Downloads/maildir_full"
# https://stackoverflow.com/questions/3207219/how-do-i-list-all-files-of-a-directory
# https://stackoverflow.com/questions/19699367/unicodedecodeerror-utf-8-codec-cant-decode-byte
def generate_results(base_dir):
email_results = []
for (dirpath, dirnames, filenames) in os.walk(base_dir):
for name in filenames:
if name[len(name)-3:] != "swp":
email_data = parse_email_file(filepath=os.path.join(dirpath, name))
email_results.append(email_data)
return email_results
def parse_email_file(filepath):
with open(filepath, encoding="ISO-8859-1") as file:
email_data = file.read()
# try:
# email_data = file.read()
# except UnicodeDecodeError:
# print(filepath)
first_index = 0
# find the end of the metadata block first to avoid edge case bugs in email text
x_filename = email_data.index("X-FileName: ")
metadata_end = email_data.index("\n", x_filename)
message_start_raw = email_data.index("Message-ID: ", first_index, metadata_end)
message_start = message_start_raw + len("Message-ID: ")
date_start_raw = email_data.index("Date: ", first_index, metadata_end)
date_start = date_start_raw + len("Date: ")
from_start_raw = email_data.index("From: ", first_index, metadata_end)
from_start = from_start_raw + len("From: ")
to_start_raw = email_data.index("To: ", first_index, metadata_end)
to_start = to_start_raw + len("To: ")
subject_start_raw = email_data.index("Subject: ", first_index, metadata_end)
message_id = email_data[message_start:date_start_raw]
date = email_data[date_start:from_start_raw]
to_line = email_data[to_start: subject_start_raw]
text_body = email_data[metadata_end+1: len(email_data)-1]
# this block fixes a bug where some emails don't have a To line
if "\nTo: " in email_data:
from_line = email_data[from_start: to_start_raw]
else:
from_line = email_data[from_start: subject_start_raw]
final_data = {
"Message-ID": message_id.strip(),
"Date": date.strip(),
"From": from_line.strip(),
"To": to_line.strip(),
"Message": text_body
}
return final_data
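The header fields above are located purely by `str.index` slicing between field labels, finding the end of the metadata block first so body text cannot be mistaken for a header. A minimal self-contained sketch of the same idea, on a hypothetical synthetic message, is:

```python
# Hypothetical minimal message; same index-slicing idea as parse_email_file above.
email_data = (
    "Message-ID: <123.JavaMail@thyme>\n"
    "Date: Mon, 14 May 2001 16:39:00 -0700\n"
    "From: alice@example.com\n"
    "To: bob@example.com\n"
    "Subject: lunch\n"
    "X-FileName: alice.nsf\n"
    "Body text here.\n"
)

# Find the end of the metadata block first, to avoid matching labels in the body.
metadata_end = email_data.index("\n", email_data.index("X-FileName: "))

# Slice a field out between its own label and the next label.
date_start = email_data.index("Date: ") + len("Date: ")
date = email_data[date_start:email_data.index("From: ")].strip()
body = email_data[metadata_end + 1:]

print(date)  # Mon, 14 May 2001 16:39:00 -0700
```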
def export_results(export_filename, results_data, to_csv=True):
print(len(results_data))
if to_csv:
with open(export_filename, 'w') as csv_file_export:
fieldnames = ["Message-ID", "Date", "From", "To", "Message"]
writer = csv.DictWriter(csv_file_export, fieldnames=fieldnames)
writer.writeheader()
for row in results_data:
writer.writerow(row)
else:
with open(export_filename, 'w') as fp:
json.dump(results_data, fp)
def run_pipeline(enron_data_dir, export_csv_file):
results = generate_results(base_dir=enron_data_dir)
export_results(export_filename=export_csv_file, results_data=results)
pprint(str(datetime.datetime.now()))
run_pipeline(enron_data_dir=full_data_path, export_csv_file="sample.csv")
pprint(str(datetime.datetime.now()))
| 34.242424 | 94 | 0.661947 |
697885bb45e02eb021e410d4da89efa66ce181bb | 269 | py | Python | create-swatches.py | jtrim-ons/colour-swatch-generator | a2fc6bb98236c576659c45ed2d469487cc8df237 | [
"MIT"
] | null | null | null | create-swatches.py | jtrim-ons/colour-swatch-generator | a2fc6bb98236c576659c45ed2d469487cc8df237 | [
"MIT"
] | null | null | null | create-swatches.py | jtrim-ons/colour-swatch-generator | a2fc6bb98236c576659c45ed2d469487cc8df237 | [
"MIT"
] | null | null | null | import sys
from PIL import Image
out_dir = sys.argv[2]
for line in sys.stdin.readlines():
colour = line.strip().replace('#', '')
size = int(sys.argv[1])
img = Image.new('RGB', (size, size), color='#'+colour)
img.save(out_dir + '/' + colour + '.png')
| 20.692308 | 55 | 0.63197 |
ec847c0578c28827982da3a8c7f48a1b101015ed | 3,290 | py | Python | year_analysis.py | joee9/spotifystats | c6af564fee60d5f3c37c866f2cfb1a39ee88c2b5 | [
"MIT"
] | 1 | 2022-01-27T17:07:26.000Z | 2022-01-27T17:07:26.000Z | year_analysis.py | joee9/spotifystats | c6af564fee60d5f3c37c866f2cfb1a39ee88c2b5 | [
"MIT"
] | 2 | 2022-03-08T03:38:16.000Z | 2022-03-25T15:57:11.000Z | year_analysis.py | joee9/spotifystats | c6af564fee60d5f3c37c866f2cfb1a39ee88c2b5 | [
"MIT"
] | null | null | null | # Joe Nyhan, 31 December 2021; updated 7 January 2022
# Creates a LaTeX generated yearly summary; see yearly_sum_example.pdf
# TODO: update to write monthly databases
#%%
# spotify libraries
import spotipy.util as util
import spotipy
# time related
from datetime import datetime, timedelta
import pytz; est = pytz.timezone("America/New_York")
from dateutil import parser
#system related
import os
import sys
import json
# misc
import pandas as pd
import numpy as np
from urllib.request import urlretrieve
from PIL import Image, ImageDraw
# user specific details
from secrets import username, client_id, client_secret, home_path, python_path, pdflatex_path, sender
from analysis import make_fullpage_summary, make_formatted_top_songs, start_of_day_est, make_user_stamp, write_html_header, write_html_footer
from count import get_counts
def get_auth():
redirect_uri = 'http://localhost:7777/callback'
# scope = 'user-read-recently-played'
scope = "user-top-read"
token = util.prompt_for_user_token(username=username, scope=scope, client_id=client_id, client_secret=client_secret, redirect_uri=redirect_uri)
return spotipy.Spotify(auth=token)
def main():
sp = get_auth()
yyyy = 2022
if len(sys.argv) == 2:
yyyy = int(sys.argv[1])
# ========== USER INFORMATION
me = sp.current_user()
display_name = me["display_name"]
user_pic_path = me["images"][0]["url"]
user_url = me["external_urls"]["spotify"]
usr_info = user_url, user_pic_path, display_name, f'Yearly Recap: {yyyy}', 'circ'
# ========== CREATE DATA FRAMES FOR EACH MONTH
all_songs = pd.DataFrame(columns=["ID","Timestamp"])
large_track_db = {}
large_artist_db = {}
large_album_db = {}
months = []
for mm in range(1,13):
path = f"{home_path}/data/{yyyy}-{mm:02d}"
if os.path.exists(f"{path}-songlist.txt"):
df = pd.read_csv(f"{path}-songlist.txt")
all_songs = pd.concat([all_songs,df])
months.append(mm)
if os.path.exists(f"{path}-database.txt"):
with open(f"{path}-database.txt","r") as f:
dbs = json.loads(f.read())
track_db, artist_db, album_db = dbs
large_track_db.update(track_db)
large_artist_db.update(artist_db)
large_album_db.update(album_db)
all_dbs = large_track_db, large_artist_db, large_album_db
html = open(f"{home_path}/analysis/year_analysis.html", "w")
write_html_header(html)
make_user_stamp(html, usr_info)
# do yearly stats first
year_cts = get_counts(sp, all_songs, all_dbs)
make_fullpage_summary(html, year_cts, all_dbs, usr_info, str(yyyy), pct=True)
# monthly top songs
for i in range(len(months)):
mm = months[i]
tag = datetime.strftime(datetime.today().replace(month =mm, day=1), "%B")
path = f"{home_path}/data/{yyyy}-{mm:02d}"
df = pd.read_csv(f"{path}-songlist.txt")
pic_str = f"m{i}-"
m_song_cts, m_artist_cts, m_album_cts, m_total = get_counts(sp, df, all_dbs)
make_formatted_top_songs(m_song_cts, html, tag, m_total, large_track_db)
write_html_footer(html)
html.close()
if __name__ == "__main__":
main()
| 28.859649 | 147 | 0.668085 |
c747f38b47777c79ef3142085d795351c13eae6c | 8,111 | py | Python | Scripts/UVLF_IllustrisTNG/UVLF_IllustrisTNG.py | LBJ-Wade/GALLUMI_public | dbef3ff1ae6934c9551a44cbbe0270e2f17f5527 | [
"MIT"
] | 1 | 2021-12-15T00:17:15.000Z | 2021-12-15T00:17:15.000Z | Scripts/UVLF_IllustrisTNG/UVLF_IllustrisTNG.py | NNSSA/GALLUMI_public | 4529ab32ccfc281e5976f482fe556b672b8f464f | [
"MIT"
] | null | null | null | Scripts/UVLF_IllustrisTNG/UVLF_IllustrisTNG.py | NNSSA/GALLUMI_public | 4529ab32ccfc281e5976f482fe556b672b8f464f | [
"MIT"
] | null | null | null | import numpy as np
import h5py
import galcv
data = {}
models = ["A", "B", "C"]
simulations = ["TNG100-1", "TNG300-1", "TNG50-1", "combined"]
redshifts = {33:2, 25:3, 21:4, 17:5, 13:6, 11:7, 8:8, 6:9, 4:10}
quantities = ["bincenters", "luminosity_function", "number_count", "lf_combined"]
## Extract data and store in dictionary "data"
with h5py.File("UVLF_TNG_MV2019.hdf5", "r") as f:
for model in list(f.keys()):
data[model] = {}
for simu in f[model]:
data[model][simu] = {}
for redshift in f[model][simu]:
redshift_translated = redshifts[float(redshift)]
data[model][simu][redshift_translated] = {}
for quantity in f[model][simu][redshift]:
data[model][simu][redshift_translated][quantity] = f[model][simu][redshift][quantity][()]
## This function calls the data
def sim(model, simulation, redshift, quantity):
return data[model][simulation][redshift][quantity]
## Total magnitude range
magnitudes = sim("A", "TNG300-1", 4, "bincenters")
bin_width = np.diff(magnitudes)[-1]
## Compute volume (Mpc^3 x mag_bin_size) of each simulation
vol_300 = (sim("A", "TNG300-1", 4, "number_count") / 10**sim("A", "TNG300-1", 4, "luminosity_function"))[-1]
vol_100 = (sim("A", "TNG100-1", 4, "number_count") / 10**sim("A", "TNG100-1", 4, "luminosity_function"))[-1]
vol_50 = (sim("A", "TNG50-1", 4, "number_count") / 10**sim("A", "TNG50-1", 4, "luminosity_function"))[-1]
## Extract number counts or LF of each simulation
def data_slices(model, redshift, number=True, individual=True):
if individual:
if number:
return sim(model, "TNG300-1", redshift, "number_count"), sim(model, "TNG100-1", redshift, "number_count"), sim(model, "TNG50-1", redshift, "number_count")
return sim(model, "TNG300-1", redshift, "number_count") / vol_300, sim(model, "TNG100-1", redshift, "number_count") / vol_100, sim(model, "TNG50-1", redshift, "number_count") / vol_50
if number:
return np.concatenate((
sim(model, "TNG300-1", redshift, "number_count"),
sim(model, "TNG100-1", redshift, "number_count"),
sim(model, "TNG50-1", redshift, "number_count")
))
return np.concatenate((
sim(model, "TNG300-1", redshift, "number_count") / vol_300,
sim(model, "TNG100-1", redshift, "number_count") / vol_100,
sim(model, "TNG50-1", redshift, "number_count") / vol_50
))
## Integrating tools
# Define order of Gaussian quadrature integration
points, weights = np.polynomial.legendre.leggauss(50)
# Gaussian quadrature integrator
def integrator(f, a, b):
sub = (b - a) / 2.
add = (b + a) / 2.
if sub == 0:
return 0.
return sub * np.dot(f(sub * points + add), weights)
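As a quick sanity check on the quadrature helper, the same `leggauss` setup reproduces integrals with known closed forms essentially to machine precision. A self-contained sketch (duplicating the definitions above):

```python
import numpy as np

# 50-point Gauss-Legendre nodes/weights on [-1, 1], as in the script above.
points, weights = np.polynomial.legendre.leggauss(50)

def integrator(f, a, b):
    # Rescale the nodes from [-1, 1] to [a, b] and take the weighted sum.
    sub = (b - a) / 2.0
    add = (b + a) / 2.0
    if sub == 0:
        return 0.0
    return sub * np.dot(f(sub * points + add), weights)

# integral of x^2 over [0, 1] = 1/3; integral of cos over [0, pi/2] = 1
print(integrator(lambda x: x ** 2, 0.0, 1.0))
print(integrator(np.cos, 0.0, np.pi / 2.0))
```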
## Comoving Angular diameter distance
def D_A(z, Omega_m=0.3089, h=0.6774):
return integrator(lambda x: 1/np.sqrt(Omega_m * np.power(1 + x,3) + 1. - Omega_m), 0., z) * 299792.458 / (100. * h)
## Redshift bin width of each simulation box
def delta_z(z, Lbox, Omega_m=0.3089, h=0.6774):
return Lbox * 100. * h * np.sqrt(Omega_m * np.power(1 + z,3) + 1. - Omega_m) / 299792.458
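The redshift depth of a simulation box follows from Δz ≈ L_box · H(z) / c in flat ΛCDM. A hypothetical standalone copy of the same formula (box size in comoving Mpc, same default cosmological parameters as above):

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def delta_z(z, Lbox, Omega_m=0.3089, h=0.6774):
    # dz = Lbox * H(z) / c for a comoving box of size Lbox (Mpc).
    H_z = 100.0 * h * np.sqrt(Omega_m * (1.0 + z) ** 3 + 1.0 - Omega_m)  # km/s/Mpc
    return Lbox * H_z / C_KM_S

# Redshift depth of a ~300 Mpc box (TNG300-like) at z = 0 and z = 6;
# the bin widens with redshift because H(z) grows.
print(delta_z(0.0, 302.6))
print(delta_z(6.0, 302.6))
```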
## Compute cosmic variance using the galcv code
def cosmic_variance(model, redshift):
# These are the LFs from each simulation
lf300, lf100, lf50 = data_slices(model,redshift,number=False)
# Compute the effective areas in arcmin^2
areas = [(Lbox/D_A(redshift))**2 * (180*60/np.pi)**2 for Lbox in [302.627694125,110.71744907,51.6681428993]]
# galcv can't compute the CV at z = 4, so we use the CV at z = 5 for it (which is a conservative approach)
redshift = max(5, redshift)
# Compute cosmic variance with galcv
cv_300 = np.array(galcv.getcv(mag=magnitudes, area=areas[0], z=redshift, zW=max(0.1, delta_z(redshift, 302.627694125)), appOrAbs="absolute", interpWarning=0))
cv_100 = np.array(galcv.getcv(mag=magnitudes, area=areas[1], z=redshift, zW=max(0.1, delta_z(redshift, 110.71744907)), appOrAbs="absolute", interpWarning=0))
cv_50 = np.array(galcv.getcv(mag=magnitudes, area=areas[2], z=redshift, zW=max(0.1, delta_z(redshift, 51.6681428993)), appOrAbs="absolute", interpWarning=0))
# In some cases galcv gives nan, e.g., if magnitudes are too bright. Therefore we use the largest error there (which is dominated by poisson error anyway)
cv_300[np.isnan(cv_300)] = max(cv_300[np.isfinite(cv_300)])
cv_100[np.isnan(cv_100)] = max(cv_100[np.isfinite(cv_100)])
cv_50[np.isnan(cv_50)] = max(cv_50[np.isfinite(cv_50)])
# Minimal error in cosmic variance
minimal = 0.05
err_300 = np.array(list(map(max,zip(np.repeat(minimal, len(cv_300)), cv_300))))
err_100 = np.array(list(map(max,zip(np.repeat(minimal, len(cv_100)), cv_100))))
err_50 = np.array(list(map(max,zip(np.repeat(minimal, len(cv_50)), cv_50))))
return err_300 * lf300, err_100 * lf100, err_50 * lf50
## Compute Poisson error
def Poisson_error(model, redshift):
# These are the number of galaxies from each simulation
num300, num100, num50 = data_slices(model,redshift)
return np.sqrt(num300)/vol_300, np.sqrt(num100)/vol_100, np.sqrt(num50)/vol_50
## Return combined LF
def data_combined(model, redshift, original=False):
# Return the original, combined UVLF from the raw data
if original:
return 10**sim(model, "combined", redshift, "lf_combined")
# Positions of where we want to transition from one simulation to the other - pos300 or pos100 corresponds...
# ...to the magnitude M_i where the number count of galaxies in the simulation peaks at M_i+2
pos300 = np.argmax(sim(model, "TNG300-1", redshift, "number_count")) - 1
pos100 = np.argmax(sim(model, "TNG100-1", redshift, "number_count")) - 1
# These are the LFs from each simulation
lf_300, lf_100, lf_50 = data_slices(model, redshift, number=False, individual=True)
# set LFs equal to 0 where simulation statistics become relevant (at peak of number of galaxies)
lf_300[pos300:] = 0.
lf_100[pos100:] = 0.
# These are the Poisson errors
poisson_error_300, poisson_error_100, poisson_error_50 = Poisson_error(model, redshift)
# These are the cosmic variances
cv_error_300, cv_error_100, cv_error_50 = cosmic_variance(model,redshift)
# Combine Poisson error and cosmic variance
error_300 = np.sqrt(poisson_error_300**2 + cv_error_300**2)
error_100 = np.sqrt(poisson_error_100**2 + cv_error_100**2)
error_50 = np.sqrt(poisson_error_50**2 + cv_error_50**2)
# Add a minimal error of 20% in each simulation data to account for simulation statistics
min_error = 0.2
error_300 = np.array(list(map(max,zip(min_error * lf_300, error_300))))
error_100 = np.array(list(map(max,zip(min_error * lf_100, error_100))))
error_50 = np.array(list(map(max,zip(min_error * lf_50, error_50))))
error_300[pos300:] = 0.
error_100[pos100:] = 0.
# Compute inverse errors
inv_error_300 = 1/error_300
inv_error_100 = 1/error_100
inv_error_50 = 1/error_50
# Set inverse error equal to 0 where it's infinite
inv_error_300[np.isinf(inv_error_300)] = 0.
inv_error_100[np.isinf(inv_error_100)] = 0.
inv_error_50[np.isinf(inv_error_50)] = 0.
error_tot = 1/np.sqrt(inv_error_300**2 + inv_error_100**2 + inv_error_50**2)
error_tot[np.isinf(error_tot)] = 0.
# inverse variance method
lf_tot = error_tot * np.sqrt((lf_300 * inv_error_300)**2 + (lf_100 * inv_error_100)**2 + (lf_50 * inv_error_50)**2)
return lf_tot, error_tot
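The error combination above follows the usual inverse-variance rule, sigma_tot^-2 = sum_i sigma_i^-2. For reference, a standalone sketch of the textbook inverse-variance weighted mean (hypothetical numbers; note it uses the plain weighted mean, not the quadrature-style value combination applied to the LFs above):

```python
import numpy as np

# Three hypothetical measurements of the same quantity with 1-sigma errors.
values = np.array([1.00, 1.10, 0.95])
errors = np.array([0.10, 0.20, 0.05])

inv_var = 1.0 / errors ** 2
combined_error = 1.0 / np.sqrt(inv_var.sum())              # sigma_tot^-2 = sum(sigma_i^-2)
combined_value = (values * inv_var).sum() / inv_var.sum()  # variance-weighted mean

# The most precise measurement (smallest error) dominates the combination.
print(combined_value, combined_error)
```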
## Construct array for output
for_output = []
for z in [4,5,6,7,8,9,10]:
if z < 6.5:
MUV_cutoff = -16.
elif z < 8.5:
MUV_cutoff = -16.5
else:
MUV_cutoff = -16.75
lfs, errors = data_combined("A",z)
for num, LF in enumerate(zip(lfs, errors)):
if np.isfinite(LF[0]) and magnitudes[num] <= MUV_cutoff and LF[0] != 0.:
for_output.append((z, magnitudes[num], bin_width, LF[0], LF[1]))
np.savetxt("UVLF_IllustrisTNG.txt", np.array(for_output))
| 43.143617 | 191 | 0.664776 |
43995dc9556392a005cca7f10bd2c2e364c2b9d1 | 7,905 | py | Python | main.py | jacky181818/AuxiliaryQuestion | 334c8f9bfdaa707f0c0e7d16d5d48798622cc0df | [
"MIT"
] | null | null | null | main.py | jacky181818/AuxiliaryQuestion | 334c8f9bfdaa707f0c0e7d16d5d48798622cc0df | [
"MIT"
] | 6 | 2021-03-18T20:22:35.000Z | 2022-03-11T23:16:17.000Z | main.py | jacky181818/AuxiliaryQuestion | 334c8f9bfdaa707f0c0e7d16d5d48798622cc0df | [
"MIT"
] | null | null | null | # -*- coding:utf-8 -*-
"""
Xi Gua video Million Heroes
"""
import logging.handlers
import multiprocessing
import operator
import os
import threading
import time
from argparse import ArgumentParser
from datetime import datetime
from functools import partial
from multiprocessing import Event, Pipe
from textwrap import wrap
from config import api_key, enable_chrome, use_monitor, image_compress_level, crop_areas
from config import api_version
from config import app_id
from config import app_key
from config import app_secret
from config import data_directory
from config import prefer
from core.android import save_screen, check_screenshot, get_adb_tool, analyze_current_screen_text
from core.check_words import parse_false
from core.chrome_search import run_browser
from core.crawler.crawl import jieba_initialize, kwquery
from core.crawler.pmi import baidu_count
from core.ocr.baiduocr import get_text_from_image as bai_get_text
from core.ocr.spaceocr import get_text_from_image as ocrspace_get_text
from utils.backup import save_question_answers_to_file, get_qa_list, upload_to_cloud
from utils.process_stdout import ProcessStdout
import re
logger = logging.getLogger("assistant")
handler = logging.handlers.WatchedFileHandler("assistant.log")
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)
## jieba init
jieba_initialize()
if prefer[0] == "baidu":
get_text_from_image = partial(bai_get_text,
app_id=app_id,
app_key=app_key,
app_secret=app_secret,
api_version=api_version,
timeout=5)
elif prefer[0] == "ocrspace":
get_text_from_image = partial(ocrspace_get_text, api_key=api_key)
def parse_args():
parser = ArgumentParser(description="Million Hero Assistant")
parser.add_argument(
"-t", "--timeout",
type=int,
default=5,
help="default http request timeout"
)
return parser.parse_args()
def parse_question_and_answer(text_list):
question = ""
start = 0
for i, keyword in enumerate(text_list):
question += keyword
if "?" in keyword:
start = i + 1
break
real_question = question.split(".")[-1]
for char, repl in [("以下", ""), ("下列", "")]:
real_question = real_question.replace(char, repl, 1)
question, true_flag = parse_false(real_question)
return true_flag, real_question, question, text_list[start:]
def pre_process_question(keyword):
"""
    strip special characters and normalize date keywords
    :param keyword:
:return:
"""
now = datetime.today()
for char, repl in [("“", ""), ("”", ""), ("?", ""), ("《", ""), ("》", ""), ("我国", "中国"),
("今天", "{0}年{1}月{2}日".format(now.year, now.month, now.day)),
("今年", "{0}年".format(now.year)),
("这个月", "{0}年{1}月".format(now.year, now.month))]:
keyword = keyword.replace(char, repl)
keyword = keyword.split(r".")[-1]
keywords = keyword.split(" ")
keyword = "".join([e.strip("\r\n") for e in keywords if e])
return keyword
def sync_data_daemon(stdoutpipe):
qa_li = get_qa_list("screenshots/QA.txt")
# ok = upload_to_cloud(qa_li)
# if ok:
    #     stdoutpipe.put("Synced data to the cloud successfully")
    # else:
    #     stdoutpipe.put("Failed to sync data to the cloud")
def check_path(path):
exists = os.path.exists(path)
if (not exists):
os.makedirs(path, 0x777, True)
def prompt_message():
global game_type
print("""
    Please select a quiz show:
    1. 百万英雄
    2. 百万赢家
    3. 冲顶大会
    4. 芝士超人
    5. 知识英雄
    6. 疯狂夺金
    7. 黄金十秒
    8. 点题成金
    9. UC答题
    10. 自适应 (auto-detect)
    """)
    game_type = input("Enter the show number: ")
if game_type == "1":
game_type = '百万英雄'
elif game_type == "2":
game_type = '百万赢家'
elif game_type == "3":
game_type = '冲顶大会'
elif game_type == "4":
game_type = "芝士超人"
elif game_type == "5":
game_type = '知识英雄'
elif game_type == "6":
game_type = '疯狂夺金'
elif game_type == "7":
game_type = '黄金十秒'
elif game_type == "8":
game_type = '点题成金'
elif game_type == "9":
game_type = "UC答题"
elif game_type == "10":
game_type = "自适应"
else:
game_type = '自适应'
def main():
args = parse_args()
timeout = args.timeout
adb_bin = get_adb_tool()
if use_monitor:
os.system("{0} connect 127.0.0.1:62001".format(adb_bin))
check_screenshot(filename="screenshot.png", directory=data_directory)
std_pipe = ProcessStdout()
sync_job = threading.Thread(target=sync_data_daemon, args=(std_pipe.queue,))
sync_job.daemon = True
sync_job.start()
if enable_chrome:
closer = Event()
noticer = Event()
noticer.clear()
reader, writer = Pipe()
browser_daemon = multiprocessing.Process(
target=run_browser, args=(closer, noticer, reader,))
browser_daemon.daemon = True
browser_daemon.start()
def __inner_job():
start = time.time()
image_binary = analyze_current_screen_text(
directory=data_directory,
compress_level=image_compress_level[0],
crop_area=crop_areas[game_type]
)
if not image_binary:
            print("no question and answers detected")
return
keywords = get_text_from_image(
image_data=image_binary,
timeout=timeout
)
if not keywords:
print("text not recognize")
return
true_flag, real_question, question, answers = parse_question_and_answer(keywords)
### parse for answer
#answers = map(lambda a: a.rsplit(":")[-1], answers)
#answers = list(map(lambda a: a.rsplit(".")[-1], answers))
for i in range(len(answers)):
a = re.sub('\s', '', answers[i])
a = re.sub(u"^[0-9A-Za-z]+[.:。]+", '', a)
answers[i] = a
print("~" * 60)
print("{0}\n{1}".format(real_question, "\n".join(answers)))
print("~" * 60)
if enable_chrome:
writer.send(question)
noticer.set()
summary = baidu_count(question, answers, timeout=timeout)
summary_li = sorted(summary.items(), key=operator.itemgetter(1), reverse=True)
if true_flag:
            recommend = "{0}\n{1}".format(
                "Affirmative (**): {0}".format(summary_li[0][0]),
                "Negative ( ): {0}".format(summary_li[-1][0]))
        else:
            recommend = "{0}\n{1}".format(
                "Affirmative ( ): {0}".format(summary_li[0][0]),
                "Negative (**): {0}".format(summary_li[-1][0]))
print("*" * 60)
print(recommend)
print("*" * 60)
ans = kwquery(real_question)
print("-" * 60)
        print("\n".join(wrap(" ".join(ans), 60)))
print("-" * 60)
end = time.time()
        print("took {0:.2f} seconds".format(end - start))
save_screen(directory_src=data_directory, directory_dst=os.path.join(data_directory, game_type))
save_question_answers_to_file(real_question, answers, directory=data_directory)
prompt_message()
check_path(os.path.join(data_directory, game_type))
while True:
        enter = input("Press Enter to start, type 's' to switch shows, press Esc to exit...\n")
if enter == chr(27):
break
if enter == 's':
prompt_message()
check_path(os.path.join(data_directory, game_type))
try:
__inner_job()
except Exception as e:
logger.error(str(e), exc_info=True)
    print("Thanks for using; see you next time!")
if enable_chrome:
reader.close()
writer.close()
closer.set()
time.sleep(3)
if __name__ == "__main__":
multiprocessing.freeze_support()
main()
| 29.0625 | 104 | 0.598482 |
5dd4b5bb740045646621349ae2b5d440c34c6f6f | 8,903 | py | Python | scripts/linters/html_linter_test.py | kaylahardie/oppia | e93ed02dfc7f654ef4fb62268c1a9b9d9ded30ec | [
"Apache-2.0"
] | 1 | 2021-06-26T00:31:08.000Z | 2021-06-26T00:31:08.000Z | scripts/linters/html_linter_test.py | kaylahardie/oppia | e93ed02dfc7f654ef4fb62268c1a9b9d9ded30ec | [
"Apache-2.0"
] | null | null | null | scripts/linters/html_linter_test.py | kaylahardie/oppia | e93ed02dfc7f654ef4fb62268c1a9b9d9ded30ec | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
#
# Copyright 2020 The Oppia Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for scripts/linters/html_linter.py."""
from __future__ import absolute_import # pylint: disable=import-only-modules
from __future__ import unicode_literals # pylint: disable=import-only-modules
import multiprocessing
import os
from core.tests import test_utils
from . import html_linter
from . import pre_commit_linter
NAME_SPACE = multiprocessing.Manager().Namespace()
PROCESSES = multiprocessing.Manager().dict()
NAME_SPACE.files = pre_commit_linter.FileCache()
FILE_CACHE = NAME_SPACE.files
LINTER_TESTS_DIR = os.path.join(os.getcwd(), 'scripts', 'linters', 'test_files')
VALID_HTML_FILEPATH = os.path.join(LINTER_TESTS_DIR, 'valid.html')
INVALID_STYLE_INDENTATION_HTML_FILEPATH = os.path.join(
LINTER_TESTS_DIR, 'invalid_style_indentation.html')
INVALID_INDENTATION_HTML_FILEPATH = os.path.join(
LINTER_TESTS_DIR, 'invalid_indentation.html')
INVALID_QUOTES_HTML_FILEPATH = os.path.join(
LINTER_TESTS_DIR, 'invalid_quotes.html')
INVALID_ALIGNMENT_HTML_FILEPATH = os.path.join(
LINTER_TESTS_DIR, 'invalid_alignment_of_tags.html')
INVALID_MISSING_HTML_TAG_HTML_FILEPATH = os.path.join(
LINTER_TESTS_DIR, 'invalid_missing_html_tag.html')
INVALID_TAG_MISMATCH_HTML_FILEPATH = os.path.join(
LINTER_TESTS_DIR, 'invalid_tag_mismatch.html')
INVALID_MISMATCH_INDENTATION_HTML_FILEPATH = os.path.join(
LINTER_TESTS_DIR, 'invalid_mismatch_indentation.html')
INVALID_MISMATCHED_TAGS_HTML_FILEPATH = os.path.join(
LINTER_TESTS_DIR, 'invalid_mismatched_tags.html')
class CustomHTMLParserTests(test_utils.LinterTestBase):
"""Tests for CustomHTMLParser class."""
def test_custom_linter_with_invalid_style_indentation(self):
with self.print_swap:
html_linter.HTMLLintChecksManager(
[INVALID_STYLE_INDENTATION_HTML_FILEPATH], FILE_CACHE, True,
debug=True).perform_all_lint_checks()
self.assert_same_list_elements([
'invalid_style_indentation.html --> Expected indentation of 6,'
' found indentation of 4 for content of style tag on line 7'
], self.linter_stdout)
self.assert_failed_messages_count(self.linter_stdout, 1)
def test_custom_linter_with_invalid_indentation(self):
with self.print_swap:
html_linter.HTMLLintChecksManager(
[INVALID_INDENTATION_HTML_FILEPATH], FILE_CACHE, True,
debug=True).perform_all_lint_checks()
self.assert_same_list_elements([
'Expected indentation of 10, found indentation of 12 for '
'classroom-page tag on line 14'], self.linter_stdout)
self.assert_failed_messages_count(self.linter_stdout, 1)
def test_custom_linter_with_invalid_quotes(self):
with self.print_swap:
html_linter.HTMLLintChecksManager(
[INVALID_QUOTES_HTML_FILEPATH], FILE_CACHE, True,
debug=True).perform_all_lint_checks()
self.assert_same_list_elements([
'The value color:white; of attribute '
'style for the tag content on line 12 should be enclosed '
'within double quotes.'], self.linter_stdout)
self.assert_failed_messages_count(self.linter_stdout, 1)

    def test_custom_linter_with_invalid_alignment(self):
with self.print_swap:
html_linter.HTMLLintChecksManager(
[INVALID_ALIGNMENT_HTML_FILEPATH], FILE_CACHE, True,
debug=True).perform_all_lint_checks()
self.assert_same_list_elements([
'Attribute for tag content on line 13 should align with the '
'leftmost attribute on line 12'], self.linter_stdout)
self.assert_failed_messages_count(self.linter_stdout, 1)

    def test_custom_linter_with_invalid_tags(self):
with self.print_swap:
with self.assertRaisesRegexp(
html_linter.TagMismatchException, 'Error in line 2 of file'):
html_linter.HTMLLintChecksManager(
[INVALID_MISMATCHED_TAGS_HTML_FILEPATH], FILE_CACHE, True,
debug=True).perform_all_lint_checks()

    def test_custom_linter_with_tag_mismatch(self):
with self.print_swap:
with self.assertRaisesRegexp(
html_linter.TagMismatchException, 'Error in line 13 of file'):
html_linter.HTMLLintChecksManager(
[INVALID_TAG_MISMATCH_HTML_FILEPATH], FILE_CACHE, True,
debug=True).perform_all_lint_checks()

    def test_custom_linter_with_mismatched_indentation(self):
with self.print_swap:
html_linter.HTMLLintChecksManager(
[INVALID_MISMATCH_INDENTATION_HTML_FILEPATH], FILE_CACHE, True,
debug=True).perform_all_lint_checks()
self.assert_same_list_elements([
'Indentation for end tag content on line 18 does not match the'
' indentation of the start tag content on line 12'
], self.linter_stdout)
self.assert_failed_messages_count(self.linter_stdout, 1)

    def test_custom_without_html_end_tag(self):
with self.print_swap:
with self.assertRaisesRegexp(
html_linter.TagMismatchException, 'Error in file'):
html_linter.HTMLLintChecksManager(
[INVALID_MISSING_HTML_TAG_HTML_FILEPATH], FILE_CACHE, True,
debug=True).perform_all_lint_checks()

    def test_valid_html_file_with_custom_linter(self):
with self.print_swap:
html_linter.HTMLLintChecksManager(
[VALID_HTML_FILEPATH], FILE_CACHE, True,
debug=True).perform_all_lint_checks()
self.assert_same_list_elements(
['SUCCESS'], self.linter_stdout)
self.assert_failed_messages_count(self.linter_stdout, 0)

    def test_custom_linter_with_no_files(self):
with self.print_swap:
html_linter.HTMLLintChecksManager(
[], FILE_CACHE, True,
debug=True).perform_all_lint_checks()
self.assert_same_list_elements(
['There are no HTML files to lint.'], self.linter_stdout)
self.assert_failed_messages_count(self.linter_stdout, 0)

    def test_third_party_linter_with_no_files(self):
with self.print_swap:
html_linter.ThirdPartyHTMLLintChecksManager(
[], True).perform_all_lint_checks()
self.assert_same_list_elements(
['There are no HTML files to lint.'],
self.linter_stdout)
self.assert_failed_messages_count(self.linter_stdout, 0)

    def test_third_party_linter_with_verbose_mode_enabled(self):
with self.print_swap:
html_linter.ThirdPartyHTMLLintChecksManager(
[VALID_HTML_FILEPATH], True).perform_all_lint_checks()
self.assert_same_list_elements(
['SUCCESS HTML linting passed'],
self.linter_stdout)
self.assert_failed_messages_count(self.linter_stdout, 0)

    def test_third_party_linter_with_verbose_mode_disabled(self):
with self.print_swap:
html_linter.ThirdPartyHTMLLintChecksManager(
[VALID_HTML_FILEPATH], False).perform_all_lint_checks()
self.assert_same_list_elements(
['SUCCESS HTML linting passed'],
self.linter_stdout)
self.assert_failed_messages_count(self.linter_stdout, 0)

    def test_third_party_linter_with_lint_errors(self):
with self.print_swap:
html_linter.ThirdPartyHTMLLintChecksManager(
[INVALID_QUOTES_HTML_FILEPATH], True).perform_all_lint_checks()
self.assert_same_list_elements(
['line 10, col 20, line contains trailing whitespace'],
self.linter_stdout)
self.assert_failed_messages_count(self.linter_stdout, 1)

    def test_get_linters_with_success(self):
custom_linter, third_party_linter = html_linter.get_linters(
[VALID_HTML_FILEPATH], FILE_CACHE, verbose_mode_enabled=True)
self.assertTrue(
isinstance(custom_linter, html_linter.HTMLLintChecksManager))
self.assertTrue(
isinstance(
third_party_linter,
html_linter.ThirdPartyHTMLLintChecksManager))
| 44.964646 | 80 | 0.703021 |
8c7c1ad73bdcfc7b6f47a71fde6ebed4ff54b44c | 1,853 | py | Python | omdbie/_types.py | DefaltSimon/OMDbie | 9a9e1a31a17eb39a87146476bd3669232ba7d3e2 | ["MIT"] | null | null | null | omdbie/_types.py | DefaltSimon/OMDbie | 9a9e1a31a17eb39a87146476bd3669232ba7d3e2 | ["MIT"] | null | null | null | omdbie/_types.py | DefaltSimon/OMDbie | 9a9e1a31a17eb39a87146476bd3669232ba7d3e2 | ["MIT"] | null | null | null |

"""
Types for OMDbie
"""


class VideoType:
movie = "movie"
series = "series"
episode = "episode"


class PlotLength:
short = "short"
full = "full"


class Connectors:
requests = "requests"
aiohttp = "aiohttp"
urllib = "urllib"


class Endpoints:
base = "http://www.omdbapi.com/?"


movie_transcodes = {
"BoxOffice": "box_office",
"imdbRating": "imdb_rating",
"imdbVotes": "imdb_votes",
"imdbID": "imdb_id"
}


class Movie:
__slots__ = (
"title", "year", "rated", "released", "runtime", "genre", "director", "writer",
"actors", "plot", "language", "country", "poster", "ratings", "metascore",
"imdb_rating", "imdb_votes", "imdb_id", "type", "dvd", "box_office", "production",
"website", "awards", "total_seasons", "trailer", "metascore"
)

    def __init__(self, **fields):
# Shift specified keys to camel_case instead of camelCase
for key_from, key_to in movie_transcodes.items():
if fields.get(key_from):
fields[key_to] = fields.get(key_from)
del fields[key_from]
for name, value in fields.items():
try:
self.__setattr__(name.lower(), value)
except AttributeError:
pass


series_transcodes = {
"totalSeasons": "total_seasons",
}


class Series(Movie):
# Keeps Movie's __slots__
__slots__ = ()

    def __init__(self, **fields):
super().__init__(**fields)
for key_from, key_to in series_transcodes.items():
if fields.get(key_from):
fields[key_to] = fields.get(key_from)
del fields[key_from]
for name, value in fields.items():
try:
self.__setattr__(name.lower(), value)
except AttributeError:
pass
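A quick sketch of how the key-shifting loop in `Movie.__init__` behaves, using a made-up response dict (the field names mirror the OMDb API's camelCase keys; the values are hypothetical):

```python
# Hypothetical payload, standing in for a parsed OMDb API response.
fields = {"Title": "Blade Runner", "Year": "1982",
          "imdbRating": "8.1", "imdbID": "tt0083658"}

movie_transcodes = {
    "BoxOffice": "box_office",
    "imdbRating": "imdb_rating",
    "imdbVotes": "imdb_votes",
    "imdbID": "imdb_id",
}

# Same loop as Movie.__init__: move camelCase keys to snake_case,
# leaving keys that are absent from the payload untouched.
for key_from, key_to in movie_transcodes.items():
    if fields.get(key_from):
        fields[key_to] = fields.get(key_from)
        del fields[key_from]

print(sorted(fields))  # ['Title', 'Year', 'imdb_id', 'imdb_rating']
```

The point of the transcode step is that the resulting attribute names fit the `__slots__` tuple (and PEP 8 naming) while the class can still be fed raw API payloads.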
| 23.455696 | 90 | 0.570426 |
4681da7fb5392c3d12e5f5c2311ccde368e2d557 | 1,096 | py | Python | indico/modules/events/abstracts/models/fields.py | aiforrural/Digital-Events-Example | 628aaa8727b259b9367ac0ae1c5ba8e9e95eca82 | ["MIT"] | 1 | 2021-02-08T09:34:27.000Z | 2021-02-08T09:34:27.000Z | indico/modules/events/abstracts/models/fields.py | pamirk/indico | c3b4e06b11cc21ad497f74d0b2ca901bc1b2a768 | ["MIT"] | null | null | null | indico/modules/events/abstracts/models/fields.py | pamirk/indico | c3b4e06b11cc21ad497f74d0b2ca901bc1b2a768 | ["MIT"] | null | null | null |

# This file is part of Indico.
# Copyright (C) 2002 - 2020 CERN
#
# Indico is free software; you can redistribute it and/or
# modify it under the terms of the MIT License; see the
# LICENSE file for more details.
from indico.core.db import db
from indico.modules.events.contributions.models.fields import ContributionFieldValueBase
from indico.util.string import format_repr, text_to_repr


class AbstractFieldValue(ContributionFieldValueBase):
"""Store a field values related to abstracts."""
__tablename__ = 'abstract_field_values'
__table_args__ = {'schema': 'event_abstracts'}
contribution_field_backref_name = 'abstract_values'
abstract_id = db.Column(
db.Integer,
db.ForeignKey('event_abstracts.abstracts.id'),
index=True,
nullable=False,
primary_key=True
)
# relationship backrefs:
# - abstract (Abstract.field_values)

    def __repr__(self):
text = text_to_repr(self.data) if isinstance(self.data, str) else self.data
return format_repr(self, 'abstract_id', 'contribution_field_id', _text=text)
| 32.235294 | 88 | 0.725365 |
b292677a357da1ceb38bd8153927242407fa7e18 | 5,751 | py | Python | networks/dqn_net.py | nsortur/equi_rl | 83bd2ee9dfaab715e51b71ffff90ab990aaed5f8 | ["MIT"] | 9 | 2022-02-20T18:18:51.000Z | 2022-03-24T03:04:44.000Z | networks/dqn_net.py | nsortur/equi_rl | 83bd2ee9dfaab715e51b71ffff90ab990aaed5f8 | ["MIT"] | null | null | null | networks/dqn_net.py | nsortur/equi_rl | 83bd2ee9dfaab715e51b71ffff90ab990aaed5f8 | ["MIT"] | 2 | 2022-02-19T05:17:06.000Z | 2022-02-21T20:53:26.000Z |

import torch
import torch.nn as nn
import torch.nn.functional as F
# similar amount of parameters
class CNNCom(nn.Module):
def __init__(self, input_shape=(2, 128, 128), n_p=2, n_theta=1):
super().__init__()
self.n_inv = 3 * n_theta * n_p
self.conv = torch.nn.Sequential(
# 128x128
nn.Conv2d(input_shape[0], 32, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
# 64x64
nn.Conv2d(32, 64, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
# 32x32
nn.Conv2d(64, 128, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
# 16x16
nn.Conv2d(128, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
# 8x8
nn.Conv2d(256, 512, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(512, 512, kernel_size=3, padding=0),
nn.ReLU(inplace=True),
# 6x6
nn.MaxPool2d(2),
# 3x3
nn.Conv2d(512, 18, kernel_size=1, padding=0),
)
self.n_p = n_p
self.n_theta = n_theta
for m in self.named_modules():
if isinstance(m[1], nn.Conv2d):
# nn.init.kaiming_normal_(m[1].weight.data)
nn.init.xavier_normal_(m[1].weight.data)
elif isinstance(m[1], nn.BatchNorm2d):
m[1].weight.data.fill_(1)
m[1].bias.data.zero_()

    def forward(self, x):
batch_size = x.shape[0]
q = self.conv(x)
q = q.reshape(batch_size, self.n_inv, 9).permute(0, 2, 1)
return q
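A stdlib-only sketch of the spatial arithmetic behind `CNNCom`: each `Conv2d(k=3, padding=1)` preserves the spatial size and each `MaxPool2d(2)` halves it, so a 128×128 input reaches the final 1×1 conv as a 3×3 map — which is why `forward` reshapes to 9 spatial positions. Note that the reshape also assumes `n_inv` matches the 18 output channels, i.e. `3 * n_theta * n_p == 18` (e.g. `n_p=2`, `n_theta=3`); the defaults shown above would not satisfy that, so callers presumably pass matching values.

```python
def conv2d_out(size, kernel, padding=0, stride=1):
    # Standard Conv2d output-size formula.
    return (size + 2 * padding - kernel) // stride + 1

size = 128
for _ in range(4):                       # four conv(k=3, pad=1) + maxpool(2) stages
    size = conv2d_out(size, 3, padding=1) // 2   # 128 -> 64 -> 32 -> 16 -> 8
size = conv2d_out(size, 3, padding=1)    # padded conv keeps 8x8
size = conv2d_out(size, 3, padding=0)    # unpadded conv -> 6x6
size //= 2                               # final maxpool -> 3x3
print(size)  # 3
```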


class DQNComCURL(nn.Module):
def __init__(self, input_shape=(2, 128, 128), n_p=2, n_theta=1, curl_z=128):
super().__init__()
conv_out_size = ((((input_shape[1]//2)//2)//2)//2)
self.conv = torch.nn.Sequential(
# 128x128
nn.Conv2d(input_shape[0], 32, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
# 64x64
nn.Conv2d(32, 64, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
# 32x32
nn.Conv2d(64, 128, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
# 16x16
nn.Conv2d(128, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
# 8x8
nn.Conv2d(256, 512, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Flatten(),
torch.nn.Linear(512 * conv_out_size * conv_out_size, 1024),
nn.ReLU(inplace=True),
)
self.W_h = nn.Parameter(torch.rand(1024, 256))
self.layer_norm_1 = nn.LayerNorm(256)
self.W_c = nn.Parameter(torch.rand(256, 128))
self.b_h = nn.Parameter(torch.zeros(256))
self.b_c = nn.Parameter(torch.zeros(128))
self.W = nn.Parameter(torch.rand(128, 128))
self.layer_norm_2 = nn.LayerNorm(128)
self.n_p = n_p
self.n_theta = n_theta
self.fc = torch.nn.Sequential(
torch.nn.Linear(1024, 9 * 3 * n_theta * n_p),
)
for m in self.named_modules():
if isinstance(m[1], nn.Conv2d):
# nn.init.kaiming_normal_(m[1].weight.data)
nn.init.xavier_normal_(m[1].weight.data)
elif isinstance(m[1], nn.BatchNorm2d):
m[1].weight.data.fill_(1)
m[1].bias.data.zero_()

    def forward(self, x):
x = self.conv(x)
q = self.fc(x)
h = torch.matmul(x, self.W_h) + self.b_h # Contrastive head
h = self.layer_norm_1(h)
h = F.relu(h)
h = torch.matmul(h, self.W_c) + self.b_c # Contrastive head
h = self.layer_norm_2(h)
return q, h


class DQNComCURLOri(nn.Module):
def __init__(self, input_shape=(2, 128, 128), n_p=2, n_theta=1):
super().__init__()
self.conv = torch.nn.Sequential(
nn.Conv2d(input_shape[0], 32, 5, stride=5, padding=0),
nn.ReLU(),
nn.Conv2d(32, 64, 5, stride=5, padding=0),
nn.ReLU(),
nn.Flatten(),
)
x = torch.randn([1] + list(input_shape))
conv_out_dim = self.conv(x).reshape(-1).shape[-1]
self.W_h = nn.Parameter(torch.rand(conv_out_dim, 256))
self.layer_norm_1 = nn.LayerNorm(256)
self.W_c = nn.Parameter(torch.rand(256, 128))
self.b_h = nn.Parameter(torch.zeros(256))
self.b_c = nn.Parameter(torch.zeros(128))
self.W = nn.Parameter(torch.rand(128, 128))
self.layer_norm_2 = nn.LayerNorm(128)
self.n_p = n_p
self.n_theta = n_theta
self.fc = torch.nn.Sequential(
torch.nn.Linear(conv_out_dim, 256),
nn.ReLU(),
torch.nn.Linear(256, 9 * 3 * n_theta * n_p),
)
for m in self.named_modules():
if isinstance(m[1], nn.Conv2d):
# nn.init.kaiming_normal_(m[1].weight.data)
nn.init.xavier_normal_(m[1].weight.data)
elif isinstance(m[1], nn.BatchNorm2d):
m[1].weight.data.fill_(1)
m[1].bias.data.zero_()

    def forward(self, x):
x = self.conv(x)
q = self.fc(x)
h = torch.matmul(x, self.W_h) + self.b_h # Contrastive head
h = self.layer_norm_1(h)
h = F.relu(h)
h = torch.matmul(h, self.W_c) + self.b_c # Contrastive head
h = self.layer_norm_2(h)
        return q, h
| 34.854545 | 80 | 0.53069 |
09b9a72866a067280729714b6e742ce85f064b2d | 7,744 | py | Python | test/functional/p2p_invalid_tx.py | MrCryptoBeast/WWW | 857e860df0aa1bc7fde2ee6f5918ff32933beeb3 | ["MIT"] | null | null | null | test/functional/p2p_invalid_tx.py | MrCryptoBeast/WWW | 857e860df0aa1bc7fde2ee6f5918ff32933beeb3 | ["MIT"] | null | null | null | test/functional/p2p_invalid_tx.py | MrCryptoBeast/WWW | 857e860df0aa1bc7fde2ee6f5918ff32933beeb3 | ["MIT"] | null | null | null |

#!/usr/bin/env python3
# Copyright (c) 2015-2020 The worldwideweb Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test node responses to invalid transactions.
In this test we connect to one node over p2p, and test tx requests."""
from test_framework.blocktools import create_block, create_coinbase
from test_framework.messages import (
COIN,
COutPoint,
CTransaction,
CTxIn,
CTxOut,
)
from test_framework.p2p import P2PDataStore
from test_framework.test_framework import worldwidewebTestFramework
from test_framework.util import (
assert_equal,
)
from data import invalid_txs


class InvalidTxRequestTest(worldwidewebTestFramework):
def set_test_params(self):
self.num_nodes = 1
self.extra_args = [[
"-acceptnonstdtxn=1",
]]
self.setup_clean_chain = True

    def bootstrap_p2p(self, *, num_connections=1):
"""Add a P2P connection to the node.
Helper to connect and wait for version handshake."""
for _ in range(num_connections):
self.nodes[0].add_p2p_connection(P2PDataStore())

    def reconnect_p2p(self, **kwargs):
"""Tear down and bootstrap the P2P connection to the node.
The node gets disconnected several times in this test. This helper
method reconnects the p2p and restarts the network thread."""
self.nodes[0].disconnect_p2ps()
self.bootstrap_p2p(**kwargs)

    def run_test(self):
node = self.nodes[0] # convenience reference to the node
self.bootstrap_p2p() # Add one p2p connection to the node
best_block = self.nodes[0].getbestblockhash()
tip = int(best_block, 16)
best_block_time = self.nodes[0].getblock(best_block)['time']
block_time = best_block_time + 1
self.log.info("Create a new block with an anyone-can-spend coinbase.")
height = 1
block = create_block(tip, create_coinbase(height), block_time)
block.solve()
# Save the coinbase for later
block1 = block
tip = block.sha256
node.p2ps[0].send_blocks_and_test([block], node, success=True)
self.log.info("Mature the block.")
self.generatetoaddress(self.nodes[0], 100, self.nodes[0].get_deterministic_priv_key().address)
# Iterate through a list of known invalid transaction types, ensuring each is
# rejected. Some are consensus invalid and some just violate policy.
for BadTxTemplate in invalid_txs.iter_all_templates():
self.log.info("Testing invalid transaction: %s", BadTxTemplate.__name__)
template = BadTxTemplate(spend_block=block1)
tx = template.get_tx()
node.p2ps[0].send_txs_and_test(
[tx], node, success=False,
expect_disconnect=template.expect_disconnect,
reject_reason=template.reject_reason,
)
if template.expect_disconnect:
self.log.info("Reconnecting to peer")
self.reconnect_p2p()
# Make two p2p connections to provide the node with orphans
# * p2ps[0] will send valid orphan txs (one with low fee)
# * p2ps[1] will send an invalid orphan tx (and is later disconnected for that)
self.reconnect_p2p(num_connections=2)
self.log.info('Test orphan transaction handling ... ')
# Create a root transaction that we withhold until all dependent transactions
# are sent out and in the orphan cache
SCRIPT_PUB_KEY_OP_TRUE = b'\x51\x75' * 15 + b'\x51'
tx_withhold = CTransaction()
tx_withhold.vin.append(CTxIn(outpoint=COutPoint(block1.vtx[0].sha256, 0)))
tx_withhold.vout.append(CTxOut(nValue=50 * COIN - 12000, scriptPubKey=SCRIPT_PUB_KEY_OP_TRUE))
tx_withhold.calc_sha256()
# Our first orphan tx with some outputs to create further orphan txs
tx_orphan_1 = CTransaction()
tx_orphan_1.vin.append(CTxIn(outpoint=COutPoint(tx_withhold.sha256, 0)))
tx_orphan_1.vout = [CTxOut(nValue=10 * COIN, scriptPubKey=SCRIPT_PUB_KEY_OP_TRUE)] * 3
tx_orphan_1.calc_sha256()
# A valid transaction with low fee
tx_orphan_2_no_fee = CTransaction()
tx_orphan_2_no_fee.vin.append(CTxIn(outpoint=COutPoint(tx_orphan_1.sha256, 0)))
tx_orphan_2_no_fee.vout.append(CTxOut(nValue=10 * COIN, scriptPubKey=SCRIPT_PUB_KEY_OP_TRUE))
# A valid transaction with sufficient fee
tx_orphan_2_valid = CTransaction()
tx_orphan_2_valid.vin.append(CTxIn(outpoint=COutPoint(tx_orphan_1.sha256, 1)))
tx_orphan_2_valid.vout.append(CTxOut(nValue=10 * COIN - 12000, scriptPubKey=SCRIPT_PUB_KEY_OP_TRUE))
tx_orphan_2_valid.calc_sha256()
# An invalid transaction with negative fee
tx_orphan_2_invalid = CTransaction()
tx_orphan_2_invalid.vin.append(CTxIn(outpoint=COutPoint(tx_orphan_1.sha256, 2)))
tx_orphan_2_invalid.vout.append(CTxOut(nValue=11 * COIN, scriptPubKey=SCRIPT_PUB_KEY_OP_TRUE))
tx_orphan_2_invalid.calc_sha256()
self.log.info('Send the orphans ... ')
# Send valid orphan txs from p2ps[0]
node.p2ps[0].send_txs_and_test([tx_orphan_1, tx_orphan_2_no_fee, tx_orphan_2_valid], node, success=False)
# Send invalid tx from p2ps[1]
node.p2ps[1].send_txs_and_test([tx_orphan_2_invalid], node, success=False)
assert_equal(0, node.getmempoolinfo()['size']) # Mempool should be empty
assert_equal(2, len(node.getpeerinfo())) # p2ps[1] is still connected
self.log.info('Send the withhold tx ... ')
with node.assert_debug_log(expected_msgs=["bad-txns-in-belowout"]):
node.p2ps[0].send_txs_and_test([tx_withhold], node, success=True)
# Transactions that should end up in the mempool
expected_mempool = {
t.hash
for t in [
tx_withhold, # The transaction that is the root for all orphans
tx_orphan_1, # The orphan transaction that splits the coins
tx_orphan_2_valid, # The valid transaction (with sufficient fee)
]
}
# Transactions that do not end up in the mempool:
# tx_orphan_2_no_fee, because it has too low fee (p2ps[0] is not disconnected for relaying that tx)
# tx_orphan_2_invalid, because it has negative fee (p2ps[1] is disconnected for relaying that tx)
self.wait_until(lambda: 1 == len(node.getpeerinfo()), timeout=12) # p2ps[1] is no longer connected
assert_equal(expected_mempool, set(node.getrawmempool()))
self.log.info('Test orphan pool overflow')
orphan_tx_pool = [CTransaction() for _ in range(101)]
for i in range(len(orphan_tx_pool)):
orphan_tx_pool[i].vin.append(CTxIn(outpoint=COutPoint(i, 333)))
orphan_tx_pool[i].vout.append(CTxOut(nValue=11 * COIN, scriptPubKey=SCRIPT_PUB_KEY_OP_TRUE))
with node.assert_debug_log(['orphanage overflow, removed 1 tx']):
node.p2ps[0].send_txs_and_test(orphan_tx_pool, node, success=False)
rejected_parent = CTransaction()
rejected_parent.vin.append(CTxIn(outpoint=COutPoint(tx_orphan_2_invalid.sha256, 0)))
rejected_parent.vout.append(CTxOut(nValue=11 * COIN, scriptPubKey=SCRIPT_PUB_KEY_OP_TRUE))
rejected_parent.rehash()
with node.assert_debug_log(['not keeping orphan with rejected parents {}'.format(rejected_parent.hash)]):
node.p2ps[0].send_txs_and_test([rejected_parent], node, success=False)


if __name__ == '__main__':
InvalidTxRequestTest().main()
| 45.552941 | 113 | 0.680914 |
8c5f58e8991097f68639bc20e36572fe77437e37 | 6,783 | py | Python | bindings/python/ensmallen_graph/datasets/string/nocardiasp348mftsu51.py | caufieldjh/ensmallen_graph | 14e98b1cdbc73193a84a913d7d4f2b2b3eb2c43a | ["MIT"] | null | null | null | bindings/python/ensmallen_graph/datasets/string/nocardiasp348mftsu51.py | caufieldjh/ensmallen_graph | 14e98b1cdbc73193a84a913d7d4f2b2b3eb2c43a | ["MIT"] | null | null | null | bindings/python/ensmallen_graph/datasets/string/nocardiasp348mftsu51.py | caufieldjh/ensmallen_graph | 14e98b1cdbc73193a84a913d7d4f2b2b3eb2c43a | ["MIT"] | null | null | null |

"""
This file offers the methods to automatically retrieve the graph Nocardia sp. 348MFTsu51.
The graph is automatically retrieved from the STRING repository.
Report
---------------------
At the time of rendering these methods (please see datetime below), the graph
had the following characteristics:
Datetime: 2021-02-02 23:07:21.512251
The undirected graph Nocardia sp. 348MFTsu51 has 5155 nodes and 779420
weighted edges, of which none are self-loops. The graph is dense as it
has a density of 0.05867 and has 19 connected components, where the component
with most nodes has 5114 nodes and the component with the least nodes has
2 nodes. The graph median node degree is 251, the mean node degree is 302.39,
and the node degree mode is 3. The top 5 most central nodes are 1172185.KB911526_gene740
(degree 2258), 1172185.KB911514_gene4126 (degree 2191), 1172185.KB911515_gene2303
(degree 1756), 1172185.KB911511_gene3233 (degree 1544) and 1172185.KB911522_gene1877
(degree 1533).
References
---------------------
Please cite the following if you use the data:
@article{szklarczyk2019string,
title={STRING v11: protein--protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets},
author={Szklarczyk, Damian and Gable, Annika L and Lyon, David and Junge, Alexander and Wyder, Stefan and Huerta-Cepas, Jaime and Simonovic, Milan and Doncheva, Nadezhda T and Morris, John H and Bork, Peer and others},
journal={Nucleic acids research},
volume={47},
number={D1},
pages={D607--D613},
year={2019},
publisher={Oxford University Press}
}
Usage example
----------------------
The usage of this graph is relatively straightforward:
.. code:: python
# First import the function to retrieve the graph from the datasets
from ensmallen_graph.datasets.string import NocardiaSp348mftsu51
# Then load the graph
graph = NocardiaSp348mftsu51()
# Finally, you can do anything with it, for instance, compute its report:
print(graph)
# If you need to run a link prediction task with validation,
# you can split the graph using a connected holdout as follows:
train_graph, validation_graph = graph.connected_holdout(
        # You can use an 80/20 split for the holdout, for example.
train_size=0.8,
# The random state is used to reproduce the holdout.
random_state=42,
        # Whether to show a loading bar.
verbose=True
)
# Remember that, if you need, you can enable the memory-time trade-offs:
train_graph.enable(
vector_sources=True,
vector_destinations=True,
vector_outbounds=True
)
# Consider using the methods made available in the Embiggen package
# to run graph embedding or link prediction tasks.
"""
from typing import Dict
from ..automatic_graph_retrieval import AutomaticallyRetrievedGraph
from ...ensmallen_graph import EnsmallenGraph # pylint: disable=import-error


def NocardiaSp348mftsu51(
directed: bool = False,
verbose: int = 2,
cache_path: str = "graphs/string",
**additional_graph_kwargs: Dict
) -> EnsmallenGraph:
"""Return new instance of the Nocardia sp. 348MFTsu51 graph.
The graph is automatically retrieved from the STRING repository.
Parameters
-------------------
directed: bool = False,
        Whether to load the graph as directed or undirected.
By default false.
verbose: int = 2,
        Whether to show loading bars during the retrieval and building
of the graph.
cache_path: str = "graphs",
Where to store the downloaded graphs.
additional_graph_kwargs: Dict,
Additional graph kwargs.
Returns
-----------------------
        Instance of Nocardia sp. 348MFTsu51 graph.
Report
---------------------
At the time of rendering these methods (please see datetime below), the graph
had the following characteristics:
Datetime: 2021-02-02 23:07:21.512251
The undirected graph Nocardia sp. 348MFTsu51 has 5155 nodes and 779420
weighted edges, of which none are self-loops. The graph is dense as it
has a density of 0.05867 and has 19 connected components, where the component
with most nodes has 5114 nodes and the component with the least nodes has
2 nodes. The graph median node degree is 251, the mean node degree is 302.39,
and the node degree mode is 3. The top 5 most central nodes are 1172185.KB911526_gene740
(degree 2258), 1172185.KB911514_gene4126 (degree 2191), 1172185.KB911515_gene2303
(degree 1756), 1172185.KB911511_gene3233 (degree 1544) and 1172185.KB911522_gene1877
(degree 1533).
References
---------------------
Please cite the following if you use the data:
@article{szklarczyk2019string,
title={STRING v11: protein--protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets},
author={Szklarczyk, Damian and Gable, Annika L and Lyon, David and Junge, Alexander and Wyder, Stefan and Huerta-Cepas, Jaime and Simonovic, Milan and Doncheva, Nadezhda T and Morris, John H and Bork, Peer and others},
journal={Nucleic acids research},
volume={47},
number={D1},
pages={D607--D613},
year={2019},
publisher={Oxford University Press}
}
Usage example
----------------------
The usage of this graph is relatively straightforward:
.. code:: python
# First import the function to retrieve the graph from the datasets
from ensmallen_graph.datasets.string import NocardiaSp348mftsu51
# Then load the graph
graph = NocardiaSp348mftsu51()
# Finally, you can do anything with it, for instance, compute its report:
print(graph)
# If you need to run a link prediction task with validation,
# you can split the graph using a connected holdout as follows:
train_graph, validation_graph = graph.connected_holdout(
            # You can use an 80/20 split for the holdout, for example.
train_size=0.8,
# The random state is used to reproduce the holdout.
random_state=42,
            # Whether to show a loading bar.
verbose=True
)
# Remember that, if you need, you can enable the memory-time trade-offs:
train_graph.enable(
vector_sources=True,
vector_destinations=True,
vector_outbounds=True
)
# Consider using the methods made available in the Embiggen package
# to run graph embedding or link prediction tasks.
"""
return AutomaticallyRetrievedGraph(
graph_name="NocardiaSp348mftsu51",
dataset="string",
directed=directed,
verbose=verbose,
cache_path=cache_path,
additional_graph_kwargs=additional_graph_kwargs
)()
| 35.513089 | 223 | 0.706325 |
eab7f1ac2d5036916eec06eaaa60a177b6f442d3 | 755 | py | Python | adafruit_circuitpython_libs/adafruit-circuitpython-bundle-py-20210214/examples/slideshow_simpletest.py | jacoblb64/pico_rgb_keypad_hid | 3251ca6a98ef86d9f98c54f639c4d61810601a0b | ["MIT"] | 47 | 2021-02-15T23:02:36.000Z | 2022-03-04T21:30:03.000Z | adafruit_circuitpython_libs/adafruit-circuitpython-bundle-py-20210214/examples/slideshow_simpletest.py | jacoblb64/pico_rgb_keypad_hid | 3251ca6a98ef86d9f98c54f639c4d61810601a0b | ["MIT"] | 23 | 2018-10-08T16:28:37.000Z | 2022-03-21T17:32:02.000Z | adafruit_circuitpython_libs/adafruit-circuitpython-bundle-py-20210214/examples/slideshow_simpletest.py | jacoblb64/pico_rgb_keypad_hid | 3251ca6a98ef86d9f98c54f639c4d61810601a0b | ["MIT"] | 14 | 2021-02-20T17:40:56.000Z | 2022-01-01T19:53:38.000Z |

# SPDX-FileCopyrightText: 2021 ladyada for Adafruit Industries
# SPDX-License-Identifier: MIT
"""Basic demonstration script will create a slideshow
object that plays through once alphabetically."""
import board
from adafruit_slideshow import PlayBackOrder, SlideShow
# use built in display (PyPortal, PyGamer, PyBadge, CLUE, etc.)
# see guide for setting up external displays (TFT / OLED breakouts, RGB matrices, etc.)
# https://learn.adafruit.com/circuitpython-display-support-using-displayio/display-and-display-bus
display = board.DISPLAY
# pylint: disable=no-member
slideshow = SlideShow(
board.DISPLAY,
None,
folder="/images/",
loop=False,
order=PlayBackOrder.ALPHABETICAL,
dwell=10,
)
while slideshow.update():
pass
| 27.962963 | 98 | 0.757616 |
2bd1fc6e915ea58d7ce1c0a3494696abe5476a30 | 12,548 | py | Python | src/reporting.py | ivallesp/ynab-reporter | c5d34d75999ce9558a04e0bf89e09085c70c1415 | ["MIT"] | 24 | 2020-06-20T21:29:52.000Z | 2022-01-20T08:42:27.000Z | src/reporting.py | ivallesp/ynab-reporter | c5d34d75999ce9558a04e0bf89e09085c70c1415 | ["MIT"] | 1 | 2021-06-28T10:35:43.000Z | 2021-07-04T12:18:39.000Z | src/reporting.py | ivallesp/ynab-reporter | c5d34d75999ce9558a04e0bf89e09085c70c1415 | ["MIT"] | 1 | 2020-09-21T17:19:41.000Z | 2020-09-21T17:19:41.000Z |

import calendar
import codecs
import math
import os
import sys
from datetime import datetime
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import pandas as pd
import numpy as np
from src.wrangling import calculate_daily_balances, get_ynab_dataset
import matplotlib as mpl
COLORS = [
"da4450",
"643a71",
"91acca",
"b964a4",
"0d6b96",
"06a77d",
"ff9d58",
"536b38",
]
# Use seaborn style
plt.style.use("seaborn")
# Set colors
mpl.rcParams["axes.prop_cycle"] = mpl.cycler(color=COLORS)
# Enable the legend frame
mpl.rcParams["legend.frameon"] = "True"
# Prevent matplotlib from cutting labels at export time
mpl.rcParams.update({"figure.autolayout": True})

MONTHS = [
"January",
"February",
"March",
"April",
"May",
"June",
"July",
"August",
"September",
"October",
"November",
"December",
]
def get_top_flows(year, month, n_rows):
df = get_ynab_dataset()
df = df.loc[lambda d: (d.date.dt.month == month) & (d.date.dt.year <= year)]
# Remove transfers and exclude categories
df = df[df.transfer_transaction_id.isnull()]
df = df[df.exclude == False]
# Separate inflows and outflows
df_in = df.loc[lambda d: d.amount >= 0]
df_out = df.loc[lambda d: d.amount < 0]
# Date format adjustment
df_in["date"] = df_in.date.dt.strftime("%b-%d")
df_out["date"] = df_out.date.dt.strftime("%b-%d")
# Limit the length of the memo text
df_in["memo"] = df_in["memo"].str.slice(0, 20)
df_out["memo"] = df_out["memo"].str.slice(0, 20)
# Define the original column names and the desired ones
columns = ["date", "account_name", "amount", "memo"]
fancy_colnames = ["Date", "Account", "Amount", "Memo"]
# Select columns and sort by amount
df_in = df_in[columns].sort_values(by="amount", ascending=False)
df_out = df_out[columns].sort_values(by="amount", ascending=True)
# Take the first N rows
df_in = df_in.head(n_rows)
df_out = df_out.head(n_rows)
# Rename the columns
df_in.columns = fancy_colnames
df_out.columns = fancy_colnames
return df_in, df_out


def calculate_financial_snapshot(year, month):
eom_day = calendar.monthrange(year, month)[1]
eom_date = datetime(year, month, eom_day)
# Load data and calculate daily balances
df_ynab = get_ynab_dataset()
df = calculate_daily_balances(df=df_ynab)
# Get the end of month balances
df = df[lambda d: d.date == eom_date]
# Aggregate at account level
df = df.groupby(["account_name"]).amount.sum()
# Sort accounts by decreasing balance
df = df.sort_values(ascending=False).reset_index()
# Remove accounts with 0€
df = df[lambda d: d.amount != 0]
# Add total
total = {"account_name": ["Total"], "amount": [df.amount.sum()]}
df_total = pd.DataFrame(total)
df = pd.concat([df, df_total], axis=0)
# Add fancy column names
df.columns = ["Account", "Amount"]
# Filter out the accounts with amount 0
df = df[(df.Amount < -0.01) | (df.Amount > 0.01) | (df.Account == "Total")]
# Transpose
df = df.set_index("Account").transpose()
return df
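The append-a-total-row-then-transpose pattern used above can be exercised on a toy frame (the account names and balances below are made up, standing in for the aggregated YNAB data):

```python
import pandas as pd

# Toy end-of-month balances per account.
df = pd.DataFrame({"account_name": ["Checking", "Savings", "Cash"],
                   "amount": [1200.0, 5000.0, 0.0]})
df = df.sort_values("amount", ascending=False).reset_index(drop=True)
df = df[df.amount != 0]                    # drop accounts with a zero balance
total = pd.DataFrame({"account_name": ["Total"], "amount": [df.amount.sum()]})
df = pd.concat([df, total], axis=0)        # append the total row
df.columns = ["Account", "Amount"]
snapshot = df.set_index("Account").transpose()   # one column per account

print(list(snapshot.columns))        # ['Savings', 'Checking', 'Total']
print(snapshot["Total"].iloc[0])     # 6200.0
```

Transposing at the end gives a one-row, account-per-column table, which is a convenient shape for dropping straight into a report.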


def generate_evolution_plot(year, month):
df_ynab = get_ynab_dataset()
df = calculate_daily_balances(df=df_ynab)
eom_day = calendar.monthrange(year, month)[1]
df = df.loc[df.date <= datetime(year, month, eom_day)]
# Aggregate at account level
df = df.groupby(["date", "account_name"]).amount.sum().reset_index()
# Pivot the account dimension
df = pd.pivot_table(df, index="date", columns="account_name", aggfunc="sum")
# Drop the accounts that have always had 0 balance
for col in df.columns:
if df[col].max() == 0:
df = df.drop(col, axis=1)
# Sort columns by average amount
df = df[df.sum(axis=0).sort_values(ascending=False).index]
# Build list of histories per account
histories = list(zip(*df.values.clip(0, None).tolist()))
# Calculate the histories for every account
yticks_freq = 5000 # Frequency of the y ticks
# Get the limits for the Y axis
top = yticks_freq * math.ceil(df.values.sum(axis=1).max() / yticks_freq)
bot = -top * 0.025
# Generate a new figure
fig = plt.figure(figsize=(10, 2.5))
ax = plt.gca()
# Plot the histories
ax.stackplot(df.index.tolist(), *histories, labels=df["amount"].columns)
# Add a horizontal line at zero
ax.axhline(0, color="k", linewidth=1)
# Plot the legend
ax.legend(loc="upper left", facecolor="white")
# Add the grid on top the graph
ax.set_axisbelow(False)
ax.grid(which="both", color="#DDDDDD", linestyle="-", linewidth=0.7)
# Background to white
ax.set_facecolor((1, 1, 1))
# Configure the axis
ax.set_yticks(np.arange(0, top + 1, yticks_freq))
ax.set_ylim(None, top)
years = mdates.YearLocator()
months = mdates.MonthLocator()
monthsFmt = mdates.DateFormatter("%b")
yearsFmt = mdates.DateFormatter("\n\n%Y") # add some space for the year label
ax.xaxis.set_minor_locator(months)
ax.xaxis.set_minor_formatter(monthsFmt)
ax.xaxis.set_major_locator(years)
ax.xaxis.set_major_formatter(yearsFmt)
# Make the layout tight (lower margins)
fig.tight_layout()
# Rotate 90 degrees the minor labels in the X axis
labels = ax.get_xticklabels(minor=True)
ax.set_xticklabels(labels, minor=True, rotation=90)
return fig, ax
def generate_categories_detail_plot(year, month):
df = get_ynab_dataset()
eom_day = calendar.monthrange(year, month)[1]
df = df.loc[df.date <= datetime(year, month, eom_day)]
# Filter transfers and exclude categories
df = df[df.transfer_transaction_id.isnull()]
df = df[df.exclude == False]
df["month"] = df.date.dt.month
df["year"] = df.date.dt.year
df = df.groupby(["year", "month", "category_name"]).amount.sum()
df = df.sort_values()[year, month]
# Remove the income category. We want to analyse the rest
if "Immediate Income SubCategory" in df.index:
df = df.drop("Immediate Income SubCategory")
# Separate the category names from the values
cats = df.index.tolist()
vals = df.values
# Shorten category names by removing the annotations between parentheses
cats = [x.split("(")[0].strip() for x in cats]
    # Distinguish positive and negative values with colors
colors = ["#" + COLORS[0] + "FF" if x < 0 else "#" + COLORS[5] + "FF" for x in vals]
# Calculate the histories for every account
yticks_freq = 250 # Frequency of the y ticks
# Get the max value for the Y axis
top = yticks_freq * math.ceil((-vals).max() / yticks_freq)
bot = yticks_freq * math.floor((-vals).min() / yticks_freq)
# Generate a new figure
fig = plt.figure(figsize=[10, 2.5])
ax = plt.gca()
# Plot the bars
bars = plt.bar(cats, -vals, color=colors)
# Set the frequency of the yticks
ax.set_yticks(np.arange(0, top + 1, yticks_freq))
# Add a horizontal line at zero
ax.axhline(0, color="k", linewidth=0.7)
# Background to white
ax.set_facecolor((1, 1, 1))
# Gridlines to grey
ax.grid(color="#DDDDDD", linestyle="-", linewidth=0.7)
# ax.grid(False)
for bar in bars:
height = bar.get_height()
perc = (
str(round(100 * height / -vals[vals < 0].sum(), 1)) + "%"
if height > 0
else ""
)
plt.text(
bar.get_x() + bar.get_width() / 2.0,
height + 20,
"%s" % perc,
ha="center",
va="bottom",
)
plt.ylim(bot, top)
# Make the layout tight (lower margins)
fig.tight_layout()
ax.xaxis.set_tick_params(rotation=90)
return fig, ax
def calculate_monthly_flows(year, month):
# Load data
df = get_ynab_dataset()
eom_day = calendar.monthrange(year, month)[1]
df = df.loc[df.date <= datetime(year, month, eom_day)]
# Filter out the transfers
df = df[df.transfer_transaction_id.isnull()]
df = df[df.exclude == False]
# Add month column
month_col = pd.Series(np.array(MONTHS)[df.date.dt.month.values - 1], index=df.index)
month_col = month_col + " " + df.date.dt.year.astype(str)
df["month"] = month_col.values
# Calculate inflows and outflows
df["inflow"] = 0
df["outflow"] = 0
inflow_f = lambda d: d.amount > 0
outflow_f = lambda d: d.amount <= 0
df.loc[inflow_f, "inflow"] = df.loc[inflow_f, "amount"]
df.loc[outflow_f, "outflow"] = -df.loc[outflow_f, "amount"]
# Aggregate at month level
agg_dict = {"date": np.max, "inflow": np.sum, "outflow": np.sum}
df = df.groupby("month").agg(agg_dict)
df = df.sort_values(by="date").reset_index()
# Calculate savings
df["savings"] = df.inflow - df.outflow
# Filter and arrange columns
df = df[["month", "inflow", "outflow", "savings"]]
# Remove first row (initial balance is counted as inflow)
df = df.iloc[1:]
return df
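
The inflow/outflow split above relies on `.loc` accepting callables for both selection and assignment; a minimal self-contained sketch of the same pattern on made-up ledger values:

```python
import pandas as pd

# Toy ledger: one inflow and two outflows (illustrative values only)
df = pd.DataFrame({"amount": [100.0, -40.0, -10.0]})
df["inflow"] = 0.0
df["outflow"] = 0.0

# .loc resolves callables against the frame, so the same lambdas work for get and set
inflow_f = lambda d: d.amount > 0
outflow_f = lambda d: d.amount <= 0
df.loc[inflow_f, "inflow"] = df.loc[inflow_f, "amount"]
df.loc[outflow_f, "outflow"] = -df.loc[outflow_f, "amount"]

print(df[["inflow", "outflow"]].sum())
```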
def calculate_financial_evolution(year, month, n_months):
# Load data and calculate daily balances
df_ynab = get_ynab_dataset()
df = calculate_daily_balances(df=df_ynab)
eom_day = calendar.monthrange(year, month)[1]
df = df.loc[df.date <= datetime(year, month, eom_day)]
# Get the end of month balances
df = df[lambda d: d.date.dt.is_month_end]
# Aggregate at date level
df = df.groupby(["date"]).amount.sum()
# Sort accounts by decreasing balance
df = df.sort_values(ascending=False).reset_index()
# Add month column
month_col = pd.Series(np.array(MONTHS)[df.date.dt.month - 1])
month_col = month_col + " " + df.date.dt.year.astype(str)
df["month"] = month_col
# Calculate flows
df_flows = calculate_monthly_flows(year=year, month=month)
df = df.merge(df_flows, how="left")
# Sort table
df = df.sort_values(by="date", ascending=False)
# Filter and arrange columns
df = df[["month", "inflow", "outflow", "savings", "amount"]]
# Add fancy column names
df.columns = ["Month", "Inflow", "Outflow", "Savings", "Amount"]
# Number of months to show
df = df.head(n_months)
# Add YoY
df["YoY"] = df.Amount - df.Amount.shift(-11)
return df
def generate_latex_report(year, month):
# TODO: Add commas to separate thousands
float_format = "%.2f"
with open(os.path.join("assets", "template.tex"), "r") as f:
template = f.read()
title = f"Personal unified finance report – {MONTHS[month-1]} {year}"
df_financial_snapshot = calculate_financial_snapshot(year=year, month=month)
financial_snapshot = df_financial_snapshot.to_latex(
index=False, float_format=float_format
)
df_financial_evolution = calculate_financial_evolution(
year=year, month=month, n_months=12
)
financial_evolution = df_financial_evolution.to_latex(
index=False, float_format=float_format
).replace("NaN", "-")
df_last_movements = df_financial_evolution.head(1)
df_last_movements = df_last_movements.drop(["Month", "Amount"], axis=1)
last_movements = df_last_movements.to_latex(index=False, float_format=float_format)
df_top_in, df_top_out = get_top_flows(year=year, month=month, n_rows=10)
top_inflows = df_top_in.to_latex(index=False, float_format=float_format)
top_outflows = df_top_out.to_latex(index=False, float_format=float_format)
# Footnote
today = datetime.now()
if 4 <= today.day <= 20 or 24 <= today.day <= 30:
suffix = "th"
else:
suffix = ["st", "nd", "rd"][today.day % 10 - 1]
today = f"{MONTHS[today.month-1]} {today.day}{suffix} {today.year}"
    footnote = f"This report was generated on {today}"
template = template.format(
title=title,
financial_snapshot=financial_snapshot,
last_movements=last_movements,
financial_evolution=financial_evolution,
top_inflows=top_inflows,
top_outflows=top_outflows,
footnote=footnote,
)
# + monthly inflows and outflows, with initial and final balance and savings
# + biggest transactions
# Change colors
# Add grid
# Set xlim
fig, ax = generate_evolution_plot(year=year, month=month)
fig.savefig(os.path.join("assets", "evolution.eps"))
fig, ax = generate_categories_detail_plot(year=year, month=month)
fig.savefig(os.path.join("assets", "categories.eps"))
with codecs.open(os.path.join("assets", "report.tex"), "w", "utf-8") as f:
f.write(template)
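
The day-suffix branch in `generate_latex_report` can be checked in isolation; a small sketch replicating that exact logic (the `day_suffix` helper name is mine, not part of the original file):

```python
def day_suffix(day):
    """Ordinal suffix for a day of month (1-31), mirroring the report's branch."""
    if 4 <= day <= 20 or 24 <= day <= 30:
        return "th"
    return ["st", "nd", "rd"][day % 10 - 1]

labels = [f"{d}{day_suffix(d)}" for d in (1, 2, 3, 11, 21, 22, 23, 31)]
print(labels)
```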
# ---- utils/load_npy.py | marcpasfatou/ROLO | Apache-2.0 ----
import numpy as np
import pandas as pd
import os
flag = bool(True)
for filename in os.listdir(os.getcwd()):
#print(filename)
data = np.load(filename)
if flag == True:
matrix = data
flag = False
continue
matrix = np.append(matrix, data, axis=0)
#print(data)
np.savetxt("yolo_out.txt", matrix, fmt='%1.4f', newline='\r\n', delimiter=',')
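
The loop above loads every file in the working directory, which breaks on non-`.npy` files (including the `yolo_out.txt` the script itself writes). A hedged variant of the same stacking idea that filters by extension — `stack_npy` is an assumed name, not code from the original repo:

```python
import os
import tempfile
import numpy as np

def stack_npy(paths):
    """Concatenate arrays loaded from .npy files along axis 0."""
    arrays = [np.load(p) for p in sorted(paths) if p.endswith(".npy")]
    return np.vstack(arrays)

# Demo with two temporary 2x3 arrays
d = tempfile.mkdtemp()
for i in range(2):
    np.save(os.path.join(d, f"a{i}.npy"), np.full((2, 3), i))
matrix = stack_npy(os.path.join(d, f) for f in os.listdir(d))
print(matrix.shape)
```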
# ---- books/HandsOnMachineLearning/EndToEnd/split_train_test.py | shibasis0801/MachineLearningPractice | MIT ----
import numpy as np
def split_train_test(data, test_ratio):
count = len(data)
shuffled_indices = np.random.permutation(count)
test_set_size = int(test_ratio * count)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
    return data.iloc[train_indices], data.iloc[test_indices]
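
A quick self-contained check of the split above (the function is restated so the snippet runs on its own; the split sizes are deterministic even though the shuffle is random):

```python
import numpy as np
import pandas as pd

def split_train_test(data, test_ratio):
    count = len(data)
    shuffled_indices = np.random.permutation(count)
    test_set_size = int(test_ratio * count)
    test_indices = shuffled_indices[:test_set_size]
    train_indices = shuffled_indices[test_set_size:]
    return data.iloc[train_indices], data.iloc[test_indices]

df = pd.DataFrame({"x": range(10)})
train, test = split_train_test(df, 0.2)
print(len(train), len(test))  # 8 2
```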
# ---- challenges/binary-search/test_binary_search.py | seattlechem/data-structures-and-algorithms | MIT ----
from binary_search import binary_search
def test_binary_search_true():
assert binary_search([4, 8, 15, 16, 23, 42], 15) == 2
def test_binary_search_false():
assert binary_search([11, 22, 33, 44, 55, 66, 77], 90) == -1
def test_binary_search_empty_list():
assert binary_search([], 90) == -1
def test_binary_search_string():
assert binary_search(['potato', 'tomato', 'new york'], 'tomato') == 1
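
The module under test (`binary_search.py`) isn't shown here; a minimal iterative implementation that satisfies all four assertions would look like this (my sketch, not the repo's code). Note the string test passes only because the unsorted list happens to have the target at the midpoint:

```python
def binary_search(arr, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```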
# ---- youtuatools/extractor/cwtv.py | Pagasis/YouTua | MIT ----
# coding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import (
ExtractorError,
int_or_none,
parse_age_limit,
parse_iso8601,
smuggle_url,
str_or_none,
)
class CWTVIE(InfoExtractor):
_VALID_URL = r"https?://(?:www\.)?cw(?:tv(?:pr)?|seed)\.com/(?:shows/)?(?:[^/]+/)+[^?]*\?.*\b(?:play|watch)=(?P<id>[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12})"
_TESTS = [
{
"url": "http://cwtv.com/shows/arrow/legends-of-yesterday/?play=6b15e985-9345-4f60-baf8-56e96be57c63",
"info_dict": {
"id": "6b15e985-9345-4f60-baf8-56e96be57c63",
"ext": "mp4",
"title": "Legends of Yesterday",
"description": "Oliver and Barry Allen take Kendra Saunders and Carter Hall to a remote location to keep them hidden from Vandal Savage while they figure out how to defeat him.",
"duration": 2665,
"series": "Arrow",
"season_number": 4,
"season": "4",
"episode_number": 8,
"upload_date": "20151203",
"timestamp": 1449122100,
},
"params": {
# m3u8 download
"skip_download": True,
},
"skip": "redirect to http://cwtv.com/shows/arrow/",
},
{
"url": "http://www.cwseed.com/shows/whose-line-is-it-anyway/jeff-davis-4/?play=24282b12-ead2-42f2-95ad-26770c2c6088",
"info_dict": {
"id": "24282b12-ead2-42f2-95ad-26770c2c6088",
"ext": "mp4",
"title": "Jeff Davis 4",
"description": "Jeff Davis is back to make you laugh.",
"duration": 1263,
"series": "Whose Line Is It Anyway?",
"season_number": 11,
"episode_number": 20,
"upload_date": "20151006",
"timestamp": 1444107300,
"age_limit": 14,
"uploader": "CWTV",
},
"params": {
# m3u8 download
"skip_download": True,
},
},
{
"url": "http://cwtv.com/thecw/chroniclesofcisco/?play=8adebe35-f447-465f-ab52-e863506ff6d6",
"only_matching": True,
},
{
"url": "http://cwtvpr.com/the-cw/video?watch=9eee3f60-ef4e-440b-b3b2-49428ac9c54e",
"only_matching": True,
},
{
"url": "http://cwtv.com/shows/arrow/legends-of-yesterday/?watch=6b15e985-9345-4f60-baf8-56e96be57c63",
"only_matching": True,
},
]
def _real_extract(self, url):
video_id = self._match_id(url)
data = self._download_json(
"http://images.cwtv.com/feed/mobileapp/video-meta/apiversion_8/guid_"
+ video_id,
video_id,
)
if data.get("result") != "ok":
raise ExtractorError(data["msg"], expected=True)
video_data = data["video"]
title = video_data["title"]
mpx_url = (
video_data.get("mpx_url")
or "http://link.theplatform.com/s/cwtv/media/guid/2703454149/%s?formats=M3U"
% video_id
)
season = str_or_none(video_data.get("season"))
episode = str_or_none(video_data.get("episode"))
if episode and season:
episode = episode[len(season) :]
return {
"_type": "url_transparent",
"id": video_id,
"title": title,
"url": smuggle_url(mpx_url, {"force_smil_url": True}),
"description": video_data.get("description_long"),
"duration": int_or_none(video_data.get("duration_secs")),
"series": video_data.get("series_name"),
"season_number": int_or_none(season),
"episode_number": int_or_none(episode),
"timestamp": parse_iso8601(video_data.get("start_time")),
"age_limit": parse_age_limit(video_data.get("rating")),
"ie_key": "ThePlatform",
}
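
The season/episode handling above assumes the feed reports the episode number with the season prepended (e.g. `408` for season 4, episode 8); the prefix-stripping step in isolation, with made-up values:

```python
season, episode = "4", "408"  # assumed feed format: season digits + episode digits
if episode and season:
    episode = episode[len(season):]  # drop the leading season digits
print(season, episode)  # 4 08
```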
# ---- PVD/PVD.py | ajiang-xyz/testing | MIT ----
from prompt_toolkit import prompt
from rgbChannels import *
from typing import *
import argparse
import datetime
import imghdr
import signal
import ast
def getHelp(specificOption=None):
"""
Basic help information
"""
helpDictionary = {
"default":"""This program uses a custom PVD steganography algorithm to embed and retrieve data from PNG files
Type 'help <category name>' to view all options for a specific category
Currently available categories: 'interactive', 'universal', 'embed', 'retrieve'
Example: help interactive""",
        "interactive":"""In order to enter interactive mode, simply run the program as follows:
PVD.py interactive
Interactive mode commands:
Type 'exit' to exit this program
Type 'clear' to clear the screen
Type 'help' to receive this help message again
Type 'credits' to see credits for this program
Type 'license' to see the license for this project
Type 'cwd' to see the current working directory
Type 'files' to see all png files in the current working directory""",
"categoryHeader":"""Tilde'd options have additional information that can be accessed via 'help <long option name>'
Example: help quantization
Starred options are optional""",
"universal":"""Universal options:
--infile/-i <filename> name of input image to read pixel values from
--outfile/-o <filename> OR 'stdout' name of output image to write encoded pixel values to OR output to standard out
~ --quantization/-q <widths> width classifications to determine amount of data to encode
~ --traversal/-t <order> order to traverse pixel pairs for RGB images
* --verbose/-v enable verbose messages""",
"embed":"""Embed specific options:
--message/-m <message> message to embed in input image
~* --verify/-V add verification data to PNG properties
Example usage:
embed --infile in.png --out out.png --message hello world! --verbose
embed -o out.png -i in.png -m banana fish -v -V
embed -V -o out.png -m banana fish -i in.png -v""",
"retrieve":"""Retrieve specific options:
~* --verify/-V use verification data in PNG properties to verify retrieved binary
Example usage:
retrieve --in out.png --out stdout --verify --verbose
retrieve -o message.txt -i out.png -v -V""",
"quantization":"""Quantization widths format:""",
"traversal":"""foobar 2""",
"verify":"""foobar 3""",
}
if specificOption:
try:
if specificOption in ["universal", "embed", "retrieve"]:
print(helpDictionary["categoryHeader"])
print(helpDictionary[specificOption])
except:
print(f"'{specificOption}' does not have any help information")
else:
print(helpDictionary["default"])
return
def progCreds():
"""
Program credits
"""
print("""Written by Alex Jiang (@ajiang-xyz on GitHub, syossu#2202 on Discord)""")
def progCopyright():
"""
Program copyright
"""
print("""Copyright (c) 2021 Alex Jiang.
All Rights Reserved.""")
def progLicense():
"""
Program license
"""
print("See https://github.com/ajiang-xyz/PVD-Steganography/blob/main/LICENSE.txt")
def getWorkingDictory():
"""
Print cwd
"""
print(os.getcwd())
def printFiles():
"""
Print all png and bmp files in cwd
"""
# Get all files that are of type png and bmp
files = [file for file in os.listdir(os.getcwd()) if os.path.isfile(os.path.join(os.getcwd(), file)) and imghdr.what(file) in ["png", "bmp"]]
    if len(files) == 0:
        print(f"No png or bmp files in this directory: {os.getcwd()}")
    else:
        print(f"Found {len(files)} png/bmp files in {os.getcwd()}:")
files = sorted([str(item) for item in files])
# Get terminal size
columns = os.get_terminal_size()[0]
columnWidth = int(columns/3)
# Add padding names
if len(files) % 3 != 0:
for i in range(3-(len(files) % 3)):
files.append(" ")
# Print into columns, truncate if needed
for count, item in enumerate(files, 1):
if len(item) > columnWidth:
item = f"{item[:columnWidth-5]}..."
print(item.ljust(columnWidth), end="")
if count % 3 == 0:
print()
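
The padding/truncation logic in `printFiles` generalizes to a small helper; a sketch of the same idea (the `columnize` name and defaults are mine): pad the list to a multiple of the column count, truncate long entries, and left-justify into fixed-width columns:

```python
def columnize(items, width=60, ncols=3):
    """Lay items out in ncols left-justified columns of total width `width`."""
    items = list(items)
    if len(items) % ncols:                       # pad to a multiple of ncols
        items += [" "] * (ncols - len(items) % ncols)
    col = width // ncols
    rows = []
    for i in range(0, len(items), ncols):
        row = ""
        for item in items[i:i + ncols]:
            if len(item) > col:                  # truncate long names
                item = f"{item[:col - 5]}..."
            row += item.ljust(col)
        rows.append(row.rstrip())
    return rows

rows = columnize(["a.png", "b.png", "c.png", "d.png"], width=30)
print(rows)
```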
def parseAction(parsed):
"""
Given some input action object, perform designated function on values
"""
parameterlessCommandDict = {
"exit":exit,
"clear":clearBuffer,
"credits":progCreds,
"copyright":progCopyright,
"license":progLicense,
"cwd":getWorkingDictory,
"files":printFiles
}
if parsed.action:
# If action has designated handler function, call it
if parsed.action in parameterlessCommandDict:
parameterlessCommandDict[parsed.action]()
elif parsed.action == "help":
# Parse help arguments
getHelp(parsed.specificOption[0])
else:
# Evaluate str representation of nested arrays into nested arrays
parsed.quantization = ast.literal_eval(parsed.quantization)
parsed.traversal = ast.literal_eval(parsed.traversal)
errors = []
# Cleanse / verify validity of arguments
if parsed.action == "embed":
if parsed.message == "":
errors.append("Must input a message to encode")
if parsed.action == "retrieve":
if parsed.outfile == "":
parsed.outfile = "stdout"
parsed.message = ""
# Ensure outfile is writeable
if parsed.outfile:
path = os.path.split(parsed.outfile)[0]
if path and not os.path.isdir(path):
errors.append(f"Invalid out file path: '{parsed.outfile}' (can't write to {path})")
else:
errors.append(f"Must input an out file name")
# Ensure infile exists and is correct filetype
if parsed.infile:
if not os.path.exists(parsed.infile):
# Check if exists
errors.append(f"Invalid in file path: '{parsed.infile}' (does not exist)")
elif imghdr.what(parsed.infile) not in ["png", "bmp"]:
# Check filetype
errors.append(f"Invalid in file format: '{parsed.infile}' (must be png or bmp)")
else:
errors.append(f"Must input an in file name")
if errors != []:
for error in errors:
print(error)
else:
preTime = int(round(time.time() * 1000))
infile = PIL.Image.open(parsed.infile)
returnValue = rgbChannels(inFile=infile, outFile=parsed.outfile, message=" ".join(parsed.message), verify=parsed.verify, verbose=parsed.verbose)
print()
if isinstance(returnValue, bool):
if returnValue == True:
print("Done.")
else:
print("Completed with errors.")
else:
print("Done.")
# Print binary to console or write to file
if parsed.outfile == "stdout":
print(f"Retrieved binary: {returnValue}")
else:
with open(parsed.outfile, "wb") as out:
out.write(int(returnValue, 2).to_bytes((len(returnValue) + 7) // 8, byteorder="big"))
postTime = int(round(time.time() * 1000))
print(f"Time elapsed: {postTime - preTime} ms")
def interactive():
"""
Interactive command prompt
"""
signal.signal(signal.SIGINT, ctrlCHandler)
# Get all the current time and date info
currentYear = datetime.date.today().year
currentMonth = datetime.date.today().month
currentMonth = datetime.date(currentYear, currentMonth, 1).strftime("%b")
currentDay = datetime.date.today().day
time = datetime.datetime.now().strftime("%H:%M").split(":")
    hour = int(time[0])
    meridiem = "AM" if hour < 12 else "PM"
    hour = hour % 12 or 12  # 00:xx -> 12:xx AM, 12:xx -> 12:xx PM
    currentTime = f"{hour}:{time[1]} {meridiem}"
# Display all the fancy stuffs :D
print("""PVD Beta (built on Python) {1} {2} {3}, {0}
Type "help", "copyright", "credits" or "license" for more information.""".format(currentTime, currentMonth, currentDay, currentYear))
while True:
try:
command = prompt("PVD >>> ").strip()
parsed = parseInteractiveArgs(command)
parseAction(parsed)
except KeyboardInterrupt:
print("")
exit()
class ArgumentParser(argparse.ArgumentParser):
"""
Override argparse.ArgumentParser's error function (custom error handling)
"""
def error(self, message):
pass
def ctrlCHandler(sig, frame):
"""
Graceful exit
"""
print("")
exit()
def parseInteractiveArgs(string):
"""
Parse commands passed into interactive program
"""
# Instantiate parser class
parser = ArgumentParser(prog="PVD interactive")
# Add shared arguments for embed and retrieve actions
parent = ArgumentParser(add_help=False)
parent.add_argument("-i", "--infile", metavar="infile", default="")
parent.add_argument("-o", "--outfile", metavar="outfile", default="")
parent.add_argument("-q", "--quantization", default="[[0,1], [2,3], [4,7], [8,11], [12,15], [16,23], [24,31], [32,47], [48,63], [64,95], [96,127], [128,191], [192,255]]")
parent.add_argument("-t", "--traversal", default="[1,3,5,2,4,6]")
parent.add_argument("-V", "--verify", action="store_true", default=False)
parent.add_argument("-v", "--verbose", action="store_true", default=False)
# Create different actions and arguments
subparsers = parser.add_subparsers(title="actions", dest="action")
# Define embed action, inherit parent arguments, and define unique arguments
encodeParser = subparsers.add_parser("embed", parents=[parent])
encodeParser.add_argument("-m", "--message", metavar="message", nargs="*", default="")
# Define retrieve action, inherit parent arguments, and define unique arguments
retrieveParser = subparsers.add_parser("retrieve", parents=[parent])
# Define help messages
helpParser = subparsers.add_parser("help")
helpParser.add_argument("specificOption", default=["default"], nargs="*")
# Define basic actions
subparsers.add_parser("exit")
subparsers.add_parser("clear")
subparsers.add_parser("credits")
subparsers.add_parser("copyright")
subparsers.add_parser("license")
subparsers.add_parser("cwd")
subparsers.add_parser("files")
# Parse args passed into file
try:
args = parser.parse_args(string.split())
return args
except:
print(f"Invalid command: '{string.split()[0]}'")
return parser.parse_args("")
if __name__ == "__main__":
interactive()
# def parseFileArgs(string):
# # Instantiate parser class
# parser = ArgumentParser(prog="PVD file")
# # Add shared arguments for embed and retrieve actions
# parent = ArgumentParser(add_help=False)
# parent.add_argument("-i", "--infile", metavar="infile", default="")
# parent.add_argument("-o", "--outfile", metavar="outfile", default="")
# parent.add_argument("-q", "--quantization", default="[[0,1], [2,3], [4,7], [8,11], [12,15], [16,23], [24,31], [32,47], [48,63], [64,95], [96,127], [128,191], [192,255]]")
# parent.add_argument("-t", "--traversal", default="[1,3,5,2,4,6]")
# parent.add_argument("-V", "--verify", action="store_true", default=False)
# parent.add_argument("-v", "--verbose", action="store_true", default=False)
# # Create different actions and arguments
# subparsers = parser.add_subparsers(title="actions", dest="action")
# # Define embed action, inherit parent arguments, and define unique arguments
# encodeParser = subparsers.add_parser("embed", parents=[parent])
# encodeParser.add_argument("-m", "--message", metavar="message", default="")
# # Define retrieve action, inherit parent arguments, and define unique arguments
# retrieveParser = subparsers.add_parser("retrieve", parents=[parent])
# # Define interactive mode action
# subparsers.add_parser("interactive")
# # Define help messages
# helpParser = subparsers.add_parser("help")
# helpParser.add_argument("specificOption", default=["default"], nargs="*")
# # Parse args passed into file
# try:
# args = parser.parse_args(string.split())
# return args
# except:
#         print(f"Invalid command: '{string.split()[0]}'")
# ---- grpc/product/ProductInfo_pb2.py | lolyu/aoi | MIT ----
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: ProductInfo.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import wrappers_pb2 as google_dot_protobuf_dot_wrappers__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x11ProductInfo.proto\x12\x08\x65\x63omerce\x1a\x1egoogle/protobuf/wrappers.proto\"\x1a\n\tProductID\x12\r\n\x05value\x18\x01 \x01(\t\"G\n\x07Product\x12\n\n\x02id\x18\x01 \x01(\t\x12\x0c\n\x04name\x18\x02 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x03 \x01(\t\x12\r\n\x05price\x18\x04 \x01(\x02\x32y\n\x0bProductInfo\x12\x34\n\naddProduct\x12\x11.ecomerce.Product\x1a\x13.ecomerce.ProductID\x12\x34\n\ngetProduct\x12\x13.ecomerce.ProductID\x1a\x11.ecomerce.Productb\x06proto3')
_PRODUCTID = DESCRIPTOR.message_types_by_name['ProductID']
_PRODUCT = DESCRIPTOR.message_types_by_name['Product']
ProductID = _reflection.GeneratedProtocolMessageType('ProductID', (_message.Message,), {
'DESCRIPTOR' : _PRODUCTID,
'__module__' : 'ProductInfo_pb2'
# @@protoc_insertion_point(class_scope:ecomerce.ProductID)
})
_sym_db.RegisterMessage(ProductID)
Product = _reflection.GeneratedProtocolMessageType('Product', (_message.Message,), {
'DESCRIPTOR' : _PRODUCT,
'__module__' : 'ProductInfo_pb2'
# @@protoc_insertion_point(class_scope:ecomerce.Product)
})
_sym_db.RegisterMessage(Product)
_PRODUCTINFO = DESCRIPTOR.services_by_name['ProductInfo']
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
_PRODUCTID._serialized_start=63
_PRODUCTID._serialized_end=89
_PRODUCT._serialized_start=91
_PRODUCT._serialized_end=162
_PRODUCTINFO._serialized_start=164
_PRODUCTINFO._serialized_end=285
# @@protoc_insertion_point(module_scope)
# ---- gym-voilier-v2-discrete/gym_voilier/envs/__init__.py | pfontana96/smart-sailboat | MIT ----
from gym_voilier.envs.voilier_env import VoilierEnv
# ---- ds.py | sudipta1411/Voronoi | MIT ----
#!/usr/bin/python
class Point(object) :
def __init__(self, px=0.0, py=0.0) :
self.px = px
self.py = py
def get_point(self) :
return self.px, self.py
def __str__(self) :
return '({0},{1})'.format(self.px, self.py)
class Site(Point) :
def __init__(self, px=0.0, py=0.0) :
self.site_id = 0
super(Site, self).__init__(px, py)
def get_id(self) : return self.site_id
def bisect(self, site) :
edge = Edge()
dx = float(site.px) - float(self.px)
dy = float(site.py) - float(self.py)
print 'dx {0}, dy {1}'.format(dx,dy)
if dy == 0.0 :
# line is perpendicular x=c
edge.m = float('inf')
edge.c = (self.px + site.px) * 0.5
if self.px < edge.c :
edge.l_site = self
edge.r_site = site
else :
edge.l_site = site
edge.r_site = self
else :
edge.m = -float(dx) / dy
edge.c = (self.py + site.py - edge.m * (self.px + site.px)) * 0.5
if self.py > edge.m * self.px + edge.c :
edge.l_site = self
edge.r_site = site
else :
edge.l_site = site
edge.r_site = self
return edge
class Vertex(Point) :
def __init__(self, px=0.0, py=0.0) :
self.vert_id = 0
super(Vertex, self).__init__(px, py)
def get_id(self) : return self.vert_id
class Edge(object) :
def __init__(self, start=None, end=None, r_site=None, l_site = None):
# start and end are Vertices, l_site and r_site are Sites
self.start = start
self.end = end
self.l_site = l_site
self.r_site = r_site
self.edge_id = 0
# y=m*x+c
self.m = None
self.c = None
class BeachLine(object) :
def __init__(self, point, bl_next=None, bl_prev=None, vert=None) :
self.point = point
self.bl_next = bl_next
self.bl_prev = bl_prev
self.vert = vert
self.e0 = None
self.e1 = None
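
`Site.bisect` computes the perpendicular bisector of two sites in slope-intercept form; the same math as a standalone function (written in Python 3 here, even though ds.py itself is Python 2):

```python
def bisector(p1, p2):
    """Perpendicular bisector of segment p1-p2, returned as (m, c) for y = m*x + c."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    if dy == 0:
        # segment is horizontal, so the bisector is the vertical line x = c
        return float("inf"), (x1 + x2) * 0.5
    m = -dx / dy                          # negative reciprocal of the segment slope
    c = (y1 + y2 - m * (x1 + x2)) * 0.5   # line passes through the midpoint
    return m, c

print(bisector((0, 0), (2, 2)))  # (-1.0, 2.0)
```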
# ---- tests/unit_tests/data_steward/cdr_cleaner/cleaning_rules/populate_route_ids_test.py | lrwb-aou/curation | MIT ----
import os
import time
import unittest
import mock
import bq_utils
import common
import resources
from cdr_cleaner.cleaning_rules import populate_route_ids
from constants.cdr_cleaner import clean_cdr as cdr_consts
class PopulateRouteIdsTest(unittest.TestCase):
@classmethod
def setUpClass(cls):
print('**************************************************************')
print(cls.__name__)
print('**************************************************************')
def setUp(self):
self.project_id = 'foo_project'
self.dataset_id = 'bar_dataset'
self.route_mappings_list = [{
'dose_form_concept_id': '46719734',
'route_concept_id': '85738921'
}, {
'dose_form_concept_id': '86340',
'route_concept_id': '52315'
}, {
'dose_form_concept_id': '19082168',
'route_concept_id': '4132161'
}, {
'dose_form_concept_id': '19126918',
'route_concept_id': '45956874',
'route_name': 'Inhalation'
}]
route_mappings = [(46719734, 85738921), (86340, 52315),
(19082168, 4132161), (19126918, 45956874)]
route_mapping_exprs = ['(%s, %s)' % pair for pair in route_mappings]
self.route_mappings_string = ', '.join(route_mapping_exprs)
self.col_exprs = [
'de.drug_exposure_id', 'de.person_id', 'de.drug_concept_id',
'de.drug_exposure_start_date', 'de.drug_exposure_start_datetime',
'de.drug_exposure_end_date', 'de.drug_exposure_end_datetime',
'de.verbatim_end_date', 'de.drug_type_concept_id', 'de.stop_reason',
'de.refills', 'de.quantity', 'de.days_supply', 'de.sig',
'COALESCE(rm.route_concept_id, de.route_concept_id) AS route_concept_id',
'de.lot_number', 'de.provider_id', 'de.visit_occurrence_id',
'de.visit_detail_id', 'de.drug_source_value',
'de.drug_source_concept_id', 'de.route_source_value',
'de.dose_unit_source_value'
]
self.cols = ', '.join(self.col_exprs)
self.drug_exposure_prefix = "de"
self.route_mapping_prefix = "rm"
def test_get_mapping_list(self):
actual = populate_route_ids.get_mapping_list(self.route_mappings_list)
expected = self.route_mappings_string
self.assertEqual(expected, actual)
@mock.patch('bq_utils.get_dataset_id')
@mock.patch('bq_utils.create_table')
@mock.patch('bq_utils.query')
def test_create_dose_form_route_mappings_table_none_dataset_id(
self, mock_query, mock_create_table, mock_dataset_id):
# pre conditions
mock_dataset_id.return_value = self.dataset_id
route_mappings_csv = os.path.join(
resources.resource_files_path,
populate_route_ids.DOSE_FORM_ROUTES_FILE + ".csv")
dose_form_route_mappings = resources.csv_to_list(route_mappings_csv)
mapping_list = populate_route_ids.get_mapping_list(
dose_form_route_mappings)
query_params = dict(
project_id=self.project_id,
dataset_id=self.dataset_id,
routes_table_id=populate_route_ids.DOSE_FORM_ROUTES_TABLE_ID,
mapping_list=mapping_list)
expected_query = populate_route_ids.INSERT_ROUTES_QUERY.format(
**query_params)
# test
populate_route_ids.create_dose_form_route_mappings_table(
self.project_id)
# post conditions
mock_query.assert_called_with(expected_query)
mock_create_table.assert_called_with(
populate_route_ids.DOSE_FORM_ROUTES_TABLE_ID,
populate_route_ids.DOSE_FORM_ROUTE_FIELDS,
drop_existing=True,
dataset_id=self.dataset_id)
@mock.patch('bq_utils.create_table')
@mock.patch('bq_utils.query')
def test_create_dose_form_route_mappings_table_with_dataset_id(
self, mock_query, mock_create_table):
# pre conditions
route_mappings_csv = os.path.join(
resources.resource_files_path,
populate_route_ids.DOSE_FORM_ROUTES_FILE + ".csv")
dose_form_route_mappings = resources.csv_to_list(route_mappings_csv)
mapping_list = populate_route_ids.get_mapping_list(
dose_form_route_mappings)
query_params = dict(
project_id=self.project_id,
dataset_id=self.dataset_id,
routes_table_id=populate_route_ids.DOSE_FORM_ROUTES_TABLE_ID,
mapping_list=mapping_list)
expected_query = populate_route_ids.INSERT_ROUTES_QUERY.format(
**query_params)
# test
populate_route_ids.create_dose_form_route_mappings_table(
self.project_id, self.dataset_id)
# post conditions
mock_query.assert_called_with(expected_query)
mock_create_table.assert_called_with(
populate_route_ids.DOSE_FORM_ROUTES_TABLE_ID,
populate_route_ids.DOSE_FORM_ROUTE_FIELDS,
drop_existing=True,
dataset_id=self.dataset_id)
def _test_integration_create_drug_route_mappings_table(self):
if bq_utils.table_exists(populate_route_ids.DRUG_ROUTES_TABLE_ID,
dataset_id=self.dataset_id):
bq_utils.delete_table(populate_route_ids.DRUG_ROUTES_TABLE_ID,
dataset_id=self.dataset_id)
if not bq_utils.table_exists(
populate_route_ids.DOSE_FORM_ROUTES_TABLE_ID,
dataset_id=self.dataset_id):
populate_route_ids.create_dose_form_route_mappings_table(
self.project_id, self.dataset_id)
populate_route_ids.create_drug_route_mappings_table(
self.project_id, self.dataset_id,
populate_route_ids.DOSE_FORM_ROUTES_TABLE_ID,
self.route_mapping_prefix)
time.sleep(10)
query = ("SELECT COUNT(*) AS n "
"FROM `{project_id}.{dataset_id}.{table_id}`").format(
project_id=self.project_id,
dataset_id=self.dataset_id,
table_id=populate_route_ids.DRUG_ROUTES_TABLE_ID)
result = bq_utils.query(query)
actual = bq_utils.response2rows(result)
self.assertGreater(actual[0]["n"], 0)
def test_get_col_exprs(self):
expected = self.col_exprs
actual = populate_route_ids.get_col_exprs()
self.assertEqual(actual, expected)
@mock.patch('bq_utils.get_dataset_id')
@mock.patch(
'cdr_cleaner.cleaning_rules.populate_route_ids.create_drug_route_mappings_table'
)
@mock.patch(
'cdr_cleaner.cleaning_rules.populate_route_ids.create_dose_form_route_mappings_table'
)
def test_integration_get_route_mapping_queries(
self, mock_create_dose_form_route_mappings_table,
mock_create_drug_route_mappings_table, mock_dataset_id):
# pre conditions
mock_dataset_id.return_value = self.dataset_id
result = []
mock_create_drug_route_mappings_table.return_value = (
result, populate_route_ids.DRUG_ROUTES_TABLE_ID)
mock_create_dose_form_route_mappings_table.return_value = (
result, populate_route_ids.DOSE_FORM_ROUTES_TABLE_ID)
expected_query = populate_route_ids.FILL_ROUTE_ID_QUERY.format(
project_id=self.project_id,
dataset_id=self.dataset_id,
drug_exposure_table=common.DRUG_EXPOSURE,
route_mapping_dataset_id=self.dataset_id,
drug_route_mapping_table=populate_route_ids.DRUG_ROUTES_TABLE_ID,
cols=self.cols,
drug_exposure_prefix=populate_route_ids.DRUG_EXPOSURE_ALIAS,
route_mapping_prefix=populate_route_ids.ROUTE_MAPPING_ALIAS)
queries = populate_route_ids.get_route_mapping_queries(
self.project_id, self.dataset_id)
self.assertEqual(queries[0][cdr_consts.QUERY], expected_query)
self.assertEqual(queries[0][cdr_consts.DESTINATION_DATASET],
self.dataset_id)
self.assertEqual(queries[0][cdr_consts.DESTINATION_TABLE],
common.DRUG_EXPOSURE)
self.assertEqual(queries[0][cdr_consts.DISPOSITION], 'WRITE_TRUNCATE')
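The tests above pin down `populate_route_ids.get_mapping_list`, which flattens rows of dose-form/route concept IDs into a `VALUES`-style string for `INSERT_ROUTES_QUERY`. A standalone sketch mirroring the behaviour the test fixture builds (a hypothetical reimplementation, not the project's actual code):

```python
def get_mapping_list(route_mappings_list):
    """Render [{'dose_form_concept_id': ..., 'route_concept_id': ...}, ...]
    as the string "(d1, r1), (d2, r2), ..." for an INSERT ... VALUES clause."""
    pairs = [(int(row['dose_form_concept_id']), int(row['route_concept_id']))
             for row in route_mappings_list]
    return ', '.join('(%s, %s)' % pair for pair in pairs)

rows = [{'dose_form_concept_id': '46719734', 'route_concept_id': '85738921'},
        {'dose_form_concept_id': '86340', 'route_concept_id': '52315'}]
mapping_list = get_mapping_list(rows)
```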
| 43.020619 | 93 | 0.662353 |
ac5615fd32c9edd8a5e644f01856b58f2d7b4362 | 2,311 | py | Python | Python/FlaskMySQL/friends/server.py | JosephAMumford/CodingDojo | 505be74d18d7a8f41c4b3576ca050b97f840f0a3 | ["MIT"] | 2 | 2018-08-18T15:14:45.000Z | 2019-10-16T16:14:13.000Z | Python/FlaskMySQL/friends/server.py | JosephAMumford/CodingDojo | 505be74d18d7a8f41c4b3576ca050b97f840f0a3 | ["MIT"] | null | null | null | Python/FlaskMySQL/friends/server.py | JosephAMumford/CodingDojo | 505be74d18d7a8f41c4b3576ca050b97f840f0a3 | ["MIT"] | 6 | 2018-05-05T18:13:05.000Z | 2021-05-20T11:32:48.000Z |
from flask import Flask, request, redirect, render_template, session, flash
from mysqlconnection import MySQLConnector
app = Flask(__name__)
mysql = MySQLConnector(app,'friendsdb')
@app.route('/')
def index():
query = "SELECT * FROM friends" # define your query
friends = mysql.query_db(query) # run query with query_db()
return render_template('index.html', all_friends=friends) # pass data to our template
@app.route('/friends', methods=['POST'])
def create():
# Write query as a string. Notice how we have multiple values
# we want to insert into our query.
query = "INSERT INTO friends (first_name, last_name, occupation, created_at, updated_at) VALUES (:first_name, :last_name, :occupation, NOW(), NOW())"
# We'll then create a dictionary of data from the POST data received.
data = {
'first_name': request.form['first_name'],
'last_name': request.form['last_name'],
'occupation': request.form['occupation']
}
# Run query, with dictionary values injected into the query.
mysql.query_db(query, data)
return redirect('/')
@app.route('/friends/<friend_id>')
def show(friend_id):
# Write query to select specific user by id. At every point where
# we want to insert data, we write ":" and variable name.
query = "SELECT * FROM friends WHERE id = :specific_id"
# Then define a dictionary with key that matches :variable_name in query.
data = {'specific_id': friend_id}
# Run query with inserted data.
friends = mysql.query_db(query, data)
# Friends should be a list with a single object,
# so we pass the value at [0] to our template under alias one_friend.
return render_template('index.html', one_friend=friends[0])
@app.route('/update_friend/<friend_id>', methods=['POST'])
def update(friend_id):
query = "UPDATE friends SET first_name = :first_name, last_name = :last_name, occupation = :occupation WHERE id = :id"
data = {
'first_name': request.form['first_name'],
'last_name': request.form['last_name'],
'occupation': request.form['occupation'],
'id': friend_id
}
mysql.query_db(query, data)
return redirect('/')
app.run(debug=True)
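A note on the pattern used throughout this server: every query binds user input through `:name` placeholders and a data dict instead of string formatting, which is what keeps SQL injection out. The same named-parameter style works with the stdlib `sqlite3` driver, so here is a runnable sketch (table and values are illustrative, not from the app):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE friends (id INTEGER PRIMARY KEY, first_name TEXT, occupation TEXT)")

# Named placeholders keep user input out of the SQL text itself.
query = "INSERT INTO friends (first_name, occupation) VALUES (:first_name, :occupation)"
data = {"first_name": "Ada", "occupation": "Engineer"}
conn.execute(query, data)

row = conn.execute(
    "SELECT first_name, occupation FROM friends WHERE id = :id",
    {"id": 1}).fetchone()
```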
| 43.603774 | 153 | 0.656426 |
ad2fd618db08a5bff5953956f8072d6e3074f0c2 | 21,431 | py | Python | fileshack/views.py | airtonix/fileshackproject | caf9863c75b45b5155fcb1f3d1ad6490056c0d5f | ["MIT"] | 1 | 2019-06-27T13:47:31.000Z | 2019-06-27T13:47:31.000Z | fileshack/views.py | airtonix/fileshackproject | caf9863c75b45b5155fcb1f3d1ad6490056c0d5f | ["MIT"] | null | null | null | fileshack/views.py | airtonix/fileshackproject | caf9863c75b45b5155fcb1f3d1ad6490056c0d5f | ["MIT"] | null | null | null |
# Copyright (c) 2012 Peter Kuma
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
from django.template import Context, RequestContext, loader
from django.http import HttpResponse, HttpResponseNotFound, \
HttpResponseForbidden, HttpResponseRedirect, \
HttpResponseServerError, HttpResponseBadRequest, \
HttpResponseNotAllowed
from django.views.decorators.cache import never_cache
from django.views.decorators.csrf import csrf_exempt
from django.utils.translation import ugettext, ugettext_lazy as _
from django.utils.translation import ungettext_lazy as ungettext
from django.core.files.base import ContentFile
from django.core.files.storage import default_storage
from django.shortcuts import render, render_to_response, get_object_or_404
from django.http import Http404
from django.conf import settings
from django.core.servers.basehttp import FileWrapper
from django import forms
from django.views.decorators.http import require_POST, require_GET
from django.core.mail import send_mail
from django.contrib.sites.models import get_current_site
try: from django.utils import timezone
except ImportError: from compat import timezone
import datetime
import os
import json
import urllib
import mimetypes
import time
import binascii
import smtplib
import socket
from models import *
class JSONEncoder(json.JSONEncoder):
def __init__(self, **kwargs):
defaults = {
"sort_keys": True,
"indent": 4,
}
defaults.update(**kwargs)
return json.JSONEncoder.__init__(self, **defaults)
def default(self, obj):
if isinstance(obj, datetime.datetime):
return obj.isoformat()
return json.JSONEncoder.default(self, obj)
def require_store(view):
def store_wrapper(*args, **kwargs):
if not kwargs.has_key("store_path"):
raise Http404()
store_path = kwargs.pop("store_path")
if len(store_path) > 0 and store_path[-1] == "/":
store_path = store_path[:-1]
store = get_object_or_404(Store, path=store_path)
return view(*args, store=store, **kwargs)
return store_wrapper
def require_login(view):
def login_wrapper(request, *args, **kwargs):
store = kwargs.get("store")
if not store: return Http404()
if store.accesscode == "":
return view(request, *args, **kwargs)
if request.session.has_key("fileshack_stores") and \
store.id in request.session["fileshack_stores"]:
return view(request, *args, **kwargs)
return HttpResponseForbidden("Login required")
return login_wrapper
@require_store
@require_POST
def logout(request, store):
try:
request.session["fileshack_stores"].remove(store.id)
except (KeyError, ValueError):
pass
request.session.save()
return HttpResponseRedirect(store.get_absolute_url())
@require_store
def index(request, store):
if (not request.session.has_key("fileshack_stores") or \
not store.id in request.session["fileshack_stores"]) \
and store.accesscode != "":
if request.method == "POST":
accesscode = request.POST.get("accesscode")
if accesscode != "" and store.accesscode == accesscode:
if not request.session.has_key("fileshack_stores"):
request.session["fileshack_stores"] = [store.id]
else:
request.session["fileshack_stores"].append(store.id)
return HttpResponseRedirect(store.get_absolute_url())
else:
t = loader.get_template("fileshack/accesscode.html")
c = RequestContext(request, {
"accesscode": accesscode,
"error_label": _("Wrong access code"),
"error_message": _("Please try again"),
})
return HttpResponse(t.render(c))
else:
t = loader.get_template("fileshack/accesscode.html")
c = RequestContext(request)
return HttpResponse(t.render(c))
items = Item.objects.filter(store=store)
watchers = Watcher.objects.filter(store=store)
t = loader.get_template("fileshack/index.html")
c = RequestContext(request, {
"store": store,
"items": items,
"item_size_limit": store.item_limit,
"bootstrap": JSONEncoder().encode({
"items": [i.simple() for i in items],
"watchers": [w.simple() for w in watchers],
}),
})
return HttpResponse(t.render(c))
@require_store
@require_login
def iframe(request, store):
if request.method != "POST":
t = loader.get_template("fileshack/iframe.html")
c = RequestContext(request)
return HttpResponse(t.render(c))
if not request.FILES.has_key("file"):
return HttpResponseForbidden()
f = request.FILES["file"]
item = Item()
item.fileobject.name = urllib.unquote(f.name)
item.store = store
item.size = f.size
item.size_total = f.size
if store.item_limit and f.size > store.item_limit*1024*1024:
return HttpResponse(JSONEncoder().encode({
"status": "itemlimitreached",
"error_label": "Upload failed",
"error_message": "Item size is limited to %d MB" % store.item_limit,
"item": item.simple(),
}))
if store.store_limit and store.total() + f.size > store.store_limit*1024*1024:
return HttpResponse(JSONEncoder().encode({
"status": "storelimitreached",
"error_label": "Upload failed",
"error_message": "The store size limit of %d MB has been reached" % store.store_limit,
"item": item.simple(),
}))
item.fileobject.save(urllib.unquote(f.name), f)
item.save()
return HttpResponse(JSONEncoder().encode({
"status": "success",
"item": Item.objects.get(pk=item.pk).simple()
}))
@never_cache
@require_store
@require_login
def upload(request, store, id):
if request.method != "POST" or not request.FILES.has_key("file"):
data = {
"status": "failed",
"error_label": "Upload failed",
"error_message": "Invalid HTTP request",
}
return HttpResponseBadRequest(JSONEncoder().encode(data))
if request.FILES.has_key("file"):
f = request.FILES["file"]
name = urllib.unquote(f.name)
try: size_total = int(request.META["HTTP_X_FILE_SIZE"])
except (ValueError, KeyError): size_total = f.size
else:
name = ''
size_total = 0 # Unknown.
try: name = unicode(urllib.unquote(request.META["HTTP_X_FILE_NAME"]),
"utf-8", "replace")
except KeyError: name = ''
name = os.path.basename(name)
try: offset = int(request.META["HTTP_X_FILE_OFFSET"])
except (ValueError, KeyError): offset = 0
if store.item_limit and size_total and size_total > store.item_limit*1024*1024:
data = {
"status": "itemlimitreached",
"error_label": "Upload failed",
"error_message": "Item size is limited to %d MB" % store.item_limit,
"item": None,
}
return HttpResponseServerError(JSONEncoder().encode(data))
if store.store_limit and size_total and store.total() + size_total - offset > store.store_limit*1024*1024:
data = {
"status": "storelimitreached",
"error_label": "Upload failed",
"error_message": "The store size limit of %d MB has been reached" % store.store_limit,
"item": None,
}
return HttpResponseServerError(JSONEncoder().encode(data))
# If the item exists, open the file for append.
try:
try: id = int(id)
except ValueError: raise Item.DoesNotExist
item = Item.objects.get(pk=id)
if item.fileobject.size < offset:
data = {
"status": "outoforder",
"error_label": "Chunk out of order",
"error_message": "Application sent a chunk out of order",
"item": item.simple(),
}
return HttpResponseServerError(JSONEncoder().encode(data))
fp = default_storage.open(item.fileobject.path, "ab")
fp.truncate(offset)
# This is a new item.
except Item.DoesNotExist:
if offset != 0:
data = {
"status": "outoforder",
"error_label": "Chunk out of order",
"error_message": "Application sent a chunk of an item that does not exist",
"item": None,
}
return HttpResponseServerError(JSONEncoder().encode(data))
item = Item()
item.store = store
item.fileobject.save(name, ContentFile(""))
item.fileobject.close()
item.size_total = size_total
item.save()
fp = default_storage.open(item.fileobject.path, "wb")
chunks = f.chunks().__iter__()
while True:
try: chunk = chunks.next()
except StopIteration: break
except IOError:
fp.close()
data = {
"status": "failed",
"error_label": "Upload failed",
"error_message": "Server-side I/O error",
"item": item.simple(),
}
return HttpResponseServerError(JSONEncoder().encode(data))
else:
try:
if request.META.get("HTTP_X_FILE_ENCODING") == "base64":
fp.write(chunk.decode("base64"))
else:
fp.write(chunk)
except binascii.Error:
fp.close()
data = {
"status": "failed",
"error_label": "Upload failed",
"error_message": "The browser sent an invalid chunk",
"item": item.simple(),
}
return HttpResponseServerError(JSONEncoder().encode(data))
item.size = fp.tell()
fp.close()
if item.size_total < item.size:
item.size_total = item.size
if item.size >= item.size_total:
item.uploaded = timezone.now()
item.save()
data = {
"status": "success",
"item": Item.objects.get(pk=item.pk).simple()
}
return HttpResponse(JSONEncoder().encode(data))
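The `upload` view above implements resumable uploads: the client reports `X-File-Offset`, and the server truncates the stored file to that offset before appending the chunk. A minimal sketch of the client-side chunking arithmetic this protocol implies (an illustrative helper, not part of fileshack):

```python
def chunk_ranges(size_total, chunk_size, offset=0):
    """Return the (start, end) byte ranges still to send, resuming from offset.

    Each range maps to one POST carrying X-File-Offset = start.
    """
    ranges = []
    start = offset
    while start < size_total:
        end = min(start + chunk_size, size_total)
        ranges.append((start, end))
        start = end
    return ranges
```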
@require_store
@require_login
def simple_upload(request, store):
if request.method != "POST" or not request.FILES.has_key("file"):
return HttpResponseRedirect(store.get_absolute_url())
#if store.item_limit and f.size > store.item_limit*1024*1024:
#if store.store_limit and store.total() + f.size > store.store_limit*1024*1024:
f = request.FILES["file"]
item = Item()
item.store = store
item.fileobject.save(urllib.unquote(f.name), f)
item.size = f.size
item.size_total = f.size
item.save()
return HttpResponseRedirect(store.get_absolute_url())
@require_store
@require_login
def delete(request, store, item_id):
if request.method != "POST":
return HttpResponseForbidden()
item = get_object_or_404(Item, pk=item_id, store=store)
item.delete()
return HttpResponse("Item has been deleted")
@never_cache
@require_store
@require_login
def update(request, store, since=None):
since_dt = None
if since != None:
try:
since_dt = datetime.datetime.strptime(since, "%Y-%m-%d_%H:%M:%S")
except ValueError:
pass
all_items = Item.objects.filter(store=store)
item_ids = [item.id for item in all_items]
if since_dt != None:
items = Item.objects.filter(store=store, modified__gt=since_dt)
else:
items = all_items
items_simple = []
for item in items:
items_simple.append(item.simple())
dthandler = lambda obj: obj.isoformat() if isinstance(obj, datetime.datetime) else None
data = JSONEncoder(sort_keys=True, indent=4).encode(dict(
time=timezone.now().strftime("%Y-%m-%d_%H:%M:%S"),
item_ids=item_ids, items=items_simple))
return HttpResponse(data)
class ItemFileWrapper(FileWrapper):
def __init__(self, item, *args, **kwargs):
self._item = item
self._counter = 0
self._stale = 0
self._throttle = kwargs.get("throttle", 100)
self._stale_limit = kwargs.get("stale_limit", 10000)
return FileWrapper.__init__(self, *args, **kwargs)
def next(self):
try:
data = FileWrapper.next(self)
self._counter += len(data)
if len(data) > 0:
self._stale = 0
return data
except StopIteration:
if self._counter >= self._item.size_total:
raise StopIteration
if self._stale_limit and self._stale >= self._stale_limit:
raise StopIteration
start = time.time()
            time.sleep(self._throttle / 1000.0)  # throttle is in ms; float division avoids sleep(0) under integer division
end = time.time()
self._stale += (end - start)*1000
return ""
@require_store
@require_login
def download(request, store, item_id):
item = get_object_or_404(Item, pk=item_id)
if item.status() == "READY":
return HttpResponseRedirect(item.get_absolute_url())
f = default_storage.open(item.fileobject.path, "rb")
wrapper = ItemFileWrapper(item, f)
    response = HttpResponse(wrapper, content_type=mimetypes.guess_type(item.fileobject.name)[0])
response["Content-Length"] = "%d" % item.size_total
response["Content-Disposition"] = 'attachment; name="file"; filename="%s"' % urllib.quote(item.name())
return response
@require_store
@require_login
@require_POST
def watch(request, store):
if not store.allow_watch:
        return HttpResponseForbidden("Watching is not enabled for this store")
class WatcherForm(forms.Form):
email = forms.EmailField(max_length=254)
f = WatcherForm(request.POST)
if f.is_valid():
try: u = User.objects.get(email__iexact=f.cleaned_data["email"])
except User.DoesNotExist:
u = User(email=f.cleaned_data["email"], last_notification=None)
u.save()
try: w = Watcher.objects.get(store=store, user=u)
except Watcher.DoesNotExist: w = Watcher(store=store, user=u)
w.save()
watchers = Watcher.objects.filter(store=store)
return HttpResponse(JSONEncoder().encode({
"status": "success",
"watcher": w.simple(),
"watchers": [w.simple() for w in watchers],
}))
else:
return HttpResponseBadRequest(JSONEncoder().encode({
"status": "error",
"message": f["email"].errors if f["email"].errors else "Validation Error",
}))
@require_store
@require_login
@require_POST
def unwatch(request, store):
if not store.allow_watch:
        return HttpResponseForbidden("Watching is not enabled for this store")
if not request.POST.has_key("email"):
return HttpResponseBadRequest()
email = request.POST["email"]
watchers = Watcher.objects.filter(store=store, user__email__iexact=email)
for w in watchers:
w.delete()
# Delete user who are not watching any store.
User.objects.annotate(n_watchers=Count("watchers")).filter(n_watchers=0).delete()
watchers = Watcher.objects.filter(store=store)
return HttpResponse(JSONEncoder().encode({
"status": "success",
"watchers": [w.simple() for w in watchers],
}))
@csrf_exempt
def cron(request):
ok = False
# Shared secret authentication.
secret = request.POST.get("secret")
if settings.FILESHACK_CRON_SECRET and \
settings.FILESHACK_CRON_SECRET == secret:
ok = True
# Host-based authentication.
for host in settings.FILESHACK_CRON_HOSTS:
try: sockinfos = socket.getaddrinfo(host, None)
except socket.gaierror: sockinfos = []
ips = [sockinfo[4][0] for sockinfo in sockinfos]
if request.META["REMOTE_ADDR"] in ips:
ok = True
break
if not ok: return HttpResponseForbidden(ugettext("Permission denied\n"))
output = ugettext("Cron started at %s\n" % \
timezone.now().strftime("%H:%M %Z, %d %b %Y"))
error = False
# digest.
response = digest(request)
output += u"digest: %s\n" % response.content
if response.status_code != 200: error = True
return HttpResponseServerError(output) if error else HttpResponse(output)
def digest(request):
url_prefix = "http://" + get_current_site(request).domain
now = timezone.now()
watchers = Watcher.objects.filter(user__last_notification=None)
for store in Store.objects.filter(allow_watch=True):
since = now - datetime.timedelta(minutes=store.watch_delay)
watchers |= store.watchers.filter(user__last_notification__lt=since)
messages = {}
for w in watchers:
user = w.user
text = messages.get(user, "")
since = user.last_notification or w.created
nitems = Item.objects.filter(store=w.store, created__gt=since).count()
if nitems == 0: continue
text += ungettext(
"A new item has been uploaded to %(store_url)s.\r\n\r\n",
"%(count)d items have been uploaded to %(store_url)s.\r\n\r\n",
nitems) % {
"count": nitems,
"store_url": url_prefix + w.store.get_absolute_url()
}
messages[user] = text
n = 0
output = u""
error = False
for (user, text) in messages.iteritems():
text += ugettext("Fileshack\r\n")
if settings.SECRET_KEY:
text += ugettext("--\r\nTo UNSUBSCRIBE, go to %(url)s") % {
"url": url_prefix + user.unsubscribe_url()
}
try:
send_mail(ugettext("Fileshack Update"), text,
settings.FILESHACK_EMAIL_FROM, [user.email])
user.last_notification = now
user.save()
n = n + 1
except (smtplib.SMTPException, socket.error), e:
output += u"\nsend_mail: %s: %s" % (e.__class__.__name__, e)
if isinstance(e, smtplib.SMTPRecipientsRefused):
continue # Recipient refused, continue sending messages.
else:
error = True
break # Serious error, does not make sense to continue.
output = ungettext(
"A digest has been sent to %(count)d person.",
"A digest has been sent to %(count)d people.",
n) % { "count": n } + output
return HttpResponseServerError(output) if error else HttpResponse(output)
@require_GET
def unsubscribe(request):
email = request.GET.get("u")
hmac = request.GET.get("hmac")
try: u = User.objects.get(email=email)
except User.DoesNotExist:
return render(request, "fileshack/unsubscribe.html",
dict(result="doesnotexist"),
status=404)
if u.unsubscribe_hmac() != hmac:
return render(request, "fileshack/unsubscribe.html",
dict(result="invalid"),
status=403) # Forbidden.
u.delete()
return render(request, "fileshack/unsubscribe.html", dict(result="success"))
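`unsubscribe` above compares an `hmac` query parameter against `User.unsubscribe_hmac()` so unsubscribe links cannot be forged for other users. A self-contained sketch of such a signed-link scheme using only the stdlib — the key name and message layout here are assumptions, not fileshack's actual format:

```python
import hashlib
import hmac

SECRET_KEY = "not-a-real-secret"   # stand-in for settings.SECRET_KEY

def unsubscribe_token(email):
    """Sign the e-mail address; only the server can mint a matching token."""
    return hmac.new(SECRET_KEY.encode(), email.encode(), hashlib.sha256).hexdigest()

def verify(email, token):
    # compare_digest avoids leaking the token through timing differences
    return hmac.compare_digest(unsubscribe_token(email), token)

tok = unsubscribe_token("user@example.com")
```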
def page_not_found(request):
stores = Store.objects.all()
store = None
for s in stores:
if request.path.startswith(s.get_absolute_url()):
store = s
if not store:
try: store = Store.objects.get(path="")
except Store.DoesNotExist: pass
t = loader.get_template("fileshack/404.html")
return HttpResponseNotFound(t.render(RequestContext(request, {
"request_path": request.path,
"store": store,
"stores_number": len(stores),
"admin_url": reverse("admin:index"),
})))
def server_error(request):
t = loader.get_template("fileshack/500.html")
return HttpResponseServerError(t.render(RequestContext(request, { "request_path": request.path, })))
| 35.075286 | 110 | 0.614157 |
e27d858b47dbe5090effe71ae4b148cb2e7bb954 | 775 | py | Python | yamale/readers/yaml_reader.py | basnijholt/Yamale | 8d8e4b809c9e313795ac5c3721697ac853bfada5 | ["MIT"] | 1 | 2021-01-21T10:18:38.000Z | 2021-01-21T10:18:38.000Z | yamale/readers/yaml_reader.py | basnijholt/Yamale | 8d8e4b809c9e313795ac5c3721697ac853bfada5 | ["MIT"] | null | null | null | yamale/readers/yaml_reader.py | basnijholt/Yamale | 8d8e4b809c9e313795ac5c3721697ac853bfada5 | ["MIT"] | 1 | 2020-12-04T21:48:58.000Z | 2020-12-04T21:48:58.000Z |
from __future__ import absolute_import
def _pyyaml(file_name):
import yaml
try:
Loader = yaml.CSafeLoader
except AttributeError: # System does not have libyaml
Loader = yaml.SafeLoader
with open(file_name) as f:
return list(yaml.load_all(f, Loader=Loader))
def _ruamel(file_name):
from ruamel.yaml import YAML
yaml = YAML(typ='safe')
with open(file_name) as f:
return list(yaml.load_all(f))
_parsers = {
'pyyaml': _pyyaml,
'ruamel': _ruamel
}
def parse_file(file_name, parser):
try:
parse = _parsers[parser.lower()]
except KeyError:
raise NameError('Parser "' + parser + '" is not supported\nAvailable parsers are listed below:\nPyYAML\nruamel')
return parse(file_name)
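`parse_file` above dispatches on a lowercased parser name through the `_parsers` table and raises `NameError` for anything it doesn't know. The same table-dispatch pattern, sketched with a stdlib JSON backend standing in for the PyYAML/ruamel loaders so the example runs without third-party installs:

```python
import json

def _json_parser(text):
    # Mirror the YAML readers' contract: always return a list of documents.
    return [json.loads(text)]

_parsers = {
    'json': _json_parser,
}

def parse_text(text, parser):
    try:
        parse = _parsers[parser.lower()]
    except KeyError:
        raise NameError('Parser "' + parser + '" is not supported')
    return parse(text)

docs = parse_text('{"a": 1}', 'JSON')   # lookup is case-insensitive
```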
| 23.484848 | 120 | 0.665806 |
01a7612d924634c2eb762ec2d69946111e364553 | 6,302 | py | Python | userbot/modules/chat.py | bakulcilok/ProjectBish-1 | f2d84d6dc4bb4c0078ce6a85df1194833580ae55 | ["Naumen", "Condor-1.1", "MS-PL"] | 1 | 2020-06-30T09:06:27.000Z | 2020-06-30T09:06:27.000Z | userbot/modules/chat.py | tegarseptian033/ProjectBish-1 | 9ee5fe1c549e70e910ce65660498159e8e5fb197 | ["Naumen", "Condor-1.1", "MS-PL"] | null | null | null | userbot/modules/chat.py | tegarseptian033/ProjectBish-1 | 9ee5fe1c549e70e910ce65660498159e8e5fb197 | ["Naumen", "Condor-1.1", "MS-PL"] | null | null | null |
# Copyright (C) 2019 The Raphielscape Company LLC.
#
# Licensed under the Raphielscape Public License, Version 1.c (the "License");
# you may not use this file except in compliance with the License.
""" Userbot module containing userid, chatid and log commands"""
from asyncio import sleep
from userbot import CMD_HELP, BOTLOG, BOTLOG_CHATID, bot
from userbot.events import register
from userbot.modules.admin import get_user_from_event
@register(outgoing=True, pattern="^\.userid$")
async def useridgetter(target):
""" For .userid command, returns the ID of the target user. """
message = await target.get_reply_message()
if message:
if not message.forward:
user_id = message.sender.id
if message.sender.username:
name = "@" + message.sender.username
else:
name = "**" + message.sender.first_name + "**"
else:
user_id = message.forward.sender.id
if message.forward.sender.username:
name = "@" + message.forward.sender.username
else:
name = "*" + message.forward.sender.first_name + "*"
await target.edit("**Name:** {} \n**User ID:** `{}`".format(
name, user_id))
@register(outgoing=True, pattern="^\.link(?: |$)(.*)")
async def permalink(mention):
""" For .link command, generates a link to the user's PM with a custom text. """
user, custom = await get_user_from_event(mention)
if not user:
return
if custom:
await mention.edit(f"[{custom}](tg://user?id={user.id})")
else:
tag = user.first_name.replace("\u2060",
"") if user.first_name else user.username
await mention.edit(f"[{tag}](tg://user?id={user.id})")
@register(outgoing=True, pattern="^\.chatid$")
async def chatidgetter(chat):
""" For .chatid, returns the ID of the chat you are in at that moment. """
await chat.edit("Chat ID: `" + str(chat.chat_id) + "`")
@register(outgoing=True, pattern=r"^\.log(?: |$)([\s\S]*)")
async def log(log_text):
""" For .log command, forwards a message or the command argument to the bot logs group """
if BOTLOG:
if log_text.reply_to_msg_id:
reply_msg = await log_text.get_reply_message()
await reply_msg.forward_to(BOTLOG_CHATID)
elif log_text.pattern_match.group(1):
user = f"#LOG / Chat ID: {log_text.chat_id}\n\n"
textx = user + log_text.pattern_match.group(1)
await bot.send_message(BOTLOG_CHATID, textx)
else:
return await log_text.edit("`What am I supposed to log?`")
await log_text.edit("`Logged Successfully`")
else:
await log_text.edit("`This feature requires Logging to be enabled!`")
await sleep(2)
await log_text.delete()
@register(outgoing=True, pattern="^\.kickme$")
async def kickme(leave):
""" Basically it's .kickme command """
await leave.edit("Nope, no, no, I go away")
await leave.client.kick_participant(leave.chat_id, 'me')
@register(outgoing=True, pattern="^\.unmutechat$")
async def unmute_chat(unm_e):
""" For .unmutechat command, unmute a muted chat. """
try:
from userbot.modules.sql_helper.keep_read_sql import unkread
except AttributeError:
return await unm_e.edit('`Running on Non-SQL Mode!`')
unkread(str(unm_e.chat_id))
await unm_e.edit("```Unmuted this chat Successfully```")
await sleep(2)
await unm_e.delete()
@register(outgoing=True, pattern="^\.mutechat$")
async def mute_chat(mute_e):
""" For .mutechat command, mute any chat. """
try:
from userbot.modules.sql_helper.keep_read_sql import kread
except AttributeError:
return await mute_e.edit("`Running on Non-SQL mode!`")
kread(str(mute_e.chat_id))
await mute_e.edit("`Shush! This chat will be silenced!`")
await sleep(2)
await mute_e.delete()
if BOTLOG:
await mute_e.client.send_message(
BOTLOG_CHATID,
str(mute_e.chat_id) + " was silenced.")
@register(incoming=True, disable_errors=True)
async def keep_read(message):
""" The mute logic. """
try:
from userbot.modules.sql_helper.keep_read_sql import is_kread
except AttributeError:
return
kread = is_kread()
if kread:
for i in kread:
if i.groupid == str(message.chat_id):
await message.client.send_read_acknowledge(message.chat_id)
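The mute lookup above scans every stored row for each incoming message, comparing `groupid` strings against the chat id. A minimal stand-in sketch (the `Row` class and the ids are hypothetical; the real `is_kread()` returns SQL rows carrying a `.groupid` string column):

```python
# Stand-in for the ORM rows returned by is_kread().
class Row:
    def __init__(self, groupid):
        self.groupid = groupid

muted = [Row("-100123"), Row("-100456")]

def is_muted(chat_id, rows):
    # Same comparison the handler performs: stringified chat id vs. stored id.
    return any(r.groupid == str(chat_id) for r in rows)

print(is_muted(-100123, muted))  # True
print(is_muted(-100999, muted))  # False
```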
# Regex-Ninja module by @Kandnub
regexNinja = False
@register(outgoing=True, pattern="^s/")
async def sedNinja(event):
    """For the regex-ninja module, auto-deletes commands starting with s/"""
if regexNinja:
await sleep(.5)
await event.delete()
@register(outgoing=True, pattern=r"^\.regexninja (on|off)$")
async def sedNinjaToggle(event):
""" Enables or disables the regex ninja module. """
global regexNinja
if event.pattern_match.group(1) == "on":
regexNinja = True
await event.edit("`Successfully enabled ninja mode for Regexbot.`")
await sleep(1)
await event.delete()
elif event.pattern_match.group(1) == "off":
regexNinja = False
await event.edit("`Successfully disabled ninja mode for Regexbot.`")
await sleep(1)
await event.delete()
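The on/off branches above reduce to a single parse of the toggle pattern; a self-contained sketch (the `parse_toggle` helper name is illustrative, not from the module):

```python
import re

TOGGLE = re.compile(r"^\.regexninja (on|off)$")

def parse_toggle(text):
    """Return True for 'on', False for 'off', None if the text is not the command."""
    m = TOGGLE.match(text)
    if not m:
        return None
    return m.group(1) == "on"

print(parse_toggle(".regexninja on"))     # True
print(parse_toggle(".regexninja off"))    # False
print(parse_toggle(".regexninja maybe"))  # None -- anchor $ rejects other arguments
```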
CMD_HELP.update({
"chat":
">`.chatid`"
"\nUsage: Fetches the current chat's ID"
"\n\n>`.userid`"
    "\nUsage: Fetches the ID of the user in reply; if it's a forwarded message, finds the ID of the source."
"\n\n>`.log`"
"\nUsage: Forwards the message you've replied to in your bot logs group."
"\n\n>`.kickme`"
    "\nUsage: Leaves the current group."
"\n\n>`.unmutechat`"
"\nUsage: Unmutes a muted chat."
"\n\n>`.mutechat`"
"\nUsage: Allows you to mute any chat."
"\n\n>`.link <username/userid> : <optional text>` (or) reply to someone's message with"
"\n\n>`.link <optional text>`"
"\nUsage: Generate a permanent link to the user's profile with optional custom text."
"\n\n>`.regexninja on/off`"
    "\nUsage: Globally enables/disables the regex ninja module."
    "\nThe Regex Ninja module deletes the messages that trigger the regex bot."
})
| 36.011429 | 108 | 0.638686 |
4fcdc350bc90acd87e65988a6d992e996027d584 | 126,742 | py | Python | pandas/tests/plotting/test_frame.py | mlline00/pandas | fd7db9819b8c7dba86b2887bee33f670b2715afc | [
"BSD-3-Clause"
] | 1 | 2019-12-28T12:27:14.000Z | 2019-12-28T12:27:14.000Z | pandas/tests/plotting/test_frame.py | salama4/pandas | 1d36851ffdc8391b8b60a7628ce5a536180ef13b | [
"BSD-3-Clause"
] | null | null | null | pandas/tests/plotting/test_frame.py | salama4/pandas | 1d36851ffdc8391b8b60a7628ce5a536180ef13b | [
"BSD-3-Clause"
] | null | null | null | # coding: utf-8
""" Test cases for DataFrame.plot """
from datetime import date, datetime
import itertools
import string
import warnings
import numpy as np
from numpy.random import rand, randn
import pytest
import pandas.util._test_decorators as td
from pandas.core.dtypes.api import is_list_like
import pandas as pd
from pandas import DataFrame, MultiIndex, PeriodIndex, Series, bdate_range, date_range
from pandas.core.arrays import integer_array
from pandas.tests.plotting.common import TestPlotBase, _check_plot_works
import pandas.util.testing as tm
from pandas.io.formats.printing import pprint_thing
import pandas.plotting as plotting
@td.skip_if_no_mpl
class TestDataFramePlots(TestPlotBase):
def setup_method(self, method):
TestPlotBase.setup_method(self, method)
import matplotlib as mpl
mpl.rcdefaults()
self.tdf = tm.makeTimeDataFrame()
self.hexbin_df = DataFrame(
{
"A": np.random.uniform(size=20),
"B": np.random.uniform(size=20),
"C": np.arange(20) + np.random.uniform(size=20),
}
)
def _assert_ytickslabels_visibility(self, axes, expected):
for ax, exp in zip(axes, expected):
self._check_visible(ax.get_yticklabels(), visible=exp)
def _assert_xtickslabels_visibility(self, axes, expected):
for ax, exp in zip(axes, expected):
self._check_visible(ax.get_xticklabels(), visible=exp)
@pytest.mark.slow
def test_plot(self):
from pandas.plotting._matplotlib.compat import _mpl_ge_3_1_0
df = self.tdf
_check_plot_works(df.plot, grid=False)
# _check_plot_works adds an ax so catch warning. see GH #13188
with tm.assert_produces_warning(UserWarning):
axes = _check_plot_works(df.plot, subplots=True)
self._check_axes_shape(axes, axes_num=4, layout=(4, 1))
with tm.assert_produces_warning(UserWarning):
axes = _check_plot_works(df.plot, subplots=True, layout=(-1, 2))
self._check_axes_shape(axes, axes_num=4, layout=(2, 2))
with tm.assert_produces_warning(UserWarning):
axes = _check_plot_works(df.plot, subplots=True, use_index=False)
self._check_axes_shape(axes, axes_num=4, layout=(4, 1))
df = DataFrame({"x": [1, 2], "y": [3, 4]})
if _mpl_ge_3_1_0():
msg = "'Line2D' object has no property 'blarg'"
else:
msg = "Unknown property blarg"
with pytest.raises(AttributeError, match=msg):
df.plot.line(blarg=True)
df = DataFrame(np.random.rand(10, 3), index=list(string.ascii_letters[:10]))
_check_plot_works(df.plot, use_index=True)
_check_plot_works(df.plot, sort_columns=False)
_check_plot_works(df.plot, yticks=[1, 5, 10])
_check_plot_works(df.plot, xticks=[1, 5, 10])
_check_plot_works(df.plot, ylim=(-100, 100), xlim=(-100, 100))
with tm.assert_produces_warning(UserWarning):
_check_plot_works(df.plot, subplots=True, title="blah")
# We have to redo it here because _check_plot_works does two plots,
# once without an ax kwarg and once with an ax kwarg and the new sharex
# behaviour does not remove the visibility of the latter axis (as ax is
# present). see: https://github.com/pandas-dev/pandas/issues/9737
axes = df.plot(subplots=True, title="blah")
self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
for ax in axes[:2]:
self._check_visible(ax.xaxis) # xaxis must be visible for grid
self._check_visible(ax.get_xticklabels(), visible=False)
self._check_visible(ax.get_xticklabels(minor=True), visible=False)
self._check_visible([ax.xaxis.get_label()], visible=False)
for ax in [axes[2]]:
self._check_visible(ax.xaxis)
self._check_visible(ax.get_xticklabels())
self._check_visible([ax.xaxis.get_label()])
self._check_ticks_props(ax, xrot=0)
_check_plot_works(df.plot, title="blah")
tuples = zip(string.ascii_letters[:10], range(10))
df = DataFrame(np.random.rand(10, 3), index=MultiIndex.from_tuples(tuples))
_check_plot_works(df.plot, use_index=True)
# unicode
index = MultiIndex.from_tuples(
[
("\u03b1", 0),
("\u03b1", 1),
("\u03b2", 2),
("\u03b2", 3),
("\u03b3", 4),
("\u03b3", 5),
("\u03b4", 6),
("\u03b4", 7),
],
names=["i0", "i1"],
)
columns = MultiIndex.from_tuples(
[("bar", "\u0394"), ("bar", "\u0395")], names=["c0", "c1"]
)
df = DataFrame(np.random.randint(0, 10, (8, 2)), columns=columns, index=index)
_check_plot_works(df.plot, title="\u03A3")
# GH 6951
# Test with single column
df = DataFrame({"x": np.random.rand(10)})
axes = _check_plot_works(df.plot.bar, subplots=True)
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
axes = _check_plot_works(df.plot.bar, subplots=True, layout=(-1, 1))
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
# When ax is supplied and required number of axes is 1,
# passed ax should be used:
fig, ax = self.plt.subplots()
axes = df.plot.bar(subplots=True, ax=ax)
assert len(axes) == 1
result = ax.axes
assert result is axes[0]
def test_integer_array_plot(self):
# GH 25587
arr = integer_array([1, 2, 3, 4], dtype="UInt32")
s = Series(arr)
_check_plot_works(s.plot.line)
_check_plot_works(s.plot.bar)
_check_plot_works(s.plot.hist)
_check_plot_works(s.plot.pie)
df = DataFrame({"x": arr, "y": arr})
_check_plot_works(df.plot.line)
_check_plot_works(df.plot.bar)
_check_plot_works(df.plot.hist)
_check_plot_works(df.plot.pie, y="y")
_check_plot_works(df.plot.scatter, x="x", y="y")
_check_plot_works(df.plot.hexbin, x="x", y="y")
def test_mpl2_color_cycle_str(self):
# GH 15516
colors = ["C" + str(x) for x in range(10)]
df = DataFrame(randn(10, 3), columns=["a", "b", "c"])
for c in colors:
_check_plot_works(df.plot, color=c)
def test_color_single_series_list(self):
# GH 3486
df = DataFrame({"A": [1, 2, 3]})
_check_plot_works(df.plot, color=["red"])
def test_rgb_tuple_color(self):
# GH 16695
df = DataFrame({"x": [1, 2], "y": [3, 4]})
_check_plot_works(df.plot, x="x", y="y", color=(1, 0, 0))
_check_plot_works(df.plot, x="x", y="y", color=(1, 0, 0, 0.5))
def test_color_empty_string(self):
df = DataFrame(randn(10, 2))
with pytest.raises(ValueError):
df.plot(color="")
def test_color_and_style_arguments(self):
df = DataFrame({"x": [1, 2], "y": [3, 4]})
# passing both 'color' and 'style' arguments should be allowed
# if there is no color symbol in the style strings:
ax = df.plot(color=["red", "black"], style=["-", "--"])
# check that the linestyles are correctly set:
linestyle = [line.get_linestyle() for line in ax.lines]
assert linestyle == ["-", "--"]
# check that the colors are correctly set:
color = [line.get_color() for line in ax.lines]
assert color == ["red", "black"]
# passing both 'color' and 'style' arguments should not be allowed
# if there is a color symbol in the style strings:
with pytest.raises(ValueError):
df.plot(color=["red", "black"], style=["k-", "r--"])
def test_nonnumeric_exclude(self):
df = DataFrame({"A": ["x", "y", "z"], "B": [1, 2, 3]})
ax = df.plot()
assert len(ax.get_lines()) == 1 # B was plotted
@pytest.mark.slow
def test_implicit_label(self):
df = DataFrame(randn(10, 3), columns=["a", "b", "c"])
ax = df.plot(x="a", y="b")
self._check_text_labels(ax.xaxis.get_label(), "a")
@pytest.mark.slow
def test_donot_overwrite_index_name(self):
# GH 8494
df = DataFrame(randn(2, 2), columns=["a", "b"])
df.index.name = "NAME"
df.plot(y="b", label="LABEL")
assert df.index.name == "NAME"
@pytest.mark.slow
def test_plot_xy(self):
# columns.inferred_type == 'string'
df = self.tdf
self._check_data(df.plot(x=0, y=1), df.set_index("A")["B"].plot())
self._check_data(df.plot(x=0), df.set_index("A").plot())
self._check_data(df.plot(y=0), df.B.plot())
self._check_data(df.plot(x="A", y="B"), df.set_index("A").B.plot())
self._check_data(df.plot(x="A"), df.set_index("A").plot())
self._check_data(df.plot(y="B"), df.B.plot())
# columns.inferred_type == 'integer'
df.columns = np.arange(1, len(df.columns) + 1)
self._check_data(df.plot(x=1, y=2), df.set_index(1)[2].plot())
self._check_data(df.plot(x=1), df.set_index(1).plot())
self._check_data(df.plot(y=1), df[1].plot())
# figsize and title
ax = df.plot(x=1, y=2, title="Test", figsize=(16, 8))
self._check_text_labels(ax.title, "Test")
self._check_axes_shape(ax, axes_num=1, layout=(1, 1), figsize=(16.0, 8.0))
# columns.inferred_type == 'mixed'
# TODO add MultiIndex test
@pytest.mark.slow
@pytest.mark.parametrize(
"input_log, expected_log", [(True, "log"), ("sym", "symlog")]
)
def test_logscales(self, input_log, expected_log):
df = DataFrame({"a": np.arange(100)}, index=np.arange(100))
ax = df.plot(logy=input_log)
self._check_ax_scales(ax, yaxis=expected_log)
assert ax.get_yscale() == expected_log
ax = df.plot(logx=input_log)
self._check_ax_scales(ax, xaxis=expected_log)
assert ax.get_xscale() == expected_log
ax = df.plot(loglog=input_log)
self._check_ax_scales(ax, xaxis=expected_log, yaxis=expected_log)
assert ax.get_xscale() == expected_log
assert ax.get_yscale() == expected_log
@pytest.mark.parametrize("input_param", ["logx", "logy", "loglog"])
def test_invalid_logscale(self, input_param):
# GH: 24867
df = DataFrame({"a": np.arange(100)}, index=np.arange(100))
msg = "Boolean, None and 'sym' are valid options, 'sm' is given."
with pytest.raises(ValueError, match=msg):
df.plot(**{input_param: "sm"})
@pytest.mark.slow
def test_xcompat(self):
import pandas as pd
df = self.tdf
ax = df.plot(x_compat=True)
lines = ax.get_lines()
assert not isinstance(lines[0].get_xdata(), PeriodIndex)
tm.close()
pd.plotting.plot_params["xaxis.compat"] = True
ax = df.plot()
lines = ax.get_lines()
assert not isinstance(lines[0].get_xdata(), PeriodIndex)
tm.close()
pd.plotting.plot_params["x_compat"] = False
ax = df.plot()
lines = ax.get_lines()
assert not isinstance(lines[0].get_xdata(), PeriodIndex)
assert isinstance(PeriodIndex(lines[0].get_xdata()), PeriodIndex)
tm.close()
# useful if you're plotting a bunch together
with pd.plotting.plot_params.use("x_compat", True):
ax = df.plot()
lines = ax.get_lines()
assert not isinstance(lines[0].get_xdata(), PeriodIndex)
tm.close()
ax = df.plot()
lines = ax.get_lines()
assert not isinstance(lines[0].get_xdata(), PeriodIndex)
assert isinstance(PeriodIndex(lines[0].get_xdata()), PeriodIndex)
def test_period_compat(self):
# GH 9012
# period-array conversions
df = DataFrame(
np.random.rand(21, 2),
index=bdate_range(datetime(2000, 1, 1), datetime(2000, 1, 31)),
columns=["a", "b"],
)
df.plot()
self.plt.axhline(y=0)
tm.close()
def test_unsorted_index(self):
df = DataFrame(
{"y": np.arange(100)}, index=np.arange(99, -1, -1), dtype=np.int64
)
ax = df.plot()
lines = ax.get_lines()[0]
rs = lines.get_xydata()
rs = Series(rs[:, 1], rs[:, 0], dtype=np.int64, name="y")
tm.assert_series_equal(rs, df.y, check_index_type=False)
tm.close()
df.index = pd.Index(np.arange(99, -1, -1), dtype=np.float64)
ax = df.plot()
lines = ax.get_lines()[0]
rs = lines.get_xydata()
rs = Series(rs[:, 1], rs[:, 0], dtype=np.int64, name="y")
tm.assert_series_equal(rs, df.y)
def test_unsorted_index_lims(self):
df = DataFrame({"y": [0.0, 1.0, 2.0, 3.0]}, index=[1.0, 0.0, 3.0, 2.0])
ax = df.plot()
xmin, xmax = ax.get_xlim()
lines = ax.get_lines()
assert xmin <= np.nanmin(lines[0].get_data()[0])
assert xmax >= np.nanmax(lines[0].get_data()[0])
df = DataFrame(
{"y": [0.0, 1.0, np.nan, 3.0, 4.0, 5.0, 6.0]},
index=[1.0, 0.0, 3.0, 2.0, np.nan, 3.0, 2.0],
)
ax = df.plot()
xmin, xmax = ax.get_xlim()
lines = ax.get_lines()
assert xmin <= np.nanmin(lines[0].get_data()[0])
assert xmax >= np.nanmax(lines[0].get_data()[0])
df = DataFrame({"y": [0.0, 1.0, 2.0, 3.0], "z": [91.0, 90.0, 93.0, 92.0]})
ax = df.plot(x="z", y="y")
xmin, xmax = ax.get_xlim()
lines = ax.get_lines()
assert xmin <= np.nanmin(lines[0].get_data()[0])
assert xmax >= np.nanmax(lines[0].get_data()[0])
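The limit checks above use `np.nanmin`/`np.nanmax` rather than plain `min`/`max` because the second test frame's index contains NaN, which would poison the ordinary reductions:

```python
import numpy as np

x = np.array([1.0, 0.0, 3.0, 2.0, np.nan])
print(np.min(x), np.max(x))        # nan nan -- NaN propagates through min/max
print(np.nanmin(x), np.nanmax(x))  # 0.0 3.0 -- NaN entries are ignored
```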
@pytest.mark.slow
def test_subplots(self):
df = DataFrame(np.random.rand(10, 3), index=list(string.ascii_letters[:10]))
for kind in ["bar", "barh", "line", "area"]:
axes = df.plot(kind=kind, subplots=True, sharex=True, legend=True)
self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
assert axes.shape == (3,)
for ax, column in zip(axes, df.columns):
self._check_legend_labels(ax, labels=[pprint_thing(column)])
for ax in axes[:-2]:
self._check_visible(ax.xaxis) # xaxis must be visible for grid
self._check_visible(ax.get_xticklabels(), visible=False)
if not (kind == "bar" and self.mpl_ge_3_1_0):
# change https://github.com/pandas-dev/pandas/issues/26714
self._check_visible(ax.get_xticklabels(minor=True), visible=False)
self._check_visible(ax.xaxis.get_label(), visible=False)
self._check_visible(ax.get_yticklabels())
self._check_visible(axes[-1].xaxis)
self._check_visible(axes[-1].get_xticklabels())
self._check_visible(axes[-1].get_xticklabels(minor=True))
self._check_visible(axes[-1].xaxis.get_label())
self._check_visible(axes[-1].get_yticklabels())
axes = df.plot(kind=kind, subplots=True, sharex=False)
for ax in axes:
self._check_visible(ax.xaxis)
self._check_visible(ax.get_xticklabels())
self._check_visible(ax.get_xticklabels(minor=True))
self._check_visible(ax.xaxis.get_label())
self._check_visible(ax.get_yticklabels())
axes = df.plot(kind=kind, subplots=True, legend=False)
for ax in axes:
assert ax.get_legend() is None
def test_groupby_boxplot_sharey(self):
# https://github.com/pandas-dev/pandas/issues/20968
        # sharey can now be switched; check whether the right
        # pair of axes is turned on or off
df = DataFrame(
{
"a": [-1.43, -0.15, -3.70, -1.43, -0.14],
"b": [0.56, 0.84, 0.29, 0.56, 0.85],
"c": [0, 1, 2, 3, 1],
},
index=[0, 1, 2, 3, 4],
)
# behavior without keyword
axes = df.groupby("c").boxplot()
expected = [True, False, True, False]
self._assert_ytickslabels_visibility(axes, expected)
# set sharey=True should be identical
axes = df.groupby("c").boxplot(sharey=True)
expected = [True, False, True, False]
self._assert_ytickslabels_visibility(axes, expected)
# sharey=False, all yticklabels should be visible
axes = df.groupby("c").boxplot(sharey=False)
expected = [True, True, True, True]
self._assert_ytickslabels_visibility(axes, expected)
def test_groupby_boxplot_sharex(self):
# https://github.com/pandas-dev/pandas/issues/20968
        # sharex can now be switched; check whether the right
        # pair of axes is turned on or off
df = DataFrame(
{
"a": [-1.43, -0.15, -3.70, -1.43, -0.14],
"b": [0.56, 0.84, 0.29, 0.56, 0.85],
"c": [0, 1, 2, 3, 1],
},
index=[0, 1, 2, 3, 4],
)
# behavior without keyword
axes = df.groupby("c").boxplot()
expected = [True, True, True, True]
self._assert_xtickslabels_visibility(axes, expected)
# set sharex=False should be identical
axes = df.groupby("c").boxplot(sharex=False)
expected = [True, True, True, True]
self._assert_xtickslabels_visibility(axes, expected)
        # sharex=True, xticklabels should be visible
        # only for bottom plots
axes = df.groupby("c").boxplot(sharex=True)
expected = [False, False, True, True]
self._assert_xtickslabels_visibility(axes, expected)
@pytest.mark.slow
def test_subplots_timeseries(self):
idx = date_range(start="2014-07-01", freq="M", periods=10)
df = DataFrame(np.random.rand(10, 3), index=idx)
for kind in ["line", "area"]:
axes = df.plot(kind=kind, subplots=True, sharex=True)
self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
for ax in axes[:-2]:
# GH 7801
self._check_visible(ax.xaxis) # xaxis must be visible for grid
self._check_visible(ax.get_xticklabels(), visible=False)
self._check_visible(ax.get_xticklabels(minor=True), visible=False)
self._check_visible(ax.xaxis.get_label(), visible=False)
self._check_visible(ax.get_yticklabels())
self._check_visible(axes[-1].xaxis)
self._check_visible(axes[-1].get_xticklabels())
self._check_visible(axes[-1].get_xticklabels(minor=True))
self._check_visible(axes[-1].xaxis.get_label())
self._check_visible(axes[-1].get_yticklabels())
self._check_ticks_props(axes, xrot=0)
axes = df.plot(kind=kind, subplots=True, sharex=False, rot=45, fontsize=7)
for ax in axes:
self._check_visible(ax.xaxis)
self._check_visible(ax.get_xticklabels())
self._check_visible(ax.get_xticklabels(minor=True))
self._check_visible(ax.xaxis.get_label())
self._check_visible(ax.get_yticklabels())
self._check_ticks_props(ax, xlabelsize=7, xrot=45, ylabelsize=7)
def test_subplots_timeseries_y_axis(self):
# GH16953
data = {
"numeric": np.array([1, 2, 5]),
"timedelta": [
pd.Timedelta(-10, unit="s"),
pd.Timedelta(10, unit="m"),
pd.Timedelta(10, unit="h"),
],
"datetime_no_tz": [
pd.to_datetime("2017-08-01 00:00:00"),
pd.to_datetime("2017-08-01 02:00:00"),
pd.to_datetime("2017-08-02 00:00:00"),
],
"datetime_all_tz": [
pd.to_datetime("2017-08-01 00:00:00", utc=True),
pd.to_datetime("2017-08-01 02:00:00", utc=True),
pd.to_datetime("2017-08-02 00:00:00", utc=True),
],
"text": ["This", "should", "fail"],
}
testdata = DataFrame(data)
ax_numeric = testdata.plot(y="numeric")
assert (
ax_numeric.get_lines()[0].get_data()[1] == testdata["numeric"].values
).all()
ax_timedelta = testdata.plot(y="timedelta")
assert (
ax_timedelta.get_lines()[0].get_data()[1] == testdata["timedelta"].values
).all()
ax_datetime_no_tz = testdata.plot(y="datetime_no_tz")
assert (
ax_datetime_no_tz.get_lines()[0].get_data()[1]
== testdata["datetime_no_tz"].values
).all()
ax_datetime_all_tz = testdata.plot(y="datetime_all_tz")
assert (
ax_datetime_all_tz.get_lines()[0].get_data()[1]
== testdata["datetime_all_tz"].values
).all()
msg = "no numeric data to plot"
with pytest.raises(TypeError, match=msg):
testdata.plot(y="text")
@pytest.mark.xfail(reason="not support for period, categorical, datetime_mixed_tz")
def test_subplots_timeseries_y_axis_not_supported(self):
"""
This test will fail for:
period:
since period isn't yet implemented in ``select_dtypes``
and because it will need a custom value converter +
tick formatter (as was done for x-axis plots)
categorical:
because it will need a custom value converter +
tick formatter (also doesn't work for x-axis, as of now)
datetime_mixed_tz:
because of the way how pandas handles ``Series`` of
``datetime`` objects with different timezone,
generally converting ``datetime`` objects in a tz-aware
form could help with this problem
"""
data = {
"numeric": np.array([1, 2, 5]),
"period": [
pd.Period("2017-08-01 00:00:00", freq="H"),
pd.Period("2017-08-01 02:00", freq="H"),
pd.Period("2017-08-02 00:00:00", freq="H"),
],
"categorical": pd.Categorical(
["c", "b", "a"], categories=["a", "b", "c"], ordered=False
),
"datetime_mixed_tz": [
pd.to_datetime("2017-08-01 00:00:00", utc=True),
pd.to_datetime("2017-08-01 02:00:00"),
pd.to_datetime("2017-08-02 00:00:00"),
],
}
testdata = pd.DataFrame(data)
ax_period = testdata.plot(x="numeric", y="period")
assert (
ax_period.get_lines()[0].get_data()[1] == testdata["period"].values
).all()
ax_categorical = testdata.plot(x="numeric", y="categorical")
assert (
ax_categorical.get_lines()[0].get_data()[1]
== testdata["categorical"].values
).all()
ax_datetime_mixed_tz = testdata.plot(x="numeric", y="datetime_mixed_tz")
assert (
ax_datetime_mixed_tz.get_lines()[0].get_data()[1]
== testdata["datetime_mixed_tz"].values
).all()
@pytest.mark.slow
def test_subplots_layout(self):
# GH 6667
df = DataFrame(np.random.rand(10, 3), index=list(string.ascii_letters[:10]))
axes = df.plot(subplots=True, layout=(2, 2))
self._check_axes_shape(axes, axes_num=3, layout=(2, 2))
assert axes.shape == (2, 2)
axes = df.plot(subplots=True, layout=(-1, 2))
self._check_axes_shape(axes, axes_num=3, layout=(2, 2))
assert axes.shape == (2, 2)
axes = df.plot(subplots=True, layout=(2, -1))
self._check_axes_shape(axes, axes_num=3, layout=(2, 2))
assert axes.shape == (2, 2)
axes = df.plot(subplots=True, layout=(1, 4))
self._check_axes_shape(axes, axes_num=3, layout=(1, 4))
assert axes.shape == (1, 4)
axes = df.plot(subplots=True, layout=(-1, 4))
self._check_axes_shape(axes, axes_num=3, layout=(1, 4))
assert axes.shape == (1, 4)
axes = df.plot(subplots=True, layout=(4, -1))
self._check_axes_shape(axes, axes_num=3, layout=(4, 1))
assert axes.shape == (4, 1)
with pytest.raises(ValueError):
df.plot(subplots=True, layout=(1, 1))
with pytest.raises(ValueError):
df.plot(subplots=True, layout=(-1, -1))
# single column
df = DataFrame(np.random.rand(10, 1), index=list(string.ascii_letters[:10]))
axes = df.plot(subplots=True)
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
assert axes.shape == (1,)
axes = df.plot(subplots=True, layout=(3, 3))
self._check_axes_shape(axes, axes_num=1, layout=(3, 3))
assert axes.shape == (3, 3)
@pytest.mark.slow
def test_subplots_warnings(self):
# GH 9464
with tm.assert_produces_warning(None):
df = DataFrame(np.random.randn(100, 4))
df.plot(subplots=True, layout=(3, 2))
df = DataFrame(
np.random.randn(100, 4), index=date_range("1/1/2000", periods=100)
)
df.plot(subplots=True, layout=(3, 2))
@pytest.mark.slow
def test_subplots_multiple_axes(self):
# GH 5353, 6970, GH 7069
fig, axes = self.plt.subplots(2, 3)
df = DataFrame(np.random.rand(10, 3), index=list(string.ascii_letters[:10]))
returned = df.plot(subplots=True, ax=axes[0], sharex=False, sharey=False)
self._check_axes_shape(returned, axes_num=3, layout=(1, 3))
assert returned.shape == (3,)
assert returned[0].figure is fig
# draw on second row
returned = df.plot(subplots=True, ax=axes[1], sharex=False, sharey=False)
self._check_axes_shape(returned, axes_num=3, layout=(1, 3))
assert returned.shape == (3,)
assert returned[0].figure is fig
self._check_axes_shape(axes, axes_num=6, layout=(2, 3))
tm.close()
with pytest.raises(ValueError):
fig, axes = self.plt.subplots(2, 3)
# pass different number of axes from required
df.plot(subplots=True, ax=axes)
# pass 2-dim axes and invalid layout
        # an invalid layout should not affect the input and return value
        # (show warning is tested in
        # TestDataFrameGroupByPlots.test_grouped_box_multiple_axes)
fig, axes = self.plt.subplots(2, 2)
with warnings.catch_warnings():
warnings.simplefilter("ignore", UserWarning)
df = DataFrame(np.random.rand(10, 4), index=list(string.ascii_letters[:10]))
returned = df.plot(
subplots=True, ax=axes, layout=(2, 1), sharex=False, sharey=False
)
self._check_axes_shape(returned, axes_num=4, layout=(2, 2))
assert returned.shape == (4,)
returned = df.plot(
subplots=True, ax=axes, layout=(2, -1), sharex=False, sharey=False
)
self._check_axes_shape(returned, axes_num=4, layout=(2, 2))
assert returned.shape == (4,)
returned = df.plot(
subplots=True, ax=axes, layout=(-1, 2), sharex=False, sharey=False
)
self._check_axes_shape(returned, axes_num=4, layout=(2, 2))
assert returned.shape == (4,)
# single column
fig, axes = self.plt.subplots(1, 1)
df = DataFrame(np.random.rand(10, 1), index=list(string.ascii_letters[:10]))
axes = df.plot(subplots=True, ax=[axes], sharex=False, sharey=False)
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
assert axes.shape == (1,)
def test_subplots_ts_share_axes(self):
# GH 3964
fig, axes = self.plt.subplots(3, 3, sharex=True, sharey=True)
self.plt.subplots_adjust(left=0.05, right=0.95, hspace=0.3, wspace=0.3)
df = DataFrame(
np.random.randn(10, 9),
index=date_range(start="2014-07-01", freq="M", periods=10),
)
for i, ax in enumerate(axes.ravel()):
df[i].plot(ax=ax, fontsize=5)
# Rows other than bottom should not be visible
for ax in axes[0:-1].ravel():
self._check_visible(ax.get_xticklabels(), visible=False)
# Bottom row should be visible
for ax in axes[-1].ravel():
self._check_visible(ax.get_xticklabels(), visible=True)
# First column should be visible
for ax in axes[[0, 1, 2], [0]].ravel():
self._check_visible(ax.get_yticklabels(), visible=True)
# Other columns should not be visible
for ax in axes[[0, 1, 2], [1]].ravel():
self._check_visible(ax.get_yticklabels(), visible=False)
for ax in axes[[0, 1, 2], [2]].ravel():
self._check_visible(ax.get_yticklabels(), visible=False)
def test_subplots_sharex_axes_existing_axes(self):
# GH 9158
d = {"A": [1.0, 2.0, 3.0, 4.0], "B": [4.0, 3.0, 2.0, 1.0], "C": [5, 1, 3, 4]}
df = DataFrame(d, index=date_range("2014 10 11", "2014 10 14"))
axes = df[["A", "B"]].plot(subplots=True)
df["C"].plot(ax=axes[0], secondary_y=True)
self._check_visible(axes[0].get_xticklabels(), visible=False)
self._check_visible(axes[1].get_xticklabels(), visible=True)
for ax in axes.ravel():
self._check_visible(ax.get_yticklabels(), visible=True)
@pytest.mark.slow
def test_subplots_dup_columns(self):
# GH 10962
df = DataFrame(np.random.rand(5, 5), columns=list("aaaaa"))
axes = df.plot(subplots=True)
for ax in axes:
self._check_legend_labels(ax, labels=["a"])
assert len(ax.lines) == 1
tm.close()
axes = df.plot(subplots=True, secondary_y="a")
for ax in axes:
# (right) is only attached when subplots=False
self._check_legend_labels(ax, labels=["a"])
assert len(ax.lines) == 1
tm.close()
ax = df.plot(secondary_y="a")
self._check_legend_labels(ax, labels=["a (right)"] * 5)
assert len(ax.lines) == 0
assert len(ax.right_ax.lines) == 5
def test_negative_log(self):
df = -DataFrame(
rand(6, 4),
index=list(string.ascii_letters[:6]),
columns=["x", "y", "z", "four"],
)
with pytest.raises(ValueError):
df.plot.area(logy=True)
with pytest.raises(ValueError):
df.plot.area(loglog=True)
def _compare_stacked_y_cood(self, normal_lines, stacked_lines):
base = np.zeros(len(normal_lines[0].get_data()[1]))
for nl, sl in zip(normal_lines, stacked_lines):
base += nl.get_data()[1] # get y coordinates
sy = sl.get_data()[1]
tm.assert_numpy_array_equal(base, sy)
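The helper above accumulates the unstacked y-data and compares the running total against each stacked line; that relationship is just a cumulative sum, shown here with hypothetical y-data:

```python
import numpy as np

# Hypothetical y-data for three plotted lines (illustrative values only).
ys = [np.array([1.0, 2.0, 3.0]),
      np.array([0.5, 0.5, 0.5]),
      np.array([2.0, 0.0, 1.0])]

# Stacking draws each series on top of the running total of the earlier ones,
# so the stacked y-coordinates are the cumulative sums down the series axis.
stacked = np.cumsum(ys, axis=0)

base = np.zeros_like(ys[0])
for normal_y, stacked_y in zip(ys, stacked):
    base += normal_y  # the same accumulation the helper performs
    np.testing.assert_array_equal(base, stacked_y)
```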
def test_line_area_stacked(self):
with tm.RNGContext(42):
df = DataFrame(rand(6, 4), columns=["w", "x", "y", "z"])
neg_df = -df
# each column has either positive or negative value
sep_df = DataFrame(
{"w": rand(6), "x": rand(6), "y": -rand(6), "z": -rand(6)}
)
# each column has positive-negative mixed value
mixed_df = DataFrame(
randn(6, 4),
index=list(string.ascii_letters[:6]),
columns=["w", "x", "y", "z"],
)
for kind in ["line", "area"]:
ax1 = _check_plot_works(df.plot, kind=kind, stacked=False)
ax2 = _check_plot_works(df.plot, kind=kind, stacked=True)
self._compare_stacked_y_cood(ax1.lines, ax2.lines)
ax1 = _check_plot_works(neg_df.plot, kind=kind, stacked=False)
ax2 = _check_plot_works(neg_df.plot, kind=kind, stacked=True)
self._compare_stacked_y_cood(ax1.lines, ax2.lines)
ax1 = _check_plot_works(sep_df.plot, kind=kind, stacked=False)
ax2 = _check_plot_works(sep_df.plot, kind=kind, stacked=True)
self._compare_stacked_y_cood(ax1.lines[:2], ax2.lines[:2])
self._compare_stacked_y_cood(ax1.lines[2:], ax2.lines[2:])
_check_plot_works(mixed_df.plot, stacked=False)
with pytest.raises(ValueError):
mixed_df.plot(stacked=True)
# Use an index with strictly positive values, preventing
# matplotlib from warning about ignoring xlim
df2 = df.set_index(df.index + 1)
_check_plot_works(df2.plot, kind=kind, logx=True, stacked=True)
def test_line_area_nan_df(self):
values1 = [1, 2, np.nan, 3]
values2 = [3, np.nan, 2, 1]
df = DataFrame({"a": values1, "b": values2})
tdf = DataFrame({"a": values1, "b": values2}, index=tm.makeDateIndex(k=4))
for d in [df, tdf]:
ax = _check_plot_works(d.plot)
masked1 = ax.lines[0].get_ydata()
masked2 = ax.lines[1].get_ydata()
# remove nan for comparison purpose
exp = np.array([1, 2, 3], dtype=np.float64)
tm.assert_numpy_array_equal(np.delete(masked1.data, 2), exp)
exp = np.array([3, 2, 1], dtype=np.float64)
tm.assert_numpy_array_equal(np.delete(masked2.data, 1), exp)
tm.assert_numpy_array_equal(
masked1.mask, np.array([False, False, True, False])
)
tm.assert_numpy_array_equal(
masked2.mask, np.array([False, True, False, False])
)
expected1 = np.array([1, 2, 0, 3], dtype=np.float64)
expected2 = np.array([3, 0, 2, 1], dtype=np.float64)
ax = _check_plot_works(d.plot, stacked=True)
tm.assert_numpy_array_equal(ax.lines[0].get_ydata(), expected1)
tm.assert_numpy_array_equal(ax.lines[1].get_ydata(), expected1 + expected2)
ax = _check_plot_works(d.plot.area)
tm.assert_numpy_array_equal(ax.lines[0].get_ydata(), expected1)
tm.assert_numpy_array_equal(ax.lines[1].get_ydata(), expected1 + expected2)
ax = _check_plot_works(d.plot.area, stacked=False)
tm.assert_numpy_array_equal(ax.lines[0].get_ydata(), expected1)
tm.assert_numpy_array_equal(ax.lines[1].get_ydata(), expected2)
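The test above inspects `.data` and `.mask` on the plotted y-data, i.e. a numpy masked array: line plots represent NaN points as masked entries so gaps are drawn instead of interpolated. The masking mechanics in isolation:

```python
import numpy as np

values = np.array([1.0, 2.0, np.nan, 3.0])
masked = np.ma.masked_invalid(values)  # mask every NaN/inf entry
print(masked.mask.tolist())            # [False, False, True, False]
print(masked.compressed().tolist())    # [1.0, 2.0, 3.0] -- unmasked data only
```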
def test_line_lim(self):
df = DataFrame(rand(6, 3), columns=["x", "y", "z"])
ax = df.plot()
xmin, xmax = ax.get_xlim()
lines = ax.get_lines()
assert xmin <= lines[0].get_data()[0][0]
assert xmax >= lines[0].get_data()[0][-1]
ax = df.plot(secondary_y=True)
xmin, xmax = ax.get_xlim()
lines = ax.get_lines()
assert xmin <= lines[0].get_data()[0][0]
assert xmax >= lines[0].get_data()[0][-1]
axes = df.plot(secondary_y=True, subplots=True)
self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
for ax in axes:
assert hasattr(ax, "left_ax")
assert not hasattr(ax, "right_ax")
xmin, xmax = ax.get_xlim()
lines = ax.get_lines()
assert xmin <= lines[0].get_data()[0][0]
assert xmax >= lines[0].get_data()[0][-1]
def test_area_lim(self):
df = DataFrame(rand(6, 4), columns=["x", "y", "z", "four"])
neg_df = -df
for stacked in [True, False]:
ax = _check_plot_works(df.plot.area, stacked=stacked)
xmin, xmax = ax.get_xlim()
ymin, ymax = ax.get_ylim()
lines = ax.get_lines()
assert xmin <= lines[0].get_data()[0][0]
assert xmax >= lines[0].get_data()[0][-1]
assert ymin == 0
ax = _check_plot_works(neg_df.plot.area, stacked=stacked)
ymin, ymax = ax.get_ylim()
assert ymax == 0
@pytest.mark.slow
def test_bar_colors(self):
import matplotlib.pyplot as plt
default_colors = self._unpack_cycler(plt.rcParams)
df = DataFrame(randn(5, 5))
ax = df.plot.bar()
self._check_colors(ax.patches[::5], facecolors=default_colors[:5])
tm.close()
custom_colors = "rgcby"
ax = df.plot.bar(color=custom_colors)
self._check_colors(ax.patches[::5], facecolors=custom_colors)
tm.close()
from matplotlib import cm
# Test str -> colormap functionality
ax = df.plot.bar(colormap="jet")
rgba_colors = [cm.jet(n) for n in np.linspace(0, 1, 5)]
self._check_colors(ax.patches[::5], facecolors=rgba_colors)
tm.close()
# Test colormap functionality
ax = df.plot.bar(colormap=cm.jet)
rgba_colors = [cm.jet(n) for n in np.linspace(0, 1, 5)]
self._check_colors(ax.patches[::5], facecolors=rgba_colors)
tm.close()
ax = df.loc[:, [0]].plot.bar(color="DodgerBlue")
self._check_colors([ax.patches[0]], facecolors=["DodgerBlue"])
tm.close()
ax = df.plot(kind="bar", color="green")
self._check_colors(ax.patches[::5], facecolors=["green"] * 5)
tm.close()
def test_bar_user_colors(self):
df = pd.DataFrame(
{"A": range(4), "B": range(1, 5), "color": ["red", "blue", "blue", "red"]}
)
# This should *only* work when `y` is specified, else
# we use one color per column
ax = df.plot.bar(y="A", color=df["color"])
result = [p.get_facecolor() for p in ax.patches]
expected = [
(1.0, 0.0, 0.0, 1.0),
(0.0, 0.0, 1.0, 1.0),
(0.0, 0.0, 1.0, 1.0),
(1.0, 0.0, 0.0, 1.0),
]
assert result == expected
@pytest.mark.slow
def test_bar_linewidth(self):
df = DataFrame(randn(5, 5))
# regular
ax = df.plot.bar(linewidth=2)
for r in ax.patches:
assert r.get_linewidth() == 2
# stacked
ax = df.plot.bar(stacked=True, linewidth=2)
for r in ax.patches:
assert r.get_linewidth() == 2
# subplots
axes = df.plot.bar(linewidth=2, subplots=True)
self._check_axes_shape(axes, axes_num=5, layout=(5, 1))
for ax in axes:
for r in ax.patches:
assert r.get_linewidth() == 2
@pytest.mark.slow
def test_bar_barwidth(self):
df = DataFrame(randn(5, 5))
width = 0.9
# regular
ax = df.plot.bar(width=width)
for r in ax.patches:
assert r.get_width() == width / len(df.columns)
# stacked
ax = df.plot.bar(stacked=True, width=width)
for r in ax.patches:
assert r.get_width() == width
# horizontal regular
ax = df.plot.barh(width=width)
for r in ax.patches:
assert r.get_height() == width / len(df.columns)
# horizontal stacked
ax = df.plot.barh(stacked=True, width=width)
for r in ax.patches:
assert r.get_height() == width
# subplots
axes = df.plot.bar(width=width, subplots=True)
for ax in axes:
for r in ax.patches:
assert r.get_width() == width
# horizontal subplots
axes = df.plot.barh(width=width, subplots=True)
for ax in axes:
for r in ax.patches:
assert r.get_height() == width
@pytest.mark.slow
def test_bar_barwidth_position(self):
df = DataFrame(randn(5, 5))
self._check_bar_alignment(
df, kind="bar", stacked=False, width=0.9, position=0.2
)
self._check_bar_alignment(df, kind="bar", stacked=True, width=0.9, position=0.2)
self._check_bar_alignment(
df, kind="barh", stacked=False, width=0.9, position=0.2
)
self._check_bar_alignment(
df, kind="barh", stacked=True, width=0.9, position=0.2
)
self._check_bar_alignment(
df, kind="bar", subplots=True, width=0.9, position=0.2
)
self._check_bar_alignment(
df, kind="barh", subplots=True, width=0.9, position=0.2
)
@pytest.mark.slow
def test_bar_barwidth_position_int(self):
# GH 12979
df = DataFrame(randn(5, 5))
for w in [1, 1.0]:
ax = df.plot.bar(stacked=True, width=w)
ticks = ax.xaxis.get_ticklocs()
tm.assert_numpy_array_equal(ticks, np.array([0, 1, 2, 3, 4]))
assert ax.get_xlim() == (-0.75, 4.75)
# check left-edge of bars
assert ax.patches[0].get_x() == -0.5
assert ax.patches[-1].get_x() == 3.5
self._check_bar_alignment(df, kind="bar", stacked=True, width=1)
self._check_bar_alignment(df, kind="barh", stacked=False, width=1)
self._check_bar_alignment(df, kind="barh", stacked=True, width=1)
self._check_bar_alignment(df, kind="bar", subplots=True, width=1)
self._check_bar_alignment(df, kind="barh", subplots=True, width=1)
@pytest.mark.slow
def test_bar_bottom_left(self):
df = DataFrame(rand(5, 5))
ax = df.plot.bar(stacked=False, bottom=1)
result = [p.get_y() for p in ax.patches]
assert result == [1] * 25
ax = df.plot.bar(stacked=True, bottom=[-1, -2, -3, -4, -5])
result = [p.get_y() for p in ax.patches[:5]]
assert result == [-1, -2, -3, -4, -5]
ax = df.plot.barh(stacked=False, left=np.array([1, 1, 1, 1, 1]))
result = [p.get_x() for p in ax.patches]
assert result == [1] * 25
ax = df.plot.barh(stacked=True, left=[1, 2, 3, 4, 5])
result = [p.get_x() for p in ax.patches[:5]]
assert result == [1, 2, 3, 4, 5]
axes = df.plot.bar(subplots=True, bottom=-1)
for ax in axes:
result = [p.get_y() for p in ax.patches]
assert result == [-1] * 5
axes = df.plot.barh(subplots=True, left=np.array([1, 1, 1, 1, 1]))
for ax in axes:
result = [p.get_x() for p in ax.patches]
assert result == [1] * 5
@pytest.mark.slow
def test_bar_nan(self):
df = DataFrame({"A": [10, np.nan, 20], "B": [5, 10, 20], "C": [1, 2, 3]})
ax = df.plot.bar()
expected = [10, 0, 20, 5, 10, 20, 1, 2, 3]
result = [p.get_height() for p in ax.patches]
assert result == expected
ax = df.plot.bar(stacked=True)
result = [p.get_height() for p in ax.patches]
assert result == expected
result = [p.get_y() for p in ax.patches]
expected = [0.0, 0.0, 0.0, 10.0, 0.0, 20.0, 15.0, 10.0, 40.0]
assert result == expected
@pytest.mark.slow
def test_bar_categorical(self):
# GH 13019
df1 = pd.DataFrame(
np.random.randn(6, 5),
index=pd.Index(list("ABCDEF")),
columns=pd.Index(list("abcde")),
)
# categorical index must behave the same
df2 = pd.DataFrame(
np.random.randn(6, 5),
index=pd.CategoricalIndex(list("ABCDEF")),
columns=pd.CategoricalIndex(list("abcde")),
)
for df in [df1, df2]:
ax = df.plot.bar()
ticks = ax.xaxis.get_ticklocs()
tm.assert_numpy_array_equal(ticks, np.array([0, 1, 2, 3, 4, 5]))
assert ax.get_xlim() == (-0.5, 5.5)
# check left-edge of bars
assert ax.patches[0].get_x() == -0.25
assert ax.patches[-1].get_x() == 5.15
ax = df.plot.bar(stacked=True)
tm.assert_numpy_array_equal(ticks, np.array([0, 1, 2, 3, 4, 5]))
assert ax.get_xlim() == (-0.5, 5.5)
assert ax.patches[0].get_x() == -0.25
assert ax.patches[-1].get_x() == 4.75
@pytest.mark.slow
def test_plot_scatter(self):
df = DataFrame(
randn(6, 4),
index=list(string.ascii_letters[:6]),
columns=["x", "y", "z", "four"],
)
_check_plot_works(df.plot.scatter, x="x", y="y")
_check_plot_works(df.plot.scatter, x=1, y=2)
with pytest.raises(TypeError):
df.plot.scatter(x="x")
with pytest.raises(TypeError):
df.plot.scatter(y="y")
# GH 6951
axes = df.plot(x="x", y="y", kind="scatter", subplots=True)
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
@pytest.mark.slow
def test_if_scatterplot_colorbar_affects_xaxis_visibility(self):
# addressing issue #10611: ensure the colorbar does not
# interfere with the x-axis label and ticklabels when using
# the ipython inline backend.
random_array = np.random.random((1000, 3))
df = pd.DataFrame(random_array, columns=["A label", "B label", "C label"])
ax1 = df.plot.scatter(x="A label", y="B label")
ax2 = df.plot.scatter(x="A label", y="B label", c="C label")
vis1 = [vis.get_visible() for vis in ax1.xaxis.get_minorticklabels()]
vis2 = [vis.get_visible() for vis in ax2.xaxis.get_minorticklabels()]
assert vis1 == vis2
vis1 = [vis.get_visible() for vis in ax1.xaxis.get_majorticklabels()]
vis2 = [vis.get_visible() for vis in ax2.xaxis.get_majorticklabels()]
assert vis1 == vis2
assert (
ax1.xaxis.get_label().get_visible() == ax2.xaxis.get_label().get_visible()
)
@pytest.mark.slow
def test_if_hexbin_xaxis_label_is_visible(self):
# addressing issue #10678: ensure the colorbar does not
# interfere with the x-axis label and ticklabels when using
# the ipython inline backend.
random_array = np.random.random((1000, 3))
df = pd.DataFrame(random_array, columns=["A label", "B label", "C label"])
ax = df.plot.hexbin("A label", "B label", gridsize=12)
assert all(vis.get_visible() for vis in ax.xaxis.get_minorticklabels())
assert all(vis.get_visible() for vis in ax.xaxis.get_majorticklabels())
assert ax.xaxis.get_label().get_visible()
@pytest.mark.slow
def test_if_scatterplot_colorbars_are_next_to_parent_axes(self):
import matplotlib.pyplot as plt
random_array = np.random.random((1000, 3))
df = pd.DataFrame(random_array, columns=["A label", "B label", "C label"])
fig, axes = plt.subplots(1, 2)
df.plot.scatter("A label", "B label", c="C label", ax=axes[0])
df.plot.scatter("A label", "B label", c="C label", ax=axes[1])
plt.tight_layout()
points = np.array([ax.get_position().get_points() for ax in fig.axes])
axes_x_coords = points[:, :, 0]
parent_distance = axes_x_coords[1, :] - axes_x_coords[0, :]
colorbar_distance = axes_x_coords[3, :] - axes_x_coords[2, :]
assert np.isclose(parent_distance, colorbar_distance, atol=1e-7).all()
@pytest.mark.slow
def test_plot_scatter_with_categorical_data(self):
# GH 16199
df = pd.DataFrame(
{"x": [1, 2, 3, 4], "y": pd.Categorical(["a", "b", "a", "c"])}
)
with pytest.raises(ValueError) as ve:
df.plot(x="x", y="y", kind="scatter")
ve.match("requires y column to be numeric")
with pytest.raises(ValueError) as ve:
df.plot(x="y", y="x", kind="scatter")
ve.match("requires x column to be numeric")
with pytest.raises(ValueError) as ve:
df.plot(x="y", y="y", kind="scatter")
ve.match("requires x column to be numeric")
@pytest.mark.slow
def test_plot_scatter_with_c(self):
df = DataFrame(
randn(6, 4),
index=list(string.ascii_letters[:6]),
columns=["x", "y", "z", "four"],
)
axes = [df.plot.scatter(x="x", y="y", c="z"), df.plot.scatter(x=0, y=1, c=2)]
for ax in axes:
# default to Greys
assert ax.collections[0].cmap.name == "Greys"
# n.b. there appears to be no public method
# to get the colorbar label
assert ax.collections[0].colorbar._label == "z"
cm = "cubehelix"
ax = df.plot.scatter(x="x", y="y", c="z", colormap=cm)
assert ax.collections[0].cmap.name == cm
# verify turning off colorbar works
ax = df.plot.scatter(x="x", y="y", c="z", colorbar=False)
assert ax.collections[0].colorbar is None
# verify that we can still plot a solid color
ax = df.plot.scatter(x=0, y=1, c="red")
assert ax.collections[0].colorbar is None
self._check_colors(ax.collections, facecolors=["r"])
# Ensure that we can pass an np.array straight through to matplotlib,
# this functionality was accidentally removed previously.
# See https://github.com/pandas-dev/pandas/issues/8852 for bug report
#
# Exercise colormap path and non-colormap path as they are independent
#
df = DataFrame({"A": [1, 2], "B": [3, 4]})
red_rgba = [1.0, 0.0, 0.0, 1.0]
green_rgba = [0.0, 1.0, 0.0, 1.0]
rgba_array = np.array([red_rgba, green_rgba])
ax = df.plot.scatter(x="A", y="B", c=rgba_array)
# expect the face colors of the points in the non-colormap path to be
# identical to the values we supplied; normally we'd be on shaky ground
# comparing floats for equality, but here we expect them to be identical.
tm.assert_numpy_array_equal(ax.collections[0].get_facecolor(), rgba_array)
# we don't test the colors of the faces in this next plot because they
# depend on the spring colormap, which may change its colors later.
float_array = np.array([0.0, 1.0])
df.plot.scatter(x="A", y="B", c=float_array, cmap="spring")
def test_scatter_colors(self):
df = DataFrame({"a": [1, 2, 3], "b": [1, 2, 3], "c": [1, 2, 3]})
with pytest.raises(TypeError):
df.plot.scatter(x="a", y="b", c="c", color="green")
default_colors = self._unpack_cycler(self.plt.rcParams)
ax = df.plot.scatter(x="a", y="b", c="c")
tm.assert_numpy_array_equal(
ax.collections[0].get_facecolor()[0],
np.array(self.colorconverter.to_rgba(default_colors[0])),
)
ax = df.plot.scatter(x="a", y="b", color="white")
tm.assert_numpy_array_equal(
ax.collections[0].get_facecolor()[0],
np.array([1, 1, 1, 1], dtype=np.float64),
)
@pytest.mark.slow
def test_plot_bar(self):
df = DataFrame(
randn(6, 4),
index=list(string.ascii_letters[:6]),
columns=["one", "two", "three", "four"],
)
_check_plot_works(df.plot.bar)
_check_plot_works(df.plot.bar, legend=False)
# _check_plot_works adds an ax so catch warning. see GH #13188
with tm.assert_produces_warning(UserWarning):
_check_plot_works(df.plot.bar, subplots=True)
_check_plot_works(df.plot.bar, stacked=True)
df = DataFrame(
randn(10, 15), index=list(string.ascii_letters[:10]), columns=range(15)
)
_check_plot_works(df.plot.bar)
df = DataFrame({"a": [0, 1], "b": [1, 0]})
ax = _check_plot_works(df.plot.bar)
self._check_ticks_props(ax, xrot=90)
ax = df.plot.bar(rot=35, fontsize=10)
self._check_ticks_props(ax, xrot=35, xlabelsize=10, ylabelsize=10)
ax = _check_plot_works(df.plot.barh)
self._check_ticks_props(ax, yrot=0)
ax = df.plot.barh(rot=55, fontsize=11)
self._check_ticks_props(ax, yrot=55, ylabelsize=11, xlabelsize=11)
def _check_bar_alignment(
self,
df,
kind="bar",
stacked=False,
subplots=False,
align="center",
width=0.5,
position=0.5,
):
axes = df.plot(
kind=kind,
stacked=stacked,
subplots=subplots,
align=align,
width=width,
position=position,
grid=True,
)
axes = self._flatten_visible(axes)
for ax in axes:
if kind == "bar":
axis = ax.xaxis
ax_min, ax_max = ax.get_xlim()
min_edge = min(p.get_x() for p in ax.patches)
max_edge = max(p.get_x() + p.get_width() for p in ax.patches)
elif kind == "barh":
axis = ax.yaxis
ax_min, ax_max = ax.get_ylim()
min_edge = min(p.get_y() for p in ax.patches)
max_edge = max(p.get_y() + p.get_height() for p in ax.patches)
else:
raise ValueError
# GH 7498
# compare margins between lim and bar edges
tm.assert_almost_equal(ax_min, min_edge - 0.25)
tm.assert_almost_equal(ax_max, max_edge + 0.25)
p = ax.patches[0]
if kind == "bar" and (stacked is True or subplots is True):
edge = p.get_x()
center = edge + p.get_width() * position
elif kind == "bar" and stacked is False:
center = p.get_x() + p.get_width() * len(df.columns) * position
edge = p.get_x()
elif kind == "barh" and (stacked is True or subplots is True):
center = p.get_y() + p.get_height() * position
edge = p.get_y()
elif kind == "barh" and stacked is False:
center = p.get_y() + p.get_height() * len(df.columns) * position
edge = p.get_y()
else:
raise ValueError
# Check that the ticks are located on integers
assert (axis.get_ticklocs() == np.arange(len(df))).all()
if align == "center":
# Check whether the bar is centered on the tick
tm.assert_almost_equal(axis.get_ticklocs()[0], center)
elif align == "edge":
# Check whether the bar's edge starts from the tick
tm.assert_almost_equal(axis.get_ticklocs()[0], edge)
else:
raise ValueError
return axes
@pytest.mark.slow
def test_bar_stacked_center(self):
# GH2157
df = DataFrame({"A": [3] * 5, "B": list(range(5))}, index=range(5))
self._check_bar_alignment(df, kind="bar", stacked=True)
self._check_bar_alignment(df, kind="bar", stacked=True, width=0.9)
self._check_bar_alignment(df, kind="barh", stacked=True)
self._check_bar_alignment(df, kind="barh", stacked=True, width=0.9)
@pytest.mark.slow
def test_bar_center(self):
df = DataFrame({"A": [3] * 5, "B": list(range(5))}, index=range(5))
self._check_bar_alignment(df, kind="bar", stacked=False)
self._check_bar_alignment(df, kind="bar", stacked=False, width=0.9)
self._check_bar_alignment(df, kind="barh", stacked=False)
self._check_bar_alignment(df, kind="barh", stacked=False, width=0.9)
@pytest.mark.slow
def test_bar_subplots_center(self):
df = DataFrame({"A": [3] * 5, "B": list(range(5))}, index=range(5))
self._check_bar_alignment(df, kind="bar", subplots=True)
self._check_bar_alignment(df, kind="bar", subplots=True, width=0.9)
self._check_bar_alignment(df, kind="barh", subplots=True)
self._check_bar_alignment(df, kind="barh", subplots=True, width=0.9)
@pytest.mark.slow
def test_bar_align_single_column(self):
df = DataFrame(randn(5))
self._check_bar_alignment(df, kind="bar", stacked=False)
self._check_bar_alignment(df, kind="bar", stacked=True)
self._check_bar_alignment(df, kind="barh", stacked=False)
self._check_bar_alignment(df, kind="barh", stacked=True)
self._check_bar_alignment(df, kind="bar", subplots=True)
self._check_bar_alignment(df, kind="barh", subplots=True)
@pytest.mark.slow
def test_bar_edge(self):
df = DataFrame({"A": [3] * 5, "B": list(range(5))}, index=range(5))
self._check_bar_alignment(df, kind="bar", stacked=True, align="edge")
self._check_bar_alignment(df, kind="bar", stacked=True, width=0.9, align="edge")
self._check_bar_alignment(df, kind="barh", stacked=True, align="edge")
self._check_bar_alignment(
df, kind="barh", stacked=True, width=0.9, align="edge"
)
self._check_bar_alignment(df, kind="bar", stacked=False, align="edge")
self._check_bar_alignment(
df, kind="bar", stacked=False, width=0.9, align="edge"
)
self._check_bar_alignment(df, kind="barh", stacked=False, align="edge")
self._check_bar_alignment(
df, kind="barh", stacked=False, width=0.9, align="edge"
)
self._check_bar_alignment(df, kind="bar", subplots=True, align="edge")
self._check_bar_alignment(
df, kind="bar", subplots=True, width=0.9, align="edge"
)
self._check_bar_alignment(df, kind="barh", subplots=True, align="edge")
self._check_bar_alignment(
df, kind="barh", subplots=True, width=0.9, align="edge"
)
@pytest.mark.slow
def test_bar_log_no_subplots(self):
# GH3254, GH3298; matplotlib/matplotlib#1882, #1892
# regressions in 1.2.1
expected = np.array([0.1, 1.0, 10.0, 100])
# no subplots
df = DataFrame({"A": [3] * 5, "B": list(range(1, 6))}, index=range(5))
ax = df.plot.bar(grid=True, log=True)
tm.assert_numpy_array_equal(ax.yaxis.get_ticklocs(), expected)
@pytest.mark.slow
def test_bar_log_subplots(self):
expected = np.array([0.1, 1.0, 10.0, 100.0, 1000.0, 1e4])
ax = DataFrame([Series([200, 300]), Series([300, 500])]).plot.bar(
log=True, subplots=True
)
tm.assert_numpy_array_equal(ax[0].yaxis.get_ticklocs(), expected)
tm.assert_numpy_array_equal(ax[1].yaxis.get_ticklocs(), expected)
@pytest.mark.slow
def test_boxplot(self):
df = self.hist_df
series = df["height"]
numeric_cols = df._get_numeric_data().columns
labels = [pprint_thing(c) for c in numeric_cols]
ax = _check_plot_works(df.plot.box)
self._check_text_labels(ax.get_xticklabels(), labels)
tm.assert_numpy_array_equal(
ax.xaxis.get_ticklocs(), np.arange(1, len(numeric_cols) + 1)
)
assert len(ax.lines) == self.bp_n_objects * len(numeric_cols)
axes = series.plot.box(rot=40)
self._check_ticks_props(axes, xrot=40, yrot=0)
tm.close()
ax = _check_plot_works(series.plot.box)
positions = np.array([1, 6, 7])
ax = df.plot.box(positions=positions)
numeric_cols = df._get_numeric_data().columns
labels = [pprint_thing(c) for c in numeric_cols]
self._check_text_labels(ax.get_xticklabels(), labels)
tm.assert_numpy_array_equal(ax.xaxis.get_ticklocs(), positions)
assert len(ax.lines) == self.bp_n_objects * len(numeric_cols)
@pytest.mark.slow
def test_boxplot_vertical(self):
df = self.hist_df
numeric_cols = df._get_numeric_data().columns
labels = [pprint_thing(c) for c in numeric_cols]
# if horizontal, yticklabels are rotated
ax = df.plot.box(rot=50, fontsize=8, vert=False)
self._check_ticks_props(ax, xrot=0, yrot=50, ylabelsize=8)
self._check_text_labels(ax.get_yticklabels(), labels)
assert len(ax.lines) == self.bp_n_objects * len(numeric_cols)
# _check_plot_works adds an ax so catch warning. see GH #13188
with tm.assert_produces_warning(UserWarning):
axes = _check_plot_works(df.plot.box, subplots=True, vert=False, logx=True)
self._check_axes_shape(axes, axes_num=3, layout=(1, 3))
self._check_ax_scales(axes, xaxis="log")
for ax, label in zip(axes, labels):
self._check_text_labels(ax.get_yticklabels(), [label])
assert len(ax.lines) == self.bp_n_objects
positions = np.array([3, 2, 8])
ax = df.plot.box(positions=positions, vert=False)
self._check_text_labels(ax.get_yticklabels(), labels)
tm.assert_numpy_array_equal(ax.yaxis.get_ticklocs(), positions)
assert len(ax.lines) == self.bp_n_objects * len(numeric_cols)
@pytest.mark.slow
def test_boxplot_return_type(self):
df = DataFrame(
randn(6, 4),
index=list(string.ascii_letters[:6]),
columns=["one", "two", "three", "four"],
)
with pytest.raises(ValueError):
df.plot.box(return_type="NOTATYPE")
result = df.plot.box(return_type="dict")
self._check_box_return_type(result, "dict")
result = df.plot.box(return_type="axes")
self._check_box_return_type(result, "axes")
result = df.plot.box() # default axes
self._check_box_return_type(result, "axes")
result = df.plot.box(return_type="both")
self._check_box_return_type(result, "both")
@pytest.mark.slow
def test_boxplot_subplots_return_type(self):
df = self.hist_df
# normal style: return_type=None
result = df.plot.box(subplots=True)
assert isinstance(result, Series)
self._check_box_return_type(
result, None, expected_keys=["height", "weight", "category"]
)
for t in ["dict", "axes", "both"]:
returned = df.plot.box(return_type=t, subplots=True)
self._check_box_return_type(
returned,
t,
expected_keys=["height", "weight", "category"],
check_ax_title=False,
)
@pytest.mark.slow
@td.skip_if_no_scipy
def test_kde_df(self):
df = DataFrame(randn(100, 4))
ax = _check_plot_works(df.plot, kind="kde")
expected = [pprint_thing(c) for c in df.columns]
self._check_legend_labels(ax, labels=expected)
self._check_ticks_props(ax, xrot=0)
ax = df.plot(kind="kde", rot=20, fontsize=5)
self._check_ticks_props(ax, xrot=20, xlabelsize=5, ylabelsize=5)
with tm.assert_produces_warning(UserWarning):
axes = _check_plot_works(df.plot, kind="kde", subplots=True)
self._check_axes_shape(axes, axes_num=4, layout=(4, 1))
axes = df.plot(kind="kde", logy=True, subplots=True)
self._check_ax_scales(axes, yaxis="log")
@pytest.mark.slow
@td.skip_if_no_scipy
def test_kde_missing_vals(self):
df = DataFrame(np.random.uniform(size=(100, 4)))
df.loc[0, 0] = np.nan
_check_plot_works(df.plot, kind="kde")
@pytest.mark.slow
def test_hist_df(self):
from matplotlib.patches import Rectangle
df = DataFrame(randn(100, 4))
series = df[0]
ax = _check_plot_works(df.plot.hist)
expected = [pprint_thing(c) for c in df.columns]
self._check_legend_labels(ax, labels=expected)
with tm.assert_produces_warning(UserWarning):
axes = _check_plot_works(df.plot.hist, subplots=True, logy=True)
self._check_axes_shape(axes, axes_num=4, layout=(4, 1))
self._check_ax_scales(axes, yaxis="log")
axes = series.plot.hist(rot=40)
self._check_ticks_props(axes, xrot=40, yrot=0)
tm.close()
ax = series.plot.hist(cumulative=True, bins=4, density=True)
# height of the last bin must be 1.0
rects = [x for x in ax.get_children() if isinstance(x, Rectangle)]
tm.assert_almost_equal(rects[-1].get_height(), 1.0)
tm.close()
ax = series.plot.hist(cumulative=True, bins=4)
rects = [x for x in ax.get_children() if isinstance(x, Rectangle)]
tm.assert_almost_equal(rects[-2].get_height(), 100.0)
tm.close()
# if horizontal, yticklabels are rotated
axes = df.plot.hist(rot=50, fontsize=8, orientation="horizontal")
self._check_ticks_props(axes, xrot=0, yrot=50, ylabelsize=8)
def _check_box_coord(
self,
patches,
expected_y=None,
expected_h=None,
expected_x=None,
expected_w=None,
):
result_y = np.array([p.get_y() for p in patches])
result_height = np.array([p.get_height() for p in patches])
result_x = np.array([p.get_x() for p in patches])
result_width = np.array([p.get_width() for p in patches])
# dtype depends on the values above, so no need to check it
if expected_y is not None:
tm.assert_numpy_array_equal(result_y, expected_y, check_dtype=False)
if expected_h is not None:
tm.assert_numpy_array_equal(result_height, expected_h, check_dtype=False)
if expected_x is not None:
tm.assert_numpy_array_equal(result_x, expected_x, check_dtype=False)
if expected_w is not None:
tm.assert_numpy_array_equal(result_width, expected_w, check_dtype=False)
@pytest.mark.slow
def test_hist_df_coord(self):
normal_df = DataFrame(
{
"A": np.repeat(np.array([1, 2, 3, 4, 5]), np.array([10, 9, 8, 7, 6])),
"B": np.repeat(np.array([1, 2, 3, 4, 5]), np.array([8, 8, 8, 8, 8])),
"C": np.repeat(np.array([1, 2, 3, 4, 5]), np.array([6, 7, 8, 9, 10])),
},
columns=["A", "B", "C"],
)
nan_df = DataFrame(
{
"A": np.repeat(
np.array([np.nan, 1, 2, 3, 4, 5]), np.array([3, 10, 9, 8, 7, 6])
),
"B": np.repeat(
np.array([1, np.nan, 2, 3, 4, 5]), np.array([8, 3, 8, 8, 8, 8])
),
"C": np.repeat(
np.array([1, 2, 3, np.nan, 4, 5]), np.array([6, 7, 8, 3, 9, 10])
),
},
columns=["A", "B", "C"],
)
for df in [normal_df, nan_df]:
ax = df.plot.hist(bins=5)
self._check_box_coord(
ax.patches[:5],
expected_y=np.array([0, 0, 0, 0, 0]),
expected_h=np.array([10, 9, 8, 7, 6]),
)
self._check_box_coord(
ax.patches[5:10],
expected_y=np.array([0, 0, 0, 0, 0]),
expected_h=np.array([8, 8, 8, 8, 8]),
)
self._check_box_coord(
ax.patches[10:],
expected_y=np.array([0, 0, 0, 0, 0]),
expected_h=np.array([6, 7, 8, 9, 10]),
)
ax = df.plot.hist(bins=5, stacked=True)
self._check_box_coord(
ax.patches[:5],
expected_y=np.array([0, 0, 0, 0, 0]),
expected_h=np.array([10, 9, 8, 7, 6]),
)
self._check_box_coord(
ax.patches[5:10],
expected_y=np.array([10, 9, 8, 7, 6]),
expected_h=np.array([8, 8, 8, 8, 8]),
)
self._check_box_coord(
ax.patches[10:],
expected_y=np.array([18, 17, 16, 15, 14]),
expected_h=np.array([6, 7, 8, 9, 10]),
)
axes = df.plot.hist(bins=5, stacked=True, subplots=True)
self._check_box_coord(
axes[0].patches,
expected_y=np.array([0, 0, 0, 0, 0]),
expected_h=np.array([10, 9, 8, 7, 6]),
)
self._check_box_coord(
axes[1].patches,
expected_y=np.array([0, 0, 0, 0, 0]),
expected_h=np.array([8, 8, 8, 8, 8]),
)
self._check_box_coord(
axes[2].patches,
expected_y=np.array([0, 0, 0, 0, 0]),
expected_h=np.array([6, 7, 8, 9, 10]),
)
# horizontal
ax = df.plot.hist(bins=5, orientation="horizontal")
self._check_box_coord(
ax.patches[:5],
expected_x=np.array([0, 0, 0, 0, 0]),
expected_w=np.array([10, 9, 8, 7, 6]),
)
self._check_box_coord(
ax.patches[5:10],
expected_x=np.array([0, 0, 0, 0, 0]),
expected_w=np.array([8, 8, 8, 8, 8]),
)
self._check_box_coord(
ax.patches[10:],
expected_x=np.array([0, 0, 0, 0, 0]),
expected_w=np.array([6, 7, 8, 9, 10]),
)
ax = df.plot.hist(bins=5, stacked=True, orientation="horizontal")
self._check_box_coord(
ax.patches[:5],
expected_x=np.array([0, 0, 0, 0, 0]),
expected_w=np.array([10, 9, 8, 7, 6]),
)
self._check_box_coord(
ax.patches[5:10],
expected_x=np.array([10, 9, 8, 7, 6]),
expected_w=np.array([8, 8, 8, 8, 8]),
)
self._check_box_coord(
ax.patches[10:],
expected_x=np.array([18, 17, 16, 15, 14]),
expected_w=np.array([6, 7, 8, 9, 10]),
)
axes = df.plot.hist(
bins=5, stacked=True, subplots=True, orientation="horizontal"
)
self._check_box_coord(
axes[0].patches,
expected_x=np.array([0, 0, 0, 0, 0]),
expected_w=np.array([10, 9, 8, 7, 6]),
)
self._check_box_coord(
axes[1].patches,
expected_x=np.array([0, 0, 0, 0, 0]),
expected_w=np.array([8, 8, 8, 8, 8]),
)
self._check_box_coord(
axes[2].patches,
expected_x=np.array([0, 0, 0, 0, 0]),
expected_w=np.array([6, 7, 8, 9, 10]),
)
@pytest.mark.slow
def test_plot_int_columns(self):
df = DataFrame(randn(100, 4)).cumsum()
_check_plot_works(df.plot, legend=True)
@pytest.mark.slow
def test_df_legend_labels(self):
kinds = ["line", "bar", "barh", "kde", "area", "hist"]
df = DataFrame(rand(3, 3), columns=["a", "b", "c"])
df2 = DataFrame(rand(3, 3), columns=["d", "e", "f"])
df3 = DataFrame(rand(3, 3), columns=["g", "h", "i"])
df4 = DataFrame(rand(3, 3), columns=["j", "k", "l"])
for kind in kinds:
ax = df.plot(kind=kind, legend=True)
self._check_legend_labels(ax, labels=df.columns)
ax = df2.plot(kind=kind, legend=False, ax=ax)
self._check_legend_labels(ax, labels=df.columns)
ax = df3.plot(kind=kind, legend=True, ax=ax)
self._check_legend_labels(ax, labels=df.columns.union(df3.columns))
ax = df4.plot(kind=kind, legend="reverse", ax=ax)
expected = list(df.columns.union(df3.columns)) + list(reversed(df4.columns))
self._check_legend_labels(ax, labels=expected)
# Secondary Y
ax = df.plot(legend=True, secondary_y="b")
self._check_legend_labels(ax, labels=["a", "b (right)", "c"])
ax = df2.plot(legend=False, ax=ax)
self._check_legend_labels(ax, labels=["a", "b (right)", "c"])
ax = df3.plot(kind="bar", legend=True, secondary_y="h", ax=ax)
self._check_legend_labels(
ax, labels=["a", "b (right)", "c", "g", "h (right)", "i"]
)
# Time Series
ind = date_range("1/1/2014", periods=3)
df = DataFrame(randn(3, 3), columns=["a", "b", "c"], index=ind)
df2 = DataFrame(randn(3, 3), columns=["d", "e", "f"], index=ind)
df3 = DataFrame(randn(3, 3), columns=["g", "h", "i"], index=ind)
ax = df.plot(legend=True, secondary_y="b")
self._check_legend_labels(ax, labels=["a", "b (right)", "c"])
ax = df2.plot(legend=False, ax=ax)
self._check_legend_labels(ax, labels=["a", "b (right)", "c"])
ax = df3.plot(legend=True, ax=ax)
self._check_legend_labels(ax, labels=["a", "b (right)", "c", "g", "h", "i"])
# scatter
ax = df.plot.scatter(x="a", y="b", label="data1")
self._check_legend_labels(ax, labels=["data1"])
ax = df2.plot.scatter(x="d", y="e", legend=False, label="data2", ax=ax)
self._check_legend_labels(ax, labels=["data1"])
ax = df3.plot.scatter(x="g", y="h", label="data3", ax=ax)
self._check_legend_labels(ax, labels=["data1", "data3"])
# ensure label args pass through and that
# neither the index name nor the column names mutate
df5 = df.set_index("a")
ax = df5.plot(y="b")
self._check_legend_labels(ax, labels=["b"])
ax = df5.plot(y="b", label="LABEL_b")
self._check_legend_labels(ax, labels=["LABEL_b"])
self._check_text_labels(ax.xaxis.get_label(), "a")
ax = df5.plot(y="c", label="LABEL_c", ax=ax)
self._check_legend_labels(ax, labels=["LABEL_b", "LABEL_c"])
assert df5.columns.tolist() == ["b", "c"]
def test_missing_marker_multi_plots_on_same_ax(self):
# GH 18222
df = pd.DataFrame(
data=[[1, 1, 1, 1], [2, 2, 4, 8]], columns=["x", "r", "g", "b"]
)
fig, ax = self.plt.subplots(nrows=1, ncols=3)
# Left plot
df.plot(x="x", y="r", linewidth=0, marker="o", color="r", ax=ax[0])
df.plot(x="x", y="g", linewidth=1, marker="x", color="g", ax=ax[0])
df.plot(x="x", y="b", linewidth=1, marker="o", color="b", ax=ax[0])
self._check_legend_labels(ax[0], labels=["r", "g", "b"])
self._check_legend_marker(ax[0], expected_markers=["o", "x", "o"])
# Center plot
df.plot(x="x", y="b", linewidth=1, marker="o", color="b", ax=ax[1])
df.plot(x="x", y="r", linewidth=0, marker="o", color="r", ax=ax[1])
df.plot(x="x", y="g", linewidth=1, marker="x", color="g", ax=ax[1])
self._check_legend_labels(ax[1], labels=["b", "r", "g"])
self._check_legend_marker(ax[1], expected_markers=["o", "o", "x"])
# Right plot
df.plot(x="x", y="g", linewidth=1, marker="x", color="g", ax=ax[2])
df.plot(x="x", y="b", linewidth=1, marker="o", color="b", ax=ax[2])
df.plot(x="x", y="r", linewidth=0, marker="o", color="r", ax=ax[2])
self._check_legend_labels(ax[2], labels=["g", "b", "r"])
self._check_legend_marker(ax[2], expected_markers=["x", "o", "o"])
def test_legend_name(self):
multi = DataFrame(
randn(4, 4),
columns=[np.array(["a", "a", "b", "b"]), np.array(["x", "y", "x", "y"])],
)
multi.columns.names = ["group", "individual"]
ax = multi.plot()
leg_title = ax.legend_.get_title()
self._check_text_labels(leg_title, "group,individual")
df = DataFrame(randn(5, 5))
ax = df.plot(legend=True, ax=ax)
leg_title = ax.legend_.get_title()
self._check_text_labels(leg_title, "group,individual")
df.columns.name = "new"
ax = df.plot(legend=False, ax=ax)
leg_title = ax.legend_.get_title()
self._check_text_labels(leg_title, "group,individual")
ax = df.plot(legend=True, ax=ax)
leg_title = ax.legend_.get_title()
self._check_text_labels(leg_title, "new")
@pytest.mark.slow
def test_no_legend(self):
kinds = ["line", "bar", "barh", "kde", "area", "hist"]
df = DataFrame(rand(3, 3), columns=["a", "b", "c"])
for kind in kinds:
ax = df.plot(kind=kind, legend=False)
self._check_legend_labels(ax, visible=False)
@pytest.mark.slow
def test_style_by_column(self):
import matplotlib.pyplot as plt
fig = plt.gcf()
df = DataFrame(randn(100, 3))
for markers in [
{0: "^", 1: "+", 2: "o"},
{0: "^", 1: "+"},
["^", "+", "o"],
["^", "+"],
]:
fig.clf()
fig.add_subplot(111)
ax = df.plot(style=markers)
for i, l in enumerate(ax.get_lines()[: len(markers)]):
assert l.get_marker() == markers[i]
@pytest.mark.slow
def test_line_label_none(self):
s = Series([1, 2])
ax = s.plot()
assert ax.get_legend() is None
ax = s.plot(legend=True)
assert ax.get_legend().get_texts()[0].get_text() == "None"
@pytest.mark.slow
def test_line_colors(self):
from matplotlib import cm
custom_colors = "rgcby"
df = DataFrame(randn(5, 5))
ax = df.plot(color=custom_colors)
self._check_colors(ax.get_lines(), linecolors=custom_colors)
tm.close()
ax2 = df.plot(color=custom_colors)
lines2 = ax2.get_lines()
for l1, l2 in zip(ax.get_lines(), lines2):
assert l1.get_color() == l2.get_color()
tm.close()
ax = df.plot(colormap="jet")
rgba_colors = [cm.jet(n) for n in np.linspace(0, 1, len(df))]
self._check_colors(ax.get_lines(), linecolors=rgba_colors)
tm.close()
ax = df.plot(colormap=cm.jet)
rgba_colors = [cm.jet(n) for n in np.linspace(0, 1, len(df))]
self._check_colors(ax.get_lines(), linecolors=rgba_colors)
tm.close()
# make color a list when plotting a one-column frame
# handles cases like df.plot(color='DodgerBlue')
ax = df.loc[:, [0]].plot(color="DodgerBlue")
self._check_colors(ax.lines, linecolors=["DodgerBlue"])
ax = df.plot(color="red")
self._check_colors(ax.get_lines(), linecolors=["red"] * 5)
tm.close()
# GH 10299
custom_colors = ["#FF0000", "#0000FF", "#FFFF00", "#000000", "#FFFFFF"]
ax = df.plot(color=custom_colors)
self._check_colors(ax.get_lines(), linecolors=custom_colors)
tm.close()
with pytest.raises(ValueError):
# A color list containing shorthand hex values raises ValueError
custom_colors = ["#F00", "#00F", "#FF0", "#000", "#FFF"]
# Forced show plot
_check_plot_works(df.plot, color=custom_colors)
@pytest.mark.slow
def test_dont_modify_colors(self):
colors = ["r", "g", "b"]
pd.DataFrame(np.random.rand(10, 2)).plot(color=colors)
assert len(colors) == 3
@pytest.mark.slow
def test_line_colors_and_styles_subplots(self):
# GH 9894
from matplotlib import cm
default_colors = self._unpack_cycler(self.plt.rcParams)
df = DataFrame(randn(5, 5))
axes = df.plot(subplots=True)
for ax, c in zip(axes, list(default_colors)):
c = [c]
self._check_colors(ax.get_lines(), linecolors=c)
tm.close()
# single color char
axes = df.plot(subplots=True, color="k")
for ax in axes:
self._check_colors(ax.get_lines(), linecolors=["k"])
tm.close()
# single color str
axes = df.plot(subplots=True, color="green")
for ax in axes:
self._check_colors(ax.get_lines(), linecolors=["green"])
tm.close()
custom_colors = "rgcby"
axes = df.plot(color=custom_colors, subplots=True)
for ax, c in zip(axes, list(custom_colors)):
self._check_colors(ax.get_lines(), linecolors=[c])
tm.close()
axes = df.plot(color=list(custom_colors), subplots=True)
for ax, c in zip(axes, list(custom_colors)):
self._check_colors(ax.get_lines(), linecolors=[c])
tm.close()
# GH 10299
custom_colors = ["#FF0000", "#0000FF", "#FFFF00", "#000000", "#FFFFFF"]
axes = df.plot(color=custom_colors, subplots=True)
for ax, c in zip(axes, list(custom_colors)):
self._check_colors(ax.get_lines(), linecolors=[c])
tm.close()
with pytest.raises(ValueError):
# A color list containing shorthand hex values raises ValueError
custom_colors = ["#F00", "#00F", "#FF0", "#000", "#FFF"]
# Forced show plot
# _check_plot_works adds an ax so catch warning. see GH #13188
with tm.assert_produces_warning(UserWarning):
_check_plot_works(df.plot, color=custom_colors, subplots=True)
rgba_colors = [cm.jet(n) for n in np.linspace(0, 1, len(df))]
for cmap in ["jet", cm.jet]:
axes = df.plot(colormap=cmap, subplots=True)
for ax, c in zip(axes, rgba_colors):
self._check_colors(ax.get_lines(), linecolors=[c])
tm.close()
# make color a list if plotting one column frame
# handles cases like df.plot(color='DodgerBlue')
axes = df.loc[:, [0]].plot(color="DodgerBlue", subplots=True)
self._check_colors(axes[0].lines, linecolors=["DodgerBlue"])
# single character style
axes = df.plot(style="r", subplots=True)
for ax in axes:
self._check_colors(ax.get_lines(), linecolors=["r"])
tm.close()
# list of styles
styles = list("rgcby")
axes = df.plot(style=styles, subplots=True)
for ax, c in zip(axes, styles):
self._check_colors(ax.get_lines(), linecolors=[c])
tm.close()
@pytest.mark.slow
def test_area_colors(self):
from matplotlib import cm
from matplotlib.collections import PolyCollection
custom_colors = "rgcby"
df = DataFrame(rand(5, 5))
ax = df.plot.area(color=custom_colors)
self._check_colors(ax.get_lines(), linecolors=custom_colors)
poly = [o for o in ax.get_children() if isinstance(o, PolyCollection)]
self._check_colors(poly, facecolors=custom_colors)
handles, labels = ax.get_legend_handles_labels()
self._check_colors(handles, facecolors=custom_colors)
for h in handles:
assert h.get_alpha() is None
tm.close()
ax = df.plot.area(colormap="jet")
jet_colors = [cm.jet(n) for n in np.linspace(0, 1, len(df))]
self._check_colors(ax.get_lines(), linecolors=jet_colors)
poly = [o for o in ax.get_children() if isinstance(o, PolyCollection)]
self._check_colors(poly, facecolors=jet_colors)
handles, labels = ax.get_legend_handles_labels()
self._check_colors(handles, facecolors=jet_colors)
for h in handles:
assert h.get_alpha() is None
tm.close()
# When stacked=False, alpha is set to 0.5
ax = df.plot.area(colormap=cm.jet, stacked=False)
self._check_colors(ax.get_lines(), linecolors=jet_colors)
poly = [o for o in ax.get_children() if isinstance(o, PolyCollection)]
jet_with_alpha = [(c[0], c[1], c[2], 0.5) for c in jet_colors]
self._check_colors(poly, facecolors=jet_with_alpha)
handles, labels = ax.get_legend_handles_labels()
linecolors = jet_with_alpha
self._check_colors(handles[: len(jet_colors)], linecolors=linecolors)
for h in handles:
assert h.get_alpha() == 0.5
@pytest.mark.slow
def test_hist_colors(self):
default_colors = self._unpack_cycler(self.plt.rcParams)
df = DataFrame(randn(5, 5))
ax = df.plot.hist()
self._check_colors(ax.patches[::10], facecolors=default_colors[:5])
tm.close()
custom_colors = "rgcby"
ax = df.plot.hist(color=custom_colors)
self._check_colors(ax.patches[::10], facecolors=custom_colors)
tm.close()
from matplotlib import cm
# Test str -> colormap functionality
ax = df.plot.hist(colormap="jet")
rgba_colors = [cm.jet(n) for n in np.linspace(0, 1, 5)]
self._check_colors(ax.patches[::10], facecolors=rgba_colors)
tm.close()
# Test colormap functionality
ax = df.plot.hist(colormap=cm.jet)
rgba_colors = [cm.jet(n) for n in np.linspace(0, 1, 5)]
self._check_colors(ax.patches[::10], facecolors=rgba_colors)
tm.close()
ax = df.loc[:, [0]].plot.hist(color="DodgerBlue")
self._check_colors([ax.patches[0]], facecolors=["DodgerBlue"])
ax = df.plot(kind="hist", color="green")
self._check_colors(ax.patches[::10], facecolors=["green"] * 5)
tm.close()
@pytest.mark.slow
@td.skip_if_no_scipy
def test_kde_colors(self):
from matplotlib import cm
custom_colors = "rgcby"
df = DataFrame(rand(5, 5))
ax = df.plot.kde(color=custom_colors)
self._check_colors(ax.get_lines(), linecolors=custom_colors)
tm.close()
ax = df.plot.kde(colormap="jet")
rgba_colors = [cm.jet(n) for n in np.linspace(0, 1, len(df))]
self._check_colors(ax.get_lines(), linecolors=rgba_colors)
tm.close()
ax = df.plot.kde(colormap=cm.jet)
rgba_colors = [cm.jet(n) for n in np.linspace(0, 1, len(df))]
self._check_colors(ax.get_lines(), linecolors=rgba_colors)
@pytest.mark.slow
@td.skip_if_no_scipy
def test_kde_colors_and_styles_subplots(self):
from matplotlib import cm
default_colors = self._unpack_cycler(self.plt.rcParams)
df = DataFrame(randn(5, 5))
axes = df.plot(kind="kde", subplots=True)
for ax, c in zip(axes, list(default_colors)):
self._check_colors(ax.get_lines(), linecolors=[c])
tm.close()
# single color char
axes = df.plot(kind="kde", color="k", subplots=True)
for ax in axes:
self._check_colors(ax.get_lines(), linecolors=["k"])
tm.close()
# single color str
axes = df.plot(kind="kde", color="red", subplots=True)
for ax in axes:
self._check_colors(ax.get_lines(), linecolors=["red"])
tm.close()
custom_colors = "rgcby"
axes = df.plot(kind="kde", color=custom_colors, subplots=True)
for ax, c in zip(axes, list(custom_colors)):
self._check_colors(ax.get_lines(), linecolors=[c])
tm.close()
rgba_colors = [cm.jet(n) for n in np.linspace(0, 1, len(df))]
for cmap in ["jet", cm.jet]:
axes = df.plot(kind="kde", colormap=cmap, subplots=True)
for ax, c in zip(axes, rgba_colors):
self._check_colors(ax.get_lines(), linecolors=[c])
tm.close()
# make color a list if plotting one column frame
# handles cases like df.plot(color='DodgerBlue')
axes = df.loc[:, [0]].plot(kind="kde", color="DodgerBlue", subplots=True)
self._check_colors(axes[0].lines, linecolors=["DodgerBlue"])
# single character style
axes = df.plot(kind="kde", style="r", subplots=True)
for ax in axes:
self._check_colors(ax.get_lines(), linecolors=["r"])
tm.close()
# list of styles
styles = list("rgcby")
axes = df.plot(kind="kde", style=styles, subplots=True)
for ax, c in zip(axes, styles):
self._check_colors(ax.get_lines(), linecolors=[c])
tm.close()
@pytest.mark.slow
def test_boxplot_colors(self):
def _check_colors(bp, box_c, whiskers_c, medians_c, caps_c="k", fliers_c=None):
# TODO: move this helper outside the test func?
if fliers_c is None:
fliers_c = "k"
self._check_colors(bp["boxes"], linecolors=[box_c] * len(bp["boxes"]))
self._check_colors(
bp["whiskers"], linecolors=[whiskers_c] * len(bp["whiskers"])
)
self._check_colors(
bp["medians"], linecolors=[medians_c] * len(bp["medians"])
)
self._check_colors(bp["fliers"], linecolors=[fliers_c] * len(bp["fliers"]))
self._check_colors(bp["caps"], linecolors=[caps_c] * len(bp["caps"]))
default_colors = self._unpack_cycler(self.plt.rcParams)
df = DataFrame(randn(5, 5))
bp = df.plot.box(return_type="dict")
_check_colors(bp, default_colors[0], default_colors[0], default_colors[2])
tm.close()
dict_colors = dict(
boxes="#572923", whiskers="#982042", medians="#804823", caps="#123456"
)
bp = df.plot.box(color=dict_colors, sym="r+", return_type="dict")
_check_colors(
bp,
dict_colors["boxes"],
dict_colors["whiskers"],
dict_colors["medians"],
dict_colors["caps"],
"r",
)
tm.close()
# partial colors
dict_colors = dict(whiskers="c", medians="m")
bp = df.plot.box(color=dict_colors, return_type="dict")
_check_colors(bp, default_colors[0], "c", "m")
tm.close()
from matplotlib import cm
# Test str -> colormap functionality
bp = df.plot.box(colormap="jet", return_type="dict")
jet_colors = [cm.jet(n) for n in np.linspace(0, 1, 3)]
_check_colors(bp, jet_colors[0], jet_colors[0], jet_colors[2])
tm.close()
# Test colormap functionality
bp = df.plot.box(colormap=cm.jet, return_type="dict")
_check_colors(bp, jet_colors[0], jet_colors[0], jet_colors[2])
tm.close()
# string color is applied to all artists except fliers
bp = df.plot.box(color="DodgerBlue", return_type="dict")
_check_colors(bp, "DodgerBlue", "DodgerBlue", "DodgerBlue", "DodgerBlue")
# tuple is also applied to all artists except fliers
bp = df.plot.box(color=(0, 1, 0), sym="#123456", return_type="dict")
_check_colors(bp, (0, 1, 0), (0, 1, 0), (0, 1, 0), (0, 1, 0), "#123456")
with pytest.raises(ValueError):
# A color dict containing an invalid key raises ValueError
df.plot.box(color=dict(boxes="red", xxxx="blue"))
def test_default_color_cycle(self):
import matplotlib.pyplot as plt
import cycler
colors = list("rgbk")
plt.rcParams["axes.prop_cycle"] = cycler.cycler("color", colors)
df = DataFrame(randn(5, 3))
ax = df.plot()
expected = self._unpack_cycler(plt.rcParams)[:3]
self._check_colors(ax.get_lines(), linecolors=expected)
def test_unordered_ts(self):
df = DataFrame(
np.array([3.0, 2.0, 1.0]),
index=[date(2012, 10, 1), date(2012, 9, 1), date(2012, 8, 1)],
columns=["test"],
)
ax = df.plot()
xticks = ax.lines[0].get_xdata()
assert xticks[0] < xticks[1]
ydata = ax.lines[0].get_ydata()
tm.assert_numpy_array_equal(ydata, np.array([1.0, 2.0, 3.0]))
@td.skip_if_no_scipy
def test_kind_both_ways(self):
df = DataFrame({"x": [1, 2, 3]})
for kind in plotting.PlotAccessor._common_kinds:
df.plot(kind=kind)
getattr(df.plot, kind)()
for kind in ["scatter", "hexbin"]:
df.plot("x", "x", kind=kind)
getattr(df.plot, kind)("x", "x")
def test_all_invalid_plot_data(self):
df = DataFrame(list("abcd"))
for kind in plotting.PlotAccessor._common_kinds:
msg = "no numeric data to plot"
with pytest.raises(TypeError, match=msg):
df.plot(kind=kind)
@pytest.mark.slow
def test_partially_invalid_plot_data(self):
with tm.RNGContext(42):
df = DataFrame(randn(10, 2), dtype=object)
df[np.random.rand(df.shape[0]) > 0.5] = "a"
for kind in plotting.PlotAccessor._common_kinds:
msg = "no numeric data to plot"
with pytest.raises(TypeError, match=msg):
df.plot(kind=kind)
with tm.RNGContext(42):
# area plot doesn't support positive/negative mixed data
kinds = ["area"]
df = DataFrame(rand(10, 2), dtype=object)
df[np.random.rand(df.shape[0]) > 0.5] = "a"
for kind in kinds:
with pytest.raises(TypeError):
df.plot(kind=kind)
def test_invalid_kind(self):
df = DataFrame(randn(10, 2))
with pytest.raises(ValueError):
df.plot(kind="aasdf")
@pytest.mark.parametrize(
"x,y,lbl",
[
(["B", "C"], "A", "a"),
(["A"], ["B", "C"], ["b", "c"]),
("A", ["B", "C"], "badlabel"),
],
)
def test_invalid_xy_args(self, x, y, lbl):
# GH 18671, 19699 allows y to be list-like but not x
df = DataFrame({"A": [1, 2], "B": [3, 4], "C": [5, 6]})
with pytest.raises(ValueError):
df.plot(x=x, y=y, label=lbl)
@pytest.mark.parametrize("x,y", [("A", "B"), (["A"], "B")])
def test_invalid_xy_args_dup_cols(self, x, y):
# GH 18671, 19699 allows y to be list-like but not x
df = DataFrame([[1, 3, 5], [2, 4, 6]], columns=list("AAB"))
with pytest.raises(ValueError):
df.plot(x=x, y=y)
@pytest.mark.parametrize(
"x,y,lbl,colors",
[
("A", ["B"], ["b"], ["red"]),
("A", ["B", "C"], ["b", "c"], ["red", "blue"]),
(0, [1, 2], ["bokeh", "cython"], ["green", "yellow"]),
],
)
def test_y_listlike(self, x, y, lbl, colors):
# GH 19699: tests list-like y and verifies lbls & colors
df = DataFrame({"A": [1, 2], "B": [3, 4], "C": [5, 6]})
_check_plot_works(df.plot, x="A", y=y, label=lbl)
ax = df.plot(x=x, y=y, label=lbl, color=colors)
assert len(ax.lines) == len(y)
self._check_colors(ax.get_lines(), linecolors=colors)
@pytest.mark.parametrize("x,y,colnames", [(0, 1, ["A", "B"]), (1, 0, [0, 1])])
def test_xy_args_integer(self, x, y, colnames):
# GH 20056: tests integer args for xy and checks col names
df = DataFrame({"A": [1, 2], "B": [3, 4]})
df.columns = colnames
_check_plot_works(df.plot, x=x, y=y)
@pytest.mark.slow
def test_hexbin_basic(self):
df = self.hexbin_df
ax = df.plot.hexbin(x="A", y="B", gridsize=10)
# TODO: need a better way to test; this only checks existence
assert len(ax.collections) == 1
# GH 6951
axes = df.plot.hexbin(x="A", y="B", subplots=True)
# hexbin should have 2 axes in the figure: one for plotting and the
# other for the colorbar
assert len(axes[0].figure.axes) == 2
# return value is single axes
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
@pytest.mark.slow
def test_hexbin_with_c(self):
df = self.hexbin_df
ax = df.plot.hexbin(x="A", y="B", C="C")
assert len(ax.collections) == 1
ax = df.plot.hexbin(x="A", y="B", C="C", reduce_C_function=np.std)
assert len(ax.collections) == 1
@pytest.mark.slow
def test_hexbin_cmap(self):
df = self.hexbin_df
# Default to BuGn
ax = df.plot.hexbin(x="A", y="B")
assert ax.collections[0].cmap.name == "BuGn"
cm = "cubehelix"
ax = df.plot.hexbin(x="A", y="B", colormap=cm)
assert ax.collections[0].cmap.name == cm
@pytest.mark.slow
def test_no_color_bar(self):
df = self.hexbin_df
ax = df.plot.hexbin(x="A", y="B", colorbar=None)
assert ax.collections[0].colorbar is None
@pytest.mark.slow
def test_allow_cmap(self):
df = self.hexbin_df
ax = df.plot.hexbin(x="A", y="B", cmap="YlGn")
assert ax.collections[0].cmap.name == "YlGn"
with pytest.raises(TypeError):
df.plot.hexbin(x="A", y="B", cmap="YlGn", colormap="BuGn")
@pytest.mark.slow
def test_pie_df(self):
df = DataFrame(
np.random.rand(5, 3),
columns=["X", "Y", "Z"],
index=["a", "b", "c", "d", "e"],
)
with pytest.raises(ValueError):
df.plot.pie()
ax = _check_plot_works(df.plot.pie, y="Y")
self._check_text_labels(ax.texts, df.index)
ax = _check_plot_works(df.plot.pie, y=2)
self._check_text_labels(ax.texts, df.index)
# _check_plot_works adds an ax so catch warning. see GH #13188
with tm.assert_produces_warning(UserWarning):
axes = _check_plot_works(df.plot.pie, subplots=True)
assert len(axes) == len(df.columns)
for ax in axes:
self._check_text_labels(ax.texts, df.index)
for ax, ylabel in zip(axes, df.columns):
assert ax.get_ylabel() == ylabel
labels = ["A", "B", "C", "D", "E"]
color_args = ["r", "g", "b", "c", "m"]
with tm.assert_produces_warning(UserWarning):
axes = _check_plot_works(
df.plot.pie, subplots=True, labels=labels, colors=color_args
)
assert len(axes) == len(df.columns)
for ax in axes:
self._check_text_labels(ax.texts, labels)
self._check_colors(ax.patches, facecolors=color_args)
def test_pie_df_nan(self):
df = DataFrame(np.random.rand(4, 4))
for i in range(4):
df.iloc[i, i] = np.nan
fig, axes = self.plt.subplots(ncols=4)
df.plot.pie(subplots=True, ax=axes, legend=True)
base_expected = ["0", "1", "2", "3"]
for i, ax in enumerate(axes):
expected = list(base_expected) # force copy
expected[i] = ""
result = [x.get_text() for x in ax.texts]
assert result == expected
# legend labels
# NaN's not included in legend with subplots
# see https://github.com/pandas-dev/pandas/issues/8390
assert [x.get_text() for x in ax.get_legend().get_texts()] == base_expected[
:i
] + base_expected[i + 1 :]
@pytest.mark.slow
def test_errorbar_plot(self):
with warnings.catch_warnings():
d = {"x": np.arange(12), "y": np.arange(12, 0, -1)}
df = DataFrame(d)
d_err = {"x": np.ones(12) * 0.2, "y": np.ones(12) * 0.4}
df_err = DataFrame(d_err)
# check line plots
ax = _check_plot_works(df.plot, yerr=df_err, logy=True)
self._check_has_errorbars(ax, xerr=0, yerr=2)
ax = _check_plot_works(df.plot, yerr=df_err, logx=True, logy=True)
self._check_has_errorbars(ax, xerr=0, yerr=2)
ax = _check_plot_works(df.plot, yerr=df_err, loglog=True)
self._check_has_errorbars(ax, xerr=0, yerr=2)
kinds = ["line", "bar", "barh"]
for kind in kinds:
ax = _check_plot_works(df.plot, yerr=df_err["x"], kind=kind)
self._check_has_errorbars(ax, xerr=0, yerr=2)
ax = _check_plot_works(df.plot, yerr=d_err, kind=kind)
self._check_has_errorbars(ax, xerr=0, yerr=2)
ax = _check_plot_works(df.plot, yerr=df_err, xerr=df_err, kind=kind)
self._check_has_errorbars(ax, xerr=2, yerr=2)
ax = _check_plot_works(
df.plot, yerr=df_err["x"], xerr=df_err["x"], kind=kind
)
self._check_has_errorbars(ax, xerr=2, yerr=2)
ax = _check_plot_works(df.plot, xerr=0.2, yerr=0.2, kind=kind)
self._check_has_errorbars(ax, xerr=2, yerr=2)
# _check_plot_works adds an ax so catch warning. see GH #13188
axes = _check_plot_works(
df.plot, yerr=df_err, xerr=df_err, subplots=True, kind=kind
)
self._check_has_errorbars(axes, xerr=1, yerr=1)
ax = _check_plot_works(
(df + 1).plot, yerr=df_err, xerr=df_err, kind="bar", log=True
)
self._check_has_errorbars(ax, xerr=2, yerr=2)
# yerr is raw error values
ax = _check_plot_works(df["y"].plot, yerr=np.ones(12) * 0.4)
self._check_has_errorbars(ax, xerr=0, yerr=1)
ax = _check_plot_works(df.plot, yerr=np.ones((2, 12)) * 0.4)
self._check_has_errorbars(ax, xerr=0, yerr=2)
# yerr is column name
for yerr in ["yerr", "誤差"]:
s_df = df.copy()
s_df[yerr] = np.ones(12) * 0.2
ax = _check_plot_works(s_df.plot, yerr=yerr)
self._check_has_errorbars(ax, xerr=0, yerr=2)
ax = _check_plot_works(s_df.plot, y="y", x="x", yerr=yerr)
self._check_has_errorbars(ax, xerr=0, yerr=1)
with pytest.raises(ValueError):
df.plot(yerr=np.random.randn(11))
df_err = DataFrame({"x": ["zzz"] * 12, "y": ["zzz"] * 12})
with pytest.raises((ValueError, TypeError)):
df.plot(yerr=df_err)
@pytest.mark.xfail(reason="Iterator is consumed", raises=ValueError)
@pytest.mark.slow
def test_errorbar_plot_iterator(self):
with warnings.catch_warnings():
d = {"x": np.arange(12), "y": np.arange(12, 0, -1)}
df = DataFrame(d)
# yerr is iterator
ax = _check_plot_works(df.plot, yerr=itertools.repeat(0.1, len(df)))
self._check_has_errorbars(ax, xerr=0, yerr=2)
@pytest.mark.slow
def test_errorbar_with_integer_column_names(self):
# test with integer column names
df = DataFrame(np.random.randn(10, 2))
df_err = DataFrame(np.random.randn(10, 2))
ax = _check_plot_works(df.plot, yerr=df_err)
self._check_has_errorbars(ax, xerr=0, yerr=2)
ax = _check_plot_works(df.plot, y=0, yerr=1)
self._check_has_errorbars(ax, xerr=0, yerr=1)
@pytest.mark.slow
def test_errorbar_with_partial_columns(self):
df = DataFrame(np.random.randn(10, 3))
df_err = DataFrame(np.random.randn(10, 2), columns=[0, 2])
kinds = ["line", "bar"]
for kind in kinds:
ax = _check_plot_works(df.plot, yerr=df_err, kind=kind)
self._check_has_errorbars(ax, xerr=0, yerr=2)
ix = date_range("1/1/2000", periods=10, freq="M")
df.set_index(ix, inplace=True)
df_err.set_index(ix, inplace=True)
ax = _check_plot_works(df.plot, yerr=df_err, kind="line")
self._check_has_errorbars(ax, xerr=0, yerr=2)
d = {"x": np.arange(12), "y": np.arange(12, 0, -1)}
df = DataFrame(d)
d_err = {"x": np.ones(12) * 0.2, "z": np.ones(12) * 0.4}
df_err = DataFrame(d_err)
for err in [d_err, df_err]:
ax = _check_plot_works(df.plot, yerr=err)
self._check_has_errorbars(ax, xerr=0, yerr=1)
@pytest.mark.slow
def test_errorbar_timeseries(self):
with warnings.catch_warnings():
d = {"x": np.arange(12), "y": np.arange(12, 0, -1)}
d_err = {"x": np.ones(12) * 0.2, "y": np.ones(12) * 0.4}
# check time-series plots
ix = date_range("1/1/2000", "1/1/2001", freq="M")
tdf = DataFrame(d, index=ix)
tdf_err = DataFrame(d_err, index=ix)
kinds = ["line", "bar", "barh"]
for kind in kinds:
ax = _check_plot_works(tdf.plot, yerr=tdf_err, kind=kind)
self._check_has_errorbars(ax, xerr=0, yerr=2)
ax = _check_plot_works(tdf.plot, yerr=d_err, kind=kind)
self._check_has_errorbars(ax, xerr=0, yerr=2)
ax = _check_plot_works(tdf.plot, y="y", yerr=tdf_err["x"], kind=kind)
self._check_has_errorbars(ax, xerr=0, yerr=1)
ax = _check_plot_works(tdf.plot, y="y", yerr="x", kind=kind)
self._check_has_errorbars(ax, xerr=0, yerr=1)
ax = _check_plot_works(tdf.plot, yerr=tdf_err, kind=kind)
self._check_has_errorbars(ax, xerr=0, yerr=2)
# _check_plot_works adds an ax so catch warning. see GH #13188
axes = _check_plot_works(
tdf.plot, kind=kind, yerr=tdf_err, subplots=True
)
self._check_has_errorbars(axes, xerr=0, yerr=1)
def test_errorbar_asymmetrical(self):
np.random.seed(0)
err = np.random.rand(3, 2, 5)
# each column is [0, 1, 2, 3, 4], [5, 6, 7, 8, 9], ...
df = DataFrame(np.arange(15).reshape(3, 5)).T
ax = df.plot(yerr=err, xerr=err / 2)
yerr_0_0 = ax.collections[1].get_paths()[0].vertices[:, 1]
expected_0_0 = err[0, :, 0] * np.array([-1, 1])
tm.assert_almost_equal(yerr_0_0, expected_0_0)
with pytest.raises(ValueError):
df.plot(yerr=err.T)
tm.close()
def test_table(self):
df = DataFrame(np.random.rand(10, 3), index=list(string.ascii_letters[:10]))
_check_plot_works(df.plot, table=True)
_check_plot_works(df.plot, table=df)
ax = df.plot()
assert len(ax.tables) == 0
plotting.table(ax, df.T)
assert len(ax.tables) == 1
def test_errorbar_scatter(self):
df = DataFrame(np.random.randn(5, 2), index=range(5), columns=["x", "y"])
df_err = DataFrame(
np.random.randn(5, 2) / 5, index=range(5), columns=["x", "y"]
)
ax = _check_plot_works(df.plot.scatter, x="x", y="y")
self._check_has_errorbars(ax, xerr=0, yerr=0)
ax = _check_plot_works(df.plot.scatter, x="x", y="y", xerr=df_err)
self._check_has_errorbars(ax, xerr=1, yerr=0)
ax = _check_plot_works(df.plot.scatter, x="x", y="y", yerr=df_err)
self._check_has_errorbars(ax, xerr=0, yerr=1)
ax = _check_plot_works(df.plot.scatter, x="x", y="y", xerr=df_err, yerr=df_err)
self._check_has_errorbars(ax, xerr=1, yerr=1)
def _check_errorbar_color(containers, expected, has_err="has_xerr"):
lines = []
errs = [c.lines for c in containers if getattr(c, has_err, False)][0]
for el in errs:
if is_list_like(el):
lines.extend(el)
else:
lines.append(el)
err_lines = [x for x in lines if x in ax.collections]
self._check_colors(
err_lines, linecolors=np.array([expected] * len(err_lines))
)
# GH 8081
df = DataFrame(np.random.randn(10, 5), columns=["a", "b", "c", "d", "e"])
ax = df.plot.scatter(x="a", y="b", xerr="d", yerr="e", c="red")
self._check_has_errorbars(ax, xerr=1, yerr=1)
_check_errorbar_color(ax.containers, "red", has_err="has_xerr")
_check_errorbar_color(ax.containers, "red", has_err="has_yerr")
ax = df.plot.scatter(x="a", y="b", yerr="e", color="green")
self._check_has_errorbars(ax, xerr=0, yerr=1)
_check_errorbar_color(ax.containers, "green", has_err="has_yerr")
@pytest.mark.slow
def test_sharex_and_ax(self):
# https://github.com/pandas-dev/pandas/issues/9737: when using gridspec,
# the axes in fig.get_axes() are sorted differently than pandas expects
# them, so make sure that only the right ones are removed
import matplotlib.pyplot as plt
plt.close("all")
gs, axes = _generate_4_axes_via_gridspec()
df = DataFrame(
{
"a": [1, 2, 3, 4, 5, 6],
"b": [1, 2, 3, 4, 5, 6],
"c": [1, 2, 3, 4, 5, 6],
"d": [1, 2, 3, 4, 5, 6],
}
)
def _check(axes):
for ax in axes:
assert len(ax.lines) == 1
self._check_visible(ax.get_yticklabels(), visible=True)
for ax in [axes[0], axes[2]]:
self._check_visible(ax.get_xticklabels(), visible=False)
self._check_visible(ax.get_xticklabels(minor=True), visible=False)
for ax in [axes[1], axes[3]]:
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
for ax in axes:
df.plot(x="a", y="b", title="title", ax=ax, sharex=True)
gs.tight_layout(plt.gcf())
_check(axes)
tm.close()
gs, axes = _generate_4_axes_via_gridspec()
with tm.assert_produces_warning(UserWarning):
axes = df.plot(subplots=True, ax=axes, sharex=True)
_check(axes)
tm.close()
gs, axes = _generate_4_axes_via_gridspec()
# without sharex, no labels should be touched!
for ax in axes:
df.plot(x="a", y="b", title="title", ax=ax)
gs.tight_layout(plt.gcf())
for ax in axes:
assert len(ax.lines) == 1
self._check_visible(ax.get_yticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
tm.close()
@pytest.mark.slow
def test_sharey_and_ax(self):
# https://github.com/pandas-dev/pandas/issues/9737: when using gridspec,
# the axes in fig.get_axes() are sorted differently than pandas expects
# them, so make sure that only the right ones are removed
import matplotlib.pyplot as plt
gs, axes = _generate_4_axes_via_gridspec()
df = DataFrame(
{
"a": [1, 2, 3, 4, 5, 6],
"b": [1, 2, 3, 4, 5, 6],
"c": [1, 2, 3, 4, 5, 6],
"d": [1, 2, 3, 4, 5, 6],
}
)
def _check(axes):
for ax in axes:
assert len(ax.lines) == 1
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
for ax in [axes[0], axes[1]]:
self._check_visible(ax.get_yticklabels(), visible=True)
for ax in [axes[2], axes[3]]:
self._check_visible(ax.get_yticklabels(), visible=False)
for ax in axes:
df.plot(x="a", y="b", title="title", ax=ax, sharey=True)
gs.tight_layout(plt.gcf())
_check(axes)
tm.close()
gs, axes = _generate_4_axes_via_gridspec()
with tm.assert_produces_warning(UserWarning):
axes = df.plot(subplots=True, ax=axes, sharey=True)
gs.tight_layout(plt.gcf())
_check(axes)
tm.close()
gs, axes = _generate_4_axes_via_gridspec()
# without sharey, no labels should be touched!
for ax in axes:
df.plot(x="a", y="b", title="title", ax=ax)
gs.tight_layout(plt.gcf())
for ax in axes:
assert len(ax.lines) == 1
self._check_visible(ax.get_yticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
@td.skip_if_no_scipy
def test_memory_leak(self):
""" Check that every plot type gets properly collected. """
import weakref
import gc
results = {}
for kind in plotting.PlotAccessor._all_kinds:
args = {}
if kind in ["hexbin", "scatter", "pie"]:
df = self.hexbin_df
args = {"x": "A", "y": "B"}
elif kind == "area":
df = self.tdf.abs()
else:
df = self.tdf
# Use a weakref so we can see if the object gets collected without
# also preventing it from being collected
results[kind] = weakref.proxy(df.plot(kind=kind, **args))
# have matplotlib delete all the figures
tm.close()
# force a garbage collection
gc.collect()
for key in results:
# check that every plot was collected
with pytest.raises(ReferenceError):
# need to actually access something to get an error
results[key].lines
@pytest.mark.slow
def test_df_subplots_patterns_minorticks(self):
# GH 10657
import matplotlib.pyplot as plt
df = DataFrame(
np.random.randn(10, 2),
index=date_range("1/1/2000", periods=10),
columns=list("AB"),
)
# shared subplots
fig, axes = plt.subplots(2, 1, sharex=True)
axes = df.plot(subplots=True, ax=axes)
for ax in axes:
assert len(ax.lines) == 1
self._check_visible(ax.get_yticklabels(), visible=True)
# xaxis of 1st ax must be hidden
self._check_visible(axes[0].get_xticklabels(), visible=False)
self._check_visible(axes[0].get_xticklabels(minor=True), visible=False)
self._check_visible(axes[1].get_xticklabels(), visible=True)
self._check_visible(axes[1].get_xticklabels(minor=True), visible=True)
tm.close()
fig, axes = plt.subplots(2, 1)
with tm.assert_produces_warning(UserWarning):
axes = df.plot(subplots=True, ax=axes, sharex=True)
for ax in axes:
assert len(ax.lines) == 1
self._check_visible(ax.get_yticklabels(), visible=True)
# xaxis of 1st ax must be hidden
self._check_visible(axes[0].get_xticklabels(), visible=False)
self._check_visible(axes[0].get_xticklabels(minor=True), visible=False)
self._check_visible(axes[1].get_xticklabels(), visible=True)
self._check_visible(axes[1].get_xticklabels(minor=True), visible=True)
tm.close()
# not shared
fig, axes = plt.subplots(2, 1)
axes = df.plot(subplots=True, ax=axes)
for ax in axes:
assert len(ax.lines) == 1
self._check_visible(ax.get_yticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
tm.close()
@pytest.mark.slow
def test_df_gridspec_patterns(self):
# GH 10819
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
ts = Series(np.random.randn(10), index=date_range("1/1/2000", periods=10))
df = DataFrame(np.random.randn(10, 2), index=ts.index, columns=list("AB"))
def _get_vertical_grid():
gs = gridspec.GridSpec(3, 1)
fig = plt.figure()
ax1 = fig.add_subplot(gs[:2, :])
ax2 = fig.add_subplot(gs[2, :])
return ax1, ax2
def _get_horizontal_grid():
gs = gridspec.GridSpec(1, 3)
fig = plt.figure()
ax1 = fig.add_subplot(gs[:, :2])
ax2 = fig.add_subplot(gs[:, 2])
return ax1, ax2
for ax1, ax2 in [_get_vertical_grid(), _get_horizontal_grid()]:
ax1 = ts.plot(ax=ax1)
assert len(ax1.lines) == 1
ax2 = df.plot(ax=ax2)
assert len(ax2.lines) == 2
for ax in [ax1, ax2]:
self._check_visible(ax.get_yticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
tm.close()
# subplots=True
for ax1, ax2 in [_get_vertical_grid(), _get_horizontal_grid()]:
axes = df.plot(subplots=True, ax=[ax1, ax2])
assert len(ax1.lines) == 1
assert len(ax2.lines) == 1
for ax in axes:
self._check_visible(ax.get_yticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
tm.close()
# vertical / subplots / sharex=True / sharey=True
ax1, ax2 = _get_vertical_grid()
with tm.assert_produces_warning(UserWarning):
axes = df.plot(subplots=True, ax=[ax1, ax2], sharex=True, sharey=True)
assert len(axes[0].lines) == 1
assert len(axes[1].lines) == 1
for ax in [ax1, ax2]:
# y-axis labels are visible because there is only one column
self._check_visible(ax.get_yticklabels(), visible=True)
# x-axis of axes[0] (top) is hidden
self._check_visible(axes[0].get_xticklabels(), visible=False)
self._check_visible(axes[0].get_xticklabels(minor=True), visible=False)
self._check_visible(axes[1].get_xticklabels(), visible=True)
self._check_visible(axes[1].get_xticklabels(minor=True), visible=True)
tm.close()
# horizontal / subplots / sharex=True / sharey=True
ax1, ax2 = _get_horizontal_grid()
with tm.assert_produces_warning(UserWarning):
axes = df.plot(subplots=True, ax=[ax1, ax2], sharex=True, sharey=True)
assert len(axes[0].lines) == 1
assert len(axes[1].lines) == 1
self._check_visible(axes[0].get_yticklabels(), visible=True)
# y-axis of axes[1] (right) is hidden
self._check_visible(axes[1].get_yticklabels(), visible=False)
for ax in [ax1, ax2]:
# xaxis are visible because there is only one column
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
tm.close()
# boxed
def _get_boxed_grid():
gs = gridspec.GridSpec(3, 3)
fig = plt.figure()
ax1 = fig.add_subplot(gs[:2, :2])
ax2 = fig.add_subplot(gs[:2, 2])
ax3 = fig.add_subplot(gs[2, :2])
ax4 = fig.add_subplot(gs[2, 2])
return ax1, ax2, ax3, ax4
axes = _get_boxed_grid()
df = DataFrame(np.random.randn(10, 4), index=ts.index, columns=list("ABCD"))
axes = df.plot(subplots=True, ax=axes)
for ax in axes:
assert len(ax.lines) == 1
# axes are visible because they are not shared
self._check_visible(ax.get_yticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
tm.close()
# subplots / sharex=True / sharey=True
axes = _get_boxed_grid()
with tm.assert_produces_warning(UserWarning):
axes = df.plot(subplots=True, ax=axes, sharex=True, sharey=True)
for ax in axes:
assert len(ax.lines) == 1
for ax in [axes[0], axes[2]]: # left column
self._check_visible(ax.get_yticklabels(), visible=True)
for ax in [axes[1], axes[3]]: # right column
self._check_visible(ax.get_yticklabels(), visible=False)
for ax in [axes[0], axes[1]]: # top row
self._check_visible(ax.get_xticklabels(), visible=False)
self._check_visible(ax.get_xticklabels(minor=True), visible=False)
for ax in [axes[2], axes[3]]: # bottom row
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
tm.close()
@pytest.mark.slow
def test_df_grid_settings(self):
# Make sure plot defaults to rcParams['axes.grid'] setting, GH 9792
self._check_grid_settings(
DataFrame({"a": [1, 2, 3], "b": [2, 3, 4]}),
plotting.PlotAccessor._dataframe_kinds,
kws={"x": "a", "y": "b"},
)
def test_invalid_colormap(self):
df = DataFrame(randn(3, 2), columns=["A", "B"])
with pytest.raises(ValueError):
df.plot(colormap="invalid_colormap")
def test_plain_axes(self):
# supplied ax itself is a SubplotAxes, but figure contains also
# a plain Axes object (GH11556)
fig, ax = self.plt.subplots()
fig.add_axes([0.2, 0.2, 0.2, 0.2])
Series(rand(10)).plot(ax=ax)
# supplied ax itself is a plain Axes, but because the cmap keyword
# a new ax is created for the colorbar -> also multiples axes (GH11520)
df = DataFrame({"a": randn(8), "b": randn(8)})
fig = self.plt.figure()
ax = fig.add_axes((0, 0, 1, 1))
df.plot(kind="scatter", ax=ax, x="a", y="b", c="a", cmap="hsv")
# other examples
fig, ax = self.plt.subplots()
from mpl_toolkits.axes_grid1 import make_axes_locatable
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
Series(rand(10)).plot(ax=ax)
Series(rand(10)).plot(ax=cax)
fig, ax = self.plt.subplots()
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
iax = inset_axes(ax, width="30%", height=1.0, loc=3)
Series(rand(10)).plot(ax=ax)
Series(rand(10)).plot(ax=iax)
def test_passed_bar_colors(self):
import matplotlib as mpl
color_tuples = [(0.9, 0, 0, 1), (0, 0.9, 0, 1), (0, 0, 0.9, 1)]
colormap = mpl.colors.ListedColormap(color_tuples)
barplot = pd.DataFrame([[1, 2, 3]]).plot(kind="bar", cmap=colormap)
assert color_tuples == [c.get_facecolor() for c in barplot.patches]
def test_rcParams_bar_colors(self):
import matplotlib as mpl
color_tuples = [(0.9, 0, 0, 1), (0, 0.9, 0, 1), (0, 0, 0.9, 1)]
with mpl.rc_context(rc={"axes.prop_cycle": mpl.cycler("color", color_tuples)}):
barplot = pd.DataFrame([[1, 2, 3]]).plot(kind="bar")
assert color_tuples == [c.get_facecolor() for c in barplot.patches]
@pytest.mark.parametrize("method", ["line", "barh", "bar"])
def test_secondary_axis_font_size(self, method):
# GH: 12565
df = (
pd.DataFrame(np.random.randn(15, 2), columns=list("AB"))
.assign(C=lambda df: df.B.cumsum())
.assign(D=lambda df: df.C * 1.1)
)
fontsize = 20
sy = ["C", "D"]
kwargs = dict(secondary_y=sy, fontsize=fontsize, mark_right=True)
ax = getattr(df.plot, method)(**kwargs)
self._check_ticks_props(axes=ax.right_ax, ylabelsize=fontsize)
@pytest.mark.slow
def test_x_string_values_ticks(self):
# Test if string plot index have a fixed xtick position
# GH: 7612, GH: 22334
df = pd.DataFrame(
{
"sales": [3, 2, 3],
"visits": [20, 42, 28],
"day": ["Monday", "Tuesday", "Wednesday"],
}
)
ax = df.plot.area(x="day")
ax.set_xlim(-1, 3)
xticklabels = [t.get_text() for t in ax.get_xticklabels()]
labels_position = dict(zip(xticklabels, ax.get_xticks()))
# Testing if the label stayed at the right position
assert labels_position["Monday"] == 0.0
assert labels_position["Tuesday"] == 1.0
assert labels_position["Wednesday"] == 2.0
@pytest.mark.slow
def test_x_multiindex_values_ticks(self):
# Test if multiindex plot index have a fixed xtick position
# GH: 15912
index = pd.MultiIndex.from_product([[2012, 2013], [1, 2]])
df = pd.DataFrame(np.random.randn(4, 2), columns=["A", "B"], index=index)
ax = df.plot()
ax.set_xlim(-1, 4)
xticklabels = [t.get_text() for t in ax.get_xticklabels()]
labels_position = dict(zip(xticklabels, ax.get_xticks()))
# Testing if the label stayed at the right position
assert labels_position["(2012, 1)"] == 0.0
assert labels_position["(2012, 2)"] == 1.0
assert labels_position["(2013, 1)"] == 2.0
assert labels_position["(2013, 2)"] == 3.0
@pytest.mark.parametrize("kind", ["line", "area"])
def test_xlim_plot_line(self, kind):
# test if xlim is set correctly in plot.line and plot.area
# GH 27686
df = pd.DataFrame([2, 4], index=[1, 2])
ax = df.plot(kind=kind)
xlims = ax.get_xlim()
assert xlims[0] < 1
assert xlims[1] > 2
def test_xlim_plot_line_correctly_in_mixed_plot_type(self):
# test if xlim is set correctly when ax contains multiple different kinds
# of plots, GH 27686
fig, ax = self.plt.subplots()
indexes = ["k1", "k2", "k3", "k4"]
df = pd.DataFrame(
{
"s1": [1000, 2000, 1500, 2000],
"s2": [900, 1400, 2000, 3000],
"s3": [1500, 1500, 1600, 1200],
"secondary_y": [1, 3, 4, 3],
},
index=indexes,
)
df[["s1", "s2", "s3"]].plot.bar(ax=ax, stacked=False)
df[["secondary_y"]].plot(ax=ax, secondary_y=True)
xlims = ax.get_xlim()
assert xlims[0] < 0
assert xlims[1] > 3
# make sure axis labels are plotted correctly as well
xticklabels = [t.get_text() for t in ax.get_xticklabels()]
assert xticklabels == indexes
def test_subplots_sharex_false(self):
# test when sharex is set to False, two plots should have different
# labels, GH 25160
df = pd.DataFrame(np.random.rand(10, 2))
df.iloc[5:, 1] = np.nan
df.iloc[:5, 0] = np.nan
figs, axs = self.plt.subplots(2, 1)
df.plot.line(ax=axs, subplots=True, sharex=False)
expected_ax1 = np.arange(4.5, 10, 0.5)
expected_ax2 = np.arange(-0.5, 5, 0.5)
tm.assert_numpy_array_equal(axs[0].get_xticks(), expected_ax1)
tm.assert_numpy_array_equal(axs[1].get_xticks(), expected_ax2)
def test_plot_no_rows(self):
# GH 27758
df = pd.DataFrame(columns=["foo"], dtype=int)
assert df.empty
ax = df.plot()
assert len(ax.get_lines()) == 1
line = ax.get_lines()[0]
assert len(line.get_xdata()) == 0
assert len(line.get_ydata()) == 0
def test_plot_no_numeric_data(self):
df = pd.DataFrame(["a", "b", "c"])
with pytest.raises(TypeError):
df.plot()
def _generate_4_axes_via_gridspec():
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.gridspec # noqa
gs = mpl.gridspec.GridSpec(2, 2)
ax_tl = plt.subplot(gs[0, 0])
ax_ll = plt.subplot(gs[1, 0])
ax_tr = plt.subplot(gs[0, 1])
ax_lr = plt.subplot(gs[1, 1])
return gs, [ax_tl, ax_ll, ax_tr, ax_lr]
| 38.806491 | 88 | 0.567562 |
443a79e5305eff60f5ea0fa9dbe0120700e0537c | 1,942 | py | Python | var/spack/repos/builtin/packages/libpmemobj-cpp/package.py | LiamBindle/spack | e90d5ad6cfff2ba3de7b537d6511adccd9d5fcf1 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 2,360 | 2017-11-06T08:47:01.000Z | 2022-03-31T14:45:33.000Z | var/spack/repos/builtin/packages/libpmemobj-cpp/package.py | LiamBindle/spack | e90d5ad6cfff2ba3de7b537d6511adccd9d5fcf1 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 13,838 | 2017-11-04T07:49:45.000Z | 2022-03-31T23:38:39.000Z | var/spack/repos/builtin/packages/libpmemobj-cpp/package.py | LiamBindle/spack | e90d5ad6cfff2ba3de7b537d6511adccd9d5fcf1 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 1,793 | 2017-11-04T07:45:50.000Z | 2022-03-30T14:31:53.000Z | # Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class LibpmemobjCpp(CMakePackage):
"""C++ bindings for libpmemobj (https://github.com/pmem/pmdk)"""
homepage = "https://github.com/pmem/libpmemobj-cpp"
url = "https://github.com/pmem/libpmemobj-cpp/archive/1.12.tar.gz"
git = "https://github.com/pmem/libpmemobj-cpp.git"
version('develop', branch='master')
version('1.12', sha256='5a7e082a862affbd87ff174b790be7db77f7d85d4c583acc34011f1104bc54a9')
version('1.11', sha256='2818f3ce23c861222d2765c377e6d4ccf8a2e2f66e4d23e4e2c35f4246f4a403')
version('1.10', sha256='bba31d9a1c21b38c20cbe2d2b152effef7e2debfa89a87e0c32de616c31d9191')
version('1.9', sha256='0284c20e7f642f16b3d49d576a6540bcf68330962ac273e11b07158b6e769689')
version('1.8', sha256='dcf60be1140a90f10b8eeb763d53e3dfcdf5a5b345e10f78d469356642527b32')
version('1.7', sha256='53af87a648ff28a74d6856ce2125ca4acdb0c4b78062df1cba18d50d35e3eada')
version('1.6', sha256='791bf86c6b9401451e3d20f19cb8799d312b9d58659cb93aa532cd724db554ae')
version('1.5.1', sha256='0448bac4697f6563789e5bf22b8556288ae67ab916608bc45d0a3baa24c67985')
version('1.5', sha256='6254aa2fb77977f8b91998eb866216d2af22f4ccbffdfc7932df1dff151da61e')
# libpmemobj only supports 'Debug' and 'Release'
variant('build_type', default='Release',
description='CMake build type',
values=('Debug', 'Release'))
depends_on('pmdk@master', when='@develop')
depends_on('pmdk@1.9:', when='@1.12:')
depends_on('pmdk@1.8:', when='@1.9:')
depends_on('pmdk@1.7:', when='@1.8:')
depends_on('pmdk@1.4:', when='@1.5:')
def cmake_args(self):
args = ['-DTESTS_USE_VALGRIND=OFF']
return args
| 47.365854 | 98 | 0.717302 |
cc6c132495891def4d47a594b8329ed811777856 | 1,801 | py | Python | setup.py | TheAngryGoldfish/entity-embed | 9493a9d194208a19aed5a40a3140245dbeea6b44 | [
"MIT"
] | 89 | 2021-04-06T23:07:06.000Z | 2022-03-27T18:27:13.000Z | setup.py | TheAngryGoldfish/entity-embed | 9493a9d194208a19aed5a40a3140245dbeea6b44 | [
"MIT"
] | 7 | 2021-04-13T18:51:43.000Z | 2022-01-25T12:39:22.000Z | setup.py | TheAngryGoldfish/entity-embed | 9493a9d194208a19aed5a40a3140245dbeea6b44 | [
"MIT"
] | 7 | 2021-04-21T15:08:23.000Z | 2021-12-11T22:41:17.000Z | #!/usr/bin/env python
"""The setup script."""
from pkg_resources import parse_requirements
from setuptools import find_packages, setup
with open("README.md", "r", encoding="utf-8") as readme_file:
readme = readme_file.read()
with open("requirements.txt") as requirements_file:
requirements = [str(req) for req in parse_requirements(requirements_file.readlines())]
short_description = (
"Transform entities like companies, products, etc. into vectors to support scalable "
"Record Linkage / Entity Resolution using Approximate Nearest Neighbors."
)
setup(
author="Flávio Juvenal (Vinta Software)",
author_email="flavio@vinta.com.br",
python_requires=">=3.6",
classifiers=[
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
],
description=short_description,
entry_points={
"console_scripts": [
"entity_embed_train=entity_embed.cli:train",
"entity_embed_predict=entity_embed.cli:predict",
],
},
install_requires=requirements,
license="MIT license",
long_description=readme,
long_description_content_type="text/markdown",
include_package_data=True,
keywords="record linkage,entity resolution,deduplication,embedding",
name="entity-embed",
packages=find_packages(include=["entity_embed", "entity_embed.*"]),
url="https://github.com/vintasoftware/entity-embed",
version="0.0.6",
zip_safe=False,
)
| 33.981132 | 90 | 0.67518 |
347904b26dd7ad832fec531afacdaa5c6f3560bf | 5,353 | py | Python | ReminderModule.py | Tequila24/SubBot | 66bc1505a3d0ae2e17632a0f422c8e6d7dc1bcdc | [
"Unlicense"
] | null | null | null | ReminderModule.py | Tequila24/SubBot | 66bc1505a3d0ae2e17632a0f422c8e6d7dc1bcdc | [
"Unlicense"
] | null | null | null | ReminderModule.py | Tequila24/SubBot | 66bc1505a3d0ae2e17632a0f422c8e6d7dc1bcdc | [
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
"""
@author: Fuego
"""
from VkLib import VkLib
from SubDB import SubDB
import re
from datetime import datetime, timedelta
from pprint import pprint
import traceback
import typing
class ReminderModule:
def __init__(self, new_vk_handle: VkLib):
self.dbase = SubDB("sub24_reminders")
self.vk_handle = new_vk_handle
self.dbase.create_table("reminders", [ ("id", "INTEGER"),
("expiration_date", "TEXT"),
("author_id", "INTEGER"),
("text", "TEXT"),
("chat_id", "INTEGER") ])
def parse_reminder_command(self, reminder_raw: str) -> typing.Tuple[str, str]:
expiration_date: str = ""
reminder_text: str = ""
match = re.match(r'(напомни (?:мне )?)', reminder_raw)
if match:
reminder_raw = reminder_raw[len(match.group(1)):].strip()
match = re.match(r'(.+) через', reminder_raw)
if match:
reminder_text = match.group(1)
match2 = re.search(r'через (?:(\d+) час(?:ов?|а?) ?)?(?:(\d+) минуты*у* ?)?(?:(\d+) секунды*у* ?)?', reminder_raw)
if match2:
expiration_date = (datetime.now() + timedelta(0,
int(match2.group(3)) if match2.group(3) is not None else 0,
0,
0,
int(match2.group(2)) if match2.group(2) is not None else 0,
int(match2.group(1)) if match2.group(1) is not None else 0,
0) ).strftime("%Y-%m-%d %H:%M:%S")
else:
year: str = str(datetime.now().year)
month: str = ""
day: str = ""
hour: str = "10"
minute: str = "0"
second: str = "0"
# REMINDER TEXT
match = re.search(r'(.+) \d+[-.]\d+', reminder_raw)
if match:
print(match.groups())
reminder_text = match.group(1)
# DATE DD-MM | DD.MM
match = re.search(r'(\d{1,2})[-.](\d{1,2})[^.]', reminder_raw)
if match:
year = str(datetime.now().year)
month = match.group(2)
day = match.group(1)
# DATE YYYY-MM-DD | YYYY.MM.DD
match = re.search(r'(\d{4})[-.](\d{1,2})[-.](\d{1,2})', reminder_raw)
if match:
year = match.group(1)
month = match.group(2)
day = match.group(3)
# DATE DD-MM-YYYY | DD.MM.YYYY
match = re.search(r'(\d{1,2})[-.](\d{1,2})[-.](\d{4})', reminder_raw)
if match:
year = match.group(3)
month = match.group(2)
day = match.group(1)
# TIME HH:MM:SS
match = re.search(r'в (\d{1,2}):?(\d{1,2})?:?(\d{1,2})?', reminder_raw)
if match:
hour = match.group(1)
                minute = match.group(2) if match.group(2) is not None else "0"
                second = match.group(3) if match.group(3) is not None else "0"
expiration_date = "{0}-{1}-{2} {3}:{4}:{5}".format( year, month, day, hour, minute, second)
return expiration_date, reminder_text
def create_reminder(self, peer_id: int, author_id: int, reminder_raw: str):
expiration_date, reminder_text = self.parse_reminder_command(reminder_raw)
try:
datetime.strptime(expiration_date, "%Y-%m-%d %H:%M:%S")
        except ValueError:
self.vk_handle.reply(peer_id, "Неправильный формат!")
return
response = (self.dbase.exc("SELECT MAX(id) FROM reminders"))[0]
if response[0] is not None:
reminder_id = response[0]+1
else:
reminder_id = 1
self.dbase.exc("""INSERT INTO 'reminders' VALUES((?), (?), (?), (?), (?));""", (reminder_id, expiration_date, author_id, reminder_text, peer_id))
self.dbase.com()
self.vk_handle.reply(peer_id, "Окей, записал")
def remove_reminder(self, peer_id: int, author_id: int, reminder_id: int):
response = self.dbase.exc("""SELECT * FROM 'reminders' WHERE id=(?);""", (reminder_id, ))
if len(response):
if int(response[0][2]) == author_id:
self.dbase.exc("""DELETE FROM 'reminders' WHERE id=(?);""", (reminder_id, ))
self.dbase.com()
self.vk_handle.reply(peer_id, "Твоя напоминалка номер {0} удалена!".format(reminder_id))
else:
self.vk_handle.reply(peer_id, "Нельзя удалить чужую напоминалку!")
else:
self.vk_handle.reply(peer_id, "Напоминалка с таким номером не найдена!")
def get_reminders_for_user(self, peer_id: int, author_id: int):
response = self.dbase.exc("""SELECT * FROM 'reminders' WHERE author_id = (?) ORDER BY expiration_date ASC;""", (author_id, ))
author_domain = self.vk_handle.get_user_domain_by_id(author_id)
if len(response):
reply: str = "@{0}, вот твои напоминалки: \r\n".format(author_domain)
for line in response:
reply += "# {0}, {1}, {2}\r\n".format(line[0], line[1], line[3])
self.vk_handle.reply(peer_id, reply, disable_mention=False)
else:
self.vk_handle.reply(peer_id, "{0}, у тебя нет напоминалок!".format(author_domain), disable_mention=False)
def check_active_reminders(self):
response = self.dbase.exc("""SELECT * FROM 'reminders'""")
if len(response):
for line in response:
try:
t_delta: float = (datetime.strptime(line[1], "%Y-%m-%d %H:%M:%S") - datetime.now()).total_seconds()
if t_delta < 0.0:
message: str = "@{0}, напоминаю: {1}".format(self.vk_handle.get_user_domain_by_id(line[2]), line[3])
self.vk_handle.reply(line[4], message, disable_mention=False)
self.dbase.exc("""DELETE FROM 'reminders' WHERE id=(?);""", (line[0], ))
self.dbase.com()
except Exception as e:
self.vk_handle.reply(2000000004, "удалена напоминалка: " + str(line))
self.dbase.exc("""DELETE FROM 'reminders' WHERE id=(?);""", (line[0],))
self.dbase.com()
| 35.450331 | 147 | 0.624883 |
32c2e960a2b4a3d32d99191e33df82440f884adf | 345 | py | Python | marketTimer.py | santarini/python | 8f1d576d59b7c9ca23c3d80058fa2d14833c54ae | [
"MIT"
] | 3 | 2017-12-22T09:33:18.000Z | 2020-07-06T04:53:00.000Z | marketTimer.py | santarini/python | 8f1d576d59b7c9ca23c3d80058fa2d14833c54ae | [
"MIT"
] | null | null | null | marketTimer.py | santarini/python | 8f1d576d59b7c9ca23c3d80058fa2d14833c54ae | [
"MIT"
] | null | null | null | import schedule
import datetime
#NYSE,NASDAQ,AMEX (Hours 9:30am-4:00pm)
weekdays = ['monday','tuesday','wednesday','thursday','friday']
##print("17:00")
def job():
n = datetime.datetime.now()
print("Test " + str(n))
for k in range(0,5):
dayHolder = weekdays[k]
print(dayHolder)
schedule.dayHolder.at(timeHolder).do(job)
| 19.166667 | 63 | 0.66087 |
50d26b537be193fb656581338d40150f229139bf | 2,524 | py | Python | scrapyd/tests/test_environ.py | gtlambert/scrapyd | 1395e32dd88557d6c50c4920cb6b1dea1fbcfbb7 | [
"BSD-3-Clause"
] | null | null | null | scrapyd/tests/test_environ.py | gtlambert/scrapyd | 1395e32dd88557d6c50c4920cb6b1dea1fbcfbb7 | [
"BSD-3-Clause"
] | 1 | 2019-02-07T21:35:25.000Z | 2019-02-07T21:35:25.000Z | scrapyd/tests/test_environ.py | gtlambert/scrapyd | 1395e32dd88557d6c50c4920cb6b1dea1fbcfbb7 | [
"BSD-3-Clause"
] | 2 | 2016-05-17T17:26:23.000Z | 2016-05-24T17:10:17.000Z | import datetime
import os
from twisted.trial import unittest
from zope.interface.verify import verifyObject
from scrapyd.interfaces import IEnvironment
from scrapyd.config import Config
from scrapyd.environ import Environment
class EnvironmentTest(unittest.TestCase):
def setUp(self):
d = self.mktemp()
os.mkdir(d)
config = Config(values={'eggs_dir': d, 'logs_dir': d})
config.cp.add_section('settings')
config.cp.set('settings', 'newbot', 'newbot.settings')
self.environ = Environment(config, initenv={})
def test_interface(self):
verifyObject(IEnvironment, self.environ)
def test_get_environment_with_eggfile(self):
msg = {'_project': 'mybot', '_spider': 'myspider', '_job': 'ID'}
slot = 3
env = self.environ.get_environment(msg, slot)
self.assertEqual(env['SCRAPY_PROJECT'], 'mybot')
self.assertEqual(env['SCRAPY_SLOT'], '3')
self.assertEqual(env['SCRAPY_SPIDER'], 'myspider')
self.assertEqual(env['SCRAPY_JOB'], 'ID')
self.assert_(env['SCRAPY_LOG_FILE'].endswith(os.path.join('mybot', 'myspider', 'ID.log')))
if env.get('SCRAPY_FEED_URI'): # Not compulsory
self.assert_(env['SCRAPY_FEED_URI'].startswith('file://{}'.format(os.getcwd())))
self.assert_(env['SCRAPY_FEED_URI'].endswith(os.path.join('mybot', 'myspider', 'ID.jl')))
self.failIf('SCRAPY_SETTINGS_MODULE' in env)
def test_get_environment_with_no_items_dir(self):
config = Config(values={'items_dir': '', 'logs_dir': ''})
config.cp.add_section('settings')
config.cp.set('settings', 'newbot', 'newbot.settings')
msg = {'_project': 'mybot', '_spider': 'myspider', '_job': 'ID'}
slot = 3
environ = Environment(config, initenv={})
env = environ.get_environment(msg, slot)
self.failUnless('SCRAPY_FEED_URI' not in env)
self.failUnless('SCRAPY_LOG_FILE' not in env)
def test_get_environment_with_logfile(self):
config = Config(values={'items_dir': '', 'logs_dir': '.', 'logs_filename': '{project}-{spider}-{Y}{m}{d}T{H}{M}{S}'})
msg = {'_project': 'mybot', '_spider': 'myspider', '_job': 'ID'}
slot = 3
environ = Environment(config, initenv={})
now = datetime.datetime.now()
env = environ.get_environment(msg, slot)
expected_logfilename = now.strftime("mybot-spider-%Y%m%dT%H%M%S")
self.assert_(env['SCRAPY_LOG_FILE'], expected_logfilename)
| 42.779661 | 125 | 0.644612 |
add84c0c8316c87b0788d73365960c28e22f95c0 | 14,847 | py | Python | virt/ansible-latest/lib/python2.7/site-packages/ansible/modules/network/eos/eos_eapi.py | lakhlaifi/RedHat-Ansible | 27c5077cced9d416081fcd5d69ea44bca0317fa4 | [
"Apache-2.0"
] | 1 | 2020-03-22T01:04:39.000Z | 2020-03-22T01:04:39.000Z | .v/lib/python3.6/site-packages/ansible/modules/network/eos/eos_eapi.py | binRick/ansible-callback-concise | fd7b05596b30872af3f79a32f223a0458bffbedd | [
"MIT"
] | null | null | null | .v/lib/python3.6/site-packages/ansible/modules/network/eos/eos_eapi.py | binRick/ansible-callback-concise | fd7b05596b30872af3f79a32f223a0458bffbedd | [
"MIT"
] | 1 | 2020-03-22T01:04:48.000Z | 2020-03-22T01:04:48.000Z | #!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = """
---
module: eos_eapi
version_added: "2.1"
author: "Peter Sprygada (@privateip)"
short_description: Manage and configure Arista EOS eAPI.
requirements:
- "EOS v4.12 or greater"
description:
- Use to enable or disable eAPI access, and set the port and state
of http, https, local_http and unix-socket servers.
- When enabling eAPI access the default is to enable HTTP on port
80, enable HTTPS on port 443, disable local HTTP, and disable
Unix socket server. Use the options listed below to override the
default configuration.
- Requires EOS v4.12 or greater.
extends_documentation_fragment: eos
options:
http:
description:
- The C(http) argument controls the operating state of the HTTP
transport protocol when eAPI is present in the running-config.
When the value is set to True, the HTTP protocol is enabled and
when the value is set to False, the HTTP protocol is disabled.
By default, when eAPI is first configured, the HTTP protocol is
disabled.
type: bool
default: 'no'
aliases: ['enable_http']
http_port:
description:
- Configures the HTTP port that will listen for connections when
the HTTP transport protocol is enabled. This argument accepts
integer values in the valid range of 1 to 65535.
default: 80
https:
description:
- The C(https) argument controls the operating state of the HTTPS
transport protocol when eAPI is present in the running-config.
When the value is set to True, the HTTPS protocol is enabled and
when the value is set to False, the HTTPS protocol is disabled.
By default, when eAPI is first configured, the HTTPS protocol is
enabled.
type: bool
default: 'yes'
aliases: ['enable_https']
https_port:
description:
      - Configures the HTTPS port that will listen for connections when
        the HTTPS transport protocol is enabled. This argument accepts
        integer values in the valid range of 1 to 65535.
default: 443
local_http:
description:
- The C(local_http) argument controls the operating state of the
local HTTP transport protocol when eAPI is present in the
running-config. When the value is set to True, the HTTP protocol
is enabled and restricted to connections from localhost only. When
the value is set to False, the HTTP local protocol is disabled.
      - Note this value is independent of the C(http) argument
type: bool
default: 'no'
aliases: ['enable_local_http']
local_http_port:
description:
      - Configures the local HTTP port that will listen for connections
        when the local HTTP transport protocol is enabled. This argument
        accepts integer values in the valid range of 1 to 65535.
default: 8080
socket:
description:
- The C(socket) argument controls the operating state of the UNIX
Domain Socket used to receive eAPI requests. When the value
of this argument is set to True, the UDS will listen for eAPI
requests. When the value is set to False, the UDS will not be
available to handle requests. By default when eAPI is first
configured, the UDS is disabled.
type: bool
default: 'no'
aliases: ['enable_socket']
vrf:
description:
- The C(vrf) argument will configure eAPI to listen for connections
in the specified VRF. By default, eAPI transports will listen
for connections in the global table. This value requires the
VRF to already be created otherwise the task will fail.
default: default
version_added: "2.2"
config:
description:
- The module, by default, will connect to the remote device and
retrieve the current running-config to use as a base for comparing
against the contents of source. There are times when it is not
desirable to have the task get the current running-config for
every task in a playbook. The I(config) argument allows the
implementer to pass in the configuration to use as the base
config for comparison.
version_added: "2.2"
state:
description:
- The C(state) argument controls the operational state of eAPI
on the remote device. When this argument is set to C(started),
eAPI is enabled to receive requests and when this argument is
C(stopped), eAPI is disabled and will not receive requests.
default: started
choices: ['started', 'stopped']
"""
EXAMPLES = """
- name: Enable eAPI access with default configuration
eos_eapi:
state: started
- name: Enable eAPI with no HTTP, HTTPS at port 9443, local HTTP at port 80, and socket enabled
eos_eapi:
state: started
http: false
https_port: 9443
local_http: yes
local_http_port: 80
socket: yes
- name: Shutdown eAPI access
eos_eapi:
state: stopped
"""
RETURN = """
commands:
description: The list of configuration mode commands to send to the device
returned: always
type: list
sample:
- management api http-commands
- protocol http port 81
- no protocol https
urls:
description: Hash of URL endpoints eAPI is listening on per interface
returned: when eAPI is started
type: dict
sample: {'Management1': ['http://172.26.10.1:80']}
session_name:
description: The EOS config session name used to load the configuration
returned: when changed is True
type: str
sample: ansible_1479315771
"""
import re
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.eos.eos import run_commands, load_config
from ansible.module_utils.six import iteritems
from ansible.module_utils.network.eos.eos import eos_argument_spec, check_args
def check_transport(module):
transport = module.params['transport']
provider_transport = (module.params['provider'] or {}).get('transport')
if 'eapi' in (transport, provider_transport):
module.fail_json(msg='eos_eapi module is only supported over cli transport')
def validate_http_port(value, module):
if not 1 <= value <= 65535:
module.fail_json(msg='http_port must be between 1 and 65535')
def validate_https_port(value, module):
if not 1 <= value <= 65535:
        module.fail_json(msg='https_port must be between 1 and 65535')
def validate_local_http_port(value, module):
if not 1 <= value <= 65535:
        module.fail_json(msg='local_http_port must be between 1 and 65535')
def validate_vrf(value, module):
out = run_commands(module, ['show vrf'])
configured_vrfs = []
lines = out[0].strip().splitlines()[3:]
for l in lines:
if not l:
continue
splitted_line = re.split(r'\s{2,}', l.strip())
if len(splitted_line) > 2:
configured_vrfs.append(splitted_line[0])
configured_vrfs.append('default')
if value not in configured_vrfs:
module.fail_json(msg='vrf `%s` is not configured on the system' % value)
def map_obj_to_commands(updates, module, warnings):
commands = list()
want, have = updates
def needs_update(x):
return want.get(x) is not None and (want.get(x) != have.get(x))
def add(cmd):
if 'management api http-commands' not in commands:
commands.insert(0, 'management api http-commands')
commands.append(cmd)
if any((needs_update('http'), needs_update('http_port'))):
if want['http'] is False:
add('no protocol http')
else:
if have['http'] is False and want['http'] in (False, None):
warnings.append('protocol http is not enabled, not configuring http port value')
else:
port = want['http_port'] or 80
add('protocol http port %s' % port)
if any((needs_update('https'), needs_update('https_port'))):
if want['https'] is False:
add('no protocol https')
else:
if have['https'] is False and want['https'] in (False, None):
warnings.append('protocol https is not enabled, not configuring https port value')
else:
port = want['https_port'] or 443
add('protocol https port %s' % port)
if any((needs_update('local_http'), needs_update('local_http_port'))):
if want['local_http'] is False:
add('no protocol http localhost')
else:
if have['local_http'] is False and want['local_http'] in (False, None):
warnings.append('protocol local_http is not enabled, not configuring local_http port value')
else:
port = want['local_http_port'] or 8080
add('protocol http localhost port %s' % port)
    if needs_update('socket'):
if want['socket'] is False:
add('no protocol unix-socket')
else:
add('protocol unix-socket')
if needs_update('state') and not needs_update('vrf'):
if want['state'] == 'stopped':
add('shutdown')
elif want['state'] == 'started':
add('no shutdown')
if needs_update('vrf'):
add('vrf %s' % want['vrf'])
# switching operational vrfs here
# need to add the desired state as well
if want['state'] == 'stopped':
add('shutdown')
elif want['state'] == 'started':
add('no shutdown')
return commands
def parse_state(data):
if data[0]['enabled']:
return 'started'
else:
return 'stopped'
def map_config_to_obj(module):
out = run_commands(module, ['show management api http-commands | json'])
return {
'http': out[0]['httpServer']['configured'],
'http_port': out[0]['httpServer']['port'],
'https': out[0]['httpsServer']['configured'],
'https_port': out[0]['httpsServer']['port'],
'local_http': out[0]['localHttpServer']['configured'],
'local_http_port': out[0]['localHttpServer']['port'],
'socket': out[0]['unixSocketServer']['configured'],
'vrf': out[0]['vrf'] or "default",
'state': parse_state(out)
}
def map_params_to_obj(module):
obj = {
'http': module.params['http'],
'http_port': module.params['http_port'],
'https': module.params['https'],
'https_port': module.params['https_port'],
'local_http': module.params['local_http'],
'local_http_port': module.params['local_http_port'],
'socket': module.params['socket'],
'vrf': module.params['vrf'],
'state': module.params['state']
}
for key, value in iteritems(obj):
if value:
validator = globals().get('validate_%s' % key)
if validator:
validator(value, module)
return obj
def verify_state(updates, module):
want, have = updates
invalid_state = [('http', 'httpServer'),
('https', 'httpsServer'),
('local_http', 'localHttpServer'),
('socket', 'unixSocketServer')]
timeout = module.params['timeout'] or 30
state = module.params['state']
while invalid_state:
out = run_commands(module, ['show management api http-commands | json'])
for index, item in enumerate(invalid_state):
want_key, eapi_key = item
if want[want_key] is not None:
if want[want_key] == out[0][eapi_key]['running']:
del invalid_state[index]
elif state == 'stopped':
if not out[0][eapi_key]['running']:
del invalid_state[index]
else:
del invalid_state[index]
time.sleep(1)
timeout -= 1
if timeout == 0:
module.fail_json(msg='timeout expired before eapi running state changed')


def collect_facts(module, result):
    out = run_commands(module, ['show management api http-commands | json'])
    facts = dict(eos_eapi_urls=dict())

    for each in out[0]['urls']:
        intf, url = each.split(' : ')
        key = str(intf).strip()
        if key not in facts['eos_eapi_urls']:
            facts['eos_eapi_urls'][key] = list()
        facts['eos_eapi_urls'][key].append(str(url).strip())

    result['ansible_facts'] = facts


def main():
    """ main entry point for module execution
    """
    argument_spec = dict(
        http=dict(aliases=['enable_http'], type='bool'),
        http_port=dict(type='int'),
        https=dict(aliases=['enable_https'], type='bool'),
        https_port=dict(type='int'),
        local_http=dict(aliases=['enable_local_http'], type='bool'),
        local_http_port=dict(type='int'),
        socket=dict(aliases=['enable_socket'], type='bool'),
        vrf=dict(default='default'),
        config=dict(),
        state=dict(default='started', choices=['stopped', 'started']),
    )

    argument_spec.update(eos_argument_spec)

    module = AnsibleModule(argument_spec=argument_spec,
                           supports_check_mode=True)

    check_transport(module)

    result = {'changed': False}

    warnings = list()
    if module.params['config']:
        warnings.append('config parameter is no longer necessary and will be ignored')

    want = map_params_to_obj(module)
    have = map_config_to_obj(module)

    commands = map_obj_to_commands((want, have), module, warnings)
    result['commands'] = commands

    if commands:
        commit = not module.check_mode
        response = load_config(module, commands, commit=commit)
        if response.get('diff') and module._diff:
            result['diff'] = {'prepared': response.get('diff')}

        result['session_name'] = response.get('session')
        result['changed'] = True

    if result['changed']:
        verify_state((want, have), module)

    collect_facts(module, result)

    if warnings:
        result['warnings'] = warnings

    module.exit_json(**result)


if __name__ == '__main__':
    main()
| 34.447796 | 108 | 0.642689 |
c544667bfca50bac87d22b49ebe3fb2030b432d4 | 2,768 | py | Python | python_modules/libraries/dagster-aws/dagster_aws_tests/s3_tests/test_file_handle_to_s3.py | atsuhiro/dagster | 24116a003a12c5a582afba3d4db74b3fa8c8159c | [
"Apache-2.0"
] | null | null | null | python_modules/libraries/dagster-aws/dagster_aws_tests/s3_tests/test_file_handle_to_s3.py | atsuhiro/dagster | 24116a003a12c5a582afba3d4db74b3fa8c8159c | [
"Apache-2.0"
] | null | null | null | python_modules/libraries/dagster-aws/dagster_aws_tests/s3_tests/test_file_handle_to_s3.py | atsuhiro/dagster | 24116a003a12c5a582afba3d4db74b3fa8c8159c | [
"Apache-2.0"
] | null | null | null | from dagster import pipeline, solid, ModeDefinition, execute_pipeline, ResourceDefinition
from dagster.utils.test import get_temp_file_handle_with_data
from dagster_aws.s3.solids import file_handle_to_s3
from dagster_aws.s3.s3_fake_resource import create_s3_fake_resource
def create_file_handle_pipeline(temp_file_handle, s3_resource):
# pylint: disable=no-value-for-parameter
@solid
def emit_temp_handle(_):
return temp_file_handle
@pipeline(
mode_definitions=[
ModeDefinition(resources={'s3': ResourceDefinition.hardcoded_resource(s3_resource)})
]
)
def test():
return file_handle_to_s3(emit_temp_handle())
return test


def test_successful_file_handle_to_s3():
    foo_bytes = 'foo'.encode()
    with get_temp_file_handle_with_data(foo_bytes) as temp_file_handle:
        s3_fake_resource = create_s3_fake_resource()
        result = execute_pipeline(
            create_file_handle_pipeline(temp_file_handle, s3_fake_resource),
            environment_dict={
                'solids': {
                    'file_handle_to_s3': {'config': {'Bucket': 'some-bucket', 'Key': 'some-key'}}
                }
            },
        )

        assert result.success

        assert s3_fake_resource.session.mock_extras.put_object.call_count == 1

        assert (
            s3_fake_resource.session.get_object('some-bucket', 'some-key')['Body'].read()
            == foo_bytes
        )

        materializations = result.result_for_solid(
            'file_handle_to_s3'
        ).materializations_during_compute
        assert len(materializations) == 1
        assert len(materializations[0].metadata_entries) == 1
        assert (
            materializations[0].metadata_entries[0].entry_data.path == 's3://some-bucket/some-key'
        )
        assert materializations[0].metadata_entries[0].label == 'some-key'


def test_successful_file_handle_to_s3_with_configs():
    foo_bytes = 'foo'.encode()
    with get_temp_file_handle_with_data(foo_bytes) as temp_file_handle:
        s3_fake_resource = create_s3_fake_resource()
        result = execute_pipeline(
            create_file_handle_pipeline(temp_file_handle, s3_fake_resource),
            environment_dict={
                'solids': {
                    'file_handle_to_s3': {
                        'config': {
                            'Bucket': 'some-bucket',
                            'Key': 'some-key',
                            'CacheControl': 'some-value',
                        }
                    }
                }
            },
        )

        assert result.success
        s3_fake_resource.session.mock_extras.put_object.assert_called_once_with(
            CacheControl='some-value'
        )
| 33.349398 | 98 | 0.619942 |
9fa3b1e36e159ebd81ae35824f48c08c7029ddc7 | 41 | py | Python | mltk/marl/envs/__init__.py | lqf96/mltk | 7187be5d616781695ee68674cd335fbb5a237ccc | [
"MIT"
] | null | null | null | mltk/marl/envs/__init__.py | lqf96/mltk | 7187be5d616781695ee68674cd335fbb5a237ccc | [
"MIT"
] | 2 | 2019-12-24T01:54:21.000Z | 2019-12-24T02:23:54.000Z | mltk/marl/envs/__init__.py | lqf96/mltk | 7187be5d616781695ee68674cd335fbb5a237ccc | [
"MIT"
] | null | null | null | from .matrix import *
from .mdp import *
| 13.666667 | 21 | 0.707317 |
f5396eb12cc8263fba01216afab391c845bed755 | 1,606 | py | Python | utils/extractor.py | roywilds/gpt-2-simple | b536b0e53dce130549f88669e7b4864fe4c5abb1 | [
"MIT"
] | null | null | null | utils/extractor.py | roywilds/gpt-2-simple | b536b0e53dce130549f88669e7b4864fe4c5abb1 | [
"MIT"
] | null | null | null | utils/extractor.py | roywilds/gpt-2-simple | b536b0e53dce130549f88669e7b4864fe4c5abb1 | [
"MIT"
] | null | null | null | """
Roy Wilds
2020-09-07
Simple utility to extract the payload from downloaded resources. See the NLP-3 ticket for info about the resources.
The file can be downloaded from https://www.kaggle.com/hsankesara/medium-articles

Example usage:
    python extractor.py --input-file /data/NLP/articles.csv

Known Limitations:
- Very limited error handling.
- The medium_articles() function could use more Pythonic handling of stdout vs. file output.
"""
import csv
import sys


def medium_articles(in_file, out_file):
    if out_file is not None:
        output_file_handler = open(out_file, 'w')

    with open(in_file, 'r') as input_file_handler:
        reader = csv.reader(input_file_handler, delimiter=',', quotechar='"')
        for row in reader:
            if out_file is None:
                print(row[-1])  # Last column has the content of the article
            else:
                output_file_handler.write(row[-1] + '\n')

    if out_file is not None:
        output_file_handler.close()
    return


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description='Extract article text from the Medium articles CSV.')
    parser.add_argument('--input-file', type=str, action='store', default=None)
    parser.add_argument('--output-file', type=str, action='store', default=None)
    args = parser.parse_args()

    if args.input_file is None:
        sys.stderr.write('You must provide the --input-file option that has the path to the articles.csv file.\n')
        sys.exit(1)

    input_file = args.input_file
    output_file = args.output_file
    medium_articles(input_file, output_file)
| 32.12 | 112 | 0.694271 |
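The Known Limitations note in the extractor's docstring points out that `medium_articles()` could handle the stdout-vs-file choice more Pythonically. One common pattern (a sketch under hypothetical naming — `open_output` is not part of the original module) is a small context manager that yields either `sys.stdout` or a freshly opened file, so the writing loop needs only one code path:

```python
import contextlib
import sys


@contextlib.contextmanager
def open_output(path=None):
    """Yield a writable handle: the file at `path`, or sys.stdout when path is None."""
    if path is None:
        # stdout is not ours to close, so just hand it out.
        yield sys.stdout
    else:
        handle = open(path, 'w')
        try:
            yield handle
        finally:
            handle.close()


# medium_articles() could then write both cases through a single loop:
#     with open_output(out_file) as out:
#         for row in reader:
#             out.write(row[-1] + '\n')
```

With a helper like this, the explicit `if out_file is None` branches and the manual `close()` call disappear; the context manager owns the lifetime of whatever handle it yields.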