hexsha
stringlengths 40
40
| size
int64 1
1.03M
| ext
stringclasses 10
values | lang
stringclasses 1
value | max_stars_repo_path
stringlengths 3
239
| max_stars_repo_name
stringlengths 5
130
| max_stars_repo_head_hexsha
stringlengths 40
78
| max_stars_repo_licenses
listlengths 1
10
| max_stars_count
int64 1
191k
⌀ | max_stars_repo_stars_event_min_datetime
stringlengths 24
24
⌀ | max_stars_repo_stars_event_max_datetime
stringlengths 24
24
⌀ | max_issues_repo_path
stringlengths 3
239
| max_issues_repo_name
stringlengths 5
130
| max_issues_repo_head_hexsha
stringlengths 40
78
| max_issues_repo_licenses
listlengths 1
10
| max_issues_count
int64 1
67k
⌀ | max_issues_repo_issues_event_min_datetime
stringlengths 24
24
⌀ | max_issues_repo_issues_event_max_datetime
stringlengths 24
24
⌀ | max_forks_repo_path
stringlengths 3
239
| max_forks_repo_name
stringlengths 5
130
| max_forks_repo_head_hexsha
stringlengths 40
78
| max_forks_repo_licenses
listlengths 1
10
| max_forks_count
int64 1
105k
⌀ | max_forks_repo_forks_event_min_datetime
stringlengths 24
24
⌀ | max_forks_repo_forks_event_max_datetime
stringlengths 24
24
⌀ | content
stringlengths 1
1.03M
| avg_line_length
float64 1
958k
| max_line_length
int64 1
1.03M
| alphanum_fraction
float64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
794bf782ce2a8cb0a8c1657dfc84d12a79b74150
| 70,771
|
py
|
Python
|
src/sage/schemes/elliptic_curves/period_lattice.py
|
rekhabiswal/sage
|
e8633b09919542a65e7e990c8369fee30c7edefd
|
[
"BSL-1.0"
] | null | null | null |
src/sage/schemes/elliptic_curves/period_lattice.py
|
rekhabiswal/sage
|
e8633b09919542a65e7e990c8369fee30c7edefd
|
[
"BSL-1.0"
] | null | null | null |
src/sage/schemes/elliptic_curves/period_lattice.py
|
rekhabiswal/sage
|
e8633b09919542a65e7e990c8369fee30c7edefd
|
[
"BSL-1.0"
] | null | null | null |
# -*- coding: utf-8 -*-
r"""
Period lattices of elliptic curves and related functions
Let `E` be an elliptic curve defined over a number field `K`
(including `\QQ`). We attach a period lattice (a discrete rank 2
subgroup of `\CC`) to each embedding of `K` into `\CC`.
In the case of real embeddings, the lattice is stable under complex
conjugation and is called a real lattice. These have two types:
rectangular, (the real curve has two connected components and positive
discriminant) or non-rectangular (one connected component, negative
discriminant).
The periods are computed to arbitrary precision using the AGM (Gauss's
Arithmetic-Geometric Mean).
EXAMPLES::
sage: K.<a> = NumberField(x^3-2)
sage: E = EllipticCurve([0,1,0,a,a])
First we try a real embedding::
sage: emb = K.embeddings(RealField())[0]
sage: L = E.period_lattice(emb); L
Period lattice associated to Elliptic Curve defined by y^2 = x^3 + x^2 + a*x + a over Number Field in a with defining polynomial x^3 - 2 with respect to the embedding Ring morphism:
From: Number Field in a with defining polynomial x^3 - 2
To: Algebraic Real Field
Defn: a |--> 1.259921049894873?
The first basis period is real::
sage: L.basis()
(3.81452977217855, 1.90726488608927 + 1.34047785962440*I)
sage: L.is_real()
True
For a basis `\omega_1,\omega_2` normalised so that `\omega_1/\omega_2`
is in the fundamental region of the upper half-plane, use the function
``normalised_basis()`` instead::
sage: L.normalised_basis()
(1.90726488608927 - 1.34047785962440*I, -1.90726488608927 - 1.34047785962440*I)
Next a complex embedding::
sage: emb = K.embeddings(ComplexField())[0]
sage: L = E.period_lattice(emb); L
Period lattice associated to Elliptic Curve defined by y^2 = x^3 + x^2 + a*x + a over Number Field in a with defining polynomial x^3 - 2 with respect to the embedding Ring morphism:
From: Number Field in a with defining polynomial x^3 - 2
To: Algebraic Field
Defn: a |--> -0.6299605249474365? - 1.091123635971722?*I
In this case, the basis `\omega_1`, `\omega_2` is always normalised so
that `\tau = \omega_1/\omega_2` is in the fundamental region in the
upper half plane::
sage: w1,w2 = L.basis(); w1,w2
(-1.37588604166076 - 2.58560946624443*I, -2.10339907847356 + 0.428378776460622*I)
sage: L.is_real()
False
sage: tau = w1/w2; tau
0.387694505032876 + 1.30821088214407*I
sage: L.normalised_basis()
(-1.37588604166076 - 2.58560946624443*I, -2.10339907847356 + 0.428378776460622*I)
We test that bug :trac:`8415` (caused by a PARI bug fixed in v2.3.5) is OK::
sage: E = EllipticCurve('37a')
sage: K.<a> = QuadraticField(-7)
sage: EK = E.change_ring(K)
sage: EK.period_lattice(K.complex_embeddings()[0])
Period lattice associated to Elliptic Curve defined by y^2 + y = x^3 + (-1)*x over Number Field in a with defining polynomial x^2 + 7 with respect to the embedding Ring morphism:
From: Number Field in a with defining polynomial x^2 + 7
To: Algebraic Field
Defn: a |--> -2.645751311064591?*I
REFERENCES:
.. [CT] \J. E. Cremona and T. Thongjunthug, The Complex AGM, periods of
elliptic curves over $\CC$ and complex elliptic logarithms.
Journal of Number Theory Volume 133, Issue 8, August 2013, pages
2813-2841.
AUTHORS:
- ?: initial version.
- John Cremona:
- Adapted to handle real embeddings of number fields, September 2008.
- Added basis_matrix function, November 2008
- Added support for complex embeddings, May 2009.
- Added complex elliptic logs, March 2010; enhanced, October 2010.
"""
from sage.modules.free_module import FreeModule_generic_pid
from sage.rings.all import ZZ, QQ, RealField, ComplexField, QQbar, AA
from sage.rings.real_mpfr import is_RealField
from sage.rings.complex_field import is_ComplexField
from sage.rings.real_mpfr import RealNumber as RealNumber
from sage.rings.complex_number import ComplexNumber as ComplexNumber
from sage.rings.number_field.number_field import refine_embedding
from sage.rings.infinity import Infinity
from sage.schemes.elliptic_curves.constructor import EllipticCurve
from sage.misc.cachefunc import cached_method
from sage.structure.richcmp import richcmp_method, richcmp, richcmp_not_equal
from sage.libs.all import pari
class PeriodLattice(FreeModule_generic_pid):
"""
The class for the period lattice of an algebraic variety.
"""
pass
@richcmp_method
class PeriodLattice_ell(PeriodLattice):
r"""
The class for the period lattice of an elliptic curve.
Currently supported are elliptic curves defined over `\QQ`, and
elliptic curves defined over a number field with a real or complex
embedding, where the lattice constructed depends on that
embedding.
"""
def __init__(self, E, embedding=None):
r"""
Initialises the period lattice by storing the elliptic curve and the embedding.
INPUT:
- ``E`` -- an elliptic curve
- ``embedding`` (default: ``None``) -- an embedding of the base
field `K` of ``E`` into a real or complex field. If
``None``:
- use the built-in coercion to `\RR` for `K=\QQ`;
- use the first embedding into `\RR` given by
``K.embeddings(RealField())``, if there are any;
- use the first embedding into `\CC` given by
``K.embeddings(ComplexField())``, if `K` is totally complex.
.. note::
No periods are computed on creation of the lattice; see the
functions ``basis()``, ``normalised_basis()`` and
``real_period()`` for precision setting.
EXAMPLES:
This function is not normally called directly, but will be
called by the period_lattice() function of classes
ell_number_field and ell_rational_field::
sage: from sage.schemes.elliptic_curves.period_lattice import PeriodLattice_ell
sage: E = EllipticCurve('37a')
sage: PeriodLattice_ell(E)
Period lattice associated to Elliptic Curve defined by y^2 + y = x^3 - x over Rational Field
::
sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = PeriodLattice_ell(E,emb); L
Period lattice associated to Elliptic Curve defined by y^2 = x^3 + x^2 + a*x + a over Number Field in a with defining polynomial x^3 - 2 with respect to the embedding Ring morphism:
From: Number Field in a with defining polynomial x^3 - 2
To: Algebraic Real Field
Defn: a |--> 1.259921049894873?
sage: emb = K.embeddings(ComplexField())[0]
sage: L = PeriodLattice_ell(E,emb); L
Period lattice associated to Elliptic Curve defined by y^2 = x^3 + x^2 + a*x + a over Number Field in a with defining polynomial x^3 - 2 with respect to the embedding Ring morphism:
From: Number Field in a with defining polynomial x^3 - 2
To: Algebraic Field
Defn: a |--> -0.6299605249474365? - 1.091123635971722?*I
TESTS::
sage: from sage.schemes.elliptic_curves.period_lattice import PeriodLattice_ell
sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = PeriodLattice_ell(E,emb)
sage: L == loads(dumps(L))
True
"""
# First we cache the elliptic curve with this period lattice:
self.E = E
# Next we cache the embedding into QQbar or AA which extends
# the given embedding:
K = E.base_field()
if embedding is None:
embs = K.embeddings(AA)
real = len(embs)>0
if not real:
embs = K.embeddings(QQbar)
embedding = embs[0]
else:
embedding = refine_embedding(embedding,Infinity)
real = embedding(K.gen()).imag().is_zero()
self.embedding = embedding
# Next we compute and cache (in self.real_flag) the type of
# the lattice: +1 for real rectangular, -1 for real
# non-rectangular, 0 for non-real:
self.real_flag = 0
if real:
self.real_flag = +1
if embedding(E.discriminant())<0:
self.real_flag = -1
# The following algebraic data associated to E and the
# embedding is cached:
#
# Ebar: the curve E base-changed to QQbar (or AA)
# f2: the 2-division polynomial of Ebar
# ei: the roots e1, e2, e3 of f2, as elements of QQbar (or AA)
#
# The ei are used both for period computation and elliptic
# logarithms.
self.Ebar = self.E.change_ring(self.embedding)
self.f2 = self.Ebar.two_division_polynomial()
if self.real_flag == 1: # positive discriminant
self._ei = self.f2.roots(AA,multiplicities=False)
self._ei.sort() # e1 < e2 < e3
e1, e2, e3 = self._ei
elif self.real_flag == -1: # negative discriminant
self._ei = self.f2.roots(QQbar, multiplicities=False)
self._ei = sorted(self._ei, key=lambda z: z.imag())
e1, e3, e2 = self._ei # so e3 is real
e3 = AA(e3)
self._ei = [e1, e2, e3]
else:
self._ei = self.f2.roots(QQbar, multiplicities=False)
e1, e2, e3 = self._ei
# The quantities sqrt(e_i-e_j) are cached (as elements of
# QQbar) to be used in period computations:
self._abc = (e3-e1).sqrt(), (e3-e2).sqrt(), (e2-e1).sqrt()
PeriodLattice.__init__(self, base_ring=ZZ, rank=2, degree=1, sparse=False)
def __richcmp__(self, other, op):
r"""
Comparison function for period lattices
TESTS::
sage: from sage.schemes.elliptic_curves.period_lattice import PeriodLattice_ell
sage: K.<a> = NumberField(x^3-2)
sage: E = EllipticCurve([0,1,0,a,a])
sage: embs = K.embeddings(ComplexField())
sage: L1,L2,L3 = [PeriodLattice_ell(E,e) for e in embs]
sage: L1 < L2 < L3
True
"""
if not isinstance(other, PeriodLattice_ell):
return NotImplemented
lx = self.E
rx = other.E
if lx != rx:
return richcmp_not_equal(lx, rx, op)
a = self.E.base_field().gen()
return richcmp(self.embedding(a), other.embedding(a), op)
def __repr__(self):
"""
Returns the string representation of this period lattice.
EXAMPLES::
sage: E = EllipticCurve('37a')
sage: E.period_lattice()
Period lattice associated to Elliptic Curve defined by y^2 + y = x^3 - x over Rational Field
::
sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(emb); L
Period lattice associated to Elliptic Curve defined by y^2 = x^3 + x^2 + a*x + a over Number Field in a with defining polynomial x^3 - 2 with respect to the embedding Ring morphism:
From: Number Field in a with defining polynomial x^3 - 2
To: Algebraic Real Field
Defn: a |--> 1.259921049894873?
"""
if self.E.base_field() is QQ:
return "Period lattice associated to %s"%(self.E)
else:
return "Period lattice associated to %s with respect to the embedding %s"%(self.E, self.embedding)
def __call__(self, P, prec=None):
r"""
Return the elliptic logarithm of a point `P`.
INPUT:
- ``P`` (point) -- a point on the elliptic curve associated
with this period lattice.
- ``prec`` (default: ``None``) -- precision in bits (default
precision if ``None``).
OUTPUT:
(complex number) The elliptic logarithm of the point `P` with
respect to this period lattice. If `E` is the elliptic curve
and `\sigma:K\to\CC` the embedding, then the returned value `z`
is such that `z\pmod{L}` maps to `\sigma(P)` under the
standard Weierstrass isomorphism from `\CC/L` to `\sigma(E)`.
EXAMPLES::
sage: E = EllipticCurve('389a')
sage: L = E.period_lattice()
sage: E.discriminant() > 0
True
sage: L.real_flag
1
sage: P = E([-1,1])
sage: P.is_on_identity_component ()
False
sage: L(P, prec=96)
0.4793482501902193161295330101 + 0.985868850775824102211203849...*I
sage: Q=E([3,5])
sage: Q.is_on_identity_component()
True
sage: L(Q, prec=96)
1.931128271542559442488585220
Note that this is actually the inverse of the Weierstrass isomorphism::
sage: L.elliptic_exponential(L(Q))
(3.00000000000000 : 5.00000000000000 : 1.00000000000000)
An example with negative discriminant, and a torsion point::
sage: E = EllipticCurve('11a1')
sage: L = E.period_lattice()
sage: E.discriminant() < 0
True
sage: L.real_flag
-1
sage: P = E([16,-61])
sage: L(P)
0.253841860855911
sage: L.real_period() / L(P)
5.00000000000000
"""
return self.elliptic_logarithm(P,prec)
@cached_method
def basis(self, prec=None, algorithm='sage'):
r"""
Return a basis for this period lattice as a 2-tuple.
INPUT:
- ``prec`` (default: ``None``) -- precision in bits (default
precision if ``None``).
- ``algorithm`` (string, default 'sage') -- choice of
implementation (for real embeddings only) between 'sage'
(native Sage implementation) or 'pari' (use the PARI
library: only available for real embeddings).
OUTPUT:
(tuple of Complex) `(\omega_1,\omega_2)` where the lattice is
`\ZZ\omega_1 + \ZZ\omega_2`. If the lattice is real then
`\omega_1` is real and positive, `\Im(\omega_2)>0` and
`\Re(\omega_1/\omega_2)` is either `0` (for rectangular
lattices) or `\frac{1}{2}` (for non-rectangular lattices).
Otherwise, `\omega_1/\omega_2` is in the fundamental region of
the upper half-plane. If the latter normalisation is required
for real lattices, use the function ``normalised_basis()``
instead.
EXAMPLES::
sage: E = EllipticCurve('37a')
sage: E.period_lattice().basis()
(2.99345864623196, 2.45138938198679*I)
This shows that the issue reported at :trac:`3954` is fixed::
sage: E = EllipticCurve('37a')
sage: b1 = E.period_lattice().basis(prec=30)
sage: b2 = E.period_lattice().basis(prec=30)
sage: b1 == b2
True
This shows that the issue reported at :trac:`4064` is fixed::
sage: E = EllipticCurve('37a')
sage: E.period_lattice().basis(prec=30)[0].parent()
Real Field with 30 bits of precision
sage: E.period_lattice().basis(prec=100)[0].parent()
Real Field with 100 bits of precision
::
sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(emb)
sage: L.basis(64)
(3.81452977217854509, 1.90726488608927255 + 1.34047785962440202*I)
sage: emb = K.embeddings(ComplexField())[0]
sage: L = E.period_lattice(emb)
sage: w1,w2 = L.basis(); w1,w2
(-1.37588604166076 - 2.58560946624443*I, -2.10339907847356 + 0.428378776460622*I)
sage: L.is_real()
False
sage: tau = w1/w2; tau
0.387694505032876 + 1.30821088214407*I
"""
# We divide into two cases: (1) Q, or a number field with a
# real embedding; (2) a number field with a complex embedding.
# In each case the periods are computed by a different
# internal function.
if self.is_real():
return self._compute_periods_real(prec=prec, algorithm=algorithm)
else:
return self._compute_periods_complex(prec=prec)
@cached_method
def normalised_basis(self, prec=None, algorithm='sage'):
r"""
Return a normalised basis for this period lattice as a 2-tuple.
INPUT:
- ``prec`` (default: ``None``) -- precision in bits (default
precision if ``None``).
- ``algorithm`` (string, default 'sage') -- choice of
implementation (for real embeddings only) between 'sage'
(native Sage implementation) or 'pari' (use the PARI
library: only available for real embeddings).
OUTPUT:
(tuple of Complex) `(\omega_1,\omega_2)` where the lattice has
the form `\ZZ\omega_1 + \ZZ\omega_2`. The basis is normalised
so that `\omega_1/\omega_2` is in the fundamental region of
the upper half-plane. For an alternative normalisation for
real lattices (with the first period real), use the function
basis() instead.
EXAMPLES::
sage: E = EllipticCurve('37a')
sage: E.period_lattice().normalised_basis()
(2.99345864623196, -2.45138938198679*I)
::
sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(emb)
sage: L.normalised_basis(64)
(1.90726488608927255 - 1.34047785962440202*I, -1.90726488608927255 - 1.34047785962440202*I)
sage: emb = K.embeddings(ComplexField())[0]
sage: L = E.period_lattice(emb)
sage: w1,w2 = L.normalised_basis(); w1,w2
(-1.37588604166076 - 2.58560946624443*I, -2.10339907847356 + 0.428378776460622*I)
sage: L.is_real()
False
sage: tau = w1/w2; tau
0.387694505032876 + 1.30821088214407*I
"""
w1, w2 = periods = self.basis(prec=prec, algorithm=algorithm)
periods, mat = normalise_periods(w1,w2)
return periods
@cached_method
def tau(self, prec=None, algorithm='sage'):
r"""
Return the upper half-plane parameter in the fundamental region.
INPUT:
- ``prec`` (default: ``None``) -- precision in bits (default
precision if ``None``).
- ``algorithm`` (string, default 'sage') -- choice of
implementation (for real embeddings only) between 'sage'
(native Sage implementation) or 'pari' (use the PARI
library: only available for real embeddings).
OUTPUT:
(Complex) `\tau = \omega_1/\omega_2` where the lattice has the
form `\ZZ\omega_1 + \ZZ\omega_2`, normalised so that `\tau =
\omega_1/\omega_2` is in the fundamental region of the upper
half-plane.
EXAMPLES::
sage: E = EllipticCurve('37a')
sage: L = E.period_lattice()
sage: L.tau()
1.22112736076463*I
::
sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(emb)
sage: tau = L.tau(); tau
-0.338718341018919 + 0.940887817679340*I
sage: tau.abs()
1.00000000000000
sage: -0.5 <= tau.real() <= 0.5
True
sage: emb = K.embeddings(ComplexField())[0]
sage: L = E.period_lattice(emb)
sage: tau = L.tau(); tau
0.387694505032876 + 1.30821088214407*I
sage: tau.abs()
1.36444961115933
sage: -0.5 <= tau.real() <= 0.5
True
"""
w1, w2 = self.normalised_basis(prec=prec, algorithm=algorithm)
return w1/w2
@cached_method
def _compute_periods_real(self, prec=None, algorithm='sage'):
r"""
Internal function to compute the periods (real embedding case).
INPUT:
- `prec` (int or ``None`` (default)) -- floating point
precision (in bits); if None, use the default precision.
- `algorithm` (string, default 'sage') -- choice of implementation between
- `pari`: use the PARI library
- `sage`: use a native Sage implementation (with the same underlying algorithm).
OUTPUT:
(tuple of Complex) `(\omega_1,\omega_2)` where the lattice has
the form `\ZZ\omega_1 + \ZZ\omega_2`, `\omega_1` is real and
`\omega_1/\omega_2` has real part either `0` or `frac{1}{2}`.
EXAMPLES::
sage: K.<a> = NumberField(x^3-2)
sage: E = EllipticCurve([0,1,0,a,a])
sage: embs = K.embeddings(CC)
sage: Ls = [E.period_lattice(e) for e in embs]
sage: [L.is_real() for L in Ls]
[False, False, True]
sage: Ls[2]._compute_periods_real(100)
(3.8145297721785450936365098936,
1.9072648860892725468182549468 + 1.3404778596244020196600112394*I)
sage: Ls[2]._compute_periods_real(100, algorithm='pari')
(3.8145297721785450936365098936,
1.9072648860892725468182549468 - 1.3404778596244020196600112394*I)
"""
if prec is None:
prec = RealField().precision()
R = RealField(prec)
C = ComplexField(prec)
if algorithm=='pari':
if self.E.base_field() is QQ:
periods = self.E.pari_curve().omega(prec).sage()
return (R(periods[0]), C(periods[1]))
E_pari = pari([R(self.embedding(ai).real()) for ai in self.E.a_invariants()]).ellinit()
periods = E_pari.omega(prec).sage()
return (R(periods[0]), C(periods[1]))
if algorithm!='sage':
raise ValueError("invalid value of 'algorithm' parameter")
pi = R.pi()
# Up to now everything has been exact in AA or QQbar, but now
# we must go transcendental. Only now is the desired
# precision used!
if self.real_flag == 1: # positive discriminant
a, b, c = (R(x) for x in self._abc)
w1 = R(pi/a.agm(b)) # least real period
w2 = C(0,pi/a.agm(c)) # least pure imaginary period
else:
a = C(self._abc[0])
x, y, r = a.real().abs(), a.imag().abs(), a.abs()
w1 = R(pi/r.agm(x)) # least real period
w2 = R(pi/r.agm(y)) # least pure imaginary period /i
w2 = C(w1,w2)/2
return (w1,w2)
@cached_method
def _compute_periods_complex(self, prec=None, normalise=True):
r"""
Internal function to compute the periods (complex embedding case).
INPUT:
- `prec` (int or ``None`` (default)) -- floating point precision (in bits); if None,
use the default precision.
- `normalise` (bool, default True) -- whether to normalise the
basis after computation.
OUTPUT:
(tuple of Complex) `(\omega_1,\omega_2)` where the lattice has
the form `\ZZ\omega_1 + \ZZ\omega_2`. If `normalise` is
`True`, the basis is normalised so that `(\omega_1/\omega_2)`
is in the fundamental region of the upper half plane.
EXAMPLES::
sage: K.<a> = NumberField(x^3-2)
sage: E = EllipticCurve([0,1,0,a,a])
sage: embs = K.embeddings(CC)
sage: Ls = [E.period_lattice(e) for e in embs]
sage: [L.is_real() for L in Ls]
[False, False, True]
sage: L = Ls[0]
sage: w1,w2 = L._compute_periods_complex(100); w1,w2
(-1.3758860416607626645495991458 - 2.5856094662444337042877901304*I, -2.1033990784735587243397865076 + 0.42837877646062187766760569686*I)
sage: tau = w1/w2; tau
0.38769450503287609349437509561 + 1.3082108821440725664008561928*I
sage: tau.real()
0.38769450503287609349437509561
sage: tau.abs()
1.3644496111593345713923386773
Without normalisation::
sage: w1,w2 = L._compute_periods_complex(normalise=False); w1,w2
(2.10339907847356 - 0.428378776460622*I, 0.727513036812796 - 3.01398824270506*I)
sage: tau = w1/w2; tau
0.293483964608883 + 0.627038168678760*I
sage: tau.real()
0.293483964608883
sage: tau.abs() # > 1
0.692321964451917
"""
if prec is None:
prec = RealField().precision()
C = ComplexField(prec)
# Up to now everything has been exact in AA, but now we
# must go transcendental. Only now is the desired
# precision used!
pi = C.pi()
a, b, c = (C(x) for x in self._abc)
if (a+b).abs() < (a-b).abs(): b=-b
if (a+c).abs() < (a-c).abs(): c=-c
w1 = pi/a.agm(b)
w2 = pi*C.gen()/a.agm(c)
if (w1/w2).imag()<0: w2=-w2
if normalise:
w1w2, mat = normalise_periods(w1,w2)
return w1w2
return (w1,w2)
def is_real(self):
r"""
Return True if this period lattice is real.
EXAMPLES::
sage: f = EllipticCurve('11a')
sage: f.period_lattice().is_real()
True
::
sage: K.<i> = QuadraticField(-1)
sage: E = EllipticCurve(K,[0,0,0,i,2*i])
sage: emb = K.embeddings(ComplexField())[0]
sage: L = E.period_lattice(emb)
sage: L.is_real()
False
::
sage: K.<a> = NumberField(x^3-2)
sage: E = EllipticCurve([0,1,0,a,a])
sage: [E.period_lattice(emb).is_real() for emb in K.embeddings(CC)]
[False, False, True]
ALGORITHM:
The lattice is real if it is associated to a real embedding;
such lattices are stable under conjugation.
"""
return self.real_flag!=0
def is_rectangular(self):
r"""
Return True if this period lattice is rectangular.
.. note::
Only defined for real lattices; a RuntimeError is raised for
non-real lattices.
EXAMPLES::
sage: f = EllipticCurve('11a')
sage: f.period_lattice().basis()
(1.26920930427955, 0.634604652139777 + 1.45881661693850*I)
sage: f.period_lattice().is_rectangular()
False
::
sage: f = EllipticCurve('37b')
sage: f.period_lattice().basis()
(1.08852159290423, 1.76761067023379*I)
sage: f.period_lattice().is_rectangular()
True
ALGORITHM:
The period lattice is rectangular precisely if the
discriminant of the Weierstrass equation is positive, or
equivalently if the number of real components is 2.
"""
if self.is_real():
return self.real_flag == +1
raise RuntimeError("Not defined for non-real lattices.")
def real_period(self, prec = None, algorithm='sage'):
"""
Returns the real period of this period lattice.
INPUT:
- ``prec`` (int or ``None`` (default)) -- real precision in
bits (default real precision if ``None``)
- ``algorithm`` (string, default 'sage') -- choice of
implementation (for real embeddings only) between 'sage'
(native Sage implementation) or 'pari' (use the PARI
library: only available for real embeddings).
.. note::
Only defined for real lattices; a RuntimeError is raised for
non-real lattices.
EXAMPLES::
sage: E = EllipticCurve('37a')
sage: E.period_lattice().real_period()
2.99345864623196
::
sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(emb)
sage: L.real_period(64)
3.81452977217854509
"""
if self.is_real():
return self.basis(prec,algorithm)[0]
raise RuntimeError("Not defined for non-real lattices.")
def omega(self, prec = None):
r"""
Returns the real or complex volume of this period lattice.
INPUT:
- ``prec`` (int or ``None``(default)) -- real precision in
bits (default real precision if ``None``)
OUTPUT:
(real) For real lattices, this is the real period times the
number of connected components. For non-real lattices it is
the complex area.
.. note::
If the curve is defined over `\QQ` and is given by a
*minimal* Weierstrass equation, then this is the correct
period in the BSD conjecture, i.e., it is the least real
period * 2 when the period lattice is rectangular. More
generally the product of this quantity over all embeddings
appears in the generalised BSD formula.
EXAMPLES::
sage: E = EllipticCurve('37a')
sage: E.period_lattice().omega()
5.98691729246392
This is not a minimal model::
sage: E = EllipticCurve([0,-432*6^2])
sage: E.period_lattice().omega()
0.486109385710056
If you were to plug the above omega into the BSD conjecture, you
would get nonsense. The following works though::
sage: F = E.minimal_model()
sage: F.period_lattice().omega()
0.972218771420113
::
sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(emb)
sage: L.omega(64)
3.81452977217854509
A complex example (taken from J.E.Cremona and E.Whitley,
*Periods of cusp forms and elliptic curves over imaginary
quadratic fields*, Mathematics of Computation 62 No. 205
(1994), 407-429)::
sage: K.<i> = QuadraticField(-1)
sage: E = EllipticCurve([0,1-i,i,-i,0])
sage: L = E.period_lattice(K.embeddings(CC)[0])
sage: L.omega()
8.80694160502647
"""
if self.is_real():
n_components = (self.real_flag+3)//2
return self.real_period(prec) * n_components
else:
return self.complex_area()
@cached_method
def basis_matrix(self, prec=None, normalised=False):
r"""
Return the basis matrix of this period lattice.
INPUT:
- ``prec`` (int or ``None``(default)) -- real precision in
bits (default real precision if ``None``).
- ``normalised`` (bool, default None) -- if True and the
embedding is real, use the normalised basis (see
``normalised_basis()``) instead of the default.
OUTPUT:
A 2x2 real matrix whose rows are the lattice basis vectors,
after identifying `\CC` with `\RR^2`.
EXAMPLES::
sage: E = EllipticCurve('37a')
sage: E.period_lattice().basis_matrix()
[ 2.99345864623196 0.000000000000000]
[0.000000000000000 2.45138938198679]
::
sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(emb)
sage: L.basis_matrix(64)
[ 3.81452977217854509 0.000000000000000000]
[ 1.90726488608927255 1.34047785962440202]
See :trac:`4388`::
sage: L = EllipticCurve('11a1').period_lattice()
sage: L.basis_matrix()
[ 1.26920930427955 0.000000000000000]
[0.634604652139777 1.45881661693850]
sage: L.basis_matrix(normalised=True)
[0.634604652139777 -1.45881661693850]
[-1.26920930427955 0.000000000000000]
::
sage: L = EllipticCurve('389a1').period_lattice()
sage: L.basis_matrix()
[ 2.49021256085505 0.000000000000000]
[0.000000000000000 1.97173770155165]
sage: L.basis_matrix(normalised=True)
[ 2.49021256085505 0.000000000000000]
[0.000000000000000 -1.97173770155165]
"""
from sage.matrix.all import Matrix
if normalised:
return Matrix([list(w) for w in self.normalised_basis(prec)])
w1,w2 = self.basis(prec)
if self.is_real():
return Matrix([[w1,0],list(w2)])
else:
return Matrix([list(w) for w in (w1,w2)])
def complex_area(self, prec=None):
"""
Return the area of a fundamental domain for the period lattice
of the elliptic curve.
INPUT:
- ``prec`` (int or ``None``(default)) -- real precision in
bits (default real precision if ``None``).
EXAMPLES::
sage: E = EllipticCurve('37a')
sage: E.period_lattice().complex_area()
7.33813274078958
::
sage: K.<a> = NumberField(x^3-2)
sage: embs = K.embeddings(ComplexField())
sage: E = EllipticCurve([0,1,0,a,a])
sage: [E.period_lattice(emb).is_real() for emb in K.embeddings(CC)]
[False, False, True]
sage: [E.period_lattice(emb).complex_area() for emb in embs]
[6.02796894766694, 6.02796894766694, 5.11329270448345]
"""
w1,w2 = self.basis(prec)
return (w1*w2.conjugate()).imag().abs()
def sigma(self, z, prec = None, flag=0):
r"""
Returns the value of the Weierstrass sigma function for this elliptic curve period lattice.
INPUT:
- ``z`` -- a complex number
- ``prec`` (default: ``None``) -- real precision in bits
(default real precision if None).
- ``flag`` --
0: (default) ???;
1: computes an arbitrary determination of log(sigma(z))
2, 3: same using the product expansion instead of theta series. ???
.. note::
The reason for the ???'s above, is that the PARI
documentation for ellsigma is very vague. Also this is
only implemented for curves defined over `\QQ`.
.. TODO::
This function does not use any of the PeriodLattice functions
and so should be moved to ell_rational_field.
EXAMPLES::
sage: EllipticCurve('389a1').period_lattice().sigma(CC(2,1))
2.60912163570108 - 0.200865080824587*I
"""
if prec is None:
prec = RealField().precision()
try:
return self.E.pari_curve().ellsigma(z, flag, precision=prec)
except AttributeError:
raise NotImplementedError("sigma function not yet implemented for period lattices of curves not defined over Q")
def curve(self):
r"""
Return the elliptic curve associated with this period lattice.
EXAMPLES::
sage: E = EllipticCurve('37a')
sage: L = E.period_lattice()
sage: L.curve() is E
True
::
sage: K.<a> = NumberField(x^3-2)
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(K.embeddings(RealField())[0])
sage: L.curve() is E
True
sage: L = E.period_lattice(K.embeddings(ComplexField())[0])
sage: L.curve() is E
True
"""
return self.E
def ei(self):
r"""
Return the x-coordinates of the 2-division points of the elliptic curve associated with this period lattice, as elements of QQbar.
EXAMPLES::
sage: E = EllipticCurve('37a')
sage: L = E.period_lattice()
sage: L.ei()
[-1.107159871688768?, 0.2695944364054446?, 0.8375654352833230?]
In the following example, we should have one purely real 2-division point coordinate,
and two conjugate purely imaginary coordinates.
::
sage: K.<a> = NumberField(x^3-2)
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(K.embeddings(RealField())[0])
sage: x1,x2,x3 = L.ei()
sage: abs(x1.real())+abs(x2.real())<1e-14
True
sage: x1.imag(),x2.imag(),x3
(-1.122462048309373?, 1.122462048309373?, -1.000000000000000?)
::
sage: L = E.period_lattice(K.embeddings(ComplexField())[0])
sage: L.ei()
[-1.000000000000000? + 0.?e-1...*I,
-0.9720806486198328? - 0.561231024154687?*I,
0.9720806486198328? + 0.561231024154687?*I]
"""
return self._ei
def coordinates(self, z, rounding=None):
r"""
Returns the coordinates of a complex number w.r.t. the lattice basis
INPUT:
- ``z`` (complex) -- A complex number.
- ``rounding`` (default ``None``) -- whether and how to round the
output (see below).
OUTPUT:
When ``rounding`` is ``None`` (the default), returns a tuple
of reals `x`, `y` such that `z=xw_1+yw_2` where `w_1`, `w_2`
are a basis for the lattice (normalised in the case of complex
embeddings).
When ``rounding`` is 'round', returns a tuple of integers `n_1`,
`n_2` which are the closest integers to the `x`, `y` defined
above. If `z` is in the lattice these are the coordinates of
`z` with respect to the lattice basis.
When ``rounding`` is 'floor', returns a tuple of integers
`n_1`, `n_2` which are the integer parts to the `x`, `y`
defined above. These are used in :meth:`.reduce`
EXAMPLES::
sage: E = EllipticCurve('389a')
sage: L = E.period_lattice()
sage: w1, w2 = L.basis(prec=100)
sage: P = E([-1,1])
sage: zP = P.elliptic_logarithm(precision=100); zP
0.47934825019021931612953301006 + 0.98586885077582410221120384908*I
sage: L.coordinates(zP)
(0.19249290511394227352563996419, 0.50000000000000000000000000000)
sage: sum([x*w for x,w in zip(L.coordinates(zP), L.basis(prec=100))])
0.47934825019021931612953301006 + 0.98586885077582410221120384908*I
sage: L.coordinates(12*w1+23*w2)
(12.000000000000000000000000000, 23.000000000000000000000000000)
sage: L.coordinates(12*w1+23*w2, rounding='floor')
(11, 22)
sage: L.coordinates(12*w1+23*w2, rounding='round')
(12, 23)
"""
C = z.parent()
if is_RealField(C):
C = ComplexField(C.precision())
z = C(z)
else:
if is_ComplexField(C):
pass
else:
try:
C = ComplexField()
z = C(z)
except TypeError:
raise TypeError("%s is not a complex number"%z)
prec = C.precision()
from sage.matrix.all import Matrix
from sage.modules.all import vector
if self.real_flag:
w1,w2 = self.basis(prec)
M = Matrix([[w1,0], list(w2)])**(-1)
else:
w1,w2 = self.normalised_basis(prec)
M = Matrix([list(w1), list(w2)])**(-1)
u,v = vector(z)*M
# Now z = u*w1+v*w2
if rounding=='round':
return u.round(), v.round()
if rounding=='floor':
return u.floor(), v.floor()
return u,v
def reduce(self, z):
r"""
Reduce a complex number modulo the lattice
INPUT:
- ``z`` (complex) -- A complex number.
OUTPUT:
(complex) the reduction of `z` modulo the lattice, lying in
the fundamental period parallelogram with respect to the
lattice basis. For curves defined over the reals (i.e. real
embeddings) the output will be real when possible.
EXAMPLES::
sage: E = EllipticCurve('389a')
sage: L = E.period_lattice()
sage: w1, w2 = L.basis(prec=100)
sage: P = E([-1,1])
sage: zP = P.elliptic_logarithm(precision=100); zP
0.47934825019021931612953301006 + 0.98586885077582410221120384908*I
sage: z = zP+10*w1-20*w2; z
25.381473858740770069343110929 - 38.448885180257139986236950114*I
sage: L.reduce(z)
0.47934825019021931612953301006 + 0.98586885077582410221120384908*I
sage: L.elliptic_logarithm(2*P)
0.958696500380439
sage: L.reduce(L.elliptic_logarithm(2*P))
0.958696500380439
sage: L.reduce(L.elliptic_logarithm(2*P)+10*w1-20*w2)
0.958696500380444
"""
C = z.parent()
if is_RealField(C):
C = ComplexField(C.precision())
z = C(z)
elif is_ComplexField(C):
pass
else:
try:
C = ComplexField()
z = C(z)
except TypeError:
raise TypeError("%s is not a complex number" % z)
prec = C.precision()
if self.real_flag:
w1, w2 = self.basis(prec) # w1 real
else:
w1, w2 = self.normalised_basis(prec)
u, v = self.coordinates(z, rounding='floor')
z = z-u*w1-v*w2
# Final adjustments for the real case.
# NB We assume here that when the embedding is real then the
# point is also real!
if self.real_flag == 0: return z
if self.real_flag == -1:
k = (z.imag()/w2.imag()).round()
z = z-k*w2
return C(z.real(),0)
if ((2*z.imag()/w2.imag()).round())%2:
return C(z.real(),w2.imag()/2)
else:
return C(z.real(),0)
def e_log_RC(self, xP, yP, prec=None, reduce=True):
r"""
Return the elliptic logarithm of a real or complex point.
- ``xP, yP`` (real or complex) -- Coordinates of a point on
the embedded elliptic curve associated with this period
lattice.
- ``prec`` (default: ``None``) -- real precision in bits
(default real precision if None).
- ``reduce`` (default: ``True``) -- if ``True``, the result
is reduced with respect to the period lattice basis.
OUTPUT:
(complex number) The elliptic logarithm of the point `(xP,yP)`
with respect to this period lattice. If `E` is the elliptic
curve and `\sigma:K\to\CC` the embedding, the returned
value `z` is such that `z\pmod{L}` maps to `(xP,yP)=\sigma(P)`
under the standard Weierstrass isomorphism from `\CC/L` to
`\sigma(E)`. If ``reduce`` is ``True``, the output is reduced
so that it is in the fundamental period parallelogram with
respect to the normalised lattice basis.
ALGORITHM:
Uses the complex AGM. See [CT]_ for details.
EXAMPLES::
sage: E = EllipticCurve('389a')
sage: L = E.period_lattice()
sage: P = E([-1,1])
sage: xP, yP = [RR(c) for c in P.xy()]
The elliptic log from the real coordinates::
sage: L.e_log_RC(xP, yP)
0.479348250190219 + 0.985868850775824*I
The same elliptic log from the algebraic point::
sage: L(P)
0.479348250190219 + 0.985868850775824*I
A number field example::
sage: K.<a> = NumberField(x^3-2)
sage: E = EllipticCurve([0,0,0,0,a])
sage: v = K.real_places()[0]
sage: L = E.period_lattice(v)
sage: P = E.lift_x(1/3*a^2 + a + 5/3)
sage: L(P)
3.51086196882538
sage: xP, yP = [v(c) for c in P.xy()]
sage: L.e_log_RC(xP, yP)
3.51086196882538
Elliptic logs of real points which do not come from algebraic
points::
sage: ER = EllipticCurve([v(ai) for ai in E.a_invariants()])
sage: P = ER.lift_x(12.34)
sage: xP, yP = P.xy()
sage: xP, yP
(12.3400000000000, 43.3628968710567)
sage: L.e_log_RC(xP, yP)
3.76298229503967
sage: xP, yP = ER.lift_x(0).xy()
sage: L.e_log_RC(xP, yP)
2.69842609082114
Elliptic logs of complex points::
sage: v = K.complex_embeddings()[0]
sage: L = E.period_lattice(v)
sage: P = E.lift_x(1/3*a^2 + a + 5/3)
sage: L(P)
1.68207104397706 - 1.87873661686704*I
sage: xP, yP = [v(c) for c in P.xy()]
sage: L.e_log_RC(xP, yP)
1.68207104397706 - 1.87873661686704*I
sage: EC = EllipticCurve([v(ai) for ai in E.a_invariants()])
sage: xP, yP = EC.lift_x(0).xy()
sage: L.e_log_RC(xP, yP)
1.03355715602040 - 0.867257428417356*I
"""
if prec is None:
prec = RealField().precision()
# Note: using log2(prec) + 3 guard bits is usually enough.
# To avoid computing a logarithm, we use 40 guard bits which
# should be largely enough in practice.
prec2 = prec + 40
R = RealField(prec2)
C = ComplexField(prec2)
e1,e2,e3 = self._ei
a1,a2,a3 = [self.embedding(a) for a in self.E.ainvs()[:3]]
wP = 2*yP+a1*xP+a3
# We treat the case of 2-torsion points separately. (Note
# that Cohen's algorithm does not handle these properly.)
if wP.is_zero(): # 2-torsion treated separately
w1,w2 = self._compute_periods_complex(prec,normalise=False)
if xP==e1:
z = w2/2
else:
if xP==e3:
z = w1/2
else:
z = (w1+w2)/2
if reduce:
z = self.reduce(z)
return z
# NB The first block of code works fine for real embeddings as
# well as complex embeddings. The special code for real
# embeddings uses only real arithmetic in the iteration, and is
# based on Cremona and Thongjunthug.
# An older version, based on Cohen's Algorithm 7.4.8 also uses
# only real arithmetic, and gives different normalisations,
# but also causes problems (see #10026). It is left in but
# commented out below.
if self.real_flag==0: # complex case
a = C((e1-e3).sqrt())
b = C((e1-e2).sqrt())
if (a+b).abs() < (a-b).abs(): b=-b
r = C(((xP-e3)/(xP-e2)).sqrt())
if r.real()<0: r=-r
t = -C(wP)/(2*r*(xP-e2))
# eps controls the end of the loop. Since we aim at a target
# precision of prec bits, eps = 2^(-prec) is enough.
eps = R(1) >> prec
while True:
s = b*r+a
a, b = (a+b)/2, (a*b).sqrt()
if (a+b).abs() < (a-b).abs(): b=-b
r = (a*(r+1)/s).sqrt()
if (r.abs()-1).abs() < eps: break
if r.real()<0: r=-r
t *= r
z = ((a/t).arctan())/a
z = ComplexField(prec)(z)
if reduce:
z = self.reduce(z)
return z
if self.real_flag==-1: # real, connected case
z = C(self._abc[0]) # sqrt(e3-e1)
a, y, b = z.real(), z.imag(), z.abs()
uv = (xP-e1).sqrt()
u, v = uv.real().abs(), uv.imag().abs()
r = (u*a/(u*a+v*y)).sqrt()
t = -r*R(wP)/(2*(u**2+v**2))
on_egg = False
else: # real, disconnected case
a = R(e3-e1).sqrt()
b = R(e3-e2).sqrt()
if (a+b).abs() < (a-b).abs(): b=-b
on_egg = (xP<e3)
if on_egg:
r = a/R(e3-xP).sqrt()
t = r*R(wP)/(2*R(xP-e1))
else:
r = R((xP-e1)/(xP-e2)).sqrt()
t = -R(wP)/(2*r*R(xP-e2))
# eps controls the end of the loop. Since we aim at a target
# precision of prec bits, eps = 2^(-prec) is enough.
eps = R(1) >> prec
while True:
s = b*r+a
a, b = (a+b)/2, (a*b).sqrt()
r = (a*(r+1)/s).sqrt()
if (r-1).abs() < eps: break
t *= r
z = ((a/t).arctan())/a
if on_egg:
w1,w2 = self._compute_periods_real(prec)
z += w2/2
z = ComplexField(prec)(z)
if reduce:
z = self.reduce(z)
return z
def elliptic_logarithm(self, P, prec=None, reduce=True):
r"""
Return the elliptic logarithm of a point.
INPUT:
- ``P`` (point) -- A point on the elliptic curve associated
with this period lattice.
- ``prec`` (default: ``None``) -- real precision in bits
(default real precision if None).
- ``reduce`` (default: ``True``) -- if ``True``, the result
is reduced with respect to the period lattice basis.
OUTPUT:
(complex number) The elliptic logarithm of the point `P` with
respect to this period lattice. If `E` is the elliptic curve
and `\sigma:K\to\CC` the embedding, the returned value `z`
is such that `z\pmod{L}` maps to `\sigma(P)` under the
standard Weierstrass isomorphism from `\CC/L` to `\sigma(E)`.
If ``reduce`` is ``True``, the output is reduced so that it is
in the fundamental period parallelogram with respect to the
normalised lattice basis.
ALGORITHM:
Uses the complex AGM. See [CT]_ for details.
EXAMPLES::
sage: E = EllipticCurve('389a')
sage: L = E.period_lattice()
sage: E.discriminant() > 0
True
sage: L.real_flag
1
sage: P = E([-1,1])
sage: P.is_on_identity_component ()
False
sage: L.elliptic_logarithm(P, prec=96)
0.4793482501902193161295330101 + 0.9858688507758241022112038491*I
sage: Q=E([3,5])
sage: Q.is_on_identity_component()
True
sage: L.elliptic_logarithm(Q, prec=96)
1.931128271542559442488585220
Note that this is actually the inverse of the Weierstrass isomorphism::
sage: L.elliptic_exponential(_)
(3.00000000000000000000000000... : 5.00000000000000000000000000... : 1.000000000000000000000000000)
An example with negative discriminant, and a torsion point::
sage: E = EllipticCurve('11a1')
sage: L = E.period_lattice()
sage: E.discriminant() < 0
True
sage: L.real_flag
-1
sage: P = E([16,-61])
sage: L.elliptic_logarithm(P)
0.253841860855911
sage: L.real_period() / L.elliptic_logarithm(P)
5.00000000000000
An example where precision is problematic::
sage: E = EllipticCurve([1, 0, 1, -85357462, 303528987048]) #18074g1
sage: P = E([4458713781401/835903744, -64466909836503771/24167649046528, 1])
sage: L = E.period_lattice()
sage: L.ei()
[5334.003952567705? - 1.964393150436?e-6*I, 5334.003952567705? + 1.964393150436?e-6*I, -10668.25790513541?]
sage: L.elliptic_logarithm(P,prec=100)
0.27656204014107061464076203097
Some complex examples, taken from the paper by Cremona and Thongjunthug::
sage: K.<i> = QuadraticField(-1)
sage: a4 = 9*i-10
sage: a6 = 21-i
sage: E = EllipticCurve([0,0,0,a4,a6])
sage: e1 = 3-2*i; e2 = 1+i; e3 = -4+i
sage: emb = K.embeddings(CC)[1]
sage: L = E.period_lattice(emb)
sage: P = E(2-i,4+2*i)
By default, the output is reduced with respect to the
normalised lattice basis, so that its coordinates with respect
to that basis lie in the interval [0,1)::
sage: z = L.elliptic_logarithm(P,prec=100); z
0.70448375537782208460499649302 - 0.79246725643650979858266018068*I
sage: L.coordinates(z)
(0.46247636364807931766105406092, 0.79497588726808704200760395829)
Using ``reduce=False`` this step can be omitted. In this case
the coordinates are usually in the interval [-0.5,0.5), but
this is not guaranteed. This option is mainly for testing
purposes::
sage: z = L.elliptic_logarithm(P,prec=100, reduce=False); z
0.57002153834710752778063503023 + 0.46476340520469798857457031393*I
sage: L.coordinates(z)
(0.46247636364807931766105406092, -0.20502411273191295799239604171)
The elliptic logs of the 2-torsion points are half-periods::
sage: L.elliptic_logarithm(E(e1,0),prec=100)
0.64607575874356525952487867052 + 0.22379609053909448304176885364*I
sage: L.elliptic_logarithm(E(e2,0),prec=100)
0.71330686725892253793705940192 - 0.40481924028150941053684639367*I
sage: L.elliptic_logarithm(E(e3,0),prec=100)
0.067231108515357278412180731396 - 0.62861533082060389357861524731*I
We check this by doubling and seeing that the resulting
coordinates are integers::
sage: L.coordinates(2*L.elliptic_logarithm(E(e1,0),prec=100))
(1.0000000000000000000000000000, 0.00000000000000000000000000000)
sage: L.coordinates(2*L.elliptic_logarithm(E(e2,0),prec=100))
(1.0000000000000000000000000000, 1.0000000000000000000000000000)
sage: L.coordinates(2*L.elliptic_logarithm(E(e3,0),prec=100))
(0.00000000000000000000000000000, 1.0000000000000000000000000000)
::
sage: a4 = -78*i + 104
sage: a6 = -216*i - 312
sage: E = EllipticCurve([0,0,0,a4,a6])
sage: emb = K.embeddings(CC)[1]
sage: L = E.period_lattice(emb)
sage: P = E(3+2*i,14-7*i)
sage: L.elliptic_logarithm(P)
0.297147783912228 - 0.546125549639461*I
sage: L.coordinates(L.elliptic_logarithm(P))
(0.628653378040238, 0.371417754610223)
sage: e1 = 1+3*i; e2 = -4-12*i; e3=-e1-e2
sage: L.coordinates(L.elliptic_logarithm(E(e1,0)))
(0.500000000000000, 0.500000000000000)
sage: L.coordinates(L.elliptic_logarithm(E(e2,0)))
(1.00000000000000, 0.500000000000000)
sage: L.coordinates(L.elliptic_logarithm(E(e3,0)))
(0.500000000000000, 0.000000000000000)
TESTS:
See :trac:`10026` and :trac:`11767`::
sage: K.<w> = QuadraticField(2)
sage: E = EllipticCurve([ 0, -1, 1, -3*w -4, 3*w + 4 ])
sage: T = E.simon_two_descent(lim1=20,lim3=5,limtriv=20)
sage: P,Q = T[2]
sage: embs = K.embeddings(CC)
sage: Lambda = E.period_lattice(embs[0])
sage: Lambda.elliptic_logarithm(P+3*Q, 100)
4.7100131126199672766973600998
sage: R.<x> = QQ[]
sage: K.<a> = NumberField(x^2 + x + 5)
sage: E = EllipticCurve(K, [0,0,1,-3,-5])
sage: P = E([0,a])
sage: Lambda = P.curve().period_lattice(K.embeddings(ComplexField(600))[0])
sage: Lambda.elliptic_logarithm(P, prec=600)
-0.842248166487739393375018008381693990800588864069506187033873183845246233548058477561706400464057832396643843146464236956684557207157300006542470428493573195030603817094900751609464 - 0.571366031453267388121279381354098224265947866751130917440598461117775339240176310729173301979590106474259885638797913383502735083088736326391919063211421189027226502851390118943491*I
sage: K.<a> = QuadraticField(-5)
sage: E = EllipticCurve([1,1,a,a,0])
sage: P = E(0,0)
sage: L = P.curve().period_lattice(K.embeddings(ComplexField())[0])
sage: L.elliptic_logarithm(P, prec=500)
1.17058357737548897849026170185581196033579563441850967539191867385734983296504066660506637438866628981886518901958717288150400849746892393771983141354 - 1.13513899565966043682474529757126359416758251309237866586896869548539516543734207347695898664875799307727928332953834601460994992792519799260968053875*I
sage: L.elliptic_logarithm(P, prec=1000)
1.17058357737548897849026170185581196033579563441850967539191867385734983296504066660506637438866628981886518901958717288150400849746892393771983141354014895386251320571643977497740116710952913769943240797618468987304985625823413440999754037939123032233879499904283600304184828809773650066658885672885 - 1.13513899565966043682474529757126359416758251309237866586896869548539516543734207347695898664875799307727928332953834601460994992792519799260968053875387282656993476491590607092182964878750169490985439873220720963653658829712494879003124071110818175013453207439440032582917366703476398880865439217473*I
"""
if not P.curve() is self.E:
raise ValueError("Point is on the wrong curve")
if prec is None:
prec = RealField().precision()
if P.is_zero():
return ComplexField(prec)(0)
# Compute the real or complex coordinates of P:
xP, yP = [self.embedding(coord) for coord in P.xy()]
# The real work is done over R or C now:
return self.e_log_RC(xP, yP, prec, reduce=reduce)
def elliptic_exponential(self, z, to_curve=True):
r"""
Return the elliptic exponential of a complex number.
INPUT:
- ``z`` (complex) -- A complex number (viewed modulo this period lattice).
- ``to_curve`` (bool, default True): see below.
OUTPUT:
- If ``to_curve`` is False, a 2-tuple of real or complex
numbers representing the point `(x,y) = (\wp(z),\wp'(z))`
where `\wp` denotes the Weierstrass `\wp`-function with
respect to this lattice.
- If ``to_curve`` is True, the point `(X,Y) =
(x-b_2/12,y-(a_1(x-b_2/12)-a_3)/2)` as a point in `E(\RR)`
or `E(\CC)`, with `(x,y) = (\wp(z),\wp'(z))` as above, where
`E` is the elliptic curve over `\RR` or `\CC` whose period
lattice this is.
- If the lattice is real and `z` is also real then the output
is a pair of real numbers if ``to_curve`` is True, or a
point in `E(\RR)` if ``to_curve`` is False.
.. note::
The precision is taken from that of the input ``z``.
EXAMPLES::
sage: E = EllipticCurve([1,1,1,-8,6])
sage: P = E(1,-2)
sage: L = E.period_lattice()
sage: z = L(P); z
1.17044757240090
sage: L.elliptic_exponential(z)
(0.999999999999999 : -2.00000000000000 : 1.00000000000000)
sage: _.curve()
Elliptic Curve defined by y^2 + 1.00000000000000*x*y + 1.00000000000000*y = x^3 + 1.00000000000000*x^2 - 8.00000000000000*x + 6.00000000000000 over Real Field with 53 bits of precision
sage: L.elliptic_exponential(z,to_curve=False)
(1.41666666666667, -2.00000000000000)
sage: z = L(P,prec=201); z
1.17044757240089592298992188482371493504472561677451007994189
sage: L.elliptic_exponential(z)
(1.00000000000000000000000000000000000000000000000000000000000 : -2.00000000000000000000000000000000000000000000000000000000000 : 1.00000000000000000000000000000000000000000000000000000000000)
Examples over number fields::
sage: x = polygen(QQ)
sage: K.<a> = NumberField(x^3-2)
sage: embs = K.embeddings(CC)
sage: E = EllipticCurve('37a')
sage: EK = E.change_ring(K)
sage: Li = [EK.period_lattice(e) for e in embs]
sage: P = EK(-1,-1)
sage: Q = EK(a-1,1-a^2)
sage: zi = [L.elliptic_logarithm(P) for L in Li]
sage: [c.real() for c in Li[0].elliptic_exponential(zi[0])]
[-1.00000000000000, -1.00000000000000, 1.00000000000000]
sage: [c.real() for c in Li[0].elliptic_exponential(zi[1])]
[-1.00000000000000, -1.00000000000000, 1.00000000000000]
sage: [c.real() for c in Li[0].elliptic_exponential(zi[2])]
[-1.00000000000000, -1.00000000000000, 1.00000000000000]
sage: zi = [L.elliptic_logarithm(Q) for L in Li]
sage: Li[0].elliptic_exponential(zi[0])
(-1.62996052494744 - 1.09112363597172*I : 1.79370052598410 - 1.37472963699860*I : 1.00000000000000)
sage: [embs[0](c) for c in Q]
[-1.62996052494744 - 1.09112363597172*I, 1.79370052598410 - 1.37472963699860*I, 1.00000000000000]
sage: Li[1].elliptic_exponential(zi[1])
(-1.62996052494744 + 1.09112363597172*I : 1.79370052598410 + 1.37472963699860*I : 1.00000000000000)
sage: [embs[1](c) for c in Q]
[-1.62996052494744 + 1.09112363597172*I, 1.79370052598410 + 1.37472963699860*I, 1.00000000000000]
sage: [c.real() for c in Li[2].elliptic_exponential(zi[2])]
[0.259921049894873, -0.587401051968199, 1.00000000000000]
sage: [embs[2](c) for c in Q]
[0.259921049894873, -0.587401051968200, 1.00000000000000]
Test to show that :trac:`8820` is fixed::
sage: E = EllipticCurve('37a')
sage: K.<a> = QuadraticField(-5)
sage: L = E.change_ring(K).period_lattice(K.places()[0])
sage: L.elliptic_exponential(CDF(.1,.1))
(0.0000142854026029... - 49.9960001066650*I : 249.520141250950 + 250.019855549131*I : 1.00000000000000)
sage: L.elliptic_exponential(CDF(.1,.1), to_curve=False)
(0.0000142854026029447 - 49.9960001066650*I, 500.040282501900 + 500.039711098263*I)
`z=0` is treated as a special case::
sage: E = EllipticCurve([1,1,1,-8,6])
sage: L = E.period_lattice()
sage: L.elliptic_exponential(0)
(0.000000000000000 : 1.00000000000000 : 0.000000000000000)
sage: L.elliptic_exponential(0, to_curve=False)
(+infinity, +infinity)
::
sage: E = EllipticCurve('37a')
sage: K.<a> = QuadraticField(-5)
sage: L = E.change_ring(K).period_lattice(K.places()[0])
sage: P = L.elliptic_exponential(0); P
(0.000000000000000 : 1.00000000000000 : 0.000000000000000)
sage: P.parent()
Abelian group of points on Elliptic Curve defined by y^2 + 1.00000000000000*y = x^3 + (-1.00000000000000)*x over Complex Field with 53 bits of precision
Very small `z` are handled properly (see :trac:`8820`)::
sage: K.<a> = QuadraticField(-1)
sage: E = EllipticCurve([0,0,0,a,0])
sage: L = E.period_lattice(K.complex_embeddings()[0])
sage: L.elliptic_exponential(1e-100)
(0.000000000000000 : 1.00000000000000 : 0.000000000000000)
The elliptic exponential of `z` is returned as (0 : 1 : 0) if
the coordinates of z with respect to the period lattice are
approximately integral::
sage: (100/log(2.0,10))/0.8
415.241011860920
sage: L.elliptic_exponential((RealField(415)(1e-100))).is_zero()
True
sage: L.elliptic_exponential((RealField(420)(1e-100))).is_zero()
False
"""
C = z.parent()
z_is_real = False
if is_RealField(C):
z_is_real = True
C = ComplexField(C.precision())
z = C(z)
else:
if is_ComplexField(C):
z_is_real = z.is_real()
else:
try:
C = ComplexField()
z = C(z)
z_is_real = z.is_real()
except TypeError:
raise TypeError("%s is not a complex number"%z)
prec = C.precision()
# test for the point at infinity:
eps = (C(2)**(-0.8*prec)).real() ## to test integrality w.r.t. lattice within 20%
if all([(t.round()-t).abs() < eps for t in self.coordinates(z)]):
K = z.parent()
if to_curve:
return self.curve().change_ring(K)(0)
else:
return (K('+infinity'),K('+infinity'))
# general number field code (including QQ):
# We do not use PARI's ellztopoint function since it is only
# defined for curves over the reals (note that PARI only
# computes the period lattice basis in that case). But Sage
# can compute the period lattice basis over CC, and then
# PARI's ellwp function works fine.
# NB converting the PARI values to Sage values might land up
# in real/complex fields of spuriously higher precision than
# the input, since PARI's precision is in word-size chunks.
# So we force the results back into the real/complex fields of
# the same precision as the input.
x, y = pari(self.basis(prec=prec)).ellwp(z, flag=1)
x, y = [C(t) for t in (x,y)]
if self.real_flag and z_is_real:
x = x.real()
y = y.real()
if to_curve:
a1,a2,a3,a4,a6 = [self.embedding(a) for a in self.E.ainvs()]
b2 = self.embedding(self.E.b2())
x = x - b2 / 12
y = (y - (a1 * x + a3)) / 2
K = x.parent()
EK = EllipticCurve(K,[a1,a2,a3,a4,a6])
return EK.point((x,y,K(1)), check=False)
else:
return (x,y)
def reduce_tau(tau):
r"""
Transform a point in the upper half plane to the fundamental region.
INPUT:
- ``tau`` (complex) -- a complex number with positive imaginary part
OUTPUT:
(tuple) `(\tau',[a,b,c,d])` where `a,b,c,d` are integers such that
- `ad-bc=1`;
- `\tau`=(a\tau+b)/(c\tau+d)`;
- `|\tau'|\ge1`;
- `|\Re(\tau')|\le\frac{1}{2}`.
EXAMPLES::
sage: from sage.schemes.elliptic_curves.period_lattice import reduce_tau
sage: reduce_tau(CC(1.23,3.45))
(0.230000000000000 + 3.45000000000000*I, [1, -1, 0, 1])
sage: reduce_tau(CC(1.23,0.0345))
(-0.463960069171512 + 1.35591888067914*I, [-5, 6, 4, -5])
sage: reduce_tau(CC(1.23,0.0000345))
(0.130000000001761 + 2.89855072463768*I, [13, -16, 100, -123])
"""
assert tau.imag() > 0
a, b = ZZ(1), ZZ(0)
c, d = b, a
k = tau.real().round()
tau -= k
a -= k*c
b -= k*d
while tau.abs()<0.999:
tau = -1/tau
a, b, c, d = c, d, -a, -b
k = tau.real().round()
tau -= k
a -= k*c
b -= k*d
assert a*d-b*c==1
assert tau.abs()>=0.999 and tau.real().abs() <= 0.5
return tau, [a,b,c,d]
def normalise_periods(w1, w2):
r"""
Normalise the period basis `(w_1,w_2)` so that `w_1/w_2` is in the fundamental region.
INPUT:
- ``w1,w2`` (complex) -- two complex numbers with non-real ratio
OUTPUT:
(tuple) `((\omega_1',\omega_2'),[a,b,c,d])` where `a,b,c,d` are
integers such that
- `ad-bc=\pm1`;
- `(\omega_1',\omega_2') = (a\omega_1+b\omega_2,c\omega_1+d\omega_2)`;
- `\tau=\omega_1'/\omega_2'` is in the upper half plane;
- `|\tau|\ge1` and `|\Re(\tau)|\le\frac{1}{2}`.
EXAMPLES::
sage: from sage.schemes.elliptic_curves.period_lattice import reduce_tau, normalise_periods
sage: w1 = CC(1.234, 3.456)
sage: w2 = CC(1.234, 3.456000001)
sage: w1/w2 # in lower half plane!
0.999999999743367 - 9.16334785827644e-11*I
sage: w1w2, abcd = normalise_periods(w1,w2)
sage: a,b,c,d = abcd
sage: w1w2 == (a*w1+b*w2, c*w1+d*w2)
True
sage: w1w2[0]/w1w2[1]
1.23400010389203e9*I
sage: a*d-b*c # note change of orientation
-1
"""
tau = w1/w2
s = +1
if tau.imag()<0:
w2 = -w2
tau = -tau
s = -1
tau, abcd = reduce_tau(tau)
a, b, c, d = abcd
if s<0:
abcd = (a,-b,c,-d)
return (a*w1+b*w2,c*w1+d*w2), abcd
def extended_agm_iteration(a,b,c):
r"""
Internal function for the extended AGM used in elliptic logarithm computation.
INPUT:
- ``a``, ``b``, ``c`` (real or complex) -- three real or complex numbers.
OUTPUT:
(3-tuple) `(a_0,b_0,c_0)`, the limit of the iteration `(a,b,c) \mapsto ((a+b)/2,\sqrt{ab},(c+\sqrt(c^2+b^2-a^2))/2)`.
EXAMPLES::
sage: from sage.schemes.elliptic_curves.period_lattice import extended_agm_iteration
sage: extended_agm_iteration(RR(1),RR(2),RR(3))
(1.45679103104691, 1.45679103104691, 3.21245294970054)
sage: extended_agm_iteration(CC(1,2),CC(2,3),CC(3,4))
(1.46242448156430 + 2.47791311676267*I,
1.46242448156430 + 2.47791311676267*I,
3.22202144343535 + 4.28383734262540*I)
TESTS::
sage: extended_agm_iteration(1,2,3)
Traceback (most recent call last):
...
ValueError: values must be real or complex numbers
"""
if not isinstance(a, (RealNumber,ComplexNumber)):
raise ValueError("values must be real or complex numbers")
eps = a.parent().one().real()>>(a.parent().precision()-10)
while True:
a1 = (a + b)/2
b1 = (a*b).sqrt()
delta = (b**2 - a**2)/c**2
f = (1 + (1 + delta).sqrt())/2
if (f.abs()-1).abs() < eps:
return a,b,c
c*=f
a,b = a1,b1
| 36.745067
| 619
| 0.57265
|
794bf7af7df759e8ac5f2051df559272cbad0c82
| 3,525
|
py
|
Python
|
ggpy/gdl/parser.py
|
hobson/ggpy
|
4e6e6e876c3a4294cd711647051da2d9c1836b60
|
[
"MIT"
] | 1
|
2015-01-26T19:07:45.000Z
|
2015-01-26T19:07:45.000Z
|
ggpy/gdl/parser.py
|
hobson/ggpy
|
4e6e6e876c3a4294cd711647051da2d9c1836b60
|
[
"MIT"
] | null | null | null |
ggpy/gdl/parser.py
|
hobson/ggpy
|
4e6e6e876c3a4294cd711647051da2d9c1836b60
|
[
"MIT"
] | null | null | null |
import pyparsing as pp
MAX_NUM_ARGS = 1000000000 # max of 1 billion arguments for any function (relation constant)
# function constants are usually lowercase, but haven't found that as a hard requirement in the spec
function_constant = pp.Word(pp.srange("[A-Za-z]"), pp.srange("[a-zA-Z0-9_]"))
identifier = pp.Word(pp.srange("[A-Za-z]"), pp.srange("[a-zA-Z0-9_]"))
comment = pp.OneOrMore(pp.Word(';').suppress()) + pp.restOfLine('comment')
# GDL keywords ("Relation Constants")
role = pp.Keyword('role') # role(p) means that p is a player name/side in the game.
inpt = pp.Keyword('input') # input(t) means that t is a base proposition in the game.
base = pp.Keyword('base') # base(a) means that a is an action in the game, the outcome of a turn.
init = pp.Keyword('init') # init(p) means that the datum p is true in the initial state of the game.
next = pp.Keyword('next') # next(p) means that the datum p is true in the next state of the game.
does = pp.Keyword('does') # does(r, a) means that player r performs action a in the current state.
legal = pp.Keyword('legal') # legal(r, a) means it is legal for r to play a in the current state.
goal = pp.Keyword('goal') # goal(r, n) means that player the current state has utility n for player r. n must be an integer from 0 through 100.
terminal = pp.Keyword('terminal') # terminal(d) means that if the datam d is true, the game has ended and no player actions are legal.
distinct = pp.Keyword('distinct') # distinct(x, y) means that the values of x and y are different.
true = pp.Keyword('true') # true(p) means that the datum p is true in the current state.
# GDL-II Relation Constants
sees = pp.Keyword('sees') # The predicate sees(?r,?p) means that role ?r perceives ?p in the next game state.
random = pp.Keyword('random') # A predefined player that choses legal moves randomly
# GDL-I and GDL-II Relation Constants
relation_constant = role | inpt | base | init | next | does | legal | goal | terminal | distinct | true | sees | random
# TODO: DRY this up
# functions (keywords that should be followed by the number of arguments indicated)
RELATION_CONSTANTS = {
'role': 1, 'input': 2, 'base': 1, 'init': 1, 'next': 1, 'does': 2, 'legal': 2, 'goal': 2, 'terminal': 1, 'distinct': 2, 'true': 1,
'sees': 1, 'random': 1,
'<=': MAX_NUM_ARGS,
'&': 1,
}
# other tokens/terms
variable = pp.Word('?', pp.alphas)
operator = pp.Word('~&|') # not, and, or
identifier = pp.Word(pp.alphas + '_', pp.alphas + pp.nums + '_')
# Numerical contant
# FIXME: too permissive -- accepts 10 numbers, "00", "01", ... "09"
number = (pp.Keyword('100') | pp.Word(pp.nums, min=1, max=2))
# the only binary operator (relationship constant?)
implies = pp.Keyword('<=')
token = (implies | variable | relation_constant | number | pp.Word(pp.alphas + pp.nums))
# Define recursive grammar for nested paretheticals
grammar = pp.Forward()
expression = pp.OneOrMore(implies | variable | relation_constant | number | operator | identifier)
nested_parentheses = pp.nestedExpr('(', ')', content=grammar)
grammar << (implies | variable | relation_constant | number | operator | identifier | nested_parentheses)
sentence = (expression | grammar) + (comment | pp.lineEnd.suppress() | pp.stringEnd.suppress())
game_description = pp.OneOrMore(comment | sentence)
def test():
import os
gdl_path = os.path.join(os.path.dirname(__file__), '..', 'example_game_gdl', 'chapter10.gdl')
parsed_gdl = game_description.parseFile(gdl_path)
return parsed_gdl
| 50.357143
| 144
| 0.693333
|
794bf90a2671b20a134852bf93423ab8ff795de3
| 6,672
|
py
|
Python
|
tests/mqtt/test_connect.py
|
edenhaus/amqtt
|
ecf64a4f82c5d4c10974bce4d3f75f7563d6170b
|
[
"MIT"
] | 29
|
2021-03-13T20:24:21.000Z
|
2022-03-23T02:41:06.000Z
|
tests/mqtt/test_connect.py
|
edenhaus/amqtt
|
ecf64a4f82c5d4c10974bce4d3f75f7563d6170b
|
[
"MIT"
] | 74
|
2021-03-13T14:11:43.000Z
|
2022-03-27T22:07:38.000Z
|
tests/mqtt/test_connect.py
|
edenhaus/amqtt
|
ecf64a4f82c5d4c10974bce4d3f75f7563d6170b
|
[
"MIT"
] | 19
|
2021-03-14T10:46:07.000Z
|
2022-03-26T18:46:36.000Z
|
# Copyright (c) 2015 Nicolas JOUANIN
#
# See the file license.txt for copying permission.
import unittest
import asyncio
from amqtt.mqtt.connect import ConnectPacket, ConnectVariableHeader, ConnectPayload
from amqtt.mqtt.packet import MQTTFixedHeader, CONNECT
from amqtt.adapters import BufferReader
class ConnectPacketTest(unittest.TestCase):
def setUp(self):
self.loop = asyncio.new_event_loop()
def test_decode_ok(self):
data = b"\x10\x3e\x00\x04MQTT\x04\xce\x00\x00\x00\x0a0123456789\x00\x09WillTopic\x00\x0bWillMessage\x00\x04user\x00\x08password"
stream = BufferReader(data)
message = self.loop.run_until_complete(ConnectPacket.from_stream(stream))
self.assertEqual(message.variable_header.proto_name, "MQTT")
self.assertEqual(message.variable_header.proto_level, 4)
assert message.variable_header.username_flag
assert message.variable_header.password_flag
self.assertFalse(message.variable_header.will_retain_flag)
self.assertEqual(message.variable_header.will_qos, 1)
assert message.variable_header.will_flag
assert message.variable_header.clean_session_flag
self.assertFalse(message.variable_header.reserved_flag)
self.assertEqual(message.payload.client_id, "0123456789")
self.assertEqual(message.payload.will_topic, "WillTopic")
self.assertEqual(message.payload.will_message, b"WillMessage")
self.assertEqual(message.payload.username, "user")
self.assertEqual(message.payload.password, "password")
def test_decode_ok_will_flag(self):
data = b"\x10\x26\x00\x04MQTT\x04\xca\x00\x00\x00\x0a0123456789\x00\x04user\x00\x08password"
stream = BufferReader(data)
message = self.loop.run_until_complete(ConnectPacket.from_stream(stream))
self.assertEqual(message.variable_header.proto_name, "MQTT")
self.assertEqual(message.variable_header.proto_level, 4)
assert message.variable_header.username_flag
assert message.variable_header.password_flag
self.assertFalse(message.variable_header.will_retain_flag)
self.assertEqual(message.variable_header.will_qos, 1)
self.assertFalse(message.variable_header.will_flag)
assert message.variable_header.clean_session_flag
self.assertFalse(message.variable_header.reserved_flag)
self.assertEqual(message.payload.client_id, "0123456789")
self.assertEqual(message.payload.will_topic, None)
self.assertEqual(message.payload.will_message, None)
self.assertEqual(message.payload.username, "user")
self.assertEqual(message.payload.password, "password")
def test_decode_fail_reserved_flag(self):
data = b"\x10\x3e\x00\x04MQTT\x04\xcf\x00\x00\x00\x0a0123456789\x00\x09WillTopic\x00\x0bWillMessage\x00\x04user\x00\x08password"
stream = BufferReader(data)
message = self.loop.run_until_complete(ConnectPacket.from_stream(stream))
assert message.variable_header.reserved_flag
def test_decode_fail_miss_clientId(self):
data = b"\x10\x0a\x00\x04MQTT\x04\xce\x00\x00"
stream = BufferReader(data)
message = self.loop.run_until_complete(ConnectPacket.from_stream(stream))
self.assertIsNot(message.payload.client_id, None)
def test_decode_fail_miss_willtopic(self):
data = b"\x10\x16\x00\x04MQTT\x04\xce\x00\x00\x00\x0a0123456789"
stream = BufferReader(data)
message = self.loop.run_until_complete(ConnectPacket.from_stream(stream))
self.assertIs(message.payload.will_topic, None)
def test_decode_fail_miss_username(self):
data = b"\x10\x2e\x00\x04MQTT\x04\xce\x00\x00\x00\x0a0123456789\x00\x09WillTopic\x00\x0bWillMessage"
stream = BufferReader(data)
message = self.loop.run_until_complete(ConnectPacket.from_stream(stream))
self.assertIs(message.payload.username, None)
def test_decode_fail_miss_password(self):
data = b"\x10\x34\x00\x04MQTT\x04\xce\x00\x00\x00\x0a0123456789\x00\x09WillTopic\x00\x0bWillMessage\x00\x04user"
stream = BufferReader(data)
message = self.loop.run_until_complete(ConnectPacket.from_stream(stream))
self.assertIs(message.payload.password, None)
def test_encode(self):
header = MQTTFixedHeader(CONNECT, 0x00, 0)
variable_header = ConnectVariableHeader(0xCE, 0, "MQTT", 4)
payload = ConnectPayload(
"0123456789", "WillTopic", b"WillMessage", "user", "password"
)
message = ConnectPacket(header, variable_header, payload)
encoded = message.to_bytes()
self.assertEqual(
encoded,
b"\x10\x3e\x00\x04MQTT\x04\xce\x00\x00\x00\x0a0123456789\x00\x09WillTopic\x00\x0bWillMessage\x00\x04user\x00\x08password",
)
def test_getattr_ok(self):
data = b"\x10\x3e\x00\x04MQTT\x04\xce\x00\x00\x00\x0a0123456789\x00\x09WillTopic\x00\x0bWillMessage\x00\x04user\x00\x08password"
stream = BufferReader(data)
message = self.loop.run_until_complete(ConnectPacket.from_stream(stream))
self.assertEqual(message.variable_header.proto_name, "MQTT")
self.assertEqual(message.proto_name, "MQTT")
self.assertEqual(message.variable_header.proto_level, 4)
self.assertEqual(message.proto_level, 4)
assert message.variable_header.username_flag
assert message.username_flag
assert message.variable_header.password_flag
assert message.password_flag
self.assertFalse(message.variable_header.will_retain_flag)
self.assertFalse(message.will_retain_flag)
self.assertEqual(message.variable_header.will_qos, 1)
self.assertEqual(message.will_qos, 1)
assert message.variable_header.will_flag
assert message.will_flag
assert message.variable_header.clean_session_flag
assert message.clean_session_flag
self.assertFalse(message.variable_header.reserved_flag)
self.assertFalse(message.reserved_flag)
self.assertEqual(message.payload.client_id, "0123456789")
self.assertEqual(message.client_id, "0123456789")
self.assertEqual(message.payload.will_topic, "WillTopic")
self.assertEqual(message.will_topic, "WillTopic")
self.assertEqual(message.payload.will_message, b"WillMessage")
self.assertEqual(message.will_message, b"WillMessage")
self.assertEqual(message.payload.username, "user")
self.assertEqual(message.username, "user")
self.assertEqual(message.payload.password, "password")
self.assertEqual(message.password, "password")
| 51.72093
| 136
| 0.732464
|
794bf959cfeb0c139c7d71d78ab0a4cbeb5e84b3
| 665
|
py
|
Python
|
pytest_splinter/splinter_patches.py
|
mpasternak/pytest-splinter
|
6908f852629d593799dcd42ebc2e766f05849e20
|
[
"MIT"
] | 226
|
2015-01-04T05:11:51.000Z
|
2022-03-24T18:44:34.000Z
|
pytest_splinter/splinter_patches.py
|
mpasternak/pytest-splinter
|
6908f852629d593799dcd42ebc2e766f05849e20
|
[
"MIT"
] | 133
|
2015-01-18T17:38:11.000Z
|
2022-01-21T21:42:52.000Z
|
pytest_splinter/splinter_patches.py
|
mpasternak/pytest-splinter
|
6908f852629d593799dcd42ebc2e766f05849e20
|
[
"MIT"
] | 59
|
2015-01-28T02:15:37.000Z
|
2022-01-22T14:12:56.000Z
|
"""Patches for splinter."""
from functools import partial
from splinter.driver.webdriver import firefox
from selenium.webdriver.common.action_chains import ActionChains # pragma: no cover
def patch_webdriverelement(): # pragma: no cover
"""Patch the WebDriverElement to allow firefox to use mouse_over."""
def mouse_over(self):
"""Perform a mouse over the element which works."""
(
ActionChains(self.parent.driver)
.move_to_element_with_offset(self._element, 2, 2)
.perform()
)
# Apply the monkey patch for Firefox WebDriverElement
firefox.WebDriverElement.mouse_over = mouse_over
| 30.227273
| 84
| 0.697744
|
794bf969a22618dc2a280014e25fceda568b9708
| 801
|
py
|
Python
|
shipment.py
|
aroraumang/nereid-webshop
|
51ab34b6d3f47362637812c834f91218954b6402
|
[
"BSD-3-Clause"
] | 20
|
2015-04-18T14:41:06.000Z
|
2022-02-01T20:31:32.000Z
|
shipment.py
|
aroraumang/nereid-webshop
|
51ab34b6d3f47362637812c834f91218954b6402
|
[
"BSD-3-Clause"
] | 111
|
2015-01-01T07:48:37.000Z
|
2015-08-07T12:40:47.000Z
|
shipment.py
|
aroraumang/nereid-webshop
|
51ab34b6d3f47362637812c834f91218954b6402
|
[
"BSD-3-Clause"
] | 17
|
2015-01-15T11:35:22.000Z
|
2020-08-31T13:46:34.000Z
|
# -*- coding: utf-8 -*-
'''
shipment
:copyright: (c) 2014 by Openlabs Technologies & Consulting (P) Ltd.
:license: GPLv3, see LICENSE for more details
'''
from trytond.model import ModelView, Workflow
from trytond.pool import PoolMeta
__metaclass__ = PoolMeta
__all__ = ['ShipmentOut']
class ShipmentOut:
__name__ = 'stock.shipment.out'
def send_shipment_alert(self):
"""Alert user about shipment status.
"""
# XXX: Not implemented yet
return
@classmethod
@ModelView.button
@Workflow.transition('done')
def done(cls, shipments):
"""Mark shipment done and send an alert to user.
"""
super(ShipmentOut, cls).done(shipments)
for shipment in shipments:
shipment.send_shipment_alert()
| 22.25
| 71
| 0.642946
|
794bfa39d06a67dc3c9d637fdd59f4e5c24a2d12
| 12,342
|
py
|
Python
|
main.py
|
36base/36base-kakaotalk-bot
|
64bfa4f77a08c6802ffdbf60ae7221e7c792bc80
|
[
"MIT"
] | 7
|
2018-08-26T07:55:18.000Z
|
2019-04-05T08:51:25.000Z
|
main.py
|
36base/36base-kakaotalk-bot
|
64bfa4f77a08c6802ffdbf60ae7221e7c792bc80
|
[
"MIT"
] | 2
|
2018-08-13T16:06:12.000Z
|
2018-09-29T03:27:43.000Z
|
main.py
|
36base/36base-kakaotalk-bot
|
64bfa4f77a08c6802ffdbf60ae7221e7c792bc80
|
[
"MIT"
] | 2
|
2018-08-17T15:02:43.000Z
|
2018-08-26T07:55:23.000Z
|
from flask import Flask, request, jsonify
from chatterbox import *
import girlsfrontline_core_python as gfl_core
import re
import logging
import json
import urllib
import pymysql
from logging_db import MySQLHandler
from ranking_poll import EventRankPoll
import static_resp as rp
import kakao_vision as kv
application = Flask(__name__)
chatter = Chatter(memory='sqlite',
frequency=10,
fallback=False)
cf = json.load(open("config.json", "r", encoding="utf-8"))
# MySQL Connection
conn = pymysql.connect(**cf["MySQL"])
# Logging 모듈 설정
logger = logging.getLogger("main")
logger.setLevel(logging.INFO)
# 핸들러 설정 및 추가
db_handler = MySQLHandler(conn)
logger.addHandler(db_handler)
# 정규식 컴파일
re_build_time = re.compile(r"^([0-9]{1,2})?[ :]?([0-5][0-9])$")
re_rp_calc = re.compile(
r"([0-9]{1,3})[ ,.]([0-9]{1,3})[ ,.]?([0-9]+)?[ ,.]?(서약|ㅅㅇ)?[ ,.]?(요정|ㅇㅈ)?([화ㅎ][력ㄹ]([1]?[0-9])?)?"
)
re_rank_poll = re.compile(r"([0-9]{0,6})[점]? ([0-9]{1,3})(퍼센트|퍼|%|등|)[ ]?(.+)?$")
cancel_list = {'돌아가기', '취소', '종료', '잘가', 'ㅂㅂ', '꺼져', '뒤로', '뒤로가기', '나가기'}
# RankingPoll
rank = EventRankPoll(conn)
# girlsfrontline_core_python
core = gfl_core.core.Core(cf['gfl_core']['dir'])
kv.APP_KEY = cf['kakao_vision']['app_key']
# Chatterbox
# 초기 화면 설정
@chatter.base(name='홈')
def home_keyboard():
return rp.kb_home
# src = 현재 상태(status)
# action = 유저가 입력한 것
# dest = 다음 상태(status)
@chatter.rule(action='인형 검색', src='홈', dest='인형 검색 페이지')
def search_doll(data):
extra_data = dict(user_status='홈', **data)
logger.info(rp.msg_search_doll, extra=extra_data)
return rp.search_doll
@chatter.rule(action='*', src='인형 검색 페이지')
def searched_doll(data):
re_match = re_build_time.match(data['content'].strip())
res = core.find_nickname(data['content'].strip(), 'doll')
if data['content'] in cancel_list:
return cancel(data)
elif re_match:
b_hour, b_min = re_match.groups(default='0')
build_time = int(b_hour) * 3600 + int(b_min) * 60
# searched(dict): 코어에서 나온 객체들
searched = [core.l10n("ko-KR", "doll", n) for n in core.doll.build_time.get(build_time, {})]
if searched and build_time > 0:
if len(searched) == 1:
msg = Text(rp.f_msg_free_input_info["doll"].format(**searched[0]))
msg += MessageButton("상세 정보", searched[0]['link'])
msg += Photo(**searched[0]["photo"])
adv = Keyboard(type="text")
else:
dolls = ["{name}: {rank}성 {Type}".format(**n) for n in searched]
msg = Text("찾은 인형 목록:\n{0}".format("\n".join(dolls)))
adv = Keyboard([n['name'] for n in searched] + ["돌아가기"])
else:
msg = Text("검색 결과가 없습니다.")
adv = Keyboard(type='text')
elif res:
if isinstance(res, tuple):
msg = Text(rp.f_msg_free_input_info[res[0]].format(**res[1])) + MessageButton("상세 정보", res[1]['link'])
msg += Photo(**res[1]["photo"])
adv = Keyboard(type="text")
else:
msg = Text("무엇을 찾으셨나요?")
adv = Keyboard(buttons=res)
else:
msg = Text("올바르지 않은 입력입니다.")
adv = Keyboard(type='text')
extra_data = dict(user_status='인형 검색 페이지', **data)
logger.info(msg.text if isinstance(msg, Text) else msg.text.text, extra=extra_data)
return msg + adv
@chatter.rule(action='장비 검색', src='홈', dest='장비 검색 페이지')
def search_equip(data):
extra_data = dict(user_status='홈', **data)
logger.info(rp.msg_search_equip, extra=extra_data)
return rp.search_equip
@chatter.rule(action='*', src='장비 검색 페이지', dest='홈')
def searched_equip(data):
re_match = re_build_time.match(data['content'].strip())
if re_match:
b_hour, b_min = re_match.groups('0')
build_time = int(b_hour) * 3600 + int(b_min) * 60
if build_time < 3600:
searched = [core.l10n("ko-KR", "equip", n) for n in core.equip.build_time.get(build_time, {})]
equips = ["{name}: {rank}성 {category_name}".format(**n) for n in searched]
else:
searched = [core.l10n("ko-KR", "fairy", n) for n in core.fairy.build_time.get(build_time, {})]
equips = ["{name}".format(**n) for n in searched]
if equips and build_time > 0:
msg = "찾은 장비/요정 목록:\n{0}".format("\n".join(equips))
else:
msg = "검색 결과가 없습니다."
else:
msg = "올바르지 않은 입력입니다."
extra_data = dict(user_status='장비 검색 페이지', **data)
logger.info(msg, extra=extra_data)
return Text(msg) + chatter.home()
@chatter.rule(action='작전보고서 계산', src='홈', dest='작전보고서 계산')
def calc_report(data):
extra_data = dict(user_status='홈', **data)
logger.info(rp.msg_calc_report, extra=extra_data)
return rp.calc_report
@chatter.rule(action='*', src='작전보고서 계산', dest='홈')
def calc_report_return(data):
re_match = re_rp_calc.match(data['content'].strip())
if re_match:
cur_lv, tar_lv, cur_xp, is_oath, is_fairy, hoc, hoc_lv = re_match.groups(default='')
cur_lv = int(cur_lv)
tar_lv = int(tar_lv)
cur_xp = int(cur_xp) if cur_xp else 0
is_oath = True if is_oath else False
is_fairy = True if is_fairy else False
hoc = True if hoc else False
hoc_lv = int(hoc_lv) if hoc_lv else 10
if cur_lv >= tar_lv or tar_lv > 120:
msg = '목표 레벨이 현재 레벨보다 낮거나 120을 넘습니다. 올바른 수치를 입력해주세요.'
elif tar_lv > 100 and (is_fairy or hoc):
msg = '요정 및 화력제대는 100레벨 이상의 계산을 지원하지 않습니다.'
else:
if hoc:
hoc_lv = 10 if hoc_lv > 10 or hoc_lv < 0 else hoc_lv
tar_lv = 100 if tar_lv > 100 else tar_lv
rp, hr = gfl_core.calc.exp_hoc(cur_lv, tar_lv, cur_xp, hoc_lv)
msg = '필요 특수 작전 보고서: {0}개\n소모시간: {1}시간\n소모 전지량: {2}개'.format(rp, hr, hr * 5)
else:
rp = gfl_core.calc.exp(int(cur_lv), int(tar_lv), int(cur_xp), is_oath, is_fairy)
msg = '필요 작전 보고서: {0}개'.format(rp)
else:
msg = "올바르지 않은 입력입니다."
extra_data = dict(user_status='작전보고서 계산', **data)
logger.info(msg, extra=extra_data)
return Text(msg) + chatter.home()
@chatter.rule(action='유용한 정보 모음', src='홈', dest='유용한 정보')
def useful_info(data):
extra_data = dict(user_status='홈', **data)
logger.info(rp.msg_useful_info, extra=extra_data)
return rp.useful_info
@chatter.rule(action='*', src='유용한 정보')
def useful_info_input(data):
if data['content'] == '돌아가기' or data['content'] not in rp.d_useful_info:
return cancel(data)
else:
return useful_info_return(data)
@chatter.rule(dest="유용한 정보")
def useful_info_return(data):
resp = rp.d_useful_info.get(data['content'], Text("오류 발생"))
extra_data = dict(user_status='유용한 정보', **data)
logger.info(resp['message']['text'], extra=extra_data)
return resp + rp.kb_useful_info
@chatter.rule(action='36베이스 바로가기', src='홈', dest='홈')
def go_to_36db(data):
extra_data = dict(user_status='홈', **data)
logger.info(rp.msg_go_to_36db, extra=extra_data)
return rp.go_to_36db + chatter.home()
@chatter.rule(action='랭킹 집계', src='홈', dest='랭킹 집계')
def rank_poll(data):
last_data = rank.get_today(data["user_key"])
if last_data:
msg = rp.msg_rank_poll + (
"\n\n오늘({0}) 입력한 마지막 기록을 덮어씌웁니다.\n"
"{1}점, {2}%"
).format(*last_data[:3])
else:
msg = rp.msg_rank_poll
extra_data = dict(user_status='홈', **data)
logger.info(msg, extra=extra_data)
return Text(msg) + rp.bt_info + Keyboard(type="text")
@chatter.rule(action="*", src="랭킹 집계", dest="홈")
def rank_poll_input(data):
re_match = re_rank_poll.match(data["content"].strip())
if re_match:
score, num, mode, comment = re_match.groups()
if 0 < int(num) <= 100:
if mode in {"등", "위"}:
percent, ranking = 0, int(num)
msg = "{0}점 {1}등으로 등록 완료했습니다. 감사합니다.".format(score, ranking)
else:
percent, ranking = int(num), 0
msg = "{0}점 {1}%으로 등록 완료했습니다. 감사합니다.".format(score, percent)
rank.log(data['user_key'], int(score), percent, ranking, comment)
else:
msg = rp.msg_rank_poll_err
else:
msg = (
"올바른 양식으로 입력해주세요. "
"만약 제대로 입력했는데 이 오류가 발생했다면, 관리자에게 알려주세요."
)
extra_data = dict(user_status='랭킹 집계', **data)
logger.info(msg, extra=extra_data)
return Text(msg) + rp.bt_ranking_result + chatter.home()
@chatter.rule(action="대화하기", src="홈", dest="자유 입력")
def start_free_input(data):
extra_data = dict(user_status='홈', **data)
logger.info(rp.msg_start_free_input, extra=extra_data)
return rp.start_free_input + Keyboard(type="text")
@chatter.rule(action="*", src="자유 입력")
def free_input_check(data):
if data['content'] in cancel_list:
return cancel(data)
elif data['content'][:2] in {'ㅇㅎ', '인형', '제조'}:
data['content'] = data['content'][2:]
return searched_doll(data)
elif data['content'][:2] in {'ㅈㅂ', '장비'}:
data['content'] = data['content'][2:]
return searched_equip(data)
elif data['type'] == 'photo':
return photo_input(data)
else:
return free_input(data)
@chatter.rule(dest="자유 입력")
def free_input(data):
res = core.find_nickname(data["content"].strip(), "", "ko-KR")
res_special = core.special(data["content"].strip())
if res:
if isinstance(res, tuple):
msg = Text(rp.f_msg_free_input_info[res[0]].format(**res[1])) + MessageButton("상세 정보", res[1]['link'])
if "photo" in res[1]:
msg += Photo(**res[1]["photo"])
adv = Keyboard(type="text")
else:
msg = Text("무엇을 찾으셨나요?")
adv = Keyboard(buttons=res)
elif res_special:
msg = Message(**res_special)
adv = Keyboard(type="text")
elif data['content'] == '끝말잇기':
msg = Text("끝말잇기를 시작하겠습니다.\n\n새벽녘")
adv = Keyboard(['내가 졌다', '항복', '모르겠는걸?'])
else:
msg = Text("잘 모르겠습니다. 다시 입력해주세요.")
msg += MessageButton(
label="모르는 말 알려주기",
url=f"https://kakao-learn.gfl.kr/start?message={urllib.parse.quote(data['content'].strip())}"
)
adv = Keyboard(type="text")
extra_data = dict(user_status='자유 입력', **data)
logger.info(msg['message'].get('text', msg['message'].get('photo', {"url": ""})['url']), extra=extra_data)
return msg + adv
@chatter.rule(dest="자유 입력")
def photo_input(data):
result = kv.detect_adult(data['content'])
if 'result' in result:
res = result['result']
if res['adult'] > res['soft'] and res['adult'] > res['normal']:
msg = f"성인 이미지일 확률이 {res['adult'] * 100:0.01f}% 입니다."
elif res['soft'] > res['adult'] and res['soft'] > res['normal']:
msg = f"노출이 있는 이미지일 확률이 {res['soft'] * 100:0.01f}% 입니다."
else:
msg = f"건전한 이미지일 확률이 {res['normal'] * 100:0.01f}% 입니다."
else:
msg = f"오류가 발생하였습니다.\n{result['msg']}"
extra_data = dict(user_status='자유 입력', **data)
logger.info(msg, extra=extra_data)
return Text(msg) + Keyboard(type='text')
@chatter.rule(dest='홈')
def cancel(data):
msg = '기본 화면으로 돌아갑니다.'
extra_data = dict(user_status='*', **data)
logger.info(msg, extra=extra_data)
return Text(msg) + chatter.home()
@chatter.rule(action="*", src="홈", dest="홈")
def fallback(data):
extra_data = dict(user_status='홈', **data)
logger.info(rp.msg_fallback, extra=extra_data)
return rp.fallback + chatter.home()
# ##################
# Flask Func
@application.route('/keyboard', methods=['GET', 'HEAD'])
def keyboard():
return jsonify(chatter.home())
@application.route('/message', methods=['POST'])
def message():
return jsonify(chatter.route(request.json))
@application.route('/friend', methods=['POST'])
def add_friend():
return jsonify({'message': 'SUCCESS'})
@application.route('/friend/<key>', methods=['DELETE'])
def block_friend(key):
return jsonify({'message': 'SUCCESS'})
@application.route('/chat_room/<key>', methods=['DELETE'])
def exit_friend(key):
return jsonify({'message': 'SUCCESS'})
if __name__ == '__main__':
application.run(debug=True, host='0.0.0.0')
| 33.177419
| 114
| 0.595284
|
794bfa3cbe453abd9e1df0e4557117664d3da67a
| 430
|
py
|
Python
|
Dataset/Leetcode/train/78/628.py
|
kkcookies99/UAST
|
fff81885aa07901786141a71e5600a08d7cb4868
|
[
"MIT"
] | null | null | null |
Dataset/Leetcode/train/78/628.py
|
kkcookies99/UAST
|
fff81885aa07901786141a71e5600a08d7cb4868
|
[
"MIT"
] | null | null | null |
Dataset/Leetcode/train/78/628.py
|
kkcookies99/UAST
|
fff81885aa07901786141a71e5600a08d7cb4868
|
[
"MIT"
] | null | null | null |
class Solution:
def XXX(self, nums: List[int]) -> List[List[int]]:
n=len(nums)
all_prob=2**n
ans=list()
for i in range(all_prob):
temp=[]
bit=bin(i)[2:][::-1]
index=0
while(index<len(bit)):
if bit[index]=='1':
temp.append(nums[index])
index+=1
ans.append(temp)
return ans
| 25.294118
| 54
| 0.423256
|
794bfb426d52ad43e3e2f35e2437243e352e4c07
| 291
|
py
|
Python
|
Flask/FastAPI/Django/Python-API-Development.freeCodeCamp.org/07-Pydantic-Models/schemas.py
|
shihab4t/Software-Development
|
0843881f2ba04d9fca34e44443b5f12f509f671e
|
[
"Unlicense"
] | null | null | null |
Flask/FastAPI/Django/Python-API-Development.freeCodeCamp.org/07-Pydantic-Models/schemas.py
|
shihab4t/Software-Development
|
0843881f2ba04d9fca34e44443b5f12f509f671e
|
[
"Unlicense"
] | null | null | null |
Flask/FastAPI/Django/Python-API-Development.freeCodeCamp.org/07-Pydantic-Models/schemas.py
|
shihab4t/Software-Development
|
0843881f2ba04d9fca34e44443b5f12f509f671e
|
[
"Unlicense"
] | null | null | null |
from pydantic import BaseModel
from datetime import datetime
class PostBase(BaseModel):
title: str
content: str
published: bool = True
class PostCreate(PostBase):
pass
class Post(PostBase):
id: int
create_at: datetime
class Config:
orm_mode = True
| 13.857143
| 30
| 0.683849
|
794bfd32c63e644a911445d5d2916f5aa80e370c
| 965
|
py
|
Python
|
run.py
|
scottyellis/fence
|
012ba76a58853169e9ee8e3f44a0dc510f4b2543
|
[
"Apache-2.0"
] | 31
|
2018-01-05T22:49:33.000Z
|
2022-02-02T10:30:23.000Z
|
run.py
|
scottyellis/fence
|
012ba76a58853169e9ee8e3f44a0dc510f4b2543
|
[
"Apache-2.0"
] | 737
|
2017-12-11T17:42:11.000Z
|
2022-03-29T22:42:52.000Z
|
run.py
|
scottyellis/fence
|
012ba76a58853169e9ee8e3f44a0dc510f4b2543
|
[
"Apache-2.0"
] | 46
|
2018-02-23T09:04:23.000Z
|
2022-02-09T18:29:51.000Z
|
from fence import app, app_init, config
import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
"-c",
"--config_file_name",
help="Name for file is something other than "
"fence-config.yaml. Will search in defined search folders specified in "
"fence's settings. To automatically create configs, check out the "
'cfg_help.py file in this directory. Run "python cfg_help.py --help".',
default="fence-config.yaml",
)
parser.add_argument(
"--config_path",
help="Full path to a yaml config file for fence. Will not"
" search directories for config.",
)
args = parser.parse_args()
if config.get("MOCK_STORAGE"):
from mock import patch
from cdisutilstest.code.storage_client_mock import get_client
patcher = patch("fence.resources.storage.get_client", get_client)
patcher.start()
app_init(app, config_path=args.config_path, config_file_name=args.config_file_name)
app.run(debug=True, port=8000)
| 31.129032
| 83
| 0.732642
|
794bfd92abebf04ec9ab19e5f6235d1d5f30cf7a
| 2,101
|
py
|
Python
|
extensions/captcha_forms.py
|
l29ah/vk4xmpp
|
596e35a3c13d1ee102e8dc0707427947843306d2
|
[
"MIT"
] | 77
|
2015-01-07T06:22:46.000Z
|
2022-01-17T11:15:02.000Z
|
extensions/captcha_forms.py
|
l29ah/vk4xmpp
|
596e35a3c13d1ee102e8dc0707427947843306d2
|
[
"MIT"
] | 157
|
2015-01-16T07:25:42.000Z
|
2021-11-24T05:59:10.000Z
|
extensions/captcha_forms.py
|
l29ah/vk4xmpp
|
596e35a3c13d1ee102e8dc0707427947843306d2
|
[
"MIT"
] | 31
|
2015-01-16T15:16:36.000Z
|
2021-01-25T20:13:54.000Z
|
# coding: utf-8
# This file is a part of VK4XMPP transport
# © simpleApps, 2014 — 2015.
from hashlib import sha1
import xmpp
"""
Implements XEP-0158: CAPTCHA Forms
"""
def sendCaptcha(user, captcha):
"""
Send a captcha to the user
Args:
user: the user's jid
captcha: captcha dictionary ({"url": "https://vk.com/...", "sid": "10"})
"""
url = captcha.get("img")
sid = captcha.get("sid")
logger.debug("VK: sending message with captcha (jid: %s)", user)
body = _("WARNING: VK has sent you a CAPTCHA."
" Please, follow the link: %s and enter the text shown on the image to the chat."
" Example: !captcha my_captcha_key."
"\nWarning: don't use Firefox to open the link.") % url
msg = xmpp.Message(user, body, "chat", frm=TransportID)
x = msg.setTag("x", namespace=xmpp.NS_OOB)
x.setTagData("url", url)
captchaNode = msg.setTag("captcha", namespace=xmpp.NS_CAPTCHA)
image = utils.getLinkData(url, False)
if image:
hash = sha1(image).hexdigest()
encoded = image.encode("base64")
payload = [xmpp.Node("required"),
xmpp.Node("media", {"xmlns": xmpp.NS_MEDIA},
[xmpp.Node("uri", {"type": "image/jpg"},
["cid:sha1+%s@bob.xmpp.org" % hash])])] # feel yourself like a erlang programmer
fields = [{"var": "FORM_TYPE", "value": xmpp.NS_CAPTCHA, "type": "hidden"}]
fields.append({"var": "from", "value": TransportID, "type": "hidden"})
fields.append({"var": "challenge", "value": msg.getID(), "type": "hidden"})
fields.append({"var": "ocr", "label": _("Enter shown text"), "payload": payload})
form = utils.buildDataForm(type="form", fields=fields)
captchaNode.addChild(node=form)
oob = msg.setTag("data", {"cid": "sha1+%s@bob.xmpp.org" % hash, "type": "image/jpg", "max-age": "0"}, xmpp.NS_URN_OOB)
oob.setData(encoded)
else:
logger.warning("unable to get the image from %s (jid: %s)", url, user)
msg.setID(sid)
sender(Component, msg)
sendPresence(user, TransportID, show="xa", reason=body, hash=USER_CAPS_HASH)
TransportFeatures.update({xmpp.NS_OOB,
xmpp.NS_MEDIA,
xmpp.NS_CAPTCHA,
xmpp.NS_URN_OOB})
registerHandler("evt04", sendCaptcha)
| 35.610169
| 120
| 0.669681
|
794bfeaf8d8752099c330384dd9330b4f7d01661
| 846
|
py
|
Python
|
var/spack/repos/builtin/packages/r-codetools/package.py
|
shintaro-iwasaki/spack
|
47998b3f4733c1264760c4a9744b1669661354b9
|
[
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 1
|
2021-09-19T10:20:43.000Z
|
2021-09-19T10:20:43.000Z
|
var/spack/repos/builtin/packages/r-codetools/package.py
|
jserv/spack
|
221e680e2b0eb27971794e8a680a9cf743f079c3
|
[
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 1
|
2021-01-06T19:26:40.000Z
|
2021-01-06T19:42:17.000Z
|
var/spack/repos/builtin/packages/r-codetools/package.py
|
shintaro-iwasaki/spack
|
47998b3f4733c1264760c4a9744b1669661354b9
|
[
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | null | null | null |
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class RCodetools(RPackage):
"""Code analysis tools for R."""
homepage = "https://cloud.r-project.org/package=codetools"
url = "https://cloud.r-project.org/src/contrib/codetools_0.2-15.tar.gz"
list_url = "https://cloud.r-project.org/src/contrib/Archive/codetools"
version('0.2-16', sha256='f67a66175cb5d8882457d1e9b91ea2f16813d554fa74f80c1fd6e17cf1877501')
version('0.2-15', sha256='4e0798ed79281a614f8cdd199e25f2c1bd8f35ecec902b03016544bd7795fa40')
version('0.2-14', sha256='270d603b89076081af8d2db0256927e55ffeed4c27309d50deea75b444253979')
depends_on('r@2.1:', type=('build', 'run'))
| 40.285714
| 96
| 0.749409
|
794bff586d5bf3d4a0eaa28c25e523445fa3e018
| 1,367
|
py
|
Python
|
setup.py
|
drivet/flask-whoosh
|
8cccd0816406c342ce987a5dbf8cbbd63aef2f26
|
[
"MIT"
] | null | null | null |
setup.py
|
drivet/flask-whoosh
|
8cccd0816406c342ce987a5dbf8cbbd63aef2f26
|
[
"MIT"
] | 1
|
2015-09-07T16:22:18.000Z
|
2015-09-07T16:22:18.000Z
|
setup.py
|
drivet/flask-whoosh
|
8cccd0816406c342ce987a5dbf8cbbd63aef2f26
|
[
"MIT"
] | null | null | null |
"""
Flask-Whoosh
-------------
Small Flask extension to make manipulating Whoosh indexes a slightly more
convenient in the context of a Flask web application.
"""
from setuptools import setup
setup(
name='Flask-Whoosh',
version='0.1.0',
url='https://github.com/drivet/flask-whoosh.git',
license='MIT',
author='Desmond Rivet',
author_email='desmond.rivet@gmail.com',
description='Flask extension to manipulate Whoosh indexes',
long_description=__doc__,
py_modules=['flask_whoosh'],
# if you would be using a package instead use packages instead
# of py_modules:
# packages=['flask_sqlite3'],
zip_safe=False,
platforms='any',
install_requires=[
'Flask',
'whoosh'
],
classifiers=[
'Development Status :: 3 - Alpha',
'Environment :: Web Environment',
'Framework :: Flask',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3 :: Only',
'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
'Topic :: Internet :: WWW/HTTP :: Indexing/Search',
'Topic :: Software Development :: Libraries :: Python Modules'
],
keywords='flask whoosh search indexing'
)
| 30.377778
| 73
| 0.628383
|
794bffc58135d24b31d669f44f3d729045d41a7d
| 12,478
|
py
|
Python
|
examples/classifier_comparison.py
|
yyht/cleanlab
|
00678f1ec08d97ffcba40de544859d64dc3fb1ad
|
[
"MIT"
] | null | null | null |
examples/classifier_comparison.py
|
yyht/cleanlab
|
00678f1ec08d97ffcba40de544859d64dc3fb1ad
|
[
"MIT"
] | null | null | null |
examples/classifier_comparison.py
|
yyht/cleanlab
|
00678f1ec08d97ffcba40de544859d64dc3fb1ad
|
[
"MIT"
] | 1
|
2020-09-01T11:57:59.000Z
|
2020-09-01T11:57:59.000Z
|
# coding: utf-8
# # Classifier Comparison Tutorial
# ## In this example, we demonstrate how the cleanlab package can be used with any classifier and dataset distribution. We compare performance across 10 classifiers and 4 dataset distributions in both the binary and multiclass classification setting.
#
# ### Some of the graphical components of this tutorial were adapted from
# * http://scikit-learn.org/stable/auto_examples/svm/plot_iris.html
# * http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html
#
#
#
# In[1]:
# Python 2 and 3 compatibility
from __future__ import print_function, absolute_import, division, unicode_literals, with_statement
# In[2]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from cleanlab.classification import LearningWithNoisyLabels
from cleanlab.noise_generation import generate_noise_matrix_from_trace
from cleanlab.noise_generation import generate_noisy_labels
from cleanlab.util import print_noise_matrix
import copy
# Silence neural network SGD convergence warnings.
from sklearn.exceptions import ConvergenceWarning
import warnings
warnings.filterwarnings('ignore', category=ConvergenceWarning)
# In[3]:
def make_meshgrid(x, y, h=.02):
"""Create a mesh of points to plot in
Parameters
----------
x: data to base x-axis meshgrid on
y: data to base y-axis meshgrid on
h: stepsize for meshgrid, optional
Returns
-------
xx, yy : ndarray
"""
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
def plot_contours(ax, clf, xx, yy, **params):
"""Plot the decision boundaries for a classifier.
Parameters
----------
ax: matplotlib axes object
clf: a classifier
xx: meshgrid ndarray
yy: meshgrid ndarray
params: dictionary of params to pass to contourf, optional
"""
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = ax.contourf(xx, yy, Z, **params)
return out
def make_linearly_seperable_dataset(n_classes = 3, n_samples = 300):
X, y = make_classification(n_samples = n_samples, n_features=2, n_redundant=0, n_informative=2,
random_state=1, n_clusters_per_class=1, n_classes=n_classes)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
return (X, y)
# ## Sparsity of noise matrix
# #### As is, the code below will generate non-sparse noise matrices (all non-zero noise rates). You can also generate and use sparse noise matrices by increasing **`FRAC_ZERO_NOISE_RATES`** (below). Examples:
# ```python
# ~DENSE~ Noise Matrix / Noisy Channel P(s|y):
# y=0 y=1 y=2
# ---- ---- ---
# s=0 | 0.59 0.04 0.33
# s=1 | 0.12 0.93 0.14
# s=2 | 0.29 0.03 0.53
#
# ~SPARSE~ Noise Matrix / Noisy Channel P(s|y):
# y=0 y=1 y=2
# --- --- ---
# s=0 | 0.6 0 0.3
# s=1 | 0.4 0.95 0
# s=2 | 0 0 0.7
# ```
# #### Higher sparsity typically is more difficult because it focuses all the noise on just a few label changes. Although the overall accuracy is typically lower, higher sparsity tends to lead to more improvement when using **cleanlab** relative to learning with the noisy labels.
# In[4]:
# Initalization.
# Set the sparsity of the noise matrix.
FRAC_ZERO_NOISE_RATES = 0.0 # Consider increasing to 0.5
# A proxy for the fraction of labels that are correct.
avg_trace = 0.65 # ~35% wrong labels. Increasing makes the problem easier.
# Amount of data for each dataset.
dataset_size = 400 # Try 250 or 400 to use less or more data.
# Step size in the mesh.
h = .02
names = ["Naive Bayes", "LogisticReg", "K-NN (K=3)",
"Linear SVM", "RBF SVM","Rand Forest",
"Neural Net", "AdaBoost", "QDA"]
classifiers = [
GaussianNB(),
LogisticRegression(random_state=0, solver = 'lbfgs', multi_class = 'auto'),
KNeighborsClassifier(n_neighbors=3),
SVC(kernel="linear", C=0.025, probability=True, random_state=0),
SVC(gamma=2, C=1, probability=True, random_state=0),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
MLPClassifier(alpha=1, random_state=0, ),
AdaBoostClassifier(random_state=0),
QuadraticDiscriminantAnalysis()
]
dataset_names = [
'Linear (m = 4)',
'Linear (m = 3)',
'Moons (m = 2)',
'Circles (m = 2)',
]
# Hyper-parameters for LearningWithNoisyLabels() classifier
params = {
"cv_n_folds": [5], # Default. Keep as default for fair comparison.
"prune_method": ['prune_by_noise_rate', 'prune_by_class', 'both'],
"prune_count_method": ['inverse_nm_dot_s', 'calibrate_confident_joint'],
"converge_latent_estimates": [False, True],
}
experiments = [
'no_label_errors',
'label_errors_no_rp',
'label_errors_with_rp',
]
datasets = [
make_linearly_seperable_dataset(n_classes=4, n_samples=4*dataset_size),
make_linearly_seperable_dataset(n_classes=3, n_samples=3*dataset_size),
make_moons(n_samples=2*dataset_size, noise=0.3, random_state=0), # 2 classes
make_circles(n_samples=2*dataset_size, noise=0.2, factor=0.5, random_state=1), # 2 classes
]
# In[5]:
results = []
# iterate over datasets
for ds_cnt, ds in enumerate(datasets):
# preprocess dataset, split into training and test part
X, y = ds
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=.25, random_state=1)
num_classes = len(np.unique(y_train))
print('Running dataset', ds_cnt + 1, 'with m =', num_classes, 'classes and n =', len(X_train), 'training examples.')
# CONFIDENT LEARNING COMPONENT
np.random.seed(seed=0)
py = np.bincount(y_train) / float(len(y_train))
# Generate the noisy channel to characterize the label errors.
noise_matrix = generate_noise_matrix_from_trace(
K = num_classes,
trace = num_classes * avg_trace,
py = py,
frac_zero_noise_rates = FRAC_ZERO_NOISE_RATES,
)
print_noise_matrix(noise_matrix)
np.random.seed(seed=1)
# Create the noisy labels. This method is exact w.r.t. the noise_matrix.
y_train_w_errors = generate_noisy_labels(y_train, noise_matrix)
clf_results = {}
# iterate over classifiers
for name, clf in zip(names, classifiers):
# Create four copies of the classifier.
# perf_label_clf - Will be trained on the hidden, noise-free labels
# noisy_clf - Will be trained on the noisy labels
# noisy_clf_w_rp - Will be trained on the noisy labels using LearningWithNoisyLabels
clfs = [copy.deepcopy(clf) for i in range(len(experiments))]
perf_label_clf, noisy_clf, noisy_clf_w_rp = clfs
# Classifier (trained without label errors)
perf_label_clf.fit(X_train, y_train)
perf_label_score = perf_label_clf.score(X_test, y_test)
# Classifier (trained with label errors)
noisy_clf.fit(X_train, y_train_w_errors)
noisy_score = noisy_clf.score(X_test, y_test)
# Classifier + RP (trained with label errors)
rp = LearningWithNoisyLabels(noisy_clf_w_rp)
rp.fit(X_train, y_train_w_errors)
noisy_score_w_rp = rp.clf.score(X_test, y_test)
# Store results for each classifier in a dict with key = clf_name.
clf_results[name] = {
'clfs' : clfs,
"perf_label_score" : perf_label_score,
"noisy_score" : noisy_score,
"noisy_score_w_rp" : noisy_score_w_rp,
}
results.append({
"X" : X,
"X_train" : X_train,
"y_train" : y_train,
"y_train_w_errors" : y_train_w_errors,
"num_classes" : num_classes,
"py" : py,
"noise_matrix" : noise_matrix,
"clf_results" : clf_results,
})
# In[6]:
save_figures = True
for e_i, experiment in enumerate([
'no_label_errors',
'label_errors_no_rp',
'label_errors_with_rp',
]):
print(
'Experiment '+str(e_i+1)+':',
"Decision boundary plotted is",
" ".join(experiment.split('_')).capitalize(),
)
print("="*80)
figure = plt.figure(figsize=(27, 12))
i = 1
# iterate over datasets
for ds_cnt, ds in enumerate(datasets):
# Fetch the data we generated above, for plotting.
for key,val in results[ds_cnt].items():
exec(key + '=val')
# Plot the dataset first
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1)
cm = plt.cm.coolwarm
cm = plt.cm.nipy_spectral
cm = plt.cm.Spectral
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
ax.set_ylabel(dataset_names[ds_cnt], fontsize=18)
if ds_cnt == 0:
ax.set_title("Dataset", fontsize=18)
# Plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm,
edgecolors='k')
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
i += 1
# iterate over classifiers
for name, clf in zip(names, classifiers):
# Fetch the classifier computation results.
for key,val in clf_results[name].items():
exec(key + '=val')
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
# This is the clf we'll use to plot the boundary conditions
clf = clfs[e_i]
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])
plot_contours(ax, clf, xx, yy, cmap=cm, alpha=0.5)
# Plot the training points
ax.scatter(
X_train[:, 0],
X_train[:, 1],
c=y_train if experiment == 'no_label_errors' else y_train_w_errors,
cmap=cm,
edgecolors='k',
)
if experiment != 'no_label_errors':
# Plot the label errors
ax.scatter(
X_train[y_train != y_train_w_errors][:, 0],
X_train[y_train != y_train_w_errors][:, 1],
edgecolors='lime',
s=60,
facecolors='none',
alpha=0.55,
linewidth=2,
)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
if ds_cnt == 0:
ax.set_title(name, fontsize=18)
ax.text(xx.min() + 1.5, yy.max() - .7, ('%.2f' % perf_label_score).lstrip('0'),
size=20, horizontalalignment='right', color='black')
ax.text(xx.max() - .2, yy.max() - .7, ('%.2f' % noisy_score).lstrip('0'),
size=20, horizontalalignment='right', color='white')
ax.text(xx.mean() + .75, yy.max() - .7, ('%.2f' % noisy_score_w_rp).lstrip('0'),
size=20, horizontalalignment='right', color='blue')
i += 1
plt.tight_layout()
if save_figures:
_ = plt.savefig('../img/{}.png'.format(experiment), pad_inches=0.0, bbox_inches='tight')
plt.show()
| 35.248588
| 280
| 0.635118
|
794c0064df454b2c3616dac80f79c67d9e50118f
| 37,440
|
py
|
Python
|
pulsar-client-cpp/python/pulsar/__init__.py
|
sinzin91/pulsar
|
f049441f4b52c1c28610b96da32ff2e9ecda5166
|
[
"Apache-2.0"
] | null | null | null |
pulsar-client-cpp/python/pulsar/__init__.py
|
sinzin91/pulsar
|
f049441f4b52c1c28610b96da32ff2e9ecda5166
|
[
"Apache-2.0"
] | null | null | null |
pulsar-client-cpp/python/pulsar/__init__.py
|
sinzin91/pulsar
|
f049441f4b52c1c28610b96da32ff2e9ecda5166
|
[
"Apache-2.0"
] | null | null | null |
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
"""
The Pulsar Python client library is based on the existing C++ client library.
All the same features are exposed through the Python interface.
Currently, the only supported Python version is 2.7.
## Install from PyPI
Download Python wheel binary files for MacOS and Linux
directly from the PyPI archive.
#!shell
$ sudo pip install pulsar-client
## Install from sources
Follow the instructions to compile the Pulsar C++ client library. This method
will also build the Python binding for the library.
To install the Python bindings:
#!shell
$ cd pulsar-client-cpp/python
$ sudo python setup.py install
## Examples
### [Producer](#pulsar.Producer) example
#!python
import pulsar
client = pulsar.Client('pulsar://localhost:6650')
producer = client.create_producer('my-topic')
for i in range(10):
producer.send(('Hello-%d' % i).encode('utf-8'))
client.close()
#### [Consumer](#pulsar.Consumer) Example
#!python
import pulsar
client = pulsar.Client('pulsar://localhost:6650')
consumer = client.subscribe('my-topic', 'my-subscription')
while True:
msg = consumer.receive()
print("Received message '%s' id='%s'", msg.data().decode('utf-8'), msg.message_id())
consumer.acknowledge(msg)
client.close()
### [Async producer](#pulsar.Producer.send_async) example
#!python
import pulsar
client = pulsar.Client('pulsar://localhost:6650')
producer = client.create_producer(
'my-topic',
block_if_queue_full=True,
batching_enabled=True,
batching_max_publish_delay_ms=10
)
def send_callback(res, msg):
print('Message published res=%s', res)
while True:
producer.send_async(('Hello-%d' % i).encode('utf-8'), send_callback)
client.close()
"""
import _pulsar
from _pulsar import Result, CompressionType, ConsumerType, PartitionsRoutingMode # noqa: F401
from pulsar.functions.function import Function
from pulsar.functions.context import Context
from pulsar.functions.serde import SerDe, IdentitySerDe, PickleSerDe
import re
_retype = type(re.compile('x'))
class MessageId:
"""
Represents a message id
"""
'Represents the earliest message stored in a topic'
earliest = _pulsar.MessageId.earliest
'Represents the latest message published on a topic'
latest = _pulsar.MessageId.latest
def serialize(self):
"""
Returns a bytes representation of the message id.
This bytes sequence can be stored and later deserialized.
"""
return self._msg_id.serialize()
@staticmethod
def deserialize(message_id_bytes):
"""
Deserialize a message id object from a previously
serialized bytes sequence.
"""
return _pulsar.MessageId.deserialize(message_id_bytes)
class Message:
"""
Message objects are returned by a consumer, either by calling `receive` or
through a listener.
"""
def data(self):
"""
Returns object typed bytes with the content of the message.
"""
return self._message.data()
def properties(self):
"""
Return the properties attached to the message. Properties are
application-defined key/value pairs that will be attached to the
message.
"""
return self._message.properties()
def partition_key(self):
"""
Get the partitioning key for the message.
"""
return self._message.partition_key()
def publish_timestamp(self):
"""
Get the timestamp in milliseconds with the message publish time.
"""
return self._message.publish_timestamp()
def event_timestamp(self):
"""
Get the timestamp in milliseconds with the message event time.
"""
return self._message.event_timestamp()
def message_id(self):
"""
The message ID that can be used to refere to this particular message.
"""
return self._message.message_id()
class Authentication:
"""
Authentication provider object. Used to load authentication from an external
shared library.
"""
def __init__(self, dynamicLibPath, authParamsString):
"""
Create the authentication provider instance.
**Args**
* `dynamicLibPath`: Path to the authentication provider shared library
(such as `tls.so`)
* `authParamsString`: Comma-separated list of provider-specific
configuration params
"""
_check_type(str, dynamicLibPath, 'dynamicLibPath')
_check_type(str, authParamsString, 'authParamsString')
self.auth = _pulsar.Authentication(dynamicLibPath, authParamsString)
class AuthenticationTLS(Authentication):
"""
TLS Authentication implementation
"""
def __init__(self, certificate_path, private_key_path):
"""
Create the TLS authentication provider instance.
**Args**
* `certificatePath`: Path to the public certificate
* `privateKeyPath`: Path to private TLS key
"""
_check_type(str, certificate_path, 'certificate_path')
_check_type(str, private_key_path, 'private_key_path')
self.auth = _pulsar.AuthenticationTLS(certificate_path, private_key_path)
class AuthenticationToken(Authentication):
"""
Token based authentication implementation
"""
def __init__(self, token):
"""
Create the token authentication provider instance.
**Args**
* `token`: A string containing the token or a functions that provides a
string with the token
"""
if not (isinstance(token, str) or callable(token)):
raise ValueError("Argument token is expected to be of type 'str' or a function returning 'str'")
self.auth = _pulsar.AuthenticationToken(token)
class AuthenticationAthenz(Authentication):
"""
Athenz Authentication implementation
"""
def __init__(self, auth_params_string):
"""
Create the Athenz authentication provider instance.
**Args**
* `auth_params_string`: JSON encoded configuration for Athenz client
"""
_check_type(str, auth_params_string, 'auth_params_string')
self.auth = _pulsar.AuthenticationAthenz(auth_params_string)
class Client:
"""
The Pulsar client. A single client instance can be used to create producers
and consumers on multiple topics.
The client will share the same connection pool and threads across all
producers and consumers.
"""
def __init__(self, service_url,
authentication=None,
operation_timeout_seconds=30,
io_threads=1,
message_listener_threads=1,
concurrent_lookup_requests=50000,
log_conf_file_path=None,
use_tls=False,
tls_trust_certs_file_path=None,
tls_allow_insecure_connection=False
):
"""
Create a new Pulsar client instance.
**Args**
* `service_url`: The Pulsar service url eg: pulsar://my-broker.com:6650/
**Options**
* `authentication`:
Set the authentication provider to be used with the broker. For example:
`AuthenticationTls` or `AuthenticationAthenz`
* `operation_timeout_seconds`:
Set timeout on client operations (subscribe, create producer, close,
unsubscribe).
* `io_threads`:
Set the number of IO threads to be used by the Pulsar client.
* `message_listener_threads`:
Set the number of threads to be used by the Pulsar client when
delivering messages through message listener. The default is 1 thread
per Pulsar client. If using more than 1 thread, messages for distinct
`message_listener`s will be delivered in different threads, however a
single `MessageListener` will always be assigned to the same thread.
* `concurrent_lookup_requests`:
Number of concurrent lookup-requests allowed on each broker connection
to prevent overload on the broker.
* `log_conf_file_path`:
Initialize log4cxx from a configuration file.
* `use_tls`:
Configure whether to use TLS encryption on the connection. This setting
is deprecated. TLS will be automatically enabled if the `serviceUrl` is
set to `pulsar+ssl://` or `https://`
* `tls_trust_certs_file_path`:
Set the path to the trusted TLS certificate file.
* `tls_allow_insecure_connection`:
Configure whether the Pulsar client accepts untrusted TLS certificates
from the broker.
"""
_check_type(str, service_url, 'service_url')
_check_type_or_none(Authentication, authentication, 'authentication')
_check_type(int, operation_timeout_seconds, 'operation_timeout_seconds')
_check_type(int, io_threads, 'io_threads')
_check_type(int, message_listener_threads, 'message_listener_threads')
_check_type(int, concurrent_lookup_requests, 'concurrent_lookup_requests')
_check_type_or_none(str, log_conf_file_path, 'log_conf_file_path')
_check_type(bool, use_tls, 'use_tls')
_check_type_or_none(str, tls_trust_certs_file_path, 'tls_trust_certs_file_path')
_check_type(bool, tls_allow_insecure_connection, 'tls_allow_insecure_connection')
conf = _pulsar.ClientConfiguration()
if authentication:
conf.authentication(authentication.auth)
conf.operation_timeout_seconds(operation_timeout_seconds)
conf.io_threads(io_threads)
conf.message_listener_threads(message_listener_threads)
conf.concurrent_lookup_requests(concurrent_lookup_requests)
if log_conf_file_path:
conf.log_conf_file_path(log_conf_file_path)
if use_tls or service_url.startswith('pulsar+ssl://') or service_url.startswith('https://'):
conf.use_tls(True)
if tls_trust_certs_file_path:
conf.tls_trust_certs_file_path(tls_trust_certs_file_path)
conf.tls_allow_insecure_connection(tls_allow_insecure_connection)
self._client = _pulsar.Client(service_url, conf)
self._consumers = []
def create_producer(self, topic,
producer_name=None,
initial_sequence_id=None,
send_timeout_millis=30000,
compression_type=CompressionType.NONE,
max_pending_messages=1000,
max_pending_messages_across_partitions=50000,
block_if_queue_full=False,
batching_enabled=False,
batching_max_messages=1000,
batching_max_allowed_size_in_bytes=128*1024,
batching_max_publish_delay_ms=10,
message_routing_mode=PartitionsRoutingMode.RoundRobinDistribution,
properties=None,
):
"""
Create a new producer on a given topic.
**Args**
* `topic`:
The topic name
**Options**
* `producer_name`:
Specify a name for the producer. If not assigned,
the system will generate a globally unique name which can be accessed
with `Producer.producer_name()`. When specifying a name, it is app to
the user to ensure that, for a given topic, the producer name is unique
across all Pulsar's clusters.
* `initial_sequence_id`:
Set the baseline for the sequence ids for messages
published by the producer. First message will be using
`(initialSequenceId + 1)`` as its sequence id and subsequent messages will
be assigned incremental sequence ids, if not otherwise specified.
* `send_timeout_seconds`:
If a message is not acknowledged by the server before the
`send_timeout` expires, an error will be reported.
* `compression_type`:
Set the compression type for the producer. By default, message
payloads are not compressed. Supported compression types are
`CompressionType.LZ4` and `CompressionType.ZLib`.
* `max_pending_messages`:
Set the max size of the queue holding the messages pending to receive
an acknowledgment from the broker.
* `max_pending_messages_across_partitions`:
Set the max size of the queue holding the messages pending to receive
an acknowledgment across partitions from the broker.
* `block_if_queue_full`: Set whether `send_async` operations should
block when the outgoing message queue is full.
* `message_routing_mode`:
Set the message routing mode for the partitioned producer. Default is `PartitionsRoutingMode.RoundRobinDistribution`,
other option is `PartitionsRoutingMode.UseSinglePartition`
* `properties`:
Sets the properties for the producer. The properties associated with a producer
can be used for identify a producer at broker side.
"""
_check_type(str, topic, 'topic')
_check_type_or_none(str, producer_name, 'producer_name')
_check_type_or_none(int, initial_sequence_id, 'initial_sequence_id')
_check_type(int, send_timeout_millis, 'send_timeout_millis')
_check_type(CompressionType, compression_type, 'compression_type')
_check_type(int, max_pending_messages, 'max_pending_messages')
_check_type(int, max_pending_messages_across_partitions, 'max_pending_messages_across_partitions')
_check_type(bool, block_if_queue_full, 'block_if_queue_full')
_check_type(bool, batching_enabled, 'batching_enabled')
_check_type(int, batching_max_messages, 'batching_max_messages')
_check_type(int, batching_max_allowed_size_in_bytes, 'batching_max_allowed_size_in_bytes')
_check_type(int, batching_max_publish_delay_ms, 'batching_max_publish_delay_ms')
_check_type_or_none(dict, properties, 'properties')
conf = _pulsar.ProducerConfiguration()
conf.send_timeout_millis(send_timeout_millis)
conf.compression_type(compression_type)
conf.max_pending_messages(max_pending_messages)
conf.max_pending_messages_across_partitions(max_pending_messages_across_partitions)
conf.block_if_queue_full(block_if_queue_full)
conf.batching_enabled(batching_enabled)
conf.batching_max_messages(batching_max_messages)
conf.batching_max_allowed_size_in_bytes(batching_max_allowed_size_in_bytes)
conf.batching_max_publish_delay_ms(batching_max_publish_delay_ms)
conf.partitions_routing_mode(message_routing_mode)
if producer_name:
conf.producer_name(producer_name)
if initial_sequence_id:
conf.initial_sequence_id(initial_sequence_id)
if properties:
for k, v in properties.items():
conf.property(k, v)
p = Producer()
p._producer = self._client.create_producer(topic, conf)
return p
def subscribe(self, topic, subscription_name,
consumer_type=ConsumerType.Exclusive,
message_listener=None,
receiver_queue_size=1000,
max_total_receiver_queue_size_across_partitions=50000,
consumer_name=None,
unacked_messages_timeout_ms=None,
broker_consumer_stats_cache_time_ms=30000,
is_read_compacted=False,
properties=None,
pattern_auto_discovery_period=60
):
"""
Subscribe to the given topic and subscription combination.
**Args**
* `topic`: The name of the topic, list of topics or regex pattern.
This method will accept these forms:
- `topic='my-topic'`
- `topic=['topic-1', 'topic-2', 'topic-3']`
- `topic=re.compile('topic-.*')`
* `subscription`: The name of the subscription.
**Options**
* `consumer_type`:
Select the subscription type to be used when subscribing to the topic.
* `message_listener`:
Sets a message listener for the consumer. When the listener is set,
the application will receive messages through it. Calls to
`consumer.receive()` will not be allowed. The listener function needs
to accept (consumer, message), for example:
#!python
def my_listener(consumer, message):
# process message
consumer.acknowledge(message)
* `receiver_queue_size`:
Sets the size of the consumer receive queue. The consumer receive
queue controls how many messages can be accumulated by the consumer
before the application calls `receive()`. Using a higher value could
potentially increase the consumer throughput at the expense of higher
memory utilization. Setting the consumer queue size to zero decreases
the throughput of the consumer by disabling pre-fetching of messages.
This approach improves the message distribution on shared subscription
by pushing messages only to those consumers that are ready to process
them. Neither receive with timeout nor partitioned topics can be used
if the consumer queue size is zero. The `receive()` function call
should not be interrupted when the consumer queue size is zero. The
default value is 1000 messages and should work well for most use
cases.
* `max_total_receiver_queue_size_across_partitions`
Set the max total receiver queue size across partitions.
This setting will be used to reduce the receiver queue size for individual partitions
* `consumer_name`:
Sets the consumer name.
* `unacked_messages_timeout_ms`:
Sets the timeout in milliseconds for unacknowledged messages. The
timeout needs to be greater than 10 seconds. An exception is thrown if
the given value is less than 10 seconds. If a successful
acknowledgement is not sent within the timeout, all the unacknowledged
messages are redelivered.
* `broker_consumer_stats_cache_time_ms`:
Sets the time duration for which the broker-side consumer stats will
be cached in the client.
* `properties`:
Sets the properties for the consumer. The properties associated with a consumer
can be used for identify a consumer at broker side.
* `pattern_auto_discovery_period`:
Periods of seconds for consumer to auto discover match topics.
"""
_check_type(str, subscription_name, 'subscription_name')
_check_type(ConsumerType, consumer_type, 'consumer_type')
_check_type(int, receiver_queue_size, 'receiver_queue_size')
_check_type(int, max_total_receiver_queue_size_across_partitions,
'max_total_receiver_queue_size_across_partitions')
_check_type_or_none(str, consumer_name, 'consumer_name')
_check_type_or_none(int, unacked_messages_timeout_ms, 'unacked_messages_timeout_ms')
_check_type(int, broker_consumer_stats_cache_time_ms, 'broker_consumer_stats_cache_time_ms')
_check_type(bool, is_read_compacted, 'is_read_compacted')
_check_type_or_none(dict, properties, 'properties')
conf = _pulsar.ConsumerConfiguration()
conf.consumer_type(consumer_type)
conf.read_compacted(is_read_compacted)
if message_listener:
conf.message_listener(message_listener)
conf.receiver_queue_size(receiver_queue_size)
conf.max_total_receiver_queue_size_across_partitions(max_total_receiver_queue_size_across_partitions)
if consumer_name:
conf.consumer_name(consumer_name)
if unacked_messages_timeout_ms:
conf.unacked_messages_timeout_ms(unacked_messages_timeout_ms)
conf.broker_consumer_stats_cache_time_ms(broker_consumer_stats_cache_time_ms)
if properties:
for k, v in properties.items():
conf.property(k, v)
c = Consumer()
if isinstance(topic, str):
# Single topic
c._consumer = self._client.subscribe(topic, subscription_name, conf)
elif isinstance(topic, list):
# List of topics
c._consumer = self._client.subscribe_topics(topic, subscription_name, conf)
elif isinstance(topic, _retype):
# Regex pattern
c._consumer = self._client.subscribe_pattern(topic.pattern, subscription_name, conf)
else:
raise ValueError("Argument 'topic' is expected to be of a type between (str, list, re.pattern)")
c._client = self
self._consumers.append(c)
return c
def create_reader(self, topic, start_message_id,
reader_listener=None,
receiver_queue_size=1000,
reader_name=None,
subscription_role_prefix=None
):
"""
Create a reader on a particular topic
**Args**
* `topic`: The name of the topic.
* `start_message_id`: The initial reader positioning is done by specifying a message id.
The options are:
* `MessageId.earliest`: Start reading from the earliest message available in the topic
* `MessageId.latest`: Start reading from the end topic, only getting messages published
after the reader was created
* `MessageId`: When passing a particular message id, the reader will position itself on
that specific position. The first message to be read will be the message next to the
specified messageId. Message id can be serialized into a string and deserialized
back into a `MessageId` object:
# Serialize to string
s = msg.message_id().serialize()
# Deserialize from string
msg_id = MessageId.deserialize(s)
**Options**
* `reader_listener`:
Sets a message listener for the reader. When the listener is set,
the application will receive messages through it. Calls to
`reader.read_next()` will not be allowed. The listener function needs
to accept (reader, message), for example:
def my_listener(reader, message):
# process message
pass
* `receiver_queue_size`:
Sets the size of the reader receive queue. The reader receive
queue controls how many messages can be accumulated by the reader
before the application calls `read_next()`. Using a higher value could
potentially increase the reader throughput at the expense of higher
memory utilization.
* `reader_name`:
Sets the reader name.
* `subscription_role_prefix`:
Sets the subscription role prefix.
"""
_check_type(str, topic, 'topic')
_check_type(_pulsar.MessageId, start_message_id, 'start_message_id')
_check_type(int, receiver_queue_size, 'receiver_queue_size')
_check_type_or_none(str, reader_name, 'reader_name')
_check_type_or_none(str, subscription_role_prefix, 'subscription_role_prefix')
conf = _pulsar.ReaderConfiguration()
if reader_listener:
conf.reader_listener(reader_listener)
conf.receiver_queue_size(receiver_queue_size)
if reader_name:
conf.reader_name(reader_name)
if subscription_role_prefix:
conf.subscription_role_prefix(subscription_role_prefix)
c = Reader()
c._reader = self._client.create_reader(topic, start_message_id, conf)
c._client = self
self._consumers.append(c)
return c
def get_topic_partitions(self, topic):
"""
Get the list of partitions for a given topic.
If the topic is partitioned, this will return a list of partition names. If the topic is not
partitioned, the returned list will contain the topic name itself.
This can be used to discover the partitions and create Reader, Consumer or Producer
instances directly on a particular partition.
:param topic: the topic name to lookup
:return: a list of partition name
"""
_check_type(str, topic, 'topic')
return self._client.get_topic_partitions(topic)
def close(self):
"""
Close the client and all the associated producers and consumers
"""
self._client.close()
class Producer:
"""
The Pulsar message producer, used to publish messages on a topic.
"""
def topic(self):
"""
Return the topic which producer is publishing to
"""
return self._producer.topic()
def producer_name(self):
"""
Return the producer name which could have been assigned by the
system or specified by the client
"""
return self._producer.producer_name()
def last_sequence_id(self):
"""
Get the last sequence id that was published by this producer.
This represent either the automatically assigned or custom sequence id
(set on the `MessageBuilder`) that was published and acknowledged by the broker.
After recreating a producer with the same producer name, this will return the
last message that was published in the previous producer session, or -1 if
there no message was ever published.
"""
return self._producer.last_sequence_id()
def send(self, content,
properties=None,
partition_key=None,
sequence_id=None,
replication_clusters=None,
disable_replication=False,
event_timestamp=None,
):
"""
Publish a message on the topic. Blocks until the message is acknowledged
**Args**
* `content`:
A `bytes` object with the message payload.
**Options**
* `properties`:
A dict of application-defined string properties.
* `partition_key`:
Sets the partition key for message routing. A hash of this key is used
to determine the message's topic partition.
* `sequence_id`:
Specify a custom sequence id for the message being published.
* `replication_clusters`:
Override namespace replication clusters. Note that it is the caller's
responsibility to provide valid cluster names and that all clusters
have been previously configured as topics. Given an empty list,
the message will replicate according to the namespace configuration.
* `disable_replication`:
Do not replicate this message.
* `event_timestamp`:
Timestamp in millis of the timestamp of event creation
"""
msg = self._build_msg(content, properties, partition_key, sequence_id,
replication_clusters, disable_replication, event_timestamp)
return self._producer.send(msg)
def send_async(self, content, callback,
properties=None,
partition_key=None,
sequence_id=None,
replication_clusters=None,
disable_replication=False,
event_timestamp=None
):
"""
Send a message asynchronously.
The `callback` will be invoked once the message has been acknowledged
by the broker.
Example:
#!python
def callback(res, msg):
print('Message published: %s' % res)
producer.send_async(msg, callback)
When the producer queue is full, by default the message will be rejected
and the callback invoked with an error code.
**Args**
* `content`:
A `bytes` object with the message payload.
**Options**
* `properties`:
A dict of application0-defined string properties.
* `partition_key`:
Sets the partition key for the message routing. A hash of this key is
used to determine the message's topic partition.
* `sequence_id`:
Specify a custom sequence id for the message being published.
* `replication_clusters`: Override namespace replication clusters. Note
that it is the caller's responsibility to provide valid cluster names
and that all clusters have been previously configured as topics.
Given an empty list, the message will replicate per the namespace
configuration.
* `disable_replication`:
Do not replicate this message.
* `event_timestamp`:
Timestamp in millis of the timestamp of event creation
"""
msg = self._build_msg(content, properties, partition_key, sequence_id,
replication_clusters, disable_replication, event_timestamp)
self._producer.send_async(msg, callback)
def close(self):
"""
Close the producer.
"""
self._producer.close()
def _build_msg(self, content, properties, partition_key, sequence_id,
replication_clusters, disable_replication, event_timestamp):
_check_type(bytes, content, 'content')
_check_type_or_none(dict, properties, 'properties')
_check_type_or_none(str, partition_key, 'partition_key')
_check_type_or_none(int, sequence_id, 'sequence_id')
_check_type_or_none(list, replication_clusters, 'replication_clusters')
_check_type(bool, disable_replication, 'disable_replication')
_check_type_or_none(int, event_timestamp, 'event_timestamp')
mb = _pulsar.MessageBuilder()
mb.content(content)
if properties:
for k, v in properties.items():
mb.property(k, v)
if partition_key:
mb.partition_key(partition_key)
if sequence_id:
mb.sequence_id(sequence_id)
if replication_clusters:
mb.replication_clusters(replication_clusters)
if disable_replication:
mb.disable_replication(disable_replication)
if event_timestamp:
mb.event_timestamp(event_timestamp)
return mb.build()
class Consumer:
"""
Pulsar consumer.
"""
def topic(self):
"""
Return the topic this consumer is subscribed to.
"""
return self._consumer.topic()
def subscription_name(self):
"""
Return the subscription name.
"""
return self._consumer.subscription_name()
def unsubscribe(self):
"""
Unsubscribe the current consumer from the topic.
This method will block until the operation is completed. Once the
consumer is unsubscribed, no more messages will be received and
subsequent new messages will not be retained for this consumer.
This consumer object cannot be reused.
"""
return self._consumer.unsubscribe()
def receive(self, timeout_millis=None):
"""
Receive a single message.
If a message is not immediately available, this method will block until
a new message is available.
**Options**
* `timeout_millis`:
If specified, the receive will raise an exception if a message is not
available within the timeout.
"""
if timeout_millis is None:
return self._consumer.receive()
else:
_check_type(int, timeout_millis, 'timeout_millis')
return self._consumer.receive(timeout_millis)
def acknowledge(self, message):
"""
Acknowledge the reception of a single message.
This method will block until an acknowledgement is sent to the broker.
After that, the message will not be re-delivered to this consumer.
**Args**
* `message`:
The received message or message id.
"""
self._consumer.acknowledge(message)
def acknowledge_cumulative(self, message):
"""
Acknowledge the reception of all the messages in the stream up to (and
including) the provided message.
This method will block until an acknowledgement is sent to the broker.
After that, the messages will not be re-delivered to this consumer.
**Args**
* `message`:
The received message or message id.
"""
self._consumer.acknowledge_cumulative(message)
def pause_message_listener(self):
"""
Pause receiving messages via the `message_listener` until
`resume_message_listener()` is called.
"""
self._consumer.pause_message_listener()
def resume_message_listener(self):
"""
Resume receiving the messages via the message listener.
Asynchronously receive all the messages enqueued from the time
`pause_message_listener()` was called.
"""
self._consumer.resume_message_listener()
def redeliver_unacknowledged_messages(self):
"""
Redelivers all the unacknowledged messages. In failover mode, the
request is ignored if the consumer is not active for the given topic. In
shared mode, the consumer's messages to be redelivered are distributed
across all the connected consumers. This is a non-blocking call and
doesn't throw an exception. In case the connection breaks, the messages
are redelivered after reconnect.
"""
self._consumer.redeliver_unacknowledged_messages()
def seek(self, messageid):
"""
Reset the subscription associated with this consumer to a specific message id.
The message id can either be a specific message or represent the first or last messages in the topic.
Note: this operation can only be done on non-partitioned topics. For these, one can rather perform the
seek() on the individual partitions.
**Args**
* `message`:
The message id for seek.
"""
self._consumer.seek(messageid)
def close(self):
"""
Close the consumer.
"""
self._consumer.close()
self._client._consumers.remove(self)
class Reader:
"""
Pulsar topic reader.
"""
def topic(self):
"""
Return the topic this reader is reading from.
"""
return self._reader.topic()
def read_next(self, timeout_millis=None):
"""
Read a single message.
If a message is not immediately available, this method will block until
a new message is available.
**Options**
* `timeout_millis`:
If specified, the receive will raise an exception if a message is not
available within the timeout.
"""
if timeout_millis is None:
return self._reader.read_next()
else:
_check_type(int, timeout_millis, 'timeout_millis')
return self._reader.read_next(timeout_millis)
def has_message_available(self):
"""
Check if there is any message available to read from the current position.
"""
return self._reader.has_message_available();
def close(self):
"""
Close the reader.
"""
self._reader.close()
self._client._consumers.remove(self)
def _check_type(var_type, var, name):
if not isinstance(var, var_type):
raise ValueError("Argument %s is expected to be of type '%s'" % (name, var_type.__name__))
def _check_type_or_none(var_type, var, name):
if var is not None and not isinstance(var, var_type):
raise ValueError("Argument %s is expected to be either None or of type '%s'"
% (name, var_type.__name__))
| 38.204082
| 127
| 0.650614
|
794c00924f437b260060da4c11c24843393cb9bc
| 3,038
|
py
|
Python
|
cmd/gettext_pot_db.py
|
oo13/xgettext-endless-sky
|
729098a80febab4e22ab59ad6aaa6c177521bdf7
|
[
"CC0-1.0"
] | 2
|
2021-03-22T10:35:16.000Z
|
2021-03-22T13:23:57.000Z
|
cmd/gettext_pot_db.py
|
oo13/xgettext-endless-sky
|
729098a80febab4e22ab59ad6aaa6c177521bdf7
|
[
"CC0-1.0"
] | null | null | null |
cmd/gettext_pot_db.py
|
oo13/xgettext-endless-sky
|
729098a80febab4e22ab59ad6aaa6c177521bdf7
|
[
"CC0-1.0"
] | null | null | null |
#!/usr/bin/python3
# -*- coding: utf-8-unix -*-
"""Gettext POT database"""
# Copyright © 2018, 2019, 2021 OOTA, Masato
#
# This is published by CC0 1.0.
# For more information, see CC0 1.0 Universal (CC0 1.0) Public Domain Dedication
# at https://creativecommons.org/publicdomain/zero/1.0/deed).
import types
import re
def _id(msg, context):
if len(context) == 0:
return msg
else:
return context + '\x04' + msg
class pot_db:
"""Gettext POT database"""
def __init__(self):
self.messages = [] # in order
self.contexts = [] # in order
self.message_info = [] # in order
self.message_index = {}
def append(self, msg, context, comment, file, line):
"""append message data."""
if msg[0] == '':
return
id = _id(msg[0], context)
if id not in self.message_index:
self.message_index[id] = len(self.messages)
self.messages.append(msg)
self.contexts.append(context)
self.message_info.append([ (comment, file, line) ])
else:
idx = self.message_index[id]
self.message_info[idx].append( (comment, file, line) )
if len(self.messages[idx]) < len(msg):
self.messages[idx] = msg
_escape_table = {
'\a' : '\\a',
'\b' : '\\b',
'\f' : '\\f',
'\n' : '\\n"\n"',
'\r' : '\\r',
'\t' : '\\t',
'\v' : '\\v',
'\\' : '\\\\',
'"' : '\\"'
}
def _escape_chars(self, s):
"""escape character for gettext."""
out = ''
found_nl = False
for c in s:
found_nl |= c == '\n'
if c in self._escape_table:
out += self._escape_table[c]
elif c < ' ':
out += '\\x{0:02X}'.format(ord(c))
else:
out += c
if s[-1] == '\n':
out = out[:-3]
if found_nl:
out = '"\n"' + out
return out
def write(self, write_func):
"""write all message data."""
for idx in range(len(self.messages)):
write_func("\n")
for info in self.message_info[idx]:
if len(info[0]) > 0:
write_func("#. " + info[0] + "\n")
write_func("#:")
for info in self.message_info[idx]:
write_func(" {}:{}".format(info[1], info[2]))
write_func("\n")
if len(self.contexts[idx]) > 0:
write_func('msgctxt "{}"\n'.format(self._escape_chars(self.contexts[idx])))
if len(self.messages[idx]) == 1:
write_func('msgid "{}"\n'.format(self._escape_chars(self.messages[idx][0])))
write_func('msgstr ""\n')
else:
write_func('msgid "{}"\n'.format(self._escape_chars(self.messages[idx][0])))
write_func('msgid_plural "{}"\n'.format(self._escape_chars(self.messages[idx][1])))
write_func('msgstr[0] ""\n')
| 34.134831
| 99
| 0.48815
|
794c00dd9bf3e917cb63f3258b655f4390e227f5
| 763
|
py
|
Python
|
plugins/github/komand_github/actions/get_my_issues/action.py
|
lukaszlaszuk/insightconnect-plugins
|
8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892
|
[
"MIT"
] | 46
|
2019-06-05T20:47:58.000Z
|
2022-03-29T10:18:01.000Z
|
plugins/github/komand_github/actions/get_my_issues/action.py
|
lukaszlaszuk/insightconnect-plugins
|
8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892
|
[
"MIT"
] | 386
|
2019-06-07T20:20:39.000Z
|
2022-03-30T17:35:01.000Z
|
plugins/github/komand_github/actions/get_my_issues/action.py
|
lukaszlaszuk/insightconnect-plugins
|
8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892
|
[
"MIT"
] | 43
|
2019-07-09T14:13:58.000Z
|
2022-03-28T12:04:46.000Z
|
import komand
import requests
from .. import utils
from .schema import GetMyIssuesInput, GetMyIssuesOutput
class GetMyIssues(komand.Action):
def __init__(self):
super(self.__class__, self).__init__(
name="get_my_issues",
description="Retrieve all issues assigned to the currently authenticated user",
input=GetMyIssuesInput(),
output=GetMyIssuesOutput(),
)
def run(self, params={}):
try:
results = requests.get("https://api.github.com/issues", auth=self.connection.basic_auth)
return {"issues": utils.clean(results.json())}
except Exception as e:
self.logger.error("Could not retrieve current user's assigned issues. Error: " + str(e))
| 34.681818
| 100
| 0.650066
|
794c01c1fda57af5f5c5bdcc764b6200c3c157bf
| 3,431
|
py
|
Python
|
attic/wpfrontman/wp_frontman/tests/test_post.py
|
ludoo/wpkit
|
0447d941a438e143b0e51b5e73418a0206832823
|
[
"BSD-3-Clause"
] | null | null | null |
attic/wpfrontman/wp_frontman/tests/test_post.py
|
ludoo/wpkit
|
0447d941a438e143b0e51b5e73418a0206832823
|
[
"BSD-3-Clause"
] | null | null | null |
attic/wpfrontman/wp_frontman/tests/test_post.py
|
ludoo/wpkit
|
0447d941a438e143b0e51b5e73418a0206832823
|
[
"BSD-3-Clause"
] | null | null | null |
import re
import unittest
from django.conf import settings
from django.db import connection
from django.core.exceptions import ObjectDoesNotExist
from django.core.urlresolvers import set_urlconf, get_resolver
from django.test.client import Client
from wp_frontman.blog import Blog
from wp_frontman.models import Post
class PostTestCase(unittest.TestCase):
def setUp(self):
self.root_urlconf = settings.ROOT_URLCONF
settings.ROOT_URLCONF = Blog.get_active().urlconf
self.cursor = connection.cursor()
def tearDown(self):
settings.ROOT_URLCONF = self.root_urlconf
def testManager(self):
db_table = Post._meta.db_table
# test that the forced filter and published methods actually do what they are supposed to
self.cursor.execute("select count(*) from %s where post_type='post'" % db_table)
self.assertEqual(Post.objects.count(), self.cursor.fetchone()[0])
self.cursor.execute("select count(*) from %s where post_type='post' and post_status='publish'" % db_table)
self.assertEqual(Post.objects.published().count(), self.cursor.fetchone()[0])
# test that default ordering is correct, and that published() is accessible from all kinds of querysets
self.cursor.execute("select ID from %s where post_type='post' order by post_date desc limit 2" % db_table)
self.assertEqual(repr(Post.objects.values_list('id')[:2]), repr(list(self.cursor.fetchall())))
self.cursor.execute("select ID from %s where post_type='post' and post_status='publish' order by ID desc limit 2" % db_table)
result = repr(list(self.cursor.fetchall()))
self.assertEqual(repr(Post.objects.values_list('id').published()[:2]), result)
self.assertEqual(repr(Post.objects.published().values_list('id')[:2]), result)
post = Post.objects.published()[0]
resolver = get_resolver(Blog.get_active().urlconf)
kw = resolver.resolve(post.get_absolute_url())[2]
self.assertEqual(post, Post.objects.get_from_keywords(kw))
self.assertRaises(ObjectDoesNotExist, Post.objects.get_from_keywords, dict(slug='ksjflkdsjlkj'))
self.assertRaises(ObjectDoesNotExist, Post.objects.get_from_path, '/a/b/c')
self.assertEqual(post, Post.objects.get_from_path(post.get_absolute_url()))
def testDbTable(self):
db_table1 = Post._meta.db_table
self.cursor.execute("select count(*) from %s where post_type='post'" % db_table1)
num1 = self.cursor.fetchone()[0]
blog = Blog.factory(2)
db_table2 = Post._meta.db_table
self.assertNotEqual(db_table1, db_table2)
self.cursor.execute("select count(*) from %s where post_type='post'" % db_table2)
num2 = self.cursor.fetchone()[0]
self.assertNotEqual(num1, num2)
self.assertEqual(Post.objects.count(), num2)
Blog.default_active()
def testPermalink(self):
p = Post.objects.get(id=25)
self.assertEqual(p.get_absolute_url(), '/second-category/second-category-first-child/25-post-in-second-category-first-child/')
client = Client(HTTP_HOST='ludolo.it')
response = client.get(p.get_absolute_url())
self.assertEqual(response.status_code, 200)
self.assertEqual(re.findall(r'(?smi)<title>([^<]+)</title>', response.content), ['Post in second category first child | WP Frontman'])
| 51.984848
| 142
| 0.692801
|
794c01f19f7180c1ae60d794e1d86443a09699ef
| 4,513
|
py
|
Python
|
lab_4/decode_text_score_eight_test.py
|
LeraTormashova/2020-2-level-labs
|
c38c05ffef1fc4fa4c3c5861ab9877020060567d
|
[
"MIT"
] | null | null | null |
lab_4/decode_text_score_eight_test.py
|
LeraTormashova/2020-2-level-labs
|
c38c05ffef1fc4fa4c3c5861ab9877020060567d
|
[
"MIT"
] | null | null | null |
lab_4/decode_text_score_eight_test.py
|
LeraTormashova/2020-2-level-labs
|
c38c05ffef1fc4fa4c3c5861ab9877020060567d
|
[
"MIT"
] | null | null | null |
# pylint: skip-file
"""
Tests decode_text function
"""
import unittest
from lab_4.main import encode_text, WordStorage, LikelihoodBasedTextGenerator, decode_text
from lab_4.ngrams.ngram_trie import NGramTrie
class DecodeCorpusTest(unittest.TestCase):
"""
checks for decode_text function.
Score 8 or above function
"""
def test_decode_text_ideal(self):
corpus = ('i', 'have', 'a', 'cat', '<END>',
'his', 'name', 'is', 'bruno', '<END>',
'i', 'have', 'a', 'dog', 'too', '<END>',
'his', 'name', 'is', 'rex', '<END>',
'her', 'name', 'is', 'rex', 'too', '<END>')
storage = WordStorage()
storage.update(corpus)
encoded = encode_text(storage, corpus)
trie = NGramTrie(3, encoded)
context = (storage.get_id('name'),
storage.get_id('is'),)
end = storage.get_id('<END>')
generator = LikelihoodBasedTextGenerator(storage, trie)
to_decode = generator.generate_text(context, 2)
self.assertEqual(to_decode[-1], end)
expected = ('Name is rex', 'Her name is rex')
actual = decode_text(storage, to_decode)
self.assertEqual(expected, actual)
def test_decode_text_incorrect_storage(self):
corpus = ('i', 'have', 'a', 'cat', '<END>',
'his', 'name', 'is', 'bruno', '<END>',
'i', 'have', 'a', 'dog', 'too', '<END>',
'his', 'name', 'is', 'rex', '<END>',
'her', 'name', 'is', 'rex', 'too', '<END>')
storage = WordStorage()
storage.update(corpus)
encoded = encode_text(storage, corpus)
trie = NGramTrie(3, encoded)
context = (storage.get_id('name'),
storage.get_id('is'),)
generator = LikelihoodBasedTextGenerator(storage, trie)
to_decode = generator.generate_text(context, 2)
bad_inputs = [(), [], 123, None, NGramTrie]
for bad_storage in bad_inputs:
self.assertRaises(ValueError, decode_text, bad_storage, to_decode)
def test_decode_text_incorrect_sentences(self):
corpus = ('i', 'have', 'a', 'cat', '<END>',
'his', 'name', 'is', 'bruno', '<END>',
'i', 'have', 'a', 'dog', 'too', '<END>',
'his', 'name', 'is', 'rex', '<END>',
'her', 'name', 'is', 'rex', 'too', '<END>')
storage = WordStorage()
storage.update(corpus)
bad_inputs = [[], 123, None, NGramTrie]
for bad_decode in bad_inputs:
self.assertRaises(ValueError, decode_text, storage, bad_decode)
def test_decode_text_ideal_conditions(self):
corpus = ('i', 'have', 'a', 'cat', '<END>',
'his', 'name', 'is', 'bruno', '<END>',
'i', 'have', 'a', 'dog', 'too', '<END>',
'his', 'name', 'is', 'rex', '<END>',
'her', 'name', 'is', 'rex', 'too', '<END>')
storage = WordStorage()
storage.update(corpus)
encoded = encode_text(storage, corpus)
trie = NGramTrie(3, encoded)
context = (storage.get_id('name'),
storage.get_id('is'),)
generator = LikelihoodBasedTextGenerator(storage, trie)
to_decode = generator.generate_text(context, 2)
actual = decode_text(storage, to_decode)
for sentence in actual:
self.assertTrue('<END>' not in sentence)
self.assertTrue(sentence[0].isupper())
self.assertTrue(sentence[-1].isalpha())
# extra test
def test_decode_text_upper_first_letter(self):
'''
Tests that number all the letters except
first one in a sentence are in a lower case
'''
corpus = ('first', 'sentence', 'here', '<END>',
'second', 'sentence', 'here', '<END>',
'third', 'sentence', 'here', '<END>')
storage = WordStorage()
storage.update(corpus)
encoded_text = encode_text(storage, corpus)
trie = NGramTrie(3, encoded_text)
context = (storage.get_id('first'),
storage.get_id('sentence'))
likelihood_generator = LikelihoodBasedTextGenerator(storage, trie)
generated_encoded_text = likelihood_generator.generate_text(context, 1)
decoded_text = decode_text(storage, generated_encoded_text)
self.assertFalse(decoded_text[0][1:].isupper())
| 33.679104
| 90
| 0.545092
|
794c020711b96647211039971d0a544c0938ff42
| 909
|
py
|
Python
|
Images/main.py
|
allegheny-college-cmpsc-100-fall-2020/cmpsc-100-fall-2020-project-code-examples
|
2fe28a3319e1990df17fbfb1cb4eac9dc9526ee0
|
[
"CC-BY-4.0"
] | null | null | null |
Images/main.py
|
allegheny-college-cmpsc-100-fall-2020/cmpsc-100-fall-2020-project-code-examples
|
2fe28a3319e1990df17fbfb1cb4eac9dc9526ee0
|
[
"CC-BY-4.0"
] | null | null | null |
Images/main.py
|
allegheny-college-cmpsc-100-fall-2020/cmpsc-100-fall-2020-project-code-examples
|
2fe28a3319e1990df17fbfb1cb4eac9dc9526ee0
|
[
"CC-BY-4.0"
] | null | null | null |
import random
from PIL import Image, ImageDraw
def make_random_color():
val = lambda: random.randint(0, 255)
return (val(), val(), val())
def make_random_point(x,y):
new_x = random.randint(0, x)
new_y = random.randint(0, y)
return (new_x, new_y)
# Make 10 squares of a defined size with random colors
for i in range(1,11):
square = Image.new(
mode = "RGB",
size = (100,100),
color = make_random_color()
)
square.save(f"squares/square-{i}.png")
# Create canvas
picture = Image.new(
mode = "RGB",
size = (1000,1000),
color = make_random_color()
)
# Get the size of final canvas
x,y = picture.size
# Paste each of the 10 pictures in random spots
for i in range(1,11):
to_paste = Image.open(f"squares/square-{i}.png")
picture.paste(
to_paste,
make_random_point(x, y)
)
picture.save("final_picture.png")
| 20.2
| 54
| 0.629263
|
794c020a14d430a7b7905c9f5d99244a0a62ff78
| 5,235
|
py
|
Python
|
S4/S4 Library/simulation/sims/sim_spawner_enums.py
|
NeonOcean/Environment
|
ca658cf66e8fd6866c22a4a0136d415705b36d26
|
[
"CC-BY-4.0"
] | 1
|
2021-05-20T19:33:37.000Z
|
2021-05-20T19:33:37.000Z
|
S4/S4 Library/simulation/sims/sim_spawner_enums.py
|
NeonOcean/Environment
|
ca658cf66e8fd6866c22a4a0136d415705b36d26
|
[
"CC-BY-4.0"
] | null | null | null |
S4/S4 Library/simulation/sims/sim_spawner_enums.py
|
NeonOcean/Environment
|
ca658cf66e8fd6866c22a4a0136d415705b36d26
|
[
"CC-BY-4.0"
] | null | null | null |
from protocolbuffers import FileSerialization_pb2
from sims4.tuning.dynamic_enum import DynamicEnum
import enum
class SimNameType(DynamicEnum):
DEFAULT = 0
class SimInfoCreationSource(enum.IntFlags, export=False):
class SimInfoCreationSourceMixin:
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._creation_source = SimInfoCreationSource.UNKNOWN
@property
def creation_source(self):
return self._creation_source
@creation_source.setter
def creation_source(self, value):
if isinstance(value, str):
value = SimInfoCreationSource.get_creation_source_from_legacy_creation_source(value)
self._creation_source = value
class SimInfoCreationSourceData:
__slots__ = ('creation_source', 'creation_source_data')
def __init__(self, creation_source, creation_source_data):
self.creation_source = creation_source
self.creation_source_data = creation_source_data
def __str__(self):
return '{} ({})'.format(str(self.creation_source), self.creation_source_data)
def is_creation_source(self, creation_source):
if self.creation_source & creation_source:
return True
return False
def write_creation_source(self, telemetry_hook):
telemetry_hook.write_string('cstr', str(self.creation_source_data))
self.creation_source.write_creation_source(telemetry_hook)
UNKNOWN = 0
CAS_INITIAL = 1
CAS_REENTRY = 2
PRE_MADE = 4
CLONED = 8
GALLERY = 16
HOUSEHOLD_TEMPLATE = 32
NEIGHBORHOOD_POPULATION_SERVICE = 64
ADOPTION = 128
PREGNANCY = 256
CHEAT = 512
FILTER = 1024
_CREATION_PATH_TO_CREATION_SOURCE = {FileSerialization_pb2.SimData.SIMCREATION_INIT: CAS_INITIAL, FileSerialization_pb2.SimData.SIMCREATION_REENTRY_ADDSIM: CAS_REENTRY, FileSerialization_pb2.SimData.SIMCREATION_PRE_MADE: PRE_MADE, FileSerialization_pb2.SimData.SIMCREATION_CLONED: CLONED, FileSerialization_pb2.SimData.SIMCREATION_GALLERY: GALLERY}
_LEGACY_SOURCE_TO_CREATION_SOURCE = {'cas: initial': CAS_INITIAL, 'cas: re-entry': CAS_REENTRY, 'pre-made': PRE_MADE, 'cloned': CLONED, 'gallery': GALLERY, 'pregnancy': PREGNANCY, 'premade_household': HOUSEHOLD_TEMPLATE, 'unknown': UNKNOWN, 'cheat': CHEAT, 'adoption': ADOPTION, 'neigh_pop_service': NEIGHBORHOOD_POPULATION_SERVICE, 'filter': FILTER}
@staticmethod
def create_creation_source(creation_source, creation_source_data=None):
creation_source = SimInfoCreationSource.SimInfoCreationSourceData(creation_source, creation_source_data)
return creation_source
@staticmethod
def get_creation_source_from_creation_path(creation_path):
creation_source = SimInfoCreationSource._CREATION_PATH_TO_CREATION_SOURCE.get(creation_path, SimInfoCreationSource.UNKNOWN)
return SimInfoCreationSource(creation_source)
@staticmethod
def get_creation_source_from_legacy_creation_source(legacy_creation_source):
for (k, v) in SimInfoCreationSource._LEGACY_SOURCE_TO_CREATION_SOURCE.items():
if k.lower() in legacy_creation_source.lower():
creation_source = SimInfoCreationSource(v)
break
else:
creation_source = SimInfoCreationSource.UNKNOWN
creation_source = SimInfoCreationSource.SimInfoCreationSourceData(creation_source, legacy_creation_source)
return creation_source
def is_creation_source(self, creation_source):
if self & creation_source:
return True
return False
@staticmethod
def save_creation_source(creation_source_data, sim_info_msg):
if isinstance(creation_source_data, SimInfoCreationSource.SimInfoCreationSourceData):
creation_source = creation_source_data.creation_source
creation_source_data = creation_source_data.creation_source_data
elif isinstance(creation_source_data, SimInfoCreationSource):
creation_source = creation_source_data
creation_source_data = None
elif isinstance(creation_source_data, str):
creation_source = SimInfoCreationSource.get_creation_source_from_legacy_creation_source(creation_source_data).creation_source
sim_info_msg.gameplay_data.creation_source = creation_source
if creation_source_data is not None:
sim_info_msg.gameplay_data.creation_source_data = creation_source_data
@staticmethod
def load_creation_source(sim_info_msg):
creation_source = SimInfoCreationSource(sim_info_msg.gameplay_data.creation_source)
creation_source_data = sim_info_msg.gameplay_data.creation_source_data
if not creation_source:
creation_source = SimInfoCreationSource.get_creation_source_from_legacy_creation_source(creation_source_data).creation_source
if not creation_source_data:
return creation_source
return SimInfoCreationSource.SimInfoCreationSourceData(creation_source, creation_source_data)
def write_creation_source(self, telemetry_hook):
telemetry_hook.write_int('csfl', self)
| 46.741071
| 354
| 0.744031
|
794c039b1ecabc2719f6628c2b832c32b15ffb0e
| 3,390
|
py
|
Python
|
protector/tests/rules_test/test_exceed_time_limit.py
|
kraktus/opentsdb-protector
|
e21a666fb82fb9b4e935bb8af26ee8baecf82ca8
|
[
"BSD-3-Clause"
] | null | null | null |
protector/tests/rules_test/test_exceed_time_limit.py
|
kraktus/opentsdb-protector
|
e21a666fb82fb9b4e935bb8af26ee8baecf82ca8
|
[
"BSD-3-Clause"
] | null | null | null |
protector/tests/rules_test/test_exceed_time_limit.py
|
kraktus/opentsdb-protector
|
e21a666fb82fb9b4e935bb8af26ee8baecf82ca8
|
[
"BSD-3-Clause"
] | null | null | null |
# Copyright 2019 Adobe
# All Rights Reserved.
#
# NOTICE: Adobe permits you to use, modify, and distribute this file in
# accordance with the terms of the Adobe license agreement accompanying
# it. If you have received this file from a source other than Adobe,
# then your use, modification, or distribution of it requires the prior
# written permission of Adobe.
#
import unittest
from protector.rules import exceed_time_limit
from protector.query.query import OpenTSDBQuery
class TestQueryExceedFrequency(unittest.TestCase):
def setUp(self):
self.exceed_time_limit = exceed_time_limit.RuleChecker(20)
self.payload1 = """
{
"start": 1530695685,
"queries": [
{
"metric": "mymetric.received.P95",
"aggregator": "max",
"downsample": "20s-max",
"filters": [
{
"filter": "DEV",
"groupBy": false,
"tagk": "environment",
"type": "iliteral_or"
}
]
}
]
}
"""
self.payload2 = """
{
"start": "3n-ago",
"queries": [
{
"metric": "a.mymetric.received.P95",
"aggregator": "max",
"downsample": "20s-max",
"filters": []
}
]
}
"""
self.payload3 = """
{
"start": "90d-ago",
"queries": [
{
"metric": "mymetric",
"aggregator": "max",
"downsample": "20s-max",
"filters": []
}
]
}
"""
self.payload4 = """
{
"start": "89d-ago",
"queries": [
{
"metric": "mymetric",
"aggregator": "none",
"downsample": "20s-max",
"filters": []
}
]
}
"""
def test_below(self):
q = OpenTSDBQuery(self.payload1)
q.set_stats({'duration': 1.5})
self.assertTrue(self.exceed_time_limit.check(q).is_ok())
def test_above(self):
q = OpenTSDBQuery(self.payload2)
q.set_stats({'duration': 20})
self.assertFalse(self.exceed_time_limit.check(q).is_ok())
def test_none(self):
q = OpenTSDBQuery(self.payload3)
self.assertTrue(self.exceed_time_limit.check(q).is_ok())
| 32.285714
| 72
| 0.351917
|
794c05071359501932444c2d53fa11df668b0f6e
| 2,654
|
py
|
Python
|
src/10.Maps, Hash Tables, and Skips Lists/10.1.5 Simiple Unsorted Map Implementation.py
|
LittleNewton/Data_Structure_and_Algorithm_Report
|
e4883a2c112259fb185107d54828eceb097e0318
|
[
"Apache-2.0"
] | 3
|
2018-07-16T09:29:59.000Z
|
2019-12-09T03:46:25.000Z
|
src/10.Maps, Hash Tables, and Skips Lists/10.1.5 Simiple Unsorted Map Implementation.py
|
LittleNewton/Data_Structure_and_Algorithm_Report
|
e4883a2c112259fb185107d54828eceb097e0318
|
[
"Apache-2.0"
] | null | null | null |
src/10.Maps, Hash Tables, and Skips Lists/10.1.5 Simiple Unsorted Map Implementation.py
|
LittleNewton/Data_Structure_and_Algorithm_Report
|
e4883a2c112259fb185107d54828eceb097e0318
|
[
"Apache-2.0"
] | null | null | null |
# 10.1.5 Simiple Unsorted Map Implementation
from collections import MutableMapping
class MapBase(MutableMapping):
"""Our own abstract base class that includes a nonpublic _Item class."""
#----------------------- nested _Item class -----------------------
class _Item:
"""Lightweight composite to store key-value pairs as map items."""
__slots__ = '_key','_value'
def __init__(self,k,v):
self._key = k
self._value = v
def __eq__(self,other):
return self._key == other._key # compare items vased on their keys
def __ne__(self,other):
return not (self == other) # opposite of __eq__
def __It__(self,other):
return self._key < other._key# compare items based on their keys
class UnsortedTableMap(MapBase):
"""Map implementation usiong an unordered list."""
def __init__(self,):
"""Create an empty map."""
self._table = []
def __getitem__(self,k):
"""Return value associated with key k (raise KeyError if not found)."""
for item in self._table:
if k == item._key:
return item._value
raise KeyError('key Error: ' + repr(k))
def __setitem__(self,k,v):
"""Assign value v to key k, overwriting existing value if present."""
for item in self._table: # Found a match
if k == item._key: # reassign a value
item._value = value # and quit
return
# did not find match for key
self._table.append(self._Item(k,v))
def __delitem__(self,k):
"""Remove item associated with key k (raise KeyError if not found)."""
for j in range(len(self._table)):
if k == self._table[j]._key: # found a match
self._table.pop(j) # remove item
return
raise KeyError('Key Error: ' + repr(k))
def __len__(self):
"""Return number of items in the map."""
return len(self._table)
def __iter__(self):
"""Generate iteration of the map's keys."""
for item in self._table:
yield item._key # yield the KEY
#----------------------------- my main function -----------------------------
a = UnsortedTableMap()
a.__setitem__('LiuPeng','A')
a.__setitem__('LiYue','B')
a.__setitem__('God','C')
c = a.__iter__()
for k in c:
print('(',k,',',a.__getitem__(k),')')
print(a.__len__())
a.__delitem__('LiYue')
c = a.__iter__()
for k in c:
print('(',k,',',a.__getitem__(k),')')
print(a.__len__())
| 32.765432
| 79
| 0.547476
|
794c055b54afd633b0c0cbe5098ee902ff9effaf
| 837
|
py
|
Python
|
faasmcli/faasmcli/tasks/__init__.py
|
kubasz/faasm
|
fa4cec66176c669c161f097d24edce099de66919
|
[
"Apache-2.0"
] | 278
|
2020-10-01T16:37:06.000Z
|
2022-03-31T07:06:01.000Z
|
faasmcli/faasmcli/tasks/__init__.py
|
kubasz/faasm
|
fa4cec66176c669c161f097d24edce099de66919
|
[
"Apache-2.0"
] | 78
|
2020-10-01T18:46:16.000Z
|
2022-03-18T15:39:03.000Z
|
faasmcli/faasmcli/tasks/__init__.py
|
kubasz/faasm
|
fa4cec66176c669c161f097d24edce099de66919
|
[
"Apache-2.0"
] | 24
|
2020-10-21T18:45:48.000Z
|
2022-03-26T08:59:41.000Z
|
from invoke import Collection
from . import bare_metal
from . import call
from . import codegen
from . import config
from . import dev
from . import disas
from . import docker_tasks
from . import files
from . import flame
from . import flush
from . import knative
from . import network
from . import python
from . import redis
from . import release
from . import run
from . import state
from . import upload
from . import wast
# Default names
ns = Collection(
codegen,
config,
dev,
disas,
files,
flame,
flush,
knative,
network,
python,
redis,
release,
run,
state,
upload,
wast,
)
# Custom names
ns.add_collection(ns.from_module(bare_metal), name="bm")
ns.add_collection(ns.from_module(call), name="invoke")
ns.add_collection(ns.from_module(docker_tasks), name="docker")
| 17.808511
| 62
| 0.702509
|
794c0614925d3642159bfd1eb7848694b2eac21b
| 5,616
|
py
|
Python
|
xarray/testing.py
|
douglatornell/xarray
|
742ed3984f437982057fd46ecfb0bce214563cb8
|
[
"Apache-2.0"
] | 51
|
2019-02-01T19:43:37.000Z
|
2022-03-16T09:07:03.000Z
|
xarray/testing.py
|
douglatornell/xarray
|
742ed3984f437982057fd46ecfb0bce214563cb8
|
[
"Apache-2.0"
] | 3
|
2018-11-21T02:21:19.000Z
|
2018-12-18T15:34:10.000Z
|
xarray/testing.py
|
douglatornell/xarray
|
742ed3984f437982057fd46ecfb0bce214563cb8
|
[
"Apache-2.0"
] | 35
|
2019-02-08T02:00:31.000Z
|
2022-03-01T23:17:00.000Z
|
"""Testing functions exposed to the user API"""
import numpy as np
from xarray.core import duck_array_ops
from xarray.core import formatting
def _decode_string_data(data):
if data.dtype.kind == 'S':
return np.core.defchararray.decode(data, 'utf-8', 'replace')
return data
def _data_allclose_or_equiv(arr1, arr2, rtol=1e-05, atol=1e-08,
decode_bytes=True):
if any(arr.dtype.kind == 'S' for arr in [arr1, arr2]) and decode_bytes:
arr1 = _decode_string_data(arr1)
arr2 = _decode_string_data(arr2)
exact_dtypes = ['M', 'm', 'O', 'S', 'U']
if any(arr.dtype.kind in exact_dtypes for arr in [arr1, arr2]):
return duck_array_ops.array_equiv(arr1, arr2)
else:
return duck_array_ops.allclose_or_equiv(
arr1, arr2, rtol=rtol, atol=atol)
def assert_equal(a, b):
"""Like :py:func:`numpy.testing.assert_array_equal`, but for xarray
objects.
Raises an AssertionError if two objects are not equal. This will match
data values, dimensions and coordinates, but not names or attributes
(except for Dataset objects for which the variable names must match).
Arrays with NaN in the same location are considered equal.
Parameters
----------
a : xarray.Dataset, xarray.DataArray or xarray.Variable
The first object to compare.
b : xarray.Dataset, xarray.DataArray or xarray.Variable
The second object to compare.
See also
--------
assert_identical, assert_allclose, Dataset.equals, DataArray.equals,
numpy.testing.assert_array_equal
"""
import xarray as xr
__tracebackhide__ = True # noqa: F841
assert type(a) == type(b) # noqa
if isinstance(a, (xr.Variable, xr.DataArray)):
assert a.equals(b), formatting.diff_array_repr(a, b, 'equals')
elif isinstance(a, xr.Dataset):
assert a.equals(b), formatting.diff_dataset_repr(a, b, 'equals')
else:
raise TypeError('{} not supported by assertion comparison'
.format(type(a)))
def assert_identical(a, b):
"""Like :py:func:`xarray.testing.assert_equal`, but also matches the
objects' names and attributes.
Raises an AssertionError if two objects are not identical.
Parameters
----------
a : xarray.Dataset, xarray.DataArray or xarray.Variable
The first object to compare.
b : xarray.Dataset, xarray.DataArray or xarray.Variable
The second object to compare.
See also
--------
assert_equal, assert_allclose, Dataset.equals, DataArray.equals
"""
import xarray as xr
__tracebackhide__ = True # noqa: F841
assert type(a) == type(b) # noqa
if isinstance(a, xr.Variable):
assert a.identical(b), formatting.diff_array_repr(a, b, 'identical')
elif isinstance(a, xr.DataArray):
assert a.name == b.name
assert a.identical(b), formatting.diff_array_repr(a, b, 'identical')
elif isinstance(a, (xr.Dataset, xr.Variable)):
assert a.identical(b), formatting.diff_dataset_repr(a, b, 'identical')
else:
raise TypeError('{} not supported by assertion comparison'
.format(type(a)))
def assert_allclose(a, b, rtol=1e-05, atol=1e-08, decode_bytes=True):
"""Like :py:func:`numpy.testing.assert_allclose`, but for xarray objects.
Raises an AssertionError if two objects are not equal up to desired
tolerance.
Parameters
----------
a : xarray.Dataset, xarray.DataArray or xarray.Variable
The first object to compare.
b : xarray.Dataset, xarray.DataArray or xarray.Variable
The second object to compare.
rtol : float, optional
Relative tolerance.
atol : float, optional
Absolute tolerance.
decode_bytes : bool, optional
Whether byte dtypes should be decoded to strings as UTF-8 or not.
This is useful for testing serialization methods on Python 3 that
return saved strings as bytes.
See also
--------
assert_identical, assert_equal, numpy.testing.assert_allclose
"""
import xarray as xr
__tracebackhide__ = True # noqa: F841
assert type(a) == type(b) # noqa
kwargs = dict(rtol=rtol, atol=atol, decode_bytes=decode_bytes)
if isinstance(a, xr.Variable):
assert a.dims == b.dims
allclose = _data_allclose_or_equiv(a.values, b.values, **kwargs)
assert allclose, '{}\n{}'.format(a.values, b.values)
elif isinstance(a, xr.DataArray):
assert_allclose(a.variable, b.variable, **kwargs)
assert set(a.coords) == set(b.coords)
for v in a.coords.variables:
# can't recurse with this function as coord is sometimes a
# DataArray, so call into _data_allclose_or_equiv directly
allclose = _data_allclose_or_equiv(a.coords[v].values,
b.coords[v].values, **kwargs)
assert allclose, '{}\n{}'.format(a.coords[v].values,
b.coords[v].values)
elif isinstance(a, xr.Dataset):
assert set(a.data_vars) == set(b.data_vars)
assert set(a.coords) == set(b.coords)
for k in list(a.variables) + list(a.coords):
assert_allclose(a[k], b[k], **kwargs)
else:
raise TypeError('{} not supported by assertion comparison'
.format(type(a)))
def assert_combined_tile_ids_equal(dict1, dict2):
assert len(dict1) == len(dict2)
for k, v in dict1.items():
assert k in dict2.keys()
assert_equal(dict1[k], dict2[k])
| 37.192053
| 78
| 0.64156
|
794c069d61031153b35068be74a6ff596eb00232
| 15,235
|
py
|
Python
|
examples/imagenet_d/main.py
|
bethgelab/robustness
|
aa0a6798fe3973bae5f47561721b59b39f126ab7
|
[
"Apache-2.0"
] | 67
|
2020-07-01T01:13:19.000Z
|
2022-03-28T15:33:20.000Z
|
examples/imagenet_d/main.py
|
bethgelab/robustness
|
aa0a6798fe3973bae5f47561721b59b39f126ab7
|
[
"Apache-2.0"
] | 4
|
2021-03-04T13:24:52.000Z
|
2022-03-30T22:07:40.000Z
|
examples/imagenet_d/main.py
|
bethgelab/robustness
|
aa0a6798fe3973bae5f47561721b59b39f126ab7
|
[
"Apache-2.0"
] | 1
|
2021-05-25T09:41:10.000Z
|
2021-05-25T09:41:10.000Z
|
import argparse
import os
import random
import shutil
import time
import warnings
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.distributed as dist
import torch.optim
import torch.multiprocessing as mp
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torchvision.models as models
from map_files import *
model_names = sorted(name for name in models.__dict__
if name.islower() and not name.startswith("__")
and callable(models.__dict__[name]))
parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
parser.add_argument('data', metavar='DIR',
help='path to dataset')
parser.add_argument('-a', '--arch', metavar='ARCH', default='resnet50',
choices=model_names,
help='model architecture: ' +
' | '.join(model_names) +
' (default: resnet18)')
parser.add_argument('-j', '--workers', default=4, type=int, metavar='N',
help='number of data loading workers (default: 4)')
parser.add_argument('--epochs', default=90, type=int, metavar='N',
help='number of total epochs to run')
parser.add_argument('--start-epoch', default=0, type=int, metavar='N',
help='manual epoch number (useful on restarts)')
parser.add_argument('-b', '--batch-size', default=256, type=int,
metavar='N',
help='mini-batch size (default: 256), this is the total '
'batch size of all GPUs on the current node when '
'using Data Parallel or Distributed Data Parallel')
parser.add_argument('--lr', '--learning-rate', default=0.1, type=float,
metavar='LR', help='initial learning rate', dest='lr')
parser.add_argument('--momentum', default=0.9, type=float, metavar='M',
help='momentum')
parser.add_argument('--wd', '--weight-decay', default=1e-4, type=float,
metavar='W', help='weight decay (default: 1e-4)',
dest='weight_decay')
parser.add_argument('-p', '--print-freq', default=10, type=int,
metavar='N', help='print frequency (default: 10)')
parser.add_argument('--resume', default='', type=str, metavar='PATH',
help='path to latest checkpoint (default: none)')
parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true',
help='evaluate model on validation set')
parser.add_argument('--pretrained', dest='pretrained', action='store_true',
help='use pre-trained model')
parser.add_argument('--world-size', default=-1, type=int,
help='number of nodes for distributed training')
parser.add_argument('--rank', default=-1, type=int,
help='node rank for distributed training')
parser.add_argument('--dist-url', default='tcp://224.66.41.62:23456', type=str,
help='url used to set up distributed training')
parser.add_argument('--dist-backend', default='nccl', type=str,
help='distributed backend')
parser.add_argument('--seed', default=None, type=int,
help='seed for initializing training. ')
parser.add_argument('--gpu', default=None, type=int,
help='GPU id to use.')
parser.add_argument('--multiprocessing-distributed', action='store_true',
help='Use multi-processing distributed training to launch '
'N processes per node, which has N GPUs. This is the '
'fastest way to use PyTorch for either single node or '
'multi node data parallel training')
#new
parser.add_argument('--use-train-statistics', action='store_true')
def use_train_statistics(module):
if isinstance(module, nn.BatchNorm2d):
module.train()
best_acc1 = 0
def main():
args = parser.parse_args()
print(args)
if args.seed is not None:
random.seed(args.seed)
torch.manual_seed(args.seed)
cudnn.deterministic = True
warnings.warn('You have chosen to seed training. '
'This will turn on the CUDNN deterministic setting, '
'which can slow down your training considerably! '
'You may see unexpected behavior when restarting '
'from checkpoints.')
if args.gpu is not None:
warnings.warn('You have chosen a specific GPU. This will completely '
'disable data parallelism.')
if args.dist_url == "env://" and args.world_size == -1:
args.world_size = int(os.environ["WORLD_SIZE"])
args.distributed = args.world_size > 1 or args.multiprocessing_distributed
ngpus_per_node = torch.cuda.device_count()
if args.multiprocessing_distributed:
# Since we have ngpus_per_node processes per node, the total world_size
# needs to be adjusted accordingly
args.world_size = ngpus_per_node * args.world_size
# Use torch.multiprocessing.spawn to launch distributed processes: the
# main_worker process function
mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
else:
# Simply call main_worker function
main_worker(args.gpu, ngpus_per_node, args)
def main_worker(gpu, ngpus_per_node, args):
global best_acc1
args.gpu = gpu
if args.gpu is not None:
print("Use GPU: {} for training".format(args.gpu))
if args.distributed:
if args.dist_url == "env://" and args.rank == -1:
args.rank = int(os.environ["RANK"])
if args.multiprocessing_distributed:
# For multiprocessing distributed training, rank needs to be the
# global rank among all the processes
args.rank = args.rank * ngpus_per_node + gpu
dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url,
world_size=args.world_size, rank=args.rank)
# create model
if args.pretrained:
print("=> using pre-trained model '{}'".format(args.arch))
model = models.__dict__[args.arch](pretrained=True)
else:
print("=> creating model '{}'".format(args.arch))
model = models.__dict__[args.arch]()
if not torch.cuda.is_available():
print('using CPU, this will be slow')
elif args.distributed:
# For multiprocessing distributed, DistributedDataParallel constructor
# should always set the single device scope, otherwise,
# DistributedDataParallel will use all available devices.
if args.gpu is not None:
torch.cuda.set_device(args.gpu)
model.cuda(args.gpu)
# When using a single GPU per process and per
# DistributedDataParallel, we need to divide the batch size
# ourselves based on the total number of GPUs we have
args.batch_size = int(args.batch_size / ngpus_per_node)
args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
else:
model.cuda()
# DistributedDataParallel will divide and allocate batch_size to all
# available GPUs if device_ids are not set
model = torch.nn.parallel.DistributedDataParallel(model)
elif args.gpu is not None:
torch.cuda.set_device(args.gpu)
model = model.cuda(args.gpu)
else:
# DataParallel will divide and allocate batch_size to all available GPUs
if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):
model.features = torch.nn.DataParallel(model.features)
model.cuda()
else:
model = torch.nn.DataParallel(model).cuda()
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda(args.gpu)
optimizer = torch.optim.SGD(model.parameters(), args.lr,
momentum=args.momentum,
weight_decay=args.weight_decay)
# optionally resume from a checkpoint
if args.resume:
if os.path.isfile(args.resume):
print("=> loading checkpoint '{}'".format(args.resume))
if args.gpu is None:
checkpoint = torch.load(args.resume)
else:
# Map model to be loaded to specified single gpu.
loc = 'cuda:{}'.format(args.gpu)
checkpoint = torch.load(args.resume, map_location=loc)
args.start_epoch = checkpoint['epoch']
best_acc1 = checkpoint['best_acc1']
if args.gpu is not None:
# best_acc1 may be from a checkpoint from a different GPU
best_acc1 = best_acc1.to(args.gpu)
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
print("=> loaded checkpoint '{}' (epoch {})"
.format(args.resume, checkpoint['epoch']))
else:
print("=> no checkpoint found at '{}'".format(args.resume))
cudnn.benchmark = True
# Data loading code
mapping_vector, _, _ = create_symlinks_and_get_imagenet_visda_mapping(args.data, map_dict)
valdir = './visda_symlinks/' + args.data.split('/')[-2] + '/'
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
val_dataset = datasets.ImageFolder(valdir, transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
]))
val_loader = torch.utils.data.DataLoader(val_dataset,
batch_size=args.batch_size, shuffle=True,
num_workers=args.workers, pin_memory=True)
print('Number of classes: ', len(val_dataset.classes))
print('Number of images: ', len(val_loader.dataset))
if args.evaluate:
validate(val_loader, model, criterion, args, mapping_vector)
return
for epoch in range(args.start_epoch, args.epochs):
adjust_learning_rate(optimizer, epoch, args)
# evaluate on validation set
acc1 = validate(val_loader, model, criterion, args)
# remember best acc@1 and save checkpoint
is_best = acc1 > best_acc1
best_acc1 = max(acc1, best_acc1)
if not args.multiprocessing_distributed or (args.multiprocessing_distributed
and args.rank % ngpus_per_node == 0):
save_checkpoint({
'epoch': epoch + 1,
'arch': args.arch,
'state_dict': model.state_dict(),
'best_acc1': best_acc1,
'optimizer' : optimizer.state_dict(),
}, is_best)
def validate(val_loader, model, criterion, args, mapping_vector):
batch_time = AverageMeter('Time', ':6.3f')
top1 = AverageMeter('Acc@1', ':6.2f')
top5 = AverageMeter('Acc@5', ':6.2f')
progress = ProgressMeter(
len(val_loader),
[batch_time, top1, top5],
prefix='Test: ')
# switch to evaluate mode
model.eval()
if args.use_train_statistics:
model.apply(use_train_statistics)
nr_of_accepted_images = 0
with torch.no_grad():
end = time.time()
for i, (images, target) in enumerate(val_loader):
if args.gpu is not None:
images = images.cuda(args.gpu, non_blocking=True)
if torch.cuda.is_available():
target = target.cuda(args.gpu, non_blocking=True)
# compute output
output = model(images)
# measure accuracy
# images are mapped in the accuracy function
acc = accuracy(output, target, topk=(1, 5), mapping_vector=mapping_vector)
top1.update(acc[0].item(), images.size(0))
top5.update(acc[1].item(), images.size(0))
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if i % args.print_freq == 0:
progress.display(i)
# TODO: this should also be done with the ProgressMeter
print(' * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'
.format(top1=top1, top5=top5))
return top1.avg
def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'):
torch.save(state, filename)
if is_best:
shutil.copyfile(filename, 'model_best.pth.tar')
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self, name, fmt=':f'):
self.name = name
self.fmt = fmt
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def __str__(self):
fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
return fmtstr.format(**self.__dict__)
class ProgressMeter(object):
def __init__(self, num_batches, meters, prefix=""):
self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
self.meters = meters
self.prefix = prefix
def display(self, batch):
entries = [self.prefix + self.batch_fmtstr.format(batch)]
entries += [str(meter) for meter in self.meters]
print('\t'.join(entries))
def _get_batch_fmtstr(self, num_batches):
num_digits = len(str(num_batches // 1))
fmt = '{:' + str(num_digits) + 'd}'
return '[' + fmt + '/' + fmt.format(num_batches) + ']'
def adjust_learning_rate(optimizer, epoch, args):
"""Sets the learning rate to the initial LR decayed by 10 every 30 epochs"""
lr = args.lr * (0.1 ** (epoch // 30))
for param_group in optimizer.param_groups:
param_group['lr'] = lr
def accuracy(output, target, topk=(1,), mapping_vector=[]):
"""Computes the accuracy over the k top predictions for the specified values of k"""
with torch.no_grad():
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
# map imagenet predictions for the top5 labels to visda classes
pred_label_visda = torch.zeros(pred.shape)
if torch.cuda.is_available():
pred_label_visda = torch.zeros(pred.shape).cuda()
for k in range(maxk):
pred_label_visda[k] = map_imagenet_class_to_visda_class(pred[k], mapping_vector)
correct = pred_label_visda.eq(target.view(1, -1).expand_as(pred_label_visda))
res = []
for k in topk:
correct_k, _ = correct[:k].float().max(dim=0)
correct_k = correct_k.sum()
res.append(correct_k.mul_(100.0 / batch_size))
return res
if __name__ == '__main__':
main()
| 39.882199
| 94
| 0.612012
|
794c089b20f76e5ab0c90638dc15a03dc6f34119
| 147
|
py
|
Python
|
work5.py
|
1109450752/homework2018_10_11
|
22e2f8b270118851cd57abf776623b38750585f3
|
[
"Apache-2.0"
] | null | null | null |
work5.py
|
1109450752/homework2018_10_11
|
22e2f8b270118851cd57abf776623b38750585f3
|
[
"Apache-2.0"
] | null | null | null |
work5.py
|
1109450752/homework2018_10_11
|
22e2f8b270118851cd57abf776623b38750585f3
|
[
"Apache-2.0"
] | null | null | null |
import random
s1=[]
s=[0,1,2,3,4,5,6,7,8,9]
for i in range(1000):
s1.append(random.randint(0,9))
for i in s:
print(i,s1.count(i))
| 16.333333
| 35
| 0.571429
|
794c08a007bf09fcc91039fbf5c33cc93862544b
| 9,239
|
py
|
Python
|
models/MimoNet.py
|
marioviti/nn_segmentation
|
b754b38cd1898c0746e383ecd32d9d4c33c60b33
|
[
"MIT"
] | null | null | null |
models/MimoNet.py
|
marioviti/nn_segmentation
|
b754b38cd1898c0746e383ecd32d9d4c33c60b33
|
[
"MIT"
] | null | null | null |
models/MimoNet.py
|
marioviti/nn_segmentation
|
b754b38cd1898c0746e383ecd32d9d4c33c60b33
|
[
"MIT"
] | null | null | null |
from layers import *
from serialize import *
from metrics_and_losses import *
from GenericModel import GenericModel
import numpy as np
from keras import backend as K
from keras.losses import binary_crossentropy, categorical_crossentropy
from keras.utils import to_categorical
from keras.optimizers import SGD,Adam,Adagrad,RMSprop,Adadelta
# https://keras.io/optimizers/
from keras.layers import ZeroPadding2D
from skimage.transform import resize
from scipy.ndimage.filters import gaussian_filter
import tensorflow as tf
K.set_image_data_format('channels_last')
def crop_receptive(batch_y, model_output_size):
"""
Get a cropped batch to fit the perceptive field,
the resulting output shape is n,hy,wy,cy.
args:
- batch_y (numpy array) y.shape : n,hx,wx,cy
- model_output_size (list) : hy,wy,cy
"""
n,hx,wx,cy = batch_y.shape
hy,wy,cy = model_output_size
dhq, dhr = (hx-hy)//2, (hx-hy)%2
dwq, dwr = (wx-wy)//2, (wx-wy)%2
return batch_y[:, dhq: hx - (dhq + dhr), dwq: wx - (dwq + dwr) ]
def expand_receptive(batch_y, model_input_shape):
"""
Get a expantded batch to fit the model_input_shape hx and wx,
the resulting output shape is n,hx,wx,cy.
args:
- batch_y (numpy array) y.shape : n,hy,wy,cy
- model_input_shape (list) : hx,wx,cx
"""
hx,wx,cx = model_input_shape
n,hy,wy,cy = batch_y.shape
dhq, dhr = (hx-hy)//2, (hx-hy)%2
dwq, dwr = (wx-wy)//2, (wx-wy)%2
y_expanded = np.zeros((n,hx,wx,cy))
y_expanded[:, dhq: - (dhq + dhr), dwq: - (dwq + dwr) ] = batch_y
return y_expanded
def define_mimonet_layers(input_shape, classes, regularized=False):
"""
Use the functional API to define the model
https://keras.io/getting-started/functional-api-guide/
params: input_shape (h,w,channels)
"""
layers = { 'inputs' : None,
'down_path' : {},
'bottle_neck' : None,
'up_path' : {},
'outputs' : None }
layers['inputs'] = [Input(input_shape[0],name='in1'),Input(input_shape[1],name='in2'),Input(input_shape[2],name='in3')]
layers['down_path'][4] = cnv3x3Relu(64,regularized=regularized)(layers['inputs'][0])
layers['down_path'][4] = cnv3x3Relu(64,regularized=regularized)(layers['down_path'][4])
layers['down_path'][3] = crop_concatenate(layers['inputs'][1],
new_down_level(128,layers['down_path'][4],regularized=regularized))
layers['down_path'][2] = crop_concatenate(layers['inputs'][2],
new_down_level(256,layers['down_path'][3],regularized=regularized))
layers['down_path'][1] = new_down_level(512,layers['down_path'][2],regularized=regularized)
layers['bottle_neck'] = new_down_level(1024,layers['down_path'][1],regularized=regularized)
layers['up_path'][1] = new_up_level(512,layers['bottle_neck'],layers['down_path'][1],regularized=regularized)
layers['up_path'][2] = new_up_level(256,layers['up_path'][1],layers['down_path'][2],padding='same',regularized=regularized)
layers['up_path'][3] = new_up_level(128,layers['up_path'][2],layers['down_path'][3],padding='same',regularized=regularized)
layers['up_path'][4] = new_up_level(64,layers['up_path'][3],layers['down_path'][4],regularized=regularized)
auxla1, la1 = feature_mask(4,256,64,classes,layers['up_path'][2],'la1')
auxla2, la2 = feature_mask(2,128,64,classes,layers['up_path'][3],'la2')
auxla3 = layers['up_path'][4]
layers['outputs'] = [ la1,la2 ]
layers['outputs'] += [ Conv2D(classes, (1, 1), activation='softmax', name='la3')(auxla3) ]
l0 = crop_concatenate(auxla1, auxla2)
l0 = crop_concatenate(l0,auxla3)
l0 = cnv3x3Relu(64,regularized=regularized, padding='same')(l0)
l0 = cnv3x3Relu(32,regularized=regularized, padding='same')(l0)
layers['outputs'] += [ Conv2D(classes, (1, 1), activation='softmax', name='l0')(l0) ]
return layers
def _to_tensor(x, dtype):
"""Convert the input `x` to a tensor of type `dtype`.
# Arguments
x: An object to be converted (numpy array, list, tensors).
dtype: The destination type.
# Returns
A tensor.
"""
x = tf.convert_to_tensor(x)
if x.dtype != dtype:
x = tf.cast(x, dtype)
return x
global la1_counter
global la2_counter
global la3_counter
la1_counter = 1.0
la2_counter = 1.0
la3_counter = 1.0
def l0_categorical_crossentropy(target,output):
return softmax_categorical_crossentropy(target,output)
def la1_categorical_crossentropy(target,output):
global la1_counter
la1_counter += 0.75
return softmax_categorical_crossentropy(target,output)/la1_counter
def la2_categorical_crossentropy(target,output):
global la2_counter
la2_counter += 0.5
return softmax_categorical_crossentropy(target,output)/la2_counter
def la3_categorical_crossentropy(target,output):
global la3_counter
la3_counter += 0.125
return softmax_categorical_crossentropy(target,output)/la3_counter
def define_mimonet_inputs_shapes(input_shape):
h,w,c = input_shape
return [ input_shape, [h//2,w//2,c], [h//4,w//4,c] ]
def compute_mimonet_inputs(x,shapes):
"""
MimoNet uses multiple inputs at different scales: this function
helps to resize data to fit in each input
"""
n = x.shape[0]
h1,w1,c = shapes[1]
h2,w2,c = shapes[2]
x1 = np.zeros((n,h1,w1,c), dtype=x.dtype)
x2 = np.zeros((n,h2,w2,c), dtype=x.dtype)
for i in range(n):
x1[i,:,:,:] = resize(gaussian_filter(x[i,:,:,:],1),[h1,w1,c])
x2[i,:,:,:] = resize(gaussian_filter(x1[i,:,:,:],1),[h2,w2,c])
return x,x1,x2
class MimoNet(GenericModel):
def __init__( self, input_shape, classes=2,
regularized = False,
loss={'la1': la1_categorical_crossentropy,
'la2': la2_categorical_crossentropy,
'la3': la3_categorical_crossentropy,
'l0' : l0_categorical_crossentropy
},
loss_weights={'la1': 1.0, 'la2': 1.0 , 'la3': 1.0 },
metrics=[dice_coef],
optimizer=Adadelta() ):
"""
params:
inputs_shape: (tuple) channels_last (h,w,c) of input image.
metrics: (tuple) metrics function for evaluation.
optimizer: (function) Optimization strategy.
"""
input_shapes = define_mimonet_inputs_shapes(input_shape)
layers = define_mimonet_layers(input_shapes, classes, regularized=regularized)
self.layers = layers
self.classes = classes
inputs, outputs = layers['inputs'],layers['outputs']
GenericModel.__init__(self, inputs, outputs, loss, metrics, optimizer,
loss_weights=loss_weights)
def fit( self, x_train, y_train, batch_size=1, epochs=1, cropped=False ):
x_train1,x_train2,x_train3 = compute_mimonet_inputs(x_train,self.inputs_shape)
out_shape = self.outputs_shape[0]
y_train = y_train if cropped else crop_receptive(y_train, out_shape)
return GenericModel.fit( self,
{ 'in1': x_train1,
'in2': x_train2,
'in3': x_train3
},
{ 'la1': y_train,
'la2': y_train,
'la3': y_train,
'l0': y_train
},
epochs=epochs,
batch_size=batch_size )
def evaluate( self, x_test, y_test, batch_size=1, cropped=False ):
x_test1,x_test2,x_test3 = compute_mimonet_inputs(x_test,self.inputs_shape)
out_shape = self.outputs_shape[0]
y_test = y_test if cropped else crop_receptive(y_test, out_shape)
return GenericModel.evaluate( self,
{ 'in1': x_test1,
'in2': x_test2,
'in3': x_test3
},
{ 'la1': y_test,
'la2': y_test,
'la3': y_test,
'l0': y_test
},
batch_size=batch_size )
def predict(self, x, batch_size=1, verbose=0):
x1,x2,x3 = compute_mimonet_inputs(x,self.inputs_shape)
out_shape = self.outputs_shape[0]
ys = GenericModel.predict(self,
{ 'in1': x1,
'in2': x2,
'in3': x3
},
batch_size=batch_size,
verbose=verbose )
return ys[3]
| 41.245536
| 127
| 0.572897
|
794c0aee411fbe4874227a2e1e72ad178b2f589f
| 7,311
|
py
|
Python
|
wine-buffer-redis-custom_match.py
|
TheTacoScott/bloomingtonpuzzles
|
5f2a8a4d37b1321175e037073d627b2f8655d90b
|
[
"MIT"
] | null | null | null |
wine-buffer-redis-custom_match.py
|
TheTacoScott/bloomingtonpuzzles
|
5f2a8a4d37b1321175e037073d627b2f8655d90b
|
[
"MIT"
] | null | null | null |
wine-buffer-redis-custom_match.py
|
TheTacoScott/bloomingtonpuzzles
|
5f2a8a4d37b1321175e037073d627b2f8655d90b
|
[
"MIT"
] | null | null | null |
"""
wine.py
a bipartite non directed graphing solution to the bloomreach wine puzzle
example usage:
python wine.py --input filename.txt
"""
import networkx as nx
import argparse, time, sys
import redis
parser = argparse.ArgumentParser(description='')
parser.add_argument('--input', action="store", dest="input")
parser.add_argument('--min-buffer', action="store", dest="min_buffer_size", default=1000000)
parser.add_argument('--max-buffer', action="store", dest="max_buffer_size", default=1100000)
parser.add_argument('--maxwine', action="store", dest="max_wine", default=3)
parser.add_argument('--maxpref', action="store", dest="max_prefs", default=10)
args = parser.parse_args()
MAX_WINE = int(args.max_wine)
MIN_MEM_NODE_COUNT = int(args.min_buffer_size)
MAX_MEM_NODE_COUNT = int(args.max_buffer_size)
FEWER_COMPARE = int(args.max_prefs + 1)
#let's redis shard out the blacklists
REDIS_SHARDS_W = {}
REDIS_SHARDS_W[0] = redis.Redis("localhost",6379,0)
REDIS_SHARDS_P = {}
REDIS_SHARDS_P[0] = redis.Redis("localhost",6379,1)
REDIS_SHARD_W_HASH_MOD = len(REDIS_SHARDS_W)
REDIS_SHARD_P_HASH_MOD = len(REDIS_SHARDS_P)
fg = nx.Graph()
g_person_node_count = 0
g_wine_node_count = 0
g_root_node_count = 1
redis_has_values = False
def redis_set_value(key,kind):
if kind == "p":
shard = REDIS_SHARDS_P[hash(key) % REDIS_SHARD_P_HASH_MOD]
elif kind == "w":
shard = REDIS_SHARDS_W[hash(key) % REDIS_SHARD_W_HASH_MOD]
if shard:
return shard.incr(key,amount=1)
return None
def redis_get_value(key,kind):
if kind == "p":
shard = REDIS_SHARDS_P[hash(key) % REDIS_SHARD_P_HASH_MOD]
elif kind == "w":
shard = REDIS_SHARDS_W[hash(key) % REDIS_SHARD_W_HASH_MOD]
if shard:
data = shard.get(key)
if data:
return int(data)
return None
#flush cache for this run
print "Flushing Databases"
for shard in REDIS_SHARDS_W:
REDIS_SHARDS_W[shard].flushdb()
for shard in REDIS_SHARDS_P:
REDIS_SHARDS_P[shard].flushdb()
print "Flushing Databases -- DONE"
def add_line_to_graph(line):
"""
helper function to parse and add a line to the graph
"""
global g_person_node_count, g_wine_node_count, redis_has_values
(person, wine) = line[0:-1].split("\t")
person = person.replace("person", "p")
wine = wine.replace("wine", "w")
person_id = long(person.replace("p",""))
wine_id = long(wine.replace("w",""))
if redis_has_values:
pt_set = redis_get_value(person_id,"p")
wt_set = redis_get_value(wine_id,"w")
else:
pt_set = wt_set = False
if person not in fg: #do not add the same person twice, and do not add a person we've already sold 3 wines to
if not pt_set or pt_set < MAX_WINE:
fg.add_node(person)
g_person_node_count += 1
if wine not in fg and not wt_set: #do not add the same wine twice, and do not add a wine we've already sold
fg.add_node(wine)
g_wine_node_count += 1
if not pt_set and not wt_set:
fg.add_edge(person, wine)
if not pt_set:
fg.add_edge(person, "r")
f = open(args.input, "r")
#PROCESS NODES
wine_sold = 0
lowest_wine_edge_count = 1
nodes_to_process = True
start = time.time()
more_file = True
while nodes_to_process:
#REFILL THE BUFFER
if (g_person_node_count+g_wine_node_count) < MIN_MEM_NODE_COUNT:
print_text = True
while (g_person_node_count+g_wine_node_count) < MAX_MEM_NODE_COUNT and more_file:
if print_text:
print "FILLING BUFFER"
print_text = False
line = f.readline() #read in line from input
if line:
add_line_to_graph(line)
else:
more_file = False
# WINE SECTION
wine_node_with_fewest_edges = None
wine_node_with_fewest_edges_edge_count = FEWER_COMPARE
wine_search_count = 0
person_skip_count = 0
people_to_delete = []
for node in nx.dfs_postorder_nodes(fg, "r"): #dfs postorder is magic and should be worshiped. --Andy Weir
if node == "r": continue #skip root node
if node[0] == "p":
if len(fg.neighbors(node)) == 1:
people_to_delete.append(node)
person_skip_count += 1
continue
wine_neighbors = fg.neighbors(node)
wine_neighbor_count = len(wine_neighbors)
wine_search_count += 1
if wine_neighbor_count <= wine_node_with_fewest_edges_edge_count:
wine_node_with_fewest_edges = node
wine_node_with_fewest_edges_edge_count = wine_neighbor_count
if wine_neighbor_count <= lowest_wine_edge_count: break
if wine_search_count > 25: #optimization that is unlikley to be needed with dfs postorder and these data sets
lowest_wine_edge_count = min(lowest_wine_edge_count + 1, 10)
else:
lowest_wine_edge_count = wine_node_with_fewest_edges_edge_count
# END WINE SECTION
# we're out of edges, time to call it a day
if not wine_node_with_fewest_edges:
nodes_to_process = False
break
# PERSON SECTION
person_node_with_fewest_edges = None
person_node_with_fewest_edges_edge_count = FEWER_COMPARE
for person_node in fg.neighbors(wine_node_with_fewest_edges):
person_neighbors = fg.neighbors(person_node)
person_neighbor_count = len(person_neighbors)
if person_neighbor_count < person_node_with_fewest_edges_edge_count: #don't need the optimizations of the wine section here
person_node_with_fewest_edges = person_node
person_node_with_fewest_edges_edge_count = person_neighbor_count
if person_neighbor_count == 1: break #special case still safe on persons neighbors
# END PERSON SECTION
#found node(s) to possibly remove/satisfy
if person_node_with_fewest_edges and wine_node_with_fewest_edges:
print "Traversed W#: {6: >5}\tTraversed P#: {7: >5}\tP-ID: {0: >10}\tW-ID: {1: >10}\tBuffer: {3: >8}\tW: {4: >10}\tP:{5:>10}\tSold: {2: >10}".format(person_node_with_fewest_edges,
wine_node_with_fewest_edges,
wine_sold,
g_person_node_count+g_wine_node_count,
g_wine_node_count,
g_person_node_count,
wine_search_count,
person_skip_count
)
wine_sold += 1
person_id = long(person_node_with_fewest_edges.replace("p",""))
wine_id = long(wine_node_with_fewest_edges.replace("w",""))
redis_set_value(person_id,"p")
if redis_get_value(person_id,"p") == MAX_WINE:
fg.remove_node(person_node_with_fewest_edges)
g_person_node_count -= 1
fg.remove_node(wine_node_with_fewest_edges)
g_wine_node_count -= 1
redis_set_value(wine_id,"w")
redis_has_values = True
#these are people that have no edges that matter, we'll delete them from the graph for now
#we'll readd them should they show up in the input file again
for person in people_to_delete:
fg.remove_node(person)
g_person_node_count -= 1
f.close()
print args.min_buffer_size, args.max_buffer_size, wine_sold, round(time.time()-start, 3)
| 35.838235
| 183
| 0.67419
|
794c0b2cc12cf7a534f2f73083f352ce504b846b
| 49
|
py
|
Python
|
GSpreadPlus/__init__.py
|
TheReaper62/GSpread-Plus
|
f4d15fb6a1ba038ba944d372d608b4681275f8b0
|
[
"MIT"
] | null | null | null |
GSpreadPlus/__init__.py
|
TheReaper62/GSpread-Plus
|
f4d15fb6a1ba038ba944d372d608b4681275f8b0
|
[
"MIT"
] | null | null | null |
GSpreadPlus/__init__.py
|
TheReaper62/GSpread-Plus
|
f4d15fb6a1ba038ba944d372d608b4681275f8b0
|
[
"MIT"
] | null | null | null |
from GSpreadPlus.gspreadplus import Spreadclient
| 24.5
| 48
| 0.897959
|
794c0bbadc684760c9496ed7b69210fc76972051
| 1,647
|
py
|
Python
|
Visualization/code.py
|
llavkush/greyatom-python-for-data-science
|
c82f9e28c9ef000becdf635d00ab0a8816e683fb
|
[
"MIT"
] | 2
|
2020-03-27T12:17:55.000Z
|
2020-03-27T12:17:58.000Z
|
Visualization/code.py
|
llavkush/greyatom-python-for-data-science
|
c82f9e28c9ef000becdf635d00ab0a8816e683fb
|
[
"MIT"
] | null | null | null |
Visualization/code.py
|
llavkush/greyatom-python-for-data-science
|
c82f9e28c9ef000becdf635d00ab0a8816e683fb
|
[
"MIT"
] | null | null | null |
# --------------
#Importing header files
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
data = pd.read_csv(path)
loan_status= data["Loan_Status"].value_counts()
loan_status.plot(kind ="bar")
#Code starts here
# --------------
#Code starts here
property_and_loan = data.groupby(['Property_Area', 'Loan_Status'])
property_and_loan = property_and_loan.size().unstack()
property_and_loan.plot(kind='bar', stacked=False, figsize=(15,10))
# Label X-axes and Y-axes
# --------------
#Code starts here
education_and_loan = data.groupby(['Education','Loan_Status'])
education_and_loan = education_and_loan.size().unstack()
education_and_loan.plot(kind='bar')
plt.xlabel('Education Status')
plt.ylabel('Loan_Status')
# Rotate X-axes labels
plt.xticks(rotation=45)
# --------------
#Code starts here
graduate = data[data['Education'] == 'Graduate']
not_graduate = data[data['Education'] == ' Not Graduate']
data["LoanAmount"].plot(kind='density', label ='Graduate')
data['LoanAmount'].plot(kind='density', label ='Not Graduate')
#Code ends here
#For automatic legend display
plt.legend()
# --------------
#Code starts here
fig, (ax_1, ax_2, ax_3) = plt.subplots(nrows = 3 , ncols = 1,figsize=(20,20))
ax_1.scatter(data['ApplicantIncome'],data["LoanAmount"])
ax_1.set_title('Applicant Income')
ax_2.scatter(data['CoapplicantIncome'],data["LoanAmount"])
ax_2.set_title('Coapplicant Income')
ax_3.scatter(data['ApplicantIncome'],data["LoanAmount"])
ax_3.set_title('Total Income')
data["TotalIncome"] = data["ApplicantIncome"] + data["CoapplicantIncome"]
| 22.256757
| 78
| 0.681846
|
794c0becd2352ba860ab93a516d9ae4903fa36aa
| 1,083
|
py
|
Python
|
WEEKS/CD_Sata-Structures/_RESOURCES/python-prac/Overflow/Beginners-Python-Examples-master/stringOperations.py
|
webdevhub42/Lambda
|
b04b84fb5b82fe7c8b12680149e25ae0d27a0960
|
[
"MIT"
] | 5
|
2021-06-02T23:44:25.000Z
|
2021-12-27T16:21:57.000Z
|
WEEKS/CD_Sata-Structures/_RESOURCES/python-prac/Overflow/Beginners-Python-Examples-master/stringOperations.py
|
webdevhub42/Lambda
|
b04b84fb5b82fe7c8b12680149e25ae0d27a0960
|
[
"MIT"
] | 22
|
2021-05-31T01:33:25.000Z
|
2021-10-18T18:32:39.000Z
|
WEEKS/CD_Sata-Structures/_RESOURCES/python-prac/Overflow/Beginners-Python-Examples-master/stringOperations.py
|
webdevhub42/Lambda
|
b04b84fb5b82fe7c8b12680149e25ae0d27a0960
|
[
"MIT"
] | 3
|
2021-06-19T03:37:47.000Z
|
2021-08-31T00:49:51.000Z
|
# This example shows you string operations
name = "Kalpak"
print("My name is " + name) # I have given space after is notice
age = 14
print("My age is ", age) # comma seprates two different things you want to print
print("This isn't going away too soon.") # that \ is called as an escape character
# the \ is used to use same quote in a string
print(
"I love newlines \n"
) # \n prints new line after string or according to its position
print("\t I love tabs") # \t adds a tab according to its position
multiple = "Iron Man"
print(multiple * 5) # this will print string 5 times
# string methods in built
country = "Norway"
print(country.upper()) # prints all letters in upper case
print(country.lower()) # prints all letters in lower case
print(country.title()) # converts into title
# string formatting
print("%s %s %s" % ("I", "am", "cool"))
expression = "I love"
movie = "Captain America 3"
print("%s %s" % (expression, movie))
# type conversion
addition = 12343 + 3349
print("The answer is " + str(addition)) # str() method converts non-string into string
| 29.27027
| 87
| 0.694367
|
794c0ca2dbdb678de8cf53f5a8f242236d712fa6
| 6,738
|
py
|
Python
|
homeassistant/components/enphase_envoy/sensor.py
|
rocket4321/core
|
781084880b0bc8d1e73d3ec56b81f4717e520dc7
|
[
"Apache-2.0"
] | null | null | null |
homeassistant/components/enphase_envoy/sensor.py
|
rocket4321/core
|
781084880b0bc8d1e73d3ec56b81f4717e520dc7
|
[
"Apache-2.0"
] | null | null | null |
homeassistant/components/enphase_envoy/sensor.py
|
rocket4321/core
|
781084880b0bc8d1e73d3ec56b81f4717e520dc7
|
[
"Apache-2.0"
] | null | null | null |
"""Support for Enphase Envoy solar energy monitor."""
from datetime import timedelta
import logging
import async_timeout
from envoy_reader.envoy_reader import EnvoyReader
import httpx
import voluptuous as vol
from homeassistant.components.sensor import PLATFORM_SCHEMA, SensorEntity
from homeassistant.const import (
CONF_IP_ADDRESS,
CONF_MONITORED_CONDITIONS,
CONF_NAME,
CONF_PASSWORD,
CONF_USERNAME,
ENERGY_WATT_HOUR,
POWER_WATT,
)
from homeassistant.exceptions import PlatformNotReady
import homeassistant.helpers.config_validation as cv
from homeassistant.helpers.update_coordinator import (
CoordinatorEntity,
DataUpdateCoordinator,
UpdateFailed,
)
_LOGGER = logging.getLogger(__name__)
SENSORS = {
"production": ("Envoy Current Energy Production", POWER_WATT),
"daily_production": ("Envoy Today's Energy Production", ENERGY_WATT_HOUR),
"seven_days_production": (
"Envoy Last Seven Days Energy Production",
ENERGY_WATT_HOUR,
),
"lifetime_production": ("Envoy Lifetime Energy Production", ENERGY_WATT_HOUR),
"consumption": ("Envoy Current Energy Consumption", POWER_WATT),
"daily_consumption": ("Envoy Today's Energy Consumption", ENERGY_WATT_HOUR),
"seven_days_consumption": (
"Envoy Last Seven Days Energy Consumption",
ENERGY_WATT_HOUR,
),
"lifetime_consumption": ("Envoy Lifetime Energy Consumption", ENERGY_WATT_HOUR),
"inverters": ("Envoy Inverter", POWER_WATT),
}
ICON = "mdi:flash"
CONST_DEFAULT_HOST = "envoy"
SCAN_INTERVAL = timedelta(seconds=60)
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend(
{
vol.Optional(CONF_IP_ADDRESS, default=CONST_DEFAULT_HOST): cv.string,
vol.Optional(CONF_USERNAME, default="envoy"): cv.string,
vol.Optional(CONF_PASSWORD, default=""): cv.string,
vol.Optional(CONF_MONITORED_CONDITIONS, default=list(SENSORS)): vol.All(
cv.ensure_list, [vol.In(list(SENSORS))]
),
vol.Optional(CONF_NAME, default=""): cv.string,
}
)
async def async_setup_platform(
homeassistant, config, async_add_entities, discovery_info=None
):
"""Set up the Enphase Envoy sensor."""
ip_address = config[CONF_IP_ADDRESS]
monitored_conditions = config[CONF_MONITORED_CONDITIONS]
name = config[CONF_NAME]
username = config[CONF_USERNAME]
password = config[CONF_PASSWORD]
if "inverters" in monitored_conditions:
envoy_reader = EnvoyReader(ip_address, username, password, inverters=True)
else:
envoy_reader = EnvoyReader(ip_address, username, password)
try:
await envoy_reader.getData()
except httpx.HTTPStatusError as err:
_LOGGER.error("Authentication failure during setup: %s", err)
return
except httpx.HTTPError as err:
raise PlatformNotReady from err
async def async_update_data():
"""Fetch data from API endpoint."""
data = {}
async with async_timeout.timeout(30):
try:
await envoy_reader.getData()
except httpx.HTTPError as err:
raise UpdateFailed(f"Error communicating with API: {err}") from err
for condition in monitored_conditions:
if condition != "inverters":
data[condition] = await getattr(envoy_reader, condition)()
else:
data["inverters_production"] = await getattr(
envoy_reader, "inverters_production"
)()
_LOGGER.debug("Retrieved data from API: %s", data)
return data
coordinator = DataUpdateCoordinator(
homeassistant,
_LOGGER,
name="sensor",
update_method=async_update_data,
update_interval=SCAN_INTERVAL,
)
await coordinator.async_refresh()
if coordinator.data is None:
raise PlatformNotReady
entities = []
for condition in monitored_conditions:
entity_name = ""
if (
condition == "inverters"
and coordinator.data.get("inverters_production") is not None
):
for inverter in coordinator.data["inverters_production"]:
entity_name = f"{name}{SENSORS[condition][0]} {inverter}"
split_name = entity_name.split(" ")
serial_number = split_name[-1]
entities.append(
Envoy(
condition,
entity_name,
serial_number,
SENSORS[condition][1],
coordinator,
)
)
elif condition != "inverters":
entity_name = f"{name}{SENSORS[condition][0]}"
entities.append(
Envoy(
condition,
entity_name,
None,
SENSORS[condition][1],
coordinator,
)
)
async_add_entities(entities)
class Envoy(SensorEntity, CoordinatorEntity):
"""Envoy entity."""
def __init__(self, sensor_type, name, serial_number, unit, coordinator):
"""Initialize Envoy entity."""
self._type = sensor_type
self._name = name
self._serial_number = serial_number
self._unit_of_measurement = unit
super().__init__(coordinator)
@property
def name(self):
"""Return the name of the sensor."""
return self._name
@property
def state(self):
"""Return the state of the sensor."""
if self._type != "inverters":
value = self.coordinator.data.get(self._type)
elif (
self._type == "inverters"
and self.coordinator.data.get("inverters_production") is not None
):
value = self.coordinator.data.get("inverters_production").get(
self._serial_number
)[0]
else:
return None
return value
@property
def unit_of_measurement(self):
"""Return the unit of measurement of this entity, if any."""
return self._unit_of_measurement
@property
def icon(self):
"""Icon to use in the frontend, if any."""
return ICON
@property
def extra_state_attributes(self):
"""Return the state attributes."""
if (
self._type == "inverters"
and self.coordinator.data.get("inverters_production") is not None
):
value = self.coordinator.data.get("inverters_production").get(
self._serial_number
)[1]
return {"last_reported": value}
return None
| 31.050691
| 84
| 0.614722
|
794c0d68c04db0f70232e8f61909b43430dcdeb8
| 4,155
|
py
|
Python
|
libs/configs/cfgs_res50_dota_v6.py
|
riciche/RetinaNet_Tensorflow_Rotation
|
b03a7eafba21bfbb78dbffe1be53ab914080201e
|
[
"MIT"
] | 2
|
2021-01-27T13:45:03.000Z
|
2021-01-27T15:02:14.000Z
|
libs/configs/cfgs_res50_dota_v6.py
|
CrazyStoneonRoad/RetinaNet_Tensorflow_Rotation
|
b03a7eafba21bfbb78dbffe1be53ab914080201e
|
[
"MIT"
] | null | null | null |
libs/configs/cfgs_res50_dota_v6.py
|
CrazyStoneonRoad/RetinaNet_Tensorflow_Rotation
|
b03a7eafba21bfbb78dbffe1be53ab914080201e
|
[
"MIT"
] | null | null | null |
# -*- coding: utf-8 -*-
from __future__ import division, print_function, absolute_import
import os
import tensorflow as tf
import math
"""
This is your evaluation result for task 1:
mAP: 0.6089394631017461
ap of each class:
plane:0.8859576592098442,
baseball-diamond:0.6745883911474353,
bridge:0.33669255398137776,
ground-track-field:0.5214597741938006,
small-vehicle:0.6616069164284678,
large-vehicle:0.7319363968127408,
ship:0.7588294869678884,
tennis-court:0.9082611825804324,
basketball-court:0.7734922061739598,
storage-tank:0.7689222533206449,
soccer-ball-field:0.3702702384451027,
roundabout:0.5477193039848192,
harbor:0.5343846914555771,
swimming-pool:0.5287523391912979,
helicopter:0.1312185526328024
The submitted information is :
Description: RetinaNet_DOTA_1x_20190604
Username: yangxue
Institute: DetectionTeamUCAS
Emailadress: yangxue16@mails.ucas.ac.cn
TeamMembers: yangxue, yangjirui
"""
# ------------------------------------------------
VERSION = 'RetinaNet_DOTA_1x_20190604'
NET_NAME = 'resnet50_v1d' # 'MobilenetV2'
ADD_BOX_IN_TENSORBOARD = True
# ---------------------------------------- System_config
ROOT_PATH = os.path.abspath('../')
print(20*"++--")
print(ROOT_PATH)
GPU_GROUP = "0"
NUM_GPU = len(GPU_GROUP.strip().split(','))
SHOW_TRAIN_INFO_INTE = 20
SMRY_ITER = 200
SAVE_WEIGHTS_INTE = 27000
SUMMARY_PATH = ROOT_PATH + '/output/summary'
TEST_SAVE_PATH = ROOT_PATH + '/tools/test_result'
if NET_NAME.startswith("resnet"):
weights_name = NET_NAME
elif NET_NAME.startswith("MobilenetV2"):
weights_name = "mobilenet/mobilenet_v2_1.0_224"
else:
raise Exception('net name must in [resnet_v1_101, resnet_v1_50, MobilenetV2]')
PRETRAINED_CKPT = ROOT_PATH + '/data/pretrained_weights/' + weights_name + '.ckpt'
TRAINED_CKPT = os.path.join(ROOT_PATH, 'output/trained_weights')
EVALUATE_DIR = ROOT_PATH + '/output/evaluate_result_pickle/'
# ------------------------------------------ Train config
RESTORE_FROM_RPN = False
FIXED_BLOCKS = 1 # allow 0~3
FREEZE_BLOCKS = [True, False, False, False, False] # for gluoncv backbone
USE_07_METRIC = True
MUTILPY_BIAS_GRADIENT = 2.0 # if None, will not multipy
GRADIENT_CLIPPING_BY_NORM = 10.0 # if None, will not clip
CLS_WEIGHT = 1.0
REG_WEIGHT = 1.0
USE_IOU_FACTOR = True
BATCH_SIZE = 1
EPSILON = 1e-5
MOMENTUM = 0.9
LR = 5e-4 # * NUM_GPU * BATCH_SIZE
DECAY_STEP = [SAVE_WEIGHTS_INTE*12, SAVE_WEIGHTS_INTE*16, SAVE_WEIGHTS_INTE*20]
MAX_ITERATION = SAVE_WEIGHTS_INTE*20
WARM_SETP = int(1.0 / 8.0 * SAVE_WEIGHTS_INTE)
# -------------------------------------------- Data_preprocess_config
DATASET_NAME = 'DOTA' # 'pascal', 'coco'
PIXEL_MEAN = [123.68, 116.779, 103.939] # R, G, B. In tf, channel is RGB. In openCV, channel is BGR
PIXEL_MEAN_ = [0.485, 0.456, 0.406]
PIXEL_STD = [0.229, 0.224, 0.225] # R, G, B. In tf, channel is RGB. In openCV, channel is BGR
IMG_SHORT_SIDE_LEN = 800
IMG_MAX_LENGTH = 800
CLASS_NUM = 15
# --------------------------------------------- Network_config
SUBNETS_WEIGHTS_INITIALIZER = tf.random_normal_initializer(mean=0.0, stddev=0.01, seed=None)
SUBNETS_BIAS_INITIALIZER = tf.constant_initializer(value=0.0)
PROBABILITY = 0.01
FINAL_CONV_BIAS_INITIALIZER = tf.constant_initializer(value=-math.log((1.0 - PROBABILITY) / PROBABILITY))
WEIGHT_DECAY = 1e-4
# ---------------------------------------------Anchor config
LEVEL = ['P3', 'P4', 'P5', 'P6', 'P7']
BASE_ANCHOR_SIZE_LIST = [32, 64, 128, 256, 512]
ANCHOR_STRIDE = [8, 16, 32, 64, 128]
ANCHOR_SCALES = [1.0] # [2 ** 0, 2 ** (1.0 / 3.0), 2 ** (2.0 / 3.0)]
ANCHOR_RATIOS = [1, 1 / 3., 3., 5., 1 / 5.]
ANCHOR_ANGLES = [-90, -75, -60, -45, -30, -15]
ANCHOR_SCALE_FACTORS = None
USE_CENTER_OFFSET = True
METHOD = 'R'
USE_ANGLE_COND = False
# --------------------------------------------RPN config
SHARE_NET = True
USE_P5 = True
IOU_POSITIVE_THRESHOLD = 0.5
IOU_NEGATIVE_THRESHOLD = 0.4
NMS = True
NMS_IOU_THRESHOLD = 0.5
MAXIMUM_DETECTIONS = 100
FILTERED_SCORE = 0.05
VIS_SCORE = 0.4
# --------------------------------------------NAS FPN config
NUM_FPN = 0
NUM_NAS_FPN = 0
USE_RELU = True
FPN_CHANNEL = 256
| 30.777778
| 105
| 0.673165
|
794c0d83bcc77ab26533709e57976e85c920e6e2
| 776
|
py
|
Python
|
src/services/middleware/workers_io.py
|
QIN2DIM/v2board-mining
|
b757f17418275ce9ac165238b098b098d5e3326c
|
[
"Apache-2.0"
] | 10
|
2021-12-16T08:22:48.000Z
|
2022-03-19T05:56:00.000Z
|
src/services/middleware/workers_io.py
|
QIN2DIM/v2board-mining
|
b757f17418275ce9ac165238b098b098d5e3326c
|
[
"Apache-2.0"
] | null | null | null |
src/services/middleware/workers_io.py
|
QIN2DIM/v2board-mining
|
b757f17418275ce9ac165238b098b098d5e3326c
|
[
"Apache-2.0"
] | 1
|
2021-12-18T06:37:26.000Z
|
2021-12-18T06:37:26.000Z
|
# -*- coding: utf-8 -*-
# Time : 2021/12/22 16:15
# Author : QIN2DIM
# Github : https://github.com/QIN2DIM
# Description:
import ast
from typing import List
from services.middleware.stream_io import RedisClient
class EntropyHeap(RedisClient):
def __init__(self):
super(EntropyHeap, self).__init__()
def update(self, local_entropy: List[dict]):
self.db.lpush(self.PREFIX_ENTROPY, str(local_entropy))
def sync(self) -> List[dict]:
response = self.db.lrange(self.PREFIX_ENTROPY, 0, 1)
if response:
remote_entropy = ast.literal_eval(self.db.lrange(self.PREFIX_ENTROPY, 0, 1)[0])
return remote_entropy
def is_empty(self) -> bool:
return not bool(self.db.llen(self.PREFIX_ENTROPY))
| 28.740741
| 91
| 0.662371
|
794c105cc9d1dcdf306677539f004a742abeaf08
| 665
|
py
|
Python
|
vilya/views/settings/codereview.py
|
mubashshirjamal/code
|
d9c7adf7efed8e9c1ab3ff8cdeb94e7eb1a45382
|
[
"BSD-3-Clause"
] | 1,582
|
2015-01-05T02:41:44.000Z
|
2022-03-30T20:03:22.000Z
|
vilya/views/settings/codereview.py
|
mubashshirjamal/code
|
d9c7adf7efed8e9c1ab3ff8cdeb94e7eb1a45382
|
[
"BSD-3-Clause"
] | 66
|
2015-01-23T07:58:04.000Z
|
2021-11-12T02:23:27.000Z
|
vilya/views/settings/codereview.py
|
mubashshirjamal/code
|
d9c7adf7efed8e9c1ab3ff8cdeb94e7eb1a45382
|
[
"BSD-3-Clause"
] | 347
|
2015-01-05T07:47:07.000Z
|
2021-09-20T21:22:32.000Z
|
# -*- coding: utf-8 -*-
import json
from vilya.libs.auth.decorators import login_required
from vilya.libs.template import st
_q_exports = ['setting', ]
@login_required
def _q_index(request):
user = request.user
return st('settings/codereview.html', **locals())
@login_required
def setting(request):
is_enable = request.get_form_var('is_enable')
field = request.get_form_var('field')
user = request.user
result = "success"
origin = user.settings.__getattr__(field)
try:
user.settings.__setattr__(field, is_enable)
except Exception:
result = "fail"
return json.dumps({"result": result, "origin": origin})
| 23.75
| 59
| 0.690226
|
794c1070f022e2e32db9cac3a089978ce51be872
| 2,540
|
py
|
Python
|
ansible/roles/lib_gcloud/build/lib/health_check.py
|
fahlmant/openshift-tools
|
dbb4f16ccde3404c36c23108c45ca7b67138ee12
|
[
"Apache-2.0"
] | 164
|
2015-07-29T17:35:04.000Z
|
2021-12-16T16:38:04.000Z
|
ansible/roles/lib_gcloud/build/lib/health_check.py
|
fahlmant/openshift-tools
|
dbb4f16ccde3404c36c23108c45ca7b67138ee12
|
[
"Apache-2.0"
] | 3,634
|
2015-06-09T13:49:15.000Z
|
2022-03-23T20:55:44.000Z
|
ansible/roles/lib_gcloud/build/lib/health_check.py
|
fahlmant/openshift-tools
|
dbb4f16ccde3404c36c23108c45ca7b67138ee12
|
[
"Apache-2.0"
] | 250
|
2015-06-08T19:53:11.000Z
|
2022-03-01T04:51:23.000Z
|
# pylint: skip-file
# pylint: disable=too-many-instance-attributes
class HealthCheck(GCPResource):
'''Object to represent a gcp health check'''
resource_type = "compute.v1.httpHealthCheck"
# pylint: disable=too-many-arguments
def __init__(self,
rname,
project,
zone,
desc,
interval_secs,
healthy_threshold,
port,
timeout_secs,
unhealthy_threshold,
request_path='/',
):
'''constructor for gcp resource'''
super(HealthCheck, self).__init__(rname, HealthCheck.resource_type, project, zone)
self._desc = desc
self._interval_secs = interval_secs
self._healthy_threshold = healthy_threshold
self._unhealthy_threshold = unhealthy_threshold
self._port = port
self._timeout_secs = timeout_secs
self._request_path = request_path
@property
def description(self):
'''property for resource description'''
return self._desc
@property
def interval_secs(self):
'''property for resource interval_secs'''
return self._interval_secs
@property
def timeout_secs(self):
'''property for resource timeout_secs'''
return self._timeout_secs
@property
def healthy_threshold(self):
'''property for resource healthy_threshold'''
return self._healthy_threshold
@property
def unhealthy_threshold(self):
'''property for resource unhealthy_threshold'''
return self._unhealthy_threshold
@property
def port(self):
'''property for resource port'''
return self._port
@property
def request_path(self):
'''property for request path'''
return self._request_path
def to_resource(self):
""" return the resource representation"""
return {'name': self.name,
'type': HealthCheck.resource_type,
'properties': {'description': self.description,
'checkIntervalSec': self.interval_secs,
'port': self.port,
'healthyThreshold': self.healthy_threshold,
'unhealthyThreshold': self.unhealthy_threshold,
'timeoutSec': 5,
'requestPath': self.request_path,
}
}
| 31.358025
| 90
| 0.564567
|
794c10758bf731dc185c5348930aaf61312886ad
| 3,663
|
py
|
Python
|
src/biosyn/rerankNet.py
|
Manikant92/BioSyn
|
2f0f02769acf82fc110c724a581dd2675c47d655
|
[
"MIT"
] | 1
|
2020-05-11T05:51:30.000Z
|
2020-05-11T05:51:30.000Z
|
src/biosyn/rerankNet.py
|
Manikant92/BioSyn
|
2f0f02769acf82fc110c724a581dd2675c47d655
|
[
"MIT"
] | null | null | null |
src/biosyn/rerankNet.py
|
Manikant92/BioSyn
|
2f0f02769acf82fc110c724a581dd2675c47d655
|
[
"MIT"
] | null | null | null |
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import logging
from tqdm import tqdm
LOGGER = logging.getLogger(__name__)
class RerankNet(nn.Module):
def __init__(self, encoder, learning_rate, weight_decay, sparse_weight, use_cuda):
LOGGER.info("RerankNet! learning_rate={} weight_decay={} sparse_weight={} use_cuda={}".format(
learning_rate,weight_decay,sparse_weight,use_cuda
))
super(RerankNet, self).__init__()
self.encoder = encoder
self.learning_rate = learning_rate
self.weight_decay = weight_decay
self.use_cuda = use_cuda
self.sparse_weight = sparse_weight
self.optimizer = optim.Adam([
{'params': self.encoder.parameters()},
{'params' : self.sparse_weight, 'lr': 0.01, 'weight_decay': 0}],
lr=self.learning_rate, weight_decay=self.weight_decay
)
self.criterion = marginal_nll
def forward(self, x):
"""
query : (N, h), candidates : (N, topk, h)
output : (N, topk)
"""
query_token, candidate_tokens, candidate_s_scores = x
batch_size, topk, _ = candidate_tokens.shape
if self.use_cuda:
candidate_s_scores = candidate_s_scores.cuda()
# dense embed for query and candidates
query_embed = self.encoder(query_token).unsqueeze(dim=1) # query : [batch_size, 1, hidden]
candidate_tokens = self.reshape_candidates_for_encoder(candidate_tokens)
candidate_embeds = self.encoder(candidate_tokens).view(batch_size, topk, -1) # [batch_size, topk, hidden]
# score dense candidates
candidate_d_score = torch.bmm(query_embed, candidate_embeds.permute(0,2,1)).squeeze(1)
score = self.sparse_weight * candidate_s_scores + candidate_d_score
return score
def reshape_candidates_for_encoder(self, candidates):
"""
reshape candidates for encoder input shape
[batch_size, topk, max_length] => [batch_size*topk, max_length]
"""
_, _, max_length = candidates.shape
candidates = candidates.contiguous().view(-1, max_length)
return candidates
def get_loss(self, outputs, targets):
if self.use_cuda:
targets = targets.cuda()
loss = self.criterion(outputs, targets)
return loss
def get_embeddings(self, mentions, batch_size=1024):
"""
Compute all embeddings from mention tokens.
"""
embedding_table = []
with torch.no_grad():
for start in tqdm(range(0, len(mentions), batch_size)):
end = min(start + batch_size, len(mentions))
batch = mentions[start:end]
batch_embedding = self.vectorizer(batch)
batch_embedding = batch_embedding.cpu()
embedding_table.append(batch_embedding)
embedding_table = torch.cat(embedding_table, dim=0)
return embedding_table
def marginal_nll(score, target):
"""
sum all scores among positive samples
"""
predict = F.softmax(score, dim=-1)
loss = predict * target
loss = loss.sum(dim=-1) # sum all positive scores
loss = loss[loss > 0] # filter sets with at least one positives
loss = torch.clamp(loss, min=1e-9, max=1) # for numerical stability
loss = -torch.log(loss) # for negative log likelihood
if len(loss) == 0:
loss = loss.sum() # will return zero loss
else:
loss = loss.mean()
return loss
| 38.557895
| 114
| 0.621349
|
794c10836a3a0cd88b481f42018b2f69c588ceb5
| 11,669
|
py
|
Python
|
lib/build_graph.py
|
Lingistic/GraBTax
|
ff8e313891da88ffaebc5393cf9b2d7a8650131c
|
[
"Apache-2.0"
] | 9
|
2017-06-30T04:24:53.000Z
|
2020-12-24T11:10:29.000Z
|
lib/build_graph.py
|
Lingistic/GraBTax
|
ff8e313891da88ffaebc5393cf9b2d7a8650131c
|
[
"Apache-2.0"
] | 2
|
2017-05-24T20:00:51.000Z
|
2019-05-27T22:40:48.000Z
|
lib/build_graph.py
|
Lingistic/GraBTax
|
ff8e313891da88ffaebc5393cf9b2d7a8650131c
|
[
"Apache-2.0"
] | 3
|
2017-04-21T07:28:41.000Z
|
2018-03-15T06:56:41.000Z
|
#!/usr/bin/env python
"""
processes a document topic matrix and determines the strength of a topic as a function of it's co-occurrences among
the corpus, beyond a threshold
"""
import numpy
from networkx import Graph, write_graphml, read_graphml, get_edge_attributes
import logging
import os
import metis
__author__ = "Rob McDaniel <robmcdan@gmail.com>"
__copyright__ = """
Copyright 2017 LiveStories
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Changes:
May 2017: Rob Mcdaniel (Lingistic, LLC): cleanup and general bug fixes
"""
__credits__ = ["Rob McDaniel"]
__license__ = "ALv2"
__version__ = "0.0.1"
__maintainer__ = "Rob McDaniel"
__email__ = "robmcdan@gmail.com"
__status__ = "Development"
logging.basicConfig(level=logging.DEBUG)
def make_boolean_topic_matrix(doc_topic_matrix, threshold=0.15):
"""
return a bool matrix for N documents X M topics where topic strength is > threshold
:param (matrix) doc_topic_matrix: NxM document topic matrix (topics over documents)
:param (float) threshold: minimum topic strength
:return:
"""
logging.info("preparing boolean topic matrix")
m = doc_topic_matrix > threshold
return m
def add_jaccard_weighted_edges(g, bool_topic_matrix):
"""
given a document topic matrix, calculate the jaccard similarity score (intersection over union) between each topic
as a weighted edge between topics
:param (matrix) bool_topic_matrix: a boolean matrix of n documents X m topics with TRUE if the topic is represented
:param (networkX graph) g: a graph object to populate
:return: graph object with jaccard-weighted edges between topics
"""
logging.info("calculating jaccard indexes for all topics")
num_topics = bool_topic_matrix.shape[1]
jaccard_matrix = numpy.zeros((num_topics, num_topics))
logging.debug(num_topics)
for i in range(num_topics):
logging.debug(i)
topic_i = bool_topic_matrix[:, i]
jaccard_matrix[i, i] = 1.0
for j in range(i + 1, num_topics):
topic_j = bool_topic_matrix[:, j]
intersection = numpy.logical_and(topic_i, topic_j)
union = numpy.logical_or(topic_i, topic_j)
jaccard = intersection.sum() / float(union.sum())
jaccard_matrix[i, j] = jaccard
jaccard_matrix[j, i] = jaccard
try:
if "count" in g.edges[i, j].keys():
g.add_edge(i, j, similarity=int(jaccard*100))
except KeyError:
pass
return g
def calculate_cooccurences(bool_topic_matrix):
"""
given a boolean topic matrix (n observations X m topics where TRUE exists when a topic exists in a doc), count the
total number of document co-occurrences between topic_i and topic_j
:param (matrix) bool_topic_matrix: document X topic matrix with bool values where a topic exists in a doc.
:return: topic_i X topic_j co-occurrence matrix with co-occurrence counts between topics i and j
"""
logging.info("calculating co-occurrences")
num_topics = bool_topic_matrix.shape[1]
cooccurrence_matrix = numpy.zeros((num_topics, num_topics))
logging.debug(num_topics)
for i in range(num_topics):
logging.debug(i)
topic_i = bool_topic_matrix[:, i]
cooccurrence_matrix[i, i] = numpy.nan
for j in range(i + 1, num_topics):
topic_j = bool_topic_matrix[:, j]
count_ij = bool_topic_matrix[numpy.where(topic_i & topic_j)].shape[0]
cooccurrence_matrix[i, j] = count_ij
cooccurrence_matrix[j, i] = count_ij
return cooccurrence_matrix
def add_vertices(cooccurrence_matrix, g, topic_labels):
"""
adds topic vertices and weights (based on co-occurence) -- vertex weighted by total co-occurence, edges weighted
by co-occurence between v_i and v_j
:param cooccurrence_matrix: topic X topic co-occurrence matrix
:param g: graph object to populate
:param topic_labels: list of labels to associate with each topic (in order)
:return: graph with weighted vertices, with edges
"""
logging.info("Adding vertices to graph")
num_topics = cooccurrence_matrix.shape[1]
logging.debug(num_topics)
for i in range(num_topics):
logging.debug(i)
topic_i = cooccurrence_matrix[:, i]
sum_i = numpy.nansum(topic_i)
g.add_node(i, weight=int(sum_i), label=", ".join(topic_labels[i]))
colocations = numpy.where(topic_i > 0)[0]
for j in colocations:
g.add_edge(i, j, count=int(numpy.nansum(cooccurrence_matrix[i,j])))
return g
def update_edge_weights(g):
"""
adds edge-weights to an existing graph which already contains jaccard-weighted edges. Edge weight is based on
jaccard and rank calculations (see get_rank())
:param g: target graph
:return: graph with updated edge-weights
"""
logging.info("Adding weights to edges in graph")
num_topics = len(g)
logging.debug(num_topics)
for i in range(num_topics):
logging.debug(i)
topic_i = g.node[i]["weight"]
colocations = [edge[1] for edge in g.edges(i)]
lambda1 = 1
lambda2 = 1
for j in colocations:
if j != i:
rank_ij = get_rank(i, j, g)
rank_ji = get_rank(j, i, g)
rank = 1 if rank_ij == 1 or rank_ji == 1 else 0
count = g.edges[i, j]["count"]
jac = g.edges[i, j]["similarity"]
weight_ij = (1 + (lambda1 * rank) + (lambda2 * jac)) * count
g.add_edge(i, j, weight=int(weight_ij))
return g
def get_rank(i, j, g):
"""
calculates the rank score between topic i and topic j -- selects all nodes that have a higher weight than j, and
then counts how many of them have a higher conditional probability than i. Score ranges from 1 to (N(vertices) - 2)
Rank score of 1 means that topic_i is more predictive of topic_j than any other vertex with higher weight than
topic_j.
:param i: topic node
:param j: topic node
:param g: populated graph
:return: returns the rank score
"""
rank_count = 0
# first get topics with greater strength than topic j,
# so get topic J strength.
topic_j_s = g.node[j]["weight"]
# now find the topics which are candidates
# (they are neighbors of j)
# todo: we are iterating all topics, but we just need the ones connected to j
candidate_h = []
num_topics = len(g)
for h in range(num_topics):
#topic_h = g.nodes()[h]
if j != h and i != h:
topic_h_s = g.nodes[h]["weight"]
if topic_h_s > topic_j_s:
candidate_h.append(h)
for h in candidate_h:
h_given_j = get_conditional_topic_prob(h, j, g)
i_given_j = get_conditional_topic_prob(i, j, g)
if h_given_j > i_given_j:
rank_count += 1
rank = rank_count + 1
return rank
def get_conditional_topic_prob(i, j, g):
"""
gets the conditional probability of topic_i given topic_j
:param i: topic_i
:param j: topic_j
:param g: the populated graph with weighted edges and vertices
:return: 0.0 < P(i|j) < 1.0
"""
if i == j:
return 1.0
topic_j_s = g.node[j]["weight"]
try:
count_i_given_j = g.edges[i, j]["count"]
except KeyError: # might not be an edge connecting these vertices
return 0.0
if topic_j_s == 0:
return 0.0
return count_i_given_j / topic_j_s
def save(name, g):
"""
saves a graph in graphml format
:param name: friendly name of the graph
:param g: the graph to save
:return: None
"""
if not os.path.exists("graphs//"):
os.mkdir("graphs//")
write_graphml(g, "graphs//" + name + ".graphml")
def load(name):
"""
loads a previously-saved graph from graphml format using its friendly name.
:param name: the friendly name of the graph
:return: the loaded graph
"""
g = read_graphml("graphs//" + name + ".graphml", node_type=int)
return g
def build_graph(theta_matrix, labels, friendly_name=None):
"""
builds a vertex and edge-weighted graph based on a topic-proportion matrix
:param theta_matrix: Documents X topic_proportions matrix, values should be between 0.0 and 1.0
:param labels: list of size = N(Documents) with topic labels
:param friendly_name: the friendly name to use to save the graph (optional)
:return: build graph
"""
b_matrix = make_boolean_topic_matrix(theta_matrix)
cooccurrences = calculate_cooccurences(b_matrix)
g = Graph()
g = add_vertices(cooccurrences, g, labels)
g = add_jaccard_weighted_edges(g, b_matrix)
g = update_edge_weights(g)
# g = blacklisted_topics(g)
# add these for METIS
g.graph["edge_weight_attr"] = "weight"
g.graph["node_weight_attr"] = "weight"
if friendly_name:
save(friendly_name, g)
return g
def blacklisted_topics(g):
"""
removes blacklisted topics from a graph
:param g: graph to modify
:return: modified graph
"""
g.remove_node(179)
g.remove_node(245)
g.remove_node(106)
g.remove_node(13)
g.remove_node(24)
# g.remove_node(230)
g.remove_node(59)
g.remove_node(183)
g.remove_node(234)
g.remove_node(1)
g.remove_node(14)
return g
def recursive_partition(g, taxonomy_out, query_topic, k=4):
"""
Based on a query topic and a vertex and edge-weighted graph, partition the graph into a query-based topical taxonomy
:param g: source graph
:param taxonomy_out: output graph (can be empty)
:param query_topic: the head vertex to generate taxonomy from
:param k: partition size for graph bisection
:return: taxonomy graph (taxonomy_out)
"""
if query_topic not in g.nodes():
return taxonomy_out, query_topic
from lib.subgraph import get_subgraph # todo: this is here to prevent an annoying circular reference
taxonomy_out.add_node(query_topic, weight=g.node[query_topic]["weight"])
g_sub = get_subgraph(g, query_topic)
if len(g_sub) > 1:
#Graph().add_nodes_from() g_sub.add_node(query_topic, weight=g.node[query_topic]["weight"])
return g_sub, query_topic
x = metis.networkx_to_metis(g_sub)
(edgecuts, parts) = metis.part_graph(x, k)
for part in range(k):
max_degree = 0
max_node = None
for node in [[node for node in g_sub.nodes()][i] for i, j in enumerate(parts) if j == part]:
degree = g_sub.degree(node)
if degree > max_degree:
max_node = node
max_degree = degree
if max_node is not None:
recursive_partition(
g_sub.subgraph([[node for node in g_sub.nodes()][i] for i, j in enumerate(parts) if j == part]),
taxonomy_out, max_node)
taxonomy_out.add_node(max_node, weight=g_sub.node[max_node]["weight"])
taxonomy_out.add_edge(query_topic, max_node)
return taxonomy_out, query_topic
| 36.23913
| 120
| 0.658497
|
794c10f822d4597636e355c03e52b24a747b68e7
| 2,834
|
py
|
Python
|
theano/tensor/tests/test_mpi.py
|
zploskey/Theano
|
9b3f6351d41d9f5e01b198e3de7538d7f032c409
|
[
"BSD-3-Clause"
] | null | null | null |
theano/tensor/tests/test_mpi.py
|
zploskey/Theano
|
9b3f6351d41d9f5e01b198e3de7538d7f032c409
|
[
"BSD-3-Clause"
] | null | null | null |
theano/tensor/tests/test_mpi.py
|
zploskey/Theano
|
9b3f6351d41d9f5e01b198e3de7538d7f032c409
|
[
"BSD-3-Clause"
] | null | null | null |
from __future__ import absolute_import, print_function, division
from theano.tensor.io import (send, recv, mpi_cmps, MPISend, MPISendWait,
mpi_send_wait_cmp, mpi_tag_cmp, mpi_enabled)
import theano
import subprocess
import os
from theano.gof.sched import sort_schedule_fn
from theano import change_flags
mpi_scheduler = sort_schedule_fn(*mpi_cmps)
mpi_linker = theano.OpWiseCLinker(schedule=mpi_scheduler)
mpi_mode = theano.Mode(linker=mpi_linker)
@change_flags(compute_test_value='off')
def test_recv():
x = recv((10, 10), 'float64', 0, 11)
assert x.dtype == 'float64'
assert x.broadcastable == (False, False)
recvnode = x.owner.inputs[0].owner
assert recvnode.op.source == 0
assert recvnode.op.tag == 11
def test_send():
x = theano.tensor.matrix('x')
y = send(x, 1, 11)
sendnode = y.owner.inputs[0].owner
assert sendnode.op.dest == 1
assert sendnode.op.tag == 11
@change_flags(compute_test_value='off')
def test_can_make_function():
x = recv((5, 5), 'float32', 0, 11)
y = x+1
assert theano.function([], [y])
def test_mpi_roundtrip():
if not mpi_enabled:
return
theano_root = theano.__file__.split('__init__')[0]
p = subprocess.Popen("mpiexec -np 2 python " + theano_root +
"tensor/tests/_test_mpi_roundtrip.py",
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
shell=True,
close_fds=True)
result = theano.compat.decode(p.stdout.read())
assert "True" in result, theano.compat.decode(p.stderr.read())
def test_mpi_send_wait_cmp():
x = theano.tensor.matrix('x')
y = send(x, 1, 11)
z = x + x
waitnode = y.owner
sendnode = y.owner.inputs[0].owner
addnode = z.owner
assert mpi_send_wait_cmp(sendnode, addnode) < 0 # send happens first
assert mpi_send_wait_cmp(waitnode, addnode) > 0 # wait happens last
@change_flags(compute_test_value='off')
def test_mpi_tag_ordering():
x = recv((2, 2), 'float32', 1, 12)
y = recv((2, 2), 'float32', 1, 11)
z = recv((2, 2), 'float32', 1, 13)
f = theano.function([], [x, y, z], mode=mpi_mode)
nodes = f.maker.linker.make_all()[-1]
assert all(node.op.tag == tag
for node, tag in zip(nodes, (11, 12, 13, 11, 12, 13)))
def test_mpi_schedule():
x = theano.tensor.matrix('x')
y = send(x, 1, 11)
z = x + x
waitnode = y.owner
sendnode = y.owner.inputs[0].owner
addnode = z.owner
f = theano.function([x], [y, z], mode=mpi_mode)
nodes = f.maker.linker.make_all()[-1]
optypes = [MPISend, theano.tensor.Elemwise, MPISendWait]
assert all(isinstance(node.op, optype)
for node, optype in zip(nodes, optypes))
| 30.473118
| 73
| 0.632675
|
794c11cd3d9909b9cb4ac568bf7e3f9751a06559
| 80
|
py
|
Python
|
kymatio/scattering2d/__init__.py
|
GReguig/kymatio
|
e0fc10057f5f8bb947068bc40afff8d3d3729052
|
[
"BSD-3-Clause"
] | 516
|
2018-11-18T06:11:16.000Z
|
2022-03-21T22:35:06.000Z
|
kymatio/scattering2d/__init__.py
|
GReguig/kymatio
|
e0fc10057f5f8bb947068bc40afff8d3d3729052
|
[
"BSD-3-Clause"
] | 558
|
2018-11-19T22:21:12.000Z
|
2022-03-28T14:59:15.000Z
|
kymatio/scattering2d/__init__.py
|
GReguig/kymatio
|
e0fc10057f5f8bb947068bc40afff8d3d3729052
|
[
"BSD-3-Clause"
] | 119
|
2018-11-18T06:05:39.000Z
|
2022-03-26T06:59:37.000Z
|
from .frontend.entry import ScatteringEntry2D
__all__ = ['ScatteringEntry2D']
| 16
| 45
| 0.8
|
794c1295f0f472c58a59bd795b7006f3b501c958
| 962
|
py
|
Python
|
nova/api/openstack/compute/legacy_v2/contrib/extended_hypervisors.py
|
ebalduf/nova-backports
|
6bf97ec73467de522d34ab7a17ca0e0874baa7f9
|
[
"Apache-2.0"
] | 7
|
2015-09-22T11:27:16.000Z
|
2015-11-02T12:33:46.000Z
|
nova/api/openstack/compute/legacy_v2/contrib/extended_hypervisors.py
|
ebalduf/nova-backports
|
6bf97ec73467de522d34ab7a17ca0e0874baa7f9
|
[
"Apache-2.0"
] | 9
|
2015-05-20T11:20:17.000Z
|
2017-07-27T08:21:33.000Z
|
nova/api/openstack/compute/legacy_v2/contrib/extended_hypervisors.py
|
ebalduf/nova-backports
|
6bf97ec73467de522d34ab7a17ca0e0874baa7f9
|
[
"Apache-2.0"
] | 13
|
2015-05-05T09:34:04.000Z
|
2017-11-08T02:03:46.000Z
|
# Copyright 2014 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.api.openstack import extensions
class Extended_hypervisors(extensions.ExtensionDescriptor):
"""Extended hypervisors support."""
name = "ExtendedHypervisors"
alias = "os-extended-hypervisors"
namespace = ("http://docs.openstack.org/compute/ext/"
"extended_hypervisors/api/v1.1")
updated = "2014-01-04T00:00:00Z"
| 37
| 78
| 0.716216
|
794c13144a999e0f86869a7c138f51d8563f6dc2
| 2,429
|
py
|
Python
|
aliyun-python-sdk-vs/aliyunsdkvs/request/v20181212/DescribeDeviceURLRequest.py
|
jia-jerry/aliyun-openapi-python-sdk
|
e90f3683a250cfec5b681b5f1d73a68f0dc9970d
|
[
"Apache-2.0"
] | 1
|
2021-03-08T02:59:17.000Z
|
2021-03-08T02:59:17.000Z
|
aliyun-python-sdk-vs/aliyunsdkvs/request/v20181212/DescribeDeviceURLRequest.py
|
jia-jerry/aliyun-openapi-python-sdk
|
e90f3683a250cfec5b681b5f1d73a68f0dc9970d
|
[
"Apache-2.0"
] | 1
|
2020-05-31T14:51:47.000Z
|
2020-05-31T14:51:47.000Z
|
aliyun-python-sdk-vs/aliyunsdkvs/request/v20181212/DescribeDeviceURLRequest.py
|
jia-jerry/aliyun-openapi-python-sdk
|
e90f3683a250cfec5b681b5f1d73a68f0dc9970d
|
[
"Apache-2.0"
] | null | null | null |
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from aliyunsdkcore.request import RpcRequest
from aliyunsdkvs.endpoint import endpoint_data
class DescribeDeviceURLRequest(RpcRequest):
def __init__(self):
RpcRequest.__init__(self, 'vs', '2018-12-12', 'DescribeDeviceURL','vs')
self.set_method('POST')
if hasattr(self, "endpoint_map"):
setattr(self, "endpoint_map", endpoint_data.getEndpointMap())
if hasattr(self, "endpoint_regional"):
setattr(self, "endpoint_regional", endpoint_data.getEndpointRegional())
def get_Auth(self):
return self.get_query_params().get('Auth')
def set_Auth(self,Auth):
self.add_query_param('Auth',Auth)
def get_Type(self):
return self.get_query_params().get('Type')
def set_Type(self,Type):
self.add_query_param('Type',Type)
def get_Mode(self):
return self.get_query_params().get('Mode')
def set_Mode(self,Mode):
self.add_query_param('Mode',Mode)
def get_Stream(self):
return self.get_query_params().get('Stream')
def set_Stream(self,Stream):
self.add_query_param('Stream',Stream)
def get_Id(self):
return self.get_query_params().get('Id')
def set_Id(self,Id):
self.add_query_param('Id',Id)
def get_OutProtocol(self):
return self.get_query_params().get('OutProtocol')
def set_OutProtocol(self,OutProtocol):
self.add_query_param('OutProtocol',OutProtocol)
def get_OwnerId(self):
return self.get_query_params().get('OwnerId')
def set_OwnerId(self,OwnerId):
self.add_query_param('OwnerId',OwnerId)
def get_Expire(self):
return self.get_query_params().get('Expire')
def set_Expire(self,Expire):
self.add_query_param('Expire',Expire)
| 30.3625
| 74
| 0.741457
|
794c1314bf22e9986c1038e23ccfa6cf2ec03b66
| 5,096
|
py
|
Python
|
ppo.py
|
ajleite/basic-ppo
|
e9d823275dda3c376e3e0f7d66e8dfb815b434d8
|
[
"MIT"
] | 2
|
2020-06-27T11:44:19.000Z
|
2022-01-11T21:23:01.000Z
|
ppo.py
|
ajleite/basic-ppo
|
e9d823275dda3c376e3e0f7d66e8dfb815b434d8
|
[
"MIT"
] | null | null | null |
ppo.py
|
ajleite/basic-ppo
|
e9d823275dda3c376e3e0f7d66e8dfb815b434d8
|
[
"MIT"
] | null | null | null |
#!/usr/bin/python3
# Copyright 2019 Abe Leite
# Based on "Proximal Policy Optimization Algorithms", Schulman et al 2017
# For the benefit of my fellow CSCI-B 659 students
# While I hope that this code is helpful I will not vouch for its total accuracy;
# my primary aim here is to elucidate the ideas from the paper.
import sys
import tensorflow as tf
import gym
ACTORS = 8
N_CYCLES = 10000
LEARNING_RATE = 0.00025
CYCLE_LENGTH = 128
BATCH_SIZE = CYCLE_LENGTH*ACTORS
CYCLE_EPOCHS = 3
MINIBATCH = 32*ACTORS
GAMMA = 0.99
EPSILON = 0.1
class DiscretePPO:
def __init__(self, V, pi):
''' V and pi are both keras (Sequential)s.
V maps state to single scalar value;
pi maps state to discrete probability distribution on actions. '''
self.V = V
self.pi = pi
self.old_pi = tf.keras.models.clone_model(self.pi)
self.optimizer = tf.keras.optimizers.Adam(LEARNING_RATE)
@tf.function
def pick_action(self, S):
return tf.random.categorical(self.pi(tf.expand_dims(S,axis=0)), 1)[0,0]
@tf.function
def train_minibatch(self, SARTS_minibatch):
S, A, R, T, S2 = SARTS_minibatch
next_V = tf.where(T, tf.zeros((MINIBATCH,)), self.V(S2))
next_V = tf.stop_gradient(next_V)
advantage = R + GAMMA * next_V - self.V(S)
V_loss = tf.reduce_sum(advantage ** 2)
V_gradient = tf.gradients(V_loss, self.V.weights)
self.optimizer.apply_gradients(zip(V_gradient, self.V.weights))
ratio = tf.gather(self.pi(S), A, axis=1) / tf.gather(self.old_pi(S), A, axis=1)
confident_ratio = tf.clip_by_value(ratio, 1-EPSILON, 1+EPSILON)
current_objective = ratio * advantage
confident_objective = confident_ratio * advantage
PPO_objective = tf.where(current_objective < confident_objective, current_objective, confident_objective)
PPO_objective = tf.reduce_mean(PPO_objective)
pi_gradient = tf.gradients(-PPO_objective, self.pi.weights)
self.optimizer.apply_gradients(zip(pi_gradient, self.pi.weights))
@tf.function
def train(self, SARTS_batch):
S, A, R, T, S2 = SARTS_batch
for _ in range(CYCLE_EPOCHS):
# shuffle and split into minibatches!
shuffled_indices = tf.random.shuffle(tf.range(BATCH_SIZE))
num_mb = BATCH_SIZE // MINIBATCH
for minibatch_indices in tf.split(shuffled_indices, num_mb):
mb_SARTS = (tf.gather(S, minibatch_indices),
tf.gather(A, minibatch_indices),
tf.gather(R, minibatch_indices),
tf.gather(T, minibatch_indices),
tf.gather(S2, minibatch_indices))
self.train_minibatch(mb_SARTS)
for old_pi_w, pi_w in zip(self.old_pi.weights, self.pi.weights):
old_pi_w.assign(pi_w)
def train_PPO(agent, envs, render=False):
episode_returns = []
current_episode_returns = [0 for env in envs]
last_s = [env.reset() for env in envs]
for _ in range(N_CYCLES):
SARTS_samples = []
next_last_s = []
next_current_episode_returns = []
for env, s, episode_return in zip(envs, last_s, current_episode_returns):
for _ in range(CYCLE_LENGTH):
a = agent.pick_action(s).numpy()
s2, r, t, _ = env.step(a)
if render:
env.render()
episode_return += r
SARTS_samples.append((s,a,r,t,s2))
if t:
episode_returns.append(episode_return)
print(f'Episode {len(episode_returns):3d}: {episode_return}')
episode_return = 0
s = env.reset()
else:
s = s2
next_last_s.append(s)
next_current_episode_returns.append(episode_return)
last_s = next_last_s
current_episode_returns = next_current_episode_returns
SARTS_batch = [tf.stack(X, axis=0) for X in zip(*SARTS_samples)]
agent.train(SARTS_batch)
def make_agent(env):
obs_shape = env.observation_space.shape
n_actions = env.action_space.n
V = tf.keras.Sequential([tf.keras.layers.InputLayer(input_shape=obs_shape),
tf.keras.layers.Dense(400, activation='relu'),
tf.keras.layers.Dense(300, activation='relu'),
tf.keras.layers.Dense(1)])
pi = tf.keras.Sequential([tf.keras.layers.InputLayer(input_shape=obs_shape),
tf.keras.layers.Dense(400, activation='relu'),
tf.keras.layers.Dense(300, activation='sigmoid'),
tf.keras.layers.Dense(n_actions, activation='softmax')])
return DiscretePPO(V, pi)
if __name__ == '__main__':
if len(sys.argv) < 2:
print('Usage: python ppo.py <Env-V*> (--render)')
envs = [gym.make(sys.argv[1]) for _ in range(ACTORS)]
agent = make_agent(envs[0])
train_PPO(agent, envs, '--render' in sys.argv)
| 40.768
| 113
| 0.615385
|
794c13cd48c57c87cafd7e7a8feac35abaceaa40
| 627
|
py
|
Python
|
manage.py
|
wszoltysek/spotify
|
8e8e76dc9c12b49809412e645ff49818dde08a7f
|
[
"MIT"
] | null | null | null |
manage.py
|
wszoltysek/spotify
|
8e8e76dc9c12b49809412e645ff49818dde08a7f
|
[
"MIT"
] | 6
|
2021-03-19T02:56:42.000Z
|
2021-09-22T18:58:42.000Z
|
manage.py
|
wszoltysek/spotify
|
8e8e76dc9c12b49809412e645ff49818dde08a7f
|
[
"MIT"
] | null | null | null |
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
def main():
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'spotify.settings')
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
| 28.5
| 73
| 0.682616
|
794c140085a1b064a2e760575ac7ddd4aa5577d8
| 142
|
py
|
Python
|
cmc/python/__init__.py
|
hschwane/offline_production
|
e14a6493782f613b8bbe64217559765d5213dc1e
|
[
"MIT"
] | 1
|
2020-12-24T22:00:01.000Z
|
2020-12-24T22:00:01.000Z
|
cmc/python/__init__.py
|
hschwane/offline_production
|
e14a6493782f613b8bbe64217559765d5213dc1e
|
[
"MIT"
] | null | null | null |
cmc/python/__init__.py
|
hschwane/offline_production
|
e14a6493782f613b8bbe64217559765d5213dc1e
|
[
"MIT"
] | 3
|
2020-07-17T09:20:29.000Z
|
2021-03-30T16:44:18.000Z
|
from icecube import icetray,dataclasses,sim_services
from icecube.load_pybindings import load_pybindings
load_pybindings(__name__, __path__)
| 28.4
| 52
| 0.880282
|
794c140b282262d1c2ed8f7642a914acc7fbebdb
| 59,089
|
py
|
Python
|
static/0.6.0.322/js/transcrypt/handlers.acesso_aluno.py
|
PhanterJR/www_sme
|
bfe3cd430a33c05e7cd4ceadfd6b90ce8f45be90
|
[
"MIT"
] | null | null | null |
static/0.6.0.322/js/transcrypt/handlers.acesso_aluno.py
|
PhanterJR/www_sme
|
bfe3cd430a33c05e7cd4ceadfd6b90ce8f45be90
|
[
"MIT"
] | null | null | null |
static/0.6.0.322/js/transcrypt/handlers.acesso_aluno.py
|
PhanterJR/www_sme
|
bfe3cd430a33c05e7cd4ceadfd6b90ce8f45be90
|
[
"MIT"
] | null | null | null |
import phanterpwa.frontend.gatehandler as gatehandler
import phanterpwa.frontend.helpers as helpers
import phanterpwa.frontend.components.snippets as snippets
import phanterpwa.frontend.components.widgets as widgets
import phanterpwa.frontend.forms as forms
import phanterpwa.frontend.validations as validations
import phanterpwa.frontend.components.modal as modal
import phanterpwa.frontend.decorators as decorators
import phanterpwa.frontend.preloaders as preloaders
from org.transcrypt.stubs.browser import __pragma__
__pragma__('alias', "jQuery", "$")
__pragma__('skip')
jQuery = window = setTimeout = js_undefined = console = JSON = __new__ = FormData = localStorage = 0
__pragma__('noskip')
DIV = helpers.XmlConstructor.tagger("div")
I = helpers.XmlConstructor.tagger("i")
H1 = helpers.XmlConstructor.tagger("h1")
H2 = helpers.XmlConstructor.tagger("h2")
H3 = helpers.XmlConstructor.tagger("h3")
P = helpers.XmlConstructor.tagger("p")
A = helpers.XmlConstructor.tagger("a")
IMG = helpers.XmlConstructor.tagger("img")
BUTTON = helpers.XmlConstructor.tagger("button")
STRONG = helpers.XmlConstructor.tagger("strong")
TABLE = helpers.XmlConstructor.tagger("table")
TH = helpers.XmlConstructor.tagger("th")
TR = helpers.XmlConstructor.tagger("tr")
TD = helpers.XmlConstructor.tagger("td")
FORM = helpers.XmlConstructor.tagger("form")
SPAN = helpers.XmlConstructor.tagger("span")
BR = helpers.XmlConstructor.tagger("br", True)
HR = helpers.XmlConstructor.tagger("hr", True)
LABEL = helpers.XmlConstructor.tagger("label")
I18N = helpers.I18N
XML = helpers.XML
CONCATENATE = helpers.CONCATENATE
__pragma__('kwargs')
class Index(gatehandler.Handler):
def initialize(self):
arg0 = window.PhanterPWA.Request.get_arg(0)
# html = CONCATENATE(
# DIV(
# DIV(
# DIV(
# DIV("DADOS DO ALUNO", _class="phanterpwa-breadcrumb"),
# _class="phanterpwa-breadcrumb-wrapper"
# ),
# _class="p-container"),
# _class='title_page_container card'
# ),
# DIV(
# DIV(
# DIV(
# DIV(preloaders.android, _style="width: 300px; height: 300px; overflow: hidden; margin: auto;"),
# _style="text-align:center; padding: 50px 0;"
# ),
# _id="content-alunos",
# _class='p-row card e-padding_20'
# ),
# _class="phanterpwa-container p-container"
# )
# )
# html.html_to("#main-container")
if arg0 is not None:
self._check_codigo_de_acesso(arg0)
else:
dados_localstorage = localStorage.getItem("aluno-identificado")
console.log(dados_localstorage)
if dados_localstorage is None:
self.IdentificarAluno = IdentificarAluno(self)
else:
self._check_codigo_de_acesso(dados_localstorage)
def _check_codigo_de_acesso(self, codigo):
localStorage.removeItem("aluno-identificado")
window.PhanterPWA.GET(**{
'url_args': ["api", "identificar-aluno", codigo],
'onComplete': self.resposta_check_codigo_de_acesso
})
def resposta_check_codigo_de_acesso(self, data, ajax_status):
if ajax_status == "success":
json = data.responseJSON
self.DadosAluno = DadosAluno(self, json)
else:
self.IdentificarAluno = IdentificarAluno(self)
class IdentificarAluno():
def __init__(self, index_instance):
self.config = window.PhanterPWA.CONFIG
self.url_image = None
self.nome_completo = None
self.sabe_codigo_de_acesso = None
self.codigo_de_acesso = None
self.data_de_nascimento = None
self.data_de_nascimento_iso = None
self.nome_da_mae = None
self.fazer_alteracoes = False
self.fazer_alteracoes2 = False
self.telefone_celular = None
self.numero_celularfunc = None
self.has_whatsapp = False
self.nome_usuario = "{0} {1}".format(
window.PhanterPWA.get_auth_user().first_name,
window.PhanterPWA.get_auth_user().last_name,
)
self.foi_identificado = False
texto_inicial = DIV(
H1("SISTEMA DE IDENTIFICAÇÃO DO ALUNO", _class="phanterpwa-the_title"),
H3(
P("Bem vindo ao sistema de indentificação do aluno. Nele iremos precisar",
" dos seguintes dados: ", STRONG("Nome completo, Data de Nascimento e Nome da mãe."),
" ou o CÓDIGO DE ACESSO localizado em seu comprovante de matrícula."
),
P("Clique em ", STRONG("Iniciar"), " para darmos início ao processo"
),
DIV(
BUTTON("Iniciar", _id="iniciar_ava", _class="btn wave-on-click"),
_class="button_container"
),
_class="phanterpwa-the_subtitle"
),
_id="text-inicio"
)
arbritary_id_menu = window.PhanterPWA.get_id()
arbritary_id_choice = window.PhanterPWA.get_id()
botao_sair = widgets.MenuBox(
arbritary_id_menu,
I(_class="fas fa-ellipsis-v"),
DIV("Sair", _id=arbritary_id_choice),
**{
"_class": "icon_button button_sair",
"z_index": 2001,
"onOpen": lambda: (
jQuery("#{0}".format(arbritary_id_choice)).on("click",
lambda: (self._sair())),
)
}
)
html = DIV(
DIV(
DIV(
DIV(
IMG(_src="/static/{0}/images/perfil_robo-min.jpg".format(self.config['PROJECT']['versioning'])),
_class="background-robo"
),
_class="head-questionario"
),
DIV(
texto_inicial,
_id="row_content",
_class="row"
),
DIV(_class="footer-questionario"),
_id="container-questionario",
_class="container container-questionario"
),
_id="app-content-questionario",
_class="app-content"
)
jQuery("#app-content-questionario").removeClass("has_servidor")
jQuery("#main-container").removeClass("iniciar")
html.html_to("#main-container")
# DIV("doideira").append_to("#main-container")
xml = jQuery("#row_content")
xml.height(jQuery(window).height() - 280).css("width", "100%")
jQuery(window).resize(lambda: xml.css("min-height", jQuery(window).height() - 280).css("width", "100%"))
self.binds()
def _sair(self):
window.PhanterPWA.logout()
def binds(self):
jQuery("#iniciar_ava").off(
"click.iniciar"
).on(
"click.iniciar",
lambda: self.etapa1()
)
def etapa1(self):
jQuery("#app-content-questionario").removeClass("etapa2")
jQuery("#text-inicio").fadeOut(500)
jQuery("#main-container").addClass("iniciar")
titulo = CONCATENATE(
TABLE(TR(TD("SISTEMA DE INDENTIFICAÇÃO DO ALUNO")), _id="nome_escola_chat"),
DIV(I(_class="fas fa-expand"), _class="botao_expand")
)
titulo.append_to("#container-questionario .head-questionario")
jQuery("#main-container").find(".botao_expand").off("click.expand").on(
"click.expand",
self._expand
)
localStorage.removeItem('servidor-token')
self._comeco()
def _expand(self):
el = jQuery("#main-container")
if el.hasClass("expand"):
el.removeClass("expand")
el.find(".botao_expand").find("i").addClass("fa-expand").removeClass("fa-compress")
else:
el.find(".botao_expand").find("i").addClass("fa-compress").removeClass("fa-expand")
el.addClass("expand")
def abreviar_nome(self, nome):
n = nome
n = n.split(" ")
nome_abreviado = nome
if len(n) > 2:
if len(n[1]) > 3:
nome_abreviado = " ".join(n[0:2])
else:
nome_abreviado = " ".join(n[0:3])
return nome_abreviado
def _comeco(self, alteracao=False):
jQuery("#app-content-questionario").addClass("has_servidor")
if self.foi_identificado:
msg_inicial = P("Confirmando Identidade... Aguarde...", _class="remsc")
else:
msg_inicial = P(
"Antes de continuar irei dar algumas instruções... Algumas das respostas que você der aqui poderão ser",
" editadas, basta clicar no ícone ", I(_class="fas fa-edit"), " quando ele aparecer do lado esquerdo",
" de sua resposta. Dito isto, vamos dar início ao processo.",
_class="remsc"
)
nome_completo = self.nome_completo if self.nome_completo is not None else ""
codigo_de_acesso = self.codigo_de_acesso if self.codigo_de_acesso is not None else ""
data_de_nascimento = "{0} 00:00:00".format(self.data_de_nascimento_iso) if self.data_de_nascimento_iso is not None else ""
nome_da_mae = self.nome_da_mae if self.nome_da_mae is not None else ""
num_cel = self.numero_celularfunc if self.numero_celularfunc is not None else ""
mensagem = DIV(
msg_inicial,
DIV(
FORM(
forms.FormWidget(
"checkaluno",
"nome_completo",
**{
"type": "string",
"value": nome_completo,
"_class": "e-display_hidden"
},
),
forms.FormWidget(
"checkaluno",
"codigo_de_acesso",
**{
"type": "string",
"value": codigo_de_acesso,
"_class": "e-display_hidden"
},
),
forms.FormWidget(
"checkaluno",
"data_de_nascimento",
**{
"type": "string",
"value": data_de_nascimento,
"_class": "e-display_hidden"
},
),
forms.FormWidget(
"checkaluno",
"nome_da_mae",
**{
"type": "string",
"value": nome_da_mae,
"_class": "e-display_hidden"
},
),
forms.FormWidget(
"checkaluno",
"telefone_celular",
**{
"type": "string",
"value": num_cel,
"_class": "e-display_hidden"
},
),
**{"_phanterpwa-form": "checkaluno", "_id": "form-checkaluno"}
),
_style="display: none;",
_class="remsc"
),
_id="menssage_captcha",
_class="mensagem",
_style="display: none;"
)
setTimeout(lambda: (
mensagem.html_to("#row_content"),
jQuery("#menssage_captcha").fadeIn(),
self._get_codigo_de_acesso()
), 1000)
def binder_enter_key(self, widget):
p = jQuery(widget.target_selector)
inp = p.find("input")
inp.off("keyup.enter_key").on("keyup.enter_key", lambda event:
p.find(".phanterpwa-widget-icon-wrapper").trigger("click") if event.keyCode == 13
else None)
def _get_codigo_de_acesso(self, change=False):
if self.sabe_codigo_de_acesso is None or change or self.fazer_alteracoes:
if change or self.fazer_alteracoes:
resp = CONCATENATE(
P("Quero tentar adicionar um ", STRONG("CÓDIGO DE ACESSO"), ".",
_class="rclient"),
P("Tudo bem. Você está pronto para adicionar o ", STRONG("CÓDIGO DE ACESSO"), "?.",
_class="remsc"),
)
else:
resp = CONCATENATE(P("Se vc souber o ", STRONG("CÓDIGO DE ACESSO"), " a identificação ocorre rapidamente. Ao fazer a matrícula na escola é fornecido um COMPROVANTE DE MATRÍCULA",
" , se você tem este documento, o ", STRONG("CÓDIGO DE ACESSO"), " está logao abaixo do ",
STRONG("QRCODE"), ".",
_class="remsc"),
)
arbritary_id = window.PhanterPWA.get_id()
mensagem = DIV(
resp,
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
choice = DIV(
BUTTON("EU TENHO O CÓDIGO DE ACESSO", _id="sim_codigo_de_acesso", _class="btn wave-on-click"),
BUTTON("EU NÃO TENHO O CÓDIGO DE ACESSO", _id="nao_codigo_de_acesso", _class="btn wave-on-click"),
_class="buttons_choice"
)
choice.html_to("#container-questionario .footer-questionario")
jQuery("#sim_codigo_de_acesso").off("click.sim_codigo_de_acesso").on(
"click.sim_codigo_de_acesso",
lambda: self._sabe_codigo_de_acesso(sabe=True)
)
jQuery("#nao_codigo_de_acesso").off("click.nao_codigo_de_acesso").on(
"click.nao_codigo_de_acesso",
lambda: self._sabe_codigo_de_acesso(sabe=False)
)
else:
self._sabe_codigo_de_acesso()
def _sabe_codigo_de_acesso(self, sabe=None, change=False):
self.sabe_codigo_de_acesso = sabe
if self.sabe_codigo_de_acesso is True or change or self.fazer_alteracoes:
resp = CONCATENATE(
P("Eu tenho o ", STRONG("CÓDIGO DE ACESSO"), ".",
_class="rclient"),
P("Ok, adicione o seu ", STRONG("CÓDIGO DE ACESSO"), ".",
_class="remsc"),
)
arbritary_id = window.PhanterPWA.get_id()
mensagem = DIV(
resp,
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
self.wd_cod_acesso = widgets.Input(
"codigo_de_acesso",
icon=I(_class="fab fa-telegram-plane"),
placeholder="Código de Acesso",
wear="shadows",
checker=False,
value=self.codigo_de_acesso if self.fazer_alteracoes and self.codigo_de_acesso is not None else ""
)
self.wd_cod_acesso.html_to("#container-questionario .footer-questionario")
self.binder_enter_key(self.wd_cod_acesso)
jQuery(
"#phanterpwa-widget-input-input-codigo_de_acesso"
).off(
"keydown.codigo_de_acesso_input"
).on(
"keydown.codigo_de_acesso_input",
lambda: self.wd_cod_acesso.del_message_error()
).focus()
jQuery("#phanterpwa-widget-codigo_de_acesso").find(
".phanterpwa-widget-icon-wrapper"
).off(
"click.button_codigo_de_acesso"
).on(
"click.button_codigo_de_acesso",
lambda: self._validate_codigo_de_acesso()
)
else:
arbritary_id = window.PhanterPWA.get_id()
arbritary_id_menu = window.PhanterPWA.get_id()
arbritary_id_choice = window.PhanterPWA.get_id()
botao_edit = widgets.MenuBox(
arbritary_id_menu,
I(_class="fas fa-edit"),
DIV("Quero mudar a matrícula", _id=arbritary_id_choice),
**{
"_class": "button_editar",
"z_index": 2001,
"onOpen": lambda:
jQuery("#{0}".format(arbritary_id_choice)).on(
"click",
lambda: self._get_codigo_de_acesso(change=True)
)
}
)
mensagem = DIV(
P(SPAN(botao_edit), "Não tenho o ", STRONG("CÓDIGO DE ACESSO"), ".",
_class="rclient"),
P("Ok, vamos pular esta parte e tentar outra alternativa.",
_class="remsc"),
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
self._get_nome_completo()
def _validate_codigo_de_acesso(self):
value = jQuery("#phanterpwa-widget-input-input-codigo_de_acesso").val()
if value is not None and value is not js_undefined and value != "" and value.length > 0:
self.codigo_de_acesso = value
jQuery("#phanterpwa-widget-input-input-checkaluno-codigo_de_acesso").val(value)
arbritary_id = window.PhanterPWA.get_id()
arbritary_id_menu = window.PhanterPWA.get_id()
arbritary_id_choice = window.PhanterPWA.get_id()
botao_edit = widgets.MenuBox(
arbritary_id_menu,
I(_class="fas fa-edit"),
DIV("Quero mudar o código de acesso", _id=arbritary_id_choice),
**{
"_class": "button_editar",
"z_index": 2001,
"onOpen": lambda: (
jQuery("#{0}".format(arbritary_id_choice)).on("click",
lambda: (self._get_codigo_de_acesso(change=True))),
)
}
)
mensagem = DIV(
P(botao_edit, self.codigo_de_acesso, _class="rclient"),
p("Checando o código digitado... Aguarde..."),
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
self._check_codigo_de_acesso(value)
else:
self.wd_cod_acesso.set_message_error("Código de acesso inválido!")
def _check_codigo_de_acesso(self, codigo):
window.PhanterPWA.GET(**{
'url_args': ["api", "identificar-aluno", codigo],
'onComplete': self.resposta_check_codigo_de_acesso
})
def resposta_check_codigo_de_acesso(self, data, ajax_status):
if ajax_status == "success":
json = data.responseJSON
self.index_instance.DadosAluno = DadosAluno(self.index_instance, json)
else:
self._get_nome_completo(change=False, codigo_invalido=True)
def _get_nome_completo(self, change=False, codigo_invalido=False):
if self.nome_completo is None or change or self.fazer_alteracoes:
if codigo_invalido is True:
xml_ini = P(
"O código adicionado não foi reconhecido, vamos tentar outra alternativa. ",
"Digite agora seu nome completo. Lembrando que o nome deve ser digitado",
" exatamente como está na certidão de nascimento.",
_class="remsc"
)
else:
xml_ini = P(
"Me informe o seu nome completo. Lembrando que o nome deve ser digitado",
"exatamente como está na certidão de nascimento.",
_class="remsc"
)
if change or self.fazer_alteracoes:
resp = CONCATENATE(
P(
"Quero mudar o nome completo.",
_class="rclient"
),
P(
"Tudo bem, digite agora seu nome completo. Lembrando que o nome deve ser digitado",
" exatamente como está na certidão de nascimento.",
_class="remsc"
)
)
else:
resp = xml_ini
arbritary_id = window.PhanterPWA.get_id()
mensagem = DIV(
resp,
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
self.wd_nome_completo = widgets.Input(
"nome_completo",
icon=I(_class="fab fa-telegram-plane"),
placeholder="Nome completo",
wear="shadows",
checker=False,
value=self.nome_completo if self.fazer_alteracoes and self.nome_completo is not None else ""
)
self.wd_nome_completo.html_to("#container-questionario .footer-questionario")
self.binder_enter_key(self.wd_nome_completo)
jQuery(
"#phanterpwa-widget-input-input-nome_completo"
).off(
"keydown.nome_completo_input"
).on(
"keydown.nome_completo_input",
lambda: self.wd_nome_completo.del_message_error()
).focus()
jQuery("#phanterpwa-widget-nome_completo").find(
".phanterpwa-widget-icon-wrapper"
).off(
"click.button_nome_completo"
).on(
"click.button_nome_completo",
lambda: self._validate_nome_completo()
)
else:
self._get_data_de_nascimento()
def _validate_nome_completo(self):
value = jQuery("#phanterpwa-widget-input-input-nome_completo").val()
if value is not None and value is not js_undefined and value != "" and value.length > 5:
self.nome_completo = value
jQuery("#phanterpwa-widget-input-input-checkaluno-nome_completo").val(value)
arbritary_id = window.PhanterPWA.get_id()
arbritary_id_menu = window.PhanterPWA.get_id()
arbritary_id_choice = window.PhanterPWA.get_id()
botao_edit = widgets.MenuBox(
arbritary_id_menu,
I(_class="fas fa-edit"),
DIV("Quero mudar o nome", _id=arbritary_id_choice),
**{
"_class": "button_editar",
"z_index": 2001,
"onOpen": lambda: (
jQuery("#{0}".format(arbritary_id_choice)).on("click",
lambda: (self._get_nome_completo(change=True))),
)
}
)
mensagem = DIV(
P(botao_edit, self.nome_completo, _class="rclient"),
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
self._get_data_de_nascimento()
else:
self.wd_nome_completo.set_message_error("Nome inválido!")
def _get_data_de_nascimento(self, change=False):
if self.data_de_nascimento is None or change or self.fazer_alteracoes:
nome_abreviado = self.abreviar_nome(self.nome_completo)
if change or self.fazer_alteracoes:
resp = CONCATENATE(
P("Tenho que mudar a data de nascimento", _class="rclient"),
P("Certo ", nome_abreviado, ", adicione sua data de nascimento, ficarei no aguado...",
_class="remsc")
)
else:
resp = CONCATENATE(
P("Muito bem ", nome_abreviado, ", ",
"agora preciso de sua data de nascimento...", _class="remsc")
)
arbritary_id = window.PhanterPWA.get_id()
mensagem = DIV(
resp,
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
self.wd_data_de_nascimento = widgets.Input(
"data_de_nascimento",
kind="date",
icon=I(_class="far fa-calendar-alt"),
format="dd/MM/yyyy",
placeholder="Data de nascimento",
wear="shadows",
checker=False,
onDateorDatetimeChoice=self._on_data_de_nascimento,
value=self.data_de_nascimento if self.fazer_alteracoes and self.data_de_nascimento is not None else ""
)
self.wd_data_de_nascimento.html_to("#container-questionario .footer-questionario")
jQuery(
"#phanterpwa-widget-input-input-data_de_nascimento"
).focus()
jQuery(
"#phanterpwa-widget-input-input-data_de_nascimento"
).off(
"keyup.enter_key_date"
).on(
"keyup.enter_key_date",
lambda event: self._valid_data_de_nascimento_manual(event)
)
else:
self._get_nome_da_mae()
def _valid_data_de_nascimento_manual(self, event):
if event.keyCode == 13:
value = jQuery("#phanterpwa-widget-input-input-data_de_nascimento").val()
valider = validations.Valider(value, ["IS_DATE:dd/MM/yyyy"])
if valider.validate():
data = {
"formated": value,
"iso": "{0}-{1}-{2} 00:00:00".format(value[-4:], value[3:5], value[0:2])
}
self._on_data_de_nascimento(data)
else:
self.wd_data_de_nascimento.set_message_error("Data de nascimento inválido!")
else:
self.wd_data_de_nascimento.del_message_error()
def _on_data_de_nascimento(self, data):
arbritary_id = window.PhanterPWA.get_id()
arbritary_id_menu = window.PhanterPWA.get_id()
arbritary_id_choice = window.PhanterPWA.get_id()
botao_edit = widgets.MenuBox(
arbritary_id_menu,
I(_class="fas fa-edit"),
DIV("Quero mudar a Data de Nascimento", _id=arbritary_id_choice),
**{
"_class": "button_editar",
"z_index": 2001,
"onOpen": lambda: (
jQuery("#{0}".format(arbritary_id_choice)).on("click",
lambda: (self._get_data_de_nascimento(change=True))),
)
}
)
mensagem = DIV(
P(botao_edit, data.formated, _class="rclient"),
_id=arbritary_id,
_style="display: none;"
)
jQuery("#phanterpwa-widget-input-input-checkaluno-data_de_nascimento").val(data.iso)
self.data_de_nascimento = data.formated
mensagem.append_to("#row_content")
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
self._get_nome_da_mae()
def _get_nome_da_mae(self, change=False):
if self.nome_da_mae is None or change or self.fazer_alteracoes:
if change or self.fazer_alteracoes:
resp = CONCATENATE(
P("Eu quero mudar o nome de minha mãe!", _class="rclient"),
P("Certo, então digite o nome de sua mãe completo.", _class="remsc")
)
else:
resp = CONCATENATE(
P("Estamos quase na reta final! Agora preciso do nome completo de sua mãe.", _class="remsc")
)
arbritary_id = window.PhanterPWA.get_id()
mensagem = DIV(
resp,
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
self.wd_nome_da_mae = widgets.Input(
"nome_da_mae",
icon=I(_class="fab fa-telegram-plane"),
placeholder="Nome completo da mãe",
wear="shadows",
checker=False,
value=self.nome_da_mae if self.fazer_alteracoes and self.nome_da_mae is not None else ""
)
self.wd_nome_da_mae.html_to("#container-questionario .footer-questionario")
self.binder_enter_key(self.wd_nome_da_mae)
jQuery(
"#phanterpwa-widget-input-input-nome_da_mae"
).off(
"keydown.nome_da_mae_input"
).on(
"keydown.nome_da_mae_input",
lambda: self.wd_nome_da_mae.del_message_error()
).focus()
jQuery("#phanterpwa-widget-nome_da_mae").find(
".phanterpwa-widget-icon-wrapper"
).off(
"click.button_nome_da_mae"
).on(
"click.button_nome_da_mae",
lambda: self._validate_nome_da_mae()
)
else:
self._get_numero_celularfunc()
def _validate_nome_da_mae(self):
value = jQuery("#phanterpwa-widget-input-input-nome_da_mae").val()
if value is not None and value is not js_undefined and value != "" and value.length > 5:
arbritary_id = window.PhanterPWA.get_id()
arbritary_id_menu = window.PhanterPWA.get_id()
arbritary_id_choice = window.PhanterPWA.get_id()
botao_edit = widgets.MenuBox(
arbritary_id_menu,
I(_class="fas fa-edit"),
DIV("Quero mudar o Nome da Mãe", _id=arbritary_id_choice),
**{
"_class": "button_editar",
"z_index": 2001,
"onOpen": lambda: (
jQuery("#{0}".format(arbritary_id_choice)).on("click",
lambda: (self._get_nome_da_mae(change=True))),
)
}
)
mensagem = DIV(
P(botao_edit, value, _class="rclient"),
P("Blz! Última etapa...", _class="remsc"),
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
self.nome_da_mae = value
jQuery("#phanterpwa-widget-input-input-checkaluno-nome_da_mae").val(value)
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
self._get_numero_celularfunc()
else:
self.wd_nome_da_mae.set_message_error("Nome inválido!")
def _get_numero_celularfunc(self, change=False):
if self.numero_celularfunc is None or change or self.fazer_alteracoes:
if change or self.fazer_alteracoes:
resp = CONCATENATE(
P("Eu quero mudar o número do celular!", _class="rclient"),
P("Certo, adicione o novo número do celular.", _class="remsc")
)
else:
resp = CONCATENATE(
P("É importante por um número de celular. Usaremos este número para entrar em contato pelo ",
"Whatsapp se houver algo errado", _class="remsc")
)
arbritary_id = window.PhanterPWA.get_id()
mensagem = DIV(
resp,
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
self.wd_numero_celularfunc = widgets.Input(
"numero_celularfunc",
icon=I(_class="fab fa-telegram-plane"),
placeholder="Número de Celular",
can_empty=True,
mask="fone",
wear="shadows",
checker=False,
value=self.numero_celularfunc if self.fazer_alteracoes and self.numero_celularfunc is not None else ""
)
self.wd_numero_celularfunc.html_to("#container-questionario .footer-questionario")
self.binder_enter_key(self.wd_numero_celularfunc)
jQuery(
"#phanterpwa-widget-input-input-numero_celularfunc"
).off(
"keydown.numero_celularfunc_input"
).on(
"keydown.numero_celularfunc_input",
lambda: self.wd_numero_celularfunc.del_message_error()
).focus()
jQuery("#phanterpwa-widget-numero_celularfunc").find(
".phanterpwa-widget-icon-wrapper"
).off(
"click.button_numero_celularfunc"
).on(
"click.button_numero_celularfunc",
lambda: self._validate_numero_celularfunc()
)
else:
self._check_dados()
def _validate_numero_celularfunc(self):
value = jQuery("#phanterpwa-widget-input-input-numero_celularfunc").val()
if value == "" or value.length > 14:
arbritary_id = window.PhanterPWA.get_id()
arbritary_id_menu = window.PhanterPWA.get_id()
arbritary_id_choice = window.PhanterPWA.get_id()
botao_edit = widgets.MenuBox(
arbritary_id_menu,
I(_class="fas fa-edit"),
DIV("Quero mudar o número do celular.", _id=arbritary_id_choice),
**{
"_class": "button_editar",
"z_index": 2001,
"onOpen": lambda: (
jQuery("#{0}".format(arbritary_id_choice)).on("click",
lambda: (self._get_numero_celularfunc(change=True))),
)
}
)
mensagem = DIV(
P(botao_edit, value, _class="rclient"),
P("Ok! Vou tentar te identificar agora, aguarde um pouquinho...", _class="remsc"),
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
self.numero_celularfunc = value
jQuery("#phanterpwa-widget-input-input-checkaluno-numero_celularfunc").val(value)
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
self._check_dados()
else:
self.wd_numero_celularfunc.set_message_error("Celular inválido!")
def _check_dados(self):
formdata = __new__(FormData(jQuery("#form-checkaluno")[0]))
window.PhanterPWA.POST(**{
'url_args': ["api", "identificar-aluno"],
'form_data': formdata,
'onComplete': self.after_submit2
})
def after_submit(self, data, ajax_status):
if ajax_status == "success":
data = data.responseJSON
nome_abreviado = self.abreviar_nome(data.servidor.nome_completo)
artigo = "o(a)"
if data.servidor.sexo == "Masculino":
artigo = "o"
elif data.servidor.sexo == "Feminino":
artigo = "a"
nome_pai = ""
if data.servidor.nome_do_pai is not None and\
data.servidor.nome_do_pai is not js_undefined and data.servidor.nome_do_pai != "":
nome_pai = SPAN(STRONG(data.servidor.nome_do_pai), " e ")
codigo_de_acesso = SPAN("Está codigo_de_acessod", artigo, " no(a) ", STRONG(data.servidor.turma))
arbritary_id = window.PhanterPWA.get_id()
mensagem = DIV(
P(
"Muito bem! Já sei quem você é. ",
"Você é ", STRONG(data.servidor.nome_completo), " que nasceu em ", STRONG(data.servidor.data_de_nascimento),
" e é filh", artigo, " de ", nome_pai, STRONG(data.servidor.nome_da_mae), ". ", codigo_de_acesso,
". Mora no seguinte endereço: ", STRONG(data.servidor.endereco), "...",
_class="remsc"
),
P(
"Tudo bom com você, ",
nome_abreviado, "?",
" Espero que sim...",
_class="remsc"
),
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
jQuery(".button_editar").fadeOut()
localStorage.setItem('servidor-token', data.servidor.token)
self.token = data.servidor.token
jQuery("#nome_servidor_chat").html(data.servidor.nome_completo)
jQuery("#app-content-questionario").addClass("has_servidor")
self._area_do_servidor(inicio=True)
else:
arbritary_id = window.PhanterPWA.get_id()
mensagem = DIV(
P("Não consegui achar nenhum servidor com o número de carteira que você forneceu.",
" Tente novamente ou eu posso tentar te identificar utilizando dados fornecidos",
" por você. O que quer fazer?",
_class="remsc"),
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
choice = DIV(
BUTTON("TENTAR NOVAMENTE", _id="sim_carteirinha", _class="btn wave-on-click"),
BUTTON("FORNECER DADOS", _id="nao_carteirinha", _class="btn wave-on-click"),
_class="buttons_choice"
)
choice.html_to("#container-questionario .footer-questionario")
jQuery("#sim_carteirinha").off("click.carteirinha_sim").on(
"click.carteirinha_sim",
lambda: self._tem_carteirinha(True, change=True)
)
jQuery("#nao_carteirinha").off("click.carteirinha_nao").on(
"click.carteirinha_nao",
lambda: self._tem_carteirinha(False, change=True)
)
def after_submit2(self, data, ajax_status):
if ajax_status == "success":
data = data.responseJSON
nome_abreviado = self.abreviar_nome(data.nome_completo)
artigo = "o(a)"
if data.sexo == "Masculino":
artigo = "o"
elif data.sexo == "Feminino":
artigo = "a"
nome_pai = ""
if data.nome_do_pai is not None and\
data.nome_do_pai is not js_undefined and data.nome_do_pai != "":
nome_pai = SPAN(STRONG(data.nome_do_pai), " e ")
codigo_de_acesso = SPAN(" Possui CPF nº ", STRONG(data.cpf), " e Matrícula")
mensagem = DIV(
P(
"Confirmei sua identidade ", nome_abreviado, ". Recapitulando..."
"Você é ", STRONG(data.nome_completo), " que nasceu em ", STRONG(data.data_de_nascimento_formatado),
" e é filh", artigo, " de ", nome_pai, STRONG(data.nome_da_mae), ".", codigo_de_acesso,
_class="remsc"
),
P(
"Sua conta foi vinculada ao seu cadastro, agora é possível acessar a ",
STRONG("Área do Servidor"), " e utilizar os serviços do SME, confira clicando abaixo.",
_class="remsc"
),
_id="identificado",
_style="display: none;"
)
mensagem.append_to("#row_content")
jQuery("#identificado").slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
jQuery(".button_editar").fadeOut()
choice = DIV(
A("ÁREA DO SERVIDOR", _href="#_phanterpwa:/area-do-servidor", _class="btn wave-on-click"),
A("VOLTAR (PÁGINA PRINCIPAL)", _href="#_phanterpwa:/home", _class="btn wave-on-click"),
_class="buttons_choice"
)
choice.html_to("#container-questionario .footer-questionario")
localStorage.setItem('identificar-servidor', JSON.stringify(data))
window.PhanterPWA.update_auth_user()
else:
cod_msg = self.codigo_de_acesso
if self.codigo_de_acesso == "" or self.codigo_de_acesso is None:
cod_msg = "Não sei o código de acesso"
arbritary_id = window.PhanterPWA.get_id()
mensagem = DIV(
P("Não consegui te identificar com os dados que você me deu...", _class="remsc"),
P("Nome completo: ", STRONG(self.nome_completo), BR(),
"Código de Acesso: ", STRONG(cod_msg), BR(),
"Data de Nascimento: ", STRONG(self.data_de_nascimento), BR(),
"Nome da mãe: ", STRONG(self.nome_da_mae), BR(),
"Celular: ", STRONG(self.numero_celularfunc),
_class="remsc"),
P("Pode ser que você tenha digitado algo errado!",
" Faça as alterações necessárias em ", STRONG("CONSERTAR ALGO ERRADO"),
" que eu tento te identificar novamente.",
" Se você tem certeza que os dados inseridos estão corretos, o jeito é falar com alguém da administração do SME",
" ou você pode (opção mais recomendada) deixar uma mensagem em ", STRONG("MANDAR MENSAGEM"),
_class="remsc"),
_id=arbritary_id,
_style="display: none;"
)
mensagem.append_to("#row_content")
jQuery("#{0}".format(arbritary_id)).slideDown(500)
jQuery("#row_content").animate({"scrollTop": 2000000}, "slow")
choice = DIV(
widgets.MenuBox(
"alterar_dados",
BUTTON("CONSERTAR ALGO ERRADO", _id="alterar_dados", _class="btn wave-on-click"),
DIV("Quero tentar acessar usando o CÓDIGO DE ACESSO", _id="algo_errado_codigo_de_acesso"),
DIV("Quero mudar o Nome Completo", _id="algo_errado_nome_completo"),
DIV("Quero mudar o CPF", _id="algo_errado_cpf"),
DIV("Quero mudar o Data de Nascimento", _id="algo_errado_data_de_nascimento"),
DIV("Quero mudar o Nome da Mãe", _id="algo_errado_nome_da_mae"),
DIV("Quero mudar o Número Celular", _id="algo_errado_numero_celular"),
**{
"_class": "button_editar",
"_style": "width: auto; display: inline-block;",
"z_index": 2001,
"onOpen": lambda: (
jQuery("#{0}".format("algo_errado_codigo_de_acesso")).on("click",
lambda: (self._get_codigo_de_acesso(change=True))),
jQuery("#{0}".format("algo_errado_nome_completo")).on("click",
lambda: (self._get_nome_completo(change=True))),
jQuery("#{0}".format("algo_errado_data_de_nascimento")).on("click",
lambda: (self._get_data_de_nascimento(change=True))),
jQuery("#{0}".format("algo_errado_nome_da_mae")).on("click",
lambda: (self._get_nome_da_mae(change=True))),
jQuery("#{0}".format("algo_errado_numero_celular")).on("click",
lambda: (self._get_numero_celularfunc(change=True))),
)
}
),
BUTTON("MANDAR MENSAGEM", _id="mandar_mensagem", _class="btn wave-on-click"),
_class="buttons_choice"
)
choice.html_to("#container-questionario .footer-questionario")
# jQuery("#alterar_dados").off("click.alterar_dados").on(
# "click.alterar_dados",
# lambda: self._comeco(alteracao=True)
# )
jQuery("#mandar_mensagem").off("click.mandar_mensagem").on(
"click.mandar_mensagem",
lambda: self._mandar_mensagem()
)
def _mandar_mensagem(self):
self.Modal = ModalMensagem(
"#modal-container",
**{
"nome_completo": self.nome_completo,
"cpf": self.cpf,
"codigo_de_acesso": self.codigo_de_acesso,
"data_de_nascimento": self.data_de_nascimento,
"nome_da_mae": self.nome_da_mae,
"celular": self.numero_celularfunc
}
)
self.Modal.open()
forms.SignForm("#form-mensagem", after_sign=lambda: forms.ValidateForm("#form-mensagem"))
class ModalMensagem(modal.Modal):
def __init__(self, target_element, **parameters):
nome_completo = parameters.get("nome_completo", "")
nome_da_mae = parameters.get("nome_da_mae", "")
data_de_nascimento = parameters.get("data_de_nascimento", "")
cpf = parameters.get("cpf", "")
codigo_de_acesso = parameters.get("codigo_de_acesso", "")
celular = parameters.get("celular", "")
self.element_target = jQuery(target_element)
tcontent = DIV(
P("Em anexo irão as informações que foram coletadas.",
" Coloque um número de contato que entraremos em contato. Caso seja celular, tentaremos",
" entrar em contato pelo Whatsapp. Em último caso responderemos pelo seu email: ",
STRONG(window.PhanterPWA.get_auth_user().email), _style="text-align: center; color: red;"),
DIV(
forms.FormWidget(
"mensagem",
"nome_completo",
**{
"type": "string",
"label": "Nome completo",
"value": nome_completo,
"_class": "e-display_hidden"
}
),
forms.FormWidget(
"mensagem",
"cpf",
**{
"type": "string",
"label": "CPF",
"value": cpf,
"_class": "e-display_hidden"
}
),
forms.FormWidget(
"mensagem",
"codigo_de_acesso",
**{
"type": "string",
"label": "Matrícula",
"value": codigo_de_acesso,
"_class": "e-display_hidden"
}
),
forms.FormWidget(
"mensagem",
"nome_da_mae",
**{
"type": "string",
"label": "Nome mãe",
"value": nome_da_mae,
"_class": "e-display_hidden"
}
),
forms.FormWidget(
"mensagem",
"data_de_nascimento",
**{
"type": "string",
"label": "Data de nascimento",
"value": data_de_nascimento,
"_class": "e-display_hidden"
}
),
forms.FormWidget(
"mensagem",
"celular",
**{
"type": "string",
"label": "Telefone/Celular",
"mask": "fone",
"value": celular
}
),
forms.FormWidget(
"mensagem",
"email",
**{
"type": "string",
"label": "Email",
"value": window.PhanterPWA.get_auth_user().email,
"_class": "e-display_hidden"
}
),
forms.FormWidget(
"mensagem",
"mensagem",
**{
"label": "Mensagem (Opcional)",
"type": "text"
}
),
_class="p-col w1p100"
),
_class="mensagem-form-inputs"
).jquery()
tfooter = DIV(
DIV(
forms.SubmitButton(
"mensagem",
"Enviar mensagem",
_class="btn-autoresize wave_on_click waves-phanterpwa"
),
_class='phanterpwa-form-buttons-container'
),
_class="p-col w1p100"
).jquery()
modal.Modal.__init__(
self,
self.element_target,
**{
"_phanterpwa-form": "mensagem",
"_id": "form-mensagem",
"header_height": 50,
"footer_height": 80,
"title": "Enviar Mensagem",
"content": tcontent,
"footer": tfooter,
"after_open": self.binds,
"z_index": 2002
}
)
def binds(self):
self.element_target.find(
"#phanterpwa-widget-form-submit_button-mensagem"
).off(
'click.modal_submit_mensagem'
).on(
'click.modal_submit_mensagem',
lambda: self.submit()
)
forms.SignForm("#form-mensagem", after_sign=lambda: forms.ValidateForm("#form-mensagem"))
def after_submit(self, data, ajax_status):
if ajax_status == "success":
json = data.responseJSON
self.close()
window.PhanterPWA.flash("Mensagem enviada com sucesso")
else:
if data.status == 400:
json = data.responseJSON
window.PhanterPWA.flash(**{'html': json.message})
forms.SignForm("#form-mensagem")
def submit(self):
formdata = __new__(FormData())
formdata.append(
"csrf_token",
jQuery("#phanterpwa-widget-input-mensagem-csrf_token").val()
)
formdata.append(
"nome_completo",
jQuery("#phanterpwa-widget-input-input-mensagem-nome_completo").val()
)
formdata.append(
"cpf",
jQuery("#phanterpwa-widget-input-input-mensagem-cpf").val()
)
formdata.append(
"codigo_de_acesso",
jQuery("#phanterpwa-widget-input-input-mensagem-codigo_de_acesso").val()
)
formdata.append(
"nome_da_mae",
jQuery("#phanterpwa-widget-input-input-mensagem-nome_da_mae").val()
)
formdata.append(
"data_de_nascimento",
jQuery("#phanterpwa-widget-input-input-mensagem-data_de_nascimento").val()
)
formdata.append(
"celular",
jQuery("#phanterpwa-widget-input-input-mensagem-celular").val()
)
formdata.append(
"email",
jQuery("#phanterpwa-widget-input-input-mensagem-email").val()
)
formdata.append(
"mensagem",
jQuery("#phanterpwa-widget-textarea-textarea-mensagem-mensagem").val()
)
window.PhanterPWA.POST(**{
'url_args': ["api", "mensagem"],
'form_data': formdata,
'onComplete': self.after_submit
})
class DadosAluno():
def __init__(self, index_instance, json):
self.index_instance = index_instance
self.json = json
qr_code = json.data.qrcode
localStorage.setItem("aluno-identificado", qr_code)
html = CONCATENATE(
DIV(
DIV(
DIV(
DIV("LISTA DE ALUNOS", _class="phanterpwa-breadcrumb"),
_class="phanterpwa-breadcrumb-wrapper"
),
_class="p-container"),
_class='title_page_container card'
),
DIV(
DIV(
DIV(
DIV(preloaders.android, _style="width: 300px; height: 300px; overflow: hidden; margin: auto;"),
_style="text-align:center; padding: 50px 0;"
),
_id="content-dados-aluno",
_class='p-row card e-padding_20'
),
_class="phanterpwa-container p-container"
)
)
html.html_to("#main-container")
html_historico = DIV(
DIV("HISTÓRICO ESCOLAR", _class="p-col w1p100 phanterpwa-widget-form-separator"),
_class="historicos-conteudo"
)
if json.data.historico is not None and json.data.historico is not js_undefined:
for x in json.data.historico:
html_ficha = DIV(
LABEL(x.ano_letivo, " - ", x.serie, " - ", x.escola),
DIV(
DIV(
DIV(
DIV(
"Não há ficha individual no ano letivo especificado apesar do aluno estar matriculado",
_class="p-row"
),
_class="phanterpwa-card-panel-control-content"
),
_class="phanterpwa-card-panel-control-wrapper"
),
_class="phanterpwa-card-panel-control-container"
),
_class="phanterpwa-card-panel-control p-col w1p100"
)
if x.turma is None or x.turma is js_undefined:
html_ficha = DIV(
LABEL(x.ano_letivo, " - ", x.serie, " - ", x.escola),
DIV(
DIV(
DIV(
DIV(
"O(A) aluno(a) não está numa turma, apesar de estar matriculado no ano letivo especificado",
_class="p-row"
),
_class="phanterpwa-card-panel-control-content"
),
_class="phanterpwa-card-panel-control-wrapper"
),
_class="phanterpwa-card-panel-control-container"
),
_class="phanterpwa-card-panel-control p-col w1p100"
)
else:
if x.ficha_individual is not None and x.ficha_individual is not js_undefined:
tabela = TABLE(
_class="tabela_ficha_individual"
)
for y in x.ficha_individual:
linha = TR()
for c in y:
if c[1]["tipo"] == "cabecalho":
linha.append(TH(c[0], **dict(c[1])))
elif c[1]["tipo"] == "cabecalho_rotate":
linha.append(TH(DIV(c[0], _class="rotate"), **dict(c[1])))
else:
linha.append(TD(c[0], **dict(c[1])))
tabela.append(linha)
html_ficha = DIV(
LABEL(x.ano_letivo, " - ", x.serie, " - ", x.escola),
DIV(
DIV(
DIV(
DIV(
H3("TURMA: ", x.turma),
tabela,
_class="p-row"
),
_class="phanterpwa-card-panel-control-content"
),
_class="phanterpwa-card-panel-control-wrapper"
),
_class="phanterpwa-card-panel-control-container"
),
_class="phanterpwa-card-panel-control p-col w1p100"
)
html_historico.append(html_ficha)
CONCATENATE(
forms.Form(json.data.aluno),
DIV(html_historico, _class="p-row")
).html_to("#content-dados-aluno")
__pragma__('nokwargs')
| 42.787111
| 194
| 0.505796
|
794c1518a44a42f65a06a6acb01aa91706931f99
| 651
|
py
|
Python
|
tests/services/alertrules/test_file_type_mismatch.py
|
ryanvanasse/py42
|
4664c45d41c32f48323b552b7b11f885c055bff3
|
[
"MIT"
] | null | null | null |
tests/services/alertrules/test_file_type_mismatch.py
|
ryanvanasse/py42
|
4664c45d41c32f48323b552b7b11f885c055bff3
|
[
"MIT"
] | null | null | null |
tests/services/alertrules/test_file_type_mismatch.py
|
ryanvanasse/py42
|
4664c45d41c32f48323b552b7b11f885c055bff3
|
[
"MIT"
] | null | null | null |
from py42.services.alertrules import FileTypeMismatchService
class TestFileTypeMisMatchClient(object):
def test_get_by_id_posts_to_correct_endpoint_for_type_mismatch_rule_type(
self, mock_connection
):
alert_rule_client = FileTypeMismatchService(mock_connection, u"tenant-id")
alert_rule_client.get("rule-id")
url = mock_connection.post.call_args[0][0]
assert url == "/svc/api/v1/Rules/query-file-type-mismatch-rule"
posted_data = mock_connection.post.call_args[1]["json"]
assert posted_data["tenantId"] == u"tenant-id" and posted_data["ruleIds"] == [
u"rule-id"
]
| 40.6875
| 86
| 0.703533
|
794c156443937d0cc37a41d9d4688ecaba925350
| 456
|
py
|
Python
|
graphs/breadth_first_search.py
|
AppliedArtificialIntelligence/Algorithms
|
71e70ce58d1aa0230f93a69b3f0383a755818276
|
[
"Apache-2.0"
] | 99
|
2018-05-02T18:35:40.000Z
|
2022-03-31T04:52:24.000Z
|
graphs/breadth_first_search.py
|
johnjdailey/python-algorithms
|
71e70ce58d1aa0230f93a69b3f0383a755818276
|
[
"Apache-2.0"
] | null | null | null |
graphs/breadth_first_search.py
|
johnjdailey/python-algorithms
|
71e70ce58d1aa0230f93a69b3f0383a755818276
|
[
"Apache-2.0"
] | 65
|
2018-08-12T18:12:52.000Z
|
2022-03-05T08:20:07.000Z
|
#!/usr/bin/env python
#-*- coding: utf-8 -*-
__author__ = 'Stefan Jansen'
from collections import deque
def bfs(G, s):
P, Q = {s: None}, deque([s]) # Parents and FIFO queue
while Q:
u = Q.popleft() # Constant-time for deque
for v in G[u]:
if v in P: continue # Already has parent
P[v] = u # Reached from u: u is parent
Q.append(v)
return P
| 28.5
| 69
| 0.493421
|
794c17ddb485530e923f6a5d392871a3657b6db5
| 4,338
|
py
|
Python
|
src/iks/delete_secret_cert.py
|
IBM/cis-integration
|
fbf1f5c2df57b846499856ca927d2de15b4b375c
|
[
"Apache-2.0"
] | 4
|
2021-08-04T16:58:18.000Z
|
2022-02-16T03:28:46.000Z
|
src/iks/delete_secret_cert.py
|
IBM/cis-integration
|
fbf1f5c2df57b846499856ca927d2de15b4b375c
|
[
"Apache-2.0"
] | 16
|
2021-07-13T13:50:31.000Z
|
2021-08-13T15:31:32.000Z
|
src/iks/delete_secret_cert.py
|
IBM/cis-integration
|
fbf1f5c2df57b846499856ca927d2de15b4b375c
|
[
"Apache-2.0"
] | null | null | null |
'''
DISCLAIMER OF WARRANTIES:
Permission is granted to copy this Tools or Sample code for internal use only, provided that this
permission notice and warranty disclaimer appears in all copies.
THIS TOOLS OR SAMPLE CODE IS LICENSED TO YOU AS-IS.
IBM AND ITS SUPPLIERS AND LICENSORS DISCLAIM ALL WARRANTIES, EITHER EXPRESS OR IMPLIED, IN SUCH SAMPLE CODE,
INCLUDING THE WARRANTY OF NON-INFRINGEMENT AND THE IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. IN NO EVENT WILL IBM OR ITS LICENSORS OR SUPPLIERS BE LIABLE FOR ANY DAMAGES ARISING
OUT OF THE USE OF OR INABILITY TO USE THE TOOLS OR SAMPLE CODE, DISTRIBUTION OF THE TOOLS OR SAMPLE CODE,
OR COMBINATION OF THE TOOLS OR SAMPLE CODE WITH ANY OTHER CODE. IN NO EVENT SHALL IBM OR ITS LICENSORS AND
SUPPLIERS BE LIABLE FOR ANY LOST REVENUE, LOST PROFITS OR DATA, OR FOR DIRECT, INDIRECT, SPECIAL,
CONSEQUENTIAL,INCIDENTAL OR PUNITIVE DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY,
EVEN IF IBM OR ITS LICENSORS OR SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
'''
import requests, json
from src.common.functions import Color as Color
class DeleteSecretCMS:
region = ''
def __init__(self, cluster_id, cis_domain, cert_manager_crn, cert_name, token):
self.cluster_id = cluster_id
self.cis_domain = cis_domain
self.cert_manager_crn = cert_manager_crn
self.cert_name = cert_name
self.token = token
def delete_secret(self):
url = "https://containers.cloud.ibm.com/global/ingress/v2/secret/deleteSecret"
headers = {
'Authorization': 'Bearer ' + self.token
}
payload = json.dumps({
"cluster": self.cluster_id,
"delete_cert": True,
"name": self.cert_name,
"namespace": "default"
})
response = requests.request("POST", url, headers=headers, data=payload)
print("Started delete process for secret in IKS cluster. Check your kubernetes dashboard for progress")
def delete_cms_cert(self):
cert_id = self.check_certificate()
if not cert_id is None:
url_cert_id = self.URLify(cert_id)
url = f"https://{self.region}.certificate-manager.cloud.ibm.com/api/v2/certificate/{url_cert_id}"
payload={}
headers = {
'Authorization': 'Bearer ' + self.token
}
response = requests.request("DELETE", url, headers=headers, data=payload)
if response.status_code == 200:
print(Color.GREEN + "SUCCESS: Certificate successfully deleted" + Color.END)
else:
print(Color.RED + "ERROR: Failed to remove certificate from Certificate Manager" + Color.END)
else:
print(Color.RED + "ERROR: Failed to find certificate in Certificate Manager" + Color.END)
def check_certificate(self):
url_cert_man_crn = self.URLify(self.cert_manager_crn)
try:
self.region = self.cert_manager_crn.split(":")[5]
except:
print(Color.RED+"ERROR: CRN provided not in correct format"+Color.END)
exit(1)
cert_check_url = f"https://{self.region}.certificate-manager.cloud.ibm.com/api/v3/{url_cert_man_crn}/certificates/"
cert_check_headers = {
"Authorization": 'Bearer ' + self.token
}
# Gets all certificates previously present in the certificate manager
cert_check_response = requests.request(
"GET", url=cert_check_url, headers=cert_check_headers)
# print(cert_check_response.text)
# If a valid certificate exists, it returns the CRN of that certificate
if cert_check_response.status_code == 200:
for cert in cert_check_response.json()["certificates"]:
if self.cis_domain in cert["domains"] and ("*." + self.cis_domain) in cert["domains"] and cert["name"] == self.cert_name:
print(
"Certificate found in certificate manager")
return cert["_id"]
return None
# Converts the certificate manager CRN into a URL-encoded CRN
def URLify(self, replacement_str):
new_string = replacement_str.replace(":", "%3A")
return new_string.replace("/", "%2F")
| 46.148936
| 137
| 0.665514
|
794c18a4754c1dddfef97d3aba02e343a049c8c4
| 2,322
|
py
|
Python
|
lib/python/nestedlog/api.py
|
swarren/nestedlog
|
5f2cf9f103c735c4e5f48779f7b17324d883870c
|
[
"MIT"
] | 1
|
2021-03-14T17:01:43.000Z
|
2021-03-14T17:01:43.000Z
|
lib/python/nestedlog/api.py
|
swarren/nestedlog
|
5f2cf9f103c735c4e5f48779f7b17324d883870c
|
[
"MIT"
] | 10
|
2021-03-14T19:44:35.000Z
|
2021-06-07T19:37:47.000Z
|
lib/python/nestedlog/api.py
|
swarren/nestedlog
|
5f2cf9f103c735c4e5f48779f7b17324d883870c
|
[
"MIT"
] | null | null | null |
# Copyright 2021 Stephen Warren <swarren@wwwdotorg.org>
# SPDX-License-Identifier: MIT
from contextlib import contextmanager
import nestedlog.data as nld
import os
import socket
import subprocess
import sys
import termios
SOCK_ENV_VAR = 'NESTED_LOG_CONTROL'
CMD_START_BLOCK = 'start-block'
CMD_END_BLOCK = 'end-block'
def start_block(block_name):
_send_cmd(f'{CMD_START_BLOCK} {block_name}\n')
def end_block(status):
if status in nld.status_to_text:
status = nld.status_to_text[status]
elif status in nld.text_to_status:
pass
else:
raise Exception('Invalid status ' + str(status))
_send_cmd(f'{CMD_END_BLOCK} {status}\n')
class MarkBlockAsFailedException(Exception):
pass
class BlockFailedException(Exception):
pass
@contextmanager
def run_python_as_block(block_name, exit_on_fail=False):
start_block(block_name)
status = nld.STATUS_AUTO
try:
try:
yield None
finally:
sys.stdout.flush()
except MarkBlockAsFailedException:
status = nld.STATUS_ERROR
except:
print('ERROR: Exception thrown:', file=sys.stderr)
import traceback
traceback.print_exc(file=sys.stderr)
status = nld.STATUS_ERROR
end_block(status)
if status != nld.STATUS_AUTO:
raise BlockFailedException()
def run_as_block(block_name, cmd):
with run_python_as_block(block_name):
cpe = subprocess.run(cmd)
if cpe.returncode != 0:
print('ERROR: Process exit code ' + str(cpe.returncode), file=sys.stderr)
raise MarkBlockAsFailedException()
def _send_cmd(cmd):
# Flush data path all the way to nestedlog server.
#
# These should be no-ops, since stdout/stderr are a character device
# rather than pipes.
sys.stdout.flush()
sys.stderr.flush()
# Synchronously flush to nestedlog-helper CUSE server, which will
# synchronously flush to main nestedlog server.
termios.tcdrain(1)
sock_path = os.environ[SOCK_ENV_VAR]
client_sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client_sock.connect(sock_path)
client_sock.sendall(cmd.encode('utf-8'))
response = client_sock.recv(1)
client_sock.close()
if response != b'0':
raise Exception('nestedlog server command failed')
| 28.317073
| 85
| 0.698105
|
794c18db2a90c0d1d14c3cd285dffc5dd20f46c3
| 304
|
py
|
Python
|
progressBar.py
|
immortal-autumn/progress-bar-tool
|
72160490561a5c3e692fb9f8966f195bca4c6a0d
|
[
"MIT"
] | 1
|
2021-03-05T17:32:07.000Z
|
2021-03-05T17:32:07.000Z
|
progressBar.py
|
immortal-autumn/progress-bar-tool
|
72160490561a5c3e692fb9f8966f195bca4c6a0d
|
[
"MIT"
] | null | null | null |
progressBar.py
|
immortal-autumn/progress-bar-tool
|
72160490561a5c3e692fb9f8966f195bca4c6a0d
|
[
"MIT"
] | null | null | null |
import math
# construct a progress bar
def construct_bar(progress, total):
rate = progress / total
res = '【'
dots = math.floor(20 * rate)
for i in range(dots):
res += '*'
for i in range(20 - dots):
res += '-'
return res + '】'
print(construct_bar(80, 100))
| 14.47619
| 35
| 0.555921
|
794c18dc021ae58f84a583d74741877094206361
| 1,880
|
py
|
Python
|
ooobuild/dyn/drawing/graphic_filter_request.py
|
Amourspirit/ooo_uno_tmpl
|
64e0c86fd68f24794acc22d63d8d32ae05dd12b8
|
[
"Apache-2.0"
] | null | null | null |
ooobuild/dyn/drawing/graphic_filter_request.py
|
Amourspirit/ooo_uno_tmpl
|
64e0c86fd68f24794acc22d63d8d32ae05dd12b8
|
[
"Apache-2.0"
] | null | null | null |
ooobuild/dyn/drawing/graphic_filter_request.py
|
Amourspirit/ooo_uno_tmpl
|
64e0c86fd68f24794acc22d63d8d32ae05dd12b8
|
[
"Apache-2.0"
] | null | null | null |
# coding: utf-8
#
# Copyright 2022 :Barry-Thomas-Paul: Moss
#
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http: // www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Exception Class
# this is a auto generated file generated by Cheetah
# Namespace: com.sun.star.drawing
# Libre Office Version: 7.3
from typing import TYPE_CHECKING
from ooo.oenv.env_const import UNO_ENVIRONMENT, UNO_RUNTIME
if (not TYPE_CHECKING) and UNO_RUNTIME and UNO_ENVIRONMENT:
import uno
def _get_class():
orig_init = None
ordered_keys = ('Message', 'Context', 'ErrCode')
def init(self, *args, **kwargs):
if len(kwargs) == 0 and len(args) == 1 and getattr(args[0], "__class__", None) == self.__class__:
orig_init(self, args[0])
return
kargs = kwargs.copy()
for i, arg in enumerate(args):
kargs[ordered_keys[i]] = arg
orig_init(self, **kargs)
type_name = 'com.sun.star.drawing.GraphicFilterRequest'
ex = uno.getClass(type_name)
ex.__ooo_ns__ = 'com.sun.star.drawing'
ex.__ooo_full_ns__= type_name
ex.__ooo_type_name__ = 'exception'
orig_init = ex.__init__
ex.__init__ = init
return ex
GraphicFilterRequest = _get_class()
else:
from ...lo.drawing.graphic_filter_request import GraphicFilterRequest as GraphicFilterRequest
__all__ = ['GraphicFilterRequest']
| 34.181818
| 109
| 0.680851
|
794c1976f25bc8314540710c46bad778d90a537a
| 140
|
py
|
Python
|
Python/004. Sets/02. Symmetric Difference.py
|
subhadeep-123/HackerRank
|
4596915097e58d9fd7a8bfad3194ac35f0c25177
|
[
"MIT"
] | 2
|
2020-07-20T19:47:54.000Z
|
2021-04-19T21:14:59.000Z
|
Python/004. Sets/02. Symmetric Difference.py
|
subhadeep-123/HackerRank
|
4596915097e58d9fd7a8bfad3194ac35f0c25177
|
[
"MIT"
] | null | null | null |
Python/004. Sets/02. Symmetric Difference.py
|
subhadeep-123/HackerRank
|
4596915097e58d9fd7a8bfad3194ac35f0c25177
|
[
"MIT"
] | null | null | null |
m = int(input())
s1 = set(map(int, input().split()))
m = int(input())
s2 = set(map(int, input().split()))
print(*sorted(s1 ^ s2), sep='\n')
| 23.333333
| 35
| 0.564286
|
794c19a9d24e4eff6674150988461c5ab1d0d0f8
| 399
|
py
|
Python
|
api/urls.py
|
seanpierce/django-itr
|
01951612d4d49c328c89487efc83e65908c8ad58
|
[
"MIT"
] | null | null | null |
api/urls.py
|
seanpierce/django-itr
|
01951612d4d49c328c89487efc83e65908c8ad58
|
[
"MIT"
] | null | null | null |
api/urls.py
|
seanpierce/django-itr
|
01951612d4d49c328c89487efc83e65908c8ad58
|
[
"MIT"
] | null | null | null |
from django.urls import path
from .views import get_episodes, create_new_subscription_request, create_subscriber, thanks
urlpatterns = [
path('episodes/', get_episodes, name="episodes_all"),
path('subscribe/', create_new_subscription_request,
name="new_subscription_request"),
path('confirm/', create_subscriber, name='create_subscriber'),
path('', thanks, name='thanks'),
]
| 33.25
| 91
| 0.744361
|
794c1ac6af2525fe545920115955d35e4a651c19
| 220
|
py
|
Python
|
began/kernel.py
|
YingzhenLi/SteinGrad
|
6c9b3f3bd51fabfa61890c75bdbccb22c03baa61
|
[
"MIT"
] | 19
|
2018-02-13T22:51:19.000Z
|
2021-11-18T10:53:20.000Z
|
began/kernel.py
|
YingzhenLi/SteinGrad
|
6c9b3f3bd51fabfa61890c75bdbccb22c03baa61
|
[
"MIT"
] | null | null | null |
began/kernel.py
|
YingzhenLi/SteinGrad
|
6c9b3f3bd51fabfa61890c75bdbccb22c03baa61
|
[
"MIT"
] | 7
|
2018-02-26T00:50:56.000Z
|
2021-01-31T10:16:47.000Z
|
import tensorflow as tf
def Epanechnikov_kernel(z, K):
z_ = tf.expand_dims(z, 1)
pdist_square = (z - tf.stop_gradient(z_))**2
kzz = tf.reduce_mean(1 - pdist_square, -1)
return kzz, tf.constant(1.0)
| 22
| 48
| 0.65
|
794c1b7459e4861f620dc74d587f0ebfb4e03042
| 4,553
|
py
|
Python
|
Section 5/source/prep.py
|
quoctrinh8811/AI-for-Finance
|
d57f1ed0d98c1659d9ea953a9fa8d1c8194811c5
|
[
"MIT"
] | 30
|
2019-02-24T04:42:17.000Z
|
2022-03-02T10:12:45.000Z
|
Section 5/source/prep.py
|
quoctrinh8811/AI-for-Finance
|
d57f1ed0d98c1659d9ea953a9fa8d1c8194811c5
|
[
"MIT"
] | 1
|
2022-01-23T16:46:59.000Z
|
2022-01-23T16:46:59.000Z
|
Section 5/source/prep.py
|
quoctrinh8811/AI-for-Finance
|
d57f1ed0d98c1659d9ea953a9fa8d1c8194811c5
|
[
"MIT"
] | 21
|
2019-03-05T23:10:43.000Z
|
2022-03-07T10:04:49.000Z
|
"""
Prepare stock prices data for use in an LSTM network.
Our goal here is to predict a closing price of a share/stock
on a given day of a company based on the metrics from previous day.
We're working with historical stock prices data in .CSV format
for Apple (APPL) (but the code is general and you can use it
with other stocks).
Each row is a set of metrics for a given company for a given day.
We're using the exact same method that we've used in
Section 2.
The main difference is that we're using multiple
metrics/values (instead of just one in Section 2)
to predict closing price.
"""
import pandas
import numpy as np
from matplotlib import pyplot
def get_raw_xy(data):
"""
Return only metrics/values that we will base
our predictions on first, then a list of closing prices.
Both metrics/values are from the same day.
"""
# Removing data column and adj close.
# Assuming that our data don't have any
# dividents, so close column is the same as
# adj close.
data=data.drop(columns=['Date','Adj Close'])
values=data.values
# Each column number match a specific metric:
# Open=0,High=1,Low=2,Close=3,Volume=4
return values[:, [0,1,2,3,4]], values[:, 3]
def train_test_split(X, Y, trs_len=0.80):
"""
Split both X and Y into train and test set.
trs_len - how much data should we use for training?
by default it's 0.80 meaning 80%, the remining
20% of the data will be used for testing.
"""
lx=len(X)
trs=int(lx*trs_len)
train_x, train_y = X[:trs], Y[:trs]
test_x, test_y = X[trs:], Y[trs:]
return train_x, train_y, test_x, test_y
def get_vpo(values):
"""
This is the minimalistic version of
get_vpo function from prep.py from Section 2.
Day 1 A1,A2,*A3
Day 2 B1,B2,*B3
Day 3 C1,C2,*C3
We want to predict values with *,so
if we want to train our network to predict Day1
we don't have any data from previous day, so we can't
do that, but for we base prediction of m23 from the
metrics from prev data: m11,m12,m13 and we can do
the same for day 2:
X: A1,A2,A3 Y: B3
X: B1,B2,B3 Y: C3
What about data from Day 3? Well, we don't have any
data from Day4 to use for prediction using metrics
from Day3.
So this is how we're constructing our data:
X:No data Y:A3 <- We discard this first value,
X:A1,A2,A3 Y:B3 since we don't have any X data from Day 0
X:B1,B2,B3 Y:C3
X:C1,C2,C3 Y:No data <- We need to discard this as well,
since there's no data for Y from Day 4
"""
shifted_y=list(values)
shifted_y.pop(0)
shifted_y.append(None)
return shifted_y
def get_data(f='data/stock_prices.csv'):
"""
Load stock prices data from .CSV file,
convert to the correct form.
"""
d=pandas.read_csv(f)
x,y=get_raw_xy(d)
yy=get_vpo(y)
return x[:-1], yy[:-1]
def prep_data(train_x, train_y):
"""
Split data and return in the exact format
we need it for our LSTM to learn.
"""
train_x, train_y, test_x, test_y = train_test_split(train_x, train_y)
# We need one more "dimension" (we need to put our values
# into on more list) to our x values
train_x=np.array(np.expand_dims(train_x, axis=1))
train_y=np.array(train_y)
test_x=np.array(np.expand_dims(test_x, axis=1))
test_y=np.array(test_y)
print(train_x.shape, train_y.shape, test_x.shape, test_y.shape)
return train_x, train_y, test_x, test_y
if __name__ == '__main__':
# Here we want to first show how the "shifting" works
# the plot a part of our train and test data.
d=pandas.read_csv('data/stock_prices.csv')
x, y = get_raw_xy(d)
yy=get_vpo(y)
print('Data before conversion.')
for i in range(5):
print('X[%d]/Y[%d]' % (i, i), x[i], y[i])
print('Shitfted')
for i in range(5):
print('X[%d]/YY[%d]' % (i, i), x[i], yy[i])
train_x, train_y, test_x, test_y=train_test_split(x, yy)
# Prepare data for plotting.
p_tx=list(train_x[:, 3])+[None]*(len(test_x))
p_ttx=([None]*(len(train_x)-1))+list(train_x[:, 3])[-1:]+list(test_x[:, 3])
# Plot closing prices for each day.
pyplot.plot(p_tx, label='train_x')
pyplot.plot(p_ttx, label='test_x')
pyplot.legend()
pyplot.show()
x,y=get_data()
print('Data before preparation')
print(x[0],y[0])
print('Data after preparation')
train_x, train_y, test_x, test_y=prep_data(x, y)
print(train_x[0],train_y[0])
| 31.184932
| 79
| 0.65056
|
794c1ba741ab4f201190cd6d966fb140430b4824
| 380
|
py
|
Python
|
sessions/neo_pixel.py
|
dunfred/iot_sessions
|
3cfb7e0c2a9f51440a6d001c723b154d64c0b725
|
[
"MIT"
] | 1
|
2021-09-01T17:18:52.000Z
|
2021-09-01T17:18:52.000Z
|
sessions/neo_pixel.py
|
dunfred/iot_sessions
|
3cfb7e0c2a9f51440a6d001c723b154d64c0b725
|
[
"MIT"
] | null | null | null |
sessions/neo_pixel.py
|
dunfred/iot_sessions
|
3cfb7e0c2a9f51440a6d001c723b154d64c0b725
|
[
"MIT"
] | null | null | null |
import neopixel,machine,time
import random as rd
sen = machine.Pin(26,machine.Pin.OUT)
np = neopixel.NeoPixel(sen,7)
while True:
for i in range(0,7):
r,g,b = rd.randint(0,255),rd.randint(0,255),rd.randint(0,255)
np[i]= (r,g,b)
np.write()
time.sleep(1)
for i in range(0,7):
np[i]= (0,0,0)
np.write()
| 20
| 69
| 0.536842
|
794c1bb8c4612023efb13ae63adbb3fea0cf6776
| 1,744
|
py
|
Python
|
tools/mo/openvino/tools/mo/front/PowerToEltwises.py
|
ryanloney/openvino-1
|
4e0a740eb3ee31062ba0df88fcf438564f67edb7
|
[
"Apache-2.0"
] | 1,127
|
2018-10-15T14:36:58.000Z
|
2020-04-20T09:29:44.000Z
|
tools/mo/openvino/tools/mo/front/PowerToEltwises.py
|
ryanloney/openvino-1
|
4e0a740eb3ee31062ba0df88fcf438564f67edb7
|
[
"Apache-2.0"
] | 439
|
2018-10-20T04:40:35.000Z
|
2020-04-19T05:56:25.000Z
|
tools/mo/openvino/tools/mo/front/PowerToEltwises.py
|
ryanloney/openvino-1
|
4e0a740eb3ee31062ba0df88fcf438564f67edb7
|
[
"Apache-2.0"
] | 414
|
2018-10-17T05:53:46.000Z
|
2020-04-16T17:29:53.000Z
|
# Copyright (C) 2018-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from openvino.tools.mo.ops.elementwise import Mul, Add, Pow
from openvino.tools.mo.front.common.partial_infer.utils import mo_array
from openvino.tools.mo.front.common.replacement import FrontReplacementOp
from openvino.tools.mo.graph.graph import Graph
from openvino.tools.mo.ops.const import Const
class PowerToEltwises(FrontReplacementOp):
op = "AttributedPower"
enabled = True
force_clean_up = True
def replace_sub_graph(self, graph: Graph, match: dict):
op = match['op']
out_port = op.in_port(0).get_source()
if op.soft_get('scale', 1) != 1:
const = Const(graph, {'value': mo_array(op.scale)}).create_node()
mul = Mul(graph, {'name': op.name + '/mul_'}).create_node()
const.out_port(0).connect(mul.in_port(1))
mul.in_port(0).get_connection().set_source(out_port)
out_port = mul.out_port(0)
if op.soft_get('shift', 0) != 0:
const = Const(graph, {'value': mo_array(op.shift)}).create_node()
add = Add(graph, {'name': op.name + '/add_'}).create_node()
const.out_port(0).connect(add.in_port(1))
add.in_port(0).get_connection().set_source(out_port)
out_port = add.out_port(0)
if op.soft_get('power', 1) != 1:
const = Const(graph, {'value': mo_array(op.power)}).create_node()
pow = Pow(graph, {'name': op.name + '/pow_'}).create_node()
const.out_port(0).connect(pow.in_port(1))
pow.in_port(0).get_connection().set_source(out_port)
out_port = pow.out_port(0)
op.out_port(0).get_connection().set_source(out_port)
| 41.52381
| 77
| 0.637041
|
794c1c2c3de87a689351a977df0ebed5e4bfef93
| 1,341
|
py
|
Python
|
python/cugraph/tests/dask/test_mg_utility.py
|
jwyles/cugraph
|
1758d085e03d1d62ccd7064fda8cb0257011f50b
|
[
"Apache-2.0"
] | null | null | null |
python/cugraph/tests/dask/test_mg_utility.py
|
jwyles/cugraph
|
1758d085e03d1d62ccd7064fda8cb0257011f50b
|
[
"Apache-2.0"
] | null | null | null |
python/cugraph/tests/dask/test_mg_utility.py
|
jwyles/cugraph
|
1758d085e03d1d62ccd7064fda8cb0257011f50b
|
[
"Apache-2.0"
] | null | null | null |
import cugraph.dask as dcg
from dask.distributed import Client
import gc
import cugraph
import dask_cudf
import cugraph.comms as Comms
from dask_cuda import LocalCUDACluster
import pytest
@pytest.fixture
def client_connection():
cluster = LocalCUDACluster()
client = Client(cluster)
Comms.initialize()
yield client
Comms.destroy()
client.close()
cluster.close()
def test_compute_local_data(client_connection):
gc.collect()
input_data_path = r"../datasets/karate.csv"
chunksize = dcg.get_chunksize(input_data_path)
ddf = dask_cudf.read_csv(input_data_path, chunksize=chunksize,
delimiter=' ',
names=['src', 'dst', 'value'],
dtype=['int32', 'int32', 'float32'])
dg = cugraph.DiGraph()
dg.from_dask_cudf_edgelist(ddf, source='src', destination='dst',
edge_attr='value')
# Compute_local_data
dg.compute_local_data(by='dst')
data = dg.local_data['data']
by = dg.local_data['by']
assert by == 'dst'
assert Comms.is_initialized()
global_num_edges = data.local_data['edges'].sum()
assert global_num_edges == dg.number_of_edges()
global_num_verts = data.local_data['verts'].sum()
assert global_num_verts == dg.number_of_nodes()
| 26.294118
| 68
| 0.650261
|
794c1c3f8077ba51b49f53cd0d96dde8eb85fa4a
| 122
|
py
|
Python
|
pyradox/datatype/__init__.py
|
SaucyPigeon/pyradox
|
a500a5628f57e056fa019ba1e114118abe6dc205
|
[
"MIT"
] | null | null | null |
pyradox/datatype/__init__.py
|
SaucyPigeon/pyradox
|
a500a5628f57e056fa019ba1e114118abe6dc205
|
[
"MIT"
] | null | null | null |
pyradox/datatype/__init__.py
|
SaucyPigeon/pyradox
|
a500a5628f57e056fa019ba1e114118abe6dc205
|
[
"MIT"
] | null | null | null |
from pyradox.datatype.color import Color
from pyradox.datatype.time import Time
from pyradox.datatype.tree import Tree
| 30.5
| 41
| 0.827869
|
794c1d6ce15a67fe1a531126c3de28837aa0f86c
| 6,574
|
py
|
Python
|
plot_data.py
|
hpi-sam/minimum-wage-rl
|
f9342168955d2fa2623f427a6869e402592944b4
|
[
"MIT"
] | null | null | null |
plot_data.py
|
hpi-sam/minimum-wage-rl
|
f9342168955d2fa2623f427a6869e402592944b4
|
[
"MIT"
] | null | null | null |
plot_data.py
|
hpi-sam/minimum-wage-rl
|
f9342168955d2fa2623f427a6869e402592944b4
|
[
"MIT"
] | null | null | null |
import time
import matplotlib.pyplot as plt
from numpy.core import umath
from numpy.core.numeric import count_nonzero
from numpy.lib.function_base import average
plt.ion()
import numpy as np
import pandas as pd
plt.style.use("dark_background")
class DynamicUpdate():
#Suppose we know the x range
min_x = 0
max_x = 10
def __init__(self) -> None:
self.y_values_1 = 2
self.y_values_2 = 2
self.y_values_3 = 3
self.y_values_4 = 1
self.xdata = []
self.ydata_1 = [[] for _ in range(self.y_values_1)]
self.ydata_2 = [[] for _ in range(self.y_values_2)]
self.ydata_3 = [[] for _ in range(self.y_values_3)]
self.ydata_4 = [[] for _ in range(self.y_values_4)]
def on_launch(self):
#Set up plot
self.figure, self.ax = plt.subplots(2,2)
self.figure.suptitle('AI Scenario', fontweight="bold", fontsize=18)
self.y_values_1 = 2
self.y_values_2 = 2
self.y_values_3 = 3
self.y_values_4 = 1
self.lines_1 = [[] for _ in range(self.y_values_1)]
self.lines_2 = [[] for _ in range(self.y_values_2)]
self.lines_3 = [[] for _ in range(self.y_values_3)]
self.lines_4 = [[] for _ in range(self.y_values_4)]
self.lines_1[0], = self.ax[0,0].plot([],[], label="Expense")
self.lines_1[1], = self.ax[0,0].plot([],[], label="Avg Salary")
self.lines_2[0], = self.ax[0,1].plot([],[],label="Poverty")
self.lines_2[1], = self.ax[0,1].plot([],[], label="Unemployment")
self.lines_3[0], = self.ax[1,0].plot([],[],label="Junior")
self.lines_3[1], = self.ax[1,0].plot([],[],label="Senior")
self.lines_3[2], = self.ax[1,0].plot([],[],label="Executive")
self.lines_4[0], = self.ax[1,1].plot([],[], label="Minimum Wage")
self.ax[0,0].legend(loc="upper left")
self.ax[0,1].legend(loc="upper right")
self.ax[1,0].legend(loc="upper left")
self.ax[1,1].legend(loc="upper left")
for i in range(2):
for j in range(2):
self.ax[i,j].set_autoscaley_on(True)
self.ax[i,j].set_autoscalex_on(True)
self.ax[i,j].grid(linewidth=0.2)
self.ax[0,0].set_title("Expense vs Average Salary", fontweight="bold", fontsize=12)
self.ax[0,1].set_title("Poverty vs Unemployment", fontweight="bold", fontsize=12)
self.ax[1,0].set_title("Jobs", fontweight="bold", fontsize=12)
self.ax[1,1].set_title("Minimum wage", fontweight="bold", fontsize=12)
def on_running(self, xdata, ydata,ax_value):
#Update data (with the new _and_ the old points)
# running_start_time = time.time()
color_vals = ["blue","yellow"]
if ax_value == 1:
for i in range(self.y_values_1):
self.lines_1[i].set_xdata(xdata)
self.lines_1[i].set_ydata(ydata[i])
self.ax[0,0].relim()
self.ax[0,0].autoscale_view()
elif ax_value == 2:
for i in range(self.y_values_2):
self.lines_2[i].set_xdata(xdata)
self.lines_2[i].set_ydata(ydata[i])
self.ax[0,1].fill_between(xdata,ydata[i], alpha=0.04, facecolor=color_vals[i])
self.ax[0,1].relim()
self.ax[0,1].autoscale_view()
elif ax_value == 3:
for i in range(self.y_values_3):
self.lines_3[i].set_xdata(xdata)
self.lines_3[i].set_ydata(ydata[i])
self.ax[1,0].relim()
self.ax[1,0].autoscale_view()
# Minimum wage
else:
ax4_colors = "green"
for i in range(self.y_values_4):
self.lines_4[i].set_xdata(xdata)
self.lines_4[i].set_ydata(ydata[i])
self.ax[1,1].fill_between(xdata,ydata[i], alpha=0.04, facecolor=ax4_colors)
self.ax[1,1].relim()
self.ax[1,1].autoscale_view()
#Need both of these in order to rescale
#We need to draw *and* flush
# print("On Runnung - ", time.time() - running_start_time)
def draw(self):
# running_start_time_2 = time.time()
self.figure.canvas.draw()
self.figure.canvas.flush_events()
# print("Drawing canvas - ", time.time() - running_start_time_2)
#Example
def __call__(self):
self.on_launch()
def update_xdata(self,x):
self.xdata.append(x)
def plot_repeated(self,x,y,ax_val, block_value):
import numpy as np
# repeated_start_time = time.time()
if ax_val == 1:
for i in range(self.y_values_1):
self.ydata_1[i].append(y[i])
self.on_running(self.xdata, self.ydata_1,ax_val)
if ax_val == 2:
for i in range(self.y_values_2):
self.ydata_2[i].append(y[i])
self.on_running(self.xdata, self.ydata_2,ax_val)
if ax_val == 3:
for i in range(self.y_values_3):
self.ydata_3[i].append(y[i])
self.on_running(self.xdata, self.ydata_3,ax_val)
if ax_val == 4:
for i in range(self.y_values_4):
self.ydata_4[i].append(y[i])
self.on_running(self.xdata, self.ydata_4,ax_val)
# print("In Repeated - ", time.time() - repeated_start_time)
plt.show(block=block_value)
d = DynamicUpdate()
d()
df_1 = pd.read_excel("data\\ai_scenario_data.xlsx")
mini_wage = df_1["Minimum Wage"].tolist()
monthly_expense = (df_1["productPrice"]*30).tolist()
average_salary = df_1["Average Salary"].tolist()
poverty_rate = df_1["Poverty"].tolist()
unemployment_rate = df_1["Unemployment"].tolist()
junior_pos = df_1["Junior"].tolist()
senior_pos = df_1["Senior"].tolist()
exec_pos = df_1["Executive"].tolist()
count = 0
x = 1
y = 2
z = 3
all_count = len(mini_wage)
for year_val,wage in enumerate(mini_wage):
if year_val < all_count-1:
block_value = False
else:
block_value = True
d.update_xdata(year_val)
d.plot_repeated(year_val, [monthly_expense[count],average_salary[count]], 1, block_value)
d.plot_repeated(year_val, [poverty_rate[count],unemployment_rate[count]], 2, block_value)
d.plot_repeated(year_val, [junior_pos[count], senior_pos[count], exec_pos[count]], 3, block_value)
d.plot_repeated(year_val,[mini_wage[count]],4, block_value)
count = count + 1
d.draw()
| 32.87
| 102
| 0.583663
|
794c1e5552a54bbae94f8c0cd74334de1478e385
| 6,852
|
py
|
Python
|
cinder/tests/unit/backup/drivers/test_backup_posix.py
|
cloudification-io/cinder
|
23d76e01f2b4f3771b57fb287084a4884238b827
|
[
"Apache-2.0"
] | 571
|
2015-01-01T17:47:26.000Z
|
2022-03-23T07:46:36.000Z
|
cinder/tests/unit/backup/drivers/test_backup_posix.py
|
dFarui/cinder
|
b2922384054ddbd46e071fd07372a75a21d7f85d
|
[
"Apache-2.0"
] | 37
|
2015-01-22T23:27:04.000Z
|
2021-02-05T16:38:48.000Z
|
cinder/tests/unit/backup/drivers/test_backup_posix.py
|
dFarui/cinder
|
b2922384054ddbd46e071fd07372a75a21d7f85d
|
[
"Apache-2.0"
] | 841
|
2015-01-04T17:17:11.000Z
|
2022-03-31T12:06:51.000Z
|
# Copyright (c) 2015 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests for Posix backup driver."""
import builtins
import os
from unittest import mock
from cinder.backup.drivers import posix
from cinder import context
from cinder import objects
from cinder.tests.unit import fake_constants as fake
from cinder.tests.unit import test
FAKE_FILE_SIZE = 52428800
FAKE_SHA_BLOCK_SIZE_BYTES = 1024
FAKE_BACKUP_ENABLE_PROGRESS_TIMER = True
FAKE_CONTAINER = 'fake/container'
FAKE_BACKUP_ID = fake.BACKUP_ID
FAKE_BACKUP_ID_PART1 = fake.BACKUP_ID[:2]
FAKE_BACKUP_ID_PART2 = fake.BACKUP_ID[2:4]
FAKE_BACKUP_ID_REST = fake.BACKUP_ID[4:]
FAKE_BACKUP = {'id': FAKE_BACKUP_ID, 'container': None}
UPDATED_CONTAINER_NAME = os.path.join(FAKE_BACKUP_ID_PART1,
FAKE_BACKUP_ID_PART2,
FAKE_BACKUP_ID)
FAKE_BACKUP_MOUNT_POINT_BASE = '/fake/mount-point-base'
FAKE_EXPORT_PATH = 'fake/export/path'
FAKE_BACKUP_POSIX_PATH = os.path.join(FAKE_BACKUP_MOUNT_POINT_BASE,
FAKE_EXPORT_PATH)
FAKE_PREFIX = 'prefix-'
FAKE_CONTAINER_ENTRIES = [FAKE_PREFIX + 'one', FAKE_PREFIX + 'two', 'three']
EXPECTED_CONTAINER_ENTRIES = [FAKE_PREFIX + 'one', FAKE_PREFIX + 'two']
FAKE_OBJECT_NAME = 'fake-object-name'
FAKE_OBJECT_PATH = os.path.join(FAKE_BACKUP_POSIX_PATH, FAKE_CONTAINER,
FAKE_OBJECT_NAME)
class PosixBackupDriverTestCase(test.TestCase):
def setUp(self):
super(PosixBackupDriverTestCase, self).setUp()
self.ctxt = context.get_admin_context()
self.override_config('backup_file_size',
FAKE_FILE_SIZE)
self.override_config('backup_sha_block_size_bytes',
FAKE_SHA_BLOCK_SIZE_BYTES)
self.override_config('backup_enable_progress_timer',
FAKE_BACKUP_ENABLE_PROGRESS_TIMER)
self.override_config('backup_posix_path',
FAKE_BACKUP_POSIX_PATH)
self.mock_object(posix, 'LOG')
self.driver = posix.PosixBackupDriver(self.ctxt)
def test_init(self):
drv = posix.PosixBackupDriver(self.ctxt)
self.assertEqual(FAKE_BACKUP_POSIX_PATH,
drv.backup_path)
def test_update_container_name_container_passed(self):
result = self.driver.update_container_name(FAKE_BACKUP, FAKE_CONTAINER)
self.assertEqual(FAKE_CONTAINER, result)
def test_update_container_na_container_passed(self):
result = self.driver.update_container_name(FAKE_BACKUP, None)
self.assertEqual(UPDATED_CONTAINER_NAME, result)
def test_put_container(self):
self.mock_object(os.path, 'exists', return_value=False)
self.mock_object(os, 'makedirs')
self.mock_object(os, 'chmod')
path = os.path.join(self.driver.backup_path, FAKE_CONTAINER)
self.driver.put_container(FAKE_CONTAINER)
os.path.exists.assert_called_once_with(path)
os.makedirs.assert_called_once_with(path)
os.chmod.assert_called_once_with(path, 0o770)
def test_put_container_already_exists(self):
self.mock_object(os.path, 'exists', return_value=True)
self.mock_object(os, 'makedirs')
self.mock_object(os, 'chmod')
path = os.path.join(self.driver.backup_path, FAKE_CONTAINER)
self.driver.put_container(FAKE_CONTAINER)
os.path.exists.assert_called_once_with(path)
self.assertEqual(0, os.makedirs.call_count)
self.assertEqual(0, os.chmod.call_count)
def test_put_container_exception(self):
self.mock_object(os.path, 'exists', return_value=False)
self.mock_object(os, 'makedirs', side_effect=OSError)
self.mock_object(os, 'chmod')
path = os.path.join(self.driver.backup_path, FAKE_CONTAINER)
self.assertRaises(OSError, self.driver.put_container,
FAKE_CONTAINER)
os.path.exists.assert_called_once_with(path)
os.makedirs.assert_called_once_with(path)
self.assertEqual(0, os.chmod.call_count)
def test_get_container_entries(self):
self.mock_object(os, 'listdir', return_value=FAKE_CONTAINER_ENTRIES)
result = self.driver.get_container_entries(FAKE_CONTAINER, FAKE_PREFIX)
self.assertEqual(EXPECTED_CONTAINER_ENTRIES, result)
def test_get_container_entries_no_list(self):
self.mock_object(os, 'listdir', return_value=[])
result = self.driver.get_container_entries(FAKE_CONTAINER, FAKE_PREFIX)
self.assertEqual([], result)
def test_get_container_entries_no_match(self):
self.mock_object(os, 'listdir', return_value=FAKE_CONTAINER_ENTRIES)
result = self.driver.get_container_entries(FAKE_CONTAINER,
FAKE_PREFIX + 'garbage')
self.assertEqual([], result)
def test_get_object_writer(self):
self.mock_object(builtins, 'open', mock.mock_open())
self.mock_object(os, 'chmod')
self.driver.get_object_writer(FAKE_CONTAINER, FAKE_OBJECT_NAME)
os.chmod.assert_called_once_with(FAKE_OBJECT_PATH, 0o660)
builtins.open.assert_called_once_with(FAKE_OBJECT_PATH, 'wb')
def test_get_object_reader(self):
self.mock_object(builtins, 'open', mock.mock_open())
self.driver.get_object_reader(FAKE_CONTAINER, FAKE_OBJECT_NAME)
builtins.open.assert_called_once_with(FAKE_OBJECT_PATH, 'rb')
def test_delete_object(self):
self.mock_object(os, 'remove')
self.driver.delete_object(FAKE_CONTAINER, FAKE_OBJECT_NAME)
@mock.patch.object(posix.timeutils, 'utcnow')
def test_generate_object_name_prefix(self, utcnow_mock):
timestamp = '20170518102205'
utcnow_mock.return_value.strftime.return_value = timestamp
backup = objects.Backup(self.ctxt, volume_id=fake.VOLUME_ID,
id=fake.BACKUP_ID)
res = self.driver._generate_object_name_prefix(backup)
expected = 'volume_%s_%s_backup_%s' % (backup.volume_id,
timestamp,
backup.id)
self.assertEqual(expected, res)
| 38.066667
| 79
| 0.687682
|
794c1ea5184b7567c398660c424a1ae553ad97a7
| 2,886
|
py
|
Python
|
dataset/data_utils/point_util.py
|
shinke-li/Campus3D
|
21768908b064d19ce8daacc2fa0a1fe9e0331514
|
[
"MIT"
] | 31
|
2020-08-10T13:34:38.000Z
|
2022-03-28T11:20:56.000Z
|
dataset/data_utils/point_util.py
|
shinke-li/Campus3D
|
21768908b064d19ce8daacc2fa0a1fe9e0331514
|
[
"MIT"
] | 6
|
2020-12-01T08:57:04.000Z
|
2022-03-14T11:13:28.000Z
|
dataset/data_utils/point_util.py
|
shinke-li/Campus3D
|
21768908b064d19ce8daacc2fa0a1fe9e0331514
|
[
"MIT"
] | 4
|
2021-01-09T05:01:50.000Z
|
2021-08-18T04:25:40.000Z
|
import numpy as np
def gen_gaussian_ball(center, radius, size):
if not isinstance(radius, np.ndarray):
radius = np.asarray([radius, radius, radius])
pts = [np.random.normal(loc=center[i], scale=radius[i], size=size) for i in range(center.shape[0])]
return np.asarray(pts).transpose()
def gen_point_cloud(high, low, center_num, size, scale=1, dim=3):
normalized_centers = np.random.rand(center_num, dim)
centers = (high - low) * normalized_centers + low
ball_pts_ratio = np.random.rand(center_num, )
ball_pts_ratio = ball_pts_ratio / np.sum(ball_pts_ratio)
ball_pts_num = (size * ball_pts_ratio).astype(np.int)
ball_pts_num[-1] = size - np.sum(ball_pts_num[:-1])
radius_sum = (high - low) * float(scale)
radius = radius_sum * ball_pts_ratio
points = []
for i in range(center_num):
points.append(gen_gaussian_ball(centers[i], radius[i], ball_pts_num[i]))
return np.clip(np.vstack(points), low, high)
class PointModifier(object):
"""
Collections of point modifying methods
Add modifying fucntion as this:
@staticmethod
def _funcname(points, arg=None, **kwargs):
new_points = some_func(point, arg, )
return new_points
Then the modify type will be 'funcname'
__init__(modify_type:(str,))
__call__(points: np.ndarray, *args, **kwargs):
Return:
modified points: np.ndarray
"""
def __init__(self, modify_types=('global_normalization', 'block_centeralization')):
self.funcs = [getattr(self, '_' + m) for m in modify_types]
self.shape = len(modify_types) * 3
def __call__(self, points, *args, **kwargs):
points_list = []
for i, func in enumerate(self.funcs):
arg = args[i] if i < len(args) else None
points_list.append(func(points, arg, **kwargs))
return np.concatenate(points_list, axis=-1)
@staticmethod
def _centeralization(points, arg=None, **kwargs):
if arg is None:
arg = kwargs['center']
return points - arg
@staticmethod
def _global_normalization(points, arg=None, **kwargs):
if arg is None:
arg = (kwargs['max_bounds'], kwargs['min_bounds'])
min_bounds, max_bounds = arg
bounds = max_bounds - min_bounds
return (points - min_bounds) / bounds
@staticmethod
def _block_centeralization(points, arg=None, **kwargs):
if arg is None:
arg = (kwargs['block_size_x'], kwargs['block_size_y'])
block_size_x, block_size_y = arg
box_min = np.min(points, axis=0)
shift = np.array([box_min[0] + block_size_x/ 2,
box_min[1] + block_size_y / 2,
box_min[2]])
return points - shift
@staticmethod
def _raw(points, arg, **kwargs):
return points
| 35.195122
| 103
| 0.62578
|
794c1f9e0ba882f15b04b46a45de2c0a4d90605e
| 112
|
py
|
Python
|
matrix_registration/__init__.py
|
grinapo/matrix-registration
|
2f444b0cb52dfd1ec9a992939e01b5363369c5df
|
[
"MIT"
] | null | null | null |
matrix_registration/__init__.py
|
grinapo/matrix-registration
|
2f444b0cb52dfd1ec9a992939e01b5363369c5df
|
[
"MIT"
] | null | null | null |
matrix_registration/__init__.py
|
grinapo/matrix-registration
|
2f444b0cb52dfd1ec9a992939e01b5363369c5df
|
[
"MIT"
] | null | null | null |
from . import api
from . import tokens
from . import config
__version__ = '0.5.6'
name = 'matrix_registration'
| 16
| 28
| 0.732143
|
794c2140e338821a876737321fd396aad3ffe4bd
| 2,180
|
py
|
Python
|
scripts/parse_tlds.py
|
forestmonster/vimium-c
|
950b6a544844970d011f17d534468c1c840d2429
|
[
"MIT"
] | null | null | null |
scripts/parse_tlds.py
|
forestmonster/vimium-c
|
950b6a544844970d011f17d534468c1c840d2429
|
[
"MIT"
] | null | null | null |
scripts/parse_tlds.py
|
forestmonster/vimium-c
|
950b6a544844970d011f17d534468c1c840d2429
|
[
"MIT"
] | null | null | null |
#!/usr/bin/env python3
FILE = "public_suffix_list.dat"
URL = "https://publicsuffix.org/list/" + FILE
DISABLED_TLDS = (
"exe", "pdf", "zip",
)
import sys
from imp import reload
reload(sys)
if hasattr(sys, "setdefaultencoding"):
sys.setdefaultencoding('utf-8')
else:
import codecs
sys.stdout = codecs.getwriter('utf8')(sys.stdout.detach())
import os, os.path as osp, re
FILE = list(filter(osp.exists, (
FILE,
osp.join("scripts", FILE),
FILE + ".txt",
osp.join("scripts", FILE + ".txt"),
)))
FILE = FILE and FILE[0]
if FILE:
fp = open(FILE, "rb")
else:
import urllib.request
fp = urllib.request.urlopen(URL)
with fp:
lines = [line.strip().decode("utf-8") for line in fp]
lines = list(line
for line in lines
if line
and line[0] != "#"
and line[0:2] != "//"
)
tlds = set(suffix.split(".")[-1] for suffix in lines)
tlds -= set(DISABLED_TLDS)
tlds = list(("%02d%s" % (len(i), i), i) for i in tlds)
tlds.sort(key=lambda i: i[0])
tlds = list(i[1] for i in tlds)
prefix, tail = ' , "', ' // char[%d][%d]'
format = '%s%s"%s'
for isEn in (True, False):
print('BgUtils_.%s = [""' % ('_tlds' if isEn else '_nonENTlds'))
i, count, len_tld, line, len_line = "", 0, 2, "", len(prefix)
for i in tlds:
if (re.match(r'^[\dA-Za-z]+\Z', i) is not None) != isEn:
continue
leni = len(i)
if leni > len_tld:
print(format % (prefix, line, tail % (count, len_tld) if count else ""))
line = ''
while len_tld + 2 < leni:
len_tld += 1
line += '", "'
if line:
print(format % (prefix, line, ""))
count, len_tld, line, len_line = 0, leni, "", len(prefix)
len_line += leni + 1
if len_line > (120 if isEn else 80):
line += "\\\n." + i
len_line = leni + 1
else:
line += '.' + i
count += 1
if count > 0:
print(format % (prefix, line, tail % (count, len_tld) if count else ""))
count = 0
print("];")
if isEn:
print("")
#from IPython import embed; embed()
| 27.25
| 84
| 0.525229
|
794c2220001b523bceb2ffd49693a23b5ad3039d
| 8,676
|
py
|
Python
|
ssd_gan.py
|
cyq373/SSD-GAN
|
9dc956fd79cc2b21492fcc9bf1e4cdc5b276bdaf
|
[
"MIT"
] | 29
|
2020-12-10T09:55:51.000Z
|
2022-03-08T13:06:38.000Z
|
ssd_gan.py
|
nianweijie/SSD-GAN
|
9dc956fd79cc2b21492fcc9bf1e4cdc5b276bdaf
|
[
"MIT"
] | 2
|
2021-03-01T13:56:03.000Z
|
2021-03-23T08:23:45.000Z
|
ssd_gan.py
|
nianweijie/SSD-GAN
|
9dc956fd79cc2b21492fcc9bf1e4cdc5b276bdaf
|
[
"MIT"
] | 7
|
2021-06-28T02:47:23.000Z
|
2022-03-13T01:19:41.000Z
|
"""
Implementation of Base SSD-GAN models.
"""
import torch
from torch_mimicry.nets.basemodel import basemodel
from torch_mimicry.modules import losses
import numpy as np
class SSD_Generator(basemodel.BaseModel):
r"""
Base class for a generic unconditional generator model.
Attributes:
nz (int): Noise dimension for upsampling.
ngf (int): Variable controlling generator feature map sizes.
bottom_width (int): Starting width for upsampling generator output to an image.
loss_type (str): Name of loss to use for GAN loss.
"""
def __init__(self, nz, ngf, bottom_width, loss_type, **kwargs):
super().__init__(**kwargs)
self.nz = nz
self.ngf = ngf
self.bottom_width = bottom_width
self.loss_type = loss_type
# def generate_images(self, netG, num_images, device=None):
def generate_images(self, num_images, device=None):
r"""
Generates num_images randomly.
Args:
num_images (int): Number of images to generate
device (torch.device): Device to send images to.
Returns:
Tensor: A batch of generated images.
"""
if device is None:
device = self.device
noise = torch.randn((num_images, self.nz), device=device)
# fake_images = netG.forward(noise)
fake_images = self.forward(noise)
return fake_images
def compute_gan_loss(self, output):
r"""
Computes GAN loss for generator.
Args:
output (Tensor): A batch of output logits from the discriminator of shape (N, 1).
Returns:
Tensor: A batch of GAN losses for the generator.
"""
# Compute loss and backprop
if self.loss_type == "gan":
errG = losses.minimax_loss_gen(output)
elif self.loss_type == "ns":
errG = losses.ns_loss_gen(output)
elif self.loss_type == "hinge":
errG = losses.hinge_loss_gen(output)
elif self.loss_type == "wasserstein":
errG = losses.wasserstein_loss_gen(output)
else:
raise ValueError("Invalid loss_type {} selected.".format(
self.loss_type))
return errG
def train_step(self,
real_batch,
netD,
optG,
log_data,
device=None,
global_step=None,
**kwargs):
r"""
Takes one training step for G.
Args:
real_batch (Tensor): A batch of real images of shape (N, C, H, W).
Used for obtaining current batch size.
netD (nn.Module): Discriminator model for obtaining losses.
optG (Optimizer): Optimizer for updating generator's parameters.
log_data (dict): A dict mapping name to values for logging uses.
device (torch.device): Device to use for running the model.
global_step (int): Variable to sync training, logging and checkpointing.
Useful for dynamic changes to model amidst training.
Returns:
Returns MetricLog object containing updated logging variables after 1 training step.
"""
self.zero_grad()
# Get only batch size from real batch
batch_size = real_batch[0].shape[0]
# Produce fake images
fake_images = self.generate_images(num_images=batch_size,
device=device)
# Compute output logit of D thinking image real
out_spectral, out_spatial = netD(fake_images)
# Compute loss
out = 0.5 * out_spectral.detach() + 0.5 * out_spatial
errG = self.compute_gan_loss(out)
# Backprop and update gradients
errG.backward()
optG.step()
# Log statistics
log_data.add_metric('errG', errG, group='loss')
return log_data
class SSD_Discriminator(basemodel.BaseModel):
r"""
Base class for a generic unconditional discriminator model.
Attributes:
ndf (int): Variable controlling discriminator feature map sizes.
loss_type (str): Name of loss to use for GAN loss.
"""
def __init__(self, ndf, loss_type, **kwargs):
super().__init__(**kwargs)
self.ndf = ndf
self.loss_type = loss_type
def compute_gan_loss(self, output_real, output_fake):
r"""
Computes GAN loss for discriminator.
Args:
output_real (Tensor): A batch of output logits of shape (N, 1) from real images.
output_fake (Tensor): A batch of output logits of shape (N, 1) from fake images.
Returns:
errD (Tensor): A batch of GAN losses for the discriminator.
"""
# Compute loss for D
if self.loss_type == "gan" or self.loss_type == "ns":
errD = losses.minimax_loss_dis(output_fake=output_fake,
output_real=output_real)
elif self.loss_type == "hinge":
errD = losses.hinge_loss_dis(output_fake=output_fake,
output_real=output_real)
elif self.loss_type == "wasserstein":
errD = losses.wasserstein_loss_dis(output_fake=output_fake,
output_real=output_real)
else:
raise ValueError("Invalid loss_type selected.")
return errD
def compute_probs(self, output_real, output_fake):
r"""
Computes probabilities from real/fake images logits.
Args:
output_real (Tensor): A batch of output logits of shape (N, 1) from real images.
output_fake (Tensor): A batch of output logits of shape (N, 1) from fake images.
Returns:
tuple: Average probabilities of real/fake image considered as real for the batch.
"""
D_x = torch.sigmoid(output_real).mean().item()
D_Gz = torch.sigmoid(output_fake).mean().item()
return D_x, D_Gz
def train_step(self,
real_batch,
netG,
optD,
log_data,
device=None,
global_step=None,
**kwargs):
r"""
Takes one training step for D.
Args:
real_batch (Tensor): A batch of real images of shape (N, C, H, W).
loss_type (str): Name of loss to use for GAN loss.
netG (nn.Module): Generator model for obtaining fake images.
optD (Optimizer): Optimizer for updating discriminator's parameters.
device (torch.device): Device to use for running the model.
log_data (dict): A dict mapping name to values for logging uses.
global_step (int): Variable to sync training, logging and checkpointing.
Useful for dynamic changes to model amidst training.
Returns:
MetricLog: Returns MetricLog object containing updated logging variables after 1 training step.
"""
self.zero_grad()
real_images, real_labels = real_batch
batch_size = real_images.shape[0] # Match batch sizes for last iter
# Produce logits for real images
out_spectral_real, out_spatial_real = self.forward(real_images)
# Produce fake images
fake_images = netG.generate_images(num_images=batch_size,
device=device).detach()
# Produce logits for fake images
out_spectral_fake, out_spatial_fake = self.forward(fake_images)
# Compute loss for D
errC = self.compute_gan_loss(output_real=out_spectral_real,
output_fake=out_spectral_fake)
out_real = 0.5 * out_spectral_real.detach() + 0.5 * out_spatial_real
out_fake = 0.5 * out_spectral_fake.detach() + 0.5 * out_spatial_fake
errD = self.compute_gan_loss(output_real=out_real,
output_fake=out_fake)
# Backprop and update gradients
errD_total = errD + errC
errD_total.backward()
optD.step()
# Compute probabilities
D_x, D_Gz = out_real.mean().item(), out_fake.mean().item()
# Log statistics for D once out of loop
log_data.add_metric('errD', errD.item(), group='loss')
log_data.add_metric('errC', errC.item(), group='loss')
log_data.add_metric('D(x)', D_x, group='prob')
log_data.add_metric('D(G(z))', D_Gz, group='prob')
return log_data
| 34.983871
| 107
| 0.59071
|
794c2242585f1facaf790fd7d95c5e70f3b72ecd
| 1,309
|
py
|
Python
|
src/resource-graph/azext_resourcegraph/__init__.py
|
Mannan2812/azure-cli-extensions
|
e2b34efe23795f6db9c59100534a40f0813c3d95
|
[
"MIT"
] | 2
|
2021-06-05T17:51:26.000Z
|
2021-11-17T11:17:56.000Z
|
src/resource-graph/azext_resourcegraph/__init__.py
|
Mannan2812/azure-cli-extensions
|
e2b34efe23795f6db9c59100534a40f0813c3d95
|
[
"MIT"
] | 3
|
2020-05-27T20:16:26.000Z
|
2020-07-23T19:46:49.000Z
|
src/resource-graph/azext_resourcegraph/__init__.py
|
Mannan2812/azure-cli-extensions
|
e2b34efe23795f6db9c59100534a40f0813c3d95
|
[
"MIT"
] | 5
|
2020-05-09T17:47:09.000Z
|
2020-10-01T19:52:06.000Z
|
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
from azure.cli.core import AzCommandsLoader
from ._help import helps # pylint: disable=unused-import
class ResourceGraphCommandsLoader(AzCommandsLoader):
def __init__(self, cli_ctx=None):
from azure.cli.core.commands import CliCommandType
from ._client_factory import cf_resource_graph
resource_graph_custom = CliCommandType(
operations_tmpl='azext_resourcegraph.custom#{}',
client_factory=cf_resource_graph
)
super(ResourceGraphCommandsLoader, self).__init__(
cli_ctx=cli_ctx,
custom_command_type=resource_graph_custom
)
def load_command_table(self, args):
from .commands import load_command_table
load_command_table(self, args)
return self.command_table
def load_arguments(self, command):
from ._params import load_arguments
load_arguments(self, command)
COMMAND_LOADER_CLS = ResourceGraphCommandsLoader
| 36.361111
| 94
| 0.621085
|
794c228ac0fd8102ab33c8f9e1c0b51f4eba6537
| 178
|
py
|
Python
|
src/utils/enum/flash_message_category.py
|
lokaimoma/Flask-QR-Code-Web-APP
|
5789753757aa1939119a799cbc6bda023ea75bbc
|
[
"MIT"
] | 2
|
2022-03-05T18:54:15.000Z
|
2022-03-24T12:19:22.000Z
|
src/utils/enum/flash_message_category.py
|
lokaimoma/Flask-QR-Code-Web-APP
|
5789753757aa1939119a799cbc6bda023ea75bbc
|
[
"MIT"
] | null | null | null |
src/utils/enum/flash_message_category.py
|
lokaimoma/Flask-QR-Code-Web-APP
|
5789753757aa1939119a799cbc6bda023ea75bbc
|
[
"MIT"
] | null | null | null |
# Created by Kelvin_Clark on 3/5/2022, 7:35 PM
from enum import Enum
class FlashMessageCategory(str, Enum):
SUCCESS = "success"
WARNING = "warning"
ERROR = "error"
| 19.777778
| 46
| 0.685393
|
794c2307696705e76a52d58092082a0f9416e002
| 2,696
|
py
|
Python
|
emoji_model.py
|
AWIS99/Emojinator
|
bc331eba1b37520e54103a7d542e2fc9ec3a0115
|
[
"MIT"
] | 3
|
2020-04-14T15:37:42.000Z
|
2020-04-27T19:54:08.000Z
|
emoji_model.py
|
AWIS99/Emojinator
|
bc331eba1b37520e54103a7d542e2fc9ec3a0115
|
[
"MIT"
] | null | null | null |
emoji_model.py
|
AWIS99/Emojinator
|
bc331eba1b37520e54103a7d542e2fc9ec3a0115
|
[
"MIT"
] | null | null | null |
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.utils import np_utils
from keras.models import Sequential
from keras.callbacks import ModelCheckpoint
import pandas as pd
data = pd.read_csv("/home/awis/Desktop/train_foo.csv")
dataset = np.array(data)
np.random.shuffle(dataset)
X = dataset
Y = dataset
X = X[:, 1:2501]
Y = Y[:, 0]
X_train = X[0:12000, :]
X_train = X_train / 255.
X_test = X[12000:13201, :]
X_test = X_test / 255.
Y = Y.reshape(Y.shape[0], 1)
Y_train = Y[0:12000, :]
Y_train = Y_train.T
Y_test = Y[12000:13201, :]
Y_test = Y_test.T
print("number of training examples = " + str(X_train.shape[0]))
print("number of test examples = " + str(X_test.shape[0]))
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))
image_x = 50
image_y = 50
train_y = np_utils.to_categorical(Y_train)
test_y = np_utils.to_categorical(Y_test)
train_y = train_y.reshape(train_y.shape[1], train_y.shape[2])
test_y = test_y.reshape(test_y.shape[1], test_y.shape[2])
X_train = X_train.reshape(X_train.shape[0], 50, 50, 1)
X_test = X_test.reshape(X_test.shape[0], 50, 50, 1)
print("X_train shape: " + str(X_train.shape))
print("X_test shape: " + str(X_test.shape))
print("train_y shape: " + str(train_y.shape))
def keras_model(image_x, image_y):
num_of_classes = 12
model = Sequential()
model.add(Conv2D(32, (5, 5), input_shape=(image_x, image_y, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))
model.add(Conv2D(64, (5, 5), activation='sigmoid'))
model.add(MaxPooling2D(pool_size=(5, 5), strides=(5, 5), padding='same'))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.6))
model.add(Dense(num_of_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
filepath = "emojinator.h5"
checkpoint1 = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint1]
return model, callbacks_list
model, callbacks_list = keras_model(image_x, image_y)
model.fit(X_train, train_y, validation_data=(X_test, test_y), epochs=10, batch_size=64,callbacks=callbacks_list)
scores = model.evaluate(X_test, test_y, verbose=0)
print("CNN Error: %.2f%%" % (100 - scores[1] * 100))
model.save('emojinator.h5')
| 35.946667
| 112
| 0.720326
|
794c235193b9beef33b65724b017153eead2609a
| 1,745
|
py
|
Python
|
dynamicserialize/dstypes/com/raytheon/uf/common/dataplugin/gfe/request/ExportGridsRequest.py
|
srcarter3/python-awips
|
d981062662968cf3fb105e8e23d955950ae2497e
|
[
"BSD-3-Clause"
] | 33
|
2016-03-17T01:21:18.000Z
|
2022-02-08T10:41:06.000Z
|
dynamicserialize/dstypes/com/raytheon/uf/common/dataplugin/gfe/request/ExportGridsRequest.py
|
srcarter3/python-awips
|
d981062662968cf3fb105e8e23d955950ae2497e
|
[
"BSD-3-Clause"
] | 15
|
2016-04-19T16:34:08.000Z
|
2020-09-09T19:57:54.000Z
|
dynamicserialize/dstypes/com/raytheon/uf/common/dataplugin/gfe/request/ExportGridsRequest.py
|
Unidata/python-awips
|
8459aa756816e5a45d2e5bea534d23d5b1dd1690
|
[
"BSD-3-Clause"
] | 20
|
2016-03-12T01:46:58.000Z
|
2022-02-08T06:53:22.000Z
|
#
# A pure python implementation of com.raytheon.uf.common.dataplugin.gfe.request.ExportGridsRequest
# for use by the python implementation of DynamicSerialize.
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 04/05/13 dgilling Initial Creation.
#
#
#
from dynamicserialize.dstypes.com.raytheon.uf.common.dataplugin.gfe.request import AbstractGfeRequest
class ExportGridsRequest(AbstractGfeRequest):
def __init__(self):
super(ExportGridsRequest, self).__init__()
self.site = None
self.mode = None
def getSite(self):
return self.site
def setSite(self, site):
self.site = site
def getMode(self):
return self.mode
def setMode(self, mode):
validValues = ['CRON', 'MANUAL', 'GRIB2']
inputVal = str(mode).upper()
if inputVal in validValues:
self.mode = mode
else:
raise ValueError(inputVal + " invalid ExportGridsMode. Must be " + str(validValues))
def __str__(self):
retVal = "ExportGridsRequest["
retVal += "wokstationID: " + str(self.workstationID) + ", "
retVal += "siteID: " + str(self.siteID) + ", "
retVal += "site: " + str(self.site) + ", "
retVal += "mode: " + str(self.mode) + "]"
return retVal
def __repr__(self):
retVal = "ExportGridsRequest("
retVal += "wokstationID=" + repr(self.workstationID) + ", "
retVal += "siteID=" + repr(self.siteID) + ", "
retVal += "site=" + repr(self.site) + ", "
retVal += "mode=" + repr(self.mode) + ")"
return retVal
| 30.614035
| 101
| 0.561605
|
794c23942a1f67848ff65e1d32a360602014740e
| 4,238
|
py
|
Python
|
microsoftml/202/plot_grid_search.py
|
wjones30309/ML-Server-Python-Samples
|
975da57979dcd9c63c79d9452277cc27c175b875
|
[
"CC-BY-4.0",
"MIT"
] | 32
|
2017-09-25T18:58:22.000Z
|
2019-04-17T12:55:29.000Z
|
microsoftml/202/plot_grid_search.py
|
wjones30309/ML-Server-Python-Samples
|
975da57979dcd9c63c79d9452277cc27c175b875
|
[
"CC-BY-4.0",
"MIT"
] | 1
|
2019-05-09T13:55:01.000Z
|
2019-05-09T20:21:54.000Z
|
microsoftml/202/plot_grid_search.py
|
wjones30309/ML-Server-Python-Samples
|
975da57979dcd9c63c79d9452277cc27c175b875
|
[
"CC-BY-4.0",
"MIT"
] | 48
|
2019-05-21T21:29:51.000Z
|
2021-10-08T02:59:43.000Z
|
"""
Grid Search
===========
All learners have what we call
`hyperparameters <https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)>`_
which impact the way a model is trained. Most of the time, they have a default
value which works on most of the datasets but that does not mean that's the best
possible value for the current dataset. Let's see how to choose them
on a the following dataset
`Epileptic Seizure Recognition Data Set <https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition>`_.
.. contents::
:local:
The data
--------
"""
import matplotlib.pyplot as plt
import pandas
import os
here = os.path.dirname(__file__) if "__file__" in locals() else "."
data_file = os.path.join(here, "data", "epilepsy", "data.csv")
data = pandas.read_csv(data_file, sep=",")
data["y"] = data["y"].astype("category")
print(data.head(2))
print(data.shape)
#########################################
# The variable of interest is ``y``.
print(set(data["y"]))
###################"
# 1 is epilepsy, 2-5 is not and the distribution is uniform.
# We convert that problem into a binary classification problem,
# 1 for epileptic, 0 otherwise.
data["y"] = data["y"].apply(lambda x: 1 if x == 1 else 0)
print(data[["y", "X1"]].groupby("y").count())
##########################
# We split into train/test.
try:
from sklearn.model_selection import train_test_split
except ImportError:
from sklearn.cross_validation import train_test_split
train, test = train_test_split(data)
###################
# First model
# -----------
#
# Let's fit a logistic regression.
import numpy as np
from microsoftml import rx_fast_trees, rx_predict
features = [c for c in train.columns if c.startswith("X")]
model = rx_fast_trees("y ~ " + "+".join(features), data=train)
pred = rx_predict(model, test, extra_vars_to_write=["y"])
print(pred.head())
#################
# Let's compute the confusion matrix.
from sklearn.metrics import confusion_matrix
conf = confusion_matrix(pred["y"], pred["PredictedLabel"])
print(conf)
########################
# The prediction is quite accurate. Let's see if we can improve it.
#
# Optimize hyperparameters
# ------------------------
#
# We split the training set into a smaller train set and a smaller test set.
# The dataset is then split into three buckets: A, B, C. A is used to train a model,
# B is used to optimize hyperparameters, C is used to validate.
# We define a function which train and test on buckets A, B.
def train_test_hyperparameter(trainA, trainB, **hyper):
# Train a model and
features = [c for c in train.columns if c.startswith("X")]
model = rx_fast_trees("y ~ " + "+".join(features), data=trainA, verbose=0, **hyper)
pred = rx_predict(model, trainB, extra_vars_to_write=["y"])
conf = confusion_matrix(pred["y"], pred["PredictedLabel"])
return (conf[0,0] + conf[1,1]) / conf.sum()
#############################
# We look into one parameter *num_leaves* to see if the number of trees impacts
# the performance.
trainA, trainB = train_test_split(train)
hyper_values = [5, 10, 15, 20, 25, 30, 35, 40, 50, 100, 200]
perfs = []
for val in hyper_values:
acc = train_test_hyperparameter(trainA, trainB, num_leaves=val)
perfs.append(acc)
print("-- Training with hyper={0} performance={1}".format(val, acc))
#########################################
# We finally plot the curve.
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
ax.plot(hyper_values, perfs, "o-")
ax.set_xlabel("num_leaves")
ax.set_ylabel("% correctly classified")
###################################
# Let's choose the best value, we check our finding on the test sets
# after we train of the whole training set.
tries = max(zip(perfs, hyper_values))
print("max={0}".format(tries))
model = rx_fast_trees("y ~ " + "+".join(features), data=train, num_leaves=tries[1])
pred = rx_predict(model, test, extra_vars_to_write=["y"])
conf = confusion_matrix(pred["y"], pred["PredictedLabel"])
print(conf)
########################
# The process we followed relies on one training per
# value of the hyperparameter. This could be improved by
# running `cross validation <https://en.wikipedia.org/wiki/Cross-validation_(statistics)>`_
# and each of them.
| 31.392593
| 115
| 0.662341
|
794c247ca6a76f5a59594360b5cbfaed7761d61e
| 1,361
|
py
|
Python
|
src/main/resources/bitbucket/BitbucketTask.py
|
chandanaseshagiri1/xlr-bitbucket-plugin
|
0f0b4e504e3e3a35ddf8e7aaab7be5e5fb7673f3
|
[
"MIT"
] | null | null | null |
src/main/resources/bitbucket/BitbucketTask.py
|
chandanaseshagiri1/xlr-bitbucket-plugin
|
0f0b4e504e3e3a35ddf8e7aaab7be5e5fb7673f3
|
[
"MIT"
] | null | null | null |
src/main/resources/bitbucket/BitbucketTask.py
|
chandanaseshagiri1/xlr-bitbucket-plugin
|
0f0b4e504e3e3a35ddf8e7aaab7be5e5fb7673f3
|
[
"MIT"
] | 1
|
2020-02-26T19:21:33.000Z
|
2020-02-26T19:21:33.000Z
|
#
# Copyright 2020 XEBIALABS
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
from bitbucket.Bitbucket import BitbucketClient
bitbucket = BitbucketClient.get_client(server, username, password)
method = str(task.getTaskType()).lower().replace('.', '_')
call = getattr(bitbucket, method)
response = call(locals())
for key,value in response.items():
locals()[key] = value
| 71.631579
| 462
| 0.784717
|
794c25a461689de85f52b5a7a94017d583d9ae0b
| 5,185
|
py
|
Python
|
docs/source/conf.py
|
ricsinaruto/ParlAI
|
733b627ae456d6b11a2fc4624088a781bc6c1d03
|
[
"MIT"
] | 24
|
2019-09-16T00:10:54.000Z
|
2021-09-08T19:31:51.000Z
|
docs/source/conf.py
|
ricsinaruto/ParlAI
|
733b627ae456d6b11a2fc4624088a781bc6c1d03
|
[
"MIT"
] | 3
|
2021-03-11T06:04:15.000Z
|
2021-08-31T15:44:42.000Z
|
docs/source/conf.py
|
ricsinaruto/ParlAI
|
733b627ae456d6b11a2fc4624088a781bc6c1d03
|
[
"MIT"
] | 7
|
2019-09-16T02:37:31.000Z
|
2021-09-01T06:06:17.000Z
|
# -*- coding: utf-8 -*-
#
# Copyright (c) Facebook, Inc. and its affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
#
# ParlAI documentation build configuration file, created by
# sphinx-quickstart on Wed Apr 19 15:46:54 2017.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
import sphinx_rtd_theme
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.githubpages'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'ParlAI'
copyright = '2018, Facebook AI Research'
author = 'Facebook AI Research'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = ''
# The full version, including alpha/beta/rc tags.
release = ''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
autodoc_default_options = {
'exclude-members': '__dict__,__weakref__',
'special-members': '__init__',
'member-order': 'bysource',
'show-inheritance': True,
}
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_style = 'css/parlai_theme.css'
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'ParlAIdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'ParlAI.tex', 'ParlAI Documentation',
'FAIR', 'manual'),
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'parlai', 'ParlAI Documentation',
[author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'ParlAI', 'ParlAI Documentation',
author, 'ParlAI', 'One line description of project.',
'Miscellaneous'),
]
| 29.798851
| 79
| 0.684667
|
794c26ac714a0d7f673c9f6fc082e366d93a4965
| 4,336
|
py
|
Python
|
image_analyzer.py
|
openjamoses/docker-analyser
|
2ca2cda0c78c90544cf7cc857eb5c99c77603347
|
[
"Apache-2.0"
] | null | null | null |
image_analyzer.py
|
openjamoses/docker-analyser
|
2ca2cda0c78c90544cf7cc857eb5c99c77603347
|
[
"Apache-2.0"
] | null | null | null |
image_analyzer.py
|
openjamoses/docker-analyser
|
2ca2cda0c78c90544cf7cc857eb5c99c77603347
|
[
"Apache-2.0"
] | null | null | null |
import argparse
import csv
import json
import os
from pathlib import Path
from analysis.Analyse import DockerImage
from analysis.DockerOptions import ImageOptions
from parser.file_contents import FileManagement
from parser.parser import Parser
parser = argparse.ArgumentParser()
parser.add_argument('-d', '--dir', help="search directory")
parser.add_argument('-r', '--repo', help="repo org/name")
parser.add_argument('-v', '--repoversion', help="repo version")
parser.add_argument('-t', '--testing', help="run as testing", nargs="?", const="tangent")
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
def run_main(repo_dir,repo_name, repo_version, save_image=False):
repo_version = str(repo_version).replace("b'",'')
docker_files = FileManagement.list_project_docker_files(repo_dir)
path_output = ROOT_DIR+'/outputs/csv/'
if not os.path.exists(ROOT_DIR+'/outputs/csv'):
os.makedirs(ROOT_DIR+'/outputs/csv')
if not os.path.exists(path_output + 'results_instruction.csv'):
data_file = open(path_output + 'results_instruction.csv', mode='w', newline='', encoding='utf-8')
data_writer = csv.writer(data_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
data_writer.writerow(
['repos', 'repo_version', 'instructions', 'instructions_count'])
## Second file
data_file2 = open(path_output + 'results_options.csv', mode='w', newline='', encoding='utf-8')
data_writer2 = csv.writer(data_file2, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
data_writer2.writerow(
['repos', 'repo_version', 'option', 'option_count', 'category'])
else:
data_file = open(path_output + 'results_instruction.csv', mode='a+', newline='', encoding='utf-8')
data_writer = csv.writer(data_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
## Second file
data_file2 = open(path_output + 'results_options.csv', mode='a+', newline='', encoding='utf-8')
data_writer2 = csv.writer(data_file2, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
# , repo_dir, repo_name, repo_version
docker_image_contents = FileManagement.get_file_contents(docker_files)
imageOptions = ImageOptions()
parser = Parser()
for file_, source in docker_image_contents.items():
dockerImage = DockerImage()
#print (type(source))
#print (source)
parser.content = source
json_data = json.loads(parser.json)
dockerImage.analyse(json_data)
instructions_dict = dockerImage.instructions_dict
instructions_options_dict = dockerImage.instructions_options_dict
for key, val in instructions_dict.items():
data_writer.writerow(
[repo_name, repo_version, key, val])
for key, val in instructions_options_dict.items():
data_writer2.writerow(
[repo_name, repo_version, key, val, imageOptions.get_category(key)])
if save_image:
file_name = str(file_).split('/')[len(str(file_).split('/'))-1]
if not os.path.exists(ROOT_DIR+'/outputs/dockerfiles'):
os.makedirs(ROOT_DIR+'/outputs/dockerfiles')
if not os.path.exists(ROOT_DIR+'/outputs/dockerfiles/'+repo_name.split('/')[1]):
os.makedirs(ROOT_DIR+'/outputs/dockerfiles/'+repo_name.split('/')[1])
if '/' in repo_version:
repo_version = repo_version.replace('/', '_')
with open(ROOT_DIR+'/outputs/dockerfiles/'+str(repo_name.split('/')[1])+'/'+file_name+'_'+str(repo_version), "w") as _file:
_file.write(source)
data_file.close()
data_file2.close()
def exit_if_invalid_args(args):
if args.dir is None or os.path.isfile(args.dir):
raise SystemExit("ERROR: -d --dir arg should be directory.")
if args.repo is None:
raise SystemExit("ERROR: -r --repo arg should be repo org/name.")
if args.repoversion is None:
raise SystemExit("ERROR: -v --repoversion arg should be repo release version.")
if __name__ == "__main__":
args = parser.parse_args()
exit_if_invalid_args(args)
repo_dir = os.path.abspath(args.dir)
repo_name = args.repo
repo_version = args.repoversion
run_main(repo_dir,repo_name, repo_version)
| 48.719101
| 135
| 0.66928
|
794c270480518a5972bacbf1fab22b31555a5b76
| 475
|
py
|
Python
|
4/oop4.py
|
ikramulkayes/Python_season2
|
d057460d07c5d2d218ecd52e08c1d355add44df2
|
[
"MIT"
] | null | null | null |
4/oop4.py
|
ikramulkayes/Python_season2
|
d057460d07c5d2d218ecd52e08c1d355add44df2
|
[
"MIT"
] | null | null | null |
4/oop4.py
|
ikramulkayes/Python_season2
|
d057460d07c5d2d218ecd52e08c1d355add44df2
|
[
"MIT"
] | null | null | null |
class Cat:
def __init__(self,color = "White",state= "sitting"):
self.color = color
self.state = state
def changeColor(self,color):
self.color = color
def printCat(self):
print(f"{self.color} cat is {self.state}")
c1 = Cat()
c2 = Cat("Black")
c3 = Cat("Brown", "jumping")
c4 = Cat("Red", "purring")
c1.printCat()
c2.printCat()
c3.printCat()
c4.printCat()
c1.changeColor("Blue")
c3.changeColor("Purple")
c1.printCat()
c3.printCat()
| 21.590909
| 56
| 0.625263
|
794c2718987bb3e64661920aa8099d43881032d5
| 5,768
|
py
|
Python
|
melime/explainers/local_models/local_model_linear.py
|
elian204/melime
|
aef885fa4b6b02f7bf7294140d78a85fe546b622
|
[
"MIT"
] | 48
|
2020-09-15T02:26:46.000Z
|
2021-09-03T17:08:53.000Z
|
melime/explainers/local_models/local_model_linear.py
|
elian204/melime
|
aef885fa4b6b02f7bf7294140d78a85fe546b622
|
[
"MIT"
] | 1
|
2020-11-03T04:14:27.000Z
|
2020-11-05T16:32:25.000Z
|
melime/explainers/local_models/local_model_linear.py
|
elian204/melime
|
aef885fa4b6b02f7bf7294140d78a85fe546b622
|
[
"MIT"
] | 3
|
2020-09-20T16:52:11.000Z
|
2021-09-25T10:04:27.000Z
|
import numpy as np
from sklearn import metrics
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor, Ridge, HuberRegressor
from melime.explainers.local_models.local_model_base import LocalModelBase
def transformer_identity(x):
return x
class LocalModelLinear(LocalModelBase):
def __init__(
self,
x_explain,
chi_explain,
y_p_explain,
feature_names,
target_names,
class_index,
r,
tol_importance=0.001,
tol_error=None,
scale_data=False,
save_samples=False,
):
super().__init__(
x_explain,
chi_explain,
y_p_explain,
feature_names,
target_names,
class_index,
r,
tol_importance,
tol_error,
scale_data,
save_samples,
)
self.model = None
self.mse_error = 2.0 * self.tol_importance
def measure_errors(self, y_true, y_p_local_model):
return metrics.mean_squared_error(y_true=y_true, y_pred=y_p_local_model)
def measure_importances(self):
return self._measure_convergence_importance(self.model.coef_)
def predict(self, x):
x = self.scaler.transform(x)
return self.model.predict(x)
@property
def importance(self):
return self.model.coef_
class SGDRegressorMod(LocalModelLinear):
def __init__(
self,
x_explain,
chi_explain,
y_p_explain,
feature_names,
target_names,
class_index,
r,
tol_importance=0.001,
tol_error=0.001,
scale_data=False,
save_samples=False,
grid_search=False,
l1_ratio=0.15,
max_iter=10000,
tol=0.001,
learning_rate="adaptive",
eta0=0.001,
early_stopping=False,
n_iter_no_change=10000,
average=False,
**kwargs
):
super().__init__(
x_explain,
chi_explain,
y_p_explain,
feature_names,
target_names,
class_index,
r,
tol_importance,
tol_error,
scale_data,
save_samples,
)
self.model = SGDRegressor(
l1_ratio=l1_ratio,
alpha=0.001,
max_iter=max_iter,
tol=1.0e-3,
learning_rate=learning_rate,
eta0=eta0,
n_iter_no_change=n_iter_no_change,
early_stopping=early_stopping,
average=average,
warm_start=True,
**kwargs
)
self.grid_search = grid_search
def partial_fit(self, x_set, y_set, weight_set=None):
super().partial_fit(x_set, y_set, weight_set)
self.scaler.partial_fit(x_set)
x_set = self.scaler.transform(x_set)
if self.grid_search:
self.grid_search = False
parameters = {
"alpha": 10.0 ** (-np.arange(2, 7)),
"eta0": 1,
"loss": ["squared_loss", "huber", "epsilon_insensitive"],
}
grid_search = GridSearchCV(model, parameters, n_jobs=-1)
grid_search.fit(x_train, y_train)
self.model.partial_fit(x_set, y_set, sample_weight=weight_set)
# self.model.fit(x_set, y_set, sample_weight=weight_set)
class RidgeMod(LocalModelLinear):
def __init__(
self,
x_explain,
chi_explain,
y_p_explain,
feature_names,
target_names,
class_index,
r,
tol_importance=0.001,
tol_error=0.001,
scale_data=False,
save_samples=True,
):
super().__init__(
x_explain,
chi_explain,
y_p_explain,
feature_names,
target_names,
class_index,
r,
tol_importance,
tol_error,
scale_data,
save_samples,
)
self.model = Ridge(
alpha=0.001,
fit_intercept=True,
normalize=False,
copy_X=True,
max_iter=10000,
tol=1e-05,
solver="lsqr",
random_state=None,
)
def partial_fit(self, x_set, y_set, weight_set=None):
super().partial_fit(x_set, y_set, weight_set)
self.scaler.fit(self.x_samples)
x_set = self.scaler.transform(self.x_samples)
self.model.fit(x_set, self.y_samples, sample_weight=self.weight_samples)
class HuberRegressorMod(LocalModelLinear):
def __init__(
self,
x_explain,
chi_explain,
y_p_explain,
feature_names,
target_names,
class_index,
r,
tol_importance=0.001,
tol_error=0.001,
scale_data=False,
save_samples=False,
):
super().__init__(
x_explain,
chi_explain,
y_p_explain,
feature_names,
target_names,
class_index,
r,
tol_importance,
tol_error,
scale_data,
save_samples,
)
self.model = HuberRegressor(
epsilon=1.35, max_iter=10000, alpha=0.001, warm_start=True, fit_intercept=True, tol=1e-05
)
def partial_fit(self, x_set, y_set, weight_set=None):
super().partial_fit(x_set, y_set, weight_set)
self.scaler.partial_fit(x_set)
x_set = self.scaler.transform(x_set)
self.model.fit(x_set, y_set, sample_weight=weight_set)
# self.model.fit(x_set, self.y_samples, sample_weight=self.weight_samples)
| 26.703704
| 101
| 0.564667
|
794c287c0e3c966b9d906b6e8fd638f63045d2e8
| 51
|
py
|
Python
|
tuwmodel/__init__.py
|
raoulcollenteur/tuwmodel
|
40a68a61d053dce279b055be3a01ea7c4036bffa
|
[
"MIT"
] | null | null | null |
tuwmodel/__init__.py
|
raoulcollenteur/tuwmodel
|
40a68a61d053dce279b055be3a01ea7c4036bffa
|
[
"MIT"
] | null | null | null |
tuwmodel/__init__.py
|
raoulcollenteur/tuwmodel
|
40a68a61d053dce279b055be3a01ea7c4036bffa
|
[
"MIT"
] | null | null | null |
from .hbvmodel import *
from .model import simulate
| 25.5
| 27
| 0.803922
|
794c28cf8b70ab7a8859df15e37519cc14595ba4
| 6,513
|
py
|
Python
|
src/.history/Test/HiwinRT605_Strategy_test_v1_20190627100320.py
|
SamKaiYang/2019_Hiwin_Shaking
|
d599f8c87dc4da89eae266990d12eb3a8b0f3e16
|
[
"MIT"
] | null | null | null |
src/.history/Test/HiwinRT605_Strategy_test_v1_20190627100320.py
|
SamKaiYang/2019_Hiwin_Shaking
|
d599f8c87dc4da89eae266990d12eb3a8b0f3e16
|
[
"MIT"
] | null | null | null |
src/.history/Test/HiwinRT605_Strategy_test_v1_20190627100320.py
|
SamKaiYang/2019_Hiwin_Shaking
|
d599f8c87dc4da89eae266990d12eb3a8b0f3e16
|
[
"MIT"
] | null | null | null |
#!/usr/bin/env python3
# license removed for brevity
#策略 機械手臂 四點來回跑
import threading
import time
import rospy
import os
import numpy as np
from std_msgs.msg import String
from ROS_Socket.srv import *
from ROS_Socket.msg import *
import math
import enum
import Hiwin_RT605_Arm_Command as ArmTask
##----Arm state-----------
Arm_state_flag = 0
Strategy_flag = 0
Sent_data_flag = True
##----Arm status enum
class Arm_status(enum.IntEnum):
Idle = 0
Isbusy = 1
Error = 2
shutdown = 6
##-----------server feedback arm state----------
def Arm_state(req):
global CurrentMissionType,Strategy_flag,Arm_state_flag
Arm_state_flag = int('%s'%req.Arm_state)
if Arm_state_flag == Arm_status.Isbusy: #表示手臂忙碌
Strategy_flag = False
return(1)
if Arm_state_flag == Arm_status.Idle: #表示手臂準備
Strategy_flag = True
return(0)
if Arm_state_flag == Arm_status.shutdown: #表示程式中斷
Strategy_flag = 6
return(6)
##-----------server feedback Sent_flag----------
def Sent_flag(req):
global Sent_data_flag
Sent_data_flag = int('%s'%req.sent_flag)
return(1)
def arm_state_server():
#rospy.init_node(NAME)
s = rospy.Service('arm_state',arm_state, Arm_state) ##server arm state
a = rospy.Service('sent_flag',sent_flag,Sent_flag)
#rospy.spin() ## spin one
##-----------switch define------------##
class switch(object):
def __init__(self, value):
self.value = value
self.fall = False
def __iter__(self):
"""Return the match method once, then stop"""
yield self.match
raise StopIteration
def match(self, *args):
"""Indicate whether or not to enter a case suite"""
if self.fall or not args:
return True
elif self.value in args: # changed for v1.5, see below
self.fall = True
return True
else:
return False
##------------class-------
class point():
def __init__(self,x,y,z,pitch,roll,yaw):
self.x = x
self.y = y
self.z = z
self.pitch = pitch
self.roll = roll
self.yaw = yaw
pos = point(0,36.8,11.35,-90,0,0)
##-------------------------strategy---------------------
action = 0
def Mission_Trigger():
global action,Arm_state_flag,Sent_data_flag
if Arm_state_flag == Arm_status.Idle:
# Sent_data_flag = False
Arm_state_flag = Arm_status.Isbusy
for case in switch(action): #傳送指令給socket選擇手臂動作
if case(0):
pos.x = 10
pos.y = 36.8
pos.z = 11.35
pos.pitch = -90
pos.roll = 0
pos.yaw = 0
action = 1
print('x: ',pos.x,' y: ',pos.y,' z: ',pos.z,' pitch: ',pos.pitch,' roll: ',pos.roll,' yaw: ',pos.yaw)
#ArmTask.strategy_client_Arm_Mode(2,1,0,10,2)#action,ra,grip,vel,both
ArmTask.strategy_client_pos_move(pos.x,pos.y,pos.z,pos.pitch,pos.roll,pos.yaw)
ArmTask.strategy_client_Arm_Mode(2,1,0,10,2)#action,ra,grip,vel,both
break
if case(1):
pos.x = 10
pos.y = 42
pos.z = 11.35
pos.pitch = -90
pos.roll = 0
pos.yaw = 0
action = 2
print('x: ',pos.x,' y: ',pos.y,' z: ',pos.z,' pitch: ',pos.pitch,' roll: ',pos.roll,' yaw: ',pos.yaw)
#ArmTask.strategy_client_Arm_Mode(2,1,0,10,2)#action,ra,grip,vel,both
ArmTask.strategy_client_pos_move(pos.x,pos.y,pos.z,pos.pitch,pos.roll,pos.yaw)
ArmTask.strategy_client_Arm_Mode(2,1,0,10,2)#action,ra,grip,vel,both
break
if case(2):
pos.x = -10
pos.y = 42
pos.z = 11.35
pos.pitch = -90
pos.roll = 0
pos.yaw = 0
action = 3
print('x: ',pos.x,' y: ',pos.y,' z: ',pos.z,' pitch: ',pos.pitch,' roll: ',pos.roll,' yaw: ',pos.yaw)
#ArmTask.strategy_client_Arm_Mode(2,1,0,10,2)#action,ra,grip,vel,both
ArmTask.strategy_client_pos_move(pos.x,pos.y,pos.z,pos.pitch,pos.roll,pos.yaw)
ArmTask.strategy_client_Arm_Mode(2,1,0,10,2)#action,ra,grip,vel,both
break
if case(3):
pos.x = -10
pos.y = 36.8
pos.z = 11.35
pos.pitch = -90
pos.roll = 0
pos.yaw = 0
action = 4
print('x: ',pos.x,' y: ',pos.y,' z: ',pos.z,' pitch: ',pos.pitch,' roll: ',pos.roll,' yaw: ',pos.yaw)
#ArmTask.strategy_client_Arm_Mode(2,1,0,10,2)#action,ra,grip,vel,both
ArmTask.strategy_client_pos_move(pos.x,pos.y,pos.z,pos.pitch,pos.roll,pos.yaw)
ArmTask.strategy_client_Arm_Mode(2,1,0,10,2)#action,ra,grip,vel,both
break
if case(4):
pos.x = 0
pos.y = 36.8
pos.z = 11.35
pos.pitch = -90
pos.roll = 0
pos.yaw = 0
action = 0
print('x: ',pos.x,' y: ',pos.y,' z: ',pos.z,' pitch: ',pos.pitch,' roll: ',pos.roll,' yaw: ',pos.yaw)
#ArmTask.strategy_client_Arm_Mode(2,1,0,10,2)#action,ra,grip,vel,both
ArmTask.strategy_client_pos_move(pos.x,pos.y,pos.z,pos.pitch,pos.roll,pos.yaw)
ArmTask.strategy_client_Arm_Mode(2,1,0,10,2)#action,ra,grip,vel,both
break
if case(): # default, could also just omit condition or 'if True'
rospy.on_shutdown(myhook)
ArmTask.rospy.on_shutdown(myhook)
#action: ptp line
#ra : abs rel
#grip 夾爪
#vel speed
#both : Ctrl_Mode
##-------------strategy end ------------
def myhook():
print ("shutdown time!")
if __name__ == '__main__':
argv = rospy.myargv()
rospy.init_node('strategy', anonymous=True)
GetInfoFlag = True #Test no data
arm_state_server()
start_input=int(input('開始策略請按1,離開請按3 : ')) #輸入開始指令
start_input = 1
if start_input==1:
## encoding: UTF-8
#timer = threading.Timer(1, Mission_Trigger)
#timer.start()
while 1:
time.sleep(0.2)
Mission_Trigger()
if start_input == 3:
pass
#timer.join()
ArmTask.rospy.spin()
rospy.spin()
| 35.016129
| 117
| 0.536158
|
794c28e3f924356778c0e998226a7cd9d91dffa9
| 6,912
|
py
|
Python
|
morpheus/__main__.py
|
preller/morpheus
|
ba10271c6ace5aff3b35509ab5fbf42bcd6750b6
|
[
"MIT"
] | 44
|
2019-06-28T04:29:02.000Z
|
2022-03-03T01:29:26.000Z
|
morpheus/__main__.py
|
preller/morpheus
|
ba10271c6ace5aff3b35509ab5fbf42bcd6750b6
|
[
"MIT"
] | 27
|
2019-01-07T22:56:04.000Z
|
2022-02-18T15:44:59.000Z
|
morpheus/__main__.py
|
preller/morpheus
|
ba10271c6ace5aff3b35509ab5fbf42bcd6750b6
|
[
"MIT"
] | 8
|
2019-06-28T16:25:08.000Z
|
2022-03-07T12:16:57.000Z
|
# MIT License
# Copyright 2019 Ryan Hausen
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
# ==============================================================================
"""Morpheus -- a package for making pixel level morphological classifications."""
import os
import sys
import argparse
from morpheus.classifier import Classifier
def _valid_file(value):
if os.path.isfile(value) and value.endswith((".fits", ".FITS")):
return value
raise ValueError("File needs to be a fits file, ending with .fits or .FITS")
def _valid_dir(value):
if os.path.isdir(value):
return value
raise ValueError("Value needs to be a directory.")
def _gpus(value):
gpus = [int(v) for v in value.split(",")]
gpu_err = "--gpus option requires more than one GPU ID. If you are trying "
gpu_err += "to select a single gpu to use the CUDA_VISIBLE_DEVICES "
gpu_err += "environment variable. For more information: "
gpu_err += "https://devblogs.nvidia.com/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/"
if len(gpus) < 2:
raise ValueError(gpu_err)
return gpus
def _parse_args(argv):
"""A place to set the arugments used in main."""
parser = argparse.ArgumentParser(description=__doc__)
# required arguments
help_str = "The {} band FITS file location"
parser.add_argument("h", type=_valid_file, help=help_str.format("H"))
parser.add_argument("j", type=_valid_file, help=help_str.format("J"))
parser.add_argument("v", type=_valid_file, help=help_str.format("V"))
parser.add_argument("z", type=_valid_file, help=help_str.format("Z"))
# optional arguments
# action
action_desc = "An additional optional flag to include additional information "
action_desc += "to the morphological classifications. `catalog` saves a "
action_desc += "morpholgical catalog as catalog.txt. `segmap` saves a "
action_desc += "segmentation map as segmap.fits. `colorize` saves a colorized "
action_desc += "version of the classification as colorized.png. "
action_desc += "`all` saves catalog, segmap, and colorize."
parser.add_argument(
"--action",
choices=["catalog", "segmap", "colorize", "all"],
default="None",
help=action_desc,
)
# parallel gpu
gpus_desc = "Optional flag for classifying an image in parallel with multiple "
gpus_desc += "GPUs. Should be comma seperated ints like: 1,3 or 0,1,2 no spaces."
gpus_desc += "DO NOT use this flag for a single GPU classification. "
gpus_desc += "Use the CUDA_VISIBLE_DEVICES enironment variable to select a "
gpus_desc += "GPU for morpheus to use."
parser.add_argument("--gpus", type=_gpus, help=gpus_desc)
# parallel cpu
parser.add_argument(
"--cpus",
type=int,
help="Optional flag for classifying an image in parrallel with multiple cpus",
)
out_dir_desc = "The directory to save the output in."
parser.add_argument(
"--out_dir", type=_valid_dir, default=os.getcwd(), help=out_dir_desc
)
batch_size_desc = "The batch size for Moprheus to use when classifying the image."
parser.add_argument("--batch_size", type=int, default=1000, help=batch_size_desc)
# evaluate args
args = parser.parse_args(argv)
print(args.cpus, args.gpus)
if args.cpus and args.gpus:
raise ValueError("Both --cpus and --gpus were indicated. Choose only one.")
return args
def main():
args = _parse_args(sys.argv[1:])
if args.action == "None":
Classifier.classify(
h=args.h,
j=args.j,
v=args.v,
z=args.z,
batch_size=args.batch_size,
out_dir=args.out_dir,
gpus=args.gpus,
cpus=args.cpus,
)
elif args.action == "catalog":
classified = Classifier.classify(
h=args.h,
j=args.j,
v=args.v,
z=args.z,
batch_size=args.batch_size,
out_dir=args.out_dir,
gpus=args.gpus,
cpus=args.cpus,
)
segmap = Classifier.segmap_from_classified(
classified, args.h, out_dir=args.out_dir
)
Classifier.catalog_from_classified(
classified,
args.h,
segmap,
out_file=os.path.join(args.out_dir, "colorized.png"),
)
elif args.action == "segmap":
classified = Classifier.classify(
h=args.h,
j=args.j,
v=args.v,
z=args.z,
batch_size=args.batch_size,
out_dir=args.out_dir,
gpus=args.gpus,
cpus=args.cpus,
)
Classifier.segmap_from_classified(classified, args.h, out_dir=args.out_dir)
elif args.action == "colorize":
classified = Classifier.classify(
h=args.h,
j=args.j,
v=args.v,
z=args.z,
batch_size=args.batch_size,
out_dir=args.out_dir,
gpus=args.gpus,
cpus=args.cpus,
)
Classifier.colorize_classified(classified, out_dir=args.out_dir)
elif args.action == "all":
classified = Classifier.classify(
h=args.h,
j=args.j,
v=args.v,
z=args.z,
batch_size=args.batch_size,
out_dir=args.out_dir,
gpus=args.gpus,
cpus=args.cpus,
)
segmap = Classifier.segmap_from_classified(
classified, args.h, out_dir=args.out_dir
)
Classifier.catalog_from_classified(
classified,
args.h,
segmap,
out_file=os.path.join(args.out_dir, "colorized.png"),
)
Classifier.colorize_classified(classified, out_dir=args.out_dir)
if __name__ == "__main__":
main()
| 32.914286
| 102
| 0.630353
|
794c29026bf16d235272ba920d10c9a23da82885
| 9,879
|
py
|
Python
|
pennylane/templates/state_preparations/mottonen.py
|
kareem1925/pennylane
|
04bb5ba0fcced558e1273b94b3ea8c39622c5ca4
|
[
"Apache-2.0"
] | null | null | null |
pennylane/templates/state_preparations/mottonen.py
|
kareem1925/pennylane
|
04bb5ba0fcced558e1273b94b3ea8c39622c5ca4
|
[
"Apache-2.0"
] | null | null | null |
pennylane/templates/state_preparations/mottonen.py
|
kareem1925/pennylane
|
04bb5ba0fcced558e1273b94b3ea8c39622c5ca4
|
[
"Apache-2.0"
] | null | null | null |
# Copyright 2018-2020 Xanadu Quantum Technologies Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
Contains the ``MottonenStatePreparation`` template.
"""
import math
import numpy as np
from scipy import sparse
import pennylane as qml
from pennylane.templates.decorator import template
from pennylane.templates.utils import check_shape, get_shape
from pennylane.variable import Variable
from pennylane.wires import Wires
# pylint: disable=len-as-condition,arguments-out-of-order
def gray_code(rank):
"""Generates the Gray code of given rank.
Args:
rank (int): rank of the Gray code (i.e. number of bits)
"""
def gray_code_recurse(g, rank):
k = len(g)
if rank <= 0:
return
for i in range(k - 1, -1, -1):
char = "1" + g[i]
g.append(char)
for i in range(k - 1, -1, -1):
g[i] = "0" + g[i]
gray_code_recurse(g, rank - 1)
g = ["0", "1"]
gray_code_recurse(g, rank - 1)
return g
def _matrix_M_entry(row, col):
"""Returns one entry for the matrix that maps alpha to theta.
Args:
row (int): one-based row number
col (int): one-based column number
Returns:
(float): transformation matrix entry at given row and column
"""
# (col >> 1) ^ col is the Gray code of col
b_and_g = row & ((col >> 1) ^ col)
sum_of_ones = 0
while b_and_g > 0:
if b_and_g & 0b1:
sum_of_ones += 1
b_and_g = b_and_g >> 1
return (-1) ** sum_of_ones
def _compute_theta(alpha):
"""Calculates the rotation angles from the alpha vector.
Args:
alpha (array[float]): alpha parameters
Returns:
(array[float]): rotation angles theta
"""
k = np.log2(alpha.shape[0])
factor = 2 ** (-k)
theta = sparse.dok_matrix(alpha.shape, dtype=np.float64) # type: sparse.dok_matrix
for row in range(alpha.shape[0]):
# Use transpose of M:
entry = sum([_matrix_M_entry(col, row) * a for (col, _), a in alpha.items()])
entry *= factor
if abs(entry) > 1e-6:
theta[row, 0] = entry
return theta
def _uniform_rotation_dagger(gate, alpha, control_wires, target_wire):
"""Applies a given inverse rotation to the target qubit
that is uniformly controlled by the control qubits.
Args:
gate (~.Operation): gate to be applied, needs to have exactly
one parameter
alpha (array[float]): alpha parameters
control_wires (array[int]): wires that act as control
target_wire (int): wire that acts as target
"""
theta = _compute_theta(alpha) # type: sparse.dok_matrix
gray_code_rank = len(control_wires)
if gray_code_rank == 0:
gate(theta[0, 0], wires=[target_wire])
return
code = gray_code(gray_code_rank)
num_selections = len(code)
control_indices = [
int(np.log2(int(code[i], 2) ^ int(code[(i + 1) % num_selections], 2)))
for i in range(num_selections) # TODO: re-asses for nonconsecutive wires
]
for i, control_index in enumerate(control_indices):
gate(theta[i, 0], wires=[target_wire])
qml.CNOT(wires=[control_wires[control_index], target_wire])
def _uniform_rotation_z_dagger(alpha, control_wires, target_wire):
"""Applies the inverse of a Z rotation to the target qubit
that is uniformly controlled by the control qubits.
Args:
alpha (array[float]): alpha parameters
control_wires (array[int]): wires that act as control
target_wire (int): wire that acts as target
"""
_uniform_rotation_dagger(qml.RZ, alpha, control_wires, target_wire)
def _uniform_rotation_y_dagger(alpha, control_wires, target_wire):
"""Applies the inverse of a Y rotation to the target qubit
that is uniformly controlled by the control qubits.
Args:
alpha (array[float]): alpha parameters
control_wires (array[int]): wires that act as control
target_wire (int): wire that acts as target
"""
_uniform_rotation_dagger(qml.RY, alpha, control_wires, target_wire)
def _get_alpha_z(omega, n, k):
r"""Computes the rotation angles alpha for the Z rotations.
Args:
omega (float): phase of the input
n (int): total number of qubits
k (int): index of current qubit
Returns:
scipy.sparse.dok_matrix[np.float64]: a sparse vector representing :math:`\alpha^z_k`
"""
alpha_z_k = sparse.dok_matrix((2 ** (n - k), 1), dtype=np.float64)
for (i, _), om in omega.items():
i += 1
j = int(np.ceil(i * 2 ** (-k)))
s_condition = 2 ** (k - 1) * (2 * j - 1)
s_i = 1.0 if i > s_condition else -1.0
alpha_z_k[j - 1, 0] = alpha_z_k[j - 1, 0] + s_i * om / 2 ** (k - 1)
return alpha_z_k
def _get_alpha_y(a, n, k):
r"""Computes the rotation angles alpha for the Y rotations.
Args:
omega (float): phase of the input
n (int): total number of qubits
k (int): index of current qubit
Returns:
scipy.sparse.dok_matrix[np.float64]: a sparse vector representing :math:`\alpha^y_k`
"""
alpha = sparse.dok_matrix((2 ** (n - k), 1), dtype=np.float64)
numerator = sparse.dok_matrix((2 ** (n - k), 1), dtype=np.float64)
denominator = sparse.dok_matrix((2 ** (n - k), 1), dtype=np.float64)
for (i, _), e in a.items():
j = int(math.ceil((i + 1) / 2 ** k))
l = (i + 1) - (2 * j - 1) * 2 ** (k - 1)
is_part_numerator = 1 <= l <= 2 ** (k - 1)
if is_part_numerator:
numerator[j - 1, 0] += e * e
denominator[j - 1, 0] += e * e
for (j, _), e in numerator.items():
numerator[j, 0] = math.sqrt(e)
for (j, _), e in denominator.items():
denominator[j, 0] = 1 / math.sqrt(e)
pre_alpha = numerator.multiply(denominator) # type: sparse.csr_matrix
for (j, _), e in pre_alpha.todok().items():
alpha[j, 0] = 2 * np.arcsin(e)
return alpha
@template
def MottonenStatePreparation(state_vector, wires):
r"""
Prepares an arbitrary state on the given wires using a decomposition into gates developed
by Möttönen et al. (Quantum Info. Comput., 2005).
The state is prepared via a sequence
of "uniformly controlled rotations". A uniformly controlled rotation on a target qubit is
composed from all possible controlled rotations on said qubit and can be used to address individual
elements of the state vector. In the work of Mottonen et al., the inverse of their state preparation
is constructed by first equalizing the phases of the state vector via uniformly controlled Z rotations
and then rotating the now real state vector into the direction of the state :math:`|0\rangle` via
uniformly controlled Y rotations.
This code is adapted from code written by Carsten Blank for PennyLane-Qiskit.
Args:
state_vector (array): Input array of shape ``(2^N,)``, where N is the number of wires
the state preparation acts on. ``N`` must be smaller or equal to the total
number of wires.
wires (Iterable or Wires): Wires that the template acts on. Accepts an iterable of numbers or strings, or
a Wires object.
Raises:
ValueError: if inputs do not have the correct format
"""
###############
# Input checks
wires = Wires(wires)
n_wires = len(wires)
expected_shape = (2 ** n_wires,)
check_shape(
state_vector,
expected_shape,
msg="'state_vector' must be of shape {}; got {}."
"".format(expected_shape, get_shape(state_vector)),
)
# check if state_vector is normalized
if isinstance(state_vector[0], Variable):
state_vector_values = [s.val for s in state_vector]
norm = np.sum(np.abs(state_vector_values) ** 2)
else:
norm = np.sum(np.abs(state_vector) ** 2)
if not np.isclose(norm, 1.0, atol=1e-3):
raise ValueError("'state_vector' has to be of length 1.0, got {}".format(norm))
#######################
# Change ordering of indices, original code was for IBM machines
state_vector = np.array(state_vector).reshape([2] * n_wires).T.flatten()[:, np.newaxis]
state_vector = sparse.dok_matrix(state_vector)
a = sparse.dok_matrix(state_vector.shape)
omega = sparse.dok_matrix(state_vector.shape)
for (i, j), v in state_vector.items():
if isinstance(v, Variable):
a[i, j] = np.absolute(v.val)
omega[i, j] = np.angle(v.val)
else:
a[i, j] = np.absolute(v)
omega[i, j] = np.angle(v)
# This code is directly applying the inverse of Carsten Blank's
# code to avoid inverting at the end
# Apply y rotations
for k in range(n_wires, 0, -1): # Todo: use actual wire ordering!
alpha_y_k = _get_alpha_y(a, n_wires, k) # type: sparse.dok_matrix
control = wires[k:]
target = wires[k - 1]
_uniform_rotation_y_dagger(alpha_y_k, control, target)
# Apply z rotations
for k in range(n_wires, 0, -1): # Todo: use actual wire ordering!
alpha_z_k = _get_alpha_z(omega, n_wires, k)
control = wires[k:]
target = wires[k - 1]
if len(alpha_z_k) > 0:
_uniform_rotation_z_dagger(alpha_z_k, control, target)
| 32.390164
| 113
| 0.632959
|
794c2aee16bca61634be3e5b04091c96c0ddfff2
| 1,720
|
py
|
Python
|
util.py
|
abwilf/Factorized
|
64e7d2a54bbfbc8b1c5a2130f2b941c376402fe6
|
[
"MIT"
] | 22
|
2021-03-18T22:17:59.000Z
|
2022-01-26T22:04:57.000Z
|
util.py
|
abwilf/Factorized
|
64e7d2a54bbfbc8b1c5a2130f2b941c376402fe6
|
[
"MIT"
] | 3
|
2021-06-10T15:09:32.000Z
|
2022-01-05T07:15:48.000Z
|
util.py
|
abwilf/Factorized
|
64e7d2a54bbfbc8b1c5a2130f2b941c376402fe6
|
[
"MIT"
] | 7
|
2021-03-18T22:00:26.000Z
|
2022-03-07T01:21:33.000Z
|
import random
import os
import torch
import numpy as np
import zipfile
from tqdm import tqdm
from datetime import datetime
from contextlib import contextmanager
from time import time
def set_seed(my_seed):
os.environ['PYTHONHASHSEED'] = str(my_seed)
random.seed(my_seed)
np.random.seed(my_seed)
torch.manual_seed(my_seed)
torch.cuda.manual_seed(my_seed)
torch.cuda.manual_seed_all(my_seed)
# torch.backends.cudnn.deterministic = True
# torch.backends.cudnn.benchmark = False # This can slow down training
def snapshot_code_to_zip(code_path, snapshot_zip_output_dir, snapshot_zip_output_file_name):
zf = zipfile.ZipFile(os.path.join(snapshot_zip_output_dir, snapshot_zip_output_file_name), "w")
dirs_to_exclude = ['.git', 'dataset', 'my_debug', 'log']
for dirname, subdirs, files in os.walk(code_path):
for dir_to_exclude in dirs_to_exclude:
if dir_to_exclude in subdirs:
subdirs.remove(dir_to_exclude) # If you remove something from the 'subdirs' (second parameter) of os.walk() , os.walk() does not walk into it , that way that entire directory will be skipped. Details at docs.python.org/3/library/os.html#os.walk
for filename in files:
if filename == snapshot_zip_output_file_name:
continue # skip the output zip file to avoid infinite recursion
print(filename)
zf.write(os.path.join(dirname, filename), os.path.relpath(os.path.join(dirname, filename), os.path.join(code_path, '..')))
zf.close()
@contextmanager
def timing(description: str) -> None:
start = time()
yield
ellapsed_time = time() - start
print(f"{description}: {ellapsed_time}")
| 35.102041
| 261
| 0.711047
|
794c2d84855a6809157b0cc173e4fe37ec85c9c7
| 1,499
|
py
|
Python
|
src/satisfy/tool/demo_utils.py
|
simone-campagna/satisfy
|
b5327e937e32c5324c05f6288f59cfaac4a316dc
|
[
"Apache-2.0"
] | null | null | null |
src/satisfy/tool/demo_utils.py
|
simone-campagna/satisfy
|
b5327e937e32c5324c05f6288f59cfaac4a316dc
|
[
"Apache-2.0"
] | null | null | null |
src/satisfy/tool/demo_utils.py
|
simone-campagna/satisfy
|
b5327e937e32c5324c05f6288f59cfaac4a316dc
|
[
"Apache-2.0"
] | null | null | null |
__all__ = [
'print_model',
'print_solve_stats',
'print_optimization_stats',
]
def print_model(model):
print("\n=== model variables: ===")
for var_index, (var_name, var_info) in enumerate(model.variables().items()):
print(" {:4d}) {!r} domain: {}".format(var_index, var_name, var_info.domain))
print("\n=== model constraints: ===")
for c_index, constraint in enumerate(model.constraints()):
print(" {:4d}) {}".format(c_index, constraint))
def print_solve_stats(stats):
if stats.count == 1:
suffix = ''
else:
suffix = 's'
if stats.interrupt:
fmt = "Found {s.count} partial solution{suffix} in {s.elapsed:.3f} seconds [{s.interrupt} reached]"
else:
if stats.count == 1:
fmt = "Found unique solution{suffix} in {s.elapsed:.3f} seconds"
else:
fmt = "Found all {s.count} solution{suffix} in {s.elapsed:.3f} seconds"
print("\n" + fmt.format(suffix=suffix, s=stats))
def print_optimization_stats(stats, optimal=None):
if optimal is None:
optimal = stats.interrupt is None
if optimal:
kind = 'optimal'
else:
kind = 'sub-optimal'
if stats.count == 1:
suffix = ''
else:
suffix = 's'
fmt = "Found {kind} solution in {s.elapsed:.3f} seconds after {s.count} solve iteration{suffix}"
if stats.interrupt:
fmt += " [{s.interrupt} reached]"
print("\n" + fmt.format(suffix=suffix, kind=kind, s=stats))
| 30.591837
| 107
| 0.599733
|
794c2f8959eca83660977433d80c4a0bc5b4be88
| 15,122
|
py
|
Python
|
src/marshmallow_sqlalchemy/convert.py
|
TheunsPretorius/marshmallow-sqlalchemy
|
eabf78746504278c7d4b08aca4c24d42b3a9e0f5
|
[
"MIT"
] | 498
|
2015-04-29T05:32:08.000Z
|
2022-03-29T11:42:52.000Z
|
src/marshmallow_sqlalchemy/convert.py
|
TheunsPretorius/marshmallow-sqlalchemy
|
eabf78746504278c7d4b08aca4c24d42b3a9e0f5
|
[
"MIT"
] | 290
|
2015-05-23T01:39:51.000Z
|
2022-03-15T12:44:28.000Z
|
src/marshmallow_sqlalchemy/convert.py
|
TheunsPretorius/marshmallow-sqlalchemy
|
eabf78746504278c7d4b08aca4c24d42b3a9e0f5
|
[
"MIT"
] | 98
|
2015-05-08T01:58:09.000Z
|
2022-03-05T14:05:44.000Z
|
import inspect
import functools
import warnings
from distutils.version import LooseVersion
import uuid
import marshmallow as ma
from marshmallow import validate, fields
from sqlalchemy.dialects import postgresql, mysql, mssql
from sqlalchemy.orm import SynonymProperty
import sqlalchemy as sa
from .exceptions import ModelConversionError
from .fields import Related, RelatedList
_META_KWARGS_DEPRECATED = LooseVersion(ma.__version__) >= LooseVersion("3.10.0")
def _is_field(value):
return isinstance(value, type) and issubclass(value, fields.Field)
def _base_column(column):
"""Unwrap proxied columns"""
if column not in column.base_columns and len(column.base_columns) == 1:
[base] = column.base_columns
return base
return column
def _has_default(column):
return (
column.default is not None
or column.server_default is not None
or _is_auto_increment(column)
)
def _is_auto_increment(column):
return column.table is not None and column is column.table._autoincrement_column
def _postgres_array_factory(converter, data_type):
return functools.partial(
fields.List, converter._get_field_class_for_data_type(data_type.item_type)
)
def _set_meta_kwarg(field_kwargs, key, value):
if _META_KWARGS_DEPRECATED:
field_kwargs["metadata"][key] = value
else:
field_kwargs[key] = value
def _field_update_kwargs(field_class, field_kwargs, kwargs):
if not kwargs:
return field_kwargs
if isinstance(field_class, functools.partial):
# Unwrap partials, assuming that they bind a Field to arguments
field_class = field_class.func
possible_field_keywords = {
key
for cls in inspect.getmro(field_class)
for key, param in inspect.signature(cls).parameters.items()
if param.kind is inspect.Parameter.POSITIONAL_OR_KEYWORD
or param.kind is inspect.Parameter.KEYWORD_ONLY
}
for k, v in kwargs.items():
if k in possible_field_keywords:
field_kwargs[k] = v
else:
_set_meta_kwarg(field_kwargs, k, v)
return field_kwargs
class ModelConverter:
"""Class that converts a SQLAlchemy model into a dictionary of corresponding
marshmallow `Fields <marshmallow.fields.Field>`.
"""
SQLA_TYPE_MAPPING = {
sa.Enum: fields.Field,
sa.JSON: fields.Raw,
postgresql.BIT: fields.Integer,
postgresql.OID: fields.Integer,
postgresql.UUID: fields.UUID,
postgresql.MACADDR: fields.String,
postgresql.INET: fields.String,
postgresql.CIDR: fields.String,
postgresql.JSON: fields.Raw,
postgresql.JSONB: fields.Raw,
postgresql.HSTORE: fields.Raw,
postgresql.ARRAY: _postgres_array_factory,
postgresql.MONEY: fields.Decimal,
postgresql.DATE: fields.Date,
postgresql.TIME: fields.Time,
mysql.BIT: fields.Integer,
mysql.YEAR: fields.Integer,
mysql.SET: fields.List,
mysql.ENUM: fields.Field,
mysql.INTEGER: fields.Integer,
mysql.DATETIME: fields.DateTime,
mssql.BIT: fields.Integer,
mssql.UNIQUEIDENTIFIER: fields.UUID,
}
DIRECTION_MAPPING = {"MANYTOONE": False, "MANYTOMANY": True, "ONETOMANY": True}
def __init__(self, schema_cls=None):
self.schema_cls = schema_cls
@property
def type_mapping(self):
if self.schema_cls:
return self.schema_cls.TYPE_MAPPING
else:
return ma.Schema.TYPE_MAPPING
def fields_for_model(
self,
model,
*,
include_fk=False,
include_relationships=False,
fields=None,
exclude=None,
base_fields=None,
dict_cls=dict,
):
result = dict_cls()
base_fields = base_fields or {}
for prop in model.__mapper__.iterate_properties:
key = self._get_field_name(prop)
if self._should_exclude_field(prop, fields=fields, exclude=exclude):
# Allow marshmallow to validate and exclude the field key.
result[key] = None
continue
if isinstance(prop, SynonymProperty):
continue
if hasattr(prop, "columns"):
if not include_fk:
# Only skip a column if there is no overriden column
# which does not have a Foreign Key.
for column in prop.columns:
if not column.foreign_keys:
break
else:
continue
if not include_relationships and hasattr(prop, "direction"):
continue
field = base_fields.get(key) or self.property2field(prop)
if field:
result[key] = field
return result
def fields_for_table(
self,
table,
*,
include_fk=False,
fields=None,
exclude=None,
base_fields=None,
dict_cls=dict,
):
result = dict_cls()
base_fields = base_fields or {}
for column in table.columns:
key = self._get_field_name(column)
if self._should_exclude_field(column, fields=fields, exclude=exclude):
# Allow marshmallow to validate and exclude the field key.
result[key] = None
continue
if not include_fk and column.foreign_keys:
continue
# Overridden fields are specified relative to key generated by
# self._get_key_for_column(...), rather than keys in source model
field = base_fields.get(key) or self.column2field(column)
if field:
result[key] = field
return result
def property2field(self, prop, *, instance=True, field_class=None, **kwargs):
# handle synonyms
# Attribute renamed "_proxied_object" in 1.4
for attr in ("_proxied_property", "_proxied_object"):
proxied_obj = getattr(prop, attr, None)
if proxied_obj is not None:
prop = proxied_obj
field_class = field_class or self._get_field_class_for_property(prop)
if not instance:
return field_class
field_kwargs = self._get_field_kwargs_for_property(prop)
_field_update_kwargs(field_class, field_kwargs, kwargs)
ret = field_class(**field_kwargs)
if (
hasattr(prop, "direction")
and self.DIRECTION_MAPPING[prop.direction.name]
and prop.uselist is True
):
related_list_kwargs = _field_update_kwargs(
RelatedList, self.get_base_kwargs(), kwargs
)
ret = RelatedList(ret, **related_list_kwargs)
return ret
def column2field(self, column, *, instance=True, **kwargs):
field_class = self._get_field_class_for_column(column)
if not instance:
return field_class
field_kwargs = self.get_base_kwargs()
self._add_column_kwargs(field_kwargs, column)
_field_update_kwargs(field_class, field_kwargs, kwargs)
return field_class(**field_kwargs)
def field_for(self, model, property_name, **kwargs):
target_model = model
prop_name = property_name
attr = getattr(model, property_name)
remote_with_local_multiplicity = False
if hasattr(attr, "remote_attr"):
target_model = attr.target_class
prop_name = attr.value_attr
remote_with_local_multiplicity = attr.local_attr.prop.uselist
prop = target_model.__mapper__.get_property(prop_name)
converted_prop = self.property2field(prop, **kwargs)
if remote_with_local_multiplicity:
related_list_kwargs = _field_update_kwargs(
RelatedList, self.get_base_kwargs(), kwargs
)
return RelatedList(converted_prop, **related_list_kwargs)
else:
return converted_prop
def _get_field_name(self, prop_or_column):
return prop_or_column.key
def _get_field_class_for_column(self, column):
return self._get_field_class_for_data_type(column.type)
def _get_field_class_for_data_type(self, data_type):
field_cls = None
types = inspect.getmro(type(data_type))
# First search for a field class from self.SQLA_TYPE_MAPPING
for col_type in types:
if col_type in self.SQLA_TYPE_MAPPING:
field_cls = self.SQLA_TYPE_MAPPING[col_type]
if callable(field_cls) and not _is_field(field_cls):
field_cls = field_cls(self, data_type)
break
else:
# Try to find a field class based on the column's python_type
try:
python_type = data_type.python_type
except NotImplementedError:
python_type = None
if python_type in self.type_mapping:
field_cls = self.type_mapping[python_type]
else:
if hasattr(data_type, "impl"):
return self._get_field_class_for_data_type(data_type.impl)
raise ModelConversionError(
"Could not find field column of type {}.".format(types[0])
)
return field_cls
def _get_field_class_for_property(self, prop):
if hasattr(prop, "direction"):
field_cls = Related
else:
column = _base_column(prop.columns[0])
field_cls = self._get_field_class_for_column(column)
return field_cls
def _merge_validators(self, defaults, new):
new_classes = [validator.__class__ for validator in new]
return [
validator
for validator in defaults
if validator.__class__ not in new_classes
] + new
def _get_field_kwargs_for_property(self, prop):
kwargs = self.get_base_kwargs()
if hasattr(prop, "columns"):
column = _base_column(prop.columns[0])
self._add_column_kwargs(kwargs, column)
prop = column
if hasattr(prop, "direction"): # Relationship property
self._add_relationship_kwargs(kwargs, prop)
if getattr(prop, "doc", None): # Useful for documentation generation
_set_meta_kwarg(kwargs, "description", prop.doc)
info = getattr(prop, "info", dict())
overrides = info.get("marshmallow")
if overrides is not None:
warnings.warn(
'Passing `info={"marshmallow": ...}` is deprecated. '
"Use `SQLAlchemySchema` and `auto_field` instead.",
DeprecationWarning,
)
validate = overrides.pop("validate", [])
kwargs["validate"] = self._merge_validators(
kwargs["validate"], validate
) # Ensure we do not override the generated validators.
kwargs.update(overrides) # Override other kwargs.
return kwargs
def _add_column_kwargs(self, kwargs, column):
"""Add keyword arguments to kwargs (in-place) based on the passed in
`Column <sqlalchemy.schema.Column>`.
"""
if column.nullable:
kwargs["allow_none"] = True
kwargs["required"] = not column.nullable and not _has_default(column)
if hasattr(column.type, "enums"):
kwargs["validate"].append(validate.OneOf(choices=column.type.enums))
# Add a length validator if a max length is set on the column
# Skip UUID columns
# (see https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/54)
if hasattr(column.type, "length"):
column_length = column.type.length
if column_length is not None:
try:
python_type = column.type.python_type
except (AttributeError, NotImplementedError):
python_type = None
if not python_type or not issubclass(python_type, uuid.UUID):
kwargs["validate"].append(validate.Length(max=column_length))
if getattr(column.type, "asdecimal", False):
kwargs["places"] = getattr(column.type, "scale", None)
def _add_relationship_kwargs(self, kwargs, prop):
"""Add keyword arguments to kwargs (in-place) based on the passed in
relationship `Property`.
"""
nullable = True
for pair in prop.local_remote_pairs:
if not pair[0].nullable:
if prop.uselist is True:
nullable = False
break
kwargs.update({"allow_none": nullable, "required": not nullable})
def _should_exclude_field(self, column, fields=None, exclude=None):
key = self._get_field_name(column)
if fields and key not in fields:
return True
if exclude and key in exclude:
return True
return False
def get_base_kwargs(self):
kwargs = {"validate": []}
if _META_KWARGS_DEPRECATED:
kwargs["metadata"] = {}
return kwargs
default_converter = ModelConverter()
fields_for_model = default_converter.fields_for_model
"""Generate a dict of field_name: `marshmallow.fields.Field` pairs for the given model.
Note: SynonymProperties are ignored. Use an explicit field if you want to include a synonym.
:param model: The SQLAlchemy model
:param bool include_fk: Whether to include foreign key fields in the output.
:param bool include_relationships: Whether to include relationships fields in the output.
:return: dict of field_name: Field instance pairs
"""
property2field = default_converter.property2field
"""Convert a SQLAlchemy `Property` to a field instance or class.
:param Property prop: SQLAlchemy Property.
:param bool instance: If `True`, return `Field` instance, computing relevant kwargs
from the given property. If `False`, return the `Field` class.
:param kwargs: Additional keyword arguments to pass to the field constructor.
:return: A `marshmallow.fields.Field` class or instance.
"""
column2field = default_converter.column2field
"""Convert a SQLAlchemy `Column <sqlalchemy.schema.Column>` to a field instance or class.
:param sqlalchemy.schema.Column column: SQLAlchemy Column.
:param bool instance: If `True`, return `Field` instance, computing relevant kwargs
from the given property. If `False`, return the `Field` class.
:return: A `marshmallow.fields.Field` class or instance.
"""
field_for = default_converter.field_for
"""Convert a property for a mapped SQLAlchemy class to a marshmallow `Field`.
Example: ::
date_created = field_for(Author, 'date_created', dump_only=True)
author = field_for(Book, 'author')
:param type model: A SQLAlchemy mapped class.
:param str property_name: The name of the property to convert.
:param kwargs: Extra keyword arguments to pass to `property2field`
:return: A `marshmallow.fields.Field` class or instance.
"""
| 36.973105
| 92
| 0.639664
|
794c313d4324479813b0b625326c31c58a6c52bf
| 2,820
|
py
|
Python
|
build/android/pylib/uiautomator/test_runner.py
|
tmpsantos/chromium
|
802d4aeeb33af25c01ee5994037bbf14086d4ac0
|
[
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null |
build/android/pylib/uiautomator/test_runner.py
|
tmpsantos/chromium
|
802d4aeeb33af25c01ee5994037bbf14086d4ac0
|
[
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null |
build/android/pylib/uiautomator/test_runner.py
|
tmpsantos/chromium
|
802d4aeeb33af25c01ee5994037bbf14086d4ac0
|
[
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 1
|
2020-11-04T07:23:37.000Z
|
2020-11-04T07:23:37.000Z
|
# Copyright (c) 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Class for running uiautomator tests on a single device."""
from pylib import constants
from pylib import flag_changer
from pylib.device import intent
from pylib.instrumentation import test_options as instr_test_options
from pylib.instrumentation import test_runner as instr_test_runner
class TestRunner(instr_test_runner.TestRunner):
"""Responsible for running a series of tests connected to a single device."""
def __init__(self, test_options, device, shard_index, test_pkg):
"""Create a new TestRunner.
Args:
test_options: A UIAutomatorOptions object.
device: Attached android device.
shard_index: Shard index.
test_pkg: A TestPackage object.
"""
# Create an InstrumentationOptions object to pass to the super class
instrumentation_options = instr_test_options.InstrumentationOptions(
test_options.tool,
test_options.cleanup_test_files,
test_options.push_deps,
test_options.annotations,
test_options.exclude_annotations,
test_options.test_filter,
test_options.test_data,
test_options.save_perf_json,
test_options.screenshot_failures,
wait_for_debugger=False,
coverage_dir=None,
test_apk=None,
test_apk_path=None,
test_apk_jar_path=None,
test_runner=None,
test_support_apk_path=None,
device_flags=None)
super(TestRunner, self).__init__(instrumentation_options, device,
shard_index, test_pkg)
cmdline_file = constants.PACKAGE_INFO[test_options.package].cmdline_file
self.flags = None
if cmdline_file:
self.flags = flag_changer.FlagChanger(self.device, cmdline_file)
self._package = constants.PACKAGE_INFO[test_options.package].package
self._activity = constants.PACKAGE_INFO[test_options.package].activity
#override
def InstallTestPackage(self):
self.test_pkg.Install(self.device)
#override
def PushDataDeps(self):
pass
#override
def _RunTest(self, test, timeout):
self.device.ClearApplicationState(self._package)
if self.flags:
if 'Feature:FirstRunExperience' in self.test_pkg.GetTestAnnotations(test):
self.flags.RemoveFlags(['--disable-fre'])
else:
self.flags.AddFlags(['--disable-fre'])
self.device.StartActivity(
intent.Intent(action='android.intent.action.MAIN',
activity=self._activity,
package=self._package),
blocking=True,
force_stop=True)
return self.device.old_interface.RunUIAutomatorTest(
test, self.test_pkg.GetPackageName(), timeout)
| 35.696203
| 80
| 0.710993
|
794c336b2c288d760796c60e77f621a17d3ae822
| 1,140
|
py
|
Python
|
2016/day08.py
|
iKevinY/advent
|
d160fb711a0a4d671f53cbd61088117e7ff0276a
|
[
"MIT"
] | 11
|
2019-12-03T06:32:37.000Z
|
2021-12-24T12:23:57.000Z
|
2016/day08.py
|
iKevinY/advent
|
d160fb711a0a4d671f53cbd61088117e7ff0276a
|
[
"MIT"
] | null | null | null |
2016/day08.py
|
iKevinY/advent
|
d160fb711a0a4d671f53cbd61088117e7ff0276a
|
[
"MIT"
] | 1
|
2019-12-07T06:21:31.000Z
|
2019-12-07T06:21:31.000Z
|
# -*- coding: utf-8 -*-
# import sys
# import time
import fileinput
from utils import parse_line
WIDTH = 50
HEIGHT = 6
SCREEN = [[False for _ in range(WIDTH)] for _ in range(HEIGHT)]
# Make space for animated output
# print '\n' * HEIGHT
for line in fileinput.input():
if line.startswith('rect'):
a, b = parse_line(r'rect (\d+)x(\d+)', line)
for y in range(b):
for x in range(a):
SCREEN[y][x] = True
else:
rc, n, offset = parse_line(r'rotate (\w+) .=(\d+) by (\d+)', line)
if rc == 'row':
temp = SCREEN[n][:]
for i, x in enumerate(temp):
SCREEN[n][(offset+i) % WIDTH] = x
else:
temp = [row[n] for row in SCREEN]
for i, x in enumerate(temp):
SCREEN[(offset+i) % HEIGHT][n] = x
# sys.stdout.write('\033[F' * HEIGHT)
# for row in SCREEN:
# print ''.join('█' if x else ' ' for x in row)
# time.sleep(0.02)
print ''
for row in SCREEN:
print ''.join('█' if x else ' ' for x in row)
print "\nNumber of lit pixels: %i" % sum(sum(row) for row in SCREEN)
| 22.8
| 74
| 0.524561
|
794c337bdaa29cf7ddadf460865e1b0cabf6dcbf
| 2,602
|
py
|
Python
|
2020/day-18/part2.py
|
nairraghav/advent-of-code-2019
|
274a2a4a59a8be39afb323356c592af5e1921e54
|
[
"MIT"
] | null | null | null |
2020/day-18/part2.py
|
nairraghav/advent-of-code-2019
|
274a2a4a59a8be39afb323356c592af5e1921e54
|
[
"MIT"
] | null | null | null |
2020/day-18/part2.py
|
nairraghav/advent-of-code-2019
|
274a2a4a59a8be39afb323356c592af5e1921e54
|
[
"MIT"
] | null | null | null |
def is_int(int_string):
try:
int(int_string)
return True
except:
return False
def calculate(first_number, second_number, operation):
if operation == "+":
return first_number + second_number
elif operation == "*":
return first_number * second_number
def simplify(input_string):
first_number = None
second_number = None
operation = None
input_string = input_string.split()
for character in input_string:
if is_int(character):
if first_number is None:
first_number = int(character)
elif second_number is None:
second_number = int(character)
else:
operation = character
if first_number is not None and second_number is not None:
first_number = calculate(first_number, second_number, operation)
operation = None
second_number = None
return first_number
def simplify_first(input_string):
while "+" in input_string:
starting_index = None
ending_index = None
for index in range(len(input_string)):
if input_string[index] == "+":
starting_subset = input_string[:index-2]
starting_index = starting_subset.rfind(" ")
if starting_index < 0:
starting_index = 0
else:
starting_index += 1
ending_subset = input_string[index+2:]
ending_index = ending_subset.find(" ") - 1
if ending_index < 0:
ending_index = len(ending_subset)
ending_index += index + 2
result = simplify(input_string[starting_index:ending_index+1])
input_string = input_string[:starting_index] + str(result) + input_string[ending_index+1:]
return simplify(input_string)
running_total = 0
with open("input.txt", "r") as puzzle_input:
for line in puzzle_input:
line = line.strip()
while "(" in line or ")" in line:
starting_index = None
ending_index = None
for index in range(len(line)):
if line[index] == "(":
starting_index = index
elif line[index] == ")":
ending_index = index
break
result = simplify_first(line[starting_index+1:ending_index])
line = line[:starting_index] + str(result) + line[ending_index+1:]
print(line)
running_total += simplify_first(line)
print(running_total)
| 32.525
| 98
| 0.575711
|
794c3419880dcd04c4a9a47939d4ea89877048fb
| 896
|
py
|
Python
|
pets/forms.py
|
AlexeyDonskikh/petselection
|
72a148f866b6550f6ccdfb6117acea25404df955
|
[
"MIT"
] | null | null | null |
pets/forms.py
|
AlexeyDonskikh/petselection
|
72a148f866b6550f6ccdfb6117acea25404df955
|
[
"MIT"
] | null | null | null |
pets/forms.py
|
AlexeyDonskikh/petselection
|
72a148f866b6550f6ccdfb6117acea25404df955
|
[
"MIT"
] | null | null | null |
from django import forms
from django.forms import inlineformset_factory
from pets.models import ImagePet, Pet
class PetForm(forms.ModelForm):
class Meta:
model = Pet
fields = ('name', 'age', 'weight', 'species', 'breed', 'description',)
help_texts = {
'name': 'Имя питомца',
'age': 'Возраст питомца',
'weight': 'Вес питомца',
'species': 'Разновидность: собаки, кошки и т.д.',
'breed': 'Порода питомца',
'description': 'Описание питомца',
}
class ImagePetForm(forms.ModelForm):
class Meta:
model = ImagePet
fields = ('image',)
help_texts = {
'image_name': 'Название фотографии',
'image': 'Фото',
}
ImagePetFormSet = inlineformset_factory(Pet, ImagePet,
form=ImagePetForm, extra=1)
| 26.352941
| 78
| 0.551339
|
794c34d82bdeb106bd4ab06b8b2be88c826d162b
| 2,455
|
py
|
Python
|
discord/commands/errors.py
|
br-cz/hockey-db-bot
|
9919ef1416a0ab46bd93c7a177b33955a43cb430
|
[
"MIT"
] | 2
|
2021-08-29T14:03:22.000Z
|
2022-01-27T10:02:48.000Z
|
discord/commands/errors.py
|
mugman174/fosscord
|
3d860731b761c1be116a281303486cefaaa0228e
|
[
"MIT"
] | 5
|
2021-11-02T17:08:59.000Z
|
2022-03-28T07:31:06.000Z
|
discord/commands/errors.py
|
mugman174/fosscord
|
3d860731b761c1be116a281303486cefaaa0228e
|
[
"MIT"
] | 1
|
2021-11-06T14:09:15.000Z
|
2021-11-06T14:09:15.000Z
|
"""
The MIT License (MIT)
Copyright (c) 2021-present Pycord Development
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
"""
from ..errors import DiscordException
__all__ = (
"ApplicationCommandError",
"CheckFailure",
"ApplicationCommandInvokeError",
)
class ApplicationCommandError(DiscordException):
r"""The base exception type for all application command related errors.
This inherits from :exc:`discord.DiscordException`.
This exception and exceptions inherited from it are handled
in a special way as they are caught and passed into a special event
from :class:`.Bot`\, :func:`.on_command_error`.
"""
pass
class CheckFailure(ApplicationCommandError):
"""Exception raised when the predicates in :attr:`.Command.checks` have failed.
This inherits from :exc:`ApplicationCommandError`
"""
pass
class ApplicationCommandInvokeError(ApplicationCommandError):
"""Exception raised when the command being invoked raised an exception.
This inherits from :exc:`ApplicationCommandError`
Attributes
-----------
original: :exc:`Exception`
The original exception that was raised. You can also get this via
the ``__cause__`` attribute.
"""
def __init__(self, e: Exception) -> None:
self.original: Exception = e
super().__init__(f'Application Command raised an exception: {e.__class__.__name__}: {e}')
| 37.769231
| 98
| 0.731161
|
794c350ea069aa341679917a988cf4d82813eb02
| 26,614
|
py
|
Python
|
kairon/shared/account/processor.py
|
encounter-ai/kairon
|
48eccde345781f95f0654813415e52f1b647d557
|
[
"Apache-2.0"
] | null | null | null |
kairon/shared/account/processor.py
|
encounter-ai/kairon
|
48eccde345781f95f0654813415e52f1b647d557
|
[
"Apache-2.0"
] | null | null | null |
kairon/shared/account/processor.py
|
encounter-ai/kairon
|
48eccde345781f95f0654813415e52f1b647d557
|
[
"Apache-2.0"
] | null | null | null |
from datetime import datetime
from typing import Dict, Text
from loguru import logger as logging
from mongoengine.errors import DoesNotExist
from mongoengine.errors import ValidationError
from pydantic import SecretStr
from validators import ValidationFailure
from validators import email as mail_check
from kairon.exceptions import AppException
from kairon.shared.account.data_objects import Account, User, Bot, UserEmailConfirmation, Feedback, UiConfig, \
MailTemplates, SystemProperties, BotAccess
from kairon.shared.actions.data_objects import FormValidationAction, SlotSetAction, EmailActionConfig
from kairon.shared.data.constant import ACCESS_ROLES, ACTIVITY_STATUS
from kairon.shared.data.data_objects import BotSettings, ChatClientConfig, SlotMapping
from kairon.shared.utils import Utility
Utility.load_email_configuration()
class AccountProcessor:
@staticmethod
def add_account(name: str, user: str):
"""
adds a new account
:param name: account name
:param user: user id
:return: account id
"""
if Utility.check_empty_string(name):
raise AppException("Account Name cannot be empty or blank spaces")
Utility.is_exist(
Account,
exp_message="Account name already exists!",
name__iexact=name,
status=True,
)
license = {"bots": 2, "intents": 3, "examples": 20, "training": 3, "augmentation": 5}
return Account(name=name.strip(), user=user, license=license).save().to_mongo().to_dict()
@staticmethod
def get_account(account: int):
"""
fetch account object
:param account: account id
:return: account details
"""
try:
account = Account.objects().get(id=account).to_mongo().to_dict()
return account
except:
raise DoesNotExist("Account does not exists")
@staticmethod
def add_bot(name: str, account: int, user: str, is_new_account: bool = False):
"""
add a bot to account
:param name: bot name
:param account: account id
:param user: user id
:param is_new_account: True if it is a new account
:return: bot id
"""
from kairon.shared.data.processor import MongoProcessor
from kairon.shared.data.data_objects import BotSettings
if Utility.check_empty_string(name):
raise AppException("Bot Name cannot be empty or blank spaces")
if Utility.check_empty_string(user):
raise AppException("user cannot be empty or blank spaces")
Utility.is_exist(
Bot,
exp_message="Bot already exists!",
name__iexact=name,
account=account,
status=True,
)
bot = Bot(name=name, account=account, user=user).save().to_mongo().to_dict()
bot_id = bot['_id'].__str__()
if not is_new_account:
AccountProcessor.allow_access_to_bot(bot_id, user, user, account, ACCESS_ROLES.ADMIN.value, ACTIVITY_STATUS.ACTIVE.value)
BotSettings(bot=bot_id, user=user).save()
processor = MongoProcessor()
config = processor.load_config(bot_id)
processor.add_or_overwrite_config(config, bot_id, user)
processor.add_default_fallback_data(bot_id, user, True, True)
return bot
@staticmethod
def list_bots(account_id: int):
for bot in Bot.objects(account=account_id, status=True):
bot = bot.to_mongo().to_dict()
bot.pop('status')
bot['_id'] = bot['_id'].__str__()
yield bot
@staticmethod
def update_bot(name: Text, bot: Text):
if Utility.check_empty_string(name):
raise AppException('Name cannot be empty')
try:
bot_info = Bot.objects(id=bot, status=True).get()
bot_info.name = name
bot_info.save()
except DoesNotExist:
raise AppException('Bot not found')
@staticmethod
def delete_bot(bot: Text, user: Text):
from kairon.shared.data.data_objects import Intents, Responses, Stories, Configs, Endpoints, Entities, \
EntitySynonyms, Forms, LookupTables, ModelDeployment, ModelTraining, RegexFeatures, Rules, SessionConfigs, \
Slots, TrainingDataGenerator, TrainingExamples
from kairon.shared.test.data_objects import ModelTestingLogs
from kairon.shared.importer.data_objects import ValidationLogs
from kairon.shared.actions.data_objects import HttpActionConfig, ActionServerLogs, Actions
try:
bot_info = Bot.objects(id=bot, status=True).get()
bot_info.status = False
bot_info.save()
Utility.hard_delete_document([
Actions, BotAccess, BotSettings, Configs, ChatClientConfig, Endpoints, Entities, EmailActionConfig,
EntitySynonyms, Forms, FormValidationAction, HttpActionConfig, Intents, LookupTables, RegexFeatures,
Responses, Rules, SlotMapping, SlotSetAction, SessionConfigs, Slots, Stories, TrainingDataGenerator,
TrainingExamples, ActionServerLogs, ModelTraining, ModelTestingLogs, ModelDeployment, ValidationLogs
], bot, user=user)
AccountProcessor.remove_bot_access(bot)
except DoesNotExist:
raise AppException('Bot not found')
@staticmethod
def fetch_role_for_user(email: Text, bot: Text):
try:
return BotAccess.objects(accessor_email=email, bot=bot,
status=ACTIVITY_STATUS.ACTIVE.value).get().to_mongo().to_dict()
except DoesNotExist as e:
logging.error(e)
raise AppException('Access to bot is denied')
@staticmethod
def get_accessible_bot_details(account_id: int, email: Text):
shared_bots = []
account_bots = list(AccountProcessor.list_bots(account_id))
for bot in BotAccess.objects(accessor_email=email, bot_account__ne=account_id,
status=ACTIVITY_STATUS.ACTIVE.value):
bot_details = AccountProcessor.get_bot(bot['bot'])
bot_details.pop('status')
bot_details['_id'] = bot_details['_id'].__str__()
shared_bots.append(bot_details)
return {
'account_owned': account_bots,
'shared': shared_bots
}
@staticmethod
def allow_bot_and_generate_invite_url(bot: Text, email: Text, user: Text, bot_account: int,
role: ACCESS_ROLES = ACCESS_ROLES.TESTER.value):
bot_details = AccountProcessor.allow_access_to_bot(bot, email, user, bot_account, role)
if Utility.email_conf["email"]["enable"]:
token = Utility.generate_token(email)
link = f'{Utility.email_conf["app"]["url"]}/{bot}/invite/accept/{token}'
return bot_details['name'], link
@staticmethod
def allow_access_to_bot(bot: Text, accessor_email: Text, user: Text,
bot_account: int, role: ACCESS_ROLES = ACCESS_ROLES.TESTER.value,
activity_status: ACTIVITY_STATUS = ACTIVITY_STATUS.INVITE_NOT_ACCEPTED.value):
"""
Adds bot to a user account.
:param bot: bot id
:param accessor_email: email id of the new member
:param user: user adding the new member
:param bot_account: account where bot exists
:param activity_status: can be one of active, inactive or deleted.
:param role: can be one of admin, designer or tester.
"""
bot_details = AccountProcessor.get_bot(bot)
Utility.is_exist(BotAccess, 'User is already a collaborator', accessor_email=accessor_email, bot=bot,
status__ne=ACTIVITY_STATUS.DELETED.value)
BotAccess(
accessor_email=accessor_email,
bot=bot,
role=role,
user=user,
bot_account=bot_account,
status=activity_status
).save()
return bot_details
@staticmethod
def update_bot_access(bot: Text, accessor_email: Text, user: Text,
role: ACCESS_ROLES = ACCESS_ROLES.TESTER.value,
status: ACTIVITY_STATUS = ACTIVITY_STATUS.ACTIVE.value):
"""
Adds bot to a user account.
:param bot: bot id
:param accessor_email: email id of the new member
:param user: user adding the new member
:param role: can be one of admin, designer or tester.
:param status: can be one of active, inactive or deleted.
"""
AccountProcessor.get_bot(bot)
try:
bot_access = BotAccess.objects(accessor_email=accessor_email, bot=bot).get()
if Utility.email_conf["email"]["enable"]:
if status != ACTIVITY_STATUS.DELETED.value and bot_access.status == ACTIVITY_STATUS.INVITE_NOT_ACCEPTED.value:
raise AppException('User is yet to accept the invite')
bot_access.role = role
bot_access.user = user
bot_access.status = status
bot_access.timestamp = datetime.utcnow()
bot_access.save()
except DoesNotExist:
raise AppException('User not yet invited to collaborate')
@staticmethod
def accept_bot_access_invite(token: Text, bot: Text):
"""
Activate user's access to bot.
:param token: token sent in the link
:param bot: bot id
"""
bot_details = AccountProcessor.get_bot(bot)
accessor_email = Utility.verify_token(token)
AccountProcessor.get_user_details(accessor_email)
try:
bot_access = BotAccess.objects(accessor_email=accessor_email, bot=bot,
status=ACTIVITY_STATUS.INVITE_NOT_ACCEPTED.value).get()
bot_access.status = ACTIVITY_STATUS.ACTIVE.value
bot_access.accept_timestamp = datetime.utcnow()
bot_access.save()
return bot_access.user, bot_details['name'], bot_access.accessor_email, bot_access.role
except DoesNotExist:
raise AppException('No pending invite found for this bot and user')
@staticmethod
def remove_bot_access(bot: Text, **kwargs):
"""
Removes bot from either for all users or only for user supplied.
:param bot: bot id
:param kwargs: can be either account or email.
"""
if kwargs:
if not Utility.is_exist(BotAccess, None, False, **kwargs, bot=bot, status__ne=ACTIVITY_STATUS.DELETED.value):
raise AppException('User not a collaborator to this bot')
active_bot_access = BotAccess.objects(**kwargs, bot=bot, status__ne=ACTIVITY_STATUS.DELETED.value)
else:
active_bot_access = BotAccess.objects(bot=bot, status__ne=ACTIVITY_STATUS.DELETED.value)
active_bot_access.update(set__status=ACTIVITY_STATUS.DELETED.value)
@staticmethod
def list_bot_accessors(bot: Text):
"""
List users who have access to bot.
:param bot: bot id
"""
for accessor in BotAccess.objects(bot=bot, status__ne=ACTIVITY_STATUS.DELETED.value):
accessor = accessor.to_mongo().to_dict()
accessor['_id'] = accessor['_id'].__str__()
yield accessor
@staticmethod
def get_bot(id: str):
"""
fetches bot details
:param id: bot id
:return: bot details
"""
try:
return Bot.objects().get(id=id).to_mongo().to_dict()
except:
raise DoesNotExist("Bot does not exists!")
@staticmethod
def add_user(
email: str,
password: str,
first_name: str,
last_name: str,
account: int,
user: str,
is_integration_user=False
):
"""
adds new user to the account
:param email: user login id
:param password: user password
:param first_name: user firstname
:param last_name: user lastname
:param account: account id
:param user: user id
:param is_integration_user: is this
:return: user details
"""
if (
Utility.check_empty_string(email)
or Utility.check_empty_string(last_name)
or Utility.check_empty_string(first_name)
or Utility.check_empty_string(password)
):
raise AppException(
"Email, FirstName, LastName and password cannot be empty or blank spaces"
)
Utility.is_exist(
User,
exp_message="User already exists! try with different email address.",
email__iexact=email.strip(),
status=True,
)
return (
User(
email=email.strip(),
password=Utility.get_password_hash(password.strip()),
first_name=first_name.strip(),
last_name=last_name.strip(),
account=account,
user=user.strip(),
is_integration_user=is_integration_user,
)
.save()
.to_mongo()
.to_dict()
)
@staticmethod
def get_user(email: str):
"""
fetch user details
:param email: user login id
:return: user details
"""
try:
return User.objects().get(email=email).to_mongo().to_dict()
except Exception as e:
logging.error(e)
raise DoesNotExist("User does not exist!")
@staticmethod
def get_user_details(email: str):
"""
fetches complete user details, checks for whether it is inactive
:param email: login id
:return: dict
"""
user = AccountProcessor.get_user(email)
if not user["is_integration_user"]:
AccountProcessor.check_email_confirmation(user["email"])
if not user["status"]:
raise ValidationError("Inactive User please contact admin!")
account = AccountProcessor.get_account(user["account"])
if not account["status"]:
raise ValidationError("Inactive Account Please contact system admin!")
return user
@staticmethod
def get_complete_user_details(email: str):
"""
fetches complete user details including account and bot
:param email: login id
:return: dict
"""
user = AccountProcessor.get_user(email)
account = AccountProcessor.get_account(user["account"])
bots = AccountProcessor.get_accessible_bot_details(user["account"], email)
user["account_name"] = account["name"]
user['bots'] = bots
user["_id"] = user["_id"].__str__()
user.pop('password')
return user
@staticmethod
def get_integration_user(bot: str, account: int):
"""
creates integration user if it does not exist
:param bot: bot id
:param account: account id
:return: dict
"""
email = f"{bot}@integration.com"
if not Utility.is_exist(
User, raise_error=False, email=email, is_integration_user=True, status=True
):
password = Utility.generate_password()
user_details = AccountProcessor.add_user(
email=email,
password=password,
first_name=bot,
last_name=bot,
account=account,
user="auto_gen",
is_integration_user=True,
)
AccountProcessor.allow_access_to_bot(bot, email.strip(), "auto_gen", account,
ACCESS_ROLES.ADMIN.value, ACTIVITY_STATUS.ACTIVE.value)
return user_details
else:
return (
User.objects(email=email).get(is_integration_user=True).to_mongo().to_dict()
)
@staticmethod
async def account_setup(account_setup: Dict, user: Text):
"""
create new account
:param account_setup: dict of account details
:param user: user id
:return: dict user details, user email id, confirmation mail subject, mail body
"""
from kairon.shared.data.processor import MongoProcessor
account = None
bot = None
mail_to = None
email_enabled = Utility.email_conf["email"]["enable"]
link = None
try:
account = AccountProcessor.add_account(account_setup.get("account"), user)
bot = AccountProcessor.add_bot('Hi-Hello', account["_id"], user, True)
user_details = AccountProcessor.add_user(
email=account_setup.get("email"),
first_name=account_setup.get("first_name"),
last_name=account_setup.get("last_name"),
password=account_setup.get("password").get_secret_value(),
account=account["_id"].__str__(),
user=user
)
AccountProcessor.allow_access_to_bot(bot["_id"].__str__(), account_setup.get("email"),
account_setup.get("email"), account['_id'],
ACCESS_ROLES.ADMIN.value, ACTIVITY_STATUS.ACTIVE.value)
await MongoProcessor().save_from_path(
"template/use-cases/Hi-Hello", bot["_id"].__str__(), user="sysadmin"
)
if email_enabled:
token = Utility.generate_token(account_setup.get("email"))
link = Utility.email_conf["app"]["url"] + '/verify/' + token
mail_to = account_setup.get("email")
except Exception as e:
if account and "_id" in account:
Account.objects().get(id=account["_id"]).delete()
if bot and "_id" in bot:
Bot.objects().get(id=bot["_id"]).delete()
raise e
return user_details, mail_to, link
@staticmethod
async def default_account_setup():
"""
default account for testing/demo purposes
:return: user details
:raises: if account already exist
"""
account = {
"account": "DemoAccount",
"bot": "Demo",
"email": "test@demo.in",
"first_name": "Test_First",
"last_name": "Test_Last",
"password": SecretStr("Changeit@123"),
}
try:
user, mail, link = await AccountProcessor.account_setup(account, user="sysadmin")
return user, mail, link
except Exception as e:
logging.info(str(e))
@staticmethod
def load_system_properties():
try:
system_properties = SystemProperties.objects().get().to_mongo().to_dict()
except DoesNotExist:
mail_templates = MailTemplates(
password_reset=open('template/emails/passwordReset.html', 'r').read(),
password_reset_confirmation=open('template/emails/passwordResetConfirmation.html', 'r').read(),
verification=open('template/emails/verification.html', 'r').read(),
verification_confirmation=open('template/emails/verificationConfirmation.html', 'r').read(),
add_member_invitation=open('template/emails/memberAddAccept.html', 'r').read(),
add_member_confirmation=open('template/emails/memberAddConfirmation.html', 'r').read(),
password_generated=open('template/emails/passwordGenerated.html', 'r').read(),
)
system_properties = SystemProperties(mail_templates=mail_templates).save().to_mongo().to_dict()
Utility.email_conf['email']['templates']['verification'] = system_properties['mail_templates']['verification']
Utility.email_conf['email']['templates']['verification_confirmation'] = system_properties['mail_templates']['verification_confirmation']
Utility.email_conf['email']['templates']['password_reset'] = system_properties['mail_templates']['password_reset']
Utility.email_conf['email']['templates']['password_reset_confirmation'] = system_properties['mail_templates']['password_reset_confirmation']
Utility.email_conf['email']['templates']['add_member_invitation'] = system_properties['mail_templates']['add_member_invitation']
Utility.email_conf['email']['templates']['add_member_confirmation'] = system_properties['mail_templates']['add_member_confirmation']
Utility.email_conf['email']['templates']['password_generated'] = system_properties['mail_templates']['password_generated']
@staticmethod
async def confirm_email(token: str):
"""
Confirms the user through link and updates the database
:param token: the token from link
:return: mail id, subject of mail, body of mail
"""
email_confirm = Utility.verify_token(token)
Utility.is_exist(
UserEmailConfirmation,
exp_message="Email already confirmed!",
email__iexact=email_confirm.strip(),
)
confirm = UserEmailConfirmation()
confirm.email = email_confirm
confirm.save()
user = AccountProcessor.get_user(email_confirm)
return email_confirm, user['first_name']
@staticmethod
def is_user_confirmed(email: str):
"""
Checks if user is verified and raises an Exception if not
:param email: mail id of user
:return: None
"""
if not Utility.is_exist(UserEmailConfirmation, email__iexact=email.strip(), raise_error=False):
raise AppException("Please verify your mail")
@staticmethod
def check_email_confirmation(email: str):
"""
Checks if the account is verified through mail
:param email: email of the user
:return: None
"""
email_enabled = Utility.email_conf["email"]["enable"]
if email_enabled:
AccountProcessor.is_user_confirmed(email)
@staticmethod
async def send_reset_link(mail: str):
"""
Sends a password reset link to the mail id
:param mail: email id of the user
:return: mail id, mail subject, mail body
"""
email_enabled = Utility.email_conf["email"]["enable"]
if email_enabled:
if isinstance(mail_check(mail), ValidationFailure):
raise AppException("Please enter valid email id")
if not Utility.is_exist(User, email__iexact=mail.strip(), raise_error=False):
raise AppException("Error! There is no user with the following mail id")
if not Utility.is_exist(UserEmailConfirmation, email__iexact=mail.strip(), raise_error=False):
raise AppException("Error! The following user's mail is not verified")
token = Utility.generate_token(mail)
user = AccountProcessor.get_user(mail)
link = Utility.email_conf["app"]["url"] + '/reset_password/' + token
return mail, user['first_name'], link
else:
raise AppException("Error! Email verification is not enabled")
@staticmethod
async def overwrite_password(token: str, password: str):
"""
Changes the user's password
:param token: unique token from the password reset page
:param password: new password entered by the user
:return: mail id, mail subject and mail body
"""
if Utility.check_empty_string(password):
raise AppException("password cannot be empty or blank")
email = Utility.verify_token(token)
user = User.objects().get(email=email)
user.password = Utility.get_password_hash(password.strip())
user.user = email
user.password_changed = datetime.utcnow
user.save()
return email, user.first_name
@staticmethod
async def send_confirmation_link(mail: str):
"""
Sends a link to the user's mail id for account verification
:param mail: the mail id of the user
:return: mail id, mail subject and mail body
"""
email_enabled = Utility.email_conf["email"]["enable"]
if email_enabled:
if isinstance(mail_check(mail), ValidationFailure):
raise AppException("Please enter valid email id")
Utility.is_exist(UserEmailConfirmation, exp_message="Email already confirmed!", email__iexact=mail.strip())
if not Utility.is_exist(User, email__iexact=mail.strip(), raise_error=False):
raise AppException("Error! There is no user with the following mail id")
user = AccountProcessor.get_user(mail)
token = Utility.generate_token(mail)
link = Utility.email_conf["app"]["url"] + '/verify/' + token
return mail, user['first_name'], link
else:
raise AppException("Error! Email verification is not enabled")
@staticmethod
def add_feedback(rating: float, user: str, scale: float = 5.0, feedback: str = None):
"""
Add user feedback.
@param rating: user given rating.
@param user: Kairon username.
@param scale: Scale on which rating is given. %.0 is the default value.
@param feedback: feedback if any.
@return:
"""
Feedback(rating=rating, scale=scale, feedback=feedback, user=user).save()
@staticmethod
def update_ui_config(config: dict, user: str):
"""
Adds UI configuration such as themes, layout type, flags for stepper
to render UI components based on it.
@param config: UI configuration to save.
@param user: username
"""
try:
ui_config = UiConfig.objects(user=user).get()
except DoesNotExist:
ui_config = UiConfig(user=user)
ui_config.config = config
ui_config.save()
@staticmethod
def get_ui_config(user: str):
"""
Retrieves UI configuration such as themes, layout type, flags for stepper
to render UI components based on it.
@param user: username
"""
try:
ui_config = UiConfig.objects(user=user).get()
config = ui_config.config
except DoesNotExist:
config = {}
AccountProcessor.update_ui_config(config, user)
return config
| 40.14178
| 148
| 0.617232
|
794c3524fbb485bf3059528a66a2992f53743214
| 24,337
|
py
|
Python
|
alectryon/cli.py
|
start974/alectryon
|
df5664e71c1026af4aaf69e6b227d427a728e7c6
|
[
"MIT"
] | null | null | null |
alectryon/cli.py
|
start974/alectryon
|
df5664e71c1026af4aaf69e6b227d427a728e7c6
|
[
"MIT"
] | null | null | null |
alectryon/cli.py
|
start974/alectryon
|
df5664e71c1026af4aaf69e6b227d427a728e7c6
|
[
"MIT"
] | null | null | null |
# Copyright © 2019 Clément Pit-Claudel
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import argparse
import inspect
import os
import os.path
import shutil
import sys
# Pipelines
# =========
def read_plain(_, fpath, fname):
if fname == "-":
return sys.stdin.read()
with open(fpath, encoding="utf-8") as f:
return f.read()
def read_json(_, fpath, fname):
from json import load
if fname == "-":
return load(sys.stdin)
with open(fpath, encoding="utf-8") as f:
return load(f)
def parse_coq_plain(contents):
return [contents]
def _catch_parsing_errors(fpath, k, *args):
from .literate import ParsingError
try:
return k(*args)
except ParsingError as e:
raise ValueError("{}:{}".format(fpath, e))
def coq_to_rst(coq, fpath, point, marker):
from .literate import coq2rst_marked
return _catch_parsing_errors(fpath, coq2rst_marked, coq, point, marker)
def rst_to_coq(coq, fpath, point, marker):
from .literate import rst2coq_marked
return _catch_parsing_errors(fpath, rst2coq_marked, coq, point, marker)
def annotate_chunks(chunks, sertop_args):
from .core import annotate
return annotate(chunks, sertop_args)
def register_docutils(v, sertop_args):
from .docutils import setup, AlectryonTransform
AlectryonTransform.SERTOP_ARGS = sertop_args
setup()
return v
def _gen_docutils_html(source, fpath,
webpage_style, include_banner, include_vernums,
html_assets, traceback, Parser, Reader):
from docutils.core import publish_string
from .docutils import HtmlTranslator, HtmlWriter
# The encoding/decoding dance below happens because setting output_encoding
# to "unicode" causes reST to generate a bad <meta> tag, and setting
# input_encoding to "unicode" breaks the ‘.. include’ directive.
html_assets.extend(HtmlTranslator.JS + HtmlTranslator.CSS)
settings_overrides = {
'traceback': traceback,
'embed_stylesheet': False,
'stylesheet_path': None,
'stylesheet_dirs': [],
'alectryon_banner': include_banner,
'alectryon_vernums': include_vernums,
'webpage_style': webpage_style,
'input_encoding': 'utf-8',
'output_encoding': 'utf-8'
}
parser = Parser()
return publish_string(
source=source.encode("utf-8"),
source_path=fpath, destination_path=None,
reader=Reader(parser), reader_name=None,
parser=parser, parser_name=None,
writer=HtmlWriter(), writer_name=None,
settings=None, settings_spec=None,
settings_overrides=settings_overrides, config_section=None,
enable_exit_status=True).decode("utf-8")
def gen_rstcoq_html(coq, fpath, webpage_style,
include_banner, include_vernums,
html_assets, traceback):
from .docutils import RSTCoqParser, RSTCoqStandaloneReader
return _gen_docutils_html(coq, fpath, webpage_style,
include_banner, include_vernums,
html_assets, traceback,
RSTCoqParser, RSTCoqStandaloneReader)
def gen_rst_html(rst, fpath, webpage_style,
include_banner, include_vernums,
html_assets, traceback):
from docutils.parsers.rst import Parser
from docutils.readers.standalone import Reader
return _gen_docutils_html(rst, fpath, webpage_style,
include_banner, include_vernums,
html_assets, traceback,
Parser, Reader)
def _docutils_cmdline(description, Reader, Parser):
import locale
locale.setlocale(locale.LC_ALL, '')
from docutils.core import publish_cmdline, default_description
from .docutils import setup, HtmlWriter
setup()
parser = Parser()
publish_cmdline(
reader=Reader(parser), parser=parser,
writer=HtmlWriter(),
settings_overrides={'stylesheet_path': None},
description=(description + default_description)
)
def _lint_docutils(source, fpath, Parser, traceback):
from io import StringIO
from docutils.utils import new_document
from docutils.frontend import OptionParser
from docutils.utils import Reporter
from .docutils import JsErrorPrinter
parser = Parser()
settings = OptionParser(components=(Parser,)).get_default_values()
settings.traceback = traceback
observer = JsErrorPrinter(StringIO(), settings)
document = new_document(fpath, settings)
document.reporter.report_level = 0 # Report all messages
document.reporter.halt_level = Reporter.SEVERE_LEVEL + 1 # Do not exit early
document.reporter.stream = False # Disable textual reporting
document.reporter.attach_observer(observer)
parser.parse(source, document)
return observer.stream.getvalue()
def lint_rstcoq(coq, fpath, traceback):
from .docutils import RSTCoqParser
return _lint_docutils(coq, fpath, RSTCoqParser, traceback)
def lint_rst(rst, fpath, traceback):
from docutils.parsers.rst import Parser
return _lint_docutils(rst, fpath, Parser, traceback)
def _scrub_fname(fname):
import re
return re.sub("[^-a-zA-Z0-9]", "-", fname)
def gen_html_snippets(annotated, include_vernums, fname):
from .html import HtmlGenerator
from .pygments import highlight_html
return HtmlGenerator(highlight_html, _scrub_fname(fname)).gen(annotated)
def gen_latex_snippets(annotated):
from .latex import LatexGenerator
from .pygments import highlight_latex
return LatexGenerator(highlight_latex).gen(annotated)
COQDOC_OPTIONS = ['--body-only', '--no-glob', '--no-index', '--no-externals',
'-s', '--html', '--stdout', '--utf8']
def _run_coqdoc(coq_snippets, coqdoc_bin=None):
"""Get the output of coqdoc on coq_code."""
from shutil import rmtree
from tempfile import mkstemp, mkdtemp
from subprocess import check_output
coqdoc_bin = coqdoc_bin or os.path.join(os.getenv("COQBIN", ""), "coqdoc")
dpath = mkdtemp(prefix="alectryon_coqdoc_")
fd, filename = mkstemp(prefix="alectryon_coqdoc_", suffix=".v", dir=dpath)
try:
for snippet in coq_snippets:
os.write(fd, snippet.encode("utf-8"))
os.write(fd, b"\n(* --- *)\n") # Separator to prevent fusing
os.close(fd)
coqdoc = [coqdoc_bin, *COQDOC_OPTIONS, "-d", dpath, filename]
return check_output(coqdoc, cwd=dpath, timeout=10).decode("utf-8")
finally:
rmtree(dpath)
def _gen_coqdoc_html(coqdoc_fragments):
from bs4 import BeautifulSoup
coqdoc_output = _run_coqdoc(fr.contents for fr in coqdoc_fragments)
soup = BeautifulSoup(coqdoc_output, "html.parser")
docs = soup.find_all(class_='doc')
if len(docs) != sum(1 for c in coqdoc_fragments if not c.special):
from pprint import pprint
print("Coqdoc mismatch:", file=sys.stderr)
pprint(list(zip(coqdoc_comments, docs)))
raise AssertionError()
return docs
def _gen_html_snippets_with_coqdoc(annotated, fname):
from dominate.util import raw
from .html import HtmlGenerator
from .pygments import highlight_html
from .transforms import isolate_coqdoc, default_transform, CoqdocFragment
writer = HtmlGenerator(highlight_html, _scrub_fname(fname))
parts = [part for fragments in annotated
for part in isolate_coqdoc(fragments)]
coqdoc = [part for part in parts
if isinstance(part, CoqdocFragment)]
coqdoc_html = iter(_gen_coqdoc_html(coqdoc))
for part in parts:
if isinstance(part, CoqdocFragment):
if not part.special:
yield [raw(str(next(coqdoc_html, None)))]
else:
fragments = default_transform(part.fragments)
yield writer.gen_fragments(fragments)
def gen_html_snippets_with_coqdoc(annotated, html_classes, fname):
html_classes.append("coqdoc")
# ‘return’ instead of ‘yield from’ to update html_classes eagerly
return _gen_html_snippets_with_coqdoc(annotated, fname)
def copy_assets(state, html_assets, copy_fn, output_directory):
from .html import copy_assets as cp
if copy_fn:
cp(output_directory, assets=html_assets, copy_fn=copy_fn)
return state
def dump_html_standalone(snippets, fname, webpage_style,
include_banner, include_vernums,
html_assets, html_classes):
from dominate import tags, document
from dominate.util import raw
from . import GENERATOR
from .core import SerAPI
from .pygments import HTML_FORMATTER
from .html import ASSETS, ADDITIONAL_HEADS, gen_banner, wrap_classes
doc = document(title=fname)
doc.set_attribute("class", "alectryon-standalone")
doc.head.add(tags.meta(charset="utf-8"))
doc.head.add(tags.meta(name="generator", content=GENERATOR))
for hd in ADDITIONAL_HEADS:
doc.head.add(raw(hd))
for css in ASSETS.ALECTRYON_CSS:
doc.head.add(tags.link(rel="stylesheet", href=css))
for link in (ASSETS.IBM_PLEX_CDN, ASSETS.FIRA_CODE_CDN):
doc.head.add(raw(link))
for js in ASSETS.ALECTRYON_JS:
doc.head.add(tags.script(src=js))
html_assets.extend(ASSETS.ALECTRYON_CSS)
html_assets.extend(ASSETS.ALECTRYON_JS)
pygments_css = HTML_FORMATTER.get_style_defs('.highlight')
doc.head.add(tags.style(pygments_css, type="text/css"))
cls = wrap_classes(webpage_style, *html_classes)
root = doc.body.add(tags.article(cls=cls))
if include_banner:
root.add(raw(gen_banner(SerAPI.version_info(), include_vernums)))
for snippet in snippets:
root.add(snippet)
return doc.render(pretty=False)
def prepare_json(obj):
from .json import json_of_annotated
return json_of_annotated(obj)
def dump_json(js):
from json import dumps
return dumps(js, indent=4)
def dump_html_snippets(snippets):
s = ""
for snippet in snippets:
s += snippet.render(pretty=True)
s += "<!-- alectryon-block-end -->\n"
return s
def dump_latex_snippets(snippets):
s = ""
for snippet in snippets:
s += str(snippet)
s += "\n%% alectryon-block-end\n"
return s
def strip_extension(fname):
for ext in EXTENSIONS:
if fname.endswith(ext):
return fname[:-len(ext)]
return fname
def write_output(ext, contents, fname, output, output_directory):
if output == "-" or (output is None and fname == "-"):
sys.stdout.write(contents)
else:
if not output:
output = os.path.join(output_directory, strip_extension(fname) + ext)
with open(output, mode="w", encoding="utf-8") as f:
f.write(contents)
def write_file(ext):
return lambda contents, fname, output, output_directory: \
write_output(ext, contents, fname, output, output_directory)
PIPELINES = {
'json': {
'json': (read_json, annotate_chunks,
prepare_json, dump_json, write_file(".io.json")),
'snippets-html': (read_json, annotate_chunks, gen_html_snippets,
dump_html_snippets, write_file(".snippets.html")),
'snippets-latex': (read_json, annotate_chunks, gen_latex_snippets,
dump_latex_snippets, write_file(".snippets.tex"))
},
'coq': {
'null': (read_plain, parse_coq_plain, annotate_chunks),
'webpage': (read_plain, parse_coq_plain, annotate_chunks,
gen_html_snippets, dump_html_standalone, copy_assets,
write_file(".v.html")),
'snippets-html': (read_plain, parse_coq_plain, annotate_chunks,
gen_html_snippets, dump_html_snippets,
write_file(".snippets.html")),
'snippets-latex': (read_plain, parse_coq_plain, annotate_chunks,
gen_latex_snippets, dump_latex_snippets,
write_file(".snippets.tex")),
'lint': (read_plain, register_docutils, lint_rstcoq,
write_file(".lint.json")),
'rst': (read_plain, coq_to_rst, write_file(".v.rst")),
'json': (read_plain, parse_coq_plain, annotate_chunks, prepare_json,
dump_json, write_file(".io.json"))
},
'coq+rst': {
'webpage': (read_plain, register_docutils, gen_rstcoq_html, copy_assets,
write_file(".html")),
'lint': (read_plain, register_docutils, lint_rstcoq,
write_file(".lint.json")),
'rst': (read_plain, coq_to_rst, write_file(".v.rst"))
},
'coqdoc': {
'webpage': (read_plain, parse_coq_plain, annotate_chunks,
gen_html_snippets_with_coqdoc, dump_html_standalone,
copy_assets, write_file(".html")),
},
'rst': {
'webpage': (read_plain, register_docutils, gen_rst_html, copy_assets,
write_file(".html")),
'lint': (read_plain, register_docutils, lint_rst,
write_file(".lint.json")),
'coq': (read_plain, rst_to_coq, write_file(".v")),
'coq+rst': (read_plain, rst_to_coq, write_file(".v"))
}
}
# CLI
# ===
EXTENSIONS = ['.v', '.json', '.v.rst', '.rst']
FRONTENDS_BY_EXTENSION = [
('.v', 'coq+rst'), ('.json', 'json'), ('.rst', 'rst')
]
BACKENDS_BY_EXTENSION = [
('.v', 'coq'), ('.json', 'json'), ('.rst', 'rst'),
('.lint.json', 'lint'),
('.snippets.html', 'snippets-html'),
('.snippets.tex', 'snippets-latex'),
('.v.html', 'webpage'), ('.html', 'webpage')
]
DEFAULT_BACKENDS = {
'json': 'json',
'coq': 'webpage',
'coqdoc': 'webpage',
'coq+rst': 'webpage',
'rst': 'webpage'
}
def infer_mode(fpath, kind, arg, table):
for (ext, mode) in table:
if fpath.endswith(ext):
return mode
MSG = """{}: Not sure what to do with {!r}.
Try passing {}?"""
raise argparse.ArgumentTypeError(MSG.format(kind, fpath, arg))
def infer_frontend(fpath):
return infer_mode(fpath, "input", "--frontend", FRONTENDS_BY_EXTENSION)
def infer_backend(frontend, out_fpath):
if out_fpath:
return infer_mode(out_fpath, "output", "--backend", BACKENDS_BY_EXTENSION)
return DEFAULT_BACKENDS[frontend]
def resolve_pipeline(fpath, args):
frontend = args.frontend or infer_frontend(fpath)
assert frontend in PIPELINES
supported_backends = PIPELINES[frontend]
backend = args.backend or infer_backend(frontend, args.output)
if backend not in supported_backends:
MSG = """argument --backend: Frontend {!r} does not support backend {!r}: \
expecting one of {}"""
raise argparse.ArgumentTypeError(MSG.format(
frontend, backend, ", ".join(map(repr, supported_backends))))
return supported_backends[backend]
COPY_FUNCTIONS = {
"copy": shutil.copy,
"symlink": os.symlink,
"hardlink": os.link,
"none": None
}
def post_process_arguments(parser, args):
if len(args.input) > 1 and args.output:
parser.error("argument --output: Not valid with multiple inputs")
if args.stdin_filename and "-" not in args.input:
parser.error("argument --stdin-filename: input must be '-'")
for dirpath in args.coq_args_I:
args.sertop_args.extend(("-I", dirpath))
for pair in args.coq_args_R:
args.sertop_args.extend(("-R", ",".join(pair)))
for pair in args.coq_args_Q:
args.sertop_args.extend(("-Q", ",".join(pair)))
# argparse applies ‘type’ before ‘choices’, so we do the conversion here
args.copy_fn = COPY_FUNCTIONS[args.copy_fn]
args.point, args.marker = args.mark_point
if args.point is not None:
try:
args.point = int(args.point)
except ValueError:
MSG = "argument --mark-point: Expecting a number, not {!r}"
parser.error(MSG.format(args.point))
args.html_assets = []
args.html_classes = []
args.pipelines = [(fpath, resolve_pipeline(fpath, args))
for fpath in args.input]
return args
def build_parser():
parser = argparse.ArgumentParser(description="""\
Annotate segments of Coq code with responses and goals.
Take input in Coq, reStructuredText, or JSON format \
and produce reStructuredText, HTML, or JSON output.""")
INPUT_HELP = "Configure the input."
out = parser.add_argument_group("Input arguments", INPUT_HELP)
INPUT_FILES_HELP = "Input files"
parser.add_argument("input", nargs="+", help=INPUT_FILES_HELP)
INPUT_STDIN_NAME_HELP = "Name of file passed on stdin, if any"
parser.add_argument("--stdin-filename", default=None,
help=INPUT_STDIN_NAME_HELP)
FRONTEND_HELP = "Choose a frontend. Defaults: "
FRONTEND_HELP += "; ".join("{!r} → {}".format(ext, frontend)
for ext, frontend in FRONTENDS_BY_EXTENSION)
FRONTEND_CHOICES = sorted(PIPELINES.keys())
out.add_argument("--frontend", default=None, choices=FRONTEND_CHOICES,
help=FRONTEND_HELP)
OUTPUT_HELP = "Configure the output."
out = parser.add_argument_group("Output arguments", OUTPUT_HELP)
BACKEND_HELP = "Choose a backend. Supported: "
BACKEND_HELP += "; ".join(
"{} → {{{}}}".format(frontend, ", ".join(sorted(backends)))
for frontend, backends in PIPELINES.items())
BACKEND_CHOICES = sorted(set(b for _, bs in PIPELINES.items() for b in bs))
out.add_argument("--backend", default=None, choices=BACKEND_CHOICES,
help=BACKEND_HELP)
OUT_FILE_HELP = "Set the output file (default: computed based on INPUT)."
parser.add_argument("-o", "--output", default=None,
help=OUT_FILE_HELP)
OUT_DIR_HELP = "Set the output directory (default: same as each INPUT)."
parser.add_argument("--output-directory", default=None,
help=OUT_DIR_HELP)
COPY_ASSETS_HELP = ("Chose the method to use to copy assets " +
"along the generated file(s) when creating webpages.")
parser.add_argument("--copy-assets", choices=list(COPY_FUNCTIONS.keys()),
default="copy", dest="copy_fn",
help=COPY_ASSETS_HELP)
CACHE_DIRECTORY_HELP = ("Cache Coq's output in DIRECTORY.")
parser.add_argument("--cache-directory", default=None, metavar="DIRECTORY",
help=CACHE_DIRECTORY_HELP)
NO_HEADER_HELP = "Do not insert a header with usage instructions in webpages."
parser.add_argument("--no-header", action='store_false',
dest="include_banner", default="True",
help=NO_HEADER_HELP)
NO_VERSION_NUMBERS = "Omit version numbers in meta tags and headers."
parser.add_argument("--no-version-numbers", action='store_false',
dest="include_vernums", default=True,
help=NO_VERSION_NUMBERS)
WEBPAGE_STYLE_HELP = "Choose a style for standalone webpages."
WEBPAGE_STYLE_CHOICES = ("centered", "floating", "windowed")
parser.add_argument("--webpage-style", default="centered",
choices=WEBPAGE_STYLE_CHOICES,
help=WEBPAGE_STYLE_HELP)
MARK_POINT_HELP = "Mark a point in the output with a given marker."
parser.add_argument("--mark-point", nargs=2, default=(None, None),
metavar=("POINT", "MARKER"),
help=MARK_POINT_HELP)
SUBP_HELP = "Pass arguments to the SerAPI process"
subp = parser.add_argument_group("Subprocess arguments", SUBP_HELP)
SERTOP_ARGS_HELP = "Pass a single argument to SerAPI (e.g. -Q dir,lib)."
subp.add_argument("--sertop-arg", dest="sertop_args",
action="append", default=[],
metavar="SERAPI_ARG",
help=SERTOP_ARGS_HELP)
I_HELP = "Pass -I DIR to the SerAPI subprocess."
subp.add_argument("-I", "--ml-include-path", dest="coq_args_I",
metavar="DIR", nargs=1, action="append",
default=[], help=I_HELP)
Q_HELP = "Pass -Q DIR COQDIR to the SerAPI subprocess."
subp.add_argument("-Q", "--load-path", dest="coq_args_Q",
metavar=("DIR", "COQDIR"), nargs=2, action="append",
default=[], help=Q_HELP)
R_HELP = "Pass -R DIR COQDIR to the SerAPI subprocess."
subp.add_argument("-R", "--rec-load-path", dest="coq_args_R",
metavar=("DIR", "COQDIR"), nargs=2, action="append",
default=[], help=R_HELP)
EXPECT_UNEXPECTED_HELP = "Ignore unexpected output from SerAPI"
parser.add_argument("--expect-unexpected", action="store_true",
default=False, help=EXPECT_UNEXPECTED_HELP)
DEBUG_HELP = "Print communications with SerAPI."
parser.add_argument("--debug", action="store_true",
default=False, help=DEBUG_HELP)
TRACEBACK_HELP = "Print error traces."
parser.add_argument("--traceback", action="store_true",
default=False, help=TRACEBACK_HELP)
return parser
def parse_arguments():
parser = build_parser()
return post_process_arguments(parser, parser.parse_args())
# Entry point
# ===========
def call_pipeline_step(step, state, ctx):
params = list(inspect.signature(step).parameters.keys())[1:]
return step(state, **{p: ctx[p] for p in params})
def build_context(fpath, args):
if fpath == "-":
fname, fpath = "-", (args.stdin_filename or "-")
else:
fname = os.path.basename(fpath)
ctx = {"fpath": fpath, "fname": fname, **vars(args)}
if args.output_directory is None:
if fname == "-":
ctx["output_directory"] = "."
else:
ctx["output_directory"] = os.path.dirname(os.path.abspath(fpath))
return ctx
def process_pipelines(args):
if args.debug:
from . import core
core.DEBUG = True
if args.cache_directory:
from . import docutils
docutils.CACHE_DIRECTORY = args.cache_directory
if args.expect_unexpected:
from . import core
core.SerAPI.EXPECT_UNEXPECTED = True
try:
for fpath, pipeline in args.pipelines:
state, ctx = None, build_context(fpath, args)
for step in pipeline:
state = call_pipeline_step(step, state, ctx)
except (ValueError, FileNotFoundError) as e:
if args.traceback:
raise e
print("Exiting early due to an error:", file=sys.stderr)
print(str(e), file=sys.stderr)
sys.exit(1)
def main():
args = parse_arguments()
process_pipelines(args)
# Alternative CLIs
# ================
def rstcoq2html():
from .docutils import RSTCoqStandaloneReader, RSTCoqParser
DESCRIPTION = 'Build an HTML document from an Alectryon Coq file.'
_docutils_cmdline(DESCRIPTION, RSTCoqStandaloneReader, RSTCoqParser)
def coqrst2html():
from docutils.parsers.rst import Parser
from docutils.readers.standalone import Reader
DESCRIPTION = 'Build an HTML document from an Alectryon reStructuredText file.'
_docutils_cmdline(DESCRIPTION, Reader, Parser)
| 36.986322
| 83
| 0.653285
|
794c3672463512789e30441d4fdd1a2878d00d41
| 5,788
|
py
|
Python
|
tests/testapp/test_forms.py
|
matthiask/towel
|
438bd76ca5f4a464464c5ccc6ccd04d6370df91a
|
[
"BSD-3-Clause"
] | 46
|
2015-01-29T03:50:22.000Z
|
2021-10-01T01:36:46.000Z
|
tests/testapp/test_forms.py
|
matthiask/towel
|
438bd76ca5f4a464464c5ccc6ccd04d6370df91a
|
[
"BSD-3-Clause"
] | 6
|
2015-04-08T13:25:17.000Z
|
2017-03-18T14:04:50.000Z
|
tests/testapp/test_forms.py
|
matthiask/towel
|
438bd76ca5f4a464464c5ccc6ccd04d6370df91a
|
[
"BSD-3-Clause"
] | 6
|
2015-06-05T14:23:51.000Z
|
2020-03-26T19:33:08.000Z
|
from datetime import timedelta
from django.test import TestCase
from django.urls import reverse
from django.utils import timezone
from testapp.models import Message, Person
class FormsTest(TestCase):
def test_warningsform(self):
person = Person.objects.create()
emailaddress = person.emailaddress_set.create()
self.assertEqual(self.client.get(person.urls["message"]).status_code, 200)
self.assertEqual(self.client.post(person.urls["message"]).status_code, 200)
response = self.client.post(
person.urls["message"],
{"sent_to": emailaddress.pk, "message": "Hallo Welt"},
)
self.assertRedirects(response, person.urls["detail"])
self.assertEqual(Message.objects.count(), 1)
response = self.client.post(
person.urls["message"], {"sent_to": emailaddress.pk, "message": " "}
)
self.assertEqual(response.status_code, 200)
self.assertContains(response, "Please review the following warnings:")
response = self.client.post(
person.urls["message"],
{"sent_to": emailaddress.pk, "message": " hello ", "ignore_warnings": 1},
)
self.assertRedirects(response, person.urls["detail"])
self.assertEqual(Message.objects.count(), 2)
def test_searchform(self):
date = timezone.now().replace(year=2012, month=10, day=1)
for i in range(100):
Person.objects.create(
given_name="Given %s" % i,
family_name="Family %s" % i,
is_active=bool(i % 3),
created=date + timedelta(days=i),
)
list_url = reverse("testapp_person_list")
self.assertContains(
self.client.get(list_url),
"<span>1 - 5 / 100</span>",
)
self.assertContains(
self.client.get(list_url + "?query=42"),
"<span>1 - 1 / 1</span>",
)
self.assertContains(
self.client.get(list_url + "?query=is:active"),
"<span>1 - 5 / 66</span>",
)
self.assertContains(
self.client.get(list_url + "?query=is:inactive"),
"<span>1 - 5 / 34</span>",
)
self.assertContains(
self.client.get(list_url + "?query=active:yes"),
"<span>1 - 5 / 66</span>",
)
self.assertContains(
self.client.get(list_url + "?query=active:off"),
"<span>1 - 5 / 34</span>",
)
self.assertContains(
self.client.get(list_url + "?query=year:2012"),
"<span>1 - 5 / 92</span>",
)
self.assertContains(
self.client.get(list_url + '?query="Given+1"+year%3A2012'),
"<span>1 - 5 / 11</span>",
)
self.assertContains(
self.client.get(list_url + '?query="%2BGiven+1"+year%3A2012'),
"<span>1 - 5 / 11</span>",
)
self.assertContains(
self.client.get(list_url + '?query="-Given+1"+year%3A2012'),
"<span>1 - 5 / 81</span>",
)
# Form field
self.assertContains(
self.client.get(list_url + "?is_active=1"),
"<span>1 - 5 / 100</span>",
)
self.assertContains(
self.client.get(list_url + "?is_active=2"),
"<span>1 - 5 / 66</span>",
)
self.assertContains(
self.client.get(list_url + "?is_active=3"),
"<span>1 - 5 / 34</span>",
)
# Invalid query
response = self.client.get(list_url + "?created__year=abc")
self.assertEqual(response.status_code, 302)
self.assertTrue(response["location"].endswith("?clear=1"))
# Mixed quick (only inactive) and form field (only active)
# Form field takes precedence
self.assertContains(
self.client.get(list_url + "?is_active=2&query=is:inactive"),
"<span>1 - 5 / 66</span>",
)
# Search form persistence
self.assertContains(
self.client.get(list_url + "?s=1&is_active=3"),
"<span>1 - 5 / 34</span>",
)
self.assertContains(
self.client.get(list_url),
"<span>1 - 5 / 34</span>",
)
self.assertContains(
self.client.get(list_url + "?clear=1"),
"<span>1 - 5 / 100</span>",
)
# Ordering
self.assertContains(
self.client.get(list_url),
"Given 0 Family 0",
)
response = self.client.get(list_url + "?o=name")
self.assertContains(response, "Given 12 Family 12")
self.assertContains(
response, '<a class="ordering desc" href="?&o=-name"> name</a>'
)
self.assertContains(
response, '<a class="ordering " href="?&o=is_active"> is active</a>'
)
response = self.client.get(list_url + "?o=-name")
self.assertContains(response, "Given 99 Family 99")
self.assertContains(
response, '<a class="ordering asc" href="?&o=name"> name</a>'
)
self.assertContains(
response, '<a class="ordering " href="?&o=is_active"> is active</a>'
)
response = self.client.get(list_url + "?o=is_active")
self.assertContains(response, "Given 14 Family 14")
self.assertNotContains(response, "Given 12 Family 12") # inactive
self.assertContains(response, '<a class="ordering " href="?&o=name"> name</a>')
self.assertContains(
response, '<a class="ordering desc" href="?&o=-is_active"> is active</a>'
)
# TODO multiple choice fields
# TODO SearchForm.default
# TODO autocompletion widget tests?
| 34.86747
| 87
| 0.546303
|
794c372ddba2e9b1e6c67dcd6f1ecf29aeb604b9
| 2,269
|
py
|
Python
|
VSR/Backend/Torch/Models/Esrgan.py
|
Kadantte/VideoSuperResolution
|
4c86e49d81c7a9bea1fe0780d651afc126768df3
|
[
"MIT"
] | 1,447
|
2018-06-04T08:44:07.000Z
|
2022-03-29T06:19:10.000Z
|
VSR/Backend/Torch/Models/Esrgan.py
|
pipixiapipi/VideoSuperResolution
|
4c86e49d81c7a9bea1fe0780d651afc126768df3
|
[
"MIT"
] | 96
|
2018-08-29T01:02:45.000Z
|
2022-01-12T06:00:01.000Z
|
VSR/Backend/Torch/Models/Esrgan.py
|
pipixiapipi/VideoSuperResolution
|
4c86e49d81c7a9bea1fe0780d651afc126768df3
|
[
"MIT"
] | 307
|
2018-06-26T13:35:54.000Z
|
2022-01-21T09:01:54.000Z
|
# Copyright (c): Wenyi Tang 2017-2019.
# Author: Wenyi Tang
# Email: wenyi.tang@intel.com
# Update Date: 2019 - 3 - 15
import logging
import numpy as np
import torch.nn as nn
from .Ops.Blocks import Activation, EasyConv2d, Rrdb
from .Ops.Discriminator import DCGAN
from .Ops.Scale import Upsample
from .Optim.SISR import PerceptualOptimizer
_logger = logging.getLogger("VSR.ESRGAN")
_logger.info("LICENSE: ESRGAN is implemented by Xintao Wang. "
"@xinntao https://github.com/xinntao/ESRGAN")
class RRDB_Net(nn.Module):
def __init__(self, channel, scale, nf, nb, gc=32):
super(RRDB_Net, self).__init__()
self.head = EasyConv2d(channel, nf, kernel_size=3)
rb_blocks = [
Rrdb(nf, gc, 5, 0.2, kernel_size=3,
activation=Activation('lrelu', negative_slope=0.2))
for _ in range(nb)]
LR_conv = EasyConv2d(nf, nf, kernel_size=3)
upsampler = [Upsample(nf, scale, 'nearest',
activation=Activation('lrelu', negative_slope=0.2))]
HR_conv0 = EasyConv2d(nf, nf, kernel_size=3, activation='lrelu',
negative_slope=0.2)
HR_conv1 = EasyConv2d(nf, channel, kernel_size=3)
self.body = nn.Sequential(*rb_blocks, LR_conv)
self.tail = nn.Sequential(*upsampler, HR_conv0, HR_conv1)
def forward(self, x):
x = self.head(x)
x = self.body(x) + x
x = self.tail(x)
return x
class ESRGAN(PerceptualOptimizer):
def __init__(self, channel, scale, patch_size=128, weights=(0.01, 1, 5e-3),
nf=64, nb=23, gc=32, **kwargs):
self.rrdb = RRDB_Net(channel, scale, nf, nb, gc)
super(ESRGAN, self).__init__(scale, channel,
discriminator=DCGAN,
discriminator_kwargs={
'channel': channel,
'scale': scale,
'num_layers': np.log2(patch_size // 4) * 2,
'norm': 'BN'
},
image_weight=weights[0],
feature_weight=weights[1],
gan_weight=weights[2], **kwargs)
def fn(self, x):
return self.rrdb(x)
| 36.015873
| 78
| 0.56677
|
794c37fe40715365670b43421b1b2f25410d25b1
| 629
|
py
|
Python
|
python/basix/variants.py
|
jpdean/basix
|
d79f22d00e195bf8f5b58b8e50562b98650b0be0
|
[
"MIT"
] | null | null | null |
python/basix/variants.py
|
jpdean/basix
|
d79f22d00e195bf8f5b58b8e50562b98650b0be0
|
[
"MIT"
] | null | null | null |
python/basix/variants.py
|
jpdean/basix
|
d79f22d00e195bf8f5b58b8e50562b98650b0be0
|
[
"MIT"
] | null | null | null |
"""Functions to manipulate variant types."""
from ._basixcpp import LagrangeVariant as _LV
def string_to_lagrange_variant(variant: str):
"""Convert a string to a Basix LagrangeVariant enum."""
if variant == "gll":
return _LV.gll_warped
if variant == "chebyshev":
return _LV.chebyshev_isaac
if variant == "gl":
return _LV.gl_isaac
if not hasattr(_LV, variant):
raise ValueError(f"Unknown variant: {variant}")
return getattr(_LV, variant)
def lagrange_variant_to_string(variant: _LV):
"""Convert a Basix LagrangeVariant enum to a string."""
return variant.name
| 27.347826
| 59
| 0.686804
|
794c39258b174b98e904ed4e1994c160c0a51b47
| 4,960
|
py
|
Python
|
stanza/utils/datasets/process_orchid.py
|
de9uch1/stanza
|
cafb7d5004842cd3c8a3ac334ce7649bac928830
|
[
"Apache-2.0"
] | 25
|
2021-12-01T15:19:36.000Z
|
2022-03-12T12:50:28.000Z
|
stanza/utils/datasets/process_orchid.py
|
de9uch1/stanza
|
cafb7d5004842cd3c8a3ac334ce7649bac928830
|
[
"Apache-2.0"
] | 3
|
2021-12-14T06:34:52.000Z
|
2022-02-17T08:23:20.000Z
|
stanza/utils/datasets/process_orchid.py
|
de9uch1/stanza
|
cafb7d5004842cd3c8a3ac334ce7649bac928830
|
[
"Apache-2.0"
] | 6
|
2021-10-12T13:44:17.000Z
|
2022-03-07T13:54:17.000Z
|
"""Parses the xml conversion of orchid
https://github.com/korakot/thainlp/blob/master/xmlchid.xml
For example, if you put the data file in the above link in
extern_data/thai/orchid/xmlchid.xml
you would then run
python3 -m stanza.utils.datasets.process_orchid extern_data/thai/orchid/xmlchid.xml data/tokenize
Because there is no definitive train/dev/test split that we have found
so far, we randomly shuffle the data on a paragraph level and split it
80/10/10. A random seed is chosen so that the splits are reproducible.
The datasets produced have a similar format to the UD datasets, so we
give it a fake UD name to make life easier for the downstream tools.
Training on this dataset seems to work best with low dropout numbers.
For example:
./scripts/run_tokenize.sh UD_Thai-orchid --dropout 0.05 --unit_dropout 0.05
This results in a model with dev set scores:
th_orchid 87.98 70.94
test set scores:
91.60 72.43
Apparently the random split produced a test set easier than the dev set.
"""
import random
import sys
import xml.etree.ElementTree as ET
from stanza.utils.datasets.process_thai_tokenization import write_dataset
# line "122819" has some error in the tokenization of the musical notation
# line "209380" is also messed up
# others have @ followed by a part of speech, which is clearly wrong
skipped_lines = {
"122819",
"209380",
"227769",
"245992",
"347163",
"409708",
"431227",
}
escape_sequences = {
'<left_parenthesis>': '(',
'<right_parenthesis>': ')',
'<circumflex_accent>': '^',
'<full_stop>': '.',
'<minus>': '-',
'<asterisk>': '*',
'<quotation>': '"',
'<slash>': '/',
'<colon>': ':',
'<equal>': '=',
'<comma>': ',',
'<semi_colon>': ';',
'<less_than>': '<',
'<greater_than>': '>',
'<ampersand>': '&',
'<left_curly_bracket>': '{',
'<right_curly_bracket>': '}',
'<apostrophe>': "'",
'<plus>': '+',
'<number>': '#',
'<dollar>': '$',
'<at_mark>': '@',
'<question_mark>': '?',
'<exclamation>': '!',
'app<LI>ances': 'appliances',
'intel<LI>gence': 'intelligence',
"<slash>'": "/'",
'<100>': '100',
}
allowed_sequences = {
'<a>',
'<b>',
'<c>',
'<e>',
'<f>',
'<LI>',
'<---vp',
'<---',
'<----',
}
def read_data(input_filename):
tree = ET.parse(input_filename)
# we will put each paragraph in a separate block in the output file
# we won't pay any attention to the document boundaries unless we
# later find out it was necessary
# a paragraph will be a list of sentences
# a sentence is a list of words, where each word is a string
documents = []
root = tree.getroot()
for document in root:
# these should all be documents
if document.tag != 'document':
raise ValueError("Unexpected orchid xml layout: {}".format(document.tag))
paragraphs = []
for paragraph in document:
if paragraph.tag != 'paragraph':
raise ValueError("Unexpected orchid xml layout: {} under {}".format(paragraph.tag, document.tag))
sentences = []
for sentence in paragraph:
if sentence.tag != 'sentence':
raise ValueError("Unexpected orchid xml layout: {} under {}".format(sentence.tag, document.tag))
if sentence.attrib['line_num'] in skipped_lines:
continue
words = []
for word_idx, word in enumerate(sentence):
if word.tag != 'word':
raise ValueError("Unexpected orchid xml layout: {} under {}".format(word.tag, sentence.tag))
word = word.attrib['surface']
word = escape_sequences.get(word, word)
if word == '<space>':
if word_idx == 0:
raise ValueError("Space character was the first token in a sentence: {}".format(sentence.attrib['line_num']))
else:
words[-1] = (words[-1][0], True)
continue
if len(word) > 1 and word[0] == '<' and word not in allowed_sequences:
raise ValueError("Unknown escape sequence {}".format(word))
words.append((word, False))
if len(words) == 0:
continue
sentences.append(words)
paragraphs.append(sentences)
documents.append(paragraphs)
print("Number of documents: {}".format(len(documents)))
print("Number of paragraphs: {}".format(sum(len(document) for document in documents)))
return documents
def main():
random.seed(1007)
input_filename = sys.argv[1]
output_dir = sys.argv[2]
documents = read_data(input_filename)
write_dataset(documents, output_dir, "orchid")
if __name__ == '__main__':
main()
| 32.207792
| 137
| 0.587903
|
794c393f6f7e9f8ff906c5437a50c4c4e3bdac0d
| 12,382
|
py
|
Python
|
dev/Gems/CloudGemMetric/v1/AWS/python/windows/Lib/numpy/doc/glossary.py
|
jeikabu/lumberyard
|
07228c605ce16cbf5aaa209a94a3cb9d6c1a4115
|
[
"AML"
] | 145
|
2017-01-19T23:33:03.000Z
|
2021-06-05T05:34:55.000Z
|
dev/Gems/CloudGemMetric/v1/AWS/python/windows/Lib/numpy/doc/glossary.py
|
jeikabu/lumberyard
|
07228c605ce16cbf5aaa209a94a3cb9d6c1a4115
|
[
"AML"
] | 17
|
2017-02-03T20:51:39.000Z
|
2020-05-21T11:33:52.000Z
|
dev/Gems/CloudGemMetric/v1/AWS/python/windows/Lib/numpy/doc/glossary.py
|
jeikabu/lumberyard
|
07228c605ce16cbf5aaa209a94a3cb9d6c1a4115
|
[
"AML"
] | 44
|
2017-02-04T19:40:03.000Z
|
2020-10-01T19:24:19.000Z
|
"""
========
Glossary
========
.. glossary::
along an axis
Axes are defined for arrays with more than one dimension. A
2-dimensional array has two corresponding axes: the first running
vertically downwards across rows (axis 0), and the second running
horizontally across columns (axis 1).
Many operation can take place along one of these axes. For example,
we can sum each row of an array, in which case we operate along
columns, or axis 1::
>>> x = np.arange(12).reshape((3,4))
>>> x
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> x.sum(axis=1)
array([ 6, 22, 38])
array
A homogeneous container of numerical elements. Each element in the
array occupies a fixed amount of memory (hence homogeneous), and
can be a numerical element of a single type (such as float, int
or complex) or a combination (such as ``(float, int, float)``). Each
array has an associated data-type (or ``dtype``), which describes
the numerical type of its elements::
>>> x = np.array([1, 2, 3], float)
>>> x
array([ 1., 2., 3.])
>>> x.dtype # floating point number, 64 bits of memory per element
dtype('float64')
# More complicated data type: each array element is a combination of
# and integer and a floating point number
>>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)])
array([(1, 2.0), (3, 4.0)],
dtype=[('x', '<i4'), ('y', '<f8')])
Fast element-wise operations, called :term:`ufuncs`, operate on arrays.
array_like
Any sequence that can be interpreted as an ndarray. This includes
nested lists, tuples, scalars and existing arrays.
attribute
A property of an object that can be accessed using ``obj.attribute``,
e.g., ``shape`` is an attribute of an array::
>>> x = np.array([1, 2, 3])
>>> x.shape
(3,)
BLAS
`Basic Linear Algebra Subprograms <http://en.wikipedia.org/wiki/BLAS>`_
broadcast
NumPy can do operations on arrays whose shapes are mismatched::
>>> x = np.array([1, 2])
>>> y = np.array([[3], [4]])
>>> x
array([1, 2])
>>> y
array([[3],
[4]])
>>> x + y
array([[4, 5],
[5, 6]])
See `numpy.doc.broadcasting` for more information.
C order
See `row-major`
column-major
A way to represent items in a N-dimensional array in the 1-dimensional
computer memory. In column-major order, the leftmost index "varies the
fastest": for example the array::
[[1, 2, 3],
[4, 5, 6]]
is represented in the column-major order as::
[1, 4, 2, 5, 3, 6]
Column-major order is also known as the Fortran order, as the Fortran
programming language uses it.
decorator
An operator that transforms a function. For example, a ``log``
decorator may be defined to print debugging information upon
function execution::
>>> def log(f):
... def new_logging_func(*args, **kwargs):
... print("Logging call with parameters:", args, kwargs)
... return f(*args, **kwargs)
...
... return new_logging_func
Now, when we define a function, we can "decorate" it using ``log``::
>>> @log
... def add(a, b):
... return a + b
Calling ``add`` then yields:
>>> add(1, 2)
Logging call with parameters: (1, 2) {}
3
dictionary
Resembling a language dictionary, which provides a mapping between
words and descriptions thereof, a Python dictionary is a mapping
between two objects::
>>> x = {1: 'one', 'two': [1, 2]}
Here, `x` is a dictionary mapping keys to values, in this case
the integer 1 to the string "one", and the string "two" to
the list ``[1, 2]``. The values may be accessed using their
corresponding keys::
>>> x[1]
'one'
>>> x['two']
[1, 2]
Note that dictionaries are not stored in any specific order. Also,
most mutable (see *immutable* below) objects, such as lists, may not
be used as keys.
For more information on dictionaries, read the
`Python tutorial <http://docs.python.org/tut>`_.
Fortran order
See `column-major`
flattened
Collapsed to a one-dimensional array. See `numpy.ndarray.flatten`
for details.
immutable
An object that cannot be modified after execution is called
immutable. Two common examples are strings and tuples.
instance
A class definition gives the blueprint for constructing an object::
>>> class House(object):
... wall_colour = 'white'
Yet, we have to *build* a house before it exists::
>>> h = House() # build a house
Now, ``h`` is called a ``House`` instance. An instance is therefore
a specific realisation of a class.
iterable
A sequence that allows "walking" (iterating) over items, typically
using a loop such as::
>>> x = [1, 2, 3]
>>> [item**2 for item in x]
[1, 4, 9]
It is often used in combination with ``enumerate``::
>>> keys = ['a','b','c']
>>> for n, k in enumerate(keys):
... print("Key %d: %s" % (n, k))
...
Key 0: a
Key 1: b
Key 2: c
list
A Python container that can hold any number of objects or items.
The items do not have to be of the same type, and can even be
lists themselves::
>>> x = [2, 2.0, "two", [2, 2.0]]
The list `x` contains 4 items, each which can be accessed individually::
>>> x[2] # the string 'two'
'two'
>>> x[3] # a list, containing an integer 2 and a float 2.0
[2, 2.0]
It is also possible to select more than one item at a time,
using *slicing*::
>>> x[0:2] # or, equivalently, x[:2]
[2, 2.0]
In code, arrays are often conveniently expressed as nested lists::
>>> np.array([[1, 2], [3, 4]])
array([[1, 2],
[3, 4]])
For more information, read the section on lists in the `Python
tutorial <http://docs.python.org/tut>`_. For a mapping
type (key-value), see *dictionary*.
mask
A boolean array, used to select only certain elements for an operation::
>>> x = np.arange(5)
>>> x
array([0, 1, 2, 3, 4])
>>> mask = (x > 2)
>>> mask
array([False, False, False, True, True], dtype=bool)
>>> x[mask] = -1
>>> x
array([ 0, 1, 2, -1, -1])
masked array
Array that suppressed values indicated by a mask::
>>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True])
>>> x
masked_array(data = [-- 2.0 --],
mask = [ True False True],
fill_value = 1e+20)
<BLANKLINE>
>>> x + [1, 2, 3]
masked_array(data = [-- 4.0 --],
mask = [ True False True],
fill_value = 1e+20)
<BLANKLINE>
Masked arrays are often used when operating on arrays containing
missing or invalid entries.
matrix
A 2-dimensional ndarray that preserves its two-dimensional nature
throughout operations. It has certain special operations, such as ``*``
(matrix multiplication) and ``**`` (matrix power), defined::
>>> x = np.mat([[1, 2], [3, 4]])
>>> x
matrix([[1, 2],
[3, 4]])
>>> x**2
matrix([[ 7, 10],
[15, 22]])
method
A function associated with an object. For example, each ndarray has a
method called ``repeat``::
>>> x = np.array([1, 2, 3])
>>> x.repeat(2)
array([1, 1, 2, 2, 3, 3])
ndarray
See *array*.
record array
An :term:`ndarray` with :term:`structured data type`_ which has been
subclassed as ``np.recarray`` and whose dtype is of type ``np.record``,
making the fields of its data type to be accessible by attribute.
reference
If ``a`` is a reference to ``b``, then ``(a is b) == True``. Therefore,
``a`` and ``b`` are different names for the same Python object.
row-major
A way to represent items in a N-dimensional array in the 1-dimensional
computer memory. In row-major order, the rightmost index "varies
the fastest": for example the array::
[[1, 2, 3],
[4, 5, 6]]
is represented in the row-major order as::
[1, 2, 3, 4, 5, 6]
Row-major order is also known as the C order, as the C programming
language uses it. New NumPy arrays are by default in row-major order.
self
Often seen in method signatures, ``self`` refers to the instance
of the associated class. For example:
>>> class Paintbrush(object):
... color = 'blue'
...
... def paint(self):
... print("Painting the city %s!" % self.color)
...
>>> p = Paintbrush()
>>> p.color = 'red'
>>> p.paint() # self refers to 'p'
Painting the city red!
slice
Used to select only certain elements from a sequence::
>>> x = range(5)
>>> x
[0, 1, 2, 3, 4]
>>> x[1:3] # slice from 1 to 3 (excluding 3 itself)
[1, 2]
>>> x[1:5:2] # slice from 1 to 5, but skipping every second element
[1, 3]
>>> x[::-1] # slice a sequence in reverse
[4, 3, 2, 1, 0]
Arrays may have more than one dimension, each which can be sliced
individually::
>>> x = np.array([[1, 2], [3, 4]])
>>> x
array([[1, 2],
[3, 4]])
>>> x[:, 1]
array([2, 4])
structured data type
A data type composed of other datatypes
tuple
A sequence that may contain a variable number of types of any
kind. A tuple is immutable, i.e., once constructed it cannot be
changed. Similar to a list, it can be indexed and sliced::
>>> x = (1, 'one', [1, 2])
>>> x
(1, 'one', [1, 2])
>>> x[0]
1
>>> x[:2]
(1, 'one')
A useful concept is "tuple unpacking", which allows variables to
be assigned to the contents of a tuple::
>>> x, y = (1, 2)
>>> x, y = 1, 2
This is often used when a function returns multiple values:
>>> def return_many():
... return 1, 'alpha', None
>>> a, b, c = return_many()
>>> a, b, c
(1, 'alpha', None)
>>> a
1
>>> b
'alpha'
ufunc
Universal function. A fast element-wise array operation. Examples include
``add``, ``sin`` and ``logical_or``.
view
An array that does not own its data, but refers to another array's
data instead. For example, we may create a view that only shows
every second element of another array::
>>> x = np.arange(5)
>>> x
array([0, 1, 2, 3, 4])
>>> y = x[::2]
>>> y
array([0, 2, 4])
>>> x[0] = 3 # changing x changes y as well, since y is a view on x
>>> y
array([3, 2, 4])
wrapper
Python is a high-level (highly abstracted, or English-like) language.
This abstraction comes at a price in execution speed, and sometimes
it becomes necessary to use lower level languages to do fast
computations. A wrapper is code that provides a bridge between
high and the low level languages, allowing, e.g., Python to execute
code written in C or Fortran.
Examples include ctypes, SWIG and Cython (which wraps C and C++)
and f2py (which wraps Fortran).
"""
from __future__ import division, absolute_import, print_function
| 29.134118
| 82
| 0.534243
|
794c3a312e3f935dc46f44cf3cf0828cfcb9da75
| 980
|
py
|
Python
|
python_code/vnev/Lib/site-packages/jdcloud_sdk/services/nc/models/LogConfiguration.py
|
Ureimu/weather-robot
|
7634195af388538a566ccea9f8a8534c5fb0f4b6
|
[
"MIT"
] | 14
|
2018-04-19T09:53:56.000Z
|
2022-01-27T06:05:48.000Z
|
python_code/vnev/Lib/site-packages/jdcloud_sdk/services/nc/models/LogConfiguration.py
|
Ureimu/weather-robot
|
7634195af388538a566ccea9f8a8534c5fb0f4b6
|
[
"MIT"
] | 15
|
2018-09-11T05:39:54.000Z
|
2021-07-02T12:38:02.000Z
|
python_code/vnev/Lib/site-packages/jdcloud_sdk/services/nc/models/LogConfiguration.py
|
Ureimu/weather-robot
|
7634195af388538a566ccea9f8a8534c5fb0f4b6
|
[
"MIT"
] | 33
|
2018-04-20T05:29:16.000Z
|
2022-02-17T09:10:05.000Z
|
# coding=utf8
# Copyright 2018 JDCLOUD.COM
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# NOTE: This class is auto generated by the jdcloud code generator program.
class LogConfiguration(object):
def __init__(self, logDriver=None, options=None):
"""
:param logDriver: (Optional) 日志Driver名称 default:默认在本地分配10MB的存储空间,自动rotate
:param options: (Optional) 日志Driver的配置选项
"""
self.logDriver = logDriver
self.options = options
| 32.666667
| 82
| 0.72551
|
794c3b2bb8443a87f856a76d62c4d36bc450a5e6
| 13,820
|
py
|
Python
|
zippy/benchmarks/src/benchmarks/sympy/sympy/series/order.py
|
lucapele/pele-c
|
ff6d06794a171f8e1b08fc6246446d9777116f56
|
[
"BSD-3-Clause"
] | 319
|
2016-09-22T15:54:48.000Z
|
2022-03-18T02:36:58.000Z
|
sympy/series/order.py
|
KDOTGIS/sympy
|
2fb92446c40f242b341b3f17ebaa446ef5b19849
|
[
"BSD-3-Clause"
] | 9
|
2016-11-03T21:56:41.000Z
|
2020-08-09T19:27:37.000Z
|
sympy/series/order.py
|
KDOTGIS/sympy
|
2fb92446c40f242b341b3f17ebaa446ef5b19849
|
[
"BSD-3-Clause"
] | 27
|
2016-10-06T16:05:32.000Z
|
2022-03-18T02:37:00.000Z
|
from __future__ import print_function, division
from sympy.core import Basic, S, sympify, Expr, Rational, Symbol, Dummy
from sympy.core import Add, Mul, expand_power_base, expand_log
from sympy.core.cache import cacheit
from sympy.core.compatibility import default_sort_key, is_sequence
from sympy.core.containers import Tuple
from sympy.utilities.iterables import uniq
class Order(Expr):
r""" Represents the limiting behavior of some function
The order of a function characterizes the function based on the limiting
behavior of the function as it goes to some limit. Only taking all limit
points to be 0 or positive infinity is currently supported. This is
expressed in big O notation [1]_.
The formal definition for the order of a function `g(x)` about a point `a`
is such that `g(x) = O(f(x))` as `x \rightarrow a` if and only if for any
`\delta > 0` there exists a `M > 0` such that `|g(x)| \leq M|f(x)|` for
`|x-a| < \delta`. This is equivalent to `\lim_{x \rightarrow a}
\sup |g(x)/f(x)| < \infty`.
Let's illustrate it on the following example by taking the expansion of
`\sin(x)` about 0:
.. math ::
\sin(x) = x - x^3/3! + O(x^5)
where in this case `O(x^5) = x^5/5! - x^7/7! + \cdots`. By the definition
of `O`, for any `\delta > 0` there is an `M` such that:
.. math ::
|x^5/5! - x^7/7! + ....| <= M|x^5| \text{ for } |x| < \delta
or by the alternate definition:
.. math ::
\lim_{x \rightarrow 0} | (x^5/5! - x^7/7! + ....) / x^5| < \infty
which surely is true, because
.. math ::
\lim_{x \rightarrow 0} | (x^5/5! - x^7/7! + ....) / x^5| = 1/5!
As it is usually used, the order of a function can be intuitively thought
of representing all terms of powers greater than the one specified. For
example, `O(x^3)` corresponds to any terms proportional to `x^3,
x^4,\ldots` and any higher power. For a polynomial, this leaves terms
proportional to `x^2`, `x` and constants.
Examples
========
>>> from sympy import O, oo
>>> from sympy.abc import x, y
>>> O(x + x**2)
O(x)
>>> O(x + x**2, (x, 0))
O(x)
>>> O(x + x**2, (x, oo))
O(x**2, (x, oo))
>>> O(1 + x*y)
O(1, x, y)
>>> O(1 + x*y, (x, 0), (y, 0))
O(1, x, y)
>>> O(1 + x*y, (x, oo), (y, oo))
O(x*y, (x, oo), (y, oo))
>>> O(1) in O(1, x)
True
>>> O(1, x) in O(1)
False
>>> O(x) in O(1, x)
True
>>> O(x**2) in O(x)
True
>>> O(x)*x
O(x**2)
>>> O(x) - O(x)
O(x)
References
==========
.. [1] `Big O notation <http://en.wikipedia.org/wiki/Big_O_notation>`_
Notes
=====
In ``O(f(x), x)`` the expression ``f(x)`` is assumed to have a leading
term. ``O(f(x), x)`` is automatically transformed to
``O(f(x).as_leading_term(x),x)``.
``O(expr*f(x), x)`` is ``O(f(x), x)``
``O(expr, x)`` is ``O(1)``
``O(0, x)`` is 0.
Multivariate O is also supported:
``O(f(x, y), x, y)`` is transformed to
``O(f(x, y).as_leading_term(x,y).as_leading_term(y), x, y)``
In the multivariate case, it is assumed the limits w.r.t. the various
symbols commute.
If no symbols are passed then all symbols in the expression are used.
"""
is_Order = True
__slots__ = []
@cacheit
def __new__(cls, expr, *args, **kwargs):
expr = sympify(expr)
if not args:
if expr.is_Order:
variables = expr.variables
point = expr.point
else:
variables = list(expr.free_symbols)
point = [S.Zero]*len(variables)
else:
args = list(args if is_sequence(args) else [args])
variables, point = [], []
if is_sequence(args[0]):
for a in args:
v, p = list(map(sympify, a))
variables.append(v)
point.append(p)
else:
variables = list(map(sympify, args))
point = [S.Zero]*len(variables)
if not all(isinstance(v, Symbol) for v in variables):
raise TypeError('Variables are not symbols, got %s' % variables)
if len(list(uniq(variables))) != len(variables):
raise ValueError('Variables are supposed to be unique symbols, got %s' % variables)
if expr.is_Order:
expr_vp = dict(expr.args[1:])
new_vp = dict(expr_vp)
vp = dict(zip(variables, point))
for v, p in vp.items():
if v in new_vp.keys():
if p != new_vp[v]:
raise NotImplementedError(
"Mixing Order at different points is not supported.")
else:
new_vp[v] = p
if set(expr_vp.keys()) == set(new_vp.keys()):
return expr
else:
variables = list(new_vp.keys())
point = [new_vp[v] for v in variables]
if expr is S.NaN:
return S.NaN
if not all(p is S.Zero for p in point) and \
not all(p is S.Infinity for p in point):
raise NotImplementedError('Order at points other than 0 '
'or oo not supported, got %s as a point.' % point)
if variables:
if len(variables) > 1:
# XXX: better way? We need this expand() to
# workaround e.g: expr = x*(x + y).
# (x*(x + y)).as_leading_term(x, y) currently returns
# x*y (wrong order term!). That's why we want to deal with
# expand()'ed expr (handled in "if expr.is_Add" branch below).
expr = expr.expand()
if expr.is_Add:
lst = expr.extract_leading_order(variables, point)
expr = Add(*[f.expr for (e, f) in lst])
elif expr:
if point[0] == S.Zero:
expr = expr.as_leading_term(*variables)
expr = expr.as_independent(*variables, as_Add=False)[1]
expr = expand_power_base(expr)
expr = expand_log(expr)
if len(variables) == 1:
# The definition of O(f(x)) symbol explicitly stated that
# the argument of f(x) is irrelevant. That's why we can
# combine some power exponents (only "on top" of the
# expression tree for f(x)), e.g.:
# x**p * (-x)**q -> x**(p+q) for real p, q.
x = variables[0]
margs = list(Mul.make_args(
expr.as_independent(x, as_Add=False)[1]))
for i, t in enumerate(margs):
if t.is_Pow:
b, q = t.args
if b in (x, -x) and q.is_real and not q.has(x):
margs[i] = x**q
elif b.is_Pow and not b.exp.has(x):
b, r = b.args
if b in (x, -x) and r.is_real:
margs[i] = x**(r*q)
elif b.is_Mul and b.args[0] is S.NegativeOne:
b = -b
if b.is_Pow and not b.exp.has(x):
b, r = b.args
if b in (x, -x) and r.is_real:
margs[i] = x**(r*q)
expr = Mul(*margs)
if expr is S.Zero:
return expr
if expr.is_Order:
expr = expr.expr
if not expr.has(*variables):
expr = S.One
# create Order instance:
variables.sort(key=default_sort_key)
args = (expr,) + Tuple(*zip(variables, point))
obj = Expr.__new__(cls, *args)
return obj
def _eval_nseries(self, x, n, logx):
return self
@property
def expr(self):
return self.args[0]
@property
def variables(self):
if self.args[1:]:
return tuple(x[0] for x in self.args[1:])
else:
return ()
@property
def point(self):
if self.args[1:]:
return tuple(x[1] for x in self.args[1:])
else:
return ()
@property
def free_symbols(self):
return self.expr.free_symbols | set(self.variables)
def _eval_power(b, e):
if e.is_Number and e.is_nonnegative:
return b.func(b.expr ** e, *b.args[1:])
return
def as_expr_variables(self, order_symbols):
if order_symbols is None:
order_symbols = self.args[1:]
else:
if not all(o[1] == order_symbols[0][1] for o in order_symbols) and \
not all(p == self.point[0] for p in self.point):
raise NotImplementedError('Order at points other than 0 '
'or oo not supported, got %s as a point.' % point)
if order_symbols[0][1] != self.point[0]:
raise NotImplementedError(
"Multiplying Order at different points is not supported.")
order_symbols = dict(order_symbols)
for s, p in dict(self.args[1:]).items():
if s not in order_symbols.keys():
order_symbols[s] = p
order_symbols = sorted(order_symbols.items(), key=lambda x: default_sort_key(x[0]))
return self.expr, tuple(order_symbols)
def removeO(self):
return S.Zero
def getO(self):
return self
@cacheit
def contains(self, expr):
"""
Return True if expr belongs to Order(self.expr, \*self.variables).
Return False if self belongs to expr.
Return None if the inclusion relation cannot be determined
(e.g. when self and expr have different symbols).
"""
from sympy import powsimp, PoleError
if expr is S.Zero:
return True
if expr is S.NaN:
return False
if expr.is_Order:
if not all(p == expr.point[0] for p in expr.point) and \
not all(p == self.point[0] for p in self.point):
raise NotImplementedError('Order at points other than 0 '
'or oo not supported, got %s as a point.' % point)
else:
# self and/or expr is O(1):
if any(not p for p in [expr.point, self.point]):
point = self.point + expr.point
if point:
point = point[0]
else:
point = S.Zero
else:
point = self.point[0]
if expr.expr == self.expr:
# O(1) + O(1), O(1) + O(1, x), etc.
return all([x in self.args[1:] for x in expr.args[1:]])
if expr.expr.is_Add:
return all([self.contains(x) for x in expr.expr.args])
if self.expr.is_Add:
return any([self.func(x, *self.args[1:]).contains(expr)
for x in self.expr.args])
if self.variables and expr.variables:
common_symbols = tuple(
[s for s in self.variables if s in expr.variables])
elif self.variables:
common_symbols = self.variables
else:
common_symbols = expr.variables
if not common_symbols:
return None
r = None
ratio = self.expr/expr.expr
ratio = powsimp(ratio, deep=True, combine='exp')
for s in common_symbols:
try:
l = ratio.limit(s, point) != 0
except PoleError:
l = None
if r is None:
r = l
else:
if r != l:
return
return r
obj = self.func(expr, *self.args[1:])
return self.contains(obj)
def __contains__(self, other):
result = self.contains(other)
if result is None:
raise TypeError('contains did not evaluate to a bool')
return result
def _eval_subs(self, old, new):
if old.is_Symbol and old in self.variables:
i = self.variables.index(old)
newexpr = self.expr._subs(old, new)
if isinstance(new, Symbol):
newvars = list(self.variables)
newvars[i] = new
newpt = self.point
else:
newvars = tuple(newexpr.free_symbols) + \
self.variables[:i] + self.variables[i + 1:]
p = new.as_numer_denom()[1].is_number*2 - 1
newpt = self.point[0]**p
if not newpt.is_real:
x = Dummy('x')
newpt = (x**p).limit(x, self.point[0])
newpt = [newpt]*len(newvars)
return Order(newexpr, *zip(newvars, newpt))
return Order(self.expr._subs(old, new), *self.args[1:])
def _eval_conjugate(self):
expr = self.expr._eval_conjugate()
if expr is not None:
return self.func(expr, *self.args[1:])
def _eval_derivative(self, x):
return self.func(self.expr.diff(x), *self.args[1:]) or self
def _eval_transpose(self):
expr = self.expr._eval_transpose()
if expr is not None:
return self.func(expr, *self.args[1:])
def _sage_(self):
#XXX: SAGE doesn't have Order yet. Let's return 0 instead.
return Rational(0)._sage_()
O = Order
| 34.55
| 95
| 0.501085
|
794c3b712d16ea4cdb1fb5379d624521efae0888
| 352
|
py
|
Python
|
exif.py
|
achavez/photostreamer-pi
|
9b335fac1ebc3b69f35368268166e4dfdc1aa449
|
[
"MIT"
] | 11
|
2015-06-04T19:27:15.000Z
|
2021-07-01T01:32:32.000Z
|
exif.py
|
achavez/photostreamer-pi
|
9b335fac1ebc3b69f35368268166e4dfdc1aa449
|
[
"MIT"
] | 1
|
2015-09-22T22:50:38.000Z
|
2015-09-30T12:24:50.000Z
|
exif.py
|
achavez/photostreamer-pi
|
9b335fac1ebc3b69f35368268166e4dfdc1aa449
|
[
"MIT"
] | 2
|
2015-06-08T16:43:22.000Z
|
2016-06-04T12:03:18.000Z
|
import exifread
def parse(fileName):
"""
Pull the EXIf info from a photo and sanitize it so for sending as JSON
by converting values to strings.
"""
f = open(fileName, 'rb')
exif = exifread.process_file(f, details=False)
parsed = {}
for key, value in exif.iteritems():
parsed[key] = str(value)
return parsed
| 27.076923
| 74
| 0.642045
|
794c3d99f85e0d05e19944a6e885fb7f4bd4e85e
| 33
|
py
|
Python
|
py3_wordsmith/__init__.py
|
xACruceSalus/py3_wordsmith
|
120d3d2e4bbf5a0e1c2635266a295354b0be1ca0
|
[
"Apache-2.0"
] | 1
|
2016-03-25T16:54:33.000Z
|
2016-03-25T16:54:33.000Z
|
py3_wordsmith/__init__.py
|
xACruceSalus/py3_wordsmith
|
120d3d2e4bbf5a0e1c2635266a295354b0be1ca0
|
[
"Apache-2.0"
] | null | null | null |
py3_wordsmith/__init__.py
|
xACruceSalus/py3_wordsmith
|
120d3d2e4bbf5a0e1c2635266a295354b0be1ca0
|
[
"Apache-2.0"
] | null | null | null |
from .wordsmith import Wordsmith
| 16.5
| 32
| 0.848485
|
794c3e37ff32b01e7bbba37c3ccb694582acea8a
| 2,756
|
py
|
Python
|
src/examples/tutorial/ascent_intro/python/ascent_trigger_example1.py
|
srini009/ascent
|
70558059dc3fe514206781af6e48715d8934c37c
|
[
"BSD-3-Clause"
] | null | null | null |
src/examples/tutorial/ascent_intro/python/ascent_trigger_example1.py
|
srini009/ascent
|
70558059dc3fe514206781af6e48715d8934c37c
|
[
"BSD-3-Clause"
] | null | null | null |
src/examples/tutorial/ascent_intro/python/ascent_trigger_example1.py
|
srini009/ascent
|
70558059dc3fe514206781af6e48715d8934c37c
|
[
"BSD-3-Clause"
] | null | null | null |
###############################################################################
# Copyright (c) Lawrence Livermore National Security, LLC and other Ascent
# Project developers. See top-level LICENSE AND COPYRIGHT files for dates and
# other details. No copyright assignment is required to contribute to Ascent.
###############################################################################
import conduit
import conduit.blueprint
import ascent
import numpy as np
from ascent_tutorial_py_utils import tutorial_gyre_example
# Use triggers to render when conditions occur
a = ascent.Ascent()
a.open()
# setup actions
actions = conduit.Node()
# declare a question to ask
add_queries = actions.append()
add_queries["action"] = "add_queries"
# add our entropy query (q1)
queries = add_queries["queries"]
queries["q1/params/expression"] = "entropy(histogram(field('gyre'), num_bins=128))"
queries["q1/params/name"] = "entropy"
# declare triggers
add_triggers = actions.append()
add_triggers["action"] = "add_triggers"
triggers = add_triggers["triggers"]
# add a simple trigger (t1_ that fires at cycle 500
triggers["t1/params/condition"] = "cycle() == 500"
triggers["t1/params/actions_file"] = "cycle_trigger_actions.yaml"
# add trigger (t2) that fires when the change in entroy exceeds 0.5
# the history function allows you to access query results of previous
# cycles. relative_index indicates how far back in history to look.
# Looking at the plot of gyre entropy in the previous notebook, we see a jump
# in entropy at cycle 200, so we expect the trigger to fire at cycle 200
triggers["t2/params/condition"] = "entropy - history(entropy, relative_index = 1) > 0.5"
triggers["t2/params/actions_file"] = "entropy_trigger_actions.yaml"
# view our full actions tree
print(actions.to_yaml())
# gyre time varying params
nsteps = 10
time = 0.0
delta_time = 0.5
for step in range(nsteps):
# call helper that generates a double gyre time varying example mesh.
# gyre ref :https://shaddenlab.berkeley.edu/uploads/LCS-tutorial/examples.html
mesh = tutorial_gyre_example(time)
# update the example cycle
cycle = 100 + step * 100
mesh["state/cycle"] = cycle
print("time: {} cycle: {}".format(time,cycle))
# publish mesh to ascent
a.publish(mesh)
# execute the actions
a.execute(actions)
# update time
time = time + delta_time
# retrieve the info node that contains the trigger and query results
info = conduit.Node()
a.info(info)
# close ascent
a.close()
# this will render:
# cycle_trigger_out_500.png
# entropy_trigger_out_200.png
#
# We can also examine when the triggers executed by looking at the expressions
# results in the output info
#
print(info["expressions"].to_yaml())
| 29.634409
| 88
| 0.695573
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.