id stringlengths 1 265 | text stringlengths 6 5.19M | dataset_id stringclasses 7 values |
|---|---|---|
/Flask-HAL-1.0.4.tar.gz/Flask-HAL-1.0.4/README.rst | Flask-HAL
=========
|circle| |downloads| |version| |license| |docs|
Hello Dave...
I provide easy integration of the `HAL <https://tools.ietf.org/html/draft-kelly-json-hal-07>`_
specification for your ``REST`` Flask Applications.
Here is an example Dave...
.. sourcecode:: python
# Third Party Libs
from flask import Flask
# First Party Libs
from flask_hal import HAL, document
app = Flask(__name__)
HAL(app) # Initialise HAL
@app.route('/hello')
def hello():
return document.Document(data={
'message': 'Hello World'
})
if __name__ == "__main__":
app.run(debug=True)
Here is the response from the above view.
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/hal+json; charset=UTF-8
Date: Thu, 06 Aug 2015 10:24:04 GMT
{
"_links": {
"self": {
"href": "/hello"
        }
    },
    "message": "Hello World"
}
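To see exactly what is serialised, the ``hal+json`` body above can be assembled by hand with nothing but the standard library — a minimal sketch for illustration only; in practice ``flask_hal``'s ``Document`` builds the ``_links`` object (with a ``self`` relation pointing at the request path) around whatever you pass in ``data``:

```python
import json

# Hand-built sketch of the hal+json body the /hello view above returns.
# flask_hal adds "_links" automatically; "message" is the payload from `data`.
body = {
    "_links": {
        "self": {"href": "/hello"}
    },
    "message": "Hello World",
}

print(json.dumps(body, indent=2))
```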
Contributing
------------
Run tests using ``python setup.py test``.
.. |circle| image:: https://img.shields.io/circleci/project/thisissoon/Flask-HAL.svg
:target: https://circleci.com/gh/thisissoon/Flask-HAL
.. |downloads| image:: http://img.shields.io/pypi/dm/Flask-HAL.svg
:target: https://pypi.python.org/pypi/Flask-HAL
.. |version| image:: http://img.shields.io/pypi/v/Flask-HAL.svg
:target: https://pypi.python.org/pypi/Flask-HAL
.. |license| image:: http://img.shields.io/pypi/l/Flask-HAL.svg
:target: https://pypi.python.org/pypi/Flask-HAL
.. |docs| image:: https://img.shields.io/badge/documentation-latest-blue.svg
:target: http://flask-hal.soon.build/en/latest/
| PypiClean |
/Auto-ADB-Wifi-1.0.1.tar.gz/Auto-ADB-Wifi-1.0.1/README.md | # Auto-ADB-Wifi
[](https://badge.fury.io/py/Auto-ADB-Wifi)
A script to automatically connect Android devices in debug mode over WIFI.
<p align="center">
<img src="extras/example.gif" width="100%" />
</p>
## Motivation
Every day I need to connect a lot of different devices to my computer.
Some devices have Micro-USB ports and others USB Type-C ports, so I lose time plugging in devices and waiting for ADB.
So, I created this script to automatically connect a device over WIFI.
**The difference from other scripts and plugins:** This script saves the connections in a configuration file so it can try to reconnect when you boot your computer or when your device loses the WIFI connection.
## Requirements
* Python 3
* ADB
## Installation
Using pip, you can install ```auto-adb-wifi```:
### Linux and macOS:
```$ sudo pip install auto-adb-wifi```
## Usage
1. Run ```$ adb-wifi```
You can add ```adb-wifi``` to your startup applications.
2. Connect the devices to your computer and authorize USB debugging.
**Attention:** If your device turns off (dead battery, etc.), you need to plug the device into the computer again, because ADB needs to reopen the ```tcpip``` port!
If your device is rooted, you can use this [application](https://play.google.com/store/apps/details?id=com.ttxapps.wifiadb)
to turn on the ```tcpip``` port and skip this step.
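Under the hood, the workflow boils down to a couple of `adb` invocations per saved device. Here is a minimal Python sketch of building those commands — the serial, IP address, and default port below are hypothetical examples, and the real tool shells out to `adb` itself rather than just printing command strings:

```python
def reconnect_commands(saved_devices, port=5555):
    """Build the adb commands used to reconnect each previously saved
    device over WIFI (hypothetical sketch of what adb-wifi automates)."""
    cmds = []
    for serial, ip in saved_devices:
        # The device must be on USB once so adb can open the TCP port.
        cmds.append(f"adb -s {serial} tcpip {port}")
        # Afterwards the device is reachable over the network.
        cmds.append(f"adb connect {ip}:{port}")
    return cmds

for cmd in reconnect_commands([("emulator-5554", "192.168.1.23")]):
    print(cmd)
# → adb -s emulator-5554 tcpip 5555
# → adb connect 192.168.1.23:5555
```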
| PypiClean |
/GRIDOPT-1.3.7rc1.tar.gz/GRIDOPT-1.3.7rc1/gridopt/power_flow/ac_pf.py |
from __future__ import print_function
import time
import numpy as np
from .method_error import *
from .method import PFmethod
from numpy.linalg import norm
class ACPF(PFmethod):
"""
AC power flow method.
"""
CONTROL_MODE_LOCKED = 'locked'
CONTROL_MODE_FREE = 'free'
CONTROL_MODE_REG = 'regulating'
name = 'ACPF'
_parameters = {'weight_vmag': 1e0, # weight for voltage magnitude regularization
'weight_vang': 1e-3, # weight for angle difference regularization
'weight_powers': 1e-3, # weight for gen powers regularization
'weight_controls': 1e0, # weight for control deviation penalty
'weight_var': 1e-5, # weight for general variable regularization
'v_min_clip': 0.5, # lower v threshold for clipping
'v_max_clip': 1.5, # upper v threshold for clipping
'v_limits': False, # voltage magnitude limits
'Q_limits': True, # flag for enforcing generator, VSC and FACTS reactive power limits
'Q_mode': 'regulating', # reactive power mode: free, regulating
'shunt_limits': True, # flag for enforcing switched shunt susceptance limits
'shunt_mode': 'locked', # switched shunts mode: locked, free, regulating
'tap_limits': True, # flag for enforcing transformer tap ratio limits
'tap_mode': 'locked', # transformer tap ratio mode: locked, free, regulating
'lock_vsc_P_dc': True, # flag for locking vsc P dc
'lock_csc_P_dc': True, # flag for locking csc P dc
'lock_csc_i_dc': True, # flag for locking csc i dc
'vdep_loads': False, # flag for modeling voltage dependent loads
'pvpq_start_k': 0, # start iteration number for PVPQ switching heuristics
'vmin_thresh': 0.1, # minimum voltage magnitude threshold
'gens_redispatch': False, # flag for allowing active power redispatch
'shunts_round': True, # flag for rounding discrete switched shunt susceptances (not supported yet)
'taps_round': True, # flag for rounding discrete transformer tap ratios (not supported yet)
'v_mag_warm_ref': False, # flag for using current v_mag as reference in v_mag regularization
'solver': 'nr', # OPTALG optimization solver: augl, ipopt, nr, inlp
'tap_step': 0.5, # tap ratio acceleration factor (NR only)
'shunt_step': 0.5, # susceptance acceleration factor (NR only)
'dtap': 1e-4, # tap ratio perturbation (NR only)
'dsus': 1e-4} # susceptance perturbation (NR only)
_parameters_augl = {'feastol' : 1e-4,
'optol' : 1e0,
'kappa' : 1e-5,
'theta_max': 1e-6,
'sigma_init_max': 1e9}
_parameters_ipopt = {}
_parameters_inlp = {'feastol' : 1e-4,
'optol' : 1e0}
_parameters_nr = {}
def __init__(self):
from optalg.opt_solver import OptSolverAugL, OptSolverIpopt, OptSolverNR, OptSolverINLP
# Parent init
PFmethod.__init__(self)
# Solver params
augl_params = OptSolverAugL.parameters.copy()
augl_params.update(self._parameters_augl) # overwrite defaults
ipopt_params = OptSolverIpopt.parameters.copy()
ipopt_params.update(self._parameters_ipopt) # overwrite defaults
inlp_params = OptSolverINLP.parameters.copy()
inlp_params.update(self._parameters_inlp) # overwrite defaults
nr_params = OptSolverNR.parameters.copy()
nr_params.update(self._parameters_nr) # overwrite defaults
self._parameters = ACPF._parameters.copy()
self._parameters['solver_parameters'] = {'augl': augl_params,
'ipopt': ipopt_params,
'nr': nr_params,
'inlp': inlp_params}
def create_problem(self, net):
solver_name = self._parameters['solver']
if solver_name == 'nr':
return self.create_problem_nr(net)
else:
return self.create_problem_opt(net)
def create_problem_nr(self, net):
import pfnet
# Parameters
params = self._parameters
Q_mode = params['Q_mode']
Q_limits = params['Q_limits']
shunt_mode = params['shunt_mode']
shunt_limits = params['shunt_limits']
tap_mode = params['tap_mode']
tap_limits = params['tap_limits']
lock_vsc_P_dc = params['lock_vsc_P_dc']
lock_csc_P_dc = params['lock_csc_P_dc']
lock_csc_i_dc = params['lock_csc_i_dc']
vdep_loads = params['vdep_loads']
gens_redispatch = params['gens_redispatch']
# Check shunt options
if shunt_mode not in [self.CONTROL_MODE_LOCKED,
self.CONTROL_MODE_REG]:
raise ValueError('invalid shunts mode')
if shunt_mode == self.CONTROL_MODE_REG and not shunt_limits:
raise ValueError('unsupported shunts configuration')
# Check tap options
if tap_mode not in [self.CONTROL_MODE_LOCKED,
self.CONTROL_MODE_REG]:
raise ValueError('invalid taps mode')
if tap_mode == self.CONTROL_MODE_REG and not tap_limits:
raise ValueError('unsupported taps configuration')
# Check Q options
if Q_mode != self.CONTROL_MODE_REG:
raise ValueError('invalid reactive power mode')
# Check other options
if gens_redispatch:
raise ValueError('generation redispatch not supported')
if not lock_vsc_P_dc:
raise ValueError('VSC P DC must be locked')
if not lock_csc_P_dc:
raise ValueError('CSC P DC must be locked')
if not lock_csc_i_dc:
raise ValueError('CSC i DC must be locked')
# Clear flags
net.clear_flags()
# Buses
net.set_flags('bus',
'variable',
'not slack',
'voltage angle')
net.set_flags('bus',
'variable',
'any',
'voltage magnitude')
# Generators
net.set_flags('generator',
'variable',
'slack',
'active power')
net.set_flags('generator',
'variable',
'regulator',
'reactive power')
# VSC HVDC
net.set_flags('vsc converter',
'variable',
'any',
['dc power', 'active power', 'reactive power'])
# CSC HVDC
net.set_flags('csc converter',
'variable',
'any',
['dc power', 'active power', 'reactive power'])
# DC buses
net.set_flags('dc bus',
'variable',
'any',
'voltage')
# FACTS
net.set_flags('facts',
'variable',
'any',
'all')
# Loads
if vdep_loads:
for load in net.loads:
if load.is_voltage_dependent() and load.is_in_service():
net.set_flags_of_component(load,
'variable',
['active power', 'reactive power'])
# Tap changers
if tap_mode != self.CONTROL_MODE_LOCKED:
net.set_flags('branch',
['variable', 'fixed'],
'tap changer - v',
'tap ratio')
# Switched shunts
if shunt_mode != self.CONTROL_MODE_LOCKED:
net.set_flags('shunt',
['variable', 'fixed'],
'switching - v',
'susceptance')
# Set up problem
problem = pfnet.Problem(net)
problem.add_constraint(pfnet.Constraint('AC power balance', net))
problem.add_constraint(pfnet.Constraint('HVDC power balance', net))
problem.add_constraint(pfnet.Constraint('generator active power participation', net))
problem.add_constraint(pfnet.Constraint('variable fixing', net))
problem.add_constraint(pfnet.Constraint('VSC converter equations', net))
problem.add_constraint(pfnet.Constraint('CSC converter equations', net))
problem.add_constraint(pfnet.Constraint('FACTS equations', net))
problem.add_constraint(pfnet.Constraint('VSC DC voltage control', net))
problem.add_constraint(pfnet.Constraint('CSC DC voltage control', net))
problem.add_constraint(pfnet.Constraint('VSC DC power control', net))
problem.add_constraint(pfnet.Constraint('CSC DC power control', net))
problem.add_constraint(pfnet.Constraint('CSC DC current control', net))
problem.add_constraint(pfnet.Constraint('PVPQ switching', net))
problem.add_constraint(pfnet.Constraint('switching power factor regulation', net))
problem.add_constraint(pfnet.Constraint('switching FACTS active power control', net))
problem.add_constraint(pfnet.Constraint('switching FACTS reactive power control', net))
if vdep_loads:
problem.add_constraint(pfnet.Constraint('load voltage dependence', net))
if Q_limits:
problem.add_heuristic(pfnet.Heuristic('PVPQ switching', net))
problem.add_heuristic(pfnet.Heuristic('switching power factor regulation', net))
problem.analyze()
# Check
if (problem.J.shape[0] + problem.A.shape[0] != problem.get_num_primal_variables()):
raise PFmethodError_BadProblem()
# Return
return problem
def create_problem_opt(self, net):
import pfnet
# Parameters
params = self._parameters
wm = params['weight_vmag']
wa = params['weight_vang']
wp = params['weight_powers']
wc = params['weight_controls']
wv = params['weight_var']
v_limits = params['v_limits']
Q_mode = params['Q_mode']
Q_limits = params['Q_limits']
shunt_mode = params['shunt_mode']
shunt_limits = params['shunt_limits']
tap_mode = params['tap_mode']
tap_limits = params['tap_limits']
lock_vsc_P_dc = params['lock_vsc_P_dc']
lock_csc_P_dc = params['lock_csc_P_dc']
lock_csc_i_dc = params['lock_csc_i_dc']
vdep_loads = params['vdep_loads']
v_mag_warm_ref = params['v_mag_warm_ref']
gens_redispatch = params['gens_redispatch']
# Check shunt options
if shunt_mode not in [self.CONTROL_MODE_LOCKED,
self.CONTROL_MODE_FREE,
self.CONTROL_MODE_REG]:
raise ValueError('invalid shunts mode')
if shunt_mode == self.CONTROL_MODE_REG and not shunt_limits:
raise ValueError('unsupported shunts configuration')
# Check tap options
if tap_mode not in [self.CONTROL_MODE_LOCKED,
self.CONTROL_MODE_FREE,
self.CONTROL_MODE_REG]:
raise ValueError('invalid taps mode')
if tap_mode == self.CONTROL_MODE_REG and not tap_limits:
raise ValueError('unsupported taps configuration')
# Check Q options
if Q_mode not in [self.CONTROL_MODE_REG,
self.CONTROL_MODE_FREE]:
raise ValueError('invalid reactive power mode')
# Clear flags
net.clear_flags()
# Buses
net.set_flags('bus',
'variable',
'not slack',
'voltage angle')
net.set_flags('bus',
'variable',
'any',
'voltage magnitude')
if Q_mode == self.CONTROL_MODE_REG and not Q_limits:
net.set_flags('bus',
'fixed',
'v set regulated',
'voltage magnitude')
if v_limits:
net.set_flags('bus',
'bounded',
'any',
'voltage magnitude')
        # Generators
if gens_redispatch:
net.set_flags('generator',
['variable', 'bounded'],
'any',
'active power')
else:
net.set_flags('generator',
'variable',
'slack',
'active power')
net.set_flags('generator',
'variable',
'regulator',
'reactive power')
if Q_mode == self.CONTROL_MODE_FREE and Q_limits:
net.set_flags('generator',
'bounded',
'regulator',
'reactive power')
# Loads
if vdep_loads:
for load in net.loads:
if load.is_voltage_dependent() and load.is_in_service():
net.set_flags_of_component(load,
'variable',
['active power', 'reactive power'])
# VSC HVDC
net.set_flags('vsc converter',
'variable',
'any',
['dc power', 'active power', 'reactive power'])
if Q_mode == self.CONTROL_MODE_FREE and Q_limits:
net.set_flags('vsc converter',
'bounded',
'any',
'reactive power')
# CSC HVDC
net.set_flags('csc converter',
'variable',
'any',
['dc power', 'active power', 'reactive power'])
# DC buses
net.set_flags('dc bus',
'variable',
'any',
'voltage')
# FACTS
net.set_flags('facts',
'variable',
'any',
'all')
if Q_mode == self.CONTROL_MODE_FREE and Q_limits:
net.set_flags('facts',
'bounded',
'any',
'reactive power')
# Tap changers
if tap_mode != self.CONTROL_MODE_LOCKED:
net.set_flags('branch',
'variable',
'tap changer - v',
'tap ratio')
if tap_mode == self.CONTROL_MODE_FREE and tap_limits:
net.set_flags('branch',
'bounded',
'tap changer - v',
'tap ratio')
        # Switched shunts
if shunt_mode != self.CONTROL_MODE_LOCKED:
net.set_flags('shunt',
'variable',
'switching - v',
'susceptance')
if shunt_mode == self.CONTROL_MODE_FREE and shunt_limits:
net.set_flags('shunt',
'bounded',
'switching - v',
'susceptance')
# Set up problem
problem = pfnet.Problem(net)
problem.add_constraint(pfnet.Constraint('AC power balance', net))
problem.add_constraint(pfnet.Constraint('HVDC power balance', net))
problem.add_constraint(pfnet.Constraint('generator active power participation', net))
problem.add_constraint(pfnet.Constraint('VSC converter equations', net))
problem.add_constraint(pfnet.Constraint('CSC converter equations', net))
problem.add_constraint(pfnet.Constraint('FACTS equations', net))
problem.add_constraint(pfnet.Constraint('VSC DC voltage control', net))
problem.add_constraint(pfnet.Constraint('CSC DC voltage control', net))
problem.add_constraint(pfnet.Constraint('power factor regulation', net))
if lock_vsc_P_dc:
problem.add_constraint(pfnet.Constraint('VSC DC power control', net))
if lock_csc_P_dc:
problem.add_constraint(pfnet.Constraint('CSC DC power control', net))
if lock_csc_i_dc:
problem.add_constraint(pfnet.Constraint('CSC DC current control', net))
func = pfnet.Function('voltage magnitude regularization', wm/(net.get_num_buses(True)+1.), net)
func.set_parameter('v_set_reference', not v_mag_warm_ref)
problem.add_function(func)
problem.add_function(pfnet.Function('variable regularization', wv/(net.num_vars+1.), net))
problem.add_function(pfnet.Function('voltage angle regularization', wa/(net.get_num_buses(True)+1.), net))
problem.add_function(pfnet.Function('generator powers regularization', wp/(net.get_num_generators(True)+1.), net))
problem.add_function(pfnet.Function('VSC DC power control', wc/(net.get_num_vsc_converters(True)+1.), net))
problem.add_function(pfnet.Function('CSC DC power control', wc/(net.get_num_csc_converters(True)+1.), net))
problem.add_function(pfnet.Function('CSC DC current control', wc/(net.get_num_csc_converters(True)+1.), net))
problem.add_function(pfnet.Function('FACTS active power control', wc/(net.get_num_facts(True)+1.), net))
problem.add_function(pfnet.Function('FACTS reactive power control', wc/(net.get_num_facts(True)+1.), net))
if gens_redispatch:
problem.add_function(pfnet.Function('generation redispatch penalty', wc/(net.get_num_generators(True)+1.), net))
if Q_mode == self.CONTROL_MODE_REG and Q_limits:
problem.add_constraint(pfnet.Constraint('voltage set point regulation', net))
if net.num_fixed > 0:
problem.add_constraint(pfnet.Constraint('variable fixing', net))
if tap_mode != self.CONTROL_MODE_LOCKED:
problem.add_function(pfnet.Function('tap ratio regularization', wc/(net.get_num_tap_changers_v(True)+1.), net))
if tap_mode == self.CONTROL_MODE_REG and tap_limits:
problem.add_constraint(pfnet.Constraint('voltage regulation by transformers', net))
if shunt_mode != self.CONTROL_MODE_LOCKED:
problem.add_function(pfnet.Function('susceptance regularization', wc/(net.get_num_switched_v_shunts(True)+1.), net))
if shunt_mode == self.CONTROL_MODE_REG and shunt_limits:
problem.add_constraint(pfnet.Constraint('voltage regulation by shunts', net))
if vdep_loads:
problem.add_constraint(pfnet.Constraint('load voltage dependence', net))
if net.num_bounded > 0:
problem.add_constraint(pfnet.Constraint('variable bounds', net))
# Analyze
problem.analyze()
# Return
return problem
def solve(self, net, save_problem=False):
from optalg.opt_solver import OptSolverError, OptTermination, OptCallback
from optalg.opt_solver import OptSolverAugL, OptSolverIpopt, OptSolverNR, OptSolverINLP
# Parameters
params = self._parameters
Q_mode = params['Q_mode']
shunt_mode = params['shunt_mode']
shunts_round = params['shunts_round']
tap_mode = params['tap_mode']
taps_round = params['taps_round']
vmin_thresh = params['vmin_thresh']
solver_name = params['solver']
solver_params = params['solver_parameters']
v_min_clip = params['v_min_clip']
v_max_clip = params['v_max_clip']
# Opt solver
if solver_name == 'augl':
solver = OptSolverAugL()
elif solver_name == 'ipopt':
solver = OptSolverIpopt()
elif solver_name == 'inlp':
solver = OptSolverINLP()
elif solver_name == 'nr':
solver = OptSolverNR()
else:
raise PFmethodError_BadOptSolver()
solver.set_parameters(solver_params[solver_name])
# Copy network
net = net.get_copy(merge_buses=True)
self.set_network_snapshot(net)
# Clipping
for bus in net.buses:
bus.v_mag = np.minimum(np.maximum(bus.v_mag, v_min_clip), v_max_clip)
# Problem
t0 = time.time()
problem = self.create_problem(net)
problem_time = time.time()-t0
# Callbacks
def c1(s):
if (s.k != 0 and params['tap_limits'] and tap_mode == self.CONTROL_MODE_REG and
norm(s.problem.f, np.inf) < 100.*solver_params['nr']['feastol']):
try:
self.apply_tran_v_regulation(s)
except Exception as e:
raise PFmethodError_TranVReg(e)
def c2(s):
if (s.k != 0 and params['shunt_limits'] and shunt_mode == self.CONTROL_MODE_REG and
norm(s.problem.f, np.inf) < 100.*solver_params['nr']['feastol']):
try:
self.apply_shunt_v_regulation(s)
except Exception as e:
raise PFmethodError_ShuntVReg(e)
def c3(s):
if (s.k >= params['pvpq_start_k'] and params['Q_limits'] and Q_mode == self.CONTROL_MODE_REG):
prob = s.problem.wrapped_problem
prob.apply_heuristics(s.x)
s.problem.A = prob.A
s.problem.b = prob.b
if solver_name == 'nr':
solver.add_callback(OptCallback(c1))
solver.add_callback(OptCallback(c2))
solver.add_callback(OptCallback(c3))
# Termination
def t1(s):
if np.min(s.problem.wrapped_problem.network.bus_v_min) < vmin_thresh:
return True
else:
return False
solver.add_termination(OptTermination(t1, 'low voltage'))
# Info printer
info_printer = self.get_info_printer()
solver.set_info_printer(info_printer)
# Solve
update = True
t0 = time.time()
try:
solver.solve(problem)
except OptSolverError as e:
raise PFmethodError_SolverError(e)
except Exception as e:
update = False
raise e
finally:
# Update network
if update:
net.set_var_values(solver.get_primal_variables()[:net.num_vars])
net.update_properties()
net.clear_sensitivities()
if solver_name != 'nr':
problem.store_sensitivities(*solver.get_dual_variables())
# Save results
self.set_solver_name(solver_name)
self.set_solver_status(solver.get_status())
self.set_solver_message(solver.get_error_msg())
self.set_solver_iterations(solver.get_iterations())
self.set_solver_time(time.time()-t0)
self.set_solver_primal_variables(solver.get_primal_variables())
self.set_solver_dual_variables(solver.get_dual_variables())
self.set_problem(problem if save_problem else None)
self.set_problem_time(problem_time)
self.set_network_snapshot(net)
def get_info_printer(self):
# Parameters
solver_name = self._parameters['solver']
# Define
def info_printer(solver,header):
net = solver.problem.wrapped_problem.network
if header:
print('{0:^5}'.format('vmax'), end=' ')
print('{0:^5}'.format('vmin'), end=' ')
print('{0:^8}'.format('gvdev'), end=' ')
print('{0:^8}'.format('gQvio'))
else:
print('{0:^5.2f}'.format(np.average(net.bus_v_max)), end=' ')
print('{0:^5.2f}'.format(np.average(net.bus_v_min)), end=' ')
print('{0:^8.1e}'.format(np.average(net.gen_v_dev)), end=' ')
print('{0:^8.1e}'.format(np.average(net.gen_Q_vio)))
# Return
return info_printer
def apply_shunt_v_regulation(self,solver):
# Local variables
dsus = self._parameters['dsus']
step = self._parameters['shunt_step']
p = solver.problem.wrapped_problem
net = p.network
x = solver.x
eps = 1e-8
# Fix constraints
c = p.find_constraint('variable fixing')
A = c.A
b = c.b
# Rhs
rhs = np.hstack((np.zeros(p.f.size),np.zeros(p.b.size)))
# Offset
offset = 0
for c in p.constraints:
if c.name == 'variable fixing':
break
else:
offset += c.A.shape[0]
# Violation check
for i in range(net.num_buses):
bus = net.get_bus(i)
if bus.is_regulated_by_shunt(True) and not bus.is_slack():
assert(bus.has_flags('variable','voltage magnitude'))
for t in range(net.num_periods):
v = x[bus.index_v_mag[t]]
vmax = bus.v_max_reg
vmin = bus.v_min_reg
assert(len(bus.reg_shunts) > 0)
assert(vmax >= vmin)
# Violation
if v > vmax or v < vmin:
for reg_shunt in bus.reg_shunts:
if not reg_shunt.is_in_service():
continue
assert(reg_shunt.has_flags('variable','susceptance'))
s = x[reg_shunt.index_b[t]]
smax = reg_shunt.b_max
smin = reg_shunt.b_min
assert(smin <= smax)
# Fix constr index
k = int(np.where(A.col == reg_shunt.index_b[t])[0])
i = A.row[k]
assert(np.abs(b[i]-x[reg_shunt.index_b[t]]) < eps)
assert(A.data[k] == 1.)
# Sensitivity
assert(rhs[p.f.size+offset+i] == 0.)
rhs[p.f.size+offset+i] = dsus
dx = solver.linsolver.solve(rhs)
dv = dx[bus.index_v_mag[t]]
dvds = dv/dsus
rhs[p.f.size+offset+i] = 0.
# Adjustment
dv = (vmax+vmin)/2.-v
ds = step*dv/dvds if dvds != 0. else 0.
snew = np.maximum(np.minimum(s+ds,smax),smin)
x[reg_shunt.index_b[t]] = snew
b[i] = snew
if np.abs(snew-s) > eps:
break
# Update
solver.func(x)
p.update_lin()
solver.problem.A = p.A
solver.problem.b = p.b
def apply_tran_v_regulation(self,solver):
# Local variables
dtap = self._parameters['dtap']
step = self._parameters['tap_step']
p = solver.problem.wrapped_problem
net = p.network
x = solver.x
eps = 1e-8
# Fix constraints
c = p.find_constraint('variable fixing')
A = c.A
b = c.b
# Rhs
rhs = np.hstack((np.zeros(p.f.size),np.zeros(p.b.size)))
# Offset
offset = 0
for c in p.constraints:
if c.name == 'variable fixing':
break
else:
offset += c.A.shape[0]
# Violation check
for i in range(net.num_buses):
bus = net.get_bus(i)
if bus.is_regulated_by_tran(True) and not bus.is_slack():
assert(bus.has_flags('variable','voltage magnitude'))
for tau in range(net.num_periods):
v = x[bus.index_v_mag[tau]]
vmax = bus.v_max_reg
vmin = bus.v_min_reg
assert(len(bus.reg_trans) > 0)
assert(vmax > vmin)
# Violation
if v > vmax or v < vmin:
for reg_tran in bus.reg_trans:
if not reg_tran.is_in_service():
continue
assert(reg_tran.has_flags('variable','tap ratio'))
t = x[reg_tran.index_ratio[tau]]
tmax = reg_tran.ratio_max
tmin = reg_tran.ratio_min
assert(tmax >= tmin)
# Fix constr index
k = int(np.where(A.col == reg_tran.index_ratio[tau])[0])
i = A.row[k]
assert(np.abs(b[i]-x[reg_tran.index_ratio[tau]]) < eps)
assert(A.data[k] == 1.)
# Sensitivity
assert(rhs[p.f.size+offset+i] == 0.)
rhs[p.f.size+offset+i] = dtap
dx = solver.linsolver.solve(rhs)
dv = dx[bus.index_v_mag[tau]]
dvdt = dv/dtap
rhs[p.f.size+offset+i] = 0.
# Adjustment
dv = (vmax+vmin)/2.-v
dt = step*dv/dvdt if dvdt != 0. else 0.
tnew = np.maximum(np.minimum(t+dt,tmax),tmin)
x[reg_tran.index_ratio[tau]] = tnew
b[i] = tnew
if np.abs(tnew-t) > eps:
break
# Update
solver.func(x)
p.update_lin()
solver.problem.A = p.A
solver.problem.b = p.b | PypiClean |
/Chronix2Grid-1.2.0.post1.tar.gz/Chronix2Grid-1.2.0.post1/chronix2grid/generation/consumption/generate_load.py |
import os
import json
from numpy.random import default_rng
# Other Python libraries
import pandas as pd
import numpy as np
# Libraries developed for this module
from . import consumption_utils as conso
from .. import generation_utils as utils
def get_add_dim(params, loads_charac):
add_dim = 0
dx_corr = int(params['dx_corr'])
dy_corr = int(params['dy_corr'])
for x,y in zip(loads_charac["x"], loads_charac["y"]):
x_plus = int(x // dx_corr + 1)
y_plus = int(y // dy_corr + 1)
add_dim = max(y_plus, add_dim)
add_dim = max(x_plus, add_dim)
return add_dim
def main(scenario_destination_path, seed, params, loads_charac,
load_weekly_pattern, write_results = True, day_lag=0,
return_ref_curve=False,
use_legacy=True):
"""
    This is the load generation function. It generates consumption chronics based on demand-node characteristics and weekly demand patterns.
Parameters
----------
scenario_destination_path (string): where results are written
seed (int): random seed of the scenario
params (dict): system params such as timestep or mesh characteristics
    loads_charac (pandas.DataFrame): characteristics of load nodes, such as Pmax and type of demand
    load_weekly_pattern (pandas.DataFrame): five-minute weekly load chronic that represents the specificity of the demand context
write_results (boolean): whether to write_results or not. Default is True
Returns
-------
pandas.DataFrame: loads chronics generated at every node with additional gaussian noise
pandas.DataFrame: loads chronics forecasted for the scenario without additional gaussian noise
"""
# Set random seed of scenario
prng = default_rng(seed)
#np.random.seed(seed) #older version - to be removed
# Define reference datetime indices
datetime_index = pd.date_range(
start=params['start_date'],
end=params['end_date'],
freq=str(params['dt']) + 'min')
add_dim = get_add_dim(params, loads_charac)
# Generate GLOBAL temperature noise
    print('Computing global auto-correlated spatio-temporal noise for thermosensitive demand...')  # "temperature" here simply reflects that load is spatially correlated, as real temperature is; it is not the real temperature.
temperature_noise = utils.generate_coarse_noise(prng, params, 'temperature', add_dim=add_dim)
print('Computing loads ...')
start_day = datetime_index[0]
loads_series = conso.compute_loads(loads_charac,
temperature_noise,
params,
load_weekly_pattern,
start_day=start_day,
add_dim=add_dim,
day_lag=day_lag,
return_ref_curve=return_ref_curve,
use_legacy=use_legacy)
if return_ref_curve:
loads_series, ref_curve = loads_series
loads_series['datetime'] = datetime_index
# Save files
if scenario_destination_path is not None:
print('Saving files in zipped csv in "{}"'.format(scenario_destination_path))
if not os.path.exists(scenario_destination_path):
os.makedirs(scenario_destination_path)
load_p_forecasted = conso.create_csv(prng, loads_series, scenario_destination_path,
forecasted=True, reordering=True,
shift=True, write_results=write_results, index=False)
load_p = conso.create_csv(
prng,
loads_series, scenario_destination_path,
reordering=True,
noise=params['planned_std'],
write_results=write_results,
index=False
)
if not return_ref_curve:
return load_p, load_p_forecasted
else:
return load_p, load_p_forecasted, ref_curve | PypiClean |
/FitBenchmarking-1.0.0.tar.gz/FitBenchmarking-1.0.0/fitbenchmarking/cost_func/hellinger_nlls_cost_func.py | from numpy import array, matmul, ravel, sqrt
from fitbenchmarking.cost_func.nlls_base_cost_func import BaseNLLSCostFunc
from fitbenchmarking.utils.exceptions import (CostFuncError,
IncompatibleCostFunctionError)
class HellingerNLLSCostFunc(BaseNLLSCostFunc):
"""
This defines the Hellinger non-linear least squares cost function where,
given a set of :math:`n` data points :math:`(x_i,y_i)`, associated errors
:math:`e_i`, and a model function :math:`f(x,p)`, we find the optimal
parameters in the Hellinger least-squares sense by solving:
.. math:: \\min_p \\sum_{i=1}^n
        \\left(\\sqrt{y_i} - \\sqrt{f(x_i, p)}\\right)^2
where :math:`p` is a vector of length :math:`m`, and we start from a
given initial guess for the optimal parameters. More information on
non-linear least squares cost functions can be found
`here <https://en.wikipedia.org/wiki/Non-linear_least_squares>`__ and for
the Hellinger distance measure see
`here <https://en.wikipedia.org/wiki/Hellinger_distance>`__.
"""
def eval_r(self, params, **kwargs):
"""
Calculate the residuals, :math:`\\sqrt{y_i} - \\sqrt{f(x_i, p)}`
:param params: The parameters, :math:`p`, to calculate residuals for
:type params: list
:return: The residuals for the datapoints at the given parameters
:rtype: numpy array
"""
x = kwargs.get("x", self.problem.data_x)
y = kwargs.get("y", self.problem.data_y)
if len(x) != len(y):
raise CostFuncError('The length of the x and y are not the same, '
f'len(x)={len(x)} and len(y)= {len(y)}.')
result = sqrt(y) - sqrt(self.problem.eval_model(params=params, x=x))
# Flatten in case of a vector function
return ravel(result)
def jac_res(self, params, **kwargs):
"""
Uses the Jacobian of the model to evaluate the Jacobian of the
cost function residual, :math:`\\nabla_p r(x,y,p)`, at the
given parameters.
:param params: The parameters at which to calculate Jacobians
:type params: list
:return: evaluated Jacobian of the residual at each x, y pair
:rtype: a list of 1D numpy arrays
"""
x = kwargs.get("x", self.problem.data_x)
j = self.jacobian.eval(params, **kwargs)
return - j / (2 * sqrt(self.problem.eval_model(params, x=x)[:, None]))
def hes_res(self, params, **kwargs):
"""
Uses the Hessian of the model to evaluate the Hessian of the
cost function residual, :math:`\\nabla_p^2 r(x,y,p)`, at the
given parameters.
:param params: The parameters at which to calculate Hessians
:type params: list
:return: evaluated Hessian and Jacobian of the residual at
each x, y pair
:rtype: tuple (list of 2D numpy arrays, list of 1D numpy arrays)
"""
x = kwargs.get("x", self.problem.data_x)
f = self.problem.eval_model(params, x=x)
jac = self.jacobian.eval(params, **kwargs)
hes = self.hessian.eval(params, **kwargs)
for i in range(len(x)):
jac_i = array([jac[i]])
hes[:, :, i] = matmul(jac_i.T, jac_i) / (4 * f[i] ** (3/2)) \
- hes[:, :, i] / (2 * f[i] ** (1/2))
return hes, self.jac_res(params, **kwargs)
def validate_problem(self):
"""
Validate the problem for the Hellinger Cost Function.
Hellinger involves a square root so will fail on negative inputs.
Raises:
IncompatibleCostFunctionError: When the problem has negative
values.
"""
if (self.problem.data_y < 0).any():
raise IncompatibleCostFunctionError(
"Problem has a negative y value.") | PypiClean |
/HammerTime-http-0.9.1.tar.gz/HammerTime-http-0.9.1/hammertime/ruleset.py |
import asyncio
class Heuristics:
EVENTS = {"before_request", "after_headers", "after_response", "on_timeout", "on_request_successful",
"on_host_unreachable"}
def __init__(self, kb=None, request_engine=None):
self.rulesets = {event: RuleSet() for event in self.EVENTS}
self.kb = kb
self.request_engine = request_engine
for key, rs in self.rulesets.items():
setattr(self, key, rs.accept)
def add_multiple(self, iterator):
for h in iterator:
self.add(h)
def add(self, heuristic):
applied = False
supported = dir(heuristic)
if "set_engine" in supported and self.request_engine is not None:
heuristic.set_engine(self.request_engine)
if "set_kb" in supported and self.kb is not None:
try:
heuristic.set_kb(self.kb)
except AttributeError:
if "load_kb" in supported:
heuristic.load_kb(self.kb)
else:
raise
if "set_child_heuristics" in supported:
heuristic.set_child_heuristics(Heuristics(request_engine=self.request_engine, kb=self.kb))
for event in self.EVENTS:
if event in supported:
self.rulesets[event].add(getattr(heuristic, event))
applied = True
if not applied:
raise ValueError("Expecting heuristic to support some of %s" % self.EVENTS)
class RuleSet:
def __init__(self):
self.rules = []
def add(self, rule):
if not asyncio.iscoroutinefunction(rule):
raise ValueError("Expecting asyncio coroutine for %s" % rule)
self.rules.append(rule)
def __len__(self):
return len(self.rules)
async def accept(self, entry):
for r in self.rules:
await r(entry)
class HammerTimeException(Exception):
pass
class StopRequest(HammerTimeException):
pass
class RejectRequest(HammerTimeException):
pass | PypiClean |
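A minimal usage sketch of the `RuleSet`/heuristic contract above. The `CountResponses` heuristic and the string entry are hypothetical stand-ins; the `RuleSet` definition is copied verbatim from the module so the sketch runs standalone:

```python
import asyncio


# RuleSet copied from hammertime/ruleset.py above so this runs standalone.
class RuleSet:
    def __init__(self):
        self.rules = []

    def add(self, rule):
        if not asyncio.iscoroutinefunction(rule):
            raise ValueError("Expecting asyncio coroutine for %s" % rule)
        self.rules.append(rule)

    async def accept(self, entry):
        for r in self.rules:
            await r(entry)


# Hypothetical heuristic: Heuristics.add() accepts any object defining
# coroutine methods named after the events in Heuristics.EVENTS.
class CountResponses:
    def __init__(self):
        self.count = 0

    async def after_response(self, entry):
        self.count += 1


heuristic = CountResponses()
ruleset = RuleSet()
# This mirrors what Heuristics.add() does for each supported event name.
ruleset.add(heuristic.after_response)
asyncio.run(ruleset.accept("dummy-entry"))
print(heuristic.count)  # -> 1
```

Note that `asyncio.iscoroutinefunction` returns `True` for bound `async def` methods, which is why `Heuristics.add` can register them directly.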
/FITS_tools-0.2.tar.gz/FITS_tools-0.2/ah_bootstrap.py | import contextlib
import errno
import imp
import io
import locale
import os
import re
import subprocess as sp
import sys
try:
from ConfigParser import ConfigParser, RawConfigParser
except ImportError:
from configparser import ConfigParser, RawConfigParser
if sys.version_info[0] < 3:
_str_types = (str, unicode)
_text_type = unicode
PY3 = False
else:
_str_types = (str, bytes)
_text_type = str
PY3 = True
# What follows are several import statements meant to deal with install-time
# issues with either missing or misbehaving packages (including making sure
# setuptools itself is installed):
# Some pre-setuptools checks to ensure that either distribute or setuptools >=
# 0.7 is used (over pre-distribute setuptools) if it is available on the path;
# otherwise the latest setuptools will be downloaded and bootstrapped with
# ``ez_setup.py``. This used to be included in a separate file called
# setuptools_bootstrap.py; but it was combined into ah_bootstrap.py
try:
import pkg_resources
_setuptools_req = pkg_resources.Requirement.parse('setuptools>=0.7')
# This may raise a DistributionNotFound in which case no version of
# setuptools or distribute is properly installed
_setuptools = pkg_resources.get_distribution('setuptools')
if _setuptools not in _setuptools_req:
# Older version of setuptools; check if we have distribute; again if
# this results in DistributionNotFound we want to give up
_distribute = pkg_resources.get_distribution('distribute')
if _setuptools != _distribute:
# It's possible on some pathological systems to have an old version
# of setuptools and distribute on sys.path simultaneously; make
# sure distribute is the one that's used
sys.path.insert(1, _distribute.location)
_distribute.activate()
imp.reload(pkg_resources)
except:
# There are several types of exceptions that can occur here; if all else
# fails bootstrap and use the bootstrapped version
from ez_setup import use_setuptools
use_setuptools()
# typing as a dependency for 1.6.1+ Sphinx causes issues when imported after
# initializing submodule with ah_boostrap.py
# See discussion and references in
# https://github.com/astropy/astropy-helpers/issues/302
try:
import typing # noqa
except ImportError:
pass
# Note: The following import is required as a workaround to
# https://github.com/astropy/astropy-helpers/issues/89; if we don't import this
# module now, it will get cleaned up after `run_setup` is called, but that will
# later cause the TemporaryDirectory class defined in it to stop working when
# used later on by setuptools
try:
import setuptools.py31compat # noqa
except ImportError:
pass
# matplotlib can cause problems if it is imported from within a call of
# run_setup(), because in some circumstances it will try to write to the user's
# home directory, resulting in a SandboxViolation. See
# https://github.com/matplotlib/matplotlib/pull/4165
# Making sure matplotlib, if it is available, is imported early in the setup
# process can mitigate this (note importing matplotlib.pyplot has the same
# issue)
try:
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot
except:
# Ignore if this fails for *any* reason
pass
# End compatibility imports...
# In case it didn't successfully import before the ez_setup checks
import pkg_resources
from setuptools import Distribution
from setuptools.package_index import PackageIndex
from setuptools.sandbox import run_setup
from distutils import log
from distutils.debug import DEBUG
# TODO: Maybe enable checking for a specific version of astropy_helpers?
DIST_NAME = 'astropy-helpers'
PACKAGE_NAME = 'astropy_helpers'
# Defaults for other options
DOWNLOAD_IF_NEEDED = True
INDEX_URL = 'https://pypi.python.org/simple'
USE_GIT = True
OFFLINE = False
AUTO_UPGRADE = True
# A list of all the configuration options and their required types
CFG_OPTIONS = [
('auto_use', bool), ('path', str), ('download_if_needed', bool),
('index_url', str), ('use_git', bool), ('offline', bool),
('auto_upgrade', bool)
]
class _Bootstrapper(object):
"""
Bootstrapper implementation. See ``use_astropy_helpers`` for parameter
documentation.
"""
def __init__(self, path=None, index_url=None, use_git=None, offline=None,
download_if_needed=None, auto_upgrade=None):
if path is None:
path = PACKAGE_NAME
if not (isinstance(path, _str_types) or path is False):
raise TypeError('path must be a string or False')
if PY3 and not isinstance(path, _text_type):
fs_encoding = sys.getfilesystemencoding()
path = path.decode(fs_encoding) # path to unicode
self.path = path
# Set other option attributes, using defaults where necessary
self.index_url = index_url if index_url is not None else INDEX_URL
self.offline = offline if offline is not None else OFFLINE
# If offline=True, override download and auto-upgrade
if self.offline:
download_if_needed = False
auto_upgrade = False
self.download = (download_if_needed
if download_if_needed is not None
else DOWNLOAD_IF_NEEDED)
self.auto_upgrade = (auto_upgrade
if auto_upgrade is not None else AUTO_UPGRADE)
# If this is a release then the .git directory will not exist so we
# should not use git.
git_dir_exists = os.path.exists(os.path.join(os.path.dirname(__file__), '.git'))
if use_git is None and not git_dir_exists:
use_git = False
self.use_git = use_git if use_git is not None else USE_GIT
# Declared as False by default--later we check if astropy-helpers can be
# upgraded from PyPI, but only if not using a source distribution (as in
# the case of import from a git submodule)
self.is_submodule = False
@classmethod
def main(cls, argv=None):
if argv is None:
argv = sys.argv
config = cls.parse_config()
config.update(cls.parse_command_line(argv))
auto_use = config.pop('auto_use', False)
bootstrapper = cls(**config)
if auto_use:
# Run the bootstrapper, otherwise the setup.py is using the old
# use_astropy_helpers() interface, in which case it will run the
# bootstrapper manually after reconfiguring it.
bootstrapper.run()
return bootstrapper
@classmethod
def parse_config(cls):
if not os.path.exists('setup.cfg'):
return {}
cfg = ConfigParser()
try:
cfg.read('setup.cfg')
except Exception as e:
if DEBUG:
raise
log.error(
"Error reading setup.cfg: {0!r}\n{1} will not be "
"automatically bootstrapped and package installation may fail."
"\n{2}".format(e, PACKAGE_NAME, _err_help_msg))
return {}
if not cfg.has_section('ah_bootstrap'):
return {}
config = {}
for option, type_ in CFG_OPTIONS:
if not cfg.has_option('ah_bootstrap', option):
continue
if type_ is bool:
value = cfg.getboolean('ah_bootstrap', option)
else:
value = cfg.get('ah_bootstrap', option)
config[option] = value
return config
@classmethod
def parse_command_line(cls, argv=None):
if argv is None:
argv = sys.argv
config = {}
# For now we just pop recognized ah_bootstrap options out of the
# arg list. This is imperfect; in the unlikely case that a setup.py
# custom command or even custom Distribution class defines an argument
# of the same name then we will break that. However there's a catch22
# here that we can't just do full argument parsing right here, because
# we don't yet know *how* to parse all possible command-line arguments.
if '--no-git' in argv:
config['use_git'] = False
argv.remove('--no-git')
if '--offline' in argv:
config['offline'] = True
argv.remove('--offline')
return config
def run(self):
strategies = ['local_directory', 'local_file', 'index']
dist = None
# First, remove any previously imported versions of astropy_helpers;
# this is necessary for nested installs where one package's installer
# is installing another package via setuptools.sandbox.run_setup, as in
# the case of setup_requires
for key in list(sys.modules):
try:
if key == PACKAGE_NAME or key.startswith(PACKAGE_NAME + '.'):
del sys.modules[key]
except AttributeError:
# Sometimes mysterious non-string things can turn up in
# sys.modules
continue
# Check to see if the path is a submodule
self.is_submodule = self._check_submodule()
for strategy in strategies:
method = getattr(self, 'get_{0}_dist'.format(strategy))
dist = method()
if dist is not None:
break
else:
raise _AHBootstrapSystemExit(
"No source found for the {0!r} package; {0} must be "
"available and importable as a prerequisite to building "
"or installing this package.".format(PACKAGE_NAME))
# This is a bit hacky, but if astropy_helpers was loaded from a
# directory/submodule its Distribution object gets a "precedence" of
# "DEVELOP_DIST". However, in other cases it gets a precedence of
# "EGG_DIST". However, when activing the distribution it will only be
# placed early on sys.path if it is treated as an EGG_DIST, so always
# do that
dist = dist.clone(precedence=pkg_resources.EGG_DIST)
# Otherwise we found a version of astropy-helpers, so we're done
# Just active the found distribution on sys.path--if we did a
# download this usually happens automatically but it doesn't hurt to
# do it again
# Note: Adding the dist to the global working set also activates it
# (makes it importable on sys.path) by default.
try:
pkg_resources.working_set.add(dist, replace=True)
except TypeError:
# Some (much) older versions of setuptools do not have the
# replace=True option here. These versions are old enough that all
# bets may be off anyways, but it's easy enough to work around just
# in case...
if dist.key in pkg_resources.working_set.by_key:
del pkg_resources.working_set.by_key[dist.key]
pkg_resources.working_set.add(dist)
@property
def config(self):
"""
A `dict` containing the options this `_Bootstrapper` was configured
with.
"""
return dict((optname, getattr(self, optname))
for optname, _ in CFG_OPTIONS if hasattr(self, optname))
def get_local_directory_dist(self):
"""
Handle importing a vendored package from a subdirectory of the source
distribution.
"""
if not os.path.isdir(self.path):
return
log.info('Attempting to import astropy_helpers from {0} {1!r}'.format(
'submodule' if self.is_submodule else 'directory',
self.path))
dist = self._directory_import()
if dist is None:
log.warn(
'The requested path {0!r} for importing {1} does not '
'exist, or does not contain a copy of the {1} '
'package.'.format(self.path, PACKAGE_NAME))
elif self.auto_upgrade and not self.is_submodule:
# A version of astropy-helpers was found on the available path, but
# check to see if a bugfix release is available on PyPI
upgrade = self._do_upgrade(dist)
if upgrade is not None:
dist = upgrade
return dist
def get_local_file_dist(self):
"""
Handle importing from a source archive; this also uses setup_requires
but points easy_install directly to the source archive.
"""
if not os.path.isfile(self.path):
return
log.info('Attempting to unpack and import astropy_helpers from '
'{0!r}'.format(self.path))
try:
dist = self._do_download(find_links=[self.path])
except Exception as e:
if DEBUG:
raise
log.warn(
'Failed to import {0} from the specified archive {1!r}: '
'{2}'.format(PACKAGE_NAME, self.path, str(e)))
dist = None
if dist is not None and self.auto_upgrade:
# A version of astropy-helpers was found on the available path, but
# check to see if a bugfix release is available on PyPI
upgrade = self._do_upgrade(dist)
if upgrade is not None:
dist = upgrade
return dist
def get_index_dist(self):
if not self.download:
log.warn('Downloading {0!r} disabled.'.format(DIST_NAME))
return None
log.warn(
"Downloading {0!r}; run setup.py with the --offline option to "
"force offline installation.".format(DIST_NAME))
try:
dist = self._do_download()
except Exception as e:
if DEBUG:
raise
log.warn(
'Failed to download and/or install {0!r} from {1!r}:\n'
'{2}'.format(DIST_NAME, self.index_url, str(e)))
dist = None
# No need to run auto-upgrade here since we've already presumably
# gotten the most up-to-date version from the package index
return dist
def _directory_import(self):
"""
Import astropy_helpers from the given path, which will be added to
sys.path.
Returns the resulting `Distribution` object on success, or `None` if no
distribution could be found at the given path.
"""
path = os.path.abspath(self.path)
# Use an empty WorkingSet rather than the man
# pkg_resources.working_set, since on older versions of setuptools this
# will invoke a VersionConflict when trying to install an upgrade
ws = pkg_resources.WorkingSet([])
ws.add_entry(path)
dist = ws.by_key.get(DIST_NAME)
if dist is None:
# We didn't find an egg-info/dist-info in the given path, but if a
# setup.py exists we can generate it
setup_py = os.path.join(path, 'setup.py')
if os.path.isfile(setup_py):
with _silence():
run_setup(os.path.join(path, 'setup.py'),
['egg_info'])
for dist in pkg_resources.find_distributions(path, True):
# There should be only one...
return dist
return dist
def _do_download(self, version='', find_links=None):
if find_links:
allow_hosts = ''
index_url = None
else:
allow_hosts = None
index_url = self.index_url
# Annoyingly, setuptools will not handle other arguments to
# Distribution (such as options) before handling setup_requires, so it
# is not straightforward to programmatically augment the arguments which
# are passed to easy_install
class _Distribution(Distribution):
def get_option_dict(self, command_name):
opts = Distribution.get_option_dict(self, command_name)
if command_name == 'easy_install':
if find_links is not None:
opts['find_links'] = ('setup script', find_links)
if index_url is not None:
opts['index_url'] = ('setup script', index_url)
if allow_hosts is not None:
opts['allow_hosts'] = ('setup script', allow_hosts)
return opts
if version:
req = '{0}=={1}'.format(DIST_NAME, version)
else:
req = DIST_NAME
attrs = {'setup_requires': [req]}
try:
if DEBUG:
_Distribution(attrs=attrs)
else:
with _silence():
_Distribution(attrs=attrs)
# If the setup_requires succeeded it will have added the new dist to
# the main working_set
return pkg_resources.working_set.by_key.get(DIST_NAME)
except Exception as e:
if DEBUG:
raise
msg = 'Error retrieving {0} from {1}:\n{2}'
if find_links:
source = find_links[0]
elif index_url != INDEX_URL:
source = index_url
else:
source = 'PyPI'
raise Exception(msg.format(DIST_NAME, source, repr(e)))
def _do_upgrade(self, dist):
# Build up a requirement for a higher bugfix release but a lower minor
# release (so API compatibility is guaranteed)
next_version = _next_version(dist.parsed_version)
req = pkg_resources.Requirement.parse(
'{0}>{1},<{2}'.format(DIST_NAME, dist.version, next_version))
package_index = PackageIndex(index_url=self.index_url)
upgrade = package_index.obtain(req)
if upgrade is not None:
return self._do_download(version=upgrade.version)
def _check_submodule(self):
"""
Check if the given path is a git submodule.
See the docstrings for ``_check_submodule_using_git`` and
``_check_submodule_no_git`` for further details.
"""
if (self.path is None or
(os.path.exists(self.path) and not os.path.isdir(self.path))):
return False
if self.use_git:
return self._check_submodule_using_git()
else:
return self._check_submodule_no_git()
def _check_submodule_using_git(self):
"""
Check if the given path is a git submodule. If so, attempt to initialize
and/or update the submodule if needed.
This function makes calls to the ``git`` command in subprocesses. The
``_check_submodule_no_git`` fallback uses pure Python to check whether the
given path looks like a git submodule, but it cannot perform updates.
"""
cmd = ['git', 'submodule', 'status', '--', self.path]
try:
log.info('Running `{0}`; use the --no-git option to disable git '
'commands'.format(' '.join(cmd)))
returncode, stdout, stderr = run_cmd(cmd)
except _CommandNotFound:
# The git command simply wasn't found; this is most likely the
# case on user systems that don't have git and are simply
# trying to install the package from PyPI or a source
# distribution. Silently ignore this case and simply don't try
# to use submodules
return False
stderr = stderr.strip()
if returncode != 0 and stderr:
# Unfortunately the return code alone cannot be relied on, as
# earlier versions of git returned 0 even if the requested submodule
# does not exist
# This is a warning that occurs in perl (from running git submodule)
# which only occurs with a malformatted locale setting which can
# happen sometimes on OSX. See again
# https://github.com/astropy/astropy/issues/2749
perl_warning = ('perl: warning: Falling back to the standard locale '
'("C").')
if not stderr.strip().endswith(perl_warning):
# Some other unknown error condition occurred
log.warn('git submodule command failed '
'unexpectedly:\n{0}'.format(stderr))
return False
# Output of `git submodule status` is as follows:
#
# 1: Status indicator: '-' for submodule is uninitialized, '+' if
# submodule is initialized but is not at the commit currently indicated
# in .gitmodules (and thus needs to be updated), or 'U' if the
# submodule is in an unstable state (i.e. has merge conflicts)
#
# 2. SHA-1 hash of the current commit of the submodule (we don't really
# need this information but it's useful for checking that the output is
# correct)
#
# 3. The output of `git describe` for the submodule's current commit
# hash (this includes for example what branches the commit is on) but
# only if the submodule is initialized. We ignore this information for
# now
_git_submodule_status_re = re.compile(
r'^(?P<status>[-+U ])(?P<commit>[0-9a-f]{40}) '
r'(?P<submodule>\S+)( .*)?$')  # '-' listed first so it is not read as a range
# The stdout should only contain one line--the status of the
# requested submodule
m = _git_submodule_status_re.match(stdout)
if m:
# Yes, the path *is* a git submodule
self._update_submodule(m.group('submodule'), m.group('status'))
return True
else:
log.warn(
'Unexpected output from `git submodule status`:\n{0}\n'
'Will attempt import from {1!r} regardless.'.format(
stdout, self.path))
return False
def _check_submodule_no_git(self):
"""
Like ``_check_submodule_using_git``, but simply parses the .gitmodules file
to determine if the supplied path is a git submodule, and does not exec any
subprocesses.
This can only determine if a path is a submodule--it does not perform
updates, etc. This function may need to be updated if the format of the
.gitmodules file is changed between git versions.
"""
gitmodules_path = os.path.abspath('.gitmodules')
if not os.path.isfile(gitmodules_path):
return False
# This is a minimal reader for gitconfig-style files. It handles a few of
# the quirks that make gitconfig files incompatible with ConfigParser-style
# files, but does not support the full gitconfig syntax (just enough
# needed to read a .gitmodules file).
gitmodules_fileobj = io.StringIO()
# Must use io.open for cross-Python-compatible behavior wrt unicode
with io.open(gitmodules_path) as f:
for line in f:
# gitconfig files are more flexible with leading whitespace; just
# go ahead and remove it
line = line.lstrip()
# comments can start with either # or ;
if line and line[0] in ('#', ';'):
continue
gitmodules_fileobj.write(line)
gitmodules_fileobj.seek(0)
cfg = RawConfigParser()
try:
cfg.readfp(gitmodules_fileobj)
except Exception as exc:
log.warn('Malformatted .gitmodules file: {0}\n'
'{1} cannot be assumed to be a git submodule.'.format(
exc, self.path))
return False
for section in cfg.sections():
if not cfg.has_option(section, 'path'):
continue
submodule_path = cfg.get(section, 'path').rstrip(os.sep)
if submodule_path == self.path.rstrip(os.sep):
return True
return False
def _update_submodule(self, submodule, status):
if status == ' ':
# The submodule is up to date; no action necessary
return
elif status == '-':
if self.offline:
raise _AHBootstrapSystemExit(
"Cannot initialize the {0} submodule in --offline mode; "
"this requires being able to clone the submodule from an "
"online repository.".format(submodule))
cmd = ['update', '--init']
action = 'Initializing'
elif status == '+':
cmd = ['update']
action = 'Updating'
if self.offline:
cmd.append('--no-fetch')
elif status == 'U':
raise _AHBootstrapSystemExit(
'Error: Submodule {0} contains unresolved merge conflicts. '
'Please complete or abandon any changes in the submodule so that '
'it is in a usable state, then try again.'.format(submodule))
else:
log.warn('Unknown status {0!r} for git submodule {1!r}. Will '
'attempt to use the submodule as-is, but try to ensure '
'that the submodule is in a clean state and contains no '
'conflicts or errors.\n{2}'.format(status, submodule,
_err_help_msg))
return
err_msg = None
cmd = ['git', 'submodule'] + cmd + ['--', submodule]
log.warn('{0} {1} submodule with: `{2}`'.format(
action, submodule, ' '.join(cmd)))
try:
log.info('Running `{0}`; use the --no-git option to disable git '
'commands'.format(' '.join(cmd)))
returncode, stdout, stderr = run_cmd(cmd)
except OSError as e:
err_msg = str(e)
else:
if returncode != 0:
err_msg = stderr
if err_msg is not None:
log.warn('An unexpected error occurred updating the git submodule '
'{0!r}:\n{1}\n{2}'.format(submodule, err_msg,
_err_help_msg))
class _CommandNotFound(OSError):
"""
An exception raised when a command run with run_cmd is not found on the
system.
"""
def run_cmd(cmd):
"""
Run a command in a subprocess, given as a list of command-line
arguments.
Returns a ``(returncode, stdout, stderr)`` tuple.
"""
try:
p = sp.Popen(cmd, stdout=sp.PIPE, stderr=sp.PIPE)
# XXX: May block if either stdout or stderr fill their buffers;
# however for the commands this is currently used for that is
# unlikely (they should have very brief output)
stdout, stderr = p.communicate()
except OSError as e:
if DEBUG:
raise
if e.errno == errno.ENOENT:
msg = 'Command not found: `{0}`'.format(' '.join(cmd))
raise _CommandNotFound(msg, cmd)
else:
raise _AHBootstrapSystemExit(
'An unexpected error occurred when running the '
'`{0}` command:\n{1}'.format(' '.join(cmd), str(e)))
# Can fail if the default locale is not configured properly. See
# https://github.com/astropy/astropy/issues/2749. For the purposes under
# consideration 'latin1' is an acceptable fallback.
try:
stdio_encoding = locale.getdefaultlocale()[1] or 'latin1'
except ValueError:
# Due to an OSX oddity locale.getdefaultlocale() can also crash
# depending on the user's locale/language settings. See:
# http://bugs.python.org/issue18378
stdio_encoding = 'latin1'
# Unlikely to fail at this point but even then let's be flexible
if not isinstance(stdout, _text_type):
stdout = stdout.decode(stdio_encoding, 'replace')
if not isinstance(stderr, _text_type):
stderr = stderr.decode(stdio_encoding, 'replace')
return (p.returncode, stdout, stderr)
def _next_version(version):
"""
Given a parsed version from pkg_resources.parse_version, returns a new
version string with the next minor version.
Examples
========
>>> _next_version(pkg_resources.parse_version('1.2.3'))
'1.3.0'
"""
if hasattr(version, 'base_version'):
# New version parsing from setuptools >= 8.0
if version.base_version:
parts = version.base_version.split('.')
else:
parts = []
else:
parts = []
for part in version:
if part.startswith('*'):
break
parts.append(part)
parts = [int(p) for p in parts]
if len(parts) < 3:
parts += [0] * (3 - len(parts))
major, minor, micro = parts[:3]
return '{0}.{1}.{2}'.format(major, minor + 1, 0)
class _DummyFile(object):
"""A noop writeable object."""
errors = '' # Required for Python 3.x
encoding = 'utf-8'
def write(self, s):
pass
def flush(self):
pass
@contextlib.contextmanager
def _silence():
"""A context manager that silences sys.stdout and sys.stderr."""
old_stdout = sys.stdout
old_stderr = sys.stderr
sys.stdout = _DummyFile()
sys.stderr = _DummyFile()
exception_occurred = False
try:
yield
except:
exception_occurred = True
# Go ahead and clean up so that exception handling can work normally
sys.stdout = old_stdout
sys.stderr = old_stderr
raise
if not exception_occurred:
sys.stdout = old_stdout
sys.stderr = old_stderr
_err_help_msg = """
If the problem persists consider installing astropy_helpers manually using pip
(`pip install astropy_helpers`) or by manually downloading the source archive,
extracting it, and installing by running `python setup.py install` from the
root of the extracted source code.
"""
class _AHBootstrapSystemExit(SystemExit):
def __init__(self, *args):
if not args:
msg = 'An unknown problem occurred bootstrapping astropy_helpers.'
else:
msg = args[0]
msg += '\n' + _err_help_msg
super(_AHBootstrapSystemExit, self).__init__(msg, *args[1:])
BOOTSTRAPPER = _Bootstrapper.main()
def use_astropy_helpers(**kwargs):
"""
Ensure that the `astropy_helpers` module is available and is importable.
This supports automatic submodule initialization if astropy_helpers is
included in a project as a git submodule, or will download it from PyPI if
necessary.
Parameters
----------
path : str or None, optional
A filesystem path relative to the root of the project's source code
that should be added to `sys.path` so that `astropy_helpers` can be
imported from that path.
If the path is a git submodule it will automatically be initialized
and/or updated.
The path may also be to a ``.tar.gz`` archive of the astropy_helpers
source distribution. In this case the archive is automatically
unpacked and made temporarily available on `sys.path` as a ``.egg``
archive.
If `None` skip straight to downloading.
download_if_needed : bool, optional
If the provided filesystem path is not found an attempt will be made to
download astropy_helpers from PyPI. It will then be made temporarily
available on `sys.path` as a ``.egg`` archive (using the
``setup_requires`` feature of setuptools. If the ``--offline`` option
is given at the command line the value of this argument is overridden
to `False`.
index_url : str, optional
If provided, use a different URL for the Python package index than the
main PyPI server.
use_git : bool, optional
If `False` no git commands will be used--this effectively disables
support for git submodules. If the ``--no-git`` option is given at the
command line the value of this argument is overridden to `False`.
auto_upgrade : bool, optional
By default, when installing a package from a non-development source
distribution ah_bootstrap will try to automatically check for patch
releases to astropy-helpers on PyPI and use the patched version over
any bundled versions. Setting this to `False` will disable that
functionality. If the ``--offline`` option is given at the command line
the value of this argument is overridden to `False`.
offline : bool, optional
If `True`, disable all actions that require an internet connection,
including downloading packages from the package index and fetching
updates to any git submodule. Defaults to `False`.
"""
global BOOTSTRAPPER
config = BOOTSTRAPPER.config
config.update(**kwargs)
# Create a new bootstrapper with the updated configuration and run it
BOOTSTRAPPER = _Bootstrapper(**config)
BOOTSTRAPPER.run() | PypiClean |
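To illustrate how `parse_config` above interprets the `[ah_bootstrap]` section of `setup.cfg`, here is a standalone sketch using the stdlib `configparser` with the same bool-vs-str handling driven by `CFG_OPTIONS`; the sample option values are invented:

```python
import configparser

# Same option table as CFG_OPTIONS in ah_bootstrap.py above.
CFG_OPTIONS = [
    ('auto_use', bool), ('path', str), ('download_if_needed', bool),
    ('index_url', str), ('use_git', bool), ('offline', bool),
    ('auto_upgrade', bool),
]

# Invented sample of the section that parse_config() looks for in setup.cfg.
SAMPLE = """
[ah_bootstrap]
auto_use = True
path = astropy_helpers
offline = false
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

config = {}
for option, type_ in CFG_OPTIONS:
    if not cfg.has_option('ah_bootstrap', option):
        continue
    # getboolean() accepts true/false, yes/no, on/off, 1/0 (case-insensitive)
    if type_ is bool:
        config[option] = cfg.getboolean('ah_bootstrap', option)
    else:
        config[option] = cfg.get('ah_bootstrap', option)

print(config)  # -> {'auto_use': True, 'path': 'astropy_helpers', 'offline': False}
```

Options absent from the file are simply skipped, which is why `_Bootstrapper.main` can fall back to the module-level defaults for anything not configured.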
/Firefly_III_API_Client-2.0.5.0-py3-none-any.whl/firefly_iii_client/paths/v1_webhooks_id_messages_message_id_attempts_attempt_id/delete.py | from dataclasses import dataclass
import typing_extensions
import urllib3
from urllib3._collections import HTTPHeaderDict
from firefly_iii_client import api_client, exceptions
from datetime import date, datetime # noqa: F401
import decimal # noqa: F401
import functools # noqa: F401
import io # noqa: F401
import re # noqa: F401
import typing # noqa: F401
import typing_extensions # noqa: F401
import uuid # noqa: F401
import frozendict # noqa: F401
from firefly_iii_client import schemas # noqa: F401
from firefly_iii_client.model.unauthenticated import Unauthenticated
from firefly_iii_client.model.bad_request import BadRequest
from firefly_iii_client.model.internal_exception import InternalException
from firefly_iii_client.model.not_found import NotFound
from . import path
# Header params
XTraceIdSchema = schemas.UUIDSchema
RequestRequiredHeaderParams = typing_extensions.TypedDict(
'RequestRequiredHeaderParams',
{
}
)
RequestOptionalHeaderParams = typing_extensions.TypedDict(
'RequestOptionalHeaderParams',
{
'X-Trace-Id': typing.Union[XTraceIdSchema, str, uuid.UUID, ],
},
total=False
)
class RequestHeaderParams(RequestRequiredHeaderParams, RequestOptionalHeaderParams):
pass
request_header_x_trace_id = api_client.HeaderParameter(
name="X-Trace-Id",
style=api_client.ParameterStyle.SIMPLE,
schema=XTraceIdSchema,
)
# Path params
IdSchema = schemas.StrSchema
MessageIdSchema = schemas.IntSchema
AttemptIdSchema = schemas.IntSchema
RequestRequiredPathParams = typing_extensions.TypedDict(
'RequestRequiredPathParams',
{
'id': typing.Union[IdSchema, str, ],
'messageId': typing.Union[MessageIdSchema, decimal.Decimal, int, ],
'attemptId': typing.Union[AttemptIdSchema, decimal.Decimal, int, ],
}
)
RequestOptionalPathParams = typing_extensions.TypedDict(
'RequestOptionalPathParams',
{
},
total=False
)
class RequestPathParams(RequestRequiredPathParams, RequestOptionalPathParams):
pass
request_path_id = api_client.PathParameter(
name="id",
style=api_client.ParameterStyle.SIMPLE,
schema=IdSchema,
required=True,
)
request_path_message_id = api_client.PathParameter(
name="messageId",
style=api_client.ParameterStyle.SIMPLE,
schema=MessageIdSchema,
required=True,
)
request_path_attempt_id = api_client.PathParameter(
name="attemptId",
style=api_client.ParameterStyle.SIMPLE,
schema=AttemptIdSchema,
required=True,
)
_auth = [
'firefly_iii_auth',
]
@dataclass
class ApiResponseFor204(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: schemas.Unset = schemas.unset
headers: schemas.Unset = schemas.unset
_response_for_204 = api_client.OpenApiResponse(
response_cls=ApiResponseFor204,
)
SchemaFor400ResponseBodyApplicationJson = BadRequest
@dataclass
class ApiResponseFor400(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[
SchemaFor400ResponseBodyApplicationJson,
]
headers: schemas.Unset = schemas.unset
_response_for_400 = api_client.OpenApiResponse(
response_cls=ApiResponseFor400,
content={
'application/json': api_client.MediaType(
schema=SchemaFor400ResponseBodyApplicationJson),
},
)
SchemaFor401ResponseBodyApplicationJson = Unauthenticated
@dataclass
class ApiResponseFor401(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[
SchemaFor401ResponseBodyApplicationJson,
]
headers: schemas.Unset = schemas.unset
_response_for_401 = api_client.OpenApiResponse(
response_cls=ApiResponseFor401,
content={
'application/json': api_client.MediaType(
schema=SchemaFor401ResponseBodyApplicationJson),
},
)
SchemaFor404ResponseBodyApplicationJson = NotFound
@dataclass
class ApiResponseFor404(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[
SchemaFor404ResponseBodyApplicationJson,
]
headers: schemas.Unset = schemas.unset
_response_for_404 = api_client.OpenApiResponse(
response_cls=ApiResponseFor404,
content={
'application/json': api_client.MediaType(
schema=SchemaFor404ResponseBodyApplicationJson),
},
)
SchemaFor500ResponseBodyApplicationJson = InternalException
@dataclass
class ApiResponseFor500(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[
SchemaFor500ResponseBodyApplicationJson,
]
headers: schemas.Unset = schemas.unset
_response_for_500 = api_client.OpenApiResponse(
response_cls=ApiResponseFor500,
content={
'application/json': api_client.MediaType(
schema=SchemaFor500ResponseBodyApplicationJson),
},
)
_status_code_to_response = {
'204': _response_for_204,
'400': _response_for_400,
'401': _response_for_401,
'404': _response_for_404,
'500': _response_for_500,
}
_all_accept_content_types = (
'application/json',
)
class BaseApi(api_client.Api):
@typing.overload
def _delete_webhook_message_attempt_oapg(
self,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[
ApiResponseFor204,
]: ...
@typing.overload
def _delete_webhook_message_attempt_oapg(
self,
skip_deserialization: typing_extensions.Literal[True],
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
) -> api_client.ApiResponseWithoutDeserialization: ...
@typing.overload
def _delete_webhook_message_attempt_oapg(
self,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = ...,
) -> typing.Union[
ApiResponseFor204,
api_client.ApiResponseWithoutDeserialization,
]: ...
def _delete_webhook_message_attempt_oapg(
self,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = False,
):
"""
Delete a webhook attempt.
:param skip_deserialization: If true then api_response.response will be set but
api_response.body and api_response.headers will not be deserialized into schema
class instances
"""
self._verify_typed_dict_inputs_oapg(RequestHeaderParams, header_params)
self._verify_typed_dict_inputs_oapg(RequestPathParams, path_params)
used_path = path.value
_path_params = {}
for parameter in (
request_path_id,
request_path_message_id,
request_path_attempt_id,
):
parameter_data = path_params.get(parameter.name, schemas.unset)
if parameter_data is schemas.unset:
continue
serialized_data = parameter.serialize(parameter_data)
_path_params.update(serialized_data)
for k, v in _path_params.items():
used_path = used_path.replace('{%s}' % k, v)
_headers = HTTPHeaderDict()
for parameter in (
request_header_x_trace_id,
):
parameter_data = header_params.get(parameter.name, schemas.unset)
if parameter_data is schemas.unset:
continue
serialized_data = parameter.serialize(parameter_data)
_headers.extend(serialized_data)
# TODO add cookie handling
if accept_content_types:
for accept_content_type in accept_content_types:
_headers.add('Accept', accept_content_type)
response = self.api_client.call_api(
resource_path=used_path,
method='delete'.upper(),
headers=_headers,
auth_settings=_auth,
stream=stream,
timeout=timeout,
)
if skip_deserialization:
api_response = api_client.ApiResponseWithoutDeserialization(response=response)
else:
response_for_status = _status_code_to_response.get(str(response.status))
if response_for_status:
api_response = response_for_status.deserialize(response, self.api_client.configuration)
else:
api_response = api_client.ApiResponseWithoutDeserialization(response=response)
if not 200 <= response.status <= 299:
raise exceptions.ApiException(
status=response.status,
reason=response.reason,
api_response=api_response
)
return api_response
class DeleteWebhookMessageAttempt(BaseApi):
# this class is used by api classes that refer to endpoints with operationId fn names
@typing.overload
def delete_webhook_message_attempt(
self,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[
ApiResponseFor204,
]: ...
@typing.overload
def delete_webhook_message_attempt(
self,
skip_deserialization: typing_extensions.Literal[True],
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
) -> api_client.ApiResponseWithoutDeserialization: ...
@typing.overload
def delete_webhook_message_attempt(
self,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = ...,
) -> typing.Union[
ApiResponseFor204,
api_client.ApiResponseWithoutDeserialization,
]: ...
def delete_webhook_message_attempt(
self,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = False,
):
return self._delete_webhook_message_attempt_oapg(
header_params=header_params,
path_params=path_params,
accept_content_types=accept_content_types,
stream=stream,
timeout=timeout,
skip_deserialization=skip_deserialization
)
class ApiFordelete(BaseApi):
# this class is used by api classes that refer to endpoints by path and http method names
@typing.overload
def delete(
self,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[
ApiResponseFor204,
]: ...
@typing.overload
def delete(
self,
skip_deserialization: typing_extensions.Literal[True],
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
) -> api_client.ApiResponseWithoutDeserialization: ...
@typing.overload
def delete(
self,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = ...,
) -> typing.Union[
ApiResponseFor204,
api_client.ApiResponseWithoutDeserialization,
]: ...
def delete(
self,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = False,
):
return self._delete_webhook_message_attempt_oapg(
header_params=header_params,
path_params=path_params,
accept_content_types=accept_content_types,
stream=stream,
timeout=timeout,
skip_deserialization=skip_deserialization
) | PypiClean |
/ClueDojo-1.4.3-1.tar.gz/ClueDojo-1.4.3-1/src/cluedojo/static/dojox/string/BidiComplex.js | if(!dojo._hasResource["dojox.string.BidiComplex"]){
dojo._hasResource["dojox.string.BidiComplex"]=true;
dojo.provide("dojox.string.BidiComplex");
dojo.experimental("dojox.string.BidiComplex");
(function(){
var _1=[];
dojox.string.BidiComplex.attachInput=function(_2,_3){
_2.alt=_3;
dojo.connect(_2,"onkeydown",this,"_ceKeyDown");
dojo.connect(_2,"onkeyup",this,"_ceKeyUp");
dojo.connect(_2,"oncut",this,"_ceCutText");
dojo.connect(_2,"oncopy",this,"_ceCopyText");
_2.value=dojox.string.BidiComplex.createDisplayString(_2.value,_2.alt);
};
dojox.string.BidiComplex.createDisplayString=function(_4,_5){
_4=dojox.string.BidiComplex.stripSpecialCharacters(_4);
var _6=dojox.string.BidiComplex._parse(_4,_5);
var _7=""+_4;
var _8=1;
dojo.forEach(_6,function(n){
if(n!=null){
var _9=_7.substring(0,n+_8);
var _a=_7.substring(n+_8,_7.length);
_7=_9+"\u200E"+_a;
_8++;
}
});
return _7;
};
dojox.string.BidiComplex.stripSpecialCharacters=function(_b){
return _b.replace(/[\u200E\u200F\u202A-\u202E]/g,"");
};
dojox.string.BidiComplex._ceKeyDown=function(_c){
var _d=dojo.isIE?_c.srcElement:_c.target;
_1=_d.value;
};
dojox.string.BidiComplex._ceKeyUp=function(_e){
var _f="\u200E";
var _10=dojo.isIE?_e.srcElement:_e.target;
var _11=_10.value;
var _12=_e.keyCode;
if((_12==dojo.keys.HOME)||(_12==dojo.keys.END)||(_12==dojo.keys.SHIFT)){
return;
}
var _13,_14;
var _15=dojox.string.BidiComplex._getCaretPos(_e,_10);
if(_15){
_13=_15[0];
_14=_15[1];
}
if(dojo.isIE){
var _16=_13,_17=_14;
if(_12==dojo.keys.LEFT_ARROW){
if((_11.charAt(_14-1)==_f)&&(_13==_14)){
dojox.string.BidiComplex._setSelectedRange(_10,_13-1,_14-1);
}
return;
}
if(_12==dojo.keys.RIGHT_ARROW){
if(_11.charAt(_14-1)==_f){
_17=_14+1;
if(_13==_14){
_16=_13+1;
}
}
dojox.string.BidiComplex._setSelectedRange(_10,_16,_17);
return;
}
}else{
if(_12==dojo.keys.LEFT_ARROW){
if(_11.charAt(_14-1)==_f){
dojox.string.BidiComplex._setSelectedRange(_10,_13-1,_14-1);
}
return;
}
if(_12==dojo.keys.RIGHT_ARROW){
if(_11.charAt(_14-1)==_f){
dojox.string.BidiComplex._setSelectedRange(_10,_13+1,_14+1);
}
return;
}
}
var _18=dojox.string.BidiComplex.createDisplayString(_11,_10.alt);
if(_11!=_18){
window.status=_11+" c="+_14;
_10.value=_18;
if((_12==dojo.keys.DELETE)&&(_18.charAt(_14)==_f)){
_10.value=_18.substring(0,_14)+_18.substring(_14+2,_18.length);
}
if(_12==dojo.keys.DELETE){
dojox.string.BidiComplex._setSelectedRange(_10,_13,_14);
}else{
if(_12==dojo.keys.BACKSPACE){
if((_1.length>=_14)&&(_1.charAt(_14-1)==_f)){
dojox.string.BidiComplex._setSelectedRange(_10,_13-1,_14-1);
}else{
dojox.string.BidiComplex._setSelectedRange(_10,_13,_14);
}
}else{
if(_10.value.charAt(_14)!=_f){
dojox.string.BidiComplex._setSelectedRange(_10,_13+1,_14+1);
}
}
}
}
};
dojox.string.BidiComplex._processCopy=function(_19,_1a,_1b){
if(_1a==null){
if(dojo.isIE){
var _1c=document.selection.createRange();
_1a=_1c.text;
}else{
_1a=_19.value.substring(_19.selectionStart,_19.selectionEnd);
}
}
var _1d=dojox.string.BidiComplex.stripSpecialCharacters(_1a);
if(dojo.isIE){
window.clipboardData.setData("Text",_1d);
}
return true;
};
dojox.string.BidiComplex._ceCopyText=function(_1e){
if(dojo.isIE){
_1e.returnValue=false;
}
return dojox.string.BidiComplex._processCopy(_1e,null,false);
};
dojox.string.BidiComplex._ceCutText=function(_1f){
var ret=dojox.string.BidiComplex._processCopy(_1f,null,false);
if(!ret){
return false;
}
if(dojo.isIE){
document.selection.clear();
}else{
var _20=_1f.selectionStart;
_1f.value=_1f.value.substring(0,_20)+_1f.value.substring(_1f.selectionEnd);
_1f.setSelectionRange(_20,_20);
}
return true;
};
dojox.string.BidiComplex._getCaretPos=function(_21,_22){
if(dojo.isIE){
var _23=0,_24=document.selection.createRange().duplicate(),_25=_24.duplicate(),_26=_24.text.length;
if(_22.type=="textarea"){
_25.moveToElementText(_22);
}else{
_25.expand("textedit");
}
while(_24.compareEndPoints("StartToStart",_25)>0){
_24.moveStart("character",-1);
++_23;
}
return [_23,_23+_26];
}
return [_21.target.selectionStart,_21.target.selectionEnd];
};
dojox.string.BidiComplex._setSelectedRange=function(_27,_28,_29){
if(dojo.isIE){
var _2a=_27.createTextRange();
if(_2a){
if(_27.type=="textarea"){
_2a.moveToElementText(_27);
}else{
_2a.expand("textedit");
}
_2a.collapse();
_2a.moveEnd("character",_29);
_2a.moveStart("character",_28);
_2a.select();
}
}else{
_27.selectionStart=_28;
_27.selectionEnd=_29;
}
};
var _2b=function(c){
return (c>="0"&&c<="9")||(c>"ÿ");
};
var _2c=function(c){
return (c>="A"&&c<="Z")||(c>="a"&&c<="z");
};
var _2d=function(_2e,i,_2f){
while(i>0){
if(i==_2f){
return false;
}
i--;
if(_2b(_2e.charAt(i))){
return true;
}
if(_2c(_2e.charAt(i))){
return false;
}
}
return false;
};
dojox.string.BidiComplex._parse=function(str,_30){
var _31=-1,_32=[];
var _33={FILE_PATH:"/\\:.",URL:"/:.?=&#",XPATH:"/\\:.<>=[]",EMAIL:"<>@.,;"}[_30];
switch(_30){
case "FILE_PATH":
case "URL":
case "XPATH":
dojo.forEach(str,function(ch,i){
if(_33.indexOf(ch)>=0&&_2d(str,i,_31)){
_31=i;
_32.push(i);
}
});
break;
case "EMAIL":
var _34=false;
dojo.forEach(str,function(ch,i){
if(ch=="\""){
if(_2d(str,i,_31)){
_31=i;
_32.push(i);
}
i++;
var i1=str.indexOf("\"",i);
if(i1>=i){
i=i1;
}
if(_2d(str,i,_31)){
_31=i;
_32.push(i);
}
}
if(_33.indexOf(ch)>=0&&_2d(str,i,_31)){
_31=i;
_32.push(i);
}
});
}
return _32;
};
})();
} | PypiClean |
/EnergySystemModels-0.1.17.post63-py3-none-any.whl/AHU/air_humide/air_humide_NB.py | from math import exp, log
"""
Created on Mon Aug 31 14:14:16 2020
@author: VG6075
"""
"""
' Validity ranges are the following :
' - Dry temperature [-20°C to 100°C[
' - Pressure [80 000 to 150 000 Pa]
' - Relative Humidity ]0; 100%] (0% excluded)
' *******************************************************
' **************** GENERAL FUNCTIONS ********************
' *******************************************************
' FUNCTION Psv - Saturation vapor pressure [Pa] as a function of dry bulb temperature
' Input: Dry bulb temperature [°C]
' Output: Saturation vapor pressure [Pa]
"""
"""
'ASHRAE Fundamentals handbook (2005) p 6.2, equations 5 and 6 - Valid from -100°C to 200°C
'Constants
"""
def Psv(Tdb):
C1 = -5.6745359 * 10 ** (3)
C2 = 6.3925247
C3 = -9.677843 * 10 ** (-3)
C4 = 6.2215701 * 10 ** (-7)
C5 = 2.0747825 * 10 ** (-9)
C6 = -9.484024 * 10 ** (-13)
C7 = 4.1635019 * 10 ** (0)
C8 = -5.8002206 * 10 ** (3)
C9 = 1.3914993 * 10 ** (0)
C10 = -4.8640239 * 10 ** (-2)
C11 = 4.1764768 * 10 ** (-5)
C12 = -1.4452093 * 10 ** (-8)
C13 = 6.5459673 * 10 ** (0)
"""Temperature in Kelvin"""
TK= Tdb + 273.15
"""Saturation vapor pressure"""
if TK <= 273.15:
Psv = exp(C1 / TK + C2 + C3 * TK + C4 * TK ** 2 + C5 * TK ** 3 + C6 * TK ** 4 + C7 * log(TK))
else:
Psv = exp(C8 / TK + C9 + C10 * TK + C11 * TK ** 2 + C12 * TK ** 3 + C13 * log(TK))
return Psv
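# A quick sanity check of the correlation above: the liquid-water branch is
# reproduced inline (copied from Psv()) so this sketch runs standalone.
# Expected magnitudes: about 611 Pa near the triple point and about
# 101 325 Pa at the normal boiling point.

```python
from math import exp, log

# Liquid-water branch of the ASHRAE correlation (Tdb > 0 degC), copied from
# Psv() above so the check is self-contained.
def psv_liquid(tdb_c):
    tk = tdb_c + 273.15
    return exp(-5.8002206e3 / tk + 1.3914993
               - 4.8640239e-2 * tk + 4.1764768e-5 * tk ** 2
               - 1.4452093e-8 * tk ** 3 + 6.5459673 * log(tk))

print(round(psv_liquid(0.01), 1))   # ~611 Pa (triple point)
print(round(psv_liquid(100.0)))     # ~101 kPa (normal boiling point)
```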
"""
FUNCTION H_a - Moist air specific enthalpy [kJ/kg dry air] as a function of dry bulb temperature and humidity ratio
' Input: Dry bulb temperature [°C]
' Humidity ratio [kg H2O / kg dry air]
' Output: Moist air specific enthalpy [kJ/kg dry air]
"""
def H_a(Tdb,w):
"""Calculations from 2005 ASHRAE Handbook - Fundamentals - SI P6.9 eqn 32'"""
H_a = 1.006 * Tdb + w * (2501 + 1.86 * Tdb)
return H_a
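# Worked example of equation 32 above: at 20 °C with w = 0.01 kgH2O/kg dry
# air, h = 1.006*20 + 0.01*(2501 + 1.86*20) = 45.502 kJ/kg dry air. The
# formula is reproduced inline so the sketch runs standalone.

```python
# Moist air specific enthalpy (ASHRAE eqn 32), copied from H_a() above.
def h_moist(tdb_c, w):
    return 1.006 * tdb_c + w * (2501 + 1.86 * tdb_c)

# 20 degC, 10 g of water vapour per kg of dry air:
print(h_moist(20.0, 0.01))  # ~45.502 kJ/kg dry air
```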
"""
' **********************************************************
' **************** FUNCTIONS Tdb, Twb, P *******************
' **********************************************************
' FUNCTION Air1_RH - Air relative humidity [0 ; 1] as a function of dry bulb temperature, wet bulb temperature and pressure
' Input : Dry bulb temperature [°C]
' Wet bulb temperature [°C]
' Pressure [Pa]
' Output : Relative humidity [0 ; 1]
"""
def Air1_RH(Tdb, Twb , P):
""" Specific humidity [kg H2O/kg dry air]"""
w = Air1_w(Tdb, Twb, P)
"""Relative humidity [0 ; 1]"""
Air1_RH = P / Psv(Tdb) * w / (0.62198 + w)
return Air1_RH
"""
_______________________________________________________________________________________________
' FUNCTION Air1_Tdp - Dew point temperature (°C) as a function of dry bulb temperature, wet bulb temperature and pressure
' Input : Dry bulb temperature [°C]
' Wet bulb temperature [°C]
' Pressure [Pa]
' Output : Dew point temperature [°C]
"""
def Air1_Tdp(Tdb, Twb, P):
"""' Relative humidity"""
RH = Air1_RH(Tdb, Twb, P)
""" dew point temperature"""
Air1_Tdp = Air2_Tdp(Tdb, RH, P)
return Air1_Tdp
"""
_______________________________________________________________________________________________
' FUNCTION Air1_w - Air specific humidity as a function of dry bulb temperature, wet bulb temperature and pressure
' Input : Dry bulb temperature [°C]
' Wet bulb temperature [°C]
' Pressure [Pa]
' Output : Specific humidity at Tdb [kgH2O/kg dry air]
"""
def Air1_w(Tdb, Twb, P):
    """ ASHRAE Fundamentals handbook (2005)
Saturation vapor pressure [Pa] and specific humidity at wet bulb temperature [kgH2O/kg dry air]
' Saturation vapor pressure at wet bulb t°"""
Pws = Psv(Twb)
""" Specific humidity at wet bulb temperature"""
Ws = 0.62198 * Pws / (P - Pws)
if Tdb >= 0:
Air1_w = ((2501 - 2.326 * Twb) * Ws - 1.006 * (Tdb - Twb)) / (2501 + 1.86 * Tdb - 4.186 * Twb)
else :
Air1_w = ((2830 - 0.24 * Twb) * Ws - 1.006 * (Tdb - Twb)) / (2830 + 1.86 * Tdb - 2.1 * Twb)
return Air1_w
"""
_______________________________________________________________________________________________
' FUNCTION Air1_xH2O - Water molar fraction as a function of dry bulb temperature, wet bulb temperature and pressure
' Input : Dry bulb temperature [°C]
' Wet bulb temperature [°C]
' Pressure [Pa]
' Output : Water molar fraction [0 ; 1]
"""
def Air1_xH2O(Tdb, Twb, P):
    """Specific humidity [kgH2O/kg dry air]"""
    w = Air1_w(Tdb, Twb, P)
"""H2O molar fraction"""
Air1_xH2O = w / (w + 0.62198)
return Air1_xH2O
"""
_______________________________________________________________________________________________
' FUNCTION Air1_Hs - Specific enthalpy kJ/kg dry air) as a function of dry bulb temperature, wet bulb temperature and pressure
' Input : Dry bulb temperature [°C]
' Wet bulb temperature [°C]
' Pressure [Pa]
' Output : Specific enthalpy [kJ/kg dry air]
"""
def Air1_Hs(Tdb , Twb , P ):
"""Specific humidity [kgH2O/kg dry air]"""
w = Air1_w(Tdb, Twb, P)
"""Specific enthalpy [kJ/kg dry air]"""
Air1_Hs = H_a(Tdb, w)
return Air1_Hs
"""
_______________________________________________________________________________________________
' FUNCTION Air1_Mv - Moist air density (kg/m3) as a function of dry bulb temperature, wet bulb temperature and pressure
' Input : Dry bulb temperature [°C]
' Wet bulb temperature [°C]
' Pressure [Pa]
' Output : Density [kg/m3]
"""
def Air1_Mv(Tdb , Twb, P):
"""Standard volumic mass of dry air [kg/Nm3] and relative humidity"""
Mv0 = 1.2922
"""Relative humidity"""
RH = Air1_RH(Tdb, Twb, P)
"""Moist air density [kg/m3]"""
Air1_Mv = Mv0 * 273.15 / (273.15 + Tdb) * (P - (1 - 0.62198) * RH * Psv(Tdb)) / 101325
return Air1_Mv
"""
______________________________________________________________________________________________
' FUNCTION Air1_Vs - Air specific volume (m3/kg dry air) as a function of dry bulb temperature, wet bulb temperature and pressure
' Input : Dry bulb temperature [°C]
' Wet bulb temperature [°C]
' Pressure [Pa]
' Output : Air specific volume [m3/kg dry air]
"""
def Air1_Vs(Tdb, Twb, P):
"""Air specific volume [m3/kg dry air]"""
Air1_Vs = (1 + Air1_w(Tdb, Twb, P)) / Air1_Mv(Tdb, Twb, P)
return Air1_Vs
"""
______________________________________________________________________________________________
' **********************************************************
' **************** FUNCTIONS Tdb, RH, P ********************
' **********************************************************
' FUNCTION Air2_w - Specific humidity (humidity ratio) as a function of dry bulb temperature, relative humidity and pressure
' Input : Dry bulb temperature [°C]
' Relative humidity [ 0 ; 1]
' Pressure [Pa]
' Output : Specific humidity [kgH2O/kg dry air]
"""
def Air2_w(Tdb,RH, P):
"""Vapor partial pressure"""
Pv = RH * Psv(Tdb)
"""Specific humidity - Humidity ratio [kgH2O/kg dry air]"""
Air2_w = 0.62198 * Pv / (P - Pv)
return Air2_w
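# Worked example of the humidity-ratio formula above. Psv(20 °C) ≈ 2339 Pa is
# hard-coded (a well-known value) so the sketch runs standalone: at 50 % RH
# and 101 325 Pa, Pv ≈ 1170 Pa and w ≈ 0.00726 kgH2O/kg dry air.

```python
# Humidity ratio from relative humidity (formula of Air2_w above), with the
# saturation pressure at 20 degC hard-coded (~2339 Pa) for a standalone check.
def humidity_ratio(rh, p, psv):
    pv = rh * psv                   # vapour partial pressure [Pa]
    return 0.62198 * pv / (p - pv)  # [kgH2O/kg dry air]

w = humidity_ratio(0.5, 101325.0, 2339.0)
print(round(w, 5))  # ~0.00726
```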
"""
______________________________________________________________________________________________
' FUNCTION Air2_xH2O - Water molar fraction as a function of dry bulb temperature, relative humidity and pressure
' Input : Dry bulb temperature [°C]
' Relative humidity [ 0 ; 1]
' Pressure [Pa]
' Output : Water molar fraction [0 ; 1]
"""
def Air2_xH2O(Tdb, RH , P):
"""Specific humidity [kgH2O/kg dry air]"""
w = Air2_w(Tdb, RH, P)
"""H2O molar fraction"""
Air2_xH2O = w / (w + 0.62198)
return Air2_xH2O
"""
______________________________________________________________________________________________
' FUNCTION Air2_Twb - Wet bulb temperature (°C) as a function of dry bulb temperature, relative humidity and pressure
' Input : Dry bulb temperature [°C]
' Relative humidity [ 0 ; 1]
' Pressure [Pa]
' Output : Wet bulb temperature [°C]
"""
def Air2_Twb(Tdb, RH, P):
    """ Uses Newton-Raphson iteration for quick convergence"""
    """ Vapor pressure in moist air [Pa]"""
Pv = RH * Psv(Tdb)
"""Humidity ratio [kgH2O/kg dry air]"""
Wa = 0.62198 * Pv / (P - Pv)
"""CALCULATION"""
Twb = Tdb
    """Solve to within 0.001% accuracy using Newton-Raphson"""
Ws = Air1_w(Tdb, Twb, P)
while True:
Ws2 = Air1_w(Tdb, Twb - 0.001, P)
dw_dt = (Ws - Ws2) / 0.001
Twb = Twb - (Ws - Wa) / dw_dt
Ws = Air1_w(Tdb, Twb, P)
if abs((Ws - Wa) / Wa) <= 0.00001:
break
"""Wet bulb temperature"""
Air2_Twb = Twb
return Air2_Twb
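# The wet-bulb search above is a finite-difference Newton-Raphson: dw/dT is
# estimated with a 0.001 °C backward step and the guess corrected until the
# relative residual drops below 1e-5. The same iteration shape, stripped to
# its essentials on a toy function (solving x² = 2); names are illustrative.

```python
# Finite-difference Newton-Raphson, the same iteration shape as Air2_Twb:
# estimate f'(x) with a small backward step, then correct x until converged.
def newton_fd(f, target, x0, step=0.001, rel_tol=1e-5):
    x = x0
    fx = f(x)
    while abs((fx - target) / target) > rel_tol:
        dfdx = (fx - f(x - step)) / step  # backward finite difference
        x = x - (fx - target) / dfdx      # Newton correction
        fx = f(x)
    return x

root = newton_fd(lambda x: x * x, 2.0, 1.0)
print(round(root, 4))  # ~1.4142
```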
"""
_____________________________________________________________________________________________
' FUNCTION Air2_Tdp - dew point temperature (°C) as a function of dry bulb temperature, relative humidity and pressure
' Input : Dry bulb temperature [°C]
' Relative humidity [ 0 ; 1]
' Pressure [Pa]
' Output : dew point temperature [°C]
"""
def Air2_Tdp(Tdb, RH, P):
    """Uses Newton-Raphson iteration for quick convergence
    ' Pv: vapor pressure in moist air [Pa]
    ' Wa, Ws: specific humidity at Tdb and at dew point Tdp [kgH2O/kg dry air]
    ' Tdp: dew point temperature [°C]
    """
Pv = RH * Psv(Tdb)
"""Humidity ratio [kgH2O/kg dry air]"""
Wa = 0.62198 * Pv / (P - Pv)
"""CALCULATION"""
Tdp = Tdb
    """Solve to within 0.001% accuracy using Newton-Raphson"""
Ws = Air2_w(Tdp, 1, P)
while True:
Ws2 = Air2_w(Tdp - 0.001, 1, P)
dw_dt = (Ws - Ws2) / 0.001
Tdp = Tdp - (Ws - Wa) / dw_dt
Ws = Air2_w(Tdp, 1, P)
if abs((Ws - Wa) / Wa) <= 0.00001:
break
"""Wet bulb temperature"""
Air2_Tdp = Tdp
return Air2_Tdp
"""
_____________________________________________________________________________________________
' FUNCTION Air2_Hs - Specific enthalpy kJ/kg dry air) as a function of dry bulb temperature, relative humidity and pressure
' Input : Dry bulb temperature [°C]
' Relative humidity [ 0 ; 1]
' Pressure [Pa]
' Output : Specific enthalpy [kJ/kg dry air]
"""
def Air2_Hs(Tdb, RH, P):
    """
    Specific humidity [kgH2O/kg dry air]
    """
w = Air2_w(Tdb, RH, P)
"""Specific enthalpy [kJ/kg dry air]"""
Air2_Hs = H_a(Tdb, w)
return Air2_Hs
"""
_____________________________________________________________________________________________
' FUNCTION Air2_Mv - Moist air density (kg/m3) as a function of dry bulb temperature, relative humidity and pressure
' Input : Dry bulb temperature [°C]
' Relative humidity [ 0 ; 1]
' Pressure [Pa]
' Output : Density [kg/m3]
"""
def Air2_Mv(Tdb, RH, P):
    """Standard volumic mass of dry air [kg/Nm3]"""
Mv0 = 1.2922
"""Moist air density [kg/m3]"""
Air2_Mv = Mv0 * 273.15 / (273.15 + Tdb) * (P - (1 - 0.62198) * RH * Psv(Tdb)) / 101325
return Air2_Mv
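# Worked example of the density formula above at 20 °C, 50 % RH and
# 101 325 Pa, with Psv(20 °C) ≈ 2339 Pa hard-coded so the sketch runs
# standalone; the result is close to the textbook ≈ 1.199 kg/m³.

```python
# Moist air density (formula of Air2_Mv above); Psv(20 degC) ~ 2339 Pa is
# hard-coded for a standalone check.
def moist_air_density(tdb_c, rh, p, psv):
    mv0 = 1.2922  # dry air at 0 degC, 101325 Pa [kg/Nm3]
    return mv0 * 273.15 / (273.15 + tdb_c) * (p - (1 - 0.62198) * rh * psv) / 101325

rho = moist_air_density(20.0, 0.5, 101325.0, 2339.0)
print(round(rho, 3))  # ~1.199 kg/m3
```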
"""
___________________________________________________________________________________________
' FUNCTION Air2_Vs - Air specific volume (m3/kg dry air) as a function of dry bulb temperature, relative humidity and pressure
' Input : Dry bulb temperature [°C]
' Relative humidity [ 0 ; 1]
' Pressure [Pa]
' Output : Air specific volume [m3/kg dry air]
"""
def Air2_Vs(Tdb , RH , P ):
"""Air specific volume [m3/kg dry air]"""
Air2_Vs = (1 + Air2_w(Tdb, RH, P)) / Air2_Mv(Tdb, RH, P)
return Air2_Vs
"""
_____________________________________________________________________________________________
' **********************************************************
' **************** FUNCTIONS w, P, Hs **********************
' **********************************************************
' FUNCTION Air3_Tdb - Air dry bulb temperature (°C) as a function of specific humidity, pressure and specific enthalpy
' Input : Specific humidity [kgH2O/kg dry air]
' Pressure [Pa]
' Specific enthalpy [kJ/kg dry air]
' Output : Dry bulb temperature [°C]
"""
def Air3_Tdb(w, P , H):
"""Dry bulb temperature [°C]"""
Air3_Tdb = (H - w * 2501) / (1.006 + w * 1.86)
return Air3_Tdb
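# Air3_Tdb simply inverts the enthalpy definition used by H_a, so the pair
# round-trips exactly. A standalone check with both formulas inlined:

```python
# Enthalpy (formula of H_a) and its inverse in temperature (formula of
# Air3_Tdb), inlined so the round trip can be checked standalone.
def h_moist(tdb_c, w):
    return 1.006 * tdb_c + w * (2501 + 1.86 * tdb_c)

def tdb_from_h(w, h):
    return (h - w * 2501) / (1.006 + w * 1.86)

h = h_moist(20.0, 0.01)               # ~45.502 kJ/kg dry air
print(round(tdb_from_h(0.01, h), 6))  # recovers ~20.0 degC
```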
"""
_____________________________________________________________________________________________
' FUNCTION Air3_RH - Air relative humidity [0 ; 1] as a function of specific humidity, pressure and specific enthalpy
' Input : Specific humidity [kgH2O/kg dry air]
' Pressure [Pa]
' Specific enthalpy [kJ/kg dry air]
' Output : Relative humidity [0 ; 1]
"""
def Air3_RH(w , P , H ):
    """ Dry bulb temperature [°C]"""
Tdb = Air3_Tdb(w, P, H)
"""' Relative humidity [0 ; 1]"""
Air3_RH = P / Psv(Tdb) * w / (0.62198 + w)
return Air3_RH
"""
_____________________________________________________________________________________________
' FUNCTION Air3_Twb - Wet bulb temperature (°C) as a function of specific humidity, pressure and specific enthalpy
' Input : Specific humidity [kgH2O/kg dry air]
' Pressure [Pa]
' Specific enthalpy [kJ/kg dry air]
' Output : Wet bulb temperature [°C]
"""
def Air3_Twb(w , P , H ):
    """' Dry bulb temperature [°C]"""
Tdb = Air3_Tdb(w, P, H)
    """' Relative humidity [0 ; 1]"""
RH = P / Psv(Tdb) * w / (0.62198 + w)
"""' Wet bulb temperature [°C]"""
Air3_Twb = Air2_Twb(Tdb, RH, P)
return Air3_Twb
"""
_____________________________________________________________________________________________
' FUNCTION Air3_Tdp - Dew point temperature (°C) as a function of specific humidity, pressure and specific enthalpy
' Input : Specific humidity [kgH2O/kg dry air]
' Pressure [Pa]
' Specific enthalpy [kJ/kg dry air]
' Output : dew point temperature [°C]
"""
def Air3_Tdp(w , P , H ):
    """' Dry bulb temperature [°C]"""
Tdb = Air3_Tdb(w, P, H)
"""' Relative humidity [0 ; 1]"""
RH = P / Psv(Tdb) * w / (0.62198 + w)
    """' Dew point temperature"""
Air3_Tdp = Air2_Tdp(Tdb, RH, P)
return Air3_Tdp
"""
_____________________________________________________________________________________________
' FUNCTION Air3_xH2O - Water molar fraction as a function of specific humidity, pressure and specific enthalpy
' Input : Specific humidity [kgH2O/kg dry air]
' Pressure [Pa]
' Specific enthalpy [kJ/kg dry air]
' Output : Water molar fraction [0 ; 1]
"""
def Air3_xH2O(w , P , H ):
"""' H2O molar fraction"""
Air3_xH2O = w / (w + 0.62198)
return Air3_xH2O
"""
' FUNCTION Air3_Mv - Moist air density (kg/m3) as a function of specific humidity, pressure and specific enthalpy
' Input : Specific humidity [kgH2O/kg dry air]
' Pressure [Pa]
' Specific enthalpy [kJ/kg dry air]
' Output : Density [kg/m3]
"""
def Air3_Mv(w , P ,H ):
    """' Dry bulb temperature [°C]"""
Tdb = Air3_Tdb(w, P, H)
"""' Relative humidity [0 ; 1]"""
RH = P / Psv(Tdb) * w / (0.62198 + w)
"""' Standard volumic mass of dry air [kg/Nm3]"""
Mv0 = 1.2922
"""' Moist air density [kg/m3]"""
Air3_Mv = Mv0 * 273.15 / (273.15 + Tdb) * (P - (1 - 0.62198) * RH * Psv(Tdb)) / 101325
return Air3_Mv
"""
_____________________________________________________________________________________________
' FUNCTION Air3_Vs - Air specific volume (m3/kg dry air) as a function of specific humidity, pressure and specific enthalpy
' Input : Specific humidity [kgH2O/kg dry air]
' Pressure [Pa]
' Specific enthalpy [kJ/kg dry air]
' Output : Air specific volume [m3/kg dry air]
"""
def Air3_Vs(w , P , H ):
"""'Air specific volume [m3/kg dry air]"""
Air3_Vs = (1 + w) / Air3_Mv(w, P, H)
return Air3_Vs
"""
_____________________________________________________________________________________________
' **********************************************************
' **************** FUNCTIONS Tdb, P, w *********************
' **********************************************************
"""
"""_____________________________________________________________________________________________
' FUNCTION Air4_Hs - Specific enthalpy [kJ/kg dry air] as a function of dry bulb temperature, pressure and specific humidity
' Input : Dry bulb temperature [°C]
' Pressure [Pa]
' Specific humidity [kgH2O/kg dry air]
' Output : Specific enthalpy [kJ/kg dry air]
"""
def Air4_Hs(Tdb , P , w ) :
"""'Specific enthalpy [kJ/kg dry air]"""
Air4_Hs = H_a(Tdb, w)
return Air4_Hs
"""
_____________________________________________________________________________________________
' FUNCTION Air4_RH - Air relative humidity as a function of dry bulb temperature, pressure and specific humidity
' Input : Dry bulb temperature [°C]
' Pressure [Pa]
' Specific humidity [kgH2O/kg dry air]
' Output : Relative humidity [0 ; 1]
"""
def Air4_RH(Tdb , P , w ):
"""' Relative humidity [0 ; 1]"""
Air4_RH = P / Psv(Tdb) * w / (0.62198 + w)
return Air4_RH
"""
_____________________________________________________________________________________________
' FUNCTION Air4_Twb - Wet bulb temperature (°C) as a function of dry bulb temperature, pressure and specific humidity
' Input : Dry bulb temperature [°C]
' Pressure [Pa]
' Specific humidity [kgH2O/kg dry air]
' Output : Wet bulb temperature [°C]
"""
def Air4_Twb(Tdb , P , w ):
    """' Relative humidity [0 ; 1]"""
RH = P / Psv(Tdb) * w / (0.62198 + w)
"""' Wet bulb temperature [°C]"""
Air4_Twb = Air2_Twb(Tdb, RH, P)
return Air4_Twb
"""
_____________________________________________________________________________________________
' FUNCTION Air4_Tdp - dew point temperature (°C) as a function of dry bulb temperature, pressure and specific humidity
' Input : Dry bulb temperature [°C]
' Pressure [Pa]
' Specific humidity [kgH2O/kg dry air]
' Output : dew point temperature [°C]
"""
def Air4_Tdp(Tdb , P , w ):
    """' Relative humidity [0 ; 1]"""
RH = P / Psv(Tdb) * w / (0.62198 + w)
"""' Dew point temperature"""
Air4_Tdp = Air2_Tdp(Tdb, RH, P)
return Air4_Tdp
"""
_____________________________________________________________________________________________
' FUNCTION Air4_xH2O - Water molar fraction as a function of dry bulb temperature, pressure and specific humidity
' Input : Dry bulb temperature [°C]
' Pressure [Pa]
' Specific humidity [kgH2O/kg dry air]
' Output : Water molar fraction [0 ; 1]
"""
def Air4_xH2O(Tdb, P , w ):
"""' H2O molar fraction"""
Air4_xH2O = w / (w + 0.62198)
return Air4_xH2O
"""
___________________________________________________________________________________________
' FUNCTION Air4_Mv - Moist air density (kg/m3) as a function of dry bulb temperature, pressure and specific humidity
' Input : Dry bulb temperature [°C]
' Pressure [Pa]
' Specific humidity [kgH2O/kg dry air]
' Output : Density [kg/m3]
"""
def Air4_Mv(Tdb , P , w ):
    """' Relative humidity [0 ; 1]"""
RH = P / Psv(Tdb) * w / (0.62198 + w)
"""' Standard volumic mass of dry air [kg/Nm3]"""
Mv0 = 1.2922
"""' Moist air density [kg/m3]"""
Air4_Mv = Mv0 * 273.15 / (273.15 + Tdb) * (P - (1 - 0.62198) * RH * Psv(Tdb)) / 101325
return Air4_Mv
"""
_____________________________________________________________________________________________
' FUNCTION Air4_Vs - Air specific volume (m3/kg dry air) as a function of dry bulb temperature, pressure and specific humidity
' Input : Dry bulb temperature [°C]
' Pressure [Pa]
' Specific humidity [kgH2O/kg dry air]
' Output : Air specific volume [m3/kg dry air]
"""
def Air4_Vs(Tdb, P, w):
    # Air specific volume [m3/kg dry air]
    return (1 + w) / Air4_Mv(Tdb, P, w)
"""
_____________________________________________________________________________________________
' FUNCTION Air5_Tdb - Dry bulb temperature as a function of relative humidity, pressure and specific enthalpy
' Input : Relative humidity [%]
' Pressure [Pa]
' Specific enthalpy [kJ/kg dry air]
' Output : Dry bulb temperature [°C]
"""
def Air5_Tdb(RH, P, H):
    # Uses Newton-Raphson iteration for quick convergence
    # Dry bulb temperature [°C] - starting guess
    Tdb = 10
    # Humidity ratio [kgH2O/kg dry air]
    Ws = Air2_w(Tdb, RH, P)
    # Specific enthalpy [kJ/kg dry air]
    Hs = H_a(Tdb, Ws)
    # Solve to within 0.0001 accuracy using Newton-Raphson
    while True:
        Ws2 = Air2_w(Tdb - 0.001, RH, P)
        Hs2 = H_a(Tdb - 0.001, Ws2)
        dH_dt = (Hs - Hs2) / 0.001
        Tdb = Tdb - (Hs - H) / dH_dt
        Ws = Air2_w(Tdb, RH, P)
        Hs = H_a(Tdb, Ws)
        if abs(H - Hs) <= 0.0001:
            break
    # Dry bulb temperature [°C]
    return Tdb
"""
_____________________________________________________________________________________________
' FUNCTION Air6_Tdb - Dry bulb temperature as a function of relative humidity, pressure and wet bulb temperature
' Input : Relative humidity [%]
' Pressure [Pa]
' Wet bulb temperature [°C]
' Output : Dry bulb temperature [°C]
"""
def Air6_Tdb(RH, P, Twb):
    # Uses Newton-Raphson iteration for quick convergence
    # Dry bulb temperature [°C] - starting guess
    Tdb = Twb
    # Wet bulb temperature [°C]
    Twbs = Air2_Twb(Tdb, RH, P)
    # Solve to within 0.00001 accuracy using Newton-Raphson
    while True:
        Twbs2 = Air2_Twb(Tdb - 0.001, RH, P)
        dTw_dt = (Twbs - Twbs2) / 0.001
        Tdb = Tdb - (Twbs - Twb) / dTw_dt
        Twbs = Air2_Twb(Tdb, RH, P)
        if abs(Twb - Twbs) <= 0.00001:
            break
    # Dry bulb temperature [°C]
    return Tdb
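Air5_Tdb and Air6_Tdb above share the same backward finite-difference Newton-Raphson inversion pattern. The sketch below isolates that pattern in a self-contained form; a simple monotonic lambda stands in for the psychrometric routines (Air2_w, H_a, Air2_Twb), which are defined elsewhere in this module.

```python
def newton_fd(f, target, x0, step=0.001, tol=1e-5, max_iter=100):
    """Solve f(x) == target with a backward finite-difference
    Newton-Raphson iteration, mirroring Air5_Tdb / Air6_Tdb."""
    x = x0
    fx = f(x)
    for _ in range(max_iter):
        if abs(target - fx) <= tol:
            break
        # Backward difference approximates df/dx, exactly as the solvers above do
        dfdx = (fx - f(x - step)) / step
        x = x - (fx - target) / dfdx
        fx = f(x)
    return x

# Invert a simple monotonic function: find x such that x**2 == 9
root = newton_fd(lambda x: x ** 2, target=9.0, x0=1.0)
```

Plugging in `lambda T: H_a(T, Air2_w(T, RH, P))` as `f` reproduces the Air5_Tdb iteration.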
# """
# ______________________________________________________________________
# Plot the psychrometric chart
# """
# import numpy as np
# from matplotlib.patches import Rectangle
# import matplotlib.pyplot as plt
# import matplotlib.colors as mcolors
# P= 101325
# HR_1=[0.02,0.04,0.06,0.08,0.1,0.15,0.2,0.25,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1]
# T_1=[-20,-15,-10,-5,0,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80]
# row=len(HR_1)
# lines=len(T_1)
# R_H=np.ones((lines,row*3))
# """Definition of the relative humidity table__________________________________"""
# j=0
# for i in range(lines):
# for k in range(row):
# j=(k)*3
# R_H[i][j]=Air2_w(T_1[i],HR_1[k],P)*1000
# R_H[i][j+1]=Air2_Hs(T_1[i],HR_1[k],P)
# R_H[i][j+2]=Air2_Vs(T_1[i],HR_1[k],P)
# """
# Definition of the wet bulb temperature table________________________________________________"""
# T1=[-19,-15,-10,-5,0,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]
# T2=[25,26,27,28,29,30,31,32,33,34,35,36]
# HR_2=[0.00001,0.3,1]
# HR_3=[0.05,0.3,1]
# row1=len(T1)
# row2=len(T2)
# line1=len(HR_2)
# line2=len(HR_3)
# Twb_HR=np.zeros((line1,(row1+row2)*2))
# cpt=0
# k=0
# """First part of the table"""
# for i in range(line1):
# for k in range(row1):
# j=k*2
# Twb_HR[i,j]=Air6_Tdb(HR_2[i],P,T1[k])
# Twb_HR[i,j+1]=1000*Air2_w(Twb_HR[i,j],HR_2[i],P)
# """Second part of the table"""
# for i in range(line1):
# for k in range(row1,row1+row2):
# j=k*2
# Twb_HR[i,j]=Air6_Tdb(HR_3[i],P,T2[k-25])
# Twb_HR[i,j+1]=1000*Air2_w(Twb_HR[i,j],HR_3[i],P)
# """Definition of the dry bulb temperature table________________________________________________"""
# H=[-15,-10,-5,0,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135]
# HR_4=[0.00001,1]
# HR_5=[0.00001,0.2,0.5,1]
# HR_6=[0.00001,1]
# HR_7=[0.00001,0.93]
# HR_8=[0.00001,0.72]
# HR_9=[0.00001,0.56]
# HR_10=[0.00001,0.44]
# HR_11=[0.00001,0.35]
# HR_12=[0.00001,0.28]
# TdB_HR=np.zeros((len(HR_5),len(H)*2))
# for i in range(len(HR_4)):
# for k in range(H.index(40)+1):
# j=k*2
# TdB_HR[i,j]=Air5_Tdb(HR_4[i],P,H[k])
# TdB_HR[i,j+1]=1000*Air2_w(TdB_HR[i,j],HR_4[i],P)
# for i in range(len(HR_5)):
# for k in range(H.index(45),H.index(95)+1):
# j=k*2
# TdB_HR[i,j]=Air5_Tdb(HR_5[i],P,H[k])
# TdB_HR[i,j+1]=1000*Air2_w(TdB_HR[i,j],HR_5[i],P)
# for i in range(len(HR_6)):
# for k in range(H.index(100),H.index(105)+1):
# j=k*2
# TdB_HR[i,j]=Air5_Tdb(HR_6[i],P,H[k])
# TdB_HR[i,j+1]=1000*Air2_w(TdB_HR[i,j],HR_6[i],P)
# for i in range(len(HR_7)):
# for k in range(H.index(110),H.index(110)+1):
# j=k*2
# TdB_HR[i,j]=Air5_Tdb(HR_7[i],P,H[k])
# TdB_HR[i,j+1]=1000*Air2_w(TdB_HR[i,j],HR_7[i],P)
# for i in range(len(HR_8)):
# for k in range(H.index(115),H.index(115)+1):
# j=k*2
# TdB_HR[i,j]=Air5_Tdb(HR_8[i],P,H[k])
# TdB_HR[i,j+1]=1000*Air2_w(TdB_HR[i,j],HR_8[i],P)
# for i in range(len(HR_9)):
# for k in range(H.index(120),H.index(120)+1):
# j=k*2
# TdB_HR[i,j]=Air5_Tdb(HR_9[i],P,H[k])
# TdB_HR[i,j+1]=1000*Air2_w(TdB_HR[i,j],HR_9[i],P)
# for i in range(len(HR_10)):
#     for k in range(H.index(125),H.index(125)+1):
#         j=k*2
#         TdB_HR[i,j]=Air5_Tdb(HR_10[i],P,H[k])
#         TdB_HR[i,j+1]=1000*Air2_w(TdB_HR[i,j],HR_10[i],P)
# for i in range(len(HR_11)):
#     for k in range(H.index(130),H.index(130)+1):
#         j=k*2
#         TdB_HR[i,j]=Air5_Tdb(HR_11[i],P,H[k])
#         TdB_HR[i,j+1]=1000*Air2_w(TdB_HR[i,j],HR_11[i],P)
# for i in range(len(HR_12)):
#     for k in range(H.index(135),H.index(135)+1):
#         j=k*2
#         TdB_HR[i,j]=Air5_Tdb(HR_12[i],P,H[k])
#         TdB_HR[i,j+1]=1000*Air2_w(TdB_HR[i,j],HR_12[i],P)
# """Chart plotting________________________________________________"""
# from pylab import *
# """Figure size"""
# figure(figsize=(15,10),dpi=200)
# ylabel('Humidity Ratio - grams moisture /kg of dry air',fontsize=20)
# xlabel('Dry Bulb Temperature - °C',fontsize=20)
# """x and y axis limits___________________________________________________________________"""
# xlim(-20,60)
# ylim(0,30)
# """x and y axis ticks_____________________________________________________________"""
# xticks(linspace(-20,60,17,endpoint=True),fontsize=18)
# yticks(np.linspace(0,30,7,endpoint=True),fontsize=18)
# grid()
# """RH curves as black dashed lines__________________________________________________________________________"""
# for i in range(0,(HR_1.index(0.1))*3,3):
# plot(T_1[T_1.index(35):],R_H[T_1.index(35):,i],'k--')
# k=int(i/3)
# if i==0:
# text(T_1[T_1.index(45)]+3,R_H[T_1.index(50),i]-1,'Relative humidity ='+str(int(HR_1[k]*100))+' %',fontsize=8,rotation=5,fontweight='light')
# else:
# text(T_1[T_1.index(55)],R_H[T_1.index(55),i]-1,str(int(HR_1[k]*100))+' %',fontsize=8,rotation=30,fontweight='light')
# for i in range((HR_1.index(0.1))*3,(HR_1.index(0.4))*3,6):
# plot(T_1[T_1.index(0):],R_H[T_1.index(0):,i],'k--')
# text(25,R_H[T_1.index(25),HR_1.index(0.1)*3]-0.5,'10 %',fontsize=8,rotation=20,fontweight='light')
# text(25,R_H[T_1.index(25),HR_1.index(0.2)*3]-0.5,'20 %',fontsize=8,rotation=20,fontweight='light')
# text(25,R_H[T_1.index(25),HR_1.index(0.3)*3]-0.5,'30 %',fontsize=8,rotation=30,fontweight='light')
# for i in range((HR_1.index(0.6))*3,(HR_1.index(0.9)+1)*3,3):
# plot(T_1[T_1.index(0):],R_H[T_1.index(0):,i],'k--')
# k=int(i/3)
# text(T_1[T_1.index(25)],R_H[T_1.index(25),i]-0.5,str(int(HR_1[k]*100))+' %',fontsize=8,rotation=65,fontweight='light')
# for i in range((HR_1.index(0.15))*3,(HR_1.index(0.25)+1)*3,6):
# plot(T_1[T_1.index(20):],R_H[T_1.index(20):,i],'k--')
# plot(T_1[T_1.index(0):],R_H[T_1.index(0):,HR_1.index(0.4)*3],'k--')
# text(25,R_H[T_1.index(25),HR_1.index(0.4)*3]-0.5,'40 %',fontsize=11,rotation=35,fontweight='light')
# plot(T_1[T_1.index(-20):],R_H[T_1.index(-20):,HR_1.index(0.5)*3],'k--')
# text(25,R_H[T_1.index(25),HR_1.index(0.5)*3]-0.5,'50 %',fontsize=11,rotation=40,fontweight='light')
# plot(T_1[T_1.index(-20):],R_H[T_1.index(-20):,HR_1.index(1)*3],'k--')
# text(25,R_H[T_1.index(25),HR_1.index(1)*3]-0.5,'100 %',fontsize=8,rotation=65,fontweight='light')
# """Wet bulb curves in RED__________________________________________________________________________"""
# for i in range(0,len(T1)*2,2):
# plot(Twb_HR[:,i],Twb_HR[:,i+1],'r-')
# # for i in range(0,T1.index(5)+1):
# # k=i*2
# # text(Twb_HR[2,k],Twb_HR[2,k+1],str(int(T1[i]))+' °C',fontsize=8,rotation=-40,fontweight='bold')
# for i in range(0,len(T2)*2,2):
# plot(Twb_HR[:,i+len(T1)*2],Twb_HR[:,i+len(T1)*2+1],'r--')
# """Enthalpy curves in GREEN__________________________________________________________________________"""
# for i in range(0,(H.index(40)+1)*2,2):
# plot(TdB_HR[0:2,i],TdB_HR[0:2,i+1],'g--')
# for i in range(0,(H.index(40)+1)):
# k=i*2
# text(TdB_HR[1,k],TdB_HR[1,k+1],str(int(H[i]))+' kJ/kgda ',horizontalalignment = 'right',color = 'green',fontsize=5,rotation=-20,fontweight='light')
# for i in range(H.index(45)*2,(H.index(95)+1)*2,2):
# plot(TdB_HR[0:4,i],TdB_HR[0:4,i+1],'g--')
# for i in range(H.index(45),(H.index(95)+1)):
# k=i*2
# text(TdB_HR[3,k],TdB_HR[3,k+1],str(int(H[i]))+' kJ/kgda ',horizontalalignment = 'right',color = 'green',fontsize=5,rotation=-20,fontweight='light')
# for i in range(H.index(100)*2,(H.index(135)+1)*2,2):
# plot(TdB_HR[0:2,i],TdB_HR[0:2,i+1],'g--')
# for i in range(H.index(100),(H.index(110))):
# k=i*2
# text(TdB_HR[1,k],TdB_HR[1,k+1],str(int(H[i]))+' kJ/kgda ',horizontalalignment = 'right',color = 'green',fontsize=5,rotation=-20,fontweight='light')
# for i in range(H.index(110),(H.index(120))):
# k=i*2
# text(TdB_HR[1,k]+2,TdB_HR[1,k+1]-1,str(int(H[i]))+' kJ/kgda ',horizontalalignment = 'left',color = 'green',fontsize=5,rotation=-35,fontweight='light')
# j=0
# for i in range(H.index(120),(H.index(135)+1)):
# k=i*2
# l=j*5
# text(46+l,29,str(int(H[i]))+' kJ/kgda ',horizontalalignment = 'right',color = 'green',fontsize=5,rotation=-35,fontweight='light')
#             j=j+1
/LinkChecker-10.2.1-py3-none-any.whl/linkcheck/command/setup_config.py

import codecs
import getpass
from .. import fileutil
from .. import i18n
from .. import logger
from .. import LOG_CMDLINE
from .. import get_link_pat, log
from ..cmdline import print_version, print_usage, print_plugins
from ..director import console
def has_encoding(encoding):
"""Detect if Python can encode in a certain encoding."""
try:
codecs.lookup(encoding)
return True
except LookupError:
return False
def setup_config(config, options):
"""Set up linkchecker based on command-line options and configuration"""
_username = None
_password = None
# test if running with -O
if options.debug and not __debug__:
log.warn(LOG_CMDLINE, _("Running with python -O disables debugging."))
# apply commandline options and arguments to configuration
constructauth = False
if options.version:
print_version()
if not options.warnings:
config["warnings"] = options.warnings
if options.externstrict:
pats = [get_link_pat(arg, strict=True) for arg in options.externstrict]
config["externlinks"].extend(pats)
if options.extern:
pats = [get_link_pat(arg) for arg in options.extern]
config["externlinks"].extend(pats)
if options.norobotstxt is not None:
config["robotstxt"] = options.norobotstxt
if options.checkextern:
config["checkextern"] = True
elif not config["checkextern"]:
log.info(
LOG_CMDLINE,
"Checking intern URLs only; use --check-extern to check extern URLs.",
)
if options.output:
if "/" in options.output:
logtype, encoding = options.output.split("/", 1)
else:
logtype, encoding = options.output, i18n.default_encoding
logtype = logtype.lower()
if logtype == "blacklist":
log.warn(
LOG_CMDLINE,
_("blacklist is deprecated for option %(option)s, "
"using failures instead") % {"option": "'-o, --output'"}
)
logtype = "failures"
if logtype not in logger.LoggerNames:
print_usage(
_("Unknown logger type %(type)r in %(output)r for option %(option)s")
% {"type": logtype,
"output": options.output,
"option": "'-o, --output'"}
)
if logtype != "none" and not has_encoding(encoding):
print_usage(
_("Unknown encoding %(encoding)r in %(output)r for option %(option)s")
% {
"encoding": encoding,
"output": options.output,
"option": "'-o, --output'",
}
)
config["output"] = logtype
config["logger"] = config.logger_new(logtype, encoding=encoding)
if options.fileoutput:
ns = {"fileoutput": 1}
for arg in options.fileoutput:
ftype = arg
# look for (optional) filename and encoding
if "/" in ftype:
ftype, suffix = ftype.split("/", 1)
if suffix:
if has_encoding(suffix):
# it was an encoding
ns["encoding"] = suffix
elif "/" in suffix:
# look for (optional) encoding
encoding, filename = suffix.split("/", 1)
if has_encoding(encoding):
ns["encoding"] = encoding
ns["filename"] = filename
else:
ns["filename"] = suffix
else:
ns["filename"] = suffix
if ftype == "blacklist":
log.warn(
LOG_CMDLINE,
_("blacklist logger is deprecated for option %(option)s, "
"using failures instead") % {"option": "'-F, --file-output'"}
)
ftype = "failures"
if ftype not in logger.LoggerNames:
print_usage(
_("Unknown logger type %(type)r in %(output)r"
" for option %(option)s")
% {
"type": ftype,
"output": options.fileoutput,
"option": "'-F, --file-output'",
}
)
if ftype != "none" and "encoding" in ns \
and not has_encoding(ns["encoding"]):
print_usage(
_("Unknown encoding %(encoding)r in %(output)r"
" for option %(option)s")
% {
"encoding": ns["encoding"],
"output": options.fileoutput,
"option": "'-F, --file-output'",
}
)
new_logger = config.logger_new(ftype, **ns)
config["fileoutput"].append(new_logger)
if options.nntpserver:
config["nntpserver"] = options.nntpserver
if options.username:
_username = options.username
constructauth = True
if options.password:
if _username:
msg = _("Enter LinkChecker HTTP/FTP password for user %(user)s:") % {
"user": _username
}
else:
msg = _("Enter LinkChecker HTTP/FTP password:")
_password = getpass.getpass(console.encode(msg))
constructauth = True
if options.quiet:
config["logger"] = config.logger_new("none")
if options.recursionlevel is not None:
config["recursionlevel"] = options.recursionlevel
if options.status is not None:
config["status"] = options.status
if options.threads is not None:
if options.threads < 1:
options.threads = 0
config["threads"] = options.threads
if options.timeout is not None:
if options.timeout > 0:
config["timeout"] = options.timeout
else:
print_usage(
_("Illegal argument %(arg)r for option %(option)s")
% {"arg": options.timeout, "option": "'--timeout'"}
)
if options.listplugins:
print_plugins(config["pluginfolders"])
    if options.verbose:
        config["verbose"] = True
        config["warnings"] = True
if constructauth:
config.add_auth(pattern=".+", user=_username, password=_password)
# read missing passwords
for entry in config["authentication"]:
if entry["password"] is None:
attrs = entry.copy()
attrs["strpattern"] = attrs["pattern"].pattern
msg = (
_("Enter LinkChecker password for user %(user)s at %(strpattern)s:")
% attrs
)
entry["password"] = getpass.getpass(msg)
if options.useragent is not None:
config["useragent"] = options.useragent
if options.cookiefile is not None:
if fileutil.is_readable(options.cookiefile):
config["cookiefile"] = options.cookiefile
else:
msg = _("Could not read cookie file %s") % options.cookiefile
            log.error(LOG_CMDLINE, msg)
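The `-o/--output` handling above accepts a combined `logtype/encoding` value. A minimal sketch of that splitting convention in isolation (the helper name and default encoding are illustrative, not part of LinkChecker's API):

```python
def split_output_spec(spec, default_encoding="utf-8"):
    # Mirrors the "logtype/encoding" parsing done for -o/--output above
    if "/" in spec:
        logtype, encoding = spec.split("/", 1)
    else:
        logtype, encoding = spec, default_encoding
    return logtype.lower(), encoding

print(split_output_spec("CSV/latin-1"))  # -> ('csv', 'latin-1')
```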
/Helios_Scanner-1.1-py3-none-any.whl/helios/core/scripts.py

import os
import json
from helios.core.engine import MatchObject, CustomRequestBuilder, RequestBuilder
from helios.core.utils import has_seen_before, response_to_dict
import sys
import logging
try:
import urlparse
except ImportError:
import urllib.parse as urlparse
class ScriptEngine:
scripts_active = []
scripts_fs = []
scripts_passive = []
results = []
triggers = []
can_fs = True
can_exploit = True
s = None
options = None
log_level = logging.INFO
writer = None
def __init__(self, options=None, logger=logging.INFO, database=None):
self.logger = self.logger = logging.getLogger("ScriptEngine")
self.logger.setLevel(logger)
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logger)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
if not self.logger.handlers:
self.logger.addHandler(ch)
self.logger.debug("Starting script parser")
self.options = options
self.log_level = logger
self.parse_scripts()
self.writer = database
def parse_scripts(self):
self.s = ScriptParser(logger=self.log_level)
self.s.load_scripts()
self.scripts_active = []
self.scripts_fs = []
self.scripts_passive = []
for script in self.s.scripts:
matches = []
if self.options:
if 'all' not in self.options:
if self.options and 'options' in script:
for sub in script['options']:
if str(sub) not in self.options:
self.logger.debug("Disabling script %s because %s is not enabled" % (script['name'], sub))
continue
else:
if 'options' in script and 'dangerous' in script['options']:
self.logger.debug("Disabling script %s because dangerous flag is present, use --options all or add the dangerous flag to override" % (script['name']))
continue
for x in script['matches']:
mobj = MatchObject(
mtype=x['type'],
match=x['match'],
location=x['location'],
name=x['name'] if 'name' in x else script['name'],
options=list(x['options'])
)
matches.append(mobj)
script_data = {
"name": script['name'],
"find": script['find'],
"severity": script['severity'],
"request": script['request'],
"data": script['data'] if 'data' in script else {},
"matches": matches
}
if not script['request']:
if script['run_at'] == "response":
self.scripts_passive.append(script_data)
if script['run_at'] == "fs":
self.scripts_fs.append(script_data)
if script['request']:
self.scripts_active.append(script_data)
def run_fs(self, base_url):
links = []
if self.can_fs:
for script in self.scripts_fs:
if str(script['find']) == "once":
if has_seen_before(script['name'], self.results):
continue
data = script['data']
new_req = CustomRequestBuilder(
url=data['url'],
data=data['data'] if 'data' in data else None,
headers=data['headers'] if 'headers' in data else {},
options=data['options'] if 'options' in data else [],
)
new_req.root_url = base_url
result = new_req.run()
if result:
# is found so added to crawler
if result.response.code == 200:
links.append([urlparse.urljoin(base_url, new_req.url), new_req.data])
for match in script['matches']:
mresult = match.run(result.response)
if mresult:
res = "%s [%s] > %s" % (script['name'], result.response.to_string(), mresult)
self.logger.info("Discovered: %s" % res)
if self.writer:
severity = script['severity'] if 'severity' in script else 0
text = json.dumps({"request": response_to_dict(result.response), "match": mresult})
self.writer.put(result_type="Basic Script - Filesystem", script=script['name'], severity=severity, text=text)
self.results.append({"script": script['name'], "match": mresult, "data": response_to_dict(result.response)})
return links
def run_scripts(self, request):
for script in self.scripts_passive:
if str(script['find']) == "once":
if has_seen_before(script['name'], self.results):
continue
for match in script['matches']:
result = match.run(request.response)
if result:
res = "%s [%s] > %s" % (script['name'], request.response.to_string(), result)
self.logger.info("Discovered: %s" % res)
if self.writer:
severity = script['severity'] if 'severity' in script else 0
text = json.dumps({"request": response_to_dict(request.response), "match": result})
self.writer.put(result_type="Basic Script - Passive", script=script['name'],
severity=severity, text=text, allow_only_once=str(script['find']) == "once")
self.results.append(
{"script": script['name'], "match": result, "data": response_to_dict(request.response)})
if self.can_exploit:
for script in self.scripts_active:
if str(script['find']) == "once":
if has_seen_before(script['name'], self.results):
continue
try:
r = RequestBuilder(
req=request,
inject_type=script['request'],
inject_value=script['data']['inject_value'],
matchobject=script['matches'],
name=script['name']
)
                results = r.run()
if results:
for scan_result in results:
if scan_result not in self.results:
res = "[%s] URL %s > %s" % (script['name'], scan_result['request']['request']['url'], scan_result['match'])
self.logger.info("Discovered: %s" % res)
if self.writer:
severity = script['severity'] if 'severity' in script else 0
text = json.dumps(scan_result)
self.writer.put(result_type="Basic Script - Active", script=script['name'],
severity=severity, text=text)
self.results.append(res)
except Exception as e:
self.logger.warning("Error running script %s: %s" % (script['name'], str(e)))
class ScriptParser:
directory = '../scripts'
root_dir = ''
script_dir = ''
scripts = []
logger = None
def __init__(self, newdir=None, logger=logging.INFO):
self.root_dir = os.path.dirname(os.path.realpath(__file__))
self.script_dir = os.path.join(self.root_dir, self.directory) if not newdir else newdir
self.logger = logging.getLogger("ScriptParser")
self.logger.setLevel(logger)
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logger)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
if not self.logger.handlers:
self.logger.addHandler(ch)
if not os.path.isdir(self.script_dir):
self.logger.error("Cannot initialise script engine, directory '%s' does not exist" % self.script_dir)
self.scripts = []
def load_scripts(self):
self.logger.debug("Init script engine")
for f in os.listdir(self.script_dir):
script = os.path.join(self.script_dir, f)
if os.path.isfile(script):
try:
with open(script, 'r') as scriptfile:
data = scriptfile.read()
jsondata = json.loads(data)
self.scripts.append(jsondata)
except ValueError:
self.logger.error("Script %s appears to be invalid JSON, ignoring" % f)
pass
except IOError:
self.logger.error("Unable to access script file %s, ignoring" % f)
pass
        self.logger.info("Script Engine loaded %d scripts" % len(self.scripts))
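Based on the keys that parse_scripts() and ScriptParser read above, a script file is a JSON document along these lines. The schema below is inferred from the parser, and all concrete values are made up for illustration:

```python
import json

# Hypothetical check definition; every key shown is one the parser reads.
example_script = {
    "name": "Exposed git config",
    "find": "once",          # "once" -> stop reporting after the first hit
    "severity": 2,
    "request": False,        # falsy request + run_at "fs" -> filesystem probe
    "run_at": "fs",
    "data": {"url": ".git/config"},
    "matches": [
        {"type": "regex", "match": "\\[core\\]", "location": "body", "options": []}
    ],
}

serialized = json.dumps(example_script, indent=2)
```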
/MindsDB-23.8.3.0.tar.gz/MindsDB-23.8.3.0/mindsdb/integrations/handlers_client/ml_grpc_client.py

import json
import pickle
import grpc
from mindsdb.microservices_grpc.ml import ml_pb2_grpc
from mindsdb.microservices_grpc.ml import ml_pb2, common_pb2
from mindsdb.integrations.libs.response import (
HandlerResponse as Response,
HandlerStatusResponse as StatusResponse,
)
# from mindsdb.integrations.libs.ml_exec_base import BaseMLEngineExec
from mindsdb.integrations.libs.handler_helpers import action_logger
from mindsdb.utilities.context import context as ctx
from mindsdb.utilities.log import get_log
logger = get_log(logger_name="main")
class MLClientGRPC:
def __init__(self, host, port, **handler_params):
self.host = host
self.port = port
self.handler_params = handler_params
self.integration_id = handler_params.get("integration_id")
self.predictor_id = handler_params.get("predictor_id")
for key in ("integration_id", "predictor_id"):
try:
del self.handler_params[key]
except Exception:
pass
# have to create a handler instance
# because Executor accesses to some handler attributes
# directly
# self.handler = BaseMLEngineExec(**self.handler_params)
# remove all 'object' params from dict before sending it to the serverside.
# all of them will be created there
self.channel = grpc.insecure_channel(f"{self.host}:{self.port}")
self.stub = ml_pb2_grpc.MLServiceStub(self.channel)
def __del__(self):
if hasattr(self, "channel"):
self.channel.close()
@property
def context(self):
return ml_pb2.HandlerContextML(
predictor_id=self.predictor_id,
integration_id=self.integration_id,
context=json.dumps(ctx.dump()),
handler_params=json.dumps(self.handler_params),
)
@staticmethod
def _to_status_response(response: common_pb2.StatusResponse):
return StatusResponse(success=response.success,
error_message=response.error_message)
@staticmethod
def _to_response(response: common_pb2.Response):
data = pickle.loads(response.data_frame)
return Response(
resp_type=response.type,
data_frame=data,
query=response.query,
error_code=response.error_code,
error_message=response.error_message,
)
@action_logger(logger)
def predict(
self,
df,
args
):
request = ml_pb2.PredictCall(context=self.context, df=pickle.dumps(df), args=json.dumps(args))
resp = self.stub.Predict(request)
logger.info("%s.learn: returned error - %s, error_message - %s", self.__class__.__name__, resp.error_code, resp.error_message)
if resp.error_code and resp.error_message:
raise Exception(resp.error_message)
return pickle.loads(resp.data_frame)
@action_logger(logger)
def create(
self,
target,
df,
args
):
request = ml_pb2.CreateCall(context=self.context, target=target, df=pickle.dumps(df), args=json.dumps(args))
resp = self.stub.Create(request)
logger.info("%s.learn: success - %s", self.__class__.__name__, resp.success)
if not resp.success:
logger.error("%s.learn: returned error - %s", self.__class__.__name__, resp.error_message)
raise Exception(resp.error_message)
@action_logger(logger)
def update(
self,
df,
args
):
request = ml_pb2.UpdateCall(context=self.context, df=pickle.dumps(df), args=json.dumps(args))
resp = self.stub.Update(request)
logger.info("%s.update: success - %s", self.__class__.__name__, resp.success)
if not resp.success:
logger.error("%s.update: returned error - %s", self.__class__.__name__, resp.error_message)
raise Exception(resp.error_message)
@action_logger(logger)
def close(self):
        pass
/Nuitka_winsvc-1.7.10-cp310-cp310-win_amd64.whl/nuitka/tools/testing/RuntimeTracing.py

import os
import re
import sys
from nuitka.freezer.DependsExe import getDependsExePath, parseDependsExeOutput
from nuitka.utils.Execution import (
callProcess,
executeProcess,
isExecutableCommand,
withEnvironmentVarOverridden,
)
from nuitka.utils.FileOperations import deleteFile
from nuitka.utils.Utils import isFreeBSD, isMacOS, isWin32Windows
from .Common import traceExecutedCommand
def _getRuntimeTraceOfLoadedFilesWin32(logger, command, required):
path = command[0]
output_filename = path + ".depends"
command = (
getDependsExePath(),
"-c", # Console mode
"-ot%s" % output_filename,
"-f1",
"-pb",
"-pa1", # Turn on all profiling options.
"-ps1", # Simulate ShellExecute with app dirs in PATH.
"-pp1", # Do not long DllMain calls.
"-po1", # Log DllMain call for all other messages.
"-ph1", # Hook the process.
"-pl1", # Log LoadLibrary calls.
"-pt1", # Thread information.
"-pe1", # First chance exceptions.
"-pg1", # Log GetProcAddress calls.
"-pf1", # Use full paths.
"-pc1", # Profile child processes.
) + tuple(command)
# TODO: Move the handling of this into nuitka.tools.Execution module methods.
try:
callProcess(command, timeout=5 * 60)
except Exception as e: # Catch all the things, pylint: disable=broad-except
if e.__class__.__name__ == "TimeoutExpired":
if required:
logger.sysexit("Timeout encountered when running dependency walker.")
logger.warning("Timeout encountered when running dependency walker.")
return []
else:
raise
result = parseDependsExeOutput(output_filename)
deleteFile(output_filename, must_exist=False)
return result
def _takeSystemCallTraceOutput(logger, path, command):
tracing_tool = command[0] if command[0] != "sudo" else command[1]
result = []
# Ensure executable is not polluted with third party stuff,
# tests may fail otherwise due to unexpected libs being loaded
# spell-checker: ignore ENOENT,write_nocancel
with withEnvironmentVarOverridden("LD_PRELOAD", None):
if os.environ.get("NUITKA_TRACE_COMMANDS", "0") != "0":
traceExecutedCommand("Tracing with:", command)
_stdout_strace, stderr_strace, exit_strace = executeProcess(
command, stdin=False, timeout=5 * 60
)
if exit_strace != 0:
if str is not bytes:
stderr_strace = stderr_strace.decode("utf8")
logger.warning(stderr_strace)
logger.sysexit("Failed to run '%s'." % tracing_tool)
if b"dtrace: system integrity protection is on" in stderr_strace:
logger.sysexit("System integrity protection prevents system call tracing.")
with open(path + "." + tracing_tool, "wb") as f:
f.write(stderr_strace)
for line in stderr_strace.split(b"\n"):
if exit_strace != 0:
logger.my_print(line)
if not line:
continue
# Don't consider files not found. The "site" module checks lots
# of things.
if b"ENOENT" in line:
continue
if line.startswith((b"write(", b"write_nocancel(")):
continue
if line.startswith((b"stat(", b"newfstatat(")) and b"S_IFDIR" in line:
continue
# Don't consider files not found.
if line.startswith(b"stat64(") and b"= -1" in line:
continue
result.extend(
os.path.abspath(match)
for match in re.findall(b'"(.*?)(?:\\\\0)?"', line)
)
if str is not bytes:
result = [s.decode("utf8") for s in result]
return result
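The path extraction above pulls file names out of quoted strings in strace/dtruss output with a regex. A tiny self-contained illustration of that extraction (the sample log line is made up):

```python
import re

# Same pattern used in _takeSystemCallTraceOutput: a quoted string,
# optionally ending in a literal "\0" escape before the closing quote.
sample = b'openat(AT_FDCWD, "/usr/lib/libc.so.6", O_RDONLY|O_CLOEXEC) = 3'
paths = re.findall(b'"(.*?)(?:\\\\0)?"', sample)
```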
def _getRuntimeTraceOfLoadedFilesDtruss(logger, command):
if not isExecutableCommand("dtruss"):
logger.sysexit(
"""\
Error, needs 'dtruss' on your system to scan used libraries."""
)
if not isExecutableCommand("sudo"):
logger.sysexit(
"""\
Error, needs 'sudo' on your system to scan used libraries."""
)
binary_path = os.path.abspath(command[0])
command = ("sudo", "dtruss", binary_path) + tuple(command[1:])
return _takeSystemCallTraceOutput(logger=logger, command=command, path=binary_path)
def _getRuntimeTraceOfLoadedFilesStrace(logger, command):
if not isExecutableCommand("strace"):
logger.sysexit(
"""\
Error, needs 'strace' on your system to scan used libraries."""
)
binary_path = os.path.abspath(command[0])
command = (
"strace",
"-e",
"file",
"-s4096", # Some paths are truncated in output otherwise
binary_path,
) + tuple(command[1:])
return _takeSystemCallTraceOutput(logger=logger, command=command, path=binary_path)
_supports_taking_runtime_traces = None
def doesSupportTakingRuntimeTrace():
if not isMacOS():
return True
# Python2 hangs calling dtruss for no good reason, probably a bug in
# subprocess32 with Python2 that we do not care about.
if str is bytes:
return False
# singleton, pylint: disable=global-statement
global _supports_taking_runtime_traces
if _supports_taking_runtime_traces is None:
command = ("sudo", "dtruss", "echo")
_stdout, stderr, exit_code = executeProcess(
command, stdin=False, timeout=5 * 60
)
_supports_taking_runtime_traces = (
exit_code == 0
and b"dtrace: system integrity protection is on" not in stderr
)
return _supports_taking_runtime_traces
def getRuntimeTraceOfLoadedFiles(logger, command, required=False):
"""Returns the files loaded when executing a binary."""
# This will make a crazy amount of work,
# pylint: disable=I0021,too-many-branches,too-many-locals,too-many-statements
path = command[0]
if not os.path.exists(path):
logger.sysexit(
"Error, cannot find '%s' ('%s')." % (path, os.path.abspath(path))
)
result = []
if isWin32Windows():
result = _getRuntimeTraceOfLoadedFilesWin32(
logger=logger, command=command, required=required
)
elif isMacOS() or isFreeBSD():
# On macOS and FreeBSD, we can use dtruss, which is similar to strace.
result = _getRuntimeTraceOfLoadedFilesDtruss(logger=logger, command=command)
elif os.name == "posix":
result = _getRuntimeTraceOfLoadedFilesStrace(logger=logger, command=command)
result = tuple(sorted(set(result)))
return result
def main():
from nuitka.Tracing import tools_logger
for filename in getRuntimeTraceOfLoadedFiles(
logger=tools_logger, command=sys.argv[1:]
):
print(filename)
if __name__ == "__main__":
    main()
/Nuitka-1.8.tar.gz/Nuitka-1.8/nuitka/build/inline_copy/lib/scons-2.3.2/SCons/Tool/MSCommon/vs.py
__revision__ = "src/engine/SCons/Tool/MSCommon/vs.py rel_2.3.5:3329:275e75118ad4 2015/06/20 11:18:26 bdbaddog"
__doc__ = """Module to detect Visual Studio and/or Visual C/C++
"""
import os
import SCons.Errors
import SCons.Util
from common import debug, \
get_output, \
is_win64, \
normalize_env, \
parse_output, \
read_reg
import SCons.Tool.MSCommon.vc
class VisualStudio(object):
"""
An abstract base class for trying to find installed versions of
Visual Studio.
"""
def __init__(self, version, **kw):
self.version = version
kw['vc_version'] = kw.get('vc_version', version)
kw['sdk_version'] = kw.get('sdk_version', version)
self.__dict__.update(kw)
self._cache = {}
#
def find_batch_file(self):
vs_dir = self.get_vs_dir()
if not vs_dir:
debug('find_executable(): no vs_dir')
return None
batch_file = os.path.join(vs_dir, self.batch_file_path)
batch_file = os.path.normpath(batch_file)
if not os.path.isfile(batch_file):
debug('find_batch_file(): %s not on file system' % batch_file)
return None
return batch_file
def find_vs_dir_by_vc(self):
SCons.Tool.MSCommon.vc.get_installed_vcs()
dir = SCons.Tool.MSCommon.vc.find_vc_pdir(self.vc_version)
if not dir:
debug('find_vs_dir(): no installed VC %s' % self.vc_version)
return None
return dir
def find_vs_dir_by_reg(self):
root = 'Software\\'
if is_win64():
root = root + 'Wow6432Node\\'
for key in self.hkeys:
if key=='use_dir':
return self.find_vs_dir_by_vc()
key = root + key
try:
comps = read_reg(key)
except WindowsError, e:
debug('find_vs_dir_by_reg(): no VS registry key %s' % repr(key))
else:
debug('find_vs_dir_by_reg(): found VS in registry: %s' % comps)
return comps
return None
def find_vs_dir(self):
""" Can use registry or location of VC to find vs dir
First try to find by registry, and if that fails find via VC dir
"""
vs_dir = self.find_vs_dir_by_reg()
if not vs_dir:
vs_dir = self.find_vs_dir_by_vc()
return vs_dir
def find_executable(self):
vs_dir = self.get_vs_dir()
if not vs_dir:
debug('find_executable(): no vs_dir (%s)'%vs_dir)
return None
executable = os.path.join(vs_dir, self.executable_path)
executable = os.path.normpath(executable)
if not os.path.isfile(executable):
debug('find_executable(): %s not on file system' % executable)
return None
return executable
#
def get_batch_file(self):
try:
return self._cache['batch_file']
except KeyError:
batch_file = self.find_batch_file()
self._cache['batch_file'] = batch_file
return batch_file
def get_executable(self):
try:
debug('get_executable using cache:%s'%self._cache['executable'])
return self._cache['executable']
except KeyError:
executable = self.find_executable()
self._cache['executable'] = executable
debug('get_executable not in cache:%s'%executable)
return executable
def get_vs_dir(self):
try:
return self._cache['vs_dir']
except KeyError:
vs_dir = self.find_vs_dir()
self._cache['vs_dir'] = vs_dir
return vs_dir
def get_supported_arch(self):
try:
return self._cache['supported_arch']
except KeyError:
# RDEVE: for the time being use hardcoded lists
# supported_arch = self.find_supported_arch()
self._cache['supported_arch'] = self.supported_arch
return self.supported_arch
def reset(self):
self._cache = {}
# The list of supported Visual Studio versions we know how to detect.
#
# How to look for .bat file ?
# - VS 2008 Express (x86):
# * from registry key productdir, gives the full path to vsvarsall.bat. In
# HKEY_LOCAL_MACHINE):
# Software\Microsoft\VCExpress\9.0\Setup\VC\productdir
# * from environment variable VS90COMNTOOLS: the path is then ..\..\VC
# relatively to the path given by the variable.
#
# - VS 2008 Express (WoW6432: 32 bits on windows x64):
# Software\Wow6432Node\Microsoft\VCExpress\9.0\Setup\VC\productdir
#
# - VS 2005 Express (x86):
# * from registry key productdir, gives the full path to vsvarsall.bat. In
# HKEY_LOCAL_MACHINE):
# Software\Microsoft\VCExpress\8.0\Setup\VC\productdir
# * from environment variable VS80COMNTOOLS: the path is then ..\..\VC
# relatively to the path given by the variable.
#
# - VS 2005 Express (WoW6432: 32 bits on windows x64): does not seem to have a
# productdir ?
#
# - VS 2003 .Net (pro edition ? x86):
# * from registry key productdir. The path is then ..\Common7\Tools\
# relatively to the key. The key is in HKEY_LOCAL_MACHINE):
# Software\Microsoft\VisualStudio\7.1\Setup\VC\productdir
# * from environment variable VS71COMNTOOLS: the path is the full path to
# vsvars32.bat
#
# - VS 98 (VS 6):
# * from registry key productdir. The path is then Bin
# relatively to the key. The key is in HKEY_LOCAL_MACHINE):
# Software\Microsoft\VisualStudio\6.0\Setup\VC98\productdir
#
# The first version found in the list is the one used by default if
# there are multiple versions installed. Barring good reasons to
# the contrary, this means we should list versions from most recent
# to oldest. Pro versions get listed before Express versions on the
# assumption that, by default, you'd rather use the version you paid
# good money for in preference to whatever Microsoft makes available
# for free.
#
# If you update this list, update _VCVER and _VCVER_TO_PRODUCT_DIR in
# Tool/MSCommon/vc.py, and the MSVC_VERSION documentation in Tool/msvc.xml.
SupportedVSList = [
# Visual Studio 2015
VisualStudio('14.0',
vc_version='14.0',
sdk_version='10.0A',
hkeys=[r'Microsoft\VisualStudio\14.0\Setup\VS\ProductDir'],
common_tools_var='VS140COMNTOOLS',
executable_path=r'Common7\IDE\devenv.com',
batch_file_path=r'Common7\Tools\vsvars32.bat',
supported_arch=['x86', 'amd64', "arm"],
),
# Visual C++ 2015 Express Edition (for Desktop)
VisualStudio('14.0Exp',
vc_version='14.0',
sdk_version='10.0A',
hkeys=[r'Microsoft\VisualStudio\14.0\Setup\VS\ProductDir'],
common_tools_var='VS140COMNTOOLS',
executable_path=r'Common7\IDE\WDExpress.exe',
batch_file_path=r'Common7\Tools\vsvars32.bat',
supported_arch=['x86', 'amd64', "arm"],
),
# Visual Studio 2013
VisualStudio('12.0',
vc_version='12.0',
sdk_version='8.1A',
hkeys=[r'Microsoft\VisualStudio\12.0\Setup\VS\ProductDir'],
common_tools_var='VS120COMNTOOLS',
executable_path=r'Common7\IDE\devenv.com',
batch_file_path=r'Common7\Tools\vsvars32.bat',
supported_arch=['x86', 'amd64'],
),
# Visual C++ 2013 Express Edition (for Desktop)
VisualStudio('12.0Exp',
vc_version='12.0',
sdk_version='8.1A',
hkeys=[r'Microsoft\VisualStudio\12.0\Setup\VS\ProductDir'],
common_tools_var='VS120COMNTOOLS',
executable_path=r'Common7\IDE\WDExpress.exe',
batch_file_path=r'Common7\Tools\vsvars32.bat',
supported_arch=['x86', 'amd64'],
),
# Visual Studio 2012
VisualStudio('11.0',
sdk_version='8.0A',
hkeys=[r'Microsoft\VisualStudio\11.0\Setup\VS\ProductDir'],
common_tools_var='VS110COMNTOOLS',
executable_path=r'Common7\IDE\devenv.com',
batch_file_path=r'Common7\Tools\vsvars32.bat',
supported_arch=['x86', 'amd64'],
),
# Visual C++ 2012 Express Edition (for Desktop)
VisualStudio('11.0Exp',
vc_version='11.0',
sdk_version='8.0A',
hkeys=[r'Microsoft\VisualStudio\11.0\Setup\VS\ProductDir'],
common_tools_var='VS110COMNTOOLS',
executable_path=r'Common7\IDE\WDExpress.exe',
batch_file_path=r'Common7\Tools\vsvars32.bat',
supported_arch=['x86', 'amd64'],
),
# Visual Studio 2010
VisualStudio('10.0',
sdk_version='7.0A',
hkeys=[r'Microsoft\VisualStudio\10.0\Setup\VS\ProductDir'],
common_tools_var='VS100COMNTOOLS',
executable_path=r'Common7\IDE\devenv.com',
batch_file_path=r'Common7\Tools\vsvars32.bat',
supported_arch=['x86', 'amd64'],
),
# Visual C++ 2010 Express Edition
VisualStudio('10.0Exp',
vc_version='10.0',
sdk_version='7.0A',
hkeys=[r'Microsoft\VCExpress\10.0\Setup\VS\ProductDir'],
common_tools_var='VS100COMNTOOLS',
executable_path=r'Common7\IDE\VCExpress.exe',
batch_file_path=r'Common7\Tools\vsvars32.bat',
supported_arch=['x86'],
),
# Visual Studio 2008
VisualStudio('9.0',
sdk_version='6.0A',
hkeys=[r'Microsoft\VisualStudio\9.0\Setup\VS\ProductDir'],
common_tools_var='VS90COMNTOOLS',
executable_path=r'Common7\IDE\devenv.com',
batch_file_path=r'Common7\Tools\vsvars32.bat',
supported_arch=['x86', 'amd64'],
),
# Visual C++ 2008 Express Edition
VisualStudio('9.0Exp',
vc_version='9.0',
sdk_version='6.0A',
hkeys=[r'Microsoft\VCExpress\9.0\Setup\VS\ProductDir'],
common_tools_var='VS90COMNTOOLS',
executable_path=r'Common7\IDE\VCExpress.exe',
batch_file_path=r'Common7\Tools\vsvars32.bat',
supported_arch=['x86'],
),
# Visual Studio 2005
VisualStudio('8.0',
sdk_version='6.0A',
hkeys=[r'Microsoft\VisualStudio\8.0\Setup\VS\ProductDir'],
common_tools_var='VS80COMNTOOLS',
executable_path=r'Common7\IDE\devenv.com',
batch_file_path=r'Common7\Tools\vsvars32.bat',
default_dirname='Microsoft Visual Studio 8',
supported_arch=['x86', 'amd64'],
),
# Visual C++ 2005 Express Edition
VisualStudio('8.0Exp',
vc_version='8.0Exp',
sdk_version='6.0A',
hkeys=[r'Microsoft\VCExpress\8.0\Setup\VS\ProductDir'],
common_tools_var='VS80COMNTOOLS',
executable_path=r'Common7\IDE\VCExpress.exe',
batch_file_path=r'Common7\Tools\vsvars32.bat',
default_dirname='Microsoft Visual Studio 8',
supported_arch=['x86'],
),
# Visual Studio .NET 2003
VisualStudio('7.1',
sdk_version='6.0',
hkeys=[r'Microsoft\VisualStudio\7.1\Setup\VS\ProductDir'],
common_tools_var='VS71COMNTOOLS',
executable_path=r'Common7\IDE\devenv.com',
batch_file_path=r'Common7\Tools\vsvars32.bat',
default_dirname='Microsoft Visual Studio .NET 2003',
supported_arch=['x86'],
),
# Visual Studio .NET
VisualStudio('7.0',
sdk_version='2003R2',
hkeys=[r'Microsoft\VisualStudio\7.0\Setup\VS\ProductDir'],
common_tools_var='VS70COMNTOOLS',
executable_path=r'IDE\devenv.com',
batch_file_path=r'Common7\Tools\vsvars32.bat',
default_dirname='Microsoft Visual Studio .NET',
supported_arch=['x86'],
),
# Visual Studio 6.0
VisualStudio('6.0',
sdk_version='2003R1',
hkeys=[r'Microsoft\VisualStudio\6.0\Setup\Microsoft Visual Studio\ProductDir',
'use_dir'],
common_tools_var='VS60COMNTOOLS',
executable_path=r'Common\MSDev98\Bin\MSDEV.COM',
batch_file_path=r'Common7\Tools\vsvars32.bat',
default_dirname='Microsoft Visual Studio',
supported_arch=['x86'],
),
]
SupportedVSMap = {}
for vs in SupportedVSList:
SupportedVSMap[vs.version] = vs
# Finding installed versions of Visual Studio isn't cheap, because it
# goes not only to the registry but also to the disk to sanity-check
# that there is, in fact, a Visual Studio directory there and that the
# registry entry isn't just stale. Find this information once, when
# requested, and cache it.
InstalledVSList = None
InstalledVSMap = None
def get_installed_visual_studios():
global InstalledVSList
global InstalledVSMap
if InstalledVSList is None:
InstalledVSList = []
InstalledVSMap = {}
for vs in SupportedVSList:
debug('trying to find VS %s' % vs.version)
if vs.get_executable():
debug('found VS %s' % vs.version)
InstalledVSList.append(vs)
InstalledVSMap[vs.version] = vs
return InstalledVSList
def reset_installed_visual_studios():
global InstalledVSList
global InstalledVSMap
InstalledVSList = None
InstalledVSMap = None
for vs in SupportedVSList:
vs.reset()
# Need to clear installed VC's as well as they are used in finding
# installed VS's
SCons.Tool.MSCommon.vc.reset_installed_vcs()
# We may be asked to update multiple construction environments with
# SDK information. When doing this, we check on-disk for whether
# the SDK has 'mfc' and 'atl' subdirectories. Since going to disk
# is expensive, cache results by directory.
#SDKEnvironmentUpdates = {}
#
#def set_sdk_by_directory(env, sdk_dir):
# global SDKEnvironmentUpdates
# try:
# env_tuple_list = SDKEnvironmentUpdates[sdk_dir]
# except KeyError:
# env_tuple_list = []
# SDKEnvironmentUpdates[sdk_dir] = env_tuple_list
#
# include_path = os.path.join(sdk_dir, 'include')
# mfc_path = os.path.join(include_path, 'mfc')
# atl_path = os.path.join(include_path, 'atl')
#
# if os.path.exists(mfc_path):
# env_tuple_list.append(('INCLUDE', mfc_path))
# if os.path.exists(atl_path):
# env_tuple_list.append(('INCLUDE', atl_path))
# env_tuple_list.append(('INCLUDE', include_path))
#
# env_tuple_list.append(('LIB', os.path.join(sdk_dir, 'lib')))
# env_tuple_list.append(('LIBPATH', os.path.join(sdk_dir, 'lib')))
# env_tuple_list.append(('PATH', os.path.join(sdk_dir, 'bin')))
#
# for variable, directory in env_tuple_list:
# env.PrependENVPath(variable, directory)
def msvs_exists():
return (len(get_installed_visual_studios()) > 0)
def get_vs_by_version(msvs):
global InstalledVSMap
global SupportedVSMap
debug('vs.py:get_vs_by_version()')
if msvs not in SupportedVSMap:
msg = "Visual Studio version %s is not supported" % repr(msvs)
raise SCons.Errors.UserError(msg)
get_installed_visual_studios()
vs = InstalledVSMap.get(msvs)
debug('InstalledVSMap:%s'%InstalledVSMap)
debug('vs.py:get_vs_by_version: found vs:%s'%vs)
# Some check like this would let us provide a useful error message
# if they try to set a Visual Studio version that's not installed.
# However, we also want to be able to run tests (like the unit
# tests) on systems that don't, or won't ever, have it installed.
# It might be worth resurrecting this, with some configurable
# setting that the tests can use to bypass the check.
#if not vs:
# msg = "Visual Studio version %s is not installed" % repr(msvs)
# raise SCons.Errors.UserError, msg
return vs
def get_default_version(env):
"""Returns the default version string to use for MSVS.
If no version was requested by the user through the MSVS environment
variable, query all the available Visual Studio versions through
get_installed_visual_studios, and take the highest one.
Return
------
version: str
the default version.
"""
if 'MSVS' not in env or not SCons.Util.is_Dict(env['MSVS']):
# get all versions, and remember them for speed later
versions = [vs.version for vs in get_installed_visual_studios()]
env['MSVS'] = {'VERSIONS' : versions}
else:
versions = env['MSVS'].get('VERSIONS', [])
if 'MSVS_VERSION' not in env:
if versions:
env['MSVS_VERSION'] = versions[0] #use highest version by default
else:
debug('get_default_version: WARNING: no installed versions found, '
'using first in SupportedVSList (%s)'%SupportedVSList[0].version)
env['MSVS_VERSION'] = SupportedVSList[0].version
env['MSVS']['VERSION'] = env['MSVS_VERSION']
return env['MSVS_VERSION']
def get_default_arch(env):
"""Return the default arch to use for MSVS
If no architecture was requested by the user through the MSVS_ARCH
environment variable, select x86.
Return
------
arch: str
"""
arch = env.get('MSVS_ARCH', 'x86')
msvs = InstalledVSMap.get(env['MSVS_VERSION'])
if not msvs:
arch = 'x86'
elif arch not in msvs.get_supported_arch():
fmt = "Visual Studio version %s does not support architecture %s"
raise SCons.Errors.UserError(fmt % (env['MSVS_VERSION'], arch))
return arch
def merge_default_version(env):
version = get_default_version(env)
arch = get_default_arch(env)
def msvs_setup_env(env):
msvs = get_vs_by_version(version)
if msvs is None:
return
batfilename = msvs.get_batch_file()
# XXX: I think this is broken. This will silently set a bogus tool instead
# of failing, but there is no other way with the current scons tool
# framework
if batfilename is not None:
vars = ('LIB', 'LIBPATH', 'PATH', 'INCLUDE')
msvs_list = get_installed_visual_studios()
vscommonvarnames = [vs.common_tools_var for vs in msvs_list]
save_ENV = env['ENV']
nenv = normalize_env(env['ENV'],
['COMSPEC'] + vscommonvarnames,
force=True)
try:
output = get_output(batfilename, arch, env=nenv)
finally:
env['ENV'] = save_ENV
vars = parse_output(output, vars)
for k, v in vars.items():
env.PrependENVPath(k, v, delete_existing=1)
def query_versions():
"""Query the system to get available versions of VS. A version is
considered present when its batch file is found."""
msvs_list = get_installed_visual_studios()
versions = [msvs.version for msvs in msvs_list]
return versions
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4:
/dirtrav-1.0.0.tar.gz/dirtrav-1.0.0/docs/async-await.rst

.. _async_await:
Using ``async`` and ``await``
=============================
.. versionadded:: 2.0
Routes, error handlers, before request, after request, and teardown
functions can all be coroutine functions if Flask is installed with the
``async`` extra (``pip install flask[async]``). This allows views to be
defined with ``async def`` and use ``await``.
.. code-block:: python
@app.route("/get-data")
async def get_data():
data = await async_db_query(...)
return jsonify(data)
Pluggable class-based views also support handlers that are implemented as
coroutines. This applies to the :meth:`~flask.views.View.dispatch_request`
method in views that inherit from the :class:`flask.views.View` class, as
well as all the HTTP method handlers in views that inherit from the
:class:`flask.views.MethodView` class.
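For instance, a ``MethodView`` subclass can declare its HTTP method handlers
with ``async def``. This is a minimal sketch; ``async_db_query`` is a
stand-in for a real awaitable call, as in the earlier example:

```python
import asyncio

from flask import Flask, jsonify
from flask.views import MethodView

app = Flask(__name__)

async def async_db_query():
    # placeholder for a real async database call
    await asyncio.sleep(0)
    return {"message": "Hello World"}

class DataView(MethodView):
    async def get(self):
        data = await async_db_query()
        return jsonify(data)

app.add_url_rule("/get-data", view_func=DataView.as_view("get_data"))
```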
.. admonition:: Using ``async`` on Windows on Python 3.8
Python 3.8 has a bug related to asyncio on Windows. If you encounter
something like ``ValueError: set_wakeup_fd only works in main thread``,
please upgrade to Python 3.9.
.. admonition:: Using ``async`` with greenlet
When using gevent or eventlet to serve an application or patch the
runtime, greenlet>=1.0 is required. When using PyPy, PyPy>=7.3.7 is
required.
Performance
-----------
Async functions require an event loop to run. Flask, as a WSGI
application, uses one worker to handle one request/response cycle.
When a request comes in to an async view, Flask will start an event loop
in a thread, run the view function there, then return the result.
Each request still ties up one worker, even for async views. The upside
is that you can run async code within a view, for example to make
multiple concurrent database queries, HTTP requests to an external API,
etc. However, the number of requests your application can handle at one
time will remain the same.
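As a sketch of that concurrency benefit, the view below awaits two IO-bound
calls at once with ``asyncio.gather``. Here ``fetch`` is a hypothetical
stand-in for any awaitable IO operation (database query, HTTP request, etc.):

```python
import asyncio

from flask import Flask, jsonify

app = Flask(__name__)

async def fetch(name):
    # stands in for an IO-bound call (database, external HTTP API, ...)
    await asyncio.sleep(0.01)
    return name

@app.route("/dashboard")
async def dashboard():
    # both "queries" run concurrently within this single request
    users, orders = await asyncio.gather(fetch("users"), fetch("orders"))
    return jsonify({"users": users, "orders": orders})
```

The request still occupies one worker, but the two awaits overlap instead of
running back to back.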
**Async is not inherently faster than sync code.** Async is beneficial
when performing concurrent IO-bound tasks, but will probably not improve
CPU-bound tasks. Traditional Flask views will still be appropriate for
most use cases, but Flask's async support enables writing and using
code that wasn't possible natively before.
Background tasks
----------------
Async functions will run in an event loop until they complete, at
which stage the event loop will stop. This means any additional
spawned tasks that haven't completed when the async function completes
will be cancelled. Therefore you cannot spawn background tasks, for
example via ``asyncio.create_task``.
If you wish to use background tasks it is best to use a task queue to
trigger background work, rather than spawn tasks in a view
function. With that in mind you can spawn asyncio tasks by serving
Flask with an ASGI server and utilising the asgiref WsgiToAsgi adapter
as described in :doc:`deploying/asgi`. This works as the adapter creates
an event loop that runs continually.
When to use Quart instead
-------------------------
Flask's async support is less performant than async-first frameworks due
to the way it is implemented. If you have a mainly async codebase it
would make sense to consider `Quart`_. Quart is a reimplementation of
Flask based on the `ASGI`_ standard instead of WSGI. This allows it to
handle many concurrent requests, long running requests, and websockets
without requiring multiple worker processes or threads.
It has also already been possible to run Flask with Gevent or Eventlet
to get many of the benefits of async request handling. These libraries
patch low-level Python functions to accomplish this, whereas ``async``/
``await`` and ASGI use standard, modern Python capabilities. Deciding
whether you should use Flask, Quart, or something else is ultimately up
to understanding the specific needs of your project.
.. _Quart: https://github.com/pallets/quart
.. _ASGI: https://asgi.readthedocs.io/en/latest/
Extensions
----------
Flask extensions predating Flask's async support do not expect async views.
If they provide decorators to add functionality to views, those will probably
not work with async views because they will not await the function or be
awaitable. Other functions they provide will not be awaitable either and
will probably be blocking if called within an async view.
Extension authors can support async functions by utilising the
:meth:`flask.Flask.ensure_sync` method. For example, if the extension
provides a view function decorator add ``ensure_sync`` before calling
the decorated function,
.. code-block:: python
def extension(func):
@wraps(func)
def wrapper(*args, **kwargs):
... # Extension logic
return current_app.ensure_sync(func)(*args, **kwargs)
return wrapper
Check the changelog of the extension you want to use to see if they've
implemented async support, or make a feature request or PR to them.
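Applied to an async view, a decorator written this way yields a plain sync
callable that Flask can run. The sketch below repeats the ``extension``
decorator from above so it is self-contained:

```python
from functools import wraps

from flask import Flask, current_app

app = Flask(__name__)

def extension(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        # ... extension logic would go here ...
        return current_app.ensure_sync(func)(*args, **kwargs)
    return wrapper

@app.route("/ping")
@extension
async def ping():
    return "pong"
```

The registered view function is the synchronous ``wrapper``; ``ensure_sync``
takes care of awaiting the coroutine when the request is handled.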
Other event loops
-----------------
At the moment Flask only supports :mod:`asyncio`. It's possible to
override :meth:`flask.Flask.ensure_sync` to change how async functions
are wrapped to use a different library.
| PypiClean |
/Agatsuma-0.2.176.default.3499b00918ca.tip.tar.gz/Agatsuma-0.2.176.default.3499b00918ca.tip/agatsuma/adaptations/distribute.py
from agatsuma import Implementations
from agatsuma.interfaces import ISetupSpell, IInternalSpell
def run_setuptools(**kwargs):
from setuptools import setup
from agatsuma.third_party.distribute_setup import use_setuptools
use_setuptools()
setup(**kwargs)
######################################################################
## Entry points
def collectEntryPoints(spells_filter):
spells = Implementations(ISetupSpell)
spells = filter(spells_filter, spells)
sections = {}
for spell in spells:
pointsdict = spell.py_entry_points()
for section in pointsdict:
if not sections.get(section, None):
sections[section] = []
points = pointsdict[section]
sections[section].extend(points)
return sections
def formatEntryPoints(epoints):
out = ""
for section, points in epoints.items():
out += "[%s]\n" % section
for point in points:
out += "%s = %s:%s\n" % (point[0], point[1], point[2])
return out
def entry_pointsInfo(spells_filter):
entry_pointsDict = collectEntryPoints(spells_filter)
return formatEntryPoints(entry_pointsDict)
######################################################################
## Dependencies
def __withoutIInternalSpells(spell):
return not issubclass(type(spell), IInternalSpell)
def depinfo(groupChecker, spells_filter):
spells = Implementations(ISetupSpell)
spells = filter(spells_filter, spells)
depGroups = []
dependencies = []
depGroupsContent = {}
for spell in spells:
depdict = spell.requirements()
for group in depdict:
depGroups.append(group)
if not depGroupsContent.get(group, None):
depGroupsContent[group] = []
deps = depdict[group]
depGroupsContent[group].extend(deps)
if groupChecker(group):
dependencies.extend(deps)
dependencies = list(set(dependencies))
return dependencies, depGroups, depGroupsContent
######################################################################
## Debug printouts
def out(s):
#log.setup.info
print s
def nl():
out("="*60)
def printDeps(dependencies, depGroups, depGroupsContent, depGroupEnabled):
out("The following dependency classes are present:")
out("(Use --disable-all to disable all the dependencies)")
for group in depGroups:
formatString = "[ ] %s: %s "
if depGroupEnabled(group):
formatString = "[*] %s: %s"
out(formatString % (group, str(depGroupsContent[group])))
out(" Use --without-%s to disable" % group)
out(" Use --with-%s to enable" % group)
nl()
out("The following dependencies list will be used:\n%s" % str(dependencies))
out("NOTE: You can use AGATSUMA_CONF environment variable to pass options")
out("NOTE: Dependencies may not work under easy_setup. Use pip!")
######################################################################
## Useful routines
def filter_arguments(args):
args = filter(lambda s: not s.startswith('--with'), args)
args = filter(lambda s: s != "--disable-all", args)
return args
def groups_predicate(args):
components = filter(lambda s: s.startswith('--with'), args)
depsDisabled = "--disable-all" in args
def depGroupEnabled(group):
depEnabled =(not (depsDisabled or ('--without-%s' % group) in components)
or (depsDisabled and ('--with-%s' % group) in components))
return depEnabled
return depGroupEnabled
def get_dependencies(depGroupsFilter, spells_filter = __withoutIInternalSpells):
dependencies, depGroups, depGroupsContent = depinfo(depGroupsFilter,
spells_filter)
printDeps(dependencies, depGroups, depGroupsContent, depGroupsFilter)
return dependencies
def get_entry_points(spells_filter = __withoutIInternalSpells):
entry_points = entry_pointsInfo(spells_filter)
nl()
out("The following entry points are provided: %s" % entry_points)
nl()
return entry_points
/Klampt-0.9.0-cp36-cp36m-win_amd64.whl/klampt/math/autodiff/pytorch.py

import klampt.math.autodiff.ad as ad
import torch
import numpy as np
class TorchModuleFunction(ad.ADFunctionInterface):
"""Converts a PyTorch function to a Klamp't autodiff function class."""
def __init__(self,module):
self.module=module
self._eval_params=[]
torch.set_default_dtype(torch.float64)
def __str__(self):
return str(self.module)
def n_in(self,arg):
return -1
def n_out(self):
return -1
def eval(self,*args):
self._eval_params=[]
for a in args:
if not isinstance(a,np.ndarray):
a=np.array([a])
p=torch.Tensor(a)
p.requires_grad_(True)
self._eval_params.append(p)
try:
self._eval_result=torch.flatten(self.module(*self._eval_params))
#self._eval_result.forward()
except Exception as e:
print('Torch error: %s'%str(e))
return self._eval_result.detach().numpy()
def derivative(self,arg,*args):
#lazily check if forward has been done before
if not self._same_param(*args):
self.eval(*args)
rows=[]
for i in range(self._eval_result.shape[0]):
if self._eval_params[arg].grad is not None:
self._eval_params[arg].grad.zero_()
#this is a major performance penalty, torch does not support jacobian
#we have to do it row by row
self._eval_result[i].backward(retain_graph=True)
rows.append(self._eval_params[arg].grad.detach().numpy().flatten())
return np.vstack(rows)
def jvp(self,arg,darg,*args):
raise NotImplementedError('')
def _same_param(self,*args):
if not hasattr(self,"_eval_params"):
return False
if len(self._eval_params)!=len(args):
return False
for p,a in zip(self._eval_params,args):
pn = p.detach().numpy()
if not isinstance(a,np.ndarray):
a=np.array([a])
if pn.shape != a.shape:
return False
if (pn!=a).any():
return False
return True
class ADModule(torch.autograd.Function):
"""Converts a Klamp't autodiff function call or function instance to a
PyTorch Function. The class must be created with the terminal symbols
corresponding to the PyTorch arguments with which it will be called.
"""
@staticmethod
def forward(ctx,func,terminals,*args):
torch.set_default_dtype(torch.float64)
if len(args)!=len(terminals):
raise ValueError("Function %s expected to have %d arguments, instead got %d"%(str(func),len(terminals),len(args)))
if isinstance(func,ad.ADFunctionCall):
context={}
for t,a in zip(terminals,args):
context[t.name]=a.detach().numpy()
ret=func.eval(**context)
elif isinstance(func,ad.ADFunctionInterface):
context=[]
for t,a in zip(terminals,args):
context.append(a.detach().numpy())
ret=func.eval(*context)
else:
raise ValueError("f must be a ADFunctionCall or ADFunctionInterface")
ctx.saved_state=(func,terminals,context)
return torch.Tensor(ret)
@staticmethod
def backward(ctx,grad):
ret = [None,None]
func,terminals,context = ctx.saved_state
if isinstance(func,ad.ADFunctionCall):
for k in range(len(terminals)):
if isinstance(terminals[k],ad.ADTerminal):
name = terminals[k].name
else:
name = terminals[k]
deriv=torch.Tensor(func.derivative(name,**context))
ret.append(deriv.T@grad)
elif isinstance(func,ad.ADFunctionInterface):
for k in range(len(terminals)):
deriv=torch.Tensor(func.derivative(k,*context))
ret.append(deriv.T@grad)
else:
raise ValueError("f must be a ADFunctionCall or ADFunctionInterface")
return tuple(ret)
@staticmethod
def check_derivatives_torch(func,terminals,h=1e-6,rtol=1e-2,atol=1e-3):
#sample some random parameters of the appropriate length
if isinstance(func,ad.ADFunctionInterface):
params=[]
for i in range(len(terminals)):
try:
N = func.n_in(i)
if N < 0:
N = 10
except NotImplementedError:
N = 10
params.append(torch.randn(N))
else:
N = 10
params = [torch.randn(N) for i in range(len(terminals))]
for p in params:
p.requires_grad_(True)
torch.autograd.gradcheck(ADModule.apply,tuple([func,terminals]+params),eps=h,atol=atol,rtol=rtol,raise_exception=True)
def torch_to_ad(module,args):
"""Converts a PyTorch function applied to args (list of scalars or numpy
arrays) to a Klamp't autodiff function call on those arguments."""
wrapper=TorchModuleFunction(module)
return wrapper(*args)
def ad_to_torch(func,terminals=None):
"""Converts a Klamp't autodiff function call or function instance to a
PyTorch Function. If terminals is provided, this is the list of arguments
that PyTorch will expect. Otherwise, the variables in the expression
will be automatically determined by the forward traversal order."""
if terminals is None:
if isinstance(func,ad.ADFunctionCall):
terminals = func.terminals()
else:
n_args = func.n_args()
terminals = [func.argname(i) for i in range(n_args)]
else:
if isinstance(func,ad.ADFunctionCall):
fterminals = func.terminals()
if len(terminals) != len(fterminals):
raise ValueError("The number of terminals provided is incorrect")
for t in terminals:
if isinstance(t,ad.ADTerminal):
name = t.name
else:
name = t
if name not in fterminals:
raise ValueError("Invalid terminal %s, function call %s only has terminals %s"%(name,str(func),str(terminals)))
else:
try:
if len(terminals) != func.n_args():
raise ValueError("Invalid number of terminals, function %s expects %d"%(str(func),func.n_args()))
except NotImplementedError:
pass
return ADModule(func,terminals)
/Herring-0.1.49.tar.gz/Herring-0.1.49/herring/support/path.py

import os
# noinspection PyDocstring
class Path(object):
def __init__(self, *path_parts):
self.__path = os.path.normpath(os.path.join(*[str(part) for part in path_parts]))
self.name = os.path.basename(self.__path)
self.parent = os.path.dirname(self.__path)
self.stem = os.path.splitext(self.name)[0]
def is_absolute(self):
return os.path.isabs(self.__path)
def is_relative(self):
return not self.is_absolute()
def relative_to(self, parent_path):
return os.path.relpath(self.__path, parent_path)
def is_dir(self):
return os.path.isdir(self.__path)
def __str__(self):
return self.__path
def __repr__(self):
return repr(self.__path)
def __hash__(self):
return self.__path.__hash__()
def _cmpkey(self):
return self.__path
def _compare(self, other, method):
try:
# noinspection PyProtectedMember
return method(self._cmpkey(), Path(str(other))._cmpkey())
except (AttributeError, TypeError):
# _cmpkey not implemented, or return different type,
# so I can't compare with "other".
return NotImplemented
def __lt__(self, other):
return self._compare(other, lambda s, o: s < o)
def __le__(self, other):
return self._compare(other, lambda s, o: s <= o)
def __eq__(self, other):
return self._compare(other, lambda s, o: s == o)
def __ge__(self, other):
return self._compare(other, lambda s, o: s >= o)
def __gt__(self, other):
return self._compare(other, lambda s, o: s > o)
def __ne__(self, other):
return self._compare(other, lambda s, o: s != o)
def __getattr__(self, item):
"""
method not found so delegate to the private string __path attribute
"""
return getattr(self.__path, item)
def __getitem__(self, item):
"""
method delegates item accessing to private string __path attribute
"""
return self.__path[item]
/CsuPMTD-1.0.27.tar.gz/CsuPMTD-1.0.27/PMTD/maskrcnn_benchmark/apex/apex/pyprof/parse/parse.py

import sys
import argparse
from tqdm import tqdm
from .db import DB
from .kernel import Kernel
from .nvvp import NVVP
def parseArgs():
parser = argparse.ArgumentParser(prog=sys.argv[0], description="Parse SQL (nvvp) db.")
parser.add_argument("file",
type=str,
default=None,
help="SQL db (nvvp) file.")
args = parser.parse_args()
return args
def main():
args = parseArgs()
db = DB(args.file)
nvvp = NVVP(db)
kInfo = nvvp.getKernelInfo()
if len(kInfo) == 0:
print("Found 0 kernels. Exiting.", file=sys.stderr)
db.close()
sys.exit(0)
else:
print("Found {} kernels. Getting info for each kernel.".format(len(kInfo)), file=sys.stderr)
nvvp.createMarkerTable()
prevSeqId = -1
prevSubSeqId = -1
prevOp = "na"
Kernel.profStart = nvvp.getProfileStart()
for i in tqdm(range(len(kInfo)), ascii=True):
info = kInfo[i]
k = Kernel()
#Set kernel info
k.setKernelInfo(info)
#Get, set kernel name
name = nvvp.getString(k.kNameId)
k.setKernelName(name)
#Get runtime info
info = nvvp.getCPUInfo(k.corrId)
k.setRunTimeInfo(info)
#Get and set marker and seqid info
info = nvvp.getMarkerInfo(k.objId, k.rStartTime, k.rEndTime)
k.setMarkerInfo(info)
#If the seqId contains both 0 and non zero integers, remove 0.
if any(seq != 0 for seq in k.seqId) and (0 in k.seqId):
k.seqId.remove(0)
#Set direction (it uses seq id)
k.setDirection()
#Set op
k.setOp()
#The following code is based on heuristics.
#TODO: Refactor.
#Assign subSeqId, adjust seqId and altSeqId
#seqId can be 0.
#A kernel can have multiple seqIds both in fprop and bprop.
#In bprop, seqIds might not decrease monotonically. I have observed a few blips.
if len(k.seqId):
assert (k.dir in ["fprop", "bprop"])
if (k.dir == "fprop"):
#Check if there is a sequence id larger than the previous
inc = (k.seqId[-1] > prevSeqId)
if inc:
currSeqId = [x for x in k.seqId if x > prevSeqId][0]
else:
currSeqId = prevSeqId
else:
currSeqId = k.seqId[0]
#if ((currSeqId == prevSeqId) and (k.op == prevOp)):
if ((currSeqId == prevSeqId) and (k.op == prevOp)) or ((k.op[0] == "forward") and (k.op == prevOp) and (k.mod[0] in ["LSTMCell", "GRUCell", "RNNCell"])):
#The second condition is to trap cases when pytorch does not use cudnn for a LSTMCell.
k.subSeqId = prevSubSeqId + 1
prevSeqId = currSeqId
prevSubSeqId = k.subSeqId
prevOp = k.op
#Keep currSeqId in k.seqId, move everything else to k.altSeqId
            #Iterate over copies: removing items from a list while iterating it skips elements
            for s in list(k.seqId):
                if s != currSeqId:
                    k.seqId.remove(s)
                    k.altSeqId.append(s)
            for s in list(k.altSeqId):
                if s == currSeqId:
                    k.altSeqId.remove(s)
k.altSeqId = list(set(k.altSeqId))
if (len(k.altSeqId)):
(k.altSeqId).sort()
k.print()
db.close()
if __name__ == '__main__':
    main()
/BaselineRemoval-0.1.5.tar.gz/BaselineRemoval-0.1.5/README.rst
What is it?
===========
Companion python library for the machine learning book [Feature Engineering & Selection for Explainable Models: A Second Course for Data Scientists](https://statguyuser.github.io/feature-engg-selection-for-explainable-models.github.io/index.html). It is used for baseline correction, and it provides the following three methods for baseline removal from spectra.
- **Modpoly** Modified multi-polynomial fit [1]. It has the following three parameters.
1) `degree`, it refers to polynomial degree, and default value is 2.
2) `repitition`, it refers to how many iterations to run, and default value is 100.
3) `gradient`, it refers to gradient for polynomial loss, default is 0.001. It measures incremental gain over each iteration. If gain in any iteration is less than this, further improvement will stop.
- **IModPoly** Improved ModPoly [2], which addresses the noise issue in ModPoly. It has the following three parameters.
1) `degree`, it refers to polynomial degree, and default value is 2.
2) `repitition`, it refers to how many iterations to run, and default value is 100.
3) `gradient`, it refers to gradient for polynomial loss, and default is 0.001. It measures incremental gain over each iteration. If gain in any iteration is less than this, further improvement will stop.
- **ZhangFit** Zhang fit [3], which doesn't require any user intervention or prior information, such as detected peaks. It has the following three parameters.
1) `lambda_`, it can be adjusted by user. The larger lambda is, the smoother the resulting background. Default value is 100.
2) `porder` refers to adaptive iteratively reweighted penalized least squares for baseline fitting. Default value is 1.
3) `repitition` is how many iterations to run, and default value is 15.
We can use the library to process spectral data with any of the ModPoly, IModPoly, or ZhangFit algorithms for baseline subtraction. Each function returns the baseline-subtracted spectrum.
How to use it?
=================
```python
from BaselineRemoval import BaselineRemoval
input_array=[10,20,1.5,5,2,9,99,25,47]
polynomial_degree=2 #only needed for Modpoly and IModPoly algorithm
baseObj=BaselineRemoval(input_array)
Modpoly_output=baseObj.ModPoly(polynomial_degree)
Imodpoly_output=baseObj.IModPoly(polynomial_degree)
Zhangfit_output=baseObj.ZhangFit()
print('Original input:',input_array)
print('Modpoly base corrected values:',Modpoly_output)
print('IModPoly base corrected values:',Imodpoly_output)
print('ZhangFit base corrected values:',Zhangfit_output)
Original input: [10, 20, 1.5, 5, 2, 9, 99, 25, 47]
Modpoly base corrected values: [-1.98455800e-04 1.61793368e+01 1.08455179e+00 5.21544654e+00
7.20210508e-02 2.15427531e+00 8.44622093e+01 -4.17691125e-03
8.75511661e+00]
IModPoly base corrected values: [-0.84912125 15.13786196 -0.11351367 3.89675187 -1.33134142 0.70220645
82.99739548 -1.44577432 7.37269705]
ZhangFit base corrected values: [ 8.49924691e+00 1.84994576e+01 -3.31739230e-04 3.49854060e+00
4.97412948e-01 7.49628529e+00 9.74951576e+01 2.34940300e+01
  4.54929023e+01]
```
Where to get it?
================
`pip install BaselineRemoval`
How to cite?
============
Md Azimul Haque (2022). Feature Engineering & Selection for Explainable Models: A Second Course for Data Scientists. Lulu Press, Inc.
Dependencies
============
- [numpy](https://www.numpy.org/)
- [scikit-learn](https://scikit-learn.org/)
- [scipy](https://www.scipy.org/)
References
============
1. [Automated Method for Subtraction of Fluorescence from Biological Raman Spectra](https://www.researchgate.net/publication/8974238_Automated_Method_for_Subtraction_of_Fluorescence_from_Biological_Raman_Spectra) by Lieber & Mahadevan-Jansen (2003)
2. [Automated Autofluorescence Background Subtraction Algorithm for Biomedical Raman Spectroscopy](https://www.researchgate.net/publication/5818031_Automated_Autofluorescence_Background_Subtraction_Algorithm_for_Biomedical_Raman_Spectroscopy) by Zhao, Jianhua, Lui, Harvey, McLean, David I., Zeng, Haishan (2007)
3. [Baseline correction using adaptive iteratively reweighted penalized least squares](https://pubs.rsc.org/is/content/articlelanding/2010/an/b922045c#!divAbstract) by Zhi-Min Zhang, Shan Chena and Yi-Zeng Liang (2010)
/Acolyte-0.0.1.tar.gz/Acolyte-0.0.1/acolyte/util/validate.py
import re
import locale
from functools import wraps
from typing import Any
from types import FunctionType
from acolyte.exception import EasemobFlowException
from acolyte.core.message import (
messages,
default_validate_messages
)
from acolyte.core.service import Result
from acolyte.util.lang import get_from_nested_dict
class Field:
    """Describes a field's type, conversion rules, validation logic, etc.
    """
def __init__(self, name: str, type_: type, required: bool=True,
default: Any=None,
value_of: type or FunctionType or None=None,
check_logic: FunctionType or None=None):
"""
        :param name: field name
        :param type_: expected type
        :param required: whether this field is required
        :param default: default value used when the field is not required
        :param value_of: callback used to convert the value on a type mismatch
        :param check_logic: custom validation logic
"""
self._name = name
self._type = type_
self._required = required
self._default = default
self._value_of = value_of
self._check_logic = check_logic
@property
def name(self):
return self._name
@property
def type(self):
return self._type
@property
def required(self):
return self._required
@property
def default(self):
return self._default
def __call__(self, value: Any) -> Any:
        """Converts and validates the target value when the field is called.
        """
value = self._base_check(value)
value = self._customize_check(value)
        # Run the custom check logic
if self._check_logic is not None:
_result = self._check_logic(self._name, value)
if _result is not None:
value = _result
        # Return the final converted value
return value
def _base_check(self, value: Any) -> Any:
        # Required but empty: raise an exception
if self._required and (value is None or value == ""):
raise InvalidFieldException(self._name, value, "empty", "")
        # Optional and empty: return the default value
if not self._required and (value is None or value == ""):
return self._default
        # Type does not match the expectation; try to convert it
if not isinstance(value, self._type):
if self._value_of is not None:
try:
value = self._value_of(value)
except Exception:
raise InvalidFieldException(
self._name, value, "invalid_type", self._type.__name__)
else:
raise InvalidFieldException(
self._name, value, "invalid_type", self._type.__name__)
return value
def _customize_check(self, value):
        """Custom checks; subclasses may override this.
        """
return value
class IntField(Field):
    """Describes the validation rules for an integer value.
    """
def __init__(self, name: str, required: bool=True,
default: int=0, value_of: type or FunctionType or None=int,
min_: int or None=None, max_: int or None=None,
check_logic: FunctionType or None=None):
"""
        :param min_: minimum allowed value
        :param max_: maximum allowed value
"""
super().__init__(
name=name,
type_=int,
required=required,
default=default,
value_of=value_of,
check_logic=check_logic
)
self._min, self._max = min_, max_
@property
def min(self):
return self._min
@property
def max(self):
return self._max
def _customize_check(self, value):
        # Smaller than the minimum
if self._min is not None and value < self._min:
raise InvalidFieldException(
self._name, value, "less_than_min", self._min)
        # Larger than the maximum
if self._max is not None and value > self._max:
raise InvalidFieldException(
self._name, value, "more_than_max", self._max)
return value
class StrField(Field):
    """Describes the validation rules for a string value.
    """
def __init__(self, name: str, required: bool=True,
default: int=0, value_of: type or FunctionType or None=str,
min_len: int or None=None, max_len: int or None=None,
regex: str or None=None,
check_logic: FunctionType or None=None):
"""
        :param min_len: minimum allowed length
        :param max_len: maximum allowed length
        :param regex: regular expression the value must match
"""
super().__init__(
name=name,
type_=str,
required=required,
default=default,
value_of=value_of,
check_logic=check_logic
)
self._min_len, self._max_len = min_len, max_len
self._regex = regex
@property
def min_len(self):
return self._min_len
@property
def max_len(self):
return self._max_len
@property
def regex(self):
return self._regex
def _customize_check(self, value):
        # Check the length
val_length = len(value)
if self._min_len is not None and val_length < self._min_len:
raise InvalidFieldException(
self._name, value, "less_than_min_length", self._min_len)
if self._max_len is not None and val_length > self._max_len:
raise InvalidFieldException(
self._name, value, "more_than_max_length", self._max_len)
if self._regex is not None and not re.search(self._regex, value):
raise InvalidFieldException(
self._name, value, "invalid_format", self._regex)
return value
class InvalidFieldException(EasemobFlowException):
    """Raised when a field does not meet the expectations of its Field rule.
    """
def __init__(self, field_name, value, reason, expect):
"""
        :param field_name: field name
        :param value: field value
        :param reason: reason the error was raised
        :param expect: expected type/value/rule
"""
self._field_name = field_name
self._value = value
self._reason = reason
self._expect = expect
super().__init__((
"Invalid field {field}={value}, "
"reason={reason}, expect={expect}"
).format(field=field_name, value=value, reason=reason, expect=expect))
@property
def field_name(self):
return self._field_name
@property
def value(self):
return self._value
@property
def reason(self):
return self._reason
@property
def expect(self):
return self._expect
class BadReq(Exception):
    """Raised by service methods decorated with ``check`` to return a bad-request Result directly.
    """
def __init__(self, reason, **args):
self._reason = reason
self._args = args
@property
def reason(self):
return self._reason
@property
def args(self):
return self._args
def check(*fields, messages=messages,
default_validate_messages=default_validate_messages):
    """Decorator applied to service object methods to validate their arguments.
    :param fields: field rule declarations
    :param messages: message collection
"""
fields_dict = {f.name: f for f in fields}
def _check(f):
@wraps(f)
def _func(self, *args, **kwds):
nonlocal fields_dict
try:
                # Assemble and validate the arguments
new_args = [field(arg_val)
for field, arg_val in zip(fields, args)]
new_kwds = {arg_name: fields_dict[arg_name](arg_val)
for arg_name, arg_val in kwds.items()}
except InvalidFieldException as e:
full_reason = "{field_name}_{reason}".format(
field_name=e.field_name, reason=e.reason)
loc, _ = locale.getlocale(locale.LC_CTYPE)
service_id = self.__class__.__name__
mtd_name = f.__name__
            # Look up the message in the user-provided messages collection first
msg = get_from_nested_dict(
messages, loc, service_id, mtd_name, full_reason)
if msg is None:
                # Fall back to the default messages when the user collection has no entry
msg = default_validate_messages[loc][e.reason]
                if e.expect is not None and e.expect != "":
msg = msg.format(
field_name=e.field_name, expect=e.expect)
else:
msg = msg.format(field_name=e.field_name)
else:
                if e.expect is not None and e.expect != "":
msg = msg.format(expect=e.expect)
return Result.bad_request(full_reason, msg=msg)
else:
try:
return f(self, *new_args, **new_kwds)
except BadReq as e:
loc, _ = locale.getlocale(locale.LC_CTYPE)
msg = messages[loc][
self.__class__.__name__][f.__name__][e.reason]
if e.args:
msg = msg.format(**e.args)
return Result.bad_request(e.reason, msg=msg)
return _func
return _check
def declare_args(*fields):
    """Decorator used to declare validation rules on a method.
    The decorated method object gains a ``field_rules`` attribute.
    :param fields: field rule collection for the decorated function
"""
def _declare_args(f):
f.field_rules = fields
@wraps(f)
def _func(*args, **kwds):
return f(*args, **kwds)
return _func
    return _declare_args
/FlaskCms-0.0.4.tar.gz/FlaskCms-0.0.4/flask_cms/static/js/ckeditor/plugins/specialchar/dialogs/lang/zh-cn.js
/*
Copyright (c) 2003-2013, CKSource - Frederico Knabben. All rights reserved.
For licensing, see LICENSE.md or http://ckeditor.com/license
*/
CKEDITOR.plugins.setLang("specialchar","zh-cn",{euro:"欧元符号",lsquo:"左单引号",rsquo:"右单引号",ldquo:"左双引号",rdquo:"右双引号",ndash:"短划线",mdash:"长划线",iexcl:"竖翻叹号",cent:"分币符号",pound:"英镑符号",curren:"货币符号",yen:"日元符号",brvbar:"间断条",sect:"节标记",uml:"分音符",copy:"版权所有标记",ordf:"阴性顺序指示符",laquo:"左指双尖引号",not:"非标记",reg:"注册标记",macr:"长音符",deg:"度标记",sup2:"上标二",sup3:"上标三",acute:"锐音符",micro:"微符",para:"段落标记",middot:"中间点",cedil:"下加符",sup1:"上标一",ordm:"阳性顺序指示符",raquo:"右指双尖引号",frac14:"普通分数四分之一",frac12:"普通分数二分之一",frac34:"普通分数四分之三",iquest:"竖翻问号",
Agrave:"带抑音符的拉丁文大写字母 A",Aacute:"带锐音符的拉丁文大写字母 A",Acirc:"带扬抑符的拉丁文大写字母 A",Atilde:"带颚化符的拉丁文大写字母 A",Auml:"带分音符的拉丁文大写字母 A",Aring:"带上圆圈的拉丁文大写字母 A",AElig:"拉丁文大写字母 Ae",Ccedil:"带下加符的拉丁文大写字母 C",Egrave:"带抑音符的拉丁文大写字母 E",Eacute:"带锐音符的拉丁文大写字母 E",Ecirc:"带扬抑符的拉丁文大写字母 E",Euml:"带分音符的拉丁文大写字母 E",Igrave:"带抑音符的拉丁文大写字母 I",Iacute:"带锐音符的拉丁文大写字母 I",Icirc:"带扬抑符的拉丁文大写字母 I",Iuml:"带分音符的拉丁文大写字母 I",ETH:"拉丁文大写字母 Eth",Ntilde:"带颚化符的拉丁文大写字母 N",Ograve:"带抑音符的拉丁文大写字母 O",Oacute:"带锐音符的拉丁文大写字母 O",Ocirc:"带扬抑符的拉丁文大写字母 O",Otilde:"带颚化符的拉丁文大写字母 O",
Ouml:"带分音符的拉丁文大写字母 O",times:"乘号",Oslash:"带粗线的拉丁文大写字母 O",Ugrave:"带抑音符的拉丁文大写字母 U",Uacute:"带锐音符的拉丁文大写字母 U",Ucirc:"带扬抑符的拉丁文大写字母 U",Uuml:"带分音符的拉丁文大写字母 U",Yacute:"带抑音符的拉丁文大写字母 Y",THORN:"拉丁文大写字母 Thorn",szlig:"拉丁文小写字母清音 S",agrave:"带抑音符的拉丁文小写字母 A",aacute:"带锐音符的拉丁文小写字母 A",acirc:"带扬抑符的拉丁文小写字母 A",atilde:"带颚化符的拉丁文小写字母 A",auml:"带分音符的拉丁文小写字母 A",aring:"带上圆圈的拉丁文小写字母 A",aelig:"拉丁文小写字母 Ae",ccedil:"带下加符的拉丁文小写字母 C",egrave:"带抑音符的拉丁文小写字母 E",eacute:"带锐音符的拉丁文小写字母 E",ecirc:"带扬抑符的拉丁文小写字母 E",euml:"带分音符的拉丁文小写字母 E",igrave:"带抑音符的拉丁文小写字母 I",
iacute:"带锐音符的拉丁文小写字母 I",icirc:"带扬抑符的拉丁文小写字母 I",iuml:"带分音符的拉丁文小写字母 I",eth:"拉丁文小写字母 Eth",ntilde:"带颚化符的拉丁文小写字母 N",ograve:"带抑音符的拉丁文小写字母 O",oacute:"带锐音符的拉丁文小写字母 O",ocirc:"带扬抑符的拉丁文小写字母 O",otilde:"带颚化符的拉丁文小写字母 O",ouml:"带分音符的拉丁文小写字母 O",divide:"除号",oslash:"带粗线的拉丁文小写字母 O",ugrave:"带抑音符的拉丁文小写字母 U",uacute:"带锐音符的拉丁文小写字母 U",ucirc:"带扬抑符的拉丁文小写字母 U",uuml:"带分音符的拉丁文小写字母 U",yacute:"带抑音符的拉丁文小写字母 Y",thorn:"拉丁文小写字母 Thorn",yuml:"带分音符的拉丁文小写字母 Y",OElig:"拉丁文大写连字 Oe",oelig:"拉丁文小写连字 Oe",372:"带扬抑符的拉丁文大写字母 W",374:"带扬抑符的拉丁文大写字母 Y",
373:"带扬抑符的拉丁文小写字母 W",375:"带扬抑符的拉丁文小写字母 Y",sbquo:"单下 9 形引号",8219:"单高横翻 9 形引号",bdquo:"双下 9 形引号",hellip:"水平省略号",trade:"商标标志",9658:"实心右指指针",bull:"加重号",rarr:"向右箭头",rArr:"向右双线箭头",hArr:"左右双线箭头",diams:"实心方块纸牌",asymp:"约等于"});
/Heterogeneous_Highway_Env-0.0.3-py3-none-any.whl/HighwayEnv_iPLAN/utils.py
import copy
import importlib
import itertools
from typing import Tuple, Dict, Callable, List, Optional, Union, Sequence
import numpy as np
# Useful types
Vector = Union[np.ndarray, Sequence[float]]
Matrix = Union[np.ndarray, Sequence[Sequence[float]]]
Interval = Union[np.ndarray,
Tuple[Vector, Vector],
Tuple[Matrix, Matrix],
Tuple[float, float],
List[Vector],
List[Matrix],
List[float]]
def do_every(duration: float, timer: float) -> bool:
return duration < timer
def lmap(v: float, x: Interval, y: Interval) -> float:
"""Linear map of value v with range x to desired range y."""
return y[0] + (v - x[0]) * (y[1] - y[0]) / (x[1] - x[0])
def get_class_path(cls: Callable) -> str:
return cls.__module__ + "." + cls.__qualname__
def class_from_path(path: str) -> Callable:
module_name, class_name = path.rsplit(".", 1)
class_object = getattr(importlib.import_module(module_name), class_name)
return class_object
def constrain(x: float, a: float, b: float) -> np.ndarray:
return np.clip(x, a, b)
def not_zero(x: float, eps: float = 1e-2) -> float:
if abs(x) > eps:
return x
elif x >= 0:
return eps
else:
return -eps
def wrap_to_pi(x: float) -> float:
return ((x + np.pi) % (2 * np.pi)) - np.pi
def point_in_rectangle(point: Vector, rect_min: Vector, rect_max: Vector) -> bool:
"""
Check if a point is inside a rectangle
:param point: a point (x, y)
:param rect_min: x_min, y_min
:param rect_max: x_max, y_max
"""
return rect_min[0] <= point[0] <= rect_max[0] and rect_min[1] <= point[1] <= rect_max[1]
def point_in_rotated_rectangle(point: np.ndarray, center: np.ndarray, length: float, width: float, angle: float) \
-> bool:
"""
Check if a point is inside a rotated rectangle
:param point: a point
:param center: rectangle center
:param length: rectangle length
:param width: rectangle width
:param angle: rectangle angle [rad]
:return: is the point inside the rectangle
"""
c, s = np.cos(angle), np.sin(angle)
r = np.array([[c, -s], [s, c]])
ru = r.dot(point - center)
return point_in_rectangle(ru, (-length/2, -width/2), (length/2, width/2))
def point_in_ellipse(point: Vector, center: Vector, angle: float, length: float, width: float) -> bool:
"""
Check if a point is inside an ellipse
:param point: a point
:param center: ellipse center
:param angle: ellipse main axis angle
:param length: ellipse big axis
:param width: ellipse small axis
:return: is the point inside the ellipse
"""
c, s = np.cos(angle), np.sin(angle)
    r = np.array([[c, -s], [s, c]])
ru = r.dot(point - center)
return np.sum(np.square(ru / np.array([length, width]))) < 1
def rotated_rectangles_intersect(rect1: Tuple[Vector, float, float, float],
rect2: Tuple[Vector, float, float, float]) -> bool:
"""
Do two rotated rectangles intersect?
:param rect1: (center, length, width, angle)
:param rect2: (center, length, width, angle)
:return: do they?
"""
return has_corner_inside(rect1, rect2) or has_corner_inside(rect2, rect1)
def rect_corners(center: np.ndarray, length: float, width: float, angle: float,
include_midpoints: bool = False, include_center: bool = False) -> List[np.ndarray]:
"""
Returns the positions of the corners of a rectangle.
:param center: the rectangle center
:param length: the rectangle length
:param width: the rectangle width
:param angle: the rectangle angle
:param include_midpoints: include middle of edges
:param include_center: include the center of the rect
:return: a list of positions
"""
center = np.array(center)
half_l = np.array([length/2, 0])
half_w = np.array([0, width/2])
corners = [- half_l - half_w,
- half_l + half_w,
+ half_l + half_w,
+ half_l - half_w]
if include_center:
corners += [[0, 0]]
if include_midpoints:
corners += [- half_l, half_l, -half_w, half_w]
c, s = np.cos(angle), np.sin(angle)
rotation = np.array([[c, -s], [s, c]])
return (rotation @ np.array(corners).T).T + np.tile(center, (len(corners), 1))
def has_corner_inside(rect1: Tuple[Vector, float, float, float],
rect2: Tuple[Vector, float, float, float]) -> bool:
"""
Check if rect1 has a corner inside rect2
:param rect1: (center, length, width, angle)
:param rect2: (center, length, width, angle)
"""
return any([point_in_rotated_rectangle(p1, *rect2)
for p1 in rect_corners(*rect1, include_midpoints=True, include_center=True)])
def project_polygon(polygon: Vector, axis: Vector) -> Tuple[float, float]:
min_p, max_p = None, None
for p in polygon:
projected = p.dot(axis)
if min_p is None or projected < min_p:
min_p = projected
if max_p is None or projected > max_p:
max_p = projected
return min_p, max_p
def interval_distance(min_a: float, max_a: float, min_b: float, max_b: float):
"""
Calculate the distance between [minA, maxA] and [minB, maxB]
The distance will be negative if the intervals overlap
"""
return min_b - max_a if min_a < min_b else min_a - max_b
def are_polygons_intersecting(a: Vector, b: Vector,
displacement_a: Vector, displacement_b: Vector) \
-> Tuple[bool, bool, Optional[np.ndarray]]:
"""
Checks if the two polygons are intersecting.
See https://www.codeproject.com/Articles/15573/2D-Polygon-Collision-Detection
:param a: polygon A, as a list of [x, y] points
:param b: polygon B, as a list of [x, y] points
:param displacement_a: velocity of the polygon A
:param displacement_b: velocity of the polygon B
:return: are intersecting, will intersect, translation vector
"""
intersecting = will_intersect = True
min_distance = np.inf
translation, translation_axis = None, None
for polygon in [a, b]:
for p1, p2 in zip(polygon, polygon[1:]):
normal = np.array([-p2[1] + p1[1], p2[0] - p1[0]])
normal /= np.linalg.norm(normal)
min_a, max_a = project_polygon(a, normal)
min_b, max_b = project_polygon(b, normal)
if interval_distance(min_a, max_a, min_b, max_b) > 0:
intersecting = False
velocity_projection = normal.dot(displacement_a - displacement_b)
if velocity_projection < 0:
min_a += velocity_projection
else:
max_a += velocity_projection
distance = interval_distance(min_a, max_a, min_b, max_b)
if distance > 0:
will_intersect = False
if not intersecting and not will_intersect:
break
if abs(distance) < min_distance:
min_distance = abs(distance)
d = a[:-1].mean(axis=0) - b[:-1].mean(axis=0) # center difference
translation_axis = normal if d.dot(normal) > 0 else -normal
if will_intersect:
translation = min_distance * translation_axis
return intersecting, will_intersect, translation
def confidence_ellipsoid(data: Dict[str, np.ndarray], lambda_: float = 1e-5, delta: float = 0.1, sigma: float = 0.1,
param_bound: float = 1.0) -> Tuple[np.ndarray, np.ndarray, float]:
"""
Compute a confidence ellipsoid over the parameter theta, where y = theta^T phi
:param data: a dictionary {"features": [phi_0,...,phi_N], "outputs": [y_0,...,y_N]}
:param lambda_: l2 regularization parameter
:param delta: confidence level
:param sigma: noise covariance
:param param_bound: an upper-bound on the parameter norm
:return: estimated theta, Gramian matrix G_N_lambda, radius beta_N
"""
phi = np.array(data["features"])
y = np.array(data["outputs"])
g_n_lambda = 1/sigma * np.transpose(phi) @ phi + lambda_ * np.identity(phi.shape[-1])
theta_n_lambda = np.linalg.inv(g_n_lambda) @ np.transpose(phi) @ y / sigma
d = theta_n_lambda.shape[0]
beta_n = np.sqrt(2*np.log(np.sqrt(np.linalg.det(g_n_lambda) / lambda_ ** d) / delta)) + \
np.sqrt(lambda_*d) * param_bound
return theta_n_lambda, g_n_lambda, beta_n
def confidence_polytope(data: dict, parameter_box: np.ndarray) -> Tuple[np.ndarray, np.ndarray, np.ndarray, float]:
"""
Compute a confidence polytope over the parameter theta, where y = theta^T phi
:param data: a dictionary {"features": [phi_0,...,phi_N], "outputs": [y_0,...,y_N]}
:param parameter_box: a box [theta_min, theta_max] containing the parameter theta
:return: estimated theta, polytope vertices, Gramian matrix G_N_lambda, radius beta_N
"""
param_bound = np.amax(np.abs(parameter_box))
theta_n_lambda, g_n_lambda, beta_n = confidence_ellipsoid(data, param_bound=param_bound)
values, pp = np.linalg.eig(g_n_lambda)
radius_matrix = np.sqrt(beta_n) * np.linalg.inv(pp) @ np.diag(np.sqrt(1 / values))
h = np.array(list(itertools.product([-1, 1], repeat=theta_n_lambda.shape[0])))
d_theta = np.array([radius_matrix @ h_k for h_k in h])
# Clip the parameter and confidence region within the prior parameter box.
theta_n_lambda = np.clip(theta_n_lambda, parameter_box[0], parameter_box[1])
for k, _ in enumerate(d_theta):
d_theta[k] = np.clip(d_theta[k], parameter_box[0] - theta_n_lambda, parameter_box[1] - theta_n_lambda)
return theta_n_lambda, d_theta, g_n_lambda, beta_n
def is_valid_observation(y: np.ndarray, phi: np.ndarray, theta: np.ndarray, gramian: np.ndarray,
beta: float, sigma: float = 0.1) -> bool:
"""
Check if a new observation (phi, y) is valid according to a confidence ellipsoid on theta.
:param y: observation
:param phi: feature
:param theta: estimated parameter
:param gramian: Gramian matrix
:param beta: ellipsoid radius
:param sigma: noise covariance
:return: validity of the observation
"""
y_hat = np.tensordot(theta, phi, axes=[0, 0])
error = np.linalg.norm(y - y_hat)
eig_phi, _ = np.linalg.eig(phi.transpose() @ phi)
eig_g, _ = np.linalg.eig(gramian)
error_bound = np.sqrt(np.amax(eig_phi) / np.amin(eig_g)) * beta + sigma
return error < error_bound
def is_consistent_dataset(data: dict, parameter_box: np.ndarray = None) -> bool:
"""
Check whether a dataset {phi_n, y_n} is consistent
The last observation should be in the confidence ellipsoid obtained by the N-1 first observations.
:param data: a dictionary {"features": [phi_0,...,phi_N], "outputs": [y_0,...,y_N]}
:param parameter_box: a box [theta_min, theta_max] containing the parameter theta
:return: consistency of the dataset
"""
train_set = copy.deepcopy(data)
y, phi = train_set["outputs"].pop(-1), train_set["features"].pop(-1)
y, phi = np.array(y)[..., np.newaxis], np.array(phi)[..., np.newaxis]
if train_set["outputs"] and train_set["features"]:
theta, _, gramian, beta = confidence_polytope(train_set, parameter_box=parameter_box)
return is_valid_observation(y, phi, theta, gramian, beta)
else:
return True
def near_split(x, num_bins=None, size_bins=None):
"""
Split a number into several bins with near-even distribution.
You can either set the number of bins, or their size.
The sum of bins always equals the total.
:param x: number to split
:param num_bins: number of bins
:param size_bins: size of bins
:return: list of bin sizes
"""
if num_bins:
quotient, remainder = divmod(x, num_bins)
return [quotient + 1] * remainder + [quotient] * (num_bins - remainder)
elif size_bins:
return near_split(x, num_bins=int(np.ceil(x / size_bins)))
def distance_to_circle(center, radius, direction):
scaling = radius * np.ones((2, 1))
a = np.linalg.norm(direction / scaling) ** 2
b = -2 * np.dot(np.transpose(center), direction / np.square(scaling))
c = np.linalg.norm(center / scaling) ** 2 - 1
root_inf, root_sup = solve_trinom(a, b, c)
if root_inf and root_inf > 0:
distance = root_inf
elif root_sup and root_sup > 0:
distance = 0
else:
        distance = np.inf
return distance
def distance_to_rect(line: Tuple[np.ndarray, np.ndarray], rect: List[np.ndarray]):
"""
Compute the intersection between a line segment and a rectangle.
See https://math.stackexchange.com/a/2788041.
:param line: a line segment [R, Q]
:param rect: a rectangle [A, B, C, D]
:return: the distance between R and the intersection of the segment RQ with the rectangle ABCD
"""
r, q = line
a, b, c, d = rect
u = b - a
v = d - a
u, v = u/np.linalg.norm(u), v/np.linalg.norm(v)
rqu = (q - r) @ u
rqv = (q - r) @ v
interval_1 = [(a - r) @ u / rqu, (b - r) @ u / rqu]
interval_2 = [(a - r) @ v / rqv, (d - r) @ v / rqv]
interval_1 = interval_1 if rqu >= 0 else list(reversed(interval_1))
interval_2 = interval_2 if rqv >= 0 else list(reversed(interval_2))
if interval_distance(*interval_1, *interval_2) <= 0 \
and interval_distance(0, 1, *interval_1) <= 0 \
and interval_distance(0, 1, *interval_2) <= 0:
return max(interval_1[0], interval_2[0]) * np.linalg.norm(q - r)
else:
return np.inf
def solve_trinom(a, b, c):
delta = b ** 2 - 4 * a * c
if delta >= 0:
return (-b - np.sqrt(delta)) / (2 * a), (-b + np.sqrt(delta)) / (2 * a)
else:
        return None, None
/FuXi-1.4.production.tar.gz/FuXi-1.4.production/lib/Horn/__init__.py
import unittest, os, time, sys
from FuXi.Syntax.InfixOWL import *
from rdflib import plugin,RDF,RDFS,URIRef,URIRef,Literal,Variable
from rdflib.util import first
from rdflib.store import Store
from cStringIO import StringIO
from rdflib.Graph import Graph,ReadOnlyGraphAggregate,ConjunctiveGraph
from rdflib.syntax.NamespaceManager import NamespaceManager
DATALOG_SAFETY_NONE = 0
DATALOG_SAFETY_STRICT = 1
DATALOG_SAFETY_LOOSE = 2
safetyNameMap = {
'none' : DATALOG_SAFETY_NONE,
'strict' : DATALOG_SAFETY_STRICT,
'loose' : DATALOG_SAFETY_LOOSE
}
def SubSumptionExpansion(owlClass):
owlClass = CastClass(owlClass)
if isinstance(owlClass,BooleanClass) and owlClass._operator == OWL_NS.unionOf:
for member in owlClass:
expanded = False
for innerMember in SubSumptionExpansion(Class(member)):
expanded = True
yield innerMember
if not expanded:
yield member
else:
for member in owlClass.subSumpteeIds():
expanded = False
for innerMember in SubSumptionExpansion(Class(member)):
expanded = True
yield innerMember
if not expanded:
yield member
def ComplementExpansion(owlClass,debug=False):
"""
For binary conjunctions of a positive conjunction concept and a negative atomic concept
"""
owlClass=CastClass(owlClass.identifier,owlClass.graph)
if isinstance(owlClass,BooleanClass) and \
len(owlClass) == 2 and owlClass._operator == OWL_NS.intersectionOf:
oldRepr = owlClass.__repr__()
#A boolean-constructed class
negativeClasses = set()
otherClasses = set()
for member in owlClass:
member = Class(member)
if member.complementOf:
#A negative class, expand it and add to bucket of classes to 'remove'
for expandedClass in SubSumptionExpansion(member.complementOf):
negativeClasses.add(expandedClass)
else:
#A positive class, expand it and add to bucket of base classes
expanded = False
for expandedClass in SubSumptionExpansion(member):
expanded = True
otherClasses.add(expandedClass)
if not expanded:
otherClasses.add(member.identifier)
if negativeClasses:
#Delete the old list of operands for the boolean class
oldList = owlClass._rdfList
oldList.clear()
#Recreate the list of operands, exluding the expanded negative classes
for allowedClasses in otherClasses.difference(negativeClasses) :
oldList.append(classOrIdentifier(allowedClasses))
owlClass.changeOperator(OWL_NS.unionOf)
if debug:
print "Incoming boolean class: ", oldRepr
print "Expanded boolean class: ", owlClass.__repr__()
return owlClass
else:
if debug:
print "There were no negative classes!"
class ComplementExpansionTestSuite(unittest.TestCase):
def setUp(self):
self.testGraph = Graph()
Individual.factoryGraph = self.testGraph
def testExpand(self):
EX = Namespace("http://example.com/")
namespace_manager = NamespaceManager(Graph())
namespace_manager.bind('ex', EX, override=False)
self.testGraph.namespace_manager = namespace_manager
man = Class(EX.Man)
boy = Class(EX.Boy)
woman = Class(EX.Woman)
girl = Class(EX.Girl)
male = Class(EX.Male)
female= Class(EX.Female)
human = Class(EX.Human)
animal = Class(EX.Animal)
cat = Class(EX.Cat)
dog = Class(EX.Dog)
animal = Class(EX.Animal)
animal = cat | dog | human
human += man
human += boy
human += woman
human += girl
male += man
male += boy
female += woman
female += girl
testClass = human & ~ female
self.assertEquals(repr(testClass),'( ex:Human and ( not ex:Female ) )')
newtestClass = ComplementExpansion(testClass,debug=True)
self.assertTrue(repr(newtestClass) in ['( ex:Boy or ex:Man )','( ex:Man or ex:Boy )'],repr(newtestClass))
testClass2 = animal & ~ (male | female)
self.assertEquals(repr(testClass2),
'( ( ex:Cat or ex:Dog or ex:Human ) and ( not ( ex:Male or ex:Female ) ) )')
newtestClass2 = ComplementExpansion(testClass2,debug=True)
testClass2Repr = repr(newtestClass2)
self.assertTrue(testClass2Repr in ['( ex:Cat or ex:Dog )','( ex:Dog or ex:Cat )'],testClass2Repr)
if __name__ == '__main__':
unittest.main()
sys.exit(1)
from optparse import OptionParser
parser = OptionParser()
parser.add_option('--verbose',action="store_true",default=False,
help='Output debug print statements or not')
parser.add_option('--format',default="xml",
help='The RDF serialization syntax to parse with')
(options, args) = parser.parse_args()
owlGraph = Graph()
for input in args[0:]:
if options.verbose:
print "Parsing ", input, " as ", options.format
owlGraph.parse(input,format=options.format)
Individual.factoryGraph = owlGraph
def topList(node,g):
for s in g.subjects(RDF.rest,node):
yield s
for negativeClass in owlGraph.subjects(predicate=OWL_NS.complementOf):
containingList = first(owlGraph.subjects(RDF.first,negativeClass))
prevLink = None
while containingList:
prevLink = containingList
containingList = first(owlGraph.subjects(RDF.rest,containingList))
for s,p,o in owlGraph.triples_choices((None,
[OWL_NS.intersectionOf,
OWL_NS.unionOf],
prevLink)):
_class = Class(s)
# print _class.__repr__(True,True)
ComplementExpansion(_class,debug=options.verbose) | PypiClean |
/Flask-RESTful-0.3.10.tar.gz/Flask-RESTful-0.3.10/docs/intermediate-usage.rst | .. _intermediate:
Intermediate Usage
==================
.. currentmodule:: flask_restful
This page covers building a slightly more complex Flask-RESTful app that will
cover out some best practices when setting up a real-world Flask-RESTful-based
API. The :ref:`quickstart` section is great for getting started with your first
Flask-RESTful app, so if you're new to Flask-RESTful you'd be better off
checking that out first.
Project Structure
-----------------
There are many different ways to organize your Flask-RESTful app, but here
we'll describe one that scales pretty well with larger apps and maintains
a nice level organization.
The basic idea is to split your app into three main parts: the routes, the
resources, and any common infrastructure.
Here's an example directory structure: ::
myapi/
__init__.py
app.py # this file contains your app and routes
resources/
__init__.py
foo.py # contains logic for /Foo
bar.py # contains logic for /Bar
common/
__init__.py
util.py # just some common infrastructure
The common directory would probably just contain a set of helper functions
to fulfill common needs across your application. It could also contain, for
example, any custom input/output types your resources need to get the job done.
In the resource files, you just have your resource objects. So here's what
``foo.py`` might look like: ::
from flask_restful import Resource
class Foo(Resource):
def get(self):
pass
def post(self):
pass
The key to this setup lies in ``app.py``: ::
from flask import Flask
from flask_restful import Api
from myapi.resources.foo import Foo
from myapi.resources.bar import Bar
from myapi.resources.baz import Baz
app = Flask(__name__)
api = Api(app)
api.add_resource(Foo, '/Foo', '/Foo/<string:id>')
api.add_resource(Bar, '/Bar', '/Bar/<string:id>')
api.add_resource(Baz, '/Baz', '/Baz/<string:id>')
As you can imagine with a particularly large or complex API, this file ends up
being very valuable as a comprehensive list of all the routes and resources in
your API. You would also use this file to set up any configuration and
request hooks (:meth:`~flask.Flask.before_request`,
:meth:`~flask.Flask.after_request`).
Basically, this file configures your entire API.
The things in the common directory are just things you'd want to support your
resource modules.
Use With Blueprints
-------------------
See :ref:`blueprints` in the Flask documentation for what blueprints are and
why you should use them. Here's an example of how to link an :class:`Api`
up to a :class:`~flask.Blueprint`. ::
from flask import Flask, Blueprint
from flask_restful import Api, Resource, url_for
app = Flask(__name__)
api_bp = Blueprint('api', __name__)
api = Api(api_bp)
class TodoItem(Resource):
def get(self, id):
return {'task': 'Say "Hello, World!"'}
api.add_resource(TodoItem, '/todos/<int:id>')
app.register_blueprint(api_bp)
.. note ::
Calling :meth:`Api.init_app` is not required here because registering the
blueprint with the app takes care of setting up the routing for the
application.
Full Parameter Parsing Example
------------------------------
Elsewhere in the documentation, we've described how to use reqparse in
detail. Here we'll set up a resource with multiple input parameters that
exercise a wider range of its options. We'll define a resource named "User". ::
from flask_restful import fields, marshal_with, reqparse, Resource
def email(email_str):
"""Return email_str if valid, raise an exception in other case."""
if valid_email(email_str):
return email_str
else:
raise ValueError('{} is not a valid email'.format(email_str))
post_parser = reqparse.RequestParser()
post_parser.add_argument(
'username', dest='username',
location='form', required=True,
help='The user\'s username',
)
post_parser.add_argument(
'email', dest='email',
type=email, location='form',
required=True, help='The user\'s email',
)
post_parser.add_argument(
'user_priority', dest='user_priority',
type=int, location='form',
default=1, choices=range(5), help='The user\'s priority',
)
user_fields = {
'id': fields.Integer,
'username': fields.String,
'email': fields.String,
'user_priority': fields.Integer,
'custom_greeting': fields.FormattedString('Hey there {username}!'),
'date_created': fields.DateTime,
'date_updated': fields.DateTime,
'links': fields.Nested({
            'friends': fields.Url('user_friends', absolute=True),
            'posts': fields.Url('user_posts', absolute=True),
}),
}
class User(Resource):
@marshal_with(user_fields)
def post(self):
args = post_parser.parse_args()
user = create_user(args.username, args.email, args.user_priority)
return user
@marshal_with(user_fields)
def get(self, id):
user = fetch_user(id)
return user
As you can see, we create a ``post_parser`` specifically to handle the parsing
of arguments provided on POST. Let's step through the definition of each
argument. ::
post_parser.add_argument(
'username', dest='username',
location='form', required=True,
help='The user\'s username',
)
The ``username`` field is the most normal out of all of them. It takes
a value from the form-encoded POST body and keeps it as a string (the
default type). This argument
is required (``required=True``), which means that if it isn't provided,
Flask-RESTful will automatically return a 400 with a message along the lines
of 'the username field is required'. ::
post_parser.add_argument(
'email', dest='email',
type=email, location='form',
required=True, help='The user\'s email',
)
The ``email`` field has a custom type of ``email``. A few lines earlier we
defined an ``email`` function that takes a string and returns it if it is a
valid email address; otherwise it raises a ``ValueError`` explaining that the
value is not a valid email. ::
post_parser.add_argument(
'user_priority', dest='user_priority',
type=int, location='form',
default=1, choices=range(5), help='The user\'s priority',
)
The ``user_priority`` type takes advantage of the ``choices`` argument. This
means that if the provided ``user_priority`` value doesn't fall in the range
specified by the ``choices`` argument (in this case ``[0, 1, 2, 3, 4]``),
Flask-RESTful will automatically respond with a 400 and a descriptive error
message.
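When any of these validations fail — a missing ``username``, an invalid
``email``, or an out-of-range ``user_priority`` — the client receives an
HTTP 400 response with a JSON body naming the offending argument, shaped
roughly like the following (the exact wording varies between Flask-RESTful
versions):

```json
{
    "message": {
        "user_priority": "The user's priority"
    }
}
```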
That covers the inputs. We also defined some interesting field types in the
``user_fields`` dictionary to showcase a couple of the more exotic types. ::
user_fields = {
'id': fields.Integer,
'username': fields.String,
'email': fields.String,
'user_priority': fields.Integer,
'custom_greeting': fields.FormattedString('Hey there {username}!'),
'date_created': fields.DateTime,
'date_updated': fields.DateTime,
'links': fields.Nested({
'friends': fields.Url('user_friends', absolute=True),
'posts': fields.Url('user_posts', absolute=True),
}),
}
First up, there's :class:`fields.FormattedString`. ::
'custom_greeting': fields.FormattedString('Hey there {username}!'),
This field is primarily used to interpolate values from the response into
other values. In this instance, ``custom_greeting`` will always contain the
value returned from the ``username`` field.
Next up, check out :class:`fields.Nested`. ::
'links': fields.Nested({
'friends': fields.Url('user_friends', absolute=True),
'posts': fields.Url('user_posts', absolute=True),
}),
This field is used to create a sub-object in the response. In this case,
we want to create a ``links`` sub-object to contain urls of related objects.
Note that we passed `fields.Nested` another dict which is built in such a
way that it would be an acceptable argument to :func:`marshal` by itself.
Finally, we used the :class:`fields.Url` field type. ::
'friends': fields.Url('user_friends', absolute=True),
'posts': fields.Url('user_posts', absolute=True),
It takes as its first parameter the name of the endpoint associated with the
urls of the objects in the ``links`` sub-object. Passing ``absolute=True``
ensures that the generated urls will have the hostname included.
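Putting the output side together, a marshalled ``User`` might serialize to
something like the following (the values shown are invented for
illustration):

```json
{
    "id": 42,
    "username": "dave",
    "email": "[email protected]",
    "user_priority": 3,
    "custom_greeting": "Hey there dave!",
    "date_created": "Thu, 06 Aug 2015 10:24:04 -0000",
    "date_updated": "Thu, 06 Aug 2015 10:24:04 -0000",
    "links": {
        "friends": "http://example.com/users/42/friends",
        "posts": "http://example.com/users/42/posts"
    }
}
```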
Passing Constructor Parameters Into Resources
---------------------------------------------
Your :class:`Resource` implementation may require outside dependencies. Those
dependencies are best passed in through the constructor to keep the
resource loosely coupled from them. The :meth:`Api.add_resource` method has
two keyword arguments:
``resource_class_args`` and ``resource_class_kwargs``. Their values will be forwarded
and passed into your Resource implementation's constructor.
So you could have a :class:`Resource`: ::
from flask_restful import Resource
class TodoNext(Resource):
def __init__(self, **kwargs):
# smart_engine is a black box dependency
self.smart_engine = kwargs['smart_engine']
def get(self):
return self.smart_engine.next_todo()
You can inject the required dependency into TodoNext like so: ::
smart_engine = SmartEngine()
api.add_resource(TodoNext, '/next',
resource_class_kwargs={ 'smart_engine': smart_engine })
The same idea applies to positional constructor arguments, which are
forwarded via ``resource_class_args``.
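Under the hood, this forwarding amounts to instantiating the resource with
the stored arguments when the view is invoked. A stdlib-only sketch of the
pattern (simplified for illustration — not the actual flask_restful
implementation; it reuses the names from the example above):

```python
class SmartEngine:
    """Stand-in for the black-box dependency from the example."""
    def next_todo(self):
        return {"todo": "write docs"}

class TodoNext:
    def __init__(self, **kwargs):
        self.smart_engine = kwargs['smart_engine']

    def get(self):
        return self.smart_engine.next_todo()

def add_resource(resource_cls, *class_args, **class_kwargs):
    # Simplified: flask_restful stores these arguments and instantiates the
    # resource with them each time a request is dispatched to the view.
    def view(*args, **kwargs):
        return resource_cls(*class_args, **class_kwargs).get(*args, **kwargs)
    return view

view = add_resource(TodoNext, smart_engine=SmartEngine())
print(view())  # {'todo': 'write docs'}
```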
| PypiClean |
/123_object_detection-0.1.tar.gz/123_object_detection-0.1/object_detection/models/ssd_resnet_v1_fpn_feature_extractor.py | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from six.moves import range
import tensorflow.compat.v1 as tf
import tf_slim as slim
from object_detection.meta_architectures import ssd_meta_arch
from object_detection.models import feature_map_generators
from object_detection.utils import context_manager
from object_detection.utils import ops
from object_detection.utils import shape_utils
from nets import resnet_v1
class SSDResnetV1FpnFeatureExtractor(ssd_meta_arch.SSDFeatureExtractor):
"""SSD FPN feature extractor based on Resnet v1 architecture."""
def __init__(self,
is_training,
depth_multiplier,
min_depth,
pad_to_multiple,
conv_hyperparams_fn,
resnet_base_fn,
resnet_scope_name,
fpn_scope_name,
fpn_min_level=3,
fpn_max_level=7,
additional_layer_depth=256,
reuse_weights=None,
use_explicit_padding=False,
use_depthwise=False,
use_native_resize_op=False,
override_base_feature_extractor_hyperparams=False):
"""SSD FPN feature extractor based on Resnet v1 architecture.
Args:
is_training: whether the network is in training mode.
depth_multiplier: float depth multiplier for feature extractor.
min_depth: minimum feature extractor depth.
pad_to_multiple: the nearest multiple to zero pad the input height and
width dimensions to.
conv_hyperparams_fn: A function to construct tf slim arg_scope for conv2d
and separable_conv2d ops in the layers that are added on top of the
base feature extractor.
resnet_base_fn: base resnet network to use.
resnet_scope_name: scope name under which to construct resnet
fpn_scope_name: scope name under which to construct the feature pyramid
network.
fpn_min_level: the highest resolution feature map to use in FPN. The valid
values are {2, 3, 4, 5} which map to Resnet blocks {1, 2, 3, 4}
respectively.
fpn_max_level: the smallest resolution feature map to construct or use in
FPN. FPN constructions uses features maps starting from fpn_min_level
upto the fpn_max_level. In the case that there are not enough feature
maps in the backbone network, additional feature maps are created by
applying stride 2 convolutions until we get the desired number of fpn
levels.
additional_layer_depth: additional feature map layer channel depth.
reuse_weights: Whether to reuse variables. Default is None.
use_explicit_padding: Whether to use explicit padding when extracting
features. Default is False. UNUSED currently.
use_depthwise: Whether to use depthwise convolutions. UNUSED currently.
use_native_resize_op: Whether to use tf.image.nearest_neighbor_resize
to do upsampling in FPN. Default is false.
override_base_feature_extractor_hyperparams: Whether to override
hyperparameters of the base feature extractor with the one from
`conv_hyperparams_fn`.
Raises:
ValueError: On supplying invalid arguments for unused arguments.
"""
super(SSDResnetV1FpnFeatureExtractor, self).__init__(
is_training=is_training,
depth_multiplier=depth_multiplier,
min_depth=min_depth,
pad_to_multiple=pad_to_multiple,
conv_hyperparams_fn=conv_hyperparams_fn,
reuse_weights=reuse_weights,
use_explicit_padding=use_explicit_padding,
use_depthwise=use_depthwise,
override_base_feature_extractor_hyperparams=
override_base_feature_extractor_hyperparams)
if self._use_explicit_padding is True:
raise ValueError('Explicit padding is not a valid option.')
self._resnet_base_fn = resnet_base_fn
self._resnet_scope_name = resnet_scope_name
self._fpn_scope_name = fpn_scope_name
self._fpn_min_level = fpn_min_level
self._fpn_max_level = fpn_max_level
self._additional_layer_depth = additional_layer_depth
self._use_native_resize_op = use_native_resize_op
def preprocess(self, resized_inputs):
"""SSD preprocessing.
VGG style channel mean subtraction as described here:
https://gist.github.com/ksimonyan/211839e770f7b538e2d8#file-readme-mdnge.
Note that if the number of channels is not equal to 3, the mean subtraction
will be skipped and the original resized_inputs will be returned.
Args:
resized_inputs: a [batch, height, width, channels] float tensor
representing a batch of images.
Returns:
preprocessed_inputs: a [batch, height, width, channels] float tensor
representing a batch of images.
"""
if resized_inputs.shape.as_list()[3] == 3:
channel_means = [123.68, 116.779, 103.939]
return resized_inputs - [[channel_means]]
else:
return resized_inputs
def _filter_features(self, image_features):
# TODO(rathodv): Change resnet endpoint to strip scope prefixes instead
# of munging the scope here.
filtered_image_features = dict({})
for key, feature in image_features.items():
feature_name = key.split('/')[-1]
if feature_name in ['block1', 'block2', 'block3', 'block4']:
filtered_image_features[feature_name] = feature
return filtered_image_features
def extract_features(self, preprocessed_inputs):
"""Extract features from preprocessed inputs.
Args:
preprocessed_inputs: a [batch, height, width, channels] float tensor
representing a batch of images.
Returns:
feature_maps: a list of tensors where the ith tensor has shape
[batch, height_i, width_i, depth_i]
"""
preprocessed_inputs = shape_utils.check_min_image_dim(
129, preprocessed_inputs)
with tf.variable_scope(
self._resnet_scope_name, reuse=self._reuse_weights) as scope:
with slim.arg_scope(resnet_v1.resnet_arg_scope()):
with (slim.arg_scope(self._conv_hyperparams_fn())
if self._override_base_feature_extractor_hyperparams else
context_manager.IdentityContextManager()):
_, image_features = self._resnet_base_fn(
inputs=ops.pad_to_multiple(preprocessed_inputs,
self._pad_to_multiple),
num_classes=None,
is_training=None,
global_pool=False,
output_stride=None,
store_non_strided_activations=True,
min_base_depth=self._min_depth,
depth_multiplier=self._depth_multiplier,
scope=scope)
image_features = self._filter_features(image_features)
depth_fn = lambda d: max(int(d * self._depth_multiplier), self._min_depth)
with slim.arg_scope(self._conv_hyperparams_fn()):
with tf.variable_scope(self._fpn_scope_name,
reuse=self._reuse_weights):
base_fpn_max_level = min(self._fpn_max_level, 5)
feature_block_list = []
for level in range(self._fpn_min_level, base_fpn_max_level + 1):
feature_block_list.append('block{}'.format(level - 1))
fpn_features = feature_map_generators.fpn_top_down_feature_maps(
[(key, image_features[key]) for key in feature_block_list],
depth=depth_fn(self._additional_layer_depth),
use_native_resize_op=self._use_native_resize_op)
feature_maps = []
for level in range(self._fpn_min_level, base_fpn_max_level + 1):
feature_maps.append(
fpn_features['top_down_block{}'.format(level - 1)])
last_feature_map = fpn_features['top_down_block{}'.format(
base_fpn_max_level - 1)]
# Construct coarse features
for i in range(base_fpn_max_level, self._fpn_max_level):
last_feature_map = slim.conv2d(
last_feature_map,
num_outputs=depth_fn(self._additional_layer_depth),
kernel_size=[3, 3],
stride=2,
padding='SAME',
scope='bottom_up_block{}'.format(i))
feature_maps.append(last_feature_map)
return feature_maps
class SSDResnet50V1FpnFeatureExtractor(SSDResnetV1FpnFeatureExtractor):
"""SSD Resnet50 V1 FPN feature extractor."""
def __init__(self,
is_training,
depth_multiplier,
min_depth,
pad_to_multiple,
conv_hyperparams_fn,
fpn_min_level=3,
fpn_max_level=7,
additional_layer_depth=256,
reuse_weights=None,
use_explicit_padding=False,
use_depthwise=False,
use_native_resize_op=False,
override_base_feature_extractor_hyperparams=False):
"""SSD Resnet50 V1 FPN feature extractor based on Resnet v1 architecture.
Args:
is_training: whether the network is in training mode.
depth_multiplier: float depth multiplier for feature extractor.
min_depth: minimum feature extractor depth.
pad_to_multiple: the nearest multiple to zero pad the input height and
width dimensions to.
conv_hyperparams_fn: A function to construct tf slim arg_scope for conv2d
and separable_conv2d ops in the layers that are added on top of the
base feature extractor.
fpn_min_level: the minimum level in feature pyramid networks.
fpn_max_level: the maximum level in feature pyramid networks.
additional_layer_depth: additional feature map layer channel depth.
reuse_weights: Whether to reuse variables. Default is None.
use_explicit_padding: Whether to use explicit padding when extracting
features. Default is False. UNUSED currently.
use_depthwise: Whether to use depthwise convolutions. UNUSED currently.
use_native_resize_op: Whether to use tf.image.nearest_neighbor_resize
to do upsampling in FPN. Default is false.
override_base_feature_extractor_hyperparams: Whether to override
hyperparameters of the base feature extractor with the one from
`conv_hyperparams_fn`.
"""
super(SSDResnet50V1FpnFeatureExtractor, self).__init__(
is_training,
depth_multiplier,
min_depth,
pad_to_multiple,
conv_hyperparams_fn,
resnet_v1.resnet_v1_50,
'resnet_v1_50',
'fpn',
fpn_min_level,
fpn_max_level,
additional_layer_depth,
reuse_weights=reuse_weights,
use_explicit_padding=use_explicit_padding,
use_depthwise=use_depthwise,
use_native_resize_op=use_native_resize_op,
override_base_feature_extractor_hyperparams=
override_base_feature_extractor_hyperparams)
class SSDResnet101V1FpnFeatureExtractor(SSDResnetV1FpnFeatureExtractor):
"""SSD Resnet101 V1 FPN feature extractor."""
def __init__(self,
is_training,
depth_multiplier,
min_depth,
pad_to_multiple,
conv_hyperparams_fn,
fpn_min_level=3,
fpn_max_level=7,
additional_layer_depth=256,
reuse_weights=None,
use_explicit_padding=False,
use_depthwise=False,
use_native_resize_op=False,
override_base_feature_extractor_hyperparams=False):
"""SSD Resnet101 V1 FPN feature extractor based on Resnet v1 architecture.
Args:
is_training: whether the network is in training mode.
depth_multiplier: float depth multiplier for feature extractor.
min_depth: minimum feature extractor depth.
pad_to_multiple: the nearest multiple to zero pad the input height and
width dimensions to.
conv_hyperparams_fn: A function to construct tf slim arg_scope for conv2d
and separable_conv2d ops in the layers that are added on top of the
base feature extractor.
fpn_min_level: the minimum level in feature pyramid networks.
fpn_max_level: the maximum level in feature pyramid networks.
additional_layer_depth: additional feature map layer channel depth.
reuse_weights: Whether to reuse variables. Default is None.
use_explicit_padding: Whether to use explicit padding when extracting
features. Default is False. UNUSED currently.
use_depthwise: Whether to use depthwise convolutions. UNUSED currently.
use_native_resize_op: Whether to use tf.image.nearest_neighbor_resize
to do upsampling in FPN. Default is false.
override_base_feature_extractor_hyperparams: Whether to override
hyperparameters of the base feature extractor with the one from
`conv_hyperparams_fn`.
"""
super(SSDResnet101V1FpnFeatureExtractor, self).__init__(
is_training,
depth_multiplier,
min_depth,
pad_to_multiple,
conv_hyperparams_fn,
resnet_v1.resnet_v1_101,
'resnet_v1_101',
'fpn',
fpn_min_level,
fpn_max_level,
additional_layer_depth,
reuse_weights=reuse_weights,
use_explicit_padding=use_explicit_padding,
use_depthwise=use_depthwise,
use_native_resize_op=use_native_resize_op,
override_base_feature_extractor_hyperparams=
override_base_feature_extractor_hyperparams)
class SSDResnet152V1FpnFeatureExtractor(SSDResnetV1FpnFeatureExtractor):
"""SSD Resnet152 V1 FPN feature extractor."""
def __init__(self,
is_training,
depth_multiplier,
min_depth,
pad_to_multiple,
conv_hyperparams_fn,
fpn_min_level=3,
fpn_max_level=7,
additional_layer_depth=256,
reuse_weights=None,
use_explicit_padding=False,
use_depthwise=False,
use_native_resize_op=False,
override_base_feature_extractor_hyperparams=False):
"""SSD Resnet152 V1 FPN feature extractor based on Resnet v1 architecture.
Args:
is_training: whether the network is in training mode.
depth_multiplier: float depth multiplier for feature extractor.
min_depth: minimum feature extractor depth.
pad_to_multiple: the nearest multiple to zero pad the input height and
width dimensions to.
conv_hyperparams_fn: A function to construct tf slim arg_scope for conv2d
and separable_conv2d ops in the layers that are added on top of the
base feature extractor.
fpn_min_level: the minimum level in feature pyramid networks.
fpn_max_level: the maximum level in feature pyramid networks.
additional_layer_depth: additional feature map layer channel depth.
reuse_weights: Whether to reuse variables. Default is None.
use_explicit_padding: Whether to use explicit padding when extracting
features. Default is False. UNUSED currently.
use_depthwise: Whether to use depthwise convolutions. UNUSED currently.
use_native_resize_op: Whether to use tf.image.nearest_neighbor_resize
to do upsampling in FPN. Default is false.
override_base_feature_extractor_hyperparams: Whether to override
hyperparameters of the base feature extractor with the one from
`conv_hyperparams_fn`.
"""
super(SSDResnet152V1FpnFeatureExtractor, self).__init__(
is_training,
depth_multiplier,
min_depth,
pad_to_multiple,
conv_hyperparams_fn,
resnet_v1.resnet_v1_152,
'resnet_v1_152',
'fpn',
fpn_min_level,
fpn_max_level,
additional_layer_depth,
reuse_weights=reuse_weights,
use_explicit_padding=use_explicit_padding,
use_depthwise=use_depthwise,
use_native_resize_op=use_native_resize_op,
override_base_feature_extractor_hyperparams=
override_base_feature_extractor_hyperparams) | PypiClean |
/AthenaCSS-0.8.0.tar.gz/AthenaCSS-0.8.0/README.md | # - AthenaCSS -
[](https://pypi.org/project/AthenaCSS/) [](https://github.com/DirectiveAthena/VerSC-AthenaColor/blob/master/LICENSE) [](https://discord.gg/6JcDbhXkCH) [](https://pepy.tech/project/athenalib)
---
## Package Details
#### Details and features
- A Python package which allows CSS to be written as a Python script
- CSS output to file, string or console from the Python script
#### Python Version
- Supported Python versions: **3.10**
- Other older versions of Python are not currently supported.
- These older versions will probably not be supported by [@AndreasSas](https://github.com/AndreasSas) himself, but if you want to contribute to the project and make this package compatible with older versions of Python, Pull requests are always welcome.
---
## Quick Examples
The following example is only a very small CSS code piece, but should bring the idea across of how it works.
```python
from AthenaCSS import *
from AthenaColor import RGB # Dependency on our own other package
with (css_generator := CSSGenerator()) as generator:
with (rule0 := CSSRule()) as (selector, declaration):
selector.add(
SelectorElement.H1(CSSClass("title")),
)
declaration.add(
Property.Color(RGB(128,64,32)),
)
generator.add_comment(
"The following sets the Header with the class of 'title' to have a specific color"
).add_rule(
rule0
)
css_generator.to_console()
```
The above code will output the following CSS to the console:
```css
/*The following sets the Header with the class of 'title' to have a specific color*/
h1.title {
color: rgb(128, 64, 32);
}
```
---
## Documentation
Full documentation can be found at:
**[directiveathena.com](https://publish.obsidian.md/directiveathena/)** (redirect to Obsidian.md publish site)
(Reminder, the documentation is still under heavy development)
---
## Install
To install the package in your Python environment
```
pip install AthenaCSS --upgrade
```
---
## Links
Project files can be found at:
- [GitHub Repo](https://github.com/DirectiveAthena/AthenaCSS)
- [Pypi link](https://pypi.org/project/AthenaCSS/)
---
## Disclaimer
Where *No Dependency* is stated, the standard library is not counted as a dependency.
---
Made by Andreas Sas, `2022`
/GenExpA-0.0.5-py3-none-any.whl/genexpa/core/normfinder.py | import numpy as np
import pandas as pd
def normfinder(dat0, textfile=None, ctVal=False, pStabLim=0.25):
    """Rank candidate reference genes by expression stability (NormFinder).

    Args:
        dat0: DataFrame with genes as rows and samples as columns; the last
            row holds the integer group id of each sample.
        textfile: optional whitespace-separated file to read the data from
            instead of ``dat0``.
        ctVal: if False, values are linear quantities and are log2-transformed
            here; if True, values are already on a log scale (e.g. Ct values).
        pStabLim: stability cutoff for candidate pre-selection (currently
            unused: every gene is kept as a candidate).

    Returns:
        A tuple ``(single, pairs)``: ``single`` lists ``[gene, stability]``
        sorted by increasing stability value (lower is more stable), and
        ``pairs`` lists ``[gene1, gene2, stability]`` for every candidate
        pair, sorted the same way.
    """
if textfile:
dat0 = pd.read_csv(textfile, index_col=0, sep=r"\s+")
# ----------------------
ntotal = dat0.shape[1]
k0 = dat0.shape[0]
ngenes = k0-1
genenames = dat0.index[:-1]
grId = dat0.values[-1].astype(int)
dat = dat0.values[:-1]
if not ctVal:
dat = np.log2(dat)
uq_grId = np.unique(grId)
ngr = len(uq_grId)
nsamples = []
for _id in uq_grId:
_shape = dat[:,grId == _id].shape
nsamples.append(_shape[1])
nsamples = np.array(nsamples)
# MakeStab(dat)
da = dat
sampleavg = np.mean(dat, axis=0)
genegroupavg = []
groupavg = []
# Group averages
for gr_id in uq_grId:
gr_ave = np.mean(dat[:, grId == gr_id], axis=1)
genegroupavg.append(gr_ave)
groupavg.append(np.mean(gr_ave))
genegroupavg = np.array(genegroupavg).T
groupavg = np.array(groupavg)
# Variances
GGvar = np.zeros((ngenes, ngr))
for group_index, gr_name in enumerate(uq_grId):
grset = grId == gr_name
a = np.zeros(ngenes)
for gene_index in range(ngenes):
a[gene_index] = np.sum(
(da[gene_index, grset]
- genegroupavg[gene_index, group_index]
- sampleavg[grset]
+ groupavg[group_index])**2)/ (nsamples[group_index]-1)
GGvar[:,group_index] = (a-np.sum(a)/(ngenes*ngenes-ngenes))/(1-2/ngenes)
# Change possible negative values
genegroupMinvar = np.zeros((ngenes, ngr))
for group_index, gr_name in enumerate(uq_grId):
grset = grId == gr_name
z = da[:,grset]
for gene_index in range(ngenes):
varpair = np.zeros(ngenes, dtype=float)
for gene1_index in range(ngenes):
varpair[gene1_index] = np.var(z[gene_index,:]-z[gene1_index,:], ddof=1)
genegroupMinvar[gene_index, group_index] = np.min(varpair[np.arange(ngenes) != gene_index])/4.0
# Final variances
GGvar[GGvar < 0] = genegroupMinvar[GGvar < 0]
# Old stability measure for each gene is calculated:
dif = genegroupavg
difgeneavg = np.mean(dif, axis=1)
difgroupavg = np.mean(dif, axis=0)
difavg = np.mean(dif)
for gene_index in range(ngenes):
for group_index in range(ngr):
dif[gene_index, group_index] = (dif[gene_index, group_index]
-difgeneavg[gene_index]
-difgroupavg[group_index]
+difavg)
nsampMatrix = np.tile(nsamples, (ngenes, 1))
vardif = GGvar/nsampMatrix
gamma = np.sum(dif*dif)/((ngr-1)*(ngenes-1))-np.sum(vardif)/(ngenes*ngr)
gamma = max(gamma, 0)
difnew = dif*gamma/(gamma+vardif)
varnew = vardif+gamma*vardif/(gamma+vardif)
Ostab0 = np.abs(difnew)+np.sqrt(varnew)
Ostab = np.mean(Ostab0, axis=1)
# Measure of group differences:
mud = np.zeros(ngenes)
for gene_index in range(ngenes):
mud[gene_index] = 2.0*np.max(np.abs(dif[gene_index,:]))
# Common variance:
genevar = np.zeros(ngenes)
for gene_index in range(ngenes):
genevar[gene_index] = np.sum((nsamples-1)*GGvar[gene_index,:])/(np.sum(nsamples)-ngr)
Gsd = np.sqrt(genevar)
results = [mud, Gsd, Ostab, np.ones(ngenes)*gamma, GGvar, dif]
results = np.column_stack(results)
# END MAKE STAB DEFINITION
# MakeComb2
def MakeComb2(g1, g2, res):
gam = res[0,3]
d1 = res[g1, (4+ngr):(4+ngr+ngr)]
d2 = res[g2, (4+ngr):(4+ngr+ngr)]
s1 = res[g1, 4:(4+ngr)]
s2 = res[g2, 4:(4+ngr)]
try:
rho = np.abs(gam*d1/(gam+s1/nsamples)+
gam*d2/(gam+s2/nsamples))*np.sqrt(ngenes/(ngenes-2) )/2.0
except ZeroDivisionError:
print("TTEST")
print("(gam+s1/nsamples):{}".format(gam+s1/nsamples))
print("(gam+s2/nsamples):{}".format(gam+s2/nsamples))
print("(ngenes-2):{}".format(ngenes-2))
raise ValueError("val")
rho = rho + np.sqrt(s1/nsamples+gam*s1/(nsamples*gam+s1) +
s2/nsamples+gam*s2/(nsamples*gam+s2))/2
return np.mean(rho)
# Main part
res = results
#gcand = np.arange(ngenes)[res[:,2]<pStabLim]
gcand = np.arange(ngenes)[res[:,2] > -np.inf]
ncand = len(gcand)
if ncand < 4 :
if ngenes > 3:
li = sorted(res[:,2])[3]
gcand = np.arange(ngenes)[res[:,2] <= li]
ncand = len(gcand)
else:
gcand = np.arange(ngenes)
ncand = len(gcand)
vv2 = []
for g1 in range(ncand-1):
for g2 in range(g1+1, ncand):
qmeas = MakeComb2(gcand[g1], gcand[g2], res)
vv2.append([gcand[g1], gcand[g2], qmeas])
# return preparation
vv2_sorted = sorted(vv2, key = lambda x: x[2])
vv2_named = [[genenames[_[0]], genenames[_[1]], _[2]] for _ in vv2_sorted]
Ostab_named = [[genenames[_], Ostab[_]] for _ in np.argsort(Ostab)]
return Ostab_named, vv2_named | PypiClean |
/edward-1.3.5.tar.gz/edward-1.3.5/edward/inferences/sgld.py | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import six
import tensorflow as tf
from edward.inferences.monte_carlo import MonteCarlo
from edward.models import RandomVariable
from edward.util import copy
try:
from edward.models import Normal
except Exception as e:
raise ImportError("{0}. Your TensorFlow version is not supported.".format(e))
class SGLD(MonteCarlo):
"""Stochastic gradient Langevin dynamics [@welling2011bayesian].
#### Notes
In conditional inference, we infer $z$ in $p(z, \\beta
\mid x)$ while fixing inference over $\\beta$ using another
distribution $q(\\beta)$.
`SGLD` substitutes the model's log marginal density
$\log p(x, z) = \log \mathbb{E}_{q(\\beta)} [ p(x, z, \\beta) ]
\\approx \log p(x, z, \\beta^*)$
leveraging a single Monte Carlo sample, where $\\beta^* \sim
q(\\beta)$. This is unbiased (and therefore asymptotically exact as a
pseudo-marginal method) if $q(\\beta) = p(\\beta \mid x)$.
#### Examples
```python
mu = Normal(loc=0.0, scale=1.0)
x = Normal(loc=mu, scale=1.0, sample_shape=10)
qmu = Empirical(tf.Variable(tf.zeros(500)))
inference = ed.SGLD({mu: qmu}, {x: np.zeros(10, dtype=np.float32)})
```
"""
def __init__(self, *args, **kwargs):
super(SGLD, self).__init__(*args, **kwargs)
def initialize(self, step_size=0.25, *args, **kwargs):
"""
Args:
step_size: float, optional.
Constant scale factor of learning rate.
"""
self.step_size = step_size
return super(SGLD, self).initialize(*args, **kwargs)
def build_update(self):
"""Simulate Langevin dynamics using a discretized integrator. Its
discretization error goes to zero as the learning rate decreases.
#### Notes
The updates assume each Empirical random variable is directly
parameterized by `tf.Variable`s.
"""
old_sample = {z: tf.gather(qz.params, tf.maximum(self.t - 1, 0))
for z, qz in six.iteritems(self.latent_vars)}
# Simulate Langevin dynamics.
learning_rate = self.step_size / tf.pow(
tf.cast(self.t + 1, list(six.iterkeys(old_sample))[0].dtype), 0.55)
grad_log_joint = tf.gradients(self._log_joint(old_sample),
list(six.itervalues(old_sample)))
sample = {}
for z, grad_log_p in zip(six.iterkeys(old_sample), grad_log_joint):
qz = self.latent_vars[z]
event_shape = qz.event_shape
normal = Normal(
loc=tf.zeros(event_shape, dtype=qz.dtype),
scale=(tf.sqrt(tf.cast(learning_rate, qz.dtype)) *
tf.ones(event_shape, dtype=qz.dtype)))
sample[z] = old_sample[z] + \
0.5 * learning_rate * tf.convert_to_tensor(grad_log_p) + \
normal.sample()
# Update Empirical random variables.
assign_ops = []
for z, qz in six.iteritems(self.latent_vars):
variable = qz.get_variables()[0]
assign_ops.append(tf.scatter_update(variable, self.t, sample[z]))
# Increment n_accept.
assign_ops.append(self.n_accept.assign_add(1))
return tf.group(*assign_ops)
def _log_joint(self, z_sample):
"""Utility function to calculate model's log joint density,
log p(x, z), for inputs z (and fixed data x).
Args:
z_sample: dict.
Latent variable keys to samples.
"""
scope = tf.get_default_graph().unique_name("inference")
# Form dictionary in order to replace conditioning on prior or
# observed variable with conditioning on a specific value.
dict_swap = z_sample.copy()
for x, qx in six.iteritems(self.data):
if isinstance(x, RandomVariable):
if isinstance(qx, RandomVariable):
qx_copy = copy(qx, scope=scope)
dict_swap[x] = qx_copy.value()
else:
dict_swap[x] = qx
log_joint = 0.0
for z in six.iterkeys(self.latent_vars):
z_copy = copy(z, dict_swap, scope=scope)
log_joint += tf.reduce_sum(
self.scale.get(z, 1.0) * z_copy.log_prob(dict_swap[z]))
for x in six.iterkeys(self.data):
if isinstance(x, RandomVariable):
x_copy = copy(x, dict_swap, scope=scope)
log_joint += tf.reduce_sum(
self.scale.get(x, 1.0) * x_copy.log_prob(dict_swap[x]))
return log_joint | PypiClean |
# heaty/gui/heated_space/dialog.py
from typing import Optional, Dict, List
from PyQt5 import QtWidgets as qtw
from heaty.gui.auxiliary.types import ValueType
from heaty.gui.controller import Controller
from heaty.gui.heated_space.building_elements.widget import BuildingElementWidget
from heaty.gui.heated_space.input_fields import get_fields
from heaty.gui.user_input.form import InputForm, ButtonBox
# noinspection PyArgumentList,PyUnresolvedReferences
class HeatedSpaceDialog(qtw.QDialog):
def __init__(self, parent, name: str = '', params: Optional[Dict[str, ValueType]] = None,
bem_items: Optional[List[Dict[str, ValueType]]] = None, mode: str = 'add'):
super().__init__(parent, modal=True)
self.controller: Controller = parent.controller
self.mode = mode
if self.mode == 'add':
self.setWindowTitle('Add Heated Space')
self.hs_name = name
if self.mode == 'modify':
self.setWindowTitle('Modify Heated Space')
main_layout = qtw.QVBoxLayout()
self.setLayout(main_layout)
main_layout.setSizeConstraint(qtw.QLayout.SetFixedSize)
body_layout = qtw.QHBoxLayout()
left_layout = qtw.QVBoxLayout()
self.input_form = InputForm(self, get_fields()) if params is None else InputForm(self, get_fields(), params)
left_layout.addWidget(self.input_form)
# left_layout.addStretch()
right_layout = qtw.QVBoxLayout()
gpb_building_elements = qtw.QGroupBox('Building elements')
gpb_building_elements.setLayout(qtw.QVBoxLayout())
self.bem_widget = BuildingElementWidget(gpb_building_elements, bem_items)
gpb_building_elements.layout().addWidget(self.bem_widget)
right_layout.addWidget(gpb_building_elements)
body_layout.addLayout(left_layout)
body_layout.addLayout(right_layout)
main_layout.addLayout(body_layout)
buttonbox = ButtonBox(self, labels=['Submit', 'Discard'], slots=[self.accept, self.reject])
buttonbox.buttons['Discard'].setDefault(True)
main_layout.addWidget(buttonbox)
self.show()
def accept(self):
hs_params = self.input_form.read()
bem_items = self.bem_widget.get_bem_records()
if self.mode == 'add':
self.controller.add_heated_space(self.hs_name, hs_params, bem_items)
if self.mode == 'modify':
self.controller.modify_heated_space(hs_params, bem_items)
super().accept() | PypiClean |
# src/jaqalpaq/transpilers/tket/frontend.py
from pytket.circuit import OpType, Circuit, QubitRegister, Bit
from jaqalpaq.core.circuitbuilder import (
CircuitBuilder,
UnscheduledBlockBuilder,
SequentialBlockBuilder,
)
from jaqalpaq.core import Macro
from jaqalpaq.core.algorithm import expand_subcircuits, expand_macros
from jaqalpaq.core.algorithm.visitor import Visitor
from jaqalpaq.core.algorithm.fill_in_map import fill_in_map
import numpy as np
from jaqalpaq.error import JaqalError
_TKET_NAMES = {
OpType.PhasedX: lambda q, alpha, beta: ("R", q, beta, alpha),
OpType.Rz: lambda q, theta: ("Rz", q, theta),
OpType.XXPhase: lambda q1, q2, theta: ("MS", q1, q2, 0, theta),
}
_REVERSE_TKET_NAMES = {
"R": lambda q, beta, alpha: [(OpType.PhasedX, [alpha, beta], [q])],
"Sx": lambda q: [(OpType.SX, [q])],
"Sxd": lambda q: [(OpType.SXdg, [q])],
"Sy": lambda q: [(OpType.Ry, 0.5, [q])],
"Syd": lambda q: [(OpType.Ry, -0.5, [q])],
"Sz": lambda q: [(OpType.S, [q])],
"Szd": lambda q: [(OpType.Sdg, [q])],
"Px": lambda q: [(OpType.X, [q])],
"Py": lambda q: [(OpType.Y, [q])],
"Pz": lambda q: [(OpType.Z, [q])],
"Rz": lambda q, theta: [(OpType.Rz, theta, [q])],
"MS": lambda q1, q2, phi, theta: [
(OpType.Rz, phi, [q1]),
(OpType.Rz, phi, [q2]),
(OpType.XXPhase, theta, [q1, q2]),
(OpType.Rz, -phi, [q1]),
(OpType.Rz, -phi, [q2]),
],
"Sxx": lambda q1, q2: [(OpType.XXPhase, 0.5, [q1, q2])],
}
def jaqal_circuit_from_tket_circuit(
tkc, native_gates=None, names=None, remove_measurements=False
):
"""Converts a pytket Circuit object to a :class:`jaqalpaq.core.Circuit`.
    The circuit will be structured into a sequence of unscheduled blocks. All
    instructions between one barrier statement and the next will be put into
    an unscheduled block together. If the :mod:`jaqalpaq.scheduler` is run on
    the circuit, as many as possible of those gates will be parallelized within
    each block, while maintaining the order of the blocks. Otherwise, the
    circuit will be treated as a fully sequential circuit.
Measurement commands are supported, but only if applied to every qubit in the circuit
in immediate succession. If so, they will be mapped to a measure_all gate. If the
circuit does not end with a measurement, then a measure_all gate will be appended to
it.
Circuits containing multiple quantum registers will be converted to circuits with a
single quantum register, containing all the qubits from each register. The parts of
that larger register that correspond to each of the original registers will be mapped
with the appropriate names. Circuits containing multiple-index qubits will have each
such qubit mapped to a single-qubit register named with the indices separated by
underscore characters.
    A prepare_all gate will be inserted at the start of the circuit. If a
    measure_all gate is not the last gate in the circuit, a prepare_all gate
    will also be inserted after it.
:param pytket.circuit.Circuit tkc: The Circuit to convert.
:param names: A mapping from pytket OpTypes to functions taking qubits and gate
angle parameters and returning a tuple of arguments for
:meth:`jaqalpaq.core.Circuit.build_gate`. If omitted, maps
``pytket.OpType.PhasedX`` to the QSCOUT ``R`` gate, ``pytket.OpType.Rz`` to the
QSCOUT ``Rz`` gate, and ``pytket.OpType.XXPhase`` to the QSCOUT ``MS`` gate. The
``pytket.passes.SynthesiseUMD`` compilation pass will compile a circuit into this
basis.
:type names: dict or None
:param native_gates: The native gate set to target. If None, target the QSCOUT native
gates.
:type native_gates: dict or None
:param bool remove_measurements: Ignore any measure statements in the original circuit
and append a measure_all gate instead. Defaults to False.
:returns: The same quantum circuit, converted to JaqalPaq.
:rtype: jaqalpaq.core.Circuit
:raises JaqalError: If the circuit includes a gate not included in `names`.
"""
qreg_sizes = {}
for qb in tkc.qubits:
if len(qb.index) != 1:
qreg_sizes[qb.reg_name + "_".join([str(x) for x in qb.index])] = 1
elif qb.reg_name in qreg_sizes:
qreg_sizes[qb.reg_name] = max(qreg_sizes[qb.reg_name], qb.index[0] + 1)
else:
qreg_sizes[qb.reg_name] = qb.index[0] + 1
n = sum(qreg_sizes.values())
if native_gates is None:
from qscout.v1.std.jaqal_gates import ALL_GATES as native_gates
qsc = CircuitBuilder(native_gates=native_gates)
if names is None:
names = _TKET_NAMES
baseregister = qsc.register("baseregister", n)
offset = 0
registers = {}
for qreg in qreg_sizes:
registers[qreg] = qsc.map(
qreg, baseregister, slice(offset, offset + qreg_sizes[qreg])
)
offset += qreg_sizes[qreg]
# We're going to divide the circuit up into blocks. Each block will contain every gate
# between one barrier statement and the next. If the circuit is output with no further
# processing, then the gates in each block will be run in sequence. However, if the
# circuit is passed to the scheduler, it'll try to parallelize as many of the gates
# within each block as possible, while keeping the blocks themselves sequential.
block = UnscheduledBlockBuilder()
qsc.expression.append(block.expression)
block.gate("prepare_all")
measure_accumulator = set()
for command in tkc:
block, measure_accumulator = convert_command(
command,
qsc,
block,
names,
measure_accumulator,
n,
registers,
remove_measurements,
)
block.gate("measure_all", no_duplicate=True)
return qsc.build()
def convert_command(
command,
qsc,
block,
names,
measure_accumulator,
n,
registers,
remove_measurements,
remaps=None,
):
if remaps is None:
remaps = range(n)
op_type = command.op.type
if measure_accumulator:
if op_type == OpType.Measure:
qb = command.qubits[0]
if qb.reg_name in registers:
if len(qb.index) != 1:
target = registers[
qb.reg_name + "_".join([str(x) for x in qb.index])
][0]
else:
target = registers[qb.reg_name][qb.index[0]]
measure_accumulator.add(target.resolve_qubit()[1])
else:
                raise JaqalError("Register %s invalid!" % qb.reg_name)
if len(measure_accumulator) == n:
block.gate("measure_all")
measure_accumulator = set()
return block, measure_accumulator
else:
raise JaqalError(
"Cannot measure only qubits %s and not whole register."
% measure_accumulator
)
# measure_accumulator = set()
if op_type == OpType.Measure:
if not remove_measurements:
qb = command.qubits[0]
if len(qb.index) != 1:
target = registers[qb.reg_name + "_".join([str(x) for x in qb.index])][
0
]
else:
target = registers[qb.reg_name][qb.index[0]]
measure_accumulator = {target.resolve_qubit()[1]}
if len(measure_accumulator) == n:
block.gate("measure_all")
measure_accumulator = set()
elif op_type == OpType.Reset and remove_measurements:
pass
elif op_type == OpType.Barrier:
block = UnscheduledBlockBuilder()
qsc.expression.append(block.expression)
# Use barriers to inform the scheduler, as explained above.
elif op_type in (OpType.CircBox, OpType.ExpBox, OpType.PauliExpBox):
new_remaps = [remaps[qb.index[0]] for qb in command.qubits]
macro_block = SequentialBlockBuilder()
subcirq = command.op.get_circuit()
for cmd in subcirq:
convert_command(
cmd,
qsc,
macro_block,
names,
set(),
n,
registers,
remove_measurements,
new_remaps,
)
macro_name = f"macro_{len(qsc.macros)}"
qsc.macro(macro_name, [], macro_block)
block.append(qsc.build_gate(macro_name))
# TODO: Re-use macros when the same circuit block appears in multiple places.
elif op_type in names:
targets = command.qubits
qb_targets = []
for qb in targets:
if (
len(qb.index) != 1
): # TODO: Figure out how to pass multi-index qubits in macros.
qb_targets.append(
registers[qb.reg_name + "_".join([str(x) for x in qb.index])][0]
)
else:
qb_targets.append(registers[qb.reg_name][remaps[qb.index[0]]])
block.gate(
*names[op_type](
*qb_targets,
*[float(param) * np.pi for param in command.op.params],
)
)
else:
raise JaqalError(
"Instruction %s not available on trapped ion hardware; try unrolling first."
% op_type
)
return block, measure_accumulator
def tket_circuit_from_jaqal_circuit(circuit, names=None):
"""
Converts a :class:`jaqalpaq.core.Circuit` to a pytket circuit. All scheduling
information in the circuit will be lost in conversion. Loop statements and macros will
be unrolled.
:param jaqalpaq.core.Circuit circuit: The circuit to convert.
:param names: A mapping from names of native Jaqal gates to the corresponding pytket
gate names. If omitted, maps R, Sx, Sxd, Sz, Szd, Px, Py, Pz, Rz, and Sxx
to their pytket counterparts; and Sy, Syd, and MS to equivalent sequences of
pytket gates.
:type names: dict or None
    :returns: The same quantum circuit, converted to pytket.
    :rtype: pytket.circuit.Circuit
"""
if names is None:
names = _REVERSE_TKET_NAMES
tkr = {
reg.name: QubitRegister(name=reg.name, size=reg.size)
for reg in circuit.registers.values()
if reg.fundamental
}
expanded_circuit = fill_in_map(expand_subcircuits(expand_macros(circuit)))
visitor = TketTranspilationVisitor()
visitor.registers = tkr
visitor.names = names
return visitor.visit(expanded_circuit)
class TketTranspilationVisitor(Visitor):
registers = {}
names = {}
def visit_default(self, obj, *args, **kwargs):
return obj
def visit_LoopStatement(self, obj, circ):
subcirc = Circuit()
for qreg in self.registers.values():
subcirc.add_q_register(qreg)
self.visit(obj.statements, subcirc)
for i in range(obj.iterations):
circ.append(subcirc)
return circ
# TODO: This is inefficient, but at the moment I don't have a more efficient approach.
# If pytket implements a similar iteration construct, this should definitely be changed to use it.
def visit_BlockStatement(self, obj, circ):
for stmt in obj.statements:
self.visit(stmt, circ)
return circ
def visit_Circuit(self, obj, circ=None):
circ = Circuit()
for qreg in self.registers.values():
circ.add_q_register(qreg)
return self.visit(obj.body, circ)
def visit_GateStatement(self, obj, circ):
# Note: The code originally checked if a gate was a native gate, macro, or neither,
# and raised an exception if neither. This assumes everything not a macro is a native gate.
# Note: This could be more elegant with a is_macro method on gates.
if isinstance(obj.gate_def, Macro):
raise JaqalError("Expand macros before transpilation.")
elif obj.name == "prepare_all":
for reg in self.registers.values():
for qb in reg:
circ.add_gate(OpType.Reset, [qb])
elif obj.name == "measure_all":
for qreg in self.registers.values():
for bit in qreg:
cbit = Bit(len(circ.bits))
circ.add_bit(cbit)
circ.Measure(bit, cbit)
else:
classical_params = []
quantum_params = []
for pname, pval in obj.parameters.items():
if pname in [
cparam.name for cparam in obj.gate_def.classical_parameters
]:
classical_params.append(self.visit(pval))
else:
quantum_params.append(self.visit(pval))
[
circ.add_gate(*gate_data)
for gate_data in self.names[obj.name](
*quantum_params, *classical_params
)
]
def visit_Parameter(self, obj):
return self.visit(obj.resolve_value())
def visit_NamedQubit(self, obj):
reg, idx = obj.resolve_qubit()
return self.registers[reg.name][idx] | PypiClean |
.. chiplotle/documentation/chapters/faq/index.rst

***
FAQ
***
**Q:**
I'm trying to use Chiplotle with Windowz but it seems Chiplotle can't find my hardware. What should I do?
**A:**
A lack of communication between Chiplotle and your hardware can be due to a variety of reasons. Check out the :doc:`Hardware </chapters/hardware/index>` section for some possible causes.
**Q:**
When I send a text file with HPGL commands to my serial port in the following way::
$ stty /dev/ttyUSB0 9600
$ cat bird.hpgl > /dev/ttyUSB0
my plotter starts drawing fine but will eventually just start pausing and drawing random straight lines. What's going on? Do I have to be concerned with overflowing the plotter's internal RAM?
**A:**
Yes. The plotter's buffer will fill up quickly, so you need to be listening to the plotter for any buffer overflow warnings and errors. This is generally done in one of two ways:
#. Setting up hardware hand-shacking between the plotter and your computer.
#. Querying the plotter for its buffer size before sending data to avoid truncation.
This is one of the tasks that Chiplotle manages for you so you don't have to worry about these low level technicalities.
The easiest way to communicate with a plotter is to run Chiplotle by typing ``chiplotle`` from your terminal.
This will run Python, load the Chiplotle library, and instantiate soft-plotters for any hardware plotters found. Once in Chiplotle, send your HPGL file with the ``write_file(filename)`` method of the instantiated plotter(s), or send newly created HPGL commands via the ``write( )`` method, like so::
chiplotle> plotter.write_file('my_file.hpgl')
chiplotle> plotter.write(PA( ))
The ``plotter`` does the buffer managing for you.
See the :doc:`Tutorial </chapters/tutorial/index>` for more details.
**Q:**
Is there a facility in Chiplotle to send over already existing HPGL command files?
**A:**
Yes. Chiplotle comes with the ``plot_hpgl_file.py`` executable script designed exactly for this purpose. To send HPGL files to your plotter simply run the script from the command prompt with the file as the argument::
$ plot_hpgl_file.py my_file.hpgl
To see the usage instructions run ``plot_hpgl_file.py`` with no arguments. Note that Chiplotle simply pipes the file to the plotter and does not check the syntax of the HPGL file.
You can also send HPGL files to your plotter from within a live Chiplotle session using a Plotter's own ``write_file(filename)`` method, like so::
chiplotle> plotter.write_file('my_file.hpgl')
**Q:**
I installed chiplotle in Windowz 98. Unfortunately running ``chiplotle`` from the ``cmd`` shell does not work. Windowz isn't recognizing the ``chiplotle`` command.
**A:**
Windows is not very friendly with Chiplotle!
You will have to add Python (if you have not done so already) and the Chiplotle script files to your path. These are usually installed under ``C:\Python26`` and ``C:\Python26\Scripts``.
Windows has a built-in dialog for changing environment variables
(following guide applies to XP classical view): Right-click the icon for
your machine (usually located on your Desktop and called “My Computer”) and
choose Properties. Then, open the Advanced tab and click the
Environment Variables button.
My Computer ‣ Properties ‣ Advanced ‣ Environment Variables
In this dialog, you can add or modify User and System variables.
.. note::
To change System variables, you need non-restricted access to your machine (i.e. Administrator rights).
Another way of adding variables to your environment is using the set command::
set PATH=%PATH%;C:\path\to\chiplotle_executable
To make this setting permanent, you could add the corresponding command
line to your ``autoexec.bat``. ``msconfig`` is a graphical interface to this file.
Viewing environment variables can also be done more straightforwardly: the
command prompt will expand strings wrapped in percent signs automatically::
echo %PATH%
.. note::
Don't forget to also install ``hp2xx``.
| PypiClean |
### Chrome Remote Control interface and debug protocol and toolkit [Build Status](https://travis-ci.org/fake-name/ChromeController) [Coverage Status](https://coveralls.io/r/fake-name/ChromeController)
NOTE: Tests are currently broken due to an upstream Chromium bug: https://bugs.chromium.org/p/chromium/issues/detail?id=849972
The one-stop-shop for using google-chrome and/or chromium from python. With ChromeController you have
complete access to the entire debugging interface you'd get from javascript, but in a language that
doesn't make one want to go become a sheep herder instead.
Process lifecycle management, tab management, and the complete chrome remote debugging
interface are exposed through to python.
Automatically updates via code generation, by parsing the remote debugging protocol
description json files.
#### Quickstart:
```python
import ChromeController
additional_options = ['--user-data-dir=/tmp/x']
with ChromeController.ChromeContext(binary="google-chrome", additional_options=additional_options) as cr:
# Do a blocking navigate to a URL, and get the page content as served by the remote
# server, with no modification by local javascript (if applicable)
raw_source = cr.blocking_navigate_and_get_source("http://www.google.com")
# Since the page is now rendered by the blocking navigate, we can
# get the page source after any javascript has modified it.
rendered_source = cr.get_rendered_page_source()
# We can get the current browser URL, after any redirects.
current_url = cr.get_current_url()
# We can get the page title as the browser sees it.
page_title, page_url = cr.get_page_url_title()
# Or take a screenshot
# The screenshot is the size of the remote browser's configured viewport,
# which by default is set to 1024 * 1366. This size can be changed via the
# Emulation_setVisibleSize(width, height) function if needed.
png_bytestring = cr.take_screeshot()
# We can spoof user-agent headers:
new_headers = {
'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36,gzip(gfe)',
'Accept-Language' : 'en-us, en;q=1.0,fr-ca, fr;q=0.5,pt-br, pt;q=0.5,es;q=0.5',
'Accept' : 'image/png, text/plain;q=0.8, text/html;q=0.9, application/xhtml+xml, application/xml, */*;q=0.1',
'Accept-Encoding' : 'gzip,deflate',
}
cr.update_headers(new_headers)
# We can extract the cookies from the remote browser.
# This call returns a list of python http.cookiejar.Cookie cookie
# objects (the Chrome cookies are converted to python cookies).
cookie_list = cr.get_cookies()
# We can also set cookies in the remote browser.
# Again, this interacts with http.cookiejar.Cookie() objects
# directly.
cook = http.cookiejar.Cookie(<params>)
cr.set_cookie(cook)
# We can create more tabs in the current browser context.
# Note that these additional tabs are scoped to the same lifetime as the original
# chromium object (`cr`), so they will become invalid after leaving the
# ChromeContext() context manager.
tab_2 = cr.new_tab()
tab_3 = cr.new_tab()
# At this time, multiple tabs are not thread safe, so they *probably* shouldn't
# be accessed concurrently. This *is* something that I'd like to change.
```
This library makes extensive use of the python `logging` framework, and logs to
the `Main.ChromeController.*` log path.
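
As a minimal sketch (using only the standard `logging` module, nothing from this library), surfacing that log output during development might look like:

```python
import logging

# Send log records to the console with a simple format.
logging.basicConfig(format="%(name)s - %(levelname)s - %(message)s")

# Turn up verbosity for ChromeController's internal loggers only.
# Child loggers (e.g. a hypothetical "Main.ChromeController.tab",
# shown for illustration) inherit this level from the parent.
logging.getLogger("Main.ChromeController").setLevel(logging.DEBUG)
```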
Wrapper classes for the remote interface are created automatically by parsing
the chrome `protocol.json` file and generating the code through dynamic
AST building. While this is not the most maintainable design, I chose it mostly
because I wanted an excuse to learn/experiment with python AST manipulation.
A higher level automation layer is implemented on top of the autogenerated
wrapper. Both the higher-level interface, and it's associated documentation are
very much a work in process at the moment.
Interface documentation is here:
https://fake-name.github.io/ChromeController/ChromeController.ChromeRemoteDebugInterface.html
All remote methods are wrapped in named functions, with (partial) validation
of passed parameter types and return types.
Right now, simple parameter type validation is done (e.g. text arguments must be
of type string, numeric arguments must be either an int or a float, etc..).
However, the compound type arguments (basically, anything that takes an array
or object) are not validated, due to the complexity of properly constructing
type validators for their semantics given the architecture (read: writing the
validator in raw AST broke my brain).
Tested mostly on python 3.5, 3.6, lightly on 3.8 and 3.7, all on linux. If you are
using python 2, please stahp. It works with normal chromium and on windows,
but that has only been very lightly used. My test-target is the
google-provided `chrome` binary.
Note that this tool generates and manipulates the AST directly, so it is
EXTREMELY sensitive to implementation details. It is *probably* broken on
python > 3.8 or < 3.5.
Transport layer (originally) from https://github.com/minektur/chrome_remote_shell
The library also has a relatively useful CLI interface, principally useful for
doing things like fetching pages which have jerberscript-rendered content:
```
python3 -m ChromeController --help
Usage: __main__.py [OPTIONS] COMMAND [ARGS]...
ChromeController
Usage: python3 -m ChromeController [-s | --silent] [-v | --verbose]
python3 -m ChromeController fetch <url> [--binary <bin_name>] [--outfile <out_file_name>]
python3 -m ChromeController update
python3 -m ChromeController (-h | --help)
python3 -m ChromeController --version
Options:
-s --silent Suppress all output aside from the fetched content
This basically makes ChromeController act like a alternative to curl
-v --verbose The opposite of silent. Causes the internal logging to output
all traffic over the chromium control interface. VERY noisy.
--version Show version.
fetch Fetch a specified URL's content, and output it to the console.
Options:
-v, --verbose The opposite of silent. Causes the internal logging to output
all traffic over the chromium control interface. VERY noisy.
-s, --silent Suppress all output aside from the fetched content.
This can be used to make ChromeController act like
an alternative to curl with proper JS rendering.
--help Show this message and exit.
Commands:
fetch Fetch a specified URL's content, and output...
update Update the generated class
version Print the ChromeController Version
```
License:
BSD
------
Current usage so far has basically been to find bugs or strangeness in the
chromium remote debug interface:

- Strange behaviour in `network.getCookies` (fixed)
https://bugs.chromium.org/p/chromium/issues/detail?id=668932
- `network.clearBrowserCookies` appears to have no effect (fixed)
https://bugs.chromium.org/p/chromium/issues/detail?id=672744
- General header issues in the devtools interface:
https://bugs.chromium.org/p/chromium/issues/detail?id=849972 | PypiClean |
# basemesh/triangle/_triangulate.py
import atexit
import datetime
import functools
import logging
import os
import pathlib
import platform
import subprocess
import stat
import tempfile
from typing import Any, Callable, Collection, Dict, Iterator, Optional, Tuple, Union
from ._geometry import Element, Node, Segment
from .io import read_node, read_ele, write_poly
from ._markers import HoleMarker, RegionMarker
__all__ = [
'command',
'read_output',
'run_triangle',
'triangulate',
]
# Path to the local Triangle executables
_TRIANGLE_BIN = pathlib.Path(__file__).parent / 'bin'
_log = logging.getLogger('triangle')
def _print(msg: str) -> None:
"""Print a message to the console, stripping trailing whispace."""
print(msg.rstrip())
def command(**kwargs: Any) -> str:
"""Convert input keyword arguments to a triangle command string.
    The supported keyword arguments, alongside their respective Triangle
    command strings, are listed below.
Arguments::
min_angle <float>: q<value>
max_area <float>: a<value>
max_steiner <int>: S<value>
Flags::
input_is_poly: p
conforming_delaunay: D
jettison_nodes: j
write_edges: e
write_neigh: n
no_boundary_in_output: B
no_iteration_numbers: I
ignore_holes: O
no_steiner_on_boundary: Y
no_steiner_on_segment: YY
quiet: Q
verbose: V
keep_convex_hull: c
refine: r
no_output_node: N
no_output_ele: E
no_output_poly: P
use_region_areas: a
use_region_attributes: A
Debug level (specify multiple times for more output)::
debug_level: V[VVVV]
"""
switches = {
'input_is_poly': 'p',
'conforming_delaunay': 'D',
'jettison_nodes': 'j',
'write_edges': 'e',
'write_neigh': 'n',
'no_boundary_in_output': 'B',
'no_iteration_numbers': 'I',
'ignore_holes': 'O',
'no_steiner_on_boundary': 'Y',
'no_steiner_on_segment': 'YY',
'quiet': 'Q',
'verbose': 'V',
'keep_convex_hull': 'c',
'refine': 'r',
'no_output_node': 'N',
'no_output_ele': 'E',
'no_output_poly': 'P',
'use_region_areas': 'a',
'use_region_attributes': 'A'
}
flags = ''
for key, value in kwargs.items():
# Basic switches
if key in switches:
if value:
flags += switches[key]
continue
# Switches with arguments
if key == 'min_angle':
flags += f'q{value}'
elif key == 'max_area':
flags += f'a{value}'
elif key == 'max_steiner':
flags += f'S{value}'
# Debug level
elif key == 'debug_level':
flags += 'V' * int(value)
else:
raise ValueError(f'Unknown triangle flag: {key}={value}')
return flags
def read_output(
path: pathlib.Path) -> Tuple[Iterator[Node], Iterator[Element]]:
"""Return generators over the Triangle nodes and elemnets.
This function takes the stem of the output files as an input, but
does expect the files to be named as per Triangle's convention.
This means that a triangulation performed on the file
``input.poly`` will produce the output files ``input.1.node`` and
``input.1.ele``. This function then expects to be passed the stem
``input.1`` and will add the ``.node`` and ``.ele`` suffixes
itself.
This currently only returns the nodes and elements as these fully
define the triangulation.
This function returns generators to allow for lazy processing of
    the output files. If your application requires lists of all nodes
    and elements instead, you can convert the generators into lists:
.. code-block:: python3
nodes, elements = [list(n) for n in read_output(path)]
:param path: Base name of the Triangle output files, without
extension.
:type path: :class:`pathlib.Path`
:return: The node and element generators.
:rtype: :class:`tuple` [
:class:`collections.abc.Iterator` [:class:`Node`],
:class:`collections.abc.Iterator` [:class:`Element`]]
"""
path_node = path.with_suffix(path.suffix + '.node')
path_ele = path.with_suffix(path.suffix + '.ele')
if not path_node.exists() or not path_ele.exists():
raise FileNotFoundError(
f'Missing output files: {path_node} and {path_ele}')
# The node and element generators return a tuple consisting of the ID and
# object. These generators discard the unused ID.
return ((n for _, n in read_node(path_node)),
(e for _, e in read_ele(path_ele)))
def run_triangle(input_path: pathlib.Path, triangle_cmd: str,
redirect_stdout: Union[Callable[[str], None], None] = _print,
) -> None:
"""Run Triangle on the given input file.
By default, all Triangle output is redirected to the Python
console. This can be overridden by passing another function to the
`redirect_stdout` argument. Setting this to `None` will disable
output entirely.
:param input_path: Path to the input file.
:type input_path: :class:`pathlib.Path`
:param triangle_cmd: Command string to pass to Triangle.
:type triangle_cmd: :class:`str`
:param redirect_stdout: Function to call with each line of output.
:type redirect_stdout: :class:`collections.abc.Callable` [
[:class:`str`], :obj:`None`] | :obj:`None`
"""
triangle_path = _find_triangle_executable()
cmd = f'"{triangle_path}" -{triangle_cmd} "{input_path}"'
# Instantiate a new subprocess
_log.info('Running Triangle: %s', cmd)
kwargs: Dict[str, Any] = {}
if os.name == 'nt':
kwargs.update({'creationflags': subprocess.CREATE_NO_WINDOW})
else:
# TODO: The explicit "shell=True" flag risks shell injection, in
# particular if we add generic arguments as part of this call. Double-
# check if this flag is really needed on Linux and MacOS as it is an
# ongoing security consideration.
kwargs.update({'shell': True})
process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
universal_newlines=True, **kwargs)
_log.debug('Triangle subprocess spawned (PID %d)', process.pid)
stdout = process.stdout
assert stdout is not None
try:
for line in iter(stdout.readline, ''):
_log.debug('Subprocess: %s', line.rstrip())
if redirect_stdout is not None:
redirect_stdout(line)
finally:
stdout.close()
return_code = process.wait()
_log.debug('Triangle subprocess exited with return code %d', return_code)
if return_code:
raise subprocess.CalledProcessError(return_code, cmd)
def triangulate(nodes: Collection[Node], segments: Collection[Segment],
holes: Optional[Collection[HoleMarker]] = None,
regions: Optional[Collection[RegionMarker]] = None,
write_markers: bool = False,
triangle_io_dir: Optional[Union[str, pathlib.Path]] = None,
triangle_cmd: str = 'p',
redirect_stdout: Union[Callable[[str], None], None] = _print,
) -> pathlib.Path:
"""Run the Triangle executable on the given input geometries.
:param nodes: The nodes of the input geometry.
:type nodes: :class:`collections.abc.Collection` [
:class:`Node`]
:param segments: The segments of the input geometry.
:type segments: :class:`collections.abc.Collection` [
:class:`Segment`]
:param holes: The holes of the input geometry.
:type holes: :class:`collections.abc.Collection` [
:class:`HoleMarker`] | :obj:`None`
:param regions: The regions of the input geometry.
:type regions: :class:`collections.abc.Collection` [
:class:`RegionMarker`] | :obj:`None`
:param write_markers: Whether to write boundary markers in the
input files. A marker of 0 is used if no marker is given for a
given node or segment.
:type write_markers: :class:`bool`
:param triangle_io_dir: The directory to write the Triangle input
and output files to. If None, a temporary directory is created
and will be deleted when Python exits.
:type triangle_io_dir:
:class:`pathlib.Path` | :class:`str` | :obj:`None`
:param triangle_cmd: Command string to pass to Triangle.
:type triangle_cmd: :class:`str`
:param redirect_stdout: Function to call with each line of output.
:type redirect_stdout: :class:`collections.abc.Callable` [
[:class:`str`], :obj:`None`] | :obj:`None`
:return: The stem of the Triangle output files. If the stem is
``box.1``, the files may be named ``box.1.node`` and
``box.1.ele``.
:rtype: :class:`pathlib.Path`
"""
_log.info('Triangle run requested')
# Find executables
    _log.debug('Locating executable for operating system "%s" (%s)',
               os.name, ', '.join(platform.architecture()))
# Set up the directory containing the triangle input and output files
if isinstance(triangle_io_dir, str):
triangle_io_dir = pathlib.Path(triangle_io_dir)
if triangle_io_dir is None:
# Create temporary directory
triangle_io_dir = pathlib.Path(tempfile.mkdtemp())
# Schedule temporary directory for deletion on exit
atexit.register(functools.partial(_clear_dir, triangle_io_dir))
triangle_io_dir.mkdir(parents=True, exist_ok=True)
# Generate input file
_log.debug('Generating input file')
input_path = triangle_io_dir / _get_input_filename('poly')
write_poly(input_path, nodes, segments, holes, regions, write_markers)
# Run Triangle
run_triangle(input_path, triangle_cmd, redirect_stdout)
# Open output
return _get_output_path(input_path)
def _clear_dir(path: pathlib.Path) -> None:
"""Delete a directory and all its contents.
:param path: The directory to delete.
:type path: :class:`pathlib.Path`
"""
if not path.exists():
return
for entry in path.iterdir():
if entry.is_dir():
_clear_dir(entry)
else:
try:
entry.unlink()
except PermissionError:
print(f'Unable to delete temporary file {entry}')
try:
path.rmdir()
except OSError:
print(f'Unable to delete temporary directory {path}')
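The recursive delete above can be checked against a scratch directory; this is a trimmed sketch without the ``PermissionError`` reporting:

```python
import pathlib
import tempfile

def clear_dir(path: pathlib.Path) -> None:
    # recurse into subdirectories, unlink files, then remove the directory
    if not path.exists():
        return
    for entry in path.iterdir():
        if entry.is_dir():
            clear_dir(entry)
        else:
            entry.unlink()
    path.rmdir()

root = pathlib.Path(tempfile.mkdtemp())
(root / 'sub').mkdir()
(root / 'sub' / 'a.txt').write_text('x')
clear_dir(root)
removed = not root.exists()
```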
def _find_triangle_executable() -> pathlib.Path:
"""Find the appropriate Triangle executable for this system."""
# Microsoft Windows (NT)
if os.name == 'nt':
filename = 'triangle_32.exe'
# Unix-like systems (Linux, MacOS, BSD, etc.)
elif os.name == 'posix':
# MacOS
if platform.system() == 'Darwin':
raise NotImplementedError('MacOS not yet supported')
# Linux
if '32bit' in platform.architecture():
filename = 'triangle_linux_32'
else:
filename = 'triangle_linux_64'
# Other (unsupported)
else:
raise NotImplementedError(f'Unsupported operating system: {os.name}')
# Look for executable
triangle_path = _TRIANGLE_BIN / filename
if not os.path.isfile(triangle_path):
raise RuntimeError(f'Unable to locate executable "{triangle_path}"')
# Set executable flag for unix systems
if os.name == 'posix':
        os.chmod(triangle_path, os.stat(triangle_path).st_mode | stat.S_IXUSR)
_log.info('Compatible Triangle binary found: %s', filename)
return triangle_path
def _get_input_filename(suffix: str) -> str:
"""Generate a unique base filename for Triangle input files."""
if not suffix.startswith('.'):
suffix = f'.{suffix}'
timestamp = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
return f'triangle_{timestamp}{suffix}'
def _get_output_path(input_path: pathlib.Path) -> pathlib.Path:
"""Determine the name of the Triangle output files.
This only returns the base name, e.g. ``example.1``. The extensions
``.node``, ``.ele``, or ``.neigh`` must be added by the caller
    using :meth:`pathlib.Path.with_suffix`.
:param input_path: The input file name.
:type input_path: :class:`pathlib.Path`
:return: The base name of the output files.
:rtype: :class:`pathlib.Path`
"""
# Strip extension
base_name = input_path.with_suffix('')
# Determine current iteration number
try:
iter_num = int(base_name.suffix[1:])
except ValueError:
iter_num = 0
# Increment iteration number
iter_num += 1
# Return new base name
return base_name.with_suffix(f'.{iter_num}') | PypiClean |
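The stem-increment rule implemented above behaves as follows (standalone re-implementation for illustration):

```python
import pathlib

def next_output_stem(input_path: pathlib.Path) -> pathlib.Path:
    base_name = input_path.with_suffix('')     # strip .poly/.node/...
    try:
        iter_num = int(base_name.suffix[1:])   # '.1' -> 1
    except ValueError:
        iter_num = 0                           # no iteration number yet
    return base_name.with_suffix(f'.{iter_num + 1}')

first = str(next_output_stem(pathlib.Path('box.poly')))     # 'box.1'
second = str(next_output_stem(pathlib.Path('box.1.node')))  # 'box.2'
```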
/Gato-1.2.7.tar.gz/Gato-1.2.7/EditObjectAttributesDialog.py | from Tkinter import *
from ScrolledText import *
import tkSimpleDialog
import tkMessageBox
from tkColorChooser import askcolor
import copy
import sys
import os
import types
import logging
def typed_assign(var, val):
result = type(var)(val)
result.__dict__ = copy.copy(var.__dict__)
return result
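`typed_assign` keeps both the subclass and the per-instance attributes of `var` while taking on the new value; a quick self-contained illustration (the `Tagged` class is made up for the demo):

```python
import copy

def typed_assign(var, val):
    # rebuild "val" as var's type, then copy var's instance attributes over
    result = type(var)(val)
    result.__dict__ = copy.copy(var.__dict__)
    return result

class Tagged(int):
    pass

x = Tagged(3)
x.useDefault = 1
y = typed_assign(x, 7)
```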
#-------------------------------------------------------------------------------
class TkBoolEntry:
"""Tk entry field for selecting one Bool"""
def __init__(self, master, default=False):
self.frame = Frame(master, relief=FLAT)
self.status = IntVar()
self.status.set(default)
useDefaultButton = Checkbutton(self.frame, variable=self.status, command=self.toggle)
useDefaultButton.grid(row=0, column=0, padx=4, pady=3, sticky=W)
def tkWidget(self):
return self.frame
    def get(self):
        return bool(self.status.get())
    def set(self, value):
        self.status.set(value)
    def toggle(self):
        # nothing to do here: the Checkbutton toggles self.status (an IntVar) itself
        pass
class TkStringEntry:
"""Tk entry field for editing strings"""
def __init__(self, master, width):
self.entryWidget = Entry(master, width=width, exportselection=FALSE)
def tkWidget(self):
return self.entryWidget
def get(self):
return self.entryWidget.get()
def set(self, value):
self.entryWidget.delete(0,END)
self.entryWidget.insert(0,"%s" % value)
def select(self):
self.entryWidget.selection_range(0,"end")
self.entryWidget.focus_set()
class TkIntEntry(TkStringEntry):
"""Tk entry field for editing one integer"""
def get(self):
return int(self.entryWidget.get())
class TkFloatEntry(TkStringEntry):
"""Tk entry field for editing one float"""
def get(self):
return float(self.entryWidget.get())
class TkDefaultMixin:
"""Mixin for TkStringEntry, TkIntEntry, TkFloatEntry, ... to deal with
values which have an externally defined default value. Combination
of 'use default' checkbox and corresponding entry field """
def __init__(self, master, useDefault, defaultValue, defaultText=None):
if defaultText is None:
defaultText="Use default ("+str(defaultValue)+")"
self.frame = Frame(master, relief=FLAT)
self.useDefault = IntVar()
self.useDefault.set(useDefault)
self.defaultValue = defaultValue
useDefaultButton = Checkbutton(self.frame, text=defaultText,
variable=self.useDefault,
command=self.toggleDefault)
useDefaultButton.grid(row=0, column=0, padx=4, pady=3, sticky=W)
def finish(self):
self.entryWidget.grid(row=0, column=1, padx=4, pady=3, sticky=W)
self.switchDefault(self.useDefault.get())
def UseDefault(self):
return self.useDefault.get()
def switchDefault(self, value):
if value == 0:
self.entryWidget['state'] = NORMAL
self.entryWidget.delete(0,END)
self.set(self.defaultValue)
else:
self.entryWidget.delete(0,END)
self.entryWidget['state'] = DISABLED
def toggleDefault(self):
self.switchDefault(self.useDefault.get())
class TkDefaultStringEntry(TkStringEntry, TkDefaultMixin):
def __init__(self, master, width, useDefault, defaultValue):
TkDefaultMixin.__init__(self, master, useDefault, defaultValue)
TkStringEntry.__init__(self, self.frame, width)
self.finish()
def tkWidget(self): # To avoid ambiguity
return self.frame
class TkDefaultIntEntry(TkIntEntry, TkDefaultMixin):
def __init__(self, master, width, useDefault, defaultValue):
TkDefaultMixin.__init__(self, master, useDefault, defaultValue)
TkIntEntry.__init__(self, self.frame, width)
self.finish()
def tkWidget(self): # To avoid ambiguity
return self.frame
def get(self):
if self.UseDefault():
return self.defaultValue
else:
return TkIntEntry.get(self)
class TkDefaultFloatEntry(TkFloatEntry, TkDefaultMixin):
def __init__(self, master, width, useDefault, defaultValue):
TkDefaultMixin.__init__(self, master, useDefault, defaultValue)
TkFloatEntry.__init__(self, self.frame, width)
self.finish()
def tkWidget(self): # To avoid ambiguity
return self.frame
def get(self):
if self.UseDefault():
return self.defaultValue
else:
return TkFloatEntry.get(self)
class TkPopupSelector:
def __init__(self, master, value2pop, pop2value, width):
self.value2pop = value2pop
self.pop2value = pop2value
self.popupvalue = StringVar()
self.popupvalue.set(self.pop2value.keys()[0]) # XXX first value as default
# XXX Uuughhh
keys = self.value2pop.keys()
keys.sort()
pops = map(lambda x: value2pop[x], keys)
#logging.debug("pops = %s" % pops)
args = (master, self.popupvalue) + tuple(pops)
self.tkwidget = OptionMenu(*args)
self.tkwidget.config(height=1, width=width)
def tkWidget(self):
return self.tkwidget
def get(self):
return self.pop2value[self.popupvalue.get()]
def set(self, value):
try:
self.popupvalue.set(self.value2pop[value])
except:
self.popupvalue.set(self.pop2value.keys()[0]) # XXX first value as default
def select(self):
# Cant choose invalid value with popup
pass
class TkStringPopupSelector:
def __init__(self, master, strings):
self.strings = strings
self.popupvalue = StringVar()
if len(self.strings) > 0:
self.popupvalue.set(self.strings[0]) # XXX first value as default
width = max(map(len, self.strings))
args = (master, self.popupvalue) + tuple(self.strings)
self.tkwidget = OptionMenu(*args)
self.tkwidget.config(height=1, width=width)
def tkWidget(self):
return self.tkwidget
def get(self):
return self.popupvalue.get()
def set(self, value):
try:
self.popupvalue.set(value)
except:
self.popupvalue.set(self.strings[0]) # XXX first value as default
def select(self):
# Cant choose invalid value with popup
pass
class TkColorSelector:
def __init__(self, master, color='black'):
#self.tkwidget = Button(master, width=8, command=self.editColor)
self.tkwidget = Frame(master, height=18, width=60, relief=RIDGE, borderwidth=1)
self.tkwidget.bind("<ButtonRelease-1>", self.editColor)
self.set(color)
def editColor(self, event):
color = askcolor(self.color)[1]
if color != None:
self.set(color)
def tkWidget(self):
return self.tkwidget
def get(self):
return self.color
def set(self, value):
self.color = value
self.tkwidget.config(bg=self.color)
def select(self):
# Cant choose invalid value with popup
pass
class EditObjectAttributesDialog(tkSimpleDialog.Dialog):
""" Creates an editable (pseudo-)inspector for a selected set of
attributes of a given object
- master : tk master widget
- object : the object, whose attributes we want to edit
- attr_names : a list of attr_names or a dictionary of attr_names and displayed Text
By making use of Python 2.2's capability of subclassing built-in
types such as ints, information about editing etc. is conveyed.
An attr must have:
- validate(value) method [return 1, if value is a valid new value for attr]
The class of an attr can have the following mix-ins:
- Popubable
- WithDefault
"""
def __init__(self, master, object, attr_names):
self.object = object
self.attr_names = attr_names
self.edit = {}
tkSimpleDialog.Dialog.__init__(self, master, "Edit: %s" % self.object.desc)
def editWidget(self, master, object, attr_name):
""" Create a widget capable of editing attr and insert attr's current value"""
attr = object.__dict__[attr_name]
attr_type = type(attr)
widget = None
default = isinstance(attr, WithDefault) # has a WithDefault mixin
if isinstance(attr, Popupable):
widget = TkPopupSelector(master, attr.val2pop, attr.pop2val, attr.width)
elif isinstance(attr, ValidatingBool):
widget = TkBoolEntry(master)
elif isinstance(attr, str):
if default:
widget = TkDefaultStringEntry(master, max(32, len(attr)), attr.useDefault, attr)
else:
widget = TkStringEntry(master, max(32, len(attr)))
elif isinstance(attr, int):
if default:
widget = TkDefaultIntEntry(master, 6, attr.useDefault, attr)
else:
widget = TkIntEntry(master, 6)
elif isinstance(attr, float):
if default:
widget = TkDefaultFloatEntry(master, 8, attr.useDefault, attr)
else:
widget = TkFloatEntry(master, 8)
widget.set(attr)
return widget
def body(self, master):
self.resizable(0,0)
# Header Zeile
label = Label(master, text="Name", anchor=E)
label.grid(row=0, column=0, padx=4, pady=3, sticky=E)
label = Label(master, text="Value", anchor=W)
label.grid(row=0, column=1, padx=4, pady=3, sticky=W)
cur_row = 1
if isinstance(self.attr_names, list):
for attr in self.attr_names:
label = Label(master, text="%s" % attr, anchor=E)
label.grid(row=cur_row, column=0, padx=4, pady=3, sticky=E)
self.edit[attr] = self.editWidget(master, self.object, attr)
if self.edit[attr] != None:
self.edit[attr].tkWidget().grid(row=cur_row, column=1, padx=2, pady=1, sticky=W)
cur_row = cur_row + 1
else:
for attr in self.attr_names.keys():
label = Label(master, text="%s" % self.attr_names[attr], anchor=E)
label.grid(row=cur_row, column=0, padx=4, pady=3, sticky=E)
self.edit[attr] = self.editWidget(master, self.object, attr)
if self.edit[attr] != None:
self.edit[attr].tkWidget().grid(row=cur_row, column=1, padx=2, pady=1, sticky=W)
cur_row = cur_row + 1
def validate(self):
for attr_name in self.edit.keys():
try:
# In python 2.2 we can subclass attributes and add a validate method
# to attributes
value = self.edit[attr_name].get()
if self.object.__dict__[attr_name].validate(value) == 0:
raise ValueError
except ValueError:
msg = "Please enter a valid value for %s" % attr_name
tkMessageBox.showwarning("Invalid Value", msg, parent=self)
self.edit[attr_name].select()
return 0
# Everything is valid => set values
for attr_name in self.edit.keys():
self.object.__dict__[attr_name] = typed_assign(self.object.__dict__[attr_name], self.edit[attr_name].get())
if isinstance(self.object.__dict__[attr_name], WithDefault):
self.object.__dict__[attr_name].useDefault = self.edit[attr_name].useDefault.get()
return 1
#-------------------------------------------------------------------------------
class WithDefault:
"""Mix-in for variables which have a default value"""
def setDefault(self, useDefault, defaultValue):
self.useDefault = useDefault
self.defaultValue = defaultValue
def validate(self, value):
## if self.useDefault:
## return 1
## else:
## return 1 # XXX How can I call a method of the class I am mixed too
return 1
class Popupable:
"""Mix-in for variables which can be edited via a pop-up menu
- val2pop : dict mapping value to string for pop up menu
- pop2val: dict mapping pop up menu string to value
- width: maximal string length in pop up
"""
def setPopup(self, val2pop, pop2val = None, width = None):
self.val2pop = val2pop
self.pop2val = None
self.width = None
if pop2val == None:
self.pop2val = {} # Private copy
self.width = 0
for val in val2pop.keys():
pop = val2pop[val]
self.width = max(len(pop), self.width)
self.pop2val[pop] = val
else:
self.pop2val = pop2val
self.width = width
def validate(self, value):
return 1
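`setPopup` derives the reverse mapping and the pop-up width from `val2pop` when they are not supplied; the same derivation in isolation:

```python
val2pop = {1: 'aaa', 2: 'xxx', 3: 'sss'}
pop2val = {}
width = 0
for val, pop in val2pop.items():
    width = max(width, len(pop))  # widest label drives the menu width
    pop2val[pop] = val
```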
##class PopupableStr(str):
## """Class for variables which can be edited via a pop-up menu
## - values: array of values
## - width: maximal string length in pop up
## """
## def setPopup(self, values, width = None):
## self.values = values
## self.width = width
## if width == None:
## self.width = 0
## for s in values:
## self.width = max(len(s), self.width)
## def validate(self, value):
## return 1
class AlwaysValidate:
"""Mix-in for variables which always are valid"""
def validate(self, value):
return 1
#-------------------------------------------------------------------------------
class ValidatingBool(int):
"""Editable replacement for bool"""
    def validate(self, value):
        if value is True or value is False:
            return 1
        return 0
class ValidatingInt(int, AlwaysValidate):
"""Editable replacement for ints"""
pass
class ValidatingFloat(float, AlwaysValidate):
"""Editable replacement for floats"""
pass
class ValidatingString(str, AlwaysValidate):
"""Editable replacement for strings"""
pass
class PopupableInt(int, Popupable):
"""A replacement for ints editable via a pop-up"""
pass
class Probability(float):
"""An editable float taking values from [0,1]"""
def validate(self, value):
if 0.0 <= value and value <= 1.0:
return 1
else:
return 0
class DefaultedInt(int, WithDefault):
"""An editable int with a default value"""
pass
class DefaultedFloat(float, WithDefault):
"""An editable float with a default value"""
pass
class DefaultedString(str, WithDefault):
    """An editable string with a default value"""
pass
#======================================================================
#
# Demo:
#
class TkTestFrame(Frame):
def __init__(self, parent=None):
Frame.__init__(self,parent)
Pack.config(self)
self.createWidgets()
self.desc = ValidatingString("The TkTestFrame")
self.x = DefaultedInt(1)
self.x.setDefault(1, 122)
self.y = ValidatingFloat(2.33)
self.choose = PopupableInt(3)
self.pop2val = {"aaa":1, "xxx":2, "sss":3}
self.val2pop = {1:"aaa", 2:"xxx", 3:"sss"}
self.choose.setPopup(self.val2pop, self.pop2val, 5)
def createWidgets(self):
self.QUIT = Button(self, text='QUIT', foreground='red',
command=self.quit)
self.QUIT.pack(side=LEFT)
self.About = Button(self, text='Preferences', foreground='red',
command=self.About)
self.About.pack(side=LEFT)
def About(self):
aboutBox = EditObjectAttributesDialog(self.master, self, ['desc', 'x', 'y', 'choose'])
del self.pop2val["aaa"]
del self.val2pop[1]
aboutBox = EditObjectAttributesDialog(self.master, self, ['desc', 'x', 'y', 'choose'])
if __name__ == '__main__':
app = TkTestFrame()
app.mainloop() | PypiClean |
/ocn-xmlchecker.env.tar.gz/env (copy)/lib/python2.7/encodings/utf_32.py | import codecs, sys
### Codec APIs
encode = codecs.utf_32_encode
def decode(input, errors='strict'):
return codecs.utf_32_decode(input, errors, True)
class IncrementalEncoder(codecs.IncrementalEncoder):
def __init__(self, errors='strict'):
codecs.IncrementalEncoder.__init__(self, errors)
self.encoder = None
def encode(self, input, final=False):
if self.encoder is None:
result = codecs.utf_32_encode(input, self.errors)[0]
if sys.byteorder == 'little':
self.encoder = codecs.utf_32_le_encode
else:
self.encoder = codecs.utf_32_be_encode
return result
return self.encoder(input, self.errors)[0]
def reset(self):
codecs.IncrementalEncoder.reset(self)
self.encoder = None
def getstate(self):
# state info we return to the caller:
# 0: stream is in natural order for this platform
# 2: endianness hasn't been determined yet
# (we're never writing in unnatural order)
return (2 if self.encoder is None else 0)
def setstate(self, state):
if state:
self.encoder = None
else:
if sys.byteorder == 'little':
self.encoder = codecs.utf_32_le_encode
else:
self.encoder = codecs.utf_32_be_encode
class IncrementalDecoder(codecs.BufferedIncrementalDecoder):
def __init__(self, errors='strict'):
codecs.BufferedIncrementalDecoder.__init__(self, errors)
self.decoder = None
def _buffer_decode(self, input, errors, final):
if self.decoder is None:
(output, consumed, byteorder) = \
codecs.utf_32_ex_decode(input, errors, 0, final)
if byteorder == -1:
self.decoder = codecs.utf_32_le_decode
elif byteorder == 1:
self.decoder = codecs.utf_32_be_decode
elif consumed >= 4:
raise UnicodeError("UTF-32 stream does not start with BOM")
return (output, consumed)
return self.decoder(input, self.errors, final)
def reset(self):
codecs.BufferedIncrementalDecoder.reset(self)
self.decoder = None
def getstate(self):
        # additional state info from the base class must be None here,
# as it isn't passed along to the caller
state = codecs.BufferedIncrementalDecoder.getstate(self)[0]
# additional state info we pass to the caller:
# 0: stream is in natural order for this platform
# 1: stream is in unnatural order
# 2: endianness hasn't been determined yet
if self.decoder is None:
return (state, 2)
addstate = int((sys.byteorder == "big") !=
(self.decoder is codecs.utf_32_be_decode))
return (state, addstate)
def setstate(self, state):
# state[1] will be ignored by BufferedIncrementalDecoder.setstate()
codecs.BufferedIncrementalDecoder.setstate(self, state)
state = state[1]
if state == 0:
self.decoder = (codecs.utf_32_be_decode
if sys.byteorder == "big"
else codecs.utf_32_le_decode)
elif state == 1:
self.decoder = (codecs.utf_32_le_decode
if sys.byteorder == "big"
else codecs.utf_32_be_decode)
else:
self.decoder = None
class StreamWriter(codecs.StreamWriter):
def __init__(self, stream, errors='strict'):
self.encoder = None
codecs.StreamWriter.__init__(self, stream, errors)
def reset(self):
codecs.StreamWriter.reset(self)
self.encoder = None
def encode(self, input, errors='strict'):
if self.encoder is None:
result = codecs.utf_32_encode(input, errors)
if sys.byteorder == 'little':
self.encoder = codecs.utf_32_le_encode
else:
self.encoder = codecs.utf_32_be_encode
return result
else:
return self.encoder(input, errors)
class StreamReader(codecs.StreamReader):
def reset(self):
codecs.StreamReader.reset(self)
try:
del self.decode
except AttributeError:
pass
def decode(self, input, errors='strict'):
(object, consumed, byteorder) = \
codecs.utf_32_ex_decode(input, errors, 0, False)
if byteorder == -1:
self.decode = codecs.utf_32_le_decode
elif byteorder == 1:
self.decode = codecs.utf_32_be_decode
        elif consumed >= 4:
            raise UnicodeError("UTF-32 stream does not start with BOM")
return (object, consumed)
### encodings module API
def getregentry():
return codecs.CodecInfo(
name='utf-32',
encode=encode,
decode=decode,
incrementalencoder=IncrementalEncoder,
incrementaldecoder=IncrementalDecoder,
streamreader=StreamReader,
streamwriter=StreamWriter,
) | PypiClean |
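The BOM-driven endianness detection this module is built around can be observed directly through the codecs API: `utf_32_ex_decode` returns the byte order it inferred.

```python
import codecs

# the generic utf-32 encoder prepends a BOM in the platform's byte order
data, _ = codecs.utf_32_encode('héllo')
decoded, consumed, byteorder = codecs.utf_32_ex_decode(data, 'strict', 0, True)
```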
/Netzob-2.0.0.tar.gz/Netzob-2.0.0/src/netzob/Model/Vocabulary/Domain/Variables/Leafs/Hashes/SHA2_256.py |
#+---------------------------------------------------------------------------+
#| 01001110 01100101 01110100 01111010 01101111 01100010 |
#| |
#| Netzob : Inferring communication protocols |
#+---------------------------------------------------------------------------+
#| Copyright (C) 2011-2017 Georges Bossert and Frédéric Guihéry |
#| This program is free software: you can redistribute it and/or modify |
#| it under the terms of the GNU General Public License as published by |
#| the Free Software Foundation, either version 3 of the License, or |
#| (at your option) any later version. |
#| |
#| This program is distributed in the hope that it will be useful, |
#| but WITHOUT ANY WARRANTY; without even the implied warranty of |
#| MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
#| GNU General Public License for more details. |
#| |
#| You should have received a copy of the GNU General Public License |
#| along with this program. If not, see <http://www.gnu.org/licenses/>. |
#+---------------------------------------------------------------------------+
#| @url : http://www.netzob.org |
#| @contact : contact@netzob.org |
#| @sponsors : Amossys, http://www.amossys.fr |
#| Supélec, http://www.rennes.supelec.fr/ren/rd/cidre/ |
#| ANSSI, https://www.ssi.gouv.fr |
#+---------------------------------------------------------------------------+
#+---------------------------------------------------------------------------+
#| File contributors : |
#| - Frédéric Guihéry <frederic.guihery (a) amossys.fr> |
#+---------------------------------------------------------------------------+
#+---------------------------------------------------------------------------+
#| Standard library imports |
#+---------------------------------------------------------------------------+
import hashlib
#+---------------------------------------------------------------------------+
#| Related third party imports |
#+---------------------------------------------------------------------------+
#+---------------------------------------------------------------------------+
#| Local application imports |
#+---------------------------------------------------------------------------+
from netzob.Model.Vocabulary.Domain.Variables.Leafs.AbstractHash import AbstractHash
class SHA2_256(AbstractHash):
r"""This class implements the SHA2_256 relationships between fields.
The constructor expects some parameters:
:param targets: The targeted fields of the relationship.
:type targets: a :class:`list` of :class:`Field <netzob.Model.Vocabulary.Field>`, required
The following example shows how to create a hash relation with
another field:
>>> from netzob.all import *
>>> import binascii
>>> f1 = Field(Raw(b'\xaa\xbb'))
>>> f2 = Field(SHA2_256([f1]))
>>> s = Symbol(fields = [f1, f2])
>>> binascii.hexlify(next(s.specialize()))
b'aabbd798d1fac6bd4bb1c11f50312760351013379a0ab6f0a8c0af8a506b96b2525a'
"""
def calculate(self, msg):
return hashlib.sha256(msg).digest()
def getBitSize(self):
return 256 | PypiClean |
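`calculate` is a direct wrapper over `hashlib.sha256`; the digest for the doctest's `b'\xaa\xbb'` payload can be reproduced standalone (expected value taken from the doctest above):

```python
import hashlib

# hash the same payload as the doctest's f1 field
digest = hashlib.sha256(b'\xaa\xbb').digest()
```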
/FS-Hopper-0.1.1.tar.gz/FS-Hopper-0.1.1/README.txt | =========
FS-Hopper
=========
FS-Hopper is a simplistic and very thin abstraction layer for
accessing a filesystem directory tree in an object-oriented style.
The only two known filesystem nodes are directories and trees. Both
are implemented as new-style classes. Each method call to them (e.g.
DirectoryNode.get_childs()) will return instances of either
DirectoryNode or FileNode, allowing easy traversal throughout the
tree.
Additionally, you can define a set_root, similar to (but not really)
GNU/chroot. It is prohibited to create Nodes above set_root making it
easy for you to jail your code into a certain directory within the
filesystem.
=====
Usage
=====
Jail code into /tmp and create a working directory::

    #!/usr/bin/env python2
    import fs_hopper

    fs_hopper.set_root('/tmp')
    workdir = fs_hopper.Directory('/tmp/fs_hopper')
    workdir.mkdir()
    print workdir
Recursively get all configuration files of some_app::

    #!/usr/bin/env python2
    import os
    import fs_hopper

    name = os.path.expanduser('~/.config/some_app')
    confdir = fs_hopper.Directory(name)
    confs = confdir.get_subs('*.conf')
    print confs
Read /etc/passwd::

    #!/usr/bin/env python2
    import fs_hopper

    passwd = fs_hopper.File('/etc/passwd')
    print passwd.get_content()
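The jail check boils down to a path-containment test. A rough sketch with plain ``pathlib`` (an illustration only, not FS-Hopper's actual implementation):

```python
import pathlib

def is_inside_root(root, candidate):
    """True if candidate lies at or below root (after resolving)."""
    root = pathlib.Path(root).resolve()
    candidate = pathlib.Path(candidate).resolve()
    return candidate == root or root in candidate.parents

inside = is_inside_root('/tmp', '/tmp/fs_hopper/work')
outside = is_inside_root('/tmp', '/etc/passwd')
```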
=========
Resources
=========
- GitHub: https://github.com/baccenfutter/fs_hopper/
- PyPI  : https://pypi.python.org/pypi/FS-Hopper/
| PypiClean |
/Django-4.2.4.tar.gz/Django-4.2.4/django/db/backends/postgresql/creation.py | import sys
from django.core.exceptions import ImproperlyConfigured
from django.db.backends.base.creation import BaseDatabaseCreation
from django.db.backends.postgresql.psycopg_any import errors
from django.db.backends.utils import strip_quotes
class DatabaseCreation(BaseDatabaseCreation):
def _quote_name(self, name):
return self.connection.ops.quote_name(name)
def _get_database_create_suffix(self, encoding=None, template=None):
suffix = ""
if encoding:
suffix += " ENCODING '{}'".format(encoding)
if template:
suffix += " TEMPLATE {}".format(self._quote_name(template))
return suffix and "WITH" + suffix
def sql_table_creation_suffix(self):
test_settings = self.connection.settings_dict["TEST"]
if test_settings.get("COLLATION") is not None:
raise ImproperlyConfigured(
"PostgreSQL does not support collation setting at database "
"creation time."
)
return self._get_database_create_suffix(
encoding=test_settings["CHARSET"],
template=test_settings.get("TEMPLATE"),
)
def _database_exists(self, cursor, database_name):
cursor.execute(
"SELECT 1 FROM pg_catalog.pg_database WHERE datname = %s",
[strip_quotes(database_name)],
)
return cursor.fetchone() is not None
def _execute_create_test_db(self, cursor, parameters, keepdb=False):
try:
if keepdb and self._database_exists(cursor, parameters["dbname"]):
# If the database should be kept and it already exists, don't
# try to create a new one.
return
super()._execute_create_test_db(cursor, parameters, keepdb)
except Exception as e:
if not isinstance(e.__cause__, errors.DuplicateDatabase):
# All errors except "database already exists" cancel tests.
self.log("Got an error creating the test database: %s" % e)
sys.exit(2)
elif not keepdb:
# If the database should be kept, ignore "database already
# exists".
raise
def _clone_test_db(self, suffix, verbosity, keepdb=False):
# CREATE DATABASE ... WITH TEMPLATE ... requires closing connections
# to the template database.
self.connection.close()
source_database_name = self.connection.settings_dict["NAME"]
target_database_name = self.get_test_db_clone_settings(suffix)["NAME"]
test_db_params = {
"dbname": self._quote_name(target_database_name),
"suffix": self._get_database_create_suffix(template=source_database_name),
}
with self._nodb_cursor() as cursor:
try:
self._execute_create_test_db(cursor, test_db_params, keepdb)
except Exception:
try:
if verbosity >= 1:
self.log(
"Destroying old test database for alias %s..."
% (
self._get_database_display_str(
verbosity, target_database_name
),
)
)
cursor.execute("DROP DATABASE %(dbname)s" % test_db_params)
self._execute_create_test_db(cursor, test_db_params, keepdb)
except Exception as e:
self.log("Got an error cloning the test database: %s" % e)
sys.exit(2) | PypiClean |
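The `suffix and "WITH" + suffix` idiom used in `_get_database_create_suffix` yields an empty string when no options were accumulated; the same pattern in isolation:

```python
def create_suffix(encoding=None, template=None):
    # mirrors _get_database_create_suffix: empty string when nothing was added
    suffix = ''
    if encoding:
        suffix += f" ENCODING '{encoding}'"
    if template:
        suffix += f' TEMPLATE "{template}"'
    return suffix and 'WITH' + suffix

no_opts = create_suffix()
enc_only = create_suffix(encoding='UTF8')
```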
/NikeCA-0.2.13-py3-none-any.whl/_SnowflakePull.py | class SnowflakePull:
import pandas
def snowflake_pull(self, query: str | dict | None, un: str, wh: str, db: str, role: str, schema=None, table=None,
sample_table: bool = False, sample_val: bool = False, table_sample: dict = None,
dtypes_conv=None, separate_dataframes: bool = True, polars: bool = False): # -> pandas.DataFrame:
"""
function: pulls snowflake data
dependencies: [
pandas,
snowflake.connector,
time,
datetime.datetime
]
:param polars:
:param separate_dataframes:
:param table:
:param schema:
:param query: str | dict
SQL query to run on Snowflake
E.G. query = "SELECT * FROM NGP_DA_PROD.POS.TO_DATE_AGG_CHANNEL_CY"
Can also be multiple queries in the form of a dictionary
            E.G. query = {"df1": "SELECT * FROM NGP_DA_PROD.POS.TO_DATE_AGG_CHANNEL_CY",
                          "df2": "SELECT TOP 2 * FROM NGP_DA_PROD.POS.TO_DATE_AGG_CHANNEL_CY"}
:param un: str
Nike Snowflake Username
"USERNAME"
:param db: str, default 'NA'
Name of the Database
:param wh: str
Name of the Wharehouse
e.g. "DA_DSM_SCANALYTICS_REPORTING_PROD"
:param role: str
Name of the role under which you are running Snowflake
"DF_######"
:param sample_table: bool, default: False
:param sample_val: bool, default: False
        :param table_sample: dict, default: None
            Optional pointer to the table/column to sample from; when None, a
            placeholder dict is used:
                table_sample = {'db': None, 'schema': None, 'table': None, 'col': None}
:param dtypes_conv: default: None
:return: pandas.DataFrame
"""
# snowflake connection packages:
import pandas as pd
import polars as pl
import snowflake.connector
import time
        if table_sample is None:
table_sample = {'db': None, 'schema': None, 'table': None, 'col': None}
# --> take a random sample from a table in snowflake
query = f'''SELECT * FROM {table_sample['db']}.{table_sample['schema']}.{table_sample['table']} LIMIT 100''' \
if sample_table else query
# --> take a random sample of a column from a table in snowflake
query = f'''SELECT DISTINCT
{table_sample['col']}
FROM
{table_sample['db']}.{table_sample['schema']}.{table_sample['table']}
ORDER BY 1 LIMIT 10''' \
if sample_val else query
recs = False
df = pd.DataFrame()
if type(query) == dict:
df = pd.DataFrame([query]).T
df_index = df.index
df_return = pd.DataFrame(index=df.index)
df_return['sfqid'] = ''
queries = len(df)
print('Pulling ' + str(queries) + ' queries')
query_list = []
db_list = []
complete = []
for item in range(queries):
query_list.append(item)
db_list.append(item)
complete.append(0)
print('opening snowflake connection...')
try:
cnn = snowflake.connector.connect(
user=un,
account='nike',
authenticator='externalbrowser',
role=role,
warehouse='POS_REPORT_PROD'
)
cs = cnn.cursor()
process_complete = 0
process_pass = 0
counter = 0
for k, v in df.iterrows():
sql = v[0]
cs.execute_async(sql)
query_list[counter] = cs.sfqid
                df_return.loc[k, 'sfqid'] = cs.sfqid
counter += 1
dfs = {}
while process_complete == 0:
item = -1
process_pass += 1
if sum(complete) == queries or process_pass == 10:
process_complete = 1
for result in query_list:
item += 1
if complete[item] == 0:
status = cnn.get_query_status_throw_if_error(result)
print('the status for ' + df_return[df_return['sfqid'] == result].index[0] + ' is ' +
str(status))
if str(status) == 'QueryStatus.SUCCESS':
complete[item] = 1
cs.get_results_from_sfqid(result)
if separate_dataframes:
recs = True
dfs[df_return[df_return['sfqid'] == result].index[0]] = cs.fetch_pandas_all()
else:
df = pd.concat([df, cs.fetch_pandas_all()])
else:
time.sleep(.25)
except Exception as e:
print(e)
finally:
cnn.close()
print('process complete')
else:
# connection settings
from snowflake.connector.converter_null import SnowflakeNoConverterToPython
conn = snowflake.connector.connect(
user=str(un),
account='nike',
# opens separate browser window to confirm authentication
authenticator='externalbrowser',
warehouse=str(wh),
database=str(db),
role=str(role),
converter_class=SnowflakeNoConverterToPython
)
# connect to snowflake using conn variables
cur = conn.cursor()
try:
cur.execute(query) # execute sql, store into-->
# final data pull --> allows datatype-memory optimization
df = pl.DataFrame
if polars:
df = cur.fetch_arrow_all()
else:
df = cur.fetch_pandas_all()
print(df)
except snowflake.connector.errors.ProgrammingError:
print(f'Could not retrieve:\n\t {query}')
except TypeError:
print(f"""Could not retrieve:\n\t{query}\n\nbecause TypeError: NoneType""")
# --> allows metadata querying
except Exception as e:
print(e)
temp_df = cur.fetchall() # return data
cols = [x.name for x in cur.description] # get column names
df = pd.DataFrame(temp_df, columns=cols) # create dataset
conn.close()
cur.close()
if recs:
return [dfs[k] for k in df_index]
return df | PypiClean |
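The dict branch above submits every query with `execute_async` and then polls the query status until all finish (or a pass limit is hit). That polling shape can be sketched without a live Snowflake connection; `FakeCursor` below is a hypothetical stand-in, not the real `snowflake.connector` API:

```python
import time

class FakeCursor:
    """Illustrative stand-in for an async-capable cursor (not the real
    snowflake.connector API; just enough to show the polling pattern)."""
    def __init__(self):
        self._jobs = {}
        self._next_id = 0

    def execute_async(self, sql):
        self._next_id += 1
        self.sfqid = 'q%d' % self._next_id
        # pretend every query needs exactly one more poll to finish
        self._jobs[self.sfqid] = {'sql': sql, 'polls_left': 1}

    def status(self, qid):
        job = self._jobs[qid]
        if job['polls_left'] > 0:
            job['polls_left'] -= 1
            return 'RUNNING'
        return 'SUCCESS'

    def fetch(self, qid):
        return self._jobs[qid]['sql'].upper()

def run_queries_async(cursor, queries, max_passes=10, poll_delay=0.0):
    """Submit every query first, then poll all of them until each is
    done, giving up after max_passes sweeps (mirrors the loop above)."""
    qids = []
    for sql in queries:
        cursor.execute_async(sql)
        qids.append(cursor.sfqid)
    done = {qid: False for qid in qids}
    results = {}
    for _ in range(max_passes):
        if all(done.values()):
            break
        for qid in qids:
            if not done[qid] and cursor.status(qid) == 'SUCCESS':
                done[qid] = True
                results[qid] = cursor.fetch(qid)
        time.sleep(poll_delay)
    return [results[qid] for qid in qids]
```

The key property of the pattern is that all queries are in flight before the first poll, so total wall time is bounded by the slowest query rather than the sum of all of them.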
/CMSCouchapp-1.3.4-py3-none-any.whl/couchapp/client.py
import base64
import itertools
import json
import logging
import re
import requests
from urllib.parse import quote
from couchapp import __version__
from couchapp.errors import ResourceNotFound, ResourceConflict, \
PreconditionFailed, RequestFailed, BulkSaveError, Unauthorized, \
InvalidAttachment
USER_AGENT = "couchapp/{0}".format(__version__)
aliases = {
'id': '_id',
'rev': '_rev'
}
UNKNOWN_VERSION = tuple()
logger = logging.getLogger(__name__)
class CouchdbResponse(requests.Response):
def __init__(self, respObj):
super(CouchdbResponse, self).__init__()
self.response = respObj
@property
def json_body(self):
"""
Parse the response object and return its json data,
or raise an exception if it failed
"""
if self.response.ok:
logger.debug("Response status_code: %s", self.response.status_code)
logger.debug("Response text: %s", self.response.text)
logger.debug("Response headers: %s", self.response.headers)
logger.debug("Response encoding: %s", self.response.encoding)
logger.debug("Response content: %s", self.response.content)
self.response.close()
try:
return self.response.json()
except ValueError:
# No JSON object could be decoded
return self.response.content
else:
errorReason = self.response.reason
errorCode = self.response.status_code
if errorCode in (401, 403):
raise Unauthorized(str(self.response))
elif errorCode == 404:
raise ResourceNotFound(errorReason, http_code=errorCode, response=self.response.text)
elif errorCode == 409:
raise ResourceConflict(errorReason, http_code=errorCode, response=self.response.text)
elif errorCode == 412:
raise PreconditionFailed(errorReason, http_code=errorCode, response=self.response.text)
else:
raise RequestFailed(str(self.response))
class CouchdbResource(object):
def __init__(self, uri="http://127.0.0.1:5984", **client_opts):
"""Constructor for a `CouchdbResource` object.
CouchdbResource represent an HTTP resource to CouchDB.
@param uri: str, full uri to the server.
"""
self.uri = uri
# FIXME: dangerous if the database name is not part of the URI
# self.database = uri.rsplit("/", 1)[1]
self.client_opts = client_opts
# requests.__init__(self, uri=uri, **client_opts)
self.safe = ":/%"
def copy(self, path=None, headers=None, **params):
""" add copy to HTTP verbs """
return self.request('COPY', path=path, headers=headers, **params)
def request(self, method, path=None, payload=None, headers=None,
params_dict=None, **params):
""" Perform HTTP call to the couchdb server and manage
JSON conversions, support GET, POST, PUT and DELETE.
Usage example, get infos of a couchdb server on
http://127.0.0.1:5984 :
import couchdbkit.CouchdbResource
resource = couchdbkit.CouchdbResource()
infos = resource.request('GET')
@param method: str, the HTTP action to be performed:
'GET', 'HEAD', 'POST', 'PUT', or 'DELETE'
@param path: str or list, path to add to the uri
        @param payload: str or any object that can be
        converted to JSON.
        @param headers: dict, optional headers that will
        be added to the HTTP request.
        @param params: optional parameters added to the request.
        Parameters are for example the parameters for a view. See
        `CouchDB View API reference
        <http://wiki.apache.org/couchdb/HTTP_view_API>`_ for example.
        @return: the parsed JSON body of the response
        (usually a dict).
if path:
path = "{}/{}".format(self.uri, path)
else:
path = self.uri
headers = headers or {}
headers.setdefault('Accept', 'application/json')
headers.setdefault('User-Agent', USER_AGENT)
logger.debug("Request: %s %s", method, path)
try:
resp = requests.request(method, url=path, data=payload,
headers=headers, **params)
except Exception as e:
logger.exception("Error making a CouchdbResource call. Details: %s", e)
            raise RequestFailed('unknown error [%s]' % str(e))
return CouchdbResponse(resp).json_body
def couchdb_version(server_uri):
res = CouchdbResource(server_uri)
try:
resp = res.request("GET")
except Exception:
return UNKNOWN_VERSION
    # res.request() already returns the parsed JSON body,
    # e.g. {"couchdb": "Welcome", "version": "2.3.1"}
    version = resp['version']
t = []
for p in version.split("."):
try:
t.append(int(p))
except ValueError:
continue
return tuple(t)
class Uuids(object):
def __init__(self, uri, max_uuids=1000, **client_opts):
api = "_uuids"
uri = "{}/{}".format(uri, api)
self.res = CouchdbResource(uri=uri, **client_opts)
self._uuids = []
self.max_uuids = max_uuids
def __next__(self):
if not self._uuids:
self.fetch_uuids()
self._uuids, res = self._uuids[:-1], self._uuids[-1]
return res
def __iter__(self):
return self
def fetch_uuids(self):
count = self.max_uuids - len(self._uuids)
resp = self.res.request("GET", count=count)
self._uuids += resp.json_body['uuids']
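The `Uuids` class above caches a batch of server-generated ids locally and refills the cache on demand, so most `next()` calls never touch the network. A self-contained sketch of the same batching pattern, with `fetch_batch` standing in for the HTTP call:

```python
class BatchedIds:
    """Sketch of the Uuids pattern: serve ids from a local cache and
    refill in batches. fetch_batch is an illustrative stand-in for the
    real `GET /_uuids?count=N` request."""
    def __init__(self, max_ids=5):
        self.max_ids = max_ids
        self._cache = []
        self._counter = 0

    def fetch_batch(self, count):
        # pretend the server hands out sequential ids
        start = self._counter
        self._counter += count
        return ['id-%d' % i for i in range(start, start + count)]

    def __iter__(self):
        return self

    def __next__(self):
        if not self._cache:
            self._cache = self.fetch_batch(self.max_ids - len(self._cache))
        # pop from the end, exactly like Uuids.__next__ above
        self._cache, nxt = self._cache[:-1], self._cache[-1]
        return nxt
```

Note that popping from the end means ids are consumed in reverse batch order, which is harmless since uuids carry no ordering guarantee.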
class Database(object):
""" Object that abstract access to a CouchDB database
A Database object can act as a Dict object.
"""
def __init__(self, uri, create=True, **client_opts):
if uri.endswith("/"):
uri = uri[:-1]
self.raw_uri = uri
self.res = CouchdbResource(uri=uri, **client_opts)
self.server_uri, self.dbname = uri.rsplit('/', 1)
self.uuids = Uuids(self.server_uri, **client_opts)
if create:
# create the db
try:
self.res.request("HEAD")
except ResourceNotFound:
self.res.request("PUT")
def delete(self):
self.res.request("DELETE")
def info(self):
"""
Get database information
@return: dict
"""
return self.res.request("GET")
def all_docs(self, **params):
"""
return all_docs
"""
return self.view('_all_docs', **params)
def open_doc(self, docid, wrapper=None, **params):
"""Open document from database
Args:
@param docid: str, document id to retrieve
@param rev: if specified, allows you to retrieve
a specific revision of document
@param wrapper: callable. function that takes dict as a param.
Used to wrap an object.
@params params: Other params to pass to the uri (or headers)
@return: dict, representation of CouchDB document as
a dict.
"""
resp = self.res.request("GET", escape_docid(docid), **params)
if wrapper is not None:
if not callable(wrapper):
raise TypeError("wrapper isn't a callable")
return wrapper(resp)
return resp
def save_doc(self, doc, encode=False, force_update=False, **params):
""" Save a document. It will use the `_id` member of the document
or request a new uuid from CouchDB. IDs are attached to
documents on the client side because POST has the curious property of
being automatically retried by proxies in the event of network
segmentation and lost responses.
@param doc: dict. doc is updated
with doc '_id' and '_rev' properties returned
by CouchDB server when you save.
        @param force_update: boolean, if there is a conflict, try to update
        with the latest revision
        @param encode: Encode attachments if needed (depends on couchdb
        version)
        @return: new doc with updated revision and id
"""
if '_attachments' in doc and encode:
doc['_attachments'] = encode_attachments(doc['_attachments'])
headers = params.get('headers', {})
headers.setdefault('Content-Type', 'application/json')
params['headers'] = headers
if '_id' in doc:
docid = escape_docid(doc['_id'])
try:
resp = self.res.request("PUT", docid, payload=json.dumps(doc), **params)
except ResourceConflict:
if not force_update:
raise
rev = self.last_rev(doc['_id'])
doc['_rev'] = rev
resp = self.res.request("PUT", docid, payload=json.dumps(doc), **params)
else:
json_doc = json.dumps(doc)
try:
doc['_id'] = next(self.uuids)
resp = self.res.request("PUT", doc['_id'], payload=json_doc, **params)
except ResourceConflict:
resp = self.res.request("POST", payload=json_doc, **params)
json_res = resp
doc1 = {}
for a, n in list(aliases.items()):
if a in json_res:
doc1[n] = json_res[a]
doc.update(doc1)
return doc
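`save_doc`'s `force_update` flow — attempt the write, and on a conflict re-read the latest `_rev` and retry — can be sketched against a hypothetical in-memory store. `FakeStore` below is illustrative only, not the CouchDB API:

```python
class ConflictError(Exception):
    pass

class FakeStore:
    """Stand-in for the CouchDB resource: rejects writes whose _rev
    does not match the stored revision (hypothetical, for illustration)."""
    def __init__(self):
        self.docs = {}

    def put(self, doc):
        cur = self.docs.get(doc['_id'])
        if cur is not None and doc.get('_rev') != cur['_rev']:
            raise ConflictError(doc['_id'])
        new = dict(doc)
        new['_rev'] = str(int(cur['_rev']) + 1 if cur else 1)
        self.docs[new['_id']] = new
        return new

    def last_rev(self, docid):
        return self.docs[docid]['_rev']

def save_doc(store, doc, force_update=False):
    """Mirror of save_doc's force_update logic: on conflict, re-read
    the latest revision and retry once."""
    try:
        return store.put(doc)
    except ConflictError:
        if not force_update:
            raise
        doc['_rev'] = store.last_rev(doc['_id'])
        return store.put(doc)
```

The retry silently discards whatever changes produced the newer revision, which is why it is opt-in behind `force_update`.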
def last_rev(self, docid):
""" Get last revision from docid (the '_rev' member)
@param docid: str, undecoded document id.
@return rev: str, the last revision of document.
"""
r = self.res.request("HEAD", escape_docid(docid))
if "etag" in r.headers:
# yeah new couchdb handle that
return r.headers['etag'].strip('"')
# old way ..
doc = self.open_doc(docid)
return doc['_rev']
def delete_doc(self, id_or_doc):
""" Delete a document
@param id_or_doc: docid string or document dict
"""
if isinstance(id_or_doc, (bytes, str)):
resp = self.res.request("DELETE", escape_docid(id_or_doc),
rev=self.last_rev(id_or_doc))
else:
docid = id_or_doc.get('_id')
if not docid:
raise ValueError('Not valid doc to delete (no doc id)')
rev = id_or_doc.get('_rev', self.last_rev(docid))
resp = self.res.request("DELETE", escape_docid(docid), rev=rev)
return resp
def save_docs(self, docs, all_or_nothing=False, use_uuids=True):
""" Bulk save. Modify Multiple Documents With a Single Request
@param docs: list of docs
@param use_uuids: add _id in doc who don't have it already set.
        @param all_or_nothing: In the case of a power failure, when the
        database restarts either all the changes will have been saved or none
        of them. However, it does not do conflict checking, so documents may
        be committed even when that introduces conflicts.
@return doc lists updated with new revision or raise BulkSaveError
exception. You can access to doc created and docs in error as
properties of this exception.
"""
def is_id(doc):
return '_id' in doc
        if use_uuids:
            # itertools.groupby only groups *consecutive* runs, so a plain
            # filter is the safe way to collect every doc without an '_id'
            noids = [doc for doc in docs if not is_id(doc)]
            for doc in noids:
                nextid = next(self.uuids)
                if nextid:
                    doc['_id'] = nextid
payload = {"docs": docs}
if all_or_nothing:
payload["all-or-nothing"] = True
# update docs
res = self.res.request("POST", '/_bulk_docs', payload=json.dumps(payload),
headers={'Content-Type': 'application/json'})
json_res = res
errors = []
for i, r in enumerate(json_res):
if 'error' in r:
doc1 = docs[i]
doc1.update({'_id': r['id'],
'_rev': r['rev']})
errors.append(doc1)
else:
docs[i].update({'_id': r['id'],
'_rev': r['rev']})
if errors:
raise BulkSaveError(docs, errors)
def delete_docs(self, docs, all_or_nothing=False, use_uuids=True):
""" multiple doc delete."""
for doc in docs:
doc['_deleted'] = True
return self.save_docs(docs, all_or_nothing=all_or_nothing,
use_uuids=use_uuids)
def fetch_attachment(self, id_or_doc, name, headers=None):
""" get attachment in a document
@param id_or_doc: str or dict, doc id or document dict
        @param name: name of the attachment
        @param headers: optional headers (like Range)
        @return: the attachment content
"""
if isinstance(id_or_doc, (str, bytes)):
docid = id_or_doc
else:
docid = id_or_doc['_id']
return self.res.request("GET", "%s/%s" % (escape_docid(docid), name),
headers=headers)
def put_attachment(self, doc, content=None, name=None, headers=None):
""" Add attachement to a document. All attachments are streamed.
@param doc: dict, document object
@param content: string, iterator, fileobj
@param name: name or attachment (file name).
        @param headers: optional headers like `Content-Length`
        or `Content-Type`
@return: updated document object
"""
        headers = headers or {}
content = content or ""
if name is None:
if hasattr(content, "name"):
name = content.name
else:
                raise InvalidAttachment('You should provide a valid ' +
                                        'attachment name')
name = quote(name, safe="")
res = self.res.request("PUT", "%s/%s" % (escape_docid(doc['_id']), name),
payload=content, headers=headers, rev=doc['_rev'])
json_res = res
        if 'ok' in json_res:
            # dict.update() returns None, so update then return the doc
            doc.update(self.open_doc(doc['_id']))
            return doc
        return False
def delete_attachment(self, doc, name):
""" delete attachement to the document
@param doc: dict, document object in python
@param name: name of attachement
@return: updated document object
"""
name = quote(name, safe="")
self.res.request("DELETE", "%s/%s" % (escape_docid(doc['_id']), name),
rev=doc['_rev'])
        # dict.update() returns None, so update then return the doc
        doc.update(self.open_doc(doc['_id']))
        return doc
def view(self, view_name, **params):
try:
dname, vname = view_name.split("/")
path = "/_design/%s/_view/%s" % (dname, vname)
except ValueError:
path = view_name
if "keys" in params:
keys = params.pop("keys")
return self.res.request("POST", path, json.dumps({"keys": keys}, **params))
return self.res.request("GET", path, **params)
def encode_params(params):
""" encode parameters in json if needed """
_params = {}
if params:
for name, value in list(params.items()):
if value is None:
continue
if name in ('key', 'startkey', 'endkey') \
or not isinstance(value, str):
value = json.dumps(value).encode('utf-8')
_params[name] = value
return _params
def escape_docid(docid):
if docid.startswith('/'):
docid = docid[1:]
if docid.startswith('_design'):
docid = '_design/%s' % quote(docid[8:], safe='')
else:
docid = quote(docid, safe='')
return docid
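`escape_docid` above percent-encodes everything except the `_design/` prefix of design documents, which CouchDB requires to stay literal in the URL path. A runnable restatement of the same logic:

```python
from urllib.parse import quote

def escape_docid(docid):
    """Same logic as above: design doc ids keep their '_design/' prefix
    unescaped while the remainder (and any other docid) is fully
    percent-encoded, including '/'."""
    if docid.startswith('/'):
        docid = docid[1:]
    if docid.startswith('_design'):
        docid = '_design/%s' % quote(docid[8:], safe='')
    else:
        docid = quote(docid, safe='')
    return docid
```

Using `safe=''` matters: the default `safe='/'` would leave slashes inside a docid unescaped, which CouchDB would interpret as extra path segments.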
def encode_attachments(attachments):
    for k, v in attachments.items():
        if v.get('stub', False):
            continue
        # base64.b64encode returns bytes (and never inserts whitespace),
        # so decode to str for the JSON payload
        v['data'] = base64.b64encode(v['data']).decode('ascii')
    return attachments
/NlpToolkit-Classification-1.0.16.tar.gz/NlpToolkit-Classification-1.0.16/Classification/Model/KMeansModel.py

from io import TextIOWrapper
from Math.DiscreteDistribution import DiscreteDistribution
from Classification.DistanceMetric.DistanceMetric import DistanceMetric
from Classification.DistanceMetric.EuclidianDistance import EuclidianDistance
from Classification.Instance.Instance import Instance
from Classification.InstanceList.InstanceList import InstanceList
from Classification.Model.GaussianModel import GaussianModel
class KMeansModel(GaussianModel):
__class_means: InstanceList
__distance_metric: DistanceMetric
def constructor1(self,
priorDistribution: DiscreteDistribution,
classMeans: InstanceList,
distanceMetric: DistanceMetric):
"""
The constructor that sets the classMeans, priorDistribution and distanceMetric according to given inputs.
PARAMETERS
----------
priorDistribution : DiscreteDistribution
DiscreteDistribution input.
classMeans : InstanceList
Class means.
distanceMetric : DistanceMetric
DistanceMetric input.
"""
self.__class_means = classMeans
self.prior_distribution = priorDistribution
self.__distance_metric = distanceMetric
def constructor2(self, fileName: str):
self.__distance_metric = EuclidianDistance()
inputFile = open(fileName, 'r')
self.loadPriorDistribution(inputFile)
self.__class_means = self.loadInstanceList(inputFile)
inputFile.close()
def loadInstanceList(self, inputFile: TextIOWrapper) -> InstanceList:
types = inputFile.readline().strip().split(" ")
instance_count = int(inputFile.readline().strip())
instance_list = InstanceList()
for i in range(instance_count):
instance_list.add(self.loadInstance(inputFile.readline().strip(), types))
return instance_list
def __init__(self,
priorDistribution: object,
classMeans: InstanceList = None,
distanceMetric: DistanceMetric = None):
if isinstance(priorDistribution, DiscreteDistribution):
self.constructor1(priorDistribution, classMeans, distanceMetric)
elif isinstance(priorDistribution, str):
self.constructor2(priorDistribution)
def calculateMetric(self,
instance: Instance,
Ci: str) -> float:
"""
The calculateMetric method takes an {@link Instance} and a String as inputs. It loops through the class means,
if the corresponding class label is same as the given String it returns the negated distance between given
instance and the current item of class means. Otherwise it returns the smallest negative number.
PARAMETERS
----------
instance : Instance
Instance input.
Ci : str
String input.
RETURNS
-------
float
The negated distance between given instance and the current item of class means.
"""
for i in range(self.__class_means.size()):
if self.__class_means.get(i).getClassLabel() == Ci:
return -self.__distance_metric.distance(instance, self.__class_means.get(i))
        return -1000000
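`calculateMetric` above scores each class with the *negated* distance to that class's mean, so classification reduces to taking the argmax of the metric (equivalently, the nearest mean). A minimal sketch with plain lists, assuming Euclidean distance as in `EuclidianDistance`:

```python
def euclidean(a, b):
    """Plain Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_mean_label(instance, class_means):
    """class_means: list of (label, mean_vector) pairs. Because the
    model scores each class with -distance, the predicted label is
    simply the argmax of that metric, i.e. the nearest class mean."""
    return max(class_means, key=lambda lm: -euclidean(instance, lm[1]))[0]
```

Maximizing `-distance` and minimizing `distance` are the same decision rule; the negation only exists so the shared Gaussian-model machinery can treat every metric as "larger is better".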
/Authlib-1.2.1.tar.gz/authlib/integrations/httpx_client/assertion_client.py

import httpx
from httpx import Response, USE_CLIENT_DEFAULT
from authlib.oauth2.rfc7521 import AssertionClient as _AssertionClient
from authlib.oauth2.rfc7523 import JWTBearerGrant
from .utils import extract_client_kwargs
from .oauth2_client import OAuth2Auth
from ..base_client import OAuthError
__all__ = ['AsyncAssertionClient']
class AsyncAssertionClient(_AssertionClient, httpx.AsyncClient):
token_auth_class = OAuth2Auth
oauth_error_class = OAuthError
JWT_BEARER_GRANT_TYPE = JWTBearerGrant.GRANT_TYPE
ASSERTION_METHODS = {
JWT_BEARER_GRANT_TYPE: JWTBearerGrant.sign,
}
DEFAULT_GRANT_TYPE = JWT_BEARER_GRANT_TYPE
def __init__(self, token_endpoint, issuer, subject, audience=None, grant_type=None,
claims=None, token_placement='header', scope=None, **kwargs):
client_kwargs = extract_client_kwargs(kwargs)
httpx.AsyncClient.__init__(self, **client_kwargs)
_AssertionClient.__init__(
self, session=None,
token_endpoint=token_endpoint, issuer=issuer, subject=subject,
audience=audience, grant_type=grant_type, claims=claims,
token_placement=token_placement, scope=scope, **kwargs
)
async def request(self, method, url, withhold_token=False, auth=USE_CLIENT_DEFAULT, **kwargs) -> Response:
"""Send request with auto refresh token feature."""
if not withhold_token and auth is USE_CLIENT_DEFAULT:
if not self.token or self.token.is_expired():
await self.refresh_token()
auth = self.token_auth
return await super(AsyncAssertionClient, self).request(
method, url, auth=auth, **kwargs)
async def _refresh_token(self, data):
resp = await self.request(
'POST', self.token_endpoint, data=data, withhold_token=True)
return self.parse_response_token(resp)
class AssertionClient(_AssertionClient, httpx.Client):
token_auth_class = OAuth2Auth
oauth_error_class = OAuthError
JWT_BEARER_GRANT_TYPE = JWTBearerGrant.GRANT_TYPE
ASSERTION_METHODS = {
JWT_BEARER_GRANT_TYPE: JWTBearerGrant.sign,
}
DEFAULT_GRANT_TYPE = JWT_BEARER_GRANT_TYPE
def __init__(self, token_endpoint, issuer, subject, audience=None, grant_type=None,
claims=None, token_placement='header', scope=None, **kwargs):
client_kwargs = extract_client_kwargs(kwargs)
httpx.Client.__init__(self, **client_kwargs)
_AssertionClient.__init__(
self, session=self,
token_endpoint=token_endpoint, issuer=issuer, subject=subject,
audience=audience, grant_type=grant_type, claims=claims,
token_placement=token_placement, scope=scope, **kwargs
)
def request(self, method, url, withhold_token=False, auth=USE_CLIENT_DEFAULT, **kwargs):
"""Send request with auto refresh token feature."""
if not withhold_token and auth is USE_CLIENT_DEFAULT:
if not self.token or self.token.is_expired():
self.refresh_token()
auth = self.token_auth
return super(AssertionClient, self).request(
            method, url, auth=auth, **kwargs)
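Both clients above lazily refresh the assertion token inside `request`: if the token is missing or expired, a refresh happens before the call, otherwise the cached token is reused. The pattern in isolation, with stub classes standing in for httpx and the real token machinery:

```python
import time

class StubToken:
    """Hypothetical token with an absolute expiry timestamp."""
    def __init__(self, lifetime):
        self.expires_at = time.time() + lifetime

    def is_expired(self):
        return time.time() >= self.expires_at

class StubClient:
    """Sketch of the refresh-on-demand pattern: fetch a token the first
    time (or when expired), then reuse it for subsequent requests."""
    def __init__(self, lifetime=3600):
        self.lifetime = lifetime
        self.token = None
        self.refresh_count = 0

    def refresh_token(self):
        self.refresh_count += 1
        self.token = StubToken(self.lifetime)

    def request(self, method, url):
        if self.token is None or self.token.is_expired():
            self.refresh_token()
        return (method, url, self.refresh_count)
```

This keeps token acquisition entirely out of caller code: callers just issue requests, and the client pays the refresh cost only when needed.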
/EtherChainPy-1.0.80.tar.gz/EtherChainPy-1.0.80/etherchainpy/conversions.py

import json
class Conversions(object):
"""
Conversion Module ...
"""
def __init__(self, client=None):
self.math_base = 10
self.response = None
self.client = client
def to(self, what):
w = what.upper()
if w == "WEI": return self.toWEI()
elif w == "GWEI": return self.toGWEI()
elif w == "ETH": return self.toETH()
elif w == "ZAR": return self.toZAR()
elif w == "USD": return self.toUSD()
elif w == "EUR": return self.toEUR()
else: return self.toJSON()
def toJSON(self):
if self.response: return self.response.json()
else: return None
def toWEI(self):
wei = None
if self.response:
data = self.response.json()
if "result" in data:
rd = data["result"]
if isinstance(rd,str):
wei = int(data["result"],self.math_base)
elif isinstance(rd,list):
wei = []
for r in rd:
r.update({"wei": int(r["balance"],self.math_base) })
wei.append(r)
return wei
def toGWEI(self):
gwei = None
if self.response:
data = self.response.json()
if "result" in data:
rd = data["result"]
if isinstance(rd,str):
gwei = int(data["result"],self.math_base)/(10**9)
elif isinstance(rd,list):
gwei = []
for r in rd:
r.update({"gwei": int(r["balance"],self.math_base)/(10**9) })
gwei.append(r)
return gwei
def toETH(self):
eth = None
if self.response:
data = self.response.json()
if "result" in data:
rd = data["result"]
if isinstance(rd,str):
eth = int(data["result"],self.math_base) / (10**18)
elif isinstance(rd,list):
eth = []
for r in rd:
r.update({"eth": int(r["balance"],self.math_base)/(10**18) })
eth.append(r)
return eth
def toFIAT(self, value=None, to="ZAR"):
fiat = None
if self.client != None and value != None:
params = {"fsym":"ETH", "tsyms":to.upper()}
if self.client.conversion_api_key != None:
params.update({"api_key":self.client.conversion_api_key})
r = self.client.get(self.client.conversion_url,params).json()
if to.upper() in r:
cv = float(r[to.upper()])
                if isinstance(value, (int, float, str)):
                    fiat = float(value) * cv
elif isinstance(value,list):
fiat = []
for v in value:
a = v
a.update({to:float(v["eth"]) * cv})
fiat.append(a)
return fiat
def toZAR(self):
v = self.toETH() # base
return self.toFIAT(v, "ZAR")
def toUSD(self):
v = self.toETH() # base
return self.toFIAT(v,"USD")
def toEUR(self):
v = self.toETH() # base
return self.toFIAT(v,"EUR") | PypiClean |
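The conversions above all reduce to fixed powers of ten: 1 ETH = 10^18 wei and 1 gwei = 10^9 wei, with balances arriving as base-10 strings (hence `math_base = 10` in the `int(...)` calls). A standalone sketch of that arithmetic:

```python
WEI_PER_ETH = 10 ** 18
WEI_PER_GWEI = 10 ** 9

def wei_to_eth(wei):
    """1 ETH == 10**18 wei."""
    return wei / WEI_PER_ETH

def wei_to_gwei(wei):
    """1 gwei == 10**9 wei."""
    return wei / WEI_PER_GWEI

def balance_str_to_eth(balance):
    # API responses carry the balance as a base-10 string of wei,
    # which is why the class parses with int(x, 10)
    return wei_to_eth(int(balance, 10))
```

Keeping wei as an exact integer and converting to float only at the edge (for display or fiat conversion) avoids accumulating floating-point error on large balances.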
/Ax_Metrics-0.9.2.tar.gz/Ax_Metrics-0.9.2/py/axonchisel/metrics/foundation/chrono/framespec.py
from datetime import datetime
from axonchisel.metrics.foundation.ax.obj import AxObj
# ----------------------------------------------------------------------------
# FrameSpec allowed modes
MODES = {
'CURRENT': {}, # current period (or incl reframe), possibly incomplete
'LASTWHOLE': {}, # previous completed period
}
# FrameSpec allowed time units
TIME_UNITS = {
'SECOND': {},
'MINUTE': {},
'MINUTE5': {},
'MINUTE10': {},
'MINUTE15': {},
'MINUTE30': {},
'HOUR': {},
'DAY': {},
'WEEK': {},
'MONTH': {},
'QUARTER': {},
'YEAR': {},
}
# ----------------------------------------------------------------------------
class FrameSpec(AxObj):
"""
Specification of time frame, measurement granularity, and smoothing,
such as for use in specifying a query.
A Stepper can take a FrameSpec and yield TimeRanges for each data point.
Examples representable by FrameSpecs:
- Last whole month, daily measurements.
- Current quarter to date, weekly measurements.
- Last 5 days, hourly measurements.
- 24 hour period on specific date in past, measurements every 15 minutes.
- Current month to date, smoothed daily measurements of trailing 7 days.
Attributes:
- range_unit:
Determines rounding granularity of start and stop
times and the interpretation of range_val to specify overall
duration. Value from TIME_UNITS, e.g. 'MONTH'.
- range_val:
Specifies how many range_units are in the period.
- gran_unit:
Determines granularity of data points within the range.
Value from TIME_UNITS, e.g. 'DAY'.
Without smoothing, each measurement step covers 1 gran_unit
beginning at each step point.
- smooth_unit:
Unit of time used for smoothing extended by smooth_val.
Ignored if smooth_val is 0.
- smooth_val:
If 0, no smoothing is done. If positive int is specified,
each measurement step time frame will have the same end it
would have otherwise, but the beginning will be
smooth_val smooth_units back from there (instead of just
1 gran_unit back).
Smoothing will be ignored if accumulate is enabled.
Option allow_overflow_begin affects early steps.
- mode:
Determine whether range includes current time (ie "X to date"),
or previous completed period.
Value from MODES: 'CURRENT' or 'LASTWHOLE'.
- reframe_dt:
If specified, changes period calc to execute as if "now"
was the datetime specified here.
- accumulate: True/False (default = False)
If True, all steps have their beginning locked to the
beginning of the overall frame, each growing in size.
If False (default), each step through the frame is roughly
the same size and proceeds mostly linearly.
Accumulate is not compatible with and will override smoothing.
- allow_overflow_begin: True/False (default = True)
If False, steps will never extend beyond the beginning
of the period, but early steps may be shorter than full frames.
If True (default), early steps may extend beyond the beginning
of the period, such as with 30 DAY smoothing combined
with WEEK granularity.
Default is True because reports (especially smoothed) would
otherwise see significant initial noise before full size
smoothing frame existed.
- allow_overflow_end: True/False (default = False)
If False (default), steps will never extend beyond the end
of the period, but late steps may be shorter than full frames.
If True, steps may extend beyond the end of the period,
such as the final step when using WEEK granularity
where a QUARTER ends mid-week.
Default is False because we don't want to report on data
past the period. Adding smoothing (even identical to
gran_unit) should address lower last frame issues by
forcing a full size frame (at cost of re-covering part
of end of previous period).
"""
def __init__(self, **kwargs):
"""
Initialize, optionally overriding any default properties with kwargs.
"""
# Set valid default state:
self.range_unit = 'MONTH'
self.range_val = 1
self.gran_unit = 'DAY'
self.smooth_unit = 'DAY'
self.smooth_val = 0
self.mode = 'CURRENT'
self.reframe_dt = None
self.accumulate = False
self.allow_overflow_begin = True
self.allow_overflow_end = False
# Apply initial values from kwargs:
self._init_kwargs(kwargs, [
'range_unit', 'range_val',
'gran_unit',
'smooth_unit', 'smooth_val',
'mode',
'reframe_dt',
'accumulate',
'allow_overflow_begin', 'allow_overflow_end',
])
#
# Public Methods
#
def is_reframed(self):
return self.reframe_dt is not None
def is_smoothed(self):
return self.smooth_val > 0
#
# Public Properties
#
@property
def range_unit(self):
"""Duration unit and rounding granularity of start/stop times."""
return self._range_unit
@range_unit.setter
def range_unit(self, val):
self._assert_type_string("range_unit", val)
self._assert_value("range_unit", val, TIME_UNITS)
self._range_unit = val
@property
def range_val(self):
"""Specifies how many range_units are in the period. """
return self._range_val
@range_val.setter
def range_val(self, val):
self._assert_type_int("range_val", val)
self._range_val = val
@property
def gran_unit(self):
"""Granularity of data points within the range."""
return self._gran_unit
@gran_unit.setter
def gran_unit(self, val):
self._assert_type_string("gran_unit", val)
self._assert_value("gran_unit", val, TIME_UNITS)
self._gran_unit = val
@property
def smooth_unit(self):
"""Unit of time used for smoothing extended by smooth_val."""
return self._smooth_unit
@smooth_unit.setter
def smooth_unit(self, val):
self._assert_type_string("smooth_unit", val)
self._assert_value("smooth_unit", val, TIME_UNITS)
self._smooth_unit = val
@property
def smooth_val(self):
"""Specifies amount of smooth_units to smooth data by, 0=none."""
return self._smooth_val
@smooth_val.setter
def smooth_val(self, val):
self._assert_type_int("smooth_val", val)
self._smooth_val = val
@property
def mode(self):
"""How to relate to current/reframed time."""
return self._mode
@mode.setter
def mode(self, val):
self._assert_type_string("mode", val)
self._assert_value("mode", val, MODES)
self._mode = val
@property
def reframe_dt(self):
"""Reframe "now" as given datetime, or None for real now."""
return self._reframe_dt
@reframe_dt.setter
def reframe_dt(self, val):
if val is not None:
self._assert_type_datetime("reframe_dt", val)
self._reframe_dt = val
@property
def accumulate(self):
"""Lock all steps to beginnining time frame?"""
return self._accumulate
@accumulate.setter
def accumulate(self, val):
self._assert_type_bool("accumulate", val)
self._accumulate = val
@property
def allow_overflow_begin(self):
"""Allow steps to extend beyond beginning of period?"""
return self._allow_overflow_begin
@allow_overflow_begin.setter
def allow_overflow_begin(self, val):
self._assert_type_bool("allow_overflow_begin", val)
self._allow_overflow_begin = val
@property
def allow_overflow_end(self):
"""Allow steps to extend beyond end of period?"""
return self._allow_overflow_end
@allow_overflow_end.setter
def allow_overflow_end(self, val):
self._assert_type_bool("allow_overflow_end", val)
self._allow_overflow_end = val
#
# Internal Methods
#
def __unicode__(self):
return (u"FrameSpec(every {self.gran_unit} "+
"for {self.range_val} {self.range_unit}s, "+
"{self.mode}, reframe {self.reframe_dt}, "+
"smooth {smooth}, accum {accum})"
).format(self=self, smooth=(
"{self.smooth_val} {self.smooth_unit}s".format(self=self)
) if self.smooth_val else "None",
accum=("Y" if self.accumulate else "N"),
        )
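As a concrete illustration of what a `FrameSpec` with `mode='LASTWHOLE'` and `range_unit='MONTH'` resolves to (the actual stepping lives in a separate Stepper, not shown in this file), the previous whole calendar month can be computed like this — `last_whole_month` is a sketch, not part of the Ax_Metrics API:

```python
from datetime import datetime

def last_whole_month(now):
    """Return the closed-open datetime range [begin, end) of the
    calendar month preceding `now` — the resolution of LASTWHOLE with
    range_unit='MONTH' and range_val=1. Passing a fixed `now` plays the
    role of reframe_dt above."""
    first_of_this = now.replace(day=1, hour=0, minute=0, second=0,
                                microsecond=0)
    if first_of_this.month == 1:
        first_of_last = first_of_this.replace(year=first_of_this.year - 1,
                                              month=12)
    else:
        first_of_last = first_of_this.replace(month=first_of_this.month - 1)
    return (first_of_last, first_of_this)
```

A `mode='CURRENT'` spec would instead run from `first_of_this` up to `now` ("month to date"), possibly covering an incomplete period.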
/CNFgen-0.9.2-py3-none-any.whl/cnfgen/families/subgraph.py

import networkx as nx
from itertools import combinations
from itertools import product
from cnfgen.formula.cnf import CNF
from cnfgen.graphs import Graph
from cnfgen.localtypes import non_negative_int
def non_edges(G):
N = G.order()
for u in range(1, N):
for v in range(u+1, N+1):
if not G.has_edge(u, v):
yield (u, v)
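`non_edges` above walks every unordered pair of (1-indexed) vertices and yields the ones not joined by an edge — exactly the pairs that generate local-consistency clauses in the clique formulas below. A runnable restatement with a tiny stand-in graph class (`TinyGraph` is illustrative, not the real `cnfgen.Graph`):

```python
class TinyGraph:
    """Minimal 1-indexed stand-in for cnfgen.Graph (illustration only)."""
    def __init__(self, n, edges):
        self.n = n
        self.edges = {frozenset(e) for e in edges}

    def order(self):
        return self.n

    def has_edge(self, u, v):
        return frozenset((u, v)) in self.edges

def non_edges(G):
    # same generator as above: all unordered vertex pairs (u, v) with
    # u < v that are missing from the graph
    N = G.order()
    for u in range(1, N):
        for v in range(u + 1, N + 1):
            if not G.has_edge(u, v):
                yield (u, v)
```

For a graph on N vertices with m edges this yields N·(N−1)/2 − m pairs, so the clause count of `CliqueFormula` grows with the graph's *sparsity*.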
def SubgraphFormula(G, H, induced=False, symbreak=False, formula_class=CNF):
"""Test whether a graph has a k-clique.
Given two graphs :math:`H` and :math:`G`, the
CNF formula claims that :math:`H` is an (induced) subgraph of :math:`G`.
Parameters
----------
G : cnfgen.Graph
a simple graph
H : cnfgen.Graph
the candidate subgraph
induced: bool
test for induced containment
symbreak: bool
force mapping to be non decreasing
        (this makes sense only if :math:`H` is symmetric)
Returns
-------
a CNF object
"""
G = Graph.normalize(G, 'G')
H = Graph.normalize(H, 'H')
F = formula_class()
if induced:
description = "{} is not an induced subgraph of {}".format(H.name, G.name)
else:
description = "{} is not a subgraph of {}".format(H.name, G.name)
F.header['description'] = description
N = G.order()
k = H.order()
s = F.new_mapping(k, N, label='s_{{{},{}}}')
F.force_complete_mapping(s)
F.force_functional_mapping(s)
F.force_injective_mapping(s)
if symbreak:
F.force_nondecreasing_mapping(s)
# Local consistency
localmaps = product(combinations(list(range(1, k+1)), 2),
combinations(list(range(1, N+1)), 2))
s = s.to_dict()
for (i1, i2), (j1, j2) in localmaps:
# check if this mapping is compatible
gedge = G.has_edge(j1, j2)
tedge = H.has_edge(i1, i2)
consistent = (gedge == tedge) or (gedge and not induced)
if not consistent:
F.add_clause([-s[i1, j1], -s[i2, j2]], check=False)
if not symbreak:
F.add_clause([-s[i1, j2], -s[i2, j1]], check=False)
return F
def CliqueFormula(G, k, symbreak=True, formula_class=CNF):
"""Test whether a graph has a k-clique.
Given a graph :math:`G` and a non negative value :math:`k`, the
CNF formula claims that :math:`G` contains a :math:`k`-clique.
Parameters
----------
G : cnfgen.Graph
a simple graph
k : a non negative integer
clique size
symbreak: bool
force mapping to be non decreasing
Returns
-------
a CNF object
"""
non_negative_int(k, 'k')
G = Graph.normalize(G, 'G')
F = formula_class()
description = "{} does not contain any {}-clique.".format(G.name, k)
F.header['description'] = description
N = G.order()
s = F.new_mapping(k, N, label='s_{{{},{}}}')
F.force_complete_mapping(s)
F.force_functional_mapping(s)
F.force_injective_mapping(s)
if symbreak:
F.force_nondecreasing_mapping(s)
# Local consistency
s = s.to_dict()
nonconsistents = product(combinations(list(range(1, k+1)), 2),
non_edges(G))
for (i1, i2), (j1, j2) in nonconsistents:
# check if this mapping is compatible
F.add_clause([-s[i1, j1], -s[i2, j2]], check=False)
if not symbreak:
F.add_clause([-s[i1, j2], -s[i2, j1]], check=False)
return F
def BinaryCliqueFormula(G, k, symbreak=True, formula_class=CNF):
"""Test whether a graph has a k-clique (binary encoding)
Given a graph :math:`G` and a non negative value :math:`k`, the
CNF formula claims that :math:`G` contains a :math:`k`-clique.
This formula uses the binary encoding, in the sense that the
clique elements are indexed by strings of bits.
Parameters
----------
G : cnfgen.Graph
a simple graph
k : a non negative integer
clique size
symbreak: bool
force mapping to be non decreasing
Returns
-------
a CNF object
"""
non_negative_int(k, 'k')
G = Graph.normalize(G, 'G')
F = formula_class()
description = "{} does not contain any {}-clique (Binary encoding).".format(G.name, k)
F.header['description'] = description
N = G.order()
m = F.new_binary_mapping(k, N, label='y_{{{},{}}}')
F.force_complete_mapping(m)
F.force_injective_mapping(m)
if symbreak:
F.force_nondecreasing_mapping(m)
# Local consistency
nonconsistents = product(combinations(list(range(1, k+1)), 2),
non_edges(G))
for (i1, i2), (j1, j2) in nonconsistents:
# check if this mapping is compatible
F.add_clause(m.forbid(i1, j1) + m.forbid(i2, j2), check=False)
if not symbreak:
F.add_clause(m.forbid(i1, j2) + m.forbid(i2, j1), check=False)
return F
def RamseyWitnessFormula(G, k, s, symbreak=True, formula_class=CNF):
"""True if graph contains either k-clique or and s independent set
Given a graph :math:`G` and a non negative values :math:`k` and
:math:`s`, the CNF formula claims that :math:`G` contains
a neither a :math:`k`-clique nor an independet set of size
:math:`s`.
Parameters
----------
G : cnfgen.Graph
a simple graph
k : a non negative integer
clique size
s : a non negative integer
independent set size
symbreak: bool
force mapping to be non decreasing
Returns
-------
a CNF object
"""
non_negative_int(k, 'k')
non_negative_int(s, 's')
G = Graph.normalize(G, 'G')
F = formula_class()
description = "{} does not contain {}-cliques nor {}-independent sets.".format(
G.name, k, s)
F.header['description'] = description
maybeclique = F.new_variable('C')
N = G.order()
# vertex-mapping variables; renamed from `s` so the independent-set size
# parameter `s` is not shadowed (note that the encoding itself uses k slots)
vmap = F.new_mapping(k, N, label='s_{{{},{}}}')
F.force_complete_mapping(vmap)
F.force_functional_mapping(vmap)
F.force_injective_mapping(vmap)
# Local consistency
localmaps = product(combinations(range(1, k+1), 2),
combinations(range(1, N+1), 2))
for (i1, i2), (j1, j2) in localmaps:
# check if this mapping is compatible
edge = G.has_edge(j1, j2)
# increasing map
if not edge:
F.add_clause([-maybeclique, -vmap(i1, j1), -vmap(i2, j2)])
else:
F.add_clause([maybeclique, -vmap(i1, j1), -vmap(i2, j2)])
# decreasing map
if symbreak:
F.add_clause([-vmap(i1, j2), -vmap(i2, j1)])
elif not edge:
F.add_clause([-maybeclique, -vmap(i1, j2), -vmap(i2, j1)])
else:
F.add_clause([maybeclique, -vmap(i1, j2), -vmap(i2, j1)])
return F | PypiClean |
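All of the clique formulas above share one reduction: :math:`k` "slots" are mapped to vertices under completeness, functional, and injective (optionally monotone) constraints, plus local-consistency clauses forbidding two slots from landing on a non-edge. The following is a self-contained sketch of that reduction outside the cnfgen API; the DIMACS-style variable numbering `i*n + j + 1` is an illustrative choice, not cnfgen's internal scheme.

```python
from itertools import combinations

def clique_to_cnf(n, edges, k):
    """Encode "the n-vertex graph has a k-clique" as CNF clauses.

    Slot i taking vertex j is variable i*n + j + 1; clauses are lists
    of non-zero ints (DIMACS convention, negative = negated literal).
    """
    def var(i, j):
        return i * n + j + 1
    edge_set = {frozenset(e) for e in edges}
    clauses = []
    for i in range(k):                        # completeness: every slot filled
        clauses.append([var(i, j) for j in range(n)])
    for i in range(k):                        # functional: one vertex per slot
        for j1, j2 in combinations(range(n), 2):
            clauses.append([-var(i, j1), -var(i, j2)])
    for j in range(n):                        # injective: one slot per vertex
        for i1, i2 in combinations(range(k), 2):
            clauses.append([-var(i1, j), -var(i2, j)])
    for i1, i2 in combinations(range(k), 2):  # local consistency on non-edges
        for j1 in range(n):
            for j2 in range(n):
                if j1 != j2 and frozenset((j1, j2)) not in edge_set:
                    clauses.append([-var(i1, j1), -var(i2, j2)])
    return clauses
```

On the triangle the encoding is satisfiable; dropping one edge makes it unsatisfiable, matching the intended "G contains a k-clique" claim.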
/KalturaApiClient-19.3.0.tar.gz/KalturaApiClient-19.3.0/KalturaClient/Plugins/BulkUploadXml.py | from __future__ import absolute_import
from .Core import *
from .BulkUpload import *
from ..Base import (
getXmlNodeBool,
getXmlNodeFloat,
getXmlNodeInt,
getXmlNodeText,
KalturaClientPlugin,
KalturaEnumsFactory,
KalturaObjectBase,
KalturaObjectFactory,
KalturaParams,
KalturaServiceBase,
)
########## enums ##########
########## classes ##########
# @package Kaltura
# @subpackage Client
class KalturaBulkUploadXmlJobData(KalturaBulkUploadJobData):
"""Represents the Bulk upload job data for xml bulk upload"""
def __init__(self,
userId=NotImplemented,
uploadedBy=NotImplemented,
conversionProfileId=NotImplemented,
resultsFileLocalPath=NotImplemented,
resultsFileUrl=NotImplemented,
numOfEntries=NotImplemented,
numOfObjects=NotImplemented,
filePath=NotImplemented,
bulkUploadObjectType=NotImplemented,
fileName=NotImplemented,
objectData=NotImplemented,
type=NotImplemented,
emailRecipients=NotImplemented,
numOfErrorObjects=NotImplemented,
privileges=NotImplemented):
KalturaBulkUploadJobData.__init__(self,
userId,
uploadedBy,
conversionProfileId,
resultsFileLocalPath,
resultsFileUrl,
numOfEntries,
numOfObjects,
filePath,
bulkUploadObjectType,
fileName,
objectData,
type,
emailRecipients,
numOfErrorObjects,
privileges)
PROPERTY_LOADERS = {
}
def fromXml(self, node):
KalturaBulkUploadJobData.fromXml(self, node)
self.fromXmlImpl(node, KalturaBulkUploadXmlJobData.PROPERTY_LOADERS)
def toParams(self):
kparams = KalturaBulkUploadJobData.toParams(self)
kparams.put("objectType", "KalturaBulkUploadXmlJobData")
return kparams
########## services ##########
########## main ##########
class KalturaBulkUploadXmlClientPlugin(KalturaClientPlugin):
# KalturaBulkUploadXmlClientPlugin
instance = None
# @return KalturaBulkUploadXmlClientPlugin
@staticmethod
def get():
if KalturaBulkUploadXmlClientPlugin.instance is None:
KalturaBulkUploadXmlClientPlugin.instance = KalturaBulkUploadXmlClientPlugin()
return KalturaBulkUploadXmlClientPlugin.instance
# @return array<KalturaServiceBase>
def getServices(self):
return {
}
def getEnums(self):
return {
}
def getTypes(self):
return {
'KalturaBulkUploadXmlJobData': KalturaBulkUploadXmlJobData,
}
# @return string
def getName(self):
return 'bulkUploadXml' | PypiClean |
/COMETSC-0.1.13.tar.gz/COMETSC-0.1.13/Comet/__main__.py | import os
import argparse
import datetime
import re
import pandas as pd
import numpy as np
from . import hgmd
from . import visualize as vis
from . import quads
import sys
import multiprocessing
import time
import math
import matplotlib.pyplot as plt
import random
import scipy.io
import csv
#from docs.source import conf
def init_parser(parser):
"""Initialize parser args."""
parser.add_argument(
'marker', type=str,
help=("Marker file input")
)
parser.add_argument(
'vis', type=str,
help=("vis file input")
)
parser.add_argument(
'cluster', type=str,
help=("Cluster file input")
)
parser.add_argument(
'-g', nargs='?', default=None,
help="Optional Gene list"
)
parser.add_argument(
'output_path', type=str,
help="the output directory where output files should go"
)
parser.add_argument(
'-C', nargs='?', default=None,
help="Num of cores avail for parallelization"
)
parser.add_argument(
'-X', nargs='?', default=None,
help="X argument for XL-mHG"
)
parser.add_argument(
'-L', nargs='?', default=None,
help="L argument for XL-mHG"
)
parser.add_argument(
'-Abbrev', nargs='?',default=[],
help="Choose between abbreviated or full 3-gene computation"
)
parser.add_argument(
'-K', nargs='?',default=None,
help="K-gene combinations to include"
)
parser.add_argument(
'-Down', nargs='?',default=False,
help="Downsample"
)
parser.add_argument(
'-Trim', nargs='?',default=2000,
help="Trim output files"
)
parser.add_argument(
'-Count', nargs='?',default=False,
help="Set to True when count data is being used, for visualizations."
)
parser.add_argument(
'-tenx', nargs='?',default=False,
help="Set to True when count data is being used, for visualizations."
)
parser.add_argument(
'-online', nargs='?',default=False,
help="Set to True for online version."
)
parser.add_argument(
'-skipvis', nargs='?',default=False,
help="Set to True to skip visualizations."
)
return parser
def read_data(cls_path, tsne_path, marker_path, gene_path, D, tenx, online,skipvis):
"""
Reads in cluster series, tsne data, marker expression without complements
at given paths.
"""
cls_ser = pd.read_csv(
cls_path, sep='\t', index_col=0, names=['cell', 'cluster'], squeeze=True
)
if ',' in cls_ser.index[1]:
cls_ser = pd.read_csv(
cls_path, sep=',', index_col=0, names=['cell', 'cluster'], squeeze=True )
if skipvis == 1:
tsne = None
pass
else:
tsne = pd.read_csv(
tsne_path, sep='\t', index_col=0, names=['cell', 'tSNE_1', 'tSNE_2']
)
if np.isnan(tsne['tSNE_1'][0]):
tsne = pd.read_csv(
tsne_path, sep=',', index_col=0, names=['cell', 'tSNE_1', 'tSNE_2'] )
start_= time.time()
tenx = int(tenx)
if tenx == 1:
print('Loading 10X matrix')
mat = scipy.io.mmread(marker_path+"matrix.mtx")
features_path = marker_path + "genes.tsv"
gene_names = [row[1] for row in csv.reader(open(features_path), delimiter="\t")]
barcodes_path = marker_path + "barcodes.tsv"
barcodes = [row[0] for row in csv.reader(open(barcodes_path), delimiter="\t")]
#construct pandas dataframe w/ the pieces (gene names, barcodes, counts in sparse form)
matrix = pd.DataFrame(index = gene_names, columns = barcodes )
now = time.time()
print('assembling expression matrix')
for i,j,v in zip(mat.row, mat.col, mat.data):
matrix.iat[i,j] = v
matrix.fillna(0,inplace=True)
noww = time.time()
print(str(noww-now) + ' seconds')
print('size: ' + str(matrix.shape))
no_complement_marker_exp = matrix
no_complement_marker_exp.rename_axis('cell',axis=1,inplace=True)
else:
#Should allow either tab OR comma delimited formats
try:
no_complement_marker_exp = pd.read_csv(
marker_path,sep='\t', index_col=0
).rename_axis('cell',axis=1)
if len(no_complement_marker_exp.columns) == 0:
raise Exception
elif len(no_complement_marker_exp.index) == 0:
raise Exception
else:
pass
except:
no_complement_marker_exp = pd.read_csv(
marker_path,sep=',', index_col=0
).rename_axis('cell',axis=1)
if no_complement_marker_exp.shape[1] == cls_ser.shape[0]:
pass
else:
for index,row in cls_ser.iteritems():
if str(index) in list(no_complement_marker_exp):
continue
else:
cls_ser.drop(labels=index,inplace=True)
#gene list filtering
no_complement_marker_exp = no_complement_marker_exp.loc[~no_complement_marker_exp.index.duplicated(keep='first')]
no_complement_marker_exp = np.transpose(no_complement_marker_exp)
no_complement_marker_exp.columns = [x.upper() for x in no_complement_marker_exp.columns]
no_complement_marker_exp = no_complement_marker_exp.loc[~no_complement_marker_exp.index.duplicated(keep='first')]
#gene filtering
#-------------#
if gene_path is None:
pass
else:
#read the genes
#Compatible with single line comma list OR one per line no commas OR mix of both
master_gene_list = []
with open(gene_path, "r") as genes:
lines = genes.readlines()
if len(lines) == 1:
with open(gene_path, "r") as genes:
init_read = genes.read().splitlines()
master_str = str.upper(init_read[0])
master_gene_list = master_str.split(",")
else:
for i, line in enumerate(lines):
if '\n' in line:
master_gene_list.append(line[:-1])
else:
master_gene_list.append(line)
for item in master_gene_list[:]:
if ',' in item:
new_split = item.split(",")
master_gene_list.remove(item)
for ele in new_split:
master_gene_list.append(str.upper(ele))
new_no_comp_mark_exp = pd.DataFrame()
master_gene_list = [y.upper() for y in master_gene_list]
for gene in master_gene_list:
try:
new_no_comp_mark_exp[gene] = no_complement_marker_exp[gene]
except:
pass
no_complement_marker_exp = new_no_comp_mark_exp
'''
for column_name in no_complement_marker_exp.columns:
if str.upper(column_name) in master_gene_list:
pass
else:
try:
no_complement_marker_exp.drop(column_name, axis=1, inplace=True)
except:
pass
'''
#-------------#
#downsampling
#-------------#
if D is False:
pass
else:
# get number of genes to set downsample threshold
gene_numb = len(no_complement_marker_exp.columns)
#print(gene_numb)
if gene_numb > 3000:
if int(D) < int(2500):
pass
else:
D = int(2500)
#total number of cells input
N = len(cls_ser)
#print(N)
#downsample target
M = int(D)
if N <= M:
return (cls_ser, tsne, no_complement_marker_exp, gene_path)
clusters = sorted(cls_ser.unique())
counts = { x : 0 for x in clusters}
for clus in cls_ser:
counts[clus] = counts[clus]+1
#at this point counts has values for # cells in cls
#dict goes like ->{ cluster:#cells }
take_nums = {x : 0 for x in clusters}
for clstr in take_nums:
take_nums[clstr] = math.ceil(counts[clstr]*(M/N))
summ = 0
for key in take_nums:
summ = summ + take_nums[key]
#print('Downsampled cell num ' + str(summ))
counts= { x : 0 for x in clusters}
new_cls_ser = cls_ser.copy(deep=True)
keep_first = 0
for index,value in new_cls_ser.iteritems():
keep_first = keep_first + 1
if keep_first ==1:
counts[value] = counts[value]+1
continue
new_cls_ser.drop(index,inplace=True)
cls_ser.drop(cls_ser.index[0],inplace=True)
#Now new_cls_ser has all removed except first item, which we can keep
for num in range(N-1):
init_rand_num = random.randint(0,N-num-1-1)
if counts[cls_ser[init_rand_num]] >= take_nums[cls_ser[init_rand_num]]:
cls_ser.drop(cls_ser.index[init_rand_num], inplace=True)
continue
new_cls_ser = new_cls_ser.append(pd.Series([cls_ser[init_rand_num]], index=[cls_ser.index[init_rand_num]]))
counts[cls_ser[init_rand_num]] = counts[cls_ser[init_rand_num]]+1
cls_ser.drop(cls_ser.index[init_rand_num], inplace=True)
new_cls_ser.rename_axis('cell',inplace=True)
new_cls_ser.rename('cluster', inplace=True)
return(new_cls_ser,tsne,no_complement_marker_exp,gene_path)
#-------------#
return (cls_ser, tsne, no_complement_marker_exp, gene_path)
def process(cls,X,L,plot_pages,cls_ser,tsne,marker_exp,gene_file,csv_path,vis_path,pickle_path,cluster_number,K,abbrev,cluster_overall,Trim,count_data,skipvis):
#for cls in clusters:
# To understand the flow of this section, read the print statements.
heur_limit = min(50,len(marker_exp.columns))
start_cls_time = time.time()
print('########\n# Processing cluster ' + str(cls) + '...\n########')
print(str(K) + ' gene combinations')
if len(abbrev) == 0:
pass
else:
print('Heuristic limit set to: ' + str(heur_limit))
print('Running t-test on singletons...')
try:
t_test = hgmd.batch_stats(marker_exp, cls_ser, cls)
except Exception as err:
print('t-test error')
print(err)
print('Calculating fold change')
try:
fc_test = hgmd.batch_fold_change(marker_exp, cls_ser, cls)
except Exception as err:
print('fold-change error')
print(err)
print('Running XL-mHG on singletons...')
try:
xlmhg = hgmd.batch_xlmhg(marker_exp, cls_ser, cls, X=X, L=L)
except Exception as err:
print('XLMHG error')
print(err)
try:
q_val = hgmd.batch_q(xlmhg)
except Exception as err:
print('q-val error')
print(err)
# We need to slide the cutoff indices before using them,
# to be sure they can be used in the real world. See hgmd.mhg_slide()
try:
cutoff_value = hgmd.mhg_cutoff_value(
marker_exp, xlmhg[['gene_1', 'mHG_cutoff']]
)
xlmhg = xlmhg[['gene_1', 'mHG_stat', 'mHG_pval']].merge(
hgmd.mhg_slide(marker_exp, cutoff_value), on='gene_1'
)
# Update cutoff_value after sliding
cutoff_value = pd.Series(
xlmhg['cutoff_val'].values, index=xlmhg['gene_1']
)
xlmhg = xlmhg\
.sort_values(by='mHG_stat', ascending=True)
except Exception as err:
print('error in sliding values')
print(err)
print('Creating discrete expression matrix...')
try:
discrete_exp = hgmd.discrete_exp(marker_exp, cutoff_value, abbrev, xlmhg)
except Exception as err:
print('discrete matrix construction failed')
print(err)
'''
#For checking the sliding issue
count = 0
print(discrete_exp['Reg4'].sort_values(ascending=False).head(667))
#time.sleep(100000)
print(cls_ser)
for index in discrete_exp['Reg4'].sort_values(ascending=False).head(667).iteritems():
for index2 in cls_ser.iteritems():
if index[0] == index2[0]:
if index2[1] == 0:
count = count +1
print(count)
print(discrete_exp['Reg4'].sort_values(ascending=False).head(70))
print(cls_ser.to_string())
#print(marker_exp['1600029D21Rik'].sort_values(ascending=False).head(160))
#time.sleep(100000)
'''
if skipvis == 0:
discrete_exp_full = discrete_exp.copy()
print('Finding simple true positives/negatives for singletons...')
#Gives us the singleton TP/TNs for COI and for rest of clusters
#COI is just a DF, rest of clusters are a dict of DFs
(sing_tp_tn, other_sing_tp_tn) = hgmd.tp_tn(discrete_exp, cls_ser, cls, cluster_overall)
### Take out any genes with a true positive less than 15% from the expression matrix ###
discrete_exp = discrete_exp[ sing_tp_tn['gene_1'][ sing_tp_tn['TP'] > .15 ].values ]
########################################################################################
###########
#OLD HEURISTICS
#abb = '3'
#if abb in abbrev:
# print('Heuristic Abbreviation initiated for ' + str(abbrev) )
# count = 0
# trips_list=[]
# for index,row in xlmhg.iterrows():
# if sing_tp_tn.set_index('gene_1').at[row['gene_1'],'TP'] <= .15:
# continue
# ##THREE GENE VARIABLE HERE##
# if count == trips_heur:
# break
# else:
# trips_list.append(row['gene_1'])
# count = count + 1
#else:
# trips_list = None
############
print('Finding pair expression matrix...')
(
gene_map, in_cls_count, pop_count,
in_cls_product, total_product, upper_tri_indices,
cluster_exp_matrices, cls_counts
) = hgmd.pair_product(discrete_exp, cls_ser, cls, cluster_number,cluster_overall)
if K >= 4:
print('')
print('Starting quads')
print('')
quads_in_cls, quads_total, quads_indices, odd_gene_mapped, even_gene_mapped = quads.combination_product(discrete_exp,cls_ser,cls,xlmhg)
print('')
print('')
print('')
print('')
print('HG TEST ON QUADS')
quads_fin = quads.quads_hg(gene_map,in_cls_count,pop_count,quads_in_cls,quads_total,quads_indices,odd_gene_mapped,even_gene_mapped)
if K == 3:
start_trips = time.time()
print('Finding Trips expression matrix...')
try:
trips_in_cls,trips_total,trips_indices,gene_1_mapped,gene_2_mapped,gene_3_mapped = hgmd.combination_product(discrete_exp,cls_ser,cls,abbrev,heur_limit)
except Exception as err:
print('error in 3-gene matrix construction')
print(err)
end_trips = time.time()
print(str(end_trips-start_trips) + ' seconds')
HG_start = time.time()
print('Running hypergeometric test on pairs...')
pair, revised_indices = hgmd.pair_hg(
gene_map, in_cls_count, pop_count,
in_cls_product, total_product, upper_tri_indices, abbrev, heur_limit
)
pair_q = hgmd.pairs_q(pair)
HG_end = time.time()
print(str(HG_end-HG_start) + ' seconds')
pair_out_initial = pair\
.sort_values(by='HG_pval', ascending=True)
pair_out_initial['rank'] = pair_out_initial.reset_index().index + 1
#spair_out_print = pair_out_initial.head(Trim)
pair_out_initial.to_csv(
csv_path + '/cluster_' + str(cls) + '_pair_full_unranked.csv'
)
if K == 3:
HG_start = time.time()
print('Running hypergeometric test & TP/TN on trips...')
trips = hgmd.trips_hg(
gene_map,in_cls_count,pop_count,
trips_in_cls,trips_total,trips_indices,
gene_1_mapped,gene_2_mapped,gene_3_mapped,abbrev,heur_limit
)
#print(trips)
HG_end = time.time()
print(str(HG_end-HG_start) + ' seconds')
# Pair TP/TN FOR THIS CLUSTER
print('Finding simple true positives/negatives for pairs...')
try:
pair_tp_tn = hgmd.pair_tp_tn(
gene_map, in_cls_count, pop_count,
in_cls_product, total_product, upper_tri_indices, abbrev, revised_indices
)
except Exception as err:
print(err)
#accumulates pair TP/TN vals for all other clusters
##NEW
try:
other_pair_tp_tn = {}
for key in cluster_exp_matrices:
new_pair_tp_tn = hgmd.pair_tp_tn(
gene_map, cls_counts[key], pop_count,
cluster_exp_matrices[key], total_product, upper_tri_indices,
abbrev, revised_indices
)
other_pair_tp_tn[key] = new_pair_tp_tn
other_pair_tp_tn[key].set_index(['gene_1','gene_2'],inplace=True)
except:
pass
try:
pair = pair\
.merge(pair_tp_tn, on=['gene_1', 'gene_2'], how='left')\
.merge(pair_q, on=['gene_1','gene_2'], how='left')
except:
pass
#Supplementary figure 2 code#
'''
for cl in other_pair_tp_tn:
print(cl)
print(other_pair_tp_tn[cl].loc[('LY6D', 'CD3G_negation')])
for cl in other_sing_tp_tn:
print(cl)
print(other_sing_tp_tn[cl].loc[('LY6D')])
time.sleep(10000)
'''
try:
pair_tp_tn.set_index(['gene_1','gene_2'],inplace=True)
except:
pass
# sing_tp_tn.set_index(['gene_1'], inplace=True)
rank_start = time.time()
print('Finding NEW Rank')
try:
ranked_pair,histogram = hgmd.ranker(pair,xlmhg,sing_tp_tn,other_sing_tp_tn,other_pair_tp_tn,cls_counts,in_cls_count,pop_count)
except Exception as err:
print(err)
rank_end = time.time()
print(str(rank_end - rank_start) + ' seconds')
# Save TP/TN values to be used for non-cluster-specific things
print('Pickling data for later...')
try:
sing_tp_tn.to_pickle(pickle_path + 'sing_tp_tn_' + str(cls))
pair_tp_tn.to_pickle(pickle_path + 'pair_tp_tn_' + str(cls))
except:
print('pickling failed')
#trips_tp_tn.to_pickle(pickle_path + 'trips_tp_tn' + str(cls))
print('Exporting cluster ' + str(cls) + ' output to CSV...')
try:
sing_output = xlmhg\
.merge(t_test, on='gene_1')\
.merge(fc_test, on='gene_1')\
.merge(sing_tp_tn, on='gene_1')\
.merge(q_val, on='gene_1')\
.set_index('gene_1')\
.sort_values(by='mHG_stat', ascending=True)
except Exception as err:
print(err)
sing_output = xlmhg.sort_values(by='mHG_stat',ascending=True)
sing_output.sort_values(by='gene_1',ascending=True).to_csv(
csv_path + '/cluster_' + str(cls) + '_singleton_full_unranked.csv'
)
sing_output = sing_output.loc[sing_output['TP'] >= .15]
for index, row in sing_output.iterrows():
if re.compile(".*_negation$").match(index):
if row['TN'] >= .5:
pass
else:
sing_output.drop(labels=index,axis=0,inplace=True)
# make a copy of sing_output
# run the stuff below on both the original (fine) and the copy (no negations)
sing_output_positives = sing_output.copy(deep=True)
for index, row in sing_output_positives.iterrows():
if re.compile(".*_negation$").match(index):
sing_output_positives.drop(labels=index,axis=0,inplace=True)
sing_output_positives['hgrank'] = sing_output_positives.reset_index().index + 1
sing_output_positives.sort_values(by='Log2FoldChangeAbs', ascending=False, inplace=True)
sing_output_positives['fcrank'] = sing_output_positives.reset_index().index + 1
sing_output_positives['finrank'] = sing_output_positives[['hgrank', 'fcrank']].mean(axis=1)
sing_output_positives.sort_values(by='finrank',ascending=True,inplace=True)
sing_output_positives['rank'] = sing_output_positives.reset_index().index + 1
sing_output_positives.drop('finrank',axis=1, inplace=True)
count = 1
for index,row in sing_output_positives.iterrows():
if count == 100:
break
sing_output_positives.at[index,'Plot'] = 1
count = count + 1
sing_output_positives.to_csv(
csv_path + '/cluster_' + str(cls) + '_singleton_positive_markers_ranked.csv'
)
sing_output['hgrank'] = sing_output.reset_index().index + 1
sing_output.sort_values(by='Log2FoldChangeAbs', ascending=False, inplace=True)
sing_output['fcrank'] = sing_output.reset_index().index + 1
sing_output['finrank'] = sing_output[['hgrank', 'fcrank']].mean(axis=1)
sing_output.sort_values(by='finrank',ascending=True,inplace=True)
sing_output['rank'] = sing_output.reset_index().index + 1
sing_output.drop('finrank',axis=1, inplace=True)
count = 1
for index,row in sing_output.iterrows():
if count == 100:
break
sing_output.at[index,'Plot'] = 1
count = count + 1
sing_output.to_csv(
csv_path + '/cluster_' + str(cls) + '_singleton_all_ranked.csv'
)
sing_stripped = sing_output[
['mHG_stat', 'TP', 'TN']
].reset_index().rename(index=str, columns={'gene_1': 'gene_1'})
try:
ranked_print = ranked_pair.head(Trim)
ranked_print.to_csv(
csv_path + '/cluster_' + str(cls) + '_pair_final_ranking.csv'
)
except:
print('pair file not generated, no pairs available')
ranked_pair = pd.DataFrame(data=0,index=[1,2,3],columns={'TP','TN','Plot','gene_1','gene_2','rank'})
#Add trips data pages
#does not currently do new rank scheme
if K == 3:
trips_output = trips
# .sort_values(by='HG_stat', ascending=True)
#print(trips_output)
trips_output['rank'] = trips_output.reset_index().index + 1
trips_print = trips_output.head(Trim)
trips_print.to_csv(
csv_path + '/cluster_' + str(cls) + '_trips.csv'
)
else:
trips_output = int(1)
if K >= 4:
quads_final = quads_fin\
.sort_values(by='HG_stat', ascending=True)
quads_final['rank'] = quads_final.reset_index().index + 1
quads_print = quads_final.head(Trim)
quads_print.to_csv(
csv_path + '/cluster_' + str(cls) + '_quads.csv'
)
else:
quads_final = int(1)
#plt.bar(list(histogram.keys()), histogram.values(), color='b')
#plt.savefig(vis_path + '/cluster_' + str(cls) + '_pair_histogram')
#if cls == fincls:
# cls = 0
if skipvis == 1:
print('Skipping plots...')
pass
else:
print('Drawing plots...')
vis.make_plots(
pair=ranked_pair,
sing=sing_output,
sing_tp_tn=sing_tp_tn,
xlmhg=xlmhg,
trips=trips_output,
quads_fin=quads_final,
tsne=tsne,
discrete_exp=discrete_exp_full,
marker_exp=marker_exp,
plot_pages=plot_pages,
combined_path=vis_path + '/cluster_' + str(cls) + '_pairs_as_singletons',
sing_combined_path=vis_path + '/cluster_' +
str(cls) + '_singleton',
discrete_path=vis_path + '/cluster_' + str(cls) + '_discrete_pairs',
tptn_path=vis_path + 'cluster_' + str(cls) + '_pair_TP_TN',
trips_path=vis_path + 'cluster_' + str(cls) + '_discrete_trios',
quads_path=vis_path + 'cluster_' + str(cls) + '_discrete_quads',
sing_tptn_path=vis_path + 'cluster_' + str(cls) + '_singleton_TP_TN',
count_data = count_data
)
end_cls_time=time.time()
print(str(end_cls_time - start_cls_time) + ' seconds')
#time.sleep(10000)
def main():
"""Hypergeometric marker detection. Finds markers identifying a cluster.
Reads in data from single-cell RNA sequencing. Data is in the form of 3
CSVs: gene expression data by gene by cell, 2-D tSNE data by cell, and the
clusters of interest by cell. Creates a list of genes and a list of gene
pairs (including complements), ranked by hypergeometric and t-test
significance. The highest ranked marker genes generally best identify the
cluster of interest. Saves these lists to CSV and creates gene expression
visualizations.
"""
# TODO: more precise description
#ten_x = input("10X data? [y/n]")
start_dt = datetime.datetime.now()
start_time = time.time()
print("Started on " + str(start_dt.isoformat()))
args = init_parser(argparse.ArgumentParser(
description=("Hypergeometric marker detection. Finds markers identifying a cluster. Documentation available at https://hgmd.readthedocs.io/en/latest/index.html")
)).parse_args()
output_path = args.output_path
C = args.C
K = args.K
Abbrev = args.Abbrev
Down = args.Down
X = args.X
L = args.L
marker_file = args.marker
tsne_file = args.vis
cluster_file = args.cluster
gene_file = args.g
Trim = args.Trim
count_data = args.Count
tenx = args.tenx
online = args.online
skipvis = args.skipvis
plot_pages = 30 # number of genes to plot (starting with highest ranked)
# TODO: gene pairs with expression ratio within the cluster of interest
# under [min_exp_ratio] were ignored in hypergeometric testing. This
# functionality is currently unimplemented.
# min_exp_ratio = 0.4
csv_path = output_path + 'data/'
vis_path = output_path + 'vis/'
pickle_path = output_path + '_pickles/'
try:
os.makedirs(csv_path)
except:
os.system('rm -r ' + csv_path)
os.makedirs(csv_path)
try:
os.makedirs(vis_path)
except:
os.system('rm -r ' + vis_path)
os.makedirs(vis_path)
try:
os.makedirs(pickle_path)
except:
os.system('rm -r ' + pickle_path)
os.makedirs(pickle_path)
if Trim is not None:
Trim = int(Trim)
else:
Trim = int(2000)
if C is not None:
C = abs(int(C))
else:
C = 1
if X is not None:
try:
X = float(X)
except:
raise Exception('X param must be a number between 0 and 1')
if X > 1:
X = int(1)
elif X <= 0:
X = int(0)
else:
X = float(X)
print("Set X to " + str(X) + ".")
if L is not None:
L = int(L)
print("Set L to " + str(L) + ".")
if K is not None:
K = int(K)
else:
K = 2
if K > 4:
K = 4
print('Only supports up to 4-gene combinations currently, setting K to 4')
if count_data is not None:
if count_data == str(True):
count_data = 1
print('Count Data')
elif count_data == 'yes':
count_data = 1
print('Count Data')
else:
count_data = int(0)
else:
count_data = int(0)
if tenx is not None:
if tenx == str(True):
tenx = int(1)
elif tenx == 'yes':
tenx = int(1)
else:
tenx = int(0)
else:
tenx = int(0)
if online is not None:
if online == str(True):
online = int(1)
elif online == 'yes':
online = int(1)
else:
online = int(0)
else:
online = int(0)
if skipvis is not None:
if skipvis == str(True):
skipvis = int(1)
elif skipvis == 'yes':
skipvis = int(1)
else:
skipvis = int(0)
else:
skipvis = int(0)
print("Reading data...")
if gene_file is None:
(cls_ser, tsne, no_complement_marker_exp, gene_path) = read_data(
cls_path=cluster_file,
tsne_path=tsne_file,
marker_path=marker_file,
gene_path=None,
D=Down,
tenx=tenx,
online=online,
skipvis=skipvis)
else:
(cls_ser, tsne, no_complement_marker_exp, gene_path) = read_data(
cls_path=cluster_file,
tsne_path=tsne_file,
marker_path=marker_file,
gene_path=gene_file,
D=Down,
tenx=tenx,
online=online,
skipvis=skipvis)
#throw out vals that show up in expression matrix but not in cluster assignments
cls_ser_idx = set(cls_ser.index.values.tolist())
no_complement_marker_exp.drop([ind for ind,row in no_complement_marker_exp.iterrows() if ind not in cls_ser_idx], inplace=True)
#throw out gene rows that are duplicates and print out a message to user
'''
#throw out cls_ser vals not in marker_exp
for index in cls_ser.index.values.tolist():
if index in no_complement_marker_exp.columns:
continue
else:
cls_ser.drop(index,inplace=True)
'''
print("Generating complement data...")
marker_exp = hgmd.add_complements(no_complement_marker_exp)
marker_exp.sort_values(by='cell',inplace=True)
cls_ser.sort_index(inplace=True)
# Process clusters sequentially
clusters = cls_ser.unique()
clusters.sort()
cluster_overall=clusters.copy()
#Only takes a certain number of clusters (cuts out smallest ones)
if online == 1:
max_clus_size = 15
if len(clusters) <= max_clus_size:
pass
else:
cls_helper = list(clusters.copy())
cls_size_count = {}
for item in cls_ser:
if item in cls_size_count:
cls_size_count[item] = cls_size_count[item] + 1
else:
cls_size_count[item] = 1
for counted in cls_size_count:
cls_size_count[counted] = cls_size_count[counted] / len(cls_ser)
while len(cls_helper) > max_clus_size:
lowest = 1
place = 0
for key in cls_size_count:
if cls_size_count[key] < lowest:
place = key
lowest = cls_size_count[key]
cls_helper.remove(place)
del cls_size_count[place]
clusters = np.array(cls_helper)
#Below could probably be optimized a little (new_clust not necessary),
#cores is number of simultaneous threads you want to run, can be set at will
cores = C
cluster_number = len(clusters)
# if core number is bigger than number of clusters, set it equal to number of clusters
if cores > len(clusters):
cores = len(clusters)
if cores == 1:
for cls in clusters:
process(cls,X,L,plot_pages,cls_ser,tsne,marker_exp,gene_file,csv_path,vis_path,pickle_path,cluster_number,K,Abbrev,cluster_overall,Trim,count_data,skipvis)
else:
#below loops allow for splitting the job based on core choice
group_num = math.ceil((len(clusters) / cores ))
for element in range(group_num):
new_clusters = clusters[:cores]
print(new_clusters)
jobs = []
#this loop spawns the workers and runs the code for each assigned.
#workers assigned based on the new_clusters list which is the old clusters
#split up based on core number e.g.
#clusters = [1 2 3 4 5 6] & cores = 4 --> new_clusters = [1 2 3 4], new_clusters = [5 6]
for cls in new_clusters:
p = multiprocessing.Process(target=process,
args=(cls,X,L,plot_pages,cls_ser,tsne,marker_exp,gene_file,csv_path,vis_path,pickle_path,cluster_number,K,Abbrev,cluster_overall,Trim,count_data,skipvis))
jobs.append(p)
p.start()
# join only after every worker in the batch has started;
# joining inside the start loop would run the batch sequentially
for p in jobs:
p.join()
new_clusters = []
clusters = clusters[cores:len(clusters)]
end_time = time.time()
# Add text file to keep track of everything
end_dt = datetime.datetime.now()
print("Ended on " + end_dt.isoformat())
metadata = open(output_path + 'metadata.txt', 'w')
metadata.write("Started: " + start_dt.isoformat())
metadata.write("\nEnded: " + end_dt.isoformat())
metadata.write("\nElapsed: " + str(end_dt - start_dt))
#metadata.write("\nGenerated by COMET version " + conf.version)
print('Took ' + str(end_time-start_time) + ' seconds')
print('Which is ' + str( (end_time-start_time)/60 ) + ' minutes')
if __name__ == '__main__':
main() | PypiClean |
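The singleton ranking inside `process` above combines two orderings — hypergeometric rank (`hgrank`, smaller statistic is better) and fold-change rank (`fcrank`, larger magnitude is better) — by averaging rank positions. Below is a pandas-free sketch of that rank-averaging step; the function and variable names are illustrative, not COMET's own.

```python
def combined_rank(scores_a, scores_b):
    """Rank items by the mean of their positions in two orderings.

    scores_a: dict item -> score where *smaller is better* (e.g. a p-value)
    scores_b: dict item -> score where *larger is better* (e.g. |log2 FC|)
    Returns the items sorted best-first by mean rank position.
    """
    order_a = sorted(scores_a, key=lambda g: scores_a[g])                # ascending
    order_b = sorted(scores_b, key=lambda g: scores_b[g], reverse=True)  # descending
    rank_a = {g: i + 1 for i, g in enumerate(order_a)}
    rank_b = {g: i + 1 for i, g in enumerate(order_b)}
    return sorted(scores_a, key=lambda g: (rank_a[g] + rank_b[g]) / 2)
```

Averaging rank positions rather than raw scores keeps the two criteria on a common scale, so neither the p-value nor the fold change dominates simply because of its units.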
/NlpToolkit-MorphologicalDisambiguation-1.0.16.tar.gz/NlpToolkit-MorphologicalDisambiguation-1.0.16/MorphologicalDisambiguation/AutoDisambiguator.py | from __future__ import annotations
from DataStructure.CounterHashMap import CounterHashMap
from MorphologicalAnalysis.FsmMorphologicalAnalyzer import FsmMorphologicalAnalyzer
from MorphologicalAnalysis.FsmParse import FsmParse
from MorphologicalAnalysis.FsmParseList import FsmParseList
from MorphologicalAnalysis.MorphologicalTag import MorphologicalTag
class AutoDisambiguator:
morphological_analyzer: FsmMorphologicalAnalyzer
@staticmethod
def isAnyWordSecondPerson(index: int,
correctParses: list) -> bool:
count = 0
for i in range(index - 1, -1, -1):
if correctParses[i].containsTag(MorphologicalTag.A2SG) or \
correctParses[i].containsTag(MorphologicalTag.P2SG):
count = count + 1
return count >= 1
@staticmethod
def isPossessivePlural(index: int,
correctParses: list) -> bool:
for i in range(index - 1, -1, -1):
if correctParses[i].isNoun():
return correctParses[i].isPlural()
return False
@staticmethod
def nextWordPos(nextParseList: FsmParseList) -> str:
_map = CounterHashMap()
for i in range(nextParseList.size()):
_map.put(nextParseList.getFsmParse(i).getPos())
return _map.max()
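nextWordPos above takes a majority vote over the POS tags of all analyses of the next word, via CounterHashMap. The same idea can be sketched with the standard library's Counter; the plain list of POS strings below stands in for the FsmParseList and is purely illustrative.

```python
# Majority vote over POS tags, as in nextWordPos, but using the standard
# library's Counter in place of CounterHashMap. The input is a plain list
# of POS strings standing in for a FsmParseList.
from collections import Counter

def majority_pos(pos_tags):
    if not pos_tags:
        return None
    return Counter(pos_tags).most_common(1)[0][0]
```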
@staticmethod
def isBeforeLastWord(index: int,
fsmParses: list) -> bool:
return index + 2 == len(fsmParses)
@staticmethod
def nextWordExists(index: int,
fsmParses: list) -> bool:
return index + 1 < len(fsmParses)
@staticmethod
def isNextWordNoun(index: int,
fsmParses: list) -> bool:
return index + 1 < len(fsmParses) and AutoDisambiguator.nextWordPos(fsmParses[index + 1]) == "NOUN"
@staticmethod
def isNextWordNum(index: int,
fsmParses: list) -> bool:
return index + 1 < len(fsmParses) and AutoDisambiguator.nextWordPos(fsmParses[index + 1]) == "NUM"
@staticmethod
def isNextWordNounOrAdjective(index: int,
fsmParses: list) -> bool:
return index + 1 < len(fsmParses) and (AutoDisambiguator.nextWordPos(fsmParses[index + 1]) == "NOUN" or
AutoDisambiguator.nextWordPos(fsmParses[index + 1]) == "ADJ" or
AutoDisambiguator.nextWordPos(fsmParses[index + 1]) == "DET")
@staticmethod
def isFirstWord(index: int) -> bool:
return index == 0
@staticmethod
def containsTwoNeOrYa(fsmParses: list,
word: str) -> bool:
count = 0
for fsm_parse in fsmParses:
surface_form = fsm_parse.getFsmParse(0).getSurfaceForm()
if surface_form == word:
count = count + 1
return count == 2
@staticmethod
def hasPreviousWordTag(index: int,
correctParses: list,
tag: MorphologicalTag) -> bool:
return index > 0 and correctParses[index - 1].containsTag(tag)
@staticmethod
def selectCaseForParseString(parseString: str,
index: int,
fsmParses: list,
correctParses: list) -> str:
surface_form = fsmParses[index].getFsmParse(0).getSurfaceForm()
root = fsmParses[index].getFsmParse(0).getWord().getName()
last_word = fsmParses[len(fsmParses) - 1].getFsmParse(0).getSurfaceForm()
# kısmını, duracağını, grubunun #
if parseString == "P2SG$P3SG":
if AutoDisambiguator.isAnyWordSecondPerson(index, correctParses):
return "P2SG"
return "P3SG"
elif parseString == "A2SG+P2SG$A3SG+P3SG":
if AutoDisambiguator.isAnyWordSecondPerson(index, correctParses):
return "A2SG+P2SG"
return "A3SG+P3SG"
# BİR #
elif parseString == "ADJ$ADV$DET$NUM+CARD":
return "DET"
# tahminleri, işleri, hisseleri #
elif parseString == "A3PL+P3PL+NOM$A3PL+P3SG+NOM$A3PL+PNON+ACC$A3SG+P3PL+NOM":
if AutoDisambiguator.isPossessivePlural(index, correctParses):
return "A3SG+P3PL+NOM"
return "A3PL+P3SG+NOM"
# Ocak, Cuma, ABD #
elif parseString == "A3SG$PROP+A3SG":
if index > 0:
return "PROP+A3SG"
# şirketin, seçimlerin, borsacıların, kitapların #
elif parseString == "P2SG+NOM$PNON+GEN":
if AutoDisambiguator.isAnyWordSecondPerson(index, correctParses):
return "P2SG+NOM"
return "PNON+GEN"
# FAZLA #
# ÇOK #
elif parseString == "ADJ$ADV$DET$POSTP+PCABL" or parseString == "ADJ$ADV$POSTP+PCABL":
if AutoDisambiguator.hasPreviousWordTag(index, correctParses, MorphologicalTag.ABLATIVE):
return "POSTP+PCABL"
if index + 1 < len(fsmParses):
if AutoDisambiguator.nextWordPos(fsmParses[index + 1]) == "NOUN":
return "ADJ"
elif AutoDisambiguator.nextWordPos(fsmParses[index + 1]) == "ADJ" or \
AutoDisambiguator.nextWordPos(fsmParses[index + 1]) == "ADV" or \
AutoDisambiguator.nextWordPos(fsmParses[index + 1]) == "VERB":
return "ADV"
elif parseString == "ADJ$NOUN+A3SG+PNON+NOM":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ"
return "NOUN+A3SG+PNON+NOM"
# fanatiklerini, senetlerini, olduklarını #
elif parseString == "A3PL+P2SG$A3PL+P3PL$A3PL+P3SG$A3SG+P3PL":
if AutoDisambiguator.isAnyWordSecondPerson(index, correctParses):
return "A3PL+P2SG"
if AutoDisambiguator.isPossessivePlural(index, correctParses):
return "A3SG+P3PL"
else:
return "A3PL+P3SG"
elif parseString == "ADJ$NOUN+PROP+A3SG+PNON+NOM":
if index > 0:
return "NOUN+PROP+A3SG+PNON+NOM"
# BU, ŞU #
elif parseString == "DET$PRON+DEMONSP+A3SG+PNON+NOM":
if AutoDisambiguator.isNextWordNoun(index, fsmParses):
return "DET"
return "PRON+DEMONSP+A3SG+PNON+NOM"
# gelebilir #
elif parseString == "AOR+A3SG$AOR^DB+ADJ+ZERO":
if AutoDisambiguator.isBeforeLastWord(index, fsmParses):
return "AOR+A3SG"
elif AutoDisambiguator.isFirstWord(index):
return "AOR^DB+ADJ+ZERO"
elif AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "AOR^DB+ADJ+ZERO"
else:
return "AOR+A3SG"
elif parseString == "ADV$NOUN+A3SG+PNON+NOM":
return "ADV"
elif parseString == "ADJ$ADV":
if AutoDisambiguator.isNextWordNoun(index, fsmParses):
return "ADJ"
return "ADV"
elif parseString == "P2SG$PNON":
if AutoDisambiguator.isAnyWordSecondPerson(index, correctParses):
return "P2SG"
return "PNON"
# etti, kırdı #
elif parseString == "NOUN+A3SG+PNON+NOM^DB+VERB+ZERO$VERB+POS":
if AutoDisambiguator.isBeforeLastWord(index, fsmParses):
return "VERB+POS"
# İLE #
elif parseString == "CONJ$POSTP+PCNOM":
return "POSTP+PCNOM"
# gelecek #
elif parseString == "POS+FUT+A3SG$POS^DB+ADJ+FUTPART+PNON":
if AutoDisambiguator.isBeforeLastWord(index, fsmParses):
return "POS+FUT+A3SG"
return "POS^DB+ADJ+FUTPART+PNON"
elif parseString == "ADJ^DB$NOUN+A3SG+PNON+NOM^DB":
if root == "yok" or root == "düşük" or root == "eksik" or root == "rahat" or root == "orta" \
or root == "vasat":
return "ADJ^DB"
return "NOUN+A3SG+PNON+NOM^DB"
# yaptık, şüphelendik #
elif parseString == "POS+PAST+A1PL$POS^DB+ADJ+PASTPART+PNON$POS^DB+NOUN+PASTPART+A3SG+PNON+NOM":
return "POS+PAST+A1PL"
# ederim, yaparım #
elif parseString == "AOR+A1SG$AOR^DB+ADJ+ZERO^DB+NOUN+ZERO+A3SG+P1SG+NOM":
return "AOR+A1SG"
# geçti, vardı, aldı #
elif parseString == "ADJ^DB+VERB+ZERO$VERB+POS":
if root == "var" and not AutoDisambiguator.isPossessivePlural(index, correctParses):
return "ADJ^DB+VERB+ZERO"
return "VERB+POS"
# ancak #
elif parseString == "ADV$CONJ":
return "CONJ"
# yaptığı, ettiği #
elif parseString == "ADJ+PASTPART+P3SG$NOUN+PASTPART+A3SG+P3SG+NOM":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ+PASTPART+P3SG"
return "NOUN+PASTPART+A3SG+P3SG+NOM"
# ÖNCE, SONRA #
elif parseString == "ADV$NOUN+A3SG+PNON+NOM$POSTP+PCABL":
if AutoDisambiguator.hasPreviousWordTag(index, correctParses, MorphologicalTag.ABLATIVE):
return "POSTP+PCABL"
return "ADV"
elif parseString == "NARR+A3SG$NARR^DB+ADJ+ZERO":
if AutoDisambiguator.isBeforeLastWord(index, fsmParses):
return "NARR+A3SG"
return "NARR^DB+ADJ+ZERO"
elif parseString == "ADJ$NOUN+A3SG+PNON+NOM$NOUN+PROP+A3SG+PNON+NOM":
if index > 0:
return "NOUN+PROP+A3SG+PNON+NOM"
elif AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ"
return "NOUN+A3SG+PNON+NOM"
# ödediğim #
elif parseString == "ADJ+PASTPART+P1SG$NOUN+PASTPART+A3SG+P1SG+NOM":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ+PASTPART+P1SG"
return "NOUN+PASTPART+A3SG+P1SG+NOM"
# O #
elif parseString == "DET$PRON+DEMONSP+A3SG+PNON+NOM$PRON+PERS+A3SG+PNON+NOM":
if AutoDisambiguator.isNextWordNoun(index, fsmParses):
return "DET"
return "PRON+PERS+A3SG+PNON+NOM"
# BAZI #
elif parseString == "ADJ$DET$PRON+QUANTP+A3SG+P3SG+NOM":
return "DET"
# ONUN, ONA, ONDAN, ONUNLA, OYDU, ONUNKİ #
elif parseString == "DEMONSP$PERS":
return "PERS"
elif parseString == "ADJ$NOUN+A3SG+PNON+NOM$VERB+POS+IMP+A2SG":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ"
return "NOUN+A3SG+PNON+NOM"
# hazineler, kıymetler #
elif parseString == "A3PL+PNON+NOM$A3SG+PNON+NOM^DB+VERB+ZERO+PRES+A3PL$PROP+A3PL+PNON+NOM":
if index > 0:
if fsmParses[index].getFsmParse(0).isCapitalWord():
return "PROP+A3PL+PNON+NOM"
return "A3PL+PNON+NOM"
# ARTIK, GERİ #
elif parseString == "ADJ$ADV$NOUN+A3SG+PNON+NOM":
if root == "artık":
return "ADV"
elif AutoDisambiguator.isNextWordNoun(index, fsmParses):
return "ADJ"
return "ADV"
elif parseString == "P1SG+NOM$PNON+NOM^DB+VERB+ZERO+PRES+A1SG":
if AutoDisambiguator.isBeforeLastWord(index, fsmParses) or root == "değil":
return "PNON+NOM^DB+VERB+ZERO+PRES+A1SG"
return "P1SG+NOM"
# görülmektedir #
elif parseString == "POS+PROG2$POS^DB+NOUN+INF+A3SG+PNON+LOC^DB+VERB+ZERO+PRES":
return "POS+PROG2"
# NE #
elif parseString == "ADJ$ADV$CONJ$PRON+QUESP+A3SG+PNON+NOM":
if last_word == "?":
return "PRON+QUESP+A3SG+PNON+NOM"
if AutoDisambiguator.containsTwoNeOrYa(fsmParses, "ne"):
return "CONJ"
if AutoDisambiguator.isNextWordNoun(index, fsmParses):
return "ADJ"
return "ADV"
# TÜM #
elif parseString == "DET$NOUN+A3SG+PNON+NOM":
return "DET"
# AZ #
elif parseString == "ADJ$ADV$POSTP+PCABL$VERB+POS+IMP+A2SG":
if AutoDisambiguator.hasPreviousWordTag(index, correctParses, MorphologicalTag.ABLATIVE):
return "POSTP+PCABL"
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ"
return "ADV"
# görülmedik #
elif parseString == "NEG+PAST+A1PL$NEG^DB+ADJ+PASTPART+PNON$NEG^DB+NOUN+PASTPART+A3SG+PNON+NOM":
if surface_form == "alışılmadık":
return "NEG^DB+ADJ+PASTPART+PNON"
return "NEG+PAST+A1PL"
elif parseString == "DATE$NUM+FRACTION":
return "NUM+FRACTION"
# giriş, satış, öpüş, vuruş #
elif parseString == "POS^DB+NOUN+INF3+A3SG+PNON+NOM$RECIP+POS+IMP+A2SG":
return "POS^DB+NOUN+INF3+A3SG+PNON+NOM"
# başka, yukarı #
elif parseString == "ADJ$POSTP+PCABL":
if AutoDisambiguator.hasPreviousWordTag(index, correctParses, MorphologicalTag.ABLATIVE):
return "POSTP+PCABL"
return "ADJ"
# KARŞI #
elif parseString == "ADJ$ADV$NOUN+A3SG+PNON+NOM$POSTP+PCDAT":
if AutoDisambiguator.hasPreviousWordTag(index, correctParses, MorphologicalTag.DATIVE):
return "POSTP+PCDAT"
if AutoDisambiguator.isNextWordNoun(index, fsmParses):
return "ADJ"
return "ADV"
# BEN #
elif parseString == "NOUN+A3SG$NOUN+PROP+A3SG$PRON+PERS+A1SG":
return "PRON+PERS+A1SG"
# yapıcı, verici #
elif parseString == "ADJ+AGT$NOUN+AGT+A3SG+PNON+NOM":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ+AGT"
return "NOUN+AGT+A3SG+PNON+NOM"
# BİLE #
elif parseString == "ADV$VERB+POS+IMP+A2SG":
return "ADV"
# ortalamalar, uzaylılar, demokratlar #
elif parseString == "NOUN+ZERO+A3PL+PNON+NOM$VERB+ZERO+PRES+A3PL":
return "NOUN+ZERO+A3PL+PNON+NOM"
# yasa, diye, yıla #
elif parseString == "NOUN+A3SG+PNON+DAT$VERB+POS+OPT+A3SG":
return "NOUN+A3SG+PNON+DAT"
# BİZ, BİZE #
elif parseString == "NOUN+A3SG$PRON+PERS+A1PL":
return "PRON+PERS+A1PL"
# AZDI #
elif parseString == "ADJ^DB+VERB+ZERO$POSTP+PCABL^DB+VERB+ZERO$VERB+POS":
return "ADJ^DB+VERB+ZERO"
# BİRİNCİ, İKİNCİ, ÜÇÜNCÜ, DÖRDÜNCÜ, BEŞİNCİ #
elif parseString == "ADJ$NUM+ORD":
return "ADJ"
# AY #
elif parseString == "INTERJ$NOUN+A3SG+PNON+NOM$VERB+POS+IMP+A2SG":
return "NOUN+A3SG+PNON+NOM"
# konuşmam, savunmam, etmem #
elif parseString == "NEG+AOR+A1SG$POS^DB+NOUN+INF2+A3SG+P1SG+NOM":
return "NEG+AOR+A1SG"
# YA #
elif parseString == "CONJ$INTERJ":
if AutoDisambiguator.containsTwoNeOrYa(fsmParses, "ya"):
return "CONJ"
if AutoDisambiguator.nextWordExists(index, fsmParses) and \
fsmParses[index + 1].getFsmParse(0).getSurfaceForm() == "da":
return "CONJ"
return "INTERJ"
elif parseString == "A3PL+P3PL$A3PL+P3SG$A3SG+P3PL":
if AutoDisambiguator.isPossessivePlural(index, correctParses):
return "A3SG+P3PL"
return "A3PL+P3SG"
# YÜZDE, YÜZLÜ #
elif parseString == "NOUN$NUM+CARD^DB+NOUN+ZERO":
return "NOUN"
# almanlar, uzmanlar, elmaslar, katiller #
elif parseString == "ADJ^DB+VERB+ZERO+PRES+A3PL$NOUN+A3PL+PNON+NOM$NOUN+A3SG+PNON+NOM^DB+VERB+ZERO+PRES+A3PL":
return "NOUN+A3PL+PNON+NOM"
# fazlası, yetkilisi #
elif parseString == "ADJ+JUSTLIKE$NOUN+ZERO+A3SG+P3SG+NOM":
return "NOUN+ZERO+A3SG+P3SG+NOM"
# HERKES, HERKESTEN, HERKESLE, HERKES #
elif parseString == "NOUN+A3SG+PNON$PRON+QUANTP+A3PL+P3PL":
return "PRON+QUANTP+A3PL+P3PL"
# BEN, BENDEN, BENCE, BANA, BENDE #
elif parseString == "NOUN+A3SG$PRON+PERS+A1SG":
return "PRON+PERS+A1SG"
# karşısından, geriye, geride #
elif parseString == "ADJ^DB+NOUN+ZERO$NOUN":
return "ADJ^DB+NOUN+ZERO"
# gideceği, kalacağı #
elif parseString == "ADJ+FUTPART+P3SG$NOUN+FUTPART+A3SG+P3SG+NOM":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ+FUTPART+P3SG"
return "NOUN+FUTPART+A3SG+P3SG+NOM"
# bildiğimiz, geçtiğimiz, yaşadığımız #
elif parseString == "ADJ+PASTPART+P1PL$NOUN+PASTPART+A3SG+P1PL+NOM":
return "ADJ+PASTPART+P1PL"
# eminim, memnunum, açım #
elif parseString == "NOUN+ZERO+A3SG+P1SG+NOM$VERB+ZERO+PRES+A1SG":
return "VERB+ZERO+PRES+A1SG"
# yaparlar, olabilirler, değiştirirler #
elif parseString == "AOR+A3PL$AOR^DB+ADJ+ZERO^DB+NOUN+ZERO+A3PL+PNON+NOM":
return "AOR+A3PL"
# san, yasa #
elif parseString == "NOUN+A3SG+PNON+NOM$NOUN+PROP+A3SG+PNON+NOM$VERB+POS+IMP+A2SG":
if index > 0:
return "NOUN+PROP+A3SG+PNON+NOM"
# etmeyecek, yapmayacak, koşmayacak #
elif parseString == "NEG+FUT+A3SG$NEG^DB+ADJ+FUTPART+PNON":
return "NEG+FUT+A3SG"
# etmeli, olmalı #
elif parseString == "POS+NECES+A3SG$POS^DB+NOUN+INF2+A3SG+PNON+NOM^DB+ADJ+WITH":
if AutoDisambiguator.isBeforeLastWord(index, fsmParses):
return "POS+NECES+A3SG"
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "POS^DB+NOUN+INF2+A3SG+PNON+NOM^DB+ADJ+WITH"
return "POS+NECES+A3SG"
# DE #
elif parseString == "CONJ$NOUN+PROP+A3SG+PNON+NOM$VERB+POS+IMP+A2SG":
if index > 0:
return "NOUN+PROP+A3SG+PNON+NOM"
# GEÇ, SIK #
elif parseString == "ADJ$ADV$VERB+POS+IMP+A2SG":
if surface_form == "sık":
previous_word = ""
next_word = ""
if index - 1 > -1:
previous_word = fsmParses[index - 1].getFsmParse(0).getSurfaceForm()
if index + 1 < len(fsmParses):
next_word = fsmParses[index + 1].getFsmParse(0).getSurfaceForm()
if previous_word == "sık" or next_word == "sık":
return "ADV"
if AutoDisambiguator.isNextWordNoun(index, fsmParses):
return "ADJ"
return "ADV"
# BİRLİKTE #
elif parseString == "ADV$POSTP+PCINS":
if AutoDisambiguator.hasPreviousWordTag(index, correctParses, MorphologicalTag.INSTRUMENTAL):
return "POSTP+PCINS"
return "ADV"
# yavaşça, dürüstçe, fazlaca #
elif parseString == "ADJ+ASIF$ADV+LY$NOUN+ZERO+A3SG+PNON+EQU":
return "ADV+LY"
# FAZLADIR, FAZLAYDI, ÇOKTU, ÇOKTUR #
elif parseString == "ADJ^DB$POSTP+PCABL^DB":
if AutoDisambiguator.hasPreviousWordTag(index, correctParses, MorphologicalTag.ABLATIVE):
return "POSTP+PCABL^DB"
return "ADJ^DB"
# kaybettikleri, umdukları, gösterdikleri #
elif parseString == "ADJ+PASTPART+P3PL$NOUN+PASTPART+A3PL+P3PL+NOM$NOUN+PASTPART+A3PL+P3SG+NOM$NOUN+PASTPART" \
"+A3SG+P3PL+NOM":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ+PASTPART+P3PL"
if AutoDisambiguator.isPossessivePlural(index, correctParses):
return "NOUN+PASTPART+A3SG+P3PL+NOM"
return "NOUN+PASTPART+A3PL+P3SG+NOM"
# yılın, yolun #
elif parseString == "NOUN+A3SG+P2SG+NOM$NOUN+A3SG+PNON+GEN$VERB+POS+IMP+A2PL$VERB^DB+VERB+PASS+POS+IMP+A2SG":
if AutoDisambiguator.isAnyWordSecondPerson(index, correctParses):
return "NOUN+A3SG+P2SG+NOM"
return "NOUN+A3SG+PNON+GEN"
# sürmekte, beklenmekte, değişmekte #
elif parseString == "POS+PROG2+A3SG$POS^DB+NOUN+INF+A3SG+PNON+LOC":
return "POS+PROG2+A3SG"
# KİMSE, KİMSEDE, KİMSEYE #
elif parseString == "NOUN+A3SG+PNON$PRON+QUANTP+A3SG+P3SG":
return "PRON+QUANTP+A3SG+P3SG"
# DOĞRU #
elif parseString == "ADJ$NOUN+A3SG+PNON+NOM$POSTP+PCDAT":
if AutoDisambiguator.hasPreviousWordTag(index, correctParses, MorphologicalTag.DATIVE):
return "POSTP+PCDAT"
return "ADJ"
# ikisini, ikisine, fazlasına #
elif parseString == "ADJ+JUSTLIKE^DB+NOUN+ZERO+A3SG+P2SG$NOUN+ZERO+A3SG+P3SG":
return "NOUN+ZERO+A3SG+P3SG"
# kişilerdir, aylardır, yıllardır #
elif parseString == "A3PL+PNON+NOM^DB+ADV+SINCE$A3PL+PNON+NOM^DB+VERB+ZERO+PRES+COP+A3SG$A3SG+PNON+NOM^DB" \
"+VERB+ZERO+PRES+A3PL+COP":
if root == "yıl" or root == "süre" or root == "zaman" or root == "ay":
return "A3PL+PNON+NOM^DB+ADV+SINCE"
else:
return "A3PL+PNON+NOM^DB+VERB+ZERO+PRES+COP+A3SG"
# HEP #
elif parseString == "ADV$PRON+QUANTP+A3SG+P3SG+NOM":
return "ADV"
# O #
elif parseString == "DET$NOUN+PROP+A3SG+PNON+NOM$PRON+DEMONSP+A3SG+PNON+NOM$PRON+PERS+A3SG+PNON+NOM":
if AutoDisambiguator.isNextWordNoun(index, fsmParses):
return "DET"
else:
return "PRON+PERS+A3SG+PNON+NOM"
# yapmalıyız, etmeliyiz, alınmalıdır #
elif parseString == "POS+NECES$POS^DB+NOUN+INF2+A3SG+PNON+NOM^DB+ADJ+WITH^DB+VERB+ZERO+PRES":
return "POS+NECES"
# kızdı, çekti, bozdu #
elif parseString == "ADJ^DB+VERB+ZERO$NOUN+A3SG+PNON+NOM^DB+VERB+ZERO$VERB+POS":
return "VERB+POS"
# BİZİMLE #
elif parseString == "NOUN+A3SG+P1SG$PRON+PERS+A1PL+PNON":
return "PRON+PERS+A1PL+PNON"
# VARDIR #
elif parseString == "ADJ^DB+VERB+ZERO+PRES+COP+A3SG$VERB^DB+VERB+CAUS+POS+IMP+A2SG":
return "ADJ^DB+VERB+ZERO+PRES+COP+A3SG"
# Mİ #
elif parseString == "NOUN+A3SG+PNON+NOM$QUES+PRES+A3SG":
return "QUES+PRES+A3SG"
# BENİM #
elif parseString == "NOUN+A3SG+P1SG+NOM$NOUN+A3SG+PNON+NOM^DB+VERB+ZERO+PRES+A1SG$PRON+PERS+A1SG+PNON" \
"+GEN$PRON+PERS+A1SG+PNON+NOM^DB+VERB+ZERO+PRES+A1SG":
return "PRON+PERS+A1SG+PNON+GEN"
# SUN #
elif parseString == "NOUN+PROP+A3SG+PNON+NOM$VERB+POS+IMP+A2SG":
return "NOUN+PROP+A3SG+PNON+NOM"
elif parseString == "ADJ+JUSTLIKE$NOUN+ZERO+A3SG+P3SG+NOM$NOUN+ZERO^DB+ADJ+ALMOST":
return "NOUN+ZERO+A3SG+P3SG+NOM"
# düşündük, ettik, kazandık #
elif parseString == "NOUN+A3SG+PNON+NOM^DB+VERB+ZERO+PAST+A1PL$VERB+POS+PAST+A1PL$VERB+POS^DB+ADJ+PASTPART" \
"+PNON$VERB+POS^DB+NOUN+PASTPART+A3SG+PNON+NOM":
return "VERB+POS+PAST+A1PL"
# komiktir, eksiktir, mevcuttur, yoktur #
elif parseString == "ADJ^DB+VERB+ZERO+PRES+COP+A3SG$NOUN+A3SG+PNON+NOM^DB+ADV+SINCE$NOUN+A3SG+PNON+NOM^DB" \
"+VERB+ZERO+PRES+COP+A3SG":
return "ADJ^DB+VERB+ZERO+PRES+COP+A3SG"
# edeceğim, ekeceğim, koşacağım, gideceğim, savaşacağım, olacağım #
elif parseString == "POS+FUT+A1SG$POS^DB+ADJ+FUTPART+P1SG$POS^DB+NOUN+FUTPART+A3SG+P1SG+NOM":
return "POS+FUT+A1SG"
# A #
elif parseString == "ADJ$INTERJ$NOUN+PROP+A3SG+PNON+NOM":
return "NOUN+PROP+A3SG+PNON+NOM"
# BİZİ #
elif parseString == "NOUN+A3SG+P3SG+NOM$NOUN+A3SG+PNON+ACC$PRON+PERS+A1PL+PNON+ACC":
return "PRON+PERS+A1PL+PNON+ACC"
# BİZİM #
elif parseString == "NOUN+A3SG+P1SG+NOM$NOUN+A3SG+PNON+NOM^DB+VERB+ZERO+PRES+A1SG$PRON+PERS+A1PL+PNON" \
"+GEN$PRON+PERS+A1PL+PNON+NOM^DB+VERB+ZERO+PRES+A1SG":
return "PRON+PERS+A1PL+PNON+GEN"
# erkekler, kadınlar, madenler, uzmanlar#
elif parseString == "ADJ^DB+VERB+ZERO+PRES+A3PL$NOUN+A3PL+PNON+NOM$NOUN+A3SG+PNON+NOM^DB+VERB+ZERO+PRES" \
"+A3PL$NOUN+PROP+A3PL+PNON+NOM":
return "NOUN+A3PL+PNON+NOM"
# TABİ #
elif parseString == "ADJ$INTERJ":
return "ADJ"
elif parseString == "AOR+A2PL$AOR^DB+ADJ+ZERO^DB+ADJ+JUSTLIKE^DB+NOUN+ZERO+A3SG+P2PL+NOM":
return "AOR+A2PL"
# ayın, düşünün#
elif parseString == "NOUN+A3SG+P2SG+NOM$NOUN+A3SG+PNON+GEN$VERB+POS+IMP+A2PL":
if AutoDisambiguator.isBeforeLastWord(index, fsmParses):
return "VERB+POS+IMP+A2PL"
return "NOUN+A3SG+PNON+GEN"
# ödeyecekler, olacaklar #
elif parseString == "POS+FUT+A3PL$POS^DB+NOUN+FUTPART+A3PL+PNON+NOM":
return "POS+FUT+A3PL"
# 9:30'daki #
elif parseString == "P3SG$PNON":
return "PNON"
# olabilecek, yapabilecek #
elif parseString == "ABLE+FUT+A3SG$ABLE^DB+ADJ+FUTPART+PNON":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ABLE^DB+ADJ+FUTPART+PNON"
return "ABLE+FUT+A3SG"
# düşmüş duymuş artmış #
elif parseString == "NOUN+A3SG+PNON+NOM^DB+VERB+ZERO+NARR+A3SG$VERB+POS+NARR+A3SG$VERB+POS+NARR^DB+ADJ+ZERO":
if AutoDisambiguator.isBeforeLastWord(index, fsmParses):
return "VERB+POS+NARR+A3SG"
return "VERB+POS+NARR^DB+ADJ+ZERO"
# BERİ, DIŞARI, AŞAĞI #
elif parseString == "ADJ$ADV$NOUN+A3SG+PNON+NOM$POSTP+PCABL":
if AutoDisambiguator.hasPreviousWordTag(index, correctParses, MorphologicalTag.ABLATIVE):
return "POSTP+PCABL"
return "ADV"
# TV, CD #
elif parseString == "A3SG+PNON+ACC$PROP+A3SG+PNON+NOM":
return "A3SG+PNON+ACC"
# değinmeyeceğim, vermeyeceğim #
elif parseString == "NEG+FUT+A1SG$NEG^DB+ADJ+FUTPART+P1SG$NEG^DB+NOUN+FUTPART+A3SG+P1SG+NOM":
return "NEG+FUT+A1SG"
# görünüşe, satışa, duruşa #
elif parseString == "POS^DB+NOUN+INF3+A3SG+PNON+DAT$RECIP+POS+OPT+A3SG":
return "POS^DB+NOUN+INF3+A3SG+PNON+DAT"
# YILDIR, AYDIR, YOLDUR #
elif parseString == "NOUN+A3SG+PNON+NOM^DB+ADV+SINCE$NOUN+A3SG+PNON+NOM^DB+VERB+ZERO+PRES+COP+A3SG$VERB^DB" \
"+VERB+CAUS+POS+IMP+A2SG":
if root == "yıl" or root == "ay":
return "NOUN+A3SG+PNON+NOM^DB+ADV+SINCE"
else:
return "NOUN+A3SG+PNON+NOM^DB+VERB+ZERO+PRES+COP+A3SG"
# BENİ #
elif parseString == "NOUN+A3SG+P3SG+NOM$NOUN+A3SG+PNON+ACC$PRON+PERS+A1SG+PNON+ACC":
return "PRON+PERS+A1SG+PNON+ACC"
# edemezsin, kanıtlarsın, yapamazsın #
elif parseString == "AOR+A2SG$AOR^DB+ADJ+ZERO^DB+ADJ+JUSTLIKE^DB+NOUN+ZERO+A3SG+P2SG+NOM":
return "AOR+A2SG"
# BÜYÜME, ATAMA, KARIMA, KORUMA, TANIMA, ÜREME #
elif parseString == "NOUN+A3SG+P1SG+DAT$VERB+NEG+IMP+A2SG$VERB+POS^DB+NOUN+INF2+A3SG+PNON+NOM":
if root == "karı":
return "NOUN+A3SG+P1SG+DAT"
return "VERB+POS^DB+NOUN+INF2+A3SG+PNON+NOM"
# HANGİ #
elif parseString == "ADJ$PRON+QUESP+A3SG+PNON+NOM":
if last_word == "?":
return "PRON+QUESP+A3SG+PNON+NOM"
return "ADJ"
# GÜCÜNÜ, GÜCÜNÜN, ESASINDA #
elif parseString == "ADJ^DB+NOUN+ZERO+A3SG+P2SG$ADJ^DB+NOUN+ZERO+A3SG+P3SG$NOUN+A3SG+P2SG$NOUN+A3SG+P3SG":
return "NOUN+A3SG+P3SG"
# YILININ, YOLUNUN, DİLİNİN #
elif parseString == "NOUN+A3SG+P2SG+GEN$NOUN+A3SG+P3SG+GEN$VERB^DB+VERB+PASS+POS+IMP+A2PL":
return "NOUN+A3SG+P3SG+GEN"
# ÇIKARDI #
elif parseString == "VERB+POS+AOR$VERB^DB+VERB+CAUS+POS":
return "VERB+POS+AOR"
# sunucularımız, rakiplerimiz, yayınlarımız #
elif parseString == "P1PL+NOM$P1SG+NOM^DB+VERB+ZERO+PRES+A1PL":
return "P1PL+NOM"
# etmiştir, artmıştır, düşünmüştür, alınmıştır #
elif parseString == "NOUN+A3SG+PNON+NOM^DB+VERB+ZERO+NARR+A3SG+COP$VERB+POS+NARR+COP+A3SG":
return "VERB+POS+NARR+COP+A3SG"
# hazırlandı, yuvarlandı, temizlendi #
elif parseString == "VERB+REFLEX$VERB^DB+VERB+PASS":
return "VERB^DB+VERB+PASS"
# KARA, ÇEK, SOL, KOCA #
elif parseString == "ADJ$NOUN+A3SG+PNON+NOM$NOUN+PROP+A3SG+PNON+NOM$VERB+POS+IMP+A2SG":
if index > 0:
if fsmParses[index].getFsmParse(0).isCapitalWord():
return "NOUN+PROP+A3SG+PNON+NOM"
return "ADJ"
# YÜZ #
elif parseString == "NOUN+A3SG+PNON+NOM$NUM+CARD$VERB+POS+IMP+A2SG":
if AutoDisambiguator.isNextWordNum(index, fsmParses):
return "NUM+CARD"
return "NOUN+A3SG+PNON+NOM"
elif parseString == "ADJ+AGT^DB+ADJ+JUSTLIKE$NOUN+AGT+A3SG+P3SG+NOM$NOUN+AGT^DB+ADJ+ALMOST":
return "NOUN+AGT+A3SG+P3SG+NOM"
# artışın, düşüşün, yükselişin#
elif parseString == "POS^DB+NOUN+INF3+A3SG+P2SG+NOM$POS^DB+NOUN+INF3+A3SG+PNON+GEN$RECIP+POS+IMP+A2PL":
if AutoDisambiguator.isAnyWordSecondPerson(index, correctParses):
return "POS^DB+NOUN+INF3+A3SG+P2SG+NOM"
return "POS^DB+NOUN+INF3+A3SG+PNON+GEN"
# VARSA #
elif parseString == "ADJ^DB+VERB+ZERO+COND$VERB+POS+DESR":
return "ADJ^DB+VERB+ZERO+COND"
# DEK #
elif parseString == "NOUN+A3SG+PNON+NOM$POSTP+PCDAT":
return "POSTP+PCDAT"
# ALDIK #
elif parseString == "ADJ^DB+VERB+ZERO+PAST+A1PL$VERB+POS+PAST+A1PL$VERB+POS^DB+ADJ+PASTPART+PNON$VERB+POS^DB" \
"+NOUN+PASTPART+A3SG+PNON+NOM":
return "VERB+POS+PAST+A1PL"
# BİRİNİN, BİRİNE, BİRİNİ, BİRİNDEN #
elif parseString == "ADJ^DB+NOUN+ZERO+A3SG+P2SG$ADJ^DB+NOUN+ZERO+A3SG+P3SG$NUM+CARD^DB+NOUN+ZERO+A3SG" \
"+P2SG$NUM+CARD^DB+NOUN+ZERO+A3SG+P3SG":
return "NUM+CARD^DB+NOUN+ZERO+A3SG+P3SG"
# ARTIK #
elif parseString == "ADJ$ADV$NOUN+A3SG+PNON+NOM$NOUN+PROP+A3SG+PNON+NOM":
return "ADV"
# BİRİ #
elif parseString == "ADJ^DB+NOUN+ZERO+A3SG+P3SG+NOM$ADJ^DB+NOUN+ZERO+A3SG+PNON+ACC$NUM+CARD^DB+NOUN+ZERO+A3SG" \
"+P3SG+NOM$NUM+CARD^DB+NOUN+ZERO+A3SG+PNON+ACC":
return "NUM+CARD^DB+NOUN+ZERO+A3SG+P3SG+NOM"
# DOĞRU #
elif parseString == "ADJ$NOUN+A3SG+PNON+NOM$NOUN+PROP+A3SG+PNON+NOM$POSTP+PCDAT":
if AutoDisambiguator.hasPreviousWordTag(index, correctParses, MorphologicalTag.DATIVE):
return "POSTP+PCDAT"
return "ADJ"
# demiryolları, havayolları, milletvekilleri #
elif parseString == "P3PL+NOM$P3SG+NOM$PNON+ACC":
if AutoDisambiguator.isPossessivePlural(index, correctParses):
return "P3PL+NOM"
return "P3SG+NOM"
# GEREK #
elif parseString == "CONJ$NOUN+A3SG+PNON+NOM$VERB+POS+IMP+A2SG":
if AutoDisambiguator.containsTwoNeOrYa(fsmParses, "gerek"):
return "CONJ"
return "NOUN+A3SG+PNON+NOM"
# bilmediğiniz, sevdiğiniz, kazandığınız #
elif parseString == "ADJ+PASTPART+P2PL$NOUN+PASTPART+A3SG+P2PL+NOM$NOUN+PASTPART+A3SG+PNON+GEN^DB+VERB+ZERO" \
"+PRES+A1PL":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ+PASTPART+P2PL"
return "NOUN+PASTPART+A3SG+P2PL+NOM"
# yapabilecekleri, edebilecekleri, sunabilecekleri #
elif parseString == "ADJ+FUTPART+P3PL$NOUN+FUTPART+A3PL+P3PL+NOM$NOUN+FUTPART+A3PL+P3SG+NOM$NOUN+FUTPART+A3PL" \
"+PNON+ACC$NOUN+FUTPART+A3SG+P3PL+NOM":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ+FUTPART+P3PL"
if AutoDisambiguator.isPossessivePlural(index, correctParses):
return "NOUN+FUTPART+A3SG+P3PL+NOM"
return "NOUN+FUTPART+A3PL+P3SG+NOM"
# KİM #
elif parseString == "NOUN+PROP$PRON+QUESP":
if last_word == "?":
return "PRON+QUESP"
return "NOUN+PROP"
# ALINDI #
elif parseString == "ADJ^DB+NOUN+ZERO+A3SG+P2SG+NOM^DB+VERB+ZERO$ADJ^DB+NOUN+ZERO+A3SG+PNON+GEN^DB+VERB" \
"+ZERO$VERB^DB+VERB+PASS+POS":
return "VERB^DB+VERB+PASS+POS"
# KIZIM #
elif parseString == "ADJ^DB+VERB+ZERO+PRES+A1SG$NOUN+A3SG+P1SG+NOM$NOUN+A3SG+PNON+NOM^DB+VERB+ZERO+PRES+A1SG":
return "NOUN+A3SG+P1SG+NOM"
# etmeliydi, yaratmalıydı #
elif parseString == "POS+NECES$POS^DB+NOUN+INF2+A3SG+PNON+NOM^DB+ADJ+WITH^DB+VERB+ZERO":
return "POS+NECES"
# HERKESİN #
elif parseString == "NOUN+A3SG+P2SG+NOM$NOUN+A3SG+PNON+GEN$PRON+QUANTP+A3PL+P3PL+GEN":
return "PRON+QUANTP+A3PL+P3PL+GEN"
elif parseString == "ADJ+JUSTLIKE^DB+NOUN+ZERO+A3SG+P2SG$ADJ+JUSTLIKE^DB+NOUN+ZERO+A3SG+PNON$NOUN+ZERO+A3SG" \
"+P3SG":
return "NOUN+ZERO+A3SG+P3SG"
# milyarlık, milyonluk, beşlik, ikilik #
elif parseString == "NESS+A3SG+PNON+NOM$ZERO+A3SG+PNON+NOM^DB+ADJ+FITFOR":
return "ZERO+A3SG+PNON+NOM^DB+ADJ+FITFOR"
# alınmamaktadır, koymamaktadır #
elif parseString == "NEG+PROG2$NEG^DB+NOUN+INF+A3SG+PNON+LOC^DB+VERB+ZERO+PRES":
return "NEG+PROG2"
# HEPİMİZ #
elif parseString == "A1PL+P1PL+NOM$A3SG+P3SG+GEN^DB+VERB+ZERO+PRES+A1PL":
return "A1PL+P1PL+NOM"
# KİMSENİN #
elif parseString == "NOUN+A3SG+P2SG$NOUN+A3SG+PNON$PRON+QUANTP+A3SG+P3SG":
return "PRON+QUANTP+A3SG+P3SG"
# GEÇMİŞ, ALMIŞ, VARMIŞ #
elif parseString == "ADJ^DB+VERB+ZERO+NARR+A3SG$VERB+POS+NARR+A3SG$VERB+POS+NARR^DB+ADJ+ZERO":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "VERB+POS+NARR^DB+ADJ+ZERO"
return "VERB+POS+NARR+A3SG"
# yapacağınız, konuşabileceğiniz, olacağınız #
elif parseString == "ADJ+FUTPART+P2PL$NOUN+FUTPART+A3SG+P2PL+NOM$NOUN+FUTPART+A3SG+PNON+GEN^DB+VERB+ZERO+PRES" \
"+A1PL":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ+FUTPART+P2PL"
return "NOUN+FUTPART+A3SG+P2PL+NOM"
# YILINA, DİLİNE, YOLUNA #
elif parseString == "NOUN+A3SG+P2SG+DAT$NOUN+A3SG+P3SG+DAT$VERB^DB+VERB+PASS+POS+OPT+A3SG":
if AutoDisambiguator.isAnyWordSecondPerson(index, correctParses):
return "NOUN+A3SG+P2SG+DAT"
return "NOUN+A3SG+P3SG+DAT"
# MİSİN, MİYDİ, MİSİNİZ #
elif parseString == "NOUN+A3SG+PNON+NOM^DB+VERB+ZERO$QUES":
return "QUES"
# ATAKLAR, GÜÇLER, ESASLAR #
elif parseString == "ADJ^DB+NOUN+ZERO+A3PL+PNON+NOM$ADJ^DB+VERB+ZERO+PRES+A3PL$NOUN+A3PL+PNON+NOM$NOUN+A3SG" \
"+PNON+NOM^DB+VERB+ZERO+PRES+A3PL":
return "NOUN+A3PL+PNON+NOM"
elif parseString == "A3PL+P3SG$A3SG+P3PL$PROP+A3PL+P3PL":
return "PROP+A3PL+P3PL"
# pilotunuz, suçunuz, haberiniz #
elif parseString == "P2PL+NOM$PNON+GEN^DB+VERB+ZERO+PRES+A1PL":
return "P2PL+NOM"
# yıllarca, aylarca, düşmanca #
elif parseString == "ADJ+ASIF$ADV+LY":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ+ASIF"
return "ADV+LY"
# gerçekçi, alıcı #
elif parseString == "ADJ^DB+NOUN+AGT+A3SG+PNON+NOM$NOUN+A3SG+PNON+NOM^DB+ADJ+AGT":
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "NOUN+A3SG+PNON+NOM^DB+ADJ+AGT"
return "ADJ^DB+NOUN+AGT+A3SG+PNON+NOM"
# havayollarına, gözyaşlarına #
elif parseString == "P2SG$P3PL$P3SG":
if AutoDisambiguator.isAnyWordSecondPerson(index, correctParses):
return "P2SG"
if AutoDisambiguator.isPossessivePlural(index, correctParses):
return "P3PL"
return "P3SG"
# olun, kurtulun, gelin #
elif parseString == "VERB+POS+IMP+A2PL$VERB^DB+VERB+PASS+POS+IMP+A2SG":
return "VERB+POS+IMP+A2PL"
elif parseString == "ADJ+JUSTLIKE^DB$NOUN+ZERO+A3SG+P3SG+NOM^DB":
return "NOUN+ZERO+A3SG+P3SG+NOM^DB"
# oluşmaktaydı, gerekemekteydi #
elif parseString == "POS+PROG2$POS^DB+NOUN+INF+A3SG+PNON+LOC^DB+VERB+ZERO":
return "POS+PROG2"
# BERABER #
elif parseString == "ADJ$ADV$POSTP+PCINS":
if AutoDisambiguator.hasPreviousWordTag(index, correctParses, MorphologicalTag.INSTRUMENTAL):
return "POSTP+PCINS"
if AutoDisambiguator.isNextWordNounOrAdjective(index, fsmParses):
return "ADJ"
return "ADV"
# BİN, KIRK #
elif parseString == "NUM+CARD$VERB+POS+IMP+A2SG":
return "NUM+CARD"
# ÖTE #
elif parseString == "NOUN+A3SG+PNON+NOM$POSTP+PCABL":
if AutoDisambiguator.hasPreviousWordTag(index, correctParses, MorphologicalTag.ABLATIVE):
return "POSTP+PCABL"
return "NOUN+A3SG+PNON+NOM"
# BENİMLE #
elif parseString == "NOUN+A3SG+P1SG$PRON+PERS+A1SG+PNON":
return "PRON+PERS+A1SG+PNON"
# Accusative and Ablative Cases#
elif parseString == "ADV+WITHOUTHAVINGDONESO$NOUN+INF2+A3SG+PNON+ABL":
return "ADV+WITHOUTHAVINGDONESO"
elif parseString == "ADJ^DB+NOUN+ZERO+A3SG+P3SG+NOM$ADJ^DB+NOUN+ZERO+A3SG+PNON+ACC$NOUN+A3SG+P3SG+NOM$NOUN" \
"+A3SG+PNON+ACC":
return "ADJ^DB+NOUN+ZERO+A3SG+P3SG+NOM"
elif parseString == "P3SG+NOM$PNON+ACC":
if fsmParses[index].getFsmParse(0).getFinalPos() == "PROP":
return "PNON+ACC"
else:
return "P3SG+NOM"
elif parseString == "A3PL+PNON+NOM$A3SG+PNON+NOM^DB+VERB+ZERO+PRES+A3PL":
return "A3PL+PNON+NOM"
elif parseString == "ADV+SINCE$VERB+ZERO+PRES+COP+A3SG":
if root == "yıl" or root == "süre" or root == "zaman" or root == "ay":
return "ADV+SINCE"
else:
return "VERB+ZERO+PRES+COP+A3SG"
elif parseString == "CONJ$VERB+POS+IMP+A2SG":
return "CONJ"
elif parseString == "NEG+IMP+A2SG$POS^DB+NOUN+INF2+A3SG+PNON+NOM":
return "POS^DB+NOUN+INF2+A3SG+PNON+NOM"
elif parseString == "NEG+OPT+A3SG$POS^DB+NOUN+INF2+A3SG+PNON+DAT":
return "POS^DB+NOUN+INF2+A3SG+PNON+DAT"
elif parseString == "NOUN+A3SG+P3SG+NOM$NOUN^DB+ADJ+ALMOST":
return "NOUN+A3SG+P3SG+NOM"
elif parseString == "ADJ$VERB+POS+IMP+A2SG":
return "ADJ"
elif parseString == "NOUN+A3SG+PNON+NOM$VERB+POS+IMP+A2SG":
return "NOUN+A3SG+PNON+NOM"
elif parseString == "INF2+A3SG+P3SG+NOM$INF2^DB+ADJ+ALMOST$":
return "INF2+A3SG+P3SG+NOM"
return None
    @staticmethod
    def caseDisambiguator(index: int, fsmParses: list, correctParses: list) -> FsmParse:
fsm_parse_list = fsmParses[index]
default_case = AutoDisambiguator.selectCaseForParseString(fsmParses[index].parsesWithoutPrefixAndSuffix(), index,
fsmParses, correctParses)
if default_case is not None:
for i in range(fsm_parse_list.size()):
fsm_parse = fsm_parse_list.getFsmParse(i)
if default_case in fsm_parse.transitionList():
return fsm_parse
        return None
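caseDisambiguator resolves one word at a time, and its rules depend on the parses already chosen for earlier words (correctParses), so a driver has to walk the sentence left to right, feeding its own earlier choices back in. The sketch below shows only that control flow, with plain lists and a `pick` callback standing in for caseDisambiguator; it uses none of the real FsmParse types and is illustrative only.

```python
# Illustrative left-to-right driver for a rule-based disambiguator such as
# caseDisambiguator: the parse chosen for each word is appended to
# `correct`, so rules for later words can inspect earlier decisions.
# `pick` stands in for AutoDisambiguator.caseDisambiguator.
def disambiguate(parse_lists, pick):
    correct = []
    for index in range(len(parse_lists)):
        chosen = pick(index, parse_lists, correct)
        if chosen is None:
            # no rule matched: fall back to the first analysis
            chosen = parse_lists[index][0]
        correct.append(chosen)
    return correct
```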
/CleanAdminDjango-1.5.3.1.tar.gz/CleanAdminDjango-1.5.3.1/django/contrib/admin/static/admin/js/SelectBox.js

var SelectBox = {
cache: new Object(),
init: function(id) {
var box = document.getElementById(id);
var node;
SelectBox.cache[id] = new Array();
var cache = SelectBox.cache[id];
for (var i = 0; (node = box.options[i]); i++) {
cache.push({value: node.value, text: node.text, displayed: 1});
}
},
redisplay: function(id) {
// Repopulate HTML select box from cache
var box = document.getElementById(id);
box.options.length = 0; // clear all options
for (var i = 0, j = SelectBox.cache[id].length; i < j; i++) {
var node = SelectBox.cache[id][i];
if (node.displayed) {
box.options[box.options.length] = new Option(node.text, node.value, false, false);
}
}
},
filter: function(id, text) {
// Redisplay the HTML select box, displaying only the choices containing ALL
// the words in text. (It's an AND search.)
var tokens = text.toLowerCase().split(/\s+/);
var node, token;
for (var i = 0; (node = SelectBox.cache[id][i]); i++) {
node.displayed = 1;
for (var j = 0; (token = tokens[j]); j++) {
if (node.text.toLowerCase().indexOf(token) == -1) {
node.displayed = 0;
}
}
}
SelectBox.redisplay(id);
},
    delete_from_cache: function(id, value) {
        var node, delete_index = null;
        for (var i = 0; (node = SelectBox.cache[id][i]); i++) {
            if (node.value == value) {
                delete_index = i;
                break;
            }
        }
        if (delete_index === null) {
            return; // value not in the cache; nothing to delete
        }
        var j = SelectBox.cache[id].length - 1;
        for (var k = delete_index; k < j; k++) {
            SelectBox.cache[id][k] = SelectBox.cache[id][k+1];
        }
        SelectBox.cache[id].length--;
    },
add_to_cache: function(id, option) {
SelectBox.cache[id].push({value: option.value, text: option.text, displayed: 1});
},
cache_contains: function(id, value) {
// Check if an item is contained in the cache
var node;
for (var i = 0; (node = SelectBox.cache[id][i]); i++) {
if (node.value == value) {
return true;
}
}
return false;
},
move: function(from, to) {
var from_box = document.getElementById(from);
var to_box = document.getElementById(to);
var option;
for (var i = 0; (option = from_box.options[i]); i++) {
if (option.selected && SelectBox.cache_contains(from, option.value)) {
SelectBox.add_to_cache(to, {value: option.value, text: option.text, displayed: 1});
SelectBox.delete_from_cache(from, option.value);
}
}
SelectBox.redisplay(from);
SelectBox.redisplay(to);
},
move_all: function(from, to) {
var from_box = document.getElementById(from);
var to_box = document.getElementById(to);
var option;
for (var i = 0; (option = from_box.options[i]); i++) {
if (SelectBox.cache_contains(from, option.value)) {
SelectBox.add_to_cache(to, {value: option.value, text: option.text, displayed: 1});
SelectBox.delete_from_cache(from, option.value);
}
}
SelectBox.redisplay(from);
SelectBox.redisplay(to);
},
sort: function(id) {
SelectBox.cache[id].sort( function(a, b) {
a = a.text.toLowerCase();
b = b.text.toLowerCase();
try {
if (a > b) return 1;
if (a < b) return -1;
}
catch (e) {
// silently fail on IE 'unknown' exception
}
return 0;
} );
},
select_all: function(id) {
var box = document.getElementById(id);
for (var i = 0; i < box.options.length; i++) {
            box.options[i].selected = true;
}
}
}
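The `filter` routine above is an AND search over whitespace-separated tokens: an option stays displayed only if its text contains every token. The same logic, sketched in Python for clarity (the dict shapes mirror the cache entries above; this is not Django API):

```python
def filter_options(cache, text):
    """Mark each cached option as displayed only if its text contains
    ALL whitespace-separated tokens (case-insensitive), mirroring
    SelectBox.filter above; return the visible option texts."""
    tokens = text.lower().split()
    for node in cache:
        node["displayed"] = all(token in node["text"].lower() for token in tokens)
    return [n["text"] for n in cache if n["displayed"]]


cache = [
    {"value": "1", "text": "Alice Smith", "displayed": True},
    {"value": "2", "text": "Bob Smith", "displayed": True},
    {"value": "3", "text": "Carol Jones", "displayed": True},
]
print(filter_options(cache, "smith al"))  # only "Alice Smith" matches both tokens
```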

/Flask-Statics-Helper-1.0.0.tar.gz/Flask-Statics-Helper-1.0.0/flask_statics/static/BootstrapValidator/js/language/be_NL.js

(function($) {
/**
* Belgium (Dutch) language package
* Translated by @dokterpasta
*/
$.fn.bootstrapValidator.i18n = $.extend(true, $.fn.bootstrapValidator.i18n, {
base64: {
'default': 'Geef een geldige base 64 encoded tekst in'
},
between: {
'default': 'Geef een waarde tussen %s en %s (incl de waarden)',
notInclusive: 'Geef een waarde strikt tussen %s en %s'
},
callback: {
'default': 'Geef een geldige waarden in'
},
choice: {
'default': 'Geef een geldige waarden in',
less: 'Kies minimum %s opties',
more: 'Kies maximum %s opties',
between: 'Kies %s - %s opties'
},
color: {
'default': 'Geef een geldige kleur'
},
creditCard: {
'default': 'Geef een geldig creditkaart nummer'
},
cusip: {
'default': 'Geef een geldig CUSIP nummer'
},
cvv: {
'default': 'Geef een geldig CVV nummer'
},
date: {
'default': 'Geef een geldige datum',
min: 'Geef een datum na %s',
max: 'Geef een datum voor %s',
range: 'Geef een datum tussen %s en %s'
},
different: {
            'default': 'Geef een andere waarde'
},
digits: {
'default': 'Geef alleen cijfers in'
},
ean: {
'default': 'Geef een geldig EAN nummer'
},
emailAddress: {
'default': 'Geef een geldig email adres op'
},
file: {
'default': 'Kies een geldig bestand'
},
greaterThan: {
            'default': 'Geef een waarde gelijk aan of groter dan %s',
notInclusive: 'Geef een waarde groter dan %s'
},
grid: {
'default': 'Geef een geldig GRId nummer'
},
hex: {
'default': 'Geef een geldig hexadecimaal nummer'
},
hexColor: {
'default': 'Geef een geldige hex kleur'
},
iban: {
'default': 'Geef een geldig IBAN nummer',
countryNotSupported: 'De land code %s is niet ondersteund',
country: 'Geef een geldig IBAN nummer van %s',
countries: {
AD: 'Andorra',
AE: 'Verenigde Arabische Emiraten',
AL: 'Albania',
AO: 'Angola',
AT: 'Oostenrijk',
AZ: 'Azerbeidzjan',
BA: 'Bosnië en Herzegovina',
BE: 'België',
BF: 'Burkina Faso',
                BG: 'Bulgarije',
BH: 'Bahrein',
BI: 'Burundi',
BJ: 'Benin',
BR: 'Brazilië',
CH: 'Zwitserland',
CI: 'Ivoorkust',
CM: 'Kameroen',
CR: 'Costa Rica',
CV: 'Cape Verde',
CY: 'Cyprus',
CZ: 'Tsjechische',
DE: 'Duitsland',
DK: 'Denemarken',
DO: 'Dominicaanse Republiek',
DZ: 'Algerije',
EE: 'Estland',
ES: 'Spanje',
FI: 'Finland',
FO: 'Faeröer',
FR: 'Frankrijk',
GB: 'Verenigd Koninkrijk',
GE: 'Georgia',
GI: 'Gibraltar',
GL: 'Groenland',
GR: 'Griekenland',
GT: 'Guatemala',
HR: 'Kroatië',
HU: 'Hongarije',
IE: 'Ierland',
IL: 'Israël',
IR: 'Iran',
IS: 'IJsland',
IT: 'Italië',
JO: 'Jordan',
KW: 'Koeweit',
KZ: 'Kazachstan',
LB: 'Libanon',
LI: 'Liechtenstein',
LT: 'Litouwen',
LU: 'Luxemburg',
LV: 'Letland',
MC: 'Monaco',
MD: 'Moldavië',
ME: 'Montenegro',
MG: 'Madagascar',
MK: 'Macedonië',
ML: 'Mali',
MR: 'Mauretanië',
MT: 'Malta',
MU: 'Mauritius',
MZ: 'Mozambique',
NL: 'Nederland',
NO: 'Noorwegen',
PK: 'Pakistan',
PL: 'Polen',
PS: 'Palestijnse',
PT: 'Portugal',
QA: 'Qatar',
RO: 'Roemenië',
RS: 'Servië',
SA: 'Saudi-Arabië',
SE: 'Zweden',
SI: 'Slovenië',
SK: 'Slowakije',
SM: 'San Marino',
SN: 'Senegal',
TN: 'Tunesië',
TR: 'Turkije',
VG: 'Britse Maagdeneilanden'
}
},
id: {
'default': 'Geef een geldig identificatie nummer',
countryNotSupported: 'De land code %s is niet ondersteund',
country: 'Geef een geldig identificatie nummer van %s',
countries: {
BA: 'Bosnië en Herzegovina',
BG: 'Bulgarije',
BR: 'Brazilië',
CH: 'Zwitserland',
CL: 'Chili',
CN: 'China',
CZ: 'Tsjechische',
DK: 'Denemarken',
EE: 'Estland',
ES: 'Spanje',
FI: 'Finland',
HR: 'Kroatië',
IE: 'Ierland',
IS: 'IJsland',
LT: 'Litouwen',
LV: 'Letland',
ME: 'Montenegro',
MK: 'Macedonië',
NL: 'Nederland',
RO: 'Roemenië',
RS: 'Servië',
SE: 'Zweden',
SI: 'Slovenië',
SK: 'Slowakije',
SM: 'San Marino',
TH: 'Thailand',
ZA: 'Zuid-Afrika'
}
},
identical: {
'default': 'Geef dezelfde waarde'
},
imei: {
'default': 'Geef een geldig IMEI nummer'
},
imo: {
'default': 'Geef een geldig IMO nummer'
},
integer: {
'default': 'Geef een geldig nummer'
},
ip: {
'default': 'Geef een geldig IP adres',
ipv4: 'Geef een geldig IPv4 adres',
ipv6: 'Geef een geldig IPv6 adres'
},
isbn: {
'default': 'Geef een geldig ISBN nummer'
},
isin: {
'default': 'Geef een geldig ISIN nummer'
},
ismn: {
'default': 'Geef een geldig ISMN nummer'
},
issn: {
'default': 'Geef een geldig ISSN nummer'
},
lessThan: {
            'default': 'Geef een waarde gelijk aan of kleiner dan %s',
notInclusive: 'Geef een waarde kleiner dan %s'
},
mac: {
'default': 'Geef een geldig MAC adres'
},
meid: {
'default': 'Geef een geldig MEID nummer'
},
notEmpty: {
'default': 'Geef een waarde'
},
numeric: {
'default': 'Geef een geldig comma getal'
},
phone: {
'default': 'Geef een geldig telefoon nummer',
countryNotSupported: 'De land code %s is niet ondersteund',
country: 'Geef een geldig telefoon nummer van %s',
countries: {
BR: 'Brazilië',
CN: 'China',
CZ: 'Tsjechische',
DE: 'Duitsland',
DK: 'Denemarken',
ES: 'Spanje',
FR: 'Frankrijk',
GB: 'Verenigd Koninkrijk',
MA: 'Marokko',
PK: 'Pakistan',
RO: 'Roemenië',
RU: 'Rusland',
SK: 'Slowakije',
TH: 'Thailand',
US: 'VS',
VE: 'Venezuela'
}
},
regexp: {
'default': 'Geef een waarde die gelijk is aan de regex'
},
remote: {
'default': 'Geef een geldige waarde'
},
rtn: {
'default': 'Geef een geldig RTN nummer'
},
sedol: {
'default': 'Geef een geldig SEDOL nummer'
},
siren: {
'default': 'Geef een geldig SIREN nummer'
},
siret: {
'default': 'Geef een geldig SIRET nummer'
},
step: {
'default': 'Geef een geldig stap van %s'
},
stringCase: {
'default': 'Geef alleen kleine letters',
upper: 'Geef alleen hoofdletters'
},
stringLength: {
'default': 'Geef een waarde met de juiste lengte',
less: 'Geef minder dan %s karakters',
more: 'Geef meer dan %s karakters',
between: 'Geef een aantal karakters tussen %s en %s'
},
uri: {
'default': 'Geef een geldig URI'
},
uuid: {
'default': 'Geef een geldig UUID nummer',
version: 'Geef een geldig UUID versie %s nummer'
},
vat: {
'default': 'Geef een geldig BTW nummer',
countryNotSupported: 'De land code %s is niet ondersteund',
country: 'Geef een geldig BTW nummer van %s',
countries: {
AT: 'Oostenrijk',
BE: 'België',
BG: 'Bulgarije',
BR: 'Brazilië',
CH: 'Zwitserland',
CY: 'Cyprus',
CZ: 'Tsjechische',
DE: 'Duitsland',
DK: 'Denemarken',
EE: 'Estland',
ES: 'Spanje',
FI: 'Finland',
FR: 'Frankrijk',
GB: 'Verenigd Koninkrijk',
GR: 'Griekenland',
EL: 'Griekenland',
HU: 'Hongarije',
HR: 'Kroatië',
IE: 'Ierland',
IS: 'IJsland',
IT: 'Italië',
LT: 'Litouwen',
LU: 'Luxemburg',
LV: 'Letland',
MT: 'Malta',
NL: 'Nederland',
NO: 'Noorwegen',
PL: 'Polen',
PT: 'Portugal',
RO: 'Roemenië',
RU: 'Rusland',
RS: 'Servië',
SE: 'Zweden',
SI: 'Slovenië',
SK: 'Slowakije',
VE: 'Venezuela',
ZA: 'Zuid-Afrika'
}
},
vin: {
'default': 'Geef een geldig VIN nummer'
},
zipCode: {
'default': 'Geef een geldige postcode',
countryNotSupported: 'De land code %s is niet ondersteund',
country: 'Geef een geldige postcode van %s',
countries: {
AT: 'Oostenrijk',
BR: 'Brazilië',
CA: 'Canada',
CH: 'Zwitserland',
CZ: 'Tsjechische',
DE: 'Duitsland',
DK: 'Denemarken',
FR: 'Frankrijk',
GB: 'Verenigd Koninkrijk',
IE: 'Ierland',
IT: 'Italië',
MA: 'Marokko',
NL: 'Nederland',
PT: 'Portugal',
RO: 'Roemenië',
RU: 'Rusland',
SE: 'Zweden',
SG: 'Singapore',
SK: 'Slowakije',
US: 'VS'
}
}
});
}(window.jQuery));
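The language pack above only supplies message templates; at runtime the validator substitutes `%s` placeholders positionally (e.g. the `between` message receives the min and max values). A rough Python illustration of that substitution (not the plugin's actual code):

```python
def format_message(template, *args):
    """Replace each '%s' in a BootstrapValidator-style message
    template with the next argument, left to right."""
    for arg in args:
        template = template.replace("%s", str(arg), 1)
    return template


# The 'between' default message from the language pack above.
between = "Geef een waarde tussen %s en %s (incl de waarden)"
print(format_message(between, 1, 10))
```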

/Mathics3-6.0.2.tar.gz/Mathics3-6.0.2/mathics/builtin/patterns.py

# This tells documentation how to sort this module
sort_order = "mathics.builtin.rules-and-patterns"
from typing import Callable, List, Optional as OptionalType, Tuple, Union
from mathics.builtin.base import (
AtomBuiltin,
BinaryOperator,
Builtin,
PatternError,
PatternObject,
PostfixOperator,
)
from mathics.core.atoms import Integer, Number, Rational, Real, String
from mathics.core.attributes import (
A_HOLD_ALL,
A_HOLD_FIRST,
A_HOLD_REST,
A_PROTECTED,
A_SEQUENCE_HOLD,
)
from mathics.core.element import BaseElement, EvalMixin
from mathics.core.evaluation import Evaluation
from mathics.core.exceptions import InvalidLevelspecError
from mathics.core.expression import Expression, SymbolVerbatim
from mathics.core.list import ListExpression
from mathics.core.pattern import Pattern, StopGenerator
from mathics.core.rules import Rule
from mathics.core.symbols import Atom, Symbol, SymbolList, SymbolTrue
from mathics.core.systemsymbols import SymbolBlank, SymbolDefault, SymbolDispatch
from mathics.eval.parts import python_levelspec
class Rule_(BinaryOperator):
"""
    <url>:WMA link:https://reference.wolfram.com/language/ref/Rule.html</url>
<dl>
<dt>'Rule[$x$, $y$]'
<dt>'$x$ -> $y$'
<dd>represents a rule replacing $x$ with $y$.
</dl>
>> a+b+c /. c->d
= a + b + d
>> {x,x^2,y} /. x->3
= {3, 9, y}
"""
# TODO: An error message should appear when Rule is called with a wrong
# number of arguments
"""
>> a /. Rule[1, 2, 3] -> t
: Rule called with 3 arguments; 2 arguments are expected.
= a
"""
name = "Rule"
operator = "->"
precedence = 120
attributes = A_SEQUENCE_HOLD | A_PROTECTED
grouping = "Right"
needs_verbatim = True
summary_text = "a replacement rule"
class RuleDelayed(BinaryOperator):
"""
<url>:WMA link:https://reference.wolfram.com/language/ref/RuleDelayed.html</url>
<dl>
<dt>'RuleDelayed[$x$, $y$]'
<dt>'$x$ :> $y$'
<dd>represents a rule replacing $x$ with $y$, with $y$ held
unevaluated.
</dl>
>> Attributes[RuleDelayed]
= {HoldRest, Protected, SequenceHold}
"""
attributes = A_SEQUENCE_HOLD | A_HOLD_REST | A_PROTECTED
needs_verbatim = True
operator = ":>"
precedence = 120
summary_text = "a rule that keeps the replacement unevaluated"
# TODO: disentangle me
def create_rules(
rules_expr: BaseElement,
expr: Expression,
name: str,
evaluation: Evaluation,
extra_args: List = [],
) -> Tuple[Union[List[Rule], BaseElement], bool]:
"""
    This function implements the eval methods of `Replace`, `ReplaceAll`, `ReplaceRepeated` and `ReplaceList`.
    `name` controls which of these methods is implemented. These methods apply the rule or list of rules
    `rules_expr` to the expression `expr`, using the evaluation context `evaluation`.
    The result is a tuple of two elements. If the second element is `True`, the first element is the final result of the method.
    If it is `False`, the first element is a list of rules.
"""
    if isinstance(rules_expr, Dispatch):
        return rules_expr.rules, False
    elif rules_expr.has_form("Dispatch", None):
        # Build the Dispatch atom first, so that callers always get a
        # (rules, done) tuple back, as in the branch above.
        return Dispatch(rules_expr.elements, evaluation).rules, False
if rules_expr.has_form("List", None):
rules = rules_expr.elements
else:
rules = [rules_expr]
any_lists = False
for item in rules:
if item.get_head() in (SymbolList, SymbolDispatch):
any_lists = True
break
if any_lists:
all_lists = True
for item in rules:
            if item.get_head() is not SymbolList:
all_lists = False
break
if all_lists:
return (
ListExpression(
*[
Expression(Symbol(name), expr, item, *extra_args)
for item in rules
]
),
True,
)
else:
evaluation.message(name, "rmix", rules_expr)
return None, True
else:
result = []
for rule in rules:
if rule.get_head_name() not in ("System`Rule", "System`RuleDelayed"):
evaluation.message(name, "reps", rule)
return None, True
elif len(rule.elements) != 2:
evaluation.message(
# TODO: shorten names here
rule.get_head_name(),
"argrx",
rule.get_head_name(),
3,
2,
)
return None, True
else:
result.append(Rule(rule.elements[0], rule.elements[1]))
return result, False
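The contract of `create_rules` above is easy to misread: it returns `(rules, False)` when it built a usable rule list, and `(result_or_None, True)` when it already produced the final answer (the nested list-of-lists case) or failed. A simplified stand-in demonstrating that branching on plain Python data (not the Mathics classes):

```python
def create_rules_sketch(rules):
    """Return (payload, done) like create_rules above:
    - a flat list of (lhs, rhs) pairs -> (pairs, False): caller applies them
    - a list of rule lists            -> one result per sublist, (results, True)
    """
    if rules and all(isinstance(r, list) for r in rules):
        # Nested lists: produce one result per sublist, already "done".
        return [create_rules_sketch(sub)[0] for sub in rules], True
    return [tuple(r) for r in rules], False


pairs, done = create_rules_sketch([("x", 1), ("y", 2)])
nested, done2 = create_rules_sketch([[("x", 1)], [("x", 2)]])
```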
class Replace(Builtin):
"""
<url>
:WMA link:
https://reference.wolfram.com/language/ref/Replace.html</url>
<dl>
<dt>'Replace[$expr$, $x$ -> $y$]'
<dd>yields the result of replacing $expr$ with $y$ if it
matches the pattern $x$.
<dt>'Replace[$expr$, $x$ -> $y$, $levelspec$]'
<dd>replaces only subexpressions at levels specified through
$levelspec$.
<dt>'Replace[$expr$, {$x$ -> $y$, ...}]'
<dd>performs replacement with multiple rules, yielding a
single result expression.
<dt>'Replace[$expr$, {{$a$ -> $b$, ...}, {$c$ -> $d$, ...}, ...}]'
<dd>returns a list containing the result of performing each
set of replacements.
</dl>
>> Replace[x, {x -> 2}]
= 2
By default, only the top level is searched for matches
>> Replace[1 + x, {x -> 2}]
= 1 + x
>> Replace[x, {{x -> 1}, {x -> 2}}]
= {1, 2}
Replace stops after the first replacement
>> Replace[x, {x -> {}, _List -> y}]
= {}
Replace replaces the deepest levels first
>> Replace[x[1], {x[1] -> y, 1 -> 2}, All]
= x[2]
By default, heads are not replaced
>> Replace[x[x[y]], x -> z, All]
= x[x[y]]
Heads can be replaced using the Heads option
>> Replace[x[x[y]], x -> z, All, Heads -> True]
= z[z[y]]
Note that heads are handled at the level of elements
>> Replace[x[x[y]], x -> z, {1}, Heads -> True]
= z[x[y]]
You can use Replace as an operator
>> Replace[{x_ -> x + 1}][10]
= 11
"""
messages = {
"reps": "`1` is not a valid replacement rule.",
"rmix": "Elements of `1` are a mixture of lists and nonlists.",
}
options = {"Heads": "False"}
rules = {"Replace[rules_][expr_]": "Replace[expr, rules]"}
summary_text = "apply a replacement rule"
def eval_levelspec(self, expr, rules, ls, evaluation, options):
"Replace[expr_, rules_, Optional[Pattern[ls, _?LevelQ], {0}], OptionsPattern[Replace]]"
try:
rules, ret = create_rules(rules, expr, "Replace", evaluation)
if ret:
return rules
heads = self.get_option(options, "Heads", evaluation) is SymbolTrue
result, applied = expr.do_apply_rules(
rules,
evaluation,
level=0,
options={"levelspec": python_levelspec(ls), "heads": heads},
)
return result
except InvalidLevelspecError:
evaluation.message("General", "level", ls)
except PatternError:
evaluation.message("Replace", "reps", rules)
class ReplaceAll(BinaryOperator):
"""
<url>
:WMA link:
https://reference.wolfram.com/language/ref/ReplaceAll.html</url>
<dl>
<dt>'ReplaceAll[$expr$, $x$ -> $y$]'
<dt>'$expr$ /. $x$ -> $y$'
<dd>yields the result of replacing all subexpressions of
$expr$ matching the pattern $x$ with $y$.
<dt>'$expr$ /. {$x$ -> $y$, ...}'
<dd>performs replacement with multiple rules, yielding a
single result expression.
<dt>'$expr$ /. {{$a$ -> $b$, ...}, {$c$ -> $d$, ...}, ...}'
<dd>returns a list containing the result of performing each
set of replacements.
</dl>
>> a+b+c /. c->d
= a + b + d
>> g[a+b+c,a]/.g[x_+y_,x_]->{x,y}
= {a, b + c}
If $rules$ is a list of lists, a list of all possible respective
replacements is returned:
>> {a, b} /. {{a->x, b->y}, {a->u, b->v}}
= {{x, y}, {u, v}}
The list can be arbitrarily nested:
>> {a, b} /. {{{a->x, b->y}, {a->w, b->z}}, {a->u, b->v}}
= {{{x, y}, {w, z}}, {u, v}}
>> {a, b} /. {{{a->x, b->y}, a->w, b->z}, {a->u, b->v}}
: Elements of {{a -> x, b -> y}, a -> w, b -> z} are a mixture of lists and nonlists.
= {{a, b} /. {{a -> x, b -> y}, a -> w, b -> z}, {u, v}}
ReplaceAll also can be used as an operator:
>> ReplaceAll[{a -> 1}][{a, b}]
= {1, b}
#> a + b /. x_ + y_ -> {x, y}
= {a, b}
ReplaceAll replaces the shallowest levels first:
>> ReplaceAll[x[1], {x[1] -> y, 1 -> 2}]
= y
"""
grouping = "Left"
needs_verbatim = True
operator = "/."
precedence = 110
messages = {
"reps": "`1` is not a valid replacement rule.",
"rmix": "Elements of `1` are a mixture of lists and nonlists.",
}
rules = {"ReplaceAll[rules_][expr_]": "ReplaceAll[expr, rules]"}
summary_text = "apply a replacement rule on each subexpression"
def eval(self, expr, rules, evaluation: Evaluation):
"ReplaceAll[expr_, rules_]"
try:
rules, ret = create_rules(rules, expr, "ReplaceAll", evaluation)
if ret:
return rules
result, applied = expr.do_apply_rules(rules, evaluation)
return result
except PatternError:
evaluation.message("Replace", "reps", rules)
class ReplaceRepeated(BinaryOperator):
"""
<url>
:WMA link:
https://reference.wolfram.com/language/ref/ReplaceRepeated.html</url>
<dl>
<dt>'ReplaceRepeated[$expr$, $x$ -> $y$]'
<dt>'$expr$ //. $x$ -> $y$'
<dd>repeatedly applies the rule '$x$ -> $y$' to $expr$ until
the result no longer changes.
</dl>
>> a+b+c //. c->d
= a + b + d
>> f = ReplaceRepeated[c->d];
>> f[a+b+c]
= a + b + d
>> Clear[f];
Simplification of logarithms:
>> logrules = {Log[x_ * y_] :> Log[x] + Log[y], Log[x_ ^ y_] :> y * Log[x]};
>> Log[a * (b * c) ^ d ^ e * f] //. logrules
= Log[a] + Log[f] + (Log[b] + Log[c]) d ^ e
'ReplaceAll' just performs a single replacement:
>> Log[a * (b * c) ^ d ^ e * f] /. logrules
= Log[a] + Log[f (b c) ^ d ^ e]
"""
grouping = "Left"
needs_verbatim = True
operator = "//."
precedence = 110
messages = {
"reps": "`1` is not a valid replacement rule.",
"rmix": "Elements of `1` are a mixture of lists and nonlists.",
}
options = {
"MaxIterations": "65535",
}
rules = {
"ReplaceRepeated[rules_][expr_]": "ReplaceRepeated[expr, rules]",
}
summary_text = "iteratively replace until the expression does not change anymore"
def eval_list(
self,
expr: BaseElement,
rules: BaseElement,
evaluation: Evaluation,
options: dict,
) -> OptionalType[BaseElement]:
"ReplaceRepeated[expr_, rules_, OptionsPattern[ReplaceRepeated]]"
try:
rules, ret = create_rules(rules, expr, "ReplaceRepeated", evaluation)
except PatternError:
evaluation.message("Replace", "reps", rules)
return None
if ret:
return rules
maxit = self.get_option(options, "MaxIterations", evaluation)
if maxit.is_numeric(evaluation):
maxit = maxit.get_int_value()
else:
maxit = -1
while True:
evaluation.check_stopped()
if maxit == 0:
break
maxit -= 1
result, applied = expr.do_apply_rules(rules, evaluation)
if applied:
result = result.evaluate(evaluation)
if applied and not result.sameQ(expr):
expr = result
else:
break
return result
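The loop in `eval_list` above is a bounded fixed-point iteration: apply the rules, stop when the result no longer changes or the `MaxIterations` budget runs out. Stripped of the Mathics machinery, the control flow looks like this (a sketch, not the actual implementation):

```python
def replace_repeated(value, step, max_iterations=65535):
    """Apply `step` until the value stops changing or the iteration
    budget is exhausted (mirrors eval_list above; a negative
    max_iterations effectively means 'no limit' there)."""
    while max_iterations != 0:
        max_iterations -= 1
        new_value = step(value)
        if new_value == value:
            break
        value = new_value
    return value


# Toy rewrite rule: halve even numbers repeatedly until a fixed point.
print(replace_repeated(40, lambda n: n // 2 if n % 2 == 0 else n))
```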
class ReplaceList(Builtin):
"""
<url>
:WMA link:
https://reference.wolfram.com/language/ref/ReplaceList.html</url>
<dl>
<dt>'ReplaceList[$expr$, $rules$]'
<dd>returns a list of all possible results of applying $rules$
to $expr$.
</dl>
Get all subsequences of a list:
>> ReplaceList[{a, b, c}, {___, x__, ___} -> {x}]
= {{a}, {a, b}, {a, b, c}, {b}, {b, c}, {c}}
You can specify the maximum number of items:
>> ReplaceList[{a, b, c}, {___, x__, ___} -> {x}, 3]
= {{a}, {a, b}, {a, b, c}}
>> ReplaceList[{a, b, c}, {___, x__, ___} -> {x}, 0]
= {}
If no rule matches, an empty list is returned:
>> ReplaceList[a, b->x]
= {}
Like in 'ReplaceAll', $rules$ can be a nested list:
>> ReplaceList[{a, b, c}, {{{___, x__, ___} -> {x}}, {{a, b, c} -> t}}, 2]
= {{{a}, {a, b}}, {t}}
>> ReplaceList[expr, {}, -1]
: Non-negative integer or Infinity expected at position 3.
= ReplaceList[expr, {}, -1]
Possible matches for a sum:
>> ReplaceList[a + b + c, x_ + y_ -> {x, y}]
= {{a, b + c}, {b, a + c}, {c, a + b}, {a + b, c}, {a + c, b}, {b + c, a}}
"""
messages = {
"reps": "`1` is not a valid replacement rule.",
"rmix": "Elements of `1` are a mixture of lists and nonlists.",
}
summary_text = "list of possible replacement results"
def eval(
self, expr: BaseElement, rules: BaseElement, max: Number, evaluation: Evaluation
) -> OptionalType[BaseElement]:
"ReplaceList[expr_, rules_, max_:Infinity]"
if max.get_name() == "System`Infinity":
max_count = None
else:
max_count = max.get_int_value()
if max_count is None or max_count < 0:
evaluation.message("ReplaceList", "innf", 3)
return
try:
rules, ret = create_rules(
rules, expr, "ReplaceList", evaluation, extra_args=[max]
)
except PatternError:
evaluation.message("Replace", "reps", rules)
return None
if ret:
return rules
        results = []
        for rule in rules:
            result = rule.apply(expr, evaluation, return_list=True, max_list=max_count)
            results.extend(result)
        return ListExpression(*results)
class PatternTest(BinaryOperator, PatternObject):
"""
<url>
:WMA link:
https://reference.wolfram.com/language/ref/PatternTest.html</url>
<dl>
<dt>'PatternTest[$pattern$, $test$]'
<dt>'$pattern$ ? $test$'
<dd>constrains $pattern$ to match $expr$ only if the
evaluation of '$test$[$expr$]' yields 'True'.
</dl>
>> MatchQ[3, _Integer?(#>0&)]
= True
>> MatchQ[-3, _Integer?(#>0&)]
= False
>> MatchQ[3, Pattern[3]]
: First element in pattern Pattern[3] is not a valid pattern name.
= False
"""
arg_counts = [2]
operator = "?"
precedence = 680
    summary_text = "match a pattern only if a test on the match yields True"
def init(
self, expr: Expression, evaluation: OptionalType[Evaluation] = None
) -> None:
super(PatternTest, self).init(expr, evaluation=evaluation)
        # This class has an important effect on overall performance,
        # since every rule that constrains the type of a pattern
        # calls it. Hence, for simple checks like `NumberQ` or `NumericQ`
        # it is important to have the fastest possible implementation.
        # To do this, we overwrite the match method, taking it from the
        # following dictionary. We would also gain some speed by
        # singletonizing the Symbol class and accessing this dictionary
        # using an id() instead of a string...
match_functions = {
"System`AtomQ": self.match_atom,
"System`StringQ": self.match_string,
"System`NumericQ": self.match_numericq,
"System`NumberQ": self.match_numberq,
"System`RealNumberQ": self.match_real_numberq,
"Internal`RealValuedNumberQ": self.match_real_numberq,
            "System`Positive": self.match_positive,
"System`Negative": self.match_negative,
"System`NonPositive": self.match_nonpositive,
"System`NonNegative": self.match_nonnegative,
}
self.pattern = Pattern.create(expr.elements[0], evaluation=evaluation)
self.test = expr.elements[1]
testname = self.test.get_name()
self.test_name = testname
match_function = match_functions.get(testname, None)
if match_function:
self.match = match_function
def match_atom(self, yield_func, expression, vars, evaluation, **kwargs):
def yield_match(vars_2, rest):
items = expression.get_sequence()
# Here we use a `for` loop instead an all over iterator
# because in Cython this is faster, since it avoids a function
# call. For pure Python, it is the opposite.
for item in items:
if not isinstance(item, Atom):
break
else:
yield_func(vars_2, None)
self.pattern.match(yield_match, expression, vars, evaluation)
def match_string(self, yield_func, expression, vars, evaluation, **kwargs):
def yield_match(vars_2, rest):
items = expression.get_sequence()
for item in items:
if not isinstance(item, String):
break
else:
yield_func(vars_2, None)
self.pattern.match(yield_match, expression, vars, evaluation)
def match_numberq(self, yield_func, expression, vars, evaluation, **kwargs):
def yield_match(vars_2, rest):
items = expression.get_sequence()
for item in items:
if not isinstance(item, Number):
break
else:
yield_func(vars_2, None)
self.pattern.match(yield_match, expression, vars, evaluation)
def match_numericq(self, yield_func, expression, vars, evaluation, **kwargs):
def yield_match(vars_2, rest):
items = expression.get_sequence()
for item in items:
if not (isinstance(item, Number) or item.is_numeric(evaluation)):
break
else:
yield_func(vars_2, None)
self.pattern.match(yield_match, expression, vars, evaluation)
def match_real_numberq(self, yield_func, expression, vars, evaluation, **kwargs):
def yield_match(vars_2, rest):
items = expression.get_sequence()
for item in items:
if not isinstance(item, (Integer, Rational, Real)):
break
else:
yield_func(vars_2, None)
self.pattern.match(yield_match, expression, vars, evaluation)
def match_positive(self, yield_func, expression, vars, evaluation, **kwargs):
def yield_match(vars_2, rest):
items = expression.get_sequence()
if all(
isinstance(item, (Integer, Rational, Real)) and item.value > 0
for item in items
):
yield_func(vars_2, None)
self.pattern.match(yield_match, expression, vars, evaluation)
def match_negative(self, yield_func, expression, vars, evaluation, **kwargs):
def yield_match(vars_2, rest):
items = expression.get_sequence()
if all(
isinstance(item, (Integer, Rational, Real)) and item.value < 0
for item in items
):
yield_func(vars_2, None)
self.pattern.match(yield_match, expression, vars, evaluation)
def match_nonpositive(self, yield_func, expression, vars, evaluation, **kwargs):
def yield_match(vars_2, rest):
items = expression.get_sequence()
if all(
isinstance(item, (Integer, Rational, Real)) and item.value <= 0
for item in items
):
yield_func(vars_2, None)
self.pattern.match(yield_match, expression, vars, evaluation)
def match_nonnegative(self, yield_func, expression, vars, evaluation, **kwargs):
def yield_match(vars_2, rest):
items = expression.get_sequence()
if all(
isinstance(item, (Integer, Rational, Real)) and item.value >= 0
for item in items
):
yield_func(vars_2, None)
self.pattern.match(yield_match, expression, vars, evaluation)
def quick_pattern_test(self, candidate, test, evaluation: Evaluation):
if test == "System`NegativePowerQ":
return (
candidate.has_form("Power", 2)
and isinstance(candidate.elements[1], (Integer, Rational, Real))
and candidate.elements[1].value < 0
)
elif test == "System`NotNegativePowerQ":
return not (
candidate.has_form("Power", 2)
and isinstance(candidate.elements[1], (Integer, Rational, Real))
and candidate.elements[1].value < 0
)
else:
from mathics.builtin.base import Test
            builtin = evaluation.definitions.get_definition(test)
if builtin:
builtin = builtin.builtin
if builtin is not None and isinstance(builtin, Test):
return builtin.test(candidate)
return None
def match(self, yield_func, expression, vars, evaluation, **kwargs):
# def match(self, yield_func, expression, vars, evaluation, **kwargs):
# for vars_2, rest in self.pattern.match(expression, vars, evaluation):
def yield_match(vars_2, rest):
testname = self.test_name
items = expression.get_sequence()
for item in items:
item = item.evaluate(evaluation)
quick_test = self.quick_pattern_test(item, testname, evaluation)
if quick_test is False:
break
elif quick_test is True:
continue
# raise StopGenerator
else:
test_expr = Expression(self.test, item)
test_value = test_expr.evaluate(evaluation)
if test_value is not SymbolTrue:
break
# raise StopGenerator
else:
yield_func(vars_2, None)
# try:
self.pattern.match(yield_match, expression, vars, evaluation)
# except StopGenerator:
# pass
def get_match_count(self, vars={}):
return self.pattern.get_match_count(vars)
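The `init` trick in `PatternTest` above — swapping `self.match` for a specialized bound method looked up in a dict — is a general micro-optimization: pay the dispatch cost once at construction time instead of on every match. A generic illustration of the same technique (class and method names invented for this sketch):

```python
class Checker:
    """Pick a specialized `check` implementation once, at construction
    time, the way PatternTest.init rebinds self.match above."""

    def __init__(self, kind):
        dispatch = {
            "int": self._check_int,
            "str": self._check_str,
        }
        # Fall back to the generic method when no fast path exists.
        self.check = dispatch.get(kind, self._check_generic)

    def _check_int(self, value):
        return isinstance(value, int)

    def _check_str(self, value):
        return isinstance(value, str)

    def _check_generic(self, value):
        return value is not None


print(Checker("int").check(3), Checker("str").check(3))
```

Callers always invoke `checker.check(value)`; the per-call `if`/`elif` ladder (or dict lookup) is gone, which matters on hot paths like pattern matching.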
class Alternatives(BinaryOperator, PatternObject):
"""
<url>
:WMA link:
https://reference.wolfram.com/language/ref/Alternatives.html</url>
<dl>
<dt>'Alternatives[$p1$, $p2$, ..., $p_i$]'
<dt>'$p1$ | $p2$ | ... | $p_i$'
<dd>is a pattern that matches any of the patterns '$p1$, $p2$,
...., $p_i$'.
</dl>
>> a+b+c+d/.(a|b)->t
= c + d + 2 t
Alternatives can also be used for string expressions
>> StringReplace["0123 3210", "1" | "2" -> "X"]
= 0XX3 3XX0
#> StringReplace["h1d9a f483", DigitCharacter | WhitespaceCharacter -> ""]
= hdaf
"""
arg_counts = None
needs_verbatim = True
operator = "|"
precedence = 160
    summary_text = "match any of several patterns"
def init(
self, expr: Expression, evaluation: OptionalType[Evaluation] = None
) -> None:
super(Alternatives, self).init(expr, evaluation=evaluation)
self.alternatives = [
Pattern.create(element, evaluation=evaluation) for element in expr.elements
]
def match(self, yield_func, expression, vars, evaluation, **kwargs):
for alternative in self.alternatives:
# for new_vars, rest in alternative.match(
# expression, vars, evaluation):
# yield_func(new_vars, rest)
alternative.match(yield_func, expression, vars, evaluation)
def get_match_count(self, vars={}):
range = None
for alternative in self.alternatives:
sub = alternative.get_match_count(vars)
if range is None:
range = list(sub)
else:
if sub[0] < range[0]:
range[0] = sub[0]
if range[1] is None or sub[1] > range[1]:
range[1] = sub[1]
return range
class _StopGeneratorExcept(StopGenerator):
pass
class Except(PatternObject):
"""
<url>
:WMA link:
https://reference.wolfram.com/language/ref/Except.html</url>
<dl>
<dt>'Except[$c$]'
<dd>represents a pattern object that matches any expression except \
those matching $c$.
<dt>'Except[$c$, $p$]'
<dd>represents a pattern object that matches $p$ but not $c$.
</dl>
>> Cases[{x, a, b, x, c}, Except[x]]
= {a, b, c}
>> Cases[{a, 0, b, 1, c, 2, 3}, Except[1, _Integer]]
= {0, 2, 3}
Except can also be used for string expressions:
>> StringReplace["Hello world!", Except[LetterCharacter] -> ""]
= Helloworld
#> StringReplace["abc DEF 123!", Except[LetterCharacter, WordCharacter] -> "0"]
= abc DEF 000!
"""
arg_counts = [1, 2]
    summary_text = "match expressions that do not match a given pattern"
def init(
self, expr: Expression, evaluation: OptionalType[Evaluation] = None
) -> None:
super(Except, self).init(expr, evaluation=evaluation)
        self.c = Pattern.create(expr.elements[0], evaluation=evaluation)
if len(expr.elements) == 2:
self.p = Pattern.create(expr.elements[1], evaluation=evaluation)
else:
self.p = Pattern.create(Expression(SymbolBlank), evaluation=evaluation)
def match(self, yield_func, expression, vars, evaluation, **kwargs):
def except_yield_func(vars, rest):
raise _StopGeneratorExcept(True)
try:
self.c.match(except_yield_func, expression, vars, evaluation)
except _StopGeneratorExcept:
pass
else:
self.p.match(yield_func, expression, vars, evaluation)
class Verbatim(PatternObject):
"""
<url>
:WMA link:
https://reference.wolfram.com/language/ref/Verbatim.html</url>
<dl>
<dt>'Verbatim[$expr$]'
<dd>prevents pattern constructs in $expr$ from taking effect,
allowing them to match themselves.
</dl>
Create a pattern matching 'Blank':
>> _ /. Verbatim[_]->t
= t
>> x /. Verbatim[_]->t
= x
Without 'Verbatim', 'Blank' has its normal effect:
>> x /. _->t
= t
"""
arg_counts = [1, 2]
summary_text = "take the pattern elements as literals"
def init(
self, expr: Expression, evaluation: OptionalType[Evaluation] = None
) -> None:
super(Verbatim, self).init(expr, evaluation=evaluation)
self.content = expr.elements[0]
def match(self, yield_func, expression, vars, evaluation, **kwargs):
if self.content.sameQ(expression):
yield_func(vars, None)
class HoldPattern(PatternObject):
"""
<url>:WMA link:https://reference.wolfram.com/language/ref/HoldPattern.html</url>
<dl>
<dt>'HoldPattern[$expr$]'
<dd>is equivalent to $expr$ for pattern matching, but
maintains it in an unevaluated form.
</dl>
>> HoldPattern[x + x]
= HoldPattern[x + x]
>> x /. HoldPattern[x] -> t
= t
'HoldPattern' has attribute 'HoldAll':
>> Attributes[HoldPattern]
= {HoldAll, Protected}
"""
arg_counts = [1]
attributes = A_HOLD_ALL | A_PROTECTED
summary_text = "took the expression as a literal pattern"
def init(
self, expr: Expression, evaluation: OptionalType[Evaluation] = None
) -> None:
super(HoldPattern, self).init(expr, evaluation=evaluation)
self.pattern = Pattern.create(expr.elements[0], evaluation=evaluation)
def match(self, yield_func, expression, vars, evaluation, **kwargs):
# for new_vars, rest in self.pattern.match(
# expression, vars, evaluation):
# yield new_vars, rest
self.pattern.match(yield_func, expression, vars, evaluation)
class Pattern_(PatternObject):
"""
<url>:WMA link:https://reference.wolfram.com/language/ref/Pattern.html</url>
<dl>
<dt>'Pattern[$symb$, $patt$]'
<dt>'$symb$ : $patt$'
<dd>assigns the name $symb$ to the pattern $patt$.
<dt>'$symb$_$head$'
<dd>is equivalent to '$symb$ : _$head$' (similarly for '__'
and '___').
<dt>'$symb$ : $patt$ : $default$'
<dd>is a pattern with name $symb$ and default value $default$,
equivalent to 'Optional[$patt$ : $symb$, $default$]'.
</dl>
>> FullForm[a_b]
= Pattern[a, Blank[b]]
>> FullForm[a:_:b]
= Optional[Pattern[a, Blank[]], b]
'Pattern' has attribute 'HoldFirst', so it does not evaluate its name:
>> x = 2
= 2
>> x_
= x_
Nested 'Pattern' constructs assign multiple names to the same pattern. Still,
the last parameter is the default value.
>> f[y] /. f[a:b,_:d] -> {a, b}
= f[y]
This is equivalent to:
>> f[a] /. f[a:_:b] -> {a, b}
= {a, b}
'FullForm':
>> FullForm[a:b:c:d:e]
= Optional[Pattern[a, b], Optional[Pattern[c, d], e]]
>> f[] /. f[a:_:b] -> {a, b}
= {b, b}
"""
name = "Pattern"
arg_counts = [2]
attributes = A_HOLD_FIRST | A_PROTECTED
messages = {
"patvar": "First element in pattern `1` is not a valid pattern name.",
"nodef": (
"No default setting found for `1` in " "position `2` when length is `3`."
),
}
rules = {
"MakeBoxes[Verbatim[Pattern][symbol_Symbol, blank_Blank|blank_BlankSequence|blank_BlankNullSequence], f:StandardForm|TraditionalForm|InputForm|OutputForm]": "MakeBoxes[symbol, f] <> MakeBoxes[blank, f]",
# 'StringForm["`1``2`", HoldForm[symbol], blank]',
}
formats = {
"Verbatim[Pattern][symbol_, "
"pattern_?(!MatchQ[#, _Blank|_BlankSequence|_BlankNullSequence]&)]": (
'Infix[{symbol, pattern}, ":", 150, Left]'
)
}
summary_text = "a named pattern"
def init(
self, expr: Expression, evaluation: OptionalType[Evaluation] = None
) -> None:
if len(expr.elements) != 2:
self.error("patvar", expr)
varname = expr.elements[0].get_name()
if varname is None or varname == "":
self.error("patvar", expr)
super(Pattern_, self).init(expr, evaluation=evaluation)
self.varname = varname
self.pattern = Pattern.create(expr.elements[1], evaluation=evaluation)
def __repr__(self):
return "<Pattern: %s>" % repr(self.pattern)
def get_match_count(self, vars={}):
return self.pattern.get_match_count(vars)
def match(self, yield_func, expression, vars, evaluation, **kwargs):
existing = vars.get(self.varname, None)
if existing is None:
new_vars = vars.copy()
new_vars[self.varname] = expression
# for vars_2, rest in self.pattern.match(
# expression, new_vars, evaluation):
# yield vars_2, rest
if type(self.pattern) is OptionsPattern:
self.pattern.match(
yield_func, expression, new_vars, evaluation, **kwargs
)
else:
self.pattern.match(yield_func, expression, new_vars, evaluation)
else:
if existing.sameQ(expression):
yield_func(vars, None)
def get_match_candidates(
self, elements, expression, attributes, evaluation, vars={}
):
existing = vars.get(self.varname, None)
if existing is None:
return self.pattern.get_match_candidates(
elements, expression, attributes, evaluation, vars
)
else:
# Treat existing variable as verbatim
verbatim_expr = Expression(SymbolVerbatim, existing)
verbatim = Verbatim(verbatim_expr)
return verbatim.get_match_candidates(
elements, expression, attributes, evaluation, vars
)
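# Illustrative note (not executed): once a pattern name is bound, any further
# occurrence of that name must match the very same expression, which is what
# the Verbatim fallback above implements. In Wolfram-language terms:
#
#     f[a, a] /. f[x_, x_] -> x    (* both x_ bind to a; gives a *)
#     f[a, b] /. f[x_, x_] -> x    (* second x_ sees the existing binding
#                                     a and fails; stays f[a, b] *)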
class Optional(BinaryOperator, PatternObject):
"""
<url>:WMA link:https://reference.wolfram.com/language/ref/Optional.html</url>
<dl>
<dt>'Optional[$patt$, $default$]'
<dt>'$patt$ : $default$'
<dd>is a pattern which matches $patt$, which if omitted
should be replaced by $default$.
</dl>
>> f[x_, y_:1] := {x, y}
>> f[1, 2]
= {1, 2}
>> f[a]
= {a, 1}
Note that '$symb$ : $patt$' represents a 'Pattern' object. However, there is no
ambiguity, since $symb$ has to be a symbol in this case.
>> x:_ // FullForm
= Pattern[x, Blank[]]
>> _:d // FullForm
= Optional[Blank[], d]
>> x:_+y_:d // FullForm
= Pattern[x, Plus[Blank[], Optional[Pattern[y, Blank[]], d]]]
's_.' is equivalent to 'Optional[s_]' and represents an optional parameter which, if omitted,
gets its value from 'Default'.
>> FullForm[s_.]
= Optional[Pattern[s, Blank[]]]
>> Default[h, k_] := k
>> h[a] /. h[x_, y_.] -> {x, y}
= {a, 2}
#> a:b:c
= a : b : c
#> FullForm[a:b:c]
= Optional[Pattern[a, b], c]
#> (a:b):c
= a : b : c
#> a:(b:c)
= a : (b : c)
"""
arg_counts = [1, 2]
default_formats = False
formats = {
"Verbatim[Optional][pattern_Pattern, default_]": 'Infix[{HoldForm[pattern], HoldForm[default]}, ":", 140, Right]'
}
grouping = "Right"
rules = {
"MakeBoxes[Verbatim[Optional][Verbatim[Pattern][symbol_Symbol, Verbatim[_]]], f:StandardForm|TraditionalForm|InputForm|OutputForm]": 'MakeBoxes[symbol, f] <> "_."',
"MakeBoxes[Verbatim[Optional][Verbatim[_]], f:StandardForm|TraditionalForm|InputForm|OutputForm]": '"_."',
}
operator = ":"
precedence = 140
summary_text = "an optional argument with a default value"
def init(
self, expr: Expression, evaluation: OptionalType[Evaluation] = None
) -> None:
super(Optional, self).init(expr, evaluation=evaluation)
self.pattern = Pattern.create(expr.elements[0], evaluation=evaluation)
if len(expr.elements) == 2:
self.default = expr.elements[1]
else:
self.default = None
def match(
self,
yield_func,
expression,
vars,
evaluation,
head=None,
element_index=None,
element_count=None,
**kwargs
):
if expression.has_form("Sequence", 0):
if self.default is None:
if head is None: # head should be given by match_element!
default = None
else:
name = head.get_name()
default = get_default_value(
name, evaluation, element_index, element_count
)
if default is None:
evaluation.message(
"Pattern", "nodef", head, element_index, element_count
)
return
else:
default = self.default
expression = default
# for vars_2, rest in self.pattern.match(expression, vars, evaluation):
# yield vars_2, rest
self.pattern.match(yield_func, expression, vars, evaluation)
def get_match_count(self, vars={}):
return (0, 1)
def get_default_value(
name: str,
evaluation: Evaluation,
k: OptionalType[int] = None,
n: OptionalType[int] = None,
):
pos = []
if k is not None:
pos.append(k)
if n is not None:
pos.append(n)
for pos_len in reversed(range(len(pos) + 1)):
# Try patterns from specific to general
defaultexpr = Expression(
SymbolDefault, Symbol(name), *[Integer(index) for index in pos[:pos_len]]
)
result = evaluation.definitions.get_value(
name, "System`DefaultValues", defaultexpr, evaluation
)
if result is not None:
if result.sameQ(defaultexpr):
result = result.evaluate(evaluation)
return result
return None
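# Illustrative note (not executed): the loop above tries DefaultValues rules
# from most to least specific. For get_default_value(name, evaluation, k, n)
# the probe expressions are, in order:
#
#     Default[name, k, n]    # specific to position k in a length-n call
#     Default[name, k]       # specific to position k
#     Default[name]          # generic default
#
# and the first one with a stored definition wins.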
class _Blank(PatternObject):
arg_counts = [0, 1]
def init(
self, expr: Expression, evaluation: OptionalType[Evaluation] = None
) -> None:
super(_Blank, self).init(expr, evaluation=evaluation)
if expr.elements:
self.head = expr.elements[0]
else:
# FIXME: elsewhere, some code wants to
# get the attributes of head.
# So is this really the best thing to do here?
self.head = None
class Blank(_Blank):
"""
<url>:WMA link:https://reference.wolfram.com/language/ref/Blank.html</url>
<dl>
<dt>'Blank[]'
<dt>'_'
<dd>represents any single expression in a pattern.
<dt>'Blank[$h$]'
<dt>'_$h$'
<dd>represents any expression with head $h$.
</dl>
>> MatchQ[a + b, _]
= True
Patterns of the form '_'$h$ can be used to test the types of
objects:
>> MatchQ[42, _Integer]
= True
>> MatchQ[1.0, _Integer]
= False
>> {42, 1.0, x} /. {_Integer -> "integer", _Real -> "real"} // InputForm
= {"integer", "real", x}
'Blank' only matches a single expression:
>> MatchQ[f[1, 2], f[_]]
= False
#> StringReplace["hello world!", _ -> "x"]
= xxxxxxxxxxxx
"""
rules = {
"MakeBoxes[Verbatim[Blank][], f:StandardForm|TraditionalForm|OutputForm|InputForm]": '"_"',
"MakeBoxes[Verbatim[Blank][head_Symbol], f:StandardForm|TraditionalForm|OutputForm|InputForm]": '"_" <> MakeBoxes[head, f]',
}
summary_text = "match to any single expression"
def match(
self,
yield_func: Callable,
expression: Expression,
vars: dict,
evaluation: Evaluation,
**kwargs
):
if not expression.has_form("Sequence", 0):
if self.head is not None:
if expression.get_head().sameQ(self.head):
yield_func(vars, None)
else:
yield_func(vars, None)
class BlankSequence(_Blank):
"""
<url>:WMA link:https://reference.wolfram.com/language/ref/BlankSequence.html</url>
<dl>
<dt>'BlankSequence[]'
<dt>'__'
<dd>represents any non-empty sequence of expression elements in
a pattern.
<dt>'BlankSequence[$h$]'
<dt>'__$h$'
<dd>represents any sequence of elements, all of which have head $h$.
</dl>
Use a 'BlankSequence' pattern to stand for a non-empty sequence of
arguments:
>> MatchQ[f[1, 2, 3], f[__]]
= True
>> MatchQ[f[], f[__]]
= False
'__'$h$ will match only if all elements have head $h$:
>> MatchQ[f[1, 2, 3], f[__Integer]]
= True
>> MatchQ[f[1, 2.0, 3], f[__Integer]]
= False
The value captured by a named 'BlankSequence' pattern is a
'Sequence' object:
>> f[1, 2, 3] /. f[x__] -> x
= Sequence[1, 2, 3]
#> f[a, b, c, d] /. f[x__, c, y__] -> {{x},{y}}
= {{a, b}, {d}}
#> a + b + c + d /. Plus[x__, c] -> {x}
= {a, b, d}
#> StringReplace[{"ab", "abc", "abcd"}, "b" ~~ __ -> "x"]
= {ab, ax, ax}
"""
rules = {
"MakeBoxes[Verbatim[BlankSequence][], f:StandardForm|TraditionalForm|OutputForm|InputForm]": '"__"',
"MakeBoxes[Verbatim[BlankSequence][head_Symbol], f:StandardForm|TraditionalForm|OutputForm|InputForm]": '"__" <> MakeBoxes[head, f]',
}
summary_text = "match to a non-empty sequence of elements"
def match(
self,
yield_func: Callable,
expression: Expression,
vars: dict,
evaluation: Evaluation,
**kwargs
):
elements = expression.get_sequence()
if not elements:
return
if self.head:
ok = True
for element in elements:
if element.get_head() != self.head:
ok = False
break
if ok:
yield_func(vars, None)
else:
yield_func(vars, None)
def get_match_count(self, vars={}):
return (1, None)
class BlankNullSequence(_Blank):
"""
<url>:WMA link:https://reference.wolfram.com/language/ref/BlankNullSequence.html</url>
<dl>
<dt>'BlankNullSequence[]'
<dt>'___'
<dd>represents any sequence of expression elements in a pattern,
including an empty sequence.
</dl>
'BlankNullSequence' is like 'BlankSequence', except it can match an
empty sequence:
>> MatchQ[f[], f[___]]
= True
## This test hits infinite recursion
##
##The value captured by a named 'BlankNullSequence' pattern is a
##'Sequence' object, which can have no elements:
##>> f[] /. f[x___] -> x
## = Sequence[]
#> ___symbol
= ___symbol
#> ___symbol //FullForm
= BlankNullSequence[symbol]
#> StringReplace[{"ab", "abc", "abcd"}, "b" ~~ ___ -> "x"]
= {ax, ax, ax}
"""
rules = {
"MakeBoxes[Verbatim[BlankNullSequence][], f:StandardForm|TraditionalForm|OutputForm|InputForm]": '"___"',
"MakeBoxes[Verbatim[BlankNullSequence][head_Symbol], f:StandardForm|TraditionalForm|OutputForm|InputForm]": '"___" <> MakeBoxes[head, f]',
}
summary_text = "match to a sequence of zero or more elements"
def match(
self,
yield_func: Callable,
expression: Expression,
vars: dict,
evaluation: Evaluation,
**kwargs
):
elements = expression.get_sequence()
if self.head:
ok = True
for element in elements:
if element.get_head() != self.head:
ok = False
break
if ok:
yield_func(vars, None)
else:
yield_func(vars, None)
def get_match_count(self, vars={}):
return (0, None)
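# Illustrative summary (not executed): `get_match_count` reports the
# (min, max) number of sequence elements a pattern may absorb, with None
# meaning unbounded. For the blank family defined above:
#
#     Blank[]                ->  (1, 1)      # single expression
#     BlankSequence[]        ->  (1, None)   # one or more
#     BlankNullSequence[]    ->  (0, None)   # zero or more
#
# The sequence matcher uses these bounds to prune impossible ways of
# splitting an argument sequence among sub-patterns.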
class Repeated(PostfixOperator, PatternObject):
"""
<url>:WMA link:https://reference.wolfram.com/language/ref/Repeated.html</url>
<dl>
<dt>'Repeated[$pattern$]'
<dd>matches one or more occurrences of $pattern$.
</dl>
>> a_Integer.. // FullForm
= Repeated[Pattern[a, Blank[Integer]]]
>> 0..1//FullForm
= Repeated[0]
>> {{}, {a}, {a, b}, {a, a, a}, {a, a, a, a}} /. {Repeated[x : a | b, 3]} -> x
= {{}, a, {a, b}, a, {a, a, a, a}}
>> f[x, 0, 0, 0] /. f[x, s:0..] -> s
= Sequence[0, 0, 0]
#> 1.. // FullForm
= Repeated[1]
#> 8^^1.. // FullForm (* Mathematica gets this wrong *)
= Repeated[1]
#> StringReplace["010110110001010", "01".. -> "a"]
= a1a100a0
#> StringMatchQ[#, "a" ~~ ("b"..) ~~ "a"] &/@ {"aa", "aba", "abba"}
= {False, True, True}
"""
arg_counts = [1, 2]
messages = {
"range": (
"Range specification in integers (max or {min, max}) "
"expected at position `1` in `2`."
)
}
operator = ".."
precedence = 170
summary_text = "match to one or more occurrences of a pattern"
def init(
self,
expr: Expression,
min: int = 1,
evaluation: OptionalType[Evaluation] = None,
):
self.pattern = Pattern.create(expr.elements[0], evaluation=evaluation)
self.max = None
self.min = min
if len(expr.elements) == 2:
element_1 = expr.elements[1]
allnumbers = not any(
element.get_int_value() is None for element in element_1.get_elements()
)
if element_1.has_form("List", 1, 2) and allnumbers:
self.max = element_1.elements[-1].get_int_value()
self.min = element_1.elements[0].get_int_value()
elif element_1.get_int_value() is not None:
self.max = element_1.get_int_value()
else:
self.error("range", 2, expr)
def match(self, yield_func, expression, vars, evaluation, **kwargs):
elements = expression.get_sequence()
if len(elements) < self.min:
return
if self.max is not None and len(elements) > self.max:
return
def iter(yield_iter, rest_elements, vars):
if rest_elements:
# for new_vars, rest in self.pattern.match(rest_elements[0],
# vars, evaluation):
def yield_match(new_vars, rest):
# for sub_vars, sub_rest in iter(rest_elements[1:],
# new_vars):
# yield sub_vars, rest
iter(yield_iter, rest_elements[1:], new_vars)
self.pattern.match(yield_match, rest_elements[0], vars, evaluation)
else:
yield_iter(vars, None)
# for vars, rest in iter(elements, vars):
# yield_func(vars, rest)
iter(yield_func, elements, vars)
def get_match_count(self, vars={}):
return (self.min, self.max)
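# Illustrative sketch (not executed): the nested `iter` in Repeated.match is
# the continuation-passing form of a simple recursion over the sequence.
# Written with generators it would read roughly:
#
#     def iter(rest_elements, vars):
#         if not rest_elements:
#             yield vars
#         else:
#             for new_vars in matches_of(pattern, rest_elements[0], vars):
#                 yield from iter(rest_elements[1:], new_vars)
#
# i.e. every element must individually match the repeated pattern, with
# variable bindings threaded left to right (`matches_of` is hypothetical).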
class RepeatedNull(Repeated):
"""
<url>:WMA link:https://reference.wolfram.com/language/ref/RepeatedNull.html</url>
<dl>
<dt>'RepeatedNull[$pattern$]'
<dd>matches zero or more occurrences of $pattern$.
</dl>
>> a___Integer...//FullForm
= RepeatedNull[Pattern[a, BlankNullSequence[Integer]]]
>> f[x] /. f[x, 0...] -> t
= t
#> 1... // FullForm
= RepeatedNull[1]
#> 8^^1... // FullForm (* Mathematica gets this wrong *)
= RepeatedNull[1]
#> StringMatchQ[#, "a" ~~ ("b"...) ~~ "a"] &/@ {"aa", "aba", "abba"}
= {True, True, True}
"""
operator = "..."
precedence = 170
summary_text = "match to zero or more occurrences of a pattern"
def init(
self, expr: Expression, evaluation: OptionalType[Evaluation] = None
) -> None:
super(RepeatedNull, self).init(expr, min=0, evaluation=evaluation)
class Shortest(Builtin):
"""
<url>:WMA link:https://reference.wolfram.com/language/ref/Shortest.html</url>
<dl>
<dt>'Shortest[$pat$]'
<dd>is a pattern object that matches the shortest sequence consistent with the pattern $pat$.
</dl>
>> StringCases["aabaaab", Shortest["a" ~~ __ ~~ "b"]]
= {aab, aaab}
>> StringCases["aabaaab", Shortest[RegularExpression["a+b"]]]
= {aab, aaab}
"""
summary_text = "the shortest part matching a string pattern"
class Longest(Builtin):
"""
<url>
:WMA link:
https://reference.wolfram.com/language/ref/Longest.html</url>
<dl>
<dt>'Longest[$pat$]'
<dd>is a pattern object that matches the longest sequence consistent \
with the pattern $pat$.
</dl>
>> StringCases["aabaaab", Longest["a" ~~ __ ~~ "b"]]
= {aabaaab}
>> StringCases["aabaaab", Longest[RegularExpression["a+b"]]]
= {aab, aaab}
"""
summary_text = "the longest part matching a string pattern"
class Condition(BinaryOperator, PatternObject):
"""
<url>
:WMA link:
https://reference.wolfram.com/language/ref/Condition.html</url>
<dl>
<dt>'Condition[$pattern$, $expr$]'
<dt>'$pattern$ /; $expr$'
<dd>places an additional constraint on $pattern$ that only
allows it to match if $expr$ evaluates to 'True'.
</dl>
The controlling expression of a 'Condition' can use variables from
the pattern:
>> f[3] /. f[x_] /; x>0 -> t
= t
>> f[-3] /. f[x_] /; x>0 -> t
= f[-3]
'Condition' can be used in an assignment:
>> f[x_] := p[x] /; x>0
>> f[3]
= p[3]
>> f[-3]
= f[-3]
"""
arg_counts = [2]
# Don't know why this has attribute HoldAll in Mathematica
attributes = A_HOLD_REST | A_PROTECTED
operator = "/;"
precedence = 130
summary_text = "conditional definition"
def init(
self, expr: Expression, evaluation: OptionalType[Evaluation] = None
) -> None:
super(Condition, self).init(expr, evaluation=evaluation)
self.test = expr.elements[1]
# if (expr.elements[0].get_head_name() == "System`Condition" and
# len(expr.elements[0].elements) == 2):
# self.test = Expression(SymbolAnd, self.test, expr.elements[0].elements[1])
# self.pattern = Pattern.create(expr.elements[0].elements[0])
# else:
self.pattern = Pattern.create(expr.elements[0], evaluation=evaluation)
def match(
self,
yield_func: Callable,
expression: Expression,
vars: dict,
evaluation: Evaluation,
**kwargs
):
# for new_vars, rest in self.pattern.match(expression, vars,
# evaluation):
def yield_match(new_vars, rest):
test_expr = self.test.replace_vars(new_vars)
test_result = test_expr.evaluate(evaluation)
if test_result is SymbolTrue:
yield_func(new_vars, rest)
self.pattern.match(yield_match, expression, vars, evaluation)
class OptionsPattern(PatternObject):
"""
<url>
:WMA link:
https://reference.wolfram.com/language/ref/OptionsPattern.html</url>
<dl>
<dt>'OptionsPattern[$f$]'
<dd>is a pattern that stands for a sequence of options given \
to a function, with default values taken from 'Options[$f$]'. \
The options can be of the form '$opt$->$value$' or
'$opt$:>$value$', and might be in arbitrarily nested lists.
<dt>'OptionsPattern[{$opt1$->$value1$, ...}]'
<dd>takes explicit default values from the given list. The
list may also contain symbols $f$, for which 'Options[$f$]' is \
taken into account; it may be arbitrarily nested. \
'OptionsPattern[{}]' does not use any default values.
</dl>
The option values can be accessed using 'OptionValue'.
>> f[x_, OptionsPattern[{n->2}]] := x ^ OptionValue[n]
>> f[x]
= x ^ 2
>> f[x, n->3]
= x ^ 3
Delayed rules as options:
>> e = f[x, n:>a]
= x ^ a
>> a = 5;
>> e
= x ^ 5
Options might be given in nested lists:
>> f[x, {{{n->4}}}]
= x ^ 4
#> {opt -> b} /. OptionsPattern[{}] -> t
= t
#> Clear[f]
#> Options[f] = {Power -> 2};
#> f[x_, OptionsPattern[f]] := x ^ OptionValue[Power]
#> f[10]
= 100
#> f[10, Power -> 3]
= 1000
#> Clear[f]
#> Options[f] = {Power -> 2};
#> f[x_, OptionsPattern[]] := x ^ OptionValue[Power]
#> f[10]
= 100
#> f[10, Power -> 3]
= 1000
#> Clear[f]
"""
arg_counts = [0, 1]
summary_text = "a sequence of optional named arguments"
def init(
self, expr: Expression, evaluation: OptionalType[Evaluation] = None
) -> None:
super(OptionsPattern, self).init(expr, evaluation=evaluation)
try:
self.defaults = expr.elements[0]
except IndexError:
# OptionsPattern[] takes default options of the nearest enclosing
# function. Set to not None in self.match
self.defaults = None
def match(
self,
yield_func: Callable,
expression: Expression,
vars: dict,
evaluation: Evaluation,
**kwargs
):
if self.defaults is None:
self.defaults = kwargs.get("head")
if self.defaults is None:
# we end up here with OptionsPattern that do not have any
# default options defined, e.g. with this code:
# f[x:OptionsPattern[]] := x; f["Test" -> 1]
# set self.defaults to an empty List, so we don't crash.
self.defaults = ListExpression()
defaults = self.defaults
values = (
defaults.get_option_values(
evaluation, allow_symbols=True, stop_on_error=False
)
if isinstance(defaults, EvalMixin)
else {}
)
sequence = expression.get_sequence()
for options in sequence:
option_values = (
options.get_option_values(evaluation)
if isinstance(options, EvalMixin)
else None
)
if option_values is None:
return
values.update(option_values)
new_vars = vars.copy()
for name, value in values.items():
new_vars["_option_" + name] = value
yield_func(new_vars, None)
def get_match_count(self, vars: dict = {}):
return (0, None)
def get_match_candidates(
self,
elements: Tuple[BaseElement],
expression: Expression,
attributes: int,
evaluation: Evaluation,
vars: dict = {},
):
def _match(element: Expression):
return element.has_form(("Rule", "RuleDelayed"), 2) or element.has_form(
"List", None
)
return [element for element in elements if _match(element)]
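# Illustrative note (not executed): matched option values are stored in the
# ordinary binding dictionary under a reserved "_option_" prefix (see
# `new_vars["_option_" + name] = value` above). So for
#
#     f[x_, OptionsPattern[{n -> 2}]] := x ^ OptionValue[n]
#     f[x, n -> 3]
#
# the successful match carries bindings roughly like
#
#     {"Global`x": x, "_option_Global`n": 3}
#
# from which OptionValue[n] later retrieves 3 by the prefixed key.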
class Dispatch(Atom):
class_head_name = "System`Dispatch"
def __init__(self, rulelist: Expression, evaluation: Evaluation) -> None:
self.src = ListExpression(*rulelist)
self.rules = [Rule(rule.elements[0], rule.elements[1]) for rule in rulelist]
self._elements = None
self._head = SymbolDispatch
def get_sort_key(self) -> tuple:
return self.src.get_sort_key()
def get_atom_name(self):
return "System`Dispatch"
def __repr__(self):
return "dispatch"
def atom_to_boxes(self, f: Symbol, evaluation: Evaluation):
from mathics.builtin.box.layout import RowBox
from mathics.eval.makeboxes import format_element
box_element = format_element(self.src, evaluation, f)
return RowBox(String("Dispatch"), String("["), box_element, String("]"))
class DispatchAtom(AtomBuiltin):
"""
<url>
:WMA link:
https://reference.wolfram.com/language/ref/DispatchAtom.html</url>
<dl>
<dt>'Dispatch[$rulelist$]'
<dd>Introduced for compatibility. Currently, it just returns $rulelist$. \
In the future, it should return an optimized DispatchRules atom, \
containing an optimized set of rules.
</dl>
>> rules = {{a_,b_}->a^b, {1,2}->3., F[x_]->x^2};
>> F[2] /. rules
= 4
>> dispatchrules = Dispatch[rules]
= Dispatch[{{a_, b_} -> a ^ b, {1, 2} -> 3., F[x_] -> x ^ 2}]
>> F[2] /. dispatchrules
= 4
"""
class_head_name = "System`DispatchAtom"
messages = {
"invrpl": "`1` is not a valid rule or list of rules.",
}
summary_text = "convert a list of rules in an optimized dispatch rules atom"
def __repr__(self):
return "dispatchatom"
def eval_create(
self, rules: ListExpression, evaluation: Evaluation
) -> OptionalType[BaseElement]:
"""Dispatch[rules_List]"""
# TODO:
# The next step would be to enlarge this method, in order to
# check that all the elements in x are rules, eliminate redundancies
# in the list, and sort the list in a way that increases efficiency.
# A second step would be to implement an ``Atom`` class containing the
# compiled patterns, and modify Replace and ReplaceAll to handle this
# kind of objects.
#
if isinstance(rules, Dispatch):
return rules
if isinstance(rules, Symbol):
rules = rules.evaluate(evaluation)
if rules.has_form("List", None):
rules = rules.elements
else:
rules = [rules]
all_list = all(rule.has_form("List", None) for rule in rules)
if all_list:
elements = [self.eval_create(rule, evaluation) for rule in rules]
return ListExpression(*elements)
flatten_list = []
for rule in rules:
if isinstance(rule, Symbol):
rule = rule.evaluate(evaluation)
if rule.has_form("List", None):
flatten_list.extend(rule.elements)
elif rule.has_form(("Rule", "RuleDelayed"), 2):
flatten_list.append(rule)
elif isinstance(rule, Dispatch):
flatten_list.extend(rule.src.elements)
else:
# WMA does not raise this message: just leave it unevaluated,
# and raise an error when the dispatch rule is used.
evaluation.message("Dispatch", "invrpl", rule)
return
try:
return Dispatch(flatten_list, evaluation)
except Exception:
return
def eval_normal(self, dispatch: Dispatch, evaluation: Evaluation) -> ListExpression:
"""Normal[dispatch_Dispatch]"""
if isinstance(dispatch, Dispatch):
return dispatch.src
else:
return dispatch.elements[0] | PypiClean |
# MODOI-1.0/SimulationClient/SimulationClient.py
from multiprocessing.connection import Client
import time
import logging
import socket
import numpy as np
from SimulationUtilities import Configuration_Processing
from SimulationUtilities.Communication_Codes import comm_code
import LinearAlgebra as la
from CustomBFGS import find_geodesic_midpoint
from MetricValues import shutdown_metric
class SimulationClient:
"""
The purpose of this object is to compute local geodesics using a modified BFGS method. The object receives a pair of
end points to compute the local geodesic between. The simulation client then returns a new position for the node.
The simulation client needs the value of the potential and it's gradient function, in order to achieve this it makes
calls to it's assigned SimulationPotential servers.
Attributes:
CONFIGURATION (dict) :
A dictionary containing the parsed values from the file in configuration_file.
CURVE_ADDRESS (str, int) :
A tuple containing a string representing the hostname/IP and an integer for the running SimulationServer.
AUTHKEY (str) :
A string containing the authorisation key for the listener method.
DELAY (float) :
The length of time the SimulationClient should wait if there is no new available jobs, before attempting to
contact the SimulationServer again.
METRIC_SERVERS :
A list containing tuples of addresses for SimulationPotential instances.
ID (str) :
A string that uniquely identifies the client amongst all other clients in the computation.
MASS_MATRIX (numpy.array) :
A NumPy matrix containing the mass matrix of the molecular system. Produced automatically from the Atomistic
Simulation Environment.
"""
def __init__(self, simulation_client_id, server_host, server_port, authkey, metric_server_addresses,
configuration_file, logfile=None, log_level=logging.INFO, callback_delay=1.0):
"""The constructor for the SimulationClient class.
Note:
This class is intended to be used in conjunction with running SimulationServer and SimulationPotential
objects. It will cause a runtime exception if this condition isn't satisfied.
Args:
simulation_client_id (str) :
A string that uniquely identifies the client amongst all other clients in the computation.
server_host (str) :
The TCP/IP hostname of the running SimulationServer instance.
server_port (int) :
The port number that the SimulationServer instance is communicating on.
authkey (str, optional) :
Authentication key used to secure process communications. Default to None for local computations to
increase speed.
metric_server_addresses :
A list containing tuples of the type (str, int) containing the hostnames and ports for the running
SimulationPotential instances.
configuration_file (str) :
Directory and filename of the configuration file.
logfile (str, optional) :
Directory and filename of the log file. Is created if doesn't exist, overwritten if it does.
log_level (int, optional) :
Specify level of logging required as described in the logging package documentation.
callback_delay (float) :
The length of time the SimulationClient should wait if there is no new available jobs, before attempting
to contact the SimulationServer again.
"""
# Set the SimulationClient log output to write to logfile at prescribed log level if specified. Otherwise write
# to console output. Setting to DEBUG will cause poor performance and should only be used to debug.
if logfile is not None:
logging.basicConfig(filename=logfile, level=log_level, filemode='w')
else:
logging.basicConfig(level=logging.INFO)
# Read configuration from configuration_file and store in SimulationPotential's CONFIGURATION attribute.
self.CONFIGURATION = Configuration_Processing.read_configuration_file(configuration_file)
# Set ADDRESS and AUTHKEY attributes for Client object in the start_client method.
self.CURVE_ADDRESS = (server_host, server_port)
self.AUTHKEY = authkey
# Set the callback delay as described in the attributes.
self.DELAY = callback_delay
# Store the ADDRESS and AUTHKEY attributes for Client objects in the start_client method used to compute the
# metric values.
self.METRIC_SERVERS = metric_server_addresses
# Set the client's unique identifier.
self.ID = simulation_client_id
# Compute the mass matrix for the molecular system.
self.MASS_MATRIX = np.diag(np.dstack((self.CONFIGURATION['molecule'].get_masses(),) *
(self.CONFIGURATION['dimension'] //
len(self.CONFIGURATION['molecule'].get_masses()))).flatten())
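# Illustrative sketch (not executed): for per-atom masses [m_1, ..., m_k]
# and configuration dimension d, the expression above repeats each mass
# d // k times along the diagonal. With hypothetical values:
#
#     masses = np.array([1.0, 16.0])               # 2 atoms
#     M = np.diag(np.dstack((masses,) * 3).flatten())
#     # -> diag(1, 1, 1, 16, 16, 16): each atom's mass once per
#     #    spatial coordinate (d // k = 3 here)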
def start_client(self):
"""Start the instance of SimulationClient and begin computing local geodesics.
"""
# Define a flag to indicate if contact with the SimulationServer instance is possible.
connection_made = False
# Create a response to send to the SimulationServer indicating that this is the first time this SimulationClient
# has attempted to get a task.
client_response = {'status_code': comm_code('CLIENT_FIRST_CONTACT'),
'client_name': self.ID}
# Attempt to connect to the SimulationServer instance.
try:
# Create a Client object that communicates with the listener on CURVE_ADDRESS using password AUTHKEY.
server = Client(self.CURVE_ADDRESS, authkey=self.AUTHKEY)
# When a connection is made send the client message.
server.send(client_response)
# The client assumes the server will respond with a message, either a local geodesic to compute or a message
# asking the client to try again after DELAY seconds.
server_response = server.recv()
# Interpret the servers response by first extracting the status_code variable from the response.
server_response_code = server_response['status_code']
# Close the connection to the server at this point to allow other clients to communicate with the
# SimulationServer.
server.close()
# Store in the connection_made flag that it was possible to create a connection.
connection_made = True
# If it isn't possible to connect to the server than a socket.error exception is raised.
except socket.error:
# Write an error to the log for this client indicating that the connection couldn't be made.
logging.warning('Failed to Make Connection to SimulationServer. Shutting down client.')
# Send a signal to the running instances of SimulationPotential that the SimulationClient would have used
# indicating that they should also shutdown.
shutdown_metric(self.METRIC_SERVERS, self.AUTHKEY)
# This is the main loop of the SimulationClient - the program stops running when it is no longer possible to
# communicate with the SimulationServer. This is decided by the connection_made flag.
while connection_made:
# At this point in the code a new server_response should have been received. How the SimulationClient reacts
# depends on the communication code received.
# If the server has indicated it is giving the SimulationClient a new geodesic to compute then...
if server_response_code == comm_code('SERVER_GIVES_NEW_TASK'):
# Compute the rescaled tangent direction of the curve as store as a NumPy array.
tangent_direction = (1 / float(self.CONFIGURATION['local_number_of_nodes'] + 1)) * \
np.subtract(server_response['right_end_point'], server_response['left_end_point'], dtype='float64')
# Compute the local geodesic using the BFGS method and store the NumPy array in result
result = \
find_geodesic_midpoint(server_response['left_end_point'],
server_response['right_end_point'],
self.CONFIGURATION['local_number_of_nodes'],
la.orthonormal_tangent_basis(tangent_direction,
self.CONFIGURATION['dimension']),
tangent_direction, self.CONFIGURATION['codimension'],
self.METRIC_SERVERS,
self.MASS_MATRIX,
self.AUTHKEY)
# If the function find_geodesic_midpoint returned a None object then it couldn't contact it's
# SimulationPotential instances and should be restarted.
if result is None:
# Tell the user via the log that the SimulationPotential instances couldn't be contacted.
logging.warning('Failed to Make Connection to SimulationPotential. Shutting down client.')
# Exit the main loop of the SimulationClient.
break
# If there is a midpoint then construct a client response to tell the server which node has which new
# position.
client_response = {'status_code': comm_code('CLIENT_HAS_MIDPOINT_DATA'),
'node_number': server_response['node_number'],
'new_node_position': result,
'client_name': self.ID
}
# Otherwise if the server has asked the SimulationClient to try again later...
elif server_response_code == comm_code('SERVER_REQUEST_CALLBACK'):
# Make the SimulationClient wait for DELAY seconds
time.sleep(self.DELAY)
# Create a response to tell the SimulationServer that the SimulationClient would like a new job.
client_response = {'status_code': comm_code('CLIENT_HAS_NO_TASK'), 'client_name': self.ID}
# Attempt to connect to the SimulationServer instance.
try:
# Create a Client object that communicates with the listener on CURVE_ADDRESS using password AUTHKEY.
server = Client(self.CURVE_ADDRESS, authkey=self.AUTHKEY)
# When a connection is made send the client message.
server.send(client_response)
# The client assumes the server will respond with a message, either a local geodesic to compute or a
# message asking the client to try again after DELAY seconds.
server_response = server.recv()
                # Interpret the server's response by first extracting the status_code variable from the response.
server_response_code = server_response['status_code']
# Close the connection to the server at this point to allow other clients to communicate with the
# SimulationServer.
server.close()
            # If it isn't possible to connect to the server then a socket.error exception is raised.
except (socket.error, EOFError):
# Write an error to the log for this client indicating that the connection couldn't be made.
logging.warning('Failed to Make Connection to SimulationServer. Shutting down client.')
# Send a signal to the running instances of SimulationPotential that the SimulationClient would have
# used indicating that they should also shutdown.
shutdown_metric(self.METRIC_SERVERS, self.AUTHKEY)
# Exit the main loop of the SimulationClient.
break | PypiClean |
/FluGibson-1.2-py3-none-any.whl/FluGibson-1.2.dist-info/DESCRIPTION.rst | # FluGibson

A tool for designing primers to clone influenza polymerase segments from viral cDNA.
# Installation
The installation requires the following packages:
1. `networkx`
2. `biopython`
3. `pandas` (optional)
4. `matplotlib` (optional)
*From Github:*
1. Download this repository as a Zip file.
2. Unzip the file.
3. In your terminal, navigate to the FluGibson directory.
4. Run command: `python setup.py install`
*From PyPI:*
1. (if applicable) Switch to your proper Python environment.
2. Run command: `pip install FluGibson`
*Using Conda:*
1. (if applicable) Switch to your proper Python environment.
2. Run command: `conda install FluGibson`
# Usage
## Scripted
One way to use FluGibson is to use the provided script in the `/examples` directory. Copy the script to your working directory.
Create the FASTA formatted files containing the DNA parts that you want to stitch together. For example, you would use the following FASTA definition to stitch these 3 parts together:
```
>PART_1
CATCTATCTCTCTACTGCGAGGCTATTCGACTGGCCGTTACTCGCCGGTACGTAGCTCGGTCTCGATCATCAGTACGTCTACGTGTCGTCGTACTTACACGGTCGCTCGGACTGACGTACGTCTACGTCGTCTGACTGA
>PART_2
CTACTGTCTGCTGATGGTACGTACGTGAGTACGCGCAGCACAGACACTACTTACTCTCGCGCGAGAGCTATCTACGACTACGTACTCGTCGTACGAGCTGACTGATCGACGTAGCTTGACGTACGTATCACGTACGTATCG
>PART_3
CAGCTTCGGCGCGATTACTCTACGAGCACGACGCAGCTGTCGCTGTCTGGTCTACGCTAGCGCTACGACTATCGATCAGCGTCGTACTGACGTGACGCGCATCGACGTTCGGACGTCGTCGTCGTACGACGTCTACGATGC
```
The parts will be joined in the order `PART_1-->PART_2-->PART_3`.
To produce the CSV file that has all of the primers listed, from the command line, run `python compute_primers.py`. You will get a CSV file, named `all_primers.csv`, that will house the primers that you will need to order.
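FluGibson computes the junction-spanning primers for you, but the underlying overlap idea can be sketched in a few lines. This is a toy illustration, not FluGibson's actual algorithm; the function name `junction_primers` and the fixed-length overlap are illustrative assumptions:

```python
# Toy sketch of the Gibson junction-overlap idea: a bridging sequence for each
# junction is made of the tail of the upstream part followed by the head of
# the downstream part. NOT FluGibson's actual primer-design algorithm.

def junction_primers(parts, overlap=20):
    """Return one bridging sequence per junction between consecutive parts."""
    return [up[-overlap:] + down[:overlap]
            for up, down in zip(parts, parts[1:])]

# Short made-up parts, joined PART_1 --> PART_2 --> PART_3:
parts = ["ATGCATGCAAAA", "TTTTGGCCGGCC", "AAAACCCCGGGG"]
print(junction_primers(parts, overlap=4))
# ['AAAATTTT', 'GGCCAAAA']
```

Real Gibson primer design also has to account for melting temperature and secondary structure; the sketch above ignores all of that.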
# Changelog
## Version 1.2
Added a class that converts one nucleotide sequence into another, using Gibson assembly primers.
| PypiClean |
/Moose-0.9.9b3.tar.gz/Moose-0.9.9b3/moose/connection/mongodb.py | import time
import copy
from pymongo import MongoClient, errors
from moose.core.exceptions import \
ConnectionTimeout, SuspiciousOperation, ImproperlyConfigured
from moose.core.terminal import stdout
from moose.conf import settings
import logging
logger = logging.getLogger(__name__)
class MongoDBHandler(object):
"""
Wrapper for library pymongo, see ref:
https://api.mongodb.com/python/current/tutorial.html
"""
coll_source = 'Source'
coll_result = 'Result'
def __init__(self, settings_dict):
if not isinstance(settings_dict, dict):
raise ImproperlyConfigured(
"Argument `settings_dict` is the configure for mongodb drivers, "
"which must be an instance of `dict`.")
self.settings_dict = settings_dict
self.displayed_mongo_url = self.__get_displayed_url(settings_dict)
self._client = self.__connect()
self._db_name = None
self._db = None
self._coll_name = None
self._coll = None
def __del__(self):
self.close()
def __connect(self):
mongo_url = self.__get_mongo_url(self.settings_dict)
logger.debug("Connecting to '%s'..." % self.displayed_mongo_url)
conn_cnt = 0
while conn_cnt < settings.DB_CONN_MAX_TIMES:
try:
# More details about `MongoClient` API, see:
# https://api.mongodb.com/python/2.8/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient
client = MongoClient(mongo_url)
# the constructor returns immediately and launches the
# connection process on background threads.
# Checks if the server is available like this:
client.admin.command('ismaster')
logger.debug("Connection to MongoDB is established.")
return client
# If auto-reconnection will be performed, AutoReconnect will be raised.
# Application code should handle this exception (recognizing that the
# operation failed) and then continue to execute.
except errors.AutoReconnect as e:
conn_cnt += 1
msg = (
"Connection to MongoDB is lost and an attempt to "
"auto-connect will be made ..."
)
stdout.warn(msg)
logger.warning(msg)
logger.error("Unable to establish the connection to MongoDB.")
raise ImproperlyConfigured("Unable to establish the connection to MongoDB.")
def __get_mongo_url(self, settings_dict):
try:
return "mongodb://{USER}:{PASSWORD}@{HOST}:{PORT}".format(**settings_dict)
except KeyError as e:
logger.error("Key missing, check '{}' was set.".format(str(e)))
raise ImproperlyConfigured("Key missing, check '{}' was set.".format(str(e)))
def __get_displayed_url(self, settings_dict):
if settings_dict.get('PASSWORD'):
shadow_settings = copy.deepcopy(settings_dict)
            shadow_settings['PASSWORD'] = '***'
return self.__get_mongo_url(shadow_settings)
else:
            # If PASSWORD is missing entirely, __get_mongo_url() would raise a
            # KeyError, so fall back to a fully masked URL.
            return '***'
@property
def db(self):
        if self._db is not None:
return self._db
else:
logger.error("Database is not specified, call `set_database()` first.")
raise ImproperlyConfigured("Database is not specified, call `set_database()` first.")
def set_database(self, db_name):
logger.debug("Set database to '{}'".format(db_name))
try:
self._db = self._client[db_name]
self._db_name = db_name
except errors.InvalidName as e:
logger.warning("Unknown database specified: '{}'".format(db_name))
raise ImproperlyConfigured("Unknown database specified: '{}'".format(db_name))
@property
def coll(self):
        if self._coll is not None:
return self._coll
else:
logger.error("Collection is not specified, call `set_collection()` first.")
raise ImproperlyConfigured("Collection is not specified, call `set_collection()` first.")
def set_collection(self, coll_name):
logger.debug("Set collection to '{}'".format(coll_name))
if self._coll_name != coll_name:
try:
self._coll = self.db[coll_name]
self._coll_name = coll_name
except errors.InvalidName as e:
logger.warning("Unknown collection specified: '{}'".format(coll_name))
raise ImproperlyConfigured("Unknown collection specified: '{}'".format(coll_name))
def execute(self, coll_name, operator):
conn_cnt = 0
while conn_cnt < settings.DB_CONN_MAX_TIMES:
try:
self.set_collection(coll_name)
result = operator()
return result
except (errors.AutoReconnect, errors.ExecutionTimeout):
conn_cnt += 1
logger.warning(
"Failed to execute the operation, an attempt to "
"re-connect will be made."
)
raise ImproperlyConfigured("MongoDB: Failed too many times to execute, aborted.")
def fetch(self, coll_name, filter=None, *args, **kwargs):
logger.debug("Fetching from collection [%s] of '%s'." % (coll_name, self._db_name))
def _operator():
documents = []
for doc in self.coll.find(filter, *args, **kwargs):
documents.append(doc)
return documents
return self.execute(coll_name, _operator)
def fetch_source(self, filter=None, *args, **kwargs):
return self.fetch(self.coll_source, filter, *args, **kwargs)
def fetch_result(self, filter=None, *args, **kwargs):
return self.fetch(self.coll_result, filter, *args, **kwargs)
def insert(self, coll_name, documents, **kwargs):
if isinstance(documents, dict):
documents = [documents]
logger.warning(
"Insert {} documents to the collection '{}'".format(len(documents), coll_name))
def _operator():
return self.coll.insert_many(documents, **kwargs)
return self.execute(coll_name, _operator)
def update(self, coll_name, filter, document, **kwargs):
logger.warning("Update collection '{}' matching the filter: '{}'".format(coll_name, filter))
def _operator():
return self.coll.update_many(
filter,
{'$set': document},
**kwargs
)
return self.execute(coll_name, _operator)
def close(self):
try:
self._client.close()
except AttributeError as e:
logger.warning("No connections found.")
else:
logger.debug("Connection to '{}' closed.".format(self.displayed_mongo_url)) | PypiClean |
/Dabo-0.9.16.tar.gz/Dabo-0.9.16/dabo/settings.py | import os
import sys
import logging
### Settings - begin
# Event logging is turned off globally by default for performance reasons.
# Set to True (and also set LogEvents on the object(s)) to get logging.
eventLogging = False
# Set the following to True to get all the data in the UI event put into
# the dEvent EventData dictionary. Only do that for testing, though,
# because it is very expensive from a performance standpoint.
allNativeEventInfo = False
# Set fastNameSet to True to bypass Dabo's checking to make sure siblings
# have unique names, greatly speeding up instantiation of many objects. If you
# do this, your code takes responsibility for ensuring that sibling names are
# in fact unique. We recommend you leave fastNameSet to False here, and wrap
# your object creations around setting it to True. eg:
#
# dabo.fastNameSet = True
# for i in range(200):
# self.addObject(dabo.ui.dButton, Name="b_%s" % i)
# dabo.fastNameSet = False
fastNameSet = False
# autoBindEvents specifies whether events are bound
# automatically to callback functions when the object is instantiated. E.g.,
# where you used to have to have code like:
#
# def initEvents(self):
# self.bindEvent(dEvents.MouseEnter, self.onMouseEnter)
#
# with autoBindEvents, all you need to do is define the onMouseEnter function,
# and Dabo will understand that that function should be called upon the
# MouseEnter event. Additionally, if an object has its RegID set, the form
# can have callbacks of the form on<EventName>_<ObjectName>(), which become
# the callback function for that event and object. E.g.:
#
# def onMouseEnter_cmdOkay(self, evt):
# """This gets called when the mouse enters the okay command button."""
#
autoBindEvents = True
# If you set MDI to True, then dFormMain and dForm will default to being MDI
# parent and MDI child, respectively. IOW, you don't have to change your dForm
# and dFormMain subclasses to inherit from dFormChildMDI, etc., but it comes at
# the cost of being a global setting. This must be set before dabo.ui is
# imported (ie right at the top of your app). Note that you could instead choose
# to deal with MDI/SDI explicitly in your form subclasses. IOW:
# class MyForm(dabo.ui.dFormChildMDI)
# class MyForm(dabo.ui.dFormParentMDI)
# class MyForm(dabo.ui.dFormSDI)
#
# All the MDI setting does is make dFormMain == (dFormMainSDI or dFormMainParentMDI)
# and dForm == (dFormSDI or dFormChildMDI)
#
# Setting MDI to True on Mac is essential to get a native-feeling Mac app
# where closing the last form doesn't exit the app. Whatever form has the
# focus will determine the menu used - the ParentMDI form is never shown on
# Mac but it's menu will be displayed on the Mac system menu bar.
#
# Setting MDI to True on Windows is still what users expect for most apps.
#
# Setting MDI to True on Linux results in a pageframe setup with each child
# form being a page in the parent pageframe.
MDI = False
# macFontScaling: If you set a font to 10 pt it'll look medium-small on most
# Windows and Linux screens. However, it will look very small on Mac because
# of automatic conversion in OS X. 8pt fonts on Mac are barely even readable.
# Set macFontScaling to True to make your fonts appear the same size on all
# platforms.
macFontScaling = True
# When doing date calculations, displaying calendars, etc., this determines whether
# 'Sunday' or 'Monday' is considered the beginning of the week
firstDayOfWeek = "Sunday"
# Default font size when none other is specified
defaultFontSize = 10
# Default language to use when none is specified
defaultLanguage = "en"
# Override language set in python.locale.getdefaultlocale
overrideLocaleLanguage = False
# Default encoding to use when none is specified
defaultEncoding = "utf-8"
# Default log file for the dabo.dBug.loggit function
loggitFile = "functionCall.log"
# Events should be processed from the latest to earliest binding.
# If you notice that events don't seem to be happening correctly,
# change this to False to see if that improves things. If so, please
# report it the Dabo developers.
reverseEventsOrder = True
# Does the UI layer (dForm) eat exceptions from the biz layer
# such as 'invalid row specified' or bizrule violations and
# automatically display an informational message to the user (True),
# or does the exception go unhandled so that the developer can
# quickly diagnose the issue (False)?
eatBizExceptions = True
# Check for web updates?
checkForWebUpdates = True
# Date and Time formats. None will use the os user's settings, but
# your code can easily override these. Example:
# dabo.dateFormat = "%d.%m.%Y" -> "31.12.2008".
dateFormat = None
dateTimeFormat = None
timeFormat = None
# Do we load the os user's locale settings automatically?
# Pythonista note: this executes:
# locale.setlocale(locale.LC_ALL, '')
loadUserLocale = True
# File extensions understood by the getFile functions. The format is a dictionary, with
# the extension as the key, and the descriptive text as the value. To add your own
# custom extensions, create a dict with this same format named 'custom_extensions'
# in your settings_override file, and those will be added to this list.
custom_extensions = {}
file_extensions = {
"*": "All Files",
"py": "Python Scripts",
"txt": "Text Files",
"log": "Log Files",
"fsxml": "Dabo FieldSpec Files",
"cnxml": "Dabo Connection Files",
"rfxml": "Dabo Report Format Files",
"cdxml": "Dabo Class Designer Files",
"mnxml": "Dabo Menu Designer Files",
"pdf": "PDF Files",
"js": "Javascript Files",
"html" : "HTML Files",
"xml" : "XML Files",
"jpg" : "JPEG Images",
"jpeg" : "JPEG Images",
"gif" : "GIF Images",
"tif" : "TIFF Images",
"tiff" : "TIFF Images",
"png" : "PNG Images",
"ico" : "Icon Images",
"bmp" : "Bitmap Images",
"sh": "Shell Scripts",
"zip": "ZIP Files",
"tar": "tar Archives",
"gz": "gzipped Files",
"tgz": "gzipped tar Archives",
"mov": "QuickTime Movies",
"wmv": "Windows Media Videos",
"mpg": "MPEG Videos",
"mpeg": "MPEG Videos",
"mp3": "mp3 Audio Files",
}
# For file-based data backends such as SQLite, do we allow creating a connection to
# a non-existent file, which SQLite will then create?
createDbFiles = False
# URL of the Web Update server
webupdate_urlbase = "http://daboserver.com/webupdate"
# Logging settings
mainLogQualName = "dabo.mainLog"
# Set the main log file to None initially
mainLogFile = None
mainLogConsoleLevel = logging.ERROR
mainLogFileLevel = logging.ERROR
mainLogDateFormat = "%Y-%m-%d %H:%M:%S"
consoleFormat = fileFormat = "%(asctime)s - %(levelname)s - %(message)s"
maxLogFileSize = 5242880 # 5 MB
dbLogLevel = logging.DEBUG
dbLogQualName = "dabo.dbActivityLog"
# Set the db file to None initially
dbLogFile = None
dbLogConsoleLevel = logging.ERROR
dbLogFileLevel = logging.DEBUG
dbLogDateFormat = "%Y-%m-%d %H:%M:%S"
dbConsoleFormat = dbFileFormat = "%(asctime)s - %(levelname)s - %(message)s"
dbMaxLogFileSize = 5242880 # 5 MB
# Report fonts configuration.
reportTTFontFilePath = None
reportTTFontFileMap = {}
# Determines if we import the debugger into the dabo namespace
importDebugger = True
# Do we save the current call stack to dabo.ui.lastCallAfterStack in every
# callAfter() call?
saveCallAfterStack = False
# When set to True, data controls bound to a dBizobj source are automatically disabled
# if the related dataset is empty (RowCount = 0), to prevent user interaction.
autoDisableDataControls = False
dTextBox_NumericBlankToZero = False
# When set to True, dTextBox controls bound to a dBizobj source take their TextLength
# property settings from it.
dTextBox_DeriveTextLengthFromSource = False
# When we copy values from a grid, we need to define the following values for the copied text:
copyValueSeparator = "\t"
copyStringSeparator = '"'
copyLineSeparator = "\n"
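# Illustrative sketch (not part of Dabo itself): with the defaults above, a
# hypothetical 2x2 grid selection [["a", "b"], ["c", "d"]] would join each cell
# as copyStringSeparator + value + copyStringSeparator, the cells in a row with
# copyValueSeparator, and the rows with copyLineSeparator, yielding the
# clipboard text '"a"\t"b"\n"c"\t"d"'.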
# Turn to False for better 'import dabo' performance from inside web apps, for example.
implicitImports = True
# Setting to determine if we call dLocalize.install("dabo") when dabo is imported.
localizeDabo = True
# When field values are float, and DataStructure isn't explicit, do we convert float
# values to Decimal automatically?
convertFloatToDecimal = True
### Settings - end
# Make sure that the current directory is in the sys.path
sys.path.append(os.getcwd())
# Do not copy/paste anything below this line into settings_override.py.
try:
from settings_override import *
except ImportError:
pass | PypiClean |
/Metage2Metabo-1.5.4.tar.gz/Metage2Metabo-1.5.4/metage2metabo/sbml_management.py |
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>
import libsbml
import logging
import os
import sys
from libsbml import SBMLReader, writeSBMLToFile, SBMLDocument
from multiprocessing import Pool
from metage2metabo import utils
from padmet.utils.connection import pgdb_to_padmet, sbmlGenerator
from padmet.utils.sbmlPlugin import convert_from_coded_id
logger = logging.getLogger(__name__)
def get_compounds(sbml_file):
"""Get compound from sbml
Args:
sbml_file (str): SBML file
Returns:
        list: compounds
"""
reader = SBMLReader()
document = reader.readSBML(sbml_file)
model = document.getModel()
if model is None:
logger.critical('SBML file "' + sbml_file + '" not well formatted. Is this file a SBML? Does it contains <model></model> tags?')
sys.exit(1)
compounds = [compound.id for compound in model.getListOfSpecies()]
return compounds
def compare_seeds_and_targets(seedfile, targetfile):
"""Returns the intersection of the seeds and the targets
Args:
seedfile (str): path to seeds SBML file
targetfile (str): path to targets SBML file
Returns:
set: intersection of seeds and targets
"""
seeds = set(get_compounds(seedfile))
targets = set(get_compounds(targetfile))
return seeds.intersection(targets)
def create_species_sbml(metabolites, outputfile):
"""Create a SBML files with a list of species containing metabolites of the input set
Args:
metabolites (set): set of metabolites
outputfile (str): SBML file to be written
"""
document = libsbml.SBMLDocument(2, 1)
model = document.createModel("metabolites")
for compound in metabolites:
compound = compound.strip('"')
name, stype, comp = convert_from_coded_id(compound)
s = model.createSpecies()
sbmlGenerator.check(s, 'create species')
sbmlGenerator.check(s.setId(compound), 'set species id')
# Add name and compartment if found by padmet
if name is not None:
sbmlGenerator.check(s.setName(name), 'set species name')
elif name is None:
logger.warning("No name for " + compound)
if comp is not None:
sbmlGenerator.check(s.setCompartment(comp), 'set species compartment')
elif comp is None:
logger.warning("No compartment for " + compound)
libsbml.writeSBMLToFile(document, outputfile)
def run_pgdb_to_sbml(species_multiprocess_data):
"""Turn PGDBs into SBML2 using multi-processing.
Args:
species_multiprocess_data (list): pathname to species pgdb dir, pathname to species sbml file
Returns:
sbml_check (bool): Check if sbml file exists
"""
species_pgdb_dir = species_multiprocess_data[0]
species_sbml_file = species_multiprocess_data[1]
sbml_level = species_multiprocess_data[2]
noorphan_bool = species_multiprocess_data[3]
padmet_file_dir = species_multiprocess_data[4]
padmet = pgdb_to_padmet.from_pgdb_to_padmet(
pgdb_folder=species_pgdb_dir,
extract_gene=True,
no_orphan=noorphan_bool)
if padmet_file_dir:
padmet.generateFile(padmet_file_dir)
sbmlGenerator.padmet_to_sbml(padmet, species_sbml_file, sbml_lvl=sbml_level, verbose=False)
sbml_check = utils.is_valid_path(species_sbml_file)
return sbml_check
def pgdb_to_sbml(pgdb_dir, output_dir, noorphan_bool, padmet_bool, sbml_level, cpu):
"""Turn Pathway Tools PGDBs into SBML2 files using Padmet
Args:
pgdb_dir (str): PGDB directory
output_dir (str): results directory
noorphan_bool (bool): ignores orphan reactions if True
padmet_bool (bool): creates padmet files if True
sbml_level (int): SBML level
cpu (int): number of CPU for multi-process
Returns:
sbml_dir (str): SBML directory if successful
"""
logger.info('######### Creating SBML files #########')
sbml_dir = os.path.join(output_dir, 'sbml')
padmet_dir = os.path.join(output_dir, 'padmet')
if padmet_bool:
if not utils.is_valid_dir(padmet_dir):
logger.critical('Impossible to access/create output directory')
sys.exit(1)
if not utils.is_valid_dir(sbml_dir):
logger.critical('Impossible to access/create output directory')
sys.exit(1)
pgdb_to_sbml_pool = Pool(processes=cpu)
multiprocess_data = []
for species in os.listdir(pgdb_dir):
pgdb_species_path = os.path.join(pgdb_dir, species)
sbml_species_path = os.path.join(sbml_dir, species + '.sbml')
padmet_species_path = os.path.join(padmet_dir, species + '.padmet')
if padmet_bool:
multiprocess_data.append(
[pgdb_species_path,
sbml_species_path,
sbml_level, noorphan_bool,
padmet_species_path])
else:
multiprocess_data.append(
[pgdb_species_path,
sbml_species_path,
sbml_level, noorphan_bool,
padmet_bool])
sbml_checks = pgdb_to_sbml_pool.map(run_pgdb_to_sbml, multiprocess_data)
pgdb_to_sbml_pool.close()
pgdb_to_sbml_pool.join()
if all(sbml_checks):
return sbml_dir
else:
logger.critical('Error during padmet/sbml creation.')
sys.exit(1) | PypiClean |
/OBITools-1.2.13.tar.gz/OBITools-1.2.13/src/obitools/graph/__init__.py | import sys
from obitools.utils import progressBar
class Indexer(dict):
'''
    Allows managing the conversion between an arbitrary hashable Python
    value and a unique integer key.
'''
def __init__(self):
self.__max=0
self.__reverse=[]
def getLabel(self,index):
'''
        Return the Python value associated with an integer index.
:param index: an index value
:type index: int
:raises: IndexError if the index is not used in this
Indexer instance
'''
return self.__reverse[index]
def getIndex(self,key,strict=False):
'''
        Return the index associated with a **key** in the indexer. Two
modes are available :
- strict mode :
if the key is not known by the :py:class:`Indexer` instance
a :py:exc:`KeyError` exception is raised.
- non strict mode :
                in this mode if the requested **key** is absent, it is added to
the :py:class:`Indexer` instance and the new index is returned
:param key: the requested key
:type key: a hashable python value
        :param strict: select the lookup mode
:type strict: bool
:return: the index corresponding to the key
:rtype: int
        :raises: - :py:exc:`KeyError` in strict mode if key is absent
                   from the :py:class:`Indexer` instance
- :py:exc:`TypeError` if key is not an hashable value.
'''
if dict.__contains__(self,key):
return dict.__getitem__(self,key)
elif strict:
raise KeyError,key
else:
value = self.__max
self[key]= value
self.__reverse.append(key)
self.__max+=1
return value
def __getitem__(self,key):
'''
        Implement the [] operator to emulate the standard dictionary
        behaviour on :py:class:`Indexer` and return the integer key
        associated with a python value.
        Actually this method calls the :py:meth:`getIndex` method in
        non strict mode so it only raises a :py:exc:`TypeError`
        if key is not an hashable value.
:param key: the value to index
:type key: an hashable python value
        :return: a unique integer value associated with the key
:rtype: int
:raises: :py:exc:`TypeError` if **key** is not an hashable value.
'''
return self.getIndex(key)
def __equal__(self,index):
'''
        Implement the equality operator **==** for comparing two :py:class:`Indexer` instances.
        Two :py:class:`Indexer` instances are equal only if they are physically
the same instance
:param index: the second Indexer
:type index: an :py:class:`Indexer` instance
        :return: True if the two :py:class:`Indexer` instances are the same
:rtype: bool
'''
return id(self)==id(index)
class Graph(object):
'''
    Class used to represent directed or undirected graphs.
.. warning::
Only one edge can connect two nodes in a given direction.
.. warning::
        Specifying nodes through their index speeds up your code, but as no check
        is done on the index value, it may result in inconsistency. So prefer the
        use of a node label to specify a node.
'''
def __init__(self,label='G',directed=False,indexer=None,nodes=None,edges=None):
'''
:param label: Graph name, set to 'G' by default
:type label: str
        :param directed: True for a directed graph, set to False by default
:type directed: boolean
        :param indexer: node label indexer. This allows defining several graphs
sharing the same indexer (see : :py:meth:`newEmpty`)
:type indexer: :py:class:`Indexer`
:param nodes: set of nodes to add to the graph
:type nodes: iterable value
:param edges: set of edges to add to the graph
:type edges: iterable value
'''
self._directed=directed
if indexer is None:
indexer = Indexer()
self._index = indexer
self._node = {}
self._node_attrs = {}
self._edge_attrs = {}
self._label=label
def newEmpty(self):
"""
Build a new empty graph using the same :py:class:`Indexer` instance.
        This allows two graphs to share their vertices through their indices.
"""
n = Graph(self._label+"_compact",self._directed,self._index)
return n
def addNode(self,node=None,index=None,**data):
'''
Add a new node or update an existing one.
:param node: the new node label or the label of an existing node
for updating it.
:type node: an hashable python value
:param index: the index of an existing node for updating it.
:type index: int
:return: the index of the node
:rtype: int
        :raises: :py:exc:`IndexError` if index is not **None** and
                 corresponds to an unused index in this graph.
'''
if index is None:
index = self._index[node]
else:
if index >= len(self._index):
raise IndexError,"This index is not used in this graph..."
if index not in self._node:
self._node[index]=set()
if data:
if index in self._node_attrs:
self._node_attrs[index].update(data)
else:
self._node_attrs[index]=dict(data)
return index
def __contains__(self,node):
try:
index = self._index.getIndex(node,strict=True)
r = index in self._node
except KeyError:
r=False
return r
def getNode(self,node=None,index=None):
"""
:param node: a node label.
:type node: an hashable python value
:param index: the index of an existing node.
:type index: int
        .. note:: Index values take precedence over node labels.
        :return: the looked-for node
        :rtype: :py:class:`Node`
        :raises: :py:exc:`IndexError` if the specified node label
                 corresponds to a non-existing node.
.. warning:: no check on index value
"""
if index is None:
index = self._index.getIndex(node, True)
return Node(index,self)
def getBestNode(self,estimator):
'''
Select the node maximizing the estimator function
:param estimator: the function to maximize
:type estimator: a function returning a numerical value and accepting one
argument of type :py:class:`Node`
:return: the best node
:rtype: py:class:`Node`
'''
bestScore=0
best=None
for n in self:
score = estimator(n)
if best is None or score > bestScore:
bestScore = score
best=n
return best
def delNode(self,node=None,index=None):
"""
Delete a node from a graph and all associated edges.
:param node: a node label.
:type node: an hashable python value
:param index: the index of an existing node.
:type index: int
        .. note:: Index values take precedence over node labels.
        :raises: :py:exc:`IndexError` if the specified node label
                 corresponds to a non-existing node.
.. warning:: no check on index value
"""
if index is None:
index = self._index[node]
#
# Remove edges pointing to the node
#
for n in self._node:
if n!=index:
e = self._node[n]
if index in e:
if (n,index) in self._edge_attrs:
del self._edge_attrs[(n,index)]
e.remove(index)
#
# Remove edges starting from the node
#
e = self._node[index]
for n in e:
if (index,n) in self._edge_attrs:
del self._edge_attrs[(index,n)]
#
# Remove the node by itself
#
del self._node[index]
#
# Remove attributes associated to the node
#
if index in self._node_attrs:
del self._node_attrs[index]
def hasEdge(self,node1=None,node2=None,index1=None,index2=None,**data):
if index1 is None:
index1 = self._index.getIndex(node1, True)
else:
if index1 >= len(self._index):
raise IndexError,"index1 = %d not in the graph" % index1
if index2 is None:
index2 = self._index.getIndex(node2, True)
else:
if index2 >= len(self._index):
                raise IndexError,"index2 = %d not in the graph" % index2
rep = index2 in self._node[index1]
if not self._directed:
rep = rep or (index1 in self._node[index2])
return rep
def addEdge(self,node1=None,node2=None,index1=None,index2=None,**data):
'''
Create a new edge in the graph between both the specified nodes.
.. note:: Nodes can be specified using their label or their index in the graph
if both values are indicated the index is used.
:param node1: The first vertex label
:type node1: an hashable python value
:param node2: The second vertex label
:type node2: an hashable python value
:param index1: The first vertex index
:type index1: int
:param index2: The second vertex index
:type index2: int
        :raises: :py:exc:`IndexError` if one of the specified node labels
                 corresponds to a non-existing node.
.. warning:: no check on index value
'''
index1=self.addNode(node1, index1)
index2=self.addNode(node2, index2)
self._node[index1].add(index2)
if not self._directed:
self._node[index2].add(index1)
if data:
if (index1,index2) not in self._edge_attrs:
data =dict(data)
self._edge_attrs[(index1,index2)]=data
if not self._directed:
self._edge_attrs[(index2,index1)]=data
else:
self._edge_attrs[(index1,index2)].update(data)
return (index1,index2)
def getEdge(self,node1=None,node2=None,index1=None,index2=None):
'''
Extract the :py:class:`Edge` instance linking two nodes of the graph.
.. note:: Nodes can be specified using their label or their index in the graph
if both values are indicated the index is used.
:param node1: The first vertex label
:type node1: an hashable python value
:param node2: The second vertex label
:type node2: an hashable python value
:param index1: The first vertex index
:type index1: int
:param index2: The second vertex index
:type index2: int
        :raises: :py:exc:`IndexError` if one of the specified node labels
                 corresponds to a non-existing node.
.. warning:: no check on index value
'''
node1=self.getNode(node1, index1)
node2=self.getNode(node2, index2)
return Edge(node1,node2)
def delEdge(self,node1=None,node2=None,index1=None,index2=None):
"""
Delete the edge linking node 1 to node 2.
.. note:: Nodes can be specified using their label or their index in the graph
if both values are indicated the index is used.
:param node1: The first vertex label
:type node1: an hashable python value
:param node2: The second vertex label
:type node2: an hashable python value
:param index1: The first vertex index
:type index1: int
:param index2: The second vertex index
:type index2: int
:raises: :py:exc:`IndexError` if one or both of the specified node labels
correspond to a non-existing node.
.. warning:: no check on index value
"""
if index1 is None:
index1 = self._index[node1]
if index2 is None:
index2 = self._index[node2]
if index1 in self._node and index2 in self._node[index1]:
self._node[index1].remove(index2)
if (index1,index2) in self._edge_attrs:
del self._edge_attrs[(index1,index2)]
if not self._directed:
self._node[index2].remove(index1)
if (index2,index1) in self._edge_attrs:
del self._edge_attrs[(index2,index1)]
def edgeIterator(self,predicate=None):
"""
Iterate through a set of selected edges.
:param predicate: a function allowing edge selection. Default value
is **None** and indicates that all edges are selected.
:type predicate: a function returning a boolean value
and accepting one argument of class :py:class:`Edge`
:return: an iterator over the selected edges
:rtype: iterator over :py:class:`Edge` instances
.. seealso::
function :py:func:`selectEdgeAttributeFactory` for simple predicate.
"""
for n1 in self._node:
for n2 in self._node[n1]:
if self._directed or n1 <= n2:
e = self.getEdge(index1=n1, index2=n2)
if predicate is None or predicate(e):
yield e
def nodeIterator(self,predicate=None):
"""
Iterate through a set of selected vertices.
:param predicate: a function allowing node selection. Default value
is **None** and indicates that all nodes are selected.
:type predicate: a function returning a boolean value
and accepting one argument of class :py:class:`Node`
:return: an iterator over the selected nodes.
:rtype: iterator over :py:class:`Node` instances
"""
for n in self._node:
node = self.getNode(index=n)
if predicate is None or predicate(node):
yield node
def nodeIndexIterator(self,predicate=None):
"""
Iterate through the indexes of a set of selected vertices.
:param predicate: a function allowing node selection. Default value
is **None** and indicates that all nodes are selected.
:type predicate: a function returning a boolean value
and accepting one argument of class :py:class:`Node`
:return: an iterator over the selected node indexes.
:rtype: iterator over `int`
"""
for n in self._node:
node = self.getNode(index=n)
if predicate is None or predicate(node):
yield n
def neighbourIndexSet(self,node=None,index=None):
if index is None:
index=self.getNode(node).index
return self._node[index]
def edgeCount(self):
n = reduce(lambda x,y:x+y, (len(z) for z in self._node.itervalues()),0)
if not self._directed:
n=n/2
return n
def subgraph(self,nodes,name='G'):
sub = Graph(name,self._directed,self._index)
if not isinstance(nodes, set):
nodes = set(nodes)
for n in nodes:
sub._node[n]=nodes & self._node[n]
if n in self._node_attrs:
sub._node_attrs[n]=dict(self._node_attrs[n])
for n2 in sub._node[n]:
if not self._directed:
if n <= n2:
if (n,n2) in self._edge_attrs:
data=dict(self._edge_attrs[(n,n2)])
sub._edge_attrs[(n,n2)]=data
sub._edge_attrs[(n2,n)]=data
else:
if (n,n2) in self._edge_attrs:
data=dict(self._edge_attrs[(n,n2)])
sub._edge_attrs[(n,n2)]=data
return sub
def __len__(self):
return len(self._node)
def __getitem__(self,key):
return self.getNode(node=key)
def __delitem__(self,key):
self.delNode(node=key)
def __iter__(self):
return self.nodeIterator()
def dot(self,nodePredicat=None,edgePredicat=None):
def combinedPredicat(edge):
graph = edge.graph
n1 = graph.getNode(edge.node1)
n2 = graph.getNode(edge.node2)
return nodePredicat(n1) and nodePredicat(n2) and edgePredicat(edge)
if edgePredicat is not None and nodePredicat is not None:
edgePredicat = combinedPredicat
if self._directed:
kw ='digraph'
else:
kw='graph'
nodes = "\n ".join([str(x) for x in self.nodeIterator(nodePredicat)])
edges = "\n ".join([str(x) for x in self.edgeIterator(edgePredicat)])
return "%s %s {\n %s\n\n %s\n}" % (kw,self._label,nodes,edges)
def __str__(self):
return self.dot()
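The `Graph` class above keeps the topology as a mapping from node index to the set of neighbour indexes (`self._node`), storing each undirected edge in both directions — which is why `edgeCount` halves the sum. A minimal, self-contained sketch of that representation (generic helper names chosen for illustration, not the obitools API):

```python
# Sketch of the adjacency-set representation used by Graph:
# each node index maps to the set of its neighbour indexes, and an
# undirected edge is recorded in both directions.

def add_edge(adjacency, n1, n2, directed=False):
    """Record an edge n1 -> n2 (and n2 -> n1 when undirected)."""
    adjacency.setdefault(n1, set()).add(n2)
    adjacency.setdefault(n2, set())
    if not directed:
        adjacency[n2].add(n1)
    return adjacency

def edge_count(adjacency, directed=False):
    """Count edges; undirected edges are stored twice, so halve the sum."""
    n = sum(len(neighbours) for neighbours in adjacency.values())
    return n if directed else n // 2

adj = {}
add_edge(adj, 0, 1)
add_edge(adj, 1, 2)
print(edge_count(adj))  # -> 2
```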
class Node(object):
"""
Class used for representing one node or vertex in a graph
"""
def __init__(self,index,graph):
'''
.. warning::
:py:class:`Node` constructor is usually called through the :py:class:`Graph` methods
:param index: Index of the node in the graph
:type index: int
:param graph: graph instance owning the node
:type graph: :py:class:`obitools.graph.Graph`
'''
self.index = index
self.__graph = graph
def getGraph(self):
'''
return graph owning this node.
:rtype: :py:class:`obitools.graph.Graph`
'''
return self.__graph
def getLabel(self):
'''
return label associated to this node.
'''
return self.__graph._index.getLabel(self.index)
def has_key(self,key):
'''
Test if the node instance has a property named <key>.
:param key: the name of a property
:type key: str
:return: True if the node has a property named <key>
:rtype: bool
'''
if self.index in self.__graph._node_attrs:
return key in self.__graph._node_attrs[self.index]
else:
return False
def neighbourIterator(self,nodePredicat=None,edgePredicat=None):
'''
iterate through the nodes directly connected to
this node.
:param nodePredicat: a function accepting one node as parameter
and returning **True** if this node must be
returned by the iterator.
:type nodePredicat: function
:param edgePredicat: a function accepting one edge as parameter
and returning True if the edge linking self and
the current must be considered.
:type edgePredicat: function
:rtype: iterator on Node instances
'''
for n in self.neighbourIndexIterator(nodePredicat, edgePredicat):
node = self.graph.getNode(index=n)
yield node
def neighbourIndexSet(self):
'''
Return the set of node indexes directly connected
to this node.
.. warning::
do not change this set unless you know
exactly what you do.
:rtype: set of int
'''
return self.__graph._node[self.index]
def neighbourIndexIterator(self,nodePredicat=None,edgePredicat=None):
'''
iterate through the node indexes directly connected to
this node.
:param nodePredicat: a function accepting one node as parameter
and returning True if this node must be
returned by the iterator.
:type nodePredicat: function
:param edgePredicat: a function accepting one edge as parameter
and returning True if the edge linking self and
the current must be considered.
:type edgePredicat: function
:rtype: iterator on int
'''
for n in self.neighbourIndexSet():
if nodePredicat is None or nodePredicat(self.__graph.getNode(index=n)):
if edgePredicat is None or edgePredicat(self.__graph.getEdge(index1=self.index,index2=n)):
yield n
def degree(self,nodeIndexes=None):
'''
Return the count of edges linking this node to the
set of nodes described by their indexes in nodeIndexes.
:param nodeIndexes: set of node indexes.
if set to None, all nodes of the
graph are taken into account.
Set to None by default.
:type nodeIndexes: set of int
:rtype: int
'''
if nodeIndexes is None:
return len(self.__graph._node[self.index])
else:
return len(self.__graph._node[self.index] & nodeIndexes)
def componentIndexSet(self,nodePredicat=None,edgePredicat=None):
'''
Return the set of node indexes in the same connected component.
:param nodePredicat: a function accepting one node as parameter
and returning True if this node must be
returned by the iterator.
:type nodePredicat: function
:param edgePredicat: a function accepting one edge as parameter
and returning True if the edge linking self and
the current must be considered.
:type edgePredicat: function
:rtype: set of int
'''
cc=set([self.index])
added = set(x for x in self.neighbourIndexIterator(nodePredicat, edgePredicat))
while added:
cc |= added
added = reduce(lambda x,y : x | y,
(set(z for z in self.graph.getNode(index=c).neighbourIndexIterator(nodePredicat, edgePredicat))
for c in added),
set())
added -= cc
return cc
def componentIterator(self,nodePredicat=None,edgePredicat=None):
'''
Iterate through the nodes in the same connected
component.
:rtype: iterator on :py:class:`Node` instance
'''
for c in self.componentIndexSet(nodePredicat, edgePredicat):
yield self.graph.getNode(c)
def shortestPathIterator(self,nodes=None):
'''
Iterate through the shortest paths sourcing
from this node. If nodes is not None, iterate
only over paths linking this node to one of the nodes listed in
nodes.
:param nodes: set of node index
:type nodes: iterable on int
:return: an iterator on list of int describing path
:rtype: iterator on list of int
'''
if nodes is not None:
nodes = set(nodes)
Q=[(self.index,-1)]
gray = set([self.index])
paths = {}
while Q and (nodes is None or nodes):
u,p = Q.pop(0)  # FIFO order (breadth-first) is required for shortest paths
paths[u]=p
next = self.graph._node[u] - gray
gray|=next
Q.extend((x,u) for x in next)
if nodes is None or u in nodes:
if nodes:
nodes.remove(u)
path = [u]
while p >= 0:
path.append(p)
p = paths[p]
path.reverse()
yield path
def shortestPathTo(self,node=None,index=None):
'''
return one of the shortest paths linking this
node to the specified node.
:param node: a node label or None
:param index: a node index or None. the parameter index
has a priority on the parameter node.
:type index: int
:return: list of node index corresponding to the path or None
if no path exists.
:rtype: list of int or None
'''
if index is None:
index=self.graph.getNode(node).index
for p in self.shortestPathIterator([index]):
return p
def __getitem__(self,key):
'''
return the value of the <key> property of this node
:param key: the name of a property
:type key: str
'''
return self.__graph._node_attrs.get(self.index,{})[key]
def __setitem__(self,key,value):
'''
set the value of a node property. If the property doesn't
already exist, a new property is added to this node.
:param key: the name of a property
:type key: str
:param value: the value of the property
.. seealso::
:py:meth:`Node.__getitem__`
'''
if self.index in self.__graph._node_attrs:
data = self.__graph._node_attrs[self.index]
data[key]=value
else:
self.graph._node_attrs[self.index]={key:value}
def __delitem__(self,key):
data = self.__graph._node_attrs[self.index]
del data[key]
def __len__(self):
'''
Count the neighbours of this node
:rtype: int
.. seealso::
:py:meth:`Node.degree`
'''
return len(self.__graph._node[self.index])
def __iter__(self):
'''
iterate through the neighbours of this node
:rtype: iterator in :py:class:`Node` instances
.. seealso::
:py:meth:`Node.neighbourIterator`
'''
return self.neighbourIterator()
def __contains__(self,key):
return self.has_key(key)
def __str__(self):
if self.index in self.__graph._node_attrs:
keys = " ".join(['%s="%s"' % (x[0],str(x[1]).replace('"','\\"').replace('\n','\\n'))
for x in self.__graph._node_attrs[self.index].iteritems()]
)
else:
keys=''
return '%d [label="%s" %s]' % (self.index,
str(self.label).replace('"','\\"').replace('\n','\\n'),
keys)
def keys(self):
if self.index in self.__graph._node_attrs:
k = self.__graph._node_attrs[self.index].keys()
else:
k=[]
return k
label = property(getLabel, None, None, "Label of the node")
graph = property(getGraph, None, None, "Graph owning this node")
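`Node.componentIndexSet` above grows a connected component by repeatedly adding the neighbours of the newly reached nodes until the frontier is empty. The same fixed-point loop over a plain adjacency dict, as a standalone sketch (hypothetical helper, not the obitools API):

```python
def component(adjacency, start):
    """Return the set of node indexes reachable from `start`."""
    seen = {start}
    frontier = set(adjacency.get(start, ()))
    while frontier:
        seen |= frontier
        # neighbours of the newly added nodes, minus anything already seen
        frontier = set().union(*(adjacency.get(n, set()) for n in frontier)) - seen
    return seen

adj = {0: {1}, 1: {0, 2}, 2: {1}, 3: {4}, 4: {3}}
print(component(adj, 0))  # -> {0, 1, 2}
```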
class Edge(object):
"""
Class used for representing one edge of a graph
"""
def __init__(self,node1,node2):
'''
.. warning::
:py:class:`Edge` constructor is usually called through the :py:class:`Graph` methods
:param node1: First node linked by the edge
:type node1: :py:class:`Node`
:param node2: Second node linked by the edge
:type node2: :py:class:`Node`
'''
self.node1 = node1
self.node2 = node2
def getGraph(self):
"""
Return the :py:class:`Graph` instance owning this edge.
"""
return self.node1.graph
def has_key(self,key):
'''
Test if the :py:class:`Edge` instance has a property named **key**.
:param key: the name of a property
:type key: str
:return: True if the edge has a property named <key>
:rtype: bool
'''
if (self.node1.index,self.node2.index) in self.graph._edge_attrs:
return key in self.graph._edge_attrs[(self.node1.index,self.node2.index)]
else:
return False
def getDirected(self):
return self.node1.graph._directed
def __getitem__(self,key):
return self.graph._edge_attrs.get((self.node1.index,self.node2.index),{})[key]
def __setitem__(self,key,value):
e = (self.node1.index,self.node2.index)
if e in self.graph._edge_attrs:
data = self.graph._edge_attrs[e]
data[key]=value
else:
self.graph._edge_attrs[e]={key:value}
def __str__(self):
e = (self.node1.index,self.node2.index)
if e in self.graph._edge_attrs:
keys = "[%s]" % " ".join(['%s="%s"' % (x[0],str(x[1]).replace('"','\\"'))
for x in self.graph._edge_attrs[e].iteritems()]
)
else:
keys = ""
if self.directed:
link='->'
else:
link='--'
return "%d %s %d %s" % (self.node1.index,link,self.node2.index,keys)
def __contains__(self,key):
return self.has_key(key)
graph = property(getGraph, None, None, "Graph owning this edge")
directed = property(getDirected, None, None, "True if the graph owning this edge is directed")
class DiGraph(Graph):
"""
:py:class:`DiGraph` is a specialisation of the :py:class:`Graph` class
dedicated to directed graph representation.
.. seealso::
:py:class:`UndirectedGraph`
"""
def __init__(self,label='G',indexer=None,nodes=None,edges=None):
'''
:param label: Graph name, set to 'G' by default
:type label: str
:param indexer: node label indexer
:type indexer: Indexer instance
:param nodes: set of nodes to add to the graph
:type nodes: iterable value
:param edges: set of edges to add to the graph
:type edges: iterable value
'''
Graph.__init__(self, label, True, indexer, nodes, edges)
class UndirectedGraph(Graph):
"""
:py:class:`UndirectedGraph` is a specialisation of the :py:class:`Graph` class
dedicated to undirected graph representation.
.. seealso::
:py:class:`DiGraph`
"""
def __init__(self,label='G',indexer=None,nodes=None,edges=None):
'''
:param label: Graph name, set to 'G' by default
:type label: str
:param indexer: node label indexer
:type indexer: Indexer instance
:param nodes: set of nodes to add to the graph
:type nodes: iterable value
:param edges: set of edges to add to the graph
:type edges: iterable value
'''
Graph.__init__(self, label, False, indexer, nodes, edges)
def selectEdgeAttributeFactory(attribut,value):
"""
This function helps in building predicate functions usable for selecting edges
in the following :py:class:`Graph` methods:
- :py:meth:`Graph.edgeIterator`
"""
def selectEdge(e):
return attribut in e and e[attribut]==value
return selectEdge | PypiClean |
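`selectEdgeAttributeFactory` is a closure factory: it freezes an attribute name and value into a predicate that can later be handed to an iterator. The pattern in isolation, as a generic sketch:

```python
def attribute_equals(attr, value):
    """Build a predicate testing mapping-like objects for attr == value."""
    def predicate(item):
        # `in` guards against missing attributes before comparing
        return attr in item and item[attr] == value
    return predicate

is_red = attribute_equals('color', 'red')
print(is_red({'color': 'red'}))   # -> True
print(is_red({'color': 'blue'}))  # -> False
print(is_red({}))                 # -> False
```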
/Finance-JindowinData-0.0.7.tar.gz/Finance-JindowinData-0.0.7/jdwdata/SurfaceAPI/stock/risk_model.py
import abc
from typing import List
from sqlalchemy import select, and_, join
from sqlalchemy.ext.automap import automap_base
import pandas as pd
from jdwdata.SurfaceAPI.engine import FetchKDEngine
from jdwdata.SurfaceAPI.universe import Universe
from jdwdata.DataAPI.db.kd.kd_engine import risk_styles, industry_styles
macro_styles = ['COUNTRY']
total_risk_factors = risk_styles + industry_styles + macro_styles
class BaseRiskModel(metaclass=abc.ABCMeta):
def get_risk_profile(self):
pass
class FactorRiskModel(BaseRiskModel):
def __init__(self, factor_cov: pd.DataFrame, risk_exp: pd.DataFrame,
idsync: pd.Series):
self.factor_cov = factor_cov
self.idsync = idsync
self.codes = self.idsync.index.tolist()
self.factor_names = sorted(self.factor_cov.index)
self.risk_exp = risk_exp.loc[self.codes, self.factor_names]
self.factor_cov = self.factor_cov.loc[self.factor_names,
self.factor_names]
self.idsync = self.idsync[self.codes]
def get_risk_exp(self, codes: List[int] = None):
if codes:
return self.risk_exp.loc[codes, :].values
else:
return self.risk_exp.values
def get_factor_cov(self):
return self.factor_cov.values
def get_idsync(self, codes: List[int] = None):
if codes:
return self.idsync[codes].values
else:
return self.idsync.values
def get_risk_profile(self, codes: List[int] = None):
return dict(cov=None,
factor_cov=self.get_factor_cov(),
factor_loading=self.get_risk_exp(codes),
idsync=self.get_idsync(codes))
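`FactorRiskModel` holds the three pieces of a standard factor risk decomposition: factor covariance `F`, exposures `B`, and idiosyncratic variances `D`, so the asset covariance is `B F Bᵀ + diag(D)`. A small pure-Python sketch of portfolio variance under that model (toy numbers, no scaling conventions from the class above):

```python
def portfolio_variance(weights, exposures, factor_cov, idsync):
    """Compute w' (B F B' + diag(D)) w with plain lists.

    weights:    list of n asset weights
    exposures:  n x k exposure matrix B (list of rows)
    factor_cov: k x k factor covariance matrix F
    idsync:     list of n idiosyncratic variances (diagonal of D)
    """
    k = len(factor_cov)
    # factor exposure of the portfolio: f = B' w  (length k)
    f = [sum(w * row[j] for w, row in zip(weights, exposures)) for j in range(k)]
    systematic = sum(f[i] * factor_cov[i][j] * f[j]
                     for i in range(k) for j in range(k))
    specific = sum(w * w * d for w, d in zip(weights, idsync))
    return systematic + specific

B = [[1.0, 0.0], [0.5, 0.5]]     # two assets, two factors
F = [[0.04, 0.0], [0.0, 0.01]]
D = [0.02, 0.03]
w = [0.5, 0.5]
print(portfolio_variance(w, B, F, D))
```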
class RiskModel(object):
def __init__(self, risk_model=None):
self._engine = FetchKDEngine()
self._risk_model = risk_model if risk_model is not None else 'day'
def map_risk_model_table(self):
risk_cov_table = self._engine.table_model('risk_cov_{0}'.format(
self._risk_model))
special_risk_table = self._engine.table_model(
'specific_risk_{0}'.format(self._risk_model))
return risk_cov_table, special_risk_table
def fetch_risk(self, universe, start_date, end_date):
_, special_risk_table = self.map_risk_model_table()
RiskExposure = self._engine.table_model('risk_exposure')
risk_exposure_cols = [
RiskExposure.__table__.columns[f] for f in total_risk_factors
]
if isinstance(universe, Universe):
cond = universe._query_statements(start_date, end_date)
big_table = join(
RiskExposure, universe._table_model,
and_(
RiskExposure.trade_date ==
universe._table_model.trade_date,
RiskExposure.code == universe._table_model.code, cond))
big_table = join(
special_risk_table, big_table,
and_(RiskExposure.code == special_risk_table.code,
RiskExposure.trade_date == special_risk_table.trade_date,
RiskExposure.flag == 1, special_risk_table.flag == 1))
query = select([
RiskExposure.trade_date, RiskExposure.code,
special_risk_table.SRISK.label('srisk')
] + risk_exposure_cols).select_from(big_table).distinct()
else:
big_table = join(
RiskExposure, special_risk_table,
and_(RiskExposure.code == special_risk_table.code,
RiskExposure.trade_date == special_risk_table.trade_date))
query = select([
RiskExposure.trade_date, RiskExposure.code,
special_risk_table.SRISK.label('srisk')
] + risk_exposure_cols).select_from(big_table).where(
and_(RiskExposure.trade_date.between(start_date, end_date),
RiskExposure.code.in_(universe)))
risk_exp = pd.read_sql(query, self._engine.client()).sort_values(
['trade_date', 'code']).dropna()
return risk_exp
def universe_risk(self, universe, start_date, end_date):
risk_cov_table, special_risk_table = self.map_risk_model_table()
RiskExposure = self._engine.table_model('risk_exposure')
risk_exposure_cols = [
RiskExposure.__table__.columns[f] for f in total_risk_factors
]
cond = universe._query_statements(start_date, end_date)
big_table = join(
RiskExposure, universe._table_model,
and_(RiskExposure.trade_date == universe._table_model.trade_date,
RiskExposure.code == universe._table_model.code, cond))
big_table = join(
special_risk_table, big_table,
and_(RiskExposure.code == special_risk_table.code,
RiskExposure.trade_date == special_risk_table.trade_date,
RiskExposure.flag == 1, special_risk_table.flag == 1))
query = select(
[RiskExposure.trade_date,
RiskExposure.code,
special_risk_table.SRISK.label('srisk')] + risk_exposure_cols).select_from(big_table) \
.distinct()
risk_exp = pd.read_sql(query, self._engine.client()).sort_values(
['trade_date', 'code']).dropna()
return risk_exp
def codes_risk(self, codes, start_date, end_date):
_, special_risk_table = self.map_risk_model_table()
RiskExposure = self._engine.table_model('risk_exposure')
risk_exposure_cols = [
RiskExposure.__table__.columns[f] for f in total_risk_factors
]
big_table = join(
RiskExposure, special_risk_table,
and_(RiskExposure.code == special_risk_table.code,
RiskExposure.trade_date == special_risk_table.trade_date))
query = select([RiskExposure.trade_date, RiskExposure.code, special_risk_table.SRISK.label('srisk')] + risk_exposure_cols) \
.select_from(big_table).where(
and_(RiskExposure.trade_date.between(start_date,end_date),
RiskExposure.code.in_(codes)
))
risk_exp = pd.read_sql(query, self._engine.client()).sort_values(
['trade_date', 'code']).dropna()
return risk_exp
def fetch_cov(self, universe, start_date, end_date, model_type=None):
risk_cov_table, special_risk_table = self.map_risk_model_table()
cov_risk_cols = [
risk_cov_table.__table__.columns[f] for f in total_risk_factors
]
query = select([
risk_cov_table.trade_date, risk_cov_table.FactorID,
risk_cov_table.Factor
] + cov_risk_cols).where(
risk_cov_table.trade_date.between(start_date, end_date))
risk_cov = pd.read_sql(query, self._engine.client()).sort_values(
['trade_date', 'FactorID'])
RiskExposure = self._engine.table_model('risk_exposure')
risk_exposure_cols = [
RiskExposure.__table__.columns[f] for f in total_risk_factors
]
if isinstance(universe, Universe):
cond = universe._query_statements(start_date, end_date)
big_table = join(
RiskExposure, universe._table_model,
and_(
RiskExposure.trade_date ==
universe._table_model.trade_date,
RiskExposure.code == universe._table_model.code, cond))
big_table = join(
special_risk_table, big_table,
and_(RiskExposure.code == special_risk_table.code,
RiskExposure.trade_date == special_risk_table.trade_date,
RiskExposure.flag == 1, special_risk_table.flag == 1))
query = select([
RiskExposure.trade_date, RiskExposure.code,
special_risk_table.SRISK.label('srisk')
] + risk_exposure_cols).select_from(big_table).distinct()
else:
big_table = join(
RiskExposure, special_risk_table,
and_(RiskExposure.code == special_risk_table.code,
RiskExposure.trade_date == special_risk_table.trade_date))
query = select([
RiskExposure.trade_date, RiskExposure.code,
special_risk_table.SRISK.label('srisk')
] + risk_exposure_cols).select_from(big_table).where(
and_(RiskExposure.trade_date.between(start_date, end_date),
RiskExposure.code.in_(universe)))
risk_exp = pd.read_sql(query, self._engine.client()).sort_values(
['trade_date', 'code']).dropna()
if not model_type:
return risk_cov, risk_exp
elif model_type == 'factor':
new_risk_cov = risk_cov.set_index('Factor')
new_risk_exp = risk_exp.set_index('code')
risk_cov_groups = new_risk_cov.groupby('trade_date')
risk_exp_groups = new_risk_exp.groupby('trade_date')
models = {}
for ref_date, cov_g in risk_cov_groups:
exp_g = risk_exp_groups.get_group(ref_date)
factor_names = cov_g.index.tolist()
factor_cov = cov_g.loc[factor_names, factor_names] / 10000.
factor_loading = exp_g.loc[:, factor_names]
idsync = exp_g['srisk'] * exp_g['srisk'] / 10000
models[ref_date] = FactorRiskModel(factor_cov, factor_loading,
idsync)
return pd.Series(models), risk_cov, risk_exp
def universe_fetch(self, universe, start_date, end_date, model_type=None):
risk_cov_table, special_risk_table = self.map_risk_model_table()
cov_risk_cols = [
risk_cov_table.__table__.columns[f] for f in total_risk_factors
]
query = select([
risk_cov_table.trade_date, risk_cov_table.FactorID,
risk_cov_table.Factor
] + cov_risk_cols).where(
risk_cov_table.trade_date.between(start_date, end_date))
risk_cov = pd.read_sql(query, self._engine.client()).sort_values(
['trade_date', 'FactorID'])
RiskExposure = self._engine.table_model('risk_exposure')
risk_exposure_cols = [
RiskExposure.__table__.columns[f] for f in total_risk_factors
]
cond = universe._query_statements(start_date, end_date)
big_table = join(
RiskExposure, universe._table_model,
and_(RiskExposure.trade_date == universe._table_model.trade_date,
RiskExposure.code == universe._table_model.code, cond))
big_table = join(
special_risk_table, big_table,
and_(RiskExposure.code == special_risk_table.code,
RiskExposure.trade_date == special_risk_table.trade_date,
RiskExposure.flag == 1, special_risk_table.flag == 1))
query = select(
[RiskExposure.trade_date,
RiskExposure.code,
special_risk_table.SRISK.label('srisk')] + risk_exposure_cols).select_from(big_table) \
.distinct()
risk_exp = pd.read_sql(query, self._engine.client()).sort_values(
['trade_date', 'code']).dropna()
if not model_type:
return risk_cov, risk_exp
elif model_type == 'factor':
new_risk_cov = risk_cov.set_index('Factor')
new_risk_exp = risk_exp.set_index('code')
risk_cov_groups = new_risk_cov.groupby('trade_date')
risk_exp_groups = new_risk_exp.groupby('trade_date')
models = {}
for ref_date, cov_g in risk_cov_groups:
exp_g = risk_exp_groups.get_group(ref_date)
factor_names = cov_g.index.tolist()
factor_cov = cov_g.loc[factor_names, factor_names] / 10000.
factor_loading = exp_g.loc[:, factor_names]
idsync = exp_g['srisk'] * exp_g['srisk'] / 10000
models[ref_date] = FactorRiskModel(factor_cov, factor_loading,
idsync)
return pd.Series(models), risk_cov, risk_exp
def codes_fetch(self, codes, start_date, end_date, model_type=None):
risk_cov_table, special_risk_table = self.map_risk_model_table()
cov_risk_cols = [
risk_cov_table.__table__.columns[f] for f in total_risk_factors
]
query = select([
risk_cov_table.trade_date, risk_cov_table.FactorID,
risk_cov_table.Factor
] + cov_risk_cols).where(
risk_cov_table.trade_date.between(start_date, end_date))
risk_cov = pd.read_sql(query, self._engine.client()).sort_values(
['trade_date', 'FactorID'])
RiskExposure = self._engine.table_model('risk_exposure')
risk_exposure_cols = [
RiskExposure.__table__.columns[f] for f in total_risk_factors
]
big_table = join(
RiskExposure, special_risk_table,
and_(RiskExposure.code == special_risk_table.code,
RiskExposure.trade_date == special_risk_table.trade_date))
query = select([RiskExposure.trade_date, RiskExposure.code, special_risk_table.SRISK.label('srisk')] + risk_exposure_cols) \
.select_from(big_table).where(
and_(RiskExposure.trade_date.between(start_date,end_date),
RiskExposure.code.in_(codes)
))
risk_exp = pd.read_sql(query, self._engine.client()).sort_values(
['trade_date', 'code']).dropna()
if not model_type:
return risk_cov, risk_exp
elif model_type == 'factor':
new_risk_cov = risk_cov.set_index('Factor')
new_risk_exp = risk_exp.set_index('code')
risk_cov_groups = new_risk_cov.groupby('trade_date')
risk_exp_groups = new_risk_exp.groupby('trade_date')
models = {}
for ref_date, cov_g in risk_cov_groups:
exp_g = risk_exp_groups.get_group(ref_date)
factor_names = cov_g.index.tolist()
factor_cov = cov_g.loc[factor_names, factor_names] / 10000.
factor_loading = exp_g.loc[:, factor_names]
idsync = exp_g['srisk'] * exp_g['srisk'] / 10000
models[ref_date] = FactorRiskModel(factor_cov, factor_loading,
idsync)
return pd.Series(models), risk_cov, risk_exp | PypiClean |
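The `model_type == 'factor'` branches above all follow the same shape: group the covariance and exposure frames by `trade_date`, align them per date, and build one model per date. The grouping step, sketched without pandas (hypothetical row dicts standing in for the SQL result rows):

```python
from collections import defaultdict

def group_by_date(rows):
    """Bucket row dicts by their 'trade_date' key, preserving row order."""
    groups = defaultdict(list)
    for row in rows:
        groups[row['trade_date']].append(row)
    return dict(groups)

rows = [
    {'trade_date': '2020-01-02', 'code': 1, 'srisk': 0.1},
    {'trade_date': '2020-01-02', 'code': 2, 'srisk': 0.2},
    {'trade_date': '2020-01-03', 'code': 1, 'srisk': 0.3},
]
# one "model" per trade date, built from that date's rows
sizes = {date: len(group) for date, group in group_by_date(rows).items()}
print(sizes)  # -> {'2020-01-02': 2, '2020-01-03': 1}
```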
/FlexGet-3.9.6-py3-none-any.whl/flexget/plugins/output/subtitles_periscope.py
import logging
import os
import tempfile
from loguru import logger
from flexget import plugin
from flexget.event import event
logger = logger.bind(name='subtitles')
class PluginPeriscope:
r"""
Search and download subtitles using Periscope by Patrick Dessalle
(http://code.google.com/p/periscope/).
Example (complete task)::
subs:
find:
path:
- d:\media\incoming
regexp: '.*\.(avi|mkv|mp4)$'
recursive: yes
accept_all: yes
periscope:
languages:
- it
alternatives:
- en
overwrite: yes
"""
schema = {
'type': 'object',
'properties': {
'languages': {'type': 'array', 'items': {'type': 'string'}, 'minItems': 1},
'alternatives': {'type': 'array', 'items': {'type': 'string'}},
'overwrite': {'type': 'boolean', 'default': False},
'subexts': {
'type': 'array',
'items': {'type': 'string'},
'default': ['srt', 'stp', 'sub', 'stl', 'ssa'],
},
},
'additionalProperties': False,
}
def on_task_start(self, task, config):
try:
import periscope # noqa
except ImportError as e:
logger.debug('Error importing Periscope: {}', e)
raise plugin.DependencyError(
'periscope', 'periscope', 'Periscope module required. ImportError: %s' % e
)
def subbed(self, filename):
for ext in self.exts:
if os.path.exists(os.path.splitext(filename)[0] + ext):
return True
return False
def on_task_output(self, task, config):
"""
Configuration::
periscope:
languages: List of languages in order of preference (at least one is required).
alternatives: List of second-choice languages; subs will be downloaded but entries rejected.
overwrite: If yes it will try to download even for videos that are already subbed. Default: no.
subexts: List of subtitles file extensions to check (only useful with overwrite=no).
Default: srt, stp, sub, stl, ssa.
"""
if not task.accepted:
logger.debug('nothing accepted, aborting')
return
import periscope
psc = periscope.Periscope(tempfile.gettempdir())
logging.getLogger('periscope').setLevel(logging.CRITICAL) # LOT of messages otherwise
langs = [s.encode('utf8') for s in config['languages']] # avoid unicode warnings
alts = [s.encode('utf8') for s in config.get('alternatives', [])]
if not config['overwrite']:
self.exts = ['.' + s for s in config['subexts']]
for entry in task.accepted:
if 'location' not in entry:
logger.warning('Cannot act on entries that do not represent a local file.')
elif not os.path.exists(entry['location']):
entry.fail('file not found: %s' % entry['location'])
elif '$RECYCLE.BIN' in entry['location']:
continue # ignore deleted files in Windows shares
elif not config['overwrite'] and self.subbed(entry['location']):
logger.warning('cannot overwrite existing subs for {}', entry['location'])
else:
try:
if psc.downloadSubtitle(entry['location'].encode("utf8"), langs):
logger.info('Subtitles found for {}', entry['location'])
elif alts and psc.downloadSubtitle(entry['location'].encode('utf8'), alts):
entry.fail('subtitles found for a second-choice language.')
else:
entry.fail('cannot find any subtitles for now.')
except Exception as err:
# don't want to abort the entire task for errors in a
# single video file or for occasional network timeouts
entry.fail(str(err))  # Exception has no .message attribute on Python 3
@event('plugin.register')
def register_plugin():
plugin.register(PluginPeriscope, 'periscope', api_ver=2) | PypiClean |
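The `subbed` helper above decides whether to skip a video by probing for a sibling file with the same stem and one of the configured subtitle extensions. The same check as a standalone sketch, taking the set of existing paths as an argument instead of touching the filesystem:

```python
import os.path

# default extension list mirrors the plugin's 'subexts' default
SUB_EXTS = ('.srt', '.stp', '.sub', '.stl', '.ssa')

def has_subtitles(video_path, existing_paths, exts=SUB_EXTS):
    """True if a subtitle file with the same stem already exists."""
    stem = os.path.splitext(video_path)[0]
    return any(stem + ext in existing_paths for ext in exts)

files = {'/media/show.s01e01.mkv', '/media/show.s01e01.srt'}
print(has_subtitles('/media/show.s01e01.mkv', files))  # -> True
print(has_subtitles('/media/show.s01e02.mkv', files))  # -> False
```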
/Flask-MDEditor-0.1.4.tar.gz/Flask-MDEditor-0.1.4/flask_mdeditor/static/mdeditor/js/lib/codemirror/mode/jade/jade.js
(function(mod) {
if (typeof exports == "object" && typeof module == "object") // CommonJS
mod(require("../../lib/codemirror"), require("../javascript/javascript"), require("../css/css"), require("../htmlmixed/htmlmixed"));
else if (typeof define == "function" && define.amd) // AMD
define(["../../lib/codemirror", "../javascript/javascript", "../css/css", "../htmlmixed/htmlmixed"], mod);
else // Plain browser env
mod(CodeMirror);
})(function(CodeMirror) {
"use strict";
CodeMirror.defineMode('jade', function (config) {
// token types
var KEYWORD = 'keyword';
var DOCTYPE = 'meta';
var ID = 'builtin';
var CLASS = 'qualifier';
var ATTRS_NEST = {
'{': '}',
'(': ')',
'[': ']'
};
var jsMode = CodeMirror.getMode(config, 'javascript');
function State() {
this.javaScriptLine = false;
this.javaScriptLineExcludesColon = false;
this.javaScriptArguments = false;
this.javaScriptArgumentsDepth = 0;
this.isInterpolating = false;
this.interpolationNesting = 0;
this.jsState = jsMode.startState();
this.restOfLine = '';
this.isIncludeFiltered = false;
this.isEach = false;
this.lastTag = '';
this.scriptType = '';
// Attributes Mode
this.isAttrs = false;
this.attrsNest = [];
this.inAttributeName = true;
this.attributeIsType = false;
this.attrValue = '';
// Indented Mode
this.indentOf = Infinity;
this.indentToken = '';
this.innerMode = null;
this.innerState = null;
this.innerModeForLine = false;
}
/**
* Safely copy a state
*
* @return {State}
*/
State.prototype.copy = function () {
var res = new State();
res.javaScriptLine = this.javaScriptLine;
res.javaScriptLineExcludesColon = this.javaScriptLineExcludesColon;
res.javaScriptArguments = this.javaScriptArguments;
res.javaScriptArgumentsDepth = this.javaScriptArgumentsDepth;
res.isInterpolating = this.isInterpolating;
res.interpolationNesting = this.interpolationNesting;
res.jsState = CodeMirror.copyState(jsMode, this.jsState);
res.innerMode = this.innerMode;
if (this.innerMode && this.innerState) {
res.innerState = CodeMirror.copyState(this.innerMode, this.innerState);
}
res.restOfLine = this.restOfLine;
res.isIncludeFiltered = this.isIncludeFiltered;
res.isEach = this.isEach;
res.lastTag = this.lastTag;
res.scriptType = this.scriptType;
res.isAttrs = this.isAttrs;
res.attrsNest = this.attrsNest.slice();
res.inAttributeName = this.inAttributeName;
res.attributeIsType = this.attributeIsType;
res.attrValue = this.attrValue;
res.indentOf = this.indentOf;
res.indentToken = this.indentToken;
res.innerModeForLine = this.innerModeForLine;
return res;
};
function javaScript(stream, state) {
if (stream.sol()) {
// if javaScriptLine was set at end of line, ignore it
state.javaScriptLine = false;
state.javaScriptLineExcludesColon = false;
}
if (state.javaScriptLine) {
if (state.javaScriptLineExcludesColon && stream.peek() === ':') {
state.javaScriptLine = false;
state.javaScriptLineExcludesColon = false;
return;
}
var tok = jsMode.token(stream, state.jsState);
if (stream.eol()) state.javaScriptLine = false;
return tok || true;
}
}
function javaScriptArguments(stream, state) {
if (state.javaScriptArguments) {
if (state.javaScriptArgumentsDepth === 0 && stream.peek() !== '(') {
state.javaScriptArguments = false;
return;
}
if (stream.peek() === '(') {
state.javaScriptArgumentsDepth++;
} else if (stream.peek() === ')') {
state.javaScriptArgumentsDepth--;
}
if (state.javaScriptArgumentsDepth === 0) {
state.javaScriptArguments = false;
return;
}
var tok = jsMode.token(stream, state.jsState);
return tok || true;
}
}
function yieldStatement(stream) {
if (stream.match(/^yield\b/)) {
return 'keyword';
}
}
function doctype(stream) {
if (stream.match(/^(?:doctype) *([^\n]+)?/)) {
return DOCTYPE;
}
}
function interpolation(stream, state) {
if (stream.match('#{')) {
state.isInterpolating = true;
state.interpolationNesting = 0;
return 'punctuation';
}
}
function interpolationContinued(stream, state) {
if (state.isInterpolating) {
if (stream.peek() === '}') {
state.interpolationNesting--;
if (state.interpolationNesting < 0) {
stream.next();
state.isInterpolating = false;
        return 'punctuation';
}
} else if (stream.peek() === '{') {
state.interpolationNesting++;
}
return jsMode.token(stream, state.jsState) || true;
}
}
function caseStatement(stream, state) {
if (stream.match(/^case\b/)) {
state.javaScriptLine = true;
return KEYWORD;
}
}
function when(stream, state) {
if (stream.match(/^when\b/)) {
state.javaScriptLine = true;
state.javaScriptLineExcludesColon = true;
return KEYWORD;
}
}
function defaultStatement(stream) {
if (stream.match(/^default\b/)) {
return KEYWORD;
}
}
function extendsStatement(stream, state) {
if (stream.match(/^extends?\b/)) {
state.restOfLine = 'string';
return KEYWORD;
}
}
function append(stream, state) {
if (stream.match(/^append\b/)) {
state.restOfLine = 'variable';
return KEYWORD;
}
}
function prepend(stream, state) {
if (stream.match(/^prepend\b/)) {
state.restOfLine = 'variable';
return KEYWORD;
}
}
function block(stream, state) {
if (stream.match(/^block\b *(?:(prepend|append)\b)?/)) {
state.restOfLine = 'variable';
return KEYWORD;
}
}
function include(stream, state) {
if (stream.match(/^include\b/)) {
state.restOfLine = 'string';
return KEYWORD;
}
}
function includeFiltered(stream, state) {
if (stream.match(/^include:([a-zA-Z0-9\-]+)/, false) && stream.match('include')) {
state.isIncludeFiltered = true;
return KEYWORD;
}
}
function includeFilteredContinued(stream, state) {
if (state.isIncludeFiltered) {
var tok = filter(stream, state);
state.isIncludeFiltered = false;
state.restOfLine = 'string';
return tok;
}
}
function mixin(stream, state) {
if (stream.match(/^mixin\b/)) {
state.javaScriptLine = true;
return KEYWORD;
}
}
function call(stream, state) {
if (stream.match(/^\+([-\w]+)/)) {
if (!stream.match(/^\( *[-\w]+ *=/, false)) {
state.javaScriptArguments = true;
state.javaScriptArgumentsDepth = 0;
}
return 'variable';
}
if (stream.match(/^\+#{/, false)) {
stream.next();
state.mixinCallAfter = true;
return interpolation(stream, state);
}
}
function callArguments(stream, state) {
if (state.mixinCallAfter) {
state.mixinCallAfter = false;
if (!stream.match(/^\( *[-\w]+ *=/, false)) {
state.javaScriptArguments = true;
state.javaScriptArgumentsDepth = 0;
}
return true;
}
}
function conditional(stream, state) {
if (stream.match(/^(if|unless|else if|else)\b/)) {
state.javaScriptLine = true;
return KEYWORD;
}
}
function each(stream, state) {
if (stream.match(/^(- *)?(each|for)\b/)) {
state.isEach = true;
return KEYWORD;
}
}
function eachContinued(stream, state) {
if (state.isEach) {
if (stream.match(/^ in\b/)) {
state.javaScriptLine = true;
state.isEach = false;
return KEYWORD;
} else if (stream.sol() || stream.eol()) {
state.isEach = false;
} else if (stream.next()) {
while (!stream.match(/^ in\b/, false) && stream.next());
return 'variable';
}
}
}
function whileStatement(stream, state) {
if (stream.match(/^while\b/)) {
state.javaScriptLine = true;
return KEYWORD;
}
}
function tag(stream, state) {
var captures;
if (captures = stream.match(/^(\w(?:[-:\w]*\w)?)\/?/)) {
state.lastTag = captures[1].toLowerCase();
if (state.lastTag === 'script') {
state.scriptType = 'application/javascript';
}
return 'tag';
}
}
function filter(stream, state) {
if (stream.match(/^:([\w\-]+)/)) {
var innerMode;
if (config && config.innerModes) {
innerMode = config.innerModes(stream.current().substring(1));
}
if (!innerMode) {
innerMode = stream.current().substring(1);
}
if (typeof innerMode === 'string') {
innerMode = CodeMirror.getMode(config, innerMode);
}
setInnerMode(stream, state, innerMode);
return 'atom';
}
}
function code(stream, state) {
if (stream.match(/^(!?=|-)/)) {
state.javaScriptLine = true;
return 'punctuation';
}
}
function id(stream) {
if (stream.match(/^#([\w-]+)/)) {
return ID;
}
}
function className(stream) {
if (stream.match(/^\.([\w-]+)/)) {
return CLASS;
}
}
function attrs(stream, state) {
if (stream.peek() == '(') {
stream.next();
state.isAttrs = true;
state.attrsNest = [];
state.inAttributeName = true;
state.attrValue = '';
state.attributeIsType = false;
return 'punctuation';
}
}
function attrsContinued(stream, state) {
if (state.isAttrs) {
if (ATTRS_NEST[stream.peek()]) {
state.attrsNest.push(ATTRS_NEST[stream.peek()]);
}
if (state.attrsNest[state.attrsNest.length - 1] === stream.peek()) {
state.attrsNest.pop();
} else if (stream.eat(')')) {
state.isAttrs = false;
return 'punctuation';
}
if (state.inAttributeName && stream.match(/^[^=,\)!]+/)) {
if (stream.peek() === '=' || stream.peek() === '!') {
state.inAttributeName = false;
state.jsState = jsMode.startState();
if (state.lastTag === 'script' && stream.current().trim().toLowerCase() === 'type') {
state.attributeIsType = true;
} else {
state.attributeIsType = false;
}
}
return 'attribute';
}
var tok = jsMode.token(stream, state.jsState);
if (state.attributeIsType && tok === 'string') {
state.scriptType = stream.current().toString();
}
if (state.attrsNest.length === 0 && (tok === 'string' || tok === 'variable' || tok === 'keyword')) {
try {
Function('', 'var x ' + state.attrValue.replace(/,\s*$/, '').replace(/^!/, ''));
state.inAttributeName = true;
state.attrValue = '';
stream.backUp(stream.current().length);
return attrsContinued(stream, state);
} catch (ex) {
//not the end of an attribute
}
}
state.attrValue += stream.current();
return tok || true;
}
}
function attributesBlock(stream, state) {
if (stream.match(/^&attributes\b/)) {
state.javaScriptArguments = true;
state.javaScriptArgumentsDepth = 0;
return 'keyword';
}
}
function indent(stream) {
if (stream.sol() && stream.eatSpace()) {
return 'indent';
}
}
function comment(stream, state) {
if (stream.match(/^ *\/\/(-)?([^\n]*)/)) {
state.indentOf = stream.indentation();
state.indentToken = 'comment';
return 'comment';
}
}
function colon(stream) {
if (stream.match(/^: */)) {
return 'colon';
}
}
function text(stream, state) {
if (stream.match(/^(?:\| ?| )([^\n]+)/)) {
return 'string';
}
if (stream.match(/^(<[^\n]*)/, false)) {
// html string
setInnerMode(stream, state, 'htmlmixed');
state.innerModeForLine = true;
return innerMode(stream, state, true);
}
}
function dot(stream, state) {
if (stream.eat('.')) {
var innerMode = null;
if (state.lastTag === 'script' && state.scriptType.toLowerCase().indexOf('javascript') != -1) {
innerMode = state.scriptType.toLowerCase().replace(/"|'/g, '');
} else if (state.lastTag === 'style') {
innerMode = 'css';
}
setInnerMode(stream, state, innerMode);
return 'dot';
}
}
function fail(stream) {
stream.next();
return null;
}
function setInnerMode(stream, state, mode) {
mode = CodeMirror.mimeModes[mode] || mode;
mode = config.innerModes ? config.innerModes(mode) || mode : mode;
mode = CodeMirror.mimeModes[mode] || mode;
mode = CodeMirror.getMode(config, mode);
state.indentOf = stream.indentation();
if (mode && mode.name !== 'null') {
state.innerMode = mode;
} else {
state.indentToken = 'string';
}
}
function innerMode(stream, state, force) {
if (stream.indentation() > state.indentOf || (state.innerModeForLine && !stream.sol()) || force) {
if (state.innerMode) {
if (!state.innerState) {
state.innerState = state.innerMode.startState ? state.innerMode.startState(stream.indentation()) : {};
}
return stream.hideFirstChars(state.indentOf + 2, function () {
return state.innerMode.token(stream, state.innerState) || true;
});
} else {
stream.skipToEnd();
return state.indentToken;
}
} else if (stream.sol()) {
state.indentOf = Infinity;
state.indentToken = null;
state.innerMode = null;
state.innerState = null;
}
}
function restOfLine(stream, state) {
if (stream.sol()) {
// if restOfLine was set at end of line, ignore it
state.restOfLine = '';
}
if (state.restOfLine) {
stream.skipToEnd();
var tok = state.restOfLine;
state.restOfLine = '';
return tok;
}
}
function startState() {
return new State();
}
function copyState(state) {
return state.copy();
}
/**
* Get the next token in the stream
*
* @param {Stream} stream
* @param {State} state
*/
function nextToken(stream, state) {
var tok = innerMode(stream, state)
|| restOfLine(stream, state)
|| interpolationContinued(stream, state)
|| includeFilteredContinued(stream, state)
|| eachContinued(stream, state)
|| attrsContinued(stream, state)
|| javaScript(stream, state)
|| javaScriptArguments(stream, state)
|| callArguments(stream, state)
|| yieldStatement(stream, state)
|| doctype(stream, state)
|| interpolation(stream, state)
|| caseStatement(stream, state)
|| when(stream, state)
|| defaultStatement(stream, state)
|| extendsStatement(stream, state)
|| append(stream, state)
|| prepend(stream, state)
|| block(stream, state)
|| include(stream, state)
|| includeFiltered(stream, state)
|| mixin(stream, state)
|| call(stream, state)
|| conditional(stream, state)
|| each(stream, state)
|| whileStatement(stream, state)
|| tag(stream, state)
|| filter(stream, state)
|| code(stream, state)
|| id(stream, state)
|| className(stream, state)
|| attrs(stream, state)
|| attributesBlock(stream, state)
|| indent(stream, state)
|| text(stream, state)
|| comment(stream, state)
|| colon(stream, state)
|| dot(stream, state)
|| fail(stream, state);
return tok === true ? null : tok;
}
return {
startState: startState,
copyState: copyState,
token: nextToken
};
});
CodeMirror.defineMIME('text/x-jade', 'jade');
}); | PypiClean |
# /OTLModel/Classes/Onderdeel/Sokkel.py
from OTLMOW.OTLModel.BaseClasses.OTLAttribuut import OTLAttribuut
from OTLMOW.OTLModel.Classes.ImplementatieElement.AIMNaamObject import AIMNaamObject
from OTLMOW.OTLModel.Datatypes.BooleanField import BooleanField
from OTLMOW.OTLModel.Datatypes.DtcAfmetingDiameterInCm import DtcAfmetingDiameterInCm
from OTLMOW.OTLModel.Datatypes.DtuAfmetingGrondvlak import DtuAfmetingGrondvlak
from OTLMOW.OTLModel.Datatypes.KlAlgMateriaal import KlAlgMateriaal
from OTLMOW.OTLModel.Datatypes.KwantWrdInCentimeter import KwantWrdInCentimeter
from OTLMOW.GeometrieArtefact.VlakGeometrie import VlakGeometrie
# Generated with OTLClassCreator. To modify: extend, do not edit
class Sokkel(AIMNaamObject, VlakGeometrie):
"""Onderdeel dat zich voornamelijk voornamelijk boven het maaiveld bevindt en als doel heeft het object dat er op rust te verhogen, het object te omhullen ter bescherming of de ondergrond te nivelleren. Afhankelijk van de grootte van dat object en van de omvang van de sokkel, kan die ook zorgen voor nodige stabiliteit zoals een fundering."""
typeURI = 'https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#Sokkel'
"""De URI van het object volgens https://www.w3.org/2001/XMLSchema#anyURI."""
def __init__(self):
AIMNaamObject.__init__(self)
VlakGeometrie.__init__(self)
self._afmetingGrondvlak = OTLAttribuut(field=DtuAfmetingGrondvlak,
naam='afmetingGrondvlak',
label='afmeting grondvlak',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#Sokkel.afmetingGrondvlak',
definition='De afmeting van het grondvlak van de sokkel volgens zijn vorm.',
owner=self)
self._heeftMaaibescherming = OTLAttribuut(field=BooleanField,
naam='heeftMaaibescherming',
label='heeft maaibescherming',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#Sokkel.heeftMaaibescherming',
definition='Geeft aan of de sokkel (en daarmee het object dat er bovenop geplaatst is) beschermd is tegen schade als gevolg van het maaien van omliggende begroeiing.',
owner=self)
self._hoogteBovenMaaiveld = OTLAttribuut(field=KwantWrdInCentimeter,
naam='hoogteBovenMaaiveld',
label='hoogte boven het maaiveld',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#Sokkel.hoogteBovenMaaiveld',
definition='Hoogte in centimeters van het hoogste punt van de sokkel gemeten vanaf het maaiveld.',
owner=self)
self._hoogteSokkel = OTLAttribuut(field=DtcAfmetingDiameterInCm,
naam='hoogteSokkel',
label='hoogte van de sokkel',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#Sokkel.hoogteSokkel',
usagenote='Attribuut uit gebruik sinds versie 2.0.0 ',
deprecated_version='2.0.0',
definition='De totale hoogte van de sokkel wanneer die rechtop staat.',
owner=self)
self._materiaal = OTLAttribuut(field=KlAlgMateriaal,
naam='materiaal',
label='materiaal',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#Sokkel.materiaal',
definition='De grondstof waaruit de sokkel (voornamelijk) vervaardigd is.',
owner=self)
self._sokkelhoogte = OTLAttribuut(field=KwantWrdInCentimeter,
naam='sokkelhoogte',
label='hoogte van de sokkel',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#Sokkel.sokkelhoogte',
definition='De totale hoogte van de sokkel wanneer die rechtop staat.',
owner=self)
@property
def afmetingGrondvlak(self):
"""De afmeting van het grondvlak van de sokkel volgens zijn vorm."""
return self._afmetingGrondvlak.get_waarde()
@afmetingGrondvlak.setter
def afmetingGrondvlak(self, value):
self._afmetingGrondvlak.set_waarde(value, owner=self)
@property
def heeftMaaibescherming(self):
"""Geeft aan of de sokkel (en daarmee het object dat er bovenop geplaatst is) beschermd is tegen schade als gevolg van het maaien van omliggende begroeiing."""
return self._heeftMaaibescherming.get_waarde()
@heeftMaaibescherming.setter
def heeftMaaibescherming(self, value):
self._heeftMaaibescherming.set_waarde(value, owner=self)
@property
def hoogteBovenMaaiveld(self):
"""Hoogte in centimeters van het hoogste punt van de sokkel gemeten vanaf het maaiveld."""
return self._hoogteBovenMaaiveld.get_waarde()
@hoogteBovenMaaiveld.setter
def hoogteBovenMaaiveld(self, value):
self._hoogteBovenMaaiveld.set_waarde(value, owner=self)
@property
def hoogteSokkel(self):
"""De totale hoogte van de sokkel wanneer die rechtop staat."""
return self._hoogteSokkel.get_waarde()
@hoogteSokkel.setter
def hoogteSokkel(self, value):
self._hoogteSokkel.set_waarde(value, owner=self)
@property
def materiaal(self):
"""De grondstof waaruit de sokkel (voornamelijk) vervaardigd is."""
return self._materiaal.get_waarde()
@materiaal.setter
def materiaal(self, value):
self._materiaal.set_waarde(value, owner=self)
@property
def sokkelhoogte(self):
"""De totale hoogte van de sokkel wanneer die rechtop staat."""
return self._sokkelhoogte.get_waarde()
@sokkelhoogte.setter
def sokkelhoogte(self, value):
self._sokkelhoogte.set_waarde(value, owner=self) | PypiClean |
# /MergePythonSDK.ticketing-2.2.2-py3-none-any.whl/MergePythonSDK/accounting/model/journal_entry_response.py
import re  # noqa: F401
import sys # noqa: F401
from typing import (
Optional,
Union,
List,
Dict,
)
from MergePythonSDK.shared.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
OpenApiModel,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
)
from MergePythonSDK.shared.exceptions import ApiAttributeError
from MergePythonSDK.shared.model_utils import import_model_by_name
def lazy_import():
from MergePythonSDK.accounting.model.debug_mode_log import DebugModeLog
from MergePythonSDK.accounting.model.error_validation_problem import ErrorValidationProblem
from MergePythonSDK.accounting.model.journal_entry import JournalEntry
from MergePythonSDK.accounting.model.warning_validation_problem import WarningValidationProblem
globals()['DebugModeLog'] = DebugModeLog
globals()['ErrorValidationProblem'] = ErrorValidationProblem
globals()['JournalEntry'] = JournalEntry
globals()['WarningValidationProblem'] = WarningValidationProblem
class JournalEntryResponse(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
return (bool, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
defined_types = {
'model': (JournalEntry,), # noqa: E501
'warnings': ([WarningValidationProblem],), # noqa: E501
'errors': ([ErrorValidationProblem],), # noqa: E501
'logs': ([DebugModeLog], none_type,), # noqa: E501
}
return defined_types
@cached_property
def discriminator():
return None
attribute_map = {
'model': 'model', # noqa: E501
'warnings': 'warnings', # noqa: E501
'errors': 'errors', # noqa: E501
'logs': 'logs', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, model, warnings, errors, *args, **kwargs): # noqa: E501
"""JournalEntryResponse - a model defined in OpenAPI
Args:
model (JournalEntry):
warnings ([WarningValidationProblem]):
errors ([ErrorValidationProblem]):
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
logs ([DebugModeLog]): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.model = model
self.warnings = warnings
self.errors = errors
self.logs = kwargs.get("logs", None)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, model, warnings, errors, *args, **kwargs): # noqa: E501
"""JournalEntryResponse - a model defined in OpenAPI
Args:
model (JournalEntry):
warnings ([WarningValidationProblem]):
errors ([ErrorValidationProblem]):
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
logs ([DebugModeLog]): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.model: Union["JournalEntry", {"lines": "JournalLine", "payments": "Payment"}] = model
self.warnings: Union[List["WarningValidationProblem"]] = warnings
self.errors: Union[List["ErrorValidationProblem"]] = errors
self.logs: Union[List["DebugModeLog"]] = kwargs.get("logs", None) | PypiClean |
# /Nipo-0.0.1.tar.gz/Nipo-0.0.1/jinja2/asyncsupport.py
import asyncio
import inspect
from functools import update_wrapper
from markupsafe import Markup
from .environment import TemplateModule
from .runtime import LoopContext
from .utils import concat
from .utils import internalcode
from .utils import missing
async def concat_async(async_gen):
rv = []
async def collect():
async for event in async_gen:
rv.append(event)
await collect()
return concat(rv)
async def generate_async(self, *args, **kwargs):
vars = dict(*args, **kwargs)
try:
async for event in self.root_render_func(self.new_context(vars)):
yield event
except Exception:
yield self.environment.handle_exception()
def wrap_generate_func(original_generate):
def _convert_generator(self, loop, args, kwargs):
async_gen = self.generate_async(*args, **kwargs)
try:
while 1:
yield loop.run_until_complete(async_gen.__anext__())
except StopAsyncIteration:
pass
def generate(self, *args, **kwargs):
if not self.environment.is_async:
return original_generate(self, *args, **kwargs)
return _convert_generator(self, asyncio.get_event_loop(), args, kwargs)
return update_wrapper(generate, original_generate)
async def render_async(self, *args, **kwargs):
if not self.environment.is_async:
raise RuntimeError("The environment was not created with async mode enabled.")
vars = dict(*args, **kwargs)
ctx = self.new_context(vars)
try:
return await concat_async(self.root_render_func(ctx))
except Exception:
return self.environment.handle_exception()
def wrap_render_func(original_render):
def render(self, *args, **kwargs):
if not self.environment.is_async:
return original_render(self, *args, **kwargs)
loop = asyncio.get_event_loop()
return loop.run_until_complete(self.render_async(*args, **kwargs))
return update_wrapper(render, original_render)
def wrap_block_reference_call(original_call):
@internalcode
async def async_call(self):
rv = await concat_async(self._stack[self._depth](self._context))
if self._context.eval_ctx.autoescape:
rv = Markup(rv)
return rv
@internalcode
def __call__(self):
if not self._context.environment.is_async:
return original_call(self)
return async_call(self)
return update_wrapper(__call__, original_call)
def wrap_macro_invoke(original_invoke):
@internalcode
async def async_invoke(self, arguments, autoescape):
rv = await self._func(*arguments)
if autoescape:
rv = Markup(rv)
return rv
@internalcode
def _invoke(self, arguments, autoescape):
if not self._environment.is_async:
return original_invoke(self, arguments, autoescape)
return async_invoke(self, arguments, autoescape)
return update_wrapper(_invoke, original_invoke)
@internalcode
async def get_default_module_async(self):
if self._module is not None:
return self._module
self._module = rv = await self.make_module_async()
return rv
def wrap_default_module(original_default_module):
@internalcode
def _get_default_module(self):
if self.environment.is_async:
raise RuntimeError("Template module attribute is unavailable in async mode")
return original_default_module(self)
return _get_default_module
async def make_module_async(self, vars=None, shared=False, locals=None):
context = self.new_context(vars, shared, locals)
body_stream = []
async for item in self.root_render_func(context):
body_stream.append(item)
return TemplateModule(self, context, body_stream)
def patch_template():
from . import Template
Template.generate = wrap_generate_func(Template.generate)
Template.generate_async = update_wrapper(generate_async, Template.generate_async)
Template.render_async = update_wrapper(render_async, Template.render_async)
Template.render = wrap_render_func(Template.render)
Template._get_default_module = wrap_default_module(Template._get_default_module)
Template._get_default_module_async = get_default_module_async
Template.make_module_async = update_wrapper(
make_module_async, Template.make_module_async
)
def patch_runtime():
from .runtime import BlockReference, Macro
BlockReference.__call__ = wrap_block_reference_call(BlockReference.__call__)
Macro._invoke = wrap_macro_invoke(Macro._invoke)
def patch_filters():
from .filters import FILTERS
from .asyncfilters import ASYNC_FILTERS
FILTERS.update(ASYNC_FILTERS)
def patch_all():
patch_template()
patch_runtime()
patch_filters()
async def auto_await(value):
if inspect.isawaitable(value):
return await value
return value
async def auto_aiter(iterable):
if hasattr(iterable, "__aiter__"):
async for item in iterable:
yield item
return
for item in iterable:
yield item
class AsyncLoopContext(LoopContext):
_to_iterator = staticmethod(auto_aiter)
@property
async def length(self):
if self._length is not None:
return self._length
try:
self._length = len(self._iterable)
except TypeError:
iterable = [x async for x in self._iterator]
self._iterator = self._to_iterator(iterable)
self._length = len(iterable) + self.index + (self._after is not missing)
return self._length
@property
async def revindex0(self):
return await self.length - self.index
@property
async def revindex(self):
return await self.length - self.index0
async def _peek_next(self):
if self._after is not missing:
return self._after
try:
self._after = await self._iterator.__anext__()
except StopAsyncIteration:
self._after = missing
return self._after
@property
async def last(self):
return await self._peek_next() is missing
@property
async def nextitem(self):
rv = await self._peek_next()
if rv is missing:
return self._undefined("there is no next item")
return rv
def __aiter__(self):
return self
async def __anext__(self):
if self._after is not missing:
rv = self._after
self._after = missing
else:
rv = await self._iterator.__anext__()
self.index0 += 1
self._before = self._current
self._current = rv
return rv, self
async def make_async_loop_context(iterable, undefined, recurse=None, depth0=0):
import warnings
warnings.warn(
"This template must be recompiled with at least Jinja 2.11, or"
" it will fail in 3.0.",
DeprecationWarning,
stacklevel=2,
)
return AsyncLoopContext(iterable, undefined, recurse, depth0)
patch_all() | PypiClean |
# /Nxpy-0.6.0.tar.gz/Nxpy-0.6.0/nxpy/ccase/test/env.py
# Copyright Nicola Musatti 2012 - 2014
# Use, modification, and distribution are subject to the Boost Software
# License, Version 1.0. (See accompanying file LICENSE.txt or copy at
# http://www.boost.org/LICENSE_1_0.txt)
# See http://nxpy.sourceforge.net for library home page. ---------------------
r"""
Clearcase test environment definition.
"""
from __future__ import absolute_import
import os.path
import platform
from six.moves import configparser
import nxpy.test.env
class ConfigurationError(Exception):
r"""Raised when the ClearCase test environment is not properly set."""
class Env(nxpy.test.env.EnvBase):
r"""Environment configuration information for ClearCase related tests."""
src_link_name = "nxpy_test_link"
r"""Name for a link created when testing the *ln()* method."""
backup_ext = ".test"
r"""Extension used for backup files."""
def __init__(self):
r"""Takes initialization information from a conventionally placed configuration file."""
nxpy.test.env.EnvBase.__init__(self, "conf")
self.parser = configparser.SafeConfigParser()
self.parser.read(os.path.join(self.elem_dir, "ccase.ini"))
self.section = platform.node()
if not self.parser.has_section(self.section):
self.section = "localhost"
if not self.parser.has_section(self.section):
raise ConfigurationError("ccase.ini has no section for this computer")
self.src_view_dir = self._get("src_view_dir")
self.src_view_tag = self._get("src_view_tag")
self.src_stream_tag = self._get("src_stream_tag")
self.src_stream_sel = self._get("src_stream_sel")
self.src_proj_sel = self._get("src_proj_sel")
self.relative_src_dir = self._get("relative_src_dir")
self.src_file_name = self._get("src_file_name")
self.dest_view_dir = self._get("dest_view_dir")
self.dest_view_tag = self._get("dest_view_tag")
self.dest_stream_sel = self._get("dest_stream_sel")
self.proj_vob = self._get("proj_vob")
self.lsview_tags = self._get("lsview_tags")
self.activity = self._get("activity")
self.test = self._getboolean("test")
self.src_dir_path = os.path.join(self.src_view_dir, self.relative_src_dir)
self.src_file_path = os.path.join(self.src_dir_path, self.src_file_name)
self.backup_file_name = self.src_file_name + self.backup_ext
self.src_stream_pattern = r"\*" + self.src_stream_tag + r"\*"
def _get(self, option):
return self.parser.get(self.section, option)
def _getboolean(self, option):
return self.parser.getboolean(self.section, option)
def get_env(test):
r"""
If the environment is correctly configured, an instance of *Env* is returned; otherwise
the current test is skipped. *test* is a :py:class:`unittest.TestCase` instance containing the
test currently being executed.
"""
try:
return Env()
except nxpy.test.env.TestEnvNotSetError:
test.skipTest("Test environment not set")
return None | PypiClean |
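The per-host section lookup in `Env.__init__` — try the machine's hostname as a section name first, then fall back to `localhost`, and fail if neither exists — can be exercised on its own. A minimal sketch with a made-up `ccase.ini` (the section layout mirrors the code above; the option names and values are illustrative):

```python
import configparser
import platform

# Hypothetical ccase.ini contents; sections are named after hostnames,
# with "localhost" as the fallback section.
INI_TEXT = """
[localhost]
src_view_dir = /view/src
test = yes
"""

def pick_section(parser):
    # Mirror Env.__init__: prefer a section named after this host,
    # otherwise fall back to "localhost".
    section = platform.node()
    if not parser.has_section(section):
        section = "localhost"
    if not parser.has_section(section):
        raise RuntimeError("ccase.ini has no section for this computer")
    return section

parser = configparser.ConfigParser()
parser.read_string(INI_TEXT)
section = pick_section(parser)
print(parser.get(section, "src_view_dir"))   # /view/src
print(parser.getboolean(section, "test"))    # True
```

The sketch uses `configparser.ConfigParser` directly rather than the deprecated `SafeConfigParser` the module imports through `six` for Python 2 compatibility.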
/Github_Contributor_Matcher-0.1.2-py3-none-any.whl/Github_Contributor_Matcher-0.1.2.dist-info/LICENSE.md | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| PypiClean |
/Highton-2.3.0.tar.gz/Highton-2.3.0/highton/parsing/xml_decoder.py | from abc import ABCMeta
from xml.etree import ElementTree
from highton import fields
class FieldDoesNotExist(Exception):
def __init__(self, obj, element):
self.obj = obj
self.element = element
def __str__(self):
return 'The field "{tag}" does not exist in {object}'.format(tag=self.element.tag, object=self.obj)
class XMLDecoder(metaclass=ABCMeta):
"""
This class is an abstract class which helps to decode an XML string into the object of the class that inherits from it
"""
def __getattribute__(self, name):
attribute_value = object.__getattribute__(self, name)
if isinstance(attribute_value, fields.Field):
return attribute_value.value
return attribute_value
def _get_field_names_to_attributes(self):
return {
value.name: key
for key, value in self.__dict__.items()
if isinstance(value, fields.Field)
}
def _get_field(self, attribute_name):
return self.__dict__[attribute_name]
@staticmethod
def _set_field(xml_decoder_object, field_names_to_attributes, child_element):
try:
field = xml_decoder_object._get_field(
field_names_to_attributes[child_element.tag]
)
field.value = field.decode(child_element)
except KeyError:
raise FieldDoesNotExist(xml_decoder_object, child_element)
@staticmethod
def element_from_string(string):
"""
Make an Element from a str
:param string: string you want to parse
:type string: str
:return: the parsed xml string
:rtype: xml.etree.ElementTree.Element
"""
return ElementTree.fromstring(string)
@classmethod
def decode(cls, root_element):
"""
Decode the parsed XML element into an instance of this class
:param root_element: the parsed xml Element
:type root_element: xml.etree.ElementTree.Element
:return: the decoded Element as object
:rtype: object
"""
new_object = cls()
field_names_to_attributes = new_object._get_field_names_to_attributes()
for child_element in root_element:
new_object._set_field(new_object, field_names_to_attributes, child_element)
return new_object | PypiClean |
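The decode flow above — map each field's `name` to an attribute, then walk the root element's children and fill the matching field — can be sketched without the rest of the library. The `StringField` below is a hypothetical stand-in for highton's `fields.Field` (the real field classes do type-specific parsing, and `XMLDecoder.__getattribute__` additionally unwraps `.value` on attribute access, which this sketch omits):

```python
from xml.etree import ElementTree

class StringField:
    """Minimal stand-in for a fields.Field subclass."""
    def __init__(self, name):
        self.name = name   # XML tag this field maps to
        self.value = None

    def decode(self, element):
        return element.text

class Person:
    def __init__(self):
        self.first_name = StringField("first-name")

    @classmethod
    def decode(cls, root):
        obj = cls()
        # Same idea as _get_field_names_to_attributes: tag name -> field
        by_tag = {f.name: f for f in vars(obj).values()
                  if isinstance(f, StringField)}
        for child in root:
            field = by_tag[child.tag]
            field.value = field.decode(child)
        return obj

root = ElementTree.fromstring("<person><first-name>Dave</first-name></person>")
person = Person.decode(root)
print(person.first_name.value)  # Dave
```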
/MusiContour-visimus-0.3.1.tar.gz/MusiContour-visimus-0.3.1/README | MusiContour: musical contour relations calculator
by Marcos da Silva Sampaio and Pedro Kröger (Genos Research Group [1])
This is software to calculate and plot musical contour relations and
operations. MusiContour can process contours related to parameters such
as pitch, duration, dynamics and chord density. The software provides
functions from Contour Theories [2].
MusiContour had two names before its current one: Goiaba (2007), a
program developed to process musical contours during Marcos da Silva
Sampaio's master's degree [3], and the Villa-Lobos Contour module, a
module of Villa-Lobos [4]; it is now an independent project.
musical contour relations. More information about these project is
available in [5], in portuguese.
Footnotes:
[1] http://genosmus.com
[2] Find more about Contour Theories in:
* Friedmann, Michael L. 1985. A Methodology for the Discussion of
Contour: its Application to Schoenberg's Music. Journal of Music
Theory 29, no. 2: 223-248.
* Friedmann, Michael L. 1987. A Response: My Contour, Their
Contour. Journal of Music Theory 31, no. 2: 268-274.
* Marvin, Elizabeth West. 1988. A generalized theory of musical
contour: its application to melodic and rhythmic analysis of
non-tonal music and its perceptual and pedagogical implications.
* Marvin, Elizabeth West, and Paul A Laprade. 1987. Relating Musical
Contours: Extensions of a Theory for Contour. Journal of Music
Theory 31, no. 2: 225-267.
* Morris, Robert D. 1993. New Directions in the Theory and Analysis of
Musical Contour. Music Theory Spectrum xv: 205-228.
* Morris, Robert D. 1987. Composition with Pitch-classes: A Theory of
Compositional Design. Yale University Press.
* Sampaio, Marcos da Silva. 2008. Em torno da romã: aplicações de
operações de contornos na composição. Salvador, BA. Brasil.
* Sampaio, Marcos, and Pedro Kröger. 2009. Goiaba: a software to
process musical contours. In Proceedings of the 12th Brazilian
Symposium on Computer Music, 203-206. Recife, Brazil.
[3] http://marcosdisilva.net/en/research
[4] http://github.com/kroger/villa-lobos
[5] http://genosmus.com/pesquisa/contornos
Keywords: Musical contour, contour in music, melodic contour,
computational musicology, contour theory, contorno melódico. | PypiClean |
/INGInious-0.8.7.tar.gz/INGInious-0.8.7/inginious/frontend/static/js/codemirror/mode/meta.js |
(function(mod) {
if (typeof exports == "object" && typeof module == "object") // CommonJS
mod(require("../lib/codemirror"));
else if (typeof define == "function" && define.amd) // AMD
define(["../lib/codemirror"], mod);
else // Plain browser env
mod(CodeMirror);
})(function(CodeMirror) {
"use strict";
CodeMirror.modeInfo = [
{name: "APL", mime: "text/apl", mode: "apl", ext: ["dyalog", "apl"]},
{name: "PGP", mimes: ["application/pgp", "application/pgp-keys", "application/pgp-signature"], mode: "asciiarmor", ext: ["pgp"]},
{name: "ASN.1", mime: "text/x-ttcn-asn", mode: "asn.1", ext: ["asn", "asn1"]},
{name: "Asterisk", mime: "text/x-asterisk", mode: "asterisk", file: /^extensions\.conf$/i},
{name: "Brainfuck", mime: "text/x-brainfuck", mode: "brainfuck", ext: ["b", "bf"]},
{name: "C", mime: "text/x-csrc", mode: "clike", ext: ["c", "h"]},
{name: "C++", mime: "text/x-c++src", mode: "clike", ext: ["cpp", "c++", "cc", "cxx", "hpp", "h++", "hh", "hxx"], alias: ["cpp"]},
{name: "Cobol", mime: "text/x-cobol", mode: "cobol", ext: ["cob", "cpy"]},
{name: "C#", mime: "text/x-csharp", mode: "clike", ext: ["cs"], alias: ["csharp"]},
{name: "Clojure", mime: "text/x-clojure", mode: "clojure", ext: ["clj", "cljc", "cljx"]},
{name: "ClojureScript", mime: "text/x-clojurescript", mode: "clojure", ext: ["cljs"]},
{name: "Closure Stylesheets (GSS)", mime: "text/x-gss", mode: "css", ext: ["gss"]},
{name: "CMake", mime: "text/x-cmake", mode: "cmake", ext: ["cmake", "cmake.in"], file: /^CMakeLists.txt$/},
{name: "CoffeeScript", mime: "text/x-coffeescript", mode: "coffeescript", ext: ["coffee"], alias: ["coffee", "coffee-script"]},
{name: "Common Lisp", mime: "text/x-common-lisp", mode: "commonlisp", ext: ["cl", "lisp", "el"], alias: ["lisp"]},
{name: "Cypher", mime: "application/x-cypher-query", mode: "cypher", ext: ["cyp", "cypher"]},
{name: "Cython", mime: "text/x-cython", mode: "python", ext: ["pyx", "pxd", "pxi"]},
{name: "Crystal", mime: "text/x-crystal", mode: "crystal", ext: ["cr"]},
{name: "CSS", mime: "text/css", mode: "css", ext: ["css"]},
{name: "CQL", mime: "text/x-cassandra", mode: "sql", ext: ["cql"]},
{name: "D", mime: "text/x-d", mode: "d", ext: ["d"]},
{name: "Dart", mimes: ["application/dart", "text/x-dart"], mode: "dart", ext: ["dart"]},
{name: "diff", mime: "text/x-diff", mode: "diff", ext: ["diff", "patch"]},
{name: "Django", mime: "text/x-django", mode: "django"},
{name: "Dockerfile", mime: "text/x-dockerfile", mode: "dockerfile", file: /^Dockerfile$/},
{name: "DTD", mime: "application/xml-dtd", mode: "dtd", ext: ["dtd"]},
{name: "Dylan", mime: "text/x-dylan", mode: "dylan", ext: ["dylan", "dyl", "intr"]},
{name: "EBNF", mime: "text/x-ebnf", mode: "ebnf"},
{name: "ECL", mime: "text/x-ecl", mode: "ecl", ext: ["ecl"]},
{name: "edn", mime: "application/edn", mode: "clojure", ext: ["edn"]},
{name: "Eiffel", mime: "text/x-eiffel", mode: "eiffel", ext: ["e"]},
{name: "Elm", mime: "text/x-elm", mode: "elm", ext: ["elm"]},
{name: "Embedded Javascript", mime: "application/x-ejs", mode: "htmlembedded", ext: ["ejs"]},
{name: "Embedded Ruby", mime: "application/x-erb", mode: "htmlembedded", ext: ["erb"]},
{name: "Erlang", mime: "text/x-erlang", mode: "erlang", ext: ["erl"]},
{name: "Factor", mime: "text/x-factor", mode: "factor", ext: ["factor"]},
{name: "FCL", mime: "text/x-fcl", mode: "fcl"},
{name: "Forth", mime: "text/x-forth", mode: "forth", ext: ["forth", "fth", "4th"]},
{name: "Fortran", mime: "text/x-fortran", mode: "fortran", ext: ["f", "for", "f77", "f90"]},
{name: "F#", mime: "text/x-fsharp", mode: "mllike", ext: ["fs"], alias: ["fsharp"]},
{name: "Gas", mime: "text/x-gas", mode: "gas", ext: ["s"]},
{name: "Gherkin", mime: "text/x-feature", mode: "gherkin", ext: ["feature"]},
{name: "GitHub Flavored Markdown", mime: "text/x-gfm", mode: "gfm", file: /^(readme|contributing|history).md$/i},
{name: "Go", mime: "text/x-go", mode: "go", ext: ["go"]},
{name: "Groovy", mime: "text/x-groovy", mode: "groovy", ext: ["groovy", "gradle"], file: /^Jenkinsfile$/},
{name: "HAML", mime: "text/x-haml", mode: "haml", ext: ["haml"]},
{name: "Haskell", mime: "text/x-haskell", mode: "haskell", ext: ["hs"]},
{name: "Haskell (Literate)", mime: "text/x-literate-haskell", mode: "haskell-literate", ext: ["lhs"]},
{name: "Haxe", mime: "text/x-haxe", mode: "haxe", ext: ["hx"]},
{name: "HXML", mime: "text/x-hxml", mode: "haxe", ext: ["hxml"]},
{name: "ASP.NET", mime: "application/x-aspx", mode: "htmlembedded", ext: ["aspx"], alias: ["asp", "aspx"]},
{name: "HTML", mime: "text/html", mode: "htmlmixed", ext: ["html", "htm"], alias: ["xhtml"]},
{name: "HTTP", mime: "message/http", mode: "http"},
{name: "IDL", mime: "text/x-idl", mode: "idl", ext: ["pro"]},
{name: "Pug", mime: "text/x-pug", mode: "pug", ext: ["jade", "pug"], alias: ["jade"]},
{name: "Java", mime: "text/x-java", mode: "clike", ext: ["java"]},
{name: "Java Server Pages", mime: "application/x-jsp", mode: "htmlembedded", ext: ["jsp"], alias: ["jsp"]},
{name: "JavaScript", mimes: ["text/javascript", "text/ecmascript", "application/javascript", "application/x-javascript", "application/ecmascript"],
mode: "javascript", ext: ["js"], alias: ["ecmascript", "js", "node"]},
{name: "JSON", mimes: ["application/json", "application/x-json"], mode: "javascript", ext: ["json", "map"], alias: ["json5"]},
{name: "JSON-LD", mime: "application/ld+json", mode: "javascript", ext: ["jsonld"], alias: ["jsonld"]},
{name: "JSX", mime: "text/jsx", mode: "jsx", ext: ["jsx"]},
{name: "Jinja2", mime: "null", mode: "jinja2"},
{name: "Julia", mime: "text/x-julia", mode: "julia", ext: ["jl"]},
{name: "Kotlin", mime: "text/x-kotlin", mode: "clike", ext: ["kt"]},
{name: "LESS", mime: "text/x-less", mode: "css", ext: ["less"]},
{name: "LiveScript", mime: "text/x-livescript", mode: "livescript", ext: ["ls"], alias: ["ls"]},
{name: "Lua", mime: "text/x-lua", mode: "lua", ext: ["lua"]},
{name: "Markdown", mime: "text/x-markdown", mode: "markdown", ext: ["markdown", "md", "mkd"]},
{name: "mIRC", mime: "text/mirc", mode: "mirc"},
{name: "MariaDB SQL", mime: "text/x-mariadb", mode: "sql"},
{name: "Mathematica", mime: "text/x-mathematica", mode: "mathematica", ext: ["m", "nb"]},
{name: "Modelica", mime: "text/x-modelica", mode: "modelica", ext: ["mo"]},
{name: "MUMPS", mime: "text/x-mumps", mode: "mumps", ext: ["mps"]},
{name: "MS SQL", mime: "text/x-mssql", mode: "sql"},
{name: "mbox", mime: "application/mbox", mode: "mbox", ext: ["mbox"]},
{name: "MySQL", mime: "text/x-mysql", mode: "sql"},
{name: "Nginx", mime: "text/x-nginx-conf", mode: "nginx", file: /nginx.*\.conf$/i},
{name: "NSIS", mime: "text/x-nsis", mode: "nsis", ext: ["nsh", "nsi"]},
{name: "NTriples", mime: "text/n-triples", mode: "ntriples", ext: ["nt"]},
{name: "Objective C", mime: "text/x-objectivec", mode: "clike", ext: ["m", "mm"], alias: ["objective-c", "objc"]},
{name: "OCaml", mime: "text/x-ocaml", mode: "mllike", ext: ["ml", "mli", "mll", "mly"]},
{name: "Octave", mime: "text/x-octave", mode: "octave", ext: ["m"]},
{name: "Oz", mime: "text/x-oz", mode: "oz", ext: ["oz"]},
{name: "Pascal", mime: "text/x-pascal", mode: "pascal", ext: ["p", "pas"]},
{name: "PEG.js", mime: "null", mode: "pegjs", ext: ["jsonld"]},
{name: "Perl", mime: "text/x-perl", mode: "perl", ext: ["pl", "pm"]},
{name: "PHP", mime: "application/x-httpd-php", mode: "php", ext: ["php", "php3", "php4", "php5", "phtml"]},
{name: "Pig", mime: "text/x-pig", mode: "pig", ext: ["pig"]},
{name: "Plain Text", mime: "text/plain", mode: "null", ext: ["txt", "text", "conf", "def", "list", "log"]},
{name: "PLSQL", mime: "text/x-plsql", mode: "sql", ext: ["pls"]},
{name: "PowerShell", mime: "application/x-powershell", mode: "powershell", ext: ["ps1", "psd1", "psm1"]},
{name: "Properties files", mime: "text/x-properties", mode: "properties", ext: ["properties", "ini", "in"], alias: ["ini", "properties"]},
{name: "ProtoBuf", mime: "text/x-protobuf", mode: "protobuf", ext: ["proto"]},
{name: "Python", mime: "text/x-python", mode: "python", ext: ["BUILD", "bzl", "py", "pyw"], file: /^(BUCK|BUILD)$/},
{name: "Puppet", mime: "text/x-puppet", mode: "puppet", ext: ["pp"]},
{name: "Q", mime: "text/x-q", mode: "q", ext: ["q"]},
{name: "R", mime: "text/x-rsrc", mode: "r", ext: ["r", "R"], alias: ["rscript"]},
{name: "reStructuredText", mime: "text/x-rst", mode: "rst", ext: ["rst"], alias: ["rst"]},
{name: "RPM Changes", mime: "text/x-rpm-changes", mode: "rpm"},
{name: "RPM Spec", mime: "text/x-rpm-spec", mode: "rpm", ext: ["spec"]},
{name: "Ruby", mime: "text/x-ruby", mode: "ruby", ext: ["rb"], alias: ["jruby", "macruby", "rake", "rb", "rbx"]},
{name: "Rust", mime: "text/x-rustsrc", mode: "rust", ext: ["rs"]},
{name: "SAS", mime: "text/x-sas", mode: "sas", ext: ["sas"]},
{name: "Sass", mime: "text/x-sass", mode: "sass", ext: ["sass"]},
{name: "Scala", mime: "text/x-scala", mode: "clike", ext: ["scala"]},
{name: "Scheme", mime: "text/x-scheme", mode: "scheme", ext: ["scm", "ss"]},
{name: "SCSS", mime: "text/x-scss", mode: "css", ext: ["scss"]},
{name: "Shell", mime: "text/x-sh", mode: "shell", ext: ["sh", "ksh", "bash"], alias: ["bash", "sh", "zsh"], file: /^PKGBUILD$/},
{name: "Sieve", mime: "application/sieve", mode: "sieve", ext: ["siv", "sieve"]},
{name: "Slim", mimes: ["text/x-slim", "application/x-slim"], mode: "slim", ext: ["slim"]},
{name: "Smalltalk", mime: "text/x-stsrc", mode: "smalltalk", ext: ["st"]},
{name: "Smarty", mime: "text/x-smarty", mode: "smarty", ext: ["tpl"]},
{name: "Solr", mime: "text/x-solr", mode: "solr"},
{name: "Soy", mime: "text/x-soy", mode: "soy", ext: ["soy"], alias: ["closure template"]},
{name: "SPARQL", mime: "application/sparql-query", mode: "sparql", ext: ["rq", "sparql"], alias: ["sparul"]},
{name: "Spreadsheet", mime: "text/x-spreadsheet", mode: "spreadsheet", alias: ["excel", "formula"]},
{name: "SQL", mime: "text/x-sql", mode: "sql", ext: ["sql"]},
{name: "SQLite", mime: "text/x-sqlite", mode: "sql"},
{name: "Squirrel", mime: "text/x-squirrel", mode: "clike", ext: ["nut"]},
{name: "Stylus", mime: "text/x-styl", mode: "stylus", ext: ["styl"]},
{name: "Swift", mime: "text/x-swift", mode: "swift", ext: ["swift"]},
{name: "sTeX", mime: "text/x-stex", mode: "stex"},
{name: "LaTeX", mime: "text/x-latex", mode: "stex", ext: ["text", "ltx"], alias: ["tex"]},
{name: "SystemVerilog", mime: "text/x-systemverilog", mode: "verilog", ext: ["v"]},
{name: "Tcl", mime: "text/x-tcl", mode: "tcl", ext: ["tcl"]},
{name: "Textile", mime: "text/x-textile", mode: "textile", ext: ["textile"]},
{name: "TiddlyWiki ", mime: "text/x-tiddlywiki", mode: "tiddlywiki"},
{name: "Tiki wiki", mime: "text/tiki", mode: "tiki"},
{name: "TOML", mime: "text/x-toml", mode: "toml", ext: ["toml"]},
{name: "Tornado", mime: "text/x-tornado", mode: "tornado"},
{name: "troff", mime: "text/troff", mode: "troff", ext: ["1", "2", "3", "4", "5", "6", "7", "8", "9"]},
{name: "TTCN", mime: "text/x-ttcn", mode: "ttcn", ext: ["ttcn", "ttcn3", "ttcnpp"]},
{name: "TTCN_CFG", mime: "text/x-ttcn-cfg", mode: "ttcn-cfg", ext: ["cfg"]},
{name: "Turtle", mime: "text/turtle", mode: "turtle", ext: ["ttl"]},
{name: "TypeScript", mime: "application/typescript", mode: "javascript", ext: ["ts"], alias: ["ts"]},
{name: "TypeScript-JSX", mime: "text/typescript-jsx", mode: "jsx", ext: ["tsx"], alias: ["tsx"]},
{name: "Twig", mime: "text/x-twig", mode: "twig"},
{name: "Web IDL", mime: "text/x-webidl", mode: "webidl", ext: ["webidl"]},
{name: "VB.NET", mime: "text/x-vb", mode: "vb", ext: ["vb"]},
{name: "VBScript", mime: "text/vbscript", mode: "vbscript", ext: ["vbs"]},
{name: "Velocity", mime: "text/velocity", mode: "velocity", ext: ["vtl"]},
{name: "Verilog", mime: "text/x-verilog", mode: "verilog", ext: ["v"]},
{name: "VHDL", mime: "text/x-vhdl", mode: "vhdl", ext: ["vhd", "vhdl"]},
{name: "Vue.js Component", mimes: ["script/x-vue", "text/x-vue"], mode: "vue", ext: ["vue"]},
{name: "XML", mimes: ["application/xml", "text/xml"], mode: "xml", ext: ["xml", "xsl", "xsd", "svg"], alias: ["rss", "wsdl", "xsd"]},
{name: "XQuery", mime: "application/xquery", mode: "xquery", ext: ["xy", "xquery"]},
{name: "Yacas", mime: "text/x-yacas", mode: "yacas", ext: ["ys"]},
{name: "YAML", mimes: ["text/x-yaml", "text/yaml"], mode: "yaml", ext: ["yaml", "yml"], alias: ["yml"]},
{name: "Z80", mime: "text/x-z80", mode: "z80", ext: ["z80"]},
{name: "mscgen", mime: "text/x-mscgen", mode: "mscgen", ext: ["mscgen", "mscin", "msc"]},
{name: "xu", mime: "text/x-xu", mode: "mscgen", ext: ["xu"]},
{name: "msgenny", mime: "text/x-msgenny", mode: "mscgen", ext: ["msgenny"]}
];
// Ensure all modes have a mime property for backwards compatibility
for (var i = 0; i < CodeMirror.modeInfo.length; i++) {
var info = CodeMirror.modeInfo[i];
if (info.mimes) info.mime = info.mimes[0];
}
CodeMirror.findModeByMIME = function(mime) {
mime = mime.toLowerCase();
for (var i = 0; i < CodeMirror.modeInfo.length; i++) {
var info = CodeMirror.modeInfo[i];
if (info.mime == mime) return info;
if (info.mimes) for (var j = 0; j < info.mimes.length; j++)
if (info.mimes[j] == mime) return info;
}
if (/\+xml$/.test(mime)) return CodeMirror.findModeByMIME("application/xml")
if (/\+json$/.test(mime)) return CodeMirror.findModeByMIME("application/json")
};
CodeMirror.findModeByExtension = function(ext) {
for (var i = 0; i < CodeMirror.modeInfo.length; i++) {
var info = CodeMirror.modeInfo[i];
if (info.ext) for (var j = 0; j < info.ext.length; j++)
if (info.ext[j] == ext) return info;
}
};
CodeMirror.findModeByFileName = function(filename) {
for (var i = 0; i < CodeMirror.modeInfo.length; i++) {
var info = CodeMirror.modeInfo[i];
if (info.file && info.file.test(filename)) return info;
}
var dot = filename.lastIndexOf(".");
var ext = dot > -1 && filename.substring(dot + 1, filename.length);
if (ext) return CodeMirror.findModeByExtension(ext);
};
CodeMirror.findModeByName = function(name) {
name = name.toLowerCase();
for (var i = 0; i < CodeMirror.modeInfo.length; i++) {
var info = CodeMirror.modeInfo[i];
if (info.name.toLowerCase() == name) return info;
if (info.alias) for (var j = 0; j < info.alias.length; j++)
if (info.alias[j].toLowerCase() == name) return info;
}
};
}); | PypiClean |
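The lookup helpers defined above follow a simple precedence: in `findModeByFileName`, a `file` regex match wins over an extension match. A standalone sketch of that logic over a trimmed copy of the mode table:

```javascript
// Trimmed copy of the table; the real one lives on CodeMirror.modeInfo.
const modeInfo = [
  {name: "Dockerfile", mime: "text/x-dockerfile", mode: "dockerfile", file: /^Dockerfile$/},
  {name: "JavaScript", mime: "text/javascript", mode: "javascript", ext: ["js"]},
  {name: "Markdown", mime: "text/x-markdown", mode: "markdown", ext: ["markdown", "md", "mkd"]}
];

function findModeByExtension(ext) {
  return modeInfo.find(info => info.ext && info.ext.indexOf(ext) > -1);
}

function findModeByFileName(filename) {
  // A whole-file-name pattern (like /^Dockerfile$/) wins over the extension.
  const byPattern = modeInfo.find(info => info.file && info.file.test(filename));
  if (byPattern) return byPattern;
  const dot = filename.lastIndexOf(".");
  if (dot > -1) return findModeByExtension(filename.substring(dot + 1));
}

console.log(findModeByFileName("Dockerfile").mode);  // dockerfile
console.log(findModeByFileName("README.md").mode);   // markdown
```

This is why `Dockerfile` resolves to the Dockerfile mode even though it has no extension, while `README.md` falls through to the extension table.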
/observations-0.1.4.tar.gz/observations-0.1.4/observations/r/dactyl.py | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import csv
import numpy as np
import os
import sys
from observations.util import maybe_download_and_extract
def dactyl(path):
"""Edgeworth's counts of dactyls in Virgil's Aeneid
Edgeworth (1885) took the first 75 lines in Book XI of Virgil's *Aeneid*
and classified each of the first four "feet" of the line as a dactyl
(one long syllable followed by two short ones) or not.
Grouping the lines in blocks of five gave a 4 x 25 table of counts,
represented here as a data frame with ordered factors, `Foot` and
`Lines`. Edgeworth used this table in what was among the first
examples of analysis of variance applied to a two-way classification.
A data frame with 60 observations on the following 3 variables.
`Foot`
an ordered factor with levels `1` < `2` < `3` < `4`
`Lines`
an ordered factor with levels `1:5` < `6:10` < `11:15` <
`16:20` < `21:25` < `26:30` < `31:35` < `36:40` <
`41:45` < `46:50` < `51:55` < `56:60` < `61:65` <
`66:70` < `71:75`
`count`
number of dactyls
Stigler, S. (1999) *Statistics on the Table* Cambridge, MA: Harvard
University Press, table 5.1.
Args:
path: str.
Path to directory which either stores file or otherwise file will
be downloaded and extracted there.
Filename is `dactyl.csv`.
Returns:
Tuple of np.ndarray `x_train` with 60 rows and 3 columns and
dictionary `metadata` of column headers (feature names).
"""
import pandas as pd
path = os.path.expanduser(path)
filename = 'dactyl.csv'
if not os.path.exists(os.path.join(path, filename)):
url = 'http://dustintran.com/data/r/HistData/Dactyl.csv'
maybe_download_and_extract(path, url,
save_file_name='dactyl.csv',
resume=False)
data = pd.read_csv(os.path.join(path, filename), index_col=0,
parse_dates=True)
x_train = data.values
metadata = {'columns': data.columns}
return x_train, metadata | PypiClean |
/Argonaut-0.3.4.tar.gz/Argonaut-0.3.4/argonaut/public/ckeditor/_source/lang/km.js | /*
Copyright (c) 2003-2010, CKSource - Frederico Knabben. All rights reserved.
For licensing, see LICENSE.html or http://ckeditor.com/license
*/
/**
* @fileOverview Defines the {@link CKEDITOR.lang} object, for the
* Khmer language.
*/
/**#@+
@type String
@example
*/
/**
* Constains the dictionary of language entries.
* @namespace
*/
CKEDITOR.lang['km'] =
{
/**
* The language reading direction. Possible values are "rtl" for
* Right-To-Left languages (like Arabic) and "ltr" for Left-To-Right
* languages (like English).
* @default 'ltr'
*/
dir : 'ltr',
/*
* Screenreader titles. Please note that screenreaders are not always capable
* of reading non-English words. So be careful while translating it.
*/
editorTitle : 'Rich text editor, %1, press ALT 0 for help.', // MISSING
// ARIA descriptions.
toolbar : 'Toolbar', // MISSING
editor : 'Rich Text Editor', // MISSING
// Toolbar buttons without dialogs.
source : 'កូត',
newPage : 'ទំព័រថ្មី',
save : 'រក្សាទុក',
preview : 'មើលសាកល្បង',
cut : 'កាត់យក',
copy : 'ចំលងយក',
paste : 'ចំលងដាក់',
print : 'បោះពុម្ភ',
underline : 'ដិតបន្ទាត់ពីក្រោមអក្សរ',
bold : 'អក្សរដិតធំ',
italic : 'អក្សរផ្តេក',
selectAll : 'ជ្រើសរើសទាំងអស់',
removeFormat : 'លប់ចោល ការរចនា',
strike : 'ដិតបន្ទាត់ពាក់កណ្តាលអក្សរ',
subscript : 'អក្សរតូចក្រោម',
superscript : 'អក្សរតូចលើ',
horizontalrule : 'បន្ថែមបន្ទាត់ផ្តេក',
pagebreak : 'បន្ថែម ការផ្តាច់ទំព័រ',
unlink : 'លប់ឈ្នាប់',
undo : 'សារឡើងវិញ',
redo : 'ធ្វើឡើងវិញ',
// Common messages and labels.
common :
{
browseServer : 'មើល',
url : 'URL',
protocol : 'ប្រូតូកូល',
upload : 'ទាញយក',
uploadSubmit : 'បញ្ជូនទៅកាន់ម៉ាស៊ីនផ្តល់សេវា',
image : 'រូបភាព',
flash : 'Flash',
form : 'បែបបទ',
checkbox : 'ប្រអប់ជ្រើសរើស',
radio : 'ប៉ូតុនរង្វង់មូល',
textField : 'ជួរសរសេរអត្ថបទ',
textarea : 'តំបន់សរសេរអត្ថបទ',
hiddenField : 'ជួរលាក់',
button : 'ប៉ូតុន',
select : 'ជួរជ្រើសរើស',
imageButton : 'ប៉ូតុនរូបភាព',
notSet : '<មិនមែន>',
id : 'Id',
name : 'ឈ្មោះ',
langDir : 'ទិសដៅភាសា',
langDirLtr : 'ពីឆ្វេងទៅស្តាំ(LTR)',
langDirRtl : 'ពីស្តាំទៅឆ្វេង(RTL)',
langCode : 'លេខកូតភាសា',
longDescr : 'អធិប្បាយ URL វែង',
cssClass : 'Stylesheet Classes',
advisoryTitle : 'ចំណងជើង ប្រឹក្សា',
cssStyle : 'ម៉ូត',
ok : 'យល់ព្រម',
cancel : 'មិនយល់ព្រម',
close : 'Close', // MISSING
preview : 'Preview', // MISSING
generalTab : 'General', // MISSING
advancedTab : 'កំរិតខ្ពស់',
validateNumberFailed : 'This value is not a number.', // MISSING
confirmNewPage : 'Any unsaved changes to this content will be lost. Are you sure you want to load new page?', // MISSING
confirmCancel : 'Some of the options have been changed. Are you sure to close the dialog?', // MISSING
options : 'Options', // MISSING
target : 'Target', // MISSING
targetNew : 'New Window (_blank)', // MISSING
targetTop : 'Topmost Window (_top)', // MISSING
targetSelf : 'Same Window (_self)', // MISSING
targetParent : 'Parent Window (_parent)', // MISSING
langDirLTR : 'Left to Right (LTR)', // MISSING
langDirRTL : 'Right to Left (RTL)', // MISSING
styles : 'Style', // MISSING
cssClasses : 'Stylesheet Classes', // MISSING
// Put the voice-only part of the label in the span.
unavailable : '%1<span class="cke_accessibility">, unavailable</span>' // MISSING
},
contextmenu :
{
options : 'Context Menu Options' // MISSING
},
// Special char dialog.
specialChar :
{
toolbar : 'បន្ថែមអក្សរពិសេស',
title : 'តូអក្សរពិសេស',
options : 'Special Character Options' // MISSING
},
// Link dialog.
link :
{
toolbar : 'បន្ថែម/កែប្រែ ឈ្នាប់',
other : '<other>', // MISSING
menu : 'កែប្រែឈ្នាប់',
title : 'ឈ្នាប់',
info : 'ពត៌មានអំពីឈ្នាប់',
target : 'គោលដៅ',
upload : 'ទាញយក',
advanced : 'កំរិតខ្ពស់',
type : 'ប្រភេទឈ្នាប់',
toUrl : 'URL', // MISSING
toAnchor : 'យុថ្កានៅក្នុងទំព័រនេះ',
toEmail : 'អ៊ីមែល',
targetFrame : '<ហ្វ្រេម>',
targetPopup : '<វីនដូវ លោត>',
targetFrameName : 'ឈ្មោះហ្រ្វេមដែលជាគោលដៅ',
targetPopupName : 'ឈ្មោះវីនដូវលោត',
popupFeatures : 'លក្ខណះរបស់វីនដូលលោត',
popupResizable : 'Resizable', // MISSING
popupStatusBar : 'របា ពត៌មាន',
popupLocationBar: 'របា ទីតាំង',
popupToolbar : 'របា ឩបករណ៍',
popupMenuBar : 'របា មឺនុយ',
popupFullScreen : 'អេក្រុងពេញ(IE)',
popupScrollBars : 'របា ទាញ',
popupDependent : 'អាស្រ័យលើ (Netscape)',
popupWidth : 'ទទឹង',
popupLeft : 'ទីតាំងខាងឆ្វេង',
popupHeight : 'កំពស់',
popupTop : 'ទីតាំងខាងលើ',
id : 'Id', // MISSING
langDir : 'ទិសដៅភាសា',
langDirLTR : 'ពីឆ្វេងទៅស្តាំ(LTR)',
langDirRTL : 'ពីស្តាំទៅឆ្វេង(RTL)',
acccessKey : 'ឃី សំរាប់ចូល',
name : 'ឈ្មោះ',
langCode : 'ទិសដៅភាសា',
tabIndex : 'លេខ Tab',
advisoryTitle : 'ចំណងជើង ប្រឹក្សា',
advisoryContentType : 'ប្រភេទអត្ថបទ ប្រឹក្សា',
cssClasses : 'Stylesheet Classes',
charset : 'លេខកូតអក្សររបស់ឈ្នាប់',
styles : 'ម៉ូត',
selectAnchor : 'ជ្រើសរើសយុថ្កា',
anchorName : 'តាមឈ្មោះរបស់យុថ្កា',
anchorId : 'តាម Id',
emailAddress : 'អ៊ីមែល',
emailSubject : 'ចំណងជើងអត្ថបទ',
emailBody : 'អត្ថបទ',
noAnchors : '(No anchors available in the document)', // MISSING
noUrl : 'សូមសរសេរ អាស័យដ្ឋាន URL',
noEmail : 'សូមសរសេរ អាស័យដ្ឋាន អ៊ីមែល'
},
// Anchor dialog
anchor :
{
toolbar : 'បន្ថែម/កែប្រែ យុថ្កា',
menu : 'ការកំណត់យុថ្កា',
title : 'ការកំណត់យុថ្កា',
name : 'ឈ្មោះយុទ្ធថ្កា',
errorName : 'សូមសរសេរ ឈ្មោះយុទ្ធថ្កា'
},
// List style dialog
list:
{
numberedTitle : 'Numbered List Properties', // MISSING
bulletedTitle : 'Bulleted List Properties', // MISSING
type : 'Type', // MISSING
start : 'Start', // MISSING
validateStartNumber :'List start number must be a whole number.', // MISSING
circle : 'Circle', // MISSING
disc : 'Disc', // MISSING
square : 'Square', // MISSING
none : 'None', // MISSING
notset : '<not set>', // MISSING
armenian : 'Armenian numbering', // MISSING
georgian : 'Georgian numbering (an, ban, gan, etc.)', // MISSING
lowerRoman : 'Lower Roman (i, ii, iii, iv, v, etc.)', // MISSING
upperRoman : 'Upper Roman (I, II, III, IV, V, etc.)', // MISSING
lowerAlpha : 'Lower Alpha (a, b, c, d, e, etc.)', // MISSING
upperAlpha : 'Upper Alpha (A, B, C, D, E, etc.)', // MISSING
lowerGreek : 'Lower Greek (alpha, beta, gamma, etc.)', // MISSING
decimal : 'Decimal (1, 2, 3, etc.)', // MISSING
decimalLeadingZero : 'Decimal leading zero (01, 02, 03, etc.)' // MISSING
},
// Find And Replace Dialog
findAndReplace :
{
title : 'Find and Replace', // MISSING
find : 'ស្វែងរក',
replace : 'ជំនួស',
findWhat : 'ស្វែងរកអ្វី:',
replaceWith : 'ជំនួសជាមួយ:',
notFoundMsg : 'ពាក្យនេះ រកមិនឃើញទេ ។',
matchCase : 'ករណ៉ត្រូវរក',
matchWord : 'ត្រូវពាក្យទាំងអស់',
matchCyclic : 'Match cyclic', // MISSING
replaceAll : 'ជំនួសទាំងអស់',
replaceSuccessMsg : '%1 occurrence(s) replaced.' // MISSING
},
// Table Dialog
table :
{
toolbar : 'តារាង',
title : 'ការកំណត់ តារាង',
menu : 'ការកំណត់ តារាង',
deleteTable : 'លប់តារាង',
rows : 'ជួរផ្តេក',
columns : 'ជួរឈរ',
border : 'ទំហំស៊ុម',
align : 'ការកំណត់ទីតាំង',
alignLeft : 'ខាងឆ្វេង',
alignCenter : 'កណ្តាល',
alignRight : 'ខាងស្តាំ',
width : 'ទទឹង',
widthPx : 'ភីកសែល',
widthPc : 'ភាគរយ',
widthUnit : 'width unit', // MISSING
height : 'កំពស់',
cellSpace : 'គំលាតសែល',
cellPad : 'គែមសែល',
caption : 'ចំណងជើង',
summary : 'សេចក្តីសង្ខេប',
headers : 'Headers', // MISSING
headersNone : 'None', // MISSING
headersColumn : 'First column', // MISSING
headersRow : 'First Row', // MISSING
headersBoth : 'Both', // MISSING
invalidRows : 'Number of rows must be a number greater than 0.', // MISSING
invalidCols : 'Number of columns must be a number greater than 0.', // MISSING
invalidBorder : 'Border size must be a number.', // MISSING
invalidWidth : 'Table width must be a number.', // MISSING
invalidHeight : 'Table height must be a number.', // MISSING
invalidCellSpacing : 'Cell spacing must be a number.', // MISSING
invalidCellPadding : 'Cell padding must be a number.', // MISSING
cell :
{
menu : 'Cell', // MISSING
insertBefore : 'Insert Cell Before', // MISSING
insertAfter : 'Insert Cell After', // MISSING
deleteCell : 'លប់សែល',
merge : 'បញ្ជូលសែល',
mergeRight : 'Merge Right', // MISSING
mergeDown : 'Merge Down', // MISSING
splitHorizontal : 'Split Cell Horizontally', // MISSING
splitVertical : 'Split Cell Vertically', // MISSING
title : 'Cell Properties', // MISSING
cellType : 'Cell Type', // MISSING
rowSpan : 'Rows Span', // MISSING
colSpan : 'Columns Span', // MISSING
wordWrap : 'Word Wrap', // MISSING
hAlign : 'Horizontal Alignment', // MISSING
vAlign : 'Vertical Alignment', // MISSING
alignTop : 'Top', // MISSING
alignMiddle : 'Middle', // MISSING
alignBottom : 'Bottom', // MISSING
alignBaseline : 'Baseline', // MISSING
bgColor : 'Background Color', // MISSING
borderColor : 'Border Color', // MISSING
data : 'Data', // MISSING
header : 'Header', // MISSING
yes : 'Yes', // MISSING
no : 'No', // MISSING
invalidWidth : 'Cell width must be a number.', // MISSING
invalidHeight : 'Cell height must be a number.', // MISSING
invalidRowSpan : 'Rows span must be a whole number.', // MISSING
invalidColSpan : 'Columns span must be a whole number.', // MISSING
chooseColor : 'Choose' // MISSING
},
row :
{
menu : 'Row', // MISSING
insertBefore : 'Insert Row Before', // MISSING
insertAfter : 'Insert Row After', // MISSING
deleteRow : 'លប់ជួរផ្តេក'
},
column :
{
menu : 'Column', // MISSING
insertBefore : 'Insert Column Before', // MISSING
insertAfter : 'Insert Column After', // MISSING
deleteColumn : 'លប់ជួរឈរ'
}
},
// Button Dialog.
button :
{
title : 'ការកំណត់ ប៉ូតុន',
text : 'អត្ថបទ(តំលៃ)',
type : 'ប្រភេទ',
typeBtn : 'Button', // MISSING
typeSbm : 'Submit', // MISSING
typeRst : 'Reset' // MISSING
},
// Checkbox and Radio Button Dialogs.
checkboxAndRadio :
{
checkboxTitle : 'ការកំណត់ប្រអប់ជ្រើសរើស',
radioTitle : 'ការកំណត់ប៉ូតុនរង្វង់',
value : 'តំលៃ',
selected : 'បានជ្រើសរើស'
},
// Form Dialog.
form :
{
title : 'ការកំណត់បែបបទ',
menu : 'ការកំណត់បែបបទ',
action : 'សកម្មភាព',
method : 'វិធី',
encoding : 'Encoding' // MISSING
},
// Select Field Dialog.
select :
{
title : 'ការកំណត់ជួរជ្រើសរើស',
selectInfo : 'ពត៌មាន',
opAvail : 'ការកំណត់ជ្រើសរើស ដែលអាចកំណត់បាន',
value : 'តំលៃ',
size : 'ទំហំ',
lines : 'បន្ទាត់',
chkMulti : 'អនុញ្ញាតអោយជ្រើសរើសច្រើន',
opText : 'ពាក្យ',
opValue : 'តំលៃ',
btnAdd : 'បន្ថែម',
btnModify : 'ផ្លាស់ប្តូរ',
btnUp : 'លើ',
btnDown : 'ក្រោម',
btnSetValue : 'Set as selected value', // MISSING
btnDelete : 'លប់'
},
// Textarea Dialog.
textarea :
{
title : 'ការកំណត់កន្លែងសរសេរអត្ថបទ',
cols : 'ជូរឈរ',
rows : 'ជូរផ្តេក'
},
// Text Field Dialog.
textfield :
{
title : 'ការកំណត់ជួរអត្ថបទ',
name : 'ឈ្មោះ',
value : 'តំលៃ',
charWidth : 'ទទឹង អក្សរ',
maxChars : 'អក្សរអតិបរិមា',
type : 'ប្រភេទ',
typeText : 'ពាក្យ',
typePass : 'ពាក្យសំងាត់'
},
// Hidden Field Dialog.
hidden :
{
title : 'ការកំណត់ជួរលាក់',
name : 'ឈ្មោះ',
value : 'តំលៃ'
},
// Image Dialog.
image :
{
title : 'ការកំណត់រូបភាព',
titleButton : 'ការកំណត់ប៉ូតុនរូបភាព',
menu : 'ការកំណត់រូបភាព',
infoTab : 'ពត៌មានអំពីរូបភាព',
btnUpload : 'បញ្ជូនទៅកាន់ម៉ាស៊ីនផ្តល់សេវា',
upload : 'ទាញយក',
alt : 'អត្ថបទជំនួស',
width : 'ទទឹង',
height : 'កំពស់',
lockRatio : 'អត្រាឡុក',
unlockRatio : 'Unlock Ratio', // MISSING
resetSize : 'កំណត់ទំហំឡើងវិញ',
border : 'ស៊ុម',
hSpace : 'គំលាតទទឹង',
vSpace : 'គំលាតបណ្តោយ',
align : 'កំណត់ទីតាំង',
alignLeft : 'ខាងឆ្វង',
alignRight : 'ខាងស្តាំ',
alertUrl : 'សូមសរសេរងាស័យដ្ឋានរបស់រូបភាព',
linkTab : 'ឈ្នាប់',
button2Img : 'Do you want to transform the selected image button on a simple image?', // MISSING
img2Button : 'Do you want to transform the selected image on a image button?', // MISSING
urlMissing : 'Image source URL is missing.', // MISSING
validateWidth : 'Width must be a whole number.', // MISSING
validateHeight : 'Height must be a whole number.', // MISSING
validateBorder : 'Border must be a whole number.', // MISSING
validateHSpace : 'HSpace must be a whole number.', // MISSING
validateVSpace : 'VSpace must be a whole number.' // MISSING
},
// Flash Dialog
flash :
{
properties : 'ការកំណត់ Flash',
propertiesTab : 'Properties', // MISSING
title : 'ការកំណត់ Flash',
chkPlay : 'លេងដោយស្វ័យប្រវត្ត',
chkLoop : 'ចំនួនដង',
chkMenu : 'បង្ហាញ មឺនុយរបស់ Flash',
chkFull : 'Allow Fullscreen', // MISSING
scale : 'ទំហំ',
scaleAll : 'បង្ហាញទាំងអស់',
scaleNoBorder : 'មិនបង្ហាញស៊ុម',
scaleFit : 'ត្រូវល្មម',
access : 'Script Access', // MISSING
accessAlways : 'Always', // MISSING
accessSameDomain: 'Same domain', // MISSING
accessNever : 'Never', // MISSING
align : 'កំណត់ទីតាំង',
alignLeft : 'ខាងឆ្វង',
alignAbsBottom : 'Abs Bottom', // MISSING
alignAbsMiddle : 'Abs Middle', // MISSING
alignBaseline : 'បន្ទាត់ជាមូលដ្ឋាន',
alignBottom : 'ខាងក្រោម',
alignMiddle : 'កណ្តាល',
alignRight : 'ខាងស្តាំ',
alignTextTop : 'លើអត្ថបទ',
alignTop : 'ខាងលើ',
quality : 'Quality', // MISSING
qualityBest : 'Best', // MISSING
qualityHigh : 'High', // MISSING
qualityAutoHigh : 'Auto High', // MISSING
qualityMedium : 'Medium', // MISSING
qualityAutoLow : 'Auto Low', // MISSING
qualityLow : 'Low', // MISSING
windowModeWindow: 'Window', // MISSING
windowModeOpaque: 'Opaque', // MISSING
windowModeTransparent : 'Transparent', // MISSING
windowMode : 'Window mode', // MISSING
flashvars : 'Variables for Flash', // MISSING
bgcolor : 'ពណ៌ផ្ទៃខាងក្រោយ',
width : 'ទទឹង',
height : 'កំពស់',
hSpace : 'គំលាតទទឹង',
vSpace : 'គំលាតបណ្តោយ',
validateSrc : 'សូមសរសេរ អាស័យដ្ឋាន URL',
validateWidth : 'Width must be a number.', // MISSING
validateHeight : 'Height must be a number.', // MISSING
validateHSpace : 'HSpace must be a number.', // MISSING
validateVSpace : 'VSpace must be a number.' // MISSING
},
// Speller Pages Dialog
spellCheck :
{
toolbar : 'ពិនិត្យអក្ខរាវិរុទ្ធ',
title : 'Spell Check', // MISSING
notAvailable : 'Sorry, but service is unavailable now.', // MISSING
errorLoading : 'Error loading application service host: %s.', // MISSING
notInDic : 'គ្មានក្នុងវចនានុក្រម',
changeTo : 'ផ្លាស់ប្តូរទៅ',
btnIgnore : 'មិនផ្លាស់ប្តូរ',
btnIgnoreAll : 'មិនផ្លាស់ប្តូរ ទាំងអស់',
btnReplace : 'ជំនួស',
btnReplaceAll : 'ជំនួសទាំងអស់',
btnUndo : 'សារឡើងវិញ',
noSuggestions : '- គ្មានសំណើរ -',
progress : 'កំពុងពិនិត្យអក្ខរាវិរុទ្ធ...',
noMispell : 'ការពិនិត្យអក្ខរាវិរុទ្ធបានចប់: គ្មានកំហុស',
noChanges : 'ការពិនិត្យអក្ខរាវិរុទ្ធបានចប់: ពុំមានផ្លាស់ប្តូរ',
oneChange : 'ការពិនិត្យអក្ខរាវិរុទ្ធបានចប់: ពាក្យមួយត្រូចបានផ្លាស់ប្តូរ',
manyChanges : 'ការពិនិត្យអក្ខរាវិរុទ្ធបានចប់: %1 ពាក្យបានផ្លាស់ប្តូរ',
ieSpellDownload : 'ពុំមានកម្មវិធីពិនិត្យអក្ខរាវិរុទ្ធ ។ តើចង់ទាញយកពីណា?'
},
smiley :
{
toolbar : 'រូបភាព',
title : 'បញ្ជូលរូបភាព',
options : 'Smiley Options' // MISSING
},
elementsPath :
{
eleLabel : 'Elements path', // MISSING
eleTitle : '%1 element' // MISSING
},
numberedlist : 'បញ្ជីជាអក្សរ',
bulletedlist : 'បញ្ជីជារង្វង់មូល',
indent : 'បន្ថែមការចូលបន្ទាត់',
outdent : 'បន្ថយការចូលបន្ទាត់',
justify :
{
left : 'តំរឹមឆ្វេង',
center : 'តំរឹមកណ្តាល',
right : 'តំរឹមស្តាំ',
block : 'តំរឹមសងខាង'
},
blockquote : 'Block Quote', // MISSING
clipboard :
{
title : 'ចំលងដាក់',
cutError : 'ការកំណត់សុវត្ថភាពរបស់កម្មវិធីរុករករបស់លោកអ្នក នេះ\u200bមិនអាចធ្វើកម្មវិធីតាក់តែងអត្ថបទ កាត់អត្ថបទយកដោយស្វ័យប្រវត្តបានឡើយ ។ សូមប្រើប្រាស់បន្សំ ឃីដូចនេះ (Ctrl/Cmd+X) ។',
copyError : 'ការកំណត់សុវត្ថភាពរបស់កម្មវិធីរុករករបស់លោកអ្នក នេះ\u200bមិនអាចធ្វើកម្មវិធីតាក់តែងអត្ថបទ ចំលងអត្ថបទយកដោយស្វ័យប្រវត្តបានឡើយ ។ សូមប្រើប្រាស់បន្សំ ឃីដូចនេះ (Ctrl/Cmd+C)។',
pasteMsg : 'សូមចំលងអត្ថបទទៅដាក់ក្នុងប្រអប់ដូចខាងក្រោមដោយប្រើប្រាស់ ឃី \u200b(<STRONG>Ctrl/Cmd+V</STRONG>) ហើយចុច <STRONG>OK</STRONG> ។',
securityMsg : 'Because of your browser security settings, the editor is not able to access your clipboard data directly. You are required to paste it again in this window.', // MISSING
pasteArea : 'Paste Area' // MISSING
},
pastefromword :
{
confirmCleanup : 'The text you want to paste seems to be copied from Word. Do you want to clean it before pasting?', // MISSING
toolbar : 'ចំលងដាក់ពី Word',
title : 'ចំលងដាក់ពី Word',
error : 'It was not possible to clean up the pasted data due to an internal error' // MISSING
},
pasteText :
{
button : 'ចំលងដាក់អត្ថបទធម្មតា',
title : 'ចំលងដាក់អត្ថបទធម្មតា'
},
templates :
{
button : 'ឯកសារគំរូ',
title : 'ឯកសារគំរូ របស់អត្ថន័យ',
options : 'Template Options', // MISSING
insertOption : 'Replace actual contents', // MISSING
selectPromptMsg : 'សូមជ្រើសរើសឯកសារគំរូ ដើម្បីបើកនៅក្នុងកម្មវិធីតាក់តែងអត្ថបទ<br>(អត្ថបទនឹងបាត់បង់):',
emptyListMsg : '(ពុំមានឯកសារគំរូត្រូវបានកំណត់)'
},
showBlocks : 'Show Blocks', // MISSING
stylesCombo :
{
label : 'ម៉ូត',
panelTitle : 'Formatting Styles', // MISSING
panelTitle1 : 'Block Styles', // MISSING
panelTitle2 : 'Inline Styles', // MISSING
panelTitle3 : 'Object Styles' // MISSING
},
format :
{
label : 'រចនា',
panelTitle : 'រចនា',
tag_p : 'Normal',
tag_pre : 'Formatted',
tag_address : 'Address',
tag_h1 : 'Heading 1',
tag_h2 : 'Heading 2',
tag_h3 : 'Heading 3',
tag_h4 : 'Heading 4',
tag_h5 : 'Heading 5',
tag_h6 : 'Heading 6',
tag_div : 'Normal (DIV)'
},
div :
{
title : 'Create Div Container', // MISSING
toolbar : 'Create Div Container', // MISSING
cssClassInputLabel : 'Stylesheet Classes', // MISSING
styleSelectLabel : 'Style', // MISSING
IdInputLabel : 'Id', // MISSING
languageCodeInputLabel : ' Language Code', // MISSING
inlineStyleInputLabel : 'Inline Style', // MISSING
advisoryTitleInputLabel : 'Advisory Title', // MISSING
langDirLabel : 'Language Direction', // MISSING
langDirLTRLabel : 'Left to Right (LTR)', // MISSING
langDirRTLLabel : 'Right to Left (RTL)', // MISSING
edit : 'Edit Div', // MISSING
remove : 'Remove Div' // MISSING
},
font :
{
label : 'ហ្វុង',
voiceLabel : 'Font', // MISSING
panelTitle : 'ហ្វុង'
},
fontSize :
{
label : 'ទំហំ',
voiceLabel : 'Font Size', // MISSING
panelTitle : 'ទំហំ'
},
colorButton :
{
textColorTitle : 'ពណ៌អក្សរ',
bgColorTitle : 'ពណ៌ផ្ទៃខាងក្រោយ',
panelTitle : 'Colors', // MISSING
auto : 'ស្វ័យប្រវត្ត',
more : 'ពណ៌ផ្សេងទៀត..'
},
colors :
{
'000' : 'Black', // MISSING
'800000' : 'Maroon', // MISSING
'8B4513' : 'Saddle Brown', // MISSING
'2F4F4F' : 'Dark Slate Gray', // MISSING
'008080' : 'Teal', // MISSING
'000080' : 'Navy', // MISSING
'4B0082' : 'Indigo', // MISSING
'696969' : 'Dark Gray', // MISSING
'B22222' : 'Fire Brick', // MISSING
'A52A2A' : 'Brown', // MISSING
'DAA520' : 'Golden Rod', // MISSING
'006400' : 'Dark Green', // MISSING
'40E0D0' : 'Turquoise', // MISSING
'0000CD' : 'Medium Blue', // MISSING
'800080' : 'Purple', // MISSING
'808080' : 'Gray', // MISSING
'F00' : 'Red', // MISSING
'FF8C00' : 'Dark Orange', // MISSING
'FFD700' : 'Gold', // MISSING
'008000' : 'Green', // MISSING
'0FF' : 'Cyan', // MISSING
'00F' : 'Blue', // MISSING
'EE82EE' : 'Violet', // MISSING
'A9A9A9' : 'Dim Gray', // MISSING
'FFA07A' : 'Light Salmon', // MISSING
'FFA500' : 'Orange', // MISSING
'FFFF00' : 'Yellow', // MISSING
'00FF00' : 'Lime', // MISSING
'AFEEEE' : 'Pale Turquoise', // MISSING
'ADD8E6' : 'Light Blue', // MISSING
'DDA0DD' : 'Plum', // MISSING
'D3D3D3' : 'Light Grey', // MISSING
'FFF0F5' : 'Lavender Blush', // MISSING
'FAEBD7' : 'Antique White', // MISSING
'FFFFE0' : 'Light Yellow', // MISSING
'F0FFF0' : 'Honeydew', // MISSING
'F0FFFF' : 'Azure', // MISSING
'F0F8FF' : 'Alice Blue', // MISSING
'E6E6FA' : 'Lavender', // MISSING
'FFF' : 'White' // MISSING
},
scayt :
{
title : 'Spell Check As You Type', // MISSING
opera_title : 'Not supported by Opera', // MISSING
enable : 'Enable SCAYT', // MISSING
disable : 'Disable SCAYT', // MISSING
about : 'About SCAYT', // MISSING
toggle : 'Toggle SCAYT', // MISSING
options : 'Options', // MISSING
langs : 'Languages', // MISSING
moreSuggestions : 'More suggestions', // MISSING
ignore : 'Ignore', // MISSING
ignoreAll : 'Ignore All', // MISSING
addWord : 'Add Word', // MISSING
emptyDic : 'Dictionary name should not be empty.', // MISSING
optionsTab : 'Options', // MISSING
allCaps : 'Ignore All-Caps Words', // MISSING
ignoreDomainNames : 'Ignore Domain Names', // MISSING
mixedCase : 'Ignore Words with Mixed Case', // MISSING
mixedWithDigits : 'Ignore Words with Numbers', // MISSING
languagesTab : 'Languages', // MISSING
dictionariesTab : 'Dictionaries', // MISSING
dic_field_name : 'Dictionary name', // MISSING
dic_create : 'Create', // MISSING
dic_restore : 'Restore', // MISSING
dic_delete : 'Delete', // MISSING
dic_rename : 'Rename', // MISSING
dic_info : 'Initially the User Dictionary is stored in a Cookie. However, Cookies are limited in size. When the User Dictionary grows to a point where it cannot be stored in a Cookie, then the dictionary may be stored on our server. To store your personal dictionary on our server you should specify a name for your dictionary. If you already have a stored dictionary, please type it\'s name and click the Restore button.', // MISSING
aboutTab : 'About' // MISSING
},
about :
{
title : 'About CKEditor', // MISSING
dlgTitle : 'About CKEditor', // MISSING
moreInfo : 'For licensing information please visit our web site:', // MISSING
copy : 'Copyright © $1. All rights reserved.' // MISSING
},
maximize : 'Maximize', // MISSING
minimize : 'Minimize', // MISSING
fakeobjects :
{
anchor : 'Anchor', // MISSING
flash : 'Flash Animation', // MISSING
div : 'Page Break', // MISSING
unknown : 'Unknown Object' // MISSING
},
resize : 'Drag to resize', // MISSING
colordialog :
{
title : 'Select color', // MISSING
options : 'Color Options', // MISSING
highlight : 'Highlight', // MISSING
selected : 'Selected Color', // MISSING
clear : 'Clear' // MISSING
},
toolbarCollapse : 'Collapse Toolbar', // MISSING
toolbarExpand : 'Expand Toolbar', // MISSING
bidi :
{
ltr : 'Text direction from left to right', // MISSING
rtl : 'Text direction from right to left' // MISSING
}
};
# /MapProxy-1.16.0.tar.gz/MapProxy-1.16.0/mapproxy/image/mask.py
from mapproxy.compat.image import Image, ImageDraw
from mapproxy.srs import SRS, make_lin_transf
from mapproxy.image import ImageSource
from mapproxy.image.opts import create_image
from mapproxy.util.geom import flatten_to_polygons
def mask_image_source_from_coverage(img_source, bbox, bbox_srs, coverage,
image_opts=None):
if image_opts is None:
image_opts = img_source.image_opts
img = img_source.as_image()
img = mask_image(img, bbox, bbox_srs, coverage)
result = create_image(img.size, image_opts)
result.paste(img, (0, 0), img)
return ImageSource(result, image_opts=image_opts)
def mask_image(img, bbox, bbox_srs, coverage):
geom = mask_polygons(bbox, SRS(bbox_srs), coverage)
mask = image_mask_from_geom(img.size, bbox, geom)
img = img.convert('RGBA')
img.paste((255, 255, 255, 0), (0, 0), mask)
return img
def mask_polygons(bbox, bbox_srs, coverage):
coverage = coverage.transform_to(bbox_srs)
coverage = coverage.intersection(bbox, bbox_srs)
return flatten_to_polygons(coverage.geom)
def image_mask_from_geom(size, bbox, polygons):
mask = Image.new('L', size, 255)
if len(polygons) == 0:
return mask
transf = make_lin_transf(bbox, (0, 0) + size)
# use negative ~.1 pixel buffer
buffer = -0.1 * min((bbox[2] - bbox[0]) / size[0], (bbox[3] - bbox[1]) / size[1])
draw = ImageDraw.Draw(mask)
def draw_polygon(p):
draw.polygon([transf(coord) for coord in p.exterior.coords], fill=0)
for ring in p.interiors:
draw.polygon([transf(coord) for coord in ring.coords], fill=255)
for p in polygons:
        # a slightly smaller polygon does not include touched pixels outside the coverage
buffered = p.buffer(buffer, resolution=1, join_style=2)
if buffered.is_empty: # can be empty after negative buffer
continue
if buffered.type == 'MultiPolygon':
# negative buffer can turn polygon into multipolygon
for p in buffered:
draw_polygon(p)
else:
draw_polygon(buffered)
    return mask
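`make_lin_transf` (imported above from `mapproxy.srs`) supplies the bbox-to-pixel transform used in `image_mask_from_geom`. The sketch below is a standalone restatement of its assumed behavior — a linear map from a source bbox to a destination rectangle, with the y axis flipped for image coordinates — for illustration only, not the actual MapProxy implementation:

```python
# Standalone sketch of a bbox -> pixel-rect linear transform (assumed
# semantics of mapproxy.srs.make_lin_transf; y is flipped because image
# row indices grow downwards while map y coordinates grow upwards).
def lin_transf(src_bbox, dst_rect):
    sx0, sy0, sx1, sy1 = src_bbox
    dx0, dy0, dx1, dy1 = dst_rect
    xscale = (dx1 - dx0) / (sx1 - sx0)
    yscale = (dy1 - dy0) / (sy1 - sy0)
    def transf(coord):
        x, y = coord
        return (dx0 + (x - sx0) * xscale,
                dy0 + (sy1 - y) * yscale)  # flip y for image rows
    return transf

to_px = lin_transf((0.0, 0.0, 10.0, 10.0), (0, 0, 100, 100))
print(to_px((0.0, 10.0)))  # (0.0, 0.0) -- top-left map corner maps to pixel origin
print(to_px((10.0, 0.0)))  # (100.0, 100.0)
print(to_px((5.0, 5.0)))   # (50.0, 50.0)
```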
# /Mathics3-6.0.2.tar.gz/Mathics3-6.0.2/mathics/core/parser/ast.py
class Node:
"""
    The base class for a node, or "element", of an M-Expression.
    Really there are only two kinds of nodes: Atoms, which are the
    expression's leaves, and non-leaf nodes.
"""
def __init__(self, head, *children):
if isinstance(head, Node):
self.head = head
else:
self.head = Symbol(head)
self.value = None
self.children = list(children)
self.parenthesised = False
def get_head_name(self):
if isinstance(self.head, Symbol):
return self.head.value
else:
return ""
def __repr__(self):
return "%s[%s]" % (self.head, ", ".join(str(child) for child in self.children))
def __eq__(self, other):
if not isinstance(other, Node):
raise TypeError()
return (
(self.get_head_name() == other.get_head_name())
and (len(self.children) == len(other.children))
and all(cs == co for cs, co in zip(self.children, other.children))
)
def flatten(self):
head_name = self.get_head_name()
new_children = []
for child in self.children:
if child.get_head_name() == head_name and not child.parenthesised:
new_children.extend(child.children)
else:
new_children.append(child)
self.children = new_children
return self
class Atom(Node):
"""
Atoms form the leaves of an M-Expression and have no internal structure of
    their own. You can, however, compare Atoms for equality.
"""
def __init__(self, value):
self.head = Symbol(self.__class__.__name__)
self.value = value
self.children = []
self.parenthesised = False
def __repr__(self):
return "%s[%s]" % (self.head, self.value)
def __eq__(self, other):
return self.__class__ == other.__class__ and self.value == other.value
# What remains below are all of the different kinds of Atoms.
class Number(Atom):
"""
    An Atom with a numeric value. Later on, during evaluation, a Number can be
    refined into a particular kind of number, such as an Integer or a Real.
    Note that these too are Atoms.
"""
def __init__(
self, value: str, sign: int = 1, base: int = 10, suffix=None, exp: int = 0
):
assert isinstance(value, str)
assert sign in (-1, 1)
assert isinstance(base, int)
assert 2 <= base <= 36
assert isinstance(exp, int)
assert suffix is None or isinstance(suffix, str)
super(Number, self).__init__(None)
self.value = value
self.sign = sign
self.base = base
self.suffix = suffix
self.exp = exp
def __repr__(self):
result = self.value
if self.base != 10:
result = "%i^^%s" % (self.base, result)
if self.sign == -1:
result = "-%s" % result
if self.suffix is not None:
result = "%s`%s" % (result, self.suffix)
if self.exp != 0:
result = "%s*^%i" % (result, self.exp)
return result
def __eq__(self, other):
return isinstance(other, Number) and repr(self) == repr(other)
class Symbol(Atom):
"""
Symbols are like variables in a programming language.
    But initially, in an M-Expression, the only properties a Symbol has are
    its name and a representation of that name.
    Devoid of a binding (which is attached via a Definition), Symbols are
    unique, as they are in, say, Lisp or Python.
"""
def __init__(self, value: str, context="System"):
self.context = context
self.value = value
self.children = []
# avoids recursive definition
@property
def head(self):
return Symbol(self.__class__.__name__)
def __repr__(self):
return self.value
class String(Atom):
"""
    A string is pretty much the same as in any other programming language: a sequence of characters.
Having this in a class is useful so that we can distinguish it from Symbols.
The display of a String is surrounded by double quotes.
"""
def __repr__(self):
return '"' + self.value + '"'
class Filename(Atom):
"""
A filename is printed the same way a Symbol prints, in contrast to a String.
However, like String, it doesn't have any other properties.
"""
def __repr__(self):
        return self.value
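A quick worked example makes `flatten` concrete. The block below is self-contained — it restates condensed versions of `Symbol` and `Node` from this module (renamed `_Sym`/`_Node` so they shadow nothing) rather than importing them — and shows how nested nodes sharing a head are merged:

```python
# Self-contained demo of Node.flatten using condensed restatements of
# the classes above; behavior mirrors the module, names are local.
class _Sym:
    def __init__(self, value):
        self.value = value
        self.parenthesised = False
    def get_head_name(self):
        return "Symbol"
    def __repr__(self):
        return self.value

class _Node:
    def __init__(self, head, *children):
        self.head = _Sym(head)
        self.children = list(children)
        self.parenthesised = False
    def get_head_name(self):
        return self.head.value
    def __repr__(self):
        return "%s[%s]" % (self.head, ", ".join(str(c) for c in self.children))
    def flatten(self):
        # Children with the same head (and no parentheses) are spliced in.
        head_name = self.get_head_name()
        new_children = []
        for child in self.children:
            if child.get_head_name() == head_name and not child.parenthesised:
                new_children.extend(child.children)
            else:
                new_children.append(child)
        self.children = new_children
        return self

expr = _Node("Plus", _Sym("a"), _Node("Plus", _Sym("b"), _Sym("c")))
print(repr(expr))            # Plus[a, Plus[b, c]]
print(repr(expr.flatten()))  # Plus[a, b, c]
```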
# /Lab93-Finance-0.0.1.4.tar.gz/Lab93-Finance-0.0.1.4/src/Lab93_Finance/account/__init__.py
# Utilize the Alpaca-Py SDK to interact with money.
from alpaca.trading.client import TradingClient
from .positions import Positions
from .assets import Assets
class AccountDetails:
def __init__(self, credentials: tuple) -> None:
key, secret = credentials[0], credentials[1]
self.client = TradingClient( key, secret,
paper=False )
self.data = self.client.get_account()
self.assets = Assets( self.client )
self.positions = Positions( self.client )
    def Id(self):
        return self.data.id
    def SMA(self):
        return self.data.sma
    def Cash(self):
        return self.data.cash
    def Status(self):
        return self.data.status
    def Equity(self):
        return self.data.equity
    def Currency(self):
        return self.data.currency
    def CreatedAt(self):
        return self.data.created_at
    def LastEquity(self):
        return self.data.last_equity
    def Multiplier(self):
        return self.data.multiplier
    def AccruedFees(self):
        return self.data.accrued_fees
    def BuyingPower(self):
        return self.data.buying_power
    def CryptoStatus(self):
        return self.data.crypto_status
    def AccountNumber(self):
        return self.data.account_number
    def DaytradeCount(self):
        return self.data.daytrade_count
    def InitialMargin(self):
        return self.data.initial_margin
    def TradingBlocked(self):
        return self.data.trading_blocked
    def AccountBlocked(self):
        return self.data.account_blocked
    def PortfolioValue(self):
        return self.data.portfolio_value
    def RegtBuyingPower(self):
        return self.data.regt_buying_power
    def ShortingEnabled(self):
        return self.data.shorting_enabled
    def LongMarketValue(self):
        return self.data.long_market_value
    def PatternDayTrader(self):
        return self.data.pattern_day_trader
    def ShortMarketValue(self):
        return self.data.short_market_value
    def MaintenanceMargin(self):
        return self.data.maintenance_margin
    def PendingTransferIn(self):
        return self.data.pending_transfer_in
    def TransferesBlocked(self):
        return self.data.transfers_blocked
    def PendingTransferOut(self):
        return self.data.pending_transfer_out
    def DaytradeBuyingPower(self):
        return self.data.daytrading_buying_power
    def TradeSuspendedByUser(self):
        return self.data.trade_suspended_by_user
    def LastMaintenanceMargin(self):
        return self.data.last_maintenance_margin
    def NonMarginableBuyingPower(self):
        return self.data.non_marginable_buying_power
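The accessor methods above all follow the same delegate-to-`self.data` shape. For comparison, the standalone sketch below (no Alpaca SDK involved; the `_FakeAccount` object and its values are invented for illustration) shows how `__getattr__` can express the same delegation in a few lines:

```python
class _FakeAccount:
    # Stand-in for the SDK's account object; values are made up.
    cash = "1000.00"
    buying_power = "2000.00"

class AccountView:
    """Forward unknown attribute lookups to the wrapped account data."""
    def __init__(self, data):
        self._data = data
    def __getattr__(self, name):
        # Only called when normal attribute lookup fails on AccountView.
        return getattr(self._data, name)

view = AccountView(_FakeAccount())
print(view.cash)          # 1000.00
print(view.buying_power)  # 2000.00
```

The explicit-method style above does keep each field greppable and documentable, so the delegation approach is a trade-off, not a strict improvement.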
.. /CGATReport-0.9.1.tar.gz/CGATReport-0.9.1/doc/GallerySeabornScatterPlots.rst

.. _sb_pair_plot:

============
sb-pair-plot
============
The :class:`CGATReportPlugins.Seaborn.PairPlot` displays
:term:`numerical arrays` as pair plots.

.. report:: Trackers.MultipleColumnDataFullExample
   :render: sb-pair-plot
   :layout: row
   :width: 200

   A pair-plot from a tracker returning multiple columns of data.


.. report:: Trackers.ArrayDataExample
   :render: sb-pair-plot
   :layout: row
   :width: 200

   A pair-plot from a tracker returning multiple columns of data.

Options
-------
:class:`CGATReportPlugins.Seaborn.PairPlot` has no additional
options apart from :ref:`common plot options`.
# /CTRs-0.0.10-py3-none-any.whl/ctrs/layers/layer.py
import tensorflow as tf
class PredictionLayer(tf.keras.layers.Layer):
def __init__(self, task='binary', use_bias=False, **kwargs):
if task not in ["binary", "multiclass", "regression"]:
            raise ValueError("task must be binary, multiclass or regression")
self.task = task
self.use_bias = use_bias
super(PredictionLayer, self).__init__(**kwargs)
def build(self, input_shape):
if self.use_bias:
            self.global_bias = self.add_weight(name="global_bias",
                                               initializer=tf.keras.initializers.Zeros(),
                                               shape=(1,))
# Be sure to call this somewhere!
super(PredictionLayer, self).build(input_shape)
def call(self, inputs, **kwargs):
x = inputs
if self.use_bias:
x = tf.nn.bias_add(x, self.global_bias, data_format='NHWC')
if self.task == "binary":
x = tf.sigmoid(x)
output = tf.reshape(x, (-1, 1))
return output
def compute_output_shape(self, input_shape):
return (None, 1)
def get_config(self, ):
config = {'task': self.task, 'use_bias': self.use_bias}
base_config = super(PredictionLayer, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
class LinearLayer(tf.keras.layers.Layer):
"""The Linear Layer
Input shape
- nD tensor with shape: ``(batch_size, ..., input_dim)``.
The most common situation would be a 2D input with shape ``(batch_size, input_dim)``.
Output shape
        - nD tensor with shape: ``(batch_size, ..., hidden_units)``.
        For instance, for a 2D input with shape ``(batch_size, input_dim)``,
        the output would have shape ``(batch_size, hidden_units)``.
Arguments
        - **hidden_units**: positive integer, the output dimension of this layer.
- **use_bias**: bool. Whether to add bias into layer. Default `True`.
- **l2_reg**: float between 0 and 1. L2 regularizer strength. Default 0.
- **dropout_rate**: float in [0,1). Fraction of the units to dropout. Default 0., meaning no dropout.
- **seed**: A Python integer to use as random seed.
"""
def __init__(self, hidden_units, use_bias=True, dropout_rate=0., l2_reg=0., seed=27, **kwargs):
super(LinearLayer, self).__init__(**kwargs)
self.hidden_units = hidden_units
self.use_bias = use_bias
self.dropout_rate = dropout_rate
self.l2_reg = l2_reg
self.seed = seed
def build(self, input_shape, **kwargs): # TensorShape of input when run call(), inference from inputs
self.dropout = tf.keras.layers.Dropout(self.dropout_rate)
self.kernel = self.add_weight(
name="linear_kernel",
shape=[input_shape[-1], self.hidden_units],
initializer=tf.keras.initializers.glorot_normal(self.seed),
regularizer=tf.keras.regularizers.l2(self.l2_reg)
)
if self.use_bias:
self.bias = self.add_weight(
name="linear_bias",
shape=[self.hidden_units],
initializer=tf.keras.initializers.Zeros(),
)
# Be sure to call this somewhere!
super(LinearLayer, self).build(input_shape)
    def call(self, inputs, training=None, **kwargs):
        feats = self.dropout(inputs, training=training)
        out = tf.matmul(feats, self.kernel)
        # Only add the bias when it was actually created in build().
        if self.use_bias:
            out = tf.nn.bias_add(out, self.bias)
        return out
# In[ ]:
class FMLayer(tf.keras.layers.Layer):
"""The Factorization Machine Layer
Input shape
- nD tensor with shape: ``(batch_size, ..., input_dim)``.
The most common situation would be a 2D input with shape ``(batch_size, input_dim)``.
    Output shape
        - 1D tensor with shape: ``(batch_size,)``, the second-order
          interaction term for each sample.
Arguments
- **l2_reg**: float between 0 and 1. L2 regularizer strength. Default 0.
- **v**: interaction units. Default 10.
- **seed**: A Python integer to use as random seed.
"""
def __init__(self, l2_reg=0.0001, v=10, seed=27, **kwargs):
self.v = v
self.l2_reg = l2_reg
self.seed = seed
        super(FMLayer, self).__init__(**kwargs)
def build(self, input_shape, **kwargs):
if isinstance(input_shape, list):
shape = [int(input_shape[1][-1]), self.v]
else:
shape = [int(input_shape[-1]), self.v]
self.kernel = self.add_weight(name='fm_kernel',
shape=shape,
initializer=tf.keras.initializers.glorot_normal(self.seed),
regularizer=tf.keras.regularizers.l2(self.l2_reg),
trainable=True)
# Be sure to call this somewhere!
        super(FMLayer, self).build(input_shape)
    def call(self, inputs, **kwargs):
        # Project the raw features into the v-dimensional interaction space.
        embedded_inputs = tf.matmul(inputs, self.kernel)
        self.embedded_inputs = embedded_inputs
square_of_sum = tf.square(tf.reduce_sum(embedded_inputs, axis=1, keepdims=True))
sum_of_square = tf.reduce_sum(tf.pow(embedded_inputs, 2), axis=1, keepdims=True)
self.square_of_sum = square_of_sum
self.sum_of_square = sum_of_square
cross_term = tf.reduce_sum((square_of_sum - sum_of_square), axis=1, keepdims=False)
interaction_term = tf.multiply(0.5, cross_term)
return interaction_term
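The ``cross_term`` above relies on the classic FM identity: the sum of all pairwise products equals half of (square-of-sum minus sum-of-squares). A plain-Python check of that identity on a toy embedding vector (the numbers are made up; no TensorFlow required):

```python
from itertools import combinations

v = [0.5, -1.0, 2.0, 0.25]  # one sample's embedded features

# Direct O(n^2) pairwise sum versus the O(n) FM reformulation.
pairwise = sum(a * b for a, b in combinations(v, 2))
square_of_sum = sum(v) ** 2
sum_of_square = sum(x * x for x in v)
fm_term = 0.5 * (square_of_sum - sum_of_square)

print(abs(pairwise - fm_term) < 1e-12)  # True: the two formulations agree
```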
# In[ ]:
class SimpleDNN(tf.keras.layers.Layer):
"""The Multi Layer Perceptron
Input shape
- nD tensor with shape: ``(batch_size, ..., input_dim)``.
The most common situation would be a 2D input with shape ``(batch_size, input_dim)``.
Output shape
- nD tensor with shape: ``(batch_size, ..., hidden_units[-1])``.
For instance, for a 2D input with shape ``(batch_size, input_dim)``,
the output would have shape ``(batch_size, hidden_units[-1])``.
Arguments
- **hidden_units**:list of positive integer, the layer number and units in each layer.
- **activation**: Activation function. Default `relu`.
- **kernel_initializer**: Initializer for the kernel weights matrix.
- **bias_initializer**: Initializer for the bias vector.
- **kernel_regularizer**: Regularizer function applied to the kernel weights matrix.
- **bias_regularizer**: Regularizer function applied to the bias vector.
- **activity_regularizer**: Regularizer function applied to the output of the layer (its "activation").
- **l2_reg**: float between 0 and 1. L2 regularizer strength.
- **dropout_rate**: float in [0,1). Fraction of the units to dropout. Default `None`, meaning no dropout.
- **use_bn**: bool. Whether to use BatchNormalization before activation.
- **seed**: A Python integer to use as random seed.
- **use_bias**: bool. Whether to add bias into layer. Default `True`.
"""
    def __init__(
            self, hidden_units, activation='relu', use_bias=True,
            kernel_initializer='glorot_uniform', bias_initializer='zeros',
            kernel_regularizer=None, bias_regularizer=None,
            activity_regularizer=None, l2_reg=None, dropout_rate=None,
            use_bn=False, seed=27, **kwargs
    ):
        self.l2_reg = l2_reg
        if l2_reg:
            kernel_regularizer = tf.keras.regularizers.l2(l2_reg)
        self.hidden_units = hidden_units
        self.activation = activation
        self.use_bias = use_bias
        self.kernel_initializer = kernel_initializer
        self.bias_initializer = bias_initializer
        self.kernel_regularizer = kernel_regularizer
        self.bias_regularizer = bias_regularizer
        self.activity_regularizer = activity_regularizer
        self.dropout_rate = dropout_rate
        self.use_bn = use_bn
        self.seed = seed
super(SimpleDNN, self).__init__(**kwargs)
    def build(self, input_shape):
        self.denses = [
            tf.keras.layers.Dense(
                units,
                activation=self.activation,
                use_bias=self.use_bias,
                kernel_initializer=self.kernel_initializer,
                bias_initializer=self.bias_initializer,
                kernel_regularizer=self.kernel_regularizer,
                bias_regularizer=self.bias_regularizer,
                activity_regularizer=self.activity_regularizer)
            for units in self.hidden_units
        ]
if self.use_bn:
self.bn_layers = [
tf.keras.layers.BatchNormalization()
for _ in range(len(self.hidden_units))
]
if self.dropout_rate:
self.dropout_layers = [
tf.keras.layers.Dropout(self.dropout_rate, seed=self.seed + i)
for i in range(len(self.hidden_units))
]
super(SimpleDNN, self).build(input_shape)
    def call(self, inputs, training=None, **kwargs):
        deep_input = inputs
        for idx, layer in enumerate(self.denses):
            out = layer(deep_input)
            if self.use_bn:
                out = self.bn_layers[idx](out, training=training)
            if self.dropout_rate:
                out = self.dropout_layers[idx](out, training=training)
            deep_input = out  # feed this block's output to the next block
        return deep_input
def compute_output_shape(self, input_shape):
if len(self.hidden_units) > 0:
shape = input_shape[:-1] + (self.hidden_units[-1],)
else:
shape = input_shape
return tuple(shape)
def get_config(self):
config = {
'activation': self.activation,
'hidden_units': self.hidden_units,
'l2_reg': self.l2_reg,
'use_bn': self.use_bn,
'dropout_rate': self.dropout_rate,
'seed': self.seed
}
        base_config = super(SimpleDNN, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
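``SimpleDNN.call`` is meant to thread each layer's output into the next layer's input. The control flow in miniature, with plain callables standing in for ``Dense`` blocks (illustrative only):

```python
# Each "layer" here is just a function; Dense/BN/Dropout follow the same flow.
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

deep_input = 5                 # plays the role of `inputs`
for layer in layers:
    out = layer(deep_input)    # forward through one block
    deep_input = out           # the result feeds the next block
print(deep_input)              # ((5 + 1) * 2) - 3 = 9
```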
/Hooke-1.0.0.alpha%20(Ninken).tar.gz/Hooke-1.0.0.alpha (Ninken)/README

.. -*- coding: utf-8 -*-
What it is
==========
It is software for the (semiautomatic) analysis and filtering of
force curves. Force curves are the output of an analytical technique
called force spectroscopy. A force spectroscopy experiment usually
requires the analysis of thousands of force curves at a time. As of
today, there is no standard, free software for the analysis of force
curves. Hooke aims to fill that gap.
What it does
============
* View, annotate, measure force curves
* Worm-like chain and freely-jointed chain fit of force peaks
* Automatic convolution-based filtering of empty curves
* Automatic fit and measurement of multiple force peaks
* Handles force-clamp force experiments (experimental)
* It is extensible by users by means of plugins and drivers
See `Hooke's Google Code homepage`_ for more details.
How it works
============
See the ``doc/tutorial.txt`` file distributed with Hooke, or the
`online docs`__.
__ docs_
Where it's published
====================
Sandal M, Benedetti F, Brucale M, Gomez-Casado A, Samorì B.
"Hooke: an open software platform for force spectroscopy."
Bioinformatics, 2009 25(11):1428-1430.
doi:10.1093/bioinformatics/btp180
Please cite Hooke if you use it.
Troubleshooting
===============
If you have trouble using it, before throwing it in the trash:
1. Search the `issue tracker`_.
2. Look at the `troubleshooting wiki page`_.
3. Search the `discussion group`_.
4. Ask a question in the discussion group.
5. File a bug on the issue tracker.
Disclaimer
==========
Remember that Hooke is still experimental software! It has been mostly
done to fit the needs of its authors, and there is no guarantee it
will do what you need. However, you can write us/help us improve it so
that it does. We aim to make of Hooke a robust tool that can help your
research.
Source
======
Hooke is available as a Git_ repository::
$ git clone git://tremily.us/hooke.git
See the docs_ for installation details.
Licensing
=========
Hooke is released under the GNU Lesser General Public License, either
version 3 of the License (``COPYING`` and ``COPYING.LESSER``), or (at
your option) any later version.
.. _Hooke's Google Code homepage: http://code.google.com/p/hooke/
.. _docs: http://hooke.tremily.us/
.. _issue tracker: http://code.google.com/p/hooke/issues/list
.. _troubleshooting wiki page:
http://code.google.com/p/hooke/wiki/TroubleShooting
.. _discussion group: http://groups.google.com/group/hookesoftware
.. _Git: http://git-scm.com/
/Flask_Compress-1.13-py3-none-any.whl/flask_compress/flask_compress.py
import sys
import functools
from gzip import GzipFile
import zlib
from io import BytesIO
from collections import defaultdict
import brotli
from flask import request, after_this_request, current_app
if sys.version_info[:2] == (2, 6):
class GzipFile(GzipFile):
""" Backport of context manager support for python 2.6"""
def __enter__(self):
if self.fileobj is None:
raise ValueError("I/O operation on closed GzipFile object")
return self
def __exit__(self, *args):
self.close()
class DictCache(object):
def __init__(self):
self.data = {}
def get(self, key):
return self.data.get(key)
def set(self, key, value):
self.data[key] = value
class Compress(object):
"""
The Compress object allows your application to use Flask-Compress.
When initialising a Compress object you may optionally provide your
:class:`flask.Flask` application object if it is ready. Otherwise,
you may provide it later by using the :meth:`init_app` method.
:param app: optional :class:`flask.Flask` application object
:type app: :class:`flask.Flask` or None
"""
def __init__(self, app=None):
"""
An alternative way to pass your :class:`flask.Flask` application
object to Flask-Compress. :meth:`init_app` also takes care of some
default `settings`_.
:param app: the :class:`flask.Flask` application object.
"""
self.app = app
if app is not None:
self.init_app(app)
def init_app(self, app):
defaults = [
('COMPRESS_MIMETYPES', ['text/html', 'text/css', 'text/xml',
'application/json',
'application/javascript']),
('COMPRESS_LEVEL', 6),
('COMPRESS_BR_LEVEL', 4),
('COMPRESS_BR_MODE', 0),
('COMPRESS_BR_WINDOW', 22),
('COMPRESS_BR_BLOCK', 0),
('COMPRESS_DEFLATE_LEVEL', -1),
('COMPRESS_MIN_SIZE', 500),
('COMPRESS_CACHE_KEY', None),
('COMPRESS_CACHE_BACKEND', None),
('COMPRESS_REGISTER', True),
('COMPRESS_STREAMS', True),
('COMPRESS_ALGORITHM', ['br', 'gzip', 'deflate']),
]
for k, v in defaults:
app.config.setdefault(k, v)
backend = app.config['COMPRESS_CACHE_BACKEND']
self.cache = backend() if backend else None
self.cache_key = app.config['COMPRESS_CACHE_KEY']
algo = app.config['COMPRESS_ALGORITHM']
if isinstance(algo, str):
self.enabled_algorithms = [i.strip() for i in algo.split(',')]
else:
self.enabled_algorithms = list(algo)
if (app.config['COMPRESS_REGISTER'] and
app.config['COMPRESS_MIMETYPES']):
app.after_request(self.after_request)
def _choose_compress_algorithm(self, accept_encoding_header):
"""
Determine which compression algorithm we're going to use based on the
client request. The `Accept-Encoding` header may list one or more desired
algorithms, together with a "quality factor" for each one (higher quality
means the client prefers that algorithm more).
:param accept_encoding_header: Content of the `Accept-Encoding` header
:return: name of a compression algorithm (`gzip`, `deflate`, `br`) or `None` if
the client and server don't agree on any.
"""
# A flag denoting that client requested using any (`*`) algorithm,
# in case a specific one is not supported by the server
fallback_to_any = False
# Map quality factors to requested algorithm names.
algos_by_quality = defaultdict(set)
# Set of supported algorithms
server_algos_set = set(self.enabled_algorithms)
for part in accept_encoding_header.lower().split(','):
part = part.strip()
if ';q=' in part:
# If the client associated a quality factor with an algorithm,
# try to parse it. We could do the matching using a regex, but
# the format is so simple that it would be overkill.
algo = part.split(';')[0].strip()
try:
quality = float(part.split('=')[1].strip())
except ValueError:
quality = 1.0
else:
# Otherwise, use the default quality
algo = part
quality = 1.0
if algo == '*':
if quality > 0:
fallback_to_any = True
elif algo == 'identity': # identity means 'no compression asked'
algos_by_quality[quality].add(None)
elif algo in server_algos_set:
algos_by_quality[quality].add(algo)
# Choose the algorithm with the highest quality factor that the server supports.
#
# If there are multiple equally good options, choose the first supported algorithm
# from server configuration.
#
# If the server doesn't support any algorithm that the client requested but
# there's a special wildcard algorithm request (`*`), choose the first supported
# algorithm.
for _, viable_algos in sorted(algos_by_quality.items(), reverse=True):
if len(viable_algos) == 1:
return viable_algos.pop()
elif len(viable_algos) > 1:
for server_algo in self.enabled_algorithms:
if server_algo in viable_algos:
return server_algo
if fallback_to_any:
return self.enabled_algorithms[0]
return None
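Quality-factor parsing can be exercised without Flask. Below is a minimal stand-alone sketch that mirrors the same ``Accept-Encoding`` splitting and ``;q=`` handling (a simplified stand-in, not the method itself):

```python
def parse_accept_encoding(header):
    """Return {algorithm: quality} parsed from an Accept-Encoding header."""
    result = {}
    for part in header.lower().split(','):
        part = part.strip()
        if not part:
            continue
        if ';q=' in part:
            # Client attached an explicit quality factor to this algorithm.
            algo = part.split(';')[0].strip()
            try:
                quality = float(part.split('=')[1].strip())
            except ValueError:
                quality = 1.0  # unparsable q-value falls back to default
        else:
            algo, quality = part, 1.0
        result[algo] = quality
    return result

print(parse_accept_encoding('br;q=1.0, gzip;q=0.8, *;q=0.1'))
# {'br': 1.0, 'gzip': 0.8, '*': 0.1}
```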
def after_request(self, response):
app = self.app or current_app
vary = response.headers.get('Vary')
if not vary:
response.headers['Vary'] = 'Accept-Encoding'
elif 'accept-encoding' not in vary.lower():
response.headers['Vary'] = '{}, Accept-Encoding'.format(vary)
accept_encoding = request.headers.get('Accept-Encoding', '')
chosen_algorithm = self._choose_compress_algorithm(accept_encoding)
if (chosen_algorithm is None or
response.mimetype not in app.config["COMPRESS_MIMETYPES"] or
response.status_code < 200 or
response.status_code >= 300 or
            (response.is_streamed and app.config["COMPRESS_STREAMS"] is False) or
"Content-Encoding" in response.headers or
(response.content_length is not None and
response.content_length < app.config["COMPRESS_MIN_SIZE"])):
return response
response.direct_passthrough = False
if self.cache is not None:
key = self.cache_key(request)
compressed_content = self.cache.get(key)
if compressed_content is None:
compressed_content = self.compress(app, response, chosen_algorithm)
self.cache.set(key, compressed_content)
else:
compressed_content = self.compress(app, response, chosen_algorithm)
response.set_data(compressed_content)
response.headers['Content-Encoding'] = chosen_algorithm
response.headers['Content-Length'] = response.content_length
# "123456789" => "123456789:gzip" - A strong ETag validator
# W/"123456789" => W/"123456789:gzip" - A weak ETag validator
etag = response.headers.get('ETag')
if etag:
response.headers['ETag'] = '{0}:{1}"'.format(etag[:-1], chosen_algorithm)
return response
def compressed(self):
def decorator(f):
@functools.wraps(f)
def decorated_function(*args, **kwargs):
@after_this_request
def compressor(response):
return self.after_request(response)
return f(*args, **kwargs)
return decorated_function
return decorator
def compress(self, app, response, algorithm):
if algorithm == 'gzip':
gzip_buffer = BytesIO()
with GzipFile(mode='wb',
compresslevel=app.config['COMPRESS_LEVEL'],
fileobj=gzip_buffer) as gzip_file:
gzip_file.write(response.get_data())
return gzip_buffer.getvalue()
elif algorithm == 'deflate':
return zlib.compress(response.get_data(),
app.config['COMPRESS_DEFLATE_LEVEL'])
elif algorithm == 'br':
return brotli.compress(response.get_data(),
mode=app.config['COMPRESS_BR_MODE'],
quality=app.config['COMPRESS_BR_LEVEL'],
lgwin=app.config['COMPRESS_BR_WINDOW'],
                                   lgblock=app.config['COMPRESS_BR_BLOCK'])
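The gzip branch writes through ``GzipFile`` into an in-memory buffer, and the deflate branch is plain ``zlib``. The same patterns round-trip cleanly with only the standard library:

```python
import gzip
import zlib
from io import BytesIO

payload = b'{"hello": "world"}' * 50      # something worth compressing

# gzip: stream the body into a BytesIO buffer, then take its bytes.
buf = BytesIO()
with gzip.GzipFile(mode='wb', compresslevel=6, fileobj=buf) as f:
    f.write(payload)
compressed = buf.getvalue()

print(len(compressed) < len(payload))           # True: smaller on the wire
print(gzip.decompress(compressed) == payload)   # True: lossless
# deflate is just zlib at a chosen level:
print(zlib.decompress(zlib.compress(payload, 6)) == payload)  # True
```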
/Classeviva.py-1.2.0.tar.gz/Classeviva.py-1.2.0/src/classeviva/collegamenti/collegamenti.py

class Collegamenti:
base: str = "https://web.spaggiari.eu/rest"
accesso: str = f"{base}/v1/auth/login"
stato: str = f"{base}/v1/auth/status"
biglietto: str = f"{base}/v1/auth/ticket"
documenti: str = f"{base}/v1/students/{{}}/documents"
controllo_documento: str = f"{base}/v1/students/{{}}/documents/check/{{}}"
leggi_documento: str = f"{base}/v1/students/{{}}/documents/read/{{}}"
assenze: str = f"{base}/v1/students/{{}}/absences/details"
assenze_da: str = f"{base}/v1/students/{{}}/absences/details/{{}}"
assenze_da_a: str = f"{base}/v1/students/{{}}/absences/details/{{}}/{{}}"
agenda_da_a: str = f"{base}/v1/students/{{}}/agenda/all/{{}}/{{}}"
agenda_codice_da_a: str = f"{base}/v1/students/{{}}/agenda/{{}}/{{}}/{{}}"
didattica: str = f"{base}/v1/students/{{}}/didactics"
didattica_elemento: str = f"{base}/v1/students/{{}}/didactics/item/{{}}"
bacheca: str = f"{base}/v1/students/{{}}/noticeboard"
bacheca_leggi: str = f"{base}/v1/students/{{}}/noticeboard/read/{{}}/{{}}/101"
bacheca_allega: str = f"{base}/v1/students/{{}}/noticeboard/attach/{{}}/{{}}/101"
lezioni: str = f"{base}/v1/students/{{}}/lessons/today"
lezioni_giorno: str = f"{base}/v1/students/{{}}/lessons/{{}}"
lezioni_da_a: str = f"{base}/v1/students/{{}}/lessons/{{}}/{{}}"
lezioni_da_a_materia: str = f"{base}/v1/students/{{}}/lessons/{{}}/{{}}/{{}}"
calendario: str = f"{base}/v1/students/{{}}/calendar/all"
calendario_da_a: str = f"{base}/v1/students/{{}}/calendar/{{}}/{{}}"
libri: str = f"{base}/v1/students/{{}}/schoolbooks"
carta: str = f"{base}/v1/students/{{}}/card"
voti: str = f"{base}/v1/students/{{}}/grades"
periodi: str = f"{base}/v1/students/{{}}/periods"
materie: str = f"{base}/v1/students/{{}}/subjects"
note: str = f"{base}/v1/students/{{}}/notes/all"
leggi_nota: str = f"{base}/v1/students/{{}}/notes/{{}}/read/{{}}"
panoramica_da_a: str = f"{base}/v1/students/{{}}/overview/all/{{}}/{{}}"
    avatar: str = f"{base}/v1/users/{{}}/avatar"
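Each endpoint above is a template: the ``{{}}`` pairs escaped inside the f-string become literal ``{}`` slots to fill in later with ``str.format``. A hypothetical expansion (the student id and dates are made up, not real values):

```python
base = "https://web.spaggiari.eu/rest"
# Same escaping trick as in the class: {{}} survives the f-string as {}.
absences_from_to = f"{base}/v1/students/{{}}/absences/details/{{}}/{{}}"

url = absences_from_to.format(12345, "20230101", "20230131")
print(url)
# https://web.spaggiari.eu/rest/v1/students/12345/absences/details/20230101/20230131
```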
/FairNLP-5.1.0.tar.gz/FairNLP-5.1.0/FNLP/Language/Character.py

from FNLP.Language import Constants
from F import LIST
def is_in_alphabet_lower(content:str):
firstChar = LIST.get(0, content, default=False)
if firstChar in Constants.ALPHABET_LOWER:
return True
return False
def is_in_alphabet_upper(content:str):
firstChar = LIST.get(0, content, default=False)
if firstChar in Constants.ALPHABET_UPPER:
return True
return False
def is_in_alphabet(content:str):
firstChar = LIST.get(0, content, default=False)
if firstChar in Constants.ALPHABET_ALL:
return True
return False
def is_single_number(content):
if type(content) != int:
content = LIST.get(0, content, default=False)
if content in Constants.NUMBERS_SINGLE:
return True
return False
def is_capital(content: str):
firstChar = LIST.get(0, content, default=False)
if firstChar and str(firstChar).isupper():
return True
return False
def are_capital(*content: str):
for item in content:
if not is_capital(item):
return False
return True
def are_periods(*content: str):
for item in content:
if not is_period(item):
return False
return True
def is_period(content:str):
firstChar = LIST.get(0, content, default=False)
if firstChar and str(content) == ".":
return True
return False
def are_periods_or_capitals(*content:str):
for item in content:
if is_capital(item) or is_period(item):
return True
return False
def is_empty(content: str):
    # Note: guarding on the first character would make the empty string
    # report non-empty, so check the content itself directly.
    if not content or content == ' ' or content == '' or str(content) == " ":
        return True
    return False
def are_empty(*content: str):
for item in content:
if not item or item == ' ' or item == '' or str(item) == " ":
return True
return False
def is_quotation(content:str):
encoded_character = str(content).encode('utf-8')
if content == '"':
return True
elif encoded_character in Constants.QUOTES_ENCODINGS:
return True
return False
def is_space(content:str):
firstChar = LIST.get(0, content, default=False)
if firstChar and str(content) == ' ':
return True
return False
def is_space_or_quotation(content):
if is_quotation(content) or is_space(content):
return True
return False
def __is_sentence_ender(content):
if str(content) in Constants.SENTENCE_ENDERS:
return True
return False
def __is_sentence_beginner(content):
if is_in_alphabet(content):
return True
elif is_quotation(content):
return True
elif is_single_number(content):
return True
    return False
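Most predicates in this module inspect only the first character of the input. Stripped of the ``F.LIST`` helper, the core checks reduce to plain string methods, e.g. (an illustrative reimplementation, not the library's code):

```python
def first_char(content):
    # Stand-in for LIST.get(0, content, default=False) from the F package.
    return content[0] if content else False

def is_capital(content):
    c = first_char(content)
    return bool(c) and str(c).isupper()

print(is_capital("Hello"))  # True
print(is_capital("hello"))  # False
print(is_capital(""))       # False: empty input never matches
```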
/CUP_EXAM-2.6.11.tar.gz/CUP_EXAM-2.6.11/CUP_EXAM/frontend.py

from tkinter import *
from tkinter.messagebox import *
from CUP_EXAM import backend
try:
win = Tk()
    win.title('CUP Exam Schedule Lookup')
win.geometry("400x130")
pre_text = Variable()
ipt = Entry(win, width=37, textvariable=pre_text)
except TclError:
tk_flag = False
print('Show UI failed! But we provide Basic UI:')
else:
tk_flag = True
ts = '\nCurrent exam schedule:\n    %s\n'
flag = False
cls_flag = True
def deal():
exp = ipt.get()
if not exp:
return
rres = backend.search(exp.split())
if not rres:
        showinfo('No results', 'No information found; please ask your teacher.')
return
new_win = Toplevel()
    new_win.title('Search Results')
txt = Text(new_win, width=100, height=30)
srco = Scrollbar(new_win)
srco.pack(side=RIGHT, fill=Y)
txt.pack(side=LEFT, fill=Y)
srco.config(command=txt.yview)
txt.config(yscrollcommand=srco.set)
txt.insert(INSERT, rres)
txt.config(state=DISABLED)
new_win.mainloop()
def update(url=None, realurl=None):
if realurl:
backend.getNewXls(realurl)
backend.init()
return
if not url:
return
if backend.getNewXls(url):
        if tk_flag:
            showinfo('Download complete', 'Exam notice XLS file downloaded successfully!')
        else:
            print('Exam notice XLS file downloaded successfully!')
backend.init()
else:
        if tk_flag:
            showerror('Download failed', 'Could not download the XLS file')
        else:
            print('Could not download the XLS file')
def clear_input(event):
global cls_flag
if cls_flag:
ipt.delete(0, END)
cls_flag = False
elif event.char == '\r' or event.char == '\n':
deal()
def main():
global flag, ss
if not backend.pre_check():
with open(backend.base_dir + '.last_title', 'w') as f:
f.write('\n\n')
ss = ts % 'NONE'
flag = True
else:
with open(backend.base_dir + '.last_title', 'r') as f:
ss = ts % (f.read().split()[0])
res = backend.new_note_check()
if tk_flag:
lb = Label(win, text=ss, justify='left')
lb.pack()
ipt.bind('<Key>', clear_input)
ipt.pack()
    pre_text.set('Search by course name, teacher, or class (separate terms with spaces):')
ipt.focus()
    bt = Button(win, text='Search', command=deal)
bt.pack()
if res == -1:
            showerror('Update check failed', 'Device is offline' + (' and no usable exam schedule file exists; the program will exit.' if flag else ''))
if flag:
exit('No Network')
elif res:
if res[-1]:
if flag:
ask = True
else:
                ask = askyesno('New exam schedule found', 'A new exam schedule was found:\n  ' + res[1] + '\n\nUpdate the local exam schedule file?')
if ask:
if flag:
update(realurl=res)
else:
update(url=res)
lb.config(text=ts % res[1])
else:
update(realurl=res)
                showinfo('Auto update', 'The exam schedule changed and has been updated automatically.')
else:
backend.init()
win.mainloop()
else:
if res == -1:
            print('Update check failed', 'Device is offline' + (' and no usable exam schedule file exists; the program will exit.' if flag else ''))
if flag:
exit('No Network')
elif res:
if res[-1]:
if flag:
ask = True
else:
                    ask = input('A new exam schedule was found:\n  ' + res[1] + '\n\nUpdate the local exam schedule file? [y/n]: ')
                    if ask in 'yY':  # empty input (just Enter) also counts as yes
if flag:
update(realurl=res)
else:
update(url=res)
else:
update(realurl=res)
                print('The exam schedule changed and has been updated automatically.')
else:
backend.init()
while True:
                target = input('Enter search terms: ')
                if target:
                    print(backend.search(target.split()))
else:
break
if __name__ == '__main__':
    main()
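``deal`` splits the query box on whitespace before calling ``backend.search``, so several keywords can be combined in one query. A plausible all-terms-must-match filter over schedule rows (the record strings and matching rule here are assumptions for illustration, not the real backend):

```python
records = [
    "Calculus  Prof. Wang  Class 2101  2023-06-20 09:00",
    "Physics   Prof. Li    Class 2102  2023-06-21 14:00",
    "Calculus  Prof. Li    Class 2102  2023-06-22 09:00",
]

terms = "Calculus Li".split()   # what the entry widget would produce
# Keep a row only if every search term appears somewhere in it.
hits = [r for r in records if all(t in r for t in terms)]
print(hits)   # only the row containing both "Calculus" and "Li"
```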
/Editra-0.7.20.tar.gz/Editra-0.7.20/src/extern/pygments/lexers/__init__.py

import sys
import types
import fnmatch
from os.path import basename
from pygments.lexers._mapping import LEXERS
from pygments.plugin import find_plugin_lexers
from pygments.util import ClassNotFound, bytes
__all__ = ['get_lexer_by_name', 'get_lexer_for_filename', 'find_lexer_class',
'guess_lexer'] + LEXERS.keys()
_lexer_cache = {}
def _load_lexers(module_name):
"""
Load a lexer (and all others in the module too).
"""
mod = __import__(module_name, None, None, ['__all__'])
for lexer_name in mod.__all__:
cls = getattr(mod, lexer_name)
_lexer_cache[cls.name] = cls
def get_all_lexers():
"""
Return a generator of tuples in the form ``(name, aliases,
filenames, mimetypes)`` of all know lexers.
"""
for item in LEXERS.itervalues():
yield item[1:]
for lexer in find_plugin_lexers():
yield lexer.name, lexer.aliases, lexer.filenames, lexer.mimetypes
def find_lexer_class(name):
"""
Lookup a lexer class by name. Return None if not found.
"""
if name in _lexer_cache:
return _lexer_cache[name]
# lookup builtin lexers
for module_name, lname, aliases, _, _ in LEXERS.itervalues():
if name == lname:
_load_lexers(module_name)
return _lexer_cache[name]
# continue with lexers from setuptools entrypoints
for cls in find_plugin_lexers():
if cls.name == name:
return cls
def get_lexer_by_name(_alias, **options):
"""
Get a lexer by an alias.
"""
# lookup builtin lexers
for module_name, name, aliases, _, _ in LEXERS.itervalues():
if _alias in aliases:
if name not in _lexer_cache:
_load_lexers(module_name)
return _lexer_cache[name](**options)
# continue with lexers from setuptools entrypoints
for cls in find_plugin_lexers():
if _alias in cls.aliases:
return cls(**options)
raise ClassNotFound('no lexer for alias %r found' % _alias)
def get_lexer_for_filename(_fn, code=None, **options):
"""
Get a lexer for a filename. If multiple lexers match the filename
pattern, use ``analyze_text()`` to figure out which one is more
appropriate.
"""
matches = []
fn = basename(_fn)
for modname, name, _, filenames, _ in LEXERS.itervalues():
for filename in filenames:
if fnmatch.fnmatch(fn, filename):
if name not in _lexer_cache:
_load_lexers(modname)
matches.append(_lexer_cache[name])
for cls in find_plugin_lexers():
for filename in cls.filenames:
if fnmatch.fnmatch(fn, filename):
matches.append(cls)
if sys.version_info > (3,) and isinstance(code, bytes):
# decode it, since all analyse_text functions expect unicode
code = code.decode('latin1')
def get_rating(cls):
# The class _always_ defines analyse_text because it's included in
# the Lexer class. The default implementation returns None which
# gets turned into 0.0. Run scripts/detect_missing_analyse_text.py
# to find lexers which need it overridden.
d = cls.analyse_text(code)
#print "Got %r from %r" % (d, cls)
return d
if code:
matches.sort(key=get_rating)
if matches:
#print "Possible lexers, after sort:", matches
return matches[-1](**options)
raise ClassNotFound('no lexer for filename %r found' % _fn)
def get_lexer_for_mimetype(_mime, **options):
"""
Get a lexer for a mimetype.
"""
for modname, name, _, _, mimetypes in LEXERS.itervalues():
if _mime in mimetypes:
if name not in _lexer_cache:
_load_lexers(modname)
return _lexer_cache[name](**options)
for cls in find_plugin_lexers():
if _mime in cls.mimetypes:
return cls(**options)
raise ClassNotFound('no lexer for mimetype %r found' % _mime)
def _iter_lexerclasses():
"""
Return an iterator over all lexer classes.
"""
for module_name, name, _, _, _ in LEXERS.itervalues():
if name not in _lexer_cache:
_load_lexers(module_name)
yield _lexer_cache[name]
for lexer in find_plugin_lexers():
yield lexer
def guess_lexer_for_filename(_fn, _text, **options):
"""
Lookup all lexers that handle those filenames primary (``filenames``)
or secondary (``alias_filenames``). Then run a text analysis for those
lexers and choose the best result.
usage::
>>> from pygments.lexers import guess_lexer_for_filename
>>> guess_lexer_for_filename('hello.html', '<%= @foo %>')
<pygments.lexers.templates.RhtmlLexer object at 0xb7d2f32c>
>>> guess_lexer_for_filename('hello.html', '<h1>{{ title|e }}</h1>')
<pygments.lexers.templates.HtmlDjangoLexer object at 0xb7d2f2ac>
>>> guess_lexer_for_filename('style.css', 'a { color: <?= $link ?> }')
<pygments.lexers.templates.CssPhpLexer object at 0xb7ba518c>
"""
fn = basename(_fn)
primary = None
matching_lexers = set()
for lexer in _iter_lexerclasses():
for filename in lexer.filenames:
if fnmatch.fnmatch(fn, filename):
matching_lexers.add(lexer)
primary = lexer
for filename in lexer.alias_filenames:
if fnmatch.fnmatch(fn, filename):
matching_lexers.add(lexer)
if not matching_lexers:
raise ClassNotFound('no lexer for filename %r found' % fn)
if len(matching_lexers) == 1:
return matching_lexers.pop()(**options)
result = []
for lexer in matching_lexers:
rv = lexer.analyse_text(_text)
if rv == 1.0:
return lexer(**options)
result.append((rv, lexer))
result.sort()
if not result[-1][0] and primary is not None:
return primary(**options)
return result[-1][1](**options)
def guess_lexer(_text, **options):
"""
Guess a lexer by strong distinctions in the text (eg, shebang).
"""
best_lexer = [0.0, None]
for lexer in _iter_lexerclasses():
rv = lexer.analyse_text(_text)
if rv == 1.0:
return lexer(**options)
if rv > best_lexer[0]:
best_lexer[:] = (rv, lexer)
if not best_lexer[0] or best_lexer[1] is None:
raise ClassNotFound('no lexer matching the text found')
return best_lexer[1](**options)
class _automodule(types.ModuleType):
"""Automatically import lexers."""
def __getattr__(self, name):
info = LEXERS.get(name)
if info:
_load_lexers(info[0])
cls = _lexer_cache[info[1]]
setattr(self, name, cls)
return cls
raise AttributeError(name)
oldmod = sys.modules['pygments.lexers']
newmod = _automodule('pygments.lexers')
newmod.__dict__.update(oldmod.__dict__)
sys.modules['pygments.lexers'] = newmod
del newmod.newmod, newmod.oldmod, newmod.sys, newmod.types
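The ``_automodule`` swap at the end is a general trick: replace a module in ``sys.modules`` with a ``ModuleType`` subclass whose ``__getattr__`` imports on first access and then caches the result via ``setattr``, so the hook never fires again for that name. The pattern in isolation (a toy attribute stands in for a lexer class):

```python
import sys
import types

class _LazyModule(types.ModuleType):
    """Load expensive attributes on first access, then cache them."""
    def __getattr__(self, name):
        if name == 'answer':             # stands in for a lexer lookup
            value = 42                   # stands in for the imported class
            setattr(self, name, value)   # cache: __getattr__ won't fire again
            return value
        raise AttributeError(name)

mod = _LazyModule('lazy_demo')
sys.modules['lazy_demo'] = mod           # mirror the swap done above

print(mod.answer)                 # 42, computed on first access
print('answer' in mod.__dict__)   # True: now cached on the module itself
```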
/Firefly_III_API_Client-2.0.5.0-py3-none-any.whl/firefly_iii_client/paths/v1_categories_id/put.py

from dataclasses import dataclass
import typing_extensions
import urllib3
from urllib3._collections import HTTPHeaderDict
from firefly_iii_client import api_client, exceptions
from datetime import date, datetime # noqa: F401
import decimal # noqa: F401
import functools # noqa: F401
import io # noqa: F401
import re # noqa: F401
import typing # noqa: F401
import typing_extensions # noqa: F401
import uuid # noqa: F401
import frozendict # noqa: F401
from firefly_iii_client import schemas # noqa: F401
from firefly_iii_client.model.validation_error import ValidationError
from firefly_iii_client.model.category_single import CategorySingle
from firefly_iii_client.model.unauthenticated import Unauthenticated
from firefly_iii_client.model.category_update import CategoryUpdate
from firefly_iii_client.model.bad_request import BadRequest
from firefly_iii_client.model.internal_exception import InternalException
from firefly_iii_client.model.not_found import NotFound
from . import path
# Header params
XTraceIdSchema = schemas.UUIDSchema
RequestRequiredHeaderParams = typing_extensions.TypedDict(
'RequestRequiredHeaderParams',
{
}
)
RequestOptionalHeaderParams = typing_extensions.TypedDict(
'RequestOptionalHeaderParams',
{
'X-Trace-Id': typing.Union[XTraceIdSchema, str, uuid.UUID, ],
},
total=False
)
class RequestHeaderParams(RequestRequiredHeaderParams, RequestOptionalHeaderParams):
pass
request_header_x_trace_id = api_client.HeaderParameter(
name="X-Trace-Id",
style=api_client.ParameterStyle.SIMPLE,
schema=XTraceIdSchema,
)
# Path params
IdSchema = schemas.StrSchema
RequestRequiredPathParams = typing_extensions.TypedDict(
'RequestRequiredPathParams',
{
'id': typing.Union[IdSchema, str, ],
}
)
RequestOptionalPathParams = typing_extensions.TypedDict(
'RequestOptionalPathParams',
{
},
total=False
)
class RequestPathParams(RequestRequiredPathParams, RequestOptionalPathParams):
pass
request_path_id = api_client.PathParameter(
name="id",
style=api_client.ParameterStyle.SIMPLE,
schema=IdSchema,
required=True,
)
# body param
SchemaForRequestBodyApplicationJson = CategoryUpdate
SchemaForRequestBodyApplicationXWwwFormUrlencoded = CategoryUpdate
request_body_category_update = api_client.RequestBody(
content={
'application/json': api_client.MediaType(
schema=SchemaForRequestBodyApplicationJson),
'application/x-www-form-urlencoded': api_client.MediaType(
schema=SchemaForRequestBodyApplicationXWwwFormUrlencoded),
},
required=True,
)
_auth = [
'firefly_iii_auth',
]
SchemaFor200ResponseBodyApplicationVndApijson = CategorySingle
@dataclass
class ApiResponseFor200(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[
SchemaFor200ResponseBodyApplicationVndApijson,
]
headers: schemas.Unset = schemas.unset
_response_for_200 = api_client.OpenApiResponse(
response_cls=ApiResponseFor200,
content={
'application/vnd.api+json': api_client.MediaType(
schema=SchemaFor200ResponseBodyApplicationVndApijson),
},
)
SchemaFor400ResponseBodyApplicationJson = BadRequest
@dataclass
class ApiResponseFor400(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[
SchemaFor400ResponseBodyApplicationJson,
]
headers: schemas.Unset = schemas.unset
_response_for_400 = api_client.OpenApiResponse(
response_cls=ApiResponseFor400,
content={
'application/json': api_client.MediaType(
schema=SchemaFor400ResponseBodyApplicationJson),
},
)
SchemaFor401ResponseBodyApplicationJson = Unauthenticated
@dataclass
class ApiResponseFor401(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[
SchemaFor401ResponseBodyApplicationJson,
]
headers: schemas.Unset = schemas.unset
_response_for_401 = api_client.OpenApiResponse(
response_cls=ApiResponseFor401,
content={
'application/json': api_client.MediaType(
schema=SchemaFor401ResponseBodyApplicationJson),
},
)
SchemaFor404ResponseBodyApplicationJson = NotFound
@dataclass
class ApiResponseFor404(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[
SchemaFor404ResponseBodyApplicationJson,
]
headers: schemas.Unset = schemas.unset
_response_for_404 = api_client.OpenApiResponse(
response_cls=ApiResponseFor404,
content={
'application/json': api_client.MediaType(
schema=SchemaFor404ResponseBodyApplicationJson),
},
)
SchemaFor422ResponseBodyApplicationJson = ValidationError
@dataclass
class ApiResponseFor422(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[
SchemaFor422ResponseBodyApplicationJson,
]
headers: schemas.Unset = schemas.unset
_response_for_422 = api_client.OpenApiResponse(
response_cls=ApiResponseFor422,
content={
'application/json': api_client.MediaType(
schema=SchemaFor422ResponseBodyApplicationJson),
},
)
SchemaFor500ResponseBodyApplicationJson = InternalException
@dataclass
class ApiResponseFor500(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[
SchemaFor500ResponseBodyApplicationJson,
]
headers: schemas.Unset = schemas.unset
_response_for_500 = api_client.OpenApiResponse(
response_cls=ApiResponseFor500,
content={
'application/json': api_client.MediaType(
schema=SchemaFor500ResponseBodyApplicationJson),
},
)
_status_code_to_response = {
'200': _response_for_200,
'400': _response_for_400,
'401': _response_for_401,
'404': _response_for_404,
'422': _response_for_422,
'500': _response_for_500,
}
_all_accept_content_types = (
'application/vnd.api+json',
'application/json',
)
class BaseApi(api_client.Api):
@typing.overload
def _update_category_oapg(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,],
content_type: typing_extensions.Literal["application/json"] = ...,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[
ApiResponseFor200,
]: ...
@typing.overload
def _update_category_oapg(
self,
body: typing.Union[SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
content_type: typing_extensions.Literal["application/x-www-form-urlencoded"],
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[
ApiResponseFor200,
]: ...
@typing.overload
def _update_category_oapg(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
content_type: str = ...,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[
ApiResponseFor200,
]: ...
@typing.overload
def _update_category_oapg(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
skip_deserialization: typing_extensions.Literal[True],
content_type: str = ...,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
) -> api_client.ApiResponseWithoutDeserialization: ...
@typing.overload
def _update_category_oapg(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
content_type: str = ...,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = ...,
) -> typing.Union[
ApiResponseFor200,
api_client.ApiResponseWithoutDeserialization,
]: ...
def _update_category_oapg(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
content_type: str = 'application/json',
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = False,
):
"""
Update existing category.
:param skip_deserialization: If true then api_response.response will be set but
api_response.body and api_response.headers will not be deserialized into schema
class instances
"""
self._verify_typed_dict_inputs_oapg(RequestHeaderParams, header_params)
self._verify_typed_dict_inputs_oapg(RequestPathParams, path_params)
used_path = path.value
_path_params = {}
for parameter in (
request_path_id,
):
parameter_data = path_params.get(parameter.name, schemas.unset)
if parameter_data is schemas.unset:
continue
serialized_data = parameter.serialize(parameter_data)
_path_params.update(serialized_data)
for k, v in _path_params.items():
used_path = used_path.replace('{%s}' % k, v)
_headers = HTTPHeaderDict()
for parameter in (
request_header_x_trace_id,
):
parameter_data = header_params.get(parameter.name, schemas.unset)
if parameter_data is schemas.unset:
continue
serialized_data = parameter.serialize(parameter_data)
_headers.extend(serialized_data)
# TODO add cookie handling
if accept_content_types:
for accept_content_type in accept_content_types:
_headers.add('Accept', accept_content_type)
if body is schemas.unset:
raise exceptions.ApiValueError(
'The required body parameter has an invalid value of: unset. Set a valid value instead')
_fields = None
_body = None
serialized_data = request_body_category_update.serialize(body, content_type)
_headers.add('Content-Type', content_type)
if 'fields' in serialized_data:
_fields = serialized_data['fields']
elif 'body' in serialized_data:
_body = serialized_data['body']
response = self.api_client.call_api(
resource_path=used_path,
method='put'.upper(),
headers=_headers,
fields=_fields,
body=_body,
auth_settings=_auth,
stream=stream,
timeout=timeout,
)
if skip_deserialization:
api_response = api_client.ApiResponseWithoutDeserialization(response=response)
else:
response_for_status = _status_code_to_response.get(str(response.status))
if response_for_status:
api_response = response_for_status.deserialize(response, self.api_client.configuration)
else:
api_response = api_client.ApiResponseWithoutDeserialization(response=response)
if not 200 <= response.status <= 299:
raise exceptions.ApiException(
status=response.status,
reason=response.reason,
api_response=api_response
)
return api_response
class UpdateCategory(BaseApi):
# this class is used by api classes that refer to endpoints with operationId fn names
@typing.overload
def update_category(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,],
content_type: typing_extensions.Literal["application/json"] = ...,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[
ApiResponseFor200,
]: ...
@typing.overload
def update_category(
self,
body: typing.Union[SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
content_type: typing_extensions.Literal["application/x-www-form-urlencoded"],
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[
ApiResponseFor200,
]: ...
@typing.overload
def update_category(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
content_type: str = ...,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[
ApiResponseFor200,
]: ...
@typing.overload
def update_category(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
skip_deserialization: typing_extensions.Literal[True],
content_type: str = ...,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
) -> api_client.ApiResponseWithoutDeserialization: ...
@typing.overload
def update_category(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
content_type: str = ...,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = ...,
) -> typing.Union[
ApiResponseFor200,
api_client.ApiResponseWithoutDeserialization,
]: ...
def update_category(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
content_type: str = 'application/json',
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = False,
):
return self._update_category_oapg(
body=body,
header_params=header_params,
path_params=path_params,
content_type=content_type,
accept_content_types=accept_content_types,
stream=stream,
timeout=timeout,
skip_deserialization=skip_deserialization
)
class ApiForput(BaseApi):
# this class is used by api classes that refer to endpoints by path and http method names
@typing.overload
def put(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,],
content_type: typing_extensions.Literal["application/json"] = ...,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[
ApiResponseFor200,
]: ...
@typing.overload
def put(
self,
body: typing.Union[SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
content_type: typing_extensions.Literal["application/x-www-form-urlencoded"],
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[
ApiResponseFor200,
]: ...
@typing.overload
def put(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
content_type: str = ...,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[
ApiResponseFor200,
]: ...
@typing.overload
def put(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
skip_deserialization: typing_extensions.Literal[True],
content_type: str = ...,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
) -> api_client.ApiResponseWithoutDeserialization: ...
@typing.overload
def put(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
content_type: str = ...,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = ...,
) -> typing.Union[
ApiResponseFor200,
api_client.ApiResponseWithoutDeserialization,
]: ...
def put(
self,
body: typing.Union[SchemaForRequestBodyApplicationJson,SchemaForRequestBodyApplicationXWwwFormUrlencoded,],
content_type: str = 'application/json',
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = False,
):
return self._update_category_oapg(
body=body,
header_params=header_params,
path_params=path_params,
content_type=content_type,
accept_content_types=accept_content_types,
stream=stream,
timeout=timeout,
skip_deserialization=skip_deserialization
        )
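The `_update_category_oapg` plumbing above always ends the same way: look the HTTP status code up in `_status_code_to_response`, deserialize with the matching response class, fall back to a raw (non-deserialized) response for unknown codes, and raise for non-2xx. A minimal, self-contained sketch of that dispatch pattern follows; `RawResponse` and `dispatch` are illustrative names, not part of the generated client's API.

```python
# Illustrative sketch of the status-code dispatch used by the generated
# client above; not the client's actual API.
from dataclasses import dataclass

@dataclass
class RawResponse:
    status: int
    body: object

def dispatch(status, body, code_to_parser):
    # Look up a parser by status code, like _status_code_to_response.get(...)
    parser = code_to_parser.get(str(status))
    result = parser(body) if parser else RawResponse(status, body)
    # Mirror the final check: non-2xx responses raise, 2xx are returned
    if not 200 <= status <= 299:
        raise RuntimeError(f"API error {status}")
    return result
```

As in the generated code, unknown status codes degrade gracefully to a raw response instead of failing deserialization.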
# Flox-lib-0.20.0/flox/clipboard.py

import ctypes
from ctypes.wintypes import BOOL, HWND, HANDLE, HGLOBAL, UINT, LPVOID
from ctypes import c_size_t as SIZE_T
# Credit for code goes to Mark Ransom at https://stackoverflow.com/a/25678113
OpenClipboard = ctypes.windll.user32.OpenClipboard
OpenClipboard.argtypes = HWND,
OpenClipboard.restype = BOOL
EmptyClipboard = ctypes.windll.user32.EmptyClipboard
EmptyClipboard.restype = BOOL
GetClipboardData = ctypes.windll.user32.GetClipboardData
GetClipboardData.argtypes = UINT,
GetClipboardData.restype = HANDLE
SetClipboardData = ctypes.windll.user32.SetClipboardData
SetClipboardData.argtypes = UINT, HANDLE
SetClipboardData.restype = HANDLE
CloseClipboard = ctypes.windll.user32.CloseClipboard
CloseClipboard.restype = BOOL
CF_UNICODETEXT = 13
GlobalAlloc = ctypes.windll.kernel32.GlobalAlloc
GlobalAlloc.argtypes = UINT, SIZE_T
GlobalAlloc.restype = HGLOBAL
GlobalLock = ctypes.windll.kernel32.GlobalLock
GlobalLock.argtypes = HGLOBAL,
GlobalLock.restype = LPVOID
GlobalUnlock = ctypes.windll.kernel32.GlobalUnlock
GlobalUnlock.argtypes = HGLOBAL,
GlobalSize = ctypes.windll.kernel32.GlobalSize
GlobalSize.argtypes = HGLOBAL,
GlobalSize.restype = SIZE_T
GMEM_MOVEABLE = 0x0002
GMEM_ZEROINIT = 0x0040
unicode_type = type(u'')
class Clipboard(object):
def get(self):
return get()
def put(self, text):
return put(text)
def get():
"""
Get the contents of the clipboard.
"""
text = None
OpenClipboard(None)
handle = GetClipboardData(CF_UNICODETEXT)
pcontents = GlobalLock(handle)
size = GlobalSize(handle)
if pcontents and size:
raw_data = ctypes.create_string_buffer(size)
ctypes.memmove(raw_data, pcontents, size)
text = raw_data.raw.decode('utf-16le').rstrip(u'\0')
GlobalUnlock(handle)
CloseClipboard()
return text
def put(s):
"""
Put the given string onto the clipboard.
"""
if not isinstance(s, unicode_type):
s = s.decode('mbcs')
data = s.encode('utf-16le')
OpenClipboard(None)
EmptyClipboard()
handle = GlobalAlloc(GMEM_MOVEABLE | GMEM_ZEROINIT, len(data) + 2)
pcontents = GlobalLock(handle)
ctypes.memmove(pcontents, data, len(data))
GlobalUnlock(handle)
SetClipboardData(CF_UNICODETEXT, handle)
CloseClipboard()
def copy(s):
    put(s)
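`put()` and `get()` above exchange text with the Win32 clipboard as NUL-terminated UTF-16LE bytes (`CF_UNICODETEXT`). The API calls themselves only work on Windows, but the byte-level contract they rely on can be sketched and checked on any platform; `encode_clipboard` and `decode_clipboard` below are illustrative helpers, not part of this module.

```python
# Sketch of the CF_UNICODETEXT byte contract used by put()/get() above.
def encode_clipboard(s: str) -> bytes:
    # put() allocates len(data) + 2 bytes: the extra 2 are the UTF-16 NUL terminator
    return s.encode('utf-16le') + b'\x00\x00'

def decode_clipboard(raw: bytes) -> str:
    # get() decodes the whole global-memory block and strips trailing NULs
    return raw.decode('utf-16le').rstrip(u'\0')
```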
# Grid2Op-1.9.3/grid2op/Chronics/gridValue.py

import numpy as np
import warnings
from datetime import datetime, timedelta
from abc import ABC, abstractmethod
from grid2op.dtypes import dt_int
from grid2op.Space import RandomObject
from grid2op.Exceptions import EnvError
# TODO subsample or oversample the scenarios: need to modify everything that affects the number
# TODO of time steps there, for example "Space.gen_min_time_on" or "params.NB_TIMESTEP_POWERFLOW_ALLOWED" for
# TODO example. And more generally, it would be better to have all of these attributes exported / imported in
# TODO time intervals, instead of time steps.
# TODO add a class to sample "online" the data.
class GridValue(RandomObject, ABC):
"""
This is the base class for every kind of data for the _grid.
    It allows the :class:`grid2op.Environment` to perform powergrid modifications that make the "game" time dependent.
    It is not recommended to directly create :class:`GridValue` objects, but rather to use the
    :attr:`grid2op.Environment.chronics_handler` for such a purpose. This is made in an attempt to make sure
    :func:`GridValue.initialize` is called. Before this initialization, it is not recommended to use any
:class:`GridValue` object.
    The method :func:`GridValue.next_chronics` should be used between two epochs of the game. If there are no more
data to be generated from this object, then :func:`GridValue.load_next` should raise a :class:`StopIteration`
exception and a call to :func:`GridValue.done` should return True.
    In grid2op, the productions and loads (and hazards or maintenance) can be stored in this type
    of "GridValue". This class will map things generated (or read from a file) and assign each element
    of the powergrid its proper value at each time step.
Attributes
----------
time_interval: :class:`.datetime.timedelta`
Time interval between 2 consecutive timestamps. Default 5 minutes.
start_datetime: :class:`datetime.datetime`
The datetime of the first timestamp of the scenario.
current_datetime: :class:`datetime.datetime`
The timestamp of the current scenario.
max_iter: ``int``
        Maximum number of time steps to generate for one episode.
curr_iter: ``int``
Duration of the current episode.
maintenance_time: ``numpy.ndarray``, dtype:``int``
Number of time steps the next maintenance will take place with the following convention:
        - -1 no maintenance is planned for the foreseeable future
- 0 a maintenance is taking place
- 1, 2, 3 ... a maintenance will take place in 1, 2, 3, ... time step
Some examples are given in :func:`GridValue.maintenance_time_1d`.
maintenance_duration: ``numpy.ndarray``, dtype:``int``
        Duration of the next maintenance. 0 means no maintenance is planned. If a maintenance is planned for a
        given powerline, this number decreases each time step, down to 0 when the maintenance is over. Note
        that if a maintenance is planned (see :attr:`GridValue.maintenance_time`) this number indicates how long
        the maintenance will last, and does not imply anything about whether the maintenance is currently taking
        place (= this number can be positive without a powerline being removed from the grid for maintenance). Some examples are
given in :func:`GridValue.maintenance_duration_1d`.
hazard_duration: ``numpy.ndarray``, dtype:``int``
        Duration of the next hazard. 0 means no hazard is happening. If a hazard is taking place for a
        given powerline, this number decreases each time step, down to 0 when the hazard is over. Contrary
        to :attr:`GridValue.maintenance_duration`, if a component of this vector is higher than 1, it
        means that the powerline is out of service. Some examples are
given in :func:`GridValue.get_hazard_duration_1d`.
"""
NAN_BUT_IN_INT = -9999999
def __init__(
self,
time_interval=timedelta(minutes=5),
max_iter=-1,
start_datetime=datetime(year=2019, month=1, day=1),
chunk_size=None,
):
RandomObject.__init__(self)
self.time_interval = time_interval
self.current_datetime = start_datetime
self.start_datetime = start_datetime
self._max_iter = max_iter
self.curr_iter = 0
self.maintenance_time = None
self.maintenance_duration = None
self.hazard_duration = None
def get_kwargs(self, dict_):
"""
Overload this function if you want to pass some data when building
a new instance of this class.
"""
pass
@property
def max_iter(self):
return self._max_iter
@max_iter.setter
def max_iter(self, value : int):
self._max_iter = int(value)
@abstractmethod
def initialize(
self,
order_backend_loads,
order_backend_prods,
order_backend_lines,
order_backend_subs,
names_chronics_to_backend,
):
"""
This function is used to initialize the data generator.
        It can be used to load scenarios, or to initialize noise if scenarios are generated on the fly. It must also
initialize :attr:`GridValue.maintenance_time`, :attr:`GridValue.maintenance_duration` and
:attr:`GridValue.hazard_duration`.
        This function should also increment :attr:`GridValue.curr_iter` by 1 each time it is called.
The :class:`GridValue` is what makes the connection between the data (generally in a shape of files on the
hard drive) and the power grid. One of the main advantage of the Grid2Op package is its ability to change
the tool that computes the load flows. Generally, such :class:`grid2op.Backend` expects data in a specific
format that is given by the way their internal powergrid is represented, and in particular, the "same"
        objects can have different names and different positions. To ensure that the same chronics
        produce the same results on every backend (**ie** regardless of the order in which the Backend expects
the data, the outcome of the powerflow is the same) we encourage the user to provide a file that maps the name
of the object in the chronics to the name of the same object in the backend.
        This is done with the "names_chronics_to_backend" dictionary that has the following keys:
- "loads"
- "prods"
- "lines"
        The value associated with each of these keys is in turn a mapping dictionary from the chronics to the backend.
        This means that each *key* of these sub-dictionaries is the name of one column in the files, and each value
        is the corresponding name of this same object in the backend. An example is provided below.
Parameters
----------
order_backend_loads: ``numpy.ndarray``, dtype:str
Ordered name, in the Backend, of the loads. It is required that a :class:`grid2op.Backend` object always
            outputs the information in the same order. This array gives the names of the loads following this order.
See the documentation of :mod:`grid2op.Backend` for more information about this.
order_backend_prods: ``numpy.ndarray``, dtype:str
Same as order_backend_loads, but for generators.
order_backend_lines: ``numpy.ndarray``, dtype:str
            Same as order_backend_loads, but for powerlines.
        order_backend_subs: ``numpy.ndarray``, dtype:str
            Same as order_backend_loads, but for substations.
names_chronics_to_backend: ``dict``
See in the description of the method for more information about its format.
Examples
--------
For example, suppose we have a :class:`grid2op.Backend` with:
        - substation ids start from 0 to N-1 (N being the number of substations in the powergrid)
        - loads are named "load_i" with "i" the id of the substation to which it is connected
        - generator units are named "gen_i" (i still being the substation id to which it is connected)
        - powerlines are named "i_j" if they connect substation i to substation j
And on the other side, we have some files with the following conventions:
- substations are numbered from 1 to N
- loads are named "i_C" with i being the substation to which it is connected
        - generators are named "i_G" with i being the id of the substation to which it is connected
        - powerlines are named "i_j_k" where i is the origin substation, j the extremity substation and "k"
          is a unique identifier of this powerline in the powergrid.
        In this case, instead of renaming the powergrid (in the backend) or the data files, it is advised to build the
following elements and initialize the object gridval of type :class:`GridValue` with:
.. code-block:: python
gridval = GridValue() # Note: this code won't execute because "GridValue" is an abstract class
order_backend_loads = ['load_1', 'load_2', 'load_13', 'load_3', 'load_4', 'load_5', 'load_8', 'load_9',
'load_10', 'load_11', 'load_12']
order_backend_prods = ['gen_1', 'gen_2', 'gen_5', 'gen_7', 'gen_0']
order_backend_lines = ['0_1', '0_4', '8_9', '8_13', '9_10', '11_12', '12_13', '1_2', '1_3', '1_4', '2_3',
'3_4', '5_10', '5_11', '5_12', '3_6', '3_8', '4_5', '6_7', '6_8']
order_backend_subs = ['sub_0', 'sub_1', 'sub_10', 'sub_11', 'sub_12', 'sub_13', 'sub_2', 'sub_3', 'sub_4',
'sub_5', 'sub_6', 'sub_7', 'sub_8', 'sub_9']
names_chronics_to_backend = {"loads": {"2_C": 'load_1', "3_C": 'load_2',
"14": 'load_13', "4_C": 'load_3', "5_C": 'load_4',
"6_C": 'load_5', "9_C": 'load_8', "10_C": 'load_9',
"11_C": 'load_10', "12_C": 'load_11',
"13_C": 'load_12'},
"lines": {'1_2_1': '0_1', '1_5_2': '0_4', '9_10_16': '8_9', '9_14_17': '8_13',
'10_11_18': '9_10', '12_13_19': '11_12', '13_14_20': '12_13',
'2_3_3': '1_2', '2_4_4': '1_3', '2_5_5': '1_4', '3_4_6': '2_3',
'4_5_7': '3_4', '6_11_11': '5_10', '6_12_12': '5_11',
'6_13_13': '5_12', '4_7_8': '3_6', '4_9_9': '3_8', '5_6_10': '4_5',
'7_8_14': '6_7', '7_9_15': '6_8'},
"prods": {"1_G": 'gen_0', "3_G": "gen_2", "6_G": "gen_5",
"2_G": "gen_1", "8_G": "gen_7"},
}
            gridval.initialize(order_backend_loads, order_backend_prods, order_backend_lines, order_backend_subs,
                               names_chronics_to_backend)
"""
self.curr_iter += 1
self.current_datetime += self.time_interval
@staticmethod
def get_maintenance_time_1d(maintenance):
"""
        This function allows to transform a 1d numpy array maintenance, which specifies:
- 0 there is no maintenance at this time step
- 1 there is a maintenance at this time step
Into the representation in terms of "next maintenance time" as specified in
:attr:`GridValue.maintenance_time` which is:
- `-1` no foreseeable maintenance operation will be performed
- `0` a maintenance operation is being performed
        - `1`, `2` etc. is the number of time steps until the next maintenance is performed.
Parameters
----------
maintenance: ``numpy.ndarray``
1 dimensional array representing the time series of the maintenance (0 there is no maintenance, 1 there
is a maintenance at this time step)
Returns
-------
        maintenance_time: ``numpy.ndarray``
            Array representing the time series of the time until the next foreseeable maintenance.
Examples
--------
        If no maintenance is planned:
.. code-block:: python
maintenance_time = GridValue.get_maintenance_time_1d(np.array([0 for _ in range(10)]))
assert np.all(maintenance_time == np.array([-1 for _ in range(10)]))
        If a maintenance of 3 time steps is planned starting at timestep 6 (index 5 - index starts at 0)
.. code-block:: python
maintenance = np.array([0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0])
maintenance_time = GridValue.get_maintenance_time_1d(maintenance)
assert np.all(maintenance_time == np.array([5,4,3,2,1,0,0,0,-1,-1,-1,-1,-1,-1,-1,-1]))
        If a maintenance of 3 time steps is planned starting at timestep 6
        (index 5 - index starts at 0), and a second one of 2 time steps at time step 13
.. code-block:: python
maintenance = np.array([0,0,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0])
maintenance_time = GridValue.get_maintenance_time_1d(maintenance)
assert np.all(maintenance_time == np.array([5,4,3,2,1,0,0,0,4,3,2,1,0,0,-1,-1,-1]))
"""
res = np.full(maintenance.shape, fill_value=GridValue.NAN_BUT_IN_INT, dtype=dt_int)
maintenance = np.concatenate((maintenance, (0, 0)))
a = np.diff(maintenance)
# +1 is because numpy does the diff `t+1` - `t` so to get index of the initial array
# I need to "+1"
start = np.where(a == 1)[0] + 1 # start of maintenance
end = np.where(a == -1)[0] + 1 # end of maintenance
prev_ = 0
# it's efficient here as i do a loop only on the number of time there is a maintenance
# and maintenance are quite rare
for beg_, end_ in zip(start, end):
res[prev_:beg_] = list(range(beg_ - prev_, 0, -1))
res[beg_:end_] = 0
prev_ = end_
        # no maintenance is planned in the foreseeable future
res[prev_:] = -1
return res
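For reference, the encoding produced above can also be computed with a single backward scan and no numpy; the function below is an illustrative pure-Python re-implementation matching the docstring examples, not the library's own code.

```python
# Illustrative pure-Python equivalent of get_maintenance_time_1d: scan the
# 0/1 series right-to-left, counting steps until the next maintenance.
def maintenance_time_1d(maintenance):
    res = [0] * len(maintenance)
    nxt = -1  # steps until the next maintenance; -1 when none lies ahead
    for i in range(len(maintenance) - 1, -1, -1):
        if maintenance[i]:
            nxt = 0
        elif nxt >= 0:
            nxt += 1
        res[i] = nxt
    return res
```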
@staticmethod
def get_maintenance_duration_1d(maintenance):
"""
        This function allows to transform a 1d numpy array maintenance (or hazards), which specifies:
- 0 there is no maintenance at this time step
- 1 there is a maintenance at this time step
Into the representation in terms of "next maintenance duration" as specified in
:attr:`GridValue.maintenance_duration` which is:
        - `0` no foreseeable maintenance operation will be performed
        - `1`, `2` etc. is the number of time steps the next maintenance will last (it can be positive even in the
          case that no maintenance is currently being performed).
Parameters
----------
maintenance: ``numpy.ndarray``
1 dimensional array representing the time series of the maintenance (0 there is no maintenance, 1 there
is a maintenance at this time step)
Returns
-------
maintenance_duration: ``numpy.ndarray``
            Array representing the time series of the duration of the next foreseeable maintenance.
Examples
--------
        If no maintenance is planned:
.. code-block:: python
maintenance = np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0])
maintenance_duration = GridValue.get_maintenance_duration_1d(maintenance)
assert np.all(maintenance_duration == np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]))
        If a maintenance of 3 time steps is planned starting at timestep 6 (index 5 - index starts at 0)
.. code-block:: python
maintenance = np.array([0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0])
maintenance_duration = GridValue.get_maintenance_duration_1d(maintenance)
assert np.all(maintenance_duration == np.array([3,3,3,3,3,3,2,1,0,0,0,0,0,0,0,0]))
        If a maintenance of 3 time steps is planned starting at timestep 6
        (index 5 - index starts at 0), and a second one of 2 time steps at time step 13
.. code-block:: python
maintenance = np.array([0,0,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0])
maintenance_duration = GridValue.get_maintenance_duration_1d(maintenance)
assert np.all(maintenance_duration == np.array([3,3,3,3,3,3,2,1,2,2,2,2,2,1,0,0,0]))
"""
res = np.full(maintenance.shape, fill_value=GridValue.NAN_BUT_IN_INT, dtype=dt_int)
maintenance = np.concatenate((maintenance, (0, 0)))
a = np.diff(maintenance)
# +1 is because numpy does the diff `t+1` - `t` so to get index of the initial array
# I need to "+1"
start = np.where(a == 1)[0] + 1 # start of maintenance
end = np.where(a == -1)[0] + 1 # end of maintenance
prev_ = 0
# it's efficient here as i do a loop only on the number of time there is a maintenance
# and maintenance are quite rare
for beg_, end_ in zip(start, end):
res[prev_:beg_] = end_ - beg_
res[beg_:end_] = list(range(end_ - beg_, 0, -1))
prev_ = end_
        # no maintenance is planned in the foreseeable future
res[prev_:] = 0
return res
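The duration encoding admits the same kind of backward scan: during a window, count the remaining steps; before a window, propagate the full length of the next window. An illustrative pure-Python equivalent (not the library's own code):

```python
# Illustrative pure-Python equivalent of get_maintenance_duration_1d.
def maintenance_duration_1d(maintenance):
    res = [0] * len(maintenance)
    nxt = 0  # remaining duration of the next/current maintenance window
    n = len(maintenance)
    for i in range(n - 1, -1, -1):
        if maintenance[i]:
            # last step of a window (seen from the right) restarts the count
            nxt = 1 if (i == n - 1 or not maintenance[i + 1]) else nxt + 1
        res[i] = nxt
    return res
```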
@staticmethod
def get_hazard_duration_1d(hazard):
"""
This function transforms a 1d numpy array of hazards, where:
- 0 means there is no hazard at this time step
- 1 means there is a hazard at this time step
into the representation in terms of "hazard duration" as specified in
:attr:`GridValue.hazard_duration`, which is:
- `0` no foreseeable hazard operation will be performed
- `1`, `2` etc. is the number of time steps the next hazard will last (it is positive only when a hazard
affects a given powerline)
Compared to :func:`GridValue.get_maintenance_duration_1d`, we only know how long a hazard
will last once it occurs.
Parameters
----------
hazard: ``numpy.ndarray``
1 dimensional array representing the time series of the hazards (0 there is no hazard, 1 there
is a hazard at this time step)
Returns
-------
hazard_duration: ``numpy.ndarray``
Array representing the time series of the duration of the next foreseeable hazard.
Examples
--------
If no hazards are planned:
.. code-block:: python
hazard = np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0])
hazard_duration = GridValue.get_hazard_duration_1d(hazard)
assert np.all(hazard_duration == np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]))
If a hazard of 3 time steps is planned starting at time step 6 (index 5 - index starts at 0)
.. code-block:: python
hazard = np.array([0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0])
hazard_duration = GridValue.get_hazard_duration_1d(hazard)
assert np.all(hazard_duration == np.array([0,0,0,0,0,3,2,1,0,0,0,0,0,0,0,0]))
If a hazard of 3 time steps is planned starting at time step 6
(index 5 - index starts at 0), and a second one of 2 time steps at time step 13
.. code-block:: python
hazard = np.array([0,0,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0])
hazard_duration = GridValue.get_hazard_duration_1d(hazard)
assert np.all(hazard_duration == np.array([0,0,0,0,0,3,2,1,0,0,0,0,2,1,0,0,0]))
"""
res = np.full(hazard.shape, fill_value=GridValue.NAN_BUT_IN_INT, dtype=dt_int)
hazard = np.concatenate((hazard, (0, 0)))
a = np.diff(hazard)
# +1 is because numpy does the diff `t+1` - `t` so to get index of the initial array
# I need to "+1"
start = np.where(a == 1)[0] + 1  # start of hazard
end = np.where(a == -1)[0] + 1  # end of hazard
prev_ = 0
# it's efficient here as the loop only runs once per hazard,
# and hazards are quite rare
for beg_, end_ in zip(start, end):
res[prev_:beg_] = 0
res[(beg_):(end_)] = list(range(end_ - beg_, 0, -1))
prev_ = end_
# no hazards are planned in the foreseeable future
res[prev_:] = 0
return res
@abstractmethod
def load_next(self):
"""
INTERNAL
.. warning:: /!\\\\ Internal, do not use unless you know what you are doing /!\\\\
This is automatically called by the "env.step" function. It loads the next information
about the grid state (load p and load q, prod p and prod v as well as some maintenance
or hazards information)
Generate the next values, either by reading from a file, or by generating on the fly and return a dictionary
compatible with the :class:`grid2op.BaseAction` class allowed for the :class:`Environment`.
More information about this dictionary can be found at :func:`grid2op.BaseAction.update`.
As a (quick) reminder: this dictionary has for keys:
- "injection" (optional): a dictionary with keys (optional) "load_p", "load_q", "prod_p", "prod_v"
- "hazards" (optional) : the outage suffered from the _grid
- "maintenance" (optional) : the maintenance operations planned on the grid for the current time step.
Returns
-------
timestamp: ``datetime.datetime``
The current timestamp for which the modifications have been generated.
dict_: ``dict``
Always empty, indicating that no modification is performed (in this case)
maintenance_time: ``numpy.ndarray``, dtype:``int``
Information about the next planned maintenance. See :attr:`GridValue.maintenance_time` for more information.
maintenance_duration: ``numpy.ndarray``, dtype:``int``
Information about the duration of next planned maintenance. See :attr:`GridValue.maintenance_duration`
for more information.
hazard_duration: ``numpy.ndarray``, dtype:``int``
Information about the current hazard. See :attr:`GridValue.hazard_duration`
for more information.
prod_v: ``numpy.ndarray``, dtype:``float``
the (stored) value of the generator voltage setpoint
Raises
------
StopIteration
if the chronics is over
"""
self.current_datetime += self.time_interval
return (
self.current_datetime,
{},
self.maintenance_time,
self.maintenance_duration,
self.hazard_duration,
None,
)
@abstractmethod
def check_validity(self, backend):
"""
INTERNAL
.. warning:: /!\\\\ Internal, do not use unless you know what you are doing /!\\\\
This is called at the creation of the environment to ensure the Backend and the chronics
are consistent with one another.
A call to this method ensure that the action that will be sent to the current :class:`grid2op.Environment`
can be properly implemented by its :class:`grid2op.Backend`.
This specific method check that the dimension of all vectors are consistent
Parameters
----------
backend: :class:`grid2op.Backend.Backend`
The backend used by the :class:`grid2op.Environment.Environment`
"""
raise EnvError("check_validity not implemented")
def done(self):
"""
INTERNAL
.. warning:: /!\\\\ Internal, do not use unless you know what you are doing /!\\\\
Use the :class:`ChronicsHandler` for such purpose
Whether the episode is over or not.
Returns
-------
done: ``bool``
``True`` means the episode has reached its end (no more data to generate), ``False`` means the episode
is not over yet.
"""
if self.max_iter >= 0:
return self.curr_iter >= self.max_iter
else:
return False
def forecasts(self):
"""
INTERNAL
.. warning:: /!\\\\ Internal, do not use unless you know what you are doing /!\\\\
Use the :class:`ChronicsHandler` for such purpose
This method is used to generate the forecasts that are made available to the :class:`grid2op.BaseAgent`.
These forecasts behave the same way as a list of tuples like the one returned by the
:func:`GridValue.load_next` method.
The way they are generated depends on the GridValue class. If no forecasts are made available, then
the empty list should be returned.
Returns
-------
res: ``list``
Each element of this list having the same type as what is returned by :func:`GridValue.load_next`.
"""
return []
@abstractmethod
def next_chronics(self):
"""
INTERNAL
.. warning:: /!\\\\ Internal, do not use unless you know what you are doing /!\\\\
Move to the next "chronics", representing the next "level" if we make the parallel
with video games.
A call to this function should at least restart:
- :attr:`GridValue.current_datetime` to its origin value
- :attr:`GridValue.curr_iter`
"""
pass
def tell_id(self, id_num, previous=False):
"""
Tell the backend to use one folder for the chronics in particular. This method is mainly used when the GridValue
object can deal with many folders. In this case, this method is used by the :class:`grid2op.Runner` to indicate
which chronics to load for the current simulated episode.
This is important to ensure reproducibility, especially in parallel computation settings.
This should also be used in case of "on the fly" generation of the chronics to ensure the same property.
By default it does nothing.
.. note::
As of grid2op 1.6.4, this function now accepts the return value of `self.get_id()`.
"""
warnings.warn(
'Class {} doesn\'t handle different input folder. "tell_id" method has no impact.'
"".format(type(self).__name__)
)
def get_id(self) -> str:
"""
Utility to get the path of the data currently being looked at, if the data are files.
This could also be used to return a unique identifier to the generated chronics even in the case where they are
generated on the fly, for example by return a hash of the seed.
Returns
-------
res: ``str``
A unique identifier of the chronics generated for this episode. For example, if the chronics comes from a
specific folder, this could be the path to this folder.
"""
warnings.warn(
'Class {} doesn\'t handle different input folder. "get_id" method will return "".'
"".format(type(self).__name__)
)
return ""
def max_timestep(self):
"""
This method returns the maximum number of time steps that the current episode can last.
Note that if the :class:`grid2op.BaseAgent` performs a bad action that leads to a game over, then the episode
can last fewer time steps.
Returns
-------
res: ``int``
-1 if possibly infinite length or a positive integer representing the maximum duration of this episode
"""
return self.max_iter
def shuffle(self, shuffler=None):
"""
This method can be overridden if the data that are represented by this object need to be shuffled.
By default it does nothing.
Parameters
----------
shuffler: ``object``
Any function that can be used to shuffle the data.
"""
pass
def sample_next_chronics(self, probabilities=None):
"""
This is used to sample the next chronics with the given probabilities.
Parameters
-----------
probabilities: ``np.ndarray``
Array of probabilities with the same size as the number of chronics in the cache.
If it does not sum to one, it is rescaled so that it sums to one.
Returns
-------
selected: ``int``
The integer that was selected.
Examples
--------
Let's assume in your chronics, the folder names are "Scenario_august_dummy", and
"Scenario_february_dummy". For the sake of the example, we want the environment to loop
75% of the time to the month of february and 25% of the time to the month of august.
.. code-block:: python
import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1", test=True) # don't add "test=True" if
# you don't want to perform a test.
# check at which month will belong each observation
for i in range(10):
obs = env.reset()
print(obs.month)
# it alternately prints "8" (if the chronics is from august) or
# "2" (if the chronics is from february) with a probability of 50% / 50%
env.seed(0) # for reproducible experiment
for i in range(10):
_ = env.chronics_handler.sample_next_chronics([0.25, 0.75])
obs = env.reset()
print(obs.month)
# it prints "2" with probability 0.75 and "8" with probability 0.25
"""
return -1
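A standalone sketch of the documented sampling behaviour — weights rescaled to sum to one before drawing — assuming only numpy; the helper name below is hypothetical, not a grid2op API:

```python
import numpy as np

def sample_chronics_id(n_chronics, probabilities=None, seed=None):
    # Hypothetical helper mirroring the docstring: rescale the weights so
    # they sum to one, then draw one chronics index accordingly.
    rng = np.random.default_rng(seed)
    if probabilities is None:
        probabilities = np.ones(n_chronics)
    probabilities = np.asarray(probabilities, dtype=float)
    probabilities = probabilities / probabilities.sum()   # rescale to sum to one
    return int(rng.choice(n_chronics, p=probabilities))

# [1, 3] rescales to [0.25, 0.75], the same distribution used in the example
draws = [sample_chronics_id(2, [1, 3], seed=s) for s in range(1000)]
print(sum(d == 1 for d in draws) / len(draws))   # ≈ 0.75
```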
def set_filter(self, filter_fun):
"""
Assign a filtering function to remove some chronics the next time "reset_cache" is called.
**NB** filter_fun is applied to all elements of :attr:`Multifolder.subpaths`. If it returns ``True`` the data will
be put in the cache, if ``False`` the data will NOT be put in the cache.
**NB** this has no effect until :attr:`Multifolder.reset` is called.
Notes
------
As of now, this has no effect unless the chronics are generated using :class:`Multifolder`
or :class:`MultifolderWithCache`
Examples
--------
Let's assume in your chronics, the folder names are "Scenario_august_dummy", and
"Scenario_february_dummy". For the sake of the example, we want the environment to loop
only through the month of february, because why not. Then we can do the following:
.. code-block:: python
import re
import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1", test=True) # don't add "test=True" if
# you don't want to perform a test.
# check at which month will belong each observation
for i in range(10):
obs = env.reset()
print(obs.month)
# it alternately prints "8" (if the chronics is from august) or
# "2" (if the chronics is from february)
# to see where the chronics are located
print(env.chronics_handler.subpaths)
# keep only the month of february
env.chronics_handler.set_filter(lambda path: re.match(".*february.*", path) is not None)
env.chronics_handler.reset() # if you don't do that it will not have any effect
for i in range(10):
obs = env.reset()
print(obs.month)
# it always prints "2" (representing february)
"""
warnings.warn(
f'Calling this function has no effect for chronics generated from "{type(self)}"'
)
def set_chunk_size(self, new_chunk_size):
"""
This parameter allows to set, if the data generation process supports it, the amount of data that is read
at the same time. It can help speed up the computation by adding more control over the IO operations.
Parameters
----------
new_chunk_size: ``int``
The chunk size (ie the number of rows that will be read on each data set at the same time)
"""
pass
def fast_forward(self, nb_timestep):
"""
INTERNAL
.. warning:: /!\\\\ Internal, do not use unless you know what you are doing /!\\\\
Prefer using :func:`grid2op.Environment.BaseEnv.fast_forward_chronics`
This method allows you to skip some time steps at the beginning of the chronics.
This is useful at the beginning of the training, if you want your agent to learn on more diverse scenarios.
Indeed, the data provided in the chronics usually always start at the same date and time (for example Jan 1st at
00:00). This can lead to suboptimal exploration, as during this phase only a few time steps are managed by
the agent, so in general these few time steps will correspond to grid states around Jan 1st at 00:00.
Parameters
----------
nb_timestep: ``int``
Number of time step to "fast forward"
"""
for _ in range(nb_timestep):
self.load_next()
/MOM-Tapyr-1.6.2.tar.gz/MOM-Tapyr-1.6.2/_Attr/Polisher.py
from __future__ import division, print_function
from __future__ import absolute_import, unicode_literals
from _MOM import MOM
from _TFL import TFL
import _MOM._Attr
from _TFL.predicate import uniq
from _TFL.pyk import pyk
from _TFL.Regexp import *
import _TFL._Meta.Object
class _Polisher_ (TFL.Meta.Object) :
"""Base class for Polishers"""
buddies = ()
guard = None
polish_empty = False
pre_complete = True
def __init__ (self, ** kw) :
for k, v in pyk.iteritems (kw) :
### use `setattr` to allow `property.__set__` to intervene
setattr (self, k, v)
# end def __init__
def __call__ \
( self, attr, value_dict
, essence = None
, picky = False
, value = None
) :
"""Polish value of `attr` in `value_dict`, if any, return polished `value_dict`."""
result = dict (value_dict)
name = attr.name
value = self._attr_value (attr, name, value, value_dict, essence)
if isinstance (value, pyk.string_types) :
guard = self.guard
value = value.strip ()
if ( (self.polish_empty or value)
and (guard is None or guard (value))
) :
polished = self._polished \
(attr, name, value, value_dict, essence, picky)
result.update (polished)
return result
# end def __call__
def Instance (self, attr) :
return _Polisher_Instance_ (self, attr)
# end def Instance
def _attr_value (self, attr, name, value, value_dict, essence) :
result = value
if result is None :
result = value_dict.get (name)
return result
# end def _attr_value
def _polished (self, attr, name, value, value_dict, essence, picky) :
raise NotImplementedError \
( "%s needs to implement either __call__ or _polished"
% (self.__class__, )
)
# end def _polished
# end class _Polisher_
class _Polisher_Instance_ (TFL.Meta.Object) :
"""Polisher instance for a specific instance of MOM.Attr.Kind."""
def __init__ (self, polisher, attr) :
self.attr = attr
self.name = attr.name
self.polisher = polisher
# end def __init__
def __call__ (self, * args, ** kw) :
return self.polisher (* args, ** kw)
# end def __call__
@TFL.Meta.Once_Property
def names (self) :
return tuple (uniq ((self.name, ) + self.polisher.buddies))
# end def names
def __getattr__ (self, name) :
if name.startswith ("__") and name.endswith ("__") :
### Placate inspect.unwrap of Python 3.5,
### which accesses `__wrapped__` and eventually throws `ValueError`
return getattr (self.__super, name)
result = getattr (self.polisher, name)
setattr (self, name, result)
return result
# end def __getattr__
# end class _Polisher_Instance_
class Match_Split (_Polisher_) :
"""Polisher splitting a value into several attribute values."""
def __init__ (self, matcher, ** kw) :
if isinstance (matcher, (Regexp, ) + pyk.string_types) :
matcher = Multi_Regexp (matcher)
self.__super.__init__ (matcher = matcher, ** kw)
# end def __init__
def add (self, * matchers, ** kw) :
self.matcher.add (* matchers, ** kw)
# end def add
def _polished (self, attr, name, value, value_dict, essence, picky) :
result = {}
match = self.matcher.search (value)
if match is not None :
dct = match.groupdict ()
for k, v in pyk.iteritems (dct) :
if v is not None :
result [k] = v
return result
# end def _polished
# end class Match_Split
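What `Match_Split._polished` does can be shown standalone with plain `re`; the date pattern below is illustrative, not part of MOM:

```python
import re

def match_split(matcher, value):
    # Run the regexp against the raw value and fan the named groups
    # out into a dict of attribute values, dropping unmatched groups.
    result = {}
    match = matcher.search(value)
    if match is not None:
        result = {k: v for k, v in match.groupdict().items() if v is not None}
    return result

date_matcher = re.compile(r"(?P<year>\d{4})-(?P<month>\d{2})(?:-(?P<day>\d{2}))?")
print(match_split(date_matcher, "2015-08"))
# → {'year': '2015', 'month': '08'}
```

Dropping `None` groups is what lets one pattern with optional parts feed several optional attributes at once.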
class Replace (_Polisher_) :
"""Polisher replacing matches of regular expressions."""
def __init__ (self, replacer, ** kw) :
self.__super.__init__ (replacer = replacer, ** kw)
# end def __init__
def _polished (self, attr, name, value, value_dict, essence, picky) :
result = { name : self.replacer (value) }
return result
# end def _polished
# end class Replace
def _capitalize_words (match) :
words = match.group (0).split ("-")
result = "-".join (w.capitalize () for w in words)
return result
# end def _capitalize_words
### `_uni_case_word` should use `[:lower:]` and `[:upper:]` but
### unfortunately Python regular expressions don't support these
_uni_case_word_pat = \
( r"("
r"[a-zäüöß0-9][-a-zäüöß0-9]*"
r"|"
r"[A-ZÄÜÖ0-9][-A-ZÄÜÖ0-9]*"
r")"
)
capitalize = Replace \
( Re_Replacer
( r"\b"
+ _uni_case_word_pat
+ r"\b"
, _capitalize_words
, re.UNICODE
)
)
capitalize_if_lower_case = Replace \
( Re_Replacer (r"\w+", _capitalize_words, re.UNICODE)
, guard = lambda s : s.islower ()
)
capitalize_if_not_mixed_case = Replace \
( Re_Replacer (r"\w+", _capitalize_words, re.UNICODE)
, guard = TFL.is_not_mixed_case
)
capitalize_last_word = Replace \
( Re_Replacer
( r"\b"
+ _uni_case_word_pat
+ r"$"
, _capitalize_words
, re.UNICODE
)
)
compress_spaces = Replace (Re_Replacer (r"[ \t]+", " ", re.UNICODE))
capitalize_compress_spaces = Replace \
( Multi_Re_Replacer
( capitalize.replacer
, compress_spaces.replacer
)
, guard = capitalize.guard
)
capitalize_if_lower_case_compress_spaces = Replace \
( Multi_Re_Replacer
( capitalize_if_lower_case.replacer
, compress_spaces.replacer
)
, guard = capitalize_if_lower_case.guard
)
capitalize_if_not_mixed_case_compress_spaces = Replace \
( Multi_Re_Replacer
( capitalize_if_not_mixed_case.replacer
, compress_spaces.replacer
)
, guard = capitalize_if_not_mixed_case.guard
)
capitalize_last_word_compress_spaces = Replace \
( Multi_Re_Replacer
( capitalize_last_word.replacer
, compress_spaces.replacer
)
, guard = capitalize_last_word.guard
)
_phone_cc_pat = r"(?:(?:\+ *|00)(?P<cc>\d+))"
_phone_ndc_pat = r"(?P<ndc>\d+)"
_phone_sn_pat = r"(?P<sn>\d+)\s*"
def _phone_multi_regexp (tail = "") :
return Multi_Regexp \
( Regexp
( r"^"
+ r"(?:" + _phone_cc_pat + " +|0 *)?"
+ _phone_ndc_pat
+ ((r" " + tail) if tail else "")
+ r"$"
, re.UNICODE
)
, Regexp
( r"^"
+ r"(?:" + _phone_cc_pat + "|0)? *"
+ r"\("
+ _phone_ndc_pat
+ r"\) *"
+ tail
+ r"$"
, re.UNICODE
)
, Regexp
( r"^"
+ r"(?:" + _phone_cc_pat + "|0)? *"
+ r"/"
+ _phone_ndc_pat
+ r"/"
+ ("" if tail else "?")
+ " *"
+ tail
+ r"$"
, re.UNICODE
)
, Regexp
( r"^"
+ r"(?:" + _phone_cc_pat + "|0)? *"
+ r"-"
+ _phone_ndc_pat
+ r"-"
+ ("" if tail else "?")
+ " *"
+ tail
+ r"$"
, re.UNICODE
)
, Regexp
( r"^"
+ r"(?:" + _phone_cc_pat + "|0)? *"
+ ((r" +" + tail) if tail else "")
+ r"$"
, re.UNICODE
)
)
# end def _phone_multi_regexp
phone_cc_clean = Match_Split (r"^" + _phone_cc_pat + r"$")
phone_ndc_split = Match_Split (_phone_multi_regexp ())
phone_sn_split = Match_Split (_phone_multi_regexp (_phone_sn_pat))
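One of the alternations built by `_phone_multi_regexp` can be reproduced with plain `re` to show the cc/ndc/sn split; this is a simplified single pattern, not the full multi-regexp:

```python
import re

# Simplified: optional "+cc"/"00cc" country code or a bare leading 0, then
# network destination code and subscriber number separated by spaces.
PHONE = re.compile(
    r"^(?:(?:\+ *|00)(?P<cc>\d+) +|0 *)?"
    r"(?P<ndc>\d+) "
    r"(?P<sn>\d+)\s*$"
)

def split_phone(value):
    match = PHONE.match(value)
    return {k: v for k, v in match.groupdict().items() if v is not None}

print(split_phone("+43 664 12345678"))
# → {'cc': '43', 'ndc': '664', 'sn': '12345678'}
print(split_phone("0664 12345678"))
# → {'ndc': '664', 'sn': '12345678'}
```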
_test_capitalize = """
>>> from _TFL.Record import Record
>>> attr = Record (name = "test")
>>> def test (polisher, value) :
... rd = polisher (attr, dict (test = value))
... print (rd ["test"])
>>> l = "fröhlich-tanzer"
>>> m = "Fröhlich-Tanzer"
>>> u = "FRÖHLICH-TANZER"
>>> test (capitalize, l)
Fröhlich-Tanzer
>>> test (capitalize, m)
Fröhlich-Tanzer
>>> test (capitalize, u)
Fröhlich-Tanzer
>>> test (capitalize_last_word, l)
Fröhlich-Tanzer
>>> test (capitalize_last_word, m)
Fröhlich-Tanzer
>>> test (capitalize_last_word, u)
Fröhlich-Tanzer
>>> l = "christian fröhlich-tanzer"
>>> m = "Christian Fröhlich-Tanzer"
>>> u = "CHRISTIAN FRÖHLICH-TANZER"
>>> test (capitalize, l)
Christian Fröhlich-Tanzer
>>> test (capitalize, m)
Christian Fröhlich-Tanzer
>>> test (capitalize, u)
Christian Fröhlich-Tanzer
>>> test (capitalize_last_word, l)
christian Fröhlich-Tanzer
>>> test (capitalize_last_word, m)
Christian Fröhlich-Tanzer
>>> test (capitalize_last_word, u)
CHRISTIAN Fröhlich-Tanzer
>>> l = "van der fröhlich-tanzer"
>>> m = "van der Fröhlich-Tanzer"
>>> u = "van der FRÖHLICH-TANZER"
>>> v = "VAN DER FRÖHLICH-TANZER"
>>> test (capitalize, l)
Van Der Fröhlich-Tanzer
>>> test (capitalize, m)
Van Der Fröhlich-Tanzer
>>> test (capitalize, u)
Van Der Fröhlich-Tanzer
>>> test (capitalize_if_not_mixed_case, l)
Van Der Fröhlich-Tanzer
>>> test (capitalize_if_not_mixed_case, m)
van der Fröhlich-Tanzer
>>> test (capitalize_if_not_mixed_case, u)
van der FRÖHLICH-TANZER
>>> test (capitalize_if_not_mixed_case, v)
Van Der Fröhlich-Tanzer
>>> test (capitalize_last_word, l)
van der Fröhlich-Tanzer
>>> test (capitalize_last_word, m)
van der Fröhlich-Tanzer
>>> test (capitalize_last_word, u)
van der Fröhlich-Tanzer
>>> l = "mctanzer"
>>> m = "McTanzer"
>>> u = "MCTANZER"
>>> test (capitalize, l)
Mctanzer
>>> test (capitalize, m)
McTanzer
>>> test (capitalize, u)
Mctanzer
>>> test (capitalize_last_word, l)
Mctanzer
>>> test (capitalize_last_word, m)
McTanzer
>>> test (capitalize_last_word, u)
Mctanzer
>>> l = "mag."
>>> m = "Mag."
>>> u = "MAG."
>>> test (capitalize, l)
Mag.
>>> test (capitalize, m)
Mag.
>>> test (capitalize, u)
Mag.
>>> l = "dipl.ing."
>>> m = "Dipl.Ing."
>>> u = "DIPL.ING."
>>> test (capitalize, l)
Dipl.Ing.
>>> test (capitalize, m)
Dipl.Ing.
>>> test (capitalize, u)
Dipl.Ing.
"""
_test_phone_ndc_split = r"""
>>> from _TFL.Record import Record
>>> attr = Record (name = "ndc")
>>> def show_split (v) :
... r = phone_ndc_split (attr, dict (ndc = v))
... vs = ("%s = %s" % (k, v) for k, v in sorted (r.items ()))
... print (", ".join (vs))
>>> show_split ("664")
ndc = 664
>>> show_split ("0664")
ndc = 664
>>> show_split ("0 664")
ndc = 664
"""
_test_phone_cc_clean = r"""
>>> from _TFL.Record import Record
>>> attr = Record (name = "cc")
>>> def show_split (v) :
... r = phone_cc_clean (attr, dict (cc = v))
... vs = ("%s = %s" % (k, v) for k, v in sorted (r.items ()))
... print (", ".join (vs))
>>> show_split ("43")
cc = 43
>>> show_split ("+43")
cc = 43
>>> show_split ("+ 43")
cc = 43
>>> show_split ("0043")
cc = 43
"""
_test_phone_sn_split = r"""
>>> from _TFL.Record import Record
>>> attr = Record (name = "sn")
>>> def show_split (sn) :
... r = phone_sn_split (attr, dict (sn = sn))
... vs = ("%s = %s" % (k, v) for k, v in sorted (r.items ()))
... print (", ".join (vs))
>>> show_split ("12345678")
sn = 12345678
>>> show_split ("0043 664 12345678")
cc = 43, ndc = 664, sn = 12345678
>>> show_split ("+43 664 12345678")
cc = 43, ndc = 664, sn = 12345678
>>> show_split ("0043 664 12345678")
cc = 43, ndc = 664, sn = 12345678
>>> show_split ("664 12345678")
ndc = 664, sn = 12345678
>>> show_split ("0664 12345678")
ndc = 664, sn = 12345678
>>> show_split ("+43(664)12345678")
cc = 43, ndc = 664, sn = 12345678
>>> show_split ("+43 (664) 12345678")
cc = 43, ndc = 664, sn = 12345678
>>> show_split ("0(664)12345678")
ndc = 664, sn = 12345678
>>> show_split ("0 (664) 12345678")
ndc = 664, sn = 12345678
>>> show_split ("43 66412345678")
ndc = 43, sn = 66412345678
>>> show_split ("+43 66412345678")
cc = 43, sn = 66412345678
"""
__test__ = dict \
( test_phone_ndc_split = _test_phone_ndc_split
, test_phone_cc_clean = _test_phone_cc_clean
, test_capitalize = _test_capitalize
, test_phone_sn_split = _test_phone_sn_split
)
if __name__ != "__main__" :
MOM.Attr._Export_Module ()
### __END__ MOM.Attr.Polisher
/Baraa_Validator-0.1.1-py3-none-any.whl/BaraaValidator/transactionValidators.py
import glob
import json
import os
import xml.etree.ElementTree as ET
import xmltodict
class __ValidationData:
netValue =0
taxes = []
def setNetValue(self,netValue):
self.netValue=netValue
def addTaxInformation(self,taxRate,taxMoney):
self.taxes.append({'taxRate':taxRate,'taxMoney': taxMoney})
def __xmlEnhancer(xmlData):
if(type(xmlData) == type(list())):
return xmlData
else:
tempList = list()
tempList.append(xmlData)
return tempList
def __getDataForTaxValidation(dictTransactionData):
requiredValidationData = []
for itemNumber in range(0,len(dictTransactionData['itemization'])):
itemData = __ValidationData()
try:
itemData.setNetValue(int(dictTransactionData['itemization'][itemNumber]['net_sales_money']['amount']))
for taxNumber in range(0,len(dictTransactionData['itemization'][itemNumber]['taxes'])):
taxRate = float(dictTransactionData['itemization'][itemNumber]['taxes'][taxNumber]['rate'])
taxMoney = int(dictTransactionData['itemization'][itemNumber]['taxes'][taxNumber]['applied_money']['amount'])
itemData.addTaxInformation(taxRate,taxMoney)
except ValueError as verr:
print(verr)
except Exception as ex:
print(ex)
requiredValidationData.append(itemData)
return requiredValidationData
def __validateTransactionTaxData(listedTransaction):
validatedData = __getDataForTaxValidation(listedTransaction)
for item in (validatedData):
for tax in item.taxes:
if int((tax['taxRate'] * item.netValue)+0.5) == tax['taxMoney']:
continue
else:
return False
return True
def __validate(transactionFile):
extension = os.path.splitext(transactionFile)[1]
if (extension == '.xml'):
return __validateXmlTransactionFile(transactionFile)
if (extension == '.json'):
return __validateJsonTransactionFile(transactionFile)
raise Exception("Only accepting json/xml files")
def __validateJsonTransactionFile(jsonTransactionFile):
jsonData = __readJsonFile(jsonTransactionFile)
return __validateTransactionTaxData(jsonData)
def __readJsonFile(jsonFile):
    JSONdata = None
    try:
        # `with` guarantees the file handle is closed and avoids referencing
        # an unbound `file` in a `finally` block when `open` itself fails
        with open(jsonFile) as file:
            JSONdata = json.load(file)
    except IOError as io:
        print(io)
    except Exception as ex:
        print(ex)
    return JSONdata
def __validateXmlTransactionFile(xmlTransactionFile):
xmlData = __readXmlFile(xmlTransactionFile)
return __validateTransactionTaxData(xmlData)
def __readXmlFile(xmlFile):
    tree = None
    try:
        # same pattern as __readJsonFile: `with` closes the handle and avoids
        # an unbound `file` when `open` itself fails
        with open(xmlFile) as file:
            tree = ET.parse(file)
    except IOError as io:
        print(io)
    except Exception as ex:
        print(ex)
xml_data = tree.getroot()
xmlstr = ET.tostring(xml_data, encoding='utf8', method='xml')
xml_data_dict = dict(xmltodict.parse(xmlstr, dict_constructor=dict))['root']
xml_data_dict['itemization']= __xmlEnhancer(xml_data_dict['itemization']['element'])
xml_data_dict['taxes']= __xmlEnhancer(xml_data_dict['taxes']['element'])
for item in range(0,len(xml_data_dict['itemization'])):
xml_data_dict['itemization'][item]['taxes']= __xmlEnhancer(xml_data_dict['itemization'][item]['taxes']['element'])
xml_data_dict['itemization'][item]['modifiers']= __xmlEnhancer(xml_data_dict['itemization'][item]['modifiers']['element'])
return xml_data_dict
def validateTransactionsFolder(transactionsFolder):
validationresults = []
try:
transactionFiles = glob.glob(transactionsFolder+"*.*")
except Exception as ex:
print(ex)
for transactionFile in transactionFiles:
validationresult = __validate(transactionFile)
validationresults.append((transactionFile, validationresult))  # tuple, not a set: keeps the file/result association
return validationresults
def validateTransactionsFile(transactionFile):
return __validate(transactionFile)
def validateTransaction(dictTransaction):
return __validateTransactionTaxData(dictTransaction)
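The per-item check performed by `__validateTransactionTaxData` reduces to a half-up rounding comparison; a minimal sketch (the function name is illustrative):

```python
def tax_is_consistent(net_amount, rate, applied_money):
    # The validator accepts an item when the applied tax money equals
    # the net amount times the rate, rounded half-up to the nearest unit.
    return int(rate * net_amount + 0.5) == applied_money

print(tax_is_consistent(999, 0.25, 250))   # → True (249.75 rounds up to 250)
print(tax_is_consistent(999, 0.25, 249))   # → False
```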
/Nuitka-1.8.tar.gz/Nuitka-1.8/nuitka/build/inline_copy/yaml_35/yaml/scanner.py
__all__ = ['Scanner', 'ScannerError']
from .error import MarkedYAMLError
from .tokens import *
class ScannerError(MarkedYAMLError):
pass
class SimpleKey:
# See below simple keys treatment.
def __init__(self, token_number, required, index, line, column, mark):
self.token_number = token_number
self.required = required
self.index = index
self.line = line
self.column = column
self.mark = mark
class Scanner:
def __init__(self):
"""Initialize the scanner."""
# It is assumed that Scanner and Reader will have a common descendant.
# Reader does the dirty work of checking for BOM and converting the
# input data to Unicode. It also adds NUL to the end.
#
# Reader supports the following methods
# self.peek(i=0) # peek the next i-th character
# self.prefix(l=1) # peek the next l characters
# self.forward(l=1) # read the next l characters and move the pointer.
# Had we reached the end of the stream?
self.done = False
# The number of unclosed '{' and '['. `flow_level == 0` means block
# context.
self.flow_level = 0
# List of processed tokens that are not yet emitted.
self.tokens = []
# Add the STREAM-START token.
self.fetch_stream_start()
# Number of tokens that were emitted through the `get_token` method.
self.tokens_taken = 0
# The current indentation level.
self.indent = -1
# Past indentation levels.
self.indents = []
# Variables related to simple keys treatment.
# A simple key is a key that is not denoted by the '?' indicator.
# Example of simple keys:
# ---
# block simple key: value
# ? not a simple key:
# : { flow simple key: value }
# We emit the KEY token before all keys, so when we find a potential
# simple key, we try to locate the corresponding ':' indicator.
# Simple keys should be limited to a single line and 1024 characters.
# Can a simple key start at the current position? A simple key may
# start:
# - at the beginning of the line, not counting indentation spaces
# (in block context),
# - after '{', '[', ',' (in the flow context),
# - after '?', ':', '-' (in the block context).
# In the block context, this flag also signifies if a block collection
# may start at the current position.
self.allow_simple_key = True
# Keep track of possible simple keys. This is a dictionary. The key
# is `flow_level`; there can be no more that one possible simple key
# for each level. The value is a SimpleKey record:
# (token_number, required, index, line, column, mark)
# A simple key may start with ALIAS, ANCHOR, TAG, SCALAR(flow),
# '[', or '{' tokens.
self.possible_simple_keys = {}
# Public methods.
def check_token(self, *choices):
# Check if the next token is one of the given types.
while self.need_more_tokens():
self.fetch_more_tokens()
if self.tokens:
if not choices:
return True
for choice in choices:
if isinstance(self.tokens[0], choice):
return True
return False
def peek_token(self):
# Return the next token, but do not delete if from the queue.
while self.need_more_tokens():
self.fetch_more_tokens()
if self.tokens:
return self.tokens[0]
def get_token(self):
# Return the next token.
while self.need_more_tokens():
self.fetch_more_tokens()
if self.tokens:
self.tokens_taken += 1
return self.tokens.pop(0)
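This scanner is the tokenizer behind PyYAML's `yaml.scan`; assuming PyYAML is installed, the token stream produced through `check_token`/`peek_token`/`get_token` can be inspected directly:

```python
import yaml

# Scanning a block mapping whose value is a flow sequence shows both
# block-context and flow-context tokens in one stream.
tokens = [type(t).__name__ for t in yaml.scan("key: [1, 2]")]
print(tokens)
# The stream starts with StreamStartToken, ends with StreamEndToken, and
# contains Key/Value tokens plus FlowSequenceStart/End for the "[1, 2]" part.
```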
# Private methods.
def need_more_tokens(self):
if self.done:
return False
if not self.tokens:
return True
# The current token may be a potential simple key, so we
# need to look further.
self.stale_possible_simple_keys()
if self.next_possible_simple_key() == self.tokens_taken:
return True
def fetch_more_tokens(self):
# Eat whitespaces and comments until we reach the next token.
self.scan_to_next_token()
# Remove obsolete possible simple keys.
self.stale_possible_simple_keys()
# Compare the current indentation and column. It may add some tokens
# and decrease the current indentation level.
self.unwind_indent(self.column)
# Peek the next character.
ch = self.peek()
# Is it the end of stream?
if ch == '\0':
return self.fetch_stream_end()
# Is it a directive?
if ch == '%' and self.check_directive():
return self.fetch_directive()
# Is it the document start?
if ch == '-' and self.check_document_start():
return self.fetch_document_start()
# Is it the document end?
if ch == '.' and self.check_document_end():
return self.fetch_document_end()
# TODO: support for BOM within a stream.
#if ch == '\uFEFF':
# return self.fetch_bom() <-- issue BOMToken
# Note: the order of the following checks is NOT significant.
# Is it the flow sequence start indicator?
if ch == '[':
return self.fetch_flow_sequence_start()
# Is it the flow mapping start indicator?
if ch == '{':
return self.fetch_flow_mapping_start()
# Is it the flow sequence end indicator?
if ch == ']':
return self.fetch_flow_sequence_end()
# Is it the flow mapping end indicator?
if ch == '}':
return self.fetch_flow_mapping_end()
# Is it the flow entry indicator?
if ch == ',':
return self.fetch_flow_entry()
# Is it the block entry indicator?
if ch == '-' and self.check_block_entry():
return self.fetch_block_entry()
# Is it the key indicator?
if ch == '?' and self.check_key():
return self.fetch_key()
# Is it the value indicator?
if ch == ':' and self.check_value():
return self.fetch_value()
# Is it an alias?
if ch == '*':
return self.fetch_alias()
# Is it an anchor?
if ch == '&':
return self.fetch_anchor()
# Is it a tag?
if ch == '!':
return self.fetch_tag()
# Is it a literal scalar?
if ch == '|' and not self.flow_level:
return self.fetch_literal()
# Is it a folded scalar?
if ch == '>' and not self.flow_level:
return self.fetch_folded()
# Is it a single quoted scalar?
if ch == '\'':
return self.fetch_single()
# Is it a double quoted scalar?
if ch == '\"':
return self.fetch_double()
# It must be a plain scalar then.
if self.check_plain():
return self.fetch_plain()
# No? It's an error. Let's produce a nice error message.
raise ScannerError("while scanning for the next token", None,
"found character %r that cannot start any token" % ch,
self.get_mark())
# Simple keys treatment.
def next_possible_simple_key(self):
# Return the number of the nearest possible simple key. Actually we
# don't need to loop through the whole dictionary. We may replace it
# with the following code:
# if not self.possible_simple_keys:
# return None
# return self.possible_simple_keys[
# min(self.possible_simple_keys.keys())].token_number
min_token_number = None
for level in self.possible_simple_keys:
key = self.possible_simple_keys[level]
if min_token_number is None or key.token_number < min_token_number:
min_token_number = key.token_number
return min_token_number
def stale_possible_simple_keys(self):
# Remove entries that are no longer possible simple keys. According to
# the YAML specification, simple keys
# - should be limited to a single line,
# - should be no longer than 1024 characters.
# Disabling this procedure will allow simple keys of any length and
# height (may cause problems if indentation is broken though).
for level in list(self.possible_simple_keys):
key = self.possible_simple_keys[level]
if key.line != self.line \
or self.index-key.index > 1024:
if key.required:
raise ScannerError("while scanning a simple key", key.mark,
"could not find expected ':'", self.get_mark())
del self.possible_simple_keys[level]
def save_possible_simple_key(self):
# The next token may start a simple key. We check if it's possible
# and save its position. This function is called for
# ALIAS, ANCHOR, TAG, SCALAR(flow), '[', and '{'.
# Check if a simple key is required at the current position.
required = not self.flow_level and self.indent == self.column
# The next token might be a simple key. Let's save its number and
# position.
if self.allow_simple_key:
self.remove_possible_simple_key()
token_number = self.tokens_taken+len(self.tokens)
key = SimpleKey(token_number, required,
self.index, self.line, self.column, self.get_mark())
self.possible_simple_keys[self.flow_level] = key
def remove_possible_simple_key(self):
# Remove the saved possible key position at the current flow level.
if self.flow_level in self.possible_simple_keys:
key = self.possible_simple_keys[self.flow_level]
if key.required:
raise ScannerError("while scanning a simple key", key.mark,
"could not find expected ':'", self.get_mark())
del self.possible_simple_keys[self.flow_level]
# Indentation functions.
def unwind_indent(self, column):
## In flow context, tokens should respect indentation.
## Actually the condition should be `self.indent >= column` according to
## the spec. But this condition will prohibit intuitively correct
## constructions such as
## key : {
## }
#if self.flow_level and self.indent > column:
# raise ScannerError(None, None,
#         "invalid indentation or unclosed '[' or '{'",
# self.get_mark())
# In the flow context, indentation is ignored. We make the scanner less
# restrictive than the specification requires.
if self.flow_level:
return
# In block context, we may need to issue the BLOCK-END tokens.
while self.indent > column:
mark = self.get_mark()
self.indent = self.indents.pop()
self.tokens.append(BlockEndToken(mark, mark))
def add_indent(self, column):
# Check if we need to increase indentation.
if self.indent < column:
self.indents.append(self.indent)
self.indent = column
return True
return False
# Fetchers.
def fetch_stream_start(self):
# We always add STREAM-START as the first token and STREAM-END as the
# last token.
# Read the token.
mark = self.get_mark()
# Add STREAM-START.
self.tokens.append(StreamStartToken(mark, mark,
encoding=self.encoding))
def fetch_stream_end(self):
# Set the current indentation to -1.
self.unwind_indent(-1)
# Reset simple keys.
self.remove_possible_simple_key()
self.allow_simple_key = False
self.possible_simple_keys = {}
# Read the token.
mark = self.get_mark()
# Add STREAM-END.
self.tokens.append(StreamEndToken(mark, mark))
# The stream is finished.
self.done = True
def fetch_directive(self):
# Set the current indentation to -1.
self.unwind_indent(-1)
# Reset simple keys.
self.remove_possible_simple_key()
self.allow_simple_key = False
# Scan and add DIRECTIVE.
self.tokens.append(self.scan_directive())
def fetch_document_start(self):
self.fetch_document_indicator(DocumentStartToken)
def fetch_document_end(self):
self.fetch_document_indicator(DocumentEndToken)
def fetch_document_indicator(self, TokenClass):
# Set the current indentation to -1.
self.unwind_indent(-1)
# Reset simple keys. Note that there could not be a block collection
# after '---'.
self.remove_possible_simple_key()
self.allow_simple_key = False
# Add DOCUMENT-START or DOCUMENT-END.
start_mark = self.get_mark()
self.forward(3)
end_mark = self.get_mark()
self.tokens.append(TokenClass(start_mark, end_mark))
def fetch_flow_sequence_start(self):
self.fetch_flow_collection_start(FlowSequenceStartToken)
def fetch_flow_mapping_start(self):
self.fetch_flow_collection_start(FlowMappingStartToken)
def fetch_flow_collection_start(self, TokenClass):
# '[' and '{' may start a simple key.
self.save_possible_simple_key()
# Increase the flow level.
self.flow_level += 1
# Simple keys are allowed after '[' and '{'.
self.allow_simple_key = True
# Add FLOW-SEQUENCE-START or FLOW-MAPPING-START.
start_mark = self.get_mark()
self.forward()
end_mark = self.get_mark()
self.tokens.append(TokenClass(start_mark, end_mark))
def fetch_flow_sequence_end(self):
self.fetch_flow_collection_end(FlowSequenceEndToken)
def fetch_flow_mapping_end(self):
self.fetch_flow_collection_end(FlowMappingEndToken)
def fetch_flow_collection_end(self, TokenClass):
# Reset possible simple key on the current level.
self.remove_possible_simple_key()
# Decrease the flow level.
self.flow_level -= 1
# No simple keys after ']' or '}'.
self.allow_simple_key = False
# Add FLOW-SEQUENCE-END or FLOW-MAPPING-END.
start_mark = self.get_mark()
self.forward()
end_mark = self.get_mark()
self.tokens.append(TokenClass(start_mark, end_mark))
def fetch_flow_entry(self):
# Simple keys are allowed after ','.
self.allow_simple_key = True
# Reset possible simple key on the current level.
self.remove_possible_simple_key()
# Add FLOW-ENTRY.
start_mark = self.get_mark()
self.forward()
end_mark = self.get_mark()
self.tokens.append(FlowEntryToken(start_mark, end_mark))
def fetch_block_entry(self):
# Block context needs additional checks.
if not self.flow_level:
# Are we allowed to start a new entry?
if not self.allow_simple_key:
raise ScannerError(None, None,
"sequence entries are not allowed here",
self.get_mark())
# We may need to add BLOCK-SEQUENCE-START.
if self.add_indent(self.column):
mark = self.get_mark()
self.tokens.append(BlockSequenceStartToken(mark, mark))
# It's an error for the block entry to occur in the flow context,
# but we let the parser detect this.
else:
pass
# Simple keys are allowed after '-'.
self.allow_simple_key = True
# Reset possible simple key on the current level.
self.remove_possible_simple_key()
# Add BLOCK-ENTRY.
start_mark = self.get_mark()
self.forward()
end_mark = self.get_mark()
self.tokens.append(BlockEntryToken(start_mark, end_mark))
def fetch_key(self):
# Block context needs additional checks.
if not self.flow_level:
# Are we allowed to start a key (not necessarily a simple one)?
if not self.allow_simple_key:
raise ScannerError(None, None,
"mapping keys are not allowed here",
self.get_mark())
# We may need to add BLOCK-MAPPING-START.
if self.add_indent(self.column):
mark = self.get_mark()
self.tokens.append(BlockMappingStartToken(mark, mark))
# Simple keys are allowed after '?' in the block context.
self.allow_simple_key = not self.flow_level
# Reset possible simple key on the current level.
self.remove_possible_simple_key()
# Add KEY.
start_mark = self.get_mark()
self.forward()
end_mark = self.get_mark()
self.tokens.append(KeyToken(start_mark, end_mark))
def fetch_value(self):
# Do we determine a simple key?
if self.flow_level in self.possible_simple_keys:
# Add KEY.
key = self.possible_simple_keys[self.flow_level]
del self.possible_simple_keys[self.flow_level]
self.tokens.insert(key.token_number-self.tokens_taken,
KeyToken(key.mark, key.mark))
# If this key starts a new block mapping, we need to add
# BLOCK-MAPPING-START.
if not self.flow_level:
if self.add_indent(key.column):
self.tokens.insert(key.token_number-self.tokens_taken,
BlockMappingStartToken(key.mark, key.mark))
# There cannot be two simple keys one after another.
self.allow_simple_key = False
# It must be a part of a complex key.
else:
# Block context needs additional checks.
# (Do we really need them? They will be caught by the parser
# anyway.)
if not self.flow_level:
# We are allowed to start a complex value if and only if
# we can start a simple key.
if not self.allow_simple_key:
raise ScannerError(None, None,
"mapping values are not allowed here",
self.get_mark())
# If this value starts a new block mapping, we need to add
# BLOCK-MAPPING-START. It will be detected as an error later by
# the parser.
if not self.flow_level:
if self.add_indent(self.column):
mark = self.get_mark()
self.tokens.append(BlockMappingStartToken(mark, mark))
# Simple keys are allowed after ':' in the block context.
self.allow_simple_key = not self.flow_level
# Reset possible simple key on the current level.
self.remove_possible_simple_key()
# Add VALUE.
start_mark = self.get_mark()
self.forward()
end_mark = self.get_mark()
self.tokens.append(ValueToken(start_mark, end_mark))
def fetch_alias(self):
# ALIAS could be a simple key.
self.save_possible_simple_key()
# No simple keys after ALIAS.
self.allow_simple_key = False
# Scan and add ALIAS.
self.tokens.append(self.scan_anchor(AliasToken))
def fetch_anchor(self):
# ANCHOR could start a simple key.
self.save_possible_simple_key()
# No simple keys after ANCHOR.
self.allow_simple_key = False
# Scan and add ANCHOR.
self.tokens.append(self.scan_anchor(AnchorToken))
def fetch_tag(self):
# TAG could start a simple key.
self.save_possible_simple_key()
# No simple keys after TAG.
self.allow_simple_key = False
# Scan and add TAG.
self.tokens.append(self.scan_tag())
def fetch_literal(self):
self.fetch_block_scalar(style='|')
def fetch_folded(self):
self.fetch_block_scalar(style='>')
def fetch_block_scalar(self, style):
# A simple key may follow a block scalar.
self.allow_simple_key = True
# Reset possible simple key on the current level.
self.remove_possible_simple_key()
# Scan and add SCALAR.
self.tokens.append(self.scan_block_scalar(style))
def fetch_single(self):
self.fetch_flow_scalar(style='\'')
def fetch_double(self):
self.fetch_flow_scalar(style='"')
def fetch_flow_scalar(self, style):
# A flow scalar could be a simple key.
self.save_possible_simple_key()
# No simple keys after flow scalars.
self.allow_simple_key = False
# Scan and add SCALAR.
self.tokens.append(self.scan_flow_scalar(style))
def fetch_plain(self):
# A plain scalar could be a simple key.
self.save_possible_simple_key()
# No simple keys after plain scalars. But note that `scan_plain` will
# change this flag if the scan is finished at the beginning of the
# line.
self.allow_simple_key = False
# Scan and add SCALAR. May change `allow_simple_key`.
self.tokens.append(self.scan_plain())
# Checkers.
def check_directive(self):
# DIRECTIVE: ^ '%' ...
# The '%' indicator is already checked.
if self.column == 0:
return True
def check_document_start(self):
# DOCUMENT-START: ^ '---' (' '|'\n')
if self.column == 0:
if self.prefix(3) == '---' \
and self.peek(3) in '\0 \t\r\n\x85\u2028\u2029':
return True
def check_document_end(self):
# DOCUMENT-END: ^ '...' (' '|'\n')
if self.column == 0:
if self.prefix(3) == '...' \
and self.peek(3) in '\0 \t\r\n\x85\u2028\u2029':
return True
def check_block_entry(self):
# BLOCK-ENTRY: '-' (' '|'\n')
return self.peek(1) in '\0 \t\r\n\x85\u2028\u2029'
def check_key(self):
# KEY(flow context): '?'
if self.flow_level:
return True
# KEY(block context): '?' (' '|'\n')
else:
return self.peek(1) in '\0 \t\r\n\x85\u2028\u2029'
def check_value(self):
# VALUE(flow context): ':'
if self.flow_level:
return True
# VALUE(block context): ':' (' '|'\n')
else:
return self.peek(1) in '\0 \t\r\n\x85\u2028\u2029'
def check_plain(self):
# A plain scalar may start with any non-space character except:
# '-', '?', ':', ',', '[', ']', '{', '}',
# '#', '&', '*', '!', '|', '>', '\'', '\"',
# '%', '@', '`'.
#
# It may also start with
# '-', '?', ':'
# if it is followed by a non-space character.
#
# Note that we limit the last rule to the block context (except the
# '-' character) because we want the flow context to be space
# independent.
ch = self.peek()
return ch not in '\0 \t\r\n\x85\u2028\u2029-?:,[]{}#&*!|>\'\"%@`' \
or (self.peek(1) not in '\0 \t\r\n\x85\u2028\u2029'
and (ch == '-' or (not self.flow_level and ch in '?:')))
# Scanners.
def scan_to_next_token(self):
# We ignore spaces, line breaks and comments.
# If we find a line break in the block context, we set the flag
# `allow_simple_key` on.
# The byte order mark is stripped if it's the first character in the
# stream. We do not yet support BOM inside the stream as the
# specification requires. Any such mark will be considered as a part
# of the document.
#
# TODO: We need to make tab handling rules more sane. A good rule is
# Tabs cannot precede tokens
# BLOCK-SEQUENCE-START, BLOCK-MAPPING-START, BLOCK-END,
# KEY(block), VALUE(block), BLOCK-ENTRY
# So the checking code is
# if <TAB>:
# self.allow_simple_keys = False
# We also need to add the check for `allow_simple_keys == True` to
# `unwind_indent` before issuing BLOCK-END.
# Scanners for block, flow, and plain scalars need to be modified.
if self.index == 0 and self.peek() == '\uFEFF':
self.forward()
found = False
while not found:
while self.peek() == ' ':
self.forward()
if self.peek() == '#':
while self.peek() not in '\0\r\n\x85\u2028\u2029':
self.forward()
if self.scan_line_break():
if not self.flow_level:
self.allow_simple_key = True
else:
found = True
def scan_directive(self):
# See the specification for details.
start_mark = self.get_mark()
self.forward()
name = self.scan_directive_name(start_mark)
value = None
if name == 'YAML':
value = self.scan_yaml_directive_value(start_mark)
end_mark = self.get_mark()
elif name == 'TAG':
value = self.scan_tag_directive_value(start_mark)
end_mark = self.get_mark()
else:
end_mark = self.get_mark()
while self.peek() not in '\0\r\n\x85\u2028\u2029':
self.forward()
self.scan_directive_ignored_line(start_mark)
return DirectiveToken(name, value, start_mark, end_mark)
def scan_directive_name(self, start_mark):
# See the specification for details.
length = 0
ch = self.peek(length)
while '0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z' \
or ch in '-_':
length += 1
ch = self.peek(length)
if not length:
raise ScannerError("while scanning a directive", start_mark,
"expected alphabetic or numeric character, but found %r"
% ch, self.get_mark())
value = self.prefix(length)
self.forward(length)
ch = self.peek()
if ch not in '\0 \r\n\x85\u2028\u2029':
raise ScannerError("while scanning a directive", start_mark,
"expected alphabetic or numeric character, but found %r"
% ch, self.get_mark())
return value
def scan_yaml_directive_value(self, start_mark):
# See the specification for details.
while self.peek() == ' ':
self.forward()
major = self.scan_yaml_directive_number(start_mark)
if self.peek() != '.':
raise ScannerError("while scanning a directive", start_mark,
"expected a digit or '.', but found %r" % self.peek(),
self.get_mark())
self.forward()
minor = self.scan_yaml_directive_number(start_mark)
if self.peek() not in '\0 \r\n\x85\u2028\u2029':
raise ScannerError("while scanning a directive", start_mark,
"expected a digit or ' ', but found %r" % self.peek(),
self.get_mark())
return (major, minor)
def scan_yaml_directive_number(self, start_mark):
# See the specification for details.
ch = self.peek()
if not ('0' <= ch <= '9'):
raise ScannerError("while scanning a directive", start_mark,
"expected a digit, but found %r" % ch, self.get_mark())
length = 0
while '0' <= self.peek(length) <= '9':
length += 1
value = int(self.prefix(length))
self.forward(length)
return value
def scan_tag_directive_value(self, start_mark):
# See the specification for details.
while self.peek() == ' ':
self.forward()
handle = self.scan_tag_directive_handle(start_mark)
while self.peek() == ' ':
self.forward()
prefix = self.scan_tag_directive_prefix(start_mark)
return (handle, prefix)
def scan_tag_directive_handle(self, start_mark):
# See the specification for details.
value = self.scan_tag_handle('directive', start_mark)
ch = self.peek()
if ch != ' ':
raise ScannerError("while scanning a directive", start_mark,
"expected ' ', but found %r" % ch, self.get_mark())
return value
def scan_tag_directive_prefix(self, start_mark):
# See the specification for details.
value = self.scan_tag_uri('directive', start_mark)
ch = self.peek()
if ch not in '\0 \r\n\x85\u2028\u2029':
raise ScannerError("while scanning a directive", start_mark,
"expected ' ', but found %r" % ch, self.get_mark())
return value
def scan_directive_ignored_line(self, start_mark):
# See the specification for details.
while self.peek() == ' ':
self.forward()
if self.peek() == '#':
while self.peek() not in '\0\r\n\x85\u2028\u2029':
self.forward()
ch = self.peek()
if ch not in '\0\r\n\x85\u2028\u2029':
raise ScannerError("while scanning a directive", start_mark,
"expected a comment or a line break, but found %r"
% ch, self.get_mark())
self.scan_line_break()
def scan_anchor(self, TokenClass):
# The specification does not restrict characters for anchors and
# aliases. This may lead to problems, for instance, the document:
# [ *alias, value ]
# can be interpreted in two ways, as
# [ "value" ]
# and
# [ *alias , "value" ]
# Therefore we restrict aliases to numbers and ASCII letters.
start_mark = self.get_mark()
indicator = self.peek()
if indicator == '*':
name = 'alias'
else:
name = 'anchor'
self.forward()
length = 0
ch = self.peek(length)
while '0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z' \
or ch in '-_':
length += 1
ch = self.peek(length)
if not length:
raise ScannerError("while scanning an %s" % name, start_mark,
"expected alphabetic or numeric character, but found %r"
% ch, self.get_mark())
value = self.prefix(length)
self.forward(length)
ch = self.peek()
if ch not in '\0 \t\r\n\x85\u2028\u2029?:,]}%@`':
raise ScannerError("while scanning an %s" % name, start_mark,
"expected alphabetic or numeric character, but found %r"
% ch, self.get_mark())
end_mark = self.get_mark()
return TokenClass(value, start_mark, end_mark)
def scan_tag(self):
# See the specification for details.
start_mark = self.get_mark()
ch = self.peek(1)
if ch == '<':
handle = None
self.forward(2)
suffix = self.scan_tag_uri('tag', start_mark)
if self.peek() != '>':
raise ScannerError("while parsing a tag", start_mark,
"expected '>', but found %r" % self.peek(),
self.get_mark())
self.forward()
elif ch in '\0 \t\r\n\x85\u2028\u2029':
handle = None
suffix = '!'
self.forward()
else:
length = 1
use_handle = False
while ch not in '\0 \r\n\x85\u2028\u2029':
if ch == '!':
use_handle = True
break
length += 1
ch = self.peek(length)
handle = '!'
if use_handle:
handle = self.scan_tag_handle('tag', start_mark)
else:
handle = '!'
self.forward()
suffix = self.scan_tag_uri('tag', start_mark)
ch = self.peek()
if ch not in '\0 \r\n\x85\u2028\u2029':
raise ScannerError("while scanning a tag", start_mark,
"expected ' ', but found %r" % ch, self.get_mark())
value = (handle, suffix)
end_mark = self.get_mark()
return TagToken(value, start_mark, end_mark)
def scan_block_scalar(self, style):
# See the specification for details.
if style == '>':
folded = True
else:
folded = False
chunks = []
start_mark = self.get_mark()
# Scan the header.
self.forward()
chomping, increment = self.scan_block_scalar_indicators(start_mark)
self.scan_block_scalar_ignored_line(start_mark)
# Determine the indentation level and go to the first non-empty line.
min_indent = self.indent+1
if min_indent < 1:
min_indent = 1
if increment is None:
breaks, max_indent, end_mark = self.scan_block_scalar_indentation()
indent = max(min_indent, max_indent)
else:
indent = min_indent+increment-1
breaks, end_mark = self.scan_block_scalar_breaks(indent)
line_break = ''
# Scan the inner part of the block scalar.
while self.column == indent and self.peek() != '\0':
chunks.extend(breaks)
leading_non_space = self.peek() not in ' \t'
length = 0
while self.peek(length) not in '\0\r\n\x85\u2028\u2029':
length += 1
chunks.append(self.prefix(length))
self.forward(length)
line_break = self.scan_line_break()
breaks, end_mark = self.scan_block_scalar_breaks(indent)
if self.column == indent and self.peek() != '\0':
# Unfortunately, folding rules are ambiguous.
#
# This is the folding according to the specification:
if folded and line_break == '\n' \
and leading_non_space and self.peek() not in ' \t':
if not breaks:
chunks.append(' ')
else:
chunks.append(line_break)
# This is Clark Evans's interpretation (also in the spec
# examples):
#
#if folded and line_break == '\n':
# if not breaks:
# if self.peek() not in ' \t':
# chunks.append(' ')
# else:
# chunks.append(line_break)
#else:
# chunks.append(line_break)
else:
break
# Chomp the tail.
if chomping is not False:
chunks.append(line_break)
if chomping is True:
chunks.extend(breaks)
# We are done.
return ScalarToken(''.join(chunks), False, start_mark, end_mark,
style)
def scan_block_scalar_indicators(self, start_mark):
# See the specification for details.
chomping = None
increment = None
ch = self.peek()
if ch in '+-':
if ch == '+':
chomping = True
else:
chomping = False
self.forward()
ch = self.peek()
if ch in '0123456789':
increment = int(ch)
if increment == 0:
raise ScannerError("while scanning a block scalar", start_mark,
"expected indentation indicator in the range 1-9, but found 0",
self.get_mark())
self.forward()
elif ch in '0123456789':
increment = int(ch)
if increment == 0:
raise ScannerError("while scanning a block scalar", start_mark,
"expected indentation indicator in the range 1-9, but found 0",
self.get_mark())
self.forward()
ch = self.peek()
if ch in '+-':
if ch == '+':
chomping = True
else:
chomping = False
self.forward()
ch = self.peek()
if ch not in '\0 \r\n\x85\u2028\u2029':
raise ScannerError("while scanning a block scalar", start_mark,
"expected chomping or indentation indicators, but found %r"
% ch, self.get_mark())
return chomping, increment
def scan_block_scalar_ignored_line(self, start_mark):
# See the specification for details.
while self.peek() == ' ':
self.forward()
if self.peek() == '#':
while self.peek() not in '\0\r\n\x85\u2028\u2029':
self.forward()
ch = self.peek()
if ch not in '\0\r\n\x85\u2028\u2029':
raise ScannerError("while scanning a block scalar", start_mark,
"expected a comment or a line break, but found %r" % ch,
self.get_mark())
self.scan_line_break()
def scan_block_scalar_indentation(self):
# See the specification for details.
chunks = []
max_indent = 0
end_mark = self.get_mark()
while self.peek() in ' \r\n\x85\u2028\u2029':
if self.peek() != ' ':
chunks.append(self.scan_line_break())
end_mark = self.get_mark()
else:
self.forward()
if self.column > max_indent:
max_indent = self.column
return chunks, max_indent, end_mark
def scan_block_scalar_breaks(self, indent):
# See the specification for details.
chunks = []
end_mark = self.get_mark()
while self.column < indent and self.peek() == ' ':
self.forward()
while self.peek() in '\r\n\x85\u2028\u2029':
chunks.append(self.scan_line_break())
end_mark = self.get_mark()
while self.column < indent and self.peek() == ' ':
self.forward()
return chunks, end_mark
def scan_flow_scalar(self, style):
# See the specification for details.
# Note that we loosen indentation rules for quoted scalars. Quoted
# scalars don't need to adhere to indentation rules because " and '
# clearly mark their beginning and end. Therefore we are less
# restrictive than the specification requires. We only need to check
# that document separators are not included in scalars.
if style == '"':
double = True
else:
double = False
chunks = []
start_mark = self.get_mark()
quote = self.peek()
self.forward()
chunks.extend(self.scan_flow_scalar_non_spaces(double, start_mark))
while self.peek() != quote:
chunks.extend(self.scan_flow_scalar_spaces(double, start_mark))
chunks.extend(self.scan_flow_scalar_non_spaces(double, start_mark))
self.forward()
end_mark = self.get_mark()
return ScalarToken(''.join(chunks), False, start_mark, end_mark,
style)
ESCAPE_REPLACEMENTS = {
'0': '\0',
'a': '\x07',
'b': '\x08',
't': '\x09',
'\t': '\x09',
'n': '\x0A',
'v': '\x0B',
'f': '\x0C',
'r': '\x0D',
'e': '\x1B',
' ': '\x20',
'\"': '\"',
'\\': '\\',
'N': '\x85',
'_': '\xA0',
'L': '\u2028',
'P': '\u2029',
}
ESCAPE_CODES = {
'x': 2,
'u': 4,
'U': 8,
}
def scan_flow_scalar_non_spaces(self, double, start_mark):
# See the specification for details.
chunks = []
while True:
length = 0
while self.peek(length) not in '\'\"\\\0 \t\r\n\x85\u2028\u2029':
length += 1
if length:
chunks.append(self.prefix(length))
self.forward(length)
ch = self.peek()
if not double and ch == '\'' and self.peek(1) == '\'':
chunks.append('\'')
self.forward(2)
elif (double and ch == '\'') or (not double and ch in '\"\\'):
chunks.append(ch)
self.forward()
elif double and ch == '\\':
self.forward()
ch = self.peek()
if ch in self.ESCAPE_REPLACEMENTS:
chunks.append(self.ESCAPE_REPLACEMENTS[ch])
self.forward()
elif ch in self.ESCAPE_CODES:
length = self.ESCAPE_CODES[ch]
self.forward()
for k in range(length):
if self.peek(k) not in '0123456789ABCDEFabcdef':
raise ScannerError("while scanning a double-quoted scalar", start_mark,
"expected escape sequence of %d hexadecimal numbers, but found %r" %
(length, self.peek(k)), self.get_mark())
code = int(self.prefix(length), 16)
chunks.append(chr(code))
self.forward(length)
elif ch in '\r\n\x85\u2028\u2029':
self.scan_line_break()
chunks.extend(self.scan_flow_scalar_breaks(double, start_mark))
else:
raise ScannerError("while scanning a double-quoted scalar", start_mark,
"found unknown escape character %r" % ch, self.get_mark())
else:
return chunks
def scan_flow_scalar_spaces(self, double, start_mark):
# See the specification for details.
chunks = []
length = 0
while self.peek(length) in ' \t':
length += 1
whitespaces = self.prefix(length)
self.forward(length)
ch = self.peek()
if ch == '\0':
raise ScannerError("while scanning a quoted scalar", start_mark,
"found unexpected end of stream", self.get_mark())
elif ch in '\r\n\x85\u2028\u2029':
line_break = self.scan_line_break()
breaks = self.scan_flow_scalar_breaks(double, start_mark)
if line_break != '\n':
chunks.append(line_break)
elif not breaks:
chunks.append(' ')
chunks.extend(breaks)
else:
chunks.append(whitespaces)
return chunks
def scan_flow_scalar_breaks(self, double, start_mark):
# See the specification for details.
chunks = []
while True:
# Instead of checking indentation, we check for document
# separators.
prefix = self.prefix(3)
if (prefix == '---' or prefix == '...') \
and self.peek(3) in '\0 \t\r\n\x85\u2028\u2029':
raise ScannerError("while scanning a quoted scalar", start_mark,
"found unexpected document separator", self.get_mark())
while self.peek() in ' \t':
self.forward()
if self.peek() in '\r\n\x85\u2028\u2029':
chunks.append(self.scan_line_break())
else:
return chunks
def scan_plain(self):
# See the specification for details.
# We add an additional restriction for the flow context:
# plain scalars in the flow context cannot contain ',', ':' and '?'.
# We also keep track of the `allow_simple_key` flag here.
# Indentation rules are loosened for the flow context.
chunks = []
start_mark = self.get_mark()
end_mark = start_mark
indent = self.indent+1
# We allow zero indentation for scalars, but then we need to check for
# document separators at the beginning of the line.
#if indent == 0:
# indent = 1
spaces = []
while True:
length = 0
if self.peek() == '#':
break
while True:
ch = self.peek(length)
if ch in '\0 \t\r\n\x85\u2028\u2029' \
or (not self.flow_level and ch == ':' and
self.peek(length+1) in '\0 \t\r\n\x85\u2028\u2029') \
or (self.flow_level and ch in ',:?[]{}'):
break
length += 1
# It's not clear what we should do with ':' in the flow context.
if (self.flow_level and ch == ':'
and self.peek(length+1) not in '\0 \t\r\n\x85\u2028\u2029,[]{}'):
self.forward(length)
raise ScannerError("while scanning a plain scalar", start_mark,
"found unexpected ':'", self.get_mark(),
"Please check http://pyyaml.org/wiki/YAMLColonInFlowContext for details.")
if length == 0:
break
self.allow_simple_key = False
chunks.extend(spaces)
chunks.append(self.prefix(length))
self.forward(length)
end_mark = self.get_mark()
spaces = self.scan_plain_spaces(indent, start_mark)
if not spaces or self.peek() == '#' \
or (not self.flow_level and self.column < indent):
break
return ScalarToken(''.join(chunks), True, start_mark, end_mark)
def scan_plain_spaces(self, indent, start_mark):
# See the specification for details.
# The specification is really confusing about tabs in plain scalars.
# We just forbid them completely. Do not use tabs in YAML!
chunks = []
length = 0
while self.peek(length) in ' ':
length += 1
whitespaces = self.prefix(length)
self.forward(length)
ch = self.peek()
if ch in '\r\n\x85\u2028\u2029':
line_break = self.scan_line_break()
self.allow_simple_key = True
prefix = self.prefix(3)
if (prefix == '---' or prefix == '...') \
and self.peek(3) in '\0 \t\r\n\x85\u2028\u2029':
return
breaks = []
while self.peek() in ' \r\n\x85\u2028\u2029':
if self.peek() == ' ':
self.forward()
else:
breaks.append(self.scan_line_break())
prefix = self.prefix(3)
if (prefix == '---' or prefix == '...') \
and self.peek(3) in '\0 \t\r\n\x85\u2028\u2029':
return
if line_break != '\n':
chunks.append(line_break)
elif not breaks:
chunks.append(' ')
chunks.extend(breaks)
elif whitespaces:
chunks.append(whitespaces)
return chunks
def scan_tag_handle(self, name, start_mark):
# See the specification for details.
# For some strange reason, the specification does not allow '_' in
# tag handles. I have allowed it anyway.
ch = self.peek()
if ch != '!':
raise ScannerError("while scanning a %s" % name, start_mark,
"expected '!', but found %r" % ch, self.get_mark())
length = 1
ch = self.peek(length)
if ch != ' ':
while '0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z' \
or ch in '-_':
length += 1
ch = self.peek(length)
if ch != '!':
self.forward(length)
raise ScannerError("while scanning a %s" % name, start_mark,
"expected '!', but found %r" % ch, self.get_mark())
length += 1
value = self.prefix(length)
self.forward(length)
return value
def scan_tag_uri(self, name, start_mark):
# See the specification for details.
# Note: we do not check if URI is well-formed.
chunks = []
length = 0
ch = self.peek(length)
while '0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z' \
or ch in '-;/?:@&=+$,_.!~*\'()[]%':
if ch == '%':
chunks.append(self.prefix(length))
self.forward(length)
length = 0
chunks.append(self.scan_uri_escapes(name, start_mark))
else:
length += 1
ch = self.peek(length)
if length:
chunks.append(self.prefix(length))
self.forward(length)
length = 0
if not chunks:
raise ScannerError("while parsing a %s" % name, start_mark,
"expected URI, but found %r" % ch, self.get_mark())
return ''.join(chunks)
def scan_uri_escapes(self, name, start_mark):
# See the specification for details.
codes = []
mark = self.get_mark()
while self.peek() == '%':
self.forward()
for k in range(2):
if self.peek(k) not in '0123456789ABCDEFabcdef':
raise ScannerError("while scanning a %s" % name, start_mark,
                        "expected URI escape sequence of 2 hexadecimal numbers, but found %r"
% self.peek(k), self.get_mark())
codes.append(int(self.prefix(2), 16))
self.forward(2)
try:
value = bytes(codes).decode('utf-8')
except UnicodeDecodeError as exc:
raise ScannerError("while scanning a %s" % name, start_mark, str(exc), mark)
return value
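The percent-decoding above collects a run of consecutive `%XX` pairs into a byte buffer and performs a single UTF-8 decode at the end, so multi-byte characters split across several escapes survive. A minimal stdlib sketch of the same idea (the helper name is ours, not PyYAML's):

```python
def decode_uri_escapes(s: str, pos: int = 0) -> str:
    """Decode the run of %XX escapes starting at `pos` as UTF-8 bytes."""
    codes = []
    while s[pos:pos + 1] == '%':
        hex_pair = s[pos + 1:pos + 3]
        if len(hex_pair) != 2 or any(c not in '0123456789ABCDEFabcdef' for c in hex_pair):
            raise ValueError("expected URI escape sequence of 2 hexadecimal digits")
        codes.append(int(hex_pair, 16))
        pos += 3
    # one decode over the whole byte run, not per escape
    return bytes(codes).decode('utf-8')
```

Decoding escape-by-escape instead would raise on any character wider than one byte.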
def scan_line_break(self):
# Transforms:
# '\r\n' : '\n'
# '\r' : '\n'
# '\n' : '\n'
# '\x85' : '\n'
# '\u2028' : '\u2028'
        # '\u2029' : '\u2029'
# default : ''
ch = self.peek()
if ch in '\r\n\x85':
if self.prefix(2) == '\r\n':
self.forward(2)
else:
self.forward()
return '\n'
elif ch in '\u2028\u2029':
self.forward()
return ch
return ''
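The table in the comment above maps every YAML line break onto either `'\n'` or the Unicode separator itself. The same normalization as a free-standing sketch (not part of the scanner, which works on a stream rather than a string):

```python
def normalize_line_break(text: str, pos: int = 0):
    """Return (normalized break, characters consumed) at `pos`."""
    ch = text[pos:pos + 1]
    if not ch:
        return '', 0
    if text[pos:pos + 2] == '\r\n':   # CRLF collapses to a single '\n'
        return '\n', 2
    if ch in '\r\n\x85':
        return '\n', 1
    if ch in '\u2028\u2029':          # line/paragraph separators pass through
        return ch, 1
    return '', 0
```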
#try:
# import psyco
# psyco.bind(Scanner)
#except ImportError:
# pass | PypiClean |
/MDSuite-0.2.0-py3-none-any.whl/mdsuite/database/simulation_database.py | import dataclasses
import logging
import pathlib
import time
import typing
from typing import List
import h5py as hf
import numpy as np
import tensorflow as tf
from mdsuite.utils.meta_functions import join_path
log = logging.getLogger(__name__)
@dataclasses.dataclass(frozen=True)
class PropertyInfo:
"""
Information of a trajectory property.
example:
pos_info = PropertyInfo('Positions', 3)
vel_info = PropertyInfo('Velocities', 3)
Attributes
----------
name:
The name of the property
n_dims:
The dimensionality of the property
"""
name: str
n_dims: int
@dataclasses.dataclass
class SpeciesInfo:
"""
Information of a species
Attributes
----------
name
Name of the species (e.g. 'Na')
n_particles
Number of particles of that species
properties: list of PropertyInfo
List of the properties that were recorded for the species
mass and charge are optional
"""
name: str
n_particles: int
properties: List[PropertyInfo]
mass: float = None
charge: float = 0
def __eq__(self, other):
same = (
self.name == other.name
and self.n_particles == other.n_particles
and self.mass == other.mass
and self.charge == other.charge
)
if len(self.properties) != len(other.properties):
return False
for prop_s, prop_o in zip(self.properties, other.properties):
same = same and prop_s == prop_o
return same
@dataclasses.dataclass
class MoleculeInfo(SpeciesInfo):
"""Information about a Molecule
All the information of a species + groups
Attributes
----------
groups: dict
A molecule specific dictionary for mapping the molecule to the
particles. The keys of this dict are index references to a specific molecule,
i.e. molecule 1 and the values are a dict of atom species and their indices
belonging to that specific molecule.
e.g
    water = {"groups": {"0": {"H": [0, 1], "O": [0]}}}
This tells us that the 0th water molecule consists of the 0th and 1st hydrogen
atoms in the database as well as the 0th oxygen atom.
"""
groups: dict = None
def __eq__(self, other):
"""Add a check to see if the groups are identical"""
if self.groups != other.groups:
return False
return super(MoleculeInfo, self).__eq__(other)
@dataclasses.dataclass
class TrajectoryMetadata:
"""
This metadata must be extracted from trajectory files to build the database into
which the trajectory will be stored
Attributes
----------
n_configurations : int
Number of configurations of the whole trajectory.
species_list: list of SpeciesInfo
The information about all species in the system.
box_l: list of float
The simulation box size in three dimensions
sample_rate : int optional
The number of timesteps between consecutive samples
# todo remove in favour of sample_step
sample_step : int optional
The time between consecutive configurations.
E.g. for a simulation with time step 0.1 where the trajectory is written
every 5 steps: sample_step = 0.5. Does not have to be specified
(e.g. configurations from Monte Carlo scheme), but is needed for all
dynamic observables.
temperature : float optional
The set temperature of the system.
Optional because only applicable for MD simulations with thermostat.
Needed for certain observables.
simulation_data : str|Path, optional
All other simulation data that can be extracted from the trajectory metadata.
E.g. software version, pressure in NPT simulations, time step, ...
"""
n_configurations: int
species_list: List[SpeciesInfo]
box_l: list = None
sample_rate: int = 1
sample_step: float = None
temperature: float = None
simulation_data: dict = dataclasses.field(default_factory=dict)
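A short usage sketch of how these records nest (the two dataclasses are re-declared locally so the snippet stands alone; the particle counts are illustrative):

```python
import dataclasses
from typing import List

@dataclasses.dataclass(frozen=True)
class PropertyInfo:
    name: str
    n_dims: int

@dataclasses.dataclass
class SpeciesInfo:
    name: str
    n_particles: int
    properties: List[PropertyInfo]

# 42 sodium atoms with positions and velocities recorded per configuration
na = SpeciesInfo("Na", 42, [PropertyInfo("Positions", 3), PropertyInfo("Velocities", 3)])
# a TrajectoryMetadata would then carry [na] plus box size, sample step, temperature, ...
```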
class TrajectoryChunkData:
"""
Class to specify the data format for transfer from the file to the database
"""
def __init__(self, species_list: List[SpeciesInfo], chunk_size: int):
"""
Parameters
----------
species_list : List[SpeciesInfo]
List of SpeciesInfo.
It contains the information which species are there and which properties
are recorded for each
chunk_size : int
The number of configurations to be stored in this chunk
"""
self.chunk_size = chunk_size
self.species_list = species_list
self._data = dict()
for sp_info in species_list:
self._data[sp_info.name] = dict()
for prop_info in sp_info.properties:
self._data[sp_info.name][prop_info.name] = np.zeros(
(chunk_size, sp_info.n_particles, prop_info.n_dims)
)
def add_data(self, data: np.ndarray, config_idx, species_name, property_name):
"""
Add configuration data to the chunk
Parameters
----------
data:
The data to be added, with shape (n_configs, n_particles, n_dims).
n_particles and n_dims relates to the species and the property that is
being added
config_idx:
Start index of the configs that are being added.
species_name
Name of the species to which the data belongs
property_name
Name of the property being added
example:
Storing velocity Information for 42 Na atoms in the 17th iteration of a loop
that reads 5 configs per loop:
add_data(vel_array, 16*5, 'Na', 'Velocities')
        where vel_array.shape == (5, 42, 3)
"""
n_configs = len(data)
self._data[species_name][property_name][
config_idx : config_idx + n_configs, :, :
] = data
def get_data(self):
return self._data
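The chunk buffer is laid out as (configuration, particle, dimension), and `add_data` writes a contiguous block of configurations starting at `config_idx`. The same bookkeeping with plain nested lists instead of NumPy (sizes are illustrative):

```python
chunk_size, n_particles, n_dims = 4, 2, 3
# buf[config][particle][dim], zero-initialized like np.zeros above
buf = [[[0.0] * n_dims for _ in range(n_particles)] for _ in range(chunk_size)]

def add_data(buf, data, config_idx):
    # data: a list of configurations, each of shape (n_particles x n_dims)
    for k, frame in enumerate(data):
        buf[config_idx + k] = frame

# write one configuration starting at index 2
add_data(buf, [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]], 2)
```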
class Database:
"""
Database class
Databases make up a large part of the functionality of MDSuite and are kept
fairly consistent in structure. Therefore, the database_path structure we
are using has a separate class with commonly used methods which act as
wrappers for the hdf5 database_path.
Attributes
----------
path : str|Path
The name of the database_path in question.
"""
def __init__(self, path: typing.Union[str, pathlib.Path] = "database"):
"""
Constructor for the database_path class.
Parameters
----------
path : str|Path
The name of the database_path in question.
"""
if isinstance(path, pathlib.Path):
self.path = path.as_posix()
elif isinstance(path, str):
self.path = path # name of the database_path
else:
# TODO fix this!
log.debug(f"Expected str|Path but found {type(path)}")
self.path = path
@staticmethod
def _update_indices(
data: np.array, reference: np.array, batch_size: int, n_atoms: int
):
"""
Update the indices key of the structure dictionary if the tensor_values must be
sorted.
Parameters
----------
data : np.ndarray
reference : np.ndarray
batch_size : int
n_atoms : int
Returns
-------
"""
ids = np.reshape(np.array(data[:, 0]).astype(int), (-1, n_atoms))
ref_ids = np.argsort(ids, axis=1)
n_batches = ids.shape[0]
return (
ref_ids[:, reference - 1] + (np.arange(n_batches) * n_atoms)[None].T
).flatten()
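Without NumPy, the same trick reads: argsort each batch's atom IDs, then offset each index by the batch's position in the flattened array (the IDs below are hypothetical):

```python
ids = [[3, 1, 2], [2, 3, 1]]   # atom ids per batch, as parsed from the file
n_atoms = 3

flat = []
for batch_no, batch in enumerate(ids):
    # argsort: indices that would sort this batch's atom ids
    order = sorted(range(n_atoms), key=lambda i: batch[i])
    # shift into the flattened (batch * n_atoms) index space
    flat.extend(i + batch_no * n_atoms for i in order)
```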
@staticmethod
def _build_path_input(structure: dict) -> dict:
"""
Build an input to a hdf5 database_path from a dictionary
In many cases, whilst a dict can be passed on to a method, it is not ideal for
use in the hdf5 database_path. This method takes a dictionary and return a new
dictionary with the relevant file path.
Parameters
----------
structure : dict
General structure of the dictionary with relevant dataset sizes. e.g.
{'Na': {'Forces': (200, 5000, 3)},
'Pressure': (5000, 6), 'Temperature': (5000, 1)} In this case, the last
value in the tuple corresponds to the number of components that wil be
parsed to the database_path. The value in the dict can also be an
integer corresponding to a resize operation such as
{'Na': {'velocities' 100}}. In any case, the deepest portion of the
dict must be a non-dict object and will be returned as the value of the
path to it in the new dictionary.
Returns
-------
architecture : dict
Corrected path in the hdf5 database_path. e.g. {'/Na/Velocities': 100},
or {'/Na/Forces': (200, 5000, 3)}
"""
# Build file paths for the addition.
architecture = {}
for group in structure:
if type(structure[group]) is not dict:
architecture[group] = structure[group]
else:
for subgroup in structure[group]:
db_path = join_path(group, subgroup)
architecture[db_path] = structure[group][subgroup]
return architecture
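`_build_path_input` flattens one level of nesting into HDF5-style paths. A self-contained sketch of the same transformation (assuming `join_path` behaves like a `/`-join, which is how the docstring uses it):

```python
def build_path_input(structure):
    architecture = {}
    for group, value in structure.items():
        if not isinstance(value, dict):
            architecture[group] = value        # leaf at the top level
        else:
            for subgroup, leaf in value.items():
                architecture[group + "/" + subgroup] = leaf
    return architecture
```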
def add_data(self, chunk: TrajectoryChunkData, start_idx: int):
"""
Add new data to the dataset.
Parameters
----------
chunk:
a data chunk
start_idx:
Configuration at which to start writing
"""
workaround_time_in_axis_1 = True
chunk_data = chunk.get_data()
with hf.File(self.path, "r+") as database:
stop_index = start_idx + chunk.chunk_size
for sp_info in chunk.species_list:
for prop_info in sp_info.properties:
dataset_name = f"{sp_info.name}/{prop_info.name}"
write_data = chunk_data[sp_info.name][prop_info.name]
dataset_shape = database[dataset_name].shape
if len(dataset_shape) == 2:
# only one particle
database[dataset_name][start_idx:stop_index, :] = write_data[
:, 0, :
]
elif len(dataset_shape) == 3:
if workaround_time_in_axis_1:
database[dataset_name][
:, start_idx:stop_index, :
] = np.swapaxes(write_data, 0, 1)
else:
database[dataset_name][start_idx:stop_index, ...] = write_data
else:
raise ValueError(
"dataset shape must be either (n_part,n_config,n_dim) or"
" (n_config, n_dim)"
)
def resize_datasets(self, structure: dict):
"""
Resize a dataset so more tensor_values can be added
Parameters
----------
structure : dict
path to the dataset that needs to be resized. e.g.
{'Na': {'velocities': (32, 100, 3)}}
will resize all 'x', 'y', and 'z' datasets by 100 entries.
Returns
-------
"""
with hf.File(self.path, "r+") as db:
# construct the architecture dict
architecture = self._build_path_input(structure=structure)
# Check for a type error in the dataset information
for identifier in architecture:
dataset_information = architecture[identifier]
if not isinstance(dataset_information, tuple):
raise TypeError("Invalid input for dataset generation")
# get the correct maximum shape for the dataset -- changes if an
# experiment property or an atomic property
if len(dataset_information[:-1]) == 1:
axis = 0
expansion = dataset_information[0] + db[identifier].shape[0]
else:
axis = 1
expansion = dataset_information[1] + db[identifier].shape[1]
db[identifier].resize(expansion, axis)
def initialize_database(self, structure: dict):
"""
Build a database_path with a general structure.
Note, this method WILL overwrite a pre-existing database_path. This is because
it is only to be called on the initial construction of an experiment class and
the first addition of tensor_values to it.
Parameters
----------
structure : dict
General structure of the dictionary with relevant dataset sizes.
e.g. {'Na': {'Forces': (200, 5000, 3)}, 'Pressure': (5000, 6),
'Temperature': (5000, 1)} In this case, the last value in the tuple
            corresponds to the number of components that will be parsed to the
database_path.
Returns
-------
"""
self.add_dataset(structure) # add a dataset to the groups
def database_exists(self) -> bool:
"""
Check if the database file already exists
"""
return pathlib.Path(self.path).exists()
def add_dataset(self, structure: dict):
"""
Add a dataset of the necessary size to the database_path
Just as a separate method exists for building the group structure of the hdf5
database_path, so too do we include a separate method for adding a dataset.
This is so datasets can be added not just upon the initial construction of the
database_path, but also if tensor_values is added in the future that should
also be stored. This method will assume that a group has already been built,
although this is not necessary for HDF5, the separation of the actions is good
practice.
Parameters
----------
structure : dict
Structure of a single property to be added to the database_path.
e.g. {'Na': {'Forces': (200, 5000, 3)}}
Returns
-------
Updates the database_path directly.
"""
with hf.File(self.path, "a") as database:
architecture = self._build_path_input(structure) # get the correct file path
for item in architecture:
dataset_information = architecture[item] # get the tuple information
dataset_path = item # get the dataset path in the database_path
                # Check for a type error in the dataset information
                if not isinstance(dataset_information, tuple):
                    raise TypeError("Invalid input for dataset generation")
if len(dataset_information[:-1]) == 1:
vector_length = dataset_information[-1]
max_shape = (None, vector_length)
else:
max_shape = list(dataset_information)
max_shape[1] = None
max_shape = tuple(max_shape)
database.create_dataset(
dataset_path,
dataset_information,
maxshape=max_shape,
scaleoffset=5,
chunks=True,
)
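The `maxshape` logic above makes the configuration axis unlimited so the dataset can be resized later: axis 0 for per-system (2-D) properties, axis 1 for per-atom (3-D) properties. The shape rule in isolation:

```python
def max_shape_for(shape):
    """Return the h5py-style maxshape with the configuration axis unlimited."""
    if len(shape[:-1]) == 1:
        # (n_configs, n_dims): grow along axis 0
        return (None, shape[-1])
    # (n_particles, n_configs, n_dims): grow along axis 1
    max_shape = list(shape)
    max_shape[1] = None
    return tuple(max_shape)
```

This matches the axis choice in `resize_datasets`, which also expands axis 0 for 2-D and axis 1 for 3-D datasets.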
def _add_group_structure(self, structure: dict):
"""
Add a simple group structure to a database_path.
This method will take an input structure and build the required group structure
in the hdf5 database_path. This will NOT however instantiate the dataset for the
structure, only the group hierarchy.
Parameters
----------
structure : dict
Structure of a single property to be added to the database_path.
e.g. {'Na': {'Forces': (200, 5000, 3)}}
Returns
-------
Updates the database_path directly.
"""
with hf.File(self.path, "a") as database:
# Build file paths for the addition.
architecture = self._build_path_input(structure=structure)
for item in list(architecture):
if item in database:
log.info("Group structure already exists")
else:
database.create_group(item)
def get_memory_information(self) -> dict:
"""
Get memory information from the database_path
Returns
-------
memory_database : dict
A dictionary of the memory information of the groups in the
database_path
"""
with hf.File(self.path, "r") as database:
memory_database = {}
for item in database:
for ds in database[item]:
memory_database[join_path(item, ds)] = database[item][ds].nbytes
return memory_database
def check_existence(self, path: str) -> bool:
"""
Check to see if a dataset is in the database_path
Parameters
----------
path : str
Path to the desired dataset
Returns
-------
response : bool
If true, the path exists, else, it does not.
"""
with hf.File(self.path, "r") as database_object:
keys = []
database_object.visit(
lambda item: keys.append(database_object[item].name)
if isinstance(database_object[item], hf.Dataset)
else None
)
path = f"/{path}" # add the / to avoid name overlapping
            response = any(item.endswith(path) for item in keys)
return response
def change_key_names(self, mapping: dict):
"""
Change the name of database_path keys
Parameters
----------
mapping : dict
Mapping for the change of names
Returns
-------
Updates the database_path
"""
with hf.File(self.path, "r+") as db:
groups = list(db.keys())
for item in groups:
if item in mapping:
db.move(item, mapping[item])
def load_data(
self,
path_list: list = None,
select_slice: np.s_ = np.s_[:],
dictionary: bool = False,
scaling: list = None,
d_size: int = None,
):
"""
Load tensor_values from the database_path for some operation.
Should be called by the tensor_values fetch class as this will ensure
correct loading and pre-loading.
Returns
-------
"""
if scaling is None:
scaling = [1 for _ in range(len(path_list))]
with hf.File(self.path, "r") as database:
data = {}
for i, item in enumerate(path_list):
if type(select_slice) is dict:
# index is the particle species name in the full item as a str.
slice_index = item.decode().split("/")[0]
my_slice = select_slice[slice_index]
else:
my_slice = select_slice
try:
data[item] = (
tf.convert_to_tensor(database[item][my_slice], dtype=tf.float64)
* scaling[i]
)
except TypeError:
data[item] = (
tf.convert_to_tensor(
database[item][my_slice[0]][:, my_slice[1], :],
dtype=tf.float64,
)
* scaling[i]
)
data[str.encode("data_size")] = d_size
return data
def get_load_time(self, database_path: str = None):
"""
Calculate the open/close time of the database_path.
Parameters
----------
database_path : str
Database path on which to test the time.
Returns
-------
opening time : float
Time taken to open and close the database_path
"""
if database_path is None:
start = time.time()
database_path = hf.File(self.path, "r")
database_path.close()
stop = time.time()
else:
start = time.time()
database_path = hf.File(database_path, "r")
database_path.close()
stop = time.time()
return stop - start
def get_data_size(self, data_path: str) -> tuple:
"""
Return the size of a dataset as a tuple (n_rows, n_columns, n_bytes)
Parameters
----------
data_path : str
path to the tensor_values in the hdf5 database_path.
Returns
-------
dataset_properties : tuple
Tuple of tensor_values about the dataset, e.g.
(n_rows, n_columns, n_bytes)
"""
with hf.File(self.path, "r") as db:
data_tuple = (
db[data_path].shape[0],
db[data_path].shape[1],
db[data_path].nbytes,
)
return data_tuple
def get_database_summary(self):
"""
Get a summary of the database properties.
Returns
-------
summary : list
A list of properties that are in the database
"""
with hf.File(self.path, "r") as db:
return list(db.keys()) | PypiClean |
/IdracRedfishSupportTest-0.0.7.tar.gz/IdracRedfishSupportTest-0.0.7/ChangeIdracUserPasswordREDFISH.py |
import argparse
import getpass
import json
import logging
import re
import requests
import sys
import time
import warnings
from datetime import datetime
from pprint import pprint
warnings.filterwarnings("ignore")
parser=argparse.ArgumentParser(description="Python script using Redfish API with OEM extension to change iDRAC username password. Once the password is changed, the script will execute GET command to verify the password change was successful.")
parser.add_argument('-ip',help='iDRAC IP address', required=False)
parser.add_argument('-u', help='iDRAC username', required=False)
parser.add_argument('-p', help='iDRAC password. If you do not pass in argument -p, script will prompt to enter user password which will not be echoed to the screen.', required=False)
parser.add_argument('-x', help='Pass in X-Auth session token for executing Redfish calls. All Redfish calls will use X-Auth token instead of username/password', required=False)
parser.add_argument('--ssl', help='SSL cert verification for all Redfish calls, pass in value \"true\" or \"false\". By default, this argument is not required and script ignores validating SSL cert for all Redfish calls.', required=False)
parser.add_argument('--script-examples', help='Get executing script examples', action="store_true", dest="script_examples", required=False)
parser.add_argument('--get', help='Get iDRAC user account information. This will return detailed information for each iDRAC user account.', action="store_true", required=False)
parser.add_argument('--user-id', help='Pass in the iDRAC user account ID you want to change the password for. If needed, use argument --get to get the iDRAC user account ID.', dest="user_id", required=False)
parser.add_argument('--new-pwd', help='Pass in the new password you want to set for the iDRAC user ID. If you do not pass in this argument, script will prompt to enter new password and will not be echoed to the screen.', dest="new_pwd", required=False)
args=vars(parser.parse_args())
logging.basicConfig(format='%(message)s', stream=sys.stdout, level=logging.INFO)
def script_examples():
print("""\n- ChangeIdracUserPasswordREDFISH.py -ip 192.168.0.120 -u root -p calvin --get, this example will get user account information for all iDRAC users.
\n- ChangeIdracUserPasswordREDFISH.py -ip 192.168.0.120 -u user -p calvin --user-id 3 --new-pwd pAssw0rd, this example shows changing iDRAC user ID 3 password.
\n- ChangeIdracUserPasswordREDFISH.py -ip 192.168.0.120 -x 983d154b4a125c7ae3838b8e32256b78 --user-id 8, this example using iDRAC X-auth token session will first prompt to enter new password, then change the password for user ID 8.""")
sys.exit(0)
def check_supported_idrac_version():
if args["x"]:
response = requests.get('https://%s/redfish/v1/Managers/iDRAC.Embedded.1/Accounts' % idrac_ip, verify=verify_cert, headers={'X-Auth-Token': args["x"]})
else:
response = requests.get('https://%s/redfish/v1/Managers/iDRAC.Embedded.1/Accounts' % idrac_ip, verify=verify_cert,auth=(idrac_username, idrac_password))
data = response.json()
if response.status_code == 401:
logging.warning("\n- WARNING, status code %s returned. Incorrect iDRAC username/password or invalid privilege detected." % response.status_code)
sys.exit(0)
if response.status_code != 200:
logging.warning("\n- WARNING, iDRAC version installed does not support this feature using Redfish API")
sys.exit(0)
def get_iDRAC_user_account_info():
if args["x"]:
response = requests.get('https://%s/redfish/v1/Managers/iDRAC.Embedded.1/Accounts?$expand=*($levels=1)' % idrac_ip, verify=verify_cert, headers={'X-Auth-Token': args["x"]})
else:
response = requests.get('https://%s/redfish/v1/Managers/iDRAC.Embedded.1/Accounts?$expand=*($levels=1)' % idrac_ip, verify=verify_cert,auth=(idrac_username, idrac_password))
data = response.json()
if response.status_code != 200:
        logging.error("\n- FAIL, status code %s returned for GET command. Detail error results: \n%s" % (response.status_code, data))
sys.exit(0)
logging.info("\n- iDRAC User Account Information -")
for i in data["Members"]:
pprint(i)
print("\n")
def change_idrac_user_password():
if args["x"]:
response = requests.get('https://%s/redfish/v1/Managers/iDRAC.Embedded.1/Accounts/%s' % (idrac_ip, args["user_id"]), verify=verify_cert, headers={'X-Auth-Token': args["x"]})
else:
response = requests.get('https://%s/redfish/v1/Managers/iDRAC.Embedded.1/Accounts/%s' % (idrac_ip, args["user_id"]), verify=verify_cert,auth=(idrac_username, idrac_password))
if response.status_code == 401:
logging.warning("\n- WARNING, status code 401 detected, check iDRAC username / password credentials and privilege level")
sys.exit(0)
data = response.json()
url = 'https://%s/redfish/v1/Managers/iDRAC.Embedded.1/Accounts/%s' % (idrac_ip, args["user_id"])
if not args["new_pwd"]:
args["new_pwd"] = getpass.getpass("\n- Argument --new-pwd not detected, pass in new user password: ")
payload = {'Password': args["new_pwd"]}
if args["x"]:
headers = {'content-type': 'application/json', 'X-Auth-Token': args["x"]}
response = requests.patch(url, data=json.dumps(payload), headers=headers, verify=verify_cert)
else:
headers = {'content-type': 'application/json'}
response = requests.patch(url, data=json.dumps(payload), headers=headers, verify=verify_cert,auth=(idrac_username,idrac_password))
if response.status_code == 200:
logging.info("\n- PASS, status code %s returned for PATCH command to change iDRAC user password for user ID %s" % (response.status_code, args["user_id"]))
else:
data = response.json()
logging.error("\n- FAIL, status code %s returned, password was not changed. Detailed error results: \n%s" % (response.status_code, data))
sys.exit(0)
if args["x"]:
logging.info("\n- INFO, X-auth token session detected. If you changed the user password for the user account that created the token, this token is no longer valid and needs to be recreated")
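The password change itself is a single Redfish `PATCH` with a JSON body against the account URI. A stdlib-only sketch of the same request (the address and account ID are placeholders; nothing is sent here, and the script above uses `requests.patch` instead):

```python
import json
import urllib.request

url = "https://192.168.0.120/redfish/v1/Managers/iDRAC.Embedded.1/Accounts/3"
req = urllib.request.Request(
    url,
    data=json.dumps({"Password": "newPassw0rd"}).encode(),
    headers={"content-type": "application/json"},
    method="PATCH",
)
# urllib.request.urlopen(req) would actually send it (with TLS verification caveats)
```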
if __name__ == "__main__":
if args["script_examples"]:
script_examples()
if args["ip"] and args["ssl"] or args["u"] or args["p"] or args["x"]:
idrac_ip=args["ip"]
idrac_username=args["u"]
if args["p"]:
idrac_password=args["p"]
if not args["p"] and not args["x"] and args["u"]:
idrac_password = getpass.getpass("\n- Argument -p not detected, pass in iDRAC user %s password: " % args["u"])
if args["ssl"]:
if args["ssl"].lower() == "true":
verify_cert = True
elif args["ssl"].lower() == "false":
verify_cert = False
else:
verify_cert = False
else:
verify_cert = False
check_supported_idrac_version()
else:
logging.error("\n- FAIL, invalid argument values or not all required parameters passed in. See help text or argument --script-examples for more details.")
sys.exit(0)
if args["get"]:
get_iDRAC_user_account_info()
elif args["user_id"] or args["new_pwd"]:
idrac_account_id = args["user_id"]
idrac_new_password = args["new_pwd"]
change_idrac_user_password()
else:
logging.error("\n- FAIL, invalid argument values or not all required parameters passed in. See help text or argument --script-examples for more details.") | PypiClean |
/FlaskCms-0.0.4.tar.gz/FlaskCms-0.0.4/flask_cms/static/js/ckeditor/plugins/a11yhelp/dialogs/lang/hu.js | /*
Copyright (c) 2003-2013, CKSource - Frederico Knabben. All rights reserved.
For licensing, see LICENSE.md or http://ckeditor.com/license
*/
CKEDITOR.plugins.setLang("a11yhelp","hu",{title:"Kisegítő utasítások",contents:"Súgó tartalmak. A párbeszédablak bezárásához nyomjon ESC-et.",legend:[{name:"Általános",items:[{name:"Szerkesztő Eszköztár",legend:"Nyomjon ${toolbarFocus} hogy kijelölje az eszköztárat. A következő és előző eszköztár csoporthoz a TAB és SHIFT TAB-al juthat el. A következő és előző eszköztár gombhoz a BAL NYÍL vagy JOBB NYÍL gombbal juthat el. Nyomjon SPACE-t vagy ENTER-t hogy aktiválja az eszköztár gombot."},{name:"Szerkesző párbeszéd ablak",
legend:"Párbeszédablakban nyomjon TAB-ot a következő párbeszédmezőhöz ugráshoz, nyomjon SHIFT + TAB-ot az előző mezőhöz ugráshoz, nyomjon ENTER-t a párbeszédablak elfogadásához, nyomjon ESC-et a párbeszédablak elvetéséhez. Azokhoz a párbeszédablakokhoz, amik több fület tartalmaznak, nyomjon ALT + F10-et hogy a fülekre ugorjon. Ezután a TAB-al vagy a JOBB NYÍLLAL a következő fülre ugorhat. Az előző fülre ugráshoz használja a SHIFT + TAB-ot vagy a BAL NYILAT. Nyomjon SPACE-t vagy ENTER-t hogy kijelölje a fület."},
{name:"Szerkesztő helyi menü",legend:"Nyomjon ${contextMenu}-t vagy ALKALMAZÁS BILLENTYŰT a helyi menü megnyitásához. Ezután a következő menüpontra léphet a TAB vagy LEFELÉ NYÍLLAL. Az előző opciót a SHIFT+TAB vagy FELFELÉ NYÍLLAL érheti el. Nyomjon SPACE-t vagy ENTER-t a menüpont kiválasztásához. A jelenlegi menüpont almenüjének megnyitásához nyomjon SPACE-t vagy ENTER-t, vagy JOBB NYILAT. A főmenühöz való visszatéréshez nyomjon ESC-et vagy BAL NYILAT. A helyi menü bezárása az ESC billentyűvel lehetséges."},
{name:"Szerkesztő lista",legend:"A listán belül a következő elemre a TAB vagy LEFELÉ NYÍLLAL mozoghat. Az előző elem kiválasztásához nyomjon SHIFT+TAB-ot vagy FELFELÉ NYILAT. Nyomjon SPACE-t vagy ENTER-t az elem kiválasztásához. Az ESC billentyű megnyomásával bezárhatja a listát."},{name:"Szerkesztő elem utak sáv",legend:"Nyomj ${elementsPathFocus} hogy kijelöld a elemek út sávját. A következő elem gombhoz a TAB-al vagy a JOBB NYÍLLAL juthatsz el. Az előző gombhoz a SHIFT+TAB vagy BAL NYÍLLAL mehetsz. A SPACE vagy ENTER billentyűvel kiválaszthatod az elemet a szerkesztőben."}]},
{name:"Parancsok",items:[{name:"Parancs visszavonása",legend:"Nyomj ${undo}"},{name:"Parancs megismétlése",legend:"Nyomjon ${redo}"},{name:"Félkövér parancs",legend:"Nyomjon ${bold}"},{name:"Dőlt parancs",legend:"Nyomjon ${italic}"},{name:"Aláhúzott parancs",legend:"Nyomjon ${underline}"},{name:"Link parancs",legend:"Nyomjon ${link}"},{name:"Szerkesztősáv összecsukása parancs",legend:"Nyomjon ${toolbarCollapse}"},{name:"Hozzáférés az előző fókusz helyhez parancs",legend:"Nyomj ${accessNextSpace} hogy hozzáférj a legközelebbi elérhetetlen fókusz helyhez a hiányjel előtt, például: két szomszédos HR elemhez. Ismételd meg a billentyűkombinációt hogy megtaláld a távolabbi fókusz helyeket."},
{name:"Hozzáférés a következő fókusz helyhez parancs",legend:"Nyomj ${accessNextSpace} hogy hozzáférj a legközelebbi elérhetetlen fókusz helyhez a hiányjel után, például: két szomszédos HR elemhez. Ismételd meg a billentyűkombinációt hogy megtaláld a távolabbi fókusz helyeket."},{name:"Kisegítő súgó",legend:"Nyomjon ${a11yHelp}"}]}]}); | PypiClean |
/EMMO-1.0.1-py3-none-any.whl/emmo/patch.py | """This module injects some additional methods into owlready2 classes."""
import types
import owlready2
from owlready2 import ThingClass, PropertyClass, Thing, Restriction, Namespace
from owlready2 import Metadata
# Improve default rendering of entities
def render_func(entity):
if hasattr(entity, 'prefLabel') and entity.prefLabel:
name = entity.prefLabel[0]
elif hasattr(entity, 'label') and entity.label:
name = entity.label[0]
elif hasattr(entity, 'altLabel') and entity.altLabel:
name = entity.altLabel[0]
else:
name = entity.name
return "%s.%s" % (entity.namespace.name, name)
owlready2.set_render_func(render_func)
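The render function falls back through prefLabel → label → altLabel → name. The selection rule in isolation (plain lists stand in for owlready2 annotation values):

```python
def pick_label(name, prefLabel=(), label=(), altLabel=()):
    """Return the first non-empty annotation in preference order, else the name."""
    for candidates in (prefLabel, label, altLabel):
        if candidates:
            return candidates[0]
    return name
```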
#
# Extending ThingClass (classes)
# ==============================
def get_preferred_label(self):
"""Returns the preferred label as a string (not list).
The following heuristics is used:
- if prefLabel annotation property exists, returns the first prefLabel
- if label annotation property exists, returns the first label
- otherwise return the name
"""
if hasattr(self, 'prefLabel') and self.prefLabel:
return self.prefLabel[0]
elif hasattr(self, 'label') and self.label:
return self.label.first()
else:
return self.name
def get_parents(self, strict=False):
"""Returns a list of all parents. If `strict` is true, parents that are
parents of other parents are excluded."""
if strict:
s = self.get_parents()
for e in s.copy():
s.difference_update(e.ancestors(include_self=False))
return s
elif isinstance(self, ThingClass):
return {cls for cls in self.is_a
if isinstance(cls, ThingClass)}
elif isinstance(self, owlready2.ObjectPropertyClass):
return {cls for cls in self.is_a
if isinstance(cls, owlready2.ObjectPropertyClass)}
else:
assert 0
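With `strict=True` the set is pruned so that no returned parent is an ancestor of another returned parent. The pruning step on plain sets (hypothetical hierarchy where B is a subclass of A):

```python
parents = {"A", "B"}
ancestors = {"A": {"Thing"}, "B": {"A", "Thing"}}  # include_self=False, as above

strict = set(parents)
for p in parents:
    # drop anything that is an ancestor of some other parent
    strict.difference_update(ancestors[p])
```

Only the most specific parents survive, which is what `get_parents(strict=True)` returns.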
def _dir(self):
"""Extend in dir() listing of ontology classes."""
s = set(object.__dir__(self))
props = self.namespace.world._props.keys()
s.update(props)
return sorted(s)
def get_class_annotations(self, all=False, imported=True):
"""Returns a dict with non-empty annotations.
If `all` is true, also annotations with no value are included.
If `imported` is true, also include annotations defined in
imported ontologies.
"""
onto = self.namespace.ontology
d = {get_preferred_label(a): a._get_values_for_class(self)
for a in onto.annotation_properties(imported=imported)}
if all:
return d
else:
return {k: v for k, v in d.items() if v}
def disjoint_with(self, reduce=False):
"""Returns a generator with all classes that are disjoint with `self`.
If `reduce` is true, all classes that are a descendant of another class
will be excluded."""
if reduce:
s = set(self.disjoint_with())
for e in s.copy():
s.difference_update(e.descendants(include_self=False))
for e in s:
yield e
else:
for d in self.disjoints():
for e in d.entities:
if e is not self:
yield e
def get_indirect_is_a(self, skip_classes=True):
"""Returns the set of all isSubclassOf relations of self and its
ancestors. If `skip_classes` is true, indirect classes are not
included in the returned set.
"""
s = set()
for e in reversed(self.mro()):
if hasattr(e, 'is_a'):
if skip_classes:
s.update(r for r in e.is_a
if not isinstance(r, owlready2.ThingClass))
else:
s.update(e.is_a)
s.update(self.is_a)
return s
# Inject methods into ThingClass
setattr(ThingClass, '__dir__', _dir)
setattr(ThingClass, 'get_preferred_label', get_preferred_label)
setattr(ThingClass, 'get_parents', get_parents)
setattr(ThingClass, 'get_annotations', get_class_annotations)
setattr(ThingClass, 'disjoint_with', disjoint_with)
setattr(ThingClass, 'get_indirect_is_a', get_indirect_is_a)
#
# Extending PropertyClass (properties)
# ====================================
def get_property_annotations(self, all=False, imported=True):
"""Returns a dict with non-empty property annotations.
If `all` is true, also annotations with no value are included.
If `imported` is true, also include annotations defined in
imported ontologies.
"""
onto = self.namespace.ontology
d = {get_preferred_label(a): a._get_values_for_class(self)
for a in onto.annotation_properties(imported=imported)}
if all:
return d
else:
return {k: v for k, v in d.items() if v}
setattr(PropertyClass, 'get_preferred_label', get_preferred_label)
setattr(PropertyClass, 'get_parents', get_parents)
setattr(PropertyClass, 'get_annotations', get_property_annotations)
#
# Extending Thing (individuals)
# =============================
def get_individual_annotations(self, all=False, imported=True):
"""Returns a dict with non-empty individual annotations.
If `all` is true, also annotations with no value are included.
If `imported` is true, also include annotations defined in
imported ontologies.
"""
onto = self.namespace.ontology
d = {get_preferred_label(a): a._get_values_for_individual(self)
for a in onto.annotation_properties(imported=imported)}
if all:
return d
else:
return {k: v for k, v in d.items() if v}
# Method names for individuals must be different from method names for classes
type.__setattr__(Thing, 'get_preflabel', get_preferred_label)
type.__setattr__(Thing, 'get_individual_annotations',
get_individual_annotations)
#
# Extending Restriction
# =====================
def get_typename(self):
return owlready2.class_construct._restriction_type_2_label[self.type]
setattr(Restriction, 'get_typename', get_typename)
#
# Extending Namespace
# ===================
orig_namespace_init = Namespace.__init__
def namespace_init(self, world_or_ontology, base_iri, name=None):
orig_namespace_init(self, world_or_ontology, base_iri, name)
if self.name.endswith('.ttl'):
self.name = self.name[:-4]
setattr(Namespace, '__init__', namespace_init)
#
# Extending Metadata
# ==================
def keys(self):
"""Return a generator over the annotation properties associated
with this ontology."""
ns = self.namespace
for a in ns.annotation_properties():
if ns._has_data_triple_spod(s=ns.storid, p=a.storid):
yield a
def items(self):
"""Return a generator over (annotation property, value list)
pairs associated with this ontology."""
ns = self.namespace
for a in ns.annotation_properties():
if ns._has_data_triple_spod(s=ns.storid, p=a.storid):
yield a, self.__getattr__(a.name)
def has(self, name):
"""Returns true if `name` is in `self.keys()`."""
return name in set(self.keys())
def __contains__(self, name):
return self.has(name)
def __iter__(self):
return self.keys()
def __setattr__(self, attr, values):
metadata__setattr__save(self, attr, values)
# Make sure that __setattr__() also updates the triplestore
lst = self.__dict__[attr]
if lst:
ns = self.namespace
annot = {
a.name: a for a in owlready2.AnnotationProperty.__subclasses__()}
if attr in annot:
prop = annot[attr]
else:
with ns.ontology:
prop = types.new_class(attr, (owlready2.AnnotationProperty, ))
o, d = owlready2.to_literal(lst[0])
ns._set_data_triple_spod(ns.storid, prop.storid, o, d)
for e in lst[1:]:
o, d = owlready2.to_literal(e)
ns._add_data_triple_spod(ns.storid, prop.storid, o, d)
def __repr__(self):
s = ['Metadata(']
for a, values in self.items():
sep = '\n' + ' ' * (len(a.name) + 4)
s.append(' %s=[%s],' % (a.name, sep.join(repr(v) for v in values)))
s.append(')')
return '\n'.join(s)
metadata__setattr__save = Metadata.__setattr__
setattr(Metadata, 'keys', keys)
setattr(Metadata, 'items', items)
setattr(Metadata, 'has', has)
setattr(Metadata, '__contains__', __contains__)
setattr(Metadata, '__iter__', __iter__)
setattr(Metadata, '__setattr__', __setattr__)
setattr(Metadata, '__repr__', __repr__)
Metadata.__getitem__ = Metadata.__getattr__
Metadata.__setitem__ = Metadata.__setattr__ | PypiClean |
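The module above works by injecting standalone functions into owlready2's classes with `setattr()`, so that existing instances gain the new behaviour. A minimal, self-contained sketch of that pattern (the `Greeter` class and `prefLabel` attribute are illustrative assumptions, not owlready2 API):

```python
# Sketch of the method-injection pattern used above: a plain function is
# attached to an existing class with setattr(), becoming a bound method.

class Greeter:
    def __init__(self, name):
        self.name = name

def get_preferred_label(self):
    """Return prefLabel if set, falling back to the plain name."""
    return getattr(self, 'prefLabel', None) or self.name

# Inject the function, exactly as done for ThingClass/PropertyClass above
setattr(Greeter, 'get_preferred_label', get_preferred_label)

g = Greeter('hour')
print(g.get_preferred_label())  # -> 'hour'
```

Instances created before the injection also pick up the method, since lookup goes through the class.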
/OBP_security_pillar_2-0.0.4.tar.gz/OBP_security_pillar_2-0.0.4/OBP_security_pillar_2/rds/rds_instance_public_access_check.py | import logging
from botocore.exceptions import ClientError
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
def rds_instance_public_access_check(self) -> dict:
"""
Checks every available region for Amazon RDS instances that are publicly accessible.
:param self: an object exposing a boto3 ``session`` attribute
:return: dict describing the compliance result for this control
"""
logger.info(" ---Inside rds :: rds_instance_public_access_check()")
result = True
failReason = ''
offenders = []
control_id = 'Id1.10'
compliance_type = "RDS instance public access check"
description = "Checks whether the Amazon Relational Database Service (RDS) instances are not publicly accessible"
resource_type = "RDS Instance"
risk_level = 'High'
regions = self.session.get_available_regions('rds')
for region in regions:
try:
client = self.session.client('rds', region_name=region)
marker = ''
while True:
if marker == '' or marker is None:
response = client.describe_db_instances()
else:
response = client.describe_db_instances(
Marker=marker
)
for instance in response['DBInstances']:
if instance['PubliclyAccessible']:
result = False
failReason = "RDS instance is publicly accessible"
offenders.append(instance['DBInstanceIdentifier'])
try:
marker = response['Marker']
if marker == '':
break
except KeyError:
break
except ClientError as e:
logger.error("Something went wrong with the region {}: {}".format(region, e))
return {
'Result': result,
'failReason': failReason,
'resource_type': resource_type,
'Offenders': offenders,
'Compliance_type': compliance_type,
'Description': description,
'Risk Level': risk_level,
'ControlId': control_id
} | PypiClean |
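The `while True` loop above follows the AWS `Marker` pagination convention: each page may carry a `Marker` pointing at the next page, and its absence ends the loop. A dependency-free sketch of that pattern against a stub client, so it runs without boto3 or AWS credentials (the stub's page data is made up):

```python
# Marker-based pagination loop, as in rds_instance_public_access_check(),
# exercised against a stub client instead of a real boto3 RDS client.

class StubRDSClient:
    def __init__(self, pages):
        self._pages = pages

    def describe_db_instances(self, Marker=None):
        idx = int(Marker) if Marker else 0
        page = {'DBInstances': self._pages[idx]}
        if idx + 1 < len(self._pages):
            page['Marker'] = str(idx + 1)  # more pages follow
        return page

def find_public_instances(client):
    offenders, marker = [], ''
    while True:
        if marker == '' or marker is None:
            response = client.describe_db_instances()
        else:
            response = client.describe_db_instances(Marker=marker)
        for instance in response['DBInstances']:
            if instance['PubliclyAccessible']:
                offenders.append(instance['DBInstanceIdentifier'])
        marker = response.get('Marker')
        if not marker:  # no Marker key means this was the last page
            break
    return offenders

client = StubRDSClient([
    [{'DBInstanceIdentifier': 'db-a', 'PubliclyAccessible': True}],
    [{'DBInstanceIdentifier': 'db-b', 'PubliclyAccessible': False}],
])
print(find_public_instances(client))  # -> ['db-a']
```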
/KonFoo-3.0.0-py3-none-any.whl/konfoo/categories.py | from __future__ import annotations
import enum
from typing import Any
class Category(enum.Enum):
""" The :class:`Category` class is a is a subclass of the :class:`~enum.Enum`
class provided by the Python standard module :mod:`enum`, and extends its
base class with methods
- to `describe` a specific `Category` member by its `name`, `value` tuple
- to list the `member names` of a `Category`
- to list the `member values` of a `Category`
- to `get` the `value` of the `Category` member by its name
- to `get` the `name` of the `Category` member by is value
- to `get` the `member` of the `Category` by its value
Example:
>>> class Format(Category):
... hour = 'hh'
... minute = 'mm'
... second = 'ss'
>>> Format
<enum 'Format'>
>>> type(Format.hour)
<enum 'Format'>
>>> isinstance(Format, Category)
False
>>> issubclass(Format, Category)
True
>>> isinstance(Format.hour, Format)
True
>>> print(Format.hour)
(hour, hh)
>>> str(Format.hour)
'(hour, hh)'
>>> Format.hour
Format.hour = 'hh'
>>> repr(Format.hour)
"Format.hour = 'hh'"
>>> list(Format)
[Format.hour = 'hh', Format.minute = 'mm', Format.second = 'ss']
>>> [format for format in Format]
[Format.hour = 'hh', Format.minute = 'mm', Format.second = 'ss']
>>> Format.hour.name
'hour'
>>> Format.hour.value
'hh'
>>> Format.hour.describe()
('hour', 'hh')
>>> [member.name for member in Format]
['hour', 'minute', 'second']
>>> Format.names()
['hour', 'minute', 'second']
>>> [member.value for member in Format]
['hh', 'mm', 'ss']
>>> Format.values()
['hh', 'mm', 'ss']
>>> Format['hour'].value
'hh'
>>> Format.get_value('hour')
'hh'
>>> Format('hh').name
'hour'
>>> Format.get_name('hh')
'hour'
>>> Format.get_member('hh')
Format.hour = 'hh'
>>> 'hh' in Format.values()
True
>>> 'hour' in Format.names()
True
"""
def __str__(self) -> str:
""" Return str(self).
Example:
>>> class Format(Category):
... hour = 'hh'
... minute = 'mm'
... second = 'ss'
>>> str(Format.hour)
'(hour, hh)'
"""
return f"({self.name!s}, {self.value!s})"
def __repr__(self) -> str:
""" Return repr(self). See help(type(self)) for accurate signature.
Example:
>>> class Format(Category):
... hour = 'hh'
... minute = 'mm'
... second = 'ss'
>>> repr(Format.hour)
"Format.hour = 'hh'"
"""
return f"{self.__class__.__name__}.{self.name!s} = {self.value!r}"
def describe(self) -> tuple[str, Any]:
""" Returns the `name`, `value` tuple to describe a specific `Category`
member.
Example:
>>> class Format(Category):
... hour = 'hh'
... minute = 'mm'
... second = 'ss'
>>> Format.hour.describe()
('hour', 'hh')
"""
return self.name, self.value
@classmethod
def names(cls) -> list[str]:
""" Returns a list of the member `names` of a `Category`.
Example:
>>> class Format(Category):
... hour = 'hh'
... minute = 'mm'
... second = 'ss'
>>> Format.names()
['hour', 'minute', 'second']
"""
return [member.name for member in cls]
@classmethod
def values(cls) -> list[Any]:
""" Returns a list of the member `values` of a `Category`.
Example:
>>> class Format(Category):
... hour = 'hh'
... minute = 'mm'
... second = 'ss'
>>> Format.values()
['hh', 'mm', 'ss']
"""
return [member.value for member in cls]
@classmethod
def get_name(cls, value: Any) -> str:
""" Returns the `name` of the `Category` member matches the *value*,
or an empty string if no member match.
Example:
>>> class Format(Category):
... hour = 'hh'
... minute = 'mm'
... second = 'ss'
>>> Format.get_name('hh')
'hour'
>>> Format.get_name('dd')
''
"""
for member in cls:
if member.value == value:
return member.name
return str()
@classmethod
def get_value(cls, name: str) -> Any | None:
""" Returns the `value` of the `Category` member matches the *name*,
or :data:`None` if no member match.
Example:
>>> class Format(Category):
... hour = 'hh'
... minute = 'mm'
... second = 'ss'
>>> Format.get_value('hour')
'hh'
>>> Format.get_value('day')
"""
for member in cls:
if member.name == name:
return member.value
return None
@classmethod
def get_member(cls,
value: Any,
default: Category | None = None) -> Category | None:
""" Returns the first `Category` member matches the *value*, or the
specified *default* member if no member match.
Example:
>>> class Format(Category):
... hour = 'hh'
... minute = 'mm'
... second = 'ss'
>>> Format.get_member('hh')
Format.hour = 'hh'
>>> Format.get_member('day', None)
"""
for member in cls:
if member.value == value:
return member
return default | PypiClean |
/certfrtracker-1.6.3.tar.gz/certfrtracker-1.6.3/src/certfrtracker/Router.py | import logging
import feedparser
import requests
import sys
import certfrtracker.NVD as NVD
import certfrtracker.html_parser as html_parser
from certfrtracker.sqlite import Sqlite
from certfrtracker.version_parser import systems_filter
logger = logging.getLogger(__name__)
class Report:
"""
Plain object returned by Router class as list of Report.
It contains all information about each alert.
"""
def __init__(self, alert_id: str, techno: str, version: str, status: str, score: float, publish_date: str,
update_date: str, description: str, source: str, details: str) -> None:
self.alert_id = alert_id  # id of the alert | String | ex: CVE-2022-1234, CERTFR-2022-ALE-004, CERTFR-2022-AVI-004
self.techno = techno  # name of the techno | String | ex: Apache, Postgresql, Openjdk
self.version = version  # version of the techno | String | ex: 1.2.3, 1.2, 1.2.3-beta
self.status = status  # nature of the alert | String | Open or Applicable
self.score = score  # CVSS or NVD score | Float | between 1 and 10
self.publish_date = publish_date  # publish date | String | ex: "2020-06-21"
self.update_date = update_date  # update date | String | ex: "2020-06-21"
self.description = description  # alert description | String
self.source = source  # url of the alert | String
self.details = details  # details of the alert | String
class Router:
"""
This class creates and updates the database from the CERTFR.
It also compares the technos and versions given as arguments against the database and returns the matches as a list of Report objects.
"""
def __init__(self, db_file="CertfrTracker.db"):
self.sqliteCon = Sqlite(db_file)
self.sqliteCon.create_database()
def __del__(self):
self.sqliteCon.close_connection()
def update_database(self, feed_type=""):
"""
Updates the database by scraping the CERTFR and its RSS stream.
:param feed_type: str="" - "NextAlert" or "NextAvis", if null exec both.
"""
if feed_type == "NextAvis":
rss_link = 'https://www.cert.ssi.gouv.fr/avis/feed/'
last_scrap = self.sqliteCon.get_next_scrap(feed_type) # get lastScrap from database
if last_scrap == "Null":
last_scrap = "CERTFR-2015-AVI-001"
elif feed_type == "NextAlert":
rss_link = 'https://www.cert.ssi.gouv.fr/alerte/feed/'
last_scrap = self.sqliteCon.get_next_scrap(feed_type) # get lastScrap from database
if last_scrap == "Null":
last_scrap = "CERTFR-2014-ALE-001"
else:
self.update_database("NextAvis")
self.update_database("NextAlert")
return
# TODO: secure errors
logger.info(f"Starting new HTTPS connection: {rss_link}")
news_feed = feedparser.parse(rss_link)
# since the feed entries are not sorted, we have to manually detect which of these are the latest and
# oldest alerts
data_set = {"date": [], "link": []}
for entry in news_feed.entries:
data_set["date"].append(entry.published_parsed)
data_set["link"].append(entry.link)
latest_feed = data_set["link"][data_set["date"].index(max(data_set["date"]))].split("/")[-2]
oldest_feed = data_set["link"][data_set["date"].index(min(data_set["date"]))].split("/")[-2]
# if the database is not up-to-date
if last_scrap != latest_feed:
self.update_database_using_rss(data_set["link"])
self.sqliteCon.set_next_scrap(feed_type, latest_feed) # set next_scrap to database
# if there's a hole between the oldest rss and the latest from database :
if int(last_scrap.split('-')[1]) <= int(oldest_feed.split('-')[1]) and int(last_scrap.split('-')[-1]) < int(
oldest_feed.split('-')[-1]):
self.update_database_using_crawler(feed_type=feed_type, last_scrap=last_scrap, max_crawl=oldest_feed)
logger.info(f"{feed_type} Database is up-to-date")
def update_database_using_rss(self, urls: [str]):
"""
Updates the database by scrapping the certfr
:param urls: [str] - a list containing all the alerts that are going to be saved.
"""
logger.info(f"Updating via RSS")
for url in urls:
# Download from URL.
r = requests.get(url, timeout=20)
logger.info(f"Starting new HTTPS connection: {url}")
logger.info(f"received status {r.status_code} from {url}")
# Save to database.
logger.info(f"Saving {url} alert into database...")
self.alerts_to_database(url.split("/")[-2], url, r.content)
logger.info(f"RSS Update finished")
def update_database_using_crawler(self, feed_type, last_scrap, max_crawl):
"""
Updates the database by crawling the certfr.
:param feed_type: str - "NextAlert" or "NextAvis"
:param last_scrap : str - the alert to begin the crawl with
:param max_crawl : str - the alert to stop the crawl with
"""
logger.info(f"Updating via Crawler")
if feed_type == "NextAvis":
url = 'https://www.cert.ssi.gouv.fr/avis/'
else:
url = 'https://www.cert.ssi.gouv.fr/alerte/'
begin_year = int(last_scrap[7:11])
begin_alert = int(last_scrap.split('-')[-1])
max_number = 10000
max_year = int(max_crawl[7:11])
for year in range(begin_year, max_year + 1):
if year != begin_year:
begin_alert = 1
# if this is the last year of crawling, it sets the max value to the value of crawl
if year == max_year:
max_number = int(max_crawl.split('-')[-1])
for count in range(begin_alert, max_number):
# Download from URL.
alert_id = f"CERTFR-{str(year)}-{feed_type.upper()[4:7]}-{str(count).zfill(3)}"
r = requests.get(f"{url}{alert_id}", timeout=20)
logger.info(f"Starting new HTTPS connection: {url}{alert_id}")
logger.info(f"received status {r.status_code} from {alert_id}")
# crawling is finished once a 404 is hit in the final year
if r.status_code == 404 and year == max_year:
logger.info(f"Crawler update finished")
return
# go to next year once the actual is done
elif r.status_code == 404:
break
# Save to database.
logger.info(f"Saving {alert_id} alert into database...")
self.alerts_to_database(alert_id, url + alert_id, r.content)
def alerts_to_database(self, alert_id, source, text):
"""
Scrapes the html content given as argument and inserts it into the database.
:param alert_id: str - CERTFR-AAAA-(ALE or AVI)-NNN
:param source: str - URL of the alert
:param text: str - html content returned by the certfr
"""
systems_affectes = html_parser.systems_parser(text)
_date = html_parser.date_parser(text)
summary = html_parser.header_summary_parser(text)
documentation_texte, documentation_liens = html_parser.documentation_parser(text)
# NVD score
CVE = NVD.check_for_CVE(documentation_liens, documentation_texte)
score = 0.0
if CVE != "": # if a CVE has been detected it will replace 0.0 by the actual severity
logger.debug(f"CVE \"{CVE}\" found for {alert_id} alert")
score = NVD.get_NVD_score(CVE)
# get details link
details = html_parser.define_details(score, CVE, documentation_liens)
self.sqliteCon.add_new_alert(alert_id, _date, systems_affectes, summary, score, source, details)
def compare_inventory_with_alerts(self, technos: [str], versions: [str], dates: [str]) -> [Report]:
"""
returns a list of Report by comparing the inventory given as input with the database.
:param technos: [str]
:param versions: [str]
:param dates: [str]
:return: [Report]
"""
reports = []
for techno, version, _date in zip(technos, versions, dates):
reports += self.compare_one_techno_with_alerts(techno, version, _date)
return reports
def compare_one_techno_with_alerts(self, techno: str, version: str, _date: str) -> [Report]:
"""
returns a list of Report by comparing a single techno given as input with the database.
:param techno: str
:param version: str
:param _date: str
:return: [Report]
"""
logger.debug(f"Comparing {techno} {version} with the database...")
reports = []
techno = techno.lower()
if _date == "":
_date = "01-01-2014"
for alert in self.sqliteCon.get_alerts_newer_than(_date):
for line in self.sqliteCon.get_one_row_from_alerts('SystèmesAffectés', alert).split("|"):
result = systems_filter(techno, version, line)
if result is not None:
score_nvd = self.sqliteCon.get_one_row_from_alerts('ScoreNVD', alert)
summary = self.sqliteCon.get_one_row_from_alerts('Résumé', alert)
release_date = self.sqliteCon.get_one_row_from_alerts('Date', alert)
source = self.sqliteCon.get_one_row_from_alerts('Source', alert)
details = self.sqliteCon.get_one_row_from_alerts('Details', alert)
report = Report(alert_id=alert, techno=techno, version=version, status=result,
score=float(score_nvd), update_date=release_date, publish_date=release_date,
description=summary, source=source, details=details)
reports.append(report)
# to remove duplicated from list
no_duplicates = []
logger.debug(f"Removing duplicates in reports")
for report in reports:
token = True
for index, no_duplicate in enumerate(no_duplicates):
# check if alert_id already exists in "no_duplicate"
if report.alert_id == no_duplicate.alert_id:
token = False
# check if the existing alert is "Applicable" or "Open"
# if the status of the existing alert is lower, it will replace it by the higher report
if report.status[0] == "A" and no_duplicate.status[0] == "O":
no_duplicates[index] = report
if token:
no_duplicates.append(report)
return no_duplicates | PypiClean |
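The duplicate-removal step at the end of `compare_one_techno_with_alerts` keeps one report per alert id, preferring an "Applicable" status over an "Open" one. A self-contained sketch of that logic, modelling reports as `(alert_id, status)` tuples instead of `Report` objects (the alert ids below are made up):

```python
# Per-alert deduplication preferring "Applicable" over "Open", mirroring
# the no_duplicates loop in compare_one_techno_with_alerts().

def dedupe_reports(reports):
    kept = {}
    for alert_id, status in reports:
        prev = kept.get(alert_id)
        # "Applicable" (starts with 'A') outranks "Open" (starts with 'O')
        if prev is None or (status[0] == 'A' and prev[0] == 'O'):
            kept[alert_id] = status
    return sorted(kept.items())

reports = [('CERTFR-2022-AVI-001', 'Open'),
           ('CERTFR-2022-AVI-001', 'Applicable'),
           ('CERTFR-2022-ALE-004', 'Open')]
print(dedupe_reports(reports))
# -> [('CERTFR-2022-ALE-004', 'Open'), ('CERTFR-2022-AVI-001', 'Applicable')]
```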
/Dililatum-0.1.tar.gz/Dililatum-0.1/dililatum/character.py |
# Dililatum: a quest system for simple RPGs
# Copyright (C) 2010 Niels Serup
# This file is part of Dililatum.
#
# Dililatum is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Dililatum is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Dililatum. If not, see <http://www.gnu.org/licenses/>.
##[ Name ]## character
##[ Maintainer ]## Niels Serup <ns@metanohi.org>
##[ Description ]## Contains the Character class, controlling how
# people and monsters alike move
from datetime import datetime
import os.path
import pygame
from pygame.locals import *
class Frame:
def __init__(self, img):
self.surf = img
self.width = img.get_width()
self.height = img.get_height()
class EmptyCharacter:
def stop(self):
pass
def walk(self, direction):
pass
class Character:
def __init__(self, world, idname, datafiles, **oargs):
def get(key, default):
try: return oargs[key]
except KeyError: return default
self.id = idname
self.name = get('name', None)
self.world = world
self.data = datafiles
self.frames = {}
self.walking = False
self.step = 0
self.original_direction = get('direction', 'cb')
self.direction = self.original_direction[:]
self.position = get('position', world.get_center())
self.duration = get('duration', 200) * 1000
self.movement = get('movement', dict(
cb=(0, 3),
ct=(0, -3),
lm=(-3, 0),
rm=(3, 0),
lb=(-2.12, 2.12),
rt=(2.12, -2.12),
lt=(-2.12, -2.12),
rb=(2.12, 2.12)
))
self.use_shadow = get('shadow', True)
self.shadow_details = get('shadowdetails', {})
self.default_direction = 'cb'
self.default_position = (
self.world.size[0] / 2,
int(self.world.size[1] * 0.9))
self.original_position = get('position', self.default_position[:])
self.position = self.position[:]
self.modified_position = self.position[:]
self.visible = get('visible', False)
self.create()
if self.use_shadow:
self.create_shadow()
def get_bottom_area(self):
return self.modified_position[0] + self.get_size(self.position)[0] / 2, self.modified_position[1]
def convert_files_to_surfaces(self, *files):
frames = []
for f in files:
img = self.world.load_image(f, True)
frames.append(Frame(img))
return frames
def create(self):
for x in 'lt', 'ct', 'rt', 'lm', 'rm', 'lb', 'cb', 'rb':
if x in self.data[1]: # Files and directories
files = self.data[1][x][1] # 1 = files
files = [os.path.join(self.data[0], x, t) for t in files]
self.frames[x] = \
self.convert_files_to_surfaces(*files)
if len(self.frames) < 8:
self.fill_out_remaining_directions()
maxw = 0
maxh = 0
for x in self.frames.values():
for y in x:
if y.width > maxw:
maxw = y.width
if y.height > maxh:
maxh = y.height
self.maxsize = (maxw, maxh)
try:
self.head = \
self.world.load_image(os.path.join(self.data[0],
'head.png'), True)
except Exception:
self.head = None
def fill_out_remaining_directions(self):
tries = {
'lt': ('ct', 'lm', 'rt', 'lb', 'rm', 'cb', 'rb'),
'rt': ('ct', 'rm', 'lt', 'rb', 'lm', 'cb', 'lb'),
'ct': ('lt', 'rt', 'lm', 'rm', 'lb', 'rb', 'cb'),
'cb': ('lb', 'rb', 'lm', 'rm', 'lt', 'rt', 'ct'),
'lm': ('lt', 'lb', 'ct', 'cb', 'rt', 'rb', 'rm'),
'rm': ('rt', 'rb', 'ct', 'cb', 'lt', 'lb', 'lm'),
'lb': ('cb', 'lm', 'rb', 'lt', 'rm', 'ct', 'rt'),
'rb': ('cb', 'rm', 'lb', 'rt', 'lm', 'ct', 'lt')
}
for x in 'lt', 'ct', 'rt', 'lm', 'rm', 'lb', 'cb', 'rb':
if x not in self.frames:
ok = False
i = 0
while not ok:
try:
self.frames[x] = self.frames[tries[x][i]]
ok = True
break
except Exception:
if i == 7:
break
i += 1
if not ok:
self.frames[x] = None
def create_shadow(self):
charsurf = self.frames['cb'][0]
size = (charsurf.width - charsurf.width / 15, charsurf.height / 15)
ssize = [s * 3 for s in size]
rect = pygame.Rect((0, 0), ssize)
surf = pygame.Surface(ssize).convert_alpha()
surf.fill(pygame.Color(0, 0, 0, 0))
pygame.draw.ellipse(surf, pygame.Color(0, 0, 0, 200), rect)
surf = pygame.transform.smoothscale(surf, size)
self.shadow = surf
self.shadow_size = surf.get_size()
def get_size(self, pos, img=None):
resize = self.world.current_place.char_size(pos)
if img is None:
img = self.get_frame()
w = int(img.width * resize)
h = int(img.height * resize)
return w, h, resize
def get_frame(self):
if self.walking:
return self.frames[self.direction][self.step]
else:
return self.frames[self.direction][0]
def next_step(self):
self.step = (self.step + 1) % len(self.frames[self.direction])
def stop(self):
self.walking = False
def is_reverse(self, direction):
a = self.direction
b = direction
return (a == 'cb' and b in ('ct', 'rt', 'lt')) \
or (a == 'ct' and b in ('cb', 'rb', 'lb')) \
or (a == 'lm' and b in ('rm', 'rt', 'rb')) \
or (a == 'rm' and b in ('lm', 'lt', 'lb')) \
or (a == 'lt' and b == 'rb') \
or (a == 'rt' and b == 'lb') \
or (a == 'lb' and b == 'rt') \
or (a == 'rb' and b == 'lt')
def walk(self, direction):
w, h, resize = self.get_size(self.position)
size = w, h
maxsize = [s * resize for s in self.maxsize]
mov = self.movement[direction]
pos = self.position
pos_ok = False
scale = 1.0
while not pos_ok:
npos = [int(pos[i] + mov[i] * resize * scale *
self.world.size[i] / 100.0) for i in range(2)]
mpos = self.world.current_place.char_pos(npos)
pos_ok = self.world.current_place.pos_ok(mpos, maxsize)
scale -= .1
if scale < 0.4:
break
if pos_ok:
conti = True
for o in self.world.current_place.objects:
t = o.check_if_action_needed(npos, maxsize)
if not t: conti = False
if conti:
self.direction = direction
self.position = npos
self.modified_position = mpos
self.walking = True
else:
self.stop()
def set_position(self, pos):
self.position = pos
self.modified_position = self.world.current_place.char_pos(pos)
def reset_position(self):
self.position = self.original_position[:]
self.modified_position = self.original_position[:]
self.direction = self.original_direction
w, h, resize = self.get_size(self.modified_position)
for o in self.world.current_place.objects:
o.check_if_action_needed(self.position, (w, h))
def say(self, msg, **oargs):
if self.name is not None:
msg = self.name + ': ' + msg
oargs = oargs.copy()
oargs['head'] = self.head
self.world.show_message(msg, **oargs)
def ask(self, question, *answers, **oargs):
if self.name is not None:
question = self.name + ': ' + question
oargs = oargs.copy()
oargs['head'] = self.head
self.world.show_question(question, *answers, **oargs)
def hide(self):
self.visible = False
def show(self):
self.visible = True
def draw(self, surf=None):
if not self.visible: return False
img = self.get_frame()
pos = self.modified_position
w, h, r = self.get_size(pos, img)
img = pygame.transform.smoothscale(img.surf, (w, h))
if surf is None:
surf = self.world
if self.use_shadow:
ssize = [int(s * r) for s in self.shadow_size]
shad = pygame.transform.smoothscale(
self.shadow, ssize)
shad_pos = (pos[0] - w / 2 + (w - ssize[0]) / 2, pos[1] - h / 15)
surf.blit(shad, shad_pos)
surf.blit(img, (pos[0] - w / 2, pos[1] - h)) | PypiClean |
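`fill_out_remaining_directions` above fills a missing direction with the first available substitute from a per-direction preference table. A pure-Python sketch of that fallback lookup, with a shortened preference table and illustrative frame values instead of pygame surfaces:

```python
# Fallback lookup as in fill_out_remaining_directions(): a missing
# direction borrows the first candidate direction that has frames.

TRIES = {
    'lt': ('ct', 'lm', 'rt', 'lb', 'rm', 'cb', 'rb'),
    'cb': ('lb', 'rb', 'lm', 'rm', 'lt', 'rt', 'ct'),
}

def fill_missing(frames, tries):
    for direction, candidates in tries.items():
        if direction not in frames:
            frames[direction] = next(
                (frames[c] for c in candidates if c in frames), None)
    return frames

frames = {'ct': 'frames-ct', 'lb': 'frames-lb'}
fill_missing(frames, TRIES)
print(frames['lt'])  # -> 'frames-ct' (first candidate that exists)
print(frames['cb'])  # -> 'frames-lb'
```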
/MotifAnalysisPackage-0.0.1.tar.gz/MotifAnalysisPackage-0.0.1/motifStatistics/html.py |
#################################################################################################
##### LIBRARIES #################################################################################
#################################################################################################
import HTML
#################################################################################################
##### FUNCTIONS #################################################################################
#################################################################################################
def printHTML(logoIndexList,genesIndex,headerList,logoLocation,resultsMatrix,outputFileName):
"""Print results to a html file.
Keyword arguments:
logoIndexList -- List of indexes of the results Matrix that contains the factor names.
genesIndex -- Index of the results Matrix that contains the gene list.
headerList -- List of header titles.
logoLocation -- Location of the logo graphs in png.
resultsMatrix -- Matrix containing the results to be printed.
outputFileName -- Path + name of the output html file.
Returns:
outputFileName -- File containing the html results.
"""
# Initial table
table = [headerList]
# Adding data to table
for vec in resultsMatrix:
resVec = []
for i in range(0,len(vec)):
if(i in logoIndexList):
resVec.append(str(vec[i]))
resVec.append("<img src='"+logoLocation+str(vec[i])+".png' width=200 >")
elif(i == genesIndex):
geneL = "+".join(vec[i])
resVec.append("<a href='http://biit.cs.ut.ee/gprofiler/index.cgi?significant=1&sort_by_structure=1&ordered_query=0&organism=mmusculus&query="+geneL+"' > GO Enrichment </a>")
else: resVec.append(str(vec[i]))
table.append(resVec)
# Printing table
htmlcode = HTML.table(table)
outputFile = open(outputFileName,"w")
for line in htmlcode: outputFile.write(line)
outputFile.close()
return 0 | PypiClean |
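Per row, `printHTML` turns factor-name columns into a name cell plus an `<img>` logo cell, and the gene-list column into a g:Profiler link. A dependency-free sketch of that per-row assembly, without the `HTML` library (the paths, row values, and the shortened link URL are illustrative):

```python
# Per-row cell assembly mirroring the inner loop of printHTML().

def build_row(vec, logo_index_list, genes_index, logo_location):
    cells = []
    for i, value in enumerate(vec):
        if i in logo_index_list:
            cells.append(str(value))
            cells.append("<img src='%s%s.png' width=200 >" % (logo_location, value))
        elif i == genes_index:
            gene_l = '+'.join(value)  # genes joined with '+' for the query URL
            cells.append("<a href='...query=%s' > GO Enrichment </a>" % gene_l)
        else:
            cells.append(str(value))
    return cells

row = build_row(['MotifA', ['Gata1', 'Sox2'], 0.01], [0], 1, 'logos/')
print(row[1])  # -> "<img src='logos/MotifA.png' width=200 >"
```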
/FAaDO-0.0.4.tar.gz/FAaDO-0.0.4/fado/runner/ml/model/built_in/mnist_conv_torch.py | import logging
import numpy as np
import os
import torch.nn as nn
import torch.nn.parallel
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader
from fado.cli.arguments.arguments import FADOArguments
from fado.runner.ml.model.fado_module import FADOModule
fado_args = FADOArguments()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f'Using device - {device}')
logger = logging.LoggerAdapter(logging.getLogger("fado"), extra={'node_id': 'model'})
class MnistConvTorch(FADOModule):
def __init__(self):
self.model = build_model()
def get_parameters(self):
""" Retrieve the model's weights
Returns:
list: a list containing a whole state of the module
"""
weights = [p.data for p in self.model.parameters()]
return [w.cpu().numpy() for w in weights]
def set_parameters(self, new_weights):
""" Assign weights to the model
Args:
new_weights (list):
a list containing a whole state of the module
"""
i = 0
for p in self.model.parameters():
p.data = torch.from_numpy(new_weights[i])
i += 1
def _train_dataloader(self, x_train, y_train):
train_dataset = TensorDataset(x_train, y_train)
train_loader = DataLoader(train_dataset, batch_size=fado_args.batch_size, shuffle=True)
optimizer = optim.SGD(self.model.parameters(), lr=fado_args.learning_rate)
for epoch in range(fado_args.epochs):
for inputs, targets in train_loader:
optimizer.zero_grad()
outputs = self.model(inputs)
loss = self.model.criterion(outputs, targets)
loss.backward()
optimizer.step()
def train(self, x, y):
""" Train the local model for one round
This can correspond to training for multiple epochs, or a single epoch.
Returns:
tuple: final weights, train loss, train accuracy
"""
self.model.to(device)
self.model.train()
x_train = torch.tensor(x).to(device)
y_train = torch.tensor(y).to(device)
self._train_dataloader(x_train, y_train)
y_pred = self.model(x_train)
# Calculate the cross-entropy loss
loss = self.model.criterion(y_pred, y_train)
# Calculate the accuracy
_, predictions = torch.max(y_pred, dim=1)
correct = (predictions == y_train).float()
accuracy = torch.mean(correct)
return self.get_parameters(), loss, accuracy
def evaluate(self, x, y):
# switch to evaluate mode
self.model.eval()
# Convert numpy arrays to PyTorch tensors
x = torch.from_numpy(x)
y = torch.from_numpy(y).long()
y_pred = self.model(x)
# Calculate the cross-entropy loss
loss = self.model.criterion(y_pred, y)
# Calculate the accuracy
_, predictions = torch.max(y_pred, dim=1)
correct = (predictions == y).float()
accuracy = torch.mean(correct)
return loss.item(), accuracy.item()
def build_model():
return MnistConv()
class MnistConv(nn.Module):
def __init__(self):
super(MnistConv, self).__init__()
self.conv1 = nn.Conv2d(1, 16, 5, padding=2)
self.conv2 = nn.Conv2d(16, 32, 5, padding=2)
self.fc1 = nn.Linear(32 * 7 * 7, 512)
self.fc2 = nn.Linear(512, 62)
self.criterion = nn.CrossEntropyLoss().to(device)
def forward(self, x, noise=torch.Tensor()):
x = x.reshape(-1, 1, 28, 28)
x = F.max_pool2d(F.relu(self.conv1(x)), 2)
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, 32 * 7 * 7) # reshape Variable
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
class MnistConvLarge(nn.Module):
def __init__(self):
super(MnistConvLarge, self).__init__()
self.conv1 = nn.Conv2d(1, 32, 5, padding=2)
self.conv2 = nn.Conv2d(32, 64, 5, padding=2)
self.fc1 = nn.Linear(64 * 7 * 7, 1024)
self.fc2 = nn.Linear(1024, 62)
self.criterion = nn.CrossEntropyLoss().to(device)
self.optimizer = optim.SGD(self.parameters(), lr=fado_args.learning_rate)
def forward(self, x, noise=torch.Tensor()):
x = x.reshape(-1, 1, 28, 28)
x = F.max_pool2d(F.relu(self.conv1(x)), 2)
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, 64 * 7 * 7) # reshape Variable
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x


class AverageMeter(object):
    """Computes and stores the average and current value

    Imported from https://github.com/pytorch/examples/blob/master/imagenet/main.py#L247-L262
    """
    def __init__(self):
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count


def accuracy(output, target):
    _, predictions = torch.max(output, dim=1)
    correct = (predictions == target).float()
    acc = torch.mean(correct)
    return acc | PypiClean
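The `AverageMeter` helper above is a generic running-average accumulator, so it can be exercised without PyTorch. A minimal self-contained sketch of how it is typically used (the class body is copied from the file above; the loss values are made up for illustration):

```python
class AverageMeter(object):
    """Computes and stores the average and current value."""
    def __init__(self):
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count


# Track a weighted running average, e.g. of per-batch loss:
meter = AverageMeter()
meter.update(0.5, n=10)   # batch of 10 samples with mean loss 0.5
meter.update(1.0, n=30)   # batch of 30 samples with mean loss 1.0
print(meter.avg)          # (0.5*10 + 1.0*30) / 40 = 0.875
```

Passing `n` as the batch size is what makes the average sample-weighted rather than batch-weighted.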
/Mathics-1.0.tar.gz/Mathics-1.0/mathics/web/media/js/utils.js | var HTML_ENTITIES = $H({
    'amp': 38,
    'gt': 62,
    'lt': 60,
    'quot': 34,
    'nbsp': 160,
    'ndash': 8211,
    'mdash': 8212,
    'euro': 8364,
    'copy': 169,
    'trade': 8482,
    'hellip': 8230,
    'ldquo': 8220,
    'rdquo': 8221,
    'bdquo': 8222,
    'reg': 174,
    'larr': 8592,
    'rarr': 8594
});

var ElementMethods = {
    setText: function(element, text) {
        element.deleteChildNodes();
        element.appendChild(document.createTextNode(text));
        return element;
    },

    getText: function(element) {
        if (element.nodeName.toLowerCase() == 'textarea')
            // in case of a textarea
            return element.value;
        else
            return element.childNodes[0].data;
    },

    deleteChildNodes: function(element) {
        while (element.hasChildNodes())
            element.removeChild(element.lastChild);
        return element;
    },

    setClassName: function(element, className, yesno) {
        if (yesno)
            element.addClassName(className);
        else
            element.removeClassName(className);
        return element;
    },

    isShown: function(element, recursionDepth) {
        return element.ancestors().invoke('visible').all();
    },

    tryFocus: function(element) {
        if (element.isShown()) {
            try {
                element.focus();
                return true;
            } catch (e) {
                return false;
            }
        } else
            return false;
    },

    deleteElement: function(element) {
        if (element.parentNode)
            element.parentNode.removeChild(element);
    },

    updateDOM: function(element, content) {
        // convert named entities to numerical entities before calling update
        content = content.gsub(/&([a-zA-Z]+);/, function(match) {
            var code = HTML_ENTITIES.get(match[1]);
            return "&#" + code + ";";
        });
        element.update(content);
    },

    scrollIntoView: function(element) {
        var offset = element.cumulativeOffset();
        window.scrollTo(offset.left, offset.top);
    },

    setEnabled: function(element, enabled) {
        if (element.enable && element.disable) {
            if (enabled)
                element.enable();
            else
                element.disable();
        }
        if (enabled)
            element.removeClassName('disabled');
        else
            element.addClassName('disabled');
        element.childElements().each(function(child) {
            child.setEnabled(enabled);
        });
    }
};

Element.addMethods(ElementMethods);

Object.extend(Object, {
    isDOMElement: function(object) {
        return object && !!object.nodeType;
    }
});

var Publisher = Class.create({
    initialize: function() {
        this.events = $H();
    },

    fire: function(event) {
        var subscribers = this.events.get(event) || $A();
        var args = $A(arguments).slice(1);
        subscribers.each(function(subscriber) {
            subscriber.apply(this, args);
        }.bind(this));
    },

    observe: function(event, subscriber) {
        var existing = this.events.get(event);
        if (!existing)
            this.events.set(event, $A([subscriber]));
        else
            existing.push(subscriber);
    }
});

function exists(type) {
    return type != "undefined";
}

function dateToStr(date) {
    return date.getFullYear() + '-' + (date.getMonth() + 1) + '-' + date.getDate();
}

function dateFromJSON(json) {
    if (json) {
        var parts = json.split('-');
        return new Date(parts[0], parts[1] - 1, parts[2]);
    } else
        return null;
}

var lastCanceledLog = 0;

function log(message) {
    var now = new Date().getTime();
    if (now - lastCanceledLog < 100) {
        lastCanceledLog = new Date().getTime();
        return false;
    }
    message += '\n\n---\n' + 'Arguments: ' + Object.inspect($A(log.caller.arguments));
    if (!window.confirm(message))
        lastCanceledLog = new Date().getTime();
    return true;
}

function $E(tag, properties) {
    var children;
    if (Object.isDOMElement(properties)) {
        children = $A(arguments).slice(1);
        properties = {};
    } else
        children = $A(arguments).slice(2);
    if (tag == 'a' && properties.href == null)
        properties.href = 'javascript:;';
    var element = new Element(tag, properties);
    children.each(function(child) {
        if (child)
            element.appendChild(child);
    });
    return element;
}

function $T(text) {
    return document.createTextNode(text);
}

function submitForm(form, url, onSuccess, extraData) {
    var params = {};
    form = $(form);
    form.select('input').each(function(input) {
        params[input.name] = input.value;
    });
    form.select('input[type="text"]').each(function(input) {
        input.blur();
    });
    form.select('input, button').each(function(input) {
        input.disable();
    });
    if (!extraData)
        extraData = {};
    params = $H(params).merge(extraData);
    new Ajax.Request(url, {
        method: 'post',
        parameters: params,
        onSuccess: function(transport) {
            var response = transport.responseText.evalJSON();
            form.select('ul.errorlist').invoke('deleteElement');
            var errors = false;
            var errorFocus = false;
            if (response.form.fieldErrors)
                $H(response.form.fieldErrors).each(function(pair) {
                    errors = true;
                    var errorlist = $E('ul', {'class': 'errorlist'});
                    var input = form.select('[name="' + pair.key + '"]')[0];
                    pair.value.each(function(msg) {
                        errorlist.appendChild($E('li', $T(msg)));
                    });
                    input.insert({before: errorlist});
                    if (!errorFocus)
                        errorFocus = input;
                });
            if (response.form.generalErrors) {
                var errorlist = $E('ul', {'class': 'errorlist'});
                response.form.generalErrors.each(function(msg) {
                    errors = true;
                    errorlist.appendChild($E('li', $T(msg)));
                });
                form.insert({top: errorlist});
                if (!errorFocus) {
                    var firstInput = form.select('input')[0];
                    if (firstInput)
                        errorFocus = firstInput;
                }
            }
            form.select('input').each(function(input) {
                input.enable();
            });
            if (errorFocus) {
                errorFocus.activate();
            }
            if (!errors) {
                response.values = params;
                onSuccess(response);
            }
        },
        onComplete: function() {
            form.select('input').each(function(input) {
                input.enable();
            });
        }
    });
}