hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d9bd405f70dc075c427e331f093846b5cf2a686c | 831 | py | Python | src/openprocurement/tender/cfaua/adapters/tender/serializable/complaintperiod.py | pontostroy/api | 5afdd3a62a8e562cf77e2d963d88f1a26613d16a | [
"Apache-2.0"
] | 3 | 2020-03-13T06:44:23.000Z | 2020-11-05T18:25:29.000Z | src/openprocurement/tender/cfaua/adapters/tender/serializable/complaintperiod.py | pontostroy/api | 5afdd3a62a8e562cf77e2d963d88f1a26613d16a | [
"Apache-2.0"
] | 2 | 2021-03-25T23:27:04.000Z | 2022-03-21T22:18:15.000Z | src/openprocurement/tender/cfaua/adapters/tender/serializable/complaintperiod.py | scrubele/prozorro-testing | 42b93ea2f25d8cc40e66c596f582c7c05e2a9d76 | [
"Apache-2.0"
] | 3 | 2020-10-16T16:25:14.000Z | 2021-05-22T12:26:20.000Z | # src/openprocurement.tender.openua/openprocurement/tender/openua/models.py:377
from openprocurement.api.adapters import Serializable
from openprocurement.tender.core.utils import calculate_complaint_business_date
from openprocurement.tender.openua.constants import COMPLAINT_SUBMIT_TIME
from openprocurement.api.models import Period
from schematics.types.compound import ModelType
class SerializableTenderComplaintPeriod(Serializable):
serialized_name = "complaintPeriod"
serialized_type = ModelType(Period)
def __call__(self, obj, *args, **kwargs):
complaintPeriod_class = obj._fields["tenderPeriod"]
endDate = calculate_complaint_business_date(obj.tenderPeriod.endDate, -COMPLAINT_SUBMIT_TIME, obj)
return complaintPeriod_class(dict(startDate=obj.tenderPeriod.startDate, endDate=endDate))
| 48.882353 | 106 | 0.821901 | 90 | 831 | 7.377778 | 0.488889 | 0.126506 | 0.121988 | 0.090361 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004021 | 0.102286 | 831 | 16 | 107 | 51.9375 | 0.886059 | 0.092659 | 0 | 0 | 0 | 0 | 0.035904 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.416667 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
d9c599c3fcec80f519dffe5eb66e8d9ff1b168f2 | 342 | py | Python | setup.py | l-tang/pocmon | 237327dce73ef1a7a0e313af33833e821da5bd23 | [
"MIT"
] | 1 | 2021-05-22T02:00:40.000Z | 2021-05-22T02:00:40.000Z | setup.py | l-tang/pocmon | 237327dce73ef1a7a0e313af33833e821da5bd23 | [
"MIT"
] | null | null | null | setup.py | l-tang/pocmon | 237327dce73ef1a7a0e313af33833e821da5bd23 | [
"MIT"
] | null | null | null | from setuptools import setup
setup(
name='pocmon',
version='0.0.1',
description='PoC toolbox',
long_description='PoC toolbox',
url='http://github.com/l-tang/pocmon',
author='Li Tang',
author_email='litang1025@gmail.com',
license='MIT',
packages=['pocmon'],
zip_safe=False,
install_requires=[],
)
| 20.117647 | 42 | 0.640351 | 42 | 342 | 5.119048 | 0.761905 | 0.130233 | 0.195349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025362 | 0.192982 | 342 | 16 | 43 | 21.375 | 0.753623 | 0 | 0 | 0 | 0 | 0 | 0.292398 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.071429 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d9c73c4c812c32ded62c682e4d1fe7d225a8b6f7 | 10,781 | py | Python | doc/exampleVisualization.py | Kenneth-T-Moore/gaussian-wake | 49fbd695519615f0a3d12caf1e3658e02029fb1d | [
"Apache-2.0"
] | null | null | null | doc/exampleVisualization.py | Kenneth-T-Moore/gaussian-wake | 49fbd695519615f0a3d12caf1e3658e02029fb1d | [
"Apache-2.0"
] | null | null | null | doc/exampleVisualization.py | Kenneth-T-Moore/gaussian-wake | 49fbd695519615f0a3d12caf1e3658e02029fb1d | [
"Apache-2.0"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from openmdao.api import Problem, pyOptSparseDriver
from wakeexchange.OptimizationGroups import OptAEP
from wakeexchange.gauss import gauss_wrapper, add_gauss_params_IndepVarComps
from wakeexchange.floris import floris_wrapper, add_floris_params_IndepVarComps
if __name__ == "__main__":
turbineX = np.array([1164.7, 947.2, 1682.4, 1464.9, 1982.6, 2200.1])
turbineY = np.array([1024.7, 1335.3, 1387.2, 1697.8, 2060.3, 1749.7])
nTurbines = turbineX.size
# theta = np.arctan((turbineY[5]-turbineY[0])/(turbineX[5]-turbineX[0]))*180./np.pi
#
# print theta
# quit()
rotor_diameter = 126.4
hub_height = 90.0
axial_induction = 1.0/3.0
CP = 0.7737/0.944 * 4.0 * 1.0/3.0 * np.power((1 - 1.0/3.0), 2)
# CP =0.768 * 4.0 * 1.0/3.0 * np.power((1 - 1.0/3.0), 2)
CT = 4.0*axial_induction*(1.0-axial_induction)
generator_efficiency = 0.944
# yaw_init_gauss = np.array([1.32217615e+01, 1.32140816e+01, 1.32935750e+01, 1.32968679e+01, 7.30138383e-04, 7.32602720e-04])
# for tuning with n_std_dev #1
# yaw_init_gauss = 0*np.array([1.26140798e+01, 1.26123736e+01, 1.17078481e+01, 1.17089869e+01, -1.22503914e-07, -1.31933646e-06])
yaw_init_gauss = 0*np.array([13.74293166, 13.74283174, 14.8380152, 14.83806931, 0., 0.])
# yaw_init_gauss = np.array([15.49, 15.49, 19.24, 19.24, 0.00, 0.00]) # power
# yaw_init_gauss = np.array([13.73, 13.73, 14.82, 14.82, 0.00, 0.00]) # linear
# yaw_init_gauss = np.array([19.00, 19.00, 23.80, 23.80, 0.05, 0.00])
# for tuning with n_std_dev #2
# yaw_init_gauss = np.array([1.29093304e+01, 1.29077015e+01, 1.19463208e+01, 1.19474608e+01, 5.61165053e-07, 8.01354835e-07])
yaw_init_floris = np.array([1.56406883e+01, 1.56406883e+01, 1.73139554e+01, 1.73139554e+01, 1.49265921e-05, 1.49265921e-05])
# Define turbine characteristics
axialInduction = np.zeros(nTurbines) + axial_induction
rotorDiameter = np.zeros(nTurbines) + rotor_diameter
generatorEfficiency = np.zeros(nTurbines) + generator_efficiency
yaw_floris = np.zeros(nTurbines) + yaw_init_floris
yaw_gauss = np.zeros(nTurbines) + yaw_init_gauss
Ct = np.zeros(nTurbines) + CT
Cp = np.zeros(nTurbines) + CP
# Define site measurements
nDirections = 1
wd_polar = 0.523599*180./np.pi
wind_direction = 270.-wd_polar
wind_speed = 8. # m/s
air_density = 1.1716
# set up sampling space
res = 400
# samplesX = np.linspace(min(turbineX)-2.*rotor_diameter, max(turbineX)+2.*rotor_diameter, res)
# samplesY = np.linspace(min(turbineY)-2.*rotor_diameter, max(turbineY)+2.*rotor_diameter, res)
samplesX = np.linspace(0, 3000, res)
samplesY = np.linspace(0, 3000, res)
samplesX, samplesY = np.meshgrid(samplesX, samplesY)
samplesX = samplesX.flatten()
samplesY = samplesY.flatten()
samplesZ = np.ones(samplesX.shape)*hub_height
# initialize problems
gauss_prob = Problem(root=OptAEP(nTurbines=nTurbines, nDirections=nDirections, use_rotor_components=False,
wake_model=gauss_wrapper, wake_model_options={'nSamples': res**2}, datasize=0,
params_IdepVar_func=add_gauss_params_IndepVarComps, force_fd=True,
params_IndepVar_args={}))
wake_model_options = {'differentiable': True, 'use_rotor_components': False, 'nSamples': res**2, 'verbose': False}
floris_prob = Problem(root=OptAEP(nTurbines=nTurbines, nDirections=nDirections, use_rotor_components=False,
wake_model=floris_wrapper, wake_model_options=wake_model_options, datasize=0,
differentiable=True, params_IdepVar_func=add_floris_params_IndepVarComps,
params_IndepVar_args={}))
probs = [gauss_prob, floris_prob]
names = ['gauss', 'floris']
for indx, prob in enumerate(probs):
prob.setup()
        if names[indx] == 'gauss':
# print gauss_prob.root.AEPgroup.unknowns.keys()
# gauss_prob['model_params:ke'] = 0.052
# gauss_prob['model_params:spread_angle'] = 6.
# gauss_prob['model_params:rotation_offset_angle'] = 2.0
# gauss_prob['model_params:ke'] = 0.050755
# gauss_prob['model_params:spread_angle'] = 11.205766
# gauss_prob['model_params:rotation_offset_angle'] = 3.651790
# gauss_prob['model_params:n_std_dev'] = 9.304371
# gauss_prob['model_param s:ke'] = 0.051028
# gauss_prob['model_params:spread_angle'] = 11.862988
# gauss_prob['model_params:rotation_offset_angle'] = 3.594340
# gauss_prob['model_params:n_std_dev'] = 12.053127
# using ky with n_std_dev = 6
# gauss_prob['model_params:ke'] = 0.051115
# gauss_prob['model_params:spread_angle'] = 5.967284
# gauss_prob['model_params:rotation_offset_angle'] = 3.597926
# gauss_prob['model_params:ky'] = 0.494776
# using ky with n_std_dev = 3
# gauss_prob['model_params:ke'] = 0.051079
# gauss_prob['model_params:spread_angle'] = 0.943942
# gauss_prob['model_params:rotation_offset_angle'] = 3.579857
# gauss_prob['model_params:ky'] = 0.078069
# for decoupled ky with n_std_dev = 4
# gauss_prob['model_params:ke'] = 0.051145
# gauss_prob['model_params:spread_angle'] = 2.617982
# gauss_prob['model_params:rotation_offset_angle'] = 3.616082
# gauss_prob['model_params:ky'] = 0.211496
# for decoupled ky with n_std_dev = 6 and double diameter wake at rotor pos
# gauss_prob['model_params:ke'] = 0.051030
# gauss_prob['model_params:spread_angle'] = 1.864696
# gauss_prob['model_params:rotation_offset_angle'] = 3.362729
# gauss_prob['model_params:ky'] = 0.193011
# for integrating for decoupled ky with n_std_dev = 4, error = 1034.3
# gauss_prob['model_params:ke'] = 0.007523
# gauss_prob['model_params:spread_angle'] = 1.876522
# gauss_prob['model_params:rotation_offset_angle'] = 3.633083
# gauss_prob['model_params:ky'] = 0.193160
# for decoupled ke with n_std_dev=4, linear, not integrating
# gauss_prob['model_params:ke'] = 0.051190
# gauss_prob['model_params:spread_angle'] = 2.619202
# gauss_prob['model_params:rotation_offset_angle'] = 3.629337
# gauss_prob['model_params:ky'] = 0.211567
# for integrating for decoupled ky with n_std_dev = 4, error = 1034.3, linear, integrating
# gauss_prob['model_params:ke'] = 0.008858
# gauss_prob['model_params:spread_angle'] = 0.000000
# gauss_prob['model_params:rotation_offset_angle'] = 4.035276
# gauss_prob['model_params:ky'] = 0.199385
# for decoupled ky with n_std_dev = 4, error = 1332.49, not integrating, power law
# gauss_prob['model_params:ke'] = 0.051360
# gauss_prob['model_params:rotation_offset_angle'] = 3.197348
# gauss_prob['model_params:Dw0'] = 1.804024
# gauss_prob['model_params:m'] = 0.0
# for decoupled ky with n_std_dev = 4, error = 1630.8, with integrating, power law
# gauss_prob['model_params:ke'] = 0.033165
# gauss_prob['model_params:rotation_offset_angle'] = 3.328051
# gauss_prob['model_params:Dw0'] = 1.708328
# gauss_prob['model_params:m'] = 0.0
# for decoupled ky with n_std_dev = 4, error = 1140.59, not integrating, power law for expansion,
# linear for yaw
gauss_prob['model_params:ke'] = 0.050741
gauss_prob['model_params:rotation_offset_angle'] = 3.628737
gauss_prob['model_params:Dw0'] = 0.846582
gauss_prob['model_params:ky'] = 0.207734
# for decoupled ky with n_std_dev = 4, error = 1058.73, integrating, power law for expansion,
# linear for yaw
# gauss_prob['model_params:ke'] = 0.016129
# gauss_prob['model_params:rotation_offset_angle'] = 3.644356
# gauss_prob['model_params:Dw0'] = 0.602132
# gauss_prob['model_params:ky'] = 0.191178
gauss_prob['model_params:integrate'] = False
gauss_prob['model_params:spread_mode'] = 'power'
gauss_prob['model_params:n_std_dev'] = 4
prob['yaw0'] = yaw_gauss
else:
prob['yaw0'] = yaw_floris
prob['wsPositionX'] = np.copy(samplesX)
prob['wsPositionY'] = np.copy(samplesY)
prob['wsPositionZ'] = np.copy(samplesZ)
# print prob['wsPositionX'].shape, prob['wsPositionY'], prob['wsPositionZ']
# quit()
prob['turbineX'] = turbineX
prob['turbineY'] = turbineY
prob['rotorDiameter'] = rotorDiameter
prob['axialInduction'] = axialInduction
prob['generatorEfficiency'] = generatorEfficiency
prob['air_density'] = air_density
prob['Cp_in'] = Cp
prob['Ct_in'] = Ct
prob['windSpeeds'] = np.array([wind_speed])
prob['windDirections'] = np.array([wind_direction])
prob.run()
for indx, prob in enumerate(probs):
        print(names[indx], prob['yaw0'])
samplesX = samplesX.reshape((res, res))
samplesY = samplesY.reshape((res, res))
for indx, prob in enumerate(probs):
        print(names[indx], prob['wtPower0'], np.sum(prob['wtPower0']), min(prob['wsArray0']))
plt.figure()
color = prob['wsArray0']
# color[color==wind_speed] *= 0
# print color[color>wind_speed]
# print samplesX.shape, samplesY.shape, color.shape
color = color.reshape((res, res))
# color maps that work better: jet, gnuplot2, coolwarm
plt.pcolormesh(samplesX, samplesY, color, cmap='coolwarm')#, vmin=1.75, vmax=11.)
plt.xlim([min(samplesX.flatten()), max(samplesX.flatten())])
plt.ylim([min(samplesY.flatten()), max(samplesY.flatten())])
plt.xlim([0.0, 3000])
plt.ylim([0.0, 3000])
yaw = prob['yaw0'] + wd_polar
r = 0.5*rotor_diameter
for turb in range(0, nTurbines):
x1 = turbineX[turb] + np.sin(yaw[turb]*np.pi/180.)*r
x2 = turbineX[turb] - np.sin(yaw[turb]*np.pi/180.)*r
y1 = turbineY[turb] - np.cos(yaw[turb]*np.pi/180.)*r
y2 = turbineY[turb] + np.cos(yaw[turb]*np.pi/180.)*r
plt.plot([x1, x2], [y1, y2], 'k', lw=3)
plt.title(names[indx])
plt.axis('square')
plt.show()
| 43.825203 | 134 | 0.627307 | 1,452 | 10,781 | 4.455234 | 0.221074 | 0.084866 | 0.125522 | 0.176225 | 0.447519 | 0.410574 | 0.307776 | 0.221673 | 0.143453 | 0.120884 | 0 | 0.122449 | 0.24098 | 10,781 | 245 | 135 | 44.004082 | 0.668092 | 0.427882 | 0 | 0.049505 | 0 | 0 | 0.071993 | 0.016804 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.069307 | null | null | 0.019802 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d9cccfa22b51bae532a24d720ec25ae37caae642 | 942 | py | Python | checking/__init__.py | igrekus/checking | a2029f9404c2e2d8eab73aa90a3b0db2ad25030e | [
"MIT"
] | 4 | 2020-10-02T18:26:14.000Z | 2022-01-06T09:09:18.000Z | checking/__init__.py | igrekus/checking | a2029f9404c2e2d8eab73aa90a3b0db2ad25030e | [
"MIT"
] | 1 | 2020-10-10T14:55:35.000Z | 2020-10-12T20:43:29.000Z | checking/__init__.py | igrekus/checking | a2029f9404c2e2d8eab73aa90a3b0db2ad25030e | [
"MIT"
] | 2 | 2020-10-05T10:02:24.000Z | 2020-10-10T12:55:50.000Z | from .asserts import *
from .context import *
from .runner import start
from .runner import common
from .annotations import *
from .classes.mocking import *
from .classes.fluent_assert import verify
from .classes.soft_assert import SoftAssert
from .classes.listeners.file_logger import DefaultFileListener
__all__ = ['start', 'common', 'SoftAssert', 'Spy', 'TestDouble', 'DefaultFileListener',
'DATA_FILE', 'CONTAINER', 'provider',
'equals', 'is_none', 'is_not_none', 'should_raise', 'test_fail', 'test_break', 'test_skip',
'no_exception_expected', 'contains', 'verify', 'not_contains', 'not_equals', 'is_false', 'is_true',
'mock_builtins', 'mock', 'mock_input', 'mock_print', 'mock_open',
'is_zero', 'is_positive', 'is_negative', 'is_empty', 'is_not_empty', 'Stub',
'test', 'before', 'after', 'before_group', 'after_group', 'before_suite', 'after_suite', 'common_function']
| 52.333333 | 118 | 0.68896 | 112 | 942 | 5.473214 | 0.5 | 0.065253 | 0.052202 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156051 | 942 | 17 | 119 | 55.411765 | 0.771069 | 0 | 0 | 0 | 0 | 0 | 0.414013 | 0.022293 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0 | false | 0 | 0.5625 | 0 | 0.5625 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
d9e594867f1ed2a82eadbd4c88945e7fc0dbf1ff | 4,250 | py | Python | src/core/tests/test_performance.py | HitLuca/predict-python | 14f2f55cb29f817a5871d4c0b11a3758285301ca | [
"MIT"
] | null | null | null | src/core/tests/test_performance.py | HitLuca/predict-python | 14f2f55cb29f817a5871d4c0b11a3758285301ca | [
"MIT"
] | null | null | null | src/core/tests/test_performance.py | HitLuca/predict-python | 14f2f55cb29f817a5871d4c0b11a3758285301ca | [
"MIT"
] | null | null | null | """
performance tests
"""
import time
import unittest
from django.test import TestCase
from src.core.core import calculate
from src.core.tests.common import split_single, add_default_config
from src.encoding.encoding_container import EncodingContainer
from src.encoding.models import ValueEncodings
from src.hyperparameter_optimization.hyperopt_wrapper import calculate_hyperopt
from src.labelling.label_container import LabelContainer
from src.labelling.models import LabelTypes
from src.utils.tests_utils import bpi_log_filepath
@unittest.skip('performance test not needed normally')
class TestClassPerf(TestCase):
@staticmethod
def get_job():
json = dict()
json['clustering'] = 'noCluster'
json['split'] = split_single()
json['split']['original_log_path'] = bpi_log_filepath
json['method'] = 'randomForest'
json['encoding'] = EncodingContainer(ValueEncodings.BOOLEAN.value, prefix_length=20)
json['type'] = 'classification'
json['labelling'] = LabelContainer(LabelTypes.DURATION.value)
json['incremental_train'] = {'base_model': None}
return json
@staticmethod
def calculate_helper(job):
start_time = time.time()
calculate(job)
print('Total for %s %s seconds' % (job['method'], time.time() - start_time))
@staticmethod
def calculate_helper_hyperopt(job):
start_time = time.time()
calculate_hyperopt(job)
print('Total for %s %s seconds' % (job['method'], time.time() - start_time))
def test_class_randomForest(self):
job = self.get_job()
add_default_config(job)
self.calculate_helper(job)
def test_next_activity_randomForest(self):
job = self.get_job()
job['labelling'] = LabelContainer(LabelTypes.NEXT_ACTIVITY.value)
add_default_config(job)
self.calculate_helper(job)
def test_class_knn(self):
job = self.get_job()
job['method'] = 'knn'
add_default_config(job)
self.calculate_helper(job)
def test_class_decision(self):
job = self.get_job()
job['method'] = 'decisionTree'
add_default_config(job)
self.calculate_helper(job)
def test_class_hyperopt(self):
job = self.get_job()
job['labelling'] = LabelContainer(LabelTypes.NEXT_ACTIVITY.value)
job['hyperopt'] = {'use_hyperopt': True, 'max_evals': 10, 'performance_metric': 'f1score'}
add_default_config(job)
self.calculate_helper_hyperopt(job)
@unittest.skip('performance test not needed normally')
class RegPerf(TestCase):
@staticmethod
def get_job():
json = dict()
json['clustering'] = 'noCluster'
json['split'] = split_single()
json['split']['original_log_path'] = bpi_log_filepath
json['method'] = 'randomForest'
json['encoding'] = EncodingContainer(ValueEncodings.BOOLEAN.value, prefix_length=20)
json['prefix_length'] = 20
json['type'] = 'regression'
json['padding'] = 'no_padding'
json['labelling'] = LabelContainer(LabelTypes.REMAINING_TIME.value)
return json
@staticmethod
def calculate_helper(job):
start_time = time.time()
calculate(job)
print('Total for %s %s seconds' % (job['method'], time.time() - start_time))
@staticmethod
def calculate_helper_hyperopt(job):
start_time = time.time()
calculate_hyperopt(job)
print('Total for %s %s seconds' % (job['method'], time.time() - start_time))
def test_reg_randomForest(self):
job = self.get_job()
add_default_config(job)
self.calculate_helper(job)
def test_reg_linear(self):
job = self.get_job()
job['method'] = 'linear'
add_default_config(job)
self.calculate_helper(job)
def test_reg_lasso(self):
job = self.get_job()
job['method'] = 'lasso'
add_default_config(job)
self.calculate_helper(job)
def test_reg_hyperopt(self):
job = self.get_job()
job['hyperopt'] = {'use_hyperopt': True, 'max_evals': 10, 'performance_metric': 'rmse'}
add_default_config(job)
self.calculate_helper_hyperopt(job)
| 33.203125 | 98 | 0.660941 | 495 | 4,250 | 5.450505 | 0.193939 | 0.046701 | 0.059303 | 0.046701 | 0.729059 | 0.719422 | 0.714974 | 0.664196 | 0.627873 | 0.591549 | 0 | 0.003325 | 0.221647 | 4,250 | 127 | 99 | 33.464567 | 0.812273 | 0.004 | 0 | 0.663462 | 0 | 0 | 0.144852 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.144231 | false | 0 | 0.105769 | 0 | 0.288462 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d9e9c4e709e2799a761a4db66029e78228d8751d | 557 | py | Python | cca_zoo/data/utils.py | LegrandNico/cca_zoo | 03497ef4d8434f847435b572bff92b2095fbc4fc | [
"MIT"
] | null | null | null | cca_zoo/data/utils.py | LegrandNico/cca_zoo | 03497ef4d8434f847435b572bff92b2095fbc4fc | [
"MIT"
] | null | null | null | cca_zoo/data/utils.py | LegrandNico/cca_zoo | 03497ef4d8434f847435b572bff92b2095fbc4fc | [
"MIT"
] | null | null | null | import numpy as np
from torch.utils.data import Dataset
class CCA_Dataset(Dataset):
"""
Class that turns numpy arrays into a torch dataset
"""
def __init__(self, views):
"""
:param views: list/tuple of numpy arrays or array likes with the same number of rows (samples)
"""
self.views = [view for view in views]
def __len__(self):
return len(self.views[0])
def __getitem__(self, idx):
views = [view[idx].astype(np.float32) for view in self.views]
return {"views": views}
| 23.208333 | 102 | 0.62298 | 77 | 557 | 4.337662 | 0.558442 | 0.107784 | 0.053892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007426 | 0.274686 | 557 | 23 | 103 | 24.217391 | 0.819307 | 0.260323 | 0 | 0 | 0 | 0 | 0.013477 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.2 | 0.1 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
d9fd73e910f7301441f25ece1b7b52db2e80e6f6 | 10,315 | py | Python | autogl/data/_dataset/_dataset.py | dedsec-9/AutoGL | 487f2b2f798b9b1363ad5dc100fb410b12222e06 | [
"MIT"
] | null | null | null | autogl/data/_dataset/_dataset.py | dedsec-9/AutoGL | 487f2b2f798b9b1363ad5dc100fb410b12222e06 | [
"MIT"
] | null | null | null | autogl/data/_dataset/_dataset.py | dedsec-9/AutoGL | 487f2b2f798b9b1363ad5dc100fb410b12222e06 | [
"MIT"
] | null | null | null | import typing as _typing
_D = _typing.TypeVar('_D')
class _Schema(_typing.MutableMapping[str, _typing.Any]):
def __setitem__(self, key: str, value: _typing.Any) -> None:
self.__data[key] = value
def __delitem__(self, key: str) -> None:
del self.__data[key]
def __getitem__(self, key: str) -> _typing.Any:
return self.__data[key]
def __len__(self) -> int:
return len(self.__data)
def __iter__(self) -> _typing.Iterator[str]:
return iter(self.__data)
def __init__(self):
self.__data: _typing.MutableMapping[str, _typing.Any] = {}
self.__meta_paths: _typing.Optional[
_typing.Iterable[_typing.Iterable[str]]
] = None
@property
def meta_paths(self) -> _typing.Optional[
_typing.Iterable[_typing.Iterable[str]]
]:
return self.__meta_paths
@meta_paths.setter
def meta_paths(
self, meta_paths: _typing.Optional[
_typing.Iterable[_typing.Iterable[str]]
]
):
self.__meta_paths = meta_paths
class Dataset(_typing.Iterable[_D], _typing.Sized):
def __len__(self) -> int:
raise NotImplementedError
def __iter__(self) -> _typing.Iterator[_D]:
raise NotImplementedError
def __getitem__(self, index: int) -> _D:
raise NotImplementedError
def __setitem__(self, index: int, data: _D):
raise NotImplementedError
@property
def train_split(self) -> _typing.Optional[_typing.Iterable[_D]]:
raise NotImplementedError
@property
def val_split(self) -> _typing.Optional[_typing.Iterable[_D]]:
raise NotImplementedError
@property
def test_split(self) -> _typing.Optional[_typing.Iterable[_D]]:
raise NotImplementedError
@property
def train_index(self) -> _typing.Optional[_typing.AbstractSet[int]]:
raise NotImplementedError
@property
def val_index(self) -> _typing.Optional[_typing.AbstractSet[int]]:
raise NotImplementedError
@property
def test_index(self) -> _typing.Optional[_typing.AbstractSet[int]]:
raise NotImplementedError
@train_index.setter
def train_index(self, train_index: _typing.Optional[_typing.Iterable[int]]):
raise NotImplementedError
@val_index.setter
def val_index(self, val_index: _typing.Optional[_typing.Iterable[int]]):
raise NotImplementedError
@test_index.setter
def test_index(self, test_index: _typing.Optional[_typing.Iterable[int]]):
raise NotImplementedError
@property
def schema(self) -> _Schema:
raise NotImplementedError
class _FoldsContainer:
def __init__(
self,
folds: _typing.Optional[_typing.Iterable[_typing.Tuple[_typing.Sequence[int], _typing.Sequence[int]]]] = ...
):
self._folds: _typing.Optional[_typing.List[_typing.Tuple[_typing.Sequence[int], _typing.Sequence[int]]]] = (
list(folds) if isinstance(folds, _typing.Iterable) else None
)
if self._folds is not None and len(self._folds) == 0:
self._folds = None
@property
def folds(self) -> _typing.Optional[_typing.Sequence[_typing.Tuple[_typing.Sequence[int], _typing.Sequence[int]]]]:
if self._folds is not None and len(self._folds) == 0:
self._folds = None
return self._folds
@folds.setter
def folds(self, folds: _typing.Optional[_typing.Iterable[_typing.Tuple[_typing.Sequence[int], _typing.Sequence[int]]]]):
self._folds: _typing.Optional[_typing.List[_typing.Tuple[_typing.Sequence[int], _typing.Sequence[int]]]] = (
list(folds) if isinstance(folds, _typing.Iterable) else None
)
if self._folds is not None and len(self._folds) == 0:
self._folds = None
class _FoldView:
def __init__(self, folds_container: _FoldsContainer, fold_index: int):
self._folds_container: _FoldsContainer = folds_container
self._fold_index: int = fold_index
@property
def train_index(self) -> _typing.Sequence[int]:
return self._folds_container.folds[self._fold_index][0]
@property
def val_index(self) -> _typing.Sequence[int]:
return self._folds_container.folds[self._fold_index][1]
class _FoldsView(_typing.Sequence[_FoldView]):
def __init__(self, folds_container: _FoldsContainer):
self._folds_container = folds_container
def __len__(self) -> int:
return (
len(self._folds_container.folds)
if self._folds_container.folds is not None
else 0
)
def __getitem__(self, fold_index: int) -> _FoldView:
return _FoldView(self._folds_container, fold_index)
class InMemoryDataset(Dataset[_D]):
@property
def schema(self) -> _Schema:
return self.__schema
def __init__(
self, data: _typing.Iterable[_D],
train_index: _typing.Optional[_typing.Iterable[int]] = ...,
val_index: _typing.Optional[_typing.Iterable[int]] = ...,
test_index: _typing.Optional[_typing.Iterable[int]] = ...,
schema: _typing.Optional[_Schema] = ...
):
self.__data: _typing.MutableSequence[_D] = list(data)
self.__train_index: _typing.Optional[_typing.Iterable[int]] = (
train_index if isinstance(train_index, _typing.Iterable) else None
)
self.__val_index: _typing.Optional[_typing.Iterable[int]] = (
val_index if isinstance(val_index, _typing.Iterable) else None
)
self.__test_index: _typing.Optional[_typing.Iterable[int]] = (
test_index if isinstance(test_index, _typing.Iterable) else None
)
self.__schema: _Schema = schema if isinstance(schema, _Schema) else _Schema()
self.__folds_container: _FoldsContainer = _FoldsContainer()
    @property
    def folds(self) -> _typing.Optional[_FoldsView]:
        return (
            _FoldsView(self.__folds_container)
            if (
                self.__folds_container.folds is not None and
                len(self.__folds_container.folds) > 0
            )
            else None
        )

    @folds.setter
    def folds(
            self,
            folds: _typing.Optional[
                _typing.Iterable[
                    _typing.Tuple[_typing.Sequence[int], _typing.Sequence[int]]
                ]
            ] = ...
    ):
        self.__folds_container.folds = folds
    def __len__(self) -> int:
        return len(self.__data)

    def __iter__(self) -> _typing.Iterator[_D]:
        return iter(self.__data)

    def __getitem__(self, index: int) -> _D:
        return self.__data[index]

    def __setitem__(self, index: int, data: _D):
        self.__data[index] = data

    @property
    def train_split(self) -> _typing.Optional[_typing.Iterable[_D]]:
        return (
            [self.__data[i] for i in self.__train_index]
            if isinstance(self.__train_index, _typing.Iterable) else None
        )

    @property
    def val_split(self) -> _typing.Optional[_typing.Iterable[_D]]:
        return (
            [self.__data[i] for i in self.__val_index]
            if isinstance(self.__val_index, _typing.Iterable) else None
        )

    @property
    def test_split(self) -> _typing.Optional[_typing.Iterable[_D]]:
        return (
            [self.__data[i] for i in self.__test_index]
            if isinstance(self.__test_index, _typing.Iterable) else None
        )
    @property
    def train_index(self) -> _typing.Optional[_typing.Iterable[int]]:
        return self.__train_index

    @property
    def val_index(self) -> _typing.Optional[_typing.Iterable[int]]:
        return self.__val_index

    @property
    def test_index(self) -> _typing.Optional[_typing.Iterable[int]]:
        return self.__test_index
    @train_index.setter
    def train_index(self, train_index: _typing.Optional[_typing.Iterable[int]]):
        if train_index is None:
            self.__train_index = None
            return
        if not isinstance(train_index, _typing.Iterable):
            raise TypeError
        # Materialize once so that one-shot iterators (e.g. generators)
        # are not exhausted by the validation passes below.
        train_index = list(train_index)
        if len(train_index) == 0:
            self.__train_index = None
            return
        if not all(isinstance(i, int) for i in train_index):
            raise TypeError
        if not (0 <= min(train_index) <= max(train_index) < len(self)):
            raise ValueError
        self.__train_index = train_index
    @val_index.setter
    def val_index(self, val_index: _typing.Optional[_typing.Iterable[int]]):
        if val_index is None:
            self.__val_index = None
            return
        if not isinstance(val_index, _typing.Iterable):
            raise TypeError
        # Materialize once so that one-shot iterators are not exhausted
        # by the validation passes below.
        val_index = list(val_index)
        if len(val_index) == 0:
            self.__val_index = None
            return
        if not all(isinstance(i, int) for i in val_index):
            raise TypeError
        if not (0 <= min(val_index) <= max(val_index) < len(self)):
            raise ValueError
        self.__val_index = val_index
    @test_index.setter
    def test_index(self, test_index: _typing.Optional[_typing.Iterable[int]]):
        if test_index is None:
            self.__test_index = None
            return
        if not isinstance(test_index, _typing.Iterable):
            raise TypeError
        # Materialize once so that one-shot iterators are not exhausted
        # by the validation passes below.
        test_index = list(test_index)
        if len(test_index) == 0:
            self.__test_index = None
            return
        if not all(isinstance(i, int) for i in test_index):
            raise TypeError
        if not (0 <= min(test_index) <= max(test_index) < len(self)):
            raise ValueError
        self.__test_index = test_index
"""
test_wadsworth.py - the wadsworth.py module
author: mutantmonkey <mutantmonkey@mutantmonkey.in>
"""

import re
import unittest
from mock import MagicMock

from modules.wadsworth import wadsworth


class TestWadsworth(unittest.TestCase):
    def setUp(self):
        self.phenny = MagicMock()
        self.input = MagicMock()

    def test_wadsworth(self):
        self.input.group.return_value = "Apply Wadsworth's Constant to a string"
        wadsworth(self.phenny, self.input)
        self.phenny.say.assert_called_once_with(
            "Constant to a string")
#
# PySNMP MIB module HIRSCHMANN-PIM-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/HIRSCHMANN-PIM-MIB
# Produced by pysmi-0.3.4 at Wed May 1 13:31:12 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
Integer, ObjectIdentifier, OctetString = mibBuilder.importSymbols("ASN1", "Integer", "ObjectIdentifier", "OctetString")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ValueRangeConstraint, SingleValueConstraint, ConstraintsIntersection, ValueSizeConstraint, ConstraintsUnion = mibBuilder.importSymbols("ASN1-REFINEMENT", "ValueRangeConstraint", "SingleValueConstraint", "ConstraintsIntersection", "ValueSizeConstraint", "ConstraintsUnion")
hmPlatform4Multicast, = mibBuilder.importSymbols("HIRSCHMANN-MULTICAST-MIB", "hmPlatform4Multicast")
InterfaceIndex, = mibBuilder.importSymbols("IF-MIB", "InterfaceIndex")
ipMRouteNextHopGroup, ipMRouteSourceMask, ipMRouteNextHopIfIndex, ipMRouteNextHopAddress, ipMRouteNextHopSourceMask, ipMRouteGroup, ipMRouteNextHopSource, ipMRouteSource = mibBuilder.importSymbols("IPMROUTE-STD-MIB", "ipMRouteNextHopGroup", "ipMRouteSourceMask", "ipMRouteNextHopIfIndex", "ipMRouteNextHopAddress", "ipMRouteNextHopSourceMask", "ipMRouteGroup", "ipMRouteNextHopSource", "ipMRouteSource")
ObjectGroup, ModuleCompliance, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ObjectGroup", "ModuleCompliance", "NotificationGroup")
Gauge32, IpAddress, iso, Unsigned32, Bits, ModuleIdentity, ObjectIdentity, Integer32, TimeTicks, MibIdentifier, NotificationType, MibScalar, MibTable, MibTableRow, MibTableColumn, Counter32, Counter64 = mibBuilder.importSymbols("SNMPv2-SMI", "Gauge32", "IpAddress", "iso", "Unsigned32", "Bits", "ModuleIdentity", "ObjectIdentity", "Integer32", "TimeTicks", "MibIdentifier", "NotificationType", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Counter32", "Counter64")
TruthValue, TextualConvention, RowStatus, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TruthValue", "TextualConvention", "RowStatus", "DisplayString")
hmPIMMIB = ModuleIdentity((1, 3, 6, 1, 4, 1, 248, 15, 4, 99))
hmPIMMIB.setRevisions(('2006-02-06 12:00',))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    if mibBuilder.loadTexts: hmPIMMIB.setRevisionsDescriptions(('Initial version, published as RFC 2934.',))
if mibBuilder.loadTexts: hmPIMMIB.setLastUpdated('200602061200Z')
if mibBuilder.loadTexts: hmPIMMIB.setOrganization('Hirschmann Automation and Control GmbH')
if mibBuilder.loadTexts: hmPIMMIB.setContactInfo('Customer Support Postal: Hirschmann Automation and Control GmbH Stuttgarter Str. 45-51 72654 Neckartenzlingen Germany Tel: +49 7127 14 1981 Web: http://www.hicomcenter.com/ E-Mail: hicomcenter@hirschmann.com')
if mibBuilder.loadTexts: hmPIMMIB.setDescription('The Hirschmann Private Platform4 PIM MIB definitions for Platform devices.')
hmPIMMIBObjects = MibIdentifier((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1))
hmPIMTraps = MibIdentifier((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 0))
hmPIM = MibIdentifier((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1))
hmPIMJoinPruneInterval = MibScalar((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 1), Integer32()).setUnits('seconds').setMaxAccess("readwrite")
if mibBuilder.loadTexts: hmPIMJoinPruneInterval.setStatus('current')
if mibBuilder.loadTexts: hmPIMJoinPruneInterval.setDescription('The default interval at which periodic PIM-SM Join/Prune messages are to be sent.')
hmPIMInterfaceTable = MibTable((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 2), )
if mibBuilder.loadTexts: hmPIMInterfaceTable.setStatus('current')
if mibBuilder.loadTexts: hmPIMInterfaceTable.setDescription("The (conceptual) table listing the router's PIM interfaces. IGMP and PIM are enabled on all interfaces listed in this table.")
hmPIMInterfaceEntry = MibTableRow((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 2, 1), ).setIndexNames((0, "HIRSCHMANN-PIM-MIB", "hmPIMInterfaceIfIndex"))
if mibBuilder.loadTexts: hmPIMInterfaceEntry.setStatus('current')
if mibBuilder.loadTexts: hmPIMInterfaceEntry.setDescription('An entry (conceptual row) in the hmPIMInterfaceTable.')
hmPIMInterfaceIfIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 2, 1, 1), InterfaceIndex())
if mibBuilder.loadTexts: hmPIMInterfaceIfIndex.setStatus('current')
if mibBuilder.loadTexts: hmPIMInterfaceIfIndex.setDescription('The ifIndex value of this PIM interface.')
hmPIMInterfaceAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 2, 1, 2), IpAddress()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMInterfaceAddress.setStatus('current')
if mibBuilder.loadTexts: hmPIMInterfaceAddress.setDescription('The IP address of the PIM interface.')
hmPIMInterfaceNetMask = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 2, 1, 3), IpAddress()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMInterfaceNetMask.setStatus('current')
if mibBuilder.loadTexts: hmPIMInterfaceNetMask.setDescription('The network mask for the IP address of the PIM interface.')
hmPIMInterfaceMode = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 2, 1, 4), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("dense", 1), ("sparse", 2), ("sparseDense", 3))).clone('dense')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: hmPIMInterfaceMode.setStatus('current')
if mibBuilder.loadTexts: hmPIMInterfaceMode.setDescription('The configured mode of this PIM interface. A value of sparseDense is only valid for PIMv1.')
hmPIMInterfaceDR = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 2, 1, 5), IpAddress()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMInterfaceDR.setStatus('current')
if mibBuilder.loadTexts: hmPIMInterfaceDR.setDescription('The Designated Router on this PIM interface. For point-to- point interfaces, this object has the value 0.0.0.0.')
hmPIMInterfaceHelloInterval = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 2, 1, 6), Integer32().clone(30)).setUnits('seconds').setMaxAccess("readcreate")
if mibBuilder.loadTexts: hmPIMInterfaceHelloInterval.setStatus('current')
if mibBuilder.loadTexts: hmPIMInterfaceHelloInterval.setDescription('The frequency at which PIM Hello messages are transmitted on this interface.')
hmPIMInterfaceStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 2, 1, 7), RowStatus()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: hmPIMInterfaceStatus.setStatus('current')
if mibBuilder.loadTexts: hmPIMInterfaceStatus.setDescription('The status of this entry. Creating the entry enables PIM on the interface; destroying the entry disables PIM on the interface.')
hmPIMInterfaceJoinPruneInterval = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 2, 1, 8), Integer32()).setUnits('seconds').setMaxAccess("readcreate")
if mibBuilder.loadTexts: hmPIMInterfaceJoinPruneInterval.setStatus('current')
if mibBuilder.loadTexts: hmPIMInterfaceJoinPruneInterval.setDescription('The frequency at which PIM Join/Prune messages are transmitted on this PIM interface. The default value of this object is the hmPIMJoinPruneInterval.')
hmPIMInterfaceCBSRPreference = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 2, 1, 9), Integer32().subtype(subtypeSpec=ValueRangeConstraint(-1, 255))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: hmPIMInterfaceCBSRPreference.setStatus('current')
if mibBuilder.loadTexts: hmPIMInterfaceCBSRPreference.setDescription('The preference value for the local interface as a candidate bootstrap router. The value of -1 is used to indicate that the local interface is not a candidate BSR interface.')
hmPIMNeighborTable = MibTable((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 3), )
if mibBuilder.loadTexts: hmPIMNeighborTable.setStatus('current')
if mibBuilder.loadTexts: hmPIMNeighborTable.setDescription("The (conceptual) table listing the router's PIM neighbors.")
hmPIMNeighborEntry = MibTableRow((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 3, 1), ).setIndexNames((0, "HIRSCHMANN-PIM-MIB", "hmPIMNeighborAddress"))
if mibBuilder.loadTexts: hmPIMNeighborEntry.setStatus('current')
if mibBuilder.loadTexts: hmPIMNeighborEntry.setDescription('An entry (conceptual row) in the hmPIMNeighborTable.')
hmPIMNeighborAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 3, 1, 1), IpAddress())
if mibBuilder.loadTexts: hmPIMNeighborAddress.setStatus('current')
if mibBuilder.loadTexts: hmPIMNeighborAddress.setDescription('The IP address of the PIM neighbor for which this entry contains information.')
hmPIMNeighborIfIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 3, 1, 2), InterfaceIndex()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMNeighborIfIndex.setStatus('current')
if mibBuilder.loadTexts: hmPIMNeighborIfIndex.setDescription('The value of ifIndex for the interface used to reach this PIM neighbor.')
hmPIMNeighborUpTime = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 3, 1, 3), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMNeighborUpTime.setStatus('current')
if mibBuilder.loadTexts: hmPIMNeighborUpTime.setDescription('The time since this PIM neighbor (last) became a neighbor of the local router.')
hmPIMNeighborExpiryTime = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 3, 1, 4), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMNeighborExpiryTime.setStatus('current')
if mibBuilder.loadTexts: hmPIMNeighborExpiryTime.setDescription('The minimum time remaining before this PIM neighbor will be aged out.')
hmPIMNeighborMode = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 3, 1, 5), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("dense", 1), ("sparse", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMNeighborMode.setStatus('deprecated')
if mibBuilder.loadTexts: hmPIMNeighborMode.setDescription('The active PIM mode of this neighbor. This object is deprecated for PIMv2 routers since all neighbors on the interface must be either dense or sparse as determined by the protocol running on the interface.')
hmPIMIpMRouteTable = MibTable((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 4), )
if mibBuilder.loadTexts: hmPIMIpMRouteTable.setStatus('current')
if mibBuilder.loadTexts: hmPIMIpMRouteTable.setDescription('The (conceptual) table listing PIM-specific information on a subset of the rows of the ipMRouteTable defined in the IP Multicast MIB.')
hmPIMIpMRouteEntry = MibTableRow((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 4, 1), ).setIndexNames((0, "IPMROUTE-STD-MIB", "ipMRouteGroup"), (0, "IPMROUTE-STD-MIB", "ipMRouteSource"), (0, "IPMROUTE-STD-MIB", "ipMRouteSourceMask"))
if mibBuilder.loadTexts: hmPIMIpMRouteEntry.setStatus('current')
if mibBuilder.loadTexts: hmPIMIpMRouteEntry.setDescription('An entry (conceptual row) in the hmPIMIpMRouteTable. There is one entry per entry in the ipMRouteTable whose incoming interface is running PIM.')
hmPIMIpMRouteUpstreamAssertTimer = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 4, 1, 1), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMIpMRouteUpstreamAssertTimer.setStatus('current')
if mibBuilder.loadTexts: hmPIMIpMRouteUpstreamAssertTimer.setDescription('The time remaining before the router changes its upstream neighbor back to its RPF neighbor. This timer is called the Assert timer in the PIM Sparse and Dense mode specification. A value of 0 indicates that no Assert has changed the upstream neighbor away from the RPF neighbor.')
hmPIMIpMRouteAssertMetric = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 4, 1, 2), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMIpMRouteAssertMetric.setStatus('current')
if mibBuilder.loadTexts: hmPIMIpMRouteAssertMetric.setDescription('The metric advertised by the assert winner on the upstream interface, or 0 if no such assert is in received.')
hmPIMIpMRouteAssertMetricPref = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 4, 1, 3), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMIpMRouteAssertMetricPref.setStatus('current')
if mibBuilder.loadTexts: hmPIMIpMRouteAssertMetricPref.setDescription('The preference advertised by the assert winner on the upstream interface, or 0 if no such assert is in effect.')
hmPIMIpMRouteAssertRPTBit = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 4, 1, 4), TruthValue()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMIpMRouteAssertRPTBit.setStatus('current')
if mibBuilder.loadTexts: hmPIMIpMRouteAssertRPTBit.setDescription('The value of the RPT-bit advertised by the assert winner on the upstream interface, or false if no such assert is in effect.')
hmPIMIpMRouteFlags = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 4, 1, 5), Bits().clone(namedValues=NamedValues(("rpt", 0), ("spt", 1)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMIpMRouteFlags.setStatus('current')
if mibBuilder.loadTexts: hmPIMIpMRouteFlags.setDescription('This object describes PIM-specific flags related to a multicast state entry. See the PIM Sparse Mode specification for the meaning of the RPT and SPT bits.')
hmPIMIpMRouteNextHopTable = MibTable((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 7), )
if mibBuilder.loadTexts: hmPIMIpMRouteNextHopTable.setStatus('current')
if mibBuilder.loadTexts: hmPIMIpMRouteNextHopTable.setDescription('The (conceptual) table listing PIM-specific information on a subset of the rows of the ipMRouteNextHopTable defined in the IP Multicast MIB.')
hmPIMIpMRouteNextHopEntry = MibTableRow((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 7, 1), ).setIndexNames((0, "IPMROUTE-STD-MIB", "ipMRouteNextHopGroup"), (0, "IPMROUTE-STD-MIB", "ipMRouteNextHopSource"), (0, "IPMROUTE-STD-MIB", "ipMRouteNextHopSourceMask"), (0, "IPMROUTE-STD-MIB", "ipMRouteNextHopIfIndex"), (0, "IPMROUTE-STD-MIB", "ipMRouteNextHopAddress"))
if mibBuilder.loadTexts: hmPIMIpMRouteNextHopEntry.setStatus('current')
if mibBuilder.loadTexts: hmPIMIpMRouteNextHopEntry.setDescription('An entry (conceptual row) in the hmPIMIpMRouteNextHopTable. There is one entry per entry in the ipMRouteNextHopTable whose interface is running PIM and whose ipMRouteNextHopState is pruned(1).')
hmPIMIpMRouteNextHopPruneReason = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 7, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("other", 1), ("prune", 2), ("assert", 3)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMIpMRouteNextHopPruneReason.setStatus('current')
if mibBuilder.loadTexts: hmPIMIpMRouteNextHopPruneReason.setDescription('This object indicates why the downstream interface was pruned, whether in response to a PIM prune message or due to PIM Assert processing.')
hmPIMRPTable = MibTable((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 5), )
if mibBuilder.loadTexts: hmPIMRPTable.setStatus('deprecated')
if mibBuilder.loadTexts: hmPIMRPTable.setDescription('The (conceptual) table listing PIM version 1 information for the Rendezvous Points (RPs) for IP multicast groups. This table is deprecated since its function is replaced by the hmPIMRPSetTable for PIM version 2.')
hmPIMRPEntry = MibTableRow((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 5, 1), ).setIndexNames((0, "HIRSCHMANN-PIM-MIB", "hmPIMRPGroupAddress"), (0, "HIRSCHMANN-PIM-MIB", "hmPIMRPAddress"))
if mibBuilder.loadTexts: hmPIMRPEntry.setStatus('deprecated')
if mibBuilder.loadTexts: hmPIMRPEntry.setDescription('An entry (conceptual row) in the hmPIMRPTable. There is one entry per RP address for each IP multicast group.')
hmPIMRPGroupAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 5, 1, 1), IpAddress())
if mibBuilder.loadTexts: hmPIMRPGroupAddress.setStatus('deprecated')
if mibBuilder.loadTexts: hmPIMRPGroupAddress.setDescription('The IP multicast group address for which this entry contains information about an RP.')
hmPIMRPAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 5, 1, 2), IpAddress())
if mibBuilder.loadTexts: hmPIMRPAddress.setStatus('deprecated')
if mibBuilder.loadTexts: hmPIMRPAddress.setDescription('The unicast address of the RP.')
hmPIMRPState = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 5, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("up", 1), ("down", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMRPState.setStatus('deprecated')
if mibBuilder.loadTexts: hmPIMRPState.setDescription('The state of the RP.')
hmPIMRPStateTimer = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 5, 1, 4), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMRPStateTimer.setStatus('deprecated')
if mibBuilder.loadTexts: hmPIMRPStateTimer.setDescription('The minimum time remaining before the next state change. When hmPIMRPState is up, this is the minimum time which must expire until it can be declared down. When hmPIMRPState is down, this is the time until it will be declared up (in order to retry).')
hmPIMRPLastChange = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 5, 1, 5), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMRPLastChange.setStatus('deprecated')
if mibBuilder.loadTexts: hmPIMRPLastChange.setDescription('The value of sysUpTime at the time when the corresponding instance of hmPIMRPState last changed its value.')
hmPIMRPRowStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 5, 1, 6), RowStatus()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: hmPIMRPRowStatus.setStatus('deprecated')
if mibBuilder.loadTexts: hmPIMRPRowStatus.setDescription('The status of this row, by which new entries may be created, or old entries deleted from this table.')
hmPIMRPSetTable = MibTable((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 6), )
if mibBuilder.loadTexts: hmPIMRPSetTable.setStatus('current')
if mibBuilder.loadTexts: hmPIMRPSetTable.setDescription('The (conceptual) table listing PIM information for candidate Rendezvous Points (RPs) for IP multicast groups. When the local router is the BSR, this information is obtained from received Candidate-RP-Advertisements. When the local router is not the BSR, this information is obtained from received RP-Set messages.')
hmPIMRPSetEntry = MibTableRow((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 6, 1), ).setIndexNames((0, "HIRSCHMANN-PIM-MIB", "hmPIMRPSetComponent"), (0, "HIRSCHMANN-PIM-MIB", "hmPIMRPSetGroupAddress"), (0, "HIRSCHMANN-PIM-MIB", "hmPIMRPSetGroupMask"), (0, "HIRSCHMANN-PIM-MIB", "hmPIMRPSetAddress"))
if mibBuilder.loadTexts: hmPIMRPSetEntry.setStatus('current')
if mibBuilder.loadTexts: hmPIMRPSetEntry.setDescription('An entry (conceptual row) in the hmPIMRPSetTable.')
hmPIMRPSetGroupAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 6, 1, 1), IpAddress())
if mibBuilder.loadTexts: hmPIMRPSetGroupAddress.setStatus('current')
if mibBuilder.loadTexts: hmPIMRPSetGroupAddress.setDescription('The IP multicast group address which, when combined with hmPIMRPSetGroupMask, gives the group prefix for which this entry contains information about the Candidate-RP.')
hmPIMRPSetGroupMask = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 6, 1, 2), IpAddress())
if mibBuilder.loadTexts: hmPIMRPSetGroupMask.setStatus('current')
if mibBuilder.loadTexts: hmPIMRPSetGroupMask.setDescription('The multicast group address mask which, when combined with hmPIMRPSetGroupAddress, gives the group prefix for which this entry contains information about the Candidate-RP.')
hmPIMRPSetAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 6, 1, 3), IpAddress())
if mibBuilder.loadTexts: hmPIMRPSetAddress.setStatus('current')
if mibBuilder.loadTexts: hmPIMRPSetAddress.setDescription('The IP address of the Candidate-RP.')
hmPIMRPSetHoldTime = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 6, 1, 4), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 255))).setUnits('seconds').setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMRPSetHoldTime.setStatus('current')
if mibBuilder.loadTexts: hmPIMRPSetHoldTime.setDescription('The holdtime of a Candidate-RP. If the local router is not the BSR, this value is 0.')
hmPIMRPSetExpiryTime = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 6, 1, 5), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMRPSetExpiryTime.setStatus('current')
if mibBuilder.loadTexts: hmPIMRPSetExpiryTime.setDescription('The minimum time remaining before the Candidate-RP will be declared down. If the local router is not the BSR, this value is 0.')
hmPIMRPSetComponent = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 6, 1, 6), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 255)))
if mibBuilder.loadTexts: hmPIMRPSetComponent.setStatus('current')
if mibBuilder.loadTexts: hmPIMRPSetComponent.setDescription(' A number uniquely identifying the component. Each protocol instance connected to a separate domain should have a different index value.')
hmPIMCandidateRPTable = MibTable((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 11), )
if mibBuilder.loadTexts: hmPIMCandidateRPTable.setStatus('current')
if mibBuilder.loadTexts: hmPIMCandidateRPTable.setDescription('The (conceptual) table listing the IP multicast groups for which the local router is to advertise itself as a Candidate-RP when the value of hmPIMComponentCRPHoldTime is non-zero. If this table is empty, then the local router will advertise itself as a Candidate-RP for all groups (providing the value of hmPIMComponentCRPHoldTime is non- zero).')
hmPIMCandidateRPEntry = MibTableRow((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 11, 1), ).setIndexNames((0, "HIRSCHMANN-PIM-MIB", "hmPIMCandidateRPGroupAddress"), (0, "HIRSCHMANN-PIM-MIB", "hmPIMCandidateRPGroupMask"))
if mibBuilder.loadTexts: hmPIMCandidateRPEntry.setStatus('current')
if mibBuilder.loadTexts: hmPIMCandidateRPEntry.setDescription('An entry (conceptual row) in the hmPIMCandidateRPTable.')
hmPIMCandidateRPGroupAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 11, 1, 1), IpAddress())
if mibBuilder.loadTexts: hmPIMCandidateRPGroupAddress.setStatus('current')
if mibBuilder.loadTexts: hmPIMCandidateRPGroupAddress.setDescription('The IP multicast group address which, when combined with hmPIMCandidateRPGroupMask, identifies a group prefix for which the local router will advertise itself as a Candidate-RP.')
hmPIMCandidateRPGroupMask = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 11, 1, 2), IpAddress())
if mibBuilder.loadTexts: hmPIMCandidateRPGroupMask.setStatus('current')
if mibBuilder.loadTexts: hmPIMCandidateRPGroupMask.setDescription('The multicast group address mask which, when combined with hmPIMCandidateRPGroupMask, identifies a group prefix for which the local router will advertise itself as a Candidate-RP.')
hmPIMCandidateRPAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 11, 1, 3), IpAddress()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: hmPIMCandidateRPAddress.setStatus('current')
if mibBuilder.loadTexts: hmPIMCandidateRPAddress.setDescription('The (unicast) address of the interface which will be advertised as a Candidate-RP.')
hmPIMCandidateRPRowStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 11, 1, 4), RowStatus()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: hmPIMCandidateRPRowStatus.setStatus('current')
if mibBuilder.loadTexts: hmPIMCandidateRPRowStatus.setDescription('The status of this row, by which new entries may be created, or old entries deleted from this table.')
hmPIMComponentTable = MibTable((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 12), )
if mibBuilder.loadTexts: hmPIMComponentTable.setStatus('current')
if mibBuilder.loadTexts: hmPIMComponentTable.setDescription('The (conceptual) table containing objects specific to a PIM domain. One row exists for each domain to which the router is connected. A PIM-SM domain is defined as an area of the network over which Bootstrap messages are forwarded. Typically, a PIM-SM router will be a member of exactly one domain. This table also supports, however, routers which may form a border between two PIM-SM domains and do not forward Bootstrap messages between them.')
hmPIMComponentEntry = MibTableRow((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 12, 1), ).setIndexNames((0, "HIRSCHMANN-PIM-MIB", "hmPIMComponentIndex"))
if mibBuilder.loadTexts: hmPIMComponentEntry.setStatus('current')
if mibBuilder.loadTexts: hmPIMComponentEntry.setDescription('An entry (conceptual row) in the hmPIMComponentTable.')
hmPIMComponentIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 12, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 255)))
if mibBuilder.loadTexts: hmPIMComponentIndex.setStatus('current')
if mibBuilder.loadTexts: hmPIMComponentIndex.setDescription('A number uniquely identifying the component. Each protocol instance connected to a separate domain should have a different index value. Routers that only support membership in a single PIM-SM domain should use a hmPIMComponentIndex value of 1.')
hmPIMComponentBSRAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 12, 1, 2), IpAddress()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMComponentBSRAddress.setStatus('current')
if mibBuilder.loadTexts: hmPIMComponentBSRAddress.setDescription('The IP address of the bootstrap router (BSR) for the local PIM region.')
hmPIMComponentBSRExpiryTime = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 12, 1, 3), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: hmPIMComponentBSRExpiryTime.setStatus('current')
if mibBuilder.loadTexts: hmPIMComponentBSRExpiryTime.setDescription('The minimum time remaining before the bootstrap router in the local domain will be declared down. For candidate BSRs, this is the time until the component sends an RP-Set message. For other routers, this is the time until it may accept an RP-Set message from a lower candidate BSR.')
hmPIMComponentCRPHoldTime = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 12, 1, 4), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 255))).setUnits('seconds').setMaxAccess("readcreate")
if mibBuilder.loadTexts: hmPIMComponentCRPHoldTime.setStatus('current')
if mibBuilder.loadTexts: hmPIMComponentCRPHoldTime.setDescription('The holdtime of the component when it is a candidate RP in the local domain. The value of 0 is used to indicate that the local system is not a Candidate-RP.')
hmPIMComponentStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 12, 1, 5), RowStatus()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: hmPIMComponentStatus.setStatus('current')
if mibBuilder.loadTexts: hmPIMComponentStatus.setDescription('The status of this entry. Creating the entry creates another protocol instance; destroying the entry disables a protocol instance.')
hmPIMNeighborLoss = NotificationType((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 0, 1)).setObjects(("HIRSCHMANN-PIM-MIB", "hmPIMNeighborIfIndex"))
if mibBuilder.loadTexts: hmPIMNeighborLoss.setStatus('current')
if mibBuilder.loadTexts: hmPIMNeighborLoss.setDescription('A hmPIMNeighborLoss trap signifies the loss of an adjacency with a neighbor. This trap should be generated when the neighbor timer expires, and the router has no other neighbors on the same interface with a lower IP address than itself.')
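The trap condition above (fire only when the router has no remaining neighbor with a lower IP address on the interface) can be sketched in plain Python. This is illustrative only; the function name and the use of the `ipaddress` module are assumptions, not part of the MIB.

```python
from ipaddress import ip_address

def should_emit_neighbor_loss(router_addr, remaining_neighbors):
    # Per the description: the neighbor timer expired, and the trap fires
    # only if no remaining neighbor on the interface has a lower IP address
    # than the router itself.
    router = ip_address(router_addr)
    return all(ip_address(n) >= router for n in remaining_neighbors)

# The router with the lowest address on the link reports the loss:
print(should_emit_neighbor_loss("10.0.0.1", ["10.0.0.2", "10.0.0.3"]))  # True
print(should_emit_neighbor_loss("10.0.0.5", ["10.0.0.2"]))              # False
```

This keeps every router on a shared link from generating a duplicate trap for the same lost adjacency.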
hmPIMMIBConformance = MibIdentifier((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 2))
hmPIMMIBCompliances = MibIdentifier((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 2, 1))
hmPIMMIBGroups = MibIdentifier((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 2, 2))
hmPIMV1MIBCompliance = ModuleCompliance((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 2, 1, 1)).setObjects(("HIRSCHMANN-PIM-MIB", "hmPIMV1MIBGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    hmPIMV1MIBCompliance = hmPIMV1MIBCompliance.setStatus('deprecated')
if mibBuilder.loadTexts: hmPIMV1MIBCompliance.setDescription('The compliance statement for routers running PIMv1 and implementing the PIM MIB.')
hmPIMSparseV2MIBCompliance = ModuleCompliance((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 2, 1, 2)).setObjects(("HIRSCHMANN-PIM-MIB", "hmPIMV2MIBGroup"), ("HIRSCHMANN-PIM-MIB", "hmPIMV2CandidateRPMIBGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    hmPIMSparseV2MIBCompliance = hmPIMSparseV2MIBCompliance.setStatus('current')
if mibBuilder.loadTexts: hmPIMSparseV2MIBCompliance.setDescription('The compliance statement for routers running PIM Sparse Mode and implementing the PIM MIB.')
hmPIMDenseV2MIBCompliance = ModuleCompliance((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 2, 1, 3)).setObjects(("HIRSCHMANN-PIM-MIB", "hmPIMDenseV2MIBGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    hmPIMDenseV2MIBCompliance = hmPIMDenseV2MIBCompliance.setStatus('current')
if mibBuilder.loadTexts: hmPIMDenseV2MIBCompliance.setDescription('The compliance statement for routers running PIM Dense Mode and implementing the PIM MIB.')
hmPIMNotificationGroup = NotificationGroup((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 2, 2, 1)).setObjects(("HIRSCHMANN-PIM-MIB", "hmPIMNeighborLoss"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    hmPIMNotificationGroup = hmPIMNotificationGroup.setStatus('current')
if mibBuilder.loadTexts: hmPIMNotificationGroup.setDescription('A collection of notifications for signaling important PIM events.')
hmPIMV2MIBGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 2, 2, 2)).setObjects(("HIRSCHMANN-PIM-MIB", "hmPIMJoinPruneInterval"), ("HIRSCHMANN-PIM-MIB", "hmPIMNeighborIfIndex"), ("HIRSCHMANN-PIM-MIB", "hmPIMNeighborUpTime"), ("HIRSCHMANN-PIM-MIB", "hmPIMNeighborExpiryTime"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceAddress"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceNetMask"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceDR"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceHelloInterval"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceStatus"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceJoinPruneInterval"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceCBSRPreference"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceMode"), ("HIRSCHMANN-PIM-MIB", "hmPIMRPSetHoldTime"), ("HIRSCHMANN-PIM-MIB", "hmPIMRPSetExpiryTime"), ("HIRSCHMANN-PIM-MIB", "hmPIMComponentBSRAddress"), ("HIRSCHMANN-PIM-MIB", "hmPIMComponentBSRExpiryTime"), ("HIRSCHMANN-PIM-MIB", "hmPIMComponentCRPHoldTime"), ("HIRSCHMANN-PIM-MIB", "hmPIMComponentStatus"), ("HIRSCHMANN-PIM-MIB", "hmPIMIpMRouteFlags"), ("HIRSCHMANN-PIM-MIB", "hmPIMIpMRouteUpstreamAssertTimer"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    hmPIMV2MIBGroup = hmPIMV2MIBGroup.setStatus('current')
if mibBuilder.loadTexts: hmPIMV2MIBGroup.setDescription('A collection of objects to support management of PIM Sparse Mode (version 2) routers.')
hmPIMDenseV2MIBGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 2, 2, 5)).setObjects(("HIRSCHMANN-PIM-MIB", "hmPIMNeighborIfIndex"), ("HIRSCHMANN-PIM-MIB", "hmPIMNeighborUpTime"), ("HIRSCHMANN-PIM-MIB", "hmPIMNeighborExpiryTime"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceAddress"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceNetMask"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceDR"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceHelloInterval"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceStatus"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceMode"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    hmPIMDenseV2MIBGroup = hmPIMDenseV2MIBGroup.setStatus('current')
if mibBuilder.loadTexts: hmPIMDenseV2MIBGroup.setDescription('A collection of objects to support management of PIM Dense Mode (version 2) routers.')
hmPIMV2CandidateRPMIBGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 2, 2, 3)).setObjects(("HIRSCHMANN-PIM-MIB", "hmPIMCandidateRPAddress"), ("HIRSCHMANN-PIM-MIB", "hmPIMCandidateRPRowStatus"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    hmPIMV2CandidateRPMIBGroup = hmPIMV2CandidateRPMIBGroup.setStatus('current')
if mibBuilder.loadTexts: hmPIMV2CandidateRPMIBGroup.setDescription('A collection of objects to support configuration of which groups a router is to advertise itself as a Candidate-RP.')
hmPIMV1MIBGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 2, 2, 4)).setObjects(("HIRSCHMANN-PIM-MIB", "hmPIMJoinPruneInterval"), ("HIRSCHMANN-PIM-MIB", "hmPIMNeighborIfIndex"), ("HIRSCHMANN-PIM-MIB", "hmPIMNeighborUpTime"), ("HIRSCHMANN-PIM-MIB", "hmPIMNeighborExpiryTime"), ("HIRSCHMANN-PIM-MIB", "hmPIMNeighborMode"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceAddress"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceNetMask"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceJoinPruneInterval"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceStatus"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceMode"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceDR"), ("HIRSCHMANN-PIM-MIB", "hmPIMInterfaceHelloInterval"), ("HIRSCHMANN-PIM-MIB", "hmPIMRPState"), ("HIRSCHMANN-PIM-MIB", "hmPIMRPStateTimer"), ("HIRSCHMANN-PIM-MIB", "hmPIMRPLastChange"), ("HIRSCHMANN-PIM-MIB", "hmPIMRPRowStatus"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    hmPIMV1MIBGroup = hmPIMV1MIBGroup.setStatus('deprecated')
if mibBuilder.loadTexts: hmPIMV1MIBGroup.setDescription('A collection of objects to support management of PIM (version 1) routers.')
hmPIMNextHopGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 2, 2, 6)).setObjects(("HIRSCHMANN-PIM-MIB", "hmPIMIpMRouteNextHopPruneReason"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    hmPIMNextHopGroup = hmPIMNextHopGroup.setStatus('current')
if mibBuilder.loadTexts: hmPIMNextHopGroup.setDescription('A collection of optional objects to provide per-next hop information for diagnostic purposes. Supporting this group may add a large number of instances to a tree walk, but the information in this group can be extremely useful in tracking down multicast connectivity problems.')
hmPIMAssertGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 2, 2, 7)).setObjects(("HIRSCHMANN-PIM-MIB", "hmPIMIpMRouteAssertMetric"), ("HIRSCHMANN-PIM-MIB", "hmPIMIpMRouteAssertMetricPref"), ("HIRSCHMANN-PIM-MIB", "hmPIMIpMRouteAssertRPTBit"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    hmPIMAssertGroup = hmPIMAssertGroup.setStatus('current')
if mibBuilder.loadTexts: hmPIMAssertGroup.setDescription('A collection of optional objects to provide extra information about the assert election process. There is no protocol reason to keep such information, but some implementations may already keep this information and make it available. These objects can also be very useful in debugging connectivity or duplicate packet problems, especially if the assert winner does not support the PIM and IP Multicast MIBs.')
mibBuilder.exportSymbols("HIRSCHMANN-PIM-MIB", hmPIMComponentEntry=hmPIMComponentEntry, hmPIMInterfaceHelloInterval=hmPIMInterfaceHelloInterval, hmPIMCandidateRPTable=hmPIMCandidateRPTable, hmPIMIpMRouteNextHopTable=hmPIMIpMRouteNextHopTable, hmPIMMIBObjects=hmPIMMIBObjects, hmPIMIpMRouteAssertMetricPref=hmPIMIpMRouteAssertMetricPref, hmPIMIpMRouteTable=hmPIMIpMRouteTable, hmPIMDenseV2MIBGroup=hmPIMDenseV2MIBGroup, hmPIMCandidateRPEntry=hmPIMCandidateRPEntry, hmPIMJoinPruneInterval=hmPIMJoinPruneInterval, hmPIMMIBCompliances=hmPIMMIBCompliances, hmPIMCandidateRPAddress=hmPIMCandidateRPAddress, hmPIMComponentBSRExpiryTime=hmPIMComponentBSRExpiryTime, hmPIMMIB=hmPIMMIB, hmPIMIpMRouteAssertRPTBit=hmPIMIpMRouteAssertRPTBit, hmPIMComponentCRPHoldTime=hmPIMComponentCRPHoldTime, hmPIMRPLastChange=hmPIMRPLastChange, hmPIMComponentTable=hmPIMComponentTable, hmPIMNeighborIfIndex=hmPIMNeighborIfIndex, hmPIMRPSetTable=hmPIMRPSetTable, hmPIMComponentStatus=hmPIMComponentStatus, PYSNMP_MODULE_ID=hmPIMMIB, hmPIMMIBConformance=hmPIMMIBConformance, hmPIMRPStateTimer=hmPIMRPStateTimer, hmPIMNeighborTable=hmPIMNeighborTable, hmPIMNextHopGroup=hmPIMNextHopGroup, hmPIMAssertGroup=hmPIMAssertGroup, hmPIMNeighborMode=hmPIMNeighborMode, hmPIMInterfaceTable=hmPIMInterfaceTable, hmPIMRPSetAddress=hmPIMRPSetAddress, hmPIMInterfaceMode=hmPIMInterfaceMode, hmPIMSparseV2MIBCompliance=hmPIMSparseV2MIBCompliance, hmPIMRPGroupAddress=hmPIMRPGroupAddress, hmPIMCandidateRPRowStatus=hmPIMCandidateRPRowStatus, hmPIMRPSetHoldTime=hmPIMRPSetHoldTime, hmPIMRPTable=hmPIMRPTable, hmPIMIpMRouteAssertMetric=hmPIMIpMRouteAssertMetric, hmPIMMIBGroups=hmPIMMIBGroups, hmPIMInterfaceNetMask=hmPIMInterfaceNetMask, hmPIMRPSetEntry=hmPIMRPSetEntry, hmPIMInterfaceDR=hmPIMInterfaceDR, hmPIMInterfaceAddress=hmPIMInterfaceAddress, hmPIMIpMRouteEntry=hmPIMIpMRouteEntry, hmPIMV2CandidateRPMIBGroup=hmPIMV2CandidateRPMIBGroup, hmPIMNeighborEntry=hmPIMNeighborEntry, hmPIMRPSetGroupMask=hmPIMRPSetGroupMask, 
hmPIMTraps=hmPIMTraps, hmPIMIpMRouteUpstreamAssertTimer=hmPIMIpMRouteUpstreamAssertTimer, hmPIMInterfaceStatus=hmPIMInterfaceStatus, hmPIMRPAddress=hmPIMRPAddress, hmPIMInterfaceIfIndex=hmPIMInterfaceIfIndex, hmPIMRPEntry=hmPIMRPEntry, hmPIMNeighborUpTime=hmPIMNeighborUpTime, hmPIMNeighborLoss=hmPIMNeighborLoss, hmPIMInterfaceEntry=hmPIMInterfaceEntry, hmPIMNeighborAddress=hmPIMNeighborAddress, hmPIMCandidateRPGroupAddress=hmPIMCandidateRPGroupAddress, hmPIMRPRowStatus=hmPIMRPRowStatus, hmPIMDenseV2MIBCompliance=hmPIMDenseV2MIBCompliance, hmPIMNotificationGroup=hmPIMNotificationGroup, hmPIMIpMRouteNextHopPruneReason=hmPIMIpMRouteNextHopPruneReason, hmPIMCandidateRPGroupMask=hmPIMCandidateRPGroupMask, hmPIMComponentBSRAddress=hmPIMComponentBSRAddress, hmPIMRPSetComponent=hmPIMRPSetComponent, hmPIMV2MIBGroup=hmPIMV2MIBGroup, hmPIMV1MIBGroup=hmPIMV1MIBGroup, hmPIMInterfaceCBSRPreference=hmPIMInterfaceCBSRPreference, hmPIMRPSetGroupAddress=hmPIMRPSetGroupAddress, hmPIMComponentIndex=hmPIMComponentIndex, hmPIMInterfaceJoinPruneInterval=hmPIMInterfaceJoinPruneInterval, hmPIMNeighborExpiryTime=hmPIMNeighborExpiryTime, hmPIMRPSetExpiryTime=hmPIMRPSetExpiryTime, hmPIMV1MIBCompliance=hmPIMV1MIBCompliance, hmPIM=hmPIM, hmPIMIpMRouteFlags=hmPIMIpMRouteFlags, hmPIMRPState=hmPIMRPState, hmPIMIpMRouteNextHopEntry=hmPIMIpMRouteNextHopEntry)
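Every object exported above sits under the Hirschmann enterprise subtree `1.3.6.1.4.1.248.15.4.99`. A small illustrative helper (not part of the generated module) makes that invariant easy to check when working with raw OIDs:

```python
# OID prefix shared by all HIRSCHMANN-PIM-MIB objects in this module.
HIRSCHMANN_PIM_PREFIX = (1, 3, 6, 1, 4, 1, 248, 15, 4, 99)

def under_prefix(oid, prefix=HIRSCHMANN_PIM_PREFIX):
    # True if the OID tuple starts with the module's enterprise prefix.
    return tuple(oid[:len(prefix)]) == prefix

# hmPIMComponentBSRAddress lives under the prefix; a MIB-2 OID does not.
print(under_prefix((1, 3, 6, 1, 4, 1, 248, 15, 4, 99, 1, 1, 12, 1, 2)))  # True
print(under_prefix((1, 3, 6, 1, 2, 1, 1, 1)))                            # False
```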
# --- book_listing/filters.py (AdityaJ42/DJ-Comps-Book-Exchange, MIT) ---
import django_filters
from .models import Book_List


class BookFilter(django_filters.FilterSet):
    title = django_filters.CharFilter(lookup_expr="icontains")

    class Meta:
        model = Book_List
        fields = ["title", "semester", "subject", "publication"]
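What `lookup_expr="icontains"` buys here is a case-insensitive substring match on the title. A pure-Python sketch of that behavior, with the queryset stood in by a list of dicts (no Django required, names are illustrative):

```python
def icontains(rows, field, value):
    # Case-insensitive substring match, mirroring Django's __icontains lookup.
    return [row for row in rows if value.lower() in row[field].lower()]

books = [{"title": "Digital Logic"}, {"title": "Data Structures"}]
print(icontains(books, "title", "logic"))  # [{'title': 'Digital Logic'}]
```

The other fields in `Meta.fields` (`semester`, `subject`, `publication`) default to exact-match filters.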
# --- monero_glue/messages/ApplySettings.py (ph4r05/monero-agent, MIT) ---
# Automatically generated by pb2py
# fmt: off
from .. import protobuf as p
if __debug__:
    try:
        from typing import Dict, List  # noqa: F401
        from typing_extensions import Literal  # noqa: F401
    except ImportError:
        pass


class ApplySettings(p.MessageType):
    MESSAGE_WIRE_TYPE = 25

    def __init__(
        self,
        language: str = None,
        label: str = None,
        use_passphrase: bool = None,
        homescreen: bytes = None,
        auto_lock_delay_ms: int = None,
        display_rotation: int = None,
        passphrase_always_on_device: bool = None,
    ) -> None:
        self.language = language
        self.label = label
        self.use_passphrase = use_passphrase
        self.homescreen = homescreen
        self.auto_lock_delay_ms = auto_lock_delay_ms
        self.display_rotation = display_rotation
        self.passphrase_always_on_device = passphrase_always_on_device

    @classmethod
    def get_fields(cls) -> Dict:
        return {
            1: ('language', p.UnicodeType, 0),
            2: ('label', p.UnicodeType, 0),
            3: ('use_passphrase', p.BoolType, 0),
            4: ('homescreen', p.BytesType, 0),
            6: ('auto_lock_delay_ms', p.UVarintType, 0),
            7: ('display_rotation', p.UVarintType, 0),
            8: ('passphrase_always_on_device', p.BoolType, 0),
        }
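The `get_fields` map is what drives (de)serialization: each protobuf field number is bound to an attribute name and a wire type. A minimal stand-in (plain dict instead of the `protobuf` module, names assumed for illustration) shows how such a map turns field numbers back into attribute values:

```python
# Stand-in for the field map above: field number -> (attribute name, kind).
FIELDS = {
    1: ("language", "unicode"),
    2: ("label", "unicode"),
    3: ("use_passphrase", "bool"),
}

def decode(raw):
    # raw: {field_number: value} -> {attribute_name: value}; unknown
    # field numbers are skipped, as a protobuf decoder would.
    return {FIELDS[num][0]: val for num, val in raw.items() if num in FIELDS}

print(decode({1: "en", 3: True}))  # {'language': 'en', 'use_passphrase': True}
```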
# --- dotnet/workspace.bzl (FrySabotage/selenium, Apache-2.0) ---
load(
    "@d2l_rules_csharp//csharp:defs.bzl",
    "csharp_register_toolchains",
    "csharp_repositories",
    "import_nuget_package",
)
def selenium_register_dotnet():
    csharp_register_toolchains()
    csharp_repositories()
    native.register_toolchains("//third_party/dotnet/ilmerge:all")
    import_nuget_package(
        name = "json.net",
        file = "third_party/dotnet/nuget/packages/newtonsoft.json.12.0.2.nupkg",
        sha256 = "056eec5d3d8b2a93f7ca5b026d34d9d5fe8c835b11e322faf1a2551da25c4e70",
    )
    import_nuget_package(
        name = "moq",
        file = "third_party/dotnet/nuget/packages/moq.4.12.0.nupkg",
        sha256 = "339bbb71107e137a753a89c6b74adb5d9072f0916cf8f19f48b30ae29c41f434",
    )
    import_nuget_package(
        name = "benderproxy",
        file = "third_party/dotnet/nuget/packages/benderproxy.1.0.0.nupkg",
        sha256 = "fd536dc97eb71268392173e7c4c0699795a31f6843470134ee068ade1be4b57d",
    )
    import_nuget_package(
        name = "castle.core",
        file = "third_party/dotnet/nuget/packages/castle.core.4.4.0.nupkg",
        sha256 = "ee12c10079c1f9daebdb2538c37a34e5e317d800f2feb5cddd744f067d5dec66",
    )
    import_nuget_package(
        name = "nunit",
        file = "third_party/dotnet/nuget/packages/nunit.3.12.0.nupkg",
        #sha256 = "056eec5d3d8b2a93f7ca5b026d34d9d5fe8c835b11e322faf1a2551da25c4e70",
    )
    #import_nuget_package(
    #    name = "system.threading.tasks.extensions",
    #    package = "system.threading.tasks.extensions",
    #    version = "4.5.1",
    #)
    #import_nuget_package(
    #    name = "nunit",
    #    package = "nunit",
    #    version = "3.12.0",
    #)
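Each `sha256` attribute above pins the expected digest of the downloaded `.nupkg`, so a tampered or wrong-version package fails the build. The integrity check amounts to hashing the downloaded bytes and comparing, sketched here in Python (illustrative; Bazel performs this internally):

```python
import hashlib

def verify(data, expected_sha256):
    # Hash the package bytes and compare against the pinned digest.
    return hashlib.sha256(data).hexdigest() == expected_sha256

payload = b"example package bytes"
digest = hashlib.sha256(payload).hexdigest()
print(verify(payload, digest))       # True
print(verify(b"tampered", digest))   # False
```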
# --- dc09_spt/comm/transpath.py (jvanovost/dc09_spt, Apache-2.0) ---
# ----------------------------
# Transmit class
# (c 2018 van Ovost Automatisering b.v.
# Author : Jacq. van Ovost
# ----------------------------
import logging
from dc09_spt.comm.transpathtcp import TransPathTCP
from dc09_spt.comm.transpathudp import TransPathUDP


class TransPath:
    """
    Handle the basic tasks for establishing and maintaining a transmit path
    """
    def __init__(self, host, port, account, *, key=None, receiver=None, line=None, timeout=5.0, type=None):
        self.host = host
        self.port = port
        self.offset = 0
        self.timeout = timeout
        self.receiver = receiver
        if type is not None:
            self.type = type.lower()
        else:
            self.type = 'tcp'
        self.account = account
        self.key = key
        self.line = line

    def set_offset(self, offset):
        self.offset = offset

    def get_offset(self):
        return self.offset

    def get_key(self):
        return self.key

    def get_receiver(self):
        return self.receiver

    def get_line(self):
        return self.line

    def get_account(self):
        return self.account

    def connect(self):
        if self.type == 'tcp':
            conn = TransPathTCP(self.host, self.port)
        elif self.type == 'udp':
            conn = TransPathUDP(self.host, self.port)
        else:
            conn = None
            logging.error('Undefined connection type : %s', self.type)
        if conn is not None:
            conn.connect()
        return conn

    @staticmethod
    def disconnect(conn):
        if conn is not None:
            conn.disconnect()
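The core of `connect()` is a dispatch on the lower-cased `type` string, defaulting to TCP. A minimal sketch of that logic (names stand in for the real `TransPathTCP`/`TransPathUDP` instances):

```python
def pick_transport(type_string):
    # Mirror of TransPath: default to TCP, lower-case the type, and
    # return None (after logging) for unknown transport types.
    kind = (type_string or "tcp").lower()
    if kind == "tcp":
        return "TransPathTCP"
    if kind == "udp":
        return "TransPathUDP"
    return None

print(pick_transport("UDP"))   # TransPathUDP
print(pick_transport(None))    # TransPathTCP
print(pick_transport("x25"))   # None
```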
# --- tracker/management/commands/cron_update_popular.py (AB-informatica-service/swat4stats.com, MIT) ---
# -*- coding: utf-8 -*-
from __future__ import unicode_literals, absolute_import
import warnings
from django.core.management.base import BaseCommand, CommandError
from tracker import tasks


class Command(BaseCommand):
    """
    Update profile "popular" fields for the players that have played time seconds ago.

    Usage:
        python manage.py cron_update_ranks '60*60*24'

    The management command is kept for backward compatibility.
    """

    def handle(self, time, *args, **kwargs):
        warnings.warn('Use of %s is deprecated. Use celery instead.' % __name__, DeprecationWarning)
        tasks.update_popular(time_delta=eval(time))
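The command `eval()`s its argument so callers can pass expressions like `'60*60*24'`. Since `eval` on untrusted input is dangerous, a hedged alternative (an illustration, not what the project does) restricts the expression to literal integer arithmetic via the `ast` module:

```python
import ast

def parse_seconds(expr):
    # Accept only literal arithmetic (e.g. '60*60*24'); reject names,
    # calls, and anything else that eval() would happily execute.
    node = ast.parse(expr, mode="eval")
    for sub in ast.walk(node):
        if not isinstance(sub, (ast.Expression, ast.BinOp, ast.operator, ast.Constant)):
            raise ValueError("only literal arithmetic allowed")
    return eval(compile(node, "<expr>", "eval"))

print(parse_seconds("60*60*24"))  # 86400
```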
# --- yunionclient/api/k8snodes.py (yunionyun/python_yunionsdk, Apache-2.0) ---
from yunionclient.common import base


class K8sNode(base.ResourceBase):
    pass


class K8sNodeManager(base.StandaloneManager):
    resource_class = K8sNode
    keyword = 'k8s_node'
    keyword_plural = 'k8s_nodes'
    _columns = ["Name", "Id", "Status", "Creationtimestamp", "Created_At", "Cluster_Id", "Cluster"]
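The `_columns` list typically drives tabular display: each API object is projected onto the column names, with a placeholder for missing attributes. A sketch under that assumption (the projection logic and names are illustrative, not the real yunionclient API):

```python
COLUMNS = ["Name", "Id", "Status"]

def as_row(obj):
    # Look each column up by its lower-cased name; '-' marks missing fields.
    return [obj.get(col.lower(), "-") for col in COLUMNS]

print(as_row({"name": "node-1", "status": "ready"}))  # ['node-1', '-', 'ready']
```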
# --- dragonfire/config.py (ismlkrkmz/Dragonfire, MIT) ---
#!/usr/bin/python3
# -*- coding: utf-8 -*-
"""
.. module:: config
:platform: Unix
:synopsis: the module that contains the configuration for the API of Dragonfire.
.. moduleauthor:: Mehmet Mert Yıldıran <mert.yildiran@bil.omu.edu.tr>
"""
import os


class Config():
    """Class that stores the Twitter API and MySQL database connection credentials in class variables.
    """

    # Twitter Credentials
    TWITTER_CONSUMER_KEY = 'CONSUMER_KEY'
    TWITTER_CONSUMER_SECRET = 'CONSUMER_SECRET'
    TWITTER_ACCESS_KEY = 'ACCESS_KEY'
    TWITTER_ACCESS_SECRET = 'ACCESS_SECRET'

    # MySQL Credentials
    MYSQL_HOST = 'MYSQL_HOST'
    MYSQL_DB = 'MYSQL_DB'
    MYSQL_USER = 'MYSQL_USER'
    MYSQL_PASS = 'MYSQL_PASS'

    # SECRET KEY FOR JWT ENCODE/DECODE
    SUPER_SECRET_KEY = 'SUPER_SECRET_KEY'
8a42b3cda5077349dcb1056df9da1c983e65192f | 295 | py | Python | generators/python/scripts/brain/calibrate_imu.py | EndlessLoops/blockly | 8ab0a87381c2b7310bfd83c3fb7e0faa8c1bf555 | [
"Apache-2.0"
] | 1 | 2018-01-31T23:38:36.000Z | 2018-01-31T23:38:36.000Z | generators/python/scripts/brain/calibrate_imu.py | EndlessLoops/blockly | 8ab0a87381c2b7310bfd83c3fb7e0faa8c1bf555 | [
"Apache-2.0"
] | 2 | 2016-03-17T11:48:40.000Z | 2016-04-20T10:49:16.000Z | generators/python/scripts/brain/calibrate_imu.py | EndlessLoops/blockly | 8ab0a87381c2b7310bfd83c3fb7e0faa8c1bf555 | [
"Apache-2.0"
] | 21 | 2015-11-26T19:23:04.000Z | 2021-02-27T06:14:22.000Z | import rosnode
import subprocess
import time
ros_nodes = rosnode.get_node_names()
if '/imu_talker' not in ros_nodes:
    command = '/home/erle/spider_ws/install_isolated/share/ros_erle_imu/imu_talker'
    process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
    time.sleep(10)
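The guard above starts the IMU talker only when its node name is absent from the running ROS node list. That check, isolated as a pure function (illustrative; no ROS required):

```python
def needs_start(node_names, wanted="/imu_talker"):
    # Start the talker only if its node name is not already registered.
    return wanted not in node_names

print(needs_start(["/rosout"]))                 # True
print(needs_start(["/rosout", "/imu_talker"]))  # False
```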
# --- aiida_restapi/aiida_db_mappings.py (janssenhenning/aiida-restapi, MIT) ---
# -*- coding: utf-8 -*-
"""The 'source of truth' for mapping AiiDA's database table models to pydantic models.
Note in the future we may want to do this programmatically, however, there are two issues:
- AiiDA uses both SQLAlchemy and Django backends, so one would need to be chosen
- Neither model includes descriptions of fields
"""
from datetime import datetime
from typing import Dict, Optional, Type
from uuid import UUID
from aiida import orm
from pydantic import BaseModel, Field, Json


class AuthInfo(BaseModel):
    """AiiDA AuthInfo SQL table fields."""

    id: int = Field(description="Unique id (pk)")
    aiidauser_id: int = Field(description="Relates to user")
    dbcomputer_id: int = Field(description="Relates to computer")
    metadata: Json = Field(description="Metadata of the authorisation")
    auth_params: Json = Field(description="Parameters of the authorisation")
    enabled: bool = Field(description="Whether the computer is enabled", default=True)


class Comment(BaseModel):
    """AiiDA Comment SQL table fields."""

    id: int = Field(description="Unique id (pk)")
    uuid: UUID = Field(description="Universally unique id")
    ctime: datetime = Field(description="Creation time")
    mtime: datetime = Field(description="Last modification time")
    content: Optional[str] = Field(description="Content of the comment")
    user_id: int = Field(description="Created by user id (pk)")
    dbnode_id: int = Field(description="Associated node id (pk)")


class Computer(BaseModel):
    """AiiDA Computer SQL table fields."""

    id: int = Field(description="Unique id (pk)")
    uuid: UUID = Field(description="Universally unique id")
    name: str = Field(description="Computer name")
    hostname: str = Field(description="Identifier for the computer within the network")
    description: Optional[str] = Field(description="Description of the computer")
    scheduler_type: str = Field(
        description="Scheduler plugin type, to manage compute jobs"
    )
    transport_type: str = Field(
        description="Transport plugin type, to manage file transfers"
    )
    metadata: Json = Field(description="Metadata of the computer")


class Group(BaseModel):
    """AiiDA Group SQL table fields."""

    id: int = Field(description="Unique id (pk)")
    uuid: UUID = Field(description="Universally unique id")
    label: str = Field(description="Label of group")
    type_string: str = Field(description="type of the group")
    time: datetime = Field(description="Created time")
    description: Optional[str] = Field(description="Description of group")
    extras: Json = Field(description="extra data about for the group")
    user_id: int = Field(description="Created by user id (pk)")


class Log(BaseModel):
    """AiiDA Log SQL table fields."""

    id: int = Field(description="Unique id (pk)")
    uuid: UUID = Field(description="Universally unique id")
    time: datetime = Field(description="Creation time")
    loggername: str = Field(description="The loggers name")
    levelname: str = Field(description="The log level")
    message: Optional[str] = Field(description="The log message")
    metadata: Json = Field(description="Metadata associated with the log")
    dbnode_id: int = Field(description="Associated node id (pk)")


class Node(BaseModel):
    """AiiDA Node SQL table fields."""

    id: int = Field(description="Unique id (pk)")
    uuid: UUID = Field(description="Universally unique id")
    node_type: str = Field(description="Node type")
    process_type: str = Field(description="Process type")
    label: str = Field(description="Label of node")
    description: str = Field(description="Description of node")
    ctime: datetime = Field(description="Creation time")
    mtime: datetime = Field(description="Last modification time")
    user_id: int = Field(description="Created by user id (pk)")
    dbcomputer_id: Optional[int] = Field(description="Associated computer id (pk)")
    attributes: Json = Field(
        description="Attributes of the node (immutable after storing the node)",
    )
    extras: Json = Field(
        description="Extra attributes of the node (mutable)",
    )


class User(BaseModel):
    """AiiDA User SQL table fields."""

    id: int = Field(description="Unique id (pk)")
    email: str = Field(description="Email address of the user")
    first_name: Optional[str] = Field(description="First name of the user")
    last_name: Optional[str] = Field(description="Last name of the user")
    institution: Optional[str] = Field(
        description="Host institution or workplace of the user"
    )


class Link(BaseModel):
    """AiiDA Link SQL table fields."""

    id: int = Field(description="Unique id (pk)")
    input_id: int = Field(description="Unique id (pk) of the input node")
    output_id: int = Field(description="Unique id (pk) of the output node")
    label: Optional[str] = Field(description="The label of the link")
    type: str = Field(description="The type of link")


ORM_MAPPING: Dict[str, Type[BaseModel]] = {
    "AuthInfo": AuthInfo,
    "Comment": Comment,
    "Computer": Computer,
    "Group": Group,
    "Log": Log,
    "Node": Node,
    "User": User,
    "Link": Link,
}
def get_model_from_orm(
orm_cls: Type[orm.Entity], allow_subclasses: bool = True
) -> Type[BaseModel]:
"""Return the pydantic model related to the orm class.
:param allow_subclasses: Return the base class mapping for subclasses
"""
if orm_cls.__name__ in ORM_MAPPING:
return ORM_MAPPING[orm_cls.__name__]
if allow_subclasses and issubclass(orm_cls, orm.nodes.Node):
return Node
if allow_subclasses and issubclass(orm_cls, orm.Group):
return Group
raise KeyError(f"{orm_cls}")
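The `ORM_MAPPING` / `get_model_from_orm` pair above implements a name-based lookup with a subclass fallback. A minimal standalone sketch of that same pattern follows; the classes and mapping values here are illustrative stand-ins, not AiiDA's actual ORM or models:

```python
class Entity:
    """Stand-in for an ORM base class (illustrative only)."""

class Node(Entity):
    """Stand-in for a registered ORM class."""

class CalcJobNode(Node):
    """A subclass that is NOT registered directly in the mapping."""

MAPPING = {"Entity": "EntityModel", "Node": "NodeModel"}

def get_model(orm_cls, allow_subclasses=True):
    """Exact-name lookup first, then walk the MRO for a registered base."""
    if orm_cls.__name__ in MAPPING:
        return MAPPING[orm_cls.__name__]
    if allow_subclasses:
        for base in orm_cls.__mro__[1:]:
            if base.__name__ in MAPPING:
                return MAPPING[base.__name__]
    raise KeyError(orm_cls.__name__)
```

Walking the MRO generalizes the hard-coded `issubclass` checks above: `get_model(CalcJobNode)` resolves to `"NodeModel"` via its base class, while passing `allow_subclasses=False` raises `KeyError`.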
# File: main.py (repo: iiiCode/network-diagnostic-tools, license: MIT)
"""
Music classroom network diagnostic tools entrance.
Author: Yunchao Chen
Date: 2016-6-29
"""
import log
import classroom
from config import MSG_DATA
def log_init():
log_file = log.generate_log_file_name()
log.open_file(log_file)
def log_destroy():
log.close_file()
def pause_until_meet_enter_key():
    print("")
    input("")
def main():
log_init()
log.write("NETWORK_DIAGNOSTIC_START")
classroom.load_music_classroom_test()
log.write("NETWORK_DIAGNOSTIC_FINISH")
pause_until_meet_enter_key()
log_destroy()
if __name__ == "__main__":
    main()
# File: __init__.py (repo: Yoshanuikabundi/JupyterJoy, license: MIT)
__all__ = ['tophat', 'mdpbuild', 'brokeh', 'funfuncs', 'listviews']
# File: tests/api_client/test_json_parser.py (repo: saxsys/flask-meetup-data-scraper, license: MIT)
import time
from datetime import datetime
from time import sleep
import pytest
from meetup_search.meetup_api_client.exceptions import (
EventAlreadyExists,
InvalidResponse,
)
from meetup_search.meetup_api_client.json_parser import (
get_category_from_response,
get_event_from_response,
get_event_host_from_response,
get_group_from_response,
get_group_organizer_from_response,
get_meta_category_from_response,
get_topic_from_response,
get_venue_from_response,
)
from meetup_search.models.group import Event, Group, Topic
from tests.meetup_api_demo_response import (
get_category_response,
get_event_host_response,
get_event_response,
get_group_response,
get_member_response,
get_meta_category_response,
get_topic_response,
get_venue_response,
)
def test_get_group_from_response():
# set group response
group_1_response: dict = get_group_response(
meetup_id=1, urlname="group_1", content=False
)
group_2_response: dict = get_group_response(
meetup_id=2, urlname="group_2", content=True
)
# get group model
group_1: Group = get_group_from_response(response=group_1_response)
group_2: Group = get_group_from_response(response=group_2_response)
# assert group_1
assert isinstance(group_1, Group)
assert group_1.meetup_id == group_1_response["id"]
assert group_1.urlname == group_1_response["urlname"]
assert (
time.mktime(group_1.created.timetuple()) == group_1_response["created"] / 1000
)
assert group_1.description == group_1_response["description"]
assert group_1.location == {
"lat": group_1_response["lat"],
"lon": group_1_response["lon"],
}
assert group_1.link == group_1_response["link"]
assert group_1.members == group_1_response["members"]
assert group_1.name == group_1_response["name"]
assert group_1.nomination_acceptable is False
assert group_1.status == group_1_response["status"]
assert group_1.timezone == group_1_response["timezone"]
assert group_1.visibility == group_1_response["visibility"]
assert group_1.short_link is None
assert group_1.welcome_message is None
assert group_1.city is None
assert group_1.city_link is None
assert group_1.untranslated_city is None
assert group_1.country is None
assert group_1.localized_country_name is None
assert group_1.localized_location is None
assert group_1.state is None
assert group_1.join_mode is None
assert group_1.fee_options_currencies_code is None
assert group_1.fee_options_currencies_default is None
assert group_1.fee_options_type is None
assert group_1.member_limit is None
assert group_1.who is None
# category
assert group_1.category_id is None
assert group_1.category_name is None
assert group_1.category_shortname is None
assert group_1.category_sort_name is None
# meta_category
assert group_1.meta_category_id is None
assert group_1.meta_category_shortname is None
assert group_1.meta_category_name is None
assert group_1.meta_category_sort_name is None
# topics
assert len(group_1.topics) == 0
# organizer
assert group_1.organizer_id is None
assert group_1.organizer_name is None
assert group_1.organizer_bio is None
# assert group_2
assert isinstance(group_2, Group)
assert group_2.meetup_id == group_2_response["id"]
assert group_2.urlname == group_2_response["urlname"]
assert (
time.mktime(group_2.created.timetuple()) == group_2_response["created"] / 1000
)
assert group_2.description == group_2_response["description"]
assert group_2.location == {
"lat": group_2_response["lat"],
"lon": group_2_response["lon"],
}
assert group_2.link == group_2_response["link"]
assert group_2.members == group_2_response["members"]
assert group_2.name == group_2_response["name"]
assert group_2.nomination_acceptable is True
assert group_2.status == group_2_response["status"]
assert group_2.timezone == group_2_response["timezone"]
assert group_2.visibility == group_2_response["visibility"]
assert group_2.short_link == group_2_response["short_link"]
assert group_2.welcome_message == group_2_response["welcome_message"]
assert group_2.city == group_2_response["city"]
assert group_2.city_link == group_2_response["city_link"]
assert group_2.untranslated_city == group_2_response["untranslated_city"]
assert group_2.country == group_2_response["country"]
assert group_2.localized_country_name == group_2_response["localized_country_name"]
assert group_2.localized_location == group_2_response["localized_location"]
assert group_2.state == group_2_response["state"]
assert group_2.join_mode == group_2_response["join_mode"]
assert (
group_2.fee_options_currencies_code
== group_2_response["fee_options"]["currencies"]["code"]
)
assert (
group_2.fee_options_currencies_default
== group_2_response["fee_options"]["currencies"]["default"]
)
assert group_2.fee_options_type == group_2_response["fee_options"]["type"]
assert group_2.member_limit == group_2_response["member_limit"]
assert group_2.who == group_2_response["who"]
# category
assert group_2.category_id == group_2_response["category"]["id"]
assert group_2.category_name is None
assert group_2.category_shortname is None
assert group_2.category_sort_name is None
# meta_category
assert group_2.meta_category_id == group_2_response["meta_category"]["id"]
assert (
group_2.meta_category_shortname
== group_2_response["meta_category"]["shortname"]
)
assert group_2.meta_category_name == group_2_response["meta_category"]["name"]
assert (
group_2.meta_category_sort_name
== group_2_response["meta_category"]["sort_name"]
)
# topics
assert len(group_2.topics) == 1
assert isinstance(group_2.topics[0], Topic)
# organizer
assert group_2.organizer_id == group_2_response["organizer"]["id"]
assert group_2.organizer_name is None
assert group_2.organizer_bio is None
def test_get_event_from_response():
# set group model
group_1: Group = get_group_from_response(
response=get_group_response(urlname="group_event_1")
)
# set event response
event_1_response: dict = get_event_response(meetup_id="1", content=False)
event_2_response: dict = get_event_response(meetup_id="2", content=True)
# get event model
event_1: Event = get_event_from_response(response=event_1_response, group=group_1)
event_2: Event = get_event_from_response(response=event_2_response, group=group_1)
# assert event_1
assert isinstance(event_1, Event)
assert event_1.meetup_id == event_1_response["id"]
assert event_1.time == datetime.fromtimestamp(event_1_response["time"] / 1000)
assert event_1.created is None
assert event_1.name == event_1_response["name"]
assert event_1.link == event_1_response["link"]
assert event_1.attendance_count is None
assert event_1.attendance_sample is None
assert event_1.attendee_sample is None
assert event_1.date_in_series_pattern is False
    assert event_1.description is None
assert event_1.duration is None
    # fee
assert event_1.fee_accepts is None
assert event_1.fee_amount is None
assert event_1.fee_currency is None
assert event_1.fee_description is None
assert event_1.fee_label is None
assert event_1.how_to_find_us is None
assert event_1.status is None
assert event_1.updated is None
assert event_1.utc_offset is None
# venue
assert event_1.venue_visibility is None
assert event_1.visibility is None
# save group
group_1.add_event(event_1)
group_1.add_event(event_2)
group_1.save()
sleep(1)
# assert event_2
assert isinstance(event_2, Event)
assert event_2.meetup_id == event_2_response["id"]
assert event_2.time == datetime.fromtimestamp(event_2_response["time"] / 1000)
assert event_2.created == datetime.fromtimestamp(event_2_response["created"] / 1000)
assert event_2.name == event_2_response["name"]
assert event_2.link == event_2_response["link"]
assert event_2.attendance_count == event_2_response["attendance_count"]
assert event_2.attendance_sample == event_2_response["attendance_sample"]
assert event_2.attendee_sample == event_2_response["attendee_sample"]
assert event_2.date_in_series_pattern == event_2_response["date_in_series_pattern"]
    assert event_2.description == event_2_response["description"]
assert event_2.duration == event_2_response["duration"]
    # fee
assert event_2.fee_accepts == event_2_response["fee"]["accepts"]
assert event_2.fee_amount == event_2_response["fee"]["amount"]
assert event_2.fee_currency == event_2_response["fee"]["currency"]
assert event_2.fee_description == event_2_response["fee"]["description"]
assert event_2.fee_label == event_2_response["fee"]["label"]
assert event_2.how_to_find_us == event_2_response["how_to_find_us"]
assert event_2.status == event_2_response["status"]
assert event_2.updated == datetime.fromtimestamp(event_2_response["updated"] / 1000)
assert event_2.utc_offset == event_2_response["utc_offset"] / 1000
# venue
assert event_2.venue_visibility == event_2_response["venue_visibility"]
assert event_2.visibility == event_2_response["visibility"]
# check when event already exists
with pytest.raises(EventAlreadyExists):
get_event_from_response(response=event_1_response, group=group_1)
# check when with invalid response
with pytest.raises(InvalidResponse):
del event_1_response["id"]
get_event_from_response(response=event_1_response, group=group_1)
def test_get_venue_from_response():
# set group model
group_1: Group = get_group_from_response(
response=get_group_response(urlname="group_venue_1")
)
# set event response
event_1_response: dict = get_event_response(meetup_id="1", content=False)
# set venue response
venue_1_response: dict = get_venue_response(content=False)
venue_2_response: dict = get_venue_response(content=True)
venue_3_response: dict = get_venue_response(
content=True, lat=37.387474060058594, lon=-122.05754089355469
)
# get event model
event_1: Event = get_event_from_response(response=event_1_response, group=group_1)
    # get venue_1 from response
event_2: Event = get_venue_from_response(response=venue_1_response, event=event_1)
# assert event_2
assert isinstance(event_2, Event)
assert event_2.venue_address_1 is None
assert event_2.venue_address_2 is None
assert event_2.venue_address_3 is None
assert event_2.venue_city is None
assert event_2.venue_country is None
assert event_2.venue_localized_country_name is None
assert event_2.venue_name is None
assert event_2.venue_phone is None
assert event_2.venue_zip_code is None
assert event_2.venue_location is None
    # get venue_2 from response
event_3: Event = get_venue_from_response(response=venue_2_response, event=event_1)
# assert event_3
assert isinstance(event_3, Event)
assert event_3.venue_address_1 == venue_2_response["address_1"]
assert event_3.venue_address_2 == venue_2_response["address_2"]
assert event_3.venue_address_3 == venue_2_response["address_3"]
assert event_3.venue_city == venue_2_response["city"]
assert event_3.venue_country == venue_2_response["country"]
assert (
event_3.venue_localized_country_name
== venue_2_response["localized_country_name"]
)
assert event_3.venue_name == venue_2_response["name"]
assert event_3.venue_phone == venue_2_response["phone"]
assert event_3.venue_zip_code == venue_2_response["zip_code"]
assert event_3.venue_location == {
"lat": venue_2_response["lat"],
"lon": venue_2_response["lon"],
}
    # get venue_3 from response
event_4: Event = get_venue_from_response(response=venue_3_response, event=event_1)
    # assert event_4
assert isinstance(event_4, Event)
assert event_4.venue_location == {
"lat": venue_3_response["lat"],
"lon": venue_3_response["lon"],
}
# save group
group_1.save()
def test_get_event_host_from_response():
# set group model
group_1: Group = get_group_from_response(
response=get_group_response(urlname="group_event_host_1")
)
# set event response
event_1_response: dict = get_event_response(meetup_id="1", content=False)
# set event_host response
event_host_1_response: dict = get_event_host_response(content=False)
event_host_2_response: dict = get_event_host_response(content=True)
# get event model
event_1: Event = get_event_from_response(response=event_1_response, group=group_1)
    # get event_host_2 from response
event_2: Event = get_event_host_from_response(
response=event_host_1_response, event=event_1
)
# assert event_2
assert isinstance(event_2, Event)
assert event_2.event_host_host_count is None
assert event_2.event_host_id is None
assert event_2.event_host_intro is None
assert event_2.event_host_join_date is None
assert event_2.event_host_name is None
    # get event_host_3 from response
event_3: Event = get_event_host_from_response(
response=event_host_2_response, event=event_1
)
# assert event_3
assert isinstance(event_3, Event)
assert event_3.event_host_host_count == event_host_2_response["host_count"]
assert event_3.event_host_id == event_host_2_response["id"]
assert event_3.event_host_intro == event_host_2_response["intro"]
assert (
time.mktime(event_3.event_host_join_date.timetuple())
== event_host_2_response["join_date"] / 1000
)
assert event_3.event_host_name == event_host_2_response["name"]
# save group
group_1.save()
def test_get_group_organizer_from_response():
# set group model
group_1: Group = get_group_from_response(
response=get_group_response(urlname="group_organizer_1")
)
# set organizer response
organizer_1_response: dict = get_member_response(content=False)
organizer_2_response: dict = get_member_response(content=True)
    # get organizer from response
group_2: Group = get_group_organizer_from_response(
response=organizer_1_response, group=group_1
)
# assert group
assert isinstance(group_2, Group)
assert group_2.organizer_id == organizer_2_response["id"]
assert group_2.organizer_name is None
assert group_2.organizer_bio is None
    # get organizer from response
group_3: Group = get_group_organizer_from_response(
response=organizer_2_response, group=group_1
)
# assert group
assert isinstance(group_3, Group)
assert group_3.organizer_id == organizer_2_response["id"]
assert group_3.organizer_name == organizer_2_response["name"]
assert group_3.organizer_bio == organizer_2_response["bio"]
# save group
group_1.save()
def test_get_category_from_response():
# set group model
group_1: Group = get_group_from_response(
response=get_group_response(urlname="group_category_1")
)
# set category response
category_1_response: dict = get_category_response(content=False)
category_2_response: dict = get_category_response(content=True)
    # get category from response
group_2: Group = get_category_from_response(
response=category_1_response, group=group_1
)
# assert group
assert isinstance(group_2, Group)
assert group_2.category_id == category_1_response["id"]
assert group_2.category_name is None
assert group_2.category_shortname is None
assert group_2.category_sort_name is None
    # get category from response
group_3: Group = get_category_from_response(
response=category_2_response, group=group_1
)
# assert group
assert isinstance(group_3, Group)
assert group_3.category_id == category_2_response["id"]
assert group_3.category_name == category_2_response["name"]
assert group_3.category_shortname == category_2_response["shortname"]
assert group_3.category_sort_name == category_2_response["sort_name"]
# save groups
group_1.save()
group_2.save()
group_3.save()
def test_get_meta_category_from_response():
# set group model
group_1: Group = get_group_from_response(
response=get_group_response(urlname="group_meta_category_1")
)
# set meta_category response
meta_category_1_response: dict = get_meta_category_response()
    # get meta_category from response
    group_2: Group = get_meta_category_from_response(
response=meta_category_1_response, group=group_1
)
# assert group
assert isinstance(group_2, Group)
assert group_2.meta_category_id == meta_category_1_response["id"]
assert group_2.meta_category_name == meta_category_1_response["name"]
assert group_2.meta_category_shortname == meta_category_1_response["shortname"]
assert group_2.meta_category_sort_name == meta_category_1_response["sort_name"]
# save group
group_1.save()
def test_get_topic_from_response():
# set group model
group_1: Group = get_group_from_response(
response=get_group_response(urlname="group_topic_1")
)
# set topic response
topic_1_response: dict = get_topic_response(meetup_id=1)
    # get topic from response
topic_1: Topic = get_topic_from_response(response=topic_1_response)
# assert topic
assert isinstance(topic_1, Topic)
assert topic_1.meetup_id == topic_1_response["id"]
assert topic_1.lang == topic_1_response["lang"]
assert topic_1.name == topic_1_response["name"]
assert topic_1.urlkey == topic_1_response["urlkey"]
# save group
group_1.save()
# File: piccolo/apps/migrations/piccolo_app.py (repo: coder3112/piccolo, license: MIT)
from piccolo.conf.apps import AppConfig
from .commands.backwards import backwards
from .commands.check import check
from .commands.clean import clean
from .commands.forwards import forwards
from .commands.new import new
APP_CONFIG = AppConfig(
app_name="migrations",
migrations_folder_path="",
commands=[backwards, check, clean, forwards, new],
)
# File: tests/test_fast_pot.py (repo: mudigonda/hmc_scripts, license: MIT)
from mjhmc.search.objective import obj_func
from mjhmc.samplers.markov_jump_hmc import MarkovJumpHMC
from mjhmc.misc.distributions import ProductOfT
from mjhmc.misc.autocor import autocorrelation
from mjhmc.misc.autocor import sample_to_df
from mjhmc.fast import hmc
from scipy.sparse import rand
import numpy as np
import theano.tensor as T
import matplotlib.pyplot as plt
np.random.seed(2016)
nbasis = 36
ndims = 12
n_steps = 1000
half_window = False
rand_val = rand(ndims, nbasis // 2, density=0.1)
W = np.concatenate([rand_val.toarray(), -rand_val.toarray()],axis=1)
logalpha = np.random.randn(nbasis,1)
PoT_instance = ProductOfT(nbatch=100,ndims=ndims,nbasis=nbasis,W=W,logalpha=logalpha)
simulate = hmc.wrapper_hmc(s_rng=s_rng,energy_fn=rw.E_val,dim=dim)
a = np.zeros([1, 10, n_steps])
b = np.zeros([2, 10, n_steps])
for ii in np.arange(n_steps):
    a[:, :, ii], b[:, :, ii] = simulate()
    print("Acceptance Rate")
    print(a[:, :, ii])
    print("Samples")
    print(b[:, :, ii])
ac_df = autocorrelation(df, half_window)
# Now, we can run the same autocorrelation from the data we will extract from the data frame
# but with the theano function
ac = hmc.normed_autocorrelation(df)
# We can compare the two plots individually and on a single plot
# Drop mic and leave.
n_grad_evals = ac_df['num grad'].astype(int)
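The script leans on `autocorrelation` and `hmc.normed_autocorrelation` from mjhmc. As a reference for what a normalized autocorrelation computes, here is a minimal NumPy version; this is an independent sketch of the standard definition, not the mjhmc implementation:

```python
import numpy as np

def normed_autocorrelation(x: np.ndarray) -> np.ndarray:
    """Autocorrelation of a 1-D series, normalized so the lag-0 value is 1."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()          # remove the mean before correlating
    n = len(x)
    # full cross-correlation of the series with itself, then keep lags >= 0
    acf = np.correlate(x, x, mode="full")[n - 1:]
    return acf / acf[0]
```

For a smoothly varying series the result starts at 1.0 and decays with increasing lag, which is the quantity the `num grad` column above is meant to be plotted against.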
# File: backend/keys/models/key_pair.py (repo: block-id/wallet, license: Apache-2.0)
from django.db import models
from django.contrib.auth import get_user_model
class KeyPair(models.Model):
user = models.OneToOneField(get_user_model(), on_delete=models.CASCADE)
private_key = models.TextField()
public_key = models.TextField()
| 28.444444 | 75 | 0.773438 | 35 | 256 | 5.457143 | 0.571429 | 0.104712 | 0.125654 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132813 | 256 | 8 | 76 | 32 | 0.86036 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
8a856edba2de7187fd1e6c83c875cf8dc6af883c | 290 | py | Python | setuplocalmdi.py | asutera/Local-MDI-importance | e195cb3aad5705b29da0b619138967be631ae794 | [
"BSD-3-Clause"
] | 4 | 2021-11-04T05:20:17.000Z | 2021-11-04T23:08:03.000Z | setuplocalmdi.py | asutera/Local-MDI-importance | e195cb3aad5705b29da0b619138967be631ae794 | [
"BSD-3-Clause"
] | null | null | null | setuplocalmdi.py | asutera/Local-MDI-importance | e195cb3aad5705b29da0b619138967be631ae794 | [
"BSD-3-Clause"
] | null | null | null | """
Author: Antonio Sutera (sutera.antonio@gmail.com)
License: BSD 3 clause
"""
# To compile (in the directory):
# > python setuplocalmdi.py build_ext --inplace
from distutils.core import setup
from Cython.Build import cythonize
setup(
ext_modules = cythonize("LocalMDI_cy.pyx")
)
# File: examples/sensortag.py (repo: satyajitghana/bleak, license: MIT)
# -*- coding: utf-8 -*-
"""
TI CC2650 SensorTag
-------------------
An example connecting to a TI CC2650 SensorTag.
Created on 2018-01-10 by hbldh <henrik.blidh@nedomkull.com>
"""
import platform
import logging
import asyncio
from bleak import BleakClient
from bleak import _logger as logger
from bleak.uuids import uuid16_dict
ALL_SENSORTAG_CHARACTERISTIC_UUIDS = """
00002a00-0000-1000-8000-00805f9b34fb
00002a01-0000-1000-8000-00805f9b34fb
00002a04-0000-1000-8000-00805f9b34fb
00002a23-0000-1000-8000-00805f9b34fb
00002a24-0000-1000-8000-00805f9b34fb
00002a25-0000-1000-8000-00805f9b34fb
00002a26-0000-1000-8000-00805f9b34fb
00002a27-0000-1000-8000-00805f9b34fb
00002a28-0000-1000-8000-00805f9b34fb
00002a29-0000-1000-8000-00805f9b34fb
00002a2a-0000-1000-8000-00805f9b34fb
00002a50-0000-1000-8000-00805f9b34fb
00002a19-0000-1000-8000-00805f9b34fb
f000aa01-0451-4000-b000-000000000000
f000aa02-0451-4000-b000-000000000000
f000aa03-0451-4000-b000-000000000000
f000aa21-0451-4000-b000-000000000000
f000aa22-0451-4000-b000-000000000000
f000aa23-0451-4000-b000-000000000000
f000aa41-0451-4000-b000-000000000000
f000aa42-0451-4000-b000-000000000000
f000aa44-0451-4000-b000-000000000000
f000aa81-0451-4000-b000-000000000000
f000aa82-0451-4000-b000-000000000000
f000aa83-0451-4000-b000-000000000000
f000aa71-0451-4000-b000-000000000000
f000aa72-0451-4000-b000-000000000000
f000aa73-0451-4000-b000-000000000000
0000ffe1-0000-1000-8000-00805f9b34fb
f000aa65-0451-4000-b000-000000000000
f000aa66-0451-4000-b000-000000000000
f000ac01-0451-4000-b000-000000000000
f000ac02-0451-4000-b000-000000000000
f000ac03-0451-4000-b000-000000000000
f000ccc1-0451-4000-b000-000000000000
f000ccc2-0451-4000-b000-000000000000
f000ccc3-0451-4000-b000-000000000000
f000ffc1-0451-4000-b000-000000000000
f000ffc2-0451-4000-b000-000000000000
f000ffc3-0451-4000-b000-000000000000
f000ffc4-0451-4000-b000-000000000000
"""
uuid16_dict = {v: k for k, v in uuid16_dict.items()}
SYSTEM_ID_UUID = "0000{0:x}-0000-1000-8000-00805f9b34fb".format(
uuid16_dict.get("System ID")
)
MODEL_NBR_UUID = "0000{0:x}-0000-1000-8000-00805f9b34fb".format(
uuid16_dict.get("Model Number String")
)
DEVICE_NAME_UUID = "0000{0:x}-0000-1000-8000-00805f9b34fb".format(
uuid16_dict.get("Device Name")
)
FIRMWARE_REV_UUID = "0000{0:x}-0000-1000-8000-00805f9b34fb".format(
uuid16_dict.get("Firmware Revision String")
)
HARDWARE_REV_UUID = "0000{0:x}-0000-1000-8000-00805f9b34fb".format(
uuid16_dict.get("Hardware Revision String")
)
SOFTWARE_REV_UUID = "0000{0:x}-0000-1000-8000-00805f9b34fb".format(
uuid16_dict.get("Software Revision String")
)
MANUFACTURER_NAME_UUID = "0000{0:x}-0000-1000-8000-00805f9b34fb".format(
uuid16_dict.get("Manufacturer Name String")
)
BATTERY_LEVEL_UUID = "0000{0:x}-0000-1000-8000-00805f9b34fb".format(
uuid16_dict.get("Battery Level")
)
KEY_PRESS_UUID = "0000{0:x}-0000-1000-8000-00805f9b34fb".format(0xFFE1)
# I/O test points on SensorTag.
IO_DATA_CHAR_UUID = "f000aa65-0451-4000-b000-000000000000"
IO_CONFIG_CHAR_UUID = "f000aa66-0451-4000-b000-000000000000"
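Every `"0000{0:x}-0000-1000-8000-00805f9b34fb".format(...)` call above performs the same expansion of a 16-bit Bluetooth SIG assigned number onto the 128-bit Bluetooth Base UUID. A small helper capturing that pattern is sketched below; `uuid16_to_uuid128` is a hypothetical name, not part of bleak, and it pads with `04x` so values below 0x1000 still produce a well-formed UUID:

```python
BT_BASE_UUID_SUFFIX = "-0000-1000-8000-00805f9b34fb"

def uuid16_to_uuid128(uuid16: int) -> str:
    """Expand a 16-bit Bluetooth assigned number into a 128-bit UUID string."""
    if not 0 <= uuid16 <= 0xFFFF:
        raise ValueError(f"not a 16-bit value: {uuid16:#x}")
    # zero-pad to four hex digits and splice onto the Bluetooth Base UUID
    return f"0000{uuid16:04x}{BT_BASE_UUID_SUFFIX}"
```

With this helper, `KEY_PRESS_UUID` would simply be `uuid16_to_uuid128(0xFFE1)`.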
async def run(address, loop, debug=False):
if debug:
import sys
# loop.set_debug(True)
# l = logging.getLogger("asyncio")
# l.setLevel(logging.DEBUG)
# h = logging.StreamHandler(sys.stdout)
# h.setLevel(logging.DEBUG)
# l.addHandler(h)
async with BleakClient(address, timeout=1.0, loop=loop) as client:
x = await client.is_connected()
logger.info("Connected: {0}".format(x))
system_id = await client.read_gatt_char(SYSTEM_ID_UUID)
print(
"System ID: {0}".format(
":".join(["{:02x}".format(x) for x in system_id[::-1]])
)
)
model_number = await client.read_gatt_char(MODEL_NBR_UUID)
print("Model Number: {0}".format("".join(map(chr, model_number))))
try:
device_name = await client.read_gatt_char(DEVICE_NAME_UUID)
print("Device Name: {0}".format("".join(map(chr, device_name))))
        except Exception:
pass
manufacturer_name = await client.read_gatt_char(MANUFACTURER_NAME_UUID)
print("Manufacturer Name: {0}".format("".join(map(chr, manufacturer_name))))
firmware_revision = await client.read_gatt_char(FIRMWARE_REV_UUID)
print("Firmware Revision: {0}".format("".join(map(chr, firmware_revision))))
hardware_revision = await client.read_gatt_char(HARDWARE_REV_UUID)
print("Hardware Revision: {0}".format("".join(map(chr, hardware_revision))))
software_revision = await client.read_gatt_char(SOFTWARE_REV_UUID)
print("Software Revision: {0}".format("".join(map(chr, software_revision))))
battery_level = await client.read_gatt_char(BATTERY_LEVEL_UUID)
print("Battery Level: {0}%".format(int(battery_level[0])))
def keypress_handler(sender, data):
print("{0}: {1}".format(sender, data))
write_value = bytearray([0xA0])
value = await client.read_gatt_char(IO_DATA_CHAR_UUID)
print("I/O Data Pre-Write Value: {0}".format(value))
await client.write_gatt_char(IO_DATA_CHAR_UUID, write_value)
value = await client.read_gatt_char(IO_DATA_CHAR_UUID)
print("I/O Data Post-Write Value: {0}".format(value))
assert value == write_value
await client.start_notify(KEY_PRESS_UUID, keypress_handler)
await asyncio.sleep(5.0, loop=loop)
await client.stop_notify(KEY_PRESS_UUID)
if __name__ == "__main__":
    import os

    os.environ["PYTHONASYNCIODEBUG"] = str(1)
    address = (
        "24:71:89:cc:09:05"
        if platform.system() != "Darwin"
        else "B9EA5233-37EF-4DD6-87A8-2A875E821C46"
    )
    loop = asyncio.get_event_loop()
    loop.run_until_complete(run(address, loop, True))
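# The "System ID" print above reverses the raw characteristic bytes before
# hex-formatting them, because BLE transmits multi-byte values little-endian.
# A minimal standalone sketch of that transformation, on made-up bytes:
#
#     # Hypothetical raw bytes as a BLE stack might return them (LSB first).
#     system_id = bytearray([0x05, 0x09, 0xCC, 0x00, 0x00, 0x89, 0x71, 0x24])
#
#     # Reverse to most-significant-byte-first, then render colon-separated hex.
#     formatted = ":".join("{:02x}".format(b) for b in system_id[::-1])
#     print(formatted)  # 24:71:89:00:00:cc:09:05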
# File: src/test/resources/mockserver/app/app.py (peterlcole/xlr-variable-setter-plugin, MIT license)
#!flask/bin/python
#
# Copyright 2019 XEBIALABS
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
from flask import Flask
from flask import request, Response
from flask import make_response
from werkzeug.exceptions import HTTPException, BadRequest, NotFound
from functools import wraps
import os, io, json
app = Flask(__name__)
# These are the username and password we expect
expectedUN = "xlr_test"
expectedPW = "admin"
def getFile(fileName, status="200"):
    filePath = "/mockserver/responses/%s" % fileName

    if not os.path.isfile(filePath):
        raise NotFound("Unable to load response file")

    # Close the response file promptly instead of leaking the handle.
    with io.open(filePath, "r", encoding="utf-8") as f:
        resp = make_response((f.read(), status))
    resp.headers['Content-Type'] = 'text/*; charset=utf-8'
    return resp


def check_auth(username, password):
    """This function is called to check if a username /
    password combination is valid.
    """
    return username == expectedUN and password == expectedPW


def authenticate():
    """Sends a 401 response that enables basic auth"""
    return Response(
        'Could not verify your access level for that URL.\n'
        'You have to login with proper credentials', 401,
        {'WWW-Authenticate': 'Basic realm="Login Required"'})


def requires_auth(f):
    """
    Determines if the basic auth is valid
    """
    @wraps(f)
    def decorated(*args, **kwargs):
        auth = request.authorization
        if not auth or not check_auth(auth.username, auth.password):
            return authenticate()
        return f(*args, **kwargs)
    return decorated


@app.route('/')
def index():
    return "Hello, from the Mockserver!"


@app.route('/project1/repo1/<filename>', methods=['GET'])
@requires_auth
def getRequestedFile(filename):
    return getFile(filename)


if __name__ == '__main__':
    app.run(debug=False)
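# The requires_auth decorator above checks standard HTTP Basic credentials, so a
# client must send an Authorization header built from the username and password
# the server expects. A small standalone sketch of that encoding (the credential
# values are the ones hard-coded above; the round-trip check just illustrates
# that Basic auth is plain base64, not encryption):
#
#     import base64
#
#     token = base64.b64encode(b"xlr_test:admin").decode("ascii")
#     header = "Basic " + token
#     print(header)
#
#     # Decoding recovers the original credential pair.
#     assert base64.b64decode(token) == b"xlr_test:admin"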
# File: Python/Algorithms/1313.py (DimitrisJim/leetcode_solutions, Unlicense)
from itertools import repeat
class DList:
    """ Another attempt at trying to
    reduce memory. Doesn't work. """

    def __init__(self, l):
        self._d = l

    def __iter__(self):
        d = self._d
        for i in range(0, len(d), 2):
            for j in range(d[i]):
                yield d[i+1]

    def __getitem__(self, i):
        d = self._d
        for ind, j in enumerate(d):
            if ind == i:
                return j
        raise IndexError("Whoops")

    def __repr__(self):
        return repr(list(self))


def yield_sublist(nums):
    for i in range(0, len(nums), 2):
        yield from repeat(nums[i+1], nums[i])


class Solution(object):
    def decompressRLElist(self, nums):
        """
        :type nums: List[int]
        :rtype: List[int]
        """
        """
        # basically repeat in here manually.
        l = len(nums)
        k = 0
        fr, el = nums[0], nums[1]
        res = []
        while True:
            if fr == 0:
                k += 2
                if k >= l:
                    break
                fr, el = nums[k], nums[k+1]
            else:
                res.append(el)
                fr -= 1
        return res
        """
        """
        # repeat with extend and reusing list.
        nums_len = len(nums)
        for i in range(0, nums_len, 2):
            nums.extend(repeat(nums[i+1], nums[i]))
        return nums[nums_len:]
        """
        # yield from a generator that yields from repeat's
        return [i for i in yield_sublist(nums)]
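# For reference, a standalone restatement of the extend-based variant kept above
# as a commented-out string (building a fresh list rather than reusing nums),
# exercised on the canonical LeetCode 1313 example:
#
#     from itertools import repeat
#
#     def decompress_rle(nums):
#         # Each adjacent (freq, val) pair expands to freq copies of val.
#         out = []
#         for i in range(0, len(nums), 2):
#             out.extend(repeat(nums[i + 1], nums[i]))
#         return out
#
#     print(decompress_rle([1, 2, 3, 4]))  # [2, 4, 4, 4]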
# File: noo/impl/registry/remote.py (nooproject/noo, MIT license)
from __future__ import annotations
from typing import Any

from requests import Session
from yaml import safe_load

from ..utils import cancel


class RemoteResolver:
    def __init__(self) -> None:
        self._session = Session()

    def fetch(self, ref: str) -> dict[str, Any]:
        res = self._session.get(ref)

        if res.status_code == 404:
            cancel("registry", f"Ref {ref} not found.")

        res.raise_for_status()
        return safe_load(res.text)
# File: recipes/recipe_modules/cloudbuildhelper/__init__.py (NDevTK/chromium-infra, BSD-3-Clause license)
# Copyright 2019 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
PYTHON_VERSION_COMPATIBILITY = 'PY2+3'
DEPS = [
    'recipe_engine/cipd',
    'recipe_engine/commit_position',
    'recipe_engine/context',
    'recipe_engine/file',
    'recipe_engine/golang',
    'recipe_engine/json',
    'recipe_engine/nodejs',
    'recipe_engine/path',
    'recipe_engine/raw_io',
    'recipe_engine/step',
    'depot_tools/depot_tools',
    'depot_tools/git',
    'depot_tools/git_cl',
]
# File: tests/unit/props/types/test_number.py (manoadamro/jason, MIT license)
import pytest
from jason import props


def test_validate():
    props.Number().load(5)


def test_loads_int_from_string():
    assert props.Number().load("5") == 5


def test_loads_float_from_string():
    assert props.Number().load("5.5") == 5.5


def test_not_allow_strings():
    with pytest.raises(props.PropertyValidationError):
        assert props.Number(allow_strings=False).load("5") == 5


def test_accepts_int():
    assert props.Number().load(5) == 5


def test_accepts_float():
    assert props.Number().load(5.5) == 5.5


def test_too_low():
    with pytest.raises(props.PropertyValidationError):
        props.Number(min_value=10).load(5)


def test_too_high():
    with pytest.raises(props.PropertyValidationError):
        props.Number(max_value=10).load(20)


def test_nullable():
    props.Number(nullable=True).load(None)


def test_not_nullable():
    with pytest.raises(props.PropertyValidationError):
        props.Number().load(None)


def test_wrong_type():
    with pytest.raises(props.PropertyValidationError):
        props.Number().load("nope")


def test_default():
    assert props.Number(default=123).load(None) == 123
# File: main.py (deadrobots/bots18-charlie, MIT license)
#!/usr/bin/python
import os, sys
from wallaby import *
import constants as c
import motors as m
import actions as a
import utils as u
import camera as x
import gyro as g
def main():
    print("running")
    enable_servos()
    a.challenge()


if __name__ == "__main__":
    sys.stdout = os.fdopen(sys.stdout.fileno(), "w", 0)
    main()
# File: bin/main/open_project.py (Lokesh-Kumar-Dudi/sudoSU, Apache-2.0 license)
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
import glob, os
class open_project:
    def __init__(self, parent):
        self.parent = parent
        self.open_project_dialogue()

    def open_project_dialogue(self):
        dialog = Gtk.FileChooserDialog(title="Please choose a Project",
                                       parent=None,
                                       action=Gtk.FileChooserAction.SELECT_FOLDER)
        dialog.add_buttons(Gtk.STOCK_CANCEL, 2, "Select", 1)
        dialog.set_default_size(800, 400)
        dialog.set_current_folder(self.parent.props.sudoSU_projects)
        response = dialog.run()
        if response == 1:
            self.parent.props.curr_project = dialog.get_filename()
            self.parent.iconview.refresh(self.parent.props.curr_project)
            self.parent.workflow_treeview.refresh(self.parent.props.curr_project)
            self.parent.props.segy_file = ""
            os.chdir(self.parent.props.curr_project)
            file = glob.glob("*.segy")
            if file:
                self.parent.props.segy_file = self.parent.props.curr_project + "/" + str(file[0])
            else:
                self.parent.props.segy_file = None
            self.parent.home_obj.set_assets(self.parent.props.segy_file)
            self.parent.run("cd " + self.parent.props.curr_project + "\nclear\n")
            self.parent.send_message("Opened Project: " + self.parent.props.curr_project, 1)
        elif response == 2:
            pass
        dialog.destroy()
# File: tutils/setup.py (transcendentsky/py_tutorials, Apache-2.0 license)
from setuptools import setup, find_packages
setup(
    name="tutils",
    version="1.0",
    author="trans",
    author_email="transcendentsiki@gmail.com",
    packages=find_packages()
)
# py_modules=['tutils'],
# File: Manager/RoutingManager.py (amrishAK/DBB_gateway, MIT license)
import requests
from Helper.RoutingEndpoint import RoutingEndpoint
from Helper.JsonHandler import JsonHandler


class RoutingManager:
    def __init__(self):
        self._jsonHandler = JsonHandler()
        self._routes = self._jsonHandler.LoadJson('Routes.json')

    def __del__(self):
        self._jsonHandler.WriteJson('Routes.json', self._routes)

    def GetServiceUrl(self, path):
        try:
            pathArray = path.split("/")
            service = pathArray[1]
            serviceList = self._routes['services']
            endpoint = next(endpoint for endpoint in serviceList if endpoint['Endpoint'] == service)
            path = path.replace("/" + service, "")
            return RoutingEndpoint(service, endpoint['Primary']['server'], endpoint['Port'], path)
        except Exception as ex:
            raise Exception(ex)

    def ChangePrimary(self, endpoint, replicaList):
        pass

    def GetReplicaList(self, endpoint):
        pass
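# The core of GetServiceUrl is splitting the request path into a service name
# (the first path segment) and a downstream path, then looking the service up
# in the routes table. A minimal standalone sketch of that lookup; the routes
# dict shape here is an assumption mirroring what Routes.json appears to hold:
#
#     routes = {
#         "services": [
#             {"Endpoint": "users", "Port": 8081, "Primary": {"server": "10.0.0.5"}},
#         ]
#     }
#
#     def resolve(path):
#         service = path.split("/")[1]               # first segment names the service
#         entry = next(e for e in routes["services"] if e["Endpoint"] == service)
#         rest = path.replace("/" + service, "", 1)  # strip the service prefix once
#         return entry["Primary"]["server"], entry["Port"], rest
#
#     print(resolve("/users/42"))  # ('10.0.0.5', 8081, '/42')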
# File: quantitative_finance/L3/python/cbsm_test.py (Aperture-Electronic/Vitis_Libraries, Apache-2.0 license)
#!/usr/bin/env python3
# Ensure environment variables, i.e. paths, are set to locate the named modules
from xf_fintech_python import DeviceManager, CFBlackScholesMerton, OptionType
import sys
# Basic checking that the number of arguments is correct
if len(sys.argv) != 2:
    sys.exit("Incorrect number of arguments supplied - 1 expected - the name of the FPGA load - e.g. cfbsm.xclbin")
# State test financial model
print("\nThe CFBlack Scholes Merton financial model\n==========================================\n")
# Declaring Variables
deviceList = DeviceManager.getDeviceList("u250")
lastruntime = 0
numAssets = 0
# Inputs
stockPriceList = []
strikePriceList = []
volatilityList = []
riskFreeRateList = []
timeToMaturityList = []
dividendYieldList = []
# checklist of expected result, and tolerance
expectedResultList = []
tolerance = 0.0001
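# The expected-result check implied by expectedResultList and tolerance is a
# plain absolute-difference comparison. A minimal sketch of that test; the
# helper name is illustrative, not part of this script:
#
#     def within_tolerance(actual, expected, tol=0.0001):
#         # Pass when the computed option price is within tol of the reference.
#         return abs(actual - expected) <= tol
#
#     print(within_tolerance(3.455493, 3.455492))  # True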
# Outputs - declaring them as empty lists
optionPriceList = []
deltaList = []
gammaList = []
vegaList = []
thetaList = []
rhoList = []
# Example financial data to test the module, same as used in the C++ example script
test_data_list = [
[50, 100, 0.0, 0.1, 1, 0.02, 0.000000], [50, 100, 0.01, 0.1, 1, 0.02, 0.000000],
[50, 100, 0.02, 0.1, 1, 0.02, 0.000000], [50, 100, 0.03, 0.1, 1, 0.02, 0.000000],
[50, 100, 0.05, 0.1, 1, 0.02, 0.000000], [50, 100, 0.07, 0.1, 1, 0.02, 0.000000],
[50, 100, 0.1, 0.1, 1, 0.02, 0.000000], [50, 100, 0.25, 0.1, 1, 0.02, 0.000002],
[50, 100, 0.01, 0.1, 1, 0.0, 0.000000], [50, 100, 0.01, 0.1, 1, 0.01, 0.000000],
[50, 100, 0.01, 0.1, 1, 0.02, 0.000000], [50, 100, 0.01, 0.1, 1, 0.03, 0.000000],
[50, 100, 0.01, 0.1, 1, 0.05, 0.000000], [50, 100, 0.01, 0.1, 1, 0.07, 0.000000],
[50, 100, 0.01, 0.1, 1, 0.1, 0.000000], [50, 100, 0.01, 0.1, 1, 0.25, 0.000000],
[50, 100, 0.01, 0.01, 1, 0.02, 0.000000], [50, 100, 0.01, 0.05, 1, 0.02, 0.000000],
[50, 100, 0.01, 0.1, 1, 0.02, 0.000000], [50, 100, 0.01, 0.25, 1, 0.02, 0.012620],
[50, 100, 0.01, 1.1, 1, 0.02, 11.161774], [50, 100, 0.01, 2.1, 1, 0.02, 29.159644],
[50, 100, 0.01, 0.1, 0.3, 0.02, 0.000000], [50, 100, 0.01, 0.1, 1, 0.02, 0.000000],
[50, 100, 0.01, 0.1, 3, 0.02, 0.000038], [100, 100, 0.0, 0.1, 1, 0.02, 3.036848],
[100, 100, 0.01, 0.1, 1, 0.02, 3.455492], [100, 100, 0.02, 0.1, 1, 0.02, 3.908798],
[100, 100, 0.03, 0.1, 1, 0.02, 4.396423], [100, 100, 0.05, 0.1, 1, 0.02, 5.471349],
[100, 100, 0.07, 0.1, 1, 0.02, 6.670211], [100, 100, 0.1, 0.1, 1, 0.02, 8.667360],
[100, 100, 0.25, 0.1, 1, 0.02, 20.171748], [100, 100, 0.01, 0.1, 1, 0.0, 4.485236],
[100, 100, 0.01, 0.1, 1, 0.01, 3.948082], [100, 100, 0.01, 0.1, 1, 0.02, 3.455492],
[100, 100, 0.01, 0.1, 1, 0.03, 3.006631], [100, 100, 0.01, 0.1, 1, 0.05, 2.234944],
[100, 100, 0.01, 0.1, 1, 0.07, 1.619499], [100, 100, 0.01, 0.1, 1, 0.1, 0.949838],
[100, 100, 0.01, 0.1, 1, 0.25, 0.023864], [100, 100, 0.01, 0.01, 1, 0.02, 0.082074],
[100, 100, 0.01, 0.05, 1, 0.02, 1.511434], [100, 100, 0.01, 0.1, 1, 0.02, 3.455492],
[100, 100, 0.01, 0.25, 1, 0.02, 9.314906], [100, 100, 0.01, 1.1, 1, 0.02, 40.655709],
[100, 100, 0.01, 2.1, 1, 0.02, 69.085523], [100, 100, 0.01, 0.1, 0.3, 0.02, 2.024682],
[100, 100, 0.01, 0.1, 1, 0.02, 3.455492], [100, 100, 0.01, 0.1, 3, 0.02, 5.262747],
[150, 100, 0.0, 0.1, 1, 0.02, 47.029964], [150, 100, 0.01, 0.1, 1, 0.02, 48.024923],
[150, 100, 0.02, 0.1, 1, 0.02, 49.010001], [150, 100, 0.03, 0.1, 1, 0.02, 49.985290],
[150, 100, 0.05, 0.1, 1, 0.02, 51.906875], [150, 100, 0.07, 0.1, 1, 0.02, 53.790425],
[150, 100, 0.1, 0.1, 1, 0.02, 56.546061], [150, 100, 0.25, 0.1, 1, 0.02, 69.149723],
[150, 100, 0.01, 0.1, 1, 0.0, 50.995060], [150, 100, 0.01, 0.1, 1, 0.01, 49.502560],
[150, 100, 0.01, 0.1, 1, 0.02, 48.024923], [150, 100, 0.01, 0.1, 1, 0.03, 46.562008],
[150, 100, 0.01, 0.1, 1, 0.05, 43.679802], [150, 100, 0.01, 0.1, 1, 0.07, 40.854912],
[150, 100, 0.01, 0.1, 1, 0.1, 36.723165], [150, 100, 0.01, 0.1, 1, 0.25, 18.034406],
[150, 100, 0.01, 0.01, 1, 0.02, 48.024818], [150, 100, 0.01, 0.05, 1, 0.02, 48.024818],
[150, 100, 0.01, 0.1, 1, 0.02, 48.024923], [150, 100, 0.01, 0.25, 1, 0.02, 48.752211],
[150, 100, 0.01, 1.1, 1, 0.02, 78.314027], [150, 100, 0.01, 2.1, 1, 0.02, 111.930458],
[150, 100, 0.01, 0.1, 0.3, 0.02, 49.404964], [150, 100, 0.01, 0.1, 1, 0.02, 48.024923],
[150, 100, 0.01, 0.1, 3, 0.02, 44.328384], [50, 100, 0.01, 0.1, 1, 0.01, 0.000000],
[60, 100, 0.01, 0.1, 1, 0.01, 0.000000], [70, 100, 0.01, 0.1, 1, 0.01, 0.000370],
[80, 100, 0.01, 0.1, 1, 0.01, 0.039517], [90, 100, 0.01, 0.1, 1, 0.01, 0.705293],
[100, 100, 0.01, 0.1, 1, 0.01, 3.948082], [110, 100, 0.01, 0.1, 1, 0.01, 10.844954],
[120, 100, 0.01, 0.1, 1, 0.01, 19.946863], [130, 100, 0.01, 0.1, 1, 0.01, 29.716802],
[140, 100, 0.01, 0.1, 1, 0.01, 39.603156], [150, 100, 0.01, 0.1, 1, 0.01, 49.502560],
[100, 100, 0.0, 0.1, 1, 0.01, 3.490220], [100, 100, 0.01, 0.1, 1, 0.01, 3.948082],
[100, 100, 0.02, 0.1, 1, 0.01, 4.440608], [100, 100, 0.03, 0.1, 1, 0.01, 4.967061],
[100, 100, 0.05, 0.1, 1, 0.01, 6.116985], [100, 100, 0.07, 0.1, 1, 0.01, 7.385101],
[100, 100, 0.1, 0.1, 1, 0.01, 9.471079], [100, 100, 0.25, 0.1, 1, 0.01, 21.148769],
[100, 100, 0.01, 0.01, 1, 0.01, 0.394971], [100, 100, 0.01, 0.05, 1, 0.01, 1.974658],
[100, 100, 0.01, 0.1, 1, 0.01, 3.948082], [100, 100, 0.01, 0.25, 1, 0.01, 9.848664],
[100, 100, 0.01, 1.1, 1, 0.01, 41.352463], [100, 100, 0.01, 2.1, 1, 0.01, 69.925427],
[100, 100, 0.01, 0.1, 0.3, 0.01, 2.173331], [100, 100, 0.01, 0.1, 1, 0.01, 3.948082],
[100, 100, 0.01, 0.1, 3, 0.01, 6.697292], [50, 100, 0.01, 0.1, 1, 0.1, 0.000000],
[60, 100, 0.01, 0.1, 1, 0.1, 0.000000], [70, 100, 0.01, 0.1, 1, 0.1, 0.000006],
[80, 100, 0.01, 0.1, 1, 0.1, 0.002016], [90, 100, 0.01, 0.1, 1, 0.1, 0.086171],
[100, 100, 0.01, 0.1, 1, 0.1, 0.949838], [110, 100, 0.01, 0.1, 1, 0.1, 4.227734],
[120, 100, 0.01, 0.1, 1, 0.1, 10.572463], [130, 100, 0.01, 0.1, 1, 0.1, 18.809969],
[140, 100, 0.01, 0.1, 1, 0.1, 27.697253], [150, 100, 0.01, 0.1, 1, 0.1, 36.723165],
[100, 100, 0.0, 0.1, 1, 0.1, 0.791893], [100, 100, 0.01, 0.1, 1, 0.1, 0.949838],
[100, 100, 0.02, 0.1, 1, 0.1, 1.131235], [100, 100, 0.03, 0.1, 1, 0.1, 1.337931],
[100, 100, 0.05, 0.1, 1, 0.1, 1.833875], [100, 100, 0.07, 0.1, 1, 0.1, 2.448868],
[100, 100, 0.1, 0.1, 1, 0.1, 3.608276], [100, 100, 0.25, 0.1, 1, 0.1, 12.849459],
[100, 100, 0.01, 0.01, 1, 0.1, 0.000000], [100, 100, 0.01, 0.05, 1, 0.1, 0.067542],
[100, 100, 0.01, 0.1, 1, 0.1, 0.949838], [100, 100, 0.01, 0.25, 1, 0.1, 5.764783],
[100, 100, 0.01, 1.1, 1, 0.1, 35.431726], [100, 100, 0.01, 2.1, 1, 0.1, 62.697569],
[100, 100, 0.01, 0.1, 0.3, 0.1, 1.076712], [100, 100, 0.01, 0.1, 1, 0.1, 0.949838],
[100, 100, 0.01, 0.1, 3, 0.1, 0.374831], [50, 100, 0.01, 0.1, 1, 0.25, 0.000000],
[60, 100, 0.01, 0.1, 1, 0.25, 0.000000], [70, 100, 0.01, 0.1, 1, 0.25, 0.000000],
[80, 100, 0.01, 0.1, 1, 0.25, 0.000003], [90, 100, 0.01, 0.1, 1, 0.25, 0.000585],
[100, 100, 0.01, 0.1, 1, 0.25, 0.023864], [110, 100, 0.01, 0.1, 1, 0.25, 0.304029],
[120, 100, 0.01, 0.1, 1, 0.25, 1.683464], [130, 100, 0.01, 0.1, 1, 0.25, 5.211635],
[140, 100, 0.01, 0.1, 1, 0.25, 10.951763], [150, 100, 0.01, 0.1, 1, 0.25, 18.034406],
[100, 100, 0.0, 0.1, 1, 0.25, 0.017668], [100, 100, 0.01, 0.1, 1, 0.25, 0.023864],
[100, 100, 0.02, 0.1, 1, 0.25, 0.031959], [100, 100, 0.03, 0.1, 1, 0.25, 0.042443],
[100, 100, 0.05, 0.1, 1, 0.25, 0.073008], [100, 100, 0.07, 0.1, 1, 0.25, 0.121532],
[100, 100, 0.1, 0.1, 1, 0.25, 0.245796], [100, 100, 0.25, 0.1, 1, 0.25, 3.105672],
[100, 100, 0.01, 0.01, 1, 0.25, 0.000000], [100, 100, 0.01, 0.05, 1, 0.25, 0.000001],
[100, 100, 0.01, 0.1, 1, 0.25, 0.023864], [100, 100, 0.01, 0.25, 1, 0.25, 1.962969],
[100, 100, 0.01, 1.1, 1, 0.25, 27.164544], [100, 100, 0.01, 2.1, 1, 0.25, 52.179771],
[100, 100, 0.01, 0.1, 0.3, 0.25, 0.233456], [100, 100, 0.01, 0.1, 1, 0.25, 0.023864],
[100, 100, 0.01, 0.1, 3, 0.25, 0.000041], [50, 100, 0.01, 0.01, 1, 0.02, 0.000000],
[60, 100, 0.01, 0.01, 1, 0.02, 0.000000], [70, 100, 0.01, 0.01, 1, 0.02, 0.000000],
[80, 100, 0.01, 0.01, 1, 0.02, 0.000000], [90, 100, 0.01, 0.01, 1, 0.02, 0.000000],
[100, 100, 0.01, 0.01, 1, 0.02, 0.082074], [110, 100, 0.01, 0.01, 1, 0.02, 8.816871],
[120, 100, 0.01, 0.01, 1, 0.02, 18.618857], [130, 100, 0.01, 0.01, 1, 0.02, 28.420844],
[140, 100, 0.01, 0.01, 1, 0.02, 38.222831], [150, 100, 0.01, 0.01, 1, 0.02, 48.024818],
[100, 100, 0.0, 0.01, 1, 0.02, 0.008406], [100, 100, 0.01, 0.01, 1, 0.02, 0.082074],
[100, 100, 0.02, 0.01, 1, 0.02, 0.391041], [100, 100, 0.03, 0.01, 1, 0.02, 1.056572],
[100, 100, 0.05, 0.01, 1, 0.02, 2.897294], [100, 100, 0.07, 0.01, 1, 0.02, 4.780485],
[100, 100, 0.1, 0.01, 1, 0.02, 7.536126], [100, 100, 0.25, 0.01, 1, 0.02, 20.139789],
[100, 100, 0.01, 0.01, 1, 0.0, 1.077916], [100, 100, 0.01, 0.01, 1, 0.01, 0.394971],
[100, 100, 0.01, 0.01, 1, 0.02, 0.082074], [100, 100, 0.01, 0.01, 1, 0.03, 0.008322],
[100, 100, 0.01, 0.01, 1, 0.05, 0.000007], [100, 100, 0.01, 0.01, 1, 0.07, 0.000000],
[100, 100, 0.01, 0.01, 1, 0.1, 0.000000], [100, 100, 0.01, 0.01, 1, 0.25, 0.000000],
[100, 100, 0.01, 0.01, 0.3, 0.02, 0.100012], [100, 100, 0.01, 0.01, 1, 0.02, 0.082074],
[100, 100, 0.01, 0.01, 3, 0.02, 0.027994], [50, 100, 0.01, 0.1, 1, 0.02, 0.000000],
[60, 100, 0.01, 0.1, 1, 0.02, 0.000000], [70, 100, 0.01, 0.1, 1, 0.02, 0.000245],
[80, 100, 0.01, 0.1, 1, 0.02, 0.029383], [90, 100, 0.01, 0.1, 1, 0.02, 0.575714],
[100, 100, 0.01, 0.1, 1, 0.02, 3.455492], [110, 100, 0.01, 0.1, 1, 0.02, 9.945921],
[120, 100, 0.01, 0.1, 1, 0.02, 18.805137], [130, 100, 0.01, 0.1, 1, 0.02, 28.441738],
[140, 100, 0.01, 0.1, 1, 0.02, 38.224524], [150, 100, 0.01, 0.1, 1, 0.02, 48.024923],
[100, 100, 0.0, 0.1, 1, 0.02, 3.036848], [100, 100, 0.01, 0.1, 1, 0.02, 3.455492],
[100, 100, 0.02, 0.1, 1, 0.02, 3.908798], [100, 100, 0.03, 0.1, 1, 0.02, 4.396423],
[100, 100, 0.05, 0.1, 1, 0.02, 5.471349], [100, 100, 0.07, 0.1, 1, 0.02, 6.670211],
[100, 100, 0.1, 0.1, 1, 0.02, 8.667360], [100, 100, 0.25, 0.1, 1, 0.02, 20.171748],
[100, 100, 0.01, 0.1, 1, 0.0, 4.485236], [100, 100, 0.01, 0.1, 1, 0.01, 3.948082],
[100, 100, 0.01, 0.1, 1, 0.02, 3.455492], [100, 100, 0.01, 0.1, 1, 0.03, 3.006631],
[100, 100, 0.01, 0.1, 1, 0.05, 2.234944], [100, 100, 0.01, 0.1, 1, 0.07, 1.619499],
[100, 100, 0.01, 0.1, 1, 0.1, 0.949838], [100, 100, 0.01, 0.1, 1, 0.25, 0.023864],
[100, 100, 0.01, 0.1, 0.3, 0.02, 2.024682], [100, 100, 0.01, 0.1, 1, 0.02, 3.455492],
[100, 100, 0.01, 0.1, 3, 0.02, 5.262747], [50, 100, 0.01, 2, 1, 0.02, 27.610220],
[60, 100, 0.01, 2, 1, 0.02, 35.025615], [70, 100, 0.01, 2, 1, 0.02, 42.690643],
[80, 100, 0.01, 2, 1, 0.02, 50.556861], [90, 100, 0.01, 2, 1, 0.02, 58.589225],
[100, 100, 0.01, 2, 1, 0.02, 66.761436], [110, 100, 0.01, 2, 1, 0.02, 75.053177],
[120, 100, 0.01, 2, 1, 0.02, 83.448386], [130, 100, 0.01, 2, 1, 0.02, 91.934119],
[140, 100, 0.01, 2, 1, 0.02, 100.499772], [150, 100, 0.01, 2, 1, 0.02, 109.136537],
[100, 100, 0.0, 2, 1, 0.02, 66.605371], [100, 100, 0.01, 2, 1, 0.02, 66.761436],
[100, 100, 0.02, 2, 1, 0.02, 66.917133], [100, 100, 0.03, 2, 1, 0.02, 67.072462],
[100, 100, 0.05, 2, 1, 0.02, 67.382003], [100, 100, 0.07, 2, 1, 0.02, 67.690040],
[100, 100, 0.1, 2, 1, 0.02, 68.149240], [100, 100, 0.25, 2, 1, 0.02, 70.392012],
[100, 100, 0.01, 2, 1, 0.0, 68.427416], [100, 100, 0.01, 2, 1, 0.01, 67.589662],
[100, 100, 0.01, 2, 1, 0.02, 66.761436], [100, 100, 0.01, 2, 1, 0.03, 65.942636],
[100, 100, 0.01, 2, 1, 0.05, 64.332920], [100, 100, 0.01, 2, 1, 0.07, 62.759727],
[100, 100, 0.01, 2, 1, 0.1, 60.466737], [100, 100, 0.01, 2, 1, 0.25, 50.122296],
[100, 100, 0.01, 2, 0.3, 0.02, 41.191693], [100, 100, 0.01, 2, 1, 0.02, 66.761436],
[100, 100, 0.01, 2, 3, 0.02, 86.216596], [50, 100, 0.01, 0.1, 0.3, 0.02, 0.000000],
[60, 100, 0.01, 0.1, 0.3, 0.02, 0.000000], [70, 100, 0.01, 0.1, 0.3, 0.02, 0.000000],
[80, 100, 0.01, 0.1, 0.3, 0.02, 0.000019], [90, 100, 0.01, 0.1, 0.3, 0.02, 0.045886],
[100, 100, 0.01, 0.1, 0.3, 0.02, 2.024682], [110, 100, 0.01, 0.1, 0.3, 0.02, 9.750184],
[120, 100, 0.01, 0.1, 0.3, 0.02, 19.584421], [130, 100, 0.01, 0.1, 0.3, 0.02, 29.524062],
[140, 100, 0.01, 0.1, 0.3, 0.02, 39.464512], [150, 100, 0.01, 0.1, 0.3, 0.02, 49.404964],
[100, 100, 0.0, 0.1, 0.3, 0.02, 1.888565], [100, 100, 0.01, 0.1, 0.3, 0.02, 2.024682],
[100, 100, 0.02, 0.1, 0.3, 0.02, 2.166851], [100, 100, 0.03, 0.1, 0.3, 0.02, 2.315054],
[100, 100, 0.05, 0.1, 0.3, 0.02, 2.629394], [100, 100, 0.07, 0.1, 0.3, 0.02, 2.967179],
[100, 100, 0.1, 0.1, 0.3, 0.02, 3.515974], [100, 100, 0.25, 0.1, 0.3, 0.02, 6.860052],
[100, 100, 0.01, 0.1, 0.3, 0.0, 2.328922], [100, 100, 0.01, 0.1, 0.3, 0.01, 2.173331],
[100, 100, 0.01, 0.1, 0.3, 0.02, 2.024682], [100, 100, 0.01, 0.1, 0.3, 0.03, 1.882934],
[100, 100, 0.01, 0.1, 0.3, 0.05, 1.619886], [100, 100, 0.01, 0.1, 0.3, 0.07, 1.383498],
[100, 100, 0.01, 0.1, 0.3, 0.1, 1.076712], [100, 100, 0.01, 0.1, 0.3, 0.25, 0.233456],
[100, 100, 0.01, 0.01, 0.3, 0.02, 0.100012], [100, 100, 0.01, 0.05, 0.3, 0.02, 0.942973],
[100, 100, 0.01, 0.1, 0.3, 0.02, 2.024682], [100, 100, 0.01, 0.25, 0.3, 0.02, 5.274331],
[100, 100, 0.01, 1.1, 0.3, 0.02, 23.370978], [100, 100, 0.01, 2.1, 0.3, 0.02, 43.046895],
[50, 100, 0.01, 0.1, 1, 0.02, 0.000000], [60, 100, 0.01, 0.1, 1, 0.02, 0.000000],
[70, 100, 0.01, 0.1, 1, 0.02, 0.000245], [80, 100, 0.01, 0.1, 1, 0.02, 0.029383],
[90, 100, 0.01, 0.1, 1, 0.02, 0.575714], [100, 100, 0.01, 0.1, 1, 0.02, 3.455492],
[110, 100, 0.01, 0.1, 1, 0.02, 9.945921], [120, 100, 0.01, 0.1, 1, 0.02, 18.805137],
[130, 100, 0.01, 0.1, 1, 0.02, 28.441738], [140, 100, 0.01, 0.1, 1, 0.02, 38.224524],
[150, 100, 0.01, 0.1, 1, 0.02, 48.024923], [100, 100, 0.0, 0.1, 1, 0.02, 3.036848],
[100, 100, 0.01, 0.1, 1, 0.02, 3.455492], [100, 100, 0.02, 0.1, 1, 0.02, 3.908798],
[100, 100, 0.03, 0.1, 1, 0.02, 4.396423], [100, 100, 0.05, 0.1, 1, 0.02, 5.471349],
[100, 100, 0.07, 0.1, 1, 0.02, 6.670211], [100, 100, 0.1, 0.1, 1, 0.02, 8.667360],
[100, 100, 0.25, 0.1, 1, 0.02, 20.171748], [100, 100, 0.01, 0.1, 1, 0.0, 4.485236],
[100, 100, 0.01, 0.1, 1, 0.01, 3.948082], [100, 100, 0.01, 0.1, 1, 0.02, 3.455492],
[100, 100, 0.01, 0.1, 1, 0.03, 3.006631], [100, 100, 0.01, 0.1, 1, 0.05, 2.234944],
[100, 100, 0.01, 0.1, 1, 0.07, 1.619499], [100, 100, 0.01, 0.1, 1, 0.1, 0.949838],
[100, 100, 0.01, 0.1, 1, 0.25, 0.023864], [100, 100, 0.01, 0.01, 1, 0.02, 0.082074],
[100, 100, 0.01, 0.05, 1, 0.02, 1.511434], [100, 100, 0.01, 0.1, 1, 0.02, 3.455492],
[100, 100, 0.01, 0.25, 1, 0.02, 9.314906], [100, 100, 0.01, 1.1, 1, 0.02, 40.655709],
[100, 100, 0.01, 2.1, 1, 0.02, 69.085523], [50, 100, 0.01, 0.1, 3, 0.02, 0.000038],
[60, 100, 0.01, 0.1, 3, 0.02, 0.003149], [70, 100, 0.01, 0.1, 3, 0.02, 0.061515],
[80, 100, 0.01, 0.1, 3, 0.02, 0.472285], [90, 100, 0.01, 0.1, 3, 0.02, 1.946350],
[100, 100, 0.01, 0.1, 3, 0.02, 5.262747], [110, 100, 0.01, 0.1, 3, 0.02, 10.683065],
[120, 100, 0.01, 0.1, 3, 0.02, 17.854285], [130, 100, 0.01, 0.1, 3, 0.02, 26.169121],
[140, 100, 0.01, 0.1, 3, 0.02, 35.103576], [150, 100, 0.01, 0.1, 3, 0.02, 44.328384],
[100, 100, 0.0, 0.1, 3, 0.02, 4.185436], [100, 100, 0.01, 0.1, 3, 0.02, 5.262747],
[100, 100, 0.02, 0.1, 3, 0.02, 6.499358], [100, 100, 0.03, 0.1, 3, 0.02, 7.890544],
[100, 100, 0.05, 0.1, 3, 0.02, 11.091010], [100, 100, 0.07, 0.1, 3, 0.02, 14.731189],
[100, 100, 0.1, 0.1, 3, 0.02, 20.640675], [100, 100, 0.25, 0.1, 3, 0.02, 46.939886],
[100, 100, 0.01, 0.1, 3, 0.0, 8.378469], [100, 100, 0.01, 0.1, 3, 0.01, 6.697292],
[100, 100, 0.01, 0.1, 3, 0.02, 5.262747], [100, 100, 0.01, 0.1, 3, 0.03, 4.061738],
[100, 100, 0.01, 0.1, 3, 0.05, 2.284543], [100, 100, 0.01, 0.1, 3, 0.07, 1.184236],
[100, 100, 0.01, 0.1, 3, 0.1, 0.374831], [100, 100, 0.01, 0.1, 3, 0.25, 0.000041],
[100, 100, 0.01, 0.01, 3, 0.02, 0.027994], [100, 100, 0.01, 0.05, 3, 0.02, 2.064242],
[100, 100, 0.01, 0.1, 3, 0.02, 5.262747], [100, 100, 0.01, 0.25, 3, 0.02, 14.992954],
[100, 100, 0.01, 1.1, 3, 0.02, 61.600161], [100, 100, 0.01, 2.1, 3, 0.02, 87.583642],
[50, 100, 0.01, 0.1, 1, 0.02, 0.000000], [60, 100, 0.01, 0.1, 1, 0.02, 0.000000],
[70, 100, 0.01, 0.1, 1, 0.02, 0.000245], [80, 100, 0.01, 0.1, 1, 0.02, 0.029383],
[90, 100, 0.01, 0.1, 1, 0.02, 0.575714], [100, 100, 0.01, 0.1, 1, 0.02, 3.455492],
[110, 100, 0.01, 0.1, 1, 0.02, 9.945921], [120, 100, 0.01, 0.1, 1, 0.02, 18.805137],
[130, 100, 0.01, 0.1, 1, 0.02, 28.441738], [140, 100, 0.01, 0.1, 1, 0.02, 38.224524],
[150, 100, 0.01, 0.1, 1, 0.02, 48.024923], [100, 100, 0.01, 0.1, 1, 0.0, 4.485236],
[100, 100, 0.01, 0.1, 1, 0.01, 3.948082], [100, 100, 0.01, 0.1, 1, 0.02, 3.455492],
[100, 100, 0.01, 0.1, 1, 0.03, 3.006631], [100, 100, 0.01, 0.1, 1, 0.05, 2.234944],
[100, 100, 0.01, 0.1, 1, 0.07, 1.619499], [100, 100, 0.01, 0.1, 1, 0.1, 0.949838],
[100, 100, 0.01, 0.1, 1, 0.25, 0.023864], [100, 100, 0.01, 0.01, 1, 0.02, 0.082074],
[100, 100, 0.01, 0.05, 1, 0.02, 1.511434], [100, 100, 0.01, 0.1, 1, 0.02, 3.455492],
[100, 100, 0.01, 0.25, 1, 0.02, 9.314906], [100, 100, 0.01, 1.1, 1, 0.02, 40.655709],
[100, 100, 0.01, 2.1, 1, 0.02, 69.085523], [100, 100, 0.01, 0.1, 0.3, 0.02, 2.024682],
[100, 100, 0.01, 0.1, 1, 0.02, 3.455492], [100, 100, 0.01, 0.1, 3, 0.02, 5.262747],
[50, 100, 0.1, 0.1, 1, 0.02, 0.000000], [60, 100, 0.1, 0.1, 1, 0.02, 0.000013],
[70, 100, 0.1, 0.1, 1, 0.02, 0.006695], [80, 100, 0.1, 0.1, 1, 0.02, 0.287844],
[90, 100, 0.1, 0.1, 1, 0.02, 2.544035], [100, 100, 0.1, 0.1, 1, 0.02, 8.667360],
[110, 100, 0.1, 0.1, 1, 0.02, 17.496493], [120, 100, 0.1, 0.1, 1, 0.02, 27.154107],
[130, 100, 0.1, 0.1, 1, 0.02, 36.942933], [140, 100, 0.1, 0.1, 1, 0.02, 46.744110],
[150, 100, 0.1, 0.1, 1, 0.02, 56.546061], [100, 100, 0.1, 0.1, 1, 0.0, 10.308151],
[100, 100, 0.1, 0.1, 1, 0.01, 9.471079], [100, 100, 0.1, 0.1, 1, 0.02, 8.667360],
[100, 100, 0.1, 0.1, 1, 0.03, 7.898742], [100, 100, 0.1, 0.1, 1, 0.05, 6.473076],
[100, 100, 0.1, 0.1, 1, 0.07, 5.204508], [100, 100, 0.1, 0.1, 1, 0.1, 3.608276],
[100, 100, 0.1, 0.1, 1, 0.25, 0.245796], [100, 100, 0.1, 0.01, 1, 0.02, 7.536126],
[100, 100, 0.1, 0.05, 1, 0.02, 7.645543], [100, 100, 0.1, 0.1, 1, 0.02, 8.667360],
[100, 100, 0.1, 0.25, 1, 0.02, 13.617097], [100, 100, 0.1, 1.1, 1, 0.02, 43.229142],
[100, 100, 0.1, 2.1, 1, 0.02, 70.369395], [100, 100, 0.1, 0.1, 0.3, 0.02, 3.515974],
[100, 100, 0.1, 0.1, 1, 0.02, 8.667360], [100, 100, 0.1, 0.1, 3, 0.02, 20.640675],
[50, 100, 0.25, 0.1, 1, 0.02, 0.000002], [60, 100, 0.25, 0.1, 1, 0.02, 0.005005],
[70, 100, 0.25, 0.1, 1, 0.02, 0.356736], [80, 100, 0.25, 0.1, 1, 0.02, 3.391579],
[90, 100, 0.25, 0.1, 1, 0.02, 10.759913], [100, 100, 0.25, 0.1, 1, 0.02, 20.171748],
[110, 100, 0.25, 0.1, 1, 0.02, 29.943167], [120, 100, 0.25, 0.1, 1, 0.02, 39.743802],
[130, 100, 0.25, 0.1, 1, 0.02, 49.545750], [140, 100, 0.25, 0.1, 1, 0.02, 59.347736],
[150, 100, 0.25, 0.1, 1, 0.02, 69.149723], [100, 100, 0.25, 0.1, 1, 0.0, 22.137590],
[100, 100, 0.25, 0.1, 1, 0.01, 21.148769], [100, 100, 0.25, 0.1, 1, 0.02, 20.171748],
[100, 100, 0.25, 0.1, 1, 0.03, 19.206918], [100, 100, 0.25, 0.1, 1, 0.05, 17.315873],
[100, 100, 0.25, 0.1, 1, 0.07, 15.480836], [100, 100, 0.25, 0.1, 1, 0.1, 12.849459],
[100, 100, 0.25, 0.1, 1, 0.25, 3.105672], [100, 100, 0.25, 0.01, 1, 0.02, 20.139789],
[100, 100, 0.25, 0.05, 1, 0.02, 20.139791], [100, 100, 0.25, 0.1, 1, 0.02, 20.171748],
[100, 100, 0.25, 0.25, 1, 0.02, 22.244069], [100, 100, 0.25, 1.1, 1, 0.02, 47.523561],
[100, 100, 0.25, 2.1, 1, 0.02, 72.440371], [100, 100, 0.25, 0.1, 0.3, 0.02, 6.860052],
[100, 100, 0.25, 0.1, 1, 0.02, 20.171748], [100, 100, 0.25, 0.1, 3, 0.02, 46.939886],
]
# Count the number of entries (data sets) in the list above
numAssets = len(test_data_list)
# Populate the input lists from the above nested list
for loop in range(0, numAssets):
    stockPriceList += [test_data_list[loop][0]]
    strikePriceList += [test_data_list[loop][1]]
    riskFreeRateList += [test_data_list[loop][2]]
    volatilityList += [test_data_list[loop][3]]
    timeToMaturityList += [test_data_list[loop][4]]
    dividendYieldList += [test_data_list[loop][5]]
    # This final field is not passed to the FPGA; it is used to compare the result, as per the C++ example
    expectedResultList += [test_data_list[loop][6]]
# Outputs - these were already declared above as empty lists
# Identify which cards are installed and choose the first available U250 card, as defined in deviceList above
print("Found these {0} device(s):".format(len(deviceList)))
for x in deviceList:
print(x.getName())
print("Choosing the first suitable card\n")
chosenDevice = deviceList[0]
# Selecting and loading into FPGA on chosen card the financial model to be used
CFBlackScholesMerton = CFBlackScholesMerton(numAssets, sys.argv[1])  # warns the lower levels to accommodate at least this figure
CFBlackScholesMerton.claimDevice(chosenDevice)
#Feed in the data and request the result
print("\nRunning...")
result = CFBlackScholesMerton.run(stockPriceList, strikePriceList, volatilityList, riskFreeRateList, timeToMaturityList, dividendYieldList,
optionPriceList, deltaList, gammaList, vegaList, thetaList, rhoList, OptionType.Call, numAssets)
print("Done")
runtime = CFBlackScholesMerton.lastruntime()
#Format output to match the example in C++, simply to aid comparison of results
print("+-------+-----------+----------------+--------------+---------------+---------------+---------------+")
print("| Index | Price | Delta | Gamma | Vega | Theta | Rho |")
print("+-------+-----------+----------------+--------------+---------------+---------------+---------------+")
for loop in range(0, numAssets):
    print(loop, "\t%9.5f" % optionPriceList[loop], "\t%9.5f" % deltaList[loop], "\t%9.5f" % gammaList[loop], "\t%9.5f" % vegaList[loop], "\t%9.5f" % thetaList[loop], "\t%9.5f" % rhoList[loop])
    if abs(expectedResultList[loop] - optionPriceList[loop]) > tolerance:
        print("Comparison failure - expected", expectedResultList[loop], "returned %9.5f" % optionPriceList[loop])
print("\nThis run took", str(runtime), "microseconds")
#Relinquish ownership of the card
CFBlackScholesMerton.releaseDevice()
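The final column of each `test_data_list` row is the analytic Black-Scholes-Merton call price that the FPGA result is compared against. A minimal pure-Python sketch of that closed-form formula, useful for cross-checking a row offline (the sampled row values S, K, r, sigma, T, q come straight from the data above):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S, K, r, sigma, T, q):
    # Closed-form Black-Scholes-Merton price of a European call
    # with continuous dividend yield q
    d1 = (log(S / K) + (r - q + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-q * T) * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Cross-check against the row [100, 100, 0.01, 0.1, 1, 0.02, 3.455492]
print(round(bsm_call(100, 100, 0.01, 0.1, 1, 0.02), 6))  # ~3.455492
```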

# --- src/mobi/__init__.py (repo: vsobolmaven/mobi.devices, license: BSD-4-Clause) ---
# Copyright (c) 2010 Infrae. All rights reserved.
# See also LICENSE.txt.
# See http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages
try:
__import__('pkg_resources').declare_namespace(__name__)
except ImportError:
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)

# --- src/bq_test_kit/resource_loaders/package_file_loader.py (repo: tiboun/python-bigquery-test-kit, license: MIT) ---
# Copyright (c) 2020 Bounkong Khamphousone
#
# This software is released under the MIT License.
# https://opensource.org/licenses/MIT
# C0114 disabled because this module contains only one class
# pylint: disable=C0114
from os.path import basename, dirname
import pkg_resources
from logzero import logger
from bq_test_kit.resource_loaders.base_resource_loader import \
BaseResourceLoader
class PackageFileLoader(BaseResourceLoader):
"""Load from a given module path.
"""
def __init__(self, path: str) -> None:
"""Path that is available in the module path.
Args:
            path (str): relative module path with '/' or '\\' as path separator.
"""
self.path = path
self.package = dirname(path).replace("/", ".").replace("\\", ".")
self.file_name = basename(path)
def load(self) -> str:
logger.info("Loading file %s in package %s", self.file_name, self.package)
with open(self.absolute_path(), 'r') as schemafile:
return schemafile.read()
def absolute_path(self) -> str:
"""
Returns:
            str: absolute path of the resource, ready to be read.
"""
return pkg_resources.resource_filename(self.package, self.file_name)
def __repr__(self) -> str:
return f"bq_test_kit.resource_loaders.PackageFileLoader('{self.path}')"
def __str__(self) -> str:
return self.__repr__()
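The package/file split performed in `__init__` above can be exercised on its own; a small standalone sketch of the same logic (the example path is illustrative, not a file that must exist):

```python
from os.path import basename, dirname

def split_module_path(path):
    # Mirrors PackageFileLoader.__init__: dotted package from the directory
    # part, bare file name from the last component
    package = dirname(path).replace("/", ".").replace("\\", ".")
    return package, basename(path)

print(split_module_path("bq_test_kit/resource_loaders/base_resource_loader.py"))
# -> ('bq_test_kit.resource_loaders', 'base_resource_loader.py')
```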

# --- pypy/module/cpyext/test/test_longobject.py (repo: akercheval/espy, licenses: Apache-2.0, OpenSSL) ---
import sys
import pytest
from pypy.interpreter.error import OperationError
from rpython.rtyper.lltypesystem import rffi, lltype
from rpython.rlib.rarithmetic import maxint
from pypy.objspace.std.longobject import W_LongObject
from pypy.module.cpyext.test.test_api import BaseApiTest
from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase
from pypy.module.cpyext.longobject import (PyLong_Check, PyLong_CheckExact,
PyLong_FromLong, PyLong_AsLong, PyLong_AsUnsignedLong, PyLong_AsLongLong,
PyLong_AsUnsignedLongLong, PyLong_AsUnsignedLongLongMask)
class TestLongObject(BaseApiTest):
def test_FromLong(self, space, api):
w_value = api.PyLong_FromLong(3)
assert isinstance(w_value, W_LongObject)
assert space.unwrap(w_value) == 3
w_value = api.PyLong_FromLong(sys.maxint)
assert isinstance(w_value, W_LongObject)
assert space.unwrap(w_value) == sys.maxint
def test_aslong(self, space):
w_value = PyLong_FromLong(space, (sys.maxint - 1) / 2)
assert isinstance(w_value, W_LongObject)
w_value = space.mul(w_value, space.wrap(2))
assert isinstance(w_value, W_LongObject)
value = PyLong_AsLong(space, w_value)
assert value == (sys.maxint - 1)
w_value = space.mul(w_value, space.wrap(2))
with pytest.raises(OperationError) as excinfo:
PyLong_AsLong(space, w_value)
assert excinfo.value.w_type is space.w_OverflowError
value = PyLong_AsUnsignedLong(space, w_value)
assert value == (sys.maxint - 1) * 2
with pytest.raises(OperationError) as excinfo:
PyLong_AsUnsignedLong(space, space.newint(-1))
assert excinfo.value.w_type is space.w_OverflowError
def test_as_ssize_t(self, space, api):
w_value = space.newlong(2)
assert isinstance(w_value, W_LongObject)
value = api.PyLong_AsSsize_t(w_value)
assert value == 2
w_val2 = api.PyLong_FromSsize_t(2)
assert isinstance(w_val2, W_LongObject)
assert space.eq_w(w_value, w_val2)
def test_fromdouble(self, space, api):
w_value = api.PyLong_FromDouble(-12.74)
assert isinstance(w_value, W_LongObject)
assert space.unwrap(w_value) == -12
assert api.PyLong_AsDouble(w_value) == -12
def test_type_check(self, space, api):
w_l = space.wrap(sys.maxint + 1)
assert PyLong_Check(space, w_l)
assert PyLong_CheckExact(space, w_l)
w_i = space.wrap(sys.maxint)
assert not PyLong_Check(space, w_i)
assert not PyLong_CheckExact(space, w_i)
L = space.appexec([], """():
class L(long):
pass
return L
""")
l = space.call_function(L)
assert PyLong_Check(space, l)
assert not PyLong_CheckExact(space, l)
def test_as_longlong(self, space):
assert PyLong_AsLongLong(space, space.wrap(1 << 62)) == 1 << 62
with pytest.raises(OperationError) as excinfo:
PyLong_AsLongLong(space, space.wrap(1 << 63))
assert excinfo.value.w_type is space.w_OverflowError
assert PyLong_AsUnsignedLongLong(space, space.wrap(1 << 63)) == 1 << 63
with pytest.raises(OperationError) as excinfo:
PyLong_AsUnsignedLongLong(space, space.wrap(1 << 64))
assert excinfo.value.w_type is space.w_OverflowError
assert PyLong_AsUnsignedLongLongMask(space, space.wrap(1 << 64)) == 0
with pytest.raises(OperationError) as excinfo:
PyLong_AsUnsignedLongLong(space, space.newint(-1))
assert excinfo.value.w_type is space.w_OverflowError
def test_as_long_and_overflow(self, space, api):
overflow = lltype.malloc(rffi.CArrayPtr(rffi.INT_real).TO, 1, flavor='raw')
assert api.PyLong_AsLongAndOverflow(
space.wrap(sys.maxint), overflow) == sys.maxint
assert api.PyLong_AsLongAndOverflow(
space.wrap(-sys.maxint - 2), overflow) == -1
assert not api.PyErr_Occurred()
assert overflow[0] == -1
lltype.free(overflow, flavor='raw')
def test_as_longlong_and_overflow(self, space, api):
overflow = lltype.malloc(rffi.CArrayPtr(rffi.INT_real).TO, 1, flavor='raw')
assert api.PyLong_AsLongLongAndOverflow(
space.wrap(1<<62), overflow) == 1<<62
assert api.PyLong_AsLongLongAndOverflow(
space.wrap(1<<63), overflow) == -1
assert not api.PyErr_Occurred()
assert overflow[0] == 1
assert api.PyLong_AsLongLongAndOverflow(
space.wrap(-1<<64), overflow) == -1
assert not api.PyErr_Occurred()
assert overflow[0] == -1
lltype.free(overflow, flavor='raw')
def test_as_voidptr(self, space, api):
# CPython returns an int (not a long) depending on the value
# passed to PyLong_FromVoidPtr(). In all cases, NULL becomes
# the int 0.
w_l = api.PyLong_FromVoidPtr(lltype.nullptr(rffi.VOIDP.TO))
assert space.is_w(space.type(w_l), space.w_int)
assert space.unwrap(w_l) == 0
assert api.PyLong_AsVoidPtr(w_l) == lltype.nullptr(rffi.VOIDP.TO)
# Positive values also return an int (assuming, like always in
# PyPy, that an int is big enough to store any pointer).
p = rffi.cast(rffi.VOIDP, maxint)
w_l = api.PyLong_FromVoidPtr(p)
assert space.is_w(space.type(w_l), space.w_int)
assert space.unwrap(w_l) == maxint
assert api.PyLong_AsVoidPtr(w_l) == p
# Negative values always return a long.
p = rffi.cast(rffi.VOIDP, -maxint-1)
w_l = api.PyLong_FromVoidPtr(p)
assert space.is_w(space.type(w_l), space.w_long)
assert space.unwrap(w_l) == maxint+1
assert api.PyLong_AsVoidPtr(w_l) == p
def test_sign_and_bits(self, space, api):
assert api._PyLong_Sign(space.wraplong(0L)) == 0
assert api._PyLong_Sign(space.wraplong(2L)) == 1
assert api._PyLong_Sign(space.wraplong(-2L)) == -1
assert api._PyLong_NumBits(space.wrap(0)) == 0
assert api._PyLong_NumBits(space.wrap(1)) == 1
assert api._PyLong_NumBits(space.wrap(-1)) == 1
assert api._PyLong_NumBits(space.wrap(2)) == 2
assert api._PyLong_NumBits(space.wrap(-2)) == 2
assert api._PyLong_NumBits(space.wrap(3)) == 2
assert api._PyLong_NumBits(space.wrap(-3)) == 2
class AppTestLongObject(AppTestCpythonExtensionBase):
def test_fromunsignedlong(self):
module = self.import_extension('foo', [
("from_unsignedlong", "METH_NOARGS",
"""
PyObject * obj;
obj = PyLong_FromUnsignedLong((unsigned long)-1);
if (obj->ob_type != &PyLong_Type)
{
Py_DECREF(obj);
PyErr_SetString(PyExc_ValueError,
"PyLong_FromLongLong did not return PyLongObject");
return NULL;
}
return obj;
""")])
import sys
assert module.from_unsignedlong() == 2 * sys.maxint + 1
def test_fromlonglong(self):
module = self.import_extension('foo', [
("from_longlong", "METH_VARARGS",
"""
int val;
PyObject * obj;
if (!PyArg_ParseTuple(args, "i", &val))
return NULL;
obj = PyLong_FromLongLong((long long)val);
if (obj->ob_type != &PyLong_Type)
{
Py_DECREF(obj);
PyErr_SetString(PyExc_ValueError,
"PyLong_FromLongLong did not return PyLongObject");
return NULL;
}
return obj;
"""),
("from_unsignedlonglong", "METH_VARARGS",
"""
int val;
PyObject * obj;
if (!PyArg_ParseTuple(args, "i", &val))
return NULL;
obj = PyLong_FromUnsignedLongLong((long long)val);
if (obj->ob_type != &PyLong_Type)
{
Py_DECREF(obj);
PyErr_SetString(PyExc_ValueError,
"PyLong_FromLongLong did not return PyLongObject");
return NULL;
}
return obj;
""")])
assert module.from_longlong(-1) == -1
assert module.from_longlong(0) == 0
assert module.from_unsignedlonglong(0) == 0
assert module.from_unsignedlonglong(-1) == (1<<64) - 1
def test_from_size_t(self):
module = self.import_extension('foo', [
("from_unsignedlong", "METH_NOARGS",
"""
return PyLong_FromSize_t((size_t)-1);
""")])
import sys
assert module.from_unsignedlong() == 2 * sys.maxint + 1
def test_fromstring(self):
module = self.import_extension('foo', [
("from_string", "METH_NOARGS",
"""
return PyLong_FromString("0x1234", NULL, 0);
"""),
])
assert module.from_string() == 0x1234
def test_frombytearray(self):
module = self.import_extension('foo', [
("from_bytearray", "METH_VARARGS",
"""
int little_endian, is_signed;
if (!PyArg_ParseTuple(args, "ii", &little_endian, &is_signed))
return NULL;
return _PyLong_FromByteArray((unsigned char*)"\x9A\xBC", 2,
little_endian, is_signed);
"""),
])
assert module.from_bytearray(True, False) == 0xBC9A
assert module.from_bytearray(True, True) == -0x4366
assert module.from_bytearray(False, False) == 0x9ABC
assert module.from_bytearray(False, True) == -0x6544
def test_frombytearray_2(self):
module = self.import_extension('foo', [
("from_bytearray", "METH_VARARGS",
"""
int little_endian, is_signed;
if (!PyArg_ParseTuple(args, "ii", &little_endian, &is_signed))
return NULL;
return _PyLong_FromByteArray((unsigned char*)"\x9A\xBC\x41", 3,
little_endian, is_signed);
"""),
])
assert module.from_bytearray(True, False) == 0x41BC9A
assert module.from_bytearray(True, True) == 0x41BC9A
assert module.from_bytearray(False, False) == 0x9ABC41
assert module.from_bytearray(False, True) == -0x6543BF
def test_fromunicode(self):
module = self.import_extension('foo', [
("from_unicode", "METH_O",
"""
Py_UNICODE* u = PyUnicode_AsUnicode(args);
return Py_BuildValue("NN",
PyLong_FromUnicode(u, 6, 10),
PyLong_FromUnicode(u, 6, 16));
"""),
])
# A string with arabic digits. 'BAD' is after the 6th character.
assert module.from_unicode(u' 1\u0662\u0663\u0664BAD') == (1234, 4660)
def test_strtol(self):
module = self.import_extension('foo', [
("from_str", "METH_NOARGS",
"""
const char *str =" 400";
char * end;
if (400 != PyOS_strtoul(str, &end, 10))
return PyLong_FromLong(1);
if (str + strlen(str) != end)
return PyLong_FromLong(2);
if (400 != PyOS_strtol(str, &end, 10))
return PyLong_FromLong(3);
if (str + strlen(str) != end)
return PyLong_FromLong(4);
return PyLong_FromLong(0);
""")])
assert module.from_str() == 0
def test_slots(self):
module = self.import_extension('foo', [
("has_sub", "METH_NOARGS",
"""
PyObject *ret, *obj = PyLong_FromLong(42);
if (obj->ob_type != &PyLong_Type)
ret = PyLong_FromLong(-2);
else
{
if (obj->ob_type->tp_as_number->nb_subtract)
ret = obj->ob_type->tp_as_number->nb_subtract(obj, obj);
else
ret = PyLong_FromLong(-1);
}
Py_DECREF(obj);
return ret;
"""),
("has_pow", "METH_NOARGS",
"""
PyObject *ret, *obj = PyLong_FromLong(42);
PyObject *one = PyLong_FromLong(1);
if (obj->ob_type->tp_as_number->nb_power)
ret = obj->ob_type->tp_as_number->nb_power(obj, one, one);
else
ret = PyLong_FromLong(-1);
Py_DECREF(one);
Py_DECREF(obj);
return ret;
"""),
("has_hex", "METH_NOARGS",
"""
PyObject *ret, *obj = PyLong_FromLong(42);
if (obj->ob_type->tp_as_number->nb_hex)
ret = obj->ob_type->tp_as_number->nb_hex(obj);
else
ret = PyLong_FromLong(-1);
Py_DECREF(obj);
return ret;
"""),
("has_oct", "METH_NOARGS",
"""
PyObject *ret, *obj = PyLong_FromLong(42);
if (obj->ob_type->tp_as_number->nb_oct)
ret = obj->ob_type->tp_as_number->nb_oct(obj);
else
ret = PyLong_FromLong(-1);
Py_DECREF(obj);
return ret;
""")])
assert module.has_sub() == 0
assert module.has_pow() == 0
assert module.has_hex() == '0x2aL'
assert module.has_oct() == '052L'
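The `_PyLong_NumBits` expectations in `test_sign_and_bits` above correspond to the bit length of the value's absolute magnitude, which is exactly what Python's own `int.bit_length()` returns; a quick pure-Python cross-check of the same pairs:

```python
# Same (value, bits) pairs as the _PyLong_NumBits assertions above
cases = [(0, 0), (1, 1), (-1, 1), (2, 2), (-2, 2), (3, 2), (-3, 2)]
for value, bits in cases:
    assert value.bit_length() == bits
print("all", len(cases), "bit-length cases hold")
```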

# --- tests/test_pks.py (repo: sorXCode/fastapi-crudrouter, license: MIT) ---
from pydantic import BaseModel
from . import test_router, PotatoType
potato_type = PotatoType(name='russet', origin="Canada")
URL = '/potato_type'
def test_get(string_pk_client):
test_router.test_get(string_pk_client, URL)
def test_post(string_pk_client):
test_router.test_post(string_pk_client, URL, potato_type)
def test_get_one(string_pk_client):
test_router.test_get_one(string_pk_client, URL, PotatoType(name='kenebec', origin="Ireland"), 'name')
def test_delete_one(string_pk_client):
test_router.test_delete_one(string_pk_client, URL, PotatoType(name='golden', origin="Ireland"), 'name')
def test_delete_all(string_pk_client):
test_router.test_delete_all(string_pk_client, URL, PotatoType(name='red', origin="Ireland"), PotatoType(name='brown', origin="Ireland"))

# --- EmergencyServices/admin.py (repo: gurleen-kaur1313/RAISE-BACKEND, license: MIT) ---
from django.contrib import admin
from .models import PoliceEmergency, HealthEmergency, HealthTest, Jobs, UnsafeAreas
admin.site.register(HealthEmergency)
admin.site.register(PoliceEmergency)
admin.site.register(HealthTest)
admin.site.register(Jobs)
admin.site.register(UnsafeAreas)

# --- security/backends/signals.py (repo: druids/django-security, license: MIT) ---
from django.dispatch import Signal, receiver
from security.config import settings
input_request_started = Signal()
input_request_finished = Signal()
input_request_error = Signal()
output_request_started = Signal()
output_request_finished = Signal()
output_request_error = Signal()
command_started = Signal()
command_output_updated = Signal()
command_finished = Signal()
command_error = Signal()
celery_task_invocation_started = Signal()
celery_task_invocation_triggered = Signal()
celery_task_invocation_ignored = Signal()
celery_task_invocation_timeout = Signal()
celery_task_invocation_expired = Signal()
celery_task_run_started = Signal()
celery_task_run_succeeded = Signal()
celery_task_run_failed = Signal()
celery_task_run_retried = Signal()
celery_task_run_output_updated = Signal()
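These module-level signals are consumed through Django's usual `connect`/`send` API. A dependency-free sketch of that flow using a tiny stand-in class (`MiniSignal` is illustrative only, not part of Django, and the payload key `name` is an assumption):

```python
class MiniSignal:
    # Minimal stand-in mirroring the connect/send shape of
    # django.dispatch.Signal
    def __init__(self):
        self._receivers = []

    def connect(self, receiver_fn):
        self._receivers.append(receiver_fn)

    def send(self, sender, **named):
        # Returns [(receiver, response), ...], like Django's Signal.send
        return [(r, r(sender=sender, **named)) for r in self._receivers]

command_started_demo = MiniSignal()
seen = []
command_started_demo.connect(lambda sender, **kw: seen.append(kw["name"]))
command_started_demo.send(sender=None, name="migrate")
print(seen)  # ['migrate']
```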

# --- py_trans/__init__.py (repo: Itz-fork/py-trans, license: MIT) ---
# Copyright (c) 2021 - Itz-fork
# Project: py_trans
import os
import json
# py-trans version
def get_pytrans_version():
with open(f"{os.path.dirname(__file__)}/pytrans_data/version.json", "r") as jsn_f:
ver = json.load(jsn_f)
return ver["version"]
__version__ = get_pytrans_version()
from .translator import PyTranslator
from .async_translator import Async_PyTranslator
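`get_pytrans_version` above simply reads a `version` key out of a bundled JSON file. The same pattern, sketched against a temporary file standing in for `pytrans_data/version.json` (the `read_version` helper and the "1.0.0" value are illustrative):

```python
import json
import os
import tempfile

def read_version(path):
    # Same shape as get_pytrans_version, but with an explicit path argument
    with open(path, "r") as jsn_f:
        return json.load(jsn_f)["version"]

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "version.json")
    with open(p, "w") as f:
        json.dump({"version": "1.0.0"}, f)
    print(read_version(p))  # 1.0.0
```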

# --- {{cookiecutter.project_slug}}/{{cookiecutter.project_slug}}/{{cookiecutter.project_slug}}exceptions.py (repo: yutanicorp/python_project, license: MIT) ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# File: {{cookiecutter.project_slug}}exceptions.py
{% if cookiecutter.license == "MIT" -%}
#
# Copyright {{ cookiecutter.year }} {{ cookiecutter.full_name }}
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
{% elif cookiecutter.license == "BSD-3" -%}
#
# Copyright {{ cookiecutter.year }} {{ cookiecutter.full_name }}
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation and/or
# other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
# INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
# OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
{% elif cookiecutter.license == "GNU GPL v3.0" -%}
#
# Copyright (C) {{ cookiecutter.year }} {{ cookiecutter.full_name }}
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
{% elif cookiecutter.license == "Apache Software License 2.0" -%}
#
# Copyright {{ cookiecutter.year }} {{ cookiecutter.full_name }}
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
{% endif %}
"""
Custom exception code for {{cookiecutter.project_slug}}.
.. _Google Python Style Guide:
http://google.github.io/styleguide/pyguide.html
"""
__author__ = '''{{cookiecutter.full_name}} <{{cookiecutter.email}}>'''
__docformat__ = '''google'''
__date__ = '''{{cookiecutter.release_date}}'''
__copyright__ = '''Copyright {{ cookiecutter.year }}, {{ cookiecutter.full_name }}'''
__credits__ = ["{{cookiecutter.full_name}}"]
__license__ = '''{{cookiecutter.license}}'''
__maintainer__ = '''{{cookiecutter.full_name}}'''
__email__ = '''<{{cookiecutter.email}}>'''
__status__ = '''Development''' # "Prototype", "Development", "Production".
| 46.990566 | 85 | 0.74724 | 689 | 4,981 | 5.332366 | 0.367199 | 0.012248 | 0.043549 | 0.043549 | 0.171475 | 0.11595 | 0.037017 | 0.037017 | 0.037017 | 0.037017 | 0 | 0.003353 | 0.161815 | 4,981 | 105 | 86 | 47.438095 | 0.876647 | 0.803453 | 0 | 0 | 0 | 0 | 0.410977 | 0.269076 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
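The template above selects one of four license headers with Jinja2 `{% if %}`/`{% elif %}` blocks keyed on `cookiecutter.license`. A minimal sketch of that branching mechanism (the header text below is illustrative, not the real license wording):

```python
# Jinja2 conditional rendering, as used by the cookiecutter template above.
from jinja2 import Template

tpl = Template(
    "{% if license == 'MIT' -%}"
    "MIT header"
    "{% elif license == 'GNU GPL v3.0' -%}"
    "GPL header"
    "{% endif %}"
)
print(tpl.render(license="MIT"))          # MIT header
print(repr(tpl.render(license="BSD-3")))  # '' (no matching branch)
```

The `-%}` whitespace-control marker matches the template's own style, trimming whitespace after each tag so the chosen header starts flush.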
0a190dec067c7e49bf1ee58b0928e8cdb1780846 | 155 | py | Python | All_Source_Code/StochasticGradientDescent/StochasticGradientDescent_1.py | APMonitor/pds | fa9a7ec920802de346dcdf7f5dd92d752142c16f | [
"MIT"
] | 11 | 2021-01-21T09:46:29.000Z | 2022-03-16T19:33:10.000Z | All_Source_Code/StochasticGradientDescent/StochasticGradientDescent_1.py | the-mahapurush/pds | 7cb4087dd8e75cb1e9b2a4283966c798175f61f7 | [
"MIT"
] | 1 | 2022-03-16T19:47:09.000Z | 2022-03-16T20:11:50.000Z | All_Source_Code/StochasticGradientDescent/StochasticGradientDescent_1.py | the-mahapurush/pds | 7cb4087dd8e75cb1e9b2a4283966c798175f61f7 | [
"MIT"
] | 12 | 2021-02-08T21:11:11.000Z | 2022-03-20T12:42:49.000Z | from sklearn.linear_model import SGDClassifier
sgd = SGDClassifier(loss='modified_huber',shuffle=True,random_state=101)
sgd.fit(XA,yA)
yP = sgd.predict(XB) | 38.75 | 72 | 0.812903 | 24 | 155 | 5.125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02069 | 0.064516 | 155 | 4 | 73 | 38.75 | 0.827586 | 0 | 0 | 0 | 0 | 0 | 0.089744 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
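The snippet above assumes training data `XA`/`yA` and test features `XB` defined elsewhere in the original source. A self-contained sketch with synthetic stand-ins for those arrays:

```python
# Same SGDClassifier setup as above, made runnable with generated data.
# XA/yA/XB here are synthetic assumptions, not the original dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=4, random_state=101)
XA, XB, yA, yB = train_test_split(X, y, test_size=0.25, random_state=101)

sgd = SGDClassifier(loss='modified_huber', shuffle=True, random_state=101)
sgd.fit(XA, yA)
yP = sgd.predict(XB)
print(len(yP))  # 50 predictions, one per test row
```

The `modified_huber` loss is what makes `predict_proba` available on an otherwise linear SGD classifier.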
0a1a8b9f43c1198962a8efb9a9b5e6dd0892e131 | 601 | py | Python | src/ggrc_risk_assessments/migrations/versions/20140912230122_cd51533c624_indexes_on_updated_at.py | Killswitchz/ggrc-core | 2460df94daf66727af248ad821462692917c97a9 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2018-03-30T11:28:48.000Z | 2018-03-30T11:28:48.000Z | src/ggrc_risk_assessments/migrations/versions/20140912230122_cd51533c624_indexes_on_updated_at.py | trevordonnelly/ggrc-core | 499cf0d3cce70737b080991b12c203ec22015cea | [
"ECL-2.0",
"Apache-2.0"
] | 10 | 2018-07-06T00:04:23.000Z | 2021-02-26T21:13:20.000Z | src/ggrc_risk_assessments/migrations/versions/20140912230122_cd51533c624_indexes_on_updated_at.py | zidarsk8/ggrc-core | 2509c989eddf434249d3bef50c21e08dbf56c1a4 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2017-11-11T22:16:56.000Z | 2017-11-11T22:16:56.000Z | # Copyright (C) 2017 Google Inc.
# Licensed under http://www.apache.org/licenses/LICENSE-2.0 <see LICENSE file>
"""Indexes on updated_at
Revision ID: cd51533c624
Revises: 1afd15b0581f
Create Date: 2014-09-12 23:01:22.542186
"""
# revision identifiers, used by Alembic.
revision = 'cd51533c624'
down_revision = '1afd15b0581f'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_index('ix_risk_assessments_updated_at', 'risk_assessments', ['updated_at'], unique=False)
def downgrade():
op.drop_index('ix_risk_assessments_updated_at', table_name='risk_assessments')
| 22.259259 | 103 | 0.762063 | 84 | 601 | 5.261905 | 0.690476 | 0.081448 | 0.149321 | 0.162896 | 0.140271 | 0.140271 | 0 | 0 | 0 | 0 | 0 | 0.106262 | 0.123128 | 601 | 26 | 104 | 23.115385 | 0.732448 | 0.427621 | 0 | 0 | 0 | 0 | 0.374252 | 0.179641 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
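A hedged sketch of what `upgrade()`/`downgrade()` above do, run against an in-memory SQLite database with plain SQLAlchemy instead of Alembic's migration context; the table columns are assumptions for illustration:

```python
# Create and drop the updated_at index outside of Alembic.
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
meta = sa.MetaData()
risk = sa.Table(
    "risk_assessments", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("updated_at", sa.DateTime),
)
meta.create_all(engine)

# upgrade(): create the index on updated_at
idx = sa.Index("ix_risk_assessments_updated_at", risk.c.updated_at, unique=False)
idx.create(engine)
created = [ix["name"] for ix in sa.inspect(engine).get_indexes("risk_assessments")]
print(created)

# downgrade(): drop it again
idx.drop(engine)
```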
0a1fffdcf217883f341e90b432c90a851b2288f0 | 511 | py | Python | yamlconf/__init__.py | halfak/yamlconf | 6b84ca8a7f738a1fe6b3b67b30b55acaaa8a56a8 | [
"MIT"
] | 1 | 2019-01-20T23:03:08.000Z | 2019-01-20T23:03:08.000Z | yamlconf/__init__.py | halfak/yamlconf | 6b84ca8a7f738a1fe6b3b67b30b55acaaa8a56a8 | [
"MIT"
] | 3 | 2015-12-16T21:05:23.000Z | 2020-05-27T19:18:39.000Z | yamlconf/__init__.py | halfak/yamlconf | 6b84ca8a7f738a1fe6b3b67b30b55acaaa8a56a8 | [
"MIT"
] | 2 | 2015-09-15T04:38:09.000Z | 2020-05-27T17:58:26.000Z | from .import_module import import_module
from .import_path import import_path
from .load import load, loads
from .merge import merge
from .propagate_defaults import propagate_defaults
from .about import (__name__, __version__, __author__, __author_email__,
__description__, __license__, __url__)
__all__ = ["import_module", "import_path", "load", "loads", "merge",
           "propagate_defaults", "__name__", "__version__", "__author__",
           "__author_email__", "__description__", "__license__", "__url__"]
| 39.307692 | 78 | 0.765166 | 56 | 511 | 5.714286 | 0.321429 | 0.1125 | 0.1125 | 0.14375 | 0.30625 | 0.30625 | 0.30625 | 0.30625 | 0 | 0 | 0 | 0 | 0.176125 | 511 | 12 | 79 | 42.583333 | 0.760095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.7 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
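Python expects `__all__` to be a list of *name strings*; it controls what a star-import exposes (non-string entries raise a `TypeError` at import time). A stdlib sketch with a throwaway module — the module and function names here are illustrative:

```python
# __all__ gating demo: only names listed in __all__ survive a star-import.
import sys
import types

mod = types.ModuleType("yamlconf_demo")
exec(
    "def load():\n"
    "    return 'loaded'\n"
    "_private = 1\n"
    "__all__ = ['load']\n",
    mod.__dict__,
)
sys.modules["yamlconf_demo"] = mod  # make it importable by name

ns = {}
exec("from yamlconf_demo import *", ns)
print("load" in ns, "_private" in ns)  # True False
```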
0a2504c4310006e986b44ec23331257ea6fd9da7 | 635 | py | Python | mapswipe_workers/setup.py | jhollowe-forks/python-mapswipe-workers | 22d77d0af613c59d3527fd3ff5c3a05d98d6e34a | [
"Apache-2.0"
] | 16 | 2018-03-05T13:18:35.000Z | 2022-02-16T17:19:00.000Z | mapswipe_workers/setup.py | jhollowe-forks/python-mapswipe-workers | 22d77d0af613c59d3527fd3ff5c3a05d98d6e34a | [
"Apache-2.0"
] | 367 | 2018-02-14T22:54:39.000Z | 2022-03-29T14:23:26.000Z | mapswipe_workers/setup.py | jhollowe-forks/python-mapswipe-workers | 22d77d0af613c59d3527fd3ff5c3a05d98d6e34a | [
"Apache-2.0"
] | 8 | 2018-05-12T16:16:32.000Z | 2022-02-16T17:14:54.000Z | from setuptools import find_packages, setup
console_scripts = """
[console_scripts]
mapswipe_workers=mapswipe_workers.mapswipe_workers:cli
ms=mapswipe_workers.mapswipe_workers:cli
"""
with open("requirements.txt") as f:
requirements = f.read().splitlines()
setup(
name="mapswipe-workers",
version="3.0",
description="Install script for the MapSwipe-Python-Workers.",
author="B. Herfort, M. Schaub, M. Reinmuth",
author_email="",
url="www.mapswipe.org",
packages=find_packages(exclude=("docs", "python_scripts")),
install_requires=requirements,
entry_points=console_scripts,
)
| 27.608696 | 66 | 0.713386 | 75 | 635 | 5.853333 | 0.6 | 0.205011 | 0.157175 | 0.205011 | 0.150342 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003731 | 0.155906 | 635 | 22 | 67 | 28.863636 | 0.815299 | 0 | 0 | 0 | 0 | 0 | 0.44252 | 0.185827 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
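The `setup()` call above pulls `install_requires` from `requirements.txt` via `read().splitlines()`. A sketch of that pattern with a throwaway file (the pins below are illustrative, not the project's real dependencies):

```python
# Read requirement pins from a temp file the way setup.py reads
# requirements.txt.
import os
import tempfile

fd, path = tempfile.mkstemp(suffix=".txt", text=True)
with os.fdopen(fd, "w") as f:
    f.write("requests==2.31.0\nclick>=8.0\n")

with open(path) as f:
    requirements = f.read().splitlines()
print(requirements)  # ['requests==2.31.0', 'click>=8.0']
os.unlink(path)
```

`splitlines()` drops the trailing newline cleanly, so no empty final entry sneaks into `install_requires`.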
0a2a8b30b70f65673616978a1af7cc791bc0a045 | 19,935 | py | Python | tfx/extensions/google_cloud_ai_platform/training_clients.py | swap-10/tfx | 8e80ce2486b4d7b219dcff906d6930e62c5fdd45 | [
"Apache-2.0"
] | null | null | null | tfx/extensions/google_cloud_ai_platform/training_clients.py | swap-10/tfx | 8e80ce2486b4d7b219dcff906d6930e62c5fdd45 | [
"Apache-2.0"
] | null | null | null | tfx/extensions/google_cloud_ai_platform/training_clients.py | swap-10/tfx | 8e80ce2486b4d7b219dcff906d6930e62c5fdd45 | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""An abstract class for the Trainer for both CAIP and Vertex."""
import abc
import datetime
import json
import random
from typing import Any, Dict, List, Optional, Union
from absl import logging
from google.cloud.aiplatform import gapic
from google.cloud.aiplatform_v1beta1.types.custom_job import CustomJob
from google.cloud.aiplatform_v1beta1.types.job_state import JobState
from googleapiclient import discovery
from tfx import types
from tfx.types import artifact_utils
from tfx.utils import telemetry_utils
from tfx.utils import version_utils
# Default container image being used for CAIP training jobs.
_TFX_IMAGE = 'gcr.io/tfx-oss-public/tfx:{}'.format(
version_utils.get_image_version())
# Entrypoint of cloud AI platform training. The module comes from `tfx`
# package installation into a default location of 'python'.
_CONTAINER_COMMAND = ['python', '-m', 'tfx.scripts.run_executor']
_VERTEX_ENDPOINT_SUFFIX = '-aiplatform.googleapis.com'
_VERTEX_JOB_STATE_SUCCEEDED = JobState.JOB_STATE_SUCCEEDED
_VERTEX_JOB_STATE_FAILED = JobState.JOB_STATE_FAILED
_VERTEX_JOB_STATE_CANCELLED = JobState.JOB_STATE_CANCELLED
class AbstractJobClient(abc.ABC):
"""Abstract class interacting with CAIP CMLE job or Vertex CustomJob."""
JOB_STATES_COMPLETED = () # Job states for success, failure or cancellation
JOB_STATES_FAILED = () # Job states for failure or cancellation
def __init__(self):
self.create_client()
self._job_name = '' # Assigned in self.launch_job()
@abc.abstractmethod
def create_client(self) -> None:
"""Creates the job client.
Can also be used for recreating the job client (e.g. in the case of
communication failure).
Multiple job requests can be done in parallel if needed, by creating an
instance of the class for each job. Note that one class instance should
only be used for one job, as each instance stores variables (e.g. job_id)
specific to each job.
"""
pass
@abc.abstractmethod
def create_training_args(self, input_dict, output_dict, exec_properties,
executor_class_path, training_inputs,
job_id) -> Dict[str, Any]:
"""Get training args for runner._launch_aip_training.
The training args contain the inputs/outputs/exec_properties to the
tfx.scripts.run_executor module.
Args:
input_dict: Passthrough input dict for tfx.components.Trainer.executor.
output_dict: Passthrough input dict for tfx.components.Trainer.executor.
exec_properties: Passthrough input dict for
tfx.components.Trainer.executor.
executor_class_path: class path for TFX core default trainer.
training_inputs: Training input argument for AI Platform training job.
job_id: Job ID for AI Platform Training job. If not supplied,
system-determined unique ID is given.
Returns:
A dict containing the training arguments
"""
pass
@abc.abstractmethod
def _create_job_spec(
self,
job_id: str,
training_input: Dict[str, Any],
job_labels: Optional[Dict[str, str]] = None) -> Dict[str, Any]:
"""Creates the job spec.
Args:
job_id: The job ID of the AI Platform training job.
training_input: Training input argument for AI Platform training job.
job_labels: The dict of labels that will be attached to this job.
Returns:
The job specification.
"""
pass
@abc.abstractmethod
def launch_job(self,
job_id: str,
parent: str,
training_input: Dict[str, Any],
job_labels: Optional[Dict[str, str]] = None) -> None:
"""Launches a long-running job.
Args:
job_id: The job ID of the AI Platform training job.
parent: The project name in the form of 'projects/{project_id}'
training_input: Training input argument for AI Platform training job. See
https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#TrainingInput
for the detailed schema.
job_labels: The dict of labels that will be attached to this job.
"""
pass
@abc.abstractmethod
def get_job(self) -> Union[Dict[str, str], CustomJob]:
"""Gets the long-running job."""
pass
@abc.abstractmethod
def get_job_state(
self, response: Union[Dict[str, str], CustomJob]) -> Union[str, JobState]:
"""Gets the state of the long-running job.
Args:
response: The response from get_job
Returns:
The job state.
"""
pass
def get_job_name(self) -> str:
"""Gets the job name."""
return self._job_name
class CAIPJobClient(AbstractJobClient):
"""Class for interacting with CAIP CMLE job."""
JOB_STATES_COMPLETED = ('SUCCEEDED', 'FAILED', 'CANCELLED')
JOB_STATES_FAILED = ('FAILED', 'CANCELLED')
def create_client(self) -> None:
"""Creates the discovery job client.
Can also be used for recreating the job client (e.g. in the case of
communication failure).
Multiple job requests can be done in parallel if needed, by creating an
instance of the class for each job. Note that one class instance should
only be used for one job, as each instance stores variables (e.g. job_id)
specific to each job.
"""
self._client = discovery.build(
'ml',
'v1',
requestBuilder=telemetry_utils.TFXHttpRequest,
)
def create_training_args(self, input_dict: Dict[str, List[types.Artifact]],
output_dict: Dict[str, List[types.Artifact]],
exec_properties: Dict[str, Any],
executor_class_path: str,
training_inputs: Dict[str, Any],
job_id: Optional[str]) -> Dict[str, Any]:
"""Get training args for runner._launch_aip_training.
The training args contain the inputs/outputs/exec_properties to the
tfx.scripts.run_executor module.
Args:
input_dict: Passthrough input dict for tfx.components.Trainer.executor.
output_dict: Passthrough input dict for tfx.components.Trainer.executor.
exec_properties: Passthrough input dict for
tfx.components.Trainer.executor.
executor_class_path: class path for TFX core default trainer.
training_inputs: Training input argument for AI Platform training job.
'pythonModule', 'pythonVersion' and 'runtimeVersion' will be inferred.
For the full set of parameters, refer to
https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#TrainingInput
job_id: Job ID for AI Platform Training job. If not supplied,
system-determined unique ID is given. Refer to
https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#resource-job
Returns:
A dict containing the training arguments
"""
training_inputs = training_inputs.copy()
json_inputs = artifact_utils.jsonify_artifact_dict(input_dict)
logging.info('json_inputs=\'%s\'.', json_inputs)
json_outputs = artifact_utils.jsonify_artifact_dict(output_dict)
logging.info('json_outputs=\'%s\'.', json_outputs)
json_exec_properties = json.dumps(exec_properties, sort_keys=True)
logging.info('json_exec_properties=\'%s\'.', json_exec_properties)
# We use custom containers to launch training on AI Platform, which invokes
# the specified image using the container's entrypoint. The default
# entrypoint for TFX containers is to call scripts/run_executor.py. The
# arguments below are passed to this run_executor entry to run the executor
# specified in `executor_class_path`.
container_command = _CONTAINER_COMMAND + [
'--executor_class_path',
executor_class_path,
'--inputs',
json_inputs,
'--outputs',
json_outputs,
'--exec-properties',
json_exec_properties,
]
if not training_inputs.get('masterConfig'):
training_inputs['masterConfig'] = {
'imageUri': _TFX_IMAGE,
}
# Always use our own entrypoint instead of relying on container default.
if 'containerCommand' in training_inputs['masterConfig']:
logging.warn('Overriding custom value of containerCommand')
training_inputs['masterConfig']['containerCommand'] = container_command
# Pop project_id so AIP doesn't complain about an unexpected parameter.
# It's been a stowaway in aip_args and has finally reached its destination.
project = training_inputs.pop('project')
with telemetry_utils.scoped_labels(
{telemetry_utils.LABEL_TFX_EXECUTOR: executor_class_path}):
job_labels = telemetry_utils.make_labels_dict()
# 'tfx_YYYYmmddHHMMSS' is the default job ID if not explicitly specified.
job_id = job_id or 'tfx_{}'.format(
datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
training_args = {
'job_id': job_id,
'project': project,
'training_input': training_inputs,
'job_labels': job_labels
}
return training_args
def _create_job_spec(
self,
job_id: str,
training_input: Dict[str, Any],
job_labels: Optional[Dict[str, str]] = None) -> Dict[str, Any]:
"""Creates the job spec.
Args:
job_id: The job ID of the AI Platform training job.
training_input: Training input argument for AI Platform training job. See
https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#TrainingInput
for the detailed schema.
job_labels: The dict of labels that will be attached to this job.
Returns:
The job specification. See
https://cloud.google.com/ai-platform/training/docs/reference/rest/v1/projects.jobs
"""
job_spec = {
'jobId': job_id,
'trainingInput': training_input,
'labels': job_labels,
}
return job_spec
def launch_job(self,
job_id: str,
project: str,
training_input: Dict[str, Any],
job_labels: Optional[Dict[str, str]] = None) -> None:
"""Launches a long-running job.
Args:
job_id: The job ID of the AI Platform training job.
project: The GCP project under which the training job will be executed.
training_input: Training input argument for AI Platform training job. See
https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#TrainingInput
for the detailed schema.
job_labels: The dict of labels that will be attached to this job.
"""
parent = 'projects/{}'.format(project)
job_spec = self._create_job_spec(job_id, training_input, job_labels)
# Submit job to AIP Training
logging.info('TrainingInput=%s', training_input)
logging.info('Submitting job=\'%s\', project=\'%s\' to AI Platform.',
job_id, parent)
request = self._client.projects().jobs().create(
body=job_spec, parent=parent)
self._job_name = '{}/jobs/{}'.format(parent, job_id)
request.execute()
def get_job(self) -> Dict[str, str]:
"""Gets the long-running job."""
request = self._client.projects().jobs().get(name=self._job_name)
return request.execute()
def get_job_state(self, response) -> str:
"""Gets the state of the long-running job.
Args:
response: The response from get_job
Returns:
The job state.
"""
return response['state']
class VertexJobClient(AbstractJobClient):
"""Class for interacting with Vertex CustomJob."""
JOB_STATES_COMPLETED = (_VERTEX_JOB_STATE_SUCCEEDED, _VERTEX_JOB_STATE_FAILED,
_VERTEX_JOB_STATE_CANCELLED)
JOB_STATES_FAILED = (_VERTEX_JOB_STATE_FAILED, _VERTEX_JOB_STATE_CANCELLED)
def __init__(self, vertex_region: str):
if vertex_region is None:
raise ValueError('Please specify a region for Vertex training.')
self._region = vertex_region
super().__init__()
def create_client(self) -> None:
"""Creates the Gapic job client.
Can also be used for recreating the job client (e.g. in the case of
communication failure).
Multiple job requests can be done in parallel if needed, by creating an
instance of the class for each job. Note that one class instance should
only be used for one job, as each instance stores variables (e.g. job_id)
specific to each job.
"""
self._client = gapic.JobServiceClient(
client_options=dict(
api_endpoint=self._region + _VERTEX_ENDPOINT_SUFFIX))
def create_training_args(self, input_dict: Dict[str, List[types.Artifact]],
output_dict: Dict[str, List[types.Artifact]],
exec_properties: Dict[str, Any],
executor_class_path: str,
training_inputs: Dict[str, Any],
job_id: Optional[str]) -> Dict[str, Any]:
"""Get training args for runner._launch_aip_training.
The training args contain the inputs/outputs/exec_properties to the
tfx.scripts.run_executor module.
Args:
input_dict: Passthrough input dict for tfx.components.Trainer.executor.
output_dict: Passthrough input dict for tfx.components.Trainer.executor.
exec_properties: Passthrough input dict for
tfx.components.Trainer.executor.
executor_class_path: class path for TFX core default trainer.
training_inputs: Spec for CustomJob for AI Platform (Unified) custom
training job. See
https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1/CustomJobSpec
for the detailed schema.
job_id: Display name for AI Platform (Unified) custom training job. If not
supplied, system-determined unique ID is given. Refer to
https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1/projects.locations.customJobs
Returns:
A dict containing the training arguments
"""
training_inputs = training_inputs.copy()
json_inputs = artifact_utils.jsonify_artifact_dict(input_dict)
logging.info('json_inputs=\'%s\'.', json_inputs)
json_outputs = artifact_utils.jsonify_artifact_dict(output_dict)
logging.info('json_outputs=\'%s\'.', json_outputs)
json_exec_properties = json.dumps(exec_properties, sort_keys=True)
logging.info('json_exec_properties=\'%s\'.', json_exec_properties)
# We use custom containers to launch training on AI Platform (unified),
# which invokes the specified image using the container's entrypoint. The
# default entrypoint for TFX containers is to call scripts/run_executor.py.
# The arguments below are passed to this run_executor entry to run the
# executor specified in `executor_class_path`.
container_command = _CONTAINER_COMMAND + [
'--executor_class_path',
executor_class_path,
'--inputs',
json_inputs,
'--outputs',
json_outputs,
'--exec-properties',
json_exec_properties,
]
if not training_inputs.get('worker_pool_specs'):
training_inputs['worker_pool_specs'] = [{}]
for worker_pool_spec in training_inputs['worker_pool_specs']:
if not worker_pool_spec.get('container_spec'):
worker_pool_spec['container_spec'] = {
'image_uri': _TFX_IMAGE,
}
# Always use our own entrypoint instead of relying on container default.
if 'command' in worker_pool_spec['container_spec']:
logging.warn('Overriding custom value of container_spec.command')
worker_pool_spec['container_spec']['command'] = container_command
# Pop project_id so AIP doesn't complain about an unexpected parameter.
# It's been a stowaway in aip_args and has finally reached its destination.
project = training_inputs.pop('project')
with telemetry_utils.scoped_labels(
{telemetry_utils.LABEL_TFX_EXECUTOR: executor_class_path}):
job_labels = telemetry_utils.make_labels_dict()
# 'tfx_YYYYmmddHHMMSS_xxxxxxxx' is the default job display name if not
# explicitly specified.
job_id = job_id or 'tfx_{}_{}'.format(
datetime.datetime.now().strftime('%Y%m%d%H%M%S'),
'%08x' % random.getrandbits(32))
training_args = {
'job_id': job_id,
'project': project,
'training_input': training_inputs,
'job_labels': job_labels
}
return training_args
def _create_job_spec(
self,
job_id: str,
training_input: Dict[str, Any],
job_labels: Optional[Dict[str, str]] = None) -> Dict[str, Any]:
"""Creates the job spec.
Args:
job_id: The display name of the AI Platform (Unified) custom training job.
training_input: Spec for CustomJob for AI Platform (Unified) custom
training job. See
https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1/CustomJobSpec
for the detailed schema.
job_labels: The dict of labels that will be attached to this job.
Returns:
The CustomJob. See
https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1/projects.locations.customJobs
"""
job_spec = {
'display_name': job_id,
'job_spec': training_input,
'labels': job_labels,
}
return job_spec
def launch_job(self,
job_id: str,
project: str,
training_input: Dict[str, Any],
job_labels: Optional[Dict[str, str]] = None) -> None:
"""Launches a long-running job.
Args:
job_id: The display name of the AI Platform (Unified) custom training job.
project: The GCP project under which the training job will be executed.
training_input: Spec for CustomJob for AI Platform (Unified) custom
training job. See
https://cloud.google.com/ai-platform-unified/docs/reference/rest/v1/CustomJobSpec
for the detailed schema.
job_labels: The dict of labels that will be attached to this job.
"""
parent = 'projects/{project}/locations/{location}'.format(
project=project, location=self._region)
job_spec = self._create_job_spec(job_id, training_input, job_labels)
# Submit job to AIP Training
logging.info('TrainingInput=%s', training_input)
logging.info('Submitting custom job=\'%s\', project=\'%s\''
' to AI Platform (Unified).', job_id, parent)
response = self._client.create_custom_job(parent=parent,
custom_job=job_spec)
self._job_name = response.name
def get_job(self) -> CustomJob:
"""Gets the long-running job."""
return self._client.get_custom_job(name=self._job_name)
def get_job_state(self, response) -> JobState:
"""Gets the state of the long-running job.
Args:
response: The response from get_job
Returns:
The job state.
"""
return response.state
def get_job_client(
enable_vertex: Optional[bool] = False,
vertex_region: Optional[str] = None
) -> Union[CAIPJobClient, VertexJobClient]:
"""Gets the job client.
Args:
enable_vertex: Whether to enable Vertex
vertex_region: Region for training endpoint in Vertex. Required when
enable_vertex is True; VertexJobClient raises ValueError if it is None.
Returns:
The corresponding job client.
"""
if enable_vertex:
return VertexJobClient(vertex_region)
return CAIPJobClient()
| 37.122905 | 105 | 0.684976 | 2,617 | 19,935 | 5.043179 | 0.129156 | 0.01629 | 0.012123 | 0.019094 | 0.757312 | 0.70988 | 0.693893 | 0.659115 | 0.652599 | 0.652599 | 0 | 0.00188 | 0.226235 | 19,935 | 536 | 106 | 37.192164 | 0.853744 | 0.466015 | 0 | 0.5 | 0 | 0 | 0.1089 | 0.02273 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09322 | false | 0.025424 | 0.059322 | 0 | 0.237288 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
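The clients above default the job ID to `'tfx_YYYYmmddHHMMSS'` (CAIP) or `'tfx_YYYYmmddHHMMSS_xxxxxxxx'` (Vertex, with a random 8-hex-digit suffix). A stdlib sketch of that naming scheme, lifted from the `create_training_args` implementations:

```python
# Default job-ID construction used when no explicit job_id is supplied.
import datetime
import random

timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
caip_job_id = 'tfx_{}'.format(timestamp)
vertex_job_id = 'tfx_{}_{}'.format(timestamp, '%08x' % random.getrandbits(32))
print(caip_job_id, vertex_job_id)
```

The random suffix matters on Vertex because `display_name` collisions across retries are otherwise easy to hit.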
0a2b289aad34482a327c3dee988beb4b858acea5 | 837 | py | Python | comps/views/dancer_heats.py | dlanghorne0428/dancesport-tracker-projec | e55d91a4f03c26d6ee8c28846a809064adfdb158 | [
"MIT"
] | null | null | null | comps/views/dancer_heats.py | dlanghorne0428/dancesport-tracker-projec | e55d91a4f03c26d6ee8c28846a809064adfdb158 | [
"MIT"
] | 87 | 2020-04-15T22:29:03.000Z | 2022-01-02T02:21:28.000Z | comps/views/dancer_heats.py | dlanghorne0428/dancesport-tracker-projec | e55d91a4f03c26d6ee8c28846a809064adfdb158 | [
"MIT"
] | null | null | null | from django.core.paginator import Paginator
from django.db.models import Q
from django.shortcuts import render, get_object_or_404
from comps.models.comp import Comp
from comps.models.heat_entry import Heat_Entry
from rankings.models import Dancer
from comps.filters import HeatFilter
def dancer_heats(request, comp_id, dancer_id):
comp = get_object_or_404(Comp, pk=comp_id)
dancer = get_object_or_404(Dancer, pk=dancer_id)
heat_entries = Heat_Entry.objects.filter(heat__comp=comp).filter(Q(couple__dancer_1=dancer) | Q(couple__dancer_2=dancer)).distinct().order_by('heat__time')
paginator = Paginator(heat_entries, 16)
page_number = request.GET.get('page')
page_obj = paginator.get_page(page_number)
return render(request, "comps/dancer_heats.html", {'comp': comp, 'page_obj': page_obj, 'dancer': dancer})
| 44.052632 | 159 | 0.781362 | 129 | 837 | 4.782946 | 0.333333 | 0.048622 | 0.053485 | 0.068071 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.114695 | 837 | 18 | 160 | 46.5 | 0.815115 | 0 | 0 | 0 | 0 | 0 | 0.065711 | 0.027479 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.466667 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0a2cdf7b238508286e7a161633f67503340e94b9 | 2,209 | py | Python | hubspot/crm/pipelines/__init__.py | Ronfer/hubspot-api-python | 1c87274ecbba4aa3c7728f890ccc6e77b2b6d2e4 | [
"Apache-2.0"
] | 117 | 2020-04-06T08:22:53.000Z | 2022-03-18T03:41:29.000Z | hubspot/crm/pipelines/__init__.py | Ronfer/hubspot-api-python | 1c87274ecbba4aa3c7728f890ccc6e77b2b6d2e4 | [
"Apache-2.0"
] | 62 | 2020-04-06T16:21:06.000Z | 2022-03-17T16:50:44.000Z | hubspot/crm/pipelines/__init__.py | Ronfer/hubspot-api-python | 1c87274ecbba4aa3c7728f890ccc6e77b2b6d2e4 | [
"Apache-2.0"
] | 45 | 2020-04-06T16:13:52.000Z | 2022-03-30T21:33:17.000Z | # coding: utf-8
# flake8: noqa
"""
CRM Pipelines
Pipelines represent distinct stages in a workflow, like closing a deal or servicing a support ticket. These endpoints provide access to read and modify pipelines in HubSpot. Pipelines support `deals` and `tickets` object types. ## Pipeline ID validation When calling endpoints that take pipelineId as a parameter, that ID must correspond to an existing, un-archived pipeline. Otherwise the request will fail with a `404 Not Found` response. # noqa: E501
The version of the OpenAPI document: v3
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
__version__ = "1.0.0"
# import apis into sdk package
from hubspot.crm.pipelines.api.pipeline_stages_api import PipelineStagesApi
from hubspot.crm.pipelines.api.pipelines_api import PipelinesApi
# import ApiClient
from hubspot.crm.pipelines.api_client import ApiClient
from hubspot.crm.pipelines.configuration import Configuration
from hubspot.crm.pipelines.exceptions import OpenApiException
from hubspot.crm.pipelines.exceptions import ApiTypeError
from hubspot.crm.pipelines.exceptions import ApiValueError
from hubspot.crm.pipelines.exceptions import ApiKeyError
from hubspot.crm.pipelines.exceptions import ApiException
# import models into sdk package
from hubspot.crm.pipelines.models.collection_response_pipeline import CollectionResponsePipeline
from hubspot.crm.pipelines.models.collection_response_pipeline_stage import CollectionResponsePipelineStage
from hubspot.crm.pipelines.models.error import Error
from hubspot.crm.pipelines.models.error_detail import ErrorDetail
from hubspot.crm.pipelines.models.next_page import NextPage
from hubspot.crm.pipelines.models.paging import Paging
from hubspot.crm.pipelines.models.pipeline import Pipeline
from hubspot.crm.pipelines.models.pipeline_input import PipelineInput
from hubspot.crm.pipelines.models.pipeline_patch_input import PipelinePatchInput
from hubspot.crm.pipelines.models.pipeline_stage import PipelineStage
from hubspot.crm.pipelines.models.pipeline_stage_input import PipelineStageInput
from hubspot.crm.pipelines.models.pipeline_stage_patch_input import PipelineStagePatchInput
| 49.088889 | 460 | 0.842915 | 291 | 2,209 | 6.298969 | 0.381443 | 0.144026 | 0.160393 | 0.263502 | 0.451173 | 0.402073 | 0.156574 | 0.060011 | 0 | 0 | 0 | 0.006048 | 0.101856 | 2,209 | 44 | 461 | 50.204545 | 0.917843 | 0.299683 | 0 | 0 | 0 | 0 | 0.003292 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.956522 | 0 | 0.956522 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
0a307d4d21effc1dd6ee8fac43606364266465fc | 135 | py | Python | output/models/sun_data/elem_decl/nillable/nillable00302m/nillable00302m_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 1 | 2021-08-14T17:59:21.000Z | 2021-08-14T17:59:21.000Z | output/models/sun_data/elem_decl/nillable/nillable00302m/nillable00302m_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 4 | 2020-02-12T21:30:44.000Z | 2020-04-15T20:06:46.000Z | output/models/sun_data/elem_decl/nillable/nillable00302m/nillable00302m_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | null | null | null | from output.models.sun_data.elem_decl.nillable.nillable00302m.nillable00302m_xsd.nillable00302m import Root
__all__ = [
"Root",
]
| 22.5 | 107 | 0.8 | 16 | 135 | 6.3125 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123967 | 0.103704 | 135 | 5 | 108 | 27 | 0.710744 | 0 | 0 | 0 | 0 | 0 | 0.02963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0a38f96a395a7887fea46451cf005cb8e5ff8f70 | 583 | py | Python | app/controllers/Unidade_medida_controller.py | amandapersampa/FastFrango | 23a69d80576f6cef754e9d3acacf9908a4da30cd | [
"MIT"
] | null | null | null | app/controllers/Unidade_medida_controller.py | amandapersampa/FastFrango | 23a69d80576f6cef754e9d3acacf9908a4da30cd | [
"MIT"
] | null | null | null | app/controllers/Unidade_medida_controller.py | amandapersampa/FastFrango | 23a69d80576f6cef754e9d3acacf9908a4da30cd | [
"MIT"
] | null | null | null | from app import app
from app.dao.Unidade_medida_dao import Unidade_medida_dao
from app.service.Unidade_medida_service import Unidade_medida_service
from flask import jsonify
service = Unidade_medida_service()
@app.route("/unidadeMedida")
def salva_unidade_medida():
unidade = Unidade_medida_dao("M")
return jsonify(service.salvar(unidade))
@app.route("/unidadeMedida/list")
def findAll_unidade():
return jsonify(service.findAll())
@app.route("/unidadeMedida/<id>")
def findById_unidade(id):
    return jsonify(service.findById(id))
| 25.347826 | 70 | 0.746141 | 75 | 583 | 5.586667 | 0.293333 | 0.217184 | 0.114558 | 0.128878 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149228 | 583 | 22 | 71 | 26.5 | 0.844758 | 0 | 0 | 0 | 0 | 0 | 0.098039 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.294118 | 0.058824 | 0.647059 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0a3d4b559d545b8b6de231d0e4d414da32fdb67d | 331 | py | Python | 2020/Google CTF/Android/solve.py | CSEA-IITB/WriteUps | 46e7f36b0c4ef182cbaf375fd10fda954b6667a0 | [
"MIT"
] | 1 | 2021-12-20T15:34:02.000Z | 2021-12-20T15:34:02.000Z | 2020/Google CTF/Android/solve.py | CSEA-IITB/WriteUps | 46e7f36b0c4ef182cbaf375fd10fda954b6667a0 | [
"MIT"
] | 1 | 2020-09-06T18:19:55.000Z | 2020-09-06T18:19:55.000Z | 2020/Google CTF/Android/solve.py | CSEA-IITB/WriteUps | 46e7f36b0c4ef182cbaf375fd10fda954b6667a0 | [
"MIT"
] | 3 | 2020-09-01T09:59:52.000Z | 2021-07-02T14:08:14.000Z | import gmpy2
m = 4294967296
l = [40999019, 2789358025, 656272715, 18374979, 3237618335, 1762529471,
685548119, 382114257, 1436905469, 2126016673, 3318315423, 797150821]
o = [gmpy2.invert(i, m) for i in l]
s = ''
for i in o:
t = ''
for j in range(4):
t += chr(i % 256)
i //= 256
s += t
print(s)
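On Python 3.8+, the same inversion can be done without `gmpy2`, since three-argument `pow` with exponent `-1` computes a modular inverse. A minimal sketch of the decode step on one value from the list above (the specific flag characters it yields are not asserted here):

```python
m = 4294967296   # 2**32, the modulus used by the challenge
x = 40999019     # first value from the list above (odd, hence invertible mod 2**32)

inv = pow(x, -1, m)          # modular inverse, stdlib-only (Python 3.8+)
assert (x * inv) % m == 1

# Unpack the inverse into four little-endian bytes / characters,
# mirroring the chr(i % 256); i //= 256 loop in the script:
chars = "".join(chr((inv >> (8 * k)) % 256) for k in range(4))
print(chars)
```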
| 18.388889 | 73 | 0.595166 | 47 | 331 | 4.191489 | 0.638298 | 0.040609 | 0.060914 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.543568 | 0.271903 | 331 | 17 | 74 | 19.470588 | 0.273859 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0a4251df9c754550ac372f6d6fe04002302c4a29 | 1,181 | py | Python | LAB/sizk.py | Gazelle9/HelloPython3 | aead73494d1605f64110c826e232825389d3569e | [
"Apache-2.0"
] | null | null | null | LAB/sizk.py | Gazelle9/HelloPython3 | aead73494d1605f64110c826e232825389d3569e | [
"Apache-2.0"
] | null | null | null | LAB/sizk.py | Gazelle9/HelloPython3 | aead73494d1605f64110c826e232825389d3569e | [
"Apache-2.0"
] | null | null | null | import random
# Leap year check
def isLeapYear():
    year = int(input("Enter a year (19xx or 20xx): "))
    isleap = 'is not a leap year.'
    if (year % 400 == 0) or (year % 100 != 0) and (year % 4 == 0):
        isleap = "is a leap year."
    print("%d %s" % (year, isleap))

# Lottery draw
def rouletteLotto():
    luck = input("Enter a three-digit number: ")
    lotto = str(random.randint(100, 999))
    match = 0  # number of matching digits
    prize = 'No win, better luck next time!'
    for i in [0, 1, 2]:
        for j in [0, 1, 2]:
            if luck[i] == lotto[j]:
                match += 1
    if match == 3:
        prize = "1st prize, congratulations!"
    elif match == 2:
        prize = "2nd prize!"
    elif match == 1:
        prize = "3rd prize!"
    print(luck, lotto, prize)

# Calculator (with exponent operation added)
def intCalu():
    num1 = int(input('Enter the left operand: '))
    num2 = int(input('Enter the right operand: '))
    fmt = "%d + %d = %d \n%d - %d = %d \n"
    fmt += "%d * %d = %d \n%d / %d = %.3f \n"
    fmt += "%d ** %d = %d"
    print(fmt % (num1, num2, num1 + num2,
                 num1, num2, num1 - num2,
                 num1, num2, num1 * num2,
                 num1, num2, num1 / num2,
                 num1, num2, num1 ** num2))
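As a side check, the leap-year rule used in `isLeapYear()` can be verified in isolation by extracting it into a pure function (a sketch; `is_leap` is not part of the original file):

```python
# Standalone sketch of the leap-year rule from isLeapYear() above,
# extracted into a pure function so it can be tested without input().
def is_leap(year):
    # Divisible by 400, or divisible by 4 but not by 100.
    return year % 400 == 0 or (year % 100 != 0 and year % 4 == 0)

print([y for y in (1900, 2000, 2004, 2019, 2020) if is_leap(y)])
# -> [2000, 2004, 2020]
```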
| 25.12766 | 66 | 0.467401 | 174 | 1,181 | 3.172414 | 0.413793 | 0.144928 | 0.195652 | 0.26087 | 0.259058 | 0.244565 | 0.244565 | 0.211957 | 0.211957 | 0.144928 | 0 | 0.073643 | 0.344623 | 1,181 | 46 | 67 | 25.673913 | 0.639535 | 0.022862 | 0 | 0 | 0 | 0 | 0.181027 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.027778 | 0 | 0.111111 | 0.083333 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0a4683357848e05100cbc833cb32b57c369e8920 | 835 | py | Python | pyconstruct/datasets/__init__.py | unitn-sml/pyconstruct | f7174b00f16e34ca582bb1923ef46e871fc892b9 | [
"MIT"
] | 13 | 2018-04-03T09:27:28.000Z | 2021-04-01T07:52:01.000Z | pyconstruct/datasets/__init__.py | unitn-sml/pyconstruct | f7174b00f16e34ca582bb1923ef46e871fc892b9 | [
"MIT"
] | 1 | 2019-08-06T18:46:47.000Z | 2019-08-13T08:48:28.000Z | pyconstruct/datasets/__init__.py | unitn-sml/pyconstruct | f7174b00f16e34ca582bb1923ef46e871fc892b9 | [
"MIT"
] | null | null | null | """\
Pyconstruct provides methods for loading a number of datasets for standard tasks
in structured-output prediction. The current list of available datasets
includes:
- **ocr** : Ben Taskar's ORC dataset
- **conll00** : CoNLL 2000 Text Chunking dataset
- **horseseg** : HorseSeg dataset (coming soon)
- **equations** : OCR equations dataset
Datasets can be loaded using the `load` function provided by this module. In
most cases, the dataset is downloaded upon first loading and stored in a local
directory on your computer for faster retrieval from the second loading onwards
(the actual directory depends on the operating system). The data is preprocessed
and made available in a format that can be already used for learning with any
algorithm provided by Pyconstruct.
"""
from .base import *
__all__ = list(base.__all__)
| 34.791667 | 80 | 0.767665 | 120 | 835 | 5.275 | 0.675 | 0.015798 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008671 | 0.171257 | 835 | 23 | 81 | 36.304348 | 0.906069 | 0.929341 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
0a54396c09453486ea9a0c48fb9692e25e6c67b6 | 2,530 | py | Python | APIs/Telegram API/telethon/crypto/authkey.py | Gregor-Davies/Awesome-Scripts-1 | b2f7fe99b7e9780b91cdd2f4f94e63795146e9c3 | [
"MIT"
] | 141 | 2018-10-04T10:02:15.000Z | 2022-03-18T08:47:01.000Z | APIs/Telegram API/telethon/crypto/authkey.py | Gregor-Davies/Awesome-Scripts-1 | b2f7fe99b7e9780b91cdd2f4f94e63795146e9c3 | [
"MIT"
] | 34 | 2018-10-04T08:28:01.000Z | 2020-11-02T09:36:02.000Z | APIs/Telegram API/telethon/crypto/authkey.py | Gregor-Davies/Awesome-Scripts-1 | b2f7fe99b7e9780b91cdd2f4f94e63795146e9c3 | [
"MIT"
] | 110 | 2018-10-04T04:28:11.000Z | 2022-03-22T05:49:02.000Z | """
This module holds the AuthKey class.
"""
import struct
from hashlib import sha1
from ..extensions import BinaryReader
class AuthKey:
"""
Represents an authorization key, used to encrypt and decrypt
messages sent to Telegram's data centers.
"""
def __init__(self, data):
"""
Initializes a new authorization key.
:param data: the data in bytes that represent this auth key.
"""
self.key = data
@property
def key(self):
        """
        The underlying authorization key data as ``bytes``.
        """
return self._key
@key.setter
def key(self, value):
        """
        Sets the underlying key data, recomputing the auxiliary hash
        and the key ID from its SHA1 digest.
        """
if not value:
self._key = self.aux_hash = self.key_id = None
return
if isinstance(value, type(self)):
self._key, self.aux_hash, self.key_id = \
value._key, value.aux_hash, value.key_id
return
self._key = value
with BinaryReader(sha1(self._key).digest()) as reader:
self.aux_hash = reader.read_long(signed=False)
reader.read(4)
self.key_id = reader.read_long(signed=False)
# TODO This doesn't really fit here, it's only used in authentication
def calc_new_nonce_hash(self, new_nonce, number):
"""
Calculates the new nonce hash based on the current attributes.
:param new_nonce: the new nonce to be hashed.
:param number: number to prepend before the hash.
:return: the hash for the given new nonce.
"""
new_nonce = new_nonce.to_bytes(32, 'little', signed=True)
data = new_nonce + struct.pack('<BQ', number, self.aux_hash)
# Calculates the message key from the given data
return int.from_bytes(sha1(data).digest()[4:20], 'little', signed=True)
def __bool__(self):
        """
        Returns ``True`` if the key holds data, ``False`` otherwise.
        """
return bool(self._key)
def __eq__(self, other):
        """
        Two keys are equal when the other object is an ``AuthKey``
        holding the same key data.
        """
return isinstance(other, type(self)) and other.key == self._key
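For reference, the hash computed by `calc_new_nonce_hash` can be reproduced with only the standard library. This sketch uses made-up inputs (`12345`, `1`, `0xDEADBEEF` are illustrative values, not real Telegram session data):

```python
import struct
from hashlib import sha1

def calc_new_nonce_hash(new_nonce, number, aux_hash):
    # Same layout as AuthKey.calc_new_nonce_hash: a 32-byte little-endian
    # signed nonce, then a uint8 number and uint64 aux hash, SHA1-hashed;
    # bytes 4..20 of the digest become the signed 128-bit result.
    data = new_nonce.to_bytes(32, 'little', signed=True)
    data += struct.pack('<BQ', number, aux_hash)
    return int.from_bytes(sha1(data).digest()[4:20], 'little', signed=True)

print(calc_new_nonce_hash(12345, 1, 0xDEADBEEF))
```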
| 28.111111 | 79 | 0.585375 | 317 | 2,530 | 4.542587 | 0.334385 | 0.053472 | 0.054167 | 0.1 | 0.190278 | 0.134722 | 0.090278 | 0.0375 | 0 | 0 | 0 | 0.005248 | 0.322134 | 2,530 | 89 | 80 | 28.426966 | 0.834402 | 0.392095 | 0 | 0.064516 | 0 | 0 | 0.011905 | 0 | 0 | 0 | 0 | 0.078652 | 0 | 1 | 0.193548 | false | 0 | 0.096774 | 0 | 0.516129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0a64bcfcc7dc1eaf929fac64d4ff43426f6af4eb | 1,505 | py | Python | tool/square_type_to_unified_address.py | muzudho/kifuwarabe2020-wcsodr1 | f8f2de46a478e325c0d28fffeb925ac9d930f680 | [
"MIT"
] | null | null | null | tool/square_type_to_unified_address.py | muzudho/kifuwarabe2020-wcsodr1 | f8f2de46a478e325c0d28fffeb925ac9d930f680 | [
"MIT"
] | null | null | null | tool/square_type_to_unified_address.py | muzudho/kifuwarabe2020-wcsodr1 | f8f2de46a478e325c0d28fffeb925ac9d930f680 | [
"MIT"
] | null | null | null | print('trace | Start.')
# Array access feels slow, so let's write it with a match expression instead ☆(^~^)
enums = ['Sq11', 'Sq12', 'Sq13', 'Sq14', 'Sq15', 'Sq16', 'Sq17', 'Sq18', 'Sq19', 'Sq21', 'Sq22', 'Sq23', 'Sq24', 'Sq25', 'Sq26', 'Sq27', 'Sq28', 'Sq29', 'Sq31', 'Sq32', 'Sq33', 'Sq34', 'Sq35', 'Sq36', 'Sq37', 'Sq38', 'Sq39', 'Sq41', 'Sq42', 'Sq43', 'Sq44', 'Sq45', 'Sq46', 'Sq47', 'Sq48', 'Sq49', 'Sq51', 'Sq52', 'Sq53', 'Sq54',
'Sq55', 'Sq56', 'Sq57', 'Sq58', 'Sq59', 'Sq61', 'Sq62', 'Sq63', 'Sq64', 'Sq65', 'Sq66', 'Sq67', 'Sq68', 'Sq69', 'Sq71', 'Sq72', 'Sq73', 'Sq74', 'Sq75', 'Sq76', 'Sq77', 'Sq78', 'Sq79', 'Sq81', 'Sq82', 'Sq83', 'Sq84', 'Sq85', 'Sq86', 'Sq87', 'Sq88', 'Sq89', 'Sq91', 'Sq92', 'Sq93', 'Sq94', 'Sq95', 'Sq96', 'Sq97', 'Sq98', 'Sq99', ]
print('impl SquareType {')
print(
' pub fn to_unified_address(&self, turn: Phase) -> UnifiedAddress {')
print(' use crate::cosmic::toy_box::SquareType::*;')
print(' if turn == Phase::First {')
print(' match self {')
i = 0
for file in range(1, 10):
for rank in range(1, 10):
print(
f' {enums[i]} => UnifiedAddress::Sq{file}{rank}_1,')
i += 1
print(' }')
print(' } else {')
print(' match self {')
i = 0
for file in range(1, 10):
for rank in range(1, 10):
print(
f' {enums[i]} => UnifiedAddress::Sq{file}{rank}_2,')
i += 1
print(' }')
print(' }')
print(' }')
print('}')
print('trace | Finished.')
| 45.606061 | 338 | 0.487708 | 179 | 1,505 | 4.078212 | 0.675978 | 0.068493 | 0.043836 | 0.054795 | 0.235616 | 0.235616 | 0.235616 | 0.235616 | 0.235616 | 0.235616 | 0 | 0.158311 | 0.244518 | 1,505 | 32 | 339 | 47.03125 | 0.48285 | 0.022591 | 0 | 0.62069 | 0 | 0 | 0.516678 | 0.087815 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.551724 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
0a65b5f2616bb1325a9f69c6b6cf02d90603c6f5 | 4,539 | py | Python | tests/test_miscutils.py | modyabhi/BackpackTf-API | 2019881f84669d2a660f0eab721b4ad43c0a9bf2 | [
"MIT"
] | 7 | 2019-10-06T02:35:05.000Z | 2022-02-24T14:27:44.000Z | tests/test_miscutils.py | modyabhi/BackpackTf-API | 2019881f84669d2a660f0eab721b4ad43c0a9bf2 | [
"MIT"
] | 10 | 2019-12-05T10:54:44.000Z | 2021-11-22T21:36:55.000Z | tests/test_miscutils.py | modyabhi/BackpackTf-API | 2019881f84669d2a660f0eab721b4ad43c0a9bf2 | [
"MIT"
] | 4 | 2019-12-29T07:15:22.000Z | 2021-12-02T17:28:34.000Z | from BackpackTF import MiscUtils
def test_quality_to_int():
misc = MiscUtils()
assert misc.quality_String_To_Int("Collector's") == 14
assert misc.quality_String_To_Int("Decorated Weapon") == 15
assert misc.quality_String_To_Int("Genuine") == 1
assert misc.quality_String_To_Int("Haunted") == 13
assert misc.quality_String_To_Int("Normal") == 0
assert misc.quality_String_To_Int("Self-Made") == 9
assert misc.quality_String_To_Int("Strange") == 11
assert misc.quality_String_To_Int("Unique") == 6
assert misc.quality_String_To_Int("Unusual") == 5
assert misc.quality_String_To_Int("Vintage") == 3
def test_particle_to_int():
misc = MiscUtils()
assert misc.particle_String_To_Int("Abduction") == 91
assert misc.particle_String_To_Int("Aces High") == 59
assert misc.particle_String_To_Int("Acidic Bubbles of Envy") == 3017
assert misc.particle_String_To_Int("Amaranthine") == 82
assert misc.particle_String_To_Int("Bubbling") == 34
assert misc.particle_String_To_Int("Burning Flames") == 13
assert misc.particle_String_To_Int("Cauldron Bubbles") == 39
assert misc.particle_String_To_Int("Cool") == 703
assert misc.particle_String_To_Int("Dead Presidents") == 60
def test_rarity_to_int():
misc = MiscUtils()
assert misc.rarity_String_To_Int("") == 99
assert misc.rarity_String_To_Int("Assassin") == 5
assert misc.rarity_String_To_Int("Civilian") == 1
assert misc.rarity_String_To_Int("Elite") == 6
assert misc.rarity_String_To_Int("Freelance") == 2
assert misc.rarity_String_To_Int("Immortal") == 7
def test_origin_to_int():
misc = MiscUtils()
assert misc.origin_String_To_Int("Achievement") == 1
assert misc.origin_String_To_Int("CD Key") == 15
assert misc.origin_String_To_Int("Collection Reward") == 16
assert misc.origin_String_To_Int("Earned") == 9
assert misc.origin_String_To_Int("Gifted") == 6
def test_wear_tier_to_int():
misc = MiscUtils()
assert misc.wear_tier_String_To_Int("Factory New") == 1
assert misc.wear_tier_String_To_Int("Minimal Wear") == 2
assert misc.wear_tier_String_To_Int("Field-Tested") == 3
assert misc.wear_tier_String_To_Int("Well-Worn") == 4
assert misc.wear_tier_String_To_Int("Battle Scarred") == 5
def test_killstreaker_to_int():
misc = MiscUtils()
assert misc.killstreaker_String_To_Int("Fire Horns") == 2002
assert misc.killstreaker_String_To_Int("Cerebral Discharge") == 2003
assert misc.killstreaker_String_To_Int("Tornado") == 2004
assert misc.killstreaker_String_To_Int("Flames") == 2005
assert misc.killstreaker_String_To_Int("Singularity") == 2006
assert misc.killstreaker_String_To_Int("Incinerator") == 2007
def test_sheen_to_int():
misc = MiscUtils()
assert misc.sheen_String_To_Int("Team Shine") == 1
assert misc.sheen_String_To_Int("Deadly Daffodil") == 2
assert misc.sheen_String_To_Int("Manndarin") == 3
assert misc.sheen_String_To_Int("Mean Green") == 4
assert misc.sheen_String_To_Int("Agonizing Emerald") == 5
assert misc.sheen_String_To_Int("Villainous Violet") == 6
assert misc.sheen_String_To_Int("Hot Rod") == 7
def test_Killstreak_tier_to_int():
misc = MiscUtils()
assert misc.killstreak_tier_String_To_Int("None") == 0
assert misc.killstreak_tier_String_To_Int("Standard") == 1
assert misc.killstreak_tier_String_To_Int("Specialized") == 2
assert misc.killstreak_tier_String_To_Int("Professional") == 3
def test_strange_parts_to_int():
misc = MiscUtils()
assert misc.strange_parts_String_To_Int("Airborne Enemies Killed") == 22
assert misc.strange_parts_String_To_Int("Allied Healing Done") == 84
assert misc.strange_parts_String_To_Int("Assists") == 95
assert misc.strange_parts_String_To_Int("Critical Kills") == 33
assert misc.strange_parts_String_To_Int("Damage Dealt") == 82
def test_paint_to_Int():
misc = MiscUtils()
assert misc.paint_String_To_Int("A Color Similar to Slate") == 3100495
assert misc.paint_String_To_Int("A Deep Commitment to Purple") == 8208497
assert misc.paint_String_To_Int("A Distinctive Lack of Hue") == 1315860
assert misc.paint_String_To_Int("A Mann's Mint") == 12377523
assert misc.paint_String_To_Int("After Eight") == 2960676
assert misc.paint_String_To_Int("Aged Moustache Grey") == 8289918
def test_steam_id_to_account_id():
misc = MiscUtils()
assert misc.steam_id_to_account_id("76561198195716551") == "235450823"
| 37.512397 | 77 | 0.732761 | 661 | 4,539 | 4.66112 | 0.252648 | 0.118468 | 0.224927 | 0.082116 | 0.666018 | 0.654333 | 0.195391 | 0 | 0 | 0 | 0 | 0.04367 | 0.152456 | 4,539 | 120 | 78 | 37.825 | 0.757213 | 0 | 0 | 0.126437 | 0 | 0 | 0.16215 | 0 | 0 | 0 | 0 | 0 | 0.735632 | 1 | 0.126437 | false | 0 | 0.011494 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0a6d9fbc3af93dc2cd1da59818e2ad07fc714a69 | 584 | py | Python | dapr/actor/__init__.py | mukundansundararajan/python-sdk | 2ee94e31292a650135b97bc3c70e3eca885c9b47 | [
"MIT"
] | null | null | null | dapr/actor/__init__.py | mukundansundararajan/python-sdk | 2ee94e31292a650135b97bc3c70e3eca885c9b47 | [
"MIT"
] | null | null | null | dapr/actor/__init__.py | mukundansundararajan/python-sdk | 2ee94e31292a650135b97bc3c70e3eca885c9b47 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT License.
"""
from dapr.actor.actor_interface import ActorInterface, actormethod
from dapr.actor.client.proxy import ActorProxy, ActorProxyFactory
from dapr.actor.id import ActorId
from dapr.actor.runtime.actor import Actor
from dapr.actor.runtime.remindable import Remindable
from dapr.actor.runtime.runtime import ActorRuntime
__all__ = [
'ActorInterface',
'ActorProxy',
'ActorProxyFactory',
'ActorId',
'Actor',
'ActorRuntime',
'Remindable',
'actormethod',
]
| 22.461538 | 66 | 0.736301 | 64 | 584 | 6.640625 | 0.453125 | 0.112941 | 0.183529 | 0.141176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002024 | 0.15411 | 584 | 25 | 67 | 23.36 | 0.8583 | 0.155822 | 0 | 0 | 0 | 0 | 0.17732 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
0a6f355624c9c4c8fd25be7ef930ce3b9dd21b43 | 4,534 | py | Python | src/sage/tests/books/judson-abstract-algebra/cosets-sage.py | ChamanAgrawal/sage | 5f6d56ba247b352d7d46442e88fa3a027e9f222d | [
"BSL-1.0"
] | 2 | 2019-06-02T03:16:59.000Z | 2019-06-15T10:17:18.000Z | src/sage/tests/books/judson-abstract-algebra/cosets-sage.py | ChamanAgrawal/sage | 5f6d56ba247b352d7d46442e88fa3a027e9f222d | [
"BSL-1.0"
] | null | null | null | src/sage/tests/books/judson-abstract-algebra/cosets-sage.py | ChamanAgrawal/sage | 5f6d56ba247b352d7d46442e88fa3a027e9f222d | [
"BSL-1.0"
] | 3 | 2020-03-29T17:13:36.000Z | 2021-05-03T18:11:28.000Z | ## -*- coding: utf-8 -*- ##
## Sage Doctest File ##
#**************************************#
#* Generated from PreTeXt source *#
#* on 2017-08-24T11:43:34-07:00 *#
#* *#
#* http://mathbook.pugetsound.edu *#
#* *#
#**************************************#
##
"""
Please contact Rob Beezer (beezer@ups.edu) with
any test failures here that need to be changed
as a result of changes accepted into Sage. You
may edit/change this file in any sensible way, so
that development work may procede. Your changes
may later be replaced by the authors of "Abstract
Algebra: Theory and Applications" when the text is
updated, and a replacement of this file is proposed
for review.
"""
##
## To execute doctests in these files, run
## $ $SAGE_ROOT/sage -t <directory-of-these-files>
## or
## $ $SAGE_ROOT/sage -t <a-single-file>
##
## Replace -t by "-tp n" for parallel testing,
## "-tp 0" will use a sensible number of threads
##
## See: http://www.sagemath.org/doc/developer/doctesting.html
## or run $ $SAGE_ROOT/sage --advanced for brief help
##
## Generated at 2017-08-24T11:43:34-07:00
## From "Abstract Algebra"
## At commit 26d3cac0b4047f4b8d6f737542be455606e2c4b4
##
## Section 6.5 Sage
##
r"""
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: G = SymmetricGroup(3)
sage: a = G("(1,2)")
sage: H = G.subgroup([a])
sage: rc = G.cosets(H, side='right'); rc
[[(), (1,2)], [(2,3), (1,3,2)], [(1,2,3), (1,3)]]
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: lc = G.cosets(H, side='left'); lc
[[(), (1,2)], [(2,3), (1,2,3)], [(1,3,2), (1,3)]]
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: G = SymmetricGroup(3)
sage: b = G("(1,2,3)")
sage: H = G.subgroup([b])
sage: rc = G.cosets(H, side='right'); rc
[[(), (1,2,3), (1,3,2)], [(2,3), (1,3), (1,2)]]
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: lc = G.cosets(H, side='left'); lc
[[(), (1,2,3), (1,3,2)], [(2,3), (1,2), (1,3)]]
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: rc == lc
False
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: rc_sorted = sorted([sorted(coset) for coset in rc])
sage: rc_sorted
[[(), (1,2,3), (1,3,2)], [(2,3), (1,2), (1,3)]]
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: lc_sorted = sorted([sorted(coset) for coset in lc])
sage: lc_sorted
[[(), (1,2,3), (1,3,2)], [(2,3), (1,2), (1,3)]]
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: rc_sorted == lc_sorted
True
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: G = SymmetricGroup(3)
sage: sg = G.subgroups(); sg
[Subgroup generated by [()] of (Symmetric group of order 3! as a permutation group),
Subgroup generated by [(2,3)] of (Symmetric group of order 3! as a permutation group),
Subgroup generated by [(1,2)] of (Symmetric group of order 3! as a permutation group),
Subgroup generated by [(1,3)] of (Symmetric group of order 3! as a permutation group),
Subgroup generated by [(1,2,3)] of (Symmetric group of order 3! as a permutation group),
Subgroup generated by [(2,3), (1,2,3)] of (Symmetric group of order 3! as a permutation group)]
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: H = sg[4]; H
Subgroup generated by [(1,2,3)] of (Symmetric group of order 3! as a permutation group)
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: H.order()
3
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: H.list()
[(), (1,3,2), (1,2,3)]
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: H.is_cyclic()
True
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: G = AlternatingGroup(4)
sage: sg = G.subgroups()
sage: [H.order() for H in sg]
[1, 2, 2, 2, 3, 3, 3, 3, 4, 12]
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: G = SymmetricGroup(4)
sage: sg = G.subgroups()
sage: [H.order() for H in sg]
[1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4,
6, 6, 6, 6, 8, 8, 8, 12, 24]
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: len(sg)
30
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: G = CyclicPermutationGroup(20)
sage: [H.order() for H in G.subgroups()]
[1, 2, 4, 5, 10, 20]
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: G = CyclicPermutationGroup(19)
sage: [H.order() for H in G.subgroups()]
[1, 19]
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: n = 8
sage: G = CyclicPermutationGroup(n)
sage: [H.order() for H in G.subgroups()]
[1, 2, 4, 8]
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: euler_phi(345)
176
~~~~~~~~~~~~~~~~~~~~~~ ::
sage: m = random_prime(10000)
sage: n = random_prime(10000)
sage: m, n, euler_phi(m*n) == euler_phi(m)*euler_phi(n) # random
(5881, 1277, True)
"""
| 26.360465 | 100 | 0.483238 | 637 | 4,534 | 3.414443 | 0.249608 | 0.021149 | 0.017931 | 0.014713 | 0.477241 | 0.426207 | 0.413793 | 0.365977 | 0.358621 | 0.342989 | 0 | 0.076989 | 0.229378 | 4,534 | 171 | 101 | 26.51462 | 0.545507 | 0.267755 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0a71b947c8af4a504529603a85aa23e3b866378c | 2,775 | py | Python | manager.py | Bernard2324/zabbix_manager | bb6576b70990db173333667d3634897b07c5ca4a | [
"MIT"
] | null | null | null | manager.py | Bernard2324/zabbix_manager | bb6576b70990db173333667d3634897b07c5ca4a | [
"MIT"
] | null | null | null | manager.py | Bernard2324/zabbix_manager | bb6576b70990db173333667d3634897b07c5ca4a | [
"MIT"
] | null | null | null |
import json
from zabexceptions import *
from connection import ConnManager, CredentialStore
from Crypto.Cipher import DES
from requests import exceptions
class ZabbixMethodManager(ConnManager):
    def __init__(self):
        # Assign the username before storing it, since the original order
        # referenced self.username prior to its assignment.
        self.username = 'firstname.lastname'
        self.rediscredentials = CredentialStore()
        self.rediscredentials.redisConnection()
        self.rediscredentials.storeCredentials({self.username: self.username}, 'username')
        self.ciphertext = self.passwordManager(
            raw_input("Enter Secret Key: \n")
        )
        # Shift every ciphertext byte down by one, then strip the 'WXYZ' padding.
        self.password = "".join(
            chr(x - 1) for x in map(ord, self.ciphertext)
        ).strip('WXYZ')
        self.rediscredentials.storeCredentials({self.username: self.password}, 'password')
        super(ZabbixMethodManager, self).__init__(self.username, self.password)
        self.id = 0
    def do_request(self, method, params=None):
        request_json = {
            'jsonrpc': '2.0',
            'method': method,
            'params': params or {},
            'id': self.id
        }
        try:
            response = self.conn.sess.post(
                self.conn.url,
                data=json.dumps(request_json),
                timeout=self.conn.timeout
            )
        except (exceptions.Timeout, exceptions.ConnectTimeout, exceptions.ReadTimeout):
            print "Connection Timeout, Re-Establishing Connection"
            if not all(map(self.rediscredentials.conn.exists, ['passwd', 'usern'])):
                raise StandardError("Unable to Retrieve Credentials. Please login Again\n")
            self.username = self.rediscredentials.conn.hmget('usern', self.username)
            self.password = self.rediscredentials.conn.hmget('passwd', self.username)
            # Retry the original request with the refreshed credentials;
            # `response` is undefined in this branch, so return the retry result.
            return self.do_request(method, params)
        response.raise_for_status()
if not len(response.text):
raise ZabbixEmptyResponse("Received Empty Response!\n")
try:
response_json = json.loads(response.text)
except ValueError:
raise ZabbixGeneric(
"Unable to parse json: %s" % response.text
)
self.id += 1
if 'error' in response_json: # some exception
if 'data' not in response_json['error']: # some errors don't contain 'data': workaround for ZBX-9340
response_json['error']['data'] = "No data"
msg = u"Error {code}: {message}, {data}".format (
code=response_json['error']['code'],
message=response_json['error']['message'],
data=response_json['error']['data']
)
raise ZabbixGeneric(msg, response_json['error']['code'])
return response_json
def passwordManager(self, key):
if not isinstance(key, str):
key = str(key)
try:
encryptionObj = DES.new(key, DES.MODE_ECB)
cipher = '\x84\xe9&\xd5h\x1f\xd3\x15\xba"j\xdd\xb5B-x\xe6\x8d\x00\xbcV\xd0\xb0('
return encryptionObj.decrypt(cipher)
except PasswordDecryption:
print "Failed to Decrypt Password! Please Enter Correct Key\n"
class CleanObject(object):
pass
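The shift-by-one decode in `__init__` above can be exercised on its own. This round-trip sketch (with a hypothetical `encode` helper and placeholder plaintext, neither of which exists in the original file) mirrors the `chr(x - 1)` / `strip('WXYZ')` logic:

```python
def decode(ciphertext):
    # Mirror of the decode in ZabbixMethodManager.__init__: shift each
    # character down by one code point, then strip 'WXYZ' padding at the edges.
    return "".join(chr(ord(c) - 1) for c in ciphertext).strip('WXYZ')

def encode(plaintext, pad='X'):
    # Hypothetical inverse, for testing only: pad, then shift up by one.
    return "".join(chr(ord(c) + 1) for c in plaintext + pad)

assert decode(encode("placeholder")) == "placeholder"
```

Note one quirk the sketch makes visible: a plaintext that itself ends in `W`, `X`, `Y`, or `Z` would lose those trailing characters to the `strip` call.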
| 31.179775 | 104 | 0.706306 | 346 | 2,775 | 5.592486 | 0.413295 | 0.055814 | 0.049612 | 0.037209 | 0.082687 | 0.053747 | 0 | 0 | 0 | 0 | 0 | 0.0103 | 0.16036 | 2,775 | 88 | 105 | 31.534091 | 0.820172 | 0.025946 | 0 | 0.041667 | 0 | 0.013889 | 0.176732 | 0.025565 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.138889 | 0.069444 | null | null | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
6a50fcabe479cdd00039e90cc3bcf5c29ca13974 | 1,127 | py | Python | memory.py | deniska/rpyv | f58749c9ad80f62325c9fa990a705f55522ee22a | [
"MIT"
] | null | null | null | memory.py | deniska/rpyv | f58749c9ad80f62325c9fa990a705f55522ee22a | [
"MIT"
] | null | null | null | memory.py | deniska/rpyv | f58749c9ad80f62325c9fa990a705f55522ee22a | [
"MIT"
] | null | null | null | class Memory:
    def __init__(self, size):
        self.mem = bytearray(size)
        self.bin_size = 0

    def read8(self, addr):
        val = self.mem[addr]
        # if addr > self.bin_size:
        #     print(f'Read {val:02x} at {addr}')
        return val

    def write8(self, addr, val):
        self.mem[addr] = val

    def write_bin(self, addr, data):
        self.mem[addr: addr + len(data)] = data
        self.bin_size = addr + len(data)

    def __len__(self):
        return len(self.mem)


class MemoryMap:
    def __init__(self, mems):
        self.mems = mems

    def resolve_addr(self, addr):
        for start, m in self.mems:
            if start <= addr < start + len(m):
                return m, start
        raise IndexError(f'Access to unmapped memory {addr:x}')

    def read8(self, addr):
        mem, start = self.resolve_addr(addr)
        val = mem.read8(addr - start)
        # print(f'Read {val:x} at {addr}')
        return val

    def write8(self, addr, val):
        # print(f'Write {val:x} at {addr}')
        mem, start = self.resolve_addr(addr)
        mem.write8(addr - start, val)
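A short usage sketch of the two classes: `MemoryMap` translates a bus address into (region, base) and delegates the byte access. Condensed copies of the classes are restated inside the snippet so it runs on its own, and the address layout (a region at 0x0000 and another at 0x8000) is made up for illustration:

```python
class Memory:
    # condensed copy of the Memory class above
    def __init__(self, size):
        self.mem = bytearray(size)

    def __len__(self):
        return len(self.mem)

    def read8(self, addr):
        return self.mem[addr]

    def write8(self, addr, val):
        self.mem[addr] = val


class MemoryMap:
    # condensed copy of the MemoryMap class above
    def __init__(self, mems):
        self.mems = mems

    def resolve_addr(self, addr):
        for start, m in self.mems:
            if start <= addr < start + len(m):
                return m, start
        raise IndexError(f'Access to unmapped memory {addr:x}')

    def read8(self, addr):
        mem, start = self.resolve_addr(addr)
        return mem.read8(addr - start)

    def write8(self, addr, val):
        mem, start = self.resolve_addr(addr)
        mem.write8(addr - start, val)


# Two regions: 256 bytes at 0x0000 and 16 bytes at 0x8000.
ram = Memory(0x100)
mmio = Memory(0x10)
bus = MemoryMap([(0x0000, ram), (0x8000, mmio)])

bus.write8(0x8003, 0xAB)        # lands at offset 3 of the second region
print(hex(bus.read8(0x8003)))   # 0xab
```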
#!/usr/bin/env python3
# usage: $ oj generate-input 'python3 generate.py'
import random


# generated by online-judge-template-generator v4.1.0 (https://github.com/kmyk/online-judge-template-generator)
def main():
    c = [None for _ in range(26)]
    D = random.randint(1, 10 ** 9)  # TODO: edit here
    s = [[None for _ in range(26)] for _ in range(D + 28)]
    t = [None for _ in range(D)]
    for i in range(26):
        c[i] = random.randint(1, 10 ** 9)  # TODO: edit here
    for j in range(D + 2):
        for i in range(26):
            s[i + j][i] = random.randint(1, 10 ** 9)  # TODO: edit here
    for i in range(D):
        t[i] = random.randint(1, 10 ** 9)  # TODO: edit here
    print(D)
    print(*[c[i] for i in range(26)])
    for j in range(D + 2):
        print(*[s[i + j][i] for i in range(26)])
    for i in range(D):
        print(t[i])


if __name__ == "__main__":
    main()
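Every value in the generator above comes from the global `random` module, so two runs produce different test cases. When a generated case triggers a bug, it helps to fix the seed before calling `main()` so the exact case can be regenerated — a small illustration (seeding is not part of the script itself):

```python
import random

random.seed(42)                 # fix the seed for a reproducible run
first = random.randint(1, 10 ** 9)

random.seed(42)                 # same seed again...
second = random.randint(1, 10 ** 9)

assert first == second          # ...reproduces the same draw
```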
# -*- coding: utf-8 -*-
from __future__ import unicode_literals


def str_strip(value):
    s = str(value) if value else None
    if s:
        s = s.strip()
    return s
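Note that the `if value else None` guard in `str_strip` short-circuits on any falsy input, not just `None`, so `0` and `''` also come back as `None`. A condensed copy of the helper with its edge cases, restated here so the snippet runs standalone:

```python
def str_strip(value):
    # condensed copy of the helper above
    s = str(value) if value else None
    if s:
        s = s.strip()
    return s


print(str_strip('  hello '))   # 'hello'
print(str_strip(None))         # None
print(str_strip(0))            # None -- 0 is falsy, so it never reaches str()
print(str_strip(' '))          # ''   -- truthy input, strips to empty string
```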
version = "0.116.0"
import atexit
import datetime
import logging
import os
import random
import signal
import time
from instabot import utils
# from instabot.api.api import API
from ..api import API
from .state.bot_state import BotState
from .state.bot_cache import BotCache
from .bot_archive import archive, archive_medias, unarchive_medias
from .bot_block import block, block_bots, block_users, unblock, unblock_users
from .bot_checkpoint import load_checkpoint, save_checkpoint
from .bot_comment import (
comment,
comment_geotag,
comment_hashtag,
comment_medias,
comment_user,
comment_users,
is_commented,
reply_to_comment,
)
from .bot_delete import delete_comment, delete_media, delete_medias
from .bot_direct import (
approve_pending_thread_requests,
send_hashtag,
send_like,
send_media,
send_medias,
send_message,
send_messages,
send_photo,
send_profile,
)
from .bot_filter import check_media, check_not_bot, check_user, filter_medias
from .bot_follow import (
approve_pending_follow_requests,
follow,
follow_followers,
follow_following,
follow_users,
reject_pending_follow_requests,
)
from .bot_get import (
convert_to_user_id,
get_archived_medias,
get_comment,
get_comment_likers,
get_geotag_medias,
get_geotag_users,
get_hashtag_medias,
get_hashtag_users,
get_last_user_medias,
get_link_from_media_id,
get_locations_from_coordinates,
get_media_commenters,
get_media_comments,
get_media_comments_all,
get_media_id_from_link,
get_media_info,
get_media_likers,
get_media_owner,
get_messages,
get_pending_follow_requests,
get_pending_thread_requests,
get_popular_medias,
get_self_story_viewers,
get_timeline_medias,
get_timeline_users,
get_total_hashtag_medias,
get_total_user_medias,
get_user_followers,
get_user_following,
get_user_id_from_username,
get_user_info,
get_user_likers,
get_user_medias,
get_user_reel,
get_user_stories,
get_user_tags_medias,
get_username_from_user_id,
get_your_medias,
search_users,
get_muted_friends,
)
from .bot_like import (
like,
like_comment,
like_followers,
like_following,
like_geotag,
like_hashtag,
like_location_feed,
like_media_comments,
like_medias,
like_timeline,
like_user,
like_users,
)
from .bot_photo import download_photo, download_photos, upload_photo
from .bot_stats import save_user_stats
from .bot_story import download_stories, upload_story_photo, watch_users_reels
from .bot_support import (
check_if_file_exists,
console_print,
extract_urls,
read_list_from_file,
)
from .bot_unfollow import (
unfollow,
unfollow_everyone,
unfollow_non_followers,
unfollow_users,
)
from .bot_unlike import (
unlike,
unlike_comment,
unlike_media_comments,
unlike_medias,
unlike_user,
)
from .bot_video import download_video, upload_video
current_path = os.path.abspath(os.getcwd())
class Bot(object):
def __init__(
self,
base_path=current_path + "/config/",
whitelist_file="whitelist.txt",
blacklist_file="blacklist.txt",
comments_file="comments.txt",
followed_file="followed.txt",
unfollowed_file="unfollowed.txt",
skipped_file="skipped.txt",
friends_file="friends.txt",
proxy=None,
max_likes_per_day=random.randint(50, 100),
max_unlikes_per_day=random.randint(50, 100),
max_follows_per_day=random.randint(50, 100),
max_unfollows_per_day=random.randint(50, 100),
max_comments_per_day=random.randint(50, 100),
max_blocks_per_day=random.randint(50, 100),
max_unblocks_per_day=random.randint(50, 100),
max_likes_to_like=random.randint(50, 100),
min_likes_to_like=random.randint(50, 100),
max_messages_per_day=random.randint(50, 100),
filter_users=False,
filter_private_users=False,
filter_users_without_profile_photo=False,
filter_previously_followed=False,
filter_business_accounts=False,
filter_verified_accounts=False,
max_followers_to_follow=5000,
min_followers_to_follow=10,
max_following_to_follow=2000,
min_following_to_follow=10,
max_followers_to_following_ratio=15,
max_following_to_followers_ratio=15,
min_media_count_to_follow=3,
max_following_to_block=2000,
like_delay=random.randint(300, 600),
unlike_delay=random.randint(300, 600),
follow_delay=random.randint(300, 600),
unfollow_delay=random.randint(300, 600),
comment_delay=random.randint(300, 600),
block_delay=random.randint(300, 600),
unblock_delay=random.randint(300, 600),
message_delay=random.randint(300, 600),
stop_words=("shop", "store", "free"),
blacklist_hashtags=["#shop", "#store", "#free"],
blocked_actions_protection=True,
blocked_actions_sleep=True,
blocked_actions_sleep_delay=random.randint(600, 1200),
verbosity=True,
device=None,
save_logfile=True,
log_filename=None,
loglevel_file=logging.DEBUG,
loglevel_stream=logging.INFO,
log_follow_unfollow=True,
):
self.api = API(
device=device,
base_path=base_path,
save_logfile=save_logfile,
log_filename=log_filename,
loglevel_file=loglevel_file,
loglevel_stream=loglevel_stream,
)
self.log_follow_unfollow = log_follow_unfollow
self.base_path = base_path
self.state = BotState()
self.delays = {
"like": like_delay,
"unlike": unlike_delay,
"follow": follow_delay,
"unfollow": unfollow_delay,
"comment": comment_delay,
"block": block_delay,
"unblock": unblock_delay,
"message": message_delay,
}
# limits - follow
self.filter_users = filter_users
self.filter_private_users = filter_private_users
self.filter_users_without_profile_photo = filter_users_without_profile_photo
self.filter_business_accounts = filter_business_accounts
self.filter_verified_accounts = filter_verified_accounts
self.filter_previously_followed = filter_previously_followed
self.max_per_day = {
"likes": max_likes_per_day,
"unlikes": max_unlikes_per_day,
"follows": max_follows_per_day,
"unfollows": max_unfollows_per_day,
"comments": max_comments_per_day,
"blocks": max_blocks_per_day,
"unblocks": max_unblocks_per_day,
"messages": max_messages_per_day,
}
self.blocked_actions_protection = blocked_actions_protection
self.blocked_actions_sleep = blocked_actions_sleep
self.blocked_actions_sleep_delay = blocked_actions_sleep_delay
self.max_likes_to_like = max_likes_to_like
self.min_likes_to_like = min_likes_to_like
self.max_followers_to_follow = max_followers_to_follow
self.min_followers_to_follow = min_followers_to_follow
self.max_following_to_follow = max_following_to_follow
self.min_following_to_follow = min_following_to_follow
self.max_followers_to_following_ratio = max_followers_to_following_ratio
self.max_following_to_followers_ratio = max_following_to_followers_ratio
self.min_media_count_to_follow = min_media_count_to_follow
self.stop_words = stop_words
self.blacklist_hashtags = blacklist_hashtags
# limits - block
self.max_following_to_block = max_following_to_block
# current following and followers
self.cache = BotCache()
# Adjust file paths
followed_file = os.path.join(base_path, followed_file)
unfollowed_file = os.path.join(base_path, unfollowed_file)
skipped_file = os.path.join(base_path, skipped_file)
friends_file = os.path.join(base_path, friends_file)
comments_file = os.path.join(base_path, comments_file)
blacklist_file = os.path.join(base_path, blacklist_file)
whitelist_file = os.path.join(base_path, whitelist_file)
# Database files
self.followed_file = utils.file(followed_file)
self.unfollowed_file = utils.file(unfollowed_file)
self.skipped_file = utils.file(skipped_file)
self.friends_file = utils.file(friends_file)
self.comments_file = utils.file(comments_file)
self.blacklist_file = utils.file(blacklist_file)
self.whitelist_file = utils.file(whitelist_file)
self.proxy = proxy
self.verbosity = verbosity
self.logger = self.api.logger
self.logger.info("Instabot version: " + version + " Started")
self.logger.debug("Bot imported from {}".format(__file__))
@property
def user_id(self):
# For compatibility
return self.api.user_id
@property
def username(self):
# For compatibility
return self.api.username
@property
def password(self):
# For compatibility
return self.api.password
@property
def last_json(self):
# For compatibility
return self.api.last_json
@property
def blacklist(self):
# This is a fast operation because
# `get_user_id_from_username` is cached.
return [
self.convert_to_user_id(i)
for i in self.blacklist_file.list
if i is not None
]
@property
def whitelist(self):
# This is a fast operation because
# `get_user_id_from_username` is cached.
return [
self.convert_to_user_id(i)
for i in self.whitelist_file.list
if i is not None
]
@property
def following(self):
now = time.time()
last = self.last.get("updated_following", now)
if self._following is None or (now - last) > 7200:
self.console_print("`bot.following` is empty, will download.", "green")
self._following = self.get_user_following(self.user_id)
self.last["updated_following"] = now
return self._following
@property
def followers(self):
now = time.time()
last = self.last.get("updated_followers", now)
if self._followers is None or (now - last) > 7200:
self.console_print("`bot.followers` is empty, will download.", "green")
self._followers = self.get_user_followers(self.user_id)
self.last["updated_followers"] = now
return self._followers
@property
def start_time(self):
return self.state.start_time
@start_time.setter
def start_time(self, value):
self.state.start_time = value
@property
def total(self):
return self.state.total
@total.setter
def total(self, value):
self.state.total = value
@property
def sleeping_actions(self):
return self.state.sleeping_actions
@sleeping_actions.setter
def sleeping_actions(self, value):
self.state.sleeping_actions = value
@property
def blocked_actions(self):
return self.state.blocked_actions
@blocked_actions.setter
def blocked_actions(self, value):
self.state.blocked_actions = value
@property
def last(self):
return self.state.last
@last.setter
def last(self, value):
self.state.last = value
@property
def _following(self):
return self.cache.following
@_following.setter
def _following(self, value):
self.cache.following = value
@property
def _followers(self):
return self.cache.followers
@_followers.setter
def _followers(self, value):
self.cache.followers = value
@property
def _user_infos(self):
return self.cache.user_infos
@_user_infos.setter
def _user_infos(self, value):
self.cache.user_infos = value
@property
def _usernames(self):
return self.cache.usernames
@_usernames.setter
def _usernames(self, value):
self.cache.usernames = value
@staticmethod
def version():
try:
from pip._vendor import pkg_resources
except ImportError:
import pkg_resources
return next(
(
p.version
for p in pkg_resources.working_set
if p.project_name.lower() == "instabot"
),
"No match",
)
def logout(self, *args, **kwargs):
self.api.logout()
self.logger.info(
"Bot stopped. " "Worked: %s", datetime.datetime.now() - self.start_time
)
self.print_counters()
def login(self, **args):
"""if login function is run threaded, for example in scheduled job,
signal will fail because it 'only works in main thread'.
In this case, you may want to call login(is_threaded=True).
"""
if self.proxy:
args["proxy"] = self.proxy
if self.api.login(**args) is False:
return False
self.prepare()
atexit.register(self.print_counters)
if "is_threaded" in args:
if args["is_threaded"]:
return True
signal.signal(signal.SIGTERM, self.print_counters)
return True
def prepare(self):
storage = load_checkpoint(self)
if storage is not None:
(
total,
self.blocked_actions,
self.api.total_requests,
self.start_time,
) = storage
for k, v in total.items():
self.total[k] = v
def print_counters(self, *args, **kwargs):
save_checkpoint(self)
for key, val in self.total.items():
if val > 0:
self.logger.info(
"Total {}: {}{}".format(
key,
val,
"/" + str(self.max_per_day[key])
if self.max_per_day.get(key)
else "",
)
)
for key, val in self.blocked_actions.items():
if val:
self.logger.info("Blocked {}".format(key))
self.logger.info("Total requests: {}".format(self.api.total_requests))
def delay(self, key):
"""
Sleep only if elapsed time since
`self.last[key]` < `self.delay[key]`.
"""
last_action, target_delay = self.last[key], self.delays[key]
elapsed_time = time.time() - last_action
if elapsed_time < target_delay:
t_remaining = target_delay - elapsed_time
time.sleep(t_remaining * random.uniform(0.25, 1.25))
self.last[key] = time.time()
def error_delay(self):
time.sleep(10)
def small_delay(self):
time.sleep(random.uniform(0.75, 3.75))
def very_small_delay(self):
time.sleep(random.uniform(0.175, 0.875))
def reached_limit(self, key):
current_date = datetime.datetime.now()
passed_days = (current_date.date() - self.start_time.date()).days
if passed_days > 0:
self.reset_counters()
return self.max_per_day[key] - self.total[key] <= 0
def reset_counters(self):
for k in self.total:
self.total[k] = 0
for k in self.blocked_actions:
self.blocked_actions[k] = False
self.start_time = datetime.datetime.now()
def reset_cache(self):
self._following = None
self._followers = None
self._user_infos = {}
self._usernames = {}
# getters
def get_user_stories(self, user_id):
"""
Returns array of stories links
"""
return get_user_stories(self, user_id)
def get_user_reel(self, user_id):
return get_user_reel(self, user_id)
def get_self_story_viewers(self, story_id):
return get_self_story_viewers(self, story_id)
def get_pending_follow_requests(self):
return get_pending_follow_requests(self)
def get_your_medias(self, as_dict=False):
"""
Returns your media ids. With parameter
as_dict=True returns media as dict.
:type as_dict: bool
"""
return get_your_medias(self, as_dict)
def get_archived_medias(self, as_dict=False):
"""
Returns your archived media ids. With parameter
as_dict=True returns media as dict.
:type as_dict: bool
"""
return get_archived_medias(self, as_dict)
def get_timeline_medias(self):
return get_timeline_medias(self)
def get_popular_medias(self):
return get_popular_medias(self)
def get_user_medias(self, user_id, filtration=True, is_comment=False):
return get_user_medias(self, user_id, filtration, is_comment)
def get_total_user_medias(self, user_id):
return get_total_user_medias(self, user_id)
def get_last_user_medias(self, user_id, count):
"""
Returns the last number of posts specified in count in media ids array.
:type count: int
:param count: Count of posts
:return: array
"""
return get_last_user_medias(self, user_id, count)
def get_hashtag_medias(self, hashtag, filtration=True):
return get_hashtag_medias(self, hashtag, filtration)
def get_total_hashtag_medias(self, hashtag, amount=100, filtration=False):
return get_total_hashtag_medias(self, hashtag, amount, filtration)
def get_geotag_medias(self, geotag, filtration=True):
return get_geotag_medias(self, geotag, filtration)
def get_locations_from_coordinates(self, latitude, longitude):
return get_locations_from_coordinates(self, latitude, longitude)
def get_media_info(self, media_id):
return get_media_info(self, media_id)
def get_timeline_users(self):
return get_timeline_users(self)
def get_hashtag_users(self, hashtag):
return get_hashtag_users(self, hashtag)
def get_geotag_users(self, geotag):
return get_geotag_users(self, geotag)
def get_user_id_from_username(self, username):
return get_user_id_from_username(self, username)
def get_user_tags_medias(self, user_id):
return get_user_tags_medias(self, user_id)
def get_username_from_user_id(self, user_id):
return get_username_from_user_id(self, user_id)
def get_user_info(self, user_id, use_cache=True):
return get_user_info(self, user_id, use_cache)
def get_user_followers(self, user_id, nfollows=None):
return get_user_followers(self, user_id, nfollows)
def get_user_following(self, user_id, nfollows=None):
return get_user_following(self, user_id, nfollows)
def get_comment_likers(self, comment_id):
return get_comment_likers(self, comment_id)
def get_media_likers(self, media_id):
return get_media_likers(self, media_id)
def get_media_comments(self, media_id, only_text=False):
return get_media_comments(self, media_id, only_text)
def get_media_comments_all(self, media_id, only_text=False, count=False):
return get_media_comments_all(self, media_id, only_text, count)
def get_comment(self):
return get_comment(self)
def get_media_commenters(self, media_id):
return get_media_commenters(self, media_id)
def get_media_owner(self, media):
return get_media_owner(self, media)
def get_user_likers(self, user_id, media_count=10):
return get_user_likers(self, user_id, media_count)
def get_media_id_from_link(self, link):
return get_media_id_from_link(self, link)
def get_link_from_media_id(self, link):
return get_link_from_media_id(self, link)
def get_messages(self):
return get_messages(self)
def search_users(self, query):
return search_users(self, query)
def get_muted_friends(self, muted_content="stories"):
return get_muted_friends(self, muted_content)
def convert_to_user_id(self, usernames):
return convert_to_user_id(self, usernames)
def get_pending_thread_requests(self):
return get_pending_thread_requests(self)
# like
def like(
self,
media_id,
check_media=True,
container_module="feed_short_url",
feed_position=0,
username=None,
user_id=None,
hashtag_name=None,
hashtag_id=None,
entity_page_name=None,
entity_page_id=None,
):
return like(
self,
media_id,
check_media,
container_module=container_module,
feed_position=feed_position,
username=username,
user_id=user_id,
hashtag_name=hashtag_name,
hashtag_id=hashtag_id,
entity_page_name=entity_page_name,
entity_page_id=entity_page_id,
)
def like_comment(self, comment_id):
return like_comment(self, comment_id)
def like_medias(
self,
media_ids,
check_media=True,
container_module="feed_timeline",
username=None,
user_id=None,
hashtag_name=None,
hashtag_id=None,
entity_page_name=None,
entity_page_id=None,
):
return like_medias(
self,
media_ids,
check_media,
container_module=container_module,
username=username,
user_id=user_id,
hashtag_name=hashtag_name,
hashtag_id=hashtag_id,
entity_page_name=entity_page_name,
entity_page_id=entity_page_id,
)
def like_timeline(self, amount=None):
return like_timeline(self, amount)
def like_media_comments(self, media_id):
return like_media_comments(self, media_id)
def like_user(self, user_id, amount=None, filtration=True):
return like_user(self, user_id, amount, filtration)
def like_hashtag(self, hashtag, amount=None):
return like_hashtag(self, hashtag, amount)
def like_geotag(self, geotag, amount=None):
return like_geotag(self, geotag, amount)
def like_users(self, user_ids, nlikes=None, filtration=True):
return like_users(self, user_ids, nlikes, filtration)
def like_location_feed(self, place, amount):
return like_location_feed(self, place, amount)
def like_followers(self, user_id, nlikes=None, nfollows=None):
return like_followers(self, user_id, nlikes, nfollows)
def like_following(self, user_id, nlikes=None, nfollows=None):
return like_following(self, user_id, nlikes, nfollows)
# unlike
def unlike(self, media_id):
return unlike(self, media_id)
def unlike_comment(self, comment_id):
return unlike_comment(self, comment_id)
def unlike_media_comments(self, media_id):
return unlike_media_comments(self, media_id)
def unlike_medias(self, media_ids):
return unlike_medias(self, media_ids)
def unlike_user(self, user):
return unlike_user(self, user)
# story
def download_stories(self, username):
return download_stories(self, username)
def upload_story_photo(self, photo, upload_id=None):
return upload_story_photo(self, photo, upload_id)
def watch_users_reels(self, user_ids, max_users=100):
return watch_users_reels(self, user_ids, max_users=max_users)
# photo
def download_photo(
self, media_id, folder="photos", filename=None, save_description=False
):
return download_photo(self, media_id, folder, filename, save_description)
def download_photos(self, medias, folder="photos", save_description=False):
return download_photos(self, medias, folder, save_description)
def upload_photo(
self, photo, caption=None, upload_id=None, from_video=False, options={}
):
"""Upload photo to Instagram
@param photo Path to photo file (String)
@param caption Media description (String)
@param upload_id Unique upload_id (String). When None, then
generate automatically
@param from_video A flag that signals whether the photo is loaded
from the video or by itself
(Boolean, DEPRECATED: not used)
@param options Object with difference options,
e.g. configure_timeout, rename (Dict)
Designed to reduce the number of function
arguments! This is the simplest request object.
@return Object with state of uploading to
Instagram (or False)
"""
return upload_photo(self, photo, caption, upload_id, from_video, options)
# video
def upload_video(self, video, caption="", thumbnail=None, options={}):
"""Upload video to Instagram
@param video Path to video file (String)
@param caption Media description (String)
@param thumbnail Path to thumbnail for video (String). When None,
then thumbnail is generated automatically
@param options Object with difference options, e.g.
configure_timeout, rename_thumbnail, rename (Dict)
Designed to reduce the number of function arguments!
@return Object with Instagram upload state (or False)
"""
return upload_video(self, video, caption, thumbnail, options)
def download_video(
self, media_id, folder="videos", filename=None, save_description=False
):
return download_video(self, media_id, folder, filename, save_description)
# follow
def follow(self, user_id, check_user=True):
return follow(self, user_id, check_user)
def follow_users(self, user_ids, nfollows=None):
return follow_users(self, user_ids, nfollows)
def follow_followers(self, user_id, nfollows=None):
return follow_followers(self, user_id, nfollows)
def follow_following(self, user_id, nfollows=None):
return follow_following(self, user_id, nfollows)
# unfollow
def unfollow(self, user_id):
return unfollow(self, user_id)
def unfollow_users(self, user_ids):
return unfollow_users(self, user_ids)
def unfollow_non_followers(self, n_to_unfollows=None):
return unfollow_non_followers(self, n_to_unfollows)
def unfollow_everyone(self):
return unfollow_everyone(self)
def approve_pending_follow_requests(self):
return approve_pending_follow_requests(self)
def reject_pending_follow_requests(self):
return reject_pending_follow_requests(self)
# direct
def send_message(self, text, user_ids, thread_id=None):
return send_message(self, text, user_ids, thread_id)
def send_messages(self, text, user_ids):
return send_messages(self, text, user_ids)
def send_media(self, media_id, user_ids, text=None, thread_id=None):
return send_media(self, media_id, user_ids, text, thread_id)
def send_medias(self, media_id, user_ids, text=None):
return send_medias(self, media_id, user_ids, text)
def send_hashtag(self, hashtag, user_ids, text="", thread_id=None):
return send_hashtag(self, hashtag, user_ids, text, thread_id)
def send_profile(self, profile_user_id, user_ids, text="", thread_id=None):
return send_profile(self, profile_user_id, user_ids, text, thread_id)
def send_like(self, user_ids, thread_id=None):
return send_like(self, user_ids, thread_id)
def send_photo(self, user_ids, filepath, thread_id=None):
return send_photo(self, user_ids, filepath, thread_id)
def approve_pending_thread_requests(self):
return approve_pending_thread_requests(self)
# delete
def delete_media(self, media_id):
return delete_media(self, media_id)
def delete_medias(self, medias):
return delete_medias(self, medias)
def delete_comment(self, media_id, comment_id):
return delete_comment(self, media_id, comment_id)
# archive
def archive(self, media_id, undo=False):
return archive(self, media_id, undo)
def unarchive(self, media_id):
return archive(self, media_id, True)
def archive_medias(self, medias):
return archive_medias(self, medias)
def unarchive_medias(self, medias):
return unarchive_medias(self, medias)
# comment
def comment(self, media_id, comment_text):
return comment(self, media_id, comment_text)
def reply_to_comment(self, media_id, comment_text, parent_comment_id):
return reply_to_comment(self, media_id, comment_text, parent_comment_id)
def comment_hashtag(self, hashtag, amount=None):
return comment_hashtag(self, hashtag, amount)
def comment_medias(self, medias):
return comment_medias(self, medias)
def comment_user(self, user_id, amount=None):
return comment_user(self, user_id, amount)
def comment_users(self, user_ids, ncomments=None):
return comment_users(self, user_ids, ncomments)
def comment_geotag(self, geotag):
return comment_geotag(self, geotag)
def is_commented(self, media_id):
return is_commented(self, media_id)
# block
def block(self, user_id):
return block(self, user_id)
def unblock(self, user_id):
return unblock(self, user_id)
def block_users(self, user_ids):
return block_users(self, user_ids)
def unblock_users(self, user_ids):
return unblock_users(self, user_ids)
def block_bots(self):
return block_bots(self)
# filter
def filter_medias(
self, media_items, filtration=True, quiet=False, is_comment=False
):
return filter_medias(self, media_items, filtration, quiet, is_comment)
def check_media(self, media):
return check_media(self, media)
def check_user(self, user, unfollowing=False):
return check_user(self, user, unfollowing)
def check_not_bot(self, user):
return check_not_bot(self, user)
# support
def check_if_file_exists(self, file_path, quiet=False):
return check_if_file_exists(file_path, quiet)
def extract_urls(self, text):
return extract_urls(text)
def read_list_from_file(self, file_path):
return read_list_from_file(file_path)
def console_print(self, text, color=None):
return console_print(self, text, color)
# stats
def save_user_stats(self, username, path=""):
return save_user_stats(self, username, path=path)
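`Bot.delay()` above sleeps only when the previous action of the same kind happened less than the configured delay ago, and jitters the remaining time by a uniform 0.25–1.25 factor. A self-contained sketch of that throttling pattern — the `Throttle` class is illustrative, not part of instabot:

```python
import random
import time


class Throttle:
    """Per-action throttle in the style of Bot.delay()."""

    def __init__(self, delays):
        self.delays = delays                       # e.g. {"like": 2.0}
        self.last = {key: 0.0 for key in delays}   # timestamp of last action

    def wait(self, key):
        elapsed = time.time() - self.last[key]
        target = self.delays[key]
        if elapsed < target:
            # Randomized sleep, mirroring the uniform(0.25, 1.25) jitter.
            time.sleep((target - elapsed) * random.uniform(0.25, 1.25))
        self.last[key] = time.time()


throttle = Throttle({"like": 0.2})
throttle.wait("like")        # no recent "like" action: returns immediately
start = time.time()
throttle.wait("like")        # back-to-back call: sleeps part of the 0.2 s
waited = time.time() - start
```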
import json
import os
import sys
import pdb

sys.path.append(os.path.realpath('..'))
print(sys.path)

from model import (KvretConfig, KvretDataset, MTSIAdapterDataset, MTSIBert,
                   MTSIKvretConfig, TwoSepTensorBuilder)


def identify_sessions(dialogue):
    pass
    # split the dialogue into triplets
    # feed these triplets to MTSI-BERT
    # collect the sessions inside a JSON file


if __name__ == '__main__':
    with open('input_dialogue.json') as json_dialogue:
        dialogue_data = json.load(json_dialogue)
    pdb.set_trace()
    identify_sessions(dialogue_data)
    # run flask on localhost
    # allocate the model before any call
    # inside the flask call do:
    #   read input json
    #   make forward
    #   return the json
6a74693a5260639118ac11a1584e8a5d59f851a0 | 139,791 | py | Python | tools/genbuiltins.py | ffontaine/duktape | 841b6fe9dc13f32954bc8c9d2fc4cc1a83ea5354 | [
"MIT"
] | 34 | 2019-02-26T16:22:24.000Z | 2022-02-18T01:59:31.000Z | tools/genbuiltins.py | ffontaine/duktape | 841b6fe9dc13f32954bc8c9d2fc4cc1a83ea5354 | [
"MIT"
] | null | null | null | tools/genbuiltins.py | ffontaine/duktape | 841b6fe9dc13f32954bc8c9d2fc4cc1a83ea5354 | [
"MIT"
] | null | null | null | #!/usr/bin/env python2
#
# Generate initialization data for built-in strings and objects.
#
# Supports two different initialization approaches:
#
# 1. Bit-packed format for unpacking strings and objects during
# heap or thread init into RAM-based structures. This is the
# default behavior.
#
# 2. Embedding strings and/or objects into a read-only data section
# at compile time. This is useful for low memory targets to reduce
# memory usage. Objects in data section will be immutable.
#
# Both of these have practical complications like endianness differences,
# pointer compression variants, object property table layout variants,
# and so on. Multiple #if defined()'d initializer sections are emitted
# to cover all supported alternatives.
#
import logging
import sys
logging.basicConfig(level=logging.INFO, stream=sys.stdout, format='%(name)-21s %(levelname)-7s %(message)s')
logger = logging.getLogger('genbuiltins.py')
logger.setLevel(logging.INFO)
import os
import re
import traceback
import json
import yaml
import math
import struct
import optparse
import copy
import dukutil
# Fixed seed for ROM strings, must match src-input/duk_heap_alloc.c.
DUK__FIXED_HASH_SEED = 0xabcd1234
# Base value for compressed ROM pointers, used range is [ROMPTR_FIRST,0xffff].
# Must match DUK_USE_ROM_PTRCOMP_FIRST (generated header checks).
ROMPTR_FIRST = 0xf800 # 2048 should be enough; now around ~1000 used
# ROM string table size
ROMSTR_LOOKUP_SIZE = 256
#
# Miscellaneous helpers
#
# Convert Unicode to bytes, identifying Unicode U+0000 to U+00FF as bytes.
# This representation is used in YAML metadata and allows invalid UTF-8 to
# be represented exactly (which is necessary).
def unicode_to_bytes(x):
if isinstance(x, str):
return x
tmp = ''
for c in x:
if ord(c) > 0xff:
raise Exception('invalid codepoint: %r' % x)
tmp += chr(ord(c))
assert(isinstance(tmp, str))
return tmp
# Convert bytes to Unicode, identifying bytes as U+0000 to U+00FF.
def bytes_to_unicode(x):
if isinstance(x, unicode):
return x
tmp = u''
for c in x:
tmp += unichr(ord(c))
assert(isinstance(tmp, unicode))
return tmp
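The script above is Python 2, where `str` is a byte string. In Python 3 the same one-to-one identification of bytes with codepoints U+0000..U+00FF is exactly the Latin-1 codec; a standalone sketch:

```python
def unicode_to_bytes3(s):
    # Each codepoint U+0000..U+00FF maps to the byte of the same value;
    # anything above U+00FF cannot be represented and raises an error.
    return s.encode('latin-1')

def bytes_to_unicode3(b):
    # Inverse mapping: each byte becomes the codepoint of the same value.
    return b.decode('latin-1')
```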
# Convert all strings in an object to bytes recursively. Useful for
# normalizing all strings in a YAML document.
def recursive_strings_to_bytes(doc):
def f(x):
if isinstance(x, unicode):
return unicode_to_bytes(x)
if isinstance(x, dict):
res = {}
for k in x.keys():
res[f(k)] = f(x[k])
return res
if isinstance(x, list):
res = []
for e in x:
res.append(f(e))
return res
return x
return f(doc)
# Convert all strings in an object from bytes to Unicode recursively.
# Useful for writing back JSON/YAML dumps.
def recursive_bytes_to_strings(doc):
def f(x):
if isinstance(x, str):
return bytes_to_unicode(x)
if isinstance(x, dict):
res = {}
for k in x.keys():
res[f(k)] = f(x[k])
return res
if isinstance(x, list):
res = []
for e in x:
res.append(f(e))
return res
return x
return f(doc)
# Check if a string is an "array index" in ECMAScript terms.
def string_is_arridx(v):
is_arridx = False
try:
ival = int(v)
if ival >= 0 and ival <= 0xfffffffe and ('%d' % ival == v):
is_arridx = True
except ValueError:
pass
return is_arridx
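The array-index rule (a canonical decimal integer in [0, 2**32 - 2]) can be exercised with this Python 3 restatement of the same check:

```python
def string_is_arridx3(v):
    # An ECMAScript array index is a canonical decimal integer in
    # [0, 2**32 - 2]; '01', '-1' and '4294967295' are all rejected
    # because their canonical form differs or they fall out of range.
    try:
        ival = int(v)
    except ValueError:
        return False
    return 0 <= ival <= 0xfffffffe and '%d' % ival == v
```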
#
# Metadata loading, merging, and other preprocessing
#
# Final metadata object contains merged and normalized objects and strings.
# Keys added include (see more below):
#
# strings_stridx: string objects which have a stridx, matches stridx index order
# objects_bidx: objects which have a bidx, matches bidx index order
# objects_ram_toplevel: objects which are top level for RAM init
#
# Various helper keys are also added, containing auxiliary object/string
# lists, lookup maps, etc. See code below for details of these.
#
def metadata_lookup_object(meta, obj_id):
return meta['_objid_to_object'][obj_id]
def metadata_lookup_object_and_index(meta, obj_id):
for i,t in enumerate(meta['objects']):
if t['id'] == obj_id:
return t, i
return None, None
def metadata_lookup_property(obj, key):
for p in obj['properties']:
if p['key'] == key:
return p
return None
def metadata_lookup_property_and_index(obj, key):
for i,t in enumerate(obj['properties']):
if t['key'] == key:
return t, i
return None, None
# Remove disabled objects and properties.
def metadata_remove_disabled(meta, active_opts):
objlist = []
count_disabled_object = 0
count_notneeded_object = 0
count_disabled_property = 0
count_notneeded_property = 0
def present_if_check(v):
pi = v.get('present_if', None)
if pi is None:
return True
if isinstance(pi, (str, unicode)):
pi = [ pi ]
if not isinstance(pi, list):
raise Exception('invalid present_if syntax: %r' % pi)
# Present if all listed options are true or unknown.
# Absent if any option is known to be false.
for opt in pi:
if active_opts.get(opt, None) == False:
return False
return True
for o in meta['objects']:
if o.get('disable', False):
logger.debug('Remove disabled object: %s' % o['id'])
count_disabled_object += 1
elif not present_if_check(o):
logger.debug('Removed object not needed in active configuration: %s' % o['id'])
count_notneeded_object += 1
else:
objlist.append(o)
props = []
for p in o['properties']:
if p.get('disable', False):
logger.debug('Remove disabled property: %s, object: %s' % (p['key'], o['id']))
count_disabled_property += 1
elif not present_if_check(p):
logger.debug('Removed property not needed in active configuration: %s, object: %s' % (p['key'], o['id']))
count_notneeded_property += 1
else:
props.append(p)
o['properties'] = props
meta['objects'] = objlist
if count_disabled_object + count_notneeded_object + count_disabled_property + count_notneeded_property > 0:
logger.info('Removed %d objects (%d disabled, %d not needed by config), %d properties (%d disabled, %d not needed by config)' % (count_disabled_object + count_notneeded_object, count_disabled_object, count_notneeded_object, count_disabled_property + count_notneeded_property, count_disabled_property, count_notneeded_property))
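The `present_if` semantics used above (keep an entry unless some listed option is *known* false) can be isolated in a short Python 3 sketch:

```python
def present_if_check(entry, active_opts):
    # Keep the entry if every listed option is True or unknown (absent);
    # drop it only when some option is known to be False.
    pi = entry.get('present_if')
    if pi is None:
        return True
    if isinstance(pi, str):
        pi = [pi]
    return all(active_opts.get(opt) is not False for opt in pi)
```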
# Delete dangling references to removed/missing objects.
def metadata_delete_dangling_references_to_object(meta, obj_id):
for o in meta['objects']:
new_p = []
for p in o['properties']:
v = p['value']
ptype = None
if isinstance(v, dict):
ptype = p['value']['type']
delprop = False
if ptype == 'object' and v['id'] == obj_id:
delprop = True
            if ptype == 'accessor' and v.get('getter_id') == obj_id:
                v['getter_id'] = None
            if ptype == 'accessor' and v.get('setter_id') == obj_id:
                v['setter_id'] = None
# XXX: Should empty accessor (= no getter, no setter) be deleted?
# If so, beware of shorthand.
if delprop:
logger.debug('Deleted property %s of object %s, points to deleted object %s' % \
(p['key'], o['id'], obj_id))
else:
new_p.append(p)
o['properties'] = new_p
# Merge a user YAML file into current metadata.
def metadata_merge_user_objects(meta, user_meta):
if user_meta.has_key('add_objects'):
raise Exception('"add_objects" removed, use "objects" with "add: True"')
if user_meta.has_key('replace_objects'):
raise Exception('"replace_objects" removed, use "objects" with "replace: True"')
if user_meta.has_key('modify_objects'):
raise Exception('"modify_objects" removed, use "objects" with "modify: True"')
for o in user_meta.get('objects', []):
if o.get('disable', False):
logger.debug('Skip disabled object: %s' % o['id'])
continue
targ, targ_idx = metadata_lookup_object_and_index(meta, o['id'])
if o.get('delete', False):
logger.debug('Delete object: %s' % targ['id'])
if targ is None:
raise Exception('Cannot delete object %s which doesn\'t exist' % o['id'])
meta['objects'].pop(targ_idx)
metadata_delete_dangling_references_to_object(meta, targ['id'])
continue
if o.get('replace', False):
logger.debug('Replace object %s' % o['id'])
if targ is None:
logger.warning('object to be replaced doesn\'t exist, append new object')
meta['objects'].append(o)
else:
meta['objects'][targ_idx] = o
continue
if o.get('add', False) or not o.get('modify', False): # 'add' is the default
logger.debug('Add object %s' % o['id'])
if targ is not None:
raise Exception('Cannot add object %s which already exists' % o['id'])
meta['objects'].append(o)
continue
assert(o.get('modify', False)) # modify handling
if targ is None:
raise Exception('Cannot modify object %s which doesn\'t exist' % o['id'])
for k in sorted(o.keys()):
# Merge top level keys by copying over, except 'properties'
if k == 'properties':
continue
targ[k] = o[k]
for p in o.get('properties', []):
if p.get('disable', False):
logger.debug('Skip disabled property: %s' % p['key'])
continue
prop = None
prop_idx = None
prop, prop_idx = metadata_lookup_property_and_index(targ, p['key'])
if prop is not None:
if p.get('delete', False):
logger.debug('Delete property %s of %s' % (p['key'], o['id']))
targ['properties'].pop(prop_idx)
else:
logger.debug('Replace property %s of %s' % (p['key'], o['id']))
targ['properties'][prop_idx] = p
else:
if p.get('delete', False):
logger.debug('Deleting property %s of %s: doesn\'t exist, nop' % (p['key'], o['id']))
else:
logger.debug('Add property %s of %s' % (p['key'], o['id']))
targ['properties'].append(p)
# Replace 'symbol' keys and values with encoded strings.
def format_symbol(sym):
#print(repr(sym))
assert(isinstance(sym, dict))
assert(sym.get('type', None) == 'symbol')
variant = sym['variant']
if variant == 'global':
return '\x80' + sym['string']
elif variant == 'wellknown':
# Well known symbols use an empty suffix which never occurs for
# runtime local symbols.
return '\x81' + sym['string'] + '\xff'
elif variant == 'userhidden':
return '\xff' + sym['string']
elif variant == 'hidden': # hidden == Duktape hidden Symbol
return '\x82' + sym['string']
raise Exception('invalid symbol variant %r' % variant)
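The byte-prefix scheme in `format_symbol` can be summarized standalone; this Python 3 sketch takes the variant and name directly instead of a metadata dict:

```python
# Prefix bytes distinguishing symbol variants in the internal string
# representation (well-known symbols additionally get an empty '\xff'
# suffix, which never occurs for runtime local symbols).
PREFIXES = {
    'global': '\x80',
    'wellknown': '\x81',
    'userhidden': '\xff',
    'hidden': '\x82',  # Duktape hidden Symbol
}

def format_symbol3(variant, name):
    if variant not in PREFIXES:
        raise ValueError('invalid symbol variant %r' % variant)
    encoded = PREFIXES[variant] + name
    if variant == 'wellknown':
        encoded += '\xff'
    return encoded
```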
def metadata_normalize_symbol_strings(meta):
for o in meta['strings']:
if isinstance(o['str'], dict) and o['str'].get('type') == 'symbol':
o['str'] = format_symbol(o['str'])
#print('normalized symbol as string list element: %r', o)
for o in meta['objects']:
for p in o['properties']:
if isinstance(p['key'], dict) and p['key'].get('type') == 'symbol':
p['key'] = format_symbol(p['key'])
#print('normalized symbol as property key: %r', p)
if isinstance(p['value'], dict) and p['value'].get('type') == 'symbol':
p['value'] = format_symbol(p['value'])
#print('normalized symbol as property value: %r', p)
# Normalize nargs for top level functions by defaulting 'nargs' from 'length'.
def metadata_normalize_nargs_length(meta):
# Default 'nargs' from 'length' for top level function objects.
for o in meta['objects']:
if o.has_key('nargs'):
continue
if not o.get('callable', False):
continue
for p in o['properties']:
if p['key'] != 'length':
continue
logger.debug('Default nargs for top level: %r' % p)
assert(isinstance(p['value'], int))
o['nargs'] = p['value']
break
assert(o.has_key('nargs'))
# Default 'nargs' from 'length' for function property shorthand.
for o in meta['objects']:
for p in o['properties']:
if not (isinstance(p['value'], dict) and p['value']['type'] == 'function'):
continue
pval = p['value']
if not pval.has_key('length'):
logger.debug('Default length for function shorthand: %r' % p)
pval['length'] = 0
if not pval.has_key('nargs'):
logger.debug('Default nargs for function shorthand: %r' % p)
pval['nargs'] = pval['length']
# Prepare a list of built-in objects which need a runtime 'bidx'.
def metadata_prepare_objects_bidx(meta):
objlist = meta['objects']
meta['objects'] = []
meta['objects_bidx'] = []
# Objects have a 'bidx: true' if they need a DUK_BIDX_xxx constant
# and need to be present in thr->builtins[]. The list is already
# stripped of built-in objects which are not needed based on config.
# Ideally we'd scan the actually needed indices from the source
# but since some usage is inside #if defined()s that's not trivial.
for obj in objlist:
if obj.get('bidx', False):
obj['bidx_used'] = True
meta['objects'].append(obj)
meta['objects_bidx'].append(obj)
# Append remaining objects.
for obj in objlist:
if obj.get('bidx_used', False):
# Already in meta['objects'].
pass
else:
meta['objects'].append(obj)
# Normalize metadata property shorthand. For example, if a property value
# is a shorthand function, create a function object and change the property
# to point to that function object.
def metadata_normalize_shorthand(meta):
# Gather objects through the top level built-ins list.
objs = []
subobjs = []
def getSubObject():
obj = {}
obj['id'] = 'subobj_%d' % len(subobjs) # synthetic ID
obj['properties'] = []
obj['auto_generated'] = True # mark as autogenerated (just FYI)
subobjs.append(obj)
return obj
def decodeFunctionShorthand(funprop):
# Convert the built-in function property "shorthand" into an actual
# object for ROM built-ins.
assert(funprop['value']['type'] == 'function')
val = funprop['value']
obj = getSubObject()
props = obj['properties']
obj['native'] = val['native']
obj['nargs'] = val.get('nargs', val['length'])
obj['varargs'] = val.get('varargs', False)
obj['magic'] = val.get('magic', 0)
obj['internal_prototype'] = 'bi_function_prototype'
obj['class'] = 'Function'
obj['callable'] = val.get('callable', True)
obj['constructable'] = val.get('constructable', False)
obj['special_call'] = val.get('special_call', False)
fun_name = val.get('name', funprop['key'])
props.append({ 'key': 'length', 'value': val['length'], 'attributes': 'c' }) # Configurable in ES2015
props.append({ 'key': 'name', 'value': fun_name, 'attributes': 'c' }) # Configurable in ES2015
return obj
def addAccessor(funprop, magic, nargs, length, name, native_func):
assert(funprop['value']['type'] == 'accessor')
obj = getSubObject()
props = obj['properties']
obj['native'] = native_func
obj['nargs'] = nargs
obj['varargs'] = False
obj['magic'] = magic
obj['internal_prototype'] = 'bi_function_prototype'
obj['class'] = 'Function'
        obj['callable'] = funprop['value'].get('callable', True)
        obj['constructable'] = funprop['value'].get('constructable', False)
assert(obj.get('special_call', False) == False)
# Shorthand accessors are minimal and have no .length or .name
# right now. Use longhand if these matter.
#props.append({ 'key': 'length', 'value': length, 'attributes': 'c' })
#props.append({ 'key': 'name', 'value': name, 'attributes': 'c' })
return obj
def decodeGetterShorthand(key, funprop):
assert(funprop['value']['type'] == 'accessor')
val = funprop['value']
if not val.has_key('getter'):
return None
return addAccessor(funprop,
val['getter_magic'],
val['getter_nargs'],
val.get('getter_length', 0),
key,
val['getter'])
def decodeSetterShorthand(key, funprop):
assert(funprop['value']['type'] == 'accessor')
val = funprop['value']
if not val.has_key('setter'):
return None
return addAccessor(funprop,
val['setter_magic'],
val['setter_nargs'],
val.get('setter_length', 0),
key,
val['setter'])
def decodeStructuredValue(val):
logger.debug('Decode structured value: %r' % val)
if isinstance(val, (int, long, float, str)):
return val # as is
elif isinstance(val, (dict)):
# Object: decode recursively
obj = decodeStructuredObject(val)
return { 'type': 'object', 'id': obj['id'] }
elif isinstance(val, (list)):
raise Exception('structured shorthand does not yet support array literals')
else:
        raise Exception('unsupported value in structured shorthand: %r' % val)
def decodeStructuredObject(val):
# XXX: We'd like to preserve dict order from YAML source but
# Python doesn't do that. Use sorted order to make the result
# deterministic. User can always use longhand for exact
# property control.
logger.debug('Decode structured object: %r' % val)
obj = getSubObject()
obj['class'] = 'Object'
obj['internal_prototype'] = 'bi_object_prototype'
props = obj['properties']
keys = sorted(val.keys())
for k in keys:
logger.debug('Decode property %s' % k)
prop = { 'key': k, 'value': decodeStructuredValue(val[k]), 'attributes': 'wec' }
props.append(prop)
return obj
def decodeStructuredShorthand(structprop):
assert(structprop['value']['type'] == 'structured')
val = structprop['value']['value']
return decodeStructuredValue(val)
def clonePropShared(prop):
res = {}
for k in [ 'key', 'attributes', 'auto_lightfunc' ]:
if prop.has_key(k):
res[k] = prop[k]
return res
for idx,obj in enumerate(meta['objects']):
props = []
repl_props = []
for val in obj['properties']:
# Date.prototype.toGMTString must point to the same Function object
# as Date.prototype.toUTCString, so special case hack it here.
if obj['id'] == 'bi_date_prototype' and val['key'] == 'toGMTString':
logger.debug('Skip Date.prototype.toGMTString')
continue
if isinstance(val['value'], dict) and val['value']['type'] == 'function':
# Function shorthand.
subfun = decodeFunctionShorthand(val)
prop = clonePropShared(val)
prop['value'] = { 'type': 'object', 'id': subfun['id'] }
repl_props.append(prop)
elif isinstance(val['value'], dict) and val['value']['type'] == 'accessor' and \
(val['value'].has_key('getter') or val['value'].has_key('setter')):
# Accessor normal and shorthand forms both use the type 'accessor',
# but are differentiated by properties.
sub_getter = decodeGetterShorthand(val['key'], val)
sub_setter = decodeSetterShorthand(val['key'], val)
prop = clonePropShared(val)
prop['value'] = { 'type': 'accessor' }
if sub_getter is not None:
prop['value']['getter_id'] = sub_getter['id']
if sub_setter is not None:
prop['value']['setter_id'] = sub_setter['id']
assert('a' in prop['attributes']) # If missing, weird things happen runtime
logger.debug('Expand accessor shorthand: %r -> %r' % (val, prop))
repl_props.append(prop)
elif isinstance(val['value'], dict) and val['value']['type'] == 'structured':
# Structured shorthand.
subval = decodeStructuredShorthand(val)
prop = clonePropShared(val)
prop['value'] = subval
repl_props.append(prop)
logger.debug('Decoded structured shorthand for object %s, property %s' % (obj['id'], val['key']))
elif isinstance(val['value'], dict) and val['value']['type'] == 'buffer':
# Duktape buffer type not yet supported.
raise Exception('Buffer type not yet supported for builtins: %r' % val)
elif isinstance(val['value'], dict) and val['value']['type'] == 'pointer':
# Duktape pointer type not yet supported.
raise Exception('Pointer type not yet supported for builtins: %r' % val)
else:
# Property already in normalized form.
repl_props.append(val)
if obj['id'] == 'bi_date_prototype' and val['key'] == 'toUTCString':
logger.debug('Clone Date.prototype.toUTCString to Date.prototype.toGMTString')
prop2 = copy.deepcopy(repl_props[-1])
prop2['key'] = 'toGMTString'
repl_props.append(prop2)
# Replace properties with a variant where function properties
# point to built-ins rather than using an inline syntax.
obj['properties'] = repl_props
len_before = len(meta['objects'])
meta['objects'] += subobjs
len_after = len(meta['objects'])
logger.debug('Normalized metadata shorthand, %d objects -> %d final objects' % (len_before, len_after))
# Normalize property attribute order, default attributes, etc.
def metadata_normalize_property_attributes(meta):
for o in meta['objects']:
for p in o['properties']:
orig_attrs = p.get('attributes', None)
is_accessor = (isinstance(p['value'], dict) and p['value']['type'] == 'accessor')
# If missing, set default attributes.
attrs = orig_attrs
if attrs is None:
if is_accessor:
attrs = 'ca' # accessor default is configurable
else:
attrs = 'wc' # default is writable, configurable
logger.debug('Defaulted attributes of %s/%s to %s' % (o['id'], p['key'], attrs))
# Decode flags to normalize their order in the end.
writable = 'w' in attrs
enumerable = 'e' in attrs
configurable = 'c' in attrs
accessor = 'a' in attrs
# Force 'accessor' attribute for accessors.
if is_accessor and not accessor:
logger.debug('Property %s is accessor but has no "a" attribute, add attribute' % p['key'])
accessor = True
# Normalize order and write back.
attrs = ''
if writable:
attrs += 'w'
if enumerable:
attrs += 'e'
if configurable:
attrs += 'c'
if accessor:
attrs += 'a'
p['attributes'] = attrs
            if orig_attrs != attrs:
                logger.debug('Updated attributes of %s/%s from %r to %r' % (o['id'], p['key'], orig_attrs, attrs))
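The attribute normalization above defaults missing flags and re-emits them in a canonical order; the core transform, extracted into a Python 3 sketch:

```python
def normalize_attrs(attrs, is_accessor=False):
    # Default missing attributes ('ca' for accessors, 'wc' otherwise),
    # then re-emit flags in canonical 'w', 'e', 'c', 'a' order;
    # accessors are forced to carry the 'a' flag.
    if attrs is None:
        attrs = 'ca' if is_accessor else 'wc'
    flags = set(attrs)
    if is_accessor:
        flags.add('a')
    return ''.join(f for f in 'weca' if f in flags)
```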
# Normalize ROM property attributes.
def metadata_normalize_rom_property_attributes(meta):
for o in meta['objects']:
for p in o['properties']:
# ROM properties must not be configurable (runtime code
# depends on this). Writability is kept so that instance
# objects can override parent properties.
p['attributes'] = p['attributes'].replace('c', '')
# Add a 'name' property for all top level functions; expected by RAM
# initialization code.
def metadata_normalize_ram_function_names(meta):
num_added = 0
for o in meta['objects']:
if not o.get('callable', False):
continue
name_prop = None
for p in o['properties']:
if p['key'] == 'name':
name_prop = p
break
if name_prop is None:
num_added += 1
logger.debug('Adding missing "name" property for function %s' % o['id'])
o['properties'].append({ 'key': 'name', 'value': '', 'attributes': 'c' })
if num_added > 0:
logger.debug('Added missing "name" property for %d functions' % num_added)
# Add a built-in objects list for RAM initialization.
def metadata_add_ram_filtered_object_list(meta):
# For RAM init data to support user objects, we need to prepare a
# filtered top level object list, containing only those objects which
# need a value stack index during duk_hthread_builtins.c init process.
#
# Objects in meta['objects'] which are covered by inline property
# notation in the init data (this includes e.g. member functions like
# Math.cos) must not be present.
objlist = []
for o in meta['objects']:
keep = o.get('bidx_used', False)
if o.has_key('native') and not o.has_key('bidx'):
# Handled inline by run-time init code
pass
else:
# Top level object
keep = True
if keep:
objlist.append(o)
logger.debug('Filtered RAM object list: %d objects with bidx, %d total top level objects' % \
(len(meta['objects_bidx']), len(objlist)))
meta['objects_ram_toplevel'] = objlist
# Add missing strings into strings metadata. For example, if an object
# property key is not part of the strings list, append it there. This
# is critical for ROM builtins because all property keys etc must also
# be in ROM.
def metadata_normalize_missing_strings(meta, user_meta):
# We just need plain strings here.
strs_have = {}
for s in meta['strings']:
strs_have[s['str']] = True
# For ROM builtins all the strings must be in the strings list,
# so scan objects for any strings not explicitly listed in metadata.
for idx, obj in enumerate(meta['objects']):
for prop in obj['properties']:
key = prop['key']
if not strs_have.get(key):
logger.debug('Add missing string: %r' % key)
meta['strings'].append({ 'str': key, '_auto_add_ref': True })
strs_have[key] = True
if prop.has_key('value') and isinstance(prop['value'], (str, unicode)):
val = unicode_to_bytes(prop['value']) # should already be, just in case
if not strs_have.get(val):
logger.debug('Add missing string: %r' % val)
meta['strings'].append({ 'str': val, '_auto_add_ref': True })
strs_have[val] = True
# Force user strings to be in ROM data.
for s in user_meta.get('add_forced_strings', []):
if not strs_have.get(s['str']):
logger.debug('Add user string: %r' % s['str'])
s['_auto_add_user'] = True
meta['strings'].append(s)
# Convert built-in function properties into lightfuncs where applicable.
def metadata_convert_lightfuncs(meta):
num_converted = 0
num_skipped = 0
for o in meta['objects']:
for p in o['properties']:
v = p['value']
ptype = None
if isinstance(v, dict):
ptype = p['value']['type']
if ptype != 'object':
continue
targ, targ_idx = metadata_lookup_object_and_index(meta, p['value']['id'])
reasons = []
if not targ.get('callable', False):
reasons.append('not-callable')
#if targ.get('constructable', False):
# reasons.append('constructable')
lf_len = 0
for p2 in targ['properties']:
# Don't convert if function has more properties than
# we're willing to sacrifice.
logger.debug(' - Check %r . %s' % (o.get('id', None), p2['key']))
if p2['key'] == 'length' and isinstance(p2['value'], (int, long)):
lf_len = p2['value']
if p2['key'] not in [ 'length', 'name' ]:
reasons.append('nonallowed-property')
if not p.get('auto_lightfunc', True):
logger.debug('Automatic lightfunc conversion rejected for key %s, explicitly requested in metadata' % p['key'])
reasons.append('no-auto-lightfunc')
# lf_len comes from actual property table (after normalization)
if targ.has_key('magic'):
try:
# Magic values which resolve to 'bidx' indices cannot
# be resolved here yet, because the bidx map is not
# yet ready. If so, reject the lightfunc conversion
# for now. In practice this doesn't matter.
lf_magic = resolve_magic(targ.get('magic'), {}) # empty map is a "fake" bidx map
logger.debug('resolved magic ok -> %r' % lf_magic)
except Exception, e:
logger.debug('Failed to resolve magic for %r: %r' % (p['key'], e))
reasons.append('magic-resolve-failed')
lf_magic = 0xffffffff # dummy, will be out of bounds
else:
lf_magic = 0
if targ.get('varargs', True):
lf_nargs = None
lf_varargs = True
else:
lf_nargs = targ['nargs']
lf_varargs = False
if lf_len < 0 or lf_len > 15:
logger.debug('lf_len out of bounds: %r' % lf_len)
reasons.append('len-bounds')
if lf_magic < -0x80 or lf_magic > 0x7f:
logger.debug('lf_magic out of bounds: %r' % lf_magic)
reasons.append('magic-bounds')
if not lf_varargs and (lf_nargs < 0 or lf_nargs > 14):
logger.debug('lf_nargs out of bounds: %r' % lf_nargs)
reasons.append('nargs-bounds')
if len(reasons) > 0:
logger.debug('Don\'t convert to lightfunc: %r %r (%r): %r' % (o.get('id', None), p.get('key', None), p['value']['id'], reasons))
num_skipped += 1
continue
p_id = p['value']['id']
p['value'] = {
'type': 'lightfunc',
'native': targ['native'],
'length': lf_len,
'magic': lf_magic,
'nargs': lf_nargs,
'varargs': lf_varargs
}
logger.debug(' - Convert to lightfunc: %r %r (%r) -> %r' % (o.get('id', None), p.get('key', None), p_id, p['value']))
num_converted += 1
logger.debug('Converted %d built-in function properties to lightfuncs, %d skipped as non-eligible' % (num_converted, num_skipped))
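Lightfunc eligibility is bounded by the packed encoding: length fits 4 bits (0..15), magic a signed 8-bit value, and nargs 0..14 unless varargs. A minimal Python 3 sketch of just those bounds checks:

```python
def lightfunc_bounds_reasons(length, magic, nargs, varargs):
    # Return the same rejection reason strings used above when a
    # candidate does not fit the lightfunc field widths.
    reasons = []
    if not (0 <= length <= 15):
        reasons.append('len-bounds')
    if not (-0x80 <= magic <= 0x7f):
        reasons.append('magic-bounds')
    if not varargs and not (0 <= nargs <= 14):
        reasons.append('nargs-bounds')
    return reasons
```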
# Detect objects not reachable from any object with a 'bidx'. This is usually
# a user error because such objects can't be reached at runtime so they're
# useless in RAM or ROM init data.
def metadata_remove_orphan_objects(meta):
reachable = {}
for o in meta['objects']:
if o.get('bidx_used', False):
reachable[o['id']] = True
while True:
reachable_count = len(reachable.keys())
def _markId(obj_id):
if obj_id is None:
return
reachable[obj_id] = True
for o in meta['objects']:
if not reachable.has_key(o['id']):
continue
_markId(o.get('internal_prototype', None))
for p in o['properties']:
# Shorthand has been normalized so no need
# to support it here.
v = p['value']
ptype = None
if isinstance(v, dict):
ptype = p['value']['type']
if ptype == 'object':
_markId(v['id'])
if ptype == 'accessor':
_markId(v.get('getter_id'))
_markId(v.get('setter_id'))
logger.debug('Mark reachable: reachable count initially %d, now %d' % \
(reachable_count, len(reachable.keys())))
if reachable_count == len(reachable.keys()):
break
num_deleted = 0
deleted = True
while deleted:
deleted = False
for i,o in enumerate(meta['objects']):
if not reachable.has_key(o['id']):
logger.debug('object %s not reachable, dropping' % o['id'])
meta['objects'].pop(i)
deleted = True
num_deleted += 1
break
if num_deleted > 0:
logger.debug('Deleted %d unreachable objects' % num_deleted)
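The orphan detection above is a mark phase iterated to a fixed point. The same idea on a generic id-to-references graph, as a Python 3 sketch:

```python
def reachable_ids(objects, roots):
    # objects: {obj_id: list of referenced obj_ids} (prototype links,
    # object-valued properties, accessor getter/setter ids, ...).
    # roots: ids reachable by construction (objects with a 'bidx').
    reachable = set(roots)
    changed = True
    while changed:  # iterate until no new ids are marked
        changed = False
        for obj_id, refs in objects.items():
            if obj_id not in reachable:
                continue
            for ref in refs:
                if ref not in reachable:
                    reachable.add(ref)
                    changed = True
    return reachable
```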
# Add C define names for builtin strings. These defines are added to all
# strings, even when they won't get a stridx because the define names are
# used to autodetect referenced strings.
def metadata_add_string_define_names(strlist, special_defs):
for s in strlist:
v = s['str']
if special_defs.has_key(v):
s['define'] = 'DUK_STRIDX_' + special_defs[v]
continue
if len(v) >= 1 and v[0] == '\x82':
pfx = 'DUK_STRIDX_INT_'
v = v[1:]
else:
pfx = 'DUK_STRIDX_'
t = re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', v) # add underscores: aB -> a_B
s['define'] = pfx + t.upper()
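The define-name derivation can be exercised on its own; this Python 3 sketch reproduces the same regex transform (the special-case table is omitted):

```python
import re

def stridx_define(name):
    # Insert underscores at lowercase/digit-to-uppercase boundaries and
    # uppercase the result: 'isFinite' -> 'DUK_STRIDX_IS_FINITE'.
    # A leading '\x82' byte marks Duktape-internal strings.
    if name.startswith('\x82'):
        pfx, name = 'DUK_STRIDX_INT_', name[1:]
    else:
        pfx = 'DUK_STRIDX_'
    return pfx + re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', name).upper()
```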
# Add a 'stridx_used' flag for strings which need a stridx.
def metadata_add_string_used_stridx(strlist, used_stridx_meta):
defs_needed = {}
defs_found = {}
for s in used_stridx_meta['used_stridx_defines']:
defs_needed[s] = True
# strings whose define is referenced
for s in strlist:
if s.has_key('define') and defs_needed.has_key(s['define']):
s['stridx_used'] = True
defs_found[s['define']] = True
# duk_lexer.h needs all reserved words
for s in strlist:
if s.get('reserved_word', False):
s['stridx_used'] = True
# ensure all needed defines are provided
defs_found['DUK_STRIDX_START_RESERVED'] = True # special defines provided automatically
defs_found['DUK_STRIDX_START_STRICT_RESERVED'] = True
defs_found['DUK_STRIDX_END_RESERVED'] = True
defs_found['DUK_STRIDX_TO_TOK'] = True
for k in sorted(defs_needed.keys()):
if not defs_found.has_key(k):
raise Exception('source code needs define %s not provided by strings' % repr(k))
# Merge duplicate strings in string metadata.
def metadata_merge_string_entries(strlist):
# The raw string list may contain duplicates so merge entries.
# The list is processed in reverse because the last entry should
# "win" and keep its place (this matters for reserved words).
strs = []
str_map = {} # plain string -> object in strs[]
tmp = copy.deepcopy(strlist)
tmp.reverse()
for s in tmp:
prev = str_map.get(s['str'])
if prev is not None:
for k in s.keys():
if prev.has_key(k) and prev[k] != s[k]:
raise Exception('fail to merge string entry, conflicting keys: %r <-> %r' % (prev, s))
prev[k] = s[k]
else:
strs.append(s)
str_map[s['str']] = s
strs.reverse()
return strs
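The reverse-then-merge trick above (so the last duplicate "wins" and keeps its position) can be sketched in Python 3 with dict-based entries:

```python
def merge_string_entries(entries):
    # Process in reverse so the last duplicate keeps its place; keys
    # from earlier duplicates are folded into it, and conflicting
    # values for the same key are an error.
    merged, by_str = [], {}
    for e in reversed(entries):
        prev = by_str.get(e['str'])
        if prev is not None:
            for k, v in e.items():
                if k in prev and prev[k] != v:
                    raise ValueError('conflicting keys for %r' % e['str'])
                prev[k] = v
        else:
            merged.append(dict(e))
            by_str[e['str']] = merged[-1]
    merged.reverse()
    return merged
```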
# Order builtin strings (strings with a stridx) into an order satisfying
# multiple constraints.
def metadata_order_builtin_strings(input_strlist, keyword_list, strip_unused_stridx=False):
# Strings are ordered in the result as follows:
# 1. Non-reserved words requiring 8-bit indices
# 2. Non-reserved words not requiring 8-bit indices
# 3. Reserved words in non-strict mode only
# 4. Reserved words in strict mode
#
# Reserved words must follow an exact order because they are
# translated to/from token numbers by addition/subtraction.
# Some strings require an 8-bit index and must be in the
# beginning.
tmp_strs = []
for s in copy.deepcopy(input_strlist):
if not s.get('stridx_used', False):
# Drop strings which are not actually needed by src-input/*.(c|h).
# Such strings won't be in heap->strs[] or ROM legacy list.
pass
else:
tmp_strs.append(s)
# The reserved word list must match token order in duk_lexer.h
# exactly, so pluck them out first.
str_index = {}
kw_index = {}
keywords = []
strs = []
for idx,s in enumerate(tmp_strs):
str_index[s['str']] = s
for idx,s in enumerate(keyword_list):
keywords.append(str_index[s])
kw_index[s] = True
for idx,s in enumerate(tmp_strs):
if not kw_index.has_key(s['str']):
strs.append(s)
# Sort the strings by category number; within category keep
# previous order.
for idx,s in enumerate(strs):
s['_idx'] = idx # for ensuring stable sort
def req8Bit(s):
return s.get('class_name', False) # currently just class names
def getCat(s):
req8 = req8Bit(s)
if s.get('reserved_word', False):
# XXX: unused path now, because keywords are "plucked out"
# explicitly.
assert(not req8)
if s.get('future_reserved_word_strict', False):
return 4
else:
return 3
elif req8:
return 1
else:
return 2
def sortCmp(a,b):
return cmp( (getCat(a),a['_idx']), (getCat(b),b['_idx']) )
strs.sort(cmp=sortCmp)
for idx,s in enumerate(strs):
# Remove temporary _idx properties
del s['_idx']
for idx,s in enumerate(strs):
        if req8Bit(s) and idx >= 256:
raise Exception('8-bit string index not satisfied: ' + repr(s))
return strs + keywords
# Dump metadata into a JSON file.
def dump_metadata(meta, fn):
tmp = json.dumps(recursive_bytes_to_strings(meta), indent=4)
with open(fn, 'wb') as f:
f.write(tmp)
logger.debug('Wrote metadata dump to %s' % fn)
# Main metadata loading function: load metadata from multiple sources,
# merge and normalize, prepare various indexes etc.
def load_metadata(opts, rom=False, build_info=None, active_opts=None):
# Load built-in strings and objects.
with open(opts.strings_metadata, 'rb') as f:
strings_metadata = recursive_strings_to_bytes(yaml.load(f))
with open(opts.objects_metadata, 'rb') as f:
objects_metadata = recursive_strings_to_bytes(yaml.load(f))
# Merge strings and objects metadata as simple top level key merge.
meta = {}
for k in objects_metadata.keys():
meta[k] = objects_metadata[k]
for k in strings_metadata.keys():
meta[k] = strings_metadata[k]
# Add user objects.
user_meta = {}
for fn in opts.builtin_files:
logger.debug('Merging user builtin metadata file %s' % fn)
with open(fn, 'rb') as f:
user_meta = recursive_strings_to_bytes(yaml.load(f))
metadata_merge_user_objects(meta, user_meta)
# Remove disabled objects and properties. Also remove objects and
# properties which are disabled in (known) active duk_config.h.
metadata_remove_disabled(meta, active_opts)
# Replace Symbol keys and property values with plain (encoded) strings.
metadata_normalize_symbol_strings(meta)
# Normalize 'nargs' and 'length' defaults.
metadata_normalize_nargs_length(meta)
# Normalize property attributes.
metadata_normalize_property_attributes(meta)
# Normalize property shorthand into full objects.
metadata_normalize_shorthand(meta)
# RAM top-level functions must have a 'name'.
if not rom:
metadata_normalize_ram_function_names(meta)
# Add Duktape.version (and Duktape.env for the ROM case).
for o in meta['objects']:
if o['id'] == 'bi_duktape':
o['properties'].insert(0, { 'key': 'version', 'value': int(build_info['duk_version']), 'attributes': '' })
if rom:
# Use a fixed (quite dummy for now) Duktape.env
# when ROM builtins are in use. In the RAM case
# this is added during global object initialization
# based on config options in use.
o['properties'].insert(0, { 'key': 'env', 'value': 'ROM', 'attributes': '' })
# Normalize property attributes (just in case shorthand handling
# didn't add attributes to all properties).
metadata_normalize_property_attributes(meta)
# For ROM objects, mark all properties non-configurable.
if rom:
metadata_normalize_rom_property_attributes(meta)
# Convert built-in function properties automatically into
# lightfuncs if requested and function is eligible.
if rom and opts.rom_auto_lightfunc:
metadata_convert_lightfuncs(meta)
# Create a list of objects needing a 'bidx'. Ensure 'objects' and
# 'objects_bidx' match in order for shared length.
metadata_prepare_objects_bidx(meta)
# Merge duplicate strings.
meta['strings'] = metadata_merge_string_entries(meta['strings'])
# Prepare an ordered list of strings with 'stridx':
# - Add a 'stridx_used' flag for strings which need an index in current code base
# - Add a C define (DUK_STRIDX_xxx) for such strings
# - Compute a stridx string order satisfying current runtime constraints
#
# The meta['strings_stridx'] result will be in proper order and stripped of
# any strings which don't need a stridx.
metadata_add_string_define_names(meta['strings'], meta['special_define_names'])
with open(opts.used_stridx_metadata, 'rb') as f:
metadata_add_string_used_stridx(meta['strings'], json.loads(f.read()))
meta['strings_stridx'] = metadata_order_builtin_strings(meta['strings'], meta['reserved_word_token_order'])
# For the ROM build: add any strings referenced by built-in objects
# into the string list (not the 'stridx' list though): all strings
# referenced by ROM objects must also be in ROM.
if rom:
for fn in opts.builtin_files:
# XXX: awkward second pass
with open(fn, 'rb') as f:
user_meta = recursive_strings_to_bytes(yaml.load(f))
metadata_normalize_missing_strings(meta, user_meta)
metadata_normalize_missing_strings(meta, {}) # in case no files
# Check for orphan objects and remove them.
metadata_remove_orphan_objects(meta)
# Add final stridx and bidx indices to metadata objects and strings.
idx = 0
for o in meta['objects']:
if o.get('bidx_used', False):
o['bidx'] = idx
idx += 1
idx = 0
for s in meta['strings']:
if s.get('stridx_used', False):
s['stridx'] = idx
idx += 1
# Prepare a filtered RAM top level object list, needed for technical
# reasons during RAM init handling.
if not rom:
metadata_add_ram_filtered_object_list(meta)
# Sanity check: object index must match 'bidx' for all objects
# which have a runtime 'bidx'. This is assumed by e.g. RAM
# thread init.
for i,o in enumerate(meta['objects']):
if i < len(meta['objects_bidx']):
assert(meta['objects_bidx'][i] == meta['objects'][i])
if o.has_key('bidx'): # .get() truthiness would skip a valid bidx of 0
assert(o['bidx'] == i)
# Create a set of helper lists and maps now that the metadata is
# in its final form.
meta['_strings_plain'] = []
meta['_strings_stridx_plain'] = []
meta['_stridx_to_string'] = {}
meta['_idx_to_string'] = {}
meta['_stridx_to_plain'] = {}
meta['_idx_to_plain'] = {}
meta['_string_to_stridx'] = {}
meta['_plain_to_stridx'] = {}
meta['_string_to_idx'] = {}
meta['_plain_to_idx'] = {}
meta['_define_to_stridx'] = {}
meta['_stridx_to_define'] = {}
meta['_is_plain_reserved_word'] = {}
meta['_is_plain_strict_reserved_word'] = {}
meta['_objid_to_object'] = {}
meta['_objid_to_bidx'] = {}
meta['_objid_to_idx'] = {}
meta['_objid_to_ramidx'] = {}
meta['_bidx_to_objid'] = {}
meta['_idx_to_objid'] = {}
meta['_bidx_to_object'] = {}
meta['_idx_to_object'] = {}
for i,s in enumerate(meta['strings']):
assert(s['str'] not in meta['_strings_plain'])
meta['_strings_plain'].append(s['str'])
if s.get('reserved_word', False):
meta['_is_plain_reserved_word'][s['str']] = True # also includes strict reserved words
if s.get('future_reserved_word_strict', False):
meta['_is_plain_strict_reserved_word'][s['str']] = True
meta['_idx_to_string'][i] = s
meta['_idx_to_plain'][i] = s['str']
meta['_plain_to_idx'][s['str']] = i
#meta['_string_to_idx'][s] = i
for i,s in enumerate(meta['strings_stridx']):
assert(s.get('stridx_used', False) == True)
meta['_strings_stridx_plain'].append(s['str'])
meta['_stridx_to_string'][i] = s
meta['_stridx_to_plain'][i] = s['str']
#meta['_string_to_stridx'][s] = i
meta['_plain_to_stridx'][s['str']] = i
meta['_define_to_stridx'][s['define']] = i
meta['_stridx_to_define'][i] = s['define']
for i,o in enumerate(meta['objects']):
meta['_objid_to_object'][o['id']] = o
meta['_objid_to_idx'][o['id']] = i
meta['_idx_to_objid'][i] = o['id']
meta['_idx_to_object'][i] = o
for i,o in enumerate(meta['objects_bidx']):
assert(o.get('bidx_used', False) == True)
meta['_objid_to_bidx'][o['id']] = i
assert(meta['_objid_to_bidx'][o['id']] == meta['_objid_to_idx'][o['id']])
meta['_bidx_to_objid'][i] = o['id']
meta['_bidx_to_object'][i] = o
if meta.has_key('objects_ram_toplevel'):
for i,o in enumerate(meta['objects_ram_toplevel']):
meta['_objid_to_ramidx'][o['id']] = i
# Dump stats.
if rom:
meta_name = 'ROM'
else:
meta_name = 'RAM'
count_add_ref = 0
count_add_user = 0
for s in meta['strings']:
if s.get('_auto_add_ref', False):
count_add_ref += 1
if s.get('_auto_add_user', False):
count_add_user += 1
count_add = count_add_ref + count_add_user
logger.debug(('Prepared %s metadata: %d objects, %d objects with bidx, ' + \
'%d strings, %d strings with stridx, %d strings added ' + \
'(%d property key references, %d user strings)') % \
(meta_name, len(meta['objects']), len(meta['objects_bidx']), \
len(meta['strings']), len(meta['strings_stridx']), \
count_add, count_add_ref, count_add_user))
return meta
#
# Metadata helpers
#
# Magic values for Math built-in.
math_onearg_magic = {
'fabs': 0, # BI_MATH_FABS_IDX
'acos': 1, # BI_MATH_ACOS_IDX
'asin': 2, # BI_MATH_ASIN_IDX
'atan': 3, # BI_MATH_ATAN_IDX
'ceil': 4, # BI_MATH_CEIL_IDX
'cos': 5, # BI_MATH_COS_IDX
'exp': 6, # BI_MATH_EXP_IDX
'floor': 7, # BI_MATH_FLOOR_IDX
'log': 8, # BI_MATH_LOG_IDX
'round': 9, # BI_MATH_ROUND_IDX
'sin': 10, # BI_MATH_SIN_IDX
'sqrt': 11, # BI_MATH_SQRT_IDX
'tan': 12, # BI_MATH_TAN_IDX
'cbrt': 13, # BI_MATH_CBRT_IDX
'log2': 14, # BI_MATH_LOG2_IDX
'log10': 15, # BI_MATH_LOG10_IDX
'trunc': 16, # BI_MATH_TRUNC_IDX
}
math_twoarg_magic = {
'atan2': 0, # BI_MATH_ATAN2_IDX
'pow': 1 # BI_MATH_POW_IDX
}
# Magic values for Array built-in.
array_iter_magic = {
'every': 0, # BI_ARRAY_ITER_EVERY
'some': 1, # BI_ARRAY_ITER_SOME
'forEach': 2, # BI_ARRAY_ITER_FOREACH
'map': 3, # BI_ARRAY_ITER_MAP
'filter': 4 # BI_ARRAY_ITER_FILTER
}
# Magic value for typedarray/node.js buffer read field operations.
def magic_readfield(elem, signed=None, bigendian=None, typedarray=None):
# Must match duk__FLD_xxx in duk_bi_buffer.c
elemnum = {
'8bit': 0,
'16bit': 1,
'32bit': 2,
'float': 3,
'double': 4,
'varint': 5
}[elem]
if signed == True:
signednum = 1
elif signed == False:
signednum = 0
else:
raise Exception('missing "signed"')
if bigendian == True:
bigendiannum = 1
elif bigendian == False:
bigendiannum = 0
else:
raise Exception('missing "bigendian"')
if typedarray == True:
typedarraynum = 1
elif typedarray == False:
typedarraynum = 0
else:
raise Exception('missing "typedarray"')
return elemnum + (signednum << 4) + (bigendiannum << 3) + (typedarraynum << 5)
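# The bit layout can be illustrated with a worked example: element type in
# the low bits, bigendian at bit 3, signed at bit 4, typedarray at bit 5.
# A standalone sketch (not the real function, which validates its inputs):

```python
def readfield_magic_sketch(elem, signed, bigendian, typedarray):
    # Element type numbers mirror the table above.
    elemnum = {'8bit': 0, '16bit': 1, '32bit': 2,
               'float': 3, 'double': 4, 'varint': 5}[elem]
    return (elemnum + (int(signed) << 4)
            + (int(bigendian) << 3) + (int(typedarray) << 5))

# e.g. a signed 16-bit little-endian typed array read:
magic_i16le = readfield_magic_sketch('16bit', True, False, True)
```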
# Magic value for typedarray/node.js buffer write field operations.
def magic_writefield(elem, signed=None, bigendian=None, typedarray=None):
return magic_readfield(elem, signed=signed, bigendian=bigendian, typedarray=typedarray)
# Magic value for typedarray constructors.
def magic_typedarray_constructor(elem, shift):
# Must match duk_hbufobj.h header
elemnum = {
'uint8': 0,
'uint8clamped': 1,
'int8': 2,
'uint16': 3,
'int16': 4,
'uint32': 5,
'int32': 6,
'float32': 7,
'float64': 8
}[elem]
return (elemnum << 2) + shift
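# A small worked example of the constructor magic packing: the low 2 bits
# carry the element size shift (log2 of the byte size), the upper bits the
# element type number. Illustrative sketch only:

```python
def typedarray_magic_sketch(elemnum, shift):
    # low 2 bits: element size shift; upper bits: element type number
    return (elemnum << 2) + shift

magic_u32 = typedarray_magic_sketch(5, 2)  # uint32: elem 5, 4-byte elements
```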
# Resolve a magic value from a YAML metadata element into an integer.
def resolve_magic(elem, objid_to_bidx):
if elem is None:
return 0
if isinstance(elem, (int, long)):
v = int(elem)
if not (v >= -0x8000 and v <= 0x7fff):
raise Exception('invalid plain value for magic: %s' % repr(v))
return v
if not isinstance(elem, dict):
raise Exception('invalid magic: %r' % elem)
assert(elem.has_key('type'))
if elem['type'] == 'bidx':
# Maps to thr->builtins[].
v = elem['id']
return objid_to_bidx[v]
elif elem['type'] == 'plain':
v = elem['value']
if not (v >= -0x8000 and v <= 0x7fff):
raise Exception('invalid plain value for magic: %s' % repr(v))
return v
elif elem['type'] == 'math_onearg':
return math_onearg_magic[elem['funcname']]
elif elem['type'] == 'math_twoarg':
return math_twoarg_magic[elem['funcname']]
elif elem['type'] == 'array_iter':
return array_iter_magic[elem['funcname']]
elif elem['type'] == 'typedarray_constructor':
return magic_typedarray_constructor(elem['elem'], elem['shift'])
elif elem['type'] == 'buffer_readfield':
return magic_readfield(elem['elem'], elem['signed'], elem['bigendian'], elem['typedarray'])
elif elem['type'] == 'buffer_writefield':
return magic_writefield(elem['elem'], elem['signed'], elem['bigendian'], elem['typedarray'])
else:
raise Exception('invalid magic type: %r' % elem)
# Helper to find a property from a property list, remove it from the
# property list, and return the removed property.
def steal_prop(props, key, allow_accessor=True):
for idx,prop in enumerate(props):
if prop['key'] == key:
if allow_accessor or not (isinstance(prop['value'], dict) and prop['value']['type'] == 'accessor'):
return props.pop(idx)
return None
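# The "steal" pattern mutates the property list in place and hands back the
# removed entry, or None when the key is absent. A minimal sketch (without
# the accessor filtering):

```python
def steal_prop_sketch(props, key):
    for i, p in enumerate(props):
        if p['key'] == key:
            return props.pop(i)  # remove from list and return it
    return None

props_demo = [{'key': 'length', 'value': 1}, {'key': 'name', 'value': 'f'}]
stolen = steal_prop_sketch(props_demo, 'length')
```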
#
# RAM initialization data
#
# Init data for built-in strings and objects. The init data for both
# strings and objects is a bit-packed stream tailored to match the decoders
# in duk_heap_alloc.c (strings) and duk_hthread_builtins.c (objects).
# Various bitfield sizes are used to minimize the bitstream size without
# resorting to actual, expensive compression. The goal is to minimize the
# overall size of the init code and the init data.
#
# The built-in data created here is used to set up initial RAM versions
# of the strings and objects. References to these objects are tracked in
# heap->strs[] and thr->builtins[] which allows Duktape internals to refer
# to built-ins e.g. as thr->builtins[DUK_BIDX_STRING_PROTOTYPE].
#
# Not all strings and objects need to be reachable through heap->strs[]
# or thr->builtins[]: the strings/objects that need to be in these arrays
# is determined based on metadata and source code scanning.
#
# XXX: Reserved word stridxs could be made to match token numbers
# directly so that a duk_stridx2token[] would not be needed.
# Default property attributes.
LENGTH_PROPERTY_ATTRIBUTES = 'c'
ACCESSOR_PROPERTY_ATTRIBUTES = 'c'
DEFAULT_DATA_PROPERTY_ATTRIBUTES = 'wc'
# Encoding constants (must match duk_hthread_builtins.c).
PROP_FLAGS_BITS = 3
LENGTH_PROP_BITS = 3
NARGS_BITS = 3
PROP_TYPE_BITS = 3
NARGS_VARARGS_MARKER = 0x07
PROP_TYPE_DOUBLE = 0
PROP_TYPE_STRING = 1
PROP_TYPE_STRIDX = 2
PROP_TYPE_BUILTIN = 3
PROP_TYPE_UNDEFINED = 4
PROP_TYPE_BOOLEAN_TRUE = 5
PROP_TYPE_BOOLEAN_FALSE = 6
PROP_TYPE_ACCESSOR = 7
# must match duk_hobject.h
PROPDESC_FLAG_WRITABLE = (1 << 0)
PROPDESC_FLAG_ENUMERABLE = (1 << 1)
PROPDESC_FLAG_CONFIGURABLE = (1 << 2)
PROPDESC_FLAG_ACCESSOR = (1 << 3) # unused now
# Class names, numeric indices must match duk_hobject.h class numbers.
class_names = [
'Unused',
'Object',
'Array',
'Function',
'Arguments',
'Boolean',
'Date',
'Error',
'JSON',
'Math',
'Number',
'RegExp',
'String',
'global',
'Symbol',
'ObjEnv',
'DecEnv',
'Pointer',
'Thread'
# Remaining class names are not currently needed.
]
class2num = {}
for i,v in enumerate(class_names):
class2num[v] = i
# Map class name to a class number.
def class_to_number(x):
return class2num[x]
# Bitpack a string into a format shared by heap and thread init data.
def bitpack_string(be, s, stats=None):
# Strings are encoded as follows: a string begins in lowercase
# mode and recognizes the following 5-bit symbols:
#
# 0-25 'a' ... 'z' or 'A' ... 'Z' depending on uppercase mode
# 26-31 special controls, see code below
LOOKUP1 = 26
LOOKUP2 = 27
SWITCH1 = 28
SWITCH = 29
UNUSED1 = 30
EIGHTBIT = 31
LOOKUP = '0123456789_ \x82\x80"{' # special characters
assert(len(LOOKUP) == 16)
# Support up to 255 byte strings now (8-bit length); cases above
# ~30 bytes are very rare, so favor short strings in the encoding.
if len(s) <= 30:
be.bits(len(s), 5)
else:
be.bits(31, 5)
be.bits(len(s), 8)
# 5-bit character, mode specific
mode = 'lowercase'
for idx, c in enumerate(s):
# This encoder is not that optimal, but good enough for now.
islower = (ord(c) >= ord('a') and ord(c) <= ord('z'))
isupper = (ord(c) >= ord('A') and ord(c) <= ord('Z'))
islast = (idx == len(s) - 1)
isnextlower = False
isnextupper = False
if not islast:
c2 = s[idx+1]
isnextlower = (ord(c2) >= ord('a') and ord(c2) <= ord('z'))
isnextupper = (ord(c2) >= ord('A') and ord(c2) <= ord('Z'))
# XXX: Add back special handling for hidden or other symbols?
if islower and mode == 'lowercase':
be.bits(ord(c) - ord('a'), 5)
if stats is not None: stats['n_optimal'] += 1
elif isupper and mode == 'uppercase':
be.bits(ord(c) - ord('A'), 5)
if stats is not None: stats['n_optimal'] += 1
elif islower and mode == 'uppercase':
if isnextlower:
be.bits(SWITCH, 5)
be.bits(ord(c) - ord('a'), 5)
mode = 'lowercase'
if stats is not None: stats['n_switch'] += 1
else:
be.bits(SWITCH1, 5)
be.bits(ord(c) - ord('a'), 5)
if stats is not None: stats['n_switch1'] += 1
elif isupper and mode == 'lowercase':
if isnextupper:
be.bits(SWITCH, 5)
be.bits(ord(c) - ord('A'), 5)
mode = 'uppercase'
if stats is not None: stats['n_switch'] += 1
else:
be.bits(SWITCH1, 5)
be.bits(ord(c) - ord('A'), 5)
if stats is not None: stats['n_switch1'] += 1
elif c in LOOKUP:
lookup_idx = LOOKUP.find(c) # avoid clobbering the enumerate() loop variable 'idx'
if lookup_idx >= 8:
be.bits(LOOKUP2, 5)
be.bits(lookup_idx - 8, 3)
if stats is not None: stats['n_lookup2'] += 1
else:
be.bits(LOOKUP1, 5)
be.bits(lookup_idx, 3)
if stats is not None: stats['n_lookup1'] += 1
elif ord(c) >= 0 and ord(c) <= 255:
logger.debug('eightbit encoding for %d (%s)' % (ord(c), c))
be.bits(EIGHTBIT, 5)
be.bits(ord(c), 8)
if stats is not None: stats['n_eightbit'] += 1
else:
raise Exception('internal error in bitpacking a string')
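# The mode-switching logic can be traced with a letters-only sketch that
# emits (value, bitwidth) pairs: SWITCH permanently changes case mode when
# the next character agrees, SWITCH1 handles a lone opposite-case character.
# This ignores the LOOKUP tables, the EIGHTBIT escape, and strings over 30
# bytes; it is illustrative only:

```python
SWITCH1, SWITCH = 28, 29  # match the 5-bit control symbols above

def pack_symbols_sketch(s):
    out = [(len(s), 5)]  # length prefix (assumes len(s) <= 30)
    mode = 'lower'
    for i, c in enumerate(s):
        nxt = s[i + 1] if i + 1 < len(s) else ''
        if (c.islower() and mode == 'lower') or (c.isupper() and mode == 'upper'):
            pass  # already in the right mode, 5 bits per character
        elif (c.islower() and nxt.islower()) or (c.isupper() and nxt.isupper()):
            out.append((SWITCH, 5))  # permanent switch: next char agrees
            mode = 'lower' if c.islower() else 'upper'
        else:
            out.append((SWITCH1, 5))  # one-shot switch for a lone char
        base = ord('a') if c.islower() else ord('A')
        out.append((ord(c) - base, 5))
    return out

error_syms = pack_symbols_sketch('Error')  # SWITCH1 + 'E', then 'rror'
error_bits = sum(width for _, width in error_syms)
```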
# Generate bit-packed RAM string init data.
def gen_ramstr_initdata_bitpacked(meta):
be = dukutil.BitEncoder()
maxlen = 0
stats = {
'n_optimal': 0,
'n_lookup1': 0,
'n_lookup2': 0,
'n_switch1': 0,
'n_switch': 0,
'n_eightbit': 0
}
for s_obj in meta['strings_stridx']:
s = s_obj['str']
if len(s) > maxlen:
maxlen = len(s)
bitpack_string(be, s, stats)
# end marker not necessary, C code knows length from define
if be._varuint_count > 0:
logger.debug('Varuint distribution:')
logger.debug(json.dumps(be._varuint_dist[0:1024]))
logger.debug('Varuint encoding categories: %r' % be._varuint_cats)
logger.debug('Varuint efficiency: %f bits/value' % (float(be._varuint_bits) / float(be._varuint_count)))
res = be.getByteString()
logger.debug(('%d ram strings, %d bytes of string init data, %d maximum string length, ' + \
'encoding: optimal=%d,lookup1=%d,lookup2=%d,switch1=%d,switch=%d,eightbit=%d') % \
(len(meta['strings_stridx']), len(res), maxlen, \
stats['n_optimal'],
stats['n_lookup1'], stats['n_lookup2'],
stats['n_switch1'], stats['n_switch'],
stats['n_eightbit']))
return res, maxlen
# Functions to emit string-related source/header parts.
def emit_ramstr_source_strinit_data(genc, strdata):
genc.emitArray(strdata, 'duk_strings_data', visibility='DUK_INTERNAL', typename='duk_uint8_t', intvalues=True, const=True, size=len(strdata))
def emit_ramstr_header_strinit_defines(genc, meta, strdata, strmaxlen):
genc.emitLine('#if !defined(DUK_SINGLE_FILE)')
genc.emitLine('DUK_INTERNAL_DECL const duk_uint8_t duk_strings_data[%d];' % len(strdata))
genc.emitLine('#endif /* !DUK_SINGLE_FILE */')
genc.emitDefine('DUK_STRDATA_MAX_STRLEN', strmaxlen)
genc.emitDefine('DUK_STRDATA_DATA_LENGTH', len(strdata))
# This is used for both RAM and ROM strings.
def emit_header_stridx_defines(genc, meta):
strlist = meta['strings_stridx']
for idx,s in enumerate(strlist):
genc.emitDefine(s['define'], idx, repr(s['str']))
defname = s['define'].replace('_STRIDX','_HEAP_STRING')
genc.emitDefine(defname + '(heap)', 'DUK_HEAP_GET_STRING((heap),%s)' % s['define'])
defname = s['define'].replace('_STRIDX', '_HTHREAD_STRING')
genc.emitDefine(defname + '(thr)', 'DUK_HTHREAD_GET_STRING((thr),%s)' % s['define'])
idx_start_reserved = None
idx_start_strict_reserved = None
for idx,s in enumerate(strlist):
if idx_start_reserved is None and s.get('reserved_word', False):
idx_start_reserved = idx
if idx_start_strict_reserved is None and s.get('future_reserved_word_strict', False):
idx_start_strict_reserved = idx
assert(idx_start_reserved is not None)
assert(idx_start_strict_reserved is not None)
genc.emitLine('')
genc.emitDefine('DUK_HEAP_NUM_STRINGS', len(strlist))
genc.emitDefine('DUK_STRIDX_START_RESERVED', idx_start_reserved)
genc.emitDefine('DUK_STRIDX_START_STRICT_RESERVED', idx_start_strict_reserved)
genc.emitDefine('DUK_STRIDX_END_RESERVED', len(strlist), comment='exclusive endpoint')
genc.emitLine('')
genc.emitLine('/* To convert a heap stridx to a token number, subtract')
genc.emitLine(' * DUK_STRIDX_START_RESERVED and add DUK_TOK_START_RESERVED.')
genc.emitLine(' */')
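# The stridx-to-token conversion described in the emitted comment is plain
# index arithmetic, which works because reserved words are ordered to match
# token numbers exactly. A sketch with purely hypothetical define values:

```python
# Hypothetical values for illustration only; the real values come from
# the generated defines and duk_lexer.h.
DUK_STRIDX_START_RESERVED_DEMO = 100
DUK_TOK_START_RESERVED_DEMO = 64

def stridx_to_token_sketch(stridx):
    # Valid only for stridx values in the reserved-word range.
    return stridx - DUK_STRIDX_START_RESERVED_DEMO + DUK_TOK_START_RESERVED_DEMO

tok = stridx_to_token_sketch(103)
```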
# Encode property flags for RAM initializers.
def encode_property_flags(flags):
# Note: must match duk_hobject.h
res = 0
nflags = 0
if 'w' in flags:
nflags += 1
res = res | PROPDESC_FLAG_WRITABLE
if 'e' in flags:
nflags += 1
res = res | PROPDESC_FLAG_ENUMERABLE
if 'c' in flags:
nflags += 1
res = res | PROPDESC_FLAG_CONFIGURABLE
if 'a' in flags:
nflags += 1
res = res | PROPDESC_FLAG_ACCESSOR
if nflags != len(flags):
raise Exception('unsupported flags: %s' % repr(flags))
return res
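# The flag encoding is a simple character-to-bit mapping; for example the
# default data property attributes 'wc' encode to WRITABLE|CONFIGURABLE.
# A compact sketch (raising on unsupported characters like the original):

```python
FLAG_W, FLAG_E, FLAG_C = 1 << 0, 1 << 1, 1 << 2  # match PROPDESC_FLAG_xxx

def encode_flags_sketch(flags):
    bits = {'w': FLAG_W, 'e': FLAG_E, 'c': FLAG_C}
    res = 0
    for ch in flags:
        res |= bits[ch]  # KeyError on an unsupported flag character
    return res

default_attrs = encode_flags_sketch('wc')  # DEFAULT_DATA_PROPERTY_ATTRIBUTES
```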
# Generate RAM object initdata for an object (but not its properties).
def gen_ramobj_initdata_for_object(meta, be, bi, string_to_stridx, natfunc_name_to_natidx, objid_to_bidx):
def _stridx(strval):
stridx = string_to_stridx[strval]
be.varuint(stridx)
def _stridx_or_string(strval):
stridx = string_to_stridx.get(strval)
if stridx is not None:
be.varuint(stridx + 1)
else:
be.varuint(0)
bitpack_string(be, strval)
def _natidx(native_name):
natidx = natfunc_name_to_natidx[native_name]
be.varuint(natidx)
class_num = class_to_number(bi['class'])
be.varuint(class_num)
props = [x for x in bi['properties']] # clone
prop_proto = steal_prop(props, 'prototype')
prop_constr = steal_prop(props, 'constructor')
prop_name = steal_prop(props, 'name', allow_accessor=False)
prop_length = steal_prop(props, 'length', allow_accessor=False)
length = -1 # default value -1 signifies varargs
if prop_length is not None:
assert(isinstance(prop_length['value'], int))
length = prop_length['value']
be.bits(1, 1) # flag: have length
be.bits(length, LENGTH_PROP_BITS)
else:
be.bits(0, 1) # flag: no length
# The attributes for 'length' are standard ("none") except for
# Array.prototype.length which must be writable (this is handled
# separately in duk_hthread_builtins.c).
len_attrs = LENGTH_PROPERTY_ATTRIBUTES
if prop_length is not None:
len_attrs = prop_length['attributes']
if len_attrs != LENGTH_PROPERTY_ATTRIBUTES:
# Attributes are assumed to be the same, except for Array.prototype.
if bi['class'] != 'Array': # Array.prototype is the only one with this class
raise Exception('non-default length attribute for unexpected object')
# For 'Function' classed objects, emit the native function stuff.
# Unfortunately this is more or less a copy of what we do for
# function properties now. This should be addressed if a rework
# on the init format is done.
if bi['class'] == 'Function':
_natidx(bi['native'])
if bi.get('varargs', False):
be.bits(1, 1) # flag: non-default nargs
be.bits(NARGS_VARARGS_MARKER, NARGS_BITS)
elif bi.has_key('nargs') and bi['nargs'] != length:
be.bits(1, 1) # flag: non-default nargs
be.bits(bi['nargs'], NARGS_BITS)
else:
assert(length is not None)
be.bits(0, 1) # flag: default nargs OK
# All Function-classed global level objects are callable
# (have [[Call]]) but not all are constructable (have
# [[Construct]]). Flag that.
assert(bi.has_key('callable'))
assert(bi['callable'] == True)
assert(prop_name is not None)
assert(isinstance(prop_name['value'], str))
_stridx_or_string(prop_name['value'])
if bi.get('constructable', False):
be.bits(1, 1) # flag: constructable
else:
be.bits(0, 1) # flag: not constructable
# DUK_HOBJECT_FLAG_SPECIAL_CALL is handled at runtime without init data.
# Convert signed magic to 16-bit unsigned for encoding
magic = resolve_magic(bi.get('magic'), objid_to_bidx) & 0xffff
assert(magic >= 0)
assert(magic <= 0xffff)
be.varuint(magic)
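# The "1-bit flag + optional field" pattern used throughout the object init
# stream can be shown with a minimal MSB-first bit packer. This is a sketch
# of the role dukutil.BitEncoder plays (the real encoder also supports
# varuints and may differ in bit order):

```python
class BitEncoderSketch:
    def __init__(self):
        self._bits = []

    def bits(self, value, width):
        # Append 'value' as 'width' bits, most significant bit first.
        for shift in range(width - 1, -1, -1):
            self._bits.append((value >> shift) & 1)

    def to_bytes(self):
        # Zero-pad to a byte boundary and pack.
        padded = self._bits + [0] * (-len(self._bits) % 8)
        return bytes(int(''.join(map(str, padded[i:i + 8])), 2)
                     for i in range(0, len(padded), 8))

be_demo = BitEncoderSketch()
be_demo.bits(1, 1)  # flag: have length
be_demo.bits(5, 3)  # 'length' value in LENGTH_PROP_BITS (3) bits
packed = be_demo.to_bytes()
```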
# Generate RAM object initdata for an object's properties.
def gen_ramobj_initdata_for_props(meta, be, bi, string_to_stridx, natfunc_name_to_natidx, objid_to_bidx, double_byte_order):
count_normal_props = 0
count_function_props = 0
def _bidx(bi_id):
be.varuint(objid_to_bidx[bi_id])
def _bidx_or_none(bi_id):
if bi_id is None:
be.varuint(0)
else:
be.varuint(objid_to_bidx[bi_id] + 1)
def _stridx(strval):
stridx = string_to_stridx[strval]
be.varuint(stridx)
def _stridx_or_string(strval):
stridx = string_to_stridx.get(strval)
if stridx is not None:
be.varuint(stridx + 1)
else:
be.varuint(0)
bitpack_string(be, strval)
def _natidx(native_name):
if native_name is None:
natidx = 0 # 0 is NULL in the native functions table, denotes missing function
else:
natidx = natfunc_name_to_natidx[native_name]
be.varuint(natidx)
props = [x for x in bi['properties']] # clone
# internal prototype: not an actual property so not in property list
if bi.has_key('internal_prototype'):
_bidx_or_none(bi['internal_prototype'])
else:
_bidx_or_none(None)
# external prototype: encoded specially, steal from property list
prop_proto = steal_prop(props, 'prototype')
if prop_proto is not None:
assert(prop_proto['value']['type'] == 'object')
assert(prop_proto['attributes'] == '')
_bidx_or_none(prop_proto['value']['id'])
else:
_bidx_or_none(None)
# external constructor: encoded specially, steal from property list
prop_constr = steal_prop(props, 'constructor')
if prop_constr is not None:
assert(prop_constr['value']['type'] == 'object')
assert(prop_constr['attributes'] == 'wc')
_bidx_or_none(prop_constr['value']['id'])
else:
_bidx_or_none(None)
# name: encoded specially for function objects, so steal and ignore here
if bi['class'] == 'Function':
prop_name = steal_prop(props, 'name', allow_accessor=False)
assert(prop_name is not None)
assert(isinstance(prop_name['value'], str))
assert(prop_name['attributes'] == 'c')
# length: encoded specially, so steal and ignore
prop_proto = steal_prop(props, 'length', allow_accessor=False)
# Date.prototype.toGMTString needs special handling and is handled
# directly in duk_hthread_builtins.c; so steal and ignore here.
if bi['id'] == 'bi_date_prototype':
prop_togmtstring = steal_prop(props, 'toGMTString')
assert(prop_togmtstring is not None)
logger.debug('Stole Date.prototype.toGMTString')
# Split properties into non-builtin functions and other properties.
# This split is a bit arbitrary, but is used to reduce flag bits in
# the bit stream.
values = []
functions = []
for prop in props:
if isinstance(prop['value'], dict) and \
prop['value']['type'] == 'object' and \
metadata_lookup_object(meta, prop['value']['id']).has_key('native') and \
not metadata_lookup_object(meta, prop['value']['id']).has_key('bidx'):
functions.append(prop)
else:
values.append(prop)
be.varuint(len(values))
for valspec in values:
count_normal_props += 1
val = valspec['value']
_stridx_or_string(valspec['key'])
# Attribute check doesn't check for accessor flag; that is now
# automatically set by C code when value is an accessor type.
# Accessors must not have 'writable', so they'll always have
# non-default attributes (less footprint than adding a different
# default).
default_attrs = DEFAULT_DATA_PROPERTY_ATTRIBUTES
attrs = valspec.get('attributes', default_attrs)
attrs = attrs.replace('a', '') # ram bitstream doesn't encode 'accessor' attribute
if attrs != default_attrs:
logger.debug('non-default attributes: %s -> %r (default %r)' % (valspec['key'], attrs, default_attrs))
be.bits(1, 1) # flag: have custom attributes
be.bits(encode_property_flags(attrs), PROP_FLAGS_BITS)
else:
be.bits(0, 1) # flag: no custom attributes
if val is None:
logger.warning('RAM init data format doesn\'t support "null" now, value replaced with "undefined": %r' % valspec)
#raise Exception('RAM init format doesn\'t support a "null" value now')
be.bits(PROP_TYPE_UNDEFINED, PROP_TYPE_BITS)
elif isinstance(val, bool):
if val == True:
be.bits(PROP_TYPE_BOOLEAN_TRUE, PROP_TYPE_BITS)
else:
be.bits(PROP_TYPE_BOOLEAN_FALSE, PROP_TYPE_BITS)
elif isinstance(val, (float, int)) or (isinstance(val, dict) and val['type'] == 'double'):
# Avoid converting a manually specified NaN temporarily into
# a float to avoid risk of e.g. NaN being replaced by another.
if isinstance(val, dict):
val = val['bytes'].decode('hex')
assert(len(val) == 8)
else:
val = struct.pack('>d', float(val))
be.bits(PROP_TYPE_DOUBLE, PROP_TYPE_BITS)
# encoding of double must match target architecture byte order
indexlist = {
'big': [ 0, 1, 2, 3, 4, 5, 6, 7 ],
'little': [ 7, 6, 5, 4, 3, 2, 1, 0 ],
'mixed': [ 3, 2, 1, 0, 7, 6, 5, 4 ] # some arm platforms
}[double_byte_order]
data = ''.join([ val[indexlist[idx]] for idx in xrange(8) ])
logger.debug('DOUBLE: %s -> %s' % (val.encode('hex'), data.encode('hex')))
if len(data) != 8:
raise Exception('internal error')
be.string(data)
elif isinstance(val, str) or isinstance(val, unicode):
if isinstance(val, unicode):
# Note: non-ASCII characters will not currently work,
# because bits/char is too low.
val = val.encode('utf-8')
if string_to_stridx.has_key(val):
# String value is in built-in string table -> encode
# using a string index. This saves some space,
# especially for the 'name' property of errors
# ('EvalError' etc).
be.bits(PROP_TYPE_STRIDX, PROP_TYPE_BITS)
_stridx(val)
else:
# Not in string table, bitpack string value as is.
be.bits(PROP_TYPE_STRING, PROP_TYPE_BITS)
bitpack_string(be, val)
elif isinstance(val, dict):
if val['type'] == 'object':
be.bits(PROP_TYPE_BUILTIN, PROP_TYPE_BITS)
_bidx(val['id'])
elif val['type'] == 'undefined':
be.bits(PROP_TYPE_UNDEFINED, PROP_TYPE_BITS)
elif val['type'] == 'accessor':
be.bits(PROP_TYPE_ACCESSOR, PROP_TYPE_BITS)
getter_natfun = None
setter_natfun = None
getter_magic = 0
setter_magic = 0
if val.has_key('getter_id'):
getter_fn = metadata_lookup_object(meta, val['getter_id'])
getter_natfun = getter_fn['native']
assert(getter_fn['nargs'] == 0)
getter_magic = getter_fn['magic']
if val.has_key('setter_id'):
setter_fn = metadata_lookup_object(meta, val['setter_id'])
setter_natfun = setter_fn['native']
assert(setter_fn['nargs'] == 1)
setter_magic = setter_fn['magic']
if getter_natfun is not None and setter_natfun is not None:
assert(getter_magic == setter_magic)
_natidx(getter_natfun)
_natidx(setter_natfun)
be.varuint(getter_magic)
elif val['type'] == 'lightfunc':
logger.warning('RAM init data format doesn\'t support "lightfunc" now, value replaced with "undefined": %r' % valspec)
be.bits(PROP_TYPE_UNDEFINED, PROP_TYPE_BITS)
else:
raise Exception('unsupported value: %s' % repr(val))
else:
raise Exception('unsupported value: %s' % repr(val))
be.varuint(len(functions))
for funprop in functions:
count_function_props += 1
funobj = metadata_lookup_object(meta, funprop['value']['id'])
prop_len = metadata_lookup_property(funobj, 'length')
assert(prop_len is not None)
assert(isinstance(prop_len['value'], (int)))
length = prop_len['value']
# XXX: This doesn't currently handle a function whose .name differs
# from its key. At least warn about it here. Or maybe generate the
# correct name at run time (it's systematic at the moment; e.g.
# @@toPrimitive has the name "[Symbol.toPrimitive]" which can be
# computed from the symbol internal representation).
_stridx_or_string(funprop['key'])
_natidx(funobj['native'])
be.bits(length, LENGTH_PROP_BITS)
if funobj.get('varargs', False):
be.bits(1, 1) # flag: non-default nargs
be.bits(NARGS_VARARGS_MARKER, NARGS_BITS)
elif funobj.has_key('nargs') and funobj['nargs'] != length:
be.bits(1, 1) # flag: non-default nargs
be.bits(funobj['nargs'], NARGS_BITS)
else:
be.bits(0, 1) # flag: default nargs OK
# XXX: make this check conditional to minimize bit count
# (there are quite a lot of function properties)
# Convert signed magic to 16-bit unsigned for encoding
magic = resolve_magic(funobj.get('magic'), objid_to_bidx) & 0xffff
assert(magic >= 0)
assert(magic <= 0xffff)
be.varuint(magic)
return count_normal_props, count_function_props
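# The double byte-order swizzle above reorders the 8 big-endian bytes of an
# IEEE double for the target architecture. A Python 3 sketch of the same
# index-list approach, verifiable against struct's byte-order prefixes:

```python
import struct

def reorder_double_sketch(be_bytes, byte_order):
    indexlist = {
        'big':    [0, 1, 2, 3, 4, 5, 6, 7],
        'little': [7, 6, 5, 4, 3, 2, 1, 0],
        'mixed':  [3, 2, 1, 0, 7, 6, 5, 4],  # some ARM platforms
    }[byte_order]
    return bytes(be_bytes[i] for i in indexlist)

one_be = struct.pack('>d', 1.0)  # 3f f0 00 00 00 00 00 00
one_le = reorder_double_sketch(one_be, 'little')
```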
# Get helper maps for RAM objects.
def get_ramobj_native_func_maps(meta):
# Native function list and index
native_funcs_found = {}
native_funcs = []
natfunc_name_to_natidx = {}
native_funcs.append(None) # natidx 0 is reserved for NULL
for o in meta['objects']:
if o.has_key('native'):
native_funcs_found[o['native']] = True
for v in o['properties']:
val = v['value']
if isinstance(val, dict):
if val['type'] == 'accessor':
if val.has_key('getter_id'):
getter = metadata_lookup_object(meta, val['getter_id'])
native_funcs_found[getter['native']] = True
if val.has_key('setter_id'):
setter = metadata_lookup_object(meta, val['setter_id'])
native_funcs_found[setter['native']] = True
if val['type'] == 'object':
target = metadata_lookup_object(meta, val['id'])
if target.has_key('native'):
native_funcs_found[target['native']] = True
if val['type'] == 'lightfunc':
# No lightfunc support for RAM initializer now.
pass
for idx,k in enumerate(sorted(native_funcs_found.keys())):
natfunc_name_to_natidx[k] = len(native_funcs)
native_funcs.append(k) # native func names
return native_funcs, natfunc_name_to_natidx
# Generate bit-packed RAM object init data.
def gen_ramobj_initdata_bitpacked(meta, native_funcs, natfunc_name_to_natidx, double_byte_order):
# RAM initialization is based on a specially filtered list of top
# level objects which includes objects with 'bidx' and objects
# which aren't handled as inline values in the init bitstream.
objlist = meta['objects_ram_toplevel']
objid_to_idx = meta['_objid_to_ramidx']
objid_to_object = meta['_objid_to_object'] # This index is valid even for filtered object list
string_index = meta['_plain_to_stridx']
# Generate bitstream
be = dukutil.BitEncoder()
count_builtins = 0
count_normal_props = 0
count_function_props = 0
for o in objlist:
count_builtins += 1
gen_ramobj_initdata_for_object(meta, be, o, string_index, natfunc_name_to_natidx, objid_to_idx)
for o in objlist:
count_obj_normal, count_obj_func = gen_ramobj_initdata_for_props(meta, be, o, string_index, natfunc_name_to_natidx, objid_to_idx, double_byte_order)
count_normal_props += count_obj_normal
count_function_props += count_obj_func
if be._varuint_count > 0:
logger.debug('varuint distribution:')
logger.debug(json.dumps(be._varuint_dist[0:1024]))
logger.debug('Varuint encoding categories: %r' % be._varuint_cats)
logger.debug('Varuint efficiency: %f bits/value' % (float(be._varuint_bits) / float(be._varuint_count)))
ram_init_data = be.getByteString()
#logger.debug(repr(ram_init_data))
#logger.debug(len(ram_init_data))
logger.debug('%d ram builtins, %d normal properties, %d function properties, %d bytes of object init data' % \
(count_builtins, count_normal_props, count_function_props, len(ram_init_data)))
return ram_init_data
# Functions to emit object-related source/header parts.
def emit_ramobj_source_nativefunc_array(genc, native_func_list):
genc.emitLine('/* native functions: %d */' % len(native_func_list))
genc.emitLine('DUK_INTERNAL const duk_c_function duk_bi_native_functions[%d] = {' % len(native_func_list))
for i in native_func_list:
# The function pointer cast here makes BCC complain about
# "initializer too complicated", so omit the cast.
#genc.emitLine('\t(duk_c_function) %s,' % i)
if i is None:
genc.emitLine('\tNULL,')
else:
genc.emitLine('\t%s,' % i)
genc.emitLine('};')
def emit_ramobj_source_objinit_data(genc, init_data):
genc.emitArray(init_data, 'duk_builtins_data', visibility='DUK_INTERNAL', typename='duk_uint8_t', intvalues=True, const=True, size=len(init_data))
def emit_ramobj_header_nativefunc_array(genc, native_func_list):
genc.emitLine('#if !defined(DUK_SINGLE_FILE)')
genc.emitLine('DUK_INTERNAL_DECL const duk_c_function duk_bi_native_functions[%d];' % len(native_func_list))
genc.emitLine('#endif /* !DUK_SINGLE_FILE */')
def emit_ramobj_header_objects(genc, meta):
objlist = meta['objects_bidx']
for idx,o in enumerate(objlist):
defname = 'DUK_BIDX_' + '_'.join(o['id'].upper().split('_')[1:]) # bi_foo_bar -> FOO_BAR
genc.emitDefine(defname, idx)
genc.emitDefine('DUK_NUM_BUILTINS', len(objlist))
genc.emitDefine('DUK_NUM_BIDX_BUILTINS', len(objlist)) # Objects with 'bidx'
genc.emitDefine('DUK_NUM_ALL_BUILTINS', len(meta['objects_ram_toplevel'])) # Objects with 'bidx' + temps needed in init
def emit_ramobj_header_initdata(genc, init_data):
genc.emitLine('#if !defined(DUK_SINGLE_FILE)')
genc.emitLine('DUK_INTERNAL_DECL const duk_uint8_t duk_builtins_data[%d];' % len(init_data))
genc.emitLine('#endif /* !DUK_SINGLE_FILE */')
genc.emitDefine('DUK_BUILTINS_DATA_LENGTH', len(init_data))
#
# ROM init data
#
# Compile-time initializers for ROM strings and ROM objects. This involves
# a lot of small details:
#
# - Several variants are needed for different options: unpacked vs.
# packed duk_tval, endianness, string hash in use, etc.
#
# - Static initializers must represent objects of different size. For
# example, separate structs are needed for property tables of different
# size or value typing.
#
# - Union initializers cannot be used portably because they're only
# available in C99 and above.
#
# - Initializers must use 'const' correctly to ensure that the entire
# initialization data will go into ROM (read-only data section).
# Const pointers etc. will need to be cast into non-const pointers at
# some point to properly mix with non-const RAM pointers, so a portable
# const-losing cast is needed.
#
# - C++ doesn't allow forward declaration of "static const" structures
# which is problematic because there are cyclical const structures.
#
# Get string hash initializers; need to compute possible string hash variants
# which will match runtime values.
def rom_get_strhash16_macro(val):
hash16le = dukutil.duk_heap_hashstring_dense(val, DUK__FIXED_HASH_SEED, big_endian=False, strhash16=True)
hash16be = dukutil.duk_heap_hashstring_dense(val, DUK__FIXED_HASH_SEED, big_endian=True, strhash16=True)
hash16sparse = dukutil.duk_heap_hashstring_sparse(val, DUK__FIXED_HASH_SEED, strhash16=True)
return 'DUK__STRHASH16(%dU,%dU,%dU)' % (hash16le, hash16be, hash16sparse)
def rom_get_strhash32_macro(val):
hash32le = dukutil.duk_heap_hashstring_dense(val, DUK__FIXED_HASH_SEED, big_endian=False, strhash16=False)
hash32be = dukutil.duk_heap_hashstring_dense(val, DUK__FIXED_HASH_SEED, big_endian=True, strhash16=False)
hash32sparse = dukutil.duk_heap_hashstring_sparse(val, DUK__FIXED_HASH_SEED, strhash16=False)
return 'DUK__STRHASH32(%dUL,%dUL,%dUL)' % (hash32le, hash32be, hash32sparse)
# Get string character .length; must match runtime .length computation.
def rom_charlen(x):
return dukutil.duk_unicode_unvalidated_utf8_length(x)
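# An assumption-based model of what duk_unicode_unvalidated_utf8_length()
# computes (sketch only, not the actual dukutil implementation): count the
# bytes that are not UTF-8 continuation bytes (0x80-0xBF), so byte length
# equals character length exactly for pure ASCII strings.

```python
def unvalidated_utf8_length(data):
    # A byte starts a character unless its top two bits are '10'.
    return sum(1 for b in data if (b & 0xc0) != 0x80)
```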
# Get an initializer type and initializer literal for a specified value
# (expressed in YAML metadata format). The types and initializers depend
# on declarations emitted before the initializers, and in several cases
# use a macro to hide the selection between several initializer variants.
def rom_get_value_initializer(meta, val, bi_str_map, bi_obj_map):
def double_bytes_initializer(val):
# Portable and exact float initializer.
assert(isinstance(val, str) and len(val) == 16) # hex encoded bytes
val = val.decode('hex')
tmp = []
for i in xrange(8):
t = ord(val[i])
if t >= 128:
tmp.append('%dU' % t)
else:
tmp.append('%d' % t)
return 'DUK__DBLBYTES(' + ','.join(tmp) + ')'
def tval_number_initializer(val):
return 'DUK__TVAL_NUMBER(%s)' % double_bytes_initializer(val)
v = val['value']
if v is None:
init_type = 'duk_rom_tval_null'
init_lit = 'DUK__TVAL_NULL()'
elif isinstance(v, bool):
init_type = 'duk_rom_tval_boolean'
bval = 0
if v:
bval = 1
init_lit = 'DUK__TVAL_BOOLEAN(%d)' % bval
elif isinstance(v, (int, float)):
fval = struct.pack('>d', float(v)).encode('hex')
init_type = 'duk_rom_tval_number'
init_lit = tval_number_initializer(fval)
elif isinstance(v, (str, unicode)):
init_type = 'duk_rom_tval_string'
init_lit = 'DUK__TVAL_STRING(&%s)' % bi_str_map[v]
elif isinstance(v, dict):
if v['type'] == 'double':
init_type = 'duk_rom_tval_number'
init_lit = tval_number_initializer(v['bytes'])
elif v['type'] == 'undefined':
init_type = 'duk_rom_tval_undefined'
init_lit = 'DUK__TVAL_UNDEFINED()'
elif v['type'] == 'null':
init_type = 'duk_rom_tval_null'
init_lit = 'DUK__TVAL_NULL()'
elif v['type'] == 'object':
init_type = 'duk_rom_tval_object'
init_lit = 'DUK__TVAL_OBJECT(&%s)' % bi_obj_map[v['id']]
elif v['type'] == 'accessor':
getter_ref = 'NULL'
setter_ref = 'NULL'
if v.has_key('getter_id'):
getter_object = metadata_lookup_object(meta, v['getter_id'])
getter_ref = '&%s' % bi_obj_map[getter_object['id']]
if v.has_key('setter_id'):
setter_object = metadata_lookup_object(meta, v['setter_id'])
setter_ref = '&%s' % bi_obj_map[setter_object['id']]
init_type = 'duk_rom_tval_accessor'
init_lit = 'DUK__TVAL_ACCESSOR(%s, %s)' % (getter_ref, setter_ref)
elif v['type'] == 'lightfunc':
# Match DUK_LFUNC_FLAGS_PACK() in duk_tval.h.
if v.has_key('length'):
assert(v['length'] >= 0 and v['length'] <= 15)
lf_length = v['length']
else:
lf_length = 0
if v.get('varargs', True):
lf_nargs = 15 # varargs marker
else:
assert(v['nargs'] >= 0 and v['nargs'] <= 14)
lf_nargs = v['nargs']
if v.has_key('magic'):
assert(v['magic'] >= -0x80 and v['magic'] <= 0x7f)
lf_magic = v['magic'] & 0xff
else:
lf_magic = 0
lf_flags = (lf_magic << 8) + (lf_length << 4) + lf_nargs
init_type = 'duk_rom_tval_lightfunc'
init_lit = 'DUK__TVAL_LIGHTFUNC(%s, %dL)' % (v['native'], lf_flags)
else:
raise Exception('unhandled value: %r' % val)
else:
raise Exception('internal error: %r' % val)
return init_type, init_lit
# Helpers to get either initializer type or value only (not both).
def rom_get_value_initializer_type(meta, val, bi_str_map, bi_obj_map):
init_type, init_lit = rom_get_value_initializer(meta, val, bi_str_map, bi_obj_map)
return init_type
def rom_get_value_initializer_literal(meta, val, bi_str_map, bi_obj_map):
init_type, init_lit = rom_get_value_initializer(meta, val, bi_str_map, bi_obj_map)
return init_lit
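# The lightfunc flags packing used in the initializer above (matching the
# DUK_LFUNC_FLAGS_PACK() layout described in the comments: magic in bits
# 8-15, length in bits 4-7, nargs in bits 0-3, nargs 15 meaning varargs)
# can be sketched standalone:

```python
def pack_lightfunc_flags(magic, length, nargs):
    assert -0x80 <= magic <= 0x7f
    assert 0 <= length <= 15
    assert 0 <= nargs <= 15          # 15 is the varargs marker
    return ((magic & 0xff) << 8) + (length << 4) + nargs
```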
# Emit ROM strings source: structs/typedefs and their initializers.
# Separate initialization structs are needed for strings of different
# length.
def rom_emit_strings_source(genc, meta):
# Write built-in strings as code section initializers.
strs = meta['_strings_plain'] # all strings, plain versions
reserved_words = meta['_is_plain_reserved_word']
strict_reserved_words = meta['_is_plain_strict_reserved_word']
strs_needing_stridx = meta['strings_stridx']
# Sort used lengths and declare per-length initializers.
lens = []
for v in strs:
strlen = len(v)
if strlen not in lens:
lens.append(strlen)
lens.sort()
for strlen in lens:
genc.emitLine('typedef struct duk_romstr_%d duk_romstr_%d; ' % (strlen, strlen) +
'struct duk_romstr_%d { duk_hstring hdr; duk_uint8_t data[%d]; };' % (strlen, strlen + 1))
genc.emitLine('')
# String hash values depend on endianness and other factors,
# use an initializer macro to select the appropriate hash.
genc.emitLine('/* When unaligned access is possible, 32-bit values are fetched using host order.')
genc.emitLine(' * When unaligned access is not possible, always simulate little endian order.')
genc.emitLine(' * See: src-input/duk_util_hashbytes.c:duk_util_hashbytes().')
genc.emitLine(' */')
genc.emitLine('#if defined(DUK_USE_STRHASH_DENSE)')
genc.emitLine('#if defined(DUK_USE_HASHBYTES_UNALIGNED_U32_ACCESS)') # XXX: config option to be reworked
genc.emitLine('#if defined(DUK_USE_INTEGER_BE)')
genc.emitLine('#define DUK__STRHASH16(hash16le,hash16be,hash16sparse) (hash16be)')
genc.emitLine('#define DUK__STRHASH32(hash32le,hash32be,hash32sparse) (hash32be)')
genc.emitLine('#else')
genc.emitLine('#define DUK__STRHASH16(hash16le,hash16be,hash16sparse) (hash16le)')
genc.emitLine('#define DUK__STRHASH32(hash32le,hash32be,hash32sparse) (hash32le)')
genc.emitLine('#endif')
genc.emitLine('#else')
genc.emitLine('#define DUK__STRHASH16(hash16le,hash16be,hash16sparse) (hash16le)')
genc.emitLine('#define DUK__STRHASH32(hash32le,hash32be,hash32sparse) (hash32le)')
genc.emitLine('#endif')
genc.emitLine('#else /* DUK_USE_STRHASH_DENSE */')
genc.emitLine('#define DUK__STRHASH16(hash16le,hash16be,hash16sparse) (hash16sparse)')
genc.emitLine('#define DUK__STRHASH32(hash32le,hash32be,hash32sparse) (hash32sparse)')
genc.emitLine('#endif /* DUK_USE_STRHASH_DENSE */')
# String header initializer macro, takes into account lowmem etc.
genc.emitLine('#if defined(DUK_USE_HEAPPTR16)')
genc.emitLine('#if !defined(DUK_USE_REFCOUNT16)')
genc.emitLine('#error currently assumes DUK_USE_HEAPPTR16 and DUK_USE_REFCOUNT16 are both defined')
genc.emitLine('#endif')
genc.emitLine('#if defined(DUK_USE_HSTRING_CLEN)')
genc.emitLine('#define DUK__STRINIT(heaphdr_flags,refcount,hash32,hash16,blen,clen,next) \\')
genc.emitLine('\t{ { (heaphdr_flags) | ((hash16) << 16), DUK__REFCINIT((refcount)), (blen), (duk_hstring *) DUK_LOSE_CONST((next)) }, (clen) }')
genc.emitLine('#else /* DUK_USE_HSTRING_CLEN */')
genc.emitLine('#define DUK__STRINIT(heaphdr_flags,refcount,hash32,hash16,blen,clen,next) \\')
genc.emitLine('\t{ { (heaphdr_flags) | ((hash16) << 16), DUK__REFCINIT((refcount)), (blen), (duk_hstring *) DUK_LOSE_CONST((next)) } }')
genc.emitLine('#endif /* DUK_USE_HSTRING_CLEN */')
genc.emitLine('#else /* DUK_USE_HEAPPTR16 */')
genc.emitLine('#define DUK__STRINIT(heaphdr_flags,refcount,hash32,hash16,blen,clen,next) \\')
genc.emitLine('\t{ { (heaphdr_flags), DUK__REFCINIT((refcount)), DUK_LOSE_CONST((next)) }, (hash32), (blen), (clen) }')
genc.emitLine('#endif /* DUK_USE_HEAPPTR16 */')
# Organize ROM strings into a chained ROM string table. The ROM string
# h_next link pointer is used for chaining just like for RAM strings but
# in a separate string table.
#
# To avoid dealing with the different possible string hash algorithms,
# use a much simpler lookup key for ROM strings for now.
romstr_hash = []
while len(romstr_hash) < ROMSTR_LOOKUP_SIZE:
romstr_hash.append([])
for v in strs:
if len(v) > 0:
rom_lookup_hash = ord(v[0]) + (len(v) << 4)
else:
rom_lookup_hash = 0 + (len(v) << 4)
rom_lookup_hash = rom_lookup_hash & 0xff
romstr_hash[rom_lookup_hash].append(v)
romstr_next = {} # string -> the string's 'next' link
for lst in romstr_hash:
prev = None
#print(repr(lst))
for v in lst:
if prev is not None:
romstr_next[prev] = v
prev = v
chain_lens = {}
for lst in romstr_hash:
chainlen = len(lst)
if not chain_lens.has_key(chainlen):
chain_lens[chainlen] = 0
chain_lens[chainlen] += 1
tmp = []
for k in sorted(chain_lens.keys()):
tmp.append('%d: %d' % (k, chain_lens[k]))
logger.info('ROM string table chain lengths: %s' % ', '.join(tmp))
bi_str_map = {} # string -> initializer variable name
for str_index,v in enumerate(strs):
bi_str_map[v] = 'duk_str_%d' % str_index
# Emit string initializers. Emit the strings in an order which avoids
# forward declarations for the h_next link pointers; const forward
# declarations are a problem in C++.
genc.emitLine('')
for lst in romstr_hash:
for v in reversed(lst):
tmp = 'DUK_INTERNAL const duk_romstr_%d %s = {' % (len(v), bi_str_map[v])
flags = [ 'DUK_HTYPE_STRING', 'DUK_HEAPHDR_FLAG_READONLY', 'DUK_HEAPHDR_FLAG_REACHABLE' ]
is_arridx = string_is_arridx(v)
blen = len(v)
clen = rom_charlen(v)
if blen == clen:
flags.append('DUK_HSTRING_FLAG_ASCII')
if is_arridx:
flags.append('DUK_HSTRING_FLAG_ARRIDX')
if len(v) >= 1 and v[0] in [ '\x80', '\x81', '\x82', '\xff' ]:
flags.append('DUK_HSTRING_FLAG_SYMBOL')
if len(v) >= 1 and v[0] in [ '\x82', '\xff' ]:
flags.append('DUK_HSTRING_FLAG_HIDDEN')
if v in [ 'eval', 'arguments' ]:
flags.append('DUK_HSTRING_FLAG_EVAL_OR_ARGUMENTS')
if reserved_words.has_key(v):
flags.append('DUK_HSTRING_FLAG_RESERVED_WORD')
if strict_reserved_words.has_key(v):
flags.append('DUK_HSTRING_FLAG_STRICT_RESERVED_WORD')
h_next = 'NULL'
if romstr_next.has_key(v):
h_next = '&' + bi_str_map[romstr_next[v]]
tmp += 'DUK__STRINIT(%s,%d,%s,%s,%d,%d,%s),' % \
('|'.join(flags), 1, rom_get_strhash32_macro(v), \
rom_get_strhash16_macro(v), blen, clen, h_next)
tmpbytes = []
for c in v:
if ord(c) < 128:
tmpbytes.append('%d' % ord(c))
else:
tmpbytes.append('%dU' % ord(c))
tmpbytes.append('%d' % 0) # NUL term
tmp += '{' + ','.join(tmpbytes) + '}'
tmp += '};'
genc.emitLine(tmp)
# Emit the ROM string lookup table used by string interning.
#
# cdecl> explain const int * const foo;
# declare foo as const pointer to const int
genc.emitLine('')
genc.emitLine('DUK_INTERNAL const duk_hstring * const duk_rom_strings_lookup[%d] = {' % len(romstr_hash))
for lst in romstr_hash:
if len(lst) == 0:
genc.emitLine('\tNULL,')
else:
genc.emitLine('\t(const duk_hstring *) &%s,' % bi_str_map[lst[0]])
genc.emitLine('};')
# Emit an array of duk_hstring pointers indexed using DUK_STRIDX_xxx.
# This will back e.g. DUK_HTHREAD_STRING_XYZ(thr) directly, without
# needing an explicit array in thr/heap->strs[].
#
# cdecl> explain const int * const foo;
# declare foo as const pointer to const int
genc.emitLine('')
genc.emitLine('DUK_INTERNAL const duk_hstring * const duk_rom_strings_stridx[%d] = {' % len(strs_needing_stridx))
for s in strs_needing_stridx:
genc.emitLine('\t(const duk_hstring *) &%s,' % bi_str_map[s['str']]) # strs_needing_stridx is a list of objects, not plain strings
genc.emitLine('};')
return bi_str_map
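# A minimal model of the trivial ROM string lookup key built above (sketch
# only; the table size of 256 is an assumption matching the 0xff mask):
# first byte plus (length << 4), masked to the table size.

```python
def rom_lookup_bucket(s, table_size=256):
    # Empty string contributes 0 for the missing first byte.
    first = ord(s[0]) if len(s) > 0 else 0
    return (first + (len(s) << 4)) & (table_size - 1)
```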
# Emit ROM strings header.
def rom_emit_strings_header(genc, meta):
genc.emitLine('#if !defined(DUK_SINGLE_FILE)') # C++ static const workaround
genc.emitLine('DUK_INTERNAL_DECL const duk_hstring * const duk_rom_strings_lookup[%d];' % ROMSTR_LOOKUP_SIZE)
genc.emitLine('DUK_INTERNAL_DECL const duk_hstring * const duk_rom_strings_stridx[%d];' % len(meta['strings_stridx']))
genc.emitLine('#endif')
# Emit ROM object initializer types and macros.
def rom_emit_object_initializer_types_and_macros(genc):
# Objects and functions are straightforward because they just use the
# RAM structure which has no dynamic or variable size parts.
genc.emitLine('typedef struct duk_romobj duk_romobj; ' + \
'struct duk_romobj { duk_hobject hdr; };')
genc.emitLine('typedef struct duk_romarr duk_romarr; ' + \
'struct duk_romarr { duk_harray hdr; };')
genc.emitLine('typedef struct duk_romfun duk_romfun; ' + \
'struct duk_romfun { duk_hnatfunc hdr; };')
genc.emitLine('typedef struct duk_romobjenv duk_romobjenv; ' + \
'struct duk_romobjenv { duk_hobjenv hdr; };')
# For ROM pointer compression we'd need a compile-time variant.
# The current portable solution is to just assign running numbers
# to ROM compressed pointers, and provide the table to the user's
# pointer compression function. Much better solutions would be
# possible, but they're often compiler/platform specific.
# Emit object/function initializer which is aware of options affecting
# the header. Heap next/prev pointers are always NULL.
genc.emitLine('#if defined(DUK_USE_HEAPPTR16)')
genc.emitLine('#if !defined(DUK_USE_REFCOUNT16) || defined(DUK_USE_HOBJECT_HASH_PART)')
genc.emitLine('#error currently assumes DUK_USE_HEAPPTR16 and DUK_USE_REFCOUNT16 are both defined and DUK_USE_HOBJECT_HASH_PART is undefined')
genc.emitLine('#endif')
#genc.emitLine('#if !defined(DUK_USE_HEAPPTR_ENC16_STATIC)')
#genc.emitLine('#error need DUK_USE_HEAPPTR_ENC16_STATIC which provides compile-time pointer compression')
#genc.emitLine('#endif')
genc.emitLine('#define DUK__ROMOBJ_INIT(heaphdr_flags,refcount,props,props_enc16,iproto,iproto_enc16,esize,enext,asize,hsize) \\')
genc.emitLine('\t{ { { (heaphdr_flags), DUK__REFCINIT((refcount)), 0, 0, (props_enc16) }, (iproto_enc16), (esize), (enext), (asize) } }')
genc.emitLine('#define DUK__ROMARR_INIT(heaphdr_flags,refcount,props,props_enc16,iproto,iproto_enc16,esize,enext,asize,hsize,length) \\')
genc.emitLine('\t{ { { { (heaphdr_flags), DUK__REFCINIT((refcount)), 0, 0, (props_enc16) }, (iproto_enc16), (esize), (enext), (asize) }, (length), 0 /*length_nonwritable*/ } }')
genc.emitLine('#define DUK__ROMFUN_INIT(heaphdr_flags,refcount,props,props_enc16,iproto,iproto_enc16,esize,enext,asize,hsize,nativefunc,nargs,magic) \\')
genc.emitLine('\t{ { { { (heaphdr_flags), DUK__REFCINIT((refcount)), 0, 0, (props_enc16) }, (iproto_enc16), (esize), (enext), (asize) }, (nativefunc), (duk_int16_t) (nargs), (duk_int16_t) (magic) } }')
genc.emitLine('#define DUK__ROMOBJENV_INIT(heaphdr_flags,refcount,props,props_enc16,iproto,iproto_enc16,esize,enext,asize,hsize,target,has_this) \\')
genc.emitLine('\t{ { { { (heaphdr_flags), DUK__REFCINIT((refcount)), 0, 0, (props_enc16) }, (iproto_enc16), (esize), (enext), (asize) }, (duk_hobject *) DUK_LOSE_CONST(target), (has_this) } }')
genc.emitLine('#else /* DUK_USE_HEAPPTR16 */')
genc.emitLine('#define DUK__ROMOBJ_INIT(heaphdr_flags,refcount,props,props_enc16,iproto,iproto_enc16,esize,enext,asize,hsize) \\')
genc.emitLine('\t{ { { (heaphdr_flags), DUK__REFCINIT((refcount)), NULL, NULL }, (duk_uint8_t *) DUK_LOSE_CONST(props), (duk_hobject *) DUK_LOSE_CONST(iproto), (esize), (enext), (asize), (hsize) } }')
genc.emitLine('#define DUK__ROMARR_INIT(heaphdr_flags,refcount,props,props_enc16,iproto,iproto_enc16,esize,enext,asize,hsize,length) \\')
genc.emitLine('\t{ { { { (heaphdr_flags), DUK__REFCINIT((refcount)), NULL, NULL }, (duk_uint8_t *) DUK_LOSE_CONST(props), (duk_hobject *) DUK_LOSE_CONST(iproto), (esize), (enext), (asize), (hsize) }, (length), 0 /*length_nonwritable*/ } }')
genc.emitLine('#define DUK__ROMFUN_INIT(heaphdr_flags,refcount,props,props_enc16,iproto,iproto_enc16,esize,enext,asize,hsize,nativefunc,nargs,magic) \\')
genc.emitLine('\t{ { { { (heaphdr_flags), DUK__REFCINIT((refcount)), NULL, NULL }, (duk_uint8_t *) DUK_LOSE_CONST(props), (duk_hobject *) DUK_LOSE_CONST(iproto), (esize), (enext), (asize), (hsize) }, (nativefunc), (duk_int16_t) (nargs), (duk_int16_t) (magic) } }')
genc.emitLine('#define DUK__ROMOBJENV_INIT(heaphdr_flags,refcount,props,props_enc16,iproto,iproto_enc16,esize,enext,asize,hsize,target,has_this) \\')
genc.emitLine('\t{ { { { (heaphdr_flags), DUK__REFCINIT((refcount)), NULL, NULL }, (duk_uint8_t *) DUK_LOSE_CONST(props), (duk_hobject *) DUK_LOSE_CONST(iproto), (esize), (enext), (asize), (hsize) }, (duk_hobject *) DUK_LOSE_CONST(target), (has_this) } }')
genc.emitLine('#endif /* DUK_USE_HEAPPTR16 */')
# Initializer typedef for a dummy function pointer. ROM support assumes
# function pointers are 32 bits. Using a dummy function pointer type
# avoids a function-pointer-to-plain-pointer cast which emits warnings.
genc.emitLine('typedef void (*duk_rom_funcptr)(void);')
# Emit duk_tval structs. This gets a bit messier with packed/unpacked
# duk_tval, endianness variants, pointer sizes, etc.
genc.emitLine('#if defined(DUK_USE_PACKED_TVAL)')
genc.emitLine('typedef struct duk_rom_tval_undefined duk_rom_tval_undefined;')
genc.emitLine('typedef struct duk_rom_tval_null duk_rom_tval_null;')
genc.emitLine('typedef struct duk_rom_tval_lightfunc duk_rom_tval_lightfunc;')
genc.emitLine('typedef struct duk_rom_tval_boolean duk_rom_tval_boolean;')
genc.emitLine('typedef struct duk_rom_tval_number duk_rom_tval_number;')
genc.emitLine('typedef struct duk_rom_tval_object duk_rom_tval_object;')
genc.emitLine('typedef struct duk_rom_tval_string duk_rom_tval_string;')
genc.emitLine('typedef struct duk_rom_tval_accessor duk_rom_tval_accessor;')
genc.emitLine('struct duk_rom_tval_number { duk_uint8_t bytes[8]; };')
genc.emitLine('struct duk_rom_tval_accessor { const duk_hobject *get; const duk_hobject *set; };')
genc.emitLine('#if defined(DUK_USE_DOUBLE_LE)')
genc.emitLine('struct duk_rom_tval_object { const void *ptr; duk_uint32_t hiword; };')
genc.emitLine('struct duk_rom_tval_string { const void *ptr; duk_uint32_t hiword; };')
genc.emitLine('struct duk_rom_tval_undefined { const void *ptr; duk_uint32_t hiword; };')
genc.emitLine('struct duk_rom_tval_null { const void *ptr; duk_uint32_t hiword; };')
genc.emitLine('struct duk_rom_tval_lightfunc { duk_rom_funcptr ptr; duk_uint32_t hiword; };')
genc.emitLine('struct duk_rom_tval_boolean { duk_uint32_t dummy; duk_uint32_t hiword; };')
genc.emitLine('#elif defined(DUK_USE_DOUBLE_BE)')
genc.emitLine('struct duk_rom_tval_object { duk_uint32_t hiword; const void *ptr; };')
genc.emitLine('struct duk_rom_tval_string { duk_uint32_t hiword; const void *ptr; };')
genc.emitLine('struct duk_rom_tval_undefined { duk_uint32_t hiword; const void *ptr; };')
genc.emitLine('struct duk_rom_tval_null { duk_uint32_t hiword; const void *ptr; };')
genc.emitLine('struct duk_rom_tval_lightfunc { duk_uint32_t hiword; duk_rom_funcptr ptr; };')
genc.emitLine('struct duk_rom_tval_boolean { duk_uint32_t hiword; duk_uint32_t dummy; };')
genc.emitLine('#elif defined(DUK_USE_DOUBLE_ME)')
genc.emitLine('struct duk_rom_tval_object { duk_uint32_t hiword; const void *ptr; };')
genc.emitLine('struct duk_rom_tval_string { duk_uint32_t hiword; const void *ptr; };')
genc.emitLine('struct duk_rom_tval_undefined { duk_uint32_t hiword; const void *ptr; };')
genc.emitLine('struct duk_rom_tval_null { duk_uint32_t hiword; const void *ptr; };')
genc.emitLine('struct duk_rom_tval_lightfunc { duk_uint32_t hiword; duk_rom_funcptr ptr; };')
genc.emitLine('struct duk_rom_tval_boolean { duk_uint32_t hiword; duk_uint32_t dummy; };')
genc.emitLine('#else')
genc.emitLine('#error invalid endianness defines')
genc.emitLine('#endif')
genc.emitLine('#else /* DUK_USE_PACKED_TVAL */')
# Unpacked initializers are written assuming normal struct alignment
# rules so that sizeof(duk_tval) == 16. 32-bit pointers need special
# handling to ensure the individual initializers pad to 16 bytes as
# necessary.
# XXX: 32-bit unpacked duk_tval is not yet supported.
genc.emitLine('#if defined(DUK_UINTPTR_MAX)')
genc.emitLine('#if (DUK_UINTPTR_MAX <= 0xffffffffUL)')
genc.emitLine('#error ROM initializer with unpacked duk_tval does not currently work on 32-bit targets')
genc.emitLine('#endif')
genc.emitLine('#endif')
genc.emitLine('typedef struct duk_rom_tval_undefined duk_rom_tval_undefined;')
genc.emitLine('struct duk_rom_tval_undefined { duk_small_uint_t tag; duk_small_uint_t extra; duk_uint8_t bytes[8]; };')
genc.emitLine('typedef struct duk_rom_tval_null duk_rom_tval_null;')
genc.emitLine('struct duk_rom_tval_null { duk_small_uint_t tag; duk_small_uint_t extra; duk_uint8_t bytes[8]; };')
genc.emitLine('typedef struct duk_rom_tval_boolean duk_rom_tval_boolean;')
genc.emitLine('struct duk_rom_tval_boolean { duk_small_uint_t tag; duk_small_uint_t extra; duk_uint32_t val; duk_uint32_t unused; };')
genc.emitLine('typedef struct duk_rom_tval_number duk_rom_tval_number;')
genc.emitLine('struct duk_rom_tval_number { duk_small_uint_t tag; duk_small_uint_t extra; duk_uint8_t bytes[8]; };')
genc.emitLine('typedef struct duk_rom_tval_object duk_rom_tval_object;')
genc.emitLine('struct duk_rom_tval_object { duk_small_uint_t tag; duk_small_uint_t extra; const duk_heaphdr *val; };')
genc.emitLine('typedef struct duk_rom_tval_string duk_rom_tval_string;')
genc.emitLine('struct duk_rom_tval_string { duk_small_uint_t tag; duk_small_uint_t extra; const duk_heaphdr *val; };')
genc.emitLine('typedef struct duk_rom_tval_lightfunc duk_rom_tval_lightfunc;')
genc.emitLine('struct duk_rom_tval_lightfunc { duk_small_uint_t tag; duk_small_uint_t extra; duk_rom_funcptr ptr; };')
genc.emitLine('typedef struct duk_rom_tval_accessor duk_rom_tval_accessor;')
genc.emitLine('struct duk_rom_tval_accessor { const duk_hobject *get; const duk_hobject *set; };')
genc.emitLine('#endif /* DUK_USE_PACKED_TVAL */')
genc.emitLine('')
# Double initializer byte shuffle macro to handle byte orders
# without duplicating the entire initializers.
genc.emitLine('#if defined(DUK_USE_DOUBLE_LE)')
genc.emitLine('#define DUK__DBLBYTES(a,b,c,d,e,f,g,h) { (h), (g), (f), (e), (d), (c), (b), (a) }')
genc.emitLine('#elif defined(DUK_USE_DOUBLE_BE)')
genc.emitLine('#define DUK__DBLBYTES(a,b,c,d,e,f,g,h) { (a), (b), (c), (d), (e), (f), (g), (h) }')
genc.emitLine('#elif defined(DUK_USE_DOUBLE_ME)')
genc.emitLine('#define DUK__DBLBYTES(a,b,c,d,e,f,g,h) { (d), (c), (b), (a), (h), (g), (f), (e) }')
genc.emitLine('#else')
genc.emitLine('#error invalid endianness defines')
genc.emitLine('#endif')
genc.emitLine('')
# Emit duk_tval initializer literal macros.
genc.emitLine('#if defined(DUK_USE_PACKED_TVAL)')
genc.emitLine('#define DUK__TVAL_NUMBER(hostbytes) { hostbytes }') # bytes already in host order
genc.emitLine('#if defined(DUK_USE_DOUBLE_LE)')
genc.emitLine('#define DUK__TVAL_UNDEFINED() { (const void *) NULL, (DUK_TAG_UNDEFINED << 16) }')
genc.emitLine('#define DUK__TVAL_NULL() { (const void *) NULL, (DUK_TAG_NULL << 16) }')
genc.emitLine('#define DUK__TVAL_LIGHTFUNC(func,flags) { (duk_rom_funcptr) (func), (DUK_TAG_LIGHTFUNC << 16) + (flags) }')
genc.emitLine('#define DUK__TVAL_BOOLEAN(bval) { 0, (DUK_TAG_BOOLEAN << 16) + (bval) }')
genc.emitLine('#define DUK__TVAL_OBJECT(ptr) { (const void *) (ptr), (DUK_TAG_OBJECT << 16) }')
genc.emitLine('#define DUK__TVAL_STRING(ptr) { (const void *) (ptr), (DUK_TAG_STRING << 16) }')
genc.emitLine('#elif defined(DUK_USE_DOUBLE_BE)')
genc.emitLine('#define DUK__TVAL_UNDEFINED() { (DUK_TAG_UNDEFINED << 16), (const void *) NULL }')
genc.emitLine('#define DUK__TVAL_NULL() { (DUK_TAG_NULL << 16), (const void *) NULL }')
genc.emitLine('#define DUK__TVAL_LIGHTFUNC(func,flags) { (DUK_TAG_LIGHTFUNC << 16) + (flags), (duk_rom_funcptr) (func) }')
genc.emitLine('#define DUK__TVAL_BOOLEAN(bval) { (DUK_TAG_BOOLEAN << 16) + (bval), 0 }')
genc.emitLine('#define DUK__TVAL_OBJECT(ptr) { (DUK_TAG_OBJECT << 16), (const void *) (ptr) }')
genc.emitLine('#define DUK__TVAL_STRING(ptr) { (DUK_TAG_STRING << 16), (const void *) (ptr) }')
genc.emitLine('#elif defined(DUK_USE_DOUBLE_ME)')
genc.emitLine('#define DUK__TVAL_UNDEFINED() { (DUK_TAG_UNDEFINED << 16), (const void *) NULL }')
genc.emitLine('#define DUK__TVAL_NULL() { (DUK_TAG_NULL << 16), (const void *) NULL }')
genc.emitLine('#define DUK__TVAL_LIGHTFUNC(func,flags) { (DUK_TAG_LIGHTFUNC << 16) + (flags), (duk_rom_funcptr) (func) }')
genc.emitLine('#define DUK__TVAL_BOOLEAN(bval) { (DUK_TAG_BOOLEAN << 16) + (bval), 0 }')
genc.emitLine('#define DUK__TVAL_OBJECT(ptr) { (DUK_TAG_OBJECT << 16), (const void *) (ptr) }')
genc.emitLine('#define DUK__TVAL_STRING(ptr) { (DUK_TAG_STRING << 16), (const void *) (ptr) }')
genc.emitLine('#else')
genc.emitLine('#error invalid endianness defines')
genc.emitLine('#endif')
genc.emitLine('#else /* DUK_USE_PACKED_TVAL */')
genc.emitLine('#define DUK__TVAL_NUMBER(hostbytes) { DUK_TAG_NUMBER, 0, hostbytes }') # bytes already in host order
genc.emitLine('#define DUK__TVAL_UNDEFINED() { DUK_TAG_UNDEFINED, 0, {0,0,0,0,0,0,0,0} }')
genc.emitLine('#define DUK__TVAL_NULL() { DUK_TAG_NULL, 0, {0,0,0,0,0,0,0,0} }')
genc.emitLine('#define DUK__TVAL_BOOLEAN(bval) { DUK_TAG_BOOLEAN, 0, (bval), 0 }')
genc.emitLine('#define DUK__TVAL_OBJECT(ptr) { DUK_TAG_OBJECT, 0, (const duk_heaphdr *) (ptr) }')
genc.emitLine('#define DUK__TVAL_STRING(ptr) { DUK_TAG_STRING, 0, (const duk_heaphdr *) (ptr) }')
genc.emitLine('#define DUK__TVAL_LIGHTFUNC(func,flags) { DUK_TAG_LIGHTFUNC, (flags), (duk_rom_funcptr) (func) }')
genc.emitLine('#endif /* DUK_USE_PACKED_TVAL */')
genc.emitLine('#define DUK__TVAL_ACCESSOR(getter,setter) { (const duk_hobject *) (getter), (const duk_hobject *) (setter) }')
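# The DUK__DBLBYTES reordering emitted above can be modeled standalone
# (sketch only): the generator produces the 8 bytes of an IEEE double in
# big-endian order (a..h), and the macro reorders them for the target;
# 'me' models the mixed-endian (d,c,b,a,h,g,f,e) layout.

```python
import struct

def dblbytes(value, order):
    b = struct.pack('>d', value)     # bytes a..h in big-endian order
    if order == 'be':
        return b                     # (a,b,c,d,e,f,g,h)
    if order == 'le':
        return b[::-1]               # (h,g,f,e,d,c,b,a)
    if order == 'me':
        return b[3::-1] + b[7:3:-1]  # (d,c,b,a,h,g,f,e)
    raise ValueError(order)
```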
# Emit ROM objects source: the object/function headers themselves, property
# table structs for different property table sizes/types, and property table
# initializers.
def rom_emit_objects(genc, meta, bi_str_map):
objs = meta['objects']
id_to_bidx = meta['_objid_to_bidx']
# Table for compressed ROM pointers; reserve high range of compressed pointer
# values for this purpose. This must contain all ROM pointers that might be
# referenced (all objects, strings, and property tables at least).
romptr_compress_list = []
def compress_rom_ptr(x):
if x == 'NULL':
return 0
try:
idx = romptr_compress_list.index(x)
res = ROMPTR_FIRST + idx
except ValueError:
romptr_compress_list.append(x)
res = ROMPTR_FIRST + len(romptr_compress_list) - 1
assert(res <= 0xffff)
return res
# Need string and object maps (id -> C symbol name) early.
bi_obj_map = {} # object id -> initializer variable name
for idx,obj in enumerate(objs):
bi_obj_map[obj['id']] = 'duk_obj_%d' % idx
# Add built-in strings and objects to compressed ROM pointers first.
for k in sorted(bi_str_map.keys()):
compress_rom_ptr('&%s' % bi_str_map[k])
for k in sorted(bi_obj_map.keys()):
compress_rom_ptr('&%s' % bi_obj_map[k])
# Property attributes lookup, map metadata attribute string into a
# C initializer.
attr_lookup = {
'': 'DUK_PROPDESC_FLAGS_NONE',
'w': 'DUK_PROPDESC_FLAGS_W',
'e': 'DUK_PROPDESC_FLAGS_E',
'c': 'DUK_PROPDESC_FLAGS_C',
'we': 'DUK_PROPDESC_FLAGS_WE',
'wc': 'DUK_PROPDESC_FLAGS_WC',
'ec': 'DUK_PROPDESC_FLAGS_EC',
'wec': 'DUK_PROPDESC_FLAGS_WEC',
'a': 'DUK_PROPDESC_FLAGS_NONE|DUK_PROPDESC_FLAG_ACCESSOR',
'ea': 'DUK_PROPDESC_FLAGS_E|DUK_PROPDESC_FLAG_ACCESSOR',
'ca': 'DUK_PROPDESC_FLAGS_C|DUK_PROPDESC_FLAG_ACCESSOR',
'eca': 'DUK_PROPDESC_FLAGS_EC|DUK_PROPDESC_FLAG_ACCESSOR',
}
# Emit property table structs. These are very complex because
# property count *and* individual property type affect the fields
# in the initializer, properties can be data properties or accessor
# properties or different duk_tval types. There are also several
# property table memory layouts, each with a different ordering of
# keys, values, etc. Union initializers would make things a bit
# easier but they're not very portable (being C99).
#
# The easy solution is to use a separate initializer type for each
# property type. Could also cache and reuse identical initializers
# but there'd be very few of them so it's more straightforward to
# not reuse the structs.
#
# NOTE: naming is a bit inconsistent here; duk_tval is also used to
# refer to property value initializers like a getter/setter pair.
genc.emitLine('#if defined(DUK_USE_HOBJECT_LAYOUT_1)')
for idx,obj in enumerate(objs):
numprops = len(obj['properties'])
if numprops == 0:
continue
tmp = 'typedef struct duk_romprops_%d duk_romprops_%d; ' % (idx, idx)
tmp += 'struct duk_romprops_%d { ' % idx
for idx,val in enumerate(obj['properties']):
tmp += 'const duk_hstring *key%d; ' % idx
for idx,val in enumerate(obj['properties']):
# XXX: fastint support
tmp += '%s val%d; ' % (rom_get_value_initializer_type(meta, val, bi_str_map, bi_obj_map), idx)
for idx,val in enumerate(obj['properties']):
tmp += 'duk_uint8_t flags%d; ' % idx
tmp += '};'
genc.emitLine(tmp)
genc.emitLine('#elif defined(DUK_USE_HOBJECT_LAYOUT_2)')
for idx,obj in enumerate(objs):
numprops = len(obj['properties'])
if numprops == 0:
continue
tmp = 'typedef struct duk_romprops_%d duk_romprops_%d; ' % (idx, idx)
tmp += 'struct duk_romprops_%d { ' % idx
for idx,val in enumerate(obj['properties']):
# XXX: fastint support
tmp += '%s val%d; ' % (rom_get_value_initializer_type(meta, val, bi_str_map, bi_obj_map), idx)
for idx,val in enumerate(obj['properties']):
tmp += 'const duk_hstring *key%d; ' % idx
for idx,val in enumerate(obj['properties']):
tmp += 'duk_uint8_t flags%d; ' % idx
# Padding follows for flags, but we don't need to emit it
# (at the moment there is never an array or hash part).
tmp += '};'
genc.emitLine(tmp)
genc.emitLine('#elif defined(DUK_USE_HOBJECT_LAYOUT_3)')
for idx,obj in enumerate(objs):
numprops = len(obj['properties'])
if numprops == 0:
continue
tmp = 'typedef struct duk_romprops_%d duk_romprops_%d; ' % (idx, idx)
tmp += 'struct duk_romprops_%d { ' % idx
for idx,val in enumerate(obj['properties']):
# XXX: fastint support
tmp += '%s val%d; ' % (rom_get_value_initializer_type(meta, val, bi_str_map, bi_obj_map), idx)
# No array values
for idx,val in enumerate(obj['properties']):
tmp += 'const duk_hstring *key%d; ' % idx
# No hash index
for idx,val in enumerate(obj['properties']):
tmp += 'duk_uint8_t flags%d; ' % idx
tmp += '};'
genc.emitLine(tmp)
genc.emitLine('#else')
genc.emitLine('#error invalid object layout')
genc.emitLine('#endif')
genc.emitLine('')
    # Forward declare all property tables so that objects can reference them.
    # Also pointer compress them.
    for idx,obj in enumerate(objs):
        numprops = len(obj['properties'])
        if numprops == 0:
            continue

        # We would like to use DUK_INTERNAL_DECL here, but that maps
        # to "static const" in a single file build which has C++
        # portability issues: you can't forward declare a static const.
        # We can't reorder the property tables to avoid this because
        # there are cyclic references.  So, as the current workaround,
        # declare as external.
        genc.emitLine('DUK_EXTERNAL_DECL const duk_romprops_%d duk_prop_%d;' % (idx, idx))

        # Add property tables to ROM compressed pointers too.
        compress_rom_ptr('&duk_prop_%d' % idx)
    genc.emitLine('')

    # Forward declare all objects so that objects can reference them,
    # e.g. internal prototype reference.
    for idx,obj in enumerate(objs):
        # Careful with C++: must avoid redefining a non-extern const.
        # See commentary above for duk_prop_%d forward declarations.
        if obj.get('callable', False):
            genc.emitLine('DUK_EXTERNAL_DECL const duk_romfun duk_obj_%d;' % idx)
        elif obj.get('class') == 'Array':
            genc.emitLine('DUK_EXTERNAL_DECL const duk_romarr duk_obj_%d;' % idx)
        elif obj.get('class') == 'ObjEnv':
            genc.emitLine('DUK_EXTERNAL_DECL const duk_romobjenv duk_obj_%d;' % idx)
        else:
            genc.emitLine('DUK_EXTERNAL_DECL const duk_romobj duk_obj_%d;' % idx)
    genc.emitLine('')
    # Define objects, reference property tables.  Objects will be
    # logically non-extensible so also leave their extensible flag
    # cleared despite what metadata requests; the runtime code expects
    # ROM objects to be non-extensible.
    for idx,obj in enumerate(objs):
        numprops = len(obj['properties'])

        isfunc = obj.get('callable', False)

        if isfunc:
            tmp = 'DUK_EXTERNAL const duk_romfun duk_obj_%d = ' % idx
        elif obj.get('class') == 'Array':
            tmp = 'DUK_EXTERNAL const duk_romarr duk_obj_%d = ' % idx
        elif obj.get('class') == 'ObjEnv':
            tmp = 'DUK_EXTERNAL const duk_romobjenv duk_obj_%d = ' % idx
        else:
            tmp = 'DUK_EXTERNAL const duk_romobj duk_obj_%d = ' % idx

        flags = [ 'DUK_HTYPE_OBJECT', 'DUK_HEAPHDR_FLAG_READONLY', 'DUK_HEAPHDR_FLAG_REACHABLE' ]
        if isfunc:
            flags.append('DUK_HOBJECT_FLAG_NATFUNC')
            flags.append('DUK_HOBJECT_FLAG_STRICT')
            flags.append('DUK_HOBJECT_FLAG_NEWENV')
        if obj.get('callable', False):
            flags.append('DUK_HOBJECT_FLAG_CALLABLE')
        if obj.get('constructable', False):
            flags.append('DUK_HOBJECT_FLAG_CONSTRUCTABLE')
        if obj.get('class') == 'Array':
            flags.append('DUK_HOBJECT_FLAG_EXOTIC_ARRAY')
        if obj.get('special_call', False):
            flags.append('DUK_HOBJECT_FLAG_SPECIAL_CALL')
        flags.append('DUK_HOBJECT_CLASS_AS_FLAGS(%d)' % class_to_number(obj['class']))  # XXX: use constant, not number

        refcount = 1  # refcount is faked to be always 1
        if numprops == 0:
            props = 'NULL'
        else:
            props = '&duk_prop_%d' % idx
        props_enc16 = compress_rom_ptr(props)

        if obj.has_key('internal_prototype'):
            iproto = '&%s' % bi_obj_map[obj['internal_prototype']]
        else:
            iproto = 'NULL'
        iproto_enc16 = compress_rom_ptr(iproto)

        e_size = numprops
        e_next = e_size
        a_size = 0  # never an array part for now
        h_size = 0  # never a hash for now; not appropriate for perf relevant builds

        if isfunc:
            nativefunc = obj['native']
            if obj.get('varargs', False):
                nargs = 'DUK_VARARGS'
            elif obj.has_key('nargs'):
                nargs = '%d' % obj['nargs']
            else:
                assert(False)  # 'nargs' should be defaulted from 'length' at metadata load
            magic = '%d' % resolve_magic(obj.get('magic', None), id_to_bidx)
        else:
            nativefunc = 'dummy'
            nargs = '0'
            magic = '0'

        assert(a_size == 0)
        assert(h_size == 0)
        if isfunc:
            tmp += 'DUK__ROMFUN_INIT(%s,%d,%s,%d,%s,%d,%d,%d,%d,%d,%s,%s,%s);' % \
                ('|'.join(flags), refcount, props, props_enc16, \
                 iproto, iproto_enc16, e_size, e_next, a_size, h_size, \
                 nativefunc, nargs, magic)
        elif obj.get('class') == 'Array':
            arrlen = 0
            tmp += 'DUK__ROMARR_INIT(%s,%d,%s,%d,%s,%d,%d,%d,%d,%d,%d);' % \
                ('|'.join(flags), refcount, props, props_enc16, \
                 iproto, iproto_enc16, e_size, e_next, a_size, h_size, arrlen)
        elif obj.get('class') == 'ObjEnv':
            objenv_target = '&%s' % bi_obj_map[obj['objenv_target']]
            objenv_has_this = obj['objenv_has_this']
            tmp += 'DUK__ROMOBJENV_INIT(%s,%d,%s,%d,%s,%d,%d,%d,%d,%d,%s,%d);' % \
                ('|'.join(flags), refcount, props, props_enc16, \
                 iproto, iproto_enc16, e_size, e_next, a_size, h_size, objenv_target, objenv_has_this)
        else:
            tmp += 'DUK__ROMOBJ_INIT(%s,%d,%s,%d,%s,%d,%d,%d,%d,%d);' % \
                ('|'.join(flags), refcount, props, props_enc16, \
                 iproto, iproto_enc16, e_size, e_next, a_size, h_size)

        genc.emitLine(tmp)

    # Property tables.  Can reference arbitrary strings and objects as
    # they're defined before them.

    # Properties will be non-configurable, but must be writable so that
    # standard property semantics allow shadowing properties to be
    # established in inherited objects (e.g. "var obj={}; obj.toString
    # = myToString").  Enumerable can also be kept.

    def _prepAttrs(val):
        attrs = val['attributes']
        assert('c' not in attrs)
        return attr_lookup[attrs]

    def _emitPropTableInitializer(idx, obj, layout):
        init_vals = []
        init_keys = []
        init_flags = []

        numprops = len(obj['properties'])
        for val in obj['properties']:
            init_keys.append('(const duk_hstring *)&%s' % bi_str_map[val['key']])
        for val in obj['properties']:
            # XXX: fastint support
            init_vals.append('%s' % rom_get_value_initializer_literal(meta, val, bi_str_map, bi_obj_map))
        for val in obj['properties']:
            init_flags.append('%s' % _prepAttrs(val))

        if layout == 1:
            initlist = init_keys + init_vals + init_flags
        elif layout == 2:
            initlist = init_vals + init_keys + init_flags
        elif layout == 3:
            # Same as layout 2 now, no hash/array
            initlist = init_vals + init_keys + init_flags

        if len(initlist) > 0:
            genc.emitLine('DUK_EXTERNAL const duk_romprops_%d duk_prop_%d = {%s};' % (idx, idx, ','.join(initlist)))

    genc.emitLine('#if defined(DUK_USE_HOBJECT_LAYOUT_1)')
    for idx,obj in enumerate(objs):
        _emitPropTableInitializer(idx, obj, 1)
    genc.emitLine('#elif defined(DUK_USE_HOBJECT_LAYOUT_2)')
    for idx,obj in enumerate(objs):
        _emitPropTableInitializer(idx, obj, 2)
    genc.emitLine('#elif defined(DUK_USE_HOBJECT_LAYOUT_3)')
    for idx,obj in enumerate(objs):
        _emitPropTableInitializer(idx, obj, 3)
    genc.emitLine('#else')
    genc.emitLine('#error invalid object layout')
    genc.emitLine('#endif')
    genc.emitLine('')
    # Emit a list of ROM builtins (those objects needing a bidx).
    #
    # cdecl > explain const int * const foo;
    # declare foo as const pointer to const int

    count_bidx = 0
    for bi in objs:
        if bi.get('bidx_used', False):
            count_bidx += 1
    genc.emitLine('DUK_INTERNAL const duk_hobject * const duk_rom_builtins_bidx[%d] = {' % count_bidx)
    for bi in objs:
        if not bi.get('bidx_used', False):
            continue  # for this we want the toplevel objects only
        genc.emitLine('\t(const duk_hobject *) &%s,' % bi_obj_map[bi['id']])
    genc.emitLine('};')

    # Emit a table of compressed ROM pointers.  We must be able to
    # compress ROM pointers at compile time so we assign running
    # indices to them.  User pointer compression macros must use this
    # array to encode/decode ROM pointers.
    genc.emitLine('')
    genc.emitLine('#if defined(DUK_USE_ROM_OBJECTS) && defined(DUK_USE_HEAPPTR16)')
    genc.emitLine('DUK_EXTERNAL const void * const duk_rom_compressed_pointers[%d] = {' % (len(romptr_compress_list) + 1))
    for idx,ptr in enumerate(romptr_compress_list):
        genc.emitLine('\t(const void *) %s, /* 0x%04x */' % (ptr, ROMPTR_FIRST + idx))
    romptr_highest = ROMPTR_FIRST + len(romptr_compress_list) - 1
    genc.emitLine('\tNULL')  # for convenience
    genc.emitLine('};')
    genc.emitLine('#endif')

    logger.debug('%d compressed rom pointers (used range is [0x%04x,0x%04x], %d space left)' % \
                 (len(romptr_compress_list), ROMPTR_FIRST, romptr_highest, 0xffff - romptr_highest))

    # Undefine helpers.
    genc.emitLine('')
    for i in [
        'DUK__STRHASH16',
        'DUK__STRHASH32',
        'DUK__DBLBYTES',
        'DUK__TVAL_NUMBER',
        'DUK__TVAL_UNDEFINED',
        'DUK__TVAL_NULL',
        'DUK__TVAL_BOOLEAN',
        'DUK__TVAL_OBJECT',
        'DUK__TVAL_STRING',
        'DUK__STRINIT',
        'DUK__ROMOBJ_INIT',
        'DUK__ROMFUN_INIT'
    ]:
        genc.emitLine('#undef ' + i)

    return romptr_compress_list

# Emit ROM objects header.
def rom_emit_objects_header(genc, meta):
    bidx = 0
    for bi in meta['objects']:
        if not bi.get('bidx_used', False):
            continue  # for this we want the toplevel objects only
        genc.emitDefine('DUK_BIDX_' + '_'.join(bi['id'].upper().split('_')[1:]), bidx)  # bi_foo_bar -> FOO_BAR
        bidx += 1
    count_bidx = bidx
    genc.emitDefine('DUK_NUM_BUILTINS', count_bidx)
    genc.emitDefine('DUK_NUM_BIDX_BUILTINS', count_bidx)
    genc.emitDefine('DUK_NUM_ALL_BUILTINS', len(meta['objects']))
    genc.emitLine('')

    genc.emitLine('#if !defined(DUK_SINGLE_FILE)')  # C++ static const workaround
    genc.emitLine('DUK_INTERNAL_DECL const duk_hobject * const duk_rom_builtins_bidx[%d];' % count_bidx)
    genc.emitLine('#endif')

    # XXX: missing declarations here, not an issue for a single source build.
    # For example, 'DUK_EXTERNAL_DECL ... duk_rom_compressed_pointers[]' is missing.

#
#  Shared for both RAM and ROM
#

def emit_header_native_function_declarations(genc, meta):
    emitted = {}  # To suppress duplicates
    funclist = []
    def _emit(fname):
        if not emitted.has_key(fname):
            emitted[fname] = True
            funclist.append(fname)

    for o in meta['objects']:
        if o.has_key('native'):
            _emit(o['native'])

        for p in o['properties']:
            v = p['value']
            if isinstance(v, dict) and v['type'] == 'lightfunc':
                assert(v.has_key('native'))
                _emit(v['native'])
                logger.debug('Lightfunc function declaration: %r' % v['native'])

    for fname in funclist:
        # Visibility depends on whether the function is Duktape internal or user.
        # Use a simple prefix check for now.
        if fname[:4] == 'duk_':
            genc.emitLine('DUK_INTERNAL_DECL duk_ret_t %s(duk_context *ctx);' % fname)
        else:
            genc.emitLine('extern duk_ret_t %s(duk_context *ctx);' % fname)

#
#  Main
#

def main():
    parser = optparse.OptionParser()
    parser.add_option('--git-commit', dest='git_commit', default=None, help='Git commit hash')
    parser.add_option('--git-describe', dest='git_describe', default=None, help='Git describe')
    parser.add_option('--git-branch', dest='git_branch', default=None, help='Git branch name')
    parser.add_option('--duk-version', dest='duk_version', default=None, help='Duktape version (e.g. 10203)')
    parser.add_option('--quiet', dest='quiet', action='store_true', default=False, help='Suppress info messages (show warnings)')
    parser.add_option('--verbose', dest='verbose', action='store_true', default=False, help='Show verbose debug messages')
    parser.add_option('--used-stridx-metadata', dest='used_stridx_metadata', help='DUK_STRIDX_xxx used by source/headers, JSON format')
    parser.add_option('--strings-metadata', dest='strings_metadata', help='Default built-in strings metadata file, YAML format')
    parser.add_option('--objects-metadata', dest='objects_metadata', help='Default built-in objects metadata file, YAML format')
    parser.add_option('--active-options', dest='active_options', help='Active config options from genconfig.py, JSON format')
    parser.add_option('--user-builtin-metadata', dest='obsolete_builtin_metadata', default=None, help=optparse.SUPPRESS_HELP)
    parser.add_option('--builtin-file', dest='builtin_files', metavar='FILENAME', action='append', default=[], help='Built-in string/object YAML metadata to be applied over default built-ins (multiple files may be given, applied in sequence)')
    parser.add_option('--ram-support', dest='ram_support', action='store_true', default=False, help='Support RAM strings/objects')
    parser.add_option('--rom-support', dest='rom_support', action='store_true', default=False, help='Support ROM strings/objects (increases output size considerably)')
    parser.add_option('--rom-auto-lightfunc', dest='rom_auto_lightfunc', action='store_true', default=False, help='Convert ROM built-in function properties into lightfuncs automatically whenever possible')
    parser.add_option('--out-header', dest='out_header', help='Output header file')
    parser.add_option('--out-source', dest='out_source', help='Output source file')
    parser.add_option('--out-metadata-json', dest='out_metadata_json', help='Output metadata file')
    parser.add_option('--dev-dump-final-ram-metadata', dest='dev_dump_final_ram_metadata', help='Development option')
    parser.add_option('--dev-dump-final-rom-metadata', dest='dev_dump_final_rom_metadata', help='Development option')
    (opts, args) = parser.parse_args()

    if opts.obsolete_builtin_metadata is not None:
        raise Exception('--user-builtin-metadata has been removed, use --builtin-file instead')

    # Log level.
    if opts.quiet:
        logger.setLevel(logging.WARNING)
    elif opts.verbose:
        logger.setLevel(logging.DEBUG)

    # Options processing.
    build_info = {
        'git_commit': opts.git_commit,
        'git_branch': opts.git_branch,
        'git_describe': opts.git_describe,
        'duk_version': int(opts.duk_version),
    }

    desc = []
    if opts.ram_support:
        desc += [ 'ram built-in support' ]
    if opts.rom_support:
        desc += [ 'rom built-in support' ]
    if opts.rom_auto_lightfunc:
        desc += [ 'rom auto lightfunc' ]
    logger.info('Creating built-in initialization data: ' + ', '.join(desc))

    # Read in metadata files, normalizing and merging as necessary.
    active_opts = {}
    if opts.active_options is not None:
        with open(opts.active_options, 'rb') as f:
            active_opts = json.loads(f.read())
    ram_meta = load_metadata(opts, rom=False, build_info=build_info, active_opts=active_opts)
    rom_meta = load_metadata(opts, rom=True, build_info=build_info, active_opts=active_opts)
    if opts.dev_dump_final_ram_metadata is not None:
        dump_metadata(ram_meta, opts.dev_dump_final_ram_metadata)
    if opts.dev_dump_final_rom_metadata is not None:
        dump_metadata(rom_meta, opts.dev_dump_final_rom_metadata)

    # Create RAM init data bitstreams.
    ramstr_data, ramstr_maxlen = gen_ramstr_initdata_bitpacked(ram_meta)
    ram_native_funcs, ram_natfunc_name_to_natidx = get_ramobj_native_func_maps(ram_meta)

    if opts.ram_support:
        ramobj_data_le = gen_ramobj_initdata_bitpacked(ram_meta, ram_native_funcs, ram_natfunc_name_to_natidx, 'little')
        ramobj_data_be = gen_ramobj_initdata_bitpacked(ram_meta, ram_native_funcs, ram_natfunc_name_to_natidx, 'big')
        ramobj_data_me = gen_ramobj_initdata_bitpacked(ram_meta, ram_native_funcs, ram_natfunc_name_to_natidx, 'mixed')

    # Write source and header files.
    gc_src = dukutil.GenerateC()
    gc_src.emitHeader('genbuiltins.py')
    gc_src.emitLine('#include "duk_internal.h"')
    gc_src.emitLine('')
    gc_src.emitLine('#if defined(DUK_USE_ASSERTIONS)')
    gc_src.emitLine('#define DUK__REFCINIT(refc) 0 /*h_assert_refcount*/, (refc) /*actual*/')
    gc_src.emitLine('#else')
    gc_src.emitLine('#define DUK__REFCINIT(refc) (refc) /*actual*/')
    gc_src.emitLine('#endif')
    gc_src.emitLine('')
    gc_src.emitLine('#if defined(DUK_USE_ROM_STRINGS)')
    if opts.rom_support:
        rom_bi_str_map = rom_emit_strings_source(gc_src, rom_meta)
        rom_emit_object_initializer_types_and_macros(gc_src)
        rom_emit_objects(gc_src, rom_meta, rom_bi_str_map)
    else:
        gc_src.emitLine('#error ROM support not enabled, rerun configure.py with --rom-support')
    gc_src.emitLine('#else /* DUK_USE_ROM_STRINGS */')
    emit_ramstr_source_strinit_data(gc_src, ramstr_data)
    gc_src.emitLine('#endif /* DUK_USE_ROM_STRINGS */')
    gc_src.emitLine('')
    gc_src.emitLine('#if defined(DUK_USE_ROM_OBJECTS)')
    if opts.rom_support:
        gc_src.emitLine('#if !defined(DUK_USE_ROM_STRINGS)')
        gc_src.emitLine('#error DUK_USE_ROM_OBJECTS requires DUK_USE_ROM_STRINGS')
        gc_src.emitLine('#endif')
        gc_src.emitLine('#if defined(DUK_USE_HSTRING_ARRIDX)')
        gc_src.emitLine('#error DUK_USE_HSTRING_ARRIDX is currently incompatible with ROM built-ins')
        gc_src.emitLine('#endif')
    else:
        gc_src.emitLine('#error ROM support not enabled, rerun configure.py with --rom-support')
    gc_src.emitLine('#else /* DUK_USE_ROM_OBJECTS */')
    if opts.ram_support:
        emit_ramobj_source_nativefunc_array(gc_src, ram_native_funcs)  # endian independent
        gc_src.emitLine('#if defined(DUK_USE_DOUBLE_LE)')
        emit_ramobj_source_objinit_data(gc_src, ramobj_data_le)
        gc_src.emitLine('#elif defined(DUK_USE_DOUBLE_BE)')
        emit_ramobj_source_objinit_data(gc_src, ramobj_data_be)
        gc_src.emitLine('#elif defined(DUK_USE_DOUBLE_ME)')
        emit_ramobj_source_objinit_data(gc_src, ramobj_data_me)
        gc_src.emitLine('#else')
        gc_src.emitLine('#error invalid endianness defines')
        gc_src.emitLine('#endif')
    else:
        gc_src.emitLine('#error RAM support not enabled, rerun configure.py with --ram-support')
    gc_src.emitLine('#endif /* DUK_USE_ROM_OBJECTS */')

    gc_hdr = dukutil.GenerateC()
    gc_hdr.emitHeader('genbuiltins.py')
    gc_hdr.emitLine('#if !defined(DUK_BUILTINS_H_INCLUDED)')
    gc_hdr.emitLine('#define DUK_BUILTINS_H_INCLUDED')
    gc_hdr.emitLine('')
    gc_hdr.emitLine('#if defined(DUK_USE_ROM_STRINGS)')
    if opts.rom_support:
        emit_header_stridx_defines(gc_hdr, rom_meta)
        rom_emit_strings_header(gc_hdr, rom_meta)
    else:
        gc_hdr.emitLine('#error ROM support not enabled, rerun configure.py with --rom-support')
    gc_hdr.emitLine('#else /* DUK_USE_ROM_STRINGS */')
    if opts.ram_support:
        emit_header_stridx_defines(gc_hdr, ram_meta)
        emit_ramstr_header_strinit_defines(gc_hdr, ram_meta, ramstr_data, ramstr_maxlen)
    else:
        gc_hdr.emitLine('#error RAM support not enabled, rerun configure.py with --ram-support')
    gc_hdr.emitLine('#endif /* DUK_USE_ROM_STRINGS */')
    gc_hdr.emitLine('')
    gc_hdr.emitLine('#if defined(DUK_USE_ROM_OBJECTS)')
    if opts.rom_support:
        # Currently DUK_USE_ROM_PTRCOMP_FIRST must match our fixed
        # define, and the two must be updated in sync.  Catch any
        # mismatch to avoid difficult to diagnose errors.
        gc_hdr.emitLine('#if !defined(DUK_USE_ROM_PTRCOMP_FIRST)')
        gc_hdr.emitLine('#error missing DUK_USE_ROM_PTRCOMP_FIRST define')
        gc_hdr.emitLine('#endif')
        gc_hdr.emitLine('#if (DUK_USE_ROM_PTRCOMP_FIRST != %dL)' % ROMPTR_FIRST)
        gc_hdr.emitLine('#error DUK_USE_ROM_PTRCOMP_FIRST must match ROMPTR_FIRST in genbuiltins.py (%d), update manually and re-dist' % ROMPTR_FIRST)
        gc_hdr.emitLine('#endif')
        emit_header_native_function_declarations(gc_hdr, rom_meta)
        rom_emit_objects_header(gc_hdr, rom_meta)
    else:
        gc_hdr.emitLine('#error ROM support not enabled, rerun configure.py with --rom-support')
    gc_hdr.emitLine('#else /* DUK_USE_ROM_OBJECTS */')
    if opts.ram_support:
        emit_header_native_function_declarations(gc_hdr, ram_meta)
        emit_ramobj_header_nativefunc_array(gc_hdr, ram_native_funcs)
        emit_ramobj_header_objects(gc_hdr, ram_meta)
        gc_hdr.emitLine('#if defined(DUK_USE_DOUBLE_LE)')
        emit_ramobj_header_initdata(gc_hdr, ramobj_data_le)
        gc_hdr.emitLine('#elif defined(DUK_USE_DOUBLE_BE)')
        emit_ramobj_header_initdata(gc_hdr, ramobj_data_be)
        gc_hdr.emitLine('#elif defined(DUK_USE_DOUBLE_ME)')
        emit_ramobj_header_initdata(gc_hdr, ramobj_data_me)
        gc_hdr.emitLine('#else')
        gc_hdr.emitLine('#error invalid endianness defines')
        gc_hdr.emitLine('#endif')
    else:
        gc_hdr.emitLine('#error RAM support not enabled, rerun configure.py with --ram-support')
    gc_hdr.emitLine('#endif /* DUK_USE_ROM_OBJECTS */')
    gc_hdr.emitLine('#endif /* DUK_BUILTINS_H_INCLUDED */')

    with open(opts.out_source, 'wb') as f:
        f.write(gc_src.getString())
    logger.debug('Wrote built-ins source to ' + opts.out_source)
    with open(opts.out_header, 'wb') as f:
        f.write(gc_hdr.getString())
    logger.debug('Wrote built-ins header to ' + opts.out_header)

    # Write a JSON file with build metadata, e.g. built-in strings.
    ver = long(build_info['duk_version'])
    plain_strs = []
    base64_strs = []
    str_objs = []
    for s in ram_meta['strings_stridx']:  # XXX: provide all lists?
        t1 = bytes_to_unicode(s['str'])
        t2 = unicode_to_bytes(s['str']).encode('base64').strip()
        plain_strs.append(t1)
        base64_strs.append(t2)
        str_objs.append({
            'plain': t1, 'base64': t2, 'define': s['define']
        })
    meta = {
        'comment': 'Metadata for Duktape sources',
        'duk_version': ver,
        'duk_version_string': '%d.%d.%d' % (ver / 10000, (ver / 100) % 100, ver % 100),
        'git_commit': build_info['git_commit'],
        'git_branch': build_info['git_branch'],
        'git_describe': build_info['git_describe'],
        'builtin_strings': plain_strs,
        'builtin_strings_base64': base64_strs,
        'builtin_strings_info': str_objs
    }
    with open(opts.out_metadata_json, 'wb') as f:
        f.write(json.dumps(meta, indent=4, sort_keys=True, ensure_ascii=True))
    logger.debug('Wrote built-ins metadata to ' + opts.out_metadata_json)

if __name__ == '__main__':
    main()
| 43.643771 | 335 | 0.626907 | 18,482 | 139,791 | 4.535169 | 0.073044 | 0.036078 | 0.008351 | 0.012276 | 0.43409 | 0.364595 | 0.314678 | 0.266598 | 0.232585 | 0.211778 | 0 | 0.010101 | 0.253571 | 139,791 | 3,202 | 336 | 43.657402 | 0.793194 | 0.196887 | 0 | 0.307626 | 1 | 0.041792 | 0.282671 | 0.069239 | 0.000862 | 0 | 0.001147 | 0 | 0.026282 | 0 | null | null | 0.002585 | 0.005601 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
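The `compress_rom_ptr()` helper near the top of this chunk assigns each distinct ROM pointer a running 16-bit index starting at `ROMPTR_FIRST`, with `'NULL'` compressing to 0; user pointer-compression macros then map indices back through `duk_rom_compressed_pointers[]`. A minimal standalone sketch of that scheme (names and the `ROMPTR_FIRST` value are illustrative, not taken from the script):

```python
# Sketch of the ROM pointer compression scheme: each distinct pointer
# string gets a running 16-bit index starting at ROMPTR_FIRST, and
# 'NULL' compresses to 0.
ROMPTR_FIRST = 0xf800  # example value; must match DUK_USE_ROM_PTRCOMP_FIRST in practice

romptr_compress_list = []

def compress_rom_ptr(x):
    if x == 'NULL':
        return 0
    try:
        idx = romptr_compress_list.index(x)
    except ValueError:
        romptr_compress_list.append(x)
        idx = len(romptr_compress_list) - 1
    res = ROMPTR_FIRST + idx
    assert res <= 0xffff
    return res

def decompress_rom_ptr(enc):
    # Mirrors what a decode macro would do against the emitted
    # duk_rom_compressed_pointers[] table: index = enc - ROMPTR_FIRST.
    if enc == 0:
        return 'NULL'
    return romptr_compress_list[enc - ROMPTR_FIRST]
```

Re-compressing the same pointer returns the same index, so the emitted table ends up with exactly one slot per distinct pointer.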
6a7c72e3aa5fd62eac7fb2f908a719f69d00bc11 | 455 | py | Python | server/util/stringutil.py | rleir/MediaCloud-Web-Tools | 86fd42959bec2f24c74cf63277da1931a159b218 | [
"Apache-2.0"
] | null | null | null | server/util/stringutil.py | rleir/MediaCloud-Web-Tools | 86fd42959bec2f24c74cf63277da1931a159b218 | [
"Apache-2.0"
] | null | null | null | server/util/stringutil.py | rleir/MediaCloud-Web-Tools | 86fd42959bec2f24c74cf63277da1931a159b218 | [
"Apache-2.0"
] | null | null | null | import logging
from datetime import datetime
from server import mc
logger = logging.getLogger(__name__)

def ids_from_comma_separated_str(comma_separated_string):
    id_list = []
    if len(comma_separated_string) > 0:
        id_list = [int(cid) for cid in comma_separated_string.split(",") if len(cid) > 0]
    return id_list


def trim_solr_date(date_str):
    return datetime.strptime(date_str, mc.SENTENCE_PUBLISH_DATE_FORMAT).strftime("%Y-%m-%d")
| 26.764706 | 90 | 0.749451 | 69 | 455 | 4.594203 | 0.536232 | 0.176656 | 0.189274 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005168 | 0.149451 | 455 | 16 | 91 | 28.4375 | 0.813953 | 0 | 0 | 0 | 0 | 0 | 0.01978 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.272727 | 0.090909 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
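The behavior of the two helpers above can be exercised standalone; note that `trim_solr_date()` depends on `mc.SENTENCE_PUBLISH_DATE_FORMAT`, which is project-specific, so the format string used below is an assumption for illustration:

```python
from datetime import datetime

# Restated from stringutil.py so the example is self-contained.
def ids_from_comma_separated_str(comma_separated_string):
    id_list = []
    if len(comma_separated_string) > 0:
        id_list = [int(cid) for cid in comma_separated_string.split(",") if len(cid) > 0]
    return id_list

# Hypothetical stand-in for mc.SENTENCE_PUBLISH_DATE_FORMAT (assumed format).
SENTENCE_PUBLISH_DATE_FORMAT = '%Y-%m-%d %H:%M:%S'

def trim_solr_date(date_str):
    return datetime.strptime(date_str, SENTENCE_PUBLISH_DATE_FORMAT).strftime("%Y-%m-%d")

print(ids_from_comma_separated_str("1,2,,3"))    # empty fields are skipped -> [1, 2, 3]
print(trim_solr_date("2014-03-07 12:34:56"))     # -> 2014-03-07
```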
6a82b68a06d3bb1399ae461651499d90a157c004 | 154 | py | Python | apps/bitacoras/apps.py | Monse200599/sistema_medico | c220c5278d96bb9cd86f92fbac1bc66d6eebfd56 | [
"MIT"
] | null | null | null | apps/bitacoras/apps.py | Monse200599/sistema_medico | c220c5278d96bb9cd86f92fbac1bc66d6eebfd56 | [
"MIT"
] | null | null | null | apps/bitacoras/apps.py | Monse200599/sistema_medico | c220c5278d96bb9cd86f92fbac1bc66d6eebfd56 | [
"MIT"
] | 1 | 2021-11-30T06:02:11.000Z | 2021-11-30T06:02:11.000Z | from django.apps import AppConfig


class BitacoraConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'apps.bitacoras'
| 22 | 56 | 0.766234 | 18 | 154 | 6.444444 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 154 | 6 | 57 | 25.666667 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0.279221 | 0.188312 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6a84a7bff9b03e218f97fc0a6ce296d3af7b87a4 | 1,260 | py | Python | lib/connectionPool.py | qiuluo-oss/BBScan_py3 | 68d49f780677607a59d97d84cc0d696415e1b2eb | [
"Apache-2.0"
] | 4 | 2022-01-07T13:37:33.000Z | 2022-03-31T03:21:17.000Z | lib/connectionPool.py | qiuluo-oss/BBScan_py3 | 68d49f780677607a59d97d84cc0d696415e1b2eb | [
"Apache-2.0"
] | 1 | 2022-01-27T04:21:58.000Z | 2022-01-27T04:21:58.000Z | lib/connectionPool.py | qiuluo-oss/BBScan_py3 | 68d49f780677607a59d97d84cc0d696415e1b2eb | [
"Apache-2.0"
] | null | null | null | import urllib3
import socket
import struct
import logging
from six.moves.queue import Empty
urllib3.disable_warnings()
logging.getLogger('requests.packages.urllib3.connectionpool').setLevel(logging.CRITICAL)


class HTTPConnPool(urllib3.HTTPConnectionPool):
    def close(self):
        """
        Close all pooled connections and disable the pool.
        """
        # Disable access to the pool
        old_pool, self.pool = self.pool, None
        try:
            while True:
                conn = old_pool.get(block=False)
                if conn:
                    # SO_LINGER with l_onoff=1, l_linger=0 aborts the
                    # connection with an RST instead of lingering.
                    conn.sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
                    conn.close()
        except Empty:
            pass


class HTTPSConnPool(urllib3.HTTPSConnectionPool):
    def close(self):
        """
        Close all pooled connections and disable the pool.
        """
        # Disable access to the pool
        old_pool, self.pool = self.pool, None
        try:
            while True:
                conn = old_pool.get(block=False)
                if conn:
                    # SO_LINGER with l_onoff=1, l_linger=0 aborts the
                    # connection with an RST instead of lingering.
                    conn.sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
                    conn.close()
        except Empty:
            pass
| 27.391304 | 102 | 0.580159 | 140 | 1,260 | 5.157143 | 0.392857 | 0.038781 | 0.066482 | 0.047091 | 0.637119 | 0.637119 | 0.637119 | 0.637119 | 0.637119 | 0.637119 | 0 | 0.010689 | 0.331746 | 1,260 | 45 | 103 | 28 | 0.846793 | 0.12381 | 0 | 0.689655 | 0 | 0 | 0.041667 | 0.037879 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0.068966 | 0.172414 | 0 | 0.310345 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
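The `struct.pack('ii', 1, 0)` call in both `close()` methods above builds a C `linger` struct with `l_onoff=1, l_linger=0`, which makes `close()` abort the connection with an RST rather than linger. A small sketch of just that option handling, with no live connection involved (behavior shown assumes a Linux-style socket layer):

```python
import socket
import struct

# linger struct: l_onoff=1 enables SO_LINGER, l_linger=0 sets a zero
# timeout, so close() aborts the connection with an RST.
linger = struct.pack('ii', 1, 0)
print(struct.unpack('ii', linger))  # (1, 0)

# Applying the option to an (unconnected) socket and reading it back.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, linger)
onoff, l_linger = struct.unpack('ii', s.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 8))
s.close()
```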
6a8beadd52db0ecdf6e545f065fd92c94b4697ec | 319 | py | Python | __init__.py | Kamireddy0308/whatwords-skill | 6fe8afb7031b1a7ac36b88acdedb8760875a8bad | [
"Apache-2.0"
] | null | null | null | __init__.py | Kamireddy0308/whatwords-skill | 6fe8afb7031b1a7ac36b88acdedb8760875a8bad | [
"Apache-2.0"
] | null | null | null | __init__.py | Kamireddy0308/whatwords-skill | 6fe8afb7031b1a7ac36b88acdedb8760875a8bad | [
"Apache-2.0"
] | null | null | null | from mycroft import MycroftSkill, intent_file_handler


class Whatwords(MycroftSkill):
    def __init__(self):
        MycroftSkill.__init__(self)

    @intent_file_handler('whatwords.intent')
    def handle_whatwords(self, message):
        self.speak_dialog('whatwords')


def create_skill():
    return Whatwords()
| 19.9375 | 53 | 0.727273 | 35 | 319 | 6.2 | 0.542857 | 0.092166 | 0.156682 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178683 | 319 | 15 | 54 | 21.266667 | 0.828244 | 0 | 0 | 0 | 0 | 0 | 0.078616 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.111111 | 0.111111 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
6a8bfeda03316a73a3484a47c32f06187d77609c | 433 | py | Python | spongeauth/spongeauth/settings/test.py | felixoi/SpongeAuth | d44ee52d0b35b2e1909c7bf6bad29aa7b4835b26 | [
"MIT"
] | 10 | 2016-11-18T12:37:24.000Z | 2022-03-04T09:25:25.000Z | spongeauth/spongeauth/settings/test.py | felixoi/SpongeAuth | d44ee52d0b35b2e1909c7bf6bad29aa7b4835b26 | [
"MIT"
] | 794 | 2016-11-19T18:34:37.000Z | 2022-03-31T16:49:11.000Z | spongeauth/spongeauth/settings/test.py | PowerNukkit/OreAuth | 96a2926c9601fce6fac471bdb997077f07e8bf9a | [
"MIT"
] | 11 | 2016-11-26T22:30:17.000Z | 2022-03-16T17:20:14.000Z | import os
from .base import *
IS_TESTING = True
for queue in RQ_QUEUES.values():
queue["ASYNC"] = False
from fakeredis import FakeRedis, FakeStrictRedis
import django_rq.queues
django_rq.queues.get_redis_connection = lambda _, strict: FakeStrictRedis() if strict else FakeRedis()
if not os.environ.get("DJANGO_SETTINGS_SKIP_LOCAL", False):
try:
from .local_settings import *
except ImportError:
pass
| 22.789474 | 102 | 0.73903 | 57 | 433 | 5.421053 | 0.596491 | 0.07767 | 0.090615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180139 | 433 | 18 | 103 | 24.055556 | 0.870423 | 0 | 0 | 0 | 0 | 0 | 0.071594 | 0.060046 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.076923 | 0.461538 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
6ab731d6b9fd62c4b104ccccc7548eb19061383c | 107 | py | Python | output/models/ms_data/additional/isdefault002_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 1 | 2021-08-14T17:59:21.000Z | 2021-08-14T17:59:21.000Z | output/models/ms_data/additional/isdefault002_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 4 | 2020-02-12T21:30:44.000Z | 2020-04-15T20:06:46.000Z | output/models/ms_data/additional/isdefault002_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | null | null | null | from output.models.ms_data.additional.isdefault002_xsd.isdefault002 import Root
__all__ = [
    "Root",
]
| 17.833333 | 79 | 0.766355 | 13 | 107 | 5.846154 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064516 | 0.130841 | 107 | 5 | 80 | 21.4 | 0.752688 | 0 | 0 | 0 | 0 | 0 | 0.037383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6adef061ce82827790fa98764cb7a254bbc9c2ca | 3,092 | py | Python | sys/controller/ServerHelper.py | frankdede/CMPUT410W14T07 | 18b88ae8d12459eeb0073c48e0708fb2b175b9ba | [
"Apache-2.0"
] | 1 | 2017-01-04T11:55:21.000Z | 2017-01-04T11:55:21.000Z | sys/controller/ServerHelper.py | frankdede/CMPUT410W14T07 | 18b88ae8d12459eeb0073c48e0708fb2b175b9ba | [
"Apache-2.0"
] | null | null | null | sys/controller/ServerHelper.py | frankdede/CMPUT410W14T07 | 18b88ae8d12459eeb0073c48e0708fb2b175b9ba | [
"Apache-2.0"
] | null | null | null | import json
import mysql.connector
from mysql.connector.errors import Error
from DatabaseAdapter import *
import Utility


class ServerHelper:

    def __init__(self, dbAdapter):
        self.dbAdapter = dbAdapter

    def doesServerExists(self, url):
        cur = self.dbAdapter.getcursor()
        query = ("SELECT sid FROM servers WHERE url = '%s';") % (url)
        try:
            cur.execute(query)
        except mysql.connector.Error as err:
            print("****************************************")
            print("SQLException from doesServerExists():")
            print("Error code:", err.errno)
            print("SQLSTATE value:", err.sqlstate)
            print("Error message:", err.msg)
            print("Query:", query)
            print("****************************************")
            return False
        result = cur.fetchone()
        if (result != None):
            return result[0]
        else:
            return False

    def addServer(self, name, url, local):
        cur = self.dbAdapter.getcursor()
        sid = Utility.getid()
        query = ("INSERT INTO servers VALUES('%s','%s','%s',%s)") % (sid, name, url, local)
        try:
            cur.execute(query)
        except mysql.connector.Error as err:
            print("****************************************")
            print("SQLException from addServer():")
            print("Error code:", err.errno)
            print("SQLSTATE value:", err.sqlstate)
            print("Error message:", err.msg)
            print("Query:", query)
            print("****************************************")
            return None
        if cur.rowcount > 0:
            return sid
        else:
            return False

    def getServerNameBySid(self, sid):
        cur = self.dbAdapter.getcursor()
        query = "SELECT name FROM servers WHERE sid='%s'" % (sid)
        try:
            cur.execute(query)
        except mysql.connector.Error as err:
            print("****************************************")
            print("SQLException from getServerNameBySid():")
            print("Error code:", err.errno)
            print("SQLSTATE value:", err.sqlstate)
            print("Error message:", err.msg)
            print("Query:", query)
            print("****************************************")
            return None
        re = cur.fetchone()
        if (re != None):
            if (len(re) != 0):
                return re[0]
        return None

    def getServerUrlBySid(self, sid):
        cur = self.dbAdapter.getcursor()
        query = "SELECT url FROM servers WHERE sid='%s'" % (sid)
        try:
            cur.execute(query)
        except mysql.connector.Error as err:
            print("****************************************")
            print("SQLException from getServerUrlBySid():")
            print("Error code:", err.errno)
            print("SQLSTATE value:", err.sqlstate)
            print("Error message:", err.msg)
            print("Query:", query)
            print("****************************************")
            return None
        re = cur.fetchone()
        if (re != None):
            if (len(re) != 0):
                return re[0]
        return None
| 31.232323 | 86 | 0.476391 | 288 | 3,092 | 5.100694 | 0.211806 | 0.054459 | 0.043567 | 0.068074 | 0.664398 | 0.664398 | 0.639891 | 0.639891 | 0.581348 | 0.581348 | 0 | 0.002791 | 0.304657 | 3,092 | 98 | 87 | 31.55102 | 0.680465 | 0 | 0 | 0.722892 | 0 | 0 | 0.264877 | 0.125162 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.048193 | null | null | 0.337349 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
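ServerHelper builds every statement by %-interpolating values into the SQL text, which breaks on quoted input and is open to SQL injection. A minimal sketch of the parameter-binding alternative, shown with the stdlib sqlite3 module (`?` placeholders) purely for illustration — MySQL Connector/Python supports the same pattern with `%s` placeholders and a parameter tuple. The table layout and function names here are assumptions, not the project's code.

```python
import sqlite3

# In-memory database standing in for the project's MySQL server.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE servers (sid TEXT, name TEXT, url TEXT, local INTEGER)")

def add_server(cur, sid, name, url, local):
    # Placeholders let the driver escape values safely; no string building.
    cur.execute("INSERT INTO servers VALUES (?, ?, ?, ?)", (sid, name, url, local))
    return cur.rowcount > 0

def does_server_exist(cur, url):
    cur.execute("SELECT sid FROM servers WHERE url = ?", (url,))
    row = cur.fetchone()
    return row[0] if row else False

add_server(cur, "s1", "local", "http://localhost", 1)
print(does_server_exist(cur, "http://localhost"))  # s1
print(does_server_exist(cur, "http://other"))      # False
```

With mysql-connector the only change would be the placeholder style (`%s` instead of `?`); the values still travel separately from the SQL text.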
6ae44c7495a0eebd37bc0b8cfb378103c6f20e7a | 1,672 | py | Python | swaggerconformance/schema/_parameter.py | gavrie/swagger-conformance | 58d970bf9a8886e54de408e705c128884af4d70d | [
"MIT"
] | 55 | 2017-04-21T23:41:20.000Z | 2021-04-27T13:24:22.000Z | swaggerconformance/schema/_parameter.py | gavrie/swagger-conformance | 58d970bf9a8886e54de408e705c128884af4d70d | [
"MIT"
] | 10 | 2018-02-12T18:49:07.000Z | 2020-12-03T12:27:05.000Z | swaggerconformance/schema/_parameter.py | gavrie/swagger-conformance | 58d970bf9a8886e54de408e705c128884af4d70d | [
"MIT"
] | 12 | 2017-10-31T12:39:28.000Z | 2020-07-13T08:23:44.000Z | """
Template for parameters of a Swagger-defined API operation.
"""
import logging
__all__ = ["Parameter"]
log = logging.getLogger(__name__)
class Parameter:
"""A Swagger API operation parameter.
:param swagger_definition: The swagger spec portion defining the parameter.
:type swagger_definition: schema.Primitive
"""
def __init__(self, swagger_definition):
self._swagger_definition = swagger_definition
def __repr__(self):
return "{}(name={!r}, type={!r}, format={!r}, required={!r})".format(
self.__class__.__name__, self.name, self.type, self.format,
self.required)
def strategy(self, value_factory):
"""Generate a hypothesis strategy representing this parameter.
:param value_factory: Factory to generate strategies for values.
:type value_factory: strategies.StrategyFactory
"""
value_template = value_factory.produce(self._swagger_definition)
return value_template.strategy()
@property
def name(self):
"""The name of this parameter, if it has one.
:rtype: str or None
"""
return self._swagger_definition.name
@property
def type(self):
"""The type of this parameter.
:rtype: str
"""
return self._swagger_definition.type
@property
def format(self):
"""The format of this parameter.
:rtype: str or None
"""
return self._swagger_definition.format
@property
def required(self):
"""Whether this parameter is required.
:rtype: bool
"""
return self._swagger_definition.required
| 24.588235 | 79 | 0.639952 | 183 | 1,672 | 5.595628 | 0.31694 | 0.166016 | 0.143555 | 0.105469 | 0.117188 | 0.080078 | 0.080078 | 0.080078 | 0 | 0 | 0 | 0 | 0.261962 | 1,672 | 67 | 80 | 24.955224 | 0.829822 | 0.355263 | 0 | 0.16 | 0 | 0 | 0.06531 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.28 | false | 0 | 0.04 | 0.04 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
6aed848c1a5acf6a2b6e6eaa381ffb9ce908d48f | 2,178 | py | Python | 01-DesenvolvimentoDeSistemas/02-LinguagensDeProgramacao/01-Python/01-ListaDeExercicios/01-Gabarito/039.py | moacirsouza/nadas | ad98d73b4281d1581fd2b2a9d29001acb426ee56 | [
"MIT"
] | 1 | 2020-07-03T13:54:18.000Z | 2020-07-03T13:54:18.000Z | 01-DesenvolvimentoDeSistemas/02-LinguagensDeProgramacao/01-Python/01-ListaDeExercicios/01-Gabarito/039.py | moacirsouza/nadas | ad98d73b4281d1581fd2b2a9d29001acb426ee56 | [
"MIT"
] | null | null | null | 01-DesenvolvimentoDeSistemas/02-LinguagensDeProgramacao/01-Python/01-ListaDeExercicios/01-Gabarito/039.py | moacirsouza/nadas | ad98d73b4281d1581fd2b2a9d29001acb426ee56 | [
"MIT"
] | null | null | null | print("""
039) Write a program that reads a young person's birth year and reports,
according to their age, whether they still have to enlist for military
service, whether it is time to enlist, or whether the enlistment deadline
has already passed. The program must also show how much time is left or
how long past the deadline it is.
""")

### Import the "date" class from the "datetime" module
from datetime import date

### Program input: the "anoDeNascimento" variable receives the input
### value as an integer (int). Note the use of the "strip()" method when
### reading the user's input, as a simple first validation, i.e., removing
### possible whitespace before and/or after the content to be processed
anoDeNascimento = int(input('Enter the year you were born: ').strip())

### The "year" attribute of the date returned by "today()" gives the
### current year
anoAtual = date.today().year

idadeDoJovem = anoAtual - anoDeNascimento
idadeMinimaParaAlistamento = 18

mensagemFinal = """
We are in {} and you are, or will be by the end of the year, {} year(s) old.
""".format(anoAtual, idadeDoJovem)

### The checks are simple: they only verify whether the young person is
### over, under, or exactly 18 years old. According to this criterion the
### "mensagemFinal" variable receives additional information for each case.
if idadeDoJovem > idadeMinimaParaAlistamento:
    prazo = idadeDoJovem - idadeMinimaParaAlistamento
    mensagemFinal += """
The deadline for your enlistment expired {} year(s) ago.
""".format(prazo)
elif idadeDoJovem < idadeMinimaParaAlistamento:
    prazo = idadeMinimaParaAlistamento - idadeDoJovem
    mensagemFinal += """
There are still {} year(s) left until your enlistment.
""".format(prazo)
else:
    mensagemFinal += """
This is the year of your enlistment! For more
information, visit the nearest Military Enlistment Office!
"""

### Finally, only the "mensagemFinal" variable is passed as a parameter to
### the program's output. All the data, calculations and possible
### variations derived from the business rules have already been handled,
### so nothing else needs to be adjusted at this point
print(mensagemFinal)
| 40.333333 | 74 | 0.73921 | 310 | 2,178 | 5.193548 | 0.487097 | 0.007453 | 0.013665 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003859 | 0.167126 | 2,178 | 53 | 75 | 41.09434 | 0.883682 | 0.43067 | 0 | 0.166667 | 0 | 0 | 0.493266 | 0 | 0 | 0 | 0 | 0.018868 | 0 | 1 | 0 | false | 0.066667 | 0.033333 | 0 | 0.033333 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
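The branching in the exercise above boils down to a single subtraction against the minimum age. A pure-function sketch of the same rule — the name `enlistment_gap` is illustrative, not part of the exercise:

```python
ENLISTMENT_AGE = 18  # minimum enlistment age used in the exercise

def enlistment_gap(birth_year, current_year):
    """Years relative to the enlistment deadline: negative means years
    still to go, zero means enlist this year, positive means years late."""
    age = current_year - birth_year
    return age - ENLISTMENT_AGE

print(enlistment_gap(2010, 2024))  # -4: four years to go
print(enlistment_gap(2006, 2024))  # 0: enlistment year
print(enlistment_gap(2000, 2024))  # 6: six years past the deadline
```

Keeping the arithmetic in one small function makes each branch of the message-building code trivially testable.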
0a81b4974790ec06bc095833e81d16783b64c1f4 | 471 | py | Python | setup.py | MMateuSouza/apply_for_jobs | ef1635b6402521bb848e7ebf043de8cdf42f303c | [
"Apache-2.0"
] | null | null | null | setup.py | MMateuSouza/apply_for_jobs | ef1635b6402521bb848e7ebf043de8cdf42f303c | [
"Apache-2.0"
] | null | null | null | setup.py | MMateuSouza/apply_for_jobs | ef1635b6402521bb848e7ebf043de8cdf42f303c | [
"Apache-2.0"
] | null | null | null | from setuptools import find_packages, setup
setup(
name="password-generator-microservice",
description="Apply For Jobs - TOTVs",
version="1.0.0",
author="Mateus Souza",
author_email="mota.mateus13@gmail.com",
packages=find_packages(where="src"),
package_dir={"": "src"},
install_requires=[
"blinker",
"flask",
"flask-apscheduler",
"flask-mongoengine",
"python-dotenv",
"requests",
],
)
| 23.55 | 43 | 0.607219 | 48 | 471 | 5.854167 | 0.8125 | 0.085409 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013966 | 0.239915 | 471 | 19 | 44 | 24.789474 | 0.77095 | 0 | 0 | 0 | 0 | 0 | 0.352442 | 0.11465 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.055556 | 0.055556 | 0 | 0.055556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
0a84dc9da707f5499a72f16466953dff48fe3765 | 10,176 | py | Python | tests/test_cy_ndex.py | specter119/py4cytoscape | 11f968a8ab6518354406c9ed8321f331355b54f0 | [
"MIT"
] | 21 | 2020-09-01T14:31:11.000Z | 2022-03-08T01:16:35.000Z | tests/test_cy_ndex.py | specter119/py4cytoscape | 11f968a8ab6518354406c9ed8321f331355b54f0 | [
"MIT"
] | 50 | 2020-08-21T00:45:46.000Z | 2022-03-20T21:38:37.000Z | tests/test_cy_ndex.py | specter119/py4cytoscape | 11f968a8ab6518354406c9ed8321f331355b54f0 | [
"MIT"
] | 3 | 2021-01-14T08:30:31.000Z | 2021-08-04T07:58:17.000Z | # -*- coding: utf-8 -*-
""" Test functions cy_ndex.py.
"""
"""License:
Copyright 2020 The Cytoscape Consortium
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
import unittest
from test_utils import *
class CyNDExTests(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
_NDEX_USERID = 'cytoscape_test'
_NDEX_PASSWORD = 'cytoscape_rocks'
_NDEX_TEST_USERID = 'cytoscape_test'
_NDEX_TEST_PASSWORD = 'cytoscape_rocks'
_NDEX_SERVER_WAIT_SECS = 10
@unittest.skip('Get_network_ndex_id returns the first network UUID regardless ... it should scan for the network SUID')
@print_entry_exit
def test_get_export_network_ndex_id(self):
# Initialization
load_test_session()
load_test_network('data/yeastHighQuality.sif', make_current=False)
# Verify that unstored networks have a UUID of None
self.assertIsNone(get_network_ndex_id())
self.assertIsNone(get_network_ndex_id(network='yeastHighQuality.sif'))
# Verify that storing the first (and selected) network returns a UUID and it matches what's fetched separately
galFiltered_uuid = export_network_to_ndex(self._NDEX_USERID, self._NDEX_PASSWORD, False)
self.assertIsInstance(galFiltered_uuid, str)
fetched_galFiltered_uuid = get_network_ndex_id()
self.assertIsInstance(fetched_galFiltered_uuid, str)
self.assertEqual(galFiltered_uuid, fetched_galFiltered_uuid)
time.sleep(self._NDEX_SERVER_WAIT_SECS) # Give NDEx a chance to file the network before asking for it again.
# Verify that storing the second (and unselected) network returns a UUID and it matches what's fetched separately
yeast_uuid = export_network_to_ndex(self._NDEX_USERID, self._NDEX_PASSWORD, False, network='yeastHighQuality.sif')
self.assertIsInstance(yeast_uuid, str)
fetched_yeast_uuid = get_network_ndex_id(network='yeastHighQuality.sif')
self.assertIsInstance(fetched_yeast_uuid, str)
# TODO: This fails because get_network_ndex_id returns the first network's UUID regardless ... it should scan for the network's SUID
self.assertEqual(yeast_uuid, fetched_yeast_uuid)
# Verify that bad credentials are caught
self.assertRaises(CyError, export_network_to_ndex, 'BogusUser', self._NDEX_PASSWORD, False)
self.assertRaises(CyError, export_network_to_ndex, self._NDEX_USERID, 'BogusPassword', False)
# Verify that a bad network is caught
self.assertRaises(CyError, get_network_ndex_id, network='BogusNetwork')
self.assertRaises(CyError, export_network_to_ndex, self._NDEX_USERID, self._NDEX_PASSWORD, False, network='BogusNetwork')
# Initialization for subdomain param
load_test_session()
load_test_network('data/yeastHighQuality.sif', make_current=False)
# Verify that storing the first (and selected) network returns a UUID and it matches what's fetched separately
sub_galFiltered_uuid = export_network_to_ndex(self._NDEX_TEST_USERID, self._NDEX_TEST_PASSWORD, False, ndex_url="http://test.ndexbio.org", ndex_version="v2")
self.assertIsInstance(sub_galFiltered_uuid, str)
sub_fetched_galFiltered_uuid = get_network_ndex_id()
self.assertIsInstance(sub_fetched_galFiltered_uuid, str)
self.assertEqual(sub_galFiltered_uuid, sub_fetched_galFiltered_uuid)
time.sleep(self._NDEX_SERVER_WAIT_SECS) # Give NDEx a chance to file the network before asking for it again.
@print_entry_exit
def test_update_network_ndex_id(self):
# TODO: Find out how to test isPublic and metadata
# Initialization
load_test_session()
galFiltered_uuid = export_network_to_ndex(self._NDEX_USERID, self._NDEX_PASSWORD, False)
time.sleep(self._NDEX_SERVER_WAIT_SECS) # Give NDEx a chance to file the network before asking for it again.
# Verify that the network (with all nodes selected) can be updated on NDEx and that the same UUID is returned
all_node_names = node_suid_to_node_name(select_all_nodes())
updated_galFiltered_uuid = update_network_in_ndex(self._NDEX_USERID, self._NDEX_PASSWORD, False)
self.assertIsInstance(updated_galFiltered_uuid, str)
self.assertEqual(updated_galFiltered_uuid, galFiltered_uuid)
time.sleep(self._NDEX_SERVER_WAIT_SECS) # Give NDEx a chance to file the network before asking for it again.
close_session(False)
# Verify that when the network is reloaded, it still has all nodes selected and the same UUID
fetched_galFiltered_suid = import_network_from_ndex(updated_galFiltered_uuid, self._NDEX_USERID, self._NDEX_PASSWORD)
self.assertIsInstance(fetched_galFiltered_suid, int)
selected_nodes = get_selected_nodes(network=fetched_galFiltered_suid)
self.assertSetEqual(set(selected_nodes), set(all_node_names))
# Verify that bad credentials are caught
self.assertRaises(CyError, update_network_in_ndex, 'BogusUser', self._NDEX_PASSWORD, False)
self.assertRaises(CyError, update_network_in_ndex, self._NDEX_USERID, 'BogusPassword', False)
# Verify that a bad network is caught
self.assertRaises(CyError, update_network_in_ndex, self._NDEX_USERID, self._NDEX_PASSWORD, False, network='BogusNetwork')
# Initialization for subdomain param
load_test_session()
sub_galFiltered_uuid = export_network_to_ndex(self._NDEX_TEST_USERID, self._NDEX_TEST_PASSWORD, False, ndex_url="http://test.ndexbio.org", ndex_version="v2")
time.sleep(self._NDEX_SERVER_WAIT_SECS) # Give NDEx a chance to file the network before asking for it again.
# Verify that the network (with all nodes selected) can be updated on NDEx and that the same UUID is returned
sub_all_node_names = node_suid_to_node_name(select_all_nodes())
        sub_updated_galFiltered_uuid = update_network_in_ndex(self._NDEX_TEST_USERID, self._NDEX_TEST_PASSWORD, False, ndex_url="http://test.ndexbio.org", ndex_version="v2")
self.assertIsInstance(sub_updated_galFiltered_uuid, str)
self.assertEqual(sub_updated_galFiltered_uuid, sub_galFiltered_uuid)
time.sleep(self._NDEX_SERVER_WAIT_SECS) # Give NDEx a chance to file the network before asking for it again.
close_session(False)
# Verify that when the network is reloaded, it still has all nodes selected and the same UUID
sub_fetched_galFiltered_suid = import_network_from_ndex(sub_updated_galFiltered_uuid, self._NDEX_TEST_USERID, self._NDEX_TEST_PASSWORD, ndex_url="http://test.ndexbio.org", ndex_version="v2")
self.assertIsInstance(sub_fetched_galFiltered_suid, int)
sub_selected_nodes = get_selected_nodes(network=sub_fetched_galFiltered_suid)
self.assertSetEqual(set(sub_selected_nodes), set(sub_all_node_names))
@print_entry_exit
def test_import_network_from_ndex(self):
# TODO: Find out how to test accessKey
# Initialization
load_test_session()
galFiltered_uuid = export_network_to_ndex(self._NDEX_USERID, self._NDEX_PASSWORD, False)
all_node_names = get_all_nodes()
close_session(False)
time.sleep(self._NDEX_SERVER_WAIT_SECS) # Give NDEx a chance to file the network before asking for it again.
# Verify that the network can be loaded from NDEx and it has the same nodes
fetched_galFiltered_suid = import_network_from_ndex(galFiltered_uuid, self._NDEX_USERID, self._NDEX_PASSWORD)
self.assertIsInstance(fetched_galFiltered_suid, int)
all_fetched_node_names = get_all_nodes(fetched_galFiltered_suid)
self.assertSetEqual(set(all_fetched_node_names), set(all_node_names))
# Verify that bad credentials are caught
self.assertRaises(CyError, import_network_from_ndex, galFiltered_uuid, 'BogusUser', self._NDEX_PASSWORD)
self.assertRaises(CyError, import_network_from_ndex, galFiltered_uuid, self._NDEX_USERID, 'BogusPassword')
self.assertRaises(CyError, import_network_from_ndex, galFiltered_uuid, access_key='BogusKey')
# Initialization for subdomain param
load_test_session()
sub_galFiltered_uuid = export_network_to_ndex(self._NDEX_TEST_USERID, self._NDEX_TEST_PASSWORD, False, ndex_url="http://test.ndexbio.org", ndex_version="v2")
sub_all_node_names = get_all_nodes()
close_session(False)
time.sleep(self._NDEX_SERVER_WAIT_SECS) # Give NDEx a chance to file the network before asking for it again.
# Verify that the network can be loaded from test server and it has the same nodes
sub_fetched_galFiltered_suid = import_network_from_ndex(sub_galFiltered_uuid, self._NDEX_TEST_USERID, self._NDEX_TEST_PASSWORD, ndex_url="http://test.ndexbio.org", ndex_version="v2")
self.assertIsInstance(sub_fetched_galFiltered_suid, int)
sub_all_fetched_node_names = get_all_nodes(sub_fetched_galFiltered_suid)
self.assertSetEqual(set(sub_all_fetched_node_names), set(sub_all_node_names))
if __name__ == '__main__':
unittest.main()
| 59.508772 | 198 | 0.755994 | 1,395 | 10,176 | 5.192115 | 0.162007 | 0.048599 | 0.028994 | 0.026232 | 0.768052 | 0.734916 | 0.678034 | 0.64048 | 0.603617 | 0.522712 | 0 | 0.001554 | 0.177673 | 10,176 | 170 | 199 | 59.858824 | 0.864006 | 0.204009 | 0 | 0.357143 | 0 | 0 | 0.077411 | 0.007208 | 0 | 0 | 0 | 0.005882 | 0.336735 | 1 | 0.05102 | false | 0.255102 | 0.102041 | 0 | 0.214286 | 0.030612 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
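The tests above pause for a fixed `_NDEX_SERVER_WAIT_SECS` after each NDEx write. A common alternative is to poll with a timeout instead of sleeping a fixed interval, so tests finish as soon as the server catches up. A minimal sketch — the `wait_until` helper and its arguments are illustrative, not part of py4cytoscape:

```python
import time

def wait_until(predicate, timeout=10.0, interval=0.5):
    """Poll predicate() until it returns truthy or the timeout elapses.
    Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Example: a condition that becomes true after a few polls.
state = {"calls": 0}
def ready():
    state["calls"] += 1
    return state["calls"] >= 3

print(wait_until(ready, timeout=5.0, interval=0.01))  # True
```

In the tests, the predicate could attempt `get_network_ndex_id()` (or a lightweight NDEx read) and return truthy once the stored network is visible.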
0a873319c0c3be7b41fd0324bb9d6e6e0c88f30c | 629 | py | Python | FatherSon/HelloWorld2_source_code/listing_11-5.py | axetang/AxePython | 3b517fa3123ce2e939680ad1ae14f7e602d446a6 | [
"Apache-2.0"
] | 1 | 2019-01-04T05:47:50.000Z | 2019-01-04T05:47:50.000Z | FatherSon/HelloWorld2_source_code/listing_11-5.py | axetang/AxePython | 3b517fa3123ce2e939680ad1ae14f7e602d446a6 | [
"Apache-2.0"
] | null | null | null | FatherSon/HelloWorld2_source_code/listing_11-5.py | axetang/AxePython | 3b517fa3123ce2e939680ad1ae14f7e602d446a6 | [
"Apache-2.0"
] | null | null | null | # Listing_11-5.py
# Copyright Warren & Carter Sande, 2013
# Released under MIT license http://www.opensource.org/licenses/mit-license.php
# Version $version ----------------------------
# Printing the loop variables with nested loops
numBlocks = int(raw_input('How many blocks of stars do you want? '))
for block in range(1, numBlocks + 1):
print 'block = ', block #Displays variables
for line in range(1, block * 2 ):
for star in range(1, (block + line) * 2):
print '*',
print ' line = ', line, 'star = ', star #Displays variables
print
| 37 | 82 | 0.575517 | 78 | 629 | 4.615385 | 0.628205 | 0.058333 | 0.066667 | 0.072222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028634 | 0.278219 | 629 | 16 | 83 | 39.3125 | 0.764317 | 0.416534 | 0 | 0 | 0 | 0 | 0.184211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
0a9059d46b658fd4491e327e5ce4b00a4ce3440a | 1,385 | py | Python | tests/test_entities/test_trace.py | ed741/PathBench | 50fe138eb1f824f49fe1a862705e435a1c3ec3ae | [
"BSD-3-Clause"
] | 46 | 2020-12-25T04:09:15.000Z | 2022-03-25T12:32:42.000Z | tests/test_entities/test_trace.py | ed741/PathBench | 50fe138eb1f824f49fe1a862705e435a1c3ec3ae | [
"BSD-3-Clause"
] | 36 | 2020-12-21T16:10:02.000Z | 2022-01-03T01:42:01.000Z | tests/test_entities/test_trace.py | judicaelclair/PathBenchURO | 101e67674efdfa8e27e1cf7787dac9fdf99552fe | [
"BSD-3-Clause"
] | 11 | 2021-01-06T23:34:12.000Z | 2022-03-21T17:21:47.000Z | import unittest
import copy
from algorithms.configuration.entities.entity import Entity
from algorithms.configuration.entities.trace import Trace
from structures import Point
class TestTrace(unittest.TestCase):
def test_copy(self) -> None:
entity1: Trace = Trace(Point(2, 3))
entity2: Trace = copy.copy(entity1)
self.assertEqual(entity1, entity2)
def test_deep_copy(self) -> None:
entity1: Trace = Trace(Point(2, 3))
entity2: Trace = copy.deepcopy(entity1)
self.assertEqual(entity1, entity2)
def test_str(self) -> None:
entity: Trace = Trace(Point(2, 3))
self.assertEqual("Trace: {position: Point(2, 3)}", str(entity))
def test_eq(self) -> None:
entity1: Trace = Trace(Point(2, 3))
entity2: Trace = Trace(Point(2, 3))
self.assertEqual(entity1, entity2)
def test_ne_pos(self) -> None:
entity1: Trace = Trace(Point(2, 3))
entity2: Trace = Trace(Point(2, 5))
self.assertNotEqual(entity1, entity2)
def test_ne_all(self) -> None:
entity1: Trace = Trace(Point(2, 3))
entity2: Trace = Trace(Point(1, 15))
self.assertNotEqual(entity1, entity2)
def test_ne_instance(self) -> None:
entity1: Trace = Trace(Point(2, 3))
entity2: Entity = Entity(Point(2, 3), 1)
self.assertNotEqual(entity1, entity2)
| 31.477273 | 71 | 0.641155 | 175 | 1,385 | 5.011429 | 0.205714 | 0.075257 | 0.171038 | 0.164196 | 0.622577 | 0.620296 | 0.596351 | 0.350057 | 0.350057 | 0.305587 | 0 | 0.049057 | 0.234657 | 1,385 | 43 | 72 | 32.209302 | 0.778302 | 0 | 0 | 0.363636 | 0 | 0 | 0.021661 | 0 | 0 | 0 | 0 | 0 | 0.212121 | 1 | 0.212121 | false | 0 | 0.151515 | 0 | 0.393939 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0a9ccdeb40626e361f7144170f6a9db467787bac | 417 | py | Python | src/rich_click/__main__.py | harens/rich-click | 9ff60eef8ad27d210ceaa9748df328c00d28487e | [
"MIT"
] | null | null | null | src/rich_click/__main__.py | harens/rich-click | 9ff60eef8ad27d210ceaa9748df328c00d28487e | [
"MIT"
] | null | null | null | src/rich_click/__main__.py | harens/rich-click | 9ff60eef8ad27d210ceaa9748df328c00d28487e | [
"MIT"
] | null | null | null | """
Entry-point module for the command line prefixer,
called in case you use `python -m rich_click`.
Why does this file exist, and why `__main__`? For more info, read:
- https://www.python.org/dev/peps/pep-0338/
- https://docs.python.org/3/using/cmdline.html#cmdoption-m
"""
from rich_click.cli import main
if __name__ == "__main__":
# main will run a Click command which will either exit or raise
main()
| 26.0625 | 67 | 0.719424 | 69 | 417 | 4.144928 | 0.768116 | 0.062937 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014286 | 0.160671 | 417 | 15 | 68 | 27.8 | 0.802857 | 0.793765 | 0 | 0 | 0 | 0 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
0aa3d50febd9aa8cd29d83f827149973ff83da6e | 349 | py | Python | finished/python_principles/01_capital_indexes.py | UltiRequiem/daily-python-practice | 31f72c45378be90b8fcadd30d7042819ee551a17 | [
"MIT"
] | 8 | 2021-05-29T23:30:12.000Z | 2021-09-24T03:25:44.000Z | finished/python_principles/01_capital_indexes.py | UltiRequiem/daily-python-practice | 31f72c45378be90b8fcadd30d7042819ee551a17 | [
"MIT"
] | null | null | null | finished/python_principles/01_capital_indexes.py | UltiRequiem/daily-python-practice | 31f72c45378be90b8fcadd30d7042819ee551a17 | [
"MIT"
] | 6 | 2021-06-02T14:20:24.000Z | 2021-08-19T00:49:26.000Z | def capital_indexes(string: str) -> list:
return [index for index, char in enumerate(string) if char.isupper()]
# return [letter for letter in range(len(indexes)) if indexes[letter].isupper()]
def tests() -> None:
print(capital_indexes("mYtESt")) # [1, 3, 4]
print(capital_indexes("owO"))
if __name__ == "__main__":
tests()
| 26.846154 | 84 | 0.661891 | 47 | 349 | 4.680851 | 0.574468 | 0.190909 | 0.172727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01049 | 0.180516 | 349 | 12 | 85 | 29.083333 | 0.758741 | 0.252149 | 0 | 0 | 0 | 0 | 0.065891 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0 | 0.142857 | 0.428571 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
0aa8c1e005e4db88fafad5d7feabe99052e8ff04 | 291 | py | Python | novice/02-02/Lat_test.py | septiannurtrir/praxis-academy | 1ef7f959c372ae991d74ccd373123142c2fbc542 | [
"MIT"
] | 1 | 2019-08-27T17:06:13.000Z | 2019-08-27T17:06:13.000Z | novice/02-02/Lat_test.py | septiannurtrir/praxis-academy | 1ef7f959c372ae991d74ccd373123142c2fbc542 | [
"MIT"
] | null | null | null | novice/02-02/Lat_test.py | septiannurtrir/praxis-academy | 1ef7f959c372ae991d74ccd373123142c2fbc542 | [
"MIT"
] | null | null | null | import unittest
class TestSum(unittest.TestCase):
def test_sum(self):
self.assertEqual(sum([5, 5, 5]), 15, "Should be 15")
def test_sum_tuple(self):
self.assertEqual(sum([5, 4, 5]), 15, "Should be 15")
if __name__ == "__main__":
unittest.main() | 26.454545 | 60 | 0.597938 | 40 | 291 | 4.075 | 0.475 | 0.08589 | 0.122699 | 0.269939 | 0.441718 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06422 | 0.250859 | 291 | 11 | 61 | 26.454545 | 0.683486 | 0 | 0 | 0 | 0 | 0 | 0.109589 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | false | 0 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0aa943f93c8076586de1c6d8ffe61f9a5d8cefe8 | 485 | py | Python | flask_app/api.py | vitorleal/flask-tests | 2165c39327e889c0c1749dbaf443dd2267e6401f | [
"Unlicense"
] | null | null | null | flask_app/api.py | vitorleal/flask-tests | 2165c39327e889c0c1749dbaf443dd2267e6401f | [
"Unlicense"
] | null | null | null | flask_app/api.py | vitorleal/flask-tests | 2165c39327e889c0c1749dbaf443dd2267e6401f | [
"Unlicense"
] | null | null | null | from flask_app import app, db
from flask.ext import restful
from models import Advertiser
from flask.ext.login import current_user, login_required
import json
#Create the Rest api App
api = restful.Api(app)
'''
Rest api
'''
#Users
class CurentUser(restful.Resource):
def get(self):
if current_user.is_anonymous():
return { 'error': 'no_user' }
else:
return json.loads(current_user.to_json())
api.add_resource(CurentUser, '/api/user')
| 22.045455 | 56 | 0.690722 | 68 | 485 | 4.794118 | 0.514706 | 0.082822 | 0.07362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206186 | 485 | 21 | 57 | 23.095238 | 0.846753 | 0.057732 | 0 | 0 | 0 | 0 | 0.047836 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.384615 | 0 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0aaa2496c5fef3086d7e6b6df658d829f86b458b | 13,117 | py | Python | examples/section3.1-1.py | charleswhchan/dwave-qubo-ocean-examples | 1286d5189046ecf1a4d838da55ec2ee38718d6bc | [
"Apache-2.0"
] | 3 | 2019-04-20T14:14:38.000Z | 2020-02-19T22:58:09.000Z | examples/section3.1-1.py | charleswhchan/dwave-qubo-ocean-examples | 1286d5189046ecf1a4d838da55ec2ee38718d6bc | [
"Apache-2.0"
] | 1 | 2019-12-04T13:22:14.000Z | 2019-12-04T16:40:30.000Z | examples/section3.1-1.py | charleswhchan/dwave-qubo-ocean-examples | 1286d5189046ecf1a4d838da55ec2ee38718d6bc | [
"Apache-2.0"
] | 2 | 2019-12-04T16:29:57.000Z | 2020-12-09T12:15:06.000Z | """
Section 3.1 The Number Partitioning Problem
Partition a set of numbers into two subsets such that the subset sums are as close to each other as possible.
Test list size 50 - small enough to be solved on the D-Wave 2000Q
"""
import dimod
import random
import time
from dwave.system.samplers import DWaveSampler
from dwave.system.composites import EmbeddingComposite
def generate_numbers(num_numbers):
random.seed(51229)
numbers = random.sample(range(1, 1000), num_numbers)
return numbers
def to_bqm(numbers):
c = sum(numbers)
    c_square = c ** 2  # c squared (computed but not used below)
linear = {}
quadratic = {}
offset = 0.0
vartype = dimod.BINARY
for index, value in enumerate(numbers):
linear[index + 1] = value * (value - c)
for index1, value1 in enumerate(numbers[:-1]):
for index2 in range(index1 + 1, len(numbers)):
value = value1 * numbers[index2]
idx = (index1 + 1, index2 + 1)
quadratic[idx] = quadratic[tuple(reversed(idx))] = value
bqm = dimod.BinaryQuadraticModel(linear, quadratic, offset, vartype)
print(len(linear))
print(len(quadratic))
return bqm
def solve(sampler, bqm, num_reads=None):
params = {}
if num_reads:
params["num_reads"] = num_reads
return sampler.sample(bqm, **params)
def split_numbers_list(numbers, result):
list1 = []
list2 = []
for key, include_in_list in result.items():
index = key - 1
if include_in_list:
list1.append(numbers[index])
else:
list2.append(numbers[index])
return list1, list2
def print_result(sample_set):
for sample in sample_set.samples():
list1, list2 = split_numbers_list(numbers, sample)
print(
"list1: {}, sum: {}, list2: {}, sum: {}".format(
list1, sum(list1), list2, sum(list2)
)
)
exact_solver = dimod.ExactSolver()
simulated_annealing_sampler = dimod.SimulatedAnnealingSampler()
dwave_sampler = EmbeddingComposite(DWaveSampler())
print("#" * 80)
numbers = generate_numbers(50) # generate a list of numbers to be split into equal sums
bqm = to_bqm(numbers)
#
# ExactSolver fails when list has many items eg. len(numbers) == 100
#
# start = time.time()
# sample_set = solve(exact_solver, bqm)
# end = time.time()
# print "Using ExactSolver (elapsed time: {}s)".format(end-start)
# sample_set = sample_set.truncate(5)
# print sample_set
# print_result(sample_set)
# print ""
start = time.time()
sample_set = solve(simulated_annealing_sampler, bqm)
end = time.time()
print("Using SimulatedAnnealingSampler (elapsed time: {}s)".format(end - start))
sample_set = sample_set.truncate(5)
print(sample_set)
print_result(sample_set)
print("")
# Using Simulated (elapsed time: 15.2062799931s)
# 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ... 50 energy num_oc.
# 0 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 ... 0 -134235392.0 1
# 1 1 0 1 1 0 0 1 1 1 1 0 1 1 1 1 1 ... 0 -134235392.0 1
# 2 0 0 0 0 0 0 0 1 0 1 1 0 1 1 1 1 ... 1 -134235332.0 1
# 3 1 1 1 1 1 1 1 1 1 1 0 0 1 0 0 1 ... 0 -134235315.0 1
# 4 1 1 1 1 0 1 1 0 0 1 1 0 1 1 0 0 ... 1 -134235200.0 1
# ['BINARY', 5 rows, 5 samples, 50 variables]
# list1: [871, 448, 112, 490, 834, 608, 488, 295, 455, 174, 911, 107, 67, 324, 88, 902, 209, 256, 500, 523, 102, 71, 696, 266, 280, 86, 417, 267, 206, 531], sum: 11584, list2: [977, 588, 239, 264, 766, 980, 762, 345, 170, 998, 993, 514, 673, 564, 561, 24, 6, 566, 996, 602], sum: 11588
# list1: [871, 112, 490, 488, 295, 455, 174, 977, 107, 67, 324, 588, 239, 762, 345, 500, 523, 102, 514, 71, 673, 696, 266, 24, 86, 267, 6, 566, 996], sum: 11584, list2: [448, 834, 608, 911, 264, 88, 766, 902, 980, 209, 170, 256, 998, 993, 564, 280, 561, 417, 206, 531, 602], sum: 11588
# list1: [295, 174, 911, 107, 67, 324, 588, 239, 264, 902, 980, 762, 170, 500, 998, 71, 696, 266, 561, 24, 531, 566, 996, 602], sum: 11594, list2: [871, 448, 112, 490, 834, 608, 488, 455, 977, 88, 766, 209, 345, 256, 523, 102, 993, 514, 673, 564, 280, 86, 417, 267, 6, 206], sum: 11578
# list1: [871, 448, 112, 490, 834, 608, 488, 295, 455, 174, 107, 588, 239, 264, 88, 980, 345, 256, 500, 993, 514, 673, 564, 86, 417, 206], sum: 11595, list2: [911, 977, 67, 324, 766, 902, 762, 209, 170, 523, 998, 102, 71, 696, 266, 280, 561, 24, 267, 6, 531, 566, 996, 602], sum: 11577
# list1: [871, 448, 112, 490, 608, 488, 174, 911, 107, 67, 264, 88, 209, 345, 170, 102, 993, 514, 673, 564, 266, 561, 24, 86, 267, 6, 566, 996, 602], sum: 11572, list2: [834, 295, 455, 977, 324, 588, 239, 766, 902, 980, 762, 256, 500, 523, 998, 71, 696, 280, 417, 206, 531], sum: 11600
start = time.time()
sample_set = solve(dwave_sampler, bqm, num_reads=10)
end = time.time()
print("Using DWaveSampler (elapsed time: {}s)".format(end - start))
print(sample_set)
print_result(sample_set)
print("")
# Using DWaveSampler (elapsed time: 5.94733715057s)
# 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ... 50 energy num_oc. ...
# 6 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 ... 0 -134229771.0 1 ...
# 4 0 0 0 1 0 1 1 0 1 0 0 0 1 1 0 ... 0 -134187872.0 1 ...
# 0 0 1 1 1 1 1 1 0 1 0 1 0 0 1 0 ... 0 -133890827.0 1 ...
# 2 0 1 1 0 0 1 1 0 1 0 1 0 1 0 0 ... 0 -133853472.0 1 ...
# 9 0 0 0 1 1 1 1 0 1 1 0 0 0 0 1 ... 0 -133832171.0 1 ...
# 3 0 0 1 1 1 0 0 0 1 0 1 0 1 1 0 ... 0 -133666880.0 1 ...
# 5 0 1 0 0 0 0 1 1 1 1 0 0 0 0 0 ... 0 -132553187.0 1 ...
# 8 0 0 1 1 1 1 1 1 0 1 1 1 1 1 1 ... 0 -132482420.0 1 ...
# 7 0 0 0 1 1 1 0 1 1 1 1 1 0 1 1 ... 1 -120751812.0 1 ...
# 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 0 ... 1 -113687307.0 1 ...
# ['BINARY', 10 rows, 10 samples, 50 variables]
# list1: [112, 490, 834, 608, 488, 295, 455, 174, 911, 588, 239, 88, 766, 980, 762, 102, 673, 564, 266, 561, 86, 417, 206, 996], sum: 11661, list2: [871, 448, 977, 107, 67, 324, 264, 902, 209, 345, 170, 256, 500, 523, 998, 993, 514, 71, 696, 280, 24, 267, 6, 531, 566, 602], sum: 11511
# list1: [490, 608, 488, 455, 107, 67, 264, 88, 766, 980, 762, 209, 345, 256, 500, 998, 102, 993, 71, 673, 696, 564, 266, 280, 86, 417, 267, 6], sum: 11804, list2: [871, 448, 112, 834, 295, 174, 911, 977, 324, 588, 239, 902, 170, 523, 514, 561, 24, 206, 531, 566, 996, 602], sum: 11368
# list1: [448, 112, 490, 834, 608, 488, 455, 911, 67, 980, 762, 500, 102, 993, 71, 673, 266, 561, 24, 86, 6, 566, 996], sum: 10999, list2: [871, 295, 174, 977, 107, 324, 588, 239, 264, 88, 766, 902, 209, 345, 170, 256, 523, 998, 514, 696, 564, 280, 417, 267, 206, 531, 602], sum: 12173
# list1: [448, 112, 608, 488, 455, 911, 107, 264, 980, 762, 209, 345, 500, 523, 993, 673, 696, 266, 561, 86, 417, 267, 6, 531, 996], sum: 12204, list2: [871, 490, 834, 295, 174, 977, 67, 324, 588, 239, 88, 766, 902, 170, 256, 998, 102, 514, 71, 564, 280, 24, 206, 566, 602], sum: 10968
# list1: [490, 834, 608, 488, 455, 174, 324, 588, 239, 264, 88, 980, 762, 170, 500, 523, 993, 514, 71, 673, 564, 266, 280, 24, 86, 267, 996], sum: 12221, list2: [871, 448, 112, 295, 911, 977, 107, 67, 766, 902, 209, 345, 256, 998, 102, 696, 561, 417, 6, 206, 531, 566, 602], sum: 10951
# list1: [112, 490, 834, 455, 911, 107, 67, 264, 88, 766, 980, 762, 209, 170, 256, 998, 102, 993, 673, 564, 561, 86, 417, 267, 6, 206, 996], sum: 12340, list2: [871, 448, 608, 488, 295, 174, 977, 324, 588, 239, 902, 345, 500, 523, 514, 71, 696, 266, 280, 24, 531, 566, 602], sum: 10832
# list1: [448, 488, 295, 455, 174, 588, 239, 766, 980, 762, 345, 170, 256, 500, 998, 102, 514, 71, 673, 696, 564, 266, 561, 86, 417, 267, 206, 996], sum: 12883, list2: [871, 112, 490, 834, 608, 911, 977, 107, 67, 324, 264, 88, 902, 209, 523, 993, 280, 24, 6, 531, 566, 602], sum: 10289
# list1: [112, 490, 834, 608, 488, 295, 174, 911, 977, 107, 67, 324, 588, 239, 88, 980, 762, 209, 170, 500, 102, 514, 266, 561, 86, 417, 267, 6, 206, 566, 996], sum: 12910, list2: [871, 448, 455, 264, 766, 902, 345, 256, 523, 998, 993, 71, 673, 696, 564, 280, 24, 531, 602], sum: 10262
# list1: [490, 834, 608, 295, 455, 174, 911, 977, 67, 324, 588, 239, 264, 902, 980, 762, 170, 500, 523, 102, 993, 673, 696, 280, 561, 86, 206, 996, 602], sum: 15258, list2: [871, 448, 112, 488, 107, 88, 766, 209, 345, 256, 998, 514, 71, 564, 266, 24, 417, 267, 6, 531, 566], sum: 7914
# list1: [448, 490, 834, 608, 488, 295, 455, 174, 911, 977, 107, 67, 588, 239, 264, 88, 766, 980, 209, 345, 170, 256, 998, 102, 993, 514, 673, 696, 561, 417, 267, 6, 531, 602], sum: 16119, list2: [871, 112, 324, 902, 762, 500, 523, 71, 564, 266, 280, 24, 86, 206, 566, 996], sum: 7053
#
# The result does not look optimal, let's try again with larger num_reads
#
start = time.time()
sample_set = solve(dwave_sampler, bqm, num_reads=1000)
end = time.time()
print("Using DWaveSampler (elapsed time: {}s)".format(end - start))
sample_set = sample_set.truncate(100)
print(sample_set)
print_result(sample_set)
print("")
# Using DWaveSampler (elapsed time: 11.7918388844s)
# 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ... 50 energy num_oc. ...
# 0 1 0 0 0 0 1 0 1 0 1 1 0 0 0 1 ... 1 -134235392.0 1 ...
# 1 0 0 1 1 0 1 1 1 0 0 1 0 1 0 0 ... 1 -134235380.0 1 ...
# 2 1 1 1 0 0 1 0 0 0 1 1 0 1 0 0 ... 1 -134235380.0 1 ...
# 3 0 1 1 0 1 0 0 1 0 0 1 1 1 0 1 ... 1 -134235380.0 1 ...
# 4 0 0 1 0 1 0 0 1 1 1 0 1 0 0 0 ... 1 -134235275.0 1 ...
# 5 0 1 1 0 0 0 0 1 0 1 0 0 0 0 1 ... 1 -134235107.0 1 ...
# 6 1 0 1 1 0 1 0 0 1 0 0 0 1 0 0 ... 1 -134235072.0 1 ...
# 7 1 0 1 0 1 0 0 0 0 1 1 0 0 1 1 ... 1 -134235035.0 1 ...
# 8 1 0 1 1 0 1 1 1 0 1 0 0 1 0 1 ... 0 -134234720.0 1 ...
# 9 0 1 1 0 0 0 0 1 0 1 1 0 0 0 0 ... 1 -134234435.0 1 ...
# 10 0 1 1 1 0 0 1 1 0 1 1 0 0 0 0 ... 1 -134234372.0 1 ...
# ...
# ['BINARY', 100 rows, 100 samples, 50 variables]
# list1: [871, 608, 295, 174, 911, 324, 239, 766, 980, 209, 345, 500, 523, 514, 71, 673, 696, 266, 561, 417, 267, 206, 566, 602], sum: 11584, list2: [448, 112, 490, 834, 488, 455, 977, 107, 67, 588, 264, 88, 902, 762, 170, 256, 998, 102, 993, 564, 280, 24, 86, 6, 531, 996], sum: 11588
# list1: [112, 490, 608, 488, 295, 911, 107, 239, 264, 766, 980, 762, 209, 345, 170, 256, 500, 523, 998, 102, 673, 561, 417, 6, 206, 602], sum: 11590, list2: [871, 448, 834, 455, 174, 977, 67, 324, 588, 88, 902, 993, 514, 71, 696, 564, 266, 280, 24, 86, 267, 531, 566, 996], sum: 11582
# list1: [871, 448, 112, 608, 174, 911, 107, 588, 239, 766, 902, 762, 209, 345, 170, 998, 673, 564, 266, 86, 417, 206, 566, 602], sum: 11590, list2: [490, 834, 488, 295, 455, 977, 67, 324, 264, 88, 980, 256, 500, 523, 102, 993, 514, 71, 696, 280, 561, 24, 267, 6, 531, 996], sum: 11582
# list1: [448, 112, 834, 295, 911, 977, 107, 324, 588, 88, 766, 902, 209, 345, 500, 514, 266, 86, 417, 206, 531, 566, 996, 602], sum: 11590, list2: [871, 490, 608, 488, 455, 174, 67, 239, 264, 980, 762, 170, 256, 523, 998, 102, 993, 71, 673, 696, 564, 280, 561, 24, 267, 6], sum: 11582
# list1: [112, 834, 295, 455, 174, 977, 588, 239, 88, 766, 980, 209, 345, 523, 514, 673, 696, 561, 417, 531, 996, 602], sum: 11575, list2: [871, 448, 490, 608, 488, 911, 107, 67, 324, 264, 902, 762, 170, 256, 500, 998, 102, 993, 71, 564, 266, 280, 24, 86, 267, 6, 206, 566], sum: 11597
# list1: [448, 112, 295, 174, 324, 588, 239, 766, 902, 209, 345, 500, 523, 998, 102, 514, 673, 696, 266, 280, 561, 24, 417, 267, 6, 206, 566, 602], sum: 11603, list2: [871, 490, 834, 608, 488, 455, 911, 977, 107, 67, 264, 88, 980, 762, 170, 256, 993, 71, 564, 86, 531, 996], sum: 11569
# list1: [871, 112, 490, 608, 455, 107, 239, 88, 766, 902, 345, 170, 256, 523, 998, 673, 561, 86, 417, 206, 531, 566, 996, 602], sum: 11568, list2: [448, 834, 488, 295, 174, 911, 977, 67, 324, 588, 264, 980, 762, 209, 500, 102, 993, 514, 71, 696, 564, 266, 280, 24, 267, 6], sum: 11604
# list1: [871, 112, 834, 174, 911, 67, 324, 588, 264, 88, 902, 209, 345, 256, 500, 514, 71, 696, 561, 417, 206, 531, 566, 996, 602], sum: 11605, list2: [448, 490, 608, 488, 295, 455, 977, 107, 239, 766, 980, 762, 170, 523, 998, 102, 993, 673, 564, 266, 280, 24, 86, 267, 6], sum: 11567
# list1: [871, 112, 490, 608, 488, 295, 174, 107, 324, 239, 264, 902, 980, 762, 209, 170, 256, 523, 998, 102, 514, 71, 673, 266, 561, 24, 417, 6, 206], sum: 11612, list2: [448, 834, 455, 911, 977, 67, 588, 88, 766, 345, 500, 993, 696, 564, 280, 86, 267, 531, 566, 996, 602], sum: 11560
# list1: [448, 112, 295, 174, 911, 588, 239, 264, 88, 766, 902, 209, 345, 500, 523, 998, 102, 993, 673, 696, 417, 206, 566, 602], sum: 11617, list2: [871, 490, 834, 608, 488, 455, 977, 107, 67, 324, 980, 762, 170, 256, 514, 71, 564, 266, 280, 561, 24, 86, 267, 6, 531, 996], sum: 11555
# Source: lib3to2/tests/test_intern.py (hajs/lib3to2_fork, Apache-2.0)
from lib3to2.tests.support import lib3to2FixerTestCase
class Test_intern(lib3to2FixerTestCase):
    fixer = "intern"

    # XXX: Does not remove unused "import sys" lines.
    def test_prefix_preservation(self):
        b = """import sys\nx = sys.intern( a )"""
        a = """import sys\nx = intern( a )"""
        self.check(b, a)

        b = """import sys\ny = sys.intern("b" # test
)"""
        a = """import sys\ny = intern("b" # test
)"""
        self.check(b, a)

        b = """import sys\nz = sys.intern(a+b+c.d, )"""
        a = """import sys\nz = intern(a+b+c.d, )"""
        self.check(b, a)

    def test(self):
        b = """from sys import intern\nx = intern(a)"""
        a = """\nx = intern(a)"""
        self.check(b, a)

        b = """import sys\nz = sys.intern(a+b+c.d,)"""
        a = """import sys\nz = intern(a+b+c.d,)"""
        self.check(b, a)

        b = """import sys\nsys.intern("y%s" % 5).replace("y", "")"""
        a = """import sys\nintern("y%s" % 5).replace("y", "")"""
        self.check(b, a)

    # These should not be refactored
    def test_multimports(self):
        b = """from sys import intern, path"""
        a = """from sys import path"""
        self.check(b, a)

        b = """from sys import path, intern"""
        a = """from sys import path"""
        self.check(b, a)

        b = """from sys import argv, intern, path"""
        a = """from sys import argv, path"""
        self.check(b, a)

    def test_unchanged(self):
        s = """intern(a=1)"""
        self.unchanged(s)

        s = """intern(f, g)"""
        self.unchanged(s)

        s = """intern(*h)"""
        self.unchanged(s)

        s = """intern(**i)"""
        self.unchanged(s)

        s = """intern()"""
        self.unchanged(s)
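The fixer exercised above rewrites Python 3's `sys.intern` back to the Python 2 builtin `intern`. For context, a Python 3 illustration of what interning guarantees — equal interned strings share one object:

```python
import sys

# Strings built at runtime; equal by value.
a = "".join(["in", "tern", "ed"])
b = "".join(["in", "tern", "ed"])
assert a == b

# After interning, equal strings are the *same* object,
# which makes later comparisons pointer-fast.
ia = sys.intern(a)
ib = sys.intern(b)
assert ia is ib
```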
# Source: cosmic-core/systemvm/patches/centos7/opt/cosmic/router/bin/update_config.py (sanderv32/cosmic, Apache-2.0)
#!/usr/bin/python
import glob
import logging
import os.path
import sys

import configure
from cs.CsHelper import mkdir
from cs.CsPasswordService import CsPasswordServiceVMConfig
from databag.merge import QueueFile

OCCURRENCES = 1

LOG_DIR = "/var/log/cosmic/router"

if not os.path.isdir(LOG_DIR):
    mkdir(LOG_DIR, 0o755, False)

logging.basicConfig(filename="/var/log/cosmic/router/router.log", level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(filename)s %(funcName)s:%(lineno)d %(message)s')

# first commandline argument should be the file to process
if len(sys.argv) != 2:
    logging.error("Invalid usage")
    sys.exit(1)

# FIXME we should get this location from a configuration class
jsonPath = "/var/cache/cloud/%s"
jsonCmdConfigPath = jsonPath % sys.argv[1]


def finish_config():
    # Converge
    returncode = configure.main(sys.argv)
    sys.exit(returncode)


def process(do_merge=True):
    logging.info("Processing JSON file %s" % sys.argv[1])
    qf = QueueFile()
    qf.setFile(sys.argv[1])
    qf.do_merge = do_merge
    qf.load(None)
    return qf


def process_file():
    logging.info("process_file")
    process()
    # Converge
    finish_config()


def process_vmpasswd():
    logging.info("process_vmpassword")
    qf = process(False)
    logging.info("Sending password to password server")
    returncode = CsPasswordServiceVMConfig(qf.getData()).process()
    # TODO: use the returncode as exit code, but for now we just log the exit code
    logging.info("The vmpassword processing ended with exit code %d" % returncode)
    # sys.exit(returncode)


filename = min(glob.iglob(jsonCmdConfigPath + '*'), key=os.path.getctime)
if not (os.path.isfile(filename) and os.access(filename, os.R_OK)):
    logging.error("You are telling me to process %s, but i can't access it" % jsonCmdConfigPath)
    sys.exit(1)

if sys.argv[1].startswith("vm_password.json"):
    logging.info("Processing incoming vm_passwd file => %s" % sys.argv[1])
    process_vmpasswd()
else:
    logging.info("Processing incoming file => %s" % sys.argv[1])
    process_file()
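The `min(glob.iglob(...), key=os.path.getctime)` line above selects, among all files matching the prefix, the one with the earliest change time (swap in `max` for the most recent). A self-contained sketch of the same pattern with stdlib only — the file names below are invented for the demo:

```python
import glob
import os
import tempfile
import time

tmpdir = tempfile.mkdtemp()
for name in ("cmdline.json.0", "cmdline.json.1", "cmdline.json.2"):
    with open(os.path.join(tmpdir, name), "w") as f:
        f.write("{}")
    time.sleep(0.1)  # ensure distinct ctimes

pattern = os.path.join(tmpdir, "cmdline.json") + "*"
oldest = min(glob.iglob(pattern), key=os.path.getctime)  # what update_config.py does
newest = max(glob.iglob(pattern), key=os.path.getctime)

assert oldest.endswith("cmdline.json.0")
assert newest.endswith("cmdline.json.2")
```

Note that `iglob` returns files in arbitrary order, so keying on `getctime` is what makes the choice deterministic.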
# Source: origamibot/core/teletypes/inline_keyboard_markup.py (cmd410/OrigamiBot, MIT)
from typing import List
from .base import TelegramStructure, ListField
from .inline_keyboard_button import InlineKeyboardButton


class InlineKeyboardMarkup(TelegramStructure):
    """This object represents an inline keyboard that appears right next
    to the message it belongs to.
    """

    inline_keyboard = ListField()

    def __init__(self,
                 inline_keyboard: List[List[InlineKeyboardButton]],
                 ):
        self.inline_keyboard = \
            ListField(inline_keyboard, [InlineKeyboardButton])
# Source: tests/legacy_pytests/script_exe_test_3/testme.py (depaul-dice/provenance-to-use, BSD-3-Clause)
#!/usr/bin/env python2
import sys
sys.path.insert(0, '..')
from cde_test_common import *
# os, CDE_ROOT_DIR and generic_test_runner are presumably provided by the
# star import from cde_test_common above.
def checker_func():
    assert os.path.isfile(CDE_ROOT_DIR + '/home/pgbovine/CDE/tests/script_exe_test_3/hello-world')

generic_test_runner(["./run_script.py"], checker_func)
# Source: server/domain/countries.py (Neoteroi/BlackSheep-Azure-API, MIT)
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class Country:
    id: int
    name: str
    code: str


class CountriesDataProvider(ABC):
    @abstractmethod
    async def get_countries(self) -> List[Country]:
        ...


class CountriesHandler:
    countries_data_provider: CountriesDataProvider

    async def get_countries(self) -> List[Country]:
        return await self.countries_data_provider.get_countries()
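The handler above depends on the abstract `CountriesDataProvider` via a class annotation, which in BlackSheep is typically filled in by a dependency-injection container. A runnable standalone sketch of the same pattern — the in-memory provider, constructor injection, and sample data are all invented here for illustration:

```python
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class Country:
    id: int
    name: str
    code: str


class CountriesDataProvider(ABC):
    @abstractmethod
    async def get_countries(self) -> List[Country]:
        ...


class InMemoryCountriesDataProvider(CountriesDataProvider):
    """Concrete provider backed by a hard-coded list (demo only)."""

    async def get_countries(self) -> List[Country]:
        return [Country(1, "Italy", "IT"), Country(2, "Norway", "NO")]


class CountriesHandler:
    def __init__(self, provider: CountriesDataProvider):
        self.countries_data_provider = provider

    async def get_countries(self) -> List[Country]:
        return await self.countries_data_provider.get_countries()


handler = CountriesHandler(InMemoryCountriesDataProvider())
countries = asyncio.run(handler.get_countries())
```

Because the handler only sees the abstract base class, tests can substitute any provider without touching the handler.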
# Source: examples/spouse_example/tsv_extractor/udf/ext_has_spouse_features.py (feiranwang/deepdive, Apache-2.0)
#! /usr/bin/env python
import sys
import fileinput

ARR_DELIM = '~^~'

# For each input tuple
for row in fileinput.input():
    parts = row.strip().split('\t')
    if len(parts) != 6:
        print >>sys.stderr, 'Failed to parse row:', row
        continue

    # Get all fields from a row
    words = parts[0].split(ARR_DELIM)
    relation_id = parts[1]
    p1_start, p1_length, p2_start, p2_length = [int(x) for x in parts[2:]]
    p1_end = p1_start + p1_length
    p2_end = p2_start + p2_length
    p1_text = words[p1_start:p1_end]  # slice bound is the end index, not the length
    p2_text = words[p2_start:p2_end]

    # Features for this pair come in here
    features = set()

    # Feature 1: Words between the two phrases
    left_idx = min(p1_end, p2_end)
    right_idx = max(p1_start, p2_start)
    words_between = words[left_idx:right_idx]
    if words_between:
        features.add("words_between=" + "-".join(words_between))

    # Feature 2: Number of words between the two phrases
    features.add("num_words_between=%s" % len(words_between))

    # Feature 3: Does the last word (last name) match assuming the words are not equal?
    last_word_left = words[p1_end-1]
    last_word_right = words[p2_end-1]
    if (last_word_left == last_word_right) and (p1_text != p2_text):
        features.add("last_word_matches")

    # TODO: Add more features, look at dependency paths, etc
    for feature in features:
        print str(relation_id) + '\t' + feature
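The loop above derives textual features from the token-span positions of two person mentions. A Python 3 sketch of the same span arithmetic on a hand-made sentence (the sentence and span offsets are invented for the demo):

```python
words = "Barack Obama is married to Michelle Obama".split()
p1_start, p1_length = 0, 2   # span of "Barack Obama"
p2_start, p2_length = 5, 2   # span of "Michelle Obama"
p1_end = p1_start + p1_length
p2_end = p2_start + p2_length

# Words strictly between the two mention spans.
left_idx = min(p1_end, p2_end)
right_idx = max(p1_start, p2_start)
words_between = words[left_idx:right_idx]

features = set()
if words_between:
    features.add("words_between=" + "-".join(words_between))
features.add("num_words_between=%s" % len(words_between))
# Shared last name, but distinct mentions.
if words[p1_end - 1] == words[p2_end - 1] and words[p1_start:p1_end] != words[p2_start:p2_end]:
    features.add("last_word_matches")
```

For this sentence the extractor yields `words_between=is-married-to`, `num_words_between=3`, and `last_word_matches`.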
# Source: tests/test_4_delete_test_db.py (TimKrash/airport-viz, Unlicense)
import os
db_path = 'tests/processed/test_db.sqlite'


def test_delete_test_db():
    # This should always be the last script run. It deletes the test db
    # created earlier to avoid appending onto it in future test runs.
    os.remove(db_path)
# Source: dictutil.py (leifp/dictutil, MIT)
"""Useful functions for working with dicts."""
from collections import defaultdict


## clojure

def merge(*dicts):
    """Returns a dict that consists of the rest of the dicts merged with
    the first. If a key occurs in more than one map, the value from the
    latter (left-to-right) will be the value in the result."""
    d = dict()
    for _dict in dicts:
        d.update(_dict)
    return d


def merge_with(f, *dicts):
    """Returns a dict that consists of the rest of the dicts merged with
    the first. If a key occurs in more than one map, the value from the
    latter (left-to-right) will be combined with the value in the former
    by calling f(former_val, latter_val). Calling with no dicts returns {}."""
    d = dict()
    for _dict in dicts:
        for k in _dict:
            if k in d:
                d[k] = f(d[k], _dict[k])
            else:
                d[k] = _dict[k]
    return d


def zipdict(ks, vs):
    """Returns a dict with the keys mapped to the corresponding vals."""
    return dict(zip(ks, vs))

## different semantics
#def zipdict(ks, vs):
#    return dict.fromkeys(ks, vs)


#TODO: add not_found=None ?
def get_in(d, ks):
    """ Returns the value in a nested associative structure, where `ks` is a
    sequence of keys. Returns None if the key is not present. Returns `d` if
    `ks` is empty."""
    tmp = d
    for k in ks:
        tmp = tmp.get(k)
        if tmp is None:
            return None
    return tmp


def set_in(d, ks, v):
    """ Sets the value in a nested associative structure, where `ks` is a
    sequence of keys. Creates the key if the key is not present.
    MUTATES `d`."""
    tmp = d
    i = None
    for i, next_key in enumerate(ks):
        if i > 0:
            tmp = tmp.setdefault(current_key, {})
        current_key = next_key
    if i is None:
        raise KeyError("Empty keys iterable")
    tmp[current_key] = v


def update_in(d, ks, f, *restargs):
    """'Updates' the value in a nested associative structure, where `ks` is a
    sequence of keys and `f` is a function that will take the old value and
    any supplied args and return the new value. Equivalent to
    d[k1][k2][...] = f(d[k1][k2][...], *restargs).
    An error is raised if the key at the final level does not exist.
    MUTATES `d`.
    """
    tmp = d
    i = None
    for i, next_key in enumerate(ks):
        if i > 0:
            tmp = tmp.setdefault(current_key, {})
        current_key = next_key
    if i is None:
        raise KeyError("Empty keys iterable")
    tmp[current_key] = f(tmp[current_key], *restargs)
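`get_in` and `set_in` walk nested dicts along a key path, reading with `dict.get` and writing through `dict.setdefault`. A Python 3 restatement of the core idea (simplified here; `set_in` uses unpacking instead of the index bookkeeping above):

```python
def get_in(d, ks):
    """Walk d along key path ks; None if any key is missing."""
    tmp = d
    for k in ks:
        tmp = tmp.get(k)
        if tmp is None:
            return None
    return tmp


def set_in(d, ks, v):
    """Set d[k1][k2][...] = v, creating intermediate dicts as needed."""
    *path, last = ks          # raises ValueError on an empty key path
    tmp = d
    for k in path:
        tmp = tmp.setdefault(k, {})
    tmp[last] = v


cfg = {"server": {"port": 80}}
set_in(cfg, ["server", "tls", "enabled"], True)
assert cfg == {"server": {"port": 80, "tls": {"enabled": True}}}
assert get_in(cfg, ["server", "tls", "enabled"]) is True
assert get_in(cfg, ["server", "missing"]) is None
```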
## haskell

#union is like merge (but merge prefers later args)
#def union(d1, d2):
#    pass


def intersection(d1, d2):
    """Intersection of two dicts.
    Return data in the first dict for the keys existing in both dicts."""
    ret = {}
    for k in d1:
        if k in d2:
            ret[k] = d1[k]
    return ret


def difference(d1, d2):
    """Difference of two dicts.
    Return elements of the first dict not existing in the second dict."""
    ret = {}
    for k in d1:
        if k not in d2:
            ret[k] = d1[k]
    return ret


def map_values(f, d):
    """Map a function over all values in the dict."""
    return dict((k, f(v)) for k, v in d.iteritems())


def map_keys(f, d):
    """Map a function over all keys in the dict."""
    return dict((f(k), v) for k, v in d.iteritems())


def partition(f, d):
    """Partition the dict according to an equivalence relation.
    Calls f(key, value) for all (key, value) pairs in the dict d. The return
    value of f must be hashable.
    Returns a new dict where the keys are distinct return values of f, and the
    values are dicts containing the equivalence classes distinguished by those
    return values:
    >>> partition(lambda k, v: (k % 3), {0: 1, 1: 2, 2: 3, 3: 4, 4: 5})
    {0: {0: 1, 3: 4}, 1: {1: 2, 4: 5}, 2: {2: 3}}
    """
    partition = defaultdict(dict)
    for k, v in d.iteritems():
        partition[f(k, v)][k] = v
    return partition


def partition_on_value(pred, d):
    """Partition the dict according to a predicate on the values.
    Returns a tuple of two dicts:
    The first dict contains all elements that satisfy the predicate,
    the second all elements that fail the predicate."""
    f = lambda k, v: bool(pred(v))
    p = partition(f, d)
    return p[True], p[False]


def partition_on_key(pred, d):
    """Partition the dict according to a predicate on the keys.
    Returns a tuple of two dicts:
    The first dict contains all elements that satisfy the predicate,
    the second all elements that fail the predicate."""
    f = lambda k, v: bool(pred(k))
    p = partition(f, d)
    return p[True], p[False]


def issubdict(d1, d2):
    """All keys in `d1` are in `d2`, and corresponding values are equal."""
    return all((k in d2 and d1[k] == d2[k]) for k in d1)


#TODO: for python >= 2.7, reference dictviews
def key_set(d):
    """A set of all the keys of a dict."""
    return set(d.iterkeys())


def value_set(d):
    """A set of all the values of a dict."""
    return set(d.itervalues())
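`partition` groups a dict's items by an arbitrary function of (key, value), and the `partition_on_*` helpers specialize it to a boolean predicate. Restated standalone in Python 3 syntax (`items()` instead of the original's `iteritems()`), with the example data invented for the demo:

```python
from collections import defaultdict


def partition(f, d):
    out = defaultdict(dict)
    for k, v in d.items():
        out[f(k, v)][k] = v
    return out


def partition_on_value(pred, d):
    p = partition(lambda k, v: bool(pred(v)), d)
    return p[True], p[False]


# Split a grade book on a pass/fail predicate over the values.
scores = {"ann": 91, "bob": 58, "cid": 77}
passed, failed = partition_on_value(lambda v: v >= 60, scores)
assert passed == {"ann": 91, "cid": 77}
assert failed == {"bob": 58}

# The doctest case from the original docstring: group keys by k % 3.
by_mod = partition(lambda k, v: k % 3, {0: 1, 1: 2, 2: 3, 3: 4, 4: 5})
assert dict(by_mod) == {0: {0: 1, 3: 4}, 1: {1: 2, 4: 5}, 2: {2: 3}}
```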
## sql

##TODO: group by f key, f val, f (key, val)?
def group_by(f, d, cmp=None, key=None, reverse=None):
    """Group by a function of the keys. Returns a dict given by
    return_dict[f(k)] = [all values of original with same f(k)].
    If any of `cmp`, `key`, or `reverse` is given, sort all of the values
    of the result with `list.sort`."""
    res = defaultdict(list)
    for k, v in d.iteritems():
        newk = f(k)
        res[newk].append(v)
    if cmp or key or (reverse is not None):
        reverse = False if reverse is None else reverse
        for k in res:
            res[k].sort(cmp=cmp, key=key, reverse=reverse)
    return res


def index(d, index_f):
    """Creates an index of `d`:
    * Calls `index_f` for each value of `d`; this should return an iterable.
    * For each index_key in the iterable, add the item key to the set at
      result[index_key].
    E.g.,
    >>> index({1: 'foo bar', 2: 'foo baz'}, str.split)
    {'foo': set([1, 2]), 'bar': set([1]), 'baz': set([2])}"""
    res = defaultdict(set)
    for k, v in d.iteritems():
        idx_ks = index_f(v)
        for idx_k in idx_ks:
            res[idx_k].add(k)
    return res


def project(d, ks):
    """Return the subdict of `d` containing only the keys in `ks`."""
    return dict((k, d[k]) for k in ks)


# ruby values_at
def project_list(d, ks):
    """Return a list of the values of `d` at `ks`."""
    return [d[k] for k in ks]


def where(pred, d):
    """Return the subdict of `d` where `pred(k,v)` holds."""
    return dict((k, v) for k, v in d.iteritems() if pred(k, v))


def where_key(pred, d):
    """Return the subdict of `d` where `pred(k)` holds."""
    return dict((k, v) for k, v in d.iteritems() if pred(k))


def where_value(pred, d):
    """Return the subdict of `d` where `pred(v)` holds."""
    return dict((k, v) for k, v in d.iteritems() if pred(v))
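`group_by` buckets values by a function of their keys, optionally sorting each bucket. Python 3 dropped the `cmp` argument of `list.sort`, so a standalone restatement keeps only `key`/`reverse` (the file-extension example is invented for the demo):

```python
from collections import defaultdict


def group_by(f, d, key=None, reverse=False):
    """Bucket d's values by f(k); sort buckets if key/reverse given."""
    res = defaultdict(list)
    for k, v in d.items():
        res[f(k)].append(v)
    if key is not None or reverse:
        for k in res:
            res[k].sort(key=key, reverse=reverse)
    return res


# Group file sizes by extension, each bucket sorted descending.
files = {"a.py": 10, "b.txt": 20, "c.py": 30, "d.txt": 5}
by_ext = group_by(lambda name: name.rsplit(".", 1)[1], files, reverse=True)
assert by_ext["py"] == [30, 10]
assert by_ext["txt"] == [20, 5]
```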
## ruby

def del_if(pred, d):
    """Delete all items of `d` for which `pred(k,v)` holds.
    MUTATES `d`."""
    to_del = [k for k, v in d.iteritems() if pred(k, v)]
    for k in to_del:
        del d[k]


# flatten
def invert(d):
    """Return a new dict, with the keys and values reversed.
    If there are duplicate values in the original, the result will only
    have one of the corresponding keys."""
    return dict((v, k) for k, v in d.iteritems())


#rename?
def rassoc(d, val):
    """Searches through the dict comparing `val` with the value. Returns
    the first item (k, v) that matches."""
    for k, v in d.iteritems():
        if v == val:
            return k, v
    return None


# same as 'where' above (but faster?!)
# make an iterator?
def select(pred, d):
    """Return the subdict of `d` where `pred(k,v)` holds."""
    ret = {}
    for k, v in d.iteritems():
        if pred(k, v):
            ret[k] = v
    return ret

# has_value
# values_at

## javascript
#TODO: js object-like dict class?
# e.g. d.foo is an alias for d['foo']
# This has been done in a lot of places...
# for reference see attributedict at https://github.com/bitprophet/lexicon

## perl
# eh.
7c2bbbbbbefc0b0a7fcb86d04d64aca9f405ea57 | 304 | py | Python | Python-Introduction/Solutions/Excercise4.py | itzpc/Google-MLCC-NITJ | a0a4f3f7020175476b503ca144f4da246f59b26c | [
"MIT"
] | null | null | null | Python-Introduction/Solutions/Excercise4.py | itzpc/Google-MLCC-NITJ | a0a4f3f7020175476b503ca144f4da246f59b26c | [
"MIT"
] | null | null | null | Python-Introduction/Solutions/Excercise4.py | itzpc/Google-MLCC-NITJ | a0a4f3f7020175476b503ca144f4da246f59b26c | [
"MIT"
] | null | null | null | def main():
    x = 11
    y = 2
    print("Addition X+Y=", x + y)
    print("Subtraction X-Y=", x - y)
    print("Multiplication X*Y=", x * y)
    print("Division X/Y=", x / y)
    print("Modulus X%Y=", x % y)
    print("Exponent X**Y=", x ** y)
    print("Floor Division X//Y=", x // y)

if __name__ == '__main__':
    main()
| 21.714286 | 38 | 0.546053 | 53 | 304 | 2.981132 | 0.301887 | 0.177215 | 0.132911 | 0.177215 | 0.468354 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012605 | 0.217105 | 304 | 13 | 39 | 23.384615 | 0.651261 | 0 | 0 | 0 | 0 | 0 | 0.381579 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0 | 0 | 0.083333 | 0.583333 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
7c3fdc72fee157fc2c53dd7655f8434df88f9b9a | 310 | py | Python | python_demos/src/rates_demo/__main__.py | t4d-classes/python_10042021 | e2c28448ad66784c429655ab766f902b76d6ac79 | [
"MIT"
] | null | null | null | python_demos/src/rates_demo/__main__.py | t4d-classes/python_10042021 | e2c28448ad66784c429655ab766f902b76d6ac79 | [
"MIT"
] | null | null | null | python_demos/src/rates_demo/__main__.py | t4d-classes/python_10042021 | e2c28448ad66784c429655ab766f902b76d6ac79 | [
"MIT"
] | null | null | null | """ main module """
import time
from .get_rates import get_rates_threaded
from .rates_api_server import rates_api_server
if __name__ == "__main__":
with rates_api_server():
start = time.time()
get_rates_threaded()
print(f"threaded time elapsed: {time.time() - start}")
| 20.666667 | 62 | 0.664516 | 40 | 310 | 4.675 | 0.425 | 0.128342 | 0.224599 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.229032 | 310 | 15 | 63 | 20.666667 | 0.782427 | 0.035484 | 0 | 0 | 0 | 0 | 0.178082 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0.125 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
7c4544a96fa97e54109492959d72237e9c29ff7a | 10,675 | py | Python | src/scripts/generate_runtime_manifest.py | arimas88/OpenXR-SDK-Source | 08ae73ea661e2b9ebde3ce9b83ac8d777623d44a | [
"MIT",
"BSD-3-Clause",
"Apache-2.0",
"Unlicense"
] | 1 | 2022-02-24T03:06:35.000Z | 2022-02-24T03:06:35.000Z | src/scripts/generate_runtime_manifest.py | phonon56/OpenXR-SDK-Source | 7450c5420789ff0ab63aaad2c4ed018807629a77 | [
"MIT",
"Apache-2.0",
"Unlicense",
"BSD-3-Clause"
] | 1 | 2022-02-20T16:14:51.000Z | 2022-02-20T16:14:51.000Z | src/scripts/generate_runtime_manifest.py | korejan/ALVR-OpenXR-Engine | 9bdd6f73d881e46835dfc9af892e9a4184242764 | [
"MIT",
"Apache-2.0",
"BSD-3-Clause",
"Unlicense"
] | null | null | null | #!/usr/bin/python3
#
# Copyright (c) 2017-2022, The Khronos Group Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import getopt
import sys
cur_runtime_json_version = '1.0.0'
def main(argv):
    output_file = ''
    library_location = ''
    generate_badjson_jsons = False
    usage = '\ngenerate_runtime_manifest.py <ARGS>\n'
    usage += '    -f/--file <filename>\n'
    usage += '    -l/--lib <library_location>\n'
    usage += '    -b/--bad\n'

    try:
        opts, args = getopt.getopt(argv, "hbf:l:", ["bad", "file=", "lib="])
    except getopt.GetoptError:
        print(usage)
        sys.exit(2)

    for opt, arg in opts:
        if opt == '-h':
            print(usage)
            sys.exit()
        elif opt in ("-f", "--file"):
            output_file = arg.strip()
        elif opt in ("-l", "--lib"):
            library_location = arg.strip()
        elif opt in ("-b", "--bad"):
            generate_badjson_jsons = True

    file_text = '{\n'
    file_text += '    "file_format_version": "%s",\n' % cur_runtime_json_version
    file_text += '    "runtime": {\n'
    file_text += '        "library_path": "%s",\n' % library_location
    file_text += '        "functions": {\n'
    file_text += '            "xrNegotiateLoaderRuntimeInterface":\n'
    file_text += '                "xrNegotiateLoaderRuntimeInterface"\n'
    file_text += '        }\n'
    file_text += '    }\n'
    file_text += '}\n'
    with open(output_file, 'w') as f:
        f.write(file_text)

    if generate_badjson_jsons:
        # Bad File format versions
        ####################################

        # Missing
        bad_name = '_badjson_file_ver_missing.json'
        file_text = '{\n'
        file_text += '    "runtime": {\n'
        file_text += '        "library_path": "%s",\n' % library_location
        file_text += '    }\n'
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # Use int
        bad_name = '_badjson_file_ver_int.json'
        file_text = '{\n'
        file_text += '    "file_format_version": 1,\n'
        file_text += '    "runtime": {\n'
        file_text += '        "library_path": "%s",\n' % library_location
        file_text += '    }\n'
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # Use invalid string
        bad_name = '_badjson_file_ver_string.json'
        file_text = '{\n'
        file_text += '    "file_format_version": "invalid string",\n'
        file_text += '    "runtime": {\n'
        file_text += '        "library_path": "%s",\n' % library_location
        file_text += '    }\n'
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # Too low of a version
        bad_name = '_badjson_file_ver_all_low.json'
        file_text = '{\n'
        file_text += '    "file_format_version": "0.0.0",\n'
        file_text += '    "runtime": {\n'
        file_text += '        "library_path": "%s",\n' % library_location
        file_text += '    }\n'
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # Too high of a major version
        bad_name = '_badjson_file_ver_major_high.json'
        file_text = '{\n'
        file_text += '    "file_format_version": "15.0.0",\n'
        file_text += '    "runtime": {\n'
        file_text += '        "library_path": "%s",\n' % library_location
        file_text += '    }\n'
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # Too high of a minor version
        bad_name = '_badjson_file_ver_minor_high.json'
        file_text = '{\n'
        file_text += '    "file_format_version": "1.15.0",\n'
        file_text += '    "runtime": {\n'
        file_text += '        "library_path": "%s",\n' % library_location
        file_text += '    }\n'
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # No runtime section
        ####################################

        # Completely Missing
        bad_name = '_badjson_runtime_missing.json'
        file_text = '{\n'
        file_text += '    "file_format_version": "%s",\n' % cur_runtime_json_version
        file_text += '    "library_path": "%s",\n' % library_location
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # Empty
        bad_name = '_badjson_runtime_empty.json'
        file_text = '{\n'
        file_text += '    "file_format_version": "%s",\n' % cur_runtime_json_version
        file_text += '    "runtime": {\n'
        file_text += '    },\n'
        file_text += '    "library_path": "%s",\n' % library_location
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # Bad path
        ####################################

        # Missing
        bad_name = '_badjson_path_missing.json'
        file_text = '{\n'
        file_text += '    "file_format_version": "%s",\n' % cur_runtime_json_version
        file_text += '    "runtime": {\n'
        file_text += '    }\n'
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # Use int
        bad_name = '_badjson_path_int.json'
        file_text = '{\n'
        file_text += '    "file_format_version": "%s",\n' % cur_runtime_json_version
        file_text += '    "runtime": {\n'
        file_text += '        "library_path": 1,\n'
        file_text += '    }\n'
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # Replace valid path with invalid one
        bad_name = '_badjson_path_no_file.json'
        file_text = '{\n'
        file_text += '    "file_format_version": "%s",\n' % cur_runtime_json_version
        file_text += '    "runtime": {\n'
        file_text += '        "library_path": "%s",\n' % library_location.replace("test_runtimes", "not_real")
        file_text += '    }\n'
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # Valid JSON, but invalid Negotiate
        ####################################

        # Always fail negotiate
        bad_name = '_badnegotiate_always.json'
        file_text = '{\n'
        file_text += '    "file_format_version": "%s",\n' % cur_runtime_json_version
        file_text += '    "runtime": {\n'
        file_text += '        "library_path": "%s",\n' % library_location
        file_text += '        "functions": {\n'
        file_text += '            "xrNegotiateLoaderRuntimeInterface":\n'
        file_text += '                "TestRuntimeAlwaysFailNegotiateLoaderRuntimeInterface"\n'
        file_text += '        }\n'
        file_text += '    }\n'
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # Pass negotiate, but return null GIPA
        bad_name = '_badnegotiate_invalid_gipa.json'
        file_text = '{\n'
        file_text += '    "file_format_version": "%s",\n' % cur_runtime_json_version
        file_text += '    "runtime": {\n'
        file_text += '        "library_path": "%s",\n' % library_location
        file_text += '        "functions": {\n'
        file_text += '            "xrNegotiateLoaderRuntimeInterface":\n'
        file_text += '                "TestRuntimeNullGipaNegotiateLoaderRuntimeInterface"\n'
        file_text += '        }\n'
        file_text += '    }\n'
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # Pass negotiate, but return invalid interface version
        bad_name = '_badnegotiate_invalid_interface.json'
        file_text = '{\n'
        file_text += '    "file_format_version": "%s",\n' % cur_runtime_json_version
        file_text += '    "runtime": {\n'
        file_text += '        "library_path": "%s",\n' % library_location
        file_text += '        "functions": {\n'
        file_text += '            "xrNegotiateLoaderRuntimeInterface":\n'
        file_text += '                "TestRuntimeInvalidInterfaceNegotiateLoaderRuntimeInterface"\n'
        file_text += '        }\n'
        file_text += '    }\n'
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

        # Pass negotiate, but return invalid api version
        bad_name = '_badnegotiate_invalid_api.json'
        file_text = '{\n'
        file_text += '    "file_format_version": "%s",\n' % cur_runtime_json_version
        file_text += '    "runtime": {\n'
        file_text += '        "library_path": "%s",\n' % library_location
        file_text += '        "functions": {\n'
        file_text += '            "xrNegotiateLoaderRuntimeInterface":\n'
        file_text += '                "TestRuntimeInvalidApiNegotiateLoaderRuntimeInterface"\n'
        file_text += '        }\n'
        file_text += '    }\n'
        file_text += '}\n'
        bad_file = output_file.replace(".json", bad_name)
        f = open(bad_file, 'w')
        f.write(file_text)
        f.close()

if __name__ == "__main__":
    main(sys.argv[1:])
| 37.45614 | 109 | 0.534052 | 1,234 | 10,675 | 4.317666 | 0.13128 | 0.192192 | 0.121622 | 0.087838 | 0.698386 | 0.67042 | 0.654655 | 0.644895 | 0.644895 | 0.630631 | 0 | 0.004469 | 0.30829 | 10,675 | 284 | 110 | 37.588028 | 0.717091 | 0.099204 | 0 | 0.743119 | 0 | 0 | 0.309771 | 0.132259 | 0 | 0 | 0 | 0 | 0 | 1 | 0.004587 | false | 0 | 0.009174 | 0 | 0.013761 | 0.009174 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7c504e8def8d2b91d3cf3734f247b0ba22b449ad | 133 | py | Python | o3seespy/__about__.py | vijaypolimeru/o3seespy | c9ef0c27f685de705721b10eb1ea81c3a3c24c4e | [
"MIT",
"BSD-3-Clause"
] | null | null | null | o3seespy/__about__.py | vijaypolimeru/o3seespy | c9ef0c27f685de705721b10eb1ea81c3a3c24c4e | [
"MIT",
"BSD-3-Clause"
] | null | null | null | o3seespy/__about__.py | vijaypolimeru/o3seespy | c9ef0c27f685de705721b10eb1ea81c3a3c24c4e | [
"MIT",
"BSD-3-Clause"
] | 1 | 2020-03-20T16:31:22.000Z | 2020-03-20T16:31:22.000Z | __project__ = "o3seespy"
__author__ = "Maxim Millen & Minjie Zhu"
__version__ = "3.1.0.18"
__license__ = "MIT with OpenSees License"
| 26.6 | 41 | 0.736842 | 17 | 133 | 4.823529 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.142857 | 133 | 4 | 42 | 33.25 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0.496241 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7c5e28c5dffb8d3a9e608d354d8042447959f5fd | 111 | py | Python | learn-api-rest-python/src/cfg/DevelopmentConfig.py | dpurge/jdp-container | 9d2aff0b4ddb5c2cc98663d27dce0322f2c9725b | [
"MIT"
] | null | null | null | learn-api-rest-python/src/cfg/DevelopmentConfig.py | dpurge/jdp-container | 9d2aff0b4ddb5c2cc98663d27dce0322f2c9725b | [
"MIT"
] | null | null | null | learn-api-rest-python/src/cfg/DevelopmentConfig.py | dpurge/jdp-container | 9d2aff0b4ddb5c2cc98663d27dce0322f2c9725b | [
"MIT"
] | null | null | null | from .Config import Config
class DevelopmentConfig(Config):
    ENV = 'development'
    DEBUG = True
| 18.5 | 33 | 0.657658 | 11 | 111 | 6.636364 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.27027 | 111 | 6 | 34 | 18.5 | 0.901235 | 0 | 0 | 0 | 0 | 0 | 0.102804 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7c68292230fa0cf7f24f5c3514ebe99d5c23ca77 | 889 | py | Python | blog/migrations/0009_auto_20210111_0126.py | Jay-davisphem/miniblog-project | 7b6990a02bb32561af74735d4172b795f613b3e5 | [
"MIT"
] | null | null | null | blog/migrations/0009_auto_20210111_0126.py | Jay-davisphem/miniblog-project | 7b6990a02bb32561af74735d4172b795f613b3e5 | [
"MIT"
] | null | null | null | blog/migrations/0009_auto_20210111_0126.py | Jay-davisphem/miniblog-project | 7b6990a02bb32561af74735d4172b795f613b3e5 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.2 on 2021-01-11 00:26
from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ('blog', '0008_auto_20201222_1658'),
    ]

    operations = [
        migrations.AlterField(
            model_name='blogger',
            name='profile',
            field=models.TextField(blank=True, help_text='Enter blogger biography', null=True, verbose_name='Bio'),
        ),
        migrations.AlterField(
            model_name='commenter',
            name='username',
            field=models.CharField(blank=True, max_length=100, null=True),
        ),
        migrations.AlterField(
            model_name='post',
            name='description',
            field=models.TextField(help_text='Enter your blog post here'),
        ),
        migrations.DeleteModel(
            name='CommentCreate',
        ),
    ]
| 27.78125 | 115 | 0.582677 | 89 | 889 | 5.707865 | 0.606742 | 0.11811 | 0.147638 | 0.17126 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054662 | 0.300337 | 889 | 31 | 116 | 28.677419 | 0.762058 | 0.050619 | 0 | 0.28 | 1 | 0 | 0.162708 | 0.027316 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.16 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |