hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
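The header above names per-file quality signals such as `avg_line_length`, `max_line_length`, and `alphanum_fraction`. Their exact definitions are not given in this dump, so the following is a hedged sketch of plausible implementations — straightforward readings of the column names, not the dataset's official extraction code:

```python
def avg_line_length(content: str) -> float:
    """Mean number of characters per line of the file content."""
    lines = content.split("\n")
    return sum(len(line) for line in lines) / len(lines)

def max_line_length(content: str) -> int:
    """Length of the longest line in the file content."""
    return max(len(line) for line in content.split("\n"))

def alphanum_fraction(content: str) -> float:
    """Fraction of all characters that are alphanumeric."""
    if not content:
        return 0.0
    return sum(ch.isalnum() for ch in content) / len(content)
```

Under these readings, a short file like `"ab\ncd"` has an average line length of 2.0 and a maximum of 2; the dataset's real values may differ in edge cases such as trailing newlines.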
a8531e0516140b7ba04c8694d94c31bda49998bb | 228 | py | Python | train.py | airacid/pruned-face-detector | ef587e274ccf87633af653694890eb6712d6b3eb | [
"MIT"
] | 1 | 2021-11-01T02:39:36.000Z | 2021-11-01T02:39:36.000Z | train.py | airacid/pruned-face-detector | ef587e274ccf87633af653694890eb6712d6b3eb | [
"MIT"
] | null | null | null | train.py | airacid/pruned-face-detector | ef587e274ccf87633af653694890eb6712d6b3eb | [
"MIT"
] | 1 | 2021-11-01T02:39:37.000Z | 2021-11-01T02:39:37.000Z | from lib.helper.logger import logger
from lib.core.base_trainer.net_work import trainner
import setproctitle
logger.info('train start')
setproctitle.setproctitle("detect")
trainer = trainner()
trainer.train()
| 16.285714 | 52 | 0.758772 | 28 | 228 | 6.107143 | 0.571429 | 0.081871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149123 | 228 | 13 | 53 | 17.538462 | 0.881443 | 0 | 0 | 0 | 0 | 0 | 0.079439 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
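Rows like the one above are typically consumed as records and filtered on their quality signals. A minimal sketch of such a filter — the threshold values are illustrative assumptions, not the dataset's actual curation rules:

```python
def passes_quality_filter(record: dict,
                          max_avg_line_length: float = 100.0,
                          max_max_line_length: int = 1000,
                          min_alphanum_fraction: float = 0.25) -> bool:
    """Keep a file only if its signals fall inside loose, illustrative bounds."""
    return (record["avg_line_length"] <= max_avg_line_length
            and record["max_line_length"] <= max_max_line_length
            and record["alphanum_fraction"] >= min_alphanum_fraction)

# Signal values taken from the train.py row above.
record = {"avg_line_length": 16.285714,
          "max_line_length": 52,
          "alphanum_fraction": 0.758772}
```

With these bounds the train.py row passes, while a file whose average line length were 500 would be dropped.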
a8542bc39dc4bc6c313943d468e061100fee3bbb | 18,841 | py | Python | gewittergefahr/gg_utils/echo_classification_test.py | dopplerchase/GewitterGefahr | 4415b08dd64f37eba5b1b9e8cc5aa9af24f96593 | [
"MIT"
] | 26 | 2018-10-04T01:07:35.000Z | 2022-01-29T08:49:32.000Z | gewittergefahr/gg_utils/echo_classification_test.py | liuximarcus/GewitterGefahr | d819874d616f98a25187bfd3091073a2e6d5279e | [
"MIT"
] | 4 | 2017-12-25T02:01:08.000Z | 2018-12-19T01:54:21.000Z | gewittergefahr/gg_utils/echo_classification_test.py | liuximarcus/GewitterGefahr | d819874d616f98a25187bfd3091073a2e6d5279e | [
"MIT"
] | 11 | 2017-12-10T23:05:29.000Z | 2022-01-29T08:49:33.000Z | """Unit tests for echo_classification.py."""
import copy
import unittest
import numpy
from gewittergefahr.gg_utils import grids
from gewittergefahr.gg_utils import radar_utils
from gewittergefahr.gg_utils import echo_classification as echo_classifn
TOLERANCE = 1e-6
# The following constants are used to test _estimate_melting_levels.
MELTING_LEVEL_LATITUDES_DEG = numpy.linspace(-90., 90., num=19)
MELTING_LEVEL_TIME_UNIX_SEC = 1541823287 # 041447 UTC 10 Nov 2018
MELTING_LEVELS_M_ASL = (
echo_classifn.MELT_LEVEL_INTERCEPT_BY_MONTH_M_ASL[10] +
echo_classifn.MELT_LEVEL_SLOPE_BY_MONTH_M_DEG01[10] *
numpy.absolute(MELTING_LEVEL_LATITUDES_DEG)
)
# The following constants are used to test _neigh_metres_to_rowcol.
LARGE_GRID_HEIGHTS_M_ASL = radar_utils.get_valid_heights(
data_source=radar_utils.MYRORSS_SOURCE_ID, field_name=radar_utils.REFL_NAME)
LARGE_GRID_METADATA_DICT = {
echo_classifn.MIN_LATITUDE_KEY: 20.,
echo_classifn.LATITUDE_SPACING_KEY: 0.01,
echo_classifn.MIN_LONGITUDE_KEY: 230.,
echo_classifn.LONGITUDE_SPACING_KEY: 0.01,
echo_classifn.HEIGHTS_KEY: LARGE_GRID_HEIGHTS_M_ASL
}
THESE_LATITUDES_DEG, THESE_LONGITUDES_DEG = grids.get_latlng_grid_points(
min_latitude_deg=LARGE_GRID_METADATA_DICT[echo_classifn.MIN_LATITUDE_KEY],
min_longitude_deg=LARGE_GRID_METADATA_DICT[echo_classifn.MIN_LONGITUDE_KEY],
lat_spacing_deg=LARGE_GRID_METADATA_DICT[
echo_classifn.LATITUDE_SPACING_KEY],
lng_spacing_deg=LARGE_GRID_METADATA_DICT[
echo_classifn.LONGITUDE_SPACING_KEY],
num_rows=7001, num_columns=3501)
LARGE_GRID_METADATA_DICT[echo_classifn.LATITUDES_KEY] = THESE_LATITUDES_DEG
LARGE_GRID_METADATA_DICT[echo_classifn.LONGITUDES_KEY] = THESE_LONGITUDES_DEG
LARGE_RADIUS_METRES = 12000.
NUM_ROWS_IN_LARGE_NEIGH = 23
NUM_COLUMNS_IN_LARGE_NEIGH = 29
GRID_METADATA_DICT = {
echo_classifn.MIN_LATITUDE_KEY: 35.1,
echo_classifn.LATITUDE_SPACING_KEY: 0.2,
echo_classifn.MIN_LONGITUDE_KEY: 262.1,
echo_classifn.LONGITUDE_SPACING_KEY: 0.2,
echo_classifn.HEIGHTS_KEY: numpy.array([1000, 4000, 7000])
}
THESE_LATITUDES_DEG, THESE_LONGITUDES_DEG = grids.get_latlng_grid_points(
min_latitude_deg=GRID_METADATA_DICT[echo_classifn.MIN_LATITUDE_KEY],
min_longitude_deg=GRID_METADATA_DICT[echo_classifn.MIN_LONGITUDE_KEY],
lat_spacing_deg=GRID_METADATA_DICT[echo_classifn.LATITUDE_SPACING_KEY],
lng_spacing_deg=GRID_METADATA_DICT[echo_classifn.LONGITUDE_SPACING_KEY],
num_rows=5, num_columns=7)
GRID_METADATA_DICT[echo_classifn.LATITUDES_KEY] = THESE_LATITUDES_DEG
GRID_METADATA_DICT[echo_classifn.LONGITUDES_KEY] = THESE_LONGITUDES_DEG
NEIGH_RADIUS_METRES = 12000.
NUM_ROWS_IN_NEIGH = 3
NUM_COLUMNS_IN_NEIGH = 3
# The following constants are used to test _get_peakedness.
THIS_FIRST_MATRIX = numpy.array([[0, 1, 2, 3, 4, 5, 6],
[0, 1, 2, 3, 4, 5, 6],
[0, 1, 2, 3, 4, 5, 6],
[0, 1, 2, 3, 4, 5, 6],
[0, 1, 2, 3, 4, 5, 20]])
THIS_SECOND_MATRIX = numpy.array([[0, 0, 0, 0, 0, 0, 0],
[2, 2, 2, 2, 2, 2, 2],
[4, 4, 4, 4, 4, 4, 4],
[6, 6, 6, 6, 6, 6, 6],
[8, 8, 8, 8, 8, 8, 20]])
THIS_THIRD_MATRIX = numpy.array([[0, 1, 2, 3, 4, 5, 6],
[3, 4, 5, 6, 7, 8, 9],
[6, 7, 8, 9, 10, 11, 12],
[9, 10, 11, 12, 13, 14, 15],
[12, 13, 14, 15, 16, 17, 20]])
REFLECTIVITY_MATRIX_DBZ = numpy.stack(
(THIS_FIRST_MATRIX, THIS_SECOND_MATRIX, THIS_THIRD_MATRIX), axis=-1
).astype(float)
THIS_FIRST_MATRIX = numpy.array([[0, 0, 1, 2, 3, 4, 0],
[0, 1, 2, 3, 4, 5, 5],
[0, 1, 2, 3, 4, 5, 5],
[0, 1, 2, 3, 4, 5, 5],
[0, 0, 1, 2, 3, 4, 0]])
THIS_SECOND_MATRIX = numpy.array([[0, 0, 0, 0, 0, 0, 0],
[0, 2, 2, 2, 2, 2, 0],
[2, 4, 4, 4, 4, 4, 2],
[4, 6, 6, 6, 6, 6, 4],
[0, 6, 6, 6, 6, 6, 0]])
THIS_THIRD_MATRIX = numpy.array([[0, 1, 2, 3, 4, 5, 0],
[1, 4, 5, 6, 7, 8, 6],
[4, 7, 8, 9, 10, 11, 9],
[7, 10, 11, 12, 13, 14, 12],
[0, 10, 11, 12, 13, 14, 0]])
THIS_MATRIX = numpy.stack(
(THIS_FIRST_MATRIX, THIS_SECOND_MATRIX, THIS_THIRD_MATRIX), axis=-1
).astype(float)
PEAKEDNESS_MATRIX_DBZ = REFLECTIVITY_MATRIX_DBZ - THIS_MATRIX
# The following constants are used to test _apply_convective_criterion1.
MAX_PEAKEDNESS_HEIGHT_M_ASL = 9000.
CRITERION1_FLAG_MATRIX = numpy.array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 1]], dtype=bool)
# The following constants are used to test _apply_convective_criterion2.
VALID_TIME_UNIX_SEC = 1541823287 # 041447 UTC 10 Nov 2018
MIN_COMPOSITE_REFL_AML_DBZ = 15.
CRITERION2_FLAG_MATRIX = numpy.array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 1, 1, 1, 1]], dtype=bool)
# The following constants are used to test _apply_convective_criterion3.
MIN_ECHO_TOP_M_ASL = 6000.
ECHO_TOP_LEVEL_DBZ = 13.
CRITERION3_FLAG_MATRIX = numpy.array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1]], dtype=bool)
# The following constants are used to test _apply_convective_criterion4.
CRITERION4_FLAG_MATRIX = copy.deepcopy(CRITERION3_FLAG_MATRIX)
DUMMY_CRITERION3_FLAG_MATRIX = numpy.array([[1, 0, 0, 1, 1, 0, 1],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 1],
[1, 0, 0, 1, 0, 1, 0],
[1, 0, 0, 0, 1, 0, 0]], dtype=bool)
DUMMY_CRITERION4_FLAG_MATRIX = numpy.array([[0, 0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 1],
[1, 0, 0, 1, 0, 1, 0],
[1, 0, 0, 0, 1, 0, 0]], dtype=bool)
# The following constants are used to test _apply_convective_criterion5.
MIN_COMPOSITE_REFL_DBZ = 6.
CRITERION5_FLAG_MATRIX = numpy.array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1]], dtype=bool)
# The following constants are used to test find_classification_file.
TOP_DIRECTORY_NAME = 'foo'
CLASSIFN_FILE_NAME_UNZIPPED = (
'foo/2018/20181109/echo_classification_2018-11-10-041447.nc')
CLASSIFN_FILE_NAME_ZIPPED = (
'foo/2018/20181109/echo_classification_2018-11-10-041447.nc.gz')
class EchoClassificationTests(unittest.TestCase):
"""Each method is a unit test for echo_classification.py."""
def test_estimate_melting_levels(self):
"""Ensures correct output from _estimate_melting_levels."""
these_heights_m_asl = echo_classifn._estimate_melting_levels(
latitudes_deg=MELTING_LEVEL_LATITUDES_DEG,
valid_time_unix_sec=MELTING_LEVEL_TIME_UNIX_SEC)
self.assertTrue(numpy.allclose(
these_heights_m_asl, MELTING_LEVELS_M_ASL, atol=TOLERANCE))
def test_neigh_metres_to_rowcol_large(self):
"""Ensures correct output from _neigh_metres_to_rowcol.
        In this case the grid is very large (7001 x 3501).
"""
this_num_rows, this_num_columns = echo_classifn._neigh_metres_to_rowcol(
neigh_radius_metres=LARGE_RADIUS_METRES,
grid_metadata_dict=LARGE_GRID_METADATA_DICT)
self.assertTrue(this_num_rows == NUM_ROWS_IN_LARGE_NEIGH)
self.assertTrue(this_num_columns == NUM_COLUMNS_IN_LARGE_NEIGH)
def test_neigh_metres_to_rowcol_small(self):
"""Ensures correct output from _neigh_metres_to_rowcol.
In this case the grid is small (5 x 7).
"""
this_num_rows, this_num_columns = echo_classifn._neigh_metres_to_rowcol(
neigh_radius_metres=NEIGH_RADIUS_METRES,
grid_metadata_dict=GRID_METADATA_DICT)
self.assertTrue(this_num_rows == NUM_ROWS_IN_NEIGH)
self.assertTrue(this_num_columns == NUM_COLUMNS_IN_NEIGH)
def test_get_peakedness(self):
"""Ensures correct output from _get_peakedness."""
this_matrix_dbz = echo_classifn._get_peakedness(
reflectivity_matrix_dbz=REFLECTIVITY_MATRIX_DBZ,
num_rows_in_neigh=NUM_ROWS_IN_NEIGH,
num_columns_in_neigh=NUM_COLUMNS_IN_NEIGH)
self.assertTrue(numpy.allclose(
this_matrix_dbz, PEAKEDNESS_MATRIX_DBZ, atol=TOLERANCE))
def test_apply_convective_criterion1(self):
"""Ensures correct output from _apply_convective_criterion1."""
this_flag_matrix = echo_classifn._apply_convective_criterion1(
reflectivity_matrix_dbz=REFLECTIVITY_MATRIX_DBZ,
peakedness_neigh_metres=NEIGH_RADIUS_METRES,
max_peakedness_height_m_asl=MAX_PEAKEDNESS_HEIGHT_M_ASL,
halve_resolution_for_peakedness=False,
min_composite_refl_dbz=None,
grid_metadata_dict=GRID_METADATA_DICT)
self.assertTrue(numpy.array_equal(
this_flag_matrix, CRITERION1_FLAG_MATRIX))
def test_apply_convective_criterion2(self):
"""Ensures correct output from _apply_convective_criterion2."""
this_flag_matrix = echo_classifn._apply_convective_criterion2(
reflectivity_matrix_dbz=REFLECTIVITY_MATRIX_DBZ,
convective_flag_matrix=CRITERION1_FLAG_MATRIX,
grid_metadata_dict=GRID_METADATA_DICT,
valid_time_unix_sec=VALID_TIME_UNIX_SEC,
min_composite_refl_aml_dbz=MIN_COMPOSITE_REFL_AML_DBZ)
self.assertTrue(numpy.array_equal(
this_flag_matrix, CRITERION2_FLAG_MATRIX))
def test_apply_convective_criterion3(self):
"""Ensures correct output from _apply_convective_criterion3."""
this_flag_matrix = echo_classifn._apply_convective_criterion3(
reflectivity_matrix_dbz=REFLECTIVITY_MATRIX_DBZ,
convective_flag_matrix=CRITERION2_FLAG_MATRIX,
grid_metadata_dict=GRID_METADATA_DICT,
min_echo_top_m_asl=MIN_ECHO_TOP_M_ASL,
echo_top_level_dbz=ECHO_TOP_LEVEL_DBZ)
self.assertTrue(numpy.array_equal(
this_flag_matrix, CRITERION3_FLAG_MATRIX))
def test_apply_convective_criterion4_main(self):
"""Ensures correct output from _apply_convective_criterion4.
In this case the input is the "main" flag matrix (the criterion-3 matrix
created by actually running `_apply_convective_criterion3`).
"""
this_flag_matrix = echo_classifn._apply_convective_criterion4(
convective_flag_matrix=CRITERION3_FLAG_MATRIX, min_size_pixels=2
)
self.assertTrue(numpy.array_equal(
this_flag_matrix, CRITERION4_FLAG_MATRIX
))
def test_apply_convective_criterion4_dummy(self):
"""Ensures correct output from _apply_convective_criterion4.
In this case the input is a "dummy" matrix (*not* created by running
`_apply_convective_criterion3`).
"""
this_flag_matrix = echo_classifn._apply_convective_criterion4(
convective_flag_matrix=DUMMY_CRITERION3_FLAG_MATRIX,
min_size_pixels=2
)
self.assertTrue(numpy.array_equal(
this_flag_matrix, DUMMY_CRITERION4_FLAG_MATRIX
))
def test_apply_convective_criterion5(self):
"""Ensures correct output from _apply_convective_criterion5."""
this_flag_matrix = echo_classifn._apply_convective_criterion5(
reflectivity_matrix_dbz=REFLECTIVITY_MATRIX_DBZ,
convective_flag_matrix=CRITERION4_FLAG_MATRIX,
min_composite_refl_dbz=MIN_COMPOSITE_REFL_DBZ)
self.assertTrue(numpy.array_equal(
this_flag_matrix, CRITERION5_FLAG_MATRIX))
def test_find_convective_pixels(self):
"""Ensures correct output from find_convective_pixels."""
option_dict = {
echo_classifn.PEAKEDNESS_NEIGH_KEY: NEIGH_RADIUS_METRES,
echo_classifn.MAX_PEAKEDNESS_HEIGHT_KEY:
MAX_PEAKEDNESS_HEIGHT_M_ASL,
echo_classifn.MIN_ECHO_TOP_KEY: MIN_ECHO_TOP_M_ASL,
echo_classifn.ECHO_TOP_LEVEL_KEY: ECHO_TOP_LEVEL_DBZ,
echo_classifn.MIN_COMPOSITE_REFL_CRITERION1_KEY: None,
echo_classifn.MIN_COMPOSITE_REFL_CRITERION5_KEY:
MIN_COMPOSITE_REFL_DBZ,
echo_classifn.MIN_COMPOSITE_REFL_AML_KEY: MIN_COMPOSITE_REFL_AML_DBZ
}
this_flag_matrix = echo_classifn.find_convective_pixels(
reflectivity_matrix_dbz=REFLECTIVITY_MATRIX_DBZ,
grid_metadata_dict=GRID_METADATA_DICT,
valid_time_unix_sec=VALID_TIME_UNIX_SEC, option_dict=option_dict
)[0]
self.assertTrue(numpy.array_equal(
this_flag_matrix, CRITERION5_FLAG_MATRIX))
def test_find_file_desire_zipped_allow_either_no_raise(self):
"""Ensures correct output from find_classification_file.
In this case, desire_zipped = True; allow_zipped_or_unzipped = True;
and raise_error_if_missing = False.
"""
this_file_name = echo_classifn.find_classification_file(
top_directory_name=TOP_DIRECTORY_NAME,
valid_time_unix_sec=VALID_TIME_UNIX_SEC, desire_zipped=True,
allow_zipped_or_unzipped=True, raise_error_if_missing=False)
self.assertTrue(this_file_name == CLASSIFN_FILE_NAME_UNZIPPED)
def test_find_file_desire_zipped_allow_zipped_no_raise(self):
"""Ensures correct output from find_classification_file.
In this case, desire_zipped = True; allow_zipped_or_unzipped = False;
and raise_error_if_missing = False.
"""
this_file_name = echo_classifn.find_classification_file(
top_directory_name=TOP_DIRECTORY_NAME,
valid_time_unix_sec=VALID_TIME_UNIX_SEC, desire_zipped=True,
allow_zipped_or_unzipped=False, raise_error_if_missing=False)
self.assertTrue(this_file_name == CLASSIFN_FILE_NAME_ZIPPED)
def test_find_file_desire_unzipped_allow_either_no_raise(self):
"""Ensures correct output from find_classification_file.
In this case, desire_zipped = False; allow_zipped_or_unzipped = True;
and raise_error_if_missing = False.
"""
this_file_name = echo_classifn.find_classification_file(
top_directory_name=TOP_DIRECTORY_NAME,
valid_time_unix_sec=VALID_TIME_UNIX_SEC, desire_zipped=False,
allow_zipped_or_unzipped=True, raise_error_if_missing=False)
self.assertTrue(this_file_name == CLASSIFN_FILE_NAME_ZIPPED)
def test_find_file_desire_unzipped_allow_unzipped_no_raise(self):
"""Ensures correct output from find_classification_file.
        In this case, desire_zipped = False; allow_zipped_or_unzipped = False;
and raise_error_if_missing = False.
"""
this_file_name = echo_classifn.find_classification_file(
top_directory_name=TOP_DIRECTORY_NAME,
valid_time_unix_sec=VALID_TIME_UNIX_SEC, desire_zipped=False,
allow_zipped_or_unzipped=False, raise_error_if_missing=False)
self.assertTrue(this_file_name == CLASSIFN_FILE_NAME_UNZIPPED)
def test_find_file_desire_zipped_allow_either_raise(self):
"""Ensures correct output from find_classification_file.
In this case, desire_zipped = True; allow_zipped_or_unzipped = True;
and raise_error_if_missing = True.
"""
with self.assertRaises(ValueError):
echo_classifn.find_classification_file(
top_directory_name=TOP_DIRECTORY_NAME,
valid_time_unix_sec=VALID_TIME_UNIX_SEC, desire_zipped=True,
allow_zipped_or_unzipped=True, raise_error_if_missing=True)
def test_find_file_desire_zipped_allow_zipped_raise(self):
"""Ensures correct output from find_classification_file.
In this case, desire_zipped = True; allow_zipped_or_unzipped = False;
and raise_error_if_missing = True.
"""
with self.assertRaises(ValueError):
echo_classifn.find_classification_file(
top_directory_name=TOP_DIRECTORY_NAME,
valid_time_unix_sec=VALID_TIME_UNIX_SEC, desire_zipped=True,
allow_zipped_or_unzipped=False, raise_error_if_missing=True)
def test_find_file_desire_unzipped_allow_either_raise(self):
"""Ensures correct output from find_classification_file.
In this case, desire_zipped = False; allow_zipped_or_unzipped = True;
and raise_error_if_missing = True.
"""
with self.assertRaises(ValueError):
echo_classifn.find_classification_file(
top_directory_name=TOP_DIRECTORY_NAME,
valid_time_unix_sec=VALID_TIME_UNIX_SEC, desire_zipped=False,
allow_zipped_or_unzipped=True, raise_error_if_missing=True)
def test_find_file_desire_unzipped_allow_unzipped_raise(self):
"""Ensures correct output from find_classification_file.
        In this case, desire_zipped = False; allow_zipped_or_unzipped = False;
and raise_error_if_missing = True.
"""
with self.assertRaises(ValueError):
echo_classifn.find_classification_file(
top_directory_name=TOP_DIRECTORY_NAME,
valid_time_unix_sec=VALID_TIME_UNIX_SEC, desire_zipped=False,
allow_zipped_or_unzipped=False, raise_error_if_missing=True)
if __name__ == '__main__':
unittest.main()
| 42.339326 | 80 | 0.647577 | 2,466 | 18,841 | 4.510949 | 0.083536 | 0.026429 | 0.033441 | 0.039554 | 0.816972 | 0.742179 | 0.668465 | 0.617853 | 0.59475 | 0.545307 | 0 | 0.05078 | 0.271482 | 18,841 | 444 | 81 | 42.434685 | 0.759653 | 0.157688 | 0 | 0.359712 | 0 | 0 | 0.008403 | 0.007692 | 0 | 0 | 0 | 0 | 0.07554 | 1 | 0.068345 | false | 0 | 0.021583 | 0 | 0.093525 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
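The `_estimate_melting_levels` test in the file above encodes a simple model: a per-month intercept plus a per-month slope times absolute latitude. A pure-Python sketch of that shape — the coefficient values below are hypothetical; the real ones live in `MELT_LEVEL_INTERCEPT_BY_MONTH_M_ASL` and `MELT_LEVEL_SLOPE_BY_MONTH_M_DEG01`:

```python
def estimate_melting_levels_m_asl(latitudes_deg, intercept_m_asl, slope_m_deg01):
    """Melting level = intercept + slope * |latitude|, per the test's formula."""
    return [intercept_m_asl + slope_m_deg01 * abs(lat) for lat in latitudes_deg]

# Illustrative call with made-up November-like coefficients.
levels = estimate_melting_levels_m_asl([-45.0, 0.0, 45.0],
                                       intercept_m_asl=4500.0,
                                       slope_m_deg01=-40.0)
```

Because the model depends only on `|latitude|`, the estimate is symmetric about the equator, which is why the test sweeps latitudes from -90 to +90.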
a8696e66403bcd0cddf377103f52035cff243567 | 528 | py | Python | wemo/admin.py | 7ooL/web_home_auto | 66d1a96359154a2a8015fb8ebfabfedcf38f69a9 | [
"MIT"
] | null | null | null | wemo/admin.py | 7ooL/web_home_auto | 66d1a96359154a2a8015fb8ebfabfedcf38f69a9 | [
"MIT"
] | 8 | 2020-12-30T17:41:41.000Z | 2021-01-24T19:16:54.000Z | wemo/admin.py | 7ooL/HomeAuto | 66d1a96359154a2a8015fb8ebfabfedcf38f69a9 | [
"MIT"
] | null | null | null | from django.contrib import admin
import wemo.models as wemo
from homeauto.admin import make_discoverable, remove_discoverable
class WemoAdmin(admin.ModelAdmin):
list_display = ('name', 'id', 'type', 'status', 'enabled')
list_filter = ('type','status','enabled')
search_fields = ('name',)
actions = [make_discoverable, remove_discoverable]
admin.site.register(wemo.Device, WemoAdmin)
class WemoAccountAdmin(admin.ModelAdmin):
list_display = ('user',)
admin.site.register(wemo.Account, WemoAccountAdmin)
| 27.789474 | 65 | 0.744318 | 61 | 528 | 6.311475 | 0.52459 | 0.057143 | 0.114286 | 0.176623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126894 | 528 | 18 | 66 | 29.333333 | 0.835141 | 0 | 0 | 0 | 0 | 0 | 0.091082 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
a875316b50038c4a68ad39f7242aeb9165e411a9 | 1,200 | py | Python | domain/connection/services/ConnectorTypeService.py | muhammetbolat/pythondataintegrator | 5b274db8d39ca1340d535a500f04f6e734f1d54d | [
"MIT"
] | null | null | null | domain/connection/services/ConnectorTypeService.py | muhammetbolat/pythondataintegrator | 5b274db8d39ca1340d535a500f04f6e734f1d54d | [
"MIT"
] | null | null | null | domain/connection/services/ConnectorTypeService.py | muhammetbolat/pythondataintegrator | 5b274db8d39ca1340d535a500f04f6e734f1d54d | [
"MIT"
] | null | null | null | from typing import List
from injector import inject
from infrastructor.data.DatabaseSessionManager import DatabaseSessionManager
from infrastructor.data.Repository import Repository
from infrastructor.dependency.scopes import IScoped
from infrastructor.exception.OperationalException import OperationalException
from models.dao.connection.ConnectorType import ConnectorType
class ConnectorTypeService(IScoped):
@inject
def __init__(self,
database_session_manager: DatabaseSessionManager
):
self.database_session_manager = database_session_manager
self.connector_type_repository: Repository[ConnectorType] = Repository[ConnectorType](
database_session_manager)
def get(self) -> List[ConnectorType]:
"""
Get all
"""
entities = self.connector_type_repository.filter_by(IsDeleted=0).all()
return entities
def get_by_name(self, name) -> ConnectorType:
"""
Get
"""
entity = self.connector_type_repository.first(IsDeleted=0, Name=name)
if entity is None:
raise OperationalException("Connector Type Not Found")
return entity
| 34.285714 | 94 | 0.715833 | 117 | 1,200 | 7.162393 | 0.393162 | 0.081146 | 0.105012 | 0.096659 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002134 | 0.219167 | 1,200 | 34 | 95 | 35.294118 | 0.892209 | 0.009167 | 0 | 0 | 0 | 0 | 0.021016 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.304348 | 0 | 0.565217 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
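`ConnectorTypeService` above routes every read through `filter_by(IsDeleted=0)` / `first(IsDeleted=0, ...)` — a soft-delete convention. A minimal in-memory sketch of that repository pattern; the class below mirrors the service's usage but is hypothetical, not the project's SQLAlchemy-backed `Repository`:

```python
class InMemoryRepository:
    """Toy stand-in for a keyword-filtering repository with soft deletes."""

    def __init__(self, rows):
        self.rows = list(rows)

    def filter_by(self, **criteria):
        # Return every row matching all keyword criteria.
        return [row for row in self.rows
                if all(row.get(key) == value for key, value in criteria.items())]

    def first(self, **criteria):
        # Return the first match, or None (the service raises on None).
        matches = self.filter_by(**criteria)
        return matches[0] if matches else None

repo = InMemoryRepository([
    {"Name": "postgres", "IsDeleted": 0},
    {"Name": "legacy", "IsDeleted": 1},
])
```

Soft-deleted rows stay in storage but never surface, which is why the service pairs `first(...)` returning `None` with an explicit "not found" exception.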
a88238aa90a6a61ea36d190e25663ec0d54a09c0 | 2,493 | py | Python | fabfile.py | Ecotrust/formhub | 05033bb5aa152cc2cbcd7382c2c999d82b2c3276 | [
"BSD-2-Clause"
] | 1 | 2015-03-16T20:47:29.000Z | 2015-03-16T20:47:29.000Z | fabfile.py | Ecotrust/formhub | 05033bb5aa152cc2cbcd7382c2c999d82b2c3276 | [
"BSD-2-Clause"
] | 1 | 2016-11-22T13:08:58.000Z | 2016-11-22T13:08:58.000Z | fabfile.py | Ecotrust/formhub | 05033bb5aa152cc2cbcd7382c2c999d82b2c3276 | [
"BSD-2-Clause"
] | 3 | 2015-03-20T03:54:17.000Z | 2022-02-15T00:45:04.000Z | import os, sys
from fabric.api import env, run, cd
from fabric.decorators import hosts
DEFAULTS = {
'home': '/home/wsgi/srv/',
'repo_name': 'formhub',
}
DEPLOYMENTS = {
'dev': {
'home': '/home/ubuntu/srv/',
'host_string': 'ubuntu@23.21.82.214', # TODO: switch to dev.formhub.org
'project': 'formhub-ec2',
'key_filename': os.path.expanduser('~/.ssh/modilabs.pem'),
},
'prod': {
'home': '/home/ubuntu/srv/',
'host_string': 'ubuntu@formhub.org',
'project': 'formhub-ec2',
'key_filename': os.path.expanduser('~/.ssh/modilabs.pem'),
},
}
def run_in_virtualenv(command):
d = {
'activate': os.path.join(
env.project_directory, 'project_env', 'bin', 'activate'),
'command': command,
}
run('source %(activate)s && %(command)s' % d)
def check_key_filename(deployment_name):
    if 'key_filename' in DEPLOYMENTS[deployment_name] and \
            not os.path.exists(DEPLOYMENTS[deployment_name]['key_filename']):
        print("Cannot find required permissions file: %s" %
              DEPLOYMENTS[deployment_name]['key_filename'])
return False
return True
def setup_env(deployment_name):
env.update(DEFAULTS)
env.update(DEPLOYMENTS[deployment_name])
if not check_key_filename(deployment_name): sys.exit(1)
env.project_directory = os.path.join(env.home, env.project)
env.code_src = os.path.join(env.project_directory, env.repo_name)
env.wsgi_config_file = os.path.join(
env.project_directory, 'apache', 'environment.wsgi')
env.pip_requirements_file = os.path.join(env.code_src, 'requirements.pip')
def deploy(deployment_name, branch='master'):
setup_env(deployment_name)
with cd(env.code_src):
run("git fetch origin")
run("git checkout origin/%s" % branch)
run("git submodule init")
run("git submodule update")
run('find . -name "*.pyc" -exec rm -rf {} \;')
    # numpy pip install from requirements file fails
run_in_virtualenv("pip install numpy")
run_in_virtualenv("pip install -r %s" % env.pip_requirements_file)
with cd(env.code_src):
run_in_virtualenv("python manage.py syncdb")
run_in_virtualenv("python manage.py migrate")
run_in_virtualenv("python manage.py collectstatic --noinput")
run("sudo /etc/init.d/celeryd restart")
run("sudo /etc/init.d/celerybeat restart")
run("sudo reload gunicorn-formhub")
| 33.689189 | 79 | 0.648215 | 320 | 2,493 | 4.8875 | 0.35 | 0.080563 | 0.057545 | 0.04156 | 0.411765 | 0.262788 | 0.129156 | 0.086957 | 0.086957 | 0.086957 | 0 | 0.006064 | 0.206177 | 2,493 | 73 | 80 | 34.150685 | 0.784234 | 0.030886 | 0 | 0.131148 | 0 | 0 | 0.316618 | 0.009117 | 0 | 0 | 0 | 0.013699 | 0 | 0 | null | null | 0 | 0.04918 | null | null | 0.016393 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a88f260d4387784efa88dff71c1017e21e1e83ae | 424 | py | Python | sphinx/source/docs/user_guide/examples/interaction_checkbox_button_group.py | teresafds/bokeh | 95b2a74ff463cfabdf9e3390951fa380166e6691 | [
"BSD-3-Clause"
] | null | null | null | sphinx/source/docs/user_guide/examples/interaction_checkbox_button_group.py | teresafds/bokeh | 95b2a74ff463cfabdf9e3390951fa380166e6691 | [
"BSD-3-Clause"
] | null | null | null | sphinx/source/docs/user_guide/examples/interaction_checkbox_button_group.py | teresafds/bokeh | 95b2a74ff463cfabdf9e3390951fa380166e6691 | [
"BSD-3-Clause"
] | null | null | null | from bokeh.io import show
from bokeh.models import CheckboxButtonGroup, CustomJS
LABELS = ["Option 1", "Option 2", "Option 3"]
checkbox_button_group = CheckboxButtonGroup(labels=LABELS, active=[0, 1])
checkbox_button_group.js_on_event("button_click", CustomJS(args=dict(btn=checkbox_button_group), code="""
console.log('checkbox_button_group: active=' + btn.active, this.toString())
"""))
show(checkbox_button_group)
| 35.333333 | 105 | 0.771226 | 57 | 424 | 5.508772 | 0.526316 | 0.22293 | 0.302548 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012987 | 0.091981 | 424 | 11 | 106 | 38.545455 | 0.802597 | 0 | 0 | 0 | 0 | 0 | 0.275943 | 0.082547 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a8970963b81c6cd00d42a30b45086f5a9c8e4c32 | 22,114 | py | Python | colossalai/nn/layer/parallel_2p5d/_operation.py | DevinCheung/ColossalAI | 632e622de818697f9949e35117c0432d88f62c87 | [
"Apache-2.0"
] | null | null | null | colossalai/nn/layer/parallel_2p5d/_operation.py | DevinCheung/ColossalAI | 632e622de818697f9949e35117c0432d88f62c87 | [
"Apache-2.0"
] | null | null | null | colossalai/nn/layer/parallel_2p5d/_operation.py | DevinCheung/ColossalAI | 632e622de818697f9949e35117c0432d88f62c87 | [
"Apache-2.0"
] | null | null | null | from typing import Any, Tuple
import torch
import torch.distributed as dist
from torch import Tensor
from colossalai.context.parallel_mode import ParallelMode
from colossalai.core import global_context as gpc
from colossalai.utils import get_current_device
from torch.cuda.amp import custom_bwd, custom_fwd
def get_parallel_group(parallel_mode: ParallelMode):
return gpc.get_group(parallel_mode)
def get_global_rank():
return gpc.get_global_rank()
def get_parallel_rank(parallel_mode: ParallelMode):
return gpc.get_local_rank(parallel_mode)
class Matmul_AB_2p5D(torch.autograd.Function):
"""Matrix multiplication for :math:`C = AB`
"""
@staticmethod
@custom_fwd(cast_inputs=torch.float16)
def forward(ctx: Any,
A: Tensor,
B: Tensor,
tesseract_dim: int,
out_shape: Tuple[int, ...],
row_rank: int,
col_rank: int,
dep_rank: int,
row_parallel_mode: ParallelMode,
col_parallel_mode: ParallelMode,
data_parallel_rank: int,
pipeline_parallel_rank: int,
pipeline_parallel_size: int,
tensor_parallel_size: int) -> Tensor:
# A: [b / dq, s, h / q] -> [(b * s) / dq, h / q]
# B: [h / dq, s / q]
# C: [b / dq, s, s / q] -> [(b * s) / dq, s / q]
assert A.shape[-1] == B.shape[-2], \
'Invalid shapes: A={}, B={} for AB.'.format(A.shape, B.shape)
if ctx:
ctx.save_for_backward(A, B)
A_shape = A.shape
A = A.reshape((-1, A_shape[-1])).contiguous()
B_shape = B.shape
B = B.reshape((-1, B_shape[-1])).contiguous()
C_shape = (A.shape[0], B.shape[-1])
C = torch.zeros(C_shape, dtype=A.dtype, device=get_current_device())
A_list = [torch.empty_like(A) for _ in range(gpc.get_world_size(row_parallel_mode)-1)]
B_list = [torch.empty_like(B) for _ in range(gpc.get_world_size(col_parallel_mode)-1)]
A_list.insert(gpc.get_local_rank(row_parallel_mode), A)
B_list.insert(gpc.get_local_rank(col_parallel_mode), B)
        op_a = dist.all_gather(A_list, A, group=gpc.get_group(row_parallel_mode), async_op=True)
        # Finish the row-wise gather before launching the column-wise one so the
        # two collectives are issued in the same order on every rank.
        op_a.wait()
        op_b = dist.all_gather(B_list, B, group=gpc.get_group(col_parallel_mode), async_op=True)
        op_b.wait()
for i in range(tesseract_dim):
src_a = i + tesseract_dim * row_rank
src_b = i + tesseract_dim * col_rank
src_a = src_a % tesseract_dim
src_b = src_b % tesseract_dim
A_temp = A_list[src_a]
B_temp = B_list[src_b]
torch.addmm(C, A_temp, B_temp, out=C)
out = C.reshape(out_shape)
if ctx:
ctx.tesseract_dim = tesseract_dim
ctx.row_rank = row_rank
ctx.col_rank = col_rank
ctx.dep_rank = dep_rank
ctx.row_parallel_mode = row_parallel_mode
ctx.col_parallel_mode = col_parallel_mode
ctx.A_shape = A_shape
ctx.B_shape = B_shape
ctx.data_parallel_rank = data_parallel_rank
ctx.pipeline_parallel_rank = pipeline_parallel_rank
ctx.pipeline_parallel_size = pipeline_parallel_size
ctx.tensor_parallel_size = tensor_parallel_size
return out
@staticmethod
@custom_bwd
def backward(ctx: Any, output_grad: Tensor) -> Tuple[Tensor, ...]:
A, B = ctx.saved_tensors
with torch.no_grad():
A_grad = Matmul_ABT_2p5D.apply(
output_grad, B,
ctx.tesseract_dim, ctx.A_shape,
ctx.row_rank, ctx.col_rank, ctx.dep_rank,
ctx.row_parallel_mode,
ctx.col_parallel_mode,
ctx.data_parallel_rank,
ctx.pipeline_parallel_rank,
ctx.pipeline_parallel_size,
ctx.tensor_parallel_size
)
B_grad = Matmul_ATB_2p5D.apply(
A, output_grad,
ctx.tesseract_dim, ctx.B_shape,
ctx.row_rank, ctx.col_rank, ctx.dep_rank,
ctx.row_parallel_mode,
ctx.col_parallel_mode,
ctx.data_parallel_rank,
ctx.pipeline_parallel_rank,
ctx.pipeline_parallel_size,
ctx.tensor_parallel_size
)
return A_grad, B_grad, None, None, None, None, None, None, None, None, None, None, None, None, None
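Conceptually, `Matmul_AB_2p5D.forward` is the classic blocked (SUMMA-style) product: after the two all-gathers, every rank accumulates `C = sum_i A_i @ B_i` over `tesseract_dim` block pairs, which is what the `addmm` loop does. A minimal single-process sketch of that accumulation, using plain Python lists in place of gathered tensors (no distributed setup; all names here are illustrative):

```python
def matmul(X, Y):
    """Naive dense matmul on nested lists."""
    return [[sum(X[r][k] * Y[k][c] for k in range(len(Y)))
             for c in range(len(Y[0]))] for r in range(len(X))]

def blocked_matmul(A_list, B_list):
    """Accumulate C = sum_i A_list[i] @ B_list[i], mirroring the
    addmm loop over tesseract_dim in Matmul_AB_2p5D.forward."""
    rows, cols = len(A_list[0]), len(B_list[0][0])
    C = [[0] * cols for _ in range(rows)]
    for A_i, B_i in zip(A_list, B_list):
        P = matmul(A_i, B_i)
        for r in range(rows):
            for c in range(cols):
                C[r][c] += P[r][c]
    return C

# Splitting A column-wise and B row-wise reproduces the full product:
A = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
B = [[1, 0], [0, 1], [1, 1], [2, 2]]
A_blocks = [[row[:2] for row in A], [row[2:] for row in A]]
B_blocks = [B[:2], B[2:]]
assert blocked_matmul(A_blocks, B_blocks) == matmul(A, B)
```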
class Matmul_ABT_2p5D(torch.autograd.Function):
"""Matrix multiplication for :math:`C = AB^T`
"""
@staticmethod
@custom_fwd(cast_inputs=torch.float16)
def forward(ctx: Any,
A: Tensor,
B: Tensor,
tesseract_dim: int,
out_shape: Tuple[int, ...],
row_rank: int,
col_rank: int,
dep_rank: int,
row_parallel_mode: ParallelMode,
col_parallel_mode: ParallelMode,
data_parallel_rank: int,
pipeline_parallel_rank: int,
pipeline_parallel_size: int,
tensor_parallel_size: int
) -> Tensor:
assert A.shape[-1] == B.shape[-1], \
'Invalid shapes: A={}, B={} for ABT.'.format(A.shape, B.shape)
if ctx:
ctx.save_for_backward(A, B)
A_shape = A.shape
A = A.reshape((-1, A_shape[-1]))
B_shape = B.shape
B = B.reshape((-1, B_shape[-1]))
C_shape = (A.shape[0], B.shape[0])
C = torch.empty(C_shape, dtype=A.dtype, device=get_current_device())
for i in range(tesseract_dim):
B_temp = B.clone()
src_b = col_rank + i * tesseract_dim + dep_rank * (
tesseract_dim ** 2) + data_parallel_rank * pipeline_parallel_size * tensor_parallel_size + \
pipeline_parallel_rank * tensor_parallel_size
dist.broadcast(B_temp, src=src_b, group=gpc.get_group(col_parallel_mode))
C_temp = torch.matmul(A, B_temp.transpose(0, 1))
src_c = i + row_rank * tesseract_dim + dep_rank * (
tesseract_dim ** 2) + data_parallel_rank * pipeline_parallel_size * tensor_parallel_size + \
pipeline_parallel_rank * tensor_parallel_size
dist.reduce(C_temp, dst=src_c, group=gpc.get_group(row_parallel_mode))
if i == col_rank:
C = C_temp.clone()
out = C.reshape(out_shape)
if ctx:
ctx.tesseract_dim = tesseract_dim
ctx.row_rank = row_rank
ctx.col_rank = col_rank
ctx.dep_rank = dep_rank
ctx.row_parallel_mode = row_parallel_mode
ctx.col_parallel_mode = col_parallel_mode
ctx.A_shape = A_shape
ctx.B_shape = B_shape
ctx.data_parallel_rank = data_parallel_rank
ctx.pipeline_parallel_rank = pipeline_parallel_rank
ctx.pipeline_parallel_size = pipeline_parallel_size
ctx.tensor_parallel_size = tensor_parallel_size
return out
@staticmethod
@custom_bwd
def backward(ctx: Any, output_grad: Tensor) -> Tuple[Tensor, ...]:
A, B = ctx.saved_tensors
with torch.no_grad():
A_grad = Matmul_AB_2p5D.apply(
output_grad, B,
ctx.tesseract_dim, ctx.A_shape,
ctx.row_rank, ctx.col_rank, ctx.dep_rank,
ctx.row_parallel_mode,
ctx.col_parallel_mode,
ctx.data_parallel_rank,
ctx.pipeline_parallel_rank,
ctx.pipeline_parallel_size,
ctx.tensor_parallel_size
)
B_grad = Matmul_ATB_2p5D.apply(
output_grad, A,
ctx.tesseract_dim, ctx.B_shape,
ctx.row_rank, ctx.col_rank, ctx.dep_rank,
ctx.row_parallel_mode,
ctx.col_parallel_mode,
ctx.data_parallel_rank,
ctx.pipeline_parallel_rank,
ctx.pipeline_parallel_size,
ctx.tensor_parallel_size
)
return A_grad, B_grad, None, None, None, None, None, None, None, None, None, None, None, None, None
class Matmul_ATB_2p5D(torch.autograd.Function):
"""Matrix multiplication for :math:`C = A^TB`
"""
@staticmethod
@custom_fwd(cast_inputs=torch.float16)
def forward(ctx: Any,
A: Tensor,
B: Tensor,
tesseract_dim: int,
out_shape: Tuple[int, ...],
row_rank: int,
col_rank: int,
dep_rank: int,
row_parallel_mode: ParallelMode,
col_parallel_mode: ParallelMode,
data_parallel_rank: int,
pipeline_parallel_rank: int,
pipeline_parallel_size: int,
tensor_parallel_size: int):
assert A.shape[-2] == B.shape[-2], \
'Invalid shapes: A={}, B={} for ATB.'.format(A.shape, B.shape)
if ctx:
ctx.save_for_backward(A, B)
A_shape = A.shape
A = A.reshape((-1, A_shape[-1]))
B_shape = B.shape
B = B.reshape((-1, B_shape[-1]))
C_shape = (A.shape[-1], B.shape[-1])
C = torch.empty(C_shape, dtype=A.dtype, device=get_current_device())
for i in range(tesseract_dim):
A_temp = A.clone()
src_a = i + row_rank * tesseract_dim + dep_rank * (
tesseract_dim ** 2) + data_parallel_rank * pipeline_parallel_size * tensor_parallel_size + \
pipeline_parallel_rank * tensor_parallel_size
dist.broadcast(A_temp, src=src_a,
group=get_parallel_group(row_parallel_mode))
C_temp = torch.matmul(A_temp.transpose(0, 1), B)
src_c = col_rank + i * tesseract_dim + dep_rank * (
tesseract_dim ** 2) + data_parallel_rank * pipeline_parallel_size * tensor_parallel_size + \
pipeline_parallel_rank * tensor_parallel_size
dist.reduce(C_temp, dst=src_c,
group=get_parallel_group(col_parallel_mode))
if i == row_rank:
C = C_temp.clone()
out = C.reshape(out_shape)
if ctx:
ctx.tesseract_dim = tesseract_dim
ctx.row_rank = row_rank
ctx.col_rank = col_rank
ctx.dep_rank = dep_rank
ctx.row_parallel_mode = row_parallel_mode
ctx.col_parallel_mode = col_parallel_mode
ctx.A_shape = A_shape
ctx.B_shape = B_shape
ctx.data_parallel_rank = data_parallel_rank
ctx.pipeline_parallel_rank = pipeline_parallel_rank
ctx.pipeline_parallel_size = pipeline_parallel_size
ctx.tensor_parallel_size = tensor_parallel_size
return out
@staticmethod
@custom_bwd
def backward(ctx: Any, output_grad: Tensor) -> Tuple[Tensor, ...]:
A, B = ctx.saved_tensors
with torch.no_grad():
A_grad = Matmul_ABT_2p5D.apply(
B, output_grad,
ctx.tesseract_dim, ctx.A_shape,
ctx.row_rank, ctx.col_rank, ctx.dep_rank,
ctx.row_parallel_mode,
ctx.col_parallel_mode,
ctx.data_parallel_rank,
ctx.pipeline_parallel_rank,
ctx.pipeline_parallel_size,
ctx.tensor_parallel_size
)
B_grad = Matmul_AB_2p5D.apply(
A, output_grad,
ctx.tesseract_dim, ctx.B_shape,
ctx.row_rank, ctx.col_rank, ctx.dep_rank,
ctx.row_parallel_mode,
ctx.col_parallel_mode,
ctx.data_parallel_rank,
ctx.pipeline_parallel_rank,
ctx.pipeline_parallel_size,
ctx.tensor_parallel_size
)
return A_grad, B_grad, None, None, None, None, None, None, None, None, None, None, None, None, None
class Add_Bias_2p5D(torch.autograd.Function):
"""Matrix add bias: :math:`C = A + b`
"""
@staticmethod
@custom_fwd(cast_inputs=torch.float16)
def forward(ctx: Any,
input: Tensor,
bias: Tensor,
output_size_per_partition: int,
tesseract_dim: int,
row_rank: int,
col_rank: int,
dep_rank: int,
col_parallel_mode: ParallelMode,
skip_bias_add: bool,
data_parallel_rank: int,
pipeline_parallel_rank: int,
pipeline_parallel_size: int,
tensor_parallel_size: int
) -> Tensor:
if row_rank == 0:
bias_temp = bias.clone()
else:
bias_temp = torch.zeros(
output_size_per_partition,
dtype=bias.dtype,
device=get_current_device())
src_rank = col_rank + dep_rank * (
tesseract_dim ** 2) + data_parallel_rank * pipeline_parallel_size * tensor_parallel_size + \
pipeline_parallel_rank * tensor_parallel_size
dist.broadcast(bias_temp, src=src_rank, group=get_parallel_group(col_parallel_mode))
ctx.row_rank = row_rank
ctx.col_rank = col_rank
ctx.dep_rank = dep_rank
ctx.tesseract_dim = tesseract_dim
ctx.col_parallel_mode = col_parallel_mode
ctx.bias = skip_bias_add
ctx.data_parallel_rank = data_parallel_rank
ctx.pipeline_parallel_rank = pipeline_parallel_rank
ctx.pipeline_parallel_size = pipeline_parallel_size
ctx.tensor_parallel_size = tensor_parallel_size
if skip_bias_add:
return bias_temp
else:
output = input + bias_temp
return output
@staticmethod
@custom_bwd
def backward(ctx: Any, output_grad: Tensor) -> Tuple[Tensor, ...]:
row_rank = ctx.row_rank
col_rank = ctx.col_rank
dep_rank = ctx.dep_rank
tesseract_dim = ctx.tesseract_dim
col_parallel_mode = ctx.col_parallel_mode
data_parallel_rank = ctx.data_parallel_rank
pipeline_parallel_rank = ctx.pipeline_parallel_rank
pipeline_parallel_size = ctx.pipeline_parallel_size
tensor_parallel_size = ctx.tensor_parallel_size
if ctx.bias:
dst_rank = col_rank + dep_rank * (
tesseract_dim ** 2) + data_parallel_rank * pipeline_parallel_size * tensor_parallel_size + \
pipeline_parallel_rank * tensor_parallel_size
dist.reduce(output_grad, dst=dst_rank, group=get_parallel_group(col_parallel_mode))
if row_rank == 0:
return None, output_grad, None, None, None, None, None, None, None, None, None, None, None, None, None, None
else:
grad_tmp = torch.zeros_like(output_grad)
return None, grad_tmp, None, None, None, None, None, None, None, None, None, None, None, None, None, None
else:
reduce_dim = tuple(range(output_grad.ndim - 1))
reduce = torch.sum(output_grad, dim=reduce_dim)
dst_rank = col_rank + dep_rank * (
tesseract_dim ** 2) + data_parallel_rank * pipeline_parallel_size * tensor_parallel_size + \
pipeline_parallel_rank * tensor_parallel_size
dist.reduce(reduce, dst=dst_rank, group=get_parallel_group(col_parallel_mode))
if row_rank == 0:
return output_grad, reduce, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None
else:
reduce_tmp = torch.zeros_like(reduce)
return output_grad, reduce_tmp, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None
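The rank arithmetic `col_rank + dep_rank * tesseract_dim ** 2 + data_parallel_rank * pp_size * tp_size + pipeline_parallel_rank * tp_size` recurs in every broadcast/reduce above; it maps local 2.5D coordinates to a global rank. A small illustrative helper (not part of the library) evaluated with made-up parallel sizes:

```python
def col_src_rank(col_rank, dep_rank, tesseract_dim,
                 data_parallel_rank, pipeline_parallel_rank,
                 pipeline_parallel_size, tensor_parallel_size):
    """Global rank holding the row-0 shard of a column group (illustrative)."""
    return (col_rank
            + dep_rank * tesseract_dim ** 2
            + data_parallel_rank * pipeline_parallel_size * tensor_parallel_size
            + pipeline_parallel_rank * tensor_parallel_size)

# tesseract_dim q = 2 with depth d = 2 gives tensor_parallel_size = d * q * q = 8.
assert col_src_rank(1, 1, 2, 0, 0, 1, 8) == 5
# Adding a data-parallel replica shifts the whole tensor group by one block of 8:
assert col_src_rank(1, 1, 2, 1, 0, 1, 8) == 13
```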
class _LayerNorm_2p5D(torch.autograd.Function):
@staticmethod
@custom_fwd(cast_inputs=torch.float32)
def forward(ctx: Any,
input: Tensor,
E_x: Tensor,
Var_x: Tensor,
hidden_size: int,
row_parallel_mode: ParallelMode) -> Tensor:
input = input - E_x
# in here, input = x - E[x], Var_x = 1 / sqrt(Var[x] + eps)
ctx.hidden_size = hidden_size
output = input * Var_x
ctx.save_for_backward(output, Var_x)
ctx.row_parallel_mode = row_parallel_mode
return output
@staticmethod
@custom_bwd
def backward(ctx, output_grad):
row_parallel_mode = ctx.row_parallel_mode
x, Var_x = ctx.saved_tensors
# in here, Var_x = 1 / sqrt(Var[x] + eps), x = (x - E[x]) * Var_x
with torch.no_grad():
output_grad_sum = torch.sum(output_grad, dim=-1, keepdim=True)
torch.distributed.all_reduce(
output_grad_sum, group=get_parallel_group(row_parallel_mode))
output_grad_sum /= ctx.hidden_size
output_grad_mul_x_sum = torch.sum(
output_grad * x, dim=-1, keepdim=True)
torch.distributed.all_reduce(
output_grad_mul_x_sum, group=get_parallel_group(row_parallel_mode))
output_grad_mul_x_sum /= ctx.hidden_size
input_grad = output_grad.clone()
input_grad -= x * output_grad_mul_x_sum
input_grad -= output_grad_sum
input_grad *= Var_x
return input_grad, None, None, None, None, None, None
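The backward pass above implements the standard LayerNorm gradient, `dx = (g - x_hat * mean(g * x_hat) - mean(g)) * inv_std`, with the two means computed by all-reduce across the row group. A single-process numeric check of that formula against finite differences (plain Python; no distributed reduction, so each mean is just over the vector length):

```python
def layernorm(x, eps=1e-5):
    """Normalize x; returns (x_hat, inv_std) like the forward pass above."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    inv_std = (var + eps) ** -0.5
    return [(v - mean) * inv_std for v in x], inv_std

def layernorm_grad(g, x_hat, inv_std):
    # Mirrors _LayerNorm_2p5D.backward:
    # dx = (g - x_hat * mean(g * x_hat) - mean(g)) * inv_std
    n = len(g)
    g_mean = sum(g) / n
    gx_mean = sum(gi * xi for gi, xi in zip(g, x_hat)) / n
    return [(gi - xi * gx_mean - g_mean) * inv_std for gi, xi in zip(g, x_hat)]

x = [0.5, -1.0, 2.0, 0.25]
g = [1.0, 0.0, -0.5, 2.0]
x_hat, inv_std = layernorm(x)
analytic = layernorm_grad(g, x_hat, inv_std)

# Finite-difference check of d/dx_j of sum_i g_i * layernorm(x)_i.
h = 1e-6
for j in range(len(x)):
    xp = list(x); xp[j] += h
    xm = list(x); xm[j] -= h
    fp = sum(gi * yi for gi, yi in zip(g, layernorm(xp)[0]))
    fm = sum(gi * yi for gi, yi in zip(g, layernorm(xm)[0]))
    numeric = (fp - fm) / (2 * h)
    assert abs(numeric - analytic[j]) < 1e-4
```

A useful sanity property falls out of the formula: the gradient sums to (numerically) zero, since LayerNorm is invariant to constant shifts of its input.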
# class Sum_2p5D(torch.autograd.Function):
# """Compute the sum of input tensors
# """
# @staticmethod
# def forward(ctx,
# inputs,
# dim,
# tesseract_dim,
# row_parallel_mode,
# keepdim=False):
# # input: [b/q, s, h/q]
# ctx.save_for_backward(inputs)
# # sum: [b/q, s]
# out = torch.sum(inputs, dim=dim, keepdim=keepdim)
# torch.distributed.all_reduce(
# out, group=gpc.get_group(row_parallel_mode))
# return out
# @staticmethod
# def backward(ctx, output_grad):
# with torch.no_grad():
# inputs = ctx.saved_tensors
# input_grad = torch.ones(inputs.shape, dtype=output_grad.dtype)
# return input_grad, None, None, None, None, None
# class _ViT_Split_2p5D(torch.autograd.Function):
# @staticmethod
# @custom_fwd(cast_inputs=torch.float16)
# def forward(ctx, inputs, batch_size,
# tesseract_dim, tesseract_dep,
# xz_parallel_mode):
# # inputs: [b, s, h/q]
# # output: [b/dq, s, h/q]
# ctx.BATCH_SIZE = batch_size
# ctx.tesseract_dim = tesseract_dim
# ctx.tesseract_dep = tesseract_dep
# ctx.xz_parallel_mode = xz_parallel_mode
# xz_rank = gpc.get_local_rank(xz_parallel_mode)
# output = torch.chunk(inputs, tesseract_dep *
# tesseract_dim, dim=0)[xz_rank]
# output = output.clone()
# return output
# @staticmethod
# @custom_bwd
# def backward(ctx, output_grad):
# # output_grad: [b/dq, s, h/q]
# # grads: [b, s, h/q]
# # *
# grads_shape = (ctx.BATCH_SIZE,) + output_grad.shape[1:]
# grads = torch.empty(grads_shape,
# dtype=output_grad.dtype,
# device=get_current_device())
# dist.all_gather(list(grads.chunk(ctx.tesseract_dim * ctx.tesseract_dep, dim=0)),
# output_grad.contiguous(),
# group=get_parallel_group(ctx.xz_parallel_mode))
# return grads, None, None, None, None
class AllGatherLast(torch.autograd.Function):
@staticmethod
@custom_fwd(cast_inputs=torch.float16)
def forward(ctx: Any,
inputs: Tensor,
tesseract_dim: int,
col_parallel_mode: ParallelMode) -> Tensor:
ctx.tesseract_dim = tesseract_dim
ctx.row_rank = gpc.get_local_rank(col_parallel_mode)
last_dim = tesseract_dim * inputs.size(-1)
outputs_shape = (last_dim,) + inputs.shape[:-1]
outputs = torch.empty(
outputs_shape, dtype=inputs.dtype, device=get_current_device())
dist.all_gather(
list(outputs.chunk(tesseract_dim, dim=0)),
inputs.permute(2, 0, 1).contiguous(),
group=gpc.get_group(col_parallel_mode)
)
outputs = outputs.permute(1, 2, 0).contiguous()
return outputs
@staticmethod
@custom_bwd
def backward(ctx: Any, output_grad: Tensor) -> Tuple[Tensor, ...]:
grad = output_grad.chunk(ctx.tesseract_dim, dim=-1)[ctx.row_rank]
return grad.contiguous(), None, None
class SplitFirst(torch.autograd.Function):
@staticmethod
@custom_fwd(cast_inputs=torch.float16)
def forward(ctx: Any,
inputs: Tensor,
tesseract_dim: int,
col_parallel_mode: ParallelMode) -> Tensor:
ctx.tesseract_dim = tesseract_dim
ctx.batch_size = inputs.size(0)
ctx.para_mode = col_parallel_mode
row_rank = gpc.get_local_rank(col_parallel_mode)
outputs = inputs.chunk(tesseract_dim, dim=0)[row_rank]
return outputs
@staticmethod
@custom_bwd
def backward(ctx: Any, output_grad: Tensor) -> Tuple[Tensor, ...]:
grad_shape = (ctx.batch_size,) + output_grad.shape[1:]
grad = torch.empty(
grad_shape, dtype=output_grad.dtype, device=get_current_device())
dist.all_gather(
list(grad.chunk(ctx.tesseract_dim, dim=0)),
output_grad.contiguous(),
group=gpc.get_group(ctx.para_mode)
)
        return grad, None, None

# pyqudie/MongoExceptions.py (repo: evi0s/pyqudie, license: MIT)
'''MongoExceptions'''
class MongoExceptions(Exception):
def __init__(self, *args):
self.args = args
class NoDatabaseException(MongoExceptions):
def __init__(self, message = 'No such Database!', code = 422, args = ('No such Database!',)):
self.args = args
self.message = message
self.code = code
class InvalidArgumentsException(MongoExceptions):
def __init__(self, message = 'Invalid Arguments!', code = 422, args = ('Invalid Arguments!',)):
self.args = args
self.message = message
self.code = code
class ConnectFailedException(MongoExceptions):
def __init__(self, message = 'Authentication Required or Connection Error!', code = 422, args = ('Authentication Required or Connection Error!',)):
self.args = args
self.message = message
self.code = code
class InvalidCollectionException(MongoExceptions):
def __init__(self, message = 'Invalid Collection!', code = 422, args = ('Invalid Collection!',)):
self.args = args
self.message = message
self.code = code
class InvalidQueryObjectException(MongoExceptions):
def __init__(self, message = 'Invalid Query Object!', code = 422, args = ('Invalid Query Object!',)):
self.args = args
self.message = message
self.code = code
class InvalidInsertObjectException(MongoExceptions):
def __init__(self, message = 'Invalid Insert Object!', code = 422, args = ('Invalid Insert Object!',)):
self.args = args
self.message = message
self.code = code
class InvalidRemoveQueryException(MongoExceptions):
def __init__(self, message = 'Invalid Remove Query!', code = 422, args = ('Invalid Remove Query!',)):
self.args = args
self.message = message
self.code = code
class InvalidRemoveOptionException(MongoExceptions):
def __init__(self, message = 'Invalid Remove Option!', code = 422, args = ('Invalid Remove Option!',)):
self.args = args
self.message = message
self.code = code
class RemoveAllNotConfirmedException(MongoExceptions):
def __init__(self, message = 'Remove All not Confirmed!', code = 422, args = ('Remove All not Confirmed!',)):
self.args = args
self.message = message
self.code = code
class OperationFailedException(MongoExceptions):
def __init__(self, message = 'Operation Failed!', code = 422, args = ('Operation Failed!',)):
self.args = args
self.message = message
self.code = code
class InvalidUpdateQueryException(MongoExceptions):
def __init__(self, message = 'Invalid Update Query!', code = 422, args = ('Invalid Update Query!',)):
self.args = args
self.message = message
self.code = code
class InvalidUpdateDictException(MongoExceptions):
def __init__(self, message = 'Invalid Update Dict!', code = 422, args = ('Invalid Update Dict!',)):
self.args = args
self.message = message
self.code = code
class InvalidUpdateOptionException(MongoExceptions):
def __init__(self, message = 'Invalid Update Option!', code = 422, args = ('Invalid Update Option!',)):
self.args = args
self.message = message
self.code = code
class InvalidObjectIdException(MongoExceptions):
def __init__(self, message = 'Invalid ObjectId!', code = 422, args = ('Invalid ObjectId!',)):
self.args = args
self.message = message
self.code = code
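Each subclass above repeats the same three-assignment `__init__` with only the message changed. As a sketch (not part of pyqudie's public API), that boilerplate could be generated from a message table with `type()`:

```python
class MongoError(Exception):
    """Illustrative base class mirroring MongoExceptions above."""
    def __init__(self, message, code=422):
        super(MongoError, self).__init__(message)
        self.message = message
        self.code = code

def make_exception(name, message):
    # Build a subclass whose __init__ defaults to the given message/code.
    def __init__(self, message=message, code=422):
        MongoError.__init__(self, message, code)
    return type(name, (MongoError,), {'__init__': __init__})

NoDatabaseError = make_exception('NoDatabaseError', 'No such Database!')

try:
    raise NoDatabaseError()
except MongoError as e:
    assert (e.message, e.code) == ('No such Database!', 422)
```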

# PyDynamic/misc/__init__.py (repo: EnjoyLifeFund/macHighSierra-py36-pkgs, licenses: BSD-3-Clause, BSD-2-Clause, MIT)
# -*- coding: utf-8 -*-
"""
The `PyDynamic.misc` module provides various functions and methods which are used in the examples and in some of the
other implemented routines.
"""
from .filterstuff import db, grpdelay, mapinside, isstable, kaiser_lowpass, savitzky_golay
from .impinvar import impinvar
from .SecondOrderSystem import sos_FreqResp, sos_phys2filter, sos_absphase, sos_realimag
from .testsignals import shocklikeGaussian, GaussianPulse, squarepulse, rect, corr_noise
from .tools import print_mat, print_vec, make_semiposdef
__all__ = ['db', 'grpdelay', 'mapinside', 'isstable', 'kaiser_lowpass', 'savitzky_golay',
'impinvar', 'sos_absphase', 'sos_realimag', 'sos_FreqResp', 'sos_phys2filter',
           'shocklikeGaussian', 'GaussianPulse', 'squarepulse', 'rect', 'corr_noise',
           'print_vec', 'print_mat', 'make_semiposdef']

# python/src/project/py27/flask_base/flask_online_calculator/config.py (repo: hhstore/learning-notes, license: MIT)
# -*- coding:utf-8 -*-
CSRF_ENABLED = True      # Enables CSRF (cross-site request forgery) protection, making the application more secure.
SECRET_KEY = "TEST_BLOG" # Only needed when CSRF is enabled; used to build the cryptographic token that validates forms. Set it to a hard-to-guess value.

# jupyterlab_iframe/proxy.py (repo: choldgraf/jupyterlab_iframe, license: Apache-2.0)
import tornado.gen
import tornado.web
import tornado.websocket
import tornado.httpclient
from notebook.base.handlers import IPythonHandler
from tornado_proxy_handlers import ProxyHandler as TProxyHandler, ProxyWSHandler as TProxyWSHandler
class ProxyHandler(IPythonHandler, TProxyHandler):
def initialize(self, **kwargs):
super(ProxyHandler, self).initialize(**kwargs)
@tornado.gen.coroutine
def get(self, *args):
        '''Proxy the GET request to the URL given by the ``path`` argument'''
yield TProxyHandler.get(self, url=self.get_argument('path'))
class ProxyWSHandler(TProxyWSHandler):
@tornado.gen.coroutine
def open(self, *args):
path = self.get_argument('path')
yield super(ProxyWSHandler, self).open(url=path)

# test/test_contact_data.py (repo: viktoryiakuzmitskaya/pytest_examples, license: Apache-2.0)
from model.contact import Contact
import re
from random import randrange
def test_contact_data_for_random_contact(app):
if app.contact.count() == 0:
app.contact.create(Contact(firstname="John", lastname="Connor", address=("%s, %s %s" % ("Los Angeles", str(randrange(1000)), "Nickel Road")), workphone="w44654532", email="hgjhg@.mnk.com"))
old_contacts = app.contact.get_contact_list()
index = randrange(len(old_contacts))
contact_from_home_page = app.contact.get_contact_list()[index]
contact_from_edit_page = app.contact.get_contact_info_from_edit_page(index)
#compare firstname
assert contact_from_home_page.firstname == contact_from_edit_page.firstname
#compare lastname
assert contact_from_home_page.lastname == contact_from_edit_page.lastname
#compare address
assert contact_from_home_page.address == contact_from_edit_page.address
#compare phones
assert contact_from_home_page.all_phones_from_home_page == merge_phones_like_on_home_page(contact_from_edit_page)
#compare emails
assert contact_from_home_page.all_emails_from_home_page == merge_emails_like_on_home_page(contact_from_edit_page)
def clear(s):
return re.sub("[() -]", "", s)
def merge_phones_like_on_home_page(contact):
return "\n".join(filter(lambda x: x != "", map(lambda x: clear(x), filter(lambda x: x is not None, [contact.homephone, contact.mobilephone, contact.workphone, contact.secondaryphone]))))
def merge_emails_like_on_home_page(contact):
return "\n".join(filter(lambda x: x != "", filter(lambda x: x is not None, [contact.email, contact.email2, contact.email3])))
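The two merge helpers reproduce the way the home page concatenates cleaned, non-empty fields. A self-contained illustration with a stand-in contact object (the real model lives in `model.contact`; `FakeContact` and the redefined helper exist only for this demonstration):

```python
import re

def clear(s):
    # Strip parentheses, spaces, and dashes, as the home page does.
    return re.sub("[() -]", "", s)

def merge_phones(contact):
    return "\n".join(filter(lambda x: x != "",
                            map(clear, filter(lambda x: x is not None,
                                              [contact.homephone, contact.mobilephone,
                                               contact.workphone, contact.secondaryphone]))))

class FakeContact(object):
    def __init__(self, **kw):
        self.__dict__.update(kw)

c = FakeContact(homephone='(123) 45-67', mobilephone=None,
                workphone='', secondaryphone='89 10')
# None fields are dropped, empty strings filtered, punctuation stripped:
assert merge_phones(c) == '1234567\n8910'
```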
def test_contact_db_info_matches_ui(app, db):
ui_list = app.contact.get_contact_list()
def clean(contact):
return Contact(id=contact.id, firstname=contact.firstname.strip(), lastname=contact.lastname.strip(),
address=contact.address.strip(), all_phones_from_home_page=merge_phones_like_on_home_page(contact),
all_emails_from_home_page=merge_emails_like_on_home_page(contact))
db_list = map(clean, db.get_contact_list())
    assert sorted(ui_list, key=Contact.id_or_max) == sorted(db_list, key=Contact.id_or_max)

# server/test/user_test.py (repo: ioku-jts/LinkedList, license: MIT)
"""
create user, log in, and get information back
"""
import requests
import json
headers = {'content-type': 'application/json'}
url = 'http://127.0.0.1:5000'
create_user_route='/user/createaccount'
login_user_route='/user/login'
get_user_data_route='/user'
create_user_data = {'username': 'Alice Test',
'email_address': 'alice@test.com',
'password': 'test123',
'password_conf': 'test123'}
login_user_data = {'email_address': 'alice@test.com', 'password': 'test123'}
try:
create_response = requests.post(url+create_user_route, data=json.dumps(create_user_data), headers=headers)
print 'status code:',create_response.status_code,'\nrequest body:',create_response.request.body,'\nresponse text:',create_response.text,'\n'
login_response = requests.post(url+login_user_route, data=json.dumps(login_user_data), headers=headers)
print 'status code:',login_response.status_code,'\nrequest body:',login_response.request.body,'\nresponse text:',login_response.text,'\n'
session_api_key = json.loads(login_response.text)['session_api_key']
get_user_data = {'session_api_key': session_api_key, 'email_address': 'alice@test.com'}
get_response = requests.post(url+get_user_data_route, data=json.dumps(get_user_data), headers=headers)
print 'status code:',get_response.status_code,'\nrequest body:',get_response.request.body,'\nresponse text:',get_response.text,'\n'
except Exception as e:
print e

# agent.py (repo: empea-careercriminal/the_pool, license: MIT)
class Agent(object):
"""Keeps relevant data of NPC and handles behavior."""
def __init__(self, name, hitpoints, strenght):
self.name = name
self.hitpoints = hitpoints
self.strenght = strenght
def move(self):
pass
def talk(self):
pass
def give_item(self):
pass
def take_item(self):
pass
def attack(self):
pass
def defend(self):
pass
alia = Agent('Alia', 50, 1)
gertrude = Agent('Gertrude', 50, 1)
dicker_junge = Agent('Marek', 50, 1)
keines_maedchen = Agent('Sophia', 50, 1)
james = Agent('james', 50, 1)
gerald = Agent('Gerald', 50, 1)
samira = Agent('samira', 50, 1)
lisa = Agent('lisa', 50, 1)
bergtroll = Agent('Gronkh', 50, 1)
fledermaeuse = Agent('Die 3 Fledermaeuse', 50, 1)
kraehen = Agent('Die 3 Kraehen', 50, 1)
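`attack` and `defend` above are still stubs. One possible way they could interact, sketched with an invented damage model on a stand-in class (nothing here is prescribed by the original code):

```python
class FightingAgent(object):
    """Illustrative stand-in for Agent with a toy damage model."""
    def __init__(self, name, hitpoints, strenght):
        self.name = name
        self.hitpoints = hitpoints
        self.strenght = strenght
        self.defending = False

    def defend(self):
        # Defending halves the next incoming hit.
        self.defending = True

    def attack(self, other):
        damage = self.strenght * (0.5 if other.defending else 1)
        other.hitpoints -= damage
        other.defending = False  # the guard is spent by one hit
        return damage

hero = FightingAgent('Alia', 50, 4)
troll = FightingAgent('Gronkh', 50, 6)
troll.defend()
assert hero.attack(troll) == 2.0   # 4 strength halved by the guard
assert troll.hitpoints == 48.0
```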
765de538d08fcae69fc56a7784a2a8f37c6e64b5 | 1,588 | py | Python | Linux/etc/decript.py | Dave360-crypto/Oblivion | 0f5619ecba6a9b1ebc6dc6f4988ef6c542bf8ca3 | [
"BSD-3-Clause"
] | 339 | 2020-11-30T16:02:29.000Z | 2022-03-29T22:10:44.000Z | Linux/etc/decript.py | tracid56/Oblivion | f16dffbb6fab18c178aacda7f177ec3ae82d1997 | [
"BSD-3-Clause"
] | 5 | 2021-01-03T18:59:02.000Z | 2021-12-09T13:22:57.000Z | Linux/etc/decript.py | tracid56/Oblivion | f16dffbb6fab18c178aacda7f177ec3ae82d1997 | [
"BSD-3-Clause"
] | 71 | 2020-11-30T19:38:04.000Z | 2022-03-28T05:20:34.000Z | #!/usr/bin/python
import os
import pathlib
from cryptography.fernet import Fernet
# Global variables/Variáveis globais.
path_atual_dc = str(pathlib.Path(__file__).parent.absolute())
path_dc_final = path_atual_dc.replace('/etc','')
def decript_file(arquivo, chave=None):
"""
Decrypt a file/Desencriptografa uma arquivo.
:param arquivo: Path file/Local do arquivo.
:param chave: Key/Chave
"""
    if chave is None:
with open(f'{path_dc_final}/etc/key_crypt.txt', 'r') as pegar_key:
key = pegar_key.read()
input_file = arquivo #+ '.encrypted'
output_file = arquivo
with open(input_file, 'rb') as f:
data = f.read()
fernet = Fernet(key)
decrypted = fernet.decrypt(data)
with open(output_file, 'wb') as f:
f.write(decrypted)
arquivo_f = str(arquivo)
arquivo_f = arquivo_f.replace('.encrypted', '')
os.rename(arquivo, arquivo_f)
else:
try:
key = str(chave)
input_file = arquivo
output_file = arquivo
with open(input_file, 'rb') as f:
data = f.read()
fernet = Fernet(key)
try:
decrypted = fernet.decrypt(data)
with open(output_file, 'wb') as f:
f.write(decrypted)
arquivo_f = str(arquivo)
arquivo_f = arquivo_f.replace('.encrypted', '')
os.rename(arquivo, arquivo_f)
            except Exception:
                # Ignore files that cannot be decrypted (wrong key or corrupt data).
                pass
        except Exception:
            # Ignore errors from an invalid key or an unreadable file.
            pass
| 24.430769 | 74 | 0.552897 | 184 | 1,588 | 4.597826 | 0.320652 | 0.07565 | 0.070922 | 0.049645 | 0.472813 | 0.472813 | 0.472813 | 0.472813 | 0.472813 | 0.472813 | 0 | 0 | 0.338791 | 1,588 | 64 | 75 | 24.8125 | 0.805714 | 0.11335 | 0 | 0.717949 | 0 | 0 | 0.047757 | 0.023878 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025641 | false | 0.051282 | 0.076923 | 0 | 0.102564 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
766373b62520d853d63f1ad8e10a51908e148097 | 1,659 | py | Python | binding-python/runtime/src/main/python/etch/binding/transport/__init__.py | apache/etch | 5a875755019a7f342a07c8c368a50e3efb6ae68c | [
"ECL-2.0",
"Apache-2.0"
] | 9 | 2015-02-14T15:09:54.000Z | 2021-11-10T15:09:45.000Z | binding-python/runtime/src/main/python/etch/binding/transport/__init__.py | apache/etch | 5a875755019a7f342a07c8c368a50e3efb6ae68c | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | binding-python/runtime/src/main/python/etch/binding/transport/__init__.py | apache/etch | 5a875755019a7f342a07c8c368a50e3efb6ae68c | [
"ECL-2.0",
"Apache-2.0"
] | 14 | 2015-04-20T10:35:00.000Z | 2021-11-10T15:09:35.000Z | """
# Licensed to the Apache Software Foundation (ASF) under one *
# or more contributor license agreements. See the NOTICE file *
# distributed with this work for additional information *
# regarding copyright ownership. The ASF licenses this file *
# to you under the Apache License, Version 2.0 (the *
# "License"); you may not use this file except in compliance *
# with the License. You may obtain a copy of the License at *
# *
# http://www.apache.org/licenses/LICENSE-2.0 *
# *
# Unless required by applicable law or agreed to in writing, *
# software distributed under the License is distributed on an *
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY *
# KIND, either express or implied. See the License for the *
# specific language governing permissions and limitations *
# under the License.
"""
from __future__ import absolute_import
from .ArrayValue import *
from .DefaultDeliveryService import *
from .FormatFactory import *
from .MailboxManager import *
from .Messagizer import *
from .PlainMailbox import *
from .PlainMailboxManager import *
from .SessionMessage import *
from .TaggedDataInput import *
from .TaggedDataOutput import *
from .TcpTransportFactory import *
from .TransportMessage import *
from .UnwantedMessage import *
#from .fmt import *
#from .filters import *
from . import fmt
from . import filters
| 43.657895 | 65 | 0.614225 | 174 | 1,659 | 5.827586 | 0.522989 | 0.147929 | 0.025641 | 0.031558 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003584 | 0.327306 | 1,659 | 37 | 66 | 44.837838 | 0.905018 | 0.655214 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
7666c1a45d318801873b4b7efa5a3ed9aca8e28f | 3,008 | py | Python | util/postprocessing.py | ZeayW/graph-contrastive-learning | b8952b677ec30110f8a616ba7162ae738d5d4052 | [
"Apache-2.0"
] | null | null | null | util/postprocessing.py | ZeayW/graph-contrastive-learning | b8952b677ec30110f8a616ba7162ae738d5d4052 | [
"Apache-2.0"
] | null | null | null | util/postprocessing.py | ZeayW/graph-contrastive-learning | b8952b677ec30110f8a616ba7162ae738d5d4052 | [
"Apache-2.0"
] | null | null | null | import sys
import os
import pickle
sys.path.append("..")
import util.structural as structural
import util.verilog as verilog
import dgl
if __name__ == "__main__":
folder = "../GCN/predicts/io/plus2/nl55"
total = 0
total_matched = 0
tried = 0
# find input from outputs
for case in os.listdir(folder):
# if case != "ut7_sample3.pkl":
# continue
case_name = os.path.join(folder, case)
with open(case_name, "rb") as f:
g, pred_i, pred_o = pickle.load(f)
assert len(pred_i) == len(pred_o) == g.number_of_nodes()
matched = set()
for idx, is_o in enumerate(pred_o):
in_s = []
if is_o == 0: # prediction: not output node
continue
for depth, nodes in enumerate(dgl.bfs_nodes_generator(g, idx, True)):
for n in nodes:
n = n.item()
tried += 1
if pred_i[n] == 1:
matched.add(n)
for n in nodes:
n = n.item()
if pred_i[n] == 1:
in_s.append(n)
matched.add(n)
if len(in_s) == 2:
break
if len(in_s) == 2:
break
print(
case,
len(matched) / len([1 for v in pred_i if v == 1]),
len(matched),
len([1 for v in pred_i if v == 1]),
)
total += len([1 for v in pred_i if v == 1])
total_matched += len(matched)
print(total_matched / total, total, total_matched, tried)
    # find outputs from inputs
total = 0
total_matched = 0
tried = 0
for case in os.listdir(folder):
# if case != "ut7_sample3.pkl":
# continue
case_name = os.path.join(folder, case)
with open(case_name, "rb") as f:
g, pred_i, pred_o = pickle.load(f)
assert len(pred_i) == len(pred_o) == g.number_of_nodes()
matched = set()
for idx, is_i in enumerate(pred_i):
            if is_i == 0:  # prediction: not input node
continue
is_match = False
for depth, nodes in enumerate(dgl.bfs_nodes_generator(g, idx, False)):
if depth == 0: # cannot count self as output...
continue
for n in nodes:
n = n.item()
tried += 1
if pred_o[n] == 1:
matched.add(n)
is_match = True
if is_match:
break
print(
case,
len(matched) / len([1 for v in pred_o if v == 1]),
len(matched),
len([1 for v in pred_o if v == 1]),
)
total += len([1 for v in pred_o if v == 1])
total_matched += len(matched)
print(total_matched / total, total, total_matched, tried)
| 33.054945 | 82 | 0.470745 | 388 | 3,008 | 3.494845 | 0.198454 | 0.036873 | 0.030973 | 0.035398 | 0.752212 | 0.725664 | 0.661504 | 0.612094 | 0.612094 | 0.612094 | 0 | 0.020302 | 0.426862 | 3,008 | 90 | 83 | 33.422222 | 0.766241 | 0.073138 | 0 | 0.620253 | 0 | 0 | 0.015484 | 0.010443 | 0 | 0 | 0 | 0 | 0.025316 | 1 | 0 | false | 0 | 0.075949 | 0 | 0.075949 | 0.050633 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
766967048f6373925c88984d0ddb17403b5df64f | 2,733 | py | Python | project/schemas.py | firewut/image-resize-service | 33b8601e047fc4827c036c93686f39f9dd864956 | [
"MIT"
] | null | null | null | project/schemas.py | firewut/image-resize-service | 33b8601e047fc4827c036c93686f39f9dd864956 | [
"MIT"
] | null | null | null | project/schemas.py | firewut/image-resize-service | 33b8601e047fc4827c036c93686f39f9dd864956 | [
"MIT"
] | null | null | null | DIRECT_LINK_SCHEMA = {
"type": "object",
"properties": {
"local_file": {
"type": ["string", "null"],
"description": "local zip file path"
}
}
}
IMAGE_SCHEMA = {
"type": "object",
"properties": {
"original": {
"type": "file"
},
"size": {
"type": "array",
"items": {
"type": "number"
}
},
"custom_size": {
"type": "file",
"processors": [
{
"name": "resize",
"in": {
"original_image": {
"property": "original"
},
"size": {
"property": "size"
}
}
}
]
},
"120x120": {
"type": "file",
"processors": [
{
"name": "resize",
"in": {
"original_image": {
"property": "original"
},
"size": {
"value": [120, 120]
}
}
}
]
},
"152x152": {
"type": "file",
"processors": [
{
"name": "resize",
"in": {
"original_image": {
"property": "original"
},
"size": {
"value": [152, 152]
}
}
}
]
},
"167x167": {
"type": "file",
"processors": [
{
"name": "resize",
"in": {
"original_image": {
"property": "original"
},
"size": {
"value": [167, 167]
}
}
}
]
},
"180x180": {
"type": "file",
"processors": [
{
"name": "resize",
"in": {
"original_image": {
"property": "original"
},
"size": {
"value": [180, 180]
}
}
}
]
}
}
} | 26.278846 | 50 | 0.212221 | 105 | 2,733 | 5.428571 | 0.32381 | 0.084211 | 0.157895 | 0.192982 | 0.587719 | 0.587719 | 0.587719 | 0.587719 | 0.587719 | 0.587719 | 0 | 0.052174 | 0.663374 | 2,733 | 104 | 51 | 26.278846 | 0.567391 | 0 | 0 | 0.38835 | 0 | 0 | 0.193489 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7670a8d55e994e38c8ac686e3d1a057c36b057e5 | 1,779 | py | Python | examples/WiPy/02_simple_ssl.py | meznat/blynk-library-python-fork | 7af526de01d666408308a84befdf1f4233c9e134 | [
"MIT"
] | null | null | null | examples/WiPy/02_simple_ssl.py | meznat/blynk-library-python-fork | 7af526de01d666408308a84befdf1f4233c9e134 | [
"MIT"
] | null | null | null | examples/WiPy/02_simple_ssl.py | meznat/blynk-library-python-fork | 7af526de01d666408308a84befdf1f4233c9e134 | [
"MIT"
] | null | null | null | """
Blynk is a platform with iOS and Android apps to control
Arduino, Raspberry Pi and the likes over the Internet.
You can easily build graphic interfaces for all your
projects by simply dragging and dropping widgets.
Downloads, docs, tutorials: http://www.blynk.cc
Sketch generator: http://examples.blynk.cc
Blynk community: http://community.blynk.cc
Social networks: http://www.fb.com/blynkapp
http://twitter.com/blynk_app
This example shows how to make a secure connection using SSL.
Before running this example:
The server certificate must be uploaded to the WiPy. This
can easily done via FTP. Take the file 'ca.pem' located in
the blynk examples folder and put it in '/flash/cert/'.
Similary to firmware updates, certificates go into the internal
file system, so it won't be visible after being transferred.
In your Blynk App project:
Add a Gauge widget, bind it to Analog Pin 5.
Add a Slider widget, bind it to Digital Pin 25.
Run the App (green triangle in the upper right corner).
Don't forget to change WIFI_SSID, WIFI_AUTH and BLYNK_AUTH ;)
"""
import BlynkLib
from network import WLAN
from machine import RTC
WIFI_SSID = 'YourWiFiNetwork'
WIFI_AUTH = (WLAN.WPA2, 'YourWiFiPassword')
BLYNK_AUTH = 'YourAuthToken'
# Set the current time (mandatory to validate certificates)
RTC(datetime=(2017, 4, 18, 11, 30, 0, 0, None))
# Connect to WiFi
wifi = WLAN(mode=WLAN.STA)
wifi.connect(WIFI_SSID, auth=WIFI_AUTH)
while not wifi.isconnected():
pass
print(wifi.ifconfig())
# Initialize Blynk with security enabled
blynk = BlynkLib.Blynk(BLYNK_AUTH, ssl=True)
# Start Blynk (this call should never return)
blynk.run()
| 32.345455 | 66 | 0.708263 | 264 | 1,779 | 4.734848 | 0.609848 | 0.0168 | 0.0192 | 0.0224 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01294 | 0.2181 | 1,779 | 54 | 67 | 32.944444 | 0.885694 | 0.08769 | 0 | 0 | 0 | 0 | 0.105769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.142857 | 0.214286 | null | null | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
767c86efa5070b9ed62fc7db3a41b020c4c71450 | 855 | py | Python | mtp_noms_ops/apps/security/context_processors.py | ministryofjustice/money-to-prisoners-noms-ops | b01e5260f57b5aad553c7107ac423e915454093b | [
"MIT"
] | 3 | 2016-12-22T15:56:57.000Z | 2020-03-10T10:37:40.000Z | mtp_noms_ops/apps/security/context_processors.py | ministryofjustice/money-to-prisoners-noms-ops | b01e5260f57b5aad553c7107ac423e915454093b | [
"MIT"
] | 61 | 2016-06-10T08:37:23.000Z | 2022-01-28T12:41:29.000Z | mtp_noms_ops/apps/security/context_processors.py | ministryofjustice/money-to-prisoners-noms-ops | b01e5260f57b5aad553c7107ac423e915454093b | [
"MIT"
] | 1 | 2021-04-11T06:13:53.000Z | 2021-04-11T06:13:53.000Z | from urllib.parse import urlencode
from django.conf import settings
from django.contrib.auth import REDIRECT_FIELD_NAME
from security.forms.object_list import PRISON_SELECTOR_USER_PRISONS_CHOICE_VALUE
from security.utils import can_choose_prisons
def prison_choice_available(request):
return {
'prison_choice_available': (
request.user.is_authenticated and can_choose_prisons(request.user)
)
}
def initial_params(request):
return {
'initial_params': urlencode(
{'prison_selector': PRISON_SELECTOR_USER_PRISONS_CHOICE_VALUE},
),
}
def common(_):
return {
'footer_feedback_link': settings.FOOTER_FEEDBACK_LINK,
'REDIRECT_FIELD_NAME': REDIRECT_FIELD_NAME,
'PRISON_SELECTOR_USER_PRISONS_CHOICE_VALUE': PRISON_SELECTOR_USER_PRISONS_CHOICE_VALUE,
}
| 26.71875 | 95 | 0.74152 | 100 | 855 | 5.9 | 0.39 | 0.118644 | 0.122034 | 0.169492 | 0.244068 | 0.244068 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191813 | 855 | 31 | 96 | 27.580645 | 0.853835 | 0 | 0 | 0.130435 | 0 | 0 | 0.154386 | 0.074854 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.217391 | 0.130435 | 0.478261 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
768a0e8341e74ca7bff903c4836d5473cd5493e4 | 650 | py | Python | csepy/System/Models/Context/ContextFactory.py | csepy/csepy | 3dc39e60948d62bede48bddac0ad5aa8533550d3 | [
"MIT"
] | 1 | 2019-12-11T10:41:40.000Z | 2019-12-11T10:41:40.000Z | csepy/System/Models/Context/ContextFactory.py | csepy/csepy | 3dc39e60948d62bede48bddac0ad5aa8533550d3 | [
"MIT"
] | null | null | null | csepy/System/Models/Context/ContextFactory.py | csepy/csepy | 3dc39e60948d62bede48bddac0ad5aa8533550d3 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
from csepy.System.CommandQueue.CommandQueueFactory import GetCommandQueue
from csepy.System.Configurations.ConfigArchivist import GetConfig
from csepy.System.Logger.LoggerFactory import GetLogger
from csepy.System.Models.Context.Context import Context
from csepy.System.Models.OperatingSystemModel.OsModelFactory import GetOsModel
def GetContext():
commandQueue = GetCommandQueue()
osModel = GetOsModel()
config = GetConfig("SystemConfig")
logger = GetLogger(config, osModel.OperatingSystemName)
context = Context(commandQueue, osModel, logger, config)
commandQueue.SetContext(context)
return context
| 38.235294 | 78 | 0.807692 | 65 | 650 | 8.076923 | 0.446154 | 0.085714 | 0.142857 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001739 | 0.115385 | 650 | 16 | 79 | 40.625 | 0.911304 | 0.026154 | 0 | 0 | 0 | 0 | 0.018987 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.384615 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
769d57406654c9675751a7224ff8be6622c8c4d0 | 2,246 | py | Python | src/driver.py | QualiSystems/Ixia-IxChariotController-Shell | 4befeb14dee6c364f36ae0fa51690de0a46fc250 | [
"Apache-2.0"
] | null | null | null | src/driver.py | QualiSystems/Ixia-IxChariotController-Shell | 4befeb14dee6c364f36ae0fa51690de0a46fc250 | [
"Apache-2.0"
] | null | null | null | src/driver.py | QualiSystems/Ixia-IxChariotController-Shell | 4befeb14dee6c364f36ae0fa51690de0a46fc250 | [
"Apache-2.0"
] | null | null | null |
from cloudshell.shell.core.session.cloudshell_session import CloudShellSessionContext
from cloudshell.traffic.driver import TrafficControllerDriver
from ixc_handler import IxcHandler
class IxChariotControllerDriver(TrafficControllerDriver):
def __init__(self):
super(self.__class__, self).__init__()
self.handler = IxcHandler()
def load_config(self, context, ixc_config):
""" Load IxChariot configuration and select end points.
:type context: cloudshell.shell.core.driver_context.ResourceRemoteCommandContext
:param ixc_config: IxChariot configuration name.
"""
session_id = self.handler.load_config(context, ixc_config)
my_api = CloudShellSessionContext(context).get_api()
my_api.WriteMessageToReservationOutput(context.reservation.reservation_id,
ixc_config + ' loaded, endpoints reserved')
return session_id
def start_test(self, context, blocking):
"""
:type context: cloudshell.shell.core.driver_context.ResourceRemoteCommandContext
"""
self.handler.start_test(blocking)
def stop_test(self, context):
"""
:type context: cloudshell.shell.core.driver_context.ResourceRemoteCommandContext
"""
self.handler.stop_test()
def get_statistics(self, context, view_name, output_type):
""" Get statistics for specific view.
:type context: cloudshell.shell.core.driver_context.ResourceRemoteCommandContext
:param view_name: requested statistics view name.
:param output_type: CSV/PDF.
"""
return self.handler.get_statistics(context, view_name, output_type)
def end_session(self, context):
self.handler.end_session()
def del_session(self, context):
self.handler.del_session()
#
    # Parent commands are not visible, so we redefine them in the child.
#
def initialize(self, context):
super(self.__class__, self).initialize(context)
def cleanup(self):
super(self.__class__, self).cleanup()
def keep_alive(self, context, cancellation_context):
super(self.__class__, self).keep_alive(context, cancellation_context)
| 32.550725 | 90 | 0.693232 | 236 | 2,246 | 6.334746 | 0.305085 | 0.058863 | 0.063545 | 0.048161 | 0.346488 | 0.211371 | 0.211371 | 0.211371 | 0.211371 | 0.109699 | 0 | 0 | 0.220837 | 2,246 | 68 | 91 | 33.029412 | 0.854286 | 0.268477 | 0 | 0 | 0 | 0 | 0.017728 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.344828 | false | 0 | 0.103448 | 0 | 0.551724 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
76a010ca57f20ce262a469687cf616eedb594e19 | 4,691 | py | Python | numpy_wrapper.py | mayi140611/utils | 8ad4f4e3f7913f0119bc04f84326a8ad01bf65d1 | [
"Apache-2.0"
] | 1 | 2018-07-16T03:42:57.000Z | 2018-07-16T03:42:57.000Z | numpy_wrapper.py | mayi140611/utils | 8ad4f4e3f7913f0119bc04f84326a8ad01bf65d1 | [
"Apache-2.0"
] | null | null | null | numpy_wrapper.py | mayi140611/utils | 8ad4f4e3f7913f0119bc04f84326a8ad01bf65d1 | [
"Apache-2.0"
] | 1 | 2020-04-19T11:42:05.000Z | 2020-04-19T11:42:05.000Z | #!/usr/bin/python
# encoding: utf-8
import numpy as np
class numpy_wrapper(object):
def __init__(self):
pass
@classmethod
def arange(self, start, stop, step=1, dtype=None):
'''
        Build a 1D ndarray.
Return evenly spaced values within a given interval.
'''
return np.arange(start, stop, step, dtype)
@classmethod
def linspace(self, start, stop, num=50, endpoint=True, retstep=False, dtype=None):
'''
        Build a 1D ndarray.
Return evenly spaced numbers over a specified interval.
'''
return np.linspace(start, stop, num, endpoint, retstep, dtype)
@classmethod
def build_array_from_seq(self, start, stop, step=1, shape=None, dtype=None):
'''
        Build an array from a sequence.
@shape: list or tuple
'''
return np.arange(start, stop, step, dtype).reshape(shape)
@classmethod
def build_array_from_arraylist(self, arraylist):
return np.array(arraylist)
@classmethod
def build_zeros_array(self,shape, dtype=float, order='C'):
return np.zeros(shape,dtype,order)
@classmethod
def add_newaxis_last(self,matr):
'''
        Append a new axis as the last dimension of matr.
@matr: 1D ndarray
'''
return matr[:,np.newaxis]
@classmethod
    def flatten(self, matr, order='C'):
        '''
        Return a copy of the array collapsed into one dimension.
        @matr: the ndarray to flatten
        @order: {'C', 'F', 'A', 'K'}, optional
            'C' means to flatten in row-major (C-style) order.
            'F' means to flatten in column-major (Fortran-style) order.
            'A' means to flatten in column-major order if `matr` is
            Fortran *contiguous* in memory, row-major order otherwise.
            'K' means to flatten `matr` in the order the elements occur
            in memory. The default is 'C'.
        '''
        return matr.flatten(order)
# ---------------------------------------------------------------------------------
    # Descriptive statistics
# ---------------------------------------------------------------------------------
@classmethod
def max(self, a, axis=None):
'''
        Compute the maximum.
        @axis: None computes over the whole array; 0 over each column; 1 over each row.
'''
return np.max(a, axis)
@classmethod
def min(self, a, axis=None):
'''
        Compute the minimum.
        @axis: None computes over the whole array; 0 over each column; 1 over each row.
'''
return np.min(a, axis)
@classmethod
def mean(self, a, axis=None):
'''
        Compute the mean.
        @axis: None computes over the whole array; 0 over each column; 1 over each row.
'''
return np.mean(a, axis)
# ---------------------------------------------------------------------------------
    # Random number generation
# ---------------------------------------------------------------------------------
@classmethod
def generate_random_seed(self, seed=None):
'''
        numpy.random.seed() is not thread-safe: if the program uses several
        threads, it is better to create a numpy.random.RandomState instance,
        or to set the same random seed with random.seed(), e.g.:
            import random
            random.seed(1234567890)
            a = random.sample(range(10), 5)
        Note: a seed is effective only once; if no seed is set before the next
        call to a random-number function, fresh random numbers are produced.
'''
return np.random.RandomState(seed)
@classmethod
def uniform_rand(self, *param, seed=None):
'''
Create an array of the given shape and populate it with
random samples from a uniform distribution
over ``[0, 1)``.
'''
return self.generate_random_seed(seed).rand(*param)
@classmethod
def uniform_randint(self, low, high=None, size=None, dtype='l', seed=None):
'''
Return random integers from `low` (inclusive) to `high` (exclusive).
'''
return self.generate_random_seed(seed).randint(low, high, size, dtype)
@classmethod
def randn(self, *param, seed=None):
'''
Return a sample (or samples) from the "standard normal" distribution.
'''
return self.generate_random_seed(seed).randn(*param)
    # ---------------------------------------------------------------------------------
    # Linear algebra
    # ---------------------------------------------------------------------------------
@classmethod
def inv(self, matr):
'''
        Compute the inverse of a matrix.
'''
return np.linalg.inv(matr)
@classmethod
def det(self, matr):
'''
        Compute the determinant.
'''
return np.linalg.det(matr)
@classmethod
def sum(self, matr, axis=None, dtype=None, out=None):
return np.sum(matr, axis, dtype, out) | 31.911565 | 87 | 0.514176 | 478 | 4,691 | 4.993724 | 0.322176 | 0.105572 | 0.021785 | 0.021785 | 0.26393 | 0.225388 | 0.162547 | 0.103896 | 0.103896 | 0.103896 | 0 | 0.009155 | 0.278192 | 4,691 | 147 | 88 | 31.911565 | 0.695806 | 0.395864 | 0 | 0.310345 | 0 | 0 | 0.001318 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.327586 | false | 0.017241 | 0.017241 | 0.051724 | 0.672414 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
4f161604df46e1bb42ece354f8ecb1b432c8b633 | 1,706 | py | Python | venv1/Lib/site-packages/tensorflow/contrib/saved_model/python/saved_model/signature_def_utils.py | Soum-Soum/Tensorflow_Face_Finder | fec6c15d2df7012608511ad87f4b55731bf99478 | [
"Apache-2.0",
"MIT"
] | null | null | null | venv1/Lib/site-packages/tensorflow/contrib/saved_model/python/saved_model/signature_def_utils.py | Soum-Soum/Tensorflow_Face_Finder | fec6c15d2df7012608511ad87f4b55731bf99478 | [
"Apache-2.0",
"MIT"
] | 1 | 2021-05-20T00:58:04.000Z | 2021-05-20T00:58:04.000Z | venv1/Lib/site-packages/tensorflow/contrib/saved_model/python/saved_model/signature_def_utils.py | Soum-Soum/Tensorflow_Face_Finder | fec6c15d2df7012608511ad87f4b55731bf99478 | [
"Apache-2.0",
"MIT"
] | null | null | null | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""SignatureDef utility functions implementation."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
def get_signature_def_by_key(meta_graph_def, signature_def_key):
"""Utility function to get a SignatureDef protocol buffer by its key.
Args:
meta_graph_def: MetaGraphDef protocol buffer with the SignatureDefMap to
look up.
signature_def_key: Key of the SignatureDef protocol buffer to find in the
SignatureDefMap.
Returns:
A SignatureDef protocol buffer corresponding to the supplied key, if it
exists.
Raises:
ValueError: If no entry corresponding to the supplied key is found in the
SignatureDefMap of the MetaGraphDef.
"""
if signature_def_key not in meta_graph_def.signature_def:
raise ValueError("No SignatureDef with key '%s' found in MetaGraphDef." %
signature_def_key)
return meta_graph_def.signature_def[signature_def_key]
| 39.674419 | 81 | 0.711606 | 226 | 1,706 | 5.20354 | 0.469027 | 0.081633 | 0.063776 | 0.053571 | 0.110544 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005818 | 0.194021 | 1,706 | 42 | 82 | 40.619048 | 0.849455 | 0.694607 | 0 | 0 | 0 | 0 | 0.12093 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.375 | 0 | 0.625 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
4f16df816260eee8b49f81f4d16e34b859006ca9 | 11,933 | py | Python | 4-assets/BOOKS/notebooks/Mini_tutoriel_pour_la_resolution_de_programmes_lineaires_avec_Python__ENS_Rennes_2021.py | impastasyndrome/Lambda-Resource-Static-Assets | 7070672038620d29844991250f2476d0f1a60b0a | [
"MIT"
] | 102 | 2016-06-25T09:30:00.000Z | 2022-03-24T21:02:49.000Z | 4-assets/BOOKS/notebooks/Mini_tutoriel_pour_la_resolution_de_programmes_lineaires_avec_Python__ENS_Rennes_2021.py | impastasyndrome/Lambda-Resource-Static-Assets | 7070672038620d29844991250f2476d0f1a60b0a | [
"MIT"
] | 34 | 2016-06-26T12:21:30.000Z | 2021-04-06T09:19:49.000Z | 4-assets/BOOKS/notebooks/Mini_tutoriel_pour_la_resolution_de_programmes_lineaires_avec_Python__ENS_Rennes_2021.py | impastasyndrome/Lambda-Resource-Static-Assets | 7070672038620d29844991250f2476d0f1a60b0a | [
"MIT"
] | 44 | 2017-05-13T23:54:56.000Z | 2021-07-17T15:34:24.000Z | #!/usr/bin/env python
# coding: utf-8
# # A mini tutorial on solving linear programs with Python - ENS Rennes 2021
# ## References if you are curious
#
# A very good tutorial:
#
# - https://realpython.com/linear-programming-python/
#
# Documentation for `scipy.optimize`:
#
# - https://docs.scipy.org/doc/scipy/reference/optimize.html
#
# Other tutorials:
# - https://scipy-lectures.org/advanced/mathematical_optimization/index.html
# - https://medium.com/better-programming/how-to-solving-linear-programming-problems-with-examples-and-implementation-in-python-a7b7061bafc9
# - http://stackoverflow.com/questions/10697995/ddg#10705799
# - Author: [Lilian Besson](https://perso.crans.org/besson/)
# - License: [MIT](https://lbesson.mit-license.org/)
# - Date: 27/01/2021
# - Course: [ALGO2](http://people.irisa.fr/Francois.Schwarzentruber/algo2/) @ [ENS Rennes](http://www.dit.ens-rennes.fr/)
# ## Prerequisites for running this notebook
#
# - Either you download it and use it locally, in which case you need the `scipy` module installed.
#     + Use your system package manager (`apt-get`, for example) if Python was installed through it;
#     + Use `pip install scipy` or pip3, or `sudo pip` (Linux/Mac) or pip.exe (Windows) if your Python modules are managed with [pip](https://pypi.org/) (recommended);
#     + Use `conda install scipy` if your Python modules are managed with conda (i.e. installed with [Anaconda](https://www.anaconda.com/products/individual)).
#
# - Or you use the link given on Discord to run this notebook in an online environment, with [MyBinder](https://mybinder.org/) (free, open and no login required).
# In[21]:
try:
import scipy.optimize
except ImportError:
    print("Did you read the previous paragraph...?")
    print("I am sending you to https://scipy.org/install.html, where you will find more information...")
import webbrowser
webbrowser.open_new_tab("https://scipy.org/install.html")
# ----
# # A few small linear programs
# ## A first linear program
#
# It comes from the tutorial [mentioned above](https://medium.com/better-programming/how-to-solving-linear-programming-problems-with-examples-and-implementation-in-python-a7b7061bafc9):
# In[44]:
# Objective Function: 50x_1 + 80x_2
# Constraint 1: 5x_1 + 2x_2 <= 20
# Constraint 2: -10x_1 + -12x_2 <= -90
result = scipy.optimize.linprog(
[50, 80], # Cost function: 50x_1 + 80x_2
A_ub=[[5, 2], [-10, -12]], # Coefficients for inequalities
b_ub=[20, -90], # Constraints for inequalities: 20 and -90
bounds=(0, None), # Bounds on x, 0 <= x_i <= +oo by default
)
# We use scipy's features for linear programs ([doc](https://docs.scipy.org/doc/scipy/reference/optimize.html#linear-programming)), and to begin with [the single function `scipy.optimize.linprog`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linprog.html#scipy.optimize.linprog):
# In[45]:
if result.success:
print(f"X1: {round(result.x[0], 2)} hours")
print(f"X2: {round(result.x[1], 2)} hours")
else:
print("No solution")
# And there you go, it is no more complicated than that!
#
# You can observe that the result of this function call (if everything goes well) is an object that wraps several things:
#
# - the value of the solution, `result.x`;
# - the number of iterations, `result.nit`;
# - the state of the slack variables (cf. the lecture on the simplex algorithm), `result.slack`;
# - an assessment of how the optimization went, `result.success` and `result.message`;
# - and even the value of the objective function at this solution point (useful to save time when the objective function is very expensive to evaluate, for example when training a large neural network, with other solvers).
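As a quick self-contained check, this sketch re-solves the first problem above (with scipy's default method, so the iteration count may vary with your scipy version) and reads each of these fields:

```python
import scipy.optimize

# Same problem as above: minimize 50x_1 + 80x_2
# subject to 5x_1 + 2x_2 <= 20 and -10x_1 - 12x_2 <= -90, x >= 0.
res = scipy.optimize.linprog(
    [50, 80],
    A_ub=[[5, 2], [-10, -12]],
    b_ub=[20, -90],
    bounds=(0, None),
)
print(res.x)                      # the solution vector x*
print(res.nit)                    # number of iterations performed
print(res.slack)                  # b_ub - A_ub @ x*, one entry per inequality
print(res.success, res.message)   # did the solver succeed, and why/why not
print(res.fun)                    # value of the objective function at x*
```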
# In[26]:
print(result)
# ## The example problem from the lecture on the simplex algorithm
#
# Careful: solvers generally try to **minimize** the cost function, not to maximize it!
#
# <span style="color:red;">This is a classic trap: you enter the problem, the solver answers [0,0,...,0] as the solution, and you have no idea what went wrong...</span>
# In[54]:
# Objective Function: x_1 + 6*x_2 + 13*x_3
# Constraint 1: x_1 <= 200
# Constraint 2: x_2 <= 300
# Constraint 3: x_1 + x_2 + x_3 <= 400
# Constraint 4: x_2 + 3*x_3 <= 600
# the variables are assumed to be non-negative by default
# x_1 >= 0
# x_2 >= 0
# x_3 >= 0
result = scipy.optimize.linprog(
[-1, -6, -13], # Cost function: -x_1 + -6*x_2 + -13*x_3 to MINIMIZE
A_ub=[ # Coefficients for inequalities
[1, 0, 0], # for C1: 1*x_1 + 0*x_2 + 0*x_3 <= 200
[0, 1, 0], # for C2: 0*x_1 + 1*x_2 + 0*x_3 <= 300
[1, 1, 1], # for C3: 1*x_1 + 1*x_2 + 1*x_3 <= 400
[0, 1, 3], # for C4: 0*x_1 + 1*x_2 + 3*x_3 <= 600
],
b_ub=[200, 300, 400, 600], # Constraints for inequalities: 200, 300, 400, 600
bounds=(0, None), # Bounds on x, 0 <= x_i <= +oo by default
method="simplex",
)
# In[55]:
print(result)
# In[56]:
if result.success:
    print(f"X1: {round(result.x[0], 2)} plain chocolates")
    print(f"X2: {round(result.x[1], 2)} pyramids")
    print(f"X3: {round(result.x[2], 2)} deluxe pyramids")
else:
print("No solution")
# So we find the optimal sales plan for Charlie the chocolatier: 300 pyramids, and 100 deluxe pyramids.
# ### **Exercise 1**: on this problem, find which constraints (inequalities) are saturated (have become equalities)
#
# Hint: read the array `result.slack` and interpret it.
# In[ ]:
# TODO
print(result)
print("Slack variables:")
print(result.slack)
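One possible way to answer Exercise 1 programmatically (a sketch only; it re-solves the chocolate problem with scipy's default method rather than `method="simplex"`, so it does not depend on the simplex method being available in your scipy version):

```python
import scipy.optimize

# Maximize x_1 + 6x_2 + 13x_3 by minimizing the negated cost vector.
res = scipy.optimize.linprog(
    [-1, -6, -13],
    A_ub=[[1, 0, 0], [0, 1, 0], [1, 1, 1], [0, 1, 3]],
    b_ub=[200, 300, 400, 600],
    bounds=(0, None),
)

names = [
    "C1: x_1 <= 200",
    "C2: x_2 <= 300",
    "C3: x_1 + x_2 + x_3 <= 400",
    "C4: x_2 + 3*x_3 <= 600",
]
# A constraint is saturated exactly when its slack is (numerically) zero.
for name, slack in zip(names, res.slack):
    status = "saturated" if abs(slack) < 1e-9 else "slack of %g remaining" % slack
    print(name, "->", status)
```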
# ### **Exercise 2**: solve another problem seen in the lecture
# ### Bonus: think about a situation from your own life that could be cast as a linear program
#
# An example I really like, which generalizes the idea of an "optimal healthy diet", is the following: https://jeremykun.com/2014/06/02/linear-programming-and-the-most-affordable-healthy-diet-part-1/ (a very good blog to follow if you enjoy this).
# ## Comparing different methods:
# As the documentation says:
#
# > The linprog function supports the following methods:
#
#     linprog(method='simplex')
#     linprog(method='interior-point')
#     linprog(method='revised simplex')
#     linprog(method='highs-ipm')
#     linprog(method='highs-ds')
#     linprog(method='highs')
#
# > Some methods may not be available on your installation, but normally `"simplex"` and `"interior-point"` are available everywhere.
# ### **Exercise 3**: on a problem of your choice, test at least two methods.
#
# This little piece of code can help you:
# In[33]:
methods = [
"simplex",
"interior-point",
    #"revised simplex",
#"highs-ipm",
#"highs-ds",
#"highs",
]
# In[34]:
def solve_problem_1(method):
return scipy.optimize.linprog(
[50, 80], # Cost function: 50x_1 + 80x_2
A_ub=[[5, 2], [-10, -12]], # Coefficients for inequalities
b_ub=[20, -90], # Constraints for inequalities: 20 and -90
method=method
)
# In[35]:
for i, method in enumerate(methods):
    # solve the problem with this method
    print(f"\n- For method #{i}, {method}...")
    solution = solve_problem_1(method)
    print(f"The solution found is {solution}")
# ### **Exercise 4**: find a problem that gives a different answer with two different methods.
# Using the code above, look for a more complicated problem that gives a different solution.
# Even if the difference is small, comment on it:
#
# - in terms of the number of steps?
# - the values of x*?
# - the value of f(x*)?
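A possible starting point for this comparison, sketched below. It compares the two HiGHS variants, `"highs-ds"` (dual simplex) and `"highs-ipm"` (interior point), which assume scipy >= 1.6; on an older installation, substitute `"simplex"` and `"interior-point"`:

```python
import scipy.optimize

def solve(method):
    # The chocolate problem from earlier, solved with the requested method.
    return scipy.optimize.linprog(
        [-1, -6, -13],
        A_ub=[[1, 0, 0], [0, 1, 0], [1, 1, 1], [0, 1, 3]],
        b_ub=[200, 300, 400, 600],
        bounds=(0, None),
        method=method,
    )

results = {m: solve(m) for m in ["highs-ds", "highs-ipm"]}
for m, r in results.items():
    # Compare the three quantities from the exercise: steps, x*, f(x*).
    print(m, "-> nit:", r.nit, "x*:", r.x, "f(x*):", r.fun)
```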
# ---
# ## Conclusion
#
# If you are very curious, have a look at the references given at the top of this document.
#
# Do not hesitate to contact us if you have any questions!
| 49.514523 | 3,971 | 0.702338 | 1,872 | 11,933 | 4.449786 | 0.246261 | 0.018247 | 0.02593 | 0.017167 | 0.482113 | 0.447299 | 0.388595 | 0.344178 | 0.319088 | 0.27575 | 0 | 0.036948 | 0.131317 | 11,933 | 240 | 3,972 | 49.720833 | 0.766448 | 0.841616 | 0 | 0.339286 | 0 | 0.017857 | 0.28063 | 0 | 0 | 0 | 0 | 0.004167 | 0 | 1 | 0.017857 | false | 0 | 0.053571 | 0.017857 | 0.089286 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4f1b439f5eef402f9c3644e866b543b7d9565b01 | 939 | py | Python | QGOpt/manifolds/convert.py | RyAlAl/QGOpt | f356ff7b670317046b2bfbf94c90b7d7573bbfd0 | [
"Apache-2.0"
] | 44 | 2020-05-08T22:26:54.000Z | 2022-03-24T17:37:06.000Z | QGOpt/manifolds/convert.py | RyAlAl/QGOpt | f356ff7b670317046b2bfbf94c90b7d7573bbfd0 | [
"Apache-2.0"
] | 7 | 2020-05-28T11:17:44.000Z | 2022-02-10T01:53:09.000Z | QGOpt/manifolds/convert.py | RyAlAl/QGOpt | f356ff7b670317046b2bfbf94c90b7d7573bbfd0 | [
"Apache-2.0"
] | 6 | 2020-05-28T18:45:27.000Z | 2021-05-21T02:13:30.000Z | import tensorflow as tf
def complex_to_real(tensor):
"""Returns tensor converted from a complex dtype with shape
(...,) to a real dtype with shape (..., 2), where last index
marks real [0] and imag [1] parts of a complex valued tensor.
Args:
tensor: complex valued tensor of shape (...,).
Returns:
real valued tensor of shape (..., 2)."""
return tf.concat([tf.math.real(tensor)[..., tf.newaxis],
tf.math.imag(tensor)[..., tf.newaxis]], axis=-1)
def real_to_complex(tensor):
"""Returns tensor converted from a real dtype with shape
(..., 2) to complex dtype with shape (...,), where last index
of a real tensor marks real [0] and imag [1]
parts of a complex valued tensor.
Args:
tensor: real valued tensor of shape (..., 2).
Returns:
complex valued tensor of shape (...,)."""
return tf.complex(tensor[..., 0], tensor[..., 1])
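The same real/imaginary packing can be illustrated with plain NumPy (an analogy only — the functions above operate on TensorFlow tensors; here `np.stack` plays the role of the two `tf.newaxis` slices plus `tf.concat`):

```python
import numpy as np

def complex_to_real(x):
    # (...,) complex -> (..., 2) real: stack real and imag parts on a new trailing axis.
    return np.stack([x.real, x.imag], axis=-1)

def real_to_complex(x):
    # (..., 2) real -> (...,) complex: index 0 is the real part, index 1 the imaginary part.
    return x[..., 0] + 1j * x[..., 1]

z = np.array([1 + 2j, 3 - 4j])
packed = recovered = None
packed = complex_to_real(z)       # shape (2, 2)
recovered = real_to_complex(packed)
print(packed.shape)
print(recovered)                  # round-trip recovers z
```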
| 31.3 | 70 | 0.607029 | 131 | 939 | 4.320611 | 0.259542 | 0.127208 | 0.09894 | 0.134276 | 0.556537 | 0.464664 | 0.194346 | 0.194346 | 0.194346 | 0.194346 | 0 | 0.015603 | 0.249201 | 939 | 29 | 71 | 32.37931 | 0.787234 | 0.632588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
4f250e37ae6f60cb40ee1c29b91dd8c012e4f452 | 2,185 | py | Python | python_spec/python_blocks/tests/scale_copy.py | ak-ustutt/GeCCo-public | 8d43a6c9323aeba7eb54625b95553bfd4b2418c6 | [
"MIT"
] | null | null | null | python_spec/python_blocks/tests/scale_copy.py | ak-ustutt/GeCCo-public | 8d43a6c9323aeba7eb54625b95553bfd4b2418c6 | [
"MIT"
] | null | null | null | python_spec/python_blocks/tests/scale_copy.py | ak-ustutt/GeCCo-public | 8d43a6c9323aeba7eb54625b95553bfd4b2418c6 | [
"MIT"
] | null | null | null | from python_interface.gecco_interface import *
new_target('TEST_ADD_UNITY',True)
DEF_OP_FROM_OCC({
LABEL:"DUMMY_1",
DESCR:'P,P|PP,PP|V,V|VV,VV|H,H|HH,HH'
})
SET_HERMITIAN({
LABEL:"DUMMY_1",
CA_SYMMETRY:+1})
DEF_ME_LIST({
LIST:'ME_DUMMY_1',
OPERATOR:'DUMMY_1',
IRREP:1,
'2MS':0,
AB_SYM:+1,
DIAG_TYPE:1,
MAX_REC:3,
MIN_REC:1,
REC:2
})
DEF_ME_LIST({
LIST:'ME_DUMMY_2',
OPERATOR:'DUMMY_1',
IRREP:1,
'2MS':0,
AB_SYM:+1,
DIAG_TYPE:1,
MAX_REC:3,
MIN_REC:1,
REC:2
})
ADD_UNITY({
LIST:'ME_DUMMY_1',
FAC:0.5,
INIT:True,
MS_SYM_SIGN:1})
PRINT({STRING:"Mode square"})
SCALE_COPY({LIST_RES:'ME_DUMMY_2',
LIST_INP:'ME_DUMMY_1',
FAC:3,
MODE:'square'
})
PRINT_MEL({LIST:'ME_DUMMY_2'})
PRINT({STRING:"Mode prc-thresh"})
SCALE_COPY({LIST_RES:'ME_DUMMY_2',
LIST_INP:'ME_DUMMY_1',
FAC:0.8,
MODE:'prc-thresh'
})
PRINT_MEL({LIST:'ME_DUMMY_2'})
PRINT({STRING:"Mode scale"})
SCALE_COPY({LIST_RES:'ME_DUMMY_2',
LIST_INP:'ME_DUMMY_1',
FAC:2.0,
MODE:'scale'
})
PRINT_MEL({LIST:'ME_DUMMY_2'})
PRINT({STRING:"Mode precond"})
# PReparing "preconditioner"
SCALE_COPY({LIST_RES:'ME_DUMMY_2',
LIST_INP:'ME_DUMMY_2',
FAC:0.0,
MODE:'scale'
})
SCALE_COPY({LIST_RES:'ME_DUMMY_2',
LIST_INP:'ME_DUMMY_2',
FAC:0.5,
MODE:'prc-thresh'
})
SCALE_COPY({LIST_RES:'ME_DUMMY_1',
LIST_INP:'ME_DUMMY_2',
FAC:1.0,
MODE:'precond'
})
PRINT_MEL({LIST:'ME_DUMMY_1'})
#
#DEF_OP_FROM_OCC({
# LABEL:"DUMMY_2",
# DESCR:'P,H|H,V|H,P'
#})
#DEF_ME_LIST({
# LIST:'ME_DUMMY_2',
# OPERATOR:'DUMMY_2',
# IRREP:1,
# '2MS':0,
# AB_SYM:+1,
# DIAG_TYPE:1,
# # MAX_REC:3,
# MIN_REC:1,
# REC:2
#})
#
#ADD_UNITY({
# # LIST:'ME_DUMMY_2',
# FAC:1.0,
# INIT:True,
# MS_SYM_SIGN:-1
#})
#
#PRINT_MEL({LIST:'ME_DUMMY_2'})
| 16.30597 | 46 | 0.524485 | 320 | 2,185 | 3.2375 | 0.196875 | 0.148649 | 0.11583 | 0.081081 | 0.774131 | 0.75 | 0.648649 | 0.604247 | 0.604247 | 0.410232 | 0 | 0.046144 | 0.305721 | 2,185 | 133 | 47 | 16.428571 | 0.636783 | 0.186728 | 0 | 0.689189 | 0 | 0.013514 | 0.204805 | 0.01659 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.013514 | 0 | 0.013514 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4f48219fba75ea9f69a9e7787ccf94e062da5920 | 1,610 | py | Python | setup.py | kade-robertson/afh-dl | c8cd0b830892ae7ad842bb2b877118539c5dc664 | [
"MIT"
] | 2 | 2019-01-25T16:00:28.000Z | 2021-03-20T02:21:48.000Z | setup.py | kade-robertson/afh-dl | c8cd0b830892ae7ad842bb2b877118539c5dc664 | [
"MIT"
] | null | null | null | setup.py | kade-robertson/afh-dl | c8cd0b830892ae7ad842bb2b877118539c5dc664 | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
import afh_dl
long_desc = ""
try:
import pypandoc
long_desc = pypandoc.convert('README.md', 'rst', extra_args = ('--eol', 'lf'))
except(IOError, ImportError):
long_desc = open('README.md').read()
setup(
name = "afh-dl",
version = "1.0.3",
description = "A command-line tool for downloading files from AndroidFileHost.",
long_description = long_desc,
classifiers = [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.0",
"Programming Language :: Python :: 3.1",
"Programming Language :: Python :: 3.2",
"Programming Language :: Python :: 3.3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6"
],
entry_points = {
'console_scripts': [
'afh-dl = afh_dl:entry_main'
]
},
keywords = "android file host downloader",
author = "Kade Robertson",
author_email = "kade@kaderobertson.pw",
url = "https://github.com/kade-robertson/afh-dl",
license = "MIT",
packages = find_packages(),
install_requires = [
"future",
"requests",
"humanize",
"clint"
],
python_requires = '>=2.7, <4',
)
| 30.961538 | 84 | 0.581366 | 169 | 1,610 | 5.443787 | 0.538462 | 0.18587 | 0.244565 | 0.226087 | 0.058696 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020495 | 0.272671 | 1,610 | 51 | 85 | 31.568627 | 0.765158 | 0 | 0 | 0.041667 | 0 | 0 | 0.489441 | 0.013043 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4f4cdd30f1534aad4d5ff666692cbd06481bf014 | 436 | py | Python | codes/ToOneHot.py | mengzhu0308/couplet | 8c56bc6cc59d805a87d5c0dac3201f3df4ceeace | [
"Apache-2.0"
] | null | null | null | codes/ToOneHot.py | mengzhu0308/couplet | 8c56bc6cc59d805a87d5c0dac3201f3df4ceeace | [
"Apache-2.0"
] | null | null | null | codes/ToOneHot.py | mengzhu0308/couplet | 8c56bc6cc59d805a87d5c0dac3201f3df4ceeace | [
"Apache-2.0"
] | null | null | null | #! -*- coding:utf-8 -*-
'''
@Author: ZM
@Date and Time: 2020/12/15 20:27
@File: ToOneHot.py
'''
import numpy as np
class ToOneHot:
def __init__(self, num_classes):
self.num_classes = num_classes
def __call__(self, label):
label_len = len(label)
one_hot = np.zeros((label_len, self.num_classes), dtype='float32')
one_hot[np.arange(label_len), label] = 1.
return one_hot | 22.947368 | 74 | 0.607798 | 62 | 436 | 3.983871 | 0.596774 | 0.161943 | 0.17004 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049231 | 0.254587 | 436 | 19 | 75 | 22.947368 | 0.710769 | 0.233945 | 0 | 0 | 0 | 0 | 0.021407 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
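A quick usage sketch of the encoder, with the class reproduced so the example is self-contained:

```python
import numpy as np

# Same logic as the class above, copied here so this snippet runs on its own.
class ToOneHot:
    def __init__(self, num_classes):
        self.num_classes = num_classes

    def __call__(self, label):
        label_len = len(label)
        one_hot = np.zeros((label_len, self.num_classes), dtype='float32')
        # Integer array indexing: set position `label[i]` of row i to 1.
        one_hot[np.arange(label_len), label] = 1.
        return one_hot

encoder = ToOneHot(4)        # 4 classes -> rows of length 4
out = encoder([0, 2, 3])     # one row per label
print(out)
```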
4f62fd31f25e8ef62c147c645a369f27d607b890 | 171 | py | Python | _version.py | lgessler/rstWeb | 0041065374b023e799c8098d9e55e4a9732f3852 | [
"MIT"
] | null | null | null | _version.py | lgessler/rstWeb | 0041065374b023e799c8098d9e55e4a9732f3852 | [
"MIT"
] | null | null | null | _version.py | lgessler/rstWeb | 0041065374b023e799c8098d9e55e4a9732f3852 | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
__version__ = "2.0.3"
__author__ = "Amir Zeldes"
__copyright__ = "Copyright 2015-2018, Amir Zeldes"
__license__ = "MIT License"
| 21.375 | 50 | 0.684211 | 22 | 171 | 4.590909 | 0.818182 | 0.19802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 0.140351 | 171 | 7 | 51 | 24.428571 | 0.605442 | 0.222222 | 0 | 0 | 0 | 0 | 0.450382 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4f71b6384c2d4fab2276f1f65952ff68ff2f4343 | 1,028 | py | Python | git_hook/commit_msg.py | FanyangKong/git-awesome-hook | ccba63044c4eb7a4b6dc9401fa5832f489114a86 | [
"Apache-2.0"
] | null | null | null | git_hook/commit_msg.py | FanyangKong/git-awesome-hook | ccba63044c4eb7a4b6dc9401fa5832f489114a86 | [
"Apache-2.0"
] | null | null | null | git_hook/commit_msg.py | FanyangKong/git-awesome-hook | ccba63044c4eb7a4b6dc9401fa5832f489114a86 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: UTF-8 -*-
# COMMIT_MSG_FILE=$1
# COMMIT_SOURCE=$2
# SHA1=$3
import sys, os
import re
from git_utils import *
commit_msg_filepath = sys.argv[1]
branch_name = get_branch_name()
commit_msg = ""
if branch_name.startswith("fix/bug"):
with open(commit_msg_filepath, 'r') as f:
lines = f.readlines()
for line in lines:
if line.startswith("#"):
break
commit_msg += line
commit_msg = commit_msg.rstrip()
searchObj = re.search("--bug=[0-9]+", commit_msg)
if not searchObj:
print "Commit Failed (~ ̄(OO) ̄)ブ , no bugid" + "\n\nInvalid Message:\n" + commit_msg
exit(1)
elif branch_name.startswith("feature/"):
with open(commit_msg_filepath, 'r') as f:
lines = f.readlines()
for line in lines:
if line.startswith("#"):
break
commit_msg += line
commit_msg = commit_msg.rstrip()
searchObj = re.search("--story=[0-9]+", commit_msg)
if not searchObj:
print "Commit Failed (~ ̄(OO) ̄)ブ , no storyid" + "\n\nInvalid Message:\n" + commit_msg
exit(1)
| 23.906977 | 89 | 0.659533 | 159 | 1,028 | 4.125786 | 0.396226 | 0.205793 | 0.077744 | 0.051829 | 0.64939 | 0.64939 | 0.64939 | 0.64939 | 0.554878 | 0.554878 | 0 | 0.014201 | 0.178016 | 1,028 | 43 | 90 | 23.906977 | 0.757396 | 0.08463 | 0 | 0.6 | 0 | 0 | 0.173959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.1 | null | null | 0.066667 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4f7d8cc97bcd56652905e261ee16b5f8616af23c | 109 | py | Python | Term 2/8/4.py | theseana/ajisa | 1c92b00acd3fad7c92b8222b5f6a86fc6db4bcae | [
"MIT"
] | null | null | null | Term 2/8/4.py | theseana/ajisa | 1c92b00acd3fad7c92b8222b5f6a86fc6db4bcae | [
"MIT"
] | null | null | null | Term 2/8/4.py | theseana/ajisa | 1c92b00acd3fad7c92b8222b5f6a86fc6db4bcae | [
"MIT"
] | null | null | null | a = 'AJisa'
a = a.upper()
print(a)
a = a.lower()
print(a)
n = "89*"
print(n.isnumeric())
print(n.isalpha()) | 10.9 | 20 | 0.577982 | 20 | 109 | 3.15 | 0.45 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021739 | 0.155963 | 109 | 10 | 21 | 10.9 | 0.663043 | 0 | 0 | 0.25 | 0 | 0 | 0.072727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
4f9e0f430f2b6fcc79b274cc35ef655c6aa12d0f | 471 | py | Python | text/symbols.py | highmaru-public/multi-speaker-tacotron-tensorflow | a43dcf61605f262b2ad9f3dd997c6edef2e4177d | [
"MIT"
] | 183 | 2017-10-20T00:17:33.000Z | 2022-03-19T02:03:18.000Z | text/symbols.py | highmaru-public/multi-speaker-tacotron-tensorflow | a43dcf61605f262b2ad9f3dd997c6edef2e4177d | [
"MIT"
] | 12 | 2021-01-26T05:36:12.000Z | 2022-03-12T00:51:57.000Z | text/symbols.py | highmaru-public/multi-speaker-tacotron-tensorflow | a43dcf61605f262b2ad9f3dd997c6edef2e4177d | [
"MIT"
] | 132 | 2017-10-19T02:42:44.000Z | 2021-11-30T05:22:15.000Z | '''
Defines the set of symbols used in text input to the model.
The default is a set of ASCII characters that works well for English or text that has been run
through Unidecode. For other data, you can modify _characters. See TRAINING_DATA.md for details.
'''
from jamo import h2j, j2h
from jamo.jamo import _jamo_char_to_hcj
from .korean import ALL_SYMBOLS, PAD, EOS
#symbols = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz!\'(),-.:;? '
symbols = ALL_SYMBOLS
| 33.642857 | 96 | 0.779193 | 71 | 471 | 5.056338 | 0.676056 | 0.027855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005 | 0.150743 | 471 | 13 | 97 | 36.230769 | 0.8925 | 0.698514 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
4fa72625135c85c41698d3b5f5ed3606a93522fb | 300 | py | Python | adminmgr/media/code/python/map3/BD_051_272_1339_mapper.py | IamMayankThakur/test-bigdata | cef633eb394419b955bdce479699d0115d8f99c3 | [
"Apache-2.0"
] | 9 | 2019-11-08T02:05:27.000Z | 2021-12-13T12:06:35.000Z | adminmgr/media/code/python/map1/BD_0051_0272_1339_mapper.py | IamMayankThakur/test-bigdata | cef633eb394419b955bdce479699d0115d8f99c3 | [
"Apache-2.0"
] | 6 | 2019-11-27T03:23:16.000Z | 2021-06-10T19:15:13.000Z | adminmgr/media/code/python/map3/BD_0051_0272_1339_mapper.py | IamMayankThakur/test-bigdata | cef633eb394419b955bdce479699d0115d8f99c3 | [
"Apache-2.0"
] | 4 | 2019-11-26T17:04:27.000Z | 2021-12-13T11:57:03.000Z | #!/usr/bin/python3
import sys
pair = {}
for ln in sys.stdin:
    col = ln.strip().split(",")
    if col[0] == "ball":
        if (col[4], col[6]) in pair:
            pair[(col[4], col[6])] += 1
        else:
            pair[(col[4], col[6])] = 1
        if pair[(col[4], col[6])] >= 6:
            print(col[6], col[4], int(col[7]), int(col[8]), sep=",")
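A small self-contained sketch of the same counting logic, fed with synthetic rows instead of `sys.stdin`. The column meanings (a player name in `col[4]`, another in `col[6]`, runs in `col[7]`/`col[8]`) are assumptions for illustration only:

```python
def run_mapper(lines):
    """Re-implementation of the mapper loop above, reading from any iterable of lines."""
    out = []
    pair = {}
    for ln in lines:
        col = ln.strip().split(",")
        if col[0] == "ball":
            key = (col[4], col[6])
            pair[key] = pair.get(key, 0) + 1
            # Emit only once the pair has been seen at least 6 times.
            if pair[key] >= 6:
                out.append(",".join([col[6], col[4], str(int(col[7])), str(int(col[8]))]))
    return out

# Six identical synthetic rows: the threshold is reached on the sixth one.
rows = ["ball,x,x,x,bowler1,x,batter1,4,0"] * 6
print(run_mapper(rows))
```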
| 16.666667 | 53 | 0.54 | 60 | 300 | 2.7 | 0.416667 | 0.123457 | 0.17284 | 0.197531 | 0.234568 | 0.160494 | 0 | 0 | 0 | 0 | 0 | 0.065891 | 0.14 | 300 | 17 | 54 | 17.647059 | 0.562016 | 0.056667 | 0 | 0 | 0 | 0 | 0.021429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.083333 | null | null | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4fa9b1ec809a3eda11d3427fe8778af6253c4abb | 52 | py | Python | resources/config.py | PythonForChange/PythonForChange.github.io- | 3176d85f56dcb1ece08f437ae72282d60386ae81 | [
"MIT"
] | 1 | 2021-06-02T17:08:26.000Z | 2021-06-02T17:08:26.000Z | resources/config.py | PythonForChange/pythonforchange.github.io | 3176d85f56dcb1ece08f437ae72282d60386ae81 | [
"MIT"
] | null | null | null | resources/config.py | PythonForChange/pythonforchange.github.io | 3176d85f56dcb1ece08f437ae72282d60386ae81 | [
"MIT"
] | null | null | null | #News Config
files=["news.md","README.md"]
year=2021 | 17.333333 | 29 | 0.711538 | 9 | 52 | 4.111111 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 0.057692 | 52 | 3 | 30 | 17.333333 | 0.673469 | 0.211538 | 0 | 0 | 0 | 0 | 0.390244 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
96cf2c23700221b2c2d8b46d84437784c43a4f1d | 10,840 | py | Python | src/Foundation.py | mita4829/Acorn | 9c8c37bb80dd1cfb1dcfc1bbb44c08cec88a507c | [
"MIT"
] | 2 | 2017-01-24T00:33:18.000Z | 2020-07-26T03:53:29.000Z | src/Foundation.py | mita4829/Acorn | 9c8c37bb80dd1cfb1dcfc1bbb44c08cec88a507c | [
"MIT"
] | 1 | 2022-01-06T21:36:40.000Z | 2022-01-06T21:36:40.000Z | src/Foundation.py | mita4829/Acorn | 9c8c37bb80dd1cfb1dcfc1bbb44c08cec88a507c | [
"MIT"
] | 1 | 2019-11-21T00:03:18.000Z | 2019-11-21T00:03:18.000Z | # Acorn 2.0: Cocoa Butter
# Booleans are treated as integers
# Allow hex
#Number meta class
class N():
def __init__(self,n):
try:
self.n = float(n)
if(self.n.is_integer()):
self.n = int(n)
except:
self.n = int(n,16)
def N(self):
return self.n
def __repr__(self):
return 'N(%s)' % self.n
#Boolean meta class
class B():
def __init__(self,b):
self.boolean = b
def B(self):
if(self.boolean == "true"):
return 1
elif(self.boolean == "false"):
return 0
elif(isinstance(self.boolean,B)):
return int((self.boolean).B())
elif(isinstance(self.boolean,N)):
return int(bool(self.boolean.N()))
elif(self.boolean == True):
return 1
else:
return 0
def __repr__(self):
return 'B(\'%s\')' % self.boolean
#String meta class
class S():
def __init__(self,s):
self.s = str(s)
def S(self):
return self.s
def __repr__(self):
return 'S(\'%s\')' % self.s
#Var meta class
class Var():
def __init__(self,x):
self.x = str(x)
def X(self):
return self.x
def __repr__(self):
return 'Var(\'%s\')' % self.x
#Null meta class
class Null():
def __init__(self):
self.n = "Null"
def null(self):
return self.n
def __repr__(self):
return 'Null()'
#Unary meta operations
class Unary():
def __init__(self,uop,e1):
self.op = str(uop)
self.e1 = e1
def expr1(self):
return self.e1
def uop(self):
return self.op
def __repr__(self):
return 'Unary(%s,%s)' % (self.op, self.e1)
#Binary meta operations
class Binary():
def __init__(self,bop,e1,e2):
self.op = str(bop)
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def bop(self):
return self.op
def __repr__(self):
return 'Binary(%s,%s,%s)' % (self.op, self.e1, self.e2)
#Trinary meta operator
class If():
def __init__(self,e1,e2,e3):
self.e1 = e1
self.e2 = e2
self.e3 = e3
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def expr3(self):
return self.e3
def __repr__(self):
return 'If(%s,%s,%s)' % (self.e1, self.e2, self.e3)
class Function():
def __init__(self,arguments,body):
self.e1 = arguments
self.e2 = body
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Function(%s,%s)' % (self.e1, self.e2)
class Call():
def __init__(self,arguments,body):
self.e1 = arguments
self.e2 = body
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Call(%s,%s)' % (self.e1, self.e2)
class Return():
def __init__(self,returns):
self.e1 = returns
def expr1(self):
return self.e1
def __repr__(self):
return 'Return(%s)' % (self.e1)
class Seq():
def __init__(self,e1,e2):
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Seq(%s,%s)' % (self.e1, self.e2)
class Eq():
def __init__(self,e1,e2):
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Eq(%s,%s)' % (self.e1, self.e2)
class Ne():
def __init__(self,e1,e2):
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Ne(%s,%s)' % (self.e1, self.e2)
class Lt():
def __init__(self,e1,e2):
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Lt(%s,%s)' % (self.e1, self.e2)
class Le():
def __init__(self,e1,e2):
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Le(%s,%s)' % (self.e1, self.e2)
class Gt():
def __init__(self,e1,e2):
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Gt(%s,%s)' % (self.e1, self.e2)
class Ge():
def __init__(self,e1,e2):
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Ge(%s,%s)' % (self.e1, self.e2)
class And():
def __init__(self,e1,e2):
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'And(%s,%s)' % (self.e1, self.e2)
class Or():
def __init__(self,e1,e2):
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Or(%s,%s)' % (self.e1, self.e2)
class BitwiseAnd():
def __init__(self,e1,e2):
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Intersect(%s,%s)' % (self.e1, self.e2)
class BitwiseOr():
def __init__(self,e1,e2):
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Union(%s,%s)' % (self.e1, self.e2)
class LeftShift():
def __init__(self,e1,e2):
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'LeftShift(%s,%s)' % (self.e1, self.e2)
class RightShift():
def __init__(self,e1,e2):
self.e1 = e1
self.e2 = e2
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'RightShift(%s,%s)' % (self.e1, self.e2)
class Malloc():
def __init__(self,m,x,v):
self.e1 = m
self.e2 = x
self.e3 = v
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def expr3(self):
return self.e3
def __repr__(self):
return 'Malloc(%s,%s,%s)' % (self.e1, self.e2, self.e3)
class Array():
def __init__(self,e1):
self.e1 = e1
def expr1(self):
return self.e1
def __repr__(self):
return 'Array(%s)' % self.e1
class Index():
def __init__(self,array,index):
self.e1 = array
self.e2 = index
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Index(%s,%s)' % (self.e1, self.e2)
class Assign():
def __init__(self,var,val):
self.e1 = var
self.e2 = val
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Assign(%s,%s)' % (self.e1, self.e2)
class ForEach():
def __init__(self,i,start,end,scope,closure):
self.e1 = i
self.e2 = start
self.e3 = end
self.e4 = scope
self.e5 = closure
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def expr3(self):
return self.e3
def expr4(self):
return self.e4
def expr5(self):
return self.e5
def __repr__(self):
return 'ForEach(%s,%s,%s,%s,%s)' % (self.e1, self.e2, self.e3, self.e4, self.e5)
class For():
def __init__(self,index,condition,count,scope):
self.e1 = index
self.e2 = condition
self.e3 = count
self.e4 = scope
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def expr3(self):
return self.e3
def expr4(self):
return self.e4
def __repr__(self):
return 'For(%s,%s,%s,%s)' % (self.e1, self.e2, self.e3, self.e4)
class While():
def __init__(self,condition,scope):
self.e1 = condition
self.e2 = scope
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'While(%s,%s)' % (self.e1, self.e2)
#Side effects
class Print():
def __init__(self,expr):
self.expr1 = expr
def E(self):
return self.expr1
def __repr__(self):
return 'Print(%s)' % self.expr1
#Side effects
class Println():
def __init__(self,expr):
self.expr1 = expr
def E(self):
return self.expr1
class Input():
def __init__(self):
self.expr1 = None
def cast(self,n):
if(isfloat(n)):
return N(n)
if(n=="true" or n=="false"):
return B(n)
if(n=="null"):
return Null()
return S(n)
def __repr__(self):
return 'Input(%s)' % (self.expr1)
class Cast():
def __init__(self,value,type):
self.e1 = value
self.e2 = type
def cast(self,value,type):
if(isinstance(type,TInt)):
try:
n = N(int(value))
return n
except:
return False
elif(isinstance(type,TS)):
try:
s = S(str(value))
return s
except:
return False
elif(isinstance(type,TFloat)):
try:
f = N(float(value))
return f
except:
return False
elif(isinstance(type,TB)):
try:
b = B(bool(value))
return b
except:
return False
return False
def expr1(self):
return self.e1
def expr2(self):
return self.e2
def __repr__(self):
return 'Cast(%s,%s)' % (self.e1, self.e2)
class TInt():
def __init__(self):
self.e1 = None
def __repr__(self):
return 'Int'
class TFloat():
def __init__(self):
self.e1 = None
def __repr__(self):
return 'Float'
class TB():
def __init__(self):
self.e1 = None
def __repr__(self):
        return 'Bool'
class TS():
def __init__(self):
self.e1 = None
def __repr__(self):
return 'String'
#Helper functions
def isfloat(n):
try:
float(n)
return True
except:
return False
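To see how these node classes compose into an AST, here is a minimal sketch reproducing just `N` and `Binary` from above (a self-contained copy, so it can run on its own):

```python
class N:
    def __init__(self, n):
        try:
            self.n = float(n)
            if self.n.is_integer():
                self.n = int(n)
        except ValueError:
            # Fall back to hex parsing, as the original class allows.
            self.n = int(n, 16)

    def __repr__(self):
        return 'N(%s)' % self.n


class Binary:
    def __init__(self, bop, e1, e2):
        self.op = str(bop)
        self.e1 = e1
        self.e2 = e2

    def __repr__(self):
        return 'Binary(%s,%s,%s)' % (self.op, self.e1, self.e2)


# Build the expression 1 + 0x2A; reprs nest, so the tree prints itself.
expr = Binary('+', N('1'), N('0x2A'))
print(repr(expr))
```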
| 22.821053 | 88 | 0.525 | 1,472 | 10,840 | 3.65625 | 0.080163 | 0.193237 | 0.171683 | 0.12003 | 0.629506 | 0.615199 | 0.584913 | 0.521368 | 0.497213 | 0.497213 | 0 | 0.04442 | 0.339576 | 10,840 | 474 | 89 | 22.869198 | 0.707361 | 0.023247 | 0 | 0.598558 | 0 | 0 | 0.039633 | 0.002176 | 0 | 0 | 0 | 0 | 0 | 1 | 0.353365 | false | 0 | 0 | 0.25 | 0.747596 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
96d654ed2e780d2b9dec1380bd939101a9ca67a1 | 3,110 | py | Python | gr-doa/python/qa_full_capon3_ccf.py | SamuelSkinner/Coherent-Radio-Project | 2684f235261b2c0d75bb5c5d2ba39d7497cf970a | [
"MIT"
] | 47 | 2017-09-11T20:54:51.000Z | 2022-03-15T06:12:06.000Z | gr-doa/python/qa_full_capon3_ccf.py | SamuelSkinner/Coherent-Radio-Project | 2684f235261b2c0d75bb5c5d2ba39d7497cf970a | [
"MIT"
] | 5 | 2017-12-08T02:43:21.000Z | 2020-04-14T14:09:58.000Z | gr-doa/python/qa_full_capon3_ccf.py | SamuelSkinner/Coherent-Radio-Project | 2684f235261b2c0d75bb5c5d2ba39d7497cf970a | [
"MIT"
] | 24 | 2017-09-20T07:53:58.000Z | 2021-12-11T06:00:43.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright 2017 <+YOU OR YOUR COMPANY+>.
#
# This is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this software; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
from gnuradio import gr, gr_unittest
from gnuradio import blocks
import doa_swig as doa
class qa_full_capon3_ccf (gr_unittest.TestCase):

    def setUp(self):
        self.tb = gr.top_block()

    def tearDown(self):
        self.tb = None

    def test_001_t(self):
        # data
        self.vector_length_in = 8
        self.vector_length_out = 8
        self.data1 = ((2.101765 - 0.1136365j),(0.6830077 + 0.1766335j),(4.741745 + 0.07127096j),(1.037089 + 0.08937196j),(2.013856 + 0.08394291j),(4.518792 + 0.5311512j),(1.142077 - 0.1150589j),(-10.51938 - 0.6250837j))
        self.data2 = ((-0.9668710 + 3.410859j),(1.675037 - 0.7679004j),(1.076942 + 4.065166j),(0.9010103 + 0.3478354j),(1.026168 + 0.9570264j),(5.078891 - 0.9142112j),(-1.145686 + 2.144195j),(-6.622580 - 4.368002j))
        self.data3 = ((-4.652023 + 0.5961858j),(2.492679 - 0.2852718j),(-3.200703 + 0.5038460j),(0.8181259 - 0.04016215j),(-0.06216002 + 0.05145182j),(6.141905 - 0.6604887j),(-3.260490 + 0.4184076j),(-1.788836 + 0.007123172j))
        self.expected = (0.00694161700084806, 0.00515712471678853, 0.00819214154034853, 0.0390417091548443, 2.37873339653018, 0.159786492586136, 3.01280951509776, 0.0243304856121540)
        # blocks
        self.src1 = blocks.vector_source_c(self.data1, False, self.vector_length_in)
        self.src2 = blocks.vector_source_c(self.data2, False, self.vector_length_in)
        self.src3 = blocks.vector_source_c(self.data3, False, self.vector_length_in)
        self.capon = doa.full_capon3_ccf(self.vector_length_in, self.vector_length_out)
        self.snk = blocks.vector_sink_f(self.vector_length_out)
        # connections
        self.tb.connect(self.src1, (self.capon, 0))
        self.tb.connect(self.src2, (self.capon, 1))
        self.tb.connect(self.src3, (self.capon, 2))
        self.tb.connect(self.capon, self.snk)
        self.tb.run()
        # check data
        self.results = self.snk.data()
        print "***********************"
        print "we got back: ", ["%5.3f" % i for i in self.results]
        print "we expected: ", ["%5.3f" % i for i in self.expected]
        print "***********************"
        self.assertFloatTuplesAlmostEqual(self.expected, self.results, 0)


if __name__ == '__main__':
    gr_unittest.run(qa_full_capon3_ccf, "qa_full_capon3_ccf.xml")
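The test compares `results` against `expected` with `assertFloatTuplesAlmostEqual(..., 0)`: element-wise agreement after rounding the difference to the given number of decimal places. A rough standalone stand-in for that tolerance check (the real `gr_unittest` helper may differ in details — this is an assumption about its semantics):

```python
def float_tuples_almost_equal(a, b, places=7):
    """Element-wise almost-equality: the rounded difference must be zero.

    A sketch of the unittest-style check used above, not GNU Radio's code.
    """
    if len(a) != len(b):
        return False
    return all(round(abs(x - y), places) == 0 for x, y in zip(a, b))
```

With `places=0`, as in the test, values only need to agree to within roughly half a unit.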
| 44.428571 | 226 | 0.668489 | 452 | 3,110 | 4.488938 | 0.451327 | 0.039428 | 0.063085 | 0.044357 | 0.138985 | 0.081321 | 0.0138 | 0 | 0 | 0 | 0 | 0.21691 | 0.186174 | 3,110 | 69 | 227 | 45.072464 | 0.584749 | 0.254019 | 0 | 0.060606 | 0 | 0 | 0.04878 | 0.029617 | 0 | 0 | 0 | 0 | 0.030303 | 0 | null | null | 0 | 0.090909 | null | null | 0.121212 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
96d712726c2e0d75a51f11a62f0188b54a5b80f4 | 13,770 | py | Python | doorenv2/doorenv2/envs/doorenv_blue.py | kuolunwang/DoorGym | d9fbb67382756e659025b640857ede3a3735fb1d | [
"BSD-3-Clause"
] | 82 | 2019-08-07T06:54:44.000Z | 2022-02-02T16:44:33.000Z | doorenv2/doorenv2/envs/doorenv_blue.py | kuolunwang/DoorGym | d9fbb67382756e659025b640857ede3a3735fb1d | [
"BSD-3-Clause"
] | 4 | 2019-11-28T09:02:51.000Z | 2022-01-24T03:21:44.000Z | doorenv2/doorenv2/envs/doorenv_blue.py | kuolunwang/DoorGym | d9fbb67382756e659025b640857ede3a3735fb1d | [
"BSD-3-Clause"
] | 20 | 2019-08-11T13:42:18.000Z | 2022-01-03T08:47:50.000Z | import numpy as np
from gym import utils, spaces
from gym.envs.mujoco import mujoco_env
from gym.envs.robotics.rotations import quat2euler, euler2quat, mat2euler
import os
# import random
from random import uniform, randint, randrange
from mjremote import mjremote
import time
from doorenv2.envs.doorenv import DoorEnv
class DoorEnvBlueV1(DoorEnv, utils.EzPickle):
    def __init__(self,
                 port=1050,
                 unity=False, visionnet_input=False,
                 world_path='/home/demo/DoorGym/world_generator/world/pull_floatinghook',
                 pos_control=False,
                 ik_control=False
                 ):
        super().__init__(
            port=port,
            unity=unity,
            visionnet_input=visionnet_input,
            world_path=world_path,
            pos_control=pos_control,
        )
        utils.EzPickle.__init__(self)

    def gripper_action_gen(self, a):
        self.gripper_action = np.array([a[-1], -a[-1], a[-1], -a[-1]])
        return np.concatenate((a, self.gripper_action))

    def randomized_property(self):
        self.model.body_mass[10:16] = self.sample_gaussiannormal(self.model_origin.body_mass[10:16], 0.2)  # gaussiannormal x original_mass
        self.model.dof_damping[0:10] = self.sample_gaussiannormal(self.model_origin.dof_damping[0:10], 0.2)  # gaussiannormal x original_damping
        self.model.actuator_gainprm[:, 0] = self.sample_gaussiannormal(self.model_origin.actuator_gainprm[:, 0], 0.1)  # gaussiannormal x original_gain

    def _reset_model(self, gg=2, hooked=False, untucked=False):
        qpos = self.init_qpos
        if self.xml_path.find("float") > -1:
            qpos = self.np_random.uniform(low=-0.3, high=0.3, size=self.model.nq) + self.init_qpos
            if self.xml_path.find("hook") > -1:
                qpos[self.nn - 1] = np.random.uniform(0.0, 3.13)
            if self.xml_path.find("gripper") > -1:
                qpos[self.nn - 2] = np.random.uniform(0.0, 3.13)
        elif self.xml_path.find("mobile") > -1:
            qpos[0] = 0.0 + uniform(-0.0, 0.0)        # x_slider
            qpos[1] = 0.0 + uniform(-0.0, -0.0)       # y_slider
            qpos[2] = 0.0 + uniform(-2.3412, 3.3999)  # base_roll_joint
            qpos[3] = 0.0 + uniform(-2.2944, 0)       # shoulder_lift_joint
            qpos[4] = 0.0 + uniform(-2.6761, 2.6761)  # shoulder_roll_joint
            qpos[5] = 1.0 + uniform(-2.2944, 0)       # elbow_lift_joint
            qpos[6] = 0.0 + uniform(-2.6761, 2.6761)  # elbow_roll_joint
            qpos[7] = 1.0 + uniform(-2.2944, 0)       # wrist_lift_joint
            qpos[8] = 0.0 + uniform(-2.6761, 2.6761)  # wrist_roll_joint
        else:
            qpos = self.init_qpos
            qpos[0] = 0.0 + uniform(-0.1, 0.1)  # base_roll_joint
            qpos[1] = 0.0 + uniform(-0.1, 0.1)  # shoulder_lift_joint
            qpos[2] = 0.0 + uniform(-0.1, 0.1)  # shoulder_roll_joint
            qpos[3] = 0.0 + uniform(-0.1, 0.1)  # elbow_lift_joint
            qpos[4] = 0.0 + uniform(-0.1, 0.1)  # elbow_roll_joint
            qpos[5] = 0.0 + uniform(-0.1, 0.1)  # wrist_lift_joint
            qpos[6] = 0.0 + uniform(-0.1, 0.1)  # wrist_roll_joint
        if self.xml_path.find("pull") > -1:
            self.goal = self.np_random.uniform(low=-.15, high=.15, size=gg)
            if self.xml_path.find("lefthinge") > -1:
                self.goal[0] = np.random.uniform(-0.15, 0.05)
                self.goal[1] = np.random.uniform(-0.15, 0.15)
            else:
                self.goal[0] = np.random.uniform(-0.05, 0.15)
                self.goal[1] = np.random.uniform(-0.15, 0.15)
        else:
            self.goal = np.zeros(gg)
            self.goal[0] = np.random.uniform(-0.15, 0.15)
        qpos[self.nn:-gg] = 0
        qpos[-gg:] = self.goal
        # qvel = self.init_qvel
        # self.set_state(qpos, qvel)
        if hooked:
            if self.xml_path.find("float") > -1:
                robot_origin = np.array([1.0, 0, 1.2])
                if self.xml_path.find("lever") > -1:
                    goal_in_xyz = self.sim.data.get_geom_xpos("door_knob_4") - robot_origin
                    offset_to_hook = np.array([0.13, 0.0, 0.0])
                elif self.xml_path.find("round") > -1:
                    goal_in_xyz = self.sim.data.get_geom_xpos("door_knob_2") - robot_origin
                    offset_to_hook = np.array([0.0, 0.0, 0.0])
                elif self.xml_path.find("pull") > -1:
                    goal_in_xyz = self.sim.data.get_geom_xpos("door_knob_7") - robot_origin
                    offset_to_hook = np.array([0.13, 0.0, 0.0])
                else:
                    assert False, "not sure about the door knob type"
                if self.xml_path.find("hook") > -1:
                    offset_to_hook_randomness = np.array([np.random.uniform(-0.01, 0.01), np.random.uniform(-0.005, 0.005), np.random.uniform(-0.06, 0.06)])
                    hand_init_pos_3D = goal_in_xyz + offset_to_hook + offset_to_hook_randomness
                    hand_ori_random = self.np_random.uniform(low=-0.05, high=0.05, size=3)
                    wrist_dir_chance = np.random.randint(100)
                    if wrist_dir_chance >= 50:
                        hand_ori_random[-1] = np.random.uniform(0.0, 0.4)
                    else:
                        hand_ori_random[-1] = np.random.uniform(2.74, 3.14)
                    qpos[:self.nn] = np.concatenate((hand_init_pos_3D, hand_ori_random))
                if self.xml_path.find("gripper") > -1:
                    offset_to_hook_randomness = np.array([0.0, 0.0, np.random.uniform(-0.06, 0.06)])
                    hand_init_pos_3D = goal_in_xyz + offset_to_hook + offset_to_hook_randomness
                    hand_ori_random = self.np_random.uniform(low=-0.01, high=0.01, size=3)
                    wrist_dir_chance = np.random.randint(100)
                    if wrist_dir_chance >= 50:
                        hand_ori_random[-1] = np.random.uniform(0.0, 0.01)
                    else:
                        hand_ori_random[-1] = np.random.uniform(3.13, 3.14)
                    qpos[:self.nn - 1] = np.concatenate((hand_init_pos_3D, hand_ori_random))
                    qpos[0] -= 0.02
                    qpos[self.nn: self.nn + 4] = np.array([1.0, -1.0, 1.0, -1.0])
        qvel = self.init_qvel
        self.set_state(qpos, qvel)
        if self.unity:
            self.remote.setqpos(self.sim.data.qpos)
        return self._get_obs()

    def get_robot_joints(self):
        return np.concatenate([
            self.sim.data.qpos.flat[:self.nn],
            self.sim.data.qvel.flat[:self.nn]])

    def get_finger_target(self):
        if self.xml_path.find("hook") > -1:
            return self.sim.data.get_geom_xpos("hookfinger_2")
        elif self.xml_path.find("gripper") > -1:
            return (self.sim.data.get_geom_xpos("fingerleft2")
                    + self.sim.data.get_geom_xpos("fingerright2")) / 2.0
        else:
            assert False, "not sure about the end-effector type"

    def get_finger_ori(self):
        if self.xml_path.find("hook") > -1:
            return quat2euler(self.sim.data.get_body_xquat("robotfinger_hook_target"))
        elif self.xml_path.find("gripper") > -1:
            return quat2euler(self.sim.data.get_body_xquat("robotwrist_rolllink"))
        else:
            assert False, "not sure about the end-effector type"

    def get_finger_quat(self):
        if self.xml_path.find("hook") > -1:
            return self.sim.data.get_body_xquat("robotfinger_hook_target")
        elif self.xml_path.find("gripper") > -1:
            return self.sim.data.get_body_xquat("robotwrist_rolllink")
        else:
            assert False, "not sure about the end-effector type"


class DoorEnvBlueV2(DoorEnv, utils.EzPickle):
    def __init__(self,
                 port=1050,
                 unity=False,
                 visionnet_input=False,
                 vision_obs=False,
                 world_path='/home/demo/DoorGym/world_generator/world/pull_floatinghook',
                 pos_control=False,
                 ik_control=False,
                 imgsize_h=640,
                 imgsize_w=640
                 ):
        # print("1st passed", imgsize_h)
        super().__init__(
            port=port,
            unity=unity,
            visionnet_input=visionnet_input,
            vision_obs=vision_obs,
            world_path=world_path,
            pos_control=pos_control,
            ik_control=ik_control,
            imgsize_h=imgsize_h,
            imgsize_w=imgsize_w
        )
        utils.EzPickle.__init__(self)

    def gripper_action_gen(self, a):
        self.gripper_action = np.array([a[-1], -a[-1], a[-1], -a[-1]])
        return np.concatenate((a, self.gripper_action))

    def physics_randomization(self):
        self.model.body_mass[1:18] = self.sample_gaussiannormal(self.model_origin.body_mass[1:18], 0.2)  # gaussiannormal x original_mass
        self.model.dof_damping[0:12] = self.sample_gaussiannormal(self.model_origin.dof_damping[0:12], 0.2)  # gaussiannormal x original_damping
        self.model.actuator_gainprm[:, 0] = self.sample_gaussiannormal(self.model_origin.actuator_gainprm[:, 0], 0.1)  # gaussiannormal x original_gain

    def set_base_pos(self, pos_list=[0.6, 0.35, 0.7]):
        for i, x in enumerate(pos_list):
            self.model.body_pos[1, i] = x

    # def color_randomization(self):
    #     import pprint as pp
    #     import sys
    #     pp.pprint(dir(self.model), width=1)
    #     print(">>>>>before>>>>>>>")
    #     pp.pprint(self.model.geom_rgba)
    #     geom_n = self.model.geom_rgba.shape[0]
    #     geom_rgba = []
    #     for i in range(geom_n):
    #         geom_rgba.append([randrange(1,100)/100.0, randrange(1,100)/100.0, randrange(1,100)/100.0, 1.0])
    #     self.model.geom_rgba[:,:] = np.array(geom_rgba)
    #     self.model.cam_quat[:,:] = np.array(euler2quat(cam_ori))
    #     self.model.cam_fovy[:] = np.array(cam_fovy)
    #     print(">>>>>after>>>>>>>")
    #     pp.pprint(self.model.geom_rgba)
    #     pp.pprint(self.model.cam_quat)
    #     pp.pprint(self.model.cam_fovy)
    #     sys.exit(1)

    def _reset_model(self, gg=2, hooked=False, untucked=False):
        def randomize():
            qpos = self.init_qpos
            # qpos[0] = uniform(-3.3999, 2.3412)  # base_roll_joint
            # qpos[1] = uniform(-2.2944, 0)       # shoulder_lift_joint
            # qpos[2] = uniform(-2.6761, 2.6761)  # shoulder_roll_joint
            # qpos[3] = uniform(-2.2944, 0)       # elbow_lift_joint
            # qpos[4] = uniform(-2.6761, 2.6761)  # elbow_roll_joint
            # qpos[5] = uniform(-2.2944, 0)       # wrist_lift_joint
            # qpos[6] = uniform(-2.6761, 2.676)   # wrist_roll_joint
            qpos[0] = 0.0 + uniform(-0.1, 0.1)     # base_roll_joint
            qpos[1] = -2.310 + uniform(-0.0, 0.1)  # shoulder_lift_joint
            qpos[2] = 1.571 + uniform(-0.1, 0.1)   # shoulder_roll_joint
            qpos[3] = -0.750 + uniform(-0.1, 0.1)  # elbow_lift_joint
            qpos[4] = -1.571 + uniform(-0.1, 0.1)  # elbow_roll_joint
            qpos[5] = 0.0 + uniform(-0.1, 0.1)     # wrist_lift_joint
            qpos[6] = 0.0 + uniform(-0.1, 0.1)     # wrist_roll_joint
            if self.xml_path.find("pull") > -1:
                self.goal = self.np_random.uniform(low=-.15, high=.15, size=gg)
                if self.xml_path.find("lefthinge") > -1:
                    self.goal[0] = np.random.uniform(-0.15, 0.05)
                    self.goal[1] = np.random.uniform(-0.15, 0.15)
                else:
                    self.goal[0] = np.random.uniform(-0.05, 0.15)
                    self.goal[1] = np.random.uniform(-0.15, 0.15)
            else:
                self.goal = np.zeros(gg)
                self.goal[0] = np.random.uniform(-0.15, 0.15)
            qpos[self.nn:-gg] = 0
            qpos[-gg:] = self.goal
            qvel = self.init_qvel
            self.set_state(qpos, qvel)

        collision = True
        while collision:
            # print("collision found! Count: ", self.sim.data.ncon)
            randomize()
            collision = self.sim.data.ncon > 0
        # import pprint as pp
        # pp.pprint(dir(env.env.sim.data))
        # print("final collision count: ", self.sim.data.ncon)
        # import sys
        # sys.exit(1)
        if self.unity:
            self.remote.setqpos(self.sim.data.qpos)
        return self._get_obs()

    def get_robot_joints(self):
        if self.ik_control:
            return np.concatenate([
                self.get_finger_target(),
                self.get_finger_quat(),
                self.get_gripper_pos(),
                self.get_finger_vel(),
                self.get_finger_angvel(),
            ])
        else:
            return np.concatenate([
                self.sim.data.qpos.flat[:self.nn],
                self.sim.data.qvel.flat[:self.nn]
            ])

    def get_finger_target(self):
        return (self.sim.data.get_geom_xpos("fingerleft2")
                + self.sim.data.get_geom_xpos("fingerright2")) / 2.0

    def get_base_pos(self):
        return self.sim.data.get_body_xpos("robotbase_link")

    def get_finger_ori(self):
        return quat2euler(self.sim.data.get_body_xquat("robotwrist_rolllink"))

    def get_finger_quat(self):
        return self.sim.data.get_body_xquat("robotwrist_rolllink")

    def get_finger_vel(self):
        return self.sim.data.get_body_xvelp("robotwrist_rolllink")

    def get_finger_angvel(self):
        return self.sim.data.get_body_xvelr("robotwrist_rolllink")

    def get_gripper_pos(self):
        return np.array([self.sim.data.get_joint_qpos("right_gripper_joint")])
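Both `randomized_property` and `physics_randomization` scale the original MuJoCo parameters through `sample_gaussiannormal`, which is defined in the parent `DoorEnv` class and not shown here. A stdlib-only sketch of what such a sampler plausibly does — multiply each base value by a factor drawn from N(1, sigma) — under the assumption that this matches the parent implementation:

```python
import random


def sample_gaussiannormal(base, sigma, rng=None):
    """Hypothetical stand-in for DoorEnv.sample_gaussiannormal.

    Assumed behaviour: each physical parameter (mass, damping, gain) is
    jittered by a multiplicative Gaussian factor centred on 1.0, so the
    randomized model stays close to the original on average.
    """
    if rng is None:
        rng = random.Random(0)  # seeded here only to make the sketch deterministic
    return [b * rng.gauss(1.0, sigma) for b in base]
```

The real method operates on NumPy arrays; the list version above is only meant to show the sampling scheme.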
96d76e9dbc97b34034f629775a41e4d813ebe47b | 3,029 | py | Python | fortnitepy/message.py | Jawschamp/fortnitepy | 23488088f71b44bd00062591d1b7202047d14dff | [
"MIT"
] | null | null | null | fortnitepy/message.py | Jawschamp/fortnitepy | 23488088f71b44bd00062591d1b7202047d14dff | [
"MIT"
] | null | null | null | fortnitepy/message.py | Jawschamp/fortnitepy | 23488088f71b44bd00062591d1b7202047d14dff | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
MIT License
Copyright (c) 2019 Terbau
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
import datetime
class MessageBase:
    __slots__ = ('_client', '_author', '_content', '_created_at')

    def __init__(self, client, author, content):
        self._client = client
        self._author = author
        self._content = content
        self._created_at = datetime.datetime.now()

    @property
    def client(self):
        """:class:`Client`: The client."""
        return self._client

    @property
    def author(self):
        """:class:`Friend`: The author of the message."""
        return self._author

    @property
    def content(self):
        """:class:`str`: The content of the message."""
        return self._content

    @property
    def created_at(self):
        """:class:`datetime.datetime`: The time of when this message was received."""
        return self._created_at


class FriendMessage(MessageBase):
    __slots__ = MessageBase.__slots__

    def __init__(self, client, author, content):
        super().__init__(client, author, content)

    async def reply(self, content):
        """|coro|

        Replies to the message with the given content.

        Parameters
        ----------
        content: :class:`str`
            The content of the message
        """
        await self.author.send(content)


class PartyMessage(MessageBase):
    __slots__ = MessageBase.__slots__ + \
        ('party',)

    def __init__(self, client, party, author, content):
        super().__init__(client, author, content)
        self.party = party

    @property
    def author(self):
        """:class:`PartyMember`: The author of a message."""
        return self._author

    async def reply(self, content):
        """|coro|

        Replies to the message with the given content.

        Parameters
        ----------
        content: :class:`str`
            The content of the message
        """
        await self.party.send(content)
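`PartyMessage` extends the parent's `__slots__` tuple rather than letting instances grow a `__dict__`. A minimal illustration of the pattern with hypothetical `Base`/`Child` names: as long as every class in the chain declares `__slots__`, instances stay dict-free and reject undeclared attributes:

```python
class Base:
    __slots__ = ('_a',)

    def __init__(self, a):
        self._a = a


class Child(Base):
    # Same idea as PartyMessage above: the subclass adds only its own slots.
    __slots__ = ('_b',)

    def __init__(self, a, b):
        super().__init__(a)
        self._b = b
```

Declaring only the new names in the subclass avoids duplicating the parent's slot descriptors, which is the one refinement over copying the full parent tuple.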
| 28.308411 | 85 | 0.653681 | 371 | 3,029 | 5.183288 | 0.363881 | 0.045762 | 0.049402 | 0.026521 | 0.25325 | 0.209568 | 0.185127 | 0.126885 | 0.126885 | 0.126885 | 0 | 0.002213 | 0.254209 | 3,029 | 107 | 86 | 28.308411 | 0.849048 | 0.435457 | 0 | 0.405405 | 0 | 0 | 0.029781 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.216216 | false | 0 | 0.027027 | 0 | 0.540541 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
96da13e8a01b36c35c748fe900249eb3f631f623 | 500 | py | Python | test/commands/scriptcommands/postcommit/test_postcommit_factory.py | sturzl/guet | b8c453f07968b689b303e20e7a31b405c02c54ef | [
"Apache-2.0"
] | null | null | null | test/commands/scriptcommands/postcommit/test_postcommit_factory.py | sturzl/guet | b8c453f07968b689b303e20e7a31b405c02c54ef | [
"Apache-2.0"
] | null | null | null | test/commands/scriptcommands/postcommit/test_postcommit_factory.py | sturzl/guet | b8c453f07968b689b303e20e7a31b405c02c54ef | [
"Apache-2.0"
] | null | null | null | from unittest import TestCase
from guet.commands.scriptcommands.postcommit.postcommit_factory import PostCommitFactory
from guet.commands.scriptcommands.postcommit.postcommit_strategy import PostCommitStrategy
from guet.settings.settings import Settings
class TestPostCommitFactory(TestCase):
    def test_returns_post_commit_strategy(self):
        factory = PostCommitFactory()
        command = factory.build([], Settings)
        self.assertIsInstance(command.strategy, PostCommitStrategy)
| 31.25 | 90 | 0.812 | 49 | 500 | 8.163265 | 0.489796 | 0.06 | 0.08 | 0.15 | 0.25 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126 | 500 | 15 | 91 | 33.333333 | 0.915332 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.111111 | false | 0 | 0.444444 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
96ebc5eb75c85430b7fb3f7a0ca2d51c5eb460e9 | 249 | py | Python | mistygrind/__init__.py | rerobots/mistygrind | be0f4dc602bbbb004c99b520f8f69ce5643c8d61 | [
"Apache-2.0"
] | 2 | 2019-03-17T22:23:43.000Z | 2019-10-02T23:33:59.000Z | mistygrind/__init__.py | rerobots/mistygrind | be0f4dc602bbbb004c99b520f8f69ce5643c8d61 | [
"Apache-2.0"
] | 1 | 2020-02-08T23:21:56.000Z | 2020-03-02T18:49:57.000Z | mistygrind/__init__.py | rerobots/mistygrind | be0f4dc602bbbb004c99b520f8f69ce5643c8d61 | [
"Apache-2.0"
] | null | null | null | """a tool for static analysis of Misty skills and offboard Misty REST API clients
The repository is at https://github.com/rerobots/mistygrind
"""
try:
    from ._version import __version__
except ImportError:
    __version__ = '0.0.0.dev0+Unknown'
| 27.666667 | 81 | 0.75502 | 36 | 249 | 4.972222 | 0.861111 | 0.022346 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019048 | 0.156627 | 249 | 8 | 82 | 31.125 | 0.833333 | 0.558233 | 0 | 0 | 0 | 0 | 0.174757 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
8c0a0ff7f0c596129f81967f9f3823c84109a461 | 1,247 | py | Python | cfnbootstrap/security.py | roberthutto/aws-cfn-bootstrap | 801a16802a931fa4dae0eba4898fe1ccdb304924 | [
"Apache-2.0"
] | null | null | null | cfnbootstrap/security.py | roberthutto/aws-cfn-bootstrap | 801a16802a931fa4dae0eba4898fe1ccdb304924 | [
"Apache-2.0"
] | null | null | null | cfnbootstrap/security.py | roberthutto/aws-cfn-bootstrap | 801a16802a931fa4dae0eba4898fe1ccdb304924 | [
"Apache-2.0"
] | 3 | 2017-02-10T13:14:38.000Z | 2018-09-20T01:04:20.000Z | #==============================================================================
# Copyright 2011 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#==============================================================================
import os
import logging
def set_owner_and_group(filename, owner_name, group_name):
    logging.warn("Unsupported OS for setting owner/group: %s" % os.name)


def create_or_modify_user(user_name, groups=[], homedir=None, uid=None):
    logging.warn("Unsupported OS for user operations: %s", os.name)


def create_group(group_name, gid=None):
    logging.warn("Unsupported OS for group operations: %s", os.name)


if os.name == "posix":
    from posix_security import *
| 41.566667 | 79 | 0.655974 | 169 | 1,247 | 4.769231 | 0.56213 | 0.074442 | 0.081886 | 0.08933 | 0.150124 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 0.007449 | 0.138733 | 1,247 | 29 | 80 | 43 | 0.743017 | 0.601443 | 0 | 0 | 0 | 0 | 0.257261 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.3 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8c18291a315b3913d835cb86c5b9f8d6b73dc624 | 712 | py | Python | q3/q3/strutils.py | virtimus/makaronLab | 10b9be7d7d65d3da6219f929ea7070dd5fed3a81 | [
"0BSD"
] | 2 | 2021-03-16T05:48:36.000Z | 2021-10-11T01:55:48.000Z | q3/q3/strutils.py | virtimus/makaronLab | 10b9be7d7d65d3da6219f929ea7070dd5fed3a81 | [
"0BSD"
] | null | null | null | q3/q3/strutils.py | virtimus/makaronLab | 10b9be7d7d65d3da6219f929ea7070dd5fed3a81 | [
"0BSD"
] | 1 | 2021-03-16T05:48:39.000Z | 2021-03-16T05:48:39.000Z |
def isBlank(myString: str):
    return not (myString and myString.strip())


def isNotBlank(myString: str):
    return bool(myString and myString.strip())


def trim(myString: str):
    return myString.strip() if myString is not None else None


def replace(myString: str, search: str, replace: str):
    if myString is None:
        return myString
    return myString.replace(search, replace)


def toUpper(s: str):
    return s.upper() if s is not None else None


def isSDigits(s: str):
    if s is None:
        return False
    return s[1:].isdigit() if s.startswith('-') else s.isdigit()


def uuid():
    import uuid
    return uuid.uuid4()
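A standalone restatement of `isSDigits` from above, exercising the edge cases the slicing is there for — a leading minus sign, `None`, a bare `-`, and a decimal string:

```python
def isSDigits(s):
    # Signed-integer check: strip one optional leading '-' before isdigit().
    if s is None:
        return False
    return s[1:].isdigit() if s.startswith('-') else s.isdigit()
```

Note `'-'` alone fails because `''.isdigit()` is false, and `'4.2'` fails because `isdigit()` rejects the decimal point.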
| 19.243243 | 66 | 0.640449 | 94 | 712 | 4.851064 | 0.297872 | 0.096491 | 0.098684 | 0.105263 | 0.236842 | 0.118421 | 0 | 0 | 0 | 0 | 0 | 0.003717 | 0.244382 | 712 | 36 | 67 | 19.777778 | 0.843866 | 0 | 0 | 0.136364 | 0 | 0 | 0.001408 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.318182 | false | 0 | 0.045455 | 0.090909 | 0.772727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8c188b23dffa3a8f08c8f0d8c31809d3a3809771 | 1,770 | py | Python | zang/domain/usage.py | vlastikczech/zang-python | 980f5243071404d6838554500a6955ff7bc2a0c7 | [
"MIT"
] | 1 | 2019-02-18T21:51:58.000Z | 2019-02-18T21:51:58.000Z | zang/domain/usage.py | vlastikczech/zang-python | 980f5243071404d6838554500a6955ff7bc2a0c7 | [
"MIT"
] | 6 | 2019-06-26T13:56:22.000Z | 2022-02-17T16:40:48.000Z | zang/domain/usage.py | vlastikczech/zang-python | 980f5243071404d6838554500a6955ff7bc2a0c7 | [
"MIT"
] | 6 | 2017-10-17T12:44:32.000Z | 2020-02-07T20:45:00.000Z | # -*- coding: utf-8 -*-
"""
zang.domain.usage
~~~~~~~~~~~~~~~~~~~
`Usage` model
"""
from zang.domain.base_resource import BaseResource
from zang.domain.enums.product import Product
class Usage(BaseResource):

    _strs = [
        'sid',
        'uri',
    ]
    _ints = [
        'product_id',
        'day',
        'month',
        'year',
        'quantity',
    ]
    _reals = [
        'average_cost',
        'total_cost',
    ]
    _enums = {
        'product': Product,
    }

    def __init__(self):
        super(Usage, self).__init__()

    def __repr__(self):
        return '<Usage at 0x%x>' % (id(self))

    @property
    def sid(self):
        """An alphanumeric string identifying this resource."""
        return self._sid

    @property
    def product(self):
        """The product or feature used."""
        return self._product

    @property
    def uri(self):
        """The URL to this resource."""
        return self._uri

    @property
    def productId(self):
        """An integer identifying this product. You can see the full list
        under List Usage."""
        return self._product_id

    @property
    def day(self):
        """The day of the usage."""
        return self._day

    @property
    def month(self):
        """The month of the usage."""
        return self._month

    @property
    def year(self):
        """The year of the usage."""
        return self._year

    @property
    def quantity(self):
        """The quantity of the usage."""
        return self._quantity

    @property
    def averageCost(self):
        """The average cost of the usage."""
        return self._average_cost

    @property
    def totalCost(self):
        """The total cost of the usage."""
        return self._total_cost
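`Usage` only declares its fields in `_strs`/`_ints`/`_reals`/`_enums`; the actual coercion presumably happens in `BaseResource`, which is not shown in this chunk. A hypothetical sketch of how such declarations could drive type coercion from a raw API payload — the `Resource` class and its field lists below are illustrative, not Zang's implementation:

```python
class Resource:
    """Declarative field coercion: subclasses list names, __init__ converts."""
    _strs = ('sid',)
    _ints = ('day',)
    _reals = ('total_cost',)

    def __init__(self, payload):
        # Each declared field is coerced and stored under a leading underscore,
        # matching the self._sid / self._day style of the properties above.
        for name in self._strs:
            setattr(self, '_' + name, str(payload.get(name, '')))
        for name in self._ints:
            setattr(self, '_' + name, int(payload.get(name, 0)))
        for name in self._reals:
            setattr(self, '_' + name, float(payload.get(name, 0.0)))
```

The payoff of the pattern is that each concrete resource stays purely declarative, as `Usage` does.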
| 19.666667 | 73 | 0.545198 | 195 | 1,770 | 4.779487 | 0.307692 | 0.118026 | 0.112661 | 0.103004 | 0.137339 | 0.051502 | 0 | 0 | 0 | 0 | 0 | 0.001671 | 0.323729 | 1,770 | 89 | 74 | 19.88764 | 0.776942 | 0.235028 | 0 | 0.181818 | 0 | 0 | 0.062112 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.218182 | false | 0 | 0.036364 | 0.018182 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8c1b44ae3c95cb1eae133f0bdac0cf0e14b9f633 | 17,994 | py | Python | notebooks/helper.py | rmcd-mscb/bmi-test-binder | ae4d2bf8aaf0d70b49d918555f082e43af7a56d9 | [
"MIT"
] | null | null | null | notebooks/helper.py | rmcd-mscb/bmi-test-binder | ae4d2bf8aaf0d70b49d918555f082e43af7a56d9 | [
"MIT"
] | null | null | null | notebooks/helper.py | rmcd-mscb/bmi-test-binder | ae4d2bf8aaf0d70b49d918555f082e43af7a56d9 | [
"MIT"
] | null | null | null | import xarray as xr
import datetime as dt
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import geopandas as gpd
import pandas as pd
import numpy as np
def bmi_prms6_value_plot(data, n_index, val, label, start, end, tax = None):
tax = tax or plt.gca()
#test if val exists in both and get nhru or nsegment
dim_type = None
try:
dim_type = data[val].dims[1]
if dim_type == 'nhru':
data_val = data[val].sel(nhru=n_index, time=slice(start, end)).to_pandas()
# dprms_val = dprms[val].sel(nhru=n_index, time=slice(start, end))
data_val.plot.line(ax=tax, label=label)
tax.legend()
# line1, = dprms_val.plot.line(x='time', ax=tax, add_legend=True)
elif dim_type == 'nsegment':
data_val = data[val].sel(nsegment=n_index, time=slice(start, end)).to_pandas()
# dprms_val = dprms[val].sel(nsegment=n_index, time=slice(start, end)).to_pandas()
data_val.plot(ax=tax, label=label)
tax.legend()
# line1, = dprms_val.plot(label='PRMS6')
tax.set_title(f'{val} {n_index}')
except Exception as err:
print('Error', {err})
def bmi_prms6_residual_plot(dbmi, dprms, n_index, val, label, start, end, tax = None):
tax = tax or plt.gca()
dim_type = dbmi[val].dims[1]
try:
if dim_type == 'nhru':
data_val = dbmi[val] - dprms[val]
data = data_val.sel(nhru=n_index, time=slice(start, end)).to_pandas()
# bmi = dbmi[val]
# prms = dprms.sel(nhru=n_index, time=slice(start, end))[val]
elif dim_type == 'nsegment':
data_val = dbmi[val] - dprms[val]
data = data_val.sel(nsegment=n_index, time=slice(start, end)).to_pandas()
# bmi = dbmi.sel[val]
# prms = dprms.sel(nsegment=n_index, time=slice(start, end))[val]
# res = prms-bmi
data.plot(ax=tax, label=label)
plt.gca().get_yaxis().get_major_formatter().set_useOffset(False)
tax.legend()
tax.set_title('Residual (prms-bmi)')
except Exception as err:
print('Error', {err})
def get_feat_coord(feat, data_set, feat_id):
lat_da = data_set[feat + '_lat']
lat = lat_da[feat_id-1].values
lon_da = data_set[feat + '_lon']
lon = lon_da[feat_id-1].values
return lat,lon
def get_hrus_for_box(ds, lat_min, lat_max, lon_min, lon_max):
sel = ds.hru_lat.sel(hruid=((ds.hru_lat.values >= lat_min)
& (ds.hru_lat.values <= lat_max)))
ids_1 = sel.hruid.values
sel_1 = ds.hru_lon.sel(hruid=ids_1)
sel_2 = sel_1.sel(hruid=((sel_1.values >= lon_min) & (sel_1.values <= lon_max)))
ids_2 = sel_2.hruid.values
return ids_2
def get_segs_for_box(ds, lat_min, lat_max, lon_min, lon_max):
sel = ds.seg_lat.sel(segid=((ds.seg_lat.values >= lat_min)
& (ds.seg_lat.values <= lat_max)))
ids_1 = sel.segid.values
sel_1 = ds.seg_lon.sel(segid=ids_1)
sel_2 = sel_1.sel(segid=((sel_1.values >= lon_min) & (sel_1.values <= lon_max)))
ids_2 = sel_2.segid.values
return ids_2
def get_values_for_DOY(ds, timestamp, hru_ids, var_name):
    if timestamp < pd.Timestamp('1979-10-01') or timestamp > pd.Timestamp('1980-09-30'):
        print("The date you provided is outside of range 1979-10-01 to 1980-09-30")
        return None
    time_range = pd.date_range(timestamp, freq='1Y', periods=40)
    dif = timestamp - time_range[0]
    time_range = time_range + dif
    # print(time_range)
    date_list = []
    val_list = []
    for ts in time_range:
        try:
            date_str = str(ts.year).zfill(4) + '-' + str(ts.month).zfill(2) + '-' + str(ts.day).zfill(2)
            ds_sel = ds[var_name].sel(hruid=hru_ids, time=date_str)
            val = ds_sel.values[0][0]
            date_list.append(date_str + 'T05:00:00')
            val_list.append(val)
        except (KeyError, IndexError):
            # skip years where this date or its value is missing from the dataset
            pass
    val_np = np.asarray(val_list, dtype=np.float64)
    val_np = val_np.reshape((1, val_np.shape[0]))
    hru_ids_np = np.asarray(hru_ids, dtype=np.int32)
    date_np = np.asarray(date_list, dtype='datetime64[ns]')
    attrs = ds[var_name].attrs
    da_new = xr.DataArray(data=val_np, dims=['hruid', 'time'],
                          coords={'hruid': hru_ids_np, 'time': date_np},
                          attrs=attrs)
    return da_new
def plot_climate(c_xarray, hru_index, val, start, end, tax=None):
    tax = tax or plt.gca()
    hru_ids = c_xarray.hru.values
    simclimate = c_xarray.sel(time=slice(start, end))
    line, = simclimate.sel(hru=hru_ids[hru_index])[val].plot(ax=tax)
    tax.set_title(val)
def bmi_prms6_value_splot(gdf, mbmi, value, tvmin, tvmax, index, timesel, pax=None):
    tax = pax or plt.gca()
    gdf[value] = mbmi.get_value(value)
    divider = make_axes_locatable(tax)
    tcax = divider.append_axes(position='right', size='5%', pad=0.1)
    gdf.plot(column=value, vmin=tvmin, vmax=tvmax, ax=tax, legend=True, cax=tcax)
    tax.set_title(value)
def get_DataSet_prms6(summary, myparam):
    # merge spatial locations of hru and segments into summary file
    ds = xr.open_dataset(summary)
    param = xr.open_dataset(myparam)
    ds['hru_lat'] = param.get("hru_lat")
    ds['hru_lon'] = param.get("hru_lon")
    ds['seg_lat'] = param.get("seg_lat")
    ds['seg_lon'] = param.get("seg_lon")
    return ds
def get_gdf(file, msurf):
    gdf = gpd.read_file(file)
    pd.set_option('mode.chained_assignment', None)
    gdf_ps = gdf[gdf['hru_id_nat'].isin(msurf.var['nhm_id'].data)]
    dindex = np.zeros(np.shape(gdf_ps.hru_id_nat.values), dtype=np.int8)
    for index, val in np.ndenumerate(msurf.var['nhm_id'].data):
        # np.int was removed from NumPy; the builtin int is equivalent here
        tind = int(np.where(gdf_ps['hru_id_nat'].values == val)[0])
        dindex[tind] = np.array([index])
    gdf_ps.loc[:, 'tindex'] = dindex
    gdf_ps.sort_values(by=['tindex'], inplace=True)
    return gdf_ps
def get_gdf_streams(file, msurf):
    gdf = gpd.read_file(file)
    pd.set_option('mode.chained_assignment', None)
    gdf_ps = gdf[gdf['seg_id_nat'].isin(msurf.var['nhm_seg'].data)]
    dindex = np.zeros(np.shape(gdf_ps.seg_id_nat.values), dtype=np.int8)
    for index, val in np.ndenumerate(msurf.var['nhm_seg'].data):
        # np.int was removed from NumPy; the builtin int is equivalent here
        tind = int(np.where(gdf_ps['seg_id_nat'].values == val)[0])
        dindex[tind] = np.array([index])
    gdf_ps.loc[:, 'tindex'] = dindex
    gdf_ps.sort_values(by=['tindex'], inplace=True)
    return gdf_ps
def plot_climate2(clim_file, gdf_ps, msurf):
    clim = xr.open_dataset(clim_file)
    ptime = msurf.var['nowtime'].data
    timesel = dt.datetime(ptime[0], ptime[1], ptime[2])
    gdf_ps['tmax'] = clim.tmax.sel(time=timesel)
    gdf_ps['tmin'] = clim.tmin.sel(time=timesel)
    gdf_ps['prcp'] = clim.prcp.sel(time=timesel)
    fig, ax = plt.subplots(ncols=3)
    cax0 = make_axes_locatable(ax[0]).append_axes("right", size="5%", pad=0.1)
    cax1 = make_axes_locatable(ax[1]).append_axes("right", size="5%", pad=0.1)
    cax2 = make_axes_locatable(ax[2]).append_axes("right", size="5%", pad=0.1)
    h_tmax, l_tmax = gdf_ps.tmax.max(), gdf_ps.tmax.min()
    h_tmin, l_tmin = gdf_ps.tmin.max(), gdf_ps.tmin.min()
    h_ppt, l_ppt = gdf_ps.prcp.max(), gdf_ps.prcp.min()
    gdf_ps.plot(column='tmax', ax=ax[0], vmin=l_tmax, vmax=h_tmax, legend=True,
                label='tmax', cax=cax0)
    gdf_ps.plot(column='tmin', ax=ax[1], vmin=l_tmin, vmax=h_tmin, legend=True,
                label='tmin', cax=cax1)
    gdf_ps.plot(column='prcp', ax=ax[2], vmin=l_ppt, vmax=h_ppt, legend=True,
                label='prcp', cax=cax2)
    for i, title in enumerate(['tmax', 'tmin', 'prcp']):
        ax[i].set_xticklabels([])
        ax[i].set_yticklabels([])
        ax[i].set_title(title)
    plt.tight_layout()
    return clim
def example_plot(clim, gdf_ps, msurf, msoil, j, timesel):
    gdf_ps['tmax'] = clim.tmax.sel(time=timesel)
    gdf_ps['tmin'] = clim.tmin.sel(time=timesel)
    gdf_ps['prcp'] = clim.prcp.sel(time=timesel)
    gdf_ps['infil'] = msurf.var['infil'].data
    gdf_ps['sroff'] = msurf.var['sroff'].data
    gdf_ps['soil_moist_tot'] = msoil.var['soil_moist_tot'].data
    fig, ax = plt.subplots(ncols=6, figsize=(12, 2))
    caxes = [make_axes_locatable(a).append_axes("right", size="5%", pad=0.1)
             for a in ax]
    columns = [('tmax', 20.0, 65.0), ('tmin', 20.0, 65.0), ('prcp', 0.0, 0.7),
               ('infil', 0.0, 0.7), ('sroff', 0.0, 0.25), ('soil_moist_tot', 0.25, 1.75)]
    for i, (col, vmin, vmax) in enumerate(columns):
        gdf_ps.plot(column=col, vmin=vmin, vmax=vmax, ax=ax[i], legend=True, cax=caxes[i])
        ax[i].set_xticklabels([])
        ax[i].set_yticklabels([])
        if j == 0:
            ax[i].set_title(col)
    plt.tight_layout()
def example_plot_strm(clim, gdf_ps, gdf_stream, msurf, msoil, mgw, mstrm, j, timesel):
    gdf_ps['tmax'] = (msurf.get_value('tmax') * (9.0 / 5.0)) + 32.0
    gdf_ps['tmin'] = (msurf.get_value('tmin') * (9.0 / 5.0)) + 32.0
    gdf_ps['prcp'] = msurf.get_value('hru_ppt')
    gdf_ps['soil_moist_tot'] = msoil.var['soil_moist_tot'].data
    gdf_ps['sroff'] = msurf.var['sroff'].data
    gdf_ps['ssres_flow'] = msoil.var['ssres_flow'].data
    gdf_ps['gwres_flow'] = mgw.var['gwres_flow'].data
    gdf_stream['seg_outflow'] = mstrm.var['seg_outflow'].data
    fig, ax = plt.subplots(ncols=7, figsize=(14, 2))
    caxes = [make_axes_locatable(a).append_axes("right", size="5%", pad=0.1)
             for a in ax]
    columns = [(gdf_ps, 'tmax', 20.0, 75.0), (gdf_ps, 'prcp', 0.0, 0.7),
               (gdf_ps, 'soil_moist_tot', 0.25, 3.0), (gdf_ps, 'sroff', 0.0, 0.1),
               (gdf_ps, 'ssres_flow', 0.0, 0.1), (gdf_ps, 'gwres_flow', 0.0, 0.15),
               (gdf_stream, 'seg_outflow', 0.0, 200)]
    for i, (frame, col, vmin, vmax) in enumerate(columns):
        frame.plot(column=col, vmin=vmin, vmax=vmax, ax=ax[i], legend=True, cax=caxes[i])
        ax[i].set_xticklabels([])
        ax[i].set_yticklabels([])
        if j == 0:
            ax[i].set_title(col)
    plt.tight_layout()
def gm_example_plot(gdf_ps, gmdata, msurf, msoil, j, timesel):
    gdf_ps['tmax'] = (gmdata.tmax.data[j, :] * (9 / 5)) + 32.0
    gdf_ps['tmin'] = (gmdata.tmin.data[j, :] * (9 / 5)) + 32.0
    gdf_ps['prcp'] = gmdata.precip.data[j, :] * 0.0393701  # mm to inches
    gdf_ps['infil'] = msurf.var['infil'].data
    gdf_ps['sroff'] = msurf.var['sroff'].data
    gdf_ps['soil_moist_tot'] = msoil.var['soil_moist_tot'].data
    fig, ax = plt.subplots(ncols=6, figsize=(12, 2))
    caxes = [make_axes_locatable(a).append_axes("right", size="5%", pad=0.1)
             for a in ax]
    columns = [('tmax', 50.0, 70.0), ('tmin', 20.0, 45.0), ('prcp', 0.0, 0.75),
               ('infil', 0.0, 0.7), ('sroff', 0.0, 0.25), ('soil_moist_tot', 0.25, 1.75)]
    for i, (col, vmin, vmax) in enumerate(columns):
        gdf_ps.plot(column=col, vmin=vmin, vmax=vmax, ax=ax[i], legend=True, cax=caxes[i])
        ax[i].set_xticklabels([])
        ax[i].set_yticklabels([])
        if j == 0:
            # title each panel by the column it actually shows (the original
            # titled panels 3 and 4 'soil_to_gw'/'ssr_to_gw' although they
            # plot 'infil' and 'sroff')
            ax[i].set_title(col)
    plt.tight_layout()
def example_plot2(gdf_ps, msurf, msoil, j, timesel):
    gdf_ps['tmax'] = (msurf.get_value('tmax') * (9.0 / 5.0)) + 32.0
    gdf_ps['tmin'] = (msurf.get_value('tmin') * (9.0 / 5.0)) + 32.0
    gdf_ps['prcp'] = msurf.get_value('hru_ppt')
    gdf_ps['soil_to_gw'] = msoil.var['soil_to_gw'].data
    gdf_ps['ssr_to_gw'] = msoil.var['ssr_to_gw'].data
    gdf_ps['ssres_flow'] = msoil.var['ssres_flow'].data
    gdf_ps['soil_moist_tot'] = msoil.var['soil_moist_tot'].data
    fig, ax = plt.subplots(ncols=7, figsize=(14, 2))
    caxes = [make_axes_locatable(a).append_axes("right", size="5%", pad=0.1)
             for a in ax]
    columns = [('tmax', 20.0, 75.0), ('tmin', 20.0, 75.0), ('prcp', 0.0, 0.7),
               ('soil_to_gw', 0.0, 0.1), ('ssr_to_gw', 0.0, 0.15),
               ('ssres_flow', 0.0, 0.1), ('soil_moist_tot', 0.25, 3.0)]
    for i, (col, vmin, vmax) in enumerate(columns):
        gdf_ps.plot(column=col, vmin=vmin, vmax=vmax, ax=ax[i], legend=True, cax=caxes[i])
        ax[i].set_xticklabels([])
        ax[i].set_yticklabels([])
        if j == 0:
            ax[i].set_title(col)
    plt.tight_layout()
# === File: backend/cms/sample/migrations/0016_auto_20201224_1407.py (howawong/openwancha, MIT) ===
# Generated by Django 3.1 on 2020-12-24 14:07
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('sample', '0015_communityactivitymetadata_audience_size'),
    ]

    operations = [
        migrations.AlterField(
            model_name='communityactivitymetadata',
            name='end_date',
            field=models.DateField(blank=True, default=None, null=True),
        ),
        migrations.AlterField(
            model_name='communityactivitymetadata',
            name='start_date',
            field=models.DateField(blank=True, default=None, null=True),
        ),
        migrations.AlterField(
            model_name='communityactivitymetadata',
            name='start_date_1',
            field=models.DateField(blank=True, default=None, null=True),
        ),
    ]
# === File: output/models/saxon_data/vc/vc010_xsd/__init__.py (tefra/xsdata-w3c-tests, MIT) ===
from output.models.saxon_data.vc.vc010_xsd.vc010 import Temp
__all__ = [
    "Temp",
]
# === File: chap06/list0607.py (ytianjin/GitTest, MIT) ===
# Check whether string txt contains string ptn
txt = input('String txt: ')
ptn = input('String ptn: ')

if ptn in txt:
    print('ptn is contained in txt.')
else:
    print('txt does not contain ptn.')
# === File: covidfaq/evaluating/model/cheating_model.py (dialoguemd/covidfaq, MIT) ===
from covidfaq.evaluating.model.model_evaluation_interface import (
    ModelEvaluationInterface,
)


class CheatingModel(ModelEvaluationInterface):
    """
    model that knows the golden truth and will always return the best result.
    (useful for debugging)
    """

    def __init__(self, test_data, passage_id2index):
        self.test_data = test_data
        self.passage_id2index = passage_id2index

    def collect_answers(self, source2passages):
        pass

    def answer_question(self, question, source):
        for example in self.test_data["examples"]:
            if question == example["question"]:
                passage_id = example["passage_id"]
                return self.passage_id2index[passage_id]
# === File: utils/database_tester.py (ax-rwnd/E-dot, BSD-3-Clause) ===
DATABASE = 'db_edot'
# mysql
HOST = 'localhost'
PORT = 3306
USER = 'root'
PASSWD = ''
SQLDB = 'db_edot'

from flask import Flask, render_template, request, g
app = Flask(__name__)

# DB support
import MySQLdb


# returns a database connection for MySQL
def connect_to_database_mysql(database=None):
    if database:
        return MySQLdb.connect(host=HOST, port=PORT, user=USER, passwd=PASSWD, db=database)
    else:
        return MySQLdb.connect(host=HOST, port=PORT, user=USER, passwd=PASSWD)

# set this line to define database connection
DBFUNC = connect_to_database_mysql

tbl_user = "tbl_user"
tbl_product = "tbl_product"
tbl_orderlines = "tbl_orderlines"
tbl_order = "tbl_order"
tbl_category = "tbl_category"


def main():
    print "E-dot commerce database tester...\n"
    clear_database()  # remove existing test data if it already exists
    add_testdata()
    print_database()
    print "\nCompleted successfully"


def add_testdata():
    db = DBFUNC(SQLDB)
    print "Adding testdata"
    cursor = db.cursor()
    cursor.execute("insert into " + tbl_category + "(name) values ('Fine Gravel');")
    cursor.execute("insert into " + tbl_category + "(name) values ('Lag Gravel');")
    cursor.execute("insert into " + tbl_category + "(name) values ('Plateau Gravel');")
    cursor.execute("insert into " + tbl_category + "(name) values ('Pea Gravel');")
    cursor.execute("insert into " + tbl_category + "(name) values ('Crushed Stone');")
    cursor.execute("insert into " + tbl_product + "(name, description, image_url, price, cat_id) values ('Gravel 2mm', 'Two millimeter fine gravel', '/images/fine1.png', '29.50', (SELECT id from tbl_category WHERE name='Fine Gravel'));")
    cursor.execute("insert into " + tbl_product + "(name, description, image_url, price, cat_id) values ('Gravel 4mm', 'Four millimeter fine gravel', '/images/fine2.png', '99.90', (SELECT id from tbl_category WHERE name='Fine Gravel'));")
    cursor.execute("insert into " + tbl_product + "(name, description, image_url, price, cat_id) values ('Granite', 'A common type of felsic intrusive igneous rock that is granular and phaneritic in texture.', '/images/granite.png', '995.90', (SELECT id from tbl_category WHERE name='Crushed Stone'));")
    cursor.execute("insert into " + tbl_product + "(name, description, image_url, price, cat_id) values ('Limestone', 'A sedimentary rock composed largely of the minerals calcite and aragonite.', '/images/limestone.png', '1050.0', (SELECT id from tbl_category WHERE name='Crushed Stone'));")
    cursor.execute("insert into " + tbl_product + "(name, description, image_url, price, cat_id) values ('Dolomite', 'An anhydrous carbonate mineral composed of calcium magnesium carbonate.', '/images/rock.png', '1250.0', (SELECT id from tbl_category WHERE name='Crushed Stone'));")
    db.commit()
    db.close()


def clear_database():
    db = DBFUNC(SQLDB)
    print "Removing testdata"
    cursor = db.cursor()
    cursor.execute("delete from tbl_user;")
    cursor.execute("delete from tbl_product;")
    cursor.execute("delete from tbl_orderlines;")
    cursor.execute("delete from tbl_order;")
    cursor.execute("delete from tbl_category;")
    print "Done"
    db.commit()
    db.close()


def print_database():
    db = DBFUNC(SQLDB)
    cursor = db.cursor()
    cursor.execute("show tables;")
    numrows = int(cursor.rowcount)
    tables = []
    print "Tables found:"
    for x in range(0, numrows):
        row = cursor.fetchone()
        print row[0]
        tables.append(row[0])
    print ""
    for t in tables:
        cursor.execute("select * from " + t + ";")
        num = int(cursor.rowcount)
        print "Content in table " + t + ":"
        for x in range(0, num):
            row = cursor.fetchone()
            print row
    db.close()


main()
# === File: WebBrickLibs/brisa/utils/properties.py (AndyThirtover/wb_gateway, BSD-3-Clause) ===
# Licensed under the MIT license
# http://opensource.org/licenses/mit-license.php or see LICENSE file.
# Copyright 2007-2008 Brisa Team <brisa-develop@garage.maemo.org>

""" Facilities for python properties generation.
"""


def gen_property_with_default(name, fget=None, fset=None, doc=""):
    """ Generates a property of a name either with a default fget or a default
    fset.

    @param name: property name
    @param fget: default fget
    @param fset: default fset
    @param doc: documentation for the property

    @type name: string
    @type fget: callable or None
    @type fset: callable or None
    @type doc: string
    """
    if fget is None and fset is None:
        raise NotImplementedError("fget or fset must be not null")

    internal_name = '%s%s' % ("_prop_", name)

    def getter(self):
        if internal_name not in dir(self):
            setattr(self, internal_name, "")
        return getattr(self, internal_name)

    def setter(self, value):
        return setattr(self, internal_name, value)

    if fget is None:
        return property(getter, fset, doc=doc)
    return property(fget, setter, doc=doc)


def gen_property_of_type(name, _type, doc=""):
    """ Generates a type-forced property associated with a name. Provides type
    checking on the setter (coherence between value to be set and the type
    specified).

    @param name: property name
    @param _type: force type
    @param doc: documentation for the property

    @type name: string
    @type _type: type
    @type doc: string
    """
    internal_name = '%s%s' % ("_prop_", name)

    def getter(self):
        return getattr(self, internal_name)

    def setter(self, value):
        if isinstance(value, _type):
            return setattr(self, internal_name, value)
        else:
            # report the required type's own name (the original used
            # type(_type).__name__, which always prints 'type')
            raise TypeError(("invalid type '%s' for property %s: "
                             "%s is required.") %
                            (type(value).__name__, name, _type.__name__))

    return property(getter, setter, doc=doc)
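As a quick illustration of how a generator like this is meant to be used (a standalone sketch: the class name `Device` is illustrative, and the generator is repeated in condensed form so the snippet runs on its own rather than importing from the BRisa module):

```python
def gen_property_of_type(name, _type, doc=""):
    # condensed copy of the generator above, so this demo is self-contained
    internal_name = '_prop_' + name

    def getter(self):
        return getattr(self, internal_name)

    def setter(self, value):
        if isinstance(value, _type):
            return setattr(self, internal_name, value)
        raise TypeError("invalid type '%s' for property %s"
                        % (type(value).__name__, name))

    return property(getter, setter, doc=doc)


class Device(object):
    # 'name' may only ever hold a str; any other assignment raises TypeError
    name = gen_property_of_type("name", str, doc="device name")


d = Device()
d.name = "router"
print(d.name)  # router
try:
    d.name = 42
except TypeError as exc:
    print("rejected:", exc)
```

The type check lives entirely in the generated setter, so every class attribute built this way enforces its type at assignment time without repeating boilerplate.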
# === File: src/013-large-sum/python/solve.py (xfbs/ProjectEulerRust, MIT) ===
import solver
import sys

datafile = open(sys.argv[1])
numbers = []

for line in datafile:
    numbers.append(line)

print(solver.solve(numbers))
# === File: users/models.py (nylar/fora, CC0-1.0) ===
from django.contrib.auth.models import AbstractUser
from django.db import models


class User(AbstractUser):
    signature = models.CharField(max_length=255)

    def __str__(self):
        return self.username

    def __unicode__(self):
        return unicode(self.username)
# === File: calculator.py (M4ND33P-M4L4K4R143/PyCalculator, MIT) ===
#!/usr/bin/env python
"""Hey Guys From This Python Script You can Do Following Task Like :-
Addition
Subtraction
Multiplication ]: Coded by Mandeep :[
Division
As A Basic Calculator.....
Well You Can Enhance The features of This Dtool But Give Us Credit ..
"""
# Code Start From Here okay
# You can Contact me Learn making tool okay
# Contact me from this Number :- +919939411504
# ------------------------------ #
# Import OS Module
import os
os.system("clear") # This Line will clear the screen
print("\033[33m\033[1mHello User")
print("\033[1m\033[33mThis is A Calculator and It can Perform Basic Tasks. \033[42m ]:\033[37m Coded By \033[43m\033[31mMandeep \033[32m:[ \033[0m")
print("\033[1m\033[31m\033[4m............................\033[0m")
print()
num1 = int(input("\033[1m\033[33m[\033[32m!\033[33m]\033[34m Enter the First Number : \033[32m"))
num2 = int(input("\033[1m\033[33m[\033[32m!\033[33m]\033[34m Enter the Second Number : \033[32m"))
print("\033[4m............................")
print()
# Addition Line
print("\033[32m[\033[33m1\033[32m]\033[31m\033[4m Addition")
print("\033[32m[\033[33m2\033[32m]\033[31m\033[4m Subtraction")
print("\033[32m[\033[33m3\033[32m]\033[31m\033[4m MultiPlication")
print("\033[32m[\033[33m4\033[32m]\033[31m\033[4m Division")
print()
opt = int(input("\033[31m➢ \033[32mEnter your Choice : \033[31m"))
# Condition for The User:-
# If_Else Condition
if opt == 1:
Ans = num1 + num2
print()
print("\033[31mAnswer:-\033[32m", Ans)
elif opt == 2:
Ans = num1-num2
print()
print("\033[31mAnswer:-\033[32m", Ans)
elif opt == 3:
Ans = num1*num2
print()
print("\033[31mAnswer:-\033[32m", Ans)
elif opt == 4:
Ans = num1%num2
Ans2 = num1/num2
print("\033[32m")
print("\033[31mRemainder:- \033[33m",Ans)
print("\033[31mQuotient:- \033[33m",Ans2)
print()
else :
print("\033[33m\033[1mYou Press Invalid Option !")
# === File: test/espnet2/tasks/test_abs_task.py (Hertin/espnet, Apache-2.0) ===
import configargparse
import pytest

from espnet2.tasks.abs_task import AbsTask


@pytest.mark.parametrize("parser", [configargparse.ArgumentParser(), None])
def test_add_arguments(parser):
    AbsTask.get_parser()


def test_add_arguments_help():
    parser = AbsTask.get_parser()
    with pytest.raises(SystemExit):
        parser.parse_args(["--help"])


def test_main_help():
    with pytest.raises(SystemExit):
        AbsTask.main(cmd=["--help"])


def test_main_print_config():
    with pytest.raises(SystemExit):
        AbsTask.main(cmd=["--print_config"])


def test_main_with_no_args():
    with pytest.raises(SystemExit):
        AbsTask.main(cmd=[])


def test_print_config_and_load_it(tmp_path):
    config_file = tmp_path / "config.yaml"
    with config_file.open("w") as f:
        AbsTask.print_config(f)
    parser = AbsTask.get_parser()
    parser.parse_args(["--config", str(config_file)])
# Source: cs15211/MaximumSubarray.py (JulyKikuAkita/PythonPrac, Apache-2.0)
__source__ = 'https://leetcode.com/problems/maximum-subarray/'
# https://github.com/kamyu104/LeetCode/blob/master/Python/maximum-subarray.py
# Time: O(n)
# Space: O(1)
#
# Description: Leetcode # 53. Maximum Subarray
#
# Find the contiguous subarray within an array (containing at least one number) which has the largest sum.
#
# For example, given the array [-2,1,-3,4,-1,2,1,-5,4],
# the contiguous subarray [4,-1,2,1] has the largest sum = 6.
#
# More practice:
# If you have figured out the O(n) solution,
# try coding another solution using the divide and conquer approach, which is more subtle.
#
# Companies
# LinkedIn Bloomberg Microsoft
# Related Topics
# Array Dynamic Programming Divide and Conquer
# Similar Questions
# Best Time to Buy and Sell Stock Maximum Product Subarray
#
import unittest


class Solution:
    # @param A, a list of integers
    # @return an integer
    def maxSubArray(self, A):
        global_max, local_max = float("-inf"), 0
        for x in A:
            local_max = max(x + local_max, x)
            global_max = max(global_max, local_max)
        return global_max


# http://www.programcreek.com/2013/02/leetcode-maximum-subarray-java/
class WrongAns:
    # @param A, a list of integers
    # @return an integer
    def maxSubArray(self, A):
        global_max, local_max = float("-inf"), 0
        for x in A:
            local_max = max(x + local_max, 0)  # fails if A = [-1]
            global_max = max(global_max, local_max)
        return global_max


# The changing condition for dynamic programming is:
# "We should ignore the sum of the previous n-1 elements if the nth element is greater than the sum."
class OneD_DP:
    # @param A, a list of integers
    # @return an integer
    def maxSubArray(self, A):
        if not A or len(A) == 0:
            return 0
        sum = [0 for i in xrange(len(A))]
        sum[0], ans = A[0], A[0]
        for i in xrange(1, len(A)):
            sum[i] = max(sum[i - 1] + A[i], A[i])
            ans = max(ans, sum[i])
        return ans


class SolutionOther:
    # @param A, a list of integers
    # @return an integer
    def maxSubArray(self, A):
        ans, sum = A[0], A[0]
        for i in range(1, len(A)):
            if sum < 0:
                sum = A[i]
            else:
                sum += A[i]
            ans = max(ans, sum)
        return ans


test = SolutionOther()
arr = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
arr1 = [-1]  # ans should be -1, not 0
# print test.maxSubArray(arr)


class TestMethods(unittest.TestCase):
    def test_Local(self):
        self.assertEqual(1, 1)
        print Solution().maxSubArray(arr)
        print Solution().maxSubArray(arr1)
        print
        print OneD_DP().maxSubArray(arr)
        print OneD_DP().maxSubArray(arr1)


if __name__ == '__main__':
    unittest.main()

Java = '''
# Thought:
#
This problem was discussed by Jon Bentley (Sep. 1984, Vol. 27 No. 9, Communications of the ACM, p. 885);
the paragraph below is copied from his paper (with slight modifications).

The algorithm operates on arrays: it starts at the left end (element A[1]) and
scans through to the right end (element A[n]), keeping track of the maximum-sum subvector seen so far.
The maximum is initially A[0]. Suppose we've solved the problem for A[1 .. i - 1];
how can we extend that to A[1 .. i]? The maximum
sum in the first i elements is either the maximum sum in the first i - 1 elements
(which we'll call MaxSoFar), or it is that of a subvector that ends in position i (which we'll call MaxEndingHere).
MaxEndingHere is either A[i] plus the previous MaxEndingHere, or just A[i], whichever is larger.

# 7ms 87.03%
class Solution {
    public int maxSubArray(int[] A) {
        int maxSoFar = A[0], maxEndingHere = A[0];
        for (int i = 1; i < A.length; ++i) {
            maxEndingHere = Math.max(maxEndingHere + A[i], A[i]);
            maxSoFar = Math.max(maxSoFar, maxEndingHere);
        }
        return maxSoFar;
    }
}

# 10ms 38.86%
class Solution {
    public int maxSubArray(int[] nums) {
        int result = Integer.MIN_VALUE;
        int cur = 0;
        for (int i = 0; i < nums.length; i++) {
            cur += nums[i];
            result = Math.max(result, cur);
            cur = Math.max(cur, 0);
        }
        return result;
    }
}

# 6ms 99.93%
class Solution {
    public int maxSubArray(int[] nums) {
        int max = Integer.MIN_VALUE;
        int sum = 0;
        for (int i = 0; i < nums.length; i++) {
            if (sum < 0)
                sum = nums[i];
            else
                sum += nums[i];
            if (sum > max)
                max = sum;
        }
        return max;
    }
}

2. DP
Analysis of this problem:
Apparently, this is an optimization problem, which can usually be solved by DP.
So when it comes to DP, the first thing to figure out is the format of the sub-problem
(or the state of each sub-problem).
The format of the sub-problem is helpful when trying to come up with the recursive relation.

At first, I thought the sub-problem should look like maxSubArray(int A[], int i, int j),
which means the maxSubArray for A[i:j]. In this way,
our goal is to figure out what maxSubArray(A, 0, A.length - 1) is.
However, if we define the format of the sub-problem in this way,
it's hard to find the connection from the sub-problem to the original problem (at least for me).
In other words, I can't find a way to divide the original problem into sub-problems and use the solutions
of the sub-problems to somehow create the solution of the original one.

So I changed the format of the sub-problem into something like maxSubArray(int A[], int i),
which means the maxSubArray for A[0:i] which must have A[i] as the end element.
Note that now the sub-problem's format is less flexible and less powerful than the previous one,
because there's a limitation that A[i] must be contained in the sequence, and we have to keep track of
each solution of the sub-problem to update the global optimal value. However, now the connection
between the sub-problem and the original one becomes clearer:

maxSubArray(A, i) = (maxSubArray(A, i - 1) > 0 ? maxSubArray(A, i - 1) : 0) + A[i];

And here's the code:
# 10ms 38.86%
class Solution {
    public int maxSubArray(int[] A) {
        int n = A.length;
        int[] dp = new int[n]; // dp[i] means the maximum subarray ending with A[i]
        dp[0] = A[0];
        int max = dp[0];
        for (int i = 1; i < n; i++) {
            dp[i] = A[i] + (dp[i - 1] > 0 ? dp[i - 1] : 0);
            max = Math.max(max, dp[i]);
        }
        return max;
    }
}
'''
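The "More practice" note near the top of this file asks for a divide-and-conquer solution, but the file never includes one. A minimal Python 3 sketch of that approach (my own addition; the function name `max_subarray_dc` is not from the original): the best subarray is either entirely in the left half, entirely in the right half, or crosses the midpoint, giving O(n log n).

```python
def max_subarray_dc(A, lo=0, hi=None):
    """Divide and conquer maximum subarray: recurse on each half and
    combine with the best subarray that crosses the midpoint."""
    if hi is None:
        hi = len(A) - 1
    if lo == hi:
        return A[lo]
    mid = (lo + hi) // 2
    # Best suffix sum of the left half (must include A[mid]).
    left_best, s = float("-inf"), 0
    for i in range(mid, lo - 1, -1):
        s += A[i]
        left_best = max(left_best, s)
    # Best prefix sum of the right half (must include A[mid + 1]).
    right_best, s = float("-inf"), 0
    for i in range(mid + 1, hi + 1):
        s += A[i]
        right_best = max(right_best, s)
    crossing = left_best + right_best
    return max(max_subarray_dc(A, lo, mid),
               max_subarray_dc(A, mid + 1, hi),
               crossing)

print(max_subarray_dc([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6
```

Unlike the `WrongAns` class above, this handles all-negative inputs correctly, since a single element is always a valid subarray.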
# Source: modules/simple/entry.py (epfl-dcsl/persona-orig, Apache-2.0)
# Copyright 2019 École Polytechnique Fédérale de Lausanne. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from ..common import service
from . import services
class EchoSingleton(service.ServiceSingleton):
    class_type = services.EchoService


class IncrSingleton(service.ServiceSingleton):
    class_type = services.Incrementer


_singletons = (EchoSingleton(), IncrSingleton())
_service_map = {a.get_shortname(): a for a in _singletons}


def get_tooltip():
    return "simple services to use for debugging"


def get_services():
    return _singletons


def lookup_service(name):
    return _service_map[name]
# Source: ems_client/models.py (uw-it-cte/ems-client, Apache-2.0)
from restclients_core import models
# Create your models here.
class Status(models.Model):
    STATUS_TYPE_BOOKED_SPACE = -14
    STATUS_TYPE_WAIT = -13
    STATUS_TYPE_CANCEL = -12
    STATUS_TYPE_INFO_ONLY = -11

    STATUS_TYPE_CHOICES = (
        (STATUS_TYPE_BOOKED_SPACE, 'Booked Space'),
        (STATUS_TYPE_WAIT, 'Wait'),
        (STATUS_TYPE_CANCEL, 'Cancel'),
        (STATUS_TYPE_INFO_ONLY, 'Info Only'),
    )

    description = models.CharField(max_length=30)
    id = models.PositiveIntegerField(primary_key=True)
    status_type_id = models.SmallIntegerField(choices=STATUS_TYPE_CHOICES)
    display_on_web = models.BooleanField(default=None)

    def __str__(self):
        return self.description


class EventType(models.Model):
    description = models.CharField(max_length=30)
    id = models.PositiveIntegerField(primary_key=True)
    display_on_web = models.BooleanField(default=None)

    def __str__(self):
        return self.description


class Building(models.Model):
    description = models.CharField(max_length=50)
    building_code = models.CharField(max_length=20, null=True)
    id = models.PositiveIntegerField(primary_key=True)
    time_zone_description = models.CharField(max_length=255)
    time_zone_abbreviation = models.CharField(max_length=10)

    def __str__(self):
        return self.description


class Room(models.Model):
    room = models.CharField(max_length=20)
    description = models.CharField(max_length=50)
    dv_building = models.CharField(max_length=50)
    active = models.BooleanField()
    building = models.ForeignKey(Building, on_delete=models.PROTECT)
    id = models.PositiveIntegerField(primary_key=True)
    external_reference = models.CharField(max_length=255, null=True)

    def __str__(self):
        return self.description


class Booking(models.Model):
    booking_date = models.DateField()
    room_description = models.CharField(max_length=75)
    time_event_start = models.DateTimeField()
    time_event_end = models.DateTimeField()
    group_name = models.CharField(max_length=50)
    event_name = models.CharField(max_length=255)
    reservation_id = models.PositiveIntegerField()
    event_type_description = models.CharField(max_length=30)
    contact = models.CharField(max_length=113)
    id = models.PositiveIntegerField(primary_key=True)
    building = models.ForeignKey(Building, on_delete=models.PROTECT)
    time_booking_start = models.DateTimeField()
    time_booking_end = models.DateTimeField()
    time_zone = models.CharField(max_length=10)
    building_code = models.CharField(max_length=20)
    dv_building = models.CharField(max_length=50)
    room_code = models.CharField(max_length=20)
    dv_room = models.CharField(max_length=50)
    room = models.ForeignKey(Room, on_delete=models.PROTECT)
    status = models.ForeignKey(Status, on_delete=models.PROTECT)
    status_type_id = models.SmallIntegerField(
        choices=Status.STATUS_TYPE_CHOICES)
    date_added = models.DateTimeField(null=True)
    date_changed = models.DateTimeField(null=True)
    contact_email_address = models.CharField(max_length=75, null=True)


class ServiceOrderDetail(models.Model):
    booking_date = models.DateField()
    service_order_start_time = models.TimeField(null=True)
    service_order_end_time = models.TimeField(null=True)
    resource_description = models.CharField(max_length=50)
    resource_external_reference = models.CharField(max_length=255, blank=True)
    service_order_id = models.PositiveIntegerField()
    booking = models.ForeignKey(Booking, on_delete=models.PROTECT)
# Source: pycoax/coax/exceptions.py (lowobservable/coax, 0BSD)
"""
coax.exceptions
~~~~~~~~~~~~~~~
"""
class InterfaceError(Exception):
    """An interface error occurred."""


class ReceiveError(Exception):
    """A receive error occurred."""


class InterfaceTimeout(Exception):
    """The interface timed out."""


class ReceiveTimeout(Exception):
    """The receive operation timed out."""


class ProtocolError(Exception):
    """A protocol error occurred."""
# Source: mx_utils/gpu.py (hallmx/mx_utils, Apache-2.0)
# AUTOGENERATED! DO NOT EDIT! File to edit: 04_device.ipynb (unless otherwise specified).
__all__ = ['versions']
# Cell
import torch   # used by versions(); not imported in the original cell
import fastai  # used for fastai.__version__; not imported in the original cell


def versions():
    "Checks if GPU enabled and if so displays device details with cuda, pytorch, fastai versions"
    print("GPU: ", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device = ", torch.device(torch.cuda.current_device()))
        print("Cuda version - ", torch.version.cuda)
        print("cuDNN version - ", torch.backends.cudnn.version())
        print("PyTorch version - ", torch.__version__)
        print("fastai version", fastai.__version__)
8c941e77ae2f1d6e180cbe9a58011d0823c9b340 | 1,128 | py | Python | compvis/preprocessing/imagetoarraypreprocessor.py | IgorMeloS/Computer-Vision-Training | cc458d17ee0ff880ce6ccc47179d667bd55f4bd1 | [
"Apache-2.0"
] | null | null | null | compvis/preprocessing/imagetoarraypreprocessor.py | IgorMeloS/Computer-Vision-Training | cc458d17ee0ff880ce6ccc47179d667bd55f4bd1 | [
"Apache-2.0"
] | null | null | null | compvis/preprocessing/imagetoarraypreprocessor.py | IgorMeloS/Computer-Vision-Training | cc458d17ee0ff880ce6ccc47179d667bd55f4bd1 | [
"Apache-2.0"
] | null | null | null | # =============================================================================
# Image preprocessing using TensorFlow and Keras
# Image to array
# https://www.tensorflow.org/versions/r2.1/api_docs/python/tf/keras/preprocessing/image/img_to_array
# =============================================================================
# Importing Libraries
import tensorflow as tf
from tf.keras.preprocessing.image import img_to_array
class ImageToArrayPreprocessor:
"""Image to array preprpocessor
Args:
dataFormat: optional parameter. By default None (indicate the keras.json must be used). Other values are channel_first and channel_last.
"""
def __init__(self, dataFormat = None):
# Image data Format (row, column, channel) or (channel, row, column)
self.dataFormat = dataFormat
def preprocess(self, image):
"""preprocess function
Args:
image: image to be placed into array
"""
# Applying Keras function to convert image into array with the specific
# format
return img_to_array(image, data_format=self.dataFormat)
| 38.896552 | 145 | 0.60195 | 122 | 1,128 | 5.45082 | 0.516393 | 0.052632 | 0.045113 | 0.075188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002205 | 0.195922 | 1,128 | 28 | 146 | 40.285714 | 0.730981 | 0.642731 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.285714 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
# Source: src/paperwriting/latex.py (Song921012/How2Research, MIT)
import streamlit as st
def main():
    st.markdown("""
    All solutions to LaTeX: [LaTeX 工作室 (LaTeX Studio)](https://www.latexstudio.net/)

    Online LaTeX: [Your Projects - Overleaf, Online LaTeX Editor](https://www.overleaf.com/project)

    R Bookdown and Markdown
    """)
# Source: src/light_helper.py (sonjaq/aiyprojects-raspbian, Apache-2.0)
import socket
def send_light_command(command):
    clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    clientsocket.connect(('localhost', 7777))
    clientsocket.send(command.encode())
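A quick way to exercise this helper without a real light controller on port 7777 is a loopback listener. This sketch is my own addition: the `port` parameter, the listener thread, and the ephemeral port are not part of the original module, which hardcodes `localhost:7777`.

```python
import socket
import threading

def send_light_command(command, port=7777):
    """Same shape as the helper above, with the port parameterized (my change)."""
    clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    clientsocket.connect(('localhost', port))
    clientsocket.send(command.encode())
    clientsocket.close()

def receive_one(srv, box):
    """Accept a single connection and record the decoded command."""
    conn, _ = srv.accept()
    box.append(conn.recv(1024).decode())
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('localhost', 0))  # ephemeral port, so nothing needs to be free
srv.listen(1)
port = srv.getsockname()[1]

box = []
t = threading.Thread(target=receive_one, args=(srv, box))
t.start()
send_light_command('lights_on', port=port)
t.join()
srv.close()
print(box[0])  # lights_on
```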
# Source: selection_policy.py (mkuchnik/Efficient_Augmentation, MIT)
import numpy as np
from sphinx_json_schema_formatter.mergers import merge
class AllOfTestCase(TestCase):
def test_merge_properties(self):
base = {
"required": ["A"],
"properties": {
"A": {"type": "string", "enum": ["x", "y"]},
"B": {"type": "string"},
},
}
to_merge = {
"required": ["C"],
"properties": {
"A": {"type": "string", "enum": ["x"]},
"C": {"type": "string"},
},
}
merge(base, to_merge, "allOf")
self.assertDictEqual(
base,
{
"required": ["A", "C"],
"properties": {
"A": {"type": "string", "enum": ["x"]},
"B": {"type": "string"},
"C": {"type": "string"},
},
},
)
| 25.162162 | 60 | 0.36305 | 69 | 931 | 4.797101 | 0.42029 | 0.21148 | 0.135952 | 0.190332 | 0.241692 | 0.241692 | 0.163142 | 0 | 0 | 0 | 0 | 0 | 0.44898 | 931 | 36 | 61 | 25.861111 | 0.645224 | 0 | 0 | 0.3 | 0 | 0 | 0.167562 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 1 | 0.033333 | false | 0 | 0.066667 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8cbee886af4492e493b04ff75a79b83a353a59b2 | 3,592 | py | Python | selection_policy.py | mkuchnik/Efficient_Augmentation | a82190c02509682c34f2df782fb58f8ffd3b11da | [
"MIT"
] | 11 | 2019-05-09T22:43:29.000Z | 2021-01-13T22:26:48.000Z | selection_policy.py | mkuchnik/Efficient_Augmentation | a82190c02509682c34f2df782fb58f8ffd3b11da | [
"MIT"
] | 1 | 2020-10-07T14:03:47.000Z | 2020-10-07T14:03:47.000Z | selection_policy.py | mkuchnik/Efficient_Augmentation | a82190c02509682c34f2df782fb58f8ffd3b11da | [
"MIT"
] | 6 | 2019-03-05T02:26:01.000Z | 2021-05-11T14:35:41.000Z | import numpy as np
def softmax(x):
"""Stable softmax"""
x -= np.max(x, axis=0)
e_x = np.exp(x)
return e_x / np.sum(e_x, axis=0)
def get_idx_aug_baseline(LOO_influences):
"""Returns points randomly"""
idxs = np.random.choice(
len(LOO_influences),
len(LOO_influences),
p=None,
replace=False,
)
for idx in idxs:
yield [idx]
def get_idx_aug_influence(LOO_influences):
"""Returns points with probability proportional to magnitude of LOO"""
p = np.abs(LOO_influences, dtype=float)
p[p == 0] = min(np.min(p[p > 0]), 1e-20)
p /= np.sum(p)
idxs = np.random.choice(
len(LOO_influences),
len(LOO_influences),
p=p,
replace=False,
)
for idx in idxs:
yield [idx]
def get_idx_aug_k_dpp(LOO_influences, k):
"""Returns points with probability proportional to L matrix using DPP"""
import sample_dpp
L = LOO_influences.T.dot(LOO_influences)
assert len(L) == len(LOO_influences)
idxs = sample_dpp.oct_sample_k_dpp(
L,
k=k,
one_hot=False)
for idx in idxs:
yield [idx]
def get_idx_aug_influence_reverse(LOO_influences):
"""Returns points with probability proportional to magnitude of LOO"""
p = np.abs(LOO_influences)
p[p == 0] = min(np.min(p[p > 0]), 1e-20)
p = 1 / p
p /= np.sum(p)
p[p == 0] = 1e-20
p /= np.sum(p)
idxs = np.random.choice(
len(LOO_influences),
len(LOO_influences),
p=p,
replace=False,
)
for idx in idxs:
yield [idx]
def get_idx_aug_softmax_influence(LOO_influences):
"""Returns points with probability proportional to softmax of magnitude
of LOO"""
p = np.abs(LOO_influences)
p[p == 0] = min(np.min(p[p > 0]), 1e-20)
p = math_util.softmax(p)
idxs = np.random.choice(
len(LOO_influences),
len(LOO_influences),
p=p,
replace=False,
)
for idx in idxs:
yield [idx]
def get_idx_aug_softmax_influence_reverse(LOO_influences):
"""Returns points with probability proportional to softmax of magnitude
of LOO"""
p = np.abs(LOO_influences)
p[p == 0] = min(np.min(p[p > 0]), 1e-20)
p = 1 / p
p = math_util.softmax(p)
p[p == 0] = 1e-20
p /= np.sum(p)
idxs = np.random.choice(
len(LOO_influences),
len(LOO_influences),
p=p,
replace=False,
)
for idx in idxs:
yield [idx]
def get_idx_aug_deterministic_influence(LOO_influences):
"""Returns points in deterministic order ranked by LOO magnitude"""
idxs = np.argsort(-np.abs(LOO_influences))
for idx in idxs:
yield [idx]
def get_idx_aug_deterministic_influence_reverse(LOO_influences):
"""Returns points in deterministic order ranked by LOO magnitude"""
idxs = np.argsort(np.abs(LOO_influences))
for idx in idxs:
yield [idx]
name_to_policy = {
"baseline": get_idx_aug_baseline,
"random_proportional": get_idx_aug_influence,
"random_inverse_proportional": get_idx_aug_influence_reverse,
"random_softmax_proportional": get_idx_aug_softmax_influence,
"random_inverse_softmax_proportional":
get_idx_aug_softmax_influence_reverse,
"deterministic_proportional": get_idx_aug_deterministic_influence,
"deterministic_inverse_proportional":
get_idx_aug_deterministic_influence_reverse,
}
def get_policy_by_name(name):
return name_to_policy[name]
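A minimal usage sketch (mine, not from the repository). Each policy is a generator that yields one-element index lists; the deterministic policy is inlined here so the snippet is self-contained, and the five influence scores are hypothetical.

```python
import numpy as np

def get_idx_aug_deterministic_influence(LOO_influences):
    """Same ranking rule as the module's deterministic policy:
    by |LOO| magnitude, descending."""
    idxs = np.argsort(-np.abs(LOO_influences))
    for idx in idxs:
        yield [idx]

# Hypothetical leave-one-out influence scores for five training points.
LOO = np.array([0.5, -0.1, 0.0, 3.0, -0.7])

# Consume the generator: each step selects the next point to augment.
order = [int(i) for [i] in get_idx_aug_deterministic_influence(LOO)]
print(order)  # [3, 4, 0, 1, 2]
```

The same consumption pattern applies to the random policies; they just draw the order from a probability distribution instead of ranking deterministically.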
# Source: examples/project/module/one.py (samtaufa/pry, MIT)
def method():
    if True:
        return 1
    else:
        return 2


def method2():
    x = 2
    if x:
        y = 3
    z = 1
# Source: hard-gists/7428185/snippet.py (jjhenkel/dockerizeme, Apache-2.0)
import urllib
from wordpress_xmlrpc import Client, WordPressPost
from wordpress_xmlrpc.compat import xmlrpc_client
from wordpress_xmlrpc.methods import media, posts
import os

########################### Read Me First ###############################
'''
Description
===========
Add new posts to WordPress remotely from Python, using the XML-RPC interface
provided by WordPress.

Installation Requirements
=========================
Install Python 2.7 (don't download 3+, as most libraries don't yet support
version 3).

Install from PyPI using easy_install: easy_install python-wordpress-xmlrpc
Easy_Install link: https://pypi.python.org/pypi/setuptools

Windows Installation Guide
==========================
- Download and install Easy_Install from the above link: extract the
  downloaded file, and from CMD run 'python setup.py install' in the extracted
  directory. This installs easy_install.
- Go to %/python27/script and run: easy_install python-wordpress-xmlrpc

Ubuntu Installation Guide
=========================
sudo apt-get install python-setuptools
sudo easy_install python-wordpress-xmlrpc

Note: the script ships with dummy data so it works out of the box; change it
or integrate it with your own code to make it more dynamic.

For bugs/suggestions: contact@waqasjamal.com
'''


class Custom_WP_XMLRPC:
    def post_article(self, wpUrl, wpUserName, wpPassword, articleTitle,
                     articleCategories, articleContent, articleTags, PhotoUrl):
        self.path = os.getcwd() + "\\00000001.jpg"
        self.articlePhotoUrl = PhotoUrl
        self.wpUrl = wpUrl
        self.wpUserName = wpUserName
        self.wpPassword = wpPassword

        # Download the image file
        f = open(self.path, 'wb')
        f.write(urllib.urlopen(self.articlePhotoUrl).read())
        f.close()

        # Upload it to WordPress
        client = Client(self.wpUrl, self.wpUserName, self.wpPassword)
        filename = self.path

        # Prepare metadata
        data = {'name': 'picture.jpg', 'type': 'image/jpg'}

        # Read the binary file and let the XML-RPC library encode it into base64
        with open(filename, 'rb') as img:
            data['bits'] = xmlrpc_client.Binary(img.read())
        response = client.call(media.UploadFile(data))
        attachment_id = response['id']

        # Post
        post = WordPressPost()
        post.title = articleTitle
        post.content = articleContent
        post.terms_names = {'post_tag': articleTags, 'category': articleCategories}
        post.post_status = 'publish'
        post.thumbnail = attachment_id
        post.id = client.call(posts.NewPost(post))
        print 'Post successfully posted. Its id is:', post.id


#########################################
#     Post & WP credentials detail      #
#########################################
# URL of an image on the internet
articlePhotoUrl = 'http://i1.tribune.com.pk/wp-content/uploads/2013/07/584065-twitter-1375197036-960-640x480.jpg'
# Don't forget the /xmlrpc.php, because that's your posting address for the XML-RPC server
wpUrl = 'http://YourWebSite.com/xmlrpc.php'
# WordPress username
wpUserName = 'WordPressUsername'
# WordPress password
wpPassword = 'YourWordPressPassword'
# Post title
articleTitle = 'Testing Python Script version 3'
# Post body/description
articleContent = 'Final .... Testing Fully Automated'
# List of tags
articleTags = ['code', 'python']
# List of categories
articleCategories = ['language', 'art']

# Create the class object and call the XML-RPC custom post function;
# on submission it prints the new post id.
xmlrpc_object = Custom_WP_XMLRPC()
xmlrpc_object.post_article(wpUrl, wpUserName, wpPassword, articleTitle,
                           articleCategories, articleContent, articleTags,
                           articlePhotoUrl)
# File: ircclient.py (and3rson/notify, MIT)
import socket
from select import select
from time import sleep
class IRCClient(object):
def __init__(self, server, port, username, password):
# super(Client, self).__init__()
self.server = server
self.port = port
self.username = username
self.password = password
self.alive = False
def send(self, message):
        self.conn.send((message + '\r\n').encode('utf-8'))  # sockets take bytes on Python 3
def start(self):
self.alive = True
while self.alive:
while not self.connect():
pass
self.process()
def process(self):
while self.alive:
i, o, e = select([self.conn], [], [], 1)
if self.conn in i:
                data = [part for part in
                        self.conn.recv(1024).decode('utf-8', 'replace').split('\r\n')
                        if part]
                if not data:
                    return
for line in data:
self.on_recv(line)
def connect(self):
try:
self.conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.conn.connect((self.server, self.port))
self.send('NICK {}'.format(self.username))
self.send('PASS {}'.format(self.password))
self.send('USER {u} {u} {u} NotifyClient {u}'.format(u=self.username))
self.alive = True
        except OSError:  # connection/DNS failures; retry after a short delay
sleep(1)
return False
return True
def on_recv(self, line):
parts = line.split(' ')
if parts[0].lower() == 'ping':
self.send('PONG {}'.format(parts[1]))
elif parts[1] == '001':
self.on_connect()
else:
self.on_message(line)
def on_message(self, line):
raise NotImplementedError()
def join(self, channel):
self.send('JOIN #{}'.format(channel))
def privmsg(self, channel, message):
self.send('PRIVMSG #{} {}'.format(channel, message))
def stop(self):
self.alive = False
        print('See ya!')
self.send('QUIT')
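The PING/001 dispatch in `on_recv` above can be exercised without a socket; a minimal standalone sketch of the same parsing rule (returning the action instead of sending it):

```python
def classify_irc_line(line):
    """Mirror IRCClient.on_recv: PING gets a PONG reply, numeric 001 marks
    the end of registration, everything else is an ordinary message."""
    parts = line.split(' ')
    if parts[0].lower() == 'ping':
        return 'PONG {}'.format(parts[1])
    if len(parts) > 1 and parts[1] == '001':
        return 'connected'
    return 'message'
```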
# File: webwaybooks/tests/utils/test_log.py (bysorry/telegram_media_downloader, Unlicense)
"""Unittest module for log handlers."""
import os
import sys
import unittest
import mock
sys.path.append("..") # Adds higher directory to python modules path.
from utils.log import LogFilter
class MockLog:
"""
Mock logs.
"""
def __init__(self, **kwargs):
self.funcName = kwargs["funcName"]
class MetaTestCase(unittest.TestCase):
def test_log_filter(self):
result = LogFilter().filter(MockLog(funcName="send"))
self.assertEqual(result, False)
result1 = LogFilter().filter(MockLog(funcName="get_file"))
self.assertEqual(result1, False)
result2 = LogFilter().filter(MockLog(funcName="Synced"))
        self.assertEqual(result2, True)
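`LogFilter` itself is only exercised through mocks above; an analogous stdlib `logging.Filter` that drops records by `funcName` (the blocklist here is inferred from what the tests expect: False for 'send' and 'get_file', True otherwise) could look like:

```python
import logging

class FuncNameFilter(logging.Filter):
    """Drop records emitted from functions on the blocklist, mirroring the
    behaviour the mocked LogFilter tests assert."""
    NOISY = {'send', 'get_file'}

    def filter(self, record):
        # Returning False tells the logging machinery to drop the record.
        return record.funcName not in self.NOISY

def make_record(func_name):
    """Build a bare LogRecord carrying only the funcName we care about."""
    return logging.LogRecord('demo', logging.INFO, __file__, 1,
                             'msg', None, None, func=func_name)
```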
# File: DS_Scripts/functionsLoadCSV.py (SoulfulArt/Mapa_Vivo_Producao_Brasil, MIT)
#These are functions and variables that support script loadCSV.py
from os import remove
from os import path
#Function that normalizes numeric data, because some files use ',' as the
#thousands separator and others use ',' as the decimal separator
def simplifyNumber (df):
df["Producao"] = df["Producao"].astype(str).str.replace(',','.')
df["AreaH"] = df["AreaH"].astype(str).str.replace(',','.')
df["Valor"] = df["Valor"].astype(str).str.replace(',','.')
df["Producao"] = df["Producao"].astype(float)
df["AreaH"] = df["AreaH"].astype(float)
df["Valor"] = df["Valor"].astype(float)
#Rendimento is a function of producao and area
df["Rendimento"] = (df["Producao"]/df["AreaH"])
df["Rendimento"] = round(df["Rendimento"],2)
return df
'''Function that replaces accented letters with their unaccented equivalents;
accent differences can prevent matching cities that are the same but are
spelled with different accents across the datasets. Other transformations
specific to the data used in this project, such as replacing '/' with 'e',
were identified as important substitutions to help data manipulation
throughout the project.'''
def simplifyText (pdSeries):
#it's better to work with homogenous casing
pdSeries = pdSeries.str.lower()
#cities that have name changes
pdSeries = pdSeries.str.replace('fortaleza do tabocão','tabocão')
pdSeries = pdSeries.str.replace('augusto severo','campo grande')
#problems with accent in Portuguese
pdSeries = pdSeries.str.replace('á','a')
pdSeries = pdSeries.str.replace('ã','a')
pdSeries = pdSeries.str.replace('â','a')
pdSeries = pdSeries.str.replace('é','e')
pdSeries = pdSeries.str.replace('ê','e')
pdSeries = pdSeries.str.replace('í','i')
pdSeries = pdSeries.str.replace('ó','o')
pdSeries = pdSeries.str.replace('ô','o')
pdSeries = pdSeries.str.replace('õ','o')
pdSeries = pdSeries.str.replace('ú','u')
pdSeries = pdSeries.str.replace('û','u')
pdSeries = pdSeries.str.replace('ü','u')
pdSeries = pdSeries.str.replace('j','g')
pdSeries = pdSeries.str.replace('-','')
pdSeries = pdSeries.str.replace('y','i') #old portuguese had y
#problems related to Portugese language
pdSeries = pdSeries.str.replace('za','sa') #Izabel x Isabel
pdSeries = pdSeries.str.replace('zo','so') #Brazopolis x Braspolis
pdSeries = pdSeries.str.replace('ze','se') #Euzebia x Eusebia
pdSeries = pdSeries.str.replace('reo','reu') #poxoreu x poxoreo
pdSeries = pdSeries.str.replace('tho','to') #thome x thome
pdSeries = pdSeries.str.replace('tomaz','tomas') #thomaz x thomas
pdSeries = pdSeries.str.replace('thi','ti') #thiago x tiago
pdSeries = pdSeries.str.replace('luiz','luis') #luiz x luis
pdSeries = pdSeries.str.replace('florinea','florinia') #florinea x florinia
pdSeries = pdSeries.str.replace(' moz',' mos') # moz x mos (porto)
pdSeries = pdSeries.str.replace(' luz',' lus') # santa luz x lus
pdSeries = pdSeries.str.replace('cruz','crus') #vera cruz x crus
pdSeries = pdSeries.str.replace('das artes','') #embu das artes x embu
pdSeries = pdSeries.str.replace('terezinha','teresinha') #terezinha x teresinha
#articles
pdSeries = pdSeries.str.replace(' de ','dx')
pdSeries = pdSeries.str.replace(' da ','dx')
pdSeries = pdSeries.str.replace(' do ','dx')
pdSeries = pdSeries.str.replace(' das ','dx')
pdSeries = pdSeries.str.replace(' dos ','dx')
#separator / and e
pdSeries = pdSeries.str.replace('/','e')
pdSeries = pdSeries.str.replace(' ','')
#some cities are comming with state initials
pdSeries = pdSeries.str.replace('(ac)','', regex = False)
pdSeries = pdSeries.str.replace('(al)','', regex = False)
pdSeries = pdSeries.str.replace('(ap)','', regex = False)
pdSeries = pdSeries.str.replace('(am)','', regex = False)
pdSeries = pdSeries.str.replace('(ba)','', regex = False)
pdSeries = pdSeries.str.replace('(ce)','', regex = False)
pdSeries = pdSeries.str.replace('(df)','', regex = False)
pdSeries = pdSeries.str.replace('(es)','', regex = False)
pdSeries = pdSeries.str.replace('(go)','', regex = False)
pdSeries = pdSeries.str.replace('(ma)','', regex = False)
pdSeries = pdSeries.str.replace('(mt)','', regex = False)
pdSeries = pdSeries.str.replace('(ms)','', regex = False)
pdSeries = pdSeries.str.replace('(mg)','', regex = False)
pdSeries = pdSeries.str.replace('(pa)','', regex = False)
pdSeries = pdSeries.str.replace('(pb)','', regex = False)
pdSeries = pdSeries.str.replace('(pr)','', regex = False)
pdSeries = pdSeries.str.replace('(pe)','', regex = False)
pdSeries = pdSeries.str.replace('(pi)','', regex = False)
pdSeries = pdSeries.str.replace('(rj)','', regex = False)
pdSeries = pdSeries.str.replace('(rg)','', regex = False)
pdSeries = pdSeries.str.replace('(rn)','', regex = False)
pdSeries = pdSeries.str.replace('(rs)','', regex = False)
pdSeries = pdSeries.str.replace('(ro)','', regex = False)
pdSeries = pdSeries.str.replace('(rr)','', regex = False)
pdSeries = pdSeries.str.replace('(sc)','', regex = False)
pdSeries = pdSeries.str.replace('(sp)','', regex = False)
pdSeries = pdSeries.str.replace('(se)','', regex = False)
pdSeries = pdSeries.str.replace('(to)','', regex = False)
return pdSeries
#Function cleanDataCSV cleans data from a DataFrame
def cleanDataCSV (df):
#Valor Produção (Moeda em Real) must have number values
df = df[df['Valor Produção (Moeda em Real)']!='...']
df = df[df['Valor Produção (Moeda em Real)']!='..']
df = df[df['Valor Produção (Moeda em Real)']!='-']
df = df[df['Nome Lavoura']!='Total']
#Qtd.Produzida can't be NaN, but Valor can
df = df[df["Qtd.Produzida"].notna()]
df = df.reset_index(drop = True)
return df
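The accent-replacement chain in `simplifyText` can be sketched for plain Python strings with `str.maketrans` (a simplified subset of the substitutions above, no pandas required):

```python
# Map each accented vowel used in simplifyText to its plain counterpart.
ACCENTS = str.maketrans('áãâéêíóôõúûü', 'aaaeeiooouuu')

def strip_accents(name):
    """Lower-case, de-accent and de-space a city name, as simplifyText
    does column-wide on a pandas Series."""
    return name.lower().translate(ACCENTS).replace('-', '').replace(' ', '')
```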
# File: scripts/run_badger_bot.py (Badger-Finance/price-bots, MIT)
import asyncio
import json
import logging
import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../src")))
from price_bot import PriceBot
from utils import get_secret
if __name__ == "__main__":
loop = asyncio.get_event_loop()
# name of secret in secrets manager
bot_token_secret_name = "price-bots/badger-bot-token"
# key value to retrieve secret value after boto3 call to secretsmanager
bot_token_secret_key = "BOT_TOKEN_BADGER"
badger_client = PriceBot(
coingecko_token_id="badger-dao",
token_display="BADGER",
discord_id=os.getenv("BOT_ID_BADGER"),
bot_token_secret_name=bot_token_secret_name,
bot_token_secret_key=bot_token_secret_key,
)
bot_token = get_secret(bot_token_secret_name, bot_token_secret_key)
loop.create_task(badger_client.start(bot_token))
loop.run_forever()
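The script above uses the explicit get_event_loop/create_task/run_forever pattern; on newer Python the same start-a-coroutine idea is usually written with `asyncio.run`. A sketch with a stand-in coroutine (since `PriceBot` needs real credentials and a network connection):

```python
import asyncio

async def start_bot(token):
    """Stand-in for PriceBot.start(token); a real bot would connect to
    Discord here instead of returning immediately."""
    await asyncio.sleep(0)
    return 'started with {}'.format(token)

# asyncio.run creates a loop, runs the coroutine to completion, and closes it.
result = asyncio.run(start_bot('dummy-token'))
```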
# File: pysnmp-with-texts/COMPANY-MIB.py (agustinhenze/mibs.snmplabs.com, Apache-2.0)
#
# PySNMP MIB module COMPANY-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/COMPANY-MIB
# Produced by pysmi-0.3.4 at Wed May 1 12:26:32 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, Integer, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "OctetString", "Integer", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ValueRangeConstraint, SingleValueConstraint, ValueSizeConstraint, ConstraintsIntersection, ConstraintsUnion = mibBuilder.importSymbols("ASN1-REFINEMENT", "ValueRangeConstraint", "SingleValueConstraint", "ValueSizeConstraint", "ConstraintsIntersection", "ConstraintsUnion")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
MibScalar, MibTable, MibTableRow, MibTableColumn, iso, Bits, TimeTicks, Counter64, IpAddress, MibIdentifier, Counter32, Gauge32, NotificationType, ObjectIdentity, enterprises, Unsigned32, Integer32, ModuleIdentity = mibBuilder.importSymbols("SNMPv2-SMI", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "iso", "Bits", "TimeTicks", "Counter64", "IpAddress", "MibIdentifier", "Counter32", "Gauge32", "NotificationType", "ObjectIdentity", "enterprises", "Unsigned32", "Integer32", "ModuleIdentity")
TextualConvention, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TextualConvention", "DisplayString")
allotCom = ModuleIdentity((1, 3, 6, 1, 4, 1, 2603))
if mibBuilder.loadTexts: allotCom.setLastUpdated('0103120000Z')
if mibBuilder.loadTexts: allotCom.setOrganization('Allot Communications')
if mibBuilder.loadTexts: allotCom.setContactInfo('Allot Communications postal: 5 Hanagar St. Industrial Zone Neve Neeman Hod Hasharon 45800 Israel phone: +972-(0)9-761-9200 fax: +972-(0)9-744-3626 email: support@allot.com')
if mibBuilder.loadTexts: allotCom.setDescription('This file defines the private Allot SNMP MIB extensions.')
neTraps = MibIdentifier((1, 3, 6, 1, 4, 1, 2603, 2))
nePrimaryActive = NotificationType((1, 3, 6, 1, 4, 1, 2603, 2, 11))
if mibBuilder.loadTexts: nePrimaryActive.setStatus('current')
if mibBuilder.loadTexts: nePrimaryActive.setDescription('This trap is sent when the primary NE changes to Active mode')
nePrimaryBypass = NotificationType((1, 3, 6, 1, 4, 1, 2603, 2, 12))
if mibBuilder.loadTexts: nePrimaryBypass.setStatus('current')
if mibBuilder.loadTexts: nePrimaryBypass.setDescription('This trap is sent when the primary NE changes to Bypass mode')
neSecondaryActive = NotificationType((1, 3, 6, 1, 4, 1, 2603, 2, 13))
if mibBuilder.loadTexts: neSecondaryActive.setStatus('current')
if mibBuilder.loadTexts: neSecondaryActive.setDescription('This trap is sent when the secondary NE changes to Active mode')
neSecondaryStandBy = NotificationType((1, 3, 6, 1, 4, 1, 2603, 2, 14))
if mibBuilder.loadTexts: neSecondaryStandBy.setStatus('current')
if mibBuilder.loadTexts: neSecondaryStandBy.setDescription('This trap is sent when the secondary NE changes to StandBy mode')
neSecondaryBypass = NotificationType((1, 3, 6, 1, 4, 1, 2603, 2, 15))
if mibBuilder.loadTexts: neSecondaryBypass.setStatus('current')
if mibBuilder.loadTexts: neSecondaryBypass.setDescription('This trap is sent when the secondary NE changes to Bypass mode')
collTableOverFlow = NotificationType((1, 3, 6, 1, 4, 1, 2603, 2, 21))
if mibBuilder.loadTexts: collTableOverFlow.setStatus('current')
if mibBuilder.loadTexts: collTableOverFlow.setDescription('This trap is sent when acounting is not reading from the collector which causes the collector table to exceeds limits')
neAlertEvent = NotificationType((1, 3, 6, 1, 4, 1, 2603, 2, 22))
if mibBuilder.loadTexts: neAlertEvent.setStatus('current')
if mibBuilder.loadTexts: neAlertEvent.setDescription('This trap is sent when user defined event occurs')
neNotificationsGroup = NotificationGroup((1, 3, 6, 1, 4, 1, 2603, 3)).setObjects(("COMPANY-MIB", "nePrimaryActive"), ("COMPANY-MIB", "nePrimaryBypass"), ("COMPANY-MIB", "neSecondaryActive"), ("COMPANY-MIB", "neSecondaryStandBy"), ("COMPANY-MIB", "neSecondaryBypass"), ("COMPANY-MIB", "collTableOverFlow"), ("COMPANY-MIB", "neAlertEvent"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
neNotificationsGroup = neNotificationsGroup.setStatus('current')
if mibBuilder.loadTexts: neNotificationsGroup.setDescription('The notifications which indicate specific changes of the NE state.')
mibBuilder.exportSymbols("COMPANY-MIB", nePrimaryBypass=nePrimaryBypass, nePrimaryActive=nePrimaryActive, neSecondaryBypass=neSecondaryBypass, PYSNMP_MODULE_ID=allotCom, neSecondaryActive=neSecondaryActive, allotCom=allotCom, collTableOverFlow=collTableOverFlow, neNotificationsGroup=neNotificationsGroup, neTraps=neTraps, neSecondaryStandBy=neSecondaryStandBy, neAlertEvent=neAlertEvent)
# File: pycs/astro/wl/__init__.py (sfarrens/cosmostat, MIT)
# -*- coding: utf-8 -*-
"""WEAK LENSING ROUTINES
This module contains submodules for weak gravitational lensing.
"""
__all__ = ['lenspack', 'mass_mapping']
from . import *
# File: tests/test_service_configuration.py (LaEmma/sparrow_cloud, MIT)
# import os
# import datetime
# import unittest
# from unittest import mock
# from django.core.cache import cache
# from sparrow_cloud.registry.service_configuration import config
#
#
#
# DATA = ('8018915', {'LockIndex': 0, 'Key': 'foo', 'Flags': 0, 'Value': b'{"test_key0":"value0",\n"test_key1":"value1"}',
# 'CreateIndex': 8018915, 'ModifyIndex': 8018915})
#
#
# class ConsulServiceTest(unittest.TestCase):
#
# def setUp(self):
# os.environ["DJANGO_SETTINGS_MODULE"] = "tests.mock_settings"
#
# @mock.patch('consul.Consul.KV.get', return_value=DATA)
# def test_consul_parameter_variable(self, mock_consul_config_data):
# """
#         Test fetching data from consul
# """
# from django.conf import settings
# settings.CONSUL_CLIENT_ADDR = {
# "HOST": "127.0.0.1",
# "PORT": 8500
# }
# data = config(key='foo')
# self.assertEqual("value0", data.get("test_key0"))
# self.assertEqual("value1", data.get("test_key1"))
#
# def test_consul_error(self):
# """
#         Test the cached value
# """
# from django.conf import settings
# data = {"test_key0": "value0", "test_key1": "value1"}
# data['cache_time'] = datetime.datetime.now()
# cache.set('foo', data, 60)
# settings.CONSUL_CLIENT_ADDR = {
# "HOST": "127.0.0.1",
# "PORT": "8500"
# }
# value = config(key='foo')
# self.assertEqual({"test_key0": "value0", "test_key1": "value1"}, value)
#
# @mock.patch('consul.Consul.KV.get', return_value=("", ""))
# def test_settings_value(self, mock_consul_data):
# """
#         Test the settings data
# """
# from django.conf import settings
# settings.foo = {"test_key0": "value0", "test_key1": "value1"}
# settings.CONSUL_CLIENT_ADDR = {
# "HOST": "127.0.0.1",
# "PORT": "8500"
# }
# value = config(key='foo')
# self.assertEqual({"test_key0": "value0", "test_key1": "value1"}, value)
#
# def tearDown(self):
# del os.environ["DJANGO_SETTINGS_MODULE"]
# File: bitex/formatters/ccex.py (ligggooo/quant2018, MIT)
# Import Built-Ins
import logging
# Import Third-Party
# Import Homebrew
from bitex.formatters.base import Formatter
# Init Logging Facilities
log = logging.getLogger(__name__)
class CcexFormatter(Formatter):
@staticmethod
def ticker(data, *args, **kwargs):
return (data['buy'], data['sell'], data['high'], data['low'], None,
                None, data['lastprice'], None, data['updated'])
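The ticker formatter above maps exchange-specific keys into a fixed tuple; the mapping can be checked standalone without the bitex base class (the tuple layout is inferred from the return statement above, and the sample values are made up):

```python
def format_ccex_ticker(data):
    """Reorder C-CEX ticker fields into the fixed tuple layout used above:
    (bid, ask, high, low, None, None, last, None, timestamp)."""
    return (data['buy'], data['sell'], data['high'], data['low'], None,
            None, data['lastprice'], None, data['updated'])

sample = {'buy': 1.0, 'sell': 1.2, 'high': 1.5, 'low': 0.9,
          'lastprice': 1.1, 'updated': 1514764800}
```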
# File: clients/python/unwired/models/geolocation_response_schema.py (3v1lW1th1n/locationapi-client-libraries, MIT)
# coding: utf-8
"""
Location API
Geolocation, Geocoding and Maps # noqa: E501
OpenAPI spec version: 2.0.0
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
class GeolocationResponseSchema(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'status': 'str',
'message': 'str',
'balance': 'int',
'balance_slots': 'int',
'lat': 'float',
'lon': 'float',
'accuracy': 'int',
'address': 'str',
'address_details': 'AddressDetailsSchema',
'aged': 'int',
'fallback': 'FallbackSchema'
}
attribute_map = {
'status': 'status',
'message': 'message',
'balance': 'balance',
'balance_slots': 'balance_slots',
'lat': 'lat',
'lon': 'lon',
'accuracy': 'accuracy',
'address': 'address',
'address_details': 'address_details',
'aged': 'aged',
'fallback': 'fallback'
}
def __init__(self, status=None, message=None, balance=None, balance_slots=None, lat=None, lon=None, accuracy=None, address=None, address_details=None, aged=None, fallback=None): # noqa: E501
"""GeolocationResponseSchema - a model defined in OpenAPI""" # noqa: E501
self._status = None
self._message = None
self._balance = None
self._balance_slots = None
self._lat = None
self._lon = None
self._accuracy = None
self._address = None
self._address_details = None
self._aged = None
self._fallback = None
self.discriminator = None
if status is not None:
self.status = status
if message is not None:
self.message = message
if balance is not None:
self.balance = balance
if balance_slots is not None:
self.balance_slots = balance_slots
if lat is not None:
self.lat = lat
if lon is not None:
self.lon = lon
if accuracy is not None:
self.accuracy = accuracy
if address is not None:
self.address = address
if address_details is not None:
self.address_details = address_details
if aged is not None:
self.aged = aged
if fallback is not None:
self.fallback = fallback
@property
def status(self):
"""Gets the status of this GeolocationResponseSchema. # noqa: E501
If the request is successful, ok is returned. Otherwise error is returned # noqa: E501
:return: The status of this GeolocationResponseSchema. # noqa: E501
:rtype: str
"""
return self._status
@status.setter
def status(self, status):
"""Sets the status of this GeolocationResponseSchema.
If the request is successful, ok is returned. Otherwise error is returned # noqa: E501
:param status: The status of this GeolocationResponseSchema. # noqa: E501
:type: str
"""
self._status = status
@property
def message(self):
"""Gets the message of this GeolocationResponseSchema. # noqa: E501
Any additional information from the server is returned here # noqa: E501
:return: The message of this GeolocationResponseSchema. # noqa: E501
:rtype: str
"""
return self._message
@message.setter
def message(self, message):
"""Sets the message of this GeolocationResponseSchema.
Any additional information from the server is returned here # noqa: E501
:param message: The message of this GeolocationResponseSchema. # noqa: E501
:type: str
"""
self._message = message
@property
def balance(self):
"""Gets the balance of this GeolocationResponseSchema. # noqa: E501
This represents the remaining balance on the API token. Requests that return error are not charged and do not affect balance # noqa: E501
:return: The balance of this GeolocationResponseSchema. # noqa: E501
:rtype: int
"""
return self._balance
@balance.setter
def balance(self, balance):
"""Sets the balance of this GeolocationResponseSchema.
This represents the remaining balance on the API token. Requests that return error are not charged and do not affect balance # noqa: E501
:param balance: The balance of this GeolocationResponseSchema. # noqa: E501
:type: int
"""
self._balance = balance
@property
def balance_slots(self):
"""Gets the balance_slots of this GeolocationResponseSchema. # noqa: E501
This represents the remaining balance of device slots. Requests that return error are not charged and do not affect balance. If -1 is returned, then observe it as an error while calculating slots balance. This element will only exist if you are on a device plan. # noqa: E501
:return: The balance_slots of this GeolocationResponseSchema. # noqa: E501
:rtype: int
"""
return self._balance_slots
@balance_slots.setter
def balance_slots(self, balance_slots):
"""Sets the balance_slots of this GeolocationResponseSchema.
This represents the remaining balance of device slots. Requests that return error are not charged and do not affect balance. If -1 is returned, then observe it as an error while calculating slots balance. This element will only exist if you are on a device plan. # noqa: E501
:param balance_slots: The balance_slots of this GeolocationResponseSchema. # noqa: E501
:type: int
"""
self._balance_slots = balance_slots
@property
def lat(self):
"""Gets the lat of this GeolocationResponseSchema. # noqa: E501
The latitude representing the location # noqa: E501
:return: The lat of this GeolocationResponseSchema. # noqa: E501
:rtype: float
"""
return self._lat
@lat.setter
def lat(self, lat):
"""Sets the lat of this GeolocationResponseSchema.
The latitude representing the location # noqa: E501
:param lat: The lat of this GeolocationResponseSchema. # noqa: E501
:type: float
"""
self._lat = lat
@property
def lon(self):
"""Gets the lon of this GeolocationResponseSchema. # noqa: E501
The longitude representing the location # noqa: E501
:return: The lon of this GeolocationResponseSchema. # noqa: E501
:rtype: float
"""
return self._lon
@lon.setter
def lon(self, lon):
"""Sets the lon of this GeolocationResponseSchema.
The longitude representing the location # noqa: E501
:param lon: The lon of this GeolocationResponseSchema. # noqa: E501
:type: float
"""
self._lon = lon
@property
def accuracy(self):
"""Gets the accuracy of this GeolocationResponseSchema. # noqa: E501
The accuracy of the position is returned in meters # noqa: E501
:return: The accuracy of this GeolocationResponseSchema. # noqa: E501
:rtype: int
"""
return self._accuracy
@accuracy.setter
def accuracy(self, accuracy):
"""Sets the accuracy of this GeolocationResponseSchema.
The accuracy of the position is returned in meters # noqa: E501
:param accuracy: The accuracy of this GeolocationResponseSchema. # noqa: E501
:type: int
"""
self._accuracy = accuracy
@property
def address(self):
"""Gets the address of this GeolocationResponseSchema. # noqa: E501
The physical address of the location # noqa: E501
:return: The address of this GeolocationResponseSchema. # noqa: E501
:rtype: str
"""
return self._address
@address.setter
def address(self, address):
"""Sets the address of this GeolocationResponseSchema.
The physical address of the location # noqa: E501
:param address: The address of this GeolocationResponseSchema. # noqa: E501
:type: str
"""
self._address = address
@property
def address_details(self):
"""Gets the address_details of this GeolocationResponseSchema. # noqa: E501
:return: The address_details of this GeolocationResponseSchema. # noqa: E501
:rtype: AddressDetailsSchema
"""
return self._address_details
@address_details.setter
def address_details(self, address_details):
"""Sets the address_details of this GeolocationResponseSchema.
:param address_details: The address_details of this GeolocationResponseSchema. # noqa: E501
:type: AddressDetailsSchema
"""
self._address_details = address_details
@property
def aged(self):
"""Gets the aged of this GeolocationResponseSchema. # noqa: E501
Shown when the location is based on a single measurement or those older than 90 days or is an LAC fallback # noqa: E501
:return: The aged of this GeolocationResponseSchema. # noqa: E501
:rtype: int
"""
return self._aged
@aged.setter
def aged(self, aged):
"""Sets the aged of this GeolocationResponseSchema.
Shown when the location is based on a single measurement or those older than 90 days or is an LAC fallback # noqa: E501
:param aged: The aged of this GeolocationResponseSchema. # noqa: E501
:type: int
"""
self._aged = aged
@property
def fallback(self):
"""Gets the fallback of this GeolocationResponseSchema. # noqa: E501
:return: The fallback of this GeolocationResponseSchema. # noqa: E501
:rtype: FallbackSchema
"""
return self._fallback
@fallback.setter
def fallback(self, fallback):
"""Sets the fallback of this GeolocationResponseSchema.
:param fallback: The fallback of this GeolocationResponseSchema. # noqa: E501
:type: FallbackSchema
"""
self._fallback = fallback
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, GeolocationResponseSchema):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| 30.772379 | 284 | 0.614279 | 1,351 | 12,032 | 5.387121 | 0.124352 | 0.059357 | 0.187414 | 0.158697 | 0.5856 | 0.508519 | 0.495191 | 0.381973 | 0.274938 | 0.203902 | 0 | 0.021302 | 0.305519 | 12,032 | 390 | 285 | 30.851282 | 0.849689 | 0.473321 | 0 | 0.080745 | 1 | 0 | 0.063355 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0 | 0.018634 | 0 | 0.31677 | 0.012422 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
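The `to_dict` in the generated model above walks `openapi_types` and recursively serializes nested models and lists. A minimal stand-alone sketch of the same traversal, using a toy `Model` class (hypothetical, not the generated schema) and plain dict iteration in place of `six.iteritems`:

```python
class Model:
    """Toy stand-in for a generated OpenAPI model (hypothetical, for illustration)."""
    openapi_types = {"lat": "float", "lon": "float", "tags": "list"}

    def __init__(self, lat, lon, tags):
        self._lat, self._lon, self._tags = lat, lon, tags

    lat = property(lambda self: self._lat)
    lon = property(lambda self: self._lon)
    tags = property(lambda self: self._tags)

    def to_dict(self):
        # Same traversal as the generated code: lists and nested models recurse,
        # everything else is copied as-is
        result = {}
        for attr in self.openapi_types:  # plain iteration instead of six.iteritems
            value = getattr(self, attr)
            if isinstance(value, list):
                result[attr] = [x.to_dict() if hasattr(x, "to_dict") else x for x in value]
            elif hasattr(value, "to_dict"):
                result[attr] = value.to_dict()
            else:
                result[attr] = value
        return result

print(Model(52.5, 13.4, ["geo"]).to_dict())
```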
50aeb0aea8a92d4860b4fde4a788e3fbd2b86353 | 5,664 | py | Python | AES_DES3.py | cjr0707/encryption | 87b8900d8e28f74ba8a94322d94e2078455d6e16 | [
"MIT"
] | 1 | 2020-06-07T01:36:01.000Z | 2020-06-07T01:36:01.000Z | AES_DES3.py | cjr0707/encryption | 87b8900d8e28f74ba8a94322d94e2078455d6e16 | [
"MIT"
] | null | null | null | AES_DES3.py | cjr0707/encryption | 87b8900d8e28f74ba8a94322d94e2078455d6e16 | [
"MIT"
] | 1 | 2020-06-07T01:36:03.000Z | 2020-06-07T01:36:03.000Z | import base64
import gzip
from binascii import b2a_hex, a2b_hex
from io import BytesIO
from Crypto.Cipher import AES
from Crypto.Cipher import DES
from Crypto.Cipher import DES3
from hexdump import hexdump
# Pad: if the string's length is not a multiple of 16, pad it with NULs up to a multiple of 16
def add_to_16(value):
while len(value) % 16 != 0:
value += '\0'
return str.encode(value) # returns bytes
class EncryptDate:
def __init__(self, type, key, iv=None):
    # Truncation helper: strips the PKCS#7-style padding characters
    # (needed by decrypt() for every cipher type, not only AES)
    self.unpad = lambda data: data[0:-ord(data[-1])]
    self.key = key  # store the key
    if type == 'AES':
        if iv:
            print(f"IV <{iv}> supplied: AES in CBC mode\n")
            self.iv = iv
            self.length = AES.block_size  # block size
            self.aes = AES.new(self.key, AES.MODE_CBC, self.iv)  # AES cipher, CBC mode
        else:
            print("No IV: AES in ECB mode\n")
            self.length = AES.block_size  # block size
            # pad the key out to a multiple of 16 bytes before building the cipher
            self.aes = AES.new(add_to_16(self.key), AES.MODE_ECB)  # AES cipher, ECB mode
    elif type == 'DES3':
        if iv:
            print(f"IV <{iv}> supplied: 3DES in CBC mode\n")
            self.iv = iv
            self.length = DES3.block_size  # block size
            self.aes = DES3.new(self.key, DES3.MODE_CBC, self.iv)  # 3DES cipher, CBC mode
        else:
            print("No IV: 3DES in ECB mode\n")
            self.length = DES3.block_size  # block size
            self.aes = DES3.new(self.key, DES3.MODE_ECB)  # 3DES cipher, ECB mode
    elif type == 'DES':
        if iv:
            print(f"IV <{iv}> supplied: DES in CBC mode\n")
            self.iv = iv
            self.length = DES.block_size  # block size
            self.aes = DES.new(self.key, DES.MODE_CBC, self.iv)  # DES cipher, CBC mode
        else:
            print("No IV: DES in ECB mode\n")
            self.length = DES.block_size  # block size
            self.aes = DES.new(self.key, DES.MODE_ECB)  # DES cipher, ECB mode
def base64_decode(self, text):
return base64.b64decode(text)
def pad(self, text):
"""
#填充函数,使被加密数据的字节码长度是block_size的整数倍
"""
count = len(text.encode('utf-8'))
add = self.length - (count % self.length)
entext = text + (chr(add) * add)
return entext
# base64 output
def encrypt(self, encrData): # encrypt
res = self.aes.encrypt(self.pad(encrData).encode("utf8"))
print(hexdump(res))
print(len(res))
msg = str(base64.b64encode(res), encoding="utf8")
return msg
def decrypt(self, decrData): # decrypt
res = base64.b64decode(decrData.encode("utf8"))
print(len(res))
# print(len(res))
# print(res)
# print(b2a_hex(res))
# print(res)
print(self.aes.decrypt(res))
msg = self.aes.decrypt(res).decode()
return self.unpad(msg)
# hex output
def encrypt_hex(self, encrData): # encrypt
res = self.aes.encrypt(self.pad(encrData).encode())
print(hexdump(res))
# msg = str(base64.b64encode(res), encoding="utf8")
return b2a_hex(res).decode()
def decrypt_hex(self, decrData): # decrypt
res = a2b_hex(decrData)
plain_text = self.aes.decrypt(res)
#
print(len(plain_text))
print(b2a_hex(plain_text).decode())
print("解密bytes:", hexdump(plain_text))
# return self.unpad(plain_text.decode())
return plain_text.decode()
def gzip_decompress(buf: bytes):
buf = BytesIO(buf)
gf = gzip.GzipFile(fileobj=buf)
content_bytes = gf.read()
return content_bytes
if __name__ == "__main__":
# a = "FIIBIckYL8OFPUp25VbKgZpJauHR7a6jlit/Z75TEXUWvlropB3Vt0OYZ5mFxCbB+qzdvs+GIBGhbJIzRdlFnQ=="
# print(base64.b64decode(a.encode()))
# # a="2daaf5f45222d792fb3eebaa4aa274c9122df992f4e7cd2cd8331ab5d48ff11e116a634d73ac1e88bbe5a5d219f4468a7d1357f80c14f14cc7e289011abdc872663ff0a9c2ba6b756de303fc8f6120966f2c442dd619def4748e6f016b9d5fbb6dc46ccd49008f36f9188e502d35f36cf06c75842c78a0d78e64a1e9cc5c1a59c378e37ac08cd491086fafd33a054ccc740abb8361e261c976843d4bb91d90d955926e7b0b7e38d6ec644bf653364cd6f628f5946322759ab2ec2cdf35ac3cd3a80ee3c061f589f72f5a15cedee0a9ac7a14d3a9e16adae458c2aa67be0936d8afc43c9717bd767f5a74fd962ee964b6689a832193972b54fbf92232b60f46a26c044fca93f1a64eba815d2b0c6a501484449fe62d9a7e9342048b720813c7f0a472fc564f78aac1daceafb2264f1c27fc55ecb3ca57628958b236ff38c2e54d3c262c9da0e5c87e6389f5c2830fc4d697453aafd5479fda09830ae4fc9397a72cdc76ed9d8cadc440cad2fa7f3c049707e0a30b075059b0ed1f4d85fdff3c9b3c33205590213c2e97c6efca1e5a986ce415a61114dd974bc5ec23c9fef610975f777b89f4488ae0aacb7ced893855773cdca261884073b999c789fb8658c7e6542b6ee7b374cf51ceee2bd1087728886385269856396cac17af9d67820279dbbe63547296fdc4a8508b0e10b91b6478fed4b41b576b4e0e06e83a3e7ece365b5d052df349170d1eec1ae58bda691238"
# key = a2b_hex("f72ccae4dc732149f0ab817e45f84744")
# # print(key)
# key = '8e963b3c738748e9'.encode()
# iv = a2b_hex("30313032303330343035303630373038")
# print(iv)
# aes = EncryptDate('AES', key)
# data = aes.decrypt(a)
# print(data)
key = 'abcdefgabcdefg12'
aes = EncryptDate('AES', key)
ret_aes_encrypt = aes.encrypt('Cluo667788')
print(ret_aes_encrypt)
# b6vmwH18ZrUmyqUe0key+w==
| 40.457143 | 1,069 | 0.631532 | 529 | 5,664 | 6.659735 | 0.230624 | 0.023843 | 0.028953 | 0.035765 | 0.244394 | 0.244394 | 0.237865 | 0.220267 | 0.220267 | 0.19472 | 0 | 0.192419 | 0.268715 | 5,664 | 139 | 1,070 | 40.748201 | 0.658136 | 0.343044 | 0 | 0.288889 | 0 | 0 | 0.060382 | 0.034178 | 0 | 1 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.088889 | 0.011111 | 0.288889 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
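The `pad`/`unpad` pair in `EncryptDate` implements PKCS#7-style padding: it appends `add` copies of `chr(add)` until the UTF-8 byte length reaches a multiple of the block size. A stdlib-only sketch of the round trip (no pycryptodome needed; `BLOCK = 16` assumes the AES block size):

```python
BLOCK = 16  # assumed AES block size in bytes

def pad(text: str) -> str:
    # Append `add` copies of chr(add) so the UTF-8 length becomes a multiple of BLOCK;
    # a full block of padding is added when the input is already aligned (PKCS#7 style)
    count = len(text.encode("utf-8"))
    add = BLOCK - (count % BLOCK)
    return text + chr(add) * add

def unpad(text: str) -> str:
    # The last character encodes how many padding characters to strip
    return text[:-ord(text[-1])]

padded = pad("Cluo667788")
print(repr(padded), len(padded.encode("utf-8")))
```

Note this padding is applied to the text before encryption and stripped after decryption, which is why `decrypt` calls `self.unpad` on the decoded plaintext.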
50b6528e0aeae0e4ae01a3a0e766baa6a0245047 | 2,230 | py | Python | docs/csveg.py | lessen/src | bc09a33d22e942214df5608806b11370e21ce7e8 | [
"MIT"
] | null | null | null | docs/csveg.py | lessen/src | bc09a33d22e942214df5608806b11370e21ce7e8 | [
"MIT"
] | 1 | 2016-12-28T22:44:52.000Z | 2016-12-28T22:44:52.000Z | docs/csveg.py | lessen/src | bc09a33d22e942214df5608806b11370e21ce7e8 | [
"MIT"
] | null | null | null | """
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-55634425-2', 'auto');
ga('send', 'pageview');
</script>
<img
src="https://avatars0.githubusercontent.com/u/23156192?v=3&s=200"
align=left
width=120>
<br> <br>
[home](http://ttv1.github.io) | [discuss](#discussion) | [report bug](https://github.com/ttv1/src/issues)
<br clear=all>
_________________
from csv import csv
from eg import eg
@eg
def _csvFromString():
"demo of read from string..."
stringOfData="""a,b,
c,d
1,2.0,3,x
10,20,30,y"""
for row in csv(stringOfData, header=True):
print(row)
print(row)
assert row == [10, 20.0, 30, 'y']
@eg
def _csvFromSimpleFile():
"Reading a few Ascii rows."
for row in csv(file="data/weather.csv"):
print(row)
assert row == ['rainy', 71.0, 'TRUE', 'no']
@eg
def _csvFromLargeFile(n=0):
"Reading over 50MB+ of data."
print("\nPlz wait a few seconds while I read 100MB+ of data...")
for row in csv(file="data/weatherLarge.csv"):
n +=1
print(n,row)
assert row == ['rainy', 71.0, 'TRUE', 'no']
assert n == 1835009
@eg
def _csvFromZip(n=0):
"Reading over 50MB of data."
for row in csv(file="weatherLarge.csv",
zip="data/data.zip"):
n += 1
assert row == ['rainy', 71.0, 91,'TRUE', 'no']
assert n == 1835009
if __name__ == "__main__": eg()
"""
____
<img align=right
src="https://raw.githubusercontent.com/timm/timm.github.io/master/timm.png"
width=170>
## Copyleft
Copyright © 2016 Tim Menzies <tim@menzies.us>
This program is free software. It comes without any warranty, to
the extent permitted by applicable law. You can redistribute it
and/or modify it under the terms of the Do What The F*ck You Want
To Public License, Version 2, as published by Sam Hocevar. See
[http://www.wtfpl.net](http://www.wtfpl.net) for more details.
Share and enjoy.
"""
| 24.505495 | 119 | 0.641704 | 356 | 2,230 | 3.926966 | 0.497191 | 0.007153 | 0.02289 | 0.031474 | 0.148784 | 0.112303 | 0.095851 | 0.037196 | 0 | 0 | 0 | 0.049322 | 0.172646 | 2,230 | 90 | 120 | 24.777778 | 0.708401 | 0 | 0 | 0.311111 | 0 | 0.022222 | 0.654584 | 0.110874 | 0 | 0 | 0 | 0 | 0.133333 | 0 | null | null | 0 | 0 | null | null | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
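`csv` here is the repo's local `csv.csv` helper (imported at the top of the file), not the standard library module; judging by the assertions, it coerces numeric fields to `int`/`float`. A rough stdlib equivalent under that assumption (unlike the helper, this tries `int` before `float`, so `"20"` becomes `20` rather than `20.0`):

```python
import csv as stdcsv
import io

def coerce(cell):
    # Try int, then float, else keep the string
    for cast in (int, float):
        try:
            return cast(cell)
        except ValueError:
            pass
    return cell

data = "a,b,c,d\n1,2.0,3,x\n10,20,30,y"
rows = [[coerce(c) for c in row] for row in stdcsv.reader(io.StringIO(data))]
for row in rows[1:]:
    print(row)
```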
50d796597cfd70a711d175daee1a3a663e4d178c | 236 | py | Python | exercises/ex9.py | gravyboat/python-exercises | 50162a9e6f3d51fbb2c15ed08fcecba810d61338 | [
"MIT"
] | null | null | null | exercises/ex9.py | gravyboat/python-exercises | 50162a9e6f3d51fbb2c15ed08fcecba810d61338 | [
"MIT"
] | null | null | null | exercises/ex9.py | gravyboat/python-exercises | 50162a9e6f3d51fbb2c15ed08fcecba810d61338 | [
"MIT"
] | null | null | null | #!/usr/bin/python
def is_member(item_to_check, list_to_check):
'''
Checks if an item is in a list
'''
for list_item in list_to_check:
if item_to_check == list_item:
return True
return False
| 16.857143 | 44 | 0.614407 | 37 | 236 | 3.621622 | 0.513514 | 0.208955 | 0.164179 | 0.223881 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.288136 | 236 | 13 | 45 | 18.153846 | 0.797619 | 0.199153 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
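`is_member` above reimplements Python's built-in `in` membership test with an explicit loop; a quick check that the two agree:

```python
def is_member(item_to_check, list_to_check):
    for list_item in list_to_check:
        if item_to_check == list_item:
            return True
    return False

# The built-in `in` operator performs the same equality-based membership test
for item, seq in [(3, [1, 2, 3]), ("z", ["a", "b"]), (None, [])]:
    assert is_member(item, seq) == (item in seq)
print("is_member agrees with `in`")
```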
50f60390d4df0f898540860ae87b4514ab7fa0ed | 21,778 | py | Python | lib/googlecloudsdk/third_party/apis/apikeys/v2/apikeys_v2_messages.py | google-cloud-sdk-unofficial/google-cloud-sdk | 2a48a04df14be46c8745050f98768e30474a1aac | [
"Apache-2.0"
] | 2 | 2019-11-10T09:17:07.000Z | 2019-12-18T13:44:08.000Z | lib/googlecloudsdk/third_party/apis/apikeys/v2/apikeys_v2_messages.py | google-cloud-sdk-unofficial/google-cloud-sdk | 2a48a04df14be46c8745050f98768e30474a1aac | [
"Apache-2.0"
] | null | null | null | lib/googlecloudsdk/third_party/apis/apikeys/v2/apikeys_v2_messages.py | google-cloud-sdk-unofficial/google-cloud-sdk | 2a48a04df14be46c8745050f98768e30474a1aac | [
"Apache-2.0"
] | 1 | 2020-07-25T01:40:19.000Z | 2020-07-25T01:40:19.000Z | """Generated message classes for apikeys version v2.
Manages the API keys associated with developer projects.
"""
# NOTE: This file is autogenerated and should not be edited by hand.
from __future__ import absolute_import
from apitools.base.protorpclite import messages as _messages
from apitools.base.py import encoding
from apitools.base.py import extra_types
package = 'apikeys'
class ApikeysKeysLookupKeyRequest(_messages.Message):
r"""A ApikeysKeysLookupKeyRequest object.
Fields:
keyString: Required. Finds the project that owns the key string value.
"""
keyString = _messages.StringField(1)
class ApikeysOperationsGetRequest(_messages.Message):
r"""A ApikeysOperationsGetRequest object.
Fields:
name: The name of the operation resource.
"""
name = _messages.StringField(1, required=True)
class ApikeysProjectsLocationsKeysCloneRequest(_messages.Message):
r"""A ApikeysProjectsLocationsKeysCloneRequest object.
Fields:
name: Required. The resource name of the API key to be cloned in the same
project.
v2CloneKeyRequest: A V2CloneKeyRequest resource to be passed as the
request body.
"""
name = _messages.StringField(1, required=True)
v2CloneKeyRequest = _messages.MessageField('V2CloneKeyRequest', 2)
class ApikeysProjectsLocationsKeysCreateRequest(_messages.Message):
r"""A ApikeysProjectsLocationsKeysCreateRequest object.
Fields:
keyId: User specified key id (optional). If specified, it will become the
final component of the key resource name. The id must be unique within
the project, must conform with RFC-1034, is restricted to lower-cased
letters, and has a maximum length of 63 characters. In other words, the
id must match the regular expression: `[a-z]([a-z0-9-]{0,61}[a-z0-9])?`.
The id must NOT be a UUID-like string.
parent: Required. The project in which the API key is created.
v2Key: A V2Key resource to be passed as the request body.
"""
keyId = _messages.StringField(1)
parent = _messages.StringField(2, required=True)
v2Key = _messages.MessageField('V2Key', 3)
class ApikeysProjectsLocationsKeysDeleteRequest(_messages.Message):
r"""A ApikeysProjectsLocationsKeysDeleteRequest object.
Fields:
etag: Optional. The etag known to the client for the expected state of the
key. This is to be used for optimistic concurrency.
name: Required. The resource name of the API key to be deleted.
"""
etag = _messages.StringField(1)
name = _messages.StringField(2, required=True)
class ApikeysProjectsLocationsKeysGetKeyStringRequest(_messages.Message):
r"""A ApikeysProjectsLocationsKeysGetKeyStringRequest object.
Fields:
name: Required. The resource name of the API key to be retrieved.
"""
name = _messages.StringField(1, required=True)
class ApikeysProjectsLocationsKeysGetRequest(_messages.Message):
r"""A ApikeysProjectsLocationsKeysGetRequest object.
Fields:
name: Required. The resource name of the API key to get.
"""
name = _messages.StringField(1, required=True)
class ApikeysProjectsLocationsKeysListRequest(_messages.Message):
r"""A ApikeysProjectsLocationsKeysListRequest object.
Fields:
filter: Optional. Only list keys that conform to the specified filter. The
allowed filter strings are `state:ACTIVE` and `state:DELETED`. By
default, ListKeys returns only active keys.
pageSize: Optional. Specifies the maximum number of results to be returned
at a time.
pageToken: Optional. Requests a specific page of results.
parent: Required. Lists all API keys associated with this project.
"""
filter = _messages.StringField(1)
pageSize = _messages.IntegerField(2, variant=_messages.Variant.INT32)
pageToken = _messages.StringField(3)
parent = _messages.StringField(4, required=True)
class ApikeysProjectsLocationsKeysPatchRequest(_messages.Message):
r"""A ApikeysProjectsLocationsKeysPatchRequest object.
Fields:
name: Output only. The resource name of the key. The `name` has the form:
`projects//locations/global/keys/`. For example: `projects/123456867718/
locations/global/keys/b7ff1f9f-8275-410a-94dd-3855ee9b5dd2` NOTE: Key is
a global resource; hence the only supported value for location is
`global`.
updateMask: The field mask specifies which fields to be updated as part of
this request. All other fields are ignored. Mutable fields are:
`display_name` and `restrictions`. If an update mask is not provided,
the service treats it as an implied mask equivalent to all allowed
fields that are set on the wire. If the field mask has a special value
"*", the service treats it equivalent to replace all allowed mutable
fields.
v2Key: A V2Key resource to be passed as the request body.
"""
name = _messages.StringField(1, required=True)
updateMask = _messages.StringField(2)
v2Key = _messages.MessageField('V2Key', 3)
class ApikeysProjectsLocationsKeysUndeleteRequest(_messages.Message):
r"""A ApikeysProjectsLocationsKeysUndeleteRequest object.
Fields:
name: Required. The resource name of the API key to be undeleted.
v2UndeleteKeyRequest: A V2UndeleteKeyRequest resource to be passed as the
request body.
"""
name = _messages.StringField(1, required=True)
v2UndeleteKeyRequest = _messages.MessageField('V2UndeleteKeyRequest', 2)
class Operation(_messages.Message):
r"""This resource represents a long-running operation that is the result of
a network API call.
Messages:
MetadataValue: Service-specific metadata associated with the operation. It
typically contains progress information and common metadata such as
create time. Some services might not provide such metadata. Any method
that returns a long-running operation should document the metadata type,
if any.
ResponseValue: The normal response of the operation in case of success. If
the original method returns no data on success, such as `Delete`, the
response is `google.protobuf.Empty`. If the original method is standard
`Get`/`Create`/`Update`, the response should be the resource. For other
methods, the response should have the type `XxxResponse`, where `Xxx` is
the original method name. For example, if the original method name is
`TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
Fields:
done: If the value is `false`, it means the operation is still in
progress. If `true`, the operation is completed, and either `error` or
`response` is available.
error: The error result of the operation in case of failure or
cancellation.
metadata: Service-specific metadata associated with the operation. It
typically contains progress information and common metadata such as
create time. Some services might not provide such metadata. Any method
that returns a long-running operation should document the metadata type,
if any.
name: The server-assigned name, which is only unique within the same
service that originally returns it. If you use the default HTTP mapping,
the `name` should be a resource name ending with
`operations/{unique_id}`.
response: The normal response of the operation in case of success. If the
original method returns no data on success, such as `Delete`, the
response is `google.protobuf.Empty`. If the original method is standard
`Get`/`Create`/`Update`, the response should be the resource. For other
methods, the response should have the type `XxxResponse`, where `Xxx` is
the original method name. For example, if the original method name is
`TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"""
@encoding.MapUnrecognizedFields('additionalProperties')
class MetadataValue(_messages.Message):
r"""Service-specific metadata associated with the operation. It typically
contains progress information and common metadata such as create time.
Some services might not provide such metadata. Any method that returns a
long-running operation should document the metadata type, if any.
Messages:
AdditionalProperty: An additional property for a MetadataValue object.
Fields:
additionalProperties: Properties of the object. Contains field @type
with type URL.
"""
class AdditionalProperty(_messages.Message):
r"""An additional property for a MetadataValue object.
Fields:
key: Name of the additional property.
value: A extra_types.JsonValue attribute.
"""
key = _messages.StringField(1)
value = _messages.MessageField('extra_types.JsonValue', 2)
additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)
@encoding.MapUnrecognizedFields('additionalProperties')
class ResponseValue(_messages.Message):
r"""The normal response of the operation in case of success. If the
original method returns no data on success, such as `Delete`, the response
is `google.protobuf.Empty`. If the original method is standard
`Get`/`Create`/`Update`, the response should be the resource. For other
methods, the response should have the type `XxxResponse`, where `Xxx` is
the original method name. For example, if the original method name is
`TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
Messages:
AdditionalProperty: An additional property for a ResponseValue object.
Fields:
additionalProperties: Properties of the object. Contains field @type
with type URL.
"""
class AdditionalProperty(_messages.Message):
r"""An additional property for a ResponseValue object.
Fields:
key: Name of the additional property.
value: A extra_types.JsonValue attribute.
"""
key = _messages.StringField(1)
value = _messages.MessageField('extra_types.JsonValue', 2)
additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)
done = _messages.BooleanField(1)
error = _messages.MessageField('Status', 2)
metadata = _messages.MessageField('MetadataValue', 3)
name = _messages.StringField(4)
response = _messages.MessageField('ResponseValue', 5)
class StandardQueryParameters(_messages.Message):
r"""Query parameters accepted by all methods.
Enums:
FXgafvValueValuesEnum: V1 error format.
AltValueValuesEnum: Data format for response.
Fields:
f__xgafv: V1 error format.
access_token: OAuth access token.
alt: Data format for response.
callback: JSONP
fields: Selector specifying which fields to include in a partial response.
key: API key. Your API key identifies your project and provides you with
API access, quota, and reports. Required unless you provide an OAuth 2.0
token.
oauth_token: OAuth 2.0 token for the current user.
prettyPrint: Returns response with indentations and line breaks.
quotaUser: Available to use for quota purposes for server-side
applications. Can be any arbitrary string assigned to a user, but should
not exceed 40 characters.
trace: A tracing token of the form "token:<tokenid>" to include in api
requests.
uploadType: Legacy upload protocol for media (e.g. "media", "multipart").
upload_protocol: Upload protocol for media (e.g. "raw", "multipart").
"""
class AltValueValuesEnum(_messages.Enum):
r"""Data format for response.
Values:
json: Responses with Content-Type of application/json
media: Media download with context-dependent Content-Type
proto: Responses with Content-Type of application/x-protobuf
"""
json = 0
media = 1
proto = 2
class FXgafvValueValuesEnum(_messages.Enum):
r"""V1 error format.
Values:
_1: v1 error format
_2: v2 error format
"""
_1 = 0
_2 = 1
f__xgafv = _messages.EnumField('FXgafvValueValuesEnum', 1)
access_token = _messages.StringField(2)
alt = _messages.EnumField('AltValueValuesEnum', 3, default='json')
callback = _messages.StringField(4)
fields = _messages.StringField(5)
key = _messages.StringField(6)
oauth_token = _messages.StringField(7)
prettyPrint = _messages.BooleanField(8, default=True)
quotaUser = _messages.StringField(9)
trace = _messages.StringField(10)
uploadType = _messages.StringField(11)
upload_protocol = _messages.StringField(12)
class Status(_messages.Message):
r"""The `Status` type defines a logical error model that is suitable for
different programming environments, including REST APIs and RPC APIs. It is
used by [gRPC](https://github.com/grpc). Each `Status` message contains
three pieces of data: error code, error message, and error details. You can
find out more about this error model and how to work with it in the [API
Design Guide](https://cloud.google.com/apis/design/errors).
Messages:
DetailsValueListEntry: A DetailsValueListEntry object.
Fields:
code: The status code, which should be an enum value of google.rpc.Code.
details: A list of messages that carry the error details. There is a
common set of message types for APIs to use.
message: A developer-facing error message, which should be in English. Any
user-facing error message should be localized and sent in the
google.rpc.Status.details field, or localized by the client.
"""
@encoding.MapUnrecognizedFields('additionalProperties')
class DetailsValueListEntry(_messages.Message):
r"""A DetailsValueListEntry object.
Messages:
AdditionalProperty: An additional property for a DetailsValueListEntry
object.
Fields:
additionalProperties: Properties of the object. Contains field @type
with type URL.
"""
class AdditionalProperty(_messages.Message):
r"""An additional property for a DetailsValueListEntry object.
Fields:
key: Name of the additional property.
value: A extra_types.JsonValue attribute.
"""
key = _messages.StringField(1)
value = _messages.MessageField('extra_types.JsonValue', 2)
additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)
code = _messages.IntegerField(1, variant=_messages.Variant.INT32)
details = _messages.MessageField('DetailsValueListEntry', 2, repeated=True)
message = _messages.StringField(3)
class V2AndroidApplication(_messages.Message):
r"""Identifier of an Android application for key use.
Fields:
packageName: The package name of the application.
sha1Fingerprint: The SHA1 fingerprint of the application. For example,
both sha1 formats are acceptable :
DA:39:A3:EE:5E:6B:4B:0D:32:55:BF:EF:95:60:18:90:AF:D8:07:09 or
DA39A3EE5E6B4B0D3255BFEF95601890AFD80709. Output format is the latter.
"""
packageName = _messages.StringField(1)
sha1Fingerprint = _messages.StringField(2)
class V2AndroidKeyRestrictions(_messages.Message):
r"""The Android apps that are allowed to use the key.
Fields:
allowedApplications: A list of Android applications that are allowed to
make API calls with this key.
"""
allowedApplications = _messages.MessageField('V2AndroidApplication', 1, repeated=True)
class V2ApiTarget(_messages.Message):
r"""A restriction for a specific service and optionally one or multiple
specific methods. Both fields are case insensitive.
Fields:
methods: Optional. List of one or more methods that can be called. If
empty, all methods for the service are allowed. A wildcard (*) can be
used as the last symbol. Valid examples:
`google.cloud.translate.v2.TranslateService.GetSupportedLanguage`
`TranslateText` `Get*` `translate.googleapis.com.Get*`
service: The service for this restriction. It should be the canonical
service name, for example: `translate.googleapis.com`. You can use
[`gcloud services list`](/sdk/gcloud/reference/services/list) to get a
list of services that are enabled in the project.
"""
methods = _messages.StringField(1, repeated=True)
service = _messages.StringField(2)
class V2BrowserKeyRestrictions(_messages.Message):
r"""The HTTP referrers (websites) that are allowed to use the key.
Fields:
allowedReferrers: A list of regular expressions for the referrer URLs that
are allowed to make API calls with this key.
"""
allowedReferrers = _messages.StringField(1, repeated=True)
class V2CloneKeyRequest(_messages.Message):
r"""Request message for `CloneKey` method.
Fields:
keyId: User specified key id (optional). If specified, it will become the
final component of the key resource name. The id must be unique within
the project, must conform with RFC-1034, is restricted to lower-cased
letters, and has a maximum length of 63 characters. In other words, the
id must match the regular expression: `[a-z]([a-z0-9-]{0,61}[a-z0-9])?`.
The id must NOT be a UUID-like string.
"""
keyId = _messages.StringField(1)
class V2GetKeyStringResponse(_messages.Message):
r"""Response message for `GetKeyString` method.
Fields:
keyString: An encrypted and signed value of the key.
"""
keyString = _messages.StringField(1)
class V2IosKeyRestrictions(_messages.Message):
r"""The iOS apps that are allowed to use the key.
Fields:
allowedBundleIds: A list of bundle IDs that are allowed when making API
calls with this key.
"""
allowedBundleIds = _messages.StringField(1, repeated=True)
class V2Key(_messages.Message):
r"""The representation of a key managed by the API Keys API.
Fields:
createTime: Output only. A timestamp identifying the time this key was
originally created.
deleteTime: Output only. A timestamp when this key was deleted. If the
resource is not deleted, this must be empty.
displayName: Human-readable display name of this key that you can modify.
The maximum length is 63 characters.
etag: Output only. A checksum computed by the server based on the current
value of the Key resource. This may be sent on update and delete
requests to ensure the client has an up-to-date value before proceeding.
keyString: Output only. An encrypted and signed value held by this key.
This field can be accessed only through the `GetKeyString` method.
name: Output only. The resource name of the key. The `name` has the form:
`projects//locations/global/keys/`. For example: `projects/123456867718/
locations/global/keys/b7ff1f9f-8275-410a-94dd-3855ee9b5dd2` NOTE: Key is
a global resource; hence the only supported value for location is
`global`.
restrictions: Key restrictions.
uid: Output only. Unique id in UUID4 format.
updateTime: Output only. A timestamp identifying the time this key was
last updated.
"""
createTime = _messages.StringField(1)
deleteTime = _messages.StringField(2)
displayName = _messages.StringField(3)
etag = _messages.StringField(4)
keyString = _messages.StringField(5)
name = _messages.StringField(6)
restrictions = _messages.MessageField('V2Restrictions', 7)
uid = _messages.StringField(8)
updateTime = _messages.StringField(9)
class V2ListKeysResponse(_messages.Message):
r"""Response message for `ListKeys` method.
Fields:
keys: A list of API keys.
nextPageToken: The pagination token for the next page of results.
"""
keys = _messages.MessageField('V2Key', 1, repeated=True)
nextPageToken = _messages.StringField(2)
class V2LookupKeyResponse(_messages.Message):
r"""Response message for `LookupKey` method.
Fields:
name: The resource name of the API key. If the API key has been purged,
resource name is empty.
parent: The project that owns the key with the value specified in the
request.
"""
name = _messages.StringField(1)
parent = _messages.StringField(2)
class V2Restrictions(_messages.Message):
r"""Describes the restrictions on the key.
Fields:
androidKeyRestrictions: The Android apps that are allowed to use the key.
apiTargets: A restriction for a specific service and optionally one or
more specific methods. Requests are allowed if they match any of these
restrictions. If no restrictions are specified, all targets are allowed.
browserKeyRestrictions: The HTTP referrers (websites) that are allowed to
use the key.
iosKeyRestrictions: The iOS apps that are allowed to use the key.
serverKeyRestrictions: The IP addresses of callers that are allowed to use
the key.
"""
androidKeyRestrictions = _messages.MessageField('V2AndroidKeyRestrictions', 1)
apiTargets = _messages.MessageField('V2ApiTarget', 2, repeated=True)
browserKeyRestrictions = _messages.MessageField('V2BrowserKeyRestrictions', 3)
iosKeyRestrictions = _messages.MessageField('V2IosKeyRestrictions', 4)
serverKeyRestrictions = _messages.MessageField('V2ServerKeyRestrictions', 5)
class V2ServerKeyRestrictions(_messages.Message):
r"""The IP addresses of callers that are allowed to use the key.
Fields:
allowedIps: A list of the caller IP addresses that are allowed to make API
calls with this key.
"""
allowedIps = _messages.StringField(1, repeated=True)
class V2UndeleteKeyRequest(_messages.Message):
r"""Request message for `UndeleteKey` method."""
encoding.AddCustomJsonFieldMapping(
StandardQueryParameters, 'f__xgafv', '$.xgafv')
encoding.AddCustomJsonEnumMapping(
StandardQueryParameters.FXgafvValueValuesEnum, '_1', '1')
encoding.AddCustomJsonEnumMapping(
StandardQueryParameters.FXgafvValueValuesEnum, '_2', '2')
| 37.548276 | 89 | 0.737992 | 2,738 | 21,778 | 5.816654 | 0.188824 | 0.058458 | 0.032149 | 0.012809 | 0.407321 | 0.390933 | 0.356838 | 0.336808 | 0.325631 | 0.324878 | 0 | 0.016406 | 0.188309 | 21,778 | 579 | 90 | 37.613126 | 0.884539 | 0.654743 | 0 | 0.176829 | 1 | 0 | 0.073656 | 0.02651 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02439 | 0 | 0.646341 | 0.006098 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
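The `V2Key` / `V2Restrictions` message definitions above describe the JSON shape of an API key resource. As a hedged sketch (field names taken from the docstrings above; all concrete values invented for illustration), the serialized form of a key with server-IP restrictions looks like:

```python
# Hypothetical example payload for a V2Key with server-key restrictions.
# Field names follow the message definitions above; the values are made up.
example_key = {
    "name": "projects/123456867718/locations/global/keys/b7ff1f9f-8275-410a-94dd-3855ee9b5dd2",
    "displayName": "example-key",
    "restrictions": {
        "serverKeyRestrictions": {
            # allowedIps is a repeated string field (V2ServerKeyRestrictions).
            "allowedIps": ["198.51.100.0/24", "203.0.113.7"],
        },
    },
}

def allowed_ips(key):
    """Pull the allowedIps list out of a key dict, defaulting to []."""
    return (
        key.get("restrictions", {})
        .get("serverKeyRestrictions", {})
        .get("allowedIps", [])
    )
```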
50f73f2aff191a691bccd663c46c94e906ea7cb3 | 629 | py | Python | puzzle_app/puzzle_app/routes.py | nathandaddio/puzzle_app | dcdda6ed598e4794a37fec9dff334aef4cc3ddb1 | [
"MIT"
] | null | null | null | puzzle_app/puzzle_app/routes.py | nathandaddio/puzzle_app | dcdda6ed598e4794a37fec9dff334aef4cc3ddb1 | [
"MIT"
] | null | null | null | puzzle_app/puzzle_app/routes.py | nathandaddio/puzzle_app | dcdda6ed598e4794a37fec9dff334aef4cc3ddb1 | [
"MIT"
] | null | null | null | def includeme(config):
# config.add_static_view('static', 'static', cache_max_age=3600)
config.add_route('home', '/')
config.add_route('hitori_boards', r'/hitori_boards')
config.add_route('hitori_board', r'/hitori_boards/{board_id:\d+}')
config.add_route('hitori_board_solve', r'/hitori_boards/{board_id:\d+}/solve')
config.add_route('hitori_board_clone', r'/hitori_boards/{board_id:\d+}/clone')
config.add_route('hitori_solves', r'/hitori_solves')
config.add_route('hitori_solve', r'/hitori_solves/{solve_id:\d+}')
config.add_route('hitori_cell_value', r'/hitori_cells/{cell_id:\d+}/value')
| 52.416667 | 82 | 0.717011 | 94 | 629 | 4.43617 | 0.255319 | 0.194245 | 0.268585 | 0.335731 | 0.386091 | 0.254197 | 0 | 0 | 0 | 0 | 0 | 0.007018 | 0.0938 | 629 | 11 | 83 | 57.181818 | 0.724561 | 0.098569 | 0 | 0 | 0 | 0 | 0.525664 | 0.284956 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
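The `{board_id:\d+}` placeholders in the routes above use Pyramid's route-pattern syntax, where the text after the colon is a regex constraint on that path segment. As a rough stand-in for Pyramid's actual matcher (an assumption for illustration, not Pyramid internals), the same constraint can be checked with `re`:

```python
import re

def route_matches(pattern, path):
    """Translate '{name:regex}' placeholders into named groups and match path."""
    regex = re.sub(
        r"\{(\w+):([^}]+)\}",
        lambda m: "(?P<%s>%s)" % (m.group(1), m.group(2)),
        pattern,
    )
    match = re.fullmatch(regex, path)
    return match.groupdict() if match else None
```

So `/hitori_boards/42` matches the `hitori_board` pattern while `/hitori_boards/abc` does not, because `\d+` only accepts digits.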
50fa39d2073900c17b249cc7b5d3f8b30f861a39 | 13,568 | py | Python | importio2/apicore.py | import-io/import-io-api-python | 5c838a357742233e714b2ccfd19d25c18531cfa3 | [
"Apache-2.0"
] | 1 | 2021-08-18T03:27:40.000Z | 2021-08-18T03:27:40.000Z | importio2/apicore.py | import-io/import-io-api-python | 5c838a357742233e714b2ccfd19d25c18531cfa3 | [
"Apache-2.0"
] | null | null | null | importio2/apicore.py | import-io/import-io-api-python | 5c838a357742233e714b2ccfd19d25c18531cfa3 | [
"Apache-2.0"
] | 2 | 2021-09-13T14:28:50.000Z | 2021-09-27T17:56:21.000Z | #
# Copyright 2017 Import.io
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import logging
import requests
"""
Low-level REST API calls that specify the inputs and invoke a REST call. Callers
are responsible for handling the Requests library's response object, which can be None.
"""
logger = logging.getLogger(__name__)
def extractor_get(api_key, guid):
"""
Fetches the contents of an Extractor object from an account
:param api_key: Import.io user API key
:param guid: Extractor identifier
:return: returns response object from requests library
"""
url = "https://store.import.io/store/extractor/{0}".format(guid)
querystring = {
"_apikey": api_key
}
headers = {
'cache-control': "no-cache",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}".format(url, headers, querystring))
return requests.request("GET", url, headers=headers, params=querystring)
def extractor_list(api_key, page=1, per_page=1000):
"""
Fetches the list of Extractors associated to an account
:param api_key: Import.io user API key
:param page: which page of the list to display.
:param per_page: Number of extractors per page.
:return: returns response object from requests library
"""
url = "https://store.import.io/store/extractor/_search"
querystring = {"_sort": "_meta.creationTimestamp",
"_mine": "true",
"q": "_missing_:archived OR archived:false",
"_page": page,
"_apikey": api_key,
"_perpage": per_page
}
headers = {
}
logger.debug("url: {0}, headers: {1}, querystring: {2}".format(url, headers, querystring))
return requests.request("GET", url, headers=headers, params=querystring)
def extractor_get_crawl_runs(api_key, guid, page, per_page):
"""
:param api_key: Import.io user API key
:param guid: Extractor identifier
:param page: Specific crawl run page to display
:param per_page: Number of crawl runs per page
:return: returns response object from requests library
"""
url = "https://store.import.io/store/crawlrun/_search"
querystring = {"_sort": "_meta.creationTimestamp",
"_page": page,
"_perpage": per_page,
"extractorId": guid,
"_apikey": api_key
}
headers = {
'cache-control': "no-cache",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}".format(url, headers, querystring))
return requests.request("GET", url, headers=headers, params=querystring)
def extractor_query(api_key, guid, target_url):
"""
Perform a live query with the extractor
:param api_key: Import.io user API key
:param guid: Extractor identifier
:param target_url: URL to run the extractor against
:return: Requests response object
"""
url = "https://extraction.import.io/query/extractor/{0}".format(guid)
querystring = {
"_apikey": api_key,
"url": target_url
}
headers = {
'cache-control': "no-cache",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}".format(url, headers, querystring))
return requests.request("GET", url, headers=headers, params=querystring)
def extractor_url_list_get(api_key, guid, url_guid):
"""
Gets the URL list associated with an extractor
:param api_key: Import.io user API key
:param guid: Extractor identifier
:param url_guid: URL List identifier
:return: Requests response object
"""
url = "https://store.import.io/store/extractor/{0}/_attachment/urlList/{1}".format(guid, url_guid)
querystring = {"_apikey": api_key}
headers = {
'accept-encoding': "gzip",
'cache-control': "no-cache",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}".format(url, headers, querystring))
return requests.request("GET", url, headers=headers, params=querystring)
def extractor_url_list_put(api_key, guid, url_list):
url = "https://store.import.io/store/extractor/{0}/_attachment/urlList".format(guid)
querystring = {
"_apikey": api_key
}
payload = url_list
headers = {
'content-type': "text/plain",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}".format(url, headers, querystring))
return requests.request("PUT", url, data=payload, headers=headers, params=querystring)
def extractor_inputs_put(api_key, guid, inputs):
url = "https://store.import.io/store/extractor/{0}/_attachment/inputs".format(guid)
querystring = {
"_apikey": api_key
}
payload = inputs
headers = {
'content-type': "text/plain",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}".format(url, headers, querystring))
return requests.request("PUT", url, data=payload, headers=headers, params=querystring)
def extractor_inputs_get(api_key, guid, inputs_guid):
"""
Gets the inputs associated with an extractor
:param api_key: Import.io user API key
:param guid: Extractor identifier
:param inputs_guid: URL List identifier
:return: Requests response object
"""
url = "https://store.import.io/store/extractor/{0}/_attachment/inputs/{1}".format(guid, inputs_guid)
querystring = {"_apikey": api_key}
headers = {
'accept-encoding': "gzip",
'cache-control': "no-cache",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}".format(url, headers, querystring))
return requests.request("GET", url, headers=headers, params=querystring)
def extractor_cancel(api_key, guid):
"""
Cancels a crawl run of an extractor
:param api_key: Import.io user API key
:param guid: Extractor identifier
:return: Response object from requests REST call
"""
url = "https://run.import.io/{0}/cancel".format(guid)
querystring = {
"_apikey": api_key
}
headers = {
'cache-control': "no-cache",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}".format(url, headers, querystring))
return requests.request("POST", url, headers=headers, params=querystring)
def extractor_start(api_key, guid):
"""
Initiates a crawl run of an extractor
:param api_key: Import.io user API key
:param guid: Extractor identifier
:return: Response object from requests REST call
"""
url = "https://run.import.io/{0}/start".format(guid)
querystring = {
"_apikey": api_key
}
headers = {
'cache-control': "no-cache",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}".format(url, headers, querystring))
return requests.request("POST", url, headers=headers, params=querystring)
def extractor_csv(api_key, guid):
"""
Returns the CSV file from the most recent extractor crawl run
:param api_key: Import.io user API key
:param guid: Extractor identifier
:return: Response object from requests REST call
"""
url = "https://data.import.io/extractor/{0}/csv/latest".format(guid)
querystring = {
'_apikey': api_key
}
headers = {
'accept-encoding': "gzip",
'cache-control': "no-cache",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}".format(url, headers, querystring))
return requests.request("GET", url, headers=headers, params=querystring)
def extractor_json(api_key, guid):
url = "https://data.import.io/extractor/{0}/json/latest".format(guid)
logger.debug("url: {0}".format(url))
querystring = {
"_apikey": api_key
}
headers = {
'accept-encoding': "gzip",
'cache-control': "no-cache",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}".format(url, headers, querystring))
return requests.request("GET", url, headers=headers, params=querystring)
def extractor_log(api_key, guid):
url = "https://data.import.io/extractor/{0}/log/latest".format(guid)
querystring = {
"_apikey": api_key
}
headers = {
'accept-encoding': "gzip",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}".format(url, headers, querystring))
return requests.request("GET", url, headers=headers, params=querystring)
def object_store_create(api_key, object_type, obj):
url = "https://store.import.io/{0}".format(object_type)
querystring = {
"_apikey": api_key
}
payload = json.dumps(obj)
headers = {
'accept': "application/json",
'content-type': "application/json",
'cache-control': "no-cache",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}, payload: {3}".format(url, headers, querystring, payload))
return requests.request("POST", url, data=payload, headers=headers, params=querystring)
def object_store_get(api_key, object_type, object_id):
"""
Fetches an object of specific type from the Object Store
:param api_key: Import.io API Key
:param object_type: Type of object: crawlrun, extractor, etc
:param object_id: Unique identifier of an object
:return: response
"""
url = "https://store.import.io/store/{0}/{1}".format(object_type, object_id)
querystring = {
"_apikey": api_key
}
headers = {
'accept': "application/json",
'cache-control': "no-cache",
}
response = requests.request("GET", url, headers=headers, params=querystring)
return response
def object_store_get_attachment(api_key, object_id, object_type, attachment_field, attachment_id,
attachment_type):
"""
Generic function for downloading attachments from Crawl Runs/Extractors.
:param api_key: Import.io API key
:param object_id: CrawlRun or Extractor Id
:param object_type: Type of the object: crawlrun or extractor
:param attachment_field: One of the following: csv, example, files, inputs, json, log, xlsx
:param attachment_id: Id of the attachment
:param attachment_type: Mime type
:return: response
"""
url = "https://store.import.io/{0}/{1}/_attachment/{2}/{3}".format(
object_type, object_id, attachment_field, attachment_id)
headers = {
'accept': attachment_type,
}
querystring = {
"_apikey": api_key
}
return requests.request("GET", url, headers=headers, params=querystring)
def object_store_put_attachment(api_key, object_type, object_id, attachment_field, attachment_contents,
attachment_type):
url = "https://store.import.io/{0}/{1}/_attachment/{2}".format(object_type, object_id, attachment_field)
querystring = {
"_apikey": api_key
}
payload = attachment_contents
headers = {
'accept': "application/json",
'content-type': attachment_type,
'cache-control': "no-cache",
}
logger.debug("url: {0}, headers: {1}, querystring: {2}, payload: {3}".format(url, headers, querystring, payload))
return requests.request("PUT", url, data=payload, headers=headers, params=querystring)
def object_store_change_ownership(api_key, object_type, object_id, owner_id):
"""
Changes the ownership of an object (Extractor or Crawl Run) in the object store.
NOTE: The API KEY must be from an account that has SUPPORT role
:param api_key: Import.io API Key
:param object_type: Specific object type
:param object_id: Object Id of the Extractor or Crawl Run whose ownership to change
:param owner_id: Owner GUID to set the object's ownership to.
:return: response
"""
url = "https://store.import.io/{0}/{1}".format(object_type, object_id)
querystring = {"newOwner": owner_id,
"_apikey": api_key
}
headers = {
}
response = requests.request("PATCH", url, headers=headers, params=querystring)
return response
def object_store_stream_attachment(api_key, object_id, object_type, attachment_field, attachment_id,
attachment_type, path):
"""
Generic function for streaming data from attachments from Crawl Runs/Extractors.
:param api_key: Import.io API key
:param object_id: CrawlRun or Extractor Id
:param object_type: Type of the object: crawlrun or extractor
:param attachment_field: One of the following: csv, example, files, inputs, json, log, xlsx
:param attachment_id: Id of the attachment
:param attachment_type: Mime type
:param path: Location to write the zip file
:return: response
"""
url = "https://store.import.io/{0}/{1}/_attachment/{2}/{3}".format(
object_type, object_id, attachment_field, attachment_id)
headers = {
'accept': attachment_type,
}
querystring = {
"_apikey": api_key
}
r = requests.get(url, headers=headers, params=querystring, stream=True)
with open(path, 'wb') as f:
for chunk in r.iter_content(chunk_size=1024):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
f.flush()
| 30.151111 | 117 | 0.648585 | 1,678 | 13,568 | 5.121573 | 0.13528 | 0.046079 | 0.02653 | 0.068536 | 0.73656 | 0.718874 | 0.683267 | 0.651036 | 0.630556 | 0.622295 | 0 | 0.008946 | 0.225531 | 13,568 | 449 | 118 | 30.218263 | 0.808908 | 0.270563 | 0 | 0.536697 | 0 | 0 | 0.256521 | 0.004958 | 0 | 0 | 0 | 0 | 0 | 1 | 0.087156 | false | 0 | 0.100917 | 0 | 0.270642 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
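The chunked-write loop at the end of `object_store_stream_attachment` is the standard Requests streaming pattern: iterate over `iter_content`, skip empty keep-alive chunks, and write the rest to disk. A minimal self-contained sketch of the same loop, with a plain iterable of bytes standing in for a live `requests` response so it runs without network access:

```python
import os
import tempfile

def write_chunks(chunks, path):
    """Mirror of the streaming loop above: write non-empty chunks to path."""
    with open(path, "wb") as f:
        for chunk in chunks:
            if chunk:  # filter out keep-alive empty chunks
                f.write(chunk)
                f.flush()

# Any iterable of bytes plays the role of r.iter_content(chunk_size=1024).
path = os.path.join(tempfile.mkdtemp(), "attachment.bin")
write_chunks([b"hello ", b"", b"world"], path)
```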
50fd486a360b465f33c80844287095389797b0db | 236 | py | Python | src/cube/op/apply/tran/log/denary.py | jedhsu/cortex | d610570743d59272c7e6326d10c53d55950e87fc | [
"Apache-2.0"
] | null | null | null | src/cube/op/apply/tran/log/denary.py | jedhsu/cortex | d610570743d59272c7e6326d10c53d55950e87fc | [
"Apache-2.0"
] | null | null | null | src/cube/op/apply/tran/log/denary.py | jedhsu/cortex | d610570743d59272c7e6326d10c53d55950e87fc | [
"Apache-2.0"
] | null | null | null | """
*Denary Logarithm*
"""
from dataclasses import dataclass
import jax.numpy as jnp
from ._operator import Logarithm
__all__ = ["DenaryLogarithm"]
@dataclass
class DenaryLogarithm(
Logarithm,
):
operator = jnp.log10
| 11.8 | 33 | 0.711864 | 24 | 236 | 6.791667 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010526 | 0.194915 | 236 | 19 | 34 | 12.421053 | 0.847368 | 0.076271 | 0 | 0 | 0 | 0 | 0.073529 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.555556 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
50fe6203e2d66be3046c4c5c9c62614f8bd466ae | 8,948 | py | Python | imagenet/SGD_GAF.py | LongJin-lab/Nematode-Connectome-Neural-Network | c1fcef110df7d5cfb9fec6a0778b8340e5289ede | [
"MIT"
] | null | null | null | imagenet/SGD_GAF.py | LongJin-lab/Nematode-Connectome-Neural-Network | c1fcef110df7d5cfb9fec6a0778b8340e5289ede | [
"MIT"
] | null | null | null | imagenet/SGD_GAF.py | LongJin-lab/Nematode-Connectome-Neural-Network | c1fcef110df7d5cfb9fec6a0778b8340e5289ede | [
"MIT"
] | null | null | null | import torch
from torch.optim.optimizer import Optimizer
from torch.optim.optimizer import required
import torch.nn.functional as F
class SGD_atan(Optimizer):
r"""Implements stochastic gradient descent (optionally with momentum).
Nesterov momentum is based on the formula from
`On the importance of initialization and momentum in deep learning`__.
Args:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float): learning rate
momentum (float, optional): momentum factor (default: 0)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
dampening (float, optional): dampening for momentum (default: 0)
nesterov (bool, optional): enables Nesterov momentum (default: False)
alpha (float, optional): output scale of the atan gradient transform (default: 1.0)
beta (float, optional): input scale of the atan gradient transform (default: 1.5)
Example:
>>> optimizer = SGD_atan(model.parameters(), lr=0.1, momentum=0.9)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
"""
def __init__(self, params, lr=required, momentum=0, dampening=0,
weight_decay=0, nesterov=False, alpha=1.0, beta=1.5):
if lr is not required and lr < 0.0:
raise ValueError("Invalid learning rate: {}".format(lr))
if momentum < 0.0:
raise ValueError("Invalid momentum value: {}".format(momentum))
if weight_decay < 0.0:
raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
defaults = dict(lr=lr, momentum=momentum, dampening=dampening,
weight_decay=weight_decay, nesterov=nesterov, alpha=alpha, beta=beta)
if nesterov and (momentum <= 0 or dampening != 0):
raise ValueError("Nesterov momentum requires a momentum and zero dampening")
super(SGD_atan, self).__init__(params, defaults)
def __setstate__(self, state):
super(SGD_atan, self).__setstate__(state)
for group in self.param_groups:
group.setdefault('nesterov', False)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
weight_decay = group['weight_decay']
momentum = group['momentum']
dampening = group['dampening']
nesterov = group['nesterov']
alpha = group['alpha']
beta = group['beta']
for p in group['params']:
if p.grad is None:
continue
d_p = p.grad.data
#print(d_p.max().max())
#d_p = 0.05 * torch.atan(d_p*1.5)
if alpha >=0 and beta >=0:
d_p = alpha * torch.atan(beta*d_p)
#d_p = d_p.max().max()/2 * torch.atan(d_p*(1.5*2/d_p.max().max()))
if weight_decay != 0:
d_p.add_(weight_decay, p.data)
if momentum != 0:
param_state = self.state[p]
if 'momentum_buffer' not in param_state:
buf = param_state['momentum_buffer'] = torch.clone(d_p).detach()
else:
buf = param_state['momentum_buffer']
buf.mul_(momentum).add_(1 - dampening, d_p)
if nesterov:
d_p = d_p.add(momentum, buf)
else:
d_p = buf
p.data.add_(-group['lr'], d_p)
return loss
class SGD_atanMom(Optimizer):
def __init__(self, params, lr=required, momentum=0, dampening=0,
weight_decay=0, alpha=0.3, beta=4.5, nesterov=False):
if lr is not required and lr < 0.0:
raise ValueError("Invalid learning rate: {}".format(lr))
if momentum < 0.0:
raise ValueError("Invalid momentum value: {}".format(momentum))
if weight_decay < 0.0:
raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
defaults = dict(lr=lr, momentum=momentum, dampening=dampening,
weight_decay=weight_decay, nesterov=nesterov, alpha=alpha, beta=beta)
if nesterov and (momentum <= 0 or dampening != 0):
raise ValueError("Nesterov momentum requires a momentum and zero dampening")
super(SGD_atanMom, self).__init__(params, defaults)
def __setstate__(self, state):
super(SGD_atanMom, self).__setstate__(state)
for group in self.param_groups:
group.setdefault('nesterov', False)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
weight_decay = group['weight_decay']
momentum = group['momentum']
dampening = group['dampening']
nesterov = group['nesterov']
alpha = group['alpha']
beta = group['beta']
for p in group['params']:
if p.grad is None:
continue
d_p = p.grad.data
#print(d_p.max().max())
#d_p = 0.05 * torch.atan(d_p*1.5)
#d_p = 0.3 * torch.atan(d_p*4.5)
#d_p = 0.05 * torch.atan(d_p*1.5)
#d_p = 0.7 * torch.atan(d_p*0.7)
#d_p = 1.5 * (1 / (1 + torch.exp(-1 * d_p)) - 0.5) + 0.1 * torch.sign(d_p)
#1.5 * (1 / (1 + np.exp(-1 * x)) - 0.5) + 0.1 * np.sign(x)
#d_p = d_p.max().max()/2 * torch.atan(d_p*(1.5*2/d_p.max().max()))
if weight_decay != 0:
d_p.add_(weight_decay, p.data)
if momentum != 0:
param_state = self.state[p]
if 'momentum_buffer' not in param_state:
buf = param_state['momentum_buffer'] = torch.clone(d_p).detach()
else:
buf = param_state['momentum_buffer']
buf.mul_(momentum).add_(1 - dampening, d_p)
if nesterov:
d_p = d_p.add(momentum, buf)
else:
d_p = buf
if alpha >= 0 and beta >=0:
d_p = alpha * torch.atan(beta*d_p)
p.data.add_(-group['lr'], d_p)
return loss
class Adam_atan(Optimizer):
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0, alpha=0.1, beta=15.0):
defaults = dict(lr=lr, betas=betas, eps=eps,weight_decay=weight_decay, alpha=alpha, beta=beta)
super(Adam_atan, self).__init__(params, defaults)
def step(self, closure=None):
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
alpha = group['alpha']
beta = group['beta']
if alpha >= 0 and beta >= 0:
grad = alpha * torch.atan(grad * beta)
#grad = 0.05 * torch.atan(grad*1.5)
#grad = 0.7 * torch.atan(grad*0.7)
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = grad.new().resize_as_(grad).zero_()
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = grad.new().resize_as_(grad).zero_()
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
if group['weight_decay'] != 0:
grad = grad.add(group['weight_decay'], p.data)
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(1 - beta1, grad)
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
denom = exp_avg_sq.sqrt().add_(group['eps'])
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
p.data.addcdiv_(-step_size, exp_avg, denom)
return loss | 40.125561 | 108 | 0.531851 | 1,085 | 8,948 | 4.219355 | 0.15023 | 0.018786 | 0.02097 | 0.01682 | 0.666448 | 0.633464 | 0.605505 | 0.605505 | 0.605505 | 0.605505 | 0 | 0.024818 | 0.356057 | 8,948 | 223 | 109 | 40.125561 | 0.769698 | 0.202615 | 0 | 0.731884 | 0 | 0 | 0.082879 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057971 | false | 0 | 0.028986 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
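The key step shared by these optimizers is `d_p = alpha * torch.atan(beta * d_p)`, which squashes each gradient component into the open interval (-alpha·π/2, alpha·π/2) while staying roughly linear (slope alpha·beta) near zero. A scalar sketch with `math.atan` in place of the elementwise `torch.atan`, using the `SGD_atan` defaults alpha=1.0, beta=1.5:

```python
import math

def gaf(g, alpha=1.0, beta=1.5):
    """Bounded, odd, monotone transform of a raw gradient component g."""
    return alpha * math.atan(beta * g)
```

Even an enormous gradient component is mapped below alpha·π/2, which is what makes the update step size bounded regardless of how large the raw gradient gets.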
0fa059cdbd4bb75a8bac874ef35a7f81dd065130 | 473 | py | Python | get_latest_conda_build_path.py | amacd31/conda_ci_scripts | 8245f2d261e022e1400b3c9e134bf8cf767c7236 | [
"CC0-1.0"
] | null | null | null | get_latest_conda_build_path.py | amacd31/conda_ci_scripts | 8245f2d261e022e1400b3c9e134bf8cf767c7236 | [
"CC0-1.0"
] | null | null | null | get_latest_conda_build_path.py | amacd31/conda_ci_scripts | 8245f2d261e022e1400b3c9e134bf8cf767c7236 | [
"CC0-1.0"
] | null | null | null | import sys
import os
import glob
from conda_build.config import Config
from conda_build.metadata import MetaData
from distutils.version import LooseVersion
config = Config()
recipe_metadata = MetaData(sys.argv[1])
binary_package_glob = os.path.join(config.bldpkgs_dir, '{0}*.tar.bz2'.format(recipe_metadata.name()))
binary_package = sorted(glob.glob(binary_package_glob), key=LooseVersion, reverse=True)[0]
print(binary_package)
| 29.5625 | 101 | 0.805497 | 70 | 473 | 5.285714 | 0.471429 | 0.140541 | 0.075676 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011601 | 0.088795 | 473 | 15 | 102 | 31.533333 | 0.846868 | 0 | 0 | 0 | 0 | 0 | 0.02537 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.615385 | 0 | 0.615385 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
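The script picks the newest build by sorting glob results with `LooseVersion` as the sort key. A hedged sketch of the same idea with a hand-rolled key (so it runs without `distutils`, which is deprecated), using made-up package filenames:

```python
import re

def version_key(path):
    """Sort key: the dotted numeric version embedded in a conda package filename."""
    m = re.search(r"-(\d+(?:\.\d+)*)-", path)
    return tuple(int(p) for p in m.group(1).split(".")) if m else ()

# Hypothetical build artifacts; tuple comparison ranks 1.10.0 above 1.9.1,
# which a plain lexicographic string sort would get wrong.
paths = ["pkg-1.2.0-py_0.tar.bz2", "pkg-1.10.0-py_0.tar.bz2", "pkg-1.9.1-py_0.tar.bz2"]
latest = sorted(paths, key=version_key, reverse=True)[0]
```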
0fa8e93c94dbda10b1c851914d67a9f5a7679b79 | 767 | py | Python | maec/utils/__init__.py | colbyprior/python-maec | 109d4517b0123a5f01e31c15818f35772d451705 | [
"BSD-3-Clause"
] | null | null | null | maec/utils/__init__.py | colbyprior/python-maec | 109d4517b0123a5f01e31c15818f35772d451705 | [
"BSD-3-Clause"
] | null | null | null | maec/utils/__init__.py | colbyprior/python-maec | 109d4517b0123a5f01e31c15818f35772d451705 | [
"BSD-3-Clause"
] | null | null | null | # Copyright (c) 2015, The MITRE Corporation
# All rights reserved
"""MAEC utility methods"""
from mixbox.vendor.six import iteritems
def flip_dict(d):
"""Returns a copy of the input dictionary `d` where the values of `d`
become the keys and the keys become the values.
Note:
This does not even attempt to address key collisions.
Args:
d: A dictionary
"""
return dict((v,k) for k, v in iteritems(d))
# Namespace flattening
from .parser import EntityParser # noqa
from .comparator import (ObjectHash, BundleComparator, SimilarObjectCluster, # noqa
ComparisonResult) # noqa
from .deduplicator import BundleDeduplicator # noqa
#Ensure MAEC namespaces get registered
from .nsparser import * # noqa
| 27.392857 | 83 | 0.698827 | 98 | 767 | 5.459184 | 0.673469 | 0.033645 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006757 | 0.228162 | 767 | 27 | 84 | 28.407407 | 0.896959 | 0.48631 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.625 | 0 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
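As the `flip_dict` docstring warns, value collisions are not addressed: when two keys share a value, only one survives the flip (in modern CPython, the last one iterated wins). A quick standalone illustration, reimplementing the one-liner with `dict.items()` so it runs without the `mixbox` six shim:

```python
def flip_dict(d):
    """Value->key inversion; for duplicate values, the last key iterated wins."""
    return dict((v, k) for k, v in d.items())

flipped = flip_dict({"a": 1, "b": 2})
```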
0fa98a9a212faf370604ed30a8fd919ddabd0957 | 983 | py | Python | tests/python/pants_test/goal/data/register.py | lahosken/pants | 1b0340987c9b2eab9411416803c75b80736716e4 | [
"Apache-2.0"
] | 1 | 2021-11-11T14:04:24.000Z | 2021-11-11T14:04:24.000Z | tests/python/pants_test/goal/data/register.py | lahosken/pants | 1b0340987c9b2eab9411416803c75b80736716e4 | [
"Apache-2.0"
] | 2 | 2016-10-13T21:37:42.000Z | 2018-07-20T20:14:33.000Z | tests/python/pants_test/goal/data/register.py | lahosken/pants | 1b0340987c9b2eab9411416803c75b80736716e4 | [
"Apache-2.0"
] | 1 | 2021-11-11T14:04:12.000Z | 2021-11-11T14:04:12.000Z | # coding=utf-8
# Copyright 2016 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import (absolute_import, division, generators, nested_scopes, print_function,
unicode_literals, with_statement)
from pants.backend.jvm.tasks.nailgun_task import NailgunTask
from pants.base.workunit import WorkUnit
from pants.goal.task_registrar import TaskRegistrar as task
from pants.task.task import Task
def register_goals():
task(name='run-dummy-workunit', action=TestWorkUnitTask).install()
class TestWorkUnitTask(NailgunTask):
@classmethod
def register_options(cls, register):
register('--success', default=False, type=bool)
def execute(self):
result = WorkUnit.SUCCESS if self.get_options().success else WorkUnit.FAILURE
# This creates a workunit and marks it with the configured outcome.
with self.context.new_workunit('dummy') as workunit:
workunit.set_outcome(result)
| 33.896552 | 93 | 0.76297 | 126 | 983 | 5.825397 | 0.619048 | 0.049046 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008363 | 0.148525 | 983 | 28 | 94 | 35.107143 | 0.868578 | 0.189217 | 0 | 0 | 0 | 0 | 0.040404 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.3125 | 0 | 0.5625 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0fb09492cb0383481e3414a1cbe1ab5f1ecef089 | 2,690 | py | Python | dragonfly/distributions/model.py | anonymous-submission000/mobo | 090f774d742c7155c5e5ba01c10e7db7b93b6a0a | [
"MIT"
] | 1 | 2022-02-17T08:50:47.000Z | 2022-02-17T08:50:47.000Z | dragonfly/distributions/model.py | anonymous-submission000/mobo | 090f774d742c7155c5e5ba01c10e7db7b93b6a0a | [
"MIT"
] | null | null | null | dragonfly/distributions/model.py | anonymous-submission000/mobo | 090f774d742c7155c5e5ba01c10e7db7b93b6a0a | [
"MIT"
] | null | null | null | """
Class for abstract probability distribution
-- kvysyara@andrew.cmu.edu
"""
from __future__ import absolute_import
import numpy as np
# Local imports
from .distribution import Distribution
from ..sampling.metropolis import Metropolis, BinaryMetropolis
from ..sampling.nuts import NoUTurnSamplerDA as NUTS, LeapFrog
from ..sampling.slice import Slice
# pylint: disable=invalid-name
# pylint: disable=abstract-method
# pylint: disable=relative-import
class Model(Distribution):
""" Class for abstract distributions """
def __init__(self, pdf, logp, grad_logp):
super(Model, self).__init__()
self._pdf = pdf
self._logp = logp
self._grad_logp = grad_logp
self.dim = None
def pdf(self, x):
""" Returns pdf of distribution at x """
return np.asscalar(self._pdf(x))
def logp(self, x):
""" Returns log of pdf at x """
return self._logp(x)
def grad_logp(self, x):
""" Returns gradient of log pdf at x """
return self._grad_logp(x)
def draw_samples(self, method, size=None, init_sample=None, *args):
""" Returns samples drawn from distribution. """
if method == 'nuts':
return self.draw_samples_nuts(size, init_sample, *args)
elif method == 'slice':
return self.draw_samples_slice(size, init_sample, *args)
elif method == 'metropolis':
return self.draw_samples_metropolis(size, init_sample, *args)
elif method == 'binarymetropolis':
return self.draw_samples_binary_metropolis(size, init_sample, *args)
else:
raise ValueError('Unknown sampling method: %s' % method)
def draw_samples_slice(self, size, init_sample, *args):
""" Returns samples drawn from distribution using Slice sampler"""
sampler = Slice(self)
return sampler.sample(init_sample, size, *args)
def draw_samples_nuts(self, size, init_sample, *args):
""" Returns samples drawn from distribution using NUTS sampler"""
sampler = NUTS(self, LeapFrog)
return sampler.sample(init_sample, size, *args)
def draw_samples_metropolis(self, size, init_sample, *args):
""" Returns samples drawn from distribution using Metropolis sampler"""
if hasattr(init_sample, '__len__'):
self.dim = len(init_sample)
else:
self.dim = 1
sampler = Metropolis(self, True, *args)
return sampler.sample(init_sample, size)
def draw_samples_binary_metropolis(self, size, init_sample, *args):
""" Returns samples drawn from distribution using Binary Metropolis sampler"""
sampler = BinaryMetropolis(self, True, *args)
return sampler.sample(init_sample, size)
| 34.935065 | 86 | 0.664684 | 326 | 2,690 | 5.294479 | 0.202454 | 0.086906 | 0.06489 | 0.08343 | 0.378331 | 0.337775 | 0.266512 | 0.266512 | 0.266512 | 0.214368 | 0 | 0.000485 | 0.234201 | 2,690 | 76 | 87 | 35.394737 | 0.837379 | 0.224164 | 0 | 0.090909 | 0 | 0 | 0.020813 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.204545 | false | 0 | 0.136364 | 0 | 0.613636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
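`Model.draw_samples` above routes a method-name string to the matching sampler. A minimal self-contained sketch of the same dispatch, with a toy random-walk Metropolis step standing in for dragonfly's samplers (`TinySampler` and its step are illustrative, not the library's API):

```python
import math
import random

class TinySampler:
    """Hypothetical stand-in for Model: dispatches a method name to a sampler."""

    def __init__(self, logp):
        self._logp = logp

    def draw_samples(self, method, size, init_sample):
        if method == 'metropolis':
            return self._metropolis(size, init_sample)
        raise ValueError("Unknown sampling method: '%s'" % method)

    def _metropolis(self, size, x):
        # Random-walk Metropolis: accept x' with probability min(1, p(x')/p(x)).
        samples = []
        for _ in range(size):
            prop = x + random.gauss(0.0, 1.0)
            if math.log(random.random()) < self._logp(prop) - self._logp(x):
                x = prop
            samples.append(x)
        return samples

random.seed(0)
sampler = TinySampler(lambda x: -0.5 * x * x)  # log-density of N(0, 1), up to a constant
draws = sampler.draw_samples('metropolis', 500, 0.0)
```

Unknown method names raise immediately rather than silently returning `None`, which is the main failure mode the string dispatch needs to guard.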
0fb3d5631bf6694b6b04224c4ca2af898658c624 | 245 | py | Python | dice_ml/constants.py | Moeflon/DiCE | bb00f2c73495d80675d9d6e61d385ee8638c5be9 | [
"MIT"
] | null | null | null | dice_ml/constants.py | Moeflon/DiCE | bb00f2c73495d80675d9d6e61d385ee8638c5be9 | [
"MIT"
] | null | null | null | dice_ml/constants.py | Moeflon/DiCE | bb00f2c73495d80675d9d6e61d385ee8638c5be9 | [
"MIT"
] | null | null | null | """Constants for dice-ml package."""
class BackEndTypes:
Sklearn = 'sklearn'
Tensorflow1 = 'TF1'
Tensorflow2 = 'TF2'
Pytorch = 'PYT'
class SamplingStrategy:
Random = 'random'
Genetic = 'genetic'
KdTree = 'kdtree'
| 16.333333 | 36 | 0.62449 | 23 | 245 | 6.652174 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021739 | 0.24898 | 245 | 14 | 37 | 17.5 | 0.809783 | 0.122449 | 0 | 0 | 0 | 0 | 0.167464 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
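The constants above use plain classes as namespaces so call sites compare against named attributes rather than scattering bare string literals. A small sketch of how such constants are typically consumed (`pick_explainer` is a hypothetical consumer, not part of dice-ml):

```python
class BackEndTypes:
    Sklearn = 'sklearn'
    Tensorflow1 = 'TF1'
    Tensorflow2 = 'TF2'
    Pytorch = 'PYT'

def pick_explainer(backend):
    # Comparing against the named constant keeps the literal in one place.
    if backend == BackEndTypes.Sklearn:
        return 'sklearn-explainer'
    raise ValueError('unsupported backend: %s' % backend)

choice = pick_explainer(BackEndTypes.Sklearn)
```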
0fc724e01dcf6faa7e2cb9efbaff1fc2842030b5 | 1,397 | py | Python | main/models.py | hassanaziz0012/real-estate-manager-site | 607139ad1f7f303623f7983a217f4178fc7ee8ec | [
"MIT"
] | null | null | null | main/models.py | hassanaziz0012/real-estate-manager-site | 607139ad1f7f303623f7983a217f4178fc7ee8ec | [
"MIT"
] | null | null | null | main/models.py | hassanaziz0012/real-estate-manager-site | 607139ad1f7f303623f7983a217f4178fc7ee8ec | [
"MIT"
] | null | null | null | from django.db import models
# Create your models here.
class Listing(models.Model):
address = models.CharField(max_length=300, null=False, blank=False)
zip_code = models.IntegerField(null=False, blank=False)
city = models.CharField(max_length=100, null=False, blank=False)
beds_count = models.IntegerField(default=1)
baths_count = models.IntegerField(default=1)
images = models.ManyToManyField('ListingImage', related_name='images', blank=True)
price = models.IntegerField(null=False, blank=False)
application_fee = models.IntegerField(null=False, blank=False, default=1)
description = models.TextField(null=True, blank=True)
def get_banner(self):
if self.images.exists():
return self.images.first().file.url
else:
return None
def __str__(self) -> str:
return f'{self.address} - ${self.price}'
def __repr__(self) -> str:
return f'<Listing: {self.address} - ${self.price}>'
class ListingImage(models.Model):
listing = models.ForeignKey(Listing, on_delete=models.CASCADE, related_name='listing')
file = models.ImageField(upload_to='listing-images', null=False, blank=False)
def __str__(self) -> str:
return f'{self.file.name} - {self.listing.address}'
def __repr__(self) -> str:
return f'<ListingImage: {self.file.name} - {self.listing.address}>'
| 36.763158 | 90 | 0.683608 | 176 | 1,397 | 5.272727 | 0.357955 | 0.05819 | 0.090517 | 0.122845 | 0.34375 | 0.27694 | 0.051724 | 0 | 0 | 0 | 0 | 0.007867 | 0.181102 | 1,397 | 37 | 91 | 37.756757 | 0.803322 | 0.01718 | 0 | 0.148148 | 0 | 0 | 0.151714 | 0.032823 | 0 | 0 | 0 | 0 | 0 | 1 | 0.185185 | false | 0 | 0.037037 | 0.148148 | 0.925926 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
0fd0938f5388415f09c92dd78e5f79354660972c | 715 | py | Python | DifferenceEvolution/__init__.py | LeslieWongCV/EE6227_Wong1 | 4813e2f33c5cb2ccb7ff2ec884481a97bd918de6 | [
"MIT"
] | 2 | 2021-01-09T05:55:41.000Z | 2022-01-01T07:14:55.000Z | DifferenceEvolution/__init__.py | LeslieWongCV/EE6227_Wong1 | 4813e2f33c5cb2ccb7ff2ec884481a97bd918de6 | [
"MIT"
] | null | null | null | DifferenceEvolution/__init__.py | LeslieWongCV/EE6227_Wong1 | 4813e2f33c5cb2ccb7ff2ec884481a97bd918de6 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# @Time    : 2021/2/10 11:58 AM
# @Author : Yushuo Wang
# @FileName: __init__.py
# @Software: PyCharm
# @Blog :https://lesliewongcv.github.io/
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append('../')
import GenericAlgorithm as GA
from math import *
def fitness_score(pop, NP, D):
Z = np.zeros(NP)
for i in range(D):
Z += pop[:, i]**2
#Z = 1/(pop-5) * np.sin(10 * pi * pop) + 2
#Z = pop[:, 0]**2 + pop[:, 1]**2 + (pop[:, 2] + 500)**2 + pop[:, 3]**2 + pop[:, 4]**2 - 200
return Z
def init_popu(NP, D):
X = np.random.random([NP, D]) * 200 - 100
X_fitness = fitness_score(X, NP, D)
return X, X_fitness
| 26.481481 | 105 | 0.581818 | 117 | 715 | 3.478632 | 0.529915 | 0.029484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072202 | 0.225175 | 715 | 26 | 106 | 27.5 | 0.662455 | 0.423776 | 0 | 0 | 0 | 0 | 0.007444 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.333333 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
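`fitness_score` above evaluates the sphere function (sum of squared coordinates) over an NP x D NumPy population. A dependency-free re-sketch of the same objective and initialisation, using plain lists instead of arrays:

```python
import random

def sphere_fitness(population):
    """Sphere function: sum of squared coordinates per individual (lower is better)."""
    return [sum(x * x for x in individual) for individual in population]

def init_population(size, dim, low=-100.0, high=100.0):
    # Matches the [-100, 100] initialisation range used above.
    return [[random.uniform(low, high) for _ in range(dim)] for _ in range(size)]

random.seed(1)
pop = init_population(5, 3)
scores = sphere_fitness(pop)
```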
0fd498bcdf9fe5b42393563b752da57b4eb04b2e | 13,554 | py | Python | analytics/selector/tests/unittest_selector.py | sadikovi/Pulsar | 3267426cf5bd676a3c4c20cbee88a80b89e65b0f | [
"Apache-2.0"
] | null | null | null | analytics/selector/tests/unittest_selector.py | sadikovi/Pulsar | 3267426cf5bd676a3c4c20cbee88a80b89e65b0f | [
"Apache-2.0"
] | 29 | 2015-02-23T07:59:13.000Z | 2015-04-05T09:49:53.000Z | analytics/selector/tests/unittest_selector.py | sadikovi/Pulsar | 3267426cf5bd676a3c4c20cbee88a80b89e65b0f | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# import libs
import unittest
from types import IntType, FloatType, ListType
import random
import warnings
# import classes
import analytics.exceptions.exceptions as ex
import analytics.core.processor.processor as processor
import analytics.selector.selector as selector
from analytics.core.map.clustermap import ClusterMap
from analytics.core.map.elementmap import ElementMap
from analytics.core.map.pulsemap import PulseMap
from analytics.algorithms.algorithmsmap import AlgorithmsMap
from analytics.algorithms.algorithm import Algorithm
class Selector_TestSequence(unittest.TestCase):
def setUp(self):
self._p = [
{"id": "1", "name": "random", "desc": "random", "sample": 1, "dynamic": True},
{"id": "2", "name": "order", "desc": "order", "sample": 1.0, "dynamic": False},
{"id": "3", "name": "dir", "desc": "dir", "sample": "str"}
]
self._c = [
{"id": "1", "name": "#1", "desc": "#1", "parent": None},
{"id": "2", "name": "#2", "desc": "#2", "parent": "1"},
{"id": "3", "name": "#3", "desc": "#3", "parent": "1"},
{"id": "4", "name": "#4", "desc": "#4", "parent": "2"},
{"id": "5", "name": "#5", "desc": "#5", "parent": "2"}
]
self._e = [
{"id": "1", "name": "@1", "desc": "@1", "cluster": "5", "random": 2, "order": 1.2, "dir": "up"},
{"id": "2", "name": "@2", "desc": "@2", "cluster": "5", "random": 9, "order": 0.9, "dir": "down"},
{"id": "3", "name": "@3", "desc": "@3", "cluster": "3", "random": 7, "order": 1.1, "dir": "down"},
{"id": "4", "name": "@4", "desc": "@4", "cluster": "3", "random": 1, "order": 1.5, "dir": "up"},
{"id": "5", "name": "@5", "desc": "@5", "cluster": "4", "random": 4, "order": 1.7, "dir": "down"}
]
# create process block
block = processor.ProcessBlock(
{"map": ClusterMap(), "data": self._c},
{"map": ElementMap(), "data": self._e},
{"map": PulseMap(), "data": self._p}
)
# parse object lists
block = processor.processWithBlock(block)
self._clustermap = block._clustermap
self._elementmap = block._elementmap
self._pulsemap = block._pulsemap
# create algorithms map
self._algorithmsmap = AlgorithmsMap()
self._algorithmsmap.assign(Algorithm("%1", "%1", "%1"))
def query_empty(self):
return ""
def query_cluster_select(self, cid):
return "select from ${clusters} where @id=[%s]" %(cid)
def query_pulse_static_select(self, pid, value):
value = value if type(value) in [IntType, FloatType] else "[%s]"%(value)
return "select from ${pulses} where @%s=%s and @%s |is| static"%(pid, str(value), pid)
def query_pulse_dynamic_select(self, pid, value):
value = value if type(value) in [IntType, FloatType] else "[%s]"%(value)
return "select from ${pulses} where @%s=%s and @%s |is| dynamic"%(pid, str(value), pid)
def query_all_select_static(self, cid, pid, value):
value = value if type(value) in [IntType, FloatType] else "[%s]"%(value)
return ";".join([
"select from ${clusters} where @id=[%s]" %(cid),
"select from ${pulses} where @%s=%s and @%s |is| static"%(pid, str(value), pid)
])
def query_all_select_dynamic(self, cid, pid, value):
value = value if type(value) in [IntType, FloatType] else "[%s]"%(value)
return ";".join([
"select from ${clusters} where @id=[%s]" %(cid),
"select from ${pulses} where @%s=%s and @%s |is| dynamic"%(pid, str(value), pid)
])
def test_selector_valid_query(self):
queries = [
self.query_empty(),
self.query_cluster_select("1"),
self.query_pulse_static_select("2", 1),
self.query_pulse_dynamic_select("3", 1),
self.query_all_select_static("4", "5", 1),
self.query_all_select_dynamic("4", "5", 1)
]
for query in queries:
a = selector.parseQueryset(query)
self.assertEqual(type(a), ListType)
def test_selector_integration_test0(self):
# create filter block
block = selector.FilterBlock(
self._algorithmsmap,
self._pulsemap,
self._clustermap,
self._elementmap
)
# filter with selector
block = selector.filterWithBlock(
self.query_empty(),
block
)
# reassign maps to new values
self._algorithmsmap = block._alg
self._pulsemap = block._pul
self._clustermap = block._clu
self._elementmap = block._ele
# assertion
self.assertEqual(len(self._algorithmsmap._map.values()), 1)
self.assertEqual(len(self._pulsemap._map), len(self._p))
self.assertEqual(len(self._clustermap._map), len(self._c))
self.assertEqual(len(self._elementmap._map), len(self._e))
def test_selector_integration_test1(self):
# create filter block
block = selector.FilterBlock(
self._algorithmsmap,
self._pulsemap,
self._clustermap,
self._elementmap
)
# extract parameters: cluster id
cluster_values = self._clustermap._map.values()
cid = [x.id() for x in cluster_values if x.name() == "#2"][0]
block = selector.filterWithBlock(self.query_cluster_select(cid), block)
# reassign maps to new values
self._algorithmsmap = block._alg
self._pulsemap = block._pul
self._clustermap = block._clu
self._elementmap = block._ele
# assertion
self.assertEqual(len(self._algorithmsmap._map.values()), 1)
self.assertEqual(len(self._pulsemap._map), 3)
self.assertEqual(len(self._clustermap._map), 3)
self.assertEqual(len(self._elementmap._map), 3)
def test_selector_integration_test2(self):
# create filter block
block = selector.FilterBlock(
self._algorithmsmap,
self._pulsemap,
self._clustermap,
self._elementmap
)
# extract parameters: pulse id and default value
pulse_values = self._pulsemap._map.values()
pid = [x.id() for x in pulse_values if x.name() == "random"][0]
value = 2
block = selector.filterWithBlock(
self.query_pulse_static_select(pid, value),
block
)
# reassign to maps
self._algorithmsmap = block._alg
self._pulsemap = block._pul
self._clustermap = block._clu
self._elementmap = block._ele
# assertion
self.assertEqual(len(self._algorithmsmap._map.values()), 1)
self.assertEqual(len(self._pulsemap._map), 3)
self.assertEqual(self._pulsemap.get(pid).static(), True)
self.assertEqual(self._pulsemap.get(pid).default(), value)
self.assertEqual(len(self._clustermap._map), 5)
self.assertEqual(len(self._elementmap._map), 1)
self.assertEqual(self._elementmap._map.values()[0].name(), "@1")
def test_selector_integration_test3(self):
# filter block
block = selector.FilterBlock(
self._algorithmsmap,
self._pulsemap,
self._clustermap,
self._elementmap
)
pulse_values = self._pulsemap._map.values()
pid = [x.id() for x in pulse_values if x.name() == "order"][0]
value = 20.0
block = selector.filterWithBlock(
self.query_pulse_dynamic_select(pid, value),
block
)
# reassign to maps
self._algorithmsmap = block._alg
self._pulsemap = block._pul
self._clustermap = block._clu
self._elementmap = block._ele
# assertion
self.assertEqual(len(self._algorithmsmap._map.values()), 1)
self.assertEqual(len(self._pulsemap._map), 3)
self.assertEqual(self._pulsemap.get(pid).static(), False)
self.assertEqual(self._pulsemap.get(pid).default(), value)
self.assertEqual(len(self._clustermap._map), 5)
self.assertEqual(len(self._elementmap._map), 5)
def test_selector_integration_test4(self):
# filter block
block = selector.FilterBlock(
self._algorithmsmap,
self._pulsemap,
self._clustermap,
self._elementmap
)
# extract parameters: cluster id, pulse id and default value
cluster_values = self._clustermap._map.values()
cid = [x.id() for x in cluster_values if x.name() == "#3"][0]
pulse_values = self._pulsemap._map.values()
pid = [x.id() for x in pulse_values if x.name() == "random"][0]
value = 7
block = selector.filterWithBlock(
self.query_all_select_static(cid, pid, value),
block
)
# reassign parameters to maps
self._algorithmsmap = block._alg
self._pulsemap = block._pul
self._clustermap = block._clu
self._elementmap = block._ele
# assertion
self.assertEqual(len(self._algorithmsmap._map.values()), 1)
self.assertEqual(len(self._pulsemap._map), 3)
self.assertEqual(self._pulsemap.get(pid).static(), True)
self.assertEqual(self._pulsemap.get(pid).default(), value)
self.assertEqual(len(self._clustermap._map), 1)
self.assertEqual(len(self._elementmap._map), 1)
def test_selector_integration_test5(self):
# filter block
block = selector.FilterBlock(
self._algorithmsmap,
self._pulsemap,
self._clustermap,
self._elementmap
)
# extract parameters: cluster id, pulse id, value
cluster_values = self._clustermap._map.values()
cid = [x.id() for x in cluster_values if x.name() == "#3"][0]
pulse_values = self._pulsemap._map.values()
pid = [x.id() for x in pulse_values if x.name() == "random"][0]
value = 7
block = selector.filterWithBlock(
self.query_all_select_dynamic(cid, pid, value),
block
)
# reassign parameters to maps
self._algorithmsmap = block._alg
self._pulsemap = block._pul
self._clustermap = block._clu
self._elementmap = block._ele
# assertion
self.assertEqual(len(self._algorithmsmap._map.values()), 1)
self.assertEqual(len(self._pulsemap._map), 3)
self.assertEqual(self._pulsemap.get(pid).static(), False)
self.assertEqual(self._pulsemap.get(pid).default(), value)
self.assertEqual(len(self._clustermap._map), 1)
self.assertEqual(len(self._elementmap._map), 2)
def test_selector_warn_staticdefault(self):
pulse_values = self._pulsemap._map.values()
pulse = [x for x in pulse_values if x.name() == "dir"][0]
values = [2, 20.0, "str", "up"]
for value in values:
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
block = selector.FilterBlock(
self._algorithmsmap,
self._pulsemap,
self._clustermap,
self._elementmap
)
block = selector.filterWithBlock(
self.query_pulse_static_select(pulse.id(), value),
block
)
if value == "up" or value == "down":
self.assertEqual(len(w), 0)
self.assertEqual(pulse.default(), value)
else:
self.assertEqual(len(w), 1)
self.assertTrue(issubclass(w[-1].category, UserWarning))
self.assertEqual(pulse.default(), None)
def test_selector_warn_dynamicdefault(self):
pulse_values = self._pulsemap._map.values()
pulse = [x for x in pulse_values if x.name() == "order"][0]
values = [20.0, 2, "str", "up"]
for value in values:
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
# filter block
block = selector.FilterBlock(
self._algorithmsmap,
self._pulsemap,
self._clustermap,
self._elementmap
)
block = selector.filterWithBlock(
self.query_pulse_dynamic_select(pulse.id(), value),
block
)
if type(value) is pulse.type():
self.assertEqual(len(w), 0)
self.assertEqual(pulse.default(), value)
else:
self.assertEqual(len(w), 1)
self.assertTrue(issubclass(w[-1].category, UserWarning))
self.assertNotEqual(pulse.default(), value)
# Load test suites
def _suites():
return [
Selector_TestSequence
]
# Load tests
def loadSuites():
# global test suite for this module
gsuite = unittest.TestSuite()
for suite in _suites():
gsuite.addTest(unittest.TestLoader().loadTestsFromTestCase(suite))
return gsuite
if __name__ == '__main__':
suite = loadSuites()
print ""
print "### Running tests ###"
print "-" * 70
unittest.TextTestRunner(verbosity=2).run(suite)
| 40.459701 | 110 | 0.574738 | 1,500 | 13,554 | 5.004667 | 0.106667 | 0.081924 | 0.067137 | 0.070334 | 0.721726 | 0.689889 | 0.645531 | 0.641401 | 0.622086 | 0.622086 | 0 | 0.014329 | 0.289435 | 13,554 | 334 | 111 | 40.580838 | 0.765133 | 0.051276 | 0 | 0.53405 | 0 | 0 | 0.069156 | 0 | 0 | 0 | 0 | 0 | 0.157706 | 0 | null | null | 0 | 0.043011 | null | null | 0.010753 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
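The test helpers above assemble queryset strings by hand: string literals are wrapped in brackets, numeric values stay bare, and multiple statements are joined with `;`. A compact sketch of the same construction (these helper names are illustrative, not part of the analytics package):

```python
def cluster_select(cid):
    return "select from ${clusters} where @id=[%s]" % cid

def pulse_select(pid, value, static=True):
    # Bracket string literals; leave int/float values unquoted.
    literal = value if isinstance(value, (int, float)) else "[%s]" % value
    kind = "static" if static else "dynamic"
    return "select from ${pulses} where @%s=%s and @%s |is| %s" % (pid, literal, pid, kind)

query = ";".join([cluster_select("4"), pulse_select("5", 1)])
```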
0feb6fde76421a2cd45120b5e80fefcf16660170 | 396 | py | Python | src/damn/settings.py | funkybob/django-amn | 93fdb9665d81795408564bcaf00dfa77d10d42f7 | [
"BSD-2-Clause"
] | 23 | 2015-03-14T22:41:02.000Z | 2021-10-15T07:29:25.000Z | src/damn/settings.py | funkybob/django-amn | 93fdb9665d81795408564bcaf00dfa77d10d42f7 | [
"BSD-2-Clause"
] | null | null | null | src/damn/settings.py | funkybob/django-amn | 93fdb9665d81795408564bcaf00dfa77d10d42f7 | [
"BSD-2-Clause"
] | null | null | null | from django.conf import settings
# Map of mode -> processor config
# {
# 'js': {
# 'processor': 'damn.processors.ScriptProcessor',
# 'aliases': {},
# },
# }
PROCESSORS = getattr(settings, "DAMN_PROCESSORS", {})
# File extension -> mode name
MODE_MAP = getattr(settings, "DAMN_MODE_MAP", {})
MODE_ORDER = getattr(settings, "DAMN_MODE_ORDER", ["css", "js",])
| 24.75 | 65 | 0.608586 | 41 | 396 | 5.707317 | 0.512195 | 0.192308 | 0.24359 | 0.196581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219697 | 396 | 15 | 66 | 26.4 | 0.757282 | 0.436869 | 0 | 0 | 0 | 0 | 0.224299 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
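Each knob above falls back to a default when the Django settings module does not define it. The same `getattr` pattern, exercised against a stand-in settings object (`FakeSettings` is illustrative):

```python
class FakeSettings:
    # Only one of the three knobs is configured; the others fall back to defaults.
    DAMN_PROCESSORS = {'js': {'processor': 'damn.processors.ScriptProcessor'}}

settings = FakeSettings()

PROCESSORS = getattr(settings, "DAMN_PROCESSORS", {})
MODE_MAP = getattr(settings, "DAMN_MODE_MAP", {})
MODE_ORDER = getattr(settings, "DAMN_MODE_ORDER", ["css", "js"])
```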
0feb779f370c95e0551c64ccb5ebfd0c453b6700 | 1,035 | py | Python | release/stubs.min/System/Runtime/InteropServices/__init___parts/StructLayoutAttribute.py | YKato521/ironpython-stubs | b1f7c580de48528490b3ee5791b04898be95a9ae | [
"MIT"
] | null | null | null | release/stubs.min/System/Runtime/InteropServices/__init___parts/StructLayoutAttribute.py | YKato521/ironpython-stubs | b1f7c580de48528490b3ee5791b04898be95a9ae | [
"MIT"
] | null | null | null | release/stubs.min/System/Runtime/InteropServices/__init___parts/StructLayoutAttribute.py | YKato521/ironpython-stubs | b1f7c580de48528490b3ee5791b04898be95a9ae | [
"MIT"
] | null | null | null | class StructLayoutAttribute(Attribute, _Attribute):
"""
Lets you control the physical layout of the data fields of a class or structure.
StructLayoutAttribute(layoutKind: LayoutKind)
StructLayoutAttribute(layoutKind: Int16)
"""
def __init__(self, *args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self, layoutKind):
"""
__new__(cls: type,layoutKind: LayoutKind)
__new__(cls: type,layoutKind: Int16)
"""
pass
Value = property(lambda self: object(), lambda self, v: None, lambda self: None)
"""Gets the System.Runtime.InteropServices.LayoutKind value that specifies how the class or structure is arranged.
Get: Value(self: StructLayoutAttribute) -> LayoutKind
"""
CharSet = None
Pack = None
Size = None
| 26.538462 | 221 | 0.671498 | 113 | 1,035 | 5.681416 | 0.460177 | 0.14486 | 0.074766 | 0.088785 | 0.26947 | 0.176012 | 0.176012 | 0.176012 | 0.176012 | 0.176012 | 0 | 0.004994 | 0.226087 | 1,035 | 38 | 222 | 27.236842 | 0.796504 | 0.441546 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.2 | 0 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
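The generated stub above exposes `Value` through a three-argument `property` whose setter and deleter are no-ops, so assignments are accepted but ignored. A small illustration of that pattern (`StubAttr` and the placeholder value are hypothetical):

```python
class StubAttr:
    # property(fget, fset, fdel): the getter returns a placeholder,
    # while the setter and deleter silently do nothing.
    Value = property(lambda self: 'LayoutKind', lambda self, v: None, lambda self: None)

s = StubAttr()
before = s.Value
s.Value = 'ignored'  # accepted, but the no-op setter discards it
after = s.Value
```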
ba042e792f2ef5757d48cf5820dfe854cfad9cbc | 403 | py | Python | cores.py | mauriciocarminate/PrimeiroPython | 5cd5b761e491cf6dffc2a108e1ec5ba826677650 | [
"MIT"
] | null | null | null | cores.py | mauriciocarminate/PrimeiroPython | 5cd5b761e491cf6dffc2a108e1ec5ba826677650 | [
"MIT"
] | null | null | null | cores.py | mauriciocarminate/PrimeiroPython | 5cd5b761e491cf6dffc2a108e1ec5ba826677650 | [
"MIT"
] | null | null | null | #lista de cores para menu:
#cor da letra
limpa = '\033[m'
Lbranco = '\033[30m'
Lvermelho = '\033[31m'
Lverde = '\033[32m'
Lamarelo = '\033[33m'
Lazul = '\033[34m'
Lroxo = '\033[35m'
Lazulclaro = '\033[36m'
Lcinza = '\033[37m'
#Fundo
Fbranco = '\033[40m'
Fvermelho = '\033[41m'
Fverde = '\033[42m'
Famarelo = '\033[43m'
Fazul = '\033[44m'
Froxo = '\033[45m'
Fazulclaro = '\033[46m'
Fcinza = '\033[47m'
| 16.12 | 26 | 0.620347 | 60 | 403 | 4.166667 | 0.716667 | 0.048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244838 | 0.158809 | 403 | 24 | 27 | 16.791667 | 0.492625 | 0.104218 | 0 | 0 | 0 | 0 | 0.370787 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
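The escape codes above follow the SGR scheme: 30-37 select the foreground colour, 40-47 the background, and `'\033[m'` resets. A helper that composes them around a string (`paint` is illustrative, not part of the module):

```python
limpa = '\033[m'        # reset
Lverde = '\033[32m'     # green foreground
Fvermelho = '\033[41m'  # red background

def paint(text, *codes):
    # Emit the codes, then the text, then a reset so later output is unaffected.
    return ''.join(codes) + text + limpa

msg = paint('ok', Lverde)
```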
ba06afd9284e82a36d2bee0edc5bbfbb83d0e370 | 297 | py | Python | homehub/api/management/commands/update_wotd_cache.py | stricoff92/homehub | 5946d661f47a8f38e803db507e08133afd613ba7 | [
"MIT"
] | null | null | null | homehub/api/management/commands/update_wotd_cache.py | stricoff92/homehub | 5946d661f47a8f38e803db507e08133afd613ba7 | [
"MIT"
] | null | null | null | homehub/api/management/commands/update_wotd_cache.py | stricoff92/homehub | 5946d661f47a8f38e803db507e08133afd613ba7 | [
"MIT"
] | null | null | null |
from django.core.management.base import BaseCommand
from django.conf import settings
from api.lib import wordnik, script_logger
class Command(BaseCommand):
def handle(self, *args, **kwargs):
logger = script_logger.create_logger("wotd")
wordnik.update_cache(logger=logger)
| 22.846154 | 52 | 0.747475 | 38 | 297 | 5.736842 | 0.657895 | 0.091743 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161616 | 297 | 12 | 53 | 24.75 | 0.875502 | 0 | 0 | 0 | 0 | 0 | 0.013514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
ba0f520d197398af1ed845d3a98b021366e4db92 | 166 | py | Python | chapter11-regex/search-extract4.py | MyLanPangzi/py4e | af5cd5fa63956ff237f880a1f9dd0bfdd6b28930 | [
"Apache-2.0"
] | null | null | null | chapter11-regex/search-extract4.py | MyLanPangzi/py4e | af5cd5fa63956ff237f880a1f9dd0bfdd6b28930 | [
"Apache-2.0"
] | null | null | null | chapter11-regex/search-extract4.py | MyLanPangzi/py4e | af5cd5fa63956ff237f880a1f9dd0bfdd6b28930 | [
"Apache-2.0"
] | null | null | null | import re
fin = open('../mbox-short.txt')
for line in fin:
line = line.rstrip()
t = re.findall(r'^From .* (\d\d):', line)
if len(t) > 0:
print(t)
| 18.444444 | 44 | 0.518072 | 27 | 166 | 3.185185 | 0.703704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008197 | 0.26506 | 166 | 8 | 45 | 20.75 | 0.696721 | 0 | 0 | 0 | 0 | 0 | 0.198795 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.142857 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
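The loop above pulls the hour out of mbox `From` lines; `re.findall` returns only the captured group for each match. The same pattern applied to a single sample line:

```python
import re

line = 'From stephen.marquard@uct.ac.za Sat Jan  5 09:14:16 2008'
# Capture the two hour digits: the last space-preceded digit pair before a colon.
hours = re.findall(r'^From .* (\d\d):', line)
```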
ba1429df0501f90cee72d7a50605110cecd5e8ad | 2,921 | py | Python | Scripts/SimpleProgramUtilities.py | ssell/PythonSimplePrograms | d219d940700cb727f455f6d4cb62076f0043d71d | [
"Apache-2.0"
] | null | null | null | Scripts/SimpleProgramUtilities.py | ssell/PythonSimplePrograms | d219d940700cb727f455f6d4cb62076f0043d71d | [
"Apache-2.0"
] | null | null | null | Scripts/SimpleProgramUtilities.py | ssell/PythonSimplePrograms | d219d940700cb727f455f6d4cb62076f0043d71d | [
"Apache-2.0"
] | null | null | null | # ------------------------------------------------------------------------
# Copyright 2016 Steven T Sell (ssell@vertexfragment.com)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
# ------------------------------------------------------------------------
# Standard error message start.
# \return String
# ------------------------------------------------------------------------
def errorMessage():
return "! Error !"
# ------------------------------------------------------------------------
# Used for declaring static function variables.
# Credit: http://stackoverflow.com/a/279586
# ------------------------------------------------------------------------
def staticVars(**args):
def decorate(func):
for arg in args:
setattr(func, arg, args[arg])
return func
return decorate
# ------------------------------------------------------------------------
# A simple function to convert a string to an int. If the conversion fails,
# then the specified fallback value is returned.
#
# \param str The string to attempt to convert.
# \param fallback The fallback value to return if conversion fails.
#
# \return The converted string value.
# ------------------------------------------------------------------------
def toInt(str, fallback):
try:
return int(str)
except ValueError:
return fallback
# ------------------------------------------------------------------------
# Retrieves the static path for the Data files.
# \return String
# ------------------------------------------------------------------------
def getDataPath():
return os.path.join(os.path.dirname(os.path.realpath(__file__)), (".." + os.sep + "Data"))
# ------------------------------------------------------------------------
# Retrieves the full system path for the specified data file.
# The data file is expected to reside in the Data folder.
#
# \param filename Filename (including extension) of the data file.
# \return Full system path to the data file. If DNE, returns empty string.
# ------------------------------------------------------------------------
@staticVars(dataPath=getDataPath())
def getDataFile(filename, mustExist):
file = (getDataFile.dataPath + os.sep + filename)
if mustExist:
if not os.path.isfile(file):
print("{} Failed to find Data file '{}'".format(errorMessage(), file))
file = ""
return file
| 36.5125 | 94 | 0.50873 | 295 | 2,921 | 5.023729 | 0.464407 | 0.040486 | 0.022267 | 0.021592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00565 | 0.15166 | 2,921 | 79 | 95 | 36.974684 | 0.592413 | 0.71722 | 0 | 0 | 0 | 0 | 0.060026 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.041667 | 0.083333 | 0.583333 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
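The `staticVars` decorator above hangs named attributes off the function object, giving C-style function statics. A usage example with a persistent counter (`next_id` is illustrative):

```python
def staticVars(**args):
    # Attach each keyword argument to the decorated function object.
    def decorate(func):
        for arg in args:
            setattr(func, arg, args[arg])
        return func
    return decorate

@staticVars(counter=0)
def next_id():
    # The counter lives on the function itself and survives between calls.
    next_id.counter += 1
    return next_id.counter

first = next_id()
second = next_id()
```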
ba18ef136c0bc1a8bc5c4e6f34473be48d4dd297 | 955 | py | Python | tests/test_text_transliter.py | axtrace/PartyBook | be2367369f03a701ff4809894922f6873257f53d | [
"Unlicense"
] | 1 | 2019-06-01T15:12:29.000Z | 2019-06-01T15:12:29.000Z | tests/test_text_transliter.py | axtrace/PartyBook | be2367369f03a701ff4809894922f6873257f53d | [
"Unlicense"
] | 10 | 2018-11-02T20:17:36.000Z | 2021-12-13T19:52:00.000Z | tests/test_text_transliter.py | axtrace/PartyBook | be2367369f03a701ff4809894922f6873257f53d | [
"Unlicense"
] | 1 | 2019-06-03T17:45:05.000Z | 2019-06-03T17:45:05.000Z | import unittest
from text_transliter import TextTransliter
class TestTextTransliter(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_empty_line(self):
text = ''
result = TextTransliter(text, input_lang='ru').get_translitet()
self.assertEqual(result, '')
# only digit string
def test_digit_string(self):
text = '0123456789'
result = TextTransliter(text, input_lang='ru').get_translitet()
self.assertEqual(result, '0123456789')
# alphabet string
def test_big_letter_sense(self):
text = 'АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдеёжзийклмнопрстуфхцчшщъыьэюя'
result = TextTransliter(text, input_lang='ru').get_translitet()
self.assertEqual(result,
'ABVGDEEZhZIJKLMNOPRSTUFHTsChShSch\'Y\'EJuJaabvgdeezhzijklmnoprstufhtschshsch\'y\'ejuja')
if __name__ == '__main__':
unittest.main()
| 28.939394 | 114 | 0.679581 | 89 | 955 | 7.044944 | 0.449438 | 0.033493 | 0.114833 | 0.138756 | 0.330144 | 0.330144 | 0.330144 | 0.330144 | 0.330144 | 0.330144 | 0 | 0.026918 | 0.22199 | 955 | 32 | 115 | 29.84375 | 0.816958 | 0.034555 | 0 | 0.227273 | 0 | 0 | 0.193689 | 0.151251 | 0 | 0 | 0 | 0 | 0.136364 | 1 | 0.227273 | false | 0.090909 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
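`TextTransliter` itself is not shown here; the expected outputs in the tests suggest a per-character Cyrillic-to-Latin mapping with multi-letter targets (e.g. 'ж' -> 'zh'). A minimal dict-based sketch of that idea, covering only a few letters:

```python
TRANSLIT = {'а': 'a', 'б': 'b', 'в': 'v', 'ж': 'zh', 'ц': 'ts'}

def transliterate(text):
    # Characters without a mapping (digits, punctuation) pass through unchanged.
    return ''.join(TRANSLIT.get(ch, ch) for ch in text)

out = transliterate('жаба')
```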
ba1d452a12fc4225bcc31cc68c40a35361608559 | 7,869 | py | Python | pysnmp/PROXY-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 11 | 2021-02-02T16:27:16.000Z | 2021-08-31T06:22:49.000Z | pysnmp/PROXY-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 75 | 2021-02-24T17:30:31.000Z | 2021-12-08T00:01:18.000Z | pysnmp/PROXY-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module PROXY-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/PROXY-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 20:33:35 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
Integer, ObjectIdentifier, OctetString = mibBuilder.importSymbols("ASN1", "Integer", "ObjectIdentifier", "OctetString")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsUnion, SingleValueConstraint, ValueSizeConstraint, ValueRangeConstraint, ConstraintsIntersection = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsUnion", "SingleValueConstraint", "ValueSizeConstraint", "ValueRangeConstraint", "ConstraintsIntersection")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
Unsigned32, Counter32, ModuleIdentity, MibScalar, MibTable, MibTableRow, MibTableColumn, Gauge32, Integer32, iso, Counter64, ObjectIdentity, MibIdentifier, IpAddress, NotificationType, TimeTicks, Bits, experimental = mibBuilder.importSymbols("SNMPv2-SMI", "Unsigned32", "Counter32", "ModuleIdentity", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Gauge32", "Integer32", "iso", "Counter64", "ObjectIdentity", "MibIdentifier", "IpAddress", "NotificationType", "TimeTicks", "Bits", "experimental")
DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention")
nsfnet = MibIdentifier((1, 3, 6, 1, 3, 25))
proxy = ModuleIdentity((1, 3, 6, 1, 3, 25, 17))
proxy.setRevisions(('1998-08-26 00:00',))
if mibBuilder.loadTexts: proxy.setLastUpdated('9809010000Z')
if mibBuilder.loadTexts: proxy.setOrganization('National Laboratory for Applied Network Research')
proxySystem = MibIdentifier((1, 3, 6, 1, 3, 25, 17, 1))
proxyConfig = MibIdentifier((1, 3, 6, 1, 3, 25, 17, 2))
proxyPerf = MibIdentifier((1, 3, 6, 1, 3, 25, 17, 3))
proxyMemUsage = MibScalar((1, 3, 6, 1, 3, 25, 17, 1, 1), Unsigned32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyMemUsage.setStatus('current')
proxyStorage = MibScalar((1, 3, 6, 1, 3, 25, 17, 1, 2), Unsigned32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyStorage.setStatus('current')
proxyCpuUsage = MibScalar((1, 3, 6, 1, 3, 25, 17, 1, 3), Unsigned32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyCpuUsage.setStatus('current')
proxyUpTime = MibScalar((1, 3, 6, 1, 3, 25, 17, 1, 4), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyUpTime.setStatus('current')
proxyAdmin = MibScalar((1, 3, 6, 1, 3, 25, 17, 2, 1), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyAdmin.setStatus('current')
proxySoftware = MibScalar((1, 3, 6, 1, 3, 25, 17, 2, 2), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxySoftware.setStatus('current')
proxyVersion = MibScalar((1, 3, 6, 1, 3, 25, 17, 2, 3), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyVersion.setStatus('current')
proxySysPerf = MibIdentifier((1, 3, 6, 1, 3, 25, 17, 3, 1))
proxyProtoPerf = MibIdentifier((1, 3, 6, 1, 3, 25, 17, 3, 2))
proxyCpuLoad = MibScalar((1, 3, 6, 1, 3, 25, 17, 3, 1, 1), Gauge32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyCpuLoad.setStatus('current')
proxyNumObjects = MibScalar((1, 3, 6, 1, 3, 25, 17, 3, 1, 2), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyNumObjects.setStatus('current')
proxyProtoClient = MibIdentifier((1, 3, 6, 1, 3, 25, 17, 3, 2, 1))
proxyProtoServer = MibIdentifier((1, 3, 6, 1, 3, 25, 17, 3, 2, 2))
proxyClientHttpRequests = MibScalar((1, 3, 6, 1, 3, 25, 17, 3, 2, 1, 1), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyClientHttpRequests.setStatus('current')
proxyClientHttpHits = MibScalar((1, 3, 6, 1, 3, 25, 17, 3, 2, 1, 2), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyClientHttpHits.setStatus('current')
proxyClientHttpErrors = MibScalar((1, 3, 6, 1, 3, 25, 17, 3, 2, 1, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyClientHttpErrors.setStatus('current')
proxyClientHttpInKbs = MibScalar((1, 3, 6, 1, 3, 25, 17, 3, 2, 1, 4), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyClientHttpInKbs.setStatus('current')
proxyClientHttpOutKbs = MibScalar((1, 3, 6, 1, 3, 25, 17, 3, 2, 1, 5), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyClientHttpOutKbs.setStatus('current')
proxyServerHttpRequests = MibScalar((1, 3, 6, 1, 3, 25, 17, 3, 2, 2, 1), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyServerHttpRequests.setStatus('current')
proxyServerHttpErrors = MibScalar((1, 3, 6, 1, 3, 25, 17, 3, 2, 2, 2), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyServerHttpErrors.setStatus('current')
proxyServerHttpInKbs = MibScalar((1, 3, 6, 1, 3, 25, 17, 3, 2, 2, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyServerHttpInKbs.setStatus('current')
proxyServerHttpOutKbs = MibScalar((1, 3, 6, 1, 3, 25, 17, 3, 2, 2, 4), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyServerHttpOutKbs.setStatus('current')
proxyMedianSvcTable = MibTable((1, 3, 6, 1, 3, 25, 17, 3, 3), )
if mibBuilder.loadTexts: proxyMedianSvcTable.setStatus('current')
proxyMedianSvcEntry = MibTableRow((1, 3, 6, 1, 3, 25, 17, 3, 3, 1), ).setIndexNames((0, "PROXY-MIB", "proxyMedianTime"))
if mibBuilder.loadTexts: proxyMedianSvcEntry.setStatus('current')
proxyMedianTime = MibTableColumn((1, 3, 6, 1, 3, 25, 17, 3, 3, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyMedianTime.setStatus('current')
proxyHTTPAllSvcTime = MibTableColumn((1, 3, 6, 1, 3, 25, 17, 3, 3, 1, 2), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyHTTPAllSvcTime.setStatus('current')
proxyHTTPMissSvcTime = MibTableColumn((1, 3, 6, 1, 3, 25, 17, 3, 3, 1, 3), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyHTTPMissSvcTime.setStatus('current')
proxyHTTPHitSvcTime = MibTableColumn((1, 3, 6, 1, 3, 25, 17, 3, 3, 1, 4), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyHTTPHitSvcTime.setStatus('current')
proxyHTTPNhSvcTime = MibTableColumn((1, 3, 6, 1, 3, 25, 17, 3, 3, 1, 5), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyHTTPNhSvcTime.setStatus('current')
proxyDnsSvcTime = MibTableColumn((1, 3, 6, 1, 3, 25, 17, 3, 3, 1, 6), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: proxyDnsSvcTime.setStatus('current')
mibBuilder.exportSymbols("PROXY-MIB", proxyStorage=proxyStorage, proxyMemUsage=proxyMemUsage, proxyServerHttpInKbs=proxyServerHttpInKbs, proxySystem=proxySystem, proxyAdmin=proxyAdmin, nsfnet=nsfnet, proxyHTTPHitSvcTime=proxyHTTPHitSvcTime, proxyUpTime=proxyUpTime, proxySoftware=proxySoftware, proxyClientHttpRequests=proxyClientHttpRequests, proxyServerHttpOutKbs=proxyServerHttpOutKbs, proxy=proxy, proxyPerf=proxyPerf, proxyProtoServer=proxyProtoServer, proxyProtoPerf=proxyProtoPerf, proxyNumObjects=proxyNumObjects, proxyDnsSvcTime=proxyDnsSvcTime, proxyMedianSvcEntry=proxyMedianSvcEntry, proxyHTTPAllSvcTime=proxyHTTPAllSvcTime, proxyConfig=proxyConfig, proxyMedianSvcTable=proxyMedianSvcTable, proxySysPerf=proxySysPerf, PYSNMP_MODULE_ID=proxy, proxyMedianTime=proxyMedianTime, proxyHTTPMissSvcTime=proxyHTTPMissSvcTime, proxyClientHttpOutKbs=proxyClientHttpOutKbs, proxyClientHttpInKbs=proxyClientHttpInKbs, proxyClientHttpErrors=proxyClientHttpErrors, proxyProtoClient=proxyProtoClient, proxyServerHttpErrors=proxyServerHttpErrors, proxyCpuLoad=proxyCpuLoad, proxyCpuUsage=proxyCpuUsage, proxyClientHttpHits=proxyClientHttpHits, proxyServerHttpRequests=proxyServerHttpRequests, proxyVersion=proxyVersion, proxyHTTPNhSvcTime=proxyHTTPNhSvcTime)
| 99.607595 | 1,254 | 0.763121 | 895 | 7,869 | 6.707263 | 0.16648 | 0.024321 | 0.017491 | 0.023322 | 0.416625 | 0.369982 | 0.259204 | 0.194903 | 0.170248 | 0.138431 | 0 | 0.075595 | 0.087178 | 7,869 | 78 | 1,255 | 100.884615 | 0.760128 | 0.039649 | 0 | 0 | 0 | 0 | 0.123741 | 0.005829 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.084507 | 0 | 0.084507 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
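The PROXY-MIB row above defines its objects as nested OID tuples: `nsfnet` at (1, 3, 6, 1, 3, 25), the `proxy` ModuleIdentity one arc below it, and scalars such as `proxyMemUsage` two arcs further down. A minimal sketch of that containment structure, using plain tuples rather than pysnmp's `MibIdentifier`/`MibScalar` classes (the helper names `dotted` and `is_under` are illustrative, not part of any library):

```python
# Sketch: the OID tuples in the generated PROXY-MIB module form a tree
# rooted at the experimental nsfnet branch. Plain tuples suffice to
# check parent/child relationships without importing pysnmp.

NSFNET = (1, 3, 6, 1, 3, 25)            # nsfnet MibIdentifier
PROXY = NSFNET + (17,)                   # proxy ModuleIdentity
PROXY_SYSTEM = PROXY + (1,)              # proxySystem subtree
PROXY_MEM_USAGE = PROXY_SYSTEM + (1,)    # proxyMemUsage scalar


def dotted(oid):
    """Render an OID tuple in the usual dotted-decimal notation."""
    return ".".join(str(arc) for arc in oid)


def is_under(oid, subtree):
    """True if `oid` lies within `subtree` (prefix match on the arcs)."""
    return oid[:len(subtree)] == subtree


print(dotted(PROXY_MEM_USAGE))                   # 1.3.6.1.3.25.17.1.1
print(is_under(PROXY_MEM_USAGE, PROXY_SYSTEM))   # True
```

At runtime, pysnmp's `mibBuilder` performs the equivalent bookkeeping when it loads the module and resolves each symbol against the registered OID tree.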